Archives: January 2020

6 AI solutions every commercial bank needs

DataRobot’s automated machine learning platform helps banks leverage their substantial investments in data to meet today’s challenges. By learning from their own data, banks can find and attract the best new clients, deepen existing client relationships, improve the client experience, and identify new growth opportunities while meeting regulatory requirements and fighting financial crime effectively and efficiently.

1. NLP – Natural Language Processing

This new eBook highlights practical use cases for AI in today’s investment banking market. Armed with this knowledge, investment bankers can take advantage of the enormous amount of data they generate and transform their firms into AI-enabled enterprises.

The most important use cases of Natural Language Processing are:

Sentiment analysis aims to determine the attitude or emotional reaction of a person with respect to some topic, e.g. a positive or negative attitude, anger, or sarcasm. It is widely used in customer satisfaction studies, such as analyzing product reviews.
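To make the idea concrete, here is a deliberately tiny lexicon-based sentiment scorer in Python. The word lists are invented examples; production systems learn sentiment from labeled data rather than from a hand-written lexicon:

```python
# Minimal lexicon-based sentiment scorer (illustrative only).
# The word lists below are tiny, made-up samples.
POSITIVE = {"great", "love", "excellent", "happy"}
NEGATIVE = {"bad", "hate", "poor", "angry"}

def sentiment(text: str) -> str:
    """Label text positive/negative/neutral by counting lexicon hits."""
    words = text.lower().split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

print(sentiment("I love this great product"))   # positive
print(sentiment("Poor quality, I hate it"))     # negative
```

A real customer-review pipeline would replace the hand-written lexicon with a trained classifier, but the input/output shape is the same: raw text in, attitude label out.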

2. Reinforcement learning

Download your copy to find out how AI and machine learning can help you grow your business and outperform your competition.

3. Dataset

All the data used for either building or testing an ML model is called a dataset. Data scientists typically divide a dataset into three separate groups:

- Training data is used to train the model: the model sees this data and learns to detect patterns and to determine which features matter most for its predictions.

- Validation data is used for tuning model parameters and for comparing candidate models to determine the best ones. The validation data must be different from the training data and must not be used in the training phase; otherwise the model would overfit and generalize poorly to new (production) data.

- It may seem tedious, but there is always a third, final test set (often called a hold-out). It is used once the final model is chosen, to simulate the model’s behaviour on completely unseen data, i.e. data points that weren’t used in building the models or even in deciding which model to choose.
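The three-way split above can be sketched in a few lines of Python. This is a minimal illustration: the 60/20/20 proportions and the plain random shuffle are example choices, and real projects often use stratified or time-based splits instead:

```python
import random

def split_dataset(rows, train_frac=0.6, valid_frac=0.2, seed=42):
    """Shuffle the rows, then split them into train / validation / hold-out."""
    rows = list(rows)
    random.Random(seed).shuffle(rows)
    n_train = int(len(rows) * train_frac)
    n_valid = int(len(rows) * valid_frac)
    train = rows[:n_train]
    valid = rows[n_train:n_train + n_valid]
    holdout = rows[n_train + n_valid:]  # touched only once, after model selection
    return train, valid, holdout

train, valid, holdout = split_dataset(range(100))
print(len(train), len(valid), len(holdout))  # 60 20 20
```

Keeping the hold-out untouched until the very end is what makes its error estimate an honest preview of production performance.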

DataRobot's platform makes my work exciting, my job fun, and the results more accurate and timely -- it's almost like magic!

Aron Larsson

– CEO, Strategy Director

Delivering next best action with Artificial Intelligence

In marketing analytics, a marketing and sales funnel is the set of steps a visitor goes through before making a purchase. More and more, this customer’s journey includes a diverse set of touchpoints, involving both push and pull:

1. A Customer Journey Can Include Many Touchpoints

Historically, organizations have sent the same messages to all customers, using the same media channels. But in the modern world, this may be perceived as spam that annoys customers and pushes them away. The diagram below (Figure 2) shows the traditional approach in action: every customer sees the same sequence of touchpoints.

2. When All Customers Receive the Same Outbound Marketing Content

More sophisticated marketers have applied customer segmentation to their customer database. They group customers into segments according to their demographics, creating a different customer journey for each segment, and then send the same messages, through the same media, to every customer within a segment. This is a marked improvement on the previous approach, but not every customer within a segment is the same: segments are typically defined by demographics, not by the content preferences of the customers.

3. When All Customers Within a Segment Receive the Same Marketing Content

It’s not by any means exhaustive, but it’s a good, light read to prep before a meeting with an AI director or vendor – or a quick revisit before a job interview!

How Artificial Intelligence is changing the industry

Artificial intelligence in retail is being applied in new ways across the entire product and service cycle, from assembly to post-sale customer service interactions. But retail players need answers to important questions:

1. Sales and CRM Applications

Natural Language Processing (NLP) is an umbrella term for a variety of Machine Learning methods that enable computers to understand and perform operations on human (i.e. natural) language as it is spoken or written.

Conversica’s “sales assistant” software is designed to automate and enhance sales operations processes by identifying and conversing with internet leads. The sales lead and management company claims the authentic-sounding messages result in an average engagement rate of 35%.

2. Brilliant Manufacturing

General Electric’s (GE) Brilliant Manufacturing software, in part inspired by GE’s relationships with client manufacturing companies over the past two decades, was designed to make the entire manufacturing process—from design to distribution and services—more efficient and hence save big costs over time. The software includes a suite of analytics and operational intelligence tools appropriate for a range of manufacturers.

Why today’s retail banks need AI to win

Competition in retail banking may be more intense than ever as FinTechs and new market entrants fight with established players for deposits and market share. Retail banks that embrace advanced analytics and leverage their valuable data can gain a decisive competitive advantage.

1. Predicting client needs

Deeper client relationships are both more profitable and more loyal. By learning from their data, banks can identify, and even anticipate, client needs they can help with. Clients are far more likely to respond to a relevant offer, and to form a favorable impression of your bank, than if you are still sending indiscriminate offers with minuscule response rates.

Document summarization is a set of methods for creating short, meaningful descriptions of long texts (e.g. documents, research papers).
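A minimal extractive flavour of this can be sketched in Python: score each sentence by the average frequency of its words across the whole text, and keep the top-scoring sentences. This is a toy illustration only; modern summarizers use trained language models:

```python
from collections import Counter

def summarize(text, n_sentences=1):
    """Extractive summary: keep the sentences whose words are most frequent overall."""
    sentences = [s.strip() for s in text.split(".") if s.strip()]
    freqs = Counter(w for s in sentences for w in s.lower().split())

    def score(s):
        words = s.lower().split()
        return sum(freqs[w] for w in words) / len(words)

    top = sorted(sentences, key=score, reverse=True)[:n_sentences]
    top.sort(key=sentences.index)  # keep the chosen sentences in original order
    return ". ".join(top) + "."

text = ("The bank reported record profits. The weather was mild. "
        "Profits at the bank grew again.")
print(summarize(text))  # The bank reported record profits.
```

Sentences that share many words with the rest of the document score highest, which is a crude but serviceable proxy for "central to the text".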

2. Keeping existing customers is at least as important as finding new ones

Banks can learn from their client interaction data to identify customers at risk of attrition and take preemptive action. Even better, good models can identify the leading causes of attrition risk so that you can make process adjustments or improvements in order to hold onto more of your most valuable clients.

3. Price optimization and lifetime value

Many banks use a scorecard process in consumer lending, determining what terms to offer if the borrower meets certain criteria. Often, these are based on risk appetite rather than any insight into price elasticity or profit margin/volume tradeoffs. If banks knew which clients were likely to be the most profitable, and how those clients were likely to respond to price differences, they could price more aggressively to land those clients.
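The margin/volume trade-off can be made concrete with a toy calculation. Everything here is invented for illustration (the linear acceptance curve, the 3% cost of funds, the loan volume); the point is only that expected margin is the spread times the probability of acceptance, so some rate in the middle maximizes it:

```python
def p_accept(rate):
    # Toy elasticity curve: acceptance falls linearly as the offered rate rises.
    return max(0.0, 1.0 - 0.15 * (rate - 3.0))

def expected_margin(rate, cost_of_funds=3.0, volume=1_000_000):
    # Expected profit = spread over cost of funds * volume * chance of acceptance.
    return (rate - cost_of_funds) / 100 * volume * p_accept(rate)

# Search a grid of rates between 3.0% and 10.0% for the most profitable one.
best = max((r / 10 for r in range(30, 101)), key=expected_margin)
print(best)  # 6.3 -- neither the cheapest nor the priciest rate wins
```

With a model of each client's individual elasticity curve instead of a single assumed one, the same search would yield a personalized optimal price per client.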

AI Simplified: Machine Learning problem types

With so many questions to answer, what are some of the most common machine learning problem types that come up while building out AI systems? Jake Shaver, Special Projects Manager at DataRobot, walks us through four problem types in this installment of AI Simplified.

1. Classification

Classification is a systematic grouping of observations into categories, such as when biologists categorize plants, animals, and other lifeforms into different taxonomies. It is one of the primary uses of data science and machine learning.

The goal of this task is to predict the class (label) of a document, or to rank documents in a list based on their relevance. It can be used in spam filtering (predicting whether an e-mail is spam or not) or content classification (e.g. selecting articles from the web about what your competitors are doing).

2. Why is Classification Important?

There are many practical business applications for machine learning classification. For example, if you want to predict whether or not a person will default on a loan, you need to determine if that person belongs to one of two classes with similar characteristics: the defaulter class or the non-defaulter class. This classification helps you understand how likely the person is to become a defaulter, and helps you adjust your risk assessment accordingly.
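As a sketch of the defaulter/non-defaulter idea, here is a toy nearest-neighbour classifier in plain Python. The two features (income in $k, debt-to-income ratio) and the historical records are invented; a real credit model would be trained on thousands of records with many more features:

```python
# Toy loan-default classifier: label a new applicant with the class of the
# most similar past applicant (1-nearest-neighbour). All data is invented.
def classify(applicant, history):
    """Return the label of the historical record closest to `applicant`."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    nearest = min(history, key=lambda rec: dist(rec[0], applicant))
    return nearest[1]

history = [
    ((95, 0.10), "non-defaulter"),
    ((80, 0.15), "non-defaulter"),
    ((30, 0.60), "defaulter"),
    ((25, 0.55), "defaulter"),
]

print(classify((28, 0.50), history))  # defaulter
```

The output is a class label; a production system would usually also want a probability, so the risk assessment can be adjusted by degree rather than by a hard yes/no.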

3. Classification + DataRobot

The DataRobot automated machine learning platform includes a number of classification algorithms and automatically recognizes whether your target variable is a categorical variable that’s suitable for classification or a continuous variable that is suitable for regression. Furthermore, DataRobot’s various tools allow you to examine the performance of classification models for both binary and multiclass problems.

It’s important to understand which problem you’re solving, as each problem type calls for different models, different accuracy metrics, and other problem-specific parameters that you need to account for.

AI simplified: What computers are good at

Getting started with AI? Perhaps you’ve already got your feet wet in the world of Machine Learning but are still looking to expand your knowledge and cover the subjects you’ve heard of but haven’t quite had time to explore?

1. Investment banks can use AI in six critical ways

2. Reinforcement learning

Reinforcement Learning (RL) takes a different approach from the methods described earlier. In RL, the algorithm plays a “game” in which it aims to maximize a reward: it tries different “moves” by trial and error and learns which ones yield the highest reward.
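A classic minimal instance of this trial-and-error loop is the multi-armed bandit, sketched below with an epsilon-greedy strategy. The three reward probabilities are invented, and the algorithm is never told them; it has to discover the best "move" by playing:

```python
import random

# Epsilon-greedy bandit: keep a running estimate of each move's reward,
# mostly pick the best-known move, occasionally explore a random one.
def run_bandit(reward_probs, steps=5000, eps=0.1, seed=0):
    rng = random.Random(seed)
    counts = [0] * len(reward_probs)
    values = [0.0] * len(reward_probs)   # estimated reward per arm
    total = 0
    for _ in range(steps):
        if rng.random() < eps:
            arm = rng.randrange(len(reward_probs))                       # explore
        else:
            arm = max(range(len(reward_probs)), key=values.__getitem__)  # exploit
        reward = 1 if rng.random() < reward_probs[arm] else 0
        counts[arm] += 1
        values[arm] += (reward - values[arm]) / counts[arm]  # running mean
        total += reward
    return values, total

values, total = run_bandit([0.2, 0.5, 0.8])
print(max(range(3), key=values.__getitem__))  # the ~0.8 arm wins out
```

Full RL adds state (the "board position") on top of this reward-chasing loop, but the explore/exploit tension is already visible here.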

Deep Learning Chatbot – analysis and implementation

If you have a business with a heavy customer service demand, and you want to make your process more efficient, it’s time to think about introducing chatbots. In this blog post, we’ll cover some standard methods for implementing chatbots that can be used by any B2C business.

1. Chatbots

2. Deep learning

At this point, your data is prepared and you have chosen the right kind of chatbot for your needs. You have a sufficient corpus of text for your machine to learn from, and you are ready to begin teaching your bot. For a retrieval-model bot, training consists of taking in a context (the conversation with a client, including all prior sentences) and outputting a potential answer based on what it has read. Google Assistant uses a retrieval-based model (Google’s Smart Reply), which can give you an idea of what this looks like.
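To make "retrieval-based" concrete, here is a minimal sketch in Python: the bot never generates text, it simply returns the canned answer whose stored question best matches the user's message, using bag-of-words cosine similarity. The question/answer pairs are invented examples:

```python
from collections import Counter
from math import sqrt

# Invented FAQ pairs; a real bot would retrieve from a large, curated corpus.
FAQ = [
    ("what are your opening hours", "We are open 9am-5pm, Monday to Friday."),
    ("how do i reset my password", "Use the 'Forgot password' link on the login page."),
    ("where is my order", "You can track your order from the 'My orders' page."),
]

def _vec(text):
    """Bag-of-words vector: word -> count."""
    return Counter(text.lower().split())

def _cosine(a, b):
    shared = set(a) & set(b)
    num = sum(a[w] * b[w] for w in shared)
    den = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return num / den if den else 0.0

def reply(message):
    """Return the canned answer whose stored question best matches the message."""
    question, answer = max(FAQ, key=lambda qa: _cosine(_vec(message), _vec(qa[0])))
    return answer

print(reply("hi, I need to reset my password"))
```

Because the bot only ever returns vetted answers, retrieval models trade flexibility for safety, which is why they remain popular for customer-service deployments.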

How companies are making money with recommender systems

Simply put, a recommender system is an AI algorithm (usually Machine Learning) that uses Big Data to suggest additional products to consumers based on a variety of signals, such as past purchases, demographic information, or search history.
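The past-purchases variety is easy to sketch: count which items co-occur with a given item in historical shopping baskets, and recommend the most frequent companions ("customers who bought X also bought Y"). The baskets below are invented toy data:

```python
from collections import Counter

# Invented purchase history: each set is one customer's basket.
baskets = [
    {"laptop", "mouse", "keyboard"},
    {"laptop", "mouse"},
    {"laptop", "monitor"},
    {"phone", "charger"},
]

def recommend(item, baskets, k=2):
    """Recommend the k items most often bought together with `item`."""
    co = Counter()
    for basket in baskets:
        if item in basket:
            co.update(basket - {item})
    return [other for other, _ in co.most_common(k)]

print(recommend("laptop", baskets))  # 'mouse' ranks first (bought with laptops twice)
```

This co-occurrence idea is the simplest form of collaborative filtering; production systems refine it with similarity weighting, matrix factorization, or learned embeddings.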

1. There are many types of recommender systems available

Choosing the right type of recommender system is as important as choosing to utilize one in the first place. Here is a quick overview of the options available to you.

Machine Learning Terms every manager should know
