
Is explainability a top inhibitor in Financial Services teams adopting/rolling out pilots?

Explainability is not always an inhibitor, but it certainly prevents several use cases from fully taking advantage of the latest techniques available today. An example of an application where it is not an inhibitor is a chatbot, as it is fairly easy to judge whether the chatbot is working or not. Its ability to work and provide a compelling experience matters more than understanding how the algorithm generated the different sentences. Another example is voice-to-text.

Regulated use cases such as investment advice, lending, fraud detection and anti-money laundering, or use cases associated with potentially high losses, require a deeper understanding of the AI results. Right now several firms are looking to adopt deep learning for credit scoring, for example. Yet despite the gains they identify in these new approaches, they are prevented from rolling out the algorithms due to a lack of explainability.

Teams responsible for developing these models therefore rely on simple, explainable models such as combinations of decision rules, logistic regressions, generalized linear models or decision trees.
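As a rough illustration of that baseline (a generic scikit-learn sketch, not any firm's actual scorecard), a logistic regression exposes its reasoning directly through its coefficients; the feature names and data below are placeholder assumptions:

```python
# Minimal sketch of an interpretable credit-scoring baseline.
# The dataset, feature names and labels are illustrative placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

feature_names = ["income", "debt_ratio", "age", "past_defaults"]
X = np.random.rand(500, 4)                     # placeholder training data
y = (np.random.rand(500) > 0.7).astype(int)    # placeholder default labels

scaler = StandardScaler()
model = LogisticRegression().fit(scaler.fit_transform(X), y)

# Each coefficient is a global, human-readable explanation:
# a positive weight pushes the default probability up.
for name, coef in zip(feature_names, model.coef_[0]):
    print(f"{name}: {coef:+.3f}")
```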

Yet even if these models are explainable, the quality of the explanations they generate depends on their overall precision. In several cases we have found that more advanced algorithms, paired with the proper explainability techniques, provided better explanations than a poorly trained logistic regression.

Explainability is a top feature for teams looking for tools to audit models; for risk modelling and stress-testing teams that want to roll out deep-learning-based models; for teams working in customer advice (relationship managers, private bankers); for teams working on regulated use cases; for marketing teams who need to tailor retention actions; and for customer service teams that need to understand why a customer may be leaving in order to have the right discussion with them.

The quality of the explainability is assessed before rolling out the models, and the quality of the explanations is a major criterion when choosing between several models. A more precise model may be discarded if its explanations are not considered compelling.

Explainability is also a feature for any technical team looking for biases or errors in the selection of input variables, and it should become essential for all teams building advanced models today.

To what extent has the industry solved this challenge, and how?

So far, the financial services industry is at the beginning of its adoption of explainable AI and responsible AI technologies. Several regulators, such as the ACPR in France or the Monetary Authority of Singapore, have released guidelines and principles and launched pilots with startups and financial institutions to better understand these technologies, identify existing barriers, establish metrics and clarify definitions (there is still confusion between explainability, transparency, justifiability and interpretability).

Large institutions are investigating these technologies at the moment. When they need explainable technologies, they often rely on simple models and rule systems, and because of a lack of clear guidelines from regulators and legislators, they struggle to adopt more efficient technologies at scale, in particular in the most regulated use cases. They therefore mostly assess them on less regulated use cases for the moment. A few companies are more ambitious and have launched experiments or even production agendas around risk, compliance or customer experience.

Several companies (we could say most companies) do not integrate explainability into what they build for the moment. Interpretability and explainability are most of the time not considered beforehand. But financial services firms now realize that explainable AI could help them address use cases that were previously out of reach because AI could not be used on them due to a lack of explainability.

For the pilots they mostly rely on the SHAP library, and experiments are often restricted to data-science teams. The most advanced are looking for more industrial solutions that go beyond SHAP and address several use cases. Auditors are also looking to adopt such platforms to be able to challenge the work of data scientists. Finally, we are now at the stage where explainable AI is being assessed by business teams to deliver the value generated by the algorithms.
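For reference, a typical pilot of this kind with the SHAP library looks roughly like the sketch below; the model and data are placeholder assumptions:

```python
# Minimal sketch of a SHAP pilot on a tree-based model.
# The data and model are illustrative placeholders.
import numpy as np
import shap
from sklearn.ensemble import GradientBoostingClassifier

X = np.random.rand(300, 5)                   # placeholder features
y = (X[:, 0] + X[:, 1] > 1).astype(int)      # placeholder labels

model = GradientBoostingClassifier().fit(X, y)

# TreeExplainer computes Shapley values efficiently for tree ensembles.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X[:10])

# Per-decision (local) attributions for the first scored customer.
print(shap_values[0])
```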

DreamQuark is helping with the last two cases.

How does your product or capabilities achieve explainability?

Because we could benefit from better, more precise models powered by deep learning or other advanced techniques such as boosting (XGBoost for example), DreamQuark has been working since 2016 to provide explainability by default for its algorithms. To date we support deep-learning algorithms, boosting models, random forests and logistic regression, and we also enable combinations of several models (ensembles) as well as combinations of ML models with business rules.
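As a simple, hypothetical illustration of combining a model score with a business rule (not DreamQuark's actual implementation), a hard rule can gate or override the score:

```python
# Illustrative sketch of combining an ML score with a business rule.
# The rule, threshold and feature names are hypothetical.
def decide(features: dict, model_score: float) -> str:
    # Hard business rule: an active payment incident always triggers review.
    if features.get("active_payment_incident", False):
        return "manual_review"
    # Otherwise fall back to the model score.
    return "approve" if model_score >= 0.5 else "decline"

print(decide({"active_payment_incident": False}, model_score=0.72))  # approve
```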

We support explainability for classification and multi-class classification tasks, regression, customer segmentation and recommendation (in particular providing an explanation per recommended product). We now support explainability for large NLP models (to be added to our product this year) and will add explainability for time-series-based models.

We provide local (decision-level) explainability and global (model-level) explainability, and offer different techniques: gradient-based explanations (a proprietary technique developed by DreamQuark in 2016) for deep learning, self-attention for self-attentive neural networks, integrated gradients for NLP, and Shapley values (additive techniques for tree-based and boosting algorithms). We combine these techniques when we use an ensemble model.
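For context, a generic textbook version of gradient-based attribution (input gradients times inputs, shown here in PyTorch) looks like the sketch below; this is not DreamQuark's proprietary technique, and the model and data are placeholders:

```python
# Generic sketch of gradient-based (saliency) attribution for a neural network.
# The network architecture and input are illustrative placeholders.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(4, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())

x = torch.rand(1, 4, requires_grad=True)   # one customer's feature vector (placeholder)
score = model(x)
score.backward()

# The gradient of the score with respect to each input, scaled by the input,
# gives a local attribution: larger magnitude means more influence on this decision.
attributions = (x.grad * x).detach().squeeze()
print(attributions)
```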

We also provide other elements associated with precision, data drift and model stability, as well as documentation on how the model was built.
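To illustrate what a data-drift check can look like in practice, here is a minimal sketch using a two-sample Kolmogorov-Smirnov test on one feature; the metric choice and data are assumptions, not necessarily what the product uses:

```python
# Illustrative data-drift check on a single numerical feature using a
# two-sample Kolmogorov-Smirnov test; the distributions are placeholders.
import numpy as np
from scipy.stats import ks_2samp

train_income = np.random.normal(35_000, 8_000, size=5_000)   # training distribution
live_income = np.random.normal(39_000, 9_000, size=1_000)    # incoming production data

stat, p_value = ks_2samp(train_income, live_income)
if p_value < 0.01:
    print(f"Drift detected on 'income' (KS statistic={stat:.3f})")
```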

These techniques are still mainly oriented towards data scientists, but as the technology spreads to the front office of the bank, new techniques need to be invented to fit the skills and usage of front-office teams. This is even more important as customers may require to see the explanations when they are affected by the decision of an automated system (as stated by the GDPR).

DreamQuark is therefore moving towards generating textual explanations that are better suited to this audience than traditional techniques, which may be difficult to understand.
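A toy version of such a textual explanation, turning feature attributions into a sentence, could look like the following; the feature names and attribution values are hypothetical:

```python
# Toy sketch of turning feature attributions into a textual explanation.
# Feature names and attribution values are hypothetical.
attributions = {"debt_ratio": +0.32, "income": -0.18, "past_defaults": +0.11}

def to_text(attrs: dict, top_k: int = 2) -> str:
    ranked = sorted(attrs.items(), key=lambda kv: abs(kv[1]), reverse=True)[:top_k]
    parts = [
        f"{name.replace('_', ' ')} {'increases' if value > 0 else 'decreases'} the risk score"
        for name, value in ranked
    ]
    return "Main factors: " + "; ".join(parts) + "."

print(to_text(attributions))
# Main factors: debt ratio increases the risk score; income decreases the risk score.
```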

Different axioms are important for explainability methods: reproducibility, the capacity to identify the most important variables, the capacity to be consistent even when the underlying models are different (the same explanation for the same prediction), and the capacity to identify when two directly correlated variables are used as input (for example, if both birthdate and age are inputs, a good explanation would present only age or birthdate, not both).
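As a simple illustration of the correlated-inputs point, a pairwise-correlation scan can flag variables that carry the same information; the 0.95 threshold and data below are arbitrary assumptions:

```python
# Illustrative check for directly correlated input variables, such as age and
# birth year; the 0.95 threshold is an arbitrary assumption.
import numpy as np
import pandas as pd

df = pd.DataFrame({"age": np.random.randint(18, 80, size=1_000)})
df["birth_year"] = 2022 - df["age"]              # perfectly correlated with age
df["income"] = np.random.normal(35_000, 8_000, size=1_000)

corr = df.corr().abs()
for a in corr.columns:
    for b in corr.columns:
        if a < b and corr.loc[a, b] > 0.95:
            print(f"'{a}' and '{b}' are directly correlated; keep only one.")
```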

Our explainability is also used to identify potential biases during the design phase.
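One common way to surface such a bias is to compare outcome rates across groups, as in the minimal sketch below; the groups, approval rates and disparate-impact framing are illustrative assumptions, not DreamQuark's method:

```python
# Illustrative bias check: compare approval rates across a protected group.
# The group labels, approval rates and ratio interpretation are assumptions.
import numpy as np

group = np.random.choice(["A", "B"], size=2_000)
approved = np.where(group == "A",
                    np.random.rand(2_000) < 0.55,
                    np.random.rand(2_000) < 0.40)

rate_a = approved[group == "A"].mean()
rate_b = approved[group == "B"].mean()
ratio = min(rate_a, rate_b) / max(rate_a, rate_b)
print(f"Approval rates: A={rate_a:.2f}, B={rate_b:.2f}, disparate impact ratio={ratio:.2f}")
```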

What are the top techniques that you find effective in convincing prospects/customers about the explainability of your algorithms?

We have been using our gradient approach since 2016, and our customers find the explainability compelling (the explainability is assessed before scores are sent to the business teams). We also use SHAP for tree-based methods. The self-attentive technology provides a good trade-off between performance and explainability.

We have used a select-and-retrain approach to identify which methods best identify the most important variables, and we have integrated the best explainability methods for a given use case.
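A select-and-retrain evaluation can be sketched roughly as follows: drop the features an explanation method ranks highest, retrain, and measure the performance drop; a larger drop suggests the method found truly important variables. The data, model and ranking below are placeholder assumptions:

```python
# Rough sketch of a select-and-retrain evaluation. Data, model and the
# feature ranking (here taken from impurity importances) are placeholders.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X = np.random.rand(1_000, 8)
y = (X[:, 0] + 0.5 * X[:, 1] > 1).astype(int)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

base = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)
ranking = np.argsort(base.feature_importances_)[::-1]   # stand-in for any method

baseline_acc = base.score(X_te, y_te)
keep = np.delete(np.arange(X.shape[1]), ranking[:2])    # drop the top-2 features
retrained = RandomForestClassifier(random_state=0).fit(X_tr[:, keep], y_tr)
print(f"Accuracy drop: {baseline_acc - retrained.score(X_te[:, keep], y_te):.3f}")
```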

Our customers want the details of the explainability: the most important variables, their values, whether each variable positively or negatively impacts the score, and the distribution of each variable so they can see how its value relates to a higher or lower probability. They also want to know how a variable should change in order to impact the score (counterfactual analysis). The frontline wants simpler explanations.
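A minimal one-variable counterfactual search might look like the sketch below; the scoring function, step size and threshold are illustrative assumptions:

```python
# Minimal sketch of a one-variable counterfactual search: decrease the debt
# ratio until the predicted default probability falls below a threshold.
# The scoring function, step size and threshold are illustrative assumptions.
def default_probability(income: float, debt_ratio: float) -> float:
    return max(0.0, min(1.0, 0.2 + 0.9 * debt_ratio - income / 200_000))

income, debt_ratio = 40_000, 0.70
while default_probability(income, debt_ratio) >= 0.5 and debt_ratio > 0:
    debt_ratio -= 0.01

print(f"Reducing the debt ratio to {debt_ratio:.2f} brings the score below 0.5")
```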

Our customers also want to know whether they are confronted with data that has not been seen before (in particular for categorical data, or when a value is out of bounds, for example if a person’s age is higher than any age in the training set). They also want to know if the data has changed, as this may impact the results.
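A minimal sketch of such a check, flagging unseen categories and out-of-range numerical values against what was observed at training time, could look like this; the feature names and bounds are hypothetical:

```python
# Illustrative validation of incoming data against what was seen at training
# time: unseen categories and out-of-range values are flagged.
# The feature names, categories and bounds are hypothetical.
training_categories = {"employment_status": {"employed", "self-employed", "retired"}}
training_ranges = {"age": (18, 85)}

def flag_unseen(record: dict) -> list[str]:
    warnings = []
    for feature, seen in training_categories.items():
        if record.get(feature) not in seen:
            warnings.append(f"Unseen category for '{feature}': {record.get(feature)!r}")
    for feature, (lo, hi) in training_ranges.items():
        value = record.get(feature)
        if value is not None and not lo <= value <= hi:
            warnings.append(f"'{feature}'={value} is outside the training range [{lo}, {hi}]")
    return warnings

print(flag_unseen({"employment_status": "student", "age": 92}))
```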

We provide solutions to all these challenges.

Do you think your explainability product can help increase financial inclusion for the underbanked?

We have already worked with a French credit card provider to build a solution that helps small businesses better manage their finances. We have also worked on revolving credit challenges to identify customers at risk of default and find solutions that avoid default without cutting off their access to their payment means.

We are doing research on AI biases in order to redress them and avoid all sorts of discrimination across the algorithms available through our product, Brain.

As part of our strategy, we are also aligning our developments with the UN Sustainable Development Goals.

The key challenge is to have data on these underbanked customers and to raise awareness that biases need to be identified and corrected in order to recommend suitable products to them. These biases can be identified using data drift. It is also important to build and sell these products through the proper channels.

Finally, Brain could be used to transform a relationship-manager-based model into a self-service model, which is key to scaling banking beyond the banks’ more traditional audience. New data could be captured to better reflect the risk of these underbanked customers; Brain does not currently capture data.