August 4th, 2021

Asset Managers Need Explainable AI. But What Is It, And How Can I Incorporate It Into My Process?

Written By: Nick Abe

There are a lot of acronyms for asset managers to keep up with. ESG. EBITDA. COGS. Any institutional investor looking to incorporate artificial intelligence (AI) into their process should be aware of another important one: XAI. XAI, or explainable AI, is what lets asset managers explain artificial intelligence and machine learning (ML) decisions to their stakeholders.

Black box AI and ML

Broadly, AI and ML are based on algorithms that are capable of learning and adapting. As these algorithms become more advanced, understanding how they reach a decision becomes more and more difficult. Any artificial intelligence or machine learning that is uninterpretable by the end user is called “black box” AI. Unlike an airplane’s black box, which records everything, black box AI and ML are opaque to the user – even the data scientists who create black box algorithms cannot fully recreate every decision the algorithm makes. Consider it like this: black box AI is like a 100,000,000-piece puzzle where all the pieces are the same colour. Unravelling its decisions is incredibly difficult.

Machine learning is an exciting new field in asset management. It has been shown to be predictive in capital markets, and finding any way to eke out additional alpha or performance is key to an asset manager’s success. However, any AI an asset manager incorporates into their process cannot be a black box. In the Harvard Data Science Review, authors Cynthia Rudin and Joanna Radin put it this way: “Trusting a black box model means that you trust not only the model’s equations, but also the entire database that it was built from.” Asset management is built on trust, but it would be foolhardy for any asset manager to fully trust any one system (from AI to back office to sales and trading). AI and ML for investment managers must be explainable not only to the investment managers themselves, but also to their various stakeholders (shareholders, investors, CIOs, portfolio managers, etc.).

Explainable AI

Explainable AI is the opposite of black box AI. XAI is readily interpretable by the user: how the machine’s algorithms reached their decisions is understandable to humans. Not surprisingly, many people in the field of artificial intelligence and machine learning are dedicated to finding ways to make AI and ML more explainable. Transparent AI (we like to think of our AI as more glass box than black box) is critical for asset managers to implement artificial intelligence in a responsible way.

Image adapted from DARPA

Benefits of Explainable AI

Some of the benefits of explainable AI for asset managers are:

  • Tailored to your process:

    Some investment managers may feel that macroeconomic data is most important in stock selection, whereas others may only care about alternative data, or prefer bottom-up analysis. With explainable AI that shows which features were most important within the ML model, institutional investors can create predictive AI specific to their stock-picking process (a minimal code sketch of this idea follows this list).

  • Explainable to all stakeholders:

    It is not enough for portfolio managers to simply say: “I like this stock because a machine learning model predicts it will perform.” Not only do PMs have to explain to senior-level folks at their firm why they selected a particular stock, they also have to explain to investors why a certain stock is in their fund. Explainable AI arms a PM with much more data, highlighting exactly how and why a stock is predicted to perform. They can say “The machine learning model suggests this stock will do well because of price-to-earnings ratios, which are inversely related to this alternative data we have purchased,” and so on.

  • Discern the value of your data:

    Institutional investors are embracing alternative data. However, knowing the value of that data quickly and efficiently allows fund managers to deploy capital wisely when purchasing alternative data sets. With explainable AI, a manager can zero in on exactly which sets of alternative data contributed to a model’s performance.

  • Full control over your portfolio:

    Our team includes many former portfolio managers. We know that most asset managers are not keen on ceding control of their carefully honed portfolios to a machine. However, pairing the PM’s capital markets expertise with advanced, explainable ML is how all teams win. More on quantamental investing and why we believe it’s the future here. Implementing explainable AI allows the manager to fully adjust every facet of their portfolio, from the inputs to the portfolio construction, all while seeing which features contribute most to their model’s performance.

  • Regulatory compliance:

    AI is a new and growing field. Regulations on its use may come into play in the future, and asset managers that employ explainable AI will be at the forefront of compliance with those rules. Explainable AI – where all features are known and understood – is seen as fairer and more balanced than black box techniques.
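To make the first benefit concrete, here is a minimal sketch of how “which features mattered” can be surfaced from an ML model. It is a generic illustration using scikit-learn’s permutation importance, not Boosted Insights’ actual method, and the features (a hypothetical macro series, fundamental ratio, and alternative-data signal) and data are all made up.

```python
# A minimal, generic sketch of feature importance for stock selection.
# All feature names and data here are hypothetical illustrations.
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000
# Hypothetical per-stock features: macro, bottom-up, and alternative data.
panel = pd.DataFrame({
    "rate_change": rng.normal(0, 1, n),         # macroeconomic
    "pe_ratio": rng.normal(15, 5, n),           # bottom-up fundamental
    "web_traffic_growth": rng.normal(0, 1, n),  # alternative data
})
# Synthetic forward return, driven mostly by the alternative signal.
forward_return = (
    0.5 * panel["web_traffic_growth"]
    - 0.2 * (panel["pe_ratio"] - 15) / 5
    + rng.normal(0, 0.5, n)
)

X_train, X_test, y_train, y_test = train_test_split(
    panel, forward_return, random_state=0
)
model = GradientBoostingRegressor(random_state=0).fit(X_train, y_train)

# Permutation importance: how much does out-of-sample accuracy degrade
# when each feature is shuffled? A larger drop means a more important feature.
result = permutation_importance(
    model, X_test, y_test, n_repeats=10, random_state=0
)
for name, imp in sorted(zip(panel.columns, result.importances_mean),
                        key=lambda t: -t[1]):
    print(f"{name:>20}: {imp:.3f}")
```

A manager who believes in bottom-up analysis could use exactly this kind of readout to confirm that fundamental features, rather than noise, are driving the model’s picks.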

Leading explainable AI for asset managers

We at Boosted.ai believe so strongly in explainable AI and its benefits that we have designed our platform – Boosted Insights – from day one to be as explainable as possible. To that end, we have added new functionality within the product to increase explainability.

Rankings v2:

Our Rankings v2 improves upon our existing explainability by showing not only every feature that contributed to a model’s performance, but exactly how much each feature drove that performance. With Rankings v2, a user can see every feature on a per-stock basis – for every stock in their universe – and how it affected that stock’s performance. It also shows the variation and dispersion of those scores, and when and why our machine learning algorithms favour one stock over another.
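Rankings v2 itself is proprietary, but as a rough analogy, per-stock explanations of this flavour can be produced with SHAP values, which decompose each individual prediction into per-feature contributions. The sketch below reuses the hypothetical `model` and `X_test` from the earlier example; it illustrates the general technique, not the product’s implementation.

```python
# A rough, generic analogy to per-stock attributions (not Rankings v2):
# SHAP values split one prediction into per-feature contributions, so you
# can ask "why does the model favour this particular stock?"
# Reuses `model` and `X_test` from the sketch above.
import numpy as np
import shap  # pip install shap

explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_test)  # shape: (n_stocks, n_features)

stock_idx = 0  # inspect a single (hypothetical) stock
for name, contrib in zip(X_test.columns, shap_values[stock_idx]):
    print(f"{name:>20}: {contrib:+.3f}")

# The per-feature contributions plus the base value sum to the prediction.
base = float(np.asarray(explainer.expected_value).ravel()[0])
pred = model.predict(X_test.iloc[[stock_idx]])[0]
print(f"{'base value':>20}: {base:+.3f}")
print(f"{'prediction':>20}: {pred:+.3f}")
```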

Factor Timing:

Our new Factor Timing function allows users to compare two quantile, factor, or sector predictions that our machine learning makes within the AI models they create. This gives users a visual way to compare stocks in their universe, because we believe that AI should also be visually compelling (as well as accurate and predictive, of course!).

Feature Importance:

Feature Importance shows users how individual features tend to impact the machine’s decisions, and is explorable over the entire timeframe of the model. It gives users a deeper understanding of what the machine is doing and of the interesting relationships it has found across the input space.
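One generic way to explore how importance evolves over a model’s timeframe (again, an illustrative sketch under synthetic data, not Boosted Insights’ method) is to refit the model on rolling windows of dates and track each feature’s importance across windows:

```python
# A generic sketch (not Boosted Insights' method) of feature importance
# over time: refit on rolling date windows and watch importance drift.
# Dates, features, and data are all synthetic.
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(1)
dates = pd.date_range("2016-01-01", periods=36, freq="MS")  # month starts
features = ["rate_change", "pe_ratio", "web_traffic_growth"]

rows = []
for as_of in dates:
    # Hypothetical training panel for the window ending at `as_of`.
    X = pd.DataFrame(rng.normal(size=(500, 3)), columns=features)
    y = 0.5 * X["web_traffic_growth"] + rng.normal(0, 0.5, 500)
    m = GradientBoostingRegressor(random_state=0).fit(X, y)
    rows.append({"date": as_of,
                 **dict(zip(features, m.feature_importances_))})

importance_over_time = pd.DataFrame(rows).set_index("date")
print(importance_over_time.tail())  # one importance column per feature
```

Plotting a table like `importance_over_time` is one simple way to see, for example, an alternative-data signal gaining influence while a macro feature fades.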

Webinar:

Check out the recap of our webinar on explainable AI here or watch it in full here.

Takeaways

Asset managers looking to implement AI have only one option: use explainable AI and ML in their process. To fully succeed, all AI must be transparent and trusted, so that investment managers can explain AI/ML decisions to their stakeholders. Asset managers can look to firms that specialize in explainable AI created specifically with institutional investors in mind, like Boosted.ai. If you want to learn more about XAI or how to get started with AI today, please reach out to us.
