June 28th, 2021

Webinar Recap: Faster, Smarter Decisions with Explainable AI in Stock Prediction

Written By: Fraser Abe

As the adoption of new technology in investment management continues to grow, AI-based insights have proven invaluable for investment managers seeking to generate additional alpha in their portfolios. However, one deterrent to fully embracing AI is the ‘black box’ nature of core machine learning systems.

Boosted.ai CEO Josh Pantony recently sat down for a webinar to discuss how business leaders can make faster and smarter decisions with transparent, trusted and easily explainable AI models. From this discussion, we want to highlight the importance of interpretability, how we define the problem, and how we solve it by building explainable AI models.

Why Interpretability Matters

Fundamental managers are inherently distrustful of ‘black box’ algorithms. Without knowing what’s driving the machine, it’s difficult to have a high degree of confidence in the results. There’s increased value in the ability to understand what exactly the machine learned.

The machine can regularly reveal its decisions, but without context about what’s driving them, it’s challenging to derive value from those decisions. For example, if a manager wants to go long on a stock and the machine recommends shorting it, the manager will naturally distrust the recommendation without the ability to understand where it came from.

However, if the machine shows that analysts think a given stock will have a good quarter, but credit card data shows the stock will have one of its worst quarters in history, we have a hook that allows us to dive deeper into the decision. A manager can use that hook to conduct independent research, combining the machine’s understanding with their own.

Further, understanding the hooks that drive decisions, coupled with an understanding of what the machine is learning, gives the manager the power to ensure that the machine learns the correct lessons to drive good stock picks.

Defining the Data Disconnect 

Boosted.ai aims to build the best and most predictive models. To measure predictiveness, we use the Information Coefficient: the correlation between the model’s stock rankings and the raw returns of the stocks in any given universe. The higher the Information Coefficient, the more predictive the model.

For example, if we rank 100 stocks, we want the #1 stock to have the highest return over the given investment horizon, the #2 stock to have the second-highest return, the #3 stock to have the third-highest return, and so on. Ultimately, we want to maximize the rank correlation between our rankings and the stocks’ realized returns. Ideally, we want the machine to display the biggest possible spread between the top 20% of stocks and the bottom 20% of stocks in our universe.
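
As an illustration, here is a minimal sketch of how an Information Coefficient and a top-versus-bottom quintile spread might be computed in Python. The data is randomly generated and everything here is hypothetical; this is not Boosted.ai’s implementation.

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(42)
n_stocks = 100

# Hypothetical ranking: 1 = strongest predicted performer.
predicted_rank = np.arange(1, n_stocks + 1)
# Toy realized returns loosely aligned with the ranking, plus noise.
realized_return = -0.001 * predicted_rank + rng.normal(0.0, 0.03, n_stocks)

# IC as the Spearman rank correlation between rankings and returns.
# Ranks ascend (1 is best), so we negate them: a positive IC then
# means higher-ranked stocks earned higher returns.
ic, _ = spearmanr(-predicted_rank, realized_return)
print(f"Information Coefficient: {ic:.3f}")

# Spread between the top 20% and bottom 20% of ranked stocks.
quintile = n_stocks // 5
order = np.argsort(predicted_rank)              # best-ranked first
top = realized_return[order[:quintile]].mean()
bottom = realized_return[order[-quintile:]].mean()
print(f"Top-minus-bottom quintile spread: {top - bottom:.3%}")
```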

Once the machine has accurately ranked the stocks, the next step is building explanations that conform to a manager’s understanding of the world. We do this by building intuitive explanations that reflect what the model is doing. Users can look at the explanations, understand them and easily incorporate them into their own investment process.

How Boosted.ai Builds Explainable Models

Within any specified universe, input space and investment horizon, Boosted.ai creates models with a high Information Coefficient, meaning the model is successful at predicting which stocks will beat or underperform the benchmark.

There are several ways that Boosted.ai shows and visualizes what the model learned.

The first method is a feature word cloud:

The word cloud displays all of the variables the machine is looking at. The size of each word shows how important any given variable is and the distance between words shows how the variables were used together.

In this example, we can see that analyst expectations are an important variable, but context matters: the variable is used relative to market cap and relative to the sector. The cloud shows both which variables are important and how they interact with each other, which allows us to derive additional value.
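
A basic version of this visualization can be sketched with the open-source `wordcloud` package, sizing each variable by importance. The importance values below are hypothetical, and unlike Boosted.ai’s cloud, this simple sketch does not encode the distance between words (i.e., how variables are used together).

```python
from wordcloud import WordCloud
import matplotlib.pyplot as plt

# Hypothetical feature importances from a trained ranking model.
importances = {
    "analyst_expectations": 0.30,
    "market_cap": 0.18,
    "sector": 0.15,
    "price_momentum": 0.12,
    "credit_card_data": 0.10,
    "earnings_surprise": 0.08,
}

# Word size is proportional to the supplied importance weight.
cloud = WordCloud(width=800, height=400, background_color="white")
cloud.generate_from_frequencies(importances)

plt.imshow(cloud, interpolation="bilinear")
plt.axis("off")
plt.show()
```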

At the next stage, we find that there are some sets of variables where the machine needs additional context:

A higher orange value indicates that a variable is a strong predictor and doesn’t need additional context, whereas a higher blue value indicates that the variable needs additional context. If a variable is further to the right, it’s more indicative of a buy. If a variable is further to the left, it’s more indicative of a sell.
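
As a rough illustration of this chart’s structure, the sketch below tabulates three scores per variable: a direction (positive = buy, negative = sell), a standalone (orange) strength, and a needs-context (blue) strength. All numbers and variable names are invented for illustration.

```python
import pandas as pd

chart = pd.DataFrame(
    {
        "direction": [0.6, -0.4, 0.2, -0.7],    # right = buy, left = sell
        "standalone": [0.8, 0.3, 0.1, 0.6],     # orange: predictive on its own
        "needs_context": [0.1, 0.7, 0.9, 0.2],  # blue: needs other variables
    },
    index=["analyst_expectations", "price_momentum",
           "sector", "debt_to_equity"],
)

# Variables that are strong on their own vs. those the model only
# trusts in combination with other variables.
print(chart.sort_values("standalone", ascending=False))
```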

That leads us to how these variables interact with each other:

This particular model shows that price momentum is a positive variable for the information technology sector. But if we look at industrials, it’s the opposite. For industrials, high price momentum is indicative of a sell, and low price momentum is indicative of a buy.

This gives the user an understanding of what the machine learned and how the variables are used in combination. Further, once a user has gleaned information from the model, that information can be whittled down to a simplified rule that managers can incorporate into their processes.
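
For example, the sector-dependent momentum lesson above could be whittled down to a rule like the following sketch. The sector labels and thresholds are illustrative assumptions, not Boosted.ai’s actual rule.

```python
def momentum_signal(sector: str, momentum_zscore: float) -> str:
    """Toy rule: the meaning of price momentum flips by sector."""
    if sector == "Information Technology":
        return "buy" if momentum_zscore > 0 else "sell"
    if sector == "Industrials":
        return "sell" if momentum_zscore > 0 else "buy"
    return "hold"  # no learned rule for other sectors

print(momentum_signal("Information Technology", 1.2))  # buy
print(momentum_signal("Industrials", 1.2))             # sell
```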

We then move to the prediction level:

On this screen, we can ask the machine why it recommends going long or short and what lesson or pattern it learned that led to that decision. In this example, the model predicts that SPG is a good buy. We can see the factors that went into this decision and the variables that are influencing the prediction.
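
Boosted.ai does not publish the internals of its explanation engine, but per-prediction attribution of this kind can be sketched with the open-source SHAP library on a toy gradient-boosted model. All of the data and feature names below are hypothetical; the explained row simply stands in for a stock like SPG.

```python
import numpy as np
import shap
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
features = ["analyst_expectations", "price_momentum", "credit_card_trend"]

# Toy training data: returns driven by three hypothetical variables.
X = rng.normal(size=(500, 3))
y = 0.5 * X[:, 0] - 0.3 * X[:, 1] + 0.2 * X[:, 2] + rng.normal(0, 0.1, 500)

model = GradientBoostingRegressor().fit(X, y)

# Attribute one stock's predicted return to its input variables.
explainer = shap.TreeExplainer(model)
stock_row = X[:1]                       # pretend this row is SPG
contributions = explainer.shap_values(stock_row)[0]
for name, value in zip(features, contributions):
    print(f"{name}: {value:+.3f}")
```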

The model gives accurate predictions and, more importantly, explains how those predictions are derived. It offers an alternative perspective for evaluating decisions, one that may not come naturally to a human but that emerges when a human’s knowledge is combined with the machine’s.

Through this process, we can walk away with confidence that the machine learned real nuance and picked up on information about the market that would otherwise have gone undiscovered.

The Glass Box Solution: Quantamental Investing  

Explainability and interpretability are critical for the next wave of investment management – the human + machine approach. To get there, we build models with intuitive explanations that match a user’s expectations, while accurately reflecting exactly what the ‘black box’ learned. When you combine fundamental and quant investing techniques, the result is a powerful, glass box explanation that is easy to incorporate into investment processes across the industry.

To explore how you can get started with high-performance, Explainable AI today, contact the Boosted team here: https://boosted.ai/book-a-demo
