There are a lot of acronyms for asset managers to keep up with. ESG. EBITDA. COGS. Any institutional investor looking to incorporate artificial intelligence (AI) into their process should be aware of another important acronym: XAI. XAI, or explainable AI, is critical for asset managers who need to explain artificial intelligence and machine learning (ML) decisions to their stakeholders.
Broadly, AI and ML are based on algorithms that are capable of learning and adapting. As an algorithm becomes more advanced, understanding how it makes a decision becomes more and more difficult. Any artificial intelligence or machine learning that is uninterpretable by the end user is called “black box” AI. Unlike an airplane’s black box, which records everything, black box AI and ML are opaque to the user - even the data scientists who create black box algorithms cannot fully recreate every decision made by the algorithm. Consider it like this: black box AI is like a 100,000,000-piece puzzle where all the pieces are the same colour. Unravelling decisions is incredibly difficult.

Machine learning is an exciting new field in asset management. It has been shown to be predictive in capital markets, and finding any way to eke out additional alpha or performance is key to an asset manager’s success. However, any AI an asset manager incorporates into their process cannot be a black box. In the Harvard Data Science Review, authors Cynthia Rudin and Joanna Radin put it this way: “Trusting a black box model means that you trust not only the model’s equations, but also the entire database that it was built from.” Asset management is built on trust, but it would be foolhardy for any asset manager to fully trust any one system (from AI to back office to sales and trading). AI and ML for investment managers must be explainable not only to the investment managers themselves, but also to their various stakeholders (shareholders, investors, CIOs, portfolio managers, etc.).
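To make the distinction concrete, here is a minimal sketch in Python (synthetic data, scikit-learn models; none of this is Boosted.ai’s implementation) contrasting a transparent model, whose decisions can be read straight off its coefficients, with a black box ensemble whose individual predictions cannot be reconstructed from any single equation:

```python
# Minimal sketch: transparent vs. black box models on synthetic factor data.
# All names and data here are illustrative assumptions, not Boosted.ai's stack.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))                       # three synthetic "factors"
y = 0.5 * X[:, 0] - 0.2 * X[:, 1] + rng.normal(scale=0.1, size=500)

# Glass box: every prediction is a transparent weighted sum of the inputs,
# so the model's entire decision process fits in one readable equation.
glass_box = LinearRegression().fit(X, y)
print("factor weights:", glass_box.coef_)

# Black box: hundreds of trees jointly produce each prediction; there is no
# single equation an end user can inspect to replay an individual decision.
black_box = GradientBoostingRegressor(n_estimators=300).fit(X, y)
print("trees in the ensemble:", black_box.n_estimators_)
```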
Explainable AI is the opposite of black box AI. XAI is readily interpretable by the user: how the machine’s algorithms make decisions is understandable to humans. Not surprisingly, many people in the field of artificial intelligence and machine learning are dedicated to finding ways to make AI and ML more explainable. Transparent AI (we like to think of our AI as more glass box than black box) is critical for asset managers who want to implement artificial intelligence in a responsible way.
Image adapted from DARPA
Some of the benefits of explainable AI for asset managers are:

- Trust: users can see why a model favours one stock over another, rather than taking its output on faith.
- Accountability: investment managers can explain AI/ML decisions to their stakeholders - shareholders, investors, CIOs and portfolio managers.
- Responsible implementation: a transparent model can be monitored and questioned rather than blindly followed.
We at Boosted.ai believe so strongly in explainable AI and its benefits that we designed our platform, Boosted Insights, from day one to be as explainable as possible. To that end, we have added new functionality to the product to increase explainability.
Our Rankings v2 improves on our existing explainability by showing not only every feature that contributed to a model’s performance, but exactly how much each feature drove that performance. With Rankings v2, users can see every feature, for every stock in their universe, on a per-stock basis, along with how it affected that stock’s performance. It also shows the variation and dispersion of those scores, and when (and why) our machine learning algorithms favour one stock over another.
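For readers curious how per-stock, per-feature attributions can be produced in general, the sketch below uses SHAP values on a gradient-boosted model. The feature names and data are hypothetical, and this illustrates the underlying idea only, not the Rankings v2 implementation:

```python
# Illustrative only: SHAP decomposes each prediction into additive per-feature
# contributions, one row per stock. Feature names here are assumptions.
import numpy as np
import pandas as pd
import shap
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(1)
features = ["momentum", "value", "quality"]          # hypothetical factors
X = pd.DataFrame(rng.normal(size=(200, 3)), columns=features)
y = 0.4 * X["momentum"] - 0.3 * X["value"] + rng.normal(scale=0.1, size=200)

model = GradientBoostingRegressor().fit(X, y)
explainer = shap.TreeExplainer(model)
contributions = explainer.shap_values(X)             # shape: (stocks, features)

per_stock = pd.DataFrame(contributions, columns=features)
print(per_stock.head())                   # how each feature moved each stock
print("dispersion of contributions:")
print(per_stock.std())                    # variation of scores across stocks
```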
Our new Factor Timing function lets users compare any two quantile, factor or sector predictions that our machine learning makes within the AI models they create. This gives users a visual screen for comparing stocks in their universe, because we believe that AI should be visually compelling (as well as accurate and predictive, of course!).
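As a rough illustration of what comparing two of a model’s prediction groups looks like in code, here is a small pandas sketch; the sector names, column names and predictions are made up for the example and are not Boosted Insights output:

```python
# Illustrative comparison of model predictions for two hypothetical sectors.
import numpy as np
import pandas as pd

rng = np.random.default_rng(2)
dates = pd.date_range("2021-01-01", periods=12, freq="MS")   # monthly dates
preds = pd.DataFrame({
    "date": np.repeat(dates, 2),
    "sector": ["Tech", "Energy"] * len(dates),               # assumed sectors
    "predicted_return": rng.normal(scale=0.02, size=2 * len(dates)),
})

# Pivot so the two groups' predictions line up side by side for comparison.
comparison = preds.pivot(index="date", columns="sector", values="predicted_return")
print(comparison)
print("months where Tech is favoured over Energy:")
print(comparison.index[comparison["Tech"] > comparison["Energy"]].tolist())
```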
Feature Importance shows users how individual features tend to impact the machine’s decisions, and is explorable over the entire timeframe of the model. It gives users a deeper understanding of what the machine is doing and of the interesting relationships it has found across the input space.
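One generic way to explore how feature importance evolves over a model’s timeframe is to refit on rolling windows and watch the importances shift. The sketch below does exactly that on synthetic data with assumed feature names; it illustrates the concept, not our product’s method:

```python
# Rolling-window feature importance on synthetic data (illustrative only).
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(3)
n, window, step = 600, 200, 100
X = pd.DataFrame(rng.normal(size=(n, 3)),
                 columns=["momentum", "value", "quality"])   # assumed names
# The true relationship drifts: momentum matters early, value matters later.
w = np.linspace(1.0, 0.0, n)
y = w * X["momentum"] + (1 - w) * X["value"] + rng.normal(scale=0.1, size=n)

rows = []
for start in range(0, n - window + 1, step):
    chunk = slice(start, start + window)
    model = RandomForestRegressor(n_estimators=50, random_state=0)
    model.fit(X[chunk], y[chunk])
    rows.append(model.feature_importances_)

# Each row is one window; importance migrates from momentum to value.
print(pd.DataFrame(rows, columns=X.columns))
```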
Check out the recap of our webinar on explainable AI here or watch it in full here.
Asset managers looking to implement AI have only one option: explainable AI and ML. To fully succeed, the AI must be transparent and trusted, so that investment managers can explain AI/ML decisions to their stakeholders. Asset managers can look to firms that specialize in explainable AI built specifically for institutional investors, like Boosted.ai. If you want to learn more about XAI or how to get started with AI today, please reach out to us.