January 22, 2021 Matthias Weber

Artificial foolishness

On the limits of machine learning.

As an investor, you are tired of negative interest rates on your savings account and are considering investing more in stocks. Your investment advisor expects an eight to ten percent return on stocks next year – actually, like every year since you started working with him.

You google “successful equity strategies” and come across ads for the newly launched “Global AI-driven Equity” fund. It invests in stocks based on artificial intelligence. A simulation shows that it would have coped outstandingly with every crisis over the last 20 years. Every year would have resulted in a positive performance. For the coming year, the model promises a fund performance of 15 to 25 percent despite the expectation of a stock market crash!

Your investment advisor obviously has no idea what’s in store for investors next year. The AI fund, meanwhile, raises big expectations. Will it be able to deliver?

Who should you listen to, who should you trust with your money?

Your investment advisor works with a simple rule of thumb: stocks have returned eight to ten percent per year over the last few decades, so they will do the same next year. On average, he’s probably doing pretty well. In individual years, however, the mispredictions are enormous. At least you are aware of this. His advice is simple but quite helpful: “Buy shares and have staying power. It may go wrong in the short term, but it will pay off in the long term.”

The AI fund, on the other hand, has used machine learning to intensively mine the past twenty years of data for patterns. This has enabled it to position itself optimally at every point in the simulation. Everything seems perfect. However, all that glitters is not gold: the model is obviously overfitted. The more freedom, and thus complexity, a model is granted, the more precisely it can reproduce the past. The coders evidently succumbed to the temptation to fit even purely random, unstable patterns in order to generate a deceptively perfect simulation.
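The effect is easy to reproduce. The following sketch (hypothetical data, not the fund’s actual model) fits a simple rule-of-thumb average and a high-degree polynomial to the same noisy “yearly returns”: the complex model fits the known past almost perfectly, yet falls apart on the years it has not seen.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: a stable 9% average return plus pure noise.
years = np.linspace(0.0, 1.0, 20)
returns = 0.09 + rng.normal(0.0, 0.15, size=20)

# "Past" used for fitting vs. "future" held out for evaluation.
train_x, test_x = years[:15], years[15:]
train_y, test_y = returns[:15], returns[15:]

# Simple model: the advisor's rule of thumb, i.e. the historical average.
simple_pred = np.full_like(test_y, train_y.mean())

# Complex model: a degree-12 polynomial happily "finds" patterns
# in what is actually just noise.
coeffs = np.polyfit(train_x, train_y, deg=12)
in_sample = np.polyval(coeffs, train_x)
out_sample = np.polyval(coeffs, test_x)

print("complex model, in-sample MSE:    ", np.mean((in_sample - train_y) ** 2))
print("complex model, out-of-sample MSE:", np.mean((out_sample - test_y) ** 2))
print("simple average, out-of-sample MSE:", np.mean((simple_pred - test_y) ** 2))
```

The complex model’s in-sample error is nearly zero, which is exactly what a flawless 20-year simulation looks like, while its out-of-sample error dwarfs that of the crude average.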

However, this also means that the model is extremely sensitive to the smallest changes: had the past played out only slightly differently, the model would be way off. This instability makes it completely unfit for the future. It acts like a driver who only looks in the rearview mirror. But the past does not repeat itself. The world is not stationary; it changes. The promise of the “Global AI-driven Equity” fund is not credible. It rests on simulations alone, not a tangible track record in the real world. The model will deliver costly false signals, and you may lose an enormous amount of money.

In the following post, we cover why forecasting models are doomed to fail in the financial markets, and why no one likes to talk about it.

Background information for those interested:

Fortmann-Roe, Scott (2012): Understanding the Bias-Variance Tradeoff.
