Erez Katz, CEO and Co-founder Lucena Research
How Dynamic Machine Learning Models Prolong Your Investment Strategy
When substantial R&D has been spent on an investment strategy but the strategy stops performing, it can be difficult to rewind and start from scratch.
One way to prolong a strategy’s life expectancy is to design it with self-adjustment in mind. I recently showcased a multi-strategy fund approach in which a compilation of uncorrelated strategies is used together with convex optimization. This method emphasizes allocating to the strategy most suitable for the current market regime.
In general, strategies that adjust dynamically are harder to develop, but the extra effort is worthwhile: they ultimately provide unemotional, self-adjusting behavior over time.
How do you know a strategy no longer works?
In the context of big and alternative data signals, it is rather easy to tell when a signal experiences alpha decay. At the strategy level, however, there is no easy way to determine whether a strategy is temporarily re-adjusting or fundamentally no longer relevant.
An important overarching rule of thumb is to set the criteria for when a strategy should be taken offline well before deployment. Several guidelines to consider:
- Volatility breaches its upper or lower threshold bounds.
- Peak-to-trough returns fall below a maximum drawdown guideline.
- The Sharpe ratio falls below a predetermined threshold.
- Proportional NAV allocation drifts above or below predetermined levels.
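The guidelines above can be codified as a simple health check that runs alongside the strategy. The sketch below is illustrative only: the threshold values are placeholders, not recommendations, and the function names are my own.

```python
import numpy as np

def strategy_health(daily_returns, vol_bounds=(0.05, 0.25),
                    max_drawdown=-0.15, min_sharpe=0.5):
    """Flag a strategy for review when it breaches predefined guidelines.

    All thresholds are illustrative placeholders, not recommendations.
    Returns a list of breached guidelines; empty means within bounds.
    """
    r = np.asarray(daily_returns, dtype=float)
    ann_vol = r.std() * np.sqrt(252)                   # annualized volatility
    ann_sharpe = r.mean() / r.std() * np.sqrt(252)     # annualized Sharpe (rf = 0)
    equity = np.cumprod(1 + r)                         # cumulative equity curve
    drawdown = (equity / np.maximum.accumulate(equity) - 1).min()  # peak to trough

    breaches = []
    if not vol_bounds[0] <= ann_vol <= vol_bounds[1]:
        breaches.append("volatility")
    if drawdown < max_drawdown:
        breaches.append("drawdown")
    if ann_sharpe < min_sharpe:
        breaches.append("sharpe")
    return breaches
```

Defining the check as code before deployment removes the temptation to move the goalposts once the strategy is live.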
The chart below highlights crossover performance to validate whether recent performance falls within the expected volatility and return bounds. The orange cone represents the upper and lower bounds, and the mean line in the center of the cone denotes expected behavior.
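A cone like the one in the chart can be derived from in-sample statistics: the expected path grows linearly with time while dispersion grows with the square root of time. The sketch below uses a log-return approximation and is my own illustration, not Lucena's exact construction.

```python
import numpy as np

def performance_cone(mu_daily, sigma_daily, horizon_days, k=2.0):
    """Build upper/lower cumulative-return bounds around the expected path.

    mu_daily and sigma_daily come from the strategy's historical (in-sample)
    daily returns; k sets the cone width in standard deviations.
    Illustrative log-return approximation, not a definitive method.
    """
    t = np.arange(1, horizon_days + 1)
    mean = mu_daily * t                     # expected cumulative log return
    spread = k * sigma_daily * np.sqrt(t)   # dispersion grows with sqrt(time)
    return mean - spread, mean, mean + spread

def within_cone(cum_log_returns, lower, upper):
    """True if every observed cumulative return stays inside the cone."""
    r = np.asarray(cum_log_returns, dtype=float)
    return bool(np.all((r >= lower[:len(r)]) & (r <= upper[:len(r)])))
```

Recent out-of-sample performance that exits the cone is an early warning that the strategy is behaving outside its historical envelope.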
Read more about how to scientifically develop uncorrelated strategies.
Cross-validating the daily return distribution is a common method of identifying a fundamental change in how a strategy performs. In the chart below, the blue area represents the baseline daily return distribution of the strategy’s historical performance, while the orange area represents the most recent evaluation period. A telltale sign is when the orange distribution is skewed to the left relative to the blue baseline.
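The leftward-skew signal described above can be quantified directly. A minimal sketch, assuming a simple skewness comparison (the tolerance value is a placeholder; a formal test such as Kolmogorov-Smirnov could replace it):

```python
import numpy as np

def skewness(x):
    """Sample skewness (Fisher-Pearson): negative values indicate a left tail."""
    x = np.asarray(x, dtype=float)
    d = x - x.mean()
    return (d ** 3).mean() / (d ** 2).mean() ** 1.5

def distribution_shifted(baseline_returns, recent_returns, tolerance=0.5):
    """Flag a regime change when recent skewness drops well below the baseline.

    The tolerance is an illustrative placeholder, not a calibrated value.
    """
    return skewness(recent_returns) < skewness(baseline_returns) - tolerance
```

A persistent negative shift in skewness means losses are becoming larger relative to gains, even if average returns look similar.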
How are dynamic models built and validated?
In machine learning, a model is built by inspecting historical data over a certain look-back period. Here are two methods of integrating dynamic strategy adjustment:
- Ensemble Voting Models – Ensemble voting enables the selection of certain models from a wider universe. Every day, all models vote for certain assets, and the weights of the votes are adjusted based on how accurate each model has been historically. In essence, the strategy perpetually gets smarter as it adds new uncorrelated models in response to new situations or new data. Models that are no longer applicable (those that scored low in accuracy on recent predictions) go into a dormant state until they matter again.
- Dynamic Model Creation – If we limit the historical data a model is trained on, we can generate new models at every predetermined interval, or whenever a strategy falls below its predetermined health-score guidelines.
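The ensemble voting scheme described above can be sketched as follows. This is a minimal illustration with hypothetical model names and an assumed dormancy threshold, not Lucena's production logic:

```python
def weighted_vote(predictions, accuracies, dormancy_threshold=0.5):
    """Combine per-model asset scores, weighting each model by recent accuracy.

    predictions: dict of model name -> {asset: score}
    accuracies:  dict of model name -> recent hit rate in [0, 1]
    Models whose accuracy falls below dormancy_threshold are set dormant
    (weight zero) until their accuracy recovers. Threshold is illustrative.
    Returns assets ranked by weighted vote, highest first.
    """
    votes = {}
    for model, scores in predictions.items():
        weight = accuracies.get(model, 0.0)
        if weight < dormancy_threshold:      # dormant: ignore this model for now
            continue
        for asset, score in scores.items():
            votes[asset] = votes.get(asset, 0.0) + weight * score
    return sorted(votes, key=votes.get, reverse=True)
```

Because weights track recent accuracy, a model that stops working fades out of the vote automatically rather than dragging the ensemble down.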
The chart below covers cross-validation of dynamic models every six months. The green section is the timeframe used to develop or train the models, while the purple timeframe is used to test their efficacy. You'll notice the testing period never overlaps the training period. This is done mainly to reduce overfitting, as we always strive to test against unseen data.
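The rolling train/test layout in the chart amounts to walk-forward validation. A minimal sketch of the split generation, assuming day-indexed data (window lengths are illustrative; scikit-learn's TimeSeriesSplit offers a comparable ready-made utility):

```python
def walk_forward_splits(n_days, train_days, test_days):
    """Yield (train_range, test_range) index pairs for walk-forward validation.

    Each model trains on `train_days` of history and is evaluated on the
    `test_days` immediately following, so the test period never overlaps
    the training period and every evaluation uses unseen data.
    """
    start = 0
    while start + train_days + test_days <= n_days:
        train = range(start, start + train_days)
        test = range(start + train_days, start + train_days + test_days)
        yield train, test
        start += test_days   # roll the window forward by one test period
```

For example, `walk_forward_splits(504, 252, 126)` trains on one year of daily data and tests on the following six months, rolling forward every six months.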
More about how our models reduce overfitting and selection bias here.
The benefits of dynamic models
Incorporating dynamic models provides several benefits:
- Prolongs the efficacy of an algorithmic strategy.
- Supports self-adjustment to respond to changing market conditions.
- Dynamic strategies are harder to overfit since the rules are subject to change and are data dependent. Consequently, they are less subject to human bias.