Quantitative methods for macro information efficiency

While economic information undeniably wields a significant and widespread influence on financial markets, the systematic incorporation of macroeconomic data into trading strategies has thus far been limited. This reflects skepticism towards economic theory and serious data problems, such as revisions, distortions, calendar effects, and, generally, the lack of point-in-time formats. However, the emergence of industry-wide quantamental indicators and the rise of statistical learning methods in financial markets make macroeconomic information more practical and powerful. Statistical learning applied to macro-quantamental indicators has already been demonstrated successfully, and various machine learning techniques are poised to further improve the use of economic information.

The struggles of using macroeconomic data for trading strategies

The principal case for incorporating macroeconomic information into trading strategies has long been compelling. Economic theory suggests that market prices are part of a broader macroeconomic equilibrium and, hence, depend on economic states and shocks. Meanwhile, the full information efficiency of the broader market is unlikely due to research costs and attention limitations (view post here). Discretionary trading, rooted in macroeconomic fundamentals, has a long history and has been the catalyst for numerous successes in the hedge fund industry. Furthermore, trading based on macroeconomic information is not a zero-sum game. Trading profits are not solely derived from the losses of others but are also paid out of the economic gains from a faster and smoother alignment of market prices with economic conditions. Therefore, technological advancements in this field can increase the value generation or “alpha” of the asset management industry overall (view post here).

And yet, macroeconomic data have hitherto played a very modest role in systematic trading. This reflects two major obstacles.

  • First, the relations between economic information and market prices are often indirect and potentially convoluted. Building trading signals requires good judgment based on experience with macroeconomic theory and data. Alas, macroeconomics is not normally the core strength of portfolio managers or trading system engineers. Meanwhile, economists do not always converge on clear common views.
  • Second, to use macroeconomic data in trading, professionals must wrangle many deficiencies and inconveniences of standard series:
    • Sparse history: Many economic data series, particularly in emerging economies, have only a few decades of history. While such a span would be abundant in other fields, in macroeconomics it captures only a limited number of business cycles and financial crises. Often, this necessitates looking at multiple currency areas simultaneously and stitching together different data series, depending on what markets used to watch in the recent and more distant past.
    • Revisions of time series: Standard economic databases store economic time series in their latest revised state. However, initial and intermediate releases of many economic indicators, such as GDP or business surveys, may have looked very different. This is not only because the data sources have updated information and changed methods but also because adjustment factors for seasonal and calendar effects, as well as for data outliers, are being modified with hindsight. The information recorded for the past is typically not the information that was available in the past.
    • Dual timestamps: Unlike market data, economic records have different observation periods and release dates. The former are the periods when the economic event occurred, and the latter are the dates on which the statistics became public. Standard economic databases only associate values with observation periods.
    • Distortions: Almost all economic data are at least temporarily distorted relative to what they promise to measure. For example, inflation data are often affected by one-off tax changes and administered price hikes. Production and balance sheet data often reflect disruptions, such as strikes or unseasonal weather. Also, there can be sudden breaks in time series due to changes in methodology. Occasionally, statistics offices have even released plainly incorrect data for political reasons.
    • Calendar effects: Many economic data series are strongly influenced by seasonal patterns, working day numbers, and school holiday schedules. While some series are calendar-adjusted by the source, others are not. Also, calendar adjustment is typically incomplete and not comparable across countries.
    • Multicollinearity: The variations of many economic time series are correlated due to common influences, such as business cycles and financial crises. Oftentimes, a multitude of data all seem to tell the same story. It is typically necessary to distill latent factors that make up common trends in macro data. This can be done using domain knowledge, statistical methods, or combinations of these two (view post here).

Generally, data wrangling means transforming raw, irregular data into clean, tidy data sets. In many fields of research, this requires mainly reformatting and relabelling. For macroeconomic trading indicators, the wrangling and preparation of data is a lot more comprehensive:

  • Adapting macroeconomic indicators for trading purposes requires transforming activity records into market information states. Common procedures include [1] stitching different series across time to account for changing availability and conventions, [2] combining updates and revisions of time series into “vintage matrices” as the basis of a single “point-in-time” series (see the sketch after this list), and [3] assigning publication timestamps to the periodic updates and revisions of time series.
  • Economic information typically involves filters and adjustments. The parameters of these filters must be estimated sequentially, without look-ahead bias. Standard procedures are seasonal, working day, and calendar adjustment (view post here), special holiday pattern adjustment, outlier adjustment, and flexible filtering of volatile series. Seasonal adjustment is still largely the domain of official software, although modules in R and Python provide access to it.
  • Markets often view information through the lens of economists. To track economic analyses over time, one must account for changing models, variables, and parameters. A plausible evolution of economic analysis can be replicated through machine learning methods. This point is important: conventional econometric models are immutable and not backtestable, because they are built with hindsight and aim to replicate actual economic trends rather than the trends perceived in the past. Machine learning can simulate changing models, hyperparameters, and model coefficients. One practical approach is “two-stage supervised learning” (view post here). The first stage scouts features. The second stage evaluates candidate models and selects the one that is best at any given point in time. Another practical statistical learning example is the simulation of the results of “nowcasters” over time (view post here). This method estimates past information states through a three-step approach of (1) variable pre-selection, (2) orthogonalized factor formation, and (3) regression-based prediction.
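
To make the vintage and point-in-time procedures in the first bullet more concrete, here is a minimal pandas sketch of how a single point-in-time series could be extracted from a hypothetical matrix of data vintages. The data, labels, and helper function are invented for illustration and do not represent JPMaQS conventions.

```python
import numpy as np
import pandas as pd

# Hypothetical vintage matrix: rows are release (real-time) dates, columns are
# observation periods, values are reported year-on-year GDP growth in percent.
# NaN marks observation periods not yet published at that release date.
vintages = pd.DataFrame(
    {
        "2023-Q1": [2.1, 2.3, 2.2, 2.2],
        "2023-Q2": [np.nan, 1.8, 1.9, 2.0],
        "2023-Q3": [np.nan, np.nan, 1.5, 1.4],
    },
    index=pd.to_datetime(["2023-05-15", "2023-08-14", "2023-11-15", "2024-02-14"]),
)

def point_in_time(vintage_matrix: pd.DataFrame) -> pd.Series:
    """For each release date, return the latest reported value that was
    available at that date, i.e. the market's information state."""
    return vintage_matrix.apply(lambda row: row.dropna().iloc[-1], axis=1)

pit_series = point_in_time(vintages)
print(pit_series)  # one value per release date, usable for backtesting
```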

News and comments are major drivers for asset prices, probably more so than conventional price and economic data. Yet, no financial professional can read and analyze the vast flow of verbal information. Therefore, comprehensive news analysis is increasingly becoming the domain of natural language processing, a technology that supports the quantitative evaluation of humans’ natural language (view post here). Natural language processing delivers textual information in a structured form that makes it usable for financial market analysis. A range of useful packages is available for extracting and analyzing financial news and comments.
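
As a minimal illustration of structured output from text, the sketch below scores a couple of invented headlines with NLTK's general-purpose VADER sentiment analyzer. This is not a finance-specific model or a recommendation of any particular package; a production system would rely on specialized news feeds and domain-tuned language models.

```python
import nltk
from nltk.sentiment import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)   # lexicon required by the VADER analyzer

# Hypothetical headlines for illustration only.
headlines = [
    "Central bank signals faster rate hikes as inflation surprises to the upside",
    "Manufacturing surveys rebound strongly, easing recession fears",
]

sia = SentimentIntensityAnalyzer()
for text in headlines:
    scores = sia.polarity_scores(text)       # dict with 'neg', 'neu', 'pos', 'compound'
    print(f"{scores['compound']:+.2f}  {text}")
```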

Macro-quantamental indicators

Overall, statistical programming nowadays allows the construction of quantamental systems (view post here). A quantamental system combines customized, high-quality databases and statistical programming routines in order to systematically investigate relations between market returns and plausible predictors. The term “quantamental” refers to a joint quantitative and fundamental approach to investing.

Macro quantamental indicators record the market’s information state with respect to macroeconomic activity, balance sheets, and sentiment. Quantamental indicators are distinct from regular economic time series insofar as they represent information that was available at the time of reference. Consequently, indicator values are comparable to market price data and are well-suited for backtesting trading ideas and implementing algorithmic strategies.

Quantamental indicators increase the market’s macro information efficiency (and trading profits) for three simple reasons:

  • Quantamental indicators broaden the scope of easily backtestable and tradable strategy inputs. Currently, most systematic strategies focus on market data, such as prices and volumes. Quantamental indicators capture critical aspects of the economic environment, such as growth, inflation, profitability, or financial risks, directly and in a format that is similar to price data. Data in this format can be easily combined across macroeconomic concepts and with price data.
  • Readily available quantamental indicators reduce information costs through scale effects. A quantamental system spreads the investment in low-level data wrangling and the codification of fundamental domain know-how across many institutions. For individual managers, developing trading strategies that use fundamentals becomes much more economical. Access to the system removes expenses for data preparation and reduces development time. It also centralizes curation and common-sense oversight.
  • Finally, quantamental indicators reduce moral hazard in systematic strategy building. Typically, if the production of indicators requires much time and expense, there is a strong incentive to salvage failed strategy propositions through “flexible interpretation” and effective data mining.

The main source of macro quantamental information for institutional investors is the J.P. Morgan Macrosynergy Quantamental System (JPMaQS). It is a service that makes it easy to use quantitative-fundamental (“quantamental”) information for financial market trading. With JPMaQS, users can access a wide range of relevant macro quantamental data that are designed for algorithmic strategies, as well as for backtesting macro trading principles in general.

Quantamental indicators are principally based on a two-dimensional data set.

  • The first dimension is the timeline of real-time dates or information release dates. It marks the progression of the market’s information state.
  • The second dimension is the timeline of observation dates. It describes the history of an indicator for a specific information state.

For any given real-time date, a quantamental indicator is calculated based on the full information state: typically a time series that may itself be derived from other time series and estimates that were available at or before the real-time date. This information state-contingent time series is called a data vintage.

The two-dimensional structure of the data means that, unlike regular time series, quantamental indicators convey information on two types of changes: changes in reported values and reported changes in values. The time series of the quantamental indicator itself shows changes in reports arising from updates in the market’s information state. By contrast, quantamental indicators of changes are reported dynamics based on the latest information state alone.
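
This distinction can be illustrated with a small pandas sketch on an invented vintage matrix; the values and dates are purely hypothetical.

```python
import numpy as np
import pandas as pd

# Hypothetical GDP-growth vintages: rows = release dates (information states),
# columns = observation quarters, values = reported % growth.
vintages = pd.DataFrame(
    {"2023-Q1": [2.1, 2.3, 2.2], "2023-Q2": [np.nan, 1.8, 2.0]},
    index=pd.to_datetime(["2023-05-15", "2023-08-14", "2023-11-15"]),
)

# (1) Changes in reported values: revisions of a fixed observation period
# across release dates, i.e. movement along the real-time axis.
revisions = vintages.diff()                  # e.g. 2023-Q1 growth revised 2.1 -> 2.3

# (2) Reported changes in values: dynamics within a single information state,
# i.e. differences along the observation axis of the latest vintage only.
reported_changes = vintages.iloc[-1].diff()  # e.g. 2.2 (Q1) -> 2.0 (Q2) as seen today
```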

Macro indicators and statistical learning in general

Statistical learning refers to a set of tools or models that help extract insights from datasets, such as macro-quantamental indicators. Not only does statistical learning support the estimation of relations across variables (parameters), but it also governs the choice of models for such estimates (hyperparameters). Moreover, for macro trading, statistical learning has another major benefit: it allows realistic backtesting. Rather than choosing models and features arbitrarily and potentially with hindsight, statistical learning can simulate a rational rules-based choice of method in the past. Understanding statistical learning is critical in modern financial markets, even for non-quants (view post here). This is because statistical learning illustrates and replicates how investors’ experiences in markets shape their future behavior.

Within statistical learning pipelines, simple and familiar econometric models can be deployed to simulate point-in-time economic analysis.

  • Linear regression remains the most popular tool for supervised learning in financial markets. It is appropriate if there is a monotonic relation between today’s indicator value and tomorrow’s expected return that can be linearized. Statistical learning based on regression can optimize both model parameters and hyperparameters sequentially and produce signals based on whichever model has predicted returns best up to a point in time (view post here). In the macro trading space, mixed data sampling (MIDAS) regressions are a useful method for nowcasting economic trends and financial market variables, such as volatility (view post here). This type of regression allows combining time series of different frequencies and limits the number of parameters that need to be estimated.
  • Structural vector autoregression (SVAR) is a practical model class that captures the evolution of a set of linearly related observable time series variables, such as economic data or asset prices. SVAR assumes that all variables depend in fixed proportion on past values of the set and new structural shocks. The method is useful for macro trading strategies (view post here) because it helps identify specific, interpretable market and macro shocks (view post here). For example, SVAR can identify short-term policy, growth, or inflation expectation shocks. Once a shock is identified, it can be used for trading in two ways.
    • First, one can compare the type of shock implied by markets with the actual news flow and detect fundamental inconsistencies.
    • Second, different types of shocks may entail different types of subsequent asset price dynamics and, hence, form a basis for systematic strategies.
  • Another useful set of models tackles dimension reduction. This refers to methods that condense the bulk of the information contained in many macroeconomic time series into a small set of variables with the most important information for investors. In macroeconomics, there are many related data series that individually add only limited incremental information. Cramming all of them into a prediction model undermines estimation stability and transparency. There are three popular types of statistical dimension reduction methods (a minimal sketch of the first two follows this list).
    • The first type of dimension reduction selects a subset of “best” explanatory variables by means of regularization, i.e., shrinking coefficient values by penalizing coefficient magnitudes in the optimization function applied for statistical fit. Penalty functions that are linear in the absolute values of individual coefficients can set some of them exactly to zero. Classic methods of this type are Lasso and Elastic Net (view post here).
    • The second type selects a small set of latent background factors of all explanatory variables and then uses these background factors for prediction. This is the basic idea behind static and dynamic factor models. Factor models are the key technology behind nowcasting in financial markets, a modern approach to monitoring current economic conditions in real-time (view post here). While nowcasting has mostly been used to predict forthcoming data reports, particularly GDP, the underlying factor models can produce a lot more useful information for the investment process, including latent trends, indications of significant changes in such trends, and estimates of the changing importance of various predictor data series (view post here).
    • The third type generates a small set of functions of the original explanatory variables that historically would have retained their explanatory power and then deploys these for forecasting. This method is called Sufficient Dimension Reduction and is more suitable for non-linear relations (view post here).
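
The sketch below contrasts the first two approaches on simulated data with scikit-learn: Lasso-based selection versus PCA-based latent factor extraction. The data-generating process, the number of factors, and the simple factor regression are assumptions for illustration only.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LassoCV, LinearRegression
from sklearn.preprocessing import StandardScaler

# Simulated panel: 20 correlated macro series driven by two latent factors,
# and a return series driven mainly by the first factor.
rng = np.random.default_rng(0)
n, k = 500, 20
common = rng.normal(size=(n, 2))
X = common @ rng.normal(size=(2, k)) + 0.5 * rng.normal(size=(n, k))
y = common[:, 0] + 0.1 * rng.normal(size=n)
X_std = StandardScaler().fit_transform(X)

# Type 1: regularization-based selection -- Lasso shrinks many coefficients to zero.
lasso = LassoCV(cv=5).fit(X_std, y)
n_selected = int((lasso.coef_ != 0).sum())

# Type 2: latent factor extraction -- PCA condenses the 20 series into two
# factors that then enter a simple prediction model.
factors = PCA(n_components=2).fit_transform(X_std)
factor_model = LinearRegression().fit(factors, y)

print(f"Lasso retains {n_selected} of {k} series; "
      f"factor model in-sample R^2 = {factor_model.score(factors, y):.2f}")
```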

Dimension reduction methods not only help to condense information about predictors of trading strategies but also support portfolio construction. In particular, they are suited for detecting latent factors of a broad set of asset prices (view post here). These factors can be used to improve estimates of the covariance structure of these prices and – by extension – to improve the construction of a well-diversified minimum variance portfolio (view post here).
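
As a stylized illustration of this use case, the sketch below estimates a covariance matrix from the leading principal components of simulated asset returns and derives global minimum-variance weights. The simulated data and the choice of two retained factors are assumptions, not a prescription.

```python
import numpy as np

# Simulated daily returns for 15 assets driven by two latent market factors.
rng = np.random.default_rng(1)
T, N = 1000, 15
factors = rng.normal(size=(T, 2))
betas = rng.normal(size=(N, 2))
returns = factors @ betas.T + 0.3 * rng.normal(size=(T, N))

# Factor-based covariance estimate: keep the top principal components and
# treat the remainder as idiosyncratic (diagonal) noise.
X = returns - returns.mean(axis=0)
cov_sample = X.T @ X / (T - 1)
eigval, eigvec = np.linalg.eigh(cov_sample)          # eigenvalues in ascending order
k = 2                                                # number of retained factors
B = eigvec[:, -k:] * np.sqrt(eigval[-k:])            # implied factor loadings
resid_var = np.clip(np.diag(cov_sample) - (B ** 2).sum(axis=1), 1e-6, None)
cov_factor = B @ B.T + np.diag(resid_var)

# Global minimum-variance weights: w = inv(Sigma) 1 / (1' inv(Sigma) 1).
w = np.linalg.solve(cov_factor, np.ones(N))
w /= w.sum()
print(np.round(w, 3))
```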

“When data volume swells beyond a human’s ability to discern the patterns in it…we need a new form of intelligence.”

Mansour Raad

A practical approach to statistical learning and macro trading signals

Compared with other research fields, data on the relation between macroeconomic developments and modern financial market returns are scant. This reflects the limited history of modern derivatives markets and the rarity of critical macroeconomic events, such as business cycles, policy changes, or financial crises. Superficially, it seems that many data series and data points are available. However, occurrences of major shocks and trends are limited.

The scarcity of major economic events has two major consequences for the application of statistical learning to macro trading strategies:

  • Statistical learning must typically use data panels, i.e., draw on the experience of multiple and diverse macroeconomies over time. Using such two-dimensional datasets calls for special methods of cross-validation and hyperparameter optimization.
  • Statistical learning for macro trading signals has to accept a steeper (more unfavorable) bias-variance trade-off than other areas of quantitative research. This means that, as one shifts from restrictive to flexible models, the benefits of reduced bias (less misspecification) typically come at a high price of increased variance (greater dependence of models on the specific data set). This reflects the scarcity and episodic nature of critical macro events and regimes.

Statistical learning with reasonable and logical priors for model choice can support trading signal generation through sequential optimization based on panel cross-validation, serving trading signal selection, return prediction, and market regime classification (view post here). This approach can broadly be summarized in six steps (a stylized code sketch follows the list below):

  1. Specify suitable data frames of features and targets at the appropriate frequency, typically weekly or monthly. In the features data frame, the columns are indicator categories, and the rows are double indices of currency areas and time periods. The targets are a double-indexed series of target returns, aligned (lagged) so that features precede the returns they are meant to predict.
  2. Define model and hyperparameter grids. These mark the eligible set of model options over which the statistical learning process optimizes based on cross-validation. It is at this stage that one must apply theoretical priors and restrictions to prevent models from hugging the data of specific economic episodes.
  3. Choose optimization criteria for the cross-validation of models. The quality of a signal-generating model depends on its ability to predict future target returns and to generate material economic value when applied to positioning. Statistical metrics of these two properties are related but not identical. The choice depends on the characteristics of the trading signal and the objective of the strategy (view post here). In addition to various financial metrics, common machine learning metrics can be employed for model and hyperparameter selection. These include RMSE, MAPE, MAE, balanced accuracy, AUC-PR and F1 score. A special concept is the discriminant ratio (‘D-ratio’), which measures an algorithm’s success in improving risk-adjusted returns versus a related buy-and-hold portfolio (view post here).
  4. Specify the cross-validation splitter for the panel. Cross-validation assesses the predictive quality of a model based on multiple splits of the data into training and test sets, where each pair is called a “fold”. Cross-validation splitters for panel data must maintain the logical cohesion of the training and test sets based on the double index of cross-sections and time periods, ensuring that all sets are sub-panels over common time spans and respecting missing or blacklisted time periods for individual cross-sections (view customized Python classes here).
  5. Perform sequential cross-validation and optimization of models and derive signals based on concurrent optimal model versions. This process ensures that backtests based on past signals are not contaminated by hindsight regarding the choice of models, features, and hyperparameters.
  6. Finally, evaluate the sequentially optimized signals in terms of predictive power, accuracy, and naïve PnL generation. For example, view a post here on the application of regression-based trading factors to FX trading.
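
The sketch below illustrates steps 1, 2, and 4 with generic scikit-learn tools on simulated data: a double-indexed feature panel, a small ridge-regression hyperparameter grid, and a hypothetical expanding-window panel splitter used in grid-search cross-validation. It is not JPMaQS or Macrosynergy code; the splitter, data, and grids are assumptions for illustration.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import Ridge
from sklearn.model_selection import GridSearchCV

class ExpandingPanelSplit:
    """Hypothetical expanding-window splitter for a panel indexed by
    (cross_section, period): each fold trains on all cross-sections up to a
    cutoff period and tests on the next block of periods, so that training
    and test sets remain coherent sub-panels."""
    def __init__(self, n_splits=4):
        self.n_splits = n_splits
    def get_n_splits(self, X=None, y=None, groups=None):
        return self.n_splits
    def split(self, X, y=None, groups=None):
        periods = X.index.get_level_values(1)
        blocks = np.array_split(np.sort(periods.unique()), self.n_splits + 1)
        for i in range(1, self.n_splits + 1):
            train = np.where(periods.isin(np.concatenate(blocks[:i])))[0]
            test = np.where(periods.isin(blocks[i]))[0]
            yield train, test

# Hypothetical panel of monthly features and target returns for three currency areas.
rng = np.random.default_rng(2)
idx = pd.MultiIndex.from_product(
    [["AUD", "CAD", "SEK"], pd.date_range("2010-01-01", periods=120, freq="MS")],
    names=["cid", "real_date"],
)
X = pd.DataFrame(rng.normal(size=(len(idx), 3)), index=idx,
                 columns=["growth", "inflation", "carry"])
y = pd.Series(0.1 * X["carry"].values + rng.normal(scale=0.5, size=len(idx)), index=idx)

# Restricted model grid, panel splitter, cross-validated selection, and signals.
grid = GridSearchCV(Ridge(), param_grid={"alpha": [0.1, 1.0, 10.0]},
                    cv=ExpandingPanelSplit(n_splits=4),
                    scoring="neg_mean_squared_error")
grid.fit(X, y)
signals = pd.Series(grid.predict(X), index=idx)
# In practice, the whole selection would be repeated sequentially so that each
# signal only reflects models and parameters available at its own point in time.
```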

Machine learning and macro-based trading: a broader perspective

Machine learning encompasses statistical learning methods but partly automates the construction of forecast models through the study of data patterns, the selection of the best functional form for a given level of complexity, and the selection of the best level of complexity for out-of-sample forecasting. Machine learning can add efficiency to classical asset pricing models, such as factor models and macro trading rules, mainly because it is flexible, adaptable, and generalizes knowledge well (view post here). Machine learning is conventionally divided into three main fields: supervised learning, unsupervised learning, and reinforcement learning.

  • In supervised learning, one distinguishes input and output variables and uses an algorithm to learn which function maps the former to the latter. This covers most statistical learning applications in financial markets. An example is the assessment of whether the change in interest rate differential between two countries can predict the dynamics of their exchange rate. Supervised learning can be divided into regression, where the output variable is a real number, and classification, where the output variable is a category, such as “policy easing” or “policy tightening” for central bank decisions.
  • Unsupervised learning only knows input data. Its goal is to model the underlying structure or distribution of the data in order to learn previously unknown patterns. Applications of unsupervised machine learning techniques include clustering (partitioning the data set according to similarity), anomaly detection, association mining, and dimension reduction. More specifically, unsupervised learning methods have been proposed to classify market regimes, i.e., persistent clusters of market conditions that affect the success of trading factors and strategies (view post here), for example by using the similarity of return correlation matrices across different asset classes (view post here); a stylized sketch of regime clustering follows this list. An advanced unsupervised method is the autoencoder, a type of neural network whose primary purpose is to learn an informative, lower-dimensional (latent) representation of the data.
  • Reinforcement learning is a specialized application of machine learning that interacts with the environment and seeks to improve the way it performs a task so as to maximize its reward (view post here). The computer employs trial and error. The model designer defines the reward but gives no clues on how to solve the problem. Reinforcement learning holds potential for trading systems because markets are highly complex and quickly changing dynamic systems. Conventional forecasting models have been notoriously inadequate. A self-adaptive approach that can learn quickly from the outcome of actions may be more suitable. Reinforcement learning can benefit trading strategies directly by supporting trading rules and indirectly by supporting the estimation of trading-related indicators, such as real-time growth (view post here).
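
Below is a minimal sketch of unsupervised regime classification on simulated returns: rolling volatilities and pairwise correlations serve as features, and k-means partitions the sample into three hypothetical regimes. The window length, step size, number of regimes, and data are illustrative assumptions rather than the method of the referenced studies.

```python
import numpy as np
import pandas as pd
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Simulated daily returns for four generic markets.
rng = np.random.default_rng(3)
dates = pd.bdate_range("2015-01-01", periods=1500)
returns = pd.DataFrame(rng.normal(scale=0.01, size=(len(dates), 4)),
                       index=dates, columns=["equity", "rates", "fx", "credit"])

# Features per rolling window: realized volatilities and pairwise correlations.
window, step = 60, 20
features = []
for end in range(window, len(returns), step):
    chunk = returns.iloc[end - window:end]
    corrs = chunk.corr().values[np.triu_indices(4, k=1)]   # six pairwise correlations
    vols = chunk.std().values
    features.append(np.concatenate([vols, corrs]))

# Unsupervised regime classification: cluster the windows into three regimes.
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(
    StandardScaler().fit_transform(np.array(features)))
print(pd.Series(labels).value_counts())
```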

Artificial neural networks have become increasingly practical for (supervised and unsupervised) macro trading research. Neural networks are adaptive machine learning methods that use interconnected layers of neurons. Any given layer of n neurons represents n learned features. These are passed through a linear map, followed by a one-to-one nonlinear activation function, to form k neurons in the next layer, representing a collection of k transformed features. Learning corresponds to finding an optimal collection of weights and biases based on training data.
Recurrent neural networks are a class of neural networks designed to model sequence data such as time series. Specialized recurrent neural networks have been developed to retain longer memory, particularly LSTM (Long Short-Term Memory) and GRU (Gated Recurrent Unit). The advantage of neural networks is their flexibility in including complex interactions of features, non-linear effects, and various types of non-price information.

Neural networks for financial market trading can be implemented in Python with TensorFlow or PyTorch. For example, neural networks can, in principle, be used to estimate the state of the market on a daily or higher frequency based on an appropriate feature space, i.e., data series that characterize the market (view post here). Also, they have gained prominence for predicting the realized volatility of asset prices (view post here). Beyond that, neural networks can be used to detect lagged correlations between different asset prices (view post here) or market price distortions (view post here).
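
As a toy example of such applications, the PyTorch sketch below trains a small LSTM that maps a window of daily returns to a one-step-ahead volatility proxy. The architecture, window length, and synthetic data are arbitrary assumptions and do not reproduce the referenced studies.

```python
import torch
import torch.nn as nn

class VolLSTM(nn.Module):
    """Toy LSTM that maps a sequence of daily returns to a one-step-ahead
    realized-volatility proxy (illustrative architecture only)."""
    def __init__(self, hidden_size=16):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden_size, batch_first=True)
        self.head = nn.Linear(hidden_size, 1)
    def forward(self, x):                    # x: (batch, seq_len, 1)
        out, _ = self.lstm(x)
        return self.head(out[:, -1, :])      # use the last hidden state

# Synthetic training data: 22-day return windows and the next day's absolute return.
torch.manual_seed(0)
returns = 0.01 * torch.randn(256, 23)
x, y = returns[:, :22].unsqueeze(-1), returns[:, 22:].abs()

model = VolLSTM()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()
for epoch in range(50):
    opt.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    opt.step()
```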

A word on backtesting

Backtesting refers to calculations of theoretical profits and losses that would have arisen from applying an algorithmic trading strategy in the past. Its function is to gauge the likely future quality of a trading strategy. Statistical programming has made backtesting easy. However, this computational power and convenience can also be corrosive to the investment process, because models tend to hug temporary patterns while data samples for cross-validation are limited. Moreover, the business of algorithmic trading strategies, unfortunately, provides strong incentives for overfitting models and embellishing backtests (view post here). Similarly, academic researchers in the field of trading factors often feel compelled to resort to data mining in order to produce publishable ‘significant’ empirical findings (view post here).

Good backtests require sound principles and integrity (view post here). Sound principles should include [1] formulating a logical economic theory upfront, [2] choosing sample data upfront, [3] keeping the model simple and intuitive, and [4] limiting tryouts when testing ideas. Realistic performance expectations of trading strategies should be based on a range of plausible versions of a strategy, not an optimized one. Bayesian inference works well for that approach, as it estimates both the performance parameters and their uncertainty. The most important principle of all is integrity: aiming to produce good research rather than good backtests and to communicate statistical findings honestly rather than selling them.

One of the greatest ills of classical market prediction models is exaggerated performance metrics that arise from choosing the model structure with hindsight. Even if backtests estimate model parameters sequentially and apply them strictly out of sample, the choice of hyperparameters is often made with full knowledge of the history of markets and economies. For example, the type of estimation, the functional form, and – most importantly – the set of considered features are often chosen with hindsight. This hindsight bias can be reduced by sequential hyperparameter tuning or ensemble methods.

  • A data-driven process for tuning hyperparameters can partly endogenize model choice. In its simplest form, it involves three steps: model training, model validation, and method testing. This process [1] optimizes the parameters of a range of plausible candidate models (hyperparameters) based on a training data set, [2] chooses the best model according to some numerical criterion (such as accuracy or the coefficient of determination) based on a separate validation data set, and [3] evaluates the success of the learning method, i.e., the combination of parameter estimation and model selection, by its ability to predict the targets of a further, unrelated test set.
  • An alternative is ensemble learning. Rather than choosing a single model, ensemble methods combine the decisions of multiple models to improve prediction performance. This combination is governed by a “meta-model.” For macro trading, this means that the influence of base models is endogenized and data-dependent, and, hence, the overall learning method can be simulated based on the data alone, reducing the hindsight bias from model choice.
    Ensemble learning is particularly useful with flexible models, whose estimates vary greatly with the training set, because it mitigates these models’ tendency to memorize noise. There are two types of ensemble learning methods (a stylized sketch of both follows this list):
    • Heterogeneous ensemble learning methods train different types of models on the same data set. First, each model makes its prediction. Then, a meta-model aggregates the predictions of the individual models. Preferably, the different models should have different “skills” or strengths. Examples of this approach include the voting classifier, averaging ensembles, and stacking.
    • Homogeneous ensemble learning methods use the same model type but train it on different data samples. The methods include bootstrap aggregation (bagging), random forests, and popular boosting methods (Adaboost and gradient boosting). Homogeneous ensemble methods have been shown to produce predictive power for credit spread forecasts (view post here), switches between risk parity strategies (paper here), stock returns (paper here), and equity reward-risk timing (view post here).
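
The sketch below contrasts the two flavors on simulated data with scikit-learn: a heterogeneous stacking ensemble that combines a linear model and a random forest through a meta-model, and homogeneous bagging and boosting ensembles of decision trees. Models, data, and scoring choices are illustrative assumptions.

```python
import numpy as np
from sklearn.ensemble import (GradientBoostingRegressor, RandomForestRegressor,
                              StackingRegressor)
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

# Hypothetical features (macro indicators) and targets (next-period returns).
rng = np.random.default_rng(4)
X = rng.normal(size=(400, 5))
y = X[:, 0] - 0.5 * X[:, 1] ** 2 + rng.normal(scale=0.5, size=400)

# Heterogeneous ensemble: different model types combined by a meta-model.
stack = StackingRegressor(
    estimators=[("ridge", Ridge()),
                ("forest", RandomForestRegressor(n_estimators=100, random_state=0))],
    final_estimator=Ridge(),
)

# Homogeneous ensembles: the same base learner trained on different subsamples
# (bagging) or sequentially re-weighted data (boosting).
bagged_trees = RandomForestRegressor(n_estimators=200, random_state=0)
boosted_trees = GradientBoostingRegressor(random_state=0)

for name, model in [("stacking", stack), ("random forest", bagged_trees),
                    ("gradient boosting", boosted_trees)]:
    score = cross_val_score(model, X, y, cv=5, scoring="r2").mean()
    print(f"{name}: mean cross-validated R^2 = {score:.2f}")
```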

Integrating transaction costs into the development process of algorithmic trading strategies can be highly beneficial. One can use a “portfolio machine learning method” to that end (view post here).


“If you torture the data long enough, it will confess to anything.”

Ronald Coase