Challenges in Economic Forecasting

Published on May 23, 2024

Economic forecasting is vital for businesses and policymakers. It informs investment decisions, resource allocation, and economic interventions. Yet it remains a complex endeavor: despite advanced models and big data, analysts often struggle to predict the future accurately. Forecasting involves projecting future economic conditions from historical data, trends, and various technical indicators, and there is always an underlying element of uncertainty. As Peter L. Bernstein, the founding editor of The Journal of Portfolio Management, wrote, “the fundamental law of investing is the uncertainty of the future”.

This article discusses the challenges of developing capital market forecasts. Setting the wrong capital market expectations can lead to poorly constructed portfolios. Keeping the same investment structure through evolving markets leads to suboptimal performance, while changing portfolio composition too frequently may incur substantial fees and costs with no additional benefit. Striking this balance requires a deep understanding of asset classes, and it is not without challenges in several respects.

Economic Data Reliability

Economic data is collected from various sources, including government agencies, surveys, and private corporations. Surveys and samples are used to estimate broader economic trends, but if a sample is biased toward certain industries or demographics, it can distort the picture of overall economic health. Data also often exhibit seasonal patterns (e.g. agricultural cycles, or holiday spending in the retail sector). These fluctuations can obscure underlying trends and introduce further errors if the underlying patterns change unexpectedly.
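
As a minimal illustration (not from the original analysis), the Python sketch below applies statsmodels’ seasonal_decompose to a synthetic monthly retail-sales series to separate the trend from a holiday-spending pattern; the data, dates, and series name are hypothetical.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.seasonal import seasonal_decompose

# Hypothetical monthly retail sales: a gentle upward trend plus a December spike.
idx = pd.date_range("2015-01-31", periods=96, freq="M")
trend = np.linspace(100, 160, len(idx))
holiday_bump = 20 * (idx.month == 12)
noise = np.random.default_rng(0).normal(0, 3, len(idx))
sales = pd.Series(trend + holiday_bump + noise, index=idx, name="sales")

# Classical decomposition into trend, seasonal, and residual components.
result = seasonal_decompose(sales, model="additive", period=12)

# Removing the estimated seasonal component makes the underlying trend visible;
# if the seasonal pattern itself shifts, this adjustment becomes unreliable.
adjusted = sales - result.seasonal
print(adjusted.tail(13))
```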

Economic indicators are often released with a time lag. For example, quarterly financial statements are published several weeks after the quarter ends, and audited annual reports are published months after the year ends. High-level macroeconomic indicators such as employment reports typically appear weeks after the reference period, and the International Monetary Fund sometimes reports data for developing economies with a lag of two years. Forecasters and analysts therefore have to work with preliminary data while anticipating revisions. Sometimes these revisions are substantial enough to support entirely different inferences.
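
To make the revision problem concrete, here is a small sketch in pandas using hypothetical quarterly GDP growth vintages; the figures are illustrative only.

```python
import pandas as pd

# Hypothetical data vintages for quarterly GDP growth (percent, annualized).
# The advance estimate arrives weeks after the quarter ends and is later revised.
vintages = pd.DataFrame(
    {"advance": [2.4, 1.1, -0.3], "revised": [1.8, 1.6, 0.4]},
    index=pd.PeriodIndex(["2023Q1", "2023Q2", "2023Q3"], freq="Q"),
)
vintages["revision"] = vintages["revised"] - vintages["advance"]
print(vintages)  # a large enough revision can even flip the sign of reported growth
```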

Economic data can also be subject to measurement errors, which occur during data gathering, estimation, or aggregation. For example, misclassifying capital flows can distort economic indicators. When analyzing time series data, if forecasters only consider entities that survived until the end of the period, the resulting statistics may be affected by survivorship bias. This bias arises because entities that did not survive are excluded, which can lead to incorrect conclusions.
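
The sketch below (hypothetical fund returns, illustrative numbers only) shows how averaging only the funds that survive a sample period overstates performance relative to using every observation.

```python
import pandas as pd

# Hypothetical annual returns for five funds; two are liquidated mid-sample.
returns = pd.DataFrame({
    "fund_a": [0.08, 0.06, 0.07],
    "fund_b": [0.05, 0.04, 0.06],
    "fund_c": [-0.12, None, None],   # liquidated after year 1
    "fund_d": [0.09, 0.07, 0.08],
    "fund_e": [-0.05, -0.15, None],  # liquidated after year 2
})

# Survivor-only average: drops any fund with missing (post-liquidation) returns.
survivors_mean = returns.dropna(axis="columns").stack().mean()

# Full-sample average: includes every observed return, survivor or not.
all_mean = returns.stack().mean()

print(f"Survivors only: {survivors_mean:.2%}")
print(f"All funds:      {all_mean:.2%}")
```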

The past and the future. Economic forecasting relies on past data to identify trends and patterns. However, using historical estimates as the sole foundation for a forecast introduces several limitations.

Nonstationarity. Economies are dynamic systems that constantly adapt to technological advancements (e.g. artificial intelligence in industries), demographic shifts (e.g. aging populations), and changing consumer behavior (e.g. the move from traditional shopping to e-commerce). Historical data is only a snapshot of a past economic state; if it doesn’t reflect ongoing changes, economic models may miss critical turning points.

Unique events. Historical data is based on past events, but unforeseen events such as pandemics, natural disasters, or major geopolitical changes can disrupt historical trends in unpredictable ways. Forecasts relying solely on historical estimates may fail to account for these black-swan events, leading to inaccurate predictions of the future economic state.

Extrapolation limitations. A core function of economic forecasting is to extrapolate historical trends into the future. This approach assumes that past patterns will continue, which may not always be the case. Economic behavior can change abruptly, rendering historical trends unreliable predictors of the future. For example, a high ex post (after-the-fact) return can be a poor estimate of the ex ante (before-the-fact) expected return. Similarly, risk measures such as value at risk computed from past data can be biased measures of ex ante risk.
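
As a small illustration of how ex post risk estimates depend on the sample they are drawn from, the following sketch computes historical 95% value at risk from two simulated return samples; the distributions and parameters are assumptions chosen purely for the example.

```python
import numpy as np

# Hypothetical daily returns: historical (ex post) 95% VaR is just the 5th
# percentile of observed returns, so it inherits whatever regime the sample
# happens to cover and can mislead about ex ante risk.
rng = np.random.default_rng(42)
calm_period = rng.normal(0.0005, 0.008, 500)      # low-volatility sample
stressed_period = rng.normal(-0.001, 0.025, 500)  # high-volatility sample

var_calm = -np.percentile(calm_period, 5)
var_stressed = -np.percentile(stressed_period, 5)
print(f"95% one-day VaR, calm sample:     {var_calm:.2%}")
print(f"95% one-day VaR, stressed sample: {var_stressed:.2%}")
```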

Analysts generally benefit from using the longest data history available, provided the data can be treated as stationary: sample statistics derived from a longer history tend to be more precise than those based on fewer observations. Although it might be tempting to use higher-frequency data (weekly instead of monthly, for example), this doesn’t guarantee better estimates across the board. Higher-frequency data can improve the precision of sample variances, covariances, and correlations, but it doesn’t improve the precision of the mean estimate, which depends on the length of the period covered rather than the number of observations within it.
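
A quick simulation (a sketch under assumed return parameters, not a result from the article) illustrates the point: over a fixed ten-year span, sampling weekly instead of monthly barely changes the precision of the annualized mean estimate.

```python
import numpy as np

# Assumed annual mean and volatility for a hypothetical asset.
rng = np.random.default_rng(0)
annual_mu, annual_sigma, years, trials = 0.06, 0.18, 10, 2000

def annualized_mean_estimates(periods_per_year):
    """Simulate many 10-year samples and return their annualized sample means."""
    n = years * periods_per_year
    mu = annual_mu / periods_per_year
    sigma = annual_sigma / np.sqrt(periods_per_year)
    sims = rng.normal(mu, sigma, size=(trials, n))
    return sims.mean(axis=1) * periods_per_year

monthly = annualized_mean_estimates(12)
weekly = annualized_mean_estimates(52)

# Both dispersions are close to annual_sigma / sqrt(years); the extra
# observations from weekly sampling add little precision to the mean.
print("Std. dev. of annualized mean, monthly:", monthly.std())
print("Std. dev. of annualized mean, weekly: ", weekly.std())
```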

In the context of big data, where many variables are involved, having a substantial number of observations becomes essential. For instance, calculating a non-singular sample covariance matrix requires more observations than the number of variables (e.g. assets), a constraint that often binds in investment analysis at the national or global scale. It’s also important to consider whether the data follow a normal distribution. Historical asset returns often display skewness and heavy tails, so they deviate from normality, and addressing this non-normality can be analytically complex and costly.
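
The following sketch (the asset count and sample size are arbitrary choices for illustration) shows the covariance-matrix constraint: with fewer observations than assets, the sample covariance matrix is rank-deficient and cannot be inverted for, say, portfolio optimization.

```python
import numpy as np

# Hypothetical returns: 100 assets but only 60 observations.
rng = np.random.default_rng(1)
n_assets, n_obs = 100, 60
returns = rng.normal(0.0, 0.02, size=(n_obs, n_assets))

# Sample covariance matrix (100 x 100); its rank is at most n_obs - 1,
# so it is singular whenever observations do not exceed the number of assets.
cov = np.cov(returns, rowvar=False)
print("Rank:", np.linalg.matrix_rank(cov), "of", n_assets)
```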

Model Uncertainty

Economic models are similar in essence: they represent a simplified version of reality, capturing the interplay between various factors. The challenge lies in selecting which factors to include and how to represent them mathematically. The limitations inherent in any such simplification contribute to model uncertainty.

The sheer number of economic variables and their potential relationships creates a dilemma. Choosing which variables to include and which to leave out can introduce selection bias, and this bias may ultimately lead to uncertainty and inaccurate forecasts.

Economic models rely on mathematical equations to represent complex economic relationships. These equations are, by necessity, simplifications, while the real world is messy and punctuated by disruptive events. Modeling simplifications such as normality or linearity add an element of uncertainty to any forecast. In practice, alongside simple models there are complex ones that may become opaque black boxes. It’s conceptually challenging to understand how such models arrive at their predictions, and even more difficult to assess their reliability or trace the source of potential errors.

When model assumptions turn out to be flawed, the result is often severe uncertainty and disastrous consequences. While parameter uncertainty and input uncertainty can be mitigated with statistical techniques (e.g. accounting for estimation error and for unobserved random variables), flawed or incomplete model assumptions can undermine the entire forecast.

One example comes from the financial crisis: many economic models leading up to 2008 assumed continued stability in the housing market and the financial system. These models assumed that housing price declines would remain geographically isolated, and they failed to account for the risks embedded in complex financial instruments such as mortgage-backed securities, created by institutions that had very little incentive to vet loan quality.

Another example comes from the late 1990s, when many models raised long-term equity return expectations and reallocated more capital into equities on the strength of emerging internet-based business sectors, which in turn drove further equity price appreciation. Those forecasts misread the technological disruption and overestimated the internet’s potential impact on established industries, falling into an infinite-growth fallacy. The bubble burst in 2000, and the NASDAQ, heavily weighted toward technology stocks, plummeted. The episode is a reminder of the dangers of basing forecasts on assumptions misaligned with economic reality.

Economists, nevertheless, are working to reduce uncertainty and improve model accuracy through several approaches. One is model ensembling: instead of relying on a single model, analysts run multiple models with different assumptions and combine their forecasts, which mitigates individual model biases and adds robustness. Another is Bayesian statistics, which allows economists to incorporate prior knowledge into a model; factoring in economic knowledge improves a model’s adaptability when uncertainty is high. A further approach is stress testing: economists run simulations to explore how models behave under various exogenous shocks and unforeseen events, providing valuable insight into potential outcomes and model limitations.
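
As a closing sketch (model names and numbers are purely illustrative, not from any real study), the snippet below combines several hypothetical forecasts with equal weights and applies a uniform shock as a crude stress test.

```python
import numpy as np

# Hypothetical GDP growth forecasts (percent) from three illustrative models.
forecasts = {
    "structural_model": 2.1,
    "time_series_model": 1.6,
    "survey_based_model": 1.9,
}

# Simple equal-weight ensemble: average the individual forecasts.
ensemble = np.mean(list(forecasts.values()))
print(f"Ensemble forecast: {ensemble:.2f}%")

# Crude stress test: apply a uniform negative shock to every model's output
# and observe how the combined forecast responds.
shock = -0.8
stressed = {name: value + shock for name, value in forecasts.items()}
print(f"Stressed ensemble: {np.mean(list(stressed.values())):.2f}%")
```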