S&OP, IBP, and S&OE are NOT the same. This infographic compares S&OP, IBP (integrated business planning), and S&OE (sales and operations execution):

Key Focus
↳ S&OP: volume balancing across functions
↳ IBP: strategic alignment and financial integration
↳ S&OE: short-term execution and issue resolution

Planning Inputs
↳ S&OP: forecasts + capacity + inventory + lead times + promotions + historical sales
↳ IBP: strategic plan + commercial plan + demand plan + supply plan + inventory plan + financial plan + scenario planning
↳ S&OE: confirmed orders + actual production + delivery schedules + real-time disruptions

Planning Outputs
↳ S&OP: demand plan + supply plan + inventory plan
↳ IBP: aligned financial plans + operational plans + strategy execution
↳ S&OE: updated production schedule + fulfillment plan + logistics plans

Challenges
↳ S&OP: functional silos, inconsistent data, lack of ownership
↳ IBP: complex alignment of financial and operational goals
↳ S&OE: firefighting, poor visibility, lack of short-term capacity flexibility

Financial Integration
↳ S&OP: limited to top-line revenue and cost of goods sold (COGS)
↳ IBP: fully integrated with P&L, cash flow, and balance sheet
↳ S&OE: not typically integrated; advanced setups provide cash flow visibility

Scenario Planning
↳ S&OP: moderate; volume-based what-ifs
↳ IBP: high; financial, strategic, market-driven scenarios
↳ S&OE: low; focused on immediate adjustments

KPIs
↳ S&OP: forecast accuracy, bias, inventory turns, service level, OTIF
↳ IBP: margin, revenue, working capital, EBITDA, EBIT
↳ S&OE: OTIF, order backlog, service level, schedule adherence, production attainment

Any others to add?
Economic Forecasting Methods
Explore top LinkedIn content from expert professionals.
-
Deterministic vs “stochastic” forecasting

Talking #supplychainmanagement

I am sometimes stunned by the confusion surrounding the use of “point” (or deterministic) forecasts versus “probabilistic” (or stochastic) forecasting. Let me use the context of inventory planning in a setting with long lead times (say several months). Uncertainty can be traced to the manufacture of the product, shipping delays, and the weekly demand for a product, which can vary with the randomness of customer choice, seasonal and holiday variations, competitor behavior, and corporate decisions (pricing, marketing). Despite all these uncertainties, industry continues to equate “forecast” with “point forecast”, where forecast errors are given by the difference between the actual demand and the (point) forecast.

There are two ways to meet service requirements such as meeting 95 percent of demand:
1) Inventory decisions are made using a stochastic (probabilistic) lookahead model, the heart of which is a probabilistic (or stochastic) forecast. This is the most familiar approach in inventory textbooks (e.g. using the 95th percentile of the lead time demand).
2) We can use a parameterized policy (possibly based on a point forecast) that is tuned using a stochastic simulator which captures all the forms of uncertainty.

There are two ways of performing a stochastic simulation:
1) Develop a computer program to simulate the system (often called a “digital twin”).
2) Evaluate the policy in the real world.

The use of a simulator to tune (optimize) a policy seems to be completely overlooked in the standard textbooks on inventory problems. It is easy to forget that *any* policy will eventually be tested in the field, which is a form of simulation that is more realistic, but very slow.
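For concreteness, here is a minimal Python sketch of both approaches under toy assumptions (i.i.d. normal weekly demand, a fixed 12-week lead time, and made-up parameter values; nothing here comes from the post itself):

```python
# Minimal sketch of the two approaches described above, under toy
# assumptions: i.i.d. normal weekly demand and a fixed lead time.
# All numbers and names are illustrative, not from the post.
import numpy as np

rng = np.random.default_rng(42)
WEEKLY_MEAN, WEEKLY_SD = 100.0, 30.0   # assumed demand parameters
LEAD_TIME_WEEKS = 12

# Approach 1: probabilistic forecast of lead-time demand.
# Simulate many lead-time demand realizations and take the 95th
# percentile as the order-up-to level (the textbook quantile rule).
lead_time_demand = rng.normal(
    WEEKLY_MEAN, WEEKLY_SD, size=(100_000, LEAD_TIME_WEEKS)
).clip(min=0).sum(axis=1)
order_up_to = np.percentile(lead_time_demand, 95)
print(f"Order-up-to level from the 95th percentile: {order_up_to:,.0f}")

# Approach 2: a parameterized policy tuned with a stochastic simulator.
# The policy is "order up to (point forecast of lead-time demand) * theta";
# we sweep theta, inspect the simulated fill rate, and would pick the
# smallest theta that meets the service target.
def simulate_fill_rate(theta: float, n_runs: int = 2_000) -> float:
    base_stock = WEEKLY_MEAN * LEAD_TIME_WEEKS * theta
    demand = rng.normal(
        WEEKLY_MEAN, WEEKLY_SD, size=(n_runs, LEAD_TIME_WEEKS)
    ).clip(min=0).sum(axis=1)
    filled = np.minimum(demand, base_stock)
    return filled.sum() / demand.sum()

for theta in np.arange(0.90, 1.11, 0.05):
    print(f"theta={theta:.2f}  fill rate={simulate_fill_rate(theta):.3f}")
```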
-
AI is Turning Agricultural Price Forecasting into a Foresight Engine!!

The last two years have seen a quantum leap in how AI tackles agricultural price forecasting. We’ve moved beyond static predictions to always-on, adaptive systems powered by:
🔹 Large Language Models (LLMs) that scan market reports, trade bulletins, and farmer discussions to extract real-time economic signals.
🔹 Multimodal AI merging time-series data, satellite imagery, text, and even audio for richer context.
🔹 Attention mechanisms that focus models on the most relevant features for precise forecasting.
🔹 Explainable AI (XAI) ensuring transparency, trust, and regulatory compliance.

The result? Price forecasting is evolving from a black-box tool into a transparent decision-support system. Soon, we’ll see:
- Self-learning models that adapt continuously.
- Climate-aware forecasts that integrate seasonal and local agronomic data.
- Blockchain-verified datasets to prevent market manipulation.
- Predictive supply chain simulations to optimize logistics before disruptions hit.

Imagine: delayed sowing detected by satellite, dry-spell warnings from weather models, reduced exports flagged in trade data — all synthesized by AI into a risk alert weeks in advance, advising farmers, traders, and policymakers on proactive interventions.

This shift isn’t just about “What will prices be?” — it’s about shaping markets, stabilizing incomes, and securing food systems. Those who embed AI into their decision-making DNA today will be the ones shaping tomorrow’s agricultural landscape.

📖 Read the full article titled "Artificial Intelligence for Agricultural Price Prediction: The Next Frontier in Market Intelligence" to explore how this transformation is unfolding — and how you can be part of it.
-
I’ve made mistakes in demand planning—mistakes that cost time, resources, and accuracy. If I could go back, here’s what I would tell my younger self:

↳ Don’t treat forecasting as a numbers game.
Early in my career, I was obsessed with statistical models. I thought if I fine-tuned the algorithm enough, I’d get the perfect forecast. But I missed one thing—forecasting is as much about people and market insights as it is about data. Now, I ensure that numbers tell a story, not just a trend.

↳ Ignoring real-world disruptions is a dangerous game.
Once, I confidently predicted demand for a product based on past trends. But then a sudden regulatory change disrupted the entire market. My forecast? Completely irrelevant. Since then, I’ve learned that supply chain issues, economic shifts, and geopolitical events can break even the best predictions.

↳ Your best data source isn’t always in the system.
I once dismissed a sales team’s warning about a shifting customer preference, thinking, "If the data doesn’t show it, it’s not real." I was wrong. The sales team had firsthand insights from customers—insights that never made it into my spreadsheets. Now, I make sure to tap into sales, marketing, and customer service teams before finalizing forecasts.

↳ Gut feelings can be useful—but only if measured.
I’ve seen leaders make brilliant intuitive calls—and also some that backfired spectacularly. The real lesson? Don’t discard intuition, but validate it. I now use Forecast Value Added (FVA) analysis to measure whether a human override improves or worsens accuracy. Data and experience must go hand in hand.

Demand planning is a mix of art and science. What’s one forecasting lesson you learned the hard way? Let’s discuss.
-
I’m currently doing a literature review for one of my papers on intermittent demand forecasting with machine learning, and I’ve noticed a recurring fundamental mistake in several recently published papers, even in respectable peer-reviewed journals.

The mistake? Using error measures based on the Mean Absolute Error (MAE). This is a crime against humanity when working with intermittent demand. I’ve explained this issue multiple times before (https://lnkd.in/e9D99Ph6, https://lnkd.in/ePdSn4XX, and https://lnkd.in/eibAft56), but it appears that this idea needs to be repeated over and over again. Let me explain.

MAE is minimised by the median. In the case of intermittent demand, the median can often be zero. If you use MAE (or scaled measures like MASE or sMAE) to evaluate forecasts and compare, for example, Croston, TSB, ETS, and an Artificial Neural Network (ANN), you may find the ANN outperforming the others. However, this could simply mean that the ANN produces forecasts closer to zero than the alternatives. This is not what you want for intermittent demand! The goal is to capture the structure correctly and produce conditional mean forecasts (typically). Instead, by relying on MAE, you might conclude: "We won’t sell anything in the next two weeks", implying that there’s no need to stock products. This is patently wrong and unhelpful.

Attached to this post is a figure showing three forecasts for an intermittent demand series:
- The blue line represents the mean of the data;
- The green line is a forecast from an Artificial Neural Network;
- The red line is the zero forecast.

In the figure’s legend, you’ll see error measures indicating that the zero forecast performs best in terms of MAE, followed by the ANN, and lastly, the mean forecast. Based on MAE, the conclusion would be: "We won’t sell anything, so don’t bother stocking the product". But this outcome occurs solely because 12 out of 20 values in the holdout are zeros, making the median zero as well. On the other hand, RMSE provides a more reasonable evaluation, showing that the mean of the data is more informative and preferable to the other methods.

The brief summary of this post is: *Don’t use MAE-based error measures for intermittent demand!* (Insert as many exclamation marks as you’d like!)

P.S. Actually, as a general rule, avoid using MAE for evaluating methods that produce mean forecasts. For more details, check out this post: https://lnkd.in/ePdSn4XX.

P.P.S. What frustrates me a lot is that the reviewers of those papers did nothing to fix this issue, which means that they are clueless about it as well.

#forecasting #datascience #machinelearning
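To make the effect tangible, here is a minimal sketch on simulated intermittent data (not the series from the post's figure; the occurrence and size distributions are assumptions):

```python
# Minimal sketch reproducing the effect described above on simulated
# data. Demand is intermittent: mostly zeros, occasional positive values.
import numpy as np

rng = np.random.default_rng(7)
n = 20
# Bernoulli occurrence (40% chance of a sale) times Poisson sizes,
# so roughly 12 of 20 holdout values are zero, as in the post.
holdout = rng.binomial(1, 0.4, n) * rng.poisson(5, n)

forecasts = {
    "mean forecast": np.full(n, holdout.mean()),
    "zero forecast": np.zeros(n),
}

for name, f in forecasts.items():
    mae = np.mean(np.abs(holdout - f))
    rmse = np.sqrt(np.mean((holdout - f) ** 2))
    print(f"{name}: MAE={mae:.2f}  RMSE={rmse:.2f}")

# Typical output: the zero forecast wins on MAE (the median of the data
# is zero), while the mean forecast wins on RMSE (which is minimised by
# the mean) -- exactly the trap the post warns about.
```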
-
My previous post discussed the pitfalls of applying AI/ML models to proxy demand signals in forecasts. Today’s post discusses randomness in demand.

Here’s a pop quiz. Is a demand forecast: 1) a single time series of predicted demand, or 2) a statistical distribution?

If you chose 1, I believe you would be in the majority of planners in how we apply forecasts in supply chains. However, we can intuitively agree that demand is composed of predictable and random components. Enter probabilistic forecasting - the ability to produce statistical demand distributions. But the big question is: what is the utility of the complexity introduced by demand distributions?

Quick sidebar - there is an entire family of demand patterns where you are better off not forecasting and just using replenishment models to “pull” signals. That is not the focus of my discussion here.

Having led large demand planning teams, I have observed planners across e-commerce and consumer product supply chains. The one thing I have observed is that planners are not wired to think in probabilistic terms, especially around demand. It’s far more tempting to operate with a single demand prediction and make operational plans around it.

So why do analytically sharp planners struggle with demand distributions? A demand distribution describes the full range of possible demand, with probabilities attached to its quantiles. For example, the 90th percentile of demand (denoted P90) is one where there is only a 10% chance that demand is above it. Planners already struggle to align demand with S&OP/IBP stakeholders. It is natural that planners have little patience to deal with complex distributions that are unwieldy. And the natural response is to run with a single prediction - typically the mean forecast. What a tragedy then to invest in a sophisticated AI/ML forecasting solution but only use mean forecasts that ignore the randomness in demand!

So what is the fix? In my opinion, it is essential that we generate demand distributions - how valid the distributions themselves are is another story. But I would keep distributions out of the S&OP/IBP domain. S&OP should continue to focus on aligning the mean forecast along with any business overrides. Instead, I would move demand distributions to the domain of inventory strategy to model the trade-off between service levels and cost-to-serve (see the sketch below). Smart teams have figured out inventory models that ingest demand distributions, business inputs, lifecycle policies, and costs to optimize inventory investments. This gives planners the room to have a conscious inventory strategy that is codified in policies, respects randomness in demand, and explicitly targets service levels and costs.

In summary, build agile and robust supply chains using probabilistic demand, but be thoughtful on when and where to introduce probabilistic computation in your planning process!!

Drop a comment on what has worked for you.
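As a concrete illustration of that last point, here is a minimal sketch, assuming a gamma demand distribution and made-up holding/shortage costs, of how quantile-based stocking levels trade service level against cost-to-serve:

```python
# Minimal sketch of using a demand distribution (simulated samples
# standing in for a probabilistic model's output) for inventory
# strategy: sweep quantile-based stocking levels and show the service
# level vs. cost-to-serve trade-off. All parameters are illustrative.
import numpy as np

rng = np.random.default_rng(0)
demand_samples = rng.gamma(shape=4.0, scale=50.0, size=50_000)  # assumed

UNIT_HOLDING_COST = 2.0    # cost per unit stocked but unsold (assumed)
UNIT_SHORTAGE_COST = 10.0  # cost per unit of unmet demand (assumed)

print("quantile  stock  service_lvl  expected_cost")
for q in (0.50, 0.80, 0.90, 0.95, 0.99):
    stock = np.quantile(demand_samples, q)       # Pq of the distribution
    service = np.mean(demand_samples <= stock)   # P(no stockout)
    overage = np.maximum(stock - demand_samples, 0).mean()
    underage = np.maximum(demand_samples - stock, 0).mean()
    cost = UNIT_HOLDING_COST * overage + UNIT_SHORTAGE_COST * underage
    print(f"  P{round(q * 100):>2}   {stock:7.0f}    {service:.3f}     {cost:8.1f}")
```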
-
In one of my earlier roles in demand forecasting, our BI system used a very simple percentage error formula: (forecast - actuals)/forecast. At the time, I questioned whether dividing by the actuals might provide a more accurate picture of forecast performance, but like many things in practice, it wasn’t a high priority for discussion.

Later, at another company, we used MAPE across all products, including those with highly intermittent demand. It was a consistent approach, but no one really questioned whether a different metric might better capture the nuances of different demand patterns.

It wasn’t until I returned to university for my PhD that I encountered the broader landscape of forecast accuracy metrics. That’s when I started asking a bigger question: which metric should be used for which purpose?

Forecast accuracy seems simple until you try to measure it consistently across products, teams, or tools. Most people start with MAPE or RMSE because that’s what the software provides. But eventually, the questions come up:
– Why does one model look better on RMSE but worse on MAPE?
– Why do different teams report accuracy differently?
– Why does it feel like the numbers don’t tell the full story?

I wrote this article to help unpack those questions: what each accuracy metric emphasizes, when it’s most useful, and what happens when different metrics lead to different conclusions. It includes:
– A breakdown of common metrics like RMSE, MAE, MAPE, sMAPE, MASE, and more
– Practical examples of when each metric works best — and when it doesn’t
– Guidance on how to choose the right metrics based on product portfolios and business goals

I'm curious: which forecasting error measures are being used where you work? Are you using more than one?
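To illustrate, here is a minimal sketch of these metrics on assumed toy numbers, showing how they can rank the same two forecasts differently:

```python
# Minimal sketch of the metrics discussed above, on assumed toy numbers,
# showing that different metrics can rank the same two forecasts differently.
import numpy as np

def rmse(y, f):  return np.sqrt(np.mean((y - f) ** 2))
def mae(y, f):   return np.mean(np.abs(y - f))
def mape(y, f):  return np.mean(np.abs((y - f) / y)) * 100           # undefined if any y == 0
def smape(y, f): return np.mean(2 * np.abs(y - f) / (np.abs(y) + np.abs(f))) * 100
def mase(y, f, y_train):
    # scaled by the in-sample MAE of the one-step naive forecast
    return mae(y, f) / np.mean(np.abs(np.diff(y_train)))

y_train = np.array([30, 80, 150, 60, 100, 40, 180, 90], dtype=float)
y_test  = np.array([10, 200, 50, 120], dtype=float)
f_a     = np.array([12, 150, 55, 110], dtype=float)   # hypothetical model A
f_b     = np.array([25, 195, 45, 125], dtype=float)   # hypothetical model B

for name, f in [("A", f_a), ("B", f_b)]:
    print(f"model {name}: RMSE={rmse(y_test, f):6.2f}  MAE={mae(y_test, f):5.2f}  "
          f"MAPE={mape(y_test, f):6.2f}%  sMAPE={smape(y_test, f):5.2f}%  "
          f"MASE={mase(y_test, f, y_train):.3f}")

# Model B wins on RMSE, MAE, and MASE, yet model A wins on MAPE and
# sMAPE: B's one large *relative* miss on the low-volume period (actual
# 10, forecast 25) dominates the percentage-based measures. This is
# exactly why the choice of metric matters.
```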
-
Are Your Forecast Adjustments Helping or Hurting?

In demand planning, we often tweak forecasts based on market intelligence, gut feel, or stakeholder inputs. But do these adjustments actually improve accuracy?

Forecast Value Add (FVA) is a quantitative metric that measures whether manual or system-driven adjustments enhance or degrade forecast accuracy. The goal? Eliminate unnecessary bias and improve demand planning efficiency.

How to Calculate FVA?
FVA compares the Mean Absolute Percentage Error (MAPE) before and after forecast adjustments:

FVA = ((MAPE of Statistical Forecast − MAPE of Adjusted Forecast) / MAPE of Statistical Forecast) × 100

Interpretation:
> Positive FVA (%) → Adjustments improved accuracy
> Negative FVA (%) → Adjustments worsened accuracy
> Zero FVA → No impact (waste of effort)

Let’s say:
Statistical Forecast MAPE = 15%
Final Adjusted Forecast MAPE = 10%
FVA = (15 − 10) / 15 × 100 = 33.3%

A positive FVA of 33.3% means manual inputs significantly improved forecast accuracy.

Why Should You Track FVA?
> Helps differentiate useful vs. biased forecast changes
> Reduces forecasting inefficiencies
> Strengthens data-driven decision-making

Track FVA by planner, product category, or forecast horizon to identify which inputs add value!

#SupplyChain #DemandPlanning #Forecasting #InventoryManagement #Analytics #SafetyStock #CostOptimization #Logistics #Procurement #InventoryControl #LeanSixSigma #Cost #OperationalExcellence #BusinessExcellence #ContinuousImprovement #ProcessExcellence #Lean #OperationsManagement
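Here is a minimal sketch of the calculation, using the post's definition of FVA and hypothetical forecast/actual numbers:

```python
# Minimal sketch of the FVA calculation described above, on assumed data.
import numpy as np

def mape(actuals, forecast):
    """Mean Absolute Percentage Error, in %. Assumes no zero actuals."""
    actuals, forecast = np.asarray(actuals, float), np.asarray(forecast, float)
    return np.mean(np.abs((actuals - forecast) / actuals)) * 100

def fva(actuals, statistical, adjusted):
    """FVA as defined in the post: relative MAPE improvement, in %."""
    mape_stat = mape(actuals, statistical)
    mape_adj = mape(actuals, adjusted)
    return (mape_stat - mape_adj) / mape_stat * 100

# Hypothetical monthly numbers for one SKU:
actuals     = [100, 120, 90, 110]
statistical = [ 90, 100, 95, 130]   # system forecast
adjusted    = [ 95, 115, 92, 120]   # after planner overrides

print(f"FVA = {fva(actuals, statistical, adjusted):.1f}%  "
      "(positive: the overrides added value)")
```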
-
Recently, my team published our research "Uncertainty Quantification of Spatiotemporal Travel Demand with Probabilistic Graph Neural Networks" in IEEE Transactions on Intelligent Transportation Systems!

This work addresses a critical gap in travel demand prediction by introducing a framework of probabilistic graph neural networks (Prob-GNNs). Unlike previous approaches, Prob-GNNs not only offer deterministic forecasts but also quantify the inherent uncertainty in travel demand predictions. Our findings underscore the significance of incorporating randomness into deep learning models, revealing that probabilistic assumptions play a crucial role in accurately capturing demand uncertainty.

I extend my heartfelt thanks to the esteemed collaborators: Qingyi Wang, Dingyi Zhuang, Haris N. Koutsopoulos, and Jinhua Zhao.

See the IEEE publication here: https://lnkd.in/gKskqDG3
The arXiv version here: https://lnkd.in/g_JnR2fz

Massachusetts Institute of Technology MIT Mobility Initiative MIT School of Architecture and Planning (MIT SAP) Singapore-MIT Alliance for Research & Technology Centre University of Florida University of Florida College of Design, Construction and Planning

#ResearchPublication #TravelDemandPrediction #ProbabilisticModels #TransportationInnovation #TransportationResearch #NeuralNetworks #TravelForecasting #DeepLearning #DataScience #TransportationModeling #ProbabilisticForecasting #SpatiotemporalAnalysis #ResearchFindings #TransportationPlanning #UncertaintyQuantification #TravelPatterns #MITResearch #FloridaResearch #IEEEJournal #TransportationTechnology #GraphNeuralNetworks #TravelBehavior #UrbanMobility
-
If you're in manufacturing, you know that accurate demand forecasting is critical. It's the difference between smooth operations, happy customers, and a healthy bottom line – versus scrambling to meet unexpected demand, dealing with excess inventory and liquidity issues, or losing out on potential sales and missing your Sales / EBITDA targets. But with constantly shifting customer preferences, disruptive market trends, and global events throwing curveballs, it's also one of the toughest nuts to crack.

While often reliable in stable environments (especially in settings with lots of high-frequency transactions and no data sparsity), traditional stats-based forecasting methods aren't built for the complexity and volatility of today's market. They rely on historical data and often miss the subtle signals indicating a major shift is on the horizon. Traditional stats-based approaches are also not that effective for businesses with high data sparsity (e.g., larger tickets, choppier transaction volume).

That's where AI/ML-enabled forecasting comes in. Unlike foundational stats forecasting, it can include various structured and unstructured data, such as social media sentiment, competitor activity, and various economic indicators.

One of the most significant advancements in recent years is the rise of powerful open-source AI/ML packages for forecasting. These tools, once the domain of large enterprises with extensive resources or turnkey solution providers (with hefty price tags), are now readily accessible to companies of all sizes, offering a significant opportunity to level the playing field and drive smarter decision-making (see the sketch below).

The power of AI and ML in demand forecasting is more than just theoretical. Companies across various industries are already reaping the benefits:
• Marshalls: This UK manufacturer used AI to optimize inventory management during the pandemic. It made thousands of model-driven decisions daily and managed orders worth hundreds of thousands of pounds.
• P&G: Their PredictIQ platform, powered by AI and ML, significantly reduced forecast errors, improving inventory management and cost savings.
• Other industries: Retailers, e-commerce companies, and even the energy sector are using AI to predict everything from consumer behavior to energy demand, with impressive results.

If you're in manufacturing or distribution and haven't explored upgrading your demand forecasting (and S&OP) capabilities, I highly encourage you to invest. These capabilities are table stakes nowadays, and forecasting on random spreadsheets and basic methods (year-over-year performance, moving average, etc.) is not cutting it anymore.
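As a flavor of what those open-source packages enable, here is a minimal sketch using numpy and scikit-learn on synthetic data (the demand process, the external signal, and all parameters are assumptions, not drawn from any of the companies mentioned):

```python
# Minimal sketch of ML-based demand forecasting with an open-source
# stack (numpy + scikit-learn), under toy assumptions: synthetic monthly
# demand driven by seasonality plus an external "market signal".
# Illustrative only; production setups add backtesting, feature
# pipelines, and monitoring.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(1)
n_months = 60
season = 100 + 30 * np.sin(2 * np.pi * np.arange(n_months) / 12)
signal = rng.normal(0, 1, n_months)     # stand-in for an external indicator
demand = season + 15 * signal + rng.normal(0, 5, n_months)

# Feature matrix: lagged demand (t-1, t-12) plus the external signal --
# the kind of extra input classic stats methods can't easily ingest.
X, y = [], []
for t in range(12, n_months):
    X.append([demand[t - 1], demand[t - 12], signal[t]])
    y.append(demand[t])
X, y = np.array(X), np.array(y)

# Hold out the last 6 months to check out-of-sample error.
X_tr, X_te, y_tr, y_te = X[:-6], X[-6:], y[:-6], y[-6:]
model = GradientBoostingRegressor(random_state=1).fit(X_tr, y_tr)
pred = model.predict(X_te)
print(f"holdout MAE: {np.mean(np.abs(y_te - pred)):.1f} units")
```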