23 June 2021

The Evolution of Digital Twins – Probabilistic Planning


In a previous blog we discussed the design origins of Digital Twins and their current capabilities, which include policies and behaviors, meaning they capture both the desired and the actual performance of systems. This enables quicker design iterations through the use of simulations, as well as the capture of the performance characteristics of individual pieces of equipment.

Let us now discuss how all this rich information can be used to analyse, predict, and prescribe actions in the supply chain. I want to emphasize that all current supply chain planning tools are a form of simulation used to predict future supply chain performance, albeit a very limited form, because they ignore variability in their predictions. We only have to look back at the Lean/Six Sigma literature to understand the value leakage caused by variability, and yet none of the standard supply chain planning tools include variability in their predictions.

From Static to Dynamic Digital Twins

Using a Digital Twin to capture demonstrated performance is a key use case. Capturing demonstrated performance facilitates both a one-time analysis to feed better parameter values into supply chain planning and other simulation systems, and the continuous monitoring of those demonstrated values.

The capture of demonstrated operational performance also facilitates design improvements, policy improvements, process improvements, and conformance and compliance.

As stated before, all ERP and supply chain planning systems use a single value for each input, including demand. We all know that these inputs are not single values: each of them is really represented by a distribution. The question of which single value to use as an input parameter to the planning system still remains; the obvious choice would seem to be the mean/average or the mode. This choice has a huge impact on the fidelity of the planning and simulation models.

Sam Savage wrote an article in Harvard Business Review about his book “The Flaw of Averages” that captures the issue of using averages to represent a distribution very well. The cover of his book illustrates the concept: on average, the person throwing the darts hit the bull's-eye, but in reality none of the darts hit the bull's-eye.

To put this into concrete supply chain terms, below are the results of a throughput analysis for a particular product on a particular production line over a 12-month period, performed in LOP.ai. Which value on this histogram should we use as the throughput in a planning engine? Remember that a planning engine can only accept one number for throughput, yet the truncated range is 5,000 – 40,000 units/hour. If we use the average of 27,800 units/hour in the planning engine, then in quite a lot of cases we will use less capacity than we planned, because the mode is 35,200 units/hour, and that has a direct impact on fixed cost absorption.
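
To make the consequence concrete, here is a minimal sketch in Python of what happens when a single average is fed into a plan. The skewed distribution, order quantity, and resulting figures are illustrative assumptions, not the actual histogram data:

    import numpy as np

    rng = np.random.default_rng(42)

    # Hypothetical left-skewed throughput sample (units/hour), clipped to 5,000-40,000.
    # This is NOT the demonstrated histogram, just a shape that tells a similar story.
    throughput = np.clip(40_000 - rng.gamma(shape=2.0, scale=6_000, size=10_000),
                         5_000, 40_000)

    order_qty = 500_000                               # units to produce (assumed)
    planned_hours = order_qty / throughput.mean()     # what a single-number plan assumes
    actual_hours = order_qty / throughput             # what each simulated reality delivers

    print(f"mean throughput: {throughput.mean():,.0f} units/hour")
    print(f"planned hours:   {planned_hours:,.1f}")
    print(f"share of runs that use less capacity than planned: "
          f"{(actual_hours < planned_hours).mean():.0%}")

Because the distribution is skewed, the "average" plan misstates the capacity actually consumed far more often than intuition suggests.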

The diagram above shows the analysis of a single production line for a single product. The diagram below shows the value stream analysis of a product across 4 sites and 4 ERP systems, clearly showing the cumulative impact of variability at each step. Using a single number for each of these steps in a planning engine will greatly underestimate the risk of delivering to a customer request date, for example. In fact, traditional planning engines cannot provide a risk estimate at all. Instead, they tell you that you will “hit the bull's-eye”, when clearly you will not.
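
A small Monte Carlo sketch illustrates how variability compounds along a value stream; the four lead-time distributions and the promise date below are hypothetical placeholders, not the data behind the diagram:

    import numpy as np

    rng = np.random.default_rng(0)
    n = 10_000

    # Hypothetical lead times (days) at each of the four sites
    total_lead_time = (
        rng.lognormal(mean=np.log(5), sigma=0.3, size=n)    # site 1
        + rng.lognormal(mean=np.log(7), sigma=0.4, size=n)  # site 2
        + rng.lognormal(mean=np.log(3), sigma=0.2, size=n)  # site 3
        + rng.lognormal(mean=np.log(6), sigma=0.5, size=n)  # site 4
    )

    promise = 5 + 7 + 3 + 6   # the single-number plan: nominal lead times added up
    print(f"probability of delivering within the {promise}-day promise: "
          f"{(total_lead_time <= promise).mean():.0%}")

Adding up the nominal lead times gives a promise date that the simulated reality misses far more often than a deterministic plan would ever admit.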

From Deterministic Representation to Stochastic Prediction

Once both the design and the demonstrated digital twin have been captured, any number of algorithms can be applied for prediction.

Conformance and compliance can be performed by comparing the design values to the demonstrated values. Are the materials flowing through the supply chain in the expected manner? Are we achieving the desired throughput from our factories? Of course, these analyses should always provide a risk score, such as: in 90% of cases we achieve our desired throughput.
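
As a simple illustration, a conformance check of this kind reduces to comparing demonstrated values against the design target and reporting the hit rate; the target and the demonstrated series below are assumed for the example, not real twin data:

    import numpy as np

    rng = np.random.default_rng(1)
    design_target = 30_000                                        # units/hour (assumed)
    demonstrated = rng.normal(loc=31_000, scale=4_000, size=365)  # placeholder for twin data

    conformance = (demonstrated >= design_target).mean()
    print(f"desired throughput achieved in {conformance:.0%} of cases")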

Another great use case of a digital twin is for control, specifically performance control. As stated before, a supply chain is a dynamic system, meaning its performance is changing constantly. A key value of the digital twin is alerting when significant changes have occurred, and predicting the consequences of the changes. After all, the importance of a change can only be measured by its impact.
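
One way to sketch this kind of alerting is a simple rolling z-score on demonstrated throughput; a production digital twin would use more robust change-detection methods, and the data here is simulated:

    import numpy as np

    def detect_shift(series, window=30, threshold=4.0):
        """Flag points where the recent mean drifts from the historical baseline."""
        alerts = []
        for i in range(window * 2, len(series)):
            baseline = series[:i - window]
            recent = series[i - window:i]
            z = (recent.mean() - baseline.mean()) / (baseline.std() / np.sqrt(window))
            if abs(z) > threshold:
                alerts.append(i)
        return alerts

    rng = np.random.default_rng(2)
    history = np.concatenate([rng.normal(35_000, 2_000, 200),   # stable throughput
                              rng.normal(31_000, 2_000, 50)])   # performance drop
    print(f"first alert at observation {detect_shift(history)[0]}")

The alert itself is only half the story; the digital twin then predicts what the detected change means for downstream commitments.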

Adding Risk to the Usual Service, Cash, and Cost Balance

As stated earlier, traditional planning tools are indeed prediction engines, albeit prediction engines with a serious and fundamental flaw, namely the flaw of averages on the inputs and the lack of a risk estimate on the output. LOP.ai gets over this problem by using the distributions as input, instead of single values, and runs thousands of simulations to predict the likelihood of achieving your goals given the variability of the planning input parameters.
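
The following is a generic Monte Carlo sketch of that idea, not LOP.ai's actual engine: hypothetical input distributions are sampled thousands of times, and the share of runs that meet the plan becomes the likelihood estimate.

    import numpy as np

    rng = np.random.default_rng(3)
    n_sims = 10_000

    # Hypothetical weekly input distributions (placeholders, not demonstrated data)
    demand = rng.normal(100_000, 15_000, n_sims)          # units/week
    throughput = rng.triangular(600, 750, 900, n_sims)    # units/hour
    uptime_hours = rng.normal(150, 10, n_sims)            # productive hours/week

    supply = throughput * uptime_hours
    pte = (supply >= demand).mean()
    print(f"Probability-to-Execute the weekly plan: {pte:.0%}")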

Not only does LOP.ai predict the Probability-to-Execute (PTE), it also highlights the primary causes of risk. This way, you can create scenarios to look at the impact of different risk mitigation policies.

This is a fundamental shift in the quality of the plans produced, adding risk to the traditional balance of service, cost, and cash, or revenue, asset utilization, and inventory.

The addition of risk to the balance enables you both to establish the likelihood that you will be able to execute your preferred plan, and to run additional scenarios in which the service, cost, and cash KPIs are reduced in order to reduce the risk.

 

Many people confuse probabilistic planning with range planning.

  • Range planning tries to address the issue of demand uncertainty by adding upside and downside demand plans to the commit. Supply plans are generated for the downside, commit, and upside demand plans as a way of ensuring that the supply side has playbooks that can cope with anticipated demand swings.
  • Probabilistic planning includes demonstrated demand-side and supply-side variability – lead times, throughputs, quality, etc. – in the generation of the plans by running thousands of simulations, each sampling from the distributions that represent the variability of the planning input parameters.

Probabilistic planning generates one plan of service, cost, and cash, and provides a “Probability-to-Execute” estimate, meaning how likely it is that the plan will be achieved in reality. In addition, the major causes of risk are highlighted for immediate action or business process improvement.
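
A minimal, generic way to surface such risk drivers after a simulation run, again not LOP.ai's actual method, is to compare how the sampled inputs differ between failed and successful runs; every input and the goal below are hypothetical:

    import numpy as np

    rng = np.random.default_rng(4)
    n = 10_000
    inputs = {
        "supplier_lead_time": rng.lognormal(np.log(10), 0.4, n),  # days
        "line_throughput": rng.normal(750, 60, n),                # units/hour
        "yield": rng.beta(40, 2, n),                              # fraction good
    }

    # A toy goal: ship 100,000 good units within 150 production hours and 14 days
    good_units = inputs["line_throughput"] * 150 * inputs["yield"]
    on_time = inputs["supplier_lead_time"] <= 14
    failed = ~((good_units >= 100_000) & on_time)

    print(f"Probability-to-Execute: {1 - failed.mean():.0%}")
    for name, values in inputs.items():
        shift = (values[failed].mean() - values.mean()) / values.std()
        print(f"{name:>20}: {shift:+.2f} std devs away in failed runs")

The inputs that shift the most in failed runs are the natural candidates for mitigation scenarios.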

Summary

Clearly there are many uses of a digital twin, each of which is complementary, and the value gained from a digital twin expands hugely as the number of use cases increases.

 

The Digital Supply Chain Twin provides the basis to:

  • Compare designed to demonstrated performance
  • Feed more representative demonstrated performance parameters to planning systems
  • Monitor the demonstrated performance for early detection of changes in performance
  • Add risk/likelihood to the supply chain metric balance of service, cost, and cash

 

The capabilities of a digital twin can be captured in the diagram below. Each of these capabilities has value, and the layers describe the evolution of a digital twin into an intelligent decision-making and orchestration capability. It is the combination of these capabilities that really provides the differentiated value.

  • Design: The capture of the desired characteristics of the supply chain
  • Data: The raw representation of the design and operational information, including version control and change notification
  • Individual: The as-built specifics of both the assets that constitute the supply chain and the material flowing through the supply chain.
  • State: The as-operated specifics of the assets that constitute the supply chain and the operational performance of the materials produced by the supply chain
  • Analysis: Anomaly detection and prediction to capture changes in performance and market conditions
  • Control: Adjustment of operations within set constraints and the orchestration of decisions across different domains
  • Simulation: Predictive exploration of performance under different conditions
  • Intelligence: The application of AI/ML to automate many of the decisions and adjustments made in the Analysis, Control, and Simulation capabilities.

What’s next?

Why does traditional Supply Chain planning technology prescribe the wrong plan?
Trevor Miles and Bob Trebilcock answer this question and a lot more in the latest #loptalks.

Watch the complete video for free at: www.lop.ai/nextgen-sc