
Creating A Sustainable Competitive Edge With Digital Twins

Premise: Many applications have promised competitive advantage. Few deliver. Properly designed and orchestrated, however, applications that build on the technology of Digital Twins (DTs) can deliver a sustainable competitive edge. The key lies in implementing and coordinating data feedback loops for the machine learning models at the heart of each DT across the enterprise. These feedback loops put DTs on accelerating, self-reinforcing cycles that make them ever smarter.


All businesses employ processes for sourcing, making, delivering, and maintaining offerings. Companies that employ superior processes or that better orchestrate those processes typically gain competitive advantage. Sustaining an advantage requires companies to innovate processes and the orchestration of those processes faster than competitors.

Digital Twins enable faster innovation of processes as well as their orchestration in several ways. DTs serve as integration points for “programming” the real world. In addition, the machine learning models integrated into DTs enable processes to improve continuously based on data feedback loops. These feedback loops drive improvements based on richer history and a wider perspective. For example, a firm could build DTs to represent each of its warehouses. The firm could also create a higher-level DT that contained the DTs for all the warehouses as well as the movement of goods between them. The machine learning models in the DTs for each warehouse would get better at directing picking, packing, and shipping over time. In addition, the DT that represented all the warehouses would get better at anticipating where to pre-position goods across locations. These data feedback loops drive continuous improvement in a manner closer to agile development than to older, slower waterfall practices.
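As a rough illustration, such a composite might be modeled along these lines (a minimal sketch; the class and field names are hypothetical, not any vendor’s API):

```python
from dataclasses import dataclass, field

@dataclass
class WarehouseTwin:
    # Hypothetical DT of one warehouse; its ML model learns from pick/pack/ship events.
    warehouse_id: str
    feedback: list = field(default_factory=list)

    def record_event(self, event: dict) -> None:
        # Operational events accumulate as training data for the warehouse model.
        self.feedback.append(event)

@dataclass
class NetworkTwin:
    # Composite DT: contains the warehouse twins plus the goods movements between them.
    warehouses: dict = field(default_factory=dict)   # warehouse_id -> WarehouseTwin
    transfers: list = field(default_factory=list)

    def record_transfer(self, src: str, dst: str, sku: str, qty: int) -> None:
        # Inter-warehouse flows become training data for a network-level
        # pre-positioning model.
        self.transfers.append({"src": src, "dst": dst, "sku": sku, "qty": qty})
```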

Managing the processes involved with machine learning, however, pushes the envelope of what’s accessible with current technology. As a result, mastering two key operational processes is the secret sauce to creating and sustaining a competitive edge built on DTs:

  • Use machine learning to accelerate the process of improving DT fidelity. Machine learning is a superior technology for improving the fidelity of DTs because the data feedback loops in predictive models can start automating the process of learning from experience. As the DTs’ fidelity improves, they drive continuous improvements to business outcomes in a virtuous cycle.
  • Orchestrate machine learning across an enterprise of DTs. Enterprises can achieve sustainable competitive advantage when the DTs representing services, products, and processes and their associated self-reinforcing feedback loops are aligned and orchestrated across the firm. Less integrated firms will continue falling further behind.

Use data feedback loops in machine learning to accelerate the process of improving DTs.

Traditional systems of record were relatively static. They typically improved only with vendor-supplied upgrades or with one-off customization. DT applications are very different. They get continuously better in how they can inform and optimize products and processes over time. Machine learning models are at the heart of this continuous improvement.

Machine learning models rely on data feedback loops to keep improving their fidelity. A DT of a machine tool on an assembly line produces data over time about how the tool performs – for example, changes in machining tolerance or the impact of power spikes on unplanned maintenance. This data feeds back into the models and makes them more accurate.
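A minimal sketch of such a feedback loop, assuming a generic scikit-learn regressor and hypothetical telemetry features (an illustration of the pattern, not any particular product’s implementation):

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

class FeedbackLoop:
    # Sketch of a machine-tool DT's data feedback loop. Features might be
    # spindle load, power draw, and temperature; the target might be the
    # observed drift in machining tolerance.
    def __init__(self, retrain_every: int = 10):
        self.X, self.y = [], []
        self.retrain_every = retrain_every
        self.model = RandomForestRegressor(n_estimators=50)

    def ingest(self, features: list, outcome: float) -> None:
        # Each observed (features, outcome) pair becomes new training data...
        self.X.append(features)
        self.y.append(outcome)
        # ...and the model is periodically refit, so its fidelity improves
        # as operating history accumulates.
        if len(self.y) % self.retrain_every == 0:
            self.model.fit(np.asarray(self.X), np.asarray(self.y))
```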

A data scientist can improve the structure or behavior of a DT with code by extending it to measure additional mechanical behavior, such as acceleration, heat, and vibration; or by adding contextually relevant information from another application, such as its service history from an MRO application. These types of manual improvement are based on the same hand-coding done in traditional applications. But the secret sauce to accelerating the rate of improvement of the Digital Twin beyond what’s possible with hand coding is the machine learning data feedback loop. Once that self-reinforcing feedback process is in place, DTs become ever more effective at informing improvements to business processes.
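The kind of hand-coded extension described above might look like the following sketch (the twin classes and the MRO join are hypothetical illustrations):

```python
from dataclasses import dataclass, field

@dataclass
class MachineToolTwin:
    # Base twin: tracks machining tolerance and power-related events.
    asset_id: str
    tolerance_mm: float = 0.0
    power_spikes: int = 0

@dataclass
class ExtendedMachineToolTwin(MachineToolTwin):
    # Hand-coded extension: additional mechanical behavior...
    acceleration_g: float = 0.0
    temperature_c: float = 0.0
    vibration_hz: float = 0.0
    # ...plus context joined in from a (hypothetical) MRO application.
    service_history: list = field(default_factory=list)

def attach_mro_history(twin: ExtendedMachineToolTwin, mro_records: list) -> None:
    # Enrich the twin with service records, widening the features
    # available to its machine learning model.
    twin.service_history.extend(mro_records)
```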

Building a machine learning model with current technology requires deep collaboration between a domain expert and a data scientist. In our machine tool example, the domain expert would be a skilled operator or maintainer of the machine tool. Together, the expert and the data scientist would have to identify the variables that yield the most accurate predictions for a particular model. As the number of variables, or inputs, increases, the data scientist would need to understand the balance, weighting, and relationships of those variables. This optimization is typically necessary to determine how specific changes in individual variables might affect outcomes.
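One common way to probe which candidate variables a model actually relies on is permutation importance; the sketch below uses synthetic data and hypothetical variable names purely to illustrate the kind of analysis involved:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance

# Synthetic stand-in for telemetry the domain expert helped select.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))   # e.g., load, speed, heat, vibration
y = 2.0 * X[:, 0] - 0.5 * X[:, 2] + rng.normal(scale=0.1, size=200)

model = RandomForestRegressor(n_estimators=100).fit(X, y)
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for name, score in zip(["load", "speed", "heat", "vibration"],
                       result.importances_mean):
    print(f"{name}: {score:.3f}")   # higher = the model leans on it more
```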

The main challenge with all these statistical activities is that the necessary skills are in very short supply in all but the most sophisticated enterprises. There is hope that over the next 5 years deep learning might significantly augment what a data scientist can do. But today that technology remains over the horizon.

Orchestrate the machine learning process across all the DTs in a firm.

Sustainable competitive advantage almost never comes from doing one business process better over time than competitors. Rather, lasting advantage typically comes when many processes are orchestrated to reinforce a particular business outcome through common objectives.

Figure 1: A Digital Twin of a machine tool, an assembly line that includes multiple tools, and a factory with multiple assembly lines.

Let’s return to our example of the machine tool DT. The enterprise that developed that twin might also create DTs of each assembly line and of each factory that contains them (see figure 1 above). And this is where the orchestrated, self-reinforcing data feedback loops enter the story. Each individual DT, whether an instance of a particular machine tool or an instance of an assembly line, would have its own data feedback loop. The assembly line DT would actually be a composition of the DTs of the individual machine tools as well as the mechanics of how they interact.

Importantly, the machine learning for each DT wouldn’t take place in an uncoordinated fashion. Rather, the relevant, filtered data feedback for each instance of a twin would typically be collected in the cloud. There, the machine learning model for each instance of a particular machine tool or assembly line would be retrained on its own data, while the “master” twin would be retrained on the collective feedback (see figure 2 below).
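A highly simplified sketch of that cloud-side step, with hypothetical names and a generic linear model standing in for whatever models the twins actually use:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

def retrain_fleet(feedback_by_instance: dict):
    # feedback_by_instance maps instance_id -> (X, y) of filtered feedback.
    # Each instance model retrains on its own experience...
    instance_models = {
        instance_id: LinearRegression().fit(X, y)
        for instance_id, (X, y) in feedback_by_instance.items()
    }
    # ...while the master retrains on the pooled feedback from every instance.
    X_all = np.vstack([X for X, _ in feedback_by_instance.values()])
    y_all = np.concatenate([y for _, y in feedback_by_instance.values()])
    master = LinearRegression().fit(X_all, y_all)
    return master, instance_models
```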

Figure 2: Data feedback loops from all the instances of a twin retrain both the individual instances and the master copy of the twin.

The software orchestrating this retraining would push a new copy of the master and the appropriate instance down to the gateway managing each machine tool and assembly line. The two models together would define how the device operates. When the feedback loops for all these masters and instances are orchestrated together, the competitive advantage becomes self-reinforcing.
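At the gateway, the two pushed-down models might be combined along these lines (the blending scheme and weight are illustrative assumptions, not a prescribed design):

```python
def gateway_predict(master, instance, features, instance_weight=0.7):
    # The instance model captures this device's local behavior; the master
    # contributes the fleet-wide view. Together they govern operation.
    return (instance_weight * instance.predict(features)
            + (1 - instance_weight) * master.predict(features))
```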

Moreover, not only is the rate of improvement in each Digital Twin accelerating, but the twins across the enterprise are collectively accelerating improvement. This self-reinforcing feedback, orchestrated across so many assets and processes, is not only more advanced than traditional waterfall development methodologies; it is faster even than agile methodologies. And this speed is why operationalizing the feedback process is so strategic – and also why it’s still immature.

Orchestrating the data feedback loops and the continual training for all these DTs isn’t easy. Retraining models requires understanding which new data from each instance is relevant and must be forwarded to the cloud. A typical rule of thumb is to flag any event that falls more than two standard deviations from the expected value. Cloud-based orchestration has to ensure that the master can learn from the experience of all the instances and that each instance can also learn from its own data.
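The two-standard-deviation rule of thumb is straightforward to sketch in code:

```python
import numpy as np

def filter_feedback(readings: np.ndarray, expected_mean: float,
                    expected_std: float) -> np.ndarray:
    # Flag only the readings more than two standard deviations from the
    # expected value; these are the events worth forwarding to the cloud.
    z = np.abs(readings - expected_mean) / expected_std
    return readings[z > 2.0]

# Example: a sensor expected to read 10.0 with a standard deviation of 0.5.
print(filter_feedback(np.array([9.9, 10.2, 12.1, 8.4]), 10.0, 0.5))  # [12.1  8.4]
```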

This type of functional DT organization is especially important in edge IoT situations, where the DT operates in close proximity to its physical counterpart. Having both master and instance working together at the IoT edge improves the twins’ fidelity. Managing the lifecycle of a single machine learning model is hard enough. But managing the lifecycle, coherence, and distribution of models embedded in Digital Twins across an enterprise is the machine learning equivalent of DevOps (see figure 3). It’s not easy. But when these orchestrated feedback loops are continuously improving the simulations and predictive analytics across all the DTs, the enterprise can optimize its mission-critical business processes across the value chain.

In many ways, a Digital Twin operates like a simulation: you can’t fully comprehend all of its inner workings, but you can plug in data, get the most likely answers, and then use those results to improve the simulation and, in turn, the associated business process.

Figure 3: Data about the operations of Digital Twins informs operational decisions at the edge. That same data from all the DTs continuously retrains their associated machine learning models in the cloud. Orchestrating this process is crucial to a sustainable advantage.

Action Item: Tools to design DTs and set up the individual data feedback loops are still somewhat immature. Much more challenging is the process of orchestrating the feedback loops across many related DTs. CTOs in LOBs should team with architects in IT to choose a high-value pilot project. For all but the most sophisticated shops, the team should engage a systems integrator, either a major one such as IBM Global Services or Accenture, or a specialty boutique such as Pivotal Labs, both to reduce risk and to benefit from the skills transfer. The most critical success factor is the integration of the DTs with the relevant business processes and the associated domain expertise.
