So the bottom line is that a simple black-box method of capturing system dynamics is not the solution. For a digital twin, you need to apply first principles to model the system and THEN reinforce it with dynamic field data.
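To make that concrete, here is a minimal grey-box sketch in Python. The first-order thermal model, the parameter names (k, C, T_amb, q), and the synthetic "field data" are all illustrative assumptions, not anyone's actual digital twin: the model structure comes from a first-principles energy balance, and the unknown heat-loss coefficient is then reinforced (calibrated) against noisy measurements.

```python
# Grey-box sketch: first-principles structure + field-data calibration.
# Assumed physics: dT/dt = -(k/C) * (T - T_amb) + q/C  (lumped energy balance).
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import least_squares

C, T_amb, q = 500.0, 20.0, 150.0  # heat capacity, ambient temp, heat input (assumed)

def simulate(k, t_eval, T0=20.0):
    """First-principles forward model for a given heat-loss coefficient k."""
    rhs = lambda t, T: -(k / C) * (T - T_amb) + q / C
    sol = solve_ivp(rhs, (t_eval[0], t_eval[-1]), [T0], t_eval=t_eval)
    return sol.y[0]

# Synthetic "field data": the real asset behaves like k = 4.0, plus sensor noise.
t_obs = np.linspace(0, 600, 50)
rng = np.random.default_rng(0)
T_obs = simulate(4.0, t_obs) + rng.normal(0, 0.3, t_obs.size)

# Reinforce the physics model with field data: fit k to the measurement residuals.
fit = least_squares(lambda k: simulate(k[0], t_obs) - T_obs, x0=[1.0])
k_hat = fit.x[0]
print(f"calibrated k = {k_hat:.2f}")  # recovers ~4.0

# The calibrated model can now extrapolate beyond the observed time window,
# which a pure black-box fit of the same 50 points generally cannot do.
T_future = simulate(k_hat, np.linspace(0, 1200, 100))
```

The point of the sketch is the division of labour: the equation supplies the dynamics, and the field data only has to pin down a small number of physical parameters.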
The term DT is gaining traction in research these days, but I find it a repetition of ideas already used in product/building life-cycle management. Similarly, Reduced Order Models have long existed in prototype-model studies and in the phenomenological elements of finite element analysis. I have seen that researchers sometimes coin new terminology or buzzwords that help them publish their work as new technology.
Can someone explain why, for prediction tasks, we should do all of this modeling instead of building a deep learning model on the historical data of the physical asset and retraining it every X amount of time to stay up to date? I can understand the interpretability advantage of the physics-driven approach, but is there any other advantage? A sketch of the pipeline I mean follows.
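To be concrete, this is roughly the purely data-driven pipeline the question describes: fit a regressor on the asset's recent history and refit it periodically to stay current. The data stream, window length, lag features, and model choice are illustrative assumptions only.

```python
# Purely data-driven baseline: rolling-window training with periodic retraining.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(1)
t = np.arange(2000, dtype=float)
y = np.sin(t / 50) + 0.002 * t + rng.normal(0, 0.05, t.size)  # fake sensor stream

window, retrain_every, lags = 500, 100, 5
model, preds = None, []

for i in range(window, t.size):
    if (i - window) % retrain_every == 0:
        # The retraining step: refit on the most recent `window` points.
        # This cost repeats indefinitely and multiplies with every asset.
        hist = y[i - window:i]
        X = np.stack([hist[j:j + window - lags] for j in range(lags)], axis=1)
        model = GradientBoostingRegressor().fit(X, hist[lags:])
    # One-step-ahead prediction from the last `lags` observations.
    preds.append(model.predict(y[i - lags:i].reshape(1, -1))[0])

print(f"one-step MAE: {np.mean(np.abs(np.array(preds) - y[window:])):.3f}")
```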
Because prediction would then require continually retraining the models? That is a dumb, brute-force way to do things: the retraining cost never stops and grows with every asset you monitor, so things would quickly get out of hand, and you would need a lot of computing power.