Multi-fidelity methods for combining computer simulations and machine learning

Many complex systems are not fully understood theoretically in all aspects, or cannot be modeled from first principles without loss of generality. Such systems therefore cannot be fully captured by classical methods such as algebraic solvers or differential equations. An example of such a system is the resorption of biodegradable implants in medical surgery. Some of the fundamental underlying physical mechanisms are well understood (e.g., diffusion, mechanical loading by elastic deformation); however, other mechanisms can only be inferred from data that has not yet been subsumed into a complete theoretical framework (e.g., the exact interactions between the implant and the adjacent living tissue on a cellular level). To incorporate such mixed systems into computer simulations, it is necessary to combine theory-based methods (such as finite element simulations) with data-driven methods (such as artificial neural networks). For that combination, the fundamental driving question is how to link theory-based methods and artificial intelligence as efficiently as possible, given the typically prohibitive computational expense of large numbers of high-fidelity computer simulations [2].

Our research in this area has, amongst other results, yielded a generally applicable multi-fidelity approach that enables such an efficient connection between computer simulations and the training of a machine learning algorithm based on those simulations [1]. Depending on the application, this approach was shown to yield computational savings of between half and a full order of magnitude (for application examples see Figure 1 and Figure 2). The approach relies on "shared information", i.e., functional similarities, between fine-grained and coarse-grained computer simulations of the same problem: a machine learning algorithm is first trained using the simplified (but correspondingly cheaper) simulations and only switches to the actual full-fidelity simulations for the final convergence. This training of a machine learning algorithm across different levels of fidelity in order to save computational cost gives the approach its name.
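The basic training schedule can be illustrated with a short sketch. The snippet below is a minimal, hypothetical example and not the implementation from [1]: the functions run_coarse_simulation and run_fine_simulation are placeholder stand-ins for a cheap and an expensive computational model of the same problem, a small neural network is first pre-trained on many coarse samples, and the same network is then fine-tuned on a much smaller set of high-fidelity samples.

```python
# Minimal multi-fidelity training sketch (PyTorch).
# run_coarse_simulation() and run_fine_simulation() are hypothetical
# placeholders for a cheap and an expensive computational model of the
# same physical problem; here they are mocked with synthetic data.
import torch
import torch.nn as nn

def run_coarse_simulation(x):
    # Placeholder: cheap, low-fidelity model (e.g., coarse FE mesh).
    return torch.sin(x) + 0.1 * torch.randn_like(x)

def run_fine_simulation(x):
    # Placeholder: expensive, high-fidelity model (e.g., fine FE mesh).
    return torch.sin(x)

model = nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, 1))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Stage 1: pre-train on many cheap, coarse-grained simulation samples.
x_coarse = torch.rand(1000, 1) * 6.28
y_coarse = run_coarse_simulation(x_coarse)
for _ in range(2000):
    optimizer.zero_grad()
    loss_fn(model(x_coarse), y_coarse).backward()
    optimizer.step()

# Stage 2: fine-tune the same network on a much smaller set of
# expensive, high-fidelity simulation samples.
x_fine = torch.rand(100, 1) * 6.28
y_fine = run_fine_simulation(x_fine)
for _ in range(500):
    optimizer.zero_grad()
    loss_fn(model(x_fine), y_fine).backward()
    optimizer.step()
```

In practice, the coarse and fine data would of course come from actual simulation runs rather than synthetic functions; the point of the sketch is only the two-stage schedule in which the expensive model enters late in training.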


Figure 1: Example application of the multi-fidelity approach: simulation model of a membrane at three different numbers of elements: (A) 100 elements, (B) 1600 elements, (C) 25600 elements. A machine learning algorithm is pre-trained using simulation results from (A); only the fine-tuning towards final convergence is then performed using the more complex (and hence more computationally expensive) simulation data from (B) and (C). Reprinted from [1].

Figure 2: Example application of the multi-fidelity approach: (A) simulation model of a representative volume element with an ellipsoidal inclusion, (B) four different levels of discretization with 4, 8, 16, and 32 elements per coordinate axis and quadrant. As in the previous example, a machine learning algorithm is pre-trained using coarse-grained simulation data with fewer elements, while the more complex and expensive simulation levels are only evaluated once the machine learning algorithm can extract no further benefit from the simplified simulations. Reprinted from [1].
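The switch between fidelity levels in such a scheme can be controlled, for example, by monitoring whether the validation error still improves on the current (cheaper) level. The function below is a simple illustrative criterion of this kind, not the specific criterion used in [1]: it declares a fidelity level exhausted once the average validation loss over the most recent epochs no longer improves by more than a small tolerance.

```python
def should_switch_fidelity(val_losses, window=5, tol=1e-4):
    """Illustrative plateau criterion: return True once the validation loss
    averaged over the last `window` epochs improves by less than `tol`
    compared with the preceding `window` epochs, i.e., the current
    (coarser) fidelity level offers no further benefit."""
    if len(val_losses) < 2 * window:
        return False
    recent = sum(val_losses[-window:]) / window
    previous = sum(val_losses[-2 * window:-window]) / window
    return (previous - recent) < tol
```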


References:

[1]: Aydin RC, Braeu FA, Cyron CJ (2019) General Multi-Fidelity Framework for Training Artificial Neural Networks With Computational Models. Frontiers in Materials, 6/2019. DOI: 10.3389/fmats.2019.00061

[2]: Bock FE, Aydin RC, Cyron CJ, Huber N, Kalidindi SR, Klusemann B (2019) A Review of the Application of Machine Learning and Data Mining Approaches in Continuum Materials Mechanics. Frontiers in Materials, 6/2019. DOI: 10.3389/fmats.2019.00110