Physics-Informed Machine Learning
Main.PhysicsInformedMachineLearning History
Added lines 248-249:
[[https://apmonitor.com/pds/index.php/Main/ThermophysicalProperties|Physics-Informed Learning of Thermophysical Properties]]
Deleted lines 250-251:
[[https://apmonitor.com/pds/index.php/Main/ThermophysicalProperties|Physics-Informed Learning of Thermophysical Properties]]
Changed lines 246-250 from:
'''Case Study''': [[https://apmonitor.com/pds/index.php/Main/ThermophysicalProperties|Physics-Informed Learning of Thermophysical Properties]]
to:
'''Case Study'''
%width=550px%Attach:thermophysical_pinn.png
[[https://apmonitor.com/pds/index.php/Main/ThermophysicalProperties|Physics-Informed Learning of Thermophysical Properties]]
Changed lines 240-271 from:
This case study applies a hybrid PIML approach for model predictive control (MPC) of a continuous stirred-tank reactor (CSTR) where an exothermic reaction A → B takes place. The reactor is described by mass and energy balance ODEs, with uncertain reaction kinetics. A neural network is used to represent a correction to the nominal reaction rate, and its output is embedded into the reactor ODEs.
The MPC is implemented in GEKKO by:
* Defining the CSTR ODEs with the learned kinetics.
* Setting the cooling jacket heat removal (Q_cool) as the manipulated variable.
* Specifying controlled variables for reactor temperature and concentration.
* Solving the optimization problem with GEKKO control modes (IMODE=6) to maintain safe operating conditions and optimal conversion.
(:toggle hide mpc_code button show="Chemical Reactor MPC Code Example":)
(:div id=mpc_code:)
(:source lang=python:)
from gekko import GEKKO
import numpy as np
m = GEKKO(remote=False)
m.time = np.linspace(0, 2, 21)  # 2-minute control horizon
# Nominal CSTR parameters (illustrative values)
F, V = 100.0, 100.0           # feed flow [L/min] and reactor volume [L]
k0, EoverR = 7.2e10, 8750.0   # Arrhenius pre-exponential [1/min], E/R [K]
dHr, rhoCp = -5.0e4, 239.0    # heat of reaction [J/mol], rho*Cp [J/(L*K)]
C_Ain, T_in = 1.0, 350.0      # feed concentration [mol/L] and temperature [K]
# Define reactor controlled variables and manipulated variable
T = m.CV(value=350)                       # reactor temperature
T.STATUS = 1; T.SPLO = 345; T.SPHI = 355  # keep temperature in a safe band
C_A = m.Var(value=1.0)                    # concentration of A
Qcool = m.MV(value=-50, lb=-100, ub=0)    # cooling jacket heat removal
Qcool.STATUS = 1
# Reaction rate with a correction factor from the learned kinetics
f_theta = 1.0  # placeholder; replace with the imported NN output f_theta(T, C_A)
r = k0 * m.exp(-EoverR / T) * C_A * f_theta
# Mass and energy balances
m.Equation(C_A.dt() == F/V * (C_Ain - C_A) - r)
m.Equation(T.dt() == F/V * (T_in - T) + (-dHr/rhoCp) * r + Qcool/(rhoCp*V))
m.options.IMODE = 6  # control mode for MPC
m.solve(disp=False)
(:sourceend:)
(:divend:)
!!!! IV. Conclusion
to:
!!!! III. Conclusion
Changed lines 246-247 from:
to:
'''Case Study''': [[https://apmonitor.com/pds/index.php/Main/ThermophysicalProperties|Physics-Informed Learning of Thermophysical Properties]]
!!!! IV. References
Changed lines 269-270 from:
* Babaei, M.R., Stone, R., Knotts, T.A., Hedengren, J.D., Physics-Informed Neural Networks with Group Contribution Methods, Journal of Chemical Theory and Computation, American Chemical Society, 2023, DOI: 10.1021/acs.jctc.3c00195. [[https://pubs.acs.org/doi/10.1021/acs.jctc.3c00195|Article]]
to:
* Babaei, M.R., Stone, R., Knotts, T.A., Hedengren, J.D., Physics-Informed Neural Networks with Group Contribution Methods, Journal of Chemical Theory and Computation, American Chemical Society, 2023, DOI: 10.1021/acs.jctc.3c00195. [[https://pubs.acs.org/doi/10.1021/acs.jctc.3c00195|Article]]
Changed lines 291-297 from:
* Patel, D. et al. (2024). Model Predictive Control Using Physics Informed Neural Networks for Process Systems. ADCHEM 2024, IFAC-PapersOnLine, 57(9), 276–281.
to:
* Patel, D. et al. (2024). Model Predictive Control Using Physics Informed Neural Networks for Process Systems. ADCHEM 2024, IFAC-PapersOnLine, 57(9), 276–281.
* Gunnell, L., Lu, X., Vienna, J.D., Kim, D-S, Riley, B.J., Hedengren, J.D., Uncertainty propagation and sensitivity analysis for constrained optimization of nuclear waste vitrification, 2025, doi: 10.1111/jace.20446 [[https://ceramics.onlinelibrary.wiley.com/doi/10.1111/jace.20446|Article (Open-Access)]]
* Arce Munoz, S., Hedengren, J.D., Transfer Learning for Thickener Control, Processes, Special Issue: Machine Learning Optimization of Chemical Processes, 2025, 13, 223, doi: 10.3390/pr13010223 [[https://www.mdpi.com/2227-9717/13/1/223|Article]]
* Arce Munoz, S., Pershing, J., Hedengren, J.D., Physics-Informed Transfer Learning for Process Control Applications, Industrial & Engineering Chemistry Research, 2024, doi: 10.1021/acs.iecr.4c02781 [[https://pubs.acs.org/doi/10.1021/acs.iecr.4c02781|Article]]
* Gunnell, L., Nicholson, B., Hedengren, J.D., Equation-based and data-driven modeling: Open-source software current state and future directions, Computers & Chemical Engineering, 2024, 108521, ISSN 0098-1354, DOI: 10.1016/j.compchemeng.2023.108521. [[https://www.sciencedirect.com/science/article/pii/S0098135423003915|Article]]
* Park, J., Babaei, M.R., Arce Munoz, S., Venkat, A.N., Hedengren, J.D., Simultaneous Multistep Transformer Architecture for Model Predictive Control, Computers & Chemical Engineering, Volume 178, October 2023, 108396, DOI: 10.1016/j.compchemeng.2023.108396 [[https://www.sciencedirect.com/science/article/abs/pii/S0098135423002661|Article]]
* Babaei, M.R., Stone, R., Knotts, T.A., Hedengren, J.D., Physics-Informed Neural Networks with Group Contribution Methods, Journal of Chemical Theory and Computation, American Chemical Society, 2023, DOI: 10.1021/acs.jctc.3c00195. [[https://pubs.acs.org/doi/10.1021/acs.jctc.3c00195|Article]]
Added lines 6-9:
(:html:)
<iframe width="560" height="315" src="https://www.youtube.com/embed/Kwe4WFjnB5I?si=CvjcvKyBJlEDlduz" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen></iframe>
(:htmlend:)
Changed lines 274-275 from:
* Karniadakis, G. E., Kevrekidis, I. G., et al. (2021). Physics-informed machine learning. Nat. Rev. Phys., 3(6), 422–440.
[[https://www.avantipublishers.com/index.php/jaacm/article/view/1246|A Gentle Introduction to Physics-Informed Neural Networks]] | [[https://encyclopedia.pub/entry/51383|NSGA-PINN for Physics-Informed Neural Network Training]]
to:
* Karniadakis, G. E., Kevrekidis, I. G., et al. (2021). Physics-informed machine learning. Nat. Rev. Phys., 3(6), 422–440. [[https://encyclopedia.pub/entry/51383|NSGA-PINN for Physics-Informed Neural Network Training]]
Changed lines 274-288 from:
* Karniadakis, G. E., Kevrekidis, I. G., et al. (2021). *Physics-informed machine learning*. Nat. Rev. Phys., 3(6), 422–440.
[A Gentle Introduction to Physics-Informed Neural Networks](https://www.avantipublishers.com/index.php/jaacm/article/view/1246) | [NSGA-PINN for Physics-Informed Neural Network Training](https://encyclopedia.pub/entry/51383)
* Willard, J., Jia, X., et al. (2022). *Integrating Scientific Knowledge with Machine Learning for Engineering and Environmental Systems*. ACM Comput. Surv. 55(4): 73.
[Integrating Scientific Knowledge with Machine Learning for Engineering and Environmental Systems](https://ar5iv.org/pdf/2003.04919#:~:text=commercial%20applications%2C%20are%20beginning%20to,black%20box%20ML%20models%20has)
* Raissi, M., Perdikaris, P., & Karniadakis, G. (2019). *Physics-informed neural networks: A deep learning framework for solving forward and inverse problems involving nonlinear PDEs*. J. Comput. Phys., 378, 686–707.
* Greydanus, S., Dzamba, M., & Yosinski, J. (2019). *Hamiltonian Neural Networks*. NeurIPS 2019.
[Hamiltonian Neural Networks](https://ar5iv.labs.arxiv.org/html/1906.01563#:~:text=the%20basic%20laws%20of%20physics,is%20perfectly%20reversible%20in%20time)
* Karpatne,A., Watkins, W., Read, J., & Kumar, V. (2017). *Physics-guided Neural Networks (PGNN): An Application in Lake Temperature Modeling*. SIAM Int. Conf. on Data Mining (SDM).
[Integrating Scientific Knowledge with Machine Learning for Engineering and Environmental Systems](https://ar5iv.org/pdf/2003.04919#:~:text=Karpatne%20et%20al,water%2C%20a%20known%20monotonic%20relationship)
* Parish, E. & Duraisamy, K. (2016). *A paradigm for data-driven predictive modeling using field inversion and machine learning*. J. Comput. Phys., 305, 758–774.
* GEKKO Documentation (2020). *Machine Learning in GEKKO*.
[Machine Learning — GEKKO 1.3.0 documentation](https://gekko.readthedocs.io/en/latest/ml.html#:~:text=,informed%20hybrid%20modeling)
* Sanyal, S. & Roy, K. (2023). *RAMP-Net: Robust Adaptive MPC for Quadrotors via Physics-informed Neural Networks*. IEEE Int. Conf. Robotics and Automation (ICRA).
* Zheng, Y. & Wu, W. (2023). *Physics-informed recurrent neural network based MPC for chemical processes*. J. Process Control, 118, 65–77.
* Patel, D. et al. (2024). *Model Predictive Control Using Physics Informed Neural Networks for Process Systems*. ADCHEM 2024, IFAC-PapersOnLine, 57(9), 276–281.
to:
* Karniadakis, G. E., Kevrekidis, I. G., et al. (2021). Physics-informed machine learning. Nat. Rev. Phys., 3(6), 422–440.
[[https://www.avantipublishers.com/index.php/jaacm/article/view/1246|A Gentle Introduction to Physics-Informed Neural Networks]] | [[https://encyclopedia.pub/entry/51383|NSGA-PINN for Physics-Informed Neural Network Training]]
* Willard, J., Jia, X., et al. (2022). Integrating Scientific Knowledge with Machine Learning for Engineering and Environmental Systems. ACM Comput. Surv. 55(4): 73.
[[https://ar5iv.org/pdf/2003.04919|Integrating Scientific Knowledge with Machine Learning for Engineering and Environmental Systems]]
* Raissi, M., Perdikaris, P., & Karniadakis, G. (2019). Physics-informed neural networks: A deep learning framework for solving forward and inverse problems involving nonlinear PDEs. J. Comput. Phys., 378, 686–707.
* Greydanus, S., Dzamba, M., & Yosinski, J. (2019). Hamiltonian Neural Networks. NeurIPS 2019.
[[https://ar5iv.labs.arxiv.org/html/1906.01563|Hamiltonian Neural Networks]]
* Karpatne, A., Watkins, W., Read, J., & Kumar, V. (2017). Physics-guided Neural Networks (PGNN): An Application in Lake Temperature Modeling. SIAM Int. Conf. on Data Mining (SDM).
[[https://ar5iv.org/pdf/2003.04919|Integrating Scientific Knowledge with Machine Learning for Engineering and Environmental Systems]]
* Parish, E. & Duraisamy, K. (2016). A paradigm for data-driven predictive modeling using field inversion and machine learning. J. Comput. Phys., 305, 758–774.
* GEKKO Documentation (2020). Machine Learning in GEKKO.
[[https://gekko.readthedocs.io/en/latest/ml.html|Machine Learning — GEKKO documentation]]
* Sanyal, S. & Roy, K. (2023). RAMP-Net: Robust Adaptive MPC for Quadrotors via Physics-informed Neural Networks. IEEE Int. Conf. Robotics and Automation (ICRA).
* Zheng, Y. & Wu, W. (2023). Physics-informed recurrent neural network based MPC for chemical processes. J. Process Control, 118, 65–77.
* Patel, D. et al. (2024). Model Predictive Control Using Physics Informed Neural Networks for Process Systems. ADCHEM 2024, IFAC-PapersOnLine, 57(9), 276–281.
Changed line 1 from:
(:title Physics-Informed Machine Learning for Dynamic Optimization in Engineering:)
to:
(:title Physics-Informed Machine Learning:)
Changed lines 230-235 from:
|| **Feature Engineering** || Simple to implement; leverages domain insight; improves interpretability || Requires accurate *a priori* identification of key physical features; may not capture complex dynamics fully ||
||**Custom Neural Architectures** || Guarantees adherence to physical laws (e.g. conservation) by design; yields physically interpretable representations || Requires specialized knowledge for design; less flexible if system deviates from assumed physics ||
||**Physics-Based Loss (PINNs)** || General and flexible; learns solutions that satisfy PDE/ODE constraints even with limited data; naturally addresses inverse problems || Training can be slow and challenging; balancing physics and data loss requires careful tuning ||
||**Hybrid Modeling** || Combines the strengths of first-principles and ML; improved accuracy with limited data; high interpretability of known physics components || Integration can be complex; interfacing ML with simulators requires careful scaling and calibration ||
||**Data Assimilation & Regularization** || Adaptive correction with real-time data; enhances robustness; gently guides the solution towards physical plausibility || Additional computational overhead; tuning of assimilation parameters is often problem-dependent ||
to:
|| '''Feature Engineering''' || Simple to implement; leverages domain insight; improves interpretability || Requires accurate ''a priori'' identification of key physical features; may not capture complex dynamics fully ||
|| '''Custom Neural Architectures''' || Guarantees adherence to physical laws (e.g. conservation) by design; yields physically interpretable representations || Requires specialized knowledge for design; less flexible if system deviates from assumed physics ||
|| '''Physics-Based Loss (PINNs)''' || General and flexible; learns solutions that satisfy PDE/ODE constraints even with limited data; naturally addresses inverse problems || Training can be slow and challenging; balancing physics and data loss requires careful tuning ||
|| '''Hybrid Modeling''' || Combines the strengths of first-principles and ML; improved accuracy with limited data; high interpretability of known physics components || Integration can be complex; interfacing ML with simulators requires careful scaling and calibration ||
|| '''Data Assimilation & Regularization''' || Adaptive correction with real-time data; enhances robustness; gently guides the solution towards physical plausibility || Additional computational overhead; tuning of assimilation parameters is often problem-dependent ||
Changed lines 241-244 from:
to:
* Defining the CSTR ODEs with the learned kinetics.
* Setting the cooling jacket heat removal (Q_cool) as the manipulated variable.
* Specifying controlled variables for reactor temperature and concentration.
* Solving the optimization problem with GEKKO control modes (IMODE=6) to maintain safe operating conditions and optimal conversion.
Changed line 1 from:
(:title Physics-Informed Machine Learning for Dynamic Optimization:)
to:
(:title Physics-Informed Machine Learning for Dynamic Optimization in Engineering:)
Changed lines 3-20 from:
(:description Survey of Physics-Informed Machine Learning methods for dynamic simulation and optimization in engineering, with Python examples using PyTorch, GEKKO, and scikit-learn.:)
Physics-Informed Machine Learning (PIML) combines scientific domain knowledge with machine learning techniques, creating models that respect physical laws and constraints. These methods are particularly valuable in dynamic optimization, enabling accurate and interpretable predictions and control decisions for engineering systems.
Key methods to inject physics into ML models include:
-> '''I''': Physics-based Feature Engineering
-> '''II''': Custom Neural Network Architectures
-> '''III''': Physics-based Constraints (Soft Constraints)
-> '''IV''': Hybrid Modeling (Physics + ML)
-> '''V''': Data Assimilation and Regularization
!!!! I. Physics-based Feature Engineering
Using features derived from known physical laws improves model interpretability and efficiency.
(:toggle hide feature_engineering button show="Feature Engineering Example":)
(:div id=feature_engineering:)
to:
(:description Survey of Physics-Informed Machine Learning methods for dynamic simulation and optimization in engineering, with Python examples using PyTorch, GEKKO, and scikit-learn:)
Physics-Informed Machine Learning (PIML) is an emerging paradigm that integrates scientific domain knowledge (physical laws, constraints, or models) into machine learning workflows. In engineering systems—especially dynamic systems governed by differential equations—purely data-driven models often struggle with generalization and physical consistency. PIML addresses these challenges by blending first-principles physics with data-driven learning, thereby reducing required data, improving extrapolation, and ensuring that model outputs obey known physical rules. This survey outlines key methods to inject physics into ML models, presents code examples for simple dynamic systems (e.g. damped oscillators, reactors, thermal systems), and reviews both foundational and recent literature. Optimization applications, such as model predictive control (MPC), are emphasized alongside forward simulation.
!!!! I. Methods of Injecting Physics into ML Models
PIML techniques can be broadly categorized into several approaches which, when combined appropriately, yield models that are both accurate and interpretable.
!!!!! A. Physics-Based Feature Engineering
Feature engineering uses physical insights—such as conservation laws, symmetry, and nondimensional numbers—to construct input features that better capture underlying dynamics.
For example, consider a pendulum where the true acceleration depends on sin(θ) rather than θ itself. Using sin(θ) as a feature allows even a linear regression model to capture the nonlinear dependence of acceleration on the angle.
(:toggle hide feature_engineering_code button show="Feature Engineering Code Example":)
(:div id=feature_engineering_code:)
Changed lines 23-28 from:
# Pendulum dynamics
theta = np.linspace(-np.pi, np.pi, 100).reshape(-1,1)
acc_true = -(9.81) * np.sin(theta)
# Linear model using sin(theta) as feature
X_feat = np.sin(theta)
to:
# Generate synthetic data for a pendulum
g, L = 9.81, 1.0
theta = np.linspace(-np.pi, np.pi, 100).reshape(-1, 1) # angles from -π to π
acc_true = -(g/L) * np.sin(theta) # true angular acceleration
# Model 1: Linear regression on raw angle (theta)
model_raw = LinearRegression().fit(theta, acc_true)
pred_raw = model_raw.predict(theta)
# Model 2: Linear regression on sin(theta) as feature
X_feat = np.sin(theta) # physics-inspired feature
Added lines 36-41:
# Evaluate at a sample angle
test_angle = np.array([[50 * np.pi / 180]]) # 50 degrees in radians
print("True acceleration:", -(g/L) * np.sin(test_angle))
print("Linear model (raw theta) prediction:", model_raw.predict(test_angle))
print("Linear model (sin(theta)) prediction:", model_feat.predict(np.sin(test_angle)))
Changed lines 45-50 from:
!!!! II. Custom Neural Network Architectures
Architectures embedding physical structures (e.g., Hamiltonian Neural Networks) ensure adherence to physics by design.
(:toggle hide hnn_example button show="Hamiltonian Neural Network Example":)
(:div id=hnn_example:)
to:
!!!!! B. Custom Neural Network Architectures with Physical Structure
Neural network architectures can be designed so that they inherently respect physical laws. Examples include Hamiltonian Neural Networks (HNNs) and Lagrangian Neural Networks. These models embed conservation laws into their topology, such that—for instance—a Hamiltonian network is constructed so that
dq/dt = ∂H/∂p
dp/dt = –∂H/∂q
are satisfied automatically.
(:toggle hide hnn_code button show="Hamiltonian Neural Network Code Example":)
(:div id=hnn_code:)
Added lines 56-57:
# Define a simple HNN model: input (q, p) -> output H (the system energy)
Changed line 59 from:
    def __init__(self):
to:
    def __init__(self, hidden_dim=32):
Changed lines 62-65 from:
            torch.nn.Linear(16,1))
    def forward(self,q,p):
        return self.net(torch.stack([q,p],dim=-1)).squeeze(-1)
to:
            torch.nn.Linear(2, hidden_dim),
            torch.nn.Tanh(),
            torch.nn.Linear(hidden_dim, 1)  # output: scalar H
        )
    def forward(self, q, p):
        # Stack q and p to form the input vector and return H(q, p)
        return self.net(torch.stack([q, p], dim=-1)).squeeze(-1)
hnn = HNN(hidden_dim=16)
optimizer = torch.optim.Adam(hnn.parameters(), lr=0.01)
# Synthetic data: small oscillations of a mass-spring system (q: position, p: momentum)
t = torch.linspace(0, 10, steps=101)
q_data = torch.cos(t)
p_data = -torch.sin(t)
dq_data = torch.gradient(q_data, spacing=(t,))[0]  # true dq/dt = p for m=1
dp_data = torch.gradient(p_data, spacing=(t,))[0]  # true dp/dt = -q for k=1
# Training loop to enforce Hamilton's equations as loss
for epoch in range(1000):
    idx = torch.randint(0, len(t), (32,))
    # requires_grad so that dH/dq and dH/dp can be taken at the sampled states
    q = q_data[idx].clone().requires_grad_(True)
    p = p_data[idx].clone().requires_grad_(True)
    H = hnn(q, p)
    # Compute gradients of H with respect to p and q via autograd
    dH_dp = torch.autograd.grad(H, p, grad_outputs=torch.ones_like(H),
                                retain_graph=True, create_graph=True)[0]
    dH_dq = torch.autograd.grad(H, q, grad_outputs=torch.ones_like(H),
                                retain_graph=True, create_graph=True)[0]
    dq_pred = dH_dp   # predicted dq/dt
    dp_pred = -dH_dq  # predicted dp/dt
    dq_true = dq_data[idx]
    dp_true = dp_data[idx]
    loss = torch.mean((dq_pred - dq_true)**2 + (dp_pred - dp_true)**2)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
q_test, p_test = 1.0, 0.0
H_pred = hnn(torch.tensor(q_test), torch.tensor(p_test)).item()
# Note: H is learned only up to an additive constant
print("Learned Hamiltonian at (q=1, p=0):", H_pred)
print("True Hamiltonian at (q=1, p=0):", 0.5 * q_test**2 + 0.5 * p_test**2)
Changed lines 106-111 from:
!!!! III. Physics-based Constraints (Soft Constraints)
Physics-Informed Neural Networks (PINNs) incorporate physics directly into the loss function.
(:toggle hide pinn_example button show="PINN Damped Oscillator Example":)
(:div id=pinn_example:)
to:
!!!!! C. Physics-Based Constraints in the Loss Function (Soft Constraints)
Physics-Informed Neural Networks (PINNs) integrate the governing equations directly into the loss function as penalty terms. The loss penalizes the residual of the underlying PDE or ODE, so that the neural network learns a solution that satisfies both the data and the physics.
(:toggle hide pinn_code button show="PINN Damped Oscillator Code Example":)
(:div id=pinn_code:)
Changed lines 114-115 from:
# ODE residual as physics loss
to:
# Known parameters for a damped oscillator: m*ddx + c*dx + k*x = 0, with m=1, k=1, c=0.1
c, k = 0.1, 1.0
# Neural network representing x(t)
model = torch.nn.Sequential(
    torch.nn.Linear(1, 20),
    torch.nn.Tanh(),
    torch.nn.Linear(20, 1)
)
# Collocation points in time domain
t_colloc = torch.linspace(0, 10, steps=100, requires_grad=True).reshape(-1, 1)
optimizer = torch.optim.Adam(model.parameters(), lr=0.01)
for epoch in range(1000):
    x_pred = model(t_colloc)
    # Compute first and second derivatives with respect to time
    dx_pred = torch.autograd.grad(x_pred, t_colloc, grad_outputs=torch.ones_like(x_pred),
                                  retain_graph=True, create_graph=True)[0]
    ddx_pred = torch.autograd.grad(dx_pred, t_colloc, grad_outputs=torch.ones_like(dx_pred),
                                   retain_graph=True, create_graph=True)[0]
    # Compute ODE residual: ddx + c*dx + k*x should be zero
    ode_residual = ddx_pred + c * dx_pred + k * x_pred
    phys_loss = torch.mean(ode_residual**2)
    # Enforce initial conditions: x(0) = 1, dx(0) = 0 (differentiate at t = 0)
    t0 = torch.zeros(1, 1, requires_grad=True)
    x0_pred = model(t0)
    dx0_pred = torch.autograd.grad(x0_pred, t0, grad_outputs=torch.ones(1, 1),
                                   create_graph=True)[0]
    ic_loss = torch.mean((x0_pred - 1.0)**2 + (dx0_pred - 0.0)**2)
    loss = phys_loss + 100 * ic_loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
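# Illustrative check (added sketch): compare the trained network with the
# analytic underdamped solution of x'' + c*x' + k*x = 0, x(0)=1, x'(0)=0
import numpy as np
zeta = c / 2.0               # decay rate for m = 1
wd = (k - zeta**2) ** 0.5    # damped natural frequency
t_np = np.linspace(0, 10, 50)
x_exact = np.exp(-zeta * t_np) * (np.cos(wd * t_np) + (zeta / wd) * np.sin(wd * t_np))
t_test = torch.tensor(t_np, dtype=torch.float32).reshape(-1, 1)
x_pinn = model(t_test).detach().numpy().ravel()
print("Max |PINN - analytic|:", float(np.max(np.abs(x_pinn - x_exact))))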
Changed lines 150-155 from:
!!!! IV. Hybrid Modeling (Physics + ML)
Combines first principles with ML models, improving generalization and interpretability.
(:toggle hide hybrid_example button show="Hybrid Modeling Example with GEKKO":)
(:div id=hybrid_example:)
to:
!!!!! D. Hybrid Modeling with Differential Equations or Simulators
Hybrid modeling—also known as gray-box modeling—combines known physics with data-driven ML components. This can take the form of serial, parallel, or component-replacement strategies.
For instance, one can replace an unknown reaction rate term in a chemical process model with a neural network learned from data, and then embed this hybrid model into an ODE solver.
(:toggle hide hybrid_code button show="Hybrid Modeling Code Example with GEKKO":)
(:div id=hybrid_code:)
Added line 159:
import numpy as np
Changed lines 162-178 from:
m = GEKKO(); x = m.Var()
ml_model = ML.Gekko_NN_SKlearn(mlp, scaler_minmax, m)
m.Equation(x.dt() == -0.5*x + ml_model.predict([x]))
(:sourceend:)
(:divend:)
!!!! V. Data Assimilation and Regularization
Continuous model refinement using new data ensures accuracy and robustness.
(:toggle hide assimilation_example button show="Data Assimilation Example with GEKKO":)
(:div id=assimilation_example:)
(:source lang=python:)
from gekko import GEKKO
m = GEKKO()
k = m.FV(value=1.0); k.STATUS=1
to:
# Create training data for the unknown physics f_true(x) = -0.2*x^2
X = np.linspace(0, 2, 21).reshape(-1, 1)
y = -0.2 * X.ravel()**2
mlp = MLPRegressor(hidden_layer_sizes=(5,), activation='tanh', max_iter=5000).fit(X, y)
# Set up GEKKO model and import the trained MLP
m = GEKKO(remote=False)
m.time = np.linspace(0, 5, 101) # simulate 5 seconds
Changed lines 172-173 from:
m
to:
# Create scaling for GEKKO's ML interface (custom scaler with min/max values)
data = np.concatenate([X, y.reshape(-1, 1)], axis=1)
# Assume the scaler is defined with known min/max (details omitted for brevity)
scaler = ML.CustomMinMaxGekkoScaler(data, features=[0], label=[1])
mma = scaler.minMaxValues()
ml_model = ML.Gekko_NN_SKlearn(mlp, mma, m)
f = ml_model.predict([x])
# Define the hybrid ODE: dx/dt = -0.5*x + f(x)
m.Equation(x.dt() == -0.5 * x + f)
m.options.IMODE = 4 # dynamic simulation mode
m.solve(disp=False)
print("Hybrid model final x:", x.value[-1])
Changed lines 188-193 from:
!!!! Case Study: MPC for Chemical Reactor using Hybrid PIML
Hybrid physics-informed model for reactor kinetics enhances MPC accuracy.
(:toggle hide mpc_example button show="Chemical Reactor MPC Example":)
(:div id=mpc_example:)
to:
!!!!! E. Data Assimilation and Regularization with Physical Priors
Data assimilation techniques—such as Kalman filters or Moving Horizon Estimation (MHE)—continuously update the model state or parameters by blending model predictions with measurements. Regularization with physical priors (e.g., smoothness, monotonicity) further guides the learning towards physically plausible solutions.
(:toggle hide assimilation_code button show="Data Assimilation Code Example with GEKKO":)
(:div id=assimilation_code:)
Changed lines 196-198 from:
# Reactor model with learned kinetics
m.options.IMODE=6; m.solve()
to:
import numpy as np
# True system: dx/dt = -k_true * x, with k_true = 0.4
k_true = 0.4
t_points = np.linspace(0, 5, 51)
x_true = np.exp(-k_true * t_points)
# Noisy measurements of x over the horizon
np.random.seed(0)
x_meas = x_true + np.random.normal(0, 0.01, len(t_points))
# GEKKO model for parameter estimation (MHE)
m = GEKKO(remote=False)
m.time = t_points
k = m.FV(value=1.0, lb=0, ub=2)
k.STATUS = 1   # allow optimization to adjust k
x = m.CV(value=x_meas)
x.FSTATUS = 1  # use the measurements in the estimation objective
m.Equation(x.dt() == -k * x)
m.options.IMODE = 5    # MHE mode
m.options.EV_TYPE = 2  # squared-error objective
m.solve(disp=False)
print("Estimated k:", k.value[0])
print("Predicted x(5):", x.value[-1], "Measured x(5):", x_meas[-1])
Changed lines 226-227 from:
!!!! Summary of Methods
to:
!!!! II. Comparison of PIML Techniques
Changed lines 229-240 from:
|| Feature Engineering || Simple, interpretable || Requires known physics ||
|| Neural Architectures || Hard physics constraints || Less flexible ||
|| Physics Constraints || General, flexible || Difficult tuning ||
|| Hybrid Modeling || Accurate, interpretable || Complex interfaces ||
|| Data Assimilation || Adaptive, robust || Computationally intensive ||
!!!! References
* Raissi, M. et al. (2019). ''Physics-informed neural networks''. [[https://doi.org/10.1016/j.jcp.2018.10.045|Journal of Computational Physics]].
* Greydanus, S. et al. (2019). ''Hamiltonian Neural Networks''. [[https://arxiv.org/abs/1906.01563|NeurIPS 2019]].
* Karniadakis, G. E. et al. (2021). ''Physics-informed machine learning''. [[https://doi.org/10.1038/s42254-021-00314-5|Nature Reviews Physics]].
to:
|| '''Approach''' || '''Strengths''' || '''Limitations''' ||
|| **Feature Engineering** || Simple to implement; leverages domain insight; improves interpretability || Requires accurate *a priori* identification of key physical features; may not capture complex dynamics fully ||
|| **Custom Neural Architectures** || Guarantees adherence to physical laws (e.g. conservation) by design; yields physically interpretable representations || Requires specialized knowledge for design; less flexible if system deviates from assumed physics ||
|| **Physics-Based Loss (PINNs)** || General and flexible; learns solutions that satisfy PDE/ODE constraints even with limited data; naturally addresses inverse problems || Training can be slow and challenging; balancing physics and data loss requires careful tuning ||
|| **Hybrid Modeling** || Combines the strengths of first-principles and ML; improved accuracy with limited data; high interpretability of known physics components || Integration can be complex; interfacing ML with simulators requires careful scaling and calibration ||
|| **Data Assimilation & Regularization** || Adaptive correction with real-time data; enhances robustness; gently guides the solution towards physical plausibility || Additional computational overhead; tuning of assimilation parameters is often problem-dependent ||
!!!! III. Case Study: Physics-Informed Learning in MPC of a Chemical Reactor
This case study applies a hybrid PIML approach for model predictive control (MPC) of a continuous stirred-tank reactor (CSTR) where an exothermic reaction A → B takes place. The reactor is described by mass and energy balance ODEs, with uncertain reaction kinetics. A neural network is used to represent a correction to the nominal reaction rate, and its output is embedded into the reactor ODEs.
The MPC is implemented in GEKKO by:
- Defining the CSTR ODEs with the learned kinetics.
- Setting the cooling jacket heat removal (Q_cool) as the manipulated variable.
- Specifying controlled variables for reactor temperature and concentration.
- Solving the optimization problem with GEKKO’s control modes (IMODE=6 or 8) to maintain safe operating conditions and optimal conversion.
(:toggle hide mpc_code button show="Chemical Reactor MPC Code Example":)
(:div id=mpc_code:)
(:source lang=python:)
from gekko import GEKKO
import numpy as np
m = GEKKO(remote=False)
m.time = np.linspace(0, 2, 21)  # 2-minute control horizon
# Nominal CSTR parameters (illustrative values)
F, V = 100.0, 100.0           # feed flow [L/min] and reactor volume [L]
k0, EoverR = 7.2e10, 8750.0   # Arrhenius pre-exponential [1/min], E/R [K]
dHr, rhoCp = -5.0e4, 239.0    # heat of reaction [J/mol], rho*Cp [J/(L*K)]
C_Ain, T_in = 1.0, 350.0      # feed concentration [mol/L] and temperature [K]
# Define reactor controlled variables and manipulated variable
T = m.CV(value=350)                       # reactor temperature
T.STATUS = 1; T.SPLO = 345; T.SPHI = 355  # keep temperature in a safe band
C_A = m.Var(value=1.0)                    # concentration of A
Qcool = m.MV(value=-50, lb=-100, ub=0)    # cooling jacket heat removal
Qcool.STATUS = 1
# Reaction rate with a correction factor from the learned kinetics
f_theta = 1.0  # placeholder; replace with the imported NN output f_theta(T, C_A)
r = k0 * m.exp(-EoverR / T) * C_A * f_theta
# Mass and energy balances
m.Equation(C_A.dt() == F/V * (C_Ain - C_A) - r)
m.Equation(T.dt() == F/V * (T_in - T) + (-dHr/rhoCp) * r + Qcool/(rhoCp*V))
m.options.IMODE = 6  # control mode for MPC
m.solve(disp=False)
(:sourceend:)
(:divend:)
!!!! IV. Conclusion
Physics-informed machine learning represents a significant advancement in bridging the gap between black-box AI and traditional engineering practices. By embedding physical laws into the ML modeling process—whether through feature engineering, custom neural architectures, physics-based loss functions, hybrid modeling, or data assimilation—engineers obtain models that are more data-efficient, generalizable, and interpretable. This synthesis of physics and data empowers more reliable dynamic optimization and control strategies, as demonstrated by the presented MPC case study for a chemical reactor.
Looking ahead, active research is exploring not only parameter estimation but also the discovery of entire physical laws from data, further enhancing the robustness and interpretability of PIML in complex engineering applications.
!!!! V. References
* Karniadakis, G. E., Kevrekidis, I. G., et al. (2021). *Physics-informed machine learning*. Nat. Rev. Phys., 3(6), 422–440.
[A Gentle Introduction to Physics-Informed Neural Networks](https://www.avantipublishers.com/index.php/jaacm/article/view/1246) | [NSGA-PINN for Physics-Informed Neural Network Training](https://encyclopedia.pub/entry/51383)
* Willard, J., Jia, X., et al. (2022). *Integrating Scientific Knowledge with Machine Learning for Engineering and Environmental Systems*. ACM Comput. Surv. 55(4): 73.
[Integrating Scientific Knowledge with Machine Learning for Engineering and Environmental Systems](https://ar5iv.org/pdf/2003.04919#:~:text=commercial%20applications%2C%20are%20beginning%20to,black%20box%20ML%20models%20has)
* Raissi, M., Perdikaris, P., & Karniadakis, G. (2019). *Physics-informed neural networks: A deep learning framework for solving forward and inverse problems involving nonlinear PDEs*. J. Comput. Phys., 378, 686–707.
* Greydanus, S., Dzamba, M., & Yosinski, J. (2019). *Hamiltonian Neural Networks*. NeurIPS 2019.
[Hamiltonian Neural Networks](https://ar5iv.labs.arxiv.org/html/1906.01563#:~:text=the%20basic%20laws%20of%20physics,is%20perfectly%20reversible%20in%20time)
* Karpatne, A., Watkins, W., Read, J., & Kumar, V. (2017). *Physics-guided Neural Networks (PGNN): An Application in Lake Temperature Modeling*. SIAM Int. Conf. on Data Mining (SDM).
[Integrating Scientific Knowledge with Machine Learning for Engineering and Environmental Systems](https://ar5iv.org/pdf/2003.04919#:~:text=Karpatne%20et%20al,water%2C%20a%20known%20monotonic%20relationship)
* Parish, E. & Duraisamy, K. (2016). *A paradigm for data-driven predictive modeling using field inversion and machine learning*. J. Comput. Phys., 305, 758–774.
* GEKKO Documentation (2020). *Machine Learning in GEKKO*.
[Machine Learning — GEKKO 1.3.0 documentation](https://gekko.readthedocs.io/en/latest/ml.html#:~:text=,informed%20hybrid%20modeling)
* Sanyal, S. & Roy, K. (2023). *RAMP-Net: Robust Adaptive MPC for Quadrotors via Physics-informed Neural Networks*. IEEE Int. Conf. Robotics and Automation (ICRA).
* Zheng, Y. & Wu, W. (2023). *Physics-informed recurrent neural network based MPC for chemical processes*. J. Process Control, 118, 65–77.
* Patel, D. et al. (2024). *Model Predictive Control Using Physics Informed Neural Networks for Process Systems*. ADCHEM 2024, IFAC-PapersOnLine, 57(9), 276–281.
Added lines 1-127:
(:title Physics-Informed Machine Learning for Dynamic Optimization:)
(:keywords Physics-Informed Machine Learning, dynamic optimization, PyTorch, GEKKO, scikit-learn, engineering optimization, machine learning, MPC:)
(:description Survey of Physics-Informed Machine Learning methods for dynamic simulation and optimization in engineering, with Python examples using PyTorch, GEKKO, and scikit-learn.:)
Physics-Informed Machine Learning (PIML) combines scientific domain knowledge with machine learning techniques, creating models that respect physical laws and constraints. These methods are particularly valuable in dynamic optimization, enabling accurate and interpretable predictions and control decisions for engineering systems.
Key methods to inject physics into ML models include:
-> '''I''': Physics-based Feature Engineering
-> '''II''': Custom Neural Network Architectures
-> '''III''': Physics-based Constraints (Soft Constraints)
-> '''IV''': Hybrid Modeling (Physics + ML)
-> '''V''': Data Assimilation and Regularization
!!!! I. Physics-based Feature Engineering
Using features derived from known physical laws improves model interpretability and efficiency.
(:toggle hide feature_engineering button show="Feature Engineering Example":)
(:div id=feature_engineering:)
(:source lang=python:)
import numpy as np
from sklearn.linear_model import LinearRegression
# Pendulum dynamics
theta = np.linspace(-np.pi, np.pi, 100).reshape(-1,1)
acc_true = -(9.81) * np.sin(theta)
# Linear model using sin(theta) as feature
X_feat = np.sin(theta)
model_feat = LinearRegression().fit(X_feat, acc_true)
pred_feat = model_feat.predict(X_feat)
(:sourceend:)
(:divend:)
!!!! II. Custom Neural Network Architectures
Architectures embedding physical structures (e.g., Hamiltonian Neural Networks) ensure adherence to physics by design.
(:toggle hide hnn_example button show="Hamiltonian Neural Network Example":)
(:div id=hnn_example:)
(:source lang=python:)
import torch
class HNN(torch.nn.Module):
    # learn a scalar Hamiltonian H(q,p); the dynamics follow from its gradients
    def __init__(self):
        super().__init__()
        self.net = torch.nn.Sequential(
            torch.nn.Linear(2,16), torch.nn.Tanh(),
            torch.nn.Linear(16,1))
    def forward(self,q,p):
        return self.net(torch.stack([q,p],dim=-1)).squeeze(-1)
    def time_derivative(self,q,p):
        # Hamilton's equations: dq/dt = dH/dp, dp/dt = -dH/dq
        H = self.forward(q,p)
        dHdq, dHdp = torch.autograd.grad(H.sum(), (q,p), create_graph=True)
        return dHdp, -dHdq
(:sourceend:)
(:divend:)
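A brief usage sketch (assuming the ''time_derivative'' method above, which is an illustrative addition): the learned dynamics can be rolled out with an explicit Euler step, and because they are the symplectic gradient of a scalar H(q,p), the learned energy is conserved up to integration error:
(:source lang=python:)
hnn = HNN()   # untrained here; training fits H to observed trajectories
q = torch.tensor([1.0], requires_grad=True)
p = torch.tensor([0.0], requires_grad=True)
dt = 0.01
for _ in range(100):                  # Euler rollout of the learned dynamics
    dq, dp = hnn.time_derivative(q, p)
    q = (q + dt*dq).detach().requires_grad_(True)
    p = (p + dt*dp).detach().requires_grad_(True)
(:sourceend:)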
!!!! III. Physics-based Constraints (Soft Constraints)
Physics-Informed Neural Networks (PINNs) incorporate physics as soft constraints by penalizing the residual of the governing differential equation in the loss function.
(:toggle hide pinn_example button show="PINN Damped Oscillator Example":)
(:div id=pinn_example:)
(:source lang=python:)
import torch
model = torch.nn.Sequential(torch.nn.Linear(1,20), torch.nn.Tanh(), torch.nn.Linear(20,1))
# the ODE residual is penalized as a physics loss (see the sketch below)
(:sourceend:)
(:divend:)
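A minimal sketch of the physics loss for a damped oscillator x'' + mu x' + k x = 0 (mu and k are assumed values chosen for illustration): the ODE residual is evaluated at collocation points with automatic differentiation and penalized alongside any data-fit loss:
(:source lang=python:)
import torch
model = torch.nn.Sequential(torch.nn.Linear(1,20), torch.nn.Tanh(),
                            torch.nn.Linear(20,1))
mu, k = 0.4, 4.0                          # assumed damping and stiffness
t = torch.linspace(0, 10, 100).reshape(-1,1).requires_grad_(True)
x = model(t)                              # NN approximation of x(t)
dx  = torch.autograd.grad(x,  t, torch.ones_like(x),  create_graph=True)[0]
d2x = torch.autograd.grad(dx, t, torch.ones_like(dx), create_graph=True)[0]
# penalize the ODE residual at the collocation points
physics_loss = torch.mean((d2x + mu*dx + k*x)**2)
(:sourceend:)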
!!!! IV. Hybrid Modeling (Physics + ML)
Hybrid modeling combines first-principles equations with ML submodels that capture the unmodeled dynamics, improving generalization and interpretability.
(:toggle hide hybrid_example button show="Hybrid Modeling Example with GEKKO":)
(:div id=hybrid_example:)
(:source lang=python:)
import numpy as np
from sklearn.neural_network import MLPRegressor
from gekko import GEKKO, ML
# X, y: training data for the unmodeled part of the dynamics (see sketch below);
# scaler_minmax: min/max scaling info used to import the network into GEKKO
mlp = MLPRegressor().fit(X, y)
m = GEKKO(); m.time = np.linspace(0, 5, 26)   # dynamic horizon
x = m.Var(value=1.0)
# known physics (-0.5*x) plus the learned NN correction term
ml_model = ML.Gekko_NN_SKlearn(mlp, scaler_minmax, m)
m.Equation(x.dt() == -0.5*x + ml_model.predict([x]))
(:sourceend:)
(:divend:)
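The snippet above assumes the training data ''X'', ''y'' and the scaling information ''scaler_minmax'' were prepared beforehand. A minimal sketch (synthetic data with a hypothetical quadratic unmodeled term) of fitting the network only to the residual that the known physics does not explain:
(:source lang=python:)
import numpy as np
from sklearn.neural_network import MLPRegressor
# synthetic measurements of dx/dt with an unmodeled quadratic term
X = np.linspace(0, 2, 50).reshape(-1,1)
dxdt_meas = -0.5*X.ravel() + 0.1*X.ravel()**2
# fit the NN to what the known physics (-0.5*x) does not explain
y = dxdt_meas - (-0.5*X.ravel())
mlp = MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000).fit(X, y)
(:sourceend:)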
!!!! V. Data Assimilation and Regularization
Estimating uncertain parameters from streaming measurements (e.g., moving horizon estimation) keeps the model accurate and robust as operating conditions change.
(:toggle hide assimilation_example button show="Data Assimilation Example with GEKKO":)
(:div id=assimilation_example:)
(:source lang=python:)
import numpy as np
from gekko import GEKKO
m = GEKKO()
t = np.linspace(0, 2, 11); m.time = t    # estimation horizon
k = m.FV(value=1.0); k.STATUS = 1        # uncertain parameter, estimated
x = m.CV(value=1.0); x.FSTATUS = 1       # measured state drives the estimate
x.value = np.exp(-1.5*t)                 # synthetic measurements (true k = 1.5)
m.Equation(x.dt() == -k*x)
m.options.IMODE = 5                      # moving horizon estimation (MHE)
m.solve(disp=False)
(:sourceend:)
(:divend:)
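After the solve, the estimated parameter can be inspected; with the synthetic measurements above it should move from its initial guess toward the value used to generate the data:
(:source lang=python:)
print('estimated k:', k.value[0])   # approaches 1.5 for the synthetic data
(:sourceend:)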
!!!! Case Study: MPC for Chemical Reactor using Hybrid PIML
A hybrid physics-informed model of the reactor kinetics improves the prediction accuracy of the model predictive controller (MPC).
(:toggle hide mpc_example button show="Chemical Reactor MPC Example":)
(:div id=mpc_example:)
(:source lang=python:)
import numpy as np
from gekko import GEKKO
m = GEKKO(); m.time = np.linspace(0, 10, 41)            # control horizon
T = m.CV(value=350); T.STATUS = 1; T.SP = 350           # track temperature target
Qcool = m.MV(value=0, lb=-100, ub=0); Qcool.STATUS = 1  # cooling duty (MV)
# simplified energy balance; a trained NN term would correct the kinetics
m.Equation(T.dt() == 0.1*(360 - T) + 0.5*Qcool)
m.options.IMODE = 6; m.options.CV_TYPE = 2; m.solve(disp=False)
(:sourceend:)
(:divend:)
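In receding-horizon fashion, only the first computed move is sent to the plant before the problem is re-solved at the next sample; in GEKKO the first move of a manipulated variable is available after the solve:
(:source lang=python:)
# implement only the first control move, then re-solve at the next sample
print('next cooling duty:', Qcool.NEWVAL)
(:sourceend:)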
!!!! Summary of Methods
|| border=0
|| '''Method''' || '''Strengths''' || '''Limitations''' ||
|| Feature Engineering || Simple, interpretable || Requires known physics ||
|| Neural Architectures || Hard physics constraints || Less flexible ||
|| Physics Constraints || General, flexible || Difficult tuning ||
|| Hybrid Modeling || Accurate, interpretable || Complex interfaces ||
|| Data Assimilation || Adaptive, robust || Computationally intensive ||
!!!! References
* Raissi, M. et al. (2019). ''Physics-informed neural networks''. [[https://doi.org/10.1016/j.jcp.2018.10.045|Journal of Computational Physics]].
* Greydanus, S. et al. (2019). ''Hamiltonian Neural Networks''. [[https://arxiv.org/abs/1906.01563|NeurIPS 2019]].
* Karniadakis, G. E. et al. (2021). ''Physics-informed machine learning''. [[https://doi.org/10.1038/s42254-021-00314-5|Nature Reviews Physics]].