== For piece-wise function approximation ==
PINN is unable to approximate PDEs that have strong non-linearity or sharp gradients, which commonly occur in practical fluid-flow problems. Piece-wise approximation is a long-standing practice in numerical approximation. With the capability of approximating strong non-linearity, extremely lightweight PINNs can be used to solve the PDE on much larger discrete subdomains, which increases accuracy substantially and decreases the computational load as well.

The extended physics-informed neural network (XPINN) is a generalized space-time domain decomposition approach for physics-informed neural networks (PINNs) to solve nonlinear partial differential equations on arbitrary complex-geometry domains. XPINN further pushes the boundaries of both PINNs and conservative PINNs (cPINNs), a spatial domain decomposition approach in the PINN framework tailored to conservation laws. Compared to PINN, the XPINN method has greater representation and parallelization capacity due to the deployment of multiple neural networks in smaller subdomains. Unlike cPINN, XPINN can be extended to any type of PDE. Moreover, the domain can be decomposed in any arbitrary way (in space and time), which is not possible in cPINN. Thus, XPINN offers both space and time parallelization, thereby reducing the training cost more effectively. XPINN is particularly effective for large-scale problems (involving large data sets) as well as for high-dimensional problems where a single-network PINN is not adequate. Rigorous bounds on the errors resulting from the approximation of nonlinear PDEs (the incompressible Navier–Stokes equations) with PINNs and XPINNs have been proved.

Another approach is the Deep Theory of Functional Connections (Deep TFC) framework, which reduces the solution search space of constrained problems to the subspace of neural networks that analytically satisfies the constraints.
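The subdomain coupling used by XPINN can be sketched as a composite loss: each subdomain network minimizes its own PDE residual, and interface terms tie neighbouring networks together. Below is a minimal NumPy sketch of this loss for a hypothetical 1-D problem (u'(x) = u(x) on [0, 1] split at x = 0.5), with untrained random networks and finite differences standing in for automatic differentiation:

```python
import numpy as np

rng = np.random.default_rng(0)

def make_mlp():
    """A tiny one-hidden-layer network with random (untrained) weights."""
    W1, b1 = rng.normal(size=(8, 1)), rng.normal(size=(8, 1))
    W2, b2 = rng.normal(size=(1, 8)), rng.normal(size=(1, 1))
    def u(x):  # x: 1-D array of collocation points
        h = np.tanh(W1 @ x[None, :] + b1)
        return (W2 @ h + b2).ravel()
    return u

u1, u2 = make_mlp(), make_mlp()       # one network per subdomain

def residual(u, x, dx=1e-4):
    """PDE residual u' - u, with u' taken by central finite differences
    (a stand-in for automatic differentiation)."""
    du = (u(x + dx) - u(x - dx)) / (2 * dx)
    return du - u(x)

x1 = np.linspace(0.0, 0.5, 20)        # subdomain 1 collocation points
x2 = np.linspace(0.5, 1.0, 20)        # subdomain 2 collocation points
xi = np.array([0.5])                  # interface point

# XPINN-style loss: per-subdomain PDE residuals plus interface terms
# enforcing continuity of the solution and of the residual across x = 0.5.
loss = (np.mean(residual(u1, x1) ** 2)
        + np.mean(residual(u2, x2) ** 2)
        + np.mean((u1(xi) - u2(xi)) ** 2)
        + np.mean((residual(u1, xi) - residual(u2, xi)) ** 2))
```

Each network only ever sees collocation points from its own subdomain, which is what makes the training embarrassingly parallel across subdomains up to the interface terms.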
A further improvement of the PINN and functional-interpolation approach is given by the extreme theory of functional connections (X-TFC) framework, in which a single-layer neural network and the extreme learning machine training algorithm are employed. X-TFC improves the accuracy and performance of regular PINNs, and its robustness and reliability have been demonstrated for stiff problems, optimal control, aerospace, and rarefied gas dynamics applications.
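The core device of the theory of functional connections is the "constrained expression": any free function (in X-TFC, a single-layer network trained with an extreme learning machine) is mapped to a candidate solution that satisfies the boundary conditions exactly, so training only needs to minimize the PDE residual. A minimal sketch for illustrative Dirichlet conditions u(0) = a, u(1) = b (this is a simple TFC-style construction, not the exact X-TFC formulation):

```python
import numpy as np

a, b = 2.0, -1.0  # hypothetical boundary values u(0) = a, u(1) = b

def constrained(f, x):
    """Map an arbitrary free function f to a function satisfying the
    boundary conditions u(0) = a and u(1) = b analytically."""
    return f(x) + (1 - x) * (a - f(0.0)) + x * (b - f(1.0))

# The constraints hold for *any* free function, so the boundary loss
# term of a regular PINN is eliminated from the search space entirely.
for f in (np.sin, np.cosh, lambda x: 0 * x + 7.0):
    assert abs(constrained(f, 0.0) - a) < 1e-12
    assert abs(constrained(f, 1.0) - b) < 1e-12
```

Since the boundary conditions are satisfied identically, the optimizer searches only the subspace of functions that are admissible, which is what the "reduced solution search space" above refers to.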
== Physics-informed PointNet (PIPN) for multiple sets of irregular geometries ==
Regular PINNs are only able to obtain the solution of a forward or inverse problem on a single geometry. This means that for any new geometry (computational domain), the PINN must be retrained. This limitation of regular PINNs imposes high computational costs, specifically for comprehensive investigations of geometric parameters in industrial designs. Physics-informed PointNet (PIPN) is fundamentally the result of combining PINN's loss function with PointNet: instead of using a simple fully connected neural network, PIPN uses PointNet as the core of its neural network. PointNet was primarily designed for
deep learning of 3D object classification and segmentation by the research group of
Leonidas J. Guibas. In PIPN, PointNet extracts the geometric features of the input computational domains. Thus, PIPN is able to solve the governing equations on multiple computational domains with irregular geometries simultaneously, rather than on only a single domain. The effectiveness of PIPN has been shown for
incompressible flow,
heat transfer and
linear elasticity.
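The property of PointNet that PIPN relies on is permutation invariance: a shared per-point network followed by a symmetric pooling operation yields a global feature of the geometry that does not depend on the ordering of the input points. A minimal NumPy sketch with random, untrained weights:

```python
import numpy as np

rng = np.random.default_rng(1)
W = rng.normal(size=(16, 2))  # shared weights applied to every 2-D point

def global_feature(points):
    """points: (N, 2) array of points sampled from one geometry."""
    per_point = np.tanh(points @ W.T)  # (N, 16) shared per-point MLP
    return per_point.max(axis=0)       # (16,) symmetric max-pooling

pts = rng.normal(size=(50, 2))         # a sample point cloud
shuffled = pts[rng.permutation(50)]

# Same global feature regardless of point ordering; PIPN concatenates this
# feature back onto each point before predicting the solution fields there.
assert np.allclose(global_feature(pts), global_feature(shuffled))
```

Because the geometry enters only through this feature vector, a single trained PIPN can evaluate the solution on many different (irregular) domains without retraining.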
== For inverse computations ==
Physics-informed neural networks (PINNs) have proven particularly effective in solving inverse problems within differential equations, demonstrating their applicability across science, engineering, and economics. They have been shown to be useful for solving inverse problems in a variety of fields, including nano-optics, topology optimization/characterization,
multiphase flow in porous media, and high-speed fluid flow. PINNs can also be combined with symbolic regression to discover the mathematical expressions underlying unknown parameters and functions. One example of such an application is a study on the chemical ageing of cellulose insulation material, in which PINNs are used first to discover a parameter for a set of ordinary differential equations (ODEs) and later a function solution, which is then used to find a more fitting expression via symbolic regression with a combination of operators.
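The parameter-discovery step can be illustrated with a toy inverse problem (the neural-network surrogate of a full PINN is omitted for brevity): given samples of u(t) obeying du/dt = -k·u with unknown k, the parameter is recovered by least squares on the discretized residual.

```python
import numpy as np

# Hypothetical setup: synthetic "measurement" data from du/dt = -k*u.
k_true = 1.7
t = np.linspace(0.0, 2.0, 201)
u = np.exp(-k_true * t)

du = np.gradient(u, t)              # finite-difference estimate of du/dt
# Residual du/dt + k*u = 0  =>  least-squares k = -<du, u> / <u, u>
k_est = -np.dot(du, u) / np.dot(u, u)
```

In the PINN setting, the same residual is minimized jointly over the network weights and the unknown parameter; symbolic regression is then applied afterwards to the discovered functions to produce a closed-form expression.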
== For elasticity problems ==
An ensemble of physics-informed neural networks can be applied to solve plane elasticity problems. Surrogate networks are used for the unknown functions, namely the components of the strain and stress tensors as well as the unknown displacement field. The residual network provides the residuals of the partial differential equations (PDEs) and of the boundary conditions. This approach can be extended to nonlinear elasticity problems, where the constitutive equations are nonlinear. PINNs can also be used for Kirchhoff plate bending problems with transverse distributed loads and for contact models with elastic Winkler foundations. A comparison of PINNs with classical approximation methods in mechanics, especially the Least Squares Finite Element Method, has also been outlined in the literature.
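The ensemble idea can be sketched on a 1-D elasticity analogue (a hypothetical setup, not the plane-elasticity formulation itself): three surrogate fields for displacement u, strain ε, and stress σ must jointly satisfy the kinematic, constitutive, and equilibrium equations, and the residuals of those equations form the loss. Here the surrogates are replaced by the exact fields, so all residuals vanish:

```python
import numpy as np

E, f, L = 10.0, 4.0, 1.0            # modulus, body load, bar length
x = np.linspace(0.0, L, 101)

# Stand-ins for the three surrogate networks: the exact solution of
# E*u'' + f = 0 with u(0) = u(L) = 0 and its derived fields.
u = f * x * (L - x) / (2 * E)       # displacement
eps = f * (L - 2 * x) / (2 * E)     # strain
sig = E * eps                       # stress

# The three residuals an ensemble PINN would drive to zero:
r_kin = np.gradient(u, x, edge_order=2) - eps    # kinematics: eps = du/dx
r_con = sig - E * eps                            # constitutive: sig = E*eps
r_eq = np.gradient(sig, x, edge_order=2) + f     # equilibrium: dsig/dx + f = 0
```

Splitting the unknowns across separate surrogates keeps each residual first-order in the derivatives, which is the practical appeal of the ensemble formulation.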
== For shunted piezoelectric materials ==
A multi-physics-informed neural network (multi-PINN) can be used to effectively solve static and dynamic electromechanical vibration-suppression problems with a shunted circuit. Electromechanical problems, such as those involving a cantilever piezoelectric beam, can be converted into an appropriate PINN formulation. A time-partitioning approach reduces the computational cost of the dynamic problem. The resulting system of ordinary differential equations (ODEs) is solved using a multi-PINN to accurately predict the dynamic response to principal-mode resonance excitation.
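The time-partitioning idea can be illustrated on a toy dynamic ODE (an explicit Euler integrator stands in for the per-window PINN solve): instead of solving over the full interval at once, the interval is split into windows, each starting from the previous window's final state.

```python
import numpy as np

c, u0, dt = 3.0, 1.0, 1e-3          # hypothetical decay rate, IC, step size

def solve(u_start, t0, t1):
    """Stand-in for one per-window solve of du/dt = -c*u (explicit Euler
    here; in the multi-PINN approach, one network per window)."""
    u = u_start
    for _ in range(round((t1 - t0) / dt)):
        u += dt * (-c * u)
    return u

u_global = solve(u0, 0.0, 1.0)      # one solve over the whole interval [0, 1]
u_a = solve(u0, 0.0, 0.5)           # window 1: [0, 0.5]
u_b = solve(u_a, 0.5, 1.0)          # window 2: [0.5, 1], chained from window 1
```

Chaining windows recovers the same trajectory as the global solve while each sub-problem spans a much shorter horizon, which is what makes the dynamic training tractable.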
== With backward stochastic differential equation ==
The deep backward stochastic differential equation (deep BSDE) method is a numerical method that combines deep learning with backward stochastic differential equations (BSDEs) to solve high-dimensional problems in financial mathematics. By leveraging the powerful function approximation capabilities of
deep neural networks, deep BSDE addresses the computational challenges faced by traditional numerical methods like finite difference methods or Monte Carlo simulations, which struggle with the curse of dimensionality. Deep BSDE methods use neural networks to approximate solutions of high-dimensional partial differential equations (PDEs), effectively reducing the computational burden. Additionally, integrating Physics-informed neural networks (PINNs) into the deep BSDE framework enhances its capability by embedding the underlying physical laws into the neural network architecture, ensuring solutions adhere to governing stochastic differential equations, resulting in more accurate and reliable solutions.
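A stripped-down sketch of the probabilistic representation that deep BSDE builds on (the neural networks for the initial value Y_0 and the control Z_t are omitted; Y_0 is estimated directly by Monte Carlo): for the heat equation u_t + ½·u_xx = 0 with terminal condition u(T, x) = x², the BSDE/Feynman–Kac representation gives u(0, 0) = E[g(W_T)] = T.

```python
import numpy as np

rng = np.random.default_rng(0)
T, n_steps, n_paths = 1.0, 50, 200_000
dt = T / n_steps

# Forward SDE: dX = dW, X_0 = 0, simulated by Euler-Maruyama.
x = np.zeros(n_paths)
for _ in range(n_steps):
    x += np.sqrt(dt) * rng.normal(size=n_paths)

# Terminal condition g(x) = x**2; the BSDE propagates Y backwards from
# Y_T = g(X_T), and here (driver f = 0) its initial value is just E[g(X_T)].
y0 = np.mean(x ** 2)                # estimate of u(0, 0); exact value is T
```

In the full deep BSDE method, a network parameterizes Z_t at each time step and Y_0 is a trainable scalar; both are fitted by minimizing the mismatch with the terminal condition, which scales to dimensions where grids are infeasible.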
== For biology ==
An extension or adaptation of PINNs are biologically-informed neural networks (BINNs). BINNs introduce two key adaptations to the typical PINN framework: (i) the mechanistic terms of the governing PDE are replaced by neural networks, and (ii) the loss function L_{tot} is modified to include L_{constr}, a term used to incorporate domain-specific knowledge that helps enforce biological applicability. For (i), this adaptation has the advantage of relaxing the need to specify the governing differential equation a priori, either explicitly or by using a library of candidate terms. Additionally, this approach circumvents the potential issue of misspecifying regularization terms in stricter theory-informed cases.

A natural example of BINNs can be found in cell dynamics, where the cell density u(x,t) is governed by a reaction-diffusion equation with diffusion and growth functions D(u) and G(u), respectively:

u_t = \nabla \cdot [ D(u) \nabla u ] + G(u) u , \quad x \in \Omega, \quad t \in [0, T]

In this case, a component of L_{constr} could be ||D||_\Gamma, where \Gamma is the set of inputs at which D(u) falls outside the biologically relevant diffusion range defined by D_{min} \leq D \leq D_{max}; this term penalizes such values of D. Furthermore, the BINN architecture, when utilizing
multilayer perceptrons (MLPs), would function as follows: an MLP is used to construct u_{MLP}(x,t) from the model inputs (x,t), serving as a surrogate model for the cell density u(x,t). This surrogate is then fed into two additional MLPs, D_{MLP}(u_{MLP}) and G_{MLP}(u_{MLP}), which model the diffusion and growth functions. Automatic differentiation can then be applied to compute the necessary derivatives of u_{MLP}, D_{MLP}, and G_{MLP} to form the governing reaction-diffusion equation. Note that since u_{MLP} is a surrogate for the cell density, it may contain errors, particularly in regions where the PDE is not fully satisfied. Therefore, the reaction-diffusion equation may also be solved numerically, for instance using a
method-of-lines approach.

== Limitations ==