Development of Mamadu–Njoseh Polynomial Neural Networks for Physics-Informed Learning

*Corresponding author: Dr. Ebimene James Mamadu, Department of Mathematics, Delta State University, Abraka, Nigeria. emamadu@delsu.edu.ng
How to cite this article: Mamadu EJ, Nwankwo JC, Njoseh NI. Development of Mamadu–Njoseh Polynomial Neural Networks for Physics-Informed Learning. Sci Tech Nex. 2026;2:3-13. doi: 10.25259/STN_35_2025
Abstract
Objective
The present work addresses the challenges of solving differential equations by developing an improved physics-informed neural network approach aimed at efficient convergence and precise solutions.
Material and Methods
A novel approach, termed the Mamadu–Njoseh polynomial physics-informed neural network (MNP-PINN), is introduced. This approach utilises the orthogonal Mamadu–Njoseh polynomials (MNP) as activation functions in the network architecture. Physical constraints arising from the governing differential equation are embedded into the loss function to guide the learning process. The performance of the new approach is validated by testing it on the one-dimensional Poisson equation. All computations are implemented in Python 3.11, with consistent procedures for network initialisation, collocation point generation, and training to ensure reproducibility.
Results
The results show excellent agreement of the MNP-PINN solutions with the analytical solution of the Poisson equation. The solution attains spectral accuracy, with errors consistently small throughout the solution domain. In addition, the optimisation process of the training algorithm is stable, indicating improved efficiency of the machine learning process relative to conventional PINN formulations.
Conclusion
The MNP-PINN provides a robust and highly accurate framework for the solution of differential equations. By exploiting the orthogonality and approximation properties of the MNP, it improves convergence speed and accuracy over conventional PINNs. The results indicate that the MNP-PINN framework has strong potential for further generalisation and extension to more complex models.
Keywords
Machine learning
Mamadu–Njoseh polynomials
Network activation function
Physics-informed neural networks
Poisson equation

1. INTRODUCTION
Scientific machine learning has experienced a paradigm shift in the past several years, driven by the integration of data-driven methods and physics-based modelling.[1-3] Among these advancements, Physics-Informed Neural Networks (PINNs) have been revolutionary in the solution of differential equations through the inclusion of physical constraints (governing equations, boundary conditions, and initial conditions) within the training loss function.[4,5] In contrast to traditional data-driven models that rely only on empirical data, PINNs enforce compliance with the underlying physical laws, thus recovering generalisable, interpretable, and physically relevant solutions.[6,7]
However, despite their success, conventional PINNs still suffer from consistent limitations. Traditional activation functions, such as tanh, the rectified linear unit (ReLU), and sigmoid, cannot reflect the mathematical or physical properties of many systems, particularly those with oscillatory, fractional, or stochastic dynamics.[8,9] They often lack orthogonality and weighted smoothness; thus, numerical inefficiencies, slow convergence, and deeper network architectures are necessitated.[10,11] Besides, they perform poorly when multi-scale, nonlocal, or memory-dependent processes are involved, for which classical neural representations fail to capture the underlying structure of the solution.[12]
Owing to such constraints, there is a developing research direction that embeds mathematical basis functions and orthogonal polynomial systems within neural architectures to enhance approximation accuracy, interpretability, and convergence behaviour. Recent works have successfully applied Chebyshev, Legendre, Hermite, and Jacobi polynomial bases in neural approximators to enhance spectral expressiveness and learning efficiency.[9,13,14] While these polynomial-based neural architectures have been promising, they remain tied to specific weight functions and lack flexibility in approximating nonlinear, stochastic, and fractional operators.
In order to address these challenges, the present study proposes a new class of physics-informed neural architectures based on the Mamadu–Njoseh Polynomials (MNP), a system of orthogonal polynomials defined on the interval $[-1, 1]$ with the weight function $w(x) = x^2 + 1$.[15] These polynomials possess desirable mathematical properties, including orthogonality, stability, and normalised boundary behaviour ($\varphi_n(1) = 1$), which make them ideal candidates for developing mathematically stable and physically consistent activation functions. Their use in neural architectures yields the Mamadu–Njoseh Polynomial Physics-Informed Neural Network (MNP-PINN), a hybrid model that fuses orthogonal polynomial theory and physics-informed deep learning.
The motivation for the current research is the need to develop mathematics-aware learning systems that exceed conventional data-fitting paradigms by providing structured and physically constrained approximations. The proposed MNP-PINN introduces MNP basis functions as the primitive computational elements, allowing the network to efficiently approximate both the solution and its derivatives while ensuring consistency with the underlying physical laws. Through this coupling, the MNP-PINN model aims to improve training stability, convergence rate, and accuracy of solutions, especially for systems that are modelled using fractional, stochastic, and wave-type equations.[9,13]
To this end, this study develops a rigorous theoretical and computational framework for the Mamadu–Njoseh Polynomial Physics-Informed Neural Network (MNP-PINN), grounded in orthogonal polynomial basis functions. The framework establishes foundational mathematical properties of the proposed architecture, including orthogonality, convergence, and numerical stability, under the weighted inner-product structure induced by the weight function $w(x) = x^2 + 1$. Furthermore, the governing physical laws are seamlessly incorporated into the MNP-PINN loss formulation to ensure physically consistent learning.
To validate the effectiveness of the proposed approach, the MNP-PINN is benchmarked against exact analytical solutions, with performance evaluated in terms of solution accuracy, computational efficiency, and numerical robustness. Numerical experiments demonstrate the ability of the MNP-PINN to efficiently approximate physically meaningful solutions, showcasing its potential for solving complex differential equations beyond traditional data-driven paradigms. In particular, the one-dimensional Poisson equation is employed as a model problem to assess the method’s performance. This equation, despite its apparent simplicity, plays a fundamental role in mathematical physics and computational science. It encapsulates key characteristics of elliptic partial differential equations, serving as a canonical benchmark for evaluating approximation theories, assessing stability and convergence behaviour, and verifying computational methodologies. In physical applications, the Poisson equation governs a broad spectrum of steady-state phenomena, including electrostatic potential distribution, heat conduction processes, fluid pressure variations, and mechanical deformation under equilibrium.
Hence, the paper attempts to put the Mamadu–Njoseh Polynomial Neural Network on a novel, efficient, and theoretically sound foundation as a paradigm for physics-informed learning. By combining the rigour of orthogonal polynomial approximation theory and the flexibility of deep neural architecture, the MNP-PINN paradigm has the potential to revolutionize solution techniques for physical systems with nonlinearity, fractionality, and stochasticity, a feat of great value to computational mathematics and applied artificial intelligence.
2. MATERIAL AND METHODS
2.1. Weighted space and orthogonality
Let the weighted inner product and associated norm be defined as[16]
$$\langle f, g \rangle_w = \int_{-1}^{1} w(x)\, f(x)\, g(x)\, dx, \qquad \|f\|_w = \langle f, f \rangle_w^{1/2}, \tag{2.1}$$
where $w(x) > 0$ is a prescribed weight function.
The MNP $\{\varphi_n\}_{n \ge 0}$ form a sequence of orthogonal polynomials such that
$$\deg \varphi_n = n, \qquad n = 0, 1, 2, \ldots, \tag{2.2}$$
satisfying
$$\langle \varphi_m, \varphi_n \rangle_w = \int_{-1}^{1} w(x)\, \varphi_m(x)\, \varphi_n(x)\, dx = 0, \qquad m \neq n, \tag{2.3}$$
and the normalisation condition
$$\varphi_n(1) = 1. \tag{2.4}$$
Using properties (2.2)–(2.4) with the weight $w(x) = x^2 + 1$, the first three MNP are obtained as
$$\varphi_0(x) = 1, \qquad \varphi_1(x) = x, \qquad \varphi_2(x) = \frac{5x^2 - 2}{3}.$$
Similarly,
$$\varphi_3(x) = \frac{14x^3 - 9x}{5},$$
which were obtained by exact polynomial integration on $[-1, 1]$.
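As a quick check, these polynomials can be verified symbolically. The following minimal sketch, assuming the weight $w(x) = x^2 + 1$ and the normalisation $\varphi_n(1) = 1$ stated above, confirms pairwise orthogonality by exact integration:

```python
# Minimal sketch: verify orthogonality and normalisation of the first four
# MNPs under the assumed weight w(x) = x^2 + 1 on [-1, 1].
import sympy as sp

x = sp.symbols('x')
w = x**2 + 1  # assumed MNP weight function (per [15])
phi = [sp.Integer(1), x, (5*x**2 - 2)/3, (14*x**3 - 9*x)/5]

def inner(f, g):
    """Weighted inner product <f, g>_w over [-1, 1]."""
    return sp.integrate(w * f * g, (x, -1, 1))

# Off-diagonal inner products vanish; phi_n(1) = 1 for all n
assert all(sp.simplify(inner(phi[m], phi[n])) == 0
           for m in range(4) for n in range(m + 1, 4))
assert all(sp.simplify(p.subs(x, 1) - 1) == 0 for p in phi)
print("orthogonality and normalisation verified")
```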
Analogous to classical weighted Sobolev spaces, for integer $m \ge 0$ define
$$H_w^m(-1, 1) = \left\{ f : \partial_x^k f \in L_{w_k}^2(-1, 1), \; 0 \le k \le m \right\},$$
where $\{w_k\}$ denotes an appropriate family of derivative weights.[17] The corresponding norm is
$$\|f\|_{H_w^m} = \left( \sum_{k=0}^{m} \big\| \partial_x^k f \big\|_{w_k}^2 \right)^{1/2}.$$
Because $\{\varphi_n\}$ are orthogonal in $L_w^2$, every $f \in L_w^2$ has the expansion
$$f(x) = \sum_{n=0}^{\infty} \hat{f}_n\, \varphi_n(x),$$
where the spectral coefficients are given by
$$\hat{f}_n = \frac{\langle f, \varphi_n \rangle_w}{\|\varphi_n\|_w^2}.$$
To facilitate analysis, we introduce the spectral Sobolev norm[18]
$$\|f\|_{m,w}^2 = \sum_{n=0}^{\infty} \lambda_n^m\, |\hat{f}_n|^2\, \|\varphi_n\|_w^2,$$
where $\{\lambda_n\}$ is a sequence of positive numbers associated with the spectral scaling (typically related to eigenvalues of a Sturm–Liouville operator).
Let $P_N$ be the weighted orthogonal projection onto $\mathrm{span}\{\varphi_0, \ldots, \varphi_N\}$, that is,
$$P_N f = \sum_{n=0}^{N} \hat{f}_n\, \varphi_n.$$
If $f \in H_w^m$ with integer $m \ge 0$, then for $N \ge m$,
$$\|f - P_N f\|_w \le C\, N^{-m}\, \|f\|_{H_w^m},$$
where $C > 0$ is a constant. This result establishes algebraic convergence of the projection error for functions of Sobolev regularity.
Similarly, if $f$ is analytic in an ellipse of the complex plane that contains the interval $[-1, 1]$, then the projection achieves spectral convergence,[19]
$$\|f - P_N f\|_w \le C\, \rho^{-N}, \qquad \rho > 1,$$
for some constant $C$ independent of $N$.
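To make the projection concrete, the sketch below computes weighted spectral coefficients and a truncated projection error for a smooth test function; the choice $f(x) = e^x$ is illustrative and not taken from the paper:

```python
# Minimal sketch: spectral coefficients f_hat_n = <f, phi_n>_w / ||phi_n||_w^2
# and the weighted L2 error of the truncated projection P_N f.
import sympy as sp

x = sp.symbols('x')
w = x**2 + 1
phi = [sp.Integer(1), x, (5*x**2 - 2)/3, (14*x**3 - 9*x)/5]
f = sp.exp(x)  # arbitrary smooth (analytic) test function

def inner(u, v):
    return sp.integrate(w * u * v, (x, -1, 1))

coeffs = [inner(f, p) / inner(p, p) for p in phi]
print([sp.N(c, 6) for c in coeffs])  # magnitudes decay rapidly for analytic f

PNf = sum(c * p for c, p in zip(coeffs, phi))
err = sp.sqrt(inner(f - PNf, f - PNf))
print(sp.N(err, 6))  # small weighted L2 projection error
```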
2.2. Mamadu-Njoseh polynomial neuron and layer
Let $L_w^2(-1, 1)$ denote the weighted Hilbert space defined by
$$L_w^2(-1, 1) = \left\{ f : \int_{-1}^{1} w(x)\, |f(x)|^2\, dx < \infty \right\},$$
equipped with the weighted inner product
$$\langle f, g \rangle_w = \int_{-1}^{1} w(x)\, f(x)\, g(x)\, dx.$$
The MNP $\{\varphi_n\}$ form a complete orthogonal basis for $L_w^2(-1, 1)$:
$$\langle \varphi_m, \varphi_n \rangle_w = \gamma_n\, \delta_{mn},$$
where $\delta_{mn}$ is the Kronecker delta, and $\gamma_n = \|\varphi_n\|_w^2$ is the normalisation constant associated with the $n$th basis function $\varphi_n$.
A neuron in an MNP layer computes
$$h_j(x) = \varphi_{n_j}\big( w_j^\top x + b_j \big),$$
where $w_j$ and $b_j$ are trainable parameters for neuron $j$, and $\varphi_{n_j}$ is the MNP basis function.
Thus, the hidden transformation $H : \mathbb{R}^d \to \mathbb{R}^N$ is given[20] by
$$H(x) = \Big( \varphi_{n_1}\big( w_1^\top x + b_1 \big), \ldots, \varphi_{n_N}\big( w_N^\top x + b_N \big) \Big),$$
which maps the input vector into a nonlinear feature space spanned by the orthogonal polynomial basis $\{\varphi_n\}$.
The operator therefore enriches the representational capability of the network by embedding the input in a structured polynomial function space, ensuring controlled smoothness and improving numerical stability relative to generic activation functions.
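A minimal sketch of such a layer is given below, using the quadratic MNP $\varphi_2$ from Section 2.1 as the activation; the layer width and initialisation scale are illustrative assumptions rather than the paper's exact configuration:

```python
# Minimal sketch of an MNP layer: each neuron applies phi_2 to its own
# affine pre-activation w_j^T x + b_j.
import numpy as np

def phi2(z):
    """Quadratic Mamadu-Njoseh polynomial, assumed form (5z^2 - 2)/3."""
    return (5.0 * z**2 - 2.0) / 3.0

class MNPLayer:
    def __init__(self, d_in, n_neurons, seed=0):
        rng = np.random.default_rng(seed)
        self.W = rng.normal(0.0, 0.3, size=(n_neurons, d_in))  # trainable weights
        self.b = np.zeros(n_neurons)                           # trainable biases

    def forward(self, x):
        z = self.W @ x + self.b  # one affine pre-activation per neuron
        return phi2(z)           # orthogonal-polynomial activation

layer = MNPLayer(d_in=1, n_neurons=40)
print(layer.forward(np.array([0.5])).shape)  # (40,)
```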
2.3. MNP operator and polynomial neuron mapping
Each MNP admits a finite polynomial expansion
$$\varphi_n(x) = \sum_{k=0}^{n} a_{n,k}\, x^k,$$
where the coefficients $a_{n,k}$ are determined by the weighted orthogonality condition
$$\int_{-1}^{1} w(x)\, \varphi_n(x)\, x^j\, dx = 0, \qquad j = 0, 1, \ldots, n - 1.$$
This ensures that $\varphi_n$ lies in the orthogonal complement of the polynomial subspace spanned by $\{1, x, \ldots, x^{n-1}\}$.
Consequently, $\varphi_n$ can be constructed using properties (2.2)–(2.4) with the weight $w(x) = x^2 + 1$. Thus, each MNP basis element is generated by removing lower-order components, ensuring strict orthogonality in the weighted Hilbert space.
In the context of neural networks, each neuron of the form
$$h_j(x) = \varphi_{n_j}\big( w_j^\top x + b_j \big)$$
nonlinearly lifts the input into a higher-order orthogonal polynomial feature space. This performs an implicit orthogonal projection onto the $n_j$th spectral mode, enriching the representation and enhancing numerical stability relative to generic activations.[20,21]
Define the MNP operator acting on an affine input
$$z_j = w_j^\top x + b_j$$
as
$$\mathcal{M}_n[z_j] = \varphi_n(z_j) = \sum_{k=0}^{n} a_{n,k}\, z_j^k.$$
Thus, a single MNP neuron computes the nonlinear mapping
$$x \mapsto \varphi_n\big( w_j^\top x + b_j \big),$$
where $w_j$ and $b_j$ are the weight and bias for neuron $j$.
Applying the binomial expansion yields
$$\varphi_n\big( w_j^\top x + b_j \big) = \sum_{k=0}^{n} a_{n,k} \sum_{m=0}^{k} \binom{k}{m}\, b_j^{\,k-m}\, \big( w_j^\top x \big)^m.$$
This representation shows that each MNP neuron induces a structured polynomial mapping of degree $n$ in the input variables. In particular, the powers $(w_j^\top x)^m$ contain mixed products of input features, meaning that nonlinear cross-feature interactions are introduced through the multinomial expansion of $w_j^\top x$.
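The induced cross-feature terms can be made explicit symbolically. A short sketch, assuming the quadratic MNP and a two-dimensional input for illustration:

```python
# Minimal sketch: expanding phi_2(w^T x + b) exposes the mixed products of
# input features generated by the binomial expansion.
import sympy as sp

x1, x2, w1, w2, b = sp.symbols('x1 x2 w1 w2 b')
z = w1*x1 + w2*x2 + b
phi2 = (5*z**2 - 2) / 3  # assumed quadratic MNP

print(sp.expand(phi2))
# The expansion contains 10*w1*w2*x1*x2/3: a nonlinear cross-feature interaction.
```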
2.4. Differential properties and learning dynamics of an MNP layer
Consider a neural network (or a single neuron) employing the $n$th-order MNP as its activation function. For neuron $j$, define the affine transformation
$$z_j = w_j^\top x + b_j,$$
and the activation output
$$h_j = \varphi_n(z_j),$$
where the polynomial form of $\varphi_n$ is given by
$$\varphi_n(z) = \sum_{k=0}^{n} a_{n,k}\, z^k.$$
Assuming the inputs $x \in \mathbb{R}^d$, the network output can be represented as
$$u_\theta(x) = \sum_{j=1}^{N} c_j\, \varphi_n\big( w_j^\top x + b_j \big),$$
where the collection of trainable parameters is
$$\theta = \{ w_j, b_j, c_j \}_{j=1}^{N},$$
and $c_j$ are output-layer weights. The model thus forms a polynomial neural network where each neuron expands the input into an $n$th-degree orthogonal polynomial basis component.
Since the pre-activation for neuron $j$ is
$$z_j = w_j^\top x + b_j,$$
the first and second derivatives of the MNP activation follow directly from the polynomial definition,
$$\varphi_n'(z) = \sum_{k=1}^{n} k\, a_{n,k}\, z^{k-1}, \qquad \varphi_n''(z) = \sum_{k=2}^{n} k(k-1)\, a_{n,k}\, z^{k-2}.$$
These closed forms enable efficient Jacobian and Hessian computation, critical for physics-informed neural network (PINN) training, where higher-order derivatives appear in the PDE residual.
Parameter gradients for neuron $j$ can be computed as
$$\frac{\partial u_\theta}{\partial c_j} = \varphi_n(z_j), \qquad \frac{\partial u_\theta}{\partial w_j} = c_j\, \varphi_n'(z_j)\, x, \qquad \frac{\partial u_\theta}{\partial b_j} = c_j\, \varphi_n'(z_j).$$
Similarly, the gradient of the network output with respect to the input is given as
$$\nabla_x u_\theta(x) = \sum_{j=1}^{N} c_j\, \varphi_n'(z_j)\, w_j.$$
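These closed forms are straightforward to validate numerically. The sketch below checks the analytical gradient $\partial u_\theta / \partial w_j = c_j\, \varphi_2'(z_j)\, x$ against central finite differences for a tiny three-neuron model; all values are illustrative:

```python
# Minimal sketch: analytical parameter gradients of a quadratic-MNP network
# u(x) = sum_j c_j * phi_2(w_j x + b_j), verified by finite differences.
import numpy as np

a0, a2 = -2.0 / 3.0, 5.0 / 3.0        # assumed phi_2 coefficients
phi = lambda z: a2 * z**2 + a0        # phi_2(z)
dphi = lambda z: 2.0 * a2 * z         # phi_2'(z)

rng = np.random.default_rng(1)
c, w, b = rng.normal(size=3), rng.normal(size=3), rng.normal(size=3)
x = 0.7

u = lambda w_: np.sum(c * phi(w_ * x + b))
grad_analytic = c * dphi(w * x + b) * x   # du/dw_j = c_j phi_2'(z_j) x

eps, eye = 1e-6, np.eye(3)
grad_fd = np.array([(u(w + eps * eye[j]) - u(w - eps * eye[j])) / (2 * eps)
                    for j in range(3)])
print(np.max(np.abs(grad_analytic - grad_fd)))  # ~1e-9
```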
2.5. Gradient magnitudes
Let $R > 0$ be a bound on the pre-activation magnitude $|z|$ over the data distribution or during training. Define, for integer $n \ge 1$,
$$B_n' = \sum_{k=1}^{n} k\, |a_{n,k}|\, R^{k-1};$$
then the derivative of $\varphi_n$ satisfies the uniform bound
$$|\varphi_n'(z)| \le B_n', \qquad |z| \le R.$$
This first-derivative bound implies that if $|z_j| \le R$, then the operator norm of the Jacobian of $h_j = \varphi_n(z_j)$ w.r.t. $x$ satisfies
$$\left\| \frac{\partial h_j}{\partial x} \right\| = |\varphi_n'(z_j)|\, \|w_j\| \le B_n'\, \|w_j\|.$$
Thus, the local Lipschitz constant of neuron $j$ with respect to $x$ is bounded by $B_n' \|w_j\|$, where $B_n'$ controls derivative growth. A large $B_n'$ induces large gradients and potential instability, while a small $B_n'$ may lead to vanishing gradients.
There are two primary mechanisms responsible for vanishing and exploding gradients in MNP-based networks:[1]
a. Since $\varphi_n$ is a polynomial of degree $n$ ($n \ge 2$), the following scenarios exist:
   i. If $|z| > 1$ and the coefficients are moderate, $|\varphi_n'(z)|$ can grow as $O(|z|^{n-1})$, implying exploding gradients.
   ii. If $|z| < 1$ and the higher-order coefficients dominate but remain small, $|\varphi_n'(z)|$ becomes small, implying vanishing gradients.
b. Gradients for $w_j$ scale with $c_j\, \varphi_n'(z_j)\, x$, which implies the following possibilities:
   i. Large $\|x\|$ or $|c_j|$ amplify the gradients, and
   ii. Regularising or initialising $c_j$ small mitigates an explosion.
A practical bound on the gradient to avoid explosion is to enforce[22]
$$B_n'\, \|w_j\| \le 1,$$
such that each neuron is locally non-expansive in $x$. This condition can be satisfied by a scaling coefficient or by enforcing small $\|w_j\|$ via weight normalisation, as sketched below.
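One simple way to realise this condition is to rescale any weight row whose norm violates the bound. A minimal sketch for the quadratic MNP, for which $\varphi_2'(z) = 10z/3$ and hence $B_2' = 10R/3$ on $|z| \le R$:

```python
# Minimal sketch: enforce B'_2 * ||w_j|| <= 1 by shrinking offending rows.
import numpy as np

def rescale_weights(W, R=1.0):
    B = 10.0 * R / 3.0  # derivative bound of phi_2 on |z| <= R
    norms = np.linalg.norm(W, axis=1, keepdims=True)
    scale = np.minimum(1.0, 1.0 / (B * norms))  # leave compliant rows untouched
    return W * scale

W = rescale_weights(np.random.default_rng(2).normal(size=(40, 1)))
print(np.max(10.0 / 3.0 * np.linalg.norm(W, axis=1)))  # <= 1 by construction
```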
2.6. MNP-PINN construction
Let $\Omega \subset \mathbb{R}^d$ be the spatial domain and $[0, T]$ the time interval. Consider a partial differential equation (PDE) operator $\mathcal{N}$ acting on the scalar field $u(x, t)$, defined as
$$\mathcal{N}[u](x, t) = f(x, t), \qquad (x, t) \in \Omega \times (0, T],$$
with boundary conditions $\mathcal{B}[u] = g$ on $\partial\Omega$ and initial condition $u(x, 0) = u_0(x)$, where $f$ is the source term.
We seek an approximation $u_\theta(x, t)$, parametrised by $\theta$, that satisfies the PDE and boundary/initial conditions in a physics-informed sense.
Since $\{\varphi_n\}$ is orthogonal on $[-1, 1]$ w.r.t. $w(x) = x^2 + 1$, for any pre-activation $z$ we define a polynomial layer
$$\Phi_N(z) = \sum_{n=0}^{N} c_n\, \varphi_n(z),$$
with learned coefficients $\{c_n\}_{n=0}^{N}$.
The choice of network representation determines how the neural architecture captures and expresses the spatial-temporal dependencies in the solution $u(x, t)$. Two formulations are commonly considered.
a. Spectral form: The spectral form defines a global functional expansion of the neural approximation over the set of basis functions $\{\varphi_n\}$. It is expressed as
$$u_\theta(x, t) = \sum_{n=0}^{N} c_n(x, t; \theta)\, \varphi_n(\xi),$$
where the $c_n$ are small neural networks that modulate each basis function.
b. Activation form: Alternatively, the activation form incorporates the orthogonal polynomial basis directly as an activation function within the hidden layers of the network. It is defined as
$$u_\theta(x, t) = W^{(L)}\, \varphi_n\Big( \cdots\, \varphi_n\big( W^{(1)} [x, t]^\top + b^{(1)} \big) \cdots \Big) + b^{(L)},$$
such that the orthogonal polynomial structure is embedded in each hidden layer.
In this paper, for analytical clarity and interpretability, the spectral form is adopted. Under this formulation, the approximation is rewritten as
$$u_\theta(x, t) = \sum_{n=0}^{N} c_n(\theta)\, \varphi_n(\xi), \qquad \xi = \alpha x + \beta,$$
where $\xi$ is an affine map or normalised coordinate mapping, and $\alpha, \beta$ are learnable scaling and shift parameters used to normalise spatial and temporal coordinates into the standard domain $[-1, 1]$ [Figure 1].

- Architecture of the proposed Mamadu-Njoseh polynomial physics informed neural network (MNP-PINN).
The MNP-PINN is implemented as a fully connected feedforward neural network. The network consists of one hidden layer with 40 neurons. The hidden layer employs quadratic Mamadu–Njoseh polynomials (MNPs) as activation functions, providing orthogonality and stability benefits over conventional activations. The output layer is linear to produce predicted solution values, as shown in Figure 1. The input dimension is one for the one-dimensional Poisson problem.
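A minimal sketch of this architecture follows. It assumes a PyTorch implementation (the paper states Python 3.11 but does not name a framework), and the class and attribute names are illustrative:

```python
# Minimal sketch of the described architecture: one hidden layer of 40
# neurons with the quadratic MNP activation and a linear output layer.
import torch
import torch.nn as nn

class MNPPINN(nn.Module):
    def __init__(self, hidden=40):
        super().__init__()
        self.hidden = nn.Linear(1, hidden)  # input dimension 1 (1D Poisson)
        self.out = nn.Linear(hidden, 1)     # linear output layer

    @staticmethod
    def phi2(z):
        # Quadratic Mamadu-Njoseh polynomial, assumed form (5z^2 - 2)/3
        return (5.0 * z**2 - 2.0) / 3.0

    def forward(self, x):
        return self.out(self.phi2(self.hidden(x)))

model = MNPPINN()
print(model(torch.rand(5, 1)).shape)  # torch.Size([5, 1])
```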
2.7. Physics-informed loss
Let $\{x_r^{(i)}\}_{i=1}^{N_r}$ be sampled collocation points for the PDE residuals, $\{x_b^{(i)}\}_{i=1}^{N_b}$ the boundary points, and $\{x_0^{(i)}\}_{i=1}^{N_0}$ the initial points. The physics-informed loss is given as
$$\mathcal{L}(\theta) = \mathcal{L}_r(\theta) + \lambda_b\, \mathcal{L}_b(\theta) + \lambda_0\, \mathcal{L}_0(\theta) + \lambda_{\mathrm{reg}}\, \mathcal{L}_{\mathrm{reg}}(\theta),$$
where
$$\mathcal{L}_r = \frac{1}{N_r} \sum_{i=1}^{N_r} \Big| \mathcal{N}[u_\theta]\big(x_r^{(i)}\big) - f\big(x_r^{(i)}\big) \Big|^2, \qquad \mathcal{L}_b = \frac{1}{N_b} \sum_{i=1}^{N_b} \Big| \mathcal{B}[u_\theta]\big(x_b^{(i)}\big) - g\big(x_b^{(i)}\big) \Big|^2, \qquad \mathcal{L}_0 = \frac{1}{N_0} \sum_{i=1}^{N_0} \Big| u_\theta\big(x_0^{(i)}, 0\big) - u_0\big(x_0^{(i)}\big) \Big|^2.$$
Here, $\mathcal{L}_r$, $\mathcal{L}_b$, and $\mathcal{L}_0$ denote the PDE residual loss, boundary condition loss, and initial condition loss, respectively, which together ensure consistency, stabilisation, and smoothness. $N_r$, $N_b$, and $N_0$ denote the total number of residual points, boundary points, and initial points, respectively. The weighting parameters $\lambda_b$, $\lambda_0$, and $\lambda_{\mathrm{reg}}$ are positive scaling coefficients that control the relative importance of the boundary, initial, and regularisation losses, respectively.
Since the basis functions $\varphi_n$ are polynomials of the transformed coordinate $\xi = \alpha x + \beta$, their derivatives with respect to $x$ and $t$ can be computed analytically through the chain rule. Hence,
$$\frac{\partial}{\partial x} \varphi_n(\xi) = \alpha\, \varphi_n'(\xi), \qquad \frac{\partial^2}{\partial x^2} \varphi_n(\xi) = \alpha^2\, \varphi_n''(\xi).$$
If $c_n$ depends only on $\theta$, the polynomial structure reduces numerical error in derivative evaluation relative to generic activations, because the $\varphi_n$ are known polynomials of degree $n$. When the coefficient functions are allowed to depend on both spatial and temporal variables, i.e., $c_n = c_n(x, t; \theta)$, the spectral representation becomes semi-local, combining data adaptability with the analytical structure of the orthogonal basis.
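For the one-dimensional Poisson benchmark considered later, the loss reduces to a residual term plus a boundary term. The sketch below, assuming PyTorch autograd rather than the paper's exact code, shows one way to assemble it; `model` is a network such as the sketch above and `f` is a placeholder source term:

```python
# Minimal sketch: physics-informed loss for -u'' = f with homogeneous
# Dirichlet boundary conditions, derivatives taken by automatic differentiation.
import torch

def pinn_loss(model, f, x_interior, x_boundary, lam_b=1.0):
    x = x_interior.clone().requires_grad_(True)
    u = model(x)
    u_x = torch.autograd.grad(u, x, torch.ones_like(u), create_graph=True)[0]
    u_xx = torch.autograd.grad(u_x, x, torch.ones_like(u_x), create_graph=True)[0]

    residual = -u_xx - f(x)                 # PDE residual of -u'' = f
    loss_r = (residual**2).mean()           # collocation (residual) loss
    loss_b = (model(x_boundary)**2).mean()  # u = 0 on the boundary
    return loss_r + lam_b * loss_b
```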
2.8. Network initialisation
An MNP-PINN of depth $L$ consists of layers
$$h^{(\ell)} = \varphi_n\big( W^{(\ell)} h^{(\ell-1)} + b^{(\ell)} \big), \qquad \ell = 1, \ldots, L - 1,$$
where
$$h^{(0)} = x, \qquad u_\theta(x) = W^{(L)} h^{(L-1)} + b^{(L)},$$
with trainable weights $W^{(\ell)}$ and biases $b^{(\ell)}$, where we choose distributions for $W^{(\ell)}$ and $b^{(\ell)}$ such that
i. the forward signal neither collapses to 0 nor blows up (non-degenerate forward pass);
ii. gradients remain well-scaled across layers;
iii. the MNP orthogonality structure and its weighted normalisation are preserved (at least initially), so polynomial derivatives behave as expected.
Let us analyse the propagation of the activation statistics through the layers. Assume the inputs $x_i$ have zero mean and variance $v$, and the weights are independent and identically distributed random variables with
$$\mathbb{E}[W_{ji}] = 0, \qquad \mathrm{Var}(W_{ji}) = \sigma_w^2.$$
Then, for neuron $j$ in layer $\ell$, we have the pre-activation
$$z_j = \sum_{i=1}^{d} W_{ji}\, x_i + b_j.$$
Assume the $x_i$ are i.i.d. with $\mathbb{E}[x_i] = 0$ and $\mathrm{Var}(x_i) = v$. Also let $b_j = 0$ at initialisation. Then, by independence, we get
$$q := \mathrm{Var}(z_j) = d\, \sigma_w^2\, v.$$
For a nonlinear activation, let the activation at layer $\ell$ be $h_j = \varphi_n(z_j)$. Define the second-moment map as
$$g(q) = \mathbb{E}_{z \sim \mathcal{N}(0, q)}\big[ \varphi_n(z)^2 \big].$$
Then the variance of the post-activation is
$$\mathrm{Var}(h_j) = g(q) - \big( \mathbb{E}[\varphi_n(z)] \big)^2.$$
For a symmetric input distribution, we can evaluate $g(q)$ using the coefficients of $\varphi_n$, given as
$$g(q) = \sum_{p,k=0}^{n} a_{n,p}\, a_{n,k}\, \mathbb{E}\big[ z^{p+k} \big],$$
where $a_{n,k}$ are the MNP coefficients. If $z \sim \mathcal{N}(0, q)$, then the odd moments vanish and $\mathbb{E}[z^{2m}] = (2m - 1)!!\, q^m$.
To prevent variance blow-up, we impose energy conservation between layers by setting
$$\mathrm{Var}(h_j) = v.$$
Substituting the pre-activation variance $q = d\, \sigma_w^2\, v$ into this condition gives
$$g\big( d\, \sigma_w^2\, v \big) - \big( \mathbb{E}[\varphi_n(z)] \big)^2 = v.$$
Hence, writing $q^*$ for the solution of this fixed-point condition,
$$\sigma_w^2 = \frac{q^*}{d\, v},$$
which defines the critical weight variance to ensure stability. To this effect, we set $\sigma_w^2$ to this critical value for the first layer. Deeper layers can be recursively estimated.
For instance, let us consider a special case involving the second-order MNP $\varphi_2(z) = \tfrac{1}{3}(5z^2 - 2)$. We have the coefficients $a_{2,0} = -\tfrac{2}{3}$, $a_{2,1} = 0$, and $a_{2,2} = \tfrac{5}{3}$. Then, for $z \sim \mathcal{N}(0, q)$,
$$\mathbb{E}[\varphi_2(z)] = \frac{5q - 2}{3}, \qquad g(q) = \frac{75 q^2 - 20 q + 4}{9}, \qquad \mathrm{Var}(\varphi_2(z)) = \frac{50\, q^2}{9}.$$
Hence, the critical initialisation becomes
$$\frac{50\, (q^*)^2}{9} = v \;\Longrightarrow\; q^* = 3 \sqrt{\frac{v}{50}}, \qquad \sigma_w^2 = \frac{q^*}{d\, v}.$$
Setting $v = 1$ and $d = 1$, we have
$$\sigma_w^2 = \frac{3}{\sqrt{50}} \approx 0.424,$$
which ensures stable propagation for quadratic MNP activations. Also, quadratic activations magnify magnitudes more strongly, so weights must be initialised smaller to maintain stable variance.
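The variance map above is easy to confirm by simulation. The sketch below draws Gaussian pre-activations at the derived critical variance and checks that the post-activation variance of the assumed quadratic MNP is approximately unity:

```python
# Minimal sketch: Monte Carlo check that Var(phi_2(z)) = 50 q^2 / 9 for
# z ~ N(0, q), so q* = 3/sqrt(50) gives unit post-activation variance.
import numpy as np

rng = np.random.default_rng(3)
q = 3.0 / np.sqrt(50.0)  # derived critical pre-activation variance (~0.424)
z = rng.normal(0.0, np.sqrt(q), size=2_000_000)
phi2 = (5.0 * z**2 - 2.0) / 3.0

print(phi2.var())         # empirical post-activation variance, ~1.0
print(50.0 * q**2 / 9.0)  # analytical value: exactly 1 at q = q*
```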
3. RESULTS AND DISCUSSION
3.1. Numerical example
We demonstrate the accuracy of the proposed method by considering a standard one-dimensional Poisson problem. Let $u(x)$ satisfy
$$-u''(x) = f(x), \qquad x \in \Omega,$$
subject to the homogeneous Dirichlet boundary conditions
$$u = 0 \quad \text{on } \partial\Omega.$$
The boundary value problem admits a closed-form analytical solution, which serves as the reference for error evaluation.
This example is smooth and well-posed, and therefore serves as a suitable benchmark for testing both the accuracy and the convergence behaviour of the proposed Mamadu–Njoseh-based Physics-Informed Neural Network (MNP-PINN) framework. The aim is to approximate $u(x)$ numerically over the domain and compare the obtained solution with the exact one in terms of the $L_2$ and $L_\infty$ error norms. The coefficients $c_n$, the lifted-basis multiplier, and the training parameters are chosen to effectively control the dynamics of the equation.
All network parameters are initialised according to Table 1 to ensure stable and efficient training:
Table 1: Initialisation and configuration parameters of the MNP-PINN.

| Parameter | Value | Description |
|---|---|---|
| Polynomial degree | n = 2 | Quadratic Mamadu–Njoseh polynomial (MNP) basis |
| Input dimension | d = 1 | One-dimensional input |
| Target preserved variance | v = 1 | Aim for unit activation variance |
| Pre-activation bound | | Conservative bound to keep pre-activations moderate |
| Hidden neurons | 40 | Number of neurons in the hidden layer |
| Interior collocation points | 200 | Interior training/collocation points |
| Boundary points | 2 | Boundary training points |
| Output-weight init. std. dev. | | Standard deviation for output weight initialisation |
| Bias initialisation | 0 | All biases are initialised to zero |
| Weight initialisation | Critical variance initialisation for quadratic MNP | Use the derived optimal variance for stability |
| Scaling coefficient | | |
Based on the initialisation and configuration parameters summarised in Table 1 for the MNP-PINN framework, the following results are obtained via a Python implementation and are presented in the subsequent tables and figures.
The network is trained using the Adam optimiser, with the learning rate decaying exponentially by a factor of 0.5 every 1000 epochs. The total training duration is 3000 epochs. Loss function and error convergence at selected epochs are reported in Table 2, demonstrating rapid convergence and high-accuracy approximations, with $L_2$ and $L_\infty$ errors on the order of $10^{-5}$.
Table 2: Loss and error convergence of the MNP-PINN at selected epochs.

| Epoch | Loss | L₂ error | L∞ error |
|---|---|---|---|
| 0 | 4.854 × 10¹ | 9.440 × 10⁻¹ | 9.926 × 10⁻¹ |
| 500 | 2.520 × 10⁻⁶ | 1.600 × 10⁻⁵ | 2.050 × 10⁻⁵ |
| 1000 | 6.402 × 10⁻² | 7.337 × 10⁻⁴ | 1.137 × 10⁻³ |
| 1500 | 2.657 × 10⁻⁶ | 1.565 × 10⁻⁵ | 1.988 × 10⁻⁵ |
| 2000 | 2.620 × 10⁻⁶ | 1.422 × 10⁻⁵ | 1.749 × 10⁻⁵ |
| 2500 | 7.875 × 10⁻⁶ | 2.343 × 10⁻⁵ | 3.578 × 10⁻⁵ |
| 2999 | 4.558 × 10⁻⁶ | 1.075 × 10⁻⁵ | 1.586 × 10⁻⁵ |
Collocation points are sampled uniformly within the computational domain, resulting in 200 interior points. Boundary points are fixed at the two endpoints of the domain, enforcing the Dirichlet boundary conditions. This procedure provides sufficient coverage of the domain for the 1D Poisson problem while maintaining computational efficiency.
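Putting the pieces together, a training loop matching the stated schedule (Adam with the learning rate halved every 1000 epochs, 3000 epochs in total, 200 interior points) might look as follows. It assumes the `MNPPINN` and `pinn_loss` sketches from Sections 2.6-2.7, a unit-interval domain, and a placeholder source term `f`; none of these names come from the paper's code:

```python
# Minimal sketch of the reported training configuration.
import torch

model = MNPPINN()
opt = torch.optim.Adam(model.parameters())  # paper's initial rate not recoverable here
sched = torch.optim.lr_scheduler.StepLR(opt, step_size=1000, gamma=0.5)

f = lambda x: torch.ones_like(x)  # placeholder source term, not the benchmark's f

x_int = torch.linspace(0.0, 1.0, 202)[1:-1].reshape(-1, 1)  # 200 interior points
x_bnd = torch.tensor([[0.0], [1.0]])                        # Dirichlet endpoints

for epoch in range(3000):
    opt.zero_grad()
    loss = pinn_loss(model, f, x_int, x_bnd)
    loss.backward()
    opt.step()
    sched.step()
```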
3.2. Discussion
Figure 2 depicts the performance of the MNP-PINN applied to the 1D Poisson problem, illustrating the model's ability to approximate the analytical solution of the target problem with remarkable precision. The figure shows both the predicted solution obtained by the MNP-PINN and the exact analytical solution, plotted together for direct visual comparison. The near-perfect overlap between the predicted and exact solutions across the entire spatial domain demonstrates the spectral accuracy of the MNP-PINN. This high degree of correspondence implies that the MNP activation functions successfully capture the underlying smoothness and structure of the solution, even in regions with steep gradients or boundary effects. Additionally, the residual error, visualised below the main plot, remains close to zero throughout, confirming numerical stability and robust convergence during training. The network's ability to generalise well on such benchmark examples underscores the effectiveness of MNP layers in achieving physics-consistent approximations for differential problems, outperforming traditional neural architectures that rely on standard nonlinear activations.

- Mamadu-Njoseh polynomial physics informed neural network (MNP-PINN) for 1D poison problem. X represents the independent variable, u(x) represents the dependent variable. MNPNN: Mamadu–Njoseh polynomial neural network..
Table 2 summarises the training performance of the MNP-PINN across selected epochs for the 1D Poisson equation. The table records the evolution of the loss function, the L₂ norm error, and the L∞ norm error throughout the optimisation process. The results show a progressive and stable convergence pattern. Initially, the loss begins at a relatively high value (4.854 × 10¹) and decreases by nearly seven orders of magnitude, reaching values around 10⁻⁶ after sufficient training. This sharp decline signifies efficient error minimisation and well-conditioned gradient flow within the MNP-PINN framework. Both the L₂ and L∞ error norms exhibit consistent reduction, converging to approximately 10⁻⁵, corresponding to spectral-level accuracy, a hallmark of high-order polynomial approximation. A minor oscillation in the error curve around epoch 1000 reflects transient optimiser nonlinearity or adaptive learning rate adjustment, but the model rapidly stabilises thereafter, reaffirming its robust training dynamics. Overall, the table confirms that the MNP-PINN achieves high numerical precision, strong generalisation, and stable convergence behaviour, validating the effectiveness of MNP activation structures in solving partial differential equations like the 1D Poisson problem.
Figure 3 presents the coefficient spectrum of the MNP-PINN, showing the magnitude decay of the polynomial coefficients as a function of the polynomial degree n. This spectrum provides insight into the representational efficiency and smoothness of the learned solution. As n increases, the coefficients exhibit a rapid, exponential-like decay, indicating that the MNP-PINN effectively captures the dominant low-order modes of the solution while suppressing higher-order components. This behaviour confirms spectral convergence, meaning that the error decreases faster than any algebraic rate as the polynomial order increases. Such rapid coefficient attenuation signifies that higher-degree Mamadu–Njoseh basis functions contribute negligibly to the final approximation, thereby ensuring a compact spectral representation. This property reduces overfitting and numerical oscillations, common issues in classical neural architectures using purely analytical activations. Consequently, the MNP-PINN achieves smooth, stable, and physically consistent approximations across the spatial domain, showcasing the advantage of embedding Mamadu–Njoseh orthogonal structures within the neural framework.

- Coefficient spectrum of Mamadu-Njoseh polynomial physics informed neural network (MNP-PINN) for the 1D poison problem.
Figure 4 presents the pointwise absolute error distribution between the MNP-PINN-predicted solution and the analytical solution u(x) across the spatial domain. The error is defined as
$$E(x) = \big| u_\theta(x) - u(x) \big|,$$
where $u_\theta(x)$ is the computed solution and $u(x)$ is the exact solution.

Figure 4: Pointwise absolute error of the Mamadu–Njoseh polynomial physics-informed neural network (MNP-PINN) for the 1D Poisson problem. x represents the independent variable.

The plot reveals that the MNP-PINN achieves uniformly low errors throughout the domain, with the maximum deviation confined to the boundary regions, where solution gradients are typically steeper. The central region maintains near-zero error, highlighting the network's strong approximation capacity and stability. The observed smoothness and near-symmetry of $E(x)$ reflect the polynomial orthogonality and spectral convergence characteristics of the Mamadu–Njoseh basis functions. Overall, the error magnitude remains on the order of 10⁻⁵, confirming high numerical accuracy and demonstrating the capability of the MNP-PINN to resolve fine-scale spatial structures in the Poisson solution with minimal numerical dispersion.
Figure 5 illustrates the evolution of the MNP-PINN during training. The top panel shows the training loss decreasing exponentially by several orders of magnitude, confirming rapid and stable convergence. The lower four panels present MNP-PINN predictions at selected epochs (initial, early, mid, and final), revealing how the network progressively refines its approximation until achieving near-perfect overlap with the analytical solution. Minor oscillations in the loss reflect optimiser nonlinearity but do not affect overall stability. By the final epoch, both L₂ and L∞ errors reach the order of 10⁻⁵, demonstrating spectral-level accuracy and excellent generalisation. The figure highlights the MNP-PINN's ability to efficiently learn smooth PDE solutions through polynomial-based activation structures.

- Training progression of Mamadu-Njoseh polynomial physics informed neural network (MNP-PINN) for the ID poison problem. u(x) represents the dependent function. x represents the independent function.
4. CONCLUSION
The numerical experiments clearly establish the effectiveness of the MNP-PINN for solving the one-dimensional Poisson problem. The solution plots demonstrate an almost perfect overlap between the MNP-PINN predictions and the analytical solution, confirming spectral-level accuracy and strong physics consistency. Quantitative metrics further support this finding: the loss function decreases by several orders of magnitude, while both the L₂ and L∞ errors converge to the order of 10⁻⁵, highlighting efficient optimisation and stable gradient dynamics.
The coefficient spectrum reveals rapid exponential decay in the polynomial coefficients, validating the compactness and representational efficiency of the MNP basis and indicating that only low-order modes are required to accurately approximate the solution. Pointwise error profiles remain uniformly small across the domain, with slight elevations near boundaries consistent with steep gradient regions, further demonstrating smooth and well-behaved convergence. Training-trajectory visualisations reinforce these findings, showing progressive refinement toward the exact solution with stable optimisation despite minor transient oscillations.
Hence, the results confirm that integrating MNP polynomial structures within the PINN framework yields highly accurate, stable, and spectrally convergent solutions. Thus, MNP-PINN represents a powerful and reliable methodology for solving differential equations, outperforming traditional PINN architectures that rely solely on standard nonlinear activations and demonstrating strong potential for broader PDE applications.
Ethical approval
Institutional Review Board approval is not required.
Declaration of patient consent
Patient's consent not required as there are no patients in this study.
Financial support and sponsorship
Nil
Conflicts of interest
There are no conflicts of interest.
Use of artificial intelligence (AI)-assisted technology for manuscript preparation
The authors confirm that there was no use of artificial intelligence (AI)-assisted technology for assisting in the writing or editing of the manuscript and no images were manipulated using AI.
REFERENCES
- Journal of Computational Physics. 2019;378:686-707.
- Nature Reviews Physics. 2021;3:422-40.
- Nature Computational Science. 2022;2:325-38.
- Journal of Computational Physics. 2020;425:109913.
- Water Resources Research. 2020;56:e2019WR026731.
- Journal of Computational Physics. 2021;436:110207.
- SIAM Review. 2021;63:208-28.
- SIAM Journal of Scientific Computing. 2022;44:A1526-A1553.
- Computers Mathematics with Applications. 2023;126:96-112.
- Computer Methods in Applied Mechanics and Engineering. 2022;389:114333.
- Neural Networks. 2021;135:192-200.
- [CrossRef] [PubMed]
- Computers Mathematics with Applications. 2022;104:233-52.
- AIMS Mathematics. 2024;9:12775-777.
- Neural Networks. 2021;136:176-88.
- Science World Journal. 2016;11:21-24.
- Springer. 2004;44:165-179.
- Journal of Mathematical Analysis and Applications. 1995;192:142-60.
- Introduction to numerical solution of partial differential equations. In: Programming phase-field modeling. Springer; 2017. p. 9-11.
- [Google Scholar]
- Mathematics of Computation. 1992;59:145-62.
- The Finite Element Method for Elliptic Problems. Amsterdam: North Holland; 1978.
- A Mathematical Journal. 2004;6:1-32.
- Journal of Scientific Computing. 2024;100:35.
