
A Step-by-Step Guide to Solving the 1D Burgers' Equation with Physics-Informed Neural Networks (PINNs): A PyTorch Approach Using Automatic Differentiation and Collocation Methods



In this tutorial, we explore an approach that blends deep learning with physical laws by leveraging Physics-Informed Neural Networks (PINNs) to solve the one-dimensional Burgers' equation. Using PyTorch on Google Colab, we demonstrate how to encode the governing differential equation directly into the neural network's loss function, allowing the model to learn a solution u(x, t) that inherently respects the underlying physics. This technique reduces the reliance on large labeled datasets and offers a fresh perspective on solving complex, nonlinear partial differential equations with modern computational tools.
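For reference, here is the problem written out explicitly, so the loss terms defined later are easy to map back to the physics. The setup below matches the constants and conditions used in the code:

$$
u_t + u\,u_x = \nu\,u_{xx}, \qquad \nu = \frac{0.01}{\pi}, \qquad x \in [-1, 1],\quad t \in [0, 1],
$$

with initial condition $u(x, 0) = -\sin(\pi x)$ and homogeneous Dirichlet boundary conditions $u(-1, t) = u(1, t) = 0$.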

!pip install torch matplotlib

First, we install the PyTorch and matplotlib libraries using pip, ensuring you have the necessary tools for building neural networks and visualizing the results in your Google Colab environment.

import torch
import torch.nn as nn
import torch.optim as optim
import numpy as np
import matplotlib.pyplot as plt


torch.set_default_dtype(torch.float32)

We import the essential libraries: PyTorch for deep learning, NumPy for numerical operations, and matplotlib for plotting. We also set the default tensor data type to float32 for consistent numerical precision throughout the computations.

x_min, x_max = -1.0, 1.0  # spatial domain
t_min, t_max = 0.0, 1.0   # temporal domain
nu = 0.01 / np.pi         # viscosity coefficient


N_f = 10000  # number of collocation points for the PDE residual
N_0 = 200    # number of initial-condition points
N_b = 200    # number of boundary points on each boundary


# Collocation points, sampled uniformly at random over the space-time domain
X_f = np.random.rand(N_f, 2)
X_f[:, 0] = X_f[:, 0] * (x_max - x_min) + x_min  # x in [-1, 1]
X_f[:, 1] = X_f[:, 1] * (t_max - t_min) + t_min  # t in [0, 1]


# Initial condition: u(x, 0) = -sin(pi * x)
x0 = np.linspace(x_min, x_max, N_0)[:, None]
t0 = np.zeros_like(x0)
u0 = -np.sin(np.pi * x0)


# Boundary conditions: u(-1, t) = u(1, t) = 0
tb = np.linspace(t_min, t_max, N_b)[:, None]
xb_left = np.ones_like(tb) * x_min
xb_right = np.ones_like(tb) * x_max
ub_left = np.zeros_like(tb)
ub_right = np.zeros_like(tb)


# Convert to PyTorch tensors; only the collocation points need requires_grad=True,
# since we differentiate the network output with respect to them in the PDE residual
X_f = torch.tensor(X_f, dtype=torch.float32, requires_grad=True)
x0 = torch.tensor(x0, dtype=torch.float32)
t0 = torch.tensor(t0, dtype=torch.float32)
u0 = torch.tensor(u0, dtype=torch.float32)
tb = torch.tensor(tb, dtype=torch.float32)
xb_left = torch.tensor(xb_left, dtype=torch.float32)
xb_right = torch.tensor(xb_right, dtype=torch.float32)
ub_left = torch.tensor(ub_left, dtype=torch.float32)
ub_right = torch.tensor(ub_right, dtype=torch.float32)

We establish the simulation domain for the Burgers' equation by defining spatial and temporal boundaries, the viscosity, and the number of collocation, initial, and boundary points. We then generate random and evenly spaced data points for these conditions and convert them into PyTorch tensors, enabling gradient computation where needed.
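As a quick sanity check before moving on (an optional addition, not part of the original walkthrough), you can print the tensor shapes to confirm everything lines up:

print(X_f.shape)                               # torch.Size([10000, 2]): collocation points (x, t)
print(x0.shape, t0.shape, u0.shape)            # torch.Size([200, 1]) each: initial condition
print(tb.shape, xb_left.shape, ub_left.shape)  # torch.Size([200, 1]) each: left boundary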

class PINN(nn.Module):
    def __init__(self, layers):
        super(PINN, self).__init__()
        self.activation = nn.Tanh()
       
        layer_list = []
        for i in range(len(layers) - 1):
            layer_list.append(nn.Linear(layers[i], layers[i+1]))
        self.layers = nn.ModuleList(layer_list)
       
    def forward(self, x):
        for i, layer in enumerate(self.layers[:-1]):
            x = self.activation(layer(x))
        return self.layers[-1](x)


layers = [2, 50, 50, 50, 50, 1]
model = PINN(layers)
print(model)

Here, we define a custom Physics-Informed Neural Network (PINN) by extending PyTorch’s nn.Module. The network architecture is built dynamically using a list of layer sizes, where each linear layer is followed by a Tanh activation (except for the final output layer). In this example, the network takes a 2-dimensional input, passes it through four hidden layers (each with 50 neurons), and outputs a single value. Finally, the model is instantiated with the specified architecture, and its structure is printed.
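Before training, it can be reassuring to push a dummy batch through the untrained network to confirm the input and output shapes. This quick check is our addition and is not required for the tutorial:

dummy = torch.rand(4, 2)   # four (x, t) points
print(model(dummy).shape)  # expected: torch.Size([4, 1]), one u value per point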

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model.to(device)

Here, we check if a CUDA-enabled GPU is available, set the device accordingly, and move the model to that device for accelerated computation during training and inference.

def pde_residual(model, X):
    # Split the collocation points into x and t so we can differentiate
    # the network output with respect to each variable separately
    x = X[:, 0:1]
    t = X[:, 1:2]
    u = model(torch.cat([x, t], dim=1))

    # First- and second-order derivatives via automatic differentiation
    # (create_graph=True keeps the graph so the residual itself is differentiable)
    u_x = torch.autograd.grad(u, x, grad_outputs=torch.ones_like(u),
                              create_graph=True)[0]
    u_t = torch.autograd.grad(u, t, grad_outputs=torch.ones_like(u),
                              create_graph=True)[0]
    u_xx = torch.autograd.grad(u_x, x, grad_outputs=torch.ones_like(u_x),
                               create_graph=True)[0]

    # Burgers' equation residual: f = u_t + u*u_x - nu*u_xx (zero for an exact solution)
    f = u_t + u * u_x - nu * u_xx
    return f


def loss_func(model):
    # PDE residual loss at the interior collocation points
    f_pred = pde_residual(model, X_f.to(device))
    loss_f = torch.mean(f_pred**2)

    # Initial-condition loss: u(x, 0) should match -sin(pi * x)
    u0_pred = model(torch.cat([x0.to(device), t0.to(device)], dim=1))
    loss_0 = torch.mean((u0_pred - u0.to(device))**2)

    # Boundary-condition loss: enforce u(-1, t) = u(1, t) = 0
    u_left_pred = model(torch.cat([xb_left.to(device), tb.to(device)], dim=1))
    u_right_pred = model(torch.cat([xb_right.to(device), tb.to(device)], dim=1))
    loss_b = torch.mean(u_left_pred**2) + torch.mean(u_right_pred**2)

    # Total loss: physics residual + initial condition + boundary conditions
    loss = loss_f + loss_0 + loss_b
    return loss

Now, we compute the residual of Burgers’ equation at the collocation points by calculating the required derivatives via automatic differentiation. Then, we define a loss function that aggregates the PDE residual loss, the error from the initial condition, and the errors from the boundary conditions. This combined loss guides the network to learn a solution that satisfies both the physical law and the imposed conditions.
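If the torch.autograd.grad pattern above is unfamiliar, here is a minimal, self-contained check (our addition, not part of the tutorial) that differentiates a function with a known derivative; the same grad_outputs mechanics drive the PDE residual:

# Illustrative autograd check: d/dx sin(x) should equal cos(x)
x_test = torch.linspace(0.0, 1.0, 10, requires_grad=True).reshape(-1, 1)
y = torch.sin(x_test)
dy_dx = torch.autograd.grad(y, x_test, grad_outputs=torch.ones_like(y))[0]
print(torch.allclose(dy_dx, torch.cos(x_test)))  # True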

optimizer = optim.Adam(model.parameters(), lr=1e-3)
num_epochs = 5000


for epoch in range(num_epochs):
    optimizer.zero_grad()
    loss = loss_func(model)
    loss.backward()
    optimizer.step()
   
    if (epoch+1) % 500 == 0:
        print(f'Epoch {epoch+1}/{num_epochs}, Loss: {loss.item():.5e}')
       
print("Training complete!")

Here, we set up the PINN's training loop using the Adam optimizer with a learning rate of 1e-3. Over 5,000 epochs, we repeatedly compute the loss (which includes the PDE residual, initial-condition, and boundary-condition errors), backpropagate the gradients, and update the model parameters. Every 500 epochs, we print the current epoch and loss to monitor progress, and finally announce when training is complete.
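As an optional refinement that is common in the PINN literature (though not part of this tutorial's training loop), a second optimization stage with L-BFGS after Adam often tightens the fit near steep gradients. A minimal sketch, assuming model and loss_func are defined as above:

# Optional L-BFGS fine-tuning stage (a common PINN practice; our addition).
# L-BFGS in PyTorch requires a closure that re-evaluates the loss and gradients.
lbfgs = optim.LBFGS(model.parameters(), lr=1.0, max_iter=500,
                    history_size=50, line_search_fn="strong_wolfe")

def closure():
    lbfgs.zero_grad()
    loss = loss_func(model)
    loss.backward()
    return loss

lbfgs.step(closure)
print(f"Loss after L-BFGS refinement: {loss_func(model).item():.5e}")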

N_x, N_t = 256, 100
x = np.linspace(x_min, x_max, N_x)
t = np.linspace(t_min, t_max, N_t)
X, T = np.meshgrid(x, t)
XT = np.hstack((X.flatten()[:, None], T.flatten()[:, None]))
XT_tensor = torch.tensor(XT, dtype=torch.float32).to(device)


model.eval()
with torch.no_grad():
    u_pred = model(XT_tensor).cpu().numpy().reshape(N_t, N_x)


plt.figure(figsize=(8, 5))
plt.contourf(X, T, u_pred, levels=100, cmap='viridis')
plt.colorbar(label="u(x,t)")
plt.xlabel('x')
plt.ylabel('t')
plt.title("Predicted solution u(x,t) via PINN")
plt.show()

Finally, we create a grid of points over the defined spatial (x) and temporal (t) domain, feed these points to the trained model to predict the solution u(x, t), and reshape the output into a 2D array. We then visualize the predicted solution as a contour plot using matplotlib, complete with a colorbar, axis labels, and a title, allowing you to observe how the PINN has approximated the dynamics of the Burgers' equation.
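To make the solution's characteristic steepening toward a shock near x = 0 easier to see, you can also slice the prediction at a few fixed times. This extra plot is our addition and reuses the grid variables defined above:

# Plot u(x, t) at several fixed times to visualize the shock formation
plt.figure(figsize=(8, 5))
for t_slice in [0.0, 0.25, 0.5, 0.75, 1.0]:
    xt = np.hstack((x[:, None], np.full((N_x, 1), t_slice)))
    with torch.no_grad():
        u_slice = model(torch.tensor(xt, dtype=torch.float32).to(device)).cpu().numpy()
    plt.plot(x, u_slice, label=f"t = {t_slice}")
plt.xlabel("x")
plt.ylabel("u(x,t)")
plt.legend()
plt.title("PINN solution at fixed time slices")
plt.show()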

In conclusion, this tutorial has showcased how PINNs can be effectively implemented to solve the 1D Burgers’ equation by incorporating the physics of the problem into the training process. Through careful construction of the neural network, generation of collocation and boundary data, and automatic differentiation, we achieved a model that learns a solution consistent with the PDE and the prescribed conditions. This fusion of machine learning and traditional physics paves the way for tackling more challenging problems in computational science and engineering, inviting further exploration into higher-dimensional systems and more sophisticated neural architectures.


Here is the Colab Notebook.


Asif Razzaq is the CEO of Marktechpost Media Inc. As a visionary entrepreneur and engineer, Asif is committed to harnessing the potential of Artificial Intelligence for social good. His most recent endeavor is the launch of an Artificial Intelligence Media Platform, Marktechpost, which stands out for its in-depth coverage of machine learning and deep learning news that is both technically sound and easily understandable by a wide audience. The platform boasts over 2 million monthly views, illustrating its popularity among audiences.


