Linear Regression using PyTorch
Linear regression is a statistical model used to predict a continuous dependent variable from one or more independent variables. It assumes a linear relationship between the independent variables and the dependent variable.
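In the simplest case of a single independent variable, the model takes the form y = w * x + b, where w is the weight (slope) and b is the bias (intercept); fitting the model means finding the values of w and b that minimize the error between the predicted and observed values of y.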
To perform linear regression with PyTorch, install PyTorch and import it into your Python script. Then define the independent and dependent variables as PyTorch tensors.
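If PyTorch is not already installed, it can usually be installed with pip (the exact command depends on your platform and whether you need GPU support; see pytorch.org for the full installation matrix):

pip install torch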
Next, you will define the linear regression model using the nn.Linear module from PyTorch's torch.nn package. This module implements a linear transformation of the input tensor, which is the core of the linear regression model.
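As a quick illustration, a freshly constructed nn.Linear module already holds a randomly initialized weight and bias, which training will adjust:

import torch.nn as nn

layer = nn.Linear(in_features=1, out_features=1)
print(layer.weight)  # learnable slope, a tensor of shape (1, 1)
print(layer.bias)    # learnable intercept, a tensor of shape (1,)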
Here is an example of linear regression using PyTorch:
import torch
import torch.nn as nn

# Example training data (assumed here for illustration): points on the line y = 2x + 1
x_train = [[1.0], [2.0], [3.0], [4.0]]
y_train = [[3.0], [5.0], [7.0], [9.0]]

# Define the independent and dependent variables as tensors of shape (N, 1)
x = torch.tensor(x_train, dtype=torch.float)
y = torch.tensor(y_train, dtype=torch.float)

# Define the linear regression model: one input feature, one output
model = nn.Linear(in_features=1, out_features=1)

# Define the loss function and optimizer
loss_fn = nn.MSELoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

# Training loop
for i in range(100):
    # Forward pass: compute predicted y by passing x to the model
    y_pred = model(x)

    # Compute and print loss
    loss = loss_fn(y_pred, y)
    print(f'Step {i}, Loss: {loss.item()}')

    # Zero the gradients before running the backward pass
    optimizer.zero_grad()

    # Backward pass: compute gradient of the loss with respect to all the learnable parameters
    loss.backward()

    # Update the parameters using gradient descent
    optimizer.step()

# Make predictions on test data (also assumed here for illustration)
x_test = torch.tensor([[5.0], [6.0]], dtype=torch.float)
y_pred = model(x_test).detach().numpy()
In this example, x and y are the independent and dependent variables, respectively, defined as PyTorch tensors. The linear regression model is a single nn.Linear module, which applies a linear transformation to its input tensor to produce an output tensor. The loss function is the mean squared error between the predicted values and the true values, and the optimizer is SGD with a learning rate of 0.01. The training loop runs for 100 steps, after which the model is used to make predictions on the test data.
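Because the fitted line lives entirely in the module's parameters, you can also read the learned slope and intercept directly off the trained model (a small sketch using nn.Linear's weight and bias attributes):

w = model.weight.item()  # learned slope
b = model.bias.item()    # learned intercept
print(f'Learned line: y = {w:.3f}x + {b:.3f}')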