PyTorch CPU Tensors

Introduction

PyTorch tensors are multi-dimensional arrays that form the foundation of all computation in PyTorch. By default, PyTorch creates tensors that reside in CPU memory. CPU tensors work on any machine without special hardware and are particularly useful when GPU resources are unavailable or unnecessary for your computational task.

In this tutorial, we'll explore how to create, manipulate, and utilize CPU tensors in PyTorch, with beginner-friendly examples to help you understand their behavior and applications.

Creating CPU Tensors

PyTorch offers multiple ways to create tensors on the CPU. Let's explore the most common methods:

From Python Data Structures

You can create tensors from Python lists or NumPy arrays:

python
import torch
import numpy as np

# Creating a tensor from a Python list
list_tensor = torch.tensor([1, 2, 3, 4, 5])
print(f"Tensor from list: {list_tensor}")

# Creating a tensor from a NumPy array
numpy_array = np.array([6, 7, 8, 9, 10])
numpy_tensor = torch.tensor(numpy_array)
print(f"Tensor from NumPy array: {numpy_tensor}")

# Check device (should be 'cpu')
print(f"Device: {list_tensor.device}")

Output:

Tensor from list: tensor([1, 2, 3, 4, 5])
Tensor from NumPy array: tensor([6, 7, 8, 9, 10])
Device: cpu

Using Factory Functions

PyTorch provides several factory functions to create tensors with specific properties:

python
# Create a tensor filled with zeros
zeros = torch.zeros(3, 4)
print(f"Zeros tensor:\n{zeros}")

# Create a tensor filled with ones
ones = torch.ones(2, 3)
print(f"Ones tensor:\n{ones}")

# Create a tensor with random values drawn uniformly from [0, 1)
random_tensor = torch.rand(2, 2)
print(f"Random tensor:\n{random_tensor}")

# Create an uninitialized tensor (its contents are arbitrary memory values)
empty_tensor = torch.empty(2, 3)
print(f"Empty (uninitialized) tensor:\n{empty_tensor}")

Output (random and uninitialized values will differ on each run):

Zeros tensor:
tensor([[0., 0., 0., 0.],
        [0., 0., 0., 0.],
        [0., 0., 0., 0.]])
Ones tensor:
tensor([[1., 1., 1.],
        [1., 1., 1.]])
Random tensor:
tensor([[0.1234, 0.5678],
        [0.9012, 0.3456]])
Empty (uninitialized) tensor:
tensor([[4.5918e-41, 0.0000e+00, 0.0000e+00],
        [0.0000e+00, 0.0000e+00, 0.0000e+00]])

Specifying Data Types

You can explicitly set the data type when creating tensors:

python
# Integer tensor
int_tensor = torch.tensor([1, 2, 3], dtype=torch.int32)
print(f"Integer tensor: {int_tensor}, Type: {int_tensor.dtype}")

# Float tensor
float_tensor = torch.tensor([1.1, 2.2, 3.3], dtype=torch.float32)
print(f"Float tensor: {float_tensor}, Type: {float_tensor.dtype}")

# Boolean tensor
bool_tensor = torch.tensor([True, False, True], dtype=torch.bool)
print(f"Boolean tensor: {bool_tensor}, Type: {bool_tensor.dtype}")

Output:

Integer tensor: tensor([1, 2, 3], dtype=torch.int32), Type: torch.int32
Float tensor: tensor([1.1000, 2.2000, 3.3000]), Type: torch.float32
Boolean tensor: tensor([ True, False, True]), Type: torch.bool
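
A tensor's dtype can also be changed after creation. A quick sketch using the standard conversion methods:

python
# Convert an existing tensor to a new dtype (returns a new tensor)
int_tensor = torch.tensor([1, 2, 3], dtype=torch.int32)
as_float = int_tensor.to(torch.float32)  # equivalently: int_tensor.float()
print(f"Converted: {as_float}, Type: {as_float.dtype}")

# The original tensor is left unchanged
print(f"Original type: {int_tensor.dtype}")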

Tensor Properties and Attributes

Let's examine some key properties of CPU tensors:

python
tensor = torch.tensor([[1, 2, 3], [4, 5, 6]])

# Shape (dimensions)
print(f"Shape: {tensor.shape}")

# Number of dimensions
print(f"Dimensions: {tensor.ndim}")

# Data type
print(f"Data type: {tensor.dtype}")

# Device
print(f"Device: {tensor.device}")

# Total number of elements
print(f"Number of elements: {tensor.numel()}")

Output:

Shape: torch.Size([2, 3])
Dimensions: 2
Data type: torch.int64
Device: cpu
Number of elements: 6

Tensor Operations on CPU

Let's explore common operations that can be performed on CPU tensors:

Arithmetic Operations

python
# Create two tensors
a = torch.tensor([1, 2, 3], dtype=torch.float32)
b = torch.tensor([4, 5, 6], dtype=torch.float32)

# Addition
print(f"a + b = {a + b}")

# Subtraction
print(f"a - b = {a - b}")

# Multiplication (element-wise)
print(f"a * b = {a * b}")

# Division
print(f"a / b = {a / b}")

Output:

a + b = tensor([5., 7., 9.])
a - b = tensor([-3., -3., -3.])
a * b = tensor([ 4., 10., 18.])
a / b = tensor([0.2500, 0.4000, 0.5000])
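
Most arithmetic operations also come in an in-place variant (suffixed with an underscore) that mutates the tensor instead of allocating a new one, plus an equivalent functional form. A quick sketch:

python
a = torch.tensor([1., 2., 3.])
b = torch.tensor([4., 5., 6.])

# In-place addition: modifies `a` directly
a.add_(b)
print(f"a after a.add_(b): {a}")  # tensor([5., 7., 9.])

# Functional form: returns a new tensor, leaves inputs unchanged
c = torch.add(a, b)
print(f"torch.add(a, b): {c}")  # tensor([ 9., 12., 15.])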

Matrix Operations

python
# Create matrices
matrix_a = torch.tensor([[1, 2], [3, 4]], dtype=torch.float32)
matrix_b = torch.tensor([[5, 6], [7, 8]], dtype=torch.float32)

# Matrix multiplication
print(f"Matrix multiplication:\n{torch.matmul(matrix_a, matrix_b)}")

# Transpose
print(f"Transposed matrix_a:\n{matrix_a.t()}")

Output:

Matrix multiplication:
tensor([[19., 22.],
        [43., 50.]])
Transposed matrix_a:
tensor([[1., 3.],
        [2., 4.]])
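
As a shorthand, the `@` operator performs the same matrix multiplication as torch.matmul, and torch.dot handles the 1-D inner product. A brief sketch, continuing with matrix_a and matrix_b from above:

python
# `@` is equivalent to torch.matmul for 2-D tensors
print(f"matrix_a @ matrix_b:\n{matrix_a @ matrix_b}")

# Inner product of two 1-D tensors
v1 = torch.tensor([1., 2.])
v2 = torch.tensor([3., 4.])
print(f"Dot product: {torch.dot(v1, v2)}")  # tensor(11.)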

Reshaping and Resizing Tensors

Changing tensor dimensions is a common operation in deep learning workflows:

python
# Create a tensor
tensor = torch.tensor([1, 2, 3, 4, 5, 6])

# Reshape to 2x3 matrix
reshaped = tensor.reshape(2, 3)
print(f"Reshaped tensor:\n{reshaped}")

# View (creates a new view of the same data)
viewed = tensor.view(3, 2)
print(f"Viewed tensor:\n{viewed}")

# Permute dimensions
permuted = reshaped.permute(1, 0) # Swap dimensions
print(f"Permuted tensor:\n{permuted}")

Output:

Reshaped tensor:
tensor([[1, 2, 3],
        [4, 5, 6]])
Viewed tensor:
tensor([[1, 2],
        [3, 4],
        [5, 6]])
Permuted tensor:
tensor([[1, 4],
        [2, 5],
        [3, 6]])
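
One subtlety worth knowing: view only works when the tensor's memory is contiguous, while reshape falls back to copying the data if needed. A small sketch of the difference:

python
t = torch.arange(6).reshape(2, 3)
p = t.permute(1, 0)  # permuting returns a non-contiguous view

print(f"Contiguous: {p.is_contiguous()}")  # False

# p.view(6) would raise a RuntimeError here;
# reshape copies the data when no view is possible
print(f"reshape: {p.reshape(6)}")  # tensor([0, 3, 1, 4, 2, 5])

# Or make the memory contiguous first, then view
print(f"contiguous().view(): {p.contiguous().view(6)}")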

Indexing and Slicing

Accessing tensor elements follows patterns similar to NumPy:

python
tensor = torch.tensor([[1, 2, 3], [4, 5, 6], [7, 8, 9]])

# Get a single element
print(f"Element at [1, 2]: {tensor[1, 2]}")

# Get a row
print(f"First row: {tensor[0]}")

# Get a column
print(f"Second column: {tensor[:, 1]}")

# Slicing
print(f"Slice [0:2, 1:3]:\n{tensor[0:2, 1:3]}")

# Boolean indexing
mask = tensor > 5
print(f"Elements > 5: {tensor[mask]}")

Output:

Element at [1, 2]: 6
First row: tensor([1, 2, 3])
Second column: tensor([2, 5, 8])
Slice [0:2, 1:3]:
tensor([[2, 3],
        [5, 6]])
Elements > 5: tensor([6, 7, 8, 9])
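
Boolean masks also work on the left-hand side of an assignment, which lets you modify matching elements in place. A short sketch:

python
tensor = torch.tensor([[1, 2, 3], [4, 5, 6], [7, 8, 9]])

# Zero out every element greater than 5, in place
tensor[tensor > 5] = 0
print(f"After masked assignment:\n{tensor}")
# tensor([[1, 2, 3],
#         [4, 5, 0],
#         [0, 0, 0]])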

Converting Between CPU Tensors and Other Formats

It's often necessary to convert between PyTorch tensors and other data formats:

Converting to/from NumPy

python
# PyTorch tensor to NumPy
tensor = torch.ones(3, 3)
numpy_array = tensor.numpy()
print(f"NumPy array:\n{numpy_array}")

# NumPy array to PyTorch tensor
numpy_array = np.random.rand(2, 2)
back_to_tensor = torch.from_numpy(numpy_array)
print(f"Back to tensor:\n{back_to_tensor}")

Output (random values will differ on each run):

NumPy array:
[[1. 1. 1.]
 [1. 1. 1.]
 [1. 1. 1.]]
Back to tensor:
tensor([[0.1234, 0.5678],
        [0.9012, 0.3456]], dtype=torch.float64)
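
One caveat: torch.from_numpy (and tensor.numpy() on CPU tensors) shares memory with the NumPy array, so changes on one side are visible on the other, whereas torch.tensor always copies. A quick sketch:

python
shared = np.ones(3)
t = torch.from_numpy(shared)

# Mutating the array is reflected in the tensor (shared memory)
shared[0] = 99
print(f"Tensor sees the change: {t}")

# torch.tensor() copies, so later changes don't propagate
copied = torch.tensor(shared)
shared[1] = -1
print(f"Copied tensor is unaffected: {copied}")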

Converting to Python Scalars

python
# Single-element tensor to Python scalar
scalar_tensor = torch.tensor(42)
python_scalar = scalar_tensor.item()
print(f"Python scalar: {python_scalar}, Type: {type(python_scalar)}")

# Multi-element tensor to Python list
tensor = torch.tensor([1, 2, 3, 4])
python_list = tensor.tolist()
print(f"Python list: {python_list}, Type: {type(python_list)}")

Output:

Python scalar: 42, Type: <class 'int'>
Python list: [1, 2, 3, 4], Type: <class 'list'>

Real-World Applications

Let's explore some practical examples of CPU tensor usage in common scenarios:

Linear Regression Using CPU Tensors

python
import torch

# Generate synthetic data
x = torch.linspace(0, 10, 100).unsqueeze(1)
y_true = 2*x + 1 + 0.2*torch.randn(x.size())

# Initialize parameters (weights and bias)
w = torch.randn(1, requires_grad=True)
b = torch.randn(1, requires_grad=True)

# Training parameters
learning_rate = 0.01
epochs = 100

# Training loop
for epoch in range(epochs):
    # Forward pass
    y_pred = w * x + b

    # Compute loss (MSE)
    loss = ((y_pred - y_true) ** 2).mean()

    # Backward pass
    loss.backward()

    # Update parameters (inside no_grad so the updates aren't tracked by autograd)
    with torch.no_grad():
        w -= learning_rate * w.grad
        b -= learning_rate * b.grad

    # Reset gradients so they don't accumulate across iterations
    w.grad.zero_()
    b.grad.zero_()

    # Print progress
    if epoch % 10 == 0:
        print(f"Epoch {epoch}, Loss: {loss.item():.4f}, w: {w.item():.4f}, b: {b.item():.4f}")

print(f"Final parameters - w: {w.item():.4f}, b: {b.item():.4f}")

Output (your values may differ due to random initialization):

Epoch 0, Loss: 26.4321, w: 0.6832, b: 0.3245
Epoch 10, Loss: 4.0012, w: 1.2345, b: 0.7432
Epoch 20, Loss: 1.8721, w: 1.5678, b: 0.8911
...
Epoch 90, Loss: 0.2013, w: 1.9876, b: 0.9932
Final parameters - w: 1.9921, b: 1.0023

This example demonstrates a simple linear regression model using CPU tensors with automatic differentiation.
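
In practice, the manual update step is usually delegated to an optimizer from torch.optim. A roughly equivalent sketch of the same loop using torch.optim.SGD (reusing x and y_true from above):

python
w = torch.randn(1, requires_grad=True)
b = torch.randn(1, requires_grad=True)
optimizer = torch.optim.SGD([w, b], lr=0.01)

for epoch in range(100):
    y_pred = w * x + b
    loss = ((y_pred - y_true) ** 2).mean()

    optimizer.zero_grad()  # reset gradients from the previous step
    loss.backward()        # compute gradients
    optimizer.step()       # apply the SGD update to w and b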

Image Processing Example

python
import torch

# Create a simple 8x8 image tensor
image = torch.zeros(8, 8)
image[2:6, 2:6] = 1 # Create a white square in the middle

# Apply a simple kernel for edge detection
kernel = torch.tensor([[-1, -1, -1],
                       [-1,  8, -1],
                       [-1, -1, -1]], dtype=torch.float32)

# Manual convolution function for edge detection
def apply_kernel(image, kernel):
    h, w = image.shape
    k_h, k_w = kernel.shape
    result = torch.zeros(h - k_h + 1, w - k_w + 1)

    # Slide the kernel over the image ("valid" mode: no padding)
    for i in range(result.shape[0]):
        for j in range(result.shape[1]):
            result[i, j] = torch.sum(image[i:i+k_h, j:j+k_w] * kernel)

    return result

# Apply convolution
edge_detected = apply_kernel(image, kernel)

print("Original image shape:", image.shape)
print("Edge detected image shape:", edge_detected.shape)

Output:

Original image shape: torch.Size([8, 8])
Edge detected image shape: torch.Size([6, 6])

This example demonstrates how to use CPU tensors for basic image processing operations.
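
For anything beyond a toy example, you would normally use torch.nn.functional.conv2d instead of Python loops. The manual loop above is technically a cross-correlation, which is exactly what conv2d computes, so the results match. A sketch reusing image, kernel, and edge_detected from above:

python
import torch.nn.functional as F

# conv2d expects 4-D input: (batch, channels, height, width)
image_4d = image.unsqueeze(0).unsqueeze(0)    # shape (1, 1, 8, 8)
kernel_4d = kernel.unsqueeze(0).unsqueeze(0)  # shape (1, 1, 3, 3)

edge_fast = F.conv2d(image_4d, kernel_4d).squeeze()
print("Matches manual result:", torch.allclose(edge_fast, edge_detected))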

CPU vs. GPU Tensors

It's worth understanding when to use CPU vs. GPU tensors:

| Aspect | CPU Tensors | GPU Tensors |
|---|---|---|
| Availability | Always available | Require a CUDA-enabled GPU |
| Processing speed | Slower for large computations | Much faster for parallel operations |
| Memory access | Direct access from Python | Requires transfers between CPU and GPU |
| Best for | Small datasets, quick prototyping, debugging | Large datasets, deep learning training |
| Default in PyTorch | Yes | No (requires explicit transfer) |

CPU tensors are ideal for:

  • Development and debugging
  • Small models and datasets
  • When GPU resources are unavailable
  • Operations that aren't heavily parallelizable
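
Moving a tensor between devices is always an explicit call. A minimal sketch that falls back to the CPU when no GPU is present:

python
# Pick a device: CUDA if available, otherwise stay on the CPU
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

cpu_tensor = torch.ones(3, 3)
moved = cpu_tensor.to(device)  # returns the tensor unchanged if already on that device
print(f"Tensor is now on: {moved.device}")

# Bring it back to the CPU (required before calling .numpy())
back = moved.cpu()
print(f"Back on: {back.device}")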

Summary

In this tutorial, we've covered the essentials of PyTorch CPU tensors:

  • Creating tensors from various data sources
  • Understanding tensor properties and attributes
  • Performing common operations on tensors
  • Reshaping and manipulating tensor dimensions
  • Converting between tensors and other data formats
  • Real-world applications using CPU tensors

CPU tensors are the default in PyTorch and provide a solid foundation for learning and working with the framework. As your models and datasets grow, you might want to move to GPU tensors for better performance, but mastering CPU tensors is an essential first step in your PyTorch journey.

Exercises

  1. Create a 3x3 tensor with random values and calculate its mean, minimum, and maximum values.
  2. Implement matrix multiplication manually (without using torch.matmul) using element-wise operations.
  3. Create a function that normalizes a tensor (subtract mean and divide by standard deviation).
  4. Build a simple neural network with one hidden layer using only CPU tensors.
  5. Write a function that performs a 2D convolution operation on an input tensor using a given kernel.

