TensorFlow Hello World

Welcome to your first steps with TensorFlow! In this tutorial, we'll build the machine learning equivalent of a "Hello World" program using TensorFlow. Just as printing "Hello World" is the traditional first program when learning a new programming language, creating a simple machine learning model is the perfect way to start your journey with TensorFlow.

What is TensorFlow?

TensorFlow is an open-source library developed by Google that allows you to build and train machine learning models. It gets its name from "tensors", which are multi-dimensional arrays that flow through a computational graph.
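
For a quick taste of what that "flow" means, here is a minimal sketch in which two small tensors pass through a multiplication and a sum (the variable names are just illustrative):

python
import tensorflow as tf

# Two constant tensors "flow" through the operations below
a = tf.constant([1.0, 2.0, 3.0])
b = tf.constant([4.0, 5.0, 6.0])

c = a * b                 # element-wise multiplication: [ 4. 10. 18.]
total = tf.reduce_sum(c)  # add up all elements: 32.0

print(c)
print(total)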

Prerequisites

Before we begin, make sure you have:

  1. Python 3.6+ installed
  2. TensorFlow 2.x installed (pip install tensorflow)
  3. Basic understanding of Python

Setting Up Your Environment

First, let's import TensorFlow and check its version to make sure everything is working correctly:

python
import tensorflow as tf
print(f"TensorFlow version: {tf.__version__}")

Output:

TensorFlow version: 2.6.0

Your version number might be different, but as long as it starts with 2, you're good to go!

Understanding Tensors

Before we build our first model, let's look at tensors, the fundamental building blocks of TensorFlow.

python
# Create a simple tensor
scalar = tf.constant(7)
vector = tf.constant([10, 20, 30])
matrix = tf.constant([[1, 2], [3, 4]])
tensor_3d = tf.constant([[[1, 2], [3, 4]], [[5, 6], [7, 8]]])

print(f"Scalar: {scalar}")
print(f"Vector: {vector}")
print(f"Matrix: {matrix}")
print(f"3D Tensor: {tensor_3d}")

Output:

Scalar: tf.Tensor(7, shape=(), dtype=int32)
Vector: tf.Tensor([10 20 30], shape=(3,), dtype=int32)
Matrix: tf.Tensor(
[[1 2]
 [3 4]], shape=(2, 2), dtype=int32)
3D Tensor: tf.Tensor(
[[[1 2]
  [3 4]]

 [[5 6]
  [7 8]]], shape=(2, 2, 2), dtype=int32)

Each tensor has:

  • A value: The data it contains
  • A shape: The dimensions of the data
  • A data type (dtype): The type of data it contains (like int32, float32)
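
You can check these attributes directly on any of the tensors above, for example the matrix we just created:

python
# Inspect the attributes of the matrix tensor defined above
print(matrix.shape)    # (2, 2)
print(matrix.dtype)    # <dtype: 'int32'>
print(matrix.numpy())  # the values as a NumPy array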

Building Our First Model: Linear Regression

Let's create a simple linear regression model, the quintessential "Hello World" of machine learning. We'll build a model that learns the relationship y = 2x + 1.

Step 1: Generate Training Data

python
import numpy as np
import matplotlib.pyplot as plt

# Generate synthetic data
x_train = np.array([-1.0, 0.0, 1.0, 2.0, 3.0, 4.0], dtype=float)
y_train = np.array([-1.0, 1.0, 3.0, 5.0, 7.0, 9.0], dtype=float)

plt.scatter(x_train, y_train)
plt.xlabel('x')
plt.ylabel('y')
plt.title('Training Data')
plt.show()

This displays a scatter plot of our training data points, which follow the pattern y = 2x + 1 exactly.

Step 2: Build the Model

Now, let's create a simple linear model using TensorFlow's Keras API:

python
# Define the model
model = tf.keras.Sequential([
    tf.keras.layers.Dense(units=1, input_shape=[1])
])

# Compile the model
model.compile(
    optimizer=tf.keras.optimizers.SGD(learning_rate=0.01),
    loss='mean_squared_error'
)

# Print model summary
model.summary()

Output:

Model: "sequential"
_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
dense (Dense)                (None, 1)                 2
=================================================================
Total params: 2
Trainable params: 2
Non-trainable params: 0
_________________________________________________________________

Our model is incredibly simple: just one Dense layer with one neuron. The model has two trainable parameters:

  • A weight (which should learn to be close to 2)
  • A bias (which should learn to be close to 1)
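
If you're curious, you can peek at these two parameters before training. Because we specified input_shape, the layer is already built, so get_weights() returns values right away; the weight starts as a small random number and the bias starts at zero:

python
# Inspect the parameters before training (the exact weight varies from run to run)
initial_weight, initial_bias = model.layers[0].get_weights()
print(f"Initial weight: {initial_weight[0][0]:.4f}")
print(f"Initial bias: {initial_bias[0]:.4f}")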

Step 3: Train the Model

python
# Train the model
history = model.fit(
    x_train,
    y_train,
    epochs=500,
    verbose=0  # Set to 1 to see training progress
)

# Plot the training loss over epochs
plt.plot(history.history['loss'])
plt.xlabel('Epoch')
plt.ylabel('Loss')
plt.title('Training Loss')
plt.show()

Step 4: Examine the Trained Model

Let's see what our model learned:

python
# Get the trained weight and bias
weight = model.layers[0].get_weights()[0][0][0]
bias = model.layers[0].get_weights()[1][0]

print(f"Learned weight: {weight:.4f}")
print(f"Learned bias: {bias:.4f}")
print(f"Expected weight: 2.0000")
print(f"Expected bias: 1.0000")

Output:

Learned weight: 1.9989
Learned bias: 1.0023
Expected weight: 2.0000
Expected bias: 1.0000

The model has successfully learned values very close to our expected weight of 2 and bias of 1!

Step 5: Make Predictions

Now that our model is trained, let's use it to make predictions:

python
# Make predictions
x_test = np.array([-2.0, 5.0, 10.0], dtype=float)
y_pred = model.predict(x_test)

print("Predictions:")
for x, y in zip(x_test, y_pred):
    print(f"x = {x}, Predicted y = {y[0]:.4f}, Expected y = {2*x + 1:.4f}")

# Visualize predictions
plt.scatter(x_train, y_train, label='Training data')
plt.scatter(x_test, y_pred, color='red', label='Predictions')

# Plot the learned line
x_line = np.linspace(-3, 11, 100)
y_line = weight * x_line + bias
plt.plot(x_line, y_line, 'g-', label='Learned line')

plt.xlabel('x')
plt.ylabel('y')
plt.title('Linear Regression Results')
plt.legend()
plt.grid(True)
plt.show()

Output:

Predictions:
x = -2.0, Predicted y = -2.9945, Expected y = -3.0000
x = 5.0, Predicted y = 11.0000, Expected y = 11.0000
x = 10.0, Predicted y = 20.9984, Expected y = 21.0000

Real-World Application: Temperature Conversion

Let's apply what we've learned to a real-world problem: converting temperatures from Celsius to Fahrenheit using the formula F = 1.8 * C + 32.

python
# Generate Celsius to Fahrenheit training data
celsius = np.array([-40, -10, 0, 8, 15, 22, 38], dtype=float)
fahrenheit = np.array([-40, 14, 32, 46.4, 59, 71.6, 100.4], dtype=float)

# Build the model
temp_model = tf.keras.Sequential([
    tf.keras.layers.Dense(units=1, input_shape=[1])
])

# Adam copes better than plain SGD with the larger magnitude of the Celsius inputs
temp_model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=0.1),
    loss='mean_squared_error'
)

# Train the model
history = temp_model.fit(celsius, fahrenheit, epochs=500, verbose=0)

# Get the learned parameters
weight, bias = temp_model.layers[0].get_weights()
print(f"Learned formula: F = {weight[0][0]:.4f} * C + {bias[0]:.4f}")
print(f"Actual formula: F = 1.8 * C + 32")

# Make a prediction
celsius_test = 25
fahrenheit_pred = temp_model.predict(np.array([celsius_test], dtype=float))[0][0]
fahrenheit_actual = 1.8 * celsius_test + 32

print(f"{celsius_test} degrees Celsius is predicted to be {fahrenheit_pred:.2f} degrees Fahrenheit")
print(f"The actual conversion is {fahrenheit_actual} degrees Fahrenheit")

Output:

Learned formula: F = 1.7996 * C + 31.9735
Actual formula: F = 1.8 * C + 32
25 degrees Celsius is predicted to be 76.96 degrees Fahrenheit
The actual conversion is 77.0 degrees Fahrenheit

Summary

Congratulations! You've just completed your TensorFlow "Hello World" journey by:

  1. Understanding what tensors are in TensorFlow
  2. Creating a simple linear regression model using TensorFlow's Keras API
  3. Training the model to learn the pattern in your data
  4. Making predictions with your trained model
  5. Applying your knowledge to a real-world problem (temperature conversion)

This is just the beginning of what you can do with TensorFlow. As you continue learning, you'll build more complex models to solve more interesting problems.

Exercises

To reinforce your learning, try these exercises:

  1. Modify the linear regression model to learn the relationship y = 3x - 5 (a starting sketch appears after this list)
  2. Create a model to convert distances from miles to kilometers
  3. Add one more layer to the model and observe how the results change
  4. Experiment with different optimizers (like Adam or RMSprop) and learning rates
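
If you'd like a nudge on the first exercise, here is one possible starting point (the data values here are just an example); the same pattern adapts easily to the other exercises:

python
# One possible setup for exercise 1: learn y = 3x - 5
x_ex = np.array([-1.0, 0.0, 1.0, 2.0, 3.0, 4.0], dtype=float)
y_ex = 3 * x_ex - 5  # targets generated from the relationship we want to learn

ex_model = tf.keras.Sequential([
    tf.keras.layers.Dense(units=1, input_shape=[1])
])
ex_model.compile(
    optimizer=tf.keras.optimizers.SGD(learning_rate=0.01),
    loss='mean_squared_error'
)
ex_model.fit(x_ex, y_ex, epochs=500, verbose=0)

w, b = ex_model.layers[0].get_weights()
print(f"Learned weight: {w[0][0]:.4f}, learned bias: {b[0]:.4f}")  # should approach 3 and -5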

Happy coding with TensorFlow!


