
TensorFlow Sessions

Introduction

TensorFlow sessions are a fundamental concept in TensorFlow 1.x that enable you to execute operations in the computational graph. While TensorFlow 2.x has moved toward eager execution (which eliminates the need for explicit sessions), understanding sessions is still important for working with legacy code, understanding TensorFlow's evolution, and grasping the fundamentals of how computational graphs work.

In TensorFlow 1.x, you first define a computational graph consisting of operations (ops) and tensors, and then use a session to execute these operations and evaluate tensors. Think of a session as an environment that holds the state of TensorFlow runtime and runs TensorFlow operations.

Understanding TensorFlow Sessions

What is a Session?

A session encapsulates the control and state of the TensorFlow runtime. It provides the mechanism to:

  1. Allocate resources (like memory) for the computation
  2. Execute the operations in the computational graph
  3. Evaluate tensors and retrieve their values

Basic Session Usage

Let's start with a simple example:

python
import tensorflow.compat.v1 as tf
tf.disable_eager_execution() # Disable TF 2.x eager execution

# Create a constant tensor
a = tf.constant(5.0)
b = tf.constant(7.0)
c = a * b

# Create a session
sess = tf.Session()

# Run the operation and get the result
result = sess.run(c)
print("Result:", result)

# Close the session
sess.close()

Output:

Result: 35.0

In this example:

  1. We define constants a and b and an operation to multiply them
  2. We create a TensorFlow session using tf.Session()
  3. We run the operation with sess.run(c) to get the result
  4. Finally, we close the session with sess.close()
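
As a side note, sess.run() is not the only way to pull a value out of the graph: a tensor's eval() method accepts an explicit session argument and is equivalent to running that tensor in the given session. A minimal sketch of the same computation:

python
import tensorflow.compat.v1 as tf
tf.disable_eager_execution()

a = tf.constant(5.0)
b = tf.constant(7.0)
c = a * b

sess = tf.Session()

# Tensor.eval(session=...) is equivalent to sess.run(c)
result = c.eval(session=sess)
print("Result:", result)  # 35.0

sess.close()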

Using Sessions with Context Managers

A cleaner way to use sessions is with Python's context manager (with statement), which automatically closes the session when done:

python
import tensorflow.compat.v1 as tf
tf.disable_eager_execution()

a = tf.constant(5.0)
b = tf.constant(7.0)
c = a * b

# Using a session with a context manager
with tf.Session() as sess:
    result = sess.run(c)
    print("Result:", result)

Output:

Result: 35.0

This approach ensures that resources are properly released when the session is no longer needed.
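
By default, tf.Session() runs operations from the default graph. If you build operations inside an explicitly created graph, you can bind the session to it via the graph argument. A brief sketch:

python
import tensorflow.compat.v1 as tf
tf.disable_eager_execution()

# Build operations inside an explicitly created graph
graph = tf.Graph()
with graph.as_default():
    a = tf.constant(2.0)
    b = tf.constant(3.0)
    c = a + b

# The session is bound to that graph and can only run operations defined in it
with tf.Session(graph=graph) as sess:
    print("Result:", sess.run(c))  # 5.0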

Session Configuration

When creating a session, you can specify various configuration options:

python
import tensorflow.compat.v1 as tf
tf.disable_eager_execution()

# Configure session to use a specific amount of GPU memory
config = tf.ConfigProto()
config.gpu_options.allow_growth = True
config.gpu_options.per_process_gpu_memory_fraction = 0.4 # Use only 40% of GPU memory

# Create a session with the specified configuration
with tf.Session(config=config) as sess:
    # Define and run operations
    a = tf.constant([[1, 2], [3, 4]])
    b = tf.constant([[5, 6], [7, 8]])
    c = tf.matmul(a, b)
    result = sess.run(c)
    print("Matrix multiplication result:\n", result)

Output:

Matrix multiplication result:
[[19 22]
[43 50]]
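
Another commonly used option is log_device_placement, which makes TensorFlow log the device (CPU or GPU) each operation is assigned to. A small sketch:

python
import tensorflow.compat.v1 as tf
tf.disable_eager_execution()

# Log which device each operation is placed on
config = tf.ConfigProto(log_device_placement=True)

a = tf.constant([1.0, 2.0, 3.0])
b = tf.constant([4.0, 5.0, 6.0])
c = a + b

with tf.Session(config=config) as sess:
    print(sess.run(c))  # device placement info is logged when the ops run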

Running Multiple Operations in a Session

You can run multiple operations in a single sess.run() call:

python
import tensorflow.compat.v1 as tf
tf.disable_eager_execution()

# Define operations
a = tf.constant(3.0)
b = tf.constant(4.0)
c = tf.add(a, b)
d = tf.multiply(a, b)
e = tf.sqrt(tf.square(a) + tf.square(b)) # Pythagorean theorem

with tf.Session() as sess:
    # Run multiple operations at once
    sum_result, product_result, hypotenuse = sess.run([c, d, e])
    print(f"Sum: {sum_result}")
    print(f"Product: {product_result}")
    print(f"Hypotenuse: {hypotenuse}")

Output:

Sum: 7.0
Product: 12.0
Hypotenuse: 5.0
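
The fetches argument is not limited to lists: sess.run() accepts nested structures such as dictionaries and returns a result with the same structure, which makes the returned values easier to name. For example:

python
import tensorflow.compat.v1 as tf
tf.disable_eager_execution()

a = tf.constant(3.0)
b = tf.constant(4.0)
c = tf.add(a, b)
d = tf.multiply(a, b)

with tf.Session() as sess:
    # The result mirrors the structure of the fetches
    results = sess.run({"sum": c, "product": d})
    print(results)  # {'sum': 7.0, 'product': 12.0}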

Feeding Values to TensorFlow Operations

TensorFlow allows you to inject values into the graph at run time by defining placeholders and passing concrete values through the feed_dict parameter of sess.run():

python
import tensorflow.compat.v1 as tf
tf.disable_eager_execution()

# Create placeholders for input
x = tf.placeholder(tf.float32, shape=(None,), name="x")
y = tf.placeholder(tf.float32, shape=(None,), name="y")

# Define operations
sum_op = tf.reduce_sum(x + y)
prod_op = tf.reduce_sum(x * y)

# Create sample data
x_data = [1, 2, 3]
y_data = [4, 5, 6]

with tf.Session() as sess:
    # Feed the data to the placeholders
    sum_result = sess.run(sum_op, feed_dict={x: x_data, y: y_data})
    prod_result = sess.run(prod_op, feed_dict={x: x_data, y: y_data})

    print(f"Sum of elements: {sum_result}")
    print(f"Sum of products: {prod_result}")

Output:

Sum of elements: 21.0
Sum of products: 32.0
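
Because the placeholders above are declared with shape=(None,), the unspecified dimension is fixed only when data is fed, so the same graph can handle inputs of different lengths. A quick sketch:

python
import tensorflow.compat.v1 as tf
tf.disable_eager_execution()

# shape=(None,) leaves the vector length open until feed time
x = tf.placeholder(tf.float32, shape=(None,), name="x")
doubled = 2 * x

with tf.Session() as sess:
    print(sess.run(doubled, feed_dict={x: [1, 2, 3]}))  # [2. 4. 6.]
    print(sess.run(doubled, feed_dict={x: [10, 20]}))   # [20. 40.]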

Interactive Sessions

TensorFlow also provides an InteractiveSession, which installs itself as the default session so you can evaluate tensors with eval() instead of passing the session to run() explicitly:

python
import tensorflow.compat.v1 as tf
tf.disable_eager_execution()

# Create an interactive session
sess = tf.InteractiveSession()

# Define operations
a = tf.constant([1.0, 2.0, 3.0])
b = tf.constant([4.0, 5.0, 6.0])
c = a * b

# With InteractiveSession, we can run without explicitly passing the session
result = c.eval() # Equivalent to sess.run(c)
print("Result:", result)

# Close the session when done
sess.close()

Output:

Result: [4. 10. 18.]
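
Note that a regular session opened with a with statement also becomes the default session inside that block, so eval() works there too without an InteractiveSession. For example:

python
import tensorflow.compat.v1 as tf
tf.disable_eager_execution()

a = tf.constant(2.0)
b = tf.constant(3.0)
c = a * b

# Inside the with-block this session is the default, so eval() needs no argument
with tf.Session() as sess:
    print("Result:", c.eval())  # 6.0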

Practical Example: Linear Regression with TensorFlow Sessions

Let's implement a simple linear regression model using TensorFlow sessions:

python
import tensorflow.compat.v1 as tf
import numpy as np
import matplotlib.pyplot as plt
tf.disable_eager_execution()

# Generate synthetic data
np.random.seed(42)
x_data = np.linspace(-1, 1, 100)
y_data = 2 * x_data + np.random.randn(100) * 0.3 # y = 2x + noise

# Define placeholders for input
x = tf.placeholder(tf.float32, shape=(None,))
y = tf.placeholder(tf.float32, shape=(None,))

# Define model parameters (variables)
W = tf.Variable(tf.random_normal([1]), name="weight")
b = tf.Variable(tf.zeros([1]), name="bias")

# Define the linear model
y_pred = W * x + b

# Define loss function
loss = tf.reduce_mean(tf.square(y - y_pred))

# Define optimizer
optimizer = tf.train.GradientDescentOptimizer(learning_rate=0.1)
train_op = optimizer.minimize(loss)

# Initialize all variables
init = tf.global_variables_initializer()

# Start training
with tf.Session() as sess:
    # Run initializer
    sess.run(init)

    # Training loop
    for step in range(201):
        _, current_loss, current_W, current_b = sess.run(
            [train_op, loss, W, b],
            feed_dict={x: x_data, y: y_data}
        )

        # Print progress every 20 steps
        if step % 20 == 0:
            print(f"Step {step}, Loss: {current_loss:.4f}, W: {current_W[0]:.4f}, b: {current_b[0]:.4f}")

    # Get the final trained parameters
    final_W, final_b = sess.run([W, b])

    # Make predictions with the trained model
    predictions = sess.run(y_pred, feed_dict={x: x_data})

# Plot the results
plt.figure(figsize=(10, 6))
plt.scatter(x_data, y_data, label="Data points")
plt.plot(x_data, predictions, 'r-', linewidth=2, label=f"Fitted line: y = {final_W[0]:.2f}x + {final_b[0]:.2f}")
plt.legend()
plt.title("Linear Regression with TensorFlow Sessions")
plt.xlabel("X")
plt.ylabel("Y")
plt.grid(True)
plt.savefig("linear_regression_tf.png")
plt.close()

print(f"Final model: y = {final_W[0]:.4f}x + {final_b[0]:.4f}")

Output:

Step 0, Loss: 1.6840, W: 0.8050, b: 0.0279
Step 20, Loss: 0.1164, W: 1.8015, b: 0.0200
Step 40, Loss: 0.1000, W: 1.9034, b: 0.0159
Step 60, Loss: 0.0981, W: 1.9437, b: 0.0129
Step 80, Loss: 0.0978, W: 1.9595, b: 0.0106
Step 100, Loss: 0.0978, W: 1.9660, b: 0.0088
Step 120, Loss: 0.0977, W: 1.9692, b: 0.0073
Step 140, Loss: 0.0977, W: 1.9709, b: 0.0060
Step 160, Loss: 0.0977, W: 1.9718, b: 0.0050
Step 180, Loss: 0.0977, W: 1.9723, b: 0.0041
Step 200, Loss: 0.0977, W: 1.9727, b: 0.0034
Final model: y = 1.9727x + 0.0034

This example builds a linear regression model that learns the underlying relationship y ≈ 2x from noisy training data. The session is used to:

  1. Initialize variables
  2. Run training operations
  3. Collect loss values during training
  4. Retrieve the final trained parameters
  5. Make predictions with the trained model

Sessions in Production

When deploying TensorFlow models to production, you often need to save the variables held by a session and restore them later:

Saving a Model

python
import tensorflow.compat.v1 as tf
tf.disable_eager_execution()

# Create model
x = tf.placeholder(tf.float32, shape=(None, 2), name="input")
W = tf.Variable(tf.random_normal([2, 1]), name="weight")
b = tf.Variable(tf.zeros([1]), name="bias")
y = tf.matmul(x, W) + b
output = tf.identity(y, name="output") # Named operation for easy retrieval

# Create a session and initialize variables
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())

    # Train model (simplified for demonstration)
    # ...

    # Create a saver
    saver = tf.train.Saver()

    # Save the model to a file
    save_path = saver.save(sess, "./my_model")
    print(f"Model saved to: {save_path}")

Loading a Model

python
import tensorflow.compat.v1 as tf
tf.disable_eager_execution()

# Create the same model structure
x = tf.placeholder(tf.float32, shape=(None, 2), name="input")
W = tf.Variable(tf.random_normal([2, 1]), name="weight")
b = tf.Variable(tf.zeros([1]), name="bias")
y = tf.matmul(x, W) + b
output = tf.identity(y, name="output")

# Create a session
with tf.Session() as sess:
    # Create a saver
    saver = tf.train.Saver()

    # Restore the saved model
    saver.restore(sess, "./my_model")
    print("Model restored.")

    # Use the model for prediction
    test_data = [[0.5, 0.2], [0.1, 0.8]]
    predictions = sess.run(output, feed_dict={x: test_data})
    print("Predictions:", predictions)

TensorFlow 2.x and the Transition from Sessions

In TensorFlow 2.x, eager execution is enabled by default, which means operations are executed immediately without requiring a session. This makes the code more intuitive and Pythonic.

However, you can still use TensorFlow 1.x style sessions in TensorFlow 2.x when needed:

python
import tensorflow as tf

# Use TensorFlow 1.x API with TensorFlow 2.x
tf1 = tf.compat.v1
tf1.disable_eager_execution()

# Create a graph
a = tf1.constant(5.0)
b = tf1.constant(10.0)
c = a * b

# Use a session to run it
with tf1.Session() as sess:
    result = sess.run(c)
    print("Result:", result)

Output:

Result: 50.0
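
For comparison, the equivalent TensorFlow 2.x code needs no graph construction or session at all; the multiplication runs as soon as it is written:

python
import tensorflow as tf  # TF 2.x with eager execution enabled by default

a = tf.constant(5.0)
b = tf.constant(10.0)
c = a * b  # executed immediately, no session required

print("Result:", c.numpy())  # 50.0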

Summary

TensorFlow Sessions are a core concept in TensorFlow 1.x that allow you to execute operations in a computational graph. In this tutorial, you've learned:

  • What sessions are and why they're important in TensorFlow 1.x
  • How to create and use sessions to run operations
  • How to configure sessions for resource management
  • How to feed data into a graph using placeholders
  • How to use sessions for training machine learning models
  • How to save and load models with sessions
  • The transition from session-based execution to eager execution in TensorFlow 2.x

While TensorFlow 2.x has moved away from explicit sessions with its eager execution model, understanding sessions is still valuable for working with legacy code and understanding the fundamentals of computational graphs.

Additional Resources

  1. TensorFlow 1.x Documentation on Sessions
  2. Migrating from TensorFlow 1.x to 2.x
  3. TensorFlow: Saving and Restoring Models

Exercises

  1. Create a simple neural network with one hidden layer using TensorFlow sessions and train it on the MNIST dataset.
  2. Implement a session-based version of a logistic regression model for binary classification.
  3. Build a computational graph that calculates the mean, standard deviation, min, and max of a set of numbers, and execute it with a session.
  4. Create a model, save it using a session, then load it in a new session and make predictions.
  5. Convert a TensorFlow 1.x session-based model to TensorFlow 2.x eager execution style.

