Rust Test Configuration

Introduction

When writing tests in Rust, the default configuration works well for simple cases, but as your projects grow in complexity, you'll need more control over how tests are organized, selected, and executed. This guide will walk you through various ways to configure the Rust testing framework to suit your specific needs.

Testing is a critical part of software development, and Rust's built-in testing framework offers numerous configuration options that help you create more effective and efficient test suites. Whether you need to run specific test subsets, change test behavior based on conditions, or optimize test performance, understanding these configuration options is essential.

Basic Test Configuration

Test Directory Structure

In Rust, tests are typically organized in one of two ways:

  1. Unit tests: Located in the same file as the code they test, within a tests module marked with the #[cfg(test)] attribute.
  2. Integration tests: Located in the tests/ directory at the root of your project.

Here's a common project structure:

project/
├── Cargo.toml
├── src/
│   ├── main.rs
│   └── lib.rs
└── tests/
    ├── integration_test1.rs
    └── integration_test2.rs

Cargo Test Command Options

The cargo test command offers several configuration options that you can use right away:

bash
# Run all tests
cargo test

# Run tests with names containing "file"
cargo test file

# Run a specific test by name (filters are substring matches;
# pass -- --exact to require an exact match)
cargo test test_read_file

# Run tests without capturing output
cargo test -- --nocapture

# Tests run in parallel by default; limit to 4 threads
cargo test -- --test-threads=4
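
A few more selection flags are worth knowing (all standard cargo/libtest options):

bash
# Run only the library's unit tests
cargo test --lib

# Run a single integration test binary from tests/
cargo test --test integration_test1

# Run only the tests marked #[ignore]
cargo test -- --ignored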

Test Attributes

Rust provides several attributes to configure test behavior:

#[test]

The basic attribute that marks a function as a test:

rust
#[test]
fn test_addition() {
    assert_eq!(2 + 2, 4);
}

#[ignore]

Mark tests that should be skipped by default:

rust
#[test]
#[ignore]
fn expensive_test() {
    // This test is slow or requires special setup
    // Run only with: cargo test -- --ignored
}

Let's see a complete example of using the #[ignore] attribute:

rust
// lib.rs
pub fn complex_calculation(input: u64) -> u64 {
    // Simulate an expensive calculation
    std::thread::sleep(std::time::Duration::from_secs(2));
    input * input + input / 2
}

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn test_simple() {
        assert_eq!(5 + 5, 10);
    }

    #[test]
    #[ignore]
    fn test_expensive_calculation() {
        assert_eq!(complex_calculation(10), 105);
    }
}

Output when running cargo test:

running 2 tests
test tests::test_simple ... ok
test tests::test_expensive_calculation ... ignored

test result: ok. 1 passed; 0 failed; 1 ignored; 0 measured; 0 filtered out

To run ignored tests specifically:

cargo test -- --ignored
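
To run ignored and regular tests together in one pass:

cargo test -- --include-ignored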

#[should_panic]

Test that a function panics as expected:

rust
#[test]
#[should_panic(expected = "attempt to divide by zero")]
fn test_division_by_zero() {
    // A literal `1 / 0` is rejected at compile time, so obtain the
    // divisor in a way the compiler won't constant-fold
    let divisor: i32 = "0".parse().unwrap();
    let _result = 1 / divisor;
}

Let's look at a practical example:

rust
pub fn divide(a: i32, b: i32) -> i32 {
    if b == 0 {
        panic!("Cannot divide by zero");
    }
    a / b
}

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn test_normal_division() {
        assert_eq!(divide(10, 2), 5);
    }

    #[test]
    #[should_panic(expected = "Cannot divide by zero")]
    fn test_division_by_zero() {
        divide(10, 0);
    }
}

Conditional Testing with #[cfg]

You can conditionally compile tests based on various factors:

rust
// Only run on Linux
#[cfg(target_os = "linux")]
#[test]
fn test_linux_specific() {
    // Linux-specific test code
}

// Only in debug builds
#[cfg(debug_assertions)]
#[test]
fn test_debug_only() {
    // Test that only makes sense in debug mode
}
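
Conditions can also be combined with all, any, and not. A small sketch (the test body is only a placeholder):

rust
// Compile this module only for test builds on 64-bit Windows
#[cfg(all(test, target_os = "windows", target_pointer_width = "64"))]
mod windows_tests {
    #[test]
    fn test_win64_specific_behavior() {
        // Windows-specific assertions would go here
    }
}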

Custom Test Configurations

Test-Only Dependencies

In your Cargo.toml, you can specify dependencies that are only used for testing:

toml
[dev-dependencies]
tempfile = "3.8.0" # Create temporary files/directories for tests
mockall = "0.11.4" # Mocking library for tests
criterion = "0.5.1" # Benchmarking library

Example of using a test-only dependency:

rust
#[cfg(test)]
mod tests {
    use super::*;
    use tempfile::tempdir;

    #[test]
    fn test_file_operations() {
        let temp_dir = tempdir().unwrap();
        let file_path = temp_dir.path().join("test.txt");

        // Test file operations using the temporary directory
        // ...
    }
}
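
The mockall crate from the same [dev-dependencies] list can likewise stay out of release builds. Here's a minimal sketch of trait mocking; the UserStore trait and greeting function are invented for illustration:

rust
#[cfg(test)]
use mockall::automock;

// The attribute disappears outside test builds, so mockall can
// remain a dev-dependency
#[cfg_attr(test, automock)]
pub trait UserStore {
    fn lookup(&self, id: u32) -> Option<String>;
}

pub fn greeting(store: &dyn UserStore, id: u32) -> String {
    match store.lookup(id) {
        Some(name) => format!("Hello, {}!", name),
        None => "Hello, stranger!".to_string(),
    }
}

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn test_greeting_known_user() {
        // automock generates MockUserStore from the trait definition
        let mut store = MockUserStore::new();
        store.expect_lookup().returning(|_| Some("Alice".into()));
        assert_eq!(greeting(&store, 1), "Hello, Alice!");
    }
}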

Test Modules and Helpers

Create helper functions to reduce repetition in tests:

rust
#[cfg(test)]
mod tests {
    use super::*;

    // Helper function for test setup
    fn setup_test_data() -> Vec<String> {
        vec!["apple".to_string(), "banana".to_string(), "cherry".to_string()]
    }

    #[test]
    fn test_sorting() {
        let mut data = setup_test_data();
        data.sort();
        assert_eq!(data, vec!["apple", "banana", "cherry"]);
    }

    #[test]
    fn test_contains() {
        let data = setup_test_data();
        assert!(data.contains(&"banana".to_string()));
    }
}

Test Organization with Modules

For larger projects, organize tests into modules:

rust
#[cfg(test)]
mod tests {
    use super::*;

    mod file_tests {
        use super::*;

        #[test]
        fn test_read_file() {
            // Test file reading
        }

        #[test]
        fn test_write_file() {
            // Test file writing
        }
    }

    mod network_tests {
        use super::*;

        #[test]
        fn test_connection() {
            // Test network connection
        }
    }
}

You can run specific modules with:

bash
cargo test tests::file_tests

Integration Testing Configuration

Integration tests are placed in the tests/ directory and test your code as an external user would.

Here's a sample integration test file (tests/api_tests.rs):

rust
// tests/api_tests.rs
use my_crate::api::Client;

// Test setup function
fn setup_client() -> Client {
    Client::new("https://api.example.com")
}

#[test]
fn test_api_get_request() {
    let client = setup_client();
    let response = client.get("/users");
    assert!(response.is_ok());
}

#[test]
fn test_api_post_request() {
    let client = setup_client();
    let data = r#"{"name": "John Doe"}"#;
    let response = client.post("/users", data);
    assert!(response.is_ok());
}
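
Each file in tests/ compiles to its own test binary, so helpers shared across several integration tests conventionally live in a tests/common/mod.rs submodule; the mod.rs layout keeps Cargo from treating common.rs as a test binary of its own. A minimal sketch:

rust
// tests/common/mod.rs
pub fn setup() {
    // Shared setup for every integration test binary,
    // e.g. initializing logging or seeding fixture data
}

Each test file then declares the module and calls the helper:

rust
// tests/api_tests.rs
mod common;

#[test]
fn test_with_shared_setup() {
    common::setup();
    // ... exercise the API as in the examples above
}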

Advanced Configuration

Setting Environment Variables for Tests

You can configure tests using environment variables:

rust
#[test]
fn test_with_env() {
    // Note: set_var changes process-wide state and tests run in
    // parallel by default, so consider -- --test-threads=1 for
    // env-dependent tests
    std::env::set_var("TEST_MODE", "integration");

    // Your code might behave differently based on TEST_MODE
    let result = perform_operation();

    assert!(result.is_ok());
}

Or set environment variables when running tests:

bash
TEST_MODE=integration cargo test
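
For completeness, here's a rough sketch of what the perform_operation function called above might look like; its body is hypothetical:

rust
use std::env;

fn perform_operation() -> Result<(), String> {
    // Branch on the TEST_MODE variable set by the test (or the shell)
    match env::var("TEST_MODE").as_deref() {
        Ok("integration") => {
            // e.g. exercise a real test service
            Ok(())
        }
        _ => {
            // Default: fast, in-memory code path
            Ok(())
        }
    }
}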

Using Test Fixtures

Create reusable test fixtures for complex setup and teardown:

rust
// Assumes the rusqlite crate for the SQLite connection
use rusqlite::{params, Connection};

struct TestFixture {
    // Held so the directory outlives the test; TempDir removes it on drop
    temp_dir: tempfile::TempDir,
    db_connection: Connection,
}

impl TestFixture {
    fn new() -> Self {
        let temp_dir = tempfile::tempdir().unwrap();
        let db_path = temp_dir.path().join("test.db");
        let db_connection = Connection::open(db_path).unwrap();

        // Initialize database schema
        db_connection
            .execute("CREATE TABLE users (id INTEGER, name TEXT)", [])
            .unwrap();

        TestFixture {
            temp_dir,
            db_connection,
        }
    }

    fn add_test_user(&self, id: i32, name: &str) {
        // params! is required here because the parameters have mixed types
        self.db_connection
            .execute("INSERT INTO users VALUES (?1, ?2)", params![id, name])
            .unwrap();
    }
}

#[test]
fn test_user_query() {
    let fixture = TestFixture::new();
    fixture.add_test_user(1, "Alice");

    // Test querying users
    let users = query_users(&fixture.db_connection);
    assert_eq!(users.len(), 1);
    assert_eq!(users[0].name, "Alice");
}
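
Teardown is implicit here: when fixture goes out of scope at the end of the test, TempDir's Drop implementation deletes the temporary directory, and the database file inside it goes along.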

Running Tests in a Specific Order

Tests normally run in parallel and in a non-deterministic order. If you need tests to run sequentially:

rust
// Set to run just one test at a time
#[test]
fn test_sequence() {
    // This will run in isolation when you use:
    // cargo test -- --test-threads=1
}

For tests that need to run in a specific order, you might need a more complex approach:

rust
struct TestSequence;

impl TestSequence {
    fn step1() {
        // First step
        println!("Step 1 completed");
    }

    fn step2() {
        // Second step
        println!("Step 2 completed");
    }

    fn step3() {
        // Third step
        println!("Step 3 completed");
    }
}

#[test]
fn test_full_sequence() {
    // Run the steps in order
    TestSequence::step1();
    TestSequence::step2();
    TestSequence::step3();
}

Real-World Example: Testing a Configuration Parser

Let's create a comprehensive example of testing a configuration file parser with different test configurations:

rust
// src/lib.rs
use std::fs::File;
use std::io::{self, Read};
use std::path::Path;

#[derive(Debug, PartialEq)]
pub struct Config {
    pub database_url: String,
    pub port: u16,
    pub debug_mode: bool,
}

pub fn parse_config_file(path: &Path) -> io::Result<Config> {
    let mut file = File::open(path)?;
    let mut contents = String::new();
    file.read_to_string(&mut contents)?;

    parse_config_string(&contents)
}

pub fn parse_config_string(contents: &str) -> io::Result<Config> {
    let mut config = Config {
        database_url: String::new(),
        port: 8080, // Default port
        debug_mode: false,
    };

    for line in contents.lines() {
        let line = line.trim();

        // Skip comments and empty lines
        if line.is_empty() || line.starts_with('#') {
            continue;
        }

        if let Some((key, value)) = line.split_once('=') {
            let key = key.trim();
            let value = value.trim();

            match key {
                "database_url" => config.database_url = value.to_string(),
                "port" => {
                    config.port = value.parse().map_err(|_| {
                        io::Error::new(io::ErrorKind::InvalidData, "Invalid port number")
                    })?;
                }
                "debug_mode" => {
                    config.debug_mode = match value.to_lowercase().as_str() {
                        "true" | "yes" | "1" => true,
                        "false" | "no" | "0" => false,
                        _ => {
                            return Err(io::Error::new(
                                io::ErrorKind::InvalidData,
                                "Invalid debug_mode value",
                            ))
                        }
                    };
                }
                _ => {
                    // Ignore unknown keys
                }
            }
        }
    }

    // Validate required fields
    if config.database_url.is_empty() {
        return Err(io::Error::new(
            io::ErrorKind::InvalidData,
            "Missing required field: database_url",
        ));
    }

    Ok(config)
}

#[cfg(test)]
mod tests {
    use super::*;
    use std::io::Write;
    use tempfile::NamedTempFile;

    // Helper function to create a temporary config file
    fn create_temp_config(contents: &str) -> io::Result<NamedTempFile> {
        let mut file = NamedTempFile::new()?;
        file.write_all(contents.as_bytes())?;
        Ok(file)
    }

    #[test]
    fn test_parse_valid_config_string() {
        let config_str = r#"
            # Database configuration
            database_url = postgres://user:pass@localhost/dbname
            port = 9000
            debug_mode = true
        "#;

        let config = parse_config_string(config_str).unwrap();

        assert_eq!(config.database_url, "postgres://user:pass@localhost/dbname");
        assert_eq!(config.port, 9000);
        assert_eq!(config.debug_mode, true);
    }

    #[test]
    fn test_parse_config_with_defaults() {
        let config_str = "database_url = sqlite:///data.db";

        let config = parse_config_string(config_str).unwrap();

        assert_eq!(config.database_url, "sqlite:///data.db");
        assert_eq!(config.port, 8080); // Default value
        assert_eq!(config.debug_mode, false); // Default value
    }

    #[test]
    #[should_panic(expected = "Missing required field: database_url")]
    fn test_missing_required_field() {
        let config_str = "port = 8080";

        // This should panic due to missing database_url
        parse_config_string(config_str).unwrap();
    }

    #[test]
    fn test_invalid_port() {
        let config_str = r#"
            database_url = postgres://localhost/db
            port = invalid
        "#;

        let result = parse_config_string(config_str);
        assert!(result.is_err());

        let err = result.unwrap_err();
        assert_eq!(err.kind(), io::ErrorKind::InvalidData);
    }

    #[test]
    fn test_parse_config_file() {
        let config_str = r#"
            database_url = mysql://localhost/testdb
            port = 3306
            debug_mode = false
        "#;

        let temp_file = create_temp_config(config_str).unwrap();
        let config = parse_config_file(temp_file.path()).unwrap();

        assert_eq!(config.database_url, "mysql://localhost/testdb");
        assert_eq!(config.port, 3306);
        assert_eq!(config.debug_mode, false);
    }

    // Only run this test when the "integration" feature is enabled
    #[cfg(feature = "integration")]
    #[test]
    fn test_with_real_database() {
        // This would connect to an actual test database
        // Only runs when the feature is enabled
    }

    // Benchmark-style test that's ignored by default
    #[test]
    #[ignore]
    fn benchmark_config_parsing() {
        let config_str = r#"
            database_url = postgres://user:pass@localhost/dbname
            port = 9000
            debug_mode = true
        "#;

        // Measure parsing performance
        let start = std::time::Instant::now();
        for _ in 0..10000 {
            let _ = parse_config_string(config_str).unwrap();
        }
        let duration = start.elapsed();

        println!("Parsed 10000 configs in {:?}", duration);
        // On a modern system, this should be much less than a second
        assert!(duration.as_secs() < 1);
    }
}
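
Assuming an integration feature is declared under [features] in Cargo.toml, the feature-gated test above is compiled and run with:

bash
cargo test --features integration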

Customizing Test Output

Using the --format Flag

You can change the test output format using the --format flag:

bash
# Terse output: one character per test
cargo test -- --format=terse

# Pretty output (the default)
cargo test -- --format=pretty

# Output in JSON format (nightly-only)
cargo test -- --format=json -Zunstable-options

Creating Custom Test Reporters

For complex projects, you might want to create a custom test reporter that formats test results in a specific way. This usually involves:

  1. Capturing the test output using --format=json
  2. Processing the JSON with a separate tool (a rough sketch follows below)
  3. Generating reports or visualizations
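
As a rough sketch, a post-processor could read those JSON events line by line and tally the results. libtest's JSON schema is unstable, so the "type" and "event" fields below are assumptions about its current shape:

rust
use std::io::{self, BufRead};

fn main() {
    let (mut passed, mut failed) = (0u32, 0u32);
    for line in io::stdin().lock().lines() {
        let line = line.unwrap();
        // Crude string matching keeps the sketch dependency-free;
        // a real reporter would use a JSON parser
        if line.contains(r#""type": "test""#) {
            if line.contains(r#""event": "ok""#) {
                passed += 1;
            } else if line.contains(r#""event": "failed""#) {
                failed += 1;
            }
        }
    }
    println!("passed: {passed}, failed: {failed}");
}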

Best Practices for Test Configuration

  1. Keep Unit Tests Fast: Configure unit tests to be quick and focused.
  2. Isolate Integration Tests: Use configuration to ensure integration tests don't interfere with each other.
  3. Use Meaningful Test Names: Configure test organization with clear naming conventions.
  4. Leverage Conditional Testing: Use #[cfg] attributes to run different tests in different environments.
  5. Mock External Dependencies: Configure tests to use mocks for external services.
  6. Control Test Environment: Use environment variables to configure test behavior.
  7. Separate Slow Tests: Use #[ignore] for tests that are resource-intensive.
  8. Control Parallelism: Configure the number of test threads based on your system capabilities.

Summary

Rust's testing framework offers a robust set of configuration options that let you customize how tests are organized, selected, and executed. By leveraging these options, you can create test suites that catch bugs early, run efficiently, and adapt to different environments.

Key configuration options we covered include:

  • Basic test command options with cargo test
  • Test attributes like #[test], #[ignore], and #[should_panic]
  • Conditional compilation with #[cfg]
  • Test organization with modules
  • Integration test configuration
  • Environment variables and custom test fixtures
  • Test output formatting

By mastering these configuration options, you'll be able to create test suites that are well-organized, efficient, and effective at catching bugs in your Rust code.

Exercises

  1. Create a test suite for a simple function that validates user input, using different test attributes.
  2. Configure an integration test that mocks an external API and verifies your code's behavior.
  3. Write a test that uses environment variables to simulate different runtime environments.
  4. Create a test fixture for database testing with proper setup and teardown.
  5. Write a set of tests that use conditional compilation to test platform-specific code.

