
Git Performance Tips

Introduction

When working with Git, especially on large projects or with distributed teams, you might encounter performance issues that slow down your workflow. Git is generally fast, but certain operations can become time-consuming as your repository grows. This guide will walk you through practical tips and techniques to optimize Git's performance, helping you maintain an efficient development process.

Why Git Performance Matters

As your repository grows in size and complexity, common Git operations like cloning, pulling, pushing, and checking status can become noticeably slower. Poor Git performance can:

  • Interrupt your coding flow
  • Increase waiting time during builds and deployments
  • Reduce team productivity
  • Create frustration during collaboration

Fortunately, there are many ways to optimize Git's performance without sacrificing functionality.

Core Performance Tips

1. Use Shallow Clones for Large Repositories

When you don't need the entire commit history, you can create a shallow clone to download only the most recent commits.

bash
# Clone with only the latest commit
git clone --depth=1 https://github.com/username/repository.git

# Clone with the latest 10 commits
git clone --depth=10 https://github.com/username/repository.git

When to use: for very large repositories where you don't need the full history, or for CI/CD systems that just need the latest code.
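
If you later need more history in a shallow clone, you can deepen it in place instead of re-cloning. The depth value below is just an example:

bash
# Fetch 50 additional commits of history
git fetch --deepen=50

# Or convert the shallow clone into a full clone
git fetch --unshallow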

2. Enable Git Compression

Git compresses the objects it stores and transfers, and you can tune how aggressively it does so:

bash
# Set compression level (0-9, where 0 is no compression and 9 is maximum)
git config --global core.compression 9

Effect: Higher levels use slightly more CPU when Git writes objects, but produce smaller packs on disk and less data to transfer during network operations.

3. Use Partial Clones for Monorepos

For large monorepos, you can use Git's partial clone feature to avoid downloading all blob objects:

bash
# Clone without blob objects (file contents are fetched on demand)
git clone --filter=blob:none https://github.com/username/repository.git

# Skip blobs larger than 10 MB and start with a sparse checkout
git clone --filter=blob:limit=10m --sparse https://github.com/username/repository.git
cd repository
git sparse-checkout set path/to/directory

4. Optimize Git Status Performance

The git status command can be slow in large repositories. Speed it up with:

bash
# Cache untracked-file information between runs
git config --global core.untrackedCache true

# Enable a bundle of optimizations for repositories with many files
git config --global feature.manyFiles true
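
Before relying on the untracked cache, you can confirm that your filesystem supports it with Git's built-in self-test (it may take several seconds to run):

bash
# Check whether the untracked cache works on this filesystem
git update-index --test-untracked-cache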

Before Optimization:

$ time git status
# ... output ...
real 0m3.456s

After Optimization:

$ time git status
# ... output ...
real 0m0.789s

5. Prune Unnecessary Remote-Tracking Branches

Over time, your repository accumulates references to deleted remote branches:

bash
# Remove references to remote branches that no longer exist
git fetch --prune

# Configure Git to automatically prune during fetch
git config --global fetch.prune true

6. Use Git LFS for Large Files

Large binary files can bloat your repository and slow down operations. Git Large File Storage (LFS) replaces these files with text pointers:

bash
# Install Git LFS
# (Instructions vary by OS; this works on Debian/Ubuntu systems)
sudo apt-get install git-lfs

# Initialize LFS in your repository
git lfs install
git lfs track "*.psd" "*.zip" "*.pdf"
git add .gitattributes
git commit -m "Configure Git LFS tracking"

Performance Improvement: Once large binaries are stored in LFS, a repository might shrink from several GB to just a few MB of Git data, significantly improving clone and fetch times.
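
Note that git lfs track only applies to new commits; files already in your history stay where they are. To move those into LFS as well, git lfs migrate can rewrite past commits. A sketch, assuming you have coordinated with your team and taken a backup first, since this rewrites history:

bash
# Rewrite existing history so matching files are stored in LFS
git lfs migrate import --include="*.psd,*.zip,*.pdf"

# Rewritten branches must be force-pushed, and collaborators should re-clone
git push --force-with-lease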

7. Use Sparse Checkouts for Monorepos

When working with monorepos, you may only need specific directories:

bash
# Initialize sparse checkout
git clone --no-checkout https://github.com/username/repository.git
cd repository
git sparse-checkout init
git sparse-checkout set path/to/directory1 path/to/directory2
git checkout main

Advanced Performance Techniques

1. Garbage Collection and Repository Maintenance

Regular maintenance helps keep your repository lean:

bash
# Basic garbage collection
git gc

# Aggressive garbage collection (more thorough but slower)
git gc --aggressive

# Prune unreachable objects (git gc normally does this for you)
git prune

You can measure the space savings by comparing repository statistics before and after maintenance.
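
For example, a quick before-and-after check might look like this (the exact output fields vary by Git version):

bash
# Record object counts and pack sizes before maintenance
git count-objects -v -H

# Run maintenance
git gc

# Compare the numbers afterwards
git count-objects -v -H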

2. Optimize Git Hooks

Git hooks can slow down operations if they contain inefficient code. Review your hooks:

bash
# Location of hooks
ls -la .git/hooks/

Tips for optimizing hooks:

  • Keep pre-commit hooks lightweight (see the sketch after this list)
  • Use asynchronous processing for non-critical checks
  • Consider moving intensive operations to pre-push hooks instead
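
As an example of a lightweight hook, the pre-commit sketch below only inspects the files staged for the current commit; the whitespace check is just a placeholder for whatever fast check your project needs:

bash
#!/bin/sh
# .git/hooks/pre-commit
# Look only at files staged for this commit instead of scanning the whole tree
staged=$(git diff --cached --name-only --diff-filter=ACM)

# Nothing staged? Nothing to check.
[ -z "$staged" ] && exit 0

# Fast check: reject staged changes that introduce whitespace errors
if ! git diff --cached --check; then
    echo "Fix whitespace errors before committing." >&2
    exit 1
fi

exit 0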

3. Use git worktree for Multiple Working Directories

Instead of maintaining multiple clones of the same repository:

bash
# Add a new working directory without copying the entire repository
git worktree add ../path/to/new/directory feature-branch

This shares the Git objects and references across multiple working directories, saving disk space and improving performance.
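
A few companion commands handle day-to-day management; the path below is just an example:

bash
# List all working directories attached to this repository
git worktree list

# Remove a working directory when you are done with it
git worktree remove ../path/to/new/directory

# Clean up metadata for working directories that were deleted manually
git worktree prune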

4. Optimize Your .gitignore File

A well-configured .gitignore file prevents Git from tracking unnecessary files:

# Example .gitignore optimizations
# Ignore node modules
node_modules/

# Ignore build directories
/build/
/dist/

# Ignore logs
*.log

# Ignore OS and editor files
.DS_Store
.vscode/
*.swp

Performance Impact: On a project with many generated files, this can reduce git status time from several seconds to nearly instant.
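
To confirm that a given path really is ignored, and which rule matched it, you can ask Git directly; the path below is just an example:

bash
# Print the .gitignore source, line number, and pattern that matched
git check-ignore -v node_modules/react/index.js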

5. Use --no-optional-locks for Read-Only Operations

When you're just reading data and don't need to write:

bash
git --no-optional-locks status
git --no-optional-locks log

This is especially helpful on network file systems or when multiple processes need to access the repository simultaneously.
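
If you use these read-only variants often, a shell-style alias keeps them short; the alias name rstatus is just an example:

bash
# Define an alias that always skips optional locks
git config --global alias.rstatus '!git --no-optional-locks status'

# Use it like any other Git command
git rstatus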

Real-World Optimizations: Case Study

Let's look at a real-world example of optimizing a large repository:

Initial State:

  • Repository size: 2.3GB
  • Clone time: 4 minutes 30 seconds
  • git status time: 3.5 seconds

Applied Optimizations:

  1. Implemented Git LFS for design assets (.psd, .ai files)
  2. Configured .gitignore to exclude build artifacts
  3. Enabled compression and untracked cache
  4. Set up regular garbage collection in the CI pipeline (see the sketch after the results)

Results:

  • Repository size: 650MB (72% reduction)
  • Clone time: 45 seconds (83% faster)
  • git status time: 0.8 seconds (77% faster)
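
One way to implement step 4 is Git's built-in background maintenance on developer machines, plus an explicit maintenance step in a scheduled CI job. A sketch, with the repository path as a placeholder and assuming a reasonably recent Git version:

bash
# On a developer machine: register this repository for scheduled background maintenance
git maintenance start

# In a scheduled CI job with a persistent checkout
cd /ci/cache/repository
git fetch --prune
git gc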

Performance Benchmarking

You can measure Git performance using:

bash
# Time basic operations
time git status
time git pull
time git push

# Check repository size and structure
git count-objects -v -H

Key metrics to monitor:

  • Size of .git directory
  • Time for common operations
  • Network transfer sizes during fetch/push
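
A small script can capture these metrics over time; the file name and chosen operations are only a sketch, so adapt them to your workflow:

bash
#!/bin/bash
# benchmark-git.sh - rough snapshot of repository size and operation timings

echo "== .git directory size =="
du -sh .git

echo "== Object counts and pack sizes =="
git count-objects -v -H

echo "== Timing common operations =="
time git status
time git fetch --dry-run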

Summary

Optimizing Git performance is essential for maintaining an efficient development workflow, especially as projects grow. The techniques covered in this guide—shallow clones, compression, Git LFS, sparse checkouts, and regular maintenance—can dramatically improve Git's speed and reduce resource usage.

By implementing these performance tips, you'll spend less time waiting for Git operations and more time coding.

Exercises

  1. Measure the current size of your Git repository using git count-objects -v -H. Apply garbage collection with git gc --aggressive and measure again. What was the difference?

  2. Set up Git LFS for a project with large binary files. Compare clone times before and after.

  3. Configure and test a sparse checkout for a large repository where you only need to work on a specific subdirectory.

  4. Create a script that implements the performance tips from this guide and run it on a repository you work with regularly.


