Go Goroutines
Introduction
Concurrency is one of Go's most powerful features, and goroutines are at the heart of Go's concurrency model. A goroutine is a lightweight thread managed by the Go runtime, allowing you to run functions concurrently with minimal overhead. Unlike traditional threads in other programming languages, goroutines are incredibly lightweight and efficient.
In this tutorial, we'll explore how goroutines work, how to use them effectively, and how they contribute to building high-performance web applications with Gin.
What are Goroutines?
Goroutines are functions that can run concurrently with other functions. They're called "lightweight threads" because they consume far less memory than operating system threads (starting at around 2KB of stack memory versus megabytes for OS threads) and are managed by the Go runtime rather than the operating system.
The Go runtime scheduler multiplexes goroutines onto OS threads, allowing hundreds of thousands or even millions of goroutines to run on just a handful of threads.
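To make the "lightweight" claim concrete, here is a minimal sketch (not from the original text) that starts 100,000 goroutines and then asks the runtime how many goroutines exist and how many OS threads the scheduler may use:
package main
import (
    "fmt"
    "runtime"
    "time"
)
func main() {
    // Start 100,000 goroutines; each simply sleeps so it stays alive for a moment.
    for i := 0; i < 100000; i++ {
        go func() {
            time.Sleep(time.Second)
        }()
    }
    fmt.Println("Goroutines running:", runtime.NumGoroutine())
    fmt.Println("OS threads the scheduler may use:", runtime.GOMAXPROCS(0))
}
The program exits as soon as main returns and the sleeping goroutines are simply discarded; the point is that starting them costs only a few kilobytes each, whereas creating 100,000 OS threads would exhaust system resources.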
Creating and Running Goroutines
Starting a goroutine is as simple as adding the go keyword before a function call:
package main
import (
"fmt"
"time"
)
func sayHello() {
fmt.Println("Hello from goroutine!")
}
func main() {
go sayHello() // Start a new goroutine
// Without this, main might exit before the goroutine has a chance to run
time.Sleep(100 * time.Millisecond)
fmt.Println("Hello from main!")
}
Output:
Hello from goroutine!
Hello from main!
In this example, sayHello() runs concurrently with the main() function. We've added a small delay to give the goroutine time to execute before the program exits.
Anonymous Goroutines
You can also start goroutines with anonymous functions:
package main
import (
"fmt"
"time"
)
func main() {
go func() {
fmt.Println("Hello from anonymous goroutine!")
}()
time.Sleep(100 * time.Millisecond)
fmt.Println("Hello from main!")
}
Output:
Hello from anonymous goroutine!
Hello from main!
Passing Data to Goroutines
When passing data to a goroutine, be careful about variable scoping:
package main
import (
"fmt"
"time"
)
func main() {
// Bad practice: shared variable
for i := 0; i < 5; i++ {
go func() {
fmt.Println(i) // May not print what you expect!
}()
}
// Good practice: pass as parameter
for i := 0; i < 5; i++ {
go func(n int) {
fmt.Println(n) // Will correctly print the value of i at the time the goroutine was created
}(i)
}
time.Sleep(time.Second)
}
The output is not deterministic but might look like:
5
5
5
5
5
0
1
2
3
4
In the first loop, the anonymous function captures the loop variable i. In Go versions before 1.22, all iterations share a single variable, so by the time the goroutines run the loop may already have finished and every goroutine prints 5. (Since Go 1.22, each iteration gets its own copy of i, so both loops print 0 through 4 in some order.) Passing the value as a parameter, as in the second loop, works in every Go version and makes the data flow explicit: each goroutine gets its own copy.
Synchronization with WaitGroups
Using time.Sleep() for synchronization is unreliable. A better solution is to use sync.WaitGroup:
package main
import (
"fmt"
"sync"
)
func main() {
var wg sync.WaitGroup
for i := 0; i < 5; i++ {
wg.Add(1) // Increment the counter
go func(n int) {
defer wg.Done() // Decrement the counter when the goroutine completes
fmt.Println("Processing item", n)
}(i)
}
wg.Wait() // Wait for all goroutines to finish
fmt.Println("All goroutines complete")
}
Output (order may vary):
Processing item 0
Processing item 4
Processing item 3
Processing item 2
Processing item 1
All goroutines complete
WaitGroup is a counter that tracks how many goroutines are still running. Add(n) increments the counter by n, Done() decrements it by one, and Wait() blocks until the counter reaches zero.
Communication Between Goroutines: Channels
Goroutines often need to communicate with each other. Go provides channels for safe communication and synchronization between goroutines:
package main
import (
"fmt"
)
func main() {
messages := make(chan string)
// Send a message
go func() {
messages <- "Hello from goroutine!"
}()
// Receive the message
msg := <-messages
fmt.Println(msg)
}
Output:
Hello from goroutine!
Channels are typed conduits through which you can send and receive values. The <- operator is used both for sending and receiving:
- ch <- v sends the value v to channel ch
- v := <-ch receives a value from channel ch and assigns it to v
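As a supplementary sketch (not in the original tutorial), a sender can also close a channel to signal that no more values will arrive, and the receiver can range over the channel until it is closed:
package main
import "fmt"
func main() {
    numbers := make(chan int)
    go func() {
        for i := 1; i <= 3; i++ {
            numbers <- i
        }
        close(numbers) // Tell the receiver that no more values will be sent
    }()
    // range keeps receiving until the channel is closed
    for n := range numbers {
        fmt.Println("Received", n)
    }
}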
Buffered Channels
By default, channels are unbuffered, meaning they'll block until both sender and receiver are ready. Buffered channels can hold a limited number of values without a receiver being ready:
package main
import "fmt"
func main() {
// Create a buffered channel with capacity for 2 messages
messages := make(chan string, 2)
messages <- "Hello"
messages <- "World"
fmt.Println(<-messages) // Receive "Hello"
fmt.Println(<-messages) // Receive "World"
}
Output:
Hello
World
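As a supplementary illustration of the capacity limit, the built-in len and cap functions report how many values are buffered and how many fit; a third unreceived send in the example above would block:
package main
import "fmt"
func main() {
    messages := make(chan string, 2)
    messages <- "Hello"
    messages <- "World"
    // The buffer is now full: a third send would block until a receive frees a slot.
    fmt.Println("buffered:", len(messages), "of", cap(messages))
    fmt.Println(<-messages)
    fmt.Println(<-messages)
}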
Select Statement for Multiple Channels
The select statement lets you wait on multiple channel operations:
package main
import (
"fmt"
"time"
)
func main() {
c1 := make(chan string)
c2 := make(chan string)
go func() {
time.Sleep(1 * time.Second)
c1 <- "one"
}()
go func() {
time.Sleep(2 * time.Second)
c2 <- "two"
}()
for i := 0; i < 2; i++ {
select {
case msg1 := <-c1:
fmt.Println("Received", msg1)
case msg2 := <-c2:
fmt.Println("Received", msg2)
}
}
}
Output:
Received one
Received two
The select statement blocks until one of its cases can run, then executes that case. If multiple cases are ready, it picks one at random.
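A default case makes select non-blocking: if no channel operation is ready, the default branch runs immediately. Here is a minimal supplementary sketch of a non-blocking receive:
package main
import "fmt"
func main() {
    ch := make(chan string)
    select {
    case msg := <-ch:
        fmt.Println("Received", msg)
    default:
        // No value is ready on ch, so this case runs immediately
        fmt.Println("No message available")
    }
}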
Practical Example: Concurrent Web Scraper
Let's build a simple concurrent web scraper that fetches multiple URLs simultaneously:
package main
import (
"fmt"
"io"
"net/http"
"sync"
"time"
)
func fetchURL(url string, wg *sync.WaitGroup) {
defer wg.Done()
start := time.Now()
resp, err := http.Get(url)
if err != nil {
fmt.Printf("Error fetching %s: %v\n", url, err)
return
}
defer resp.Body.Close()
body, _ := io.ReadAll(resp.Body) // Read errors are ignored for brevity
fmt.Printf("Fetched %s: %d bytes in %v\n", url, len(body), time.Since(start))
}
func main() {
urls := []string{
"https://golang.org",
"https://github.com",
"https://stackoverflow.com",
}
var wg sync.WaitGroup
start := time.Now()
for _, url := range urls {
wg.Add(1)
go fetchURL(url, &wg)
}
wg.Wait()
fmt.Printf("Total time: %v\n", time.Since(start))
}
Sample output (times will vary):
Fetched https://golang.org: 12345 bytes in 235.682ms
Fetched https://github.com: 23456 bytes in 352.409ms
Fetched https://stackoverflow.com: 34567 bytes in 454.021ms
Total time: 454.023ms
This example demonstrates how goroutines make it easy to do multiple operations concurrently. If we had fetched these URLs sequentially, the total time would have been the sum of all individual request times.
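For comparison, a sequential version might look like the following sketch (fetchSequentially is a hypothetical helper that assumes the same imports as the example above); each request must finish before the next begins, so the total time is roughly the sum of the individual request times:
// Sequential version for comparison: requests run one after another.
func fetchSequentially(urls []string) {
    start := time.Now()
    for _, url := range urls {
        resp, err := http.Get(url)
        if err != nil {
            fmt.Printf("Error fetching %s: %v\n", url, err)
            continue
        }
        body, _ := io.ReadAll(resp.Body)
        resp.Body.Close()
        fmt.Printf("Fetched %s: %d bytes\n", url, len(body))
    }
    fmt.Printf("Total time: %v\n", time.Since(start))
}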
Goroutines in Gin Applications
In Gin applications, goroutines can be especially useful for:
- Handling expensive operations without blocking responses:
router.GET("/process", func(c *gin.Context) {
// Respond immediately
c.JSON(200, gin.H{"status": "processing"})
// Perform expensive operation in background
go func() {
// Process data, send emails, etc.
processLargeDataset()
}()
})
- Concurrent database operations:
func GetDashboardData(c *gin.Context) {
userID := c.GetInt64("userID") // Assumes an earlier middleware stored the user ID in the context
var wg sync.WaitGroup
var userProfile UserProfile
var userPosts []Post
var userStats UserStats
wg.Add(3)
go func() {
defer wg.Done()
userProfile = fetchUserProfile(userID)
}()
go func() {
defer wg.Done()
userPosts = fetchUserPosts(userID)
}()
go func() {
defer wg.Done()
userStats = calculateUserStats(userID)
}()
wg.Wait()
c.JSON(200, gin.H{
"profile": userProfile,
"posts": userPosts,
"stats": userStats,
})
}
Important Considerations and Best Practices
- Never access Gin's context from a goroutine after the request handler returns:
// DON'T DO THIS - UNSAFE!
router.GET("/", func(c *gin.Context) {
go func() {
time.Sleep(5 * time.Second)
c.JSON(200, gin.H{"message": "Hello"}) // BAD! c might not be valid anymore
}()
})
// Instead, extract what you need before starting the goroutine (or pass a copy made with c.Copy())
router.GET("/", func(c *gin.Context) {
userID := c.GetInt64("userID")
go func(id int64) {
// Process something with the ID
processUserData(id)
}(userID)
c.JSON(200, gin.H{"message": "Processing started"})
})
- Limit the number of goroutines you create for resource-intensive tasks (an alternative worker-pool sketch follows this list):
// Bounded concurrency: a buffered channel acts as a semaphore
func processItems(items []string, concurrency int) {
semaphore := make(chan struct{}, concurrency)
var wg sync.WaitGroup
for _, item := range items {
wg.Add(1)
semaphore <- struct{}{} // Acquire token
go func(item string) {
defer wg.Done()
defer func() { <-semaphore }() // Release token
// Process the item
processItem(item)
}(item)
}
wg.Wait()
}
- Handle panics in goroutines:
go func() {
defer func() {
if r := recover(); r != nil {
fmt.Println("Recovered from panic:", r)
}
}()
// Code that might panic
}()
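The semaphore snippet above limits how many goroutines run at once while still starting one goroutine per item. An alternative is a classic worker pool: start a fixed number of worker goroutines that pull jobs from a shared channel. The sketch below is a hypothetical variant (it reuses the processItem function from the example above):
func processWithWorkers(items []string, workers int) {
    jobs := make(chan string)
    var wg sync.WaitGroup
    // Start a fixed number of worker goroutines
    for w := 0; w < workers; w++ {
        wg.Add(1)
        go func() {
            defer wg.Done()
            for item := range jobs {
                processItem(item) // same processing function as in the snippet above
            }
        }()
    }
    // Send all jobs, then close the channel so the workers can exit their range loops
    for _, item := range items {
        jobs <- item
    }
    close(jobs)
    wg.Wait()
}
Both approaches bound resource usage; the worker pool additionally avoids creating one goroutine per item.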
Summary
Goroutines are one of Go's most distinctive and powerful features:
- They allow for concurrent execution with minimal overhead
- Starting a goroutine is as simple as adding the go keyword before a function call
- Channels provide a safe way for goroutines to communicate
- sync.WaitGroup helps coordinate the completion of multiple goroutines
- Goroutines enable high concurrency in Gin applications, allowing for responsive APIs even when performing complex operations
By mastering goroutines, you'll be able to build highly concurrent and efficient web applications with Gin that can handle many operations simultaneously without blocking, leading to better performance and user experience.
Exercises
- Create a Gin endpoint that fetches data from three different external APIs concurrently and combines the results.
- Implement a worker pool that processes background jobs with a limited number of goroutines.
- Build a real-time notification system using goroutines and channels to handle incoming events.
- Create a file processing endpoint that handles large file uploads in the background while immediately responding to the user.