Worker Pool Pattern
A worker pool limits concurrency by spawning a fixed number of goroutines that process jobs from a shared channel:
package main

import (
    "fmt"
    "time"
)

func worker(id int, jobs <-chan int, results chan<- int) {
    for job := range jobs {
        fmt.Printf("Worker %d processing job %d\n", id, job)
        time.Sleep(time.Second) // Simulate work
        results <- job * 2
    }
}

func main() {
    jobs := make(chan int, 100)
    results := make(chan int, 100)

    // Start 3 workers
    for w := 1; w <= 3; w++ {
        go worker(w, jobs, results)
    }

    // Send 9 jobs
    for j := 1; j <= 9; j++ {
        jobs <- j
    }
    close(jobs)

    // Collect exactly 9 results
    for r := 1; r <= 9; r++ {
        fmt.Println(<-results)
    }
}
Key Takeaway: Worker pools are among the most common Go concurrency patterns. Use one when you need to bound parallelism: database connection limits, API rate limits, or CPU-bound work.
Pipeline Pattern
A pipeline chains stages where each stage is a goroutine reading from an input channel and writing to an output channel:
package main

import "fmt"

func generate(nums ...int) <-chan int {
    out := make(chan int)
    go func() {
        for _, n := range nums {
            out <- n
        }
        close(out)
    }()
    return out
}

func square(in <-chan int) <-chan int {
    out := make(chan int)
    go func() {
        for n := range in {
            out <- n * n
        }
        close(out)
    }()
    return out
}

func main() {
    // Pipeline: generate -> square -> print
    for val := range square(generate(2, 3, 4)) {
        fmt.Println(val) // 4, 9, 16
    }
}
[Diagram: Worker Pool — a producer generates jobs into a buffered job channel; Workers 1, 2, and 3 each read from that channel and process jobs in parallel]
Fan-Out / Fan-In
Input
single source
→
Worker 1
parallel
→
Worker 2
parallel
→
Merger
combine
→
Output
single result
Practice Exercises
1. Production Scenario (hard): Design a solution that applies the patterns above to a real-world production system.
2. Performance Analysis (hard): Benchmark two different approaches and explain which is better and why.