Introduction #
If you are stepping into a Go interview in 2025, you can bet your bottom dollar on one thing: You will be grilled on concurrency.
For mid-to-senior level roles, interviewers have moved past the basic definition of a Goroutine. They aren’t just asking “what is a channel?”; they want to know if you can design non-blocking systems, prevent memory leaks, and choose the right synchronization primitives for the job. They are looking for production-ready mindsets.
In this article, we will bypass the textbook definitions and dive straight into the code patterns and “gotchas” that separate the juniors from the pros. We will cover channel mechanics, the worker pool pattern, and how to effectively hunt down race conditions.
Prerequisites and Environment #
To get the most out of this guide, you should have a basic understanding of Go syntax. We will be writing executable code, so ensure you have the following setup:
- Go Version: Go 1.22 or higher (we want access to the latest runtime optimizations).
- IDE: VS Code (with the Go extension) or GoLand.
- Terminal: Any standard shell to run `go run` and `go test`.
No external dependencies are required for these examples; we are sticking to the powerful standard library.
1. The Channel Handshake: Unbuffered vs. Buffered #
A classic interview question is: “What happens if you write to an unbuffered channel when no one is listening?”
The answer, as you know, is that it blocks. But explaining why and visualizing the flow is what impresses interviewers. Go’s unbuffered channels perform a synchronous handoff. The sender cannot complete the operation until the receiver has accepted the data.
Visualizing the Block #
The Deadlock Trap #
If you don’t handle this synchronization correctly, you get the dreaded fatal error: all goroutines are asleep - deadlock!.
Here is a simple example of how not to do it, followed by the fix:
```go
package main

import "fmt"

func main() {
	ch := make(chan int)

	// BAD: This would deadlock immediately because main
	// blocks on the send, but no other goroutine is reading.
	// ch <- 42

	// GOOD: Launch a goroutine to perform the send
	go func() {
		fmt.Println("Sender: Sending value...")
		ch <- 42
		fmt.Println("Sender: Value sent!")
	}()

	fmt.Println("Main: Waiting for value...")
	val := <-ch
	fmt.Println("Main: Received", val)
}
```

Interview Tip: When asked about buffered channels (make(chan int, 5)), mention that they decouple the sender and receiver temporarily. They are great for handling bursty traffic, but they don't prevent blocking forever: once the buffer is full, the sender blocks again.
2. Implementing the Worker Pool Pattern #
Senior interviews often require you to implement a concurrency pattern live. The Worker Pool (Fan-Out/Fan-In) is the gold standard. It demonstrates your ability to manage resources and coordinate multiple goroutines.
Here is a robust, production-style implementation. We use sync.WaitGroup to ensure all workers finish before the program exits—a crucial detail often missed in quick coding tests.
```go
package main

import (
	"fmt"
	"sync"
	"time"
)

// Job represents the work to be done
type Job struct {
	ID  int
	Val int
}

// Result represents the outcome
type Result struct {
	JobID   int
	Squared int
}

func worker(id int, jobs <-chan Job, results chan<- Result, wg *sync.WaitGroup) {
	defer wg.Done() // Signal completion when this worker returns
	for job := range jobs {
		fmt.Printf("Worker %d processing job %d\n", id, job.ID)
		// Simulate expensive work
		time.Sleep(100 * time.Millisecond)
		results <- Result{JobID: job.ID, Squared: job.Val * job.Val}
	}
}

func main() {
	const numJobs = 5
	const numWorkers = 3

	jobs := make(chan Job, numJobs)
	results := make(chan Result, numJobs)
	var wg sync.WaitGroup

	// 1. Start Workers (Fan-Out)
	for w := 1; w <= numWorkers; w++ {
		wg.Add(1)
		go worker(w, jobs, results, &wg)
	}

	// 2. Send Jobs
	for j := 1; j <= numJobs; j++ {
		jobs <- Job{ID: j, Val: j}
	}
	close(jobs) // Crucial: tells workers no more jobs are coming

	// 3. Wait for workers in a separate goroutine to close results
	go func() {
		wg.Wait()
		close(results)
	}()

	// 4. Collect Results (Fan-In)
	for res := range results {
		fmt.Printf("Result: Job %d squared is %d\n", res.JobID, res.Squared)
	}
}
```

Why this code works for interviews: #
- Buffered Channels: The jobs buffer lets main queue every job without blocking before any worker has started.
- sync.WaitGroup: Coordinates a graceful shutdown so results is closed only after every worker finishes.
- Separation of Concerns: Workers process; main distributes and collects.
- Closing Channels: We explicitly close channels to prevent leaks and deadlocks.
3. Channels vs. Mutexes: When to Use Which? #
A subjective but critical question: “Why didn’t you use a Mutex for that?” or “When should you use Channels over Mutexes?”
Go's proverb is "Don't communicate by sharing memory; share memory by communicating." However, pragmatism beats dogma in production.
Here is a comparison table to help you answer this decisively:
| Feature | Channels | sync.Mutex |
|---|---|---|
| Primary Philosophy | Communicate data flow / Orchestration | Protect internal state / Caching |
| Complexity | Higher (potential for deadlocks if complex) | Lower (simple locking) |
| Performance | Slower (allocation + locking overhead) | Extremely fast (low-level atomic ops) |
| Best Use Case | Passing ownership of data, worker pools | Caches, counters, struct field access |
The Race Condition Trap #
The most common concurrency bug is the Race Condition. This happens when two goroutines access the same variable concurrently, and at least one access is a write.
The Buggy Code (Do not use in prod):
```go
package main

import (
	"fmt"
	"sync"
)

func main() {
	counter := 0
	var wg sync.WaitGroup

	for i := 0; i < 1000; i++ {
		wg.Add(1)
		go func() {
			counter++ // RACE CONDITION HERE!
			wg.Done()
		}()
	}

	wg.Wait()
	// Value will likely be less than 1000
	fmt.Println("Final Counter:", counter)
}
```

The Fix (Using Mutex):
```go
package main

import (
	"fmt"
	"sync"
)

func main() {
	counter := 0
	var mu sync.Mutex // The Guard
	var wg sync.WaitGroup

	for i := 0; i < 1000; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			mu.Lock() // Lock before access
			counter++
			mu.Unlock() // Unlock immediately after
		}()
	}

	wg.Wait()
	fmt.Println("Safe Counter:", counter)
}
```

Pro Tip: The Race Detector #
Never finish an interview coding session without mentioning the Race Detector. It is your safety net.
Run your tests or code with:
```
go run -race main.go
```

If you mention this tool during the interview, it shows you care about code correctness and tooling.
4. Preventing Goroutine Leaks #
In languages with Garbage Collection, we often forget about memory. But in Go, a Goroutine is not garbage collected until it exits. If you start a goroutine that blocks on a channel that is never written to, that goroutine lives forever (until the program dies). This is a leak.
Solution: The context Package #
Always allow a way to stop your goroutines. The context package is the standard way to propagate cancellation signals.
```go
func worker(ctx context.Context, jobs <-chan int) {
	for {
		select {
		case job := <-jobs:
			fmt.Println("Processing", job)
		case <-ctx.Done(): // Listen for cancellation
			fmt.Println("Worker stopping...")
			return
		}
	}
}
```

Conclusion #
Concurrency in Go is powerful, but it requires discipline. To ace your interview:
- Understand blocking mechanics of channels.
- Be able to write a Worker Pool from scratch.
- Know when to choose a Mutex (state) over a Channel (flow).
- Always mention the Race Detector (-race).
- Watch out for Goroutine leaks by using context.
Practice the code snippets above until you can type them out without looking. The syntax is easy; the logic is where the challenge lies.
Good luck with your interview!
Found this guide helpful? Check out our “Go Data Structures” series next to round out your interview preparation.