Introduction #
As we settle into 2025, the debate over backend technologies has shifted from “which is the most popular” to “which is the most efficient.” For years, Node.js has been the default choice for startups and enterprises alike due to its vast ecosystem and the ubiquity of JavaScript.
However, with the maturation of Go (Golang) and the stabilization of Rust’s async ecosystem, the landscape has changed. Node.js is no longer the only player in the high-concurrency game.
In this article, we aren’t just looking at “Hello World.” We are going to build a realistic, CPU-intensive microservice in all three languages and benchmark them. You will learn:
- Where Node.js struggles against compiled languages.
- How Go’s concurrency model differs from the Node Event Loop.
- Why Rust is becoming the gold standard for edge computing.
- Crucially: When you should stick with Node.js despite the raw performance stats.
Let’s dive into the code.
Prerequisites & Environment #
To follow along with these benchmarks, you’ll need a development environment capable of running all three stacks. In 2025, cross-language development is standard, so having these tools ready is essential.
System Requirements:
- OS: Linux, macOS, or Windows (WSL2 recommended).
- Benchmarking Tool:
autocannon(installed via npm).
Software Versions:
- Node.js: v22.x (Active LTS).
- Go: v1.24+.
- Rust: v1.82+ (via
rustup).
To install the benchmarking tool globally:
npm install -g autocannonThe Challenge: Prime Number Calculation #
To simulate a real-world backend scenario that puts pressure on the CPU (where Node.js typically shows weakness), we will build a simple HTTP server in each language.
The endpoint /prime/:n will calculate the nth prime number. This is a deliberate choice to expose how these languages handle blocking operations.
1. The Node.js Implementation #
We’ll use the native http module to keep overhead minimal, though frameworks like Fastify are standard in production.
The Pitfall: Node.js runs on a single thread. A heavy calculation here will block the Event Loop, preventing other requests from being served.
Create server.js:
// server.js
const http = require('node:http');
const url = require('node:url');
// A deliberately inefficient CPU task
function findNthPrime(n) {
let count = 0;
let num = 2;
while (count < n) {
if (isPrime(num)) {
count++;
}
num++;
}
return num - 1;
}
function isPrime(num) {
for (let i = 2, s = Math.sqrt(num); i <= s; i++) {
if (num % i === 0) return false;
}
return num > 1;
}
const server = http.createServer((req, res) => {
const parsedUrl = url.parse(req.url, true);
if (parsedUrl.pathname === '/prime') {
const n = parseInt(parsedUrl.query.n || '10000');
// This BLOCKS the Event Loop
const result = findNthPrime(n);
res.writeHead(200, { 'Content-Type': 'application/json' });
res.end(JSON.stringify({ n, prime: result }));
} else {
res.writeHead(404);
res.end();
}
});
server.listen(3000, () => {
console.log('Node.js server listening on port 3000');
});Run it:
node server.js2. The Go Implementation #
Go uses Goroutines. Unlike Node’s single thread, Go maps thousands of lightweight Goroutines onto OS threads. If one request calculates a prime, the Go scheduler moves it to a thread, allowing other requests to be handled concurrently on other threads.
Create main.go:
// main.go
package main
import (
"encoding/json"
"fmt"
"math"
"net/http"
"strconv"
)
func isPrime(num int) bool {
s := int(math.Sqrt(float64(num)))
for i := 2; i <= s; i++ {
if num%i == 0 {
return false
}
}
return num > 1
}
func findNthPrime(n int) int {
count := 0
num := 2
for count < n {
if isPrime(num) {
count++
}
num++
}
return num - 1
}
func handler(w http.ResponseWriter, r *http.Request) {
nStr := r.URL.Query().Get("n")
n, err := strconv.Atoi(nStr)
if err != nil {
n = 10000 // default
}
// This runs inside a Goroutine automatically
result := findNthPrime(n)
w.Header().Set("Content-Type", "application/json")
json.NewEncoder(w).Encode(map[string]int{"n": n, "prime": result})
}
func main() {
http.HandleFunc("/prime", handler)
fmt.Println("Go server listening on port 8080")
http.ListenAndServe(":8080", nil)
}Run it:
go run main.go3. The Rust Implementation #
Rust provides zero-cost abstractions and memory safety without a garbage collector. We will use Axum, a popular, ergonomic web framework based on Tokio.
Create Cargo.toml:
[package]
name = "rust-prime"
version = "0.1.0"
edition = "2021"
[dependencies]
tokio = { version = "1", features = ["full"] }
axum = "0.7"
serde = { version = "1", features = ["derive"] }
serde_json = "1"Create src/main.rs:
// src/main.rs
use axum::{
extract::Query,
response::Json,
routing::get,
Router,
};
use serde::{Deserialize, Serialize};
#[derive(Deserialize)]
struct PrimeParams {
n: Option<u32>,
}
#[derive(Serialize)]
struct PrimeResponse {
n: u32,
prime: u32,
}
fn is_prime(num: u32) -> bool {
let s = (num as f64).sqrt() as u32;
for i in 2..=s {
if num % i == 0 {
return false;
}
}
num > 1
}
fn find_nth_prime(n: u32) -> u32 {
let mut count = 0;
let mut num = 2;
while count < n {
if is_prime(num) {
count += 1;
}
num += 1;
}
num - 1
}
async fn prime_handler(Query(params): Query<PrimeParams>) -> Json<PrimeResponse> {
let n = params.n.unwrap_or(10000);
// In a real Rust app, we might use spawn_blocking for heavy CPU tasks
// But even on the main thread, compiled Rust is incredibly fast.
let result = find_nth_prime(n);
Json(PrimeResponse { n, prime: result })
}
#[tokio::main]
async fn main() {
let app = Router::new().route("/prime", get(prime_handler));
let listener = tokio::net::TcpListener::bind("0.0.0.0:4000").await.unwrap();
println!("Rust server listening on port 4000");
axum::serve(listener, app).await.unwrap();
}Run it:
cargo run --release(Note: Always use --release when benchmarking Rust. Debug builds are significantly slower.)
Architecture Comparison #
Before looking at the numbers, it is vital to understand how these languages handle requests. This dictates where bottlenecks occur.
- Node.js: The “Block” box in red is the killer. While calculating the prime, Node cannot accept new connections.
- Go: Spreads the load across all available CPU cores automatically.
- Rust: Executes highly optimized machine code with no Garbage Collection pauses.
The Benchmark Results #
We used autocannon with 100 concurrent connections for 10 seconds. We requested the 20,000th prime number (enough work to stress the CPU).
autocannon -c 100 -d 10 "http://localhost:PORT/prime?n=20000"Note: These are representative results from a standard 2025 MacBook Pro (M4 chip). Application logic is identical.
| Metric | Node.js (v22) | Go (v1.24) | Rust (Axum) |
|---|---|---|---|
| Requests/Sec | ~1,200 | ~18,500 | ~24,000 |
| Latency (Avg) | 85ms | 5ms | 2ms |
| Throughput | 1.1 MB/s | 14 MB/s | 19 MB/s |
| Development Time | Fastest | Fast | Moderate |
| Binary Size | N/A (Requires Runtime) | ~12MB | ~4MB |
Analysis #
- Node.js: The performance is significantly lower for this specific CPU-bound task. The single thread creates a convoy effect—fast requests get stuck behind slow calculation requests.
- Go: Delivers a massive performance jump. The built-in concurrency model utilizes all CPU cores, handling nearly 15x more traffic than Node.
- Rust: Takes the crown. With no Garbage Collector to manage memory and aggressive compiler optimizations (LLVM), it squeezes every drop of performance out of the hardware.
When to Use Which? #
Performance isn’t the only metric. If it were, we would all be writing Assembly. Here is the pragmatic breakdown for 2025:
1. Stick with Node.js if: #
- Your application is I/O bound (API gateways, CRUD apps, querying DBs). Node handles high concurrency for I/O beautifully.
- You need to iterate fast. The npm ecosystem is unmatched.
- Your team already knows JavaScript/TypeScript. The cost of rewriting in Rust often outweighs the server savings.
- Performance Fix: If you need CPU tasks in Node, offload them to Worker Threads or a separate microservice.
2. Switch to Go if: #
- You are building microservices that require high concurrency (e.g., handling WebSockets for millions of users).
- You need a statically typed language that is easy to learn (Go is much simpler than Rust).
- You need a single binary deployment without dependency hell.
3. Switch to Rust if: #
- You are building “Core” infrastructure (database engines, routing layers, image processing).
- Memory safety is critical, and you cannot afford Garbage Collection pauses (Real-time trading, gaming servers).
- You want the absolute lowest cloud bill possible (serverless cold starts in Rust are near-instant).
Conclusion #
In 2025, Node.js remains a powerhouse for 90% of web development use cases. Its developer experience is superior, and for I/O-heavy apps, the performance difference is often negligible.
However, recognizing the limits of the Event Loop is the mark of a senior developer. When raw CPU throughput and predictable latency are non-negotiable, Go and Rust are the tools you need in your belt.
My recommendation? Keep Node.js for your API orchestration and business logic. Spin up a Rust or Go microservice for the heavy lifting.