Node.js Logging Mastery: Winston, Pino, and Structured Patterns

Jeff Taakey
21+ Year CTO & Multi-Cloud Architect.

If there is one thing that separates a hobbyist project from an enterprise-grade application, it’s observability. When your Node.js application crashes at 3 AM, or a user reports a transaction failure, your logs are the only witness to the crime.

In the early days of Node, console.log was the trusty Swiss Army knife. But as we navigate the backend landscape of 2025, relying on bare standard output for debugging production applications is not just inefficient; it is risky. Synchronous writes can stall the event loop, plain strings lack structure, and unstructured text makes log aggregation painful.

This guide is designed for mid-to-senior Node.js developers. We aren’t just going to install a library; we are going to architect a logging strategy using the two heavyweights of the ecosystem: Winston and Pino. We will explore structured logging, performance implications, and how to implement request tracing using AsyncLocalStorage.

Prerequisites and Environment

Before we dive into the code, ensure your environment is ready. We will be using ES Modules (ESM) syntax, which is the standard for modern Node.js development.

  • Node.js: Version 20.x (LTS) or 22.x is recommended.
  • Package Manager: npm or pnpm.
  • OS: Linux/macOS preferred for production simulation, but Windows works fine.

To follow along, initialize a new project:

mkdir node-logging-mastery
cd node-logging-mastery
npm init -y
# We will install packages as we go

Edit your package.json to enable ES modules:

{
  "type": "module"
}

Why Structured Logging Matters

Before comparing libraries, we must agree on the format. Structured logging means outputting logs in a machine-readable format—almost exclusively JSON in the Node.js world.

Why JSON?

  1. Parsing: Log aggregators (ELK Stack, Datadog, Splunk, CloudWatch) can natively parse JSON fields.
  2. Querying: You can search for level="error" AND service="payment-api" instead of grep-ing text files.
  3. Context: You can attach complex metadata objects without messy string concatenation.
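
To make the contrast concrete, here is a minimal sketch (using Pino, which we cover below; the field names are illustrative):

// structured-vs-unstructured.js
import pino from 'pino';

const logger = pino();

// Unstructured: a string you can only grep
console.log('Payment failed for user 42 after 3 retries');

// Structured: every field is independently queryable in your aggregator
logger.error({ userId: 42, retries: 3, service: 'payment-api' }, 'Payment failed');
// => {"level":50,"time":...,"userId":42,"retries":3,"service":"payment-api","msg":"Payment failed"}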

The Flow of Logging Data

Understanding where your logs go is as important as writing them. Here is a high-level view of a modern logging architecture:

graph TD
  subgraph "Node.js Application"
    A[Application Code] -->|Emits Log| B(Logger Instance)
    B -->|Middleware| C{Formatter}
    C -->|JSON| D[Transport Layer]
  end
  subgraph "Output Destinations"
    D -->|Stream| E[stdout/stderr]
    D -->|Write| F[File System]
    D -->|Network| G[External Service]
  end
  subgraph "Aggregation Infrastructure"
    E -->|Docker Driver/Fluentd| H[Log Aggregator]
    F -->|Filebeat| H
    H -->|Index| I[Elasticsearch/OpenSearch]
    I -->|Visualize| J[Kibana/Grafana]
  end
  style A fill:#f9f,stroke:#333,stroke-width:2px
  style H fill:#bbf,stroke:#333,stroke-width:2px
  style J fill:#bfb,stroke:#333,stroke-width:2px

The Heavyweight Champion: Winston

Winston is the most popular logging library in the Node.js ecosystem. It is famous for its flexibility and massive ecosystem of “transports” (plugins that send logs to different destinations).

Installation

npm install winston winston-daily-rotate-file

Implementing a Production-Ready Winston Logger

In a real-world scenario, you want logs to be colorful and human-readable during local development, but strict JSON in production. You also need log rotation to prevent filling up the server disk.

Create a file named logger-winston.js:

// logger-winston.js
import winston from 'winston';
import 'winston-daily-rotate-file';

const { combine, timestamp, json, colorize, printf, errors } = winston.format;

// Define custom log levels (optional, but good practice)
const levels = {
  error: 0,
  warn: 1,
  info: 2,
  http: 3,
  debug: 4,
};

// Determine environment
const env = process.env.NODE_ENV || 'development';

// Custom format for local development (Human readable)
const consoleFormat = printf(({ level, message, timestamp, stack, ...meta }) => {
  return `${timestamp} ${level}: ${stack || message} ${Object.keys(meta).length ? JSON.stringify(meta) : ''}`;
});

// Configure the transport for file rotation
const fileRotateTransport = new winston.transports.DailyRotateFile({
  filename: 'logs/app-%DATE%.log',
  datePattern: 'YYYY-MM-DD',
  zippedArchive: true,
  maxSize: '20m',
  maxFiles: '14d', // Keep logs for 14 days
  level: 'info', // In production file, we usually want info and above
});

const logger = winston.createLogger({
  levels,
  level: env === 'development' ? 'debug' : 'info',
  format: combine(
    timestamp({ format: 'YYYY-MM-DD HH:mm:ss' }),
    errors({ stack: true }), // Make sure to capture stack trace!
    json() // Default to JSON for file/prod
  ),
  transports: [
    fileRotateTransport,
    // Add a specific error log file
    new winston.transports.File({ filename: 'logs/error.log', level: 'error' }),
  ],
});

// Outside production, also log to the console with a colorized,
// human-readable format for local development:
if (env !== 'production') {
  logger.add(
    new winston.transports.Console({
      format: combine(
        colorize({ all: true }),
        timestamp({ format: 'HH:mm:ss' }),
        consoleFormat
      ),
    })
  );
}

export default logger;

Usage

// app-winston.js
import logger from './logger-winston.js';

logger.info('Server started', { port: 3000, env: 'development' });

try {
  throw new Error('Database connection failed');
} catch (error) {
  logger.error('Critical failure', error); 
  // Winston handles the Error object gracefully due to format.errors()
}

logger.warn('Memory usage high', { memory: '512MB' });
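
Running app-winston.js writes JSON lines to logs/app-<DATE>.log roughly like the following (timestamps, field ordering, and the exact error shape are illustrative):

{"level":"info","message":"Server started","port":3000,"env":"development","timestamp":"2025-01-15 10:30:00"}
{"level":"error","message":"Critical failure","stack":"Error: Database connection failed\n    at ...","timestamp":"2025-01-15 10:30:01"}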

Winston Pros & Cons

Winston is incredibly configurable. You can create multiple loggers, pipe streams, and find a community transport for almost any service (Slack, MongoDB, Email). However, this flexibility comes with a slight runtime overhead.
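
As a quick taste of that flexibility, here is a hedged sketch using Winston's built-in logger container and child loggers (the category and field names are made up for the example):

// multiple-loggers.js
import winston from 'winston';

// Register a named logger per subsystem in winston's container
winston.loggers.add('payment', {
  level: 'info',
  format: winston.format.json(),
  defaultMeta: { service: 'payment-api' },
  transports: [new winston.transports.Console()],
});

const paymentLogger = winston.loggers.get('payment');

// Child loggers inherit the parent config but add their own metadata
const refundLogger = paymentLogger.child({ module: 'refunds' });
refundLogger.info('Refund issued', { amount: 49.99 });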


The Speed Demon: Pino

Pino claims to be over 5x faster than alternatives. In high-throughput Node.js applications (thousands of requests per second), the overhead of logging can actually degrade performance. Pino achieves speed by minimizing resource usage and offloading formatting.

Installation

npm install pino pino-pretty

Note: pino-pretty should ideally only be used in development. In production, pipe logs to a processor or let your logging agent handle the formatting.
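
In practice that means keeping your application output as raw JSON and piping it through the pino-pretty CLI only on your machine (app.js stands in for your entry point):

# Development only: pretty-print the JSON stream from stdout
node app.js | npx pino-pretty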

Implementing a High-Performance Pino Logger

Create logger-pino.js:

// logger-pino.js
import pino from 'pino';

const env = process.env.NODE_ENV || 'development';

const logger = pino({
  level: env === 'development' ? 'debug' : 'info',
  
  // In production, we keep standard JSON keys (time, pid, hostname).
  // In dev, we might want to translate time to human readable.
  timestamp: pino.stdTimeFunctions.isoTime,

  // Redact sensitive keys automatically
  redact: {
    paths: ['req.headers.authorization', 'user.password', 'user.email'],
    censor: '***REDACTED***'
  },

  // Transport configuration
  transport: env === 'development' 
    ? {
        target: 'pino-pretty',
        options: {
          colorize: true,
          translateTime: 'HH:MM:ss Z',
          ignore: 'pid,hostname',
        },
      }
    : undefined, // In production, log straight to stdout (JSON)
});

export default logger;

Usage
#

// app-pino.js
import logger from './logger-pino.js';

const user = { 
  id: 1, 
  email: 'alice@example.com', 
  password: 'supersecretpassword' 
};

// Pino merges the object into the log JSON automatically
logger.info({ user }, 'User login attempt'); 
// Output will show email and password as "***REDACTED***"

logger.error(new Error('Payment gateway timeout'));
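
Pino's child loggers are also worth knowing: bindings passed to child() are serialized once and merged into every subsequent line from that child (the module and orderId values here are illustrative):

// Continuing app-pino.js
const orderLogger = logger.child({ module: 'orders', orderId: 'ord_123' });

orderLogger.info('Order created');
// => {"level":30,...,"module":"orders","orderId":"ord_123","msg":"Order created"}
orderLogger.warn('Inventory running low');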

Why Pino is Faster

Pino buffers logs and writes them asynchronously (if configured). It avoids complex object serialization unless necessary and focuses strictly on JSON output.
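
A minimal sketch of that asynchronous mode, assuming you write to a local file and can tolerate losing a few buffered lines on a hard crash:

// logger-async.js
import pino from 'pino';

// SonicBoom-backed destination: buffered, non-blocking writes
const destination = pino.destination({
  dest: './logs/app.log',
  sync: false,     // do not block the event loop on every write
  minLength: 4096, // buffer up to 4 KiB before flushing
});

const logger = pino(destination);

// 'exit' handlers must be synchronous, so flush the buffer synchronously
process.on('exit', () => destination.flushSync());

export default logger;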


Comparison: Winston vs. Pino

Choosing between the two often comes down to specific project needs. Here is a breakdown of the differences.

| Feature        | Winston                                 | Pino                                                        |
| -------------- | --------------------------------------- | ----------------------------------------------------------- |
| Philosophy     | Maximum flexibility and configuration.  | Minimum overhead and speed.                                 |
| Output Format  | Configurable (JSON, text, custom).      | JSON first (native).                                        |
| Performance    | Good, but slower due to features.       | Excellent (low allocation).                                 |
| Ecosystem      | Massive (transports for everything).    | Growing (focuses on stdout).                                |
| Log Rotation   | Built-in via community transports.      | Recommends external tools (logrotate) or specific transports. |
| Learning Curve | Gentle, easy to customize strings.      | Slight curve for async transports.                          |

Advanced Pattern: Correlation IDs with AsyncLocalStorage

In a busy Node.js application, logs from different requests get interleaved. If 50 users hit your API at once, how do you know which “Database Error” belongs to which User?

This is where Correlation IDs (or Request IDs) come in. We use AsyncLocalStorage (native to Node.js) to store a unique ID for the duration of a request, available anywhere in the code without passing it as a function argument.

Implementation

We will create a mock Express/Http server flow to demonstrate this.

  1. Install uuid: npm install uuid
  2. Create the context store:
// context.js
import { AsyncLocalStorage } from 'node:async_hooks';

export const asyncLocalStorage = new AsyncLocalStorage();
  3. Update the Logger (Pino Example):

Pino has a mixin feature that allows us to inject data into every log statement automatically.

// logger-context.js
import pino from 'pino';
import { asyncLocalStorage } from './context.js';

const logger = pino({
  level: 'info',
  mixin() {
    // This runs for every log call
    const store = asyncLocalStorage.getStore();
    if (store) {
      return { reqId: store.get('requestId') };
    }
    return {};
  }
});

export default logger;
  4. Simulate the Application:
// app-context.js
import { v4 as uuidv4 } from 'uuid';
import { asyncLocalStorage } from './context.js';
import logger from './logger-context.js';

// Simulate an HTTP Middleware
function requestMiddleware(req, next) {
  const store = new Map();
  const requestId = req.headers['x-request-id'] || uuidv4();
  
  store.set('requestId', requestId);

  // Run the rest of the request within this context
  asyncLocalStorage.run(store, () => {
    logger.info({ method: req.method, url: req.url }, 'Incoming Request');
    next();
  });
}

// Simulate Business Logic (Notice no logger passed explicitly)
function processPayment() {
  // This log will automatically have the reqId!
  logger.info('Processing payment logic...');
  
  // Simulate heavy work
  setTimeout(() => {
    logger.info('Payment successful');
  }, 100);
}

// Mocking a Request
const mockReq = { method: 'POST', url: '/api/buy', headers: {} };

requestMiddleware(mockReq, () => {
    processPayment();
});

// Output (Conceptual JSON):
// {"level":30, "time":..., "reqId":"a1b2-c3d4...", "method":"POST", "msg":"Incoming Request"}
// {"level":30, "time":..., "reqId":"a1b2-c3d4...", "msg":"Processing payment logic..."}
// {"level":30, "time":..., "reqId":"a1b2-c3d4...", "msg":"Payment successful"}

This pattern is incredibly powerful. You can trace a single request through controllers, services, and database layers by filtering for the reqId in your log aggregator.
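
Even without a full aggregator you can exploit this locally, for example by filtering a JSON log file with jq (the reqId value is illustrative):

# Show every log line belonging to a single request
jq 'select(.reqId == "a1b2-c3d4")' logs/app.log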


Best Practices and Common Pitfalls

1. Don’t Log Sensitive Data (PII)

In 2025, GDPR and CCPA are stricter than ever. Never log:

  • Passwords
  • Credit Card Numbers
  • Full API Keys
  • Emails (unless hashed or necessary for debugging)

Use Pino’s redact feature or Winston’s custom formatters to strip these fields automatically.
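
With Winston, a sketch of such a formatter might look like this (the field list is an assumption; adapt it to your own schema):

// redact-format.js
import winston from 'winston';

const SENSITIVE_KEYS = ['password', 'creditCard', 'apiKey']; // assumed field names

// Custom format: overwrite sensitive top-level fields before output
const redact = winston.format((info) => {
  for (const key of SENSITIVE_KEYS) {
    if (key in info) info[key] = '***REDACTED***';
  }
  return info;
});

const logger = winston.createLogger({
  format: winston.format.combine(redact(), winston.format.json()),
  transports: [new winston.transports.Console()],
});

logger.info('User created', { email: 'alice@example.com', password: 'hunter2' });
// The emitted JSON contains "password":"***REDACTED***"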

2. Standardize Log Levels

Don’t use console.log for everything; pick the lowest level that accurately describes the event (a runnable sketch follows the list below).

  • FATAL: The app is crashing.
  • ERROR: Current operation failed (DB error), but app keeps running.
  • WARN: Something looks wrong (deprecated API usage, high memory), but no error yet.
  • INFO: Normal lifecycle events (Server started, Request completed).
  • DEBUG: Detailed variable states for local dev.
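
A minimal sketch of the same convention using Pino's default levels (which, unlike Winston's npm levels, include fatal):

// levels-demo.js
import pino from 'pino';

const logger = pino({ level: 'debug' });

logger.debug({ query: { page: 2 } }, 'Parsed pagination params'); // local dev detail
logger.info({ port: 3000 }, 'Server started');                    // lifecycle event
logger.warn({ heapUsedMb: 512 }, 'Memory usage high');            // suspicious, not fatal
logger.error(new Error('DB timeout'), 'Order lookup failed');     // operation failed
logger.fatal('Config missing, shutting down');                    // app cannot continue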

3. Log Rotation is Mandatory

If you write to a file, you must rotate it. I have seen production servers crash because a 50GB app.log file filled the disk.

  • With Winston: Use winston-daily-rotate-file.
  • With Pino: Pipe stdout to a tool like logrotate (Linux) or use pino-roll (sketched below).
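
A minimal pino-roll sketch, assuming daily rotation into a logs/ directory (install pino-roll first, and verify the option names against the version you use):

// logger-rotating.js
import pino from 'pino';

const logger = pino({
  transport: {
    target: 'pino-roll',
    options: {
      file: 'logs/app',   // base path; pino-roll appends a rolling suffix
      frequency: 'daily', // rotate once per day
      mkdir: true,        // create the logs/ directory if missing
    },
  },
});

export default logger;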

4. Avoiding the console.log Trap

console.log writes to process.stdout. In Node.js, writes to stdout and stderr are synchronous when the destination is a TTY or a file, and may be asynchronous when it is a pipe (the exact behavior is platform-dependent). A dedicated logging library gives you buffering and non-blocking writes where possible, keeping your event loop free for handling requests.


Conclusion

Logging is the voice of your application. When implemented correctly, it tells you a story of health, performance, and user behavior.

  • Use Winston if you need complex transports, robust file rotation built-in, and don’t mind a tiny bit of overhead.
  • Use Pino if you are building high-performance microservices and prefer the “Unix philosophy” of piping logs to standard out for another process to handle.
  • Always use Structured Logging (JSON) in production.
  • Implement AsyncLocalStorage for request tracing to preserve your sanity during debugging sessions.


By adopting these patterns today, you ensure that your Node.js applications remain maintainable and observable well into the future.