You’ve been writing async/await for years. But do you know what Node.js is actually doing between your await points? Most developers operate on a fuzzy mental model — “the event loop handles it” — and get burned in production when setTimeout(() => {}, 0) behaves differently than setImmediate() or when process.nextTick() starves the loop entirely.
This post tears open the event loop and explains every phase using libuv’s actual implementation. No hand-waving.
Table of contents
- Why This Matters in Production
- libuv: The Engine Underneath
- The Six Phases of the Event Loop
- The Microtask Queue: Runs Between EVERY Phase
- setImmediate vs setTimeout(fn, 0): The Race
- async/await Under the Hood
- Diagnosing Event Loop Lag with Clinic.js
- Worker Threads: True Parallelism
- Key Takeaways
- Related posts
Why This Matters in Production
Misunderstanding the event loop causes real bugs:
- Latency spikes: A synchronous loop that takes 50ms blocks all I/O callbacks, including incoming HTTP requests
- Event loop starvation: Recursive `process.nextTick()` chains never yield to I/O callbacks — the queue refills as fast as it drains, starving timers and network handlers
- Race conditions: Assuming `setImmediate()` and `setTimeout(fn, 0)` are equivalent (they're not)
- Late timers: Long poll phases prevent timers from firing on time
Before we fix these, we need to understand the machine.
libuv: The Engine Underneath
Node.js is built on two main components:
- V8: The JavaScript engine (parses, compiles, executes JS)
- libuv: The C library that provides the event loop, thread pool, and OS abstraction
When you call fs.readFile(), you’re not blocking V8 — you’re registering a callback with libuv, which dispatches the actual file I/O to the OS (or its thread pool) and polls for completion.
The Thread Pool
libuv maintains a thread pool (default size: 4, configurable via the UV_THREADPOOL_SIZE environment variable, e.g. UV_THREADPOOL_SIZE=8). Operations that use it:
- File system operations (`fs.*`)
- DNS resolution (`dns.lookup()`)
- Crypto (`crypto.pbkdf2()`, `crypto.scrypt()`)
- Zlib compression
Network I/O does NOT use the thread pool. TCP/UDP uses non-blocking OS primitives (epoll on Linux, kqueue on macOS, IOCP on Windows). This is why Node.js handles thousands of concurrent connections without spawning thousands of threads.
The Six Phases of the Event Loop
Each “tick” of the event loop cycles through these phases in order:
```
┌───────────────────────────┐
│           timers          │  setTimeout, setInterval callbacks
└─────────────┬─────────────┘
              │
┌─────────────▼─────────────┐
│     pending callbacks     │  I/O callbacks deferred to next iteration
└─────────────┬─────────────┘
              │
┌─────────────▼─────────────┐
│       idle, prepare       │  internal libuv use only
└─────────────┬─────────────┘
              │
┌─────────────▼─────────────┐
│           poll            │  retrieve new I/O events, execute callbacks
└─────────────┬─────────────┘
              │
┌─────────────▼─────────────┐
│           check           │  setImmediate callbacks
└─────────────┬─────────────┘
              │
┌─────────────▼─────────────┐
│      close callbacks      │  socket.on('close', ...)
└───────────────────────────┘
```

Between each phase transition, Node.js drains two special queues:

1. `process.nextTick()` queue (highest priority)
2. Promise microtask queue (`Promise.resolve().then(...)`)
Phase 1: Timers
Executes callbacks for setTimeout() and setInterval() whose delay has elapsed.
Critical detail: The delay is a minimum, not a guarantee. If the poll phase is busy with I/O, timers will fire late. A setTimeout(() => {}, 100) will execute at or after 100ms — not exactly at 100ms.
```javascript
const crypto = require('crypto');

const start = Date.now();

setTimeout(() => {
  console.log(`fired after ${Date.now() - start}ms`);
}, 100);

// Simulate blocking
const buf = crypto.randomBytes(1024 * 1024 * 100); // sync, takes ~200ms
// Timer fires at ~200ms, not 100ms
```

The timers phase loops through all expired timers and executes their callbacks. It does NOT wait for new timers to expire — it runs whatever is already due and moves on.
Phase 2: Pending Callbacks
Runs I/O callbacks that were deferred from the previous loop iteration. Rare in practice — mainly used for TCP error callbacks on some systems.
Phase 3: Idle, Prepare
Internal to libuv. You never touch these directly.
Phase 4: Poll (The Most Important Phase)
This is where Node.js spends most of its time. The poll phase does two things:
- Calculates how long to block waiting for I/O events
- Processes I/O callbacks as events arrive
The blocking duration is calculated as:
- Zero if there are any `setImmediate()` callbacks queued (proceed immediately to the check phase)
- Zero if any timers are about to expire
- Otherwise: block until the next timer is due, or indefinitely if no timers exist
When a TCP connection receives data, the OS notifies libuv via epoll_wait, which returns the file descriptor. libuv finds the registered callback and executes it — all within the poll phase.
This is why Node.js can handle 10,000 concurrent connections. A single thread is waiting on an epoll_wait call for all 10,000 sockets simultaneously. When any of them has data, the callback runs. No threads-per-connection needed.
Phase 5: Check
Executes setImmediate() callbacks. Always runs after poll completes (even if poll was blocking, setImmediate fires on the same tick).
Phase 6: Close Callbacks
Cleanup callbacks: socket.on('close', ...), server.on('close', ...).
The Microtask Queue: Runs Between EVERY Phase
Before moving to the next phase, Node.js drains two queues entirely:
```javascript
// Order of execution:
setTimeout(() => console.log('1: timer'), 0);
Promise.resolve().then(() => console.log('2: promise microtask'));
process.nextTick(() => console.log('3: nextTick'));
console.log('4: synchronous');

// Output:
// 4: synchronous
// 3: nextTick          ← nextTick drains first
// 2: promise microtask ← then promises
// 1: timer             ← then timer phase
```

process.nextTick() has higher priority than Promise microtasks. Both run before any event loop phase.
```
┌─── synchronous code (call stack) ───────────────────────────────────┐
│  console.log('sync')                                                │
└─────────────────────────────────────────────────────────────────────┘
                                   │
┌─── microtask queues (drain completely before next phase) ───────────┐
│  1. process.nextTick queue   (ALL callbacks, in order)              │
│  2. Promise microtask queue  (ALL .then/.catch/.finally, in order)  │
└─────────────────────────────────────────────────────────────────────┘
                                   │
┌─── event loop phase ────────────────────────────────────────────────┐
│  timers / poll / check / …                                          │
│    ↓ (one callback executes)                                        │
│    → microtask queues drain again                                   │
│    ↓ (next callback in phase)                                       │
│    → microtask queues drain again                                   │
└─────────────────────────────────────────────────────────────────────┘
```

Warning: Recursive process.nextTick() is catastrophic:
```javascript
// DO NOT DO THIS
function recursiveTick() {
  process.nextTick(recursiveTick); // never yields to event loop
}
recursiveTick(); // starves all I/O permanently
```

Use setImmediate() instead for recursive async work — it yields after each check phase.
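A sketch of the safe pattern: recursive `setImmediate()` lets a pending timer through, where recursive `process.nextTick()` would starve it forever (the counter and the 10ms timer are illustrative):

```javascript
// Sketch: recursive setImmediate() yields between iterations,
// so the rest of the event loop (timers, poll) still runs.
let iterations = 0;
let timerFired = false;

// This timer would NEVER fire under recursive process.nextTick().
setTimeout(() => { timerFired = true; }, 10);

function pump() {
  iterations++;
  if (!timerFired) {
    setImmediate(pump); // re-queued in the check phase; timers run in between
  } else {
    console.log(`timer fired after ${iterations} setImmediate iterations`);
  }
}
pump();
```

Each `setImmediate()` hop completes a full loop iteration, which is exactly the yield point that nextTick recursion denies the loop.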
setImmediate vs setTimeout(fn, 0): The Race
The infamous question. In the main module (outside any I/O callback), the order is non-deterministic due to timer resolution:
```javascript
setTimeout(() => console.log('timeout'), 0);
setImmediate(() => console.log('immediate'));

// Sometimes: timeout, immediate
// Sometimes: immediate, timeout
```

This happens because setTimeout(fn, 0) is clamped to minimum 1ms. If the event loop starts and less than 1ms has passed, the timer isn’t ready, so setImmediate fires first in the check phase. If more than 1ms has passed, the timer fires first.
Inside an I/O callback, the order IS deterministic:
```javascript
const fs = require('fs');

fs.readFile('/etc/hosts', () => {
  setTimeout(() => console.log('timeout'), 0);
  setImmediate(() => console.log('immediate'));
  // Always: immediate, timeout
  // Because we're inside the poll phase, check comes next before timers restart
});
```

async/await Under the Hood
await is syntactic sugar for Promises. This:
```javascript
async function fetchData() {
  const data = await fetch('https://api.example.com/data');
  console.log('got data');
}
```

Compiles to roughly:

```javascript
function fetchData() {
  return fetch('https://api.example.com/data').then(data => {
    console.log('got data');
  });
}
```

The .then() callback is a Promise microtask. It runs after the current synchronous code finishes, before the next event loop phase. So await is not magic — it’s scheduled like any other microtask.
Timeline: async function fetchData()
```
call stack                  microtask queue        event loop (poll phase)
──────────────────────      ────────────────────   ──────────────────────
fetchData() starts
fetch() called ───────────────────────────────────→ kernel: HTTP request
await suspends fetchData
← caller continues
───────────────────────────────────────────────────
                            … waiting for I/O …
                                                   ← HTTP response arrives
                            .then(cb) enqueued
(call stack empty)
cb runs: 'got data'
fetchData resumes
```

The Gotcha with await in Loops
```javascript
// WRONG: sequential (total time = sum of all request times)
for (const url of urls) {
  const data = await fetch(url);
  results.push(data);
}

// RIGHT: concurrent (total time = max of all request times)
const results = await Promise.all(urls.map(url => fetch(url)));
```

The await inside a loop suspends the entire loop iteration. Promise.all() fires all requests concurrently and resumes once all complete.
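The timing difference is easy to demonstrate without a network — in this sketch, timers stand in for `fetch()` (the 50ms delays are arbitrary stand-ins):

```javascript
// Sketch: sequential vs concurrent awaits, timed with stand-in delays.
const delay = ms => new Promise(resolve => setTimeout(resolve, ms));

async function compare() {
  let t = Date.now();
  await delay(50);                                     // each await suspends
  await delay(50);                                     // the function until
  await delay(50);                                     // its timer fires
  const sequential = Date.now() - t;                   // ~150ms total

  t = Date.now();
  await Promise.all([delay(50), delay(50), delay(50)]); // all scheduled up front
  const concurrent = Date.now() - t;                    // ~50ms total

  console.log({ sequential, concurrent });
  return { sequential, concurrent };
}

const result = compare();
```

One caveat worth knowing: `Promise.all()` rejects as soon as any input rejects; if you need every result regardless of failures, `Promise.allSettled()` is the tool.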
Diagnosing Event Loop Lag with Clinic.js
Install Clinic.js:
```
npm install -g clinic
```

Run your app with Doctor:

```
clinic doctor -- node server.js
```

Then apply load:

```
npx autocannon -c 100 -d 10 http://localhost:3000
```

Clinic Doctor generates a report showing:
- Event loop delay histogram: P50, P99 latency in event loop
- CPU profile: Where time is being spent
- Memory usage: Potential leaks
A healthy event loop delay P99 should be <1ms for most APIs. If you’re seeing 50-100ms, you have synchronous blocking code that needs to move to Worker Threads or the thread pool.
Worker Threads: True Parallelism
For CPU-bound work (image processing, crypto, ML inference), use Worker Threads:
```javascript
import { Worker, isMainThread, parentPort, workerData } from 'worker_threads';

if (isMainThread) {
  const worker = new Worker(new URL(import.meta.url), {
    workerData: { numbers: [1, 2, 3, 4, 5] }
  });
  worker.on('message', result => console.log('Sum:', result));
} else {
  const sum = workerData.numbers.reduce((a, b) => a + b, 0);
  parentPort.postMessage(sum);
}
```

Workers have their own V8 instance and event loop. They communicate via message passing (like web workers). Use them for:
- Image/video processing
- Cryptographic operations
- Parsing large JSON files
- Any CPU-intensive computation taking >50ms
Key Takeaways
| Mechanism | When it runs | Use for |
|---|---|---|
| Synchronous code | Immediately, blocks everything | Fast computations only |
| process.nextTick() | Before next event loop phase | Deferring without yielding |
| Promise.then() | Before next event loop phase, after nextTick | Standard async code |
| setImmediate() | Check phase (after poll) | Recursive async work |
| setTimeout(fn, 0) | Next timers phase (≥1ms) | Non-critical deferred work |
| I/O callbacks | Poll phase | Network, file system responses |
| Worker Threads | Parallel execution | CPU-bound tasks |
Understanding these phases is the difference between a Node.js server that handles 50,000 req/s and one that falls over at 500. The event loop is simple — but simple doesn’t mean obvious.
Related posts
- Node.js Diagnostic Tools — Heap Snapshots, Flame Graphs, and DoctorJS in 2026 — how to measure event loop lag and I/O phase bottlenecks in production
- Python AsyncIO vs Node.js Event Loop — The Differences That Bite You — side-by-side comparison of the two models and where they diverge on blocking, cancellation, and thread integration