
Node.js Event Loop Internals — What Actually Happens When You await

Posted on: February 12, 2024 at 10:00 AM

You’ve been writing async/await for years. But do you know what Node.js is actually doing between your await points? Most developers operate on a fuzzy mental model — “the event loop handles it” — and get burned in production when setTimeout(() => {}, 0) behaves differently than setImmediate() or when process.nextTick() starves the loop entirely.

This post tears open the event loop and explains every phase using libuv’s actual implementation. No hand-waving.


Why This Matters in Production

Misunderstanding the event loop causes real bugs:

  - Timers that fire far later than their stated delay because synchronous code is blocking the loop
  - setTimeout(() => {}, 0) and setImmediate() racing in a non-deterministic order
  - Recursive process.nextTick() starving all I/O
  - CPU-bound work silently stalling every concurrent request on the server

Before we fix these, we need to understand the machine.

libuv: The Engine Underneath

Node.js is built on two main components:

  - V8, Google's JavaScript engine, which compiles and executes your code
  - libuv, a C library that provides the event loop, asynchronous I/O, and the thread pool

When you call fs.readFile(), you’re not blocking V8 — you’re registering a callback with libuv, which dispatches the actual file I/O to the OS (or its thread pool) and polls for completion.

The Thread Pool

libuv maintains a thread pool (default size: 4, configurable via the UV_THREADPOOL_SIZE environment variable, e.g. UV_THREADPOOL_SIZE=8). Operations that use it:

  - fs.* file system calls
  - dns.lookup() (but not dns.resolve*, which talks to DNS servers over the network)
  - crypto.pbkdf2(), crypto.scrypt(), crypto.randomBytes() (async versions)
  - zlib compression and decompression (async versions)

Network I/O does NOT use the thread pool. TCP/UDP uses non-blocking OS primitives (epoll on Linux, kqueue on macOS, IOCP on Windows). This is why Node.js handles thousands of concurrent connections without spawning thousands of threads.

The Six Phases of the Event Loop

Each “tick” of the event loop cycles through these phases in order:

┌───────────────────────────┐
│          timers           │  setTimeout, setInterval callbacks
└─────────────┬─────────────┘
┌─────────────▼─────────────┐
│     pending callbacks     │  I/O callbacks deferred to next iteration
└─────────────┬─────────────┘
┌─────────────▼─────────────┐
│       idle, prepare       │  internal libuv use only
└─────────────┬─────────────┘
┌─────────────▼─────────────┐
│           poll            │  retrieve new I/O events, execute callbacks
└─────────────┬─────────────┘
┌─────────────▼─────────────┐
│           check           │  setImmediate callbacks
└─────────────┬─────────────┘
┌─────────────▼─────────────┐
│      close callbacks      │  socket.on('close', ...)
└───────────────────────────┘

Between each phase transition, Node.js drains two special queues:

  1. process.nextTick() queue (highest priority)
  2. Promise microtask queue (Promise.resolve().then(...))

Phase 1: Timers

Executes callbacks for setTimeout() and setInterval() whose delay has elapsed.

Critical detail: The delay is a minimum, not a guarantee. If the poll phase is busy with I/O, timers will fire late. A setTimeout(() => {}, 100) will execute at or after 100ms — not exactly at 100ms.

const crypto = require('node:crypto');

const start = Date.now();
setTimeout(() => {
  console.log(`fired after ${Date.now() - start}ms`);
}, 100);

// Simulate blocking the loop with synchronous work (~200ms on typical hardware):
const buf = crypto.randomBytes(1024 * 1024 * 100); // 100 MB, synchronous
// Timer fires at ~200ms, not 100ms

The timer phase loops through all expired timers and executes their callbacks. It does NOT wait for new timers to expire — it runs whatever is already due and moves on.

Phase 2: Pending Callbacks

Runs I/O callbacks that were deferred from the previous loop iteration. Rare in practice — mainly used for TCP error callbacks on some systems.

Phase 3: Idle, Prepare

Internal to libuv. You never touch these directly.

Phase 4: Poll (The Most Important Phase)

This is where Node.js spends most of its time. The poll phase does two things:

  1. Calculates how long to block waiting for I/O events
  2. Processes I/O callbacks as events arrive

The blocking duration is calculated roughly as follows: if setImmediate() callbacks are queued, poll does not block at all; otherwise it blocks until the soonest timer's deadline; and if no timers are scheduled, it blocks indefinitely until I/O arrives.

When a TCP connection receives data, the OS notifies libuv via epoll_wait, which returns the file descriptor. libuv finds the registered callback and executes it — all within the poll phase.

This is why Node.js can handle 10,000 concurrent connections. A single thread is waiting on an epoll_wait call for all 10,000 sockets simultaneously. When any of them has data, the callback runs. No threads-per-connection needed.

Phase 5: Check

Executes setImmediate() callbacks. This phase always runs after poll completes; if setImmediate() callbacks are queued, poll stops blocking so that they fire on the same loop iteration.

Phase 6: Close Callbacks

Cleanup callbacks: socket.on('close', ...), server.on('close', ...).

The Microtask Queue: Runs Between EVERY Phase

Before moving to the next phase, Node.js drains two queues entirely: the process.nextTick() queue first, then the Promise microtask queue.

// Order of execution:
setTimeout(() => console.log('1: timer'), 0);
Promise.resolve().then(() => console.log('2: promise microtask'));
process.nextTick(() => console.log('3: nextTick'));
console.log('4: synchronous');
// Output:
// 4: synchronous
// 3: nextTick ← nextTick drains first
// 2: promise microtask ← then promises
// 1: timer ← then timer phase

process.nextTick() has higher priority than Promise microtasks. Both run before any event loop phase.

┌─── synchronous code (call stack) ───────────────────────────────────┐
│ console.log('sync')                                                 │
└─────────────────────────────────────────────────────────────────────┘
┌─── microtask queues (drain completely before next phase) ───────────┐
│ 1. process.nextTick queue (ALL callbacks, in order)                 │
│ 2. Promise microtask queue (ALL .then/.catch/.finally, in order)    │
└─────────────────────────────────────────────────────────────────────┘
┌─── event loop phase ────────────────────────────────────────────────┐
│ timers / poll / check / …                                           │
│   ↓ (one callback executes)                                         │
│   → microtask queues drain again                                    │
│   ↓ (next callback in phase)                                        │
│   → microtask queues drain again                                    │
└─────────────────────────────────────────────────────────────────────┘

Warning: Recursive process.nextTick() is catastrophic:

// DO NOT DO THIS
function recursiveTick() {
  process.nextTick(recursiveTick); // never yields to the event loop
}
recursiveTick(); // starves all I/O permanently

Use setImmediate() instead for recursive async work — it yields after each check phase.

setImmediate vs setTimeout(fn, 0): The Race

The infamous question. In the main module (outside any I/O callback), the order is non-deterministic due to timer resolution:

setTimeout(() => console.log('timeout'), 0);
setImmediate(() => console.log('immediate'));
// Sometimes: timeout, immediate
// Sometimes: immediate, timeout

This happens because setTimeout(fn, 0) is clamped to minimum 1ms. If the event loop starts and less than 1ms has passed, the timer isn’t ready, so setImmediate fires first in the check phase. If more than 1ms has passed, timer fires first.

Inside an I/O callback, the order IS deterministic:

const fs = require('node:fs');

fs.readFile('/etc/hosts', () => {
  setTimeout(() => console.log('timeout'), 0);
  setImmediate(() => console.log('immediate'));
  // Always: immediate, timeout
  // Because we're inside the poll phase, check runs next, before timers
});

async/await Under the Hood

await is syntactic sugar for Promises. This:

async function fetchData() {
  const data = await fetch('https://api.example.com/data');
  console.log('got data');
}

Compiles to roughly:

function fetchData() {
  return fetch('https://api.example.com/data').then(data => {
    console.log('got data');
  });
}

The .then() callback is a Promise microtask. It runs after the current synchronous code finishes, before the next event loop phase. So await is not magic — it’s scheduled like any other microtask.
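A minimal sketch of that scheduling: the code after an await resumes as a microtask, after the synchronous code that follows the call has finished.

```javascript
// Sketch: the body after `await` resumes as a microtask, once the
// current synchronous code has completed.
const order = [];

async function demo() {
  order.push('A: before await');
  await null; // suspends demo(); resumption is enqueued as a microtask
  order.push('C: after await');
}

demo();
order.push('B: sync code after the call');

// This .then() was registered after demo's resumption was enqueued,
// so it runs last and observes the full order:
Promise.resolve().then(() => console.log(order.join(' -> ')));
// Prints: A: before await -> B: sync code after the call -> C: after await
```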

Timeline: async function fetchData()

call stack                   microtask queue         event loop (poll phase)
──────────────────────       ────────────────────    ──────────────────────
fetchData() starts
fetch() called ────────────────────────────────────→ kernel: HTTP request
await suspends fetchData
caller continues
                                                     … waiting for I/O …
                                                     HTTP response arrives
                             .then(cb) enqueued ←────
(call stack empty)
cb runs: 'got data'
fetchData resumes

The Gotcha with await in Loops

// (inside an async function)

// WRONG: sequential (total time = sum of all request times)
const results = [];
for (const url of urls) {
  const data = await fetch(url);
  results.push(data);
}

// RIGHT: concurrent (total time = max of all request times)
const results = await Promise.all(urls.map(url => fetch(url)));

The await inside a loop suspends the entire loop iteration. Promise.all() fires all requests concurrently and resumes once all complete.
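The difference is easy to measure. The sketch below uses a hypothetical fakeFetch (a timer-backed stand-in for a 100ms network request), so the timings are illustrative:

```javascript
// Sketch: sequential vs concurrent awaits, timed with a fake async task.
// fakeFetch is a stand-in that resolves after `ms` milliseconds.
const fakeFetch = ms => new Promise(resolve => setTimeout(resolve, ms));

let seqMs, conMs;

async function main() {
  let t = Date.now();
  for (const ms of [100, 100, 100]) await fakeFetch(ms); // one at a time
  seqMs = Date.now() - t; // ~300ms: delays add up

  t = Date.now();
  await Promise.all([100, 100, 100].map(fakeFetch)); // all started together
  conMs = Date.now() - t; // ~100ms: delays overlap

  console.log(`sequential: ~${seqMs}ms, concurrent: ~${conMs}ms`);
}
main();
```

The same shape holds for real fetch calls, with network latency in place of the timers.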

Diagnosing Event Loop Lag with Clinic.js

Install Clinic.js:

npm install -g clinic

Run your app with Doctor:

clinic doctor -- node server.js

Then apply load:

npx autocannon -c 100 -d 10 http://localhost:3000

Clinic Doctor generates a report showing:

  - CPU usage over time
  - Memory usage over time
  - Event loop delay
  - Active handles

A healthy event loop delay P99 should be <1ms for most APIs. If you’re seeing 50-100ms, you have synchronous blocking code that needs to move to Worker Threads or the thread pool.

Worker Threads: True Parallelism

For CPU-bound work (image processing, crypto, ML inference), use Worker Threads:

import { Worker, isMainThread, parentPort, workerData } from 'worker_threads';

if (isMainThread) {
  const worker = new Worker(new URL(import.meta.url), {
    workerData: { numbers: [1, 2, 3, 4, 5] }
  });
  worker.on('message', result => console.log('Sum:', result));
} else {
  const sum = workerData.numbers.reduce((a, b) => a + b, 0);
  parentPort.postMessage(sum);
}

Workers have their own V8 instance and event loop. They communicate via message passing (like web workers). Use them for:

  - Image and video processing
  - CPU-heavy crypto (hashing, key derivation)
  - ML inference
  - Parsing or serializing very large payloads

Key Takeaways

Mechanism            When it runs                                    Use for
───────────────────  ──────────────────────────────────────────────  ──────────────────────────────
Synchronous code     Immediately, blocks everything                  Fast computations only
process.nextTick()   Before next event loop phase                    Deferring without yielding
Promise.then()       Before next event loop phase, after nextTick    Standard async code
setImmediate()       Check phase (after poll)                        Recursive async work
setTimeout(fn, 0)    Next timer phase (≥1ms)                         Non-critical deferred work
I/O callbacks        Poll phase                                      Network, file system responses
Worker Threads       Parallel execution                              CPU-bound tasks

Understanding these phases is the difference between a Node.js server that handles 50,000 req/s and one that falls over at 500. The event loop is simple — but simple doesn’t mean obvious.