The Node.js Event Loop — An Interactive Walkthrough

Posted on: April 5, 2026 at 10:00 AM

Most Node.js developers can recite “the event loop handles async” on command. Ask them whether setTimeout(fn, 0) runs before or after setImmediate(fn) — and whether the answer changes inside an fs.readFile callback — and the explanations get much shakier. This post is the visual, hands-on version of that explanation.

Three interactive scenes, stepped through one frame at a time:

  1. The core mental model — stack, libuv, microtasks, tasks, nextTick.
  2. The six phases of the libuv event loop, walked through end to end.
  3. A prediction quiz to test whether the model stuck.

No hand-waving. Every animation mirrors what Node actually does.


Why bother understanding the event loop

A fuzzy model is enough until one day it isn't. The real bugs it causes: production latency that traces back to a starved timer queue, tests that flake because they silently depend on callback ordering, and Promise chains that resolve in an order nobody predicted.

Before any of this is fixable, the compartments and their priorities have to be crisp.

Scene 1 · The compartments

Node.js is not one thing. It’s a small collection of specialised compartments that pass callbacks between each other. The call stack runs one frame at a time. Async work lives inside libuv. When libuv finishes a piece of work, the callback is handed to one of three queues — nextTick, microtasks, or tasks — and the event loop, acting as a traffic cop, picks the next thing to run.

The simulator below animates that dance. Pick a scenario, press ▶ Play, and watch where each callback actually lives at each moment.

The Event Loop, compartment by compartment
Stack · libuv · nextTick · microtasks · tasks — watch code flow between them.

First scenario: two sync logs and one setTimeout(0). The timer callback can only run after the current script has finished — even with a 0ms delay.

```js
console.log("A");
setTimeout(() => {
  console.log("B");
}, 0);
console.log("C");
```

The simulator tracks five compartments: the call stack (LIFO), libuv / Node APIs (in-flight async), the process.nextTick queue (highest priority), the microtask queue (Promises, queueMicrotask), and the task queue (timers, I/O, setImmediate). Pick a scenario and press ▶ Play to walk through it step by step.

A few rules worth burning in from this scene:

  - Synchronous code always runs to completion before any callback gets a turn; even a setTimeout(fn, 0) waits for the current script to finish.
  - The queues have a strict priority order: nextTick drains before microtasks, and both drain before tasks.
  - A callback only leaves libuv for a queue once its async work completes; the event loop then picks the next callback from the queues in priority order.

Scene 2 · The six phases

The first scene treats “the event loop” as a single box. Zoom in and it’s actually six phases that Node cycles through, in this fixed order:

┌───────────────┐
│    timers     │  setTimeout, setInterval callbacks whose delay has elapsed
├───────────────┤
│    pending    │  some deferred I/O callbacks (e.g. TCP connection errors)
├───────────────┤
│ idle, prepare │  libuv internals — you never interact with these directly
├───────────────┤
│     poll      │  wait for new I/O events, run their callbacks
├───────────────┤
│     check     │  setImmediate callbacks
├───────────────┤
│     close     │  'close' events (sockets, handles)
└───────────────┘
        └─ back to timers

After every individual callback — and therefore between every phase transition — Node drains the nextTick queue first, then the promise microtask queue. A single tick can run through this drain cycle many times.
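You can watch that drain happen between two timer callbacks. A sketch, assuming Node 11+ (where nextTick and microtasks drain after each individual timer callback rather than after the whole phase):

```javascript
setTimeout(() => {
  console.log("timer 1");
  // Queued mid-phase: runs before the NEXT timer callback,
  // not after the timers phase ends.
  process.nextTick(() => console.log("nextTick from timer 1"));
}, 0);
setTimeout(() => console.log("timer 2"), 0);

// prints: timer 1, nextTick from timer 1, timer 2
```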

The six phases of the libuv event loop
One full tick, phase by phase. nextTick and microtasks drain between every transition.

First scenario: at the top level, the order is non-deterministic: it depends on whether the event loop enters the timers phase before or after the 0→1ms timer has actually elapsed. Node usually logs 'timeout' first, but not always.

```js
setTimeout(() => console.log("timeout"), 0);
setImmediate(() => console.log("immediate"));
```

Alongside the libuv in-flight work and the nextTick and microtask queues, the visualisation shows the six phases as a ring:

  1. timers · setTimeout / setInterval
  2. pending · deferred I/O errors (TCP)
  3. idle, prepare · libuv internals
  4. poll · new I/O events + callbacks
  5. check · setImmediate
  6. close · socket.on('close', …) handlers

Between each phase, Node drains nextTick then microtasks before moving on. Pick a scenario and step through it. Each box on the ring is one phase of the libuv event loop.

This is where the setTimeout(0) vs setImmediate puzzle finally makes sense:

  - At the top level, the order is non-deterministic: whether 'timeout' or 'immediate' prints first depends on whether the loop reaches the timers phase before or after the clamped 1ms delay has elapsed.
  - Inside an I/O callback, the order is deterministic: the callback runs in the poll phase, check (setImmediate) comes next, and the timers phase only comes back on the next wrap-around, so setImmediate always wins.

The deterministic one is the useful guarantee: if you're inside an I/O callback and want something to run on the next tick before any timers fire, reach for setImmediate.

Where each API actually lives

Memorise this table once:

| API | Lives in | Priority |
| --- | --- | --- |
| process.nextTick(fn) | nextTick queue | highest — drains between every phase |
| Promise.resolve().then(fn) | Promise microtask queue | drains after nextTick, between every phase |
| queueMicrotask(fn) | Promise microtask queue | same as above |
| setTimeout(fn, ms) / setInterval | timers phase | runs once the delay has elapsed |
| setImmediate(fn) | check phase | runs in the next check phase |
| fs.*, dns.lookup, crypto.pbkdf2, zlib.* | libuv thread pool → poll phase | callback runs in poll |
| server.on('close', fn), socket.on('close', fn) | close phase | last phase before wrap-around |
| net/http I/O | libuv non-blocking OS primitives → poll phase | callback runs in poll |

Scene 3 · Predict the output

The only real test of a mental model is whether you can use it to predict what a program does before running it. Seven quizzes, in ascending difficulty. Click the log lines in the order you think they’ll print, then hit Check.

Predict the output
Read the code. Click the log lines in the order you think Node will print them.

Easy: sync, promise, timer.

```js
console.log("A");
setTimeout(() => console.log("B"), 0);
Promise.resolve().then(() => console.log("C"));
console.log("D");
```

(In the interactive version, the available log lines are shown shuffled; click them in your predicted order.)

If you nailed “Inside an I/O callback” and “nextTick starvation” on the first try, the model is solid. If not, re-run the scenarios in Scene 1 until the ordering clicks — the reason for every answer is visible step by step in those animations.

Traps worth internalising

A handful of pitfalls that the model above makes obvious, once you have it:

Starvation by microtask cascade

```js
function loop() {
  process.nextTick(loop);
}
loop();
setTimeout(() => console.log("I never run"), 0);
```

process.nextTick drains before Node advances to the timers phase — and if it keeps re-queueing itself, Node never advances at all. The timer sits in the timers queue forever. The same pattern applies to self-scheduling queueMicrotask or recursive Promise.resolve().then(loop). You’ve built an infinite loop that looks async.
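The escape hatch is to self-schedule with setImmediate instead: each iteration runs in the check phase of a fresh tick, so the timers phase still gets its turn. A bounded sketch (the 1000-spin cap is illustrative):

```javascript
let spins = 0;
function loop() {
  // setImmediate yields a full loop iteration between runs,
  // so timers and I/O are not starved.
  if (++spins < 1000) setImmediate(loop);
}
loop();
setTimeout(() => console.log("I do run"), 0);

// prints: I do run
```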

setTimeout(fn, 0) is never 0

Node clamps the minimum delay to 1ms. So setTimeout(fn, 0) is really setTimeout(fn, 1). And even that 1ms is a minimum: if the poll phase is still handling I/O when the timer expires, the timer waits until the loop cycles back to the timers phase.

```js
const start = Date.now();
setTimeout(() => console.log(Date.now() - start), 0);
// Blocking sync work:
while (Date.now() - start < 50) {} // 50ms of CPU
// prints ~50, not 0 or 1
```

await is just sugar for a microtask

When you write:

```js
async function run() {
  doThing();
  await somePromise;
  doOtherThing();
}
```

Node literally runs doThing() synchronously, then schedules doOtherThing() as a microtask continuation of somePromise. Everything after await is on the microtask queue. If the thing before await also kicks off a timer, the timer always loses — microtasks drain first.
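That desugaring is easy to verify: the continuation after await, being a microtask, always beats a timer queued at the same moment. A runnable sketch:

```javascript
const order = [];

async function run() {
  order.push("before await"); // synchronous part of the async function
  await Promise.resolve();
  order.push("after await"); // scheduled as a microtask continuation
}

setTimeout(() => {
  order.push("timer");
  console.log(order.join(" → "));
}, 0);

run();
order.push("sync end");

// logs: before await → sync end → after await → timer
```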

Long-running sync code blocks everything

The event loop is cooperative. A 200ms JSON.parse or a tight for loop over 10 million items doesn’t yield — it hogs the CPU and every queue builds up behind it. No timer fires, no I/O callback runs, no microtask drains, until the sync work finishes.

This is why CPU-bound work belongs in a worker thread (node:worker_threads), and why chunking big loops with await new Promise(r => setImmediate(r)) can be a useful pressure-release valve.
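A sketch of that pressure-release valve. The function name, workFn callback, and chunk size are all illustrative choices:

```javascript
// Process a large array without monopolising the event loop:
// run one chunk of sync work, then yield until the next check phase.
async function processAll(items, workFn, chunkSize = 10_000) {
  for (let i = 0; i < items.length; i += chunkSize) {
    const end = Math.min(i + chunkSize, items.length);
    for (let j = i; j < end; j++) workFn(items[j]);
    // Give timers, I/O and microtasks a chance to run between chunks.
    await new Promise((resolve) => setImmediate(resolve));
  }
}
```

Total throughput drops slightly because each yield costs a loop iteration, but timers and I/O callbacks interleave between chunks instead of queuing up behind one long sync run.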

A complete mental model, in one paragraph

Every time Node wakes up for a tick, it cycles through six phases — timers, pending, idle/prepare, poll, check, close — in that order. Before moving between any two of those phases, it drains the process.nextTick queue entirely, then the promise microtask queue entirely. Timers run in the timers phase, setImmediate runs in the check phase, and every fs/dns/crypto/zlib callback runs in the poll phase. The call stack is single-threaded, so every queue is blocked while sync code runs. setTimeout(0) is at minimum 1ms and only a lower bound; setImmediate inside an I/O callback is guaranteed to run before any setTimeout(fn, 0) scheduled alongside it.

That’s the whole thing. Everything else is pattern-matching against it.

Further reading

The model above is worth internalising once and then keeping: production latency bugs, flaky tests that depend on ordering, and Promise chains that behave strangely nearly all reduce to this one picture.