Most Node.js developers can recite “the event loop handles async” on command. Ask them whether setTimeout(fn, 0) runs before or after setImmediate(fn) — and whether the answer changes inside an fs.readFile callback — and the explanations get much shakier. This post is the visual, hands-on version of that explanation.
Three interactive scenes, stepped through one frame at a time:
- The core mental model — stack, libuv, microtasks, tasks, nextTick.
- The six phases of the libuv event loop, walked through end to end.
- A prediction quiz to test whether the model stuck.
No hand-waving. Every animation mirrors what Node actually does.
Why bother understanding the event loop
A fuzzy model is enough until one day it isn’t. The real bugs it causes:
- Latency spikes — a synchronous loop that takes 50ms blocks every timer and I/O callback behind it.
- Order-of-execution surprises — assuming `setImmediate` and `setTimeout(fn, 0)` are interchangeable (they aren't — and sometimes one of them is deterministic where the other isn't).
- Loop starvation — a `process.nextTick` chain, or a cascade of `.then()` handlers that keep scheduling more of themselves, can prevent Node from ever advancing to the next phase.
- Broken ordering assumptions across files — the module that schedules a `Promise.resolve().then(work)` is implicitly competing with whatever `queueMicrotask` or `process.nextTick` runs elsewhere.
Before any of this is fixable, the compartments and their priorities have to be crisp.
Scene 1 · The compartments
Node.js is not one thing. It’s a small collection of specialised compartments that pass callbacks between each other. The call stack runs one frame at a time. Async work lives inside libuv. When libuv finishes a piece of work, the callback is handed to one of three queues — nextTick, microtasks, or tasks — and the event loop, acting as a traffic cop, picks the next thing to run.
The simulator below animates that dance. Pick a scenario, press ▶ Play, and watch where each callback actually lives at each moment.
```javascript
console.log("A");
setTimeout(() => {
  console.log("B");
}, 0);
console.log("C");
```

A few rules worth burning in from this scene:
- Everything synchronous runs first. The call stack has to empty before the event loop looks at any queue.
- `process.nextTick` wins over promises. nextTick has its own dedicated queue that drains before the promise microtask queue.
- Microtasks drain completely. The event loop doesn't run one microtask and return to tasks — it drains the entire microtask queue, including microtasks scheduled by other microtasks, before moving on.
- Timers don't fire on time. `setTimeout(fn, 0)` means at least 0ms (Node actually clamps the delay up to 1ms). If the script is busy, the callback waits.
Scene 2 · The six phases
The first scene treats “the event loop” as a single box. Zoom in and it’s actually six phases that Node cycles through, in this fixed order:
```
┌───────────────┐
│ timers        │  setTimeout, setInterval callbacks whose delay has elapsed
├───────────────┤
│ pending       │  some deferred I/O callbacks (e.g. TCP connection errors)
├───────────────┤
│ idle, prepare │  libuv internals — you never interact with these directly
├───────────────┤
│ poll          │  wait for new I/O events, run their callbacks
├───────────────┤
│ check         │  setImmediate callbacks
├───────────────┤
│ close         │  'close' events (sockets, handles)
└───────────────┘
        │
        └─ back to timers
```

After every individual callback — and therefore between every phase transition — Node drains the nextTick queue first, then the promise microtask queue. A single tick can run through this drain cycle many times.
```javascript
setTimeout(() => console.log("timeout"), 0);
setImmediate(() => console.log("immediate"));
```

This is where the setTimeout(0) vs setImmediate puzzle finally makes sense:
- Top-level code — Node starts a new loop iteration. If the 1ms timer has already elapsed by the time the timers phase starts, the timer callback wins. If not, poll → check runs first, and `setImmediate` wins. Non-deterministic by design.
- Inside an I/O callback — the callback is running during the poll phase. The very next phase is check (setImmediate). Timers isn't visited again until the next loop iteration. `setImmediate` is guaranteed to run first.
The deterministic one is the useful guarantee: if you're inside an I/O callback and want something to run before any pending timers fire, reach for setImmediate.
Where each API actually lives
Memorise this table once:
| API | Lives in | Priority |
|---|---|---|
| `process.nextTick(fn)` | nextTick queue | highest — drains between every phase |
| `Promise.resolve().then(fn)` | Promise microtask queue | drains after nextTick, between every phase |
| `queueMicrotask(fn)` | Promise microtask queue | same as above |
| `setTimeout(fn, ms)` / `setInterval` | timers phase | runs once the delay has elapsed |
| `setImmediate(fn)` | check phase | runs in the next check phase |
| `fs.*`, `dns.lookup`, `crypto.pbkdf2`, `zlib.*` | libuv thread pool → poll phase | callback runs in poll |
| `server.on('close', fn)`, `socket.on('close', fn)` | close phase | last phase before wrap-around |
| net/http I/O | libuv non-blocking OS primitives → poll phase | callback runs in poll |
Scene 3 · Predict the output
The only real test of a mental model is whether you can use it to predict what a program does before running it. Seven quizzes, in ascending difficulty. Click the log lines in the order you think they’ll print, then hit Check.
```javascript
console.log("A");
setTimeout(() => console.log("B"), 0);
Promise.resolve().then(() => console.log("C"));
console.log("D");
```

If you nailed "Inside an I/O callback" and "nextTick starvation" on the first try, the model is solid. If not, re-run the scenarios in Scene 1 until the ordering clicks — the reason for every answer is visible step by step in those animations.
Traps worth internalising
A handful of pitfalls that the model above makes obvious, once you have it:
Starvation by microtask cascade
```javascript
function loop() {
  process.nextTick(loop);
}
loop();
setTimeout(() => console.log("I never run"), 0);
```

`process.nextTick` drains before Node advances to the timers phase — and if it keeps re-queueing itself, Node never advances at all. The timer sits in the timers queue forever. The same pattern applies to self-scheduling `queueMicrotask` or recursive `Promise.resolve().then(loop)`. You've built an infinite loop that looks async.
setTimeout(fn, 0) is never 0
Node clamps the minimum delay to 1ms (a zero delay becomes 1ms before it ever reaches libuv). So setTimeout(fn, 0) is really setTimeout(fn, 1). And even that 1ms is a minimum: if the poll phase is still handling I/O when the timer expires, the timer waits until the loop cycles back to the timers phase.
```javascript
const start = Date.now();
setTimeout(() => console.log(Date.now() - start), 0);
// Blocking sync work:
while (Date.now() - start < 50) {} // 50ms of CPU
// prints ~50, not 0 or 1
```

await is just sugar for a microtask
When you write:
```javascript
async function run() {
  doThing();
  await somePromise;
  doOtherThing();
}
```

Node literally runs doThing() synchronously, then schedules doOtherThing() as a microtask continuation of somePromise. Everything after await is on the microtask queue. If the thing before await also kicks off a timer, the timer always loses — microtasks drain first.
Long-running sync code blocks everything
The event loop is cooperative. A 200ms JSON.parse or a tight for loop over 10 million items doesn’t yield — it hogs the CPU and every queue builds up behind it. No timer fires, no I/O callback runs, no microtask drains, until the sync work finishes.
This is why CPU-bound work belongs in a worker thread (node:worker_threads), and why chunking big loops with await new Promise(r => setImmediate(r)) can be a useful pressure-release valve.
A complete mental model, in one paragraph
Every time Node wakes up for a tick, it cycles through six phases — timers, pending, idle/prepare, poll, check, close — in that order. Before moving between any two of those phases, it drains the process.nextTick queue entirely, then the promise microtask queue entirely. Timers run in the timers phase, setImmediate runs in the check phase, and every fs/dns/crypto/zlib callback runs in the poll phase. The call stack is single-threaded, so every queue is blocked while sync code runs. setTimeout(0) is at minimum 1ms and only a lower bound; setImmediate inside an I/O callback is guaranteed to run before any setTimeout(fn, 0) scheduled alongside it.
That’s the whole thing. Everything else is pattern-matching against it.
Further reading
- libuv event loop docs — the authoritative source; the six phases are defined here.
- Node.js guide: event loop, timers and process.nextTick — official, with the canonical `setTimeout` vs `setImmediate` example.
- MDN: queueMicrotask — when to prefer it over `Promise.resolve().then()` (almost always, unless you need the promise chain).
The model above is worth internalising once and then keeping: production latency bugs, flaky tests that depend on ordering, and Promise chains that behave strangely nearly all reduce to this one picture.