Learn how JavaScript engines work. Understand V8’s parsing, JIT compilation, hidden classes, inline caching, and garbage collection.
What happens when you run JavaScript code? How does a browser turn const x = 1 + 2 into something your computer actually executes? When you write a function, what transforms those characters into instructions your CPU understands?
```javascript
function greet(name) {
  return "Hello, " + name + "!"
}

greet("World") // "Hello, World!"
```
Behind every line of JavaScript is a JavaScript engine. It’s the program that reads your code, understands it, and makes it run. The most popular engine is V8, which powers Chrome, Node.js, Deno, and Electron. Understanding how V8 works helps you write faster code and debug performance issues.
What you’ll learn in this guide:
What a JavaScript engine is and what it does
How V8 parses your code and builds an Abstract Syntax Tree
How Ignition (interpreter) and TurboFan (compiler) work together
What JIT compilation is and why it makes JavaScript fast
How hidden classes and inline caching optimize property access
How garbage collection automatically manages memory
Practical tips for writing engine-friendly code
Prerequisite: This guide assumes you’re comfortable with basic JavaScript syntax. Some concepts connect to the Call Stack and Event Loop, so reading those first helps!
A JavaScript engine is a program that executes JavaScript code. It takes the source code you write and converts it into machine code that your computer’s processor can run. Every browser has its own JavaScript engine:
| Browser | Engine         | Also Used By             |
| ------- | -------------- | ------------------------ |
| Chrome  | V8             | Node.js, Deno, Electron  |
| Firefox | SpiderMonkey   | —                        |
| Safari  | JavaScriptCore | Bun                      |
| Edge    | V8 (since 2020) | —                       |
We’ll focus on V8 since it’s the most widely used engine and powers both browser and server-side JavaScript.
All JavaScript engines implement the ECMAScript specification, which defines how the language should work. That’s why JavaScript behaves the same way whether you run it in Chrome, Firefox, or Node.js.
Raw materials (source code): Your JavaScript files come in as text
Quality control (parser): Checks for syntax errors, breaks code into pieces
Blueprint (AST): A structured representation of what needs to be built
Assembly line workers (Ignition): Start working immediately, steady pace
Robotic automation (TurboFan): Takes time to set up, but once running, it’s much faster
Just like a factory might start with manual workers and add robots for repetitive tasks, V8 starts interpreting code immediately, then optimizes the parts that run frequently.
Once V8 has the AST, Ignition takes over. Ignition is V8’s interpreter. It walks through the AST and generates bytecode, a compact representation of your code.
```
Bytecode for add(a, b):

Ldar a1   // Load argument 'a' into accumulator
Add a2    // Add argument 'b' to accumulator
Return    // Return the accumulator value
```
Ignition then executes this bytecode immediately. No waiting around for optimization. Your code starts running right away. While executing, Ignition also collects profiling data:
Which functions are called often?
What types of values does each variable hold?
Which branches of if/else statements are taken?
This profiling data becomes important for the next step.
When Ignition notices a function is called many times (it becomes “hot”), V8 decides it’s worth spending time to optimize it. Enter TurboFan, V8’s optimizing compiler. TurboFan takes the bytecode and profiling data, then generates highly optimized machine code. It makes assumptions based on the profiling data:
```javascript
function add(a, b) {
  return a + b
}

// V8 observes: add() is always called with numbers
add(1, 2)
add(3, 4)
add(5, 6)
// ... called many more times with numbers

// TurboFan thinks: "This always gets numbers. I'll optimize for that!"
// Generates machine code that assumes a and b are numbers
```
The optimized code runs much faster than interpreted bytecode because:
It’s native machine code, not bytecode that needs interpretation
It makes type assumptions (no need to check “is this a number?” every time)
It can inline function calls, eliminate dead code, and apply other optimizations
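To see what inlining means, here is a hand-written sketch. The function names are made up for illustration; the real transformation happens inside TurboFan on machine code, not in your source:

```javascript
function square(n) {
  return n * n
}

function sumOfSquares(a, b) {
  // As written: two calls to square()
  return square(a) + square(b)
}

// Conceptually, the optimizer can rewrite the hot function as if you had written:
function sumOfSquaresInlined(a, b) {
  return (a * a) + (b * b) // no call overhead, and more room for further optimization
}

console.log(sumOfSquares(3, 4))        // 25
console.log(sumOfSquaresInlined(3, 4)) // 25
```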
```javascript
// After 1000 calls with numbers...
add("hello", "world") // Strings! TurboFan assumed numbers!
```
When this happens, V8 performs deoptimization. It throws away the optimized machine code and falls back to Ignition’s bytecode. The function runs slower temporarily, but at least it runs correctly. V8 might try to optimize again later, this time with better information about the actual types being used.
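One practical consequence: if a hot function must handle both numbers and strings, splitting it into two type-stable functions can help each one stay optimized. A minimal sketch (the function names are illustrative):

```javascript
// Instead of one add() that sees numbers AND strings, keep each
// call site monomorphic with respect to types.

function addNumbers(a, b) {
  return a + b // always receives numbers: the number assumption holds
}

function joinStrings(a, b) {
  return a + b // always receives strings: optimized separately
}

for (let i = 0; i < 10000; i++) {
  addNumbers(i, i + 1) // consistent types, no deoptimization trigger
}

console.log(addNumbers(1, 2))             // 3
console.log(joinStrings("Hello, ", "V8")) // "Hello, V8"
```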
You might have heard that JavaScript is an “interpreted language.” That’s only half the story. Modern JavaScript engines use JIT compilation (Just-In-Time), which combines interpretation and compilation.
JavaScript is a dynamic language. Variables can hold any type, objects can change shape, and functions can be redefined at runtime. This makes ahead-of-time compilation difficult because the compiler doesn’t know what types to expect.
```javascript
function process(x) {
  return x.value * 2
}

// x could be anything!
process({ value: 10 })           // Object with number
process({ value: "hello" })      // Object with string (NaN result)
process({ value: 10, extra: 5 }) // Different shape
```
JIT compilation solves this by:
Starting with interpretation (works for any types)
Observing what types actually appear at runtime
Compiling optimized code based on real observations
Falling back to interpretation if observations were wrong
The “warm-up” period: When you first run JavaScript code, it’s slower because it’s being interpreted. After functions run many times, they get optimized and become faster. This is why benchmarks often include a “warm-up” phase.
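A rough sketch of a warm-up phase in plain JavaScript (`sumTo` is a made-up example function; actual timings depend entirely on your machine and V8 version, so no specific numbers are claimed):

```javascript
function sumTo(n) {
  let total = 0
  for (let i = 1; i <= n; i++) total += i
  return total
}

// Warm-up phase: run the function many times so the engine marks it "hot"
for (let i = 0; i < 10000; i++) sumTo(100)

// Measured phase: by now sumTo has likely been optimized
const start = Date.now()
const result = sumTo(1_000_000)
console.log(result, `took ${Date.now() - start}ms`)
```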
Hidden classes (called “Maps” in V8, “Shapes” in other engines) are internal data structures that V8 uses to track object shapes. They let V8 know exactly where to find properties like obj.x without searching through every property name. Why does V8 need them? JavaScript objects are dynamic. You can add or remove properties at any time. This flexibility creates a problem: how does V8 efficiently access obj.x if objects can have any shape?
V8 assigns a hidden class to every object. Objects with the same properties in the same order share the same hidden class.
```javascript
const point1 = { x: 1, y: 2 }
const point2 = { x: 5, y: 10 }

// point1 and point2 have the SAME hidden class!
// V8 knows: "For objects with this hidden class, 'x' is at offset 0, 'y' is at offset 1"
```
Inline Caching (IC) is an optimization where V8 remembers where it found a property and reuses that information on subsequent calls. Instead of looking up property locations every time, V8 caches: “For this hidden class, property X is at memory offset Y.” This optimization is possible because of hidden classes. When V8 knows an object’s shape, it can cache the exact memory location of each property.
```javascript
function getX(obj) {
  return obj.x // V8 caches: "For HC1, x is at offset 0"
}

const p1 = { x: 1, y: 2 }
const p2 = { x: 5, y: 10 }

getX(p1) // First call: look up x, cache the location
getX(p2) // Second call: same hidden class! Use cached location
getX(p1) // Third call: cache hit again!
```
The first time getX runs, V8 does the full property lookup. But it caches the result: “For objects with hidden class HC1, property ‘x’ is at memory offset 0.” Subsequent calls with the same hidden class skip the lookup entirely.
The inline cache can be in different states depending on how many different hidden classes it encounters:
Monomorphic (Fastest)
The function always sees objects with the same hidden class.
```javascript
function getX(obj) {
  return obj.x
}

// All objects have the same shape
getX({ x: 1, y: 2 })
getX({ x: 3, y: 4 })
getX({ x: 5, y: 6 })

// IC: "Always HC1, x at offset 0" - ONE entry, super fast!
```
Performance: Excellent. Single comparison, direct memory access.
Polymorphic (Still Good)
The function sees a few different hidden classes (typically 2-4).
```javascript
function getX(obj) {
  return obj.x
}

getX({ x: 1 })             // Shape A
getX({ x: 2, y: 3 })       // Shape B
getX({ x: 4, y: 5, z: 6 }) // Shape C

// IC: "Could be A, B, or C" - checks a few options
```
Performance: Good. Checks a small list of known shapes.
Megamorphic (Slowest)
The function sees many different hidden classes.
```javascript
function getX(obj) {
  return obj.x
}

// Every call has a completely different shape
getX({ x: 1 })
getX({ x: 2, a: 1 })
getX({ x: 3, b: 2 })
getX({ x: 4, c: 3 })
getX({ x: 5, d: 4 })
// ... many more different shapes

// IC gives up: "Too many shapes, doing full lookup every time"
```
Performance: Poor. Falls back to generic property lookup.
For best performance: Pass objects with consistent shapes to your functions. Factory functions help:
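For example, a small factory guarantees that every object it returns shares one hidden class, keeping downstream functions monomorphic (createPoint and distance are illustrative names):

```javascript
// Factory function: every object gets the same properties in the same
// order, so they all share one hidden class.
function createPoint(x, y) {
  return { x, y } // always x, then y
}

// Because every point has the same shape, this call site stays monomorphic.
function distance(p, q) {
  return Math.hypot(q.x - p.x, q.y - p.y)
}

const a = createPoint(0, 0)
const b = createPoint(3, 4)
console.log(distance(a, b)) // 5
```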
Unlike languages like C where you manually allocate and free memory, JavaScript automatically manages memory through garbage collection (GC). V8’s garbage collector is called Orinoco.
V8’s GC is based on an observation about how programs use memory: most objects die young. Think about it: temporary variables, intermediate calculation results, short-lived callbacks. They’re created, used briefly, and never needed again. Only some objects (your app’s state, cached data) live for a long time. V8 exploits this by splitting memory into generations:
```
┌───────────────────────────────────────────────────────────────────────┐
│                            V8 MEMORY HEAP                             │
├───────────────────────────────────────────────────────────────────────┤
│                                                                       │
│   YOUNG GENERATION                      OLD GENERATION                │
│   (Short-lived objects)                 (Long-lived objects)          │
│                                                                       │
│   ┌─────────────────────────┐           ┌─────────────────────────┐   │
│   │ Nursery │ Intermediate  │   ───►    │ Survived multiple GCs   │   │
│   │         │               │ survives  │                         │   │
│   │ New     │ Survived      │           │ App state, caches,      │   │
│   │ objects │ one GC        │           │ long-lived data         │   │
│   └─────────────────────────┘           └─────────────────────────┘   │
│                                                                       │
│   Minor GC (Scavenger)                  Major GC (Mark-Compact)       │
│   • Very fast                           • Slower but thorough         │
│   • Runs frequently                     • Runs less often             │
│   • Only scans young gen                • Scans entire heap           │
│                                                                       │
└───────────────────────────────────────────────────────────────────────┘
```
Give objects the same shape by adding properties in the same order:
```javascript
// ✓ Good: Consistent shape
function createUser(name, age) {
  return { name, age } // Always name, then age
}

// ❌ Bad: Inconsistent shapes
function createUser(name, age) {
  const user = {}
  if (name) user.name = name // Sometimes name first
  if (age) user.age = age    // Sometimes age first
  return user
}
```
Using delete changes an object’s hidden class and can cause deoptimization:
```javascript
// ❌ Bad: Using delete
const user = { name: "Alice", age: 30, temp: true }
delete user.temp // Changes hidden class!

// ✓ Good: Set to undefined or use a different structure
const user2 = { name: "Alice", age: 30, temp: true }
user2.temp = undefined // Hidden class stays the same
```
Setting a property to undefined keeps the property on the object (it just has no value). If you need to truly remove properties frequently, consider using a Map instead of a plain object.
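A quick sketch of the Map alternative (the key names are illustrative):

```javascript
// A Map supports frequent additions and removals without the hidden
// class churn that `delete` causes on plain objects.
const session = new Map([
  ["name", "Alice"],
  ["age", 30],
  ["temp", true],
])

session.delete("temp") // cheap removal, no object shape involved

console.log(session.get("name")) // "Alice"
console.log(session.has("temp")) // false
```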
'JavaScript is an interpreted language'

Partially true, but misleading. Modern JavaScript engines use JIT compilation. Your code is initially interpreted, but hot functions are compiled to native machine code. V8’s TurboFan generates highly optimized machine code that rivals traditionally compiled languages for computational tasks.
'More code = slower execution'
Not necessarily! V8 performs dead code elimination and function inlining. A well-structured program with more lines can be faster than a “clever” one-liner that’s hard to optimize. Write clear, predictable code and let the engine optimize it.
'I need to manually manage memory in JavaScript'
No! JavaScript has automatic garbage collection. You don’t need to (and can’t) manually free memory. However, you should avoid creating unnecessary object references that prevent garbage collection (memory leaks).
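As a sketch, here is a common leak pattern and one way to avoid it with a WeakMap (all names here are illustrative):

```javascript
// Leak pattern: a long-lived cache keeps every key object reachable
// forever, so the GC can never reclaim them.
const cache = new Map()

function processLeaky(user) {
  cache.set(user, expensiveComputation(user)) // `user` is pinned forever
}

// A WeakMap holds its keys weakly: once nothing else references `user`,
// both the key and its cached value become collectable.
const weakCache = new WeakMap()

function processFriendly(user) {
  if (!weakCache.has(user)) {
    weakCache.set(user, expensiveComputation(user))
  }
  return weakCache.get(user)
}

function expensiveComputation(user) {
  return { summary: `${user.name}:${user.age}` } // placeholder "work"
}

console.log(processFriendly({ name: "Alice", age: 30 }).summary) // "Alice:30"
```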
'eval() is just slow'

It’s worse than slow. eval() prevents many optimizations because V8 can’t predict what code will run. Variables in scope become “unoptimizable” because eval might access them. Avoid eval() and new Function() with dynamic strings.
'typeof null === 'object' is a V8 bug'
No, it’s in the ECMAScript specification. This is a historical quirk from JavaScript’s original implementation that was kept for backwards compatibility. All JavaScript engines must return "object" for typeof null to comply with the spec.
Question 1: What's the difference between Ignition and TurboFan?
Answer: Ignition is V8’s interpreter. It generates bytecode from the AST and executes it immediately. It’s fast to start but doesn’t produce the fastest possible code. While running, it collects profiling data about types and execution patterns.

TurboFan is V8’s optimizing compiler. It takes bytecode and profiling data from Ignition, then generates highly optimized machine code. It takes longer to compile but produces much faster code. TurboFan kicks in for “hot” functions that run many times.
Question 2: Why does property order matter when creating objects?
Answer: V8 assigns hidden classes to objects based on their properties and the order those properties were added. Objects with the same properties in the same order share a hidden class and can use the same optimizations.
```javascript
const a = { x: 1, y: 2 } // Hidden class A
const b = { y: 2, x: 1 } // Hidden class B (different!)
```
Different hidden classes mean different inline cache entries and less optimization sharing. For best performance, always add properties in a consistent order.
Question 3: What triggers deoptimization?
Answer: Deoptimization happens when TurboFan’s assumptions about your code are violated. Common triggers include:
Type changes: A function optimized for numbers receives a string
Hidden class changes: An object’s shape changes (adding/deleting properties)
Unexpected values: undefined where a number was expected
Megamorphic call sites: Too many different object shapes at one location
```javascript
function add(a, b) { return a + b }

// Optimized for numbers
add(1, 2)
add(3, 4)

// Deoptimizes!
add("hello", "world")
```
Question 4: What is inline caching and why does it speed up property access?
Answer: Inline caching (IC) is an optimization where V8 remembers where it found a property for a given hidden class. Instead of doing a full property lookup every time, it caches: “For objects with hidden class X, property ‘foo’ is at memory offset Y.” On subsequent accesses with the same hidden class, V8 skips the lookup and reads directly from the cached offset. This turns an O(n) dictionary lookup into an O(1) memory access.
```javascript
function getX(obj) {
  return obj.x // IC: "For HC1, x is at offset 0"
}

getX({ x: 1, y: 2 }) // Cache miss, full lookup, cache result
getX({ x: 3, y: 4 }) // Cache hit! Direct access to offset 0
```
Question 5: What is the 'generational hypothesis' in garbage collection?
Answer: The generational hypothesis states that most objects die young. Temporary variables, function arguments, intermediate results: they’re created, used briefly, and become garbage quickly. V8 exploits this by dividing the heap into:
Young generation: Where new objects are allocated. Collected frequently with a fast “scavenger” algorithm.
Old generation: Objects that survive multiple young generation collections. Collected less frequently with a slower but thorough algorithm.
This is efficient because checking young objects frequently catches most garbage quickly, while long-lived objects aren’t constantly re-checked.
Question 6: Which code pattern is more engine-friendly?
```javascript
// Pattern A
function createPoint(x, y) {
  return { x: x, y: y }
}

// Pattern B
function createPoint(x, y) {
  const point = {}
  point.x = x
  point.y = y
  return point
}
```
Answer: Pattern A is more engine-friendly. In Pattern A, the object literal { x: x, y: y } creates an object with a known shape immediately. V8 can skip the empty object transition. In Pattern B, the object goes through three hidden class transitions:
{} - empty shape
{ x } - after adding x
{ x, y } - after adding y
Pattern A is faster to create and produces the same final shape more directly. Modern engines optimize object literals with known properties, skipping intermediate shapes.