Catalogue of CPython opcodes: LOAD/STORE variants, binary ops, comparison, jumps, and call instructions.
Bytecode instructions are the operations executed by the CPython evaluation loop. They are the compact, interpreter-level form of Python code after parsing, AST construction, symbol analysis, and compilation.
A Python function such as:
def add(a, b):
    return a + b

does not execute as source text. CPython compiles it into a code object. That code object contains an instruction stream.
You can inspect that stream with dis:
import dis
def add(a, b):
    return a + b

dis.dis(add)

The output depends on the Python version, but it usually shows instructions like:
LOAD_FAST
LOAD_FAST
BINARY_OP
RETURN_VALUE

Those instructions are the vocabulary of the CPython virtual machine.
30.1 What a Bytecode Instruction Is
A bytecode instruction tells the interpreter to perform one small operation.
Examples:
load a local variable
load a constant
store into a local variable
perform binary addition
call a function
jump to another instruction
return from a frame
raise an exception
build a list
load an attribute

An instruction normally has two parts:
opcode
operandThe opcode says what to do.
The operand supplies a small integer argument, when needed.
For example:
LOAD_FAST 0

means: load fast local variable at slot 0.

And:

LOAD_CONST 1

means: load constant at index 1 in the code object's constants table.

Some instructions have no meaningful operand. Some have operands whose meaning depends entirely on the opcode.
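The opcode/operand split is directly visible through the dis module. The exact opcode names depend on the running Python version, so this sketch only prints whatever the interpreter actually uses:

```python
import dis

def add(a, b):
    return a + b

# Each Instruction carries the opcode name (opname), the raw integer
# operand (arg), and a human-readable interpretation (argrepr).
for ins in dis.get_instructions(add):
    print(ins.opname, ins.arg, ins.argrepr)
```

On most versions this shows LOAD_FAST instructions with small slot indexes as operands.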
30.2 Bytecode Lives in Code Objects
Bytecode belongs to a code object.
A function object points to a code object:
def f(x):
    return x + 1

print(f.__code__)

The code object contains the instruction stream and the tables used by instructions.
Important code object data includes:
| Field | Purpose |
|---|---|
| co_code | Bytecode representation exposed to Python |
| co_consts | Constants used by LOAD_CONST |
| co_names | Names used by global, attribute, and import operations |
| co_varnames | Fast local variable names |
| co_freevars | Free variables from outer scopes |
| co_cellvars | Locals captured by inner functions |
| co_stacksize | Maximum value stack depth |
| co_flags | Execution flags |
| co_filename | Source filename |
| co_name | Code object name |
| co_qualname | Qualified name |
| exception table | Exception handler metadata |
| line table | Source position metadata |
The bytecode stream is compact because it does not store full names, constants, or object pointers directly. It stores integer indexes into these tables.
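The tables are ordinary attributes on the code object and can be inspected directly. Note that the exact contents of co_consts vary by version (for example, whether None appears even when unused):

```python
def f(x):
    return x + 10

code = f.__code__
# Constants referenced by LOAD_CONST instructions.
print(code.co_consts)
# Fast local variable names, in slot order.
print(code.co_varnames)
```

co_varnames here is ('x',), and 10 appears somewhere in co_consts.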
30.3 Instructions Refer to Tables
Consider:
def f(x):
    return x + 10

The constant 10 is stored in the code object’s constants table.
The local name x is stored in the local variable table.
The instruction stream refers to them by index.
Conceptually:
co_consts:
[None, 10]
co_varnames:
["x"]
bytecode:
LOAD_FAST 0
LOAD_CONST 1
BINARY_OP +
RETURN_VALUE

This design keeps instructions small.
Instead of storing the string "x" in every local access instruction, CPython stores slot number 0.
Instead of storing the object 10 directly inside the instruction, CPython stores constant index 1.
30.4 Disassembly
The dis module converts bytecode into a readable form.
import dis
def f(a, b):
    c = a + b
    return c

dis.dis(f)

A disassembly usually includes:
source line number
bytecode offset
opcode name
operand
resolved operand meaning
jump target markers
cache entries, depending on options and version

Example shape:
3 0 RESUME 0
2 LOAD_FAST 0 (a)
4 LOAD_FAST 1 (b)
6 BINARY_OP 0 (+)
10 STORE_FAST 2 (c)
4 12 LOAD_FAST 2 (c)
     14 RETURN_VALUE

The exact output changes by Python version. Bytecode is a CPython implementation detail, not a stable external instruction set.
30.5 Instruction Offsets
Each instruction has a position in the bytecode stream.
Disassembly shows this as an offset:
0 RESUME
2 LOAD_FAST
4 LOAD_FAST
6 BINARY_OP
10 STORE_FAST

Offsets are used by:
jump instructions
exception tables
line number mapping
tracebacks
debuggers
profilers
coverage tools

A jump instruction may target another offset.
Conceptually:

POP_JUMP_IF_FALSE 24

means: if condition is false, continue execution at bytecode offset 24.

Modern CPython details differ, but the idea is the same: bytecode is a sequence of addressable instructions.
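The offsets can be read programmatically. Each Instruction object from dis carries its byte offset into the instruction stream:

```python
import dis

def f(a, b):
    return a + b

# Every instruction has a byte offset into co_code; jump targets
# are expressed in terms of these positions.
for ins in dis.get_instructions(f):
    print(ins.offset, ins.opname)
```

The offsets start at 0 and increase monotonically through the stream.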
30.6 Stack Effects
Every instruction has a stack effect.
The stack effect describes how the instruction changes the frame’s value stack.
| Instruction | Stack before | Stack after |
|---|---|---|
| LOAD_CONST | [] | [constant] |
| LOAD_FAST | [] | [local] |
| STORE_FAST | [value] | [] |
| BINARY_OP | [left, right] | [result] |
| LOAD_ATTR | [object] | [attribute] |
| CALL | [callable, args...] | [result] |
| RETURN_VALUE | [value] | exits frame |
The compiler must emit bytecode with valid stack discipline. At every instruction, the stack must contain the values that instruction expects.
At control-flow merge points, all incoming paths must produce compatible stack shapes.
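CPython exposes per-opcode stack effects through dis.stack_effect, which the compiler's own stack-depth computation relies on. A minimal check:

```python
import dis

# dis.stack_effect reports the net change in stack depth for an opcode.
# POP_TOP removes one value, so its effect is -1 in every CPython 3.x
# version that defines it.
effect = dis.stack_effect(dis.opmap["POP_TOP"])
print(effect)  # -1
```

Opcodes that take an operand require the oparg as a second argument to dis.stack_effect.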
30.7 Basic Load Instructions
Load instructions push values onto the stack.
Common load categories:
| Instruction | Meaning |
|---|---|
| LOAD_CONST | Push a constant from co_consts |
| LOAD_FAST | Push a local variable from a fast local slot |
| LOAD_GLOBAL | Push a global or builtin name |
| LOAD_DEREF | Push a closure cell value |
| LOAD_ATTR | Push an attribute from an object |
| LOAD_NAME | Push a name using class or dynamic namespace lookup |
Example:
def f(x):
    return x + 1

Conceptual bytecode:
LOAD_FAST x
LOAD_CONST 1
BINARY_OP +
RETURN_VALUE

LOAD_FAST reads a slot from the current frame. LOAD_CONST reads from the code object.
30.8 Store Instructions
Store instructions consume values from the stack and place them somewhere.
| Instruction | Meaning |
|---|---|
| STORE_FAST | Store into a fast local slot |
| STORE_GLOBAL | Store into global namespace |
| STORE_NAME | Store into current local namespace |
| STORE_ATTR | Store into object attribute |
| STORE_SUBSCR | Store into subscription target |
| STORE_DEREF | Store into closure cell |
| DELETE_FAST | Delete a local slot |
| DELETE_ATTR | Delete an attribute |
| DELETE_SUBSCR | Delete an item |
Example:
def f(a, b):
    c = a + b
    return c

Conceptual stack behavior:
LOAD_FAST a stack: [a]
LOAD_FAST b stack: [a, b]
BINARY_OP + stack: [result]
STORE_FAST c stack: []

STORE_FAST consumes the result. It does not leave the assigned value on the stack unless the compiler explicitly duplicates it for another use.
30.9 Local Variable Instructions
Fast local instructions use indexes, not names.
For:
def f(a, b):
    c = a + b
    return c

The compiler assigns local slots:
| Slot | Name |
|---|---|
| 0 | a |
| 1 | b |
| 2 | c |
The instruction:
LOAD_FAST 1

means: push local slot 1, which is b.

This is much cheaper than dictionary lookup. The frame stores fast locals in an array-like layout.
30.10 Constant Instructions
Constants are stored in co_consts.
Example:
def f():
    return 123

Conceptual code object:
co_consts:
[None, 123]
bytecode:
LOAD_CONST 1
RETURN_VALUE

The constant table can contain:
None
numbers
strings
bytes
tuples of constants
frozensets of constants
nested code objects

Nested functions and comprehensions often appear as nested code objects inside co_consts.
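Nested code objects are easy to spot. A function defined inside another function leaves its compiled body in the enclosing co_consts:

```python
from types import CodeType

def outer():
    def inner():
        return 1
    return inner

# The nested function's compiled body is itself a constant in the
# enclosing code object's co_consts.
nested = [c for c in outer.__code__.co_consts if isinstance(c, CodeType)]
print([c.co_name for c in nested])  # ['inner']
```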
30.11 Name Instructions
Name lookup depends on scope.
At module level:
x = 10
print(x)

names live in the module dictionary.
Inside a function:
def f():
    return x

if x is not local, CPython performs global or builtin lookup.
Important instructions:
| Instruction | Typical use |
|---|---|
| LOAD_GLOBAL | Function global and builtin lookup |
| LOAD_NAME | Class body and dynamic namespace lookup |
| STORE_NAME | Module or class namespace assignment |
| LOAD_FAST | Function local slot lookup |
| LOAD_DEREF | Closure variable lookup |
The compiler chooses the instruction based on symbol table analysis.
30.12 Attribute Instructions
Attribute access uses instructions such as LOAD_ATTR and STORE_ATTR.
value = obj.x

Conceptually:
LOAD_FAST obj
LOAD_ATTR x
STORE_FAST value

Attribute lookup may involve:
object type
descriptor protocol
instance dictionary
slots
class dictionary
base classes
custom __getattribute__
custom __getattr__
inline caches

But the bytecode-level stack effect is simple:
LOAD_ATTR:
input: [object]
output: [attribute_value]

For assignment:

obj.x = value

Conceptually:
LOAD_FAST value
LOAD_FAST obj
STORE_ATTR x

The exact operand order is defined by the opcode implementation.
30.13 Subscript Instructions
Subscription uses stack operands.
value = xs[i]

Conceptual bytecode:
LOAD_FAST xs
LOAD_FAST i
BINARY_SUBSCR
STORE_FAST value

The BINARY_SUBSCR instruction consumes the container and key, then pushes the result.
For assignment:
xs[i] = value

Conceptually:
LOAD_FAST value
LOAD_FAST xs
LOAD_FAST i
STORE_SUBSCR

This calls the object’s item assignment protocol.
For deletion:
del xs[i]

the compiler emits deletion-oriented subscription bytecode.
30.14 Binary Operations
Modern CPython uses BINARY_OP for many binary operations, with an operand describing the specific operation.
Python expression:
a + b

Conceptual bytecode:
LOAD_FAST a
LOAD_FAST b
BINARY_OP +

Other operations include:
+
-
*
@
/
%
//
**
<<
>>
&
|
^

In-place variants also exist conceptually:

x += y

This may use an in-place operation form or an operand variant that attempts in-place semantics.
Binary operations are dynamic. + may mean integer addition, float addition, string concatenation, list concatenation, or user-defined __add__.
30.15 Unary Operations
Unary operations consume one stack value and push one result.
Examples:
-x
+x
~x
not x

Conceptual bytecode:
LOAD_FAST x
UNARY_NEGATIVE

Stack effect:

input: [x]
output: [-x]

Unary operations still use Python object semantics. -x may call x.__neg__() for user-defined objects.
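The dispatch to __neg__ is ordinary Python behavior and easy to demonstrate. The Meters class here is a made-up illustration:

```python
class Meters:
    def __init__(self, value):
        self.value = value

    def __neg__(self):
        # Negating a Meters instance calls this method, producing a
        # new object rather than doing numeric negation directly.
        return Meters(-self.value)

m = Meters(5)
print((-m).value)  # -5
```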
30.16 Comparison Instructions
Comparisons consume operands and push a result.
a < b

Conceptually:
LOAD_FAST a
LOAD_FAST b
COMPARE_OP <

Comparison operations include:
<
<=
==
!=
>
>=
in
not in
is
is not
exception matching

Comparisons may call user code:
class X:
    def __lt__(self, other):
        return True

So even a comparison instruction can allocate, call Python code, raise an exception, or return non-Boolean objects in some protocol contexts before truth testing.
30.17 Jump Instructions
Jump instructions change the instruction pointer.
They implement:
if statements
while loops
for loops
boolean short-circuiting
conditional expressions
exception flow
pattern matching branches

Example:
def f(x):
    if x:
        return 1
    return 0

Conceptual bytecode:
LOAD_FAST x
POP_JUMP_IF_FALSE else_branch
LOAD_CONST 1
RETURN_VALUE
else_branch:
LOAD_CONST 0
RETURN_VALUE

Some jumps are unconditional. Some test the top stack value. Some preserve the value. Some pop it.
The stack effect of a jump is just as important as its target.
30.18 Loop Instructions
Loops use jump instructions plus iterator-specific instructions.
A while loop:
while cond:
    body()

conceptually compiles to:
loop_start:
evaluate cond
if false, jump loop_end
execute body
jump loop_start
loop_end:

A for loop:

for x in xs:
    body(x)

uses iterator protocol instructions:
LOAD_FAST xs
GET_ITER
loop:
FOR_ITER end
STORE_FAST x
...
JUMP loop
end:

FOR_ITER keeps the iterator on the stack while iteration continues.
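Both iterator opcodes are visible in the disassembly of any for loop, and their names have been stable across Python 3 versions:

```python
import dis

def f(xs):
    total = 0
    for x in xs:
        total += x
    return total

ops = {i.opname for i in dis.get_instructions(f)}
# GET_ITER converts the iterable to an iterator; FOR_ITER drives
# each pass through the loop body.
print("GET_ITER" in ops, "FOR_ITER" in ops)  # True True
```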
30.19 Call Instructions
Calls are among the most performance-critical bytecode operations.
For:
result = f(a, b)

conceptual stack setup:
LOAD_FAST f
LOAD_FAST a
LOAD_FAST b
CALL 2
STORE_FAST result

The call instruction consumes callable and arguments, invokes the call machinery, and pushes the return value.
Calls may target:
Python functions
built-in functions
bound methods
classes
callable instances
C extension functions
coroutines
descriptors

CPython uses optimized call conventions such as vectorcall to reduce temporary tuple and dictionary creation.
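Because the call opcode itself has been renamed across versions (CALL_FUNCTION, then PRECALL/CALL, then CALL), a version-tolerant check looks for any CALL-family opcode rather than one exact name:

```python
import dis

def f(g):
    return g(1, 2)

ops = [i.opname for i in dis.get_instructions(f)]
# Some opcode in the CALL family performs the actual invocation;
# the precise name depends on the CPython version.
print(any("CALL" in name for name in ops))  # True
```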
30.20 Return Instructions
A return instruction exits the current frame.
def f():
    return 42

Conceptual bytecode:
LOAD_CONST 42
RETURN_VALUE

The stack before return:

[42]

RETURN_VALUE consumes the return object and gives it to the caller.
For functions without explicit return:
def f():
    pass

the compiler emits a return of None.
Conceptually:
LOAD_CONST None
RETURN_VALUE

30.21 Raise Instructions
Raising an exception uses bytecode too.
raise ValueError("bad")

Conceptually:
LOAD_GLOBAL ValueError
LOAD_CONST "bad"
CALL 1
RAISE_VARARGS 1

The raise instruction exits normal execution and enters exception propagation.
It must interact with:
thread exception state
tracebacks
exception handlers
finally blocks
context and cause
frame unwinding

A raise instruction usually does not push a normal result.
30.22 Exception Handling Instructions
Exception handling bytecode has changed significantly across Python versions.
Modern CPython uses exception tables associated with code objects rather than older block stack opcodes for many tasks.
Still, the interpreter needs instructions and metadata for:
entering handlers
matching exception types
binding exception variables
reraising
clearing exception state
running finally blocks
handling with-statements

Example:

try:
    risky()
except ValueError as exc:
    recover(exc)

The compiled code must describe:
protected bytecode range
handler target
stack depth to restore
exception matching operation
binding of exc
cleanup of exc

Exception bytecode is delicate because it must preserve Python semantics while cleaning temporary stack values correctly.
30.23 Import Instructions
Imports have dedicated bytecode operations.
import math

Conceptually:
LOAD_CONST level
LOAD_CONST fromlist
IMPORT_NAME math
STORE_NAME math

For:

from math import sqrt

conceptual operations include:
IMPORT_NAME math
IMPORT_FROM sqrt
STORE_NAME sqrt

Import bytecode calls the import machinery. It may execute module code, acquire import locks, load cached bytecode, run package initialization, or raise import errors.
An import statement is executable code, not a static declaration.
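You can watch the import opcodes appear by compiling an import statement yourself; IMPORT_NAME has kept its name across Python 3 versions:

```python
import dis

# Compile a module-level import statement into a code object and
# look for the import opcode in its instruction stream.
code = compile("import math", "<demo>", "exec")
ops = [i.opname for i in dis.get_instructions(code)]
print("IMPORT_NAME" in ops)  # True
```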
30.24 Container Instructions
Container literals use build instructions.
xs = [a, b, c]

Conceptually:
LOAD_FAST a
LOAD_FAST b
LOAD_FAST c
BUILD_LIST 3
STORE_FAST xs

Other build instructions include operations for:
tuples
sets
dicts
slices
strings
lists from comprehensions
maps from key-value pairs

For dictionary literals:

d = {"x": a, "y": b}

the compiler arranges keys and values so a dict-building instruction can consume them.
30.25 Unpacking Instructions
Unpacking instructions decompose iterable values.
a, b = pair

Conceptually:
LOAD_FAST pair
UNPACK_SEQUENCE 2
STORE_FAST a
STORE_FAST b

Extended unpacking:

a, *middle, z = values

uses an unpacking instruction capable of producing a list for the starred target.
Unpacking instructions must enforce correct arity. If the iterable has too few or too many values, CPython raises an error.
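The arity check is enforced at runtime, not at compile time:

```python
pair = (1, 2)
a, b = pair            # arity matches, unpacking succeeds

try:
    a, b = (1, 2, 3)   # three values for two targets
except ValueError as exc:
    # CPython raises ValueError with a message such as
    # "too many values to unpack (expected 2)".
    print(exc)
```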
30.26 Closure Instructions
Nested functions require closure-related instructions.
Example:
def outer():
    x = 10
    def inner():
        return x
    return inner

The compiler must arrange for x to live in a cell object.
Relevant operations include:
make cell variables
load closure cells
load dereferenced values
store dereferenced values
build function with closure

Conceptually:
outer frame:
x stored in cell
inner function:
closure points to same cell
inner bytecode:
LOAD_DEREF x

This is how inner can access x after outer returns.
30.27 Function Creation Instructions
A def statement creates a function object at runtime.
def f(x):
    return x + 1

At module execution time, CPython does not simply register a static function. It executes bytecode that builds a function object from a code object.
Conceptually:
LOAD_CONST <code object f>
MAKE_FUNCTION
STORE_NAME f

If the function has defaults, annotations, keyword defaults, or closure cells, those are loaded and attached during function creation.
This explains why def is executable:
if debug:
    def f():
        return "debug"
else:
    def f():
        return "normal"

Only one branch creates and binds f.
30.28 Class Creation Instructions
A class statement also executes bytecode.
class C:
    x = 1

Conceptual high-level behavior:
load build_class
load class body code object
make function for class body
load class name
call build_class
store class object

The class body itself has a code object. It runs in a prepared namespace. After it finishes, the metaclass creates the actual class object.
This explains why class bodies can run arbitrary code:
class C:
    print("building class")
    x = compute()

The evaluation loop executes that body like other code.
30.29 Generator and Coroutine Instructions
Generators and coroutines need instructions for suspension and resumption.
Example:
def gen():
    yield 1
    yield 2

A yield instruction returns a value to the caller while preserving frame state.
Conceptually:
LOAD_CONST 1
YIELD_VALUE
resume later
LOAD_CONST 2
YIELD_VALUE
resume later
LOAD_CONST None
RETURN_VALUE

Coroutines use related mechanisms for await.
async def f():
    result = await g()
    return result

The bytecode must support:
creating coroutine objects
awaiting awaitables
suspending execution
resuming with values
resuming with exceptions
returning final result

30.30 Pattern Matching Instructions
Structural pattern matching compiles to specialized tests, unpacking, attribute access, mapping checks, sequence checks, and branches.
Example:
match value:
    case [x, y]:
        return x + y
    case _:
        return 0

The compiler emits bytecode that roughly does:
load subject
check sequence pattern
check length
unpack values
bind x and y
execute body
otherwise try next case

Pattern matching bytecode must preserve Python semantics around failed matches. Bindings from failed alternatives must not leak incorrectly into successful later cases.
30.31 Cache Instructions
Modern CPython includes inline cache entries associated with some bytecode instructions.
In disassembly, you may see cache-related entries depending on options and version.
These cache entries support specialization for operations such as:
attribute access
global lookup
binary operations
method calls
function calls
subscript operations

Cache entries are not normal Python operations. They are interpreter metadata.
The stack effect of the logical instruction remains the important semantic part.
For example:
LOAD_ATTR name
CACHE
CACHE

LOAD_ATTR still consumes an object and pushes an attribute value. The cache entries help make that faster.
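On CPython 3.11 and later, dis can display the cache entries explicitly via the show_caches parameter; earlier versions have no inline caches to show:

```python
import dis
import sys

def f(obj):
    return obj.attr

# show_caches was added in CPython 3.11, alongside inline caches
# themselves. On older versions, plain dis.dis(f) is all there is.
if sys.version_info >= (3, 11):
    dis.dis(f, show_caches=True)
else:
    dis.dis(f)
```

With show_caches=True, CACHE entries appear after specializable instructions such as LOAD_ATTR.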
30.32 Adaptive Instructions
Adaptive bytecode allows CPython to specialize hot operations.
A generic operation may be rewritten or interpreted as a specialized form after repeated execution.
Example conceptual flow:
BINARY_OP
observes int + int repeatedly
↓
specialized int-add path

The specialized instruction must preserve the same stack contract:

input: [left, right]
output: [result]

If the observed assumptions stop holding, CPython can deoptimize or fall back to the generic operation.
This mechanism gives performance improvements without changing Python-level semantics.
30.33 Pseudo-Instructions
Some instruction names may appear in compiler internals or generated metadata but not as ordinary runtime opcodes in the final bytecode stream.
Pseudo-instructions can help represent:
abstract control-flow operations
exception handling structure
compiler intermediate forms
assembler-level markers

When reading CPython internals, distinguish:
source-level syntax
compiler intermediate instructions
runtime bytecode instructions
inline cache entries
generated metadata

Not every name in opcode-related files behaves like a normal instruction executed by the evaluation loop.
30.34 Instruction Families
Many instructions belong to families.
Examples:
load family
store family
delete family
binary operation family
unary operation family
jump family
call family
import family
closure family
container-build family
exception family

Instruction families help you read disassembly.
When you see LOAD_*, expect a value to be pushed.
When you see STORE_*, expect a value to be consumed.
When you see JUMP_*, expect control flow to change.
When you see CALL, expect callable and argument stack layout to matter.
30.35 Stack-Neutral Instructions
Some instructions do not change the logical Python value stack.
Examples may include:
RESUME
NOP
cache-related entries
some instrumentation markers

These instructions can still matter for execution, tracing, specialization, or interpreter state.
A stack-neutral instruction can affect runtime behavior even if it does not push or pop a Python object.
For example, RESUME marks execution resumption points in modern CPython bytecode.
30.36 Version Differences
CPython bytecode changes across versions.
Changes may include:
new opcodes
removed opcodes
combined opcodes
different call protocol
different exception handling representation
different cache layout
different jump semantics
different disassembly format
specialized instruction changes

This is why bytecode should be treated as version-specific.
Code that depends on exact bytecode should declare which Python version it targets.
Examples of version-sensitive tools:
bytecode transformers
coverage tools
debuggers
decompilers
optimizers
security analyzers
teaching visualizers
profilers

For ordinary Python application code, bytecode details are usually irrelevant. For CPython internals, they are central.
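A concrete version difference: CPython 3.11 replaced the individual binary opcodes (BINARY_ADD and friends) with the unified BINARY_OP. Exactly one of the two names exists in any given interpreter:

```python
import dis
import sys

# Opcode names come and go between releases; dis.opmap reflects
# whatever the running interpreter actually defines.
has_old = "BINARY_ADD" in dis.opmap   # Python <= 3.10
has_new = "BINARY_OP" in dis.opmap    # Python >= 3.11
print(sys.version_info[:2], has_old, has_new)
```

A version-aware bytecode tool would branch on exactly this kind of check.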
30.37 Reading Bytecode by Hand
A useful reading process:
identify locals
identify constants
identify names
track stack effects
mark jumps
mark call sites
mark exception regions
mark return paths

Example:
def f(a, b):
    if a > b:
        return a - b
    return b - a

Conceptual bytecode:
LOAD_FAST a
LOAD_FAST b
COMPARE_OP >
POP_JUMP_IF_FALSE else
LOAD_FAST a
LOAD_FAST b
BINARY_OP -
RETURN_VALUE
else:
LOAD_FAST b
LOAD_FAST a
BINARY_OP -
RETURN_VALUE

Stack tracking:
LOAD_FAST a [a]
LOAD_FAST b [a, b]
COMPARE_OP > [a > b]
POP_JUMP... []

Both branches return directly, so there is no merge after the branch.
30.38 Example: List Comprehension
Source:
def f(xs):
    return [x * 2 for x in xs if x > 0]

Conceptually, CPython creates a nested code object for the comprehension. (CPython 3.12 began inlining comprehensions, so the details vary by version.)
Outer function:
load comprehension code object
make function
load xs
get iterator
call comprehension function
return result

Inner comprehension code:
build empty list
for each x in iterator:
if x > 0:
append x * 2
return list

This explains why comprehensions have their own scope.
The bytecode instruction stream makes this visible because, in versions where comprehensions compile to separate code objects, the nested code object appears in co_consts.
30.39 Example: Closure
Source:
def outer(x):
def inner(y):
return x + y
return innerImportant bytecode concepts:
x becomes a cell variable in outer
x becomes a free variable in inner
outer creates inner with closure data
inner uses LOAD_DEREF to read x

Inspection:
def outer(x):
    def inner(y):
        return x + y
    return inner

print(outer.__code__.co_cellvars)
inner = outer(10)
print(inner.__code__.co_freevars)
print(inner.__closure__)

Bytecode instructions show closure construction and dereference access.
30.40 Example: Try Except
Source:
def f(x):
    try:
        return 10 / x
    except ZeroDivisionError:
        return 0

Conceptual structure:
protected region:
LOAD_CONST 10
LOAD_FAST x
BINARY_OP /
RETURN_VALUE
handler:
check exception matches ZeroDivisionError
LOAD_CONST 0
RETURN_VALUE

The code object contains exception table metadata describing which bytecode ranges are protected and where handlers begin.
When reading this bytecode, inspect both:
instruction stream
exception table

The exception table is part of the executable structure.
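The raw table is exposed on the code object in CPython 3.11 and later, where exception handling moved out of the instruction stream:

```python
import sys

def f(x):
    try:
        return 10 / x
    except ZeroDivisionError:
        return 0

# CPython 3.11 moved exception handling into a per-code-object
# table, exposed as raw bytes via co_exceptiontable.
if sys.version_info >= (3, 11):
    print(len(f.__code__.co_exceptiontable) > 0)  # True
else:
    print("no co_exceptiontable before 3.11")
```

dis.dis renders a decoded "ExceptionTable" section for such functions on supporting versions.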
30.41 Bytecode and Source Lines
Bytecode instructions are mapped back to source positions.
This mapping supports:
tracebacks
debuggers
coverage tools
profilers
line tracing
error messages

A single source line can compile to many bytecode instructions.

x = f(a) + g(b)

Conceptual bytecode includes:
load f
load a
call f
load g
load b
call g
binary add
store x

Source location metadata lets CPython report more precise positions for errors and trace events.
30.42 Bytecode and Tracebacks
When an exception occurs, the traceback records the frame and the relevant instruction/source location.
Example:
def f(x):
    return 10 / x

f(0)

The failing operation is the division bytecode. CPython uses the frame’s code object and instruction position to report the source line.
A traceback is therefore connected to:
frame
code object
instruction offset
source location table
exception state

30.43 Bytecode and Optimization
CPython performs some compile-time and runtime optimizations.
Compile-time examples may include:
constant handling
dead code handling in simple cases
jump simplification
stack size computation
scope resolution
literal container optimizations

Runtime examples include:
adaptive specialization
inline caches
optimized call paths
fast locals
specialized attribute access
specialized global lookup

The bytecode instruction stream sits between the compiler and runtime optimizer. It is both the compiler’s output and the interpreter’s input.
30.44 Bytecode Is Not a Stable API
CPython bytecode is not designed as a stable public virtual machine target.
It can change between releases to support:
better performance
simpler interpreter implementation
new language features
better debugging information
new exception machinery
new call conventions
specialization
free-threading work
JIT experiments

This does not mean bytecode is unusable. It means bytecode-level tools must be version-aware.
For stable program behavior, rely on Python language semantics. For CPython internals work, study the bytecode for the exact CPython version.
30.45 A Minimal Bytecode Interpreter
A toy interpreter helps show the idea.
LOAD_CONST = "LOAD_CONST"
LOAD_FAST = "LOAD_FAST"
STORE_FAST = "STORE_FAST"
ADD = "ADD"
RETURN = "RETURN"
def run(code, consts, locals_):
    stack = []
    for op, arg in code:
        if op == LOAD_CONST:
            stack.append(consts[arg])
        elif op == LOAD_FAST:
            stack.append(locals_[arg])
        elif op == STORE_FAST:
            locals_[arg] = stack.pop()
        elif op == ADD:
            right = stack.pop()
            left = stack.pop()
            stack.append(left + right)
        elif op == RETURN:
            return stack.pop()
    raise RuntimeError("missing RETURN")

A tiny program:
code = [
    (LOAD_FAST, "a"),
    (LOAD_FAST, "b"),
    (ADD, None),
    (STORE_FAST, "c"),
    (LOAD_FAST, "c"),
    (RETURN, None),
]
print(run(code, [], {"a": 2, "b": 3}))

Output:

5

This toy leaves out most of CPython:
objects
reference counts
exceptions
calls
descriptors
classes
imports
closures
generators
coroutines
specialization
inline caches
tracing
thread state

But it captures the core idea: bytecode instructions operate on a frame state and a value stack.
30.46 Common Misunderstandings
| Misunderstanding | Correct model |
|---|---|
| Bytecode is Python source in another syntax | Bytecode is an interpreter instruction stream |
| Bytecode is portable across all Python implementations | CPython bytecode is CPython-specific |
| Bytecode is stable across versions | Bytecode changes between CPython versions |
| Instructions contain full variable names | Many instructions contain indexes into code object tables |
| dis output is the full runtime story | Runtime also uses frames, caches, exception tables, and specialization |
| One source line means one instruction | One line often compiles to many instructions |
| Bytecode always maps directly to syntax | Some bytecode exists for runtime protocol machinery |
| Inline caches are Python operations | They are interpreter optimization metadata |
30.47 Reading Strategy
To understand bytecode instructions, work from small examples.
Start with:
def f(a, b):
    return a + b

Then inspect:
import dis
dis.dis(f)

Then check:
print(f.__code__.co_consts)
print(f.__code__.co_varnames)
print(f.__code__.co_names)
print(f.__code__.co_stacksize)

For each instruction, ask:
What does it consume from the stack?
What does it push?
Which code object table does it reference?
Can it jump?
Can it raise?
Can it call Python code?
Can specialization change its fast path?

This method scales from simple arithmetic to functions, closures, imports, classes, exceptions, and comprehensions.
30.48 Chapter Summary
Bytecode instructions are CPython’s executable instruction format. They live inside code objects and are executed by frames through the evaluation loop.
The core model is:
code object holds bytecode and metadata
frame holds execution state
bytecode instruction mutates frame state
evaluation loop dispatches instructions
stack effects define operand flow

Instructions load values, store values, call functions, perform operations, build containers, branch, handle exceptions, create functions and classes, import modules, suspend generators, and return results.
Bytecode is compact, dynamic, stack-based, and version-specific. Understanding it gives you a direct view of how Python source becomes CPython execution.