Does the location of an import statement affect performance in Python?
When writing Python-based apps (e.g. Django, Flask, etc.), it's often the case that import statements can be found all over the place, often more than once for the same module. For example, you can:
- have the imports at the top of a module;
- place the imports inside the functions where they're actually used;
- end up importing the same module multiple times (e.g. both modules a.py and b.py contain import math).
So, while you can place your import statements anywhere:
- is there a noticeable cost (e.g. memory, performance/speed, etc.) associated with a particular choice? And
- what's the "best practice" for module
imports and why?
2 answers
Summary
The location within a module where an import statement is found by the interpreter is not expected to cause differences in performance such as speed or memory usage. Modules are singleton objects, which means that they're only ever loaded once and will not be re-imported or re-loaded again even if additional import statements are encountered.
Therefore, you should follow the best practice of keeping import statements at the top of the module. That said, how you perform the import, and how you do any subsequent attribute lookups, does have an impact.
Imports and Attribute Look-ups
Suppose you import math and then, every time you need to use the sin(...) function, you have to do math.sin(...). This will generally be slower than from math import sin and then using sin(...) directly because Python has to keep looking up the function name within the module every time an attempt to invoke it is made.
This lookup-penalty applies to everything that gets accessed using the dot . operator and will be particularly noticeable in a loop. It's therefore advisable to at least get a local reference to things you need to use/invoke frequently in performance critical sections.
For example, using the original import math example, right before a critical loop, you could do something like this:
# ... within some function
sin = math.sin  # local reference: look up the attribute once, not on every iteration
for i in range(0, REALLY_BIG_NUMBER):
    x = sin(i)  # faster than: x = math.sin(i)
# ...
This is a trivial example, but note that something similar can happen with methods on other objects (e.g. lists, dictionaries, etc) because methods are still attributes that have to be looked up. (Remember, it's everything that requires using the dot . operator.)
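To illustrate, hoisting a bound method out of a hot loop works the same way as caching math.sin above (a small sketch; results and NUM_ITEMS are illustrative names, not from the original post):

NUM_ITEMS = 1_000_000
results = []

# Bind the method once instead of looking up `results.append`
# on every single iteration.
append = results.append
for i in range(NUM_ITEMS):
    append(i * i)  # equivalent to results.append(i * i), minus the lookup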
Benchmark
Here're some benchmarks with 2 different CPUs.
This one is from an Intel Core i9 (8-CPUs: 4-Core + HT) I bought back in 2010:
>>> # with lookup
>>> timeit('for i in range(0, 10000): x = math.sin(i)', setup='import math', number=50000)
89.7203312900001
>>> # without lookup
>>> timeit('for i in range(0, 10000): x = sin(i)', setup='from math import sin', number=50000)
78.27029322999988
And the same tests repeated on an AMD Ryzen 9 3900X (24-CPUs: 12-Core + SMT) I bought earlier this year:
>>> # with lookup
>>> timeit('for i in range(0, 10000): x = math.sin(i)', setup='import math', number=50000)
37.06144698499884
>>> # without lookup
>>> timeit('for i in range(0, 10000): x = sin(i)', setup='from math import sin', number=50000)
26.76371130500047
There's a 10+ second difference in the look-up vs no look-up cases for both CPUs.
Note that the difference depends on how much time the program spends running this code, which is why the "performance critical section" qualifier is so important. The fact is that, for most (not all) other cases, the benchmarks above can be safely ignored because the actual impact of more sporadic usage will be negligible.
Where to Import and Why
The import statements should be kept at the top of the module, as it's normally done. Straying away from that pattern for no good reason is just going to make the code more difficult to go through. For example, module dependencies will be more difficult to find because import statements will be scattered throughout the code instead of being in a single easily-seen location. (You could say dependencies are "hidden".)
It may also make a module less reliable for clients and more error-prone for its own developers, because it's easier to forget about dependencies. As a trivial example, suppose you have this in a module:
# ... lots of code above
def fn_j(x: int) -> float:
    import math
    return math.sin(x)
# lots of code below ...
Ok, that works. But then you add:
# ... lots of code above
def fn_z(x: int) -> float:
    # BUG: notice the missing, but required, duplicate `import math` here
    return math.cos(x)
Clients that call fn_j will be fine, but calling fn_z will run into a NameError: name 'math' is not defined, which is a very avoidable bug and no one wants that.
Ok ...
But you can catch this in your unit tests!
... I hear you think. Yes, you can, but that's beside the point.
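For completeness, the conventional fix is simply the single top-of-module import, as in the original example:

import math  # one import at the top: every function below can rely on it

def fn_j(x: int) -> float:
    return math.sin(x)

def fn_z(x: int) -> float:
    return math.cos(x)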
Import performance
Module import is cached by default in Python. Loaded modules are stored in the sys.modules dictionary, keyed by (absolute) symbolic name. A line of code like import foo.bar or from .foo import bar is an executable statement, not a directive; imports happen when the code is encountered. However, because of caching, this is extremely fast (on the order of 100 nanoseconds on consumer hardware) when the module has already been loaded.
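You can measure the cached case yourself (a quick sketch; the exact figure varies by machine and Python version):

from timeit import timeit

# json is loaded once by the setup; the timed statement then only has to
# find it in sys.modules, which is why each repeat is so cheap.
total = timeit('import json', setup='import json', number=1_000_000)
print(f"{total / 1_000_000 * 1e9:.0f} ns per cached import")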
The "direct" import of a function, class etc. from a module still imports and caches the module. Try this example from a cold start of Python:
import sys
print('json' in sys.modules)  # normally False right after startup
from json import loads
print('json' in sys.modules)  # True: the module object is now cached
Normally, json won't be imported at startup. But it will appear in sys.modules.keys() after the import, even though this import doesn't define the global name json.
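That distinction is easy to demonstrate:

from json import loads

print(loads('[1, 2]'))  # works: the name `loads` is bound in this scope
try:
    json                # the module object itself was never bound here
except NameError as e:
    print(e)            # name 'json' is not defined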
When the module is not yet in the cache, the import pays the full cost of locating, loading, and executing it; but since that cost is paid exactly once, it normally doesn't matter where in the program the first attempt occurs.
Because of these facts, the common practice is to put all imports at the top of the file. While they can be used elsewhere, this rarely accomplishes anything useful, while making it harder to read the code by violating the reader's expectations. Because the top-level code runs first, this ensures that everything needed in the current module is imported before it's used. Anything else would either create unexpected coupling in the code (e.g. x() must be called before y(), in order to import things y needs) or pollute the code with redundant imports.
Import timing
However, it's sometimes useful to delay imports deliberately by placing them inside the single function that needs them (again, if the function is repeatedly called, the import will be cached, so the effect on performance is minimal). If the imported module is very large, it may be desirable to make that import happen only when it's actually needed, rather than at program startup. Better yet, the import cost is then avoided entirely on program runs where the function isn't called at all.
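A minimal sketch of this pattern (pretty_xml is a hypothetical, rarely-called helper):

def pretty_xml(raw: str) -> str:
    # Deferred import: the module is loaded on the first call only;
    # every later call finds it already cached in sys.modules.
    import xml.dom.minidom
    return xml.dom.minidom.parseString(raw).toprettyxml()

print(pretty_xml('<a><b/></a>'))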
Lazy imports
In order to make it easier to defer imports, the next minor version of Python (3.15) is planned to include new "lazy" syntax for imports.
Full details are in the PEP, but in brief:
This is an opt-in mechanism to ensure that the behaviour of old code doesn't change; but using it by default in new code (except where it's known to break module initialization) may turn out to be reasonable.
The feature introduces a new "soft keyword" lazy, meaning that anywhere else in the code lazy is just an identifier name like usual. A new import statement like lazy import json will store a proxy object in sys.modules. Attempting to access any attribute of json will "reify" the module in sys.modules, attempting to replace it with a normally-loaded module object.
A new import statement like lazy from json import dumps, loads will also assign proxies to loads and dumps. Attempting to access an attribute of loads will again reify json, and then re-bind loads to the corresponding attribute of the newly loaded json. This will not, however, re-bind dumps, and vice-versa. Similarly, a later explicit import json will not rebind either loads or dumps.
Python will check that a proper source exists for the lazy-loaded module at the time of the lazy-import statement, but not investigate any further. If the source is invalid, or gets removed before the module is reified, this can cause an ImportError.
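Putting the pieces together, usage would look something like this (a sketch of the proposed syntax only; it will not run on any current Python release):

lazy import json                    # binds a lazy proxy; json's code hasn't run yet
lazy from json import dumps, loads  # proxies for the two names

def save(obj) -> str:
    return dumps(obj)  # first use reifies json and re-binds dumps;
                       # loads stays a proxy until it is itself used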
