No, it doesn't.
The compilation to CPython bytecode only passes through a small peephole optimizer that is designed to perform basic optimizations (see test_peepholer.py in the test suite for more on these optimizations).
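For reference, the two functions being compared presumably look something like the following; this is a sketch reconstructed from the disassembly shown below, since the actual definitions come from the question:

# Assumed definitions, reconstructed from the bytecode shown below.
def func():
    a = 42      # bind the constant to a local name first...
    return a    # ...then return that local

def func2():
    return 42   # return the constant directly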
To take a look at what's actually going to happen, use dis* to see the instructions generated. For the first function, containing the assignment:
from dis import dis
dis(func)
2 0 LOAD_CONST 1 (42)
2 STORE_FAST 0 (a)
3 4 LOAD_FAST 0 (a)
6 RETURN_VALUE
While, for the second function:
dis(func2)
2 0 LOAD_CONST 1 (42)
2 RETURN_VALUE
Two more (fast) instructions are used in the first: STORE_FAST and LOAD_FAST. These perform a quick store and fetch of the value in the fastlocals array of the current execution frame. Then, in both cases, a RETURN_VALUE is performed. So the second is ever so slightly faster because fewer instructions need to execute.
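If you want to measure the difference yourself, a quick comparison with timeit looks roughly like this (the function bodies are the assumed definitions from above, and the absolute numbers depend entirely on your machine):

from timeit import timeit

# Same assumed definitions as above.
def func():
    a = 42
    return a

def func2():
    return 42

# One million calls each (timeit's default number); func2 should come out
# marginally ahead. Each call prints the total time in seconds.
print(timeit(func))
print(timeit(func2))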
In general, be aware that the CPython compiler is conservative in the optimizations it performs. It isn't, and doesn't try to be, as smart as other compilers (which, in general, also have much more information to work with). Its main design goals, apart from obviously being correct, are to a) keep it simple and b) compile swiftly enough that you don't even notice a compilation phase exists.
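One of the basic optimizations it does perform is constant folding. A minimal sketch (the exact opcodes in the output vary between CPython versions):

from dis import dis

def folded():
    return 2 * 3   # a constant expression

# Depending on the CPython version, the disassembly shows the folded
# constant 6 being loaded or returned directly; there is no run-time
# multiplication instruction.
dis(folded)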
In the end, you shouldn't trouble yourself with small issues like this one. The benefit in speed is tiny, constant, and dwarfed by the overhead introduced by the fact that Python is interpreted.
*dis is a small Python module that disassembles your code; you can use it to see the Python bytecode that the VM will execute.
Note: As also stated in a comment by @Jorn Vernee, this is specific to the CPython implementation of Python. Other implementations might do more aggressive optimizations if they so desire; CPython doesn't.