Does the existence of such a statement in a given program mean that
the whole program is undefined or that behavior only becomes undefined
once control flow hits this statement?
Neither. The first condition is too strong and the second is too weak.
Object accesses are sometimes sequenced, but the standard describes the behavior of the program outside of time. Danvil already quoted:
> if any such execution contains an undefined operation, this International Standard places no requirement on the implementation executing that program with that input (not even with regard to operations preceding the first undefined operation)
This can be interpreted as:

> If the execution of the program yields undefined behavior, then the whole program has undefined behavior.
So, an unreachable statement with UB doesn't give the program UB. Neither does a statement that is reachable but (because of the values of the inputs) never actually executed. That's why your first condition is too strong.
Now, the compiler cannot in general tell what has UB. So to allow the optimizer to re-order statements with potential UB that would be re-orderable should their behavior be defined, it's necessary to permit UB to "reach back in time" and go wrong prior to the preceding sequence point (or in C++11 terminology, for the UB to affect things that are sequenced before the UB thing). Therefore your second condition is too weak.
A major example of this is when the optimizer relies on strict aliasing. The whole point of the strict aliasing rules is to allow the compiler to re-order operations that could not validly be re-ordered if it were possible that the pointers in question alias the same memory. So if you use illegally aliasing pointers, and UB does occur, then it can easily affect a statement "before" the UB statement. As far as the abstract machine is concerned the UB statement has not been executed yet. As far as the actual object code is concerned, it has been partly or fully executed. But the standard doesn't try to get into detail about what it means for the optimizer to re-order statements, or what the implications of that are for UB. It just gives the implementation license to go wrong as soon as it pleases.
You can think of this as, "UB has a time machine".
Specifically to answer your examples:
- Behavior is only undefined if 3 is read.
- Compilers can and do eliminate code as dead if a basic block contains an operation certain to be undefined. They're permitted to (and, I'm guessing, do) in cases which aren't a basic block but where all branches lead to UB. This example isn't a candidate unless `PrintToConsole(3)` is somehow known to be sure to return. It could throw an exception or whatever.
A similar example to your second is the gcc option `-fdelete-null-pointer-checks`, which can take code like this (I haven't checked this specific example, consider it illustrative of the general idea):

```cpp
void foo(int *p) {
    if (p) *p = 3;
    std::cout << *p << '\n';
}
```
and change it to:
```cpp
*p = 3;
std::cout << "3\n";
```
Why? Because if `p` is null then the code has UB anyway, so the compiler may assume it is not null and optimize accordingly. The Linux kernel tripped over this (https://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2009-1897), essentially because it operates in a mode where dereferencing a null pointer isn't supposed to be UB; it's expected to result in a defined hardware exception that the kernel can handle. When optimization is enabled, gcc requires the use of `-fno-delete-null-pointer-checks` in order to provide that beyond-standard guarantee.
P.S. The practical answer to the question "when does undefined behavior strike?" is "10 minutes before you were planning to leave for the day".