The current CTU effort erases the boundaries we have between a single translation unit and the whole program, but it isn't going to be powerful enough to be described as a "whole-program" analysis, similarly to how our existing inter-procedural analysis isn't quite "whole translation unit" analysis.
With our static symbolic execution-based approach, we never attempt to understand any significant module of the program "as a whole". Instead, we try to model specific individual functions, and sometimes, depending on numerous unobvious circumstances, when we encounter calls to other functions during such modeling, we allow ourselves to descend into the callee to explore the consequences of the call in the current context. This opens up execution paths that traverse multiple functions, but we always keep in mind that we're still analyzing the program by focusing on a very small part of the code at a time, conducting multiple independent analyses even within a single translation unit, and never assuming an understanding of the program as a whole.
CTU allows us, sometimes, depending on numerous unobvious circumstances, to do the same when we encounter calls to functions whose bodies are defined in a different translation unit, thereby erasing the boundaries and allowing us to focus on more promising execution paths. The current effort is focused on that first step - erasing the boundaries. As far as I know, not much effort has gone into tweaking our heuristics for determining the promising execution paths, but the existing heuristics work pretty well in the new circumstances: a significant improvement in the bugs-per-second metric is observed, together with a considerable skew away from finding deeper bugs within the current translation unit towards finding shallower bugs that require understanding of multiple translation units. But still, and probably even more so, CTU is not whole-program analysis - it's only an effort to erase the artificial boundaries of the translation unit; our static symbolic execution approach would never scale to understanding the program as a whole. Even if that is possible at all, it would require a far more significant effort and more advanced techniques.
So the real question here is: what kind of analysis do you want to perform? Is symbolic execution the right tool for your work? For roughly half of the problems out there, symbolic execution is not even the right tool: if, for instance, you're trying to find a problem that can be identified by an invariant that holds on all paths (dead code, an expression that always has the same value, various check-after-use patterns), then the analyzer wouldn't be of much help, because it never guarantees to explore all paths through the program; it's only good at finding specific paths on which a certain invariant is violated (use-after-failed-check, null dereference, memory leak). Also, symbolic execution of the whole program's source code doesn't scale, while another analysis method might scale well.