[analyzer] Do we have implicit assumptions on extents of memory regions?

Is it reasonable to assume in the analyzer that a symbol representing the extent
of an analyzed memory region cannot be greater than SIZE_MAX/2?

If so, why does the constraint manager not know about this fact?
If not, why does ProgramState::assumeInBound still heavily exploit this fact
in its implementation?
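
For context, here is the trick in question, paraphrased in plain C (simplified
from ProgramState.cpp; the real code builds SVals in the array index type, and
I am assuming two's-complement wrapping here):

    #include <stdbool.h>
    #include <stdint.h>

    /* assumeInBound folds "0 <= idx && idx < bound" into one comparison
     * by rotating both sides down by the minimum of the index type.  The
     * two forms agree only while bound stays below half the unsigned
     * range, i.e. below SIZE_MAX/2 -- the implicit extent assumption. */
    static bool in_bound(int64_t idx, int64_t bound) {
      uint64_t min = (uint64_t)INT64_MIN; /* wrap-safe unsigned add */
      int64_t lhs = (int64_t)((uint64_t)idx + min);
      int64_t rhs = (int64_t)((uint64_t)bound + min);
      return lhs < rhs;
    }

With a bound above that limit, bound + min wraps back around and the single
comparison stops matching the two-sided check.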

Until this is decided, Z3 refutation will not be able to filter out as many
false positives as it potentially could.

Regards,
Balazs

I think it’s pretty reasonable. Otherwise ssize_t won’t work. In fact, the extent can’t even be equal to SIZE_MAX/2, for the same reason. The actual limits are probably smaller still: e.g., Clang won’t let you declare a fixed-length array larger than a certain size that’s much smaller than SIZE_MAX/2 (even if you are willing to find a computer that has that much RAM).

It’s not up to the constraint manager to decide whether or not a specific symbol represents an extent. SymbolExtent obviously does represent an extent, so I guess it would be reasonable to hardcode that. The opposite is not true, though: say, the extent of a heap-based symbolic region allocated by malloc(x) is the completely arbitrary symbol $x (it can even be a concrete value).

So instead of making the constraint manager make such guesses, I’d much rather have the entities responsible for allocation (MallocChecker, ExprEngine for operator new and VLAs, etc.) actively tell the constraint manager that these symbols may only have a limited range, by explicitly adding the respective constraints via assume(). The constraint manager doesn’t need to know that this is because the symbols are extents of something; it only needs the raw facts. IteratorChecker already does that for iterator position/difference symbols, and I think that’s the right thing to do. It is then entirely up to the constraint solver to take these hints into account.
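
Something along these lines, as a rough sketch; the helper name and its
placement are made up, and the real change would go wherever MallocChecker and
friends model the allocation:

    #include "clang/StaticAnalyzer/Core/PathSensitive/ProgramState.h"
    #include "clang/StaticAnalyzer/Core/PathSensitive/SValBuilder.h"

    using namespace clang;
    using namespace ento;

    // Hypothetical helper: constrain an allocation-size symbol to
    // [0, SIZE_MAX/2) at the point where the allocation is modeled,
    // e.g. from MallocChecker when evaluating malloc(x).
    static ProgramStateRef assumeSizeIsModest(ProgramStateRef State,
                                              SVal Size, SValBuilder &SVB) {
      QualType SizeTy = SVB.getContext().getSizeType();

      // SIZE_MAX / 2 in the target's size_t width; the extent has to
      // stay strictly below it, per the ssize_t argument above.
      llvm::APSInt HalfMax =
          SVB.getBasicValueFactory().getMaxValue(SizeTy) >> 1;

      SVal InRange = SVB.evalBinOp(State, BO_LT, Size,
                                   SVB.makeIntVal(HalfMax),
                                   SVB.getConditionType());
      if (auto Cond = InRange.getAs<DefinedOrUnknownSVal>())
        return State->assume(*Cond, /*Assumption=*/true);
      return State; // Couldn't build the comparison; leave the state alone.
    }

This way the constraint manager (and, transitively, the Z3 refutation step)
simply sees an ordinary range constraint on $x and never has to guess which
symbols are extents.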