patch for Bug 1889

This patch fixes a portion of case 1889 in Bugzilla: it causes the compiler to report an error when a constant array is declared with an integer literal size that is too big.

I’m new to the code base, so I’m not sure if I did everything correctly. For example, what is the best way to get the number of bytes allocated per element of an array?
Here is the way I’m doing it…
unsigned SizeTypeBits = static_cast<unsigned>(Context.getTypeSize(Context.getSizeType()));
llvm::APSInt EltBytes(SizeTypeBits);
EltBytes = Context.getTypeSize(T) / Context.Target.getCharWidth();
If this is right, maybe it could be added to ArrayType?
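
For illustration, here is the shape of the helper I have in mind; the name getElementByteSize and where it lives are purely hypothetical, not existing API:

// Hypothetical helper: number of bytes occupied by one element of type T,
// returned as an APSInt wide enough for size_t arithmetic.
static llvm::APSInt getElementByteSize(ASTContext &Context, QualType T) {
  unsigned SizeTypeBits =
      static_cast<unsigned>(Context.getTypeSize(Context.getSizeType()));
  llvm::APSInt EltBytes(SizeTypeBits, /*isUnsigned=*/true);
  EltBytes = Context.getTypeSize(T) / Context.Target.getCharWidth();
  return EltBytes;
}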

Also, what is the max address of an array? I can’t find anything in the C or C++ standard about that. I ended up checking a few compilers, Google, etc., and it seems to be LONG_MAX.
Is this always the case, or is it different on different targets?

If it differs by platform, I think it needs to be in TargetInfo (e.g. llvm::APSInt TargetInfo::MaxStackAddress() const or something).
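
To make that concrete, here is roughly how I picture the check being wired up; this is only a sketch, and MaxStackAddress and the diagnostic name are hypothetical:

// Sketch only: assumes NumElts (the array bound) and EltBytes are both
// unsigned APSInts of SizeTypeBits width, as computed above, and that
// TargetInfo grows the hypothetical MaxStackAddress() hook.
unsigned Wide = SizeTypeBits * 2;  // wide enough that the multiply cannot wrap
llvm::APSInt TotalBytes = NumElts.extend(Wide) * EltBytes.extend(Wide);
llvm::APSInt Limit = Context.Target.MaxStackAddress();  // hypothetical
if (TotalBytes > Limit.extend(Wide))
  Diag(Loc, diag::err_array_too_large);  // placeholder diagnostic name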

Additionally, are there different rules about how large an array can be if it is allocated statically?

Sadly, this patch doesn’t help with the specific problem in the Bugzilla case. That case involves an expression that evaluates to something larger than LONG_MAX, for example char BigArray[0x7fffffff + 1].
The expression is evaluated in bool Expr::isIntegerConstantExpr and that function has a few FIXME comments:

/// FIXME: Pass up a reason why! Invalid operation in i-c-e, division by zero,
/// comma, etc
///
/// FIXME: This should ext-warn on overflow during evaluation! ISO C does not
/// permit this. This includes things like (int)1e1000
///
/// FIXME: Handle offsetof. Two things to do: Handle GCC’s __builtin_offsetof
/// to support gcc 4.0+ and handle the idiom GCC recognizes with a null pointer
/// cast+dereference.

So, to patch this problem correctly, I’ll need to know whether overflow occurred, as opposed to the expression simply ending up less than zero. I’m going to work on returning various error codes from isIntegerConstantExpr, but a change to that method will affect a bunch of different areas, so I’m sure I’ll need some help! Does anyone have any ideas about how to handle overflow during the evaluation of expressions?

Regards,

Chris

bug1889.patch (3.8 KB)

This patch fixes a portion of case 1889 in Bugzilla: it causes the compiler to report an error when a constant array is declared with an integer literal size that is too big.

Nice, it would be great to fix that issue.

I’m new to the code base, so I’m not sure if I did everything correctly. For example, what is the best way to get the number of bytes allocated per element of an array?
Here is the way I’m doing it…
unsigned SizeTypeBits = static_cast<unsigned>(Context.getTypeSize(Context.getSizeType()));
llvm::APSInt EltBytes(SizeTypeBits);
EltBytes = Context.getTypeSize(T) / Context.Target.getCharWidth();

Yep, that seems right, but use uint64_t instead of ‘unsigned’ to avoid clipping in 64-bit cases.

If this is right, maybe it could be added to ArrayType?

Is this specific to arrays?

Also, what is the max address of an array? I can’t find anything in the C or C++ standard about that. I ended up checking a few compilers, Google, etc., and it seems to be LONG_MAX.
Is this always the case, or is it different on different targets?

Do you mean the maximum index? I don’t think there is a specification. I think it would make sense to do “sizeof pointer - 2 bits” for example. Clang can establish its own arbitrary limit.
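
Something along these lines, for example (just a sketch; the exact limit and where it lives are arbitrary):

// Sketch: cap object size at 2^(pointer width - 2) bytes, i.e. one quarter
// of the address space. The "- 2" is arbitrary; clang just needs *some* limit.
unsigned PtrBits = static_cast<unsigned>(
    Context.getTypeSize(Context.getPointerType(Context.VoidTy)));
llvm::APInt MaxObjectBytes(PtrBits, 0);
MaxObjectBytes.setBit(PtrBits - 2);  // == 2^(PtrBits-2)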

Additionally, are there different rules about how large an array can be if it is allocated statically?

Nope, I think it should just be a limit on object size, independent of whether that is a big struct, array or whatever.

Sadly, this patch doesn’t help with the specific problem in the Bugzilla case. That case involves an expression that evaluates to something larger than LONG_MAX, for example char BigArray[0x7fffffff + 1].
The expression is evaluated in bool Expr::isIntegerConstantExpr and that function has a few FIXME comments:

/// FIXME: Pass up a reason why! Invalid operation in i-c-e, division by zero,
/// comma, etc
///
/// FIXME: This should ext-warn on overflow during evaluation! ISO C does not
/// permit this. This includes things like (int)1e1000

Yeah, I think catching overflow is really what is needed here.

So, to patch this problem correctly, I’ll need to know whether overflow occurred, as opposed to the expression simply ending up less than zero. I’m going to work on returning various error codes from isIntegerConstantExpr, but a change to that method will affect a bunch of different areas, so I’m sure I’ll need some help! Does anyone have any ideas about how to handle overflow during the evaluation of expressions?

There is code in PPExpressions.cpp that both does arithmetic and captures whether overflow happened or not, stuff like:

case tok::star:
  Res = LHS.Val * RHS.Val;
  if (LHS.Val != 0 && RHS.Val != 0)
    Overflow = Res/RHS.Val != LHS.Val || Res/LHS.Val != RHS.Val;

I really don’t think this logic should be in the preprocessor :). It would be better to add methods to APSInt that do this, something like:

APSInt smul(const APSInt&, const APSInt&, bool &Overflow)

etc.
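
As a sketch of what I mean (not existing API, just the division trick from PPExpressions lifted into a helper):

// Hypothetical overflow-aware multiply for APSInt; both operands are assumed
// to have the same bit width and signedness.
static llvm::APSInt smul(const llvm::APSInt &LHS, const llvm::APSInt &RHS,
                         bool &Overflow) {
  llvm::APSInt Res = LHS * RHS;
  Overflow = false;
  if (LHS != 0 && RHS != 0)
    Overflow = Res / RHS != LHS || Res / LHS != RHS;
  return Res;
}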

-Chris