sizeof(DIFlags)

Hi list,

Is there a standards-based reason why the DIFlags enum is set to uint32_t [1]? I am sure my DWARF-std-reading-fu is not up to snuff, because I cannot seem to find one.

The reason I ask is that we are running out of space for our own DIFlags and would like to nail this down before deciding on an approach.

Thanks!

Sohail

[1] The code in question: https://github.com/llvm-mirror/llvm/blob/master/include/llvm/IR/DebugInfoMetadata.h#L194

DIFlags is internal to the compiler, not directly determined by the DWARF standard. It mostly happens to be full of data that gets turned into DWARF flags.

I suspect it’s nailed down to 32 bits mainly because we haven’t needed more so far. Also, MSVC historically failed to handle enum values wider than 32 bits; I don’t know whether that’s still true.

–paulr
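For what it’s worth, C++11 lets an enum declare a fixed underlying type, which both documents the width and sidesteps the old MSVC truncate-to-int behavior. A minimal sketch of what a widened flags enum could look like, with illustrative names rather than the actual LLVM definitions:

#include <cstdint>

// Sketch only: a fixed underlying type makes the enum exactly 64 bits wide,
// regardless of which enumerators are defined. The names below are made up.
enum WideDIFlags : uint64_t {
  FlagZero       = 0,
  FlagExampleA   = 1ULL << 0,
  FlagExampleB   = 1ULL << 1,
  FlagAboveBit31 = 1ULL << 32, // would not fit in a 32-bit underlying type
};

static_assert(sizeof(WideDIFlags) == 8, "underlying type controls the width");

Widening the enum itself doesn’t help if intermediate code still funnels the value through int/unsigned, which is exactly the cast hunt mentioned below.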

Thanks Paul! This was our conclusion as well, so it’s encouraging that you feel similarly. The next question is whether these are being cast to int/unsigned somewhere. That will be fun to track down :-)
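To make that hazard concrete: any helper that still takes the flags as int or unsigned will drop bits above 31 silently. A small standalone illustration, where legacyEncode is a hypothetical function rather than anything in LLVM:

#include <cstdint>
#include <cassert>

// Hypothetical helper with an old 32-bit signature; not actual LLVM code.
uint32_t legacyEncode(unsigned Flags) { return Flags; }

int main() {
  uint64_t Flags = (1ULL << 33) | 0x4;    // one new high flag plus one old low flag
  uint32_t Encoded = legacyEncode(Flags); // implicit conversion drops bit 33
  assert(Encoded == 0x4);                 // the high flag is silently lost
  return 0;
}

Compilers only diagnose this kind of narrowing with opt-in warnings such as -Wconversion, so auditing by type is probably more reliable than waiting for the build to complain.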

I think mainly to ensure that we don’t have extra padding in DIType objects:

class DIType : public DIScope {
  unsigned Line;
  DIFlags Flags;
  uint64_t SizeInBits;
  uint64_t OffsetInBits;
  uint32_t AlignInBits;

If DIFlags grew to 64 bits, we’d want to rearrange things to save memory. DITypes have been known to consume large amounts of memory.
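To make the padding cost concrete, here is a rough standalone sketch. It ignores the DIScope base class and assumes a typical 64-bit ABI where uint64_t is 8-byte aligned, so the exact numbers are illustrative:

#include <cstdint>

struct Narrow {          // mirrors the current member order
  unsigned Line;         // 4 bytes
  uint32_t Flags;        // 4 bytes, shares an 8-byte slot with Line
  uint64_t SizeInBits;
  uint64_t OffsetInBits;
  uint32_t AlignInBits;  // 4 bytes + 4 bytes of tail padding
};

struct Wide {            // same order, but Flags widened to 64 bits
  unsigned Line;         // 4 bytes + 4 bytes of padding before Flags
  uint64_t Flags;
  uint64_t SizeInBits;
  uint64_t OffsetInBits;
  uint32_t AlignInBits;  // 4 bytes + 4 bytes of tail padding
};

static_assert(sizeof(Narrow) == 32, "Line and the 32-bit Flags pack into one slot");
static_assert(sizeof(Wide) == 40, "naively widening Flags costs 8 bytes per object");

Moving AlignInBits up next to Line would bring Wide back down to 32 bytes, which is the kind of rearrangement mentioned above.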