Is there a standards-based reason why the DIFlags enum is set to uint32_t[1]? I am sure my DWARF-std-reading-fu is not up to snuff, so I cannot seem to find it.
The reason I ask is that we are running out of space for our own DIFlags and would like to nail this down before deciding on an approach.
DIFlags is internal to the compiler, not directly determined by the DWARF standard. It mostly happens to be full of data that gets turned into DWARF flags.
I suspect it’s nailed down to 32 bits mainly because we haven’t needed more, so far. Also MSVC historically failed to handle enum values wider than 32 bits; I don’t know whether that’s still true.
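To make that concrete, here is a minimal sketch (hypothetical names, not LLVM's actual definition, which is generated from a .def file) of a bit-flag enum with an explicit 32-bit underlying type and what widening it would look like:

```cpp
#include <cstdint>

// Hypothetical stand-in for a DIFlags-style bit-flag enum.
enum MyFlags : uint32_t {
  FlagZero      = 0,
  FlagPrivate   = 1u << 0,
  FlagProtected = 1u << 1,
  // ...
  FlagLast      = 1u << 31,   // bit 31 is the last one a 32-bit type can hold
};

// Widening the underlying type is a one-line change in principle...
enum MyWideFlags : uint64_t {
  WideFlagZero = 0,
  WideFlagNew  = 1ull << 32,  // ...which makes bits 32-63 available
};
```

Whether MSVC (or any consumer of the bitcode/asm printers) copes with 64-bit enumerators is the separate question raised above.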
Thanks Paul! This was our conclusion as well, so it's encouraging that you feel similarly. The next question is whether these are being cast to int/unsigned somewhere. That will be fun to track down.
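For the record, a hedged illustration of why those casts matter (again with made-up names, not LLVM code): any spot that round-trips the flags through a 32-bit integer silently drops bits above 31.

```cpp
#include <cassert>
#include <cstdint>

enum WideFlags : uint64_t {
  FlagOld = 1ull << 3,   // fits in 32 bits
  FlagNew = 1ull << 40,  // only representable with a 64-bit underlying type
};

int main() {
  uint64_t Flags = FlagOld | FlagNew;

  // A legacy cast to unsigned (32 bits on typical targets) truncates quietly.
  unsigned Truncated = static_cast<unsigned>(Flags);
  assert((Truncated & FlagOld) != 0);           // low bit survives
  assert((uint64_t(Truncated) & FlagNew) == 0); // high bit is lost
  return 0;
}
```

Grepping for casts of the flags to int/unsigned (and for APIs that take the flags as unsigned) would be the places to audit before widening the type.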