>> On the other hand, if "Byte Order" makes sense to include, should
>> other parts of targetdata be included? Pointer size seems the next
>> most desirable -- endianness and pointer size would be sufficient for
>> many elf tools, for example. However, the other parts of
>> targetdata could conceivably be useful too.
> Possibly useful again from an LLDB perspective. I could imagine
> debugging x86_64 operating system code and needing a way to communicate
> transitions from 64-bit mode and 32-bit compatibility mode seamlessly.
> However, I must stress this is *possibly* useful -- I do not have a firm
> conclusion to offer here. Perhaps this is something that we could
> support on an as needed basis.
I think that this can be reliably determined from the arch (through a
predicate). x86-64 will always be 64-bit, x86 will always be 32-bit.
Doing a "32-bit ABI in 64-bit mode" needs to be a new arch anyway, so
that sort of thing isn't an issue IMO.
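To make the predicate idea concrete, here is a minimal sketch; the
function name getPointerSizeInBits and the particular arch strings are
my own illustrative assumptions, not an existing API:

```cpp
#include <cassert>
#include <string>

// Hypothetical sketch: pointer size is a pure function of the arch
// name, so TargetSpec would not need to store it as a separate field.
unsigned getPointerSizeInBits(const std::string &Arch) {
  if (Arch == "x86-64" || Arch == "ppc64" || Arch == "aarch64")
    return 64;
  if (Arch == "x86" || Arch == "arm" || Arch == "ppc")
    return 32;
  return 0; // unknown arch
}
```

The same shape works for any other targetdata property that is fixed
per arch.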
Ya. You are right. The use case I was thinking of would probably be
better addressed using mechanisms completely unrelated to TargetSpec.
To Dan's point, this argues for forcing a 1-1 mapping between arch and
endianness, which would allow endianness to be a predicate instead
of an encoded part of the data structure.
The *only* downside I see to that is that we couldn't form a
TargetSpec that *just* contains an endianness, at least not without
introducing an "unknown-64bit" and an "unknown-32bit" archspec.
Thinking about this a bit more, from an API point of view I agree.
If we encode endianness as a fixed property of an arch, then provided we
have methods like getLittleEndian("ppcbe") => "ppcle", an "endian
bit" is largely irrelevant -- the functionality is certainly equivalent
and I think just as easy to use.
Also, we will need something like that anyway to reason about GNU-style
triples.
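For concreteness, a rough sketch of what such a method could look like
under the 1-1 mapping; the table entries and the std::optional return
type are assumptions of mine, not a concrete proposal:

```cpp
#include <cassert>
#include <optional>
#include <string>

// Hypothetical sketch: an explicit table maps each big-endian arch
// name to its little-endian twin. Every bi-endian arch needs an entry.
std::optional<std::string> getLittleEndian(const std::string &Arch) {
  if (Arch == "ppcbe")  return "ppcle";
  if (Arch == "armv5b") return "armv5l";
  if (Arch == "mips")   return "mipsel";
  return std::nullopt; // not bi-endian, or no known little-endian twin
}
```
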
However, one downside I can see is that a 1-1 mapping would effectively
double the number of architectures (for the bi-endian case). The tables
needed to model all cpu type, subtype, and ABI combos would be quite
large even with the extra level of indirection that an "endianness bit"
gives us.
So from an implementation point of view it seems to me like having an
endian field would help here. Implementing a "setByteOrder" method
might read like "is this arch bi-endian? If so, flip the bit", as
opposed to implementing (and maintaining) the tables needed to get
specifically from the "armv5l" entry to "armv5b", and so on. And if that
turns out to be true then embedding the endianness in a TargetSpec's string
representation makes good sense (to me :).
Perhaps I can find some time to get a rough code sketch together. Might
be useful to experiment with a few different approaches.
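As a starting point for that experiment, here is one possible sketch of
the bit-flip approach; the struct layout, the ByteOrder enum, and the
prefix-based bi-endian check are all hypothetical placeholders:

```cpp
#include <cassert>
#include <string>

// Hypothetical sketch: endianness kept as a field on TargetSpec, so
// flipping byte order never consults a per-arch rename table.
enum class ByteOrder { Big, Little };

struct TargetSpec {
  std::string Arch;  // e.g. "armv5"
  ByteOrder Order;

  // Placeholder check: treat arm and ppc families as bi-endian.
  bool isBiEndian() const {
    return Arch.rfind("arm", 0) == 0 || Arch.rfind("ppc", 0) == 0;
  }

  // Returns false if the arch cannot take the requested byte order.
  bool setByteOrder(ByteOrder New) {
    if (Order != New && !isBiEndian())
      return false; // arch only supports its native order
    Order = New;
    return true;
  }
};
```

The string representation could then append a suffix derived from
Order, rather than the table owning a separate entry per endianness.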