>>> Krzysztof suggested much the same thing that I think you are currently
>>> doing, which is to deliberately configure a default triple but exclude
>>> the corresponding backend.
>> You and Takumi were considering this as an unsupported configuration
>> before, and I tend to agree with that (this is the configuration I’m
>> using for our tests, but it was not intentional to leave the default
>> triple set).
> Right, intuitively it doesn't make sense. Is it actually useful to have a
> GPU compiler that will crash unless you ask it to generate GPU code? It seems
> to me it should default to producing GPU code.
Correct me if I’m wrong:
You’re viewing this from the “clang” point of view. A default triple is
needed because the command-line interface does not require you to specify one.
I see LLVM as a library or compiler *framework* in the first place, and
clang is just one use case among others.
When you build a compiler using LLVM as a library: 1) it does not have to
be a command-line compiler, and 2) the interface does not have to make
target selection optional.
Most GPU compilers are embedded in the driver (they compile shaders on-
demand during host program execution). The driver can detect the hardware
and initialize LLVM with the right triple.
We build LLVM as a shared library, and we then build multiple compilers that
link to this library to generate code for various backends. Each compiler is
responsible for selecting and initializing the appropriate backend; we *never*
rely on the default triple, and I don’t even see how we could.
You could also see LLVM as a system library that can have multiple
clients, each client responsible for its own initialization.
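The explicit-initialization model described above can be sketched in Python,
purely as an illustration of the registry pattern; the class and method names
here (TargetRegistry, register, lookup) are illustrative only and are not
LLVM's actual C++ API:

```python
# Conceptual sketch: a target registry with NO implicit default triple.
# The client must always name the backend it wants, as a driver-embedded
# GPU compiler would after detecting the hardware.

class TargetRegistry:
    def __init__(self):
        self._backends = {}

    def register(self, arch, backend):
        """Register a backend under its triple architecture name."""
        self._backends[arch] = backend

    def lookup(self, triple):
        """Resolve a backend from a full triple; no build-time fallback."""
        arch = triple.split("-")[0]
        try:
            return self._backends[arch]
        except KeyError:
            raise LookupError("no backend registered for %r" % triple)

registry = TargetRegistry()
registry.register("amdgcn", "AMDGPU-backend")  # hypothetical backend name

# A driver that detected the hardware initializes explicitly:
backend = registry.lookup("amdgcn-amd-amdhsa")
```

The point of the sketch is that a lookup for an unregistered triple fails
loudly instead of silently falling back to whatever triple was baked in at
build time.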
If you want to write your tests as unit tests linked against the library,
I have no problem with that.
If you want to take advantage of the existing tests, the bulk of them are
written using command-line tools, some of which make use of the default
triple. So, you need to configure your builds appropriately. It's not
about clang, it's about the command-line tools used to implement the tests.
If you don't like how the tests work, you don't have to use them.
>>> I expect we can detect that situation in lit.cfg
>>> and exclude tests on that basis, rather than 'native'. It would solve
>>> the problem for my case (host triple != target triple, although the
>>> parts of the triple do match) and the "normal" cross-compiler case
>>> (e.g. host = X86, backend + target triple = ARM).
>>> I'm going to play around with that and see what I can do to make it work.
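A lit.cfg check along those lines might look like the following sketch. The
exact attribute names (`config.target_triple`, `config.targets_to_build`, the
`arch_to_backend` mapping, and the `default-triple-backend` feature name) are
assumptions about what lit.site.cfg provides in a given tree, not a definitive
implementation:

```python
# Hypothetical lit.cfg fragment: gate tests on whether the backend for the
# configured default triple was actually built, rather than on 'native'.
default_arch = config.target_triple.split('-')[0]
built_targets = config.targets_to_build.split()

# Triple architecture names do not always match backend names; this mapping
# is illustrative and incomplete.
arch_to_backend = {'x86_64': 'X86', 'i386': 'X86', 'arm': 'ARM'}
backend = arch_to_backend.get(default_arch, default_arch)

if any(backend.lower() == t.lower() for t in built_targets):
    # 'llc' invoked with no explicit -mtriple should succeed, so tests
    # relying on the default triple can declare:
    #   REQUIRES: default-triple-backend
    config.available_features.add('default-triple-backend')
```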
>>>>>>> IMO, the problem is in general about tests that are written without
>>>>>>> specifying a triple, and that will be executed with the default triple.
>>>>>>> Most of these tests were written with X86 (or ARM) in mind, and there is
>>>>>>> no guarantee that they will behave as intended with every possible
>>>>>>> backend. The DataLayout for instance has to be the one from the target,
>>>>>>> and it differs across targets.
>>>>>>> I think a "portable backend test” is pretty rare in general.
>>>>>> It depends on what the test is trying to do. I'm sure it is quite
>>>>>> common for IR tests to behave essentially the same way regardless of
>>>>>> the target.
>>>>> IR tests != backend tests (I may be missing your point here, it’s late…).
>>> Right, sorry, lost focus for a moment there... nevertheless it is
>>> the case that many tests exercise functionality that is not
>>> target-centric and these should be run for any target that actually
>>> supports that functionality. For example, the DebugInfo tests should
>>> be run for any target that supports emitting debug info.
>> I’m not sure that “debug info” support is all or nothing.
>> As an extreme example, I know targets that support debug info but do not
>> support function calls; what if your “debug info” test involves these?
> Then as part of getting the test suite to work for you, you would need to
> disable that particular test for your target. It sounds like this kind of
> thing is exactly what the Hexagon folks did, and it seems quite workable.
> (And in fact I see two DebugInfo tests marked XFAIL: hexagon.)
It seems conceptually wrong to me, for the reasons I already laid out.
It should go the other way (whitelist instead of blacklist).
I think the default expectation for a new target is that it should support
basically all target-independent LLVM features, and there are tests for lots
of those features. Therefore if a target fails to support something, the
owner of the target is responsible for making sure those tests are disabled.
It sounds like you think targets should get to pick and choose which things
they want to support, and then have to go do work to explicitly enable the
target-independent tests for the things they choose to support.
We disagree on this point, and I think historically my view is where the
community is coming from; this is just my opinion, however, and I could
easily be wrong, but it is clearly how the test suite operates. If you
want to reorganize the test suite along different lines, you should start
your own llvm-dev discussion about that.
>> Also, I’m not a DebugInfo expert, but when a front-end generates them,
>> aren’t they dependent on the DataLayout? Hence on the target?
> Not really. DebugInfo tests primarily care what the DWARF description
> looks like, not so much what the generated code looks like.
My question is less “will the CHECK lines match” than “will the backend be
able to generate code with invalid debug information (e.g. pointer size,
etc.), or will it just crash?”
The target's address size is given by the DataLayout, which all targets
are required to define, so I'm completely failing to understand the problem.
Why would the debug info be invalid?
Offhand I can't think of anything the target has to do specially to support
debug info, it should Just Work. The details of the object-file format
matter more than the target itself, in my experience, and even that matters
relatively little. If your target supports emitting arbitrary data, and
assembler labels/object-file relocations, then your debug info should be fine.
DebugInfo /tests/ end up "depending" on the target only because there's no
convenient way to see what the debug info looks like, without producing
either an assembly file or an object file. Producing either of those things
requires passing data through the target. The details of the target itself
are largely irrelevant (although as I've mentioned before, details of the
assembler syntax might matter for writing actual CHECK lines).
If your target can't produce an object file, then you should turn off
'object-emission' (Hexagon does this, for example). Other than that, I don't
see any reason why debug-info tests can't in principle be target-neutral.
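The 'object-emission' gating mentioned above can be done in lit configuration
roughly as follows. This is a sketch modeled on how a lit.cfg can advertise a
feature keyed off the default triple; the exact attribute names may differ
between trees:

```python
# Hypothetical lit.cfg fragment: only advertise 'object-emission' when the
# configured default target can actually produce object files. Tests that
# need object files then declare:
#   REQUIRES: object-emission
# and are skipped (not failed) on targets like Hexagon that opt out.
if 'hexagon' not in config.target_triple:
    config.available_features.add('object-emission')
```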
> I have experimented with implementing the thing Takumi and I think should
> be a configuration error. Basically it takes the same kind of check
> that I did in D12506, except it checks for the existence of the target that
> matches the default triple. If that target exists then 'llc' with no triple
> will succeed, and it looks like the bulk of the tests that you disabled fall
> in that category. I'm not especially happy about this tactic, though.
Why aren’t you happy about that?
Because it takes what would seem to be a configuration error and makes it
something we explicitly tolerate. If people conclude that it should be
tolerated, then that's fine.