lit improvement

Hi,

My supervisor and I are interested in improving LLVM's lit a little. Currently, there are a few things about the tool that seem inconvenient:

· With several RUN commands in a failing test, it's impossible to figure out which of them actually caused the failure. We could output the line number of the failing RUN line.

· It would be nice to distinguish test failures caused by internal errors (e.g. segfaults) from those that fail with a nonzero exit code.

· Support for different hardware extensions without having to write a custom test for each would also be useful. For example, it would be nice to be able to add custom test options that are checked during test initialization, so tests unsupported by the hardware are skipped.

· It would also be nice to add more parameterization to RUN commands: for example, iterating over a predefined set of values and running the test with each of them as a parameter.

· Allow explicitly setting the lit.cfg directory: it is currently always in the root, but if you want to use one file for many test suites you have to copy it to each suite root, which is not very convenient.

The main question is: are there serious reasons why this has not been implemented yet? If so, please describe them; I'm new to LLVM and to lit in particular, and may not understand all the underlying problems with the suggestions above.

Best regards,

Andrei Serebro

Hi,

My supervisor and I are interested in improving LLVM's lit a little.
Currently, there are a few things about the tool that seem
inconvenient:

·With several RUN commands in a failing test, it's impossible to
figure out which of them actually caused the failure. We could output
the line number of the failing RUN line.

I wouldn't say impossible: when a test fails, all the RUN lines get printed out at the top. From that you can look at which CHECK lines are failing to match, and infer which RUN line was being executed.

I agree though, this could be improved. It could be a little tricky because RUN lines can be stateful. (i.e. imagine one RUN line that writes a file, and another that reads from it. Which one 'caused' the failure?). Maybe the best thing to do here would be to print the command that led to the nonzero exit code in a different color?

·It would be nice to distinguish test failures caused by internal
errors (e.g. segfaults) from those that fail with a nonzero exit code.

I suspect this might be tricky to pull off in a cross-platform way. If you'd like to do this, I'd recommend not adding a new failure kind, as there is tooling outside of lit that relies on the FAIL/XFAIL/PASS/XPASS/etc. markers to understand its output. IIRC, these names were brought over from DejaGNU.
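To illustrate why the distinction is at least visible on the POSIX side: a child killed by a signal is reported differently from one that exits with a nonzero code. This is a hypothetical sketch, not lit's actual implementation, and the negative-returncode convention shown here is specific to POSIX:

```python
import signal
import subprocess
import sys

def classify(cmd):
    """Run cmd and report whether it exited normally or was killed by a
    signal. On POSIX, subprocess encodes death-by-signal as a negative
    returncode; Windows has no direct equivalent, which is part of what
    makes this hard to do cross-platform."""
    rc = subprocess.run(cmd).returncode
    if rc < 0:
        return 'signalled: ' + signal.Signals(-rc).name
    return 'exit: %d' % rc

# A plain nonzero exit...
print(classify([sys.executable, '-c', 'raise SystemExit(3)']))
# ...versus a crash (the child sends itself SIGSEGV).
print(classify([sys.executable, '-c',
                'import os, signal; os.kill(os.getpid(), signal.SIGSEGV)']))
```

On a POSIX system this prints `exit: 3` for the first command and `signalled: SIGSEGV` for the second.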

·Support for different hardware extensions without having to write a
custom test for each would also be useful. For example, it would be
nice to be able to add custom test options that are checked during
test initialization, so tests unsupported by the hardware are skipped.

This is what `REQUIRES` and `UNSUPPORTED` are for (unless I've misunderstood what you want here?).
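For example, a test can declare its requirements in its header and lit will skip it when the named features aren't available (the feature names below are illustrative):

```shell
; REQUIRES: x86-registered-target
; UNSUPPORTED: system-windows
; RUN: llc %s -o - | FileCheck %s
```

The feature names are matched against `config.available_features`, which the suite's lit configuration populates, so a site config can register hardware-specific features at initialization time.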

·It would also be nice to add more parameterization to RUN commands:
for example, iterating over a predefined set of values and running the
test with each of them as a parameter.

I could see this being useful if the syntax for it was dead-simple.

For this sort of thing, I think it'd be best to write up a proposal RFC and send it to the *-dev@lists.llvm.org lists before sending a patch for review.

·Allow explicitly setting the lit.cfg directory: it is currently
always in the root, but if you want to use one file for many test
suites you have to copy it to each suite root, which is not very
convenient.

Have you seen how the `lit.site.cfg` and `lit.local.cfg` files work? Maybe they could suit your needs w.r.t. this.
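As a sketch of how these compose (the suffix and feature name below are made up): a `lit.local.cfg` placed in a subdirectory inherits everything from the root `lit.cfg` and only overrides what differs, so you rarely need to duplicate the root config:

```python
# lit.local.cfg -- hypothetical example. The `config` object is
# injected by lit and starts as a copy of the parent configuration.
config.suffixes = ['.ll']

# Mark this whole directory unsupported unless the root/site config
# registered the feature we depend on.
if 'my-extension' not in config.available_features:
    config.unsupported = True
```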

The main question is: are there serious reasons why this has not been
implemented yet? If so, please describe them; I'm new to LLVM and to
lit in particular, and may not understand all the underlying problems
with the suggestions above.

I think it's mostly an issue of time. Working on test harnesses is not a particularly glamorous thing to do, so lit isn't much of a focus in and of itself. I know this phrase is cliché, but patches are very welcome.

Feel free to add me on patch reviews for these things if you end up implementing them.

Cheers,

Jon

Hi,

My supervisor and I are interested in improving LLVM's lit a little.
Currently, there are a few things about the tool that seem
inconvenient:

· With several RUN commands in a failing test, it's impossible
to figure out which of them actually caused the failure. We could
output the line number of the failing RUN line.

Rather than just the number, we could probably print a caret diagnostic
that quotes it? (What to do with multi-line RUN lines? I assume the
caret diagnostic support doesn't print multiple lines.)

· It would be nice to distinguish test failures caused by
internal errors (e.g. segfaults) from those that fail with a nonzero
exit code.

Perhaps - but usually we print the output, which includes the crash stack
trace, etc - so I've not personally had much confusion over which I'm
looking at.

Has it been a problem for you?

Hi Andrei

Hi,

My supervisor and I are interested in improving LLVM's lit a little.
Currently, there are a few things about the tool that seem
inconvenient:

·With several RUN commands in a failing test, it's impossible to
figure out which of them actually caused the failure. We could output
the line number of the failing RUN line.

You don't need to implement this because this works in lit today.

By default on non-Windows systems lit uses the system's shell and
won't tell you which of a sequence of RUN commands failed. However, if
you use lit's internal shell, it will tell you the standard output,
standard error, and exit code of each RUN line.

Here's an example. Take test/Feature/packed.ll and add an extra RUN line

RUN: false

now run

$ LIT_USE_INTERNAL_SHELL=1 llvm-lit test/Feature/packed.ll

It's worth noting that this environment variable is specific to LLVM's
lit.site.cfg file, which is used to control the ``execute_external``
variable passed in.

lit.formats.ShTest(execute_external)

If you're using lit for your own projects, you just need to pass False
for ``execute_external`` to lit.formats.ShTest in your own configuration
to get the internal shell.
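A minimal standalone configuration along those lines might look like this (the suite name and suffix are placeholders):

```python
# lit.cfg -- minimal sketch for a standalone suite.
import lit.formats

config.name = 'MySuite'
config.suffixes = ['.test']
# execute_external=False selects lit's internal shell, which reports
# stdout, stderr, and the exit code for each RUN line individually.
config.test_format = lit.formats.ShTest(execute_external=False)
```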

Hope that helps.