Adding SYCL tests in test-suite

Hi all,

We’d like to add some SYCL tests to LLVM’s test suite. The SYCL support in the LLVM repo is still very much a work-in-progress, but since the test-suite is supposed to be able to support compilers other than clang, I thought it would be reasonable to start adding the tests there now, disabled by default, rather than maintaining a fork of the test-suite repo until SYCL support is fully in place in the main repo.

These tests would involve compiling one or more source files containing SYCL kernels and then executing the resulting binaries using either host, CPU, or accelerator devices, based on the user’s configuration. Running these tests in some configurations would require an OpenCL runtime and, of course, the chosen hardware.
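For illustration, here is a minimal sketch of the shape such a test might take (this is just an example I wrote for this message, not one of the actual tests; the kernel, names, and PASS/FAIL convention are assumptions):

```cpp
// Minimal sketch of a SYCL correctness test (illustration only): fill a
// vector on whichever device the runtime selects, verify on the host,
// and report the result through output and exit code.
#include <sycl/sycl.hpp>
#include <cstdio>
#include <vector>

int main() {
  constexpr size_t N = 1024;
  std::vector<int> data(N, 0);

  // The default selector picks host, CPU, or an accelerator device,
  // depending on how the runtime and hardware are configured.
  sycl::queue q;

  {
    sycl::buffer<int, 1> buf{data.data(), sycl::range<1>{N}};
    q.submit([&](sycl::handler &cgh) {
      sycl::accessor acc{buf, cgh, sycl::write_only};
      cgh.parallel_for<class FillKernel>(
          sycl::range<1>{N},
          [=](sycl::id<1> i) { acc[i] = static_cast<int>(i[0]); });
    });
  } // buffer goes out of scope: results are copied back into 'data'

  for (size_t i = 0; i < N; ++i) {
    if (data[i] != static_cast<int>(i)) {
      std::printf("FAIL at index %zu\n", i);
      return 1;
    }
  }
  std::printf("PASS\n");
  return 0;
}
```

The relevant point for the test-suite is that the binary needs a SYCL-aware compiler (and possibly an OpenCL runtime) to build and run, it picks up whichever device the configuration provides, and it reports success or failure through its output and exit code just like the existing C/C++ tests.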

Am I over-reaching?

If not, I’d like some feedback on where the tests should go. For the most part, I think these will be correctness tests, though I expect we’ll want to add benchmarks at some point. For the correctness tests, I thought it would make sense to either (a) create a top-level SYCL folder with SingleSource and MultiSource folders under it, or (b) create SYCL folders in appropriate locations under the existing SingleSource and MultiSource folders (e.g. llvm-test-suite/SingleSource/UnitTests/SYCL). I would favor (b) but I wasn’t sure if SYCL is enough of a departure from the normal C/C++ tests to push it into its own location.

I haven’t added anything to the test-suite before, so if I’m approaching this in completely the wrong way, don’t be shy about telling me so.

Thanks,

Andy

Hi Andy,

> We'd like to add some SYCL tests to LLVM's test suite. The SYCL support in
> the LLVM repo is still very much a work-in-progress, but since the
> test-suite is supposed to be able to support compilers other than
> clang, I thought it would be reasonable to start adding the tests
> there now, disabled by default, rather than maintaining a fork of the
> test-suite repo until SYCL support is fully in place in the main repo.

I think adding such tests is generally a good idea and I do think we
could have them in the test suite while SYCL support in clang is still
maturing.

> These tests would involve compiling one or more source files
> containing SYCL kernels and then executing the resulting binaries
> using either host, CPU, or accelerator devices, based on the user's
> configuration. Running these tests in some configurations would
> require an OpenCL runtime and, of course, the chosen hardware.

Now this is where the dicey part begins. The test suite, as of now, is
not well equipped for tests "competing for shared resources". I mean,
parallel tests or tests running on accelerators. To be fair, I am
assuming we don't want the test suite to be run completely
sequentially if the SYCL parts are run, potentially alongside the
non-SYCL parts. I'm saying
this because OpenMP is in a similar position right now.

Our goal is to run OpenMP tests as part of the test suite, but that
requires infrastructure we have only started to build (downstream so
far). I think the goals of basically all parallel (and offloading)
extensions are pretty much the same, so we should work together on
them.

> Am I over-reaching?

No.

> If not, I'd like some feedback on where the tests should go. For the
> most part, I think these will be correctness tests, though I expect
> we'll want to add benchmarks at some point. For the correctness tests,
> I thought it would make sense to either (a) create a top-level SYCL
> folder with SingleSource and MultiSource folders under it, or (b)
> create SYCL folders in appropriate locations under the existing
> SingleSource and MultiSource folders (e.g.
> llvm-test-suite/SingleSource/UnitTests/SYCL). I would favor (b) but I
> wasn't sure if SYCL is enough of a departure from the normal C/C++
> tests to push it into its own location.

Option (c) would be a parallel or accelerator subfolder in which we
nest things. How we nest things is again the question you asked. While
I can see the appeal of an (almost) top-level SYCL folder, it also has
drawbacks, mostly when it comes to these really cool applications that
come in 2+ different languages. Keeping these sources together as they
are seems pretty important. In that spirit, I am unsure why we should
do any of the options (a-c) at all. We could just put a test, let's say
XYZ, into the existing structure SingleSource/Benchmarks/XYZ regardless
of the language XYZ is written in. We already mix C and C++, so why
stop there?

To be honest, I don't really care too much and the above is mostly to
start a discussion on the pros and cons of the different options.

> I haven't added anything to test-suite before, so if I'm approaching
> this in completely the wrong way don't be shy about telling me so.

I think writing an email like this is exactly the right way :)

Cheers,
 Johannes

Thanks, Johannes!

It would be nice to have some additional infrastructure to control execution of tests that have special resource requirements like this. We've seen some problems in our internal testing with parallel test execution causing system gridlock. Having a common way to address that would be great.

One reason I thought separate SYCL folders (either at the top level or elsewhere in the tree) would be useful is that I think we'll probably want a single option to turn these tests on or off as a group. A "parallel" folder may make sense for the same reason. I see your point about mixed languages, but perhaps we could add a "parallel/mixed" subfolder if/when such a test is added.

Any other opinions?

-Andy

Hi Andy,

[-cfe-dev, llvm-dev seems sufficient for this]

First, apologies for not responding earlier.

I CC’ed a few people that might be interested in this (or at least in the path we’ll take).

Hi Johannes,

The structure you suggested makes sense to me.

Vladimir Lazarev has been working on moving some end-to-end tests out of the source tree (in the intel/llvm GitHub branch where the parts of our SYCL development that aren’t ready to be included in the main LLVM repo are being shared). He has a local working copy that can run the tests with various hardware and device runtimes. The last version of this that I saw put the tests in llvm-test-suite/SYCL, but it should be easy enough to move them another level deeper to create a parallel structure to be shared with OpenMP. Vladimir is on vacation right now, but I believe he’ll be preparing a patch for review shortly after he returns.

Thanks,

Andy

Sounds great. Did he also put in knobs to limit parallelism, make sure not to oversubscribe the system, etc.?

That’s a good question. I’m afraid I don’t know how/if that’s controlled in these tests.

Hi Johannes and Andrew,

  1. I agree with the structure suggested by Johannes for the test-suite; it will make things easier.

  2. Regarding the lit changes that I made:

I made changes to the lit utility that can be used to execute the tests in the LLVM test-suite. I enabled an option for tests that run with multiple threads using OpenMP, which allows you to control how many threads you would like to assign to the tests. By default, a test will run with one thread unless you specify otherwise.

This was done because the tests in question support both parallel and serial execution. But if we adopt the directory structure suggested above, we can also control whether we want to run the parallel tests or just the serial ones without many changes.

These changes are not included in the current TOT.

Let me know if you have any questions.

Hi Neeraj,

Are your changes available somewhere that I could see them?

It sounds like a useful feature, and I’d like to see how you have implemented it.

Thanks,

Andy

Hi Andrew,

I am very sorry for replying late. I just had to clean up some things before pushing the code.

You can find it here: https://github.com/nvg24/llvm-project

It’s not a big change, but the main assumption my change makes is that the tests respect the OMP_NUM_THREADS environment variable. The current tests (e.g. XSBench) in the LLVM test-suite use omp_get_num_procs() to get the number of threads, which is a problem.
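As a minimal sketch of the difference (illustration only, not code from XSBench or the test-suite):

```cpp
#include <omp.h>
#include <cstdio>

int main() {
  // Ignores OMP_NUM_THREADS: always reports the number of processors on the
  // machine, so a test that sizes itself with this call can oversubscribe a
  // shared system.
  int procs = omp_get_num_procs();

  // Respects OMP_NUM_THREADS: this is the upper bound the runtime will use
  // for the next parallel region.
  int max_threads = omp_get_max_threads();

  std::printf("procs=%d max_threads=%d\n", procs, max_threads);

  #pragma omp parallel
  {
    #pragma omp single
    std::printf("parallel region used %d threads\n", omp_get_num_threads());
  }
  return 0;
}
```

With OMP_NUM_THREADS=1, omp_get_max_threads() returns 1 and the parallel region runs with a single thread, while omp_get_num_procs() still reports the full processor count of the machine.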

You can contact me if you have any questions.

Thank you,
Neeraj Ganu