Can I control the HSA config generated by the AMDGPU backend?

I am using LLVM clang to offline-compile my OpenCL code into assembly. My target is amdgpu--amdhsa. The assembly file generated by clang has the config "enable_sgpr_dispatch_ptr = 1". Can I do something to turn that off in the generated assembly file? Also, it seems that the order of kernel arguments is the reverse of the AMDCL2 convention, i.e. user arguments are placed first, while hidden arguments like "HiddenGlobalOffsetX" come after them. Can I change the order so that the hidden arguments come before the user arguments?
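For reference, the invocation I'm using looks roughly like this (a sketch from my setup; the arch in the actual triple is spelled amdgcn, and gfx803 is just an example GPU):

```
clang -x cl -Xclang -finclude-default-header \
    -target amdgcn--amdhsa -mcpu=gfx803 -S phi2.cl -o phi2.s
```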

Thanks a lot!

Finally I managed to modify LLVM so that it generates assembly that can run on the AMDGPU-PRO driver. One problem is that the code generated by LLVM is about 10% slower than the code from AMDGPU-PRO's online compiler. Is there anything I can do to tune up the performance of the LLVM output?

Thanks!

I am using LLVM clang to offline-compile my OpenCL code into assembly. My target is amdgpu--amdhsa. The assembly file generated by clang has the config "enable_sgpr_dispatch_ptr = 1". Can I do something to turn that off in the generated assembly file?

This is inferred from whether or not the kernel requires it. You can avoid it by not using features that require the dispatch pointer. Just about anything does in practice, since it’s necessary for the normal workitem calculations.
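For illustration, a sketch (made-up kernels; this reflects the current lowering, where the work-group size lives in the dispatch packet):

```c
__kernel void copy_local(__global const uint *in, __global uint *out) {
    // get_local_id(0) arrives directly in a VGPR, so by itself it
    // should not pull in the dispatch pointer.
    uint lid = get_local_id(0);
    out[lid] = in[lid];
}

__kernel void copy_global(__global const uint *in, __global uint *out) {
    // get_global_id(0) = group_id * local_size + local_id, and the
    // local size is read from the dispatch packet; this is the sort
    // of use that sets enable_sgpr_dispatch_ptr = 1.
    uint gid = get_global_id(0);
    out[gid] = in[gid];
}
```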

Also, it seems that the order of kernel arguments is the reverse of the AMDCL2 convention, i.e. user arguments are placed first, while hidden arguments like "HiddenGlobalOffsetX" come after them. Can I change the order so that the hidden arguments come before the user arguments?

Thanks a lot!

No, the AMDGPU backend doesn't support the old HSAIL OpenCL ABI.

-Matt

This is very dependent on the case you are looking at, so without a specific example or an ISA comparison between the two compilers there's no way to guess.

-Matt

The target algorithm is lyra2 and the target kernel is "search2" in https://github.com/fancyIX/sgminer-phi2-branch/blob/master/kernel/phi2.cl.
The details are implemented in https://github.com/fancyIX/sgminer-phi2-branch/blob/master/kernel/lyra2mdz.cl
If you have time to take a look at the assembly, I can upload it later today.

Thanks,
Changdao

Matt,

Attached are the two assembly files for kernel "search2": one from the AMDGPU-PRO online compiler, and one from llvm-roc-1.6.x. It would be great if you could take a look and see how to improve the LLVM one.
In case you missed it:

The target algorithm is lyra2 and the target kernel is "search2" in https://github.com/fancyIX/sgminer-phi2-branch/blob/master/kernel/phi2.cl.
The details are implemented in https://github.com/fancyIX/sgminer-phi2-branch/blob/master/kernel/lyra2mdz.cl

Thanks,
Changdao

search2_llvm.isa (609 KB)

search2_amdgpupro.isa (1.05 MB)

clrxasm's docs say it only supports GCN devices to begin with, so it seems like you wouldn't actually want to use the --amdhsa "os" flag (or the amdgpu target; you'd want amdgcn). As far as I know, that flag is for things that will be loaded directly with the HSA API. If you felt like it, you could load and execute the binaries with that API instead of the OpenCL one and not mess around with it further than that. I've never worked with it myself, so Artem can probably tell you more if that doesn't explain things, but it looks relatively straightforward: https://gpuopen.com/rocm-with-harmony-combining-opencl-hcc-hsa-in-a-single-program/
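To give a flavor of the HSA runtime side (a sketch only; as I said, I haven't used it, and this just shows init and agent discovery per the public hsa.h, with the actual code-object loading left to the article above):

```c
#include <hsa.h>

/* Callback for hsa_iterate_agents: grab the first GPU agent.
 * (Assumes a single-GPU machine; purely illustrative.) */
static hsa_status_t find_gpu(hsa_agent_t agent, void *data) {
    hsa_device_type_t type;
    hsa_agent_get_info(agent, HSA_AGENT_INFO_DEVICE, &type);
    if (type == HSA_DEVICE_TYPE_GPU) {
        *(hsa_agent_t *)data = agent;
        return HSA_STATUS_INFO_BREAK; /* stop iterating */
    }
    return HSA_STATUS_SUCCESS;
}

int main(void) {
    hsa_agent_t gpu;
    hsa_init();
    hsa_iterate_agents(find_gpu, &gpu);
    /* From here the article walks through creating a queue, loading
     * the code object into an hsa_executable_t, and dispatching. */
    hsa_shut_down();
    return 0;
}
```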

This page https://openwall.info/wiki/john/development/AMD-IL (linked from another AMD list posting last year about something similar) says that the following approaches work:

**(i)** Setting an environment variable:
AMD_OCL_BUILD_OPTIONS_APPEND=-save-temps ./Name_of_executable
**(ii)** Using the build options:
In clBuildProgram(), specify "-save-temps" in the build-options field to generate IL and ISA.

…and the driver will retain the .isa and .il files, but then you'd still be left with patching in your changes somehow. If that works, it would at least give you an example of what LLVM is currently generating vs. the driver, so you can compare those and also modify / test assembly changes to determine whether they're worthwhile for whatever issue you're trying to solve.
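For route (ii), a minimal sketch (assuming `program` and `device` are already created; error handling omitted):

```c
#include <CL/cl.h>

/* Asks AMD's driver to keep its intermediate .il/.isa files.
 * `program` comes from clCreateProgramWithSource and `device` is the
 * target cl_device_id. */
cl_int build_with_temps(cl_program program, cl_device_id device) {
    return clBuildProgram(program, 1, &device, "-save-temps", NULL, NULL);
}
```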

If this is an optimization thing, I'd strongly suggest going through the files as-is and trying some of the OpenCL-level optimizations AMD's guides suggest. You'd be surprised what removing a couple of conditionals in often-called loops can do for the performance of many things. Looking at the code, vectorizing / using native OpenCL vector types would probably show some gains as well. Many of the kernels seem to be straight C source conversions of code that was optimized for x86 at some point before SSE2 existed and promptly forgotten.
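As a sketch of the vectorization point (made-up kernels, just to show the shape of the change):

```c
/* Before: four scalar ulong operations per work-item. */
__kernel void xor_scalar(__global ulong *a, __global const ulong *b) {
    size_t i = get_global_id(0) * 4;
    a[i + 0] ^= b[i + 0];
    a[i + 1] ^= b[i + 1];
    a[i + 2] ^= b[i + 2];
    a[i + 3] ^= b[i + 3];
}

/* After: one ulong4 operation, letting the compiler emit wider
 * loads/stores and keep the lanes together. */
__kernel void xor_vec(__global ulong4 *a, __global const ulong4 *b) {
    size_t i = get_global_id(0);
    a[i] ^= b[i];
}
```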

Cheers,
-G

This page https://gpuopen.com/opencl-rocm1-6/ also suggests that inline asm is supported by the ROCm toolchain, and there are example exercises / solutions here:

https://github.com/HandsOnOpenCL/Exercises-Solutions/tree/master/Solutions
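A sketch of what that inline asm looks like (syntax assumed from the gpuopen article's examples; the "v" constraint pins an operand to a VGPR):

```c
__kernel void add_asm(__global const float *in0,
                      __global const float *in1,
                      __global float *out) {
    size_t i = get_global_id(0);
    float c;
    /* One hand-picked VALU instruction instead of the compiler's choice. */
    __asm volatile("v_add_f32_e32 %0, %1, %2"
                   : "=v"(c)
                   : "v"(in0[i]), "v"(in1[i]));
    out[i] = c;
}
```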

The AMD PRO driver has claimed ROCm 1.6 support since last year, but it sounds like that doesn't work here, so ???

Finally I got something working. The speed went up after I disabled loop unrolling and replaced "get_local_size(1)" with a constant. So LLVM is indeed a very good compiler, comparable to AMD's own.
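For anyone finding this later, the two changes look roughly like this (a sketch; the constant and the loop body are made up, and #pragma nounroll is one way to ask clang not to unroll):

```c
/* Was: uint stride = get_local_size(1); */
#define WORKSIZE_Y 4 /* assumed fixed launch configuration */

__kernel void search2_sketch(__global ulong *state) {
    uint y = get_local_id(1); /* 0 .. WORKSIZE_Y-1 */
    #pragma nounroll /* keep the loop rolled */
    for (uint r = 0; r < 8; ++r)
        state[y * 8u + r] ^= (ulong)(r + WORKSIZE_Y);
}
```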