MLIR gpu-module-to-binary using the CAPI


I’m trying to transform a GPU module into a GPU binary using the CAPI, but I’m running into a problem.
I’m using the -test-lower-to-nvvm pass as a guide and have been able to follow the conversion for all-reduce-and.mlir through the CAPI up until createGpuModuleToBinaryPass.
When I add this pass through the CAPI (with mlirCreateGPUGpuModuleToBinaryPass()), I get an error:

```
error: cannot be converted to LLVM IR: missing `LLVMTranslationDialectInterface` registration for dialect for op: gpu.module
error: Failed creating the llvm::Module.
error: An error happened while serializing the module.
```

If I instead apply this last pass with mlir-opt (i.e. --gpu-module-to-binary), the program runs without errors.

I’m using the CAPI through wrappers in Julia, so it’s difficult to provide an easily reproducible example, but this gist contains the creation of the pass manager:

Does anyone know what might be going wrong or how I should go about debugging this further?


Should be the same situation as OpenCL example - #2 by mehdi_amini


Thanks a lot @mehdi_amini!

For anyone with the same problem: when using the CAPI, the translation interfaces can be registered by calling mlirRegisterAllLLVMTranslations(context), which I initially hadn’t seen but makes a lot of sense in hindsight!
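
To make the fix concrete, here is a minimal C sketch of the fixed sequence. The two calls confirmed by this thread are mlirRegisterAllLLVMTranslations() and mlirCreateGPUGpuModuleToBinaryPass(); the header paths, the runGpuModuleToBinary wrapper name, and the surrounding pass-manager boilerplate are my own assumptions and may differ across MLIR versions.

```c
// Sketch only: register the LLVM translation interfaces on the context
// *before* running gpu-module-to-binary through the CAPI. Header names
// are a guess for a recent MLIR; adjust to your build.
#include "mlir-c/IR.h"
#include "mlir-c/Pass.h"
#include "mlir-c/RegisterEverything.h"
#include "mlir-c/Dialect/GPU.h"

void runGpuModuleToBinary(MlirContext ctx, MlirModule module) {
  // Without this call the serializer cannot translate gpu.module into an
  // llvm::Module, producing the "missing `LLVMTranslationDialectInterface`
  // registration" error from the original post.
  mlirRegisterAllLLVMTranslations(ctx);

  MlirPassManager pm = mlirPassManagerCreate(ctx);
  mlirPassManagerAddOwnedPass(pm, mlirCreateGPUGpuModuleToBinaryPass());

  MlirLogicalResult result =
      mlirPassManagerRunOnOp(pm, mlirModuleGetOperation(module));
  (void)result; // check mlirLogicalResultIsFailure(result) in real code

  mlirPassManagerDestroy(pm);
}
```

The same ordering applies from the Julia wrappers: the translation registration only needs to happen once per context, before the pass manager runs.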