Just to be clear - this is running entirely in the browser - no phoning home to some Jupyter server.
It went better than expected, both in how much of a lift it was and in how much functionality survived. Pyodide tooling did most of the work (hats off to those people for gluing all of this together…). The hardest part was finding a place to “host” the wheel so it could be “pip installed”, because of the browser’s CORS requirements.
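For anyone hitting the same CORS wall: any static file server works for hosting the wheel, as long as it sends an `Access-Control-Allow-Origin` header so the in-browser fetch is allowed. A minimal sketch (my own example, not the setup used above), built on Python’s stdlib server:

```python
import http.server


class CORSRequestHandler(http.server.SimpleHTTPRequestHandler):
    """Static file handler that adds the CORS header browsers require
    for cross-origin fetches (e.g. micropip downloading a wheel)."""

    def end_headers(self):
        # Allow any origin to fetch files from this server.
        self.send_header("Access-Control-Allow-Origin", "*")
        super().end_headers()


def serve_wheels(port: int = 8000) -> http.server.HTTPServer:
    """Serve the current directory with CORS enabled.

    The caller is expected to run serve_forever() on the returned server.
    """
    return http.server.HTTPServer(("", port), CORSRequestHandler)
```

With that running in the directory containing the wheel, `micropip.install("http://<host>:8000/<pkg>.whl")` from inside Pyodide should be able to fetch it.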
Everything seems to work, with one exception: ExecutionEngine (i.e., JITing) currently fails with `Failed to create a TargetMachine for the host.` But there should be a path forward there too, since clang-repl has already figured this part out. Hopefully I’ll have some free time soon to get that working as well. Then we could do E2E execution in the browser, or maybe even GPU execution via the SPIR-V path? Not sure.
So this is LLVM/MLIR and the Python bindings and the Python interpreter running entirely in the browser JS runtime. Reminds me of a meme…
Oh, and if the [RFC] MLIR Dialect for WebAssembly lands, we can take the inception one level deeper (or something like that) and use the bindings for that dialect to write and compile wasm.
One of the maintainers of clang-repl and emscripten-forge here (similar to Pyodide in hosting recipes for emscripten-wasm32, but not restricted to Python; you can find recipes for C++, Rust, and Fortran packages on emscripten-forge).
Yes, we were able to solve the “conventional JIT is unfeasible with wasm in the browser” problem and host clang-repl on emscripten-forge. We now put it to use through our JupyterLite C++ kernel, which runs entirely in the browser. Feel free to play around with the kernels or try out the demo notebooks we provide.
I am new to MLIR, so I may not understand your use case just yet. Could you explain what you are trying to achieve here? Feel free to share the build scripts or any other resources you use to compile MLIR and the Python bindings to wasm.
For our C++ kernel, we just hide it in JupyterLite’s tmp directory in the virtual file system. Not sure that’s the best approach though, hence it lives as a patch in our llvm/clang recipe on emscripten-forge. Let me know if you find a better approach here.
Also, feel free to host your wasm builds of MLIR and the MLIR Python bindings as recipes on emscripten-forge, so that you won’t have to fetch them at runtime in JupyterLite and can add them to your wasm environment before serving.
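To give a rough idea of what contributing such a recipe looks like, here is a hypothetical sketch of a recipe file. Every name, version, URL, and checksum below is a placeholder, and the exact schema and required fields should be taken from an existing recipe in the emscripten-forge repository rather than from this sketch:

```yaml
# Hypothetical recipe sketch -- all values are placeholders; mirror an
# existing emscripten-forge recipe for the actual schema and toolchain setup.
package:
  name: mlir-python-bindings   # placeholder name
  version: "0.0.1"             # placeholder version

source:
  url: https://example.org/mlir-python-src.tar.gz   # placeholder URL
  sha256: "0000000000000000000000000000000000000000000000000000000000000000"

build:
  number: 0

requirements:
  build:
    - cmake
    - ninja
  host:
    - python
```

Once merged, the package can be added to the wasm environment that the JupyterLite deployment is built from, avoiding the runtime wheel fetch entirely.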