How to use different out-of-tree Python bindings

TL;DR: I am trying to combine two projects that each have their own out-of-tree dialects and corresponding Python bindings, currently with Bazel. I am running into the problem that the symbols of some static variables are defined multiple times, such as the various TypeIDs of dialects, ops, etc., which then exist several times at runtime and cause problems. I know a solution based on shared libraries but am struggling to implement it in Bazel.

I have posted a more detailed question on StackOverflow but the essence is that (1) the symbols of Python extensions aren’t visible from different extensions (even if they have public linker visibility) and (2) symbols from common object files that are linked statically into multiple extensions exist in all of them. The two combined lead to multiple instances of static variables at runtime.
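Point (1) can be observed directly on a Unix CPython: extension modules are loaded via dlopen() using the interpreter's dlopen flags, which by default do not include RTLD_GLOBAL, so each extension's symbols stay local to it. A minimal check (a sketch, assuming Linux/CPython defaults):

```python
import ctypes
import sys

# CPython dlopen()s extension modules with the flags in sys.getdlopenflags().
# By default RTLD_GLOBAL is not set, so symbols defined in one extension are
# not visible when the dynamic linker resolves symbols for another extension.
flags = sys.getdlopenflags()
print(bool(flags & ctypes.RTLD_GLOBAL))  # False on a default Unix CPython
```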

A solution that seems to work is to have the symbols of all static variables live in separate shared libraries that the extensions share. In that case, all Python extensions that link to the same set of shared libraries see the same set of instances of static variables.
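The sharing rests on the dynamic loader deduplicating libraries by path: loading the same shared object twice yields a single in-process copy, so its static variables exist exactly once. A quick illustration with ctypes (using libc purely as a stand-in for a real C API shared library; assumes a Unix system):

```python
import ctypes
import ctypes.util

# dlopen() reference-counts by path: requesting the same shared library twice
# returns a handle to the single in-process copy, so its static variables
# exist exactly once no matter how many extensions load it.
path = ctypes.util.find_library("c")  # libc as a stand-in library
first = ctypes.CDLL(path)
second = ctypes.CDLL(path)
print(first._handle == second._handle)  # True: one shared copy
```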

The question is how to organize the build system to produce that. My question on StackOverflow contains details about why this isn’t trivial with Bazel.

For the MLIR C API, a solution already exists: a custom rule called mlir_c_api_cc_library defines (1) a target for “normal” consumers, (2) a *Headers target with only the headers, which the Python extensions would use, and (3) a *Objects target that only the (single instance of the) shared libraries would use. For example, the CAPIInterfaces target looks like this:

    mlir_c_api_cc_library(
        name = "CAPIInterfaces",
        srcs = [
            # ...
        ],
        capi_deps = [
            # ...
        ],
        includes = ["include"],
        deps = [
            # ...
        ],
    )
With this rule, the source files from the srcs argument and the files from the (transitive) capi_deps argument exist in the *Objects target but not in the *Headers target. However, the source files from any target pulled in via deps either exist in all targets (if that target has been defined with alwayslink, see my SO post for details) or aren’t exported by the shared libraries that depend on that target. This affects the symbols in the :IR target: by default, they aren’t exported, and if I set them to alwayslink, they are exported in all extensions and exist several times at runtime.

The only solution I am currently aware of that technically solves this problem is to apply the mlir_c_api_cc_library rule to all transitive dependencies of MLIR, including LLVM (among other things, command-line options are defined in static variables, and I ran into runtime issues with those being defined twice). This doesn’t sound very realistic, or is at least highly non-trivial: among things I might not have thought about, it would require changing hundreds or thousands of targets more or less manually and convincing a lot of the people involved.

Before I consider embarking on that mission: is this really the only solution? Is there perhaps just a workaround that unblocks me on my current project while we look for a more sustainable solution?


I don’t think too many folks here use Bazel very actively. I’m assuming that the alternative of building a single library is not feasible because you want to reuse a prebuilt library, or is there another reason? (I’m also assuming that library doesn’t have a CMake build.)

Why were the targets set to alwayslink?

I thought most of those had been changed; where have you found them in the libs?

You’re in quite deep water and are using toolchains that are quite far off the mainline and not built for the level of build graph hacking that is needed.

We do this all the time, but always by way of a superproject that builds the combination and does not fork the common C++ dependencies into unrelated binaries.

The Python API only depends on the public C API, so there are numerous ways to mix things below that level so that you end up with a single C library to link against, but you’ll be in custom build-splicing land; no turnkey macro for this currently exists.

It is possible with what exists today to use a shared-library build of LLVM, MLIR, and the out-of-tree leaf projects to achieve mostly what you want, but it is non-trivial, and approximately none of the infrastructure for that exists in Bazel. Sorry, I wouldn’t even know how to start such a thing given how steep an uphill battle shared-library work in Bazel is, to say nothing of the cross-platform concerns. A superproject approach would probably work but would still require some massaging.

With all of that said, if someone told me I had half an hour to solve the problem, I would extract a base shared library containing all vague-linked TypeID and global cl symbols. Then I would link-edit everything to depend on it (or LD_PRELOAD it) to make sure its definitions take precedence and all dependents resolve to the same instances.
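On the Python side, a related stopgap is to make the first extension’s symbols globally visible so later extensions resolve against them rather than their own private copies. This is a hedged sketch, not the extraction approach above; RTLD_GLOBAL has well-known downsides (unrelated symbol clashes), the API is Unix-only, and the import location is a placeholder:

```python
import os
import sys

# Temporarily add RTLD_GLOBAL so the next extension import exports its
# symbols process-wide; later extensions then resolve TypeIDs and other
# statics against that first copy instead of their own duplicates.
saved = sys.getdlopenflags()
sys.setdlopenflags(saved | os.RTLD_GLOBAL)
try:
    pass  # import your first MLIR extension module here (placeholder)
finally:
    sys.setdlopenflags(saved)  # restore so later imports are unaffected
```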

This isn’t that dissimilar from how I understand clang plugins to work, but it is very tricky to get right across platforms, and it is something you do to the builds rather than playing within the bounds of what something like Bazel wants you to think shared libraries and build graphs are.