Thanks again for starting work in this area. I did a quick review pass over your GitHub repo and found a number of points of feedback/discussion that I wanted to capture. Pretty much all of them fall into the category of "can be worked out as we go", so please don't take them as blocking or critical of the work! (They're more in the vein of "lessons I've learned the hard way".)
I’ll separate my response into Integration Issues and Style. The first will likely be forced in your first upstream submission anyway, and the latter can be elaborated over time.
Integration Issues
Composable modules
There are likely two use cases for the MLIR python bindings:
- Support users who expect that an installed version of LLVM/MLIR will yield the ability to import mlir and use the API in a pure way out of the box.
- Support downstream integrations that will likely want to include parts of the API in their private namespace or specially built libraries, probably mixing it with other python-native bits.
What you have is sufficient for #1 but it gets tricky with #2. As a half measure to supporting #2 without indexing heavily on it, the following design points get you most of the way there:
- Separate the construction/populating of a py::module from the PYBIND11_MODULE global constructor.
- Introduce headers for your C++-only wrapper classes (PythonModule is the one you have now), as other related C++ modules will need to interop with them.
- Separate any infectious global initialization into its own module/dependency that can be optionally linked (currently registerAllDialects falls into this category).
As a relatively old example (that has other issues, such as not being in LLVM style), consider how we have laid out the pyiree.compiler module: we provide C++-side declarations in a header and a SetupCommonCompilerBindings(pybind11::module) global function that is called from a binary-level PYBIND11_MODULE initializer. We actually embed this module into several different binaries that, for reasons, have to be built individually and have complicated dependencies.
There are a lot of interrelated issues (shared library linkage, distribution concerns, etc) that affect such things. Suffice it to say that this flexibility will be both needed and used. Also, it is well known that compilation time for all of the template meta-programming in pybind scales with the number of things you define in a translation unit. Breaking the bindings into multiple translation units can significantly aid compile times for APIs with a large surface area.
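To illustrate the shape of this (all names below are hypothetical, not taken from your repo), a populate function declared in a header and invoked from a thin PYBIND11_MODULE stub might look like:

```cpp
// MlirIrModule.h -- hypothetical header declaring the populate function, so
// any embedding binary can call it without owning the module constructor.
#pragma once
#include <pybind11/pybind11.h>

namespace mlir {
namespace python {
// Populates `m` with the core IR bindings. Deliberately free of
// global/static initialization side effects.
void populateIrSubmodule(pybind11::module &m);
}  // namespace python
}  // namespace mlir

// MainModule.cpp -- the only translation unit with a global constructor.
// Downstreams can instead call populateIrSubmodule() from their own
// PYBIND11_MODULE initializer in their own binary.
#include "MlirIrModule.h"

PYBIND11_MODULE(_mlir, m) {
  m.doc() = "MLIR python bindings";
  mlir::python::populateIrSubmodule(m);
}
```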
Consider submodules
This would need more discussion, but I have found that having modules that roughly correspond to the MLIR directory structure makes good semantic sense. You're also going to want a dedicated _init module or something similar where you stash various global-init kinds of things (dialect registration, etc).
I've gone back and forth on how fine-grained to make the correspondence. There is something nice about having an ir or core package that is just about the core constructs in the IR directory: that is fairly tightly scoped and controlled. Ditto for Pass.
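A hypothetical sketch of that layout (submodule names are illustrative; note that pass is a Python keyword, hence the trailing underscore):

```cpp
#include <pybind11/pybind11.h>
namespace py = pybind11;

PYBIND11_MODULE(_mlir, m) {
  // Roughly mirrors the MLIR source tree: IR/ -> ir, Pass/ -> pass_.
  py::module ir = m.def_submodule("ir", "Core IR constructs");
  py::module pm = m.def_submodule("pass_", "Pass management");
  // A dedicated home for global initialization (dialect registration, etc)
  // so that importing the core bindings stays side-effect free.
  py::module init = m.def_submodule("_init", "Global initialization");
  // Each submodule is populated by its own translation unit (hypothetical):
  // populateIrSubmodule(ir); populatePassSubmodule(pm); ...
}
```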
Leave room for a “loader”
Any non-trivial python-native project (and a dependency on LLVM/MLIR and the related ecosystem definitely puts this in that category) will eventually grow the need for some kind of "loader" that is responsible for actually finding/loading the native part and patching it into the more "public" module namespace. This gets arbitrarily complex. To start, I would only make a small concession in this direction: separate the native modules into a namespace that is different from what you get when you run import mlir. This lets arbitrary python-goo be inserted into the mlir/__init__.py, which can grow to handle the environmental issues that arise with this kind of project. A convention that has worked for other things is to make the native part of the API be something like _mlir or even something more esoteric like _bundled_llvm10_python_mlir. The key is to create the separation point so it can be elaborated when needed, and to avoid nesting the native modules in the same top-level module as the python loader shim.
You can start with an mlir/__init__.py loader shim that is something like this:
from _mlir import *
Consider having very few load-bearing globals
You'll get to more of this when generating new IR (versus mostly parsing/inspecting). But consider these two code fragments:
op = build_my_op()
region = mlir.Region(op)
vs
op = build_my_op()
region = op.new_region()
For tightly coupled data structures like Operation, I generally prefer the latter (see the sketch after this list) because:
- It is syntactically not possible to create something that is going to access illegal memory (less error handling in the bindings, less testing, etc).
- It reduces the global API surface area for creating related entities. This makes it more likely that, if I'm constructing IR and get an Operation instance from somewhere that I know nothing about, I can just call methods on it to do what I want, versus needing to reach back into the global namespace and find the right Region class.
- It leaks fewer things that are in place for C++ convenience (i.e. default constructors to invalid instances).
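A minimal sketch of how the latter might be bound, assuming hypothetical PyOperation/PyRegion wrapper classes; the keep_alive policy ties the returned region's lifetime to its owning operation:

```cpp
#include <pybind11/pybind11.h>
namespace py = pybind11;

// Hypothetical C++-side wrappers (stand-ins for whatever the bindings use).
class PyRegion { /* wraps a region owned by an operation */ };
class PyOperation {
public:
  PyRegion createRegion() { return PyRegion(); }  // stub body
};

void bindOperationAndRegion(py::module &m) {
  // Note: no py::init<> on Region at all; from Python it can only be
  // obtained from an owning Operation, never constructed standalone.
  py::class_<PyRegion>(m, "Region");
  py::class_<PyOperation>(m, "Operation")
      .def("new_region", &PyOperation::createRegion,
           // Keep the operation (argument 1, i.e. self) alive at least as
           // long as the returned region (0), so the region cannot dangle.
           py::keep_alive<0, 1>());
}
```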
Be mindful of MLIR evolution
If I read this properly, the way you have it carefully avoids "the RTTI problem" because you are directly binding only non-polymorphic classes. If any of those become polymorphic in the future, the bindings would fail to link on non-RTTI builds of LLVM. For some of these low-level parts of MLIR, that is probably safe, if for no other reason than that they were designed to be non-polymorphic and it would be a fairly substantial change of design principles to make them so. However, outside of this very closely held core, there are virtual methods, and no one is generally asking "is it ok to add a virtual method?" When I've done similar things in the past, I've coded the bindings very defensively, relying on trampoline wrappers (probably in excess). I'm not sure what the right answer is, but I'm wary of such action-at-a-distance design constraints imposed on the rest of the system by a binding layer.
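To make the defensive style concrete (this is one reading of it, with hypothetical names): rather than binding an MLIR class directly, and thereby depending on its vtable/RTTI characteristics at link time, bind a plain non-polymorphic wrapper that forwards calls:

```cpp
#include <pybind11/pybind11.h>
#include <string>
namespace py = pybind11;

namespace mlir { class Operation; }  // the real class stays opaque here

// Non-polymorphic by construction: no virtuals here, so this translation
// unit does not care whether mlir::Operation ever grows a vtable.
class PyOperation {
public:
  explicit PyOperation(mlir::Operation *op) : op(op) {}
  std::string getName() const;  // forwards to the MLIR API in a .cpp file
private:
  mlir::Operation *op;
};

void bindOperation(py::module &m) {
  py::class_<PyOperation>(m, "Operation")
      .def_property_readonly("name", &PyOperation::getName);
}
```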
Style feedback
Properties vs get*() methods
I generally favor converting trivial methods like getContext(), getName(), isEntryBlock(), etc to read-only Python properties (i.e. context). It is primarily a matter of calling def_property_readonly vs def in your binding code and, imo, makes things feel much nicer on the Python side.
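Concretely, the difference in the binding code is one call (PyBlock is a hypothetical wrapper name):

```cpp
#include <pybind11/pybind11.h>
namespace py = pybind11;

class PyBlock {  // hypothetical wrapper
public:
  bool isEntryBlock() const { return true; }  // stub body for illustration
};

void bindBlock(py::module &m) {
  py::class_<PyBlock>(m, "Block")
      // Python reads this as `block.is_entry_block` (no call parens),
      // instead of def("isEntryBlock", ...) -> `block.isEntryBlock()`.
      .def_property_readonly("is_entry_block", &PyBlock::isEntryBlock);
}
```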
repr methods
I like things that print. If there is a reasonable printed form, I often find it to be a significant productivity boost to wire that to the __repr__ method (and verify it with a doctest).
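Assuming the wrapper can already print itself to a string (the printToString() helper below is a made-up stand-in), wiring it up is one line:

```cpp
#include <pybind11/pybind11.h>
#include <string>
namespace py = pybind11;

class PyOperation {  // hypothetical wrapper
public:
  std::string printToString() const { return "\"module\" ()"; }  // stub
};

void bindOperation(py::module &m) {
  py::class_<PyOperation>(m, "Operation")
      // `repr(op)` and the REPL echo now show the IR's printed form.
      .def("__repr__", &PyOperation::printToString);
}
```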
Snake case?
Open to debate, but just spelling things as snake_case instead of CamelCase makes it feel a lot less like a C++ mirror. I'm generally pro fairly close 1:1 API correspondence, but this kind of spelling transformation is systematic enough that I don't personally consider it a deviation.
Pseudo-containers
Right now iterating over blocks is something like:
region = ...
for block in region:
  pass
I have found it somewhat more readable/pythonic to make this more like:
region = ...
for block in region.blocks:
  pass
print(len(region.blocks))
print(region.blocks[0])
print(region.blocks[-1])
The way it is spelled now leaks things that are perfectly natural in STL-derived code (front, back, etc) but don't translate well, imo.
Most of this can be dealt with by constructing a couple of "magic" container templates that provide the methods that make things appear as more native python sequences; a partial example follows below.
This becomes more pronounced with parts of the data structure that model multiple containers (i.e. Block contains both operations and arguments). There are often a variety of quick shortcuts on the C++ side that aid conciseness in that language, but just mirroring these sugar methods, versus exposing the underlying structure, doesn't necessarily translate well.
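Here is a hedged sketch of one such "magic" container: a small C++ view class bound with __len__ and __getitem__ (negative indices included). Because __getitem__ raises IndexError past the end, Python's legacy sequence iteration protocol makes `for block in region.blocks` work as well. All names are illustrative:

```cpp
#include <pybind11/pybind11.h>
#include <vector>
namespace py = pybind11;

class PyBlock { /* hypothetical wrapper */ };

// A read-only sequence view over a region's blocks.
class PyBlockList {
public:
  explicit PyBlockList(std::vector<PyBlock> *blocks) : blocks(blocks) {}
  size_t size() const { return blocks->size(); }
  PyBlock &at(long index) const {
    // Support python-style negative indexing: blocks[-1] is the last block.
    if (index < 0) index += static_cast<long>(blocks->size());
    if (index < 0 || index >= static_cast<long>(blocks->size()))
      throw py::index_error();  // also terminates legacy iteration
    return (*blocks)[index];
  }
private:
  std::vector<PyBlock> *blocks;
};

void bindBlockList(py::module &m) {
  py::class_<PyBlockList>(m, "BlockList")
      .def("__len__", &PyBlockList::size)
      .def("__getitem__", &PyBlockList::at,
           // Keep the list (and what it views) alive while a block is held.
           py::return_value_policy::reference_internal);
}
```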
Provide one-stop helpers for common things
Make Context have a parse_asm or equivalent that avoids needing to explicitly construct a SourceMgr. Also, don't just make heavy-weight things like parsing a file be part of a constructor invocation; prefer to anchor as many of such things on the context as possible. This will help you in one of your next steps, when you have to map the DiagnosticEngine in some way that makes sense to Python (routing exceptions, etc).
It is fine to expose more of the guts of the low-level parts of source parsing/printing, but one-stop high-level APIs for the common things help immensely.
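A sketch of what that could look like on the binding side, with hypothetical wrapper names; the real body would set up the SourceMgr, invoke the parser, and route diagnostics, with failures surfacing as Python exceptions:

```cpp
#include <pybind11/pybind11.h>
#include <stdexcept>
#include <string>
namespace py = pybind11;

class PyModule { /* hypothetical wrapper for a parsed module */ };

class PyContext {  // hypothetical wrapper around MLIRContext
public:
  PyModule parseAsm(const std::string &asmText) {
    // Real implementation: construct a SourceMgr, run the parser, collect
    // diagnostics. Here we only sketch the error contract; pybind11
    // translates std::invalid_argument into a Python ValueError.
    if (asmText.empty())
      throw std::invalid_argument("could not parse MLIR asm");
    return PyModule();
  }
};

void bindContext(py::module &m) {
  py::class_<PyModule>(m, "Module");
  py::class_<PyContext>(m, "Context")
      .def(py::init<>())
      // One-stop helper: no SourceMgr or parser plumbing visible to Python.
      .def("parse_asm", &PyContext::parseAsm);
}
```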