I’m developing a custom out-of-tree MLIR frontend for a specialized neural-network model format. The current goal is a program that parses this custom model file and converts it into MLIR using a custom dialect specific to the model format.
Here’s my current code snippet:
```cpp
// Register my custom dialect plus all upstream dialects.
mlir::DialectRegistry registry;
registry.insert<mlir::MyDialect>();
mlir::registerAllDialects(registry);
mlir::MLIRContext ctx(registry);
// Parse the custom model format into an MLIR module.
mlir::ModuleOp moduleOp = readFromFile(ctx, inputFilename);
// Print in the pretty (non-generic) op form.
moduleOp->print(output->os(), mlir::OpPrintingFlags{}.printGenericOpForm(false));
```
The `readFromFile` function is responsible for reading the model format and converting it to MLIR. Inside the module, I want to use `func.func` to represent the model code, and I create it like this:
```cpp
return builder.create<func::FuncOp>(loc, funcName, funcType);
```
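For context, here is a condensed sketch of the surrounding builder code (the helper name `buildModelFunc` is a placeholder, not my real code; the signature matches the `mnist` function visible in the output below):

```cpp
#include "mlir/Dialect/Func/IR/FuncOps.h"
#include "mlir/IR/Builders.h"
#include "mlir/IR/BuiltinTypes.h"

// Hypothetical, condensed version of my function-building step.
mlir::func::FuncOp buildModelFunc(mlir::OpBuilder &builder, mlir::Location loc) {
  // The model takes one tensor input and, for now, returns nothing.
  mlir::Type inputType =
      mlir::RankedTensorType::get({784, 1}, builder.getF32Type());
  mlir::FunctionType funcType = builder.getFunctionType({inputType}, {});
  // The body is filled in later from the parsed model.
  return builder.create<mlir::func::FuncOp>(loc, "mnist", funcType);
}
```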
However, I encountered an issue with the following error message:
```
LLVM ERROR: Building op `func.func` but it isn't registered in this MLIRContext: the dialect may not be loaded or this operation isn't registered by the dialect. See also https://mlir.llvm.org/getting_started/Faq/#registered-loaded-dependent-whats-up-with-dialects-management
```
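My understanding of the FAQ entry linked in the error is that inserting a dialect into the `DialectRegistry` only *registers* it; the context must also *load* it before any of its ops can be built. One way to do that for a single dialect (sketch, using the `func` dialect from my case) would be:

```cpp
// Explicitly load one registered dialect into the context.
ctx.getOrLoadDialect<mlir::func::FuncDialect>();
```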
To address this error, I added a `ctx.loadAllAvailableDialects()` call right after constructing the context, which resolved the previous issue. However, I now face a new problem: despite passing `printGenericOpForm(false)`, the printed module still comes out in the generic operation form. Here’s an example of the undesired behavior:
"builtin.module"() ({
"func.func"() <{function_type = (tensor<784x1xf32>) -> (), sym_name = "mnist"}> ({
}) : () -> ()
}) : () -> ()
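For reference, this is roughly the pretty form I expected instead (the function body is elided because it is still empty at this point):

```mlir
module {
  func.func @mnist(%arg0: tensor<784x1xf32>) {
    // ... model body ...
  }
}
```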
I couldn’t find any `loadAllAvailableDialects()` calls in `MlirOptMain`, so I suspect I’m doing something wrong in my initial setup, and that this is what led to the error above. I’m seeking guidance on how to resolve this issue.