Contextual type verification?

Over in the CIRCT project, we have two cases where we’d benefit from “contextual” type verification hooks, where a type is valid only in certain regions/scopes. Both cases are related to generating Verilog source code, which puts us a bit more in an “AST-like” domain than a traditional LLVM-like IR.

The two use cases are:

Type declarations: these are basically just typedefs, used to shorten names in specific cases. They can exist in scopes like this:

hw.module @testTypeAlias() {
  hw.typedecl @twoBitType : i2

  sv.initial {
    %x = hw.constant 1 : !hw.typealias<@twoBitType, i2>
  }
  ...
}

The design here is that the !hw.typealias type carries both the pretty type and the canonical type (à la Clang’s type system) in order to retain type sugar. In this case, we link up the usage with symbols stored in the type. Although the types are uniqued and immortal, they can only be used in certain scopes.
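To make that concrete, here is a rough C++ sketch of the shape of such a type. The names are illustrative only, not the exact CIRCT definition:

#include "mlir/IR/BuiltinAttributes.h"
#include "mlir/IR/Types.h"

namespace detail {
struct TypeAliasTypeStorage; // uniques the (symbol, canonical type) pair
} // namespace detail

// Illustrative shape of the alias type: it carries both the symbol of
// its hw.typedecl (the sugar) and the canonical type, so printing can
// keep the pretty name while analyses can always desugar.
class TypeAliasType
    : public mlir::Type::TypeBase<TypeAliasType, mlir::Type,
                                  detail::TypeAliasTypeStorage> {
public:
  using Base::Base;

  /// Symbol reference to the declaring hw.typedecl.
  mlir::SymbolRefAttr getRef();
  /// The canonical (desugared) type, e.g. i2 above.
  mlir::Type getCanonicalType();
};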

A second example is for parameterized modules, which are similar to templates in C++ (this slightly fudges syntax for clarity):

hw.module @genericInt<width: i32>(%inVal: !hw.int<width>) {
  %a = comb.add %inVal, %inVal : !hw.int<width>
}

In this case, we store the parameter name in a pile of dialect attributes, because there is a small expression grammar we need to support (more details here). Although the types/attributes are uniqued and immortal, they are only valid to use in certain modules.
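For readers more familiar with the software side, a purely illustrative C++ analogue of the example above:

#include <cstdint>

// Rough analogue: Width plays the role of the module parameter, and
// the port "type" depends on it.
template <uint32_t Width>
struct Int {
  uint64_t bits; // only the low Width bits are significant
};

template <uint32_t Width>
Int<Width> genericInt(Int<Width> inVal) {
  return {inVal.bits + inVal.bits}; // the comb.add above
}

The context sensitivity is the same in both worlds: just as Int<Width> only makes sense where Width is in scope, !hw.int<width> is only valid inside the module that declares the width parameter.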

Coming back to the problem statement, I’d like to diagnose invalid uses of these in a compositional way. There isn’t a good way to handle this in the ODS system: we can define type predicates with DialectType etc., but the validation hook isn’t passed the operation in question. Furthermore, verifying the first case efficiently would require something like SymbolUserOpInterface/verifySymbolUses.

Has anyone faced this before, and does anyone have a suggestion on the preferred way to go? One option is to introduce a new op interface, have all the corresponding operations conform to it, and have them verify their operand and result types. That wouldn’t allow the symbol-driven cases to work efficiently, but it would handle half of the problem. Any other thoughts?
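To make the interface idea concrete, here is a hypothetical sketch of the shared verifier such an interface could dispatch to (the names and the predicate are made up for illustration, not existing MLIR API):

#include "mlir/IR/Operation.h"
#include "mlir/Support/LogicalResult.h"

using namespace mlir;

// Hypothetical shared verifier for ops conforming to the interface:
// walk operand and result types and reject any contextual type that is
// not legal at this point in the region tree. isLegalAt is whatever
// dialect-specific predicate gets plugged in (e.g. "the enclosing
// hw.module declares this parameter").
static LogicalResult verifyContextualTypes(
    Operation *op, function_ref<bool(Type, Operation *)> isLegalAt) {
  for (Type type : op->getOperandTypes())
    if (!isLegalAt(type, op))
      return op->emitOpError("operand type ")
             << type << " is not valid in this context";
  for (Type type : op->getResultTypes())
    if (!isLegalAt(type, op))
      return op->emitOpError("result type ")
             << type << " is not valid in this context";
  return success();
}

As noted above, this handles the scope-driven checks but still does nothing to make the symbol-driven lookups efficient.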

-Chris

Yes, we have the same kind of problem in the torch dialect.

torch.class_type @__torch__.MyModule {
  torch.attr "b" : !torch.bool
  torch.attr "i" : !torch.int
}
torch.nn_module {
  torch.slot "b", %true : !torch.bool
  torch.slot "i", %int3 : !torch.int
} : !torch.nn.Module<"__torch__.MyModule">

func @f(%arg0: !torch.nn.Module<"__torch__.MyModule">) {
  %0 = torch.prim.GetAttr %arg0["i"] : !torch.nn.Module<"__torch__.MyModule"> -> !torch.int
  ...
}

In this case, the "__torch__.MyModule" in the !torch.nn.Module type refers to the symbol @__torch__.MyModule. We don’t have a notion of sugared types in our case, and we don’t store the canonical type on the type (the declaring op “is the canonical type”, in a sense; I’m not saying this is a great approach, and I’m happy to adapt to better-supported patterns as they emerge).

What I wished I had at the time I wrote that code was something like an optional SymbolTableCollection passed to the type verifier, based on where the type appears in the operation/region tree. From that, I would look up the type declaration and verify stuff. Essentially, SymbolUserOpInterface but for types.

For example, I would like to do something like getAttrOp.getOperandType().cast<Torch::NnModuleType>().getDeclaringOp() and then verify certain properties.
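A sketch of what that wished-for hook could look like. Everything here is hypothetical (NnModuleType::verifySymbolUses, ClassTypeOp, and getClassName are assumptions, not existing API); the point is that the type verifier receives a SymbolTableCollection so lookups are amortized, mirroring SymbolUserOpInterface::verifySymbolUses for ops:

#include "mlir/IR/SymbolTable.h"
#include "mlir/Support/LogicalResult.h"

using namespace mlir;

// Hypothetical "SymbolUserTypeInterface" hook: verify the type's
// symbol uses relative to where the type appears in the IR.
LogicalResult NnModuleType::verifySymbolUses(
    Operation *userOp, SymbolTableCollection &symbolTable) {
  // Resolve the class name stored in the type (e.g.
  // "__torch__.MyModule") against the nearest symbol table.
  auto decl = symbolTable.lookupNearestSymbolFrom<ClassTypeOp>(
      userOp, StringAttr::get(getContext(), getClassName()));
  if (!decl)
    return userOp->emitError()
           << "'" << getClassName()
           << "' does not reference a torch.class_type";
  // ...then check properties against the declaration, e.g. that the
  // attribute accessed by torch.prim.GetAttr exists with that type.
  return success();
}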

One annoying aspect, which we discussed in an ODM but never resolved, is that we currently need to duplicate the original type in the IR in order to parse and verify things like container types. I looked into solutions a bit, but never got anywhere. This is tracked in llvm/circt#1642: “[HW] TypeAliasType should reference a declaration, instead of keeping the inner type duplicated”.

That’s what I wished I had when I did ‘hw.typealias’ as well, and I think it would address what I mentioned above.
