Yes, the new type is simply a static object managed by Type and LLVMContext, and is only referred to by Values of fixed-point type. New instructions should not interfere with existing passes: opcodes are dispatched through switch statements, so existing passes would simply remain unaware of the new opcodes.
My main concern is that code evolution demands tracking future LLVM releases. Even if this got done locally, it would be a hassle to maintain if future releases did not include it at all. So even if only a small part of it got committed - say, the parts that do not concern or disturb the optimizers, like the type and the instructions themselves - that would be better than nothing. If some other party then wanted to contribute, this would not lead to enormous merge headaches.
Also, the DSPTargetMachine class would not interfere at all with LLVMTargetMachine. Personally, I think LLVM could become a leading DSP choice with these two features, but maybe that's just me?
I am investigating the possibilities of incorporating fixed
point support into the LLVM IR.
I think you should write a rationale explaining why you want to
introduce new types etc rather than using the existing integer
types, with intrinsic functions for the operations, or some other
such scheme. Introducing new types is hard work and creates a
maintenance burden for everyone, since they will need to be
properly supported by all parts of the compiler forever more. It
is therefore important to give a cogent explanation of why this
is the best approach, why the benefits outweigh the costs, and so on.
Also, can’t fixed point be handled entirely by the frontend?
You store the scaling factor as an attribute on the type.
When you perform operations on values of the same fixed-point type,
you can carry them out with ordinary integer arithmetic, and when
you need to convert between types you emit the appropriate shift
code. The emitted LLVM IR needs to know nothing about the scaling
factors involved.