recommended workaround for distinguishing signed vs unsigned integers

Since LLVM doesn't distinguish signed and unsigned integer types any more, what is the recommended way to represent a language that distinguishes them? Is it to introduce new types, e.g.:

%SignedI32 = type { i32 }

%UnsignedI32 = type { i32 }

?

It probably depends on whether types are dynamic or static.

If static, then the front-end should be keeping track of them anyway
so you should be able to stick with iN and emit the correct
operations.
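
For example, with signedness tracked in the front-end, the same plain
i32 values just get different instructions depending on the operation
(a minimal sketch; the value names are invented for illustration):

  ; %a and %b are ordinary i32 values; signedness lives in the front-end
  %sq  = sdiv i32 %a, %b         ; signed division
  %uq  = udiv i32 %a, %b         ; unsigned division
  %slt = icmp slt i32 %a, %b     ; signed less-than
  %ult = icmp ult i32 %a, %b     ; unsigned less-than
  %sw  = sext i32 %a to i64      ; widen a signed value
  %uw  = zext i32 %a to i64      ; widen an unsigned value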

You *could* also do some kind of type aliases like that but I wouldn't
bother: without the struct they're just stripped by LLVM, with the
struct you'd do more harm to readability and performance with the
extra extractvalue/insertvalue instructions than you'd gain. And you'd
probably want the optimisers to strip them as much as possible anyway.
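
To make that cost concrete, an addition of two wrapped values would
lower to something like this (again just a sketch with invented
names):

  %SignedI32 = type { i32 }

  ; unwrap, operate, rewrap -- extra instructions with no semantic gain
  %lhs = extractvalue %SignedI32 %x, 0
  %rhs = extractvalue %SignedI32 %y, 0
  %sum = add i32 %lhs, %rhs
  %res = insertvalue %SignedI32 undef, i32 %sum, 0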

If dynamic, then the actual representation has to include some way of
determining at runtime whether a signed or unsigned operation is
needed.
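
One common shape for that is a runtime tag you branch on before
picking the signed or unsigned instruction (purely illustrative; the
tag encoding here is an assumption):

  ; %tag is 0 for signed, 1 for unsigned (hypothetical encoding)
  %is_unsigned = icmp eq i8 %tag, 1
  br i1 %is_unsigned, label %udiv.bb, label %sdiv.bb

  sdiv.bb:
    %sq = sdiv i32 %a, %b
    br label %done

  udiv.bb:
    %uq = udiv i32 %a, %b
    br label %done

  done:
    %q = phi i32 [ %sq, %sdiv.bb ], [ %uq, %udiv.bb ]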

Cheers.

Tim.