llvm-exegesis questions

I’m working on a prototype of RISC-V target support in llvm-exegesis. The basics are up and running: it can measure the latency of simple computational instructions such as ADD or MUL, measure the latency of a hand-written assembly snippet, and so on. However, when it comes to more extensive measurements, I have some questions about how things were intended to work. I’m using the X86 target as a baseline of sorts, since it seems to have the most extensive feature coverage (and I can’t test on other targets at the moment). When I try to benchmark every instruction (running exegesis with -opcode-index=-1), I see that some things that look quite essential are actually missing.
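For context, the invocations I’m talking about look roughly like this (flag names as in the llvm-exegesis command-line docs; snippet.s is a placeholder):

```shell
# Measure latency of a single opcode by name:
llvm-exegesis -mode=latency -opcode-name=ADD

# Try to benchmark every opcode (the run this post refers to):
llvm-exegesis -mode=latency -opcode-index=-1

# Measure a hand-written snippet:
llvm-exegesis -mode=latency -snippets-file=snippet.s
```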

Instructions with memory address operands
Exegesis relies on the OPERAND_MEMORY OperandType to detect memory operands, but some targets (such as AArch64 and RISCV) don’t use it at all. And, unfortunately, even on X86 it doesn’t look fully supported: latency measurements report “unsupported memory operand”, and the target can’t generate all the operands required for X86 instructions that work with memory.
If we put the deep technical details of the memory subsystem aside for a moment and focus on the basic task of measuring the latency of some load/store instruction (in some sense), how is one expected to do it?
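For the record, the closest thing I found is the snippet-file route on X86, using the memory annotations from the llvm-exegesis docs; something along these lines (I may be off on the exact value/address encoding):

```asm
# LLVM-EXEGESIS-MEM-DEF buf1 4096 ff
# LLVM-EXEGESIS-MEM-MAP buf1 1048576
# LLVM-EXEGESIS-DEFREG RBX 100000
movq (%rbx), %rax
```

But that requires the user to hand-write the snippet per instruction; it’s not what -opcode-index=-1 can do automatically, and there’s no RISC-V equivalent yet.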

Data-dependent latencies
Instruction latencies can depend on input data, and for many instructions initializing inputs with zeroes doesn’t exercise the maximum latency (which is usually what you want for a scheduling model). I see some code that could probably be used to provide input data for floating-point instructions (bitcastFloatValue in llvm-exegesis/lib/RegisterValue.h), but it isn’t actually used.
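To illustrate what I mean by providing input data: the interesting inputs are specific bit patterns, not just zero. A small Python sketch of producing such patterns (which values actually hit slow paths is microarchitecture-specific, so these are just examples):

```python
import struct

def double_to_hex(x: float) -> str:
    """Return the raw IEEE-754 bit pattern of a 64-bit double as hex."""
    (bits,) = struct.unpack("<Q", struct.pack("<d", x))
    return f"{bits:016x}"

# Candidate input classes for a data-dependent latency sweep:
print(double_to_hex(0.0))       # 0000000000000000  (all-zero default)
print(double_to_hex(1.0))       # 3ff0000000000000  (normal value)
print(double_to_hex(5e-324))    # 0000000000000001  (smallest subnormal)
print(double_to_hex(float("nan")))
```

Subnormals and NaNs are the classic cases where FP latency diverges from the all-zero measurement.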
One could generate a few snippet files with the corresponding input data to achieve the required data coverage, but was it expected to work that way?
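Concretely, the workaround I have in mind is something like this (file names are hypothetical; each file would set its own LLVM-EXEGESIS-DEFREG values for one input class):

```shell
# One snippet file per interesting input class: normal, subnormal, NaN, ...
for f in fmul_normal.s fmul_subnormal.s fmul_nan.s; do
  llvm-exegesis -mode=latency -snippets-file="$f"
done
```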