The Jotai Collection of Executable Benchmarks

Dear LLVMers,

We have released a collection of executable benchmarks in the lac-dcc/jotai-benchmarks repository on GitHub. This collection consists of slightly more than 30,000 executable functions that we have taken from AnghaBench and adapted to execute. Our idea is to use the benchmarks in autotuning projects involving clang and LLVM's opt.

Each benchmark consists of a single file, which includes the input generator and everything needed to compile and run it. The benchmarks are deterministic and terminate with exit code 0 when compiled with "clang -g -O1 -fsanitize=address,undefined,signed-integer-overflow -fno-sanitize-recover=all".

Each benchmark exercises a single function, and that function does not call any other function. This is partly for safety, but also to prevent invisible instructions from polluting experiments.

To compile and run any of these benchmarks, you can do, for instance:

$> cd benchmarks/anghaLeaves/
$> clang extr_Arduinotestsdevicetest_libcmemmove1.c_mymemmove_Final.c
$> ./a.out 0 # run the bench with the first available input.

Most of the benchmarks have multiple inputs. Just run the executable without arguments to see how many inputs it provides. The benchmarks are also available in CompilerGym, so you can run them through that framework.

The project is a moving target: everything in it is still subject to change. Thus, feedback and contributions are much appreciated!

Kind regards,