Is anyone currently working on overhauling the command-line processing code?
We’re currently having some design issues with this component, and I’d like to start a larger conversation about it. For context, I am speaking from an “LLVM as a library” perspective rather than an “LLVM as a set of tools” perspective. In a nutshell, the problems we are facing are as follows:
1. As per bug 11944, the command-line processing logic accounts for a non-trivial number of global constructors and leads to memory that is allocated when the containing library is loaded and not freed until the containing library is unloaded. Granted, this is not a lot of data…
2. Command-line options are currently used for codegen options in inherently non-thread-safe ways.
(1) is mostly self-explanatory and has a bug about it, but (2) requires a bit more explanation. Take for example the “-unroll-threshold” option in LoopUnroll. If I’m setting up a pass pipeline programmatically, I add the LoopUnroll pass to the PassManager, but I cannot set the unroll threshold without calling cl::ParseCommandLineOptions() [or perhaps some ugly hacks that involve getting at and modifying the global state]. In addition to being awkward, this is not thread-safe! I cannot run two PassManager pipelines concurrently with different unroll threshold values. Here I am singling out the LoopUnroll pass, but this design is very prevalent throughout the LLVM analysis/transformation/codegen infrastructure. This has no effect on users of opt/llc as tools, but library users can be greatly affected by it.
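To make the problem concrete, here is a simplified model of the current pattern (plain C++, not actual LLVM code; the names and the default value are illustrative): the pass reads its threshold from a single global, exactly as a cl::opt with global storage behaves today.

```cpp
#include <cassert>

// Simplified stand-in for a cl::opt<unsigned> with global storage.
// The 150 default is hypothetical.
static unsigned UnrollThreshold = 150;

// Every pipeline, regardless of which thread or context it belongs to,
// observes the same global value.
unsigned runUnrollPass() { return UnrollThreshold; }
```

Because the only way to "configure" one pipeline is to mutate this shared state, a second pipeline wanting a different threshold races with the first.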
Ideally, what I would like to see is a separation between the definition of analysis/transformation/codegen options and their value storage. To get the conversation started, I would like to propose the following:
Make it easy for passes to declare arbitrary options, just like they do now
Let different pass pipelines have different sets of option values
Keep the option registry dynamic, so plugins loaded with “-load” can register new options
Let option values be parseable from the command-line (for opt, llc, …)
As a first design draft, I propose that cl::opt and friends be extended to support argument storage in an LLVMContext. Instead of storing the value directly in a cl::opt instance or specifying a global variable, this new storage would utilize a StringMap stored in an LLVMContext. As a consequence, parsing would be delayed until the option is read.
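A rough sketch of what that storage could look like, using std::map where LLVM would use a StringMap inside LLVMContext (the Context struct and getOption helper here are illustrative, not existing LLVM API):

```cpp
#include <cassert>
#include <cstdlib>
#include <map>
#include <string>

// Stand-in for LLVMContext's proposed option storage: raw option
// strings captured from the command line, keyed by option name.
struct Context {
  std::map<std::string, std::string> Options;
};

// Reading an option parses the stored string on demand; an unset
// option falls back to the compiled-in default.
unsigned getOption(const Context &Ctx, const std::string &Name,
                   unsigned Default) {
  auto It = Ctx.Options.find(Name);
  if (It == Ctx.Options.end())
    return Default;
  return static_cast<unsigned>(std::strtoul(It->second.c_str(), nullptr, 10));
}
```

Two contexts can then carry different values for the same option, which is exactly what the global-storage design cannot express.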
- Command Line Parsing
The cl::ParseCommandLineOptions() call would take an additional parameter: an LLVMContext reference. Global options would be written directly to their cl::opt instances as they are now, but per-context options would be copied into the LLVMContext instance. Tools would use something like:
LLVMContext &Ctx = getGlobalContext();
cl::ParseCommandLineOptions(argc, argv, "my tool", Ctx);
In addition, library users could use a new API, something like:

cl::ParseContextOptions(argc, argv, "my tool", Ctx);

which would only parse per-context options (and not overwrite any global state).
- Reading Option Values
For per-context options, an LLVMContext would be required when reading back the option value. Continuing the LoopUnroll example, you could write:
unsigned Threshold = UnrollThreshold(Mod.getContext());
This would get the “-unroll-threshold” option value for the context for the current module. Parsing of the value would be delayed until the value is read, since the value would need to be stored as a string in the context. Global options can be used as they currently are. It would be a run-time error to read a per-context option without a context parameter.
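The read side could be a small wrapper object that, instead of holding a value, looks its string up in the context and parses it only when read. The sketch below uses a simplified context in place of LLVMContext; ContextOption and SimpleContext are illustrative names, not existing LLVM API.

```cpp
#include <cassert>
#include <cstdlib>
#include <map>
#include <string>

// Simplified stand-in for LLVMContext's option storage.
struct SimpleContext {
  std::map<std::string, std::string> Options;
};

// A cl::opt-like handle: stores only the option name and default;
// the value lives in (and is lazily parsed from) the context.
template <typename T> struct ContextOption {
  std::string Name;
  T Default;
  T operator()(const SimpleContext &Ctx) const {
    auto It = Ctx.Options.find(Name);
    if (It == Ctx.Options.end())
      return Default;
    return static_cast<T>(std::strtoul(It->second.c_str(), nullptr, 10));
  }
};
```

Usage then mirrors the example above: declare `ContextOption<unsigned> UnrollThreshold{"unroll-threshold", 150};` once, and each pipeline reads `UnrollThreshold(Mod.getContext())` to get its own value.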
Could we just make all options per-context and assign tool options to the global context? That may require special handling for “-debug”, “-time-passes”, and friends.
Alternatively, we could try to eliminate the codegen options altogether and rely on per-module data, such as the new attributes functionality. But this seems less flexible and would require the Module to know which options to use; supporting command-line options would then require changes to the Module instance.
This is just a first-pass idea for making options more scalable to library users. Please let me know if you have any other ideas, or if someone is already working on this.