Set up ExecutionEngine according to actual machine capabilities

I am currently setting up my Module with

  auto triple = llvm::sys::getProcessTriple();
#ifdef _WIN32
  triple += "-elf"; // MCJIT cannot emit COFF, so force the ELF object format
#endif
  module->setTargetTriple(triple);



And my ExecutionEngine with

  auto ee = std::unique_ptr<llvm::ExecutionEngine>{
      llvm::EngineBuilder{std::move(module)}
          .setMCPU(llvm::sys::getHostCPUName())
          .create()};
This works fine on most machines. On some virtualized machines, however, it fails: the host CPU name implies AVX support, but AVX is in fact disabled, so running a JIT-compiled function raises an illegal-instruction fault.

Is there a better way to set up the ExecutionEngine so that such failures do not occur, while still using all available features?

That’s odd. getHostCPUName uses cpuid to query the actual capabilities of the CPU before selecting the name.
Does cpuid report incorrect information in these virtual machines?

In any case you can further tune CPU capabilities with EngineBuilder::setMAttrs.
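For instance, an individual feature can be masked off with an attribute string (a minimal sketch; `no_avx_attrs` is an illustrative helper, and `builder` stands for an existing llvm::EngineBuilder):

```cpp
#include <string>
#include <vector>

// Attribute strings use "+name" to force a feature on and "-name" to force
// it off; entries passed to setMAttrs override what the CPU name implies.
std::vector<std::string> no_avx_attrs() {
  return {"-avx", "-avx2"}; // force-disable AVX and AVX2
}
// With LLVM available: builder.setMAttrs(no_avx_attrs());
```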

It has no AVX according to CPU-Z (which I assume uses cpuid?), but it reports as an E5-2680, which normally does have AVX.


The proper way to do this in LLVM trunk is to use
sys::getHostCPUFeatures instead of the name and set the individual
feature bits via setMAttrs. getHostCPUName used to be aware of
AVX-disabled environments, but that was a bit of a hack and was removed.

- Ben

Ok, I now have something like the following, which appears to work (at least it doesn't crash):

  auto host_features = llvm::StringMap<bool>{};
  if (!llvm::sys::getHostCPUFeatures(host_features))
    throw std::runtime_error("could not retrieve host CPU features");

  auto host_attrs = llvm::SmallVector<std::string, 16>{};
  for (auto const & pair : host_features)
    host_attrs.push_back((pair.second ? "+" : "-") + pair.first().str());

  auto ee = std::unique_ptr<llvm::ExecutionEngine>{
      llvm::EngineBuilder{std::move(module)}
          .setMAttrs(host_attrs)
          .create()};

Too bad setMAttrs doesn't take a StringMap directly.
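The flattening loop is easy to factor out into a small adapter. A sketch (the helper name is illustrative, not LLVM API; std::map is used here for a self-contained example, whereas with llvm::StringMap<bool> the key accessor is pair.first(), a member function call, rather than pair.first):

```cpp
#include <map>
#include <string>
#include <vector>

// Flatten a feature map (name -> enabled) into the "+feat"/"-feat"
// attribute strings that EngineBuilder::setMAttrs expects.
std::vector<std::string> feature_attrs(const std::map<std::string, bool> &features) {
  std::vector<std::string> attrs;
  attrs.reserve(features.size());
  for (const auto &pair : features)
    attrs.push_back((pair.second ? "+" : "-") + pair.first);
  return attrs;
}
```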