LLVM Developers Meeting JIT BoF -- Request for Topics of Interest

Hi All,

The 2020 Virtual LLVM Developer’s Meeting is coming up the week after next. I’ll be hosting a JIT Birds-of-a-Feather session on Thursday the 8th at 10:55am PDT (See http://llvm.org/devmtg/2020-09/schedule/). I’m planning to run this more like a Round Table: Minimal introduction, plenty of time for discussion.

Does anyone have any LLVM JIT related topics that they would particularly like to discuss at this session? Any questions that they think could be best answered by a round-table style discussion? I’ll aim to rope in the right people and prepare answers to make sure we get the most out of the session.

Looking forward to seeing you all there!

– Lang.

Hi,

A topic that I'd find interesting is what it'd take to make it easier to
use profile-guided optimization in the context of a JIT. A few years back
I crudely hacked this together, but it was fairly ugly. It'd be nice to
make that easier. Possibly too complicated for that type of session?

I dimly recall that the main issue was that there was no good way to
extract the profile information, or to apply the profile data, without
going through files.

Greetings,

Andres Freund

Hi Andres,

> A topic that I’d find interesting is what it’d take to make it easier to
> use profile-guided optimization in the context of a JIT. A few years back
> I crudely hacked this together, but it was fairly ugly. It’d be nice to
> make that easier. Possibly too complicated for that type of session?

We couldn’t go into detail, but we could kick off a discussion and identify who is interested in working on this. My pitch: We should solve this by investing in JITLink and implementing profiling (and debugger support) via ObjectLinkingLayer::Plugins. Maybe a brief overview of / discussion on how these work would be useful?
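
To make the plugin idea concrete, here is a rough, heavily hedged sketch of a plugin that records final symbol addresses after linking, which is roughly the hook a profiler integration would need. This is not LLVM's actual profiling support: the class name is made up, the address-dumping pass is purely illustrative, and the set of notification methods (and which of them are pure virtual) follows the circa-2020 ObjectLinkingLayer::Plugin interface and may differ in newer versions.

#include "llvm/ExecutionEngine/Orc/ObjectLinkingLayer.h"
#include "llvm/Support/Debug.h"

using namespace llvm;
using namespace llvm::orc;

class ProfilerPluginSketch : public ObjectLinkingLayer::Plugin {
public:
  void modifyPassConfig(MaterializationResponsibility &MR, const Triple &TT,
                        jitlink::PassConfiguration &Config) override {
    // Post-fixup passes run after relocations are applied, so symbol
    // addresses are final. A real plugin would register them with a
    // profiler here instead of printing them.
    Config.PostFixupPasses.push_back([](jitlink::LinkGraph &G) -> Error {
      for (auto *Sym : G.defined_symbols())
        if (Sym->hasName())
          dbgs() << Sym->getName() << " @ " << Sym->getAddress() << "\n";
      return Error::success();
    });
  }

  // No-op overrides for the remaining notifications (required by the
  // circa-2020 interface; the exact set varies across LLVM versions).
  Error notifyFailed(MaterializationResponsibility &MR) override {
    return Error::success();
  }
  Error notifyRemovingModule(VModuleKey K) override {
    return Error::success();
  }
  Error notifyRemovingAllModules() override { return Error::success(); }
};

// Attach with: ObjLinkingLayer.addPlugin(std::make_unique<ProfilerPluginSketch>());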

– Lang.

Importing symbols into the JIT from an object file or static library…?

Hi,
I would like to discuss the future/potential use-cases of LLVM JIT. I’m also interested in reducing the compilation time of code in the JIT component, independent of the static compiler. Does that sound interesting? :)

Hi Geoff,

> Importing symbols into the JIT from an object file or static library…?

Sure! Are you interested in doing this with the C API, LLJIT, or raw OrcV2 components?

The high-level answer here (which we can dig into further in the BoF) is:

For object files:

  • For raw OrcV2 components you’ll want to create an RTDyldObjectLinkingLayer or ObjectLinkingLayer and use the ‘add’ method.
  • For an LLJIT instance you can just call the ‘addObjectFile’ method.
  • For the OrcV2 C API you can call the ‘LLVMOrcLLJITAddObjectFile’ function.
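
As a minimal sketch of the LLJIT route ("input.o" and "my_function" are placeholder names, and error handling is abbreviated):

#include "llvm/ExecutionEngine/Orc/LLJIT.h"
#include "llvm/Support/MemoryBuffer.h"
#include "llvm/Support/TargetSelect.h"

using namespace llvm;
using namespace llvm::orc;

Error addObjectExample() {
  InitializeNativeTarget();
  InitializeNativeTargetAsmPrinter();

  auto J = LLJITBuilder().create();  // Build a default LLJIT instance.
  if (!J)
    return J.takeError();

  auto Obj = errorOrToExpected(MemoryBuffer::getFile("input.o"));
  if (!Obj)
    return Obj.takeError();

  // Add the object file to the JIT's main JITDylib.
  if (auto Err = (*J)->addObjectFile(std::move(*Obj)))
    return Err;

  // Definitions from input.o are now visible to lookups.
  auto Sym = (*J)->lookup("my_function");
  if (!Sym)
    return Sym.takeError();

  return Error::success();
}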

For static libraries:

  • For raw OrcV2 components or an LLJIT instance you can attach a StaticLibraryDefinitionGenerator [1] to the JITDylib that you would like to load the library into. See [2] for an example.
  • For the OrcV2 C API we will just need to add a method to wrap the operation above. We already have ‘LLVMOrcCreateDynamicLibrarySearchGeneratorForProcess’. We should add counterparts to that to load dynamic and static libraries from paths.
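
And a minimal sketch of attaching a StaticLibraryDefinitionGenerator, along the lines of the lli example in [2] ("libfoo.a" is a placeholder path):

#include "llvm/ExecutionEngine/Orc/ExecutionUtils.h"
#include "llvm/ExecutionEngine/Orc/LLJIT.h"

using namespace llvm;
using namespace llvm::orc;

Error addArchiveExample(LLJIT &J) {
  // Create a generator that loads archive members on demand to provide
  // any symbols they define.
  auto G = StaticLibraryDefinitionGenerator::Load(J.getObjLinkingLayer(),
                                                  "libfoo.a");
  if (!G)
    return G.takeError();

  // Unresolved symbols looked up in the main JITDylib will now be
  // searched for in the archive.
  J.getMainJITDylib().addGenerator(std::move(*G));
  return Error::success();
}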

– Lang.

[1] https://github.com/llvm/llvm-project/blob/5aa56b242951ab0f0181386ea58509f19b09206e/llvm/include/llvm/ExecutionEngine/Orc/ExecutionUtils.h#L361
[2] https://github.com/llvm/llvm-project/blob/e7549dafcd33ced4280a81ca1d1ee4cc78ed253f/llvm/tools/lli/lli.cpp#L968

Hi Praveen,

> I would like to discuss the future/potential use-cases of LLVM JIT.

That sounds like a great topic. I can mention a couple of my use cases in the meeting, but I’ll be very keen to hear what everyone else has in mind.

> I’m also interested in reducing the compilation time of code in the JIT component, independent of the static compiler. Does that sound interesting? :)

That sounds great too. To date I’ve been focused on the API design problems, but it’s a great time for us to start monitoring/improving performance too.

– Lang.

We use LLJIT. Do addObjectFile() and StaticLibraryDefinitionGenerator
work for ELF objects?

Hi Geoff,

> We use LLJIT. Do addObjectFile() and StaticLibraryDefinitionGenerator work for ELF objects?

They do. :)

I’ve not tested StaticLibraryDefinitionGenerator extensively on Linux, but we have a regression test checking basic usage. If you run into any trouble at all please file a bug and assign it to me.

– Lang.

StaticLibraryDefinitionGenerator works as promised on ELF archives!

I have only encountered one problem. If a static library has not been compiled with -fPIC and uses symbols from a shared library, LLJIT does not complain, but the code may crash without warning when it is executed.

Geoff

Hi Geoff,

> I have only encountered one problem. If a static library has not been compiled with -fPIC and uses symbols from a shared library, LLJIT does not complain, but the code may crash without warning when it is executed.

Was the static library compiled with large code model too?

I think this is probably a RuntimeDyld bug: it’s not great at error reporting. A few people in the community are working on an ELF/x86-64 implementation of JITLink at the moment; once we have that you should get small code model support and better error handling.

– Lang.

> > I have only encountered one problem. If a static library has not been compiled with -fPIC and uses symbols from a shared library, LLJIT does not complain, but the code may crash without warning when it is executed.
>
> Was the static library compiled with large code model too?

I don’t think so… All libraries are compiled with gcc 6.3.1, and the gcc documentation says that the default code model is small. I don’t know what the default code model is for clang…

Hi Geoff,

I believe clang also defaults to small for ELF / x86-64 (I’d expect it to be the same as gcc).

RuntimeDyld currently assumes large code model. Testing and support for the small code model are spotty at best – it’s essentially unsupported. JITLink will support small code model (already does for MachO). The JITLink ELF/x86-64 implementation isn’t quite ready for testing yet, but I’ll make an announcement as soon as it is.
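
In the meantime, one hedged workaround sketch: configure the target machine LLJIT uses so that JIT'd code is emitted under the large code model, matching RuntimeDyld's assumption (the helper name here is made up, and this does not help with precompiled archives, which would still need to be rebuilt to match):

#include "llvm/ExecutionEngine/Orc/JITTargetMachineBuilder.h"
#include "llvm/ExecutionEngine/Orc/LLJIT.h"
#include "llvm/Support/CodeGen.h"

using namespace llvm;
using namespace llvm::orc;

Expected<std::unique_ptr<LLJIT>> createLargeCodeModelJIT() {
  auto JTMB = JITTargetMachineBuilder::detectHost();
  if (!JTMB)
    return JTMB.takeError();

  // Emit JIT'd code under the large code model, matching RuntimeDyld.
  JTMB->setCodeModel(CodeModel::Large);

  return LLJITBuilder()
      .setJITTargetMachineBuilder(std::move(*JTMB))
      .create();
}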

– Lang.

Hi,

> I'm also interested in reducing the compilation time of code in the JIT
> component, independent of the static compiler. Does that sound interesting? :)

That would be of interest to me too. One thing around this I have been
wondering about is whether it's realistic to merge the optimization and
code generation phases - right now we spend a lot of time redoing
analyses during codegen that we had already done during optimization.

Possibly also related to LLJIT design - having LLJIT first generate
minimally optimized code and then, while that is in use, doing optimization
and optimized codegen concurrently, would be neat. It feels like that'd
fit well into LLJIT, given that it already provides things like
background compile threads.

- Andres

Hi Andres,

> That would be of interest to me too. One thing around this I have been
> wondering about is whether it’s realistic to merge the optimization and
> code generation phases - right now we spend a lot of time redoing
> analyses during codegen that we had already done during optimization.

Sounds good to me. I think there are two sub-topics here:

(1) JIT specifics. E.g. What default optimization pipelines should we provide in the JIT? The standard 0/1/2/3/s options, or would it make sense to develop something JIT specific?
(2) General compile time improvements. Everyone will benefit from compile time improvements, but JIT clients are likely to be extra sensitive to it. Have we identified any problem areas or redundancies that would be of interest to the broader LLVM community, and that we could solicit help in fixing?
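
As a concrete starting point for (1), here is a hedged sketch of installing a custom pipeline via LLJIT's IRTransformLayer. It assumes an LLJIT version that exposes getIRTransformLayer(); the exact transform signature and the PassBuilder::OptimizationLevel spelling vary a little across LLVM versions, the installPipeline helper is made up, and the O1 new-PM pipeline is just one plausible "JIT-friendly" choice:

#include "llvm/ExecutionEngine/Orc/LLJIT.h"
#include "llvm/IR/Module.h"
#include "llvm/Passes/PassBuilder.h"

using namespace llvm;
using namespace llvm::orc;

static void optimize(Module &M) {
  // Standard new-PM boilerplate: build and cross-register the analysis
  // managers, then run a relatively cheap default pipeline.
  PassBuilder PB;
  LoopAnalysisManager LAM;
  FunctionAnalysisManager FAM;
  CGSCCAnalysisManager CGAM;
  ModuleAnalysisManager MAM;
  PB.registerModuleAnalyses(MAM);
  PB.registerCGSCCAnalyses(CGAM);
  PB.registerFunctionAnalyses(FAM);
  PB.registerLoopAnalyses(LAM);
  PB.crossRegisterProxies(LAM, FAM, CGAM, MAM);
  ModulePassManager MPM =
      PB.buildPerModuleDefaultPipeline(PassBuilder::OptimizationLevel::O1);
  MPM.run(M, MAM);
}

void installPipeline(LLJIT &J) {
  // Every module added to the JIT passes through this transform before
  // it reaches the compile layer.
  J.getIRTransformLayer().setTransform(
      [](ThreadSafeModule TSM, const MaterializationResponsibility &)
          -> Expected<ThreadSafeModule> {
        TSM.withModuleDo(optimize);
        return std::move(TSM);
      });
}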

> Possibly also related to LLJIT design - having LLJIT first generate
> minimally optimized code and then, while that is in use, doing optimization
> and optimized codegen concurrently, would be neat. It feels like that’d
> fit well into LLJIT, given that it already provides things like
> background compile threads.

Absolutely. Supporting this use-case was one of the motivations for the concurrency support in OrcV2. It’s doable at the moment, but it requires a fair bit of manual work on the client’s part. Implementation and API design in this area seem like good topics.
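
For reference, the existing building block here is LLJITBuilder's compile-thread support. A minimal sketch (the helper name is made up; note that with concurrent compilation each module should live in its own ThreadSafeContext):

#include "llvm/ExecutionEngine/Orc/LLJIT.h"

using namespace llvm;
using namespace llvm::orc;

Expected<std::unique_ptr<LLJIT>> createConcurrentJIT() {
  // Modules added to this JIT are compiled on background threads;
  // lookups block only until the requested symbols are ready.
  return LLJITBuilder()
      .setNumCompileThreads(2)
      .create();
}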

– Lang.

Hi,

> Sounds good to me. I think there are two sub-topics here:
> (1) JIT specifics. E.g. What default optimization pipelines should we
> provide in the JIT? The standard 0/1/2/3/s options, or would it make sense
> to develop something JIT specific?

Yeah. I have some hopes for the new PM making it easier to have
maintainable and customizable pipelines. I've not played around with it
much, largely because there was no C API last time I checked.

> (2) General compile time improvements. Everyone will benefit from compile
> time improvements, but JIT clients are likely to be extra sensitive to it.
> Have we identified any problem areas or redundancies that would be of
> interest to the broader LLVM community, and that we could solicit help in
> fixing?

I'd guess that some of the things that can be done to significantly
improve JIT performance aren't generally applicable to most other uses
of LLVM. E.g. the overhead of redoing the same analyses for codegen is
mostly an issue at higher optimization levels, and I assume that a large
portion of clang users building with -O3 will use LTO, where a split
between optimization and codegen seems necessary in a number of cases.

> > Possibly also related to LLJIT design - having LLJIT first generate
> > minimally optimized code and then, while that is in use, doing optimization
> > and optimized codegen concurrently, would be neat. It feels like that'd
> > fit well into LLJIT, given that it already provides things like
> > background compile threads.

> Absolutely. Supporting this use-case was one of the motivations for the
> concurrency support in OrcV2. It's doable at the moment, but it requires a
> fair bit of manual work on the client's part. Implementation and API design
> in this area seem like good topics.

Cool.

Greetings,

Andres Freund

Hi All,

I’ve listed the current topics of interest below, along with some notes on each. We only have 30 minutes so we’ll barely scratch the surface of these during the BoF itself. My main aims are for you to meet each other, identify potential areas of collaboration, identify things that I can do to unblock you, and get the ball rolling on some conversations that we can continue on the mailing lists. It looks like we’ll have an opportunity to set up impromptu meet-ups too (see https://www.youtube.com/watch?v=Qpbefagv6Ts) so if you want to do a deeper dive on a topic area we can set one or more of those up – just let me know in this thread.

Topics so far:

A very brief status update on OrcV2 and JITLink.

Future / Potential use-cases for the JIT.
– LLDB, the Swift interpreter, REPLs.

Compile time improvements
– Experimenting with / performance-testing custom JIT pipelines (any volunteers?)
– Compile-time improvements for the existing pipeline. This is probably a broad community project, but JIT users might have interesting workloads / results to contribute.
– Hiding compile times with concurrency.

Profiling, Debugging, PGO
– Profiling and Debugging support (especially via JITLink)
– How do we integrate PGO (any volunteers to experiment with this?)

Reoptimization
– What it is.
– Any volunteers to start working on API design and experiments?
– Is resource management a problem (do we need to free unoptimized code) and if so how do we make it safe?

I’ll add one more topic of my own here:

Documentation
– What would make life better for Orc beginners?
– What would make life better for Orc experts?
– What would make it easier for you to contribute to Orc?
– Any volunteers to help with the documentation effort?

Looking forward to seeing you all on Thursday.

– Lang.

Hey Lang,

Not surprising I guess – but lately I’m very interested in MaterializationUnit and MaterializationResponsibility xD

Not only because I’m totally stuck there again, but also because I wonder what is possible with them beyond what I failed to do.

I know that this might be too much for the BoF, but it might be nice for the documentation.

Kind greetings

Björn

Hi Bjoern,

> Not surprising I guess – but lately I’m very interested in MaterializationUnit and MaterializationResponsibility xD

They’re very central APIs. I’m happy to answer questions on them during the BoF, though we probably can’t go into great detail. Any discussion will help identify ways to improve the documentation though.

Alex Denisov has set up a JIT Compilers community page for the Virtual Dev Meeting: https://whova.com/portal/webapp/llvm_202010/CommunityBoard/topic/406390/ . I’ll be checking that periodically for discussion (~10am to 6pm PDT).

Praveen has proposed a meet-up for JIT clients during the coffee break tomorrow – I’ll be there for that if anyone wants to drop in. I’m also happy to schedule Zoom meetings for more detailed discussions of other topics. If anyone wants to propose one, post on the community page above – I’ll be free from 10am to 6pm PDT tomorrow and Thursday (outside the BoF itself).

– Lang.