Hello all,
Continuing from the pre-announcement, the workshop will be on Tuesday, Oct 10 2023, in the afternoon (so a half-day workshop) - keep an eye on the Dev Meeting Agenda. For convenience, here’s the registration link.
As discussed, the workshop format is essentially an unconference. We will use the first hour or so for quick talks - 5 or so minutes each - by folks who want to share new results or ongoing work, or to set the stage for discussing a position or a topic. Then we can identify topics for roundtable discussions. At the end, the roundtable participants present the main summary or outcomes of their discussion. We will post this collage of summaries back on Discourse.
If you have topics you want discussed, feel free to message the organizers [1], if you haven’t already.
If you plan to present in the first part of the workshop - same as above, please let us [1] know by Sept. 30 if you can, so we make sure we budget the time correctly.
Last, if you want to attend but there’s no way you can make it in person, please also let us know and we’ll see what we can do.
Thank you, and looking forward to seeing you all on Oct. 10th!
[1] @mtrofin @jdoerfert
Thank you all for attending, and in particular, the speakers for the interesting and engaging presentations!
Here are the presentations.
I promised I’d also upload a summary from each of the round table sessions we had. We ran over time (good! lively discussions!) and didn’t have a chance to gather summaries on the day. So I have a favor to ask: if you have notes from your particular table, could you DM me (here on Discourse - and please also remind me which table it was), and I’ll collate them back here (easier to read a single, non-repeating summary later).
Thanks!
Here are the summaries - thanks @AmirAshouri @boomanaiden154-1 @ChrisCummins @jdoerfert @ondrasej !
LLM Table
- How powerful LLMs can be in the domain of compiler optimization, i.e., code optimization, phase ordering of passes, etc.
- Potential ideas for future LLM applications:
  - Construction of better heuristics for register allocation?
  - Leveraging Meta’s pretrained LLM in the compiler domain
- LLVM and the vast number of available knobs under O3
- Inherent shortcomings of LLVM where phase ordering of passes can break the code
- […] ideas for how LLMs could be useful for compiler writers, such as a large-scale pattern-matching tool to identify possible optimization opportunities (supplementing the role of the profiler - see the sketch after this list), and we talked about the ingredients needed to replicate and expand on LLM work: a large dataset, base models, and compute.
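To make the “LLM as a large-scale pattern matcher” idea a bit more concrete, here is a minimal sketch of what such a tool could look like. This is only an illustration of the idea from the discussion, not anything presented at the table: the `query_llm` function is a hypothetical placeholder for whatever model or API you would actually use, and the prompt and IR snippet are made up.

```python
def query_llm(prompt: str) -> str:
    """Hypothetical placeholder for a call to an LLM backend (local model, hosted API, ...)."""
    return "Possible opportunity: the load in the loop body looks loop-invariant."


def find_optimization_opportunities(ir_text: str) -> str:
    # Ask the model to scan a piece of LLVM IR and flag patterns that look
    # like missed optimization opportunities, referencing SSA names so a
    # compiler engineer can follow up.
    prompt = (
        "You are assisting a compiler engineer. Review the following LLVM IR "
        "and list concrete patterns that look like missed optimization "
        "opportunities (loop-invariant code, redundant loads, vectorizable "
        "loops, ...). Reference instructions by their SSA names.\n\n"
        + ir_text
    )
    return query_llm(prompt)


if __name__ == "__main__":
    # Tiny made-up IR snippet, just to show the shape of the input.
    example_ir = """
define i32 @sum(ptr %p, i32 %n) {
entry:
  br label %loop
loop:
  %v = load i32, ptr %p
  br label %loop
}
"""
    print(find_optimization_opportunities(example_ir))
```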
Latency Prediction Table
- Dynamic Causal Modelling & modeling a “generic”, parameterized CPU, and learning the parameters. PMEvo was mentioned; also using Bayesian inference
- discovering internal resource usage by adding one instruction at a time and measuring
- predicting a probability distribution instead of a single number (see the sketch after this list)
- the scalability (or not) of creating simulators by hand
- these are in my notes, but I don’t remember the specifics; if someone does, please update:
  - cost modeling “per app”
  - cost modeling for future chips
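For the “probability distribution instead of a single number” point, here is a toy sketch of one common way to do it: a small PyTorch model that maps a basic-block feature vector to a Gaussian (mean, variance) over latency, trained with the Gaussian negative log-likelihood. The feature extraction and the data are stand-ins; this is not a specific proposal from the table.

```python
import torch
import torch.nn as nn

class LatencyDistModel(nn.Module):
    """Predicts a Gaussian over latency rather than a point estimate."""
    def __init__(self, num_features: int):
        super().__init__()
        self.backbone = nn.Sequential(nn.Linear(num_features, 64), nn.ReLU())
        self.mean_head = nn.Linear(64, 1)     # predicted mean latency (e.g. cycles)
        self.logvar_head = nn.Linear(64, 1)   # predicted log-variance

    def forward(self, x):
        h = self.backbone(x)
        return self.mean_head(h), self.logvar_head(h)

def gaussian_nll(mean, logvar, target):
    # 0.5 * (log sigma^2 + (y - mu)^2 / sigma^2), averaged over the batch
    return 0.5 * (logvar + (target - mean) ** 2 / logvar.exp()).mean()

# Hypothetical training step on random stand-in data.
model = LatencyDistModel(num_features=16)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
features = torch.randn(32, 16)           # stand-in for per-block features
latencies = torch.rand(32, 1) * 100.0    # stand-in for measured latencies
mean, logvar = model(features)
loss = gaussian_nll(mean, logvar, latencies)
opt.zero_grad()
loss.backward()
opt.step()
```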
Datasets Table
We mostly discussed capturing and generating inputs to make IR datasets executable, at different granularities.
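As a rough illustration of the “make an IR snippet executable” step, here is a minimal sketch using llvmlite to JIT a standalone IR function and drive it with generated inputs. The IR string and function are made up for illustration; real datasets would of course extract IR from actual code and need a more careful input-generation strategy.

```python
import ctypes
import random
from llvmlite import binding as llvm

llvm.initialize()
llvm.initialize_native_target()
llvm.initialize_native_asmprinter()

# Made-up standalone IR function standing in for an extracted dataset entry.
ir = r"""
define i32 @add(i32 %a, i32 %b) {
entry:
  %c = add i32 %a, %b
  ret i32 %c
}
"""

mod = llvm.parse_assembly(ir)
mod.verify()
target_machine = llvm.Target.from_default_triple().create_target_machine()
engine = llvm.create_mcjit_compiler(mod, target_machine)
engine.finalize_object()

addr = engine.get_function_address("add")
cfunc = ctypes.CFUNCTYPE(ctypes.c_int32, ctypes.c_int32, ctypes.c_int32)(addr)

# Generate inputs and record (input, output) pairs, turning the IR snippet
# into something that can actually be run and measured.
for _ in range(5):
    a, b = random.randint(0, 100), random.randint(0, 100)
    print(a, b, "->", cfunc(a, b))
```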
There was one more round table, but I haven’t received any notes from it yet - please send them, and I’ll update.
Thanks!