Here are the summaries - thanks @AmirAshouri @boomanaiden154-1 @ChrisCummins @jdoerfert @ondrasej !
LLM Table
- How powerful LLMs can be in the domain of compiler optimization, i.e., code optimization, phase ordering of passes, etc.
- Potential ideas for future LLM applications:
  - Constructing better heuristics for register allocation?
  - Leveraging Meta’s pretrained LLM in the compiler domain
- LLVM and the vast number of knobs available under -O3
- Inherent shortcomings of LLVM’s phase ordering of passes, where some orderings can break code
- […] ideas for how LLMs could be useful for compiler writers, such as a large-scale pattern-matching tool to identify possible optimization opportunities (supplementing the role of the profiler), and we talked about the ingredients needed to replicate and expand on LLM work: a large dataset, base models, and compute.
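To make the phase-ordering point concrete, here is a toy sketch (not LLVM, all pass and instruction names are made up): two tiny "passes" over a list of instructions, where one order exposes a simplification and the other misses it.

```python
from itertools import permutations

def inline(code):
    # Inline a hypothetical helper: each "call" expands to "mul", "add0".
    out = []
    for inst in code:
        out += ["mul", "add0"] if inst == "call" else [inst]
    return out

def fold(code):
    # Remove no-op "add0" instructions (adding zero).
    return [inst for inst in code if inst != "add0"]

def cost(code):
    # Crude cost model: fewer instructions is better.
    return len(code)

program = ["load", "call", "add0", "store"]

# Running fold before inline misses the "add0" that inlining exposes.
for order in permutations([inline, fold]):
    code = program
    for p in order:
        code = p(code)
    print([p.__name__ for p in order], "->", code, "cost", cost(code))
```

Running `inline` then `fold` yields 3 instructions; the reverse order leaves a dead `add0` behind and costs 4, which is exactly the kind of ordering sensitivity an LLM (or any search) would have to navigate.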
Latency Prediction Table
- Dynamic causal modelling and modeling a “generic”, parameterized CPU, then learning the parameters; PMEvo was mentioned, as was using Bayesian inference
- discovering internal resource usage by adding one instruction at a time and measuring
- predicting a probability distribution instead of a single number
- the scalability (or not) of creating simulators by hand
- these are it’s in my notes, but don’t remember the specifics, if someone does, please update:
- cost modeling “per app”
- cost modeling for future chips
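The "add one instruction at a time and measure" idea can be sketched with a toy port model (hypothetical: the port map, instruction names, and scheduler are all made up for illustration): if running two kernels together is slower than the slower one alone, they contend for a shared resource.

```python
# Which execution ports each (made-up) instruction can issue on.
PORTS = {"mul": {0}, "div": {0}, "add": {0, 1}, "ld": {2}}

def cycles(kernel):
    """Simulate steady-state cycles/iteration: each instruction occupies
    one port for one cycle; assign greedily to the least-loaded legal port."""
    load = {0: 0, 1: 0, 2: 0}
    for inst in kernel:
        port = min(PORTS[inst], key=lambda p: load[p])
        load[port] += 1
    return max(load.values())

def contend(a, b):
    """Kernels contend if the combined kernel is slower than either alone."""
    return cycles(a + b) > max(cycles(a), cycles(b))

print(contend(["mul"], ["div"]))  # both need port 0 -> True
print(contend(["mul"], ["ld"]))   # disjoint ports    -> False
```

On real hardware the "measurement" would be wall-clock throughput of the combined loop, and repeated noisy runs are one reason predicting a distribution rather than a single number came up.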
Datasets Table
We mostly discussed capturing and generating inputs to make IR datasets executable, at different granularities
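A minimal sketch of the input-generation side, assuming a made-up signature format (the type names and generator table are hypothetical, just to show the shape of the idea): given a function's argument types, synthesize random values so the extracted function can actually be run.

```python
import random

# Hypothetical generators, one per argument type in our made-up signature.
GEN = {
    "i32": lambda rng: rng.randrange(-2**31, 2**31),
    "f64": lambda rng: rng.uniform(-1e6, 1e6),
    "ptr": lambda rng: [rng.randrange(256) for _ in range(rng.randrange(1, 16))],
}

def gen_inputs(signature, seed=0):
    """signature: list of type names, e.g. ["i32", "ptr"]. Seeded so the
    same dataset entry always gets the same inputs."""
    rng = random.Random(seed)
    return [GEN[t](rng) for t in signature]

print(gen_inputs(["i32", "f64", "ptr"]))
```

Captured inputs (recorded from real executions) would replace the random generators where available; generation fills the gaps.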
There was one more round table, but I haven’t received any notes from it yet; if you attended, please send a summary and I’ll update this post.
Thanks!