[MLIR] Optimizing MLIR’s Presburger library

Description: MLIR’s Presburger Library, FPL (FPL: A Fast Presburger Library), provides mathematical abstractions for polyhedral compilation and analysis. The main abstraction the library provides is a set of integer tuples defined by a system of affine inequality constraints, together with standard set operations over such sets. The result of an operation is itself a set defined by a constraint system, possibly with more constraints than the inputs. When many set operations are performed in sequence, the constraint system can therefore grow very large, which hurts performance. There are several potential ways to simplify the constraint system, but each involves additional computation. Spending more time on aggressive simplification makes each individual operation slower, while insufficient simplification can make sequences of operations slow due to an explosion in constraint-system size. The aim of this project is to find the right balance between the two.
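For concreteness, here is a minimal sketch of what these set operations look like, assuming the upstream Presburger API under `mlir/Analysis/Presburger`; names like `PresburgerSpace::getSetSpace`, `IntegerPolyhedron::addInequality`, and `PresburgerSet::subtract` reflect my reading of the current headers and may differ across MLIR versions. The point is only that even a single subtraction can produce a result with more disjuncts and constraints than either input.

```cpp
// Minimal sketch, assuming the upstream mlir/Analysis/Presburger API.
// Inequalities are given as coefficients followed by a constant,
// i.e. {c, d} encodes c*x + d >= 0.
#include "mlir/Analysis/Presburger/IntegerRelation.h"
#include "mlir/Analysis/Presburger/PresburgerRelation.h"
#include "llvm/Support/raw_ostream.h"

using namespace mlir::presburger;

int main() {
  // A = { x : 0 <= x <= 10 }
  IntegerPolyhedron a(PresburgerSpace::getSetSpace(/*numDims=*/1));
  a.addInequality({1, 0});   //  x      >= 0
  a.addInequality({-1, 10}); // -x + 10 >= 0

  // B = { x : 3 <= x <= 7 }
  IntegerPolyhedron b(PresburgerSpace::getSetSpace(/*numDims=*/1));
  b.addInequality({1, -3});  //  x - 3 >= 0
  b.addInequality({-1, 7});  // -x + 7 >= 0

  // A \ B is {0, 1, 2} ∪ {8, 9, 10}: the result is a union of disjuncts,
  // each carrying its own constraint system, so the output can be larger
  // than either input.
  PresburgerSet setA(a), setB(b);
  PresburgerSet diff = setA.subtract(setB);

  llvm::errs() << "disjuncts in A \\ B: " << diff.getNumDisjuncts() << "\n";
  return 0;
}
```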

The goals of this project:

  • Understand the library’s performance in terms of runtime and output size.
  • Optimize the library by finding the best output size and performance tradeoff.

Expected outcomes:

  • Benchmarking the performance and output constraint complexity of the primary operations of the library (one possible complexity metric is sketched after this list).
  • Implementing simplification heuristics.
  • A better understanding of which simplification heuristics improve overall performance enough to be worth the additional computational cost.
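To make the first outcome concrete: one plausible (not prescribed) notion of output complexity is the number of disjuncts in a result plus the total number of constraints across them, which is roughly what Fig. 7 of the FPL paper reports. The helper below is a hedged sketch and not part of the library; a simplification heuristic would then be judged by how much it shrinks this measure relative to the extra time it costs.

```cpp
// Illustrative "output complexity" metric: disjunct count plus total
// constraints across disjuncts. Uses accessors I believe exist on
// PresburgerRelation / IntegerRelation (getNumDisjuncts, getDisjunct,
// getNumEqualities, getNumInequalities); the helper itself is hypothetical.
#include "mlir/Analysis/Presburger/PresburgerRelation.h"

using namespace mlir::presburger;

struct OutputSize {
  unsigned numDisjuncts = 0;
  unsigned numConstraints = 0; // equalities + inequalities over all disjuncts
};

// Hypothetical helper: summarize the size of the result of a set operation.
inline OutputSize measureOutputSize(const PresburgerRelation &result) {
  OutputSize size;
  size.numDisjuncts = result.getNumDisjuncts();
  for (unsigned i = 0, e = result.getNumDisjuncts(); i < e; ++i) {
    const IntegerRelation &disjunct = result.getDisjunct(i);
    size.numConstraints +=
        disjunct.getNumEqualities() + disjunct.getNumInequalities();
  }
  return size;
}
```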

Desirable skills: Intermediate C++, Experience in benchmarking

Project size: Large

Difficulty: Medium

Confirmed mentors: Kunwar Grover


Hello! I would love to contribute to this project.

Hi! :wave:

I am a master’s student in computer science and I am very interested in this project. I have a basic understanding of deep learning model inference optimization and I have experience using tools such as MLIR, TensorRT, and TVM. Currently I am trying to implement cutlass-based int4 precision inference in the PaddlePaddle project. I am also writing my own small MLIR project, which currently tries to implement quantization extensions on top of MLIR: my tiny mlir project.

I would like to know what I need to start doing or reading if I want to be involved in this project.

Also I’ve read that the preferred medium of contact is Slack. My email address is gilsaianine@gmail.com


Hi @gilsaia!

The first thing you will need to understand for this project is what the Presburger library is and what it does. I would recommend the FPL Tutorial given at EuroLLVM 2022 and the FPL paper (Sections 1, 2, and 6, plus a high-level glance at Section 4 for now). This should get you up to speed with what the library does.

After this, the first step would be to create a benchmarking tool for the library. This tool would measure the size of the output representation, as defined in Fig. 7 of the paper, and the runtime of the operations, as in Fig. 8. You should be able to understand this better once you read Section 6 (Evaluation) of the paper.
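Purely as a sketch of the shape such a tool could take (it does not exist yet), the snippet below times one operation with std::chrono and reports the size of its result. The subtract / getNumDisjuncts / getNumConstraints calls reflect my reading of the current PresburgerSet and IntegerRelation headers, and how input sets are collected (e.g. dumped from real polyhedral analyses) is deliberately left open.

```cpp
// Rough benchmarking-harness sketch (hypothetical, not an existing tool):
// measures runtime (cf. Fig. 8) and output size (cf. Fig. 7) for a single
// set subtraction. A real tool would repeat runs, cover all primary
// operations, and read its inputs from a corpus of constraint systems.
#include "mlir/Analysis/Presburger/PresburgerRelation.h"
#include "llvm/Support/raw_ostream.h"

#include <chrono>

using namespace mlir::presburger;

void benchmarkSubtract(const PresburgerSet &lhs, const PresburgerSet &rhs) {
  auto start = std::chrono::steady_clock::now();
  PresburgerSet result = lhs.subtract(rhs);
  auto stop = std::chrono::steady_clock::now();

  // Output-size side of the measurement: disjuncts and total constraints.
  unsigned numConstraints = 0;
  for (unsigned i = 0, e = result.getNumDisjuncts(); i < e; ++i)
    numConstraints += result.getDisjunct(i).getNumConstraints();

  auto us = std::chrono::duration_cast<std::chrono::microseconds>(stop - start);
  llvm::errs() << "subtract: " << us.count() << " us, "
               << result.getNumDisjuncts() << " disjuncts, "
               << numConstraints << " constraints\n";
}
```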

I don’t think it’s mentioned anywhere that the preferred medium of contact is Slack. I primarily use Discord or Email. Let me know if you prefer any of these or if you have any other preferences and we can use that for communication.

Thanks for the guidance @Groverkss
I will watch the video you mentioned and read the article, and also try to create the benchmark tool.
I may be a little slow due to being a little busy lately, but I’ll keep working on this project!
Also, I will try Discord to get in touch with you :wink:!

I am also interested in this project.

Hi! Glad to hear you are interested. Could you try following the same initial steps I recommended above and let me know if you encounter any problems and if you have any doubts?

I’m also available on LLVM Discord with the same username, “Groverkss”, so feel free to tag me and ask anything you need.

Hi, is this project still open to contributions?