[ThinLTO] Import functions to enable function specialization in ThinLTO

Hi all,

Previously, I sent a mail asking for opinions on how to import functions to enable function specialization in ThinLTO.
Maybe the question was too open-ended, or the function specialization pass hadn't been checked in at that time, so I didn't get any responses.

Now the function specialization pass has been checked in, and I have also made a downstream version of it.
So I have updated my patches. The first patch extracts the analysis part of function specialization into an analysis pass
(⚙ D105524 [FuncSpec] Extract the analysis part as an analysis pass).

@sjoerd.meijer suggested that it may be better to ping this thread with more information, to make us confident we are
heading in the right direction.

So I have updated this one: ⚙ D105966 [WIP] [ThinLTO] Importing function by function specialization, which contains the detailed description and numbers.
Responding in this thread or on that review page is equally fine with me.

BTW, could anyone suggest some people who are familiar with ThinLTO?

Thanks,
Chuanqi

I'd try to summarise the heuristics you propose to add, rather than focusing on the implementation details to start with. I think it might be helpful to motivate a set of heuristics first. You should also be able to collect and provide data on the effectiveness of those heuristics, like the number of additional functions imported, compared to the number of additionally specialised functions.

Also, the scope/focus on function specialisation seems a bit narrow for substantially increasing the number of imported functions. Do you think the proposed heuristics would also benefit other optimisations, like regular IPSCCP?

Cheers,
Florian

Hi Florian,

Thanks for looking into this!

I’d try to summarise the heuristics you propose to add, rather than focusing on the implementation details to start with.

Thanks, I agree. The direction is more important than the details.

You should also be able to collect and provide data on the effectiveness of those heuristics, like the number of additional functions imported, compared to the number of additionally specialised functions.

If I understand correctly, you mean that I should collect these numbers after we have summarised the heuristics we decide to use, rather than collecting them now.
That makes sense to me, since I think we should gather statistics after we have decided on the direction.

Also, the scope/focus on function specialisation seems a bit narrow for substantially increasing the number of imported functions.

I think there are two aspects:

  • It depends on how much we could gain from function specialization.
  • It depends on the cost model for importing more functions.

For the first aspect, I think function specialization has potential because ipa-cp plays an important role in GCC.
And interprocedural value range propagation based on function specialization also seems very promising to me.
That's the reason I want to look into function specialization.

For the second aspect, I think we need more time to tune it. In other words, we need to decide the heuristics first.
I know it's hard to decide heuristics without measuring, but the work process I imagine would be:

Get a set of heuristics -> tune the cost model to decrease the overhead -> if it succeeds, we are done -> else go back to the first step.

Do you think the proposed heuristics would also benefit other optimisations, like regular IPSCCP?

Yes, I think regular IPSCCP could benefit from the proposed heuristics. Out of curiosity, I guess we could even replace regular IPSCCP
once the function specialization pass is strong enough. What's your opinion?

Finally, let me clarify the intention of this mail. I'm not asking for a solution right now; I know that is hard for everyone.
The intention is to reach a consensus on whether importing functions via heuristics for function specialization is valuable.
If so, we could continue the review process for ⚙ D105524 [FuncSpec] Extract the analysis part as an analysis pass, which extracts the analysis part from the function
specialization pass. I believe this patch should be NFC and harmless.

Of course, it would be much better if we could also discuss the heuristics and the cost model.
Let me briefly introduce the heuristics and the cost model I propose.
The heuristics are:

  • Specialize Function Cost. An unsigned number, which is an estimate of the cost of cloning one function.
  • Base Bonus for specializing a specific argument. A map from ArgNo to the corresponding base bonus. Here the base bonus means the bonus we can calculate by visiting the function body alone (in other words, without needing to see the call sites).
  • ArgUsage. A map from ArgNo to the extra bonus for each call site. The extra bonus means the bonus we can calculate for a specific call site. For example, if a function is passed at the call, we can estimate the likelihood that it gets inlined at that specific call site.

The cost model is the same as the current function specialization pass, and we can adjust it in the future.

The first one, Specialize Function Cost, is the number of instructions in the function multiplied by a factor.

The second one, Base Bonus, is currently calculated from how many possible loads there are for the argument and its aliases.

I plan to cover more situations, for example:


int foo(bool cond) {
  if (cond) { // We could get more base bonus from the average instruction count in the branches.
  } else {
  }
}

Or


int foo(int cond) {
  if (cond > 10) { // If cond is constant, we can evaluate the condition, so we could get more bonus.
  } else {
  }
}

The third one, ArgUsage, describes the information at call sites, which we can use to calculate the extra bonus.
Right now it only contains the instruction count when the corresponding argument is a function, so that we can credit the extra bonus from the potential inlining.
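To make these three pieces concrete, here is a minimal C++ sketch of what the per-function data could look like. All names here (FuncSpecCostInfo, SpecCostFactor, and so on) are hypothetical illustrations, not the actual types in the patches:

#include <cstdint>
#include <map>

// Hypothetical per-function summary for the proposed heuristics.
struct FuncSpecCostInfo {
  // 1) Specialize Function Cost: instruction count scaled by a factor.
  uint64_t SpecializeCost;

  // 2) Base Bonus: ArgNo -> bonus computable from the function body alone,
  //    e.g. the number of possible loads of the argument and its aliases.
  std::map<unsigned, uint64_t> BaseBonus;

  // 3) ArgUsage: ArgNo -> per-argument info used to compute the extra bonus
  //    at each call site; currently the instruction count of a function-typed
  //    argument, to estimate the benefit of inlining it there.
  std::map<unsigned, uint64_t> ArgUsage;
};

// Hypothetical cost computation mirroring the prose above.
uint64_t specializeFunctionCost(uint64_t NumInsts) {
  const uint64_t SpecCostFactor = 3; // illustrative factor only
  return NumInsts * SpecCostFactor;
}

Roughly, a specialization would pay off when the base bonus plus the call-site extra bonus outweighs the specialize cost.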

Thanks,
Chuanqi

gentle ping~

The motivation for this work was getting this working with ThinLTO, which I know very little about, so that's why I recommended getting buy-in from ThinLTO folks on the general direction.

This general direction is that we change the cost-model, import more functions as a result, so that we can do more transformations. The questions that need answering are: what is this going to cost in compile-times, and what is the performance benefit?

To have a more efficient discussion on this, I think it is best if you prototype your proposal. This makes things more concrete and also allows us to get some first costs and benefits of the approach to test the whole idea. Without this, it all just remains a bit abstract.

Regarding https://reviews.llvm.org/D105524, yes it is NFC and harmless, but it is not important. I.e., it is trivial and I'm sure we can get it approved in no-time, but more important are the bigger questions that need answers first. It's also better to avoid unnecessary churn if things turn out to be different.

My 2 cents.
Sjoerd.

I can answer questions from the ThinLTO standpoint. Missed the earlier emails on this, but I just skimmed the other llvm-dev email and the patch. I have a high level question and can give some thoughts on the approach from a ThinLTO standpoint.

One important thing to note is that we import functions with available_externally linkage, because they still live in their original translation unit. The intention is to make them available for inlining, and then they get dropped. But specialization presumably mainly benefits the out of line copy, I think? So I’m not sure that the approach of allowing the specialization candidates to get imported will ultimately do what you want. It probably would be better to allow the necessary info to be propagated via the index back to the original TU so that it can be specialized in its original location.

Agree with others that the solution should be prototyped so you can show some results in terms of not only the benefit, but also the cost. I.e. adding information about all of the function arguments will increase the size of the index - how significant is that increase? We want to ensure that the thin link portion of the ThinLTO build stays efficient as that is the only monolithic, serial portion of a ThinLTO build. That being said, I think it is a great idea to try to extend function specialization to ThinLTO if it is giving good benefits with regular LTO.

Teresa

Many thanks for commenting on the ThinLTO part and for the suggestions on how to approach this!
I also think that specialisation benefits the out of line copy.

Cheers,
Sjoerd.

Sorry for the late reply. Thanks everyone for looking into this!

To Sjoerd Meijer:

This general direction is that we change the cost-model, import more functions as a result, so that we can do more transformations. The questions that need answering are: what is this going to cost in compile-times, and what is the performance benefit?

For the cost and benefit, I recorded them in https://reviews.llvm.org/D105966. In short, for compile time there is no significant change except that 502.gcc_r increased by 30%. For code size, the average increase is 6.7% and the maximum is 19%. For the performance benefit, the ThinLTO function specialization version gains all of the benefit of the FullLTO version.
Obviously the numbers are not very satisfying, which is the reason I marked that patch as WIP. The things I want to say are:

  • The increase in compile time and code size mainly comes from the newly imported functions being specialized, not from calculating and storing the extra heuristics.
  • The main purpose of this mail is to make sure we are heading in the right direction, in other words, that ThinLTO for function specialization is worth pursuing.

To Sjoerd Meijer:

To have a more efficient discussion on this, I think it is best if you prototype your proposal. This makes things more concrete and also allows us to get some first costs and benefits of the approach to test the whole idea. Without this, it all just remains a bit abstract.

Yeah, I think I prototyped the proposal in D105524, D105966 and D107136. Since the code for function specialization changes relatively fast, the patches may not apply directly; I will try to rebase them soon.
Another way to measure it is to git reset --hard d4840175c95f6edcba21baae411589468.

To Sjoerd Meijer:

Regarding https://reviews.llvm.org/D105524, yes it is NFC and harmless, but it is not important. I.e., it is trivial and I’m sure we can get it approved in no-time

The problem is that function specialization changes fast, so this patch couldn't simply be approved as-is, and keeping it rebased is an extra cost. If we decide that ThinLTO for function specialization is a long-term direction, we should check it in first to avoid that extra cost.

To Teresa Johnson:

One important thing to note is that we import functions with available_externally linkage, because they still live in their original translation unit. The intention is to make them available for inlining, and then they get dropped. But specialization presumably mainly benefits the out of line copy, I think? So I’m not sure that the approach of allowing the specialization candidates to get imported will ultimately do what you want.

For imported functions with available_externally linkage, the current strategy is that if we decide to specialize one, we copy the imported function and change the linkage of the clone to local, so that it won't affect other translation units. It looks fine to me; correct me if there are any problems, since linking is really magic.
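For reference, here is a minimal sketch of that clone-and-internalize step using LLVM's cloning utility (the helper itself is hypothetical; only CloneFunction and the linkage change reflect the strategy described):

#include "llvm/IR/Function.h"
#include "llvm/Transforms/Utils/Cloning.h"

using namespace llvm;

// Hypothetical helper: clone an imported (available_externally) function and
// make the clone local, so the specialized copy cannot affect other TUs.
static Function *cloneForSpecialization(Function &Imported) {
  ValueToValueMapTy VMap;
  Function *Clone = CloneFunction(&Imported, VMap);
  Clone->setLinkage(GlobalValue::InternalLinkage);
  return Clone;
}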

To Teresa Johnson:

It probably would be better to allow the necessary info to be propagated via the index back to the original TU so that it can be specialized in its original location.

That looks hard to do. I think the key point is that we make the decision to specialize functions during ThinLTO compilation, which runs in parallel with the other compilation units. It looks like we would need cross-module synchronization, which sounds like it would violate the design principles of ThinLTO.

To Teresa Johnson:
Agree with others that the solution should be prototyped so you can show some results in terms of not only the benefit, but also the cost. I.e. adding information about all of the function arguments will increase the size of the index - how significant is that increase? We want to ensure that the thin link portion of the ThinLTO build stays efficient as that is the only monolithic, serial portion of a ThinLTO build.

It's a great point to measure the size increase of the index; I forgot to measure it before. Here are the numbers for SPEC2017 int. The method I used to measure the index size is to compare the sizes of the .o objects compiled with -flto=thin. Correct me if the measurement is not precise.

The results show that for SPEC2017 intrate, the overall size increase is about 2.8%. More precisely:

benchmark          size change for all the .o files
500.perlbench_r    +1.9%
502.gcc_r          +3.8%
505.mcf_r          +3%
520.omnetpp_r      +12%
525.x264_r         +2.9%
531.deepsjeng_r    +1.4%
541.leela_r        +2%

Benchmarks with size changes of less than 1% are omitted.

To Teresa Johnson:

That being said, I think it is a great idea to try to extend function specialization to ThinLTO if it is giving good benefits with regular LTO.

Thanks very much for your support of the direction!

Thanks,
Chuanqi

Sorry for the slow reply, I’ve been out of office. Some replies inline below. Teresa

Sorry for the late reply. Thanks everyone for looking into this!

To Sjoerd Meijer:

This general direction is that we change the cost-model, import more functions as a result, so that we can do more transformations. The questions that need answering are: what is this going to cost in compile-times, and what is the performance benefit?

For the cost and benefit, I recorded them in https://reviews.llvm.org/D105966. In short, for compile time there is no significant change except that 502.gcc_r increased by 30%. For code size, the average increase is 6.7% and the maximum is 19%. For the performance benefit, the ThinLTO function specialization version gains all of the benefit of the FullLTO version.
Obviously the numbers are not very satisfying, which is the reason I marked that patch as WIP. The things I want to say are:

  • The increase in compile time and code size mainly comes from the newly imported functions being specialized, not from calculating and storing the extra heuristics.
  • The main purpose of this mail is to make sure we are heading in the right direction, in other words, that ThinLTO for function specialization is worth pursuing.

To Sjoerd Meijer:

To have a more efficient discussion on this, I think it is best if you prototype your proposal. This makes things more concrete and also allows us to get some first costs and benefits of the approach to test the whole idea. Without this, it all just remains a bit abstract.

Yeah, I think I prototyped the proposal in D105524, D105966 and D107136. Since the code for function specialization changes relatively fast, the patches may not apply directly; I will try to rebase them soon.
Another way to measure it is to git reset --hard d4840175c95f6edcba21baae411589468.

To Sjoerd Meijer:

Regarding https://reviews.llvm.org/D105524, yes it is NFC and harmless, but it is not important. I.e., it is trivial and I’m sure we can get it approved in no-time

The problem is that function specialization changes fast, so this patch couldn't simply be approved as-is, and keeping it rebased is an extra cost. If we decide that ThinLTO for function specialization is a long-term direction, we should check it in first to avoid that extra cost.

To Teresa Johnson:

One important thing to note is that we import functions with available_externally linkage, because they still live in their original translation unit. The intention is to make them available for inlining, and then they get dropped. But specialization presumably mainly benefits the out of line copy, I think? So I’m not sure that the approach of allowing the specialization candidates to get imported will ultimately do what you want.

For imported functions with available_externally linkage, the current strategy is that if we decide to specialize one, we copy the imported function and change the linkage of the clone to local, so that it won't affect other translation units. It looks fine to me; correct me if there are any problems, since linking is really magic.

Ah ok, I missed that aspect of the approach. Yes, making them internal copies should work, in that they will get kept and linking will work fine. However, this could definitely exacerbate the code size increases from specialization. Here is a possible case:

Assume a.cc:A() calls b.cc:B() (i.e. A and B are in different TUs initially). Let's say B could be specialized when called via A. And further, assume A is small and hot, and is called from many many other TUs. Normally, we would be likely to import A aggressively since it is likely to be inlined (as it is small and hot). But if B is very large we may not want to normally import and inline it (and as such function specialization is very desirable). With this approach, B could be imported and specialized as a local copy many many times (wherever A is imported and inlined). It is possible that linker ICF could merge these copies, but that's not enabled by default and I'm not sure if it would always be able to do that. This could seem like an extreme example, but we have applications with 20K+ TUs internally.
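To illustrate the shape of this scenario, here is a hypothetical source-level sketch (all file and function names are illustrative only):

// b.cc: B is large, so importing it for inlining is normally rejected,
// but specializing it on the callback is very profitable.
int B(int (*Callback)(int), int N) {
  int Sum = 0;
  for (int I = 0; I < N; ++I)
    Sum += Callback(I); // ... imagine a very large body here ...
  return Sum;
}

// a.cc: A is small and hot, so it is imported and inlined aggressively.
static int Step(int X) { return X + 1; }
int A(int N) { return B(Step, N); }

// c.cc, d.cc, e.cc, ...: every TU that imports and inlines A may also pull
// in B and create yet another internal, specialized copy of it.
int UseA(int N) { return A(N); }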

To Teresa Johnson:

It probably would be better to allow the necessary info to be propagated via the index back to the original TU so that it can be specialized in its original location.

That looks hard to do. I think the key point is that we make the decision to specialize functions during ThinLTO compilation, which runs in parallel with the other compilation units. It looks like we would need cross-module synchronization, which sounds like it would violate the design principles of ThinLTO.

Actually, ThinLTO does utilize cross-module synchronization. The synchronization is implemented via the index provided to the ThinLTO backends in which whole program optimization decisions are recorded.

I could envision providing via the index a way to record the arg number and specialized value on the FunctionSummary for the function to be specialized, along with a globally unique suffix to append (decided during the thin link), for each specialization, and then on each caller FunctionSummary, which callsite should call a specialized version (indicated by the globally unique suffix to apply to the called function name). I actually need to do something similar for context sensitive heap optimization (WIP), so presumably they could rely on a similar summary mechanism to coordinate the cross module renaming.

This approach will also require some additional info in the index for the callsite arguments that are constant (i.e. pass the constant instead of a marker that it is a constant) so we can make these decisions during the thin link.
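As a minimal sketch of what such summary extensions might look like (every type and field name here is hypothetical, not the actual ThinLTO index format):

#include <cstdint>
#include <string>

// Hypothetical record on the FunctionSummary of the function to be
// specialized, decided and recorded during the thin link.
struct SpecializationRecord {
  unsigned ArgNo;          // which argument is specialized
  uint64_t ConstantValue;  // the constant it is specialized on (simplified)
  std::string Suffix;      // globally unique suffix, e.g. ".specialized.1"
};

// Hypothetical record on each caller's FunctionSummary: redirect one call
// site to the specialized copy emitted in the callee's original TU.
struct CallSiteRedirect {
  unsigned CallSiteIndex;  // which call site in the caller
  std::string Suffix;      // suffix identifying the specialized callee
};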

It’s definitely a bit more work, but probably something we’d want to do longer term, rather than creating many specialized local copies, so I’d just make sure that the work done for specialization via importing is extensible to this approach.

To Teresa Johnson:
Agree with others that the solution should be prototyped so you can show some results in terms of not only the benefit, but also the cost. I.e. adding information about all of the function arguments will increase the size of the index - how significant is that increase? We want to ensure that the thin link portion of the ThinLTO build stays efficient as that is the only monolithic, serial portion of a ThinLTO build.

It's a great point to measure the size increase of the index; I forgot to measure it before. Here are the numbers for SPEC2017 int. The method I used to measure the index size is to compare the sizes of the .o objects compiled with -flto=thin. Correct me if the measurement is not precise.

The results show that for SPEC2017 intrate, the overall size increase is about 2.8%. More precisely:

benchmark          size change for all the .o files
500.perlbench_r    +1.9%
502.gcc_r          +3.8%
505.mcf_r          +3%
520.omnetpp_r      +12%
525.x264_r         +2.9%
531.deepsjeng_r    +1.4%
541.leela_r        +2%

The omnetpp increase is a little large, and it is likely more representative of large C++ applications. Do you know how much of this is due to the ArgUsage info vs the FuncSpecCostInfo? Not sure if there is a good way to shrink these though (I've only skimmed your patch).

Thanks,
Teresa

Thanks for your reply; it is meaningful and inspiring! It looks like I didn't understand ThinLTO well enough, and I need to redesign my proposal for how to import functions for specialization.

To Teresa Johnson:

One important thing to note is that we import functions with available_externally linkage, because they still live in their original translation unit. The intention is to make them available for inlining, and then they get dropped. But specialization presumably mainly benefits the out of line copy, I think? So I’m not sure that the approach of allowing the specialization candidates to get imported will ultimately do what you want.

For imported functions with available_externally linkage, the current strategy is that if we decide to specialize one, we copy the imported function and change the linkage of the clone to local, so that it won't affect other translation units. It looks fine to me; correct me if there are any problems, since linking is really magic.

Ah ok, I missed that aspect of the approach. Yes, making them internal copies should work, in that they will get kept and linking will work fine. However, this could definitely exacerbate the code size increases from specialization. Here is a possible case:

Assume a.cc:A() calls b.cc:B() (i.e. A and B are in different TUs initially). Let's say B could be specialized when called via A. And further, assume A is small and hot, and is called from many many other TUs. Normally, we would be likely to import A aggressively since it is likely to be inlined (as it is small and hot). But if B is very large we may not want to normally import and inline it (and as such function specialization is very desirable). With this approach, B could be imported and specialized as a local copy many many times (wherever A is imported and inlined). It is possible that linker ICF could merge these copies, but that's not enabled by default and I'm not sure if it would always be able to do that. This could seem like an extreme example, but we have applications with 20K+ TUs internally.

For the example, assume there are c.cc, d.cc and e.cc, all of which call A very often but none of which call B. In my original imagination, B would get imported into a.cc only. But from your description, it looks like B could get imported into c.cc, d.cc and e.cc as well. So it looks like ThinLTO would do importing in many iterations instead of one iteration. If that is the case, I agree that the current design is not scalable.

Actually, ThinLTO does utilize cross-module synchronization. The synchronization is implemented via the index provided to the ThinLTO backends in which whole program optimization decisions are recorded.
I could envision providing via the index a way to record the arg number and specialized value on the FunctionSummary for the function to be specialized, along with a globally unique suffix to append (decided during the thin link), for each specialization, and then on each caller FunctionSummary, which callsite should call a specialized version (indicated by the globally unique suffix to apply to the called function name). I actually need to do something similar for context sensitive heap optimization (WIP), so presumably they could rely on a similar summary mechanism to coordinate the cross module renaming.

This approach will also require some additional info in the index for the callsite arguments that are constant (i.e. pass the constant instead of a marker that it is a constant) so we can make these decisions during the thin link.

It's definitely a bit more work, but probably something we'd want to do longer term, rather than creating many specialized local copies, so I'd just make sure that the work done for specialization via importing is extensible to this approach.

If that is the case, I agree that we need to specialize functions in their own TUs. It also shows that I need to take more time to look into the details of the ThinLTO implementation, and that it will take more time to redesign and develop.

The omnetpp increase is a little large, and it is likely more representative of large C++ applications. Do you know how much of this is due to the ArgUsage info vs the FuncSpecCostInfo? Not sure if there is a good way to shrink these though (I've only skimmed your patch).

I would say the enlarged size is caused by ArgUsage, since FuncSpecCostInfo is simply an unsigned number plus an array of pairs (each pair consists of two unsigned numbers standing for the argument index and the base bonus for specializing it). I think the enlarged size is easy to reduce, since we can shrink the data widths directly. For example, right now I use i64 to represent the index of an argument, but clearly we don't need that many bits; I guess i4 or i8 is enough, and that saves a lot of space. It is the same situation for the number of IR instructions: we currently use i64 to represent it, but i32 should be enough. So I am not so worried about the enlarged size.
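As a rough illustration of the width reduction (the structs are illustrative only; the real index is serialized as bitcode records, where VBR encoding shrinks small values further):

#include <cstdint>

// Illustrative current encoding: 16 bytes per (ArgNo, BaseBonus) pair.
struct WidePair { uint64_t ArgNo; uint64_t BaseBonus; };

// Narrower encoding: 8 bits for the argument index and 32 bits for the
// bonus are plenty in practice, cutting each pair from 16 bytes to ~5.
struct NarrowPair { uint8_t ArgNo; uint32_t BaseBonus; };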

Many Thanks,

Chuanqi