[RFC] Clang SourceLocation overflow

Original thread here: https://lists.llvm.org/pipermail/cfe-dev/2019-October/063459.html

I’m bringing this back up since we have a reproduction of this in Boost now. We haven’t finished analyzing what Boost is doing, but the overflow reproduces simply by doing an include of:

A possible solution not mentioned: reserve a bit in SourceLocation for "IsBigLoc". When IsBigLoc = true, the remaining 30 bits of the "ID" field is not an offset but an index into a vector of uintptr_ts in SourceManager, each holding 64-bit offset data.

Only a few changes to private methods and isolated details would be needed I think: e.g. method `unsigned SourceLocation::getOffset()` would become `uintptr_t SourceLocation::getOffset(const SourceManager &SM)`, and would dereference and return the larger data when IsBigLoc. And more generally `unsigned` types should be changed to `uintptr_t` everywhere 32-bit encodings are currently assumed (mostly in SourceManager).

This might be easier and less intrusive than allowing 64-bit SourceLocations depending on a build flag, if I understand that proposal correctly, since that would require templating virtually everything with the SourceLocation type.
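A rough sketch of that tag-bit idea (all names here are illustrative, not Clang’s actual API, and for simplicity this ignores the existing macro/file bit, which the real change would have to coexist with): small offsets are stored inline, while offsets that don’t fit are spilled into a side table owned by the SourceManager and the ID becomes an index.

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// Hypothetical "IsBigLoc" scheme: the top bit of the 32-bit ID selects
// between an inline offset and an index into a side table of 64-bit
// offsets. Names are illustrative, not Clang's actual API.
class BigLocTable {
  std::vector<uint64_t> BigOffsets; // side table for offsets that don't fit
public:
  // Encode a 64-bit offset into a 32-bit ID.
  uint32_t encode(uint64_t Offset) {
    if (Offset < (1u << 31))          // fits inline, no tag bit
      return static_cast<uint32_t>(Offset);
    BigOffsets.push_back(Offset);     // spill to the side table
    uint32_t Index = static_cast<uint32_t>(BigOffsets.size() - 1);
    return (1u << 31) | Index;        // tag with the IsBigLoc bit
  }
  // Decode back to the full 64-bit offset (would need the SourceManager
  // in the real proposal, since it owns the table).
  uint64_t decode(uint32_t ID) const {
    if (ID & (1u << 31))
      return BigOffsets[ID & ~(1u << 31)];
    return ID;
  }
};
```

The point of the sketch: only the encode/decode boundary changes, so code that merely copies IDs around is untouched, which is why the proposal claims the change could stay isolated to SourceManager internals.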

Huh, that is an interesting idea! The issue ends up being the callers of getOffset though, since it is a private method.

That said, I wonder if the extra bit-use is just putting off the inevitable, and we should just use the vector in SourceManager for everything. My initial thought is that we could do away with the ‘IsMacro’ bit as well, and just encode that in the vector too. It makes isFileID and isMacroID require the SourceManager as well, but I wonder if that is OK?

An additional consideration: if the files/buffers are so large that each SourceLocation cannot fit in 30 bits, does that imply that so many SourceLocations will be generated that the indices to them won’t be able to fit in 30 bits? That may be the case now that I think of it, in which case the only solution really would be 64 bit SourceLocations, or something still more creative/costly.

Presumably, yes, we could hit that limit, and that limit is closer than we’d think.

Based on my understanding though, we currently store the ‘offset’ into the buffer. If we were to go with your plan, we’d reduce the ‘SourceLocation’ space significantly, since we’d only have 1 value taken up by “ALongFunctionName” instead of ~15? Or, at least, only 1 instead of 3 for ‘int’.

I presume that anything short of expanding the size of SourceLocation is going to be a temporary measure at best.

One consideration: What do we think about making SourceLocation have an underlying type of ‘bitfield’ that is a smaller size than 64 bits? Then at least we could make SourceRange be a packed value? So make SourceLocation be 40 bits (obviously would still be 64 if stored directly), but at least we could pack 2 of them into 80 bits for any type that does some sort of bit-packing?

Alternatively, what about making SourceRange (instead of 2 SourceLocation objects) be a SourceLocation + unsigned? It would make it a little smaller? Then we could do similar optimizations over time to the ASTNodes. That is, all of the TypeLoc types could just store things like open/close parens as offsets-from-the-original rather than SourceLocations, then calculate them when needed? I would assume anything that required re-calculation would be acceptable, since SourceLocations are seemingly quite rarely used (except for the few cases that Richard mentioned below).

My preference is obviously to just make SourceLocation be uint64_t instead, but the impact on AST size is going to be significant. So I guess I’m hoping that Richard Smith can comment and help us figure out how much pain we are willing to go through for this?

Sounds like we could learn a few lessons from UTF-8 and use the first several bits to say where to find the rest (ie spread the rest over multiple ‘extra’ containers in SourceManager)?

Thanks,

Stephen.

I’m not sure what this bucketing would accomplish besides potentially reducing the key-space? If I have 31 bits of keyspace (as I do now), and we take 1 bit away to decide between two ‘extra’ containers, we end up having 30 bits of keyspace *2 (at most), which is just 31 bits again, right?

I thought the purpose of doing all this in UTF-8 was to encode information in their keyspace as well as making them unique.

Presumably, yes, we could hit that limit, and that limit is closer than we’d think.

Based on my understanding though, we currently store the ‘offset’ into the buffer. If we were to go with your plan, we’d reduce the ‘SourceLocation’ space significantly, since we’d only have 1 value taken up by “ALongFunctionName” instead of ~15? Or, at least, only 1 instead of 3 for ‘int’.

I presume that anything short of expanding the size of SourceLocation is going to be a temporary measure at best.

One consideration: What do we think about making SourceLocation have an underlying type of ‘bitfield’ that is a smaller size than 64 bits? Then at least we could make SourceRange be a packed value? So make SourceLocation be 40 bits (obviously would still be 64 if stored directly), but at least we could pack 2 of them into 80 bits for any type that does some sort of bit-packing?

I doubt this would help in practice – most of the storage in AST nodes tends to be SourceLocations and pointers.

Alternatively, what about making SourceRange (instead of 2 SourceLocation objects) be a SourceLocation + unsigned? It would make it a little smaller? Then we could do similar optimizations over time to the ASTNodes. That is, all of the TypeLoc types could just store things like open/close parens as offsets-from-the-original rather than SourceLocations, then calculate them when needed? I would assume anything that required re-calculation would be acceptable, since SourceLocations are seemingly quite rarely used (except for the few cases that Richard mentioned below).

That doesn’t seem likely to work: we can’t assume the beginning and end of a source range are close together in source location space (they could be in different SLocEntrys, which could be a long way apart from each other).

My preference is obviously to just make SourceLocation be uint64_t instead, but the impact on AST size is going to be significant. So I guess I’m hoping that Richard Smith can comment and help us figure out how much pain we are willing to go through for this?

We could do that, and having a build-time selector for 32/64-bit SourceLocations seems like it might not impose a huge cost. But I think we should first try to gain some confidence that we’re addressing the right problem – running out of source locations seems likely to indicate that there’s something more fundamental wrong with the compilation. Currently we reserve one bit for an ‘is macro location’ flag, and divide the remaining addressable 2GB into a 1GB local region and a 1GB imported-from-AST-file region. The resulting “1GB of preprocessed source (including all intermediate stages of macro expansion)” implementation limit does not seem especially restrictive to me, so the first thing I think we should find out is how we’re actually hitting that limit in the boost example. We can currently handle huge compilations without hitting the limit, so if a single header file can hit it, that seems indicative of a bug that’s not just caused by the limit being too low.

One possible cause would be that a large header is included a lot and doesn’t have a proper include guard. If that’s the case, that seems like a problem that we should get fixed in boost – or in Clang if it’s a bug in include guard detection – because that will contribute to long compile times too. I expect there are other cases where source location address space gets wasted that we could drill down into.

If we find the problem is our location tracking for macro expansions (eg, pushing a large volume of tokens through deeply nested macro expansions), we could probably find a way to turn that off; there may even be ways we can intelligently turn it off selectively in cases where we think the intermediary information is unlikely to be useful. We could also look into throwing away source location address space for macro expansions that ended up producing no tokens. But we need to understand the nature of the problem first.

There are also some relatively cheap things we could do to expand our capacity: we could remove the ‘is macro location’ bit without much effort (though this would slow down the current places that check the flag), which would double our available source location capacity. And we could allow the division between local and imported locations to be determined dynamically rather than fixing a 50/50 split, which would in practice be likely to double the available capacity again. But those would only be useful if we’re just a little over the limit; they wouldn’t help if there’s an asymptotic problem in our source location usage.

Thanks for the response Richard! I’m working with my QA team to get a better reproducer to see if we can figure out the root cause. It DOES include a couple of other files, so I’m not sure of everything that is entailed.

There are also some relatively cheap things we could do to expand our capacity: we could remove the ‘is macro location’ bit without much effort (though this would slow down the current places that check the flag), which would double our available source location capacity.

This seems like a useful and cheap thing to do; if I find this is a legitimate issue, I’ll see if I can put a patch together to do this one.

And we could allow the division between local and imported locations to be determined dynamically rather than fixing a 50/50 split, which would in practice be likely to double the available capacity again. But those would only be useful if we’re just a little over the limit; they wouldn’t help if there’s an asymptotic problem in our source location usage.

I don’t have a great idea on how to do this, or how we use this, but based on your description it seems worthwhile. At least in non-module TUs I’d assume that the “imported-from-ast-file” space is relatively rarely used. Also, I would think these imports happen ‘first’, right? So we could figure out the split as soon as we’re done with imports and optimize our space.

Not sure if this helps but: I didn’t think about the boost file you were trying to compile at first, but now that I see it involves the boost preprocessor stuff, I bet the issue involves code somewhere that is written like

`BOOST_PP_IF(condition, HUGEMACROEXPANSION_A, HUGEMACROEXPANSION_B)`

which should instead be written

`BOOST_PP_CAT(HUGEMACROEXPANSION, BOOST_PP_IF(0, _A, _B))`

The former results in two huge expansions, one of which is thrown away, whereas the latter only results in one expansion.
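For illustration, the mechanism can be reproduced with hand-rolled stand-ins for the Boost.Preprocessor machinery (the `PP_CAT`/`PP_IF` macros below are simplified assumptions, not the real `BOOST_PP` implementations): in the wasteful form, both branch macros appear as macro arguments and so both get expanded before one is discarded, while in the frugal form the `IF` only ever produces a short suffix token, and only the selected branch macro is expanded at all.

```cpp
#include <cassert>
#include <cstring>

// Minimal stand-ins for the Boost.Preprocessor machinery (assumed names,
// not the real BOOST_PP implementations).
#define PP_CAT_I(a, b) a##b
#define PP_CAT(a, b) PP_CAT_I(a, b)
#define PP_IF_0(t, f) f
#define PP_IF_1(t, f) t
#define PP_IF(c, t, f) PP_CAT(PP_IF_, c)(t, f)

#define EXPANSION_A "branch A"
#define EXPANSION_B "branch B"

// Former style: both EXPANSION_A and EXPANSION_B are macro arguments, so
// both are expanded before PP_IF discards one of them.
#define PICK_WASTEFUL(c) PP_IF(c, EXPANSION_A, EXPANSION_B)
// Latter style: PP_IF only picks a suffix token (_A or _B); only the
// chosen EXPANSION_<suffix> macro is ever expanded.
#define PICK_FRUGAL(c) PP_CAT(EXPANSION, PP_IF(c, _A, _B))

const char *Wasteful = PICK_WASTEFUL(0); // "branch B", after expanding both
const char *Frugal = PICK_FRUGAL(0);     // "branch B", expanding only one
const char *FrugalA = PICK_FRUGAL(1);    // "branch A"
```

With the trivial `EXPANSION_A`/`EXPANSION_B` bodies here the savings are invisible, but when each branch is itself a huge nested expansion (as in Boost VMD), the wasteful form burns source-location address space on an expansion that is thrown away.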

I recall writing reasonably small but complex boost pp code that would take like 15 minutes just to preprocess, and now that I think about it, it might have resulted in the "ran out of source locations" a few times, before I finally figured out to always write those conditionals the second way.

Not sure if that’s the issue, but it’s a place to look.

Dave

Thanks for the response Richard! I’m working with my QA team to get a better reproducer to see if we can figure out the root cause. It DOES include a couple of other files, so I’m not sure of everything that is entailed.

There are also some relatively cheap things we could do to expand our capacity: we could remove the ‘is macro location’ bit without much effort (though this would slow down the current places that check the flag), which would double our available source location capacity.

This seems like a useful and cheap thing to do; if I find this is a legitimate issue, I’ll see if I can put a patch together to do this one.

And we could allow the division between local and imported locations to be determined dynamically rather than fixing a 50/50 split, which would in practice be likely to double the available capacity again. But those would only be useful if we’re just a little over the limit; they wouldn’t help if there’s an asymptotic problem in our source location usage.

I don’t have a great idea on how to do this, or how we use this, but based on your description it seems worthwhile. At least in non-module TUs I’d assume that the “imported-from-ast-file” space is relatively rarely used. Also, I would think these imports happen ‘first’, right? So we could figure out the split as soon as we’re done with imports and optimize our space.

Not necessarily, no – in some configurations, we don’t find out which AST files we want to load until we see a #include, which could happen arbitrarily late through preprocessing.

I don’t think we need to pick the split point up front. Currently, we essentially use positive source locations for local locs and negative ones for imported locs. So we could handle this by effectively allowing both/either to wrap around, making sure they don’t pass each other, and checking which side a location is on by performing an (unsigned) comparison instead of checking the sign bit.
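A minimal sketch of that allocation scheme (illustrative names; the real SourceManager already tracks a `CurrentLoadedOffset`, so this only shows the two regions growing toward each other and the unsigned comparison replacing the sign-bit check):

```cpp
#include <cassert>
#include <cstdint>
#include <stdexcept>

// Sketch of the dynamic local/loaded split: local offsets grow up from 0,
// loaded (imported) offsets grow down from the top of the range, and
// "which side is this location on?" is an unsigned comparison against the
// moving boundary rather than a sign-bit test or a fixed 50/50 split.
class OffsetAllocator {
  uint32_t NextLocalOffset = 0;              // grows upward
  uint32_t CurrentLoadedOffset = UINT32_MAX; // grows downward
public:
  uint32_t allocateLocal(uint32_t Size) {
    if (Size > CurrentLoadedOffset - NextLocalOffset)
      throw std::runtime_error("ran out of source locations");
    uint32_t Offset = NextLocalOffset;
    NextLocalOffset += Size;
    return Offset;
  }
  uint32_t allocateLoaded(uint32_t Size) {
    if (Size > CurrentLoadedOffset - NextLocalOffset)
      throw std::runtime_error("ran out of source locations");
    CurrentLoadedOffset -= Size;
    return CurrentLoadedOffset;
  }
  // Unsigned comparison instead of checking a sign bit.
  bool isLoaded(uint32_t Offset) const { return Offset >= CurrentLoadedOffset; }
};
```

Neither region is capped up front; either side can consume the whole address space until the two cursors meet, which is what makes the split dynamic.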

An important test to perform before committing to any action might be this: what is the proportion of unique SourceLocations constructed by the Lexer to the number of bytes in the input buffers, when you run out of SourceLocations?

If it is extremely low (i.e. many more bytes than unique SourceLocations), that suggests the problem may be large unused macro expansions cluttering up the buffer (if that is possible). In that case, the “index solution” introduced earlier would solve the issue.

I don’t know quite how the Preprocessor works but the reason I suspect there may be large unused macro expansions is a) the use of BOOST_PP stuff in this problematic case, and b) the documentation of SourceLocation:

/// Technically, a source location is simply an offset into the manager's view
/// of the input source, which is all input buffers (including macro
/// expansions) concatenated in an effectively arbitrary order.

Does “all input buffers (including macro expansions)” include even unused expansions, such as the first arg in
`BOOST_PP_IF(0, UNUSED_BUT_EXPANDED_MACRO(a,b,c), USED_EXPANDED_MACRO(a,b,c))`?

If so I suspect there will be many more bytes than SourceLocations, and thus the index solution may be viable, just in case it is easier to implement than other solutions (which is another matter).

I don’t know if it is related to that, but I had the reporter run -dD and it crashed with these two at the end:

#define TUPLE_IS_VALID_ARRAY_E (2,(3,4))

#define TUPLE_IS_VALID_LIST_E (anydata,BOOST_PP

So presumably it is related to the boost-preprocessor that you mentioned, however from a quick googling of the source code, that macro is:

#define TUPLE_IS_VALID_LIST_E (anydata,BOOST_PP_NIL)

So there isn’t any huge expansion here. That said, since this is just a header include FROM boost, I’d expect us to be able to compile it.

I don’t know if it is related to that, but I had the reporter run -dD and it crashed with these two at the end:

#define TUPLE_IS_VALID_ARRAY_E (2,(3,4))
#define TUPLE_IS_VALID_LIST_E (anydata,BOOST_PP

So presumably it is related to the boost-preprocessor that you mentioned, however from a quick googling of the source code, that macro is:
#define TUPLE_IS_VALID_LIST_E (anydata,BOOST_PP_NIL)

So there isn’t any huge expansion here. That said, since this is just a header include FROM boost, I’d expect us to be able to compile it.

Agreed, Boost probably isn’t doing unnecessary expansions.

However, it is possible it is doing a lot of nested expansions that are necessary for it to implement its logic, but which don’t actually contain any SourceLocations *which end up in the AST* (since those are the only SourceLocations that really need to be 32 bit).

If these ultimately-unused expansions (e.g. expansions of arguments to other macros, which cannot be used in the AST) are added to the concatenation of buffers through which a SourceLocation must specify an offset, then there may be many more bytes in the buffers than SourceLocations in the AST when dealing with extremely heavy macro usage such as the Boost VMD/PP stuff.

Hard to wade through the Preprocessor/Lexer details to know if this is the case, so an easier question is just: how many unique SourceLocations are in the AST, in each of the various described cases in which a user runs out of SourceLocations? If the number of AST-used SourceLocations exceeds 2^31, the only answer is 64-bit SourceLocations. If it is significantly under that, an indexing solution would work, and might well give the best bang for the buck.

The other cases mentioned in the thread you linked to are also worth considering. In the current Boost VMD case, the problem results from macro expansions, somehow. In the previous thread, the cause was extremely frequent use of large unguarded #includes (but which might involve macro expansions too, or somehow otherwise result in large chunks of bytes never referenced via SourceLocations in the AST).

However, someone else in that thread encountered problems while working on modules, which seems not to involve macros/preprocessor buffers at all. That might be the most troubling case: it might indicate that even if macros are the problem here, the only long-term solution is to allow the user to select 64-bit SourceLocations, unless there are *also* large chunks of bytes in the imported-from-AST buffers that do not contain SourceLocations. So again it would be interesting to know the number of AST-used unique SourceLocations in those other cases, when we run out of them.

but which don’t actually contain any SourceLocations which end up in the AST (since those are the only SourceLocations that really need to be 32 bit).

I think we still need to keep non-AST source locations around, since they are necessary to print the ‘expanded from macro’ notes. So even ones that aren’t in the AST get referenced anyway, and we would need some way of differentiating the two. I wonder if we would be able to create a 64-bit “large source location” that converted, via the indexing feature, to a “small source location” when added to the AST.

how many unique SourceLocations are in the AST, in each of the various described cases in which a user runs out of SourceLocations?

This I do not have a good feel of unfortunately.

, the only long term solution is to allow the user to select 64-bit SourceLocations

I can’t think of any way to do this that doesn’t mean we have to have two variants of a number of AST nodes in the compiler. It feels like that would be a pretty massive inflation of our executable size, so I’m not sure what we’re willing to allow for that.

Another side-note: If we start using some sort of ‘index into a vector of real source locations’ thing, we would likely need to change how modules are emitted/retrieved, though presumably we could just have them contain the ‘large source location’ listed above.

> but which don’t actually contain any SourceLocations *which end up in the AST* (since those are the only SourceLocations that really need to be 32 bit).

I think we still need to keep non-ast source locations around, since they are necessary to print the ‘expanded from macro’ notes. So even ones that aren’t in the AST get referenced anyway. So we would need some way of differentiating the two. I wonder if we would be able to create a 64-bit “large source location” that converted to a “small source location” when added to the AST that used the indexing feature.

> how many unique SourceLocations are in the AST, in each of the various described cases in which a user runs out of SourceLocations?

This I do not have a good feel of unfortunately.

I think in this case, the macros probably create very few if any AST nodes at all. Here’s a link to the problematic code, I think it is pure macro wizardry:
https://github.com/boostorg/vmd/blob/develop/test/test_doc_modifiers_return_type.cxx

So if our solution allowed passing around 64-bit SourceLocations in the preprocessing stage, converting them to 32-bit (interpreted as an index to 64-bit data where necessary, or all the time) only when constructing an AST node, I am pretty sure that would solve the problem, at least in this case.

>, the only long term solution is to allow the user to select 64-bit SourceLocations

I can’t think of any way to do this that doesn’t mean we have to have 2x of a number of AST nodes in the compiler. It feels like that would be a pretty massive inflation of our executable size, so I’m not sure what we’re willing to allow for that.

Agree, but if the AST can get to be above that size (disregarding template instantiation nodes) that is the only answer I think. But somehow I doubt that is happening, or that any case in which that is happening is reasonable code.

Another side-note: If we start using some sort of ‘index into a vector of real source locations’ thing, we would likely need to change how modules are emitted/retrieved, though presumably we could just have them contain the ‘large source location’ listed above.

I know preciously little about the modules implementation, but I would imagine we could store the vector<uint64_t> in the module as well - or store the big locations, either way.

Also, I wonder if that, when "Ran out of SourceLocations" is encountered with modules, the problem might be that template instantiation data within the modules are occupying large chunks of bytes, such that the problem could again be resolved with an indexing solution that only ever refers to the SourceLocations of their patterns (i.e. reducing the number of unique indices to keep it under 2^31).

So if our solution allowed passing around 64-bit SourceLocations in the preprocessing stage, converting them to 32-bit (interpreted as an index to 64-bit data where necessary, or all the time) only when constructing an AST node, I am pretty sure that would solve the problem, at least in this case.

Right… So what I envision at the moment is:

PPSourceLocation: Does everything the current one does, so it is represented as a 63 bit location + 1 bit for macro/file discriminator.

ASTSourceLocation: This is ONLY a 32 bit unsigned index and contains no info nor functionality itself (besides conversions). However, ASTSourceLocation and a SourceManager (which contains the lookup vector) gets you the PPSourceLocation, where we do all the work.

Lex/Parse deal with PPSourceLocation. When it comes to store in the AST, we store the ASTSourceLocation and create an index in the SourceManager’s vector<uint64_t>. When the AST needs to do something with the location besides copying/storing it, it does the conversion. The conversion from PPSourceLocation->ASTSourceLocation might be expensive (since we would presumably want to de-dupe the vector), but ASTSourceLocation->PPSourceLocation is just a vector-lookup.
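A sketch of what that interning could look like (`PPSourceLocation`/`LocationInterner` are hypothetical names from the proposal, not existing Clang classes): PP→AST conversion de-dupes through a hash map, while AST→PP resolution is just a vector lookup, matching the cost asymmetry described above.

```cpp
#include <cassert>
#include <cstdint>
#include <unordered_map>
#include <vector>

// Full-width location used during lexing/preprocessing: a 63-bit offset
// plus 1 bit for the macro/file discriminator, per the proposal.
struct PPSourceLocation { uint64_t Raw; };

// Sketch of the SourceManager-side table that backs the 32-bit
// ASTSourceLocation indices.
class LocationInterner {
  std::vector<uint64_t> Table;                  // index -> 64-bit location
  std::unordered_map<uint64_t, uint32_t> Dedup; // 64-bit location -> index
public:
  // PP -> AST: potentially expensive (hash lookup, may grow the table).
  uint32_t intern(PPSourceLocation Loc) {
    auto It = Dedup.find(Loc.Raw);
    if (It != Dedup.end())
      return It->second; // already interned, reuse the index
    uint32_t Index = static_cast<uint32_t>(Table.size());
    Table.push_back(Loc.Raw);
    Dedup.emplace(Loc.Raw, Index);
    return Index;
  }
  // AST -> PP: just a vector lookup.
  PPSourceLocation resolve(uint32_t Index) const { return {Table[Index]}; }
};
```

Under this scheme the 32-bit index space is consumed only by locations that actually end up in the AST, which is the whole point of the proposal.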

Agree, but if the AST can get to be above that size (disregarding template instantiation nodes) that is the only answer I think. But somehow I doubt that is happening, or that any case in which that is happening is reasonable code.

At the moment, we aren’t getting to the 2^31-1 AST source locations, since obviously we would run out of offsets way before that would happen (assuming the de-dupe step above). But the above system buys us at least 2x, but likely much more. At the moment, this is likely enough to make me happy.

I know preciously little about the modules implementation, but I would imagine we could store the vector<uint64_t> in the module as well - or store the big locations, either way.

I hadn’t considered storing the array itself in the module header. I know about as much as you do (besides the ASTReader/ASTWriter interface), but my initial thought was to just convert to big-locations first, then store those. Hopefully Richard can comment.

Also, I wonder if that, when “Ran out of SourceLocations” is encountered with modules, the problem might be that template instantiation data within the modules are occupying large chunks of bytes, such that the problem could again be resolved with an indexing solution that only ever refers to the SourceLocations of their patterns (i.e. reducing the number of unique indices to keep it under 2^31).

Agreed… I think the above/below indexing solution buys us quite a bit of headroom. It isn’t as good as true 64 bit source-locations, but the work to do this would make the eventual 64-bit switch quite a bit easier I would expect (since it would just be removing the vector and making the conversion be a simple copy).

So I guess the open questions are: Is this acceptable to **Richard**/etc?

So 1 more thing:

Richard suggested that one way around this was to switch from ‘negative numbers are loaded, positive are from files’ to something with a split.

I believe based on my reading that we already DO that for SourceLocation. They are represented by an unsigned, and we have a “CurrentLoadedOffset” that is the split between the two. SO, we are already using as much room as we can for that.

The negative/positive discriminator Richard was mentioning seems to be the FileID, which is an index into 1 of 2 vectors (which contain the SLocEntrys, which store a list of macro expansions AND file + offset). I don’t believe we are running into that limit however. SO, I don’t think that buys us much.

I’ve also been looking into replacing the bit for IsMacro vs IsFile. I believe that wouldn’t be too horrible, other than the need to access SourceManager. It WOULD be somewhat expensive (since we end up binary searching the files list to get the FileID), but would buy us 2x the data we have now.
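A sketch of what that bit-free classification could look like (illustrative; the real SLocEntry table is more involved): record each entry’s starting offset and kind, and classify a raw offset by binary-searching the sorted entry table, which is exactly the O(log n) cost mentioned above in place of a one-instruction bit test.

```cpp
#include <algorithm>
#include <cassert>
#include <cstdint>
#include <vector>

// Sketch of dropping the IsMacro bit: instead of tagging each location,
// record for every entry its starting offset and whether it is a macro
// expansion, then classify a raw offset by binary search. Names are
// illustrative, not Clang's actual SLocEntry API.
struct SLocEntryInfo {
  uint32_t StartOffset;
  bool IsMacroExpansion;
};

class EntryTable {
  std::vector<SLocEntryInfo> Entries; // appended in increasing StartOffset
public:
  void addEntry(uint32_t Start, bool IsMacro) {
    Entries.push_back({Start, IsMacro});
  }
  // O(log n) where the bit test used to be O(1) -- the cost noted above.
  bool isMacroOffset(uint32_t Offset) const {
    // Find the first entry starting *after* Offset; its predecessor owns it.
    auto It = std::upper_bound(
        Entries.begin(), Entries.end(), Offset,
        [](uint32_t O, const SLocEntryInfo &E) { return O < E.StartOffset; });
    assert(It != Entries.begin() && "offset before first entry");
    return std::prev(It)->IsMacroExpansion;
  }
};
```

The payoff is the extra bit of address space; the cost is that isMacroID-style queries now need the SourceManager and a lookup.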

So 1 more thing:
Richard suggested that one way around this was to switch from ‘negative numbers are loaded, positive are from files’ to something with a split.

I believe based on my reading that we already DO that for SourceLocation. They are represented by an unsigned, and we have a “CurrentLoadedOffset” that is the split between the two. SO, we are already using as much room as we can for that.

The negative/positive discriminator Richard was mentioning seems to be the FileID, which is an index into 1 of 2 vectors (which contain the SLocEntrys, which store a list of macro expansions AND file + offset). I don’t believe we are running into that limit however. SO, I don’t think that buys us much.

I’ve also been looking into replacing the bit for IsMacro vs IsFile. I believe that wouldn’t be too horrible, other than the need to access SourceManager. It WOULD be somewhat expensive (since we end up binary searching the files list to get the FileID), but would buy us 2x the data we have now.

From: Keane, Erich
Sent: Thursday, February 4, 2021 8:18 AM
To: David Rector <davrecthreads@gmail.com>
Cc: Richard Smith <richard@metafoo.co.uk>; clang developer list <cfe-dev@lists.llvm.org>
Subject: RE: [cfe-dev] [RFC] Clang SourceLocation overflow

> So if our solution allowed passing around 64-bit SourceLocations in the preprocessing stage, converting them to 32-bit (interpreted as an index to 64-bit data where necessary, or all the time) only when constructing an AST node, I am pretty sure that would solve the problem, at least in this case.

Right… So what I envision at the moment is:

PPSourceLocation: Does everything the current one does, so it is represented as a 63 bit location + 1 bit for macro/file discriminator.
ASTSourceLocation: This is ONLY a 32 bit unsigned index and contains no info nor functionality itself (besides conversions). However, ASTSourceLocation and a SourceManager (which contains the lookup vector) gets you the PPSourceLocation, where we do all the work.

Lex/Parse deal with PPSourceLocation. When it comes to store in the AST, we store the ASTSourceLocation and create an index in the SourceManager’s vector<uint64_t>. When the AST needs to do something with the location besides copying/storing it, it does the conversion. The conversion from PPSourceLocation->ASTSourceLocation might be expensive (since we would presumably want to de-dupe the vector), but ASTSourceLocation->PPSourceLocation is just a vector-lookup.

Agree. Maybe leave "ASTSourceLocation" as "SourceLocation" since the change will otherwise be invisible to the vast majority of users, who only interact with SourceLocations via AST nodes (I think).

> Agree, but if the AST can get to be above that size (disregarding template instantiation nodes) that is the only answer I think. But somehow I doubt that is happening, or that any case in which that is happening is reasonable code.
At the moment, we aren’t getting to the 2^31-1 AST source locations, since obviously we would run out of offsets way before that would happen (assuming the de-dupe step above). But the above system buys us at least 2x, but likely much more. At the moment, this is likely enough to make me happy.

> I know preciously little about the modules implementation, but I would imagine we could store the vector<uint64_t> in the module as well - or store the big locations, either way.
I hadn’t considered storing the array itself in the module header. I know about as much as you do (besides the ASTReader/ASTWriter interface), but my initial thought was to just convert to big-locations first, then store those. Hopefully Richard can comment.

> Also, I wonder if that, when "Ran out of SourceLocations" is encountered with modules, the problem might be that template instantiation data within the modules are occupying large chunks of bytes, such that the problem could again be resolved with an indexing solution that only ever refers to the SourceLocations of their patterns (i.e. reducing the number of unique indices to keep it under 2^31).
Agreed… I think the above/below indexing solution buys us quite a bit of headroom. It isn’t as good as true 64 bit source-locations, but the work to do this would make the eventual 64-bit switch quite a bit easier I would expect (since it would just be removing the vector and making the conversion be a simple copy).

If I understood the Boost VMD example correctly (i.e. almost no AST nodes actually generated in that file, notwithstanding the huge buffers generated by macro expansions), the indexing approach will free up plenty of bits in all but the most unusual cases, such that I would venture it will keep us from ever having to go 64 bit.

The only case that would not be solved by this, I think, is if a huge, non-templated AST were to be built, almost certainly by doing massive template-like stuff with the preprocessor, in which case the indexing would only buy us what you say below: a multiple equal to the average token length in bytes (though even more to the extent there are also unused-in-the-AST macro expansions in the concatenation of buffers).

If such cases really need to be supported, then at that point I would think it reasonable to add an option to build clang with 64-bit SourceLocations used in the AST — i.e. the user would have to use a different clang binary to compile that sort of stuff (is that the option that was proposed before?). But it might be worthwhile to put that off until we see actual cases in which that really is necessary, where we need more than 2^32 indices to uniquely identify source locations for all the non-template-instantiated nodes in the AST.

So I guess the open questions are:
Is this acceptable to **Richard**/etc?

Bump.

Does anyone want to help? :blush:

While I’m not super set up for clang development at the moment: if, pending Richard’s thoughts, you wanted to start, and the task looked not-so-trivial once it came into full focus, I would be happy to lend a hand, since I think we are on the same page about what needs to be done.

So 1 more thing:
Richard suggested one way about this was to switch from ‘negative numbers are loaded, positive are from files’ to something with a split.

I believe based on my reading that we already DO that for SourceLocation. They are represented by an unsigned, and we have a “CurrentLoadedOffset” that is the split between the two. SO, we are already using as much room as we can for that.

The negative/positive discriminator Richard was mentioning seems to be the FileID, which is an index into one of two vectors (which contain the SLocEntrys, which store a list of macro expansions AND file + offset). I don’t believe we are running into that limit, however, so I don’t think that buys us much.
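To make that split concrete, here is a toy model (illustrative only, not clang's actual data structures or names) of how local and loaded offsets share one 31-bit space, with the two allocation frontiers growing toward each other:

```cpp
#include <cassert>
#include <cstdint>

// Toy model of the existing split: the 31-bit offset space (the top bit is
// the macro/file discriminator) is shared between "local" offsets, which
// grow upward from 0, and "loaded" (module) offsets, which are allocated
// downward from CurrentLoadedOffset.
struct ToySourceManager {
  static constexpr uint32_t MaxOffset = 1u << 31; // macro bit excluded
  uint32_t NextLocalOffset = 0;
  uint32_t CurrentLoadedOffset = MaxOffset;

  // A location is "loaded" iff its offset sits at or above the split point.
  bool isLoadedOffset(uint32_t Off) const { return Off >= CurrentLoadedOffset; }

  // Overflow happens exactly when the two frontiers would meet.
  bool allocateLocal(uint32_t Size) {
    if (NextLocalOffset + Size >= CurrentLoadedOffset)
      return false;
    NextLocalOffset += Size;
    return true;
  }
  bool allocateLoaded(uint32_t Size) {
    if (CurrentLoadedOffset - Size <= NextLocalOffset)
      return false;
    CurrentLoadedOffset -= Size;
    return true;
  }
};
```

Since overflow is just the two frontiers colliding, the split itself wastes no bits, which is why moving the split point buys nothing here.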

I’ve also been looking into replacing the bit for IsMacro vs IsFile. I believe that wouldn’t be too horrible, other than the need to access SourceManager. It WOULD be somewhat expensive (since we end up binary-searching the files list to get the FileID), but it would buy us 2x the data we have now.
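The cost being traded here can be sketched with a toy version (ToySLocEntry and isMacroOffset are made-up names, not clang's): without a discriminator bit in the location itself, classifying a raw offset means binary-searching the sorted entry table rather than testing one bit.

```cpp
#include <algorithm>
#include <cassert>
#include <cstdint>
#include <vector>

// Toy illustration: each entry covers offsets from Offset up to the start of
// the next entry, and records whether it is a macro-expansion entry.
struct ToySLocEntry {
  uint32_t Offset;  // first offset covered by this entry
  bool IsExpansion; // macro-expansion entry vs. file entry
};

bool isMacroOffset(const std::vector<ToySLocEntry> &Entries, uint32_t Off) {
  // Find the last entry whose start offset is <= Off: upper_bound gives the
  // first entry strictly past Off, so step back one.
  auto It = std::upper_bound(
      Entries.begin(), Entries.end(), Off,
      [](uint32_t O, const ToySLocEntry &E) { return O < E.Offset; });
  return It != Entries.begin() && std::prev(It)->IsExpansion;
}
```

An O(log n) lookup per isMacroID/isFileID query is the price of reclaiming the bit; whether that is acceptable depends on how hot those queries are.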

From: Keane, Erich
Sent: Thursday, February 4, 2021 8:18 AM
To: David Rector <davrecthreads@gmail.com>
Cc: Richard Smith <richard@metafoo.co.uk>; clang developer list <cfe-dev@lists.llvm.org>
Subject: RE: [cfe-dev] [RFC] Clang SourceLocation overflow

So if our solution allowed passing around 64-bit SourceLocations in the preprocessing stage, and only converted them to 32-bit (interpreted as an index into 64-bit data where necessary, or all the time) when constructing an AST node, I am pretty sure that would solve the problem, at least in this case.

Right… So what I envision at the moment is:

PPSourceLocation: Does everything the current one does, so it is represented as a 63-bit location + 1 bit for the macro/file discriminator.
ASTSourceLocation: This is ONLY a 32-bit unsigned index and contains no info or functionality itself (besides conversions). However, an ASTSourceLocation plus a SourceManager (which contains the lookup vector) gets you the PPSourceLocation, where we do all the work.

Lex/Parse deal with PPSourceLocation. When it comes time to store a location in the AST, we store the ASTSourceLocation and create an index into the SourceManager’s vector<uint64_t>. When the AST needs to do something with the location besides copying/storing it, it does the conversion. The conversion from PPSourceLocation->ASTSourceLocation might be expensive (since we would presumably want to de-dupe the vector), but ASTSourceLocation->PPSourceLocation is just a vector lookup.
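A rough sketch of that scheme, with all names hypothetical (PPSourceLocation, TwoLevelSourceManager, and the side table are only illustrative, not a proposed API):

```cpp
#include <cassert>
#include <cstdint>
#include <unordered_map>
#include <vector>

// Wide preprocessor-side location: 63-bit offset + 1 macro/file bit.
struct PPSourceLocation {
  uint64_t Raw;
  bool isMacroID() const { return Raw & (1ull << 63); }
  uint64_t getOffset() const { return Raw & ~(1ull << 63); }
};

class TwoLevelSourceManager {
  std::vector<uint64_t> WideLocs;               // side table of 64-bit data
  std::unordered_map<uint64_t, uint32_t> Dedup; // PP->AST conversion cache
public:
  // PP -> AST: the "expensive" direction, a hash lookup plus an occasional
  // push_back; de-duping keeps the 32-bit index space from filling up.
  uint32_t makeASTLoc(PPSourceLocation PP) {
    auto [It, Inserted] = Dedup.try_emplace(PP.Raw, (uint32_t)WideLocs.size());
    if (Inserted)
      WideLocs.push_back(PP.Raw);
    return It->second;
  }
  // AST -> PP: the cheap direction, just a vector lookup.
  PPSourceLocation getPPLoc(uint32_t ASTLoc) const {
    return PPSourceLocation{WideLocs[ASTLoc]};
  }
};
```

The asymmetry matches the description above: conversion into the AST representation pays for de-duplication once, and every later use of the stored index is a single array access.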

Agree. Maybe leave “ASTSourceLocation” as “SourceLocation” since the change will otherwise be invisible to the vast majority of users, who only interact with SourceLocations via AST nodes (I think).

Yes, if we make this change, I agree we should call the small representation SourceLocation in order to minimize churn.

Hmm, but will this actually be better than making SourceLocations 64 bits wide? Assuming we use the side table unconditionally, and assuming that most SourceLocations are stored in exactly one place (not true with templates, but often the case otherwise), this means that each time we store a SourceLocation, we’ll use up 96 bits of storage (32 bits for the SourceLocation and 64 bits for the table entry). That would be 50% worse than the simpler approach of making SourceLocations always be 64 bits wide. If we could use regular 32 bit source locations for smaller values, and then switch to the side table once we get (say) half way through them, maybe that would be acceptable, but it would mean that we take a constant-factor memory usage (and, due to loss of locality, probably performance) hit for all translation units larger than some threshold. Also, we’d still need full-width source locations for the spelling and expansion locations of the macro expansion SLocEntrys, so those are going to grow significantly larger – and if we have a large number of macro expansions (such as in the boost case), that’ll be a big cost.
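The arithmetic in that objection, written out under its stated assumptions (side table used unconditionally; each location stored in exactly one AST node):

```cpp
#include <cassert>
#include <cstdint>

// Per-location storage cost of the two schemes under discussion.
constexpr uint32_t NodeBits = 32;  // 32-bit index stored in the AST node
constexpr uint32_t TableBits = 64; // matching side-table entry
constexpr uint32_t SideTableCost = NodeBits + TableBits; // 96 bits/location
constexpr uint32_t Wide64Cost = 64; // plain 64-bit SourceLocation in the node

static_assert(SideTableCost == 96, "side-table scheme: 96 bits per location");
static_assert((SideTableCost - Wide64Cost) * 100 / Wide64Cost == 50,
              "i.e. 50% more storage than simply widening SourceLocation");
```

The side table only wins when locations are duplicated (as with templates) or when most locations never need an entry at all.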

Agree, but if the AST can get above that size (disregarding template instantiation nodes), that is the only answer, I think. But somehow I doubt that is happening, or that any case in which it is happening is reasonable code.
At the moment, we aren’t getting to 2^31-1 AST source locations, since obviously we would run out of offsets way before that would happen (assuming the de-dupe step above). But the above system buys us at least 2x, and likely much more. At the moment, this is likely enough to make me happy.

I know precious little about the modules implementation, but I would imagine we could store the vector<uint64_t> in the module as well – or store the big locations; either way.
I hadn’t considered storing the array itself in the module header. I know about as much as you do (besides the ASTReader/ASTWriter interface), but my initial thought was to just convert to big locations first, then store those. Hopefully Richard can comment.

Also, I wonder whether, when “Ran out of SourceLocations” is encountered with modules, the problem might be that template instantiation data within the modules is occupying large chunks of bytes, such that the problem could again be resolved with an indexing solution that only ever refers to the SourceLocations of the patterns (i.e. reducing the number of unique indices to keep it under 2^31).

We don’t have different SourceLocations for templates versus instantiations; it’s not that. The problem on the modules side of things is that if you import N different modules, all of which textually include some large non-modular header, then they all contain SourceLocations for the contents of that header. So the consumer of those modules ends up paying N x the cost in source location address space. And also N x the cost in memory usage for the declarations in that header and so on – there, running out of SourceLocations is not the problem, it’s a symptom of the problem, and the problem is that too many copies of a large textual header got brought into the same compilation.

Agreed… I think the above/below indexing solution buys us quite a bit of headroom. It isn’t as good as true 64 bit source-locations, but the work to do this would make the eventual 64-bit switch quite a bit easier I would expect (since it would just be removing the vector and making the conversion be a simple copy).


I don’t think we’re at the point where we can answer this question yet – I don’t think we sufficiently understand the problem. From what I’ve seen so far, it still seems likely to me that the problem is that we’re allocating too much of the SourceLocation address space for macro expansions that don’t need locations to be allocated for them, either due to an outright bug or a pathological case in the boost code.

I think it would be worth looking at a SourceManager dump from around the crash and analyzing what we’re spending the source location space on – is this a death by a million cuts situation, or are there some surprisingly large allocations in there? If the former, are there patterns? Are we generating duplicate locations (eg, repeatedly expanding the same macros with the same arguments in a way we can reuse)? Are there lots of SLocEntries that are unreferenced by file locations (eg, macro expansions that ended up producing no tokens)? There may be some smart way we can conditionally disable updateLocForMacroArgTokens (https://github.com/llvm/llvm-project/blob/master/clang/lib/Lex/TokenLexer.cpp#L1054) or make it lazy, so we don’t assign full location information to preprocessor-tokens that are formed during macro expansion but don’t result in any real tokens – or if there isn’t a smart way to do that, maybe we can give boost a way of disabling location tracking for some of its macros.

I think it would also be worth pursuing the 64-bit source locations option (making SourceLocation larger, with no side table) as a configure-time flag, and fixing up all the places where we make the 32-bit assumption. If nothing else, that’ll let us measure the memory and compile time overhead of doubling our source location storage. I expect that the numbers we get back from that will make that option an immediate non-starter, but perhaps not – in particular, if we could store a SourceLocation as a (FileID, offset) pair, that would save us from doing binary searches to locate the FileID for a given SourceLocation, which we do sometimes spend a non-trivial amount of our time doing, and it’d be worth measuring whether we can improve compile time at the cost of memory usage by doing that.

I’d also be interested in seeing whether we can add a flag to turn off tracking macro expansion locations entirely (so we’d only ever track spelling locations). That would result in some QoI loss, but it might also allow cases such as this boost example to work without too much effort from us. It’d be interesting to see how much maintenance burden we think such a flag would add.