make DataLayout a mandatory part of Module

The LLVM Module has an optional target triple and target datalayout. Without them, an llvm::DataLayout can’t be constructed with meaningful data. The benefit to making them optional is to permit optimization that would work across all possible DataLayouts, then allow us to commit to a particular one at a later point in time, thereby performing more optimization in advance.
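To make that concrete, the optional pieces are the two declarations at the top of a module. (The layout string below is just an illustration; it happens to be what clang emits for x86_64 Linux.)

    target datalayout = "e-m:e-i64:64-f80:128-n8:16:32:64-S128"
    target triple = "x86_64-unknown-linux-gnu"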

This feature is not being used. Instead, every user of LLVM IR in a portability system defines one or more standardized datalayouts for their platform, and shims to place calls to the outside world. The primary reason for this is that independence from DataLayout is not sufficient to achieve portability because it doesn’t also represent ABI lowering constraints. If you have a system that attempts to use LLVM IR in a portable fashion and does it without standardizing on a datalayout, please share your experience.

The cost to keeping this feature around is that we have to pass around the DataLayout object in many places, test for its presence, in some cases write different optimizations depending on whether we have DataLayout, and in the worst case I can think of, we have two different canonical forms for constant expressions depending on whether DL is present. Our canonical IR is different with and without datalayout, and we have two canonicalizers fighting it out (IR/ConstantFold.cpp and Analysis/ConstantFolding.cpp).
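As a sketch of the kind of divergence I mean, take the classic "sizeof" idiom:

    ; With a DataLayout, the constant folder can reduce this initializer
    ; to a plain "i64 4"; without one, the ptrtoint/getelementptr
    ; expression has to be left intact, so the same input module has two
    ; different canonical forms.
    @size_of_i32 = constant i64 ptrtoint (i32* getelementptr (i32* null, i32 1) to i64)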

I’m trying to force the issue. Either this is a useful feature to maintain in which case I want to see a design on how to defer ABI decisions until a later point in time, or else we do not support it and target triple and target datalayout become a mandatory part of a valid Module again. I think the correct direction is to make them mandatory, but this is a large change that warrants debate.

If we decide that target information should be a mandatory part of a module, there’s another question about what we should do with existing .bc and .ll files that don’t have one. Load in a default of the host machine?

Nick

Hi Nick,

The main use case I’ve seen is that it makes writing generic test cases for ‘opt’ easier in that it’s not necessary to specify a target triple on the command line or have a data layout in the .ll/.bc file. That is, in my experience, it’s more for convenience and perhaps historical layering considerations.

I have no philosophical objection to the direction you’re suggesting.

For modules without a data layout, use the host machine as you suggest. That’s consistent with what already happens with llc, so extending that to opt and other such tools seems reasonable to me.

-Jim

Hi Nick,

> The main use case I’ve seen is that it makes writing generic test cases for ‘opt’ easier in that it’s not necessary to specify a target triple on the command line or have a data layout in the .ll/.bc file. That is, in my experience, it’s more for convenience and perhaps historical layering considerations.
>
> I have no philosophical objection to the direction you’re suggesting.
>
> For modules without a data layout, use the host machine as you suggest. That’s consistent with what already happens with llc, so extending that to opt and other such tools seems reasonable to me.

This is also what many clang tests do, where TUs get parsed using the host
triple. If we keep target datalayout out of the test files and fill it in
with the host's information, then our test coverage expands as our buildbot
diversity grows, which is a neat property.

Nick

Unfortunately, reproducibility suffers. You commit a change, a test fails on two buildbots but passes on all of the others and on your local system. Now what do you do? I've already hit this problem in clang, with host-defined tool search paths leaking into the tests and causing them to fail on Windows only. It's hard to fix a bug that causes a buildbot failure if you can't reproduce it. At the very least, the target / data layout should be in the failure message that the test suite generates in case of failure so that you can reproduce it locally if a buildbot reports failure.

David

Why not default to using a generic datalayout that just uses the defaults for everything? There are already defaults, since not every option needs to be specified in the layout string; you just don’t get them when you don’t have one at all. Some places without one already make assumptions like that.
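For instance, a module could pin itself to those defaults explicitly; an empty layout string is already accepted and simply means "all defaults":

    target datalayout = ""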

Nick, I don't have a current system in place, but I do want to put forward an alternate perspective.

We've been looking at doing late insertion of safepoints for garbage collection. One of the properties that we end up needing to preserve through all the optimizations which precede our custom rewriting phase is that the optimizer has not chosen to "hide" pointers from us by using ptrtoint and integer math tricks. Currently, we're simply running a verification pass before our rewrite, but I'm very interested long term in constructing ways to ensure a "gc safe" set of optimization passes.

One of the ways I've been thinking about - but haven't actually implemented yet - is to deny the optimization passes information about pointer sizing. Under the assumption that an opto pass can't insert a ptrtoint cast without knowing a safe integer size to use, this seems like it would outlaw a class of optimizations we'd be broken by.
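To make the worry concrete, here is a made-up example of the kind of rewrite I mean (the types and offset are arbitrary):

    define void @example(i8** %slot) {
      ; Our safepoint rewrite can see that %obj is a pointer:
      %obj = load i8** %slot
      ; ...but a pass that knows pointers are 64 bits wide could, in
      ; principle, turn an address computation into integer math,
      ; hiding the derived pointer from us:
      %addr = ptrtoint i8* %obj to i64
      %fieldaddr = add i64 %addr, 16
      %field = inttoptr i64 %fieldaddr to i8*
      call void @use(i8* %field)
      ret void
    }

    declare void @use(i8*)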

My understanding is that the only current way to do this would be to not specify a DataLayout. (And hack a few places with built in assumptions. Let's ignore that for the moment.) With your proposed change, would there be a clean way to express something like this?

p.s. From reading the mailing list a while back, I suspect that the SPIR folks might have similar needs. (i.e. hiding pointer sizes, etc..) Pure speculation on my part though.

Philip

I don't think this will be a problem for opt or other LLVM tools. If opt
has a dependence on the host's default triple and datalayout, reproducing
the failure should be a simple matter of running the test locally with a
manually specified triple. It doesn't have implicit header search paths or
other weird host dependencies.

> The LLVM Module has an optional target triple and target datalayout. Without them, an llvm::DataLayout can't be constructed with meaningful data. The benefit to making them optional is to permit optimization that would work across all possible DataLayouts, then allow us to commit to a particular one at a later point in time, thereby performing more optimization in advance.
>
> This feature is not being used. Instead, every user of LLVM IR in a portability system defines one or more standardized datalayouts for their platform, and shims to place calls to the outside world. The primary reason for this is that independence from DataLayout is not sufficient to achieve portability because it doesn't also represent ABI lowering constraints. If you have a system that attempts to use LLVM IR in a portable fashion and does it without standardizing on a datalayout, please share your experience.
>
> The cost to keeping this feature around is that we have to pass around the DataLayout object in many places, test for its presence, in some cases write different optimizations depending on whether we have DataLayout, and in the worst case I can think of, we have two different canonical forms for constant expressions depending on whether DL is present. Our canonical IR is different with and without datalayout, and we have two canonicalizers fighting it out (IR/ConstantFold.cpp and Analysis/ConstantFolding.cpp).
>
> I'm trying to force the issue. Either this is a useful feature to maintain in which case I want to see a design on how to defer ABI decisions until a later point in time, or else we do not support it and target triple and target datalayout become a mandatory part of a valid Module again. I think the correct direction is to make them mandatory, but this is a large change that warrants debate.

I don't think we can reasonably express all the information needed by ABIs at the LLVM level. Given that, I would *love* to see DataLayout become a mandatory part of the IR!

> If we decide that target information should be a mandatory part of a module, there's another question about what we should do with existing .bc and .ll files that don't have one. Load in a default of the host machine?

For tools that don't link in the target libraries (llvm-as and llvm-dis being the most extreme cases) it would have to be the default empty string "". For opt I would be ok with "" or the host triple.

Thanks,
Rafael

In a very general sense, I would recommend this approach.

Push all of the existing “No datalayout” behaviour decisions into a default data layout. Keep the behaviour, but tidy up the API.

While LLVM is not designed to be a target-independent IR, particularly for compiling C, some other frontend languages may wish to use it that way.

>> This is also what many clang tests do, where TUs get parsed using the host triple. If we keep target datalayout out of the test files and fill it in with the host's information, then our test coverage expands as our buildbot diversity grows, which is a neat property.

> Unfortunately, reproducibility suffers. You commit a change, a test fails on two buildbots but passes on all of the others and on your local system. Now what do you do?

There are two issues here. One is what to do if we encounter a .ll/.bc with
no target data. We're obliged to support llvm 3.0 bitcode files, so we need
to have an answer to this question.

Second is what to do in our test suite. If the answer to the first question
is "make it use the host target data" then the second part is a choice
either to leave the tests with no explicit layout and thereby use the host
target, or to require that tests in the testsuite specify their datalayout.
The tradeoff is that in one case we get more coverage across different
machines, and in the other case we get better reproducibility, which is
important for a regression suite or for a new user to verify that their
build of llvm is valid.

> I've already hit this problem in clang, with host-defined tool search paths leaking into the tests and causing them to fail on Windows only. It's hard to fix a bug that causes a buildbot failure if you can't reproduce it. At the very least, the target / data layout should be in the failure message that the test suite generates in case of failure so that you can reproduce it locally if a buildbot reports failure.

Exactly. As long as it's easy to grab the target datalayout from a
buildbot, we can slap it into our .ll file and reproduce the failure.

I can see both sides, and I know my preference, but I'd like to form
consensus.

Nick

>> The LLVM Module has an optional target triple and target datalayout. Without them, an llvm::DataLayout can't be constructed with meaningful data. The benefit to making them optional is to permit optimization that would work across all possible DataLayouts, then allow us to commit to a particular one at a later point in time, thereby performing more optimization in advance.
>>
>> This feature is not being used. Instead, every user of LLVM IR in a portability system defines one or more standardized datalayouts for their platform, and shims to place calls to the outside world. The primary reason for this is that independence from DataLayout is not sufficient to achieve portability because it doesn't also represent ABI lowering constraints. If you have a system that attempts to use LLVM IR in a portable fashion and does it without standardizing on a datalayout, please share your experience.

> Nick, I don't have a current system in place, but I do want to put forward an alternate perspective.
>
> We've been looking at doing late insertion of safepoints for garbage collection. One of the properties that we end up needing to preserve through all the optimizations which precede our custom rewriting phase is that the optimizer has not chosen to "hide" pointers from us by using ptrtoint and integer math tricks. Currently, we're simply running a verification pass before our rewrite, but I'm very interested long term in constructing ways to ensure a "gc safe" set of optimization passes.

As a general rule passes need to support the whole of what the IR can support. Trying to operate on a subset of IR seems like a losing battle, unless you can show a mapping from one to the other (i.e., using code duplication to remove all unnatural loops from IR, or collapsing a function to having a single exit node).

What language were you planning to do this for? Does the language permit
the user to convert pointers to integers and vice versa? If so, what do you
do if the user program writes a pointer out to a file, reads it back in
later, and uses it?

> One of the ways I've been thinking about - but haven't actually implemented yet - is to deny the optimization passes information about pointer sizing.

Right, pointer size (address space size) will become known to all parts of
the compiler. It's not even going to be just the optimizations,
ConstantExpr::get is going to grow smarter because of this, as
lib/Analysis/ConstantFolding.cpp merges into lib/IR/ConstantFold.cpp. That
is one of the major benefits that's driving this. (All parts of the
compiler will also know endian-ness, which means we can constant fold
loads, too.)
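For example (a sketch, not output from any actual pass), endianness is exactly what a fold like this needs:

    @g = constant i32 258   ; byte pattern 02 01 00 00 on a little-endian target

    define i8 @first_byte() {
      ; With a little-endian DataLayout this can fold to "ret i8 2"; on a
      ; big-endian target it would fold to "ret i8 0"; with no DataLayout
      ; it can't be folded at all.
      %p = bitcast i32* @g to i8*
      %b = load i8* %p
      ret i8 %b
    }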

> Under the assumption that an opto pass can't insert a ptrtoint cast without knowing a safe integer size to use, this seems like it would outlaw a class of optimizations we'd be broken by.

Optimization passes generally prefer converting ptrtoint and inttoptr to
GEPs whenever possible. I expect that we'll end up with *fewer* ptr<->int
conversions with this change, because we'll know enough about the target to
convert them into GEPs.
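Roughly (an illustrative sketch):

    define i8* @offset(i8* %p) {
      ; The int round-trip form an optimizer might encounter:
      %a = ptrtoint i8* %p to i64
      %b = add i64 %a, 4
      %q = inttoptr i64 %b to i8*
      ret i8* %q
    }

    ; With a known pointer size, the body can canonicalize to a form that
    ; keeps the pointerness visible:
    ;   %q = getelementptr i8* %p, i64 4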

> My understanding is that the only current way to do this would be to not specify a DataLayout. (And hack a few places with built in assumptions. Let's ignore that for the moment.) With your proposed change, would there be a clean way to express something like this?

I think your GC placement algorithm needs to handle inttoptr and ptrtoint,
whichever way this discussion goes. Sorry. I'd be happy to hear others
chime in -- I know I'm not an expert in this area or about GCs -- but I
don't find this rationale compelling.

> p.s. From reading the mailing list a while back, I suspect that the SPIR folks might have similar needs. (i.e. hiding pointer sizes, etc..) Pure speculation on my part though.

The SPIR spec specifies two target datalayouts, one for 32 bits and one for
64 bits.

Nick

FWIW, I strongly support making this a mandatory part of the module. There is so much code to delete, this clearly simplifies the IR model.

Java - which does not permit arbitrary pointer manipulation. (Well, without resorting to mechanisms like JNI and sun.misc.Unsafe. Doing so would be explicitly undefined behavior though.) We also use raw pointer manipulations in our implementation (which is eventually inlined), but this happens after the safepoint insertion rewrite.

We strictly control the input IR. As a result, I can ensure that the initial IR meets our subset requirements. In practice, all of the opto passes appear to preserve these invariants (i.e. not introducing inttoptr), but we’d like to justify that a bit more.

I would argue that all of the pieces you mentioned are performing optimizations. :) However, the exact semantics are unimportant for the overall discussion.

This is good to hear and helps us.

Er, I’m confused by this. Why would not knowing the size of a pointer cause a GEP to be converted to a ptr <-> int conversion? Or do you mean that after the change conversions in the original input IR are more likely to be recognized?

The key assumption I didn’t initially explain is that the initial IR couldn’t contain conversions. With that added, do you still see concerns? I’m fairly sure I don’t need to handle general ptr <-> int conversions. If I’m wrong, I’d really like to know it.

Good to know. Thanks.

Philip

>> The LLVM Module has an optional target triple and target datalayout. Without them, an llvm::DataLayout can't be constructed with meaningful data. The benefit to making them optional is to permit optimization that would work across all possible DataLayouts, then allow us to commit to a particular one at a later point in time, thereby performing more optimization in advance.
>>
>> This feature is not being used. Instead, every user of LLVM IR in a portability system defines one or more standardized datalayouts for their platform, and shims to place calls to the outside world. The primary reason for this is that independence from DataLayout is not sufficient to achieve portability because it doesn't also represent ABI lowering constraints. If you have a system that attempts to use LLVM IR in a portable fashion and does it without standardizing on a datalayout, please share your experience.

> Nick, I don't have a current system in place, but I do want to put forward an alternate perspective.
>
> We've been looking at doing late insertion of safepoints for garbage collection. One of the properties that we end up needing to preserve through all the optimizations which precede our custom rewriting phase is that the optimizer has not chosen to "hide" pointers from us by using ptrtoint and integer math tricks. Currently, we're simply running a verification pass before our rewrite, but I'm very interested long term in constructing ways to ensure a "gc safe" set of optimization passes.

>> As a general rule passes need to support the whole of what the IR can support. Trying to operate on a subset of IR seems like a losing battle, unless you can show a mapping from one to the other (i.e., using code duplication to remove all unnatural loops from IR, or collapsing a function to having a single exit node).
>>
>> What language were you planning to do this for? Does the language permit the user to convert pointers to integers and vice versa? If so, what do you do if the user program writes a pointer out to a file, reads it back in later, and uses it?

> Java - which does not permit arbitrary pointer manipulation. (Well, without resorting to mechanisms like JNI and sun.misc.Unsafe. Doing so would be explicitly undefined behavior though.) We also use raw pointer manipulations in our implementation (which is eventually inlined), but this happens after the safepoint insertion rewrite.
>
> We strictly control the input IR. As a result, I can ensure that the initial IR meets our subset requirements. In practice, all of the opto passes appear to preserve these invariants (i.e. not introducing inttoptr), but we'd like to justify that a bit more.

> One of the ways I've been thinking about - but haven't actually implemented yet - is to deny the optimization passes information about pointer sizing.

>> Right, pointer size (address space size) will become known to all parts of the compiler. It's not even going to be just the optimizations, ConstantExpr::get is going to grow smarter because of this, as lib/Analysis/ConstantFolding.cpp merges into lib/IR/ConstantFold.cpp. That is one of the major benefits that's driving this. (All parts of the compiler will also know endian-ness, which means we can constant fold loads, too.)

> I would argue that all of the pieces you mentioned are performing optimizations. :) However, the exact semantics are unimportant for the overall discussion.

> Under the assumption that an opto pass can't insert a ptrtoint cast without knowing a safe integer size to use, this seems like it would outlaw a class of optimizations we'd be broken by.

>> Optimization passes generally prefer converting ptrtoint and inttoptr to GEPs whenever possible.

> This is good to hear and helps us.

>> I expect that we'll end up with *fewer* ptr<->int conversions with this change, because we'll know enough about the target to convert them into GEPs.

> Er, I'm confused by this. Why would not knowing the size of a pointer cause a GEP to be converted to a ptr <-> int conversion?

Having target data means we can convert inttoptr/ptrtoint into GEPs,
particularly in constant expression folding.

> Or do you mean that after the change conversions in the original input IR are more likely to be recognized?

> My understanding is that the only current way to do this would be to not specify a DataLayout. (And hack a few places with built in assumptions. Let's ignore that for the moment.) With your proposed change, would there be a clean way to express something like this?

>> I think your GC placement algorithm needs to handle inttoptr and ptrtoint, whichever way this discussion goes. Sorry. I'd be happy to hear others chime in -- I know I'm not an expert in this area or about GCs -- but I don't find this rationale compelling.

> The key assumption I didn't initially explain is that the initial IR couldn't contain conversions. With that added, do you still see concerns? I'm fairly sure I don't need to handle general ptr <-> int conversions. If I'm wrong, I'd really like to know it.

So we met at the social and talked about this at length. I'll repeat most
of the conversation so that it's on the mailing list, and also I've had
some additional thoughts since then.

You're using the llvm type system to detect when something is a pointer, and then you rely on knowing what's a pointer to deduce garbage collection roots. We're supposed to have the llvm.gcroot intrinsic for this purpose, but you note that it prevents gc roots from being in registers (they must be in memory somewhere, usually on the stack), and that fixing it is more work than is reasonable.
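For reference, its use looks roughly like this (the collector name here is made up):

    define void @frame() gc "my-collector" {
    entry:
      ; Roots must live in stack slots registered with llvm.gcroot --
      ; exactly the "no registers" limitation mentioned above.
      %root = alloca i8*
      call void @llvm.gcroot(i8** %root, i8* null)
      store i8* null, i8** %root
      ret void
    }

    declare void @llvm.gcroot(i8**, i8*)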

Your IR won't do any shifty pointer-int conversion shenanigans, and you
want some assurance that an optimization won't introduce them, or that if
one does then you can call it out as a bug and get it fixed. I think that's
reasonable, but I also think it's something we need to put forth before
llvm-dev.

Note that pointer-to-int conversions aren't necessarily just the
ptrtoint/inttoptr instructions (and constant expressions), there's also
casting between { i64 }* and { i8* }* and such. Are there legitimate
reasons an optz'n would introduce a cast? I think that anywhere in the
mid-optimizer, conflating integers and pointers is only going to be bad for
both the integer optimizations and the pointer optimizations.
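For example (illustrative only), this conflates a pointer with an integer without a single ptrtoint:

    define i64 @pun({ i8* }* %s) {
      ; Reinterpret a struct-of-pointer as a struct-of-integer and load
      ; the pointer's bits as an i64.
      %p = bitcast { i8* }* %s to { i64 }*
      %f = getelementptr { i64 }* %p, i32 0, i32 0
      %v = load i64* %f
      ret i64 %v
    }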

It may make sense as part of lowering -- suppose we find two allocas, one i64 and one i8*, and find that their lifetimes are distinct, and i64 and i8* are the same size, so we merge them. Because of how this would interfere, I don't think this belongs anywhere in the mid-optimizer; it would have to happen late, after lowering. That suggests that there's a point in the pass pipeline where the IR is "canonical enough" that this will actually work.
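Concretely, the merged form might look like this (a hypothetical lowering, assuming i64 and i8* have the same size and alignment on the target):

    define void @merged_slots() {
    entry:
      ; One stack slot standing in for both values:
      %slot = alloca i64
      ; ...the i64's lifetime uses it directly...
      store i64 0, i64* %slot
      ; ...and the i8*'s lifetime later reuses it through a cast:
      %bslot = bitcast i64* %slot to i8**
      store i8* null, i8** %bslot
      ret void
    }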

Is that reasonable? Can we actually guarantee that any pass which would break this goes after a common gc-root insertion spot? Do we need (want?) to push back and say "no, sorry, make GC roots better instead"?

Nick

I am not currently working on GC, but I am working on a back end for an architecture in which pointer integrity is enforced in hardware and pointers and integers have different representations, and so I would also find much of this contract for optimisations useful. Round tripping via an int involves data loss on my architecture and having optimisations insert these can be annoying (and break security properties). I imagine that the situation is similar for most software-enforced memory safety tools, not just GC.

David

While I find all of these things very interesting from the perspective of
security and/or hardware constraints, I don't think we should try to deal
with that here.

Today, even without a datalayout, I suspect LLVM is not providing nearly
the guarantee that either of these use cases is looking for. It may well
work by happenstance, but hope isn't a strategy. If we want to add this
constraint to LLVM, let's discuss that separately. I don't think we have it
today, and I don't think making datalayout mandatory meaningfully moves us
further from having it. At worst it causes already possible random failures
to become more common.

I agree. Having the DataLayout present makes these easier to enforce. It would also be nice if the data layout could encode a little bit more information about pointers (whether there is an integer representation, whether casts to these address spaces are valid), but that's a separate discussion.

David

I agree that this is not the right place to continue this discussion. I had intended to write up a proposal last week, but instead got distracted by actually writing code. I should have a separate proposal along these lines to the mailing list today.

I think we found a working middle ground in offline discussion; I’m hoping it won’t be particularly controversial.

Philip

Nick,

Thanks for writing up the summary of our conversation. I have a couple of small clarifications to make, but I’m going to move that into a separate thread since the discussion has largely devolved from the original topic.

To repeat my comment from last week, I support your proposed change w.r.t. DataLayout.

Philip

Splitting out a conversation which started in “make DataLayout a mandatory part of Module” since the topic has decidedly changed. This also relates to the email “RFC: GEP as canonical form for pointer addressing” I just sent.

Correct.

This is slightly off, but probably close to what I actually said even if not quite what I meant. :) I’m going to skip this and respond with a fuller explanation Monday. I’d written an explanation once, realized it was wrong, and decided I should probably revisit when fully awake. Fundamentally, I believe that gc.roots could be made to work, even with decent (but not optimal) performance in the end. We may even contribute some patches towards fixing issues with the gc.root mechanism just to make a fair comparison. I just don’t believe it’s the right approach or the best way to reach the end goal.

Correct and agreed. I split this part off into a separate proposal under the subject “RFC: GEP as canonical form for pointer addressing”.

I agree this is possible, even with my proposal. In fact, we already have a stack colouring pass in tree which does exactly what your example illustrates. However, this is done well after CodeGenPrepare and is thus after we start relaxing canonical form anyway.

A couple of other transforms which could potentially be problematic:
- load widening
- vectorization (when the vector element type loses the ‘pointerness’)

In each of these cases, we have clear ways of expressing the transformation in ways which preserve type information (i.e. struct types, vector element types, etc.). I would hope we could move towards these cleaner representations. (Note: I haven’t checked the current implementations. I should do so.)

My view of this is that any optimization which lost type information in such a manner without good cause would be poor style to begin with. I would hope that patches to remove such information loss would be accepted so long as there was a reasonable alternative. (I’m assuming this is already true; if it’s not, let me know.) (In case it’s not clear, being past CodeGenPrepare and lowering for a specific target would be a “good reason”.)

I think it is, but am open to being convinced otherwise. :)

Philip