If an asm's constraints claim that the variable is an output, but then don't actually write to it, that's a bug (at least if the value is actually used afterwards). An output-only constraint on inline asm definitely does _not_ mean "pass through the previous value unchanged, if the asm failed to actually write to it". If you need that behavior, it's spelled "+m", not "=m".
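For illustration, here's a minimal sketch of the two constraints (x86 AT&T syntax; the function name and asm bodies are placeholders, not real kernel code):

void example(void) {
    int x = 1;
    /* "=m" is output-only: the compiler may assume the asm fully
       overwrites x, so the store of 1 above is dead if nothing
       reads x in between. */
    asm("movl $2, %0" : "=m"(x));
    /* "+m" is read-write: the previous value of x is also an input,
       so a prior store to x must be preserved. */
    asm("addl $1, %0" : "+m"(x));
}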
We do seem to fail to take advantage of this for memory outputs (again, this is not just for ftrivial-auto-var-init -- we ought to eliminate manual initialization just the same), which I'd definitely consider a missed-optimization bug.
You mean we assume C code is buggy and asm code is not buggy because the compiler fails to disprove that there is a bug?
Doing this optimization without -ftrivial-auto-var-init looks reasonable; compilers do optimizations assuming the absence of bugs throughout. But -ftrivial-auto-var-init is specifically about assuming these bugs are everywhere.

Please be more specific about the problem, because your simplified example doesn't actually show an issue. If I write this function:
int foo() {
    int retval;
    asm("# ..." : "=r"(retval));
    return retval;
}
it already does get treated as definitely writing retval, and optimizes away the initialization (whether you explicitly initialize retval, or use -ftrivial-auto-var-init).
Example: https://godbolt.org/z/YYBCXL

This is probably because you're passing retval as a register output.
If you change "=r" to "=m" (https://godbolt.org/z/ulxSgx), it won't be
optimized away.
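For reference, the "=m" variant looks like this (same placeholder asm body as above):

int foo() {
    int retval = 0;                 /* explicit or auto-var-init store */
    asm("# ..." : "=m"(retval));    /* with a memory output, the store
                                       above is currently not removed  */
    return retval;
}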
(I admit I didn't know about the difference)

Hi JF et al.,
In the Linux kernel we often encounter the following pattern:

type op(...) {
    type retval;
    asm(... retval ...);
    return retval;
}

which is used to implement low-level platform-dependent memory operations.
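For instance, a hedged sketch modeled on the kernel's x86 bitops (not actual kernel source): the asm always writes |oldbit|, so any compiler-emitted initializer for it is a dead store.

static inline int test_and_set_bit(long nr, volatile unsigned long *addr)
{
    unsigned char oldbit;

    asm volatile("lock btsq %2, %1\n\t"
                 "setc %0"                  /* always writes oldbit */
                 : "=qm"(oldbit), "+m"(*addr)
                 : "Ir"(nr)
                 : "memory", "cc");
    return oldbit;
}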
Some of these operations turn out to be very hot, so we probably don't want to initialize |retval| given that it's always initialized in the assembly.

However, it's practically impossible to tell that a variable is being written to by the inline assembly, or to figure out the size of that write.
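For example (a hedged illustration with a placeholder asm body): the constraint below claims an 8-byte output, but the template stores only a single byte, and nothing visible to the compiler reflects that.

void partial_write(void) {
    unsigned long v;
    /* "=m" claims v is fully written, but the template stores only
       one byte; the remaining 7 bytes of v stay uninitialized. */
    asm("movb $1, %0" : "=m"(v));
}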
Perhaps we could speculatively treat every scalar output of an inline assembly routine as an initialized value (which is true for the Linux kernel, but I'm not sure about other users of inline assembly, e.g. video codecs).

WDYT?

Does kernel asm use "+m" or "=m"?
If asm _must_ write to that variable, then we could improve DSE in the normal case (ftrivial-auto-var-init not enabled). If ftrivial-auto-var-init is enabled, then strictly speaking we should not remove the initialization, because we did not prove that the asm actually writes. But we may remove the initialization anyway, for practical reasons.

Alex mentioned that in some cases we don't know the actual address/size of asm writes. But we should know it if a local var is passed to the asm, which should be the case for kernel atomic asm blocks.

Interestingly, ftrivial-auto-var-init DSE must not be stronger than non-ftrivial-auto-var-init DSE, unless we are talking about our own emitted initialization stores; in that case ftrivial-auto-var-init DSE may remove them more aggressively than what normal DSE would do, since we don't actually have to _prove_ that the init store is dead.

IMO the auto var init mitigation shouldn't change the DSE optimization at all. We shouldn't treat the stores we add any differently. We should just improve DSE and everything benefits (auto var init more so).
But you realize that this "just improving DSE" involves fully understanding the static and dynamic behavior of arbitrary assembly for any architecture, without even using the integrated assembler?

If you want to solve every problem, however unlikely, yes. If you narrow what you're doing to a handful of cases that matter, no.
How can we improve DSE to handle all the main kernel patterns that matter? Can we? It's still unclear to me. Extending this optimization to generic DSE and all stores could make it a much harder (perhaps unsolvable) problem...

Right now there's a handful of places in the kernel where we have to use __attribute__((uninitialized)) just to avoid creating an extra initializer:
https://github.com/google/kmsan/commit/00387943691e6466659daac0312c8c5d8f9420b9
and https://github.com/google/kmsan/commit/2954f1c33a81c6f15c7331876f5b6e2fec0d631f
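The workaround looks roughly like this (a hedged sketch modeled on the kernel's native_save_fl() idiom; I haven't verified it matches those commits exactly):

static inline unsigned long save_flags(void) {
    /* __attribute__((uninitialized)) tells -ftrivial-auto-var-init
       not to emit an initializer for this variable; the asm below
       always writes it. */
    unsigned long flags __attribute__((uninitialized));
    asm volatile("pushf\n\t"
                 "pop %0"
                 : "=rm"(flags)
                 : /* no inputs */
                 : "memory");
    return flags;
}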
All those assembly directives use local scalar variables of size <= 8 bytes as "=qm" outputs, so we can narrow the problem down to "let DSE remove redundant stores to local scalars that are used as asm() "m" outputs".
False positives will surely be possible in theory, but hopefully rare in practice.

Right, you only need to teach the optimizer about asm that matters. You don't need "extending this optimization to generic DSE". What I'm saying is: this is generic DSE, nothing special about variable auto-init, except we're making sure it helps variable auto-init a lot. I.e. there's no `if (VariableAutoInitIsOn)` in LLVM, there's just some DSE smarts that are likely to kick in a lot more when variable auto-init is on.
It doesn't have to be "if (VariableAutoInitIsOn), turn on DSE"; it could be just "if this is an assembly output, emit an __attribute__((uninitialized)) for it".

That's something I would really rather avoid. It's much better to make DSE more powerful than to play around with how clang generates variable auto-init.
I would still love to know what's the main source of truth for the semantics of asm() constraints.

I don't think you can trust programmer-provided constraints, unless you also add diagnostics to warn on incorrect constraints.
But then there's nothing left to trust. We surely don't want to parse the assembly itself to reason about its behavior, so the constraints are the only thing that lets us understand whether a variable is going to be written to.

I think you do want to look into the assembly. Have you tried instrumenting clang to dump out all the assembly strings? What's in those strings?
To some extent I did. I had to solve the same problem for
KernelMemorySanitizer to avoid false positives on values coming from
the assembly.
Here's an incomplete list of problems I decided not to deal with,
ending up with a heuristic based on output constraints and dynamic
address checks:
1. Assembly operates with entities that don't directly map to C
language constructs (registers, segments, flags, program counter).
Aliasing rules also don't work with assembly, and the memory model is
different from that offered by C.
2. Right now Clang doesn't use the integrated assembler to build the kernel, so it's hard to leverage any of the existing frameworks to parse the assembly or perform optimizations on it
(also note that the existing opportunities to optimize inline assembly
are quite limited, e.g. Clang is even unable to optimize away a
duplicate "mov %eax, %edx" instruction).
3. One has to solve the problem for every architecture supported by
the compiler.
4. The kernel uses a long tail of weird instructions that never appear in userspace code.
5. Certain inline assembly calls (e.g. for per-CPU variables) are
designed with Linux kernel implementation details in mind, and don't
make sense outside that. Ad-hoc code to handle them will have to be in
sync with the kernel source.
Instead of reasoning about particular kernel idioms on certain arches,
I'd prefer having all of that inline assembly translated to compiler
builtins with known semantics.
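For example (a hedged sketch with hypothetical function names; whether the two generate identical code would need measuring), an asm-based atomic increment could be expressed through a builtin whose semantics the optimizer fully understands:

/* asm version: opaque to the optimizer */
static inline void atomic_inc_asm(long *p) {
    asm volatile("lock incq %0" : "+m"(*p) : : "memory", "cc");
}

/* builtin version: the same operation, with known semantics */
static inline void atomic_inc_builtin(long *p) {
    __atomic_fetch_add(p, 1, __ATOMIC_SEQ_CST);
}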
But I'm not sure that's currently possible, because AFAIU the kernel developers are under the impression that those builtins are slower than raw assembly (which I can also imagine).
Having said that, I suspect we can do a good job in 95% of cases just by relying on the constraints, provided those have strict semantics we all agree on.
Not sure we want to specifically do anything about malformed
constraints, as people who write inline assembly always have ways to
shoot themselves in the foot.