More detailed example...

Further to that, I thought an example might be useful. In the following code, n should end up with a value that varies nondeterministically with the scheduling decisions made by the underlying runtime system -- my model checker, for example, should in theory be able to enumerate all possible values. However, if you look at the compiler output (see below), the volatile global variable n has vanished, and the printf only ever prints the constant result 0.

I somehow need to get around this, or my attempt to implement a C++ model checker is dead in the water.

Thank you in advance,
Sarah

PS: mcp_fork() duplicates the current stack as a new thread, returning true within that new thread and false in the original thread. mcp_yield() forces a reschedule. Stacks are separate, but the heap is shared. My model checker handles the generated code correctly; the problem is that the generated code itself has the wrong semantics.
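
For concreteness, here is a sketch of the kind of test case I mean -- illustrative rather than my exact source file, but inc() has the same shape as the compiler output discussed below, and mcp_fork()/mcp_yield() are the runtime calls described in the PS:

#include <stdio.h>

int  mcp_fork(void);    /* duplicates the stack; true in the new thread, false in the original */
void mcp_yield(void);   /* forces a reschedule */

volatile int n = 0;     /* the shared global that is being optimized away */

void inc(volatile int *p)
{
        /* three increments with a reschedule after each one */
        ++*p; mcp_yield();
        ++*p; mcp_yield();
        ++*p; mcp_yield();
}

void dec(volatile int *p)
{
        /* the mirror image: three decrements with reschedules */
        --*p; mcp_yield();
        --*p; mcp_yield();
        --*p; mcp_yield();
}

int main(void)
{
        if (mcp_fork()) {
                inc(&n);                 /* new thread counts up */
        } else {
                dec(&n);                 /* original thread counts down */
                printf("n = %d\n", n);   /* should depend on how far the other thread got */
        }
        return 0;
}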

Sarah Thompson wrote:

Further to that, I thought an example might be useful. In the following
code, n should end up with a value that varies nondeterministically with
the scheduling decisions made by the underlying run time system -- my
model checker, for example, should in theory be able to enumerate all
possible values. Anyway, if you look at the compiler output (see below),
the volatile global variable, n, has vanished, with the printf output
only ever taking the (constant) result 0.

How are you compiling this? I get the following sort of output:

void %inc(int* %p) {
entry:
        %tmp = volatile load int* %p ; <int> [#uses=1]
        %tmp1 = add int %tmp, 1 ; <int> [#uses=1]
        volatile store int %tmp1, int* %p
        tail call void (...)* %mcp_yield( )
        %tmp.1 = volatile load int* %p ; <int> [#uses=1]
        %tmp1.1 = add int %tmp.1, 1 ; <int> [#uses=1]
        volatile store int %tmp1.1, int* %p
        tail call void (...)* %mcp_yield( )
        %tmp.2 = volatile load int* %p ; <int> [#uses=1]
        %tmp1.2 = add int %tmp.2, 1 ; <int> [#uses=1]
        volatile store int %tmp1.2, int* %p
        tail call void (...)* %mcp_yield( )
        ret void
}

I ran this with "llvm-gcc -c mcp.c -o mcp.bc". In the example you gave,
it would be valid for LLVM to optimize away the volatile memory accesses
if it were doing link-time optimization: it could prove that "int n"'s
value never changes the observable behaviour of the program and elide it.

Nick Lewycky


Nick Lewycky wrote:

How are you compiling this?

How are you compiling? If you are running gccld, then your global is getting marked 'internal' and then the globalopt pass is promoting it to a local scalar. We could make globalopt avoid doing this for globals with volatile accesses.
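
Roughly speaking -- just as an illustrative sketch, not literally what the passes do -- internalizing a global has the same effect as giving it internal linkage in the source, at which point globalopt can see every use of it:

volatile int n = 0;          /* external linkage: other code might still touch it,
                                so it has to be kept                                 */
static volatile int m = 0;   /* internal linkage: every use is visible, and globalopt
                                will currently promote it to a local scalar even
                                though the accesses are volatile                     */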

If you *are* running gccld, try passing -disable-internalize to see if this goes away.

-Chris


Nick Lewycky wrote:

How are you compiling this?

llvm-gcc incdec.cpp -o incdec

I am currently using LLVM 1.8 -- I was holding off porting until LLVM 2.0 stabilises, because I want to be able to move to 64-bit Intel and don't want to be chasing a moving target.


The output you got looks right, and would definitely generate correct code if it were actually used. I will try again with the gcc command line you used.

Thanks,
Sarah

Ok, try passing -Wl,-disable-internalize
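
With the command line from earlier, that would be something like:

llvm-gcc incdec.cpp -o incdec -Wl,-disable-internalize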

-Chris

Chris Lattner wrote:

Ok, try passing -Wl,-disable-internalize
  

Yes, that nailed it nicely.

As a general point, it's very common in flight software to have a lot of statically allocated data structures, much more so than in 'normal' code, because the mandated coding standards (in many cases) outlaw runtime memory allocation after initialisation; typically malloc is allowed, but only during startup. Real-time kernels provide at least interrupt-routine support and often threads as well, so this static data does tend to get shared.
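
For example (purely illustrative, not real flight code), the sort of pattern I mean is a statically allocated buffer shared between an interrupt routine and a task:

static volatile int samples[32];     /* statically allocated once, never malloc'd */
static volatile int head, tail;

void isr_sample_ready(int s)         /* filled in from an interrupt routine */
{
        samples[head] = s;
        head = (head + 1) % 32;
}

int task_next_sample(void)           /* drained by a periodic thread */
{
        int s = samples[tail];
        tail = (tail + 1) % 32;
        return s;
}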

It's unlikely that LLVM will be used to compile flight software any time soon, if ever, though it is showing signs of becoming popular as a front-end for program analysis (at least here, anyway!), so support for this way of working is likely to be very beneficial.

Thanks again,
Sarah

Right. LLVM never turns static allocations into mallocs, though it does sometimes turn small mallocs into static or stack allocations, and can turn static objects into stack objects or registers. Are you seeing a case where this happens?
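
For instance, something like this (illustrative only) may legitimately end up with no heap allocation at all, but the reverse direction never happens:

#include <stdlib.h>

int sum_three(int a, int b, int c)
{
        /* a small malloc whose pointer never escapes: the optimizer is free to
           turn it into a stack allocation, or into plain registers              */
        int *tmp = malloc(3 * sizeof *tmp);
        if (!tmp)
                return 0;
        tmp[0] = a; tmp[1] = b; tmp[2] = c;
        int s = tmp[0] + tmp[1] + tmp[2];
        free(tmp);
        return s;
}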

-Chris