Help with Values sign

Hi everyone, I'm having some problems when loading values. Here's a simple
example of the IR that my compiler generates to show the problem:

define i32 @main() {
entry:
  %a = alloca i32 ; <i32*> [#uses=2]
  store i32 1, i32* %a
  %r = load i32* %a ; <i32> [#uses=1]
  br label %Return

Return: ; preds = %entry
  ret i32 %r
}

but instead of returning 1 it returns 16777216. I have tested the program
with different values, and it seems to be a matter of the internal binary
representation of integer values.
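
In fact, 16777216 is 0x01000000: exactly the 32-bit value 1 with its four
bytes reversed, which is why I suspect the byte order. A quick standalone
check (plain C++, nothing LLVM-specific; __builtin_bswap32 is just the
GCC/Clang byte-swap builtin, used here for illustration):

    #include <cstdint>
    #include <cstdio>

    int main() {
        // 1 is 0x00000001; reversing its four bytes gives 0x01000000,
        // i.e. 16777216 in decimal.
        uint32_t swapped = __builtin_bswap32(1u);
        std::printf("%u\n", swapped); // prints 16777216
        return 0;
    }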

Thanks in advance.

Santos Merino del Pozo.

Here is the trace of calls to LLVM that my compiler makes to generate the
code above. bloques.back().last is of type Value* and bloques.back().bl is
a BasicBlock* (the current basic block):

    // $1 is an int from the parser. ConstantInt is already a Value,
    // so no dyn_cast<Value> is needed here.
    bloques.back().last =
        ConstantInt::get(Type::getInt32Ty(getGlobalContext()), $1, true);
    // %a = alloca i32
    AllocaInst *alloc =
        new AllocaInst(Type::getInt32Ty(getGlobalContext()), izq.c_str(),
                       bloques.back().bl);
    // store i32 1, i32* %a
    Value *derecha = bloques.back().last;
    StoreInst *s = new StoreInst(derecha, alloc, false, bloques.back().bl);
    bloques.back().last = alloc;
    // %r = load i32* %a
    LoadInst *v1 =
        new LoadInst(bloques.back().last, "r", false, bloques.back().bl);
    bloques.back().last = v1;
    // br label %Return in entry, ret i32 %r in Return
    BasicBlock *blockReturn =
        BasicBlock::Create(getGlobalContext(), "Return", Main);
    Value *last = bloques.back().last;
    BranchInst::Create(blockReturn, bloques.back().bl);
    ReturnInst::Create(getGlobalContext(), last, blockReturn);

The source code corresponding to the IR representation would be:

    a = 1
    return a

How are you checking the return value?

I put your code in santos.ll and then executed it with lli, and I get 1 as a return value.

lli santos.ll ; echo $?
1

Joey

When the ExecutionEngine finishes the JIT execution it says:

  Result: 16777216

Santos Merino <santitox@hotmail.es> writes:

When the ExecutionEngine finishes the JIT execution it says:

  Result: 16777216

There is nothing wrong with the IR code you initially posted. If you are
obtaining that result, it must be caused by some other factor. My guess is
that you are messing up the stack by calling the function the wrong way,
or obtaining the result from the JIT engine incorrectly.

Please show a complete and minimal test case that demonstrates your
problem.
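
For reference, a complete driver for exactly the IR you posted can be as
small as the sketch below. This assumes a 2.x-era LLVM with the legacy
JIT; the module and function names are placeholders. If this prints
Result: 1 for you while your real driver prints 16777216, diff the two.

    #include "llvm/LLVMContext.h"
    #include "llvm/Module.h"
    #include "llvm/Constants.h"
    #include "llvm/DerivedTypes.h"
    #include "llvm/Instructions.h"
    #include "llvm/Analysis/Verifier.h"
    #include "llvm/ExecutionEngine/JIT.h"
    #include "llvm/ExecutionEngine/GenericValue.h"
    #include "llvm/Target/TargetSelect.h"
    #include "llvm/Support/raw_ostream.h"
    #include <vector>
    using namespace llvm;

    int main() {
      // Register the native target; without this, EngineBuilder falls
      // back to the interpreter.
      InitializeNativeTarget();
      LLVMContext &C = getGlobalContext();
      Module *M = new Module("test", C);
      Function *F = Function::Create(
          FunctionType::get(Type::getInt32Ty(C), false),
          Function::ExternalLinkage, "main", M);
      BasicBlock *Entry = BasicBlock::Create(C, "entry", F);
      BasicBlock *Ret = BasicBlock::Create(C, "Return", F);

      AllocaInst *A = new AllocaInst(Type::getInt32Ty(C), "a", Entry);
      new StoreInst(ConstantInt::get(Type::getInt32Ty(C), 1, true), A,
                    false, Entry);
      LoadInst *R = new LoadInst(A, "r", false, Entry);
      BranchInst::Create(Ret, Entry);
      ReturnInst::Create(C, R, Ret);

      if (verifyModule(*M, PrintMessageAction))
        return 1;

      ExecutionEngine *EE = EngineBuilder(M).create(); // takes ownership of M
      std::vector<GenericValue> NoArgs;
      GenericValue GV = EE->runFunction(F, NoArgs);
      outs() << "Result: " << GV.IntVal << "\n";
      return 0;
    }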

I have found that the problem is the endianness of the Values. For
example, a 1 is stored as 00000001 00000000 00000000 00000000 instead of
00000000 00000000 00000000 00000001, so how can I change how the values
are stored and/or loaded? (I suppose the problem occurs when I store to
or load from memory.)

Thanks,
Santos Merino.

Are you setting up the module's TargetData string correctly?
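
In other words, the module's layout should match what the engine will
actually use; the first field of that string ('e' or 'E') is the
byte-order marker. A minimal sketch, assuming M is your Module*:

    ExecutionEngine *EE = EngineBuilder(M).create();
    // Give the module the same data layout the engine will use.
    M->setDataLayout(EE->getTargetData()->getStringRepresentation());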

-Chris

In my main function, after generating the code and before starting
execution via the JIT, I do this:

    ExecutionEngine *EE = EngineBuilder(M).create();
    // Copy the engine's TargetData string into the module, forcing the
    // endianness marker (the first character) to 'e' (little-endian).
    string str = EE->getTargetData()->getStringRepresentation();
    str[0] = 'e';
    M->setDataLayout(str);

    if (verifyModule(*M)) {
        errs() << argv[0] << ": Error building the function!\n";
        return 1;
    }

    vector<GenericValue> noargs;
    GenericValue GV = EE->runFunction(Main, noargs);

    outs() << "Result: " << GV.IntVal << "\n";
    return 0;

Why are you changing the data layout to be little-endian?

Joey

Finally I've found the solution. To change the endianness I do the
following (a little spaghetti, but it works):

    Module *M = new Module("pythoncode", getGlobalContext());
    // Throwaway engine, used only to read the host's TargetData string.
    ExecutionEngine *EE2 = EngineBuilder(M).create();
    string str = EE2->getTargetData()->getStringRepresentation();
    str[0] = 'e'; // force the little-endian marker
    cout << str << endl;
    M->setDataLayout(str);
    // Now build the engine that will actually run the code, with the
    // module carrying the patched data layout.
    ExecutionEngine *EE = EngineBuilder(M).create();
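
(One caveat: EngineBuilder takes ownership of the Module it is given, so
both EE2 and EE end up owning M here. It works because neither engine is
ever deleted, but that is part of the spaghetti.)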