Unnecessary code generated in LLVM IR

Please consider the following C code:
#include <math.h>

int main(void) {
  double c = ceil(-INFINITY);
  assert(isinf(INFINITY) && signbit(c));
}

The following LLVM IR is generated for the above code by clang:
define dso_local i32 @main() #0 {
entry:
  %retval = alloca i32, align 4
  %c = alloca double, align 8
  store i32 0, i32* %retval, align 4
  %0 = call double @llvm.ceil.f64(double 0xFFF0000000000000)
  store double %0, double* %c, align 8
  br i1 true, label %cond.true, label %cond.false   ; Unnecessary conditional branch

cond.true: ; preds = %entry
  %call = call i32 @__isinff(float 0x7FF0000000000000) #4
  %tobool = icmp ne i32 %call, 0
  br i1 %tobool, label %land.rhs, label %land.end

cond.false: ; preds = %entry               ; Unnecessary Block
  br i1 false, label %cond.true1, label %cond.false4

cond.true1: ; preds = %cond.false          ; Unnecessary Block
  %call2 = call i32 @__isinf(double 0x7FF0000000000000) #4
  %tobool3 = icmp ne i32 %call2, 0
  br i1 %tobool3, label %land.rhs, label %land.end

cond.false4: ; preds = %cond.false         ; Unnecessary Block
  %call5 = call i32 @__isinfl(x86_fp80 0xK7FFF8000000000000000) #4
  %tobool6 = icmp ne i32 %call5, 0
  br i1 %tobool6, label %land.rhs, label %land.end

land.rhs: ; preds = %cond.false4, %cond.true1, %cond.true
  %1 = load double, double* %c, align 8
  %2 = bitcast double %1 to i64
  %3 = icmp slt i64 %2, 0
  br label %land.end

land.end: ; preds = %land.rhs, %cond.false4, %cond.true1, %cond.true
  %4 = phi i1 [ false, %cond.false4 ], [ false, %cond.true1 ], [ false, %cond.true ], [ %3, %land.rhs ]
  %land.ext = zext i1 %4 to i32
  %call7 = call i32 (i32, ...) bitcast (i32 (...)* @assert to i32 (i32, ...)*)(i32 %land.ext)
  %5 = load i32, i32* %retval, align 4
  ret i32 %5
}

This seems to happen because of the logical AND operator in the assert; if the assert is instead split into two separate asserts:

assert(isinf(INFINITY));
assert(signbit(c));

then the generated IR is fine.

Thanks,
Akash.

That code looks like it was compiled without any optimizations enabled. The IR without optimizations is meant to be easy for the frontend to generate. Even with the code split into two asserts, I see many of the same unnecessary things when optimizations are disabled.

Hi,
By splitting the assert, I get the following IR:
define dso_local i32 @main() #0 {
entry:
  %c = alloca double, align 8
  %0 = call double @llvm.ceil.f64(double 0xFFF0000000000000)
  store double %0, double* %c, align 8
  %call = call i32 @__isinff(float 0x7FF0000000000000) #4
  %call1 = call i32 (i32, ...) bitcast (i32 (...)* @assert to i32 (i32, ...)*)(i32 %call)
  %1 = load double, double* %c, align 8
  %2 = bitcast double %1 to i64
  %3 = icmp slt i64 %2, 0
  %4 = zext i1 %3 to i32
  %call2 = call i32 (i32, ...) bitcast (i32 (...)* @assert to i32 (i32, ...)*)(i32 %4)
  ret i32 0
}
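
The signbit check itself looks fine here: the bitcast/icmp pair above is just how clang lowers the sign-bit test, reinterpreting the double's bits as an integer and checking the top bit. Conceptually it is something like the following (my_signbit is a made-up helper, for illustration only):

#include <stdint.h>
#include <string.h>

/* Illustration only: signbit for a double is conceptually a test of the
   most significant bit of its bit pattern, which is what the
   bitcast + icmp slt in the IR above implements. */
static int my_signbit(double x) {
  uint64_t bits;
  memcpy(&bits, &x, sizeof bits); /* reinterpret the double's bytes as an integer */
  return bits >> 63;              /* the sign bit is the top bit */
}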

That IR is still not optimal, but it doesn't have as much unnecessary code. My main concern is why single, double and extended precision floats are all being checked with isinf when the original code only asks about a double, and why this behaviour seems to be induced by the logical AND operator.

Thanks,
Akash.

As Craig said, you need to enable optimizations; otherwise the code generation is purposefully "basic" to represent the input as faithfully as possible.
You can see that -O1 does the job: https://godbolt.org/z/d8TeAf

You might also want to include <assert.h>
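
As for why single, double and extended precision variants show up at all: that comes from glibc's <math.h> rather than from clang. When the compiler's isinf builtin is not used, the type-generic isinf macro dispatches on the size of its argument at the source level, roughly like this (a paraphrase of older glibc headers; the exact definition varies between glibc versions):

/* Rough paraphrase of glibc's type-generic isinf macro. The sizeof
   conditions are compile-time constants, but in the && case clang's -O0
   codegen keeps them as real branches, which is where the __isinff,
   __isinf and __isinfl calls and the dead blocks come from. */
#define isinf(x)                              \
  (sizeof (x) == sizeof (float)               \
     ? __isinff (x)                           \
     : sizeof (x) == sizeof (double)          \
         ? __isinf (x)                        \
         : __isinfl (x))

At -O1 and above those constant branches fold away, as in the godbolt link above.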

Cheers,
  Johannes