Does LLVM optimize recursive calls?

From: "Journeyer J. Joh" <oosaprogrammer@gmail.com>
I have a simple question about LLVM.

I learned that we should use iteration rather than recursion in C
programming. That is because recursion is expensive: it can easily
exhaust all the stack given to a program, and the call/return overhead
costs much more time.

But I've read that nowadays compilers can optimize recursion, in C or
other languages, so that it doesn't consume so much stack or
call/return overhead.

I wonder what kind of optimization this is and whether LLVM supports
this kind of optimization.

At this point in your programming career, you should focus on writing
clean, correct code. It's the same thing I'm supposed to focus on at
this point in my own career! If the desired algorithm is expressed
cleanly using recursion, use recursion (and procedure calls in
general). If it's cleaner to use a loop, use a loop. The compiler will
help when it can. Later, when you study compilers, you can learn
exactly when and where the compiler can do various optimizations.

Think about costs asymptotically; that's what matters. Calls and
returns require constant time, just like addition and multiplication.

Preston

Preston Briggs <preston.briggs@gmail.com> writes:

Think about costs asymptotically; that's what matters. Calls and
returns require constant time, just like addition and multiplication.

Constant time, but not necessarily constant memory.

Deep recursion will blow up your stack (AKA "segmentation fault" :( )
if the compiler cannot optimize it away (tail-recursion optimization).

Only if the recursion is very deep. In practice, a recursive descent
parser isn't going to run out of stack space, nor will a quicksort or
binary-tree walker.

You're distracting this man from his job of learning how to program.

Yes, it's a mistake to have a set of mutually recursive routines that
chew up all your stack space; but it's also a mistake to rely on a
compiler optimization to avoid such problems. Write an explicit loop
if it's a possibility.

Preston

Preston Briggs <preston.briggs@gmail.com> writes:

Preston Briggs <preston.briggs@gmail.com> writes:

Think about costs asymptotically; that's what matters. Calls and
returns require constant time, just like addition and multiplication.

Constant time, but not necessarily constant memory.

Deep recursion will blow up your stack (AKA "segmentation fault" :( )
if the compiler cannot optimize it away (tail-recursion optimization).

Only if the recursion is very deep.

Not so much. Stack size is usually a few megabytes, much smaller than
the heap size.

In practice, a recursive descent parser isn't going to run out of
stack space, nor will a quicksort or binary-tree walker.

Yes, because you're taking examples where the recursion depth is log(n)
or close to it. "there are cases where recursion is OK" is true, but it
doesn't imply "you shouldn't worry about your stack size".

Yes, it's a mistake to have a set of mutually recursive routines that
chew up all your stack space; but it's a mistake to rely on a
compiler optimization to avoid such problems.

I agree with this, but this is not what the message I replied to said.
Comparing calls and returns to addition and multiplication in terms of
performance is still wrong.

(BTW, it actually depends on whether the language guarantees this
optimization. IIRC, C doesn't, so you can't rely on it, but Caml does,
so tail recursion is OK even for large datasets in Caml.)

Only if the recursion is very deep. In practice, a recursive descent
parser isn't going to run out of stack space, nor will a quicksort or
binary-tree walker,

The recursive-descent parser case has happened in practice:
http://my.opera.com/hallvors/blog/2012/07/17/twitter-crashes-itself-with-commas?1

Also, I've seen some recursion-related PRs in Clang, although I think
that they are usually related to templates and not parsing.

Also, it's easy to blow the stack in quicksort if you don't tail-call
into the recursive invocation of the _larger_ subrange. Recursively
calling into the smaller subrange guarantees that its size is less
than half of the current range, whereas a non-tail call into the
larger subrange can require linear stack space if the partition isn't
good.

You're distracting this man from his job of learning how to program.

Indeed. Journeyer, you should ignore this stuff that we are saying,
these are all exceptions: you'll fix them if they arise, but you
generally shouldn't design with them in mind. Journeyer, if you want
to learn more about tail calls, I recommend this paper by Guy Lewis
Steele, Jr.:
http://repository.readscheme.org/ftp/papers/ai-lab-pubs/AIM-443.pdf
It's a pretty old paper, but it is exactly about your original
question. By the end, you'll probably know more about tail calls than
you'll ever need to know :)

-- Sean Silva

Hello Sean Silva, Matthieu Moy, Preston Briggs, Kent Williams and
people interested in this

Thank you for the useful information you provided.

Now it's time for me to study the information you provided.
Because the material you pointed me to takes some time to digest, I
now need time to understand it.

In short, I understood the points but I need to read the pdf and
examples you provided.

Thank you for the kind replies.

Sincerely
Journeyer J. Joh