So, first things first.
Handling unreachable code is annoying. I agree. However, it's important to
characterize how annoying. Most code is not unreachable, and we're
(obviously) fine just bailing on such crazy code paths. So in practice the
common cost is keeping a local set to detect cycles in the graph where we
aren't expecting them. We are essentially always traversing a linked list
representing this graph. Because these linked-list nodes have no particular
reason to have high locality, we get a cache-hostile traversal where, at each
step, we do mostly local and cache-friendly book-keeping to catch duplication.
I suspect that this has a very small (but
non-zero) impact on compile time.
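To make that cost concrete, here is a minimal sketch of the pattern being described (generic C++, not LLVM code; Node and findEnd are invented names): a pointer-chasing walk whose only defense against cycles introduced by unreachable code is a per-walk visited set.

  #include <unordered_set>

  // Hypothetical node in a chain-shaped graph (think: walking single
  // predecessors, or a use chain). In LLVM this would be a BasicBlock
  // or an Instruction; the type here is purely illustrative.
  struct Node {
    Node *next = nullptr; // the "linked list" being traversed
  };

  // Without unreachable code this loop would simply follow `next` until
  // it hits null. Because unreachable code can form cycles, every step
  // also pays for a visited-set insert: cheap, cache-friendly
  // book-keeping layered on top of a cache-hostile pointer chase.
  Node *findEnd(Node *start) {
    std::unordered_set<Node *> visited; // only needed to catch cycles
    Node *cur = start;
    while (cur && visited.insert(cur).second)
      cur = cur->next;
    return cur; // null at the real end, or the first repeated node
  }

If unreachable code were never allowed to reach this walk, the visited set and its per-step insert could simply go away.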
So it's worse than this.
It also means that doing something like "walk the graph" is no longer
sufficient to generate correct code. It's not just about cycles.
Let's take an example:
PromoteMemoryToRegisters has a rename pass. This rename pass walks blocks
from entry, following successors, renaming values to use the nearest
definition. After this is done, it's done.
Except, it isn't.
Because you see, then it has to go and see whether it missed some allocas,
because they were unreachable.
And then it has to go and see if any of the blocks ended up with phi nodes
that have predecessors that are unreachable, because it has to replace
those with undef.
And so on.
(It actually turns out this is not enough, and this handling is buggy).
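A rough sketch of the shape of that extra bookkeeping (generic C++, not the actual PromoteMemoryToRegisters code; Block, Phi, getUndef, and fixupAfterRename are stand-in names):

  #include <unordered_set>
  #include <utility>
  #include <vector>

  struct Value {};
  struct Block;

  // Hypothetical phi node: one incoming (predecessor, value) pair per edge.
  struct Phi {
    std::vector<std::pair<Block *, Value *>> incoming;
  };

  struct Block {
    std::vector<Phi *> phis;
  };

  // Stand-in for LLVM's UndefValue.
  Value *getUndef() {
    static Value undef;
    return &undef;
  }

  // `reachable` is what the rename walk from the entry block actually
  // visited. Renaming alone is not "done": a second whole-function sweep
  // still has to (1) notice allocas that only live in blocks the walk
  // never reached, and (2) rewrite phi inputs that name unreachable
  // predecessors to undef.
  void fixupAfterRename(const std::vector<Block *> &allBlocks,
                        const std::unordered_set<Block *> &reachable) {
    for (Block *bb : allBlocks) {
      if (!reachable.count(bb)) {
        // (1) Anything promoted here was never renamed; it has to be
        //     handled (or dropped) explicitly in this extra pass.
        continue;
      }
      // (2) Phis in reachable blocks can still list unreachable preds.
      for (Phi *phi : bb->phis)
        for (auto &in : phi->incoming)
          if (!reachable.count(in.first))
            in.second = getUndef();
    }
  }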
Handling unreachable code is also error prone. We have had a long history
of bugs here. So if it were free to get rid of unreachable code, that would
be super awesome.
The first problem to realize is that for a large chunk of LLVM, we can't
actually get rid of handling unreachable code. Passes are quite likely to
create unreachable code while running ...
Which passes, exactly?
I am going to assert, based on experience writing almost all of the opt
passes LLVM has (except a vectorizer :P), that it is entirely possible, and
not even difficult, to avoid creating unreachable code in these passes.
I'm also going to point out that *roughly all the other compilers in the
world* do not allow it either.
Or at least, they do not have it at the point at which you need to call a
utility or another pass.
So I would say your assertion that things are quite likely to create it is
wrong. Things may temporarily create it, but there is simply no need, at
all, to expose it to *anything else*.
... and that will mean that all of the utilities will end up needing to be
conservatively correct and handle unreachable code even when they don't
need to. =/
So I strongly disagree with this.
This is an assertion based on the way the world is now, where things
willy-nilly create unreachable code and expect the rest of the world to
deal with it. I don't believe it is even that difficult to get to a world
where this isn't true.
The second problem is that cleaning up unreachable code isn't free. The
code which is most likely to create unreachable code is similarly likely to
destroy the analysis information we would use to remove it cheaply.
I disbelieve this.
And even then, just walking the lists of basic blocks as they go out of the
function is going to dirty cache lines that we might not have any other
reason to look at. I can genuinely imagine cases where batching this will
be beneficial. Will it outstrip the cost of handling it? I don't know. But
I think it will mitigate the gains, especially if the gains aren't as
significant as we might naively think.
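Concretely, the cleanup whose cost is being weighed here is roughly a mark-and-sweep over the function, the sort of thing a utility like LLVM's removeUnreachableBlocks does, sketched below in generic C++ rather than against the real API:

  #include <unordered_set>
  #include <vector>

  struct Block {
    std::vector<Block *> successors;
  };

  // Mark everything reachable from entry, then sweep the function's
  // block list and delete the rest. The sweep touches every block (and
  // its memory) whether or not anything else in the pass had a reason
  // to look at it, which is the cache-line traffic at issue above.
  void removeUnreachable(std::vector<Block *> &blocks, Block *entry) {
    std::unordered_set<Block *> reachable;
    std::vector<Block *> worklist{entry};
    while (!worklist.empty()) {
      Block *bb = worklist.back();
      worklist.pop_back();
      if (!reachable.insert(bb).second)
        continue;
      for (Block *succ : bb->successors)
        worklist.push_back(succ);
    }
    std::vector<Block *> kept;
    for (Block *bb : blocks) {
      if (reachable.count(bb))
        kept.push_back(bb);
      else
        delete bb; // dead block: reclaim it
    }
    blocks.swap(kept);
  }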
The third problem I have is that this makes it much harder to
constructively produce a correct IR transformation pass. When transforming
the IR we must never forget about regions we have made unreachable.
Almost all algorithms I can think of already expect to have to clean up
unreachable regions and delete dead blocks. It's only LLVM that doesn't do
it.
A single mistake here will cascade to a verification failure. This is the
reverse of the problem we have today where your pass must *accept*
unreachable IR.
But it is also *much* easier to verify, because the number of points at which
predecessors are modified, or edges redirected, is actually not that
large. At worst, *those* are the only places that can create
forward-unreachable code. So you have a bounded set of things to test. On the
other hand, accepting "whatever input" is not a bounded problem. People
can come up with crazier and crazier input you must accept.
But I think the constructive reasoning is much easier. It makes it harder
to have a bug through omission of a case.
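One hedged sketch of what that constructive discipline could look like: every edge redirection goes through a helper that immediately erases whatever it just orphaned (generic C++; redirectEdge and erasePredLessBlocks are invented names, and a real version would also need to handle unreachable cycles, not just predecessor-less blocks).

  #include <algorithm>
  #include <vector>

  struct Block {
    std::vector<Block *> preds;
    std::vector<Block *> succs;
    bool isEntry = false;
  };

  // Cascade-delete blocks that have lost all predecessors. Unreachable
  // cycles are not handled here; the point is that the set of call
  // sites to audit stays small and fixed.
  void erasePredLessBlocks(Block *bb) {
    if (bb->isEntry || !bb->preds.empty())
      return;
    // Dedupe successors so a block branched to twice is visited once.
    std::vector<Block *> succs = bb->succs;
    std::sort(succs.begin(), succs.end());
    succs.erase(std::unique(succs.begin(), succs.end()), succs.end());
    for (Block *succ : succs) {
      auto &p = succ->preds;
      p.erase(std::remove(p.begin(), p.end(), bb), p.end());
      erasePredLessBlocks(succ); // cascade into anything just orphaned
    }
    delete bb;
  }

  // The one kind of mutation point that can create forward-unreachable
  // code. Assumes at most one edge from `from` to `oldTarget`.
  void redirectEdge(Block *from, Block *oldTarget, Block *newTarget) {
    std::replace(from->succs.begin(), from->succs.end(), oldTarget, newTarget);
    auto &p = oldTarget->preds;
    p.erase(std::remove(p.begin(), p.end(), from), p.end());
    newTarget->preds.push_back(from);
    erasePredLessBlocks(oldTarget); // clean up where the edge changed
  }

With that shape, verifying "no unreachable code escapes the pass" reduces to auditing the handful of helpers that touch edges.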
The fourth problem I have is related to the third problem. How do we gain
confidence in the correctness of any IR transformation? We must construct
test cases that we believe will exercise the system. It seems *vastly*
easier to construct test cases with interesting unreachable IR patterns
that should all be handled, than to construct test cases where the pass
will happen to form unreachable IR patterns in each way and ensure that
none escape the pass.
I fundamentally do not understand why you believe this. The number of
places that create unreachable code is finite, bounded, and testable. You
cannot create unreachable code out of magic.
You are right that it is easier to create test cases with interesting
unreachable IR patterns, but there is an infinite number of them.
One is essentially covering input diversity, the other has to cover
*output* diversity, and must prove the negative. We fundamentally must gain
confidence that the pass *never* produces unreachable IR. This seems much
harder than demonstrating that it does handle unreachable IR.
The fifth problem I have is also related to the third and fourth problems.
Precluding unreachable code seems likely to make tools like delta test case
reduction dramatically less effective. Now, when cutting edges in the CFG
to minimize a test case, they must actually build the graph and prune all
unreachable code.
Again, this is just "not hard". All the other compilers in the world do
this, and do it cheaply.
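For what it's worth, the pruning step a reducer would need is one more reachability walk, the same mark-and-sweep sketched above, framed here as a single reduction step (generic C++ with invented names, not any real reduction tool):

  #include <algorithm>
  #include <unordered_set>
  #include <vector>

  struct Block {
    std::vector<Block *> successors;
  };

  // One hypothetical reducer step: cut a CFG edge, then keep only what
  // is still reachable from the entry block. The resulting candidate is
  // what gets handed to the interestingness test.
  std::vector<Block *> cutEdgeAndPrune(std::vector<Block *> blocks,
                                       Block *entry, Block *from, Block *to) {
    auto &succs = from->successors;
    succs.erase(std::remove(succs.begin(), succs.end(), to), succs.end());

    std::unordered_set<Block *> reachable;
    std::vector<Block *> worklist{entry};
    while (!worklist.empty()) {
      Block *bb = worklist.back();
      worklist.pop_back();
      if (!reachable.insert(bb).second)
        continue;
      for (Block *s : bb->successors)
        worklist.push_back(s);
    }

    std::vector<Block *> kept;
    for (Block *bb : blocks)
      if (reachable.count(bb))
        kept.push_back(bb);
    return kept;
  }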