Is there a way to use only machine-representable types in phi-nodes before building SelectionDAG?

LLVM IR is allowed to operate on types that are not representable on the target machine (such as i9), and therefore such types can also appear in phi-nodes.

As far as I understand, only target-representable types can be used in CopyFromReg and CopyToReg operations. How can I stop SelectionDAG from emitting needless type-conversion code?

Example:
I have a while loop over chars:

while(*str) {
    // some bit magic, which combines *str with a uint64_t accumulator value
    str++;
}

which is compiled to the following IR (abbreviated):

entry:
    %0 = load i8, ptr %str, align 1, !tbaa !2
    %tobool.not11 = icmp eq i8 %0, 0
    br i1 %tobool.not11, label %while.end, label %while.body
while.body:
    %1 = phi i8 [ %2, %while.body ], [ %0, %entry ]
    %conv = zext i8 %1 to i64
    ; other loop body code
    %2 = load i8, ptr %incdec.ptr, align 1, !tbaa !2

The machine I'm implementing a backend for has only 64-bit registers, so during SelectionDAG building this phi-node is expanded to:

t13[52]: i32,ch = CopyFromReg t0[39], Register:i64 %1
t14[53]: i8 = truncate t13[52]
t15[54]: i64 = zero_extend t14[53]
// other loop body code
t30[69]: i8,ch = load<(load (s8) from %ir.incdec.ptr, !tbaa !2)> t0[39], t9[48], undef:i64
t31[70]: i64 = any_extend t30[69]
t33[72]: ch = CopyToReg t0[39], Register:i64 %6, t31[70]

Here I get a useless truncate + zero_extend + any_extend instead of a single zero_extend, which, on top of that, could be combined with the load, because the target has zero-extending byte-load instructions.
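
For comparison, this is roughly the IR I would like ISel to see for this loop (a hand-edited sketch; the %zext.* value names are just illustrative):

entry:
    %0 = load i8, ptr %str, align 1, !tbaa !2
    %zext.0 = zext i8 %0 to i64
    %tobool.not11 = icmp eq i8 %0, 0
    br i1 %tobool.not11, label %while.end, label %while.body
while.body:
    %1 = phi i64 [ %zext.2, %while.body ], [ %zext.0, %entry ]
    ; uses of %conv are replaced by %1
    ; other loop body code
    %2 = load i8, ptr %incdec.ptr, align 1, !tbaa !2
    %zext.2 = zext i8 %2 to i64

Here every zext sits right next to an i8 load in the corresponding predecessor, so it should fold into a zero-extending byte load, and the phi itself only carries an i64.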

So, to my original question: is there a way to use only machine-representable types in phi-nodes before building SelectionDAG?

Apparently, there is a CodeGenPrepare::optimizePhiType method, which examines the uses of phi-nodes, but at the moment it is only able to fold load/bitcast patterns.

Is it a good idea to add zext handling to this method?
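
In IR terms, the rewrite I imagine such zext handling performing looks roughly like this (a sketch, assuming the only non-phi use of the phi is a single zext; value names are illustrative):

; before
%p = phi i8 [ %a, %bb1 ], [ %b, %bb2 ]
%w = zext i8 %p to i64

; after: the zext is hoisted into the predecessors and the phi is retyped
%a.ext = zext i8 %a to i64   ; inserted in %bb1
%b.ext = zext i8 %b to i64   ; inserted in %bb2
%p = phi i64 [ %a.ext, %bb1 ], [ %b.ext, %bb2 ]
; uses of %w are replaced by %p

The original i8 values would stay around for any other i8 users (like the icmp against 0 in my example); only an extra zext is inserted next to them, and when the incoming value is a load, that zext should fold into a zero-extending load during ISel.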