[RFC] Fixing incompatibilities of the `x` packet w.r.t. GDB

That (them changing the spec during implementation – shame on them!) was my first thought as well, but no, it turns out the x packet didn’t exist in the spec at all. LLDB came up with it first, and then GDB “reinvented” it last month.

Now you could say that means they should implement our packet. In fact, @rocallahan tried that, but it wasn’t very well received, mainly because our implementation isn’t self-consistent (more on that below).

It responds with `b`, but maybe not in all cases. I think the problematic cases are situations where the gdbserver does not have any process attached (I guess it checks for this before even looking at the packet). And although a read of size zero on an unreadable region currently succeeds, it wasn’t clear whether that’s actually desirable. This is why they’ve added the new qSupported feature for detection.
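For illustration, here is a minimal sketch (in Python) of the client side of that detection. The `send_packet` helper is hypothetical, standing in for one packet round-trip that returns the reply payload as a string:

```python
def stub_supports_gdb_style_x(send_packet) -> bool:
    # A qSupported reply is a semicolon-separated feature list, e.g.
    # "PacketSize=1000;qXfer:features:read+;binary-upload+".
    reply = send_packet("qSupported")
    return "binary-upload+" in reply.split(";")
```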

The benefit is that it removes the ambiguity between errors and successful reads (i.e., is E47 an error or actual memory content?). Triggering this problem is not completely easy because lldb will normally read memory in 512-byte blocks, but I can trigger it when I disable memory caching:

```
(lldb) set set target.process.disable-memory-cache true
(lldb) memory read 0x00007fffffffeffc
lldb             <  20> send packet: $x7fffffffeffc,20#9b
lldb             <   8> read packet: $@E47#f0
lldb             <  20> send packet: $x7ffffffff000,1c#2f
lldb             <   7> read packet: $E08#ad
0x7fffffffeffc: 40 45 34 37 00 00 00 00 00 00 00 00 00 00 00 00  @E47............
0x7ffffffff00c: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
warning: Not all bytes (4/32) were able to be read from 0x7fffffffeffc.
(lldb) memory read 0x00007fffffffeffd
lldb             <  20> send packet: $x7fffffffeffd,20#9c
lldb             <   7> read packet: $E47#b0
error: memory read failed for 0x7fffffffeffd
```
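To spell out what goes wrong above: the lldb-style reply carries raw binary with no success prefix, so the client can only pattern-match for errors. A toy decoder (the names are mine, not lldb’s) shows that memory which happens to contain the bytes `E47` is indistinguishable from the error reply `E47`:

```python
def parse_lldb_x_reply(payload: bytes) -> bytes:
    # lldb-style 'x' reply: raw binary on success, "ENN" on error.
    if (len(payload) == 3 and payload[:1] == b"E"
            and all(c in b"0123456789abcdef" for c in payload[1:])):
        raise IOError(f"stub reported error {payload[1:].decode()}")
    return payload

parse_lldb_x_reply(b"@E47")      # four bytes of data: decoded fine
try:
    parse_lldb_x_reply(b"E47")   # real memory that merely contains "E47"...
except IOError as err:
    print(err)                   # ...is misreported as an error
```

The gdb variant sidesteps this by prefixing every successful reply with `b`, so `bE47` is data and a bare `E47` is an error.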

The m packet does not have this problem because it always returns an even number of hex characters (an error reply like E47 has three), and it can use lowercase letters for the hex digits, leaving the uppercase E to mark errors.
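Sketched the same way, an `m` decoder can rely on syntax alone:

```python
def parse_m_reply(payload: str) -> bytes:
    # 'm' reply: hex pairs on success (even length, lowercase digits),
    # "ENN" on error (exactly three characters, uppercase E).
    if len(payload) == 3 and payload.startswith("E"):
        raise IOError(f"stub reported error {payload[1:]}")
    return bytes.fromhex(payload)

parse_m_reply("4045")     # data: b'@E'
try:
    parse_m_reply("E47")  # odd length + uppercase E: unambiguously an error
except IOError as err:
    print(err)
```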

That could work, but it’s not compatible with how gdb has implemented this feature (the server sends the feature, and the client does the detection). We might still be able to change that, but we’d need a very good reason, and time is running very short as they’re getting ready to do a new release with this fix.

That said, I don’t think that’s necessary. I think my original proposal can satisfy all of these requirements – we just need to tweak the timeline a bit:

  • lldb starts recognising the binary-upload+ server feature ~now
  • in ~two years, servers start using the gdb implementation (and sending the qSupported feature)
  • in 2+5 = ~seven years, lldb stops supporting the old lldb implementation

Scenario (1) works because we hold off on the server-side change until the new clients are available. Scenario (2) works because we keep the client supporting the old protocol sufficiently long. Scenario (3) works immediately after the first step, as long as those stubs follow either version of the protocol (sending binary-upload+ or responding to the x0,0 probe). Scenario (4) works, but it uses the less efficient m implementation for the first two years.
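Put together, the client-side selection during the transition window could look roughly like this (a sketch with invented names; I’m assuming the x0,0 probe answers OK on stubs implementing the old lldb variant):

```python
def choose_memory_read_packet(features: set, send_packet) -> str:
    # `features` is the parsed qSupported feature set and `send_packet`
    # a one-round-trip helper; both names are assumptions of this sketch.
    if "binary-upload+" in features:
        return "x (gdb)"   # new style: success replies prefixed with 'b'
    if send_packet("x0,0") == "OK":
        return "x (lldb)"  # old style: raw binary, no prefix
    return "m"             # hex-encoded fallback, understood everywhere
```

Once the last step lands, the middle branch simply goes away.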

WDYT?