LLDB problems on remote debugging

Hi LLDB devs,

First, I would like to express my appreciation and thanks to you all, especially Greg Clayton and Ted Woodward! Your advice and guidance have been very helpful to me!

I’ve been working on other lldb problems and am now getting back to the “remote loading” issue. I now fully understand the remote connection/execution side (what happens after gdb-remote and how ‘c’ or ‘s’ controls execution); the only remaining problem is how to load the program. My simulator starts like this: ./sim --port 1234. It does not do the program loading itself when started with --port. To match the old GDB workflow, I would still prefer to let lldb do the binary loading.

To give more information about my memory architecture: logically, the memory is a single address space, but it is divided into PM (program memory, starting at address 0x0) and DM (data memory, starting at address 0x80000000). When loading a program, the .text section should be loaded into PM and the .data section into DM, and nothing more. And yes, there is only one executable.

I’ve tried “target modules load --load” (I’m using lldb 7.0.1, which implements this command), but the sections to load are not loadable (not PT_LOAD) and no RSP packet exchange is triggered. So I tried “memory write --infile”: it triggers a “qMemoryRegionInfo:0” packet to query memory region info and an “M” packet to write memory, but these two packets are not supported by my simulator right now. My simulator supports the “X” packet, not “M”! With the old GDB, the “load” command triggers a few “X” packets.

So I want to know:

  1. How can I make lldb send the “X” packet (perhaps via a command), and where is the corresponding code located? (Making my simulator support the “M” packet is also an option, but reusing the existing code that handles the “X” packet would be much easier.)
  2. What is the actual difference between the “X” packet and the “M” packet? (I can’t see any difference between them in the packet specification on the GDB website: the “X” packet is “X addr,length:XX…” and the “M” packet is “M addr,length:XX…”, and even the data “XX…” seems to be encoded the same way, two hexadecimal digits per byte. Or perhaps I am wrong?)
  3. Is making my simulator support the “qMemoryRegionInfo” packet enough to make lldb send the correct “X” or “M” packets (i.e. extract the .text and .data sections from my executable and send them to the PM and DM addresses)?
  4. By the way, does GDB have a command similar to lldb’s “log enable gdb-remote packets” to print all of the RSP packet exchange?

Kind regards,
Rui

------------------ Original ------------------

Hi LLDB devs,

First, I would like to express my appreciation and thanks to you all, especially Greg Clayton and Ted Woodward! Your advice and guidance have been very helpful to me!

I’ve been working on other lldb problems and am now getting back to the “remote loading” issue. I now fully understand the remote connection/execution side (what happens after gdb-remote and how ‘c’ or ‘s’ controls execution); the only remaining problem is how to load the program. My simulator starts like this: ./sim --port 1234. It does not do the program loading itself when started with --port. To match the old GDB workflow, I would still prefer to let lldb do the binary loading.

To give more information about my memory architecture: logically, the memory is a single address space, but it is divided into PM (program memory, starting at address 0x0) and DM (data memory, starting at address 0x80000000). When loading a program, the .text section should be loaded into PM and the .data section into DM, and nothing more. And yes, there is only one executable.

I’ve tried “target modules load --load” (I’m using lldb 7.0.1, which implements this command),

That LLDB is really old. I would highly recommend building top of tree LLDB for any real work.

but the sections to load are not loadable (not PT_LOAD) and no RSP packet exchange is triggered.

Are you not building your ELF file correctly? Why are there no PT_LOAD program headers? I would suggest fixing this.

So I tried “memory write --infile”: it triggers a “qMemoryRegionInfo:0” packet to query memory region info and an “M” packet to write memory, but these two packets are not supported by my simulator right now. My simulator supports the “X” packet, not “M”! With the old GDB, the “load” command triggers a few “X” packets.

So did you extract the section contents yourself into separate files so you can load the memory using “memory write --infile”? qMemoryRegionInfo is not required, so your stub can respond with “$#00”, which means unsupported. It is there for people who have flash memory, since someone implemented the ability to write to flash with some fancy packets (see ProcessGDBRemote::DoWriteMemory(…) for details).
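If it helps, here is a minimal, self-contained sketch of the stub side of that (purely illustrative, not taken from lldb or any real simulator; FrameReply and HandlePacket are made-up names): any packet the stub does not recognize, including qMemoryRegionInfo, can simply get the empty reply, which goes out as “$#00” and which lldb treats as “unsupported”.

#include <cstdio>
#include <iostream>
#include <string>

// Frame a reply payload as "$<payload>#<checksum>", where the checksum is the
// sum of the payload bytes modulo 256, written as two hex digits.
static std::string FrameReply(const std::string &payload) {
  unsigned sum = 0;
  for (unsigned char c : payload)
    sum = (sum + c) & 0xffu;
  char trailer[3];
  std::snprintf(trailer, sizeof(trailer), "%02x", sum);
  return "$" + payload + "#" + trailer;
}

// Dispatch sketch: anything the stub does not implement gets the empty reply.
static std::string HandlePacket(const std::string &payload) {
  if (payload.rfind("qMemoryRegionInfo", 0) == 0)
    return FrameReply("");   // "$#00": lldb treats this as "not supported"
  // ... handle 'M'/'X'/'c'/'s' and friends here ...
  return FrameReply("");     // default: also unsupported
}

int main() {
  std::cout << HandlePacket("qMemoryRegionInfo:0") << "\n"; // prints "$#00"
}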

So I want to know:

  1. How can I make lldb send the “X” packet (perhaps via a command), and where is the corresponding code located? (Making my simulator support the “M” packet is also an option, but reusing the existing code that handles the “X” packet would be much easier.)

We support the “x” packet, but right now it is hooked up only for memory reads, since not a lot of clients do large memory writes. For the “x” packet (binary memory read) to be used, we detect whether it is supported by sending an “x0,0” packet; if “OK” is returned, we say it is supported. So you will need to modify:

size_t ProcessGDBRemote::DoWriteMemory(addr_t addr, const void *buf, size_t size, Status &error);

in ProcessGDBRemote.cpp to try the “X” packet once, and if “$#00” is returned, set a bool in the ProcessGDBRemote class so it knows not to try the “X” packet again. You will see a mixture: some packets are sent directly by ProcessGDBRemote, and some are put into the GDBRemoteCommunicationClient class, where an accessor is called to send the packet. I would suggest making a function:

size_t GDBRemoteCommunicationClient::WriteMemory(addr_t addr, const void *buf, size_t size, Status &error);

And have the GDBRemoteCommunicationClient keep track of whether the X packet is supported and always use it if it is.

So the flow is:
1 - add a new instance variable to GDBRemoteCommunicationClient:
LazyBool m_supports_X = eLazyBoolCalculate;

LazyBool is an enum:

enum LazyBool { eLazyBoolCalculate = -1, eLazyBoolNo = 0, eLazyBoolYes = 1 };

2 - Add a new size_t GDBRemoteCommunicationClient::WriteMemory(addr_t addr, const void *buf, size_t size, Status &error) function:

size_t GDBRemoteCommunicationClient::WriteMemory(addr_t addr, const void *buf,
                                                 size_t size, Status &error) {
  if (m_supports_X != eLazyBoolNo) {
    // Make the packet and try sending the X packet.
    StreamString packet;
    StringExtractorGDBRemote response;
    packet.PutChar('X');
    // ... Fill in all of the args and the binary bytes ...
    if (SendPacketAndWaitForResponse(packet.GetString(), response, false) ==
        PacketResult::Success) {
      if (response.IsUnsupportedResponse())
        m_supports_X = eLazyBoolNo; // Stub replied "$#00": fall through to 'M'.
      else if (response.IsOKResponse())
        return size;
      else
        return 0;
    }
  }
  // Make and send the 'M' packet just like in ProcessGDBRemote::DoWriteMemory(…)
  // and return the number of bytes written.
}

3 - Modify ProcessGDBRemote::DoWriteMemory() to call the new GDBRemoteCommunicationClient::WriteMemory() function.
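With that in place, step 3 could end up looking roughly like this (a sketch only, assuming the WriteMemory() helper from step 2 exists and that the client object is the usual m_gdb_comm member; the real DoWriteMemory() in ProcessGDBRemote.cpp also deals with maximum packet sizes and error reporting, which are omitted here):

size_t ProcessGDBRemote::DoWriteMemory(addr_t addr, const void *buf,
                                       size_t size, Status &error) {
  // Delegate to the communication client, which uses the "X" packet when the
  // stub supports it and falls back to "M" otherwise.
  return m_gdb_comm.WriteMemory(addr, buf, size, error);
}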

  2. What is the actual difference between the “X” packet and the “M” packet? (I can’t see any difference between them in the packet specification on the GDB website: the “X” packet is “X addr,length:XX…” and the “M” packet is “M addr,length:XX…”, and even the data “XX…” seems to be encoded the same way, two hexadecimal digits per byte. Or perhaps I am wrong?)

The “M” packet sends bytes as a hex ASCII string (each byte takes two hex ASCII characters), while the “X” packet is binary, with escaping for bytes that conflict with the special characters of the GDB remote protocol; see this for binary data:

https://sourceware.org/gdb/current/onlinedocs/gdb/Overview.html#Binary-Data
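To make the difference concrete, here is a small standalone sketch (illustrative only, not lldb code) of how the data portion of each packet is built:

#include <cstddef>
#include <cstdint>
#include <cstdio>
#include <string>

// "M addr,length:..." data: two hex ASCII characters per byte.
std::string EncodeHex(const uint8_t *buf, size_t size) {
  std::string out;
  char tmp[3];
  for (size_t i = 0; i < size; ++i) {
    std::snprintf(tmp, sizeof(tmp), "%02x", static_cast<unsigned>(buf[i]));
    out += tmp;
  }
  return out;
}

// "X addr,length:..." data: raw bytes, except that '#', '$' and '}' must be
// escaped as '}' followed by the byte XOR 0x20 ('*' only has to be escaped in
// replies from the stub, but escaping it here too is harmless).
std::string EncodeBinary(const uint8_t *buf, size_t size) {
  std::string out;
  for (size_t i = 0; i < size; ++i) {
    uint8_t b = buf[i];
    if (b == '#' || b == '$' || b == '}' || b == '*') {
      out += '}';
      out += static_cast<char>(b ^ 0x20);
    } else {
      out += static_cast<char>(b);
    }
  }
  return out;
}

int main() {
  const uint8_t data[] = {0x12, 0x23 /* '#' */, 0x7d /* '}' */};
  std::printf("M data: %s\n", EncodeHex(data, sizeof(data)).c_str());   // 12237d
  std::printf("X data: %zu bytes on the wire\n",
              EncodeBinary(data, sizeof(data)).size());                 // 5 bytes
}

In other words, “M” roughly doubles the payload size on the wire, which is presumably why GDB’s “load” uses “X”.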

  3. Is making my simulator support the “qMemoryRegionInfo” packet enough to make lldb send the correct “X” or “M” packets (i.e. extract the .text and .data sections from my executable and send them to the PM and DM addresses)?

qMemoryRegionInfo allows the debugger to figure things out about memory. We currently don’t support the “X” packet for memory writes, so you will need to add that yourself as mentioned above. Right now “M” packets will be used.
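For completeness, if you did decide to answer qMemoryRegionInfo, the reply is a set of “key:value;” pairs (the format is described in lldb’s own gdb-remote protocol notes, lldb-gdb-remote.txt in the source tree). A sketch for your PM/DM split might look like the following; note that the region sizes here are assumptions for illustration, since your mail only gives the start addresses:

#include <cstdint>
#include <string>

// Hypothetical simulator-side helper. Assumes PM spans [0, 0x80000000) and DM
// spans [0x80000000, 0x100000000), which the original mail does not state.
std::string MemoryRegionInfoReply(uint64_t addr) {
  if (addr < 0x80000000ULL)
    return "start:0;size:80000000;permissions:rx;";      // PM: code
  return "start:80000000;size:80000000;permissions:rw;"; // DM: data
}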

  4. By the way, does GDB have a command similar to lldb’s “log enable gdb-remote packets” to print all of the RSP packet exchange?

Not sure, I would hope so. Try “apropos log” in GDB maybe?

So you can either implement the “X” packet in LLDB, or fix your GDB server to support the “M” packet. Up to you.

If you have an ELF file that you are using as your executable and want me to take a look, make it available via some sharing on the web and I can look at it. Most people run fully linked executables (these tend to have PT_LOAD segments) on a simulator, not unlinked object files (these tend not to have PT_LOAD segments). It shouldn’t be too hard to get a statically linked executable with the right ELF program headers (PT_LOAD) so that you can use the --load option.

Greg

First, I would like to express my appreciation and thanks to you all, especially Greg Clayton and Ted Woodward! Your advice and guidance have been very helpful to me!

You’re welcome!

4. By the way, does GDB have a command similar to lldb’s "log enable gdb-remote packets" to print all of the RSP packet exchange?

“set debug remote 1”
Unlike with lldb, this doesn’t work when you do a run. It only works when you do a target remote (at least with the default gdb installed on Ubuntu 16.04).