[Bug 25086] New: lldb should use unix socket for communication with server on linux

Bug ID: 25086
Summary: lldb should use unix socket for communication with server on linux
Product: lldb
Version: 3.7
Hardware: PC
OS: Linux
Status: NEW
Severity: normal
Priority: P
Component: All Bugs
Assignee: lldb-dev@lists.llvm.org
Reporter: vrba@mixedrealities.no
CC: llvm-bugs@lists.llvm.org
Classification: Unclassified

As documented on the webpage, lldb on linux uses lldb-server even for local
debugging. It connects to this stub via loopback device. I believe it should
connect over a UNIX socket instead. (On Windows, named pipes are the
corresponding alternative.)
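A UNIX-domain socket would carry the same byte stream without going through the network stack, so a netem qdisc on `lo` would not touch it. As a rough illustration (plain Python, not lldb's actual transport code; the socket path and the GDB-remote-style packet are made up for the example):

```python
# Sketch: round-trip a packet over an AF_UNIX stream socket.
# Unlike a TCP connection to 127.0.0.1, this traffic never
# traverses the `lo` device, so loopback packet loss simulated
# with tc-netem cannot disturb it.
import os
import socket
import tempfile
import threading

path = os.path.join(tempfile.mkdtemp(), "lldb-demo.sock")

server = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
server.bind(path)
server.listen(1)

def echo_once():
    # Accept a single connection and echo one packet back.
    conn, _ = server.accept()
    conn.sendall(conn.recv(64))
    conn.close()

t = threading.Thread(target=echo_once)
t.start()

client = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
client.connect(path)
client.sendall(b"$qSupported#00")  # a GDB-remote-style packet
reply = client.recv(64)

t.join()
client.close()
server.close()
os.unlink(path)
print(reply.decode())
```

On Windows, named pipes would play the same role; the socket code above is Linux-specific.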

Explanation: For debugging a network protocol I have introduced packet loss on
the loopback device with the following command:

tc qdisc add dev lo root netem loss random 15%

This introduces 15% packet loss and causes lldb to work EXTREMELY slowly
because its communication with the server stub is severely disrupted. It takes
ages to start debugging even a simple "hello world" program.

labath@google.com changed bug 25086

What | Removed | Added
Status | NEW | RESOLVED
Resolution | — | INVALID

Comment # 3 on bug 25086 from labath@google.com

I don't believe this is a use case we want to support. I would suggest solving
this problem externally, e.g. by limiting the simulated packet loss to your
application. I seem to recall being able to simulate packet loss using
iptables. I would recommend trying something like

iptables -A INPUT -p udp -m statistic --mode random --probability 0.15 -j DROP

Zeljko Vrba changed bug 25086

What | Removed | Added
Status | RESOLVED | REOPENED
Resolution | INVALID | —

Comment # 4 on bug 25086 from Zeljko Vrba

Using iptables is a non-option because dropping packets will return an EPERM error
to the application; see for example

Besides, tc-netem can also simulate burst losses, not just uncorrelated random loss.
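For reference, netem expresses correlated (bursty) loss with the Gilbert-Elliott model; the percentages below are illustrative only, not tuned values:

```shell
# Burst loss via the Gilbert-Elliott model:
#   p   = probability of entering the bad (lossy) state
#   r   = probability of leaving the bad state
#   1-h = loss probability while in the bad state
#   1-k = loss probability while in the good state
tc qdisc add dev lo root netem loss gemodel 1% 10% 70% 0.1%

# Remove the qdisc again when done:
tc qdisc del dev lo root
```

iptables' `statistic` match has no equivalent of this state machine, which is part of why netem on the loopback device is the natural tool here.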

Is there any reason at all for not switching to unix domain sockets?