Phabricator is very slow

Starting sometime last night, Phabricator has become very slow to load. It takes up to several minutes for some pages to load. I’ve asked around and it seems like others are experiencing the same thing. I figured I’d throw up a post in case others are second-guessing themselves like I was.

Given the time, could this be related to some of the ongoing work to migrate to Github PRs?


Tagging @MaskRay as he kindly helped out last time we hit a Phabricator issue.

If I grep for git processes, I can see that the server sometimes spawns git log commands, and certain git log commands can take a long time.

maskray@llvm-reviews:/mnt/database/repo/14$ time git log --skip=0 -n 101 --pretty=format:%H:%P 988a16af929ece9453622ea256911cdfdf079d47 -- llvm/lib/Demangle/ItaniumDemangle.cpp > /dev/null

real    0m15.884s
user    0m14.164s
sys     0m1.316s

Decreasing 101 to 30 will make it much faster. Time to figure out where to set it…
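
The cost is roughly proportional to how far git has to walk history while filtering by path, so the page size (`-n`) matters directly. A minimal sketch of that effect using a throwaway repository rather than the real one (everything here is illustrative; the production command walked the full llvm repo at /mnt/database/repo/14, which is why it took ~16 seconds):

```shell
#!/bin/sh
# Sketch: show that `git log -n <count> -- <path>` returns sooner for
# smaller <count>, using a small throwaway repository.
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q
git config user.email phab@example.com
git config user.name phab
for i in $(seq 1 200); do
  echo "$i" > file.txt
  git add file.txt
  git commit -qm "commit $i"
done
# -n 101 presumably corresponds to a 100-entry page plus one look-ahead
# commit for the "next page" link.
git log --skip=0 -n 101 --pretty=tformat:%H:%P -- file.txt | wc -l   # 101 lines
git log --skip=0 -n 30  --pretty=tformat:%H:%P -- file.txt | wc -l   # 30 lines
```

(`tformat:` terminates every entry with a newline, so `wc -l` counts entries exactly.)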

That can’t be new though? Something else changed? Is it indexing a bunch of new branches from somewhere?

That isn’t new and I think nothing has changed. Some folks have reported that Phabricator is fast again.


I’m starting to see this on some phab tickets:

# Unhandled Exception ("AphrontQueryException")
#1114: The table 'cache_general' is full

Can somebody take a look at this please? This is going to delay us getting rid of Phab if we can’t complete the outstanding patches on there :expressionless:


Yeah, it’s very slow again today. My last few requests have been failing with a 503 (Service Unavailable) error.


According to tail -f /var/log/apache2/, there are many different IP addresses crawling /source/llvm-github/history and /source/llvm-github/browse pages. It’s apparently a botnet, as adjacent files are visited by IP addresses from very different autonomous systems.
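
One way to quantify the pattern is to count distinct client IPs hitting the crawled paths; a rough sketch, assuming the default Apache "combined" log format (client IP in field 1, request path in field 7 — the exact log filename varies):

```shell
# Count distinct client IPs requesting /source/llvm-github/history or
# /source/llvm-github/browse pages. Assumes Apache "combined" log format:
# field 1 is the client IP, field 7 the request path. Usage:
#   count_crawler_ips < /var/log/apache2/access.log
count_crawler_ips() {
  awk '$7 ~ /^\/source\/llvm-github\/(history|browse)/ && !seen[$1]++ { n++ }
       END { print n + 0 }'
}
```

A follow-up like `awk '{print $1}' access.log | sort | uniq -c | sort -rn | head` shows whether the load comes from a few hot clients or, as here, many addresses each making a handful of requests.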

Such a visit will cause Phabricator to spawn a process like git log --skip=0 -n 30 --pretty=format:%H:%P 988a16af929ece9453622ea256911cdfdf079d47 -- llvm/lib/Demangle/ItaniumDemangle.cpp that takes a few seconds.

A while ago (earlier this year or last year) I redirected /source/llvm-github/browse to GitHub. I redirected /source/llvm-github/history about one hour ago.
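
In Apache such a redirect can be a one-liner per prefix; a sketch, assuming mod_alias is loaded (the GitHub target URLs are illustrative, not necessarily what is deployed):

```apache
# Send Phabricator's expensive repository-browsing pages to GitHub instead
# of spawning `git log` locally. Target URLs are illustrative.
Redirect permanent /source/llvm-github/browse  https://github.com/llvm/llvm-project/blob/main
Redirect permanent /source/llvm-github/history https://github.com/llvm/llvm-project/commits/main
```

Since `Redirect` is prefix-matched, the remainder of the requested path is appended to the target, so subpaths forward too.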


Phabricator is reporting “#1114: The table ‘cache_general’ is full” for me and I can’t currently review any patches on there, which is not ideal!

Thanks for the report.

maskray@llvm-reviews:/srv/http/phabricator$ df -h
Filesystem      Size  Used Avail Use% Mounted on
udev             32G     0   32G   0% /dev
tmpfs           6.3G  1.2M  6.3G   1% /run
/dev/sda1        78G   38G   40G  49% /
tmpfs            32G     0   32G   0% /dev/shm
tmpfs           5.0M     0  5.0M   0% /run/lock
tmpfs            32G     0   32G   0% /sys/fs/cgroup
/dev/sda15      105M  7.2M   98M   7% /boot/efi
/dev/sdc        590G  590G   13M 100% /mnt/database
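
/dev/sdc is at 100%, so the next question is what is filling it. A small sketch for finding the biggest consumers (run it against /mnt/database with enough privileges; unreadable paths are simply skipped):

```shell
# Print the largest directories/files up to two levels below a mount point.
# Usage: largest_under /mnt/database
largest_under() {
  du -xh --max-depth=2 "$1" 2>/dev/null | sort -hr | head -10
}
```

If the MySQL datadir dominates, querying information_schema.tables (data_length + index_length per table) narrows it down to individual tables.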

./bin/cache purge --all did not work:

phab@llvm-reviews:/srv/http/phabricator$ ./bin/cache purge --all
Purging "builtin-file" cache...
Purging "changeset" cache...
[2023-09-07 23:51:58] EXCEPTION: (AphrontQueryException) #3675: Create table/tablespace 'differential_changeset_parse_cache' failed, as disk is full at [<phabricator>/src/infrastructure/storage/connection/mysql/AphrontBaseMySQLDatabaseConnection.php:386]
arcanist(head=llvm-production), phabricator(head=llvm-production, ref.llvm-production=8502773a2afb, custom=1)
  #0 AphrontBaseMySQLDatabaseConnection::throwQueryCodeException called at [<phabricator>/src/infrastructure/storage/connection/mysql/AphrontBaseMySQLDatabaseConnection.php:320]
  #1 AphrontBaseMySQLDatabaseConnection::throwQueryException called at [<phabricator>/src/infrastructure/storage/connection/mysql/AphrontBaseMySQLDatabaseConnection.php:216]
  #2 AphrontBaseMySQLDatabaseConnection::executeQuery called at [<phabricator>/src/infrastructure/storage/xsprintf/queryfx.php:8]
  #3 queryfx called at [<phabricator>/src/applications/cache/purger/PhabricatorChangesetCachePurger.php:15]
  #4 PhabricatorChangesetCachePurger::purgeCache called at [<phabricator>/src/applications/cache/management/PhabricatorCacheManagementPurgeWorkflow.php:78]
  #5 PhabricatorCacheManagementPurgeWorkflow::execute called at [<arcanist>/src/parser/argument/PhutilArgumentParser.php:492]
  #6 PhutilArgumentParser::parseWorkflowsFull called at [<arcanist>/src/parser/argument/PhutilArgumentParser.php:377]
  #7 PhutilArgumentParser::parseWorkflows called at [<phabricator>/scripts/cache/manage_cache.php:21]

I resized the disk from 600GB to 850GB.

This time I am more experienced!

# Stop everything that writes to /mnt/database before resizing.
sudo /etc/init.d/apache2 stop
sudo /etc/init.d/phd stop
sudo /etc/init.d/mysql stop
sudo parted /dev/sdc      # grow the layout to use the new space
sudo partprobe /dev/sdc   # make the kernel re-read the partition table
sudo resize2fs /dev/sdc   # grow the ext4 filesystem to fill the device
df -h /dev/sdc            # verify the new size
sudo /etc/init.d/mysql start
sudo /etc/init.d/phd start
sudo /etc/init.d/apache2 start

Looks like the slowness is back again. Loading the Phabricator front page takes about 30s for me.

There were many pygmentize processes earlier, but I no longer see them in the htop output.

I have blocked some aggressive spiders’ User-Agent values. Hopefully that helps as well.

There are many IPs from two autonomous systems crawling pages using fake user agents such as Safari/537.36, Chrome/27.0.1453.93, Chrome/37.0.2062.124, Chrome/35.0.2309.372. I have blocked the two IP ranges.
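
For reference, both blocks can be expressed as one combined Apache 2.4 rule; a sketch with placeholder CIDR ranges (substitute the real ones; the User-Agent regex only needs to match the fake strings seen in the log):

```apache
# Tag requests whose User-Agent matches the fake browser strings, then
# deny those plus the two crawler CIDR ranges (placeholders below).
SetEnvIfNoCase User-Agent "Chrome/27\.0\.1453\.93|Chrome/35\.0\.2309\.372" bad_bot
<Location "/source/llvm-github">
    <RequireAll>
        Require all granted
        Require not ip 192.0.2.0/24
        Require not ip 198.51.100.0/24
        Require not env bad_bot
    </RequireAll>
</Location>
```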

The php-fpm settings

pm.max_children = 5
pm.start_servers = 2
pm.min_spare_servers = 1
pm.max_spare_servers = 3

appeared to be the stock defaults and reserved too few worker processes. I changed them to:

pm = dynamic
pm.max_children = 32
pm.start_servers = 8
pm.min_spare_servers = 4
pm.max_spare_servers = 8
pm.status_path = /php-fpm-status
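
For tuning pm.max_children further, a common rule of thumb is memory headroom divided by the average resident size of one php-fpm worker. A sketch of that arithmetic (the 200 MB per-worker figure and the 4 GB reserve are assumptions; measure the real worker size with ps):

```shell
# Estimate an upper bound for pm.max_children from memory headroom.
# The per-worker figure below is an assumption; measure the real average
# with something like (process name varies by PHP version):
#   ps -o rss= -C php-fpm7.4 | awk '{ s += $1; n++ } END { print s/n/1024 }'
total_mb=$(awk '/^MemTotal/ { print int($2 / 1024) }' /proc/meminfo)
reserve_mb=4096      # leave room for MySQL, git subprocesses, page cache
avg_worker_mb=200    # assumed average RSS of one php-fpm worker
echo "pm.max_children <= $(( (total_mb - reserve_mb) / avg_worker_mb ))"
```

With 64G of RAM this comes out well above the 32 configured above, so the new value still leaves plenty of headroom for MySQL and the git subprocesses.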