Fast building of a big project without make

Suppose I have many files like file01.cpp, file02.cpp, file03.cpp, and so on.
If I call
clang++ -c file01.cpp
clang++ -c file02.cpp
clang++ -c file03.cpp
this wastes time, because loading the compiler may take longer than compiling a small file.
Maybe clang++ -c file01.cpp file02.cpp file03.cpp?
The problem: if we have a directory structure, we must give relative or, worse, full paths:
clang++ -c path\to\file01.cpp path\to\file02.cpp path\to\file03.cpp
Maybe all files in one directory could be compiled in one call.
How long can the compiler command line be?
How else can I speed things up? Is there a shared library that works as a compiler, or can one compiler instance stay in the background as a server?
If we have a multicore processor, we can speed things up by building in parallel; the optimal number of jobs equals the number of CPU cores (or hardware threads, which are sometimes 2× the cores); more jobs than that gives no advantage.
With many parallel jobs, memory becomes a problem, especially during parallel linking. I always turn off swap, because a small swap file wears out my SSD when I build a huge project with debug options. How can I estimate how much memory the next compilation or link step will need?
In general: how do I compile a huge project in parallel, in optimal time, under memory limitations?
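The heuristic described above, jobs bounded both by CPU cores and by available memory, can be sketched like this. The 2 GiB per-job budget is an assumption for illustration; measure your own project to refine it:

```python
import os

def choose_job_count(mem_per_job_bytes=2 * 1024**3):
    """Pick a parallel-build job count limited by both CPU cores
    and currently available physical memory."""
    cores = os.cpu_count() or 1
    try:
        # Available physical memory (Linux/glibc sysconf names);
        # fall back to the core count elsewhere.
        free_bytes = os.sysconf("SC_AVPHYS_PAGES") * os.sysconf("SC_PAGE_SIZE")
        mem_jobs = max(1, free_bytes // mem_per_job_bytes)
    except (ValueError, OSError):
        mem_jobs = cores
    return max(1, min(cores, mem_jobs))

print(choose_job_count())
```

The resulting number is what you would pass as -j N to make or ninja.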

I’m curious why “without make”? Running multiple compilations in parallel is one of the things that make (or another build system, like ninja) is very good at. These usually give you a way to limit the number of parallel jobs to avoid out-of-memory problems; both make and ninja support a -j N option, where N is the maximum number of parallel tasks.
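A minimal makefile that make -j can parallelize might look like this (the wildcard source list, the app target name, and clang++ as the compiler are assumptions; adapt them to your layout):

```make
# Hypothetical minimal makefile: one .o per .cpp, so `make -j8`
# can run up to eight compiles in parallel.
SRCS := $(wildcard *.cpp)
OBJS := $(SRCS:.cpp=.o)

app: $(OBJS)
	clang++ -o $@ $^

%.o: %.cpp
	clang++ -c -o $@ $<
```

Invoking it as `make -j8` (or `make -j$(nproc)`) lets make schedule the independent %.o rules concurrently.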

It’s hard to estimate the memory needed by a task ahead of time; it is simpler to observe memory use and adjust N to avoid out-of-memory problems.
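One way to do that observation is to run a single compile and read the peak resident set size of the child process afterwards; a rough sketch (the stand-in workload and the clang++ invocation in the comment are assumptions):

```python
import resource
import subprocess
import sys

def peak_child_rss_kib(cmd):
    """Run cmd and return the peak resident set size recorded for
    terminated children (KiB on Linux), via getrusage(RUSAGE_CHILDREN)."""
    subprocess.run(cmd, check=True)
    return resource.getrusage(resource.RUSAGE_CHILDREN).ru_maxrss

# Stand-in workload so the sketch runs anywhere; with a real build you
# would pass e.g. ["clang++", "-c", "file01.cpp"] instead.
print(peak_child_rss_kib([sys.executable, "-c", "x = bytearray(50_000_000)"]))
```

Measuring one representative compile and one link this way gives you a per-job budget from which to derive a safe N.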

You can put many files on a single clang command line, but it will compile them serially, not in parallel, AFAIK. The compiler assumes that if you want parallel compilation you will use a build tool like make or ninja.
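If you really want to avoid make, you can reproduce the -j behaviour yourself with a worker pool that spawns one compiler process per file; a sketch under the assumption that clang++ is on PATH (the demo at the bottom substitutes a harmless stand-in command so the snippet runs without a compiler installed):

```python
import concurrent.futures
import os
import subprocess
import sys

def compile_all(sources, jobs=None, compiler=("clang++", "-c")):
    """Compile each source file in its own process, at most `jobs`
    at a time. Roughly what `make -j` does, minus the dependency graph."""
    jobs = jobs or os.cpu_count() or 1
    with concurrent.futures.ThreadPoolExecutor(max_workers=jobs) as pool:
        procs = pool.map(lambda s: subprocess.run([*compiler, s]), sources)
        return [p.returncode for p in procs]

# Stand-in "compiler" for demonstration; with clang installed you would
# call compile_all(["file01.cpp", "file02.cpp"]).
codes = compile_all(["pass", "pass"], compiler=(sys.executable, "-c"))
print(codes)
```

Note what you lose compared with make: no incremental rebuilds (everything recompiles every time) and no dependency ordering, which is most of what a build system provides.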

Build tools like make or ninja use the compiler command line. In the question I am wondering about compiler-frontend-specific limitations, like the maximal length of the command line. If I write a makefile, I can choose: one command per file, or one command with many *.cpp source files. Supposedly ninja is faster than make thanks to skillful command-line handling. I want to ask about hard compiler-frontend limitations. Is it profitable to compile a lot at once, so as not to load the compiler again and again?
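On the command-line-length question: the limit is imposed by the operating system's exec call (POSIX ARG_MAX), not by the clang frontend, and you can query it directly (a quick check, assuming a POSIX system):

```python
import os

# Kernel limit on the combined size of argv plus the environment for a
# single exec (POSIX ARG_MAX); on modern Linux this is typically 2 MiB
# or more, so thousands of source paths fit in one compiler invocation.
arg_max = os.sysconf("SC_ARG_MAX")
print(arg_max)
```

The equivalent shell command is `getconf ARG_MAX`. Compilers and linkers also commonly accept response files (e.g. `clang++ @files.rsp`) to sidestep the limit entirely.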

You should expect that it will be faster (less wall-clock time) to run compilations in parallel. The operating system is able to optimize running the same program multiple times; for example, it reuses the program’s memory pages rather than reading them from disk again. I just ran clang twice on my Linux system, and time reports that the second run was 0.5 seconds faster than the first.

Except for really trivially small source files, I’d expect the time spent compiling to be much larger than the time spent loading the process. Therefore, running 4 compiles in parallel will be faster than running the same 4 compiles serially.