100 processes on Linux server

Hi
I'm running Metabase 0.33.2 with Docker, and I noticed that I have 100 processes running with this command:

java -XX:+IgnoreUnrecognizedVMOptions -Dfile.encoding=UTF-8 -Dlogfile.path=target/log -server -jar /app/metabase.jar

Is this normal?

Hi @Anders
Are you running the official version or have you built your own container?
How are you showing the process list - docker top <container>?
Is it individual processes or are they children from the same parent process?
What's your memory usage on the container?
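For reference, these are the commands I mean - just a sketch, replace <container> with your container name or ID:

docker top <container>
docker stats --no-stream <container>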

I’m running the official version.
I made the process list by filtering and (patiently) counting them in htop.
I’ve just run docker top <container> and I see that the container has a single process.
I've also run docker stats -a <container> and it's using 1.06 GB of memory, with 100 PIDs/threads. At least I know I counted right :stuck_out_tongue:

Addendum: user is “2000”, in accordance with what @fozzy says.

Seeing something similar, not with docker top but with htop on an Ubuntu 18.04 host, where the user is "2000" from docker itself. Also experiencing high Jetty thread count growth that only clears after a container stop/start. We're running 0.33.3 with docker and it seems to be highly correlated with cards not loading (excessive query time). Hopefully some of the thread patches being worked on in Git by @camsaul will be released soon.

@Anders @fozzy
I'm not sure if I understand the problem. Metabase will consume memory and threads, but it's difficult to say an amount, since it depends on usage. So unless it's consuming excessive amounts or never releasing them, I don't see a problem.
There have been several big fixes in the last couple of days, which should make a big impact on 0.34 - I haven't had time to test them all, and I'm not sure how they affect Jetty yet, but database connection handling should be much better.

If you check the Metabase log, do you have a lot of Jetty threads active/idle/queued? Or active database calls? Or is it the total active threads?

DEBUG metabase.middleware.log GET /api/user/current 200 1.9 ms (3 DB calls) Jetty threads: 3/50 (5 idle, 0 queued) (48 total active threads) Queries in flight: 0
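If you're running via Docker, one way to pull just those lines out of the log is something like this (assuming the official image, which logs to stdout; replace <container> with yours):

docker logs <container> 2>&1 | grep "Jetty threads" | tail -n 20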

@flamber Moments ago…
10-02 09:07:49 DEBUG middleware.log :: GET /api/card/494 200 6.6 ms (5 DB calls) Jetty threads: 9/50 (3 idle, 0 queued) (158 total active threads) Queries in flight: 0
10-02 09:07:49 DEBUG middleware.log :: GET /api/alert/question/494 200 1.7 ms (1 DB calls) Jetty threads: 9/50 (3 idle, 0 queued) (158 total active threads) Queries in flight: 0
10-02 09:07:50 INFO api.card :: Question’s average execution duration is 1646129 ms; using ‘magic’ TTL of 32923 seconds :floppy_disk:
10-02 09:07:50 INFO middleware.cache :: Returning cached results for query :floppy_disk:
10-02 09:07:50 DEBUG middleware.log :: POST /api/card/494/query 200 [ASYNC: completed] 9.5 ms (6 DB calls) Jetty threads: 8/50 (3 idle, 0 queued) (158 total active threads) Queries in flight: 0
That 'card' query doesn't take that long directly from the Redshift side… more like 15s. I also just saw something slightly more disturbing in the logs, where a Snowflake data model query was being sent to a Redshift data source… @camsaul … [Amazon] (500310) failed it, of course. The point in this chain is that there shouldn't be 158 total active threads (appearing as user 2000 from docker in htop on the machine) when the container has only been up for 45 minutes, my 10 users are not using the site URL, and I've labeled as many system tables and schema tables as possible as cruft so the JVM doesn't keep trying to scan them.
We did find out yesterday that Redshift (which is based on Postgres 8) by default caps connections at 5 per system user without special configuration.

@fozzy

If you have logged the Snowflake query being run on Redshift, then you should definitely open an issue on that. That should never happen, and it's not something I've ever seen before.

It’s not Jetty threads.
As for the number of threads: remember that just because there are no users visiting the system, it doesn't mean that nothing is happening. Everything from syncing to pulses/alerts - basically everything the scheduler (Quartz) is in charge of.
And everything runs as a thread, so it doesn’t block everything else.
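If you want to see what those threads actually are, a JVM thread dump lists every thread by name (scheduler workers, connection pools, Jetty, etc.). A rough sketch, assuming you're on the Docker host and <java-pid> is the Metabase java process you already see in htop:

sudo kill -QUIT <java-pid>          # the JVM keeps running; it just prints a thread dump to stdout
docker logs --tail 300 <container>  # the dump ends up in the container log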

I'm not sure how Metabase handles its connections when the database is limited to fewer connections than expected.
Currently Metabase will allow 15 connections per configured database. Perhaps you want to modify Redshift to a similar level, or change Metabase so it's limited to 5 connections.
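If you go the Redshift route, raising the limit for the user Metabase connects as is a single statement. Everything below is a placeholder (cluster endpoint, admin user, database, and the Metabase user):

psql -h <cluster-endpoint> -p 5439 -U <admin-user> -d <database> -c "ALTER USER <metabase-user> CONNECTION LIMIT 15;"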

Are you seeing hanging database connections on Redshift - if so, then it might be issue #8679.
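A quick way to check from the Redshift side is to count the open sessions per user - again, the connection details are placeholders:

psql -h <cluster-endpoint> -p 5439 -U <admin-user> -d <database> -c "SELECT user_name, COUNT(*) FROM stv_sessions GROUP BY user_name;"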

Like I said, there have been a lot of really big changes in the last week, which will arrive in 0.34 - hopefully it will maintain better threading as part of having better database connection handling.

Yes - that's one of the great things about Metabase. Looking forward to 0.34 and beyond!

For Redshift, we see the commit "Bump Redshift JDBC driver version" (metabase/metabase@b1aa926 on GitHub) bringing in new drivers. Our Redshift issue seems only to be tied to connection attempts not closing out. It might be because we haven't customized the connection count on the Redshift side like #8679 did. Thanks for the support!

They might be threads of the same process. By default htop shows processes and their threads; pressing capital H will hide the threads so only processes are shown.
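The same check without htop, assuming the java process is the only thing on the host matching metabase.jar:

pgrep -f metabase.jar              # should print a single PID
ps -o pid,nlwp,cmd -p <that-pid>   # nlwp is the number of threads in that process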