100 processes on linux server

@flamber Moments ago…
10-02 09:07:49 DEBUG middleware.log :: GET /api/card/494 200 6.6 ms (5 DB calls) Jetty threads: 9/50 (3 idle, 0 queued) (158 total active threads) Queries in flight: 0
10-02 09:07:49 DEBUG middleware.log :: GET /api/alert/question/494 200 1.7 ms (1 DB calls) Jetty threads: 9/50 (3 idle, 0 queued) (158 total active threads) Queries in flight: 0
10-02 09:07:50 INFO api.card :: Question’s average execution duration is 1646129 ms; using ‘magic’ TTL of 32923 seconds :floppy_disk:
10-02 09:07:50 INFO middleware.cache :: Returning cached results for query :floppy_disk:
10-02 09:07:50 DEBUG middleware.log :: POST /api/card/494/query 200 [ASYNC: completed] 9.5 ms (6 DB calls) Jetty threads: 8/50 (3 idle, 0 queued) (158 total active threads) Queries in flight: 0
That ‘card’ query doesn’t take that long directly on the Redshift side; it’s more like 15s. I also just saw something slightly more disturbing in the logs: a Snowflake data model query was being sent to a Redshift data source… @camsaul … which Redshift rejected with an [Amazon] (500310) error, of course. The point of this chain is that there shouldn’t be 158 total active threads (showing as user 2000 from Docker in htop on the machine) when the container has only been up for 45 minutes, my 10 users are not using the site URL, and I’ve labeled as many system tables and schema tables as possible as cruft so the JVM doesn’t keep trying to scan them.
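To cross-check what htop is showing, the JVM’s thread total can be read straight from /proc. This is just a sketch; the container name `metabase` and the UID 2000 are assumptions based on the setup described above:

```shell
# Assumed container name "metabase"; adjust to whatever `docker ps` shows.
# The Metabase JVM is normally PID 1 inside the container:
docker exec metabase grep '^Threads:' /proc/1/status

# From the host, count the lightweight processes (threads) htop
# displays for UID 2000:
ps -L -u 2000 -o tid= | wc -l
```

If the two numbers roughly agree, the "100 processes" in htop are almost certainly threads of the single Metabase JVM, not separate processes.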
We did find out yesterday that Redshift (which is based on PostgreSQL 8) defaults to capping connections at 5 per system user unless specially configured.
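For anyone hitting the same cap: it can be raised per user with standard Redshift DDL. The user name below is a placeholder for whatever service account Metabase connects as; run this as a superuser:

```sql
-- Placeholder user name; substitute the account Metabase uses.
-- Raise the per-user connection cap:
ALTER USER metabase_app CONNECTION LIMIT 50;

-- Or remove the cap entirely:
ALTER USER metabase_app CONNECTION LIMIT UNLIMITED;
```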