Performance issues when using an existing question as a nested sub-query

First off: Metabase is an incredible tool, and this community is truly supportive and amazing. Thank you for taking a look at this question.

Today I discovered in the docs that you can use an existing question as a sub-query: https://www.metabase.com/docs/latest/users-guide/13-sql-parameters.html#using-an-existing-question-as-a-sub-query

This is an amazing feature. I have 4+ natively written queries that all use the same simple query as a base; call it “base_query”. Being able to use base_query as an “include” in the 4+ other queries would be awesome!

I created a new question that contained base_query and tried that in one of the existing questions. It produced the same results! Magic! I can significantly reduce my code maintenance!

But the new version, using:
FROM ({{#base_query}}) "source"

takes 9.5 seconds to complete.

The original question completes in <100ms. Caching is turned off.

Before I continue refactoring, I’m curious: is there something under the hood of the {{#question}} feature that causes this difference in performance?

Besides caching, is there anything I can do? Is the performance of the original query due to some other caching mechanism?

Here’s the code segment from the new version of the question:

  SELECT "source"."object_id" AS "object_id",
         sum("source"."object_count") AS "sum",
         sum("source"."object_volume") AS "sum_2",
         count(DISTINCT "source"."id") AS "count"
  FROM {{#78}} AS "source"
  WHERE true
    [[ AND "source"."id" = {{ ID }}]]
    [[ AND ({{ stock_date }}) ]]

  GROUP BY "source"."object_id"

Thank you again!

Looking at the logs, it seems additional DB calls are somehow being made:

Full native:
POST /api/card/77/query 202 [ASYNC: completed] 182.4 ms (8 DB calls) App DB connections: 1/10 Jetty threads: 2/50 (5 idle, 0 queued) (102 total active threads) Queries in flight: 1 (0 queued)

Nested sub-query:
POST /api/card/79/query 202 [ASYNC: completed] 8.9 s (11 DB calls) App DB connections: 1/10 Jetty threads: 2/50 (5 idle, 0 queued) (102 total active threads) Queries in flight: 0 (0 queued)

Hi @dolphinkickme

Please post “Diagnostic Info” from Admin > Troubleshooting.

That’s a massive overhead, but without knowing which database you’re querying and the data size, it’s difficult to say what impact that has.

I might be misunderstanding something, but you’re referencing {{#78}}, while showing logs for question 77 at 182ms.

Also, you’re using a Field Filter ({{ stock_date }}), which doesn’t support table aliases, so I’m not sure how that’s even working - or if that’s the root cause of the problem.

Can you create a query where you simply replace {{#78}} with (the-actual-nested-select-query)? That’s basically what Metabase does before sending it to your database.
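
For example, taking the outer query you posted, it would look roughly like this - the inner SELECT and "base_table" are just placeholders, so paste in the actual native SQL of question 78:

  -- Rough sketch only: {{#78}} replaced by a stand-in sub-select.
  -- "base_table" and the inner column list are placeholders for question 78's real SQL.
  SELECT "source"."object_id" AS "object_id",
         sum("source"."object_count") AS "sum",
         sum("source"."object_volume") AS "sum_2",
         count(DISTINCT "source"."id") AS "count"
  FROM (
    SELECT id, object_id, object_count, object_volume
    FROM base_table
  ) AS "source"
  WHERE true
  GROUP BY "source"."object_id"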

Also, check your database log to see the full query that it’s receiving and try to analyze that, since it might just be bad indexing.
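
If it turns out to be Postgres, one way to capture exactly what Metabase sends is to temporarily log all statements - a rough sketch, assuming you can change server settings (remember to revert it afterwards):

  -- Rough Postgres-only sketch: log every statement and its duration,
  -- run the slow question from Metabase, then read the full SQL from the database log.
  ALTER SYSTEM SET log_min_duration_statement = 0;
  SELECT pg_reload_conf();
  -- ...and turn it back off once you have the query:
  ALTER SYSTEM SET log_min_duration_statement = -1;
  SELECT pg_reload_conf();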

Hi @flamber!

Question 77 is the native query I want to refactor (so this is { the-actual-nested-select-query }).
#78 is a section pulled from #77.
#79 is #77 rewritten to use {{ #78 }}, so in theory, the same as #77.

I never tested the field filter. I removed it and found the same performance result.

I'll check the database log asap.

Here's the "Diagnostic Info":

{
  "browser-info": {
    "language": "en-US",
    "platform": "MacIntel",
    "userAgent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_14_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/81.0.4044.138 Safari/537.36",
    "vendor": "Google Inc."
  },
  "system-info": {
    "file.encoding": "UTF-8",
    "java.runtime.name": "OpenJDK Runtime Environment",
    "java.runtime.version": "11.0.5+10",
    "java.vendor": "AdoptOpenJDK",
    "java.vendor.url": "https://adoptopenjdk.net/",
    "java.version": "11.0.5",
    "java.vm.name": "OpenJDK 64-Bit Server VM",
    "java.vm.version": "11.0.5+10",
    "os.name": "Linux",
    "os.version": "4.14.173-137.229.amzn2.x86_64",
    "user.language": "en",
    "user.timezone": "GMT"
  },
  "metabase-info": {
    "databases": [
      "postgres"
    ],
    "hosting-env": "elastic-beanstalk",
    "application-database": "postgres",
    "application-database-details": {
      "database": {
        "name": "PostgreSQL",
        "version": "11.5"
      },
      "jdbc-driver": {
        "name": "PostgreSQL JDBC Driver",
        "version": "42.2.8"
      }
    },
    "run-mode": "prod",
    "version": {
      "date": "2020-04-21",
      "tag": "v0.35.3",
      "branch": "release-0.35.x"
    },
    "settings": {
      "report-timezone": "US/Eastern"
    }
  }
}

Thank you for your help!

@dolphinkickme Seems like something really strange is going on - we now have an issue open on it:
https://github.com/metabase/metabase/issues/13572

Thank you @flamber - I’ve been avoiding nested queries since!

And thank you for your seemingly tireless support of the community.

Hey @flamber!

After I use a question inside other questions, the database becomes unusable! All the calls to that database turn into [ASYNC: canceled]. My logs are in PT-BR!

Do you know what it could be? The threads keep rising!

(This only happens with the database the question uses; the others are OK!)

{
  "browser-info": {
    "language": "pt-BR",
    "platform": "Win32",
    "userAgent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/89.0.4389.72 Safari/537.36",
    "vendor": "Google Inc."
  },
  "system-info": {
    "file.encoding": "UTF-8",
    "java.runtime.name": "OpenJDK Runtime Environment",
    "java.runtime.version": "11.0.10+9",
    "java.vendor": "AdoptOpenJDK",
    "java.vendor.url": "https://adoptopenjdk.net/",
    "java.version": "11.0.10",
    "java.vm.name": "OpenJDK 64-Bit Server VM",
    "java.vm.version": "11.0.10+9",
    "os.name": "Linux",
    "os.version": "4.14.209-160.339.amzn2.x86_64",
    "user.language": "en",
    "user.timezone": "UTC"
  },
  "metabase-info": {
    "databases": [
      "h2",
      "postgres",
      "sqlserver",
      "redshift"
    ],
    "hosting-env": "unknown",
    "application-database": "postgres",
    "application-database-details": {
      "database": {
        "name": "PostgreSQL",
        "version": "12.5"
      },
      "jdbc-driver": {
        "name": "PostgreSQL JDBC Driver",
        "version": "42.2.18"
      }
    },
    "run-mode": "prod",
    "version": {
      "date": "2021-02-16",
      "tag": "v0.38.0",
      "branch": "release-x.38.x",
      "hash": "471aa51"
    },
    "settings": {
      "report-timezone": null
    }
  }
}

[bb08748f-5c42-4358-8eff-ca3658ca3b26] 2021-03-03T14:03:56-03:00 DEBUG metabase.server.middleware.log POST /api/card/63/query 202 [ASYNC: completed] 200.0 ms (6 DB calls) App DB connections: 7/15 Jetty threads: 12/50 (6 idle, 0 queued) (122 total active threads) Queries in flight: 9 (0 queued)
[bb08748f-5c42-4358-8eff-ca3658ca3b26] 2021-03-03T14:03:56-03:00 DEBUG metabase.server.middleware.log POST /api/card/64/query 202 [ASYNC: completed] 222.5 ms (14 DB calls) App DB connections: 7/15 Jetty threads: 11/50 (7 idle, 0 queued) (122 total active threads) Queries in flight: 9 (0 queued)
[bb08748f-5c42-4358-8eff-ca3658ca3b26] 2021-03-03T14:03:56-03:00 DEBUG metabase.server.middleware.log POST /api/card/72/query 202 [ASYNC: completed] 249.9 ms (14 DB calls) App DB connections: 2/15 Jetty threads: 5/50 (13 idle, 0 queued) (122 total active threads) Queries in flight: 13 (1 queued)
[bb08748f-5c42-4358-8eff-ca3658ca3b26] 2021-03-03T14:03:56-03:00 DEBUG metabase.server.middleware.log POST /api/card/73/query 202 [ASYNC: completed] 256.0 ms (14 DB calls) App DB connections: 1/15 Jetty threads: 4/50 (14 idle, 0 queued) (122 total active threads) Queries in flight: 13 (0 queued)
[bb08748f-5c42-4358-8eff-ca3658ca3b26] 2021-03-03T14:04:30-03:00 INFO metabase.query-processor.middleware.fetch-source-query Fetched source query from Card 58:
{:native "SELECT * FROM data.gerados_sku \nCROSS JOIN data.date_aux"}
[bb08748f-5c42-4358-8eff-ca3658ca3b26] 2021-03-03T14:05:04-03:00 DEBUG metabase.server.middleware.log POST /api/card/pivot/61/query 202 [ASYNC: canceled] 1.1 mins (31 DB calls) App DB connections: 0/15 Jetty threads: 2/50 (15 idle, 0 queued) (132 total active threads) Queries in flight: 14 (0 queued)
[bb08748f-5c42-4358-8eff-ca3658ca3b26] 2021-03-03T14:05:04-03:00 DEBUG metabase.server.middleware.log POST /api/card/pivot/79/query 202 [ASYNC: canceled] 1.1 mins (32 DB calls) App DB connections: 0/15 Jetty threads: 2/50 (15 idle, 0 queued) (132 total active threads) Queries in flight: 13 (0 queued)
[bb08748f-5c42-4358-8eff-ca3658ca3b26] 2021-03-03T14:05:04-03:00 DEBUG metabase.server.middleware.log POST /api/card/pivot/60/query 202 [ASYNC: canceled] 1.1 mins (31 DB calls) App DB connections: 0/15 Jetty threads: 2/50 (15 idle, 0 queued) (132 total active threads) Queries in flight: 12 (0 queued)
[bb08748f-5c42-4358-8eff-ca3658ca3b26] 2021-03-03T14:05:06-03:00 DEBUG metabase.server.middleware.log POST /api/card/pivot/67/query 202 [ASYNC: canceled] 1.2 mins (31 DB calls) App DB connections: 0/15 Jetty threads: 5/50 (13 idle, 0 queued) (132 total active threads) Queries in flight: 11 (0 queued)
[bb08748f-5c42-4358-8eff-ca3658ca3b26] 2021-03-03T14:05:06-03:00 DEBUG metabase.server.middleware.log POST /api/card/pivot/59/query 202 [ASYNC: canceled] 1.2 mins (32 DB calls) App DB connections: 0/15 Jetty threads: 5/50 (13 idle, 0 queued) (132 total active threads) Queries in flight: 10 (0 queued)
[bb08748f-5c42-4358-8eff-ca3658ca3b26] 2021-03-03T14:05:06-03:00 DEBUG metabase.server.middleware.log POST /api/card/pivot/66/query 202 [ASYNC: canceled] 1.2 mins (31 DB calls) App DB connections: 0/15 Jetty threads: 2/50 (15 idle, 0 queued) (132 total active threads) Queries in flight: 9 (0 queued)
[bb08748f-5c42-4358-8eff-ca3658ca3b26] 2021-03-03T14:05:06-03:00 DEBUG metabase.server.middleware.log POST /api/card/pivot/81/query 202 [ASYNC: canceled] 1.2 mins (31 DB calls) App DB connections: 0/15 Jetty threads: 2/50 (15 idle, 0 queued) (132 total active threads) Queries in flight: 8 (0 queued)
[bb08748f-5c42-4358-8eff-ca3658ca3b26] 2021-03-03T14:05:07-03:00 DEBUG metabase.server.middleware.log POST /api/card/pivot/68/query 202 [ASYNC: canceled] 1.2 mins (31 DB calls) App DB connections: 0/15 Jetty threads: 2/50 (15 idle, 0 queued) (132 total active threads) Queries in flight: 7 (0 queued)
[bb08748f-5c42-4358-8eff-ca3658ca3b26] 2021-03-03T14:05:07-03:00 DEBUG metabase.server.middleware.log POST /api/card/pivot/78/query 202 [ASYNC: canceled] 1.2 mins (29 DB calls) App DB connections: 0/15 Jetty threads: 2/50 (15 idle, 0 queued) (132 total active threads) Queries in flight: 6 (0 queued)
[bb08748f-5c42-4358-8eff-ca3658ca3b26] 2021-03-03T14:05:10-03:00 DEBUG metabase.server.middleware.log POST /api/card/pivot/80/query 202 [ASYNC: canceled] 1.2 mins (30 DB calls) App DB connections: 0/15 Jetty threads: 4/50 (13 idle, 0 queued) (132 total active threads) Queries in flight: 5 (0 queued)
[bb08748f-5c42-4358-8eff-ca3658ca3b26] 2021-03-03T14:05:28-03:00 DEBUG metabase.server.middleware.log POST /api/card/pivot/69/query 202 [ASYNC: canceled] 1.5 mins (31 DB calls) App DB connections: 0/15 Jetty threads: 2/50 (15 idle, 0 queued) (132 total active threads) Queries in flight: 4 (0 queued)
[bb08748f-5c42-4358-8eff-ca3658ca3b26] 2021-03-03T14:05:28-03:00 DEBUG metabase.server.middleware.log POST /api/card/pivot/71/query 202 [ASYNC: canceled] 1.5 mins (31 DB calls) App DB connections: 0/15 Jetty threads: 2/50 (15 idle, 0 queued) (132 total active threads) Queries in flight: 3 (0 queued)
[bb08748f-5c42-4358-8eff-ca3658ca3b26] 2021-03-03T14:05:28-03:00 DEBUG metabase.server.middleware.log POST /api/card/pivot/70/query 202 [ASYNC: canceled] 1.5 mins (33 DB calls) App DB connections: 0/15 Jetty threads: 2/50 (15 idle, 0 queued) (132 total active threads) Queries in flight: 2 (0 queued)
[bb08748f-5c42-4358-8eff-ca3658ca3b26] 2021-03-03T14:05:28-03:00 DEBUG metabase.server.middleware.log POST /api/card/pivot/77/query 202 [ASYNC: canceled] 1.5 mins (32 DB calls) App DB connections: 0/15 Jetty threads: 2/50 (15 idle, 0 queued) (132 total active threads) Queries in flight: 1 (0 queued)
[bb08748f-5c42-4358-8eff-ca3658ca3b26] 2021-03-03T14:11:00-03:00 INFO metabase.sync.util STARTING: step 'sync-timezone' for postgres Database 2 'Data WareHouse'
[bb08748f-5c42-4358-8eff-ca3658ca3b26] 2021-03-03T14:11:40-03:00 DEBUG metabase.server.middleware.log POST /api/card/pivot/60/query 202 [ASYNC: canceled] 1.0 mins (19 DB calls) App DB connections: 0/15 Jetty threads: 5/50 (7 idle, 0 queued) (146 total active threads) Queries in flight: 22 (0 queued)
[bb08748f-5c42-4358-8eff-ca3658ca3b26] 2021-03-03T14:11:40-03:00 DEBUG metabase.server.middleware.log POST /api/card/pivot/61/query 202 [ASYNC: canceled] 1.0 mins (20 DB calls) App DB connections: 0/15 Jetty threads: 5/50 (7 idle, 0 queued) (146 total active threads) Queries in flight: 21 (0 queued)
[bb08748f-5c42-4358-8eff-ca3658ca3b26] 2021-03-03T14:11:40-03:00 DEBUG metabase.server.middleware.log POST /api/card/pivot/59/query 202 [ASYNC: canceled] 1.0 mins (19 DB calls) App DB connections: 0/15 Jetty threads: 2/50 (9 idle, 0 queued) (146 total active threads) Queries in flight: 20 (0 queued)
[bb08748f-5c42-4358-8eff-ca3658ca3b26] 2021-03-03T14:11:40-03:00 DEBUG metabase.server.middleware.log POST /api/card/63/query 202 [ASYNC: canceled] 1.0 mins (5 DB calls) App DB connections: 0/15 Jetty threads: 2/50 (9 idle, 0 queued) (146 total active threads) Queries in flight: 19 (0 queued)
[bb08748f-5c42-4358-8eff-ca3658ca3b26] 2021-03-03T14:11:40-03:00 DEBUG metabase.server.middleware.log POST /api/card/pivot/66/query 202 [ASYNC: canceled] 1.0 mins (19 DB calls) App DB connections: 0/15 Jetty threads: 5/50 (7 idle, 0 queued) (146 total active threads) Queries in flight: 18 (0 queued)
[bb08748f-5c42-4358-8eff-ca3658ca3b26] 2021-03-03T14:11:40-03:00 DEBUG metabase.server.middleware.log POST /api/card/64/query 202 [ASYNC: canceled] 1.0 mins (11 DB calls) App DB connections: 0/15 Jetty threads: 2/50 (9 idle, 0 queued) (146 total active threads) Queries in flight: 17 (0 queued)
[bb08748f-5c42-4358-8eff-ca3658ca3b26] 2021-03-03T14:11:40-03:00 DEBUG metabase.server.middleware.log POST /api/card/72/query 202 [ASYNC: canceled] 1.0 mins (11 DB calls) App DB connections: 0/15 Jetty threads: 2/50 (9 idle, 0 queued) (146 total active threads) Queries in flight: 16 (0 queued)
[bb08748f-5c42-4358-8eff-ca3658ca3b26] 2021-03-03T14:11:40-03:00 DEBUG metabase.server.middleware.log POST /api/card/pivot/68/query 202 [ASYNC: canceled] 1.0 mins (19 DB calls) App DB connections: 0/15 Jetty threads: 3/50 (8 idle, 0 queued) (146 total active threads) Queries in flight: 15 (0 queued)
[bb08748f-5c42-4358-8eff-ca3658ca3b26] 2021-03-03T14:11:40-03:00 DEBUG metabase.server.middleware.log POST /api/card/pivot/67/query 202 [ASYNC: canceled] 1.0 mins (19 DB calls) App DB connections: 0/15 Jetty threads: 2/50 (9 idle, 0 queued) (146 total active threads) Queries in flight: 14 (0 queued)
[bb08748f-5c42-4358-8eff-ca3658ca3b26] 2021-03-03T14:11:40-03:00 DEBUG metabase.server.middleware.log POST /api/card/pivot/69/query 202 [ASYNC: canceled] 1.0 mins (19 DB calls) App DB connections: 0/15 Jetty threads: 2/50 (9 idle, 0 queued) (146 total active threads) Queries in flight: 13 (0 queued)
[bb08748f-5c42-4358-8eff-ca3658ca3b26] 2021-03-03T14:11:40-03:00 DEBUG metabase.server.middleware.log POST /api/card/73/query 202 [ASYNC: canceled] 1.0 mins (11 DB calls) App DB connections: 0/15 Jetty threads: 3/50 (8 idle, 0 queued) (146 total active threads) Queries in flight: 12 (0 queued)
[bb08748f-5c42-4358-8eff-ca3658ca3b26] 2021-03-03T14:11:40-03:00 DEBUG metabase.server.middleware.log POST /api/card/pivot/70/query 202 [ASYNC: canceled] 1.0 mins (19 DB calls) App DB connections: 0/15 Jetty threads: 2/50 (9 idle, 0 queued) (146 total active threads) Queries in flight: 11 (0 queued)
[bb08748f-5c42-4358-8eff-ca3658ca3b26] 2021-03-03T14:11:41-03:00 DEBUG metabase.server.middleware.log POST /api/card/pivot/78/query 202 [ASYNC: canceled] 1.0 mins (17 DB calls) App DB connections: 0/15 Jetty threads: 3/50 (8 idle, 0 queued) (146 total active threads) Queries in flight: 10 (0 queued)
[bb08748f-5c42-4358-8eff-ca3658ca3b26] 2021-03-03T14:11:41-03:00 DEBUG metabase.server.middleware.log POST /api/card/pivot/77/query 202 [ASYNC: canceled] 1.0 mins (19 DB calls) App DB connections: 0/15 Jetty threads: 3/50 (8 idle, 0 queued) (146 total active threads) Queries in flight: 9 (0 queued)
[bb08748f-5c42-4358-8eff-ca3658ca3b26] 2021-03-03T14:11:41-03:00 DEBUG metabase.server.middleware.log POST /api/card/pivot/76/query 202 [ASYNC: canceled] 1.0 mins (19 DB calls) App DB connections: 0/15 Jetty threads: 3/50 (8 idle, 0 queued) (146 total active threads) Queries in flight: 8 (0 queued)
[bb08748f-5c42-4358-8eff-ca3658ca3b26] 2021-03-03T14:11:41-03:00 DEBUG metabase.server.middleware.log POST /api/card/pivot/79/query 202 [ASYNC: canceled] 1.0 mins (20 DB calls) App DB connections: 0/15 Jetty threads: 6/50 (6 idle, 0 queued) (146 total active threads) Queries in flight: 7 (0 queued)
[bb08748f-5c42-4358-8eff-ca3658ca3b26] 2021-03-03T14:11:41-03:00 DEBUG metabase.server.middleware.log POST /api/card/pivot/81/query 202 [ASYNC: canceled] 1.0 mins (19 DB calls) App DB connections: 0/15 Jetty threads: 3/50 (8 idle, 0 queued) (146 total active threads) Queries in flight: 6 (0 queued)
[bb08748f-5c42-4358-8eff-ca3658ca3b26] 2021-03-03T14:11:41-03:00 DEBUG metabase.server.middleware.log POST /api/card/pivot/71/query 202 [ASYNC: canceled] 1.0 mins (19 DB calls) App DB connections: 0/15 Jetty threads: 4/50 (7 idle, 0 queued) (146 total active threads) Queries in flight: 5 (0 queued)
[bb08748f-5c42-4358-8eff-ca3658ca3b26] 2021-03-03T14:11:41-03:00 DEBUG metabase.server.middleware.log POST /api/card/pivot/80/query 202 [ASYNC: canceled] 1.0 mins (17 DB calls) App DB connections: 0/15 Jetty threads: 5/50 (7 idle, 0 queued) (146 total active threads) Queries in flight: 4 (0 queued)

@PauloHFM Looking at the query time, I’m almost 99% sure that your reverse-proxy is closing the connection before the query has finished.
https://github.com/metabase/metabase/issues/12423 - upvote by clicking :+1: on the first post

I’m intrigued, because this happens only after using nested questions…

Thanks for the assessment!

@PauloHFM That’s because of all the extra time it takes to run the sub-query - so these are two different issues.

This keeps getting all the questions that use that DB canceled, even though I’m not using nested questions anymore. It only “resets” when I restart Metabase, and then after some queries it happens again. (Only me using it.)

I imagine that if something is closing the connection or hitting a timeout somewhere (load balancer, reverse proxy, database), Metabase is still counting it as an active query, or using threads to “try” to get the data?
Is that why the “active queries” and “threads” keep rising?

@PauloHFM Check your browser developer Network-tab and see the response headers - it should indicate which part of your chain is closing the connection.
Metabase should automatically send a “cancel query” to the database when it has “lost” the user, but I don’t know which database you’re seeing this on.
There were some fixes in 0.38.1 for some databases not being handled correctly.
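
If the affected database is the Postgres warehouse, you could also check on the database side whether those “canceled” queries are actually still running - a rough sketch (pg_stat_activity is Postgres-only):

  -- Sketch: list non-idle sessions and how long they have been running,
  -- to see whether the cancelled Metabase queries are still executing on the server.
  SELECT pid, state, now() - query_start AS runtime, left(query, 80) AS query_text
  FROM pg_stat_activity
  WHERE state <> 'idle'
  ORDER BY query_start;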

Hi @flamber !

Just so you know, we disabled HTTP/2 on the Application Load Balancer, and it worked!

Is this a Metabase issue with handling HTTP/2?

@PauloHFM Interesting.

I would not recommend disabling http/2 between the client and the load balancer, since that will make parts of the UI load slower in modern browsers.

But Metabase only supports http/1.1, so that should be defined between the load balancer and Metabase.

I don’t know how much control ALB gives you, but I know you can make such configurations with Traefik/Nginx/Caddy/Apache.

But if it works for you and you don’t notice any negative side-effects, then leave it disabled.