Card shows fresh data but downloaded file shows an old instance of the data

Cards in Metabase show the correct data. However, when the data is downloaded, the file contains outdated data, probably from a cache.

I currently use Metabase v0.40.1. Below is a reproducible example (join column was mangled in the original post; `a.id` stands in for our actual key):

SELECT *, CURRENT_DATE AS execution_date FROM table1 a LEFT JOIN table2 b ON a.id = b.foreign_id

In the UI, the correct result is shown (we still face cache issues regularly, but those can be handled by changing a single line in the query and re-running, or by refreshing). However, when the data is downloaded (CSV, JSON, or XLSX), the data in the downloaded file does not correspond to the data in the UI. Oddly, the execution_date field alone is correctly populated with the current date. I have tried many things to handle this issue (refreshing the web page, restarting the browser, etc.), but nothing is currently working; the issue just resolves itself after a couple of tries/minutes.

Any ideas on how I can work around this issue would be much appreciated.

Hi @the_adhi_porter
Please post "Diagnostic Info" from Admin > Troubleshooting.

Do you have caching enabled? Admin > Settings > Caching

Exports do not use the cache, so that cannot be the reason. But perhaps you have a database cache?

How many instances of Metabase are you running?

Hi @flamber
Working with OP. Sharing some requested details.

Caching enabled: Yes.

No idea about a database cache, but I have asked the infra team and will share details later.

We run a single instance of Metabase. Not sure where to find extra info on this, if any.

Diagnostic info below:

{
  "browser-info": {
    "language": "en-US",
    "platform": "MacIntel",
    "userAgent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/91.0.4472.77 Safari/537.36",
    "vendor": "Google Inc."
  },
  "system-info": {
    "file.encoding": "UTF-8",
    "java.runtime.name": "OpenJDK Runtime Environment",
    "java.runtime.version": "11.0.11+9",
    "java.vendor": "AdoptOpenJDK",
    "java.vendor.url": "",
    "java.version": "11.0.11",
    "java.vm.name": "OpenJDK 64-Bit Server VM",
    "java.vm.version": "11.0.11+9",
    "os.name": "Linux",
    "os.version": "4.14.158-129.185.amzn2.x86_64",
    "user.language": "en",
    "user.timezone": "GMT"
  },
  "metabase-info": {
    "databases": [...],
    "hosting-env": "unknown",
    "application-database": "postgres",
    "application-database-details": {
      "database": {
        "name": "PostgreSQL",
        "version": "9.6.22"
      },
      "jdbc-driver": {
        "name": "PostgreSQL JDBC Driver",
        "version": "42.2.18"
      }
    },
    "run-mode": "prod",
    "version": {
      "date": "2021-07-14",
      "tag": "v0.40.1",
      "branch": "release-x.40.x",
      "hash": "ed8f9c8"
    },
    "settings": {
      "report-timezone": null
    }
  }
}

@ArgoSorcerer Okay, caching only works on saved questions. It does not apply to exports or ad-hoc queries.

Since I don't know which database you are querying, I'm unsure whether you might have a database cache layer.

I can see you have 3 database types connected to Metabase. If you run a query similar to the one described initially against all 3 databases, that would test the idea of a database cache layer.

@flamber apologies for the delayed response. We are hitting the Postgres DB. AFAIK we do not have any database cache layer, but I will still check with the corresponding team.

I tried to reproduce the issue and captured a screen recording. Since video upload is not possible, I'm sharing a link to it.

00:02 mark: Card shows "updated 4 hours ago" (265-record result set)
00:06 mark: Try to refresh the result set (and override the cache)
00:25 mark: Result set comes back, up to date with 625 records
00:36 mark: Refresh the browser page to check if the cache got updated - nope (4 hours ago, 265 records)

The only solution we have seen work here is to open the query editor, add or remove whitespace or lines, then save the question again. This is creating major downstream data consumption issues: Pulses and public links feeding into Google Sheets through Google Apps Script keep serving stale data.
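As a side note on why the whitespace trick works: the cached entry is keyed on a hash of the query, so any textual change, even an extra space, produces a new hash and therefore a cache miss. (The exact hashing Metabase uses internally is an implementation detail; the sketch below just illustrates the principle with Postgres's built-in md5.)

```sql
-- Two logically identical queries that differ only in whitespace
-- hash to different values, hence map to different cache entries.
SELECT
  md5('SELECT * FROM table1')  AS hash_original,
  md5('SELECT *  FROM table1') AS hash_with_extra_space;
-- The two hash values differ, which is why touching whitespace
-- forces a fresh run instead of a cached result.
```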

PS: We also experimented with increasing the MINIMUM QUERY DURATION to 30 seconds so that this question does not get cached. The average runtime we found across runs was 12 seconds.

I tried to peek into the metadata to understand how many questions are hitting the cache, and to rank-order them by average runtime for optimization. But it seems like nothing is hitting the cache. Screenshot below; please let me know if my interpretation of the tables/columns is correct. Data logging issue?
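For reference, this is roughly the kind of query we ran against the Metabase application database. It is a sketch only: it assumes Metabase's query_execution table with cache_hit, running_time, card_id, and started_at columns, which may differ by version, so verify the schema before relying on it.

```sql
-- Run against the Metabase *application* database (Postgres here),
-- not the data warehouse. Ranks cards by average runtime and shows
-- how often each one was served from cache in the last 7 days.
SELECT
  card_id,
  COUNT(*)                                   AS runs,
  SUM(CASE WHEN cache_hit THEN 1 ELSE 0 END) AS cache_hits,
  ROUND(AVG(running_time))                   AS avg_running_time_ms
FROM query_execution
WHERE started_at > now() - interval '7 days'
GROUP BY card_id
ORDER BY avg_running_time_ms DESC;
```

In our results the cache_hits column was zero everywhere, which prompted the question above.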

Hope this helps.

@ArgoSorcerer Okay, so it has nothing to do with exports/downloads, but with viewing regular results.

But I cannot reproduce on 0.40.5.

Can you provide all the logs? That would make it possible to see when the question was cached and which query hash it is using, since that is what is referenced in the application database table query_cache.
Perhaps your cache has been slightly corrupted by all the settings changes you've made, so you could truncate that table to start fresh.
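For anyone following along, clearing the cache amounts to something like the sketch below, assuming direct access to the Postgres application database (take a backup first; this discards all cached results, and the table name matches the query_cache table mentioned above):

```sql
-- Run against the Metabase application database, not the data warehouse.
-- query_cache holds the cached result blobs keyed by query hash.
TRUNCATE TABLE query_cache;
-- Metabase repopulates the table as questions are re-run, so the only
-- cost is that the next run of each cached question hits the database.
```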

The cache_hit column has never functioned as far as I know, but that should be fixed in the upcoming 0.41.

Thanks @flamber.
We will truncate the query_cache table over the weekend to check if that works.
Logs will take me some time to get access to. Will share ASAP.

so it has nothing to do with exports/downloads, but regular viewing results.

Attaching another link to show that we do indeed have a problem with export/download. Public links suffer from this too.

@ArgoSorcerer Since I have absolutely no idea what your database table contains, it's impossible for me to know which result is correct.
But considering that there are more than 30,000 installations of Metabase, I'm quite sure this is something specific to your setup. Without logs, details about the results, which database you are querying, and how you are hosting/running Metabase, it is difficult to know what the problem is.