Dynamically updated table with months as field names

Hey folks

I have a table that I produce in BigQuery which is a rolling 12-month view of pivoted data (each month is a field).

I use a simple question in Metabase to display all of the data.

When the table updates (at the start of each month), the question errors because the oldest month is no longer available in the source table. For example, we are in April 2021 now, so April 2020 is no longer available as a field, but Metabase still expects it.

I cannot use custom SQL to present this question as I am using the fully embedded option to display my reports to customers (sandboxing is not available for custom SQL questions).

Does anybody have a way around this? I'm thinking specifically about whether there's a way to have the behind-the-scenes SQL that Metabase generates be rebuilt each time the question is requested, so that the dropped month isn't compiled into the query.

Hi @danilo_nzyte

Please use the support email when you're on the Enterprise Edition; that helps us prioritize issues.

You are seeing this issue: https://github.com/metabase/metabase/issues/12721

Here's how I would do it, but your mileage may vary, since I don't know if you're using that table in other places or how your data is structured.

I would use generic column names like previous_month_01, previous_month_02, ..., since that means the questions would continue to work.
Then every month you would only have to update the Display Name in metabase_field.display_name so the users see something more understandable (see the sketch below).
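For example, a one-off relabel against the Metabase application database might look like this. Just a sketch: rolling_12_months is a placeholder for whatever your table is called in metabase_table, and the label is whatever you want users to see.

```sql
-- Hypothetical example: relabel the most recent generic column.
-- 'rolling_12_months' stands in for your table's name in metabase_table.
UPDATE metabase_field
SET display_name = 'Mar 2021'
WHERE name = 'previous_month_01'
  AND table_id = (SELECT id FROM metabase_table
                  WHERE name = 'rolling_12_months');
```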

If you are somehow forced to have hardcoded columns, then, as you've noticed, Metabase will "archive" the old columns that are connected to the question and pick up the new columns on the next sync. You would then have to update the ID references in the questions so they point to the new columns discovered during the sync.
This is a lot more complicated, and doing something incorrectly would cause various types of corruption.

Thanks @flamber

The generic column naming isn't an issue, but I'm assuming this will require a backend job to update the PostgreSQL table (the Metabase application database)? Is that what you were suggesting?

@danilo_nzyte Yes, exactly, or API calls if you prefer. I would generally recommend the API, but in this case it's a very small and limited update query you're making, so if you write a (strict and simple) procedure on Postgres, triggered or scheduled monthly, you should be able to get it going without causing any corruption.
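Here's a minimal sketch of what such a procedure could look like, with the same caveats: it assumes the application database is Postgres, the source table is registered in metabase_table as rolling_12_months (placeholder name), and previous_month_01 is the most recent completed month. The pg_cron schedule at the end is just one way to run it monthly.

```sql
-- Sketch only: relabel all 12 generic columns based on the current date.
CREATE OR REPLACE FUNCTION refresh_month_display_names() RETURNS void AS $$
DECLARE
  i int;
BEGIN
  FOR i IN 1..12 LOOP
    UPDATE metabase_field
    SET display_name = to_char(
          date_trunc('month', now()) - make_interval(months => i),
          'Mon YYYY')
    WHERE name = format('previous_month_%s', lpad(i::text, 2, '0'))
      AND table_id = (SELECT id FROM metabase_table
                      WHERE name = 'rolling_12_months');
  END LOOP;
END;
$$ LANGUAGE plpgsql;

-- If the pg_cron extension is available, run at 03:00 on the 1st of each month:
-- SELECT cron.schedule('0 3 1 * *', 'SELECT refresh_month_display_names()');
```

The API equivalent would be a PUT /api/field/:id call with the new display_name for each of the 12 fields.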