We built a great report for one of our clients that leverages an aggregated data table in SQL Server. Before cascading filters arrived in v37, we had to create a separate aggregate table per client, add it to Metabase, and then rebuild all of the questions from a master example dashboard. I thought there must be an easier way. After looking through the discussion forum I found a Python-based GitHub solution that lets me deep-copy dashboards and questions: https://github.com/vvaezian/metabase_api_python . Well, I'm not super technical, so I had one of our technical people help me set up my environment, and I was able to clone and deep-copy a dashboard. This created a new dashboard and independent copies of the questions on it. The problem is that because the data connection is different, there doesn't seem to be an easy way to update the table underlying each question without recreating the question from scratch.
I think a lot of the heavy lifting is solved by the GitHub solution (thank you vvaezian), but I needed to go into the underlying tables to update specific information, and this is where I got a little confused. I would welcome any help or direction on how best to update these underlying tables.
I then needed to identify the following:
- Database IDs for both the original (cloned) database and the new connection: cloned database_id = 1, and the new connection, which we will call database_id = 2.
- Table IDs: cloned table_id = 1 and the new table table_id = 2.
- Collection ID: new cloned collection_id = 2.
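Since the old and new field ids have to be matched up by field name (as described below), that lookup can be scripted instead of done by hand. Here is a minimal sketch: the sample rows stand in for the results of querying Metabase's application database for the fields of the old and new tables, and the field names are illustrative, not taken from my actual schema.

```python
# Build an old->new field-id map by matching fields on their name.
# The rows below stand in for query results from Metabase's application
# database (illustrative ids and names, not my real schema).

old_fields = [(10, "client_id"), (11, "revenue"), (12, "period")]  # old table
new_fields = [(20, "client_id"), (21, "revenue"), (22, "period")]  # new table

def build_field_id_map(old_rows, new_rows):
    """Return {old_field_id: new_field_id}, matched by field name."""
    new_by_name = {name: fid for fid, name in new_rows}
    mapping = {}
    for fid, name in old_rows:
        if name not in new_by_name:
            raise KeyError(f"No field named {name!r} in the new table")
        mapping[fid] = new_by_name[name]
    return mapping

field_id_map = build_field_id_map(old_fields, new_fields)
print(field_id_map)  # {10: 20, 11: 21, 12: 22}
```

With a map like this in hand, every old field-id in a question can be translated mechanically rather than looked up one at a time.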
With this information, I looked up records in the report_card table so that I could update some key fields.
- I updated the dataset_query field by doing a search and replace of database":1 with database":2.
- I updated the dataset_query field by doing a search and replace of source-table":1 with source-table":2.
- I copied the value of the dataset_query field into a text editor and isolated all of the field-id values, since the existing values refer to the original data connection. I used the id of each original field to look up its text name, then used that name to look up the new field-id for the same field on the new connection. Once this was complete, I did a search and replace on report_card.dataset_query, swapping each old field number (e.g. field-id = 1, from database_id = 1) for the corresponding new field number (e.g. field-id = 2, from database_id = 2).
- Updated report_card.database_id = 2
- Updated report_card.table_id = 2
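The steps above can be sketched in Python. Rather than a raw text search-and-replace (which can accidentally hit unrelated numbers), this walks the parsed dataset_query JSON and rewrites the database id, source-table id, and every field-id it finds. The MBQL shape assumed here, with ["field-id", n] references, matches what I see in v37-era questions; I understand later versions use a different field reference format, so treat this as a sketch under that assumption.

```python
import json

# Illustrative old->new field-id map (not my real ids).
FIELD_ID_MAP = {10: 20, 11: 21}

def rewrite_field_ids(node, field_id_map):
    """Recursively replace ["field-id", old] with ["field-id", new]."""
    if isinstance(node, list):
        if len(node) == 2 and node[0] == "field-id" and node[1] in field_id_map:
            return ["field-id", field_id_map[node[1]]]
        return [rewrite_field_ids(x, field_id_map) for x in node]
    if isinstance(node, dict):
        return {k: rewrite_field_ids(v, field_id_map) for k, v in node.items()}
    return node

def migrate_dataset_query(raw, new_db_id, new_table_id, field_id_map):
    """Repoint one report_card.dataset_query value at the new connection."""
    q = json.loads(raw)
    q["database"] = new_db_id
    q["query"]["source-table"] = new_table_id
    q["query"] = rewrite_field_ids(q["query"], field_id_map)
    return json.dumps(q)

# Example dataset_query value (illustrative).
old = json.dumps({
    "database": 1,
    "type": "query",
    "query": {"source-table": 1,
              "aggregation": [["sum", ["field-id", 11]]],
              "breakout": [["field-id", 10]]},
})
new = migrate_dataset_query(old, 2, 2, FIELD_ID_MAP)
```

The same three edits I did by hand (database, source-table, field-ids) happen in one pass, and report_card.database_id / report_card.table_id would still need their separate column updates.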
This seemed to work well, and I was able to open the dashboard and see the new questions properly formatted. There was still a little bit of wonkiness, but I was working through those details when I decided to look for other tables that might have information that needed to be updated with legacy database, table, or field values.
That is when I found the report_dashboardcard.parameter_mappings column.
It looks like this column holds values such as parameter_id, field-id, and card_id that most likely reference the legacy data connection. I think field-id is relatively self-explanatory and just needs the same replace/update logic I used in the dataset_query field. I'm a little stumped on what parameter_id is, though. I thought at first it might be related to the dashboard filters, so I tried removing all the filters and saving the dashboard to see if the underlying table dropped the parameter ids, but it did not. I also know that the parameter ids are shared between the original dashboard and the new clone, and my guess is that this will cause a problem down the road.
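For what it's worth, here is how I would sketch patching one parameter_mappings value, based on the structure I see in my v37 instance: a JSON array of objects with parameter_id, card_id, and a target that wraps a field reference. Only card_id and the field-id inside target appear to point at other tables; all ids below are illustrative, and I may be wrong about what parameter_id means.

```python
import json

# Illustrative id maps (not my real ids).
FIELD_ID_MAP = {10: 20}   # old field id -> new field id
CARD_ID_MAP = {5: 9}      # old question id -> cloned question id

def migrate_parameter_mappings(raw, card_id_map, field_id_map):
    """Rewrite card_id and field-id in one parameter_mappings value.
    parameter_id is left untouched, since it seems to identify the
    filter widget on the dashboard itself rather than any data-side id."""
    mappings = json.loads(raw)
    for m in mappings:
        m["card_id"] = card_id_map.get(m["card_id"], m["card_id"])
        target = m.get("target")
        if (isinstance(target, list) and len(target) == 2
                and target[0] == "dimension"
                and target[1][0] == "field-id"):
            old_id = target[1][1]
            target[1][1] = field_id_map.get(old_id, old_id)
    return json.dumps(mappings)

# Example parameter_mappings value (illustrative).
raw = json.dumps([{"parameter_id": "abc123", "card_id": 5,
                   "target": ["dimension", ["field-id", 10]]}])
patched = migrate_parameter_mappings(raw, CARD_ID_MAP, FIELD_ID_MAP)
```

Whether leaving parameter_id alone is actually safe is exactly the part I'm unsure about.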
My overall question is: what tables do I need to update so that the new questions in a new dashboard point to a new connection? This assumes that I've already added the connection and need to update the underlying tables myself, as I do not believe the Python script I have will update that information. If I'm missing something, please let me know. Thank you for any advice.