Bulk archiving?

Is there any way to access the Metabase Application Database directly to bulk archive cards and dashboards? Or do I need to use the API in order to perform a cleanup task of this nature?

Hi @chadwicke
I'm not sure what scale you mean by "bulk", but you can select multiple objects by clicking the checkboxes (hover over the icons) in the collection lists. Or use the API.
Alternatively, set report_card.archived=true and report_dashboard.archived=true directly in the application database, if you don't mind some potential inconsistency.
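To illustrate the direct-database route, here is a minimal sketch. It uses an in-memory SQLite table as a stand-in for the real Metabase application database (which is typically H2, Postgres, or MySQL); the table and column names (report_card, archived) come from the thread, but the rest of the schema here is invented for the example.

```python
import sqlite3

# Stand-in for the real application database; only id/name/archived are real
# column names, the rest of this schema is illustrative.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE report_card (id INTEGER PRIMARY KEY, name TEXT, archived BOOLEAN)"
)
conn.executemany(
    "INSERT INTO report_card (id, name, archived) VALUES (?, ?, ?)",
    [(1, "Old KPI card", False), (2, "Current KPI card", False)],
)

# Bulk-archive a chosen set of card IDs in a single UPDATE.
ids_to_archive = [1]
placeholders = ",".join("?" for _ in ids_to_archive)
conn.execute(
    f"UPDATE report_card SET archived = 1 WHERE id IN ({placeholders})",
    ids_to_archive,
)
conn.commit()

archived = [row[0] for row in conn.execute(
    "SELECT id FROM report_card WHERE archived = 1 ORDER BY id"
)]
print(archived)  # → [1]
```

The same UPDATE statement would apply to report_dashboard. As noted above, writing to the application database bypasses Metabase's own bookkeeping, so the API route is safer.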

Hey Flamber!

I'm talking about ~1,200 dashboards and cards. It sounds like the ideal solution would probably be the API, based on your response?

Also, do you think you could provide or point me to an example of a PUT request for changing a card or dashboard to archived=true?

@chadwicke Open your browser's developer tools Network tab to see what happens when you use Metabase. Almost everything you do in the interface is done via API calls.
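For the PUT request asked about above: archiving is `PUT /api/card/:id` (or `PUT /api/dashboard/:id`) with a JSON body of `{"archived": true}`, authenticated via the `X-Metabase-Session` header. A small sketch that builds the request pieces; the base URL and session token are placeholders:

```python
import json

def build_archive_request(base_url, item_type, item_id, session_token):
    """Build URL, headers, and body for archiving a card or dashboard.

    base_url and session_token are placeholders you would supply; the
    endpoint path and header name follow the Metabase API.
    """
    assert item_type in ("card", "dashboard")
    url = f"{base_url}/api/{item_type}/{item_id}"
    headers = {
        "Content-Type": "application/json",
        "X-Metabase-Session": session_token,
    }
    body = json.dumps({"archived": True})
    return url, headers, body

url, headers, body = build_archive_request(
    "https://metabase.example.com", "card", 42, "<session-token>"
)
print(url)  # → https://metabase.example.com/api/card/42
```

You could then send it with, say, `requests.put(url, headers=headers, data=body)` in a loop over all ~1,200 IDs.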

There are many wrappers available for Metabase API. No need to start from scratch.
Here is a Python wrapper, for instance (it includes a move_to_archive function).

Your Python wrapper was very helpful and got me to exactly where I needed to be in a fraction of the time - thanks!

I do have one other question. I'm using the Dataset endpoint to query the application database directly rather than storing a card (which someone could tamper with) and using that as a data source. Is there any way to query the application database like that, but without needing to supply the database parameter, so that nothing is hard-coded?

Your question is not clear.

That's fair. Let me be more specific. I'm bulk archiving cards and dashboards, and I want the process to be as automated as possible, but also sophisticated and flexible. To get the cards and dashboards that I want to archive, I first created a Metabase card that queries the report_card table in the application database. My Python script would then make an API call to the Card endpoint, run the card I set up, return the data, put it in a data frame, and so on. If someone modifies that card, though, it could cause problems. For that reason, it was suggested that I use the Dataset endpoint and simply run the query ad hoc through the API. This is working out great, but the API call for this endpoint requires the database ID of the DB I want to query, which is the application database.

I know this is getting a little off our original topic, but I was wondering if there was an approach like the one described that wouldn't require the DB ID. I hope that's a bit better of an explanation.

Queries have to run against a database.
If you don't want to provide the DB ID, you can provide the DB name instead and look up the DB ID dynamically (using the get_item_id function).
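The dynamic lookup amounts to fetching the list of configured databases (`GET /api/database`) and matching on name. A sketch of that resolution step; the response shape is trimmed to just the fields used, and the database names are invented:

```python
def database_id_by_name(databases, name):
    """Resolve a database's human-readable name to its numeric ID.

    `databases` is the list returned by GET /api/database (trimmed here to
    id/name); this mimics what a helper like the wrapper's get_item_id does.
    """
    matches = [db["id"] for db in databases if db["name"] == name]
    if len(matches) != 1:
        raise ValueError(
            f"expected exactly one database named {name!r}, found {len(matches)}"
        )
    return matches[0]

# Example response, trimmed to the fields we use (names are made up):
dbs = [
    {"id": 1, "name": "Sample Database"},
    {"id": 2, "name": "Metabase App DB"},
]
print(database_id_by_name(dbs, "Metabase App DB"))  # → 2
```

Raising on zero or multiple matches keeps the automation from silently querying the wrong database if names change.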

That's a great suggestion, and I appreciate it. That's the only alternative I could imagine, but at the end of the day, hard-coding the name isn't really any different from hard-coding the DB ID, so I'll probably just stick with the ID. I appreciate your help with this project, though! You too, @flamber!