I'm working with Python and the Metabase API to retrieve data from arbitrary tables. The goal is to design a general function that can handle any table — without prior knowledge of its schema, fields, or primary keys. Some tables are huge and may not have explicitly defined primary keys or reliable indexes.
What is the best approach to paginate through all the data in this case? Ideally, I want a solution that:
i. Works for any table, regardless of its structure;
ii. Ensures no data is missed or duplicated;
iii. Doesn't assume the presence of a natural ordering column (like an ID or timestamp);
iv. Is as efficient as possible.
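For context, here's a minimal sketch of what I have now: offset-based pagination over a native SQL query sent through Metabase's `POST /api/dataset` endpoint (the host, credentials, table name, and database id below are placeholders). It returns rows, but without a stable `ORDER BY` I suspect it can violate requirement ii, and the growing `OFFSET` gets slower on huge tables, which hurts requirement iv:

```python
import requests

METABASE_URL = "https://metabase.example.com"  # placeholder host
DATABASE_ID = 1                                # placeholder database id
PAGE_SIZE = 1000                               # kept under Metabase's row cap for query results

# Authenticate: POST /api/session returns a session token,
# which subsequent requests pass in the X-Metabase-Session header.
session_id = requests.post(
    f"{METABASE_URL}/api/session",
    json={"username": "me@example.com", "password": "..."},
).json()["id"]
headers = {"X-Metabase-Session": session_id}

def fetch_all_rows(table_name):
    """Naive LIMIT/OFFSET pagination via a native SQL query.

    Note: table_name is interpolated directly into SQL, so this is
    only safe for trusted input.
    """
    offset = 0
    while True:
        payload = {
            "database": DATABASE_ID,
            "type": "native",
            "native": {
                "query": f"SELECT * FROM {table_name} "
                         f"LIMIT {PAGE_SIZE} OFFSET {offset}"
            },
        }
        resp = requests.post(
            f"{METABASE_URL}/api/dataset", json=payload, headers=headers
        )
        resp.raise_for_status()
        rows = resp.json()["data"]["rows"]
        if not rows:
            break
        yield from rows
        if len(rows) < PAGE_SIZE:  # short page means we've reached the end
            break
        offset += PAGE_SIZE
```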
Any insights, best practices, or Python-based examples would be appreciated.