How can I efficiently paginate a large Metabase table via API when the table structure is unknown and has no explicit primary key?

I'm working with Python and the Metabase API to retrieve data from arbitrary tables. The goal is to design a general function that can handle any table — without prior knowledge of its schema, fields, or primary keys. Some tables are huge and may not have explicitly defined primary keys or reliable indexes.

What is the best approach to paginate through all the data in this case? Ideally, I want a solution that:

i. Works for any table, regardless of its structure;

ii. Ensures no data is missed or duplicated;

iii. Doesn't assume the presence of a natural order (like ID or timestamps);

iv. Is as efficient as possible.

Any insights, best practices, or Python-based examples would be appreciated.
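For context, the pattern I'm currently considering is a plain limit/offset loop. This is only a sketch: `fetch_page` is a hypothetical callable standing in for a Metabase query (e.g. a `POST /api/dataset` native query with `LIMIT {limit} OFFSET {offset}` appended), and the big caveat is that without a stable total ordering (an `ORDER BY` over a unique key, or over all columns) the database is free to reorder rows between pages, so rows can be missed or duplicated:

```python
from typing import Any, Callable, Iterator, List

def paginate(fetch_page: Callable[[int, int], List[Any]],
             page_size: int = 1000) -> Iterator[Any]:
    """Yield every row by repeatedly fetching limit/offset pages.

    `fetch_page(limit, offset)` is a hypothetical callable that stands
    in for a real Metabase call.  Correctness depends on the underlying
    query having a deterministic order; otherwise pages may overlap or
    leave gaps.
    """
    offset = 0
    while True:
        rows = fetch_page(page_size, offset)
        if not rows:
            return
        yield from rows
        # A short page means we have reached the end of the table.
        if len(rows) < page_size:
            return
        offset += page_size

# Usage with an in-memory stub instead of a live Metabase server:
data = list(range(2500))

def fake_fetch(limit: int, offset: int) -> List[int]:
    return data[offset:offset + limit]

assert list(paginate(fake_fetch)) == data
```

Against a real server the stub would be replaced by a `requests.post` to `/api/dataset`, authenticated with the `X-Metabase-Session` header obtained from `POST /api/session`.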

Honestly, I would build a GraphQL endpoint completely from scratch without even using the Metabase APIs. It doesn't make sense to build "an API to retrieve any data" on top of a platform that was built precisely to put a schema on things.