At work, we have a solution that stores its engine's necessary data in a Postgres database. The solution reads flows from a JSON description and transforms them into engine fuel for some applications. Each solution depends on multiple (possibly branching) workflows, each workflow has multiple processes per user session, and each process has multiple current states.
So far so good! As you can imagine, the process_state table can end up classified as either "Medium Data" or "Big Data". The question is: what does Metabase do to lighten the load of heavy-data tables?
Therefore, I sincerely recommend that Metabase publish some DBA-style consulting documentation to instruct Metabase users. Table indexing would be the most valuable topic to cover.
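To illustrate the kind of indexing guidance I mean, here is a minimal sketch. The table layout and all column names below are assumptions for the example, not the actual schema from our solution:

```sql
-- Hypothetical process_state table; names are illustrative only.
CREATE TABLE process_state (
    id         BIGSERIAL   PRIMARY KEY,
    process_id BIGINT      NOT NULL,
    session_id BIGINT      NOT NULL,
    state      TEXT        NOT NULL,
    payload    JSONB,
    created_at TIMESTAMPTZ NOT NULL DEFAULT now()
);

-- A composite index matching a common filter pattern
-- (the states of one process within one user session, newest first):
CREATE INDEX idx_process_state_process_session
    ON process_state (process_id, session_id, created_at DESC);
```

With an index like this in place, you can run `EXPLAIN ANALYZE` on the queries Metabase generates to check whether the planner actually uses it, which is exactly the kind of workflow that documentation could walk users through.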