I notice there is a constant limiting results to 2,000 rows when using the UI filters and to 10,000 rows when querying with SQL.
Is this limitation in place for performance reasons? Have you considered making it a variable that the user can define?
Appreciate any advice on this.
We have an issue open: https://github.com/metabase/metabase/issues/905
Feel free to chime in; it's active and slated for the next version (0.13).
What is the ETA of this release?
Thanks so much for the responsiveness - this will be really valuable for us.
It's done, and merged in master.
We're baking version 0.13 on some of our production instances and should cut a public-facing jar sometime next week.
Looking through this request and the GitHub issue, the GitHub issue only applies to CSV exports, not the full scope of this request. Is there a way to remove this constraint for plots in Metabase? I am analyzing time-series data for medical claims and have more than 2,000 data points I need to plot on a scatter plot.
Not at the moment. We have some hard limits to keep our frontend client from choking. 2k data points on a single scatter plot is a bit outside of our current scope. We'll be revising them as we optimize charting going forward, but for something like that I'd personally use R.
In general, our focus is more on the presentation and sharing of analytics results than on deep exploration. Internally (and across our user base), we aim to play well with R, large MapReduce/Spark jobs, and NumPy rather than replace them in specialized situations.
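Until those chart limits are revised, one workaround (a hypothetical sketch, not a Metabase feature — the function name and data here are made up for illustration) is to downsample an exported result set below the 2,000-point chart limit before plotting it in an external tool:

```python
MAX_POINTS = 2000  # the UI scatter-plot limit discussed above

def downsample(rows, max_points=MAX_POINTS):
    """Keep at most max_points rows by taking every nth row."""
    if len(rows) <= max_points:
        return rows
    step = -(-len(rows) // max_points)  # ceiling division
    return rows[::step]

# Example: 19,000 synthetic (timestamp, value) points
points = [(i, i * 0.5) for i in range(19000)]
sampled = downsample(points)
print(len(sampled))  # at most 2,000 points, evenly spaced
```

Every-nth-row sampling preserves the overall shape of a time series; for spiky medical-claims data you might prefer binning with per-bin min/max instead, so outliers survive the reduction.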
Note: if I have a SQL query with over 10,000 results, the UI shows me the "10,000 results" message.
But if I export the CSV, I get 19,000 rows!
It may be worthwhile to add an alert message: "To see all 19,000 rows, simply export the CSV."
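To confirm that the export really contains more rows than the UI's 10,000-row cap, you can count the rows in the downloaded CSV directly (a minimal sketch; the inline CSV here stands in for an actual exported file):

```python
import csv
import io

# Stand-in for an exported CSV: header plus 19,000 data rows
csv_text = "id,value\n" + "\n".join(f"{i},{i * 2}" for i in range(19000))

reader = csv.reader(io.StringIO(csv_text))
header = next(reader)              # skip the header row
row_count = sum(1 for _ in reader)
print(row_count)  # 19000 — more than the 10,000 shown in the UI
```

With a real export you would replace `io.StringIO(csv_text)` with `open("export.csv", newline="")`.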