Extreme UI slowness to the point of unusability

Hi,

I was originally running a Metabase instance from a Docker image. While making a few simple line charts (each built from a query like "select timestamp, value from table order by 1 desc limit 10000"), I noticed that after saving the third chart to the dashboard, the UI became extremely laggy and slow. Resizing and moving charts would make the browser hang for extended periods.

I then tried running the Metabase jar file locally, which was initially a bit more performant, but I quickly ran into the same issue. I'm running Ubuntu 18.04 LTS; the machine runs at 3.4 GHz with 16 GB of RAM, and the graphics card reports as "VGA compatible controller: Intel Corporation Device 5926 (rev 06)".

By the way, I also tried running the Docker image on a machine with 64 GB of RAM and a discrete NVIDIA card (a 1060, I believe). The fact that the slowness has persisted on two different machines, tested both in Docker and natively on Ubuntu, suggests to me that the issue is coming from Metabase itself.

I know “Map” charts with tens of thousands of points can run very slowly, but I’d be extremely surprised if the same happened in Metabase for line charts of 10,000 points. Any suggestions or fixes? Is there any other information I can provide to get to the root of the issue?
I would really like to stick with Metabase (it seems like a great product), but it’s so slow to navigate that I simply cannot use it. Any help greatly appreciated.

EDIT: the extreme slowness persists in Google Chrome as well - it’s not a Firefox issue.

10,000 points on a chart sounds like a lot - certainly more than is useful. Can you summarize the data in some way? As you’re selecting a time and then a value, can you group by minute, hour, or some other interval? Ideally in the query, not in Metabase - let the database do the hard work.
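A minimal sketch of that kind of in-database bucketing, using an in-memory SQLite table (table and column names here are made up for illustration; the truncation function varies by database):

```python
import sqlite3

# Hypothetical table standing in for the poster's data: 120 rows of
# timestamped readings spread across two minutes.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE readings (ts TEXT, value REAL)")
conn.executemany(
    "INSERT INTO readings VALUES (?, ?)",
    [("2019-01-01 00:%02d:%02d" % (i // 60, i % 60), float(i)) for i in range(120)],
)

# Truncate each timestamp to the minute and average within the bucket.
# SQLite spells this with strftime; Postgres would use
# date_trunc('minute', ts), MySQL something like date_format(ts, ...).
rows = conn.execute(
    """
    SELECT strftime('%Y-%m-%d %H:%M:00', ts) AS minute,
           avg(value) AS avg_value
    FROM readings
    GROUP BY minute
    ORDER BY minute
    """
).fetchall()
print(rows)  # one row per minute bucket instead of 120 raw rows
```

The database returns one row per bucket, so the chart only ever receives as many points as there are buckets, regardless of how many raw rows sit underneath.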

As @AndrewMBaines mentioned, you should be aggregating down to some time granularity if you expect to chart it as a timeseries. We push down aggregation and binning to the underlying DB, rather than doing it client side.

If you have 10k points, it’s extremely unlikely your monitor has 10k horizontal pixels, so binning has to occur somewhere =)

Thanks for the responses. In this case we are looking at motor motion data being written at 125 Hz, so my aggregations could at best be done at the 0.1-second level - it can be important to capture the motion data on intervals of ~1 second.

My current workaround is a CTE that numbers the rows, followed by
select * from my_cte where row_number % 10 = 0
which keeps every 10th point, turning my 10k points into 1k. Metabase renders fairly well at this lower point count. However, I do lose the higher-frequency data with this sampling - any suggestions? Or is ~1,000 points per chart going to be somewhat of a hard limit?
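For reference, the every-10th-row sampling described above can be written with a ROW_NUMBER() window function; here is a sketch against an in-memory SQLite table (needs SQLite >= 3.25; table and column names are placeholders for the actual schema):

```python
import sqlite3

# Placeholder table: 10,000 timestamped motion readings.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE motion (ts INTEGER, value REAL)")
conn.executemany(
    "INSERT INTO motion VALUES (?, ?)",
    [(i, i * 0.5) for i in range(10000)],
)

# Number the rows in time order inside a CTE, then keep every 10th one.
sampled = conn.execute(
    """
    WITH numbered AS (
        SELECT ts, value,
               ROW_NUMber() OVER (ORDER BY ts) AS rn
        FROM motion
    )
    SELECT ts, value FROM numbered
    WHERE rn % 10 = 0
    ORDER BY ts
    """
).fetchall()
print(len(sampled))  # 10,000 rows sampled down to 1,000
```

Note this keeps one raw point per window rather than summarizing the window, which is why the higher-frequency detail between kept points is lost.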