Huge latencies to load dashboards with AWS hosting

@Louis Anything in different locations will just add latency. That's how networks work.
But if everything is in the same region, then the latency is mostly just between that region and the browser, which is likely negligible.

Try checking the database query log when Metabase runs a query, and do an EXPLAIN ANALYZE to see what could be causing the slowness.
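For the PostgreSQL case, a minimal sketch (the query below is a placeholder; copy the actual slow statement from the query log):

```shell
# Placeholder query - replace with the slow statement found in the query log
SLOW_QUERY="SELECT count(*) FROM orders"
# Prefix it with EXPLAIN ANALYZE to get the executed plan with real timings
echo "EXPLAIN ANALYZE ${SLOW_QUERY}"
# Run it against the data warehouse (connection URL is an assumption):
# psql "$WAREHOUSE_URL" -c "EXPLAIN ANALYZE ${SLOW_QUERY}"
```

Sequential scans or large row-estimate mismatches in the plan usually point at missing indexes or stale statistics.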

One thing I'm noticing: you have 2097 questions and 667 dashboards. That's a fairly high number of dashboards, but nothing extraordinary.
Though those 2097 questions are then added to (some of) the dashboards roughly 12536 times (dashboard text boxes are also registered as cards, though they don't exist as separate questions). That seems like a high number.
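If you want to verify those counts yourself, here's a hedged sketch against the application database (the table names `report_card`, `report_dashboard` and `report_dashboardcard` assume a recent Metabase schema; the connection URL is a placeholder):

```shell
# Count questions, dashboards and dashboard cards in the Metabase app DB
Q="SELECT
     (SELECT count(*) FROM report_card)          AS questions,
     (SELECT count(*) FROM report_dashboard)     AS dashboards,
     (SELECT count(*) FROM report_dashboardcard) AS dashboard_cards;"
echo "$Q"
# psql "$MB_APP_DB_URL" -c "$Q"   # uncomment and point at your application database
```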

The cache is a bit on the high side, but perhaps reasonable if you have a lot of filters.

I cannot tell you why your EB goes into degraded. You'll have to ask an AWS expert about that.
I can only recommend that you try ECS.

Again, I'm asking, just try - try - Metabase Cloud. If it works fine, then you'll instantly know there's a problem somewhere in your setup. If it also fails there, then it will be easier to narrow down the problem.
Otherwise I don't think I can help you anymore for free - you're welcome to look at our Enterprise options, which come with support: https://www.metabase.com/pricing/

@flamber
Thanks for your feedback. I'll provide the logs of the queries asap. In the meantime, I've asked for a meeting with a solution architect at AWS.

Metabase Cloud is an interesting option.
Can we use our current application database to keep working on our questions/dashboards? We don't want to start from scratch, as you can imagine.
Also, I really don't want to end up with another solution that doesn't explain why the current one doesn't work. It cannot stay a black box.

I have seen huge installations with thousands of users on a couple of instances, so I know Metabase can perform well, but I have also seen installations with just a couple of users that are dragged to their knees by some obscure overload somewhere.

It gives me hope we can make Metabase run efficiently. However, the 'obscure overloads' plus the switch you made from EB to ECS don't give the impression of Metabase being super flexible regarding hosting/configuration.

@Louis
You can migrate your existing setup to Cloud:
https://www.metabase.com/migrate/guide.html

Metabase is very flexible in regards to hosting - it's just a Java application.
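For reference, a bare-bones sketch of running it outside any orchestration, assuming a Postgres application database (hostnames and credentials below are placeholders):

```shell
# Standard MB_DB_* variables point Metabase at its application database
export MB_DB_TYPE=postgres
export MB_DB_HOST=db.example.internal   # placeholder hostname
export MB_DB_PORT=5432
export MB_DB_DBNAME=metabase
export MB_DB_USER=metabase
export MB_DB_PASS=secret                # placeholder credential
# java -jar metabase.jar                # serves the UI on port 3000 by default
```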

EB is probably a fine platform, but it didn't work well for us - and is slow to propagate.
And EB is supposedly based on ECS, so that makes me wonder even more why you're having problems on EB, when ECS works fine for us.

As for obscure overloads, it's impossible to know how people are running advanced software.
And as such, a single configuration mistake, or spikes that the system wasn't designed for, can cause various slowdowns.

@flamber
I removed the environment parameters, and the situation got a little better. Performance is not top notch, but it's not as terrible as it was, which is weird.

I've scheduled a call with an AWS solution architect to find out what the issue could be.
In the meantime, we're interested in trying Metabase Cloud. Do you know if a 30-day trial would be possible?

@Louis Let's start with 14 days - if you're having problems during that period, then we'll figure something out. But if it works from day one, then I'm not sure why you would need a further trial period.
We will figure something out, but the trial is generally for people who are new to Metabase.

@flamber
Ok, I'll activate the trial so we can check out the performance on Metabase Cloud and move forward with this issue. Thanks for your help on this.

@flamber
I'm not sure what to do to complete the step: Migrate an existing Metabase install
We run Docker on EC2. I logged into my EC2 instance. I think I'm supposed to execute the generated command: where should I run it? Then there is a "your-metabase-container". What am I supposed to put there?
Thanks in advance !

@Louis Just run it locally on your own computer. It only needs access to the environment variables, so it can get the information from the application database.

@flamber
I'm a bit confused here. I don't have anything locally that could refer to the application database. This database is secured, with access limited to a jump instance (created specifically to connect to the database in case of problems). Same for Docker: nothing locally except the zip file for when I want to upgrade MB. Everything is on the instance.

@Louis Okay, I cannot see what you're seeing. I don't know what your-metabase-container is referring to.

@flamber
Let's take a step back. Am I supposed to replace your-metabase-container with something? If yes, I have nothing locally that is in any way linked to Docker and Metabase.

@Louis You have to provide some type of link or screenshot. I don't understand what you are referring to.

Migration to Metabase Cloud only requires that you have access to the application database. The process can be executed from anywhere.

@flamber

Migrate an existing Metabase install

Checklist

Run the following command in the same directory as "metabase.jar" (or set the "METABASE_VERSION" environment variable).

Make sure any "MB_DB_**" environment variables you normally use to configure the application database are set, otherwise the script will look for the default H2 database file, "metabase.db.mv.db".
Some command
You can also run the script in a running Metabase Docker container like so, in which case all the required environment variables will already be set:

curl -s https://store-api.metabase.com/migrate/some_id_I_have | docker exec -i your-metabase-container /bin/bash -

You may also download the script and run it yourself.

This is the command I'm talking about. We use Docker on our EC2 instance to host our self-hosted MB. So I guess I'm supposed to run this curl command there? Should I replace your-metabase-container with something?

If I foolishly run it locally, I get:

zsh: command not found: docker
zsh: command not found: lscurl

@Louis Yes, you need to run the command on your Docker host, that would be EC2 in your case.
And replace your-metabase-container with your container reference (either its name or ID).
Type docker ps to see your running containers.
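Something like this, where the container name "metabase" is an assumption - substitute whatever `docker ps` shows on your EC2 host:

```shell
# Placeholder: set this to the NAME (or CONTAINER ID) column from `docker ps`
CONTAINER=metabase
# Splice it into the migration command from the checklist (keep your own id)
CMD="curl -s https://store-api.metabase.com/migrate/some_id_I_have | docker exec -i ${CONTAINER} /bin/bash -"
echo "$CMD"
# eval "$CMD"   # uncomment to actually run it on the Docker host
```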

@flamber
Ok thanks for the precision. I managed to execute the command. This is the tail of the logs

2021-12-10 09:53:31,142 INFO db.setup :: Verifying h2 Database Connection ...
2021-12-10 09:53:31,371 INFO db.setup :: Successfully verified H2 1.4.197 (2018-03-18) application database connection. ✅
2021-12-10 09:53:31,438 INFO db.setup :: Running Database Migrations...
2021-12-10 09:53:31,491 INFO db.setup :: Setting up Liquibase...
2021-12-10 09:53:31,502 INFO db.setup :: Liquibase is ready.
2021-12-10 09:53:31,502 INFO db.liquibase :: Checking if Database has unrun migrations...
2021-12-10 09:53:31,995 INFO db.setup :: Database Migrations Current ...  ✅
2021-12-10 09:53:31,996 INFO db.data-migrations :: Running all necessary data migrations, this may take a minute.
2021-12-10 09:53:32,119 INFO db.data-migrations :: Finished running data migrations.
Database setup took 978.2 ms
Dump complete
Uploading /tmp/tmp.nDCdBP/migration.mv.db...
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0  791M    0  1332    0     0   4605      0 --:--:-- --:--:-- --:--:--  4608 

And then:

[screenshot]

In the documentation:
Execute the script in your self-hosted environment

The script will upload your application data to your new Metabase Cloud instance, which will overwrite any data in the cloud instance. If all goes well, the script will print Done!.

I don't see any change in my new Metabase Cloud account. Should I be able to retrieve my questions/dashboards? Is the application database supposed to be dumped to the new instance automatically? This is a bit of a black box.

Thanks

@Louis Please contact support via email. As noted in the instructions:

Note that the token generated for the migration script is only valid for one hour, and can only be used once. We limit the lifetime of these tokens to prevent accidental overwrites of your data. If you need to run the migration script again, contact us and we’ll generate another script with a new token for you.

@flamber
Ok thanks !