It's not freezing - it's running a migration, which can take a long time. On one of my systems with 30k fields, that migration took 4 minutes to complete. Since I have no idea of the size of your H2 file or the number of fields, it's difficult to say how long it will take.
If you kill the process while it's working, the H2 database will likely be corrupted and you'll need to restore from backup.
The slow upgrade is noted in the release notes (I put it there): https://github.com/metabase/metabase/releases/tag/v0.39.0.1
Since you mention limited RAM, it would be helpful to know how little you're running with, and how much Metabase can use, which is noted in the first line during startup.
If the instance only runs Metabase, then you're wasting RAM, since Metabase will occupy at most ~8GB.
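If you want to cap how much of the host's RAM Metabase can take, a minimal sketch is to set the JVM's maximum heap at launch. The `-Xmx` flag is standard JVM; the value and jar path here are assumptions to adjust for your setup:

```shell
# Assumed setup: metabase.jar in the current directory.
# -Xmx2g caps the JVM heap at 2 GB; pick a value that fits your host.
java -Xmx2g -jar metabase.jar
```

The first line of the startup log ("Maximum memory available to JVM") will confirm what the JVM actually got.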
Since I have no idea how big your H2 file is, or what your CPU and disk I/O performance is like, then yes, perhaps.
Or perhaps it's generating a lot of errors, which are written to metabase.db.trace.db - and if that file is more than a few MB, that usually indicates problems with your H2 (possible corruption). A large trace file can slow down everything. Shut down Metabase, delete the trace file, and start again.
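The trace-file cleanup above can be sketched like this. The file name and location are assumptions (it normally sits next to metabase.db.mv.db), and Metabase must be fully stopped first:

```shell
# Run ONLY while Metabase is stopped - deleting files under a live H2 risks corruption.
TRACE="metabase.db.trace.db"   # assumed: same directory as metabase.db.mv.db
if [ -f "$TRACE" ]; then
  # du -m reports the size in MB; more than a few MB suggests H2 problems
  echo "trace file is $(du -m "$TRACE" | cut -f1) MB, deleting"
  rm -- "$TRACE"
else
  echo "no trace file found"
fi
# ...then start Metabase again.
```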
I'm not going to have a discussion about whether H2 is viable to use in production - it's not, full stop. Search the forum to find people who were hit by corruption. H2 is also slower than external application databases once it grows larger than 10MB (the database file, not the trace file).
2021-05-26 09:22:44,296 INFO metabase.util :: Maximum memory available to JVM: 7.7 GB
2021-05-26 09:23:00,996 INFO db.liquibase :: Migration lock is cleared. Running migrations...
2021-05-26 10:18:15,517 INFO db.setup :: Database Migrations Current ... ✅
2021-05-26 10:18:15,535 INFO db.data-migrations :: Running all necessary data migrations, this may take a minute.
2021-05-26 10:18:15,542 INFO db.data-migrations :: Running data migration 'migrate-click-through'...
2021-05-26 10:18:15,581 INFO db.data-migrations :: Finished running data migrations.
Database setup took 55.3 mins
metabase.db.mv.db - 51MB
8 CPU cores on a server.
no trace DB at all
And at the same time, an upgrade from 0.37.2 to 0.38.x takes a couple of minutes.
But any upgrade to 0.39 takes more than an hour.
@jazz78 So you're running multiple instances on the same host.
The migration still seems slow, though. On an old computer of mine, a 20MB H2 file takes a handful of minutes - and that's with 20k fields. If you have a lot more fields, it's going to be even slower; scale that by about 2.5 (mine is 20MB, yours is ~50MB) and your timing sounds roughly reasonable.
That migration process takes a lot less time on Postgres, since it has a better way of dropping columns from tables. MariaDB/MySQL should be faster too.
It's important to understand the difference between the migrations being done - normally it's just extra columns being added or other small changes, which also don't impact a downgrade (which is officially unsupported), but 0.39.0 did a big migration that changed the table schema of one of the big tables.
Doing an upgrade from, let's say, 0.32 to 0.39 will run every migration between those versions, so migrations will always be faster if you only upgrade to the next version.
We're trying to avoid making upgrades slow, but it's sometimes necessary to refactor the schema design.
@flamber thanks very much, very helpful. I don't know how many fields we have - I'm not sure how to tell - but if I had to guess, I would say fewer than 100 (unless I'm thinking about fields the wrong way).
@jazz78 It's located in the application database table metabase_field.
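As a sketch, you can count the rows in that table with the H2 shell tool that ships inside the Metabase jar. The paths here are assumptions - point `-url` at your metabase.db file without the .mv.db extension, and only connect while Metabase is stopped:

```shell
# Assumed paths; org.h2.tools.Shell is bundled inside metabase.jar.
# Connect only while Metabase is stopped, to avoid locking/corrupting the H2 file.
java -cp metabase.jar org.h2.tools.Shell \
  -url "jdbc:h2:/path/to/metabase.db" \
  -sql "SELECT COUNT(*) FROM metabase_field;"
```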
"fields" are each column from each table from each database that Metabase has synced at some point. If you have 100 fields, then you have a really tiny data source, which I don't think. Then the H2 size seems way too big - unless you have created 100k questions (report_card) or have huge amount of user activity, which causes query_execution to explode. Or perhaps you have enabled way too much cache (Admin > Settings > Caching), which H2 definitely isn't the fastest to handle either.
@flamber thanks very much. I don't know where or how to find that table, but yes, I think my guess of 100 is probably wrong:
caching is disabled
less than 100k questions (probably like 1000 very max)
I think user activity is low: one instance has only 4 users, the biggest other one has about 100 light users, and the rest have few users and low usage.
Given your most recent 'gentle' encouragement, we are embarking on a transition to MySQL. I will let you know if we continue to have upgrade issues after that - I'm guessing not, given your feedback so far. It's a bigger exercise overall, so we probably won't get a clean comparison, but with the other benefits it seems like now is a good time to try. Maybe future upgrades will go faster, and in the meantime we will enjoy the other benefits and/or the absence of corruption.