Metabase app trying to rerun migrations again

I am using docker-compose to run both the Metabase app and a Postgres database. I'm using volumes to make sure the data is persistent. Everything works fine until I run docker-compose down and up again: Metabase then connects to Postgres successfully, but it seems to want to rerun the migrations as if it were a fresh installation. I checked the Postgres metabase database and thankfully all the tables exist, along with their contents.

I am facing this issue when I run docker-compose up metabase-app:

metabase-app_1          | 2022-01-18 14:09:50,291 INFO db.setup :: Verifying postgres Database Connection ...
metabase-app_1          | 2022-01-18 14:09:50,459 INFO db.setup :: Successfully verified PostgreSQL 14.1 (Debian 14.1-1.pgdg110+1) application database connection. ✅
metabase-app_1          | 2022-01-18 14:09:50,460 INFO db.setup :: Running Database Migrations...
metabase-app_1          | 2022-01-18 14:09:50,481 INFO db.setup :: Setting up Liquibase...
metabase-app_1          | 2022-01-18 14:09:50,545 INFO db.setup :: Liquibase is ready.
metabase-app_1          | 2022-01-18 14:09:50,545 INFO db.liquibase :: Checking if Database has unrun migrations...
metabase-app_1          | 2022-01-18 14:09:52,326 INFO db.liquibase :: Database has unrun migrations. Waiting for migration lock to be cleared...
metabase-app_1          | 2022-01-18 14:09:52,395 INFO db.liquibase :: Migration lock is cleared. Running migrations...
metabase-app_1          | 2022-01-18 14:09:52,485 ERROR changelog.ChangeSet :: Change Set migrations/000_migrations.yaml::1::agilliland failed.  Error: ERROR: relation "core_user" already exists [Failed SQL: CREATE TABLE public.core_user (id SERIAL NOT NULL, email VARCHAR(254) NOT NULL, first_name VARCHAR(254) NOT NULL, last_name VARCHAR(254) NOT NULL, password VARCHAR(254) NOT NULL, password_salt VARCHAR(254) DEFAULT 'default' NOT NULL, date_joined TIMESTAMP WITH TIME ZONE NOT NULL, last_login TIMESTAMP WITH TIME ZONE, is_staff BOOLEAN NOT NULL, is_superuser BOOLEAN NOT NULL, is_active BOOLEAN NOT NULL, reset_token VARCHAR(254), reset_triggered BIGINT, CONSTRAINT CORE_USER_PKEY PRIMARY KEY (id), UNIQUE (email))]
metabase-app_1          | 2022-01-18 14:09:52,491 WARN metabase.util :: auto-retry metabase.db.liquibase$migrate_up_if_needed_BANG_$fn__34331@3de30fd8: Migration failed for change set migrations/000_migrations.yaml::1::agilliland:
metabase-app_1          |      Reason: liquibase.exception.DatabaseException: ERROR: relation "core_user" already exists [Failed SQL: CREATE TABLE public.core_user (id SERIAL NOT NULL, email VARCHAR(254) NOT NULL, first_name VARCHAR(254) NOT NULL, last_name VARCHAR(254) NOT NULL, password VARCHAR(254) NOT NULL, password_salt VARCHAR(254) DEFAULT 'default' NOT NULL, date_joined TIMESTAMP WITH TIME ZONE NOT NULL, last_login TIMESTAMP WITH TIME ZONE, is_staff BOOLEAN NOT NULL, is_superuser BOOLEAN NOT NULL, is_active BOOLEAN NOT NULL, reset_token VARCHAR(254), reset_triggered BIGINT, CONSTRAINT CORE_USER_PKEY PRIMARY KEY (id), UNIQUE (email))]
metabase-app_1          | 2022-01-18 14:09:52,583 ERROR changelog.ChangeSet :: Change Set migrations/000_migrations.yaml::1::agilliland failed.  Error: ERROR: relation "core_user" already exists [Failed SQL: CREATE TABLE public.core_user (id SERIAL NOT NULL, email VARCHAR(254) NOT NULL, first_name VARCHAR(254) NOT NULL, last_name VARCHAR(254) NOT NULL, password VARCHAR(254) NOT NULL, password_salt VARCHAR(254) DEFAULT 'default' NOT NULL, date_joined TIMESTAMP WITH TIME ZONE NOT NULL, last_login TIMESTAMP WITH TIME ZONE, is_staff BOOLEAN NOT NULL, is_superuser BOOLEAN NOT NULL, is_active BOOLEAN NOT NULL, reset_token VARCHAR(254), reset_triggered BIGINT, CONSTRAINT CORE_USER_PKEY PRIMARY KEY (id), UNIQUE (email))]

Here is my docker-compose file:

services:
  metabase-postgres-db:
    image: postgres
    restart: always
    ports:
      - 5432:5432
    environment:
      POSTGRES_PASSWORD: postgres
      POSTGRES_USER: metabase
      POSTGRES_DB: metabase
      PGDATA: /var/lib/postgresql/data
    volumes:
      - pg-data:/var/lib/postgresql/data
  metabase_app:
    image: metabase/metabase
    restart: always
    ports:
      - 3000:3000
    volumes:
      - metabase-data:/metabase-data
    environment:
      - MB_DB_TYPE=postgres
      - MB_DB_DBNAME=metabase
      - MB_DB_PORT=5432
      - MB_DB_USER=metabase
      - MB_DB_PASS=postgres
      - MB_DB_HOST=metabase-postgres-db
    depends_on:
      - metabase-postgres-db
    links:
      - metabase-postgres-db
volumes:
  metabase-data:
  pg-data:
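One thing worth verifying with this setup is that the named volume actually survives a docker-compose down. A quick check from the host (a sketch; the volume name is usually prefixed with the project/directory name):

```shell
# Without -v, docker-compose down keeps named volumes.
docker-compose down

# The pg-data volume should still be listed (typically project-prefixed):
docker volume ls | grep pg-data
```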

Please advise.

Hi @alaahammouda
That should only be able to happen if the table databasechangelog is either lost, truncated or corrupted in some other way.
Your docker-compose formatting doesn't look right. Yaml can be unforgiving if the syntax is wrong.

The docker-compose file is correct; it's just an indentation issue from formatting it here. The databasechangelog table is not inside the Postgres metabase database. I have two deployments, one with Kubernetes and the other with Compose. I checked both, and neither of them has this databasechangelog table.

In the Kubernetes deployment, I can shut Metabase down and bring it back up, and it always detects the database and skips initialization. But in Compose I am facing this issue now, and I need the current database: it has multiple dashboards that I cannot rebuild from scratch!

Any suggestions on how to create this table, restore it, or even ignore it?
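For reference, one way to double-check from the host whether the table exists (a sketch; service, user, and database names are taken from the compose file above and may differ in your setup):

```shell
# List the table via psql's \dt meta-command:
docker-compose exec metabase-postgres-db \
  psql -U metabase -d metabase -c '\dt databasechangelog'

# Or count its rows; a missing or empty table explains the rerun attempt:
docker-compose exec metabase-postgres-db \
  psql -U metabase -d metabase -c 'SELECT count(*) FROM databasechangelog;'
```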

@alaahammouda I can guarantee you that table exists on your Kubernetes; otherwise Metabase wouldn't work. If it doesn't exist, then there's something completely wrong with your setup.
The table is generated by Metabase during startup, so Metabase can perform automatic upgrades.

If you enabled query logging on your Postgres, you'll see that Metabase checks this table during startup.
Perhaps your Postgres is not initialized correctly before Metabase connects to it?
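To enable that query logging, a minimal sketch (assuming the metabase user is the superuser created by the official postgres image, as in the compose file above; ALTER SYSTEM must run outside a transaction, hence the separate calls):

```shell
# Turn on statement logging:
docker-compose exec metabase-postgres-db \
  psql -U metabase -d metabase -c "ALTER SYSTEM SET log_statement = 'all';"

# Reload the configuration without restarting Postgres:
docker-compose exec metabase-postgres-db \
  psql -U metabase -d metabase -c "SELECT pg_reload_conf();"

# Watch the statements Metabase runs at startup:
docker-compose logs -f metabase-postgres-db
```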

Perhaps your Postgres is not initialized correctly before Metabase connects to it?

It is up and running. I can exec into the container, log in to the DB, and show tables. I just find the databasechangelog in the Compose deployment, but nothing in k8s!

Anyway, do you have an example of what the databasechangelog table should contain? Can I insert an entry manually to force the Metabase app to skip migrations? And any ideas why this table would be empty or dropped when we stop the container and start it again?
Can we do any workaround so we don't lose the existing data in the DB?
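Before attempting any workaround, it may be safest to take a full backup of the application database so the dashboards can't be lost (a sketch; names assumed from the compose file above; the -T flag stops docker-compose from allocating a TTY that would corrupt the binary dump):

```shell
# Dump the whole metabase application database in custom format:
docker-compose exec -T metabase-postgres-db \
  pg_dump -U metabase -Fc -d metabase > metabase-backup.dump
```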

@alaahammouda I have no idea how you ended up without this table. You would have to check your database logs.

I would recommend that you set up the exact same version of Metabase somewhere else, so you can see what the table contains, since it's specific to the version but not unique to your instance.

But let me just say: that table is the only thing Metabase has to rely on so that it doesn't end up with a corrupted application database.

There are many, many thousands of Metabase deployments, which would be impossible without this table, so I can guarantee you the table exists on your k8s.

But post "Diagnostic Info" from Admin > Troubleshooting.

It's working perfectly on k8s, and I took a backup from it and restored it in the Compose environment. But how the table ended up empty in the Compose env, I don't know. Anyway, I will try to restore this table from another fresh deployment and see. Hope it works. Will update you.

Thanks

Your suggestion was perfect. I ran the deployment separately and exported the databasechangelog table using

pg_dump -c -U metabase -Fc -d metabase -t databasechangelog > file.dump

then restored it in the other deployment using

pg_restore -U metabase --data-only -d metabase -t databasechangelog file.dump

Then it loaded successfully.
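Putting the steps above together, the end-to-end fix looked roughly like this (a sketch; file names and connection details are assumptions):

```shell
# 1. On the healthy deployment, dump only the databasechangelog table:
pg_dump -c -U metabase -Fc -d metabase -t databasechangelog > file.dump

# 2. Copy file.dump to the broken deployment, then restore just the rows:
pg_restore -U metabase --data-only -d metabase -t databasechangelog file.dump

# 3. Restart Metabase; it should now treat the migrations as already applied.
docker-compose restart metabase_app
```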

Thank you so much
Appreciated.

@alaahammouda Thank you for the update and for posting what you did to solve it. I would highly recommend that you check your database logs to figure out why the table was deleted or missing.


It wasn't there from the beginning. I took a copy of the Postgres database deployed on the k8s cluster and restored it in the Compose env. So it seems that whenever Metabase connects the second time, it creates the table inside the metabase database, finds it empty, and then the issue happens. But why it works perfectly in the k8s deployment, I honestly don't know.