Metabase dump with Docker fails with liquibase.exception.MigrationFailedException

I'm running a container with the postgres image and a container with the metabase image:

docker run -it -d -e "MB_DB_FILE=/path/metabase.db.mv.db" --name postgres \
-p port:5432 \
-e POSTGRES_USER=userbd \
-e POSTGRES_PASSWORD=passwordbd \
--network metanet \
postgres:14

docker run -it -d --name metabase-origin \
-p port:3000 \
-e MB_DB_TYPE=postgres \
-e MB_DB_DBNAME=metabase \
-e MB_DB_PORT=5432 \
-e MB_DB_USER=userbd \
-e MB_DB_PASS=passwordbd \
-e MB_DB_HOST=postgres \
--network metanet \
metabase/metabase-enterprise:v1.44.4

I restore a backup of my production database into the database in the container: I run TRUNCATE TABLE on the tables so that the pg_restore of the backup works, and the database in the container then mirrors my production database perfectly.

However, when I run the dump command as described in the serialization manual, it gives me an error.

docker run --rm --name metabase-dump \
-e MB_DB_CONNECTION_URI="postgres://postgres:5432/metabase?user=userbd&password=passwordbd" \
-e "MB_DB_FILE=/mnt/sharedfolder/metabase.db.mv.db" \
-v "/mnt/sharedfolder/metabase_data:/target" \
--network metanet \
metabase/metabase-enterprise:v1.44.4 "dump /target"

I tried without -e "MB_DB_FILE=/mnt/sharedfolder/metabase.db.mv.db", but I got the same error:

Usage of Metabase Enterprise Edition features are subject to the Metabase Commercial License. See https://www.metabase.com/license/commercial/ for details.
2022-10-06 14:35:02,401 INFO db.setup :: Verifying postgres Database Connection ...
2022-10-06 14:35:03,027 INFO db.setup :: Successfully verified PostgreSQL 14.5 (Debian 14.5-1.pgdg110+1) application database connection. ✅
2022-10-06 14:35:03,033 INFO db.setup :: Running Database Migrations...
2022-10-06 14:35:03,037 INFO db.setup :: Setting up Liquibase...
2022-10-06 14:35:03,466 INFO db.setup :: Liquibase is ready.
2022-10-06 14:35:03,467 INFO db.liquibase :: Checking if Database has unrun migrations...
2022-10-06 14:35:06,550 INFO db.liquibase :: Database has unrun migrations. Waiting for migration lock to be cleared...
2022-10-06 14:35:06,853 INFO db.liquibase :: Migration lock is cleared. Running migrations...
liquibase.exception.LiquibaseException: liquibase.exception.MigrationFailedException: Migration failed for change set migrations/000_migrations.yaml::v44.00-000::dpsutton:
Reason: liquibase.exception.DatabaseException: ERROR: relation "persisted_info" already exists [Failed SQL: (0) CREATE TABLE "public"."persisted_info" ("id" INTEGER GENERATED BY DEFAULT AS IDENTITY NOT NULL, "database_id" INTEGER NOT NULL, "card_id" INTEGER NOT NULL, "question_slug" TEXT NOT NULL, "table_name" TEXT NOT NULL, "definition" TEXT, "query_hash" TEXT, "active" BOOLEAN DEFAULT FALSE NOT NULL, "state" TEXT NOT NULL, "refresh_begin" TIMESTAMP with time zone NOT NULL, "refresh_end" TIMESTAMP with time zone, "state_change_at" TIMESTAMP with time zone, "error" TEXT, "created_at" TIMESTAMP with time zone DEFAULT NOW() NOT NULL, "creator_id" INTEGER NOT NULL, CONSTRAINT "persisted_info_pkey" PRIMARY KEY ("id"), CONSTRAINT "fk_persisted_info_card_id" FOREIGN KEY ("card_id") REFERENCES "public"."report_card"("id") ON DELETE CASCADE, CONSTRAINT "fk_persisted_info_ref_creator_id" FOREIGN KEY ("creator_id") REFERENCES "public"."core_user"("id"), CONSTRAINT "fk_persisted_info_database_id" FOREIGN KEY ("database_id") REFERENCES "public"."metabase_database"("id") ON DELETE CASCADE, UNIQUE ("card_id"))]
at liquibase.changelog.ChangeLogIterator.run(ChangeLogIterator.java:126)
at liquibase.Liquibase.lambda$null$0(Liquibase.java:265)
at liquibase.Scope.lambda$child$0(Scope.java:180)
at liquibase.Scope.child(Scope.java:189)
at liquibase.Scope.child(Scope.java:179)
at liquibase.Scope.child(Scope.java:158)
at liquibase.Scope.child(Scope.java:243)
at liquibase.Liquibase.lambda$update$1(Liquibase.java:264)
at liquibase.Scope.lambda$child$0(Scope.java:180)
at liquibase.Scope.child(Scope.java:189)
at liquibase.Scope.child(Scope.java:179)
at liquibase.Scope.child(Scope.java:158)
at liquibase.Liquibase.runInScope(Liquibase.java:2405)
at liquibase.Liquibase.update(Liquibase.java:211)
at liquibase.Liquibase.update(Liquibase.java:197)
at liquibase.Liquibase.update(Liquibase.java:193)
at metabase.db.liquibase$migrate_up_if_needed_BANG_.invokeStatic(liquibase.clj:142)
at metabase.db.liquibase$migrate_up_if_needed_BANG_.invoke(liquibase.clj:130)
at metabase.db.setup$fn__35433$migrate_BANG___35438$fn__35439$fn__35440.invoke(setup.clj:66)
at metabase.db.liquibase$fn__30951$do_with_liquibase__30956$fn__30957.invoke(liquibase.clj:59)
at metabase.db.liquibase$fn__30951$do_with_liquibase__30956.invoke(liquibase.clj:51)
at metabase.db.setup$fn__35433$migrate_BANG___35438$fn__35439.invoke(setup.clj:61)
at metabase.db.setup$fn__35433$migrate_BANG___35438.invoke(setup.clj:40)
at metabase.db.setup$fn__35492$run_schema_migrations_BANG___35497$fn__35498.invoke(setup.clj:119)
at metabase.db.setup$fn__35492$run_schema_migrations_BANG___35497.invoke(setup.clj:113)
at metabase.db.setup$fn__35544$setup_db_BANG___35549$fn__35550$fn__35553$fn__35554.invoke(setup.clj:145)
at metabase.util$do_with_us_locale.invokeStatic(util.clj:716)
at metabase.util$do_with_us_locale.invoke(util.clj:702)
at metabase.db.setup$fn__35544$setup_db_BANG___35549$fn__35550$fn__35553.invoke(setup.clj:143)
at metabase.db.setup$fn__35544$setup_db_BANG___35549$fn__35550.invoke(setup.clj:142)
at metabase.db.setup$fn__35544$setup_db_BANG___35549.invoke(setup.clj:136)
at metabase.db$setup_db_BANG_$fn__35579.invoke(db.clj:65)
at metabase.db$setup_db_BANG_.invokeStatic(db.clj:60)
at metabase.db$setup_db_BANG_.invoke(db.clj:51)
at metabase_enterprise.serialization.cmd$dump.invokeStatic(cmd.clj:186)
at metabase_enterprise.serialization.cmd$dump.invoke(cmd.clj:181)
at clojure.lang.Var.invoke(Var.java:388)
at metabase.cmd$dump.invokeStatic(cmd.clj:161)
at metabase.cmd$dump.doInvoke(cmd.clj:155)
at clojure.lang.RestFn.invoke(RestFn.java:423)
at metabase.cmd$dump.invokeStatic(cmd.clj:158)
at metabase.cmd$dump.invoke(cmd.clj:155)
at clojure.lang.AFn.applyToHelper(AFn.java:154)
at clojure.lang.RestFn.applyTo(RestFn.java:132)
at clojure.core$apply.invokeStatic(core.clj:667)
at clojure.core$apply.invoke(core.clj:662)
at metabase.cmd$run_cmd$fn__87573.invoke(cmd.clj:227)
at metabase.cmd$run_cmd.invokeStatic(cmd.clj:227)
at metabase.cmd$run_cmd.invoke(cmd.clj:218)
at clojure.lang.Var.invoke(Var.java:388)
at metabase.core$run_cmd.invokeStatic(core.clj:167)
at metabase.core$run_cmd.invoke(core.clj:165)
at metabase.core$_main.invokeStatic(core.clj:189)
at metabase.core$_main.doInvoke(core.clj:184)
at clojure.lang.RestFn.applyTo(RestFn.java:137)
at metabase.core.main(Unknown Source)

If anyone has any suggestions on what I can do, thanks in advance.

Hi @ExtFuture
Please use the support email when using the Pro/Enterprise plans.

I don't understand what you mean by most of this:

I restore a backup of my production database into the database in the container: I run TRUNCATE TABLE on the tables so that the pg_restore of the backup works, and the database in the container then mirrors my production database perfectly.

Which version of Metabase are you running (your regular instance, not the one used for Serialization)?
Which tables have you truncated?
It sounds like you have a bad backup/restore, which is then causing the problems you're seeing.

And stop including MB_DB_FILE if you are using Postgres as the application database. Migrating away from H2 is a one-off process, and after that you never need that variable again.
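
If you ever need to redo that one-off migration, it is done with the load-from-h2 command, not by keeping MB_DB_FILE set. A minimal sketch, assuming a JAR deployment and the placeholder credentials from the commands above (note the H2 path is passed without the .mv.db extension):

# One-off migration of the H2 application database into Postgres.
export MB_DB_TYPE=postgres
export MB_DB_DBNAME=metabase
export MB_DB_PORT=5432
export MB_DB_USER=userbd      # placeholder credentials from above
export MB_DB_PASS=passwordbd
export MB_DB_HOST=postgres
# The H2 file path is given WITHOUT the .mv.db extension:
java -jar metabase.jar load-from-h2 /path/metabase.db

Once it finishes, start Metabase normally with only the MB_DB_* Postgres variables.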

Hello flamber, thanks for replying.

TRUNCATE TABLE databasechangelog,databasechangeloglock,metabase_database,core_user,metabase_field,metabase_table,metabase_fieldvalues,report_cardfavorite,report_card,core_session,setting,revision,activity,segment,data_migrations,pulse_card,pulse_channel,pulse_channel_recipient,pulse,view_log,dependency,dashboardcard_series,card_label,label,permissions_group_membership,metric_important_field,permissions_group,permissions,metric,report_dashboard,report_dashboardcard,query,permissions_revision,query_cache,collection_permission_graph_revision,query_execution,dashboard_favorite,computation_job_result,computation_job,group_table_access_policy,collection,dimension,qrtz_job_details,qrtz_triggers,qrtz_simple_triggers,qrtz_cron_triggers,qrtz_simprop_triggers,qrtz_blob_triggers,qrtz_calendars,qrtz_paused_trigger_grps,qrtz_fired_triggers,qrtz_scheduler_state,qrtz_locks,task_history,native_query_snippet,login_history,moderation_review,secret,timeline_event,dashboard_bookmark,card_bookmark,collection_bookmark,bookmark_ordering,application_permissions_revision,persisted_info,timeline;

I do this before restoring the backup of my production database.

Is there any way for me to truncate the tables that the dump will generate before loading the data when the container runs?

I used MB_DB_FILE to make the container recognize my metabase.db.mv.db file. When I'm managing the containers I don't use it; if there's a better way to do this, I'm open to suggestions.

I'm migrating from Postgres to Postgres.

@ExtFuture Don't truncate anything. Delete the database. Then restore your backup, which should automatically create the tables needed.
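
For example, roughly like this, reusing the container name and placeholder credentials from the commands above (stop Metabase first so nothing is still connected to the database):

# Stop Metabase, then drop and recreate the empty application database.
docker stop metabase-origin
docker exec -it postgres psql -U userbd -d postgres -c "DROP DATABASE IF EXISTS metabase;"
docker exec -it postgres psql -U userbd -d postgres -c "CREATE DATABASE metabase;"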

If you are migrating from one Postgres server (Postgres1) to another (Postgres2), then simply do a pg_dump on Postgres1 and pg_restore it on Postgres2. Then you start Metabase pointing at Postgres2.
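
As a rough sketch, assuming the two servers run in containers named postgres1 and postgres2 (hypothetical names) with the same user as above:

# Dump the application database from the source in custom format...
docker exec postgres1 pg_dump -U userbd -Fc metabase > /tmp/metabase.dump
# ...and restore it into the freshly created, empty database on the target.
docker exec -i postgres2 pg_restore -U userbd -d metabase < /tmp/metabase.dump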

I guess I'm missing some context on what exactly you are trying to do, but Serialization isn't involved in that.

I have only one server, running the JAR; the goal is to move to Docker so that I can manage the applications in containers in a more dynamic and simple way.

I have other hosts and I want them all to communicate. I managed that through NFSv4 and SSHFS; I have shared directories that are working, and I set the target to /mnt/sharedfolder.

My goal with Metabase is to be able to dump the production instance to a staging instance and, as things are developed in staging, deploy them back to production, until we implement versioning with Git.

Finally, with Docker Swarm, the goal is also to set up clustering for our applications.

When I send the metabase.db.mv.db to the container and drop the tables before doing a pg_restore, it shows an error saying the tables to restore don't exist; when I instead truncate the tables to clear the data, the restore is accepted. What could I do about that?

I hope I managed to explain my context.

@ExtFuture Let's start completely over.

Post "Diagnostic Info" from your current running Metabase.

I redid the process this way, testing what you suggested:

docker run -it -d --name postgres \
-p port:5432 \
-e POSTGRES_USER=userdb \
-e POSTGRES_PASSWORD=passdb \
--network metanet \
postgres:14

docker run -it -d --name metabase-origin \
-p port:3000 \
-e MB_DB_TYPE=postgres \
-e MB_DB_DBNAME=metabase \
-e MB_DB_PORT=5432 \
-e MB_DB_USER=userdb \
-e MB_DB_PASS=passdb \
-e MB_DB_HOST=postgres \
--network metanet \
metabase/metabase-enterprise:v1.44.4

I dumped it this way:

docker run --rm --name metabase-dump \
--network metanet \
-e MB_DB_CONNECTION_URI="postgres://postgres:5432/metabase?user=metabase&password=knockknock" \
-v "/mnt/sharedfolder/metabase_data:/target" \
metabase/metabase-enterprise:v1.44.4 "dump /target"

and it returned this error:

2022-10-06 15:45:10,733 ERROR serialization.dump :: Error dumping Card "Análise Cromatográfica por Equipamento" (ID 12)
java.io.FileNotFoundException: /target/collections/root/collections/Health Index/collections/Dash Board dos Equipamentos/cards/An%C3%A1lise Cromatogr%C3%A1fica por Equipamento/An%C3%A1lise Cromatogr%C3%A1fica por Equipamento.yaml (No such file or directory)

All of my original tables are returning that the directory is not found.

Look below at my metabase_data before the dump:

[screenshot: contents of /mnt/sharedfolder/metabase_data before the dump]

{
  "browser-info": {
    "language": "pt-BR",
    "platform": "Win32",
    "userAgent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/105.0.0.0 Safari/537.36",
    "vendor": "Google Inc."
  },
  "system-info": {
    "file.encoding": "UTF-8",
    "java.runtime.name": "OpenJDK Runtime Environment",
    "java.runtime.version": "11.0.16.1+1",
    "java.vendor": "Eclipse Adoptium",
    "java.vendor.url": "https://adoptium.net/",
    "java.version": "11.0.16.1",
    "java.vm.name": "OpenJDK 64-Bit Server VM",
    "java.vm.version": "11.0.16.1+1",
    "os.name": "Linux",
    "os.version": "3.10.0-1160.76.1.el7.x86_64",
    "user.language": "en",
    "user.timezone": "GMT"
  },
  "metabase-info": {
    "databases": [
      "postgres",
      "mysql",
      "h2"
    ],
    "hosting-env": "unknown",
    "application-database": "postgres",
    "application-database-details": {
      "database": {
        "name": "PostgreSQL",
        "version": "14.5 (Debian 14.5-1.pgdg110+1)"
      },
      "jdbc-driver": {
        "name": "PostgreSQL JDBC Driver",
        "version": "42.5.0"
      }
    },
    "run-mode": "prod",
    "version": {
      "date": "2022-09-29",
      "tag": "v1.44.4",
      "branch": "release-x.44.x",
      "hash": "382d728"
    },
    "settings": {
      "report-timezone": null
    }
  }
}

@ExtFuture Are you trying to move from one Postgres database to another Postgres?
If yes, then don't use Serialization.
Simply make a backup using pg_dump and restore it using pg_restore.

I don't understand why you are mixing Serialization and H2 into all of this. I think you are doing too many things at the same time, which then causes problems and makes it difficult to understand and troubleshoot.

But since you are using the Pro/Enterprise plan, then use the support email. Go to Admin > Troubleshooting > click "Get Help".