Metabase can't connect to PostgreSQL in Google Kubernetes cluster

Anyone having the same issue?

Steps I've taken (rather than using the Helm chart):

-> Deployed the Metabase Docker container as a workload (metabase/metabase:latest)
-> Deployed the latest PostgreSQL Docker container as a workload

Both deployed into the same namespace on the GKE cluster ('metabase').
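For reference, the two workloads above can be created imperatively with kubectl. This is only a sketch: the deployment names and the 'metabase' namespace match the post, but everything else is an assumption.

```shell
# Sketch: create the namespace and both workloads (image tags as in the post)
kubectl create namespace metabase

kubectl create deployment metabase \
  --image=metabase/metabase:latest \
  --namespace=metabase

kubectl create deployment postgres \
  --image=postgres:latest \
  --namespace=metabase
```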

Configuration variables for the Metabase container (redacted for anonymity, obviously):

MB_DB_DBNAME = metabaseappdb
MB_DB_HOST = 34.118.227.120
MB_DB_PASSWORD = danielssecretpassword
MB_DB_TYPE = postgres
MB_DB_USER = metabase
MB_DB_PORT = 5432

And for the PostgreSQL container:

POSTGRES_DB = metabaseappdb
POSTGRES_PASSWORD = danielssecretpassword
POSTGRES_USER = metabase
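As a sketch, here is how the Metabase variables would sit in the Deployment manifest (values copied from the post). One thing worth double-checking: the port variable must be spelled exactly MB_DB_PORT, since a misspelled environment variable is silently ignored and Metabase falls back to defaults.

```yaml
# Sketch: env section of the Metabase container spec (values from the post)
env:
  - name: MB_DB_TYPE
    value: postgres
  - name: MB_DB_DBNAME
    value: metabaseappdb
  - name: MB_DB_HOST
    value: "34.118.227.120"
  - name: MB_DB_PORT        # must be MB_, not MD_ -- misspelled vars are ignored
    value: "5432"
  - name: MB_DB_USER
    value: metabase
  - name: MB_DB_PASSWORD
    value: danielssecretpassword
```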

Here, 34.118.227.120 is the cluster IP of the NodePort service for the PostgreSQL database (which has a 5432:5432 TCP port mapping configured).
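A service with that 5432:5432 mapping would look roughly like this (a sketch; the selector label is an assumption about how the Deployment labels its pods). With a service in place, Metabase can also reach the database by DNS name (e.g. postgres.metabase.svc.cluster.local) instead of a raw cluster IP.

```yaml
# Sketch: NodePort service in front of the PostgreSQL pods
apiVersion: v1
kind: Service
metadata:
  name: postgres
  namespace: metabase
spec:
  type: NodePort
  selector:
    app: postgres        # assumption: the Deployment's pods carry this label
  ports:
    - protocol: TCP
      port: 5432
      targetPort: 5432
```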

Both containers seem to have pulled okay, but checking the logs I can see that Metabase chokes when trying to connect to the DB (I also tried 'localhost' in place of the service IP).

Anyone experienced the same issue and figured out a resolution?

Are you able to confirm connectivity to the postgres database from a shell?

If you kubectl exec into the Metabase pod and try to ping or telnet to the PostgreSQL pod, what happens?
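For anyone following along, that check can be sketched like this (the deployment names and IP are examples from the thread). Slim images often ship without ping or telnet, so bash's built-in /dev/tcp is a useful fallback probe.

```shell
# Open a shell in the Metabase pod (names are examples)
kubectl exec -it deploy/metabase -n metabase -- bash

# Inside the pod: probe the PostgreSQL endpoint on 5432.
# Works without ping/telnet because /dev/tcp is a bash built-in.
timeout 3 bash -c 'cat < /dev/null > /dev/tcp/34.118.227.120/5432' \
  && echo "port 5432 reachable" \
  || echo "connection failed"
```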

Also, how are you keeping Postgres backed up and stateful? Any reason for not using Cloud SQL for the database?

I'll have to get around to posting the full solution here soon. It took a few hours of going around in circles to get there: I had to create a ClusterIP service for the Metabase app (in addition to the one for the internal database).

I saw that the app had its own cluster IP, but until I created the service deliberately (in GKE with 'expose') it behaved as if it wasn't on the internal network.
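In case it helps anyone hitting the same wall, that 'expose' step looks roughly like this (deployment name and namespace are taken from the thread; Metabase listens on port 3000 by default):

```shell
# Give the Metabase Deployment its own ClusterIP service so it is
# reachable on the cluster-internal network (sketch)
kubectl expose deployment metabase \
  --namespace=metabase \
  --type=ClusterIP \
  --port=3000 \
  --target-port=3000
```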

And re: the database, an architectural decision! The databases holding the actual data are being hosted externally on a dedicated PostgreSQL instance. But I figured that Metabase should also have its own local database just in case that isn't available and to decouple it from that service.

So my intention is that the intra-cluster database will just exist to ensure continuity of connectivity and operation, basically.

But how are you making sure that the PostgreSQL DB is persisted if, let's say, something gets corrupted? Are you mounting the volume somewhere? I just want to make sure you're aware that if something happens to this PostgreSQL instance running on Kubernetes, you will lose all of Metabase's data and have to start from scratch.
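For context, persisting the database usually means a PersistentVolumeClaim mounted at PostgreSQL's data directory. A minimal sketch (all names and the storage size are illustrative):

```yaml
# Sketch: claim durable storage for the PostgreSQL data directory
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: postgres-data
  namespace: metabase
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 10Gi
---
# Then, in the postgres Deployment's pod spec:
#   volumes:
#     - name: data
#       persistentVolumeClaim:
#         claimName: postgres-data
# and in the container spec:
#   volumeMounts:
#     - name: data
#       mountPath: /var/lib/postgresql/data
```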


but how are you making sure that the PostgreSQL DB is persisted

Thanks for the heads up!

The Google Kubernetes Marketplace has a ready-to-go PostgreSQL server app that configures a persistent storage volume for itself during setup.