Where: 34.118.227.120 is the IP of the NodePort service for the PostgreSQL database (which has a 5432:5432 port-forwarding rule configured on TCP).
Both containers seem to have pulled okay, but checking the logs I can see that Metabase chokes when trying to connect to the DB (I also tried 'localhost' in place of the service IP).
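For reference, this is roughly how Metabase gets pointed at the database via its standard `MB_DB_*` environment variables; the deployment name, service host, database name, and credentials below are placeholders, not the actual values from this setup:

```shell
# Hypothetical names: 'metabase' is the Metabase deployment,
# 'mb-postgres' is the in-cluster service name for the PostgreSQL pod.
kubectl set env deployment/metabase \
  MB_DB_TYPE=postgres \
  MB_DB_HOST=mb-postgres \
  MB_DB_PORT=5432 \
  MB_DB_DBNAME=metabase \
  MB_DB_USER=metabase \
  MB_DB_PASS=changeme
```

If the connection still fails with the service name, checking `kubectl logs` on the Metabase pod for the JDBC error usually narrows down whether it's DNS, the port, or credentials.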
Anyone experienced the same issue and figured out a resolution?
I'll have to get around to posting the solution here soon. It took a few hours of going around in circles to get there: I had to create a ClusterIP service for the Metabase app (in addition to the one for the internal database).
I saw that the app had its own cluster IP, but until I created the service deliberately (in GKE, with 'expose') it behaved as if it wasn't on the internal network.
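The 'expose' step described above would look something like the following; the deployment name and port are assumptions based on Metabase's default listening port (3000), not the exact command used here:

```shell
# Hypothetical names: expose the Metabase deployment inside the cluster
# so it gets a stable ClusterIP and internal DNS name.
kubectl expose deployment metabase \
  --name=metabase \
  --port=3000 \
  --target-port=3000 \
  --type=ClusterIP
```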
And re: the database, an architectural decision! The databases holding the actual data are being hosted externally on a dedicated PostgreSQL instance. But I figured that Metabase should also have its own local database just in case that isn't available and to decouple it from that service.
So my intention is that the intra-cluster database will just exist to ensure continuity of connectivity and operation, basically.
But how are you making sure that the PostgreSQL DB is persisted if, say, something gets corrupted? Are you mounting the volume somewhere? Just want to make sure you're aware that if something happens to this PostgreSQL instance running on Kubernetes, you will lose all of Metabase's data and will have to start from scratch.
but how are you making sure that the PostgreSQL DB is persisted
Thanks for the heads up!
The Google Kubernetes Marketplace has a ready-to-go PostgreSQL server app that configures a persistent storage volume for itself during setup.
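The persistence piece boils down to the database's data directory being backed by a PersistentVolumeClaim rather than the pod's ephemeral filesystem. A minimal sketch of that kind of claim is below; the claim name and size are illustrative, not the Marketplace app's actual defaults:

```shell
# Sketch of a PVC like the one the Marketplace app provisions;
# 'mb-postgres-data' and 10Gi are assumed names/values for illustration.
kubectl apply -f - <<EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mb-postgres-data
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 10Gi
EOF
```

With a PVC in place, the data survives pod restarts and rescheduling, though regular dumps of the Metabase application database are still worth taking in case the volume itself is lost.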