Setting up Metabase (single replica) on a Kubernetes cluster on Google Cloud results in frequent restarts

Hi,
I'm referring to https://artifacthub.io/packages/helm/one-acre-fund/metabase, as that chart shows up under the official filter;

I am trying to set up Metabase CE on a k8s cluster (1 replica only); see metabase-config.yaml below for details;
I do:

helm repo add one-acre-fund https://one-acre-fund.github.io/oaf-public-charts
helm install my-release one-acre-fund/metabase
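
Side note: I believe the values file has to be passed explicitly for the custom settings further below to take effect, e.g. (assuming metabase-config.yaml sits in the current directory):

helm install my-release one-acre-fund/metabase -f metabase-config.yaml

The output below is from the plain install command above.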

and get:


NAME                                       READY   STATUS    RESTARTS   AGE
pod/my-release-metabase-7479bfc65d-nv985   0/1     Running   6          11m
pod/my-release-postgresql-0                1/1     Running   0          11m

NAME                                  READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/my-release-metabase   0/1     1            0           11m

NAME                                     TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE
service/kubernetes                       ClusterIP   10.106.128.1     <none>        443/TCP    169m
service/my-release-metabase              ClusterIP   10.106.130.215   <none>        3000/TCP   11m
service/my-release-postgresql            ClusterIP   10.106.128.65    <none>        5432/TCP   11m
service/my-release-postgresql-headless   ClusterIP   None             <none>        5432/TCP   11m

Clearly this isn't working;


kubectl logs pod/my-release-metabase-7479bfc65d-nv985
Warning: environ value jdk-11.0.12+7 for key :java-version has been overwritten with 11.0.12
WARNING: sun.reflect.Reflection.getCallerClass is not supported. This will impact performance.
2021-09-16 17:04:58,777 INFO metabase.util :: Maximum memory available to JVM: 494.9 MB
2021-09-16 17:05:39,560 INFO util.encryption :: Saved credentials encryption is DISABLED for this Metabase instance. 🔓
 For more information, see https://metabase.com/docs/latest/operations-guide/encrypting-database-details-at-rest.html
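
The log just stops there before the container is restarted. If it helps, I can also attach the previous container's output, e.g.:

kubectl logs pod/my-release-metabase-7479bfc65d-nv985 --previous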

Any resolution or guidance would be helpful;
More logs:

kubectl describe pod/my-release-metabase-7479bfc65d-nv985


Name:         my-release-metabase-7479bfc65d-nv985
Namespace:    default
Priority:     0
Labels:       org.metabase.app=app
              org.metabase.instance=my-release-metabase
              org.metabase.project=metabase
              pod-template-hash=7479bfc65d
Annotations:  seccomp.security.alpha.kubernetes.io/pod: runtime/default
Status:       Running
IPs:
  IP:           10.106.0.130
Controlled By:  ReplicaSet/my-release-metabase-7479bfc65d
Init Containers:
  wait-db:
    Image:         jwilder/dockerize:0.6.1
    Image ID:      docker.io/jwilder/dockerize@sha256:5712c481002a606fffa99a44526fbff2cd1c7f94ca34489f7b0d6bbaeeff4aa4
    Port:          <none>
    Host Port:     <none>
    Args:
      -wait
      tcp://my-release-postgresql.default.svc:5432
      -timeout
      300s
    State:          Terminated
      Reason:       Completed
      Exit Code:    0
      Started:      Thu, 16 Sep 2021 16:51:53 +0000
      Finished:     Thu, 16 Sep 2021 16:51:53 +0000
    Ready:          True
    Restart Count:  0
    Limits:
      cpu:                500m
      ephemeral-storage:  1Gi
      memory:             2Gi
    Requests:
      cpu:                500m
      ephemeral-storage:  1Gi
      memory:             2Gi
    Environment:          <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-6gg86 (ro)
Containers:
  app:
    Image:          metabase/metabase:latest
    Image ID:       docker.io/metabase/metabase@sha256:759471275a5433d568822356bcd69a67175fc2852c92fb759edead05e4853e97
    Port:           3000/TCP
    Host Port:      0/TCP
    State:          Running
      Started:      Thu, 16 Sep 2021 17:12:36 +0000
    Last State:     Terminated
      Reason:       Error
      Exit Code:    143
      Started:      Thu, 16 Sep 2021 17:11:14 +0000
      Finished:     Thu, 16 Sep 2021 17:12:35 +0000
    Ready:          False
    Restart Count:  9
    Limits:
      cpu:                500m
      ephemeral-storage:  1Gi
      memory:             2Gi
    Requests:
      cpu:                500m
      ephemeral-storage:  1Gi
      memory:             2Gi
    Liveness:             http-get http://:3000/api/health delay=60s timeout=1s period=5s #success=1 #failure=5
    Readiness:            http-get http://:3000/api/health delay=10s timeout=1s period=5s #success=1 #failure=15
    Environment:
      MB_DB_TYPE:    postgres
      MB_DB_DBNAME:  metabase
      MB_DB_PORT:    5432
      MB_DB_USER:    metabase
      MB_DB_PASS:    <set to the key 'db-password' in secret 'my-release-metabase'>  Optional: false
      MB_DB_HOST:    my-release-postgresql.default.svc
    Mounts:
      /plugins from plugin-volume (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-6gg86 (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             False 
  ContainersReady   False 
  PodScheduled      True 
Volumes:
  plugin-volume:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:     
    SizeLimit:  <unset>
  kube-api-access-6gg86:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   Guaranteed
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason            Age                   From                                   Message
  ----     ------            ----                  ----                                   -------
  Normal   TriggeredScaleUp  22m                   cluster-autoscaler                     pod triggered scale-up: [{https://www.googleapis.com/compute/v1/projects/biz-leads-1509126536502/zones/europe-central2-c/instanceGroups/gk3-metabase-default-pool-a14ac9f4-grp 1->2 (max: 1000)}]
  Warning  FailedScheduling  22m (x4 over 22m)     gke.io/optimize-utilization-scheduler  0/2 nodes are available: 1 Insufficient memory, 2 Insufficient cpu.
  Warning  FailedScheduling  21m (x2 over 21m)     gke.io/optimize-utilization-scheduler  0/3 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate, 2 Insufficient cpu, 2 Insufficient memory.
  Normal   Scheduled         21m                   gke.io/optimize-utilization-scheduler  Successfully assigned default/my-release-metabase-7479bfc65d-nv985 to gk3-metabase-default-pool-a14ac9f4-rv8v
  Normal   Pulling           21m                   kubelet                                Pulling image "jwilder/dockerize:0.6.1"
  Normal   Pulled            20m                   kubelet                                Successfully pulled image "jwilder/dockerize:0.6.1" in 7.594228043s
  Normal   Pulling           20m                   kubelet                                Pulling image "metabase/metabase:latest"
  Normal   Created           20m                   kubelet                                Created container wait-db
  Normal   Started           20m                   kubelet                                Started container wait-db
  Normal   Pulled            20m                   kubelet                                Successfully pulled image "metabase/metabase:latest" in 8.892920689s
  Normal   Created           20m                   kubelet                                Created container app
  Normal   Started           20m                   kubelet                                Started container app
  Warning  Unhealthy         11m (x31 over 19m)    kubelet                                Liveness probe failed: Get "http://10.106.0.130:3000/api/health": dial tcp 10.106.0.130:3000: connect: connection refused
  Warning  BackOff           5m52s (x20 over 10m)  kubelet                                Back-off restarting failed container
  Warning  Unhealthy         57s (x125 over 20m)   kubelet                                Readiness probe failed: Get "http://10.106.0.130:3000/api/health": dial tcp 10.106.0.130:3000: connect: connection refused
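
One thing I notice in the describe output: the deployed probes show delay=60s (liveness) and delay=10s (readiness), while my values file below sets 120s and 30s, so I'm not sure my custom values were actually applied to the release. I'll double-check what the release is really using with:

helm get values my-release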

metabase-config.yaml


# Currently Metabase is not horizontally scalable. See
# https://github.com/metabase/metabase/issues/1446 and
# https://github.com/metabase/metabase/issues/2754
# NOTE: Should remain 1
replicaCount: 1
podAnnotations: {}
podLabels: {}
image:
  repository: metabase/metabase
  tag: latest
  pullPolicy: IfNotPresent
  replicas: 1

## String to fully override metabase.fullname template
##
# fullnameOverride:

# Config Jetty web server
listen:
  host: "0.0.0.0"
  port: 3000
ssl:
  # If you have an ssl certificate and would prefer to have Metabase run over HTTPS
  enabled: true
  # port: 8443
  # keyStore: |-
  #   << JKS KEY STORE >>
  # keyStorePassword: storepass
jetty:
#  maxThreads: 254
#  minThreads: 8
#  maxQueued: -1
#  maxIdleTime: 60000

# Backend database
database:
  # Database type (h2 / mysql / postgres), default: h2
  type: postgres
  # encryptionKey: << YOUR ENCRYPTION KEY >>
  ## Only need when you use mysql / postgres
  # host:
  # port:
  # dbname:
  # username:
  # password:
  ## Alternatively, use a connection URI for full configurability. Example for SSL enabled Postgres.
  # connectionURI: postgres://user:password@host:port/database?ssl=true&sslmode=require&sslfactory=org.postgresql.ssl.NonValidatingFactory
  connectionURI: "postgres://metabase:ciJ8KfiMUK@localhost:5432/metabase?ssl=true&sslmode=require&sslfactory=org.postgresql.ssl.NonValidatingFactory"
  ## If a secret with the database credentials already exists, use the following values:
  # existingSecret:
  # existingSecretUsernameKey:
  # existingSecretPasswordKey:
  # existingSecretConnectionURIKey:

password:
  # Changing Metabase password complexity:
  # weak: no character constraints
  # normal: at least 1 digit (default)
  # strong: minimum 8 characters w/ 2 lowercase, 2 uppercase, 1 digit, and 1 special character
  complexity: normal
  length: 6

timeZone: UTC
emojiLogging: true
# javaOpts:
# pluginsDirectory:
# siteUrl:

session: {}
  # maxSessionAge:
  # sessionCookies:

livenessProbe:
  initialDelaySeconds: 120
  timeoutSeconds: 30
  failureThreshold: 6

readinessProbe:
  initialDelaySeconds: 30
  timeoutSeconds: 3
  periodSeconds: 5

service:
  name: metabase
  type: ClusterIP
  externalPort: 80
  internalPort: 3000
  # Used to fix NodePort when service.type: NodePort.
  nodePort:
  annotations: {}
    # Used to add custom annotations to the Service.
    # service.beta.kubernetes.io/aws-load-balancer-internal: "0.0.0.0/0"
ingress:
  enabled: false
  # Used to create Ingress record (should be used with service.type: ClusterIP).
  hosts:
    # - metabase.domain.com
  # The ingress path. Useful to host metabase on a subpath, such as `/metabase`.
  path: /
  labels:
    # Used to add custom labels to the Ingress
    # Useful if for example you have multiple Ingress controllers and want your Ingress controllers to bind to specific Ingresses
    # traffic: internal
  annotations: {}
    # kubernetes.io/ingress.class: nginx
    # kubernetes.io/tls-acme: "true"
  tls:
    # Secrets must be manually created in the namespace.
    # - secretName: metabase-tls
    #   hosts:
    #     - metabase.domain.com

# A custom log4j.properties file can be provided using a multiline YAML string.
# See https://github.com/metabase/metabase/blob/master/resources/log4j.properties
#
# log4jProperties:

resources: {}
  # We usually recommend not to specify default resources and to leave this as a conscious
  # choice for the user. This also increases chances charts run on environments with little
  # resources, such as Minikube. If you do want to specify resources, uncomment the following
  # lines, adjust them as necessary, and remove the curly braces after 'resources:'.
  # limits:
  #  cpu: 100m
  #  memory: 128Mi
  # requests:
  #  cpu: 100m
  #  memory: 128Mi

## Node labels for pod assignment
## ref: https://kubernetes.io/docs/user-guide/node-selection/
#
nodeSelector: {}

## Tolerations for pod assignment
## ref: https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/
##
tolerations: []

## Affinity for pod assignment
## ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity
##
affinity: {}

Is it a memory allocation issue or something else? And is this (from an earlier thread) still relevant:

Hi @codecakes
500 MB of memory is too little. You should be seeing OOM-killer messages, but perhaps those are swallowed when it restarts the pod.
My first guess is that you are giving it too few resources. Otherwise you might be running out of entropy.
Metabase should start in about 35 seconds.

Will update here after raising the memory cap;
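
For reference, this is roughly what I plan to try for raising the cap, assuming the chart honors the resources and javaOpts keys shown in the values above (untested sketch):

resources:
  requests:
    cpu: 500m
    memory: 2Gi
  limits:
    cpu: 1
    memory: 2Gi

# give the JVM a larger heap than the ~500 MB it picked by default
javaOpts: "-Xmx1g"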