Elastic Beanstalk upgrade failing

We are running Metabase on Elastic Beanstalk using the standard EBS install.

When we upgraded from 0.17.0 to 0.18.0, all we got were 503 errors. I checked the databasechangeloglock table and no rows were listed as locked. Tried restarting the app servers (via the AWS console), but that didn’t help. We ended up having to completely reinstall from scratch using a new EBS application.
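For reference, this is roughly how I checked the lock table. The connection details here are placeholders, and this assumes a Postgres application database on RDS:

```python
# Check the Liquibase lock table Metabase uses during schema migrations.
# Host, database name, and credentials below are placeholders.
import psycopg2

conn = psycopg2.connect(
    host="your-rds-endpoint.rds.amazonaws.com",
    dbname="metabase",
    user="metabase",
    password="REPLACE_ME",
)
with conn, conn.cursor() as cur:
    cur.execute("SELECT id, locked, lockgranted, lockedby FROM databasechangeloglock;")
    for row in cur.fetchall():
        print(row)
    # If a row were stuck with locked = true, clearing it manually can
    # unblock a migration -- but only when no migration is actually running:
    # cur.execute("UPDATE databasechangeloglock SET locked = FALSE, lockgranted = NULL, lockedby = NULL;")
```

In our case every row showed locked = false, so the migration lock doesn’t seem to be the problem.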

Same thing happened when upgrading from 0.18.0 to 0.18.1.

Does anyone have any idea whether this is a bug, or something else? Has anyone else tried the standard upgrade on EBS and had it work for the 0.18.x versions?

Can you pull up the Beanstalk logs from the previous environment?
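If the old environment hasn’t been terminated yet, requesting the tail logs with boto3 looks roughly like this (the environment name and region are placeholders):

```python
# Ask each instance in the environment to bundle its recent logs,
# then fetch the pre-signed S3 URLs where the bundles land.
import time

import boto3

eb = boto3.client("elasticbeanstalk", region_name="us-east-1")
env_name = "my-metabase-env"  # placeholder environment name

eb.request_environment_info(EnvironmentName=env_name, InfoType="tail")
time.sleep(15)  # give the instances a moment to upload the bundles

info = eb.retrieve_environment_info(EnvironmentName=env_name, InfoType="tail")
for item in info["EnvironmentInfo"]:
    # Message is a download URL for that instance's log bundle
    print(item["Ec2InstanceId"], item["Message"])
```

The same logs are also available in the console under the environment’s Logs section.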

I’ve seen that happen when we’ve futzed with RDS security groups after deployment. In that situation the logs typically show something along the lines of connection errors from metabase.db.
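If you want to rule that out, a quick reachability check from one of the app instances looks something like this (the endpoint and port are placeholders; 5432 is the Postgres default):

```python
# Plain TCP check from an app instance to the RDS endpoint; if the
# security group blocks the app servers, this will time out or be refused.
import socket

host, port = "your-rds-endpoint.rds.amazonaws.com", 5432
try:
    with socket.create_connection((host, port), timeout=5):
        print("TCP connection OK -- the security group allows this traffic")
except OSError as exc:
    print(f"Cannot reach {host}:{port} -- check the RDS security group ({exc})")
```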

So there are some errors in the log files I got from EB. In the Docker log I see these:

time="2016-06-28T02:24:48.381150682Z" level=error msg="HTTP Error" err="conflict: unable to delete f13e4999bba8 (cannot be forced) - image has dependent child images" statusCode=409
time="2016-06-28T02:24:48.381636794Z" level=info msg="DELETE /v1.21/images/f962bb3f1485"
time="2016-06-28T02:24:48.381772159Z" level=error msg="Handler for DELETE /v1.21/images/f962bb3f1485 returned error: conflict: unable to delete f962bb3f1485 (cannot be forced) - image has dependent child images"
time="2016-06-28T02:24:48.381795975Z" level=error msg="HTTP Error" err="conflict: unable to delete f962bb3f1485 (cannot be forced) - image has dependent child images" statusCode=409
time="2016-06-28T02:24:48.442841424Z" level=info msg="POST /v1.21/images/968506edc75f/tag?force=1&repo=aws_beanstalk%2Fcurrent-app&tag=latest"
time="2016-06-28T02:24:48.501160718Z" level=info msg="POST /v1.21/images/67a294eb6132/tag?force=1&repo=metabase%2Fmetabase&tag=v0.18.1"
time="2016-06-28T02:24:48.558834478Z" level=info msg="POST /v1.21/images/441c58909ff9/tag?force=1&repo=metabase%2Fmetabase&tag=v0.18.0"
time="2016-06-28T02:24:48.558986107Z" level=error msg="Handler for POST /v1.21/images/441c58909ff9/tag returned error: could not find image: no such id: 441c58909ff9"
time="2016-06-28T02:24:48.559009165Z" level=error msg="HTTP Error" err="could not find image: no such id: 441c58909ff9" statusCode=404

Not a whole lot to go by there.

Is there an earlier error somewhere? Based on what you’ve pasted, something seems to be going wrong with replacing the old Docker image, but I’m not clear on what exactly would cause that, and it’s not something we’ve come across.
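If you can still get a shell on one of the instances, it might be worth looking at what images are sitting on the host. A rough sketch using the Docker SDK for Python (purely illustrative; the Beanstalk deploy itself drives the Docker daemon directly):

```python
# List the images on the host and prune the dangling ones. The
# "image has dependent child images" error above usually means the
# image being deleted is a parent layer of another image; only
# images nothing depends on can actually be removed.
import docker

client = docker.from_env()
for image in client.images.list(all=True):
    print(image.short_id, image.tags)

pruned = client.images.prune(filters={"dangling": True})
print(pruned.get("ImagesDeleted"), pruned.get("SpaceReclaimed"))
```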

Hmm, strange. I can try deploying the stock 0.18.0 and then upgrading to 0.18.1 to see if that works.