AWS EB Setup failing due to health check

I've had success setting up MB with the Mac App, but the AWS Elastic Beanstalk process is giving me some trouble.

Everything seems to go fine until the end. The environment health transitions from Pending to Severe, where it hangs for around 10 minutes before giving two errors.

2015-10-25 11:57:07 UTC+0000	ERROR	Stack named 'awseb-e-ngyt8icgae-stack' aborted operation. Current state: 'CREATE_FAILED' Reason: The following resource(s) failed to create: [AWSEBInstanceLaunchWaitCondition].

2015-10-25 11:56:50 UTC+0000	ERROR	The EC2 instances failed to communicate with AWS Elastic Beanstalk, either because of configuration problems with the VPC or a failed EC2 instance. Check your VPC configuration and try launching the environment again.

2015-10-25 11:36:17 UTC+0000	WARN	Environment health has transitioned from Pending to Severe. None of the instances are sending data.

Some digging around has raised the question of what the health check type should be: Basic or Enhanced. Some suggestions point to Enhanced specifically being the problem, but I've tried both.

I'm not entirely sure of the differences, and the documentation doesn't indicate which should be selected. Enhanced is selected by default.
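In case it helps anyone reproduce the two settings: this is roughly how I've been switching between them. It's only a boto3 sketch, and the environment name and region are placeholders rather than my real ones.

```python
# Rough sketch: flip an EB environment between basic and enhanced health
# reporting. Environment name and region below are placeholders.
import boto3

eb = boto3.client("elasticbeanstalk", region_name="eu-west-1")  # assumed region

eb.update_environment(
    EnvironmentName="my-metabase-env",  # placeholder name
    OptionSettings=[
        {
            "Namespace": "aws:elasticbeanstalk:healthreporting:system",
            "OptionName": "SystemType",
            "Value": "basic",  # or "enhanced"
        }
    ],
)
```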

I've checked my security groups and VPC settings, and traffic should be fine, but I can't SSH into the EB instance.
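For what it's worth, this is roughly how I've been checking which of my security groups actually allow inbound SSH (a boto3 sketch; the region is an assumption):

```python
# Print every security group rule that would allow inbound SSH (port 22).
# Region is an assumption; adjust to wherever the environment lives.
import boto3

ec2 = boto3.client("ec2", region_name="eu-west-1")

for sg in ec2.describe_security_groups()["SecurityGroups"]:
    for rule in sg["IpPermissions"]:
        all_traffic = rule.get("IpProtocol") == "-1"
        covers_ssh = rule.get("FromPort", 0) <= 22 <= rule.get("ToPort", 65535)
        if all_traffic or covers_ssh:
            sources = [r["CidrIp"] for r in rule.get("IpRanges", [])]
            print(sg["GroupId"], sg["GroupName"], "allows SSH from", sources)
```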

I'm creating this EB application in an EU region, so I've had to download MB and upload it rather than insert the S3 URL. Other than this, I've followed the guide, adding my key pair and setting my VPC settings.

Update:

I've triple-checked the VPC settings. The only errors I get now are:

2015-10-25 13:14:17 UTC+0000	ERROR	Stack named 'awseb-e-jafkwqspnv-stack' aborted operation. Current state: 'CREATE_FAILED' Reason: The following resource(s) failed to create: [AWSEBInstanceLaunchWaitCondition].

2015-10-25 13:14:04 UTC+0000	ERROR	The EC2 instances failed to communicate with AWS Elastic Beanstalk, either because of configuration problems with the VPC or a failed EC2 instance. Check your VPC configuration and try launching the environment again.
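Since the error still points at the VPC, this is the sanity check I'm running against the route tables, i.e. whether each one has a default route out through an internet gateway or NAT (boto3 sketch; the VPC ID and region are placeholders):

```python
# For each route table in the VPC, show where the 0.0.0.0/0 default route
# points (igw-* / nat-*), or NONE if there isn't one. IDs are placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="eu-west-1")
vpc_id = "vpc-12345678"  # placeholder

tables = ec2.describe_route_tables(
    Filters=[{"Name": "vpc-id", "Values": [vpc_id]}]
)["RouteTables"]

for rt in tables:
    defaults = [r for r in rt["Routes"] if r.get("DestinationCidrBlock") == "0.0.0.0/0"]
    targets = [r.get("GatewayId") or r.get("NatGatewayId") for r in defaults]
    print(rt["RouteTableId"], "default route ->", targets or "NONE")
```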

curious. i've never actually seen any kind of choice for the health check type in EB, and that's across the dozen-plus EB instances that we run ...

based on this thread on the AWS forums (Forums | AWS re:Post), my suspicion is that you have a routing problem, likely caused by the way your VPC is set up.

i've run into issues a couple of times when deploying Metabase in environments where the VPC is very custom and the routing rules are very strict. the key is that everything EB launches for you (ELB, EC2 instance, RDS instance) needs to be able to talk to the others. so maybe there is something going on there?
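one quick way to sanity check that piece, sketched in boto3 (both group IDs and the region are placeholders, not from your account): confirm the instances' security group actually allows traffic from the load balancer's group.

```python
# rough check: does the instance security group accept traffic from the
# load balancer's security group? both group IDs are placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="eu-west-1")  # assumed region
instance_sg = "sg-aaaa1111"  # placeholder: group on the EC2 instances
elb_sg = "sg-bbbb2222"       # placeholder: group on the ELB

group = ec2.describe_security_groups(GroupIds=[instance_sg])["SecurityGroups"][0]

allowed = any(
    pair.get("GroupId") == elb_sg
    for rule in group["IpPermissions"]
    for pair in rule.get("UserIdGroupPairs", [])
)
print("instance SG accepts traffic from the ELB SG:", allowed)
```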

Hi agilliland - thanks for replying.

I think you are right and the health check is a red herring.

From further reading about EB, it seems it launches instances in any of the Availability Zones selected during setup.
Since everything I've set up in our AWS infrastructure to this point hasn't really touched multiple AZs (with one exception), the security groups, network interfaces, etc. aren't all aligned for that, and I think this was causing the issue. I couldn't even SSH into the instance, which was suspicious, and it was modifying my other security groups.

I'm going to spend some time configuring the setup for the second AZ and give it another whirl.
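Before I do, this is roughly how I'm auditing which AZs actually have subnets in the VPC, so EB can't land somewhere unprepared (boto3 sketch; VPC ID and region are placeholders):

```python
# Group the VPC's subnets by Availability Zone so I can see which AZs are
# actually covered before letting EB pick one. IDs are placeholders.
import boto3
from collections import defaultdict

ec2 = boto3.client("ec2", region_name="eu-west-1")
vpc_id = "vpc-12345678"  # placeholder

subnets = ec2.describe_subnets(
    Filters=[{"Name": "vpc-id", "Values": [vpc_id]}]
)["Subnets"]

by_az = defaultdict(list)
for sn in subnets:
    by_az[sn["AvailabilityZone"]].append(sn["SubnetId"])

for az, subnet_ids in sorted(by_az.items()):
    print(az, "->", subnet_ids)
```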