Elastic Beanstalk reports 5xx errors even though instances are in perfect health


I need to set up an API for gathering event data to be used in a recommendation engine. This is my setup:

  • Elastic Beanstalk env with a load balancer and autoscaling group.
  • I have 2x t2.medium instances running behind a load balancer.
  • The Elastic Beanstalk configuration is 64-bit Amazon Linux 2016.03 v2.1.1 running Tomcat 8 with Java 8.
  • Additionally, I have 8x t2.micro instances that I use for high-load testing of the API, sending thousands of requests/sec to be handled by it.
  • I'm using Locust (http://locust.io/) as my load testing tool.
  • Each t2.micro instance run by Locust can send up to about 500 req/sec.
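For reference, a load like the one described above can be generated with a minimal locustfile. This is only a sketch using the modern Locust `HttpUser` API; the `/events` endpoint and its JSON payload are placeholders, not the actual API being tested:

```python
# Minimal locustfile.py sketch for the setup described above.
# Assumptions: "/events" and the JSON payload are placeholders for the
# real event-collection endpoint; uses the class-based Locust API.
from locust import HttpUser, task, between

class EventApiUser(HttpUser):
    # Each simulated user waits ~1 second between requests,
    # i.e. roughly 1 req/sec per user.
    wait_time = between(0.9, 1.1)

    @task
    def post_event(self):
        self.client.post(
            "/events",
            json={"user_id": 1, "item_id": 42, "action": "view"},
        )
```

Run it with something like `locust -f locustfile.py --host http://<elb-dns-name>` and scale the simulated user count from the Locust web UI.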

Everything works fine while the request rate is below 1000, maybe 1200 req/sec. Once over that, my load balancer reports that some of the instances behind it are returning 5xx errors (attached). I've also tried with 4 instances behind the load balancer; although things start out well with up to 3000 req/sec, soon after, the Elastic Beanstalk health tool and Locust both report 503s and 504s, while all of the instances are in perfect health according to the actual numbers in the Health Overview, showing only 10%-20% CPU utilization.

Is there something I'm missing in configuring the environment? It seems like no matter how many machines I have behind the load balancer, the environment handles no more than 1000-2000 requests per second.

[attachment]

Now I know for sure that it’s the ELB that is causing the problems, not the instances.

I ran a load test with simulated users that each send about 1 req/sec, ramping up by 10 users/sec to 4000 users, which should equal about 4000 req/sec. Still, it doesn't seem to handle any request rate over 3.5k req/sec (attachment1).
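As a sanity check on those numbers, the expected load from this ramp works out as follows (a trivial sketch; the figures are taken from the test description above):

```python
def expected_peak_rps(peak_users: int, reqs_per_user: float) -> float:
    """Steady-state request rate once every simulated user is spawned."""
    return peak_users * reqs_per_user

def ramp_duration_s(peak_users: int, spawn_rate: float) -> float:
    """Seconds needed to reach the full user count at a constant spawn rate."""
    return peak_users / spawn_rate

# 4000 users * ~1 req/sec each -> ~4000 req/sec at full load,
# reached after 4000 / 10 = 400 seconds of ramp-up.
print(expected_peak_rps(4000, 1.0))  # 4000.0
print(ramp_duration_s(4000, 10))     # 400.0
```

So the test should sit at roughly 4000 req/sec after about 400 seconds, well above the ~3.5k req/sec ceiling observed.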

As you can see from attachment2, the 4 instances behind the load balancer are in perfect health, but I still keep getting 503 errors. It's the load balancer itself causing problems. Look how SurgeQueueLength and SpilloverCount increase rapidly at some point (attachment3). I'm trying to figure out why.

I also completely removed the load balancer and tested with just one instance alone; it can handle up to about 3k req/sec (attachment4 and attachment5), so it's definitely the load balancer.

Maybe I'm missing some crucial limit that load balancers have by default, like the surge queue size of 1024? What is the normal request rate a single load balancer can handle? Should I be adding more load balancers? Could it be related to availability zones, e.g. ELB listeners in one zone trying to route to instances in a different zone?
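One way to watch the queueing behaviour directly is to pull the ELB's CloudWatch metrics. A hedged boto3 sketch, assuming AWS credentials are configured and using `my-elb` as a placeholder for the actual Classic Load Balancer name:

```python
# Sketch: fetch per-minute SurgeQueueLength / SpilloverCount for a
# Classic ELB. "my-elb" is a placeholder for your load balancer name.
import datetime
import boto3

cloudwatch = boto3.client("cloudwatch")

def elb_metric(metric_name: str, lb_name: str, minutes: int = 30):
    """Return datapoints for a Classic ELB CloudWatch metric, oldest first."""
    now = datetime.datetime.now(datetime.timezone.utc)
    resp = cloudwatch.get_metric_statistics(
        Namespace="AWS/ELB",
        MetricName=metric_name,  # e.g. "SurgeQueueLength" or "SpilloverCount"
        Dimensions=[{"Name": "LoadBalancerName", "Value": lb_name}],
        StartTime=now - datetime.timedelta(minutes=minutes),
        EndTime=now,
        Period=60,               # ELB metrics have 1-minute granularity
        Statistics=["Maximum"],
    )
    return sorted(resp["Datapoints"], key=lambda d: d["Timestamp"])

for point in elb_metric("SurgeQueueLength", "my-elb"):
    print(point["Timestamp"], point["Maximum"])
```

If SurgeQueueLength hits its maximum of 1024 and SpilloverCount starts climbing, requests are being rejected at the ELB rather than at the instances.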

[attachment1]

[attachment2]

[attachment3]

[attachment4]

[attachment5]

Cross-zone load balancing is enabled.

Maybe this helps more:
[attachment]


The message says that "9.8% of the requests to the ELB are failing with HTTP 5xx (6 minutes ago)". This does not mean that your instances are returning HTTP 5xx responses. The requests are failing at the ELB itself. This can happen when your backend instances are at capacity (e.g. their connections are saturated and they are rejecting new connections from the ELB).

Your requests are spilling over at the ELB; they never make it to the instances. If they were failing at the EC2 instances, the cause would be different, and the data for the environment would match the data for the instances.

Also note that the cause says this was the state "6 minutes ago". Elastic Beanstalk uses multiple data sources: one is the data coming from the instances, which provides the requests per second and HTTP status codes in the table shown. Another is the CloudWatch metrics for your ELB. Since CloudWatch metrics for ELB have 1-minute granularity, this data is slightly delayed, and the cause tells you how old the information is.
