Question:
I’m trying to figure out the correct tuning for nginx on an AWS server that is wholly backed by EBS. The basic issue is that when downloading a ~100MB static file over HTTP, I’m seeing consistent download rates of only ~60KB/s. If I use scp to copy the same file from the AWS server, I’m seeing rates of ~1MB/s. (So I’m not sure EBS even comes into play here.)
Initially, I was running nginx with basically the out-of-the-box configuration (for CentOS 6.x). But in an attempt to speed things up, I’ve played around with various tuning parameters to no avail — the speed has remained essentially the same.
Here is the relevant fragment from my config as it stands at this moment:
    location /download {
        root /var/www/yada/update;
        disable_symlinks off;
        autoindex on;

        # Transfer tuning follows
        aio on;
        directio 4m;
        output_buffers 1 128k;
    }
Initially, these tuning settings were:
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
Note, I’m not trying to optimize for a large amount of traffic. There is likely only a single client ever downloading at any given time. The AWS server is a ‘micro’ instance with 617MB of memory. Regardless, the fact that scp can download at ~1MB/s leads me to believe that HTTP should be able to match or beat that throughput.
Any help is appreciated.
[Update]
Additional information: running ‘top’ while a download is in progress shows:
    top - 07:37:33 up 11 days,  1:56,  1 user,  load average: 0.00, 0.01, 0.05
    Tasks:  63 total,   1 running,  62 sleeping,   0 stopped,   0 zombie
    Cpu(s):  0.0%us,  0.0%sy,  0.0%ni,100.0%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
and ‘iostat’ shows:
    Linux 3.2.38-5.48.amzn1.x86_64    04/03/2013    _x86_64_    (1 CPU)

    avg-cpu:  %user   %nice %system %iowait  %steal   %idle
               0.02    0.00    0.03    0.03    0.02   99.89

    Device:            tps   Blk_read/s   Blk_wrtn/s   Blk_read   Blk_wrtn
    xvdap1            0.23         2.66         8.59    2544324    8224920
Answer:
Have you considered turning sendfile back on? With sendfile, nginx asks the kernel to copy the file from the page cache directly to the socket, with no round trip through user space, so for static files it is usually faster than any buffered alternative.
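One likely reason this matters here: per the nginx docs, enabling `directio` automatically disables sendfile for any request served with direct I/O, so the config in the question bypasses the page cache and funnels the whole file through the single 128k output buffer. A minimal sketch of a sendfile-based block, reusing the paths from the question (nothing else assumed):

```nginx
location /download {
    root /var/www/yada/update;
    disable_symlinks off;
    autoindex on;

    # Let the kernel stream the file straight from the page cache.
    sendfile on;
    # Pack headers and file data into full packets before sending.
    tcp_nopush on;
    tcp_nodelay on;

    # Deliberately no aio/directio here: directio disables sendfile
    # for files above its threshold, which on a memory-starved micro
    # instance forces every request back to disk.
}
```

Dropping the `output_buffers` override also lets nginx fall back to its defaults, which are only consulted when sendfile is not in use.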