Bandwidth is tiered by instance size; here's a comprehensive answer:
For t2/m3/c3/c4/r3/i2/d2 instances:
- t2.nano = ??? (Based on the scaling factors, I'd expect 20-30 MBit/s)
- t2.micro = ~70 MBit/s (qiita says 63 MBit/s); t1.micro gets ~100 MBit/s
- t2.small = ~125 MBit/s (qiita says 127 MBit/s; cloudharmony says 125 MBit/s with spikes to 200+ MBit/s)
- *.medium = t2.medium gets 250-300 MBit/s, m3.medium ~400 MBit/s
- *.large = ~450-600 MBit/s (the most variation, see below)
- *.xlarge = 700-900 MBit/s
- *.2xlarge = ~1 GBit/s ±10%
- *.4xlarge = ~2 GBit/s ±10%
- *.8xlarge and the specialty instances marked 10 Gigabit = 10 GBit/s nominal; expect ~8.5 GBit/s in practice, and enhanced networking & a VPC are required for full throughput (see the check sketched after the notes below)
m1.small, m1.medium, and m1.large instances tend to perform higher than expected. c1.medium is another outlier, at ~800 MBit/s.
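If you want to verify whether your own instance has enhanced networking turned on, here's a minimal sketch using boto3 (my own illustration, not from any of the benchmarks; the region and instance ID are placeholders):

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # assumed region

resp = ec2.describe_instance_attribute(
    InstanceId="i-0123456789abcdef0",  # placeholder instance ID
    Attribute="sriovNetSupport",
)

# A value of 'simple' means SR-IOV / enhanced networking is enabled;
# an empty result means it isn't.
value = resp.get("SriovNetSupport", {}).get("Value")
print("sriovNetSupport:", value or "(not enabled)")
```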
I gathered this by combing dozens of sources doing benchmarks (primarily using iPerf over TCP connections). Credit to CloudHarmony & flux7 in particular for many of the benchmarks (note that those two links go to Google searches showing the numerous individual benchmarks).
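If you'd like to reproduce the numbers yourself, the measurement is roughly the sketch below. It assumes iperf3 is installed on both instances and that the far end is already running `iperf3 -s`; the hostname is a placeholder for the other instance's private IP.

```python
import json
import subprocess

def measure_mbits(server_host: str, seconds: int = 10) -> float:
    """Run one iperf3 TCP test against server_host and return MBit/s received."""
    out = subprocess.run(
        ["iperf3", "-c", server_host, "-t", str(seconds), "-J"],  # -J = JSON output
        capture_output=True, text=True, check=True,
    )
    result = json.loads(out.stdout)
    # Receiver-side summary is reported in bits per second; convert to MBit/s.
    return result["end"]["sum_received"]["bits_per_second"] / 1e6

if __name__ == "__main__":
    # Placeholder: private IP of the instance running `iperf3 -s`
    print(f"{measure_mbits('10.0.0.42'):.0f} MBit/s")
```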
Caveats & Notes:
The large instance size has the most variation reported:
- m1.large = ~800 MBit/s (!!!)
- t2.large = ~500 MBit/s
- c3.large = ~500-570 MBit/s (different results from different sources)
- c4.large = ~520 MBit/s (I've confirmed this independently, by the way)
- m3.large does better at ~700 MBit/s
- m4.large = ~445 MBit/s
- r3.large = ~390 MBit/s
Burstable (T2) instances appear to exhibit burstable networking performance too: see for example the t2.small spikes to 200+ MBit/s noted above, which means short benchmarks may overstate sustained throughput.
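One way to see whether throughput tails off after an initial burst (my own sketch, not from the cited benchmarks) is to repeat short iperf3 runs for a while and watch the trend; again, the server address is a placeholder and iperf3 is assumed on both ends.

```python
import json
import subprocess
import time

SERVER = "10.0.0.42"   # placeholder: instance running `iperf3 -s`
SAMPLES = 30           # ~30 minutes, one short test per minute

for i in range(SAMPLES):
    out = subprocess.run(
        ["iperf3", "-c", SERVER, "-t", "10", "-J"],
        capture_output=True, text=True, check=True,
    )
    mbits = json.loads(out.stdout)["end"]["sum_received"]["bits_per_second"] / 1e6
    # A steady decline over successive samples suggests a network burst allowance running out.
    print(f"minute {i:2d}: {mbits:6.0f} MBit/s")
    time.sleep(50)  # roughly one sample per minute
```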
Note that these figures are for transfers within the same region; if you're transferring across regions, real performance may be much slower. Even for the larger instances, I'm seeing numbers of only a few hundred MBit/s.