Using iperf and bwm-ng to measure throughput

I am often asked to measure the bandwidth of a network path. Many users test this with a simple HTTP download or with speedtest.net. Unfortunately, any test using TCP will produce inaccurate results, because TCP is a session-oriented protocol: window size, latency, and the bandwidth of the return channel (for ACK packets) all affect the measurement. The most reliable way to measure true bandwidth is with UDP. That’s where my friends iperf and bwm-ng come in handy.

iperf is a tool for measuring bandwidth and reporting on throughput, jitter, and data loss. Others have written handy tutorials, but I’ll summarise the basics here.

iperf will run on any Linux or Unix (including Mac OS X), and must be installed on both hosts. Additionally, the “server” (receiving) host must allow incoming traffic on iperf’s port, which defaults to 5001/UDP and 5001/TCP. If you want to run bidirectional UDP tests, you must open 5001/UDP on both hosts’ firewalls.

iptables -I INPUT -p udp -m udp --dport 5001 -j ACCEPT
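
If you also plan to run TCP tests, open 5001/TCP the same way:

iptables -I INPUT -p tcp -m tcp --dport 5001 -j ACCEPT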

A network path is really two paths – the downstream path and the upstream (or return) path. With iperf, the “client” is the transmitter and the “server” is the receiver. So we’ll use the term “downstream” to refer to traffic transmitted from the client to the server, and “upstream” to refer to the opposite. Since these two paths can have different bandwidths and entirely different routes, we should measure them separately.

Start by opening terminal windows to both the client and server hosts, as well as the iperf man page. On the server, you only have to start listening. The following runs iperf as a server on the default port, 5001/UDP:

root@server:~# iperf -s -u
------------------------------------------------------------
Server listening on UDP port 5001
Receiving 1470 byte datagrams
UDP buffer size:   124 KByte (default)
------------------------------------------------------------

The server will output test results, as well as report them back to the client for display.
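
If the default port is unavailable, both ends can agree on another one with -p; just remember to adjust the firewall rule to match. For example, to use port 5002:

root@server:~# iperf -s -u -p 5002
root@client:~# iperf -u -c server.example.com -p 5002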

On the client, you have many options. You can push data at a given rate (-b) for a given number of seconds (-t). For example, to push 1 Mbit/s for 10 seconds:

root@client:~# iperf -u -c server.example.com -b 1M -t 10
------------------------------------------------------------
Client connecting to 172.16.0.2, UDP port 5001
Sending 1470 byte datagrams
UDP buffer size:   110 KByte (default)
------------------------------------------------------------
[  3] local 192.168.1.1 port 37731 connected with 172.16.0.2 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-10.0 sec  1.19 MBytes  1000 Kbits/sec
[  3] Sent 852 datagrams
[  3] Server Report:
[ ID] Interval       Transfer     Bandwidth       Jitter   Lost/Total Datagrams
[  3]  0.0-10.0 sec  1.19 MBytes  1.00 Mbits/sec  0.842 ms    0/  852 (0%)
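
If you want interim results from iperf itself, -i prints a report every N seconds while the test runs. For example, to report once per second during the same test:

root@client:~# iperf -u -c server.example.com -b 1M -t 10 -i 1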

You can request that the server perform a reverse connection to test the return path, either at the same time (-d, dual test) or in series (-r, tradeoff). This causes both ends to temporarily start both a client and a server.

root@client:~# iperf -u -c server.example.com -b 1M -t 10 -r
------------------------------------------------------------
Server listening on UDP port 5001
Receiving 1470 byte datagrams
UDP buffer size:   110 KByte (default)
------------------------------------------------------------
------------------------------------------------------------
Client connecting to 172.16.0.2, UDP port 5001
Sending 1470 byte datagrams
UDP buffer size:   110 KByte (default)
------------------------------------------------------------
[  4] local 192.168.1.1 port 46297 connected with 172.16.0.2 port 5001
[ ID] Interval       Transfer     Bandwidth
[  4]  0.0-10.0 sec  1.19 MBytes  1000 Kbits/sec
[  4] Sent 852 datagrams
[  4] Server Report:
[ ID] Interval       Transfer     Bandwidth       Jitter   Lost/Total Datagrams
[  4]  0.0-10.0 sec  1.19 MBytes    998 Kbits/sec  0.250 ms    2/  852 (0.23%)
[  3] local 192.168.1.1 port 5001 connected with 172.16.0.2 port 34916
[ ID] Interval       Transfer     Bandwidth       Jitter   Lost/Total Datagrams
[  3]  0.0-10.0 sec  1.19 MBytes  1.00 Mbits/sec  0.111 ms    0/  851 (0%)
[  3]  0.0-10.0 sec  1 datagrams received out-of-order

The above shows first the client->server transmission, then the server->client transmission. If the output seems hard to read, remember that each connection has an ID such as “[ 3]”, and that the host showing local port 5001 is the one receiving data.
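
To run both directions simultaneously instead of in series, use -d in place of -r. Keep in mind that the two transfers now share any common links, so they can affect each other’s results:

root@client:~# iperf -u -c server.example.com -b 1M -t 10 -d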

You can also specify the datagram size with -l. Many devices have limits on packets per second, which means you can push more data with 1470-byte datagrams than with 64-byte datagrams. The same link tested with 64-byte datagrams (requiring nearly 20,000 packets where previously we needed only 852) showed 6.6% packet loss:

root@client:~# iperf -u -c server.example.com -b 1M -t 10 -l 64
------------------------------------------------------------
Client connecting to 172.16.0.2, UDP port 5001
Sending 64 byte datagrams
UDP buffer size:   110 KByte (default)
------------------------------------------------------------
[  3] local 192.168.1.1 port 47784 connected with 172.16.0.2 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-10.0 sec  1.19 MBytes  1000 Kbits/sec
[  3] Sent 19533 datagrams
[  3] Server Report:
[ ID] Interval       Transfer     Bandwidth       Jitter   Lost/Total Datagrams
[  3]  0.0-10.0 sec  1.11 MBytes    933 Kbits/sec  0.134 ms 1294/19533 (6.6%)

To find the total packet size, add 28 bytes to the datagram size for the UDP and IP headers (8 + 20 bytes). For instance, setting 64-byte datagrams causes iperf to send 92-byte packets. Exceeding the MTU can produce even more interesting results, as each datagram is fragmented into multiple packets.
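
For example, assuming a typical 1500-byte Ethernet MTU, requesting 2000-byte datagrams forces IP fragmentation, and the loss of any one fragment discards the entire datagram:

root@client:~# iperf -u -c server.example.com -b 1M -t 10 -l 2000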

iperf provides final throughput results at the end of each test. However, I sometimes find it handy to get results as the test is running, or to report on packets/second. That’s when I use bwm-ng.

Try opening two more terminals, one each to the client and server. In each, start bwm-ng.

root@client:~# bwm-ng -u bits -t 1000

  bwm-ng v0.6 (probing every 1.000s), press 'h' for help
  input: /proc/net/dev type: rate
  |         iface                   Rx                   Tx                Total
  ==============================================================================
               lo:           0.00 Kb/s            0.00 Kb/s            0.00 Kb/s
             eth0:           0.00 Kb/s         1017.34 Kb/s         1017.34 Kb/s
             eth1:           0.00 Kb/s            0.00 Kb/s            0.00 Kb/s
  ------------------------------------------------------------------------------
            total:           0.00 Kb/s         1017.34 Kb/s         1017.34 Kb/s

By default, bwm-ng shows bytes/second. Press ‘u’ to cycle through bytes, bits, packets, and errors per second. Press ‘+’ or ‘-‘ to change the refresh time. I find that 1 or 2 seconds produces more accurate results on some hardware. Press ‘h’ for handy in-line help.
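
You can also select the unit and refresh time on the command line, and log samples to a file instead of using the curses display. For example, assuming your build of bwm-ng includes the csv output type, this records 30 one-second samples of packet rates:

root@client:~# bwm-ng -u packets -t 1000 -o csv -c 30 > bwm.csv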

Now, start the same iperf tests. Any packet loss will be immediately apparent, as the throughput measurements won’t match: the client will show 1 Mbit/s in the Tx column, while the server shows a lower number in the Rx column.

Note that bwm-ng cannot differentiate iperf traffic from anything else using the interface at the same time. Even so, the packets/sec display is useful for finding the maximum packet throughput limits of your hardware.
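
For example, start bwm-ng in packets mode on both hosts, then re-run the 64-byte test above and watch how far the server’s Rx packet rate falls behind the client’s Tx rate:

root@client:~# bwm-ng -u packets -t 1000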

One warning to those who want to test TCP throughput with iperf: you cannot specify the data rate. Instead, iperf in TCP mode will push as much data as the TCP window allows. On low-latency links, this is generally around 85% of the true channel bandwidth as measured by UDP tests. As latency increases, TCP bandwidth decreases.
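
A minimal TCP test simply omits -u on both ends (and needs 5001/TCP open). You can also experiment with the window size via -w:

root@server:~# iperf -s
root@client:~# iperf -c server.example.com -t 10 -w 256K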

Kathleen:

We were using iperf for a while. We switched to pathtest – it’s still command line and still free, but more customizable – TCP, UDP and ICMP – and results have been consistent. http://www.testmypath.com

Tyler Wagner:

Thanks for the tip. However, it’s worth noting that pathtest is free as in beer, not free as in speech. It can also only be downloaded after giving your details to AppNeta, and is available only as 32-bit binaries for Windows and Linux. Direct download link:

      http://www.testmypath.com/success.html
