Using iperf and bwm-ng to measure throughput

I am often asked to measure the bandwidth of a network path. Many users test this with a simple HTTP download or a similar TCP-based tool. Unfortunately, any test using TCP will produce inaccurate results, due to the limitations of a session-oriented protocol. TCP window size, latency, and the bandwidth of the return channel (for ACK messages) all affect the results. The most reliable way to measure true bandwidth is with UDP. That’s where my friends iperf and bwm-ng come in handy.

iperf is a tool for measuring bandwidth and reporting on throughput, jitter, and data loss. Others have written handy tutorials, but I’ll summarise the basics here.

iperf will run on any Linux or Unix (including Mac OS X), and must be installed on both hosts. Additionally, the “server” (receiving) host must allow incoming traffic on some port (which defaults to 5001/UDP and 5001/TCP). If you want to run bidirectional tests with UDP, this means you must open 5001/UDP on both hosts’ firewalls:

iptables -I INPUT -p udp -m udp --dport 5001 -j ACCEPT

A network path is really two paths – the downstream path and the upstream (or return) path. With iperf, the “client” is the transmitter and the “server” is the receiver. So we’ll use the term “downstream” to refer to traffic transmitted from the client to the server, and “upstream” to refer to the opposite. Since these two paths can have different bandwidths and entirely different routes, we should measure them separately.

Start by opening terminal windows to both the client and server hosts, as well as the iperf man page. On the server, you only have to start listening for UDP on the default port, 5001.
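Assuming the classic iperf2 command-line syntax, that looks like this:

```shell
# Run iperf as a server, listening for UDP on the default port (5001)
iperf -s -u
```

Add `-p <port>` if you need to listen on a different port (and adjust the firewall rule to match).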

The server will output test results, as well as report them back to the client for display.

On the client, you have many options. You can push data at a given rate (-b) for a given duration (-t) — for example, 1 Mbit/s for 10 seconds.
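Assuming the server’s hostname is server.example.com (a placeholder), the client command would be:

```shell
# Send UDP traffic at 1 Mbit/s for 10 seconds
iperf -c server.example.com -u -b 1M -t 10
```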

You can request that the server perform a reverse connection to test the return path, either at the same time (-d, dual test) or in series (-r, tradeoff). This causes both ends to temporarily start both a client and a server.
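With the same placeholder hostname, those two options look like this:

```shell
# Dual test: both directions at the same time
iperf -c server.example.com -u -b 1M -d

# Tradeoff test: client->server first, then server->client
iperf -c server.example.com -u -b 1M -r
```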

The output shows first the client->server transmission, then the server->client transmission. If it seems hard to read, note that each simultaneous connection has an ID such as “[ 3]”; look for port 5001 to identify the host that is receiving data.

You can also specify the datagram size. Many devices have limits on packets per second, which means you can push more data with 1470-byte datagrams than with 64-byte datagrams. The same link tested with 64-byte datagrams (requiring nearly 20,000 packets where previously we needed only 852) showed 6% packet loss.
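The datagram size is set with -l (the hostname is again a placeholder):

```shell
# Push 1 Mbit/s using 64-byte datagrams instead of the default 1470
iperf -c server.example.com -u -b 1M -l 64
```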

To find the total packet size, add 28 bytes to the datagram size for UDP+IP headers. For instance, setting 64-byte datagrams causes iperf to send 92-byte packets. Exceeding the MTU can produce even more interesting results, as packets are fragmented.
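The 28 bytes of overhead are an 8-byte UDP header plus a 20-byte IPv4 header, and the arithmetic is easy to check in the shell:

```shell
# 64-byte datagram + 8-byte UDP header + 20-byte IPv4 header = bytes on the wire
echo $((64 + 8 + 20))
```

This prints 92.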

iperf provides final throughput results at the end of each test. However, I sometimes find it handy to get results as the test is running, or to report on packets/second. That’s when I use bwm-ng.

Try opening two more terminals, one each to the client and server. In each, start bwm-ng.
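A plain `bwm-ng` works; if you want to set the refresh interval at startup, it also accepts a timeout in milliseconds (check your version’s man page, as options vary between releases):

```shell
# Show throughput for all interfaces, refreshing every 2 seconds
bwm-ng -t 2000
```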

By default, bwm-ng shows bytes/second. Press ‘u’ to cycle through bytes, bits, packets, and errors per second. Press ‘+’ or ‘-’ to change the refresh time; I find that 1 or 2 seconds produces more accurate results on some hardware. Press ‘h’ for handy in-line help.

Now, start the same iperf tests. Any packet losses will be immediately apparent, as the throughput measurements won’t match. The client will show 1 mbit in the Tx column, while the server will show a lower number in the Rx column.

However, bwm-ng does not differentiate between iperf traffic and any other traffic on the interface at the same time. Even so, the packets/sec display is still useful for finding the maximum packet throughput limits of your hardware.

One warning to those who want to test TCP throughput with iperf: you cannot specify the data rate. Instead, iperf in TCP mode will scale up the data rate until it finds the maximum safe window size. For low-latency links, this is generally 85% of the true channel bandwidth as measured by UDP tests. However, as latency increases, TCP bandwidth decreases.
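A TCP test is simply the client command without -u or -b (the hostname is a placeholder):

```shell
# TCP mode: iperf scales the rate itself, so no -b is given
iperf -c server.example.com -t 10
```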


Comments

Kathleen:

    We were using iperf for a while. We switched to pathtest – it’s still command line and still free, but more customizable – TCP, UDP and ICMP and results have been consistent.


    Tyler Wagner:

      Thanks for the tip. However, it’s worth noting that pathtest is free as in beer, not free as in speech. It can also only be downloaded after entering your details to AppNeta, and is available only as 32-bit binaries for Windows and Linux. Direct download link:

