Throughput

What is it?

Throughput is the actual amount of useful data transmitted over a communication channel per unit of time, usually measured in bits per second (e.g., Mbps). It differs from bandwidth (the theoretical capacity of a link) and latency (the delay before data arrives): throughput is what you actually achieve in practice, and it can be reduced by protocol overhead, errors, congestion, and hardware limits. In network fundamentals and across Audio/Video, Maker, and Web contexts, throughput determines how many audio/video streams you can deliver simultaneously, how fast firmware or sensor data can be transferred to a device, and how quickly web assets or API responses reach clients.
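The definition above boils down to a simple ratio: useful bits delivered divided by elapsed time. A minimal sketch (the transfer size and duration are made-up illustration numbers):

```python
# Throughput = useful data delivered / time taken.
# Hypothetical numbers for illustration.

payload_bytes = 250_000_000   # 250 MB of useful data actually delivered
transfer_seconds = 40.0       # wall-clock time for the transfer

throughput_mbps = (payload_bytes * 8) / transfer_seconds / 1_000_000
print(f"Throughput: {throughput_mbps:.1f} Mbps")  # 50.0 Mbps
```

Note that only the payload counts: headers and retransmitted bytes consume link capacity without adding to throughput.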

Practical example

Audio/Video: viewers of a live 4K stream may experience stuttering when network throughput is insufficient; even if the ISP advertises 500 Mbps, TCP overhead, packet loss, or server CPU limits can reduce the actual throughput.

Maker: when uploading a large firmware image or dataset to a Raspberry Pi over Wi‑Fi, or doing an OTA update for an ESP32, throughput determines how long the update takes; a slow or noisy link reduces actual throughput and can force retransmissions.

Web: for a busy web application, server and network throughput determine how many concurrent users get smooth pages and media. You can measure throughput with tools like iperf or browser devtools and improve it with compression, CDNs, caching, or more efficient protocols (e.g., HTTP/2, QUIC).
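Measuring throughput, whatever the context, always follows the same pattern used by tools like iperf: move a known amount of data, time it, and divide. A minimal sketch (the in-memory "transfer" is a stand-in for a real download, upload, or OTA write):

```python
import time

def measure_throughput_mbps(transfer_fn) -> float:
    """Time transfer_fn (which returns the number of bytes moved)
    and report the achieved throughput in Mbps."""
    start = time.perf_counter()
    bytes_moved = transfer_fn()
    elapsed = time.perf_counter() - start
    return (bytes_moved * 8) / elapsed / 1_000_000

# Stand-in for a real transfer: allocate and touch 10 MB in memory.
demo_transfer = lambda: len(bytearray(10_000_000))
print(f"Achieved: {measure_throughput_mbps(demo_transfer):.1f} Mbps")
```

In practice you would replace `demo_transfer` with the real operation (an HTTP download, a firmware upload) and average several runs, since a single measurement is noisy.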

Test your knowledge

Why does a network link advertised as 100 Mbps often deliver only about 60 Mbps of throughput in practice?
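Part of the gap can be worked out from header overhead alone; a sketch with typical sizes (1500-byte Ethernet MTU, 20-byte IP and 20-byte TCP headers) shows headers cost only a few percent, so the rest of the drop to ~60 Mbps must come from packet loss, retransmissions, and congestion control:

```python
# Best-case goodput on a 100 Mbps link, counting protocol headers only.
# Assumed sizes: 1500-byte MTU, 20-byte IPv4 + 20-byte TCP headers,
# plus Ethernet framing (header + FCS + preamble + inter-frame gap).
link_mbps = 100
mtu = 1500
ip_tcp_headers = 20 + 20
ethernet_overhead = 14 + 4 + 8 + 12

payload = mtu - ip_tcp_headers           # 1460 bytes of useful data per packet
wire_bytes = mtu + ethernet_overhead     # 1538 bytes actually on the wire
goodput_mbps = link_mbps * payload / wire_bytes
print(f"Best-case goodput: {goodput_mbps:.1f} Mbps")  # ≈ 94.9 Mbps
```

So headers alone cannot explain a 40% loss; that is why the answer has to involve the dynamic effects (loss, retransmission, TCP congestion control, competing traffic) as well.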

