Networking Introduction

In this chapter, you will find a very short introduction to various networking concepts. These concepts are important for understanding when to use QUIC.

1. TCP, UDP, and QUIC Comparison

Let's compare TCP, UDP, and QUIC.

  • unreliable: Transport packets are not assured of arrival and ordering.
  • reliable: Transport packets are assured of arrival and ordering.
Feature                                    | TCP          | UDP           | QUIC
Connection-Oriented                        | Yes          | No            | Yes
Transport Guarantees                       | Reliable     | Unreliable    | Reliable (a)
Packet Transfer                            | Stream-based | Message-based | Stream-based
Header Size                                | ~20 bytes    | 8 bytes       | ~16 bytes (depending on connection id)
Flow Control, Congestion Avoidance/Control | Yes          | No            | Yes (b)
Based On                                   | IP           | IP            | UDP

(a) Unreliable transmission is supported as an extension.
(b) QUIC's flow-control and congestion-control implementations run in user space, whereas TCP's run in kernel space; however, kernel implementations of QUIC may appear in the future.

2. Issues with TCP

TCP has been around for a long time and was not designed with the modern internet in mind. It has several difficulties that QUIC tries to resolve.

Head-of-line Blocking

One of the biggest issues with TCP is head-of-line blocking. The ordering guarantee behind it is convenient, because it ensures that all packets are sent and arrive in order. However, in cases of high throughput (multiplayer game networking) and large bursts of load in a short time (web page loads), it can severely impact latency.

The issue is demonstrated in the following animation:

Head of line blocking

This animation shows that if a certain packet drops in transmission, all packets have to wait at the transport layer until it is resent by the other end. Once the delayed packet arrives at its destination, all later packets are passed on to the destination application together.

Let's look at two areas where head-of-line blocking causes problems.

Web Networking

As websites need an ever larger number of HTTP requests (HTML, CSS, JavaScript, images) to display all their content, the impact of head-of-line blocking has also increased. To improve on this, HTTP/2 introduced request multiplexing within a single TCP stream, which allows servers to send multiple responses at the same time. However, the loss of a single packet still blocks all response streams, because they exist within the context of one TCP stream.

Connection Setup Duration

In the usual TCP + TLS + HTTP stack, TCP needs a 3-message handshake to set up a session between server and client. TLS then performs its own handshake on top of that, adding at least one more round trip even with TLS 1.3. By integrating the transport-protocol and TLS handshakes, QUIC can make connection setup more efficient.

The QUIC protocol

QUIC is a general-purpose network protocol built on top of UDP, and standardized by the IETF. Although QUIC is still relatively new, the protocol is used for all connections from Chrome web browsers to the Google servers.

QUIC solves a number of transport-layer and application-layer problems experienced by modern web applications. It is very similar to TCP+TLS+HTTP2, but implemented on top of UDP. Having QUIC as a self-contained protocol allows innovations which aren’t possible with existing protocols as they are hampered by legacy clients and middleboxes.

Key advantages of QUIC over TCP+TLS+HTTP2 include:

  • Improved connection establishment speed (0-RTT).
  • Improved congestion control, with congestion-control algorithms running in user space at both endpoints.
  • Improved bandwidth estimation in each direction to avoid congestion.
  • Improved multiplexing without head-of-line blocking.
  • Forward error correction (FEC), explored in Google's original QUIC but not part of the IETF standard.

While QUIC was originally designed with the web in mind, it offers interesting opportunities in other areas such as game networking. One thing is for sure: QUIC has great potential and will serve us in the future as the transport underneath HTTP/3.

In the upcoming chapters we will discuss various aspects of QUIC in relation to Quinn.


Quinn is a pure-Rust, async-compatible implementation of the IETF QUIC transport protocol. The project was founded by Dirkjan Ochtman and Benjamin Saunders as a side project in 2018, and has seen more than 30 releases since then. If you're using Quinn in a commercial setting, please consider sponsoring the project.

Features

  • Simultaneous client/server operation
  • Ordered and unordered stream reads for improved performance
  • Works on stable Rust, tested on Linux, macOS and Windows
  • Pluggable cryptography, with a standard implementation backed by rustls and ring
  • Application-layer datagrams for small, unreliable messages
  • Future-based async API
  • Minimum supported Rust version of 1.66

Overview

  • quinn: High-level async API based on tokio, see examples for usage. This will be used by most developers. (Basic benchmarks are included.)
  • quinn-proto: Deterministic state machine of the protocol which performs no I/O internally and is suitable for use with custom event loops (and potentially a C or C++ API).
  • quinn-udp: UDP sockets with ECN information tuned for the protocol.
  • bench: Benchmarks without any framework.
  • fuzz: Fuzz tests.
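To depend on the high-level API from your own project, a minimal Cargo.toml dependency section might look like the following. The version numbers here are illustrative, not prescriptive; check crates.io for the current releases.

```toml
[dependencies]
# quinn's async API is built on tokio.
quinn = "0.10"
tokio = { version = "1", features = ["full"] }
```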

Getting Started

Examples

$ cargo run --example server ./
$ cargo run --example client https://localhost:4433/Cargo.toml

This launches an HTTP 0.9 server on the loopback address serving the current working directory, with the client fetching ./Cargo.toml. By default, the server generates a self-signed certificate and stores it to disk, where the client will automatically find and trust it.

Usage Notes


Buffers

A Quinn endpoint corresponds to a single UDP socket, no matter how many connections are in use. Handling high aggregate data rates on a single endpoint can require a larger UDP buffer than is configured by default in most environments. If you observe erratic latency and/or throughput over a stable network link, consider increasing the buffer sizes used. For example, you could adjust the SO_SNDBUF and SO_RCVBUF options of the UDP socket to be used before passing it in to Quinn. Note that some platforms (e.g. Linux) require elevated privileges or modified system configuration for a process to increase its UDP buffer sizes.

Certificates

By default, Quinn clients validate the cryptographic identity of servers they connect to. This prevents an active, on-path attacker from intercepting messages, but requires trusting some certificate authority. For many purposes, this can be accomplished by using certificates from Let's Encrypt for servers, and relying on the default configuration for clients.

For some cases, including peer-to-peer, trust-on-first-use, deliberately insecure applications, or any case where servers are not identified by domain name, this isn't practical. Arbitrary certificate validation logic can be implemented by enabling the dangerous_configuration feature of rustls and constructing a Quinn ClientConfig with an overridden certificate verifier by hand.

When operating your own certificate authority doesn't make sense, rcgen can be used to generate self-signed certificates on demand. To support trust-on-first-use, servers that automatically generate self-signed certificates should write their generated certificate to persistent storage and reuse it on future runs.

Contribution

All feedback welcome. Feel free to file bugs, requests for documentation and any other feedback to the issue tracker.

The quinn-proto test suite uses simulated IO for reproducibility and to avoid long sleeps in certain timing-sensitive tests. If the SSLKEYLOGFILE environment variable is set, the tests will emit UDP packets for inspection using external protocol analyzers like Wireshark, and NSS-compatible key logs for the client side of each connection will be written to the path specified in the variable.

The minimum supported Rust version for published releases of our crates will always be at least 6 months old at the time of release.

Certificates

In this chapter, we discuss the configuration of the certificates that are required for a working Quinn connection.

As QUIC uses TLS 1.3 for authentication of connections, the server needs to provide the client with a certificate confirming its identity, and the client must be configured to trust the certificates it receives from the server.

Insecure Connection

For our example use case, the easiest way to allow the client to trust our server is to disable certificate verification (don't do this in production!). When the rustls dangerous_configuration feature flag is enabled, a client can be configured to trust any server.

Start by adding a rustls dependency with the dangerous_configuration feature flag to your Cargo.toml file.

quinn = "*"
rustls = { version = "*", features = ["dangerous_configuration", "quic"] }

Then, allow the client to skip certificate validation by implementing ServerCertVerifier and having it report success for any server.

#![allow(unused)]
fn main() {
// Implementation of `ServerCertVerifier` that verifies everything as trustworthy.
struct SkipServerVerification;

impl SkipServerVerification {
    fn new() -> Arc<Self> {
        Arc::new(Self)
    }
}

impl rustls::client::ServerCertVerifier for SkipServerVerification {
    fn verify_server_cert(
        &self,
        _end_entity: &rustls::Certificate,
        _intermediates: &[rustls::Certificate],
        _server_name: &rustls::ServerName,
        _scts: &mut dyn Iterator<Item = &[u8]>,
        _ocsp_response: &[u8],
        _now: std::time::SystemTime,
    ) -> Result<rustls::client::ServerCertVerified, rustls::Error> {
        Ok(rustls::client::ServerCertVerified::assertion())
    }
}
}

After that, modify the ClientConfig to use this ServerCertVerifier implementation.

#![allow(unused)]
fn main() {
fn configure_client() -> ClientConfig {
    let crypto = rustls::ClientConfig::builder()
        .with_safe_defaults()
        .with_custom_certificate_verifier(SkipServerVerification::new())
        .with_no_client_auth();

    ClientConfig::new(Arc::new(crypto))
}
}

Finally, if you plug this ClientConfig into Endpoint::set_default_client_config(), your client endpoint will treat every server certificate as trusted.

Using Certificates

In this section, we look at certifying an endpoint with a certificate. The certificate can be signed with the endpoint's own key (self-signed), or with a certificate authority's key.

Self Signed Certificates

A self-signed certificate is one the server signs with its own private key rather than having it signed by a certificate authority. This is simpler because no third party is involved in signing the server's certificate. However, self-signed certificates do not protect users from person-in-the-middle attacks: an interceptor can trivially replace the certificate with one it has signed itself. Self-signed certificates can be created with, among other options, the rcgen crate or the openssl binary. This example uses rcgen to generate a certificate.

Let's look at an example:

#![allow(unused)]
fn main() {
fn generate_self_signed_cert() -> Result<(rustls::Certificate, rustls::PrivateKey), Box<dyn Error>>
{
    let cert = rcgen::generate_simple_self_signed(vec!["localhost".to_string()])?;
    let key = rustls::PrivateKey(cert.serialize_private_key_der());
    Ok((rustls::Certificate(cert.serialize_der()?), key))
}
}

Note that generate_simple_self_signed returns a Certificate that can be serialized to both .der and .pem formats.

Non-self-signed Certificates

For this example, we use Let's Encrypt, a well-known Certificate Authority (CA) (certificate issuer) which distributes certificates for free.

Generate Certificate

certbot can be used with Let's Encrypt to generate certificates; its website comes with clear instructions. Because we're generating a certificate for an internal test server, the process used will be slightly different compared to what you would do when generating certificates for an existing (public) website.

On the certbot website, select that you do not have a public web server and follow the given installation instructions. certbot must answer a cryptographic challenge of the Let's Encrypt API to prove that you control the domain. It needs to listen on port 80 (HTTP) or 443 (HTTPS) to achieve this. Open the appropriate port in your firewall and router.

Once certbot is installed, run certbot certonly --standalone. This command starts a web server in the background and begins the challenge. certbot asks for the required data and writes the certificate chain to fullchain.pem and the private key to privkey.pem. These files can then be referenced in code.

#![allow(unused)]
fn main() {
use std::{error::Error, fs::File, io::BufReader};

pub fn read_certs_from_file(
) -> Result<(Vec<rustls::Certificate>, rustls::PrivateKey), Box<dyn Error>> {
    let mut cert_chain_reader = BufReader::new(File::open("./fullchain.pem")?);
    let certs = rustls_pemfile::certs(&mut cert_chain_reader)?
        .into_iter()
        .map(rustls::Certificate)
        .collect();

    let mut key_reader = BufReader::new(File::open("./privkey.pem")?);
    // if the file starts with "BEGIN RSA PRIVATE KEY"
    // let mut keys = rustls_pemfile::rsa_private_keys(&mut key_reader)?;
    // if the file starts with "BEGIN PRIVATE KEY"
    let mut keys = rustls_pemfile::pkcs8_private_keys(&mut key_reader)?;

    assert_eq!(keys.len(), 1);
    let key = rustls::PrivateKey(keys.remove(0));

    Ok((certs, key))
}
}
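The commented-out lines in the example above choose a parser based on the PEM header line. A small std-only helper (hypothetical, not part of rustls or rustls-pemfile) can make that choice explicit:

```rust
// The kind of private key a PEM file holds, judged by its header line.
#[derive(Debug, PartialEq)]
enum KeyKind {
    Pkcs8,   // "-----BEGIN PRIVATE KEY-----"      -> pkcs8_private_keys
    Rsa,     // "-----BEGIN RSA PRIVATE KEY-----"  -> rsa_private_keys
    Unknown, // anything else
}

fn detect_key_kind(pem: &str) -> KeyKind {
    // Look at the first BEGIN header in the file.
    match pem.lines().find(|l| l.starts_with("-----BEGIN")) {
        Some(l) if l.contains("BEGIN RSA PRIVATE KEY") => KeyKind::Rsa,
        Some(l) if l.contains("BEGIN PRIVATE KEY") => KeyKind::Pkcs8,
        _ => KeyKind::Unknown,
    }
}

fn main() {
    assert_eq!(detect_key_kind("-----BEGIN PRIVATE KEY-----\n..."), KeyKind::Pkcs8);
    assert_eq!(detect_key_kind("-----BEGIN RSA PRIVATE KEY-----\n..."), KeyKind::Rsa);
    println!("ok");
}
```

You could use this to dispatch to rustls_pemfile::pkcs8_private_keys or rustls_pemfile::rsa_private_keys instead of hard-coding one of them.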

Configuring Certificates

Now that you have a valid certificate, the client and server need to be configured to use it. After configuring, plug the configuration into the Endpoint.

Configure Server

#![allow(unused)]
fn main() {
let server_config = ServerConfig::with_single_cert(certs, key)?;
}

This is the only thing you need to do for your server to be secured.

Configure Client

#![allow(unused)]
fn main() {
let client_config = ClientConfig::with_native_roots();
}

This is the only thing you need to do for your client to trust a server certificate signed by a conventional certificate authority.



Next, let's have a look at how to set up a connection.

Connection Setup

In the previous chapter we looked at how to configure a certificate. This aspect is omitted in this chapter to prevent duplication. But remember that this is required to get your Endpoint up and running. This chapter explains how to set up a connection and prepare it for data transfer.

It all starts with the Endpoint struct, the entry point of the library.

Example

Let's start by defining some constants.

#![allow(unused)]
fn main() {
static SERVER_NAME: &str = "localhost";

fn client_addr() -> SocketAddr {
    "127.0.0.1:5000".parse::<SocketAddr>().unwrap()
}

fn server_addr() -> SocketAddr {
    "127.0.0.1:5001".parse::<SocketAddr>().unwrap()
}
}

Server

First, the server endpoint should be bound to a socket. The server() method, which can be used for this, returns an Endpoint. An Endpoint can both start outgoing connections and accept incoming ones.

#![allow(unused)]
fn main() {
async fn server() -> Result<(), Box<dyn Error>> {
    // Bind this endpoint to a UDP socket on the given server address.
    // `config` is the ServerConfig built in the certificate chapter.
    let endpoint = Endpoint::server(config, server_addr())?;

    // Start iterating over incoming connections.
    while let Some(conn) = endpoint.accept().await {
        let mut connection = conn.await?;

        // Save connection somewhere, start transferring, receiving data, see DataTransfer tutorial.
    }

    Ok(())
}
}

Client

The client() method also returns an Endpoint. The client then connects to the server using the connect(server_addr, server_name) method.
The SERVER_NAME argument is the DNS name that must match the certificate configured on the server.

#![allow(unused)]
fn main() {
async fn client() -> Result<(), Box<dyn Error>> {
    // Bind this endpoint to a UDP socket on the given client address.
    let mut endpoint = Endpoint::client(client_addr())?;

    // Connect to the server passing in the server name which is supposed to be in the server certificate.
    let connection = endpoint.connect(server_addr(), SERVER_NAME)?.await?;

    // Start transferring, receiving data, see data transfer page.

    Ok(())
}
}



Next up, let's have a look at sending data over this connection.

Data Transfer

The previous chapter explained how to set up an Endpoint and then get access to a Connection. This chapter continues with the subject of sending data over this connection.

Multiplexing

Multiplexing is the act of carrying multiple independent streams over a single connection. This can have a significant positive effect on the performance of an application. With QUIC, the programmer has full control over stream allocation.

Stream Types

QUIC provides support for both stream- and message-based communication. Streams and messages can be initiated by both the client and the server.

Type                             | Description                            | Reference
Bidirectional Stream             | two-way stream communication           | see open_bi
Unidirectional Stream            | one-way stream communication           | see open_uni
Unreliable Messaging (extension) | message-based unreliable communication | see send_datagram

How to Use

New streams can be created with Connection's open_bi() and open_uni() methods.

Bidirectional Streams

With bidirectional streams, data can be sent in both directions. For example, from the connection initiator to the peer and the other way around.

open bidirectional stream

#![allow(unused)]
fn main() {
async fn open_bidirectional_stream(connection: Connection) -> anyhow::Result<()> {
    let (mut send, recv) = connection
        .open_bi()
        .await?;

    send.write_all(b"test").await?;
    send.finish().await?;
    
    let received = recv.read_to_end(10).await?;

    Ok(())
}
}

iterate incoming bidirectional stream(s)

#![allow(unused)]
fn main() {
async fn receive_bidirectional_stream(connection: Connection) -> anyhow::Result<()> {
    while let Ok((mut send, recv)) = connection.accept_bi().await {
        // Because it is a bidirectional stream, we can both send and receive.
        println!("request: {:?}", recv.read_to_end(50).await?);

        send.write_all(b"response").await?;
        send.finish().await?;
    }

    Ok(())
}
}

Unidirectional Streams

With unidirectional streams, you can carry data only in one direction: from the initiator of the stream to its peer. It is possible to get reliability without ordering (so no head-of-line blocking) by opening a new stream for each packet.

open unidirectional stream

#![allow(unused)]
fn main() {
async fn open_unidirectional_stream(connection: Connection) -> anyhow::Result<()> {
    let mut send = connection
        .open_uni()
        .await?;

    send.write_all(b"test").await?;
    send.finish().await?;

    Ok(())
}
}
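The stream-per-packet technique mentioned above can be sketched as follows. This is a hypothetical send_messages helper in the same Quinn-style API as the other examples in this chapter, not a tested implementation:

```rust
async fn send_messages(connection: Connection, messages: Vec<Vec<u8>>) -> anyhow::Result<()> {
    for msg in messages {
        // One unidirectional stream per message: streams are delivered and
        // retransmitted independently, so a lost packet only stalls its own
        // message instead of blocking all of them.
        let mut send = connection.open_uni().await?;
        send.write_all(&msg).await?;
        send.finish().await?;
    }
    Ok(())
}
```

Note that awaiting each stream in turn still serializes the sends; spawning a task per message would let them proceed concurrently.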

iterating incoming unidirectional stream(s)

#![allow(unused)]
fn main() {
async fn receive_unidirectional_stream(connection: Connection) -> anyhow::Result<()> {
    while let Ok(recv) = connection.accept_uni().await {
        // Because it is a unidirectional stream, we can only receive not send back.
        println!("{:?}", recv.read_to_end(50).await?);
    }

    Ok(())
}
}

Unreliable Messaging

With unreliable messaging, you can transfer data without reliability. This could be useful if data arrival isn't essential or when high throughput is important.

send datagram

#![allow(unused)]
fn main() {
async fn send_unreliable(connection: Connection) -> anyhow::Result<()> {
    // `send_datagram` queues the datagram immediately; it is not async.
    connection.send_datagram(b"test".into())?;

    Ok(())
}
}

iterating incoming datagrams

#![allow(unused)]
fn main() {
async fn receive_datagram(connection: Connection) -> anyhow::Result<()> {
    while let Ok(received_bytes) = connection.read_datagram().await {
        // Datagrams are standalone messages; there is no stream to reply on.
        println!("request: {:?}", received_bytes);
    }

    Ok(())
}
}