# Part 1 - Project Setup & Your First HTTP Proxy
Welcome to the first part of this series on building an API gateway in Rust! By the end of this tutorial, you’ll have a working reverse proxy that accepts incoming HTTP requests and forwards them to a backend server. It’s a surprisingly small amount of code, and it’ll give us a solid foundation to build on.
## What is a Reverse Proxy?
Before we write any code, let’s make sure we’re on the same page about what a reverse proxy actually does.
When a client (like a web browser or a mobile app) makes a request, it normally goes directly to the server hosting the application. A reverse proxy sits between the client and the server. The client sends its request to the proxy, and the proxy forwards it to the real backend server, then sends the response back to the client.
The client doesn’t know or care that there’s a proxy in the middle — as far as it’s concerned, it’s just talking to a server.
```
Client ---> API Gateway (our proxy) ---> Backend Service
Client <--- API Gateway (our proxy) <--- Backend Service
```
This is the core of what an API gateway does, and everything else we’ll build in this series (routing, rate limiting, auth) is layered on top of this basic proxying behavior.
## Setting Up the Project
Let’s create a new Rust project. Open your terminal and run:
```shell
$ cargo new ferroway
$ cd ferroway
```
This gives us a standard Rust project with a Cargo.toml and a src/main.rs. The name Ferroway is a play on “ferro” (Latin for iron, like Rust) and “gateway”.
We’re going to need a few dependencies. Open up Cargo.toml and update it:
```toml
[package]
name = "ferroway"
version = "0.1.0"
edition = "2021"

[dependencies]
tokio = { version = "1", features = ["full"] }
hyper = { version = "1", features = ["full"] }
hyper-util = { version = "0.1", features = ["full"] }
http-body-util = "0.1"
bytes = "1"
```
Let’s talk about what each of these does:
- `tokio` - This is the async runtime for Rust. If you’re coming from Go, think of it like the goroutine scheduler. If you’re coming from Node.js, it’s similar to the event loop. It lets us handle many connections concurrently without creating a new thread for each one.
- `hyper` - A low-level HTTP library for Rust. It handles parsing HTTP requests and responses. We’re using a low-level library deliberately; higher-level frameworks like Actix or Axum would hide the details we want to learn about.
- `hyper-util` - Utility types that complement hyper, including a convenient HTTP client and server builder.
- `http-body-util` - Helpers for working with HTTP request and response bodies.
- `bytes` - Efficient byte buffer handling. HTTP is fundamentally about shuffling bytes around, and this crate makes that ergonomic.
## A Quick Note on Async Rust
If you haven’t worked with async Rust before, here’s the quick version. When you see async fn, it means the function can pause and resume — it doesn’t block a thread while waiting for something like a network response. When you see .await, that’s where the pausing happens.
```rust
// This function can pause while waiting for the network
async fn fetch_data() -> String {
    // .await pauses here until the response comes back,
    // but the thread is free to do other work in the meantime
    let response = make_request().await;
    response
}
```
This is how we can handle thousands of connections with just a few threads. You don’t need to deeply understand the mechanics right now — just know that async and .await let us write code that looks sequential but runs concurrently.
## Building the Proxy
Let’s replace the contents of src/main.rs with our proxy. We’ll build it step by step.
First, let’s set up the basic structure — a server that listens for incoming connections:
```rust
use bytes::Bytes;
use http_body_util::{combinators::BoxBody, BodyExt, Full};
use hyper::server::conn::http1;
use hyper::service::service_fn;
use hyper::{Request, Response};
use hyper_util::rt::TokioIo;
use std::net::SocketAddr;
use tokio::net::TcpListener;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let addr = SocketAddr::from(([127, 0, 0, 1], 3000));
    let listener = TcpListener::bind(addr).await?;
    println!("Ferroway API Gateway listening on http://{}", addr);

    loop {
        let (stream, _) = listener.accept().await?;
        let io = TokioIo::new(stream);

        tokio::task::spawn(async move {
            if let Err(err) = http1::Builder::new()
                .serve_connection(io, service_fn(handle_request))
                .await
            {
                eprintln!("Error serving connection: {:?}", err);
            }
        });
    }
}
```
Let’s break this down:
- `#[tokio::main]` - This macro sets up the Tokio async runtime. Without it, we can’t use `.await` in our `main` function.
- `TcpListener::bind(addr).await?` - We bind a TCP listener to port 3000. The `?` at the end is Rust’s way of saying “if this fails, return the error”. It’s similar to `if err != nil { return err }` in Go, but more concise.
- The `loop` - We continuously accept incoming connections. For each connection, we spawn a new async task with `tokio::task::spawn`. This is similar to `go handleConnection(conn)` in Go: it runs concurrently without blocking.
- `service_fn(handle_request)` - This tells hyper to call our `handle_request` function for every incoming HTTP request on this connection.
Now let’s implement the handle_request function that actually proxies the request:
```rust
async fn handle_request(
    req: Request<hyper::body::Incoming>,
) -> Result<Response<BoxBody<Bytes, hyper::Error>>, hyper::Error> {
    // Backend target (hard-coded for now)
    let host = "127.0.0.1";
    let port = 8080;

    let path = req.uri().path().to_string();
    let method = req.method().clone();
    println!("--> {} {}", method, path);

    // Connect to the backend
    let backend_stream = tokio::net::TcpStream::connect(format!("{}:{}", host, port)).await;

    match backend_stream {
        Ok(stream) => {
            let io = TokioIo::new(stream);
            let (mut sender, conn) = hyper::client::conn::http1::handshake(io).await?;

            // Drive the connection in the background
            tokio::task::spawn(async move {
                if let Err(err) = conn.await {
                    eprintln!("Connection error: {:?}", err);
                }
            });

            // Forward the request
            let upstream_req = Request::builder()
                .method(method)
                .uri(&path)
                .header("Host", host)
                .body(req.into_body().boxed())
                .unwrap();

            let response = sender.send_request(upstream_req).await?;

            // Map the response body so it matches our return type
            Ok(response.map(|b| b.boxed()))
        }
        Err(_) => {
            // Backend is unreachable
            let body = Full::new(Bytes::from("502 Bad Gateway"))
                .map_err(|never| match never {})
                .boxed();
            Ok(Response::builder().status(502).body(body).unwrap())
        }
    }
}
```
There’s a lot happening here, so let’s walk through it:
Ownership in Rust — You’ll notice we clone req.method() and convert req.uri().path() to a String. In Rust, values have a single owner. When we pass req into the request body later, we can’t use its fields anymore. So we copy what we need first. This is the ownership system at work — it prevents bugs like use-after-free that plague C/C++ code.
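Here is that pattern in miniature, using only the standard library. The `Req` struct and `send` function below are illustrative stand-ins, not part of the proxy:

```rust
// A tiny stand-in for hyper's Request. `send` takes ownership of the
// whole value, just as `req.into_body()` consumes the request, so we
// copy out what we still need before handing it over.
struct Req {
    method: String,
    body: Vec<u8>,
}

fn send(req: Req) -> usize {
    req.body.len() // `req` is owned here and dropped at the end
}

fn main() {
    let req = Req {
        method: "GET".to_string(),
        body: b"hello".to_vec(),
    };

    // Clone the method first; after `send(req)` below, `req` is gone.
    let method = req.method.clone();
    let sent = send(req);

    // Using `req.method` here would be a compile error ("value moved"),
    // but our clone is still valid:
    println!("{} sent {} bytes", method, sent);
}
```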
Pattern Matching — The match expression is Rust’s way of handling different outcomes. It’s like a switch statement, but the compiler forces you to handle every case. Here we handle the happy path (connection succeeded) and the error path (backend unreachable).
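As a standalone illustration (the `describe` function below is hypothetical, not part of the gateway), matching on a `Result` forces us to write both arms:

```rust
// Exhaustive matching on a Result: the compiler insists that
// both the Ok arm and the Err arm are handled.
fn describe(conn: Result<u16, String>) -> String {
    match conn {
        Ok(port) => format!("connected on port {}", port),
        Err(e) => format!("502: {}", e),
    }
}

fn main() {
    println!("{}", describe(Ok(8080)));
    println!("{}", describe(Err("connection refused".into())));
}
```

Delete either arm and the code no longer compiles; that exhaustiveness check is what makes `match` safer than a plain `switch`.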
The ? Operator — You’ll see ? after several calls. This is Rust’s error propagation operator. If the call returns an error, the function immediately returns that error. If it succeeds, we get the value inside. It keeps our code clean without hiding the fact that errors can happen.
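A minimal, standard-library-only sketch of the same idea (`parse_port` is a made-up helper, not from the proxy):

```rust
use std::num::ParseIntError;

// Without `?` we would need an explicit match on every fallible step.
fn parse_port(s: &str) -> Result<u16, ParseIntError> {
    // On a parse error, `?` returns that error from `parse_port`
    // immediately; on success, we get the u16 out of the Result.
    let port: u16 = s.trim().parse()?;
    Ok(port)
}

fn main() {
    println!("{:?}", parse_port(" 8080 "));
    println!("{:?}", parse_port("not-a-port"));
}
```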
BoxBody — The BoxBody type is a way of erasing the specific body type behind a trait object. Different bodies (incoming request bodies, full response bodies) have different concrete types in Rust, and BoxBody lets us treat them uniformly. This is similar to using an interface in Go or TypeScript.
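The hyper types aside, the underlying idea can be shown with plain trait objects. `Payload`, `FullPayload`, and `EmptyPayload` below are invented stand-ins for the different body types, not real hyper APIs:

```rust
// Two different concrete types, handled uniformly behind Box<dyn Trait>,
// which is the same type erasure that BoxBody performs for HTTP bodies.
trait Payload {
    fn bytes(&self) -> Vec<u8>;
}

struct FullPayload(Vec<u8>); // stand-in for a body with content
struct EmptyPayload;         // stand-in for an empty body

impl Payload for FullPayload {
    fn bytes(&self) -> Vec<u8> {
        self.0.clone()
    }
}

impl Payload for EmptyPayload {
    fn bytes(&self) -> Vec<u8> {
        Vec::new()
    }
}

// Erase the concrete type; the caller only sees `dyn Payload`.
fn boxed(p: impl Payload + 'static) -> Box<dyn Payload> {
    Box::new(p)
}

fn main() {
    // Both fit in one Vec because the concrete types are erased.
    let bodies: Vec<Box<dyn Payload>> =
        vec![boxed(FullPayload(b"hi".to_vec())), boxed(EmptyPayload)];
    for b in &bodies {
        println!("{} bytes", b.bytes().len());
    }
}
```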
## Creating a Test Backend
Before we can test our proxy, we need a backend server to proxy to. Let’s create a simple one. Create a new file called examples/backend.rs:
```rust
use bytes::Bytes;
use http_body_util::Full;
use hyper::server::conn::http1;
use hyper::service::service_fn;
use hyper::{Request, Response};
use hyper_util::rt::TokioIo;
use std::net::SocketAddr;
use tokio::net::TcpListener;

async fn handle(
    req: Request<hyper::body::Incoming>,
) -> Result<Response<Full<Bytes>>, hyper::Error> {
    let path = req.uri().path();
    let body = format!(
        "Hello from the backend!\nYou requested: {} {}\n",
        req.method(),
        path
    );
    Ok(Response::new(Full::new(Bytes::from(body))))
}

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let addr = SocketAddr::from(([127, 0, 0, 1], 8080));
    let listener = TcpListener::bind(addr).await?;
    println!("Backend server listening on http://{}", addr);

    loop {
        let (stream, _) = listener.accept().await?;
        let io = TokioIo::new(stream);

        tokio::task::spawn(async move {
            if let Err(err) = http1::Builder::new()
                .serve_connection(io, service_fn(handle))
                .await
            {
                eprintln!("Error: {:?}", err);
            }
        });
    }
}
```
## Testing It Out
Open two terminal windows. In the first one, start the backend:
```shell
$ cargo run --example backend
Backend server listening on http://127.0.0.1:8080
```
In the second terminal, start the gateway:
```shell
$ cargo run
Ferroway API Gateway listening on http://127.0.0.1:3000
```
Now, in a third terminal (or your browser), hit the gateway:
```shell
$ curl http://localhost:3000/hello
Hello from the backend!
You requested: GET /hello
```
The request went to our gateway on port 3000, got forwarded to the backend on port 8080, and the response came all the way back. You’ve just built a reverse proxy in Rust!
Try hitting a few different paths and watch the gateway’s terminal — you should see each request logged:
```
--> GET /hello
--> GET /api/users
--> POST /api/data
```
## What Happens When the Backend is Down?
Stop the backend server and try making a request again:
```shell
$ curl http://localhost:3000/hello
502 Bad Gateway
```
Our error handling kicks in and returns a proper 502 status. This is important — a good gateway should handle backend failures gracefully rather than crashing.
## Conclusion
In this first part, we’ve built a working reverse proxy in Rust using Tokio and Hyper. We’ve touched on some core Rust concepts along the way — ownership and borrowing, async/await, pattern matching, and error handling with ?.
The code is simple right now, but it’s a genuine foundation. In the next part, we’ll add routing so our gateway can forward different URL paths to different backend services — which is where things start to get really useful.
Challenge - Try adding query string forwarding to the proxy. Right now we only forward the path. Can you modify `handle_request` to also preserve query parameters like `/search?q=rust`?
## Next Part
In Part 2 - Routing & Path Matching, we’ll add the ability to route requests to different backend services based on URL patterns.