
Building an HTTP Server in Rust

What started as a simple HTTP server turned into a deeper exploration of system design, concurrency, and control in Rust.

Maksym
April 3, 2026

I started with a simple goal: build a small HTTP server in Rust. No frameworks. Just full control.

It worked.

But it wasn't a server yet.


At some point while learning Rust, I kept coming back to the same idea: what actually happens under the hood of something like Nginx or Apache?

Not how to use them — how they really work.

Eventually, curiosity turned into boredom with everything else, and I started building one myself.

Genesis

I called it Ferrox.

The name felt right for something built in Rust — simple, a bit heavy, and hard to bend once it's set.

Basic connection & Response

At the start, I did not use any external crates, only std.

use std::net::{TcpListener, TcpStream};
use std::io::{Read, Write};

fn handle_connection(mut stream: TcpStream) {
    // Read the request into a fixed-size buffer (we don't parse it yet)
    let mut buffer: [u8; 1024] = [0; 1024];

    stream.read(&mut buffer).unwrap();

    println!("{}", String::from_utf8_lossy(&buffer));

    // crafting basic HTTP response
    let response: &str = "HTTP/1.1 200 OK\r\n\r\nHello, Ferrox!";

    // Write the response to the stream (write_all ensures every byte is sent)
    stream.write_all(response.as_bytes()).unwrap();
    stream.flush().unwrap();
}

fn main() {
    // Bind the listener (note: port 80 usually requires elevated privileges)
    let listener: TcpListener = TcpListener::bind("127.0.0.1:80").unwrap();

    println!("Server running on http://127.0.0.1:80");

    // Accept incoming stream (no threading or async yet)
    for stream in listener.incoming() {
        let stream: TcpStream = stream.unwrap();
        handle_connection(stream);
    }
}

As you can see, this version is only the embryo of an HTTP server. It can already respond, but there is still no threading or concurrency.

While we're at it, let's take a look at something interesting here:

let response: &str = "HTTP/1.1 200 OK\r\n\r\nHello, Ferrox!";

This may look confusing at first, but that's how HTTP responses are formed:

HTTP/{VERSION} {STATUS} {MESSAGE}

Headers

Body

So in practice, we hardcoded the response here, and since we did not specify the content type, browsers will usually interpret it as text.
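To make the header section concrete, here is a small sketch of assembling a full response with a Content-Length header. `build_response` is a hypothetical helper for illustration, not part of Ferrox:

```rust
// Hypothetical helper: status line, headers, blank line, body,
// each part separated by CRLF as HTTP/1.1 requires.
fn build_response(status: &str, content_type: &str, body: &str) -> String {
    format!(
        "HTTP/1.1 {status}\r\nContent-Type: {content_type}\r\nContent-Length: {}\r\n\r\n{body}",
        body.len()
    )
}

fn main() {
    let resp = build_response("200 OK", "text/plain", "Hello, Ferrox!");
    // Headers end with an empty line; everything after it is the body.
    println!("{resp}");
}
```

With Content-Length set, the client knows exactly where the body ends instead of waiting for the connection to close.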

File serving

This is where the code started getting messy as more functionality was added.

use std::fs::File;
use std::io::{Read, Write};
use std::net::TcpStream;
use std::path::Path;

fn handle_connection(mut stream: TcpStream) {
    let mut buffer: [u8; 1024] = [0; 1024];

    stream.read(&mut buffer).unwrap();

    // Basic request decoding, really fragile
    let request: std::borrow::Cow<'_, str> = String::from_utf8_lossy(&buffer);
    let first_line = request.lines().next().unwrap();
    let parts: Vec<&str> = first_line.split_whitespace().collect();

    // Hardcoded file for now
    let path = Path::new("www/index.html");
    let display = path.display();

    // Opening and reading the file
    let mut file = match File::open(&path) {
        Err(why) => panic!("couldn't open {}: {}", display, why),
        Ok(file) => file,
    };

    let mut s = String::new();
    match file.read_to_string(&mut s) {
        Err(why) => panic!("couldn't read {}: {}", display, why),
        Ok(_) => print!("{} contains:\n{}", display, s),
    }

    let (method, path, version) = (parts[0], parts[1], parts[2]);

    // Basic logging
    println!("Method: {}\nPath: {}\nVersion: {}", method, path, version);

    // Crafting the response, but this time with specified content type, so the browser understands it's an html
    let response = format!("HTTP/1.1 200 OK\r\nContent-Type: text/html\r\n\r\n{}", s);

    stream.write_all(response.as_bytes()).unwrap();
    stream.flush().unwrap();
}

Note that this example loads a file into memory before serving it. Real HTTP servers do not do that, because if you ask one to serve a 15GB zip file, then exactly 15GB would be loaded into the server's RAM. That does not look very performant, does it? We'll deal with that problem later.

Routing

At first, routing was designed only to serve HTML files. I added MIME type detection later, but it already had basic path traversal protection.

use std::fs::File;
use std::io::{Read, Write};
use std::net::TcpStream;
use std::path::PathBuf;

const SERVING_DIR: &str = "www"; // Serving directory

fn handle_connection(mut stream: TcpStream) {
    let mut buffer: [u8; 1024] = [0; 1024];

    stream.read(&mut buffer).unwrap();

    let request: std::borrow::Cow<'_, str> = String::from_utf8_lossy(&buffer);
    let first_line = request.lines().next().unwrap();
    let parts: Vec<&str> = first_line.split_whitespace().collect();
    let (method, req_path, version) = (parts[0], parts[1], parts[2]);

    let path = PathBuf::from(SERVING_DIR).join(req_path.trim_start_matches('/'));

    // Canonicalize the path
    let mut canonical = match path.canonicalize() {
        Ok(p) => p,
        Err(_) => {
            println!("File not found");
            return;
        }
    };

    // Compare against the canonicalized base: `canonicalize` returns an
    // absolute path, so a prefix check against the relative "www" would never match
    let base = PathBuf::from(SERVING_DIR).canonicalize().unwrap();
    if !canonical.starts_with(&base) {
        println!("Illegal path."); // TODO: Forbidden
        return;
    }

    if canonical.is_dir() {
        // Try to display default index.html if request points to a directory
        canonical = canonical.join("index.html");
    }

    let display = canonical.display();

    let mut file = match File::open(&canonical) {
        Err(why) => panic!("couldn't open {}: {}", display, why),
        Ok(file) => file,
    };

    let mut s = String::new();
    match file.read_to_string(&mut s) {
        Err(why) => panic!("couldn't read {}: {}", display, why),
        Ok(_) => println!(
            "Method: {}\nPath: {}\nVersion: {}",
            method, req_path, version
        ),
    }

    let response = format!("HTTP/1.1 200 OK\r\nContent-Type: text/html\r\n\r\n{}", s);

    stream.write_all(response.as_bytes()).unwrap();
    stream.flush().unwrap();
}
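The traversal check deserves a closer look. A small demonstration (not Ferrox code) of why the base directory must be canonicalized before the prefix comparison; the `is_safe` helper and the temp-dir names are illustrative:

```rust
use std::fs;
use std::path::Path;

// `canonicalize` resolves symlinks and `..`, and always returns an absolute
// path, so the base we compare against must be absolute too.
fn is_safe(base: &Path, requested: &str) -> bool {
    match base.join(requested.trim_start_matches('/')).canonicalize() {
        // Legal only if the resolved path still lives inside the base dir
        Ok(p) => p.starts_with(base),
        // Nonexistent paths fail to canonicalize and are rejected as well
        Err(_) => false,
    }
}

fn main() -> std::io::Result<()> {
    // Throwaway serving dir with one file inside it, plus a file outside it
    let tmp = std::env::temp_dir();
    let base = tmp.join("ferrox_demo_www");
    fs::create_dir_all(&base)?;
    fs::write(base.join("index.html"), "<h1>hi</h1>")?;
    fs::write(tmp.join("ferrox_outside.txt"), "secret")?;
    let base = base.canonicalize()?; // absolute base for the prefix check

    assert!(is_safe(&base, "/index.html"));
    // "../" escapes the serving dir; after canonicalization the check fails
    assert!(!is_safe(&base, "/../ferrox_outside.txt"));
    Ok(())
}
```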

Refactor

At this point, the flow had already become messy in a single file, so I split it into modules to keep the structure cleaner.

src
 |- handlers
 |   |- static_files.rs - routing & serving
 |- http
 |   |- request.rs - parsing the request
 |   |- response.rs - response helpers
 |- main.rs - app entry point
 |- server.rs - networking

MIME & Bytes

The main goal of an HTTP server is to be able to serve any type of file requested by the client, or at least do its best to serve as many as possible. Since this example only handles HTML, it is too limited. Let's fix that.

  1. We need to be able to determine the MIME type based on the file extension. Since there are hundreds or even thousands of them, it is much faster to use an existing solution. The mime_guess crate is perfect for that.
  2. Earlier, this solution wrote raw strings into the response, which is not suitable for every file type. This part also needs a rework so it can serve everything as bytes, not as text.

Not everything is a string, but everything is bytes.

pub fn serve_file(file_path: &str) -> Result<Response, std::io::Error> {
    let path = PathBuf::from(SERVING_DIR).join(file_path.trim_start_matches('/'));
    let base = PathBuf::from(SERVING_DIR)
        .canonicalize()
        .expect("Serving dir must exist");

    let mut canonical = match path.canonicalize() {
        Ok(p) => p,
        Err(_) => {
            // Error templates were implemented in the meantime
            let body = render_error("404", "Not Found");
            return Ok(Response {
                status: "404 Not Found",
                body,
                content_type: mime::TEXT_HTML,
            });
        }
    };

    if !canonical.starts_with(&base) {
        let body = render_error("403", "Forbidden");

        return Ok(Response {
            status: "403 Forbidden",
            body,
            content_type: mime::TEXT_HTML,
        });
    }

    if canonical.is_dir() {
        canonical = canonical.join("index.html");
    }

    // std::fs::read loads the whole file as raw bytes (streaming comes later)
    let body = std::fs::read(&canonical)?;

    // Detect the MIME type from the extension, falling back to text/plain
    let mime = mime_guess::from_path(&canonical).first_or_text_plain();

    Ok(Response {
        status: "200 OK",
        body,
        content_type: mime,
    })
}
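For intuition, the extension-to-type lookup that mime_guess performs can be sketched with std only. This is a tiny illustrative subset, not the crate's actual table:

```rust
use std::path::Path;

// Map a file extension to a MIME type string; mime_guess covers
// hundreds of types, this sketch handles just a few common ones.
fn guess_mime(path: &Path) -> &'static str {
    match path.extension().and_then(|e| e.to_str()) {
        Some("html") => "text/html",
        Some("css") => "text/css",
        Some("js") => "application/javascript",
        Some("png") => "image/png",
        Some("jpg") | Some("jpeg") => "image/jpeg",
        _ => "text/plain", // fallback, like first_or_text_plain()
    }
}

fn main() {
    assert_eq!(guess_mime(Path::new("www/index.html")), "text/html");
    assert_eq!(guess_mime(Path::new("logo.png")), "image/png");
    // No extension at all also falls back to plain text
    assert_eq!(guess_mime(Path::new("README")), "text/plain");
}
```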

File streaming

As I mentioned before, Ferrox was loading whole files into memory before sending them. That is very inefficient; streaming a file piece by piece keeps memory usage flat no matter how large the file is. Here's how I handled it.

Instead of:

stream.write_all(&bytes).unwrap();

I did this:

// Response struct determines body type
match &mut response.body {
    Body::Bytes(bytes) => {
        // Error templates are generated at compile time; they are always small text, so writing them in one go is fine
        stream.write_all(bytes)?;
    }
    Body::File(file) => {
        // For a served file, std::io::copy streams it to the socket in chunks
        std::io::copy(file, &mut stream)?;
    }
}
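What `std::io::copy` buys us is easy to see in isolation: it shuttles data from any reader to any writer through a fixed-size internal buffer, so the full payload never sits in memory at once. A tiny illustration (not Ferrox code), with a `Cursor` standing in for the file and a `Vec` for the socket:

```rust
use std::io::{copy, Cursor};

fn main() -> std::io::Result<()> {
    // Cursor<Vec<u8>> implements Read, so it can stand in for a File
    let mut source = Cursor::new(b"pretend this is a huge file".to_vec());
    // Vec<u8> implements Write, so it can stand in for the TcpStream
    let mut sink: Vec<u8> = Vec::new();

    // copy() returns the number of bytes transferred
    let copied = copy(&mut source, &mut sink)?;

    assert_eq!(copied, 27);
    assert_eq!(sink, b"pretend this is a huge file");
    Ok(())
}
```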

Directory indexing

This part is not strictly necessary to create a fully functioning HTTP server, but it is an excellent UX improvement.

The principle is very simple:

  1. Determine if it's a directory
  2. List all files
  3. Append those files as <a> links inside an HTML template

fn index_files(path: PathBuf, display_path: &String) -> Result<Vec<u8>, std::io::Error> {
    let dir_entries = std::fs::read_dir(&path)?;
    let mut html_list = String::new();

    if display_path != "/" {
        // If it's not the root of the serving dir, we add a button to go back at the top
        html_list.push_str("<li><a href=\"..\">..</a></li>");
    }

    for entry in dir_entries.flatten() {
        let name = entry.file_name().to_string_lossy().to_string();

        // Skip sensitive entries like .env, .git, etc.
        if name.starts_with('.') { continue; }

        let href = if entry.file_type()?.is_dir() {
            format!("{}/", name)
        } else {
            name
        };

        // Escape the name before pushing it (basic XSS protection, since we're essentially doing SSR here)
        html_list.push_str(&format!("<li><a href=\"{safe_href}\">{safe_href}</a></li>", safe_href = encode_safe(&href)));
    }

    Ok(render_indexing(display_path, &html_list))
}
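The `encode_safe` call above comes from an escaping crate; the kind of substitution it performs can be sketched with std only (an illustrative subset, not the crate's exact behavior):

```rust
// Replace the five HTML-significant characters with entities so a
// file name can never be interpreted as live markup in the listing.
fn escape_html(input: &str) -> String {
    let mut out = String::with_capacity(input.len());
    for c in input.chars() {
        match c {
            '&' => out.push_str("&amp;"),
            '<' => out.push_str("&lt;"),
            '>' => out.push_str("&gt;"),
            '"' => out.push_str("&quot;"),
            '\'' => out.push_str("&#39;"),
            _ => out.push(c),
        }
    }
    out
}

fn main() {
    // A file named with markup must not become live HTML in the index page
    assert_eq!(escape_html("<script>.txt"), "&lt;script&gt;.txt");
    assert_eq!(escape_html("a&b.txt"), "a&amp;b.txt");
}
```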

Threading & Timeouts

As the project grew, I realized how inefficient and vulnerable it still was. Two points were critical:

  1. A blocking accept loop. As you already saw, a single loop handles every connection in turn, so one slow client stalls all the others.
  2. Vulnerability to Slowloris attacks, where a client keeps a connection open by sending data very slowly.
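A thread pool fixes the first point: a fixed set of workers pulls connections off a shared queue. The threadpool crate hides this machinery, but a std-only sketch of the same idea (illustrative names, not Ferrox's or the crate's actual types) looks roughly like:

```rust
use std::sync::{mpsc, Arc, Mutex};
use std::thread;

// A job is any closure that can be sent to another thread
type Job = Box<dyn FnOnce() + Send + 'static>;

struct Pool {
    sender: Option<mpsc::Sender<Job>>,
    workers: Vec<thread::JoinHandle<()>>,
}

impl Pool {
    fn new(size: usize) -> Self {
        let (sender, receiver) = mpsc::channel::<Job>();
        // Workers share one receiver behind a mutex
        let receiver = Arc::new(Mutex::new(receiver));
        let workers = (0..size)
            .map(|_| {
                let rx = Arc::clone(&receiver);
                thread::spawn(move || loop {
                    // Take one job; the lock is released before running it
                    let job = rx.lock().unwrap().recv();
                    match job {
                        Ok(job) => job(),
                        Err(_) => break, // channel closed: shut down
                    }
                })
            })
            .collect();
        Pool { sender: Some(sender), workers }
    }

    fn execute<F: FnOnce() + Send + 'static>(&self, f: F) {
        self.sender.as_ref().unwrap().send(Box::new(f)).unwrap();
    }
}

impl Drop for Pool {
    fn drop(&mut self) {
        drop(self.sender.take()); // close the channel so workers exit
        for w in self.workers.drain(..) {
            w.join().unwrap();
        }
    }
}

fn main() {
    use std::sync::atomic::{AtomicUsize, Ordering};
    let counter = Arc::new(AtomicUsize::new(0));
    {
        let pool = Pool::new(4);
        for _ in 0..100 {
            let c = Arc::clone(&counter);
            pool.execute(move || { c.fetch_add(1, Ordering::SeqCst); });
        }
    } // pool dropped here: workers drain the remaining jobs, then join
    assert_eq!(counter.load(Ordering::SeqCst), 100);
}
```

The key property is that the number of threads stays bounded at `size` no matter how many connections arrive, which is exactly what the crate provides.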

I used the threadpool crate to hand each connection off to a fixed pool of worker threads, similar to Apache's worker model.

const MAX_WORKERS: usize = 4;
const READ_TIMEOUT_SEC: u64 = 5;
const WRITE_TIMEOUT_SEC: u64 = 5;

pub fn serve(addr: &str) {
    let listener = TcpListener::bind(addr).unwrap();
    // Create pool
    let pool = ThreadPool::new(MAX_WORKERS);

    println!("Ferrox running on http://{addr} with {MAX_WORKERS} workers");

    for stream in listener.incoming() {
        match stream {
            Ok(stream) => {
                // Timeout for reading
                let _ = stream.set_read_timeout(Some(Duration::from_secs(READ_TIMEOUT_SEC)));
                // Timeout for writing
                let _ = stream.set_write_timeout(Some(Duration::from_secs(WRITE_TIMEOUT_SEC)));

                // Handle TCP stream here
                pool.execute(move || {
                    if let Err(e) = handle(stream) {
                        eprintln!("Connection error: {}", e);
                    }
                });
            }
            Err(e) => eprintln!("Failed to accept connection: {}", e),
        }
    }
}

That's it for the first part of the journey. I still have not covered many features such as Tokio async, TLS, configuration, and other important pieces. We'll talk about those later, as this post gets updated.

In the meantime, don't forget to star the repository on GitHub to be in touch with updates and new features!
