From Scripts to Systems: Lateralus at Every Scale

November 2025 · 10 min read

Most languages pick a lane. Python is great for scripts but struggles at the systems level. C owns the kernel, but nobody wants to write a web scraper in it. Rust is incredible for systems programming but has a steep ramp-up for throwaway scripts.

Lateralus was designed to work across the entire spectrum — from a 5-line data transform to a full operating system kernel. Not by being mediocre at everything, but by having a multi-target compiler that adapts the same source language to radically different output environments. Here's how that works in practice.

The Scale Spectrum

One Language, Five Scales

The key to supporting all five scales (scripts, CLI tools, servers, applications, and systems code) is that each has different needs. Scripts need zero ceremony. CLI tools need argument parsing and exit codes. Servers need async I/O. Applications need modules and dependency management. Systems code needs direct memory access and no runtime. Lateralus addresses each level through compiler flags and standard library tiers.

Scale 1: Scripts — Zero Ceremony

For quick scripts, Lateralus requires no imports, no main function, no build step. Just write code and run it:

#!/usr/bin/env lateralus
// count_words.lat — that's the entire file

read_file("novel.txt")
    |> split(" ")
    |> map(lowercase)
    |> frequencies
    |> sort_by((_, count) => -count)
    |> take(20)
    |> each((word, n) => print("{n}\t{word}"))

Run it with lateralus count_words.lat. No compilation step, no package.json, no virtual environment. The default backend is Python, so it runs immediately via the Python interpreter. The shebang line means you can even chmod +x and run it directly.

This is the zero-dependency install experience. If you have Python 3.8+ installed (which most systems do), you can run Lateralus scripts today. No additional runtime needed.
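To make the Python backend concrete, here is a hand-written sketch of the kind of code the compiler might emit for count_words.lat. The helper name top_words, the alphabetical tie-break, and the sample input file are illustrative assumptions, not actual compiler output:

```python
from collections import Counter

# Hand-written approximation of the Python the default backend might emit
# for count_words.lat; illustrative, not actual compiler output.
def top_words(path, k=20):
    with open(path) as f:
        words = f.read().split(" ")               # read_file |> split(" ")
    freq = Counter(w.lower() for w in words)      # map(lowercase) |> frequencies
    # sort_by((_, count) => -count) |> take(20); the word tie-break is assumed
    return sorted(freq.items(), key=lambda kv: (-kv[1], kv[0]))[:k]

# Tiny stand-in for novel.txt so the sketch runs as-is.
with open("novel.txt", "w") as f:
    f.write("the cat and the dog and the bird")

for word, n in top_words("novel.txt"):
    print(f"{n}\t{word}")                         # each((word, n) => print)
```

Each pipeline stage lowers to an ordinary Python expression, which is why no extra runtime is needed beyond the interpreter itself.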

Scale 2: CLI Tools — Structured Programs

When a script grows into a tool, you need argument parsing, help text, and proper exit codes. Lateralus's stdlib provides this out of the box:

// csv_filter.lat — a CLI tool for filtering CSV files
use std.cli.{arg, flag, run_cli}
use std.csv

run_cli({
    name: "csv-filter",
    version: "1.0.0",
    description: "Filter CSV rows by column value",
    args: [
        arg("file", "Input CSV file"),
        arg("column", "Column name to filter"),
        arg("value", "Value to match"),
        flag("--header", "Print header row", true),
    ],
    run: (args) =>
        args.file
        |> csv.read
        |> filter(row => row[args.column] == args.value)
        |> csv.write(stdout)
})

The std.cli module generates help text, validates arguments, and handles errors — all from that declarative configuration. The pipeline in the run handler is the actual business logic: read CSV, filter, write output. Clean and composable.
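On the Python backend, a declarative config like this maps naturally onto the standard library's argparse and csv modules. The sketch below is a hand-written approximation of that lowering; the names build_parser and run are assumptions, not the compiler's actual output:

```python
import argparse
import csv
import sys

# Hand-written sketch of how the std.cli + std.csv config might lower to
# argparse and the csv module on the Python backend; names are illustrative.
def build_parser():
    p = argparse.ArgumentParser(
        prog="csv-filter",
        description="Filter CSV rows by column value")
    p.add_argument("file", help="Input CSV file")
    p.add_argument("column", help="Column name to filter")
    p.add_argument("value", help="Value to match")
    p.add_argument("--header", action="store_true", default=True,
                   help="Print header row")
    return p

def run(args, out=sys.stdout):
    with open(args.file, newline="") as f:
        reader = csv.DictReader(f)
        writer = csv.DictWriter(out, fieldnames=reader.fieldnames)
        if args.header:
            writer.writeheader()
        for row in reader:                        # filter(row => ...)
            if row[args.column] == args.value:
                writer.writerow(row)              # csv.write(stdout)
```

Invoking `run(build_parser().parse_args())` gives the same behavior as the Lateralus tool: argparse supplies the generated help text, argument validation, and non-zero exit codes on bad input.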

Scale 3: Servers — Async I/O

HTTP servers need non-blocking I/O, routing, middleware, and graceful shutdown. Lateralus's async model (covered in the async concurrency post) makes this natural:

use std.http.{serve, json_response, status}
use std.db

let handle_get_users = req =>
    db.query("SELECT * FROM users WHERE active = true")
    |> await
    |> json_response(200)

let handle_create_user = req =>
    req.body
    |> parse_json
    |> validate_user
    |> db.insert("users")
    |> await
    |> match
        | Ok(user) => json_response(201, user)
        | Err(e)   => status(400, e.message)

serve(8080, [
    ("GET",  "/users", handle_get_users),
    ("POST", "/users", handle_create_user),
]) |> await

Each handler is a function that takes a request and returns a response — no classes, no decorators, no framework magic. The async parts are explicit (await after database calls), and errors are handled with pattern matching. Still compiling to Python here, leveraging asyncio under the hood.
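To illustrate the asyncio lowering, here is a hand-written Python sketch of the GET handler with the database stubbed out. The names db_query and json_response, and the sleep-based stub, are assumptions for illustration, not real std.db or compiler output:

```python
import asyncio
import json

# Stub standing in for std.db on the Python backend; a real lowering would
# issue a non-blocking query instead of sleeping.
async def db_query(sql):
    await asyncio.sleep(0)
    return [{"id": 1, "name": "ana", "active": True}]

def json_response(status, body):
    return status, json.dumps(body)

# The explicit `|> await` in the Lateralus handler becomes an await
# expression inside an async def.
async def handle_get_users(req):
    rows = await db_query("SELECT * FROM users WHERE active = true")
    return json_response(200, rows)

status, body = asyncio.run(handle_get_users({}))
print(status, body)
```

The handler stays a plain function of request to response; asyncio only enters the picture where the pipeline awaits I/O.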

Scale 4: Applications — Modules and Libraries

Large applications need module systems, dependency management, and separation of concerns. Lateralus supports all of this with its use system and project structure:

// Project structure for a data analytics app:
// analytics/
//   lateralus.toml      ← project config + dependencies
//   src/
//     main.lat          ← entry point
//     ingest/
//       csv.lat         ← CSV ingestion module
//       api.lat         ← API ingestion module
//     transform/
//       clean.lat       ← data cleaning pipelines
//       aggregate.lat   ← aggregation functions
//     output/
//       report.lat      ← report generation

// src/main.lat
use ingest.csv
use ingest.api
use transform.{clean, aggregate}
use output.report

let main = () => {
    let local_data  = csv.ingest("data/sales.csv")
    let remote_data = await api.fetch_metrics("2025-Q3")

    [local_data, remote_data]
        |> flatten
        |> clean.remove_nulls
        |> clean.normalize_dates
        |> aggregate.by_region
        |> report.generate_html
        |> write_file("output/report.html")
}

Same language, same pipeline syntax, just more structure. The module system uses filesystem paths (like Rust and Go), so there's no separate module declaration — if the file exists at ingest/csv.lat, you can use ingest.csv.

Scale 5: Systems — Bare Metal

This is where things get interesting. When you compile with lateralus build --target c99, the compiler produces portable C99 instead of Python. This C output can be compiled with GCC or Clang for maximum performance, or cross-compiled for embedded targets.

For operating system development, we go further: freestanding mode. This strips the runtime entirely — no garbage collector, no standard library, no heap allocator. You're on bare metal.

// kernel/interrupt_handler.lat
// Compiled with: lateralus build --target c99 --freestanding

use kernel.idt
use kernel.port_io.{inb, outb}

let keyboard_handler = () => {
    let scancode = inb(0x60)  // Read from keyboard port

    scancode
        |> decode_scancode
        |> match
            | Some(key) => buffer_push(key)
            | None      => ()  // Unknown scancode, ignore

    outb(0x20, 0x20)  // Send EOI to PIC
}

idt.register(33, keyboard_handler)

Yes, that's a keyboard interrupt handler written in Lateralus. It reads a scancode from the hardware port, pattern-matches to decode it, and sends an end-of-interrupt signal — all in a pipeline. The --freestanding flag tells the compiler to emit standalone C with no libc dependency. This is how we built LateralusOS.

Why Multi-Target Compilation Matters

The power of this approach isn't just that you can use one language everywhere — it's that knowledge transfers. A developer who learns Lateralus for scripting already knows how to write a server, a CLI tool, or a kernel module. The patterns are the same. The pipeline operator works the same. Pattern matching works the same. The only thing that changes is the compilation target.

Learn once, deploy anywhere. The same mental model, the same syntax, the same type system — from a 5-line script to a 50,000-line operating system. That's the promise of Lateralus.

Compilation Targets at a Glance

Target Comparison

Backend            Flags                          Runtime                  Typical use
Python (default)   none                           Python 3.8+              Scripts, CLI tools, servers, applications
C99                --target c99                   libc, GCC or Clang       High-performance and cross-compiled builds
Freestanding C     --target c99 --freestanding    None (bare metal)        Kernels and embedded targets
WebAssembly        (in development)               Browser / edge runtime   Browser and edge compute platforms
Native LLVM        (in development)               None                     Direct optimized machine code

The Path Forward

We're working on two additional backends. A WebAssembly target will let Lateralus code run in the browser and on edge compute platforms like Cloudflare Workers. And a native LLVM backend will produce optimized machine code directly, skipping the C intermediate step for maximum performance. The language stays the same — only the compiler output changes.


The Full Spectrum

Here's what Lateralus code looks like at each scale, showing how the same language and the same pipeline model work from one-liners to operating systems:

// One-liner script (10 seconds to write)
read_lines("/var/log/auth.log") |> filter(contains("Failed")) |> count() |> println()

// Security tool (10 minutes to write)
let vulns = targets
    |> flat_map(port_scan)
    |> flat_map(service_detect)
    |> flat_map(vuln_check)
    |> sort_by(severity)
    |> to_report("scan.pdf")

// OS kernel module (10 hours to write)
fn virtio_net_receive(queue: &VirtQueue) -> Result<Packet, NetError> {
    let desc = queue.pop_used()?
    let buf = desc.buffer()
    let _header = VirtioNetHeader::parse(buf)?  // validate the virtio-net header
    let packet = buf[HEADER_SIZE..]
        |> parse_ethernet()
        |?> parse_ip()
        |?> parse_tcp()
    Ok(packet)
}

The pipeline operator scales. The type system scales. The compilation model scales. That's the design goal: one language for everything from quick scripts to kernel drivers.

Lateralus is built by bad-antics. Follow development on GitHub or try the playground.
