Async Concurrency in Lateralus: spawn, await, and Beyond
Modern software lives on the network. APIs call other APIs, services fan out requests in parallel, data pipelines stream from multiple sources. If your language doesn't have a good concurrency story, you're fighting the runtime instead of writing logic.
When I designed Lateralus's async model, I had three goals: it should be lightweight (no thread-per-task overhead), structured (no orphan tasks leaking resources), and pipeline-friendly (async should compose with |> naturally). Here's how we achieved all three.
The Basics: spawn and await
Lateralus uses two core primitives for concurrency: spawn to launch a concurrent task, and await to wait for its result.
// Spawn a task — runs concurrently, returns a Task<T> handle
let task = spawn fetch("https://api.example.com/users")
// Do other work while the fetch runs...
let local_data = read_file("cache.json")
// Await the result when you need it
let users = await task
// users : Result<Response, Error>
Tasks are lightweight — they're multiplexed onto a small pool of OS threads, similar to goroutines in Go or lightweight processes in Erlang. You can spawn thousands of tasks without running out of memory or threads. The runtime handles scheduling, suspension, and resumption transparently.
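For readers coming from Python, here's a rough asyncio analogue of the spawn/await shape above — a sketch, not Lateralus semantics: asyncio.create_task plays the role of spawn (the task starts running concurrently right away), and awaiting the task plays the role of await. The fetch stub and URL are placeholders.

```python
import asyncio

async def fetch(url: str) -> str:
    # Stand-in for a network call; real code would use an HTTP client.
    await asyncio.sleep(0.01)
    return f"response from {url}"

async def main() -> str:
    # create_task is the analogue of `spawn`: the coroutine starts
    # running concurrently as soon as it is scheduled.
    task = asyncio.create_task(fetch("https://api.example.com/users"))
    # Do other work while the fetch runs...
    local_data = "cache contents"
    # Retrieve the result when we actually need it.
    users = await task
    return users

print(asyncio.run(main()))
```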
await_all: Parallel Fan-Out
The real power comes when you need to run many tasks in parallel and collect all results. That's what await_all is for:
let urls = [
"https://api.example.com/users",
"https://api.example.com/posts",
"https://api.example.com/comments",
]
// Spawn all fetches concurrently, await all results
let responses = urls
|> map(url => spawn fetch(url))
|> await_all
// responses : List<Result<Response, Error>>
// All 3 requests ran in parallel!
Notice how naturally this composes with pipelines. The list of URLs flows into map which spawns concurrent tasks, then await_all waits for every task to complete. No callback nesting, no promise chains, no async def coloring — just data flowing through functions.
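The same fan-out shape can be sketched in Python with asyncio.gather — again an analogue under assumed stub functions, not the Lateralus runtime: spawn every task first, then await them all together.

```python
import asyncio

async def fetch(url: str) -> str:
    await asyncio.sleep(0.01)  # stand-in for network latency
    return f"response from {url}"

async def fetch_all(urls: list[str]) -> list[str]:
    # Spawn every fetch, then await them all — the same shape as
    # map(url => spawn fetch(url)) |> await_all.
    tasks = [asyncio.create_task(fetch(u)) for u in urls]
    return await asyncio.gather(*tasks)

urls = [
    "https://api.example.com/users",
    "https://api.example.com/posts",
    "https://api.example.com/comments",
]
responses = asyncio.run(fetch_all(urls))
```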
Structured Concurrency
One of the most insidious bugs in concurrent programs is the orphan task — a background task that outlives its parent scope, holding resources, failing silently, or corrupting state after the code that spawned it has moved on.
Lateralus enforces structured concurrency: every task is bound to the scope that spawned it. When a scope exits, all tasks spawned within it are automatically awaited (or cancelled if the scope exits due to an error).
let process_batch = batch => {
// These tasks are scoped to this block
let validated = spawn validate(batch)
let enriched = spawn enrich(batch)
// Both tasks are guaranteed to complete (or cancel)
// before process_batch returns
(await validated, await enriched)
}
// After process_batch returns, no orphan tasks exist.
// Resources are clean. Always.
No fire-and-forget. If you spawn it, someone awaits it. The compiler and runtime work together to guarantee this invariant: you cannot create an orphan task in safe Lateralus code.
Error Propagation in Async Contexts
What happens when a concurrent task fails? In many languages, async errors are swallowed, logged to stderr, or surface as mysterious "unhandled promise rejection" warnings. Lateralus treats async errors as first-class values using the Result type:
let fetch_user_profile = user_id => {
let profile_task = spawn fetch("/api/users/" ++ user_id)
let avatar_task = spawn fetch("/api/avatars/" ++ user_id)
match (await profile_task, await avatar_task)
| (Ok(profile), Ok(avatar)) =>
Ok(merge_profile(profile, avatar))
| (Err(e), _) => Err("Profile fetch failed: " ++ e)
| (_, Err(e)) => Err("Avatar fetch failed: " ++ e)
}
Pattern matching on async results gives you exhaustive error handling. The compiler ensures you handle every failure case — no error slips through unnoticed.
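The closest asyncio gets to errors-as-values is gather's return_exceptions=True, which hands failures back as ordinary objects instead of raising. A sketch of the profile/avatar example under that assumption (the fetch stub and its failure condition are invented for illustration):

```python
import asyncio

async def fetch(url: str) -> str:
    await asyncio.sleep(0.01)
    if "avatars" in url:          # simulate one endpoint failing
        raise RuntimeError("404")
    return f"profile for {url}"

async def fetch_user_profile(user_id: str):
    profile_t = asyncio.create_task(fetch(f"/api/users/{user_id}"))
    avatar_t = asyncio.create_task(fetch(f"/api/avatars/{user_id}"))
    # return_exceptions=True turns failures into values — a rough
    # analogue of awaiting a Result instead of catching an exception.
    profile, avatar = await asyncio.gather(
        profile_t, avatar_t, return_exceptions=True
    )
    if isinstance(profile, Exception):
        return ("Err", f"Profile fetch failed: {profile}")
    if isinstance(avatar, Exception):
        return ("Err", f"Avatar fetch failed: {avatar}")
    return ("Ok", (profile, avatar))
```

Unlike Lateralus, nothing here forces the isinstance checks — that exhaustiveness is exactly what the compiler-checked match buys you.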
Async + Pipelines: The Killer Combo
This is where Lateralus's concurrency model really distinguishes itself. Because spawn returns a Task<T> and await unwraps it, you can weave async operations into pipeline chains seamlessly:
// A concurrent data processing pipeline
let process_orders = date =>
date
|> fetch_orders // Task<List<Order>>
|> await // List<Order>
|> filter(o => o.status == "pending") // List<Order>
|> map(o => spawn validate_payment(o)) // List<Task<Result>>
|> await_all // List<Result>
|> filter_ok // List<Order>
|> map(fulfill) // List<Fulfillment>
Read this top to bottom: fetch orders (async), filter pending ones (sync), validate payments in parallel (async), collect successes (sync), fulfill them (sync). The pipeline makes the concurrent parts visually obvious — you can see exactly where the parallelism happens.
Real-World Example: Concurrent HTTP Fetcher
Let's build a complete concurrent URL fetcher with rate limiting, retries, and error collection:
let fetch_with_retry = (url, retries) =>
match await spawn fetch(url)
| Ok(response) => Ok(response)
| Err(_) when retries > 0 =>
await sleep(1000)
fetch_with_retry(url, retries - 1)
| Err(e) => Err(e)
let crawl = urls =>
urls
|> chunks(10) // Rate limit: 10 at a time
|> flat_map(chunk =>
chunk
|> map(url => spawn fetch_with_retry(url, 3))
|> await_all
)
|> partition_results // (successes, failures)
// Usage:
let (pages, errors) = read_file("urls.txt")
|> lines
|> crawl
print("Fetched {len(pages)} pages, {len(errors)} failures")
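The retry-then-crawl structure translates to asyncio as a loop of bounded chunks — a sketch under stub fetch logic, with the chunk size and retry count mirroring the example above:

```python
import asyncio

async def fetch(url: str) -> str:
    await asyncio.sleep(0.01)     # stand-in for a real HTTP request
    return f"page:{url}"

async def fetch_with_retry(url: str, retries: int) -> str:
    # Retry on failure, sleeping between attempts — the same shape
    # as the recursive fetch_with_retry above.
    for attempt in range(retries + 1):
        try:
            return await fetch(url)
        except Exception:
            if attempt == retries:
                raise
            await asyncio.sleep(1.0)

async def crawl(urls: list[str], chunk_size: int = 10):
    pages, errors = [], []
    # Process in chunks to rate-limit: at most chunk_size in flight.
    for i in range(0, len(urls), chunk_size):
        chunk = urls[i:i + chunk_size]
        results = await asyncio.gather(
            *(fetch_with_retry(u, 3) for u in chunk),
            return_exceptions=True,
        )
        for r in results:
            (errors if isinstance(r, Exception) else pages).append(r)
    return pages, errors
```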
How It Compares
Lateralus vs. Other Async Models
- Go goroutines: Similar lightweight tasks, but Go lacks structured concurrency — goroutines can outlive their parent. Lateralus prevents this by design.
- Rust async: Rust requires .await postfix syntax and async fn coloring. Lateralus avoids function coloring — any function can spawn tasks without changing its signature.
- Python asyncio: Python's async/await splits the world into sync and async functions. Lateralus treats concurrency as a value (Task<T>), not a function modifier.
- JavaScript Promises: Promises execute eagerly and can't be cancelled. Lateralus tasks start when spawned and support structured cancellation.
What's Next
We're working on async iterators — the ability to |> through a stream of values that arrive over time. Think processing a WebSocket feed or tailing a log file, all with the same pipeline syntax. We're also exploring supervision trees inspired by Erlang's OTP for building fault-tolerant services.
Concurrency Patterns for Security Tools
Security tools are inherently concurrent — you're scanning multiple targets, multiple ports, multiple protocols simultaneously. Here are the three most common patterns:
// Pattern 1: Fan-out / fan-in
let results = targets
|> map(t => spawn scan(t))
|> await_all(concurrency: 100) // At most 100 concurrent scans
|> flatten
// Pattern 2: Producer-consumer with backpressure
let (tx, rx) = channel(buffer: 1000)
// Producer: discovers URLs
spawn { crawler(seed_url) |> each(url => tx.send(url)) }
// Consumer: scans discovered URLs
spawn {
  rx.recv_stream()
  |> map(url => spawn vuln_scan(url))
  |> await_all(concurrency: 20)
  |> each(finding => report.add(finding))
}
// Pattern 3: Race — use the first result
let proxy = [socks5_connect, http_connect, direct_connect]
|> map(method => spawn method(target))
|> race // Returns the first successful connection
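The bounded fan-out in Pattern 1 can be sketched in asyncio with a semaphore capping in-flight tasks — an analogue under a stub scan function, since asyncio.gather has no built-in concurrency limit:

```python
import asyncio

async def scan(target: str) -> str:
    await asyncio.sleep(0.01)    # stand-in for a real scan
    return f"scanned {target}"

async def fan_out(targets: list[str], limit: int) -> list[str]:
    # A semaphore caps in-flight tasks, playing the role of
    # await_all(concurrency: N) in the pattern above.
    sem = asyncio.Semaphore(limit)

    async def bounded(t: str) -> str:
        async with sem:
            return await scan(t)

    return await asyncio.gather(*(bounded(t) for t in targets))

results = asyncio.run(fan_out([f"host{i}" for i in range(25)], limit=5))
```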