Welcome to proc

The Problem

JavaScript streams are push-based: producers push data whether or not consumers are ready. That forces you to manage backpressure, the complex coordination between producers and consumers that prevents memory exhaustion. And when something goes wrong? You need error handlers on every stream in the chain.

// Traditional streams: backpressure + error handling at every step
stream1.on("error", handleError);
stream2.on("error", handleError);
stream3.on("error", handleError);
// Plus drain events, pause/resume, pipe coordination...

The Solution

proc uses async iterators instead of streams. Consumers pull data when they are ready, so backpressure is automatic and requires no coordination code. Errors flow through pipelines naturally—one try-catch handles everything.

// proc: backpressure is automatic, errors just work
try {
  await run("cat", "data.txt")
    .run("grep", "error")
    .run("wc", "-l")
    .lines
    .forEach(console.log);
} catch (error) {
  // All errors caught here
}
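The mechanism behind this is plain JavaScript, not anything proc-specific: an error thrown inside an async generator propagates through every downstream stage to the consumer's single try-catch. A minimal sketch (no proc required):

```typescript
// An upstream source that fails partway through.
async function* numbers(): AsyncGenerator<number> {
  yield 1;
  yield 2;
  throw new Error("upstream failure");
}

// A middle stage: pulls from its source only when the consumer asks.
async function* doubled(src: AsyncIterable<number>): AsyncGenerator<number> {
  for await (const n of src) yield n * 2;
}

async function main(): Promise<number[]> {
  const seen: number[] = [];
  try {
    for await (const n of doubled(numbers())) seen.push(n);
  } catch {
    // The error from `numbers` flowed through `doubled` to this one handler.
  }
  return seen; // the values pulled before the failure: [2, 4]
}
```

proc's pipelines build on this same property, which is why one try-catch around the whole chain is enough.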

Who This Book Is For

This documentation is for developers who:

  • Run child processes and want better error handling than Deno.Command provides
  • Process streaming data (logs, CSV files, API responses) without loading everything into memory
  • Want Array-like methods (map, filter, reduce) for async data
  • Are replacing shell scripts with type-safe, testable code

You should be comfortable with TypeScript basics and async/await. No prior experience with Deno streams or child processes required.

What You’ll Learn

Running Processes — Execute commands, chain them like shell pipes, capture output, handle errors gracefully.

Async Iterables — Use map, filter, reduce, and more on any async data source. Process gigabyte files with constant memory.
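The constant-memory claim follows from how pull-based iteration works: each stage touches one item at a time. Here is a sketch of the idea in standard JavaScript (not proc's API); `lines` is a hypothetical stand-in for a streamed file:

```typescript
// Stand-in for a file read one line at a time.
async function* lines(): AsyncGenerator<string> {
  for (const l of ["ERROR a", "info b", "ERROR c"]) yield l;
}

// A reduce-style fold: only one line is ever in memory at once,
// so input size does not affect memory use.
async function countMatches(
  src: AsyncIterable<string>,
  pred: (s: string) => boolean,
): Promise<number> {
  let n = 0;
  for await (const line of src) {
    if (pred(line)) n++;
  }
  return n;
}
// await countMatches(lines(), (l) => l.startsWith("ERROR")) → 2
```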

Bridge Push and Pull — Convert callbacks, events, and WebSockets into async iterables with WritableIterable. Automatic backpressure, natural error propagation.
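To see what such a bridge does, here is a toy push-to-pull queue—a much-simplified sketch of the concept, not WritableIterable's actual implementation. Pushes buffer items (or resolve a waiting pull), and `close()` ends iteration:

```typescript
class PushQueue<T> {
  private items: T[] = [];
  private waiters: ((r: IteratorResult<T>) => void)[] = [];
  private done = false;

  // Push side: callbacks and event handlers call this.
  push(item: T): void {
    const w = this.waiters.shift();
    if (w) w({ value: item, done: false });
    else this.items.push(item);
  }

  close(): void {
    this.done = true;
    for (const w of this.waiters.splice(0)) w({ value: undefined, done: true });
  }

  // Pull side: any `for await` loop can consume the queue.
  async *[Symbol.asyncIterator](): AsyncGenerator<T> {
    for (;;) {
      if (this.items.length) {
        yield this.items.shift()!;
        continue;
      }
      if (this.done) return;
      const r = await new Promise<IteratorResult<T>>((res) =>
        this.waiters.push(res)
      );
      if (r.done) return;
      yield r.value;
    }
  }
}
```

WritableIterable plays this role for real event sources (see the WebSocket example below in "A Taste of proc"), with backpressure applied via the awaited `write`.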

Data Transforms — Convert between CSV, TSV, JSON, and Record formats with streaming support. Or use the WASM-powered flatdata CLI for maximum throughput.
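Streaming here means one record in, one record out. As a concept sketch in plain JavaScript (not proc's actual transform API), a TSV-to-record stage looks like this; `tsvToRecords` is a hypothetical name for illustration:

```typescript
// Emits one record per input line; the whole file is never held in memory.
async function* tsvToRecords(
  lines: AsyncIterable<string>,
  header: string[],
): AsyncGenerator<Record<string, string>> {
  for await (const line of lines) {
    const cells = line.split("\t");
    yield Object.fromEntries(header.map((h, i) => [h, cells[i] ?? ""]));
  }
}
```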

A Taste of proc

Count lines in a compressed file—streaming, constant memory:

import { read } from "jsr:@j50n/proc@0.24.6";

const count = await read("logs.txt.gz")
  .transform(new DecompressionStream("gzip"))
  .lines
  .count();

Chain processes like shell pipes:

import { run } from "jsr:@j50n/proc@0.24.6";

const errors = await run("cat", "app.log")
  .run("grep", "ERROR")
  .run("wc", "-l")
  .lines.first;

Transform async data with familiar methods:

import { enumerate } from "jsr:@j50n/proc@0.24.6";

const results = await enumerate(urls)
  .concurrentMap(fetch, { concurrency: 5 })
  .filter((r) => r.ok)
  .map((r) => r.json())
  .collect();

Bridge event-driven code to async iteration:

import { WritableIterable } from "jsr:@j50n/proc@0.24.6";

const messages = new WritableIterable<string>();
ws.onmessage = async (e) => await messages.write(e.data);
ws.onclose = () => messages.close();

for await (const msg of messages) {
  console.log("Received:", msg);
}

Quick Decision Guide

Need to run shell commands? → Running Processes

Processing files line by line? → File I/O

Converting CSV/TSV/JSON? → Data Transforms

Have callback/event-based data? → WritableIterable

Need maximum throughput? → flatdata CLI

Working with any async data? → Understanding Enumerable

Getting Started

  1. Installation — Add proc to your project
  2. Quick Start — Your first proc script in 5 minutes
  3. Key Concepts — Essential patterns to understand

Version: 0.24.6 | License: MIT | Status: Production-ready

GitHub · Issues · FAQ