
Running a Warehouse System on a 4 GB Server with No Docker

· 22 min read
Series parts
  1. Part 1: Walking Is the Most Expensive Warehouse Operation
  2. Part 2: Three Generations of a Warehouse Routing Engine
  3. Part 3: Running a Warehouse System on a 4 GB Server with No Docker
  4. Part 4: Streaming Excel to a Database Without Losing a Single Row

Part 2 ended with a ~144 KB WASM binary that computes optimized routes in sub-second time. That binary is a function: items and positions go in, a pick sequence comes out. But a function isn’t a system. An operator scanning a barcode in a warehouse needs an HTTP endpoint that accepts the scan data, a backend that resolves the barcode to an ERP order, a database connection to retrieve item positions and quantities, and a response formatted for the React frontend. The optimization engine needs to be loaded, initialized, and called within that request lifecycle. Errors need to reach me somehow, on a server with no internet access. And all of this needs to stay running on a 4 GB machine where a crashed service can take down unrelated Vivaldi applications if it’s not properly isolated.

This post covers everything between the WASM function and the operator’s screen: the full system architecture, the technology choices with their rationale, the monorepo that holds 14 TypeScript packages and 2 Rust crates, the architectural patterns that kept the codebase navigable for a single engineer over 2.5 years, the hybrid REST/gRPC split, the log pipeline that turns structured JSON into email alerts, and the systemd deployment model that replaced Docker because Docker wasn’t allowed.

System Architecture

                      ┌─────────────────────────┐
                      │     React.js Client     │
                      │   (Bulma CSS, Formik)   │
                      └────────────┬────────────┘
                                   │ HTTP/REST
                                   ▼
                      ┌─────────────────────────┐
                      │     Koa.js REST API     │
                      └──┬─────────┬─────────┬──┘
                         │         │         │
           ┌─────────────┘         │         └──────────────┐
           ▼                       ▼                        ▼
 ┌────────────────────┐  ┌────────────────────┐  ┌────────────────────┐
 │ Rust/WASM          │  │ MSSQL Database     │  │ gRPC Services      │
 │ Optimization       │  │ (ERP-integrated    │  │ - Emailer          │
 │ Engine (JPS)       │  │  stored procs)     │  │ - CSV Updater      │
 └────────────────────┘  └────────────────────┘  └────────────────────┘
           │
           │ Unix pipe (stdout)
           ▼
 ┌────────────────────┐
 │ Log transport      │ ──gRPC──▶ Emailer service ──SMTP──▶ Admin inbox
 │ (error batching)   │
 └────────────────────┘

The system has five major components, connected by three network protocols and one in-process boundary. The React frontend talks to the Koa.js REST API over HTTP. The REST API talks to the Rust/WASM engine through in-process function calls, with no network boundary and no serialization overhead beyond the WASM ABI. The REST API talks to Microsoft SQL Server via the tedious driver for stored procedure calls. And the REST API talks to two gRPC services, an emailer and a CSV updater, over gRPC for process isolation and streaming.

The unusual part is the log pipeline at the bottom of the diagram. The REST API doesn’t process its own logs. It writes structured JSON to stdout via Pino.js, and a separate Node.js process (the log transport) reads that stream through a Unix pipe, filters it, batches errors, and forwards them to the emailer service over gRPC. The emailer sends SMTP alerts to my inbox. Two independent processes, composed in a single shell command, communicating through a pipe. I’ll explain why this design exists and what patterns it implements later in this post.

Technology Stack

Two decisions shaped the stack more than any others. First, I chose Koa.js over Express because Koa’s async/await middleware model was cleaner for composing the multi-step request pipeline: parse barcode, resolve items from the ERP, run the optimizer, format the response. Express’s callback-based middleware felt unnecessarily noisy for a greenfield project in 2019. Koa middleware returns promises natively; error handling propagates through the async chain without explicit next(err) calls or try/catch wrappers at every layer. For a pipeline where each step depends on the previous step’s output, this distinction matters daily.

Second, I used gRPC for internal server-to-server communication instead of REST. The CSV updater needed client-side streaming, sending large Excel files row by row without buffering the entire file in memory, and gRPC handles streaming natively through protobuf contracts. REST has no standard primitive for this pattern. I reserved REST for the browser-facing API because browser-based gRPC (gRPC-Web) was impractical with React 16 in 2019; the proxy layer it required would have added more complexity than the benefit justified.

| Layer | Technology | Rationale |
| --- | --- | --- |
| Frontend | React 16 + TypeScript, Bulma CSS | Lightweight UI for warehouse operators; drag-and-drop and file upload |
| Backend (REST) | Node.js + Koa.js | Async/await middleware; cleaner pipeline composition than Express for multi-step requests |
| Backend (gRPC) | Node.js + grpc-js | Native streaming for bulk uploads; strong typing via protobuf; REST reserved for the browser |
| Optimization | Rust (edition 2021) → WASM | Near-native pathfinding; portable across any Node.js runtime; ~144 KB binary |
| Database | Microsoft SQL Server 2019 (migrated from 2008) | Direct ERP integration via stored procedures; no data replication |
| Build | pnpm + Turborepo | Monorepo orchestration with aggressive caching (migrated from Lerna) |
| Logging | Pino.js → gRPC → SMTP | Structured JSON logging with minimal GC pressure; batched error email forwarding |
| Deployment | systemd on Debian Linux | 3 service units; no Docker (IT vetoed it) |

On Koa vs Express: Koa.js was created by the same team behind Express, specifically to leverage ES2017 async/await. Express middleware uses the (req, res, next) callback pattern, where errors must be forwarded explicitly with next(err). Koa middleware uses async (ctx, next): you await next() to call downstream middleware, and any thrown error propagates up the async stack automatically. For a request that passes through 5–6 middleware layers (CORS, body parsing, validation, business logic, error formatting, logging), the difference in readability and debugging compounds quickly.
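The propagation behavior is easiest to see in the "onion" composition model that Koa implements. The sketch below is a minimal reimplementation of that model for illustration, not Koa's actual source: an outermost middleware wraps everything downstream in one try/catch, and a handler deep in the chain simply throws.

```typescript
// Minimal sketch of the onion model Koa implements (illustrative, not Koa's source).
type Ctx = { status: number; body?: unknown };
type Middleware = (ctx: Ctx, next: () => Promise<void>) => Promise<void>;

// Compose middleware into one function; each layer awaits the next.
function compose(middleware: Middleware[]) {
  return async (ctx: Ctx): Promise<void> => {
    const dispatch = async (i: number): Promise<void> => {
      if (i < middleware.length) {
        await middleware[i](ctx, () => dispatch(i + 1));
      }
    };
    await dispatch(0);
  };
}

// Outermost layer: the single error boundary for the whole pipeline.
const errorHandler: Middleware = async (ctx, next) => {
  try {
    await next();
  } catch {
    ctx.status = 500;
    ctx.body = { error: 'internal' };
  }
};

// A deep handler that throws; no next(err), no per-layer try/catch.
const failingHandler: Middleware = async () => {
  throw new Error('stored procedure timed out');
};

const app = compose([errorHandler, failingHandler]);
```

Running `app` against a context shows the thrown error surfacing as a 500 at the outermost layer, which is exactly the property that makes a 5–6 layer pipeline tractable.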

The Monorepo

The codebase is a pnpm monorepo holding 14 TypeScript packages and 2 Rust crates, migrated from Lerna to pnpm + Turborepo partway through the project. Lerna worked initially, but its task orchestration was sequential: running build across 14 packages waited for each to finish before starting the next, even when most packages had no dependency on each other. Turborepo’s build graph runs independent builds in parallel and caches results aggressively. A full rebuild that took minutes under Lerna completed in seconds once Turborepo could skip unchanged packages.

The packages group into five categories.

Frontend. The React client application. Bulma CSS for styling, Formik for form state management. The client talks exclusively to the REST API; it has no knowledge of gRPC or the WASM engine behind it.

Core logic. The REST API (the central service that handles all HTTP requests), the warehouse optimization wrapper (a TypeScript package that loads the WASM binary and exposes it as a typed function), and a position minimization utility.

Shared. Common domain models and utilities, the types that appear in both the REST API and the gRPC services. These packages had 100% test coverage because they defined the contracts between all other packages; a broken shared type would cascade into every consumer. An environment configuration package, also at 100% coverage, managed runtime settings: database connection strings, gRPC ports, log levels.

gRPC services. The email forwarding server, the CSV warehouse updater server, and the Pino log transport. Each runs as an independent Node.js process. The protobuf contracts live in their own package, shared between the services and the REST API that calls them.

Utilities. A CSV parser, an Excel exporter, and a shared Jest configuration used by all test suites.

The Rust workspace sits alongside the TypeScript packages. It holds two crates: warehouse-opt (pure domain logic: pathfinding, caching, the multi-phase greedy algorithm from Part 2) and warehouse-opt-wasm (the thin Facade that annotates types with wasm_bindgen). The WASM build output is an npm package that Turborepo treats identically to any TypeScript package. It appears in the dependency graph, gets versioned, and triggers downstream rebuilds when it changes.

turbo.json: WASM package as a first-class build target
// turbo.json

{
  "$schema": "https://turbo.build/schema.json",
  "pipeline": {
    "build": {
      "dependsOn": ["^build"],
      "outputs": ["dist/**"]
    },
    "@vivaldi/warehouse-opt#build": {
      "dependsOn": ["^build"],
      "outputs": ["pkg/**"],
      "inputs": [
        "rust/**/*.rs",
        "rust/**/Cargo.toml",
        "grid.txt"
      ]
    },
    "test": {
      "dependsOn": ["build"]
    },
    "lint": {
      "outputs": []
    }
  }
}

The @vivaldi/warehouse-opt#build override tells Turborepo that this specific package’s build depends on Rust source files and grid.txt, not on TypeScript source. When only a .ts file changes elsewhere in the monorepo, Turborepo skips the WASM build entirely, a cache hit that saves the full cargo build + wasm-bindgen pipeline. When grid.txt changes (a warehouse layout reorganization), the WASM package rebuilds and every downstream package that depends on it rebuilds too. The dependency graph handles the cascade automatically.

Architectural Patterns

Four patterns recur across the codebase. I’m naming them explicitly because each one solved a specific problem, and because single-engineer projects often rely on implicit conventions that decay over time. Naming the patterns made them enforceable. When I returned to the codebase after an 8-month dormancy in 2021, the pattern names in my notes told me exactly how the code was organized.

Vertical Slice Architecture

Every REST API module follows the same six-file structure: router.ts defines route handlers and composes the module; controller.ts handles HTTP concerns (parsing request parameters, formatting responses, setting status codes); manager.ts contains business logic and orchestrates calls between the repository and external services; repository.ts encapsulates database access through stored procedure calls; entity.ts defines domain types; validator.ts contains Joi schemas for input validation.

REST API module structure: the pattern repeats across all 8 modules
packages/rest-api/src/
├── items/
│   ├── router.ts          # Route definitions, composition root
│   ├── controller.ts      # HTTP request/response handling
│   ├── manager.ts         # Business logic, orchestration
│   ├── repository.ts      # Database queries via stored procedures
│   ├── entity.ts          # Domain types (Item, InventorySlot)
│   └── validator.ts       # Input validation (Joi schemas)
├── slots/
│   ├── router.ts
│   ├── controller.ts
│   ├── manager.ts
│   ├── repository.ts
│   ├── entity.ts
│   └── validator.ts
├── warehouse/
│   ├── router.ts
│   ├── controller.ts
│   ├── manager.ts
│   ├── repository.ts
│   ├── entity.ts
│   └── validator.ts
└── ... (5 more modules, same structure)

This pattern repeats across all 8 API modules: items, slots, warehouse, orders, health, and three others. The benefit for a single engineer is navigability. When an endpoint behaves unexpectedly, I know exactly which file to open: if the HTTP status code is wrong, it’s the controller; if the business logic is wrong, it’s the manager; if the query returns bad data, it’s the repository. Six files, six responsibilities, zero ambiguity about where a given concern lives.

The alternative, grouping all controllers together, all repositories together, all validators together, is the “horizontal layer” approach that scales better for large teams where different engineers own different layers. For one engineer who owns every layer, vertical slices mean a bug in the /items/sort-batch-orders endpoint never requires opening a file outside the items/ directory.
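The wiring between three of the six files can be sketched as below. The names and signatures are hypothetical, not the actual module code; the point is the dependency direction: the controller knows only the manager, the manager knows only the repository, and HTTP concerns never leak past the controller.

```typescript
// Hypothetical sketch of one vertical slice's wiring (illustrative names).
interface Item { id: string; position: string }

// repository.ts — the only file that knows about the database
type ItemRepository = { findById(id: string): Promise<Item | null> };

// manager.ts — business logic; no HTTP, no SQL
const makeItemManager = (repo: ItemRepository) => ({
  async getItem(id: string): Promise<Item> {
    const item = await repo.findById(id);
    if (!item) throw new Error(`item ${id} not found`);
    return item;
  },
});

// controller.ts — HTTP concerns only: status codes and response shape
const makeItemController = (manager: ReturnType<typeof makeItemManager>) => ({
  async getItem(id: string): Promise<{ status: number; body: unknown }> {
    try {
      return { status: 200, body: await manager.getItem(id) };
    } catch (err) {
      return { status: 404, body: { error: (err as Error).message } };
    }
  },
});
```

A wrong status code points at the controller, a wrong lookup rule at the manager, bad data at the repository — the debugging heuristic described above falls directly out of this structure.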

Facade Pattern (WASM Bridge)

The Rust workspace separates into two crates, and the boundary between them is deliberate. warehouse-opt contains all domain logic: Jump Point Search, the distance cache, the multi-phase greedy algorithm, grid parsing. It has zero knowledge of WASM: no wasm_bindgen annotations, no browser-specific code, no serialization format assumptions. warehouse-opt-wasm is a seven-line shim (shown in Part 2) that annotates the input and output types with wasm_bindgen and delegates to the core library. The entire WASM surface area is one re-exported function.

This separation paid off in testing. warehouse-opt is tested with standard cargo test using fixtures that exercise pathfinding, caching, and the full optimization pipeline. These tests run on any machine with a Rust toolchain; no WASM runtime, no Node.js, no browser needed. The WASM crate is tested through integration tests that load the binary into Node.js and verify the round-trip. Keeping the Facade thin means almost all test coverage lives in the core crate where it’s cheapest to run.

Type Bridge (Rust/TypeScript Boundary)

The tsify crate auto-generates TypeScript type declarations from Rust struct definitions. When the optimizer’s input type FindBestSortingParams includes a field requested_items: Vec<RequestedItem>, tsify produces a TypeScript interface with requestedItems: RequestedItem[]. The serde attribute #[serde(rename_all = "camelCase")] handles the naming convention translation during serialization: Rust uses snake_case internally, TypeScript expects camelCase, and the mismatch is reconciled at the WASM boundary without manual mapping code on either side.

This might sound like a convenience feature, but it solved a real class of bugs. The C++ generation (Gen 2) had no equivalent mechanism. The camelCase/snake_case boundary between Node.js and C++ was entirely manual, and a mismatch between a JSON key and a C++ struct member would silently produce garbage data or a crash at runtime. With tsify, a field name change in Rust propagates to the TypeScript types at compile time. The type checker catches the mismatch before the code runs. That class of bug is structurally impossible in the current system.
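For intuition, the renaming rule that `#[serde(rename_all = "camelCase")]` applies is mechanical. The function below exists only to illustrate it — in the real system the translation happens inside serde during serialization, and no such JavaScript function is needed:

```typescript
// Illustration only: the snake_case → camelCase rule serde applies at the
// WASM boundary. No equivalent code exists in the actual system.
function snakeToCamel(key: string): string {
  return key.replace(/_([a-z])/g, (_, c: string) => c.toUpperCase());
}

// Rename every top-level key of an object.
function renameKeys(obj: Record<string, unknown>): Record<string, unknown> {
  return Object.fromEntries(
    Object.entries(obj).map(([k, v]) => [snakeToCamel(k), v]),
  );
}
```

`requested_items` becomes `requestedItems`, and because tsify generates the TypeScript declaration from the same Rust struct, both sides of the boundary always agree on the camelCase name.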

Polyglot Monorepo

The build graph spans three build systems. A change to grid.txt triggers build.rs, which generates generated.rs, which feeds into cargo build, which produces a WASM binary via wasm-bindgen, which becomes an npm package, which enters Turborepo’s dependency graph, which triggers rebuilds of every downstream TypeScript package. Three toolchains (Rust’s cargo, wasm-bindgen, and Turborepo) execute in sequence, yet the final output is a package that TypeScript consumers import like any other:

import { findBestSorting } from '@vivaldi/warehouse-opt';

The consumer doesn’t know or care that the implementation is Rust compiled to WASM. The polyglot complexity is confined to the build pipeline; at runtime, it’s a function call.

Hybrid REST + gRPC

The system uses REST for everything the browser touches and gRPC for everything internal. This split wasn’t an architectural preference; it was a practical consequence of the constraints.

The browser needed to call the API. gRPC-Web existed in 2019, but it required a proxy (Envoy or grpc-web-proxy) between the browser and the gRPC server, translating HTTP/1.1 requests into HTTP/2 gRPC calls. Adding a proxy process to a 4 GB server that already ran three services was unappealing, and the proxy itself would need monitoring and restart logic. REST over Koa.js was simpler and sufficient for the browser’s needs: request-response pairs for route optimization, item lookups, and file downloads.

gRPC was necessary for the internal services for two specific reasons. The CSV updater needed client-side streaming: when an operator uploads an Excel file to update the warehouse, the REST API streams the rows one at a time to the updater service over a gRPC stream, never buffering the full file in memory. On a 4 GB server, buffering a large spreadsheet in the API process while simultaneously serving route optimization requests is unsafe. gRPC’s streaming is a first-class protocol feature defined in the protobuf contract, not a workaround layered on top of HTTP chunked encoding. The emailer needed process isolation: if the email service crashes, the REST API continues serving routes. gRPC provided the process boundary with strong typing through protobuf contracts.

On gRPC: gRPC is a remote procedure call framework built on HTTP/2 and Protocol Buffers (protobuf). Where REST sends JSON over HTTP/1.1 and uses URL paths to identify resources, gRPC sends binary-encoded protobuf messages over HTTP/2 and uses service definitions to identify methods. The key advantage for this system was streaming. gRPC defines four interaction patterns: unary (one request, one response), server streaming, client streaming, and bidirectional streaming. The CSV updater uses client streaming: the REST API sends rows one at a time over a single open connection, and the updater processes each row as it arrives. REST has no standard equivalent; you’d need WebSockets, chunked transfer encoding, or a custom protocol to achieve the same result.
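A client-streaming contract of this shape looks roughly like the following in protobuf. The service and message names here are hypothetical sketches, not the project’s actual .proto:

```proto
// Hypothetical sketch of a client-streaming contract like the CSV updater's.
syntax = "proto3";

service CsvUpdater {
  // Client streaming: the caller writes rows one at a time;
  // the server replies once with a summary when the stream closes.
  rpc UpdateWarehouse (stream WarehouseRow) returns (UpdateSummary);
}

message WarehouseRow {
  string item_code = 1;
  string position = 2;
  int32 quantity = 3;
}

message UpdateSummary {
  int32 rows_accepted = 1;
  int32 rows_rejected = 2;
}
```

The `stream` keyword on the request side is the entire streaming contract; code generated from this definition gives the REST API a typed `write(row)` handle and the updater a typed per-row callback.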

On the hybrid architecture: the REST API is both an HTTP server (for the browser) and a gRPC client (for internal services). The REST API calls the emailer and CSV updater; it doesn’t receive gRPC calls. The only gRPC servers are the emailer and CSV updater processes. This asymmetry is intentional: the browser-facing surface area is entirely REST, and the gRPC complexity is hidden behind the API boundary. Part 4 covers the CSV updater’s streaming protocol in detail.

The Emailer: gRPC for Process Isolation

The emailer service exists for one reason: crash isolation. On a 4 GB server with no external monitoring (no Sentry, no Datadog, no PagerDuty, the constraints from Part 1), the only way I could receive error notifications was through Vivaldi’s own SMTP server. If the email-sending code ran inside the REST API process and crashed (a malformed email template, an SMTP timeout, a transient network error), it could bring down the API and stop operators from receiving optimized routes. A bug in the observability layer would make the system both broken and unobservable simultaneously.

The emailer runs as a separate Node.js process with its own systemd service unit. It exposes a single gRPC endpoint: accept a batch of error logs and send them as an email. If it crashes, systemd restarts it automatically. The REST API never notices because the log transport (the process that feeds errors to the emailer) handles the gRPC connection lifecycle independently. If a gRPC send fails, the transport retries at a fixed 5-second interval. Errors accumulate in the transport’s in-memory buffer until the emailer comes back.

The alternative, running the email logic inside the API process and wrapping it in try/catch, would have been simpler to deploy (one process instead of two) but brittle to operate. A bug in email formatting or SMTP handling would pollute the API process’s error state, and on Node.js, an uncaught exception in an async email callback can terminate the entire process. Process isolation makes this impossible: the emailer’s failures are confined to the emailer’s address space.

Log Pipeline

The log pipeline composes four named patterns, each solving a specific operational problem on a server with no access to third-party logging services.

Pipe-and-Filter

The REST API writes structured JSON to stdout using Pino.js. Pino was chosen specifically for its minimal garbage collection impact: it serializes log objects to JSON strings with minimal intermediate object allocations, which matters on a memory-constrained server where GC pauses in the logging path would add latency to every API request.

The API process does zero log processing. It writes JSON and moves on. A separate process, connected via a Unix pipe, handles everything else. The pipeline has three stages: stdin receives the raw JSON stream; a split+parse stage deserializes each line and filters by log level (only errors pass through; 404 responses are explicitly dropped because they’re expected behavior from health-check probes); a batch+send stage accumulates the filtered errors and forwards them to the emailer via gRPC.

pino-grpc-send: three-stage stream pipeline
// packages/pino-grpc-send/src/index.ts

import pump from 'pump';
import split from 'split2';
import through2 from 'through2';

// emailerClient (the gRPC client) and batchAndSend (the Batch-with-Timeout
// stage) are defined elsewhere in this package.

const LOG_LEVEL_ERROR = 50; // Pino's numeric level for "error"
const BATCH_SIZE = 10;
const FLUSH_INTERVAL_MS = 10_000;

pump(
  process.stdin,

  // Stage 1: split lines and parse JSON; skip any non-JSON line
  // (split2 drops undefined results instead of crashing the pipeline)
  split((line) => {
    try {
      return JSON.parse(line);
    } catch {
      return undefined;
    }
  }),

  // Stage 2: filter by level, drop expected 404s
  through2.obj(function (log, _enc, cb) {
    if (log.level >= LOG_LEVEL_ERROR && log.res?.statusCode !== 404) {
      this.push(log);
    }
    cb();
  }),

  // Stage 3: batch + send via gRPC
  batchAndSend(emailerClient, {
    maxItems: BATCH_SIZE,
    maxWaitMs: FLUSH_INTERVAL_MS,
  }),
);

The pump library composes the stages and handles backpressure: if the gRPC send stage slows down (the emailer is busy), pump pauses the upstream filter stage, which pauses the parser, which pauses stdin reading. No logs are dropped; they buffer in the pipe until the downstream pressure resolves. If any stage errors, pump tears down the entire pipeline cleanly instead of leaving zombie streams.

Batch-with-Timeout

Errors flush to the emailer when either of two conditions is met: the batch accumulates 10 error logs, or 10 seconds have elapsed since the last flush, whichever comes first. The dual threshold prevents two failure modes. Without the count threshold, a burst of 50 errors during a database outage would produce 50 individual emails in rapid succession, flooding my inbox and potentially overwhelming the SMTP server. Without the time threshold, a single isolated error on an otherwise quiet day would sit in the buffer indefinitely, and I wouldn’t learn about it until the next error arrived to fill the batch. Ten logs, ten seconds, chosen empirically after observing error patterns during the first months of production operation.
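The dual-threshold logic can be sketched as a small batcher. This is an assumed shape for illustration; the real `batchAndSend` stage lives inside the pino-grpc-send package and differs in detail:

```typescript
// Sketch of a Batch-with-Timeout accumulator (assumed shape, not the real code).
type Flush<T> = (batch: T[]) => void;

class Batcher<T> {
  private buffer: T[] = [];
  private timer: ReturnType<typeof setTimeout> | null = null;

  constructor(
    private readonly flushFn: Flush<T>,
    private readonly maxItems = 10,
    private readonly maxWaitMs = 10_000,
  ) {}

  push(item: T): void {
    this.buffer.push(item);
    if (this.buffer.length >= this.maxItems) {
      // Count threshold: a burst flushes as one batch, not one email per error
      this.flush();
    } else if (!this.timer) {
      // Time threshold: a lone error waits at most maxWaitMs before flushing
      this.timer = setTimeout(() => this.flush(), this.maxWaitMs);
    }
  }

  flush(): void {
    if (this.timer) {
      clearTimeout(this.timer);
      this.timer = null;
    }
    if (this.buffer.length === 0) return;
    const batch = this.buffer;
    this.buffer = [];
    this.flushFn(batch);
  }
}
```

Either condition firing resets both: a count-triggered flush cancels the pending timer, and a timer-triggered flush empties the count.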

Retry with Fixed Interval

Failed gRPC sends to the emailer retry at a constant 5-second interval. No exponential backoff, no jitter, no circuit breaker. The emailer runs on the same machine; if it’s down, it’s because it crashed and systemd is restarting it, which takes less than a second. A fixed retry interval is the right pattern for this failure mode because the recovery time is bounded and predictable. Exponential backoff is designed for remote services with variable recovery times; applying it to a local process restarting on the same host would add unnecessary delay to error delivery.
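A fixed-interval retry loop is a few lines. One caveat on this sketch: the real transport retries indefinitely (errors buffer until the emailer returns), whereas this illustration adds a `maxAttempts` bound so it terminates:

```typescript
// Fixed-interval retry, sketched. No backoff, no jitter: the peer is a
// local process that systemd restarts in under a second.
const sleep = (ms: number) => new Promise<void>((r) => setTimeout(r, ms));

async function retryFixed<T>(
  fn: () => Promise<T>,
  intervalMs: number,
  maxAttempts: number,
): Promise<T> {
  let lastError: unknown;
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    try {
      return await fn();
    } catch (err) {
      lastError = err;
      if (attempt < maxAttempts) await sleep(intervalMs); // constant wait
    }
  }
  throw lastError;
}
```

With exponential backoff, a crash at the wrong moment could delay error delivery by minutes for no benefit; the constant interval keeps worst-case delivery latency close to the emailer's restart time.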

Unix Process Composition

The entire log pipeline, from API stdout through parsing, filtering, batching, and gRPC send, runs as two processes composed in a single shell command:

node rest-api | node pino-grpc-send

This is the systemd ExecStart line for the main service. The pipe operator connects the API’s stdout to the transport’s stdin. Each process manages its own memory independently; the API can garbage-collect its heap without affecting the transport, and vice versa. On a 4 GB server shared with other Vivaldi services, this memory isolation is not a design luxury; it’s a survival mechanism.

The composability matters. I could replace the transport with tee /var/log/warehouse.log for file-based logging, or with jq . for pretty-printed console output during debugging, or redirect stdout to /dev/null to discard logs entirely. The API process wouldn’t know the difference. Unix pipes are the original microservice boundary. Doug McIlroy proposed connecting programs this way in a 1964 Bell Labs memo, and the idea was later distilled into the familiar maxim: write programs that do one thing well, and write programs to work together through text streams.

On pipe-and-filter: this pattern is a literal implementation of the Unix philosophy. The API is a source that produces structured data; the transport is a filter that selects, batches, and forwards. Each component has a single responsibility, and the pipe is a universal interface between them. This same composition model scales from shell one-liners to production service architectures; the semantics are identical, only the stakes change.

Deployment: Three systemd Services

The system runs as three coordinated systemd services. systemd’s native dependency graph handles startup ordering:

| Service | Purpose | Dependency |
| --- | --- | --- |
| Main REST API | Koa.js server, piped to log transport | Requires email service |
| Email service | gRPC server that forwards errors via SMTP | Independent |
| CSV updater | gRPC server for bulk warehouse updates | Independent |

The email service and CSV updater start independently; they have no dependencies on each other or on the main API. The main API depends on the email service (Requires=vivaldi-emailer.service), so systemd ensures the emailer is running before starting the API. If the emailer goes down while the API is running, the log transport’s retry logic handles the gap; the Requires dependency only governs startup ordering, not runtime health.

Main service unit: two processes, one pipe, one systemd service
# /etc/systemd/system/vivaldi-rest-api.service

[Unit]
Description=Vivaldi Warehouse Log: REST API + Log Transport
After=vivaldi-emailer.service
Requires=vivaldi-emailer.service

[Service]
Type=simple
WorkingDirectory=/opt/vivaldi/warehouse-log
# -o pipefail: without it, a plain pipe reports only the last command's exit
# status, and a crashed API process would not trigger Restart=on-failure
ExecStart=/bin/bash -o pipefail -c 'node packages/rest-api/dist/index.js | node packages/pino-grpc-send/dist/index.js'
Restart=on-failure
KillSignal=SIGQUIT
Environment=NODE_ENV=production

[Install]
WantedBy=multi-user.target

The ExecStart line is the Unix pipe composition described above. KillSignal=SIGQUIT triggers graceful shutdown: the API drains in-flight requests, the log transport flushes any buffered errors, and both processes exit cleanly. Restart=on-failure means systemd restarts the service automatically if either process in the pipe exits with a non-zero status. All three services follow this pattern: graceful shutdown on SIGQUIT, automatic restart on failure.

Deployments happened over SSH: scp the build artifacts to /opt/vivaldi/warehouse-log/, run an install script that copies files and reloads the systemd daemon, then restart the services. No Docker image builds, no container registry, no orchestration platform. The IT team vetoed Docker (as noted in Part 1), and for three services on a single server maintained by one engineer, systemd provided everything I needed: dependency ordering, automatic restart, graceful shutdown signals, and journalctl for log inspection. A container orchestrator would have consumed memory and operational attention for capabilities I didn’t use on a deployment that happened a few times per month.

On operational overhead: three systemd services with automatic restart and graceful shutdown replaced what would otherwise require Docker, a container orchestrator, and a deployment pipeline. The total operational overhead is one install script and three .service files. The tradeoff is that deployments are manual and there’s no rollback mechanism beyond re-deploying the previous build artifacts. Acceptable for a system with infrequent deployments maintained by the same person who wrote the code, less acceptable at any larger scale.

API Surface

The REST API exposes nine endpoints, documented via an OpenAPI 3.0 specification:

| Method | Path | Purpose |
| --- | --- | --- |
| POST | /items/sort-batch-orders | Core: optimize pick route for batch orders |
| POST | /slots/assign/item | Assign item to warehouse slot |
| GET | /slots/assign/{position} | Check slot occupant |
| GET | /items/{id} | Item detail lookup |
| GET | /items/find/{pattern} | Wildcard search (* supported) |
| POST | /items/update-quantity | Update quantity at position |
| GET | /warehouse/snapshot | Full warehouse export (Excel download) |
| POST | /warehouse/update | Bulk update (gRPC-streamed import) |
| GET | /health/metrics | Build datetime and commit info |

The most-used endpoint is POST /items/sort-batch-orders, the route optimizer. An operator scans a barcode, the frontend sends the decoded ORC code to this endpoint, the controller calls the manager, the manager calls the repository (which executes the stored procedure to resolve items and positions), passes the results to the WASM optimizer, and returns the sorted pick sequence. The full Vertical Slice in action: router, controller, manager, repository, optimizer, response.

Two endpoints deserve attention because Part 4 covers them in depth. GET /warehouse/snapshot is the simple path: one stored procedure call, one Excel serialization, one file download. POST /warehouse/update is the hard path: the REST API receives an Excel file from the browser, opens a gRPC stream to the CSV updater service, and streams rows one by one for validation, staging, and an atomic database swap. The simplicity of the export and the complexity of the import are both consequences of the same constraint: operators work with Excel, not SQL, and the system must make both directions safe.


Part 4 picks up where the API surface table leaves off: the data pipeline end-to-end. I’ll cover the export path that produces warehouse-snapshot.xlsx with a single stored procedure, the import path that streams Excel rows over gRPC with typed error codes and atomic database swaps, the Shared Database Pattern that wired everything directly into the live ERP, and the SQL Server 2008 to 2019 migration. The architecture described in this post is the chassis; Part 4 shows what happens when operators push data through it, and the honest lessons I learned from building a production system alone for 2.5 years.