Run durable workflows on Postgres. No servers, no queues, just your database.
Install the package:
```shell
npm install @hotmeshio/hotmesh
```
The repo includes a docker-compose.yml that starts Postgres and a development container:
```shell
docker compose up -d
```
See the Durable API reference for the full API surface — workflows, activities, signals, child workflows, and more.
Define the workflow — plain TypeScript with branching, loops, and error handling. Activities are proxied so their results are checkpointed and replayed on restart.
```typescript
// workflows.ts
import { Durable } from '@hotmeshio/hotmesh';
import type * as activities from './activities';

export async function orderWorkflow(itemId: string, qty: number) {
  const { checkInventory, reserveItem, notifyBackorder } =
    Durable.workflow.proxyActivities<typeof activities>();

  const available = await checkInventory(itemId);
  if (available >= qty) {
    return await reserveItem(itemId, qty);
  } else {
    await notifyBackorder(itemId);
    return 'backordered';
  }
}
```
Start a worker — connects to Postgres and begins processing workflows on the given task queue.
```typescript
// worker.ts
import { Durable } from '@hotmeshio/hotmesh';
import { Client as Postgres } from 'pg';
import { orderWorkflow } from './workflows';

const connection = {
  class: Postgres,
  options: { connectionString: 'postgresql://localhost:5432/mydb' }
};

const worker = await Durable.Worker.create({
  connection,
  taskQueue: 'orders',
  workflow: orderWorkflow,
});
await worker.run();
```
Run a workflow — start an execution and await its result. The client can run in a different process, container, or server.
```typescript
// client.ts
import { Durable } from '@hotmeshio/hotmesh';
import { Client as Postgres } from 'pg';

const connection = {
  class: Postgres,
  options: { connectionString: 'postgresql://localhost:5432/mydb' }
};

const client = new Durable.Client({ connection });
const handle = await client.workflow.start({
  args: ['item-123', 5],
  taskQueue: 'orders',
  workflowName: 'orderWorkflow',
  workflowId: 'order-456',
});
const result = await handle.result();
```
Activities are your side-effectful functions — database calls, API requests, anything non-deterministic. HotMesh checkpoints their results so they're never re-executed on replay.
```typescript
// activities.ts
// getInventoryCount, createReservation, and sendBackorderEmail stand in
// for your own implementations (database calls, API requests, etc.)
export async function checkInventory(itemId: string): Promise<number> {
  return getInventoryCount(itemId);
}

export async function reserveItem(itemId: string, quantity: number): Promise<string> {
  return createReservation(itemId, quantity);
}

export async function notifyBackorder(itemId: string): Promise<void> {
  await sendBackorderEmail(itemId);
}
```
All snippets below run inside a workflow function (like orderWorkflow above). Durable methods are available as static imports:
```typescript
import { Durable } from '@hotmeshio/hotmesh';
```
Long-running workflows — sleep is durable. The process can restart; the timer survives.
```typescript
// sendFollowUp is a proxied activity from proxyActivities()
await Durable.workflow.sleep('30 days');
await sendFollowUp();
```
Parallel execution — fan out to multiple activities and wait for all results.
```typescript
// proxied activities run as durable, retryable steps
const [payment, inventory, shipment] = await Promise.all([
  processPayment(orderId),
  updateInventory(orderId),
  notifyWarehouse(orderId)
]);
```
Child workflows — compose workflows from other workflows.
```typescript
const result = await Durable.workflow.executeChild({
  args: [orderId],
  taskQueue: 'validation',
  workflowName: 'validateOrder',
  workflowId: `validate-${orderId}`,
});
```
Signals — pause a workflow until an external event arrives.
```typescript
const approval = await Durable.workflow.condition<{ approved: boolean }>('manager-approval');
if (!approval.approved) return 'rejected';
```
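The sending side is not shown above. Assuming the Durable client exposes a `signal` method (check the Durable API reference for the exact signature), delivering the event from another process might look like:

```typescript
// Hedged sketch: send the 'manager-approval' signal to the waiting workflow.
// The payload becomes the value returned by the awaiting condition() call.
await client.workflow.signal('manager-approval', { approved: true });
```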
Activities retry automatically on failure. Configure the policy per activity or per worker:
```typescript
// Durable: per-activity retry policy (activities registered at Worker.create)
const { reserveItem } = Durable.workflow.proxyActivities<typeof activities>({
  retry: {
    maximumAttempts: 5,
    backoffCoefficient: 2,
    maximumInterval: '60s'
  }
});

// HotMesh: worker-level retry policy
const hotMesh = await HotMesh.init({
  appId: 'orders',
  engine: { connection },
  workers: [{
    topic: 'inventory.reserve',
    connection,
    retry: {
      maximumAttempts: 5,
      backoffCoefficient: 2,
      maximumInterval: '60s'
    },
    callback: async (data) => { /* ... */ }
  }]
});
```
Defaults: 3 attempts, coefficient 10, 120s cap. Delay formula: min(coefficient ^ attempt, maximumInterval). Duration strings like '5 seconds', '2 minutes', and '1 hour' are supported.
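The delay formula can be sketched as a plain function (illustrative only; HotMesh computes this internally):

```typescript
// Sketch of the documented delay formula: min(coefficient ^ attempt, maximumInterval).
// Defaults from above: coefficient 10, 120-second cap. Units are seconds.
function backoffSeconds(
  attempt: number,
  coefficient = 10,
  maximumIntervalSeconds = 120
): number {
  return Math.min(Math.pow(coefficient, attempt), maximumIntervalSeconds);
}
```

With the defaults, attempt 1 waits 10s, attempt 2 waits 100s, and every later attempt is capped at 120s.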
If all retries are exhausted, the activity fails and the error propagates to the workflow function — handle it with a standard try/catch.
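A minimal sketch of that failure path, with a stand-in retry loop (`retryCall` is illustrative, not a HotMesh API — the real retries happen inside the proxied activity):

```typescript
// Illustrative stand-in for the framework's retry loop: try up to
// maximumAttempts times, then rethrow the last error to the caller.
async function retryCall<T>(fn: () => Promise<T>, maximumAttempts: number): Promise<T> {
  let lastError: unknown;
  for (let attempt = 1; attempt <= maximumAttempts; attempt++) {
    try {
      return await fn();
    } catch (err) {
      lastError = err; // a real activity would back off here before retrying
    }
  }
  throw lastError; // retries exhausted: the error reaches the workflow function
}

// Inside a workflow, the exhausted error is handled with ordinary try/catch:
async function reserveWithFallback(reserve: () => Promise<string>): Promise<string> {
  try {
    return await retryCall(reserve, 3);
  } catch {
    return 'backordered'; // compensating path after all attempts failed
  }
}
```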
Workflow state lives in your database as ordinary rows — jobs and jobs_attributes. Query it directly, back it up with pg_dump, replicate it, join it against your application tables.
```sql
SELECT
  j.key        AS job_key,
  j.status     AS semaphore,
  j.entity     AS workflow,
  a.symbol     AS attribute,
  a.dimension  AS dimension,
  a.value      AS value,
  j.created_at,
  j.updated_at
FROM jobs j
JOIN jobs_attributes a ON a.job_id = j.id
WHERE j.key = 'order-456'
ORDER BY a.symbol, a.dimension;
```
What happened? Consult the database. What's still running? Query the semaphore. What failed? Read the row. The execution state isn't reconstructed from a log — it was committed transactionally as each step ran.
```typescript
const handle = client.workflow.getHandle('orders', 'orderWorkflow', 'order-456');
const result = await handle.result();      // final output
const status = await handle.status();      // semaphore (0 = complete)
const state = await handle.state(true);    // full state with metadata
const exported = await handle.export({     // selective export
  allow: ['data', 'state', 'status', 'timeline']
});
```
There is no proprietary dashboard. Workflow state lives in Postgres, so use whatever tools you already have:
- `jobs` and `jobs_attributes` to inspect state, as shown above.
- `handle.status()`, `handle.state(true)`, and `handle.export()` for programmatic access to any running or completed workflow.
- `HMSH_LOGLEVEL` (`debug`, `info`, `warn`, `error`, `silent`) to control log verbosity.
- `HMSH_TELEMETRY=true` to emit spans and metrics. Plug in any OTel-compatible collector.

HotMesh also supports a declarative YAML syntax. The same activities run in both modes — the difference is compilation speed. YAML workflows compile ~10x faster because the execution graph is declared upfront rather than discovered through replay. The tradeoff is expressiveness: YAML uses a functional pipe syntax for conditions and transformations instead of native TypeScript control flow.
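For example, the `HMSH_*` variables above are plain environment variables, set when launching the worker process (`worker.js` here stands for the compiled `worker.ts` from earlier):

```shell
# HMSH_* variables control logging and telemetry at process startup
export HMSH_LOGLEVEL=debug
export HMSH_TELEMETRY=true
# node worker.js   # launch the worker with these settings
```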
See the Quick Start guide for YAML examples and the tests/functional/ directory for working implementations.
For a deep dive into the transactional execution model — how every step is crash-safe, how the monotonic collation ledger guarantees exactly-once delivery, and how cycles and retries remain correct under arbitrary failure — see the Collation Design Document. The symbolic system (how to design workflows) and lifecycle details (how to deploy workflows) are covered in the Architectural Overview.
Tests run inside Docker. Start the services and run the full suite:
```shell
docker compose up -d
docker compose exec hotmesh npm test
```
Run a specific test group:
```shell
docker compose exec hotmesh npm run test:durable        # all Durable tests
docker compose exec hotmesh npm run test:durable:hello  # single Durable test (hello world)
docker compose exec hotmesh npm run test:virtual        # all Virtual network function (VNF) tests
```
HotMesh is source-available under the HotMesh Source Available License.