@modularizer/plat

License: Unlicense

Protocol and Language Agnostic Tooling Yielding Universal Semantics

Package Exports

  • @modularizer/plat
  • @modularizer/plat/client
  • @modularizer/plat/client-server
  • @modularizer/plat/python-browser

🦫 plat (aka "platypus")

"call it like you're there"

plat stands for Protocol and Language Agnostic Tooling Yielding Proxy-like Universal Semantics. In short:

  • just write your methods...
  • ...then call them

It doesn't matter what you are designing:

  • Standard REST API
  • MCP Server
  • CLI Tool
  • Database CRUD
  • AI Tool Calls
  • Inter Process Communication
  • Worker Queues
  • Client-to-client Chat Room

At the end of the day, all there really is are handlers and callers.

Yes, there is an API layer in between, but it isn't really your concern:

  • auth
  • exceptions
  • headers
  • serialization/deserialization/type coercion
  • param validation
  • route building
  • response building
  • rate limiting
  • caching
  • client-side retry logic
  • client-side param validation

All of that can be handled behind the scenes by standardizable middleware plugins.
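
As a sketch of the idea (the types and names below are illustrative, not plat's actual plugin API), each of those concerns is just a wrapper around a plain handler:

```typescript
// Illustrative only: a generic middleware pattern, not plat's API.
type Handler = (input: { qty: number }) => Promise<{ status: string }>
type Middleware = (next: Handler) => Handler

// Param validation, done outside the handler.
const withValidation: Middleware = (next) => async (input) => {
  if (typeof input.qty !== "number" || input.qty <= 0) {
    throw new Error("qty must be a positive number")
  }
  return next(input)
}

// Client-side retry logic, also outside the handler.
const withRetry = (attempts: number): Middleware => (next) => async (input) => {
  let lastError: unknown
  for (let i = 0; i < attempts; i++) {
    try {
      return await next(input)
    } catch (err) {
      lastError = err
    }
  }
  throw lastError
}

// The handler itself stays oblivious to all of it.
const createOrder: Handler = async () => ({ status: "pending" })

const wrapped = [withValidation, withRetry(3)].reduceRight(
  (handler, middleware) => middleware(handler),
  createOrder,
)
```

Calling `wrapped({ qty: 2 })` behaves exactly like calling `createOrder` directly, which is the point: validation and retries never leak into the handler body.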

In fact, the transport method itself doesn't even matter:

  • HTTP
  • WS
  • File Queues
  • DB triggers
  • Zapier Integrations
  • External APIs
  • USPS mail delivery

It's all important, but your function handler does not need to know about it, and neither does your call site.

Okay, seriously ... what are we even talking about here?

Fair... try this


Quickstart

1. Install

npm i @modularizer/plat

2. Make a server

import { Controller, POST, createServer, type RouteContext } from "@modularizer/plat"

@Controller()
class OrdersApi {
  @POST()
  async createOrder(
    input: { itemId: string; qty: number },
    ctx: RouteContext,
  ): Promise<{ orderId: string; status: string }> {
    return { orderId: "ord_123", status: "pending" }
  }
}

const server = createServer({ port: 3000 }, OrdersApi)
server.listen()

3. Serve it with the CLI

plat serve

4. See the docs

open http://localhost:3000/

5. Make a client

plat gen client http://localhost:3000/ --dst client.ts

import { createClient } from "./client"


const client = createClient("http://localhost:3000")
const order = await client.createOrder({ itemId: "sku_123", qty: 2 })
console.log(order)

6. Make a CLI

plat gen cli http://localhost:3000/ --dst cli.ts
npx tsx cli.ts createOrder --itemId=sku_123 --qty=2

7. Let your AI loose

import { OpenAPIClient } from "@modularizer/plat"


const spec = await fetch("http://localhost:3000/openapi.json").then((r) => r.json())
const client = new OpenAPIClient(spec, { baseUrl: "http://localhost:3000" })

const tools = client.tools
// hand `tools` to your AI provider
// then call back into `client.createOrder(...)` when it selects a tool

Use it

  • Client call: client.createOrder({ itemId: "...", qty: 2 })
  • Generated CLI call: plat createOrder --itemId=... --qty=2
  • AI tool call: an LLM can see createOrder as a tool with a name, input shape, and result shape
  • Documentation: generated openapi.json, plus docs/tool metadata derived from the same method

🎯 Why plat exists

Most API frameworks make you think about:

  • routes
  • methods
  • headers
  • authentication
  • serialization/deserialization/coercion
  • sending responses
  • REST hierarchies
  • request shapes
  • wire protocols
  • client generation drift
  • transport details

plat tries to make most of that disappear.

What you should be thinking about is:

  • what methods exist
  • what input each method accepts
  • what result each method returns

That’s the part plat treats as sacred.

It feels like you are adding a new class, and behind the scenes an API is born.

One of the biggest reasons plat exists is to make it easy to use any AI provider:

  • on the client side
  • on the server side
  • as the initiator of your tasks
  • or as the doer of your tasks
  • or both

Everything below is in service of that same promise: define useful methods once, then let clients, CLIs, docs, and AI tools all see the same surface.

Diagram

β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”            
β”‚                   Tool Definitions                    β”‚            
β”‚           (controllers + decorated methods)           β”‚            
β”‚                                                       β”‚            
β”‚  TypeScript (plain types)    Python (type hints)      β”‚            
β”‚  class Orders {              @Controller()            β”‚            
β”‚    @POST()                   class Orders:            β”‚
β”‚    create(input, ctx) {}       @POST()                β”‚
β”‚    @GET()                      def create(self): ...  β”‚
β”‚    list(input, ctx) {}         @GET()                 β”‚            
β”‚  }                             def list(self): ...    β”‚            
β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜            
                           β”‚                                         
         β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
         β”‚        Operation Registry         β”‚
         β”‚                                   β”‚
         β”‚  operationId ─────► bound handler β”‚
         β”‚  method+path ─────► bound handler β”‚
         β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
                           β”‚                                         
         server protocol plugins (how tool calls arrive)             
                           β”‚                                         
    β”Œβ”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”            
    β”‚       β”‚        β”‚         β”‚        β”‚       β”‚       β”‚            
β”Œβ”€β”€β”€β”΄β”€β”€β”β”Œβ”€β”€β”€β”΄β”€β”€β”β”Œβ”€β”€β”€β”€β”΄β”€β”€β”€β”β”Œβ”€β”€β”€β”€β”΄β”€β”€β”€β”β”Œβ”€β”€β”€β”΄β”€β”€β”β”Œβ”€β”€β”€β”΄β”€β”€β”β”Œβ”€β”€β”€β”΄β”€β”€β”         
β”‚ HTTP β”‚β”‚  WS  β”‚β”‚  File  β”‚β”‚ WebRTC β”‚β”‚  DB  β”‚β”‚BullMQβ”‚β”‚ MQTT β”‚         
β”‚ REST β”‚β”‚  RPC β”‚β”‚ Queue  β”‚β”‚  Data  β”‚β”‚ Poll β”‚β”‚ Redisβ”‚β”‚Pub/  β”‚         
β”‚      β”‚β”‚      β”‚β”‚        β”‚β”‚  Chan  β”‚β”‚ Rows β”‚β”‚Queue β”‚β”‚ Sub  β”‚         
β””β”€β”€β”€β”€β”€β”€β”˜β””β”€β”€β”€β”€β”€β”€β”˜β””β”€β”€β”€β”€β”€β”€β”€β”€β”˜β””β”€β”€β”€β”€β”€β”€β”€β”€β”˜β””β”€β”€β”€β”€β”€β”€β”˜β””β”€β”€β”€β”€β”€β”€β”˜β””β”€β”€β”€β”€β”€β”€β”˜         
                            ...                                      
       literally anything that can carry a JSON envelope             
                            ...                                      
β”Œβ”€β”€β”€β”€β”€β”€β”β”Œβ”€β”€β”€β”€β”€β”€β”β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”β”Œβ”€β”€β”€β”€β”€β”€β”β”Œβ”€β”€β”€β”€β”€β”€β”β”Œβ”€β”€β”€β”€β”€β”€β”         
β”‚ HTTP β”‚β”‚  WS  β”‚β”‚  File  β”‚β”‚ WebRTC β”‚β”‚ POST β”‚β”‚ eBay β”‚β”‚  FB  β”‚         
β”‚ fetchβ”‚β”‚  RPC β”‚β”‚   IO   β”‚β”‚  Peer  β”‚β”‚to extβ”‚β”‚ list β”‚β”‚ Msg  β”‚         
β”‚      β”‚β”‚      β”‚β”‚        β”‚β”‚  Conn  β”‚β”‚  API β”‚β”‚ poll β”‚β”‚ poll β”‚         
β””β”€β”€β”€β”¬β”€β”€β”˜β””β”€β”€β”€β”¬β”€β”€β”˜β””β”€β”€β”€β”€β”¬β”€β”€β”€β”˜β””β”€β”€β”€β”€β”¬β”€β”€β”€β”˜β””β”€β”€β”€β”¬β”€β”€β”˜β””β”€β”€β”€β”¬β”€β”€β”˜β””β”€β”€β”€β”¬β”€β”€β”˜         
    β”‚       β”‚        β”‚         β”‚        β”‚       β”‚       β”‚            
    β””β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”˜            
                          β”‚                                          
        client transport plugins (how tool calls are sent)           
                          β”‚                                          
               β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”                               
               β”‚    OpenAPI Client   β”‚                               
               β”‚    (typed proxy)    β”‚                               
               β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜                               
                          β”‚                                          
        β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”Όβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
        β”‚         β”‚        β”‚          β”‚           β”‚
   β”Œβ”€β”€β”€β”€β”΄β”€β”€β”€β”β”Œβ”€β”€β”€β”€β”΄β”€β”€β”€β”β”Œβ”€β”€β”€β”΄β”€β”€β”€β”€β”€β”β”Œβ”€β”€β”€β”΄β”€β”€β”€β”€β”β”Œβ”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”
   β”‚   TS   β”‚β”‚ Python β”‚β”‚   CLI   β”‚β”‚  curl  β”‚β”‚ LLM Agent β”‚
   β”‚        β”‚β”‚        β”‚β”‚         β”‚β”‚  bash  β”‚β”‚           β”‚
   β”‚  node  β”‚β”‚  sync  β”‚β”‚ plat do β”‚β”‚ write  β”‚β”‚  Claude   β”‚
   β”‚  bun   β”‚β”‚ async  β”‚β”‚plat pollβ”‚β”‚ JSON   β”‚β”‚  ChatGPT  β”‚
   β”‚ browserβ”‚β”‚ promiseβ”‚β”‚         β”‚β”‚to inboxβ”‚β”‚  Gemini   β”‚
   β””β”€β”€β”€β”€β”€β”€β”€β”€β”˜β””β”€β”€β”€β”€β”€β”€β”€β”€β”˜β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜β””β”€β”€β”€β”€β”€β”€β”€β”€β”˜β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜

The transport protocol, serialization, deserialization, queueing, and delivery mechanics are intentionally pushed out of your way.

That is especially powerful for AI-heavy systems, because you can keep swapping providers and execution patterns while preserving the same tool-shaped surface.

🎭 What the user experience should feel like

It should feel like this:

const order = await client.createOrder({ itemId: "sku_123", qty: 2 })

Not like this:

  • choosing between totally different client libraries
  • hand-authoring RPC envelopes
  • thinking about HTTP vs WS every time you call a method
  • manually syncing method names, routes, SDK methods, and OpenAPI operation IDs
  • re-implementing error handling, retries, and auth every time you make a request
  • refactoring if you change languages or protocols
  • endless boilerplate

It's like an SDK except you don't have to write it. It just comes for free with every openapi.json.
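
To make that concrete, here is a minimal sketch (not plat's implementation) of how a dynamic client can fall out of any openapi.json: every operationId becomes a callable method on a Proxy, and `send` stands in for whatever transport actually carries the call.

```typescript
// Sketch only: a toy dynamic client built from an OpenAPI document.
type OpenAPIDoc = {
  paths: Record<string, Record<string, { operationId: string }>>
}
type Send = (method: string, path: string, input: unknown) => Promise<unknown>

function sketchClient(doc: OpenAPIDoc, send: Send) {
  // Index operationId -> { method, path } once, up front.
  const ops = new Map<string, { method: string; path: string }>()
  for (const [path, byMethod] of Object.entries(doc.paths)) {
    for (const [method, op] of Object.entries(byMethod)) {
      ops.set(op.operationId, { method: method.toUpperCase(), path })
    }
  }
  return new Proxy({} as Record<string, (input: unknown) => Promise<unknown>>, {
    get(_target, name) {
      const op = ops.get(String(name))
      if (!op) throw new Error(`unknown operation: ${String(name)}`)
      return (input: unknown) => send(op.method, op.path, input)
    },
  })
}
```

A generated client does the same mapping ahead of time, trading runtime flexibility for materialized types.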

🦫 Flat by design

plat is intentionally opinionated about the API shape.

The rules
  • Method names are globally unique
  • Method names are the canonical route names
  • Input comes in as one object
  • Return values matter as first-class API types
  • Controllers organize code and docs, not URL hierarchies
  • The API surface stays flat and easy to call

Example

import { Controller, GET, POST, type RouteContext } from "@modularizer/plat"

type GetOrderInput = { id: string }
type CreateOrderInput = { itemId: string; qty: number }
type Order = { id: string; status: string }

@Controller()
export class OrdersApi {
  @GET()
  async getOrder(input: GetOrderInput, ctx: RouteContext): Promise<Order> {
    return { id: input.id, status: "pending" }
  }

  @POST()
  async createOrder(input: CreateOrderInput, ctx: RouteContext): Promise<Order> {
    return { id: "ord_123", status: "pending" }
  }
}

Canonical routes:

  • GET /getOrder
  • POST /createOrder

Canonical client calls:

await client.getOrder({ id: "ord_123" })
await client.createOrder({ itemId: "sku_123", qty: 2 })

That flatness matters because it makes the generated and dynamic clients obvious:

  • easy for humans to remember
  • easy for CLIs to expose
  • easy for AI agents to understand
  • easy for generated clients to mirror exactly
  • easy to hand to any AI provider as tool definitions

⏳ Long-running calls without changing the mental model

Sometimes a method is fast:

await client.createOrder({ itemId: "sku_123", qty: 2 })

Sometimes a method is slow, and you want visibility:

await client.importCatalog(
  { source: "s3://bucket/catalog.csv" },
  {
    onRpcEvent(event) {
      console.log(event.event, event.data)
    },
  },
)

Or you want deferred execution:

const handle = await client.importCatalog(
  { source: "s3://bucket/catalog.csv" },
  { execution: "deferred" },
)

const result = await handle.wait()

The important part is that it is still the same method.

As a bonus, in the right mode you can get:

  • progress updates
  • logs
  • chunks/messages
  • cancellation

That is what most users actually care about. The carrier and plugin details are for transport authors.

🐍 Python support

plat supports Python servers and clients too.

You can:

  • write Python controllers with plat decorators
  • generate OpenAPI from *.api.py
  • generate Python clients from OpenAPI
  • use sync, async, and promise-style Python clients

Python highlights
  • Sync clients
  • Async clients
  • Promise-style clients
  • Deferred call handles
  • Automatic input coercion
  • Automatic output serialization
  • First-class HTTP exception types

πŸ”Œ One client, many transports

The same method call should stay usable even when transport changes.

const httpClient = createClient("http://localhost:3000")
const rpcClient = createClient("ws://localhost:3000")
const fileClient = createClient("file:///tmp/plat-queue")

await httpClient.createOrder({ itemId: "sku_123", qty: 2 })
await rpcClient.createOrder({ itemId: "sku_123", qty: 2 })
await fileClient.createOrder({ itemId: "sku_123", qty: 2 })

Same tool call. Different carrier.

Diagram

   createOrder({ itemId, qty })
              β”‚
      β”Œβ”€β”€β”€β”€β”€β”€β”€β”Όβ”€β”€β”€β”€β”€β”€β”€β”€β”
      β”‚       β”‚        β”‚
      β–Ό       β–Ό        β–Ό
    HTTP     WS      File
      β”‚       β”‚        β”‚
      β””β”€β”€β”€β”€β”€β”€β”€β”Όβ”€β”€β”€β”€β”€β”€β”€β”€β”˜
              β–Ό
       same type-aware method call

πŸ€– AI tool calling

plat is a natural fit for LLM tools because the API shape is already tool-shaped.

Every operation has:

  • a stable name
  • one input object
  • one result
  • generated schema
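
For example, a flat `createOrder` operation maps directly onto the function-tool JSON shape most providers accept (the field names below follow that common convention; they are illustrative, not a plat API):

```typescript
// Illustrative tool definition derived from one flat operation:
// name = the stable method name, parameters = the generated input schema.
const createOrderTool = {
  name: "createOrder",
  description: "Create an order for an item",
  parameters: {
    type: "object",
    properties: {
      itemId: { type: "string" },
      qty: { type: "number" },
    },
    required: ["itemId", "qty"],
  },
}
```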

That means you can use AI providers in whichever role you want:

  • as the caller deciding what tools to use
  • as the worker fulfilling part of a task
  • as interchangeable providers inside the same larger workflow
  • on the client side or the server side

That makes the same API useful to:

  • normal app code
  • a CLI
  • generated SDKs
  • an LLM agent

🧰 Dynamic clients and generated clients

plat supports both styles.

Dynamic clients

The OpenAPI client can work directly from an OpenAPI document and a runtime proxy.

Best when you want:

  • low ceremony
  • transport flexibility
  • no generated wrapper code

Generated clients

plat can also generate clients that materialize types and methods.

Especially useful in Python, where explicit generated models and wrappers help more than in TypeScript.

πŸ–₯️ CLI

plat includes a spec-first CLI.

plat gen openapi
plat gen client
plat gen cli
plat run openapi.json
plat serve

The CLI is available from both the Node and Python packages, and the two surfaces are moving toward feature parity.

🧩 Plugin architecture

The plugin architecture matters, but mostly as an implementation and extension story.

For normal plat users, the important thing is:

  • methods stay flat
  • typing stays strong
  • clients feel direct
  • transport details stay hidden
  • provider complexity stays hidden too

For plugin developers, plat provides the escape hatch.

Client-side transport plugins

Transport plugins follow a generic lifecycle:

  • connect
  • send request
  • receive updates
  • receive result
  • disconnect
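
That lifecycle can be read as a contract roughly like the one below (a hypothetical shape; plat's real plugin interface may differ), shown with a trivial loopback carrier just to make the flow concrete:

```typescript
// Hypothetical contract for the lifecycle above; not plat's actual interface.
interface TransportPlugin {
  connect(url: string): Promise<void>
  sendRequest(operation: string, input: unknown): Promise<string> // returns a call id
  onUpdate(cb: (callId: string, event: unknown) => void): void
  onResult(cb: (callId: string, result: unknown) => void): void
  disconnect(): Promise<void>
}

// A loopback carrier, just to show the flow end to end.
class LoopbackTransport implements TransportPlugin {
  private resultCb: ((callId: string, result: unknown) => void) | null = null
  private nextId = 0
  async connect(_url: string) {}
  async sendRequest(operation: string, input: unknown) {
    const callId = `call_${this.nextId++}`
    // Deliver the "result" asynchronously, the way a real carrier would.
    queueMicrotask(() => this.resultCb?.(callId, { operation, input }))
    return callId
  }
  onUpdate(_cb: (callId: string, event: unknown) => void) {}
  onResult(cb: (callId: string, result: unknown) => void) {
    this.resultCb = cb
  }
  async disconnect() {}
}
```

A real plugin swaps the loopback delivery for HTTP, WebSockets, a file queue, and so on; the contract stays the same.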

css:// identity and trust

Browser-hosted css:// servers can keep a stable host identity too.

  • Generate keypairs with generateClientSideServerIdentityKeyPair()
  • Persist them with saveClientSideServerIdentityKeyPair() or getOrCreateClientSideServerIdentityKeyPair()
  • Pin known hosts with trustClientSideServerOnFirstUse()
  • Optionally verify signed name-to-key records from a trusted authority

import {
  createFetchClientSideServerAuthorityServer,
  createClientSideServerMQTTWebRTCTransportPlugin,
  getOrCreateClientSideServerIdentityKeyPair,
} from '@modularizer/plat/client'

const knownHosts = {}
const transport = createClientSideServerMQTTWebRTCTransportPlugin({
  identity: {
    keyPair: await getOrCreateClientSideServerIdentityKeyPair({
      storageKey: 'plat-css:keypair:browser-math',
    }),
    knownHosts,
    trustOnFirstUse: true,
    authorityServers: [
      createFetchClientSideServerAuthorityServer({
        baseUrl: 'https://authority.example.com',
        publicKeyJwk: authorityPublicKeyJwk,
      }),
    ],
  },
})

The goal is not a global network lease. The goal is proving "this is the same host key I trusted last time" and optionally resolving known server names through a signed authority record.
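
The knownHosts check itself reduces to trust-on-first-use key pinning. A conceptual sketch (plain strings stand in for real key material; plat's helpers manage actual keypairs):

```typescript
// Conceptual sketch of trust-on-first-use pinning, not plat's implementation.
type KnownHosts = Record<string, string> // host name -> pinned public key

function checkHostKey(
  knownHosts: KnownHosts,
  host: string,
  presentedKey: string,
): "pinned" | "ok" {
  const pinned = knownHosts[host]
  if (pinned === undefined) {
    knownHosts[host] = presentedKey // first use: trust and pin
    return "pinned"
  }
  if (pinned !== presentedKey) {
    // Same name, different key: possible impersonation, refuse to connect.
    throw new Error(`host key mismatch for ${host}`)
  }
  return "ok" // same host key we trusted last time
}
```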

You can also turn any normal plat server into an authority server with a standard method surface:

const knownHosts = {}
const authorityKeyPair = await getOrCreateClientSideServerIdentityKeyPair({
  storageKey: 'plat-authority:keypair',
})

const server = createServer({
  authorityServer: {
    authorityName: 'demo-authority',
    authorityKeyPair,
    knownHosts,
  },
}, OrdersApi)

That exposes the same methods everywhere:

  • resolveAuthorityHost
  • listAuthorityHosts
  • exportAuthorityHosts

Server-side protocol plugins

Protocol plugins are how tool calls arrive and how updates/results leave.

The goal is for the core method/typing/invocation story to be independent from:

  • HTTP
  • WebSockets
  • Node
  • any specific host process

Why this matters

That is what enables ideas like:

  • a browser-side server
  • a mobile-hosted server
  • a worker-hosted server
  • IndexedDB-backed local APIs
  • WebRTC-based peer-to-peer tools
  • custom carriers like DB polling or Redis streams

Most users should not have to think about any of this unless they are building a transport.

🌍 What makes plat different

Most systems force you to choose:

  • REST or RPC
  • server or client
  • app integration or AI tool integration
  • HTTP or "something custom"

plat is trying to collapse those choices into one model:

  1. Define useful tools
  2. Expose them everywhere
  3. Change carriers without changing the API itself

That makes plat especially interesting for:

  • internal tools
  • AI agents
  • automation systems
  • offline-first systems
  • browser-hosted local APIs
  • weird protocol experiments

πŸ›£οΈ Direction

plat is actively moving toward:

  • deeper transport neutrality
  • stronger portable server core extraction
  • easier custom protocol plugins
  • stronger generated clients and CLIs
  • better cross-language symmetry

The north star is simple:

Define tools once. Call them from anywhere. Carry them over anything.