@parel-cloud/node 0.1.1 (MIT)

Parel SDK for JavaScript/TypeScript. 100+ AI models, GPU rental (BYOM), compare, and credits via a single OpenAI-compatible API. Official client for parel.cloud.

Parel - Run, use, compare 100+ AI models through one API

@parel-cloud/node

The official Node.js / TypeScript SDK for Parel.
100+ AI models (LLMs, image, video, TTS, STT, embeddings) + on-demand GPU rental (BYOM) + multi-model compare, behind one OpenAI-compatible API.


parel.cloud · Dashboard · Docs · Issues


Install

npm install @parel-cloud/node openai

Hello world

import { Parel } from "@parel-cloud/node";

const parel = new Parel(); // reads PAREL_API_KEY
const openai = await parel.openai;

const chat = await openai.chat.completions.create({
  model: "qwen3.5-72b",
  messages: [{ role: "user", content: "Hello, write me a haiku." }],
});
console.log(chat.choices[0].message.content);

Sign up at parel.cloud: every new account gets $1 of free credit. That buys a few thousand chat tokens, a handful of Flux images, or about a minute of Wan video, on us.


Why @parel-cloud/node

One client, 100+ models
Qwen, Llama, Gemma, DeepSeek, GPT, Claude, Gemini, Flux, Kling, Veo, Whisper. Switch the model string, everything else stays the same.
OpenAI drop-in
Reuse the official openai Node SDK. Streaming, tools, vision, JSON mode, audio, moderations all pass through, at Parel prices.
BYOM GPU rental
Deploy any Hugging Face model to RunPod, Modal, or Vast.ai with one call. Parel handles capacity, pricing, health, billing, cleanup.
Async that feels sync
Image, video, TTS, music are SQS-backed tasks. The SDK polls for you. Opt out with { async: true } if you want the raw task handle.
Typed errors
OpenAI-compatible error envelopes map to instanceof-able classes: ParelRateLimitError, ParelBudgetExceededError, ParelConflictError. One-line triage.
KVKK friendly
Opt-in PII masking + Turkish data residency. Qwen for Turkish, Orpheus and Kokoro for Turkish TTS, Whisper Turbo for Turkish STT.

How it works

Parel architecture: your app uses @parel-cloud/node to call api.parel.cloud which routes to 100+ AI providers

Your app calls @parel-cloud/node. The SDK speaks HTTPS + Bearer auth to api.parel.cloud. Parel routes the request to the right provider, enforces your budget, handles PII, and returns an OpenAI-shaped response. The SDK never holds vendor keys. You never glue five client libraries together.
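The wire format behind that call is plain HTTPS with a Bearer token. Here is a minimal sketch of the request the SDK assembles, assuming the OpenAI-standard /v1/chat/completions path (the README confirms every path is /v1/...; the helper name here is illustrative, not part of the SDK):

```typescript
// Illustrative only: builds the raw HTTP request the SDK would send.
// The real SDK layers retries, timeouts, and typed errors on top.
function buildChatRequest(apiKey: string, model: string, userMessage: string) {
  return {
    url: "https://api.parel.cloud/v1/chat/completions",
    method: "POST" as const,
    headers: {
      Authorization: `Bearer ${apiKey}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({ model, messages: [{ role: "user", content: userMessage }] }),
  };
}

// const req = buildChatRequest(process.env.PAREL_API_KEY!, "qwen3.5-72b", "hi");
// const res = await fetch(req.url, req); // OpenAI-shaped JSON body comes back
```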


Providers

OpenAI    Anthropic    Google    Gemini    Meta    Qwen    Mistral    DeepSeek    Flux    Kling    ElevenLabs    Moonshot    NVIDIA    ByteDance    Cohere

and 15 more. Full catalogue: await parel.models.list().


What you get

Namespace      What it does                                                        Return mode
parel.openai   Chat, embeddings, moderations, vision, tools, streaming, speech     Sync, streamed
parel.images   Image generation and edits                                          Polled, sync
parel.videos   Text-to-video, image-to-video                                       Polled, sync
parel.audio    TTS, music, transcription                                           Polled, sync
parel.gpu      BYOM deploy lifecycle, inference, billing, events, tiers, prefetch  Polled
parel.compare  Multi-model head-to-head, conversations, winner marking             Polled
parel.models   Catalogue list and detail                                           Sync
parel.credits  Budget snapshot in USD                                              Sync
parel.tasks    Poll, wait, list, cancel async jobs                                 Sync

Quickstart

Chat (OpenAI pass-through)

import { Parel } from "@parel-cloud/node";

const parel = new Parel({ apiKey: process.env.PAREL_API_KEY });
const openai = await parel.openai;

const res = await openai.chat.completions.create({
  model: "qwen3.5-72b",
  messages: [{ role: "user", content: "Why is the sky blue?" }],
});

Streaming

const stream = await openai.chat.completions.create({
  model: "qwen3.5-72b",
  messages: [{ role: "user", content: "Count to 5." }],
  stream: true,
});
for await (const chunk of stream) process.stdout.write(chunk.choices[0]?.delta?.content ?? "");

Tools and vision

await openai.chat.completions.create({
  model: "qwen3.5-vl-32b",
  messages: [
    {
      role: "user",
      content: [
        { type: "text", text: "What's in this image?" },
        { type: "image_url", image_url: { url: "https://example.com/cat.jpg" } },
      ],
    },
  ],
  tools: [{ type: "function", function: { name: "search_web", parameters: { type: "object", properties: { q: { type: "string" } } } } }],
});

Images

const image = await parel.images.generate({
  model: "flux-schnell",
  prompt: "A minimalist watercolor of Istanbul at dawn, Bosphorus in fog",
  size: "1024x1024",
  onTick: (t) => console.log(`[${t.progress}%] ${t.status}`),
});
// image.result.data[0].url

Fire-and-forget:

const { task_id } = await parel.images.generate({ model: "flux-schnell", prompt: "...", async: true });
const done = await parel.tasks.waitFor(task_id);
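Conceptually, a waitFor-style helper is just a poll loop against task status. A standalone sketch under assumed names (the TaskStatus shape and terminal states here are illustrative, not necessarily the SDK's exact types):

```typescript
// Hypothetical sketch of what waitFor does: call a status function until
// the task reaches a terminal state, sleeping between polls.
type TaskStatus = { status: "pending" | "running" | "completed" | "failed"; result?: unknown };

async function waitForTask(
  poll: () => Promise<TaskStatus>,
  { intervalMs = 1_000, timeoutMs = 600_000 } = {},
): Promise<TaskStatus> {
  const deadline = Date.now() + timeoutMs;
  for (;;) {
    const t = await poll();
    if (t.status === "completed" || t.status === "failed") return t; // terminal
    if (Date.now() > deadline) throw new Error("task timed out");
    await new Promise((r) => setTimeout(r, intervalMs));
  }
}
```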

Video

const video = await parel.videos.generate({
  model: "wan-2.6-t2v",
  prompt: "A robot watering a cherry tree at sunset",
  duration: 5,
  resolution: "1280x720",
  timeoutMs: 30 * 60_000,
});

Speech and transcription

// TTS
await parel.audio.speech({ model: "elevenlabs-tts", input: "Welcome to Parel", voice: "alloy" });

// STT (Whisper Turbo, Turkish-friendly); base64Wav holds your audio, base64-encoded
const t = await parel.audio.transcribe({ model: "whisper-large-v3-turbo", file: base64Wav, language: "tr" });
console.log(t.text);

Deploy a Hugging Face model on a rented GPU (BYOM)

const hf = await parel.gpu.validateHuggingFace("meta-llama/Llama-3.1-8B-Instruct");
// hf.recommended_gpu_tier === "rtx4090_24gb"

await parel.gpu.prefetch("meta-llama/Llama-3.1-8B-Instruct"); // optional, warm S3

const dep = await parel.gpu.create({
  huggingface_id: "meta-llama/Llama-3.1-8B-Instruct",
  gpu_tier: hf.recommended_gpu_tier ?? "rtx4090_24gb",
  idle_timeout_minutes: 15,
  budget_limit_usd: 5,
});

await parel.gpu.waitForRunning(dep.id, { onTick: (d) => console.log(d.status) });

const answer = await parel.gpu.chat(dep.id, {
  messages: [{ role: "user", content: "Write a haiku about Istanbul" }],
  max_tokens: 120,
});

await parel.gpu.stop(dep.id); // or parel.gpu.delete(dep.id)

Parel picks the cheapest available GPU across RunPod, Modal, and Vast.ai, chooses TGI or vLLM per model architecture, enforces your budget cap, and refunds credits on crash loops. You just watch deployment.status and call chat.

Multi-model compare

const run = await parel.compare.run({
  models: ["qwen3.5-72b", "gpt-4o-mini", "claude-3-5-sonnet", "gemini-2.5-pro"],
  prompt: "Summarize in 3 bullets: why KVKK compliance matters for Turkish SaaS.",
  timeoutMs: 10 * 60_000,
});
console.log(run.results);

Credits and budget

const snap = await parel.credits.get();
if (snap.remaining_usd < 1) console.log("top up: https://parel.cloud/billing");

Cancel a running task

const cancel = await parel.tasks.cancel(taskId);
console.log(`refunded $${cancel.refund_amount_usd}`);

Migrate from OpenAI in 30 seconds

- import OpenAI from "openai";
- const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });
+ import { Parel } from "@parel-cloud/node";
+ const parel = new Parel({ apiKey: process.env.PAREL_API_KEY });
+ const openai = await parel.openai;

Every OpenAI call keeps working. You gain Qwen, Llama, DeepSeek, Claude, Gemini, Flux, Kling, Veo, Whisper, BYOM GPU, compare, credits, and tasks.


Error handling

import {
  ParelError,
  ParelAuthenticationError,
  ParelBudgetExceededError,
  ParelRateLimitError,
  ParelConflictError,
  ParelNotFoundError,
  ParelTimeoutError,
  ParelValidationError,
} from "@parel-cloud/node";

const sleep = (ms: number) => new Promise<void>((r) => setTimeout(r, ms));

try {
  await parel.images.generate({ model: "flux-schnell", prompt: "cat" });
} catch (err) {
  if (err instanceof ParelBudgetExceededError) {
    // 402, top up at parel.cloud/billing
  } else if (err instanceof ParelRateLimitError) {
    await sleep((err.retryAfter ?? 1) * 1000);
  } else if (err instanceof ParelConflictError && err.code === "task_not_cancellable") {
    // cannot cancel, task already completed
  } else if (err instanceof ParelError) {
    console.error(err.status, err.code, err.message, err.requestId);
  } else {
    throw err;
  }
}

Every error carries the OpenAI-compatible envelope: code, type, message, requestId, HTTP status. Attach requestId to support tickets for faster triage.
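As a mental model, the status-to-class mapping can be sketched like this (402 and 429 follow the README's own examples; 401/404/409/422 are assumed standard HTTP pairings, not confirmed SDK behavior):

```typescript
// Hypothetical mapping from HTTP status to the thrown error class.
// Only 402 (budget) and 429 (rate limit) are stated by the README;
// the rest follow conventional HTTP semantics.
function errorClassFor(status: number): string {
  switch (status) {
    case 401: return "ParelAuthenticationError";
    case 402: return "ParelBudgetExceededError";
    case 404: return "ParelNotFoundError";
    case 409: return "ParelConflictError";
    case 422: return "ParelValidationError";
    case 429: return "ParelRateLimitError";
    default:  return "ParelError";
  }
}
```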


Configuration

new Parel({
  apiKey: "pk-live-...",              // or PAREL_API_KEY env
  baseUrl: "https://api.parel.cloud", // override for staging or self-hosted
  timeoutMs: 60_000,                  // per-request timeout
  maxRetries: 2,                      // on 429, 5xx, network; idempotent verbs only
  fetch: globalThis.fetch,            // polyfill slot
  userAgent: "my-app/1.0",
});

Retry policy. GET, HEAD, DELETE, PUT retry up to maxRetries on 429, 5xx, ParelConnectionError, ParelTimeoutError. POST and PATCH never auto-retry, so generation calls are never double-charged. Backoff is exponential (500ms, 1s, 2s, 4s, 8s cap) with jitter.
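That backoff schedule can be sketched as a small helper (hypothetical, mirroring the documented 500 ms base, doubling per attempt, 8 s cap, plus jitter):

```typescript
// Illustrative backoff helper, not the SDK's internal code.
// Nominal delays: 500, 1000, 2000, 4000, 8000 ms (capped); full jitter
// keeps the actual delay in [base/2, base).
function backoffMs(attempt: number, rand: () => number = Math.random): number {
  const base = Math.min(500 * 2 ** attempt, 8_000); // exponential, capped at 8 s
  return base / 2 + rand() * (base / 2);
}
```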

AbortController. Every namespace helper accepts an AbortSignal. Cancel long polls cleanly:

const ac = new AbortController();
setTimeout(() => ac.abort(), 60_000);
await parel.images.generate({ model, prompt, signal: ac.signal });

Supported models

Family        Examples
LLM (open)    Qwen3.5 7B/32B/72B, Llama 3.3 70B, DeepSeek V3 671B, DeepSeek R1, Gemma 4 27B, Mistral Large, Nemotron 70B, Phi-4
LLM (closed)  GPT-4o, GPT-4o-mini, GPT-o1, Claude 3.5 Sonnet, Claude 4 Opus, Gemini 2.5 Pro, Gemini 3.1, Grok 4
Vision        Qwen3.5-VL 32B, GPT-4o vision, Claude vision, Gemini vision
Image         Flux Schnell / Dev / Pro, SDXL Turbo, Recraft V4, Gemini 3.1 Nano Image, DALL-E 3
Video         Wan 2.6, Kling 3, Veo 3.1, Seedance 1.5, Hailuo 2
TTS           ElevenLabs v2, Orpheus Turkish, Kokoro Turkish, Chatterbox
STT           Whisper Large v3 Turbo, Whisper Large v3, Faster-Whisper
Embeddings    text-embedding-3-small / large, bge-large, qwen3-embed
Reranking     bge-reranker, jina-reranker
Moderation    omni-moderation-latest

Live catalogue and pricing: parel.cloud/models.


Pricing

Prepaid, pay-as-you-go in USD. No subscription, no plan tiers.

  • Sign-up bonus: $1 free credit on every new account.
  • Top-up presets: $5 / $20 / $50 / $100 / $500 via Lemon Squeezy.
  • Batch tier: 50% discount for jobs that can wait up to 24 hours.
  • BYOM billing is per-second, metered at your chosen GPU tier.
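Per-second metering is simple proration of the tier's hourly rate. A sketch, with a made-up $0.50/hr rate that is not a real Parel tier price:

```typescript
// Illustrative only: the hourly rate is a placeholder, not a real
// Parel GPU tier price. Per-second billing just prorates it.
function gpuCostUsd(seconds: number, hourlyRateUsd: number): number {
  return (seconds * hourlyRateUsd) / 3600;
}

// e.g. 15 minutes at a hypothetical $0.50/hr:
// gpuCostUsd(15 * 60, 0.5) === 0.125
```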

Full pricing: parel.cloud/pricing.


FAQ

Do I need to install openai? Only for parel.openai.*. Images, videos, audio, BYOM, compare, credits, and tasks namespaces work without it.

Does it work in the browser? v0.1 is Node-first (18+). Use a server proxy for browsers today. Never ship API keys to clients. A browser entry point is planned for v0.2.

Is streaming supported? Yes for chat and completions (via parel.openai). Generations use polling, not SSE, in v0.1.

Is there a Python SDK? Yes; it's coming as @parel-cloud/python on PyPI. Track progress at github.com/parel-cloud/parel-python.

How do I self-host or point at staging? Pass baseUrl to the Parel constructor. Every path is /v1/....

Is it KVKK or GDPR compliant? Parel ships an opt-in KVKK mode: PII masking + Turkey-resident data. See docs.parel.cloud/kvkk.

How do I report a bug? github.com/parel-cloud/parel-node/issues. Include your requestId from the error.


Roadmap

  • v0.2 - Browser entry, @parel-cloud/react hooks, multipart STT, SSE streaming for generations
  • v0.3 - Webhooks, batch tier helpers, OpenAPI-codegen types
  • v1.0 - Stable surface, semver guarantees

Track releases: github.com/parel-cloud/parel-node/releases.


License

MIT © Parel Cloud / Aleonis Teknoloji Ltd. Şti.