@lovable.dev/sdk

TypeScript SDK for the Lovable API.

Currently in preview.

Installation

npm install @lovable.dev/sdk

Usage

import { LovableClient } from "@lovable.dev/sdk";

const client = new LovableClient({
  apiKey: "lov_your-api-key",
});

// List workspaces
const workspaces = await client.listWorkspaces();

// 1. Create a project
const project = await client.createProject(workspaces[0].id, {
  description: "Best todo app",
  initialMessage: "Create a todo app with authentication"
});

// 2. Wait for the AI response and get the preview URL
const response = await client.waitForResponse(project.id);
console.log(response.content);    // AI's response text
console.log(response.messageId);  // AI message ID (for traces)
console.log(response.previewUrl); // Preview URL for the project

// 3. Send a follow-up chat message
await client.chat(project.id, {
  message: "Add a footer",
});

// 4. Send a message with file attachments
import { readFile } from "fs/promises";
const imageData = await readFile("design.png");
await client.chat(project.id, {
  message: "Update the hero section to match this design",
  files: [{ name: "design.png", data: imageData, type: "image/png" }],
});

// 5. Publish the project and get the live URL
await client.publish(project.id);
const published = await client.waitForProjectPublished(project.id);
console.log(published.url); // Live public URL

Remixing a project at a specific message

const client = new LovableClient({ apiKey: "lov_your-api-key" });

// Remix a project at the state just before a specific message
const jobId = await client.remixProject("source-project-id", {
  workspaceId: "target-workspace-id",
  messageId: "message-id-to-snapshot-at",
  // remixMode: "including", // use "including" to keep the message and its AI response
  includeHistory: true,
  includeCustomKnowledge: true,
});

// Wait for the remix to complete
const { projectId } = await client.waitForRemix("source-project-id", jobId, {
  onProgress: (status, step) => console.log(`Remix: ${status}`, step),
});

console.log(`Remixed project: ${projectId}`);

// Send a follow-up message to the remixed project
await client.chat(projectId, { message: "Add dark mode" });
const response = await client.waitForResponse(projectId);

Remixing with agent state

Enable includeAgentState to carry over the agent's continuation state to the remixed project, allowing prompt cache reuse on the first follow-up message.

const jobId = await client.remixProject("source-project-id", {
  workspaceId: "target-workspace-id",
  messageId: "message-id-to-snapshot-at",
  includeHistory: true,
  includeCustomKnowledge: true,
  includeAgentState: true,
});
const { projectId } = await client.waitForRemix("source-project-id", jobId);

await client.chat(projectId, {
  message: "Add dark mode",
  continuation: "allow_expired_cache",
});

Requires includeHistory, includeCustomKnowledge, and API-key auth.

Continuation override

The continuation option on chat() overrides prompt cache reuse behavior. API-key auth only.

Value                   Effect
"force"                 Skip all checks, force cache reuse
"fresh_build"           Force full prompt rebuild
"allow_expired_cache"   Skip expiry check only

Using a custom model

You can route the main agent to a custom OpenAI-compatible endpoint for eval and RL workflows:

await client.chat(project.id, {
  message: "Add a dark mode toggle",
  customModel: {
    endpoint: "https://my-vllm.example.com/v1",
    apiKey: "sk-...",
    modelName: "meta-llama/Llama-3.3-70B-Instruct",
  },
});

Fetching message traces (_dev)

Trace APIs are available under the _dev namespace. These are not part of the stable v1 surface and may change without notice.

const response = await client.waitForResponse(project.id);

// Fetch all traces for the message
const traces = await client._dev.getMessageTraces(project.id, response.messageId);
console.log(traces.spans);

// Filter by purpose (e.g. only the main agent span)
const agentTraces = await client._dev.getMessageTraces(project.id, response.messageId, {
  purposes: ["main_agent"],
});

// Batch fetch traces for multiple messages
const result = await client._dev.getMessageTracesBatch([
  { projectId: "proj-1", messageId: "msg-1" },
  { projectId: "proj-2", messageId: "msg-2" },
], { purposes: ["main_agent"], concurrency: 5 });

for (const [messageId, trace] of result.traces) {
  console.log(messageId, trace.spans.length);
}
for (const [messageId, error] of result.errors) {
  console.error(messageId, error.message);
}

Replaying reviewers (_dev)

Re-run quality reviewers (e.g. project_success_v3, user_sentiment) for past assistant messages. Requires an API key with project write access; not part of the stable v1 OpenAPI bundle (types live in this SDK).

const out = await client._dev.replayReviews(project.id, {
  items: [{ response_message_id: "aimsg_..." }],
  reviewer_types: ["project_success_v3", "user_sentiment"],
  concurrency: 4,
});
console.log(out.results.items);

API Reference

LovableClient

Constructor

new LovableClient(options: LovableClientOptions)
  • apiKey (required): Your Lovable API key
  • baseUrl (optional): Override the default API base URL

Methods

listWorkspaces(): Promise<WorkspaceWithMembership[]>

List all workspaces the authenticated user has access to.

getWorkspace(workspaceId: string): Promise<WorkspaceWithMembership>

Get a specific workspace by ID.

listProjects(workspaceId: string, options?): Promise<ProjectResponse[]>

List projects in a workspace.

Options:

  • limit (optional): Maximum number of projects to return
  • visibility (optional): Filter by visibility ("all" | "personal" | "public" | "workspace")

createProject(workspaceId: string, options): Promise<ProjectResponse>

Create a new project in a workspace.

Options:

  • description (required): Project description
  • techStack (optional): Technology stack (e.g., "react")
  • visibility (optional): Project visibility ("draft" | "private" | "public")
  • templateProjectId (optional): ID of a template project to clone
  • initialMessage (optional): Initial chat message to send to the AI agent
  • files (optional): Array of files to attach (browser File objects or FileInput objects)

chat(projectId: string, options): Promise<void>

Send a chat message to a project's AI agent.

Options:

  • message (required): The message to send
  • files (optional): Array of files to attach (browser File objects or FileInput objects)
  • chatOnly (optional): If true, only chat without making code changes
  • customModel (optional): Route the main agent to a custom OpenAI-compatible endpoint (see CustomModelConfig)
  • continuation (optional): Override prompt cache continuation behavior ("force" | "fresh_build" | "allow_expired_cache"). API-key auth only. See Controlling prompt cache continuation

Note: This is an asynchronous operation. The API accepts the message and processes it in the background. Use waitForResponse() to wait for the AI's reply.
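For example, a question that should not modify code can be sent with chatOnly. A sketch over a local stand-in interface (ChatOnlyLike and askQuestion are illustrative names, not part of the SDK):

```typescript
// Local stand-in for the documented chat() signature.
interface ChatOnlyLike {
  chat(
    projectId: string,
    opts: { message: string; chatOnly?: boolean },
  ): Promise<void>;
}

// Ask the agent a question without triggering code changes.
async function askQuestion(
  client: ChatOnlyLike,
  projectId: string,
  question: string,
): Promise<void> {
  await client.chat(projectId, { message: question, chatOnly: true });
}
```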

waitForResponse(projectId: string, options?): Promise<ChatResponse>

Wait for the AI's response to a chat message. Connects to the project's message stream (SSE) and returns the full response once complete.

Use this after chat() or after createProject() with initialMessage.

Returns:

  • content (string): The AI's full response text
  • messageId (string): The AI message ID (use with _dev.getMessageTraces())
  • previewUrl (string): The project's preview URL

Options:

  • timeout (optional): Maximum time to wait in ms (default: 300000 = 5 minutes)

Throws an error if the stream fails or timeout is reached.

getPreviewUrl(projectId: string): string

Get the preview URL for a project. This is a synchronous method that constructs the URL from the project ID.

publish(projectId: string, options?): Promise<DeploymentResponse>

Publish (deploy) a project to make it publicly accessible. The deployment runs asynchronously — use waitForProjectPublished() to wait for completion.

Options:

  • name (optional): Custom slug for the published URL

Returns:

  • status (string): Deployment status
  • deployment_id (string): The deployment ID
  • url (string): The published URL (may not be available until deployment completes)
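Publishing and waiting for the live URL compose naturally. A sketch over minimal local stand-ins (ProjectLike, PublishLike, and publishAndGetUrl are ours; the real types come from @lovable.dev/sdk):

```typescript
// Local stand-ins for the documented publish/wait surface.
interface ProjectLike {
  url?: string;
}

interface PublishLike {
  publish(projectId: string, opts?: { name?: string }): Promise<{ status: string }>;
  waitForProjectPublished(projectId: string): Promise<ProjectLike>;
}

// Publish under a custom slug, then resolve once the live URL exists.
async function publishAndGetUrl(
  client: PublishLike,
  projectId: string,
  slug: string,
): Promise<string | undefined> {
  await client.publish(projectId, { name: slug });
  const published = await client.waitForProjectPublished(projectId);
  return published.url;
}
```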

getPublishedUrl(projectId: string): Promise<string | null>

Get the published URL for a project, or null if not published. Fetches the latest project details to check publication status.

waitForProjectReady(projectId: string, options?): Promise<ProjectResponse>

Wait for a project to reach "completed" status. Projects start as "in_progress" while being created/built.

Options:

  • pollInterval (optional): Time between polls in ms (default: 2000)
  • timeout (optional): Maximum time to wait in ms (default: 300000 = 5 minutes)
  • onProgress (optional): Callback for status updates

Throws an error if the project fails or timeout is reached.
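The onProgress callback is useful for surfacing status while polling. A sketch over a local stand-in interface (ReadyLike and waitAndReport are illustrative names, not part of the SDK):

```typescript
// Local stand-in for the documented waitForProjectReady() surface.
interface ReadyLike {
  waitForProjectReady(
    projectId: string,
    opts?: { pollInterval?: number; onProgress?: (status: string) => void },
  ): Promise<{ status: string }>;
}

// Poll to readiness, collecting each reported status along the way.
async function waitAndReport(client: ReadyLike, projectId: string): Promise<string> {
  const seen: string[] = [];
  const project = await client.waitForProjectReady(projectId, {
    pollInterval: 2000,
    onProgress: (status) => seen.push(status),
  });
  console.log(`statuses observed: ${seen.join(" -> ")}`);
  return project.status;
}
```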

waitForProjectPublished(projectId: string, options?): Promise<ProjectResponse>

Wait for a project to be published (deployed) and have a live URL.

Options:

  • pollInterval (optional): Time between polls in ms (default: 3000)
  • timeout (optional): Maximum time to wait in ms (default: 600000 = 10 minutes)
  • onProgress (optional): Callback for status updates

Throws an error if timeout is reached.

remixProject(sourceProjectId: string, options): Promise<string>

Remix (fork) an existing project, optionally at a specific message point in time.

When messageId is provided, the remix captures the project state as it was just before that message was processed (default remixMode: "before"). Set remixMode: "including" to include the message and its AI response in the remix. Without messageId, the full current state is remixed.

Returns the remix job ID for polling with waitForRemix().

Options:

  • workspaceId (required): Target workspace for the new project
  • messageId (optional): Message ID to snapshot at — by default the remix reflects the project state just before this message
  • remixMode (optional): "before" (default) captures state before the message; "including" captures state after the message and its AI response
  • includeHistory (optional, default: false): Whether to preserve chat history
  • includeCustomKnowledge (optional, default: false): Whether to copy custom instructions/knowledge
  • includeAgentState (optional, default: false): Copy agent continuation state so the prompt cache can be reused after remix. Requires includeHistory: true, includeCustomKnowledge: true, and API-key auth. See Remixing with agent state
  • initialMessage (optional): Initial chat message to send after remix completes
  • skipInitialRemixMessage (optional, default: false): When true, suppresses the default "I've successfully remixed this project" message
  • skipIntegrations (optional, default: false): When true, skips copying integrations to the remixed project

waitForRemix(sourceProjectId: string, jobId: string, options?): Promise<RemixResult>

Wait for a remix operation to complete. Polls until the job finishes.

Returns:

  • projectId (string): The ID of the newly created project

Options:

  • pollInterval (optional): Time between polls in ms (default: 2000)
  • timeout (optional): Maximum time to wait in ms (default: 300000 = 5 minutes)
  • onProgress (optional): Callback with (status, step?) for progress updates

Throws an error if the remix fails or timeout is reached.

_dev (developer/experimental APIs)

These methods are not part of the stable v1 surface and may change without notice.

_dev.getMessageTraces(projectId: string, messageId: string, options?): Promise<MessageTracesResponse>

Fetch Braintrust trace spans for a specific chat message. The messageId is available from the ChatResponse returned by waitForResponse().

When a purpose has multiple spans (e.g. main_agent across turns), only the last span is returned — it contains the full accumulated context.

Options:

  • purposes (optional): Filter spans by purpose (e.g. ["main_agent", "knowledge_rag"])

Returns:

  • message_id (string): The message ID
  • braintrust_span_id (string): The Braintrust span ID
  • root_span_id (string): The root span ID
  • spans (TraceSpan[]): The filtered trace spans

_dev.getMessageTracesBatch(queries, options?): Promise<BatchTracesResult>

Fetch traces for multiple messages across projects in parallel.

  • queries: Array of { projectId, messageId } to fetch
  • options.purposes (optional): Filter spans by purpose (applied to all queries)
  • options.concurrency (optional): Max parallel requests (default: 5)

Returns:

  • traces: Map of messageId → MessageTracesResponse
  • errors: Map of messageId → Error (for failed requests)

_dev.replayReviews(projectId: string, body): Promise<ReplayReviewsResponse>

Re-run reviewers for one or more assistant (response) messages. Requires an API key with project write access.

  • body.items: Each entry must include response_message_id (assistant message id). Optional per-item reviewer_types.
  • body.reviewer_types (optional): Global default when an item omits reviewer_types.
  • body.concurrency (optional): Parallel items (default 4, max 16).

inviteCollaborator(workspaceId: string, options): Promise<WorkspaceMembershipResponse>

Invite a user to a workspace.

Options:

  • email (required): Email address of the user to invite
  • role (optional): Role to assign ("admin" | "collaborator" | "member" | "viewer")
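A small membership workflow, sketched over local stand-ins (Membership, MembersLike, and inviteAndCount are ours; the real response shape is WorkspaceMembershipResponse from the SDK):

```typescript
// Local stand-ins for the documented workspace membership methods.
interface Membership {
  user_id: string;
  role: string;
}

interface MembersLike {
  inviteCollaborator(
    workspaceId: string,
    opts: { email: string; role?: string },
  ): Promise<Membership>;
  listWorkspaceMembers(workspaceId: string): Promise<Membership[]>;
}

// Invite a viewer, then count the workspace's members.
async function inviteAndCount(
  client: MembersLike,
  workspaceId: string,
  email: string,
): Promise<number> {
  await client.inviteCollaborator(workspaceId, { email, role: "viewer" });
  const members = await client.listWorkspaceMembers(workspaceId);
  return members.length;
}
```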

listWorkspaceMembers(workspaceId: string): Promise<WorkspaceMembershipResponse[]>

List all members of a workspace.

removeWorkspaceMember(workspaceId: string, userId: string): Promise<void>

Remove a member from a workspace.

getProject(projectId: string): Promise<ProjectResponse>

Get project details by ID.

Types

The SDK exports TypeScript types for all API responses. See src/types.ts for the full list.

import type {
  WorkspaceWithMembership,
  ProjectResponse,
  CreateProjectOptions,
  ChatResponse,
  ContinuationOverride,
  CustomModelConfig,
  FileInput,
  RemixProjectOptions,
  TracePurpose,
  TraceSpan,
  MessageTracesResponse,
  // ... etc
} from "@lovable.dev/sdk";

FileInput

For Node.js or non-browser environments, use FileInput instead of the browser File API:

interface FileInput {
  name: string;              // Original file name (e.g., "screenshot.png")
  data: Blob | ArrayBuffer | Uint8Array;  // File contents
  type: string;              // MIME type (e.g., "image/png")
}
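In Node, a FileInput can be assembled directly from bytes, with no browser File API involved. A minimal sketch (the helper name is ours, not part of the SDK):

```typescript
// The FileInput shape documented above, reproduced locally so the
// sketch is self-contained.
interface FileInput {
  name: string;                           // Original file name
  data: Blob | ArrayBuffer | Uint8Array;  // File contents
  type: string;                           // MIME type
}

// Wrap raw bytes as a FileInput suitable for chat({ files: [...] }).
function fileInputFromBytes(name: string, bytes: Uint8Array, type: string): FileInput {
  return { name, data: bytes, type };
}

// Example: the first four bytes of a PNG header.
const png = fileInputFromBytes(
  "design.png",
  new Uint8Array([0x89, 0x50, 0x4e, 0x47]),
  "image/png",
);
```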

CustomModelConfig

Configuration for routing the main agent to a custom OpenAI-compatible endpoint:

interface CustomModelConfig {
  endpoint: string;   // Base URL (e.g., "https://my-vllm.example.com/v1")
  apiKey: string;     // API key for the endpoint
  modelName: string;  // Model identifier (e.g., "meta-llama/Llama-3.3-70B-Instruct")
}

ContinuationOverride

Controls prompt cache continuation behavior for a message:

type ContinuationOverride = "force" | "fresh_build" | "allow_expired_cache";
  • "force" — skip all continuation checks (force continuation)
  • "fresh_build" — force a full prompt rebuild from scratch
  • "allow_expired_cache" — skip cache expiry check but respect other continuation checks

TracePurpose

Available trace span purposes:

type TracePurpose = "main_agent" | "codebase_rag" | "knowledge_rag" | "review";