JSPM

  • License: MIT
  • Downloads: 7

Core embedding and vector store utilities for AskText voice Q&A.

Package Exports

  • @asktext/core
  • @asktext/core/dist/index.js

This package does not declare an exports field, so the exports above have been automatically detected and optimized by JSPM instead. If any package subpath is missing, it is recommended to post an issue to the original package (@asktext/core) to support the "exports" field. If that is not possible, create a JSPM override to customize the exports field for this package.

Readme

AskText – Voice-first Q&A for your articles

AskText lets readers talk to any blog post or knowledge-base article via a Vapi voice assistant.
It is split into three small npm packages:

Package          What it does
@asktext/core    text → chunks → OpenAI embeddings → store in your DB, plus a helper to retrieve passages
@asktext/next    drop-in API routes (webhook + optional voice quota) for Next.js / Vercel Edge
@asktext/react   a one-line React button + modal that opens the Vapi call

This guide shows the minimal set-up for any backend that already stores the full article body (HTML / Markdown / rich-text).
It uses JSON-encoded embeddings in a Prisma model – no pgvector or Pinecone required.


1 Prerequisites

  • Postgres database (works with SQLite/MySQL too – only JSON text is stored)
  • OpenAI API key (OPENAI_API_KEY)
  • Vapi public key & assistant ID (NEXT_PUBLIC_VAPI_PUBLIC_KEY, NEXT_PUBLIC_VAPI_ASSISTANT_ID)
  • A place to run server code (Next.js API routes, Express, Fastify – anything works)

2 Database schema

model ArticleChunk {
  id         String   @id @default(cuid())
  postId     String
  chunkIndex Int
  content    String   @db.Text
  startChar  Int
  endChar    Int
  embedding  String   @db.Text   // JSON-encoded float[]

  @@index([postId, chunkIndex])
}

Add it to your existing schema.prisma, then run npx prisma db push.
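Because the embedding column is plain @db.Text, vectors simply round-trip through JSON. A quick sketch of what that looks like (the helper names below are illustrative, not part of @asktext/core):

```typescript
// Illustrative helpers for the JSON-text embedding column.
// serializeEmbedding / deserializeEmbedding are NOT @asktext/core exports;
// they just show how a float[] fits into the String @db.Text field above.
export function serializeEmbedding(vector: number[]): string {
  return JSON.stringify(vector);
}

export function deserializeEmbedding(text: string): number[] {
  return JSON.parse(text) as number[];
}

// A 1536-dim OpenAI embedding becomes one JSON array string per chunk row.
const stored = serializeEmbedding([0.1, -0.2, 0.3]);
const vector = deserializeEmbedding(stored);
```

This is why the schema works unchanged on SQLite and MySQL: no vector column type is ever required.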


3 Backend – embed on publish

Install packages:

npm i @asktext/core openai prisma

Add one helper file (lib/asktext.ts):

import { PrismaClient } from '@prisma/client';
import { OpenAIEmbedder, embedAndStore } from '@asktext/core';

const prisma  = new PrismaClient();
const store   = embedAndStore.createPrismaJsonStore(prisma); // built-in JSON store
const embedder = new OpenAIEmbedder({ apiKey: process.env.OPENAI_API_KEY! });

export async function saveEmbeddings(postId: string, html: string) {
  await embedAndStore({ articleId: postId, htmlOrMarkdown: html, embedder, store });
}

Call saveEmbeddings() from whatever “after publish” hook your CMS exposes.

Tip: the helper automatically strips HTML, splits the text into ~1,500-character chunks with 200-character overlap, embeds each chunk, and writes the rows to ArticleChunk.
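The chunking strategy described in the tip can be sketched as follows. This is an illustration of the approach (strip tags, slide a ~1,500-character window with 200-character overlap), not the package's actual implementation:

```typescript
// Sketch of the chunking strategy: strip HTML, then slide a window of
// chunkSize characters, advancing by (chunkSize - overlap) each step.
// Hypothetical helper — not the real @asktext/core internals.
export function chunkText(
  html: string,
  chunkSize = 1500,
  overlap = 200,
): { content: string; startChar: number; endChar: number }[] {
  // Naive tag strip + whitespace collapse; the real helper may be smarter.
  const text = html.replace(/<[^>]+>/g, ' ').replace(/\s+/g, ' ').trim();
  const chunks: { content: string; startChar: number; endChar: number }[] = [];
  const step = chunkSize - overlap;
  for (let start = 0; start < text.length; start += step) {
    const end = Math.min(start + chunkSize, text.length);
    chunks.push({ content: text.slice(start, end), startChar: start, endChar: end });
    if (end === text.length) break; // last window reached the end of the text
  }
  return chunks;
}
```

The startChar / endChar offsets are what end up in the ArticleChunk rows, so the assistant can point back into the original article.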


4 API routes (Next.js example)

npm i @asktext/next
npx asktext-init          # scaffolds 3 routes + .env.local.example

That creates:

  • app/api/asktext/webhook/route.ts – Vapi tool-call handler
  • app/api/voice/start & app/api/voice/end – optional Upstash quota guard

If you are not on Next.js, call createAskTextWebhook() in an Express / Fastify route – it just returns (req, res) => {…}.


5 Frontend – 1-line button

npm i @asktext/react

"use client";
import { AskTextButton } from '@asktext/react';

export default function ArticlePage({ post }: { post: { id: string } }) {
  return (
    <>
      {/* your article markup */}
      <AskTextButton articleId={post.id} floating />
    </>
  );
}

Env vars needed in the browser build:

NEXT_PUBLIC_VAPI_PUBLIC_KEY=
NEXT_PUBLIC_VAPI_ASSISTANT_ID=

6 Retrieving passages yourself (optional)

Need semantic search outside the assistant? Use the same helper:

import { retrievePassages } from '@asktext/core';
const passages = await retrievePassages({ query: "binary search", store, embedder, filter: { postId } });
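Under the hood, retrieval over JSON-encoded embeddings reduces to cosine similarity between the query vector and each stored chunk vector. A minimal, self-contained sketch of that ranking step (not the package's actual code):

```typescript
// Cosine similarity between two embedding vectors of equal length.
export function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// Rank chunks (embeddings already JSON-decoded to number[]) against a query.
export function topK(
  query: number[],
  chunks: { content: string; embedding: number[] }[],
  k = 3,
): string[] {
  return chunks
    .map((c) => ({ content: c.content, score: cosineSimilarity(query, c.embedding) }))
    .sort((x, y) => y.score - x.score)
    .slice(0, k)
    .map((c) => c.content);
}
```

This linear scan over one article's chunks is exactly why the JSON-text approach stays fast at the per-article scale this package targets.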

7 FAQ

Q: Does this scale? Loading all chunks for one article into memory is fine up to thousands of chunks. For bigger installs you can swap in a vector DB – just implement the small (~30-line) VectorStore interface.
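The actual VectorStore interface ships with @asktext/core; its shape plausibly looks something like the sketch below (all names here are illustrative, not the package's real API), shown with a toy in-memory implementation:

```typescript
// Hypothetical shape of the small VectorStore interface the FAQ mentions.
// These names are illustrative — check @asktext/core for the real contract.
export interface StoredChunk {
  postId: string;
  chunkIndex: number;
  content: string;
  embedding: number[];
}

export interface VectorStore {
  upsert(chunks: StoredChunk[]): Promise<void>;
  query(embedding: number[], opts: { postId: string; topK: number }): Promise<StoredChunk[]>;
}

// Toy in-memory implementation. OpenAI embeddings are unit-length, so a
// plain dot product ranks chunks the same way cosine similarity would.
export class InMemoryStore implements VectorStore {
  private rows: StoredChunk[] = [];

  async upsert(chunks: StoredChunk[]): Promise<void> {
    this.rows.push(...chunks);
  }

  async query(embedding: number[], opts: { postId: string; topK: number }): Promise<StoredChunk[]> {
    const dot = (a: number[], b: number[]) => a.reduce((s, x, i) => s + x * b[i], 0);
    return this.rows
      .filter((r) => r.postId === opts.postId)
      .sort((a, b) => dot(embedding, b.embedding) - dot(embedding, a.embedding))
      .slice(0, opts.topK);
  }
}
```

Swapping in Pinecone or pgvector would mean reimplementing these two methods against the external index while leaving the rest of the pipeline untouched.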

Q: Do I need pgvector? No – everything is JSON text in Postgres. pgvector is optional and will be shipped as @asktext/pgvector-store later.

Q: How much does embedding cost? 100 k words ≈ 75 k tokens → ~US $0.01 with text-embedding-3-small.

Q: Where do I create the Vapi assistant? In the Vapi dashboard – follow their wizard, then put the assistant ID in the env. Detailed steps in /docs/vapi.md.


8 License

MIT.