JSPM

  • Downloads 144738
  • License MIT

JavaScript bindings for tiktoken

Package Exports

  • @dqbd/tiktoken
  • @dqbd/tiktoken/bundler
  • @dqbd/tiktoken/init
  • @dqbd/tiktoken/lite
  • @dqbd/tiktoken/lite/bundler
  • @dqbd/tiktoken/lite/init
  • @dqbd/tiktoken/lite/load
  • @dqbd/tiktoken/lite/tiktoken_bg.wasm
  • @dqbd/tiktoken/load
  • @dqbd/tiktoken/tiktoken_bg.wasm

Readme

⏳ tiktoken

tiktoken is a BPE tokeniser for use with OpenAI's models, forked from the original tiktoken library to provide NPM bindings for Node and other JS runtimes.

The open source version of tiktoken can be installed from NPM:

npm install @dqbd/tiktoken

Usage

Basic usage follows:

import assert from "node:assert";
import { get_encoding, encoding_for_model } from "@dqbd/tiktoken";

const enc = get_encoding("gpt2");
assert(
  new TextDecoder().decode(enc.decode(enc.encode("hello world"))) ===
    "hello world"
);

// To get the tokeniser corresponding to a specific model in the OpenAI API:
const modelEnc = encoding_for_model("text-davinci-003");

// Extend an existing encoding with custom special tokens:
const customEnc = encoding_for_model("gpt2", {
  "<|im_start|>": 100264,
  "<|im_end|>": 100265,
});

// Don't forget to free encoders once they are no longer used:
enc.free();
modelEnc.free();
customEnc.free();
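
By default, encode raises an error if the input contains a special token. As in the upstream tiktoken API, a list of allowed special tokens can be passed as the second argument. A minimal sketch; the second parameter and the "all" shorthand are assumed to mirror the upstream interface:

const chatEnc = encoding_for_model("gpt2", {
  "<|im_start|>": 100264,
  "<|im_end|>": 100265,
});

// "all" permits every registered special token; an array such as
// ["<|im_start|>"] would permit only the listed ones (assumed signature).
const withSpecial = chatEnc.encode("<|im_start|>user<|im_end|>", "all");
console.log(withSpecial); // includes the ids 100264 and 100265
chatEnc.free();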

If desired, you can create a Tiktoken instance directly with custom ranks, special tokens and regex pattern:

import { Tiktoken } from "@dqbd/tiktoken";
import { readFileSync } from "node:fs";

const encoder = new Tiktoken(
  readFileSync("./ranks/gpt2.tiktoken").toString("utf-8"),
  { "<|endoftext|>": 50256, "<|im_start|>": 100264, "<|im_end|>": 100265 },
  "'s|'t|'re|'ve|'m|'ll|'d| ?\\p{L}+| ?\\p{N}+| ?[^\\s\\p{L}\\p{N}]+|\\s+(?!\\S)|\\s+"
);
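
The resulting instance exposes the same encode, decode, and free methods as the encoders returned by get_encoding. A brief sketch reusing the encoder built above:

// Round-trip through the custom encoder; decode returns raw bytes.
const bytes = encoder.decode(encoder.encode("hello world"));
console.log(new TextDecoder().decode(bytes)); // "hello world"
encoder.free();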

Compatibility

As this is a WASM library, some runtimes may have compatibility problems. If you encounter any, please open an issue.

Runtime               Status   Notes
Node.js               ✅
Bun                   ✅
Vite                  ✅       See here for notes
Next.js               ✅       See here for notes
Vercel Edge Runtime   ✅       See here for notes
Cloudflare Workers    🚧       See here for caveats
Deno                  ❌       Currently unsupported

Vite

If you are using Vite, you will need to add both the vite-plugin-wasm and vite-plugin-top-level-await plugins. Add the following to your vite.config.js:

import wasm from "vite-plugin-wasm";
import topLevelAwait from "vite-plugin-top-level-await";
import { defineConfig } from "vite";

export default defineConfig({
  plugins: [wasm(), topLevelAwait()],
});
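
With the plugins in place, the package can be imported the same way as in Node. A minimal sketch, assuming a hypothetical src/main.ts entry point:

// src/main.ts (hypothetical entry point)
import { get_encoding } from "@dqbd/tiktoken";

const enc = get_encoding("gpt2");
console.log(enc.encode("hello world"));
enc.free();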

Next.js

Both API routes and /pages are supported with the following configuration. To avoid issues with the bundler resolving the Node.js build of the package, import it from @dqbd/tiktoken/bundler instead.

import { get_encoding } from "@dqbd/tiktoken/bundler";
import { NextApiRequest, NextApiResponse } from "next";

export default function handler(req: NextApiRequest, res: NextApiResponse) {
  const encoder = get_encoding("gpt2");
  const message = encoder.encode(`Hello World ${Math.random()}`);
  encoder.free();
  return res.status(200).json({ message });
}

Additional webpack configuration in next.config.js is also required:

// next.config.js
const config = {
  webpack(config, { isServer, dev }) {
    // Required for WASM: enable async WebAssembly and module layers
    config.experiments = {
      asyncWebAssembly: true,
      layers: true,
    };

    return config;
  },
};

module.exports = config;
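
With this configuration in place, /pages can use the bundler entry point on the client as well. A minimal sketch with a hypothetical pages/index.tsx; a real page should memoize the encoder rather than recreate it per render:

// pages/index.tsx (hypothetical page)
import { get_encoding } from "@dqbd/tiktoken/bundler";

// Created once at module load; lives for the lifetime of the app.
const encoding = get_encoding("gpt2");

export default function Home() {
  const tokens = encoding.encode("hello world");
  return <main>{tokens.length} tokens</main>;
}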

Vercel Edge Runtime

Vercel Edge Runtime supports WASM modules when they are imported with a ?module suffix. Initialize the encoder with the following snippet:

import wasm from "@dqbd/tiktoken/tiktoken_bg.wasm?module";
import { init, get_encoding } from "@dqbd/tiktoken/init";

export const config = { runtime: "edge" };

export default async function (req: Request) {
  await init((imports) => WebAssembly.instantiate(wasm, imports));

  const encoder = get_encoding("cl100k_base");
  const tokens = encoder.encode("hello world");
  encoder.free();

  // Return the saved tokens; the encoder itself has already been freed.
  return new Response(`${tokens}`);
}

Cloudflare Workers

Similar to Vercel Edge Runtime, Cloudflare Workers must import the WASM binary manually. You also need to point directly at the binary file, which in some cases means including the node_modules prefix in the import path.

Add the following rule to the wrangler.toml to upload WASM during build:

[[rules]]
globs = ["**/*.wasm"]
type = "CompiledWasm"

Initialize the encoder with the following snippet:

import wasm from "./node_modules/@dqbd/tiktoken/tiktoken_bg.wasm";
import { get_encoding, init } from "@dqbd/tiktoken/init";

export default {
  async fetch() {
    await init((imports) => WebAssembly.instantiate(wasm, imports));
    const encoder = get_encoding("cl100k_base");
    const tokens = encoder.encode("hello world");
    encoder.free();
    return new Response(`${tokens}`);
  },
};

Acknowledgements