JSPM

gpt4-tokenizer

1.3.0
  • Downloads 951
  • License MIT

Package Exports

  • gpt4-tokenizer
  • gpt4-tokenizer/dist/index.js

This package does not declare an exports field, so the exports above were automatically detected and optimized by JSPM instead. If a package subpath is missing, consider opening an issue on the original package (gpt4-tokenizer) asking for "exports" field support. If that is not possible, create a JSPM override to customize the exports field for this package.

Readme

GPT4 Tokenizer

This is an isomorphic TypeScript tokenizer for OpenAI's GPT-4 model. It also includes some utility functions for tokenizing and encoding text for use with the GPT-4 model.

It works in any environment where TextEncoder and TextDecoder are available as globals.
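As a quick sanity check (a sketch, not part of the library's API), you can verify that the required globals exist before constructing a tokenizer:

```typescript
// The tokenizer relies on the WHATWG TextEncoder/TextDecoder globals,
// which exist in modern browsers and in recent Node.js versions.
const supported =
  typeof TextEncoder !== 'undefined' && typeof TextDecoder !== 'undefined';

if (!supported) {
  throw new Error('gpt4-tokenizer needs TextEncoder/TextDecoder globals');
}

// Round-trip sanity check: UTF-8 encode, then decode back.
const bytes = new TextEncoder().encode('hello 👋');
console.log(new TextDecoder().decode(bytes)); // 'hello 👋'
```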

Usage

First, install:

yarn add gpt4-tokenizer

Estimate token length

import GPT4Tokenizer from 'gpt4-tokenizer';

const tokenizer = new GPT4Tokenizer({ type: 'gpt3' }); // or 'codex'
const str = 'hello 👋 world 🌍';
const estimatedTokenCount = tokenizer.estimateTokenCount(str); // 7

Chunk by token length

import GPT4Tokenizer from 'gpt4-tokenizer';

const tokenizer = new GPT4Tokenizer({ type: 'gpt3' }); // or 'codex'
const str = 'A very long string...';
const chunks = tokenizer.chunkText(str, 5); // chunks of at most 5 tokens each

Reference

This library is based on the following:

The main difference between this library and gpt-3-encoder is that this library supports both gpt3 and codex tokenization (the dictionary is taken directly from OpenAI, so the tokenization result is on par with the OpenAI Playground). In addition, the Map API is used instead of plain JavaScript objects, especially for the bpeRanks object, which should yield some performance improvement.
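As a toy illustration of that design choice (this is not the library's actual code), a Map keyed by the joined symbol pair can serve as the BPE rank table, avoiding prototype-chain lookups that plain objects incur on arbitrary string keys:

```typescript
// Hypothetical miniature BPE rank table: lower rank = earlier merge.
const bpeRanks = new Map<string, number>([
  ['h e', 0],
  ['he l', 1],
  ['hel l', 2],
  ['hell o', 3],
]);

// Find the lowest-ranked adjacent pair in a word's symbol sequence,
// i.e. the next pair a BPE tokenizer would merge.
function bestPair(symbols: string[]): string | null {
  let best: string | null = null;
  let bestRank = Infinity;
  for (let i = 0; i < symbols.length - 1; i++) {
    const pair = `${symbols[i]} ${symbols[i + 1]}`;
    const rank = bpeRanks.get(pair); // O(1) Map lookup
    if (rank !== undefined && rank < bestRank) {
      bestRank = rank;
      best = pair;
    }
  }
  return best;
}

console.log(bestPair(['h', 'e', 'l', 'l', 'o'])); // 'h e' (rank 0)
```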

License

MIT