JSPM

@promptbook/utils

0.103.0-2
  • Downloads 851132
  • License CC-BY-4.0

Promptbook: Turn your company's scattered knowledge into AI-ready books

Package Exports

  • @promptbook/utils
  • @promptbook/utils/esm/index.es.js
  • @promptbook/utils/umd/index.umd.js

This package does not declare an exports field, so the exports above have been automatically detected and optimized by JSPM instead. If any package subpath is missing, it is recommended to post an issue to the original package (@promptbook/utils) to support the "exports" field. If that is not possible, create a JSPM override to customize the exports field for this package.

Readme

✨ Promptbook: AI Agents

Turn your company's scattered knowledge into AI-ready Books

[NPM Version of Promptbook](https://www.npmjs.com/package/promptbook) · [Quality of package Promptbook](https://packagequality.com/#?package=promptbook) · Known Vulnerabilities · 🧪 Test Books · 🧪 Test build · 🧪 Lint · 🧪 Spell check · 🧪 Test types · Issues

🌟 New Features

  • 🚀 GPT-5 Support - Now includes OpenAI's most advanced language model with unprecedented reasoning capabilities and a 200K context window
  • 💡 VS Code support for .book files with syntax highlighting and IntelliSense
  • 🐳 Official Docker image (hejny/promptbook) for seamless containerized usage
  • 🔥 Native support for OpenAI o3-mini, GPT-4, and other leading LLMs
  • 🔍 DeepSeek integration for advanced knowledge search
⚠ Warning: This is a pre-release version of the library. It is not yet ready for production use. Please use the latest stable release instead.

📦 Package @promptbook/utils

To install this package, run:

# Install entire promptbook ecosystem
npm i ptbk

# Install just this package to save space
npm install @promptbook/utils

Comprehensive utility functions for text processing, validation, normalization, and LLM input/output handling in the Promptbook ecosystem.

🎯 Purpose and Motivation

The utils package provides a rich collection of utility functions that are essential for working with LLM inputs and outputs. It handles common tasks like text normalization, parameter templating, validation, and postprocessing, eliminating the need to implement these utilities from scratch in every promptbook application.

🔧 High-Level Functionality

This package offers utilities across multiple domains:

  • Text Processing: Counting, splitting, and analyzing text content
  • Template System: Secure parameter substitution and prompt formatting
  • Normalization: Converting text to various naming conventions and formats
  • Validation: Comprehensive validation for URLs, emails, file paths, and more
  • Serialization: JSON handling, deep cloning, and object manipulation
  • Environment Detection: Runtime environment identification utilities
  • Format Parsing: Support for CSV, JSON, XML validation and parsing

✨ Key Features

  • 🔒 Secure Templating - Prompt injection protection with template functions
  • 📊 Text Analysis - Count words, sentences, paragraphs, pages, and characters
  • 🔄 Case Conversion - Support for kebab-case, camelCase, PascalCase, SCREAMING_CASE
  • ✅ Comprehensive Validation - Email, URL, file path, UUID, and format validators
  • 🧹 Text Cleaning - Remove emojis, quotes, diacritics, and normalize whitespace
  • 📦 Serialization Tools - Deep cloning, JSON export, and serialization checking
  • 🌍 Environment Aware - Detect browser, Node.js, Jest, and Web Worker environments
  • 🎯 LLM Optimized - Functions specifically designed for LLM input/output processing

Simple templating

The prompt template tag function helps format prompt strings for LLM interactions. It handles string interpolation, keeps multiline strings and lists consistently formatted, and also provides protection against prompt injection.

import { prompt } from '@promptbook/utils';

// Example input; in practice this would come from an untrusted user
const unsecureUserInput = 'Hello, wrld!';

const promptString = prompt`
    Correct the following sentence:

    > ${unsecureUserInput}
`;

The name prompt may collide with other identifiers in your code. In that case, use promptTemplate, which is an alias for prompt:

import { promptTemplate } from '@promptbook/utils';

const promptString = promptTemplate`
    Correct the following sentence:

    > ${unsecureUserInput}
`;

Advanced templating

The templateParameters function replaces the parameters in a given template and is optimized for LLM prompt templates.

import { templateParameters } from '@promptbook/utils';

templateParameters('Hello, {name}!', { name: 'world' }); // 'Hello, world!'

It also works for multiline templates with blockquotes:

import { templateParameters, spaceTrim } from '@promptbook/utils';

templateParameters(
    spaceTrim(`
        Hello, {name}!

        > {answer}
    `),
    {
        name: 'world',
        answer: spaceTrim(`
            I'm fine,
            thank you!

            And you?
        `),
    },
);

// Hello, world!
//
// > I'm fine,
// > thank you!
// >
// > And you?

Counting

These functions count statistics about the input/output in human terms (rather than tokens and bytes). You can use countCharacters, countLines, countPages, countParagraphs, countSentences, and countWords:

import { countWords } from '@promptbook/utils';

console.log(countWords('Hello, world!')); // 2

Splitting

Splitting functions are similar to the counting functions, but they return the split parts of the input/output. You can use splitIntoCharacters, splitIntoLines, splitIntoPages, splitIntoParagraphs, splitIntoSentences, and splitIntoWords:

import { splitIntoWords } from '@promptbook/utils';

console.log(splitIntoWords('Hello, world!')); // ['Hello', 'world']

Normalization

Normalization functions convert a string into a normalized form. You can use kebab-case, camelCase, PascalCase, SCREAMING_CASE, and snake_case:

import { normalizeTo } from '@promptbook/utils';

console.log(normalizeTo['kebab-case']('Hello, world!')); // 'hello-world'
  • There are more normalization functions like capitalize, decapitalize, removeDiacritics, and others (see the sketch below)
  • These can also be used as postprocessing functions in the POSTPROCESS command in promptbook
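
For illustration, a minimal sketch of a few of these helpers, assuming each takes a string and returns the transformed string (the outputs in the comments are assumptions; check the package's type definitions for exact signatures):

import { capitalize, decapitalize, removeDiacritics, normalizeTo_camelCase } from '@promptbook/utils';

console.log(capitalize('hello'));                    // 'Hello' (assumed)
console.log(decapitalize('Hello'));                  // 'hello' (assumed)
console.log(removeDiacritics('Čaj se šlehačkou'));   // 'Caj se slehackou' (assumed)
console.log(normalizeTo_camelCase('Hello, world!')); // 'helloWorld' (assumed)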

Postprocessing

Sometimes you need to postprocess the output of the LLM model. Every postprocessing function that is available through the POSTPROCESS command in promptbook is exported from @promptbook/utils.

Very often you will use unwrapResult, which extracts the result you need from output that contains additional surrounding text:

import { unwrapResult } from '@promptbook/utils';

unwrapResult('Best greeting for the user is "Hi Pavol!"'); // 'Hi Pavol!'

📦 Exported Entities

Version Information

  • BOOK_LANGUAGE_VERSION - Current book language version
  • PROMPTBOOK_ENGINE_VERSION - Current engine version

Configuration Constants

  • VALUE_STRINGS - Standard value strings
  • SMALL_NUMBER - Small number constant

Visualization

  • renderPromptbookMermaid - Render promptbook as Mermaid diagram

Error Handling

  • deserializeError - Deserialize error objects
  • serializeError - Serialize error objects
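
As an illustration only, a minimal sketch of a round trip through these two helpers, assuming serializeError produces a plain JSON-friendly object and deserializeError reconstructs an Error from it (both signatures are assumptions, not confirmed by this README):

import { serializeError, deserializeError } from '@promptbook/utils';

const original = new Error('Something went wrong');
const plain = serializeError(original);   // assumed: plain object safe to JSON.stringify
const restored = deserializeError(plain); // assumed: an Error instance again

console.log(restored.message); // 'Something went wrong'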

Async Utilities

  • forEachAsync - Async forEach implementation

Format Validation

  • isValidCsvString - Validate CSV string format
  • isValidJsonString - Validate JSON string format
  • jsonParse - Safe JSON parsing
  • isValidXmlString - Validate XML string format
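
A minimal sketch of the format validators, assuming each takes a string and returns a boolean, and that jsonParse behaves like JSON.parse with friendlier errors (assumptions, not confirmed here):

import { isValidJsonString, isValidCsvString, jsonParse } from '@promptbook/utils';

console.log(isValidJsonString('{"name": "world"}')); // true (assumed)
console.log(isValidJsonString('{oops'));             // false (assumed)
console.log(isValidCsvString('name,age\nAlice,30')); // true (assumed)

const data = jsonParse('{"name": "world"}'); // assumed: parsed object, like JSON.parse
console.log(data.name);                      // 'world'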

Template Functions

  • prompt - Template tag for secure prompt formatting
  • promptTemplate - Alias for prompt template tag

Environment Detection

  • $getCurrentDate - Get current date (side effect)
  • $isRunningInBrowser - Check if running in browser
  • $isRunningInJest - Check if running in Jest
  • $isRunningInNode - Check if running in Node.js
  • $isRunningInWebWorker - Check if running in Web Worker
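
A sketch of how the environment detectors could be used to branch between runtimes; that they are called as functions returning booleans is an assumption based on their names:

import { $isRunningInBrowser, $isRunningInNode, $isRunningInWebWorker } from '@promptbook/utils';

// Illustrative runtime branching (assumed boolean-returning functions)
if ($isRunningInNode()) {
    console.log('Running in Node.js - the filesystem is available');
} else if ($isRunningInBrowser()) {
    console.log('Running in a browser - use localStorage or IndexedDB');
} else if ($isRunningInWebWorker()) {
    console.log('Running in a Web Worker');
}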

Text Counting and Analysis

  • CHARACTERS_PER_STANDARD_LINE - Characters per standard line constant
  • LINES_PER_STANDARD_PAGE - Lines per standard page constant
  • countCharacters - Count characters in text
  • countLines - Count lines in text
  • countPages - Count pages in text
  • countParagraphs - Count paragraphs in text
  • splitIntoSentences - Split text into sentences
  • countSentences - Count sentences in text
  • countWords - Count words in text
  • CountUtils - Utility object with all counting functions
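
For illustration, a sketch using several counting helpers together; the idea that countPages derives from the two constants is an assumption based on their names:

import {
    countCharacters,
    countWords,
    countSentences,
    countPages,
    CHARACTERS_PER_STANDARD_LINE,
    LINES_PER_STANDARD_PAGE,
} from '@promptbook/utils';

const text = 'Hello, world! How are you today? I am fine.';

console.log(countCharacters(text)); // total number of characters
console.log(countWords(text));      // 9
console.log(countSentences(text));  // 3
console.log(countPages(text));      // 1 - assumed: a short text fits on one standard page
console.log(CHARACTERS_PER_STANDARD_LINE, LINES_PER_STANDARD_PAGE); // assumed basis for page estimation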

Text Normalization

  • capitalize - Capitalize first letter
  • decapitalize - Decapitalize first letter
  • DIACRITIC_VARIANTS_LETTERS - Diacritic variants mapping
  • string_keyword - Keyword string type (type)
  • Keywords - Keywords type (type)
  • isValidKeyword - Validate keyword format
  • nameToUriPart - Convert name to URI part
  • nameToUriParts - Convert name to URI parts
  • string_kebab_case - Kebab case string type (type)
  • normalizeToKebabCase - Convert to kebab-case
  • string_camelCase - Camel case string type (type)
  • normalizeTo_camelCase - Convert to camelCase
  • string_PascalCase - Pascal case string type (type)
  • normalizeTo_PascalCase - Convert to PascalCase
  • string_SCREAMING_CASE - Screaming case string type (type)
  • normalizeTo_SCREAMING_CASE - Convert to SCREAMING_CASE
  • normalizeTo_snake_case - Convert to snake_case
  • normalizeWhitespaces - Normalize whitespace characters
  • orderJson - Order JSON object properties
  • parseKeywords - Parse keywords from input
  • parseKeywordsFromString - Parse keywords from string
  • removeDiacritics - Remove diacritic marks
  • searchKeywords - Search within keywords
  • suffixUrl - Add suffix to URL
  • titleToName - Convert title to name format

Text Organization

  • spaceTrim - Trim spaces while preserving structure

Parameter Processing

  • extractParameterNames - Extract parameter names from template
  • numberToString - Convert number to string
  • templateParameters - Replace template parameters
  • valueToString - Convert value to string
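
A minimal sketch combining extractParameterNames with templateParameters; that it returns the names found in {curly} placeholders is an assumption based on the templating examples above:

import { extractParameterNames, templateParameters } from '@promptbook/utils';

const template = 'Hello, {name}! The weather is {weather}.';

console.log(extractParameterNames(template)); // assumed: a collection containing 'name' and 'weather'
console.log(templateParameters(template, { name: 'world', weather: 'sunny' }));
// 'Hello, world! The weather is sunny.'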

Parsing Utilities

  • parseNumber - Parse number from string

Text Processing

  • removeEmojis - Remove emoji characters
  • removeQuotes - Remove quote characters
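
For illustration, a sketch assuming both helpers take a string and return the cleaned string:

import { removeEmojis, removeQuotes } from '@promptbook/utils';

console.log(removeEmojis('Hello 👋 world!')); // assumed: 'Hello world!' (exact whitespace handling not documented here)
console.log(removeQuotes('"Hello, world!"')); // assumed: 'Hello, world!'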

Serialization

  • $deepFreeze - Deep freeze object (side effect)
  • checkSerializableAsJson - Check if serializable as JSON
  • clonePipeline - Clone pipeline object
  • deepClone - Deep clone object
  • exportJson - Export object as JSON
  • isSerializableAsJson - Check if object is JSON serializable
  • jsonStringsToJsons - Convert JSON strings to objects
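
A hedged sketch of the cloning and serializability helpers, assuming deepClone returns a structural copy and isSerializableAsJson returns a boolean:

import { deepClone, isSerializableAsJson } from '@promptbook/utils';

const pipelineLike = { title: 'Example', parameters: [{ name: 'topic' }] };

const copy = deepClone(pipelineLike);
console.log(copy !== pipelineLike);   // true - assumed: a new object, not the same reference
console.log(copy.parameters[0].name); // 'topic'

console.log(isSerializableAsJson(pipelineLike)); // true (assumed)
console.log(isSerializableAsJson(() => {}));     // false (assumed) - functions are not JSON-serializable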

Set Operations

  • difference - Set difference operation
  • intersection - Set intersection operation
  • union - Set union operation
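
A minimal sketch assuming these operate on JavaScript Set objects (whether they also accept arrays is not stated here):

import { difference, intersection, union } from '@promptbook/utils';

const a = new Set(['apple', 'banana', 'cherry']);
const b = new Set(['banana', 'cherry', 'date']);

console.log(union(a, b));        // assumed: Set { 'apple', 'banana', 'cherry', 'date' }
console.log(intersection(a, b)); // assumed: Set { 'banana', 'cherry' }
console.log(difference(a, b));   // assumed: Set { 'apple' }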

Code Processing

  • trimCodeBlock - Trim code block formatting
  • trimEndOfCodeBlock - Trim end of code block
  • unwrapResult - Extract result from wrapped output
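
A sketch of cleaning a fenced code block returned by an LLM; the exact trimming behavior is an assumption based on the function names:

import { trimCodeBlock, unwrapResult } from '@promptbook/utils';

const llmOutput = '```json\n{ "answer": 42 }\n```';

console.log(trimCodeBlock(llmOutput));           // assumed: '{ "answer": 42 }' with the fence markers removed
console.log(unwrapResult('The result is "42"')); // '42'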

Validation

  • isValidEmail - Validate email address format
  • isRootPath - Check if path is root path
  • isValidFilePath - Validate file path format
  • isValidJavascriptName - Validate JavaScript identifier
  • isValidPromptbookVersion - Validate promptbook version
  • isValidSemanticVersion - Validate semantic version
  • isHostnameOnPrivateNetwork - Check if hostname is on private network
  • isUrlOnPrivateNetwork - Check if URL is on private network
  • isValidPipelineUrl - Validate pipeline URL format
  • isValidUrl - Validate URL format
  • isValidUuid - Validate UUID format
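
A minimal sketch of a few validators, assuming each takes a string and returns a boolean:

import { isValidEmail, isValidUrl, isValidUuid, isValidSemanticVersion } from '@promptbook/utils';

console.log(isValidEmail('pavol@example.com'));                   // true (assumed)
console.log(isValidEmail('not-an-email'));                        // false (assumed)
console.log(isValidUrl('https://ptbk.io'));                       // true (assumed)
console.log(isValidUuid('123e4567-e89b-12d3-a456-426614174000')); // true (assumed)
console.log(isValidSemanticVersion('1.2.3'));                     // true (assumed)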

💡 This package provides utility functions for promptbook applications. For the core functionality, see @promptbook/core or install all packages with npm i ptbk


The rest of the documentation is common to the entire Promptbook ecosystem:

📖 The Book Whitepaper

For most business applications nowadays, the biggest challenge isn't the raw capability of AI models. Large language models like GPT-5 or Claude-4.1 are extremely capable.

The main challenge is to narrow the model down: constrain it and set the proper context, rules, knowledge, and personality. There are a lot of tools which can do exactly this. On one side, there are no-code platforms which can launch your agent in seconds. On the other side, there are heavy frameworks like LangChain or Semantic Kernel, which give you deep control.

Promptbook takes the best from both worlds. You define your AI's behavior in simple books, which are very explicit. They are automatically enforced, yet easy to understand, easy to write, reliable, and portable.

Paul Smith & Associés Book

Aspects of a great AI agent

We have created a language called Book, which allows you to write AI agents in their native language and create your own AI persona. Book provides a guide for defining all of their traits and commitments.

You can look at it as prompting (or writing a system message), decorated with commitments.

Persona commitment

Personas define the character of your AI persona, its role, and how it should interact with users. It sets the tone and style of communication.

Paul Smith & Associés Book

Knowledge commitment

Knowledge Commitment allows you to provide specific information, facts, or context that the AI should be aware of when responding.

This can include domain-specific knowledge, company policies, or any other relevant information.

The Promptbook Engine will automatically enforce this knowledge during interactions. When the knowledge is short enough, it will be included in the prompt. When it is too long, it will be stored in a vector database and retrieved via RAG when needed. Either way, you don't need to worry about it.

Paul Smith & Associés Book

Rule commitment

Rules will enforce specific behaviors or constraints on the AI's responses. This can include ethical guidelines, communication styles, or any other rules you want the AI to follow.

Depending on the rule's strictness, Promptbook will either propagate it to the prompt or use other techniques, like an adversarial agent, to enforce it.

Paul Smith & Associés Book

Action commitment

Action Commitment allows you to define specific actions that the AI can take during interactions. This can include things like posting on a social media platform, sending emails, creating calendar events, or interacting with your internal systems.

Paul Smith & Associés Book

Read more about the language

Where to use your AI agent written in a book

Books can be useful in various applications and scenarios. Here are some examples:

Chat apps:

Create your own chat shopping assistant and place it in your eShop. You will be able to answer customer questions, help them find products, and provide personalized recommendations. Everything is tightly controlled by the book you have written.

Reply Agent:

Create your own AI agent, which will look at your emails and reply to them. It can even create drafts for you to review before sending.

Coding Agent:

Do you love Vibecoding, but the AI code is not always aligned with your coding style and architecture, rules, security, etc.? Create your own coding agent to help enforce your specific coding standards and practices.

This can be integrated into almost any Vibecoding platform, like GitHub Copilot, Amazon CodeWhisperer, Cursor, Cline, Kilocode, Roocode,...

They will work the same way you are used to, but with your specific rules written in a book.

Internal Expertise

Do you have an app written in TypeScript, Python, C#, Java, or any other language, and are you integrating AI into it?

You can avoid struggling with choosing the best model and its settings (temperature, max tokens, etc.) by writing a book agent and using it as your AI expert.

It doesn't matter whether you do automations, data analysis, customer support, sentiment analysis, classification, or any other task. Your AI agent will be tailored to your specific needs and requirements.

It even works in no-code platforms!

How to create your AI agent in a book

Now you want to use it. There are several ways to write your first book:

From scratch with help from Paul

We have written an AI assistant in Book who can help you write your first book.

Your AI twin

Copy your own behavior, personality, and knowledge into a book and create your AI twin. It can help you with your work, personal life, or any other task.

AI persona workpool

Or you can pick from our library of pre-written books for various roles and tasks. You can find books for customer support, coding, marketing, sales, HR, legal, and many other roles.

🚀 Get started

Take a look at the simple starter kit with books integrated into the Hello World sample applications:

💜 The Promptbook Project

The Promptbook project is an ecosystem of multiple projects and tools. The following is a list of the most important pieces of the project:

  • Book language - Book is a human-understandable markup language for writing AI applications such as chatbots, knowledge bases, agents, avatars, translators, automations, and more. There is also a plugin for VS Code to support the .book file extension.
  • Promptbook Engine - The Promptbook Engine can run applications written in the Book language. It is released as multiple NPM packages and on Docker Hub.
  • Promptbook Studio - Promptbook.studio is a web-based editor and runner for book applications. It is still in the experimental MVP stage.

Hello world examples:

๐ŸŒ Community & Social Media

Join our growing community of developers and users:

  • 💬 Discord - Join our active developer community for discussions and support
  • 🗣️ GitHub Discussions - Technical discussions, feature requests, and community Q&A
  • 👔 LinkedIn - Professional updates and industry insights
  • 📱 Facebook - General announcements and community engagement
  • 🔗 ptbk.io - Official landing page with project information

🖼️ Product & Brand Channels

Promptbook.studio

📸 Instagram @promptbook.studio - Visual updates, UI showcases, and design inspiration

📘 Book Language Blueprint

⚠ This file is a work in progress and may be incomplete or inaccurate.

Book is a simple format to define AI apps and agents. It is the source code, the soul, of AI apps and agents. Its purpose is to avoid both ambiguous UIs with multiple fields and low-level approaches like programming in LangChain.

Book is defined in a file with the .book extension.

Examples

Write an article about {topic} Book


Make post on LinkedIn based on @Input. Book


Odpověz na Email (Reply to an Email) Book


Analyzuj {Případ} (Analyze {Případ}) Book


Basic Commitments:

Book is composed of commitments, which are the building blocks of the book. Each commitment defines a specific task or action to be performed by the AI agent. The commitments are defined in a structured format, allowing for easy parsing and execution.

PERSONA

defines basic contour of

PERSONA @Joe Average man with

also the PERSONA is

Describes

RULE or RULES

defines

STYLE

xxx

SAMPLE

xxx

KNOWLEDGE

xxx

EXPECT

xxx

FORMAT

xxx

JOKER

xxx

MODEL

xxx

ACTION

xxx

META

Names

each commitment is

PERSONA

Variable names

Types

Miscellaneous aspects of Book language

Named vs Anonymous commitments

Single line vs multiline

Bookish vs Non-bookish definitions


____

A great context and prompt can make or break your AI app. In the last few years we have come a long way from simple one-shot prompts. When you wanted to add complexity, you fine-tuned the model or added better orchestration. But with really large language models, context seems to be king.

The Book is the language to describe and define your AI app. It's like a shem for a Golem: the book is the shem and the model is the golem.

Franz Kafka Book

Who, what and how?

To write a good prompt and a good book, you will be answering three main questions:

  • Who is working on the task: is it a team or an individual? What is the role of the person in the team? What is their background? What is their motivation to work on this task? You would rather want Paul, a TypeScript developer who prefers SOLID code, than gemini-2.
  • What
  • How

Each commitment (described below) is connected with one of these three questions.

Commitments

A commitment is one piece of a book; you can imagine it as one paragraph of the book.

Each commitment starts on a new line with the commitment name, usually in UPPERCASE, followed by the contents of that commitment. The contents of a commitment are written in natural language.

Commitments are chained one after another; in general, commitments written later are more important and redefine things defined earlier.

Each commitment falls into one or more of the categories who, what, or how.

Here are some basic commitments:

  • PERSONA tells who is working on the task
  • KNOWLEDGE describes what knowledge the person has
  • GOAL describes what is the goal of the task
  • ACTION describes what actions can be done
  • RULE describes what rules should be followed
  • STYLE describes how the output should be presented

Variables and references

For a prompt (and a book) to be useful, it should have a fixed static part and a variable dynamic part.

Untitled Book

Imports

Layering

Book defined in book

Book vs:

  • Why not just pick the right model?
  • Orchestration frameworks - LangChain, Google Agent ..., Semantic Kernel,...
  • Fine-tuning
  • Temperature, top_p, top_k,... etc.
  • System message
  • MCP server
  • function calling

📚 Documentation

See detailed guides and API reference in the docs or online.

🔒 Security

For information on reporting security vulnerabilities, see our Security Policy.

📦 Packages (for developers)

This library is divided into several packages, all published from a single monorepo. You can install all of them at once:

npm i ptbk

Or you can install them separately:

โญ Marked packages are worth to try first

📚 Dictionary

The following glossary is used to clarify certain concepts:

General LLM / AI terms

  • Prompt drift is a phenomenon where the AI model starts to generate outputs that are not aligned with the original prompt. This can happen due to the model's training data, the prompt's wording, or the model's architecture.
  • Pipeline, workflow scenario or chain is a sequence of tasks that are executed in a specific order. In the context of AI, a pipeline can refer to a sequence of AI models that are used to process data.
  • Fine-tuning is a process where a pre-trained AI model is further trained on a specific dataset to improve its performance on a specific task.
  • Zero-shot learning is a machine learning paradigm where a model is trained to perform a task without any labeled examples. Instead, the model is provided with a description of the task and is expected to generate the correct output.
  • Few-shot learning is a machine learning paradigm where a model is trained to perform a task with only a few labeled examples. This is in contrast to traditional machine learning, where models are trained on large datasets.
  • Meta-learning is a machine learning paradigm where a model is trained on a variety of tasks and is able to learn new tasks with minimal additional training. This is achieved by learning a set of meta-parameters that can be quickly adapted to new tasks.
  • Retrieval-augmented generation is a machine learning paradigm where a model generates text by retrieving relevant information from a large database of text. This approach combines the benefits of generative models and retrieval models.
  • Longtail refers to non-common or rare events, items, or entities that are not well-represented in the training data of machine learning models. Longtail items are often challenging for models to predict accurately.

Note: This section is not a complete dictionary; rather, it is a list of general AI / LLM terms that have a connection with Promptbook.

💯 Core concepts

Advanced concepts

  • Data & Knowledge Management
  • Pipeline Control
  • Language & Output Control
  • Advanced Generation

๐Ÿ” View more concepts

🚂 Promptbook Engine

Schema of Promptbook Engine

➕➖ When to use Promptbook?

➕ When to use

  • When you are writing an app that generates complex things via LLM - like websites, articles, presentations, code, stories, songs,...
  • When you want to separate code from text prompts
  • When you want to describe complex prompt pipelines and don't want to do it in the code
  • When you want to orchestrate multiple prompts together
  • When you want to reuse parts of prompts in multiple places
  • When you want to version your prompts and test multiple versions
  • When you want to log the execution of prompts and backtrace the issues

See more

➖ When not to use

  • When you have already implemented a single simple prompt and it works fine for your job
  • When OpenAI Assistant (GPTs) is enough for you
  • When you need streaming (this may be implemented in the future, see discussion).
  • When you need to use something other than JavaScript or TypeScript (other languages are on the way, see the discussion)
  • When your main focus is on something other than text - like images, audio, video, spreadsheets (other media types may be added in the future, see discussion)
  • When you need to use recursion (see the discussion)

See more

๐Ÿœ Known issues

🧼 Intentionally not implemented features

โ” FAQ

If you have a question, start a discussion, open an issue, or write me an email.

📅 Changelog

See CHANGELOG.md

📜 License

This project is licensed under BUSL 1.1.

๐Ÿค Contributing

We welcome contributions! See CONTRIBUTING.md for guidelines.

You can also ⭐ star the project and follow us on GitHub or various other social networks. We are open to pull requests, feedback, and suggestions.

🆘 Support & Community

Need help with Book language? We're here for you!

We welcome contributions and feedback to make Book language better for everyone!