Package Exports
- @promptbook/core
- @promptbook/core/esm/index.es.js
- @promptbook/core/umd/index.umd.js

This package does not declare an exports field, so the exports above have been automatically detected and optimized by JSPM instead. If a package subpath is missing, it is recommended to post an issue to the original package (@promptbook/core) asking it to support the "exports" field. If that is not possible, create a JSPM override to customize the exports field for this package.
Readme
Promptbook: AI Agents
Turn your company's scattered knowledge into AI-ready Books
New Features
- GPT-5 Support - Now includes OpenAI's most advanced language model with unprecedented reasoning capabilities and a 200K context window
- VS Code support for .book files with syntax highlighting and IntelliSense
- Official Docker image (hejny/promptbook) for seamless containerized usage
- Native support for OpenAI o3-mini, GPT-4 and other leading LLMs
- DeepSeek integration for advanced knowledge search
Warning: This is a pre-release version of the library. It is not yet ready for production use; please use the latest stable release instead.
Package @promptbook/core
- Promptbooks are divided into several packages, all published from a single monorepo.
- This package, @promptbook/core, is one part of the Promptbook ecosystem.
To install this package, run:
```bash
# Install entire promptbook ecosystem
npm i ptbk

# Install just this package to save space
npm install @promptbook/core
```

The core package contains the fundamental logic and infrastructure for Promptbook. It provides the essential building blocks for creating, parsing, validating, and executing promptbooks, along with comprehensive error handling, LLM provider integrations, and execution utilities.
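For orientation, here is a minimal sketch of how those building blocks fit together. The imported names come from the export list later in this README, but the option shapes, argument types, and the result shape are illustrative assumptions rather than the exact API; consult the API reference for the authoritative signatures.

```ts
// Minimal sketch: names come from this package's exports, but the exact
// option shapes and result shape shown here are assumptions for illustration.
import {
    compilePipeline,
    createPipelineExecutor,
    createLlmToolsFromConfiguration,
} from '@promptbook/core';

async function main() {
    // Compile a book source (here a tiny inline book) into a pipeline definition
    const pipeline = await compilePipeline(`
        # Write an Article

        - INPUT PARAMETER {topic}
        - OUTPUT PARAMETER {article}

        > Write an article about {topic}

        -> {article}
    `);

    // Assumed configuration shape; in practice this points at a real provider
    const tools = { llm: createLlmToolsFromConfiguration([]) };

    const execute = createPipelineExecutor({ pipeline, tools });
    const result = await execute({ topic: 'AI agents' });
    console.info(result);
}

main().catch(console.error);
```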
Purpose and Motivation
The core package serves as the foundation of the Promptbook ecosystem. It abstracts away the complexity of working with different LLM providers, provides a unified interface for prompt execution, and handles all the intricate details of pipeline management, parameter validation, and result processing.
High-Level Functionality
This package orchestrates the entire promptbook execution lifecycle:
- Pipeline Management: Parse, validate, and compile promptbook definitions
- Execution Engine: Create and manage pipeline executors with comprehensive error handling
- LLM Integration: Unified interface for multiple LLM providers (OpenAI, Anthropic, Google, etc.)
- Parameter Processing: Template parameter substitution and validation
- Knowledge Management: Handle knowledge sources and scraping
- Storage Abstraction: Flexible storage backends for caching and persistence
- Format Support: Parse and validate various data formats (JSON, CSV, XML)
Key Features
- Universal Pipeline Executor - Execute promptbooks with any supported LLM provider
- Multi-Provider Support - Seamlessly switch between OpenAI, Anthropic, Google, and other providers
- Comprehensive Validation - Validate promptbooks, parameters, and execution results
- Expectation Checking - Built-in validation for output format, length, and content expectations
- Knowledge Integration - Scrape and process knowledge from various sources
- Flexible Storage - Memory, filesystem, and custom storage backends
- Error Handling - Detailed error types for debugging and monitoring
- Usage Tracking - Monitor token usage, costs, and performance metrics
- Format Parsers - Support for JSON, CSV, XML, and text formats
- Pipeline Migration - Upgrade and migrate pipeline definitions
Exported Entities
Version Information
- `BOOK_LANGUAGE_VERSION` - Current book language version
- `PROMPTBOOK_ENGINE_VERSION` - Current engine version
Agent and Book Management
- `createAgentModelRequirements` - Create model requirements for agents
- `parseAgentSource` - Parse agent source code
- `isValidBook` - Validate book format
- `validateBook` - Comprehensive book validation
- `DEFAULT_BOOK` - Default book template
Commitment System
- `createEmptyAgentModelRequirements` - Create empty model requirements
- `createBasicAgentModelRequirements` - Create basic model requirements
- `NotYetImplementedCommitmentDefinition` - Placeholder for future commitments
- `getCommitmentDefinition` - Get specific commitment definition
- `getAllCommitmentDefinitions` - Get all available commitment definitions
- `getAllCommitmentTypes` - Get all commitment types
- `isCommitmentSupported` - Check if commitment is supported
Collection Management
- `collectionToJson` - Convert collection to JSON
- `createCollectionFromJson` - Create collection from JSON data
- `createCollectionFromPromise` - Create collection from async source
- `createCollectionFromUrl` - Create collection from URL
- `createSubcollection` - Create filtered subcollection
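A brief sketch of how these collection helpers compose. The argument shapes, in particular the predicate passed to createSubcollection, are assumptions made for illustration:

```ts
// Sketch of the collection helpers; argument shapes are assumed for illustration.
import {
    createCollectionFromJson,
    createSubcollection,
    collectionToJson,
} from '@promptbook/core';

async function buildCollection(pipelines: Array<any> /* compiled pipeline JSONs */) {
    // Build an in-memory collection from already-compiled pipelines
    const collection = createCollectionFromJson(...pipelines);

    // Narrow it down, e.g. to pipelines whose URL lives under /articles/ (predicate shape assumed)
    const articles = createSubcollection(collection, (pipeline: any) =>
        String(pipeline.pipelineUrl ?? '').includes('/articles/'),
    );

    // Serialize back to plain JSON, e.g. for caching or sending to a client
    return await collectionToJson(articles);
}
```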
Configuration Constants
- `NAME` - Project name
- `ADMIN_EMAIL` - Administrator email
- `ADMIN_GITHUB_NAME` - GitHub username
- `CLAIM` - Project claim/tagline
- `DEFAULT_BOOK_TITLE` - Default book title
- `DEFAULT_TASK_TITLE` - Default task title
- `DEFAULT_PROMPT_TASK_TITLE` - Default prompt task title
- `DEFAULT_BOOK_OUTPUT_PARAMETER_NAME` - Default output parameter name
- `DEFAULT_MAX_FILE_SIZE` - Maximum file size limit
- `BIG_DATASET_TRESHOLD` - Threshold for large datasets
- `FAILED_VALUE_PLACEHOLDER` - Placeholder for failed values
- `PENDING_VALUE_PLACEHOLDER` - Placeholder for pending values
- `MAX_FILENAME_LENGTH` - Maximum filename length
- `DEFAULT_INTERMEDIATE_FILES_STRATEGY` - Strategy for intermediate files
- `DEFAULT_MAX_PARALLEL_COUNT` - Maximum parallel executions
- `DEFAULT_MAX_EXECUTION_ATTEMPTS` - Maximum execution attempts
- `DEFAULT_MAX_KNOWLEDGE_SOURCES_SCRAPING_DEPTH` - Knowledge scraping depth limit
- `DEFAULT_MAX_KNOWLEDGE_SOURCES_SCRAPING_TOTAL` - Knowledge scraping total limit
- `DEFAULT_BOOKS_DIRNAME` - Default books directory name
- `DEFAULT_DOWNLOAD_CACHE_DIRNAME` - Default download cache directory
- `DEFAULT_EXECUTION_CACHE_DIRNAME` - Default execution cache directory
- `DEFAULT_SCRAPE_CACHE_DIRNAME` - Default scrape cache directory
- `CLI_APP_ID` - CLI application identifier
- `PLAYGROUND_APP_ID` - Playground application identifier
- `DEFAULT_PIPELINE_COLLECTION_BASE_FILENAME` - Default collection filename
- `DEFAULT_REMOTE_SERVER_URL` - Default remote server URL
- `DEFAULT_CSV_SETTINGS` - Default CSV parsing settings
- `DEFAULT_IS_VERBOSE` - Default verbosity setting
- `SET_IS_VERBOSE` - Verbosity setter
- `DEFAULT_IS_AUTO_INSTALLED` - Default auto-install setting
- `DEFAULT_TASK_SIMULATED_DURATION_MS` - Default task simulation duration
- `DEFAULT_GET_PIPELINE_COLLECTION_FUNCTION_NAME` - Default collection function name
- `DEFAULT_MAX_REQUESTS_PER_MINUTE` - Rate limiting configuration
- `API_REQUEST_TIMEOUT` - API request timeout
- `PROMPTBOOK_LOGO_URL` - Official logo URL
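The constants are plain named exports, so they can be read directly, for example to surface the defaults in your own configuration or logs:

```ts
// Reading a few of the exported defaults; purely illustrative.
import {
    DEFAULT_MAX_PARALLEL_COUNT,
    DEFAULT_MAX_EXECUTION_ATTEMPTS,
    DEFAULT_REMOTE_SERVER_URL,
} from '@promptbook/core';

console.info('Default parallel tasks:', DEFAULT_MAX_PARALLEL_COUNT);
console.info('Default execution attempts:', DEFAULT_MAX_EXECUTION_ATTEMPTS);
console.info('Default remote server:', DEFAULT_REMOTE_SERVER_URL);
```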
Model and Provider Constants
- `MODEL_TRUST_LEVELS` - Trust levels for different models
- `MODEL_ORDERS` - Ordering preferences for models
- `ORDER_OF_PIPELINE_JSON` - JSON property ordering
- `RESERVED_PARAMETER_NAMES` - Reserved parameter names
Pipeline Processing
- `compilePipeline` - Compile pipeline from source
- `parsePipeline` - Parse pipeline definition
- `pipelineJsonToString` - Convert pipeline JSON to string
- `prettifyPipelineString` - Format pipeline string
- `extractParameterNamesFromTask` - Extract parameter names
- `validatePipeline` - Validate pipeline structure
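A sketch of a parse, validate, and re-serialize round trip using these helpers. Whether the individual calls are synchronous or asynchronous is an assumption here, so they are simply awaited, and the branded string types are omitted:

```ts
// Sketch: parse -> validate -> serialize round trip; sync/async behaviour is assumed.
import {
    parsePipeline,
    validatePipeline,
    pipelineJsonToString,
    prettifyPipelineString,
} from '@promptbook/core';

async function normalizeBook(source: string): Promise<string> {
    const pipelineJson = await parsePipeline(source as any); // parse the .book source
    validatePipeline(pipelineJson);                          // throws on invalid structure
    const regenerated = pipelineJsonToString(pipelineJson);  // back to book notation
    return await prettifyPipelineString(regenerated as any); // normalize formatting
}
```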
Dialog and Interface Tools
- `CallbackInterfaceTools` - Callback-based interface tools
- `CallbackInterfaceToolsOptions` - Options for callback tools (type)
Error Handling
- `BoilerplateError` - Base error class
- `PROMPTBOOK_ERRORS` - All error types registry
- `AbstractFormatError` - Abstract format validation error
- `AuthenticationError` - Authentication failure error
- `CollectionError` - Collection-related error
- `EnvironmentMismatchError` - Environment compatibility error
- `ExpectError` - Expectation validation error
- `KnowledgeScrapeError` - Knowledge scraping error
- `LimitReachedError` - Resource limit error
- `MissingToolsError` - Missing tools error
- `NotFoundError` - Resource not found error
- `NotYetImplementedError` - Feature not implemented error
- `ParseError` - Parsing error
- `PipelineExecutionError` - Pipeline execution error
- `PipelineLogicError` - Pipeline logic error
- `PipelineUrlError` - Pipeline URL error
- `PromptbookFetchError` - Fetch operation error
- `UnexpectedError` - Unexpected error
- `WrappedError` - Wrapped error container
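Because each failure mode has its own error class, callers can branch on instanceof. A small sketch, with the surrounding work function left to you:

```ts
// Sketch: distinguishing Promptbook error classes around a compile/execute call.
import { ParseError, PipelineExecutionError, UnexpectedError } from '@promptbook/core';

async function runSafely(work: () => Promise<void>) {
    try {
        await work(); // e.g. compile and execute a pipeline
    } catch (error) {
        if (error instanceof ParseError) {
            console.error('The book source could not be parsed:', error.message);
        } else if (error instanceof PipelineExecutionError) {
            console.error('Execution failed:', error.message);
        } else if (error instanceof UnexpectedError) {
            console.error('Unexpected internal state:', error.message);
        } else {
            throw error; // not a Promptbook error, rethrow it
        }
    }
}
```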
Execution Engine
- `createPipelineExecutor` - Create pipeline executor
- `computeCosineSimilarity` - Compute cosine similarity for embeddings
- `embeddingVectorToString` - Convert embedding vector to string
- `executionReportJsonToString` - Convert execution report to string
- `ExecutionReportStringOptions` - Report formatting options (type)
- `ExecutionReportStringOptionsDefaults` - Default report options
Usage and Metrics
- `addUsage` - Add usage metrics
- `isPassingExpectations` - Check if expectations are met
- `ZERO_VALUE` - Zero usage value constant
- `UNCERTAIN_ZERO_VALUE` - Uncertain zero value constant
- `ZERO_USAGE` - Zero usage object
- `UNCERTAIN_USAGE` - Uncertain usage object
- `usageToHuman` - Convert usage to human-readable format
- `usageToWorktime` - Convert usage to work time estimate
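A sketch of aggregating usage records from several tasks and printing them in a human-readable way; the variadic signature of addUsage is an assumption here:

```ts
// Sketch: summing usage records and reporting them; addUsage arity is assumed.
import { addUsage, ZERO_USAGE, usageToHuman, usageToWorktime } from '@promptbook/core';

const usageOfTaskA = ZERO_USAGE; // in practice, taken from an execution result
const usageOfTaskB = ZERO_USAGE;

const total = addUsage(ZERO_USAGE, usageOfTaskA, usageOfTaskB);
console.info(usageToHuman(total));    // e.g. cost and token summary
console.info(usageToWorktime(total)); // rough estimate of equivalent human work time
```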
Format Parsers
- `CsvFormatError` - CSV format error
- `CsvFormatParser` - CSV format parser
- `MANDATORY_CSV_SETTINGS` - Required CSV settings
- `TextFormatParser` - Text format parser
Form Factor Definitions
- `BoilerplateFormfactorDefinition` - Boilerplate form factor
- `ChatbotFormfactorDefinition` - Chatbot form factor
- `CompletionFormfactorDefinition` - Completion form factor
- `GeneratorFormfactorDefinition` - Generator form factor
- `GenericFormfactorDefinition` - Generic form factor
- `ImageGeneratorFormfactorDefinition` - Image generator form factor
- `FORMFACTOR_DEFINITIONS` - All form factor definitions
- `MatcherFormfactorDefinition` - Matcher form factor
- `SheetsFormfactorDefinition` - Sheets form factor
- `TranslatorFormfactorDefinition` - Translator form factor
LLM Provider Integration
- `filterModels` - Filter available models
- `$llmToolsMetadataRegister` - LLM tools metadata registry
- `$llmToolsRegister` - LLM tools registry
- `createLlmToolsFromConfiguration` - Create tools from config
- `cacheLlmTools` - Cache LLM tools
- `countUsage` - Count total usage
- `limitTotalUsage` - Limit total usage
- `joinLlmExecutionTools` - Join multiple LLM tools
- `MultipleLlmExecutionTools` - Multiple LLM tools container
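A sketch of composing several providers into one set of tools and wrapping them with caching and a usage limit. The provider classes come from other @promptbook/* packages, and the constructor options, wrapper option shapes, and environment variable names are assumptions:

```ts
// Sketch: composing LLM tools; constructor and option shapes are assumed.
import { joinLlmExecutionTools, cacheLlmTools, limitTotalUsage, MemoryStorage } from '@promptbook/core';
import { OpenAiExecutionTools } from '@promptbook/openai';                    // provider package
import { AnthropicClaudeExecutionTools } from '@promptbook/anthropic-claude'; // provider package

const llm = joinLlmExecutionTools(
    new OpenAiExecutionTools({ apiKey: process.env.OPENAI_API_KEY! }),
    new AnthropicClaudeExecutionTools({ apiKey: process.env.ANTHROPIC_CLAUDE_API_KEY! }),
);

// Cache responses in memory and cap the total spend (option shapes assumed)
const cachedLlm = cacheLlmTools(llm, { storage: new MemoryStorage() });
const limitedLlm = limitTotalUsage(cachedLlm);
```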
Provider Registrations
- `_AnthropicClaudeMetadataRegistration` - Anthropic Claude registration
- `_AzureOpenAiMetadataRegistration` - Azure OpenAI registration
- `_DeepseekMetadataRegistration` - Deepseek registration
- `_GoogleMetadataRegistration` - Google registration
- `_OllamaMetadataRegistration` - Ollama registration
- `_OpenAiMetadataRegistration` - OpenAI registration
- `_OpenAiAssistantMetadataRegistration` - OpenAI Assistant registration
- `_OpenAiCompatibleMetadataRegistration` - OpenAI Compatible registration
Pipeline Management
- `migratePipeline` - Migrate pipeline to newer version
- `preparePersona` - Prepare persona for execution
- `book` - Book notation utilities
- `isValidPipelineString` - Validate pipeline string
- `GENERIC_PIPELINE_INTERFACE` - Generic pipeline interface
- `getPipelineInterface` - Get pipeline interface
- `isPipelineImplementingInterface` - Check interface implementation
- `isPipelineInterfacesEqual` - Compare pipeline interfaces
- `EXPECTATION_UNITS` - Units for expectations
- `validatePipelineString` - Validate pipeline string format
Pipeline Preparation
- `isPipelinePrepared` - Check if pipeline is prepared
- `preparePipeline` - Prepare pipeline for execution
- `unpreparePipeline` - Unprepare pipeline
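A sketch of using the preparation helpers to prepare a pipeline ahead of time instead of lazily at execution. The extra arguments passed to preparePipeline are assumptions, not the documented signature:

```ts
// Sketch: prepare a pipeline up front; preparePipeline's extra arguments are assumed.
import { isPipelinePrepared, preparePipeline, unpreparePipeline } from '@promptbook/core';

async function ensurePrepared(pipeline: any, tools: any) {
    if (isPipelinePrepared(pipeline)) {
        return pipeline; // knowledge, personas, etc. already resolved
    }
    return await preparePipeline(pipeline, undefined, { llmTools: tools });
}

// unpreparePipeline(pipeline) strips the prepared artifacts again, e.g. before editing the source.
```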
Remote Server Integration
- `identificationToPromptbookToken` - Convert ID to token
- `promptbookTokenToIdentification` - Convert token to ID
Knowledge Scraping
- `_BoilerplateScraperMetadataRegistration` - Boilerplate scraper registration
- `prepareKnowledgePieces` - Prepare knowledge pieces
- `$scrapersMetadataRegister` - Scrapers metadata registry
- `$scrapersRegister` - Scrapers registry
- `makeKnowledgeSourceHandler` - Create knowledge source handler
- `promptbookFetch` - Fetch with promptbook context
- `_LegacyDocumentScraperMetadataRegistration` - Legacy document scraper registration
- `_DocumentScraperMetadataRegistration` - Document scraper registration
- `_MarkdownScraperMetadataRegistration` - Markdown scraper registration
- `_MarkitdownScraperMetadataRegistration` - Markitdown scraper registration
- `_PdfScraperMetadataRegistration` - PDF scraper registration
- `_WebsiteScraperMetadataRegistration` - Website scraper registration
Storage Backends
- `BlackholeStorage` - Blackhole storage (discards data)
- `MemoryStorage` - In-memory storage
- `PrefixStorage` - Prefixed storage wrapper
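The storage backends are interchangeable behind one small async key-value interface. In this sketch the constructor arguments of PrefixStorage and the method names (setItem/getItem) are assumptions based on that interface:

```ts
// Sketch: composing storage backends; method names and constructor args are assumed.
import { MemoryStorage, PrefixStorage, BlackholeStorage } from '@promptbook/core';

async function demoStorage() {
    const base = new MemoryStorage();                          // keeps items in memory
    const scoped = new PrefixStorage(base, 'execution-cache'); // namespaces keys under a prefix
    const devNull = new BlackholeStorage();                    // accepts writes, never returns anything

    await scoped.setItem('last-run', { status: 'ok' });
    console.info(await scoped.getItem('last-run')); // { status: 'ok' }
    await devNull.setItem('anything', { ignored: true });
}

demoStorage().catch(console.error);
```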
Type Definitions
- `MODEL_VARIANTS` - Available model variants
- `NonTaskSectionTypes` - Non-task section types
- `SectionTypes` - All section types
- `TaskTypes` - Task types
Server Configuration
- `REMOTE_SERVER_URLS` - Remote server URLs
This package does not make sense on its own; look at all Promptbook packages, or simply install everything with:

```bash
npm i ptbk
```

The rest of the documentation is common for the entire Promptbook ecosystem:
The Book Abstract
It's time for a paradigm shift! The future of software is written in plain English, French, or Latin.
During the computer revolution, we have seen multiple generations of computer languages, from the physical rewiring of the vacuum tubes through low-level machine code to the high-level languages like Python or JavaScript. And now, we're on the edge of the next revolution!
It's a revolution of writing software in plain human language that is understandable and executable by both humans and machines, and it's going to change everything!
The incredible growth in the power of microprocessors, described by Moore's Law, has been the driving force behind ever more powerful languages, and it's been an amazing journey! Similarly, large language models (like GPT or Claude) are the next big thing in language technology, and they're set to transform the way we interact with computers.
This shift will happen whether we're ready or not. Our mission is to make it excellent, not just good.
Join us in this journey!
Get started
Take a look at the simple starter kit with books integrated into the Hello World sample applications:
The Promptbook Project
The Promptbook project is an ecosystem of multiple projects and tools; the following is a list of the most important pieces of the project:
| Project | About |
|---|---|
| Book language | Book is a human-understandable markup language for writing AI applications such as chatbots, knowledge bases, agents, avatars, translators, automations and more. There is also a plugin for VS Code to support the .book file extension. |
| Promptbook Engine | The Promptbook engine can run applications written in the Book language. It is released as multiple NPM packages and a Docker Hub image. |
| Promptbook Studio | Promptbook.studio is a web-based editor and runner for book applications. It is still in the experimental MVP stage. |
Hello world examples:
Community & Social Media
Join our growing community of developers and users:

| Platform | Description |
|---|---|
| Discord | Join our active developer community for discussions and support |
| GitHub Discussions | Technical discussions, feature requests, and community Q&A |
| LinkedIn | Professional updates and industry insights |
| Facebook | General announcements and community engagement |
| ptbk.io | Official landing page with project information |
Product & Brand Channels
Promptbook.studio
- Instagram @promptbook.studio - Visual updates, UI showcases, and design inspiration
Book Language Blueprint
A concise, Markdown-based DSL for crafting AI workflows and automations.
Introduction
Book is a Markdown-based language that simplifies the creation of AI applications, workflows, and automations. With human-readable commands, you can define inputs, outputs, personas, knowledge sources, and actions, without needing model-specific details.
Example
```
# My First Book

- BOOK VERSION 1.0.0
- URL https://promptbook.studio/hello.book
- INPUT PARAMETER {topic}
- OUTPUT PARAMETER {article}

# Write an Article

- PERSONA Jane, marketing specialist with prior experience in tech and AI writing
- KNOWLEDGE https://wikipedia.org/
- KNOWLEDGE ./journalist-ethics.pdf
- EXPECT MIN 1 Sentence
- EXPECT MAX 5 Pages

> Write an article about {topic}

-> {article}
```

Each part of the book defines one of three circles:
1. What: Workflows, Tasks and Parameters
What work needs to be done. Each book defines a workflow (also called a scenario or pipeline), which consists of one or more tasks. Each workflow has fixed inputs and outputs. For example, you may have a book that generates an article from a topic: one run produces an article about AI, another about marketing, another about cooking. The workflow (your AI program) stays the same; only the input and output change.
Related commands:
2. Who: Personas
Who does the work. Each task is performed by a persona. A persona is a description of your virtual employee. It is a higher abstraction than the model, tokens, temperature, top-k, top-p, and other model parameters.
You can describe what you want in human language, like Jane, a creative writer with a sense of sharp humour, instead of gpt-4-2024-13-31, temperature 1.2, top-k 40, STOP token ".\n", ....
Personas can have access to different knowledge, tools, and actions. They can also consult their work with other personas or with the user, if allowed.
Related commands:
3. How: Knowledge, Instruments and Actions
The resources the personas use to do the work.
Related commands:
- KNOWLEDGE of documents, websites, and other resources
- INSTRUMENT for real-time data like time, location, weather, stock prices, searching the internet, calculations, etc.
- ACTION for actions like sending emails, creating files, ending a workflow, etc.
General Principles
The Book language is based on Markdown; in fact, it is a subset of Markdown. It is designed to be easy to read and write, and to be understandable by both humans and machines without any specific knowledge of the language.
The file has a .book extension and uses UTF-8 encoding without BOM.
Books come in two variants: flat (just a prompt without further structure) and full (with tasks, commands, and prompts).
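In code, the same two variants can be embedded with the book notation helper exported from @promptbook/core. This sketch assumes book is usable as a template-literal tag, and the full variant simply mirrors the example shown earlier:

```ts
// Sketch: embedding flat and full books in code; assumes `book` works as a template tag.
import { book } from '@promptbook/core';

// Flat variant: just a prompt, no sections or commands
const flatBook = book`Write a limerick about {topic}`;

// Full variant: commands and a prompt, mirroring the example above
const fullBook = book`
    # Write an Article

    - INPUT PARAMETER {topic}
    - OUTPUT PARAMETER {article}
    - PERSONA Jane, marketing specialist with prior experience in tech and AI writing

    > Write an article about {topic}

    -> {article}
`;
```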
As it is source code, it can leverage all the features of version control systems like git and does not suffer from the problems of binary formats, proprietary formats, or no-code solutions.
But unlike programming languages, it is designed to be understandable by non-programmers and non-technical people.
Documentation
See detailed guides and API reference in the docs or online.
Security
For information on reporting security vulnerabilities, see our Security Policy.
Packages (for developers)
This library is divided into several packages, all published from a single monorepo. You can install all of them at once:

```bash
npm i ptbk
```

Or you can install them separately:

⭐ Marked packages are worth trying first
- ⭐ ptbk - Bundle of all packages, when you want to install everything and you don't care about the size
- promptbook - Same as ptbk
- ⭐ @promptbook/wizard - Wizard to just run the books in Node.js without any struggle
- @promptbook/core - Core of the library, it contains the main logic for promptbooks
- @promptbook/node - Core of the library for the Node.js environment
- @promptbook/browser - Core of the library for the browser environment
- ⭐ @promptbook/utils - Utility functions used in the library but also useful for individual use in preprocessing and postprocessing LLM inputs and outputs
- @promptbook/markdown-utils - Utility functions used for processing markdown
- (Not finished) @promptbook/wizard - Wizard for creating and running promptbooks in a single line
- @promptbook/javascript - Execution tools for JavaScript inside promptbooks
- @promptbook/openai - Execution tools for the OpenAI API, a wrapper around the OpenAI SDK
- @promptbook/anthropic-claude - Execution tools for the Anthropic Claude API, a wrapper around the Anthropic Claude SDK
- @promptbook/vercel - Adapter for Vercel functionalities
- @promptbook/google - Integration with Google's Gemini API
- @promptbook/deepseek - Integration with the DeepSeek API
- @promptbook/ollama - Integration with the Ollama API
- @promptbook/azure-openai - Execution tools for the Azure OpenAI API
- @promptbook/fake-llm - Mocked execution tools for testing the library and saving tokens
- @promptbook/remote-client - Remote client for remote execution of promptbooks
- @promptbook/remote-server - Remote server for remote execution of promptbooks
- @promptbook/pdf - Read knowledge from .pdf documents
- @promptbook/markitdown - Integration of Markitdown by Microsoft
- @promptbook/documents - Read knowledge from documents like .docx, .odt, …
- @promptbook/legacy-documents - Read knowledge from legacy documents like .doc, .rtf, …
- @promptbook/website-crawler - Crawl knowledge from the web
- @promptbook/editable - Editable book as a native JavaScript object with an imperative object API
- @promptbook/templates - Useful templates and examples of books which can be used as a starting point
- @promptbook/types - Just the TypeScript types used in the library
- @promptbook/color - Color manipulation library
- ⭐ @promptbook/cli - Command line interface utilities for promptbooks
- Docker image - Promptbook server
Dictionary
The following glossary is used to clarify certain concepts:
General LLM / AI terms
- Prompt drift is a phenomenon where the AI model starts to generate outputs that are not aligned with the original prompt. This can happen due to the model's training data, the prompt's wording, or the model's architecture.
- Pipeline, workflow, scenario, or chain is a sequence of tasks that are executed in a specific order. In the context of AI, a pipeline can refer to a sequence of AI models that are used to process data.
- Fine-tuning is a process where a pre-trained AI model is further trained on a specific dataset to improve its performance on a specific task.
- Zero-shot learning is a machine learning paradigm where a model is trained to perform a task without any labeled examples. Instead, the model is provided with a description of the task and is expected to generate the correct output.
- Few-shot learning is a machine learning paradigm where a model is trained to perform a task with only a few labeled examples. This is in contrast to traditional machine learning, where models are trained on large datasets.
- Meta-learning is a machine learning paradigm where a model is trained on a variety of tasks and is able to learn new tasks with minimal additional training. This is achieved by learning a set of meta-parameters that can be quickly adapted to new tasks.
- Retrieval-augmented generation is a machine learning paradigm where a model generates text by retrieving relevant information from a large database of text. This approach combines the benefits of generative models and retrieval models.
- Longtail refers to non-common or rare events, items, or entities that are not well-represented in the training data of machine learning models. Longtail items are often challenging for models to predict accurately.
Note: This section is not a complete dictionary, but rather a list of general AI / LLM terms that have a connection with Promptbook.
Core concepts
- Collection of pipelines
- Pipeline
- Tasks and pipeline sections
- Personas
- Parameters
- Pipeline execution
- Expectations - Define what outputs should look like and how they're validated
- Postprocessing - How outputs are refined after generation
- Words not tokens - The human-friendly way to think about text generation
- Separation of concerns - How the Book language organizes different aspects of AI workflows
Advanced concepts
Advanced concepts are grouped into four areas: Data & Knowledge Management, Pipeline Control, Language & Output Control, and Advanced Generation.
Promptbook Engine
When to use Promptbook?
When to use
- When you are writing an app that generates complex things via an LLM, like websites, articles, presentations, code, stories, songs, ...
- When you want to separate code from text prompts
- When you want to describe complex prompt pipelines and don't want to do it in the code
- When you want to orchestrate multiple prompts together
- When you want to reuse parts of prompts in multiple places
- When you want to version your prompts and test multiple versions
- When you want to log the execution of prompts and backtrace the issues
When not to use
- When you have already implemented a single simple prompt and it works fine for your job
- When OpenAI Assistant (GPTs) is enough for you
- When you need streaming (this may be implemented in the future, see discussion).
- When you need to use something other than JavaScript or TypeScript (other languages are on the way, see the discussion)
- When your main focus is on something other than text - like images, audio, video, spreadsheets (other media types may be added in the future, see discussion)
- When you need to use recursion (see the discussion)
Known issues
Intentionally not implemented features
FAQ
If you have a question, start a discussion, open an issue, or write me an email.
- Why not just use the OpenAI SDK / Anthropic Claude SDK / ...?
- [How is it different from OpenAI's GPTs?](https://github.com/webgptorg/promptbook/discussions/118)
- How is it different from Langchain?
- How is it different from DSPy?
- How is it different from anything?
- Is Promptbook using RAG (Retrieval-Augmented Generation)?
- Is Promptbook using function calling?
Changelog
See CHANGELOG.md
License
This project is licensed under BUSL 1.1.
Contributing
We welcome contributions! See CONTRIBUTING.md for guidelines.
You can also star the project and follow us on GitHub or other social networks. We are open to pull requests, feedback, and suggestions.
Support & Community
Need help with the Book language? We're here for you!
- Join our Discord community for real-time support
- Browse our GitHub discussions for FAQs and community knowledge
- Report issues for bugs or feature requests
- Visit ptbk.io for more resources and documentation
- Contact us directly through the channels listed in our signpost
We welcome contributions and feedback to make the Book language better for everyone!