JSPM

  • Downloads 348
  • License MIT

PYB-CLI - Minimal AI Agent with multi-model support and CLI interface

Package Exports

  • pyb-ts
  • pyb-ts/dist/index.js

This package does not declare an exports field, so the exports above have been automatically detected and optimized by JSPM instead. If any package subpath is missing, it is recommended to file an issue with the original package (pyb-ts) asking for "exports" field support. If that is not possible, create a JSPM override to customize the exports field for this package.
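For reference, a JSPM override is a partial package.json that is merged over the published one. The snippet below is only a sketch of what such an override could look like for this package, assuming ./dist/index.js is the intended entry point; the real override may need additional subpaths:

  {
    "exports": {
      ".": "./dist/index.js"
    }
  }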

Readme

PYB-CLI - Minimal AI Agent

A minimal, philosophy-based AI agent implementation that delivers complex problem-solving capabilities with the least amount of code.


Project Overview

PYB-CLI is an AI agent implementation that follows the minimalist agent philosophy, embodying the core concept of "letting LLMs do what they do best." It achieves core functionality with minimal code, fully leveraging LLMs as general problem-solving engines.

Core Philosophy

  1. "Let LLMs Do What They Do Best" The core of the minimalist agent philosophy is to fully leverage LLMs as general problem-solving engines, reducing artificial complex architectures that limit LLM capabilities - everything is a scaffold for LLMs.

Traditional frameworks often introduce elaborate state management, fixed execution flows, and architectural layers, but this artificial complexity can actually limit the LLM's intelligent decision-making. The minimalist philosophy instead lets the LLM itself be the "brain" of problem-solving, rather than relying on a complex framework.

  1. "Minimum Necessary Complexity" Principle Only keep elements essential for problem-solving:

  • Essential elements: messages (conversation history), tools (capability extensions), LLM (decision engine)
  • Non-essential elements: complex state management, fixed execution flows, complex persistence mechanisms
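
In code, those three essential elements need very little structure. The following TypeScript sketch is illustrative only; the type and field names are assumptions, not the actual pyb-ts definitions:

  // Illustrative types for the three essential elements; not the real pyb-ts source.
  type Message = {
    role: "system" | "user" | "assistant" | "tool";   // one entry in the conversation history
    content: string;
  };

  type Tool = {
    name: string;                                      // e.g. "bash", "read_file"
    description: string;
    run: (args: Record<string, string>) => Promise<string>;
  };

  // The LLM is just a function from the full history (plus available tools) to the next reply.
  type LLM = (messages: Message[], tools: Tool[]) => Promise<Message>;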

  1. "Trust LLM Intelligence" Principle Believe that LLMs can handle complex information integration and decision-making:

  • No need to artificially segment state into fields
  • LLMs can extract the required information from the complete history
  • Let LLMs decide how to organize and use information

  4. "Transparency is Reliability" Principle

All decision processes are visible to humans:

  • Conversation history can always be checked to understand decision-making
  • Facilitates human supervision and intervention
  • Easy to locate issues when problems occur

Design Philosophy

  • Minimum Necessary Complexity - Only keep elements essential for problem-solving
  • Trust LLM Intelligence - Fully leverage the LLM's intelligent decision-making capabilities
  • Transparency is Reliability - All decision processes are visible to humans

Core Features

  • Minimal Implementation - Complete agent system with minimal files
  • Dynamic Planning - Implicit state management based on message history
  • Tool Extensions - Core capabilities for file operations, shell execution, etc.
  • Feedback Loop - Intelligent cycle of perception → analysis → decision → execution → observation → replanning (see the sketch after this list)
  • Easy to Understand - All state is visible in the message history for debugging and understanding
  • Multi-Model Support - Supports multiple mainstream AI models, flexible for different scenarios
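
As a rough illustration of the feedback loop, the sketch below wires the three elements into a single loop, with the message history as the only state. It reuses the hypothetical Message/Tool/LLM types from the philosophy section and is not the actual pyb-ts implementation:

  // Illustrative feedback loop; names and shapes are assumptions, not the pyb-ts API.

  // Hypothetical helper: extract a tool request from the model's reply, if there is one.
  declare function parseToolCall(
    content: string
  ): { name: string; args: Record<string, string> } | null;

  async function agentLoop(task: string, llm: LLM, tools: Tool[]): Promise<string> {
    const messages: Message[] = [{ role: "user", content: task }];   // all state lives here
    while (true) {
      const reply = await llm(messages, tools);                      // analysis + decision
      messages.push(reply);
      const call = parseToolCall(reply.content);
      if (!call) return reply.content;                               // no tool requested: done
      const tool = tools.find(t => t.name === call.name);
      const observation = tool
        ? await tool.run(call.args)                                  // execution
        : `Unknown tool: ${call.name}`;
      messages.push({ role: "tool", content: observation });         // observation drives replanning
    }
  }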

Supported Models

PYB-CLI supports multiple mainstream AI models that users can choose based on their needs:

Kimi Series:

  • Kimi K2 Turbo (kimi-k2-turbo-preview) - Suitable for Chinese-language scenarios and domain-specific tasks
  • Kimi K2 0711 Preview (kimi-k2-0711-preview) - Kimi model with vision capabilities

Claude Series:

  • Claude 3.5 Sonnet (claude-3-5-sonnet-latest) - Suitable for complex reasoning and general tasks
  • Claude 3.5 Haiku (claude-3-5-haiku-latest) - Lightweight and efficient model

GPT Series:

  • GPT-4o (gpt-4o) - Balanced choice for cost-effectiveness and general tasks
  • GPT-4o Mini (gpt-4o-mini) - Lightweight and efficient model
  • O1 (o1) - Advanced model with reasoning capabilities

DeepSeek Series:

  • DeepSeek Chat (deepseek-chat) - Suitable for coding tasks and cost-sensitive scenarios
  • DeepSeek Coder (deepseek-coder) - Optimized specifically for code tasks

Qwen Series:

  • Qwen Max (qwen-max) - Alibaba's high-performance Qwen model
  • Qwen Plus (qwen-plus) - Alibaba's balanced Qwen model

GLM Series:

  • GLM-4 (glm-4) - Zhipu AI's high-performance model

MiniMax Series:

  • MiniMax Abab6.5s Chat (abab6.5s-chat) - MiniMax high-performance model

Llama Series (Ollama):

  • Llama 3 (llama3) - Suitable for local deployment and privacy protection scenarios
  • Llama 3.1 (llama3.1) - Updated version of the Llama model

Model Selection Guide

  • Complex Reasoning Tasks: Recommend Claude 3.5 Sonnet or O1
  • Cost-Sensitive Scenarios: Recommend DeepSeek Chat, DeepSeek Coder, or Llama 3
  • Chinese-Specific Tasks: Recommend Kimi K2 Turbo or Qwen Max
  • Local Deployment Needs: Recommend Llama 3 or Llama 3.1 (Ollama)
  • General Balanced Choice: Recommend GPT-4o or GLM-4
  • High-Performance Needs: Recommend Kimi K2 0711 Preview or Qwen Max
  • Lightweight Tasks: Recommend Claude 3.5 Haiku or GPT-4o Mini

Supported Tools

  • bash - Execute shell commands
  • read_file - Read file contents
  • write_file - Write file contents
  • edit_text - Edit file contents
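
To make the tool list concrete, here is what a tool like read_file could look like under the illustrative Tool shape introduced in the philosophy section; this is a sketch, and the real pyb-ts tool interface and parameter names may differ:

  import { readFile } from "node:fs/promises";

  // Illustrative read_file tool; not the actual pyb-ts implementation.
  const readFileTool: Tool = {
    name: "read_file",
    description: "Read file contents",
    run: async (args) => readFile(args.path, "utf8"),  // "path" is an assumed parameter name
  };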

Installation and Usage

As CLI Tool

Run directly (no installation required)

npx pyb-ts

Or install globally and use

npm install -g pyb-ts
pyb

Security Notice

Important: This tool requires you to configure your own API keys. Never share your API keys with others, and ensure they are kept secure.

First-Time Setup

After installation, you need to configure at least one AI model:

Interactive configuration (recommended)

pyb add

Or configure a specific model

pyb add gpt-4o --api-key YOUR_API_KEY --provider openai

Set as default model

pyb set-default gpt-4o

Set model pointers for different scenarios

pyb set-pointer main gpt-4o
pyb set-pointer task deepseek-chat
pyb set-pointer reasoning o1

Make sure to replace YOUR_API_KEY with your actual API key from the AI provider.

Getting Started

Set up your API key as an environment variable:

export ANTHROPIC_API_KEY=your-anthropic-key

or

export OPENAI_API_KEY=your-openai-key

or

export KIMI_API_KEY=your-kimi-key

Run the CLI:

pyb

Use configuration commands to manage models:

pyb add                                    # Interactive mode
pyb add gpt-4o --api-key sk-xxx --provider openai
pyb list                                   # List configured models
pyb set-default claude-3-5-sonnet-latest
pyb set-pointer main gpt-4o
pyb set-pointer task deepseek-chat
pyb switch                                 # Switch between configured models

Note: Providers that currently support reading API keys directly from environment variables include Anthropic, OpenAI, Kimi, DeepSeek, and Ollama. Other providers (such as Qwen, GLM, and MiniMax) need to be configured through configuration files or the interactive configuration wizard.
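
For context on the note above, key lookup for those providers can be thought of as a configured value with an environment-variable fallback. The sketch below is illustrative only; the actual pyb-ts lookup logic and the DeepSeek variable name are assumptions (the Anthropic, OpenAI, and Kimi names come from the Getting Started section above):

  // Illustrative env-var fallback for API keys; not the actual pyb-ts lookup logic.
  const ENV_KEYS: Record<string, string> = {
    anthropic: "ANTHROPIC_API_KEY",
    openai: "OPENAI_API_KEY",
    kimi: "KIMI_API_KEY",
    deepseek: "DEEPSEEK_API_KEY",   // assumed name; not shown in this readme
  };

  function resolveApiKey(provider: string, configuredKey?: string): string | undefined {
    // Prefer a key from the config file or wizard, then fall back to the environment.
    return configuredKey ?? process.env[ENV_KEYS[provider] ?? ""];
  }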

License

MIT

Keywords: ai, agent, minimal, claude