๐ŸŒ BananaCode

Package Exports

    This package does not declare an exports field, so the exports above have been automatically detected and optimized by JSPM instead. If any package subpath is missing, it is recommended to post an issue to the original package (@banaxi/banana-code) to support the "exports" field. If that is not possible, create a JSPM override to customize the exports field for this package.

    Readme

    ๐ŸŒ Banana Code

    Create any app you want with AI

    Banana Code is a high-performance, terminal-based AI pair programmer. It combines the power of multiple state-of-the-art LLMs with a rich, interactive TUI and a robust tool-calling system to help you write, debug, and explore code without leaving your terminal.

                                                           #%%S#
                                                          ?;+*??%
                                                        #*;;;+%?#
                                                      #?;:;+?S
                                                #S%%?+;::;%
                                             #?+::,,::;;*#
                                           #*;::::,,,::*
                                          %;;;::::,,,::#
                                         ?;;;;::::::::*
                                        ?;;;;;::::::::#
                                       S;;;;;::::::::?
                                       *;;;;;:::::::;
                                      S;+;;;;:::::::?
                                      ?;;;;;::::::::#
                                      *;;;;;:::::::;
                                      +;;;;;:::::::;
                                      *;;;;;:::::::;
                                      ?;;;;;;:::::::#
                                      S;+;;;;::::::,?
                                       +;;;;;;::::::;
                                       ?;+;;;;::::::,*
                                        +;;;;;;:::::::%
                                        S;;;;;;::::::::%
                                         %;;;;;;::::::::%
                                          %;;;;;;;::::::,*
                                           %;;;;;;;::::::,;S
                                            #+;;;;;;::::::::+#
                                              ?;;;;;;;::::::,:*#
                                               S*;;;;;;;:::::::;%
                   #                             %;;;;;;;;:::::::*
                                                  #%*;;;;;;;::::::+#
                             #S#      ##             S*+;;;;::::;;;+
                                                       #S%*+;;;;;;*S
                                                 #S#SSSSSS%%%?%%%%S

    🤔 Why Banana Code?

    While tools like Cursor provide great GUI experiences, Banana Code is built for developers who live in the terminal and want maximum flexibility.

    • No Vendor Lock-in: Switch instantly between the best proprietary models (Gemini, Claude, OpenAI) and high-performance open-source models (Ollama Local, Ollama Cloud) mid-conversation.
    • True Autonomy: With Plan & Execute mode and Self-Healing Error Loops, Banana Code doesn't just suggest code; it tries, fails, reads the errors, and fixes its own mistakes automatically.
    • Terminal Native: It brings the power of full workspace awareness, web search, and surgical file patching directly to your CLI without forcing you to change your IDE.

    ✨ Key Features

    • Multi-Provider Support: Switch between Google Gemini, Anthropic Claude, OpenAI (API key or ChatGPT / Codex OAuth), Mistral AI, OpenRouter (any model ID; see OpenRouter setup), Ollama Cloud, and Ollama (Local) effortlessly.
    • Auto Mode: For most providers, pick Auto Mode as your model: a small "router" model reads your prompt and chooses which model to use for that turn (with a short reason). Local Ollama and OpenRouter do not offer Auto Mode.
    • Model Context Protocol (MCP): Connect Banana Code to any community-built MCP server (like SQLite, GitHub, Google Maps) to give your AI infinite new superpowers via /beta.
    • Plan & Agent Modes: Use /agent for normal execution, /plan for Plan mode, /ask for Ask mode (read-only Q&A), or /security for Security mode (vulnerability-focused reviews).
    • Hierarchical Sub-Agents: The main AI can spawn specialized "sub-agents" (Researchers, Coders, Reviewers) to handle complex tasks without polluting your main chat history.
    • Self-Healing Loop: If the AI runs a command (like running tests) and it fails, Banana Code automatically feeds the error trace back to the AI so it can fix its own code.
    • Agent Skills: Teach your AI specialized workflows. Drop a SKILL.md file in your config folder, and the AI will automatically activate it when relevant.
    • Smart Context & Pruning: Use @file/path.js to instantly inject file contents, auto-feed your workspace, and use /clean to instantly compress long chat histories to save tokens.
    • Web Research: Deep integration with DuckDuckGo APIs and Scrapers to give the AI real-time access to the internet.
    • Persistent Sessions: All chats are auto-titled and saved. Use /chats for a fully interactive menu to resume any past session.
    • Syntax Highlighting: Beautiful, readable markdown output with syntax coloring directly in your terminal.

    🚀 Installation

    Install Banana Code globally via npm using the scoped package name:

    npm install -g @banaxi/banana-code

    โš ๏ธ Important Notice: Please ensure you install @banaxi/banana-code. The unscoped banana-code package on npm is NOT affiliated with this project.

    🛠️ Setup

    On your first run, Banana Code will walk you through a quick setup to configure your preferred AI providers:

    banana

    You'll need your API keys handy for Gemini, Claude, OpenAI (unless you use ChatGPT sign-in), Mistral, Ollama Cloud, or OpenRouter. For OpenRouter, you enter an API key and a custom model ID; Banana Code checks OpenRouter's model list to confirm the model supports tool calling before continuing.

    📖 Usage

    Start a New Session

    banana

    Optional flags:

    Flag             Effect
    --yolo           Start with YOLO mode on (same as /yolo in-app: auto-approve permission prompts).
    --resume [uuid]  Resume a session; the UUID is optional (latest session if omitted).

    Resume a Session

    To continue where you left off, use the --resume flag with your session UUID:

    banana --resume <uuid>

    Omit <uuid> to resume the most recently updated saved session.

    In-App Commands

    While in a chat, use these special commands (type /help for the full list):

    Command       What it does
    /provider     Switch provider: gemini, claude, openai, mistral, openrouter, ollama_cloud, ollama
    /model        Change model; omit the name to open the menu (includes Auto Mode where supported).
    /chats        Browse and resume saved sessions (auto-titled).
    /clear        Clear the current conversation (same provider/model).
    /clean        Summarize long history into a short memory to save tokens (beta; enable in /beta).
    /context      Show message count and estimated tokens.
    /settings     Workspace auto-feed, markdown/syntax output, patch tool, token count in status bar, global memory.
    /beta         Beta tools (e.g. MCP, optional scrapers, /clean).
    /memory       View, add, or delete global memories (needs memory enabled in /settings).
    /skills       List loaded Agent Skills from ~/.config/banana-code/skills/.
    /init         Generate a BANANA.md project summary in the current directory.
    /permissions  List permissions granted for this session.
    /debug        Toggle debug output (e.g. tool results, auto-route diagnostics).
    /plan         Plan mode: AI outlines a plan before large edits.
    /agent        Default: AI applies changes directly.
    /ask          Ask mode: questions and explanations only; no project edits.
    /security     Security-focused review mode (defensive use only).
    /yolo         Auto-approve permission prompts (use with care).
    /help         Show all commands.
    /exit         Quit (also Ctrl+D / Ctrl+C flow).

    File context: Type @path/to/file or @@/absolute/path in your message to attach file contents to that prompt.
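    The @-mention syntax above can be handled with a simple token scan. A minimal sketch of how a client might extract those paths from a prompt (the regex and function name are illustrative, not Banana Code's actual implementation):

```python
import re

# Matches "@relative/path" or "@@/absolute/path" tokens in a prompt.
# The accepted character set is an assumption for this sketch.
MENTION_RE = re.compile(r"@(@?/[\w./-]+|[\w./-]+)")

def extract_mentions(prompt: str) -> list[str]:
    """Return the file paths referenced with @ or @@ in a prompt."""
    paths = []
    for match in MENTION_RE.finditer(prompt):
        token = match.group(1)
        # "@@/abs/path" captures as "@/abs/path": strip the extra "@".
        paths.append(token.lstrip("@"))
    return paths
```

    The double-@ form keeps absolute paths unambiguous: a leading "/" after the stripped marker signals "read from the filesystem root" rather than the workspace.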

    ⚡ Auto Mode

    When Auto Mode is selected as the model (via /model or during initial setup), each new user message is first sent to a small, fast router model (one per provider) together with the last seven conversation messages, formatted as context only. The router returns JSON naming the concrete model that should handle this turn, plus a short reason, so short follow-ups like "Implement it" can still pick a capable model when the history shows a large task. The assistant's reply then uses that model. If routing fails, providers fall back to a sensible default (e.g. Gemini may fall back to Gemini 3 Flash). OpenRouter and local Ollama do not offer Auto Mode (OpenRouter uses a fixed model ID; local Ollama uses your local tag list).
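    The fallback behavior described above can be sketched as: parse the router's JSON reply and use a default model if anything is malformed. The field names (model, reason) are assumptions for illustration; the real payload shape is not documented here:

```python
import json

def pick_model(router_reply: str, default_model: str) -> tuple[str, str]:
    """Parse a router reply like '{"model": "...", "reason": "..."}'.

    Falls back to default_model when the reply is not valid JSON
    or is missing the expected fields (field names are assumed).
    """
    try:
        data = json.loads(router_reply)
        return data["model"], data.get("reason", "")
    except (json.JSONDecodeError, KeyError, TypeError):
        return default_model, "routing failed; using default"
```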

    OpenRouter setup

    OpenRouter lets you use many models behind one API. In Banana Code, choose OpenRouter in /provider, paste your OpenRouter API key, then enter a model ID (e.g. org/model:free). Banana Code loads OpenRouter's public model list and checks that the model advertises tool support (tools / tool_choice in supported_parameters) so Banana's tools can run. Requests go through the OpenAI-compatible Chat Completions API at https://openrouter.ai/api/v1.

    🎛️ Operating modes

    Banana Code layers behavior modes on top of the normal agent. Only one "style" mode is active at a time (/plan, /ask, /security, or the default agent). The status bar shows PLAN MODE, ASK MODE, or SECURITY MODE when relevant.

    Command    Role
    /agent     Default: full coding agent with tools (subject to permissions).
    /plan      Plan mode: propose a written plan before larger edits.
    /ask       Ask mode: read-only Q&A; no file or state-changing edits.
    /security  Security mode: prioritize vulnerability review.
    /yolo      Auto-approve permission prompts (dangerous; use carefully).

    Plan mode

    Enable with /plan. The system prompt switches to Plan Mode: the model is told to treat you as someone who wants a clear plan before risky or wide-reaching work.

    Behavior

    • Small or trivial changes (e.g. a typo, a one-line fix) may still be applied directly with tools.
    • Significant work (anything that touches multiple areas, adds a feature, or has broad impact) should not start with write_file / patch_file. The model should instead output an implementation plan (files to touch, ordered steps).
    • It should pause and ask whether the plan looks good before editing.
    • File-changing tools for those larger tasks are only appropriate after you explicitly approve the plan.

    Return to normal behavior with /agent (or switch to another mode). Plan mode is meant to reduce surprise edits and keep big refactors reviewable.

    Ask mode

    Enable with /ask. The system prompt switches to Ask Mode: the assistant is restricted to answering questions, explaining code, and gathering information โ€” not changing your project.

    Behavior

    • The model must not modify the codebase: no write_file, patch_file, or shell commands that change state (installing packages, deleting files, etc.).
    • It may use read-only tools to help answer you: e.g. read_file, search_files, list_directory, and non-mutating execute_command runs such as git status or running tests to report output.

    Use Ask mode when you want explanations, design discussion, or code review without accidental edits. Return to the default coding agent with /agent, or switch to /plan or /security if you want those modes instead.
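    One way to picture the Ask-mode restriction is as an allowlist over tool calls, with execute_command gated by a non-mutating check. This is a hypothetical sketch of the policy described above, not Banana Code's actual enforcement code (the prefix heuristic is an assumption):

```python
# Tools Ask mode may use freely, per the list above.
READ_ONLY_TOOLS = {"read_file", "search_files", "list_directory"}

# execute_command is allowed only for commands judged non-mutating;
# a real implementation would need a far more careful classifier.
SAFE_COMMAND_PREFIXES = ("git status", "git log", "git diff", "ls", "cat")

def allowed_in_ask_mode(tool: str, command: str = "") -> bool:
    if tool in READ_ONLY_TOOLS:
        return True
    if tool == "execute_command":
        return command.strip().startswith(SAFE_COMMAND_PREFIXES)
    return False  # write_file, patch_file, fetch-and-modify, etc.
```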

    Security mode

    Enable with /security. The system prompt switches to Security Mode: the model prioritizes finding and explaining security issues in your codebase.

    Behavior

    • Focus on vulnerabilities, misconfigurations, and unsafe patterns (e.g. injection, auth issues, secret leakage, OWASP-style issues).
    • Output should include actionable detail: affected paths, what's wrong, and remediation ideas.

    Responsible use

    Banana Code is for defensive work on software you own or are authorized to test. Do not use Security mode to probe systems without permission or to develop exploits. Return to normal coding with /agent when you're done reviewing.

    Available Tools

    Banana Code can assist you by:

    • execute_command: Running shell commands (git, npm, ls, etc.).
    • read_file: Reading local source code.
    • write_file: Creating or editing files.
    • patch_file: Targeted search-and-replace style edits (on by default; disable in /settings).
    • fetch_url: Browsing web documentation.
    • search_files: Performing regex searches across your project.
    • list_directory: Exploring folder structures.

    Additional tools appear when you enable beta features in /beta (e.g. web search, MCP tools) or when the model exposes them.

    🧠 Agent Skills

    Banana Code supports custom Agent Skills. Skills are like "onboarding guides" that teach the AI how to do specific tasks, use certain APIs, or follow your company's coding standards.

    When the AI detects a task that matches a skill's description, it automatically activates the skill and loads its specialized instructions.

    How to create a Skill:

    1. Create a folder in your config directory: ~/.config/banana-code/skills/my-react-skill/
    2. Create a SKILL.md file inside that folder using this exact format:
    ---
    name: my-react-skill
    description: Use this skill whenever you are asked to build or edit a React component.
    ---
    
    # React Guidelines
    - Always use functional components.
    - Always use Tailwind CSS for styling.
    - Do not use default exports.
    3. Type /skills in Banana Code to verify it loaded. The AI will now follow these rules automatically!
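    The SKILL.md layout above (frontmatter between --- fences, then markdown instructions) is simple enough to read without a YAML library. A minimal parsing sketch, assuming exactly the format shown (one key: value per frontmatter line, no --- inside the body's first section):

```python
def parse_skill(text: str) -> tuple[dict, str]:
    """Split a SKILL.md file into frontmatter fields and markdown body."""
    # text begins with "---", so split yields: "", frontmatter, body.
    _, frontmatter, body = text.split("---", 2)
    fields = {}
    for line in frontmatter.strip().splitlines():
        key, _, value = line.partition(":")
        fields[key.strip()] = value.strip()
    return fields, body.strip()
```

    The name and description fields are what matters: the description is what the AI matches against the current task to decide whether to activate the skill.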

    🔌 Model Context Protocol (MCP) Support

    Banana Code supports the open standard Model Context Protocol, allowing you to plug in community-built servers to give your AI access to your databases, GitHub, Slack, Google Maps, and more.

    1. Enable MCP Support in the /beta menu.
    2. Create a configuration file at ~/.config/banana-code/mcp.json.
    3. Add your servers. For example, to add the demo tools exposed by the reference "everything" test server:
    {
      "mcpServers": {
        "everything": {
          "command": "npx",
          "args": ["-y", "@modelcontextprotocol/server-everything"]
        }
      }
    }

    Restart Banana Code, and the AI will instantly know how to use these new tools natively!

    🧠 Global AI Memory

    Banana Code features a persistent "brain" that remembers your preferences across every project you work on.

    1. Enable the Enable Global AI Memory option in the /settings menu.
    2. Tell the AI facts about yourself or your coding style (e.g., "My name is Max" or "I prefer using Python for data scripts").
    3. Use the /memory command to view, manually add, or delete saved facts.
    4. The AI will now automatically adhere to these preferences in every future session!

    ๐ŸŒ Project Initialization (/init)

    Stop repeating yourself! When you start working in a new folder, type /init.

    Banana Code will analyze your entire project structure and generate a BANANA.md file. This file acts as a high-level architectural summary. Every time you start banana in that folder, the AI silently reads this file, giving it instant context about your project's goals and technologies from the very first message.

    📡 Headless API Mode (--api)

    Banana Code can be run as a background engine, exposing its powerful tool-calling and provider-switching logic via a local HTTP and WebSocket server. This allows you to build custom GUIs (Electron, Tauri, React) on top of the Banana Code engine without rewriting any AI logic.

    Start the server:

    banana --api 4000

    HTTP Endpoints

    • GET /api/status: Returns engine status, active provider, and model.
    • GET /api/sessions: Returns a JSON array of all saved chat sessions.
    • GET /api/config: Returns the current config.json preferences.

    WebSocket Streaming & Chat

    Connect a WebSocket client (like wscat or your GUI frontend) to ws://localhost:4000.

    Send a chat message:

    { "type": "chat", "text": "Run the sensors command" }

    Streamed Response Format: Banana Code streams data back to the client in real-time chunks:

    • {"type": "chunk", "content": "The output of the command is..."}
    • {"type": "tool_start", "tool": "execute_command"}
    • {"type": "done", "finalResponse": "..."}
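    A client consuming this stream only needs to dispatch on type and accumulate chunk content until done arrives. A sketch of the client-side handling, using only the message shapes shown above:

```python
import json

def assemble(stream_lines: list[str]) -> str:
    """Fold a sequence of streamed JSON messages into the final text."""
    parts = []
    for line in stream_lines:
        msg = json.loads(line)
        if msg["type"] == "chunk":
            parts.append(msg["content"])
        elif msg["type"] == "tool_start":
            print(f"[tool: {msg['tool']}]")  # surface tool activity in a GUI
        elif msg["type"] == "done":
            # Prefer finalResponse when present; else use what we accumulated.
            return msg.get("finalResponse") or "".join(parts)
    return "".join(parts)
```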

    Remote Tool Approval (Security)

    If the AI decides to run a shell command or patch a file, Banana Code pauses execution and sends a permission ticket to your GUI (single-line JSON messages):

    {"type":"permission_requested","ticketId":"5c9b2a...","action":"Execute Command","details":"sensors"}

    Your GUI must present a dialog to the user and respond with the same ticketId to resume execution:

    {"type":"permission_response","ticketId":"5c9b2a...","allowed":true,"session":true}

    If an invalid ticketId is provided, Banana Code automatically blocks the tool execution to ensure safety.
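    On the GUI side, the approval handshake amounts to echoing the ticketId back with a decision. A sketch of building that reply from a received ticket, using the field names in the examples above (session is assumed to mean "remember this decision for the rest of the session"):

```python
import json

def permission_response(ticket: dict, allowed: bool, remember: bool = False) -> str:
    """Build the reply for a permission_requested ticket.

    The server matches on ticketId; a reply carrying an unknown
    ticketId is rejected and the tool call stays blocked.
    """
    return json.dumps({
        "type": "permission_response",
        "ticketId": ticket["ticketId"],
        "allowed": allowed,
        "session": remember,
    })
```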

    🔐 Privacy & Security

    Banana Code is built with transparency in mind:

    1. Approval Required: No file is written and no command is run without you saying "Allow".
    2. Local Storage: Your API keys and chat history are stored locally on your machine (~/.config/banana-code/).

    Made with 🍌 by banaxi

    Banana Code is an independent open-source project and is not affiliated with, endorsed by, or sponsored by OpenAI, Google, Anthropic, or any other AI provider.

    This tool provides an interface to access services you already have permission to use. Users are responsible for complying with the Terms of Service of their respective AI providers. Use of experimental or internal endpoints is at the user's own risk.