Package Exports
- @hakkisagdic/context-manager
- @hakkisagdic/context-manager/index.js

This package does not declare an "exports" field, so the exports above were detected and optimized automatically by JSPM. If a package subpath is missing, consider opening an issue on the original package (@hakkisagdic/context-manager) requesting "exports" support. If that is not possible, create a JSPM override to customize the exports field for this package.
Context Manager
LLM context optimization tool with method-level filtering and token analysis. Perfect for AI-assisted development workflows.
☕ Support This Project
If you find this tool helpful, consider buying me a coffee! Your support helps maintain and improve this project.
Files
- context-manager.js - Main LLM context analysis script with exact token counting
- .calculatorignore - Files to exclude from token calculation (EXCLUDE mode)
- .calculatorinclude - Files to include in token calculation (INCLUDE mode)
- README.md - This documentation file
- README-tr.md - Turkish documentation (Türkçe dokümantasyon)
Features
- ✅ Exact token counting using tiktoken (GPT-4 compatible)
- 🚫 Dual ignore system - respects both .gitignore and calculator ignore rules
- 📋 Include/Exclude modes - .calculatorinclude takes priority over .calculatorignore
- 📊 Detailed reporting - by file type, largest files, statistics
- 💾 Optional JSON export - detailed analysis reports
- 🔍 Verbose mode (default) - shows all included files for transparency
- 🎯 Core application focus - configured to analyze only essential JS files
- 📈 Context optimization - perfect for LLM context window management
- 🤖 LLM context export - generate optimized file lists for LLM consumption
- 📋 Clipboard integration - copy context directly to clipboard
- ⚡ JSON format - Structured clipboard output identical to llm-context.json file
- 🎯 LLM-optimized - Clean directory structure without token counts
- 🔗 Consistent exports - Clipboard and file exports use identical JSON format
- 📤 Interactive export - Prompts for export choice when no options specified
- 🔀 Dual context modes - compact (default) or detailed format
- 📄 GitIngest format - Generate single-file digest for LLM consumption (inspired by GitIngest)
Quick Start
Core Application Analysis (Default)
# Interactive analysis with export selection
context-manager
# Analyze with minimal LLM context format
context-manager --context-clipboard
# Save detailed report
context-manager --save-report
# Generate LLM context file
context-manager --context-export
# Generate GitIngest-style digest (single file for LLMs)
context-manager --gitingest
# Method-level analysis
context-manager --method-level
# Combine multiple exports
context-manager -g -s # GitIngest digest + detailed report
# Use detailed context format (legacy)
context-manager --method-level --detailed-context --context-clipboard
Wrapper Script Usage
# Using the NPM package globally
context-manager
context-manager --save-report
context-manager --context-clipboard
Current Configuration
The tool is configured to focus on core application logic only:
✅ Included (64 JS files, ~181k tokens)
- Core MCP server implementation (utility-mcp/src/)
- Authentication and security layers
- Request handlers and routing
- Transport protocols and communication
- Utilities and validation logic
- Configuration management
- Error handling and monitoring
🚫 Excluded via calculator ignore rules
- Documentation files (.md, .txt)
- Configuration files (.json, .yml)
- Infrastructure and deployment files
- Testing and script directories
- Build artifacts and dependencies
- Workflow orchestration files (utility-mcp/src/workflows/**)
- Testing utilities (utility-mcp/src/testing/**)
- All non-essential supporting files
Usage
Basic Analysis
# Interactive analysis with export selection
context-manager
# Quiet mode (no file listing)
context-manager --no-verbose
# With detailed JSON report
context-manager --save-report
# Generate LLM context file list
context-manager --context-export
# Copy context directly to clipboard
context-manager --context-clipboard
Interactive Export Selection
When you run the tool without specifying export options (--save-report, --context-export, or --context-clipboard), it will automatically prompt you to choose an export option after the analysis:
# Run analysis and get prompted for export options
context-manager
# The tool will show:
# 📤 Export Options:
# 1) Save detailed JSON report (token-analysis-report.json)
# 2) Generate LLM context file (llm-context.json)
# 3) Copy LLM context to clipboard
# 4) No export (skip)
#
# 🤔 Which export option would you like? (1-4):
This interactive mode ensures you never miss the opportunity to export your analysis results in the format you need.
Include vs Exclude Modes
The token calculator supports two complementary filtering modes:
EXCLUDE Mode (.calculatorignore)
- Default mode when only .calculatorignore exists
- Includes all files except those matching ignore patterns
- Traditional gitignore-style exclusion logic
INCLUDE Mode (.calculatorinclude)
- Priority mode - when .calculatorinclude exists, .calculatorignore is ignored
- Includes only files matching include patterns
- More precise control for specific file selection
- Perfect for creating focused analysis sets
Mode Priority
- If .calculatorinclude exists → INCLUDE mode (ignore .calculatorignore)
- If only .calculatorignore exists → EXCLUDE mode
- If neither exists → Include all files (respect .gitignore only)
Example Usage
# EXCLUDE mode: Include everything except patterns in .calculatorignore
rm .calculatorinclude # Remove include file
context-manager
# INCLUDE mode: Include only patterns in .calculatorinclude
# (automatically ignores .calculatorignore)
context-manager
Help and Options
context-manager --help
Available Options
--save-report, -s - Save detailed JSON report
--no-verbose - Disable file listing (verbose is default)
--context-export - Generate LLM context file list (saves as llm-context.json)
--context-clipboard - Copy LLM context directly to clipboard
--detailed-context - Use detailed context format (8.6k chars, default is compact 1.2k)
--help, -h - Show help message
LLM Context Export
The token calculator can generate optimized file lists for LLM consumption, with two format options:
Ultra-Compact Format (Default)
- Size: ~2.3k characters (structured JSON)
- Content: Project metadata and organized file paths without token counts
- Format: Identical to llm-context.json file - complete JSON structure
- Perfect for: LLM consumption, programmatic processing, structured data needs
- Usage: --context-clipboard or --context-export
Detailed Format (Legacy)
- Size: ~8.6k characters (comprehensive)
- Content: Full paths, categories, importance scores, directory stats
- Perfect for: Initial project analysis, comprehensive documentation
- Usage: --detailed-context --context-clipboard
Features
- Smart file selection - Top files by token count and importance
- Directory grouping - Common prefix compression saves space
- Token abbreviation - "12k" instead of "12,388 tokens"
- Extension removal - ".js" removed to save characters
- Cross-platform clipboard - Works on macOS, Linux, and Windows
- Multiple output formats - JSON file or clipboard ready text
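The token abbreviation feature above ("12k" instead of "12,388 tokens") can be sketched as a small formatter. This is an assumption about the rounding behavior (the docs show both "12k" and "15.2k" style output); `abbreviateTokens` is a hypothetical name, not the package's exported function.

```javascript
// Illustrative sketch: abbreviate raw token counts for compact exports.
// Keeps one decimal place and strips a trailing ".0".
function abbreviateTokens(n) {
  if (n >= 1_000_000) return (n / 1_000_000).toFixed(1).replace(/\.0$/, '') + 'M';
  if (n >= 1_000) return (n / 1_000).toFixed(1).replace(/\.0$/, '') + 'k';
  return String(n);
}
```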
Usage
# Generate minimal LLM context and save to llm-context.json (2.3k chars JSON)
context-manager --context-export
# Copy minimal context directly to clipboard (2.3k chars JSON - identical to file)
context-manager --context-clipboard
# Copy detailed context to clipboard (8.6k chars)
context-manager --detailed-context --context-clipboard
# Combine with regular analysis
context-manager --save-report --context-clipboard
Output Format Examples
Compact Format (JSON - 2.3k chars):
{
"project": {
"root": "cloudstack-go-mcp-proxy",
"totalFiles": 64,
"totalTokens": 181480
},
"paths": {
"utility-mcp/src/server/": [
"CloudStackUtilityMCP.js"
],
"utility-mcp/src/handlers/": [
"workflow-handlers.js",
"tool-handlers.js",
"analytics-handler.js"
],
"utility-mcp/src/utils/": [
"security.js",
"usage-tracker.js",
"cache-warming.js"
]
}
}
Detailed Format (8.6k chars):
# cloudstack-go-mcp-proxy Codebase Context
**Project:** 64 files, 181,480 tokens
**Core Files (Top 20):**
1. `utility-mcp/src/server/CloudStackUtilityMCP.js` (12,388 tokens, server)
2. `utility-mcp/src/handlers/workflow-handlers.js` (11,007 tokens, handler)
...
**All Files:**
```json
[{"path": "file.js", "t": 1234, "c": "core", "i": 85}]
```
Use Cases
Compact Format (2.3k chars JSON):
- LLM Integration - Structured data for AI assistants with complete project context
- Programmatic Processing - JSON format for automated tools and scripts
- Context Sharing - Identical format in clipboard and file exports
- Development Workflows - Consistent structure for CI/CD and automation
Detailed Format (8.6k chars):
- Architecture Planning - Comprehensive project overview for major decisions
- New Team Member Onboarding - Complete codebase understanding
- Documentation Generation - Full project structure analysis
- Code Review Preparation - Detailed file relationships and importance
General Use Cases:
- Development workflow integration
- CI/CD pipeline context generation
- Automated documentation updates
- Project health monitoring
GitIngest Format Export
Context-manager now supports generating GitIngest-style digest files - a single, prompt-friendly text file perfect for LLM consumption.
What is GitIngest Format?
GitIngest format consolidates your entire codebase into a single text file with:
- Project summary and statistics
- Visual directory tree structure
- Complete file contents with clear separators
- Token count estimates
This format is inspired by GitIngest, implemented purely in JavaScript with zero additional dependencies.
Usage
# Standard workflow - analyze and generate digest in one step
context-manager --gitingest
context-manager -g
# Combine with other exports
context-manager -g -s # digest.txt + token-analysis-report.json
# Two-step workflow - generate digest from existing JSON (fast, no re-scan)
context-manager -s # Step 1: Create report
context-manager --gitingest-from-report # Step 2: Generate digest
# Or from LLM context
context-manager --context-export # Step 1: Create context
context-manager --gitingest-from-context # Step 2: Generate digest
# With custom filenames
context-manager --gitingest-from-report my-report.json
context-manager --gitingest-from-context my-context.json
Why use JSON-based digest?
- ⚡ Performance: Instant digest generation without re-scanning
- 🔄 Reusability: Generate multiple digests from one analysis
- 📦 Workflow: Separate analysis from export steps
- 🎯 Flexibility: Use different JSON sources for different purposes
Output Example
The generated digest.txt file looks like:
Directory: my-project
Files analyzed: 42
Estimated tokens: 15.2k
Directory structure:
└── my-project/
├── src/
│ ├── index.js
│ └── utils.js
└── README.md
================================================
FILE: src/index.js
================================================
[complete file contents here]
================================================
FILE: src/utils.js
================================================
[complete file contents here]
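A digest in the shape shown above can be assembled with plain string concatenation. This is a minimal sketch under assumed inputs ({ path, tokens, content } records); the real generator also renders the directory tree and token summary, and `buildDigest` is a hypothetical name.

```javascript
// Sketch: join analyzed files into a single GitIngest-style digest,
// largest files first (matching "Sorted Output" below).
function buildDigest(projectName, files) {
  const sep = '='.repeat(48);
  const header = [
    `Directory: ${projectName}`,
    `Files analyzed: ${files.length}`,
    '',
  ].join('\n');
  const body = files
    .slice()
    .sort((a, b) => b.tokens - a.tokens)
    .map(f => `${sep}\nFILE: ${f.path}\n${sep}\n${f.content}\n`)
    .join('\n');
  return header + '\n' + body;
}
```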
Key Features
- Single File: Everything in one file for easy LLM ingestion
- Tree Visualization: Clear directory structure
- Token Estimates: Formatted as "1.2k" or "1.5M"
- Sorted Output: Files sorted by token count (largest first)
- Filter Compatible: Respects all .gitignore and calculator ignore rules
Use Cases
- LLM Context Windows: Paste entire codebase as single context
- Code Reviews: Share complete project snapshot
- Documentation: Single-file project reference
- AI Analysis: Perfect for ChatGPT, Claude, or other LLMs
- Archival: Simple project snapshot format
Version Tracking
Context-manager implements GitIngest format v0.3.1. See docs/GITINGEST_VERSION.md for implementation details and version history.
Configuration
.calculatorignore File (EXCLUDE Mode)
The .calculatorignore file is pre-configured for core application analysis:
# Current focus: Only core JS files in utility-mcp/src/
# Excludes:
**/*.md # All documentation
**/*.json # All configuration files
**/*.yml # All YAML files
infrastructure/** # Infrastructure code
workflows/** # Workflow definitions
docs/** # Documentation directory
token-analysis/** # Analysis tools themselves
utility-mcp/scripts/** # Utility scripts
utility-mcp/src/workflows/** # Workflow JS files
utility-mcp/src/testing/** # Testing utilities
.calculatorinclude File (INCLUDE Mode)
The .calculatorinclude file provides precise file selection:
# Include only core JavaScript files
# This should produce exactly 64 files
# Include main entry point
utility-mcp/index.js
# Include all src JavaScript files EXCEPT workflows and testing
utility-mcp/src/**/*.js
# Exclude specific subdirectories (using negation)
!utility-mcp/src/workflows/**
!utility-mcp/src/testing/**
Creating Custom Configurations
For EXCLUDE mode (edit .calculatorignore):
# Remove lines to include more file types
# Add patterns to exclude specific files
# Example: Include documentation
# **/*.md <- comment out or remove this line
# Example: Exclude specific large files
your-large-file.js
specific-directory/**
For INCLUDE mode (create .calculatorinclude):
# Include specific files or patterns
src/**/*.js # All JS files in src
config/*.json # Config files only
docs/api/**/*.md # API documentation only
# Use negation to exclude from broad patterns
src/**/*.js
!src/legacy/** # Exclude legacy code
!src/**/*.test.js # Exclude test files
Configuration File Priority
1. .gitignore (project root) - Standard git exclusions (always respected)
2. .calculatorinclude (token-analysis/) - INCLUDE mode (highest priority)
3. .calculatorignore (token-analysis/) - EXCLUDE mode (used when no include file)
4. .calculatorignore (project root) - Fallback EXCLUDE mode location
Installation
For exact token counting, install tiktoken:
npm install tiktoken
Without tiktoken, the tool uses smart estimation (~95% accuracy).
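The fallback can be pictured as a simple heuristic with an optional upgrade to exact counting. A rough rule of thumb is ~4 characters per token for English text and code; the package's actual estimator is not documented here, so this is an assumption, and both function names are illustrative.

```javascript
// Hypothetical fallback estimator (~4 chars per token).
function estimateTokens(text) {
  return Math.ceil(text.length / 4);
}

// Prefer exact GPT-4-compatible counts when tiktoken is installed.
let countTokens = estimateTokens;
try {
  const { encoding_for_model } = require('tiktoken');
  const enc = encoding_for_model('gpt-4');
  countTokens = text => enc.encode(text).length;
} catch {
  // tiktoken not installed; keep the estimator.
}
```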
Output Example
🎯 PROJECT TOKEN ANALYSIS REPORT
================================================================================
📊 Total files analyzed: 64
🔢 Total tokens: 181,480
💾 Total size: 0.78 MB
📄 Total lines: 28,721
📈 Average tokens per file: 2,836
🚫 Files ignored by .gitignore: 11,912
📋 Files ignored by calculator rules: 198
📋 BY FILE TYPE:
--------------------------------------------------------------------------------
Extension Files Tokens Size (KB) Lines
--------------------------------------------------------------------------------
.js 64 181,480 799.8 28,721
🏆 TOP 5 LARGEST FILES BY TOKEN COUNT:
--------------------------------------------------------------------------------
1. 12,388 tokens (6.8%) - utility-mcp/src/server/CloudStackUtilityMCP.js
2. 11,007 tokens (6.1%) - utility-mcp/src/handlers/workflow-handlers.js
3. 7,814 tokens (4.3%) - utility-mcp/src/utils/security.js
4. 6,669 tokens (3.7%) - utility-mcp/src/handlers/tool-handlers.js
5. 5,640 tokens (3.1%) - utility-mcp/src/ci-cd/pipeline-integration.js
Context Management
Perfect for LLM context window optimization:
- 181k tokens = Core application logic only
- Clean analysis = No noise from docs, configs, or build files
- Focused development = Essential code for AI-assisted development
- Context efficiency = Maximum useful code per token
- Dual mode flexibility = Precise include/exclude control
- Ultra-minimal export = 1k chars (89% reduction) for frequent AI interactions
- Detailed export = 8.6k chars for comprehensive analysis when needed
Integration
You can integrate this tool into:
- CI/CD pipelines for code size monitoring
- Pre-commit hooks for token budget checks
- Documentation generation workflows
- Code quality gates
- LLM context preparation workflows
- Development environment setup
Troubleshooting
Include vs Exclude Mode Issues
- INCLUDE mode active: Remove .calculatorinclude to use EXCLUDE mode
- Wrong files included: Check if .calculatorinclude exists (takes priority)
- Mode confusion: Use verbose mode to see which mode is active
Patterns Not Working
- Ensure no inline comments in ignore/include pattern files
- Use file patterns (docs/**) instead of directory patterns (docs/)
- Test specific patterns with verbose mode
- Check pattern syntax: ** for recursive, * for single level
Token Count Issues
- Too high: Review included files with verbose mode, add exclusion patterns
- Too low: Check if important files are excluded, review patterns
- Inconsistent: Verify which mode is active (include vs exclude)
Missing Expected Files
- Check if files are excluded by .gitignore (always respected)
- Verify calculator ignore/include patterns
- Ensure files are recognized as text files
- Use verbose mode to see exclusion reasons
Context Manager
LLM context manager with method-level filtering and token optimization. The ultimate tool for AI-assisted development.
Created by Hakkı Sağdıç
🚀 Features
✅ File-level token analysis - Analyze entire files and directories
🔧 Method-level analysis - Extract and analyze specific methods from JavaScript/TypeScript
📋 Dual filtering system - Include/exclude files and methods with pattern matching
📊 LLM context optimization - Generate ultra-compact context for AI assistants
🎯 Exact token counting - Uses tiktoken for GPT-4 compatible counts
📤 Multiple export formats - JSON reports, clipboard, file exports
📦 NPM package - Use programmatically or as global CLI tool
🔍 Pattern matching - Wildcards and regex support for flexible filtering
⚡ Performance optimized - 36% smaller codebase with enhanced functionality
📦 Installation
Option 1: NPM Package (Recommended)
# Local installation
npm install @hakkisagdic/context-manager
# Global installation
npm install -g @hakkisagdic/context-manager
# Run globally
context-manager --help
Option 2: Direct Usage
# Clone and use directly
git clone <repository>
cd token-analysis
node token-calculator.js --help
🎯 Quick Start
Basic Analysis
# Interactive analysis with export selection
context-manager
# File-level analysis with clipboard export
context-manager --context-clipboard
# Method-level analysis
context-manager --method-level --context-export
# Analysis with reports
context-manager --method-level --save-report --verbose
Advanced Usage
# Focus on specific methods only
printf "calculateTokens\nhandleRequest\n*Validator\n" > .methodinclude
context-manager --method-level
# Exclude test methods
printf "*test*\n*debug*\nconsole\n" > .methodignore
context-manager --method-level --context-clipboard
Usage
Command Line Interface
# Basic analysis
context-manager
# Method-level analysis
context-manager --method-level
# Save detailed report
context-manager --save-report
# Copy context to clipboard
context-manager --context-clipboard
# Combine options
context-manager --method-level --save-report --verbose
Programmatic Usage
const { TokenAnalyzer } = require('@hakkisagdic/context-manager');
// Basic file-level analysis
const analyzer = new TokenAnalyzer('./src', {
methodLevel: false,
verbose: true
});
// Method-level analysis
const methodAnalyzer = new TokenAnalyzer('./src', {
methodLevel: true,
saveReport: true
});
analyzer.run();
methodAnalyzer.run();
🔧 Configuration
File-Level Filtering
Priority Order:
1. .gitignore (project root) - Standard git exclusions (always respected)
2. .calculatorinclude - INCLUDE mode (highest priority for files)
3. .calculatorignore - EXCLUDE mode (fallback for files)
.calculatorinclude - Include only these files:
# Include only core JavaScript files
utility-mcp/src/**/*.js
!utility-mcp/src/testing/**
!utility-mcp/src/workflows/**
.calculatorignore - Exclude these files:
# Exclude documentation and config
**/*.md
**/*.json
node_modules/**
test/
**/*.test.js
**/*.spec.js
Method-Level Filtering
.methodinclude - Include only these methods:
# Core business logic methods
calculateTokens
generateLLMContext
analyzeFile
handleRequest
validateInput
processData
# Pattern matching
*Handler # All methods ending with 'Handler'
*Validator # All methods ending with 'Validator'
*Manager # All methods ending with 'Manager'
TokenCalculator.* # All methods in TokenCalculator class
.methodignore - Exclude these methods:
# Utility and debug methods
console
*test*
*debug*
*helper*
print*
main
# File-specific exclusions
server.printStatus
utils.debugLog
Pattern Syntax
| Pattern | Description | Example |
|---|---|---|
| `methodName` | Exact match | `calculateTokens` |
| `*pattern*` | Contains pattern | `*Handler` matches `requestHandler` |
| `Class.*` | All methods in class | `TokenCalculator.*` |
| `file.method` | Specific file method | `server.handleRequest` |
| `!pattern` | Negation (exclude) | `!*test*` |
📤 Output Formats
1. File-Level Context (Default)
Use case: General codebase analysis, file organization
{
"project": {
"root": "my-project",
"totalFiles": 64,
"totalTokens": 181480
},
"paths": {
"src/core/": ["server.js", "handler.js"],
"src/utils/": ["helper.js", "validator.js"]
}
}
2. Method-Level Context (--method-level)
Use case: Focused analysis, debugging specific methods, LLM context optimization
{
"project": {
"root": "my-project",
"totalFiles": 64,
"totalTokens": 181480
},
"methods": {
"src/server.js": [
{"name": "handleRequest", "line": 15, "tokens": 234},
{"name": "validateInput", "line": 45, "tokens": 156}
],
"src/utils.js": [
{"name": "processData", "line": 12, "tokens": 89}
]
},
"methodStats": {
"totalMethods": 150,
"includedMethods": 23,
"totalMethodTokens": 5670
}
}
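The `methodStats` block in the example above can be derived from the `methods` map by flattening and summing. The aggregation logic here is assumed from the output shape; `computeMethodStats` is a hypothetical name.

```javascript
// Sketch: aggregate the per-file method lists into the summary block.
function computeMethodStats(methods, totalMethods) {
  const included = Object.values(methods).flat();
  return {
    totalMethods,
    includedMethods: included.length,
    totalMethodTokens: included.reduce((sum, m) => sum + m.tokens, 0),
  };
}
```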
3. Detailed Report (JSON)
Use case: Comprehensive analysis, CI/CD integration, historical tracking
{
"metadata": {
"generatedAt": "2024-01-15T10:30:00.000Z",
"projectRoot": "/path/to/project",
"gitignoreRules": ["node_modules/**", "*.log"],
"calculatorRules": ["src/**/*.js", "!src/test/**"]
},
"summary": {
"totalFiles": 64,
"totalTokens": 181480,
"byExtension": {".js": {"count": 64, "tokens": 181480}},
"largestFiles": [...]
},
"files": [...]
}
CLI Options
| Option | Short | Description |
|---|---|---|
| `--save-report` | `-s` | Save detailed JSON report |
| `--verbose` | `-v` | Show included files and directories |
| `--context-export` | | Generate LLM context file |
| `--context-clipboard` | | Copy context to clipboard |
| `--method-level` | `-m` | Enable method-level analysis |
| `--help` | `-h` | Show help message |
📊 Use Cases & Examples
1. 🤖 LLM Context Optimization
Goal: Generate minimal context for AI assistants
# Ultra-compact method-level context
context-manager --method-level --context-clipboard
# Focus on core business logic only
printf "handleRequest\nprocessData\nvalidateInput\n" > .methodinclude
context-manager --method-level --context-export
Result: 89% smaller context compared to full codebase
2. 📊 Codebase Analysis
Goal: Understand project complexity and structure
# Analysis with detailed reports
context-manager --save-report --verbose
# Track largest files and methods
context-manager --method-level --save-report
3. 🔍 Method-Level Debugging
Goal: Focus on specific problematic methods
# Debug authentication methods only
printf "*auth*\n*login*\n*validate*\n" > .methodinclude
context-manager --method-level --context-clipboard
# Exclude test and debug methods
printf "*test*\n*debug*\nconsole\nlogger\n" > .methodignore
context-manager --method-level
4. 🚀 CI/CD Integration
Goal: Monitor codebase growth and complexity
# Daily token analysis for monitoring
context-manager --save-report
cp token-analysis-report.json reports/analysis-$(date +%Y%m%d).json
# Check method complexity trends
context-manager --method-level --save-report
5. 📈 Code Quality Gates
Goal: Ensure code stays within token budgets
# Check if codebase exceeds LLM context limits
context-manager --context-export
TOKENS=$(jq '.project.totalTokens' llm-context.json)
if [ "$TOKENS" -gt 100000 ]; then
  echo "Codebase too large for LLM context!"
  exit 1
fi
🛠️ CLI Reference
Core Options
| Option | Short | Description | Example |
|---|---|---|---|
| `--save-report` | `-s` | Save detailed JSON report | `context-manager -s` |
| `--verbose` | `-v` | Show included files/methods | `context-manager -v` |
| `--context-export` | | Generate LLM context file | `context-manager --context-export` |
| `--context-clipboard` | | Copy context to clipboard | `context-manager --context-clipboard` |
| `--method-level` | `-m` | Enable method-level analysis | `context-manager -m` |
| `--help` | `-h` | Show help message | `context-manager -h` |
Usage Patterns
# Quick analysis with interactive export
context-manager
# Method-level analysis with all outputs
context-manager --method-level --save-report --context-export --verbose
# LLM-optimized context generation
context-manager --method-level --context-clipboard
# CI/CD monitoring
context-manager --save-report --context-export
# Development debugging
context-manager --method-level --verbose
💻 Programmatic API
Basic Usage
const { TokenAnalyzer } = require('@hakkisagdic/context-manager');
// File-level analysis
const analyzer = new TokenAnalyzer('./src', {
verbose: true,
saveReport: true
});
analyzer.run();
Method-Level Analysis
const { TokenAnalyzer, MethodAnalyzer } = require('@hakkisagdic/context-manager');
// Method-level analysis with custom filtering
const analyzer = new TokenAnalyzer('./src', {
methodLevel: true,
contextExport: true,
verbose: false
});
analyzer.run();
// Extract methods from specific file
const methodAnalyzer = new MethodAnalyzer();
const methods = methodAnalyzer.extractMethods(fileContent, 'server.js');
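For intuition, a very simplified extraction pass can be written with a regex over the source text. This is a naive sketch, not the package's `MethodAnalyzer` (which is presumably more robust): it only catches plain `function` declarations and class/object method shorthand, and `extractMethodsNaive` is a hypothetical name.

```javascript
// Naive sketch: find method-like definitions and their 1-based line numbers.
function extractMethodsNaive(source) {
  const re = /(?:function\s+([A-Za-z_$][\w$]*)\s*\(|^\s*([A-Za-z_$][\w$]*)\s*\([^)]*\)\s*\{)/gm;
  const methods = [];
  let match;
  while ((match = re.exec(source)) !== null) {
    const name = match[1] || match[2];
    const line = source.slice(0, match.index).split('\n').length;
    methods.push({ name, line });
  }
  return methods;
}
```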
Advanced Configuration
const analyzer = new TokenAnalyzer('./src', {
// Enable method-level analysis
methodLevel: true,
// Output options
saveReport: true,
contextExport: true,
contextToClipboard: true,
// Verbosity
verbose: true,
// Compact context (for LLM optimization)
compactContext: true
});
// Access results
analyzer.run();
console.log('Analysis complete!');
Custom Method Analysis
const { MethodAnalyzer, MethodFilterParser } = require('@hakkisagdic/context-manager');
// Create custom method filter
const filter = new MethodFilterParser(
'./custom-methods.include',
'./custom-methods.ignore'
);
// Analyze specific file
const methodAnalyzer = new MethodAnalyzer();
const methods = methodAnalyzer.extractMethods(content, filePath);
// Filter methods
const filteredMethods = methods.filter(method =>
filter.shouldIncludeMethod(method.name, fileName)
);
Requirements
- Node.js: >= 14.0.0
- tiktoken: ^1.0.0 (optional, for exact token counts)
License
MIT License - see LICENSE file for details
Contributing
- Fork the repository
- Create your feature branch
- Add tests for new functionality
- Submit a pull request
📞 Support
Created with ❤️ by Hakkı Sağdıç