# conkurrence-eval
One command. Find out if your AI agrees with itself.
Conkurrence measures whether multiple AI models produce consistent outputs on your evaluation tasks. It tells you which items your AI agrees on and which need human review — using the same psychometric methods trusted in clinical research.
## The Problem
You have a golden dataset — "correct" answers your AI is measured against. But how reliable is it? If you ran the labeling again, would you get the same results?
Conkurrence answers that question with statistical rigor.
## Quick Start

```bash
# Install
npm install -g conkurrence-eval

# Initialize from a template
conkurrence-eval init --template classification

# Edit data.json with your items, then run
conkurrence-eval run --schema schema.json --data data.json --config config.json --output results.json

# Read the results
conkurrence-eval report --input results.json
```

Total cost for a typical run (30 items, 4 raters): under $1 USD via AWS Bedrock.
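The files you pass to `run` are generated by the template you initialized, so treat the following as a rough sketch rather than the documented format: a classification `data.json` plausibly pairs each item with its current golden label (all field names here are assumptions).

```jsonc
// Hypothetical sketch of a classification data.json.
// Field names are assumptions — use the structure generated by
// `conkurrence-eval init`, not this one, as your starting point.
[
  {
    "id": "item-001",
    "input": "App crashes when I open the settings page.",
    "label": "bug-report"
  },
  {
    "id": "item-002",
    "input": "Please add dark mode to the dashboard.",
    "label": "feature-request"
  }
]
```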
## What You Get
- Agreement scores per field — know exactly where your AI is reliable and where it isn't
- Per-item classification — high confidence, moderate, contested, or diagnostic alert (see the sketch after this list)
- Actionable guidance — not just scores, but what to do about them
- Bootstrap confidence intervals — know how certain the estimates are
- JSON + Markdown output — machine-readable for pipelines, human-readable for review
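For a sense of what the per-item output enables, here is a hypothetical sketch of a single entry in the JSON output (the real `results.json` structure and key names are defined by the tool and may differ):

```jsonc
// Hypothetical sketch of one per-item entry — illustrative only;
// every key name below is an assumption.
{
  "itemId": "item-002",
  "field": "label",
  "classification": "contested",   // high confidence | moderate | contested | diagnostic alert
  "agreement": 0.41,               // per-field agreement score (assumed name)
  "ci95": [0.22, 0.58],            // bootstrap confidence interval (assumed name)
  "guidance": "Raters split between 'bug-report' and 'feature-request'; route to human review."
}
```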
## Templates
Start with a template, customize for your domain:
| Template | Use Case |
|---|---|
| `classification` | Binary or multi-class classification |
| `extraction` | Structured field extraction accuracy |
| `summarization` | Summary quality assessment |
| `evidence-evaluation` | Evidence relevance and limitations |
```bash
conkurrence-eval init --template summarization
```

## CLI Reference

```bash
conkurrence-eval run       # Run convergence analysis
conkurrence-eval report    # Generate Markdown report
conkurrence-eval estimate  # Estimate cost (no API calls)
conkurrence-eval compare   # Compare two runs (before/after)
conkurrence-eval finalize  # Merge expert decisions into golden dataset
conkurrence-eval init      # Initialize from template
```

Run any command with `--help` for full options and examples.
## Requirements
- Node.js >= 20
- AWS credentials with Bedrock access (`bedrock:InvokeModel`)
- 7 raters recommended for production validation (minimum 2; see the docs for rationale and the config sketch below)
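As an illustration of the rater setup, here is a minimal hypothetical `config.json` sketch matching the 4-rater example from Quick Start. The key names are assumptions, and while the Bedrock model IDs are real identifiers, whether the tool accepts them in this form is not confirmed; generate the real file with `conkurrence-eval init`.

```jsonc
// Hypothetical config.json sketch — key names are assumptions; see the
// template-generated config for the real structure.
{
  "raters": [
    "anthropic.claude-3-5-sonnet-20240620-v1:0",
    "anthropic.claude-3-haiku-20240307-v1:0",
    "meta.llama3-70b-instruct-v1:0",
    "mistral.mistral-large-2402-v1:0"
  ],
  "region": "us-east-1",   // AWS region with Bedrock access (assumed key)
  "bootstrap": 1000        // bootstrap resamples for confidence intervals (assumed key)
}
```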
## License
Business Source License 1.1 (BUSL-1.1). Free for evaluation (3 runs). Commercial use requires a license key from conkurrence.com. Converts to Apache 2.0 on 2030-03-28. See LICENSE.md.