JSPM

nafex

1.0.3
  • Downloads 6
  • License ISC

Description: Comprehensive testing library for API testing, system monitoring, and step execution tracking

Package Exports

  • nafex
  • nafex/dist/index.js

This package does not declare an exports field, so the exports above have been automatically detected and optimized by JSPM instead. If any package subpath is missing, it is recommended to post an issue to the original package (nafex) to support the "exports" field. If that is not possible, create a JSPM override to customize the exports field for this package.
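For reference, a declared exports field covering the two detected subpaths above could look like this in package.json. This is a hedged sketch based on JSPM's detection, not the package's actual configuration:

```json
{
  "exports": {
    ".": "./dist/index.js",
    "./dist/index.js": "./dist/index.js"
  }
}
```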

Readme

nafex


Comprehensive testing library for API testing, system monitoring, and step execution tracking

🆓 Free to Use - This library is provided free of charge to all users. Please use it responsibly and in accordance with the ISC license.

Overview

nafex is a powerful Node.js library that provides a complete suite of tools for API testing, system resource monitoring, and step execution tracking. It's designed to help developers and QA engineers build robust test automation workflows with integrated monitoring and reporting capabilities.

Key Features

  • 🔒 API Security Testing - Automated checks for HTTPS enforcement, security headers, SQL injection, and XSS vulnerabilities
  • ⚡ Performance Testing - Concurrent request testing with percentile metrics (P50, P95, P99)
  • 🔄 Reliability Testing - Error handling validation and concurrent request stability testing
  • 📊 System Resource Monitoring - Real-time CPU, memory, and disk usage tracking with performance scoring
  • 📝 Step Execution Tracking - Track and report on multi-step test executions with detailed timing
  • 📄 Comprehensive Reporting - JSON reports and console summaries for all test activities
  • 🎨 Developer-Friendly - Clean API, TypeScript-ready, ESM support

Use Cases

  • API endpoint testing and validation
  • Performance benchmarking and load testing
  • Security vulnerability scanning
  • System resource monitoring during test execution
  • Automated test suite orchestration
  • CI/CD pipeline integration

Why This Library is Useful

nafex addresses common pain points developers face when building and maintaining test automation:

🚀 Performance & Simplicity

  • Zero Configuration Required - Get started in minutes with sensible defaults. No complex setup files or extensive configuration needed.
  • Lightweight & Fast - Minimal dependencies, optimized for performance. Built with native Node.js APIs for maximum speed.
  • One Library, Multiple Solutions - Instead of juggling separate tools for API testing, monitoring, and step tracking, get everything in one cohesive package.

🎯 Real-World Value

  • Security-First Testing - Built-in security checks (HTTPS, headers, injection attacks) help catch vulnerabilities before production. No need to integrate separate security scanners.
  • Performance Insights - Get percentile metrics (P50, P95, P99) out of the box. Understand not just average response times, but real-world user experience.
  • System Visibility - Monitor resource usage during test execution to identify performance bottlenecks and resource leaks.
  • Production-Ready - Test the same way you deploy. Works seamlessly in CI/CD pipelines, Docker containers, and production environments.

💡 Developer Experience

  • Clean, Intuitive API - Methods are self-explanatory. trackStep() does exactly what it says. No cryptic configuration or hidden behavior.
  • Comprehensive Reporting - Automatic JSON reports and console summaries give you immediate insights without manual parsing.
  • Flexible Integration - Use standalone or combine components. Mix and match to fit your workflow.
  • TypeScript-Ready - ESM-first design with TypeScript definitions support for better IDE autocomplete and type safety.

🔄 Practical Use Cases

  • API Development - Test endpoints during development to catch issues early
  • QA Automation - Build comprehensive test suites with integrated monitoring
  • Performance Testing - Identify slow endpoints and bottlenecks before users do
  • Security Audits - Automatically scan for common security issues
  • CI/CD Pipelines - Track test execution steps and generate reports for build artifacts

Bottom Line: Stop cobbling together multiple tools. nafex gives you a complete testing and monitoring solution that's simple to use, powerful enough for production, and free to use forever.

Installation

Install the package using npm or yarn:

# Using npm
npm install nafex

# Using yarn
yarn add nafex

Usage

Quick Start

import { ApiTester, generateHTMLReport, generateJSONfile, generateCSVFile } from 'nafex';

async function runApiTests() {
  // Initialize the API tester
  const api = new ApiTester(
    'https://jsonplaceholder.typicode.com',
    {}, // authConfig
    5000 // timeout in ms
  );

  // Test 1: Get posts
  const postsReport = await api.testEndpoint('/posts', {
    headers: { 'x-env': 'staging' } // optional per-request headers
  });

  // Test 2: Get users
  const usersReport = await api.testEndpoint('/users', {
    method: 'GET',
    headers: { 'x-env': 'staging' }
  });

  // Test 3: Get comments
  const commentsReport = await api.testEndpoint('/comments', {
    method: 'GET',
    headers: { 'x-env': 'staging', 'Content-Type': 'application/json' }
  });

  // Consolidate raw step results from each call and generate reports
  const allResults = []
    .concat(postsReport.results || [])
    .concat(usersReport.results || [])
    .concat(commentsReport.results || []);

  // const htmlPath = generateHTMLReport(allResults, 'api-tests.html');
  // const jsonPath = generateJSONfile(allResults, 'api-tests.json');
  // const csvPaths = generateCSVFile(allResults, 'api-tests');
  // console.log('HTML:', htmlPath);
  // console.log('JSON:', jsonPath);
  // console.log('CSV:', csvPaths);
}

// Run the tests
runApiTests().catch(err => console.error('Error running API tests:', err));

Output Demo

Here's a realistic example showing nafex in action with actual console output:

[Screenshot: nafex demo console output]

API Testing

The ApiTester class provides comprehensive API endpoint testing including security, performance, and reliability checks.

import { ApiTester } from 'nafex';

// Initialize with base URL and authentication
const tester = new ApiTester(
  'https://api.example.com',
  {
    type: 'bearer',  // or 'basic' or 'apiKey'
    token: 'your-access-token'
    // For basic auth: { type: 'basic', username: 'user', password: 'pass' }
    // For API key: { type: 'apiKey', apiKey: 'key', headerName: 'X-API-Key' }
  },
  5000  // timeout in milliseconds
);

// Test an endpoint
const report = await tester.testEndpoint('/users', {
  headers: {
    'Custom-Header': 'value'
  }
});

console.log(report);
// Output: Array of test results with passed/failed status

Security Tests Performed:

  • HTTPS enforcement validation
  • Security headers check (X-Content-Type-Options, X-Frame-Options, etc.)
  • SQL injection vulnerability scanning
  • XSS (Cross-Site Scripting) vulnerability scanning
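A headers check like the one described above amounts to comparing a response's headers against a list of recommended names. A minimal sketch of the idea — the function name and return shape here are hypothetical, not nafex's actual implementation:

```javascript
// Hypothetical sketch of a security-headers check. Takes a plain object of
// lowercase header names and reports which recommended headers are missing.
function checkSecurityHeaders(headers) {
  const required = [
    'x-content-type-options',
    'x-frame-options',
    'strict-transport-security',
  ];
  const missing = required.filter((name) => !(name in headers));
  return { passed: missing.length === 0, missing };
}

// Example: a response that sets two of the three recommended headers
const result = checkSecurityHeaders({
  'x-content-type-options': 'nosniff',
  'x-frame-options': 'DENY',
});
// result.passed === false; result.missing === ['strict-transport-security']
```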

Performance Metrics:

  • Average response time
  • P50, P95, P99 percentile response times
  • Total requests processed
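Percentile metrics like these are computed from the raw list of response times. A small, dependency-free sketch using the nearest-rank method — illustrative only; nafex's exact calculation may differ:

```javascript
// Compute a percentile from an array of response times (ms) using the
// nearest-rank method. Illustrative; not nafex's internal code.
function percentile(times, p) {
  const sorted = [...times].sort((a, b) => a - b);
  const rank = Math.ceil((p / 100) * sorted.length);
  return sorted[Math.max(0, rank - 1)];
}

const times = [120, 95, 210, 130, 105, 98, 400, 110, 102, 99];
const report = {
  avg: times.reduce((sum, t) => sum + t, 0) / times.length, // 146.9
  p50: percentile(times, 50), // 105
  p95: percentile(times, 95), // 400
  p99: percentile(times, 99), // 400
};
```

The spread between p50 and p95 here (105 ms vs 400 ms) is exactly what averages hide: one slow outlier dominates the tail.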

Reliability Tests:

  • Error handling validation (404 responses)
  • Concurrent request stability

Step Execution Tracking

Track multi-step test executions with automatic timing and reporting.

import { StepTracker } from 'nafex';

// Create a new instance of StepTracker
const tracker = new StepTracker();

async function main() {
  // Step 1: Build fixtures
  await tracker.trackStep('Build fixtures', async () => {
    console.log('Building test fixtures...');
    // Example: simulate fixture setup
    await new Promise(resolve => setTimeout(resolve, 1000));
    console.log('Fixtures built successfully.');
  });

  // Step 2: Invoke service
  const functionName = async () => {
    console.log('Invoking service...');
    // Example: simulate service call
    await new Promise(resolve => setTimeout(resolve, 1000));
    console.log('Service invoked successfully.');
  };

  await tracker.trackStep('Invoke service', functionName);
}

// Run the test
main().catch(err => {
  console.error('Test failed:', err);
  process.exit(1);
});

Features:

  • Automatic timing for each step
  • Success/failure status tracking
  • Console summaries after each step
  • Duration statistics
  • JSON report generation
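The pattern underlying trackStep() is straightforward: wrap an async function, time it, and record the outcome. A generic sketch of that pattern — the class and field names below follow the report shape shown elsewhere in this README, but this is not nafex's actual StepTracker:

```javascript
// Generic sketch of the step-tracking pattern: run an async function and
// record its name, duration, and status. Not nafex's implementation.
class MiniTracker {
  constructor() {
    this.steps = [];
  }

  async trackStep(stepName, stepFunction) {
    const start = Date.now();
    try {
      const result = await stepFunction();
      this.steps.push({ stepName, status: 'PASSED', duration: (Date.now() - start) / 1000 });
      return result;
    } catch (err) {
      this.steps.push({ stepName, status: 'FAILED', duration: (Date.now() - start) / 1000 });
      throw err; // re-throw so callers still see the failure
    }
  }
}

(async () => {
  const tracker = new MiniTracker();
  await tracker.trackStep('Build fixtures', async () => {
    // simulate work
    await new Promise((resolve) => setTimeout(resolve, 10));
  });
  console.log(tracker.steps);
})();
```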

System Resource Monitoring

Monitor CPU, memory, and disk usage during test execution.

Using the Singleton

import { systemResourceMonitor } from 'nafex';

// Start monitoring
await systemResourceMonitor.startMonitoring('My Test Suite', 2000); // 2 second intervals

// Your test code here...

// Stop monitoring
const summary = await systemResourceMonitor.stopMonitoring();
console.log(summary);

Using the Class (Advanced Configuration)

import { SystemResourceMonitor } from 'nafex';

const monitor = new SystemResourceMonitor({
  maxHistorySize: 1000,        // Maximum data points to store
  diskCacheTTL: 30000,         // Disk metrics cache TTL (ms)
  enableLogging: true,          // Enable/disable logging
  logLevel: 'info',            // 'debug', 'info', 'warn', 'error'
  monitorInterval: 2000        // Default monitoring interval (ms)
});

// Collect single metrics snapshot
const metrics = await monitor.collectMetrics('Checkpoint 1');
console.log(metrics);
// Output: { timestamp, testStep, processCPU, processMemory, systemMemory, diskUsage, performanceScore }

// Start continuous monitoring
await monitor.startMonitoring('Test Run', 1000);

// Pause/resume monitoring
monitor.pauseMonitoring();
monitor.resumeMonitoring();

// Stop and get summary
const summary = await monitor.stopMonitoring();

Monitored Metrics:

  • Process CPU usage (%)
  • Process memory usage (MB)
  • System memory usage (MB, %)
  • Disk usage (MB, %)
  • Performance score (calculated composite metric)
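A snapshot in roughly this shape can be assembled from Node's built-in `process` global. The field names below mirror the collectMetrics() output shown earlier, but the code — and especially the score formula — is purely illustrative, not nafex's internals:

```javascript
// Illustrative metrics snapshot built from Node's global `process` object.
// The performanceScore formula here is hypothetical; nafex computes its own
// composite score.
function snapshot(testStep) {
  const mem = process.memoryUsage();
  const heapUsedMB = mem.heapUsed / 1024 / 1024;
  const rssMB = mem.rss / 1024 / 1024;
  return {
    timestamp: new Date().toISOString(),
    testStep,
    processMemory: { heapUsedMB, rssMB },
    // Hypothetical score: lower memory pressure => higher score.
    performanceScore: Math.max(0, 100 - heapUsedMB),
  };
}

console.log(snapshot('Checkpoint 1'));
```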

Report Generation

Generate comprehensive JSON reports and console summaries.

import { ReportGenerator } from 'nafex';

const reporter = new ReportGenerator();

reporter.addStep({
  stepName: 'Test Step',
  status: 'PASSED',
  duration: 1.5,
  timestamp: new Date().toISOString()
});

// Generate JSON report
reporter.generateJSON();
// Saves to ./reports/execution-report.json

// Print summary to console
reporter.generateSummary();

// Generate both
reporter.saveAllReports();

Utility Functions

import { printBox } from 'nafex';

// Print formatted console boxes
printBox([
  'System Information',
  'Platform: Windows 10',
  'CPU: Intel i7',
  'Memory: 16 GB'
]);
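Boxed console output of this kind comes down to a few lines of string padding. A rough sketch of the idea, returning the box as a string so it is easy to test — nafex's own formatting may differ:

```javascript
// Minimal box formatter in the spirit of printBox(). Not nafex's code.
function formatBox(lines) {
  const width = Math.max(...lines.map((l) => l.length));
  const top = `┌${'─'.repeat(width + 2)}┐`;
  const bottom = `└${'─'.repeat(width + 2)}┘`;
  const body = lines.map((l) => `│ ${l.padEnd(width)} │`);
  return [top, ...body, bottom].join('\n');
}

console.log(formatBox(['System Information', 'Memory: 16 GB']));
```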

API Reference

ApiTester

Constructor

new ApiTester(baseURL, authConfig?, timeout?)

Parameters:

  • baseURL (string) - Base URL for API requests
  • authConfig (object, optional) - Authentication configuration
    • type (string) - 'bearer' | 'basic' | 'apiKey'
    • token (string) - Bearer token (for type: 'bearer')
    • username (string) - Username (for type: 'basic')
    • password (string) - Password (for type: 'basic')
    • apiKey (string) - API key value (for type: 'apiKey')
    • headerName (string) - Header name for API key (for type: 'apiKey')
  • timeout (number, optional) - Request timeout in milliseconds (default: 5000)

Methods

  • testEndpoint(endpoint, config?) - Run security, performance, and reliability tests on an endpoint
    • Returns: Promise<Array> - Array of test result objects

StepTracker

Constructor

new StepTracker()

Methods

  • trackStep(stepName, stepFunction) - Track execution of a step

    • Parameters:
      • stepName (string) - Name of the step
      • stepFunction (function) - Async function to execute
    • Returns: Promise<any> - Result of the step function
  • reporter - Access to the internal ReportGenerator instance

SystemResourceMonitor

Constructor

new SystemResourceMonitor(options?)

Options:

  • maxHistorySize (number) - Maximum number of data points to store (default: 1000)
  • diskCacheTTL (number) - Disk metrics cache TTL in milliseconds (default: 30000)
  • enableLogging (boolean) - Enable logging (default: true)
  • logLevel (string) - Log level: 'debug' | 'info' | 'warn' | 'error' (default: 'info')
  • monitorInterval (number) - Default monitoring interval in milliseconds (default: 2000)

Methods

  • startMonitoring(testName, intervalMs?) - Start continuous monitoring

    • Parameters:
      • testName (string) - Name of the test/monitoring session
      • intervalMs (number, optional) - Interval between collections in milliseconds
    • Returns: Promise<Date> - Start time
  • stopMonitoring() - Stop and get summary

    • Returns: Promise<Object> - Session summary object
  • collectMetrics(testStep) - Collect metrics at current moment

    • Parameters:
      • testStep (string) - Label for this collection point
    • Returns: Promise<Object> - Metrics object with CPU, memory, disk usage, and performance score
  • pauseMonitoring() - Pause without stopping

    • Returns: boolean - true if paused successfully, false if not monitoring
  • resumeMonitoring() - Resume paused monitoring

    • Returns: boolean - true if resumed successfully, false if not monitoring
  • getData(filters?) - Get all collected data

    • Parameters:
      • filters (object, optional) - Optional filters for data retrieval
    • Returns: Array - Array of collected monitoring data points
  • getSummaryStats() - Get statistical summary

    • Returns: Object - Statistical summary of collected metrics
  • clearData() - Clear all monitoring data

    • Returns: void

ReportGenerator

Constructor

new ReportGenerator()

Methods

  • addStep(stepData) - Add a step result to the report
  • generateJSON() - Generate and save JSON report
  • generateSummary() - Print summary to console
  • saveAllReports() - Generate both JSON and summary

printBox

printBox(lines: string[])

Print an array of lines in a formatted console box.

Configuration / Options

Environment Variables

Currently, the library uses programmatic configuration through constructor options and method parameters. No environment variables are required.

Build Configuration

The package uses ES modules (ESM). Ensure your project is configured for ESM:

{
  "type": "module"
}
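If the consuming project is CommonJS rather than ESM, an ESM-only package can still be loaded with dynamic import(). The sketch below demonstrates the pattern with a built-in module; substituting 'nafex' for 'node:os' would apply it to this package:

```javascript
// Dynamic import() works from both CommonJS and ESM code. Shown with a
// built-in module; replace 'node:os' with 'nafex' in a real project.
async function loadPlatform() {
  const os = await import('node:os');
  return os.platform();
}

loadPlatform().then((p) => console.log('Platform:', p));
```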

Report Directories

  • StepTracker reports: ./Report/step-results.json
  • ReportGenerator reports: ./reports/execution-report.json

These directories are created automatically if they don't exist.

Examples

Complete Integration Example

import {
  ApiTester,
  StepTracker,
  SystemResourceMonitor,
  ReportGenerator,
  printBox
} from 'nafex';

async function runTestSuite() {
  // Initialize components
  const apiTester = new ApiTester('https://api.example.com', {
    type: 'bearer',
    token: process.env.API_TOKEN
  });

  const stepTracker = new StepTracker();
  const monitor = new SystemResourceMonitor({
    monitorInterval: 1000,
    enableLogging: true
  });

  // Start system monitoring
  await monitor.startMonitoring('Full Test Suite');

  try {
    // Track API testing step
    await stepTracker.trackStep('API Endpoint Tests', async () => {
      const results = await apiTester.testEndpoint('/users');
      return results;
    });

    // Track additional steps
    await stepTracker.trackStep('Data Validation', async () => {
      // Your validation logic
    });

    // Generate final reports
    stepTracker.reporter.saveAllReports();

  } finally {
    // Stop monitoring
    const summary = await monitor.stopMonitoring();
    console.log('Monitoring Summary:', summary);
  }
}

runTestSuite().catch(err => console.error('Test suite failed:', err));

CI/CD Integration Example

import { StepTracker, systemResourceMonitor } from 'nafex';

async function ciTestPipeline() {
  const tracker = new StepTracker();

  await systemResourceMonitor.startMonitoring('CI Pipeline');

  await tracker.trackStep('Install Dependencies', async () => {
    // npm install
  });

  await tracker.trackStep('Run Unit Tests', async () => {
    // npm test
  });

  await tracker.trackStep('Run Integration Tests', async () => {
    // npm run test:integration
  });

  const summary = await systemResourceMonitor.stopMonitoring();

  // Save reports for CI artifacts
  tracker.reporter.saveAllReports();

  // Exit with error if any step failed
  const failedSteps = tracker.reporter.getSteps().filter(s => s.status === 'FAILED');
  if (failedSteps.length > 0) {
    process.exit(1);
  }
}

ciTestPipeline().catch(err => {
  console.error('Pipeline failed:', err);
  process.exit(1);
});
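In a GitHub Actions workflow, the report directories generated above can be kept as build artifacts. A hedged sketch — the step names and script filename are placeholders, not part of nafex:

```yaml
# Hypothetical workflow steps: run the pipeline script, then upload the
# JSON report directories as CI artifacts. Names are placeholders.
- name: Run test pipeline
  run: node ci-pipeline.js

- name: Upload reports
  uses: actions/upload-artifact@v4
  with:
    name: test-reports
    path: |
      reports/
      Report/
```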

Upcoming Features

We're actively working on enhancing nafex with the following features:

  • 📊 Comprehensive Reporting - Enhanced reporting capabilities with multiple export formats

    • HTML Reports - Rich, interactive HTML reports with visualizations and charts
    • JSON Reports - Structured JSON reports for programmatic processing (currently available)
    • CSV Exports - Export test results and metrics to CSV for data analysis in spreadsheet applications
  • 📈 Benchmarking - Compare performance metrics across multiple test runs

    • Historical performance tracking
    • Baseline comparisons
    • Performance regression detection
    • Trend analysis and visualization
  • ⚡ Enhanced Performance Metrics - Advanced performance analytics and insights

    • Detailed performance breakdowns by endpoint
    • Request/response size tracking
    • Network latency analysis
    • Throughput calculations (requests per second)
    • Advanced percentile analysis

Stay tuned for updates! You can track progress on our GitHub repository.

Documentation

For more detailed documentation, examples, and advanced usage patterns, visit the project's GitHub repository.

Contributing

Contributions are welcome! We encourage you to contribute to nafex by:

  1. Reporting Issues: Found a bug or have a feature request? Open an issue on GitHub.

  2. Submitting Pull Requests:

    • Fork the repository
    • Create a feature branch (git checkout -b feature/amazing-feature)
    • Make your changes
    • Add tests if applicable
    • Commit your changes (git commit -m 'Add amazing feature')
    • Push to the branch (git push origin feature/amazing-feature)
    • Open a Pull Request
  3. Code Standards:

    • Follow existing code style and patterns
    • Write clear commit messages
    • Ensure all exports are properly documented

License

Free to Use - This library is provided free of charge to all users.

ISC License

Copyright (c) Junaid

Permission to use, copy, modify, and/or distribute this software for any purpose with or without fee is hereby granted, provided that the above copyright notice and this permission notice appear in all copies.

THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.

Support / Contact

Get Help

  • 📖 Documentation: Check the GitHub repository for examples and guides
  • 🐛 Bug Reports: Open an issue on GitHub
  • 💬 Questions: Use GitHub Discussions or Issues for questions

Maintainer

Junaid - junaidbuss@gmail.com

Community

  • 🌟 Star the repository if you find it useful
  • 🔗 Share with others who might benefit
  • 🤝 Contribute improvements and bug fixes

Made with ❤️ for developers who care about quality and reliability.