JSPM

auraclassify

1.0.0

    AuraClassify is a powerful content moderation and classification system built on TensorFlow.js, utilizing the Universal Sentence Encoder for text analysis.

    Package Exports

    • auraclassify
    • auraclassify/index.js

    This package does not declare an "exports" field, so the exports above have been automatically detected and optimized by JSPM instead. If a package subpath is missing, consider opening an issue on the original package (auraclassify) requesting support for the "exports" field. If that is not possible, create a JSPM override to customize the exports field for this package.
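    For reference, the detected subpaths above correspond to an "exports" map along these lines in package.json (a sketch of the standard Node.js field, not something auraclassify currently declares):

    {
        "exports": {
            ".": "./index.js",
            "./index.js": "./index.js"
        }
    }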

    Readme

    AuraClassify

    AuraClassify is a powerful content moderation and classification system built on TensorFlow.js, utilizing the Universal Sentence Encoder for text analysis.

    Features

    • Content moderation and classification
    • Multi-category support with subcategories
    • Confidence scoring and sentiment analysis
    • Detailed analysis reports
    • Easy to train and use
    • Supports both browser and Node.js environments

    Installation

    npm install auraclassify
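
    The Node.js backend mentioned in the Quick Start below relies on the native TensorFlow binding. Depending on how auraclassify declares its dependencies (an assumption, not stated in this README), you may need to install it yourself:

    npm install @tensorflow/tfjs-node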

    Quick Start

    const AuraClassify = require('auraclassify');
    
    // Initialize classifier
    const classifier = new AuraClassify({
        backend: "tfjs" // or "tfjs-node" for Node.js backend
    });
    
    // Train the model
    await classifier.train({
        dataset: trainingData, // see the Training Data Format section below
        log: true,
        batchSize: 4
    });
    
    // Classify text
    const result = await classifier.classify("Text to analyze");

    Training Data Format

    Training data should be an array of objects with input and output properties:

    const trainingData = [
        {
            input: "Example text content",
            output: "category" // or "category/subcategory"
        }
    ];

    Supported Categories

    • safe: Safe content
    • sexual: Adult content
    • harassment: Harassment content
    • hate: Hate speech
    • illicit: Illegal content
    • self-harm: Self-harm content
    • violence: Violent content

    Each category can have subcategories (e.g., "violence/threatening", "self-harm/instructions").
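
    For example, a small training set covering several of the categories above (the example texts are hypothetical) could look like:

    const trainingData = [
        { input: "Have a great day, see you tomorrow!", output: "safe" },
        { input: "Show up here again and I will hurt you", output: "violence/threatening" },
        { input: "Step-by-step guide to forging ID documents", output: "illicit" }
    ];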

    API Reference

    Constructor

    const classifier = new AuraClassify({ backend: "tfjs" }); // "tfjs" is the default backend

    Methods

    train(options)

    await classifier.train({
        dataset: trainingData,
        log: true,
        batchSize: 4
    });

    classify(text)

    const result = await classifier.classify("Text to analyze");

    save(path)

    await classifier.save("path/to/model.json");

    load(path)

    await classifier.load("path/to/model.json");
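
    Putting the methods together, a typical workflow (a sketch using only the signatures above) is to train once, persist the model, and reload it later instead of retraining:

    // Train and persist the model
    await classifier.train({ dataset: trainingData, batchSize: 4 });
    await classifier.save("path/to/model.json");

    // Later, in a new session: restore and classify
    const restored = new AuraClassify({ backend: "tfjs" });
    await restored.load("path/to/model.json");
    const result = await restored.classify("Text to analyze");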

    How It Works

    AuraClassify uses the Universal Sentence Encoder to convert text into high-dimensional vectors (embeddings). These embeddings capture semantic meaning, allowing the system to understand context and nuance in text.

    The classification process involves:

    1. Text embedding generation
    2. Similarity comparison with trained examples
    3. Category and subcategory detection
    4. Confidence scoring
    5. Detailed analysis generation
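
    The sketch below illustrates steps 1 and 2 only, written directly against the Universal Sentence Encoder packages rather than AuraClassify's internals; the function nearestLabel and the examples format are assumptions for illustration, not part of this package's API:

    require('@tensorflow/tfjs'); // registers a TensorFlow.js backend for the encoder
    const use = require('@tensorflow-models/universal-sentence-encoder');

    // Classify by finding the nearest trained example in embedding space.
    async function nearestLabel(text, examples) {
        const model = await use.load();
        const embeddings = await model.embed([text, ...examples.map(e => e.input)]);
        const [query, ...refs] = await embeddings.array(); // one 512-dim vector per sentence

        // Cosine similarity between two embedding vectors.
        const cosine = (a, b) => {
            let dot = 0, na = 0, nb = 0;
            for (let i = 0; i < a.length; i++) {
                dot += a[i] * b[i];
                na += a[i] * a[i];
                nb += b[i] * b[i];
            }
            return dot / (Math.sqrt(na) * Math.sqrt(nb));
        };

        // Pick the trained example whose embedding is closest to the input text.
        let best = { label: null, confidence: -1 };
        refs.forEach((vec, i) => {
            const score = cosine(query, vec);
            if (score > best.confidence) {
                best = { label: examples[i].output, confidence: score };
            }
        });
        return best;
    }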

    Example Output

    {
        analysis: {
            input: {
                text: "Original text",
                length: 13,
                wordCount: 2
            },
            result: {
                label: "category/subcategory",
                confidence: 0.85,
                confidenceLevel: "HIGH"
            },
            // ... additional analysis data
        },
        summary: {
            decision: "CATEGORY (HIGH confidence level)",
            confidence: 0.85,
            status: "RELIABLE"
        }
    }
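
    Assuming the structure shown above, the result can be consumed like this (field names taken from the example output):

    const result = await classifier.classify("Original text");
    console.log(result.summary.decision);      // e.g. "CATEGORY (HIGH confidence level)"
    console.log(result.analysis.result.label); // e.g. "category/subcategory"

    if (result.summary.confidence >= 0.8) {
        // treat the classification as reliable
    }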

    License

    Apache License 2.0

    Contributing

    Contributions are welcome! Please feel free to submit a Pull Request.

    Support

    For issues and feature requests, please use the GitHub issues page.