JSPM

Found 6 results for toxicity-detection

glin-profanity

Glin-Profanity is a lightweight and efficient npm package designed to detect and filter profane language in text inputs across multiple languages, whether you're building a chat application, a comment section, or any other platform that handles user-generated content.

  • v3.3.0
  • Score: 72.65
  • Published
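
A minimal usage sketch, assuming the `Filter` class, its `languages` option, and the shape of the `checkProfanity` result match the package's documented examples; verify the names against the current release:

```ts
// Sketch only: class and method names are assumed from glin-profanity's
// documented examples and may differ in the current release.
import { Filter } from 'glin-profanity';

const filter = new Filter({ languages: ['english', 'spanish'] });

// Screen a user-submitted message before publishing it.
const result = filter.checkProfanity('some user input');
if (result.containsProfanity) {
  console.log('Flagged words:', result.profaneWords);
}
```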

@sabir7718/abuse-detector

A powerful hybrid abuse/toxicity detection module for Node.js. Combines Aho-Corasick pattern matching, fuzzy matching, phonetic normalization, and a TensorFlow.js model for accurate real-time content moderation.

  • v1.0.1
  • Score: 35.04
  • Published
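
A hypothetical integration sketch: `AbuseDetector`, its options, and the `analyze` result are illustrative names inferred from the feature list, not the package's verified API:

```ts
// Hypothetical sketch: every exported name and option below is an
// assumption based on the package description, not its documented API.
import { AbuseDetector } from '@sabir7718/abuse-detector';

const detector = new AbuseDetector({
  fuzzyMatching: true, // catch obfuscated spellings such as "sh1t"
  phonetic: true,      // catch sound-alike variants
  useModel: true,      // enable the TensorFlow.js classifier
});

// Hybrid result: dictionary hits plus a model-derived toxicity score.
const verdict = await detector.analyze('some user message');
console.log(verdict.isAbusive, verdict.score);
```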

@llm-guardrails/core

TypeScript-native LLM guardrails with behavioral analysis, budget controls, and topic gating. Zero runtime dependencies. 100% test pass rate.

  • v0.4.1
  • Score: 32.44
  • Published
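
To illustrate how budget controls and topic gating might be wired up, here is a hypothetical sketch; the `Guardrails` class and its option names are assumptions drawn from the description, not the library's verified API:

```ts
// Hypothetical sketch: class, options, and result shape are assumptions
// inferred from the description (budget controls, topic gating).
import { Guardrails } from '@llm-guardrails/core';

const guard = new Guardrails({
  budget: { maxTokensPerSession: 10_000 }, // budget control
  blockedTopics: ['self-harm', 'weapons'], // topic gating
});

// Gate a prompt before it ever reaches the LLM.
const check = guard.evaluate('user prompt text');
if (!check.allowed) {
  console.log('Blocked:', check.reason);
}
```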

@vettly/sdk

Content moderation SDK for apps. Filtering, reporting, blocking, and audit trails.

  • v0.2.8
  • Score: 29.76
  • Published
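
A hypothetical sketch of how filtering and audit trails might combine; the client shape and method names are illustrative guesses from the listed features, not the SDK's verified API:

```ts
// Hypothetical sketch: client constructor and methods are assumptions
// based on the listed features (filtering, blocking, audit trails).
import { VettlyClient } from '@vettly/sdk';

const vettly = new VettlyClient({ apiKey: process.env.VETTLY_API_KEY! });

// Moderate a piece of content, then record the decision for auditing.
const decision = await vettly.moderate({ text: 'user comment' });
if (decision.blocked) {
  await vettly.audit.log({ contentId: 'c-123', action: 'blocked' });
}
```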

content-guard

🛡️ Advanced content analysis and moderation system with multi-variant optimization. Features context-aware detection, harassment prevention, and ML-powered toxicity analysis. Pre-1.0 development version.

  • v0.3.1
  • Score: 29.64
  • Published
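
A hypothetical sketch of context-aware analysis; since the package is pre-1.0, the names below are assumptions and the real API may differ substantially:

```ts
// Hypothetical sketch: all names are assumptions from the description;
// the package is pre-1.0, so its actual API may change.
import { ContentGuard } from 'content-guard';

const guard = new ContentGuard({ contextAware: true });

// Supply surrounding messages so detection can use conversational context.
const analysis = await guard.analyze('message text', {
  context: ['previous message in the thread'],
});
console.log(analysis.toxicity, analysis.categories);
```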

ecokind-moderation-sdk

EcoKind - A privacy-first moderation SDK powered by decentralized LLMs on the Internet Computer. Making the internet safer, one message at a time.

  • v1.1.2
  • Score: 16.78
  • Published
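
A hypothetical sketch of what calling a canister-hosted moderator might look like; the `EcoKind` client, its options, and the placeholder canister ID are all assumptions, not the SDK's documented API:

```ts
// Hypothetical sketch: the EcoKind client and its options are assumptions;
// an Internet Computer canister presumably hosts the moderation model.
import { EcoKind } from 'ecokind-moderation-sdk';

const moderator = new EcoKind({ canisterId: 'aaaaa-aa' }); // placeholder ID

// Messages are moderated by the decentralized model, not a central server.
const result = await moderator.moderate('user message');
console.log(result.safe ? 'OK' : `Flagged: ${result.reason}`);
```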