JSPM

  • Downloads 12912
  • License ISC

Glin-Profanity is a lightweight and efficient npm package designed to detect and filter profane language in text inputs across multiple languages. Whether you’re building a chat application, a comment section, or any platform where user-generated content is involved, Glin-Profanity helps you maintain a clean and respectful environment.

Package Exports

  • glin-profanity
  • glin-profanity/lib/index.js

This package does not declare an exports field, so the exports above were automatically detected and optimized by JSPM instead. If a package subpath is missing, the recommended fix is to file an issue against the original package (glin-profanity) asking for "exports" field support. If that is not possible, create a JSPM override to customize the exports field for this package.
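For reference, the detected subpaths above would correspond to an "exports" field along these lines in the package's package.json. This is a sketch of the standard Node.js field, not the package's actual manifest:

```json
{
  "name": "glin-profanity",
  "exports": {
    ".": "./lib/index.js"
  }
}
```

With such a field in place, JSPM (and Node.js) would resolve the package entry point directly instead of relying on automatic detection.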

Readme

Glin Profanity - ML-Powered Profanity Detection

ML-Powered Profanity Detection for the Modern Web


Installation

npm install glin-profanity

Quick Start

import { checkProfanity, Filter } from 'glin-profanity';

// Simple check
const result = checkProfanity("This is f4ck1ng bad", {
  detectLeetspeak: true,
  languages: ['english']
});

result.containsProfanity  // true
result.profaneWords       // ['fucking']

// With replacement
const filter = new Filter({
  replaceWith: '***',
  detectLeetspeak: true
});
filter.checkProfanity("sh1t happens").processedText  // "*** happens"

React Hook

import { useProfanityChecker } from 'glin-profanity';

function ChatInput() {
  const { result, checkText } = useProfanityChecker({
    detectLeetspeak: true
  });

  return (
    <>
      <input onChange={(e) => checkText(e.target.value)} />
      {result?.containsProfanity && <span>Clean up your language</span>}
    </>
  );
}

Features

| Feature | Description |
| --- | --- |
| Leetspeak detection | Catches f4ck, sh1t, @ss |
| Unicode normalization | Handles Cyrillic/Greek lookalikes |
| ML toxicity detection | TensorFlow.js integration |
| 23 languages | Arabic to Turkish |
| Result caching | LRU cache for repeated checks |
| React hook | useProfanityChecker built-in |
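The leetspeak feature can be illustrated with a minimal normalization sketch. This is hypothetical code assuming a simple character map and a plain word list; the library's actual mapping, levels, and matching logic are more involved:

```typescript
// Hypothetical single-character leetspeak map (illustrative, not the library's table).
const LEET_MAP: Record<string, string> = {
  "4": "a", "@": "a", "3": "e", "1": "i", "0": "o", "5": "s", "$": "s", "7": "t",
};

// Lowercase the input and fold leetspeak characters back to letters.
function normalizeLeet(text: string): string {
  return text
    .toLowerCase()
    .split("")
    .map((ch) => LEET_MAP[ch] ?? ch)
    .join("");
}

// A normalized token can then be matched against an ordinary word list.
const WORD_LIST = new Set(["shit"]); // placeholder list for illustration

function isProfaneBasic(text: string): boolean {
  return normalizeLeet(text)
    .split(/\s+/)
    .some((word) => WORD_LIST.has(word));
}
```

Normalizing before matching is what lets a single dictionary entry catch many obfuscated spellings, e.g. `normalizeLeet("sh1t")` yields `"shit"`.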

API

Core Functions

// Full check with options
checkProfanity(text: string, config?: FilterConfig): ProfanityCheckResult

// Quick boolean check
isProfane(text: string): boolean

// Async version
checkProfanityAsync(text: string, config?: FilterConfig): Promise<ProfanityCheckResult>

Filter Class

const filter = new Filter({
  languages: ['english', 'spanish'],
  detectLeetspeak: true,
  leetspeakLevel: 'moderate',     // basic | moderate | aggressive
  normalizeUnicode: true,
  cacheResults: true,
  maxCacheSize: 1000,
  replaceWith: '***'
});

filter.isProfane('f4ck');                    // true
filter.checkProfanity('bad word').profaneWords;  // ['bad']
filter.clearCache();
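The `normalizeUnicode` option targets homoglyphs: characters from other scripts that look identical to Latin letters. A minimal sketch of the idea, with a hypothetical lookup table (the mappings shown are examples only, not the library's actual table):

```typescript
// A few visually identical cross-script letters (illustrative subset).
const HOMOGLYPHS: Record<string, string> = {
  "\u0430": "a", // Cyrillic а
  "\u0435": "e", // Cyrillic е
  "\u043E": "o", // Cyrillic о
  "\u03BF": "o", // Greek omicron ο
};

function normalizeHomoglyphs(text: string): string {
  // NFKC first folds compatibility characters (e.g. fullwidth letters),
  // then the table handles lookalikes NFKC leaves untouched.
  return text
    .normalize("NFKC")
    .split("")
    .map((ch) => HOMOGLYPHS[ch] ?? ch)
    .join("");
}
```

Without a step like this, a word spelled with a Cyrillic "а" would slip past a Latin-only word list even though it renders identically.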

Configuration Options

| Option | Type | Default | Description |
| --- | --- | --- | --- |
| languages | string[] | ['english'] | Languages to check |
| allLanguages | boolean | false | Check all 23 languages |
| detectLeetspeak | boolean | false | Enable leetspeak detection |
| leetspeakLevel | string | 'basic' | basic / moderate / aggressive |
| normalizeUnicode | boolean | true | Normalize Unicode homoglyphs |
| cacheResults | boolean | false | Cache results for repeated checks |
| maxCacheSize | number | 1000 | LRU cache limit |
| replaceWith | string | undefined | Replacement string |
| customWords | string[] | [] | Add custom profane words |
| ignoreWords | string[] | [] | Whitelist words |
| severityLevels | boolean | false | Enable severity mapping |
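The `cacheResults` / `maxCacheSize` pair describes an LRU cache of check results. A self-contained sketch of how such a cache behaves (hypothetical code, not the library's implementation), using the fact that a JavaScript Map preserves insertion order:

```typescript
// Minimal LRU cache: evicts the least recently used entry once maxSize is exceeded.
class LruCache<V> {
  private map = new Map<string, V>();
  constructor(private maxSize: number) {}

  get(key: string): V | undefined {
    const value = this.map.get(key);
    if (value !== undefined) {
      // Re-insert to mark the entry as most recently used.
      this.map.delete(key);
      this.map.set(key, value);
    }
    return value;
  }

  set(key: string, value: V): void {
    if (this.map.has(key)) this.map.delete(key);
    this.map.set(key, value);
    if (this.map.size > this.maxSize) {
      // The first key in insertion order is the least recently used.
      this.map.delete(this.map.keys().next().value as string);
    }
  }
}
```

Keyed by input text (plus relevant config), a cache like this makes repeated checks of the same string, common in chat traffic, effectively free.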

Documentation

| Resource | Link |
| --- | --- |
| Getting Started | docs/getting-started.md |
| API Reference | docs/api-reference.md |
| Framework Examples | docs/framework-examples.md |
| Advanced Features | docs/advanced-features.md |
| ML Guide | docs/ML-GUIDE.md |
| Main README | README.md |

License

MIT License - see LICENSE


Built by GLINCKER