Web Content Scraper
A powerful Node.js web scraper that extracts clean, readable content from websites while keeping everything nicely organized. Perfect for creating AI training datasets!
Features
- Smart recursive web crawling of internal links
- Clean content extraction using Mozilla's Readability
- Smart content processing and cleaning
- Maintains original URL structure in saved files
- Excludes unwanted paths from scraping
- Handles relative and absolute URLs like a pro
- No duplicate page visits
- Generates JSONL output file for ML training
- AI-friendly clean text and CSV output (perfect for LLM fine-tuning!)
- Rich metadata extraction
- Combine results from multiple scrapers into a unified dataset
Prerequisites
- Node.js (v18 or higher)
- npm
Dependencies
- axios - HTTP requests master
- jsdom - DOM parsing wizard
- @mozilla/readability - Content extraction genius
Installation
npm i clean-web-scraper
# OR
git clone https://github.com/mlibre/Clean-Web-Scraper
cd Clean-Web-Scraper
npm install
Usage
const WebScraper = require('clean-web-scraper');

const scraper = new WebScraper({
  baseURL: 'https://example.com/news', // Required: The website base URL to scrape
  startURL: 'https://example.com/blog', // Optional: Custom starting URL
  excludeList: ['/admin', '/private'], // Optional: Paths to exclude
  exactExcludeList: ['/specific-page'], // Optional: Exact URLs to exclude
  scrapResultPath: './example.com/website', // Required: Where to save the content
  jsonlOutputPath: './example.com/train.jsonl', // Optional: Custom JSONL output path
  textOutputPath: './example.com/texts', // Optional: Custom text output path
  csvOutputPath: './example.com/train.csv', // Optional: Custom CSV output path
  maxDepth: 3, // Optional: Maximum depth for recursive crawling
  includeTitles: true, // Optional: Include page titles in outputs
});
scraper.start();
// Combine results from multiple scrapers
WebScraper.combineResults('./combined-dataset', [scraper1, scraper2]);

node example-usage.js
Output
Your AI-ready content is saved in a clean, structured format:
- Base folder: ./folderPath/example.com/
- Files preserve original URL paths
- Pure text format, perfect for LLM training and fine-tuning
- No HTML, no mess - just clean, structured text ready for AI consumption
- JSONL output for ML training
- CSV output with clean text content
example.com/
├── website/
│   ├── page1.txt # Clean text content
│   ├── page1.json # Full metadata
│   └── blog/
│       ├── post1.txt
│       └── post1.json
├── texts/ # Numbered text files
│   ├── 1.txt
│   └── 2.txt
├── train.jsonl # Combined content
└── train.csv # Clean text in CSV format
AI/LLM Training Ready
The output is specifically formatted for AI training purposes:
- Clean, processed text without HTML markup
- Multiple formats (JSONL, CSV, text files)
- Structured content perfect for fine-tuning LLMs
- Ready to use in your ML pipelines
Standing with Palestine
This project supports Palestinian rights and stands in solidarity with Palestine. We believe in the importance of documenting and preserving Palestinian narratives, history, and struggles for justice and liberation.
Free Palestine