@tooly/firecrawl
Firecrawl API tools for OpenAI, Anthropic, and AI SDK
Found 177 results for crawling
tiny-crawler is a web crawler.
A tool for fetching public website content using a browser engine or a plain HTTP GET.
A Node.js scraping framework built on puppeteer (to use a headless Chrome/Chromium browser)
This script analyzes console errors on your website.
Easily scrape the web for torrent and media files.
Easily crawl your public notion pages
A simple web scraping tool built for developers that can be utilized on both the client and server.
A simple crawler made in JavaScript for Node.
Real Fish YouTube trending-video crawling
Streaming pdf fetcher for academic papers.
Simple and powerful crawler. It scrapes content and collects links from websites using request or PhantomJS. All the magic and simplicity lie in the configuration.
Easily create crawlers based on self-replicating scrapers.
An interactive command-line interface built in Node.js for downloading a single image or multiple images to disk from a URL.
plosone.org scraper
Build web scraping agents using AI to auto-extract the data from websites
One API to scrape All the Web.
Crawler is a web spider written with Nodejs. It gives you the full power of jQuery on the server to parse a large number of pages as they are downloaded, asynchronously. Scraping should be simple and fun!
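Nearly every crawler in this list implements some variant of the same fetch-parse-enqueue loop. A dependency-free sketch of that loop, where an in-memory `site` map stands in for real HTTP fetches and a naive regex stands in for a proper HTML parser (both are illustrative stand-ins, not what any of these packages actually does internally):

```javascript
// Naive link extraction with a regex -- real crawlers use cheerio/JSDOM instead.
function extractLinks(html) {
  const links = [];
  const re = /href="([^"]+)"/g;
  let m;
  while ((m = re.exec(html)) !== null) links.push(m[1]);
  return links;
}

// Breadth-first crawl over a fetch function, with a visited set to avoid loops.
function crawl(startUrl, fetchPage, maxPages = 100) {
  const visited = new Set();
  const queue = [startUrl];
  while (queue.length > 0 && visited.size < maxPages) {
    const url = queue.shift();
    if (visited.has(url)) continue;
    visited.add(url);
    const html = fetchPage(url);
    if (html == null) continue;
    for (const link of extractLinks(html)) {
      if (!visited.has(link)) queue.push(link);
    }
  }
  return [...visited];
}

// Demo on an in-memory "site" instead of real HTTP.
const site = {
  '/a': '<a href="/b">b</a> <a href="/c">c</a>',
  '/b': '<a href="/a">a</a>',
  '/c': '',
};
const pages = crawl('/a', (url) => site[url]);
// pages -> ['/a', '/b', '/c']
```

The packages above differ mainly in what they plug into each slot: the fetcher (request, worker_threads, a headless browser) and the parser (jQuery/cheerio, JSDOM).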
Fast asynchronous Node.js module for crawling/scraping the web using worker_threads.
Model Context Protocol (MCP) server for Firecrawl Simple - provides web scraping and crawling capabilities to LLMs
Real Fish YouTube video crawling module
A @0y0/scraper expansion pack.
A plugin for Hapi.js to run goldwasher as a scraping API on the web.
Automated scraping module using patterns generated by the userscript Scrapeasy.
Sasori is a dynamic web crawler powered by Puppeteer, designed for lightning-fast endpoint discovery.
NodeCraw is a web crawling application that allows you to crawl specified URLs and extract information from web pages. It utilizes various modules and libraries to perform crawling and save the results.
A 2nd-generation spider that crawls any article site, automatically extracting the title and content.
Aragog web scraping framework client
Scrapy framework implemented in Node.js.
🚀 An easy-to-handle Node.js scraper that allows you to scrape them all in record time.
Crawler is a web spider written with Nodejs. It gives you the full power of jQuery on the server to parse a large number of pages as they are downloaded, asynchronously.
A simple web crawler
Crawler made simple
A headless browser automation library with an easy-to-use API.
PhantomJS/browser lib which allows you to parse a webpage.
MCP server for Firecrawl Simple — a web scraping and site mapping tool enabling LLMs to access and process web content
PhantomJS sitemap generator
Makes your AJAX web application indexable by search engines by generating HTML snapshots on the fly. Caches results for blazing-fast responses and better page ranking.
Lightweight crawler written in TypeScript using ES6 generators.
Some tools to help you render your application as a static website using the crawlable module.
Simple website crawler and scraper
PhantomJS- and JSDOM-based crawling tool. Uses PhantomJS for fully loading asynchronously-loaded resources and JSDOM for quick crawls. Allows custom [tough-cookie](https://www.npmjs.com/package/tough-cookie) insertion. Refer to [cheerio](https://www.npmj
A lightweight and modular web crawling framework built with Puppeteer.
Net Crawler is a web spider written with Node.js.
A Node.js scraping framework built on puppeteer-core (to use a headless Chrome/Chromium browser). The core module without browser installation
Extracts metadata for the top five news stories from NAVER headlines.
Single Page App SER
Fork of headless-chrome-crawler with puppeteer updated to the latest version.
Web scraping/crawling framework built on top of headless Chrome
DCrawler is a distributed web spider written in Node.js and queued with MongoDB. It gives you the full power of jQuery to parse big pages as they are downloaded, asynchronously. Simplifying distributed crawling!
The error crawler that powers http://plucky.io/
Distributed web crawler powered by Headless Chrome
Soongsil University u-Saint score crawling
Fast and lightweight web crawler with built-in cheerio, xml and json parser.
Collection of patches for puppeteer and playwright to avoid automation detection and leaks. Helps to avoid Cloudflare and DataDome CAPTCHA pages. Easy to patch/unpatch, can be enabled/disabled on demand.
Helper to extract confessions from webpages
A small package that crawls a site and returns a redirect template. Helpful when migrating from one website to another with different URL schemes.
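A redirect template of the sort just described can be sketched as a pure function from an old path to a meta-refresh page. The slug-based rewrite rule below is a hypothetical example of a URL-scheme change, not that package's actual behavior:

```javascript
// Hypothetical rewrite rule: keep the last path segment as the slug.
// e.g. /posts/2021/my-title -> https://new.example.com/blog/my-title
function rewriteUrl(oldPath, newOrigin) {
  const slug = oldPath.split('/').filter(Boolean).pop();
  return `${newOrigin}/blog/${slug}`;
}

// Emit a minimal meta-refresh redirect page for one crawled old URL.
function redirectTemplate(oldPath, newOrigin) {
  const target = rewriteUrl(oldPath, newOrigin);
  return [
    '<!doctype html>',
    `<meta http-equiv="refresh" content="0; url=${target}">`,
    `<link rel="canonical" href="${target}">`,
  ].join('\n');
}
```

Crawling the old site yields the list of old paths; emitting one such page per path gives search engines and visitors a bridge to the new URL scheme.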
Environment for Goose Parser that allows running it using JSDOM.
Everyday crawling helpers for puppeteer.
Datasco API SDK for Node.js to collect any data from any website
proxidoor helps you make HTTP requests through a rotating proxy; you can use it for tasks such as web scraping, web crawling, and more.
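Rotating-proxy helpers of this kind typically just cycle through a pool of upstream proxies per request. A minimal round-robin sketch (the addresses are placeholders, and this is not proxidoor's actual API):

```javascript
// Round-robin rotation over a fixed proxy pool (placeholder addresses).
function makeProxyRotator(proxies) {
  let i = 0;
  return () => proxies[i++ % proxies.length];
}

// Each call hands back the next proxy, wrapping around at the end of the pool.
const nextProxy = makeProxyRotator(['10.0.0.1:8080', '10.0.0.2:8080']);
```

A scraper would call `nextProxy()` before each request and route the request through the returned address, spreading traffic across the pool.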
crawler service
Official JavaScript/TypeScript SDK for the Friday API
Environment for Goose Parser that allows running it in a common browser.
A Node.js scraping framework built on puppeteer-extra (to use a headless Chrome/Chromium browser). Has the ability to solve reCaptcha
Web scraper for album reviews from Pitchfork.
Transform your text with dynamic typing animations! crawling-typer lets you display an array of strings one at a time, each with its own color. Customize typing speed, delete speed, and pauses between strings. Enjoy full control with loop counts, post-loo
An API to get magnet links using Puppeteer.
A Node.js scraping framework built on puppeteer-extra (to use a headless Chrome/Chromium browser). Has the ability to solve reCaptcha. The core module without browser installation
Intended to run CRAWLING routines from a JSON file using XPath, accepting for each step a callback function that receives the value and can pass it on to the next step.
A straightforward sitemap generator written in TypeScript.
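A sitemap generator of the kind just described ultimately serializes a list of crawled URLs into the sitemaps.org XML format. A minimal sketch of that serialization step (not the package's actual code):

```javascript
// Serialize crawled URLs into the sitemaps.org <urlset> XML format.
function buildSitemap(urls) {
  const entries = urls
    .map((u) => `  <url><loc>${u}</loc></url>`)
    .join('\n');
  return [
    '<?xml version="1.0" encoding="UTF-8"?>',
    '<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">',
    entries,
    '</urlset>',
  ].join('\n');
}
```

Real generators additionally emit optional `<lastmod>`, `<changefreq>`, and `<priority>` children per `<url>` entry, and split output across files once the 50,000-URL limit of the protocol is reached.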
robin, a web crawling engine built with Node.js.
An API to get data off of IMDB using Puppeteer.
Simple & Human-Friendly HTML Scraper with Json-ld support
Minimalist Node.js web scraper and crawler powered by JSDOM under the hood.
Set of utils and queues to make web scraping easy.
NodeJS Crawler for Twitter
The most advanced web crawler for JavaScript
StackSleuth in-house browser automation agent for debugging and user simulation
A secondary development based on Crawler (second-system effect).
Keyword mention crawler.
Billboard chart crawling module
Package to find stylesheet links on the site you want.
A Simple Job Manager
based on node-crawler
Simple Instagram crawling without using the public API.
Harvesting data at the <html> mine.
Easily scrape web pages by providing JSON recipes.
Web crawler
Node.js web scraping utility powered by puppeteer pool
Node.js crawling & scraping framework heavily inspired by Scrapy (Python).
A simple command-line tool to crawl and test your website.
make web scraping easy
A Wight backend for fetching static web pages
spamlet is an efficient and simple crawler for playwright
Easily create a scraper API with the @web/scrapper library, which includes a scraper and advanced events for your website.
A React component for detecting crawling.
A web-crawler and scraper that extracts data from a family of nested dynamic webpages with added enhancements to assist in knowledge mining applications.
Web crawler for Node.js
A set of shared utilities that can be used by crawlers
Easy To Use Web Crawler
Providers are the core of applications, where the subtitles are collected. Each provider exports a unique strategy for gathering data. From legendastv's web scraping to opensubtitle's API usage, you can collect subtitles from your favorite tv shows and mo
Moving or backing up your WordPress site to Blogger.
Parkour the web like a Yamakasi.
Scrape and cache Instagram data using Redis.
n8n node for Firecrawl v2 API - Web scraping, crawling, and data extraction tool for workflows and AI agents
A tool to get sitemaps from websites and crawl them
NAVER stock data crawler
A lightweight and simple API for web crawling built on Chromium via puppeteer.