n8n-nodes-olyptik

This is an n8n community node that lets you use Olyptik in your n8n workflows.
Olyptik is a powerful web crawling and content extraction API that helps you scrape websites, extract structured data, and convert web content to markdown format.
Installation
Follow the installation guide in the n8n community nodes documentation.
- Go to Settings > Community Nodes.
- Select Install.
- Enter n8n-nodes-olyptik in the Enter npm package name field.
- Agree to the risks of using community nodes: select I understand the risks of installing unverified code from a public source.
- Select Install.
After installing the node, you can use it like any other node. n8n displays the node in search results in the Nodes panel.
Credentials
This node requires Olyptik API credentials. You can get your API key from your Olyptik Dashboard.
The node supports the following authentication method:
- API Key: Your Olyptik API key
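If you want to sanity-check a key outside n8n first, a request along these lines can help. Note that the base URL, endpoint path, and Bearer-token header format below are assumptions for illustration; the authoritative scheme is in the Olyptik documentation.

```typescript
// Minimal sketch of an authenticated Olyptik API call (Node 18+ for global fetch).
// The base URL, path, and Bearer header are assumptions, not confirmed API details.
const OLYPTIK_API_KEY = process.env.OLYPTIK_API_KEY ?? "";

async function listCrawls(): Promise<unknown> {
  const response = await fetch("https://api.olyptik.io/crawls", { // hypothetical endpoint
    headers: { Authorization: `Bearer ${OLYPTIK_API_KEY}` },      // assumed header format
  });
  if (!response.ok) {
    throw new Error(`Olyptik request failed: ${response.status}`);
  }
  return response.json();
}
```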
Supported Operations
Crawl Resource
- Create: Start a new web crawl
- Get: Retrieve information about a specific crawl
- Query: Search and filter your crawls
- Abort: Stop a running crawl
Crawl Results Resource
- Get: Retrieve the results from a completed crawl
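As a rough mental model, these operations likely map onto REST-style calls along the lines sketched below. Every method and path here is a guess for illustration, not a documented endpoint.

```typescript
// Hypothetical mapping of the node's operations to REST calls.
// All methods and paths are assumptions; see https://docs.olyptik.io for the real API.
const BASE = "https://api.olyptik.io"; // hypothetical base URL

const operations = {
  create:  { method: "POST", path: () => `${BASE}/crawls` },                       // start a crawl
  get:     { method: "GET",  path: (id: string) => `${BASE}/crawls/${id}` },       // fetch one crawl
  query:   { method: "GET",  path: () => `${BASE}/crawls` },                       // search/filter crawls
  abort:   { method: "POST", path: (id: string) => `${BASE}/crawls/${id}/abort` }, // stop a running crawl
  results: { method: "GET",  path: (id: string) => `${BASE}/crawl-results/${id}` },// completed results
} as const;
```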
Trigger Node
The package also includes an Olyptik Trigger node that can listen for webhooks from Olyptik:
- Crawl Status Change: Triggers when a crawl status changes (e.g., from running to completed)
- Crawl Result Created: Triggers when new results are found during crawling
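Outside n8n, the same events could be consumed with a plain webhook receiver. The sketch below assumes event names and payload fields (event, crawlId, status) purely for illustration; the real schema is defined by Olyptik.

```typescript
// Standalone webhook receiver sketch. Event names and payload fields
// are assumptions modeled on the two trigger events described above.
import http from "node:http";

interface OlyptikWebhookEvent {
  event: "crawl.status_changed" | "crawl.result_created"; // assumed event names
  crawlId: string;
  status?: string; // assumed to accompany status-change events
}

http.createServer((req, res) => {
  let body = "";
  req.on("data", (chunk) => (body += chunk));
  req.on("end", () => {
    const payload = JSON.parse(body) as OlyptikWebhookEvent;
    if (payload.event === "crawl.status_changed") {
      console.log(`Crawl ${payload.crawlId} is now ${payload.status}`);
    }
    res.writeHead(200).end("ok");
  });
}).listen(3000);
```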
Example Workflows
Basic Web Crawling
- Use the Olyptik node to start a crawl
- Wait for completion or use the Olyptik Trigger to get notified
- Retrieve the crawl results
- Process the extracted content
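Done directly against the API instead of through n8n nodes, that workflow might look like the following sketch. The endpoint paths, request fields, and status values are all assumptions for illustration.

```typescript
// Start a crawl, poll until it finishes, then fetch its results.
// Paths, body fields, and the "running" status value are assumptions.
async function api(path: string, init?: RequestInit): Promise<any> {
  const res = await fetch(`https://api.olyptik.io${path}`, { // hypothetical base URL
    ...init,
    headers: {
      Authorization: `Bearer ${process.env.OLYPTIK_API_KEY}`, // assumed auth header
      "Content-Type": "application/json",
    },
  });
  if (!res.ok) throw new Error(`Olyptik request failed: ${res.status}`);
  return res.json();
}

async function crawlAndCollect(startUrl: string) {
  const crawl = await api("/crawls", {
    method: "POST",
    body: JSON.stringify({ startUrl, maxResults: 50 }), // field names assumed
  });

  // Poll for completion; the Olyptik Trigger node replaces this loop.
  let status = crawl.status;
  while (status === "running") {
    await new Promise((resolve) => setTimeout(resolve, 5000));
    status = (await api(`/crawls/${crawl.id}`)).status;
  }

  return api(`/crawl-results/${crawl.id}?page=1&limit=100`); // assumed results path
}
```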
Automated Content Monitoring
- Set up an Olyptik Trigger for crawl status changes
- When a crawl completes, automatically retrieve the results
- Send notifications or process the content as needed
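Combining the hypothetical webhook payload shape and the api() helper from the sketches above, the monitoring pattern reduces to a handler like this (again, the field names and the "completed" status value are assumptions):

```typescript
// React to a completed crawl reported by the webhook, then fetch its results.
// Reuses the hypothetical OlyptikWebhookEvent and api() from the earlier sketches.
async function onCrawlEvent(payload: OlyptikWebhookEvent): Promise<void> {
  if (payload.event === "crawl.status_changed" && payload.status === "completed") {
    const results = await api(`/crawl-results/${payload.crawlId}?page=1&limit=100`);
    // Hand off to a notifier or downstream processing step here.
    console.log(`Crawl ${payload.crawlId} produced`, results);
  }
}
```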
Configuration
Starting a Crawl
Required parameters:
- Start URL: The website URL to begin crawling
- Max Results: Maximum number of pages to crawl
Optional parameters:
- Max Depth: Maximum link depth to follow from the start URL (default: 10)
- Engine Type: Choose between Auto, Cheerio (fast), or Playwright (for JavaScript-heavy sites)
- Use Sitemap: Whether to use the website's sitemap.xml
- Entire Website: Crawl the entire website
- Include Links: Include links in the extracted markdown
- Use Static IPs: Use static IP addresses for crawling
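Collected into a single configuration object, the parameters above might look like the sketch below; the camelCase property names are guesses at the API equivalents of the UI labels, so check the Olyptik docs for the actual field names.

```typescript
// Sketch of a crawl configuration. Property names are assumed camelCase
// versions of the UI labels, not confirmed API fields.
interface CrawlConfig {
  startUrl: string;                               // required: where crawling begins
  maxResults: number;                             // required: cap on crawled pages
  maxDepth?: number;                              // default: 10
  engineType?: "auto" | "cheerio" | "playwright"; // cheerio is fast; playwright handles JS-heavy sites
  useSitemap?: boolean;                           // consult the site's sitemap.xml
  entireWebsite?: boolean;                        // crawl the whole website
  includeLinks?: boolean;                         // keep links in the extracted markdown
  useStaticIps?: boolean;                         // crawl from static IP addresses
}

const config: CrawlConfig = {
  startUrl: "https://example.com",
  maxResults: 100,
  engineType: "cheerio", // fast engine suits mostly static pages
};
```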
Retrieving Results
- Crawl ID: The ID of the crawl to get results for
- Page: Page number for pagination
- Limit: Number of results per page (1-100)
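With results capped at 100 per page, collecting everything means looping over pages. A sketch, reusing the hypothetical api() helper above and assuming the response carries a results array:

```typescript
// Page through crawl results 100 at a time until a short page signals the end.
// The path and the `results` response field are assumptions.
async function fetchAllResults(crawlId: string): Promise<unknown[]> {
  const all: unknown[] = [];
  for (let page = 1; ; page++) {
    const batch = await api(`/crawl-results/${crawlId}?page=${page}&limit=100`);
    const results: unknown[] = batch.results ?? []; // assumed response shape
    all.push(...results);
    if (results.length < 100) break; // fewer than a full page: done
  }
  return all;
}
```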
API Documentation
For detailed API documentation, visit: https://docs.olyptik.io