@huggingface/transformers
State-of-the-art Machine Learning for the web. Run 🤗 Transformers directly in your browser, with no need for a server!
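A minimal sketch of what running 🤗 Transformers in the browser looks like with the library's pipeline API; the sentiment-analysis task and the Xenova/distilbert-base-uncased-finetuned-sst-2-english checkpoint are illustrative choices, not the only option:

```js
// Assumes `npm install @huggingface/transformers` plus a bundler,
// or an equivalent ES-module import from a CDN.
import { pipeline } from '@huggingface/transformers';

// The model is downloaded once from the Hugging Face Hub and cached by the
// browser; all inference then runs client-side, with no server involved.
const classify = await pipeline(
  'sentiment-analysis',
  'Xenova/distilbert-base-uncased-finetuned-sst-2-english',
);

const result = await classify('Transformers.js makes in-browser ML easy!');
console.log(result); // e.g. [{ label: 'POSITIVE', score: 0.99 }]
```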
Found 34 results for transformers.js
Chroma's fork of @xenova/transformers serving as our default embedding function
A lightweight, no-dependency fork of transformers.js (tokenizers only)
A highly reduced fork of the Xenova JavaScript port of 🤗 transformers. Node.js only, with a lot of functionality removed.
Semantically create chunks from large texts. Useful for workflows involving large language models (LLMs).
Transformers.js provider for the Vercel AI SDK: run 🤗 Transformers directly in the browser with WebGPU support (see the WebGPU sketch after this list)
Perform speech-to-text on audio files within your n8n workflows. This node provides local audio transcription; no internet connection or third-party APIs are required for processing.
Libraries and a server for building AI applications, with adapters to various native bindings for local inference. Integrate it with your application, or use it as a microservice.
Large model translator
Node.js plugin for speech recognition that works with OpenAI's Whisper models using ONNX.
Node.js binding for the huggingface/tokenizers library
Testing @xenova's v3 branch
Easily use Transformers.js with React in the browser
transformers.js mod for react-native
Simple, performant React Hooks for running Transformers.js in your browser.
Model Context Protocol server for local vector search
A Node.js library to scan Markdown files, extract metadata, and generate SEO information (titles, descriptions) for English content using local LLMs.
**Bring local inference into your app!**
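As a rough illustration of the WebGPU support mentioned in the Vercel AI SDK provider entry above (the provider's own API is not shown), Transformers.js v3 can run a pipeline on the GPU via its device option; the embedding model and options below are illustrative assumptions:

```js
import { pipeline } from '@huggingface/transformers';

// Feature-extraction pipeline targeting WebGPU; omit `device` to fall back to WASM.
const embed = await pipeline('feature-extraction', 'Xenova/all-MiniLM-L6-v2', {
  device: 'webgpu',
});

// Mean-pooled, normalized sentence embeddings, e.g. for local vector search.
const output = await embed(['Hello world', 'Transformers.js in the browser'], {
  pooling: 'mean',
  normalize: true,
});
console.log(output.dims); // e.g. [2, 384]
```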