n8n-nodes-lexeri
This is an n8n community node for integrating Lexeri terminology management into your n8n workflows.
Lexeri is a collaborative terminology management platform that helps ensure consistent and correct use of terminology across your content and translations.
Features
Actions
- Check Terminology: Validate text against your Lexeri termbase and receive suggestions for preferred terminology
- Search Terms: Search for terms in your termbase by value and language
- Get Termbase Info: Retrieve metadata and statistics about your termbase
- Export Termbase: Export your entire termbase as a TBX (TermBase eXchange) file
- Import Terms: Import terms from TBX, CSV, or XLSX files
- Extract Terms from Documents: Create term extraction jobs from uploaded documents
- Create Term Request: Create collaborative term requests with suggestions and reference documents
- List Usage Examples: View all usage examples for a specific term
- Create Usage Example: Add contextual usage examples to terms
- List Topics: View all available topics for term categorization
- Add Terms to Topic: Organize terms by assigning them to topics
Triggers
- Term Created: Triggers when a new term is created in your termbase
- Term Request Created: Triggers when a new term request is created
Installation
In n8n (Community Nodes)
- Go to Settings > Community Nodes
- Select Install
- Enter n8n-nodes-lexeri in the Enter npm package name field
- Agree to the risks and click Install
After installation, the Lexeri node will appear in your node palette.
For Local Development
See the Development section below.
Credentials
To use this node, you need a Lexeri API Bearer token.
Getting Your Bearer Token
- Log in to your Lexeri account
- Navigate to the termbase settings, open the API tab, and create and copy a new API token
Configuring Credentials in n8n
- Click on the Lexeri node in your workflow
- Under Credentials, click Create New
- Select Lexeri API
- Enter your Bearer Token
- (Optional) Change the Base URL if using sandbox or a custom instance
- Click Save and test the connection
Operations
1. Check Terminology
Validates text against your Lexeri termbase and returns matches with terminology recommendations.
Parameters:
- Text: The text you want to check (supports multi-line input)
- Locale Code: The language code of the text (e.g., en, de, fr, es)
- Suggestion Locale Code: (Optional) Language for preferred-term suggestions. Leave empty to get suggestions in the same language as the text. Set a different locale to get cross-language suggestions (e.g., en suggestions for de text).
- Simplified Output: (Default: enabled) Return a simplified response focused on non-preferred terms
Output Modes:
Simplified Output (Default)
Returns a clean, easy-to-use format perfect for IF nodes and workflow branching:
{
"has_forbidden_terms": true,
"forbidden_count": 1,
"total_matches": 1,
"forbidden_terms": [
{
"found": "Luftdichtheit",
"state": "not_recommended",
"preferred": "Luftdichtigkeit",
"usage": "Use 'Luftdichtigkeit' instead"
}
],
"text": "Das Passivhaus...",
"locale_code": "de"
}

Use this when:
- You want a simple true/false check for forbidden terms
- You're using IF nodes to branch workflows
- You need clean, minimal output
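The simplified shape can be derived from the full API response with a small mapping. A sketch in TypeScript, based only on the two sample payloads in this section (the node's actual implementation may differ):

```typescript
// Sketch: derive the simplified output from full /term_checks/live matches.
// Field names come from the sample payloads in this README; the node's real
// mapping may differ in details.
interface TermMatch {
  matching_term: { state: string; value: string; usage?: string };
  preferred_term?: { value: string };
}

function simplify(matches: TermMatch[], text: string, localeCode: string) {
  const forbidden = matches.filter((m) => m.matching_term.state === 'not_recommended');
  return {
    has_forbidden_terms: forbidden.length > 0,
    forbidden_count: forbidden.length,
    total_matches: matches.length,
    forbidden_terms: forbidden.map((m) => ({
      found: m.matching_term.value,
      state: m.matching_term.state,
      preferred: m.preferred_term?.value ?? null,
      usage: m.matching_term.usage ?? null,
    })),
    text,
    locale_code: localeCode,
  };
}
```

An IF node can then branch directly on the expression {{ $json.has_forbidden_terms }}.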
Full API Response
When disabled, returns the complete Lexeri API response with all term matches:
- matching_term: The term found in your text, with details like state (preferred/not_recommended), value, and usage notes
- preferred_term: The suggested replacement term (if the matched term is not preferred)
- phrases: The exact phrases matched in your input text
- misspellings: Any detected spelling errors
- always_misspelled: Boolean flag indicating persistent misspelling
Use this when:
- You need detailed term information
- You're processing all matches (not just forbidden ones)
- You want access to the full API data
Example Output:
{
"matches": [
{
"matching_term": {
"identifier": "abc-123",
"state": "not_recommended",
"value": "Luftdichtheit",
"locale_code": "de",
"usage": "Use 'Luftdichtigkeit' instead"
},
"preferred_term": {
"identifier": "def-456",
"state": "preferred",
"value": "Luftdichtigkeit"
},
"phrases": ["Luftdichtheit"],
"misspellings": [],
"always_misspelled": false
}
]
}

2. Search Terms
Search for terms in your termbase by value and language code.
Parameters:
- Search Query: Optional search query for term value (leave empty to return all terms)
- Locale Code: Language code to search in (e.g., en, de, fr)
- State: (Optional) Filter by usage state: preferred, admitted, not_recommended, outdated, or not_selected (proposed terms in a term request)
- Term Type: (Optional) Filter by term type: fullForm, shortForm, or abbreviation
- Topic Identifier: (Optional) Filter terms assigned to a specific topic
- Tag: (Optional) Filter terms by tag (available tags can be found in termbase info)
- Missing Translation Locale: (Optional) Return only terms that are missing a translation in this locale
- Initial Character: (Optional) Filter terms by their first character (e.g., A)
- Term Entry Identifier: (Optional) Return only terms belonging to a specific term entry
Example Output:
[
{
"identifier": "term-123",
"value": "sustainability",
"state": "preferred",
"locale_code": "en",
"usage": "Use in environmental contexts",
"part_of_speech": "noun"
}
]

3. Get Termbase Info
Retrieve metadata and statistics about your termbase.
Parameters: None (uses authenticated user's termbase)
Example Output:
{
"identifier": "termbase-456",
"name": "Product Terminology",
"description": "Terms for product documentation",
"number_of_terms": 1523,
"locales": [
{ "code": "en", "name": "English" },
{ "code": "de", "name": "German" }
]
}

4. Export Termbase
Export your entire termbase as a TBX (TermBase eXchange) file.
Parameters: None
Output: Binary data (TBX/XML file) that can be downloaded or processed further in your workflow.
Use Case: Backup your termbase, share with translation tools, or migrate to other systems.
5. Import Terms
Import terms from TBX, CSV, or XLSX files.
Parameters:
- Input Binary Field: Name of the binary property containing the file to import (default: data)
- Description: Optional description for the import job
Example Output:
{
"identifier": "import-789",
"file": "glossary.tbx",
"description": "Q1 2024 terms",
"state": "processing",
"import_state": "pending",
"number_of_terms": 0,
"token": "abc123xyz"
}

Note: Import is an asynchronous operation. Use the returned identifier and token to check import status.
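Since this README does not document a status endpoint, the polling sketch below takes the status lookup as a callback; only the in-progress state names ('processing', 'pending') are taken from the example output above.

```typescript
// Generic polling helper for asynchronous Lexeri jobs (imports, extractions).
// The actual status request is passed in as a callback because the status
// endpoint is not documented here; in-progress state names come from the
// sample output above, terminal names are assumptions.
async function waitForJob(
  fetchState: () => Promise<string>,
  intervalMs = 2000,
  maxAttempts = 30,
): Promise<string> {
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    const state = await fetchState();
    if (state !== 'processing' && state !== 'pending') {
      return state; // terminal state, e.g. 'completed' or 'failed' (assumed names)
    }
    await new Promise((resolve) => setTimeout(resolve, intervalMs));
  }
  throw new Error(`Job still running after ${maxAttempts} attempts`);
}
```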
6. Extract Terms from Documents
Create term extraction jobs from uploaded documents using Lexeri's NLP capabilities.
Parameters:
- Title: Title for the extraction job
- Description: Optional description
- Locale Code: Language code for extraction (NLP-supported languages only)
- Input Binary Field: Name of the binary property containing the document(s)
Example Output:
{
"identifier": "extraction-012",
"title": "Product Manual Term Extraction",
"description": "Extract terms from v2.0 manual",
"state": "processing",
"extraction_state": "pending",
"token": "def456uvw"
}

Supported Languages for Extraction: English, German, Spanish, French, Italian, Portuguese, Dutch, Polish, Czech, Russian, Japanese, Chinese (Simplified), Korean
7. Create Term Request
Create collaborative term requests with suggestions and reference documents.
Parameters:
- Title: Title for the term request
- Description: Optional description
- Locale Code: Language code for term suggestions
- Term Suggestions: Comma-separated list of terms to suggest (e.g., "sustainability, eco-friendly, carbon neutral")
- Term Type: (Optional) Type of the suggested terms: fullForm, shortForm, or abbreviation
- Part of Speech: (Optional) proper_noun, noun, verb, adj, or adv
- Grammatical Gender: (Optional) masculine, feminine, neuter, or other
- Grammatical Number: (Optional) Singular or Plural
- Usage Note: (Optional) Usage note applied to all suggested terms
- Definition: (Optional) Definition applied to all suggested terms
- Reference Document Binary Field: Optional binary property containing reference document(s)
Example Output:
{
"identifier": "request-345",
"title": "Q2 Marketing Terms",
"description": "New product launch terminology",
"state": "open",
"token": "ghi789rst"
}

Use Case: Collaborate with team members or clients on terminology decisions, attach reference materials, and track term approval workflows.
8. List Usage Examples
List all usage examples for a specific term.
Parameters:
- Term Identifier: The identifier of the term (e.g., term-abc123)
Example Output:
[
{
"identifier": "example-001",
"text": "The sustainability initiative aims to reduce carbon emissions by 30%.",
"source": "Annual Report 2024",
"created_at": "2024-01-15T10:30:00Z",
"updated_at": "2024-01-15T10:30:00Z"
},
{
"identifier": "example-002",
"text": "Our sustainability program focuses on renewable energy.",
"source": "Website Content",
"created_at": "2024-02-01T14:20:00Z",
"updated_at": "2024-02-01T14:20:00Z"
}
]

Use Case: Review how a term is used in context before creating similar content.
9. Create Usage Example
Add a contextual usage example to a term.
Parameters:
- Term Identifier: The identifier of the term (required)
- Example Text: The usage example showing the term in context (required)
- Source: Optional reference for where the example comes from
Example Output:
{
"identifier": "example-003",
"text": "Implementing sustainability practices reduces environmental impact.",
"source": "Product Guide v3.0",
"created_at": "2024-03-10T09:15:00Z",
"updated_at": "2024-03-10T09:15:00Z"
}

Use Case: Document real-world usage patterns to help writers and translators use terms correctly.
10. List Topics
List all topics available in your termbase for categorizing terms.
Parameters: None
Example Output:
[
{
"identifier": "topic-001",
"title": "Marketing",
"locale_code": "en",
"description": "Marketing and promotional content terms",
"created_at": "2023-06-01T00:00:00Z",
"updated_at": "2024-01-15T10:00:00Z"
},
{
"identifier": "topic-002",
"title": "Technical Documentation",
"locale_code": "en",
"description": "Product and technical documentation terminology",
"created_at": "2023-06-01T00:00:00Z",
"updated_at": "2024-02-20T15:30:00Z"
}
]

Use Case: View available topics before categorizing terms.
11. Add Terms to Topic
Associate multiple term entries with a topic for better organization.
Parameters:
- Topic Identifier: The identifier of the topic (e.g., topic-001)
- Term Entry Identifiers: Comma-separated list of term entry identifiers (e.g., entry-123,entry-456,entry-789)
Example Output:
{
"success": true,
"added_count": 3,
"topic_identifier": "topic-001"
}

Use Case: Organize terms by subject area (e.g., marketing, technical, legal) for easier navigation and management.
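Upstream nodes usually produce the entry identifiers as an array, so building the comma-separated parameter value (or splitting one back apart) is a small helper. A sketch; the helper names are illustrative, not part of the node:

```typescript
// Illustrative helpers for the comma-separated identifier format used by
// the Term Entry Identifiers parameter (names are hypothetical).
function joinIdentifiers(ids: string[]): string {
  return ids.join(',');
}

function splitIdentifiers(input: string): string[] {
  return input
    .split(',')
    .map((s) => s.trim())       // tolerate spaces after commas
    .filter((s) => s.length > 0); // drop empty entries from trailing commas
}
```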
Triggers
The Lexeri Trigger node allows you to react to events happening in your Lexeri termbase in real-time.
1. Term Created Trigger
Triggers when a new term is created in your termbase.
Parameters:
- Event: Select "Term Created"
- Filter by Resource ID: (Optional) Enable to filter by specific term identifiers
- Resource Identifier: (Optional) Only trigger for a specific term ID (e.g., term-abc123)
Webhook Payload:
{
"action": "term_created",
"payload": {
"identifier": "term-abc123",
"termbase_identifier": "termbase-456",
"updated_term_entry_identifiers": ["entry-789"]
}
}

Use Case: Automatically notify translators when new terms are added, sync terms to external systems, or trigger quality checks.
2. Term Request Created Trigger
Triggers when a new term request is created.
Parameters:
- Event: Select "Term Request Created"
- Filter by Resource ID: (Optional) Enable to filter by specific term request identifiers
- Resource Identifier: (Optional) Only trigger for a specific request ID (e.g., request-abc123)
Webhook Payload:
{
"action": "term_request_created",
"payload": {
"identifier": "request-abc123",
"termbase_identifier": "termbase-456",
"user_name": "john_doe",
"title": "Q2 Marketing Terms",
"description": "New product launch terminology"
}
}

Use Case: Notify terminology managers of new requests, assign requests to team members, or integrate with project management tools.
Setting Up Webhooks
- Add the Lexeri Trigger node to your workflow
- Select the event type you want to listen for
- (Optional) Enable filtering to only trigger for specific resources
- Activate your workflow - n8n will automatically register the webhook with Lexeri
- When you deactivate the workflow, the webhook is automatically removed
Important Notes:
- Webhooks are registered automatically when you activate the workflow
- Each trigger registers a separate webhook with Lexeri
- The webhook URL is provided by n8n and uses your n8n instance's webhook endpoint
- Filtering by resource identifier is done at the n8n level (after receiving the webhook)
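Conceptually, the n8n-level filter is a simple identifier comparison on each incoming delivery. An illustrative TypeScript sketch (names are hypothetical, not the trigger node's actual internals):

```typescript
// Illustrative sketch of n8n-side resource filtering: every webhook delivery
// arrives at n8n, and non-matching payloads are simply not emitted as items.
interface LexeriWebhookEvent {
  action: string;
  payload: { identifier: string; termbase_identifier: string };
}

function passesResourceFilter(event: LexeriWebhookEvent, resourceIdentifier?: string): boolean {
  if (!resourceIdentifier) return true; // no filter configured: emit everything
  return event.payload.identifier === resourceIdentifier;
}
```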
Example Workflows
Workflow 1: Content Quality Check
- Trigger: Webhook or Schedule
- Function: Prepare text to check
- Lexeri: Check terminology
- IF: Check if any forbidden terms were found
- Action: Send notification or update content
Workflow 2: Termbase Management
- Manual Trigger: Start workflow
- Lexeri (Get Termbase Info): Retrieve current statistics
- HTTP Request: Download new term file
- Lexeri (Import Terms): Import the new terms
- Slack: Notify team of successful import
Workflow 3: Document Term Extraction
- Trigger: New file in Google Drive/Dropbox
- Download File: Get document content
- Lexeri (Extract Terms): Extract terminology candidates
- Lexeri (Create Term Request): Create request for review
- Email: Notify terminology manager
Workflow 4: Term Search and Validation
- Webhook: Receive term query
- Lexeri (Search Terms): Find matching terms
- Function: Process and format results
- Respond to Webhook: Return term information
Workflow 5: Automated Term Notifications
- Lexeri Trigger (Term Created): Listen for new terms
- Function: Format notification message
- Slack/Email: Notify translation team
- Airtable/Notion: Log term in tracking system
Workflow 6: Term Request Assignment
- Lexeri Trigger (Term Request Created): Listen for new requests
- Function: Determine assignee based on title/description
- Lexeri (Add Terms to Topic): Categorize the request
- Slack: Send assignment notification to terminology manager
API Reference
This node uses the Lexeri REST API. For detailed API documentation, see:
Support
For issues or feature requests, please visit:
License
MIT
Development
This section is for developers who want to contribute to or customize this n8n node.
Prerequisites
- Node.js: v22 or higher
- npm: Latest version
- Docker (optional): For testing with a local n8n instance
Project Structure
n8n-nodes-lexeri/
├── credentials/
│ └── LexeriApi.credentials.ts # Credential type definition (Bearer token)
├── nodes/
│ ├── Lexeri/
│ │ ├── Lexeri.node.ts # Main node implementation
│ │ ├── types.ts # TypeScript type definitions for API models
│ │ ├── locales.ts # Locale code constants and options
│ │ └── lexeri.svg # Node icon
│ └── LexeriTrigger/
│ ├── LexeriTrigger.node.ts # Trigger node implementation
│ └── lexeri.svg # Node icon
├── dist/ # Compiled JavaScript output (generated)
├── .prettierrc.js # Code formatting rules
├── eslint.config.mjs # ESLint configuration
├── package.json # Package metadata and dependencies
├── tsconfig.json # TypeScript compiler configuration
└── README.md              # This file

Key Files
- credentials/LexeriApi.credentials.ts: Defines the credential type that users configure in n8n. Contains the Bearer token field and authentication logic.
- nodes/Lexeri/Lexeri.node.ts: The main node implementation with operations, parameters, and execute logic.
- nodes/Lexeri/types.ts: TypeScript interfaces for all Lexeri API models (terms, topics, usage examples, webhooks, etc.).
- nodes/Lexeri/locales.ts: Language code constants for locale selection in operations.
- nodes/LexeriTrigger/LexeriTrigger.node.ts: The trigger node for webhook events (term_created, term_request_created).
- package.json: Defines the package name, scripts, and registers the nodes and credentials with n8n.
Development Setup
1. Clone the Repository
git clone https://github.com/yourusername/n8n-nodes-lexeri.git
cd n8n-nodes-lexeri

2. Install Dependencies
npm install

This installs the n8n node CLI (@n8n/node-cli) and other development dependencies.
3. Start Development Mode
npm run dev

This command:
- Compiles TypeScript to JavaScript
- Starts a local n8n instance with your node loaded
- Enables hot-reload for changes
- Opens n8n in your browser (usually at http://localhost:5678)
You can now test your node directly in the n8n UI!
4. Make Changes
Edit the files in credentials/ or nodes/. The dev server will automatically reload when you save changes.
5. Build the Project
npm run build

This compiles TypeScript to the dist/ folder. Always build before publishing.
6. Lint Your Code
npm run lint

Or auto-fix issues:

npm run lint:fix

Adding a New Operation
To add a new operation (e.g., "Search Terms"):
1. Add the Operation to the Dropdown
In nodes/Lexeri/Lexeri.node.ts, find the operation property and add a new option:
{
name: 'Search Terms',
value: 'searchTerms',
description: 'Search for terms in the termbase',
action: 'Search for terms',
},

2. Add Operation-Specific Parameters
Add parameters that should only appear for this operation using displayOptions:
{
displayName: 'Search Query',
name: 'searchQuery',
type: 'string',
displayOptions: {
show: {
operation: ['searchTerms'],
},
},
default: '',
required: true,
description: 'The search query',
},

3. Add the API Call Logic
In the execute method, add a new conditional block:
if (operation === 'searchTerms') {
const searchQuery = this.getNodeParameter('searchQuery', itemIndex) as string;
const localeCode = this.getNodeParameter('localeCode', itemIndex) as string;
const credentials = await this.getCredentials('lexeriApi');
const baseUrl = (credentials.baseUrl as string) || 'https://terms.lexeri.com';
const options: IHttpRequestOptions = {
method: 'GET',
url: `${baseUrl}/terms`,
qs: {
search: searchQuery,
locale_code: localeCode,
},
};
const response = await this.helpers.httpRequestWithAuthentication(
'lexeriApi',
options,
);
returnData.push({
json: response,
pairedItem: itemIndex,
});
}

4. Build and Test

npm run build
npm run dev

Test the new operation in the n8n UI.
Testing Locally with n8n in Docker
If you're running n8n in Docker, you can test your custom node by mounting the project into the container.
Method 1: Volume Mount (Recommended for Development)
Build the project:
npm run build

Mount the project into your n8n Docker container:
Update your docker-compose.yml or Docker run command:

version: '3'
services:
  n8n:
    image: n8nio/n8n
    ports:
      - 5678:5678
    volumes:
      - n8n_data:/home/node/.n8n
      - /Users/stefan/development/lexeri_n8n:/home/node/.n8n/custom
    environment:
      - N8N_CUSTOM_EXTENSIONS=/home/node/.n8n/custom

Restart n8n:
docker-compose restart

Verify: Open n8n and search for "Lexeri" in the node palette.
Method 2: Install as npm Package in Container
Build the project:
npm run build

Enter the container:
docker exec -it <container-name> /bin/sh
Install the package:
cd /usr/local/lib/node_modules/n8n
npm install /path/to/your/n8n-nodes-lexeri
Restart n8n.
Method 3: Use npm link (For Active Development)
In the project directory:
npm link
In the n8n container or global n8n installation:
npm link n8n-nodes-lexeri
Restart n8n.
Testing the API
Test Credentials
Create a workflow in n8n with the Lexeri node and configure valid credentials. If the credentials are valid, you should see a green checkmark.
Test Term Check Operation
- Add a Lexeri node to your workflow
- Select "Check Terminology"
- Enter test text (e.g., "Das Passivhaus hat eine besonders hohe Luftdichtheit.")
- Set locale to de
- Execute the workflow
- Verify the output contains term matches
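The same check can be reproduced outside n8n. A hedged sketch that builds the multipart request for POST /term_checks/live (endpoint, Authorization header, and form field names as documented in this README; the global FormData and fetch require Node 18+):

```typescript
// Sketch: build the multipart request for POST /term_checks/live outside n8n.
// Endpoint, header, and form fields are taken from this README; adjust the
// base URL for sandbox or custom instances.
function buildTermCheckRequest(text: string, localeCode: string, token: string) {
  const form = new FormData();
  form.append('text', text);
  form.append('locale_code', localeCode);
  return {
    url: 'https://terms.lexeri.com/term_checks/live',
    init: {
      method: 'POST',
      headers: { Authorization: `Bearer ${token}` },
      body: form,
    },
  };
}

// Usage:
//   const { url, init } = buildTermCheckRequest('Das Passivhaus ...', 'de', token);
//   const result = await (await fetch(url, init)).json();
```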
Example Test with cURL
You can also test the API directly:
curl -X POST https://terms.lexeri.com/term_checks/live \
-H "Authorization: Bearer YOUR_TOKEN" \
-F "text=Das Passivhaus hat eine besonders hohe Luftdichtheit." \
-F "locale_code=de"

Lexeri API Reference
- Base URL: https://terms.lexeri.com (production) or https://terms.sandbox.toptranslation.com (sandbox)
- Authentication: Bearer token in the Authorization header
- Endpoints: See the Lexeri API Documentation
- OpenAPI Spec: Available at https://terms.toptranslation.com/openapi.yaml
Key Endpoints Used
| Endpoint | Method | Description | Operation |
|---|---|---|---|
| /term_checks/live | POST | Check text for terminology matches | Check Terminology |
| /terms | GET | Search terms in termbase | Search Terms |
| /terms/{identifier}/usage_examples | GET | List usage examples for a term | List Usage Examples |
| /terms/{identifier}/usage_examples | POST | Create usage example for a term | Create Usage Example |
| /termbases/info | GET | Get termbase metadata | Get Termbase Info |
| /topics | GET | List all topics | List Topics |
| /topics/{identifier}/term_entries/batch_add | POST | Add term entries to topic | Add Terms to Topic |
| /exports/download | POST | Export termbase as TBX | Export Termbase |
| /imports | POST | Create import job | Import Terms |
| /term_extractions | POST | Create term extraction job | Extract Terms |
| /term_requests | POST | Create term request | Create Term Request |
| /webhooks | GET | List all webhooks | Trigger: Check exists |
| /webhooks | POST | Register webhook | Trigger: Subscribe |
| /webhooks/{identifier} | DELETE | Unregister webhook | Trigger: Unsubscribe |
| /stats | GET | Get user statistics | Credential testing |
| https://files.lexeri.com/v2/documents | POST | Upload files | File operations |
Deploying to a Remote n8n Instance (without npm)
If your n8n instance runs in Docker on a remote server and you don't want to publish to npm, use the volume mount approach.
1. Build and pack the node locally
npm run build
npm pack

This creates a file like n8n-nodes-lexeri-0.4.0.tgz.
2. Upload the tarball to the server
scp n8n-nodes-lexeri-0.4.0.tgz user@your-server:/tmp/

3. Create a custom-nodes directory on the server
SSH into the server and extract the tarball into a dedicated directory next to your compose.yaml:
ssh user@your-server
mkdir -p /root/n8n-compose/custom-nodes/n8n-nodes-lexeri
cd /root/n8n-compose/custom-nodes/n8n-nodes-lexeri
tar -xzf /tmp/n8n-nodes-lexeri-0.4.0.tgz --strip-components=1

4. Update compose.yaml
Add the volume mount and environment variable to the n8n service:
services:
  n8n:
    ...
    environment:
      ...
      - N8N_COMMUNITY_PACKAGES_ENABLED=true
      - N8N_COMMUNITY_PACKAGES_ALLOW_TOOL_USAGE=true
      - N8N_CUSTOM_EXTENSIONS=/home/node/.n8n/custom
    volumes:
      ...
      - ./custom-nodes/n8n-nodes-lexeri:/home/node/.n8n/custom/n8n-nodes-lexeri

5. Recreate the container
cd /root/n8n-compose
docker compose up -d

The Lexeri node will now appear in the n8n node palette.
Updating the node
When you release a new version, repeat steps 1–3, then restart n8n:
docker compose restart n8n

Publishing to npm
When you're ready to publish your node as a community package:
1. Update package.json
- Ensure the package name starts with n8n-nodes-
- Update version, description, author, and repository URL
- Verify the n8n field correctly lists your credentials and nodes
2. Build and Test
npm run build
npm run lint

Ensure everything compiles without errors.
3. Publish
npm publish

Your package will be available on npm and users can install it via n8n's Community Nodes interface.
4. (Optional) Submit to n8n Creator Hub
For verified community nodes, submit your package through the n8n Creator Portal.
Contributing
Contributions are welcome! Please:
- Fork the repository
- Create a feature branch (git checkout -b feature/my-feature)
- Make your changes
- Run linting and tests (npm run lint, npm run build)
- Commit your changes (git commit -m 'Add new feature')
- Push to the branch (git push origin feature/my-feature)
- Open a Pull Request
Troubleshooting
Node doesn't appear in n8n
- Ensure the package name starts with n8n-nodes-
- Check that package.json has the correct n8n field
- Verify the build completed successfully (dist/ folder exists)
- Restart n8n completely
TypeScript compilation errors
- Run npm install to ensure all dependencies are installed
- Check that you're using Node.js v22+
- Review error messages and fix type issues
- Review error messages and fix type issues
API authentication fails
- Verify your Bearer token is correct
- Check the base URL (production vs. sandbox)
- Test the API directly with cURL to isolate issues
Resources
- n8n Community Nodes Documentation
- n8n Node Building Guide
- Lexeri Developer Portal
- n8n Community Forum