JSPM

  • Downloads 51
  • License Apache

Integrate and evaluate various AI models, such as ChatGPT, Llama, Stable Diffusion, Cohere, Gemini, and Hugging Face.

Package Exports

  • intellinode
  • intellinode/index.js

This package does not declare an exports field, so the exports above have been automatically detected and optimized by JSPM instead. If any package subpath is missing, it is recommended to post an issue to the original package (intellinode) to support the "exports" field. If that is not possible, create a JSPM override to customize the exports field for this package.
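As a hedged illustration of the override route (the field names follow the standard package.json "exports" syntax; check JSPM's override documentation for the exact submission process), an override would supply an exports map along these lines:

```json
{
  "exports": {
    ".": "./index.js",
    "./index.js": "./index.js"
  }
}
```

This mirrors the two subpaths JSPM auto-detected above, so existing imports keep resolving while making the mapping explicit.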

Readme

Intelligent Node (IntelliNode)

Unified prompt, evaluation, and production integration to any AI model

IntelliNode is the ultimate tool to integrate with the latest language models and deep learning frameworks using JavaScript. The library provides intuitive functions for sending input to models like ChatGPT, WaveNet, and Stable Diffusion, and receiving generated text, speech, or images. With just a few lines of code, you can easily access the power of cutting-edge AI models to enhance your projects.

Latest Updates

  • Add Google Gemini chat and vision.
  • Add Mistral SMoE model as a chatbot provider (open source mixture of experts).
  • Update the chatbot to augment answers with your documents, allowing for a multi-model agent approach.
  • Update OpenAI with DALL·E 3, vision, speech, and ChatGPT functions (automation).
  • Improve Llama v2 chat speed and support Llama code models. 🦙
  • Update Stable Diffusion to use the XL model engine. 🎨
  • Add support for Hugging Face inference. 🤗
  • Support in-memory semantic search. 🔍
  • Add web search to the Cohere chatbot.

Join the Discord server for the latest updates and community support.

Examples

Functions

Chatbot

  1. imports:
const { Chatbot, ChatGPTInput } = require('intellinode');
  2. call:
// set the system mode and the user message.
const input = new ChatGPTInput('You are a helpful assistant.');
input.addUserMessage('What is the distance between the Earth and the Moon?');

// get the responses from the chatbot
const bot = new Chatbot(apiKey);
const responses = await bot.chat(input);

Google Gemini Chatbot

IntelliNode enables effortless swapping between AI models.

  1. imports:
const { Chatbot, GeminiInput, SupportedChatModels } = require('intellinode');
  2. call:
const input = new GeminiInput();
input.addUserMessage('Who painted the Mona Lisa?');

// get the Gemini api key from Google AI Studio
const geminiBot = new Chatbot(geminiApiKey, SupportedChatModels.GEMINI);
const responses = await geminiBot.chat(input);

The documentation on how to switch the chatbot between ChatGPT, Mistral, and Llama can be found in the IntelliNode Wiki.

Semantic Search

  1. imports:
const { SemanticSearch } = require('intellinode');
  2. call:
const search = new SemanticSearch(apiKey);
// pivotItem: the item to search for.
const results = await search.getTopMatches(pivotItem, searchArray, numberOfMatches);
const filteredArray = search.filterTopMatches(results, searchArray);
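Under the hood, semantic search ranks candidates by embedding similarity. The helper below is not IntelliNode code; it is a minimal plain-JavaScript sketch of the cosine-similarity ranking that this kind of search performs over embedding vectors:

```javascript
// Cosine similarity between two embedding vectors of equal length.
function cosineSimilarity(a, b) {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// Rank candidate vectors against a pivot vector, highest similarity first.
function topMatches(pivot, candidates, n) {
  return candidates
    .map((vec, index) => ({ index, score: cosineSimilarity(pivot, vec) }))
    .sort((x, y) => y.score - x.score)
    .slice(0, n);
}
```

The library's `getTopMatches` does the embedding call for you; this sketch only shows the ranking step on vectors you already have.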

Gen

  1. imports:
const { Gen } = require('intellinode');
  2. call:
// one line to generate a blog post
const blogPost = await Gen.get_blog_post(prompt, openaiApiKey);

// or generate html page code
const text = 'a registration page with flat modern theme.';
await Gen.save_html_page(text, folder, file_name, openaiKey);

// or convert csv data to charts
const csv_str_data = '<your csv as string>';
const topic = '<the csv topic>';
const numGraphs = 2;

const htmlCode = await Gen.generate_dashboard(csv_str_data, topic, openaiKey, numGraphs);

Models Access

Image models

  1. imports:
const { RemoteImageModel, SupportedImageModels, ImageModelInput } = require('intellinode');
  2. call DALL·E:
const provider = SupportedImageModels.OPENAI;

const imgModel = new RemoteImageModel(apiKey, provider);
const images = await imgModel.generateImages(new ImageModelInput({
    prompt: 'teddy writing a blog in times square',
    numberOfImages: 1
}));
  3. change to call Stable Diffusion:
const provider = SupportedImageModels.STABILITY;
// ... same code

Language models

  1. imports:
const { RemoteLanguageModel, LanguageModelInput } = require('intellinode');
  2. call the OpenAI model:
const langModel = new RemoteLanguageModel('openai-key', 'openai');
const model_name = 'text-davinci-003';

const results = await langModel.generateText(new LanguageModelInput({
  prompt: 'Write a product description for smart plug that works with voice assistant.',
  model: model_name,
  temperature: 0.7
}));

console.log('Generated text:', results[0]);
  3. change to call Cohere models:
const langModel = new RemoteLanguageModel('cohere-key', 'cohere');
const model_name = 'command';
// ... same code

Speech Synthesis

  1. imports:
const { RemoteSpeechModel, Text2SpeechInput } = require('intellinode');
  2. call the Google model:
const speechModel = new RemoteSpeechModel('google-key', 'google');
const audioContent = await speechModel.generateSpeech(new Text2SpeechInput({
  text: text,
  language: 'en-gb'
}));

Hugging Face Inference

  1. imports:
const { HuggingWrapper } = require('intellinode');
  2. call any model id:
const inference = new HuggingWrapper('HF-key');
const result = await inference.generateText(
   'facebook/bart-large-cnn', // model id
   { inputs: 'The tower is 324 metres (1,063 ft) tall, about the same height as an 81-storey building...' });

The available Hugging Face functions are generateText, generateImage, and processImage.

Check the samples for more code details including automating your daily tasks using AI.

Utilities

Prompt Engineering

Generate improved prompts using LLMs:

const { Prompt } = require('intellinode');

const promptTemp = await Prompt.fromChatGPT('fantasy image with ninja jumping across buildings', openaiApiKey);
console.log(promptTemp.getInput());

Azure OpenAI Access

To access OpenAI services from your Azure account, call the following function at the beginning of your application:

const { ProxyHelper } = require('intellinode');
ProxyHelper.getInstance().setAzureOpenai(resourceName);

Custom proxy

Check the code to access the chatbot through a proxy: proxy chatbot.

📕 Documentation

  • IntelliNode Wiki: Detailed documentation about IntelliNode.
  • Showcase: Explore interactive demonstrations of IntelliNode's capabilities.
  • Samples: Get started with IntelliNode using well-documented code samples.
  • Model Evaluation: A swift approach to comparing the performance of multiple models such as GPT-4, Llama, and Cohere.
  • LLM as Microservice: For scalable production.
  • Fine-tuning Tutorial: Learn how to tune LLMs with your data.

Pillars

  • The wrapper layer provides low-level access to the latest AI models.
  • The controller layer offers a unified input to any AI model by handling their differences, so you can switch between models like OpenAI and Cohere without changing your code.
  • The function layer provides abstract functionality that extends based on the app's use cases. For example, an easy-to-use chatbot or marketing content generation utilities.

IntelliNode is compatible with third-party library integrations like LangChain and vector DBs.

License

Apache License

Copyright 2023 IntelliNode

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License.