
Access the latest AI models like ChatGPT, LLaMA, DeepSeek, Diffusion, Hugging Face, and beyond through a unified prompt layer and performance evaluation


Unified prompt, evaluation, and production integration to any large model

Intelligent Node

IntelliNode is a JavaScript module that integrates cutting-edge AI into your project. With its intuitive functions, you can easily feed data to models like ChatGPT, LLaMA, WaveNet, Gemini, and Stable Diffusion and receive generated text, speech, or images. It also offers high-level functions such as semantic search, multi-model evaluation, and chatbot capabilities.

Access the module

Install

One command gives you access to the latest models:

npm i intellinode 

For detailed usage instructions, refer to the documentation.

Examples

Gen

The Gen function quickly generates tailored content in one line.

import:

const { Gen } = require('intellinode');

call:

// one line to generate html page code (openai gpt4 is default)
text = 'a registration page with flat modern theme.';
await Gen.save_html_page(text, folder, file_name, openaiKey);

// or generate blog post (using cohere)
const blogPost = await Gen.get_blog_post(prompt, apiKey, provider = 'cohere');

Chatbot

import:

const { Chatbot, ChatGPTInput } = require('intellinode');

call:

// set chatGPT system mode and the user message.
const input = new ChatGPTInput('You are a helpful assistant.');
input.addUserMessage('What is the distance between the Earth and the Moon?');

// get chatGPT responses.
const chatbot = new Chatbot(OPENAI_API_KEY, 'openai');
const responses = await chatbot.chat(input);

Gemini Chatbot

IntelliNode enables effortless swapping between AI models.

  1. Imports:
const { Chatbot, GeminiInput, SupportedChatModels } = require('intellinode');
  2. Call:
const input = new GeminiInput();
input.addUserMessage('Who painted the Mona Lisa?');
const geminiBot = new Chatbot(apiKey, SupportedChatModels.GEMINI);
const responses = await geminiBot.chat(input);

Nvidia DeepSeek

  1. Import:
const { Chatbot, NvidiaInput, SupportedChatModels } = require("intellinode");
  2. Call:
const input = new NvidiaInput("You are an insightful assistant.", { model: 'deepseek-ai/deepseek-r1' });
input.addUserMessage("What's the summary of the Inception movie?");
// visit build.nvidia.com to get your key.
const nvidiaBot = new Chatbot(NVIDIA_API_KEY, SupportedChatModels.NVIDIA);
const responses = await nvidiaBot.chat(input);

The documentation for switching the chatbot between ChatGPT, LLaMA, Cohere, Mistral, and more can be found in the IntelliNode Wiki.

Semantic search

import:

const { SemanticSearch } = require('intellinode');

call:

const search = new SemanticSearch(apiKey);
// pivotItem is the item to search.
const results = await search.getTopMatches(pivotItem, searchArray, numberOfMatches);
const filteredArray = search.filterTopMatches(results, searchArray);

Prompt engineering

Generate improved prompts using LLMs:

const promptTemp = await Prompt.fromChatGPT("fantasy image with ninja jumping across buildings", openaiApiKey);
console.log(promptTemp.getInput());

Language models

import:

const { RemoteLanguageModel, LanguageModelInput } = require('intellinode');

call openai model:

const langModel = new RemoteLanguageModel('openai-key', 'openai');
model_name = 'gpt-4o';
const results = await langModel.generateText(new LanguageModelInput({
  prompt: 'Write a product description for smart plug that works with voice assistant.',
  model: model_name,
  temperature: 0.7
}));
console.log('Generated text:', results[0]);

change to call cohere models:

const langModel = new RemoteLanguageModel('cohere-key', 'cohere');
model_name = 'command-xlarge-20221108';
// ... same code

Image models

import:

const { RemoteImageModel, SupportedImageModels, ImageModelInput } = require('intellinode');

call DALL·E:

provider = SupportedImageModels.OPENAI;
const imgModel = new RemoteImageModel(apiKey, provider);
const images = await imgModel.generateImages(new ImageModelInput({
  prompt: 'teddy writing a blog in times square',
  numberOfImages: 1
}));

change to call Stable Diffusion:

provider = SupportedImageModels.STABILITY;
// ... same code

OpenAI advanced access

To access OpenAI services from your Azure account, call the following function at the start of your application:

const { ProxyHelper } = require('intellinode');
ProxyHelper.getInstance().setAzureOpenai(resourceName);

To access OpenAI from a proxy in restricted regions:

ProxyHelper.getInstance().setOpenaiProxyValues(openaiProxyJson);

For more details and in-depth code, check the samples.

Frontend

Include the following CDN script in your HTML:

<script src="https://cdn.jsdelivr.net/npm/intellinode@latest/front/intellinode.min.js"></script> 

Check a sample html here.

The code repository setup

First setup

  1. Initialize the project:
cd IntelliNode
npm install
  2. Create a .env file with the access keys:
OPENAI_API_KEY=<key_value>
COHERE_API_KEY=<key_value>
GOOGLE_API_KEY=<key_value>
STABILITY_API_KEY=<key_value>
HUGGING_API_KEY=<key_value>
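Before running the integration tests, it helps to confirm the keys were actually loaded. A minimal sketch of such a check (the `requireEnvKeys` helper is illustrative, not part of IntelliNode):

```javascript
// requireEnvKeys: illustrative helper (not part of IntelliNode) that
// reports which of the expected .env keys are missing from process.env.
function requireEnvKeys(names) {
  return names.filter((name) => !process.env[name]);
}

// Example: check the keys used by the integration tests.
const missing = requireEnvKeys(['OPENAI_API_KEY', 'COHERE_API_KEY']);
if (missing.length > 0) {
  console.warn('Missing environment variables:', missing.join(', '));
}
```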

Test cases

  1. run the remote language models test cases: node test/integration/RemoteLanguageModel.test.js

  2. run the remote image models test cases: node test/integration/RemoteImageModel.test.js

  3. run the remote speech models test cases: node test/integration/RemoteSpeechModel.test.js

  4. run the embedding test cases: node test/integration/RemoteEmbedModel.test.js

  5. run the chatBot test cases: node test/integration/Chatbot.test.js

📕 Documentation

  • IntelliNode Wiki: Check the wiki page for in-depth instructions and practical use cases.
  • Showcase: Experience the potential of Intellinode in action, and use your keys to generate content and html pages.
  • Samples: Explore a code sample with detailed setup documentation to get started with Intellinode.
  • Model Evaluation: Demonstrate a swift approach to compare the performance of multiple models against designated target answers.
  • Semantic Search: In-memory semantic search with iterator over large data.
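The model-evaluation flow above can be sketched without the library: score each model's answer against a target answer and rank the models. The word-overlap (Jaccard) scorer below is a stand-in for the embedding-based similarity a real evaluation would use, and all names are hypothetical:

```javascript
// Illustrative sketch of multi-model evaluation: rank candidate answers
// by similarity to a target answer. Jaccard word overlap is a stand-in
// for embedding-based scoring; it is not IntelliNode's actual metric.
function jaccard(a, b) {
  const setA = new Set(a.toLowerCase().split(/\s+/));
  const setB = new Set(b.toLowerCase().split(/\s+/));
  const intersection = [...setA].filter((w) => setB.has(w)).length;
  const union = new Set([...setA, ...setB]).size;
  return union === 0 ? 0 : intersection / union;
}

function rankAnswers(target, answers) {
  // answers: { modelName: answerText }
  return Object.entries(answers)
    .map(([model, text]) => ({ model, score: jaccard(target, text) }))
    .sort((x, y) => y.score - x.score);
}

// Hypothetical outputs from two models for the same prompt.
const ranking = rankAnswers('The moon orbits the earth', {
  modelA: 'The moon orbits the earth once a month',
  modelB: 'Cheese is made from milk',
});
console.log(ranking[0].model); // the closer answer ranks first
```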

Pillars

The module foundation:

  • The wrapper layer provides low-level access to the latest AI models
  • The controller layer offers a unified input to any AI model by handling the differences, so you can switch between models like OpenAI and Cohere without changing the code.
  • The function layer provides abstract functionality that extends based on the app's use cases. For example, an easy-to-use chatbot or marketing content generation utilities.
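The controller-layer idea can be illustrated with a minimal dispatch table: callers build one unified input, and a per-provider translator absorbs the differences. All names below are hypothetical; IntelliNode's real controllers live inside the library:

```javascript
// Minimal sketch of the controller-layer idea: one unified input,
// per-provider translation behind a dispatch table. All names here are
// hypothetical and do not mirror IntelliNode's internals.
const providers = {
  openai: (input) => ({ messages: [{ role: 'user', content: input.prompt }] }),
  cohere: (input) => ({ prompt: input.prompt }),
};

function buildRequest(providerName, input) {
  const translate = providers[providerName];
  if (!translate) throw new Error(`Unsupported provider: ${providerName}`);
  return translate(input);
}

// Switching providers changes one argument, not the calling code.
const unified = { prompt: 'Write a haiku about the sea.' };
console.log(buildRequest('openai', unified));
console.log(buildRequest('cohere', unified));
```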

Roadmap

Call for contributors: registration form.

  • Add support for vLLM offline models.
  • Add support for NVIDIA NIM for local and remote models.
  • Evaluate multiple models using a few lines.
  • Extend the Gen function to handle complex business cases with one command.
  • Add auto-agent capabilities.

License

Apache License

Copyright 2023 Github.com/Barqawiz/IntelliNode

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at

 http://www.apache.org/licenses/LICENSE-2.0 

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.
