
🚅 LiteLLM

Deploy to Render | Deploy on Railway

Call 100+ LLM APIs using the OpenAI format [Bedrock, Huggingface, VertexAI, TogetherAI, Azure, OpenAI, Groq, etc.]


LiteLLM manages:

  • Translating inputs to the provider's completion, embedding, and image_generation endpoints
  • Consistent output: text responses are always available at ['choices'][0]['message']['content']
  • Retry/fallback logic across multiple deployments (e.g. Azure/OpenAI) via the Router (see the sketch below)
  • Setting budgets & rate limits per project, API key, and model via the LiteLLM Proxy Server (LLM Gateway)
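A minimal Router sketch (the deployment name, keys, and api_base below are placeholders; two deployments share one model group so requests can load-balance and fail over between them):

```python
from litellm import Router

# two deployments registered under the same model group ("gpt-4o"),
# so the Router can retry/fallback between them
router = Router(
    model_list=[
        {
            "model_name": "gpt-4o",
            "litellm_params": {
                "model": "azure/my-gpt-4o-deployment",  # placeholder deployment name
                "api_key": "your-azure-key",
                "api_base": "https://your-endpoint.openai.azure.com/",
            },
        },
        {
            "model_name": "gpt-4o",
            "litellm_params": {"model": "openai/gpt-4o", "api_key": "your-openai-key"},
        },
    ]
)

response = router.completion(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Hello, how are you?"}],
)
print(response.choices[0].message.content)
```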

Jump to LiteLLM Proxy (LLM Gateway) Docs
Jump to Supported LLM Providers

🚨 Stable Release: Use docker images with the -stable tag. These have undergone 12-hour load tests before being published. More information about the release cycle here

Support for more providers: missing a provider or LLM platform? Raise a feature request.

Usage (Docs)

Important

LiteLLM v1.0.0 now requires openai>=1.0.0. Migration guide here
LiteLLM v1.40.14+ now requires pydantic>=2.0.0. No changes required.

Open In Colab
pip install litellm
```python
from litellm import completion
import os

## set ENV variables
os.environ["OPENAI_API_KEY"] = "your-openai-key"
os.environ["ANTHROPIC_API_KEY"] = "your-anthropic-key"

messages = [{"content": "Hello, how are you?", "role": "user"}]

# openai call
response = completion(model="openai/gpt-4o", messages=messages)

# anthropic call
response = completion(model="anthropic/claude-3-sonnet-20240229", messages=messages)
print(response)
```

Response (OpenAI Format)

{ "id": "chatcmpl-565d891b-a42e-4c39-8d14-82a1f5208885", "created": 1734366691, "model": "claude-3-sonnet-20240229", "object": "chat.completion", "system_fingerprint": null, "choices": [ { "finish_reason": "stop", "index": 0, "message": { "content": "Hello! As an AI language model, I don't have feelings, but I'm operating properly and ready to assist you with any questions or tasks you may have. How can I help you today?", "role": "assistant", "tool_calls": null, "function_call": null } } ], "usage": { "completion_tokens": 43, "prompt_tokens": 13, "total_tokens": 56, "completion_tokens_details": null, "prompt_tokens_details": { "audio_tokens": null, "cached_tokens": 0 }, "cache_creation_input_tokens": 0, "cache_read_input_tokens": 0 } }

Call any model supported by a provider, with model=<provider_name>/<model_name>. There may be provider-specific details, so refer to the provider docs for more information.
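A few illustrative calls (the model identifiers below are examples, not an endorsement; check each provider's docs for the exact names available to you):

```python
from litellm import completion

messages = [{"role": "user", "content": "Hello, how are you?"}]

# the prefix before the first "/" selects the provider
completion(model="bedrock/anthropic.claude-3-sonnet-20240229-v1:0", messages=messages)
completion(model="vertex_ai/gemini-1.5-pro", messages=messages)
completion(model="groq/llama3-8b-8192", messages=messages)
```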

Async (Docs)

```python
from litellm import acompletion
import asyncio

async def test_get_response():
    user_message = "Hello, how are you?"
    messages = [{"content": user_message, "role": "user"}]
    response = await acompletion(model="openai/gpt-4o", messages=messages)
    return response

response = asyncio.run(test_get_response())
print(response)
```
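Since acompletion is an ordinary coroutine, multiple requests can run concurrently; a minimal sketch using asyncio.gather:

```python
import asyncio
from litellm import acompletion

async def main():
    # issue both requests concurrently rather than one after another
    tasks = [
        acompletion(model="openai/gpt-4o", messages=[{"role": "user", "content": q}])
        for q in ["Hello, how are you?", "Tell me a fun fact."]
    ]
    return await asyncio.gather(*tasks)

for r in asyncio.run(main()):
    print(r.choices[0].message.content)
```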

Streaming (Docs)

LiteLLM supports streaming the model response back; pass stream=True to get a streaming iterator in the response.
Streaming is supported for all models (Bedrock, Huggingface, TogetherAI, Azure, OpenAI, etc.)

```python
from litellm import completion

messages = [{"content": "Hello, how are you?", "role": "user"}]

# openai call
response = completion(model="openai/gpt-4o", messages=messages, stream=True)
for part in response:
    print(part.choices[0].delta.content or "")

# anthropic call
response = completion("anthropic/claude-3-sonnet-20240229", messages, stream=True)
for part in response:
    print(part)
```

Response chunk (OpenAI Format)

{ "id": "chatcmpl-2be06597-eb60-4c70-9ec5-8cd2ab1b4697", "created": 1734366925, "model": "claude-3-sonnet-20240229", "object": "chat.completion.chunk", "system_fingerprint": null, "choices": [ { "finish_reason": null, "index": 0, "delta": { "content": "Hello", "role": "assistant", "function_call": null, "tool_calls": null, "audio": null }, "logprobs": null } ] }

Logging & Observability (Docs)

LiteLLM exposes predefined callbacks to send data to Lunary, MLflow, Langfuse, DynamoDB, S3 buckets, Helicone, Promptlayer, Traceloop, Athina, and Slack.

```python
from litellm import completion
import litellm
import os

## set env variables for logging tools (when using MLflow, no API key set up is required)
os.environ["LUNARY_PUBLIC_KEY"] = "your-lunary-public-key"
os.environ["HELICONE_API_KEY"] = "your-helicone-auth-key"
os.environ["LANGFUSE_PUBLIC_KEY"] = ""
os.environ["LANGFUSE_SECRET_KEY"] = ""
os.environ["ATHINA_API_KEY"] = "your-athina-api-key"
os.environ["OPENAI_API_KEY"] = "your-openai-key"

# set callbacks
litellm.success_callback = ["lunary", "mlflow", "langfuse", "athina", "helicone"]  # log input/output to lunary, mlflow, langfuse, athina, helicone

# openai call
response = completion(model="openai/gpt-4o", messages=[{"role": "user", "content": "Hi 👋 - i'm openai"}])
```

LiteLLM Proxy Server (LLM Gateway) - (Docs)

Track spend + Load Balance across multiple projects

Hosted Proxy (Preview)

The proxy provides:

  1. Hooks for auth
  2. Hooks for logging
  3. Cost tracking
  4. Rate Limiting

📖 Proxy Endpoints - Swagger Docs

Quick Start Proxy - CLI

pip install 'litellm[proxy]'

Step 1: Start litellm proxy

```shell
$ litellm --model huggingface/bigcode/starcoder

# INFO: Proxy running on http://0.0.0.0:4000
```

Step 2: Make ChatCompletions Request to Proxy

```python
import openai  # openai v1.0.0+

client = openai.OpenAI(api_key="anything", base_url="http://0.0.0.0:4000")  # set proxy to base_url

# request sent to model set on litellm proxy, `litellm --model`
response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "this is a test request, write a short poem"}],
)
print(response)
```

Proxy Key Management (Docs)

Connect the proxy with a Postgres DB to create proxy keys

```shell
# Get the code
git clone https://github.com/BerriAI/litellm

# Go to folder
cd litellm

# Add the master key - you can change this after setup
echo 'LITELLM_MASTER_KEY="sk-1234"' > .env

# Add the litellm salt key - you cannot change this after adding a model
# It is used to encrypt / decrypt your LLM API Key credentials
# We recommend https://1password.com/password-generator/ to get a random hash for the litellm salt key
echo 'LITELLM_SALT_KEY="sk-1234"' >> .env

source .env

# Start
docker-compose up
```

UI on /ui on your proxy server

Set budgets and rate limits across multiple projects POST /key/generate

Request

```shell
curl 'http://0.0.0.0:4000/key/generate' \
  --header 'Authorization: Bearer sk-1234' \
  --header 'Content-Type: application/json' \
  --data-raw '{
    "models": ["gpt-3.5-turbo", "gpt-4", "claude-2"],
    "duration": "20m",
    "metadata": {"user": "ishaan@berri.ai", "team": "core-infra"}
  }'
```

Expected Response

{ "key": "sk-kdEXbIqZRwEeEiHwdg7sFA", # Bearer token"expires": "2023-11-19T01:38:25.838000+00:00"# datetime object }

Supported Providers (Docs)

Each provider is tracked for Completion, Streaming, Async Completion, Async Streaming, Async Embedding, and Async Image Generation support; see the docs for the per-provider breakdown.

  • openai
  • azure
  • AI/ML API
  • aws - sagemaker
  • aws - bedrock
  • google - vertex_ai
  • google - palm
  • google AI Studio - gemini
  • mistral ai api
  • cloudflare AI Workers
  • cohere
  • anthropic
  • empower
  • huggingface
  • replicate
  • together_ai
  • openrouter
  • ai21
  • baseten
  • vllm
  • nlp_cloud
  • aleph alpha
  • petals
  • ollama
  • deepinfra
  • perplexity-ai
  • Groq AI
  • Deepseek
  • anyscale
  • IBM - watsonx.ai
  • voyage ai
  • xinference [Xorbits Inference]
  • FriendliAI
  • Galadriel

Read the Docs

Contributing

Interested in contributing? Contributions to the LiteLLM Python SDK, Proxy Server, and LLM integrations are all accepted and highly encouraged! See our Contribution Guide for more details.

Enterprise

For companies that need better security, user management, and professional support.

Talk to founders

This covers:

  • Features under the LiteLLM Commercial License:
      • Feature Prioritization
      • Custom Integrations
      • Professional Support - Dedicated Discord + Slack
      • Custom SLAs
      • Secure access with Single Sign-On

Code Quality / Linting

LiteLLM follows the Google Python Style Guide.

We run:

If you have suggestions on how to improve the code quality, feel free to open an issue or a PR.

Support / talk with founders

Why did we build this

  • Need for simplicity: Our code started to get extremely complicated managing & translating calls between Azure, OpenAI, and Cohere.

Contributors

Run in Developer mode

Services

  1. Set up the .env file in the root directory
  2. Run dependent services: docker-compose up db prometheus

Backend

  1. (In root) create a virtual environment: python -m venv .venv
  2. Activate the virtual environment: source .venv/bin/activate
  3. Install dependencies: pip install -e ".[all]"
  4. Start the proxy backend: uvicorn litellm.proxy.proxy_server:app --host localhost --port 4000 --reload

Frontend

  1. Navigate to ui/litellm-dashboard
  2. Install dependencies: npm install
  3. Run npm run dev to start the dashboard