ChatUnify

This notebook covers how to get started with Unify chat models.

Unify dynamically routes each query to the best LLM, with support for providers such as OpenAI, MistralAI, Perplexity AI, and Together AI. You can also access all providers individually using a single API key.

You can check out our live benchmarks to see where the data is coming from!

Installation

The first thing to do is install the unifyai package.

!pip install -U unifyai
Collecting unifyai
Successfully installed jsonlines-4.0.0 unifyai-0.9.5

Environment Setup

Make sure to set the UNIFY_KEY environment variable. You can get a key in the Unify Console.

import os

os.environ["UNIFY_KEY"] = "API_KEY"  # replace with your Unify API key
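If you prefer not to hard-code the key, here is a minimal sketch that reads it interactively with the standard-library getpass module (assuming you are running in an interactive session):

import getpass
import os

# Prompt for the key instead of hard-coding it
if "UNIFY_KEY" not in os.environ:
    os.environ["UNIFY_KEY"] = getpass.getpass("Enter your Unify API key: ")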

Usage

Let's take a look at how to use the package now.

The first thing we can do is initialize a model. To configure Unify, pass an endpoint string to ChatUnify. You can read more about this in Unify's docs.

from langchain_community.chat_models import ChatUnify

chat = ChatUnify(model="gpt-4o@openai")
API Reference: ChatUnify

Once we have initialized the model, we can query it with invoke:

chat.invoke("Hello! How are you?")
AIMessage(content="Hello! I'm just a computer program, so I don't have feelings, but I'm here and ready to help you with whatever you need. How can I assist you today?", additional_kwargs={}, response_metadata={'usage': {'completion_tokens': 34, 'prompt_tokens': 13, 'total_tokens': 47, 'completion_tokens_details': {'reasoning_tokens': 0}, 'cost': 0.000575}, 'model': 'gpt-4o@openai', 'finish_reason': 'stop'}, id='run-10640642-200a-41c6-acc2-f651c0ded4ad-0')

Single Sign-On

If you don't want the router to select the provider, you can also use our SSO to query endpoints from different providers without creating accounts with each of them. For example, all of these are valid endpoints:

chat = ChatUnify(model="llama-3.1-8b-chat@together-ai")
chat = ChatUnify(model="gpt-4o@openai")
chat = ChatUnify(model="mistral-nemo@mistral-ai")

This allows you to quickly switch and test different models and providers. For example, if you are working on an application that uses gpt-4 under the hood, you can use this to query a much cheaper LLM during development and/or testing to reduce costs.
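As a rough illustration of that pattern (the DEV_MODE flag below is hypothetical, not part of Unify or LangChain), you could pick the endpoint from a single environment variable so that development runs hit a cheaper model:

import os

from langchain_community.chat_models import ChatUnify

# Hypothetical DEV_MODE flag: route to a cheaper endpoint while developing,
# and to gpt-4o@openai otherwise
endpoint = "llama-3.1-8b-chat@together-ai" if os.getenv("DEV_MODE") == "1" else "gpt-4o@openai"
chat = ChatUnify(model=endpoint)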

Take a look at the available ones here!

Chaining Inputs

Now let's build a simple chain that leverages prompt templates.

We will need to define a prompt template:

from langchain_core.prompts import ChatPromptTemplate

prompt = ChatPromptTemplate.from_messages(
    [
        ("system", "You are a helpful assistant that translates English to French."),
        ("human", "Translate this sentence from English to French. {english_text}."),
    ]
)
API Reference: ChatPromptTemplate

And then simply build and invoke the resulting chain:

chat = ChatUnify(model="llama-3.1-8b-chat@input-cost")
chain = prompt | chat
chain.invoke({"english_text": "Hello! How are you?"})
AIMessage(content='The translation of the sentence "Hello! How are you?" from English to French is:\n\n"Bonjour ! Comment allez-vous ?"', additional_kwargs={}, response_metadata={'usage': {'completion_tokens': 29, 'prompt_tokens': 60, 'total_tokens': 89, 'completion_tokens_details': None, 'queue_time': 0.002340239999999997, 'prompt_time': 0.01620952, 'completion_time': 0.038666667, 'total_time': 0.054876187, 'cost': 5.32e-06}, 'model': 'llama-3.1-8b-chat@input-cost', 'finish_reason': 'stop'}, id='run-d91ac89d-92e2-41a4-a3a6-6415ebb860a5-0')
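If you only need the translated string rather than the full message object, here is a small sketch that appends LangChain's StrOutputParser to the same chain (it assumes the prompt and chat defined above):

from langchain_core.output_parsers import StrOutputParser

# Reuses the prompt and chat defined above; the output parser extracts the
# message content as a plain string
string_chain = prompt | chat | StrOutputParser()
string_chain.invoke({"english_text": "Hello! How are you?"})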

Streaming and optimizing for latency

If you are building an application where responsiveness is key, you most likely want a streaming response. On top of that, ideally you would use the provider with the lowest Time to First Token (TTFT) to reduce the time your users spend waiting for a response. With Unify, this would look something like:

chat_ttft = ChatUnify(model="mistral-large@ttft")
for chunk in chat_ttft.stream("What is a large language model?"):
    print(chunk.content, end="")
A large language model is a type of artificial intelligence model designed to understand and generate human-like text based on patterns it has learned from extensive datasets. Here are some key aspects of large language models:

1. **Size**: These models are typically very large, with billions of parameters. The size allows them to capture complex linguistic patterns and generate coherent text.

2. **Training Data**: They are trained on vast amounts of text data from the internet, up until a certain point in time. This data can include books, articles, websites, and more.

3. **Versatility**: Large language models can perform a wide range of tasks, such as translating languages, summarizing text, answering questions, generating code, and more, often without being specifically trained for each task.

4. **Context Understanding**: They can understand and generate text based on the context provided. However, they don't have personal experiences, feelings, or consciousness.

5. **Limitations**: While they strive to generate helpful, harmless, and honest responses, they may sometimes provide inaccurate information, make up facts (hallucinate), or exhibit biases present in their training data.

6. **Examples**: Some well-known large language models include models developed by organizations like OpenAI (e.g., the model behind me), Google, Meta, and others.

In essence, large language models are powerful tools for understanding and generating human language, but they should be used with awareness of their capabilities and limitations.

Batching and Lowest Output Cost

On the other hand, maybe you are building an AI service that processes inputs in batches to generate content. In this case, you may want to use the cheapest provider for long outputs. Let's see how you can do this using batch and dynamic routing!

messages = [
    "Write a blog post about Rome",
    "Write a blog post about Paris",
]

chat_cheapest = ChatUnify(model="llama-3.1-8b-chat@output-cost")
chat_cheapest.batch(messages)
[AIMessage(content="**Discover the Eternal City: A Guide to Rome**\n\nRome, the capital of Italy, is a city that embodies the very essence of history, culture, and beauty. With its rich past, stunning architecture, and vibrant atmosphere, Rome is a destination that has captivated the hearts of travelers for centuries. In this blog post, we'll delve into the must-see sights, hidden gems, and insider tips to help you make the most of your Roman adventure.\n\n**Must-see Sights**\n\nRome is a treasure trove of iconic landmarks, each one more breathtaking than the last. Here are a few of the top attractions to add to your itinerary:\n\n*   **The Colosseum**: This ancient amphitheater is one of Rome's most recognizable symbols. Take a guided tour to learn about the gladiators who once fought here and the engineering feats that made this massive structure possible.\n*   **The Vatican City**: The Vatican is home to numerous iconic landmarks, including St. Peter's Basilica, the Sistine Chapel, and the Vatican Museums. Be sure to book tickets in advance to avoid long lines.\n*   **The Pantheon**: This magnificently preserved ancient temple is a must-visit for architecture enthusiasts. Its impressive dome and oculus make it a true marvel of ancient engineering.\n\n**Hidden Gems**\n\nWhile the must-see sights are a great starting point, Rome has plenty of hidden gems waiting to be discovered. Here are a few of our favorites:\n\n*   **Trastevere Neighborhood**: This charming neighborhood is known for its narrow streets, quaint piazzas, and lively nightlife. Get lost in the winding streets and discover the local shops, restaurants, and bars.\n*   **Campo de' Fiori Market**: This bustling market has been in operation since the 15th century. Come here to sample local produce, artisanal cheeses, and fresh flowers.\n*   **The Trevi Fountain**: This stunning baroque fountain is a beautiful spot to people-watch and take in the city's vibrant atmosphere.\n\n**Insider Tips**\n\nRome can be overwhelming, especially for first-time visitors. Here are a few insider tips to help you navigate the city like a local:\n\n*   **Take a walking tour**: Rome is a city best explored on foot. Join a guided tour to learn about the city's history, architecture, and culture.\n*   **Use public transportation**: Rome has an excellent public transportation system. Buy a rechargeable ticket and use the buses, trams, and metro to get around the city.\n*   **Eat like a local**: Rome is famous for its delicious food. Try traditional dishes like carbonara, amatriciana, and supplì (fried risotto balls filled with mozzarella).\n\n**Accommodation**\n\nRome has a wide range of accommodation options to suit every budget. Here are a few recommendations:\n\n*   **Luxury Hotels**: If you're looking to splurge, consider staying at a luxury hotel like the Hotel Eden or the Grand Hotel Plaza.\n*   **Boutique Hotels**: For a more unique experience, try a boutique hotel like the Hotel Artemide or the Hotel Palazzo Montemartini.\n*   **Hostels**: If you're on a tight budget, consider staying at a hostel like the Hostel Colosseum or the Hostel Astrid.\n\n**Getting Around**\n\nRome is a relatively compact city, making it easy to get around on foot. However, if you prefer to use public transportation, here are a few tips:\n\n*   **Buy a rechargeable ticket**: The rechargeable ticket (BIT) is a convenient way to pay for public transportation. 
You can buy it at any newsstand or tobacconist.\n*   **Use the buses**: Rome's bus system is extensive and efficient. Use the buses to get to outlying neighborhoods and attractions.\n*   **Walk or bike**: Rome is a city best explored on foot or by bike. Take a stroll through the city's charming neighborhoods or rent a bike to explore the surrounding countryside.\n\n**Conclusion**\n\nRome is a city that has something for everyone. Whether you're interested in history, culture, food, or architecture, Rome is a destination that will leave you in awe. With its rich past, stunning landmarks, and vibrant atmosphere, Rome is a city that will leave you wanting more. So come and discover the eternal city for yourself – you won't be disappointed!", additional_kwargs={}, response_metadata={'usage': {'completion_tokens': 904, 'prompt_tokens': 16, 'total_tokens': 920, 'completion_tokens_details': None, 'cost': 5.06e-05}, 'model': 'llama-3.1-8b-chat@output-cost', 'finish_reason': 'stop'}, id='run-14ac7bc4-8166-4fe2-b85c-a11a94e4c305-0'),
AIMessage(content="**The City of Love: Discovering the Magic of Paris**\n\nAs the world's most romantic city, Paris has been enchanting visitors for centuries. From its stunning architecture to its world-class art museums, Paris is a destination that has something for everyone. Whether you're a foodie, a history buff, or a fashionista, the City of Light is sure to captivate your senses and leave you with unforgettable memories.\n\n**Must-See Attractions**\n\nParis is home to some of the world's most iconic landmarks, including the Eiffel Tower, the Arc de Triomphe, and the Louvre Museum. But there's more to Paris than just its famous landmarks. Be sure to explore the charming streets of Montmartre, visit the stunning Notre Dame Cathedral, and take a stroll along the Seine River.\n\n**The Eiffel Tower: A Symbol of Paris**\n\nNo trip to Paris would be complete without a visit to the Eiffel Tower. This iron giant, built for the 1889 World's Fair, is an engineering marvel and a testament to French ingenuity. Take the elevator to the top for breathtaking views of the city, or enjoy a romantic dinner at the Michelin-starred Le Jules Verne.\n\n**Art and Culture**\n\nParis is renowned for its rich artistic heritage, with museums like the Louvre and Orsay showcasing some of the world's most famous works of art. From the Mona Lisa to Van Gogh's Starry Night, Paris is a treasure trove of artistic masterpieces. Don't miss the Musée d'Orsay, which is home to an impressive collection of Impressionist and Post-Impressionist art.\n\n**Food and Wine**\n\nFrench cuisine is world-famous for its rich flavors, intricate preparations, and exquisite presentation. Be sure to try some of the city's famous dishes, such as escargots, ratatouille, and croissants. And of course, no trip to Paris would be complete without a glass of fine wine. Visit the famous wine bars in the Latin Quarter, or take a wine tasting tour to sample some of the region's best vintages.\n\n**Romance and Atmosphere**\n\nParis is often referred to as the City of Love, and for good reason. The city's charming streets, picturesque gardens, and cozy cafes make it the perfect destination for couples and honeymooners. Take a stroll through the Luxembourg Gardens, enjoy a romantic dinner at a sidewalk café, or watch the sunset from the top of the Eiffel Tower.\n\n**Insider Tips**\n\n* Visit the city's famous markets, such as the Marché aux Puces de Saint-Ouen, for a unique shopping experience.\n* Take a day trip to the Palace of Versailles, a stunning royal palace with breathtaking gardens.\n* Explore the city's trendy neighborhoods, such as Le Marais and Belleville, for a taste of local culture.\n* Be sure to try some of the city's delicious street food, such as crepes and baguettes.\n\n**Conclusion**\n\nParis is a city that has something for everyone. Whether you're a history buff, a foodie, or a romantic, the City of Light is sure to captivate your senses and leave you with unforgettable memories. So why not start planning your trip to Paris today? 
Book your flights, grab your camera, and get ready to fall in love with the most romantic city in the world.\n\n**Photos:**\n\n* The Eiffel Tower at sunset\n* The Louvre Museum's stunning glass pyramid\n* A charming street in Montmartre\n* A romantic dinner at a sidewalk café\n* The stunning Notre Dame Cathedral\n\n**Recommended Accommodations:**\n\n* Hotel Plaza Athenee: A luxurious hotel with stunning views of the Eiffel Tower\n* Hotel Le Bristol: A stylish hotel with a beautiful courtyard garden\n* Hotel Le Saint James: A charming boutique hotel in the heart of Paris\n\n**Recommended Restaurants:**\n\n* Le Jules Verne: A Michelin-starred restaurant with stunning views of the Eiffel Tower\n* Le Comptoir du Relais: A cozy bistro serving classic French cuisine\n* Septime: A trendy restaurant with a focus on seasonal ingredients\n\n**Recommended Tours:**\n\n* Paris City Vision: A guided tour of the city's landmarks and hidden gems\n* Seine River Cruise: A relaxing tour of the city's waterfront\n* Montmartre Walking Tour: A stroll through the charming streets of Paris's oldest neighborhood", additional_kwargs={}, response_metadata={'usage': {'completion_tokens': 914, 'prompt_tokens': 16, 'total_tokens': 930, 'completion_tokens_details': None, 'cost': 5.115e-05}, 'model': 'llama-3.1-8b-chat@output-cost', 'finish_reason': 'stop'}, id='run-86341982-841b-4b1a-b77c-50bc5f782764-0')]

Async calls and Lowest Input Cost

Last but not least, you can also run requests asynchronously. For tasks like long document summarization, optimizing for input costs is crucial. Unify's dynamic router can do this too!

messages = [
    "Summarize this in 10 words or less. OpenAI is a U.S. based artificial intelligence "
    "(AI) research organization founded in December 2015, researching artificial intelligence "
    "with the goal of developing 'safe and beneficial' artificial general intelligence, "
    "which it defines as 'highly autonomous systems that outperform humans at most economically "
    "valuable work'. As one of the leading organizations of the AI spring, it has developed "
    "several large language models, advanced image generation models, and previously, released "
    "open-source models. Its release of ChatGPT has been credited with starting the AI spring",

    "Summarize this in 10 words or less. Mistral AI is a French company selling"
    " artificial intelligence (AI) products. "
    "It was founded in April 2023 by previous employees of Meta Platforms and Google DeepMind. "
    "The company raised €385 million in October 2023 and in December 2023 it was valued at "
    "more than $2 billion. It produces open source large language models, citing the "
    "foundational importance of open-source software, and as a response to proprietary models. "
    "As of March 2024, two models have been published and are available as weights. "
    "Three more models, Small, Medium and Large, are available via API only.",

    "Summarize this in 10 words or less. LLaMA (Large Language Model Meta AI) is a family of"
    " autoregressive large language models (LLMs), "
    "released by Meta AI starting in February 2023. For the first version of LLaMA, four model sizes "
    "were trained: 7, 13, 33, and 65 billion parameters. LLaMA's developers reported that the 13B "
    "parameter model's performance on most NLP benchmarks exceeded that of the much larger GPT-3 "
    "(with 175B parameters) and that the largest model was competitive with state of the art models "
    "such as PaLM and Chinchilla. Whereas the most powerful LLMs have generally been accessible only "
    "through limited APIs (if at all), Meta released LLaMA's model weights to the research community "
    "under noncommercial license. Within a week of LLaMA's release, its weights were leaked to the "
    "public on 4chan via BitTorrent.",
]

chat_model = ChatUnify(model="mistral-large@input-cost")


await chat_model.abatch(messages)
[AIMessage(content='OpenAI develops safe, beneficial AI; released ChatGPT.', additional_kwargs={}, response_metadata={'usage': {'completion_tokens': 14, 'prompt_tokens': 127, 'total_tokens': 141, 'completion_tokens_details': None, 'cost': 0.000338}, 'model': 'mistral-large@input-cost', 'finish_reason': 'stop'}, id='run-b36074ff-9550-440e-be93-17a029673fcd-0'),
AIMessage(content='French AI startup Mistral, valued $2 billion, offers open-source language models.', additional_kwargs={}, response_metadata={'usage': {'completion_tokens': 18, 'prompt_tokens': 152, 'total_tokens': 170, 'completion_tokens_details': None, 'cost': 0.00041200000000000004}, 'model': 'mistral-large@input-cost', 'finish_reason': 'stop'}, id='run-fc7b411b-a444-4e3a-9bb5-99479331c625-0'),
AIMessage(content="LLaMA, Meta's language models, outperform GPT-3; weights leaked.", additional_kwargs={}, response_metadata={'usage': {'completion_tokens': 21, 'prompt_tokens': 221, 'total_tokens': 242, 'completion_tokens_details': None, 'cost': 0.000568}, 'model': 'mistral-large@input-cost', 'finish_reason': 'stop'}, id='run-7011c7e0-6ebe-43b0-9cea-68db91d28627-0')]
