Building an AI Chatbot with OpenAI and LangChain

In this blog post, we’ll walk through creating a simple AI chatbot using OpenAI's language models combined with the power of LangChain—a framework that makes it easier to develop applications with large language models. We'll cover everything from setting up the environment to writing and testing your chatbot code.


Introduction

Chatbots are increasingly popular for customer service, virtual assistants, and more. Leveraging OpenAI’s powerful language models allows you to create chatbots that can understand and generate human-like text. LangChain further simplifies the development process by providing high-level abstractions for managing conversations, chaining prompts, and integrating external knowledge sources.

In this tutorial, we’ll build a simple chatbot that can:

  • Accept user input.

  • Generate responses using OpenAI’s GPT-3/4 API.

  • Manage context using LangChain.


Prerequisites

Before we begin, ensure you have:

  • Basic knowledge of Python.

  • An OpenAI API key (you can generate one from your OpenAI account dashboard).

  • Python installed (preferably Python 3.7+).

You’ll also need to install the following packages:
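pip install openai langchain fastapi uvicorn

The snippets in this post were written against the pre-1.0 openai package and an early (0.0.x) langchain release; if pip pulls newer versions, the import paths and API calls shown here may need pinning or updating.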

Note: FastAPI (and uvicorn) is optional in this tutorial; it is only used for the web-service integration at the end. The core functionality needs only openai and langchain.


Setting Up the Environment

First, create a new project directory and set up a virtual environment (optional but recommended):

mkdir ai-chatbot
cd ai-chatbot
python -m venv env
source env/bin/activate  # On Windows: env\Scripts\activate

Next, install the required packages mentioned in the prerequisites.


Building the Chatbot

Integrating OpenAI

LangChain can seamlessly integrate with OpenAI. We start by creating a simple function that uses the OpenAI API to generate responses. Save your OpenAI API key in an environment variable for security:

export OPENAI_API_KEY="your_openai_api_key"
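# On Windows (PowerShell): $env:OPENAI_API_KEY = "your_openai_api_key"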

Now, let’s write a Python snippet that defines a function to call OpenAI's API:

import os
import openai

# Read the API key from the environment rather than hard-coding it
openai.api_key = os.getenv("OPENAI_API_KEY")

def generate_response(prompt: str) -> str:
    # Note: this uses the legacy Completions endpoint (openai<1.0);
    # text-davinci-003 has since been deprecated by OpenAI
    response = openai.Completion.create(
        engine="text-davinci-003",  # or another engine of your choice
        prompt=prompt,
        max_tokens=150,   # cap the length of the reply
        temperature=0.7   # moderate creativity
    )
    # The API returns a list of choices; take the first one
    return response.choices[0].text.strip()

# Test the function
if __name__ == "__main__":
    test_prompt = "Hello, how can I help you today?"
    print("Chatbot:", generate_response(test_prompt))

This function sends a prompt to OpenAI’s API and returns the generated text.


Using LangChain for Conversation Management

LangChain helps manage conversation state and chain prompts together. Let’s create a simple conversational agent using LangChain’s ConversationChain class.

Create a new file named chatbot.py and add the following code:

from langchain import OpenAI, ConversationChain

# Initialize the OpenAI LLM interface via LangChain
llm = OpenAI(temperature=0.7)

# Create a conversation chain to manage dialogue context
conversation = ConversationChain(llm=llm)

def chat_with_bot(user_input: str) -> str:
    # Get a response from the conversation chain
    response = conversation.predict(input=user_input)
    return response

# Test the conversation
if __name__ == "__main__":
    print("Welcome to the AI Chatbot! (type 'exit' to quit)")
    while True:
        user_input = input("You: ")
        if user_input.lower() == "exit":
            break
        response = chat_with_bot(user_input)
        print("Bot:", response)

What’s happening in the code above?

  • LangChain's OpenAI: We create an llm object that wraps the OpenAI API.

  • ConversationChain: This object manages the dialogue history so that context is maintained across multiple interactions (see the short sketch after this list).

  • chat_with_bot(): This function takes user input, processes it through the conversation chain, and returns the response.
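
To see the context handling concretely, here is a minimal sketch, assuming a 0.0.x langchain release where ConversationChain defaults to a ConversationBufferMemory:

from langchain import OpenAI, ConversationChain

conversation = ConversationChain(llm=OpenAI(temperature=0.7))
conversation.predict(input="My name is Alice.")
reply = conversation.predict(input="What is my name?")
print(reply)                       # the model can answer because the first turn is still in memory
print(conversation.memory.buffer)  # inspect the accumulated dialogue history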

Run the script using:
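python chatbot.py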

Type your messages to see the chatbot in action!


Running and Testing the Chatbot

For quick testing, you can run the script directly from your terminal. Each message you send will be processed, and the conversation chain will maintain context. Over time, the conversation will feel more natural as previous interactions are taken into account.

If you want to deploy this chatbot as a web service, you can integrate it with FastAPI. Here’s a simple example:

FastAPI Integration (Optional)

Create a file named main.py:

from fastapi import FastAPI
from pydantic import BaseModel
from chatbot import chat_with_bot

app = FastAPI(title="AI Chatbot with OpenAI and LangChain")

class ChatRequest(BaseModel):
    message: str

@app.post("/chat")
def chat_endpoint(request: ChatRequest):
    response = chat_with_bot(request.message)
    return {"response": response}

@app.get("/")
def read_root():
    return {"message": "Welcome to the AI Chatbot API. Use the /chat endpoint to start a conversation."}

Run your FastAPI application using uvicorn:

uvicorn main:app --reload

You can now test your chatbot through the interactive Swagger UI at http://127.0.0.1:8000/docs.
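
Alternatively, you can exercise the /chat endpoint straight from the command line with curl:

curl -X POST http://127.0.0.1:8000/chat \
  -H "Content-Type: application/json" \
  -d '{"message": "Hello, bot!"}'

The response should be a JSON object of the form {"response": "..."}.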


Conclusion

In this tutorial, we built an AI chatbot using OpenAI and LangChain. We started by integrating OpenAI’s language model, then used LangChain’s ConversationChain to maintain context throughout the conversation. Finally, we demonstrated how to deploy the chatbot using FastAPI.

This basic framework can be expanded with additional features such as memory management, integration with external data sources, or more advanced conversation logic. Experiment with these tools and see how you can customize your chatbot to suit your needs!
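
For example, if conversations grow long, you could swap the chain’s default full-history buffer for a windowed memory so only recent turns are sent to the model. Here is a minimal sketch, again assuming a 0.0.x langchain release:

from langchain import OpenAI, ConversationChain
from langchain.memory import ConversationBufferWindowMemory

# Keep only the last 3 exchanges in the prompt instead of the full history
conversation = ConversationChain(
    llm=OpenAI(temperature=0.7),
    memory=ConversationBufferWindowMemory(k=3),
)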

Happy coding and building your intelligent conversational agents! 🚀

Do you have a project idea you’d like to discuss?
