LangChain is a powerful framework for developing applications powered by language models. It simplifies the process of chaining together different components like LLMs, prompts, and memory. This guide shows you how to combine LangChain’s flexibility with Cycls’ easy deployment to create robust AI agents.

Prerequisites

  • Python 3.8+
  • cycls package installed
  • Docker installed (for local testing)
  • OpenAI API key
Note: This guide uses OpenAI, but LangChain and Cycls support many providers including Anthropic, Google Gemini, Mistral, Cohere, and more. Simply swap the pip dependency and model name.
Install the Cycls package:
pip install cycls

Step 1: Import Cycls

Create a new file called agent.py and import the cycls package:
import cycls

Step 2: Configure Environment

Create a file named .env in the same directory to store your API key safely:
OPENAI_API_KEY=sk-proj-...
This keeps your secrets out of your code.
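
For intuition, `load_dotenv` (used in Step 4) roughly amounts to the sketch below: parse KEY=VALUE lines from the file and export them into the process environment. This is a simplified stand-in, not python-dotenv itself — the real library also handles quoting, variable interpolation, and override rules — and `EXAMPLE_API_KEY` is a dummy name and value used only for illustration.

```python
import os

def load_dotenv_sketch(text):
    """Simplified sketch of what python-dotenv's load_dotenv() does:
    parse KEY=VALUE lines and export them into os.environ."""
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue  # skip blank lines, comments, and malformed lines
        key, _, value = line.partition("=")
        # Like load_dotenv's default, don't override variables already set
        os.environ.setdefault(key.strip(), value.strip())

# Dummy key/value for illustration only
load_dotenv_sketch("# comment\nEXAMPLE_API_KEY=sk-example-123\n")
print(os.environ["EXAMPLE_API_KEY"])  # -> sk-example-123
```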

Step 3: Initialize the Agent

Initialize the agent with LangChain dependencies and include your .env file.
agent = cycls.Agent(
    pip=["langchain", "langchain-openai", "python-dotenv"],
    copy=[".env"]
)
  • pip: Installs required Python packages.
  • copy: Copies your .env file to the agent’s environment so it can access your keys.

Step 4: Define the Agent Logic

Use the @agent decorator to register your async handler. We’ll load the environment variables, initialize the chat model, and stream the response.
Important: Import dependencies inside the function body to ensure they work in the remote environment.
@agent("langchain-agent", title="LangChain Agent")
async def chat_agent(context):
    # 1. Load environment variables
    from dotenv import load_dotenv
    load_dotenv()

    from langchain.chat_models import init_chat_model

    # 2. Initialize the chat model (uses OPENAI_API_KEY from .env)
    model = init_chat_model("gpt-4o")

    # 3. Get the user's message
    query = context.messages[-1]["content"]

    # 4. Stream the response
    async for chunk in model.astream(query):
        if chunk.content:
            yield chunk.content
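
You can exercise this streaming pattern locally without any provider or API key by standing in a mock model. In the sketch below, `MockModel`, `Chunk`, and `collect_reply` are illustrative stand-ins (not part of LangChain or Cycls); only the handler's shape mirrors the code above.

```python
import asyncio
from types import SimpleNamespace

class Chunk:
    """Stand-in mirroring the .content attribute of LangChain's streamed chunks."""
    def __init__(self, content):
        self.content = content

class MockModel:
    """Stand-in for the real chat model: streams a canned reply in pieces."""
    async def astream(self, query):
        for piece in ["Hello", ", ", "world!"]:
            yield Chunk(piece)

async def chat_agent(context):
    # Same shape as the handler above, with MockModel in place of init_chat_model
    model = MockModel()
    query = context.messages[-1]["content"]
    async for chunk in model.astream(query):
        if chunk.content:
            yield chunk.content

def collect_reply(user_message):
    """Drive the async generator to completion and join the streamed chunks."""
    context = SimpleNamespace(messages=[{"role": "user", "content": user_message}])

    async def gather():
        return "".join([part async for part in chat_agent(context)])

    return asyncio.run(gather())

print(collect_reply("Hi"))  # -> Hello, world!
```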

Step 5: Deploy the Agent

Add the deployment call at the end of your file:
agent.deploy(prod=False)
| Parameter | Description |
| --- | --- |
| prod=False | Development mode (local testing) |
| prod=True | Production deployment |
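
One common pattern — an assumption here, not a Cycls feature — is to toggle the flag from an environment variable so the same agent.py works in both modes. The variable name CYCLS_PROD below is a hypothetical convention:

```python
import os

def deploy_prod_flag(environ=os.environ):
    """Hypothetical convention: read a CYCLS_PROD environment variable
    to decide between development and production deployment."""
    return environ.get("CYCLS_PROD", "0") == "1"

# In agent.py you would then call: agent.deploy(prod=deploy_prod_flag())
print(deploy_prod_flag({"CYCLS_PROD": "1"}))  # -> True
print(deploy_prod_flag({}))                   # -> False
```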

Full Code

Here is the complete agent.py file:
import cycls

agent = cycls.Agent(
    pip=["langchain", "langchain-openai", "python-dotenv"],
    copy=[".env"]
)

@agent("langchain-agent", title="LangChain Agent")
async def chat_agent(context):
    # Load environment variables from the copied .env file
    from dotenv import load_dotenv
    load_dotenv()

    from langchain.chat_models import init_chat_model

    # Initialize the chat model (uses OPENAI_API_KEY from .env)
    model = init_chat_model("gpt-4o")

    # Get the latest user message
    query = context.messages[-1]["content"]

    # Stream the response
    async for chunk in model.astream(query):
        if chunk.content:
            yield chunk.content

agent.deploy(prod=False)

Step 6: Run the Agent

Execute your agent:
python agent.py
The agent will start and provide an endpoint for interaction.

Using Other LLM Providers

Swap the dependency and model name to use a different provider:
| Provider | Pip Package | Model Example |
| --- | --- | --- |
| OpenAI | langchain-openai | gpt-4o |
| Anthropic | langchain-anthropic | claude-sonnet-4-5-20250929 |
| Google | langchain-google-genai | gemini-3.0-pro |
| Mistral | langchain-mistralai | mistral-large-latest |
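
Concretely, only two things in agent.py change: the pip entry and the model name passed to init_chat_model. The sketch below wraps the table values in a hypothetical helper (agent_config is not a Cycls or LangChain API, just an illustration of the substitution):

```python
# Hypothetical helper: given a provider from the table above, return the
# pip list for cycls.Agent and an example model name.
PROVIDERS = {
    "openai": ("langchain-openai", "gpt-4o"),
    "anthropic": ("langchain-anthropic", "claude-sonnet-4-5-20250929"),
    "google": ("langchain-google-genai", "gemini-3.0-pro"),
    "mistral": ("langchain-mistralai", "mistral-large-latest"),
}

def agent_config(provider):
    package, model = PROVIDERS[provider]
    pip = ["langchain", package, "python-dotenv"]
    return pip, model

pip, model = agent_config("anthropic")
print(pip)    # -> ['langchain', 'langchain-anthropic', 'python-dotenv']
print(model)  # -> claude-sonnet-4-5-20250929
```

In most cases init_chat_model infers the provider from the model name; if it cannot, it accepts an explicit model_provider argument (e.g. init_chat_model("mistral-large-latest", model_provider="mistralai")). Remember to set the matching API key variable in your .env file as well.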