Deploying your agent to production takes just one command. Cycls handles the infrastructure, scaling, and security for you.

Getting Started

Before deploying your agent, you’ll need a Cycls API key.
  1. Go to the Cycls Console and sign in or create an account.
  2. Navigate to the API Keys section.
  3. Create a new API key and copy it securely.
You’ll use this API key to authenticate your deployments. Keep it safe and never commit it to version control.
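Rather than pasting the key into your source, you can read it from an environment variable. A minimal sketch (the `CYCLS_API_KEY` variable name follows the convention used later in this guide):

```python
import os

def get_cycls_key():
    # Read the Cycls API key from the environment instead of hardcoding it
    key = os.environ.get("CYCLS_API_KEY")
    if not key:
        raise RuntimeError(
            "CYCLS_API_KEY is not set; create a key in the Cycls Console."
        )
    return key
```

Failing fast with a clear error when the variable is missing is easier to debug than an authentication failure deep inside a deploy.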

Deploying to Production

To deploy your agent, pass prod=True to the deploy() method. This automatically builds your agent and deploys it to the Cycls serverless cloud.
import cycls

# 1. Initialize with your Cycls API Key
agent = cycls.Agent(
    pip=["openai"],
    key="CYCLS_KEY"  # Required for cloud deployment
)

@agent("cake", title="My Agent")
async def cake_agent(context):
    yield "Hello from the cloud!"

# 2. Deploy to Cloud
agent.deploy(prod=True)
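The decorated handler is an async generator: each yield becomes a streamed chunk. Stripped of the Cycls decorator, the pattern can be exercised locally (a minimal sketch; in production the context object is supplied by Cycls):

```python
import asyncio

async def cake_agent(context):
    # Each yielded string is streamed to the client
    yield "Hello from the cloud!"

async def collect():
    # Drain the async generator the way the platform would
    return "".join([chunk async for chunk in cake_agent(None)])

print(asyncio.run(collect()))
# prints "Hello from the cloud!"
```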

Environment Variables & Secrets

When deploying to production, you should never hardcode sensitive information like API keys. Instead, use a .env file and the python-dotenv package to manage them securely.
  1. Create a .env file in your project root:
    CYCLS_API_KEY=cy-...
    OPENAI_API_KEY=sk-...
    
  2. Configure your agent to load these variables and copy the .env file to the cloud environment:
import cycls
import os
from dotenv import load_dotenv

# Load environment variables from .env
load_dotenv()

# Initialize agent with secrets and dependencies
agent = cycls.Agent(
    key=os.getenv("CYCLS_API_KEY"),
    pip=["openai", "python-dotenv"],
    copy=[".env"]  # Securely copy .env file to the deployed agent
)

# A helper function to call the LLM
async def llm(messages):
    import openai
    # Initialize OpenAI client using the environment variable
    client = openai.AsyncOpenAI(api_key=os.getenv("OPENAI_API_KEY"))
    
    response = await client.chat.completions.create(
        model="gpt-4o-mini",
        messages=messages,
        stream=True
    )
    # Yield the content from the streaming response
    async def event_stream():
        async for chunk in response:
            content = chunk.choices[0].delta.content
            if content:
                yield content
    return event_stream()

# Register the function as an agent named "cake"
@agent("cake", title="My AI Agent", auth=True)
async def cake_agent(context):
    # The context object contains the message history
    return await llm(context.messages)

# Set prod to True to deploy
agent.deploy(prod=True)
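The filtering logic inside event_stream can be checked without an OpenAI key by feeding it stub objects that mimic the chunk.choices[0].delta.content shape of the streaming response (the stubs below are illustrative, not part of the OpenAI SDK):

```python
import asyncio
from types import SimpleNamespace

def make_chunk(text):
    # Mimic the shape the helper reads: chunk.choices[0].delta.content
    delta = SimpleNamespace(content=text)
    return SimpleNamespace(choices=[SimpleNamespace(delta=delta)])

async def fake_stream():
    # Stand-in for the OpenAI streaming response
    for text in ["Hello", ", ", "world", None]:
        yield make_chunk(text)

async def event_stream(response):
    # Same filtering as the llm() helper: skip empty deltas
    async for chunk in response:
        content = chunk.choices[0].delta.content
        if content:
            yield content

async def collect():
    return "".join([p async for p in event_stream(fake_stream())])

print(asyncio.run(collect()))
# prints "Hello, world"
```

The None chunk models the empty deltas the API sends at stream boundaries, which is why the helper guards with `if content:`.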

What Happens During Deployment?

When you run your script with prod=True, Cycls performs the following steps:
  1. Build: Creates a Docker image containing your code, dependencies (pip), and system packages.
  2. Push: Uploads the image to the private Cycls Container Registry.
  3. Provision: Sets up the serverless infrastructure to host your agent.
  4. Deploy: Launches your agent and assigns it a permanent URL (e.g., https://cake.cycls.ai).
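The Build step is roughly what a hand-written Dockerfile would do (an illustrative sketch only; Cycls generates the actual image definition, and the base image and file names here are assumptions):

```dockerfile
FROM python:3.11-slim
WORKDIR /app
# Copy your agent code plus any files listed in copy=[...]
COPY . .
# Install the dependencies declared in pip=[...]
RUN pip install cycls openai
CMD ["python", "agent.py"]
```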

Updating Your Agent

To update your agent, simply make changes to your code and run the script again. Cycls will build a new version and seamlessly update the deployment with zero downtime.

Monitoring & Management

The Cycls Dashboard lets you monitor your agent’s performance, view real-time logs, and manage your deployments. From the dashboard, you can also delete deployments.