This comprehensive guide shows you how to build an AI agent that can execute custom Python functions using the OpenAI Responses API.

Prerequisites

  • cycls installed
  • OpenAI API key
pip install cycls openai

Step 1: Define a Tool

Create a standard Python function. In this example, we’ll create a mock weather function.
import json

def get_weather(location):
    """Get current weather for a city."""
    return json.dumps({"temp": "24", "unit": "celsius"})
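The function returns a JSON string rather than a dict, because the tool's output is eventually sent back to the model as text. A quick sanity check of the round trip (the `"Paris"` argument is just an example):

```python
import json

def get_weather(location):
    """Get current weather for a city (mock: always 24 °C)."""
    return json.dumps({"temp": "24", "unit": "celsius"})

# The string the tool returns is what we later attach to the
# conversation as a function_call_output item.
payload = json.loads(get_weather("Paris"))
print(payload["temp"], payload["unit"])  # → 24 celsius
```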

Step 2: Define the Schema

OpenAI needs to know what your tool does. You define this in a JSON schema format.
    tools = [{
        "type": "function",
        "name": "get_weather",
        "description": "Get current temperature",
        "parameters": {
            "type": "object",
            "properties": {
                "location": {"type": "string"}
            },
            "required": ["location"]
        }
    }]
Note: The description fields are crucial: they are how the LLM decides when to call your tool, so make them specific.
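The parameters block is standard JSON Schema, so richer tools can declare extra fields the same way. A hedged sketch extending the weather schema with a hypothetical `unit` parameter (not part of this tutorial's tool):

```python
# Hypothetical richer schema: adds an optional enum-constrained
# "unit" parameter alongside the required "location".
tools = [{
    "type": "function",
    "name": "get_weather",
    "description": "Get current temperature for a given location",
    "parameters": {
        "type": "object",
        "properties": {
            "location": {
                "type": "string",
                "description": "City name, e.g. 'Paris'",
            },
            "unit": {
                "type": "string",
                "enum": ["celsius", "fahrenheit"],
                "description": "Temperature unit to return",
            },
        },
        "required": ["location"],
    },
}]
```

Optional parameters like `unit` are simply omitted from the `required` list; the model will fill them in only when the user's request calls for it.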

Step 3: Initial Request

Send the user’s message history and your tool definitions to the model using the client.responses.create API.
    # 3. First Call    
    response = client.responses.create(
        model="gpt-4o",
        input=context.messages,
        tools=tools
    )
    
    # Update history    
    context.messages.extend(response.output)

Step 4: Handle Execution

Iterate through the response output to check for function_call items. If found, execute the function and append the result as a function_call_output. Then call the model again with the updated history.
    # 4. Handle Tool Execution    
    tool_called = False    
    for item in response.output:
        if item.type == "function_call":
            tool_called = True            
            if item.name == "get_weather":
                args = json.loads(item.arguments)
                result = get_weather(args["location"])
                context.messages.append({
                    "type": "function_call_output",
                    "call_id": item.call_id,
                    "output": result
                })
    
    # Final Call    
    if tool_called:
        final = client.responses.create(
            model="gpt-4o",
            input=context.messages
        )
        yield final.output_text
    else:
        yield response.output_text
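As you add more tools, the chain of `if item.name == ...` checks becomes awkward. One common pattern is a dispatch dictionary mapping tool names (as declared in the schema) to Python callables. A minimal sketch; the second tool, `get_time`, is hypothetical and only illustrates the pattern:

```python
import json
from datetime import datetime, timezone

def get_weather(location):
    """Mock weather tool from Step 1."""
    return json.dumps({"temp": "24", "unit": "celsius"})

def get_time(location):
    """Hypothetical second tool: ignores location, returns UTC time."""
    return json.dumps({"utc": datetime.now(timezone.utc).isoformat()})

# Map tool names to handlers that unpack the parsed arguments.
TOOL_HANDLERS = {
    "get_weather": lambda args: get_weather(args["location"]),
    "get_time": lambda args: get_time(args["location"]),
}

def run_tool(name, arguments_json):
    """Execute a tool by name with the model-supplied JSON arguments."""
    args = json.loads(arguments_json)
    return TOOL_HANDLERS[name](args)
```

Inside the loop, the per-tool `if` branches then collapse to a single `result = run_tool(item.name, item.arguments)` call.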

Step 5: Deploy the Agent

Add the deployment call at the end of your file:
agent.deploy(prod=False)
| Parameter | Description |
| --- | --- |
| prod=False | Development mode (local testing) |
| prod=True | Production deployment |

Full Code

Here is the complete openai_tools_agent.py:
import cycls
import json
import os
from openai import OpenAI

agent = cycls.Agent(
    pip=["openai", "python-dotenv"],
    copy=[".env"]
)

# 1. Define the tool logic
def get_weather(location):
    return json.dumps({"temp": "24", "unit": "celsius"})

@agent("tools-agent", title="Tools Agent")
async def tool_agent(context):
    from dotenv import load_dotenv
    load_dotenv()
    
    client = OpenAI()
    
    # 2. Define the tool schema    
    tools = [{
        "type": "function",
        "name": "get_weather",
        "description": "Get current temperature",
        "parameters": {
            "type": "object",
            "properties": {
                "location": {"type": "string"}
            },
            "required": ["location"]
        }
    }]

    # 3. First Call    
    response = client.responses.create(
        model="gpt-4o",
        input=context.messages,
        tools=tools
    )

    # Update history    
    context.messages.extend(response.output)

    # 4. Handle Tool Execution    
    tool_called = False    
    for item in response.output:
        if item.type == "function_call":
            tool_called = True            
            if item.name == "get_weather":
                args = json.loads(item.arguments)
                result = get_weather(args["location"])
                context.messages.append({
                    "type": "function_call_output",
                    "call_id": item.call_id,
                    "output": result
                })

    # Final Call    
    if tool_called:
        final = client.responses.create(
            model="gpt-4o",
            input=context.messages
        )
        yield final.output_text
    else:
        yield response.output_text
        
agent.deploy(prod=False)