AutoGen in FastAgency#
The AutoGen runtime is a key component of FastAgency, empowering developers to create intelligent, multi-agent systems powered by large language models (LLMs). It allows agents to communicate, collaborate, and perform complex tasks autonomously while integrating easily with external REST APIs for real-time data and functionality.
In this example, we will create a simple weather chatbot using the AutoGen runtime in FastAgency. The chatbot will enable a user to interact with a weather agent that fetches real-time weather information from an external REST API described by an OpenAPI specification.
Installation#
We strongly recommend using Cookiecutter for setting up the project. Cookiecutter creates the project folder structure, default workflow, automatically installs all the necessary requirements, and creates a devcontainer that can be used with Visual Studio Code.
You can set up the project using Cookiecutter by following the project setup guide.
Alternatively, you can use pip + venv. Before getting started, make sure you have installed FastAgency with support for the AutoGen runtime along with the mesop and openapi submodules.
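A typical setup is sketched below; the extras names mirror the runtime and submodules mentioned above, so check the installation guide if they differ for your FastAgency version:

```bash
python -m venv venv
source venv/bin/activate  # on Windows: venv\Scripts\activate
pip install "fastagency[autogen,mesop,openapi]"
```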
These components enable you to build multi-agent workflows and seamlessly integrate with external REST APIs.
Prerequisites#
Before you begin this guide, ensure you have:
- OpenAI account and API Key: This guide uses OpenAI's `gpt-4o-mini` model, so you'll need access to it. Follow the steps in the section below to create your OpenAI account and obtain your API key.
Setting Up Your OpenAI Account and API Key#
1. Create an OpenAI account:
- Go to https://platform.openai.com/signup.
- Choose a sign-up option and follow the instructions to create your account.
- If you already have an account, simply log in.
2. Obtain your API Key:
- Go to https://platform.openai.com/account/api-keys.
- Click the Create new secret key button.
- In the popup, provide a Name for the key, then click the Create secret key button.
- The key will be shown on the screen. Click the Copy button, and you're ready to go!
Set Up Your API Keys in the Environment#
To use your API key securely in your project, store it as an environment variable.
Run the following command in the same terminal where you will run the FastAgency application. This environment variable must be set for the application to function correctly; skipping this step will cause the example application to crash.
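For example, on Linux and macOS (on Windows PowerShell, use `$env:OPENAI_API_KEY = "..."` instead), where the value below is a placeholder for the key you copied earlier:

```bash
export OPENAI_API_KEY="your_openai_api_key"  # placeholder: paste your actual key
```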
Example: Integrating a Weather API with AutoGen#
Step-by-Step Breakdown#
1. Import Required Modules#
The example starts by importing the necessary modules from AutoGen and FastAgency. These imports lay the foundation for building and running multi-agent workflows.
import os
from typing import Any
from autogen import UserProxyAgent
from autogen.agentchat import ConversableAgent
from fastagency import UI, FastAgency
from fastagency.api.openapi import OpenAPI
from fastagency.runtimes.autogen import AutoGenWorkflows
from fastagency.ui.mesop import MesopUI
2. Configure the Language Model (LLM)#
Here, the large language model is configured to use OpenAI's `gpt-4o-mini` model, and the API key is retrieved from the environment. This setup ensures that both the user and weather agents can interact effectively.
llm_config = {
    "config_list": [
        {
            "model": "gpt-4o-mini",
            "api_key": os.getenv("OPENAI_API_KEY"),
        }
    ],
    "temperature": 0.8,
}
3. Set Up the Weather API#
We define the OpenAPI specification URL for the weather service. This REST API will later be used by the weather agent to fetch real-time weather data.
openapi_url = "https://weather.tools.fastagency.ai/openapi.json"
weather_api = OpenAPI.create(openapi_url=openapi_url)
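If you want to see which operations the specification exposes before wiring them to the agents, you can fetch and inspect the JSON directly. This is an optional inspection sketch using the `requests` library; `paths` and `operationId` are standard OpenAPI fields:

```python
import requests

# Download the OpenAPI specification and print the operations it defines
spec = requests.get("https://weather.tools.fastagency.ai/openapi.json", timeout=10).json()
for path, path_item in spec["paths"].items():
    for method, operation in path_item.items():
        if isinstance(operation, dict) and "operationId" in operation:
            print(f"{method.upper()} {path} -> {operation['operationId']}")
```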
4. Define the Workflow and Agents#
In this step, we define two agents and specify the initial message that will be displayed to users when the workflow starts.
- `UserProxyAgent`: This agent simulates the user interacting with the system.
- `ConversableAgent`: This agent acts as the weather agent, responsible for fetching weather data from the API.
The workflow is registered using `AutoGenWorkflows`.
wf = AutoGenWorkflows()

@wf.register(name="simple_weather", description="Weather chat")  # type: ignore[type-var]
def weather_workflow(
    ui: UI, params: dict[str, Any]
) -> str:
    initial_message = ui.text_input(
        sender="Workflow",
        recipient="User",
        prompt="I can help you with the weather. What would you like to know?",
    )
    user_agent = UserProxyAgent(
        name="User_Agent",
        system_message="You are a user agent",
        llm_config=llm_config,
        human_input_mode="NEVER",
        code_execution_config=False,
    )
    weather_agent = ConversableAgent(
        name="Weather_Agent",
        system_message=weather_agent_system_message,
        llm_config=llm_config,
        human_input_mode="NEVER",
    )
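The `weather_agent_system_message` used above is defined in the complete code later in this guide; it instructs the weather agent to always call the API and to answer only from the retrieved data:

```python
weather_agent_system_message = """You are a weather agent. When asked
for weather, always call the function to get real-time data immediately.
Do not respond until the data is retrieved. Provide the actual weather
concisely based only on the real-time data from the function. Do not
use any pre-existing knowledge or memory."""
```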
5. Register API Functions with the Agents#
In this step, we register the weather API functions with the agents so that the correct operations, such as `get_daily_weather` and `get_hourly_weather_hourly_get`, can be called during the conversation to retrieve the required weather data. As the code below shows, a function can be registered either under its original operation ID (a plain string) or via a mapping that gives it a new name and description.
    wf.register_api(  # type: ignore[attr-defined]
        api=weather_api,
        callers=[user_agent],
        executors=[weather_agent],
        functions=[
            {
                "get_daily_weather_daily_get": {
                    "name": "get_daily_weather",
                    "description": "Get the daily weather",
                }
            },
            "get_hourly_weather_hourly_get",
        ],
    )
6. Enable Agent Interaction and Chat#
Here, the user agent initiates a chat with the weather agent, which queries the weather API and returns the weather information. The conversation is then summarized by the LLM using the `reflection_with_llm` summary method.
    chat_result = user_agent.initiate_chat(
        weather_agent,
        message=initial_message,
        summary_method="reflection_with_llm",
        max_turns=3,
    )

    return chat_result.summary  # type: ignore[no-any-return]
7. Create and Run the Application#
Finally, we create the FastAgency application and launch it using the `mesop` interface.
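This corresponds to the final line of the complete code below, where the FastAgency application object ties the registered workflow provider to the Mesop UI:

```python
app = FastAgency(provider=wf, ui=MesopUI())
```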
Complete Application Code#
main.py
import os
from typing import Any

from autogen import UserProxyAgent
from autogen.agentchat import ConversableAgent

from fastagency import UI, FastAgency
from fastagency.api.openapi import OpenAPI
from fastagency.runtimes.autogen import AutoGenWorkflows
from fastagency.ui.mesop import MesopUI

llm_config = {
    "config_list": [
        {
            "model": "gpt-4o-mini",
            "api_key": os.getenv("OPENAI_API_KEY"),
        }
    ],
    "temperature": 0.8,
}

openapi_url = "https://weather.tools.fastagency.ai/openapi.json"
weather_api = OpenAPI.create(openapi_url=openapi_url)

weather_agent_system_message = """You are a weather agent. When asked
for weather, always call the function to get real-time data immediately.
Do not respond until the data is retrieved. Provide the actual weather
concisely based only on the real-time data from the function. Do not
use any pre-existing knowledge or memory."""

wf = AutoGenWorkflows()

@wf.register(name="simple_weather", description="Weather chat")  # type: ignore[type-var]
def weather_workflow(
    ui: UI, params: dict[str, Any]
) -> str:
    initial_message = ui.text_input(
        sender="Workflow",
        recipient="User",
        prompt="I can help you with the weather. What would you like to know?",
    )
    user_agent = UserProxyAgent(
        name="User_Agent",
        system_message="You are a user agent",
        llm_config=llm_config,
        human_input_mode="NEVER",
        code_execution_config=False,
    )
    weather_agent = ConversableAgent(
        name="Weather_Agent",
        system_message=weather_agent_system_message,
        llm_config=llm_config,
        human_input_mode="NEVER",
    )

    wf.register_api(  # type: ignore[attr-defined]
        api=weather_api,
        callers=[user_agent],
        executors=[weather_agent],
        functions=[
            {
                "get_daily_weather_daily_get": {
                    "name": "get_daily_weather",
                    "description": "Get the daily weather",
                }
            },
            "get_hourly_weather_hourly_get",
        ],
    )

    chat_result = user_agent.initiate_chat(
        weather_agent,
        message=initial_message,
        summary_method="reflection_with_llm",
        max_turns=3,
    )

    return chat_result.summary  # type: ignore[no-any-return]

app = FastAgency(provider=wf, ui=MesopUI())
Running the Application#
The preferred way to run the Mesop application is using a Python WSGI HTTP server such as Gunicorn on Linux and macOS, or Waitress on Windows.
Ensure you have set your OpenAI API key in the environment and that the weather API URL is accessible. The command below will launch the Mesop UI where users can enter their requests and interact with the weather agent.
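Assuming the file is saved as `main.py` (as shown above), typical invocations look like this:

```bash
# Linux / macOS
gunicorn main:app

# Windows
waitress-serve --listen=0.0.0.0:8000 main:app
```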
Output#
Once you run the command above, FastAgency will start a Mesop application. Below is the output from the terminal along with a partial screenshot of the Mesop application:
[2024-10-10 13:19:18 +0530] [23635] [INFO] Starting gunicorn 23.0.0
[2024-10-10 13:19:18 +0530] [23635] [INFO] Listening at: http://127.0.0.1:8000 (23635)
[2024-10-10 13:19:18 +0530] [23635] [INFO] Using worker: sync
[2024-10-10 13:19:18 +0530] [23645] [INFO] Booting worker with pid: 23645
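Once the worker has booted, open http://127.0.0.1:8000 in your browser to start chatting with the weather agent.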
This example demonstrates the power of the AutoGen runtime within FastAgency, showing how easy it is to integrate LLM-powered agents with real-time REST API services. By leveraging FastAgency, developers can quickly create interactive, scalable applications that interact with external data sources in real time.
For more detailed documentation, visit the AutoGen Reference.