Dependency Injection#
Dependency Injection is a secure way to connect external functions to agents in AutoGen
without exposing sensitive data such as passwords, tokens, or personal information. This approach ensures that sensitive information remains protected while still allowing agents to perform their tasks effectively, even when working with large language models (LLMs).
In this guide, we’ll explore how to use FastAgency
to build secure workflows that handle sensitive data safely.
As an example, we’ll create a banking agent that retrieves a user's account balance. The best part is that sensitive data such as the username and password are never shared with the language model. Instead, they are securely injected directly into the function at runtime, keeping them safe while maintaining seamless functionality.
Let’s get started!
Why Use Dependency Injection?#
When working with large language models (LLMs), security is paramount. There are several types of sensitive information that we want to keep out of the LLM’s reach:
- Passwords or tokens: These could be exposed through prompt injection attacks.
- Personal information: Access to this data might fall under strict regulations, such as the EU AI Act.
Dependency injection offers a robust solution by isolating sensitive data while enabling your agents to function effectively.
Why Dependency Injection Is Essential#
Here’s why dependency injection is a game-changer for secure LLM workflows:
- Enhanced Security: Your sensitive data is never directly exposed to the LLM.
- Simplified Development: Secure data can be seamlessly accessed by functions without requiring complex configurations.
- Unmatched Flexibility: It supports safe integration of diverse workflows, allowing you to scale and adapt with ease.
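Before wiring this into FastAgency, the core idea can be sketched in plain Python: bind the sensitive values to the function ahead of time, so the LLM-facing tool only ever sees the non-sensitive parameters. This is a minimal illustration using `functools.partial` (the function and token below are made up for the example, and this is not FastAgency's API):

```python
from functools import partial


def get_secret_data(api_token: str, query: str) -> str:
    """Pretend external call that needs a sensitive token."""
    # The token is used internally but never included in the result.
    return f"result for {query!r} (token kept server-side)"


# Bind the sensitive value up front; the tool exposed to the LLM
# only takes `query`, so the token never enters the conversation.
safe_tool = partial(get_secret_data, "sk-very-secret")

print(safe_tool("account status"))
```

FastAgency's `inject_params` follows the same principle, with the added benefit that the injected parameters are also removed from the tool schema the LLM sees.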
In this guide, we’ll explore how to set up dependency injection, build secure workflows, and create a protected application step-by-step. Let’s dive in!
Install#
We will use Cookiecutter for setting up the project. Cookiecutter creates the project folder structure, default workflow, automatically installs all the necessary requirements, and creates a devcontainer that can be used with Visual Studio Code.
You can set up the project using Cookiecutter by following the project setup guide.
In this example, we’ll create a Mesop application without authentication. The generated project will have the following files:
```
my_bank_app
├── docker
│   ├── content
│   │   ├── nginx.conf.template
│   │   └── run_fastagency.sh
│   └── Dockerfile
├── my_bank_app
│   ├── deployment
│   │   ├── __init__.py
│   │   └── main.py
│   ├── local
│   │   ├── __init__.py
│   │   ├── main_console.py
│   │   └── main_mesop.py
│   ├── __init__.py
│   └── workflow.py
├── scripts
│   ├── build_docker.sh
│   ├── check-registered-app-pre-commit.sh
│   ├── check-registered-app.sh
│   ├── deploy_to_fly_io.sh
│   ├── lint-pre-commit.sh
│   ├── lint.sh
│   ├── register_to_fly_io.sh
│   ├── run_docker.sh
│   ├── run_mesop_locally.sh
│   ├── static-analysis.sh
│   └── static-pre-commit.sh
├── tests
│   ├── __init__.py
│   ├── conftest.py
│   └── test_workflow.py
├── README.md
├── fly.toml
└── pyproject.toml
```
Complete Workflow Code#
The only file you need to modify to run the application is my_bank_app/my_bank_app/workflow.py. Simply copy and paste the following content into the file:
workflow.py

```python
import os
from typing import Annotated, Any

from autogen import UserProxyAgent, register_function
from autogen.agentchat import ConversableAgent

from fastagency import UI
from fastagency.api.dependency_injection import inject_params
from fastagency.runtimes.autogen import AutoGenWorkflows

account_balance_dict = {
    ("alice", "password123"): 100,
    ("bob", "password456"): 200,
    ("charlie", "password789"): 300,
}


def get_balance(
    username: Annotated[str, "Username"],
    password: Annotated[str, "Password"],
) -> str:
    if (username, password) not in account_balance_dict:
        return "Invalid username or password"
    return f"Your balance is {account_balance_dict[(username, password)]}$"


llm_config = {
    "config_list": [
        {
            "model": "gpt-4o-mini",
            "api_key": os.getenv("OPENAI_API_KEY"),
        }
    ],
    "temperature": 0.8,
}

wf = AutoGenWorkflows()


@wf.register(name="bank_chat", description="Bank chat")  # type: ignore[misc]
def bank_workflow(ui: UI, params: dict[str, str]) -> str:
    username = ui.text_input(
        sender="Workflow",
        recipient="User",
        prompt="Enter your username:",
    )
    password = ui.text_input(
        sender="Workflow",
        recipient="User",
        prompt="Enter your password:",
    )

    user_agent = UserProxyAgent(
        name="User_Agent",
        system_message="You are a user agent",
        llm_config=llm_config,
        human_input_mode="NEVER",
    )
    banker_agent = ConversableAgent(
        name="Banker_Agent",
        system_message="You are a banker agent",
        llm_config=llm_config,
        human_input_mode="NEVER",
    )

    ctx: dict[str, Any] = {
        "username": username,
        "password": password,
    }
    get_balance_with_params = inject_params(get_balance, ctx)

    register_function(
        f=get_balance_with_params,
        caller=banker_agent,
        executor=user_agent,
        description="Get balance",
    )

    chat_result = user_agent.initiate_chat(
        banker_agent,
        message="We need to get user's balance.",
        summary_method="reflection_with_llm",
        max_turns=3,
    )

    return chat_result.summary  # type: ignore[no-any-return]
```
Step-by-Step Guide#
Imports#
These imports are similar to the imports section we have already covered, with the only difference being the additional import of the inject_params function:
```python
import os
from typing import Annotated, Any

from autogen import UserProxyAgent, register_function
from autogen.agentchat import ConversableAgent

from fastagency import UI
from fastagency.api.dependency_injection import inject_params
from fastagency.runtimes.autogen import AutoGenWorkflows
```
Define the Bank Savings Function#
The get_balance function is central to this workflow. It retrieves the user's balance based on the provided username and password.

The key consideration here is that both username and password must NEVER be exposed to the LLM. Instead, they will be securely injected into the get_balance function later in the workflow using the inject_params mechanism, ensuring that sensitive information remains confidential while still allowing the function to access the required data.
```python
account_balance_dict = {
    ("alice", "password123"): 100,
    ("bob", "password456"): 200,
    ("charlie", "password789"): 300,
}


def get_balance(
    username: Annotated[str, "Username"],
    password: Annotated[str, "Password"],
) -> str:
    if (username, password) not in account_balance_dict:
        return "Invalid username or password"
    return f"Your balance is {account_balance_dict[(username, password)]}$"
```
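Called outside any agent, get_balance behaves as an ordinary dictionary lookup. A quick standalone check (duplicating the definitions above so the snippet runs on its own):

```python
from typing import Annotated

account_balance_dict = {("alice", "password123"): 100}


def get_balance(
    username: Annotated[str, "Username"],
    password: Annotated[str, "Password"],
) -> str:
    if (username, password) not in account_balance_dict:
        return "Invalid username or password"
    return f"Your balance is {account_balance_dict[(username, password)]}$"


print(get_balance("alice", "password123"))  # Your balance is 100$
print(get_balance("alice", "wrong"))        # Invalid username or password
```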
Configure the Language Model (LLM)#
Here, the large language model is configured to use the gpt-4o-mini model, and the API key is retrieved from the environment. This setup ensures that both the user and banker agents can interact effectively.
```python
llm_config = {
    "config_list": [
        {
            "model": "gpt-4o-mini",
            "api_key": os.getenv("OPENAI_API_KEY"),
        }
    ],
    "temperature": 0.8,
}
```
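Because the API key is read from the environment, a missing OPENAI_API_KEY only surfaces later as an authentication error. A small fail-fast check can make the problem explicit up front (require_env is a hypothetical helper for illustration, not part of FastAgency; a dummy variable name is used so the snippet runs without a real key):

```python
import os


def require_env(name: str) -> str:
    """Return the value of an environment variable or fail with a clear message."""
    value = os.getenv(name)
    if not value:
        raise RuntimeError(f"Set the {name} environment variable before running.")
    return value


# Demo only: seed a dummy value so the snippet is runnable as-is.
os.environ.setdefault("MY_BANK_APP_DEMO_KEY", "sk-dummy")
print(require_env("MY_BANK_APP_DEMO_KEY"))  # sk-dummy
```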
Define the Workflow and Agents#
The bank_workflow handles user interaction and integrates the agents to retrieve the balance securely.
- User Input Collection:
    - At the beginning of the workflow, the user is prompted to provide:
        - Username: The workflow asks, "Enter your username:".
        - Password: The workflow then asks, "Enter your password:".
- Agent Setup:
    - Two agents are created to handle the workflow:
        - UserProxyAgent: Simulates the user's perspective, facilitating secure communication.
        - ConversableAgent: Acts as the banker agent, retrieving the user's balance.
```python
wf = AutoGenWorkflows()


@wf.register(name="bank_chat", description="Bank chat")  # type: ignore[misc]
def bank_workflow(ui: UI, params: dict[str, str]) -> str:
    username = ui.text_input(
        sender="Workflow",
        recipient="User",
        prompt="Enter your username:",
    )
    password = ui.text_input(
        sender="Workflow",
        recipient="User",
        prompt="Enter your password:",
    )

    user_agent = UserProxyAgent(
        name="User_Agent",
        system_message="You are a user agent",
        llm_config=llm_config,
        human_input_mode="NEVER",
    )
    banker_agent = ConversableAgent(
        name="Banker_Agent",
        system_message="You are a banker agent",
        llm_config=llm_config,
        human_input_mode="NEVER",
    )
```
Dependency Injection#
The username and password provided by the user are stored securely in a context dictionary (ctx). These parameters are never shared with the LLM; they are used only internally within the workflow. Using inject_params, the sensitive parameters from the ctx dictionary are injected into the get_balance function.
```python
    ctx: dict[str, Any] = {
        "username": username,
        "password": password,
    }
    get_balance_with_params = inject_params(get_balance, ctx)
```
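Conceptually, injection both binds the values and removes the corresponding parameters from the function's visible signature, so they never appear in the tool schema shown to the LLM. The following is only a rough sketch of that idea in plain Python (inject_params_sketch is a made-up name; FastAgency's real inject_params implementation may differ):

```python
import inspect
from functools import wraps
from typing import Annotated, Any


def get_balance(
    username: Annotated[str, "Username"],
    password: Annotated[str, "Password"],
) -> str:
    return f"balance for {username}"


def inject_params_sketch(f: Any, ctx: dict[str, Any]) -> Any:
    """Bind ctx values and hide them from the function's visible signature."""

    @wraps(f)
    def wrapper(*args: Any, **kwargs: Any) -> Any:
        # Injected values are supplied here, invisibly to the caller.
        return f(*args, **ctx, **kwargs)

    sig = inspect.signature(f)
    # Drop the injected parameters from the advertised signature.
    wrapper.__signature__ = sig.replace(
        parameters=[p for name, p in sig.parameters.items() if name not in ctx]
    )
    return wrapper


ctx = {"username": "alice", "password": "password123"}
bound = inject_params_sketch(get_balance, ctx)
print(inspect.signature(bound))  # () -> str  — credentials no longer visible
print(bound())                   # balance for alice
```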
Register Function with the Agents#
In this step, we register the get_balance_with_params function with the agents: the banker_agent proposes the call, and the user_agent executes it.
```python
    register_function(
        f=get_balance_with_params,
        caller=banker_agent,
        executor=user_agent,
        description="Get balance",
    )
```
Enable Agent Interaction and Chat#
Here, the user agent initiates a chat with the banker agent, which retrieves the user's balance. The conversation is summarized using a method provided by the LLM.
```python
    chat_result = user_agent.initiate_chat(
        banker_agent,
        message="We need to get user's balance.",
        summary_method="reflection_with_llm",
        max_turns=3,
    )

    return chat_result.summary  # type: ignore[no-any-return]
```
Run Application#
You can run this chapter's FastAgency application using the following command:
Output#
At the beginning, the user is asked to provide the username and password. Once the user provides them, the agent executes the get_balance function with both parameters securely injected using the inject_params mechanism, ensuring they are never exposed to the LLM.
The agent processes the request, retrieves the user's balance, and provides a summary of the results without compromising sensitive data.