# How to monitor AI agent applications on Amazon Bedrock AgentCore with Grafana Cloud
Install the AgentCore starter toolkit:

```bash
pip install bedrock-agentcore-starter-toolkit
```

## Step 1: Create a CrewAI agent

Let's create an example AI agent using CrewAI:

```python
import os
from bedrock_agentcore import BedrockAgentCoreApp
from crewai import Agent, Task, Crew, Process

# Initialize AgentCore runtime
app = BedrockAgentCoreApp()

# Define a simple research assistant agent
researcher = Agent(
    role="Research Assistant",
    goal="Provide helpful, accurate answers, with concise summaries.",
    backstory=("You are a knowledgeable research assistant who answers clearly "
               "and cites facts when relevant."),
    # Use Llama 3 8B via AWS Bedrock
    llm="bedrock/meta.llama3-8b-instruct-v1:0",
    verbose=False,
    max_iter=2
)

@app.entrypoint
def invoke(payload: dict):
    """AgentCore entrypoint. Expects {'prompt': '...'}"""
    user_message = payload.get("prompt", "Hello!")
    task = Task(
        description=user_message,
        agent=researcher,
        expected_output="A helpful, well-structured response."
    )
    crew = Crew(
        agents=[researcher],
        tasks=[task],
        process=Process.sequential,
        verbose=False,
    )
    result = crew.kickoff()
    return {"result": result.raw}

if __name__ == "__main__":
    app.run()
```

Key components:

- **BedrockAgentCoreApp**: Integrates CrewAI with the Amazon Bedrock AgentCore runtime
- **Agent definition**: Single agent with a research assistant role using Llama 3
- **@app.entrypoint**: Decorator that marks the function as the agent's entry point
- **Crew orchestration**: CrewAI manages task execution and agent coordination

The agent accepts JSON input like `{"prompt": "your question"}` and returns a JSON response.

## Step 2: Configure dependencies

Create a `requirements.txt` that includes:

```
crewai>=1.0.0
openlit>=1.35
litellm
bedrock-agentcore
```

## Step 3: Configure AgentCore deployment

Run the AgentCore configuration command:

```bash
agentcore configure \
--deployment-type container \
--entrypoint crewai_agent.py \
--name crewai_agent \
--non-interactive
```

This generates a `.bedrock_agentcore/crewai_agent/` directory with:

- **Dockerfile**: Container build configuration
- **agentconfig.json**: Metadata for AgentCore

## Step 4: Add OpenTelemetry configuration

Now comes the observability magic. Edit the generated Dockerfile at `.bedrock_agentcore/crewai_agent/Dockerfile` and add these environment variables:

```dockerfile
# Disable AWS ADOT observability to use OpenLIT exclusively
ENV DISABLE_ADOT_OBSERVABILITY="true"

# OpenTelemetry configuration for Grafana Cloud
ENV OTEL_SERVICE_NAME="my_service"
ENV OTEL_DEPLOYMENT_ENVIRONMENT="my_environment"
ENV OTEL_EXPORTER_OTLP_ENDPOINT="your_grafana_cloud_otlp_endpoint"
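# The headers value below is "Basic <base64 of instance_id:token>"; the space after
# "Basic" is percent-encoded as %20 because OTLP header values must be URL-encoded.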
ENV OTEL_EXPORTER_OTLP_HEADERS="Authorization=Basic%20"
```

**Important**: Replace the OTLP endpoint and headers with your Grafana Cloud credentials:

1. Sign in to the Grafana Cloud portal and select your Grafana Cloud stack.
2. Click **Configure** in the OpenTelemetry section.
3. In the **Password / API Token** section, click **Generate** to create a new API token.
4. Give the API token a name.
5. Click **Create token**.
6. Click **Close** without copying the token.
7. Copy and replace the values for `OTEL_EXPORTER_OTLP_ENDPOINT` and `OTEL_EXPORTER_OTLP_HEADERS` in the Dockerfile `ENV`s.

For more information, refer to our guide on manually setting up OpenTelemetry for Grafana Cloud.

Next, ensure the CMD line in the Dockerfile uses OpenLIT's instrumentation wrapper:

```dockerfile
# Use OpenLIT to automatically instrument the agent
CMD ["openlit-instrument", "python", "-m", "crewai_agent"]
```

What's happening here?

- `openlit-instrument` wraps your Python command
- At runtime, OpenLIT automatically monitors the CrewAI agent operations
- Every LLM request and agent task is traced and exported to Grafana via OTLP

## Step 5: Build and deploy

Build the Docker image and deploy to AgentCore:

```bash
agentcore launch --local-build
```

This command will:

1. Build the Docker image locally with all dependencies
2. Push the image to Amazon ECR
3. Deploy the agent to Bedrock AgentCore
4. Set up IAM execution roles
5. Configure the runtime environment

The deployment process takes two to five minutes. You'll see output like:

```
Pushing to ECR...
Deploying to AgentCore...
Agent deployed successfully!
Agent ID: agt_abc123xyz
```

## Step 6: Invoke the agent

Test your deployed agent:

```bash
# Simple test
agentcore invoke '{"prompt": "hi"}'

# Research query
agentcore invoke '{"prompt": "Explain AI Observability"}'

# Complex request
agentcore invoke '{"prompt": "Compare supervised and unsupervised learning with examples"}'
```

Response:

```json
{
  "result": "Quantum computing is a revolutionary approach to computation that..."
}
```
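You can also invoke the deployed runtime from Python instead of the CLI. A minimal sketch, assuming boto3 with the `bedrock-agentcore` service available and the runtime ARN of the deployed agent (the ARN and session-ID handling here are illustrative, not part of the tutorial's output):

```python
import json
import uuid


def build_payload(prompt: str) -> bytes:
    """Serialize the {'prompt': ...} input the entrypoint expects."""
    return json.dumps({"prompt": prompt}).encode("utf-8")


def parse_result(body: bytes) -> str:
    """Extract the 'result' field the entrypoint returns."""
    return json.loads(body)["result"]


def invoke_agent(agent_runtime_arn: str, prompt: str) -> str:
    """Call the deployed AgentCore runtime and return its text result."""
    import boto3  # imported here so the helpers above stay dependency-free

    client = boto3.client("bedrock-agentcore")
    response = client.invoke_agent_runtime(
        agentRuntimeArn=agent_runtime_arn,
        runtimeSessionId=uuid.uuid4().hex + uuid.uuid4().hex,  # a long, unique session ID
        payload=build_payload(prompt),
    )
    return parse_result(response["response"].read())
```

The runtime ARN for the deployed agent is available from the AgentCore tooling or the AWS console.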
## Step 7: Explore Grafana Cloud AI Observability

Once you have telemetry flowing from the CrewAI agent on AgentCore to Grafana Cloud, you can use the pre-built dashboards from Grafana Cloud AI Observability. Navigate to **Connections**, search for **AI Observability** and click on it, then go to **GenAI Observability**, scroll down, and install the dashboards.

Here's a breakdown of what you can see in the dashboards:

- **End-to-end latency**: Total time from request to response
- **LLM call details**: Which model, how many tokens, latency, cost
- **Agent workflow**: Task creation, execution, and response formatting
- **Error traces**: If something fails, you'll see the exact step and error message

## Next steps

The combination of AWS Bedrock AgentCore, OpenTelemetry, and Grafana Cloud provides a production-ready stack for AI agents with enterprise-grade observability. Explore our Grafana Cloud AI Observability documentation to learn more.

Grafana Cloud is the easiest way to get started with metrics, logs, traces, dashboards, and more. We have a generous forever-free tier and plans for every use case. Sign up for free now!
This article originally appeared on the Grafana Labs blog.