Today, both our co-founders joined the "Startups Agentic Immersion Day: Powering AI Agents with AWS". And honestly? We had a great time and learned a lot, so we've compiled some of the key insights from the event below. The sessions covered how businesses can leverage AWS to build practical, scalable agentic AI systems.
Below are the key takeaways from the event. Stay until the end to see how you can get started building your own agentic AI system with just three lines of code (outside of AWS).
Building Agentic AI on AWS
- Guardrails: Prevent users from asking questions or issuing prompts that the system isn’t designed to handle, ensuring safety and control.
- Amazon Bedrock Knowledge Bases: Provide native support for Retrieval-Augmented Generation (RAG), allowing AI agents to tap into domain-specific knowledge.
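To make the guardrail idea concrete, here is a toy sketch of an input guardrail that blocks prompts outside the system's intended scope before they reach the model. The topic list and function names are made up for illustration; Amazon Bedrock Guardrails provides this as a managed service rather than hand-rolled code.

```python
# Toy input guardrail: reject prompts the system isn't designed to handle.
# BLOCKED_TOPICS is a hypothetical deny-list for this example.
BLOCKED_TOPICS = {"medical advice", "legal advice"}

def check_guardrail(prompt: str) -> bool:
    """Return True if the prompt is allowed, False if it should be blocked."""
    lowered = prompt.lower()
    return not any(topic in lowered for topic in BLOCKED_TOPICS)

def handle(prompt: str) -> str:
    # Screen the prompt first; only allowed requests reach the model.
    if not check_guardrail(prompt):
        return "Sorry, I can't help with that topic."
    return f"(model response to: {prompt})"  # placeholder for the model call
```

In production the same check runs as a managed policy in front of the model, so blocked prompts never consume inference capacity.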
What is an AI Agent?
An AI agent is more than a chatbot — it’s a system that can be linked to tools or resources to perform specific tasks and extend functionality:
- Connect to tools: AI agents can integrate with external services to execute defined functions.
- Tap into resources: Agents can leverage knowledge bases or existing setups to provide accurate, context-aware responses.
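A minimal sketch of the tool-connection idea, with a trivial keyword router standing in for the model's tool-selection step. All function and tool names here are illustrative, not a real framework API.

```python
# Stub "tools": in a real agent these would call external services.
def get_weather(city: str) -> str:
    return f"Sunny in {city}"  # stands in for a weather API call

def search_docs(query: str) -> str:
    return f"Top result for '{query}'"  # stands in for a knowledge-base lookup

TOOLS = {"weather": get_weather, "docs": search_docs}

def agent(request: str) -> str:
    # A real agent lets the LLM decide which tool to invoke;
    # here we route by keyword to keep the sketch self-contained.
    if "weather" in request.lower():
        return TOOLS["weather"]("Singapore")
    return TOOLS["docs"](request)
```

The point is the shape: the agent owns a registry of callable tools and decides, per request, which one to execute.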
Typical Architecture
One session outlined a best-practice AI agent architecture on AWS, built around one principle: keep core app logic separate from AI functions and tools. This separation makes your product easier to scale and maintain. The architecture consists of two main parts:
- Core App / Business Logic: The backbone of your product that handles core features and user flows.
- AI Functions & Tools: Separated from the core logic and triggered only when the user’s request requires AI-driven processing.
Within this setup, memory plays a key role:
- Short-term memory: Keeps context within the current chat session.
- Long-term memory: Stores user behaviors and knowledge across multiple sessions for more personalized interactions.
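The two memory layers above can be sketched in a few lines. The class and method names are illustrative; in production, long-term memory would live in a database or vector store rather than an in-memory dict.

```python
class ShortTermMemory:
    """Context for the current chat session only."""
    def __init__(self):
        self.turns = []  # discarded when the session ends

    def add(self, role: str, text: str):
        self.turns.append((role, text))

class LongTermMemory:
    """Facts persisted across sessions (dict stands in for a real store)."""
    def __init__(self):
        self.facts = {}

    def remember(self, user_id: str, fact: str):
        self.facts.setdefault(user_id, []).append(fact)

    def recall(self, user_id: str):
        return self.facts.get(user_id, [])

# Short-term: conversation context within one session.
session = ShortTermMemory()
session.add("user", "Tell me about agentic AI")

# Long-term: user preferences that survive across sessions.
profile = LongTermMemory()
profile.remember("user-42", "prefers concise answers")
```

At response time the agent combines both: the session turns for immediate context, plus recalled facts to personalize the answer.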
A typical AWS-based agentic AI system involves three parts:
- Web Application: The user-facing app (often built with Python, EC2, Streamlit).
- App Business Logic: The backend running on AWS EC2.
- Model: Amazon Bedrock powering the AI layer.
Useful Resources
Here are some helpful tools and resources mentioned during the event:
- Strands Agents – Quickly set up an agent with as little as three lines of code:

  ```python
  from strands import Agent

  # Create an agent with default settings
  agent = Agent()

  # Ask the agent a question
  agent("Tell me about agentic AI")
  ```
  It also supports integration with custom Python functions and MCP servers, making it useful for data processing and tool linking.
- Smithery – A curated collection of Model Context Protocol (MCP) servers offering powerful AI integrations. Some popular MCP servers and uses include:
- Decodo: Real-time web content scraping
- GitHub: Repository management
- Slack: Messaging automation
- Zapier: Connecting thousands of apps
- Browser automation: Dynamic web interactions
- Problem-solving tools: Dynamic analysis capabilities
- Local knowledge access: Tap into existing data sources
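Under the hood, an MCP client talks to servers like these over JSON-RPC 2.0. Here is a sketch of the request an agent sends to invoke a server's tool; the envelope fields follow the MCP spec, while the tool name and arguments are made up for illustration.

```python
import json

# JSON-RPC 2.0 request an MCP client would send to call a tool.
# "scrape_page" and its arguments are hypothetical, used only to show the shape.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "scrape_page",
        "arguments": {"url": "https://example.com"},
    },
}

# Serialized form sent over the transport (stdio or HTTP).
wire_message = json.dumps(request)
```

Frameworks like Strands handle this plumbing for you; the takeaway is that any MCP server exposes its tools through this one uniform call format.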
Takeaway
Agentic AI isn’t just about answering questions — it’s about performing actions by combining AI models, tools, and memory into a structured architecture. With AWS, businesses can scale these systems while maintaining guardrails and leveraging Bedrock’s built-in RAG capabilities.
💬 If you’re exploring agentic AI for your business, we can help you design and implement an architecture that aligns with your goals — from MVP testing to enterprise-scale deployment.