Day 3
Agentic AI Foundations
Autonomous Workflows & Future Architecture
AnyCompany Financial Group · Generative & Agentic AI on AWS
Welcome to Day 3. Day 1 was GenAI fundamentals. Day 2 was prompt engineering — you built reusable templates. Today we move from individual prompts to autonomous systems that chain multiple AI steps together. By the end of the day, you'll understand what agentic AI is, when to use it, and you'll have designed a workflow for your own team.
Day 3 Agenda
AI Agents vs Agentic AI — definitions, differences, when to use which
Core Agentic AI Concepts — the Observe → Plan → Act → Reflect loop
Amazon Bedrock AgentCore & AWS Roadmap
Workflow Automation Deep Dive — invoice processing, reconciliation, fraud
"Opening Minds to Possibilities" — design your own workflow
Hands-on: Claude Code in Action on AWS & MCP Servers
AWS Specialist Team Sharing Session & Q&A
Walk through the agenda. Morning is conceptual — understanding what agentic AI is and isn't. Afternoon is practical — designing workflows and building with tools. The "Opening Minds" workshop is the creative highlight — participants brainstorm what THEY could automate. Close with AWS specialist team sharing the roadmap.
Foundations
AI Agents vs Agentic AI
What's the difference and when does it matter?
This is the most important conceptual distinction of the day. Many people use "AI agent" and "agentic AI" interchangeably — they're not the same thing. Getting this right helps teams make better architecture decisions.
AI Agent vs Agentic AI
AI Agent
A single AI that can use tools to complete a specific task.
Follows a defined workflow
Uses tools (search, calculate, API calls)
Completes one task at a time
Human defines the steps
Example: A chatbot that looks up account balances and answers customer questions
Agentic AI System
Multiple AI agents working together autonomously to achieve a goal.
Plans its own approach
Breaks complex goals into subtasks
Coordinates multiple agents
Reflects and adjusts strategy
Example: A system that receives an invoice, extracts data, validates against POs, flags discrepancies, and routes for approval — all autonomously
Use the analogy: An AI agent is like a skilled employee who follows instructions well. An agentic AI system is like a team lead who can break down a project, assign tasks, check quality, and adjust the plan when things go wrong. For business users, the key question is: "Does my workflow need a single tool, or does it need a system that can plan and adapt?" Simple Q&A = agent. End-to-end invoice processing = agentic system.
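For participants who want to see the distinction in code, here is a minimal sketch. `call_llm` is a hypothetical stand-in for any model API and is stubbed for illustration; nothing here is a real SDK call.

```python
# Minimal sketch of agent vs agentic system. `call_llm` is a hypothetical
# stand-in for any model API (e.g. one on Amazon Bedrock), stubbed here.

def call_llm(prompt: str) -> str:
    return f"[model response to: {prompt[:40]}...]"

# AI Agent: one task, human-defined steps, optional tool use.
def balance_agent(question: str, lookup_balance) -> str:
    balance = lookup_balance()                      # tool call
    return call_llm(f"Answer using balance {balance}: {question}")

# Agentic system: the model plans its own subtasks, then executes each one.
def agentic_system(goal: str) -> list[str]:
    plan = call_llm(f"Break this goal into numbered steps: {goal}")
    results = []
    for step in plan.splitlines():                  # execute each subtask
        if step.strip():
            results.append(call_llm(f"Execute step: {step}"))
    return results
```

The agent follows a fixed script; the agentic system generates its own plan and loops over it, which is exactly the "team lead" behavior in the analogy.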
When to Use Which?
Scenario | Use | Why
Answer customer FAQs | AI Agent | Single task, defined scope
Classify support tickets | AI Agent | One-step classification
Process invoice end-to-end | Agentic AI | Multi-step: extract, validate, route, report
Fraud investigation | Agentic AI | Analyze patterns, gather evidence, draft report
Loan application review | Agentic AI | Check docs, verify data, assess risk, recommend
Generate a single report | AI Agent | One prompt, one output
Key takeaway: Start with agents for simple tasks. Graduate to agentic systems when you need multi-step workflows with decision points.
Walk through each row. The pattern: if it's a single prompt-response, use an agent. If it requires multiple steps with decisions between them, use an agentic system. For business users, the practical implication is: the prompt templates you built yesterday (Day 2) are the building blocks. An agentic system chains multiple prompts together with logic between them. Don't over-engineer — many tasks that seem complex can be solved with a well-crafted single prompt.
Key Takeaway: AI Agents vs Agentic AI
AI Agent = one skilled worker following instructions. Good for single, well-defined tasks.
Agentic AI = a team that plans, executes, and adapts. Good for multi-step workflows with decision points.
Most business tasks start as agents and evolve into agentic systems as complexity grows.
The prompt templates you built on Day 2 are the building blocks for agentic workflows.
Start simple. Don't build an agentic system when a good prompt will do.
This is the notes slide — participants should capture these points. Emphasize the last bullet: the biggest mistake teams make is over-engineering. A well-crafted prompt template (like the ones from Day 2) can handle 80% of use cases. Reserve agentic systems for genuinely complex, multi-step workflows.
Core Concepts
The Agentic AI Loop
Goals, Memory, Tools, Reasoning, Planning, Reflection
Now we go deeper into HOW agentic systems work. These 6 components are the building blocks. Understanding them helps business users evaluate vendor solutions and design their own workflows.
6 Components of Agentic AI
1. Goals
What the system is trying to achieve. Clear goals = better outcomes.
2. Memory
What the system remembers across steps. Short-term (current task) and long-term (learned patterns).
3. Tools
External capabilities: search, calculate, call APIs, read databases, send emails.
4. Reasoning
How the system thinks through problems. Chain-of-Thought from Day 2 is the foundation.
5. Planning
Breaking a complex goal into ordered subtasks. "First extract, then validate, then route."
6. Reflection
Self-checking: "Did my output meet the goal? Should I try a different approach?"
Connect each component to something they already know. Goals = the "task" in their prompts. Memory = conversation context from Day 2. Tools = the APIs and databases their teams already use. Reasoning = Chain-of-Thought from yesterday. Planning = the structured output sections. Reflection = the self-critique technique from the exercises. The point: they already understand the pieces — agentic AI just orchestrates them automatically.
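For technically inclined participants, the six components can be pictured as fields of an agent's state. This is an illustrative sketch only; real frameworks define their own schemas, and every name below is hypothetical.

```python
from dataclasses import dataclass, field
from typing import Callable

# Illustrative only: the six components as plain data. Real frameworks
# have their own schemas; all names here are hypothetical.
@dataclass
class AgentState:
    goal: str                                                     # 1. Goals
    short_term_memory: list = field(default_factory=list)         # 2. Memory
    long_term_memory: dict = field(default_factory=dict)
    tools: dict = field(default_factory=dict)                     # 3. Tools
    reasoning_style: str = "chain-of-thought"                     # 4. Reasoning
    plan: list = field(default_factory=list)                      # 5. Planning
    reflections: list = field(default_factory=list)               # 6. Reflection

state = AgentState(goal="Process invoice INV-1042 end to end")
state.tools["lookup_po"] = lambda po_id: {"po_id": po_id, "amount": 1200.00}
state.plan = ["extract fields", "match PO", "check tolerance", "route"]
```

The point mirrors the speaker note: each field corresponds to something participants already used on Day 2; an agentic framework just carries all six around automatically.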
The Agentic Loop
┌──────────────┐
│   OBSERVE    │ ← Receive task or new information
└──────┬───────┘
       │
┌──────▼───────┐
│     PLAN     │ ← Break into subtasks, choose tools
└──────┬───────┘
       │
┌──────▼───────┐
│     ACT      │ ← Execute: call tools, generate output
└──────┬───────┘
       │
┌──────▼───────┐
│   REFLECT    │ ← Check: Did it work? Adjust if needed
└──────┬───────┘
       │
┌──────▼───────┐
│  GOAL MET?   │──Yes──▶ Done
└──────┬───────┘
       │ No
       └──────────▶ Back to OBSERVE
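The diagram can also be read as a simple control loop. The sketch below uses hypothetical placeholder callables rather than real model or tool calls, and caps iterations so a stuck agent escalates to a human.

```python
# Hedged sketch of the Observe → Plan → Act → Reflect loop. All callables
# are hypothetical placeholders; a real system would wrap model/tool calls.
def agentic_loop(task, plan, act, reflect, goal_met, max_iterations=5):
    observation = task                      # OBSERVE: initial input
    for _ in range(max_iterations):
        steps = plan(observation)           # PLAN: break into subtasks
        result = act(steps)                 # ACT: call tools, produce output
        observation = reflect(result)       # REFLECT: check output, note issues
        if goal_met(result):                # GOAL MET? → done
            return result
    return None                             # escalate to a human after N tries

# Toy usage: trivial plan/act/reflect, goal is a palindrome check.
result = agentic_loop(
    task="level",
    plan=lambda obs: [obs],
    act=lambda steps: steps[0],
    reflect=lambda res: res,
    goal_met=lambda res: res == res[::-1],
)
```

Note the `max_iterations` cap: it is the code form of the last bullet on the next takeaway slide, where the loop runs until the goal is met or a human is asked to intervene.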
This is the core loop that every agentic system follows. Walk through with a concrete example: "An invoice arrives (OBSERVE). The system plans: extract data, match to PO, check for discrepancies (PLAN). It runs the extraction and matching (ACT). It checks: did the extraction capture all fields? Does the PO match? (REFLECT). If something's wrong — say the vendor name doesn't match — it loops back and tries a fuzzy match. When everything checks out, it routes for approval (GOAL MET)." The key insight for business users: this loop is what makes agentic AI different from a simple prompt. It can recover from errors and adapt.
Key Takeaway: The Agentic Loop
Every agentic system follows: Observe → Plan → Act → Reflect → Repeat
The "Reflect" step is what makes it intelligent — it can catch its own mistakes and retry
This is the same self-critique technique from Day 2, but automated
When designing workflows, ask: "What happens when a step fails?" — that's where reflection matters
The loop continues until the goal is met or a human is asked to intervene
Capture slide. The most important point: the Reflect step is what separates agentic AI from simple automation. Traditional automation fails when something unexpected happens. Agentic AI can recognize the failure and try a different approach. But it's not magic — you still need to define what "success" looks like for each step.
AWS Platform
Amazon Bedrock AgentCore
From DIY agents to managed infrastructure
Transition: "Now that you understand what agentic AI is, let's look at how AWS provides the infrastructure to build and run these systems. AgentCore is AWS's answer to the 'build vs buy' question for agentic AI."
The DIY Agent Challenge
Many teams build agents from scratch. Common pain points:
Orchestration complexity — managing multi-step workflows, retries, error handling
Memory management — maintaining context across long conversations and sessions
Tool integration — connecting to internal APIs, databases, and services
Scaling — handling concurrent users, rate limits, cost management
Monitoring — knowing when agents fail, drift, or produce bad output
Security — controlling what agents can access and do
Reality check: Building a reliable agentic system from scratch takes 3-6 months of engineering. Maintaining it is an ongoing cost.
This resonates with teams that have tried building agents. Ask: "Has anyone on your team built a chatbot or automation that worked in demo but broke in production?" That's the DIY challenge. The orchestration and error handling are the hard parts — not the AI model itself. AgentCore handles the infrastructure so teams can focus on the business logic.
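To make "orchestration is the hard part" concrete: even a bare-bones retry wrapper, one small slice of what DIY teams end up maintaining, looks something like this illustrative sketch.

```python
import time

# One tiny slice of DIY orchestration: retries with exponential backoff.
# Multiply this by memory, tool routing, rate limits, monitoring, and
# security checks, and the 3-6 month estimate becomes plausible.
def call_with_retries(step, max_attempts=3, base_delay=0.01):
    last_error = None
    for attempt in range(max_attempts):
        try:
            return step()
        except Exception as exc:            # real code would narrow this
            last_error = exc
            time.sleep(base_delay * (2 ** attempt))   # backoff: 1x, 2x, 4x
    raise RuntimeError(
        f"Step failed after {max_attempts} attempts") from last_error
```

A managed runtime absorbs exactly this kind of plumbing, which is the "focus on business logic" argument on the next slide.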
AgentCore: What It Provides
Capability | What it does | Business benefit
Agent Runtime | Hosts and runs your agents | No infrastructure to manage
Memory | Persistent context across sessions | Agents remember past interactions
Tool Use | Connect agents to APIs and databases | Agents can take real actions
Guardrails | Safety filters and access controls | Prevent harmful or unauthorized actions
Knowledge Bases | RAG over your documents | Agents answer from YOUR data
Orchestration | Multi-agent coordination | Complex workflows without custom code
Walk through each row. For business users, the key message is: AgentCore handles the engineering complexity so your team can focus on defining WHAT the agent should do, not HOW to build the infrastructure. Knowledge Bases connects directly to the RAG technique from Day 2 — same concept, enterprise scale. Guardrails connects to the responsible AI content from Day 1. Everything builds on what they've already learned.
Transition Path: DIY → AgentCore
Phase | What to do | Timeline
1. Assess | Inventory current DIY agents, identify pain points | 2-4 weeks
2. Pilot | Migrate one low-risk agent to AgentCore | 4-6 weeks
3. Validate | Compare performance, cost, reliability | 2-4 weeks
4. Migrate | Move remaining agents in priority order | 3-6 months
5. Optimize | Leverage AgentCore features (multi-agent, guardrails) | Ongoing
Key benefit: Reduced maintenance burden, better reliability, faster development of new agents.
This is the practical "what do we do next" slide. For teams with existing DIY agents, the transition doesn't have to be all-at-once. Start with one low-risk agent, prove the value, then migrate the rest. For teams without agents yet, the message is: start with AgentCore from day one — don't build DIY infrastructure you'll have to replace later.
Key Takeaway: AgentCore
AgentCore handles the hard parts — orchestration, memory, scaling, security — so you focus on business logic
It's not "replace your agents" — it's "stop maintaining infrastructure and focus on what agents DO"
Knowledge Bases = enterprise RAG (same grounding technique from Day 2, at scale)
Guardrails = responsible AI controls (same concepts from Day 1, enforced automatically)
Start with a pilot — one agent, one use case. Prove value before migrating everything.
Capture slide. The thread across all 3 days: Day 1 taught responsible AI principles. Day 2 taught prompt engineering and RAG grounding. Day 3 shows how AgentCore operationalizes all of it — guardrails enforce Day 1's principles, Knowledge Bases implement Day 2's RAG patterns, and the agent runtime handles the orchestration complexity.
Deep Dive
Workflow Automation
From manual processes to autonomous pipelines
Transition: "Now let's get concrete. What does an agentic workflow actually look like? We'll walk through a real example — invoice processing — then discuss other workflows relevant to your teams."
Example: Invoice Processing Workflow
📄 Invoice arrives (email/upload)
     │
┌────▼────┐
│ EXTRACT │ AI reads PDF, extracts vendor, amount,
│  DATA   │ line items, dates
└────┬────┘
     │
┌────▼─────┐
│ VALIDATE │ Match against PO database
│  vs PO   │ Check amounts within tolerance
└────┬─────┘
     │
┌────▼────────────────────┐
│ DECISION                │
│ ✅ Match → Auto-approve │
│ ⚠️ Variance → Flag      │
│ ❌ No PO → Escalate     │
└────┬────────────────────┘
     │
┌────▼────┐
│ REPORT  │ Generate validation report
│         │ Update dashboard
└─────────┘
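The diagram maps almost directly onto code. In this hedged sketch, a dict stands in for the PO database, the extraction step is stubbed where a real system would call a model, and all field names are illustrative.

```python
# Illustrative pipeline for the diagram above. The PO "database" is a dict;
# a real system would query an actual store, and extract() would be a
# model call against the invoice PDF. All names here are hypothetical.
PO_DATABASE = {"PO-7731": {"vendor": "Acme Corp", "amount": 1200.00}}
TOLERANCE = 0.02  # 2% variance allowed

def extract(invoice: dict) -> dict:
    # Stand-in for AI extraction from a PDF.
    return {"vendor": invoice["vendor"], "amount": invoice["amount"],
            "po_number": invoice.get("po_number")}

def validate(fields: dict) -> str:
    po = PO_DATABASE.get(fields["po_number"])
    if po is None:
        return "escalate"          # ❌ No PO → Escalate
    variance = abs(fields["amount"] - po["amount"]) / po["amount"]
    return "auto-approve" if variance <= TOLERANCE else "flag"  # ✅ / ⚠️

def process_invoice(invoice: dict) -> str:
    # REPORT step would log the decision and update a dashboard.
    return validate(extract(invoice))
```

Each function body is where a Day 2 prompt template would run; the `if`/`return` logic between them is the decision layer that makes it a workflow rather than a single prompt.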
This is the same workflow participants built in the Kiro workshop (Module 1). But here it's automated end-to-end — no human in the loop for the happy path. Point out: each box is essentially a prompt template (like the ones from Day 2) connected by decision logic. The EXTRACT step uses structured output (Module 4). The VALIDATE step uses RAG grounding against the PO database. The DECISION step uses the risk rating framework. Everything they learned in Days 1-2 comes together here.
More Workflow Opportunities
Workflow | Steps | Business impact
Payment Reconciliation | Match txns → Flag mismatches → Suggest resolutions → Report | Days → hours
Fraud Investigation | Detect pattern → Gather evidence → Score risk → Draft case file | Consistent, faster triage
Loan Processing | Check docs → Verify data → Assess risk → Recommend | Faster approvals, fewer errors
KYC Verification | Extract ID data → Check watchlists → Verify docs → Flag issues | Higher volume, consistent checks
Compliance Reporting | Scan txns → Apply rules → Generate report → Export for regulator | Audit-ready, automated
Ask participants: "Which of these is closest to a pain point in YOUR team?" This sets up the afternoon workshop where they'll design their own workflow. Each of these follows the same pattern: Input → Process → Decide → Output. The AI handles the processing and decision steps; humans handle exceptions and final approvals. Point out: they've already built pieces of these in the Kiro workshop — fraud detection (Module 5), compliance reporting (Module 6), invoice processing (Module 1).
Key Takeaway: Workflow Automation
Every workflow follows: Input → Process → Decide → Output
Each step is a prompt template (Day 2) connected by decision logic
Start with the happy path — automate the 80% that's straightforward
Route exceptions to humans — don't try to automate edge cases on day one
Measure: time saved, error reduction, consistency improvement
The Kiro exercises you did (invoice processing, fraud detection) are prototypes of these workflows
Capture slide. The most important point: start with the happy path. The biggest mistake teams make is trying to automate every edge case from the start. Automate the 80% that's straightforward, route the 20% exceptions to humans, and gradually expand automation as you learn what works.
Interactive
"Opening Minds to Possibilities"
What could YOU automate?
This is the creative highlight of Day 3. Participants brainstorm automation opportunities from their own daily work. The goal is to leave with at least one concrete idea they can pursue after the training.
Workshop: Design Your Workflow
In groups of 3-4, pick a workflow from your team and design it:
Step 1: Identify (10 min)
What manual process takes the most time?
Where do errors happen most often?
What's repetitive but requires judgment?
Step 2: Map (15 min)
Draw the current process (boxes and arrows)
Identify which steps AI could handle
Mark decision points and exception paths
Step 3: Assess (10 min)
Feasibility: Can current AI do this?
Impact: How much time/cost saved?
Risk: What if the AI gets it wrong?
Step 4: Present (5 min each)
Share your workflow with the group
Get feedback and suggestions
Identify quick wins vs long-term projects
Facilitate actively. Walk around during Steps 1-3 and help groups that are stuck. Common issues: groups pick something too broad ("automate all of finance") — help them narrow to a specific process. Or they pick something too simple ("send an email") — push them toward something with multiple steps and decisions. The best outcomes are workflows that are painful today, have clear inputs/outputs, and involve repetitive judgment calls. During presentations, encourage cross-team feedback — often someone from another team has solved a similar problem.
Key Takeaway: Designing AI Workflows
Best candidates for automation: repetitive, judgment-based, high-volume, error-prone
Worst candidates: rare, highly creative, politically sensitive, no clear success criteria
Always ask: "What's the cost of the AI being wrong?" — that determines how much human oversight you need
Quick wins: Report generation, data extraction, classification, summarization
Long-term projects: End-to-end processing, multi-system integration, autonomous decision-making
Document your workflow design — it becomes the requirements spec for implementation
Capture slide. The "cost of being wrong" question is the most important filter. If the AI misclassifies a support ticket, the cost is low — a human corrects it. If the AI approves a fraudulent transaction, the cost is high — you need more guardrails. This risk assessment determines the level of human oversight in your workflow design.
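The "cost of being wrong" filter can even be written down as a rule of thumb. The function below is a hypothetical sketch, not a formal risk framework; calibrate the tiers to your own risk appetite.

```python
# Hypothetical rule of thumb mapping error cost to oversight level.
# Tiers and examples are illustrative, not a formal risk framework.
def oversight_level(error_cost: str, reversible: bool) -> str:
    if error_cost == "low" and reversible:
        return "spot-check"        # e.g. ticket misclassification: cheap to fix
    if error_cost == "medium" or (error_cost == "low" and not reversible):
        return "human-review"      # e.g. draft reports reviewed before sending
    return "human-approval"        # e.g. payment release, fraud decisions
```

The shape matters more than the thresholds: as error cost rises or reversibility falls, a human moves from sampling outputs to approving every one.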
Hands-on
AI-Powered IDEs in Action
Claude Code, Kiro, and MCP Servers
Transition to the hands-on portion. This section demonstrates how developers and power users can build agentic workflows using AI-powered IDEs. For business users, the key takeaway is understanding what's possible — not necessarily doing it themselves.
MCP: Model Context Protocol
MCP connects AI to your internal tools and data sources:
What MCP does | Example
Connect AI to databases | AI queries your transaction database directly
Connect AI to APIs | AI calls your internal risk scoring API
Connect AI to file systems | AI reads and writes to your document store
Connect AI to monitoring | AI checks system health dashboards
For business users: MCP is what turns AI from "a chatbot that answers questions" into "a system that can actually DO things in your environment." You don't need to build MCP servers — your engineering team does. But understanding what's possible helps you design better workflows.
MCP is the bridge between AI and real systems. Without MCP, AI can only read text you paste into it. With MCP, AI can query databases, call APIs, read files, and take actions. For business users, the key insight is: when you design a workflow and say "the AI should check the PO database" — MCP is HOW that happens technically. You don't need to build it, but knowing it exists helps you design realistic workflows.
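Under the hood, MCP messages are JSON-RPC 2.0: a tool call is a structured request naming a tool and its arguments. The sketch below pairs an illustrative request with a toy local dispatcher; the tool name and payload are invented for illustration, not any real server's API.

```python
import json

# Simplified sketch of an MCP-style tool call. MCP uses JSON-RPC 2.0;
# the tool name and arguments here are illustrative, not a real server's API.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "lookup_purchase_order",          # hypothetical tool
        "arguments": {"po_number": "PO-7731"},
    },
}

# Toy server-side dispatcher: route the request to a local function. A real
# MCP server would expose registered tools over stdio or HTTP transport.
def handle(msg: dict) -> dict:
    tools = {"lookup_purchase_order":
             lambda po_number: {"po_number": po_number, "amount": 1200.00}}
    result = tools[msg["params"]["name"]](**msg["params"]["arguments"])
    return {"jsonrpc": "2.0", "id": msg["id"], "result": result}

response = handle(json.loads(json.dumps(request)))
```

For business users, the takeaway stands: when a workflow design says "the AI checks the PO database," a message like this is the mechanism engineering builds.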
Key Takeaway: AI-Powered Development Tools
Kiro (Day 1-2 exercises) — business users describe what they want, AI builds it
Claude Code / Cursor — developers build production systems with AI pair programming
MCP — the protocol that connects AI to your internal tools and databases
Business users don't need to code — but understanding what's possible helps design better workflows
The gap between "prototype in Kiro" and "production system" is where engineering teams add MCP, guardrails, and monitoring
Capture slide. The key message for business users: you prototype in Kiro (fast, no code), then engineering teams harden it for production using Claude Code + MCP + guardrails. Your role is to define WHAT the system should do and validate that it works correctly. The engineering team handles HOW it's built and deployed.
Synthesis
Connecting All Three Days
From concepts to templates to autonomous systems
This is the wrap-up that ties everything together. Show participants how each day built on the previous one.
The 3-Day Journey
Day | What you learned | What you built
Day 1 | GenAI fundamentals, Bedrock, responsible AI | Understanding of what's possible and safe
Day 2 | Prompt engineering techniques | Reusable prompt templates for your team
Day 3 | Agentic AI, workflows, AgentCore | Workflow designs for your specific use cases
The thread: Day 1's responsible AI principles → enforced by Day 3's guardrails. Day 2's prompt templates → become Day 3's workflow building blocks. Day 2's RAG grounding → becomes Day 3's Knowledge Bases.
This is the "it all connects" moment. Walk through the thread: the responsible AI principles from Day 1 aren't just theory — they're enforced by AgentCore's guardrails in production. The prompt templates from Day 2 aren't just exercises — they're the actual prompts that run inside agentic workflows. The RAG grounding technique from Day 2 is exactly what Knowledge Bases does at enterprise scale. Nothing was wasted — every concept builds toward production-ready AI systems.
Day 3 Outcomes
Distinguish between AI agents and agentic AI systems
Understand the agentic loop: Observe → Plan → Act → Reflect
Evaluate Amazon Bedrock AgentCore for your organization
Design multi-step workflow automations for your team's processes
Identify quick wins vs long-term automation projects
Understand MCP and how AI connects to internal systems
Plan a transition path from DIY agents to managed solutions
Walk through each outcome. Ask: "Do you feel confident about each of these?" For any hesitation, point them to the relevant key takeaway slide or offer to revisit. Emphasize: the workflow designs they created in the workshop are real deliverables — they can take them back to their teams and start planning implementation.
What to Do Next
This week: Share your workflow design with your team. Identify one quick win to prototype.
This month: Build a prototype using Kiro or Claude Code. Test your Day 2 prompt templates with real data.
This quarter: Evaluate AgentCore for your highest-impact workflow. Start a pilot.
Resources: Workshop site (exercises + slides), prompt templates, AWS documentation
Support: AWS specialist team available for architecture reviews and adoption planning
This is the action plan. Make it concrete: "Before you leave today, write down the ONE thing you're going to do this week." The most common quick win is taking a Day 2 prompt template and testing it with real data from their team. That alone can save hours per week. For teams ready for more, the AgentCore pilot is the next step.
Thank You
From prompts to templates to autonomous workflows
AnyCompany Financial Group · Generative & Agentic AI on AWS
Close with: "Three days ago, most of you had never written a structured prompt. Today, you've built reusable templates, designed autonomous workflows, and have a concrete plan for bringing AI into your daily work. The tools are ready. The techniques are proven. The only thing left is to start." Thank the participants and the AWS specialist team.