We are witnessing a massive shift in how we interact with Artificial Intelligence. We are moving from simple, reactive chatbots to Agentic Systems—autonomous entities capable of reasoning, planning, and interacting with the world to achieve complex goals.
But how do we actually build these systems? It’s not just about having a powerful LLM; it’s about the architecture around it. It requires structure, design, and a thoughtful approach to how the agent perceives and acts.
I recently started diving deep into the book “Agentic Design Patterns: A Hands-On Guide to Building Intelligent Systems” by Antonio Gulli (LinkedIn). This book is a fantastic resource that extracts key architectural blueprints for building AI agents.
To really understand these concepts, I decided to get my hands dirty. I am launching a new series of blog posts where I will explore these patterns one by one. Full credit goes to Antonio Gulli for defining these patterns; my goal here is simply to document my learning journey and share the practical implementation of his ideas.
The Project: Agentic Design Patterns on GitHub#
Reading code in a PDF is one thing; running it is another.
I have started a new open-source project to accompany this series. I am taking the concepts and code from Antonio’s book and converting them into a clean, executable, and easy-to-navigate repository.
You can follow along, star the repo, and try the code yourself here:
👉 GitHub: carlosprados/Agentic_Design_Patterns
My goal is to provide a “canvas” for developers—a practical foundation where you can see these patterns in action using frameworks like LangChain and Google’s Agent Development Kit (ADK).
Pattern #1: Prompt Chaining#
Let’s start at the beginning. As described in Chapter 1 of the book, the most foundational pattern in the Agentic world is Prompt Chaining (sometimes called the Pipeline pattern).
The Problem: The Monolithic Prompt#
When we first start using LLMs, the tendency is to stuff everything into one massive prompt. We ask the model to “Read this 20-page report, extract the dates, summarize the key findings, check for errors, and format it as a JSON object.”
This often leads to failure. The model gets overwhelmed. It might hallucinate, forget instructions (“instruction neglect”), or mix up the formatting. The cognitive load is simply too high for a single inference step.
The Solution: Divide and Conquer#
Prompt Chaining solves this by decomposing a complex task into a sequence of smaller, manageable sub-tasks.
Instead of one giant leap, we take structured steps:
- Step 1: Extract text from the document.
- Step 2: Summarize that text.
- Step 3: Format the summary into JSON.
The output of one step becomes the input for the next. This creates a dependency chain where the context and results of previous operations guide the subsequent processing.
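The pipeline described above can be sketched without any framework at all. In this minimal illustration each step is a plain Python function standing in for an LLM call (the helper names and the toy logic are my own stand-ins, not code from the book):

```python
import json

# Each function below simulates one LLM inference step.
# In a real chain, each would be a prompt + model call.

def extract_text(document: str) -> str:
    # Step 1: stand-in for extracting the relevant text.
    return document.strip()

def summarize(text: str) -> str:
    # Step 2: stand-in for a summarization prompt (keeps the first sentence).
    return text.split(".")[0] + "."

def to_json(summary: str) -> str:
    # Step 3: stand-in for a formatting prompt.
    return json.dumps({"summary": summary})

def run_chain(document: str) -> str:
    # The chain itself: the output of each step is the input of the next.
    result = document
    for step in (extract_text, summarize, to_json):
        result = step(result)
    return result

print(run_chain("  The laptop has 16 GB of RAM. It also ships with a fast SSD."))
# → {"summary": "The laptop has 16 GB of RAM."}
```

Swapping any stand-in function for a real model call preserves the structure: the chain only cares that each step maps one string to the next.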
Why this matters#
By breaking the task into a chain of smaller steps, you gain several advantages:
- Reliability: Each step is simpler, reducing the chance of error.
- Debuggability: If the output is wrong, you know exactly which link in the chain failed.
- Focus: You can use different system prompts (or even different models!) for different steps. You might use a cheap, fast model for formatting and a “smart” model for reasoning.
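That last point, assigning a different model to each step, can be sketched with a simple dispatch table. The model names and step labels here are illustrative placeholders, not APIs from the book or any provider:

```python
# Illustrative sketch: route each chain step to a different (hypothetical) model,
# so cheap models handle mechanical work and a stronger model handles reasoning.

def call_model(model: str, prompt: str) -> str:
    # Stand-in for a real API call; tags the output so we can see the routing.
    return f"[{model}] {prompt}"

STEP_MODELS = {
    "extract": "fast-cheap-model",   # mechanical extraction
    "reason":  "large-smart-model",  # the hard thinking
    "format":  "fast-cheap-model",   # JSON formatting
}

def run_step(step: str, payload: str) -> str:
    # Look up which model this step should use, then call it.
    return call_model(STEP_MODELS[step], payload)

print(run_step("extract", "pull the dates from this report"))
# → [fast-cheap-model] pull the dates from this report
```

Because each step is isolated, swapping the model behind one step never touches the others.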
Seeing it in Action#
In the GitHub repository, I’ve implemented a classic example of this pattern based on the book’s guidance.
The code demonstrates a pipeline that takes raw, unstructured text (like a technical description of a laptop) and passes it through a chain to:
- Extract specific technical specifications.
- Transform and sanitize that data.
- Format it into a clean JSON structure ready for a database.
Here is a snippet of the logic using LangChain (check the repo for the full runnable source):

```python
# A conceptual look at the chain structure
extraction_chain = prompt_extract | llm | StrOutputParser()

# The output of extraction feeds into the transformation
full_chain = (
    {"specifications": extraction_chain}
    | prompt_transform
    | llm
    | StrOutputParser()
)
```

This modularity is the building block of more complex behaviors. Once you master chaining, you can start building agents that don’t just follow a straight line, but can decide which chain to use.
What’s Next?#
Prompt Chaining is just the tip of the iceberg. In the next post, we will explore Routing (Chapter 2)—giving our agents the ability to make decisions and choose different paths based on the user’s intent.
Make sure to check out the repository, clone it, and try running the Chapter 1 examples.
Special thanks to Antonio Gulli for his inspiring work.

