How to use Agentic AI in AEM Development to Supercharge Your Website Redesign
Jan 21, 2026
This post continues the experimentation from my previous blog on using Edge Delivery Services Skills for AI assistant development. Here, I’ll walk through creating a couple of Agent Skills specifically for AEM development and share insights on the time, effort, and benefits involved in building your own Skills. If you want a refresher on Agent Skills—their purpose and how to use them—you can check out my earlier post for a quick overview.
TL;DR
Well-defined Agent Skills can significantly improve development speed and consistency, but they take time to design and continuously refine. They’re not a one-time setup—ongoing iteration is key to getting real value.
Why Use Agent Skills?
- **More Predictable Results**: We’ve all experienced asking an AI agent for something specific and getting an unexpected, or even useless, response. Skills help address this by providing agents with precise, specialized knowledge and predefined workflows, ensuring their outputs stay within expected boundaries and align more closely with the original intent.
- **Reusable Knowledge and Workflows**: Skills package instructions and actions into reusable components. Instead of repeatedly explaining how a task should be handled, teams can define the process once and rely on the agent to follow it consistently every time.
- **On-Demand Context**: Skills are loaded only when needed, eliminating the need to repeat the same guidance across multiple conversations. This keeps interactions cleaner and ensures the agent always has the right context at the right time.
- **Scalability and Consistency Across Teams**: Skills are designed to be shared and composed across different agents and projects. This makes it easier to scale AI solutions while maintaining consistent behavior and standards, which is especially valuable when multiple development teams are involved and code consistency is a priority.
Building Agent Skills
Introduced in 2025, Agent Skills quickly became a standard method for creating and sharing AI capabilities. At a high level, building a Skill involves creating a SKILL.md file inside a dedicated folder that the AI Agent can scan, interpret, and execute. You can explore all the details in the Skill specification at agentskills.io.
Here’s what I used to build my custom Agent Skills:
Step 1: Create a Unique Folder
Each Skill should live in its own folder that the agent can access. Common locations include:

- Project-level: `.claude/skills/my-new-skill`
- System-level: `~/.claude/skills/my-new-skill`
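Assuming a Claude Code-style setup like the one above, scaffolding a new Skill is just a couple of commands (`my-new-skill` is a placeholder name):

```shell
# Create a project-level Skill folder
mkdir -p .claude/skills/my-new-skill

# The agent discovers the Skill via its SKILL.md file (filled in next)
touch .claude/skills/my-new-skill/SKILL.md

ls .claude/skills/my-new-skill
```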
Step 2: Add the SKILL.md File
The SKILL.md file is the core of the Skill and must include:
- **Metadata (Frontmatter)**
  - Identifies the Skill
  - At a minimum, include the `name` and `description` fields
- **Body Content**
  - Contains the instructions the agent should follow
  - No strict formatting rules; you can structure it however it helps the agent complete the task effectively
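To make this concrete, here is a minimal SKILL.md sketch for a hypothetical AEM Skill (the name, description, and instructions are illustrative, not copied from the actual Skills discussed in this post):

```markdown
---
name: build-sling-models
description: Creates AEM Sling Models using an interface plus implementation pattern. Use when the user asks to create or modify a Sling Model.
---

# Building Sling Models

1. Create a Java interface exposing the component's properties.
2. Create an implementation class annotated with @Model.
3. Prefer specific injector annotations (e.g. @ValueMapValue) over generic @Inject.
4. Verify that the project builds before finishing.
```

The frontmatter is what the agent scans to decide when the Skill applies, so the `description` should state both what the Skill does and when to use it.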
Step 3: Add Optional Resources
You can include additional materials to support the Skill:
- **Markdown Instructions**: Extra guides or documentation
- **Scripts**: Executable code that the Skill can run
- **File References**: Any other reference materials the agent might need
Building Custom AEM Skills
For this experiment, I focused on automating the creation of traditional AEM components. To achieve this, I developed the following set of Skills:
- **Building AEM Components Skill**: Handles the overall creation of AEM components, ensuring proper structure and reusable patterns. It also coordinates other Skills as needed throughout the component creation workflow.
- **Building Granite Dialogs Skill**: Focuses on designing Granite UI dialogs, including layouts, fields, and standard configurations, making it easy to set up the authoring interface.
- **Building Sling Models Skill**: Responsible for creating Sling Models with correct annotations and data mapping, which are then used in the component’s HTL templates.
- **Building Sling Model Tests Skill**: To follow best practices, each new Sling Model should have a corresponding test. This Skill helps generate those tests and is typically invoked after a Sling Model is created.
Each of these Skills provides clear, step-by-step instructions that follow best practices, or at least the common patterns I use when developing AEM components. For example, in the “Building Sling Models” Skill, I explicitly direct the agent to use Java interfaces and implementations when creating new Sling Models, discourage the use of the generic `@Inject` annotation, and include sample model templates in a separate file (`injectors-type.md`) to show the preferred approach.
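As a sketch of the kind of pattern the Skill encourages (the names are illustrative, and this is not runnable on its own since it depends on the Apache Sling Models API), an interface-plus-implementation Sling Model might look like:

```java
// Illustrative only: the interface the component's HTL template would consume
public interface Promo {
    String getTitle();
}

// Implementation registered as a Sling Model, adapting from the Resource
@Model(adaptables = Resource.class,
       adapters = Promo.class,
       defaultInjectionStrategy = DefaultInjectionStrategy.OPTIONAL)
public class PromoImpl implements Promo {

    // A specific injector annotation, rather than generic @Inject
    @ValueMapValue
    private String title;

    @Override
    public String getTitle() {
        return title;
    }
}
```

Encoding this split in the Skill means every generated component exposes a clean interface to HTL while keeping injection details in the implementation.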
Testing the New AEM Skills
To evaluate how the new AEM Skills perform in real development scenarios, I ran two structured tests focused on component generation and adherence to predefined patterns. The goal of these tests was to understand how well the AEM Skills interpret prompts, follow defined conventions, and produce usable component structures.
Test 1: Simple Component Generation
The first test focused on creating a relatively simple component using the following prompt:
Use the build-aem-components skill to create a new AEM component called Promo. This component can edit the image, title, body, single button with an editable title, and the link. There should also be an option to display the image on the left or the right.
Test 2: Advanced Component with Multifield
The second test introduced more complexity by leveraging a multifield configuration. The prompt used was:
Use the build-aem-components skill to create a new AEM component called News Carousel. Each news item should have:
- Title (required)
- Image path
- Description (max 80 characters)
- Call-to-action with editable text, path, and target (`_blank` or `_self`)
Test Results and Observations
Overall, the AEM Skills performed consistently across both tests:
- The correct Skills were detected and applied as expected.
- The generated output generally followed the patterns and conventions explicitly defined in the Skill configuration.
- All required files were generated and organized according to the expected component structure.
- The build process was executed, and issues were resolved iteratively when errors occurred.
That said, none of the generated components were immediately production-ready. Still, the remaining issues were minor and required only minimal manual adjustments, which suggests the Skills are already highly effective as a development accelerator.
Note: For this experiment, the focus was on backend and component structure generation. Expanding the setup with an additional Skill to handle client libraries and produce more polished HTML, CSS, and JavaScript output was considered, but ultimately excluded due to time constraints.
Key Lessons Learned
This experiment turned out to be very valuable, and along the way, I picked up a few lessons that are worth sharing if you’re planning to write your own Skills:
- **Design before you write**: Skills can grow large and complex very quickly, so a bit of upfront design goes a long way. I initially planned to build a single Skill, but as it expanded, it made more sense to break it into multiple, smaller Skills. In some cases, I also moved shared information, code, and patterns into separate files, which made everything cleaner and easier to understand.
- **Keep Skills focused and modular**: As Skills evolve, it’s easy for them to become bloated. What started as one Skill quickly turned into four, each with multiple supporting files. Treat Skills like microservices or composable components: encapsulate functionality properly and keep each Skill focused on a single responsibility.
- **Prompts still matter**: Even with well-defined Skills, the prompt you provide is still important. Clear, intentional prompts help the agent understand how and when to apply a Skill, and poor prompts can lead to suboptimal results, regardless of how good the Skill is.
- **Be explicit and firm in your instructions**: One recurring challenge was that the agent would skip or partially ignore detailed instructions. Writing a Skill is a bit like giving instructions to a child: you need to be clear and firm. Using strong language, like “MUST use” in key places, helped ensure that requirements were followed rather than treated as optional suggestions.
- **Define clear steps and validation checks**: Providing a step-by-step process for the agent to follow, along with verification criteria, significantly improved the accuracy and consistency of the output.
- **Iterate to improve results**: Even with well-defined Skills, the first response wasn’t always exactly what I expected. Asking the agent to iterate on its output often produced much better and more refined results.
- **Expect ongoing refinement**: Writing a Skill isn’t a one-time task. Skills need to be continuously polished and adjusted as you learn what works and what doesn’t. While this can be time-consuming, the quality improvements are usually worth the effort.
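Putting the "be firm" and "validation checks" lessons together, a Skill body section might read something like this (the wording is illustrative, not copied from my actual Skills):

```markdown
## Rules

- You MUST create both an interface and an implementation class.
- You MUST NOT use the generic @Inject annotation.

## Verification

Before finishing, confirm that:

1. All generated files compile (run the project build).
2. The dialog opens without errors in the authoring UI.
3. Each new Sling Model has a corresponding unit test.
```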
Final Thoughts
Agent Skills are a powerful feature that can significantly enhance development workflows when properly defined and used. They not only speed up the development process but also help enforce common patterns and best practices. Defining them correctly, however, takes time: the task may seem straightforward at first, but as workflows grow more complex, continuous refinement becomes necessary. This is not a one-time effort; Skills should be treated as a living, evolving practice. Maintained and improved over time, they can better support development workflows and ultimately lead to higher-quality results when working with AI assistants.