Building with AI Agents: A Practical Guide Inspired by Spotify and Anthropic
A practical guide inspired by Spotify and Anthropic's conversation on agentic development: define scope, set up infrastructure, implement feedback loops, use chain-of-thought, add guardrails, test in isolation, and monitor.
Introduction
AI agents are no longer a futuristic concept—they are reshaping how software teams design, build, and even perceive the role of a developer. In a recent conversation between Spotify and Anthropic, engineering leaders shared insights on integrating autonomous agents into real-world workflows. This guide distills those lessons into a step‑by‑step framework you can apply today. Whether you’re a seasoned engineer or just starting to explore agentic development, you’ll learn how to set up, test, and scale AI agents safely and effectively.

What You Need
- Access to a large language model (LLM) such as Anthropic’s Claude, OpenAI’s GPT‑4, or an open‑source model
- An API key or local inference setup for your chosen model
- Basic programming skills (Python, JavaScript, or any language with HTTP capabilities)
- A version control system (e.g., Git) to manage agent‑generated code
- A sandbox environment (Docker, a separate cloud account, or a local VM) for safe experimentation
- Optional: monitoring tools like Datadog, Prometheus, or simple logging to track agent actions
Step‑by‑Step Guide
Step 1: Define the Agent’s Role and Scope
Before writing any code, clarify what your agent will do. In the Spotify‑Anthropic discussion, the first rule was “start small.” Choose a repetitive, well‑understood task such as:
- Writing unit tests for existing functions
- Fixing linting errors automatically
- Generating documentation from code comments
- Translating code between languages (e.g., Java to Python)
Document the boundaries: what the agent should never do (e.g., modify production databases, deploy to production without human review). This scope definition will guide every later step.
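One way to make this concrete is a small scope config that the agent scaffold (built in the next step) loads at startup. This is only a sketch; the task names, paths, and limits are illustrative, not a standard schema:

```python
# Illustrative scope definition loaded by the agent scaffold at startup.
# The task names, paths, and limits are examples, not a standard schema.
AGENT_SCOPE = {
    "role": "unit-test writer",
    "allowed_tasks": ["write_unit_tests"],
    "allowed_paths": ["/workspace/code/", "/workspace/tests/"],
    "forbidden_actions": [
        "modify production databases",
        "deploy to production without human review",
    ],
    "max_retries": 3,
}
```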
Step 2: Set Up the Agent Infrastructure
Your agent needs a “home”—a script or service that receives prompts, calls the LLM, and executes the response. Create a minimal scaffold:
- Write a Python script that reads a task description from the command line.
- Construct a system prompt that includes the agent’s role, allowed tools, and constraints (e.g., “You may only edit files in /workspace/code/”).
- Send the prompt to the LLM via its API.
- Parse the response (typically in JSON or markdown) and extract commands to run (e.g., file writes, shell commands).
- Execute those commands in a sandboxed environment (Docker container).
Spotify emphasized using thin wrappers around the LLM—keep the overhead low so you can iterate quickly.
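Here is a minimal sketch of that scaffold, assuming the official anthropic Python SDK; any LLM client follows the same request/response shape, and the prompt wording, model ID, and JSON command format are placeholders to adapt:

```python
# Minimal agent scaffold: read a task, call the LLM, print the proposed
# commands for the sandbox runner. Assumes the official anthropic SDK;
# any LLM client follows the same request/response shape.
import sys
import anthropic

SYSTEM_PROMPT = (
    "You are a coding agent. You may only edit files in /workspace/code/. "
    "Respond with a JSON list of tool calls, e.g. "
    '[{"tool": "write_file", "args": {...}}].'
)

def main() -> None:
    task = sys.argv[1]  # e.g. "write unit tests for utils.py"
    client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the env
    response = client.messages.create(
        model="claude-sonnet-4-20250514",  # substitute your model ID
        max_tokens=1024,
        system=SYSTEM_PROMPT,
        messages=[{"role": "user", "content": task}],
    )
    # Parse this output and execute each command in the Docker sandbox.
    print(response.content[0].text)

if __name__ == "__main__":
    main()
```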
Step 3: Implement Feedback Loops
An agent without feedback is like a developer without tests. Build in mechanisms for the agent to verify its own work:
- Automatic validation: After the agent runs a command, check if the expected output (e.g., a file changed, a test passes) occurred.
- User confirmation: For high‑risk actions (e.g., deleting files, changing secrets), require a human to approve before execution.
- Self‑correction: If a test fails, feed the error back into the LLM and ask the agent to fix the issue (up to a maximum number of retries).
During the Spotify x Anthropic live session, they demonstrated how a simple “test‑fail‑retry” loop turned a flaky agent into a reliable code reviewer.
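A minimal sketch of that loop, with the test runner and fix step passed in as callbacks so your own scaffold supplies them:

```python
# Sketch of a test-fail-retry loop. The run_tests and ask_agent_to_fix
# callbacks are supplied by your scaffold: run_tests returns (passed, output).
MAX_RETRIES = 3

def run_with_feedback(change_set, run_tests, ask_agent_to_fix):
    """Retry until the tests pass or the retry budget is spent."""
    for attempt in range(MAX_RETRIES):
        passed, output = run_tests(change_set)  # e.g. pytest inside the sandbox
        if passed:
            return change_set                   # verified; hand off for review
        # Feed the failure output back so the model can self-correct.
        change_set = ask_agent_to_fix(change_set, output)
    raise RuntimeError("Retry budget exhausted; escalate to a human.")
```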
Step 4: Use Chain‑of‑Thought Prompting
To improve the quality of agent outputs, structure the prompt to encourage reasoning. Instead of “Write a unit test for function X,” try:
“First, list the edge cases for the function. Second, decide what assertions are needed. Third, write the test code. Finally, run the test and report any failures.”
Anthropic’s research shows that explicit step‑by‑step instructions (chain‑of‑thought) dramatically reduce hallucinations and increase task success rates. Your agent will “think aloud” in its response, making it easier for you to audit its logic.
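As a sketch, the structured prompt can live in a simple template like the one below; the wording mirrors the example above and is a starting point, not a fixed recipe:

```python
# Illustrative chain-of-thought prompt template; the wording mirrors the
# example above and is a starting point, not a fixed recipe.
COT_TEMPLATE = """You are writing a unit test for the function below.
First, list the edge cases for the function.
Second, decide what assertions are needed.
Third, write the test code.
Finally, explain how to run the test and report any failures.

Function:
{source_code}
"""

def build_cot_prompt(source_code: str) -> str:
    return COT_TEMPLATE.format(source_code=source_code)
```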
Step 5: Add Guardrails with Tool‑Use Constraints
Following the approach of Spotify’s internal tooling, restrict the agent’s available actions. For example, instead of giving the agent a generic shell, provide a set of predefined “tools”:

- read_file(path)
- write_file(path, content)
- run_tests(test_suite_path)
- query_database(read_only_query)
Each tool has a description and validation layer. If the agent tries an action outside the set, the system returns an error. This technique, highlighted in the Spotify‑Anthropic talk, prevents the agent from “going rogue” while still allowing flexibility.
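A minimal sketch of such a registry; the workspace path and validation rule are illustrative, and a real system would still run everything inside the sandbox:

```python
# Sketch of a constrained tool registry. The workspace path and the path
# check are illustrative; a real system also sandboxes execution itself.
from pathlib import Path

WORKSPACE = Path("/workspace/code").resolve()

def read_file(path: str) -> str:
    """Read a file, but only inside the allowed workspace."""
    target = (WORKSPACE / path).resolve()
    if not target.is_relative_to(WORKSPACE):
        raise PermissionError(f"{path} is outside the allowed workspace")
    return target.read_text()

TOOLS = {"read_file": read_file}  # plus write_file, run_tests, ...

def dispatch(tool_name: str, **args):
    """Refuse anything outside the predefined tool set."""
    if tool_name not in TOOLS:
        raise ValueError(f"unknown tool: {tool_name}")  # the guardrail firing
    return TOOLS[tool_name](**args)
```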
Step 6: Test Incrementally in Isolation
Never let an agent loose on your main branch. Instead, create a fork or a feature branch for every agent session. Use a CI/CD pipeline that:
- Runs agent‑produced changes through existing tests.
- Checks for security vulnerabilities (e.g., secrets exposed, dangerous imports).
- Generates a diff summary for human review.
Spotify’s team mentioned they use “agent sandboxes”—ephemeral environments that mimic their production stack but with fake data. Once the agent passes all checks, a human merges the pull request.
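As one illustrative gate in such a pipeline, a small script can block the merge when an agent-produced diff appears to contain secrets; the patterns below are examples, not an exhaustive scanner:

```python
# Sketch of one CI gate: scan an agent-produced diff for likely secrets
# before a human reviews it. Patterns are illustrative, not exhaustive.
import re
import subprocess

SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),                 # AWS access key ID
    re.compile(r"(?i)(api[_-]?key|secret)\s*[:=]"),  # generic credential assignment
]

def diff_has_secrets(base_branch: str = "main") -> bool:
    diff = subprocess.run(
        ["git", "diff", base_branch],
        capture_output=True, text=True, check=True,
    ).stdout
    return any(p.search(diff) for p in SECRET_PATTERNS)

if __name__ == "__main__":
    if diff_has_secrets():
        raise SystemExit("Possible secret in agent diff; blocking merge.")
```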
Step 7: Monitor and Iterate
Treat your agent like a new team member. Log every prompt, response, and action. Analyze failure patterns:
- Are certain types of tasks always failing? (e.g., “refactoring async code” might need more context in the prompt.)
- Is the agent taking too many steps? (Add a step cap or a time limit.)
- Are the guardrails too restrictive or too loose? Adjust tool definitions.
Set up a dashboard (e.g., using ELK, Datadog, or simple SQL) to track success rate, average completion time, and user satisfaction. During the live conversation, Anthropic showed how they continuously fine‑tune their prompts based on real‑world agent logs.
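A minimal sketch of that logging, writing one JSON line per interaction so any of those tools can consume it (the field names are illustrative):

```python
# Sketch of structured agent logging: one JSON line per prompt/response
# pair, ready for ELK, Datadog, or plain SQL analysis.
import json
import time

def log_interaction(task: str, prompt: str, response: str, success: bool,
                    path: str = "agent_log.jsonl") -> None:
    record = {
        "timestamp": time.time(),
        "task": task,
        "prompt": prompt,
        "response": response,
        "success": success,
    }
    with open(path, "a") as f:
        f.write(json.dumps(record) + "\n")
```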
Tips for Success
- Start with a single, boring task. Do not aim for a “general coding assistant” on day one – pick one pain point (e.g., generating boilerplate) and master it.
- Never give an agent direct production access. Always use a human‑in‑the‑loop for any write operation that affects live systems.
- Invest in good system prompts. Spend as much time crafting the agent’s persona and constraints as you would writing code. The prompt is the product.
- Log everything. If you cannot explain why an agent did something, you cannot trust it. Combine logs with version control to replay interactions.
- Embrace imperfect outcomes. Early agent versions might fail 60% of the time – that is normal. Use failures to improve the agent’s context and tool set.
- Collaborate across teams. Share your agent’s failures and successes with colleagues. Spotify and Anthropic’s joint session revealed that cross‑pollination of ideas accelerates learning.
- Consider cost. Each API call adds up. Cache common prompts and responses, and use smaller, cheaper models for straightforward tasks.
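On that last point, here is a minimal sketch of a prompt cache keyed on a hash of the prompt; a real deployment might use Redis or your provider’s built-in prompt caching instead:

```python
# Sketch of a simple in-memory prompt cache to cut API cost on repeated
# tasks. A real deployment might use Redis or provider-side caching.
import hashlib

_cache: dict[str, str] = {}

def cached_complete(prompt: str, call_llm) -> str:
    """Only call the LLM (via the supplied callback) for novel prompts."""
    key = hashlib.sha256(prompt.encode()).hexdigest()
    if key not in _cache:
        _cache[key] = call_llm(prompt)  # only pay for prompts not seen before
    return _cache[key]
```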
By following these steps, you can move from “What is an AI agent?” to “My agent just shipped a feature” – safely and responsibly.