Stop the Slop: An Internal Guide for Devs

AI Development Best Practices
Brendan O'Neill

A guide for developers to use AI tools in a way that improves quality, speed, and maintainability without introducing unnecessary risk or complexity.

✍️ Writing Process: This post was fully planned in advance using structured notes and a detailed outline. AI was used as a writing assistant to turn that plan into a clearer, more readable document. The ideas, structure, and direction were entirely human-defined.

Introduction

AI coding isn’t going anywhere. Developers are using it every day, and it can save a tremendous amount of time when used well. It can also waste far more time when used poorly. We’re already seeing the rise of AI code slop—messy, unclear, over-generated output that slows teams down and creates long-term maintenance risks.

Agentic coding (letting AI run multi-step workflows or generate entire feature sets) might look impressive in demos, but it is not a good idea for real-world development or team collaboration. It creates unpredictable output, reviewability issues, and a codebase no one fully understands.

To address this, I created an internal guide for my small dev team (7 engineers) that defines how we use AI intentionally, safely, and effectively.

Below is the full policy we use internally—a set of best practices and requirements designed to keep AI helpful, not harmful, and to prevent AI code slop from creeping into our work.


Team AI Usage Policy

Internal Doc

Purpose & Overview

This policy outlines how our team should use AI code-generation tools in a way that improves quality, speed, and maintainability without introducing unnecessary risk or complexity.

While there is plenty of advice for individuals, there is far less guidance for teams working together in shared codebases. Used correctly, AI can meaningfully accelerate development—especially for tests, repetitive patterns, boilerplate structures, forms, modals, and other predictable components.

Our goal is to establish a consistent, team-wide approach that:

  • Brings structure and consistency to AI-assisted development
  • Supports and reinforces best-practice engineering standards
  • Reduces wasted time and effort
  • Improves reviewability, traceability, and developer understanding

What We Are Trying to Avoid

In short: AI slop.

AI slop is low-quality, overly complex, poorly reviewed, or poorly understood AI-generated code that creeps into a shared codebase. It builds up when developers allow the model to generate too much at once, prompt without clarity, or accept results they don’t fully understand. AI slop spreads quickly—and is much harder to unwind later.

More specifically, we want to avoid:

  • Large commits full of AI-generated code
  • “Agentic” workflows where the AI performs multiple steps or generates multiple files
  • Complex or tangled output that no one fully understands
  • Accept-now, fix-later loops caused by poor reviewability
  • The creeping accumulation of maintainability issues caused by AI slop

Why this matters:

  • Multi-file AI output is significantly harder to understand and verify
  • AI-generated complexity compounds quickly
  • Error rates multiply: if a system has a 95% success rate per step, the chance that all 20 steps succeed drops to roughly 36% (see article)
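
To see why that last bullet matters, assume each step succeeds independently with probability p; the chance that all n steps succeed is p^n, which shrinks quickly:

  0.95^10 ≈ 0.60
  0.95^20 ≈ 0.36
  0.95^50 ≈ 0.08

The 95% figure is only illustrative, but the compounding pattern holds for any per-step error rate.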

This policy exists to prevent AI slop before it starts.


Key Principles for AI-Assisted Development

These principles capture the workflow and mindset shifts introduced by AI. They focus on preparation, prompting discipline, clarity, and maintaining control of the development process.


1. Think Before You Generate

Take 10–15 minutes to plan your AI work before writing any prompts.

Create a simple Prompt Plan that outlines:

  • The commits you expect to make
  • The sequence of prompts you intend to use
  • The files you know you will need
  • Any reference files you should copy into the prompt for context

Then prepare your workspace:

  • Create the empty files you know you will need
  • Copy relevant reference files or snippets directly into the prompt box as context

Clear planning and setup lead to clear and accurate generations.
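
For illustration, a Prompt Plan for a small, self-contained change might look like the sketch below (the feature, files, and component names are made up):

  Feature: add a CSV export button to the reports page

    1. Prompt 1 → generate an ExportButton component (new file: ExportButton.tsx) → commit
    2. Prompt 2 → generate unit tests for ExportButton (new file: ExportButton.test.tsx) → commit
    3. Manual → wire the button into ReportsPage.tsx → commit

  Context to paste into the prompt: the existing DownloadButton component and the reports API helper.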


2. Commit Early, Commit Often (This Is Critical)

This is one of the most important rules in the entire policy.

If running git reset --hard makes you uncomfortable, you have gone too far without committing.
Frequent, incremental commits keep your work reversible and make AI-assisted experimentation safe.

Each commit should represent a clear, single, reviewable unit of work—ideally mapping to a single prompt or a single functional change.
No combined changes. No mixed responsibilities. No “AI dump” commits.

As part of the PR process, reviewers should explicitly check commit structure to ensure:

  • The developer committed small, isolated changes
  • Each commit has a clear purpose
  • Commits reflect logical steps from the Prompt Plan
  • No commit bundles multiple AI generations or unrelated edits

This commit discipline is what protects the team from runaway AI output and ensures that resetting hard (which Points 4 and 5 rely on) is always a safe, low-cost option.

Frequent, clean commits = safer prompting, easier reviews, less AI slop.
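
As a concrete sketch of this rhythm (the commands are standard git; the file path and commit message are placeholders):

  # Before prompting: start from a clean working tree
  git status

  # After each reviewed, accepted generation: commit it on its own
  git add src/components/ExportButton.tsx
  git commit -m "Add ExportButton component (single prompt, reviewed)"

  # If the next generation is not worth keeping, discard it cheaply
  git reset --hard

Because everything worth keeping is already committed, git reset --hard only ever throws away the one generation you have decided to reject.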


3. Generate One File at a Time

Keep generations small and easy to follow.

If you cannot clearly understand the output, reject it and break the task into smaller parts. Multi-file or multi-step generations create complexity that is hard to reason about and even harder to review.


4. Limit Overprompting Loops

Set a strict limit of two prompts.

This refers specifically to the pattern of:

  1. Writing a prompt
  2. Reviewing the output
  3. Rejecting it
  4. Prompting again with slightly different wording

If the second attempt does not meaningfully improve, stop immediately. Do not continue trying to “fix” the output with more prompting.

At that point, the correct move is to handle the next piece manually, not to prompt again. In some cases, you may need to reset the work (git reset --hard) and manually complete the first steps yourself—see Point 2 (“Commit Early, Commit Often”) for why this is expected.

This rule prevents unproductive prompting loops and keeps the developer in control.


5. Trust Your Instincts

Redo freely and step in with your own code whenever necessary.

If you cannot easily understand, explain, or extend the generated code, it should not be committed.

At that point, you should be fully prepared to reset the work (git reset --hard) and start the implementation yourself—this aligns directly with Point 2 (“Commit Early, Commit Often”), which ensures you are always safe to reset and take over.

Your instincts are a key quality signal: if the code feels wrong, overly complex, or “AI-shaped,” trust that feeling and take over manually.


6. Work in Private Mode

Always work in private or local mode when using AI tools.

This ensures that the codebase, context snippets, and internal information are not shared or logged outside the organization.


7. Stay In-Sync With the Team (Model and Settings Consistency)

All team members must explicitly use the same model (currently: Sonnet 4.5) and follow the same internally defined rules and settings.

These rules/settings should be centrally maintained and applied consistently across all AI tools:

  • In tools like Cursor, they should be uploaded and managed through the admin rules panel
  • In other providers, they should be applied via context files or equivalent configuration mechanisms

Do not use the “suggested” model option—providers may automatically switch models for token-saving or cost efficiency, causing inconsistent output and code drift.

Staying aligned across model and settings ensures predictable behavior, stable conventions, and consistent development patterns.


Standard Development Practices (See SDLC SOPs)

General engineering expectations—testing, peer review, code quality, secure handling of data, and human accountability—apply regardless of AI. These fundamentals are defined in detail in the organization’s SDLC SOPs.

For all non-AI-specific development protocols, developers should refer directly to those SOPs.
This AI policy adds the additional discipline required when AI becomes part of the workflow.


Summary

AI can accelerate development significantly, but only when used with discipline, preparation, and clear boundaries.

By creating a Prompt Plan, preparing context, limiting prompt loops, keeping work small, working in private mode, and staying aligned on the same model and settings, the team ensures that AI enhances clarity, speed, and maintainability—without introducing unnecessary complexity or AI slop.
