The essay explained why constraints make AI-assisted development faster. This guide explains how to build them for your team.
Before you begin
The examples in this guide are our constraints — the specific numbers and rules that work for our team and codebase. Your numbers will be different. The methodology is the takeaway, not the specifics. Whether your primitive limit is 50 lines or 75, the principle of having a specific, measurable limit is what matters.
The core insight: most coding standards are written as positive examples — "do it like this." Hard stops invert this. They define gates — "this cannot pass."
The Problem with Examples
Most coding standards look like this:

```tsx
function BookCard({ title, author, coverImage }: BookCardProps) {
  return (
    <div className="rounded-lg shadow-md p-4">
      <img src={coverImage} alt={title} />
      <h3>{title}</h3>
      <p>{author}</p>
    </div>
  );
}
```

This demonstrates what good looks like. But it doesn't prevent the AI from:
- ✗ Adding a `book` object prop instead of flat props
- ✗ Including loading and error states
- ✗ Adding hover state via useState
- ✗ Including onClick handlers for navigation
- ✗ Defining utility functions in the same file
The example shows the destination. It doesn't fence off the wrong paths.
The Inversion Method
Take each coding convention and invert it from "do this" to "you cannot do that."
Step 1: Identify What Goes Wrong
Look at AI-generated code that didn't meet your standards. What patterns keep appearing?
Step 2: Convert to Hard Limits
Each problem becomes a numbered constraint with a specific threshold.
"Keep components focused and small" becomes: maximum 50 lines for a primitive component. Over the limit? Split it.

"Don't pass too much data into components" becomes: maximum 8 props. Nine props? Component is doing too much.
On calibration: These specific numbers are starting points. Start here, then adjust based on what gets escalated. If 50 lines constantly blocks legitimate work, try 60. If 8 props never gets questioned, try 6. The goal isn't the rules themselves — it's the outcomes they produce.
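To make the inversion concrete, here is a minimal sketch of what a hard-stop checker could look like, assuming the limits above (50 lines per primitive, 8 props). All names and signatures here are illustrative, not from any real tool.

```typescript
interface HardStopResult {
  rule: string;
  violated: boolean;
  action: string; // what to do when the gate fails
}

// Gate 1: line count. Over the limit means split, not justify.
function checkLineLimit(source: string, maxLines = 50): HardStopResult {
  const lineCount = source.split("\n").length;
  return {
    rule: `maximum ${maxLines} lines`,
    violated: lineCount > maxLines,
    action: "split the component",
  };
}

// Gate 2: prop count. Nine props means the component does too much.
function checkPropLimit(propNames: string[], maxProps = 8): HardStopResult {
  return {
    rule: `maximum ${maxProps} props`,
    violated: propNames.length > maxProps,
    action: "split responsibilities across components",
  };
}
```

The point of encoding the gate is that compliance becomes a yes/no answer, not a judgment call.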
Step 3: Define "Not My Job"
The most powerful constraints define what the component is not responsible for.
```tsx
// ✗ REJECTED: BookCard owns loading and error states
<BookCard
  book={book}
  isLoading={loading}
  error={error}
/>
```

```tsx
// ✓ ACCEPTED: parent handles the states; BookCard gets flat props
{isLoading && <Skeleton />}
{error && <ErrorMessage />}
{book && <BookCard
  title={book.title}
  author={book.author}
/>}
```

Edge case: Sometimes a domain object shouldn't be destructured because the parent genuinely doesn't care about its internals. That's what escalation is for — challenge the rule, don't silently ignore it.
Step 4: Use Lookup Tables
For common decisions, create lookup tables that eliminate deliberation.
| You Tried | Where It Actually Goes |
|---|---|
| Loading/error/empty state | Parent component |
| onClick for navigation | Parent wraps in Link |
| formatPrice() utility | lib/formatters.ts |
| Hover visual state | Tailwind hover: classes |
| useState for hover | Delete. Use CSS. |
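The `formatPrice()` row, for example, means the utility lives in `lib/formatters.ts` and never inside a component file. A minimal sketch of what that module might contain (the signature and cents-based input are assumptions, not part of the table):

```typescript
// lib/formatters.ts — shared formatting utilities, kept out of components.
// Assumes prices are stored as integer cents.
export function formatPrice(cents: number, currency = "USD"): string {
  return new Intl.NumberFormat("en-US", {
    style: "currency",
    currency,
  }).format(cents / 100);
}
```

A component then receives the already-formatted string as a prop, or imports the formatter; either way, the decision of *where* formatting lives is no longer up for deliberation.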
The Constraint Format
Each hard stop should be:

- Specific. Not "keep it simple" but "maximum 50 lines for primitives."
- Measurable. There's a number or a yes/no test.
- Actionable. It's clear what to do when violated: split, move, or delete.
A template:

```md
## [Category]: [Specific Rule]

| Rule | Limit | If Violated |
|------|-------|-------------|
| [What's measured] | [Number] | [What to do] |

**STOP:** [One-line test to apply]

- REJECTED: [Code that violates]
- ACCEPTED: [Code that passes]
```

The Decision Tree
For complex domains, create a decision tree that eliminates deliberation:
```
Before adding ANYTHING to a component, ask:

Is it loading/error/empty state?
  → YES → Parent handles. Not my job.

Is it a callback for user action?
  → Can parent wrap me in Link/button?
    → YES → Parent handles. Not my job.
    → NO → Is it the ONE primary action?
      → YES → Okay, one callback allowed.
      → NO → Separate action component.

Is it data formatting?
  → YES → lib/formatters or parent formats first.

Is it visual state (hover, focus, pressed)?
  → YES → CSS handles. Not my job.

Still want to add it?
  → STOP. Ask first.
```
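The tree above can also be encoded directly, so the AI (or a lint step) applies it mechanically. This is a sketch under assumed names — the `Addition` shape and verdict strings are illustrative, not a real API:

```typescript
type Verdict =
  | "parent handles"
  | "css handles"
  | "lib/formatters"
  | "one callback allowed"
  | "separate action component"
  | "stop and ask";

interface Addition {
  kind: "state" | "callback" | "formatting" | "visual" | "other";
  isLoadingErrorEmpty?: boolean; // for kind: "state"
  parentCanWrap?: boolean;       // for kind: "callback"
  isPrimaryAction?: boolean;     // for kind: "callback"
}

// Walks the decision tree top to bottom; anything unmatched escalates.
function decide(addition: Addition): Verdict {
  if (addition.kind === "state" && addition.isLoadingErrorEmpty) {
    return "parent handles";
  }
  if (addition.kind === "callback") {
    if (addition.parentCanWrap) return "parent handles";
    return addition.isPrimaryAction
      ? "one callback allowed"
      : "separate action component";
  }
  if (addition.kind === "formatting") return "lib/formatters";
  if (addition.kind === "visual") return "css handles";
  return "stop and ask"; // still want to add it? STOP. Ask first.
}
```

Note the fall-through: anything the tree doesn't explicitly permit ends in "stop and ask," which is the escalation clause described next.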
The Escalation Clause
Every constraint system needs a pressure valve. Without one, people route around the rules silently.
The key insight: "No exceptions" doesn't mean rigid dogma. The escalation clause is the exception mechanism. The difference is that exceptions are explicit, documented, and decided — not silent workarounds that accumulate into inconsistency.
When the AI (or a developer) hits a constraint that seems wrong for the case at hand, the protocol is:

- STOP.
- Say: "This needs X which violates [rule]. Should I proceed or split differently?"
- Do NOT justify the violation yourself.
- Do NOT update the rules to accommodate the code.
If constraints get escalated constantly, they're too tight and need loosening. If they're never escalated at all, they may be too loose to bite. A healthy system sees occasional, genuine challenges.
A 52-line component that's cohesive is better than a 48-line component with awkward splits. The escalation process surfaces these cases so you can make a conscious choice rather than pretending the rule doesn't apply.
Getting Started
You don't need to constrain everything at once. Start with your biggest source of friction:
1. **Review recent AI-generated PRs.** What patterns kept appearing that you had to fix?
2. **Pick the top 3 issues.** The ones that cause the most rework.
3. **Write hard stops for those 3.** Specific thresholds, clear actions when violated.
4. **Test for a week.** Did they reduce the problem? Were they escalated constantly?
5. **Iterate.** Tighten constraints that are too loose. Loosen ones that block legitimate work.
The goal isn't perfection on day one. It's a system that improves over time.
Checklist: Is Your Constraint Hard Enough?
- **Specific threshold?** There's a number, not just "don't do too much."
- **Measurable?** You can check compliance without judgement calls.
- **Clear action?** When violated, it's obvious what to do.
- **No exceptions baked in?** "Unless X" clauses undermine the constraint. Exceptions go through escalation, not into the rule.
- **Applies to AI?** Written in terms the model can understand.
- **Escalation path?** There's a way to challenge it when genuinely needed.
If any answer is no, sharpen the constraint.
The takeaway
Copy our examples if they fit your context. But the real value is the thinking: invert from "do this" to "cannot pass," make limits specific and measurable, define what's not your job, and build escalation into the system. Do that, and you'll develop constraints that actually work for your team — regardless of whether your line limit is 50 or 80.
This guide is part of Signal from Noise's work on AI-assisted development. We help teams build capability without agency overhead. Get in touch.