Code Quality Foundations for AI-assisted Codebases
There are a few basic techniques you can use to increase the quality of code produced by AI coding assistants. They're easy to apply to any codebase, so I highly encourage you to spend a little time setting them up.
The techniques fall into three broad groups:
- Rules: define coding quality and standards
- Reviews: feedback loops to verify produced output
- Blocks: hard checks to ensure rules and reviews aren’t bypassed
In this post I’ll share real examples from my open-source side project, living-architecture.
Rules
Rules define what code quality means in your codebases. I bundle standards, principles, and conventions all under this generic umbrella.
Lint rules
Lint rules are one of the most effective techniques for baking in code quality: they’re easy to enforce, many rules already exist, and it’s easy to create your own.
My lint config contains rules like the following…
Type safety
I’ve banned `any` types and `as` type assertions. Claude Code loves to reach for these and can’t be trusted with them. I never use them myself anyway; there’s always a better alternative.
// No any types
'@typescript-eslint/no-explicit-any': 'error',
'@typescript-eslint/no-unsafe-assignment': 'error',
'@typescript-eslint/no-unsafe-member-access': 'error',
'@typescript-eslint/no-unsafe-call': 'error',
'@typescript-eslint/no-unsafe-return': 'error',
// No type assertions - fix the types instead
'@typescript-eslint/consistent-type-assertions': [
  'error',
  { assertionStyle: 'never' },
],
// No non-null assertions - handle errors properly
'@typescript-eslint/no-non-null-assertion': 'error',
Code complexity
I’ve set the maximum complexity per function to 12, the maximum level of indentation to 3, and the maximum file size to 400 lines.
When Claude hits these limits, it’s forced to think about how to make the code more modular, and it usually does a good job.
Without these limits Claude will absolutely create highly nested code and files with 1k+ lines.
// Complexity limits
'max-lines': [
  'error',
  { max: 400, skipBlankLines: true, skipComments: true },
],
'max-depth': ['error', 3],
complexity: ['error', 12],
Naming
I observed that Claude would use a lot of generic names like `helper` and `utils`. It seems lazy and just defaults to these generic names, even when finding more accurate names isn’t hard.
So I use lint rules to ban the worst offenders. And it works. Claude hits the lint rule and has to think of a better name, and it always finds something more appropriate.
// Ban generic folder imports (not lib - that's NX convention)
'no-restricted-imports': [
  'error',
  {
    patterns: [
      {
        group: ['*/utils/*', '*/utils'],
        message: 'No utils folders. Use domain-specific names.',
      },
      {
        group: ['*/helpers/*', '*/helpers'],
        message: 'No helpers folders. Use domain-specific names.',
      },
      {
        group: ['*/common/*', '*/common'],
        message: 'No common folders. Use domain-specific names.',
      },
      {
        group: ['*/shared/*', '*/shared'],
        message: 'No shared folders. Use domain-specific names.',
      },
      {
        group: ['*/core/*', '*/core'],
        message: 'No core folders. Use domain-specific names.',
      },
      {
        group: ['*/src/lib/*', '*/src/lib', './lib/*', './lib', '../lib/*', '../lib'],
        message: 'No lib folders in projects. Use domain-specific names.',
      },
    ],
  },
],
Test coverage
In my vitest configs I set test coverage to 100%. Sometimes you might need more granular rules, but I recommend starting with 100% for everything and iterating from there.
coverage: {
  reportsDirectory: './test-output/vitest/coverage',
  provider: 'v8' as const,
  exclude: ['**/*test-fixtures.ts'],
  thresholds: {
    lines: 100,
    statements: 100,
    functions: 100,
    branches: 100,
  },
},
Claude will often miss edge cases, even with detailed planning. 100% coverage requirements help to catch them. They don’t guarantee perfect tests, but it’s better to have these thresholds than not to have them.
Standards and conventions
For other coding standards, principles and conventions I have a /docs/conventions folder:

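Based on the files referenced from CLAUDE.md and the review prompts below, the folder looks roughly like this (an illustrative listing, not the exact contents):
docs/conventions/
├── software-design.md
├── standard-patterns.md
├── anti-patterns.md
├── codebase-structure.md
└── testing.md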
Then these are referenced in the main CLAUDE.md so that Claude knows when to read them.
## Code Conventions
When writing, editing, refactoring, or reviewing code:
- always follow `docs/conventions/software-design.md`
- look for standard implementation patterns defined in `docs/conventions/standard-patterns.md`
- avoid `@docs/conventions/anti-patterns.md`
The automatic code review agent enforces these conventions (see `./claude/automatic-code-review/rules.md`)
Code quality is of highest importance. Rushing or taking shortcuts is never acceptable.
## Testing
Follow `docs/conventions/testing.md`.
100% test coverage is mandatory and enforced.
Reviews
For anything that isn’t enforced by a tool (the way lint rules and test coverage thresholds are), I recommend setting up reviews where a separate, specialist agent reviews the work that has been done and provides feedback to the main agent.
In living-architecture I have fully automated reviews by a second agent that take place before the human in the loop is asked to review the work.
Automated code review
In a previous post I mentioned the automatic code review technique I set up with Claude Code hooks. In living-architecture I have found this to be extremely useful.
The automatic code review kicks in every time Claude finishes a piece of work. In living-architecture I set it up to reinforce the same conventions that are in CLAUDE.md.
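If you want to recreate this, the trigger can be a Stop hook in .claude/settings.json, which fires whenever Claude finishes responding. This is only a sketch; run-review.sh is a hypothetical name for whatever script drives the review:
"Stop": [
  {
    "hooks": [
      {
        "type": "command",
        "command": "$CLAUDE_PROJECT_DIR/.claude/automatic-code-review/run-review.sh"
      }
    ]
  }
]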
Architecture, modularity check
Check all production code files (not test files) against the following conventions:
Read @/docs/architecture/overview.md
Read @/docs/conventions/codebase-structure.md
Ensure that all code is in the correct place and aligns with boundaries and layering requirements. Look at each line of code and ask "What is the purpose of this code? Is it related to other code? Is it highly cohesive? Should it really be here or would it fit better somewhere else?"
Coding Standards
Check all production code files (not test files) against the following conventions:
Read @/docs/conventions/software-design.md
Read @/docs/conventions/standard-patterns.md
Claude will not always follow standards and guidelines, no matter how great your prompts and CLAUDE.md file are. But with a dedicated agent focused on review, fewer things slip through the net and the chance of the standards being applied increases substantially.
Whenever you see bad code, add the convention or pattern to one of the files and next time the reviewer will automatically pick it up.
Functionality review
Similar to the auto code review, I set up an auto task check:
- /docs/workflow/task-workflow.md tells Claude to run the task-check command when it’s finished a piece of work
- CLAUDE.md references /docs/workflow/task-workflow.md
- task-check compares the work done to the requirements
This is where defining requirements is crucial. My approach is to write PRDs and then break them down into tasks stored in Task Master.
So when I start a new session I say “next task”, Claude picks the next ticket, marks it as in progress, and starts working on it. Then the reviewer agent knows what to check.
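For context, a custom command in Claude Code is just a markdown prompt under .claude/commands/. A minimal, hypothetical version of task-check (.claude/commands/task-check.md) might look like the following; the real one will differ:
Compare the work completed in this session against the current in-progress task in Task Master.
- Read the task description and its acceptance criteria.
- For each requirement, state whether the implementation satisfies it.
- Flag missing edge cases, tests, and documentation.
- If anything is incomplete, keep working until it is, then re-run this check.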
Blocks
Whatever rules and guidelines you try to put in place, your AI assistant is going to try to find ways around them. So here are some simple techniques you can use to prevent that.
git hooks
I highly recommend using git hooks. I run a full lint, typecheck, and test verification for the whole monorepo on each commit.
NX has an effective caching system which means the performance of this is quite reasonable for the high level of confidence it provides.
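As a sketch of what the hook can look like, here it is using Husky (the hook manager and the target names are assumptions; use whatever your repo already has):
#!/bin/sh
# .husky/pre-commit — verify the whole monorepo before every commit.
# Nx's cache restores unchanged projects instead of re-running them, so this stays fast.
npx nx run-many -t lint typecheck test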
Prevent dangerous operations and file modification
All of the rules you set up can be bypassed by AI. For example, when the git hooks fail, Claude will always try to use --no-verify to bypass the checks.
You can use Claude hooks to prevent these dangerous commands:
"PreToolUse": [
{
"matcher": "Bash",
"hooks": [
{
"type": "command",
"command": "$CLAUDE_PROJECT_DIR/.claude/hooks/block-dangerous-commands.sh"
}
]
}
]
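The script itself doesn’t need to be sophisticated. Here is a minimal sketch (not the project’s actual script): Claude Code passes the pending tool call as JSON on stdin, and exiting with code 2 blocks the command and feeds the message back to Claude.
#!/bin/bash
# .claude/hooks/block-dangerous-commands.sh (illustrative sketch)
# The hook receives the pending tool call as JSON on stdin.
command=$(jq -r '.tool_input.command // ""')

# Refuse attempts to skip verification or rewrite history.
if echo "$command" | grep -qE -- '--no-verify|--force|push +-f'; then
  echo "Blocked: bypassing checks or force-pushing is not allowed." >&2
  exit 2  # Exit code 2 blocks the tool call and shows this message to Claude.
fi

exit 0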

I also recommend setting up file modification rules so that AI cannot modify your lint or test coverage configuration files.
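One way to do this is another entry in the PreToolUse array that matches the Edit and Write tools. A sketch, assuming a hypothetical block-config-edits.sh that reads tool_input.file_path from stdin (the same way as the script above) and exits 2 for protected paths like eslint.config.* or vitest.config.*:
{
  "matcher": "Edit|Write",
  "hooks": [
    {
      "type": "command",
      "command": "$CLAUDE_PROJECT_DIR/.claude/hooks/block-config-edits.sh"
    }
  ]
}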