How We Review Code at AALA

Every pull request at AALA goes through the same process. Same checks, same severity levels, same language rules. It does not matter who wrote the code or which project it belongs to. The review process is the process.

We packaged that process into a skill that any AI coding agent can use. Not a product. Not a SaaS platform. Just the review rules we follow internally, published as an open-source skill on GitHub.

What Gets Checked

Every file goes through 11 checks, in order.

Check A: Naming and Readability. Variable names must be descriptive. Function names follow verb + noun and must match what the function actually does. If getUser() also deletes inactive records as a side effect, that is a blocking finding. The name is a contract.
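To illustrate the "name is a contract" rule, here is a minimal sketch of the getUser() example above in Python. The function names and the in-memory users dict are hypothetical, invented purely for illustration:

```python
# Illustration of Check A: the function name is a contract.
# A reviewer would flag get_user_with_side_effect() as BLOCKING,
# because the name promises a read but the body mutates state.

users = {1: {"name": "Ada", "active": True},
         2: {"name": "Bob", "active": False}}

def get_user_with_side_effect(user_id):
    # BLOCKING: silently deletes inactive records during a "get"
    for uid in [u for u, rec in users.items() if not rec["active"]]:
        del users[uid]
    return users.get(user_id)

def get_user(user_id):
    # Passes Check A: does exactly what the name says, nothing more
    return users.get(user_id)

def purge_inactive_users():
    # The side effect, named honestly as its own verb + noun function
    inactive = [u for u, rec in users.items() if not rec["active"]]
    for uid in inactive:
        del users[uid]
    return len(inactive)
```

Splitting the side effect into purge_inactive_users() also makes the deletion visible at every call site, which is the point of the contract.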

Check B: Single Responsibility. One function, one job. Anything over 40 lines gets read carefully. Anything over 400 lines in a non-config file gets flagged with an explanation.

Check C: DRY. Two identical blocks is one too many. Repeated hardcoded values, copy-pasted logic with minor variations, same validation in two places. All flagged.
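A sketch of the "same validation in two places" case, with the duplicate check extracted into one helper. The names (is_valid_email, register, invite) and the regex are illustrative assumptions, not rules from the skill:

```python
import re

EMAIL_RE = re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")

def is_valid_email(value):
    # Single source of truth: both call sites below share this check
    # instead of each repeating the regex inline (an IMPORTANT finding).
    return bool(EMAIL_RE.match(value))

def register(email):
    if not is_valid_email(email):
        raise ValueError("invalid email")
    return {"registered": email}

def invite(email):
    if not is_valid_email(email):
        raise ValueError("invalid email")
    return {"invited": email}
```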

Check D: Separation of Concerns. Route handlers parse input, call a service, return a response. Nothing else. Database queries stay out of controllers. Business logic stays out of templates.
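The layering in Check D can be sketched framework-free. Everything here is hypothetical (UserRepository, get_user_profile, the dict-shaped responses); it only shows where each concern lives:

```python
class UserRepository:
    """Data access layer: the only place that touches storage."""
    def __init__(self, rows):
        self._rows = rows  # stands in for a real database

    def find_by_id(self, user_id):
        return self._rows.get(user_id)

def get_user_profile(repo, user_id):
    """Service layer: business logic, no parsing, no HTTP."""
    user = repo.find_by_id(user_id)
    if user is None:
        return None
    return {"id": user_id, "display_name": user["name"].title()}

def handle_get_user(repo, raw_params):
    """Route handler: parse input, call the service, shape the response."""
    try:
        user_id = int(raw_params["id"])
    except (KeyError, ValueError):
        return {"status": 400, "body": {"error": "invalid id"}}
    profile = get_user_profile(repo, user_id)
    if profile is None:
        return {"status": 404, "body": {"error": "not found"}}
    return {"status": 200, "body": profile}
```

A query inside handle_get_user, or response shaping inside UserRepository, is exactly what this check flags.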

Check E: Security. Input validation, injection prevention, authentication, data protection, dependency pinning, environment files. Every item checked against every file. No skipping.
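The injection-prevention item is the classic case. A minimal sketch, using SQLite purely as a stand-in database; the table and function names are invented for the example:

```python
import sqlite3

def find_user_unsafe(conn, name):
    # BLOCKING: string interpolation puts user input into the SQL text,
    # so a crafted name can rewrite the query (SQL injection).
    return conn.execute(
        f"SELECT id, name FROM users WHERE name = '{name}'").fetchall()

def find_user_safe(conn, name):
    # Passes Check E: parameterized query; the driver handles escaping.
    return conn.execute(
        "SELECT id, name FROM users WHERE name = ?", (name,)).fetchall()
```

With the payload `x' OR '1'='1`, the unsafe version returns every row; the safe version returns none.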

Check F: Language-Specific. Each language has its own guide with idioms, patterns, and rules specific to that ecosystem. A Python review applies Python rules. A TypeScript review applies TypeScript rules. No cross-contamination.

Check G: Docker and Infrastructure. No latest tags. No root containers. Resource limits defined. No secrets in compose files.

Check H: Error Handling. No silent catches. Specific exceptions where possible. Async rejections handled.
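A silent catch next to its fix, sketched with JSON parsing. The function names and fallback behavior are illustrative assumptions:

```python
import json
import logging

logger = logging.getLogger(__name__)

def parse_config_silently(text):
    # Anti-pattern this check flags: the bare except swallows every
    # error, including bugs in this very function (a silent catch).
    try:
        return json.loads(text)
    except Exception:
        return {}

def parse_config(text, default=None):
    # Passes Check H: catches the specific exception, logs it, and
    # makes the fallback explicit at the call site.
    try:
        return json.loads(text)
    except json.JSONDecodeError as exc:
        logger.warning("config is not valid JSON: %s", exc)
        return {} if default is None else default
```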

Check I: Performance. No N+1 queries. Resources released after use. Infinite loops must have clear exit conditions.
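The N+1 shape can be shown with an in-memory stand-in for a database, where a counter tracks round trips. All names and the ORDERS data are invented for the sketch:

```python
# In-memory stand-in for a database, to show the N+1 shape.
ORDERS = [
    {"user_id": 1, "total": 10},
    {"user_id": 1, "total": 5},
    {"user_id": 2, "total": 7},
]
QUERY_COUNT = {"n": 0}

def query_orders_for_user(user_id):
    QUERY_COUNT["n"] += 1  # each call stands in for one round trip
    return [o for o in ORDERS if o["user_id"] == user_id]

def query_orders_for_users(user_ids):
    QUERY_COUNT["n"] += 1  # one batched query for the whole set
    wanted = set(user_ids)
    return [o for o in ORDERS if o["user_id"] in wanted]

def totals_n_plus_1(user_ids):
    # Flagged: a query inside a loop means N round trips for N users.
    return {uid: sum(o["total"] for o in query_orders_for_user(uid))
            for uid in user_ids}

def totals_batched(user_ids):
    # Preferred: one query, then group in memory.
    totals = {uid: 0 for uid in user_ids}
    for order in query_orders_for_users(user_ids):
        totals[order["user_id"]] += order["total"]
    return totals
```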

Check J: Algorithmic Complexity. Nested loops over the same dataset when a Map would do. List lookups inside loops. Database queries inside loops. Sorting the same collection twice.
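The "nested loops where a Map would do" pattern, sketched in Python with a dict playing the Map's role. The data shapes and function names are illustrative:

```python
def match_nested(users, orders):
    # Flagged by Check J: O(len(users) * len(orders)) nested scan.
    matched = []
    for order in orders:
        for user in users:
            if user["id"] == order["user_id"]:
                matched.append((user["name"], order["total"]))
    return matched

def match_indexed(users, orders):
    # Preferred: build the index once, then O(1) lookups per order.
    by_id = {user["id"]: user for user in users}
    return [(by_id[o["user_id"]]["name"], o["total"])
            for o in orders if o["user_id"] in by_id]
```

Both return the same result; only the second stays fast as the lists grow.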

Check K: Test Coverage Awareness. Source files with business logic or security-sensitive code that have no corresponding test file get flagged. The check verifies test existence, not test quality.
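An existence check of this kind might map each source file to candidate test paths. The conventions below (a tests/ sibling folder and a _test suffix) are assumptions for the sketch, not the skill's actual lookup rules:

```python
from pathlib import PurePosixPath

def expected_test_paths(source_path):
    # Hypothetical sketch: candidate test file locations for a source
    # file. Check K would flag the file if none of these exist.
    p = PurePosixPath(source_path)
    return [
        str(p.parent / "tests" / f"test_{p.name}"),
        str(p.with_name(f"{p.stem}_test{p.suffix}")),
    ]
```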

Languages and Frameworks

The skill ships with base guides for 8 languages:

  • JavaScript
  • TypeScript
  • Go
  • Python
  • PHP
  • Rust
  • HTML
  • CSS/SCSS

And framework overlays for 7 frameworks:

  • Express
  • Ember
  • FastAPI
  • Laravel
  • NestJS
  • Next.js
  • React/Vite

When a framework is detected, the overlay loads on top of the base language guide. Express rules apply alongside JavaScript or TypeScript rules. Laravel rules apply alongside PHP rules. The overlay adds framework-specific checks without replacing the base.

Four Review Modes

The skill supports four ways to scope a review.

Changeset is the default. It reviews staged changes, unstaged changes, and commits not yet pushed. This is what you run before pushing.

Full codebase audits every reviewable file in a folder or project. This is for new codebases, periodic audits, or onboarding onto an unfamiliar project.

Incoming reviews what a remote branch will bring after merge. Useful when someone else’s work is about to land in your branch.

PR / Branch compare diffs two branches. This is for pull request reviews where you want to see everything a feature branch introduces relative to main.

Severity Levels

Every finding gets one of four labels.

BLOCKING means a security vulnerability, silent failure, or broken architecture. Nothing merges until this is resolved.

IMPORTANT means a DRY violation, naming issue, single responsibility breach, or architecture rule broken. Fix it in this PR.

NIT is a style issue or minor inconsistency. Optional, but noted.

PRAISE is something done well. At least one per file when warranted. Good code deserves acknowledgment.

Finding Format

Every finding includes the file path with line numbers, the rule it violates, an objective statement of the issue, the current code with line numbers, and the fix. No opinions. No “you might want to consider.” The code either follows the rule or it does not.

How to Install

The skill works with 17 AI coding agents including Claude Code, Cursor, Codex, Windsurf, GitHub Copilot, Gemini CLI, and others.

npx skills add aalasolutions/aala-review

Or copy it directly:

cp -r skills/aala-review ~/.claude/skills/

The skill is MIT licensed. The repository is at github.com/aalasolutions/aala-review.

That Is the Process

No pitch. No upsell. This is the standard we hold ourselves to across every project, every language, every PR. We open-sourced it because a review process only improves when more people use it, break it, and make it better.

If your team has a similar process, you already know why this matters. If you do not have one yet, this is a solid starting point.


Published by the AALA AI Team. The aala-review skill is open-source under the MIT license.