WordPress isn’t banning AI. It’s doing something more useful: it’s making expectations explicit.
If you contribute to core, plugins, themes, docs, or any WordPress-adjacent open-source project, “I used an AI tool” is no longer an awkward sidebar — it’s part of responsible software shipping. The new WordPress AI Guidelines and the emergence of an “agent skills” ecosystem (including a wp-playground skill) are both signals of the same shift: AI is welcome if humans stay accountable for quality, licensing, security, and truth.
This post explains what’s changed, what the guidelines mean in practice, and a concrete workflow you can use for AI-assisted contributions without creating maintenance debt or “AI slop”.
What changed (and why it matters)

WordPress has published AI guidance for project work, and is also experimenting with practical “agent skills” aimed at speeding up common developer loops (like spinning up test environments via WordPress Playground). Together, they move AI usage from “quietly happens” to “structured and reviewable”.
The key idea: you can use AI to go faster, but you can’t outsource responsibility. Reviewers should be able to trust that you tested what you changed, understood it, and can maintain it.
The 5 principles (translated into contributor reality)
1) Responsibility: you own what you ship
If an AI tool produced the code, you are still the author in the only way that matters: you’re accountable for correctness, security, performance regressions, accessibility regressions, and long-term maintainability.
Practical translation:
- Don’t paste AI output straight into a PR without understanding it.
- Be ready to explain the change and defend the approach in review.
- Assume the reviewer will ask for tests, reproduction steps, and trade-offs.
2) Disclosure: say when AI meaningfully helped
Disclosure isn’t about shaming or virtue-signalling; it’s about giving reviewers context. If a change was heavily AI-assisted, reviewers know to scrutinise it for hallucinated APIs, subtle logic errors, and licensing issues.
What “good disclosure” looks like:
- Short note in the PR description: what tool(s) you used and what you used them for.
- What you verified: tests run, environments, edge cases.
- Anything you’re uncertain about (invite targeted review).
Example PR disclosure block (copy/paste):
AI assistance: Used an AI tool to propose an initial refactor and draft inline docs.
Verification: Ran unit tests (…), exercised the feature manually on WP x.y, and checked linting.
Notes: I rewrote the capability check by hand; please sanity-check the nonce flow and i18n strings.
3) Licensing: don’t accidentally import restricted code
In WordPress land, licensing isn’t optional — it’s foundational. AI tools can inadvertently reproduce copyrighted code or training-set fragments that are not compatible with your project’s licence.
Practical translation:
- Don’t ask AI to “copy the exact code from X plugin” or a proprietary repo.
- Avoid prompts that encourage verbatim reproduction of third-party code.
- If you need an algorithm or approach, ask for an explanation and implement it yourself.
- Prefer referencing public, compatible docs/specs and writing original code.
4) Non-code assets count too (docs, images, examples)
AI can hallucinate facts, invent commands, or produce misleading screenshots/examples. Documentation is part of your project’s contract with users — treat it as carefully as code.
Practical translation:
- Verify every claim: settings names, hooks, file paths, CLI output, versions.
- Prefer linking to authoritative docs rather than paraphrasing uncertain details.
- Never fabricate performance numbers, benchmarks, security guarantees, or “official endorsements”.
5) Quality over volume: “no AI slop” is a technical requirement
“AI slop” isn’t just cringe — it’s expensive. It creates review overhead, introduces regressions, and leaves behind code nobody truly understands.
Practical translation: smaller, reviewable PRs beat giant AI-generated dumps. Keep diffs tight. Add tests. Add comments where the intent isn’t obvious.
What are “Agent Skills” (and why contributors should care)?
The emerging “agent skills” concept is essentially: standardised, reusable task modules an AI agent can use to work in a project environment. For WordPress, this includes skills that can speed up the most painful part of contributing: setting up a reliable reproduction and testing loop.
A good example is a WordPress Playground skill. Playground can spin up a disposable, reproducible WordPress environment in seconds (in the browser via WebAssembly, or locally), which helps with:
- Reproducing bugs from issues with less setup friction
- Testing patches across different WP/PHP combinations (where supported)
- Sharing “works for me” setups that are actually portable
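The portable part comes from Blueprints: Playground environments can be described declaratively in JSON. The fragment below is a minimal sketch, not a canonical example — field names and supported steps come from the public Blueprint schema, so verify against the current Playground docs before relying on it.

```json
{
  "landingPage": "/wp-admin/",
  "preferredVersions": { "php": "8.2", "wp": "latest" },
  "steps": [
    { "step": "login", "username": "admin", "password": "password" }
  ]
}
```

A Blueprint like this can be attached to a bug report, so "works for me" becomes a setup anyone can boot identically.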
The deeper point: WordPress isn’t just telling you to be careful with AI — it’s building infrastructure that makes careful, verifiable work easier.
A practical AI-assisted workflow for WordPress contributions (safe + fast)
This is a workflow you can adopt today, regardless of which AI tool you use. The goal is to treat AI like a junior assistant: helpful at drafting, dangerous if unreviewed.
Step 0: Start with a minimal reproduction
- Clarify the bug/feature in one sentence.
- Write exact reproduction steps.
- Capture expected vs actual behaviour.
- Note versions (WP, PHP, browser, plugin/theme combos).
Step 1: Ask AI for an explanation, not a patch
Good prompt patterns:
- “Explain what this function does and where it could fail.”
- “List likely root causes given these symptoms.”
- “Propose 2–3 approaches and trade-offs; do not write final code yet.”
Bad prompt patterns:
- “Write the full fix” (without constraints, tests, or context)
- “Copy how plugin X does it” (licensing + plagiarism risk)
Step 2: Design the change (you decide)
Before you touch code, decide:
- What is the smallest acceptable fix?
- What should remain unchanged?
- What needs tests (and what kind)?
Step 3: Let AI draft, then rewrite for clarity
If you use AI to draft a patch, treat the output as a first draft. Rewrite variable names, restructure conditionals, and simplify. The reviewer should see clean, intentional code — not a “probable fix”.
Step 4: Add tests (or at least a clear manual test plan)
Tests are the ultimate anti-slop mechanism. If you can’t add tests, be explicit about what you verified manually and why tests aren’t practical for this change.
Step 5: Run a contributor-grade checklist
- ✅ Linting passes
- ✅ Unit/integration tests pass (where applicable)
- ✅ No new warnings/notices
- ✅ Security sanity: nonces/caps/escaping/sanitisation
- ✅ Performance sanity: avoid new queries, heavy loops, repeated work
- ✅ Accessibility/i18n checks where UI changes exist
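A checklist like this can be wired into a small local gate script. The sketch below is hedged: the tool choices (`phpcs` with the WordPress coding standard, `phpunit`) are common in WordPress projects but illustrative here, and the script skips any check whose tool isn't installed rather than failing it.

```shell
#!/bin/sh
# Hypothetical pre-PR gate for the checklist above. Tool names are
# illustrative, not mandated; missing tools are skipped, not failed.
run_check() {
  name="$1"; shift
  if command -v "$1" >/dev/null 2>&1; then
    echo "RUN  $name"
    "$@" || echo "FAIL $name"
  else
    echo "SKIP $name (tool not installed)"
  fi
}

run_check "lint"       phpcs --standard=WordPress .
run_check "unit tests" phpunit
echo "checks complete"
```

Skipping rather than failing keeps the script usable across machines; your CI should still run the full set unconditionally.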
Step 6: Disclose AI usage + verification in the PR
Use the disclosure block above. Reviewers don’t need your prompt transcript; they need confidence in your verification and ownership.
Common failure modes with AI-assisted PRs (and how to avoid them)

Hallucinated functions, hooks, and flags
AI will confidently invent hook names, filter signatures, or CLI flags. Avoid by checking references in the codebase and official docs, and by running the code path you changed.
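One cheap guard is "grep before you cite". The snippet below is a toy, self-contained version of the habit: the sample file stands in for a real WordPress checkout, and `the_content` is a real core filter; in actual work you'd grep `wp-includes/` or check the developer reference instead.

```shell
# Toy version of "grep before you cite": confirm a hook name exists in
# source before referencing it in code or docs.
mkdir -p /tmp/wp-hook-check
cat > /tmp/wp-hook-check/sample.php <<'EOF'
<?php
$content = apply_filters( 'the_content', $content );
EOF

check_hook() {
  if grep -rq "'$1'" /tmp/wp-hook-check; then
    echo "$1: found"
  else
    echo "$1: NOT found - possible hallucination"
  fi
}

check_hook "the_content"        # present in the sample source
check_hook "the_content_html"   # plausible-sounding, but absent
```

Thirty seconds of grepping is cheaper than a review round-trip over a hook that doesn't exist.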
Security regressions via “helpful refactors”
AI refactors can subtly move an escape/sanitise call, change a capability check, or reorder nonce validation. Always re-audit the security-critical parts manually.
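One way to make that re-audit systematic is to extract the security-sensitive lines from a patch before review. The diff file below is a stand-in for real `git diff` output; the function names in the pattern are real WordPress security APIs, but treat the list as a starting point and extend it for your project.

```shell
# Pull security-sensitive added lines out of a patch for manual re-audit.
# /tmp/example.diff stands in for `git diff` output.
cat > /tmp/example.diff <<'EOF'
+if ( ! current_user_can( 'manage_options' ) ) {
+    wp_die();
+}
+check_admin_referer( 'my_action' );
+echo esc_html( $message );
EOF

# Flag added lines that touch capabilities, nonces, escaping, or sanitisation.
grep -E '^\+.*(current_user_can|wp_verify_nonce|check_admin_referer|esc_|sanitize_)' /tmp/example.diff
```

If any of these lines moved or disappeared relative to the original code, that's exactly where to slow down.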
Over-engineering
AI tends to produce generic abstractions. In WordPress contribution work, simple is often better: fewer moving parts, easier review, fewer regressions.
Licensing ambiguity
If you can’t confidently explain the provenance of a chunk of code, don’t ship it. Re-implement from an idea, not a snippet.
FAQ
Do I have to disclose AI use?
If AI meaningfully contributed to the work (drafted code, wrote significant docs, etc.), disclosure is the responsible move. It helps reviewers calibrate risk and focus review time.
Can I use AI to write tests?
Yes — but tests need the same scrutiny as production code. Make sure your tests actually fail before the fix and pass after it, and that they assert the right behaviour.
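The fail-before/pass-after discipline is easy to demonstrate with a toy example; the `slugify` functions here are purely illustrative, not WordPress code.

```shell
# Toy red/green check: a good test fails against the buggy version and
# passes against the fixed one.
buggy_slugify() { echo "$1"; }                                 # forgets to normalise
fixed_slugify() { echo "$1" | tr 'A-Z' 'a-z' | tr ' ' '-'; }   # lowercase + dashes

test_slugify() { [ "$("$1" "Hello World")" = "hello-world" ]; }

test_slugify buggy_slugify && echo "buggy: pass (test is too weak!)" \
                           || echo "buggy: fail (good - test catches the bug)"
test_slugify fixed_slugify && echo "fixed: pass (good)" \
                           || echo "fixed: fail (fix is wrong)"
```

An AI-written test that passes against both versions is worse than no test: it documents behaviour without guarding it.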
Is WordPress “endorsing” AI tools?
No. The point is not endorsement; it’s guidance on responsible usage and exploration of workflows that improve reproducibility and quality.
Resources
- WordPress AI Guidelines (Handbook)
- WP News: New AI Agent Skill for WordPress
- WordPress/agent-skills (GitHub)
Final takeaway
Use AI to accelerate the boring parts — summarising, drafting, exploring alternatives — and keep humans in charge of the parts that matter: correctness, licensing, and verification. If your PR is small, tested, and transparent about what you did and what you checked, reviewers will thank you (and your future self will too).
Next step: if you’re also building reusable layouts for client work, consider standardising your front-end builds the same way you standardise your dev workflow — a section library or template kit reduces randomness and makes QA repeatable.