New Attack Surface, New Discipline
AI-native development introduces security risks that did not exist before. You are sending code to external APIs, relying on AI to generate security-critical code, and installing dependencies the AI suggests. Each of these creates a potential vulnerability. The AI-native engineer must develop new security habits on top of existing best practices.
The Top Security Risks
- 1. Secret Leakage: Accidentally pasting .env files, API keys, or credentials into AI prompts. Once sent, you cannot unsend them.
- 2. Vulnerable Code Generation: AI generates code with SQL injection, XSS, or insecure authentication patterns without warning you.
- 3. Supply Chain Risks: AI suggests installing packages that are malicious, abandoned, or have known vulnerabilities.
- 4. Over-Privileged AI Tools: Giving AI tools more access than they need (database write access, production credentials).
- 5. Prompt Injection: Malicious data in your codebase or inputs that manipulates AI behavior.
Never Share Secrets with AI
# NEVER do this:
> Here is my .env file, help me debug the API connection:
STRIPE_SECRET_KEY=sk_live_xxxxx
DATABASE_URL=postgresql://admin:realpassword@prod.db.com/app
# INSTEAD, do this:
> I am having trouble connecting to the Stripe API. The error is
"Invalid API key." My configuration is in app/lib/stripe.ts.
I have set STRIPE_SECRET_KEY in my .env file. Help me debug
why the key is not being read correctly.
# Claude Code reads your code files but you can configure it
# to never read .env files:
# In CLAUDE.md:
# "NEVER read .env, .env.local, .env.production, or any file
# containing credentials. If you need to debug environment
# variables, ask me to describe the issue instead."
Reviewing AI Code for Security
AI-generated code can contain any of the OWASP Top 10 vulnerabilities. Here are the most common ones to watch for:
| Vulnerability | How AI Introduces It | What to Check |
|---|---|---|
| SQL Injection | String concatenation in queries | Verify parameterized queries are used everywhere |
| XSS | Rendering user input without sanitization | Check that user data is escaped before rendering |
| Broken Auth | Missing auth checks on new endpoints | Verify every endpoint has proper authentication |
| Mass Assignment | Accepting all request body fields directly | Whitelist accepted fields, reject unknown ones |
| Insecure Defaults | CORS with *, permissive CSP headers | Review all security headers and CORS configuration |
| Hardcoded Secrets | AI puts placeholder secrets in code | Search for hardcoded passwords, keys, and tokens |
Supply Chain Security
# When AI suggests installing a package, verify it first:
# 1. Check the package on npm
> Before installing that package, check:
- How many weekly downloads does it have?
- When was it last updated?
- How many contributors?
- Any known vulnerabilities?
# 2. Use npm audit after AI installs packages
npm audit
# 3. Check for typosquatting (common AI mistake)
# AI might suggest "colurs" instead of "colors"
# or "lodas" instead of "lodash"
# Always verify the package name is correct
# 4. Pin exact versions for security-critical dependencies
# In package.json, use exact versions not ranges:
# "jsonwebtoken": "9.0.2" (not "^9.0.2")
Sandboxing AI Tool Execution
# Principle of least privilege for AI tools:
# 1. MCP database servers: read-only access by default
{
"mcpServers": {
"postgres": {
"env": {
"DATABASE_URL": "postgresql://readonly_user:pass@db/app"
}
}
}
}
# 2. File system access: limit to project directory only
{
"mcpServers": {
"filesystem": {
"args": ["/path/to/project"]
}
}
}
# 3. Never give AI tools production credentials
# Use staging/development environments for AI-assisted work
# Production deployments should go through CI/CD, not AI tools
Security Checklist for AI-Native Development
## AI-Native Security Checklist
### Before Using AI Tools
- [ ] .env files are in .gitignore
- [ ] CLAUDE.md includes "never read .env files" instruction
- [ ] MCP servers use least-privilege credentials
- [ ] AI tools do not have production access
### When Reviewing AI-Generated Code
- [ ] No hardcoded secrets or credentials
- [ ] SQL queries use parameterization (no string concatenation)
- [ ] User input is validated and sanitized
- [ ] Authentication/authorization checks on all endpoints
- [ ] CORS and CSP headers are restrictive, not permissive
- [ ] No eval() or dynamic code execution with user input
- [ ] Dependencies are verified (correct name, active, no vulns)
- [ ] Error messages do not leak internal details
### Ongoing Practices
- [ ] Run npm audit / pip audit regularly
- [ ] Review AI-suggested dependencies before installing
- [ ] Rotate any credentials that may have been exposed
- [ ] Use static analysis tools (ESLint security plugins, Semgrep)
- [ ] Penetration test AI-generated features before production
Privacy Considerations
When using cloud-hosted AI models, your code is sent to external servers. For most projects, this is fine — the providers have strong data handling policies. But for highly sensitive code (government, military, healthcare PHI, financial PII), consider: using local models via Ollama for sensitive operations, reviewing your AI provider's data retention policies, and confirming your organization's policy on sending code to AI services. When in doubt, ask your security team.
Summary
AI-native development introduces new security risks that require new habits. Never share secrets in prompts. Review AI-generated code for OWASP vulnerabilities. Verify AI-suggested dependencies. Use least-privilege access for AI tools. And remember: AI does not have security judgment — it generates code that works, not necessarily code that is secure. You are the security reviewer. Build the checklist habit and apply it to every AI-generated change.