
Shipping Fast, Staying Safe: What Every Vibe Coder Needs to Know About Security


If you're building software with AI tools like Lovable, Cursor, or Claude Code, you're shipping faster than ever. You know security is important, but it's probably still a bit of an afterthought.

We recently hosted Eno Thereska for a candid conversation about security for vibe coders. Eno spent eight years at Microsoft, five years as a principal engineer at AWS S3, and worked at startups like Confluent and Alcion.ai before founding trent.ai to rethink security for the AI age.

As Eno put it: "It's perfectly normal if you don't give a sh-t about security. Most developers and managers are excited, busy, and promoted around new features and product launches. I've yet to meet a single person who says, 'We're really excited about security - tell us more.'"

But ignoring security isn't an option. The wake-up call often comes in three forms: an embarrassing breach, a customer asking about compliance, or security blocking an enterprise deal. The question isn't if you'll need to care about security - it's when.

The Security Maturity Ladder: Where Are You?

One of the most useful frameworks Eno shared was thinking about security in stages. Not everyone needs enterprise-grade security on day one. Here's how to think about it:

Stage 0: Experimentation
You're experimenting, learning the tools, going from zero to one. Security? Don't worry about it yet.

Stage 1: Early Launch
You're shipping something real, maybe to your first design partners. The goal here is simple: avoid embarrassment. No leaked API keys. No public S3 buckets. No wide-open endpoints getting hammered by bots. Basic hygiene.

Stage 2: Growth
Real users are showing up. You're past friendly design partners and into paying customers you don't personally know. This is where things get interesting—and where many teams realize that security problems often start before the first line of code is written. A tool can scan your code all day, but if your system isn't designed for multi-tenancy, no scanner will catch that.

Stage 3: Selling to Enterprise
SOC2 compliance. Security questionnaires with 300+ line items. Auditors who will test whether you actually understand your system—not just whether you've run the right tools. This stage costs real money (expect $100K+ annually) and requires evidence, not vibes.

The jump between stages has a subtle trap: the more you rely on automated tools early on, the more disconnected you can become from security fundamentals. When an auditor shows up and asks about your key rotation policy or per-customer encryption, tools won't answer for you.

Common Security Issues

Eno walked through the common ways teams shoot themselves in the foot:

1. Leaking Secrets
API keys in repos. .env files committed to git. Tokens in logs. The fix: secret scanners and secret managers. This is table stakes.
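One pattern worth adopting from day one: never hardcode a secret, and fail fast when one is missing. Here's a minimal sketch (the `requireEnv` helper is a hypothetical name, not from any particular library):

```typescript
// Hypothetical helper: read a secret from the environment and fail fast
// if it is missing, instead of hardcoding it in the source.
function requireEnv(name: string): string {
  const value = process.env[name];
  if (!value) {
    throw new Error(`Missing required environment variable: ${name}`);
  }
  return value;
}

// Bad:  const apiKey = "sk-live-abc123";      // ends up in git history forever
// Good: const apiKey = requireEnv("API_KEY"); // stays out of the repo
```

Pair this with a `.gitignore` entry for `.env` and a secret manager in production.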

2. Over-Permissive Cloud Configs
Wildcard permissions like s3:*. Public buckets. Open security groups. The principle: least privilege, always.
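For contrast, here's what a least-privilege policy looks like instead of `s3:*`: the app can read and write objects in one specific bucket and nothing else. (The bucket name is illustrative.)

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:PutObject"],
      "Resource": "arn:aws:s3:::my-app-uploads/*"
    }
  ]
}
```

If the credentials leak, the blast radius is one bucket's objects, not your entire AWS account.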

3. Missing Authentication
Unprotected endpoints. JWT mishandling. Weak session logic. The rule: every endpoint needs a policy.
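"Every endpoint needs a policy" can be as simple as one authentication function that every handler runs first and that rejects by default. A framework-agnostic sketch (the `verifyToken` body is a placeholder; in practice a JWT library would check the signature and expiry):

```typescript
// Minimal sketch of a per-request auth check that rejects by default.
type Request = { headers: Record<string, string | undefined> };

function verifyToken(token: string): { userId: string } | null {
  // Placeholder: a real implementation would verify the JWT signature
  // and expiry with a library, not compare strings.
  return token === "valid-token" ? { userId: "user-1" } : null;
}

function authenticate(req: Request): { userId: string } {
  const header = req.headers["authorization"] ?? "";
  const token = header.startsWith("Bearer ") ? header.slice(7) : "";
  const claims = verifyToken(token);
  if (!claims) {
    throw new Error("401 Unauthorized"); // no valid token, no access
  }
  return claims;
}
```

The key property: forgetting to add auth to a new endpoint should be impossible (middleware applied globally), not just unlikely.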

4. No Rate Limiting
Without limits, you're one viral moment (or one bad actor) away from a massive bill or complete service degradation. Eno shared how he lost sleep over a $50,000 AWS alert—which turned out to be legitimate GPU reservations, but the scare was real.
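Even a naive rate limiter beats none. Here's a minimal in-memory fixed-window sketch; it only works within a single process, so production setups typically use Redis or an API gateway instead:

```typescript
// Minimal fixed-window rate limiter: at most `limit` requests per
// client per window. State lives in this process only.
class RateLimiter {
  private counts = new Map<string, { windowStart: number; count: number }>();

  constructor(private limit: number, private windowMs: number) {}

  allow(clientId: string, now: number = Date.now()): boolean {
    const entry = this.counts.get(clientId);
    if (!entry || now - entry.windowStart >= this.windowMs) {
      // First request in a fresh window: reset the counter.
      this.counts.set(clientId, { windowStart: now, count: 1 });
      return true;
    }
    entry.count += 1;
    return entry.count <= this.limit;
  }
}
```

Call `allow(clientIp)` at the top of each request handler and return HTTP 429 when it's false.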

5. Unpatched Dependencies
Known CVEs sitting in your node_modules. Supply chain attacks. The fix: automated dependency updates.
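If your repo lives on GitHub, Dependabot is one low-effort way to automate this: it opens pull requests when your dependencies have updates. A minimal config for an npm project looks like this:

```yaml
# .github/dependabot.yml
version: 2
updates:
  - package-ecosystem: "npm"
    directory: "/"
    schedule:
      interval: "weekly"
```

Renovate is a popular alternative with more configuration options; either beats manually remembering to run updates.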

6. AI Tool Uncertainty
This is the new frontier. When your code uses AI agents, MCP servers, or generates behavior dynamically, traditional scanning breaks down. "The data has become code," Eno explained. "You don't know anymore how much has gone out." 2025 and 2026 will be critical years as the industry figures this out.

The Tools: What Actually Works

Eno demoed three tools during the session:

Lovable's Built-in Security Scanner
The no-code platform now includes a "Review Security" button that scans the code it generates. It's powered by partnerships with companies like Semgrep and checks against the OWASP Top 10 vulnerabilities.

Semgrep
An open-source static analysis tool with community-contributed rules. Running semgrep scan --config auto on a repo will find potential issues. The catch? It's conservative. Eno ran it on the Apache Kafka repo—one of the best-maintained open source projects around—and got 200+ findings, most of them false positives. Your mileage will vary, but it's free and comprehensive.

Gitleaks
Focused purely on finding secrets. Also open source, also prone to false positives. On a well-maintained repo, it might flag 14 "leaks" that are all test fixtures or documentation examples. But for catching actual mistakes, it's worth running.

The pattern across all these tools: they're better than nothing, but they still require human judgment (or an AI agent) to interpret the results.

What Code Scanners Can't Catch

Code scanners can only find problems in code that exists. They can't tell you about design flaws, missing features, or architectural decisions that will cause problems later.

An oft-cited rule of thumb: 70% of errors are introduced before the first line of code is written.

To understand why, think about what code scanners actually do: they look for known patterns of bad code, like SQL injection vulnerabilities, hardcoded API keys, or insecure function calls. But a security vulnerability in something like your multi-tenancy setup (keeping Customer A's data separate from Customer B's data) isn't about bad code. It's about system design. Your code might be perfectly "correct": it stores data, retrieves data, does everything it's supposed to do. The scanner looks at it and says "looks fine." But if there's no user-level isolation built in, any customer can potentially see any other customer's data. That's not a code bug but an architecture problem that had to be thought through before the first line was written.
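To make the multi-tenancy point concrete, here's a sketch of tenant isolation enforced at the data-access layer (the in-memory store and names are illustrative): every lookup is scoped by the caller's tenant ID, so "get record 42" can never return another customer's record 42.

```typescript
// Illustrative in-memory store: two tenants happen to own a record
// with the same ID.
type Row = { id: string; tenantId: string; data: string };

const store: Row[] = [
  { id: "42", tenantId: "customer-a", data: "A's secret" },
  { id: "42", tenantId: "customer-b", data: "B's secret" },
];

// Bad:  findById("42")                     - no isolation, scanner sees no bug
// Good: every query requires the tenant ID from the authenticated session
function findById(id: string, tenantId: string): Row | undefined {
  return store.find(r => r.id === id && r.tenantId === tenantId);
}
```

A scanner would pass both versions; only the design decision to require `tenantId` everywhere provides the isolation.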

This is particularly relevant for vibe coders. When you're iterating through conversation with an AI, you're making design decisions implicitly. The AI might scaffold a perfectly reasonable-looking app that has fundamental security issues baked into its architecture.

What can you do? Before building, or even mid-project, try running your architecture past Claude or ChatGPT with security in mind. Ask questions like: "How would this system keep Customer A's data separate from Customer B's?" or "What happens if someone tries to access another user's data?" You can also prompt the AI to do a security review after each major feature; it won't catch everything, but it might surface design issues that a code scanner wouldn't.

Practical Takeaways

If you're building with AI coding tools today, here's a minimal security checklist for early stages:

[ ] No secrets in code (use environment variables and secret managers)
[ ] Secret scanning enabled in your workflow
[ ] HTTPS everywhere
[ ] Basic rate limiting on public endpoints
[ ] Some form of logging and monitoring
[ ] Access control on every endpoint
[ ] Least-privilege cloud configurations
[ ] Dependencies updated regularly

For those not afraid of the terminal, try running Semgrep and Gitleaks on your repos. For those who prefer to stay in Lovable, use its built-in security scanner. And for everyone: remember that these tools catch symptoms, not causes. Thinking about security at the design stage can prevent problems that scanners cannot find.

The Bottom Line

Security isn't something you need to obsess over from day one. But it's also not something you can ignore. The key is matching your security investment to your stage: basic hygiene when you're launching, design thinking when you're growing, and real compliance work when you're selling to enterprise.

The AI coding revolution has made building faster than ever—and created new security uncertainties we're still figuring out. The best approach? Stay aware, scale your investment appropriately, and don't be too proud to ask for help.

Thanks to Eno Thereska for sharing his expertise with our community.

This post is part of our ongoing series on building with AI tools. Join our community to catch the next Office Hours session.