Case Study

A Lovable-built SaaS app. 26 findings. Zero test coverage. A clear path to production.

Tier: Complete
Stack: React + Supabase + Edge Functions
Built With: Lovable
App Type: SaaS with AI integrations

Findings: 4 Critical · 7 High · 9 Medium · 6 Low

What we were looking at

A SaaS platform connecting users with opportunities using AI matching

The founder built this app using Lovable to move fast. The core flow: users create a profile, upload documents, and the app uses AI to match them with relevant opportunities. It integrates with third-party APIs for data scraping, an AI gateway for document parsing and matching, and a transactional email service.

The tech stack is React 18 + Vite, Supabase for auth/database/storage/edge functions, and multiple external APIs. The database has 7 tables with Row Level Security enabled on all of them. Six edge functions handle the backend logic.

On the surface, the app worked. Users could sign up, upload documents, search for opportunities, and track their activity. The question was whether it was safe to put real users on it.

Not everything was broken

The audit found real strengths alongside the problems

RLS on every table

Row Level Security was enabled on all 7 database tables. This one had policies in place from the start.

Auth on sensitive functions

The document parsing, notification, and data-processing edge functions all verified JWTs correctly. The pattern was there, just not applied everywhere.

Rate limiting infrastructure

A database-level rate limiting function and table already existed. It just wasn’t wired up to the endpoints that needed it most.
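Wiring that existing infrastructure into an endpoint amounts to one check before any paid work happens. As a minimal sketch (the class, its names, and the fixed-window policy are illustrative, not the audited app's actual database function):

```typescript
// Hypothetical sketch: an in-memory fixed-window limiter mirroring what a
// database-backed rate-limit check would enforce per user. All names and
// limits here are assumptions for illustration.
class RateLimiter {
  private counts = new Map<string, { windowStart: number; calls: number }>();

  constructor(private maxCalls: number, private windowMs: number) {}

  // Returns true if the call is allowed, false if the user is over the limit.
  allow(userId: string, now: number = Date.now()): boolean {
    const entry = this.counts.get(userId);
    if (!entry || now - entry.windowStart >= this.windowMs) {
      // First call, or the previous window expired: start a fresh window.
      this.counts.set(userId, { windowStart: now, calls: 1 });
      return true;
    }
    if (entry.calls >= this.maxCalls) return false; // over the limit
    entry.calls += 1;
    return true;
  }
}
```

In production the same check would live in the existing database function so limits survive across edge-function instances; the in-memory version only shows the call shape.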

Error handling on integrations

External API calls included timeout handling and handled 429 (rate limited) and 402 (payment required) responses. The defensive patterns were solid.
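The core of that defensive pattern is deciding, per status code, whether a failed call is retryable. A sketch of the idea (the status codes are from the report; the decision policy and names are illustrative):

```typescript
// Hypothetical sketch of the defensive pattern the audit credits: map a
// third-party API response to a retry/abort decision before touching the
// response body. The policy choices here are assumptions.
type ApiOutcome =
  | { kind: "ok" }
  | { kind: "retry_after"; ms: number } // 429: back off, then retry
  | { kind: "billing_blocked" }         // 402: stop, needs human action
  | { kind: "fail"; status: number };   // anything else: surface the error

function classifyResponse(status: number, retryAfterMs = 1000): ApiOutcome {
  if (status >= 200 && status < 300) return { kind: "ok" };
  if (status === 429) return { kind: "retry_after", ms: retryAfterMs };
  if (status === 402) return { kind: "billing_blocked" };
  return { kind: "fail", status };
}
```

Treating 402 as non-retryable matters: retrying a payment failure just burns more quota without ever succeeding.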

Key Takeaway
The foundation was good. The gaps were in consistency.

The security patterns already existed in some places. The roadmap focused on applying them everywhere. Targeted fixes, not a rebuild.

What would have caused real damage

4 findings that needed to be fixed before any public access
Critical: Data-scraping endpoint has no authentication

The edge function that searches for opportunities makes up to 3 third-party API calls per request. It accepts requests without verifying any auth token. Anyone with the project’s public anon key (embedded in the frontend JavaScript) can call it directly. A single script could exhaust the API budget in under an hour.

// supabase/functions/[redacted]/index.ts
// No JWT verification
Request → parse JSON → call third-party API
// No rate limiting, no usage caps, no auth check
Recommended Fix

Add JWT verification (the pattern already exists in other functions), add per-user rate limiting, and add a global daily call cap.
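The gate itself is small. A hedged sketch of the shape (the `verifyJwt` callback stands in for Supabase's server-side token check, e.g. `auth.getUser()`; `guardRequest` and `extractBearerToken` are illustrative names, not the codebase's):

```typescript
// Hypothetical sketch of the recommended fix: reject the request before
// any paid third-party call is made. extractBearerToken is the pure,
// testable parsing step; verifyJwt is a stand-in for the real auth check.
function extractBearerToken(authHeader: string | null): string | null {
  if (!authHeader) return null;
  const match = authHeader.match(/^Bearer\s+(\S+)$/i);
  return match ? match[1] : null;
}

async function guardRequest(
  authHeader: string | null,
  verifyJwt: (token: string) => Promise<string | null>, // userId or null
): Promise<{ ok: true; userId: string } | { ok: false; status: number }> {
  const token = extractBearerToken(authHeader);
  if (!token) return { ok: false, status: 401 };  // no credentials at all
  const userId = await verifyJwt(token);
  if (!userId) return { ok: false, status: 401 }; // invalid or expired token
  return { ok: true, userId };
}
```

With the user identified, the per-user rate limit and the global daily cap both have a key to count against.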

Critical: AI matching endpoint leaks user data

The AI matching function accepts a user ID from the request body without verification, then uses an elevated service-role client to read that user’s profile data. Any caller can read any user’s personal information by guessing or enumerating user IDs.

// supabase/functions/[redacted]/index.ts
const { userId } = await req.json() // trusts the caller
const client = createClient(url, SERVICE_ROLE_KEY) // bypasses RLS
// Anyone can read any user's skills, job title, location
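The fix is to derive the identity from the verified token and never from the request body. A minimal sketch of that rule (`resolveUserId` is an illustrative helper, not the codebase's):

```typescript
// Hypothetical sketch: the only trustworthy user ID is the one from the
// verified JWT. A caller-supplied body ID that disagrees with it is an
// attempted cross-user read and gets rejected outright.
function resolveUserId(
  jwtUserId: string,
  bodyUserId?: string,
): { ok: true; userId: string } | { ok: false; status: number } {
  if (bodyUserId !== undefined && bodyUserId !== jwtUserId) {
    return { ok: false, status: 403 }; // body ID mismatch: forbidden
  }
  return { ok: true, userId: jwtUserId };
}
```

Pair this with a client created from the caller's own token instead of the service-role key, so RLS enforces the same boundary at the database layer even if the function logic regresses.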

The audit also found: .env file committed to git (secrets pattern risk) and wildcard CORS on all edge functions (any website can call them from a visitor's browser). Both critical, both fixable in under 2 hours combined.
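The CORS half of that fix is mechanical: replace the wildcard with an explicit allowlist. A sketch, with a placeholder origin (the domain and helper name are illustrative):

```typescript
// Hypothetical sketch of the CORS fix: only echo back origins we trust.
// Untrusted origins get no CORS headers, so the browser blocks the
// cross-origin read. The allowlist entry is a placeholder domain.
const ALLOWED_ORIGINS = new Set(["https://app.example.com"]);

function corsHeaders(requestOrigin: string | null): Record<string, string> {
  if (requestOrigin && ALLOWED_ORIGINS.has(requestOrigin)) {
    return {
      "Access-Control-Allow-Origin": requestOrigin,
      "Vary": "Origin", // caches must not reuse the header across origins
    };
  }
  return {};
}
```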

Where it breaks under load

Cost exposure was the biggest risk
The Core Problem
Unauthenticated endpoints calling paid APIs with no usage caps.

The audit modeled API cost exposure from 10 users to 1,000+ users and found that without auth and rate limiting, a single script could drain the API budget in under an hour. The report included exact cost projections and the specific endpoints responsible.
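The arithmetic behind that model is simple, which is exactly why the exposure is dangerous: spend scales linearly with unauthenticated request volume. An illustrative sketch (all figures are assumptions, not the report's actual numbers):

```typescript
// Illustrative cost model, not the report's projections: each search
// triggers up to 3 paid API calls, so cost is requests × calls × price.
// The per-call price below is a placeholder.
function projectedDailyCost(
  requestsPerDay: number,
  callsPerRequest = 3,    // from the report: up to 3 calls per request
  costPerCallUsd = 0.01,  // assumed placeholder price
): number {
  return requestsPerDay * callsPerRequest * costPerCallUsd;
}
```

At these placeholder numbers, a script firing one request per second all day (86,400 requests) would cost over $2,500 daily, with no user on the other end.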

Beyond security

The audit covered UX, accessibility, performance, code quality, and test coverage
Critical: Zero test coverage. The entire test suite was a single placeholder.
High: Zero ARIA attributes on custom components. Screen readers couldn’t navigate the app.
Medium: Still showing the AI tool’s default metadata in page titles, descriptions, and share links.
Low: “Apply” button didn’t actually submit anything. Users could believe they’d applied when they hadn’t.

Plus 18 more findings across performance, code quality, architecture, and scalability. Each with severity, evidence, and a recommended fix with effort estimate.

Production readiness roadmap

26 findings, organized into 4 phases by priority
Phase 1
Before any public access
2-3 days

Close all critical security vulnerabilities. Auth, rate limiting, CORS, input validation.

Phase 2
First sprint post-launch
3-5 days

Performance, accessibility, and code quality fixes. Database optimization, error handling.

Phase 3
First month
5-8 days

Test coverage, bundle optimization, caching, and dependency cleanup.

Phase 4
Ongoing
Continuous

Monitoring, full accessibility compliance, end-to-end testing, vendor evaluation.

The Bottom Line
Phase 1 takes 2-3 days and closes all critical vulnerabilities.

The audit turned 26 unknown unknowns into a prioritized, actionable roadmap. Every finding includes severity, code evidence, a recommended fix, and an effort estimate.

Your app might have the same gaps.

This founder shipped a working app in weeks. The audit took days. The critical fixes took hours. Find out where you stand before your users do.

Get Your Audit