5 security checks a URL-only scanner will never catch
A URL-only scanner works at the edge: it reads your HTTP headers, parses your HTML, fetches your JavaScript bundles, and runs a handful of HTTP-level probes. That's the front door. It tells you whether the locks on the front door are from this decade.
It doesn't tell you if the back door is wide open.
We've been running full 15-provider scans on our own test projects (with real credentials) for weeks. Here are the five classes of findings that a URL scan can never see — but that cause most of the real incidents we read about.
1. Supabase RLS that's technically "enabled" but wide open
A table in Supabase can have Row Level Security "enabled" and still return every row to an unauthenticated caller. The most common pattern: an RLS policy with USING (true), which means "this row is visible under all conditions." The dashboard says RLS is on. The PostgREST endpoint still leaks everything.
A URL scan can't see this. It has no Supabase credentials. The best it could do is guess PostgREST endpoints and probe them, which is exactly what our free scanner avoids doing, because that approach produces false positives and hammers real databases.
A deep scan with your service_role key queries pg_policies directly and tells you exactly which tables have which policies. If USING (true) is in there, it's flagged as CRITICAL.
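The check itself is simple once you have the rows. Here's a minimal sketch of the flagging logic, assuming you've already queried pg_policies (for example, with the service_role key); the sample tables and policy names below are made up for illustration:

```python
# Sketch: flag RLS policies whose USING clause is unconditionally true.
# Assumes rows already fetched via something like:
#   SELECT schemaname, tablename, policyname, qual FROM pg_policies;
# The sample rows below are invented for illustration.

def permissive_policies(rows):
    """Return (table, policy) pairs whose USING expression is literally 'true'."""
    findings = []
    for schema, table, policy, qual in rows:
        # pg_policies.qual holds the USING expression as text; 'true' means
        # every row is visible to every role the policy applies to.
        if qual is not None and qual.strip().lower() == "true":
            findings.append((f"{schema}.{table}", policy))
    return findings

sample = [
    ("public", "profiles", "profiles_select", "true"),               # wide open
    ("public", "orders",   "orders_select",   "(auth.uid() = user_id)"),
]
print(permissive_policies(sample))  # flags public.profiles, not public.orders
```

Note that a policy like `(auth.uid() = user_id)` passes: the problem isn't RLS policies as such, it's the unconditional ones.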
2. Firestore rules with allow read, write: if true
The Firebase quickstart ships test-mode rules that amount to allow read, write: if true;, gated only by a 30-day expiry timestamp. When the deadline hits, a lot of deployed projects click "extend," then "extend" again, and never write proper rules.
From the outside, a Firebase app looks fine. The JS bundle loads, the UI renders, users sign in with Google. Behind the scenes, the database is a public Google Doc.
A URL scan sees the frontend. It has no way to query Firestore rules — those live in firestore.rules on the Firebase side. A deep scan with your project ID authenticates against the Firebase admin API and reads the actual deployed rules.
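Once you have the deployed rules source in hand, spotting the worst case is a string check. Here's a simplified sketch; the regex is illustrative, not the scanner's actual rule set, and the sample rules are the classic test-mode shape hardcoded for the example:

```python
import re

# Sketch: flag Firestore rules that grant unconditional access.
# Assumes the deployed rules source has already been fetched (the deep scan
# does this with your project ID). The regex is a simplified illustration.

OPEN_RULE = re.compile(r"allow\s+(read,\s*write|read|write)\s*:\s*if\s+true\s*;")

def is_wide_open(rules_source: str) -> bool:
    """True if any allow statement is gated only by 'if true'."""
    return bool(OPEN_RULE.search(rules_source))

test_mode = """
service cloud.firestore {
  match /databases/{database}/documents {
    match /{document=**} {
      allow read, write: if true;
    }
  }
}
"""
print(is_wide_open(test_mode))  # True
```

A real check also has to reason about the expiring-timestamp variant and overlapping match blocks, which is why reading the actual deployed rules beats pattern-matching a local file.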
3. Leaked AI provider keys that drain your balance while you sleep
API keys for OpenAI, Anthropic, or any other metered API are a cost-exposure finding, not just a secrets finding. The question isn't "is the key leaked" (a URL scanner can sometimes spot that in JS bundles). The question is: what happens in the first 8 hours after someone finds it?
Leaked keys get scraped within minutes. The attacker doesn't announce themselves — they run jobs 24/7 at the provider's maximum rate limit until your billing cap trips or your card gets declined. The bill you see the next morning is usually four to five figures.
Our cost-exposure provider checks for both (a) the presence of the key in client-side code and (b) whether your provider has per-key spend limits configured. Both matter. One without the other is still a disaster.
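The first half of that check, finding key shapes in client-side code, looks roughly like this. The patterns below are simplified illustrations, not our production rules; real key formats vary by provider and change over time:

```python
import re

# Sketch: grep fetched JS bundles for metered-API key shapes.
# Patterns are illustrative simplifications, not exhaustive or current.

KEY_PATTERNS = {
    "anthropic": re.compile(r"sk-ant-[A-Za-z0-9_-]{20,}"),
    "openai":    re.compile(r"sk-[A-Za-z0-9_-]{20,}"),
}

def find_keys(bundle_text: str):
    """Return which providers' key shapes appear in client-side code."""
    hits = {}
    for provider, pattern in KEY_PATTERNS.items():
        match = pattern.search(bundle_text)
        if match and provider not in hits:
            # Truncate the match: a scanner should never log full keys.
            hits[provider] = match.group()[:12] + "..."
    return hits

bundle = 'const client = new OpenAI({ apiKey: "sk-proj-abcd1234abcd1234abcd" });'
print(find_keys(bundle))
```

The second half, whether per-key spend limits exist, can't be read from the bundle at all; it requires authenticating to the provider's billing settings, which is exactly the part a URL scan can never do.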
4. MCP server misconfigurations that hand your filesystem to an LLM
Model Context Protocol servers are new and people are shipping them fast. The most common patterns we see:
- A filesystem MCP server exposing the user's home directory without path restrictions. An LLM with tool use can read .ssh/id_rsa if it feels like it.
- Hardcoded API keys in the server config because "it's just local."
- Tool descriptions that can be overridden by the context the LLM reads, enabling "tool poisoning" attacks.
Unless you point a scanner at the MCP server config, none of this is visible. A URL scan of the website that hosts the MCP server tells you nothing about the MCP server's tool surface.
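The first two patterns are cheap to catch once you're looking at the config. Here's a sketch of that lint; the config shape is hypothetical, loosely modeled on the common "command + args + env" entries in an MCP client's JSON config, and the credential prefixes are illustrative:

```python
import os

# Sketch: lint one MCP server entry for the two cheapest-to-catch issues.
# The config shape and the credential prefixes checked are assumptions
# made for illustration, not a complete rule set.

def lint_mcp_server(name: str, server: dict) -> list:
    findings = []
    home = os.path.expanduser("~")
    # 1. Filesystem roots that expose the whole home directory (or worse, /).
    for arg in server.get("args", []):
        if arg in ("/", "~", home):
            findings.append(f"{name}: filesystem root '{arg}' is unrestricted")
    # 2. Secrets hardcoded in env instead of referenced from a secret store.
    for key, value in server.get("env", {}).items():
        if isinstance(value, str) and value.startswith(("sk-", "AKIA")):
            findings.append(f"{name}: hardcoded credential in env var {key}")
    return findings

config = {
    "command": "npx",
    "args": ["-y", "@modelcontextprotocol/server-filesystem", "~"],
    "env": {"API_KEY": "sk-live-example"},
}
for finding in lint_mcp_server("filesystem", config):
    print(finding)
```

The third pattern, tool poisoning via overridable descriptions, needs the scanner to enumerate the server's actual tool surface, not just its config.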
5. Open infrastructure ports (Postgres, Redis, Elasticsearch) on the same host
A web app serves on 443. The database runs on 5432. In a properly configured deployment, 5432 is firewalled to the app VPC. In a lot of real deployments, it's not — because "I'll fix it later" and "later" didn't happen.
A URL scan looks at 443 and goes home. Our network provider probes 13 common service ports (Postgres, MySQL, MongoDB, Redis, Memcached, Elasticsearch, Kafka, RabbitMQ, and more) and reports any that are reachable from the open internet. This caught an open Redis on one of our own monitored sites that we didn't know about.
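This class of check is the easiest to reproduce yourself. A minimal sketch of a TCP reachability probe, with a subset of the port list for illustration (run it only against hosts you own):

```python
import socket

# Sketch: probe a handful of common service ports. A completed TCP connect
# from outside your network is the finding. Port list is a subset for
# illustration; only probe hosts you own.

SERVICE_PORTS = {
    5432: "Postgres",
    3306: "MySQL",
    6379: "Redis",
    9200: "Elasticsearch",
}

def probe_port(host: str, port: int, timeout: float = 2.0) -> bool:
    """True if a TCP connection to host:port completes within the timeout."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
        sock.settimeout(timeout)
        return sock.connect_ex((host, port)) == 0

def open_services(host: str) -> dict:
    """Map service name -> port for every probed port that accepts a connect."""
    return {name: port for port, name in SERVICE_PORTS.items()
            if probe_port(host, port)}
```

The important detail is where you probe from: run this inside your VPC and an open Redis looks normal; the finding only exists when the connect succeeds from the public internet.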
The shape of the gap
A URL scan audits the surface of your app — what a random visitor sees. That's valuable. If your CSP is broken or you're leaking a Stripe secret in HTML, the URL scan catches it.
A deep scan audits the machinery of your app — the database that backs it, the cloud account that runs it, the integrations that power it. Those are where most of the real incidents happen.
You don't need a deep scan every day. You need one before you ship, and after every major integration change. Run one here.
Don't ship until you're sekrd
Run a free scan to find the vulnerabilities your AI missed.
Scan Your App Free