
Why Blocklists Don't Work (And What Does)

Telovra Team
internet filtering · blocklists · content filtering

Every internet filter starts the same way: someone makes a list of websites to block. Schools, workplaces, parental control software, government censors — they all begin with a database of URLs or domains classified as harmful, distracting, or inappropriate.

For a while, it works. The most obvious bad sites are blocked. Users feel protected. Administrators check a box.

Then it stops working. Not all at once, but gradually, as the architecture of the internet evolves around the list. Here’s why.

Problem 1: Domain churn outpaces curation

New domains are registered at a staggering rate — tens of thousands every day. Some are legitimate businesses. Some are spam. Some host harmful content. No human team can evaluate them all.

Blocklist providers use automated crawlers and community reporting to flag new domains, but there’s always a gap between when a harmful site appears and when it’s added to the list. That gap can be days, weeks, or longer for niche content.

For a parent relying on a blocklist to protect their child, this means the filter is always running behind. The sites it knows about are blocked. The sites it doesn’t know about yet are wide open. (For how this affects real parental control tools, see our parental controls comparison.)

Problem 2: CDNs and shared infrastructure

The modern internet doesn’t work the way blocklists assume. A URL is not a reliable identifier for content because content delivery has been abstracted away from individual domains.

Consider how content actually gets served in 2026:

  • CDN-hosted content. A harmful page might be served from a Cloudflare or AWS CloudFront domain. Blocking that domain blocks millions of legitimate sites along with it.
  • Platform-hosted content. Harmful content on a site like Medium, Notion, or Google Sites shares its domain with millions of legitimate pages. You can’t block the domain without blocking everything on the platform.
  • Subdomains and paths. Blocking example.com blocks the entire site, including harmless pages. Blocking example.com/specific-path is more precise, but path structures change constantly and can be randomized.

Blocklists were designed for a web where one domain meant one website. That relationship broke down years ago.
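The mismatch is easy to see in a few lines. Here is a minimal Python sketch of a domain-level lookup (the blocklist entry and URLs are made up for illustration): because the rule fires on the hostname alone, one flagged page and every legitimate page sharing that platform's domain get the same verdict.

```python
from urllib.parse import urlparse

# Hypothetical domain blocklist. One flagged page on a shared platform
# forces a choice: block the whole domain, or miss the page entirely.
BLOCKLIST = {"medium.com"}

def is_blocked(url: str) -> bool:
    """Domain-level check: everything on a listed domain is blocked."""
    host = urlparse(url).hostname or ""
    # Match the listed domain and any subdomain of it.
    return any(host == d or host.endswith("." + d) for d in BLOCKLIST)

# The flagged page is blocked...
print(is_blocked("https://medium.com/@someone/flagged-post"))   # True
# ...and so is every legitimate page sharing the domain.
print(is_blocked("https://medium.com/@someone/cooking-tips"))   # True
```

The path is right there in the URL, but as the next section shows, a real filter often never sees it at all.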

Problem 3: Encryption limits inspection

Most web traffic is now encrypted via HTTPS (TLS). This is good for security and privacy, but it complicates content filtering.

With encrypted traffic, a blocklist-based filter can see the domain name (via the SNI field in the TLS handshake) but not the URL path or the page content. It can block twitter.com, but it can’t distinguish a Twitter thread about your industry from one about celebrity gossip. It can block youtube.com, but it can’t tell an educational tutorial from a music video.
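The visibility problem can be sketched like this (the request structure and hostnames are illustrative, not a real TLS implementation): the only field a domain filter can match against is the SNI hostname, because the path and content travel encrypted.

```python
# What each layer of an HTTPS request exposes to an on-path filter.
# Only the SNI hostname is readable; everything else is encrypted.
request = {
    "sni_hostname": "youtube.com",        # visible in the TLS ClientHello
    "path": "/watch?v=chemistry-lesson",  # encrypted: invisible to the filter
    "page_content": "...",                # encrypted: invisible to the filter
}

def sni_filter(req: dict, blocked_hosts: set) -> str:
    """All the filter can act on is the SNI hostname."""
    visible = req["sni_hostname"]
    return "block" if visible in blocked_hosts else "allow"

# The verdict is all-or-nothing per hostname: an educational tutorial
# and a music video on the same domain get the same treatment.
print(sni_filter(request, {"youtube.com"}))  # block
```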

Some enterprise tools use TLS interception (MITM proxies) to decrypt and inspect traffic, but this introduces its own problems: certificate warnings, privacy concerns, and significant computational overhead. It’s also increasingly difficult as browsers and operating systems tighten certificate pinning.

Problem 4: Content exists in context

The most fundamental problem with blocklists isn’t technical — it’s conceptual. Blocklists assume content is inherently “good” or “bad.” But most content is contextually appropriate or inappropriate.

A Wikipedia article about human reproduction is appropriate for a teenager doing biology homework. It might be inappropriate for a 6-year-old browsing unsupervised. A YouTube video about chemistry experiments is educational in one context and a safety risk in another.

Blocklists can’t encode context. They can only say “this site is blocked” or “this site is allowed.” They have no mechanism to ask “blocked for whom?” or “blocked when doing what?”

This creates the classic overblocking/underblocking trade-off:

  • Conservative blocklists (block aggressively) catch harmful content but also block legitimate research, educational resources, and useful tools.
  • Permissive blocklists (block less) avoid overblocking but let more harmful or distracting content through.

Neither setting is correct because the right answer depends on who’s browsing and why.
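The difference in expressive power can be made concrete with a sketch (the ages, topics, and policy rules here are illustrative): a blocklist is a function of the site alone, while a contextual decision needs the viewer and the task as inputs.

```python
# A blocklist encodes one bit per site: the same answer for every user,
# every task, every time of day.
BLOCKLIST = {"reproduction-article.example"}

def blocklist_decision(site: str) -> bool:
    """Binary verdict with no notion of who is asking or why."""
    return site in BLOCKLIST

def contextual_decision(site_topic: str, user_age: int, task: str) -> bool:
    """The same topic can be appropriate or not depending on context."""
    if site_topic == "human reproduction":
        # Appropriate for a teenager doing biology homework...
        return user_age >= 13 and task == "biology homework"
    return True

print(contextual_decision("human reproduction", 16, "biology homework"))  # True
# ...inappropriate for a 6-year-old browsing unsupervised.
print(contextual_decision("human reproduction", 6, "browsing"))           # False
```

No tuning of the one-bit function can reproduce the second: the information it would need is simply not among its inputs.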

Problem 5: Evasion is trivial

Bypassing a blocklist is one of the easiest things on the internet. Common methods:

  • VPNs and proxies. Route traffic through a server the blocklist doesn’t control.
  • Alternate DNS. Switch from the filtered DNS resolver to a public one (Google, Cloudflare, etc.).
  • URL shorteners. Access a blocked URL through a redirect service the blocklist doesn’t cover.
  • Cached versions. View blocked content through Google Cache, Wayback Machine, or similar.
  • New domains. Move content to a new domain that isn’t on the list yet.

A motivated teenager can bypass most parental control blocklists in minutes. A technical employee can route around workplace filters nearly as easily. The filter creates an illusion of control without actual enforcement.

What works better: content-level classification

If the problem is that blocklists evaluate URLs when they should be evaluating content, the solution is to evaluate content.

Content-level classification analyzes what’s actually on a page — its topic, its language patterns, the type of content it contains — rather than just where it’s hosted. This approach:

Handles new domains automatically. A brand-new site gets classified the same way as an established one. No waiting for someone to add it to a list.

Works with shared infrastructure. It doesn’t matter if the content is on Cloudflare, Medium, or a personal server. The classification looks at the content itself, not the infrastructure.

Enables context-aware decisions. When you combine content classification with a user’s stated intent, you can make decisions that static lists can’t: “This page is about X, and the user’s current task is Y, so it’s relevant/not relevant.” (See what is intent-based filtering for a full explanation of this approach.)

Explains its reasoning. A blocklist can only tell you “this site is blocked.” Content-level classification can tell you “this page was classified as a social media feed, which doesn’t match your current task of writing a quarterly report.” That explanation makes the system trustworthy and auditable.
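A minimal sketch of what such a decision could look like (the classifier here is a keyword stub standing in for a trained model, and all names are hypothetical): the verdict carries its own explanation, tying the page's classification to the user's stated task.

```python
def classify(page_text: str) -> str:
    """Stub classifier: returns a topic label for the page content."""
    return "social media feed" if "trending" in page_text else "reference"

def decide(page_text: str, current_task: str, policy: dict) -> dict:
    """Combine content classification with the user's current task."""
    topic = classify(page_text)
    allowed = topic in policy.get(current_task, set())
    return {
        "allowed": allowed,
        "reason": (
            f"classified as '{topic}', "
            f"{'matches' if allowed else 'does not match'} "
            f"task '{current_task}'"
        ),
    }

# Policy: which content topics are relevant to which tasks.
policy = {"writing a quarterly report": {"reference"}}

verdict = decide("trending posts today", "writing a quarterly report", policy)
print(verdict["reason"])
# → classified as 'social media feed', does not match task 'writing a quarterly report'
```

The hostname never appears in the decision, which is exactly why the approach is indifferent to new domains and shared infrastructure.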

The trade-offs

Content classification isn’t free of trade-offs:

  • More computationally expensive. Evaluating content requires more processing than a database lookup.
  • Not perfect. Classification models can misinterpret content, especially ambiguous or multilingual pages.
  • Requires an intent signal. To make context-aware decisions, you need some statement of intent — either from the user or from a configured profile.

But these trade-offs are engineering challenges, not fundamental limitations. The trade-offs of blocklists — architectural incompatibility with modern web infrastructure, inability to handle context, trivial evasion — are fundamental limitations that can’t be engineered away.

Where this is headed

The internet is increasingly dynamic, personalized, and delivered through shared infrastructure. Static lists of “blocked” and “allowed” domains are a solution built for a simpler web that no longer exists.

Intent-based filtering — evaluating content, understanding context, and explaining decisions — addresses these structural problems directly. That’s the approach Telovra is building.

For more on how this applies to families, see Telovra Family. For individual productivity, see Telovra Focus.

Interested in early access?

Get notified when Telovra launches.