We ran 114,000 real pop clicks through 13 layers of bot detection. Not a simulation. Not a sample. Every single click from a live Galaksion campaign, analyzed at the HTTP level before it reached any landing page.

What we found challenges most of what you read about pop traffic quality in 2026.

The Setup

The campaign ran popunder traffic from Galaksion across 407 unique publisher zones, targeting mixed geos with standard pop settings. Every click passed through our detection pipeline, which examines:

Each click received a trust score from 0 to 10 and a binary ACCEPT/BLOCK decision. Here's what the data shows.
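As a sketch of that decision step, here's how a 0–10 trust score might map to the binary ACCEPT/BLOCK outcome. The 5.0 cutoff is a hypothetical threshold for illustration, not the pipeline's actual value:

```python
def decide(trust_score: float, threshold: float = 5.0) -> str:
    """Map a 0-10 trust score to a binary ACCEPT/BLOCK decision.

    The 5.0 cutoff is a hypothetical illustration, not the
    pipeline's real threshold.
    """
    if not 0.0 <= trust_score <= 10.0:
        raise ValueError("trust score must be in [0, 10]")
    return "ACCEPT" if trust_score >= threshold else "BLOCK"

# The campaign's average score of 6.31 clears a 5.0 cutoff
print(decide(6.31))  # ACCEPT
```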

The Numbers: 114,000 Clicks Dissected

| Metric | Value |
|---|---|
| Total clicks analyzed | 114,411 |
| Unique zones | 407 |
| Overall accept rate | 74.8% |
| Overall block rate | 25.2% |
| Average trust score | 6.31 / 10 |
| Zones classified as BLOCKED | 103 (25.3%) |
| Zones classified as TRUSTED | 18 (4.4%) |
| Zones in WATCH status | 286 (70.3%) |

Let that sink in: one in four clicks was a bot. And this is from a mainstream pop network with built-in quality filters. Without independent filtering, a quarter of every dollar spent on this campaign would go to non-human traffic.

Zone Quality Distribution: The Long Tail Problem

The 407 zones weren't evenly distributed. Pop traffic follows a steep power law:

This matters because the biggest zones have the most data and the clearest quality signals. The long tail of small zones — each sending 5-20 clicks — is where uncertainty lives. You can't judge a zone on 7 clicks.
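One way to quantify that uncertainty is a confidence interval on the zone's accept rate. This is our illustration, not necessarily the method the pipeline uses; a Wilson score interval on 7 clicks is so wide that no classification is defensible:

```python
import math

def wilson_interval(accepts: int, clicks: int, z: float = 1.96):
    """95% Wilson score interval for a zone's true accept rate."""
    if clicks == 0:
        return (0.0, 1.0)
    p = accepts / clicks
    denom = 1 + z**2 / clicks
    center = (p + z**2 / (2 * clicks)) / denom
    margin = z * math.sqrt(
        p * (1 - p) / clicks + z**2 / (4 * clicks**2)
    ) / denom
    return (center - margin, center + margin)

# 5 accepts out of 7 clicks: the interval spans roughly 0.36-0.92,
# far too wide to call the zone good or bad.
small_lo, small_hi = wilson_interval(5, 7)

# The same 71% accept rate over 700 clicks pins the rate down
# to within a few points.
big_lo, big_hi = wilson_interval(500, 700)
```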

The Good Zones

18 zones earned TRUSTED status — accept rate above 80% with meaningful volume. These zones consistently sent real browsers with valid headers, diverse devices, and natural timing patterns. Characteristics:

These 18 zones represent your ROI. They're the gold in the trash. Every dollar spent on them reaches a real person.

The Bad Zones

103 zones were confirmed bad — accept rate below 25% with high volume, or zero accepts with moderate volume. The worst offenders:

Common bot signatures across blocked zones:

The Gray Area

286 zones (70%) sit in the watch category. These zones haven't accumulated enough evidence to classify definitively. They include:

This 70% gray area is why static blocklists fail. You need continuous monitoring to classify these zones as evidence accumulates.
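The three-way classification described above can be sketched as a simple rule. The rate thresholds mirror the ones stated in this article (above 80% accept for TRUSTED, below 25% for BLOCKED); the 50-click volume floor is our assumption for what counts as "enough evidence":

```python
def classify_zone(accepts: int, clicks: int, min_volume: int = 50) -> str:
    """Classify a zone as evidence accumulates.

    Rate thresholds match the article's definitions; the 50-click
    volume floor is an assumed minimum, not the pipeline's value.
    """
    if clicks < min_volume:
        return "WATCH"  # not enough evidence to classify yet
    rate = accepts / clicks
    if rate > 0.80:
        return "TRUSTED"
    if rate < 0.25:
        return "BLOCKED"
    return "WATCH"
```

Re-running this on every zone as new clicks arrive is the "continuous monitoring" in practice: zones migrate out of WATCH once their click counts cross the volume floor.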

RollerAds Comparison

We ran a parallel analysis on RollerAds traffic from the same period. While the Galaksion dataset is larger (114K vs ~30K clicks), the comparison reveals interesting differences:

| Metric | Galaksion | RollerAds |
|---|---|---|
| Accept rate | 74.8% | ~82% |
| Average trust score | 6.31 | ~6.8 |
| Zone block rate | 25.3% | ~18% |
| Datacenter IP rate | ~12% | ~6% |
| Missing Sec-Fetch rate | ~18% | ~11% |

RollerAds shows cleaner traffic on average, with fewer datacenter IPs and more consistent browser fingerprints. However, Galaksion offers significantly more volume and lower CPMs. The effective cost per real human click may be comparable once you factor in filtering:

With a good blocklist removing the worst zones, Galaksion's effective rate drops further — making it potentially the better value despite the lower raw quality.

What This Means for Your Campaigns

If You're Running Galaksion

  1. Expect 25% waste without filtering. Budget accordingly. If you spend $100/day, $25 goes to bots.
  2. The top zones are excellent. Galaksion's best zones match or beat any network's quality. The problem is the tail.
  3. Zone blocklists cut waste by 60-70%. Blocking the confirmed-bad 103 zones would have saved roughly $7,200 on this campaign's spend.
  4. Update blocklists frequently. Zone quality shifts. Run analysis every 24-48 hours minimum.
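The blocklist math in point 3 can be sketched as follows. The per-zone figures here are hypothetical stand-ins (the campaign's zone-level spend isn't published in this article); the logic is what matters:

```python
# Hypothetical per-zone stats: zone_id -> (clicks, accepts, spend_usd)
zones = {
    "z1001": (4200, 210, 630.0),   # 5% accept rate -> candidate for blocking
    "z1002": (3800, 3300, 570.0),  # healthy zone, keep
    "z1003": (900, 45, 135.0),     # 5% accept rate -> candidate for blocking
}

def blocklist_savings(zones, max_accept_rate=0.25):
    """Sum the spend that blocking low-accept-rate zones would recover."""
    saved = 0.0
    blocked = []
    for zone_id, (clicks, accepts, spend) in zones.items():
        if clicks and accepts / clicks < max_accept_rate:
            blocked.append(zone_id)
            saved += spend
    return blocked, saved

blocked, saved = blocklist_savings(zones)
# blocked -> ["z1001", "z1003"], saved -> 765.0
```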

If You're Running RollerAds

  1. Cleaner baseline, but not clean enough to skip filtering. 18% waste is still significant.
  2. Better for smaller budgets where every dollar matters and you can't afford 25% waste.
  3. Combine with Galaksion for maximum coverage — use RollerAds as your stable base and Galaksion for volume scaling.

If You're Comparing Networks

Don't compare raw quality metrics. Compare cost per verified human click after filtering. A "dirty" network with $0.50 CPM and good filtering often beats a "clean" network at $2.00 CPM.
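To make that comparison concrete, here's the arithmetic using the example CPMs above and the accept rates from this analysis as stand-ins for "human rate" (a simplification; real filtering rates vary by campaign):

```python
def cost_per_human_click(cpm_usd: float, human_rate: float) -> float:
    """Effective cost per verified human click.

    1000 impressions cost `cpm_usd`; only `human_rate` of them
    survive filtering as real humans.
    """
    return cpm_usd / (1000 * human_rate)

# "Dirty" network: $0.50 CPM, 74.8% human (Galaksion's accept rate)
dirty = cost_per_human_click(0.50, 0.748)

# "Clean" network: $2.00 CPM, 82% human (RollerAds' accept rate)
clean = cost_per_human_click(2.00, 0.82)

# The dirty network is roughly 3.6x cheaper per verified human click,
# despite its worse raw quality.
```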

The Detection Breakdown

Which detection layers caught the most bots? Here's the contribution of each signal to block decisions across 114K clicks:

| Detection Signal | % of Blocks | False Positive Risk |
|---|---|---|
| Missing/invalid Sec-Fetch headers | 34% | Very low |
| Datacenter/hosting ASN | 22% | Low (VPN users) |
| Header inconsistency (Chrome version vs capabilities) | 18% | Very low |
| Known bot user agents | 11% | Zero |
| Threat intelligence (IP reputation) | 8% | Very low |
| Burst rate / mechanical timing | 5% | Very low |
| Geographic mismatch | 2% | Low |

The top two signals — Sec-Fetch headers and datacenter detection — catch over half of all bots. These are reliable, low-false-positive signals that every pop traffic buyer should be checking.
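As an illustration of the top signal, here's a simplified Sec-Fetch consistency check. Chromium has sent the `Sec-Fetch-*` Fetch Metadata headers on every request since version 80, so a request claiming a modern Chrome user agent without them is a strong bot signal. This is a sketch, not the pipeline's actual rule — real implementations also handle Firefox, Safari, and proxy-stripping edge cases:

```python
import re

CHROMIUM_SEC_FETCH_SINCE = 80  # Chromium ships Sec-Fetch-* from v80 on
REQUIRED = ("Sec-Fetch-Site", "Sec-Fetch-Mode", "Sec-Fetch-Dest")

def sec_fetch_suspicious(headers: dict) -> bool:
    """Flag requests whose user agent claims a modern Chromium
    build but which lack the Sec-Fetch-* headers that build
    always sends."""
    ua = headers.get("User-Agent", "")
    m = re.search(r"Chrome/(\d+)", ua)
    if not m or int(m.group(1)) < CHROMIUM_SEC_FETCH_SINCE:
        return False  # can't apply this check to non-Chromium or old UAs
    return not all(h in headers for h in REQUIRED)
```

A spoofed Chrome 120 user agent with no Sec-Fetch headers trips the check; the same user agent with all three headers present passes.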

Methodology Notes

Full transparency on how this analysis was conducted:

Analyze Your Own Traffic

See your real zone quality data — accept rates, trust scores, and automatic blocklists for your pop campaigns.

Start Free Analysis

100K checks free. Works with any pop network.