We ran 114,000 real pop clicks through 13 layers of bot detection. Not a simulation. Not a sample. Every single click from a live Galaksion campaign, analyzed at the HTTP level before it reached any landing page.
What we found challenges most of what you read about pop traffic quality in 2026.
The Setup
The campaign ran popunder traffic from Galaksion across 407 unique publisher zones, targeting mixed geos with standard pop settings. Every click passed through our detection pipeline, which examines:
- HTTP headers (Sec-Fetch, Accept-Language, Accept-Encoding, Connection)
- TLS fingerprints and protocol version
- User agent consistency and Chrome version analysis
- Device fingerprinting (screen resolution, device model, client hints)
- IP reputation (datacenter detection, hosting ASN identification, threat intelligence feeds)
- Behavioral signals (burst rate, geographic consistency, temporal patterns)
Each click received a trust score from 0 to 10 and a binary ACCEPT/BLOCK decision. Here's what the data shows.
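The scoring flow described above can be sketched as a small function. This is an illustrative sketch, not the production pipeline: the real system uses 13 signals, and the specific weights, field names, and the 5.0 accept threshold here are assumptions.

```python
# Hypothetical sketch of a multi-signal trust scorer. Each failed check
# subtracts from a perfect score of 10; weights are illustrative only.

def trust_score(click: dict) -> float:
    """Score a click 0-10 from HTTP-level signals (lowercase header keys assumed)."""
    score = 10.0
    headers = click.get("headers", {})
    # Real browsers send Sec-Fetch-* headers on navigation requests.
    if "sec-fetch-site" not in headers:
        score -= 3.0
    # Hosting/datacenter ASNs rarely originate genuine mobile users.
    if click.get("asn_type") == "hosting":
        score -= 2.5
    # A Chrome user agent without matching client hints is inconsistent.
    ua = headers.get("user-agent", "")
    if "Chrome/" in ua and "sec-ch-ua" not in headers:
        score -= 2.0
    return max(score, 0.0)

def decide(click: dict, threshold: float = 5.0) -> str:
    """Binary ACCEPT/BLOCK decision at an assumed cutoff of 5.0."""
    return "ACCEPT" if trust_score(click) >= threshold else "BLOCK"
```

Note the design point from the methodology section: no single check is fatal on its own; a click has to fail several before its score drops below the cutoff.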
The Numbers: 114,000 Clicks Dissected
| Metric | Value |
|---|---|
| Total clicks analyzed | 114,411 |
| Unique zones | 407 |
| Overall accept rate | 74.8% |
| Overall block rate | 25.2% |
| Average trust score | 6.31 / 10 |
| Zones classified as BLOCKED | 103 (25.3%) |
| Zones classified as TRUSTED | 18 (4.4%) |
| Zones in WATCH status | 286 (70.3%) |
Let that sink in: one in four clicks was a bot. And this traffic came from a mainstream pop network with built-in quality filters. Without independent filtering, 25 cents of every dollar spent on this campaign would have gone to non-human traffic.
Zone Quality Distribution: The Long Tail Problem
The 407 zones weren't evenly distributed. Pop traffic follows a steep power law:
- Top 10 zones delivered 38% of all traffic
- Top 50 zones delivered 72% of all traffic
- Bottom 200 zones delivered less than 8% combined
This matters because the biggest zones have the most data and the clearest quality signals. The long tail of small zones — each sending 5-20 clicks — is where uncertainty lives. You can't judge a zone on 7 clicks.
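Measuring this concentration on your own data takes a few lines. The helper below computes the share of traffic the top-N zones carry; the per-zone click counts are synthetic examples, not the campaign's real figures.

```python
# Sketch: what share of total traffic do the N largest zones deliver?
# The zone counts below are illustrative, not the real dataset.

def top_n_share(clicks_per_zone: list[int], n: int) -> float:
    """Fraction of all clicks delivered by the n highest-volume zones."""
    ranked = sorted(clicks_per_zone, reverse=True)
    return sum(ranked[:n]) / sum(ranked)

zones = [5000, 3000, 1200, 400, 150, 90, 60, 40, 30, 30]
share = top_n_share(zones, 2)  # here, two zones carry 80% of the traffic
```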
The Good Zones
18 zones earned TRUSTED status — accept rate above 80% with meaningful volume. These zones consistently sent real browsers with valid headers, diverse devices, and natural timing patterns. Characteristics:
- Average accept rate: 91%
- Average trust score: 6.8
- Diverse Chrome versions (120-134 range)
- Natural geographic distribution matching campaign targeting
- Temporal patterns following real human activity cycles
These 18 zones represent your ROI. They're the gold in the trash. Every dollar spent on them reaches a real person.
The Bad Zones
103 zones were confirmed bad — accept rate below 25% with high volume, or zero accepts with moderate volume. The worst offenders:
- 47 zones with zero accepted clicks out of 20+ attempts — "dead zones" sending 100% non-human traffic
- 31 zones with accept rate below 10% — confirmed bot sources
- 25 zones with accept rate below 25% on 50+ clicks — chronic low quality
Common bot signatures across blocked zones:
- Missing Sec-Fetch headers: Modern browsers always send Sec-Fetch-Site, Sec-Fetch-Mode, and Sec-Fetch-Dest. Bots and headless browsers often don't.
- Datacenter IPs: Traffic claiming to be from mobile devices but originating from AWS, Google Cloud, or OVH data centers.
- Header inconsistency: Chrome 130 user agent but missing Sec-CH-UA client hints that Chrome 130 always sends.
- Mechanical timing: Perfectly spaced clicks — exactly every 3 seconds — instead of natural human variation.
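The first and third signatures above are cheap to check server-side. Here is a minimal sketch; the header names are the real standard ones, but the helper itself and its return format are hypothetical.

```python
# Sketch: flag two common bot signatures from request headers alone.
# Sec-Fetch-* and Sec-CH-UA are real standard header names; the helper
# and its flag strings are illustrative.

REQUIRED_SEC_FETCH = {"sec-fetch-site", "sec-fetch-mode", "sec-fetch-dest"}

def header_flags(headers: dict) -> list[str]:
    """Return a list of suspicious-header flags for one request."""
    present = {k.lower() for k in headers}
    flags = []
    # Modern browsers send all three Sec-Fetch headers on navigations.
    if not REQUIRED_SEC_FETCH <= present:
        flags.append("missing-sec-fetch")
    # Chrome has sent Sec-CH-UA client hints by default since v89, so a
    # Chrome UA without them is inconsistent.
    ua = next((v for k, v in headers.items() if k.lower() == "user-agent"), "")
    if "Chrome/" in ua and "sec-ch-ua" not in present:
        flags.append("ua-hint-mismatch")
    return flags
```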
The Gray Area
286 zones (70%) sit in the watch category. These zones haven't accumulated enough evidence to classify definitively. They include:
- New zones with fewer than 10 clicks
- Zones with mixed quality — 40-60% accept rate — that could be compromised clean zones or bot zones with some real traffic
- Zones with strong accept rates but too few clicks for confidence
This 70% gray area is why static blocklists fail. You need continuous monitoring to classify these zones as evidence accumulates.
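The three-way classification can be sketched directly from the thresholds stated in this post (accept rate above 80% for TRUSTED, below 25% for BLOCKED, with the minimum sample sizes from the methodology notes). Combining them into one function this way is an assumption about the exact logic.

```python
# Sketch of the TRUSTED / BLOCKED / WATCH classification, using the
# cutoffs quoted in the post: >80% accept on 10+ clicks -> TRUSTED,
# <25% accept on 20+ clicks -> BLOCKED, everything else -> WATCH.

def classify_zone(accepted: int, blocked: int) -> str:
    total = accepted + blocked
    rate = accepted / total if total else 0.0
    if total >= 10 and rate > 0.80:
        return "TRUSTED"
    if total >= 20 and rate < 0.25:
        return "BLOCKED"
    return "WATCH"  # not enough evidence to classify definitively

# classify_zone(3, 47) -> "BLOCKED"   (6% accept on 50 clicks)
# classify_zone(46, 4) -> "TRUSTED"   (92% accept on 50 clicks)
# classify_zone(4, 3)  -> "WATCH"     (only 7 clicks; can't judge)
```

The minimum-sample guards are what keep a zone in WATCH until the evidence accumulates, which is exactly why the gray area shrinks over time under continuous monitoring.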
RollerAds Comparison
We ran a parallel analysis on RollerAds traffic from the same period. While the Galaksion dataset is larger (114K vs ~30K clicks), the comparison reveals interesting differences:
| Metric | Galaksion | RollerAds |
|---|---|---|
| Accept rate | 74.8% | ~82% |
| Average trust score | 6.31 | ~6.8 |
| Zone block rate | 25.3% | ~18% |
| Datacenter IP rate | ~12% | ~6% |
| Missing Sec-Fetch rate | ~18% | ~11% |
RollerAds shows cleaner traffic on average, with fewer datacenter IPs and more consistent browser fingerprints. However, Galaksion offers significantly more volume and lower CPMs. Once you factor in filtering, the effective cost of each network's human traffic is comparable:
- Galaksion: $0.50 CPM ÷ 74.8% clean = $0.67 effective CPM
- RollerAds: $0.70 CPM ÷ 82% clean = $0.85 effective CPM
With a good blocklist removing the worst zones, Galaksion's effective rate drops further — making it potentially the better value despite the lower raw quality.
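The adjustment above is a one-line division: raw CPM divided by the clean share. The figures are the ones quoted in this post.

```python
# Effective CPM for human traffic: raw CPM divided by the clean fraction.
# CPM and clean-rate inputs are the figures quoted in the post.

def effective_cpm(raw_cpm: float, clean_rate: float) -> float:
    """Cost per thousand impressions, counting only human traffic."""
    return raw_cpm / clean_rate

galaksion = effective_cpm(0.50, 0.748)  # ~0.67
rollerads = effective_cpm(0.70, 0.82)   # ~0.85
```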
What This Means for Your Campaigns
If You're Running Galaksion
- Expect 25% waste without filtering. Budget accordingly. If you spend $100/day, $25 goes to bots.
- The top zones are excellent. Galaksion's best zones match or beat any network's quality. The problem is the tail.
- Zone blocklists cut waste by 60-70%. Blocking the confirmed-bad 103 zones would have saved roughly $7,200 on this campaign's spend.
- Update blocklists frequently. Zone quality shifts. Run analysis every 24-48 hours minimum.
If You're Running RollerAds
- Cleaner baseline, but not clean enough to skip filtering. 18% waste is still significant.
- Better for smaller budgets where every dollar matters and you can't afford 25% waste.
- Combine with Galaksion for maximum coverage — use RollerAds as your stable base and Galaksion for volume scaling.
If You're Comparing Networks
Don't compare raw quality metrics. Compare cost per verified human click after filtering. A "dirty" network with $0.50 CPM and good filtering often beats a "clean" network at $2.00 CPM.
The Detection Breakdown
Which detection layers caught the most bots? Here's the contribution of each signal to block decisions across 114K clicks:
| Detection Signal | % of Blocks | False Positive Risk |
|---|---|---|
| Missing/invalid Sec-Fetch headers | 34% | Very low |
| Datacenter/hosting ASN | 22% | Low (VPN users) |
| Header inconsistency (Chrome version vs capabilities) | 18% | Very low |
| Known bot user agents | 11% | Zero |
| Threat intelligence (IP reputation) | 8% | Very low |
| Burst rate / mechanical timing | 5% | Very low |
| Geographic mismatch | 2% | Low |
The top two signals — Sec-Fetch headers and datacenter detection — catch over half of all bots. These are reliable, low-false-positive signals that every pop traffic buyer should be checking.
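The burst-rate signal in the table rests on a simple observation: human inter-click gaps vary widely, while bots often fire at near-constant intervals. One way to sketch it is a coefficient-of-variation test on inter-arrival times; the 0.1 cutoff and 5-click minimum here are illustrative assumptions, not the production values.

```python
import statistics

# Sketch of the mechanical-timing signal: flag a click stream whose
# inter-arrival gaps are suspiciously uniform. Cutoffs are assumptions.

def is_mechanical(timestamps: list[float], cv_cutoff: float = 0.1) -> bool:
    """True if inter-click gaps are nearly constant (low coefficient of variation)."""
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    if len(gaps) < 5:
        return False  # too few clicks to judge timing
    cv = statistics.stdev(gaps) / statistics.mean(gaps)
    return cv < cv_cutoff

# Clicks exactly every 3 seconds get flagged; jittered human timing does not.
```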
Methodology Notes
Full transparency on how this analysis was conducted:
- Every click was analyzed server-side before any redirect. No client-side JavaScript, no cookies, no tracking pixels. Pure HTTP-level analysis.
- Trust scores use 13 independent signals. No single signal blocks traffic. A click must fail multiple checks to be blocked.
- Zone classification requires minimum sample sizes. A zone needs 20+ clicks for BLOCKED status, 10+ for TRUSTED. This prevents knee-jerk reactions to small samples.
- No conversion data was used in classification. Zone quality is determined purely by traffic signals, making it independent of offer, landing page, or campaign performance.
- All data is from live production traffic. No synthetic tests, no lab conditions. Real campaigns, real money, real decisions.
Analyze Your Own Traffic
See your real zone quality data — accept rates, trust scores, and automatic blocklists for your pop campaigns.
Start Free Analysis. 100K checks free. Works with any pop network.