
When the Image You See Isn’t the Image You Requested: Proxy Leaks via A/B Testing

David

July 30, 2025



Ask anyone who’s run stealth ops in the wild—they’ll tell you about the time an A/B test gave them away. Not because of a header, a TLS signature, or a browser quirk, but because the server sent them an image, a button, or even a layout nobody else saw. Or worse—one they kept seeing session after session, while everyone around them got something new. That’s the quiet power of A/B testing as a fingerprint. And it’s the nightmare that gets missed when all you’re thinking about is traffic, not content.

Here’s the uncomfortable truth: most modern websites, apps, and even backend APIs run constant A/B, multivariate, or feature flag tests. These experiments aren’t just about “which shade of blue makes people click more.” They’re about dynamic session tagging, per-user experiment assignment, silent segmentation, and background telemetry collection. They change what you see, when you see it, and—if you don’t blend perfectly—how you get clustered.

Field Story: The Image That Outed My Stack

I learned the hard way how sticky these leaks can be. A few years back, I was running a pool of “clean” browser sessions through premium mobile proxies. We were hitting a ticketing site notorious for A/B tests—different images, different buttons, new layout on checkout. All the obvious stuff was patched: rotating IPs, unique user agents, randomized headers, custom canvas noise, jitter on mouse movement.

But after a week, a pattern emerged. Our bots started getting the same test image over and over—a new payment icon that wasn’t showing up for anyone else on the team. We were “stuck” in a bucket. No matter how often we wiped cookies, swapped proxies, or even nuked browser containers, we kept getting that same image. Support eventually flagged our pool as “automated.” We’d triggered a cluster, not by being too clean—but by being too persistent inside a test nobody else could see.

The post-mortem? Our sessions were too predictable. Every new connection hit the A/B assignment logic with “clean” browser states, so the system dropped us in the least-used bucket. Because we were unique, we stayed unique. Our pool became its own experiment group. And that experiment group had a different experience—across IPs, browsers, even device types.

The Ugly Anatomy of A/B Test Leaks

  • Sticky Assignment: Many A/B systems pin you to a test group by cookie, localStorage, or even IP/UA fingerprint. Change your proxy, keep the rest the same, and you stay stuck. Change everything, and the system sometimes recognizes the “start from zero” pattern as its own kind of signal.
  • Backend Segmentation: Some A/B logic isn’t just client-side—it’s session ID, account history, even backend-linked. If your stack reuses accounts or login tokens, you’re auto-clustered.
  • Dynamic Content Delivery: What’s supposed to be “random” is actually deterministic once you hit a certain threshold of uniqueness. The rarer your session profile, the rarer the test variant you get—and the easier it is to spot you in the logs.
  • Image and Asset URLs: Many test systems change the filename, CDN path, or even dimensions of delivered assets. When your stack starts requesting “img/abtest_xyz123.png” and nobody else does, that’s a flag.
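The sticky-assignment point above is easy to see in code. This is a minimal, hypothetical sketch—no specific vendor’s logic—showing why a deterministic hash of a “clean” fingerprint lands in the same bucket every single time, no matter how often cookies get wiped:

```python
import hashlib

def assign_bucket(fingerprint: str, n_buckets: int = 10) -> int:
    """Deterministic A/B assignment: the same fingerprint always maps
    to the same bucket, with or without cookies."""
    digest = hashlib.sha256(fingerprint.encode()).hexdigest()
    return int(digest, 16) % n_buckets

# A "clean" profile (default UA, empty storage, same locale) hashes
# identically on every fresh session, so the whole pool lands in one bucket.
clean_profile = "Mozilla/5.0 (generic)|en-US|no-cookies"
bucket = assign_bucket(clean_profile)
```

Wiping state doesn’t help here: the inputs to the hash never change, so neither does the output.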

Where A/B Testing Outpaces Proxy Rotation

You think swapping proxies, headers, and browser containers every session will save you. But most A/B tests are built to survive that—using a combination of sticky storage, backend assignments, or just observing your “new” state and slotting you back into the rare bucket. Here’s how the pain shows up:

  • Cookie and LocalStorage Artifacts: Most A/B systems drop at least one cookie or localStorage entry per test. If you clean cookies every time, you never graduate to a “normal” user profile—you stay in “fresh” test groups, sometimes the weirdest, least-used ones.
  • Proxy Pool Clustering: If your pool reuses exit nodes, session containers, or device IDs, you create a “flock” of users who always get the same test. Detection teams can spot the cluster by asset requests alone.
  • Feature Flag Residue: Some sites run long-lived flags tied to account or even payment method. Wipe the browser, rotate IPs—still in the same test group. Only real diversity breaks the assignment.
  • Timing and Friction Artifacts: A/B systems sometimes deliberately slow down or “tweak” performance for certain groups. If your whole stack gets a lag, a weird image, or a new button, you’re living in your own test world.
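The cookie-plus-fallback pattern described above can be sketched as follows. This is an illustrative model, not a real framework’s API: sticky storage wins when present, and a server-side IP/UA fingerprint fallback catches you when it’s wiped—so rotating only one of the two never escapes the bucket:

```python
import hashlib

def _fallback_bucket(ip_ua_fp: str, n_buckets: int = 10) -> int:
    # Hypothetical server-side fallback keyed on IP + user agent.
    return int(hashlib.sha256(ip_ua_fp.encode()).hexdigest(), 16) % n_buckets

def get_variant(cookies: dict, ip_ua_fp: str) -> str:
    """Sticky cookie wins; otherwise assign from the IP/UA fingerprint
    and re-drop the cookie. Wiping cookies alone changes nothing."""
    if "ab_variant" in cookies:
        return cookies["ab_variant"]
    variant = f"variant_{_fallback_bucket(ip_ua_fp)}"
    cookies["ab_variant"] = variant
    return variant
```

Either artifact alone is enough to pin you: clean the cookie and the fingerprint re-derives it; rotate the proxy and the cookie carries it over.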

Edge Cases—The Wildest Leaks

  • API Endpoint Differences: Some A/B buckets hit different APIs or data sources entirely. You might be requesting “/api/v2/order” while most real users are on “/api/v1/order.” Logs don’t lie.
  • Image CDN Partitioning: The assets for your bucket come from a different CDN or edge node. Now your proxy IPs are mapping requests nobody else is making.
  • Script/Tracking Beacon Drift: The A/B test injects extra tracking calls, new JS, or special error logging. Bots tend to ignore these, or get stuck on them when scripts change.
  • User Experience Divergence: Sometimes, your test group breaks features—your pool quietly loses buttons, new popups appear, or your checkout always fails at step three. That’s not a coincidence. That’s a cluster.
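The API-endpoint leak above is trivial to surface from the detection side. A toy log-grouping sketch (made-up IPs and paths) shows how a small, stable set of addresses that are the only ones hitting the test endpoint becomes a ready-made cluster:

```python
from collections import defaultdict

# Simplified access-log entries: (client_ip, request_path).
access_log = [
    ("10.0.0.1", "/api/v1/order"),
    ("10.0.0.2", "/api/v1/order"),
    ("10.0.0.3", "/api/v1/order"),
    ("185.0.2.1", "/api/v2/order"),  # test-bucket traffic
    ("185.0.2.2", "/api/v2/order"),
]

# Group client IPs by which endpoint version they request.
by_endpoint = defaultdict(set)
for ip, path in access_log:
    by_endpoint[path].add(ip)
```

Nobody has to fingerprint your browser for this: the path alone partitions the traffic.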

Why Bots and Clean Pools Always Get the Strangest A/B Assignments

  • Freshness Becomes a Pattern: You’re always a “new” user, so you always trigger “new user” tests.
  • Low-Entropy Profile: Bots that wipe everything, every time, get thrown into “experimental” or “high-risk” test buckets—sometimes the ones meant to stress-test new logic.
  • Uncommon Traffic Flow: Bots that jump directly to deep links, skip onboarding, or request images out of order can get shunted into their own bucket.
  • Consistent Outlier: If your pool always moves the same way, always skips certain pages, or always blocks trackers, you stand out from both normal users and regular bots.

What Detection Teams Are Actually Watching

  • Asset Request Patterns: Who keeps asking for “img/abtest_99q1r4x.png” at 3AM from ten different IPs in five countries? Clustered.
  • Session Assignment Drift: If a group of sessions always lands in the same A/B bucket, that’s not random—it’s a signal.
  • Request Entropy: Real users’ cookies, headers, and storage state build up slowly. Bots that keep showing up empty or identical flag themselves as “test fodder.”
  • Broken Feature Trails: User sessions that always hit bugs, edge cases, or rare test logic become easy to track. If your pool has a unique pattern of failures, you get mapped.
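The “session assignment drift” signal above reduces to one number. A minimal sketch of such a check, with invented labels and bucket IDs: for each traffic segment, compute the fraction of sessions that land in its single most common bucket. Real users spread out; a pinned pool concentrates near 1.0:

```python
from collections import Counter, defaultdict

def bucket_drift(sessions):
    """sessions: iterable of (segment_label, bucket_id). Returns, per
    segment, the share of sessions in that segment's most common bucket.
    Values near 1.0 mean the segment is pinned, not randomly assigned."""
    by_segment = defaultdict(list)
    for segment, bucket in sessions:
        by_segment[segment].append(bucket)
    return {seg: Counter(buckets).most_common(1)[0][1] / len(buckets)
            for seg, buckets in by_segment.items()}

# A suspect pool lands in bucket B7 nine times out of ten; organic
# traffic spreads evenly across five buckets.
observed = [("pool", "B7")] * 9 + [("pool", "B2")]
observed += [("organic", f"B{i % 5}") for i in range(20)]
drift = bucket_drift(observed)
```

Nothing about the sessions’ content matters here—the assignment distribution alone gives the cluster away.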

How Proxied.com Learned to Live With the Mess

After burning through enough pools stuck in weird A/B buckets, here’s what we do:

  • Accept the bucket—then break the pattern. Sometimes, you just have to ride out the test and inject enough chaos (random headers, localStorage variation, session timeouts, actual failed logins) to “grow out” of the assignment.
  • Rotate everything—proxy, browser, device ID, account, session, extension set, even the script flow. No two sessions should ever look the same from assignment to assignment.
  • Seed cookies and localStorage with real user junk—let the pool accumulate “history” and not always look new.
  • Script friction—let sessions fail, bounce, reload, and even error out. Real users don’t pass every A/B test cleanly.
  • Watch for assignment—monitor which assets, endpoints, or scripts you’re getting. If you’re “too unique,” burn and restart.
  • Test in the wild—use real devices and mixed human traffic to see what “normal” really gets, then tune the pool to blend.
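The “watch for assignment” step above can be approximated with a simple rarity score: compare the asset URLs your pool is served against a baseline gathered from real devices, and burn the pool when the overlap collapses. A hypothetical helper, assuming you already log both sets of URLs:

```python
def assignment_rarity(pool_assets, baseline_assets):
    """Fraction of asset URLs the pool requests that baseline (real-user)
    traffic never requests. 0.0 = fully blended; near 1.0 = the pool is
    living in its own test bucket and should be burned."""
    pool = set(pool_assets)
    if not pool:
        return 0.0
    return len(pool - set(baseline_assets)) / len(pool)
```

A threshold is a judgment call, but in practice anything drifting steadily upward means the pool is being served content nobody else sees.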

The only thing worse than being stuck in a test bucket is pretending you’re not there.

Survival Tips—Blending With the A/B Crowd

  1. Never wipe everything between sessions—let a little mess accumulate. History is realism.
  2. Script different user flows—start on the homepage, the deep link, the checkout, and everything in between.
  3. Randomize extension noise, autofill state, and even language or locale headers.
  4. Accept some failed sessions—if you’re too perfect, you’re an experiment, not a real user.
  5. Log your asset requests and compare to “normal”—if you’re only seeing what nobody else does, you’re clustered.
  6. Burn pools stuck in rare buckets—there’s no fixing some assignments. Just walk away.

Field Scars—Where the Shadows Pile Up

  • Travel Booking Sites: A/B test different images for “Deal of the Day”—bots stuck in old test buckets can’t see the real deals, or always get flagged for weird booking flows.
  • Ticketing Portals: Bots kept seeing a broken “Pay Now” button nobody else did—test bucket flag stuck for days, flagged by logs.
  • Retail Checkout Flows: Automation pools got a unique shipping option that never worked—clustered and burned by the backend team running tests on “unusual” traffic.
  • App UIs: Certain proxy sessions stuck with “beta” design, unable to revert or progress—burned after enough failed conversions.

You can’t script your way out of a bucket sometimes. You just have to know when you’re in one and move on.

Proxied.com’s Playbook—Mess, Diversity, Survival

We don’t chase perfect anymore. We randomize every session, every flow, every storage state, and always compare asset requests to real traffic. If something gets stuck, we burn it, start over, and never let the pool look the same twice. The goal is never to be in the rarest bucket. It’s to be lost in the crowd of real user mess.

Final Thoughts

A/B testing is the stealth fingerprint nobody talks about. If you’re seeing different images, buttons, or flows than the people around you, it might be the sign you’re living in your own private experiment. In 2025, survival isn’t about being perfect—it’s about being normal. Or at least, normal enough to never look like the test subject.

