Shared Resource Entropy: How Web Workers Create Cross-Session Correlation


David
September 15, 2025


Proxies are usually discussed at the surface: headers, TLS handshakes, or DNS queries. But modern browsers run deeper machinery. Web Workers, designed to allow JavaScript to offload tasks into background threads, quietly carry state across contexts. They don’t just perform computation — they inherit entropy from the system they run on. Timing delays, memory usage, cache collisions, and CPU quirks all leave traces.
Detectors don’t need to target the main session to burn a fleet. They only need to observe these background witnesses. And because Web Workers run silently in the margins of the browser, fleets often forget they exist. This forgetfulness turns into exposure when shared entropy betrays supposedly independent personas.
The Timing Drift of Parallel Threads
Web Workers are asynchronous by design. They post messages back to the main thread when tasks finish, and the delay depends on system performance, background load, and network state. Real users scatter across this timing drift — some laptops crunch tasks faster, some mobile devices lag, others wobble unpredictably depending on CPU usage.
Fleets betray themselves when dozens of accounts reveal identical timing drifts. Running on cloned environments with uniform proxy conditions, their workers echo back in unison. Detectors see this as orchestration. Timing that should wobble like static instead clicks like a metronome.
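To make that concrete, here is a minimal sketch of the kind of round-trip probe a detection script could run, using an inline echo worker as a stand-in for real workloads. The sample count and summary statistics are illustrative choices, not any particular vendor's check.

```typescript
// Minimal sketch: measure postMessage round-trip latency to an inline worker.
// Comparing the resulting distribution across sessions is the idea described
// above; identical environments tend to produce suspiciously similar drift.

const workerSource = `
  self.onmessage = (e) => {
    // Echo immediately; any delay comes from scheduling, load, and queueing.
    self.postMessage(e.data);
  };
`;

const blobUrl = URL.createObjectURL(
  new Blob([workerSource], { type: "application/javascript" })
);
const echoWorker = new Worker(blobUrl);

function roundTrip(): Promise<number> {
  return new Promise((resolve) => {
    const start = performance.now();
    echoWorker.onmessage = () => resolve(performance.now() - start);
    echoWorker.postMessage("ping");
  });
}

async function sampleTimingDrift(samples = 50): Promise<void> {
  const latencies: number[] = [];
  for (let i = 0; i < samples; i++) {
    latencies.push(await roundTrip());
  }
  const mean = latencies.reduce((a, b) => a + b, 0) / latencies.length;
  const variance =
    latencies.reduce((a, b) => a + (b - mean) ** 2, 0) / latencies.length;
  console.log(
    `mean ${mean.toFixed(3)} ms, stddev ${Math.sqrt(variance).toFixed(3)} ms`
  );
}

sampleTimingDrift();
```

Across a genuine population, both the mean and the spread of those numbers wander; across a cloned fleet they tend to land in the same narrow band.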
Memory Allocation as a Continuity Trail
Every Web Worker consumes memory differently. Real users scatter because systems vary: one user runs with a dozen tabs open, another with a single lightweight app, another with an overloaded background process. Fleets collapse when all personas consume nearly identical amounts of memory during worker initialization.
Detectors log these footprints. Accounts that should diverge end up leaving overlapping memory trails. The result is continuity across sessions — a way to link accounts not by payloads, but by resource allocation patterns that don’t scatter as they should.
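A rough illustration of how such a footprint could be sampled, assuming Chrome's non-standard performance.memory API is available; real detectors infer memory pressure far more indirectly, so treat this as a sketch of the principle rather than a working probe.

```typescript
// Sketch: log the JS heap delta around worker creation.
// `performance.memory` is a non-standard, Chrome-only API, used here purely
// to illustrate the "memory trail" idea.

interface ChromeMemory {
  usedJSHeapSize: number;
}

function heapUsed(): number | null {
  const mem = (performance as unknown as { memory?: ChromeMemory }).memory;
  return mem ? mem.usedJSHeapSize : null;
}

const initSource = `self.onmessage = () => self.postMessage("ready");`;
const initUrl = URL.createObjectURL(
  new Blob([initSource], { type: "application/javascript" })
);

const before = heapUsed();
const probe = new Worker(initUrl);
probe.onmessage = () => {
  const after = heapUsed();
  if (before !== null && after !== null) {
    // Fleets spun from one template tend to report near-identical deltas here.
    console.log(`heap delta around worker init: ${after - before} bytes`);
  }
  probe.terminate();
};
probe.postMessage("init");
```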
Shared Pools, Shared Scars
Browsers optimize Web Workers by reusing underlying thread pools and caches. Real populations scatter because user environments are chaotic. Fleets, by contrast, reveal scars of shared resource pools. Multiple personas may expose the same entropy quirks, like garbage collector timing or shared cache warm-up.
Detectors exploit this. If dozens of accounts reveal identical scars in worker behavior, they don’t need IP-level clustering to link them. Shared pools betray shared machinery, no matter how many proxies rotate.
Entropy Exhaustion in Worker Tasks
When fleets push heavy computation through workers, they exhaust entropy sources. Random number generators, nonce creation, or shuffle functions collapse into predictable patterns when called in bulk. Real users scatter because their workloads differ. Fleets collapse when their workers all produce near-identical “random” outputs.
Detectors don’t need to catch the first worker. They wait for entropy exhaustion to set in and then map accounts that echo the same pseudo-random failures. In high-scale automation, this exhaustion becomes inevitable.
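The detection side reduces to a collision check. The sketch below, with the hypothetical helper fingerprintRandomStream, hashes the first few PRNG outputs into a compact fingerprint that a server could compare across accounts; genuine populations essentially never collide on it.

```typescript
// Hypothetical sketch: fingerprint the first N outputs of the session's PRNG.
// The helper name and sample size are illustrative, not a real detector's API.

async function fingerprintRandomStream(n = 32): Promise<string> {
  // Sample n pseudo-random values and hash them into a compact fingerprint.
  const samples = new Float64Array(n);
  for (let i = 0; i < n; i++) samples[i] = Math.random();
  const digest = await crypto.subtle.digest("SHA-256", samples);
  return Array.from(new Uint8Array(digest))
    .map((b) => b.toString(16).padStart(2, "0"))
    .join("");
}

// Server-side, a detector only needs to count how many "independent" accounts
// submit the same fingerprint.
fingerprintRandomStream().then((fp) => console.log("prng fingerprint:", fp));
```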
The Illusion of Stateless Workers
Developers assume workers are stateless — created fresh, destroyed when done. But browsers cache, optimize, and reuse. Real users scatter across these optimizations, because device histories and OS quirks make reuse unpredictable. Fleets, however, reveal predictable reuse patterns across accounts.
Detectors treat this illusion as a signal. Personas that all expose the same worker reuse behavior are linked instantly. Statelessness, ironically, becomes the continuity trail.
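One way to see that reuse is simply to time the same worker script twice. The sketch below uses an inline Blob worker so it is self-contained; the absolute numbers matter less than how consistently the cold-to-warm gap repeats across personas.

```typescript
// Sketch: compare first and second spawn time of the same worker script.
// If the browser reuses cached compilation or pooled threads, the second
// spawn is faster, and in cloned fleets it is faster by the same amount.

const bootSource = `self.postMessage("booted");`;
const bootUrl = URL.createObjectURL(
  new Blob([bootSource], { type: "application/javascript" })
);

function timeSpawn(): Promise<number> {
  return new Promise((resolve) => {
    const t0 = performance.now();
    const w = new Worker(bootUrl);
    w.onmessage = () => {
      resolve(performance.now() - t0);
      w.terminate();
    };
  });
}

(async () => {
  const cold = await timeSpawn();
  const warm = await timeSpawn();
  console.log(`cold spawn ${cold.toFixed(2)} ms, warm spawn ${warm.toFixed(2)} ms`);
})();
```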
Synchronization Scars in Messaging
Web Workers communicate via message passing. Real users scatter across delays: one device processes messages instantly, another lags under load, another stutters during garbage collection. Fleets betray themselves when all personas exhibit the same synchronization scars — messages queued and processed with uniform rhythm.
Detectors log these rhythms. They don’t need to parse content; they only need to measure how workers sync with the main thread. Uniformity here is as revealing as a shared IP.
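A minimal sketch of such a rhythm measurement, with an inline worker ticking on a fixed timer standing in for real message traffic; the interval and sample count are arbitrary illustrative values.

```typescript
// Sketch: record inter-arrival gaps of periodic worker messages and compute
// their spread. Real devices wobble; a uniform rhythm across many sessions is
// the "synchronization scar" described above.

const tickerSource = `
  let n = 0;
  const timer = setInterval(() => {
    self.postMessage(n++);
    if (n >= 40) clearInterval(timer);
  }, 25);
`;

const tickerUrl = URL.createObjectURL(
  new Blob([tickerSource], { type: "application/javascript" })
);
const ticker = new Worker(tickerUrl);

const arrivals: number[] = [];
ticker.onmessage = () => {
  arrivals.push(performance.now());
  if (arrivals.length === 40) {
    const gaps = arrivals.slice(1).map((t, i) => t - arrivals[i]);
    const mean = gaps.reduce((a, b) => a + b, 0) / gaps.length;
    const jitter = Math.sqrt(
      gaps.reduce((a, b) => a + (b - mean) ** 2, 0) / gaps.length
    );
    console.log(`mean gap ${mean.toFixed(2)} ms, jitter ${jitter.toFixed(2)} ms`);
    ticker.terminate();
  }
};
```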
Anchoring Entropy in Carrier Jitter
All these worker exposures — timing drifts, memory trails, entropy exhaustion, synchronization scars — are magnified inside clean datacenter proxies. Their sterility makes uniformity more visible. Carrier networks add noise: jitter, packet reordering, and uneven resource handling blur worker behavior into natural scatter.
Proxied.com mobile proxies provide that critical anchor. They insert handset entropy into worker timing, making accounts look less like orchestrated clones and more like messy humans. Without this, workers burn fleets before payloads even arrive.
Thread Starvation as an Orchestration Marker
Thread starvation occurs when too many workers compete for CPU cycles, forcing some to wait. In a natural population, starvation scatters unpredictably: one laptop overheats and throttles, a phone juggles a background video call, another device shifts tasks as the user opens a new app. These diversions make starvation messy, producing patterns that are jagged and hard to align.
Fleets betray themselves because their starvation looks synchronized. Dozens of accounts, all running identical stacks under the same proxy conditions, pause in lockstep when workers oversaturate CPU threads. Instead of scatter, the starvation events align like clock ticks.
Detectors spot this because they know real devices don’t choke at the same time unless orchestration is at play. When multiple sessions show identical worker slowdowns — same length, same recovery time — the uniformity is damning. Thread starvation becomes more than a performance metric; it becomes a forensic marker of proxy-driven fleets.
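As a rough sketch of how starvation can be provoked and measured, the snippet below oversubscribes the CPU with identical busy-loop workers and records each completion time. The workload size and oversubscription factor are arbitrary choices for illustration.

```typescript
// Sketch: spawn more CPU-bound workers than the machine has cores and record
// how long each one takes to finish the same fixed workload. The scatter (or
// lack of it) in these completion times is the starvation signal.

const busySource = `
  self.onmessage = () => {
    // Fixed busy loop so every worker competes for the same CPU budget.
    let acc = 0;
    for (let i = 0; i < 5e7; i++) acc += i % 7;
    self.postMessage(acc);
  };
`;

const busyUrl = URL.createObjectURL(
  new Blob([busySource], { type: "application/javascript" })
);

async function measureStarvation(): Promise<void> {
  const count = (navigator.hardwareConcurrency || 4) * 2; // oversubscribe
  const durations = await Promise.all(
    Array.from({ length: count }, () => {
      return new Promise<number>((resolve) => {
        const w = new Worker(busyUrl);
        const t0 = performance.now();
        w.onmessage = () => {
          resolve(performance.now() - t0);
          w.terminate();
        };
        w.postMessage("go");
      });
    })
  );
  console.log("completion times (ms):", durations.map((d) => d.toFixed(1)));
}

measureStarvation();
```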
The Fingerprint of Garbage Collection
JavaScript garbage collection is one of the least predictable processes in a browser runtime. It kicks in when memory pressure builds, but the timing is influenced by variables the page cannot control: how much RAM is free, what other applications are running, whether the browser is under stress, or whether the OS decides to intervene. For real users, this creates noisy scatter. Some sessions stutter early, others late, some barely at all.
Fleets collapse because garbage collection stutters fall into identical grooves. Identical hardware, cloned environments, and uniform workloads push garbage collection into the same rhythm across personas. When dozens of sessions stall simultaneously, detectors see it as a continuity signature that cuts across IP rotation.
Detectors don’t even need full visibility into memory contents. All they need to see are worker stalls aligning too neatly. Garbage collection becomes a fingerprint — a rhythm of pauses that is impossible to fake without the entropy of true diversity.
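Browsers expose no direct garbage-collection hook, so any such measurement is indirect. The sketch below infers likely pauses from unusually long gaps between allocation-heavy iterations; the 8 ms threshold is an arbitrary illustrative value, and the same loop can just as easily be shipped into a worker.

```typescript
// Sketch: churn short-lived objects and flag iterations whose wall-clock gap
// is far above the norm. The timing pattern of those flagged pauses is the
// "fingerprint" described above.

function detectPauses(iterations = 2000, thresholdMs = 8): number[] {
  const pauses: number[] = [];
  let last = performance.now();
  for (let i = 0; i < iterations; i++) {
    // Allocate garbage to build memory pressure.
    const junk = new Array(1000).fill(0).map((_, j) => ({ j, s: "x".repeat(16) }));
    void junk;
    const now = performance.now();
    if (now - last > thresholdMs) pauses.push(now); // likely a GC or scheduler stall
    last = now;
  }
  return pauses;
}

console.log("suspected GC/stall timestamps (ms):", detectPauses());
```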
Persistent Seeds in Randomization
Randomization functions inside workers often rely on seeds from system clocks, memory states, or pseudo-random generators. Real populations scatter here because entropy sources differ. One user boots their system seconds earlier, another pulls from a different entropy pool, another runs with altered seeds because of OS noise.
Fleets betray themselves because their seeds repeat. Identical stacks generate nearly identical random sequences, which means “random” numbers across accounts align. Detectors notice when supposedly independent personas flip the same coin in the same way.
This is particularly dangerous for automation fleets that lean heavily on worker-driven shuffling or ID generation. Once detectors see repetition across accounts, they don’t need IP addresses to connect the dots. The repetition itself is the link, and it burns the fleet immediately.
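The mechanics are easy to demonstrate with a small seeded generator in the mulberry32 style, used here purely for illustration: two personas that derive the same seed from cloned state emit the same "random" sequence.

```typescript
// Illustration of the seed problem with a small mulberry32-style PRNG.
// Two "personas" seeded from the same cloned state produce identical output,
// which is exactly the repetition a detector looks for.

function mulberry32(seed: number): () => number {
  let state = seed >>> 0;
  return () => {
    state = (state + 0x6d2b79f5) >>> 0;
    let t = state;
    t = Math.imul(t ^ (t >>> 15), t | 1);
    t ^= t + Math.imul(t ^ (t >>> 7), t | 61);
    return ((t ^ (t >>> 14)) >>> 0) / 4294967296;
  };
}

const personaA = mulberry32(0xc0ffee); // same cloned seed ...
const personaB = mulberry32(0xc0ffee); // ... on a second "independent" account

const seqA = Array.from({ length: 5 }, personaA);
const seqB = Array.from({ length: 5 }, personaB);
console.log("identical sequences:", JSON.stringify(seqA) === JSON.stringify(seqB)); // true
```

Browsers do not seed Math.random this naively, but any worker-side shuffling or ID generation that is seeded from cloned state behaves exactly like the two personas above.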
Cache Warm-Up as a Continuity Trail
Web Worker scripts are cached to speed up execution. Real users scatter because their cache states are inconsistent. One user runs a fresh load after clearing history, another still has stale data, another updated overnight. This diversity makes warm-up timings jagged.
Fleets, however, create uniformity. If every persona is spun from the same template, all workers hit the same warm-up delay the first time they run. Detectors log these identical warm-up periods as a continuity trail. Accounts that pause for the same number of milliseconds before running their tasks are clearly linked, no matter how often their proxies rotate.
Cache was designed for efficiency, but in proxy fleets, it doubles as a forensic logbook. What operators ignore, detectors weaponize.
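Where the worker script is fetched over the network, warm-up state is even directly observable from the Resource Timing API. In the sketch below the path /js/worker.js is a hypothetical placeholder; a transferSize of zero alongside a non-zero body size usually indicates a cache hit, and the entry's duration is the warm-up delay detectors log.

```typescript
// Sketch: read cache state and load duration for a network-served worker
// script from the Resource Timing API. The script path is hypothetical.

const workerScriptPath = "/js/worker.js"; // placeholder path for illustration

for (const entry of performance.getEntriesByType("resource")) {
  const res = entry as PerformanceResourceTiming;
  if (res.name.endsWith(workerScriptPath)) {
    const cached = res.transferSize === 0 && res.decodedBodySize > 0;
    console.log(
      `${res.name}: ${cached ? "served from cache" : "fetched"} in ${res.duration.toFixed(1)} ms`
    );
  }
}
```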
Interference Between Sessions
Web Workers don’t run in perfect isolation. They contend for shared system resources: CPU caches, memory buses, GPU threads. Real populations scatter here because workloads differ wildly. One device runs a video stream in the background, another burns CPU on spreadsheet calculations, another idles. Their interference patterns diverge.
Fleets betray themselves when all sessions reveal the same interference scars. If multiple accounts choke in identical patterns, detectors know they share an execution environment. The “independent user” mask collapses instantly because true independence never produces identical interference.
Detectors treat this interference as a kind of background noise check. Real life always scatters. Fleets fail because their shared infrastructure erases that scatter and replaces it with sterile repetition.
The Problem of Predictable Scheduling
Operating systems schedule worker threads based on many factors: current load, system priorities, and even interrupts from hardware. This makes scheduling scatter wildly across real users. Two identical laptops, in different contexts, will still show different scheduling rhythms.
Fleets betray themselves when their scheduling looks too predictable. Identical environments with similar workloads produce worker messages that always arrive at the same intervals. Detectors see this and realize they’re not looking at a population — they’re looking at an orchestrated cluster.
Predictability itself is suspicious. Life wobbles, automation does not. And in scheduling, that neatness becomes the loudest scar.
When Message Collisions Form a Pattern
Web Workers communicate with the main thread through message passing, and collisions happen when multiple messages land in the queue at the same moment. Real users scatter because these collisions occur inconsistently: different devices process queues differently, sometimes colliding often, sometimes rarely, sometimes not at all.
Fleets, however, betray themselves by producing identical collision patterns. If dozens of accounts always hit collisions in the same places, detectors see orchestration. They don’t need payload content; timestamp alignment is enough.
This transforms collisions into fingerprints. They are not just glitches in asynchronous systems — they are continuity anchors, proving that “independent” accounts are chained together under the same automation logic.
Anchoring Worker Scatter in Carrier Networks
The only real way to mask these exposures is to reintroduce scatter at the network and system level. Datacenter proxies make everything worse: they sterilize timing, smooth delays, and strip away environmental noise. Fleets behind them are burned faster because their uniformity is amplified.
Carrier networks, by contrast, add natural entropy. Proxied.com mobile proxies inject jitter, packet reordering, and uneven throughput into worker behavior. Even identical environments scatter differently when tethered to messy handset paths. The workers’ scars blur back into human-like variation.
Without this anchoring, fleets are naked. Every worker log, every collision, every randomization seed becomes a continuity marker. With it, the noise of real life returns, and workers no longer whisper in unison.
Final Thoughts
Web Workers were built as helpers, offloading heavy lifting so web apps wouldn’t stall. But in the world of detection, they’ve become silent witnesses. They reveal entropy exhaustion, timing scars, garbage collection rhythms, and scheduling uniformity. Fleets collapse not because of what they click, but because of what their background threads confess.
Detectors don’t need to monitor headers or TLS to burn an operator. They only need to measure how workers behave — how they collide, stall, and repeat. Those traces are impossible to erase with clean proxies alone.
The lesson is blunt: shared resource entropy is a trap. Fleets that ignore it leak continuity across every “independent” persona they deploy. Only by anchoring in noisy, messy networks — like those provided by Proxied.com mobile proxies — can operators hope to scatter their background scars back into believable human entropy.