Inside the Archive.today CAPTCHA — Simulated DDoS Request Flood (Investigation & Demo)
Step-by-step explanation, a fresh visual simulation (safe, no fetch), community citations, and mitigation advice for site owners.
// This simulation visualizes the observed pattern. The reported snippet is
// reproduced below, kept commented out so no fetch calls are ever performed:
//
// setInterval(function () {
//   fetch("https://gyrovague.com/?s=" + Math.random().toString(36).substring(2),
//         { referrerPolicy: "no-referrer", mode: "no-cors" });
// }, 300);
How the reported pattern works — step by step
1) Timer runs in the browser: the CAPTCHA page reportedly installs a repeating timer (setInterval) that fires every few hundred milliseconds (300 ms in the reported snippet). Each tick runs the code that builds a new URL.
2) Randomized query prevents caching: the URL contains a randomized parameter (for example ?s=abc123), which prevents the browser or intermediate caches from returning a cached response. That forces the destination server to process every request.
3) Repeats while page is open: as long as a user keeps the archive CAPTCHA tab open, the loop keeps generating requests — effectively turning visitors into traffic generators aimed at the target URL.
4) Real-world effect: at roughly 3 requests per second per open tab, the numbers scale quickly. If 100 visitors leave the CAPTCHA page open, the target receives on the order of 18,000 requests per minute, which can saturate bandwidth, tie up CPU and database connections, and lead to denial of service on modest hosting. (A safe, network-free sketch of the loop follows this list.)
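To make the mechanics concrete, here is a minimal, safe sketch of the same loop. It builds the randomized URLs exactly as the reported snippet does, but only logs and counts them; no fetch call and no network traffic of any kind is produced. The hostname and the 300 ms interval come from the snippet above; the 10-second auto-stop is added for the demo.

// Safe demo: construct the same randomized URLs the reported snippet would,
// but only log and count them -- nothing is sent over the network.
let simulated = 0;
const timer = setInterval(function () {
  // Fresh base-36 token each tick (e.g. "k3j9x0q"): no two URLs are ever
  // identical, so no browser or edge cache could serve a stored response.
  const token = Math.random().toString(36).substring(2);
  const url = "https://gyrovague.com/?s=" + token;
  simulated++;
  console.log(simulated, url); // visualize instead of fetch(url)
}, 300); // ~3.3 ticks per second, the interval from the reported snippet

// Demo safety valve: stop after 10 seconds (~33 simulated requests).
setTimeout(function () { clearInterval(timer); }, 10000);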
Why this is especially impactful
- Small personal blogs and low-tier hosting plans are especially vulnerable.
- Because the traffic comes from many ordinary browsers on many distinct, often residential, IP addresses, simple IP-based blocking rules are of limited use.
- Randomized query strings largely defeat browser caching and CDN edge-caching: every URL is unique, so each request reaches the origin unless the cache is configured to ignore the query string.
Mitigation (practical)
- Rate-limit expensive endpoints server-side (return 429 for excessive calls).
- Whitelist known search-query patterns and answer obviously random, short queries cheaply, without invoking the search backend (a sketch combining this with rate limiting follows this list).
- Use WAF/CDN rules to block rapid repeated requests from identical user agents or referrers.
- Log and gather sample request headers for abuse reporting.
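As a concrete starting point, here is a minimal sketch of the first two mitigations plus header logging, assuming a Node.js/Express front end for the search endpoint. The window size, hit cap, token pattern, and port are illustrative assumptions, not values from the original report; tune them to your own traffic, since the token heuristic can also match legitimate one-word searches.

// Sketch: per-IP rate limiting plus a cheap short-circuit for the
// random base-36 queries described above. All thresholds are illustrative.
const express = require("express");
const app = express();

const WINDOW_MS = 10000; // sliding-window length (assumed value)
const MAX_HITS = 10;     // allowed requests per IP per window (assumed)
const hits = new Map();  // ip -> timestamps of recent requests

function rateLimited(ip) {
  const now = Date.now();
  const recent = (hits.get(ip) || []).filter(t => now - t < WINDOW_MS);
  recent.push(now);
  hits.set(ip, recent);
  return recent.length > MAX_HITS;
}

app.get("/", function (req, res) {
  const q = String(req.query.s || "");

  if (rateLimited(req.ip)) {
    // Excessive calls: log sample headers for the abuse report, then
    // refuse with 429 before any expensive work happens.
    console.warn("rate limited", {
      ip: req.ip,
      q: q,
      ua: req.get("user-agent"),
      referer: req.get("referer"),
    });
    return res.status(429).set("Retry-After", "60").send("Too Many Requests");
  }

  // Obviously random, short queries (lowercase base-36, no spaces) get a
  // cheap empty page instead of reaching the search backend. Beware of
  // false positives on real one-word searches.
  if (q && /^[a-z0-9]{6,13}$/.test(q)) {
    return res.status(200).send("No results.");
  }

  // ...normal (expensive) search handling would go here...
  res.send("Search results for: " + q);
});

app.listen(8080);

The same two signals used here, request rate per address and the random-token shape of the query parameter, map directly onto WAF/CDN rules at the edge.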
Community context & reported timeline
The behavior and code snippet were first publicized through a detailed post (Gyrovague) and were subsequently discussed on Hacker News, Lobsters, and Reddit’s DataHoarder community, where users analyzed the screenshots and code samples and debated intent and remediation. See the full source list at the bottom.