I found a security mistake in Spotify’s documentation site https://developer.spotify.com and reported it so they could fix it. So to be clear, I have not in any way hacked Spotify. Sorry for the slightly clickbaity title, I couldn’t help myself. However, I still think it’s an interesting story, and there’s a good little security lesson in it. So let’s get started.
My day job as a Security Engineer consists of working with Software Engineers to build cool services in the most secure ways possible. A colleague of mine was implementing an OpenID Connect client against a third party’s API, and I had advised him to implement it with PKCE as part of the flow. If all this sounds like gibberish to you, don’t worry, it’s not too important for the overall story.
Footnote — TL;DR on OpenID Connect and PKCE
The important part is that OpenID Connect is used for authenticating users via a third-party identity provider (the “Login with Google/Facebook/etc.” buttons you see in various places). PKCE is a small addition to the flow that improves its security in certain situations. I may do a write-up on this subject in the future, but in the meantime, you can read more here.
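To make that a little more concrete, here is a rough sketch of the PKCE part of the flow in browser JavaScript. This is my own illustration of the general idea (the function names and the 32-byte length are my assumptions, not code from our project or from Spotify): the client creates a random code_verifier, derives a code_challenge from it, sends the challenge with the authorization request, and later proves possession of the verifier when exchanging the authorization code for tokens.

// Sketch only: generate a PKCE code_verifier / code_challenge pair using the Web Crypto API.
async function createPkcePair() {
  // 32 random bytes, base64url-encoded, gives a 43-character verifier (the spec's minimum length)
  const bytes = crypto.getRandomValues(new Uint8Array(32));
  const codeVerifier = base64UrlEncode(bytes);

  // code_challenge = BASE64URL(SHA-256(code_verifier)), per the S256 challenge method
  const digest = await crypto.subtle.digest('SHA-256', new TextEncoder().encode(codeVerifier));
  const codeChallenge = base64UrlEncode(new Uint8Array(digest));

  return { codeVerifier, codeChallenge };
}

function base64UrlEncode(bytes) {
  return btoa(String.fromCharCode(...bytes))
    .replace(/\+/g, '-')
    .replace(/\//g, '_')
    .replace(/=+$/, '');
}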
The Backstory — Continued
So anyway, my colleague implemented the OpenID Connect Client and sent me a Pull Request so I could review the code and give him feedback. It was not a big Pull Request, so I didn’t expect to have much feedback to give, but I did notice a security issue in the code. Here’s the equivalent of what my colleague did to generate a code verifier. Can you spot the issue?
function generateRandomString(length) {
  let text = '';
  let possible = 'ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789';
  for (let i = 0; i < length; i++) {
    text += possible.charAt(Math.floor(Math.random() * possible.length));
  }
  return text;
}
The issue here is the use of Math.random(), as it is pseudo-random and not cryptographically random. This gets very technical, but informally it means that it’s not “random enough”. If you want to dive into this rabbit hole, have a look here or here.
Essentially, if an adversary worked hard enough, they would likely be able to predict the output of Math.random(), which would allow them to predict the output of generateRandomString() and guess the verifier. This would reduce the security of the PKCE flow and potentially open the client up to authorization code interception attacks. According to the PKCE specification, the verifier is to be created in the following way:
code_verifier = high-entropy cryptographic random STRING using the
unreserved characters [A-Z] / [a-z] / [0-9] / "-" / "." / "_" / "~"
The implementation shown above doesn’t follow that definition, as it isn’t cryptographically random. So my colleague had made a minor mistake that would affect the security of the solution. Luckily, it was just a matter of switching approaches and implementing it with a cryptographically secure function instead. However, when we talked about it, he surprised me by saying “Oh, I used Spotify’s documentation as inspiration for that code”. That made my ears perk up a little bit.
My colleague had made a Google search for PKCE, and Google had sent him to Spotify’s documentation, even though he was implementing OpenID Connect with PKCE for something else. His thinking was “Spotify must know what they are doing”, and I would probably have agreed. However, when we went to developer.spotify.com and looked at their example for PKCE, we found essentially the code snippet shown above, Math.random() and all.
So the error came from that code example. We knew how to fix the problem for our solution, but I wanted to report this issue to Spotify so they could fix it on their end too.
Full disclosure, I am not a bug bounty hunter who constantly looks for security flaws, hoping to get a paycheck out of it. However, I have reported a couple of small findings like this before. My thinking is that companies are usually interested in correcting these things, but I also don’t want to make a bigger deal out of it than it is. The biggest issue with this type of flaw is that there is no way of knowing how many people used the vulnerable code in their clients. But more on the impact in a bit.
So where to report it? Initially, I had a look at their HackerOne policy, but it seemed a stretch to say that my finding fit anywhere into the scope they define there. Remember, this was not a security finding in their source code. Instead, they were inadvertently reducing the security of OpenID Connect clients implemented according to their documentation. As such, there was not a security flaw anywhere in their services.
I Googled a bit and found their online forum community.spotify.com, where I managed to get ahold of a Spotify employee through DM. While the finding was not critical, I did not want to bring attention to the issue publicly before Spotify had a chance to react to it, so I was trying to avoid “reporting it” on a public forum. You can see some of my reports in the following picture, with the reaction from the Spotify employee below.
I checked back a week later and saw that they had yet to fix the issue. I figured it might have ended up in a backlog so huge that no one would ever get to it. However, when I checked again roughly six months later, the documentation had been updated, and you will see a different version if you go here. You can see a picture of what it looks like at the time of writing below:
So they fixed it using crypto.getRandomValues(). Awesome! This was also the approach we had taken in our internal solution.
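For reference, here is a minimal sketch of what a generateRandomString() built on crypto.getRandomValues() can look like. This is my own illustration, assuming a browser (or other Web Crypto) environment, and not a copy of Spotify’s updated example:

// Sketch: build the random string from a cryptographically secure source instead of Math.random()
function generateRandomString(length) {
  const possible = 'ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789';
  const randomValues = crypto.getRandomValues(new Uint8Array(length));
  let text = '';
  for (let i = 0; i < length; i++) {
    // Map each random byte to a character. The modulo introduces a slight bias,
    // since 256 is not a multiple of 62, but the source of randomness is now secure.
    text += possible.charAt(randomValues[i] % possible.length);
  }
  return text;
}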
The impact of the issue is hard to gauge accurately. Very few may have used the vulnerable code snippet, or many could have used it. There is no way to tell. What mitigates the issue is that other factors would have to align for this to become an actionable vulnerability that someone could abuse. Authorization code interception attacks usually occur if the OpenID Connect flow is run in a malicious web client or mobile app. Furthermore, adversaries would have to know that a given OpenID Connect client was vulnerable to this attack, and they would have to figure out how to break the pseudo-randomness of the Math.random() function for an attack to succeed. I am not enough of an expert on the inner workings of Math.random(), but I would guess that the likelihood of these factors aligning is small.
All that said, there are loads of horror stories where adversaries have been able to abuse the fact that an implementation of a security protocol used pseudo-randomness instead of cryptographic randomness. If you’d like to dive further into this topic and many more, I can recommend the book Serious Cryptography by Jean-Philippe Aumasson (This is not an ad, I just really liked the book). But be prepared for loads of math and cryptanalysis. It’s a heavy one.
So what were the takeaways of the story? My point was never to point fingers at Spotify. I think they handled themselves very well. They professionally took my input and dealt with it. Instead, I would hope to give you the following three takeaways from the story:
1. When implementing cryptographic functions, always use cryptographic randomness. The only downside is performance, which you rarely notice in flows like these.
2. If you find a security issue, report it accurately. Don’t exaggerate for sensation’s sake, and don’t go looking for bug bounties where you aren’t eligible. I receive “security findings” in my day job, and most of the stuff we get is very small-scale. Yet, people consistently ask about a bug bounty even though our security policy clearly states that we don’t have a bug bounty program. “Bug Bounty Begging” will only result in people not taking you seriously about whatever finding you may have. However, if you report things decently and accurately, you are much more likely to get the message through.
3. On the other hand, if your company gets a high-quality security finding from an outside source, show appreciation and fix the issue like Spotify did in this case. Even if the issue is not critical, sometimes it’s the combination of many small security issues that results in serious security problems.