How Apple Scammed Me Out Of $50,000 in their Bug Bounty Program (Silent Patching & Ignoring Me)


Random User

Back in 2020, around iOS 13.3.1, Apple made internal changes to the iOS permission system. And I found a vulnerability in it!

I found that user-installed apps could get full access to the photo library, even if you explicitly disabled that app’s access to photos. Any installed app could’ve had full, permanent access to the entire photo library, while Settings showed the app’s Photos permission as disabled.

Reading infosec Twitter, I saw many posts accusing Apple of scamming researchers: silently patching vulnerabilities submitted through their bug bounty program without ever crediting or rewarding the submitter, delaying responses, and straight up ignoring reports. https://securityaffairs.com/123321/mobile-2/apple-silently-fixed-ios-zero-day.html

But I gave in and decided to report it via email anyway. It was not a complex bug, so it didn’t take long to write the report and record a PoC video showcasing it.

After a few days, I received a confirmation email saying that they were looking into the issue and would contact me if they had any questions.

In April 2020, I received an email from them asking me to take a sysdiagnose right after reproducing the issue. I sent the requested sysdiagnose.

They responded saying that they were unable to access the file I sent, and that I needed to “remove my logging profile” (?) and then re-take the sysdiagnose and send it. I was pretty confused because I never had a configuration profile installed, but I took another sysdiagnose and sent it to them.

After an additional two months, in June 2020, iOS 13.5 was released, which fixed the issue. Yay! But there was still no answer from Apple, so I requested a status update. They responded saying that they were looking into how the issue was addressed, as well as into crediting me.

In September 2020, I got an email saying that they were never able to reproduce the issue, and “any changes made to iOS have been made incidentally”, not as a result of my report.

I was pretty confused and asked them why it took them four months (half of an iOS version’s lifespan) to try to reproduce the issue, and why they only notified me after six months that they couldn’t reproduce it. Of course “incidental” changes can happen over such a long period, so why would they only try to reproduce the issue now?

I never got any answer afterwards and they completely ignored me and stopped responding.

A few weeks ago, I decided to confront them on their new “Apple Security Bounty” site, which they claim to have created to solve the “communication issues” (not scams) that happened via email. I created a new report referencing the old one and asked what happened. This is how they responded.

They just marked this report as “closed” without replying further.

If an iOS engineer found this vulnerability and decided to patch it four months after my report, then they should be able to confirm that I was the first to report it through their bounty program, and I should be awarded for it as a valid report.

If you report a vulnerability that has already been found internally by an engineer or researcher, it is marked as a duplicate. But if I report a vulnerability and a developer finds it four months later, why should I not be credited and awarded? The engineer is the one who found a duplicate; I’m the original reporter.
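The crediting rule argued for here is simple: the earliest report wins, regardless of whether later duplicates come from inside or outside the vendor. A minimal sketch of that rule (the reporter names and dates are hypothetical, loosely matching the timeline in this story, and this is obviously not Apple's actual triage logic):

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Report:
    reporter: str
    received: date
    internal: bool  # True if filed by the vendor's own engineers

def credit_for(reports):
    """Credit goes to the earliest report; every later find,
    internal or external, is a duplicate of it."""
    return min(reports, key=lambda r: r.received).reporter

# Hypothetical timeline: external report in February, internal find in June.
reports = [
    Report("bounty_researcher", date(2020, 2, 1), internal=False),
    Report("ios_engineer", date(2020, 6, 1), internal=True),
]
print(credit_for(reports))  # prints "bounty_researcher"
```

Under this rule, the order in which reports are *processed* is irrelevant; only the timestamp of receipt matters.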

Also, if they really wanted to reproduce the issue, they could easily spin up a Corellium iOS 13.3.1 VM right now, and if they really wanted to check how this vulnerability was fixed, they could look at the code changes made to the permission system in 13.5. Instead, they just stopped responding to me (just like old times).

But I still don’t believe them, because most other researchers have complained about the same silent patching of issues while Apple benefits from their free labor.

I also find it ironic that Apple’s contract factory in China (Pegatron) has been embroiled in controversy over “sweatshop labor”, even scamming workers out of pay, while Apple seemingly does the exact same thing to security researchers: benefiting from their free labor while never paying them for reports that should be eligible.

On their bounty table, “unauthorized access to data protected by TCC prompt” (full photo library) is worth $50,000:

They even show this ON THE SITE HOMEPAGE: someone supposedly being awarded $50k for reporting an unauthorized full photo library access bug:

But again, what is interesting is that there is a person who found this exact vulnerability before (full photo access from the lock screen): VBarraquito, a Spanish taxi driver. He has never received a dime from Apple, despite reporting the issues to their bug bounty program and practicing confidential disclosure. https://twitter.com/VBarraquito/status/1332695073599975424

https://twitter.com/VBarraquito/status/1330602256107003907/photo/1

He also claims to have been promised “some sort of gift from Apple” for reporting many of these issues but he never received anything.

Since he kept getting scammed, he resorted to publicly posting his findings, which EverythingApplePro then showcased.

But I won’t only talk about this issue; I want to talk about how, in my opinion, Apple deliberately leaves certain vulnerabilities unpatched.

I know that this may sound like a conspiracy, but I will give you three examples (there are way more if you go looking for them).

1. De-anonymizing AirDrop users

A group of security researchers from Germany found that, due to the lack of strong encryption, AirDrop can reveal senders’ names and contact info. It was reported to Apple in 2019 and remained unpatched for years; recently, China has begun exploiting this vulnerability to identify people spreading anti-CCP messages.
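The core weakness the researchers reported is that AirDrop identifies contacts by exchanging unsalted SHA-256 hashes of phone numbers, and the phone-number keyspace is small enough to brute-force. A simplified sketch of that attack (the exact wire format differs from this, and the phone number and prefix here are made up):

```python
import hashlib
from itertools import product

def sha256_hex(s: str) -> str:
    """Hex digest of the UTF-8 encoding of s."""
    return hashlib.sha256(s.encode()).hexdigest()

def brute_force_phone(target_hash: str, prefix: str, digits: int):
    """Enumerate every number under a known country/area prefix
    until one hashes to the observed value."""
    for combo in product("0123456789", repeat=digits):
        candidate = prefix + "".join(combo)
        if sha256_hex(candidate) == target_hash:
            return candidate
    return None

# Hypothetical: an attacker sniffs a hash from the AirDrop handshake
# and already guesses the victim's country/area prefix.
observed = sha256_hex("+15550123")  # stand-in for the sniffed hash
print(brute_force_phone(observed, "+1555", 4))  # prints "+15550123"
```

Even a naive single-threaded search like this covers a full 10-digit numbering plan in hours on commodity hardware, which is why hashing alone does not anonymize phone numbers.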

2. iOS VPN bug leaks traffic (reported to Apple in 2020; “expected behaviour, won’t fix”): https://www.ipvanish.com/blog/ios-vpn-leaks/

This issue has been found by many people, reported to Apple multiple times, and even publicly disclosed. If you set up an “always-on” VPN, it stops working randomly and leaks your traffic: after restarting the device, the VPN takes ~10 seconds to come back up, during which all your traffic goes directly to your ISP, and it also drops out randomly during normal use. This is distinct from the VPN DNS bug, another issue that causes DNS requests to leak outside the tunnel while it is up.

Apple said that this is by design, because iOS only offers a VPN kill-switch for MDM-managed devices, not normal users. Why, when Android has offered this feature for years, and it works perfectly and never leaks traffic? What purpose could withholding it serve, other than pleasing repressive regimes that want to spy on their citizens easily? Every other major OS has this feature (Android, Windows, Linux, etc.).

There is also the issue of the forced use of the WebKit engine for all iOS apps. WebKit is Apple’s insecure browser engine which, combined with the poor sandboxing of iOS (compared to Android), is responsible for the majority of iOS attacks and exploits. If Apple allowed other browser engines to run on iOS, such as Blink (Chrome), it would be much harder to develop an exploit that executes successfully across multiple engines (security through fragmentation), and Chrome is much more secure than Apple’s WebKit-based Safari.

3. The iOS “Triangulation” exploit

Another interesting thing is the widely publicized iOS Triangulation exploit. A year ago, Kaspersky staff found that their phones had been infected using a never-before-seen exploit chain. Upon investigation, it appeared to have been developed by a nation-state. Kaspersky is a Russian infosec company.

The crazy part is that this exploit utilizes a previously unknown, undocumented hardware feature in the A12-A16 SoCs, which bypasses security protections like PAC and PPL. https://www.bleepingcomputer.com/news/security/iphone-triangulation-attack-abused-undocumented-hardware-feature/

The weird part is that this feature had never been used in shipped firmware or documented anywhere, so it’s unknown how state-backed attackers learned of it or how to exploit it.

This is not proof that Apple intentionally implemented this feature as a “backdoor” for the US government, but it surely is suspicious, considering their past.

Apple, in the past, has also agreed to lower the security and privacy of iCloud users in China by storing their data and their encryption keys on servers inside mainland China, in buildings managed by Chinese state employees, effectively rendering the encryption useless.

So what is the conclusion?
