Hi, Ajak Amico welcomes you back to another blog. Recently I came across a reel on Instagram where a user complained that someone had used his publicly available pictures to create a fake account on Tinder, and to his surprise, the profile even got verified 💀. Before starting, if you haven’t subscribed to our channel, do subscribe, guys.
Follow our YouTube channel: @ajakcybersecurity (360 Videos)
Follow on Instagram: @ajakcybersecurity
This was the reel I came across. His name is Lekan, but someone created a fake Tinder account under the name ‘Nathan’, and as you can see, the profile even got verified. He tagged Tinder in the comment section, but Tinder still hasn’t replied. I scrolled through the comments looking for answers, but everybody was asking the same thing: how is this profile even verified? That’s when I started researching face authentication bypasses, especially on dating platforms.
You may already know that every reputed organization has a security program: if you find bugs in their website, apps, or API and report them responsibly, they reward you with a good amount of money. So how much would you get for breaking this specific face authentication? Tinder runs a bug bounty program managed through HackerOne, and you get paid according to the priority and severity of the bug. This one falls under P1 / S1 (Critical), which pays $20,000 for this specific vulnerability; a screenshot as proof is attached below.
As a security researcher, my suspicion falls under three categories:
1. API Pentesting
2. Bypassing via a Logic Flaw
3. Deepfake AI / Deep Learning

The first method is testing the whole application and its API endpoints. This is where white hat hackers come into action: they test the entire application, find flaws, and try to bypass them. If a flaw is reported legally and proven with complete evidence, you get paid; this is called bug bounty. Using a proxy tool like Burp Suite and tampering with the request and response, the face verification could have been bypassed; the sketch below illustrates the idea.
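To make that concrete, here is a minimal sketch of the kind of request/response tampering a bug hunter would try through Burp Suite. Every host name, endpoint path, and JSON field in it is hypothetical (this is not Tinder’s real API); the point is only the test itself: capture the verification request, replay it with modified data, and see whether the backend blindly trusts what the client sends.

```python
# Minimal sketch of API tampering through a proxy like Burp Suite.
# All endpoints, hosts, and fields below are HYPOTHETICAL — they only
# illustrate the question a tester asks: does the server trust
# client-supplied verification data?

import requests

PROXY = {"https": "http://127.0.0.1:8080"}   # route traffic through Burp
BASE = "https://api.example-dating-app.com"  # hypothetical host

session = requests.Session()
session.headers["Authorization"] = "Bearer <token>"

# 1. Send the legitimate selfie-verification request and capture it in the proxy.
resp = session.post(
    f"{BASE}/v1/selfie-verification",            # hypothetical endpoint
    files={"selfie": open("my_selfie.jpg", "rb")},
    proxies=PROXY,
    verify=False,  # Burp's CA is self-signed in a lab setup
)

# 2. Replay with tampered fields and compare responses. If the backend
#    honours something like {"status": "verified"} coming from the client,
#    the verification check is broken.
tampered = session.post(
    f"{BASE}/v1/selfie-verification/result",     # hypothetical endpoint
    json={"status": "verified", "match_score": 0.99},
    proxies=PROXY,
    verify=False,
)
print(resp.status_code, tampered.status_code, tampered.text)
```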
The second method is via a logic flaw. There are many angles in a pentesting process, and my favourite is the logic flaw. During my cybersecurity journey, I bypassed an organization’s fingerprint authentication with some stupidly simple logic. Imagine an app like WhatsApp: I created a group and switched on fingerprint authentication. When I called that phone from another device, the app should have asked for fingerprint authentication first, but it didn’t. And after the call ended, it should again have asked for fingerprint authentication; instead, hanging up redirected me straight into that group chat without any authentication. This may sound stupid, but this is exactly how a business logic flaw works, and I suspect this might be the case for the Tinder face auth too: some business logic flaw could have let an attacker skip the face authentication step, as the toy sketch below shows.
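Here is a toy illustration of that pattern, assuming a made-up app (it is not WhatsApp’s or Tinder’s real code): the lock is enforced on the normal “open chat” path, but the “call ended” path jumps straight to the chat and forgets the check.

```python
# Toy business logic flaw: one code path enforces the lock, another
# reaches the same screen and silently skips it.

class ChatApp:
    def __init__(self):
        self.fingerprint_lock_enabled = True
        self.unlocked = False

    def _require_fingerprint(self):
        if self.fingerprint_lock_enabled and not self.unlocked:
            raise PermissionError("Fingerprint required")

    def open_chat(self, chat):
        self._require_fingerprint()        # correct path: auth is enforced
        return f"Showing chat: {chat}"

    def on_call_ended(self, chat):
        # BUG: the developer redirects to the chat after a call and never
        # re-checks the lock state — that's the logic flaw.
        return f"Showing chat: {chat}"


app = ChatApp()
try:
    print(app.open_chat("My Group"))       # blocked, as intended
except PermissionError as e:
    print("open_chat:", e)

print(app.on_call_ended("My Group"))       # lock silently bypassed
```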
This might be everyone’s concern: could it be done with deepfake AI? This is the category I suspect the most. As you can see in many reels, people create deepfake-generated videos; to a casual viewer it may look like fun, but from a security researcher’s point of view it is dangerous. And speaking of deepfake AI, this was actually done in 2022: a security team bypassed both Tinder’s and Bumble’s face authentication using a machine learning / deep learning model called StarGAN v2. It is an image-to-image translation model, trained on thousands of face images spanning different skin tones, hair styles, and face shapes, both masculine and feminine, and it can re-render a source face in any of those learned styles. Using this, the team bypassed both Tinder and Bumble, and what is more interesting is that they did it with both masculine and feminine faces. For more details, I have attached the blog link below, and a rough sketch of the idea follows this paragraph.
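For intuition, here is a heavily simplified sketch of the image-to-image translation idea behind StarGAN v2: a generator G(source, style) re-renders a source face in the style extracted from a reference face. The real architecture (mapping network, style encoder, AdaIN-based generator, multi-domain discriminator) lives in the official clovaai/stargan-v2 repository; the stub modules below use random weights and exist only to show the data flow, not to produce an actual deepfake.

```python
# Rough sketch of the StarGAN v2 data flow with tiny stand-in modules.
# Not the real model — just G(source, style) with random weights.

import torch
import torch.nn as nn

STYLE_DIM = 64

class StyleEncoder(nn.Module):
    """Squeezes a reference face into a compact style vector."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, STYLE_DIM),
        )

    def forward(self, ref):
        return self.net(ref)

class Generator(nn.Module):
    """Re-renders the source face conditioned on the style vector."""
    def __init__(self):
        super().__init__()
        self.encode = nn.Conv2d(3, 64, 3, padding=1)
        self.inject = nn.Linear(STYLE_DIM, 64)
        self.decode = nn.Conv2d(64, 3, 3, padding=1)

    def forward(self, src, style):
        h = torch.relu(self.encode(src))
        h = h + self.inject(style)[:, :, None, None]  # style modulation
        return torch.tanh(self.decode(h))

source = torch.rand(1, 3, 256, 256)      # the source face to transform
reference = torch.rand(1, 3, 256, 256)   # face whose "style" is borrowed

style = StyleEncoder()(reference)
fake = Generator()(source, style)
print(fake.shape)  # torch.Size([1, 3, 256, 256]) — an image-shaped output
```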
All the methods above are just my guesses; it could have been exploited in other ways too, but my strongest suspect is deepfake AI. Until Tinder investigates it, I can’t say exactly how it was exploited. Fingers crossed. Deepfake AI is more dangerous than you think, so be careful when you post your own pictures on social media; you could be the next internet victim. Take this as an awareness post. See ya in the next blog :)
— — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — —
I hope you learned something from this blog. If so, kindly press the follow button for further updates. Best wishes from Ajak Cybersecurity. ❤️
“கற்றவை பற்றவை🔥” (Hold on to what you learn)
Learn Everyday, Happy Hacking 😁🙌
— — — — — — — — — — — — — — — — — — — — — — — — — — — — — — —
Follow our YouTube channel: @ajakcybersecurity
Follow on Instagram: @ajakcybersecurity