The Rising Threat of MMS Leaks and Cyber Crimes in India
MMS Leaks and Cyber Crime Surge
In the past year and a half, India has witnessed an alarming rise in MMS leaks, exposing private moments from offices to bedrooms. These leaks have spread beyond personal spaces into colleges and metro trains. The pressing question is not whether these actions are right or wrong, but how these MMS videos reached people's mobile devices. Just as that question began to unravel, a '19-minute video' that went viral on social media left security agencies and cyber experts baffled.
For the average viewer, determining the authenticity of this video was nearly impossible, raising doubts about whether it was genuine or generated using artificial intelligence (AI). Before the mystery could be solved, the emergence of 'Season 5' and a '50-minute full version' presented a new challenge for cyber experts. Scammers exploited the curiosity surrounding these AI-generated videos to distribute malware links to unsuspecting users, leading to the hacking of thousands of phones and the draining of bank accounts. Additionally, leaked CCTV footage from a Namo Bharat train raised serious questions about government surveillance.
What is particularly shocking about these incidents is that many of the leaks were either created using AI or shared by close acquaintances. Some leaks even involved security personnel. To draw viewers to these videos, scammers used the faces of well-known female YouTubers to create deepfakes. In one case, a victim's video was leaked by a friend, while CCTV footage from the Namo Bharat train was recorded and disseminated by a staff member. Cyber fraudsters created fake links using names like 'Lalita' and 'Sara Baloch' to deceive people. Overall, MMS leaks have become a new weapon for cyber criminals.
Incident 1: The Deception of the '19-Minute Video' and 'Season 5'
In November 2025, a 19-minute video featuring a Bengali YouTuber and her boyfriend went viral, leading to claims that 'Season 5' and a '50-minute full version' had been leaked. Cyber investigations revealed that no such 'Season 5' existed. The circulating videos were fabricated using AI deepfake technology. Forensic reports indicated that the facial expressions in the video appeared unnatural, the lip movements did not sync with the audio, and there were lighting mismatches. The investigation further uncovered that the true intention was cyber fraud, as scammers created fake links under the guise of 'Season 5,' which, when clicked, installed malware on users' phones, stealing banking details, UPI PINs, and OTPs.
Incident 2: CCTV Leak from the Namo Bharat Train
In December 2025, a 4-minute-44-second CCTV clip from a Namo Bharat train on the Ghaziabad-Meerut corridor went viral, showing a young couple in intimate moments. The incident caused a nationwide uproar. Investigations revealed a shocking detail: the footage was recorded on a mobile phone and leaked by an NCRTC staff member named Rishabh. The very people responsible for passenger safety and surveillance had violated passengers' privacy. NCRTC dismissed the accused and filed an FIR at the Muradnagar police station. The aftermath was tragic: the couple, both college students, dropped out and attempted suicide amid depression. Eventually, under family pressure, they were married.
Incident 3: Digital Honey Trap Using Sara Baloch and Lalita's Names
In February 2026, a link was circulated connecting Pakistani creator Sara Baloch to an 'Assam incident,' claiming her 'leaked MMS' was going viral. In reality, Sara Baloch had no connection to the video; it was a complete cyber scam using her name to empty people's accounts. Meanwhile, in Karimnagar, Telangana, police arrested Lalita and her husband, who lured men to rented flats under the pretense of friendship, where her husband recorded videos with hidden cameras. They then blackmailed the victims for money. The police clarified that the 'Lalita viral video' was not on the internet but secured in the police evidence room. Nevertheless, scammers began creating fake links using her name to hack people's phones.
Incident 4: The Bengali YouTuber's Boyfriend's Betrayal
A 16-minute private video of a Bengali female YouTuber suddenly went viral, causing a stir on social media. The YouTuber later released a video revealing that her ex-boyfriend had leaked it out of revenge after their breakup. In a subsequent video, it was claimed that the leaked footage had been staged and edited, but by then significant damage had already been done. The YouTuber found it difficult even to step outside her house.
Incident 5: Viral MMS of a Bhojpuri Star
In November 2025, an MMS of a 15-year-old Bhojpuri actress went viral, causing chaos online. Forensic investigations later revealed that the video was entirely AI-generated, with the victim's face superimposed onto another body, and linked it to an international porn-bot network. Similarly, a video of a female influencer from Assam also went viral, with forensic reports confirming it was created using AI body-swap technology; the lighting mismatches and differing backgrounds were evident. The victim said that AI had ruined her life.
Can I also fall victim to deepfake MMS?
Yes, unfortunately, anyone can become a victim. Deepfake technology has become so cheap and accessible that anyone can steal your photos from social media and create explicit videos in just 10-15 minutes. The most alarming part is that you don't need to be a celebrity; ordinary users' photos are stolen just as easily.
Where do scammers find our pictures, and how can we prevent them from accessing them?
The most common sources of stolen images are WhatsApp display pictures, Instagram stories, and Facebook profiles. Once your images are in a scammer's hands, they can be misused in any way. Therefore, keep your social media profiles private, avoid befriending strangers, and never click on unknown links. If you ever do click one, report it immediately to the cyber cell.
How can deepfake videos be identified?
Identifying deepfake videos can be quite challenging. However, there are certain indicators to look for. First, observe the facial expressions; deepfakes often appear lifeless, robotic, or unnatural. Second, blinking can reveal the truth; real people blink regularly, while deepfakes may blink irregularly or infrequently. Third, check the synchronization of lips and audio; deepfakes often have mismatched lip movements. Lastly, pay attention to lighting and skin tone; if the face and body have different lighting, it could indicate a deepfake.
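The checks above can be thought of as a simple scoring checklist. The sketch below is purely illustrative, not a real detector: the input measurements (blink rate, lip-sync offset, lighting difference) are assumed to come from separate video-analysis tools, the thresholds are hypothetical examples rather than validated forensic values, and the hard-to-quantify "lifeless expression" cue is omitted.

```python
# Illustrative heuristic only: scores a video's deepfake risk from
# pre-extracted measurements. Thresholds are hypothetical examples,
# not validated forensic values.

def deepfake_risk_score(blinks_per_minute: float,
                        lip_sync_offset_ms: float,
                        face_body_lighting_diff: float):
    """Return (number of red flags, list of red-flag descriptions)."""
    flags = []
    # Real people typically blink about 15-20 times per minute;
    # many deepfakes blink far less often or irregularly.
    if blinks_per_minute < 8:
        flags.append("abnormally low blink rate")
    # Lip movement lagging well behind the audio suggests a
    # synthesized or swapped face.
    if lip_sync_offset_ms > 100:
        flags.append("lips out of sync with audio")
    # A large brightness/colour mismatch between the face region and
    # the rest of the frame (0 = identical, 1 = completely different)
    # hints at a pasted-on face.
    if face_body_lighting_diff > 0.3:
        flags.append("lighting mismatch between face and body")
    return len(flags), flags

# Example: a clip with very few blinks, badly lagging lips, and a
# strong lighting mismatch trips all three checks.
score, flags = deepfake_risk_score(blinks_per_minute=3,
                                   lip_sync_offset_ms=250,
                                   face_body_lighting_diff=0.5)
print(score, flags)
```

In practice, each measurement would require real computer-vision tooling (face-landmark tracking for blinks and lip motion, region statistics for lighting), and no single cue is conclusive; the value of a checklist like this is that several weak signals together raise suspicion.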