Adoption of Deep Fake Detection Algorithms in Social Media Platforms for Preventing Misinformation and Identity Abuse through AI-Based Media Forensics


Aashay Gupta

Abstract

The proliferation of deepfakes, synthetic media generated by artificial intelligence, poses significant threats to social media ecosystems, exacerbating the spread of misinformation and enabling identity abuse. This study investigates the adoption of deepfake detection algorithms within major social media platforms to mitigate these risks via AI-based media forensics. Employing a mixed-methods approach, including a systematic literature review, analysis of benchmark datasets such as FaceForensics++ and the DeepFake Detection Challenge (DFDC), and empirical evaluation of detection models, the research reveals that convolutional neural network (CNN)-based algorithms achieve up to 95% accuracy in controlled settings but falter in real-time social media contexts due to variability in content quality. Key findings highlight a 550% surge in deepfake incidents from 2019 to 2022, underscoring the urgency of platform integration. The conclusions advocate hybrid forensic frameworks that combine biological signals with blockchain verification, offering theoretical advances in media authenticity and practical policy recommendations for regulatory compliance. This work bridges gaps in scalable deployment, fostering resilient digital information environments.
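To make the forensic idea concrete, the sketch below illustrates, in dependency-free Python, the kind of operation a CNN-based detector performs: convolving a frame with a high-pass (Laplacian) kernel to expose the high-frequency blending artifacts that deepfake generators often leave behind, then pooling and scoring the response. The kernel, classifier weights, and `score_frame` interface are illustrative assumptions for this toy example, not the trained models evaluated in the study.

```python
from math import exp

def conv2d(image, kernel):
    """Valid 2-D convolution (cross-correlation) over a 2-D list of floats."""
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for i in range(len(image) - kh + 1):
        row = []
        for j in range(len(image[0]) - kw + 1):
            row.append(sum(image[i + a][j + b] * kernel[a][b]
                           for a in range(kh) for b in range(kw)))
        out.append(row)
    return out

def relu(feature_map):
    return [[max(0.0, v) for v in row] for row in feature_map]

def global_avg_pool(feature_map):
    vals = [v for row in feature_map for v in row]
    return sum(vals) / len(vals)

def sigmoid(x):
    return 1.0 / (1.0 + exp(-x))

def score_frame(frame):
    """Toy 'fake' score in (0, 1) for one grayscale frame (2-D list)."""
    # Laplacian high-pass kernel: face-swap blending disturbs
    # high-frequency detail, which forensic CNNs learn to detect.
    laplacian = [[0, -1, 0],
                 [-1, 4, -1],
                 [0, -1, 0]]
    pooled = global_avg_pool(relu(conv2d(frame, laplacian)))
    w, b = 1.5, -0.5  # placeholder classifier weights, not trained values
    return sigmoid(w * pooled + b)
```

A frame with strong high-frequency content scores higher than a smooth one, mirroring (in miniature) how a learned convolutional filter bank feeds a binary real/fake classifier; production systems stack many such learned filters and train them end to end on datasets like FaceForensics++.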

Article Details

How to Cite
Aashay Gupta. (2023). Adoption of Deep Fake Detection Algorithms in Social Media Platforms for Preventing Misinformation and Identity Abuse through AI-Based Media Forensics. International Journal on Recent and Innovation Trends in Computing and Communication, 11(11), 2040–2048. Retrieved from https://mail.ijritcc.org/index.php/ijritcc/article/view/11931