Less than one-third of people worldwide even knew what a deepfake was in 2022. If you're unsure what that means, a deepfake is AI-generated audio, video, or imagery that convincingly imitates real people or events. Your brain processes thousands of images and videos every day, yet most of us don't notice when we're being duped.
Research from the Center for Humans and Machines (CHM) found a massive gap between what we think we can detect and what we actually can. Most of us feel smart and digitally savvy, while in reality, we could become prime targets for manipulation.
Why are we such easy prey? Because our brains are wired to trust what we see. That instinct helped our ancestors survive—but in today’s digital jungle, it’s a major liability. Combine that with confirmation bias, emotional triggers, and the power of social proof, and you’ve got the perfect storm for falling for deepfakes.
Our brains were built for a world where seeing meant believing. MIT neuroscientists found that our brains process visual information in under 13 milliseconds, making us lightning-fast at absorbing what we see – and perfect victims for digital trickery. Understanding our cognitive blind spots is the first step in knowing how to detect deepfakes before they manipulate what we believe.
"Seeing is believing" is coded into our neural wiring. We process images way faster than words. According to Mehrabian's rule, a whopping 55% of communication impact comes from visual body language, while the actual communication with words contribute to just 7%. What was once our evolutionary superpower is now our biggest weakness in the deepfake era.
Our brains are full of blind spots that deepfakes exploit:
Despite all our claimed skepticism, we're still suckers for "authentic" content. Studies show 57% of consumers trust human-created content over AI-generated stuff. Among people who know all about AI marketing, 40% trust AI content just as much as human content. This messy middle is where deepfakes thrive.
Each time you see a lie, you're more likely to believe it. Research proves that perceived truthfulness jumps logarithmically with repetition – the biggest leap happens the second time you see something. One study found that simply repeating misinformation warped people's judgment and fueled its spread.
Angry faces impact us whether we think they're real or fake. But smiles and happy expressions? They only work if we think they're genuine. Threats grab our attention regardless of authenticity, while positive content needs to feel real to matter.
"Up to this point, we have not seen a single example of deepfake generation algorithms that can create realistic human hands and demonstrate the flexibility and gestures of a real human being" — Siwei Lyu, Co-Director of the Center for Information Integrity (CII) and Director of the Media Forensics Lab at University at Buffalo
Deepfakes aren't perfect—yet. Thank goodness!
Even the most convincing AI fakery leaves behind telltale signs if you know where to look. Let's uncover these digital fingerprints that scream "FAKE!"
Start by checking eye blinks, lip sync, and facial movements—they’re often the first giveaways.
Your real human peepers naturally blink 15-20 times per minute, but AI-generated faces? They're weird blinkers!
They blink too little, too much, or with a robotic precision that screams artificial. And the lips? When they don't match the words coming out, that's red flag city!
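If you want to go beyond eyeballing it, blink rate is something you can actually measure. Here's a minimal Python sketch, assuming you've already pulled the six standard eye landmarks per frame from a face-landmark library; the eye-aspect-ratio trick is a common blink-detection shortcut, and the thresholds below are illustrative rather than tuned.

```python
import numpy as np

def eye_aspect_ratio(eye_points):
    """Eye Aspect Ratio (EAR) from six (x, y) eye landmarks.

    The ratio drops sharply while the eye is closed, so dips below a
    threshold can be counted as blinks.
    """
    p = np.asarray(eye_points, dtype=float)
    vertical = np.linalg.norm(p[1] - p[5]) + np.linalg.norm(p[2] - p[4])
    horizontal = np.linalg.norm(p[0] - p[3])
    return vertical / (2.0 * horizontal)

def blinks_per_minute(ear_per_frame, fps, closed_thresh=0.21, min_closed_frames=2):
    """Count EAR dips as blinks and convert to a per-minute rate.

    closed_thresh and min_closed_frames are illustrative defaults, not tuned values.
    """
    blinks, closed = 0, 0
    for ear in ear_per_frame:
        if ear < closed_thresh:
            closed += 1
        else:
            if closed >= min_closed_frames:
                blinks += 1
            closed = 0
    minutes = len(ear_per_frame) / fps / 60.0
    return blinks / minutes if minutes else 0.0

# Roughly 15-20 blinks/min is normal for someone on camera; a rate far
# outside that band, or eerily regular dips, is worth a closer look.
```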
Here's what to check:
Real videos follow actual physics (shocking, right?). Deepfakes struggle hard with shadows and reflections. Check if the light bouncing off someone's eyes matches what's lighting up the room. Are the shadows pointing in different directions? That's not how sunlight works!
Want to catch a deepfake red-handed? Look at their... well, hands! AI is terrible at creating realistic fingers—count 'em and look for weird shapes or extra digits. Those teeth often look like baby chompers or something from a horror movie. And ears? They're often missing lobes or look like they belong on an elf.
Effective audio deepfake detection looks for unnatural pauses, monotone delivery, and missing breath sounds that real speech naturally includes.
Real people breathe, pause, and vary their tone naturally.
Fake audio sounds flat, with slurred words or awkward stumbling over phrases. Missing those tiny breath sounds between sentences? That's your BS detector going off!
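Those cues are measurable too. The sketch below uses librosa to look at pause structure and pitch spread; the file name is a placeholder, and the numbers are rough signals to compare against a known-real recording, not verdicts.

```python
import librosa
import numpy as np

# "suspect_clip.wav" is a placeholder; any mono speech recording works.
y, sr = librosa.load("suspect_clip.wav", sr=16000, mono=True)

# Split into voiced chunks; the gaps between chunks are the pauses.
# Real speech has short, irregular pauses (breaths, hesitations), while
# flat synthetic speech often has almost none, or oddly uniform ones.
chunks = librosa.effects.split(y, top_db=30)
pauses = [(start - end) / sr for end, start in zip(chunks[:-1, 1], chunks[1:, 0])]

# Pitch track: natural delivery varies its fundamental frequency a lot,
# so a very small spread hints at monotone, possibly synthetic, speech.
f0, voiced_flag, _ = librosa.pyin(y, fmin=65, fmax=500, sr=sr)
pitch = f0[~np.isnan(f0)]

if pauses:
    print(f"{len(pauses)} pauses, mean length {np.mean(pauses):.2f}s")
else:
    print("no pauses detected at all")
print(f"pitch spread: {np.std(pitch):.1f} Hz")
```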
Zoom in! Deepfakes hate it when you do that. Look for blurry edges around hair or jewelry and unnatural skin textures that look like plastic. Those pixel glitches aren't just tech problems, they're your clues to spotting manipulation.
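You can even put a rough number on that plastic look. This sketch uses OpenCV's variance-of-Laplacian as a crude local sharpness score per region; the file name and grid size are placeholders, and a suspiciously smooth patch is a hint to look closer, nothing more.

```python
import cv2
import numpy as np

# "frame.png" is a placeholder for any still frame grabbed from the video.
img = cv2.imread("frame.png")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

def sharpness_map(gray, grid=8):
    """Variance of the Laplacian per grid cell: a crude local sharpness score."""
    h, w = gray.shape
    scores = np.zeros((grid, grid))
    for i in range(grid):
        for j in range(grid):
            cell = gray[i * h // grid:(i + 1) * h // grid,
                        j * w // grid:(j + 1) * w // grid]
            scores[i, j] = cv2.Laplacian(cell, cv2.CV_64F).var()
    return scores

# A face region that is much smoother than its surroundings, or a halo of
# blur around hair and jewelry, is exactly the kind of seam worth zooming into.
print(np.round(sharpness_map(gray), 1))
```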
Sometimes the easiest way to spot a fake is just asking: "Wait, could this actually happen?" People standing on a moving semi truck? Politicians saying things wildly out of character? Check the source too—brand new accounts or known spreaders of misinformation are red flags waving in your face.
The more you know about these tells, the harder it becomes for deepfakes to pull the wool over your eyes.
Deepfakes have exploded by a mind-blowing 900% year-over-year in 2024. The good news is that there are tools you can use right now to fight back. The best deepfake detection methods combine visual analysis, audio inconsistencies, and metadata review.
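Metadata is the easiest of the three to check yourself. As a quick sketch, ffprobe (which ships with FFmpeg) will dump a file's container and stream metadata; the file name below is a placeholder, and missing or odd metadata is a signal rather than proof.

```python
import json
import subprocess

def probe(path):
    """Dump container and stream metadata with ffprobe (part of FFmpeg)."""
    result = subprocess.run(
        ["ffprobe", "-v", "quiet", "-print_format", "json",
         "-show_format", "-show_streams", path],
        capture_output=True, text=True, check=True,
    )
    return json.loads(result.stdout)

info = probe("clip.mp4")  # placeholder file name
tags = info.get("format", {}).get("tags", {})
print("container:", info.get("format", {}).get("format_name"))
print("encoder:  ", tags.get("encoder", "<missing>"))
print("created:  ", tags.get("creation_time", "<missing>"))
# Missing timestamps, an unexpected encoder string, or a mismatch with the
# claimed source proves nothing on its own, but it is a reason to keep digging.
```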
These tools spot deepfakes using science, code, and a lot of digital intuition. Here are some of the top ones you should know about:
DeepFake-o-meter is the real deal: a free, open-source platform that doesn't hide anything. The UB Media Forensics Lab built it to combine multiple detection algorithms in one easy-to-use interface. Since launching, they've analyzed over 6,300 suspicious videos.
What makes it awesome:
When Poynter tested a fake Biden robocall, guess which tool nailed it? DeepFake-o-meter beat out three other free tools, spotting it as 69.7% likely to be AI-generated.
Microsoft Video Authenticator is like having x-ray vision for photos and videos.
It hunts for subtle giveaways:
Here's the catch: you can't just download it. Microsoft offers it through partners to news publishers and political campaigns to fight election deepfakes. Why hide it from the public? Good question!
FaceForensics++ is crazy good at one thing: spotting when faces have been swapped in videos. Meanwhile, Deepware shows off with some impressive stats:
Looking to contribute to a deepfake detection project? Open-source initiatives on GitHub offer accessible codebases and datasets. The open-source community is building detection tools you can use right now:
Under the hood, these detection systems use some serious tech:
Modern deepfake detection techniques use smart AI to flag visual glitches and unnatural movements most of us miss. Most tools don't rely on just one approach; they use multiple techniques at once to catch as many fakes as possible.
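To make that concrete, here's a stripped-down sketch of the most common building block: a frame-level CNN classifier whose per-frame scores are averaged into a single verdict. The backbone and untrained head below are purely illustrative, not any particular tool's model.

```python
import torch
import torch.nn as nn
from PIL import Image
from torchvision import models, transforms

# Illustrative only: a ResNet-18 backbone with a single "fake" logit on top.
# Real tools train such models on datasets like FaceForensics++; the weights
# here are untrained, so the scores only demonstrate the pipeline shape.
model = models.resnet18(weights=None)
model.fc = nn.Linear(model.fc.in_features, 1)
model.eval()

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

def fake_probability(frame_paths):
    """Score each (face-cropped) frame, then average the per-frame scores."""
    scores = []
    with torch.no_grad():
        for path in frame_paths:
            x = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
            scores.append(torch.sigmoid(model(x)).item())
    return sum(scores) / len(scores)
```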
Security experts have been warning for some time about the threats deepfakes pose to individuals and organizations alike. The research is clear: organizations can no longer rely on human judgment alone to spot deepfakes and must look to alternative means of authenticating the users of their systems and services.
Neither humans nor AI can win this battle alone. I don't care how advanced the tech gets; we need both working together.
AI has a few blind spots when trying to spot fakes:
Our brains have some special sauce that machines just can't copy:
Regular people perform just as well as fancy machine-learning models when viewing videos with minimal context. And when we average human responses together? That "crowd wisdom" matches AI accuracy at 80%.
We're especially good at processing whole faces at once. When researchers flipped faces upside down, misaligned them, or partially covered them, human detection ability tanked while AI performance barely changed. This suggests we use brain pathways that machines simply don't have.
The winning approach? Humans + AI working as a team. Studies prove that combining human judgment with AI analysis creates better systems than either working alone.
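In practice, that combination can be as simple as blending a crowd's votes with a detector's score. The toy sketch below does exactly that; the 50/50 weighting and the 0.5 cutoff are arbitrary, illustrative choices.

```python
def combined_verdict(human_votes, model_score, model_weight=0.5):
    """Blend crowd judgment with a detector's score.

    human_votes:  list of 0/1 judgments (1 = "looks fake")
    model_score:  the detector's fake probability, between 0 and 1
    model_weight: how much to trust the model vs. the crowd (arbitrary here)
    """
    crowd_score = sum(human_votes) / len(human_votes)
    blended = model_weight * model_score + (1 - model_weight) * crowd_score
    return "likely fake" if blended >= 0.5 else "likely real"

# Example: 7 of 10 reviewers flag the clip and the detector says 0.62.
print(combined_verdict([1, 1, 1, 0, 1, 1, 0, 1, 1, 0], 0.62))
```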
Timing matters too. People were more likely to correctly spot deepfakes when they got AI warnings before watching content rather than after. Prevention is better than cure!
'A false image of reliability is worse than low reliability' – Siwei Lyu
85% of security pros report a rise in AI-based cyberattacks. Deepfakes are part of that shift, which means staying cautious online matters more than ever.
Start with your accounts. Use strong, unique passwords for every important login, and turn on two-factor authentication wherever it’s available. Password managers can make this much easier.
Be mindful of what you share publicly. Photos, videos, and voice clips can all be reused or manipulated. Review your social media privacy settings and limit what strangers can access.
When someone contacts you unexpectedly, especially asking for money, sensitive data, or urgent action, pause first and verify who they are through a separate channel before doing anything.
Keep your devices updated. Security updates often fix vulnerabilities that attackers rely on. Enabling automatic updates is one of the easiest ways to reduce risk.
If you run a business, awareness matters just as much as tools. Train employees to question unusual requests, verify identities, and escalate anything suspicious. Strong approval processes can stop many scams before they start.
Technology helps, but habits matter more. The best defense against deepfakes is a mix of smart security basics and taking an extra moment to verify before you trust.
