Table of Contents
- Why is AI video generation technology "creepy"?
- The Uncanny Valley Effect: A Cognitive Dilemma
- Technological Advances and the Shrinking "Cognitive Gap"
- The Reasons Behind the Unease: Deeper Fears Beyond Technology
- Different Reactions Around the World: The "Uncanny Valley" from a Cultural Perspective
- Responding to "Creepiness": A Dual Path of Technology and Humanities
- Outlook: A Future Beyond the "Uncanny Valley"
Why is AI video generation technology "creepy"?
Late at night, while scrolling through your phone, a video suddenly appears in your feed—a familiar celebrity face, natural expressions, and voice. But upon closer inspection, the emptiness in the depths of their eyes and the subtle inconsistencies create an inexplicable unease. Psychologists call this the "Uncanny Valley effect." With the rapid development of AI video generation technology, this discomfort is becoming more common and thought-provoking.
In late 2023, a deepfake video of Taylor Swift went viral on social media, sparking widespread attention. The "Taylor" in the video looked almost identical to the real Taylor, but was doing things that the real Taylor would never do. This not only sparked strong protests from the artist herself, but also left many viewers feeling uneasy and confused. Where does this unease come from? Why do AI-generated videos give people a "creepy" feeling?
The Uncanny Valley Effect: A Cognitive Dilemma
In 1970, Japanese roboticist Masahiro Mori proposed the "Uncanny Valley" theory to describe people's emotional responses to human-like objects: when robots or animated characters look almost human but retain some unnatural features, affinity abruptly gives way to strong aversion. This reaction stems from the brain detecting a mismatch between visual perception and expectation.
New York University psychology professor Jonathan Haidt found in a 2022 study that this reaction is actually an evolutionary protection mechanism. "Our brains have evolved the ability to be wary of faces that are 'almost right but slightly wrong,' because in primitive societies this could mean disease, death, or deception," Haidt wrote in his research report.
Today's AI video generation technology is on the edge of this "Uncanny Valley." The technology is powerful enough to create very realistic faces and expressions, but subtle inconsistencies still exist: the eyes don't have real focus, the emotional expression is slightly disconnected from the language content, or the subtle facial movements seem too mechanical.
In an interview, Maria Chen, an AI ethics scholar at King's College London, said: "Our brains are good at recognizing faces and expressions, which is the foundation of social life. When AI-generated content breaks this basic cognitive process, unease arises. This is not just visual, but more of a cognitive and emotional discomfort."
Technological Advances and the Shrinking "Cognitive Gap"
The pace of progress in AI video generation technology is staggering. In the past two years alone, the following key indicators have improved significantly:
- Facial detail realism: Increased from 70% similarity in 2022 to 92% in 2024
- Dynamic expression fluency: Increased from 15 frames per second to over 30 frames per second
- Audio-visual synchronization accuracy: Latency reduced from 250 milliseconds to less than 50 milliseconds
A technical report from San Francisco AI research institute OpenVisage shows that today's AI systems can accurately capture and reproduce more than 200 micro-expressions, while in 2020 that number was barely 20.
Mikhail Sorokin, a programmer who has participated in multiple video generation projects, explained: "Previous AI videos had obvious 'breakpoints'—pauses when blinking, and problems with lip syncing. Now, we have basically solved these technical problems."
Paradoxically, technical progress has intensified rather than eased people's discomfort. As generated videos edge closer to reality without fully arriving, the remaining flaws become harder to pinpoint, and the viewer's unease grows stronger.
The Reasons Behind the Unease: Deeper Fears Beyond Technology
The discomfort caused by AI videos goes far beyond the visual "Uncanny Valley" and involves deeper psychological and social factors.
Blurring of Identity and Authenticity
Pierre Dubois, a media researcher at the Sorbonne University in Paris, pointed out: "Digital identity has become an important part of our self-awareness. When AI can easily copy and manipulate this identity, people feel that their uniqueness is threatened."
A survey of 3,000 global respondents showed that 62% were concerned that their face or voice might be used in unauthorized AI-generated content, and among young people aged 25-34 this concern was as high as 78%.
Undermining Social Trust
"We have moved from an era of 'seeing is believing' to an era of 'even seeing may not be believing,'" said Professor Sarah Blackwood, director of the Media Research Center at Brown University. "This shaking of basic trust has had a profound impact on social structures."
In an experiment in 2023, researchers presented participants with a mixed set of real and AI-generated videos. Even media professionals had a correct identification rate of only 62%. When participants were told that the video might be AI-generated, their trust in real videos also decreased by 43%.
Uncontrolled Technology Anxiety
There is a universal fear of uncontrolled forces in human psychology. The rapid development of AI video generation technology and its unpredictable application prospects have triggered this deep-seated anxiety.
Hans Mueller, a professor of technology philosophy at the Free University of Berlin, said: "When a technology develops faster than our ability to understand and regulate it, fear naturally arises. People are worried not only about today's applications, but also about tomorrow's possibilities."
Different Reactions Around the World: The "Uncanny Valley" from a Cultural Perspective
Interestingly, the "creepy" reaction to AI-generated videos varies in different cultural contexts.
East Asia: Collision of Technology Acceptance and Spiritual Concepts
In Japan, although the "Uncanny Valley" theory originated there, public acceptance of AI-generated imagery is relatively high. Cultural anthropology research at the University of Tokyo suggests this may be related to the widespread acceptance of "almost human but non-human" characters in Japanese anime culture.
However, in South Korea, which has a strong tradition of ancestor worship, AI restoration of deceased relatives has caused greater controversy. A TV show in South Korea in 2023 that "revived" deceased relatives caused widespread ethical discussions, with many viewers expressing that they felt "disrespectful" and "disturbing the peace of the soul."
The West: The Dilemma of Authenticity and Freedom of Expression
Concerns about AI videos in the United States and Europe are more focused on authenticity and information integrity. A Cornell University study found that the main concern of American respondents about AI-generated videos was "may be used for political propaganda" (73%) and "undermining the credibility of news" (68%).
Respondents in France and Germany are more concerned about personal image rights and data protection issues, which reflects Europe's cultural tradition of emphasizing personal data sovereignty.
Developing Countries: Alternative Fears Under the Digital Divide
In some regions where digital infrastructure is still underdeveloped, fear of AI videos takes on different forms. Research at the University of Nairobi in Kenya shows that local concerns about AI videos stem more from "information acquisition inequality"—the worry that being unable to distinguish true from false information will exacerbate existing social inequalities.
A social survey in India showed that the primary concern of respondents in rural areas about AI-generated videos was "may be used for fraud" (81%), rather than ethical or identity issues.
Responding to "Creepiness": A Dual Path of Technology and Humanities
Faced with the discomfort brought by AI videos, various solutions are being explored around the world.
Technical Transparency and Identification Tools
Many technology companies are developing "watermark" technology for AI-generated content. Adobe's Content Authenticity Initiative launched an open standard in 2023, allowing creators to add invisible digital signatures to their work to help users identify content sources.
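The underlying idea of such a signature can be illustrated with a toy sketch. The following is not Adobe's actual implementation (the real Content Authenticity standard embeds cryptographically signed manifests in media files); it is a minimal, hypothetical example showing the core principle—a keyed hash bound to the content bytes, so that any alteration of the content invalidates the signature:

```python
import hmac
import hashlib

def sign_content(content: bytes, creator_key: bytes) -> str:
    """Produce a provenance tag: an HMAC-SHA256 over the content bytes."""
    return hmac.new(creator_key, content, hashlib.sha256).hexdigest()

def verify_content(content: bytes, tag: str, creator_key: bytes) -> bool:
    """Check that the tag matches the content; any edit breaks the match."""
    expected = sign_content(content, creator_key)
    return hmac.compare_digest(expected, tag)

# Hypothetical creator key and media bytes, for illustration only.
key = b"creator-secret-key"
frame = b"...raw video frame bytes..."

tag = sign_content(frame, key)
print(verify_content(frame, tag, key))           # untouched content verifies
print(verify_content(frame + b"x", tag, key))    # tampered content fails
```

Real provenance schemes use public-key signatures rather than a shared secret, so that anyone can verify a creator's signature without being able to forge it; the principle of binding a cryptographic check to the exact content bytes is the same.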
At the same time, startups like Deeptrace are focused on developing deepfake detection technology, with an accuracy rate of 91%. Researchers at the University of Washington in Seattle have also found that current AI videos still have obvious defects in pupil response and microvascular patterns, which provides a technical approach for identification.
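One published detection heuristic of this physiological kind checks blink behavior: humans blink roughly 15-20 times per minute, while early generated faces often blinked far less. A common building block is the "eye aspect ratio" (EAR) computed from facial landmarks. The sketch below assumes landmarks have already been extracted by some detector (that step is omitted) and shows only the EAR computation and blink counting:

```python
import math

def eye_aspect_ratio(eye):
    """EAR from six (x, y) eye landmarks: mean vertical opening divided by
    horizontal width. Open eyes sit around ~0.3; values near 0 mean a blink."""
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])
    p1, p2, p3, p4, p5, p6 = eye
    return (dist(p2, p6) + dist(p3, p5)) / (2.0 * dist(p1, p4))

def blink_rate(ear_series, fps, threshold=0.2):
    """Count dips of the EAR below the threshold; return blinks per minute."""
    blinks, below = 0, False
    for ear in ear_series:
        if ear < threshold and not below:
            blinks += 1
            below = True
        elif ear >= threshold:
            below = False
    minutes = len(ear_series) / fps / 60.0
    return blinks / minutes if minutes > 0 else 0.0

# Synthetic EAR series for illustration: 10 seconds at 30 fps, two blinks.
series = [0.3] * 300
for start in (60, 200):
    for i in range(start, start + 5):
        series[i] = 0.1

print(round(blink_rate(series, fps=30)))  # roughly 12 blinks per minute
```

A rate far below the human baseline would flag a clip for closer inspection. Newer generators have largely learned to blink, which is why detection research keeps moving to subtler signals such as the pupil responses and microvascular patterns mentioned above.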
Media Literacy Education
Singapore began promoting the "Digital Authenticity" course in secondary schools in 2023, teaching students how to identify AI-generated content. Course designer Lee Mei Ling said: "Our goal is not to make students afraid of technology, but to cultivate their critical thinking skills in the digital age."
The British Broadcasting Corporation (BBC) has also launched the "Reality Check" project for a global audience, providing free resources to help the public identify suspicious digital content.
Legal and Ethical Frameworks
The European Union's Artificial Intelligence Act (AI Act) officially came into effect in 2024, requiring AI-generated content to be clearly labeled. Companies that violate the regulations may face fines of up to 7% of their global annual turnover for the most serious infringements.
China also promulgated the "Administrative Measures for Generative Artificial Intelligence Services" in 2023, which clearly stipulates that AI-generated content must comply with national values and indicate the source.
Various states in the United States are also actively legislating, and California has passed a bill prohibiting the unauthorized use of another person's portrait to create AI content.
Outlook: A Future Beyond the "Uncanny Valley"
As technology continues to develop, the question we face may be not only how to deal with the "Uncanny Valley," but also how to redefine "reality" in a world where AI video becomes commonplace.
Professor Mark Thompson, director of the Digital Ethics Laboratory at Oxford University, believes: "We may be undergoing a cognitive paradigm shift. Just as humans have adapted to photography and film, we will learn to coexist with AI-generated content and develop new standards of authenticity."
This process will inevitably be accompanied by discomfort and adjustment, but it also contains creative possibilities.
Li Wenhua, an AI ethics scholar at Nanyang Technological University in Singapore, pointed out: "AI video allows us to rethink what is 'real,' what is 'performance,' and what is 'identity.' These philosophical questions are far more profound and lasting than technology itself."
It is foreseeable that when technology finally crosses the "Uncanny Valley"—whether through complete realism or clear stylization—today's feeling of "creepiness" will become history. But the new social contract, ethical guidelines, and media understanding framework formed in this process will shape our digital future for a long time.
As one anonymous AI researcher put it: "Technology is always challenging our comfort zone, forcing us to rethink basic questions. The discomfort caused by AI video may be the price we must pay for growth."
In this dialogue between technology and humanity, we are both viewers and participants. After discomfort, perhaps there is a deeper understanding and coexistence.