
AI Images and Videos: What Could Go Wrong?

OpenAI's Sora, a new text-to-video AI model, has prompted discussion across many sectors because of its capabilities and potential societal impact. Sora can generate videos up to 60 seconds long from text instructions alone or from text combined with an image, representing a significant leap in artificial intelligence and blurring the boundary between written prompts and video production. As with any groundbreaking technology, however, Sora raises a spectrum of ethical, legal, and societal challenges.


The Technology Behind Sora

At its core, Sora integrates two processes. First, it uses a diffusion model, similar to the technology that powers AI image generators such as OpenAI's DALL-E. The diffusion model transforms a seemingly random assortment of pixels into visually coherent and compelling images, iterating over the pixel data and gradually refining it at each step into a structured, recognizable visual form. Sora also leverages the transformer architecture, a concept that has revolutionized the field of machine learning. Unlike traditional models that process data linearly or sequentially, the transformer architecture excels at understanding and managing the relationships between different elements of data, regardless of their position in the sequence. This is particularly advantageous for video, where the contextual relevance and ordering of frames are critical. The transformer analyzes and sequences the data produced by the diffusion process, ensuring that each frame is not only visually coherent but also logically connected to its predecessors and successors within the video stream. Combining the power of the diffusion model with the sequence modeling of the transformer architecture enables Sora to produce video content that is not only aesthetically pleasing but also coherent and fluid.[1]
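
To make the two-stage idea above concrete, here is a minimal, purely illustrative Python sketch of a diffusion-style denoising loop. The array shapes, the toy_denoiser function, and its fixed "clean" target are hypothetical stand-ins for the learned model; they show the general refine-from-noise pattern described above, not OpenAI's actual Sora implementation, which is not public.

import numpy as np

rng = np.random.default_rng(0)

# A tiny stand-in "video": a block of spacetime values (frames, height, width).
frames, height, width = 8, 16, 16
noise = rng.standard_normal((frames, height, width))

def toy_denoiser(x, step, total_steps):
    # Stand-in for the learned model (in Sora's case, reportedly a diffusion
    # transformer attending across spacetime patches). Here we simply nudge
    # every value toward a fixed "clean" target so the loop is runnable.
    target = np.zeros_like(x)
    return x + (target - x) / (total_steps - step + 1)

def generate(noise, steps=50):
    # Iteratively refine pure noise into a structured signal: the core idea
    # behind diffusion-based image and video generation.
    x = noise.copy()
    for step in range(steps):
        x = toy_denoiser(x, step, steps)
    return x

video = generate(noise)
print("mean absolute value after denoising:", float(np.abs(video).mean()))

In a real system, the denoiser would be a trained neural network (for Sora, reportedly a transformer operating on spacetime patches) rather than a hand-written update rule, and the "target" would be learned from data rather than fixed.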


Security Concerns and Ethical Considerations

The increasing realism of Sora, DALL-E, and other text-to-video and text-to-image models, and their potential for misuse, have triggered significant concern among experts and the public alike. In response, OpenAI has initiated a series of "red team" exercises aimed at strengthening Sora's built-in safeguards against exploitation for malicious purposes.[1] These exercises involve testers, including specialists in misinformation, hateful content, and various forms of bias, who probe Sora's capabilities and limitations with a focus on identifying and mitigating vulnerabilities that could be exploited. One of the most pressing issues with Sora is its ability to generate highly believable yet entirely fictitious content. This extends to realistic simulations of everyday media, such as videos mimicking shaky cellphone footage, which could easily be mistaken for genuine recordings. The ability to fabricate convincing fake news or misleading content could undermine trust in media, distort public perceptions of important issues, and manipulate political discourse, threatening the foundations of democratic societies. OpenAI's red team exercises are therefore not just a precaution but a necessary step toward ensuring that advances in AI do not become tools for misinformation and societal disruption.


Deepfakes, powered by artificial intelligence, pose a growing risk by producing convincing fake videos, photos, or audio recordings. The technology can make it appear as if individuals, often public figures or celebrities, said or did things they never did. The risks range from non-consensual pornography, in which individuals' likenesses are used without permission, to disinformation campaigns that can sway public opinion, manipulate elections, and undermine trust in media. Deepfakes can also inflict reputational damage by falsely portraying individuals in compromising scenarios. Despite evolving detection methods, the increasing sophistication of deepfakes makes them hard to identify. Generative AI tools, including deepfakes and other AI-generated content, present unprecedented risks to the integrity of elections, particularly as the 2024 U.S. elections approach. New AI models enable the rapid production of misleading content, such as audio recordings or videos of candidates making statements they never actually made, which can sway public perception and voter behavior. This potential for misuse raises serious concerns about the ability of bad actors to undermine elections, impersonate candidates, and spread disinformation on a scale previously unseen. Former President Trump has disseminated AI-generated content, and the Republican National Committee (RNC) has used AI to create hypothetical future scenarios, illustrating the very real threats these technologies pose to democratic processes. Legislative efforts, such as those proposed by Rep. Yvette Clarke to mandate the labeling of AI-created campaign advertisements, underscore the urgent need for regulatory measures to safeguard electoral integrity against AI-enhanced disinformation campaigns.[2]



Legal and Copyright Challenges

The debate over whether training artificial intelligence models on copyrighted materials constitutes "fair use" is becoming increasingly contentious. High-profile lawsuits, such as the case filed by The New York Times against OpenAI and Microsoft, highlight the growing controversy around AI systems using copyrighted content without explicit permission from the copyright holders.[3] This legal battle underscores a significant ethical and legal quandary in the digital age, where the boundaries of copyright law and the capabilities of AI technologies intersect. Compounding these concerns is the recent strike by the Screen Actors Guild-American Federation of Television and Radio Artists (SAG-AFTRA), which thrust the role of artificial intelligence in the entertainment industry into the spotlight.[4] The strike has intensified discussions about labor rights in an era when advanced AI technologies are increasingly prevalent across many sectors. Actors and performers are voicing serious worries that AI could replace them entirely or use their likenesses without consent or fair compensation. These fears reflect a broader unease about job displacement, loss of control over personal image, and the potential devaluation of human creativity in the face of rapidly advancing technology. The SAG-AFTRA strike marks a critical juncture, signaling a wider debate about how the entertainment industry, and society more broadly, should approach the adoption of AI technologies. As actors advocate for contracts that explicitly address and regulate the use of AI and fight to safeguard their rights, this movement may well set a benchmark for other sectors facing similar dilemmas. The outcome of the strike could not only shape the future trajectory of Hollywood but also inform international dialogues on how to balance the promise of technological innovation with the protection of labor rights and ethical standards.[5]


Impacts on Democracy

Legislation aimed at mitigating these risks, such as requirements that AI-generated political content be labeled, is under consideration; however, the effectiveness of such measures remains to be seen. In response to the growing challenges posed by AI-generated disinformation and deepfakes, European policy responses have emphasized regulation, transparency, and user autonomy in the digital space. The European Union (EU) is leading efforts to impose stronger regulations on online platforms to curb the spread of manipulative disinformation campaigns.[6] These efforts include advocating for greater transparency around advertising and ensuring that users have more control over their online experiences. The EU also stresses the importance of trustworthy information sources, arguing that the quality of content should not be dictated by governments or private platforms alone. This approach aims to mitigate the risks associated with AI-driven disinformation while fostering an environment in which ethical standards and user empowerment are prioritized.


[1] Hsu, J. (2024, February 17). Realism of OpenAI’s Sora video generator raises security concerns. New Scientist. https://www.newscientist.com/article/2417639-realism-of-openais-sora-video-generator-raises-security-concerns/


[2] Klepper, D., & Swenson, A. (2023, May 14). AI-generated disinformation poses threat of misleading voters in 2024 election. PBS NewsHour. https://www.pbs.org/newshour/politics/ai-generated-disinformation-poses-threat-of-misleading-voters-in-2024-election


[3] Oremus, W., & Izadi, E. (2024, January 4). AI’s future could hinge on one thorny legal question. Washington Post. https://www.washingtonpost.com/technology/2024/01/04/nyt-ai-copyright-lawsuit-fair-use/


[4] Watercutter, A. (2023, July 4). The Hollywood Actors Strike Will Revolutionize the AI Fight. Wired. https://www.wired.com/story/hollywood-sag-strike-artificial-intelligence/


[5] Collier, K. (2023, July 14). Actors vs. AI: Strike brings focus to emerging use of advanced tech. NBC News. https://www.nbcnews.com/tech/tech-news/hollywood-actor-sag-aftra-ai-artificial-intelligence-strike-rcna94191


[6] Bontridder, N., & Poullet, Y. (2021). The role of artificial intelligence in disinformation. Data & Policy, 3(E32). https://doi.org/10.1017/dap.2021.20



