Sora: OpenAI’s Newest Video-Generating Model

OpenAI, a renowned artificial intelligence research laboratory, has recently unveiled its newest creation: Sora, a cutting-edge video-generating model. Sora promises to revolutionize the world of content creation by generating high-quality, realistic videos based on textual descriptions. In this article, we will explore the features and limitations of Sora, as well as the concerns it raises for content creators.

*This video was generated by Sora

Part 1. Features

Impressive Video Generation

Sora can generate videos up to a minute long while maintaining visual quality and closely following the user's prompt. The examples OpenAI shared include a Shiba Inu dog wearing a beret and black turtleneck, and a massive tidal wave crashing through an ornate historical hall. The generated videos have been widely praised for their eye-popping quality.


Flexibility in Sampling: Sora can generate videos of different aspect ratios, including widescreen and vertical formats, allowing for content creation tailored to various devices directly at their native aspect ratios. This also facilitates rapid prototyping at lower resolutions before generating content at full resolution.

Enhanced Composition and Framing: Training on videos at their native aspect ratios enhances composition and framing in the generated videos, avoiding instances where subjects are partially out of view.

Simulating Real-World Scenarios: Sora’s training at scale enables it to simulate various aspects of the physical and digital world, including 3D consistency, long-range coherence, object permanence, interaction with the environment, and simulation of digital worlds like video games.

How to make a Sora video:
Simply enter a text prompt, and Sora outputs a video matching the description. For example:
Prompt: A movie trailer featuring the adventures of the 30 year old spaceman wearing a red wool knitted motorcycle helmet, blue sky, salt desert, cinematic style, shot on 35mm film, vivid colors.

*You will get this video, generated by Sora
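The prompt-in, video-out workflow can be sketched as a small request builder. Note that Sora has no public API at this point, so every name below (the function, the "sora" model identifier, and the payload fields) is a hypothetical illustration of the idea, not a real endpoint.

```python
def build_sora_request(prompt, aspect_ratio="16:9", resolution=(1920, 1080)):
    """Hypothetical payload for a text-to-video request.
    Sora exposes no public API; all names here are illustrative only."""
    if not prompt.strip():
        raise ValueError("A non-empty text prompt is required.")
    return {
        "model": "sora",  # assumed model identifier
        "prompt": prompt,
        "aspect_ratio": aspect_ratio,
        "resolution": {"width": resolution[0], "height": resolution[1]},
    }

request = build_sora_request(
    "A movie trailer featuring the adventures of the 30 year old spaceman "
    "wearing a red wool knitted motorcycle helmet, blue sky, salt desert, "
    "cinematic style, shot on 35mm film, vivid colors."
)
```

The same prompt could be paired with a vertical aspect ratio (e.g. `aspect_ratio="9:16"`) to target phone screens, per the sampling flexibility described above.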


As of now, Sora is not available for public use. Only a select group of safety testers and artists approved by OpenAI have access to the program. 

OpenAI’s CEO, Sam Altman, has been taking Sora prompt requests on social media and sharing the results.

Part 2. Limitations

OpenAI Sora, while impressive in its video-generation capabilities, has several notable limitations:

Lack of Physical Accuracy: Sora struggles to model precise physics and cause and effect. For example, a person may take a bite of a cookie, yet the cookie shows no bite mark afterward.

Left-Right Confusion: Sora sometimes confuses left and right, misrepresenting the positions or directions of objects in the generated content, which can impair the user experience.


Spontaneous Appearances: Animals or people can spontaneously appear, especially in scenes containing many entities.


No Audio: All Sora videos are silent, so many people add background music (BGM) to these AI-generated videos.

Social media accounts sharing these results continue to upload new Sora videos.


In conclusion, Sora, OpenAI's video-generating model, has the potential to revolutionize content creation. However, limitations such as imprecise physics, spatial confusion, and the lack of audio should be kept in mind. While Sora offers exciting possibilities, a careful rollout is needed to maximize its benefits and minimize negative consequences.