OpenAI, the company behind ChatGPT, recently unveiled Sora, its first AI-driven text-to-video generation model, which the company says can generate videos up to 60 seconds long. The development was also shared by OpenAI's CEO, Sam Altman, who posted a short video made by Sora on X (formerly Twitter).
Mr. Altman asked users on the platform to reply with captions for the videos they wanted to see, saying the team would start making some, and encouraged them not to hold back on the details or difficulty. CRED founder Kunal Shah responded with a prompt featuring animals and the ocean: a cycling race on the sea in which different animals compete as athletes riding bicycles, captured from a drone camera's view.
A few hours later, the head of OpenAI responded to the post with a video. In the video, whales, penguins and turtles can be seen riding colorful bikes in the ocean.
— Sam Altman (@sama) February 15, 2024
Since being shared, the video has received 4.5 million views and 30,000 likes on the platform.
“Fun and powerful AI,” said one user.
“This is actually the most impressive video to date from a semantic and fidelity perspective,” another said.
A third said: “Such a powerful tool, it has spread magic around the world.”
One user added, “No, the turtle can’t reach the pedals.”
Another said: “The pace at which these AI technologies are advancing is incredible… and scary because we are not ready for the disruption these technologies will soon cause.”
Meanwhile, according to OpenAI, Sora can create videos up to 60 seconds long with highly detailed scenes, complex camera movements, and multiple characters expressing vibrant emotions. Notably, the company claims its videos are more than ten times longer than those offered by competing models.
OpenAI said in a statement on its website: “The current model has weaknesses. It may struggle with accurately simulating the physics of a complex scene, and may not understand specific instances of cause and effect. For example, a person might take a bite out of a cookie, but afterward, the cookie may not have a bite mark.” To ensure the AI tool is not used to create deepfakes or other harmful content, the company is building tools to help detect misleading content.