In the world of deepfakes, is OpenAI’s Sora video generator a blessing or a curse? Everything you need to know about this tool

Following OpenAI’s ChatGPT, the company’s new text-to-video tool Sora is being hailed as a major breakthrough in generative AI, one that is already beginning to challenge organizational design, work, and individual workers.

Even though the tool is still in the “red team” phase with limited access, as CEO Sam Altman puts it, it is already making a splash.

Let’s see how this tool actually works:

The word “Sora” means “sky” in Japanese, and the tool converts text prompts into videos up to a minute long.

“Sora is capable of generating complex scenes with multiple characters, specific movement types, and accurate details of the subject and background,” OpenAI explained in a blog post published last week. “The model understands not only what the user is asking for in the prompt, but also how those things exist in the physical world.”

Sora uses two artificial intelligence methods to achieve a high level of realism. The first is a diffusion model, the approach behind AI image generators such as DALL-E, which gradually turns random noise pixels into a coherent image.
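To make the idea concrete, here is a minimal, purely illustrative sketch of the diffusion intuition in Python. It is not OpenAI’s code: the “denoiser” below is a stand-in that nudges noisy pixels toward a target, whereas real systems like Sora or DALL-E learn that step with a neural network.

```python
import numpy as np

# Toy illustration of the diffusion idea: start from random noise and
# repeatedly apply a small "denoising" step until a coherent image emerges.
# In a real model the step is learned; here it simply pulls toward a target.

rng = np.random.default_rng(0)
target = rng.random((8, 8))          # stand-in for the image the model is steering toward
x = rng.normal(size=(8, 8))          # begin with pure random noise

def toy_denoise_step(noisy, guess, strength=0.1):
    """Move the noisy sample a small step toward the current best guess."""
    return noisy + strength * (guess - noisy)

for _ in range(50):                  # iterative refinement, noise -> image
    x = toy_denoise_step(x, target)

print("remaining noise:", float(np.abs(x - target).mean()))
```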

The second is a transformer architecture, used to contextualize and piece together sequential data, much as language models break long passages of text into smaller tokens they can understand. Sora breaks video clips down into visual “spacetime patches” that the transformer can process as a sequence.
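The sketch below shows what “cutting a clip into patches” might look like in Python. The clip size and patch dimensions are assumptions chosen for illustration, not Sora’s actual configuration; the point is simply that a video tensor can be reshaped into a sequence of flat patch vectors a transformer could consume.

```python
import numpy as np

# Illustrative only: split a small synthetic video into patch "tokens".
# Shapes and patch sizes are arbitrary assumptions for the demo.

frames, height, width, channels = 16, 64, 64, 3
video = np.random.rand(frames, height, width, channels)

def to_patches(clip, t=4, p=16):
    """Split a (T, H, W, C) clip into flattened patches of size t x p x p x C."""
    T, H, W, C = clip.shape
    clip = clip.reshape(T // t, t, H // p, p, W // p, p, C)
    clip = clip.transpose(0, 2, 4, 1, 3, 5, 6)     # group the patch axes together
    return clip.reshape(-1, t * p * p * C)          # one row per patch "token"

tokens = to_patches(video)
print(tokens.shape)  # (64, 3072): 64 patch tokens, each a flat vector
```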

Who can access Sora?

OpenAI said a small group of visual artists, filmmakers and designers have already gained access, but people familiar with the matter hinted there could be a waiting list soon.

Red-team members who have been given access are experts in key risk areas such as misinformation, hateful content and bias.


Unfortunately, there is no indication of when the tool will be available to everyone. “We are sharing our research progress early to begin collaborating with others outside OpenAI and getting feedback to inform the public about upcoming AI capabilities,” the blog reads.

What are the potential risks?

OpenAI said it will develop tools to help detect misleading content, such as a detection classifier capable of identifying videos created by Sora.

In addition, OpenAI will adapt existing safety procedures developed for related products such as DALL-E 3. The company says it has built robust image classifiers that review every frame of a generated video to check for policy compliance before the video is shown to the user.

Experts also warn that the product could create deepfake videos that reinforce racial and gender stereotypes.

Misinformation and disinformation fueled by AI-generated content are a major concern for leaders in government, academia, business and other sectors.
