This video was generated entirely by AI.
It was generated by OpenAI’s new text-to-video model called Sora. It’s currently only being tested by a small group of experts and creators.
I don’t know how others feel about this, but I find this kinda terrifying. A lot of the videos they’ve shown off so far are already very convincing, and I can already imagine the kind of misinformation that will be created with this tool. Even if OpenAI can keep a lid on Sora, the fact that they have pulled something like this off probably means we’ll see others turning up soon. I wouldn’t be shocked to see an open-source equivalent, like Stable Diffusion.
I know we can’t put the genie back in the bottle now, but I do really worry about the future when the tech is getting this good this quickly. This feels like the end of the information age, because we might soon be unable to tell which info is real and which is fake.
You can see more examples from Sora here: https://openai.com/sora
Same. Even if it is only 50% of the quality of these videos, that is still scarily good. Plus, these are early days: this is the worst it will ever be, and it will only improve from here. Considering the leaps AI has made over the last year, I wouldn’t be shocked to see this become hard to tell apart from real video in the not-too-distant future.