just saying, i still think releasing an ai video generator to the public is a horrific idea. heck, making it at all is incredibly reckless. we’re still trying to grapple with the tsunami of low quality ai slop images that have infiltrated almost every corner of the internet, from search to social media, and of course turbocharged misinformation, disinformation, and propaganda into the stratosphere. ai video as good as sora is the death knell: it was already becoming difficult to detect ai generated images and text, and now we’re going to make video just as hard to trust.

governments won’t be able to react fast enough to control it, nor do i think they care. we still haven’t properly regulated big tech on anything from social media to smartphones. and i simply don’t trust openai. they had all these limits on chatgpt when it launched, and people still jailbroke it numerous times. google told people to put glue on pizza, microsoft thought it’d be a genius idea to record everything you ever do on your computer, and adobe decided it owns everything you create and fed it to their ai to vomit out slop for people on facebook to believe is real.

i don’t think sora should exist; the cons massively outweigh any pros in my view. i get that sora is still limited at the moment, but i just don’t trust them. chatgpt spat out whatever you wanted if you gave it a jailbreak prompt, and people will work around sora’s restrictions too. i don’t think we’ll like the results.

comments

i’d say crazy stuff has already happened, like people making ai p*rn of real people without consent, but governments are too busy being paid off, or just don’t understand the tech at all, to regulate quickly enough. tech companies know this and try to push through as many things as they can before a clampdown.

yeah, that’s one of the really scary parts of it, and it’s like it’s already been forgotten
