just saying i still think releasing an ai video generator to the public is a horrific idea, heck, making it at all is just incredibly reckless. we’re still trying to grapple with the tsunami of low quality ai slop images that have infiltrated almost every corner of the internet, from search to social media, and of course turbocharged misinformation, disinformation, and propaganda into the stratosphere. ai video that is as good as sora is the death knell: it was already becoming difficult to detect ai generated images and text, and now video will be just as hard to trust. governments won’t be able to react fast enough to control it, nor do i think they care. we still haven’t properly regulated big tech on anything from social media to smartphones. and i simply don’t trust openai, they had all these limits for chatgpt when it launched and yet people jailbroke it numerous times, google told people to put glue on pizza, microsoft thought it’d be a genius idea to record everything you ever do on your computer, adobe decided it owns everything you create and fed it to their ai to vomit out slop for people on facebook to believe is real. i don’t think sora should exist, the cons massively outweigh any pros in my view. i get that sora is still limited at the moment, but i simply don’t trust them. chatgpt spat out whatever you wanted if you gave it a jailbreak prompt. people will work around sora’s restrictions, and i don’t think we’ll like the results.
I like to think that regulation will come once something crazy happens that really impacts someone with a lot of money or power, but at that point it’ll already be too late
One of my coworkers is really into ai, literally consults it for everything, and he wants to incorporate it into our work, and it’s like dude, you are actively pushing us both out of our jobs even if it doesn’t look like it this second
i’d say crazy stuff has already happened, like people making ai p*rn of real people without consent, but governments are too busy being paid off or just not understanding the tech at all to regulate it quickly enough. tech companies know this and try to push through as much as they can before a clampdown.
yeah that’s one of the really scary parts of it and it’s like it’s been forgotten already
i think what’s scarier is that i have a chatgpt jailbreak that works… (i won’t share it because i’m not dumb)
the tech moneyheads have opened Pandora's Box and are forcing us to join them
and we are kinda left with:
1. ai continues to be abused
2. ai gets regulated, responsible development takes hold, and the users are respected, whether they use the ai or not
3. the digital world crashes and we must crawl back and rebuild our internet culture, starting only with UNIX, vim, IRC, and Usenet newsgroups
The 1st option is highly likely. The 2nd is unlikely to happen fast enough, since we haven’t regulated social media or other “older” forms of tech yet, nor have we even dealt with regulation for AI generated text and images. The 3rd might happen, but I worry it’ll be hard to rebuild given how much AI slop will be flooding the web.
bruh im so fucking tired of AI, do people even like this shit, and why is the investment going into this garbage and not into the stuff it’d actually be useful for (shit like data analysis), like breh
AI is the current goldmine that makes the stock price of tech companies like Microsoft, Nvidia, etc. go up insanely. It increases shareholder value, and it has the potential to lower costs because the goal is to replace human labour with AI, hence bigger returns for shareholders. So AI will continue getting investment, because a company’s loyalty is to one group only: its shareholders.