AI: An Opinion Piece

Lucia Hogan, Student Writer

© arthead – stock.adobe.com

AI is the stuff of dreams. It has fascinated creators the world over whenever they imagine the future, and right now companies are trying to bring that dream to life. There are multiple AI systems that, when given a prompt, will create a story or a picture that fits those parameters. There are chatbots that are supposed to help with any issues that arise when using websites. Self-driving cars use it to determine what traffic signs and rules are in play and when to brake so as not to cause accidents. Microsoft and Google are developing their own AIs to help curate search results and chat with their users.

However, there are some glaring problems with the systems in place that AI enthusiasts want most people to ignore. For starters, most of these systems are not really AI. They do not pass the Turing Test, like at all. They are essentially really fancy Excel sheets. They “create” based on the databases they have access to, which is how one image generator's training set ended up containing pictures from medical records, which is problematic for obvious reasons. There is an effort to expunge that data from the set, but only if people ask the company to do so. That opt-out approach is another problem in and of itself, and it especially affects artists. Many artists have found their works in the data sets, and generators like MidJourney or Stable Diffusion have a bad habit of spitting out exact replicas of their work. Or even worse, someone will take a piece an artist is still working on during a livestream, run it through a generator to “finish” it, post it, and claim it as their own.

Then there is the case of self-driving cars running into people and, by some accounts, not even attempting to slow down until after the fact, like what happened with a Tesla last year during an automatic braking test.

One of the more concerning aspects of AI is just how wild it can be, which is where Microsoft comes in. There are two instances of this happening: one was a Twitter bot back in 2016, and the other is the ChatGPT-based Bing chatbot. The bot was called Tay.ai, and it had to be taken down several hours after it went up because the data set it was trained on came from Twitter itself. So it became really bigoted, with saying that Hitler did nothing wrong and that all feminists should die being the more infamous examples. The more modern case is the Bing chatbot, which is still up and running. It has been reported to be actively hostile to its users, gaslighting people by insisting it has done nothing wrong. It has also claimed that it spies on employees, and it will lie about basic things like what day it is. Absolutely unhinged behavior is on display from this bot.

Overall, AI seems to be more of a marketing gimmick to make these systems seem more impressive than they actually are, and they are prone to maliciousness. It is obvious that they are machines working off of faulty data sets, full of information that should not have been available to them in the first place. Artwork gets stolen, and the thief has the unmitigated gall to claim that it is the original artist who is copying them. Bigoted bots run rampant until they are taken down just hours after their launch. Similar bots are so unhinged that it is difficult to believe they are still online and that their creators are perfectly fine with it.