I've moved past getting pissed at the AI ******** articles to getting pissed at the people who get fooled into thinking these articles are real.
It seems pretty much every Reddit thread with more than a few posts has someone calling out the OP for generating the post with AI, thereby doing what that community calls 'karma farming'. Oftentimes that starts an argument about whether it is or is not AI, often with the OP defending themselves. And indeed, sometimes the OP admits it was generated at least in part with AI due to their poor English skills, in essence asking for mercy.
The tone of such replies tends to be "you dummies, why are you paying attention to AI slop" when (a) this happens even in cases where it's not clear whether it's AI or not, and (b) it comes across as that person trying to draw attention to themselves, thus doing the same thing they are accusing the OP of doing.
The bottom line is that at some level we all have this desire to be heard. We want (and some people would say crave) attention and validation. And at the same time we often criticize others who are achieving it, calling them "attention whores".
I guess this is why psychiatrists make good money.
I feel like AI is the new VCR or phone or streaming TV, where people need a 12-year-old standing next to them to explain how the technology works.
Now if someone could explain how to get my AOL account and VCR+ working on my flip phone, I'll be all set.
Like lots of new tech, the first response is fear. We don't understand it, so we get fearful. I think, just like social media before it, there is a lot to be fearful of, because no one can say where things will end up. Yet on the other hand, I also know this isn't going away, so we'd better try to shape it in a way that is as beneficial as possible.
One recent thread is about how a person's employer mandates they use up a certain amount of AI credits/tokens per day, regardless of whether they feel they need AI to help them do their jobs. Their response is to whiz through mandatory training by letting the AI answer the questions for them.
They feel their employer is pushing AI so hard to (a) justify the decision to invest in AI services and (b) show that AI can do the jobs of many of the employees better than the employees do. It seems like a pretty reasonable take. If the employer were happy with the way things are, they wouldn't be investing in AI to begin with. Yet of course they won't really know until they actually use AI to replace the employees, and as many firms are finding, once they fire people they can be hard to get back.