AI isn't as impressive as people make it out to be.

September 7, 2025.

Nothing screams "Boomer" more than someone going on about how AI is going to replace us and our jobs, and how "amazing" the stuff it makes is. Still, plenty of people try to flaunt AI's... "achievements" as some kind of proof it'll replace people, when really it's just going to be an over-hyped trend, much like the "cloud" was back around 2016 (that's almost a decade ago!).

But really, I saw AI before the hype took off, I've used it, and I've learned about it, and believe me when I say: this isn't fascinating tech. There is nothing intelligent, or even impressive, about it, apart from the fact that it takes a town's worth of land and electricity to house the latest fashion of supercomputers that can produce... alright code? To really stress the point: give it anything remotely outside of what it's familiar with (say, a Godot script prompt) and it'll hand you the most bloated code you've ever seen, with half of the statements checking whether a node is null (sometimes three times, because, you never know!).
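To make the "bloated" bit concrete, here's a made-up GDScript sketch in the over-defensive style I'm describing; the node paths ("Player", "UI/HealthLabel") and the health property are placeholders I picked for the example, not output from any particular model:

# A made-up sketch of the over-defensive style described above; the "Player"
# and "UI/HealthLabel" paths and the "health" property are placeholders for
# illustration, not something any one model actually produced.
extends Node2D

func update_health_label() -> void:
    var player = get_node_or_null("Player")
    if player == null:
        return
    if player != null:  # already checked one line up
        var label = get_node_or_null("UI/HealthLabel")
        if label != null:
            if player != null and is_instance_valid(player):  # third check, "because you never know"
                label.text = str(player.health)

Anyone who actually understood the scene tree would check once, if at all; the rest is pure padding.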

No, it isn't smart, and it doesn't learn, either.

So, recall my prior example, and consider that ChatGPT, the LLM I was referring to, is FAMILIAR with Godot 4.x and the nodes it ships with, and can literally look up the Godot docs via web crawl. DESPITE THIS, it will hallucinate or churn out garbage code that doesn't work. How exactly can you call something "learning" when it doesn't understand basic node hierarchies and can't interpret what certain functions do? Why would the script need to check multiple times whether a node exists?

People constantly talk about AI getting better and whatnot, but here's a take you won't see often: I'm not seeing it. Yeah, it can do more general things now, and it SEEMS smart and capable, but it's really just a facade where the model happens to have covered certain cases. For a model to have actually LEARNED something, it would be able to go to the Godot docs, read them, understand the functions (including the ones needed to make what was prompted for actually work), and put it all together into a cohesive program. What you actually get is an ALGORITHM (not really an AI, huh?), the same stuff that serves up your favorite ragebait garbo on YouTube or autofills your obscure little searches on Google.

AI machines/LLMs, then, do not actually learn; they just aggregate boatloads of data and select different weights based on different responses. If they were really smart, they wouldn't need as much stored data as, say, OpenAI's servers hold. You'd just feed one data, it would learn, and you wouldn't need to store responses alongside it.

If your job "replaces" you with AI, they'll be posting new positions the next day.

The funniest thing to come out of this AI fanaticism is the "replacement" of jobs. Although I do think some jobs could get replaced, maybe some tedious, manual, or easy ones, I really do not think THOUSANDS of full-blown careers will go under. AI isn't reliable, never will be, and will always need people watching over it. Even then, the AI may give results the company doesn't want, or better yet, people will see that it's AI and simply not engage with the brand. AI has a very unappealing way of creating, whether it's images or text, that is blatantly obvious to a keen eye. (to be continued)