
I have not talked about AI. It’s not something I talk about often with anyone, not even my family or social circle. I have a lot of thoughts on the subject and, just like everyone else, a lot of opinions. I figured it would be a good time to just put them all into one place, and that’s what this article is. This is gonna be a long one, so I’ll break it up into parts for once.
It would be good to start with how I use AI, and how I do not. I will start with how I use it. More than anything else, I use it for cooking. A large language model (LLM) is a fantastic verbal cooking assistant. It can work with my budget to put together a grocery list and a quick recipe with detailed instructions. From there, it can respond to my live feedback and questions during the process to help me perfect my dish. A skilled cook could probably tear the minor details to shreds, but as a bachelor, it makes me feel like a gourmet chef on a budget.
I also use it for automating simple tasks during work. I’m early in my career, and using AI to generate boilerplate while I learn a technology can be invaluable, especially if I am learning slower than I need to work. With that being said, it really is not as good as I would like it to be. Even in the DevOps space, where the work demands more breadth than depth, AI struggles to give me working scripts or pipelines. If it takes more than a few prompts to fix foundational bugs, I throw in the towel and fix it myself, which usually makes the whole exercise a net-zero gain. But if it does work, then I instantly have a simple prototype and a whole trail of breadcrumbs to learn from.
The list of things I do not use AI for is long - very long. A lot of them are, or should be, obvious. I’ll list them in brief. I do not use AI for product code at work - it’s not good at it, no matter what the MBAs of LinkedIn tell you. I do not use AI to make money, because where there is money there is liability, and I don’t trust AI to keep me legally safe. I do not use AI to look things up. It hallucinates frequently, doubles down on misinformation, and must be fact-checked relentlessly if you don’t want to be misled. I do NOT use AI as a therapist, for obvious reasons.
Now for the big one: this one pisses me off so much because I see it every single day. I do not ever, EVER use AI to “vibe code” personal projects that I do for learning or for showing off to employers. What is the point? I truly don’t get it, and I have closed numerous PRs to my personal projects that were AI-generated without any heart or soul. Not only that, but the sheer number of people on LinkedIn who show off personal projects that are obviously vibe-coded is astonishing. What did you learn if somebody else wrote your entire toy OS or software rasterizer? If you can’t actually write a custom allocator, why generate one and then tell the people you want to interview with that you made it yourself? It doesn’t make any sense, and it spits in the faces of those who wrote the code that your language model is ripping from the web.
I’m going to state the obvious: AI is getting better, but it’s still not as good as you would believe if you listened to those with a massive stake in the AI industry. When GPT-3 came out, tech bros swore that software development was dead, that anybody would be able to ship products that were previously only possible with a massive team of developers. We are now at GPT-5-point-whatever, and although it’s gotten better at avoiding its more trivial quirks, it has yet to improve on the more fundamental issues at all. Anyone with even a high-level understanding of the technology saw this coming from a mile away. The investors and shareholders, however, did not.
92% of GDP growth in the US was attributed to AI last year. This isn’t surprising. Our president champions the technology and anyone who pushes its unregulated advancement. Massive data centers are being put in every corner of the Midwest. CEOs are telling other CEOs that they don’t need all the engineers they had over-hired since 2017. The technology itself is valuable and shows a lot of promise, but it is not the same technology that the industry is pitching. That technology simply doesn’t exist yet. Many naysayers will tell you that these are the symptoms of an economic bubble, that the growth is purely artificial. I don’t necessarily believe that. Instead, I believe that we are seeing a new economic sector that is *too big to fail*. Is that better, or worse? I wouldn’t know, as I don’t have a degree in economics and have been trying to avoid the news. But considering that only ~8% of that growth remains if you remove AI from the equation, I certainly hope these industry leaders make some meaningful advancements in the near future. Chat bots alone will not help us make that money back.
Everyone is talking about superintelligence and general artificial intelligence (GAI). Before GPT-2 was released to the public and the hype overtook the entire industry, if you heard either of these two terms in talks with computer scientists, they were used interchangeably. That’s not to say they meant the same thing (they didn’t then, and they certainly don’t now); rather, the consensus was that once GAI was achieved, superintelligence would follow shortly after. Like many others, I no longer believe that’s how things will go down.
Yes, I do think superintelligence and GAI are possible and coming, so we can spare ourselves that incredibly hypothetical and frustrating conversation. With that being said, I don’t think general intelligence must come first. I wouldn’t be surprised if a more case-specific intelligence reached a level above human comprehension in a single area, and I believe that is coming long before GAI. I do not believe either is coming soon, nor do I believe that anybody will be able to predict when either will happen. I do think that when it happens, it will happen quickly and suddenly, and what happens after that will be completely unpredictable.
Maybe it will destroy us all with neurotoxins or nuclear bombs by hacking into international defense systems. Maybe it will use its understanding of human nature to finally convince us all to lock arms and sing kumbaya, rejoicing in newfound empathy as we launch our nuclear weapons into deep space once and for all. I don’t know, so I’m not worried about it. What I’m worried about is what we do with AI until we get to that point. The fact is, AI can already shoot guns, track faces, and launch missiles. If we don’t tread carefully, we will never live to achieve super or general intelligence.
I have a unique perspective on AI in a university setting. Shortly after the release of GPT-4, the most useful of the early models, I left college to do two co-ops, taking an entire year off of school. Before I left, I could count on one, maybe two hands the number of people I saw using it on campus. Even with GPT-4, the technology was not useful enough for anything other than breezing through an essay for a gen-ed class. But when I returned, it was everywhere, being used by everyone for everything.
I didn’t interact with enough people in that final semester to get a feel for the scale of the impact, but I could tell it struck deep. Students were using it to pass not just entire classes, but entire semesters with flying colors. A student with a $20/mo subscription can now graduate with a 4.0 and a coursework-based Master’s degree in computer science, physics, or engineering. I walked the stage with Software Engineering majors who had never written a single line of their own code. These students truly know nothing. Combine this with the highest cost of education in American history and one of the worst hiring phases since 2008, and you’ve got millions of young adults graduating without jobs or the skills to find them.
As someone who tries to use AI for coding - albeit in a more niche field with less open-source training data - I will join the AI skeptics in telling you that it will not be replacing software engineers. It won’t even substantially change their workflow for a while still, though it might eventually change it enough to improve their efficiency by orders of magnitude. Academics have said for a couple of decades now that the field is being overwhelmed with graduates eager for a piece of the golden age. New graduates will tell you that they were sold a lie, and that the guaranteed high-paying jobs offered to grads in the past are nowhere to be found.
So what are we left with? Well, with an academic system unprepared for AI, we have Bachelor’s degrees for $20/mo over 3 - 4 years, and Master’s degrees in 5. Well before ChatGPT, my roommate - a PhD student in Mechanical and Aerospace Engineering - told me he believed the PhD would be the new standard in the future of academics. I believe that future is only a few years away, if that. In the grand scheme, this is not a bad thing. But we need to do something to help all the folks who were late to the party.
Finally, I want to speak to the people who hate AI and all who use it.
If you think AI is useless or purely harmful, you are living in a state of ignorance. The technology is here to stay, and it is going to continue to change things more than it already has. Listen, I get it. The data centers are ugly, loud, and resource-hungry. Tech CEOs have no regard for humanity as a whole, or for anyone they step on while they advance their tech. Jobs and livelihoods are at risk. All of these are valid and important points, and I think you’d be surprised by how much of it I agree with, even as a fan of AI.
Here is the thing: these data centers are not for us. They are training models for more of the B2B bullshit that has led us to this empathy-starved corporatist hell that is modern western society. But AI has been around longer than ChatGPT. By 2017, most hospitals and dentists’ offices had already adopted AI-powered imaging software. These models are non-generative, meaning they are simply not the same technology that these data centers are being built for. Not only that, but a lot of them are good at what they do and have saved numerous lives. The potential behind this technology is real, and it can seriously change things for the better.
Maybe you don’t agree with me, but we have a common enemy. I think that this potential means nothing as long as the only stakeholders the industry leaders care about are the shareholders. We all have a stake in this, and we deserve much better sales representatives, regulations, and leadership at such a landmark point in human history.
Slop is the worst thing that AI has brought so far. Most of the content that I see algorithms recommend on social media is AI-generated garbage. Every tech bro on LinkedIn uses an AI-generated profile picture and posts AI-generated comic strips. Now, we have major companies paying millions of dollars to broadcast AI-generated commercials on streaming platforms and even the Super Bowl. What the hell are we doing?
The question I have that I don’t think any of these corporate suits are asking is: how much of the success of modern quirky commercials can be attributed to the human aspect of their creation? When I see a commercial with anthropomorphic woodland creatures talking about the latest Jeep, it’s impossible not to notice the low quality resulting from the generative AI tools used to make it. The lip syncing is completely wrong, the voices have that telltale monotone buzz, the movements are just… wrong. There have been tens of thousands of successful commercials with anthropomorphic animals animated by real people, and by the 2020s they were almost indistinguishable from reality. But now we are regressing, and for what? Just because? I don’t know about you, but I certainly don’t trust Jeep to care about quality if this is their flagship advertising.