My view on GenAI
Finding my way in the ever-changing landscape.
This article is about the use of Generative Artificial Intelligence (GenAI) such as ChatGPT, Gemini and Copilot. Other AI/ML applications are outside the scope of this blog.
Lately I've been thinking more and more about my view on GenAI. It's an amazing technology, don't get me wrong, but it's also amazingly overhyped. There are plenty of examples where the use of GenAI has gone horribly wrong. Besides the questionable quality of current LLMs, there are other concerns such as copyright infringement, privacy risks, and environmental impact.
For all of these reasons, I don't really use it. I recently ran a local model to see what GenAI does and how it works, and I quickly came to the conclusion that it wasn't for me. There's another reason I don't use GenAI, especially not for work, but I couldn't put my finger on what it was.
That was, until I found this interesting article from Geocodio about how their team makes use of Claude Code. Cory's reasoning really resonates with me. He states that he would miss crucial opportunities for learning. This is further reinforced by the fact that you interact with AI-generated code differently than with code you write yourself. It's sort of like learning to write: You only get better at writing by actually doing it, not by reading.
While I see myself as a reasonably seasoned developer, I don't want to miss out on learning new skills that I can apply without help from an LLM. I want to do my work well, and if I don't fully understand how AI-written code works, it becomes very difficult for me to push that code to production. Also, the quality of AI-generated code is already a problem, and I don't want to add to that.
The only way I would use this technology is to learn from it rather than to use its output directly. But for now I will keep working the same way I have for the past decade: Steady, and with actual improvement to my work and my knowledge.