The AI technologies dominating the conversation in tech are controversial for a lot of good reasons.
- They violate the rights of copyright holders and remove incentives for creating new good content.
- They have significant environmental costs in carbon and water.
- They will displace and devalue a lot of workers.
- They will contribute to inequality and to the wealth of the most garbage human beings, from Altman to Zuckerberg.
We are told we should accept all these inconveniences because AI is going to usher in a future brighter than today. Call me skeptical, but other people have written a lot more about the evils above, so I won’t add to that here.
What I’ve been thinking about more lately is that in addition to providing some new and better solutions, AI seems like it could actually slow down innovation in a few ways.
Using More Silicon to Build on Sand
I have alluded to this before, but many of the applications I am seeing for AI involve using these models to “paper over” problems with existing systems rather than fixing them. So much of the economy, and thus our lives, has become digitized, and the systems that run it all are not always well maintained or easy to use. We feel this friction but don’t feel we have the power to make it better, so we want some way to route around it.
Maybe your business’s data is poorly organized because the systems that capture it were never well thought out or integrated. No problem: we can make some AI data lake and extract all kinds of “knowledge” from the data that does exist. It might not be perfect, but it will draw some charts and graphs, and it will seem “good enough”.
This feels like an advance for sure, and you might get promoted or make the sale, but the systems at the bottom aren’t getting improved and are now probably even less likely to ever be improved. We may be able to go faster and faster, but we are still going to be dragging a dead weight around behind us.
The same thing can occur when using agents to execute processes in these systems. Sure, the AI can adapt to working around the idiosyncratic behavior of legacy systems, but it isn’t actually improving them. Agents will make small errors and leave all kinds of detritus, and no one will ever bother modernizing the underlying systems or thinking of fundamentally new ways to build them.
Eternal 2023
I’ve done some experiments with AI coding and read a lot about other people’s experiences. There’s no doubt in my mind that AI-assisted coding is here to stay, and it has definitely helped me. There is, however, at least one thing I have noticed that concerns me. Since all of these models are trained on existing code, they are heavily biased towards common/popular code patterns and technologies. I guess that’s good in one sense, but it does tend to mean that it’s going to push you towards specific existing technologies rather than encouraging you to try out new ones. If you want to build a React or Django app, then that’s fine. If instead you’d like to use some brand new technologies or techniques, the AI tool is at best going to help you less and at worst work directly against you.
Of course these tools can be good at generalizing across different programming languages and libraries, but it still seems to me that the more innovative a new paradigm is, the less likely it is that AI assistants will be of much help. This could create even more friction for the adoption and growth of new innovations in software development.
Experience Requires Engagement
I think the main way AI could slow down progress is by slowing down our development as humans.
A lot of people have talked about this in the area of writing prose. The argument goes something like this:
If you are a student, ChatGPT can write the essay for you, but what it doesn’t do is force you to go through the grueling process of weighing all the arguments against each other and trying to develop a coherent thought. The work that’s done before you write and while you edit is what trains your mind to think more clearly. The world doesn’t need more student essays; it needs more people who can think clearly.
We are seeing this play out now in coding with the rise of Vibe Coding. I can get the AI to generate large amounts of code for me by just giving a general description of what I want it to do. Even if we ignore any issues about the quality of the code, we still have the problem that now we have code that no one actually understands very well.
This could be a teaching opportunity, where the developer really studies the generated code and maybe even learns some new tricks. I’m sure this happens sometimes, because it has happened to me, but I’m also pretty sure it’s the rarer occurrence. What’s much more likely is that if the tests pass, the developer will move on. Deadlines are like that.
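To make that concrete, here is a hypothetical, invented example (the function, the scenario, and the test are mine, not from any real assistant transcript): code that looks reasonable, passes the obvious test, and still hides exactly the kind of small error that only a careful line-by-line reading would catch.

```python
# Hypothetical example of assistant-generated code: split a bill evenly among diners.
def split_bill(total_cents: int, num_people: int) -> list[int]:
    share = total_cents // num_people   # integer division silently drops any remainder
    return [share] * num_people

# The kind of quick test a rushed reviewer writes: it passes, so we move on.
assert split_bill(900, 3) == [300, 300, 300]

# The case nobody stopped to think about: the shares no longer add up to the total.
print(sum(split_bill(1000, 3)))  # 999 -- one cent has quietly vanished
```

Nothing about the second case shows up in a green checkmark; noticing it requires exactly the mental execution of the code that skimming skips.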
In my experience, learning to program comes from a line-by-line engagement with the code, building a model and a mapping between your brain and the text while you learn to execute the code in your mind and consider all the possible pitfalls of your attempt. Skimming the output from Cursor and saying LGTM isn’t going to be enough. I don’t think building more layers of smarter agents or .md files is going to help either. Maybe we can try to say that this is the new programming skill set, but that doesn’t change the fact that we will still have zillions of lines of code and fewer people with the skills to understand them.
Maybe none of these effects described above will be significant enough to slow down the progress coming from the big matrix multiplies happening in bigger and bigger data centers. I still think these effects are worth considering as we think about how these technologies are employed and how they affect not just the ability of the economy to grow but our ability to grow as humans.