I’m not really sure either of these terms has a solid definition, so let’s start by defining them for the purposes of this post.
AGI – An artificial intelligence which is generally as capable as a human. Maybe it’s better at some things, maybe a little worse at others (as are we all), but it can generally do all the things a human could do from an intellectual perspective. IMO this includes more than just things like writing an essay, creating an image or solving a math problem. I’m thinking about things like:
- Assessing a complex situation and generating solid ideas for improvement without human prompting
- Mentally modeling the steps of a complex process and making decisions based on that model
- Participating in a dynamic physical environment – like controlling a robot to play tennis, for instance
- Demonstrating initiative or motivation to learn or do more
I don’t see any real evidence of LLMs doing these things, although they can definitely appear to be doing some of them some of the time. People can disagree on the details here but I don’t think we are checking all the boxes yet.
SuperIntelligence – This is the thing we are supposed to be scared of or worship, depending on your POV. An AI that is dramatically smarter than any human. The worry is that it might not act in our interest, or might even destroy us, and we would be powerless to stop it. We can’t even pull the plug. I assume the idea here is that it would be so smart it could stop us from finding a way to turn it off, or maybe we’d be too dependent on it to do so, or maybe it would make us all fall in love with it so we wouldn’t want to. There’s a lot of sci-fi out there. Pick one.
People are predicting when these things might happen, and it’s pretty common to hear predictions of 2-5 years for AGI based on how fast it seems things are moving.
After that, some people believe that SuperIntelligence will come fast: once we have an AGI in a computer, we can just add more CPU/data/whatever and get a virtuous cycle where development moves even faster until some singularity occurs.
I’m pretty positive about the real ability of AI to improve human productivity, but I’m not nearly so optimistic about how fast things like AGI and SuperIntelligence are coming.
Let’s start with AGI. My skepticism that just building bigger and bigger transformer models will get us there mostly comes down to the bullet points I listed above. I believe there are several kinds of thinking, and I don’t see much evidence that LLMs and the like are any good at those, even if they can often generate things that appear to be in the correct form. Being able to type out something that looks like a plan is not the same thing as being able to make a plan.
I think one of the best takes here is from Rodney Brooks in Mar 2023.
Ilya Sutskever seems to believe that transformers are going to be enough, but his argument doesn’t seem very convincing. I think Yann LeCun is correct that more machinery is going to be needed. We still need a lot more research and experiments, and better tools for analyzing the capabilities of these systems.
Maybe all of that will happen fast, or maybe the fact that transformer models have attracted so much attention will starve out research into other techniques. Innovation is hard, and when your current way of doing things seems to be working out so far, it can be tough to realize that a new model is needed.
Personally, I think 10 years is a more realistic timeline for something that would generally be considered a true AGI, one that can do all the kinds of things I listed above. Maybe longer.
What about SuperIntelligence? How far off is that?
It’s pretty hard to say, of course. In fact, it’s not clear to me that it’s ever going to happen. No one really knows anything here, just as mice probably don’t understand much about our level of intelligence. By definition, in a sense, we have no idea how hard this problem actually is. I’ll just list a few thoughts about why I’m skeptical of seeing a SuperIntelligence develop in any reasonable time frame.
For starters, it’s not even clear to me that it’s possible for something to be that smart. What rule of the universe says there’s no limit to how intelligent something can be? What does being smarter even mean?
Speed is one component for sure; we can make things that solve problems faster than the old system, but that by itself is only worth so much. We have some problems that we haven’t solved in a long time, and it’s not clear that working faster would make any difference. Maybe we just don’t have the techniques to get there at all.
In order to be SuperIntelligent, an AI will likely need completely new ways of thinking, and where are those going to come from? I don’t think we are going to think of them.
We put a lot of stock in language and imagery, and these tools have enabled us to build up learning over a long period of time. Thought that has been written down can travel through time, to be absorbed and expanded upon by someone in the future. Art and literature have helped give human minds the intellect and capabilities we have today. It thus feels natural to think that training machines on this kind of content can make them as intelligent as we are.
Humans do have a great capacity for abstract thought, but our intelligence developed over a long time through interaction with Nature. This wicked and wonderful teacher has given us the experience needed to evolve our ability to reason about and modify the world. That isn’t something you can just simulate and run faster. Even if we can train a transformer system to think like us by giving it a ton of content we produced, that doesn’t mean it can get smarter than us from any fixed set of data. The only way we know for new capabilities to emerge is for them to develop in a rich and unpredictable environment.
It’s not clear the resources for this development would exist. We can imagine that AI companies will become so rich that they have effectively infinite capital to reinvest in ever more chips to build smarter and smarter machines, or that the smart machines will be able to do some smart stuff to raise more money to make more smart machines. We still might run out of power or physical resources to feed the beast. While it may have happened before in Exodus 32, I don’t know that enough people will be willing to sacrifice enough stuff to build a God.
I am way less worried about an AI God destroying humanity than I am about a group of humans who think they are God using AI to dominate the rest of humanity.