Thanks to ChatGPT we can all, finally, experience artificial intelligence. All you need is a web browser, and you can talk directly to the most sophisticated AI system on the planet — the crowning achievement of 70 years of effort. And it seems like real AI: the AI we have all seen in the movies. So, does this mean we have finally found the recipe for true AI?
Computers are machines that follow instructions. The programs that we give them are nothing more than finely detailed instructions — recipes that the computer dutifully follows. Your web browser, your email client, and your word processor all boil down to these incredibly detailed lists of instructions. So, if “true AI” is possible — the dream of having computers that are as capable as humans — then it too will amount to such a recipe. All we must do to make AI a reality is find the right recipe.
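To make the point concrete, here is a tiny program in Python: nothing more than a short, explicit recipe that the machine follows step by step. (The task and the names are invented purely for illustration.)

```python
# A program is a recipe: a list of explicit steps the machine follows in order.
def average(numbers):
    total = 0
    for n in numbers:                # step 1: visit each value in turn
        total = total + n            # step 2: keep a running sum
    return total / len(numbers)      # step 3: divide the sum by the count

print(average([2, 4, 6]))  # prints 4.0
```

Every program you use, however sophisticated, ultimately reduces to long chains of small, unambiguous steps like these.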
The path to AGI will likely require unpredictable breakthroughs and innovations. The median predicted date for AGI on Metaculus, a well-regarded forecasting platform, is 2032. To me, this seems too optimistic. A 2022 expert survey estimated a 50% chance of us achieving human-level AI by 2059.
While generative AI can be used to produce, distribute, synthesize, and interpret this kind of research, only humans can look at the findings and make sense of them. Even with today’s advanced technology, understanding why people say what they say remains a human area of expertise.
AI is going to get as smart as humans in many ways, but exactly how smart it gets will be decided largely by advances in quantum computing. Human intelligence isn’t as simple as knowing facts.
For about 40 years, the main idea that drove attempts to build AI was that its recipe would involve modeling the conscious mind: the thoughts and reasoning processes that constitute our conscious existence. This approach was called symbolic AI, because our thoughts and reasoning seem to involve languages composed of symbols (letters, words, and punctuation). Symbolic AI involved trying to find recipes that captured these symbolic expressions, as well as recipes to manipulate those symbols to reproduce reasoning and decision-making.
For these reasons, and more, it seems unlikely to me that LLM technology alone will provide a route to “true AI.” LLMs are rather strange, disembodied entities. They don’t exist in our world in any real sense and aren’t aware of it. If you leave an LLM mid-conversation and go on holiday for a week, it won’t wonder where you are. It isn’t aware of the passing of time or indeed aware of anything at all. It’s a computer program that is literally not doing anything until you type a prompt, and then simply computing a response to that prompt, at which point it again goes back to not doing anything. An LLM’s encyclopedic knowledge of the world, such as it is, is frozen at the point the model was trained; it knows nothing of anything that happened after that.
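A deliberately toy sketch can illustrate the two properties just described: knowledge frozen at training time, and no state or awareness between prompts. This is not how any real LLM works internally; the dictionary and function here are invented solely to make the point.

```python
# Toy illustration only: NOT a real language model. It mirrors two properties
# of LLMs described above: knowledge fixed at "training" time, and each call
# being a pure function of the prompt, with no memory or sense of time.
FROZEN_KNOWLEDGE = {"capital of France": "Paris"}  # fixed once "training" ends

def respond(prompt: str) -> str:
    # No clock, no memory of earlier calls: the reply depends only on the prompt.
    return FROZEN_KNOWLEDGE.get(prompt, "I don't know about that.")

print(respond("capital of France"))      # -> Paris
print(respond("events after training"))  # -> I don't know about that.
```

However long you wait between the two calls, the answers are identical: nothing happens inside the program between prompts.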
Artificial general intelligence (AGI) is difficult to define precisely, but it refers to the kind of superintelligent AI recognizable from science fiction. AGI may still be far off, but the growing capabilities of generative AI suggest that we could be making progress toward its development.
Put simply, Artificial General Intelligence (AGI) can be defined as the ability of a machine to perform any task that a human can. Although the aforementioned applications highlight the ability of AI to perform some tasks with greater efficacy than humans, they are not generally intelligent; that is, they are exceedingly good at a single function while having zero capability to do anything else. Thus, while an AI application may be as effective as a hundred trained humans at one task, it can lose to a five-year-old at almost any other.
For instance, computer vision systems, although adept at making sense of visual information, cannot translate and apply that ability to other tasks. By contrast, a human, although sometimes less proficient at any one of these functions, can perform a far broader range of them than any existing AI application.
While an AI has to be trained on massive volumes of data for each function it is to perform, humans can learn from significantly fewer experiences. Additionally, humans — and perhaps one day agents with artificial general intelligence — can generalize better, applying the lessons of one experience to other, similar experiences. An agent with artificial general intelligence will not only learn from relatively little training data but will also apply the knowledge gained in one domain to another.
For example, an AGI agent trained to process one language using NLP could potentially learn other languages with shared roots and similar syntax. Such a capability would make the learning process of artificially intelligent systems similar to that of humans, drastically reducing training time while enabling the machine to gain competency in multiple areas.
While AI can replace some tasks, it cannot replace human problem-solving skills. Therefore, combining the strengths of AI and human curiosity is necessary to achieve outstanding results in scientific pursuits.
There’s no doubt AI systems appear to be “intelligent” to some extent. But could they ever be as intelligent as humans?
AGI isn’t here yet; current AI models are held back by a lack of certain human traits such as true creativity and emotional awareness.