On Intelligence
The ongoing improvements in modern AI challenge our definitions of intelligence. Has the Turing test been passed? Has AGI been achieved? Are humans generally intelligent? Are LLMs?
To start by giving credit where credit is due, we think François Chollet has proposed the most concise definition of intelligence:
Intelligence is skill acquisition efficiency.
Our work is heavily inspired by Chollet's research and the ARC-AGI Challenge.
Intelligence is Multi-Dimensional
The notion of skill acquisition efficiency can be augmented with the following dimensional characteristics:
- Generality ($G$): The diversity and breadth of domains across which the system operates successfully.
- Capability ($C$): The difficulty and complexity of tasks achievable within those domains.
- Efficiency ($E$): The amount of time and energy resources required to accomplish any task, including learning and skill refinement.
We thus characterize any intelligent system $S$ as a vector across these three dimensions:
$$I(S) = \langle G, C, E \rangle$$
Importantly, a more intelligent system is distinctly characterized by the amount of novelty it can deal with.
- A more general intelligence can expand to novel, out-of-distribution problem areas.
- A more capable intelligence can solve harder and more complex problems in a given domain.
- And a more efficient intelligence can acquire skills (i.e., learn) in existing and new domains using less time, data, and energy than a less intelligent one.
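One consequence of treating intelligence as a vector is that two systems need not be comparable: neither may dominate the other on all three dimensions. The sketch below makes this concrete; the numeric scores are purely hypothetical illustrations, not measurements.

```python
from dataclasses import dataclass

# Illustrative sketch: a system S as the vector I(S) = <G, C, E>.
# All scores are hypothetical 0-1 ratings for illustration only.
@dataclass(frozen=True)
class IntelligenceProfile:
    generality: float   # G: breadth of domains handled
    capability: float   # C: difficulty of tasks achievable
    efficiency: float   # E: resource-efficiency of acquiring and applying skill

    def dominates(self, other: "IntelligenceProfile") -> bool:
        """True if this system scores at least as high on every dimension
        and the two profiles are not identical."""
        at_least = (self.generality >= other.generality
                    and self.capability >= other.capability
                    and self.efficiency >= other.efficiency)
        return at_least and self != other

# Hypothetical placements, loosely following the discussion in this essay.
narrow_ai = IntelligenceProfile(generality=0.05, capability=0.95, efficiency=0.2)
llm       = IntelligenceProfile(generality=0.60, capability=0.50, efficiency=0.2)
human     = IntelligenceProfile(generality=0.90, capability=0.70, efficiency=0.6)

# Under these scores the human profile dominates the LLM profile,
# but human and narrow AI are incomparable: neither dominates the other.
print(human.dominates(llm))        # True
print(narrow_ai.dominates(human))  # False
print(human.dominates(narrow_ai))  # False
```

The partial ordering is the point: a scalar "IQ for machines" would force a ranking, whereas the vector view lets a narrow superhuman system and a human remain incomparable.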
Collectively, these dimensions provide sufficient contours to define human intelligence, AGI, and ASI.
Comparative Analysis
Natural evolution itself is an intelligent process, which has produced many organisms that satisfy our definition of intelligence. In fact, all of life exhibits intelligent behavior to varying degrees.
Biological Intelligence
- Single-cell organisms — Very low on generality and capability, but remarkably high on efficiency. A bacterium processes environmental signals and executes survival behaviors on virtually zero energy.
- Higher order life (birds, dolphins, ants) — Moderate generality across survival domains: foraging, navigation, social coordination. Higher capability than single cells, with impressive efficiency.
- Humans — High generality across virtually all domains. High capability for complex, multi-step objectives. Moderate efficiency (20 watts for the brain, but requires significant bodily infrastructure).
Artificial Intelligence
- Narrow, specialized systems (AlphaGo, AlphaFold, image recognition, autonomous driving) — Near-zero generality but superhuman capability within their domain. Poor efficiency (massive compute and energy requirements).
- More general AI systems (LLMs) — High generality across language-accessible domains. Uneven capability: impressive on some tasks, unreliable on others. Low efficiency (billions of parameters, significant training and inference costs).
The Intelligence Spectrum
By incorporating these different dimensions, one can imagine a spectrum of intelligence along which all of these systems fall.
A Minimally Viable Definition of AGI
What is AGI, then?
In our view, AGI is that artificial intelligence which can solve any information processing problem that the average human can solve. That is, setting aside efficiency and capability, AGI is a system for which no problem can be devised that ordinary humans can solve but the AI system cannot.
And what is ASI?
ASI is that artificial intelligence which is more general than humans but also super-human in capability and/or efficiency.
At present, in our view, neither AGI nor ASI has been achieved.
We have superhuman AI systems, but they are not general.
We have semi-general AI systems (LLMs), but they are inefficient and still not as general as humans.
AlphaGo can outplay any human Go player, but only by learning from millions of games. Some humans, by contrast, can become very good at Go within dozens or a few hundred games. An AGI would be able to learn Go as quickly as humans can, without ever having been trained on Go-related game data.
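This contrast can be made concrete with a back-of-the-envelope calculation of skill acquisition efficiency: skill reached per unit of experience consumed. The figures below are rough assumptions for illustration (millions of training games for an AlphaGo-style system, a few hundred for a strong human learner), not measured benchmarks.

```python
# Skill acquisition efficiency: skill reached per training game consumed.
# All numbers below are rough, assumed figures for illustration only.
def acquisition_efficiency(skill_reached: float, games_played: int) -> float:
    """Skill gained per game of experience (higher = more efficient learner)."""
    return skill_reached / games_played

# Assumed: superhuman skill (1.0) after ~30 million self-play games,
# versus strong-amateur skill (0.7) after ~300 games for a human.
alphago_eff = acquisition_efficiency(skill_reached=1.0, games_played=30_000_000)
human_eff = acquisition_efficiency(skill_reached=0.7, games_played=300)

# Even though the machine's final capability is higher, the human learner
# is orders of magnitude more sample-efficient under these assumptions.
print(human_eff > alphago_eff)  # True
```

On these assumed figures the human is tens of thousands of times more sample-efficient, which is exactly the gap that Chollet's definition of intelligence foregrounds.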
In our view, the transformer architecture alone is not enough for us to reach AGI. Biological intelligence simply remains superior across this spectrum of generality, capability and efficiency.
Priori Labs
We are a new research lab interested in the intersection of AI research and biological intelligence. Our initial research is primarily focused on characterizing the gap between human cognitive abilities and modern AI systems.
In particular, we believe there are many areas in which humans still dramatically outperform modern AI systems and that these areas are signposts pointing us in the direction of new ideas.
Scaling alone will not achieve AGI.
New research is needed.