On The Nature and Measurement Of Intelligence

“Looked at in one way, everyone knows what intelligence is; looked at in another way, no one does.”

Artificial Intelligence is currently one of the fastest growing fields in the world; a large share of computer science research is now dedicated to A.I and related fields. But before we can make intelligence artificial, we first need to define, understand, and measure it. This post is inspired by a paper I recently read, On the Measure of Intelligence, written in 2019 by François Chollet. François is one of the pioneers in the field and one of the people I highly look up to and admire because of his amazing way of thinking.

The paper describes intelligence as “skill-acquisition efficiency”, as opposed to the common approach of comparing an A.I’s performance to human performance on a specific task. To consider an agent intelligent, it must have the ability to acquire new skills and knowledge.
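To make the distinction concrete, here is a toy sketch (with made-up numbers, not the paper’s actual formalism, which is grounded in algorithmic information theory): two agents reach the same final skill level, but the one that needed less experience to get there scores higher under the skill-acquisition view.

```python
# Toy illustration of "skill-acquisition efficiency" (illustrative only;
# the numbers and the formula are assumptions, not Chollet's definition).

def acquisition_efficiency(skill_curve):
    """Final skill reached, divided by the experience consumed to reach it."""
    final_skill = skill_curve[-1]
    experience = len(skill_curve)  # e.g. number of training episodes
    return final_skill / experience

# Both agents end at the same skill level (0.9)...
fast_learner = [0.3, 0.6, 0.9]            # 3 episodes
slow_learner = [0.1, 0.2, 0.4, 0.6, 0.9]  # 5 episodes

# ...so a pure skill benchmark cannot tell them apart, but the fast
# learner acquired the skill with less experience and scores higher.
print(acquisition_efficiency(fast_learner))  # higher
print(acquisition_efficiency(slow_learner))  # lower
```

A benchmark that only checks the final score would call these two agents equal; the efficiency view does not.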

The difference between humans and A.I nowadays is that A.I can get very good at some tasks, sometimes even better than humans. AlphaZero, for example, is a chess-playing A.I that is practically unbeatable. Take a minute and think about this: no human being will ever beat it at chess! Similarly, AlphaGo was built for the game of Go, and many more agents were made for Atari and other games. But even then, humans are still far more intelligent, because they can adapt to any change in the rules or the environment, while A.I needs a relatively long time, plus human involvement and supervision, to adjust to even the smallest changes. (Consider changing the rules of chess, say how the knight or the queen moves; I would argue that an average chess player would then beat AlphaZero.) To achieve real A.I this should not be the case: A.I agents should be equipped to learn how to learn novel tasks they have never seen before!

Mayraf (my best friend) is a very good chess player; he is actually the best chess player I have ever known, and he is better at chess than any of my friends are at their own things (hobbies, games, etc.). Still, Mayraf was not born knowing chess, nor can we say that God made him only to play chess. We can safely assume that he is an intelligent person, because we implicitly know that he had to use his general intelligence to acquire this specific skill over his lifetime, which reflects his general ability to acquire many other possible skills in the same way. The same assumption does not apply to a non-human system that does not arrive at competence the way humans do. We need A.I agents to be able to repurpose their abilities to learn new skills the same way Mayraf can learn other games using the same abilities he used for learning chess.

“Even though such machines might do some things as well as we do them, or perhaps even better, they would inevitably fail in others, which would reveal they were acting not through understanding, but only from the disposition of their organs.” – Rene Descartes

We want A.I to generalize

So we now have a goal: make A.I generalize, i.e. perform well even in novel situations. But even this needs more clarification. We can divide generalization into two types: system-centric generalization, which means handling situations that are new to the A.I agent itself, and developer-aware generalization, which means performing well even in situations that are new to the A.I’s developer.

We can also qualitatively define degrees of generalization:

Absence of generalization: this is “generalization” in the sense of mathematical proofs; we cannot say a program generalizes if it is just reproducing a proven mathematical relationship. Likewise, a sorting algorithm that is proven to be correct cannot be said to “generalize” to all lists of integers.

Local generalization, or “robustness”: This is the ability of a system to handle new points from a known distribution for a single task or a well-scoped set of known tasks, given a sufficiently dense sampling of examples from the distribution (e.g. tolerance to anticipated perturbations within a fixed context). For instance, an image classifier that can distinguish previously unseen 150×150 RGB images containing cats from those containing dogs, after being trained on many such labeled images, can be said to perform local generalization.

Broad generalization, or “flexibility”: This is the ability of a system to handle a broad category of tasks and environments without further human intervention. For instance, a L5 self-driving vehicle, or a domestic robot capable of passing Wozniak’s coffee cup test (entering a random kitchen and making a cup of coffee) could be said to display broad generalization.

Extreme generalization: This describes open-ended systems with the ability to handle entirely new tasks that only share abstract commonalities with previously encountered situations, applicable to any task and domain within a wide scope. This could be characterized as “adaptation to unknown unknowns across an unknown range of tasks and domains”. Biological forms of intelligence (humans and possibly other intelligent species) are the only example of such a system at this time.

Do we care if a plane eats fish?

So how can we reach human-level generalization? A key step is to understand more deeply how our minds work: how they turn experience into behavior, knowledge, and skills. I came across this blog post by George Hotz, and I really like this idea (not that of a fish-eating plane, of course!). He was saying that to achieve this performance we need to look differently at how we study our brains: “There’s two papers I love that illustrate what understanding a system really means. Can a biologist fix a radio? (2002) and Could a Neuroscientist Understand a Microprocessor? (2017). To me, they illustrate that the current tools of biology and sub-fields are unequipped to handle understanding life. We will know we understand life once we can build life. We will know we understand a brain once we can build a brain.

I think it’s going to end up looking like birds and planes. Planes fly in a much more “rigid” way, and they are much simpler than birds. But for our purposes, planes are more useful, and nobody would deny that they are “flying machines”

Nobody will deny that our artificial life are “thinking machines.” Nobody will care if the machine can “love” in the same way nobody cares if a plane eats fish. We will care if we’ve built something that can do everything useful that a human can do.”

One more post left this year! I think this is actually the 16th post so far; I really have come a long way in these past 5 months, and none of this would have been possible without your support and encouragement, so thanks again. I’m also considering sharing this blog with a larger audience (hopefully soon). The next post will be a year recap and what I’m looking forward to in 2022.

