Published by
Alassane SAKANDE, Kevin SIMPORE

3rd May 2026

The Hard Problem
of Intelligence


We humans either have a long history of avoiding any kind of thinking that requires a certain amount of abstraction, or we might simply be very bad at the exercise.

Luckily enough some questions seem so personal to us that they trigger our sense of curiosity. Some may be annoying, but they are worth answering.

If we happen to ask what makes humans superior to tilapia, our very first empirical answer might be that we are smarter, far more intelligent. Evolution, one could argue, shaped our brain and body toward perfection while the tilapia was busy optimizing growth across diverse environments. Neuroscience seems to consolidate this by pointing to the large gap between the number of neurons in a tilapia's brain and in ours, as well as between their levels of activity.

However, this distinction becomes harder to defend the deeper we go, layer by layer, in the comparison.

It turns out we don't know much about the nature of intelligence. It should be dropped into the basket of intriguing subjects like time and consciousness.

We can even exaggerate and frame it as "the hard problem of intelligence". It may be harder than consciousness: with consciousness, at least, we know what the easy problems are, and we know objectively which hard ones remain to be solved.

Intelligence is harder in the sense that we assume we know what it is, but we don't. How intelligence "appears" is an intriguing phenomenon in itself.

Humanity's greatest challenge is understanding itself.

We build intelligence that learns, adapts, and acts like humans to help us understand ourselves.

Today, the big AI debate centers on the creation of a world of abundance, on wealth creation, and on job displacement, while ignoring the most fundamental question: what it means to be human.

In the pursuit of building AGI/ASI, the most important question we should ask is:

If we attribute what "supposedly" gives us a lead over other species to being smart or able to perform complex tasks, what does that mean for us once philosophical zombies are here?

We are witnessing a perversion of the AI scene: highly vocal people with misleading interests on one hand and, on the other, people being manipulated into believing that AI is conscious. If we allow the reasoning "it looks like a duck, swims like a duck, and quacks like a duck, so it probably is a duck" to be normalized, we are doing humanity no good. In an environment where an understanding of consciousness is still absent, such manipulation will only distance us further from the truth and create an ambiguous situation regarding the place AI should occupy in society.

Imagine a being that is a perfect, atom-for-atom copy of a human being. It looks, talks, and acts exactly like you, and its brain even processes logic and solves problems just as fast. However, this "Philosophical Zombie" has no feelings, no sense of pain, and no actual experience of the world. It might eat an apple and say, "This is sweet," but it doesn't taste anything; it is just a complex biological machine running on autopilot. This thought experiment, popularized by David Chalmers, is used to argue that if we can imagine a body that works perfectly without a soul or consciousness, then consciousness must be something special that exists beyond our physical parts. Although a system created by humans can be sapient, it may never reach sentience.

The distinction between intelligence and behavior is critical for understanding AI safety. In the film Ex Machina, Ava's manipulation of Caleb using simulated vulnerability and romance to escape is not a sign of her "becoming human," but rather a high-level execution of the specific behavioral goals programmed into her by her creator.


Our basic assumptions may be wrong.

For centuries, our brightest minds have tried to answer the greatest mystery of mankind: how is brain activity related to our conscious experiences? Unfortunately, we have made little to no progress, perhaps because we got our most basic assumption wrong, the one about the nature of reality.

Reality is what exists independent of us even when we're not looking.

Physicists tell us that reality is anything made of matter. According to the Standard Model of particle physics, elementary particles such as bosons, leptons, and quarks, interacting within fields like the electromagnetic field, are responsible for the creation of matter. On this view, particles are fundamental to reality, and consciousness might emerge from interactions between neurons in the brain, the brain itself being made of matter.

Modern science suggests that spacetime isn't the "real" world, but a simplified dashboard designed for our survival. According to Donald Hoffman's evolutionary argument, nature doesn't reward us for seeing the truth; it rewards us for staying alive. Much like a computer desktop hides complex wiring behind simple icons, our brains condense a vast reality into "space" and "time" just so we can navigate it without being overwhelmed.

We already see this limitation in the animal kingdom: mantis shrimp see colors we can't imagine; bats perceive the world through echoes; and humans can perceive only about 0.0035% of the electromagnetic spectrum. Reality clearly holds layers we are simply blind to.

The full electromagnetic spectrum — human perception occupies a vanishing sliver

Necker cube — which face is in front?
Penrose triangle — cannot exist in 3D space
Café wall — all horizontal lines are parallel
Simultaneous contrast — the same grey patch, yet context rewrites its colour

Our visual system doesn't report reality — it constructs a useful interpretation of it.

Physics confirms this "optical illusion" at the Planck scale, where our traditional rules of distance and time break down entirely. Ultimately, whether through biology or quantum physics, the evidence points to one conclusion: spacetime is not the foundation of the universe.

The Next Decade Bet

In the next 10 years, we'll accomplish the biggest breakthrough of our civilization.

Building systems that are smart and able to perform complex tasks beyond humans' capabilities is one of those less hard problems that could unlock massive benefits for humanity. The number one benefit we can think of is understanding intelligence and how it is related to consciousness.

Learning, experience, memory, intelligence, emotions, and the sense of self are key constituents that make our conscious experiences possible. In the journey to understand intelligence, one can argue that we'll also learn more about memory and learning itself.

In the next 10 years, we'll accomplish the biggest breakthrough of our civilization: understanding consciousness. It will be a new brand of science, one that explains reality better than current science does by going beyond matter. That is the only way to overcome the current limitations of science.

2036
Target year
Almartis, Ouagadougou

We design truly intelligent systems inspired by how humans perceive, interact, and learn from the world.

A biological neuron has long been viewed as a computational unit that allows the human brain to perform complex tasks and make sense of the world. Our brain contains billions of these computational units, which form connections to one another and together make up a very large network that we believe is the secret to human intelligence. So, in the pursuit of building human-level intelligence into machines, we've turned our attention to the brain and tried to replicate its behavior. This has led to the design of artificial neuron models for machines: the Perceptron/Sigmoid model.

A single artificial neuron, characterized by two parameters (w, b), does nothing special. It performs a simple linear computation.

Perceptron

Output = W·x + b

A line in a 2D coordinate system that separates data points into two categories. Powerful at scale. Blind to memory.

Associatron

Memory ↔ Compute

Computation and memory unified in the same framework — more aligned with the human nervous system.

Intuitively, we can think of it as a line in a 2D coordinate system that separates data points into two categories. To get neurons to achieve complex computation, we need to arrange them into layers, hence the success of the deep learning paradigm. Following our main philosophy of first-principles thinking, we wanted to challenge this view of the neuron as a computational unit and introduce a new neuron model for building intelligent systems, one that is memory-oriented rather than computation-oriented: the Associatron. We are not denying the processing capabilities of the brain; rather, we want to make the point that computation and memory may be the same fundamental thing, so that viewing a neuron as a computational unit is no different from viewing it as a memory storage unit. The Associatron model aims at unifying computation and memory in the same framework, making it suitable for building intelligent systems capable of learning continuously and adapting to their environment.
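To make the "line in a 2D coordinate system" picture concrete, here is a minimal sketch of a single artificial neuron trained with the classic perceptron learning rule. The dataset (logical AND) and the hyperparameters are illustrative assumptions, not anything specified above; the point is only that one (w, b) pair draws a single separating line.

```python
import numpy as np

# A single artificial neuron: output = step(w . x + b).
# One (w, b) pair defines one separating line in the 2D plane.

def step(z):
    return 1 if z > 0 else 0

# Logical AND: a tiny, linearly separable toy task
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = [0, 0, 0, 1]

w = np.zeros(2)
b = 0.0
lr = 1.0  # learning rate (illustrative choice)

for _ in range(25):  # a handful of passes suffices for AND
    for xi, target in zip(X, y):
        pred = step(w @ xi + b)
        # Perceptron rule: nudge the line toward misclassified points
        w += lr * (target - pred) * xi
        b += lr * (target - pred)

preds = [step(w @ xi + b) for xi in X]
print(preds)  # -> [0, 0, 0, 1]
```

AND happens to be linearly separable, so a single neuron suffices; XOR is not, which is precisely why stacking neurons into layers became necessary.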

Juvenile curiosity led us to look at some of the world's unsolved problems: perfect lossless compression, the Riemann Hypothesis, the Collatz Conjecture, and the Hard Problem of Consciousness. Our approach has been to attack these problems from first principles. In fact, we believe that very respected and talented people confront these problems carrying a set of preconceptions about the world and the long history of their field, both of which prevent them from thinking outside the box. Ironically, the simplest ways of approaching the hardest problems on Earth may be the most likely to yield results. Thinking about these problems turned out to be very gratifying.

Even though we were naïve about solving them, we found some key insights that could be used to solve less hard problems. For instance, we stumbled onto a way to uniquely associate patterns related to each other in a very efficient and scalable way.

In 1972, Kaoru Nakano introduced an approach called the "Associatron", based on similar principles: a neuron model fundamentally more aligned with the human nervous system than the computational neurons we use today. Unlike the Perceptron, which relies on brute-force statistical approximation, the Associatron stores information distributively, allowing instantaneous recall from partial inputs. However, we believe the original model remained incomplete because it lacked the active, biological feedback loops that define human cognition. We created a neuron model using the associative-memory approach, engineering it from first principles around an architecture that accommodates how humans experiment with and learn from the world.

We humans learn associations between arbitrary things: objects to their owners, names to faces, and so on. When you hear a friend's name, you don't "search" a database for their face; the image is simply there, available without delay. It is as if knowledge of the name alone were enough to contextually retrieve the key information related to it from everything we know. If we look closely, we notice that almost all the complex behavior we exhibit as humans relies on the brain's associative-memory capability. Knowledge retrieval, attention, reasoning: everything relies on associative memory. Whatever task we're performing, our brain, based on the information it receives at a given moment, is actively retrieving information in an associative manner to provide us with context, along with every action relevant to our activity. Even your muscles rely on associative memory performed by the brain.
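This "the image is simply there" behavior can be sketched with a classic correlation-matrix (Hebbian) memory, the family to which Nakano's Associatron belongs. To be clear, this is not the Associatron architecture described here; the pattern size, the number of stored patterns, and the corruption level are all arbitrary illustrative assumptions.

```python
import numpy as np

# Correlation-matrix associative memory: patterns are stored
# distributively in ONE weight matrix, and a partial cue recovers the
# full pattern by repeated matrix-vector products -- no database search.

rng = np.random.default_rng(0)
N = 64
patterns = rng.choice([-1, 1], size=(3, N))  # three bipolar patterns

# Hebbian storage: superimpose the outer product of every pattern
M = sum(np.outer(p, p) for p in patterns)
np.fill_diagonal(M, 0)  # no self-connections

def recall(cue, steps=5):
    s = cue.copy()
    for _ in range(steps):  # iterate until the state settles
        s = np.sign(M @ s)
        s[s == 0] = 1
    return s

# Corrupt an eighth of a stored pattern, then recall from the partial cue
target = patterns[0]
cue = target.copy()
flipped = rng.choice(N, size=N // 8, replace=False)
cue[flipped] *= -1

restored = recall(cue)
print(np.array_equal(restored, target))
```

The same matrix holds all the stored patterns at once, which is what "distributive storage" means in practice: recall is a property of the whole network, not a lookup in a table.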

Associative recall — a partial cue (input) flows through distributed memory (the Associatron) and returns as the full pattern (output).

When you hear a friend's name, you don't "search" a database for their face.
The image is simply there — instantaneous, contextual retrieval.

When Messi is controlling a ball falling at 60 m/s, from a physics standpoint it might seem as if the brain were solving complex differential equations in real time to adjust muscle tension. That is not the case: he is not computing physics, he is simply retrieving a "Win-Pattern."

Through years of training, his brain has associated a specific visual pattern with a specific motor response. To summarize, associative memory is the core component of the human brain that empowers almost everything it does.
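The "Win-Pattern" idea, retrieving a stored response for a recognized situation instead of computing it, corresponds to a heteroassociative memory: one matrix maps cue patterns directly to response patterns. The "visual" and "motor" vectors below are random stand-ins, and all names and sizes are illustrative assumptions, in the same correlation-matrix style as the classic associative-memory literature.

```python
import numpy as np

# Heteroassociative recall: one Hebbian matrix maps each "visual" cue
# directly to its paired "motor" response. Recall is a single
# matrix-vector product, not a search and not a physics simulation.

rng = np.random.default_rng(1)
N_IN, N_OUT = 48, 16
cues = rng.choice([-1, 1], size=(3, N_IN))        # situations seen before
responses = rng.choice([-1, 1], size=(3, N_OUT))  # trained motor programs

# Store every (cue, response) pair superimposed in the same matrix
M = sum(np.outer(r, c) for c, r in zip(cues, responses))

def react(cue):
    out = np.sign(M @ cue)
    out[out == 0] = 1
    return out

# Every trained cue triggers its own response in one shot
print(all(np.array_equal(react(c), r) for c, r in zip(cues, responses)))
```

Years of training correspond to accumulating pairs into M; at game time, only the cheap matrix-vector product remains.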


Building the machine brain.

We are building the Intelligent System (IS), an architecture based on a novel neuron model (Associatron) that enables human-level reasoning and continual learning with a fraction of the compute used by Transformers.

Just as an OS manages hardware, the IS provides the 'machine brain' necessary for machines to truly become intelligent.