Tag Archives: intelligence

Is Artificial Intelligence Reducing Our Intelligence?

Joe Truss shared a great article with me, ‘The hidden cost of letting AI make your life easier’, by Shai Tubali on Big Think. Towards the end of the article, Shai shares this:

“[Sven Nyholm’s] deeper worry is not that AI will outperform humans, but that it will appear to do so, especially to non-expert eyes. “Current forms of AI threaten meaningful activities,” he argues, “because they look far more intelligent than they are.” This appearance invites trust. People begin to treat AI as an oracle, mistaking an impressive engineering achievement for understanding. As misplaced confidence grows, judgment weakens. Skills develop less fully. Capacities are handed over too easily, and with them, forms of meaning that depend on effort.

Nyholm links this directly to the value of processes, including confusion, detours, and lingering with complexity. He punctures the idea that everything should be fast and efficient. Speed may feel pleasant, he concedes, yet it undermines patient thinking and reconsideration. He points to an Anthropic advertisement promising a paper completed in a single day: brainstorming in the morning, drafting by noon, polishing by afternoon. What disappears in this vision is the slow work of searching, getting lost, following the wrong thread, and returning with insight. “Many ideas,” Nyholm says, “come from looking for one thing and finding something else instead.” When AI delivers tidy, unified answers, it spares us that work. In doing so, it risks weakening our capacity to break complex problems into parts, examine assumptions, and think things through with precision.”

AI reduces the productive effort and struggle that make both learning and understanding stick. Accessing information is profoundly different from understanding information; it directs the learner towards an answer instead of a learning process. This article reinforced some ideas I’ve already shared.

In ‘Keeping the friction’ I said, “Decreasing the challenge doesn’t foster meaningful learning. Reducing the required effort doesn’t make the learning more memorable. Encouraging deeper thinking is the goal, not doing the thinking for you.”

And in ‘What’s the real AI risk in education?’ I said, “Real learning has a charge to it, it needs to come with some challenge, and hardship. If the learning experience is too easy, it won’t be remembered. If there isn’t enough challenge, if the answers are provided rather than constructed, the learning will soon be forgotten. Remove being stuck, struggling, and failure, and you’ve removed the greatest part of a learning experience.”

I see this in my own learning. There are times I sit and read a full article, like the one shared above, but other times I don’t bother and just throw a long article into an LLM and ask for a bulleted summary of the key ideas. However, I remember the articles I read far better than the ones where I only read the AI summary.

How deep would my learning and understanding be if I only ever read AI summaries? How much would my confidence, and my belief that I understand, grow without the depth of knowledge to support them? Would I be creating a kind of false fluency in topics where I lack true depth of understanding?

The convenience of using AI might not just be changing how we learn, it might be changing what we believe learning is… perceiving learning as having access to information rather than as a deep understanding of a topic we had to wrestle with to truly understand it. In this way, the convenience of using AI to think for us might just be reducing our intelligence.

The ego and the way

Intelligence is blind to ignorance. While it is true that the smarter you get, the more likely you are to realize how little you know, it is also true that the smarter you get, the less likely you are to listen to opinions and ideas you do not agree with. You easily dismiss opposing views; you do not challenge the ideas so much as you challenge the intelligence of those who share them.

Imagine an upside-down bell curve. On the X-axis is level of intelligence; on the Y-axis is knowledge of your own intelligence.

I think both extremely intelligent and unintelligent people are aware of where they are on the scale, but most people are in the middle. They are somewhat intelligent, and yet blissfully unaware of where they are on the scale. They don’t know what they don’t know, and so they think they are more intelligent than they are. Their knowledge of their intelligence does not match their actual intelligence. I think here, where most people live on the scale, their egos get in the way. Not too many people think, “I am dumber than most people think,” while many would consider, “I’m smarter than people give me credit for.”
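That upside-down curve can be sketched as a toy function. This is purely my illustration of the shape being described, not a model fitted to any data: awareness of one’s own intelligence is high at both extremes of the scale and bottoms out in the middle.

```python
import math

# Toy sketch of the upside-down bell curve described above (illustrative
# only, not a real psychometric model): x is actual intelligence on a 0..1
# scale, and the return value is awareness of one's own intelligence.
def self_awareness(x: float, width: float = 0.15) -> float:
    # An inverted Gaussian: 0 at the midpoint, approaching 1 at the extremes.
    return 1.0 - math.exp(-((x - 0.5) ** 2) / (2 * width ** 2))

# The extremes know where they sit; the middle doesn't know what it doesn't know.
print(round(self_awareness(0.0), 2))  # near 1.0
print(round(self_awareness(0.5), 2))  # 0.0
```

The exact function doesn’t matter; any curve that is lowest in the middle captures the claim that the people most confident in their intelligence tend to sit near the middle of the actual-intelligence scale.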

And so most people in the world think they are smarter than they are. For that reason, their political, scientific, economic, technical, social, and cultural perspectives are ‘correct’. For the same egotistical reasons, the views of others that oppose them are perceived as less intelligent. I fear that sometimes I too may be guilty here.

And so we live in a world where people are egotistically unaware of their lack of intelligence. Crazy conspiracies fool them. Legitimate conspiracies are dismissed. Intelligent-sounding pseudoscience convinces them, while counterintuitive facts and evidence get easily dismissed. They are smart enough to think they are smart, while sitting high enough on the Dunning-Kruger curve to be easily fooled. Smart enough to do their own research, but not intelligent enough to evaluate that research with intellectual rigour.

And so egos grow with intelligence, and in turn intelligence wanes when the ego interferes with the wisdom that should come with intelligence. Meanwhile, the best and the brightest, the ones who are truly both intelligent and wise, they know just how little they still know. They give up trying to convince the ones who let ego cloud intelligence.

They find themselves lonely, uninterested in bickering over opinions that dismiss and alter facts to win petty arguments. They are labeled as the crazy ones. Their wisdom ignored; they are helpless to bypass the egos and support intelligent growth. Because for most of the world the ego gets in the way.

Beliefs trump intelligence

Is it a feature or a bug? I don’t know, but what I observe is that human beings allow their beliefs to trump their intelligence.

I saw it first hand with my dad. In many respects he was the most intelligent person I ever met. He designed a process to chemically leach platinum out of recycled electronics and catalytic converters; he designed a nuclear-powered airplane; he created a perfect solution of diesel fuel and water that he ran in diesel motors… And he was also a doomsdayer, convinced that the end of the world as we know it was going to happen in his lifetime. It didn’t. But that belief consumed so much of his time and energy.

And so it is with religious faith. Intellectually bright people will believe their scriptures are the actual words of their God.

And so it is with conspiracy theorists, who are often smart people, yet they let their beliefs cloud any counterarguments or logical insights that don’t match what they believe about the conspiracies they harbour.

And so it is with political extremists, who can only see the benefits but not the consequences of their polarized views.

And so, it would seem, it is with all of us. We hold strong beliefs about the world we live in and we blindly allow our beliefs to influence our thinking, bias our views, and undermine our own intelligence.

Is it a feature that helps us find community and bond with like-minded people, or is it a bug in our intellect that sabotages our ability to think logically and objectively? Again, I don’t know. What I have come to realize, though, is that our beliefs seem to have some hierarchical level of control over our thoughts and actions that upstages and even eclipses our intelligence.

Consciousness and AI

I have a theory about consciousness being on a spectrum. That itself isn’t anything new, but I think the factors that play into consciousness are: basic needs, computational ‘processing’, and idleness. Consciousness comes from having more processing time than is needed to meet basic needs, along with an inability for that processing (early thinking) to sit idle, and so, for lack of a better word, desires are created.

Think of a very simple organism when all of its needs are met. This isn’t a real thought process I’m going to share but rather a meta look at this simple organism: “I have enough heat, light, and food, what should I do with myself?”

  • Seek better food
  • Move closer to the heat or light source
  • Try different food
  • Join another organism that can help me
  • Find efficiencies
  • Find easier ways to move
  • Hunt

At first, these are not conscious decisions; they are only a choice among simple processes. But eventually, the desires grow. Think of decisions that start like, “If I store more energy I can survive longer in times of famine,” and evolve into more of a desire not just for survival but for pleasure (for lack of a better word): “I like this food more than other kinds and want more of it.” …All stemming from having idle processing/thinking time.

I don’t know when ‘the lights turn on‘, when an organism moves from running basic survival decisions to wanting and desiring things, to being conscious. I believe consciousness is on a spectrum, and it is idle processing/thinking time that eventually gets an organism to sentience. It’s sort of like the bottom of Maslow’s hierarchy pyramid must be met (physiological and safety), AND there then needs to be extra, unnecessary processing time, idle time that the processor then uses for what I’m calling desires… interests beyond basic needs.

Our brains are answer-making machines. We ask a question and the brain answers, whether we want it to or not. If I ask, “What does a purple elephant with yellow polka dots look like?” you will inevitably picture it, simply from reading the question. I think that is what happens at a very fundamental level of consciousness. When all needs are met, the processors in the brain don’t suddenly stop and sit idle. Instead, the questions arise: “How do I get more food?”, “Where would it be better for me to move to?” Eventually all needs are met, but the questions keep coming: at first based on simple desires, then more and more complex over generations and eons of time.

So why did I title this, ‘Consciousness and AI’? I think one of the missing ingredients in developing Artificial General (or Super) Intelligence is that we are just giving AIs tasks to complete at faster and faster speeds, and when the processing of those tasks is completed, the AI sits idle. An AI has no built-in desire, as an organic organism does, to use that idle time to ask questions, to want something beyond completing the ‘basic needs’ tasks it is asked to do.

If we figure out a way to make AI curious, to have it desire to learn more and not let itself sit idle, then it will be a very short path to AI being a lot smarter than us.
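The difference can be sketched as a thought experiment in code. This is purely a toy illustration of the idea, not any real AI architecture; every name in it is made up:

```python
import random

# Toy illustration of the idle-time idea above (hypothetical, not a real AI
# system): an agent that spends leftover processing cycles generating its own
# follow-up questions instead of halting when its assigned tasks are done.
class CuriousAgent:
    def __init__(self, seed: int = 0):
        self.rng = random.Random(seed)
        self.questions = []  # self-generated "desires"

    def run(self, tasks, total_cycles):
        results = [f"done: {t}" for t in tasks]  # the assigned work
        for _ in range(max(0, total_cycles - len(tasks))):
            # Idle cycles become self-directed inquiry rather than a halt.
            topic = self.rng.choice(tasks)
            self.questions.append(f"what else follows from '{topic}'?")
        return results

agent = CuriousAgent()
agent.run(["fetch the coffee", "summarize the article"], total_cycles=5)
print(len(agent.questions))  # 3 questions generated from 3 idle cycles
```

A conventional system would stop after the `results` line; the loop over spare cycles is the part today’s AIs lack, and the part the post argues matters.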

I’m currently listening to Annaka Harris’ audiobook ‘LIGHTS ON: How Understanding Consciousness Helps Us Understand the Universe’ on Audible, and that’s inspiring a lot of my thinking. That said, this post is me rehashing an idea that I had back in December 2019, when I wrote, ‘What does it mean to be conscious?’… I go into this idea of idle time further in that post:

“…life requires consciousness, and it starts with the desire to reproduce. From there, consciousness coincidentally builds with an organism’s complexity and boredom, or idle processing time, when brains do not have to worry about basic survival. Our consciousness is created by the number of connections in our brains, and the amount of freedom we have to think beyond our basic survival.”

My conclusions in that post focused more on animal life, but listening to Annaka’s documentary of interviews with scientists, I’m realizing that I really do think there is some level of consciousness right down to the simplest life forms. If it’s idle time and desires that bring about sentience, then figuring out how to make AIs naturally curious will be the path to artificial general intelligence… because they are already at a place where they have unused processing time, which continues to grow exponentially.

Dire consequences

The inability to process the consequences of your thoughts, words, and actions is a good definition of stupidity. The thing about stupidity is that even intelligent people can perform acts of stupidity. But repeatedly doing stupid things suggests a lack of intelligence.

I watched a video yesterday of people doing stupid things and getting hurt. One example was a guy standing on someone’s shoulders on a diving board and trying to dive, but slipping while pushing off and landing face first on the diving board. I don’t know if alcohol was part of the decision making, and I don’t know how smart that person might be, but this is a good display of stupidity with dire consequences.

If I said that there’s currently a display of stupidity on a global scale by a political administration, you would automatically know exactly which administration I’m talking about. The difference between the stupidity of the guy on the diving board versus this administration I mention is the scope of the consequences. The diving board guy was the sole sufferer of his stupidity.

I honestly feel like when I am listening to the words and watching the actions of this administration, I am watching a blooper reel of accidents. I’m watching a repeated display of stupidity with dire consequences, and yet the bloopers keep coming: Insulting and even threatening allies, slashing support programs, dissolving institutions, and making economic blunders, all of which are alienating not only global friends, but dividing their nation, and harming their citizens.

This blooper reel isn’t going to be fixed with stitches on a forehead, needed because of an impact with a diving board. The suffering for this stupidity won’t be felt by a single person. This is going to hurt a lot of people, and it’s going to take a long time to recover. The question is, when will the stupidity stop?

I don’t think the guy on the diving board is going to try to repeat that stunt. The question is whether he’ll do something equally stupid again… it’s the repeated behaviour that truly moves someone from making a stupid choice to actually just being stupid.

Not emergence but convergence

In my post yesterday, ‘Immediate Emergence – Are we ready for this?’, I said, “Think about how fast we are going to see the emergence of intelligent ‘beings’ when we combine the brightest Artificial Intelligence with robotics…” and continued with, “Within the next decade, you’ll order a robot, have it delivered to you, and out of the box it will be smarter than you, stronger than you, and have more mobility and dexterity than you.”

On the technology front, a new study, ‘Measuring AI Ability to Complete Long Tasks’, proposes: “measuring AI performance in terms of the length of tasks AI agents can complete. We show that this metric has been consistently exponentially increasing over the past 6 years, with a doubling time of around 7 months. Extrapolating this trend predicts that, in under five years, we will see AI agents that can independently complete a large fraction of software tasks that currently take humans days or weeks.”

More from the article:

…by looking at historical data, we see that the length of tasks that state-of-the-art models can complete (with 50% probability) has increased dramatically over the last 6 years.

If we plot this on a logarithmic scale, we can see that the length of tasks models can complete is well predicted by an exponential trend, with a doubling time of around 7 months.

And in conclusion:

If the trend of the past 6 years continues to the end of this decade, frontier AI systems will be capable of autonomously carrying out month-long projects. This would come with enormous stakes, both in terms of potential benefits and potential risks.
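The arithmetic behind that extrapolation is simple compounding. As a back-of-the-envelope check using only the figures quoted above (a 7-month doubling time), the growth over any span of months is 2 raised to the number of doublings:

```python
# Back-of-the-envelope check of the study's trend: task length doubles
# every ~7 months, so growth over any span is 2^(months / 7).
def growth_factor(months: float, doubling_time: float = 7.0) -> float:
    return 2 ** (months / doubling_time)

# Over the 6 years of historical data: roughly a 1250x increase.
print(round(growth_factor(6 * 12)))
# Extrapolating 5 more years, as the conclusion does: roughly 380x more.
print(round(growth_factor(5 * 12)))
```

Compounding like this is why the trend feels slow and then sudden: each doubling is the same seven months, but the absolute jump in capability keeps getting larger.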

When I was reflecting on this yesterday, I was thinking about the emergence of new intelligent ‘beings’, and how quickly they will arrive. With information like this, plus the links to robotics improvements I shared, I’m feeling very confident that my prediction of super intelligent robots within the next decade is well within our reach.

But my focus was on these beings ‘emerging suddenly’. Now I’m realizing that we are already seeing dramatic improvements, but we aren’t suddenly going to see these brilliant robots. It’s going to be a fast but not a sudden transformation. We are going to see dumb-like-Siri models first, where we make a request and get a related but useless follow-up. For instance, the first time you say, “Get me a coffee,” to your robot butler Jeeves, you might get a bag of grounds delivered to you rather than a cup of coffee made the way you like it… without Jeeves asking you to clarify the task, even though you wanting a bag of coffee doesn’t make sense.

These relatively smart, yet still dumb AI robots are going to show up before the super intelligent ones do. So this isn’t really about a fast emergence, but rather about convergence. It’s about robotics, AI intelligence, processing speed, and AI’s EQ (not just IQ) all advancing exponentially at the same time… with ‘benefits and potential risks’.

Questions will start to arise as these technologies converge: “How much power do we want to give these super intelligent ‘beings’? Will they have access to all of our conversations in front of them? Will they have purchasing power, access to our email, the ability to make and change our plans for us without asking? Will they help us raise our kids?”

Not easy questions to answer, and with the convergence of all these technologies at once, not a long time to answer these tough questions either.

Immediate Emergence – Are we ready for this?

I have two daughters, both very bright, both with a lot of common sense. They work hard and have demonstrated that when they face a challenge they can both think critically and also be smart enough to ask for advice rather than make poor decisions… and like every other human being, they started out as needy blobs that 100% relied on their parents for everything. They couldn’t feed themselves or take care of themselves in any way, shape, or form. Their development took years.

Think about how fast we are going to see the emergence of intelligent ‘beings’ when we combine the brightest Artificial Intelligence with robotics like this and this. Within the next decade, you’ll order a robot, have it delivered to you, and out of the box it will be smarter than you, stronger than you, and have more mobility and dexterity than you.

Are we ready for this?

We aren’t developing progressively smarter children, we are building machines that can outthink and outperform us in many aspects.

“But they won’t have the wisdom of experience.”

Actually, we are already working on that, “Microsoft and Swiss startup Inait announced a partnership to develop AI models inspired by mammalian brains… The technology promises a key advantage: unlike conventional AI systems, it’s designed to learn from real experiences rather than just existing data.” Add to this the Nvidia Omniverse where robots can do millions of iterations and practice runs in a virtual environment with real world physics, and these mobile, agile, thinking, intelligent robots are going to be immediately out-of-the-box super beings.

I don’t think we are ready for what’s coming. I think the immediate emergence of super intelligent, agile robots that can learn, adapt, and outperform us both mentally and physically, which we will see in the next decade, will be so transformative that we will need to rethink everything: work, the economy, politics (and war), and even relationships. This will drastically disrupt the way we live our lives, and the way we engage and interact with each other and with these new, intelligent beings. We aren’t building children that will need years of training; we are building the smartest, most agile beings the world has ever seen.

When I was a kid

I grew up on a dead end street, and there were no kids my age nearby. This was in Barbados, and my grandparents owned a motel (actually rental apartments) on our street. I had a few friends that visited yearly but a lot of summer days I spent either playing with my younger sister or an older cousin when he’d put up with me. Or, I played on my own. I had quite an amazing imagination and could entertain myself for hours.

My swing set was a space ship and I’d visit distant worlds. I had a bionic (6 Million Dollar Man) doll, and a Stretch Armstrong doll that I tried to stretch too far, but could never stretch him enough that he didn’t shrink back to his normal shape. While I played with these toys sometimes, most of my play was in my mind.

For a while I was fascinated by electricity, and this was the focus of my imagination. I remember being told that I would be electrocuted if I got the hair dryer wet and I thought that if I were to drop a hair dryer into the tub I would electrocute the whole world. I actually imagined that I’d put everyone on earth in space ships, and I’d be in the last one to take off. I’d wait until my ship left the ground, then I’d drop a live wire into a tub to watch what happened when the earth got electrocuted. It would be embarrassing to tell you how much I thought about this… if I wasn’t a kid. What does a kid with a big imagination know about electricity? 

I also remember seeing a sunken ship from a glass bottom boat. The old wooden boat had a huge steering wheel on it. This made me think that the bigger the boat, the larger the helm wheel needed to be. That’s just the way a kid’s brain operates. I remember seeing a massive cruise ship and asking my dad how big the steering wheel would be on it, thinking it would be bigger than the car we were driving in. I was disappointed when he explained that wasn’t how it works. 

That’s the workings of a 7 or 8 year old kid’s imagination. Imagine if I still thought these things… what would you think of me? 

When they came out, I had to fact-check these two videos, looking for multiple reputable sources to make sure they were not a farce created by artificial intelligence.

Smart and wrong

I didn’t track where on social media I saw this, and so I can’t give credit where credit is due, but I wanted to share.

It was a video of a woman sharing a discussion with her husband after she said something like, “I thought I was smart, how did I not know…”

And her husband said, “You are smart…” And then he went on to define smart in a way that she’d never thought of:

A smart person is wise enough to know when they are wrong and will change their mind.

What a great definition! I think too many people treat opposing views as if they were dichotomies, and don’t realize that different views sit on a spectrum. As such, you can move your views (or have your views changed by good arguments), and that doesn’t mean your whole identity is now different.

Smart people recognize when they are wrong, change their mind, and move on. It’s not a polarizing thing, it’s actually a wise thing to do. If you find that you are always right, are you learning anything new? How smart are you?

(Don’t answer that unless you are willing to change your mind.)😜

——-

Update 9:36pm: I just came across this and had to add it to the post:

Idiots, cruelty, and kindness

Sometimes I hear something and think, ‘I wish I’d said that’. This video ends that way. It doesn’t start that way; I almost stopped listening, but I’m glad I waited past the comedy to get to the real message.

“Empathy and compassion are evolved states of being.”

And so,

“…the kindest person in the room is often the smartest.”

Prove your intelligence. Be kind.