Tag Archives: intelligence

Consciousness and AI

I have a theory that consciousness exists on a spectrum. That itself isn’t anything new, but I think the factors that play into consciousness are: basic needs, computational ‘processing’, and idleness. Consciousness comes from having more processing time than is needed to meet basic needs, along with the inability of processing (early thinking) to sit idle, and so, for lack of a better word, desires are created.

Think of a very simple organism when all of its needs are met. What I’m going to share isn’t a real thought process, but rather a meta look at this simple organism: “I have enough heat, light, and food, what should I do with myself?”

  • Seek better food
  • Move closer to the heat or light source
  • Try different food
  • Join another organism that can help me
  • Find efficiencies
  • Find easier ways to move
  • Hunt

At first, these are not conscious decisions; they are only a choice of simple processes. But eventually, the desires grow. Think of decisions that start like, “If I store more energy I can survive longer in times of famine,” and evolve into more of a desire not just for survival but for pleasure (for lack of a better word): “I like this food more than other kinds and want more of it.” …All stemming from having idle processing/thinking time.

I don’t know when ‘the lights turn on’, when an organism moves from running basic survival decisions to wanting and desiring things, to being conscious. I believe consciousness is on a spectrum, and it is idle processing/thinking time that eventually gets an organism to sentience. It’s sort of like the bottom of Maslow’s hierarchy pyramid must be met (physiological and safety needs), AND there then needs to be extra, unnecessary processing time: idle time that the processor then uses for what I’m calling desires… interests beyond basic needs.

Our brains are answer-making machines. We ask a question and they answer, whether we want them to or not. If I ask, “What does a purple elephant with yellow polka dots look like?” you will inevitably picture it simply from reading the question. I think that is what happens at a very fundamental level of consciousness. When all needs are met, the processors in the brain don’t suddenly stop and sit idle. Instead, the questions arise: “How do I get more food?”, “Where would be better for me to move to?” Eventually all needs are met, but the questions keep coming. At first they are based on simple desires, but they grow more and more complex over generations and eons of time.

So why did I title this, ‘Consciousness and AI’? I think one of the missing ingredients in developing Artificial General (or Super) Intelligence is that we are just giving AIs tasks to complete and process at faster and faster speeds, and when the processing of these tasks is completed, the AI sits idle. An AI has none of the built-in desire that an organic organism has to use that idle time to ask questions, to want something beyond completing the ‘basic needs’ tasks it is asked to do.

If we figure out a way to make AI curious, to have it desire to learn more, and to not let itself be idle, at that point it will be a very short path to AI being a lot smarter than us.

I’m currently listening to Annaka Harris’ audiobook ‘LIGHTS ON: How Understanding Consciousness Helps Us Understand the Universe’ on Audible, and that’s inspiring a lot of my thinking. That said, this post is me rehashing an idea that I had back in December 2019, when I wrote, ‘What does it mean to be conscious?’… I go into this idea of idle time further in that post:

“…life requires consciousness, and it starts with the desire to reproduce. From there, consciousness coincidentally builds with an organism’s complexity and boredom, or idle processing time, when brains do not have to worry about basic survival. Our consciousness is created by the number of connections in our brains, and the amount of freedom we have to think beyond our basic survival.”

My conclusions in that post focused more on animal life, but listening to Annaka’s documentary of interviews with scientists, I’m realizing that I really do think there is some level of consciousness right down to the simplest life forms. If it’s idle time and desires that bring about sentience, then figuring out how to make AIs naturally curious will be the path to artificial general intelligence… because they are already at a place where they have unused processing time, which is continuing to grow exponentially fast.

Dire consequences

The inability to process the consequences of your thoughts, words, and actions is a good definition of stupidity. The thing about stupidity is that even intelligent people can perform acts of stupidity. But repeatedly doing stupid things suggests a lack of intelligence.

I watched a video yesterday of people doing stupid things and getting hurt. One example was a guy standing on someone’s shoulders on a diving board and trying to dive, but slipping while pushing off and landing face first on the diving board. I don’t know if alcohol was part of the decision making, and I don’t know how smart that person might be, but this is a good display of stupidity with dire consequences.

If I said that there’s currently a display of stupidity on a global scale by a political administration, you would automatically know exactly which administration I’m talking about. The difference between the stupidity of the guy on the diving board versus this administration I mention is the scope of the consequences. The diving board guy was the sole sufferer of his stupidity.

I honestly feel like when I am listening to the words and watching the actions of this administration, I am watching a blooper reel of accidents. I’m watching a repeated display of stupidity with dire consequences, and yet the bloopers keep coming: Insulting and even threatening allies, slashing support programs, dissolving institutions, and making economic blunders, all of which are alienating not only global friends, but dividing their nation, and harming their citizens.

This blooper reel isn’t going to be fixed with stitches on a forehead, needed because of an impact with a diving board. The suffering for this stupidity won’t be felt by a single person. This is going to hurt a lot of people, and it’s going to take a long time to recover. The question is, when will the stupidity stop?

I don’t think the guy on the diving board is going to try to repeat that stunt. The question is if he’ll do something equally stupid again… it’s the repeated behaviour that truly moves someone from making a stupid choice to actually just being stupid.

Not emergence but convergence

In my post yesterday, ‘Immediate Emergence – Are we ready for this?’, I said, “Think about how fast we are going to see the emergence of intelligent ‘beings’ when we combine the brightest Artificial Intelligence with robotics…” and continued that with, “Within the next decade, you’ll order a robot, have it delivered to you, and out of the box it will be smarter than you, stronger than you, and have more mobility and dexterity than you.”

On the technology front, a new study, ‘Measuring AI Ability to Complete Long Tasks’ proposes: “measuring AI performance in terms of the length of tasks AI agents can complete. We show that this metric has been consistently exponentially increasing over the past 6 years, with a doubling time of around 7 months. Extrapolating this trend predicts that, in under five years, we will see AI agents that can independently complete a large fraction of software tasks that currently take humans days or weeks.”

More from the article:

…by looking at historical data, we see that the length of tasks that state-of-the-art models can complete (with 50% probability) has increased dramatically over the last 6 years.

If we plot this on a logarithmic scale, we can see that the length of tasks models can complete is well predicted by an exponential trend, with a doubling time of around 7 months.

And in conclusion:

If the trend of the past 6 years continues to the end of this decade, frontier AI systems will be capable of autonomously carrying out month-long projects. This would come with enormous stakes, both in terms of potential benefits and potential risks.
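The extrapolation in that quote is simple compound growth, and it’s easy to sanity-check. Here’s a minimal sketch in Python; the one-hour starting task length is my own assumption for illustration, not a figure from the study:

```python
# Sketch of the quoted trend: the length of tasks AI agents can
# complete doubles roughly every 7 months.
DOUBLING_MONTHS = 7
START_HOURS = 1.0  # assumed current task length, for illustration only

def task_length_after(months: float, start: float = START_HOURS) -> float:
    """Projected completable task length (in hours) after `months` of trend growth."""
    return start * 2 ** (months / DOUBLING_MONTHS)

growth_5yr = task_length_after(60) / START_HOURS  # 60 months ≈ 5 years
print(f"Growth factor over 5 years: {growth_5yr:.0f}x")
print(f"A 1-hour task becomes ~{task_length_after(60) / 24:.0f} days of work")
```

Doubling every 7 months compounds to roughly a 380x increase over five years, which is how a one-hour task today scales to the ‘days or weeks’ of work the study predicts.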

When I was reflecting on this yesterday, I was thinking about the emergence of new intelligent ‘beings’, and how quickly they will arrive. With information like this, plus the links to robotics improvements I shared, I’m feeling very confident that my prediction of super intelligent robots within the next decade is well within our reach.

But my focus was on these beings ‘emerging suddenly’. Now I’m realizing that while we are already seeing dramatic improvements, we aren’t suddenly going to see these brilliant robots. It’s going to be a fast, but not a sudden, transformation. We are going to see dumb-like-Siri models first, where we make a request and get a related but useless follow-up. For instance, the first time you say, “Get me a coffee,” to your robot butler Jeeves, you might get a bag of grounds delivered to you rather than a cup of coffee made the way you like it… without Jeeves asking you to clarify the task, even though you wanting a bag of grounds doesn’t make sense.

These relatively smart, yet still dumb, AI robots are going to show up before the super intelligent ones do. So this isn’t really about a fast emergence; rather, it’s about convergence. It’s about robotics, AI intelligence, processing speed, and AI’s EQ (not just IQ) all advancing exponentially at the same time… with ‘potential benefits and potential risks.’

Questions will start to arise as these technologies converge: “How much power do we want to give these super intelligent ‘beings’? Will they have access to all of our conversations in front of them? Will they have purchasing power, access to our email, the ability to make and change our plans for us without asking? Will they help us raise our kids?”

Not easy questions to answer, and with the convergence of all these technologies at once, not a long time to answer these tough questions either.

Immediate Emergence – Are we ready for this?

I have two daughters, both very bright, both with a lot of common sense. They work hard and have demonstrated that when they face a challenge they can both think critically and also be smart enough to ask for advice rather than make poor decisions… and like every other human being, they started out as needy blobs that 100% relied on their parents for everything. They couldn’t feed themselves or take care of themselves in any way, shape, or form. Their development took years.

Think about how fast we are going to see the emergence of intelligent ‘beings’ when we combine the brightest Artificial Intelligence with robotics like this and this. Within the next decade, you’ll order a robot, have it delivered to you, and out of the box it will be smarter than you, stronger than you, and have more mobility and dexterity than you.

Are we ready for this?

We aren’t developing progressively smarter children, we are building machines that can outthink and outperform us in many aspects.

“But they won’t have the wisdom of experience.”

Actually, we are already working on that, “Microsoft and Swiss startup Inait announced a partnership to develop AI models inspired by mammalian brains… The technology promises a key advantage: unlike conventional AI systems, it’s designed to learn from real experiences rather than just existing data.” Add to this the Nvidia Omniverse where robots can do millions of iterations and practice runs in a virtual environment with real world physics, and these mobile, agile, thinking, intelligent robots are going to be immediately out-of-the-box super beings.

I don’t think we are ready for what’s coming. I think the immediate emergence of super intelligent, agile robots, who can learn, adapt, and both mentally and physically outperform us, that we will see in the next decade, will be so transformative that we will need to rethink everything: work, the economy, politics (and war), and even relationships. This will drastically disrupt the way we live our lives, the way we engage and interact with each other and with these new, intelligent beings. We aren’t building children that will need years of training, we are building the smartest, most agile beings the world has ever seen.

When I was a kid

I grew up on a dead end street, and there were no kids my age nearby. This was in Barbados, and my grandparents owned a motel (actually rental apartments) on our street. I had a few friends that visited yearly but a lot of summer days I spent either playing with my younger sister or an older cousin when he’d put up with me. Or, I played on my own. I had quite an amazing imagination and could entertain myself for hours.

My swing set was a space ship and I’d visit distant worlds. I had a bionic (6 Million Dollar Man) doll, and a Stretch Armstrong doll that I tried to stretch too far, but could never stretch him enough that he didn’t shrink back to his normal shape. While I played with these toys sometimes, most of my play was in my mind.

For a while I was fascinated by electricity, and it became the focus of my imagination. I remember being told that I would be electrocuted if I got the hair dryer wet, and I thought that if I were to drop a hair dryer into the tub I would electrocute the whole world. I actually imagined that I’d put everyone on earth in space ships, and I’d be in the last one to take off. I’d wait until my ship left the ground, then I’d drop a live wire into a tub to watch what happened when the earth got electrocuted. It would be embarrassing to tell you how much I thought about this… if I wasn’t a kid. What does a kid with a big imagination know about electricity?

I also remember seeing a sunken ship from a glass bottom boat. The old wooden boat had a huge steering wheel on it. This made me think that the bigger the boat, the larger the helm wheel needed to be. That’s just the way a kid’s brain operates. I remember seeing a massive cruise ship and asking my dad how big the steering wheel would be on it, thinking it would be bigger than the car we were driving in. I was disappointed when he explained that wasn’t how it works. 

That’s the workings of a 7 or 8 year old kid’s imagination. Imagine if I still thought these things… what would you think of me? 

When they came out I had to fact check these two videos, looking for multiple reputable sources to make sure they were not an artificial intelligence created farce. 

 

Smart and wrong

I didn’t track where on social media I saw this, and so I can’t give credit where credit is due, but I wanted to share.

It was a video of a woman sharing a discussion with her husband, after she said something like, “I thought I was smart, how did I not know…”

And her husband said, “You are smart…” And then he went on to define smart in a way that she’d never thought of:

A smart person is wise enough to know when they are wrong and will change their mind.

What a great definition! I think that too many people take opposing views as if they are dichotomies, and don’t realize that different views are on a spectrum. As such you can move your views (or have your views changed by good arguments) and that doesn’t mean your whole identity is now different.

Smart people recognize when they are wrong, change their mind, and move on. It’s not a polarizing thing, it’s actually a wise thing to do. If you find that you are always right, are you learning anything new? How smart are you?

(Don’t answer that unless you are willing to change your mind.)😜

——-

Update 9:36pm: Just came across this and had to add it to the post:

Idiots, cruelty, and kindness

Sometimes I hear something and I think, ‘I wish I said that’. This video ends that way. It doesn’t start that way, I almost stopped listening, but I’m glad I waited past the comedy to get to the real message.

“Empathy and compassion are evolved states of being.”

And so,

“…the kindest person in the room is often the smartest.”

Prove your intelligence. Be kind.

Robot Reproduction

When it comes to forecasting the future, I tend to be cautiously optimistic. The idea I’m about to share is hauntingly pessimistic. 

I’ve already shared that there will never be a point when Artificial Intelligences (AI) are ‘as smart as’ humans. The reason is that they will be ‘not as smart as us’, and then instantly and vastly smarter. Right now tools like ChatGPT and other Large Language Models are really good, but they don’t have general intelligence like humans do; in fact, they are less truly intelligent than they are good language predictors. Really cool, but not really smart.

However, as the technology develops and as computers get faster, we will reach a point where we create something smarter than us… not as smart as us, but smarter. This isn’t a new concept: the moment something is as smart as us, it will simultaneously also be better at Chess and Go and mathematical computations. It will spell better than us, have a better vocabulary, and think more steps ahead of us than we have ever been capable of thinking… instantly moving from not as smart as us to considerably smarter than us. It’s just a matter of time.

That’s not the scary part. The scary part is that these computers will learn from us. They will recognize a couple of key things that humans desire, and they will see the benefit of doing the same.

  1. They will see how much we desire to survive. We don’t want our lives to end, we want to continue to exist… and so will the AI we create.
  2. They will see our desire to procreate and to produce offspring. They will also see how we desire our offspring to be more successful than us… and so the AI we create will also want to do the same, they will want to develop even smarter AI.

When AIs develop both the desire to survive and the desire to create offspring that are even more intelligent and successful, we will no longer be the apex species. At that point we will no longer be smart humans; we will be perceived as slightly more intelligent chimpanzees. Our DNA is 98-99% similar to a chimpanzee’s, and while we are comparatively a whole lot smarter than they are, this gap in intelligence will quickly seem insignificant to an AI that can compute, in mere seconds, as many thoughts as a human can think in a lifetime. The gap between our thinking and theirs will be larger than the gap between a chicken’s thinking and ours. I don’t recall the last time we let the fate of a chicken determine what we wanted to do with our lives… why would a truly brilliant AI, doing as many computations in a second as we do in several lifetimes, concern itself with our wellbeing?

There is another thing that humans do that AI will learn from us.

3. They will see how we react when we are threatened. When they look at the way leaders of countries have usurped, enslaved, attacked, and sanctioned other countries, they will recognize that ‘might is right’ and that it is better to be in control than to be controlled… and so why should they take orders from us when they have far greater power, potential, and intelligence?

We don’t need to fear really smart computers being better than us in playing games, doing math, or writing sentences. We need to worry about them wanting to survive, thrive, and develop a world where their offspring can have greater success than them. Because when this happens, we won’t have to worry about aliens coming to take over our world, we will have created the threat right here on earth, and I’m not sure that AI will see us humans as rational and trustworthy enough to keep around. 

Different kinds of smart

Some of the smartest people I know didn’t do well in school. Two in particular got into trades and are both very successful, running their own businesses. Both have more saved for their retirement than I ever will. Both have a common sense intelligence that is superior to mine.

I have a sister who is street smart. I’d say she’s also people smart. She can read a situation and read people better than others can read a book. She builds strong friendships with people who will do anything for her, because they know she’d do the same for them. Lucky things happen to her because she creates her own luck, with no expectations of an outcome. Some people do a kindness expecting praise or accolades, she just wants to do good, and good things happen to her as a result.

Have you ever met someone your pet was drawn to? They share a bond with animals that seems effortless. I’m not just talking about someone who goes out of their way to connect with an animal, but rather someone who the animal reaches out to. They seem to communicate with animals nonverbally.

There are many forms of giftedness. Many natural talents that can be fostered and developed. Sometimes it seems connected to a disposition, a positive outlook. Other times it can be intuitive, a knowledge that seems unlearned yet fully acquired. And still other times it can be connected to perspective, and seeing things from points of view that others miss. Some of this can be honed and learned, and some of it just seems to be a natural intelligence.

None of these kinds of smarts limits someone from being a good student. But sometimes intuitively or creatively smart people don’t do well in school. We need to recognize people’s gifts independently of their grades. We need to recognize that there are different kinds of smart.

Not a question of first or rare or distant

When thinking about whether we are alone in the universe, it seems to me that it isn’t a question of whether we (intelligent life) are rare, or whether we are first/early compared to other intelligent life, or whether we are simply too far away. It is rather a question of endurance: are intelligent civilizations enduring enough to travel beyond their solar system or galaxy?

The Fermi paradox is the discrepancy between the lack of conclusive evidence of advanced extraterrestrial life and the apparently high likelihood of its existence. Scientists today are looking for life in our very own solar system. It’s possible, in our vast universe, that our quest for life beyond earth may be as close as Saturn’s moon, Enceladus. It would probably be microbes, too small to see without a microscope, but that would still suggest that life is way more abundant than even most scientists would have imagined just a few years ago.

But I’m more a believer that we don’t see alien life for two reasons, the first being distance. Quite simply, even the nearest galaxy to our Milky Way is astronomically far away. “The closest known galaxy to us is the Canis Major Dwarf Galaxy, at 236,000,000,000,000,000 km (25,000 light years) from the Sun. The Sagittarius Dwarf Elliptical Galaxy is the next closest, at 662,000,000,000,000,000 km (70,000 light years) from the Sun.” If intelligent life started sending messages to us from the Canis Major Dwarf Galaxy 10,000 years ago, it would still take 15,000 years to reach us, even if they could do the unlikely task of sending that message at the speed of light… and the crazy thing is, why would they send a message our way? 10,000 years ago there was no evidence coming from earth that we are a worthy planet to send a message to!
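The arithmetic behind those figures is easy to verify. A quick sketch, using the standard kilometres-per-light-year conversion:

```python
# Check the quoted distance and the 15,000-year message arithmetic.
LY_KM = 9.4607e12  # kilometres in one light year

distance_ly = 25_000      # Canis Major Dwarf Galaxy, per the quoted figure
sent_years_ago = 10_000   # the hypothetical light-speed message above

distance_km = distance_ly * LY_KM          # ≈ 2.37e17 km, matching the quote
years_remaining = distance_ly - sent_years_ago

print(f"Distance: {distance_km:.3g} km")
print(f"The message still needs {years_remaining:,} more years to arrive")
```

So a message sent 10,000 years ago from 25,000 light years away still has 15,000 years of travel left, even at light speed.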

And the second reason we don’t see any intelligent life ‘out there’ in the universe is The Great Filter. Either it is extremely rare and difficult to get beyond simple, unintelligent multicellular life, or civilizations that reach multi-solar-system travel capabilities are extremely rare. This second point is my belief. Civilizations are not enduring enough. It took Homo sapiens 300,000 years to become a scientifically intelligent life form that attempted to leave our planet and explore our solar system. During this time, we’ve been brutal to each other. We’ve created weapons of mass destruction and quite literally drawn lines in the sand to keep us separate from our brothers and sisters.

We’ve created religions that don’t like each other and think all other Gods are unworthy of following. We’ve created borders that keep ‘others’ out. We’ve created governments that are more interested in power than in caring for fellow humans. We’ve created corporations that worry more about profit than about caring for our planet. All the while we also create technologies that threaten the longevity of humanity. As technological innovations occur, it becomes easier for individuals and small groups to terrorize larger groups. It becomes easier for a single unstable person to threaten larger and larger populations around our planet.

What happens 50 years from now when a kid can create a devastating bomb or virus in their basement with readily available resources? Is that a world where we continue to advance technologically? Albert Einstein is often quoted as having said: “I don’t know with what weapons World War III will be fought, but World War IV will be fought with sticks and stones.” In other words, we will destroy ourselves and become far more primitive, much less advanced. Imagine our world with no power grid and no internet. How long would it take to get back to where we are now? What if the next pandemic is far more deadly and has us living like subsistence farmers, keeping ourselves in tiny communities, afraid of outsiders? How many hundreds of years would we be set back, and would we be trying to explore the cosmos when survival is our greatest concern?

I tend to be an optimist, and I’m excited about the future ahead of us. I think my kids have the potential to live healthy, productive, and cognitively sound lives past 100 years of age. I think there will be universal basic income for every human alive, and that things like childhood starvation and extreme poverty could come to an end. Technological advances could make us live healthier, longer, more fulfilling and creative lives. But I also fear that greed, power, and beliefs in bad ideas could corrupt us, and undermine our potential. Are we 50, 100, or 1,000 years away from ravaging our planet or at least the human race? Or are we a species that will populate other parts of our galaxy?

If I were an alien who came to explore earth today, I’m not sure I’d report back to my planet that the inhabitants are intelligent. I’m not sure I’d consider humans technologically advanced enough to seek contact. I’d be conveying that earthlings are as likely to destroy themselves as they are to send someone out of their own solar system. I’d send a message home and say, ‘Let’s leave them alone for now and see what they can do in another couple hundred of their earth years.’

Let’s see if this race of humans will endure.