
[Image: a tug of war between free will and no free will]

Consciousness and Free Will

I just finished listening to Annaka Harris’ audio documentary, ‘Lights On: How Understanding Consciousness Helps Us Understand the Universe’. I’ve also listened to her, and more so her husband, Sam Harris, talk about free will – or rather, about the claim that we lack free will. On these concepts I consider this couple two of the brightest minds. They have researched these topics far more than I have, and their depth of knowledge and understanding far surpasses mine. I look to them for anything they share on these subjects and admire the scope of what they know. And yet I disagree a fair bit with their conclusions.

I’m not going to detail my thinking completely here. Rather, I’m going to do a bit of a mind dump and hopefully expand on my thoughts later. I just feel that these two topics belong together, and I often think about how they are connected. I’ll also share some links to things I’ve already written on these topics.

1. I don’t think consciousness is fundamental.

I think it is emergent. Consciousness is on a spectrum, but life is a necessary precursor to consciousness. If life must come first, consciousness is not fundamental. So a rock does not have consciousness, but the simplest amoeba does. Every living thing has some level of consciousness. However, there is a minimal basic consciousness related to ‘the lights being turned on’. We can argue about where this point is. While I favour the idea of self-awareness being the ‘lights on’ moment, I think even what it means to be self-aware is debatable, and a human definition automatically demands greater intelligence than I think is required for an organism to be self-aware.

2. Consciousness comes from an excess of processing time.

“…life requires consciousness, and it starts with the desire to reproduce. From there, consciousness coincidentally builds with an organism’s complexity and boredom, or idle processing time, when brains do not have to worry about basic survival. Our consciousness is created by the number of connections in our brains, and the amount of freedom we have to think beyond our basic survival.” And from the link in #1, above, “It’s sort of like the bottom of Maslow’s hierarchy pyramid must be met, (physiological and safety) AND there then needs to be extra, unnecessary processing time, idle time that the processor then uses for what I’m calling desires… interests beyond basic needs.”

3. In this way, free will starts early. The early decision-making might be as simple as moving towards more nutritious food, but somewhere in the development of brains, choices shift towards desires… choosing to move towards something we like or desire, not just something better for the organism. The fact that we do not operate solely in ways that best serve survival is, to me, one of the strongest arguments for free will. Free will is ubiquitous in nature. Animals show higher-order consciousness and make choices that value other life, choices that do not make sense in a universe without free will.

4. Free will is on a bell curve.

Our hardware and software are imperfect, and our beliefs, morals, desires, wants, and wishes are all fed through imperfect systems influenced by outside sources. As a simple example, we know that being hungry can affect our disposition as well as our decision-making. These ‘outside’ influences can be very strong and can keep us low on the free will bell curve, while other choices we make might sit much higher, much freer, on that curve. Hardware issues, like our gut biome or a tumour, can limit our free will, as can software issues, like the brainwashing of beliefs or the societies we live in. But as significant as these influences can be, they do not negate free will.

5. We truly don’t understand consciousness and free will because of our inability to understand the unconscious mind. However, this hardware issue gives us hints. 

I’ll start by saying we do ourselves a disservice when we separate our conscious and unconscious minds. This is a hardware issue that gets in our way, and our software does not have a way around it. The argument that we can put sensors on a person’s brain, ask them a question, and detect their answer before they are consciously aware of it is a poor argument against choice or free will, even if our conscious mind makes up after-the-fact reasons for the decision. The reality is that we are of one mind, and our conscious mind not knowing what our unconscious mind knows at the same time is not the separation we think it is. It’s simply that we have poor hardware that makes us think these are two separate minds.

Glimpses of the unconscious, for example through the use of psychedelic drugs, show us extremely metaphoric imagery, not a doorway to logical processes. This might not seem like a strong argument, but I think it’s better than thinking of ourselves as having two minds: an unconscious with no free will and a conscious with just an illusion of free will. If consciousness is built from idle processing time, the fact that organisms start to make any choices at all that veer away from survival inherently suggests that there is choice, and so there is free will.

All this said, and despite thinking we have free will, I really don’t think it’s all that free. I think our basic survival needs, the desire for sustenance, the desire to procreate, the desire to protect our family, the desire for community and attention, all limit the freeness of our free will. Then there are the limits of our hardware and software, and the influences of other organisms on our bodies… these all flatten the curve of free will to the point that we spend most of our lives not really having much choice. But limited choice and highly influenced choice is still not no choice, and so there is free will even if it’s not completely free.

Consciousness and AI

I have a theory about consciousness being on a spectrum. That itself isn’t anything new, but I think the factors that play into consciousness are: basic needs, computational ‘processing’, and idleness. Consciousness comes from having more processing time than is needed to meet basic needs, along with the inability of processing (early thinking) to sit idle, and so, for lack of a better word, desires are created.

Think of a very simple organism when all of its needs are met. What follows isn’t a literal thought process, but rather a meta look at this simple organism: “I have enough heat, light, and food, what should I do with myself?”

  • Seek better food
  • Move closer to the heat or light source
  • Try different food
  • Join another organism that can help me
  • Find efficiencies
  • Find easier ways to move
  • Hunt

At first, these are not conscious decisions; they are only the outcomes of simple processes. But eventually, the desires grow. Think of decisions that start like, “If I store more energy I can survive longer in times of famine,” and evolve into desires not just for survival but for pleasure (for lack of a better word): “I like this food more than other kinds and want more of it.” …All stemming from having idle processing/thinking time.
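To make that idea a little more concrete, here is a toy sketch in Python. It is entirely my own invention for illustration, not a model of real biology and not anything proposed in the post: an agent spends its processing capacity on survival first, and whatever capacity is left over, the idle time, is what turns into ‘desires’.

```python
import random

# Toy sketch of the idea above: an agent with a fixed processing capacity per
# "tick" spends cycles on survival first, and only the leftover (idle) cycles
# generate "desires". All names and numbers are invented for illustration.

POSSIBLE_DESIRES = ["seek better food", "move toward light", "try new food",
                    "join another organism", "find efficiencies", "hunt"]

def tick(capacity, hunger, threat_level):
    """Spend capacity on survival; leftover capacity becomes idle time."""
    survival_cost = hunger + threat_level          # cycles needed just to stay alive
    idle = max(0, capacity - survival_cost)        # whatever is left over
    desires = []
    for _ in range(idle):                          # idle cycles don't sit unused:
        desires.append(random.choice(POSSIBLE_DESIRES))  # they become wants
    return idle, desires

# A struggling organism: almost no idle time, so almost no desires.
print(tick(capacity=5, hunger=3, threat_level=2))
# A comfortable organism: needs met, idle cycles turn into desires.
print(tick(capacity=5, hunger=1, threat_level=0))
```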

I don’t know when ‘the lights turn on’, when an organism moves from running basic survival decisions to wanting and desiring things, to being conscious. I believe consciousness is on a spectrum, and it is idle processing/thinking time that eventually gets an organism to sentience. It’s sort of like the bottom of Maslow’s hierarchy pyramid must be met (physiological and safety), AND there then needs to be extra, unnecessary processing time, idle time that the processor then uses for what I’m calling desires… interests beyond basic needs.

Our brains are answer-making machines. We ask a question and the brain answers, whether we want it to or not. If I ask, “What does a purple elephant with yellow polka dots look like?” you will inevitably picture it simply from reading the question. I think that is what happens at a very fundamental level of consciousness. When all needs are met, the processors in the brain don’t suddenly stop and sit idle. Instead, the questions arise: “How do I get more food?”, “Where would be better for me to move to?” Eventually all needs are met, but the questions keep coming, at first based on simple desires, then more and more complex over generations and eons of time.

So why did I title this ‘Consciousness and AI’? I think one of the missing ingredients in developing artificial general (or super) intelligence is that we are just giving AIs tasks to complete and process at faster and faster speeds, and when the processing of these tasks is completed, the AI sits idle. An AI has no built-in desire, as an organic organism does, to use that idle time to ask questions, to want something beyond completing the ‘basic needs’ tasks it is asked to do.

If we figure out a way to make AI curious, to have it desire to learn more, and to not let itself be idle, at that point it will be a very short path to AI being a lot smarter than us.
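For what it’s worth, something close to this already exists in AI research under names like curiosity-driven or intrinsically motivated exploration, where an agent rewards itself for surprise and novelty rather than for finishing an assigned task. Below is a minimal sketch of that general idea; the toy ‘world’, the count-based novelty rule, and all the numbers are my own invention for illustration, not anything from the post or from any specific system.

```python
import random
from collections import defaultdict

# Minimal sketch of curiosity as an intrinsic reward: with no external task,
# the agent spends its idle compute visiting whatever it has explored least,
# and treats its own prediction error (surprise) as the reward.
# The "world" and all values here are invented for illustration.

world = {s: random.random() for s in range(10)}   # hidden value at each state
model = {s: 0.5 for s in world}                   # agent's current guesses
visits = defaultdict(int)                         # how often each state was explored

def explore_once(learning_rate=0.5):
    # Prefer states visited least often: a simple count-based novelty bonus.
    state = min(world, key=lambda s: visits[s])
    visits[state] += 1
    surprise = abs(world[state] - model[state])   # prediction error = intrinsic reward
    model[state] += learning_rate * (world[state] - model[state])
    return state, surprise

# With no task assigned at all, idle cycles get spent reducing surprise:
for _ in range(30):
    explore_once()
print({s: round(model[s], 2) for s in world})     # guesses drift toward the hidden values
```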

I’m currently listening to Annaka Harris’ audiobook ‘LIGHTS ON: How Understanding Consciousness Helps Us Understand the Universe’ on Audible, and it’s inspiring a lot of my thinking. That said, this post is me rehashing an idea I had back in December 2019, when I wrote ‘What does it mean to be conscious?’… I go into this idea of idle time further in that post:

“…life requires consciousness, and it starts with the desire to reproduce. From there, consciousness coincidentally builds with an organism’s complexity and boredom, or idle processing time, when brains do not have to worry about basic survival. Our consciousness is created by the number of connections in our brains, and the amount of freedom we have to think beyond our basic survival.”

My conclusions in that post focused more on animal life, but listening to Annaka’s documentary of interviews with scientists, I’m realizing that I really do think there is some level of consciousness right down to the simplest life forms. If it’s idle time and desires that bring about sentience, then figuring out how to make AIs naturally curious will be the path to artificial general intelligence… because they are already at a place where they have unused processing time, and that processing power continues to grow exponentially.

The Light Source

I’ve just started listening to Annaka Harris’ new audio documentary, LIGHTS ON: How Understanding Consciousness Helps Us Understand the Universe.

I find it incredible that the mind is one of the three deep unknowns we know so little about: deep oceans, deep space, and deep minds. All these years of scientific discovery, and we still really don’t know how consciousness works: what turns the lights on, what makes this group of biologically animated atoms conscious and self-aware.

We can’t point to a part of our anatomy and say, ‘this is what makes us conscious’, or ‘this is the spot that makes us know that we are human, that gives us subjective experience’. Will we ever really know? We are still discovering new species of animals in the depths of the ocean. The James Webb Space Telescope is making us question what we know about the origins of the universe. In these areas there are groundbreaking new discoveries all the time… And still we seem to be stuck questioning what makes us conscious, with relatively little new information updating what we can say for certain.

One area that seems to offer new insights is split-brain studies, where people have damage to different areas of the brain or have the left/right brain connection severed. But to me this says more about our hardware than our software. In an oversimplified metaphor: if you have a wiring issue in your house and a light switch no longer works, that doesn’t tell you more about how electricity works. It really doesn’t give us more information about why the lights were on in the first place.

I think it’s fascinating that Annaka chose to question both philosophers and scientists, including physicists like Sara Imari Walker, in her quest to understand consciousness. This won’t be an easy listen. I think this is an audiobook that will require more time than its length, because I’ll need to rewind and re-listen to parts of it. Still, I’m looking forward to learning more, and to pondering big questions about what consciousness is.

And I’m sure that I will be sharing more here.

Related: What does it mean to be conscious?