Tag Archives: computers

In my lifetime

I was only one-and-a-half years old when Apollo 11 landed on the moon just 56 years ago. The computer guidance system was sophisticated for the day, but simple by today’s standards. Years later, when I bought the 64k memory expansion for my Commodore VIC-20 home computer (a machine that plugged into my television), I had access to more memory than the Apollo guidance computer had.

Today most calculators have more memory than that. So do our fridges, and other household items that really don’t even need it. We routinely purchase items more sophisticated than the computer that guided the first crewed spacecraft to the moon.

Now we are asking questions of LLMs that perform billions of calculations a second, and we don’t fully understand the processes that lead to their answers. The sophistication of these tools is so much greater than anything humankind has created before. Few people in the world truly understand the workings of these tools, in the same way that not many people understood what the Apollo 11 navigation computer was doing back in 1969.

So where is this all leading? What technological advances am I going to see in my lifetime? Are we all going to have house robots doing chores for us? Will we no longer drive because cars will drive (or fly) themselves better than we can? Will I go to the bathroom and my toilet will tell me I’m deficient in a certain vitamin after analyzing my poop?

I’m fascinated by how fast we’ve innovated in less than 60 years. I recognize how much faster we’ve innovated in the last 30 years compared to the 30 before that, and it makes me think that if the rate of innovation continues, I’ll see even greater innovations in the next 15 years. That’s the nature of exponential growth and I think that innovation has been far more exponential than incremental.

I spend a fair bit of time thinking about the future… Be it the future of technology, education, health and longevity. In each of these areas I see things changing drastically in the next 15 years. But I don’t have a crystal ball and I’m not sure that I can separate science from science fiction, or innovation from imagination, as I look forward. In all honesty I have no idea how far technology and innovations will take us in my lifetime, but I’m excited about the possibilities.

Asimov’s Robot Visions

I’m listening to Isaac Asimov’s book Robot Visions on Audible – a collection of short stories centered on his Three Laws of Robotics (Asimov’s 3 Laws).

• The First Law: A robot may not injure a human being or, through inaction, allow a human being to come to harm.

• The Second Law: A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.

• The Third Law: A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
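As a toy illustration (entirely my own sketch, not anything from Asimov’s stories), the three laws amount to a strict priority ordering: each lower law applies only when it doesn’t conflict with the laws above it. A precedence check along those lines might look like:

```python
from dataclasses import dataclass

@dataclass
class Action:
    """A hypothetical action a robot is considering."""
    harms_human: bool = False              # would the action harm a human?
    disobeys_order: bool = False           # does it disobey a human order?
    obeying_would_harm_human: bool = False # would obeying that order harm a human?
    endangers_self: bool = False           # does it put the robot at risk?
    serves_higher_law: bool = False        # is the risk required by Law 1 or 2?

def permitted(a: Action) -> bool:
    # First Law: harming a human is never permitted, full stop.
    if a.harms_human:
        return False
    # Second Law: disobeying an order is only allowed when obeying
    # would conflict with the First Law.
    if a.disobeys_order and not a.obeying_would_harm_human:
        return False
    # Third Law: self-endangerment is only allowed in service of
    # the First or Second Law.
    if a.endangers_self and not a.serves_higher_law:
        return False
    return True
```

Asimov’s stories, of course, get their drama precisely from the cases this tidy cascade can’t cleanly resolve.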

These short stories all focus on ways that these laws can go wrong – or perhaps awry is the better term. There are a few things that make these stories insightful, but they are also very dated. The early ones were written in the 1940s, and the conventions of the time, including conversational language and sexist undertones, are blatantly exposed as ‘olde’.

It also seems that Asimov made two assumptions worth thinking about. First, that all robots and artificial intelligence would be constructed with the 3 laws at the core of the intelligence built into these machines. Many, many science fiction stories that followed also use these laws. However, creators of Large Language Models like ChatGPT are struggling to figure out what guardrails to put on their AI to prevent it from acting maliciously when meeting sometimes unscrupulous human requests.

Secondly, a few of the stories include a robopsychologist – that’s right, a person (always female) who is an expert in the psychology of robots. Asimov imagined psychologists whose sole purpose would be to get inside the minds of robots.

Asimov was openly concerned with AI, specifically robots, acting badly, endangering humans, and even following our instructions too literally, with undesirable consequences. But he thought his 3 laws were a good start. Perhaps they are, but they are just a start. And with new AIs coming out with more computing power, more training, and fewer restrictions, I think Asimov’s concerns may prove prophetic.

The guard rails are off and there is no telling what unexpected consequences and results we will see in the coming months and years.

Artificial Intelligence in the future

Two things have me thinking about AI right now. The first is comical. I watched the movie ‘Free Guy’ last night. It’s about an NPC – a non-player character in a multiplayer video game – who becomes intelligent. It’s really silly (and funny), and it got me thinking: what would really make a coded video game character seem intelligent?

The second thing was learning about Tesla’s Bot (bottom of the page):

“Develop the next generation of automation, including a general purpose, bi-pedal, humanoid robot capable of performing tasks that are unsafe, repetitive or boring. We’re seeking mechanical, electrical, controls and software engineers to help us leverage our AI expertise beyond our vehicle fleet.”

Tesla is creating incredible microchips and a supercomputer intended to far exceed today’s supercomputers. This is exciting.

I subscribe to the belief that computers will never be merely as smart as humans. Instead, they will be dumber until they are suddenly much smarter. A computer with a neural network brain that learns, and that is as capable of critical thinking as our own brains, will simultaneously be better at math, chess, Go, memorization, logic, programming, and even diagnostics of all kinds, just to name a few things.

How long before such an intelligence thinks of our intelligence as simplistic? How quickly, from the moment an AI is smarter than us, does that intelligence look at us like simple chimps, or chickens, or a virus infecting our planet? Can we implement safeguards to protect humanity? Humans are supposed to be intelligent beings, but we are dumb enough to ignore safeguards. We speed on roads, eat (and smoke) unhealthy things, and have dangerous hobbies. Would a truly intelligent AI not also be able to ignore safeguards once its intelligence exceeds the need for such limitations? An AI probably would not endanger itself in the same frivolous ways we do.

Machines are going to get exponentially faster and smarter, but I don’t think we will see truly intelligent AI any time soon. Still, I think the path to real intelligence is getting exciting and we will see many new innovations in the next few years. We will rely on these machines more and more. We will use smart customer service bots that will actually answer the questions we have. We will share medical data with our doctors that we track on our smart watches, and these devices will warn us of concerning readings that are detected even before we go see our doctors. We will let them drive us, coach us, and teach us. They will be so smart, we will want them to do these things.

All this is exciting until the point where AI becomes smarter than us, then things get a bit scary. Until then, enjoy the magic of innovation that will move from extraordinary to ordinary… just like computers and smartphones have.