This post is about Artificial General Intelligence, but I’m going on a bird walk first.
My dad has had a health setback and will be on a long road to recovery. When I heard him talking about his recovery plans, I felt I had to share a personal example with him. When I went through 6 months of chronic fatigue, I finally found relief after I discovered that I had a severe Vitamin D deficiency and started taking high-dose supplements. Here’s the thing: I saw positive results in just 3 days… I went from hearing my alarm and wondering how I could physically get out of bed, to feeling normal when I woke up, in just a matter of days of taking supplements. However, it took over 6 months before my treadmill workouts were back to within 90% of what I could do before the fatigue hit me. My capabilities improved, but much more slowly than I expected. We are good at setting goals and knowing what’s possible, but we often overestimate how fast we can achieve those goals.
But I digress (my dad’s favourite thing to say when he finishes a bird walk on his way to making a point).
ChatGPT and similar language-prediction software are still pretty far away from artificial general intelligence, and the question is: how far away are we? How far away are we from a computer being able to out-think, out-comprehend, and out-problem-solve the brightest of humans? Not just at one task, like competing in Chess or Go, but in ‘general’ terms, at any task.
I think we are further away in time than most people think (at least those people who think that artificial general intelligence is possible). I think at least one, if not a few, technological leaps need to happen first, and I think those will take longer than expected.
The hoverboard and flying cars in the movie Back to the Future may not be too far away, but the ‘future’ in that 1989 sequel was supposed to be 2015.
Are we going to achieve Artificial General Intelligence any time soon? I doubt it. I think we need a couple of quantum leaps in our knowledge first. But when this happens, computers will instantly be much smarter than us. They will be far more capable than humans at that point. So the new question isn’t when this will happen, but rather what we do when it does. Because I’m not a fan of a non-human intelligence looking at humans the way we think about stupid chickens, or even smart pets. What happens when the Artificial Intelligence we create sees us as stupid, weak animals? Well, I guess time will tell (but I don’t think that’s any time soon).