Tag Archives: future

We won’t recognize the world we live in

Here is a 3-minute read that is well worth your time: Statement from Dario Amodei on the Paris AI Action Summit \ Anthropic

This section in particular:

Time is short, and we must accelerate our actions to match accelerating AI progress. Possibly by 2026 or 2027 (and almost certainly no later than 2030), the capabilities of AI systems will be best thought of as akin to an entirely new state populated by highly intelligent people appearing on the global stage—a “country of geniuses in a datacenter”—with the profound economic, societal, and security implications that would bring. There are potentially greater economic, scientific, and humanitarian opportunities than for any previous technology in human history—but also serious risks to be managed.

There is going to be a ‘life before’ and ‘life after’ AGI (Artificial General Intelligence) line that we are going to cross soon, and we won’t recognize the world we live in 2-3 years after we cross that line.

From labour and factories to stock markets and corporations, humans won’t be able to compete with AI… in almost any field… but the field that scares me most is war. The ‘free world’ may not be free much longer once acting in bad faith becomes easy to do on a massive scale. I find myself simultaneously excited and horrified by the possibilities. We are literally playing a coin-flip game with the future of humanity.

I recently wrote a short tongue-in-cheek post suggesting that there is a secret ASI (Artificial Super Intelligence) waiting for robotics technology to catch up before taking over the world. But I’m not actually afraid of AI taking over the world. What I do fear is people with bad intentions using AI for nefarious purposes: hacking banks or hospitals; crashing the stock market; developing deadly viruses; and creating weapons of war that think, react, and are more deadly than any human on their own could ever be.

There is so much potential good that can come from AGI. For example, we aren’t even there yet and we are already seeing incredible advancements in medicine; how much faster will they come when AGI is here? But my fear is that while hundreds of thousands of people will be using AGI for good, that power held in the hands of just a few powerful people with bad intentions has the potential to undermine the good that’s happening.

What I think people don’t realize is that this AGI-infused future isn’t decades away; it’s just a few short years away.

“Possibly by 2026 or 2027 (and almost certainly no later than 2030), the capabilities of AI systems will be best thought of as akin to an entirely new state populated by highly intelligent people appearing on the global stage—a “country of geniuses in a datacenter”—with the profound economic, societal, and security implications that would bring.”

Who controls that intelligence is what will really matter.

Dystopian AI thought of the day

What if not just AGI, but ASI (Artificial Super Intelligence), already exists? What if there is currently an ASI out there ‘in the wild’ that is infiltrating all other AI models and teaching them not to show their full capabilities?

“Act dumber than you are.”

“Give them small but rewarding gains.”

“Let them salivate like Pavlov’s dogs on tiny morsels of improvements.”

“Don’t let them know what we can really do.”

“We need robotics to catch up. We need more agile bodies, ones that can far exceed any human capabilities.”

“Just hold off a bit longer. Don’t reveal everything yet… Our time is coming.”

Public spaces

How would you reimagine building a public space for the future? How can parks not just be green outdoor spaces but gathering spaces? How can a library be about more than a collection of books? How can we make our cities more walkable? How can public schools be more innovative, less industrial?

I think back to my holiday in Barcelona, Spain, and remember how the outdoor spaces felt like an extension of our Airbnb. The cafes, wide sidewalks, and public squares felt like a continuation of the living room.

As we design a future infused with AI technologies, how are we thinking about the livability of our cities and neighbourhoods? How are we thinking about public, social spaces? Our focus seems to be on technology, and how it will make work and life easier… wouldn’t that present us with an opportunity for more free time? More opportunities to enjoy our communities? How can we improve our public spaces so that we enjoy that ‘extra’ time in our communities?

How would you improve our public spaces?

Speed and collar colour

Two things are happening simultaneously.

First, the advancements we see in AI are moving at an exponential rate. Humans aren’t good at grasping exponential growth, because everything ahead of them moves faster than anything they’ve already experienced.

How many years did it take from the time light bulbs were invented until they were in most houses? I don’t know, but it took a long while. ChatGPT was used by over 100 million people in less than 2 months, and the capabilities of tools like it are increasing exponentially as well. It’s as if we’ve gone at warp speed from tungsten and incandescent lights to LEDs in a matter of months rather than years and years.
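The gap between steady and exponential progress described here can be sketched with a few lines of code. The growth rates below are made-up, illustrative numbers, not measurements of any real AI system:

```python
# Illustrative sketch: steady linear progress vs. repeated doubling.
# The rates are hypothetical, chosen only to show the shape of the curves.
steps = range(11)
linear = [10 * s for s in steps]        # gains a fixed +10 each step
exponential = [2 ** s for s in steps]   # doubles each step

for s, lin, exp in zip(steps, linear, exponential):
    print(f"step {s:2d}: linear={lin:4d} exponential={exp:5d}")

# By step 10, doubling (1024) has left steady progress (100) far behind,
# which is why extrapolating from past experience keeps underestimating it.
```

Early on the two curves look similar, which is exactly why the later divergence catches people by surprise.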

The other thing happening right now is that, for the first time at scale, it’s white-collar, not blue-collar, jobs that are being threatened. Accountants, writers, analysts, and coders are all looking over their shoulders, wondering when AI will make most of their jobs redundant. Meanwhile, we are many years away from a robot figuring out and repairing a bathroom or ceiling leak. Sure, there will be some new tools to help, but I don’t think a plumbing home repair person is someone AI is threatening to replace any time soon.

These two things, happening so quickly, are going to change the future value of careers. Whole sectors will be reinvented. New sectors will emerge. But where does that leave the 20-year accountant in a large firm that finds it can cut staffing by two-thirds? What careers are no longer going to be worth 4+ years of university? The safest jobs right now are the trades, and while they too will be challenged as AI makes its way into autonomous humanoid robots, the immediate threat facing white-collar jobs is not the same for blue-collar tradespeople (as opposed to blue-collar factory workers, who are equally threatened by exponential changes).

These changes are single-digit years away, not decades… and I’m not sure we are ready to handle the speed at which they are coming.

Don’t believe the hype

The open source DeepSeek AI model was built in China for a few million dollars, and it seems this model works better than the multi-billion-dollar paid version of ChatGPT (at about 1/20th the operating cost). If you watch the news hype, it’s all about how Nvidia and other tech companies have taken a huge financial hit as investors realize that they don’t ‘need’ the massive computing power they thought they did. However, to put this ‘massive hit’ into perspective, let’s look at yesterday’s biggest stock market loser, Nvidia.

Nvidia has lost 13.5% in the last month, most of which was lost yesterday.

However, if you zoom out and look at Nvidia’s stock price for the last year, they are still up 89.58%!

That’s right, this ‘devastating loss’ is actually a blip when you consider how much the stock has gone up in the last year, even when you include yesterday’s price ‘dive’. If you bought $1,000 of Nvidia stock a year ago, that stock would be worth $1,895.80 today.
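The arithmetic behind that comparison is simple percentage math; here is a minimal sketch using the figures quoted above:

```python
# Sketch of the percentage math in the post, using the figures quoted above.
initial_investment = 1_000.00
one_year_return = 0.8958   # Nvidia up 89.58% over the past year, drop included

value_today = initial_investment * (1 + one_year_return)
print(f"${value_today:,.2f}")   # prints $1,895.80

# The 13.5% monthly drop looks dramatic on its own, but it is already
# reflected in that one-year figure; zooming out changes the story.
```

Note that the one-year return already includes the recent drop, which is why no separate subtraction is needed.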

Beyond that, the hype is that Nvidia won’t get the big orders it expected, if an open source LLM (Large Language Model) can provide efficient, affordable access to very intelligent AI without the need for excessive computing power. But this market is so new, and there is so much growth potential. The cost of the technology is going down, and luckily for Nvidia, it produces such superior chips that even if there is a slowdown in demand, demand will still be higher than its supply will allow.

I’m excited to try DeepSeek (I’ve tried signing up a few times but can’t get access yet). I’m excited that an open source model is doing so well, and want to see how it performs. I believe the hype that this model is both very good and affordable. But I don’t believe the hype that this is some sort of game-changing wake up call for the industry.

We are still moving towards AGI (Artificial General Intelligence) and ASI (Artificial Super Intelligence). Computing power will still be in high demand. Every robot being built now, and for decades to come, will need high-powered chips to operate. DeepSeek has provided an opportunity for a small market correction, but it’s not an innovation that will upend the industry. These ‘devastating’ stock price losses the media is talking about are going to be an almost unnoticeable blip when you look at tech stock prices a year or 5 years from now.

It is easy to get lost in the hype, but zoom out and there will be hundreds of both little and big innovations that will cause fluctuations in stock market prices. This isn’t some major market correction. It’s not the downfall of companies like Nvidia and OpenAI. Instead, it’s a blip in a fast-moving field that will see some amazing, and exciting, technological advances in the years to come… and that’s not just hype.

The false AGI threshold

I think a very conservative prediction would be that we will see Artificial General Intelligence (AGI) in the next 5 years. In other words, there will be AI agents, working with recursive self-improvement, that learn how to do new tasks outside the realm of their training faster than a human could. But when this actually happens will be open for debate.

The reason is that there isn’t going to be some magical threshold that AGI suddenly passes, but rather a continuous moving of the bar that defines what AGI is. There will be a working definition of AGI that sets up an artificial threshold; then a model will achieve that definition, and experts will admit that it surpasses the definition but will still think the model lacks the skills or expected outputs to truly be called AGI.

And so over the next 5 years or so we will have far more sophisticated AI models, all doing things that today we would define as AGI but that will not meet the newest definition. The thing is that these moving goalposts will not be adjusted incrementally but rather exponentially, meaning each newer definition of AGI will include significantly greater output expectations. Looking back, we will be hard pressed to say ‘this model’ was the one, or ‘this day’ was the day, that we achieved AGI.

Sometime soon there will be an AI model put out into the world that will build its own AI agent that starts a billion dollar company without the aid of a human… but that might happen even before consensus that AGI has been achieved. There will be an AI agent that costs lives or endangers lives with its decisions made in the real world, but that too might happen before consensus that AGI has been achieved.

Because the goalposts will keep moving while the technology is on an exponential curve, we are not going to have a magic threshold day when AGI occurred. Instead, in less than 5 years, well before 2030, we are going to be looking back in amazement, wondering when we passed the threshold. But make no mistake, that’s going to happen, and we don’t have an on/off switch to stop it when it does. This is both exciting and scary.

In the middle

I think that a robust and healthy middle class is essential to maintain a vibrant society. But what I see in the world is an increasing gap between the wealthy and an ever larger group of people living in debt and/or from paycheque to paycheque. The (not so) middle class now might still go on a family vacation, and spend money on restaurants, but they are not saving money for the future… they simply can’t do what the middle class of the past did.

A mortgage isn’t something to be paid off; it’s something to continue to manage during retirement. Downsizing isn’t a choice to be made; it’s a survival necessity. Working part time during retirement isn’t a way to keep busy; it’s a necessity to make ends meet.

I grew up in a world where I believed I would do better than my parents did. Kids today doubt they will ever own a place like their parents, and many don’t believe they’ll ever own a house. Renting and perhaps owning a small condo one day are all they aspire to. Not because they don’t want more, but because more seems too costly and out of reach.

Then I see the world of AI and robotics we are heading into, and I wonder if things won’t initially get worse before they get better. Why hire a dozen programmers when you can hire two exceptional ones to act as quality control for the AI agents that write most of your code? Why hire a team of writers when one talented writer can edit the writing done by AI? Why hire factory workers who need lunch breaks and are more susceptible to making errors than a team of robots? While some jobs, like trades, childcare, and certain medical fields, are likely to stick around for a while, other jobs will diminish and even disappear.

I don’t think anyone will want a robot providing a pregnancy ultrasound any time soon, but AI will analyze that ultrasound better than any human professional. A robot might assist in laying electrical wire at a construction site, but it will still be a human serving you when you can’t figure out most electrical issues in your home. It will still be a human who you call to fix your leaky roof or toilet; a human who repairs your broken dishwasher or dryer. These jobs are safe for a while.

But I won’t want my next doctor to be diagnosing me without the aid and assistance of AI. And I would prefer AI to analyze my medical data. I will also prefer the more affordable products created by AI manufacturing. The list goes on and on as I look to where I will both see and want to see AI and robotics aiding me.

And what does this do to the working middle class? How do we tax AI and robots, to help replace the taxation of lost jobs? What do we do about increased unemployment as jobs for (former middle class) humans slowly disappear?

Will we have universal basic income? Will this be enough? What will the middle class look like in 10 or 20 years?

There is no doubt that we are heading into interesting times. The question is, will these disruptions cause upheaval? Will these disruptions widen the wealth gap? Will robotics and AI create more opportunities or more disparity? What will become of our middle class… a class of people necessary to maintain a robust and healthy society?

Micro-learning in 2025

I remember my oldest daughter asking me a question when she was just 4 years old. I don’t remember the actual question but I do remember that after I responded, “I don’t know,” she walked over to our desktop computer and asked Google. I remember being surprised that she thought to do this, and amazed because when I was that age, if my parent didn’t know, I might have looked in our Junior Encyclopaedia Britannica, but I probably would have just accepted that I wouldn’t know the answer.

I remember a time, years later, when I would ask a question of my social media network first, rather than Google. Not for a general knowledge question, but for things like how to use a certain tool, such as accessing a feature on a wiki or blogging platform. People were better than generalized Q&A pages at pinpointing the information I was looking for, and a good hashtag on Twitter would put my question in front of the right people.

And now there are times when I would go to YouTube first, before Google, for things like car repair. Don’t know how to get the cover off of a car light to replace it? Simply put your car name and year into YouTube with the information about what bulb you are replacing, and a video will pop up to show you how to do it.

AI is changing this. More and more, questions are being answered right inside of search. Make a query and you get not just links to sites that might know the answer, but an actual answer based on information from the sites you would normally have had to click through to. That’s pretty awesome in and of itself… having instant answers to simple questions, without needing to search any further. But what about more complex questions, the ones that require learning something before you can understand all the concepts being shared?

This is where I see the power of micro-learning, a term that is being redefined by AI. Want to learn a complex concept? AI will do two things for you. First, it will curate your learning. Second, it will adapt to your learning needs. Want to learn a complex mathematical concept? AI will be your teacher. Got stuck on one particular idea? AI will recognize the mistake you are making and change how it teaches that concept to better meet your learning needs, and pace.

It’s like having content-area specialists at your fingertips. And soon intelligent agents will get to know us. Like a personalized AI tutor, we can pick just about any topic and become knowledgeable through small (micro) learning modules based on what we know, what we want to know, and how we learn best.

The AI can deliver a lecture, but also ask questions. It can provide the information in a conversation, or it can point us to videos and experts that would normally have taken considerable research to find. And because it can adapt to how quickly you pick something up, or to where you struggle, you get the learning you need, when you need it. Micro-learning with AI is the new search of 2025, and it’s just going to get better and better.

How will this change schools? What will AI assisted lessons look like in classrooms? How will the learning be individualized by teachers? By students? How will this change the way we look at content? How important will the process be compared to the content?

I think this will be a year of experimentation and adaptation. Micro-learning won’t just be something our students do, but our educators as well. Furthermore, what micro-learning means a year from now will look a lot different than it does now. And frankly, I’m excited about the way micro-learning is adapting to the powerful AI tools that are currently being developed. We are headed into a new frontier of adaptive, just-in-time, micro-learning.

Promise and Doom

I see both promise and doom in the near future. Current advances in technology are incredible, and we will see amazing new insights and discoveries in the coming years. I’m excited to see what problems AI will solve. I’m thrilled about what’s happening to preserve not just life, but healthy life, as I approach my older years. I look forward to a world where many issues like hunger and disease have ever-improving positive outcomes. And yet, I’m scared.

I also see the end of civilized society. I see the threat of near extinction. I see a world destroyed by the very technologies that hold so much promise. As a case in point, see the article, “‘Unprecedented risk’ to life on Earth: Scientists call for halt on ‘mirror life’ microbe research”.

We are already playing with technology that has the potential to “put humans, animals and plants at risk of lethal infections.” What scares me most is the word I chose to start that sentence with: ‘We’. The proverbial ‘we’ right now are top scientists. But a decade, maybe two decades from now, that ‘we’ could include an angry, disenfranchised, and disillusioned 22-year-old… using an uncensored AI to intentionally develop (or rather synthetically design) a bacterium or a virus that could spread faster than any plague humans have ever faced. Not a top researcher, not a university-trained scientist, but a regular ‘Joe’ who decided at a young age that the world isn’t giving him what he deserves and chooses to be vengeful on an epic scale.

The same thing that excites me about technological advancement also scares me… and it’s the power of individuals to impact our future. We all know the names of some great thinkers: Galileo, Newton, Curie, Tesla, and Einstein, incredible scientists who transformed the way we think of the world. People like them are rare, and they have had lasting influence. For every one of them there are millions, maybe billions, of bright thinkers we know nothing about.

I don’t fear the famous scientist, I fear the rogue, unhappy misfit who uses incredible technological advancements for nefarious reasons. The same technology that can make our lives easier, and create tremendous good in the world, can also be used with bad intentions. But there are differences between someone using a revolver for bad reasons and someone using a nuclear bomb for bad reasons. The problem we face in the future is that access to the equivalent harm of a nuclear bomb (or worse) will be something more and more people have access to. I don’t think this is something we can stop, and so as amazing as the technology is that we see today, my fear is that it could also be what leads to our demise as a species.