Tag Archives: technology

Robots, robots, everywhere

In the world of robots two things are happening at lightning speed:

  1. Capabilities – A year ago humanoid robots were clunky, unstable, and for lack of a better word, robotic.
  2. Production – A year ago if a company could produce 5,000 robots in a year, they were industry leaders.

Have a look at this video and you’ll see just how far robots and their production have advanced: ‘China’s New AI Robots Shock Everyone With Impossible Skills’

It might be cliche to say, but the future has arrived. First in factories, then in homes! If one thing is certain about our future it is that humanoid robots will be all around us. We’ll have to wait and see how this impacts work, chores, and even social interactions, because there isn’t going to be time to think of long-term implications before they arrive… everywhere, very quickly.

Is Artificial Intelligence Reducing Our Intelligence?

Joe Truss shared a great article with me, ‘The hidden cost of letting AI make your life easier’, by Shai Tubali on Big Think. Towards the end of the article, Shai shares this:

“[Sven Nyholm’s] deeper worry is not that AI will outperform humans, but that it will appear to do so, especially to non-expert eyes. “Current forms of AI threaten meaningful activities,” he argues, “because they look far more intelligent than they are.” This appearance invites trust. People begin to treat AI as an oracle, mistaking an impressive engineering achievement for understanding. As misplaced confidence grows, judgment weakens. Skills develop less fully. Capacities are handed over too easily, and with them, forms of meaning that depend on effort.

Nyholm links this directly to the value of processes, including confusion, detours, and lingering with complexity. He punctures the idea that everything should be fast and efficient. Speed may feel pleasant, he concedes, yet it undermines patient thinking and reconsideration. He points to an Anthropic advertisement promising a paper completed in a single day: brainstorming in the morning, drafting by noon, polishing by afternoon. What disappears in this vision is the slow work of searching, getting lost, following the wrong thread, and returning with insight. “Many ideas,” Nyholm says, “come from looking for one thing and finding something else instead.” When AI delivers tidy, unified answers, it spares us that work. In doing so, it risks weakening our capacity to break complex problems into parts, examine assumptions, and think things through with precision.”

AI reduces the productive effort and struggle that makes both learning and understanding stick. Accessing information is profoundly different from understanding it, and it directs the learner towards an answer instead of a learning process. This article reinforced some ideas I’ve already shared.

In ‘Keeping the friction’ I said, “Decreasing the challenge doesn’t foster meaningful learning. Reducing the required effort doesn’t make the learning more memorable. Encouraging deeper thinking is the goal, not doing the thinking for you.”

And in ‘What’s the real AI risk in education?’ I said, “Real learning has a charge to it, it needs to come with some challenge, and hardship. If the learning experience is too easy, it won’t be remembered. If there isn’t enough challenge, if the answers are provided rather than constructed, the learning will soon be forgotten. Remove being stuck, struggling, and failure, and you’ve removed the greatest part of a learning experience.”

I see this in my own learning. There are times I sit and read a full article, like the one shared above, but there are other times that I don’t bother and just throw a long article into an LLM and ask for a bulleted summary of the key ideas. However, I remember articles I read far better than articles where I only read the AI summaries.

How deep would my learning and understanding be if I only went as far as reading AI summaries? How much would my confidence, and my belief that I understand, grow without the depth of knowledge to support it? Would I be creating a kind of false fluency in topics where I lack true depth of understanding?

The convenience of using AI might not just be changing how we learn, it might be changing what we believe learning is… perceiving learning as having access to information rather than having a deep understanding of a topic we needed to wrestle with to truly understand. In this way, the convenience of using AI to think for us might just be reducing our intelligence.

Chore masters

I grew up watching the Jetsons, expecting that one day I’d get to travel around in flying cars. And in this cartoon, the Jetson family had a robotic maid named Rosie, who was always cleaning up behind them. While I’m not sure if flying cars are going to be widespread in the next few years, I think we are going to see a lot of robots doing chores for us.

Just a couple weeks ago I was in a store and watched a robot make a latte for one of the customers. His order included a choice of milk art to go on top… a little flair to add to the experience of having a machine be your barista.

How long before we see somewhat intelligent, human-like robots in every house, each doing mundane chores we’d all rather not do? I’m sure these robots won’t put the wrong items in the dryer… like I do. I’m sure they won’t complain about yard work… like I do. I’m sure they won’t sit on the couch at the end of a long day wishing there weren’t chores to get done… like I do.

I look forward to having a chore master robot that will do the mundane things I don’t like to do. I’d be thrilled to never do dishes again. I’d love to not spend time folding down boxes and putting out the garbage and recycling. I’d have no issues with the idea of never vacuuming again. I’m ready; I’m just wondering how long I’ll have to wait.

Oblivious to what’s coming

If you talk to people about LLMs like ChatGPT, Perplexity, or Claude, you’ll still hear things like, ‘They hallucinate and will make up fake research’, and something I heard recently, ‘they actually make work harder because workers need to spend more time editing and cleaning up what they produce’. What people who say this don’t realize is that this was pre-January 2026, and we are now fully into February 2026. Yes, things are moving that fast! Furthermore, what most people, including me, have not been paying attention to is that when we use the free versions of these tools, we are essentially months and months behind what the latest models can do.

Matt Shumer’s ‘Something Big Is Happening’ was written just 4 days ago and has already been seen by millions of people. Yes, it’s a bit of a long read, but it is also a ‘must read’. Here is an excerpt:

“Dario Amodei, who is probably the most safety-focused CEO in the AI industry, has publicly predicted that AI will eliminate 50% of entry-level white-collar jobs within one to five years. And many people in the industry think he’s being conservative. Given what the latest models can do, the capability for massive disruption could be here by the end of this year. It’ll take some time to ripple through the economy, but the underlying ability is arriving now.

This is different from every previous wave of automation, and I need you to understand why. AI isn’t replacing one specific skill. It’s a general substitute for cognitive work. It gets better at everything simultaneously. When factories automated, a displaced worker could retrain as an office worker. When the internet disrupted retail, workers moved into logistics or services. But AI doesn’t leave a convenient gap to move into. Whatever you retrain for, it’s improving at that too.”

I recently shared my thoughts on the upcoming ‘Fiscal year end squeeze’, where I said, “Corporations care about pleasing shareholders and maintaining stock value over caring for the people who work for them. This is the ugly side of capitalism. Eliminate thousands of salaries and suddenly the balance sheet proves to be more profitable. Never mind that these are people’s careers and livelihood that are being cut short. And never mind about loyalty to the company.” What I’m realizing now, after reading Matt’s article, is that the situation is far worse than I thought, because AI is coming after not just these jobs, but almost every other job these newly unemployed people will be looking for.

If I were out of a job right now, I’d be paying the monthly fees for the two or three best AI models out there and learning how to power-use them. I wouldn’t be looking for a job; I’d be trying to find a niche where I could work for myself, or maybe become a contractor doing things for people who don’t realize that AI is good enough to get the work done faster than they can do it. Because the reality is that the vast majority of people in the world are oblivious to just how fast this disruption is coming, and unlike past disruptions this one is going to happen everywhere, all at once. Most people can’t fathom how disruptive this will be, and even as I share this warning… I’m not sure I fully grasp the full impact either.

Keeping the friction

I’ve been a proponent of integrating technology into schools and classrooms for a couple decades. And in many ways I’m excited about AI and what it has to offer in the field of education.

But I have one major concern above all others: Making learning easier is not the goal.

Decreasing the challenge doesn’t foster meaningful learning. Reducing the required effort doesn’t make the learning more memorable. Encouraging deeper thinking is the goal, not doing the thinking for you.

We need to make sure that AI is not taking the friction out of learning but rather maintaining or increasing the friction in the best places to promote meaningful learning. Friction is required.

The upside down bell curve

The bell curve, also known as a normal distribution, is a graph that depicts how values in a dataset are distributed. Most values cluster around the average with fewer values appearing at the extremes… those rare few that do very well or very poorly.

But there is a new curve emerging that matters more: the upside-down bell curve, where most of the data points sit at the extremes. In an era of free and openly available information, this is the new learning curve. There is no more average majority; instead there are those who understand and those who do not. Those who participate and those who opt out. Those who engage and those who choose not to. Those who seek to learn and those who disengage.

The resources needed to do well are available. The access to information is there for all who want it. The opportunity to get that information in a format or delivery that makes sense to you is easy to find. The question is, are you willing to put the effort in?

If you learn how you best learn, then access to information is no longer a barrier and you will likely learn very well. You will be with the majority of people on the successful side of the distribution curve. If you decide it’s too hard, or choose not to engage, you will be with the other majority, ignorantly selecting the unsuccessful side of the distribution.

There will be anomalies: those with learning challenges that are not met and who struggle, and those who make no effort yet still find it easy to understand things. There will also be those few who just choose to squeak by, capable of more but neither excelling nor struggling. But this is the era of extremes. This is a time when the ‘A’, the ‘Exceeding Expectations’, the ability to excel, is available to most… and yet will only be achieved by those who actually choose it.

The mathematical average of the curve might be the same, but the distribution will be starkly divided.
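That same-average, different-shape idea can be sketched in a few lines of Python. This is a toy illustration with invented numbers (a class average of 70, made-up spreads), not real grade data:

```python
import random
import statistics

random.seed(42)
N = 10_000

# Classic bell curve: most marks cluster around the average (70)
bell = [random.gauss(70, 10) for _ in range(N)]

# "Upside down" bell curve: half the marks cluster low, half high,
# yet the overall average works out to roughly the same 70
bimodal = [
    random.gauss(50, 5) if random.random() < 0.5 else random.gauss(90, 5)
    for _ in range(N)
]

print(round(statistics.mean(bell)))     # roughly 70
print(round(statistics.mean(bimodal)))  # also roughly 70


def near_average(marks, avg=70, width=10):
    """Fraction of marks within `width` points of the average."""
    return sum(abs(m - avg) <= width for m in marks) / len(marks)

print(f"bell curve near average:  {near_average(bell):.0%}")
print(f"upside-down near average: {near_average(bimodal):.0%}")
```

With these made-up numbers the two datasets report nearly identical averages, but in the bimodal case only a few percent of marks fall anywhere near that average; almost everyone sits at one extreme or the other.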

Too quick to ban

Laws create outlaws. The moment you’ve banned cell phones in schools is the moment you admit that you’d prefer teachers to police students rather than teach them.

15 years ago I was living in China and tried to share some sites where student reporters were reporting on the Winter Olympics in Vancouver, but the Great Filter Wall of China blocked the site. I wrote this, and created a little poster to go with it:

Now here is the thing… I chose to move to a country where a lot of sites get blocked. I can’t imagine what it’s like for teachers in the ‘free world’ that have their own school districts do this to them!

If you are in a school where filters filter learning, here is a little poster for you to hang up in your front entrance:

That was a different time, when people thought they could shield students from social media sites just by filtering them at school. But how far have we really progressed if what we are trying to do now is ban phones? Are we going to ban their smart watches too? Their smart glasses? Are we going to make classrooms electronics-free zones? Oh, wait, why don’t we just ban their laptops too?

Gary Stager recently shared this on LinkedIn:

“Every media outlet and social media feed blames screens for all societal ills.(1) Go ahead, get the screens out of schools just like you did with books, musical instruments, & play. Just keep standardized testing and football! We have entered edtech winter. #discuss
(1) real or imagined”

I commented: “Come to Luddite High, where we prepare you for the previous century.”

I find it hard to believe we are here again. Going back 15 years, I wrote ‘Choose Your Battle’, where I said,

Filters that also filter learning -or- High expectations about appropriate use?

Banning POD’s -or- High expectations about appropriate use?

Teaching without technology -or- High expectations about appropriate use?

And

So which battle will it be? Do we make classrooms a war zone? A battle zone to keep technology out? Or do we make it a learning zone? A place where we close the gap between digital distractions and digital classroom tools?

And shared this image:

Sarcasm aside, the point is that filtering and banning are not the solutions we need to be considering. What we need to teach is that there is a time and a place for tools in schools.

More recently I shared:

“With great responsibility comes great power”… that’s the reverse of the Spiderman quote, “With great power comes great responsibility”, and a teacher, John Sarte at Inquiry Hub, uses this to explain to students that while we give them a lot of time to work independently (a lot of responsibility) that comes with a lot of power.”

This applies to technology in the classroom too. We expect students to be responsible with their technology use. We give them the power to choose when it’s appropriate, we put the power in their hands… but when they show they are not responsible, when they abuse the power, we then become more responsible and take away their power.

When a Grade 9 student is working independently and I walk by them scrolling on their phone, I have a conversation with them about how they could be using their time more effectively and ask them to put their phone away. When I see a Grade 11 or 12 doing the same thing, I might or might not have the same conversation. If a kid hands everything in on time, shows pride in all their work, contributes well in class and in groups, and is not using their phone during a lesson or presentation… well then so what if when I walk by they happen to be taking a break? But if it’s a student who still hasn’t figured out how to get good work done on time, I’m definitely having the same conversation I had with the Grade 9.

It’s a whole other story when a class is in session. At that point there needs to be a culture and expectation that the phone is either something being used for learning, as permitted by the teacher, or it’s put away. But to ban it… to remove it from schools… to have to police keeping them out of classrooms altogether, is a Luddite-style, draconian policy that sets us back years if not decades. Schools need to be, “A place where we close the gap between digital distractions and digital classroom tools.” Not a place where we shelter students from tools they will be using everywhere else in their lives.

Digital dog sitter

I went to a store yesterday after work. It was a cold, rainy evening and already dark at around 5:30pm. I picked up the couple items I came for and headed back to my car. Just as I was getting in, I heard a dog barking at me from inside the car next to me. When I looked over, I saw the dog in the back seat and a note on the electric car’s digital display that read:

My driver will be back soon

Then in smaller font:

Don’t worry! The heater is on and it’s 20°C

With the 20°C in very large font, which could easily be read from a distance.

Considering the taboo normally associated with leaving a pet unattended in a car, I thought this was very clever. Highlighting the temperature of the car removed any concern that the dog’s life is in danger from overheating, and noting the driver will be back shortly eases any anxiety for dog lovers who might worry for the dog’s wellbeing.

This also made me think of kids we see today being babysat by technology. The parent in the grocery store handing over their phone to the kid sitting in the front of the grocery cart. The kid in the back seat of a car watching a movie. The kid at home on the iPad while dinner is being made.

What will this look like when we have robots ‘adding value’ to these experiences? Will dog owners send their pets for walks while they step into a store, with the robot babysitter cleaning up any poop the dog leaves along the way? Will kids be playing in the back yard with their robot babysitter rather than having their eyes glued to a screen?

And is this an improvement to what we have now?

I think for dogs it will be, but I wonder about kids. What kinds of bonds will kids build with their robotic babysitters? Will we be able to tell when a teenager has been raised more by robots than by humans? What amount of robot time will be considered too much? Will a parent who lets a robot babysit their kid for hours and hours be judged like a dog owner who left his dog in a hot car?

When we think of robots that we will soon have in our homes, we think of the conveniences they will provide. What happens when one of those conveniences is helping to raise our kids? What impact will it have? There’s a difference between dog sitting and babysitting that makes this question very interesting. And while I find the digital note in a car telling everyone the dog is comfortable and will be attended to soon quite clever, I’m not sure how clever it will be to have robots attending to our kids more than their parents do.

Technical difficulties

I’m surprised that the Jetpack app that I use to publish about 95% of my Daily-Ink posts is so buggy. I often hit ‘Publish’ then get an error. When that happens the post usually publishes anyway, but on the app my post stays in drafts. This can get very confusing.

Sometimes I update the draft and it moves to the Published tab; sometimes it doesn’t. I would not be surprised if I have published the same post twice, thinking that an older post was just a draft I hadn’t completed. The whole process is very messy.

In fact, I was just trying to clean up some of the mess and ended up deleting two posts from a few days ago. I had to go into the Trash and restore them, then date them correctly to have them displayed in the right order on my blog.

I’m always intrigued how advanced our technology is and yet how often we have to put up with bugs and technical difficulties. Our TV doesn’t always work nicely with our cable box. We occasionally have to delete and reinstall streaming apps just so we can log into them. I consider myself pretty tech savvy, but I don’t watch a lot of TV and often give the remote to my wife to navigate… But when it’s time to reinstall the app, she hands it to me to assist. Shouldn’t all this be intuitive? Shouldn’t it all just work?

I wonder if this is going to get better or worse. As all our devices get more technical, are we just going to have to face more technical difficulties? I’m guessing this will be the case. We are going to see more and more not-really-smart ‘smart devices’. The limitations of their smartness are going to create a lot more glitches, bugs, and technical issues. In the coming years we are going to see more rather than fewer technical difficulties.

In my lifetime

I was only one-and-a-half years old when Apollo 11 landed on the moon, just 56 years ago. The computer guidance system was sophisticated for its day, but simple by today’s standards. Years later, when I bought the 64K adapter for my Commodore VIC-20 home computer, which needed to be plugged into my television, I had access to more memory than the Apollo guidance computer had.

Today most calculators have more memory than that. So do our fridges, and other household items that really don’t even need it. We routinely purchase items more sophisticated than the computer that landed the first spacecraft on the moon.

Now we are asking questions of LLMs that do billions of calculations a second, and we don’t even fully understand the processes leading to their answers. The sophistication of these tools is so much greater than anything humankind has created before. Few people in the world truly understand the workings of these tools, in the same way that not many people understood what the Apollo 11 navigation computer was doing back in 1969.

So where is this all leading? What technological advances am I going to see in my lifetime? Are we all going to have house robots doing chores for us? Will we no longer drive because cars will drive (or fly) themselves better than we can? Will I go to the bathroom and have my toilet tell me I’m deficient in a certain vitamin after analyzing my poop?

I’m fascinated by how fast we’ve innovated in less than 60 years. I recognize how much faster we’ve innovated in the last 30 years compared to the 30 before that, and it makes me think that if the rate of innovation continues, I’ll see even greater innovations in the next 15 years. That’s the nature of exponential growth and I think that innovation has been far more exponential than incremental.

I spend a fair bit of time thinking about the future… Be it the future of technology, education, health and longevity. In each of these areas I see things changing drastically in the next 15 years. But I don’t have a crystal ball and I’m not sure that I can separate science from science fiction, or innovation from imagination, as I look forward. In all honesty I have no idea how far technology and innovations will take us in my lifetime, but I’m excited about the possibilities.