Tag Archives: future

Robots, robots, everywhere

In the world of robots two things are happening at lightning speed:

  1. Capabilities – A year ago humanoid robots were clunky, unstable, and for lack of a better word, robotic.
  2. Production – A year ago if a company could produce 5,000 robots in a year, they were industry leaders.

Have a look at this video and you’ll see just how much farther along robots and their production have advanced: ‘China’s New AI Robots Shock Everyone With Impossible Skills’.

It might be cliché to say, but the future has arrived. First in factories, then in homes! If one thing is certain about our future, it is that humanoid robots will be all around us. We’ll have to wait and see how this impacts work, chores, and even social interactions, because there isn’t going to be time to think through the long-term implications before they arrive… everywhere, very quickly.

Is Artificial Intelligence Reducing Our Intelligence?

Joe Truss shared a great article with me, ‘The hidden cost of letting AI make your life easier’, by Shai Tubali on Big Think. Towards the end of the article, Shai shares this:

“[Sven Nyholm’s] deeper worry is not that AI will outperform humans, but that it will appear to do so, especially to non-expert eyes. “Current forms of AI threaten meaningful activities,” he argues, “because they look far more intelligent than they are.” This appearance invites trust. People begin to treat AI as an oracle, mistaking an impressive engineering achievement for understanding. As misplaced confidence grows, judgment weakens. Skills develop less fully. Capacities are handed over too easily, and with them, forms of meaning that depend on effort.

Nyholm links this directly to the value of processes, including confusion, detours, and lingering with complexity. He punctures the idea that everything should be fast and efficient. Speed may feel pleasant, he concedes, yet it undermines patient thinking and reconsideration. He points to an Anthropic advertisement promising a paper completed in a single day: brainstorming in the morning, drafting by noon, polishing by afternoon. What disappears in this vision is the slow work of searching, getting lost, following the wrong thread, and returning with insight. “Many ideas,” Nyholm says, “come from looking for one thing and finding something else instead.” When AI delivers tidy, unified answers, it spares us that work. In doing so, it risks weakening our capacity to break complex problems into parts, examine assumptions, and think things through with precision.”

AI reduces the productive effort and struggle that makes both learning and understanding stick. Accessing information is profoundly different from understanding it, and AI directs the learner towards an answer instead of a learning process. This article reinforced some ideas I’ve already shared.

In ‘Keeping the friction’ I said, “Decreasing the challenge doesn’t foster meaningful learning. Reducing the required effort doesn’t make the learning more memorable. Encouraging deeper thinking is the goal, not doing the thinking for you.”

And in ‘What’s the real AI risk in education?’ I said, “Real learning has a charge to it, it needs to come with some challenge, and hardship. If the learning experience is too easy, it won’t be remembered. If there isn’t enough challenge, if the answers are provided rather than constructed, the learning will soon be forgotten. Remove being stuck, struggling, and failure, and you’ve removed the greatest part of a learning experience.”

I see this in my own learning. There are times I sit and read a full article, like the one shared above, but there are other times that I don’t bother and just throw a long article into an LLM and ask for a bulleted summary of the key ideas. However, I remember articles I read far better than articles where I only read the AI summaries.

How deep would my learning and understanding be if I only ever read AI summaries? How much would my confidence and my belief in my own understanding grow, without the depth of knowledge to support them? Would I be creating a kind of false fluency in topics where I lack true depth of understanding?

The convenience of using AI might not just be changing how we learn, it might be changing what we believe learning is… perceiving learning as having access to information rather than having a deep understanding of a topic we needed to wrestle with in order to truly understand it. In this way, the convenience of using AI to think for us might just be reducing our intelligence.

Chore masters

I grew up watching the Jetsons, expecting that one day I’d get to travel around in flying cars. And in this cartoon, the Jetson family had a robotic maid named Rosie, who was always cleaning up behind them. While I’m not sure if flying cars are going to be widespread in the next few years, I think we are going to see a lot of robots doing chores for us.

Just a couple weeks ago I was in a store and watched a robot make a latte for one of the customers. His order included a choice of milk art to go on top… a little flair to add to the experience of having a machine be your barista.

How long before we see somewhat intelligent, human-like robots in every house, each doing mundane chores we’d all rather not do? I’m sure these robots won’t put the wrong items in the dryer… like I do. I’m sure they won’t complain about yard work… like I do. I’m sure they won’t sit on the couch at the end of a long day wishing there weren’t chores to get done… like I do.

I look forward to having a chore master robot that will do the mundane things I don’t like to do. I’d be thrilled to never do dishes again. I’d love not to spend time folding down boxes and putting out the garbage and recycling. I’d have no issue with the idea of never vacuuming again. I’m ready; I’m just wondering how long I have to wait.

What’s the real AI risk in education?

I read a great article on LinkedIn by Ken Shelton. He looked at two articles:

“On one side:
AI as productivity infrastructure.
On the other:
AI as compliance enforcement.

But in both cases, the conversation centers on efficiency and policing, not on whether learning itself has been redesigned for an AI-rich world. Using historical context, one could reasonably make similar arguments around the implementation of technology as well. If students are learning to “sound human” to avoid detection…If institutions are investing in increasingly sophisticated surveillance tools…If teachers are primarily using AI to move faster within the same structures…Then we have to ask, as I have shared in previous posts:

Are we adapting learning?
Or are we simply optimizing and defending legacy systems?”

I found his article more interesting than the two he shared. I especially loved his final paragraph:

“The risk isn’t just that AI is moving too fast. The risk is that our response remains reactive, oscillating between efficiency and enforcement, without addressing purpose, power, and pedagogy. Therefore, the real inflection point isn’t technological, it’s analytical and philosophical.”

My thoughts: In education, go ahead and use AI to make teaching and lessons better, use it to help students learn, and also help them understand how to use AI to enrich their learning. But don’t use it to make learning easier. Real learning has a charge to it, it needs to come with some challenge, and hardship. If the learning experience is too easy, it won’t be remembered. If there isn’t enough challenge, if the answers are provided rather than constructed, the learning will soon be forgotten. Remove being stuck, struggling, and failure, and you’ve removed the greatest part of a learning experience.

So educators need to do two things: First, they need to use AI to make what they are doing even better. And secondly, they need to shift the learning experience to one where they no longer need to worry about policing and compliance. For example: The work isn’t finished with an essay, but with students defending their points in the essay against other students with slightly different points or different perspectives. The students who wrote the essay with AI and didn’t fully comprehend the topic can’t argue their perspective as well as the ones who were willing to do the work… and if they did use AI and can then argue the points better than their peers, that only proves that they understand how to use AI as a learning tool and not a tool to do the work for them. Because the real risk of AI in education is that the AI is doing the work, the struggle, and the learning for the student.

The problem we face is how learning can be circumvented by AI. And so the challenge for educators is to make it more challenging to use AI inappropriately, and to use AI to aid in making learning experiences more challenging. This is not an easy task, but it’s one we need to figure out and do well if we want our students to be learners who will have significance in a world where AI is all around us.

_________
Update: Just found this LinkedIn post by William (Bill) Ferriter, and it has two awesome images to fit with the above.

Update 2: I forgot about this post: Thinking Requires Effort

Reducing busywork, and maximizing the problem-solving time, in a community of learners who find benefit from working together, is what schools should be in service of.

Oblivious to what’s coming

If you talk to people about LLMs like ChatGPT, Perplexity, or Claude, you’ll still hear things like, ‘They hallucinate and will make up fake research’, and, as I heard recently, ‘they actually make work harder because workers need to spend more time editing and cleaning up what they produce’. What people who say this don’t realize is that those complaints describe pre-January 2026 models, and we are now fully into February 2026. Yes, things are moving that fast! Furthermore, what most people, including me, have not been paying attention to is that when we use the free versions of these tools, we are months and months behind what the latest models can do.

Matt Shumer’s ‘Something Big Is Happening’ was written just 4 days ago and has already been seen by millions of people. Yes, it’s a bit of a long read, but it is also a ‘must read’. Here is an excerpt:

“Dario Amodei, who is probably the most safety-focused CEO in the AI industry, has publicly predicted that AI will eliminate 50% of entry-level white-collar jobs within one to five years. And many people in the industry think he’s being conservative. Given what the latest models can do, the capability for massive disruption could be here by the end of this year. It’ll take some time to ripple through the economy, but the underlying ability is arriving now.

This is different from every previous wave of automation, and I need you to understand why. AI isn’t replacing one specific skill. It’s a general substitute for cognitive work. It gets better at everything simultaneously. When factories automated, a displaced worker could retrain as an office worker. When the internet disrupted retail, workers moved into logistics or services. But AI doesn’t leave a convenient gap to move into. Whatever you retrain for, it’s improving at that too.”

I recently shared my thoughts on the upcoming ‘Fiscal year end squeeze’, where I said, “Corporations care about pleasing shareholders and maintaining stock value over caring for the people who work for them. This is the ugly side of capitalism. Eliminate thousands of salaries and suddenly the balance sheet proves to be more profitable. Never mind that these are people’s careers and livelihood that are being cut short. And never mind about loyalty to the company.” What I’m realizing now, after reading Matt’s article, is that the situation is far worse than I thought, because AI is coming after not just these jobs, but almost every other job these newly unemployed people will be looking for.

If I were out of a job right now, I’d be paying the monthly fees for the 2-3 best AI models out there and learning how to power-use them. I wouldn’t be looking for a job; I’d be trying to find a niche where I could work for myself, or maybe become a contractor doing things for people who don’t realize that AI is good enough to get the work done faster than they can. Because the reality is that the vast majority of people in the world are oblivious to just how fast this disruption is coming, and unlike disruptions in the past, this one is going to happen everywhere, all at once. Most people can’t fathom how disruptive this will be, and even as I share this warning… I’m not sure I fully grasp the full impact either.

Fiscal year end squeeze

Globally, we’ve seen jumps in prices in the last year that are unsustainable. It seems like everything in the grocery store is more expensive, and prices seem to continue to creep up. The impact must be felt around the world, and cost rises like this unjustly affect people below and near the poverty line more than they affect anyone else.

Now throw on top of this the loss of a job for the main breadwinner in the family, and the results are devastating. Unfortunately a lot of people are about to lose their jobs.

We are reaching, in the next couple months, the fiscal year end for a lot of large corporations, and two related patterns we’ve seen a lot of recently are going to repeat.

  1. Profits over people
  2. Massive layoffs

Corporations care about pleasing shareholders and maintaining stock value over caring for the people who work for them. This is the ugly side of capitalism. Eliminate thousands of salaries and suddenly the balance sheet proves to be more profitable. Never mind that these are people’s careers and livelihood that are being cut short. And never mind about loyalty to the company.

Cut. Save. Profit. And in a year, repeat.

And we aren’t talking about a few dozen jobs; we are talking about tens of thousands, cumulatively hundreds of thousands, of jobs worldwide. We are talking about people with mortgages, people losing health care, people who were already living paycheque to paycheque, suddenly jobless. People who thought they were going to be ok, suddenly seeking a job in a challenging market where thousands like them are in the same situation.

Beyond purely meeting shareholder targets, AI and robotics are also taking jobs away. Companies are choosing to use the former salaries of employees to buy chips and memory storage. Manufacturers are replacing employees with robots that don’t take breaks or sick leave, and which don’t need to end their shift after 8 hours. On top of shareholder pressures, there are pressures to eliminate jobs and have the AI Revolution transform the workplace more dramatically than the Industrial Revolution did.

I think this year we are going to see this happen at an alarming scale. The irony is that the large-scale layoffs I see about to happen, added to soaring prices, are going to drastically reduce the spending of the very consumers who buy these large companies’ products. However, many of these billion-dollar companies are circumventing this too, by committing billions of dollars to purchase goods and services from each other, again inflating their perceived profits for shareholders.

All this to say that I see a lot of short term financial pain for a significant number of people in the coming months. I’m predicting the fiscal year end squeeze is going to be a hardship like none we’ve seen before, and a lot of people, a lot of families, are going to struggle as a result.

Tragedy Tourism

I don’t know how widespread the use of this phrase is, but when I heard it, it captured exactly why I struggle to pay attention to the news. The phrase is ‘tragedy tourism’ and it refers to the constant onslaught of tragedy we ‘visit’ while viewing current events in the news. The topics vary and change, but the message is the same:

Share a tragic event, share the outrage, sadness, and horror, briefly examine the details, discuss them, highlight the anger or controversy, and then move on… find a new tragedy and repeat. You don’t get to live too long with any one tragedy, you merely visit and move on.

Your attention can’t stay on any one thing, because the next tragedy is thrust upon you to provoke further outrage, to keep you distracted and triggered. And a mind that is consumed with tragedy is a controlled and manipulated mind. It’s a mind that is angry and distracted from rational thought, a mind that easily forgets the last reason to be outraged because the new reason, the next distraction, fills your consciousness with yet another tragedy.

No time for clarity of thought, no time to examine the issues and nuances of the last tragedy you visited, you just move on to the next tragedy because that’s where the news cycle is now. You visit each new tragedy like you are on a vacation bus tour. In the same way that a bus stops to show you a touristy landmark just long enough to learn a few highlights and minor details, and take a picture, the news peppers you with the lowlights, the sadness of the tragedy before putting you back on the metaphorical bus to be dropped off at the next tragedy.

Tragedy tourism keeps you hopping from one tragedy to the next, filling you with new reasons to be angry and upset, but never leaving you with any one tragedy long enough to feel immersed. The stay at each tragic event is too short to care enough to truly understand it, or to meaningfully engage and think critically about it before moving on to the next one.

An angry mind doesn’t think critically. A divided attention doesn’t promote activism or action. A distracted population doesn’t do anything to upset the status quo… and the news pumps out a new tragedy for us to visit.

Keeping the friction

I’ve been a proponent of integrating technology into schools and classrooms for a couple decades. And in many ways I’m excited about AI and what it has to offer in the field of education.

But I have one major concern above all others: Making learning easier is not the goal.

Decreasing the challenge doesn’t foster meaningful learning. Reducing the required effort doesn’t make the learning more memorable. Encouraging deeper thinking is the goal, not doing the thinking for you.

We need to make sure that AI is not taking the friction out of learning but rather maintaining or increasing the friction in the best places to promote meaningful learning. Friction is required.

Uncertainty training

I read a quote from James Clear which has me thinking:

“The ultimate form of preparation is not planning for a specific scenario, but a mindset that can handle uncertainty.”

It made me wonder, what do we do in schools to prepare students for uncertainty?

I mean, do we do this at all? We spend so much time framing the learning, compartmentalizing it, sharing our objectives, and ultimately defining the expected outcomes we want. We are actually told this is good teaching.

Outside of playing on a sports team, when in school are we preparing kids for uncertainty? Furthermore, I’m at a loss for how we would do this. What would a ‘preparing for uncertainty’ curriculum look like?

My point is that in an age where we are dealing with unpredictable weather, unhinged global politics, and unknown job security as AI and robotics intersect with every job sector, the only certain thing about the future is uncertainty. So how do we meaningfully prepare kids for their uncertain futures? How do we cultivate this mindset?

AI – Alternate Identities

I just watched a video clip of Sir Ken Robinson promoting a product to reduce your blood sugar. I’ve already shared ‘An AI Advertisement’ with a fictitious expert and broken down the flaws in that ad. But now we have an actual (now deceased) celebrity figure delivering the promotional plug. It looks and sounds like him, but he never said anything he says in this video advertisement. I know this, but how many people will recognize him and pay a little more attention to this advertising scam because it is delivered by someone famous?

This is just the beginning. We are moving into a ‘post truth era’, where nothing is inherently believable. A decade from now we’ll have multiple alternate identities to choose from… Was the real Al Gore the one warning us about global warming, or was the real one promoting fracking, or alternative medicine, or socialist communism over capitalism? Every video will seem equally real, every source seemingly legitimate. One real, all the others alternative histories indistinguishable from reality.

Will it only be famous people who fall victim to these alternate identities, or are we all going to be replicated? When I’m in my late 80s, will I be watching a video of 50-year-old me, oblivious to whether the recording actually happened or was invented with a perfect imitation of myself?

The implications for scams are immeasurable. Live video of a seemingly real son or daughter extracting banking data from a senior parent. A meticulously created alternative you moving all assets over to someone else. The scams are limited only by imagination, not by technology or capability.

Alternate identities indistinguishable from reality, all playing out as if real. Sir Ken Robinson plugging health supplements is just the start… We are in for some reality-warping performances from AI alternatives to us, and to the people we think we trust… This is only just the beginning!