Tag Archives: future

What’s the real AI risk in education?

I read a great article on LinkedIn by Ken Shelton. He looked at two articles:

“On one side:
AI as productivity infrastructure.
On the other:
AI as compliance enforcement.

But in both cases, the conversation centers on efficiency and policing, not on whether learning itself has been redesigned for an AI-rich world. Using historical context, one could reasonably make similar arguments around the implementation of technology as well. If students are learning to “sound human” to avoid detection…If institutions are investing in increasingly sophisticated surveillance tools…If teachers are primarily using AI to move faster within the same structures…Then we have to ask, as I have shared in previous posts:

Are we adapting learning?
Or are we simply optimizing and defending legacy systems?”

I found his article more interesting than the two he shared. I especially loved his final paragraph:

“The risk isn’t just that AI is moving too fast. The risk is that our response remains reactive, oscillating between efficiency and enforcement, without addressing purpose, power, and pedagogy. Therefore, the real inflection point isn’t technological, it’s analytical and philosophical.”

My thoughts: In education, go ahead and use AI to make teaching and lessons better, use it to help students learn, and also help them understand how to use AI to enrich their learning. But don’t use it to make learning easier. Real learning has a charge to it; it needs to come with some challenge and hardship. If the learning experience is too easy, it won’t be remembered. If there isn’t enough challenge, if the answers are provided rather than constructed, the learning will soon be forgotten. Remove being stuck, struggling, and failing, and you’ve removed the greatest part of a learning experience.

So educators need to do two things: First, they need to use AI to make what they are doing even better. Second, they need to shift the learning experience to one where they no longer need to worry about policing and compliance. For example: The work isn’t finished with an essay, but with students defending the points in their essay against other students with slightly different points or perspectives. The students who wrote the essay with AI and didn’t fully comprehend the topic can’t argue their perspective as well as the ones who were willing to do the work… and if they did use AI and can then argue the points better than their peers, that only proves they understand how to use AI as a learning tool and not as a tool to do the work for them. Because the real risk of AI in education is that the AI is doing the work, the struggle, and the learning for the student.

The problem we face is how learning can be circumvented by AI. And so the challenge for educators is to make it more challenging to use AI inappropriately, and to use AI to aid in making learning experiences more challenging. This is not an easy task, but it’s one we need to figure out and do well if we want our students to be learners who will have significance in a world where AI is all around us.

_________
Update: Just found this LinkedIn post by William (Bill) Ferriter, and it has two awesome images to fit with the above.

Update 2: I forgot about this post: Thinking Requires Effort

Reducing busywork, and maximizing the problem-solving time, in a community of learners who find benefit from working together, is what schools should be in service of.

Oblivious to what’s coming

If you talk to people about LLMs like ChatGPT, Perplexity, or Claude, you’ll still hear things like, ‘They hallucinate and will make up fake research’, and something I heard recently, ‘they actually make work harder because workers need to spend more time editing and cleaning up what they produce’. What people who say this don’t realize is that this thinking is pre-January 2026, and we are now fully into February 2026. Yes, things are moving that fast! And furthermore, what most people, including me, have not been paying attention to is that when we use the free versions of these tools, we are essentially months and months behind what the latest models can do.

Matt Shumer’s ‘Something Big Is Happening’ was written just four days ago and has already been seen by millions of people. Yes, it’s a bit of a long read, but it is also a ‘must read’. Here is an excerpt:

“Dario Amodei, who is probably the most safety-focused CEO in the AI industry, has publicly predicted that AI will eliminate 50% of entry-level white-collar jobs within one to five years. And many people in the industry think he’s being conservative. Given what the latest models can do, the capability for massive disruption could be here by the end of this year. It’ll take some time to ripple through the economy, but the underlying ability is arriving now.

This is different from every previous wave of automation, and I need you to understand why. AI isn’t replacing one specific skill. It’s a general substitute for cognitive work. It gets better at everything simultaneously. When factories automated, a displaced worker could retrain as an office worker. When the internet disrupted retail, workers moved into logistics or services. But AI doesn’t leave a convenient gap to move into. Whatever you retrain for, it’s improving at that too.”

I recently shared my thoughts on the upcoming ‘Fiscal year end squeeze‘, where I said, “Corporations care about pleasing shareholders and maintaining stock value over caring for the people who work for them. This is the ugly side of capitalism. Eliminate thousands of salaries and suddenly the balance sheet proves to be more profitable. Never mind that these are people’s careers and livelihood that are being cut short. And never mind about loyalty to the company.” What I’m realizing now, after reading Matt’s article, is that the situation is far worse than I thought, because AI is coming after not just these jobs, but almost every other job these newly unemployed people will be looking for.

If I were out of a job right now, I’d be paying the monthly fees for the two or three best AI models out there and learning how to power use them. I wouldn’t be looking for a job; I’d be trying to find a niche where I could work for myself, or maybe become a contractor doing things for people who don’t realize that AI is good enough to get the work done faster than they can do it. Because the reality is that the vast majority of people in the world are oblivious to just how fast this disruption is coming, and unlike other disruptions in the past, this one is going to happen everywhere and all at once. Most people can’t fathom how disruptive this will be, and even as I share this as a warning… I’m not sure I fully grasp the impact either.

Fiscal year end squeeze

Globally, we’ve seen jumps in prices in the last year that are unsustainable. It seems like everything in the grocery store is more expensive, and prices seem to continue to creep up. The impact must be felt around the world, and cost rises like this unjustly affect people below and near the poverty line more than they affect anyone else.

Now throw on top of this the loss of a job for the main breadwinner in the family, and the results are devastating. Unfortunately, a lot of people are about to lose their jobs.

In the next couple of months, we will reach the fiscal year end for a lot of large corporations, and two related patterns we’ve seen a lot of recently are going to repeat.

  1. Profits over people
  2. Massive layoffs

Corporations care about pleasing shareholders and maintaining stock value over caring for the people who work for them. This is the ugly side of capitalism. Eliminate thousands of salaries and suddenly the balance sheet proves to be more profitable. Never mind that these are people’s careers and livelihood that are being cut short. And never mind about loyalty to the company.

Cut. Save. Profit. And in a year, repeat.

And we aren’t talking about a few dozen jobs; we are talking about tens of thousands, and cumulatively hundreds of thousands, of jobs worldwide. We are talking about people with mortgages, people losing health care, people who were already living paycheque to paycheque, suddenly jobless. People who thought they were going to be ok, suddenly seeking a job in a challenging market where thousands like them are in the same situation.

Beyond purely meeting shareholder targets, AI and robotics are also taking jobs away. Companies are redirecting the salaries of former employees toward chips and memory storage. Manufacturers are replacing employees with robots that don’t take breaks or sick leave, and which don’t need to end their shift after 8 hours. On top of shareholder pressures, there are pressures to eliminate jobs and have the AI Revolution transform the workplace more dramatically than the Industrial Revolution did.

I think this year we are going to see this happen at an alarming scale. The irony is, the large scale layoffs that I see about to happen, added to soaring prices, are going to drastically affect the spending of the consumers who buy the products these large companies need to sell. However, many of these billion-dollar companies are circumventing this too, by committing billions of dollars to purchase goods and services from each other, again inflating their perceived profits for shareholders.

All this to say that I see a lot of short term financial pain for a significant number of people in the coming months. I’m predicting the fiscal year end squeeze is going to be a hardship like none we’ve seen before, and a lot of people, a lot of families, are going to struggle as a result.

Tragedy Tourism

I don’t know how widespread the use of this phrase is, but I heard it and to me it captures exactly why I struggle to pay attention to the news. The phrase is ‘tragedy tourism’, and it refers to the constant onslaught of tragedy we ‘visit’ while viewing current events in the news. The topics vary and change, but the message is the same:

Share a tragic event, share the outrage, sadness, and horror, briefly examine the details, discuss them, highlight the anger or controversy, and then move on… find a new tragedy and repeat. You don’t get to live too long with any one tragedy, you merely visit and move on.

Your attention can’t stay on any one thing, because the next tragedy is thrust upon you to provoke further outrage, to keep you distracted, triggered. And a mind that is consumed with tragedy is a controlled and manipulated mind. It’s a mind that is angry and distracted from rational thought, a mind that easily forgets the last reason to be outraged because the new reason, the next distraction, fills your consciousness with one tragedy after another.

No time for clarity of thought, no time to examine the issues and nuances of the last tragedy you visited, you just move on to the next tragedy because that’s where the news cycle is now. You visit each new tragedy like you are on a vacation bus tour. In the same way that a bus stops to show you a touristy landmark just long enough to learn a few highlights and minor details, and take a picture, the news peppers you with the lowlights, the sadness of the tragedy before putting you back on the metaphorical bus to be dropped off at the next tragedy.

Tragedy tourism keeps you hopping from one tragedy to the next, filling you with new reasons to be angry and upset, but not leaving you on any one tragedy long enough to feel immersed. The stay at each tragic event is too short to care enough to truly understand the tragedy, or to meaningfully interact or think critically about it, before moving on to the next one.

An angry mind doesn’t think critically. A divided attention doesn’t promote activism or action. A distracted population doesn’t do anything to upset the status quo… and the news pumps out a new tragedy for us to visit.

Keeping the friction

I’ve been a proponent of integrating technology into schools and classrooms for a couple decades. And in many ways I’m excited about AI and what it has to offer in the field of education.

But I have one major concern above all others: Making learning easier is not the goal.

Decreasing the challenge doesn’t foster meaningful learning. Reducing the required effort doesn’t make the learning more memorable. Encouraging deeper thinking is the goal, not doing the thinking for you.

We need to make sure that AI is not taking the friction out of learning but rather maintaining or increasing the friction in the best places to promote meaningful learning. Friction is required.

Uncertainty training

I read a quote from James Clear which has me thinking:

“The ultimate form of preparation is not planning for a specific scenario, but a mindset that can handle uncertainty.”

It made me wonder, what do we do in schools to prepare students for uncertainty?

I mean, do we do this at all? We spend so much time framing the learning, compartmentalizing it, sharing our objectives, and ultimately knowing the expected outcomes we want. We are actually told this is good teaching.

Outside of playing a team sport, when in school are we preparing a kid for uncertainty? Furthermore, I’m at a loss for how we would do this. What would a ‘preparing for uncertainty’ curriculum look like?

My point is that in an age where we are dealing with unpredictable weather, unhinged global politics, and unknown job security with AI and robotics exponentially intersecting every job sector, the only certain thing about the future is uncertainty. So how do we meaningfully prepare kids for their uncertain futures? How do we cultivate this mindset?

AI – Alternate Identities

I just watched a video clip of Sir Ken Robinson promoting a product to reduce your blood sugar. I’ve already shared ‘An AI Advertisement’ with a fictitious expert, and broken down the flaws with that ad. But now we have an actual (now deceased) celebrity figure doing the promotional plug. It looks and sounds like him, but he never said anything he says in this video advertisement. I know this, but how many people will recognize him and pay a little more attention to this advertising scam because it is delivered by someone famous?

This is just the beginning. We are moving into a ‘post truth era’, where nothing is inherently believable. A decade from now we’ll have multiple alternate identities to choose from… Was the real Al Gore the one warning us about global warming, or was the real one promoting fracking, or alternative medicine, or socialist communism over capitalism? Every video will seem equally real, every source seemingly legitimate. One real, all the others alternative histories indistinguishable from reality.

Will it only be famous people who fall victim to these alternate identities, or are we all going to be replicated? When I’m in my late 80s, will I be watching a video of 50-year-old me, oblivious to whether the recording actually happened or was invented with a perfect imitation of myself?

The implications for scams are immeasurable. Live video of a seemingly real son or daughter extracting banking data from a senior parent. A meticulously created alternative you moving all assets over to someone else. The scams are limited only by imagination, not by technology or capability.

Alternate identities indistinguishable from reality, all playing out as if real. Sir Ken Robinson plugging health supplements is only just the beginning… We are in for some reality-warping performances from AI alternatives to us, and to the people we think we trust… This is only just the beginning!

Post Truth Era

Never mind the ridiculous videos of Mr. Rogers chatting with Tupac Shakur or Bigfoot vlogging, these AI videos seem real enough while fully intending us to know they are AI. What we are seeing now is an indistinguishable bending of real and fake with videos that are completely altering our ability to know what is real and what isn’t.

Voice mimicking was already almost perfect. I saw a video post today from a man whose dad called him to ask what their shared bank account password was. One problem: his dad died last year; he just hadn’t taken his dad’s name off of the account yet. He said it sounded so real that had his father been alive, he probably would have shared the password, thinking his dad forgot.

Now AI videos are just as good as AI audio, and the combination of the two truly is steering us into a post truth era. People are sharing AI videos completely unaware that they are fake. Even news stations are getting it wrong.

Soon websites will become bastions of truth. Want to know what someone actually said? Go to ‘their name’ .com or .org and see the actual video shared there. Anything else will be questionable, and wherever else the video is shared, it must be watched with skepticism. Subtle or overt, important changes to a message will occur as a result of someone, ultimately anyone, taking the original video and making an AI version that delivers their message instead of the intended one.

Following specific domains, and maybe a handful of legitimate news channels, is the only suggestion I have. Legislation won’t keep up, and the fakes are just getting better. Essentially, find reliable sites and distrust everything else. Intuition and common sense won’t be enough.

The upside down bell curve

The bell curve, also known as a normal distribution, is a graph that depicts how values in a dataset are distributed. Most values cluster around the average with fewer values appearing at the extremes… those rare few that do very well or very poorly.

But there is a new curve evolving that matters more: the upside down bell curve, where most of the data points are distributed at the extremes. In an era of free and openly available information, this is the new learning curve. There is no more average majority; instead there are those that understand and those that do not. Those that participate and those that opt out. Those that engage and those that choose not to. Those that seek to learn and those that disengage.

The resources needed to do well are available. The access to information is there for all who want it. The opportunity to get that information in a format or delivery that makes sense to you is easy to find. The question is, are you willing to put the effort in?

If you learn how you best learn, then access to information is no longer a barrier and you will likely learn very well. You will be with the majority of people on the successful side of the distribution curve. If you decide it’s too hard, or choose not to engage, you will be with the other majority, ignorantly selecting the unsuccessful side of the distribution.

There will be anomalies: those that have learning challenges that are not met and struggle, and those that make no effort yet still find it easy to understand things. There will also be those few that just choose to squeak by, capable of more but neither excelling nor struggling. But this is the era of extremes. This is a time when the ‘A’, the ‘Exceeding Expectations’, the ability to excel, is available to most… and yet will only be achieved by the ones who actually choose it.

The mathematical average of the curve might be the same, but the distribution will be starkly divided.
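The arithmetic behind that last point can be sketched with a toy example (the score values below are invented purely for illustration): two sets of marks can share exactly the same average while one clusters in the middle and the other splits to the extremes. The difference shows up not in the mean but in the spread.

```python
import statistics

# Invented scores, for illustration only.
bell = [30, 40, 50, 50, 50, 50, 50, 50, 60, 70]      # most scores cluster near the mean
inverted = [10, 10, 10, 10, 45, 55, 90, 90, 90, 90]  # most scores sit at the extremes

# Both distributions average exactly 50...
print(statistics.mean(bell), statistics.mean(inverted))  # 50 50

# ...but the spread tells the real story.
print(statistics.pstdev(bell))      # 10.0  (tight cluster)
print(statistics.pstdev(inverted))  # ~35.8 (starkly divided)
```

Same mean, very different story: the standard deviation of the divided group is more than three times that of the bell-shaped one, which is the whole point of the curve flipping upside down.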

Digital dog sitter

I went to a store yesterday after work. It was a cold, rainy evening and already dark at around 5:30pm. I picked up the couple items I came for and headed back to my car. Just as I was getting in, I heard a dog barking at me from inside the car next to me. When I looked over, I saw the dog in the back seat and a note on the electric car’s digital display that read:

My driver will be back soon

Then in smaller font:

Don’t worry! The heater is on and it’s 20°C

With the 20°C in very large font, which could easily be read from a distance.

Considering the taboo normally associated with leaving a pet unattended in a car, I thought this was very clever. Highlighting the temperature of the car removed any concern that the dog’s life is in danger from overheating, and noting the driver will be back shortly eases any anxiety for dog lovers who might worry for the dog’s wellbeing.

This also made me think of kids we see today being babysat by technology. The parent in the grocery store handing over their phone to the kid sitting in the front of the grocery cart. The kid in the back seat of a car watching a movie. The kid at home on the iPad while dinner is being made.

What will this look like when we have robots ‘adding value’ to these experiences? Will dog owners send their pets for walks while they step into a store, with the robot babysitter cleaning up any poop the dog leaves on the walk? Will kids be playing in the back yard with their robot babysitter rather than having their eyes glued to a screen?

And is this an improvement to what we have now?

I think for dogs it will be, but I wonder about kids. What kinds of bonds will kids build with their robotic babysitters? Will we be able to tell when a teenager has been raised more by robots than by humans? What amount of robot time will be considered too much? Will a parent who lets a robot babysit their kid for hours and hours be judged like a dog owner who left his dog in a hot car?

When we think of the robots that we will soon have in our homes, we think of the conveniences they will provide. What happens when one of those conveniences is helping to raise our kids? What impact will it have? There’s a difference between dog sitting and babysitting that makes this question very interesting. And while I find the digital note in a car telling everyone the dog is comfortable and will be attended to soon quite clever, I’m not sure how clever it will be to have robots attending to our kids more than their parents do.