Tag Archives: AI

Robots, robots, everywhere

In the world of robots two things are happening at lightning speed:

  1. Capabilities – A year ago humanoid robots were clunky, unstable, and, for lack of a better word, robotic.
  2. Production – A year ago if a company could produce 5,000 robots in a year, they were industry leaders.

Have a look at this video and you’ll see just how far robots and their production have advanced: ‘China’s New AI Robots Shock Everyone With Impossible Skills’

It might be cliché to say, but the future has arrived. First in factories, then in homes! If one thing is certain about our future, it is that humanoid robots will be all around us. We’ll have to wait and see how this impacts work, chores, and even social interactions, because there isn’t going to be time to think of long-term implications before they arrive… everywhere, very quickly.

Is Artificial Intelligence Reducing Our Intelligence?

Joe Truss shared a great article with me, ‘The hidden cost of letting AI make your life easier’, by Shai Tubali on Big Think. Towards the end of the article, Shai shares this:

“[Sven Nyholm’s] deeper worry is not that AI will outperform humans, but that it will appear to do so, especially to non-expert eyes. “Current forms of AI threaten meaningful activities,” he argues, “because they look far more intelligent than they are.” This appearance invites trust. People begin to treat AI as an oracle, mistaking an impressive engineering achievement for understanding. As misplaced confidence grows, judgment weakens. Skills develop less fully. Capacities are handed over too easily, and with them, forms of meaning that depend on effort.

Nyholm links this directly to the value of processes, including confusion, detours, and lingering with complexity. He punctures the idea that everything should be fast and efficient. Speed may feel pleasant, he concedes, yet it undermines patient thinking and reconsideration. He points to an Anthropic advertisement promising a paper completed in a single day: brainstorming in the morning, drafting by noon, polishing by afternoon. What disappears in this vision is the slow work of searching, getting lost, following the wrong thread, and returning with insight. “Many ideas,” Nyholm says, “come from looking for one thing and finding something else instead.” When AI delivers tidy, unified answers, it spares us that work. In doing so, it risks weakening our capacity to break complex problems into parts, examine assumptions, and think things through with precision.”

AI reduces the productive effort and struggle that make both learning and understanding stick. Accessing information is profoundly different from understanding it, and it directs the learner toward an answer instead of a learning process. This article reinforced some ideas I’ve already shared.

In ‘Keeping the friction’ I said, “Decreasing the challenge doesn’t foster meaningful learning. Reducing the required effort doesn’t make the learning more memorable. Encouraging deeper thinking is the goal, not doing the thinking for you.”

And in ‘What’s the real AI risk in education?’ I said, “Real learning has a charge to it; it needs to come with some challenge and hardship. If the learning experience is too easy, it won’t be remembered. If there isn’t enough challenge, if the answers are provided rather than constructed, the learning will soon be forgotten. Remove being stuck, struggling, and failure, and you’ve removed the greatest part of a learning experience.”

I see this in my own learning. There are times I sit and read a full article, like the one shared above, but there are other times when I don’t bother and just throw a long article into an LLM and ask for a bulleted summary of the key ideas. However, I remember the articles I read far better than the ones where I only read the AI summaries.

How deep would my learning and understanding be if I only went as far as reading AI summaries? How much would my confidence and my belief in my own understanding grow, without the depth of knowledge to support them? Would I be creating a kind of false fluency in topics where I lack true depth of understanding?

The convenience of using AI might not just be changing how we learn, it might be changing what we believe learning is… perceiving learning as having access to information rather than as a deep understanding of a topic we had to wrestle with to truly grasp. In this way, the convenience of using AI to think for us might just be reducing our intelligence.

Chore masters

I grew up watching the Jetsons, expecting that one day I’d get to travel around in flying cars. In that cartoon, the Jetson family had a robotic maid named Rosie, who was always cleaning up after them. While I’m not sure flying cars are going to be widespread in the next few years, I think we are going to see a lot of robots doing chores for us.

Just a couple of weeks ago I was in a store and watched a robot make a latte for one of the customers. The order included a choice of latte art to go on top… a little flair to add to the experience of having a machine as your barista.

How long before we see somewhat intelligent, human-like robots in every house, each doing mundane chores we’d all rather not do? I’m sure these robots won’t put the wrong items in the dryer… like I do. I’m sure they won’t complain about yard work… like I do. I’m sure they won’t sit on the couch at the end of a long day wishing there weren’t chores to get done… like I do.

I look forward to having a chore master robot that will do the mundane things I don’t like to do. I’d be thrilled to never do dishes again. I’d love to not spend time folding down boxes and putting out the garbage and recycling. I’d have no issue with the idea of never vacuuming again. I’m ready; I’m just wondering how long I’ll have to wait.

Blogging Reader Revival

I’m not ready to do it, but maybe someone out in the blogosphere can. Do you know what we need? A revival of Google Reader. Somebody with a paid version of a good AI coding tool needs to get on this. Build a version of Google Reader, but with some AI brilliance added in.

3 new features:

1. Have it learn from the reader. Whichever feeds the reader spends more time on get priority in the feed.

2. AI summaries of the posts. The reader can choose from 3 levels, ranging from a one line summary to a detailed synopsis.

3. An audio reader option.

Make it free for up to 6 feeds, $6 a year for 20 feeds, or $12 a year for unlimited feeds. I’m sick and tired of apps gouging us for yearly fees.
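Feature 1 is essentially a small ranking problem. As a minimal sketch of how it could work (the class name, decay factor, and feed URLs below are all made up for illustration, not from any real product), the app could log how long the reader spends on posts from each feed and sort subscriptions by a slowly decaying total:

```python
from collections import defaultdict

class FeedRanker:
    """Toy sketch of feature 1: feeds the reader lingers on rise in priority."""

    def __init__(self, decay=0.9):
        self.decay = decay                  # older reading time counts less over time
        self.seconds = defaultdict(float)   # feed_url -> accumulated reading seconds

    def record(self, feed_url, seconds_read):
        """Call whenever the reader finishes a post from feed_url."""
        self.seconds[feed_url] += seconds_read

    def end_of_day(self):
        """Decay all scores so recent attention outweighs old habits."""
        for url in self.seconds:
            self.seconds[url] *= self.decay

    def ranked(self, feed_urls):
        """Order the subscribed feeds, most-read first."""
        return sorted(feed_urls, key=lambda u: self.seconds[u], reverse=True)

ranker = FeedRanker()
ranker.record("https://daily-ink.example/feed", 300)   # five minutes of reading
ranker.record("https://other-blog.example/feed", 45)   # a quick skim
# The feed with more reading time ranks first:
print(ranker.ranked(["https://other-blog.example/feed",
                     "https://daily-ink.example/feed"]))
```

The daily decay keeps the ranking responsive: a feed you’ve stopped reading gradually drifts down the list instead of riding on old habits forever.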

So, who wants it and who’s going to build it?

What’s the real AI risk in education?

I read a great article on LinkedIn by Ken Shelton. He looked at two articles:

“On one side:
AI as productivity infrastructure.
On the other:
AI as compliance enforcement.

But in both cases, the conversation centers on efficiency and policing, not on whether learning itself has been redesigned for an AI-rich world. Using historical context, one could reasonably make similar arguments around the implementation of technology as well.

If students are learning to “sound human” to avoid detection…
If institutions are investing in increasingly sophisticated surveillance tools…
If teachers are primarily using AI to move faster within the same structures…

Then we have to ask, as I have shared in previous posts:

Are we adapting learning?
Or are we simply optimizing and defending legacy systems?”

I found his article more interesting than the two he shared. I especially loved his final paragraph:

“The risk isn’t just that AI is moving too fast. The risk is that our response remains reactive, oscillating between efficiency and enforcement, without addressing purpose, power, and pedagogy. Therefore, the real inflection point isn’t technological, it’s analytical and philosophical.”

My thoughts: In education, go ahead and use AI to make teaching and lessons better, use it to help students learn, and also help them understand how to use AI to enrich their learning. But don’t use it to make learning easier. Real learning has a charge to it; it needs to come with some challenge and hardship. If the learning experience is too easy, it won’t be remembered. If there isn’t enough challenge, if the answers are provided rather than constructed, the learning will soon be forgotten. Remove being stuck, struggling, and failure, and you’ve removed the greatest part of a learning experience.

So educators need to do two things. First, they need to use AI to make what they are doing even better. Second, they need to shift the learning experience to one where they no longer need to worry about policing and compliance. For example: the work isn’t finished with an essay, but with students defending the points in their essay against other students with slightly different points or perspectives. Students who wrote the essay with AI and didn’t fully comprehend the topic can’t argue their perspective as well as the ones who were willing to do the work… and if they did use AI and can still argue the points better than their peers, that only proves they understand how to use AI as a learning tool and not as a tool to do the work for them. Because the real risk of AI in education is that the AI is doing the work, the struggle, and the learning for the student.

The problem we face is how learning can be circumvented by AI. So the challenge for educators is twofold: make it harder to use AI inappropriately, and use AI to make learning experiences more challenging. This is not an easy task, but it’s one we need to figure out and do well if we want our students to be learners who can be significant in a world where AI is all around us.

_________
Update: Just found this LinkedIn post by William (Bill) Ferriter, and it has two awesome images to fit with the above.

Update 2: I forgot about this post: Thinking Requires Effort

Reducing busywork and maximizing problem-solving time, in a community of learners who benefit from working together, is what schools should be in service of.

Oblivious to what’s coming

If you talk to people about LLMs like ChatGPT, Perplexity, or Claude, you’ll still hear things like, ‘They hallucinate and will make up fake research,’ and, something I heard recently, ‘They actually make work harder because workers need to spend more time editing and cleaning up what they produce.’ What people who say this don’t realize is that this was pre-January 2026, and we are now fully into February 2026. Yes, things are moving that fast! Furthermore, what most people, including me, have not been paying attention to is that when we use the free versions of these tools, we are essentially months and months behind what the latest models can do.

Matt Shumer’s ‘Something Big Is Happening’ was written just 4 days ago and has already been seen by millions of people. Yes, it’s a bit of a long read, but it is also a ‘must read’. Here is an excerpt:

“Dario Amodei, who is probably the most safety-focused CEO in the AI industry, has publicly predicted that AI will eliminate 50% of entry-level white-collar jobs within one to five years. And many people in the industry think he’s being conservative. Given what the latest models can do, the capability for massive disruption could be here by the end of this year. It’ll take some time to ripple through the economy, but the underlying ability is arriving now.

This is different from every previous wave of automation, and I need you to understand why. AI isn’t replacing one specific skill. It’s a general substitute for cognitive work. It gets better at everything simultaneously. When factories automated, a displaced worker could retrain as an office worker. When the internet disrupted retail, workers moved into logistics or services. But AI doesn’t leave a convenient gap to move into. Whatever you retrain for, it’s improving at that too.”

I recently shared my thoughts on the upcoming ‘Fiscal year end squeeze’, where I said, “Corporations care about pleasing shareholders and maintaining stock value over caring for the people who work for them. This is the ugly side of capitalism. Eliminate thousands of salaries and suddenly the balance sheet proves to be more profitable. Never mind that these are people’s careers and livelihood that are being cut short. And never mind about loyalty to the company.” What I’m realizing now, after reading Matt’s article, is that the situation is far worse than I thought, because AI is coming after not just these jobs, but almost every other job these newly unemployed people will be looking for.

If I were out of a job right now, I’d be paying the monthly fees for the 2-3 best AI models out there and learning how to power-use them. I wouldn’t be looking for a job; I’d be trying to find a niche where I could work for myself, or maybe become a contractor doing things for people who don’t realize that AI is good enough to get the work done faster than they can. Because the reality is that the vast majority of people in the world are oblivious to just how fast this disruption is coming, and unlike past disruptions, this one is going to happen everywhere, all at once. Most people can’t fathom how disruptive this will be, and even as I share this as a warning… I’m not sure I fully grasp the full impact either.

Fiscal year end squeeze

Globally, we’ve seen unsustainable jumps in prices in the last year. It seems like everything in the grocery store is more expensive, and prices continue to creep up. The impact must be felt around the world, and price increases like this unjustly affect people below and near the poverty line more than anyone else.

Now throw on top of this the loss of a job for the main breadwinner in the family, and the results are devastating. Unfortunately a lot of people are about to lose their jobs.

We are reaching, in the next couple months, the fiscal year end for a lot of large corporations, and two related patterns we’ve seen a lot of recently are going to repeat.

  1. Profits over people
  2. Massive layoffs

Corporations care about pleasing shareholders and maintaining stock value over caring for the people who work for them. This is the ugly side of capitalism. Eliminate thousands of salaries and suddenly the balance sheet proves to be more profitable. Never mind that these are people’s careers and livelihood that are being cut short. And never mind about loyalty to the company.

Cut. Save. Profit. And in a year, repeat.

And we aren’t talking about a few dozen jobs; we are talking about tens of thousands, and cumulatively hundreds of thousands, of jobs worldwide. We are talking about people with mortgages, people losing health care, people who were already living paycheque to paycheque, suddenly jobless. People who thought they were going to be ok, suddenly seeking a job in a challenging market where thousands like them are in the same situation.

Beyond purely meeting shareholder targets, AI and robotics are also taking jobs away. Companies are choosing to use the former salaries of employees to buy chips and memory storage. Manufacturers are replacing employees with robots that don’t take breaks or sick leave, and which don’t need to end their shift after 8 hours. On top of shareholder pressures, there are pressures to eliminate jobs and have the AI Revolution transform the workplace more dramatically than the Industrial Revolution did.

I think this year we are going to see this happen at an alarming scale. The irony is that the large-scale layoffs I see about to happen, added to soaring prices, are going to drastically affect the spending of the very consumers these large companies need to buy their products. However, many of these billion-dollar companies are circumventing this too, by committing billions of dollars to purchase goods and services from each other, again inflating their perceived profits for shareholders.

All this to say that I see a lot of short term financial pain for a significant number of people in the coming months. I’m predicting the fiscal year end squeeze is going to be a hardship like none we’ve seen before, and a lot of people, a lot of families, are going to struggle as a result.

Not if, when

The only thing I use AI for when I write my blog is to make an accompanying image. I don’t use it for editing, and as a result I’ll often not notice a typo, or I’ll create a sentence that doesn’t flow, or I’ll repeat a word a little too frequently in a paragraph. What I’m saying is that I’ll make mistakes that could be caught if I used an artificial intelligence to aid in my editing.

That said, I already do use some AI, because a little red line under a word lets me know I’ve misspelled it. We often forget that we’ve been using forms of artificial intelligence for a long time now. But I’m specifically talking about using AI as an editor or even as a co-writer. This is something I have not intentionally done yet. However, if I’m honest, the main reason for this is simply time.

I’m already pressed for time to get my writing done in the morning. I recently wrote about how frustrated I was with AI images, and the fact that they weren’t giving me exactly what I wanted and were wasting too much time. I don’t see myself in a position where I’m going to spend time using AI as an editor on top of this… but it’s coming.

The reason it’s coming is that while I know writing every day has improved the quality of my writing, I’m sure it has also reinforced some of the weaknesses in my style. Doing something repetitively without meaningful feedback doesn’t necessarily make you better. I know that having an editor would make me better. And the reality is, I have an editor available to me whenever I want one. So now it’s just a matter of deciding when.

The ‘when’ is probably after retirement. I think that when I’m not trying to squeeze an entire routine of habits into under 2 1/2 hours before work, I’ll have time for things like putting my writing into an AI editor. I’ll probably be writing on my laptop instead of my phone, while enjoying a morning coffee. I’ll have the convenience of multiple tabs open in my browser rather than having to use my finger to copy and paste information. And most importantly, I’ll have more time to learn, to get feedback and discern: does this AI suggestion make my writing better, or does it make my writing more vanilla?

The point is, it’s going to happen. To have a tool like this literally at my fingertips and not use it is silly. Especially when it can help me, with the right prompt, become better at something I love to do.

Keeping the friction

I’ve been a proponent of integrating technology into schools and classrooms for a couple decades. And in many ways I’m excited about AI and what it has to offer in the field of education.

But I have one major concern above all others: Making learning easier is not the goal.

Decreasing the challenge doesn’t foster meaningful learning. Reducing the required effort doesn’t make the learning more memorable. Encouraging deeper thinking is the goal, not doing the thinking for you.

We need to make sure that AI is not taking the friction out of learning but rather maintaining or increasing the friction in the best places to promote meaningful learning. Friction is required.

AI image woes

A few years ago I switched from looking for royalty free images to add to my Daily-Ink blog posts to using AI. The main reason for this is that I found myself spending almost as much time searching for images as I was spending writing my blog post. This was not efficient.

While I’ve had a few challenges along the way, for the most part I got image creation consistently down to around 3-5 minutes. This is great, and so much less stressful… except when it isn’t. The past few days have been a struggle. I couldn’t get the AI to give me what I wanted. Even when I asked for clarification, and a description of my image was recited back to me in incredible detail, the end product did not match what I hoped for.

Three of the last four days I was at the gym, walking on the treadmill, still trying to get my post published, delayed by failed attempts to get the image I hoped for. In all 3 cases I settled for something close. Actually, 2 out of 3; for the other one I used a sample Wikipedia image the AI found for me. I can’t believe how hard it is to get AI to create the image of a clock showing the time 5 o’clock!?!

I was tempted to say that AI image creation is getting dumber, but I think what’s happening is that I’ve just started to expect a lot more. The clock is a bad example, but in many cases I’m expecting a level of sophistication I haven’t asked for previously. I want specific perspectives. I’m asking for complex scenarios, and I’m challenging the AI to create ‘unnatural’ situations, like a teacher in a circle of desks with the students all sitting looking out and away from the teacher.

That sounds like an easy request, but in millions of reference images of teachers, the AI has been trained to have students face the teacher. So despite continued attempts, with the AI actually describing in detail what I’m asking for before giving it to me, I still got an image of the students facing inward, towards the teacher. Again, and again, and again.

So I’m going to dumb it down. I’m going to ask for less complex images. I’m going to settle for an image that might not be perfect, and most importantly I’m going to spend less time on images and more time writing.

—-

Post script: My one and only image request for this post – ‘Create a stylized, abstract watercolour image that looks like an AI image gone wrong, with an uncanny valley styled mishmash of items.’