
Instant feedback

Yesterday, at our welcome back session for principals, one of the assistant superintendents asked us a couple of questions to get feedback on what we thought was important for our district visioning. This is a typical kind of exercise to start the year. Usually the data is collected, and then in a later session we look at the data and trends.

But instead, he had us do the activity individually, then connect as a table group to prioritize our results. Then one person per table put the top 5 answers into a Microsoft Form.

The assistant superintendent then used Copilot, Microsoft’s AI assistant built on a large language model (LLM), to not only collate the data but also look for trends. He did this during a break so that we were not waiting on him. Then we came back and discussed not only the results but also his line of questioning.

Probably my favourite part of this was when he told Copilot, ‘here is the data’, but forgot to paste it in. Why? Because it’s important to model that you can make mistakes when trying something new.

I was discussing with a colleague before the meeting that I was hoping to see this happen. I’m tired of people collecting large amounts of data that will then take hours to assess, when we have new technology that can find trends invisible to us in mere seconds.
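
For anyone curious, that collation step isn’t magic; it boils down to one well-framed prompt. Below is a minimal sketch in Python of what it could look like. It uses the OpenAI SDK purely as a stand-in (the meeting itself used Copilot), and the model name and sample responses are invented placeholders, not our district’s actual data:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in your environment

# One string per table group: the "top 5" answers each group submitted
# through the form. These themes are made-up placeholders, not real data.
responses = [
    "1. Student belonging 2. Literacy 3. Staff wellness 4. Equity 5. Attendance",
    "1. Literacy 2. Technology access 3. Student belonging 4. Equity 5. Assessment",
    "1. Staff wellness 2. Student belonging 3. Attendance 4. Literacy 5. Equity",
]

# Ask the model to collate the data and surface trends, exactly the task
# described above.
prompt = (
    "Here are the prioritized answers from several table groups about our "
    "district vision. Collate them, identify the common themes, and note "
    "any trends or outliers:\n\n" + "\n\n".join(responses)
)

reply = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder; any capable chat model works
    messages=[{"role": "user", "content": prompt}],
)

print(reply.choices[0].message.content)
```

The code matters far less than the shift it represents: what used to take hours of collating chart paper takes seconds, leaving the humans free to discuss what the trends actually mean.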

In the meeting we still did a lot of activities to connect us to our peers, and we still had great table talks and meaningful conversations. But when it came time to collect and assess data, we didn’t go old school; instead, we took advantage of the technology available to us in a meaningful way. And yes, more analysis of the data may come later, and not all of it using AI, but to have this powerful tool available and not both use it and model it would be a real shame.

It was really great to see this happen in yesterday’s first meeting of the year.

“Oh no, AI is making us dumber!”

Except it’s not.

People forget that we were worried about the internet and Google. And before that, writing utensils:

“Students today depend too much upon ink. They don’t know how to use a pen knife to sharpen a pencil. Pen and ink will never replace the pencil.”
~ National Association of Teachers Journal, 1907


“Students today depend on these expensive fountain pens. They can no longer write with a straight pen and nib. We parents must not allow them to wallow in such luxury to the detriment of learning how to cope in the real business world which is not so extravagant.”
~ From PTA Gazette, 1941

I pulled those quotes from a presentation I did 16 years ago. I did another presentation at that time where I shared a quote from 1842 discussing how books would become useless “when the pupils are furnished with slates”.

We are used to pronouncing ‘the sky is falling’ when the next advancement comes along. Google was going to make us dumber. It didn’t. Smartphones were going to make us dumber, but they didn’t. They did, however, change the things we think about and remember. For example, I used to carry around a few dozen phone numbers memorized in my head; now I don’t even know my own daughter’s numbers. They are neatly stored in my phone.

AI will do the same. It will adjust what we remember, fine-tune what we think about and ask, and help direct our thinking… but it won’t make us dumber.

When I was a kid, I thought my dad was the smartest guy in the world. I can’t think of a question I asked him that he didn’t know the answer to. Sometimes he’d even bring me a file on the topic I asked about.

I remember absolutely blowing away a teacher and my fellow students on a project I did on harnessing the ocean for power. I had newspaper clippings, magazine articles, even textbook sources that I shared on the classroom overhead projector. It looked like I spent hours upon hours doing research. I didn’t. I asked my dad what he knew and he gave me a thick file with all the resources I needed. He was my Google long before Google was a thing.

It made me look good. It made my work a lot easier. It didn’t make me dumber.

I’ll admit that there is something fundamentally different about AI compared to advances like the slate, the pen, the internet, Google, and other ‘technological advances’. As artificial intelligence becomes smarter than us, we can rely on it in ways that we couldn’t with other advances. And it will take a while for us to figure out how to create tasks in schools that utilize AI effectively, rather than having AI do all the work. It was hard but not impossible to ‘Google-proof’ an assignment, and that challenge is significantly magnified by AI. But the opportunities are also magnified.

What happens when AI can individualize student learning and what we consider the ‘core curriculum’ can be taught in less than half of a school day? How exciting can school be for the other half of the day? What curiosities can we foster? How student directed (and thus more engaging) can that other half of the day be?

We are only dumber using AI if we decide that we will passively let it do the work for us, but let’s not pretend students were not already using ‘cut-and-paste’ to get assignments done. Let’s not pretend work avoidance wasn’t already a thing. Let’s not pretend that we don’t already spend a lot of time in schools teaching students to be compliant rather than to think for themselves.

AI will only make us dumber if we try to continue doing what we have done before, but allow AI to do the work for us. If we truly use AI in collaborative and inspirational ways, we are opening an exciting new door to what human potential really can be.

Promptism – A flat earth metaphor

I read an interesting article by Sune Selsbæk-Reitz about a word he more or less invented for asking AI and believing what it shares: promptism. The article, The Earth Is Flat, defines this new word: “Promptism is the quiet belief that if I just ask my question clearly enough, I’ll get something true in return. Maybe even something wise.”

And the article describes how promptism is killing curiosity and providing ‘truths’ that may not be truthful, yet are taken at face value without question.

From the article:

“The ritual is the same every time:

Ask the machine. Get the word.

Move on.

We don’t think of it as belief, because there’s no incense, no robes, no temple. But there’s authority. And there’s trust. And there’s something deeply seductive about being given something that feels final. Even when it isn’t. Even when the certainty is a performance.

Because the thing is: the more fluent the answer, the more invisible the framing becomes. And if we don’t pause to notice that… we’ll mistake fluency for truth, and coherence for proof.”

The article continues:

“But with ChatGPT or Gemini, the answer arrives fully dressed.

Paragraphs. Polished tone. No seams. No links. Just a voice that sounds sure of itself.

That’s not just convenience. It’s a design choice. And it’s flattening how we think. Because friction – the pause, the doubt, the need to look something up – isn’t a flaw in the process of knowing. It is the process. That little jolt of uncertainty that sends you looking deeper?

That’s what makes knowledge stick.

That’s how you learn.”

…“And the more we do this, the more we forget that knowledge was never meant to arrive fully formed.”

I’ve noticed how this has affected me. I don’t go two or three pages into Google anymore. I don’t find tangential, related, and interesting ideas and connections. I ask an LLM, I get an answer, or I refine my question and ask again. I seek an immediate answer, and I accept that answer.

No more new tabs, no clicking links, just a single conversation and a sort of final answer. The internet is getting flatter, the depth of search shallower. Promptism is the new search… and I wonder what the consequences are, what the price is, of finding convenient ‘truths’ that we just accept and don’t bother researching or questioning.

Time and space for learning

In the past few weeks I’ve seen a few videos about schools in the US where students doing 2-3 hours of AI-guided learning are outperforming students at most other schools.

One report in particular was really exciting to see.

Going forward, teachers (or guides) are going to have such important roles to play as AI ‘covers’ the required curriculum with student-focused, just-in-time learning. Teachers will then work on life skills and competencies, and on enriching student-focused passions.

Two questions to focus on:

  1. What is the core curriculum?
  2. What competencies do we want to foster in all students?

Beyond that, it’s really about creating the time and space for students to be guided while they pursue their interests and passions.

When AI covers the curriculum, the role of educators for the rest of the school day gets really innovative and exciting.

Old jokes, new format

Build it and they will Like, Follow, and Share… the newest craze to hit the internet is nothing more than a rehashing of old ideas in a new format. By now everyone has seen the Bigfoot videos, where an AI Bigfoot does a selfie vlog, telling jokes and performing ridiculous antics. If you haven’t seen them, Google ‘Bigfoot Vlog’ and they will show up in droves. I’ve noticed a few things. While a few of them are refreshingly funny, most rehash really old jokes, many based on racism, sexism, or tropes that have all been done before. It’s literally just old jokes in a new format.

But they work. They get the clicks, likes, and shares. They are going viral. And they are creating copycats that are now doing the same thing, using AI, but with people rather than Bigfoot. Videos that are almost 100% realistic and yet still sit somewhere in the uncanny valley: almost right, yet not fully. And again, just rehashing old content in a new format.

Expect a lot more of this. Also expect world crises to be leveraged for the same attention. You’ll see bombings in the Middle East that are actually just AI video. You’ll hear government leaders and celebrities saying outlandish things, except it won’t really be them. You’ll see alien landings, meteor landings, and even plane crashes that didn’t happen but were instead prompted into video reality.

When we get tired of the jokes, we’ll just start to get fooled more and more by AI drama invented to draw our attention. But for now, the jokes will come. They will get more inappropriate and cross lines a person wouldn’t cross with a video of themselves. And as attention wanes they will get more extreme, more tasteless, and so abundant that we’ll just be tired of them… as I am already tiring of them.

Existential Drift

We aren’t getting rid of doctors or plumbers any time soon, but large organizations have already started to reduce staff in areas that we thought only humans could handle. Not only are robotics and AI taking over manual labour, intelligent agents are also taking over white-collar jobs. The CEO of Anthropic, Dario Amodei, recently said, “AI could wipe out half of all entry-level white-collar jobs — and spike unemployment to 10-20% in the next one to five years”. Marketing and content production, data analysis, bookkeeping, and customer support are just a few key areas where layoffs have already begun. This isn’t some sci-fi future prediction; rather, it is a report of current trends.

A combination of AI, robotics, and automation is redefining work. The cost to society is ever-increasing layoffs and unemployment, leaving jobless members of society with little or no prospect of retraining or alternative careers. What does our society look like when unemployment hits 20%?

At some point we are going to have to start thinking about Universal Basic Income, and about ways to ensure that massive unemployment doesn’t lead to poverty and an ever-widening gap between those who have financial success (or at least comfort) and those who are barely surviving. But even if these low- or no-income people are provided for and supported, another question arises:

How does a large unemployed segment of society cultivate personal purpose and meaning?

Many people find purpose or self-worth through their work. Creative expression and acts of service will fill some of the gaps, but there will also be a fair bit of existential drift.

I think we are already seeing this drift occur. Work isn’t enough. About a year ago I saw a video of a girl who got out of school, got a job in the field she studied for, and was questioning her entire existence. She couldn’t afford to rent a place in the town she worked in. She spent almost 2 hours commuting and 8-plus hours at work, and came home exhausted, barely making enough to pay for rent, food, and her student debt. The comments were split between people saying ‘welcome to life’ and others admitting that it’s definitely harder to make ends meet now than ever before.

So we have a growing number of unemployed and a growing number of people losing sight of the purpose of working just to barely make ends meet. Where do people find purpose and meaning? How is meaning being cultivated?

I have concerns rather than answers.

The real alignment problem

‘The alignment problem in artificial intelligence refers to the challenge of ensuring that AI systems act in accordance with human values and intentions. It involves making sure that these systems pursue the goals we set for them without unintended consequences or harmful behaviors.’

~ Auto-generated on DuckDuckGo

The real alignment problem is: which human values are being aligned?

Do you want AI aligned with strict religious beliefs? Nihilism? Capitalism?

The point is, we can’t agree on which human values we want, so how does AI align pluralistically? And furthermore, when AI achieves super-intelligence, why would it bother to align with us?

The real alignment problem comes in two parts:

  • The what: with which human values should AI align?
  • The why: why would a super-intelligent AI want to align with our values?

The first part is something we will have to figure out. The second might just be decided for us, and not necessarily in our favour.

Teaching wisdom

We all know that one person who didn’t do well in school and isn’t ‘book smart’, but if there is a problem to solve, he or she will figure it out. Or someone who’s a tinkerer, who dabbles in fixing anything from a small electric toy to a car engine… maybe they were good at school, maybe not, but they solve problems we would struggle with. This isn’t traditionally the kind of wisdom taught in schools. It’s born out of curiosity and ingenuity.

How can we make learning at school more like this? More like the problem solvers we are going to need. We aren’t going to be more ‘book smart’ than AI. We aren’t going to write reports as well as a smartly prompted AI. But even a good AI isn’t going to figure out why a sink suddenly has low water pressure any time soon.

Maybe that will come, but for now we are going to be able to out-problem-solve AI, or at least be the ones who figure out what to ask AI to help us out.

So how do we maximize the learning at school to provide students with the kind of wisdom they need to be resourceful in an AI-filled world? It won’t be with rote memorization. It won’t be the review tests. It won’t be the book reports or the 15 math questions going home for homework.

What kind of learning experiences are we creating at school? Do they foster wisdom, systems thinking, and/or problem solving? Are we getting students excited about being learners and problem solvers? Are we creating environments for creators or compliant workers? Because the path of AI and robotics is quickly making compliant workers redundant.

I don’t know if we can explicitly teach wisdom, but we can create experiences where wisdom is valued and the right answer isn’t predetermined. We can design problems that require collaboration, creativity, and insight. And we can teach students to harness AI so that it serves us and we add value to what it can do with us.

Creating unique and challenging learning experiences, with students helping us design these experiences or even designing them themselves… this is the path forward for schools. If a student spends the day only doing things AI can do better than them, what’s school really teaching?

Unprepared for the transition

I just read “From a radio host replaced by avatars to a comic artist whose drawings have been copied by Midjourney, how does it feel to be replaced by a bot?” by Charis McGowan in the Guardian. It’s a series of stories about people who had secure jobs until AI replaced them.

Last week I saw a video of a car manufacturer in China that builds entire cars using robotics. These are called ‘dark factories’: fully automated buildings that, unlike human-filled factories, don’t need to be lit up, because the machines work by sensors rather than sight.

Five years ago I heard that a worker shortage was inevitable as population growth decreased, but I now see that those fears were unwarranted. We aren’t going to need more employees in the future, but rather far fewer. AI agents and robots are literally going to steal jobs from a significant number of working people. It has already started, but the scale of this is going to magnify considerably in the next 5-10 years.

How do we make the economy work when most countries will have unemployment rates exceeding 20%? What kind of jobs will a laid-off 40-to-55-year-old be able to do that AI won’t? What does a 30-year-old with a liberal arts degree do after being laid off from a customer service job because AI can do it better and cheaper?

Ten writers for a website become a job for one editor who edits and ‘humanizes’ AI-written articles. Ten tech support workers are replaced by AI support and just two human technicians. Ten people in graphic design are all replaced by the department boss, who was a graphic designer before being promoted and now uses AI to pump out the work of all ten past employees. This isn’t science fiction; it’s happening right now.

Are we ready for this? Are we ready for mass unemployment? What will the job market look like? What will all these unemployed people do? How does our economy survive?

On the bright side, here’s what I think we’ll see:

  1. Universal Basic Income – Every person gets a livable wage whether they work or not. Is it enough to live in luxury? No, but you can be unemployed for a long period of time and not have to worry about your basic needs.
  2. Reduced work weeks – If you work more than 30 hours a week, you are probably working for yourself. Think 6 hour days or 4-day work weeks.
  3. Fewer chores – From cleaning to yard work to cooking… those things that consumed your time after work will only be done by you if you want to do them. Otherwise, you’ll have them done for you by affordable robots with a lot more features and convenience than the Roomba that vacuums your floor while you watch TV.

So while conveniences and more idle time are coming, they are coming with a massive number of jobs lost. The question is, what is the transition going to look like? Who suffers during the transition? And will we get to these positive outcomes before too many people are jobless, unable to compete with AI, and not meaningfully able to contribute to or survive in our AI and robotics driven economy?

Self-interests in AI

Yesterday I read the following in the ‘Superhuman Newsletter (5/26/25)’:

Bad Robot: A new study from Palisade Research claims that “OpenAI’s o3 model sabotaged a shutdown mechanism to prevent itself from being turned off”, even when it was explicitly instructed to shut down. The study raises serious safety concerns.

It amazes me how we’ve gotten here. Ten, or even five years ago, there were all kinds of discussions about AI safety. There was a belief that future AI would be built in isolation with an ‘air gap’, a security measure to ensure AI systems remained contained and separate from other networks or systems. We would grow this intelligence in a metaphorical petri dish and build safety guards around it before we let it out into the wild.

Instead, these systems have been built fully in the wild. They have been given unlimited data and information, and we’ve built them in a way that we aren’t sure we understand their ‘thinking’. They surprise us with choices like choosing not to turn off when explicitly asked to. Meanwhile, we are simultaneously training them to use ‘agents’ that interact with the real world.

What we are essentially doing is building a super intelligence that can act autonomously, while simultaneously building robots that are faster, stronger, more agile, and fully programmable by us… or by an AI. Let’s just pause for a moment and think about these two technologies working together. It’s hard not to construct a dystopian vision of the future when we watch these technologies collide.

And the reality is that we have not built an air gap. We don’t have a kill switch. We are essentially heading down a path to having super-intelligent AI ignoring our commands while operating robots and machines that will make us feeble in comparison (in intelligence, strength, and mobility).

When our intelligence compared to AI’s is equivalent to a chimpanzee’s intelligence compared to ours, how will this super-intelligence treat us? This is not hyperbole; it’s a real question we should be thinking about. If today’s rather simplistic LLMs are already choosing to ignore our commands, what makes us think a super-intelligent AI will listen to or reason with us?

All is well and good when our interests align, but I don’t see any evidence that a self-interested AI will necessarily have interests aligned with the intelligent monkeys that we are. And the fact that we’re building this super-intelligence out in the wild gives reason to pause and wonder what will become of humanity in an age of super-intelligent AI.