Tag Archives: technology

Instant feedback

Yesterday, at our welcome back session for principals, one of the assistant superintendents asked us a couple of questions to get feedback on what we thought was important for our district visioning. This is a typical kind of exercise to start the year. Usually the data is collected, and then in a later session we look at the data and trends.

But instead, he had us do the activity individually, then connect as a table group to prioritize our results. Then one person per table put the top 5 answers into a Microsoft Form.

The assistant superintendent then used Copilot, Microsoft’s AI large language model, to not only collate the data but also look for trends. He did this during a break so that we were not waiting on him. Then we came back and discussed not only the results but also his line of questioning.

Probably my favourite part of this was when he told Copilot, ‘here is the data’, but forgot to paste it in. Why? Because it’s important to model that you can make mistakes when trying something new.
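As a rough illustration only (this is not what he actually ran, and the file name and column name below are made up), here is how the table groups’ Form responses might be collated into a single prompt for Copilot, or any other LLM, to look for trends:

```python
import csv

# A sketch, assuming the Microsoft Form results were exported to a CSV with
# one row per table group and a column named "Top 5 priorities".
# Both the file name and the column name are hypothetical.
RESPONSES_FILE = "table_group_responses.csv"

def build_trend_prompt(path: str) -> str:
    """Collate every table group's top-5 answers into one prompt for an LLM."""
    answers = []
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            answers.append(f"- {row['Top 5 priorities']}")

    return (
        "Here is the data: our table groups' top-5 district visioning priorities:\n"
        + "\n".join(answers)
        + "\n\nPlease collate these responses, group similar ideas together, "
        "and identify the most common trends across tables."
    )

if __name__ == "__main__":
    # Paste the printed prompt into Copilot (or any LLM) -- and, unlike that
    # first attempt, remember to actually include the data!
    print(build_trend_prompt(RESPONSES_FILE))
```

The point isn’t the script itself; it’s that the tedious part, collating and theming dozens of answers, becomes a few seconds of work once the responses are handed to the AI.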

I was discussing with a colleague before the meeting that I was hoping to see this happen. I’m tired of people collecting large amounts of data that will then take hours to assess, when we have new technology that can find trends invisible to us in mere seconds.

In the meeting we still did a lot of activities to connect us to our peers, and we still had great table talks and meaningful conversations, but when it came time to collect and assess data we didn’t go old school; instead, we took advantage of the technology available to us in a meaningful way. And yes, more analysis of the data may come later, and not all of it using AI, but to have this powerful tool available and not both use it and model it would be a real shame.

It was really great to see this happen in yesterday’s first meeting of the year.

No more manuals

One of the things I’ve been using ChatGPT & Copilot for has been to save me time looking through device manuals for things I don’t know how to do.

Today’s request: “This is a photo of my washing machine. It has a ‘Clean washer’ option on the dial, but I don’t know the instructions for that. Could you please let me know what they are?”

It was simple enough, I just needed to add bleach, but asking the question rather than flipping through a 40+ page manual with tiny font is just so much better. Recently I’ve also used AI instructions to fine-tune the settings on my hot tub. Again, this was so much easier than pulling out the manual.

It’s like having a specialist at your disposal. Soon AI will be the manual. There will be a QR code on the device, and when you scan it with your phone it will send you to an intelligent, personified AI manual that will ask you how it can help. Then, through conversation, text, & video created just in time, it will guide you step by step.

You know those annoying ‘Getting Started’ instructions that help you set up things like a remote control? Well, AI can make those instructions both easier to follow and interactive for when things don’t go as smoothly as expected.

We are just a few years away from never having a paper manual shipped with new devices again, because there will be an AI agent designed to help you with far more detailed and context-specific assistance than an analog booklet could ever provide.

“Oh no, AI is making us dumber!”

Except it’s not.

People forget that we were worried about the internet and Google. And before that, writing utensils:

“Students today depend too much upon ink. They don’t know how to use a pen knife to sharpen a pencil. Pen and ink will never replace the pencil.”
~ National Association of Teachers Journal, 1907


“Students today depend on these expensive fountain pens. They can no longer write with a straight pen and nib. We parents must not allow them to wallow in such luxury to the detriment of learning how to cope in the real business world which is not so extravagant.”
~ From PTA Gazette, 1941

I pulled those quotes from a presentation I did 16 years ago. I did another presentation at that time where I shared a quote from 1842 discussing how books would become useless “when the pupils are furnished with slates”.

We are used to proclaiming ‘the sky is falling’ when the next advancement comes along. Google was going to make us dumber. It didn’t. Smartphones were going to make us dumber, but they didn’t. They did, however, change the things we thought about, still think about, and remember. For example, I used to carry around a few dozen phone numbers memorized in my head; now I don’t even know my own daughter’s numbers. They are neatly stored in my phone.

AI will do the same. It will adjust what we remember, fine-tune what we think about and ask, and help direct our thinking… but it won’t make us dumber.

When I was a kid, I thought my dad was the smartest guy in the world. I can’t think of a question I asked him that he didn’t know the answer to. Sometimes he’d even bring me a file on the topic I asked about.

I remember absolutely blowing away a teacher and my fellow students with a project I did on harnessing the ocean for power. I had newspaper clippings, magazine articles, even textbook sources that I shared on the classroom overhead projector. It looked like I had spent hours upon hours doing research. I didn’t. I asked my dad what he knew, and he gave me a thick file with all the resources I needed. He was my Google long before Google was a thing.

It made me look good. It made my work a lot easier. It didn’t make me dumber.

I’ll admit that there is something fundamentally different with AI compared to advances like the slate, the pen, the internet, Google and other ‘technological advances’. As Artificial Intelligence becomes smarter than us, we can rely on it in ways that we couldn’t with other advances. And it will take a while for us to figure out how to create tasks in schools that utilize AI effectively, rather than having AI do all the work. It was hard but not impossible to ‘Google proof’ an assignment, and that challenge is significantly magnified by AI. But the opportunities are also magnified.

What happens when AI can individualize student learning and what we consider the ‘core curriculum’ can be taught in less than half of a school day? How exciting can school be for the other half of the day? What curiosities can we foster? How student directed (and thus more engaging) can that other half of the day be?

We are only dumber using AI if we decide that we will passively let it do the work for us, but let’s not pretend students were not already using ‘cut-and-paste’ to get assignments done. Let’s not pretend work avoidance wasn’t already a thing. Let’s not pretend that we don’t already spend a lot of time in schools teaching students to be compliant rather than to think for themselves.

AI will only make us dumber if we try to continue doing what we have done before, but allow AI to do the work for us. If we truly use AI in collaborative and inspirational ways, we are opening an exciting new door to what human potential really can be.

It should be getting easier, but it isn’t

Sometimes it’s hard to believe that we live in the 21st century. It should be getting easier, but it isn’t. All of our schools just got new photocopy machines, and there is a one-hour video tutorial to learn how to use them. More videos and instructions are required if you are the one doing any kind of basic maintenance, like replacing the toner.

Related to this, when was the last time you bought a new TV and instantly knew how to use the remote? I find it incredibly ironic that there is nothing universal about a universal remote control. I don’t know, call me crazy, but I would think that in this day and age the tools we use would get simpler to use, not more complicated.

Borrow a friend’s car and try to fill the gas tank, and you’re left puzzled as to where the release is for the gas tank’s door. Go to the gas station and there’s a process to get your rewards card punched in, because you don’t have room in your wallet for 17 rewards program cards. Try to connect your phone to that same borrowed car, and you’re worried that you’re going to have to cancel another user’s profile. Or you are faced with a touchscreen menu that just doesn’t make intuitive sense.

How is it that the user interface of almost everything we do now is more complicated than necessary? Why is it that every single place we go online we’re expected to log in or create an account, or at the very least close one or two pop-up invitations to do so? I’m looking at a website for 30 seconds, to find out one piece of information; do I really need to decide whether or not I want to accept cookies?

My microwave has a touch dial where I have to spin my finger in a circle to get to the appropriate time. I don’t think I can ever hit the time I want without toggling back and forth. This takes me significantly longer than if I had to punch three number keys on a touchpad and hit ‘Start’. There is nothing convenient about this. And that’s my point…

We live in an era when things should be getting a lot easier, user interfaces should be intuitive and natural to use, but instead everything seems to be getting a little more difficult. I just don’t get it.

The real alignment problem

‘The alignment problem in artificial intelligence refers to the challenge of ensuring that AI systems act in accordance with human values and intentions. It involves making sure that these systems pursue the goals we set for them without unintended consequences or harmful behaviors.’

~ Auto-generated on DuckDuckGo

The real alignment problem is this: which human values is AI being aligned with?

Do you want AI aligned with strict religious beliefs? Nihilism? Capitalism?

The point is, we can’t agree on which human values we want, so how does AI align pluralistically? And furthermore, when AI achieves super-intelligence, why would it bother to align with us?

The real alignment problem comes in two parts:

  • The what? With which human values should AI align?
  • The why? Why would a super-intelligent AI want to align with our values?

The first part is something we will have to figure out. The second might just be decided for us, and not necessarily in our favour.

Self-interests in AI

Yesterday I read the following in the ‘Superhuman Newsletter (5/26/25)’:

Bad Robot: A new study from Palisade Research claims that “OpenAI’s o3 model sabotaged a shutdown mechanism to prevent itself from being turned off”, even when it was explicitly instructed to shut down. The study raises serious safety concerns.

It amazes me how we’ve gotten here. Ten, or even five, years ago there were all kinds of discussions about AI safety. There was a belief that future AI would be built in isolation with an ‘air-gap’, used as a security measure to ensure AI systems remained contained and separate from other networks or systems. We would grow this intelligence in a metaphorical petri dish and build safety guards around it before we let it out into the wild.

Instead, these systems have been built fully in the wild. They have been given unlimited data and information, and we’ve built them in such a way that we aren’t sure we understand their ‘thinking’. They surprise us with choices, like choosing not to turn off when explicitly asked to. Meanwhile, we are simultaneously training them to use ‘agents’ that interact with the real world.

What we are essentially doing is building a super intelligence that can act autonomously, while simultaneously building robots that are faster, stronger, more agile, and fully programmable by us… or by an AI. Let’s just pause for a moment and think about these two technologies working together. It’s hard not to construct a dystopian vision of the future when we watch these technologies collide.

And the reality is that we have not built an air-gap. We don’t have a kill switch. We are essentially heading down a path to having super-intelligent AI ignoring our commands while operating robots and machines that will make us feeble in comparison (in intelligence, strength, and mobility).

When our intelligence compared to AI is equivalent to a chimpanzee’s intelligence compared to ours, how will this super-intelligence treat us? This is not hyperbole; it’s a real question we should be thinking about. If today’s rather simplistic LLM AI models are already choosing to ignore our commands, what makes us think a super-intelligent AI will listen to or reason with us?

All is well and good when our interests align, but I don’t see any evidence that a self-interested AI will necessarily have interests aligned with those of the intelligent monkeys that we are. And the fact that we’re building this super-intelligence out in the wild gives us reason to pause and wonder: what will become of humanity in an age of super-intelligent AI?

Seamless AI text, sound, and video

It’s only 8 seconds long, but this clip of an old sailor could easily be mistaken for real:

And beyond looking real, here is what Google’s new Flow video production platform can do:

Body movement, lip movement, objects moving naturally under gravity: we now have the technology to create some truly incredible videos. On the one hand, we have amazing opportunities to be creative and expand the capabilities of our own imaginations. On the other hand, we are entering into a world of deepfakes and misinformation.

Such is the case with most technologies: they can be used well and they can be used poorly. Those using them well will amaze us with imagery and ideas that were long stuck in people’s heads without a way to express them. Those using them poorly will anger and enrage us. They will confuse us and make it difficult to discern fake news from real.

I am both excited and horrified by the possibilities.

The Right Focus

When I wrote ‘Google proof vs AI proof’, I concluded, “We aren’t going to AI proof schoolwork.”

While we were successful in Google proofing assignments by creating questions that were not easily answerable using a Google search, we simply can’t make research-based questions that will stump a Large Language Model artificial intelligence.

I’ve also previously said that ‘You can’t police it’. In that post I stated,

“The first instinct with combating new technology is to ban and/or police it: No cell phones in class, leave them in your lockers; You can’t use Wikipedia as a source; Block Snapchat, Instagram, and TikTok on the school wifi. These are all gut reactions to new technologies that frame the problem as policeable… Teachers are not teachers, they aren’t teaching, when they are being police.”

It comes down to a simple premise:

Focus on what we ‘should do’ with AI, not what we ‘shouldn’t do’.

Outside the classroom, AI is being used everywhere by almost everyone: from programming and creating scripts to reduce workload, to writing email responses, to planning vacations, to taking notes in meetings, to creating recipes. So the question isn’t whether we should use AI in schools, but what we should use it for.

The simple line that I see starts with the question I would ask about using an encyclopedia, a calculator, or a phone in the classroom: “How can I use this tool to foster or enhance thinking, rather than have the tool do the thinking for the student?”

Because, like I said back in 2010,

“A tool is just a tool! I can use a hammer to build a house and I can use the same hammer on a human skull. It’s not the tool, but how you use it that matters.”

Ultimately our focus needs to be on what we can and should use any tool for, and AI isn’t an exception.

Civilization and Evolution

Evolution is a slow process: small changes over thousands and millions of years. I’m not thinking about bacteria becoming antibiotic resistant or moths changing colour over time to match their environment. I’m thinking about modern humans (Homo sapiens), who emerged approximately 300,000 years ago. Sure, certain traits like lactose tolerance evolved approximately 5,000–10,000 years ago in some populations, but for the most part we are a heck of a lot like our ancestors 100,000 years ago. Taller due to better nutrition, but otherwise pretty much the same.

And when we think about civilization as we know it, we are really talking about the last 2,500–3,000 years… and yet we are the same humans who lived as nomads and hunter-gatherers for tens of thousands of years before that. In other words, we have not evolved to live in the societies we currently live in.

We didn’t evolve to live mostly indoors, away from nature, and out of sunlight for most of our day. We didn’t evolve to use artificial light at night before going to bed at hours well past dark. We didn’t evolve to do shift work, or to sit at a desk all day.

We didn’t evolve to work for made up currencies so that we could go to buildings where we buy food that is over-processed, over-sweetened, and filled with empty calories. We didn’t evolve to spend time in front of screens that distract and overstimulate us.

We are simple but very intelligent animals who have not evolved much at all since we lived in small communities where we knew everyone, knew what to fear, and knew how to protect ourselves from danger.

Yet we now live surrounded by people we don’t know, and we are triggered by stresses that we evolutionarily were not designed for. Everything from being in constant debt, to working in stressful environments, to information overload, to time pressures, social comparison, choice overload, conflicting ideologies, environmental noises and hazards, and social disconnection.

We live in a state of overstimulation, stress, and distraction that we have not evolved to cope with. Then we identify diagnoses to tell us how we are broken, how we don’t fit in, and why we struggle. Maybe it’s the societies we have built that are broken? Maybe we evolutionarily do not belong in the social, technological, and societal structures we’ve created?

Maybe, just maybe, we are trying to live our best lives in an environment we were not designed for. Our modern civilizations are not well equipped to meet the needs of our primitive evolution… We have built ‘advanced’ cages and put ourselves in zoos that are nothing like the environment we are supposed to live in. And we don’t realize that all the things we think are broken about us are actually things that are broken about this fake environment we’ve trapped ourselves in.

And so we spend hours exercising, moving around weights that don’t need to be moved, meditating to empty our minds and seek presence and peace. We spend hours playing or cheering on sports teams so that we can have camaraderie with a small community. We spend thousands of dollars on camping equipment so that we can commune with nature. And some people take drugs or alcohol to escape the zoos and cages that we feel trapped in.

Maybe we’ve built our civilizations in ways that have not meaningfully considered our evolutionary needs.

I don’t ever ‘want to’ see ‘wanna’

Dear Siri,

I love speech to text. When I’m on the go I want to just speak into my phone and not bother typing. This is such a handy way to get words into a text or email with minimal effort. But this is the age of AI and it’s time to grow up and get a little more intelligent.

I don’t ever ‘want to’ say ‘wanna’.

“I don’t ever wanna see that word as a result of you dictating what I say.” (Like you just did.)

Listen to me, Siri. I know my diction isn’t perfect. I know I don’t always enunciate the double ‘t’ in ‘want to’. But after I’ve gone to Settings and created a text replacement shortcut from ‘wanna’ to ‘want to’;

After I’ve corrected the text you’ve dictated from ‘wanna’ to ‘want to’ hundreds of times… can you learn this simple request?

Don’t ever assume I want to say ‘wanna’!

Please.