Tag Archives: Artificial Intelligence

In the middle

I think that a robust and healthy middle class is essential to maintain a vibrant society. But what I see in the world is an increasing gap between the wealthy and an ever larger group of people living in debt and/or from paycheque to paycheque. The (not so) middle class now might still go on a family vacation, and spend money on restaurants, but they are not saving money for the future… they simply can’t do what the middle class of the past did.

A mortgage isn’t to be paid off, it’s something to continue to manage during retirement. Downsizing isn’t a choice to be made, it’s a survival necessity. Working part time during retirement isn’t a way to keep busy, it’s a necessity to make ends meet.

I grew up in a world where I believed I would do better than my parents did. Kids today doubt they will ever own a place like their parents’, and many don’t believe they’ll ever own a house. Renting and perhaps owning a small condo one day are all they aspire to. Not because they don’t want more, but because more seems too costly and out of reach.

Then I see the world of AI and robotics we are heading into and I wonder if things won’t initially get worse before they get better. Why hire a dozen programmers when you can hire two exceptional ones to act as quality control for the AI agents that write most of your code? Why hire a team of writers when one talented writer can edit the writing done by AI? Why hire factory workers who need lunch breaks and are more prone to errors than a team of robots? While some jobs are likely to stick around for a while, like trades, childcare, and certain medical fields, other jobs will diminish and even disappear.

I don’t think anyone will want a robot providing a pregnancy ultrasound any time soon, but AI will analyze that ultrasound better than any human professional. A robot might assist in laying electrical wire at a construction site, but it will still be a human serving you when you can’t figure out the electrical issues in your home. It will still be a human who you call to figure out how to fix your leaky roof or toilet; a human who repairs your broken dishwasher or dryer. These jobs are safe for a while.

But I won’t want my next doctor to be diagnosing me without the aid and assistance of AI. And I would prefer AI to analyze my medical data. I will also prefer the more affordable products created by AI manufacturing. The list goes on and on as I look to where I will both see and want to see AI and robotics aiding me.

And what does this do to the working middle class? How do we tax AI and robots, to help replace the taxation of lost jobs? What do we do about increased unemployment as jobs for (former middle class) humans slowly disappear?

Will we have universal basic income? Will this be enough? What will the middle class look like in 10 or 20 years?

There is no doubt that we are heading into interesting times. The question is, will these disruptions cause upheaval? Will these disruptions widen the wealth gap? Will robotics and AI create more opportunities or more disparity? What will become of our middle class… a class of people necessary to maintain a robust and healthy society?

Micro-learning in 2025

I remember my oldest daughter asking me a question when she was just 4 years old. I don’t remember the actual question but I do remember that after I responded, “I don’t know,” she walked over to our desktop computer and asked Google. I remember being surprised that she thought to do this, and amazed because when I was that age, if my parent didn’t know, I might have looked in our Junior Encyclopaedia Britannica, but I probably would have just accepted that I wouldn’t know the answer.

I remember a time, years later, when I would ask a question of my social media network first, rather than Google. Not for a general knowledge question, but for things like how to use a certain tool, such as accessing a feature on a wiki or blogging platform. People were better than generalized Q&A pages at pinpointing the information I was looking for, and a good hashtag on Twitter would put my question in front of the right people.

And now there are times when I would go to YouTube first, before Google, for things like car repair. Don’t know how to get the cover off of a car light to replace it? Simply put your car name and year into YouTube with the information about what bulb you are replacing, and a video will pop up to show you how to do it.

AI is changing this. More and more, questions are being answered right inside of search. Make a query and the answer is not just links to sites that might know the answer, but an actual answer based on information that is on the sites you would normally have to click to. That’s pretty awesome in and of itself… having instant answers to simple questions, without needing to search any further. But what about more complex questions that might require learning something before you can understand all the concepts being shared? What happens when you ask questions with complex learning required?

This is where I see the power of micro-learning. And this term is being redefined by AI. Want to learn a complex concept? AI will do two things for you. First, it will curate your learning. Second, it will adapt to your learning needs. Want to learn a complex mathematical concept? AI will be your teacher. Got stuck on one particular concept? AI will recognize the mistake you are making and change how it teaches that concept to better meet your learning needs and pace.

It’s like having content area specialists at your fingertips. And soon intelligent agents will get to know us. Like a personalized AI tutor, we can pick just about any topic and become knowledgeable through small (micro) learning modules based on what we know, what we want to know, and how we learn best.

The AI can deliver a lecture, but also ask questions. It can provide the information in a conversation, or it can point us to videos and experts that would normally have taken considerable research to find. And because it can adapt to how quickly you pick something up, or to where you struggle with a concept, you get the learning you need when you need it. Micro-learning with AI is the new search of 2025, and it’s just going to get better and better.

How will this change schools? What will AI assisted lessons look like in classrooms? How will the learning be individualized by teachers? By students? How will this change the way we look at content? How important will the process be compared to the content?

I think this will be a year of experimentation and adaptation. Micro-learning won’t just be something our students do, but our educators as well. Furthermore, what micro-learning means a year from now will look a lot different than it does now. And frankly, I’m excited about the way micro-learning is adapting to the powerful AI tools that are currently being developed. We are headed into a new frontier of adaptive, just-in-time, micro-learning.

Promise and Doom

I see both promise and doom in the near future. Current advances in technology are incredible, and we will see amazing new insights and discoveries in the coming years. I’m excited to see what problems AI will solve. I’m thrilled about what’s happening to preserve not just life, but healthy life, as I approach my older years. I look forward to a world where many issues like hunger and disease have ever-improving positive outcomes. And yet, I’m scared.

I also see the end of civilized society. I see the threat of near extinction. I see a world destroyed by the very technologies that hold so much promise. As a case in point, see the article, “‘Unprecedented risk’ to life on Earth: Scientists call for halt on ‘mirror life’ microbe research”.

We are already playing with technology that has the potential to “put humans, animals and plants at risk of lethal infections.” What scares me most is the word I chose to start that sentence with: ‘We’. The proverbial ‘we’ right now are top scientists. But a decade, maybe two decades from now, that ‘we’ could include an angry, disenfranchised, and disillusioned 22 year old… using an uncensored AI to intentionally develop (or rather synthetically design) a bacterium or a virus that could spread faster than any plague humans have ever faced. Not a top researcher, not a university-trained scientist, but a regular ‘Joe’ who has decided at a young age that the world isn’t giving him what he deserves and sets out to take revenge on an epic scale.

The same thing that excites me about technological advancement also scares me… and it’s the power of individuals to impact our future. We all know the names of great thinkers like Galileo, Newton, Curie, Tesla, and Einstein, incredible scientists who transformed the way we think of the world. People like them are rare, and their influence has been lasting. For every one of them there are millions, maybe billions, of bright thinkers of whom we know nothing.

I don’t fear the famous scientist, I fear the rogue, unhappy misfit who uses incredible technological advancements for nefarious reasons. The same technology that can make our lives easier, and create tremendous good in the world, can also be used with bad intentions. But there is a difference between someone using a revolver for bad reasons and someone using a nuclear bomb for bad reasons. The problem we face is that the equivalent destructive power of a nuclear bomb (or worse) will be accessible to more and more people. I don’t think this is something we can stop, and so as amazing as the technology we see today is, my fear is that it could also be what leads to our demise as a species.

The cat’s out of the bag

I find it mind-boggling that just 5 years ago the big AI debate was whether or not we would let AI out into the wild. The idea was that AI would be sort of ‘boxed’ and within our ability to ‘contain’… but we have somehow decided to just bypass this question and set AI free.

Here is a trifecta of things that tell me the cat is out of the bag.

  1. NVIDIA puts out the Jetson Orin Nano: a tiny AI computer that doesn’t need to be connected to the cloud.
  2. Robots like Optimus from Tesla are already being sold.
  3. AIs are proving that they can self-replicate.

That’s it. That’s all. Just extrapolate what you want to from these three ‘independent’ developments. Put them together, stir in 5 years of technological advancement. Add a good dose of open source access and think about what’s possible… and beyond possible to contain.

Exciting, and quite honestly, scary!

A prediction nears reality

Andrew Wilkinson said on X:

“Just watched my 5-year-old son chat with ChatGPT advanced voice mode for over 45 minutes.

It started with a question about how cars were made.

It explained it in a way that he could understand.

He started peppering it with questions.

Then he told it about his teacher, and that he was learning to count.

ChatGPT started quizzing him on counting, and egging him on, making it into a game.

He was laughing and having a blast, and it (obviously) never lost patience with him.

I think this is going to be revolutionary. The essentially free, infinitely patient, super genius teacher that calibrates itself perfectly to your kid’s learning style and pace.

Excited about the future.”

– – –

I remember visiting my uncle back when I was in university. The year was somewhere around 1988-90. So, at least 34 years ago. We were talking about the future and Joe Truss explained to me what learning would be like in the coming age of computers.

He said, (loosely paraphrased, this was a long time ago):

‘In the future we will have virtual teachers that will be able to teach us exactly what we want to know in exactly the format we need to learn best. You want to learn about relativity? How about learning from Einstein himself? You’ll see him in front of you like he is real. And he will not just lecture you, he will react to your questions and even bio-feedback. You look puzzled, he might ask a question. He sees you looking up and to the left, which he knows means you are trying to visualize something, and so he changes his lesson to provide an image. He will be a teacher personalized to any and all of your learning needs.’

We aren’t quite there yet, but the exchange Andrew Wilkinson’s son had with ChatGPT, and the work being done in virtual and augmented reality, suggest that Joe’s prediction is finally coming into being.

I too am excited about the future, and more specifically, the future of learning.

AI takes down EDU-giant

From the Wall Street Journal:

“How ChatGPT Brought Down an Online Education Giant:
Chegg’s stock is down 99%, and students looking for homework help are defecting to ChatGPT”

This is an excellent example of job loss due to AI. From the article:

“Since ChatGPT’s launch, Chegg has lost more than half a million subscribers who pay up to $19.95 a month for prewritten answers to textbook questions and on-demand help from experts. Its stock is down 99% from early 2021, erasing some $14.5 billion of market value.”

Chegg is a clear loser, but so is just about every website that offered to write essays for students. Imagine watching your profits disappear and seeing your entire business model collapse before your eyes.

This is just one example. There are many fields and jobs that either have disappeared or are going to disappear. Just think about the shift that’s already happening. People who thought they were in stable jobs, stable careers, are now realizing that they might be obsolete.

There will be new jobs, but more often than not there will be a condensing of jobs… one person where there used to be five, maybe ten people. For example, if a company had 5 writers producing daily articles for websites, they could lay off all but their best writer, who now acts as an editor for articles written by AI. Keep the person that understands the audience best, and that person ensures the AI writing is on point.

Code writing, data analysis, legal services, finance, and as mentioned above, media and marketing, these are but a handful of areas where AI is going to undermine the job market. And jobs are going to disappear, if they haven’t already. This has been something mentioned a lot, but the demise of a company like Chegg, with no vision for how they can pivot, is a perfect example of how this isn’t just a problem of the future, it’s happening right now.

Where will this lead in the next 5 years? What does the future job market look like?

Will there be new jobs? Of course! Will the job loss outpace the creation of new jobs? Very likely. And so where does that lead us?

Maybe it’s time to take a hard look at Universal Basic Income. Maybe it’s time to embrace AI and really think about how to use it in a way that helps us prosper, rather than to help us write emails and word things better. Maybe it’s time to accept that the AI infused world we live in now is going to undermine the current job market, and forever change whole industries. This isn’t some dystopian future, this is happening right now.

Looking at AI and the future of schools

There is no doubt that Artificial Intelligence (AI) is going to influence the way we do school in the very near future. I have been pondering what that influence will look like. What are the implications now, and what will they be in just a few short years?

Now: AI is going to get messy. Unlike when Google and Wikipedia came out and we were dealing with plagiarism issues, AI writing is not Google-able, and there are two key issues with this: First, you can create assignments that are not Google-able, but you are much more limited in what you can create that is un-AI-able. That is to say, you can ask a question that isn’t easily answerable by Google search, but AI is quite imaginative and creative and can postulate things that a Google search can’t answer, and then share a coherent response. The second issue is that AI detectors are not evidence of cheating. If I find the exact source that was plagiarized, it’s easy to say that a student copied it, but if a detector says that something is 90% likely to be written by AI, that doesn’t mean it’s only 10% likely to be written by a person. For example, I could write that last sentence in 3 different ways and an AI detector would come up with 3 different likelihoods that it is AI-written. Same sentence, different percentage of likelihood to be AI-written, and all written by me.

So we are entering a messy stage of students choosing to use AI to do the work for them, or to help them do the work, or even to discuss the topic and argue with them so that they can come up with their own, better responses. We can all agree that the three uses I shared above are progressively ‘better’ uses of AI, but again, all are using AI in some way. The question is, are we going to try to police this, or try to teach appropriate use at the appropriate time? And even when we do, what do we do when we suspect misuse but can’t prove it? Do we give full marks and move on? Do we challenge the student? What’s the best approach?

So we are in an era where it is more and more challenging to figure out when a student is misusing AI and we are further challenged with the burden of proof. Do we now start only marking things we see students do in supervised environments? That seems less than ideal. The obvious choice is to be explicit about expectations and to teach good use of AI, and not pretend like we can continue on and expect students not to use it.

The near future: I find the possible direction of AI use in schools quite exciting to consider. Watch this short video of Sal Khan and his son, Imran, working with an OpenAI tool to solve a math question without the AI giving away the answer.

When I see something like this video, made almost 6 months ago, I wonder: what’s going to be possible in another couple of years? How much will an AI ‘know’ about a student’s approach to learning, about their challenges? About how best to entice learning specifically for each student? And then what is the teacher’s role?

I’m not worried about teachers becoming redundant; on the contrary, I’m excited about what’s possible in this new era. When 80% of the class is getting exactly the instruction they need to progress to a grade standard on the required content, how much time does a teacher have during class to meet with and support the other 20% of students who struggle? When a large part of the curriculum is covered by AI, meeting and challenging students at their ideal points of challenge rather than the whole class moving at one targeted pace, how much ‘extra’ time is available to do some really interesting experiments or projects? What can be done to take ideas from a course across multiple disciplines and to teach students how to make real-world connections with the work they are studying?

Students generally spend between 5 and 6 hours a day in class at school. If we are ‘covering’ what we need to with AI assistance in less than 3 hours, what does the rest of the time at school look like? Student directed inquiries based on their passions and interests? Real world community connections? Authentic leadership opportunities? Challenges and competitions that force them to be imaginative and creative? The options seem both exciting and endless.

The path from ‘now’ to ‘the near future’ is going to be messy. That said, I’m quite excited to see how the journey unfolds. While it won’t be a smooth ride, it will definitely be a great adventure, and one headed to a pretty fantastic destination.

_____
Update: Inspired by my podcast conversation with Dean Shareski, here.

AI Deep Dive

I’ve been working with Joe Truss on a project called ‘Book of Codes’, where we are examining the building blocks of the universe. The main premise is that we live in a Tetraverse, a universe where the smallest possible length in the universe (the Planck length) must be the edge length of a tetrahedron.

We put our ‘We Live in a Tetraverse‘ video into Google’s NotebookLM and had it create this Deep Dive Conversation.

I think this is quite insightful. I’ve been thinking about how to use this tool since I shared it, by putting my blog posts into it and having it do a Deep Dive on the content. Since then, I’ve read a few things that question just how useful this kind of podcast really is.

Alan Levine says in ‘Wow Us with your AI Generated Podcast…’, “In one sample listen, you might be wowed. But over a series, Biff and Buffy sound like a bunch of gushing sycophants, those office butt kissers you want to kick in the pants.”

And in a comment response, Aaron Davis mentions my blog’s Deep Dive: “I agree with you Alan about the initial amazement about what is possible, I am not sure how purposeful it is. I listened to David Truss’ podcast he posted and was left thinking about my experience with David Truss’ writing. I imagine that such tools may provide a possible entry way into new content, but I am not sure what is really gained by putting this into an audio format?”

It’s true, I’ve done this a few times, and while it can be impressive, I do see that this can get a bit old pretty fast. Except for one thing… I think that if you ask it for a general summary of light content, you are going to get a light and fluffy Deep Dive response. However, if you want to understand something really challenging, different, or dense, this could be a really good way to get a general understanding of tough-to-understand content. The Deep Dive into the Tetraverse video actually did a really good job of describing new content in a clear way. I found the kaleidoscope metaphor it used insightful, and I think that listening to the audio first would help someone appreciate the video even more.

Like any new and shiny tool, this Deep Dive podcast on Google’s NotebookLM will get a lot of play and then dwindle in use… but that doesn’t make it useless. I think it will find its rightful place as a way to take dense material and make it digestible. It will be a great content introduction, an insightful entry into new learning. It won’t become something you listen to alongside your favourite podcast episodes. Still, it will have a purpose, and you might find yourself going to it, or to a similar tool, when you have too much content to summarize, or when the content is significantly challenging to parse.

Information abundance requires pattern recognition

What a fantastic quote by Adam Grant,

“The hallmark of expertise is no longer how much you know. It’s how well you synthesize.

Information scarcity rewarded knowledge acquisition. Information abundance requires pattern recognition.

It’s not enough to collect facts. The future belongs to those who connect dots.”

Pattern recognition and synthesis are the path to innovation, ingenuity, and invention. The collection of knowledge is not enough. Wisdom comes from recognizing how to make connections across different fields, how to make meaning out of relationships that not everyone sees.

Artificial Intelligence can give us the knowledge we seek. It can dumb down the ideas to our level of understanding, and even teach us with relevant examples when we are stuck. More information won’t be what we seek. Instead we will seek new connections, patterns, and relationships.

The desired experts of tomorrow are probably not the siloed experts we once sought. Instead they will be information generalists who understand how to take information from different fields, identify relationships others don’t see, and synthesize information such that they can tell a story others won’t know to tell.

How are we preparing the next generation of learners for this new future? How will schools need to change to help students prepare for the future in a world of abundant and easily accessible information? It certainly won’t be by feeding them content. Instead, the future of education lies in creating challenges where students need to synthesize information and recognize connections and patterns across different fields of study.

Related: My ‘Transforming Our Learning Metaphors’ Ignite Presentation from almost a decade ago.

The best questions

There is a cliché saying that ‘there is no such thing as a dumb question.’ Tell that to a teacher who has just started an engaging discussion in a class, and a kid undermines the flow of the conversation with a dumb, often unrelated question. The reality is that questions have innate and even measurable value, and there is depth and quality to good question asking.

Think about how important good questioning is in the new world of AI. We need not look far on social media these days to find a post about how to generate intelligent prompts… intelligent questions, well posed, and designed to give you back optimum responses. Design the right question and you increase the chances of an ideal answer.

What’s the best way to promote good questioning in schools? How do we teach ‘Asking good questions?’

At Inquiry Hub we have students design their own inquiries. They take a course built around figuring out what their inquiry question is, and then answering it. And they don’t do this once; they do it several times a year for their first two years, then in Grade 11 they design a full-year course.

All the while, students are asking questions, then seeking answers. It’s the practice of asking the questions and not just seeking the answers that makes this process special. They aren’t just asking questions Google or AI can produce answers to. They are not answering a question the teacher asked. They are forming the questions and thus the direction of the learning.

You don’t start asking better and better questions just by answering other people’s questions. You don’t ask better and better questions without practicing forming the questions yourself. Students need to be designing the questions, because if they are only in charge of answering them, there will be tools and upcoming technologies that will find the same or better answers, faster. The future innovators of the world will be better at writing the best questions, not just answering them.