Tag Archives: future

Promise and Doom

I see both promise and doom in the near future. Current advances in technology are incredible, and we will see amazing new insights and discoveries in the coming years. I’m excited to see what problems AI will solve. I’m thrilled about what’s happening to preserve not just life, but healthy life, as I approach my older years. I look forward to a world where many issues like hunger and disease have ever-improving positive outcomes. And yet, I’m scared.

I also see the end of civilized society. I see the threat of near extinction. I see a world destroyed by the very technologies that hold so much promise. As a case in point, see the article, “‘Unprecedented risk’ to life on Earth: Scientists call for halt on ‘mirror life’ microbe research”.

We are already playing with technology that has the potential to “put humans, animals and plants at risk of lethal infections.” What scares me most is the word I chose to start that sentence with: ‘We’. The proverbial ‘we’ right now means top scientists. But a decade, maybe two, from now that ‘we’ could include an angry, disenfranchised, and disillusioned 22-year-old… using an uncensored AI to intentionally develop (or rather synthetically design) a bacterium or virus that could spread faster than any plague humans have ever faced. Not a top researcher, not a university-trained scientist, but a regular ‘Joe’ who has decided at a young age that the world isn’t giving him what he deserves and sets out to be vengeful on an epic scale.

The same thing that excites me about technological advancement also scares me… the power of individuals to shape our future. We all know the names of some great thinkers: Galileo, Newton, Curie, Tesla, and Einstein, incredible scientists who transformed the way we understand the world. People like them are rare, and they have had lasting influence on the way we think. For every one of them there are millions, maybe billions, of bright thinkers of whom we know nothing.

I don’t fear the famous scientist; I fear the rogue, unhappy misfit who uses incredible technological advancements for nefarious reasons. The same technology that can make our lives easier and create tremendous good in the world can also be used with bad intentions. But there is a difference between someone using a revolver for bad reasons and someone using a nuclear bomb for bad reasons. The problem we face is that the equivalent harm of a nuclear bomb (or worse) will be something more and more people have access to. I don’t think this is something we can stop, and so as amazing as the technology we see today is, my fear is that it could also be what leads to our demise as a species.

The cat’s out of the bag

I find it mind-boggling that just 5 years ago the big AI debate was whether or not we would let AI out into the wild. The idea was that AI would be kept ‘boxed’, within our ability to ‘contain’ it… but we have somehow decided to just bypass this question and set AI free.

Here is a trifecta of things that tell me the cat is out of the bag.

  1. NVIDIA puts out the Jetson Orin Nano, a tiny AI computer that doesn’t need to be connected to the cloud.
  2. Robots like Optimus from Tesla are already being sold.
  3. AIs are proving that they can self-replicate.

That’s it. That’s all. Just extrapolate what you want to from these three ‘independent’ developments. Put them together, stir in 5 years of technological advancement. Add a good dose of open source access and think about what’s possible… and beyond possible to contain.

Exciting, and quite honestly, scary!

A prediction nears reality

Andrew Wilkinson said on X:

“Just watched my 5-year-old son chat with ChatGPT advanced voice mode for over 45 minutes.

It started with a question about how cars were made.

It explained it in a way that he could understand.

He started peppering it with questions.

Then he told it about his teacher, and that he was learning to count.

ChatGPT started quizzing him on counting, and egging him on, making it into a game.

He was laughing and having a blast, and it (obviously) never lost patience with him.

I think this is going to be revolutionary. The essentially free, infinitely patient, super genius teacher that calibrates itself perfectly to your kid’s learning style and pace.

Excited about the future.”

– – –

I remember visiting my uncle back when I was in university. The year was somewhere around 1988-90, so at least 34 years ago. We were talking about the future, and my uncle, Joe Truss, explained to me what learning would be like in the coming age of computers.

He said, (loosely paraphrased, this was a long time ago):

‘In the future we will have virtual teachers that will be able to teach us exactly what we want to know in exactly the format we need to learn best. You want to learn about relativity? How about learning from Einstein himself? You’ll see him in front of you like he is real. And he will not just lecture you, he will react to your questions and even bio-feedback. You look puzzled, he might ask a question. He sees you looking up and to the left, which he knows means you are trying to visualize something, and so he changes his lesson to provide an image. He will be a teacher personalized to any and all of your learning needs.’

We aren’t quite there yet, but the exchange Andrew Wilkinson’s son had with ChatGPT, and the work being done in virtual and augmented reality, suggest that Joe’s prediction is finally coming into being.

I too am excited about the future, and more specifically, the future of learning.

AI takes down EDU-giant

From the Wall Street Journal:

“How ChatGPT Brought Down an Online Education Giant
Chegg’s stock is down 99%, and students looking for homework help are defecting to ChatGPT”

This is an excellent example of job loss due to AI. From the article:

“Since ChatGPT’s launch, Chegg has lost more than half a million subscribers who pay up to $19.95 a month for prewritten answers to textbook questions and on-demand help from experts. Its stock is down 99% from early 2021, erasing some $14.5 billion of market value.”

Chegg is a clear loser, but so is just about every website that offered to write essays for students. Imagine watching your profits disappear and seeing your entire business model collapse before your eyes.

This is just one example. There are many fields and jobs that either have disappeared or are going to disappear. Just think about the shift that’s already happening. People who thought they were in stable jobs, stable careers, are now realizing that they might be obsolete.

There will be new jobs, but more often than not there will be a condensing of jobs… one person where there used to be five, maybe ten people. For example, if a company had five writers producing daily articles for websites, they could lay off all but their best writer, who now acts as an editor for articles written by AI. Keep the person who understands the audience best, and have that person ensure the AI writing is on point.

Code writing, data analysis, legal services, finance, and, as mentioned above, media and marketing: these are but a handful of areas where AI is going to undermine the job market. Jobs are going to disappear, if they haven’t already. This has been predicted a lot, but the demise of a company like Chegg, with no vision for how it can pivot, is a perfect example of how this isn’t just a problem of the future; it’s happening right now.

Where will this lead in the next 5 years? What does the future job market look like?

Will there be new jobs? Of course! Will the job loss outpace the creation of new jobs? Very likely. And so where does that lead us?

Maybe it’s time to take a hard look at Universal Basic Income. Maybe it’s time to embrace AI and really think about how to use it in a way that helps us prosper, rather than to help us write emails and word things better. Maybe it’s time to accept that the AI infused world we live in now is going to undermine the current job market, and forever change whole industries. This isn’t some dystopian future, this is happening right now.

Not the same mistakes

Before I share this, no, it’s not a reflection on my parenting. I’m not wallowing in worry about how I’m messing my kids up. This is just one of the most powerful comics I’ve ever seen, and I think about it a lot as a school principal. Also, profanity warning for the comic below.

Now that I’ve got the disclaimer out of the way, let me share that I think this is one of the most challenging times to grow up in the last few decades. More young adults are living longer with their parents, or working long hours just to be able to afford rent. Many haven’t hit 25 yet and don’t see themselves ever owning a house, or having a backyard like the one they had as a kid. Many more are disillusioned by what they see in the news and on social media.

Meanwhile, parents are doing their best not to make the mistakes of their parents, and yet struggling to navigate what that looks like. Some parents are doing all they can to help a disengaged kid stay in school. Others are lost trying to figure out how to respond to inappropriate behavior. Still others are doing everything to protect their child, but in doing so are preventing them from learning from failure. And still others are doing everything ‘right’, which works for one kid and doesn’t work for another.

And those are the resourceful parents who are trying their absolute best. They aren’t the divorced parents who fight in front of the kids every time the kids are passed off. They aren’t the ones struggling with their own demons of abuse, drugs, or mental illness. Those parents are still doing the best they can with the skills they have, just not skilled in ways that support their kids.

We don’t want to make the same mistakes our parents did. We don’t want to follow the same patterns. That can be, but probably isn’t, a disparaging complaint about our own parents. Rather it’s a recognition that we want to do better, be better.

But try as we might, family dynamics are challenging, the world we live in is challenging, and this comic sums up the parenting challenge perfectly.

Looking at AI and the future of schools

There is no doubt that Artificial Intelligence (AI) is going to influence the way we do school in the very near future. I have been pondering what that influence will look like. What are the implications now, and what will they be in just a few short years?

Now: AI is going to get messy. Unlike when Google and Wikipedia came out and we were dealing with plagiarism issues, AI writing is not Google-able, and there are two key issues with this. First, you can create assignments that are not Google-able, but you are much more limited in what you can create that is un-AI-able. That is to say, you can ask a question that isn’t easily answerable by a Google search, but AI is quite imaginative and creative and can postulate things that a Google search can’t answer, and then share a coherent response. The second issue is that AI detectors are not evidence of cheating. If I find the exact source that was plagiarized, it’s easy to say that a student copied it, but if a detector says that something is 90% likely to be written by AI, that doesn’t mean it’s only 10% likely to be written by a person. For example, I could write that last sentence in 3 different ways and an AI detector would come up with 3 different percentages of likelihood that it is AI. Same idea, different wording, a different likelihood of being AI-written each time, and all written by me.
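To make that last point concrete, here is a minimal sketch in Python. Everything in it is hypothetical: detect_ai_probability is a made-up stand-in for whatever commercial detection service a school might use, and its scoring rule is invented purely to show the behaviour described above, where three human-written rewordings of the same idea get three different scores, none of which prove anything about who wrote the sentence.

```python
# Minimal sketch, assuming a hypothetical detector.
# 'detect_ai_probability' is invented for illustration; it stands in for
# whatever AI-detection service a school might actually use.

def detect_ai_probability(text: str) -> float:
    """Placeholder: pretend this calls a detection service and returns
    a score between 0.0 and 1.0."""
    # Illustrative only: the score here varies with surface features
    # (word count), not with who actually wrote the sentence.
    return min(1.0, 0.3 + 0.01 * len(text.split()))

# Three human-written rewordings of the same idea.
rewordings = [
    "A 90% AI score does not mean there is only a 10% chance a person wrote it.",
    "If a detector reports 90% likely AI, that is not proof a student cheated.",
    "Detectors flag style, not authorship, so a high score is not evidence on its own.",
]

# Same idea, different wording, different score every time.
for sentence in rewordings:
    print(f"{detect_ai_probability(sentence):.0%}  {sentence}")
```

The point of the sketch is simply that a detector outputs a likelihood about style, not a finding about authorship, so the number alone can’t carry the burden of proof discussed below.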

So we are entering a messy stage of students choosing to use AI to do the work for them, or to help them do the work, or even to discuss the topic and argue with them so that they can come up with their own, better responses. We can all agree that the three uses I shared above are progressively ‘better’ uses of AI, but again, all are using AI in some way. The question is, are we going to try to police this, or try to teach appropriate use at the appropriate time? And even when we do this, what do we do when we suspect misuse but can’t prove it? Do we give full marks and move on? Do we challenge the student? What’s the best approach?

We are now in an era where it is more and more challenging to figure out when a student is misusing AI, and we are further challenged with the burden of proof. Do we now start only marking things we see students do in supervised environments? That seems less than ideal. The obvious choice is to be explicit about expectations and to teach good use of AI, not to pretend that we can carry on and expect students not to use it.

The near future: I find the possible directions for AI use in schools quite exciting to consider. Watch this short video of Sal Khan and his son, Imran, working with an OpenAI tool to solve a math question without the AI giving away the answer.

When I see something like this video, made almost 6 months ago, I wonder, what’s going to be possible in another couple of years? How much will an AI ‘know’ about a student’s approach to learning, about their challenges? About how best to entice learning specifically for each student? And then what is the teacher’s role?

I’m not worried about teachers becoming redundant; on the contrary, I’m excited about what’s possible in this new era. When 80% of the class is getting exactly the instruction they need to progress to a grade standard on the required content, how much class time does a teacher have to meet with and support the other 20% of students who struggle? When a large part of the curriculum is covered by AI, meeting and challenging students at their ideal points of challenge rather than the whole class moving at one targeted pace, how much ‘extra’ time is available to do some really interesting experiments or projects? What can be done to take ideas from a course across multiple disciplines and teach students how to make real-world connections with the work they are studying?

Students generally spend between 5 and 6 hours a day in class at school. If we are ‘covering’ what we need to with AI assistance in less than 3 hours, what does the rest of the time at school look like? Student-directed inquiries based on their passions and interests? Real-world community connections? Authentic leadership opportunities? Challenges and competitions that force them to be imaginative and creative? The options seem both exciting and endless.

The path from ‘now’ to ‘the near future’ is going to be messy. That said, I’m quite excited to see how the journey unfolds. While it won’t be a smooth ride, it will definitely be both a great adventure and a trip headed to a pretty fantastic destination.

_____
Update: Inspired by my podcast conversation with Dean Shareski, here.

Ability and Agility

I love this quote, shared in a video on LinkedIn:

“It used to be about ability. And now, in a changing world, I think what we should be looking for is agility. I want to know how quickly do you change your mind? How fast are you to admit you’re wrong? Because what that means is you’re not just going to be reacting to a pandemic or to AI, you’re actually going to be anticipating those problems and seeing around corners, and then leading change as opposed to being a victim of it.” ~Adam Grant

It’s more than just anticipating problems, it’s about being agile, understanding challenges, and addressing them while they are small. It’s about understanding your strengths, and the strengths of your team… as well as weaknesses.

It’s about Agile Ability, which is why I titled this ‘Ability AND Agility’ rather than ‘Ability VERSUS Agility’. We need to embrace our failures and learn from them, recognize problems early, even predict them and be preemptive. This is so different from a culture of accountability and blame.

The desired student, employee, partner, colleague of the future will learn what they need to on the job. They’ll be exceptional because of their agility and willingness to learn, not just because of what they came to the table already knowing.

Not all voices are equal

I love the Bill Nye analogy about the climate debate. He says that if the debate were authentic, rather than having two talking heads debating, it would be hundreds of scientists on one side versus one climate denier on the other.

I saw a social media clip yesterday where a microbiologist was debunking a self-declared holistic practitioner on the consumption of unpasteurized milk. The microbiologist wrote his master’s thesis on bacterial infections in cows’ mammary glands.

The self-declared expert espousing unscientific and incorrect information on social media is not an equal voice to an expert. Do they have a right to share their views? Sure. Do they deserve an audience? No.

I wish I knew how to make the situation better, but I don’t have answers. I’m extremely pro ‘free speech’. I think people are entitled to share their views. However, when I see misinformation and disinformation being shared by people with large audiences, I shudder. I worry about how their messages are consumed and how many people they lead down a bad path.

In 2024 no one, and I mean NO ONE, should believe the earth is flat, and yet the group of flat-earth believers is getting larger. Imagine being able to own a telescope, and to see images from the James Webb Space Telescope, and still believing something that societies more than two thousand years ago already knew was wrong.

Not all voices are equal, and some voices deserve a larger platform than others. Who decides? Who polices? I don’t know, but I do know that we are entering (have entered) an era where false information gets shared significantly faster than correct information. Corrected information and updated facts don’t get the same play time on social media. So we are essentially living in an era of disinformation.

This doesn’t feel like progress, and as AI models continue to learn from the inputs we are providing, this scares me. I saw a stat that as much as 80% of the internet could be AI-generated by the end of 2026. How much of that generated information will be based on incorrect assumptions and conclusions? How much of it will be intentionally misguided? Who is deciding which voices the AI models listen to?

We can’t continue to let ill-informed people have equal voices to those who have more informed perspectives… But I’m not informed enough to know how to change this.

Information abundance requires pattern recognition

What a fantastic quote by Adam Grant,

“The hallmark of expertise is no longer how much you know. It’s how well you synthesize.

Information scarcity rewarded knowledge acquisition. Information abundance requires pattern recognition.

It’s not enough to collect facts. The future belongs to those who connect dots.”

Pattern recognition and synthesis are the path to innovation, ingenuity, and invention. The collection of knowledge is not enough. Wisdom comes from recognizing how to make connections across different fields, how to make meaning out of relationships that not everyone sees.

Artificial Intelligence can give us the knowledge we seek. It can dumb down the ideas to our level of understanding, and even teach us with relevant examples when we are stuck. More information won’t be what we seek. Instead we will seek new connections, patterns, and relationships.

The desired experts of tomorrow are probably not the siloed experts we once sought. Instead they will be information generalists who understand how to take information from different fields, identify relationships others don’t see, and synthesize information such that they can tell a story others won’t know to tell.

How are we preparing the next generation of learners for this new future? How will schools need to change to help students prepare for the future in a world of abundant and easily accessible information? It certainly won’t be by feeding them content. Instead, the future of education lies in creating challenges where students need to synthesize information and recognize connections and patterns across different fields of study.

Related: My ‘Transforming Our Learning Metaphors’ Ignite Presentation from almost a decade ago.

How good, how soon?

I am still a little freaked out by how good Google NotebookLM’s AI ‘Deep Dive’ conversations are. The conversations are so convincing. The little touches it adds, like extended pauses after words like ‘and’, are an excellent example of this.

In the one created for my blog, the male voice asked, “It actually reminds me, you ever read Atomic Habits by James Clear?” And the female voice responded, “I haven’t. No.”

Think about what has to happen here for the conversation to continue in a genuine way. The male voice can now make a point and provide the female voice with ‘new’, previously unknown information. But this whole conversation is actually developed by a single AI.

How soon before you have an entire conversation with a customer service representative, oblivious to the fact that you are actually talking to an AI? Or watch a newscast or a movie, unaware that the people you are watching are not really people?

I shared close to 2,000 blog posts I’ve written into the notebook. If I shared my podcasts too and it replicated my voice, I wonder how long it would be before a digital me could be set up to write my posts and then do live readings of them on my blog. Writing and sounding just like me… without me having to do it!

As a scary extension of this, could I learn something from the new content that it produces? Could I gain insights from the digital me that I would struggle to come up with myself?

This is just the beginning. How much of the internet is going to end up being AI-generated and filled with AI reactions and responses to other AIs? And how much longer after that before we notice?