
New learning paradigm

I heard something in a meeting recently that I haven’t heard in a while. It was in a meeting with some online educational leaders across the province, and the topic of Chat GPT and AI came up. In an online course, with limited opportunities for supervised work or tests, it’s really challenging to know whether the work is being done by the student, a parent, a tutor, or an Artificial Intelligence tool. That’s when a conversation came up that I’ve heard before. It was a bit of a stand-on-a-soapbox diatribe: “If an assignment can be done by Chat GPT, then maybe the problem is in the assignment.”

That’s almost the exact line we started to hear about 15 years ago about Google… I might even have said it: “If you can just Google the answer to the question, then how good is the question?” Back then, this prompted some good discussions about assessment and what we valued in learning. But that line is far more relevant to Google than it is to AI.

I can easily create a question that would be hard to Google. It is significantly harder to do the same with LLMs – Large Language Models like Chat GPT. A Google search won’t turn up a critical thinking challenge that someone else hasn’t already shared, but I can ask Chat GPT to create answers to almost anything. Furthermore, I can ask it to create things like pros & cons lists, then put those in point form, then do a rough draft of an essay, then improve on the essay. I can even ask it to use the vocabulary of a Grade 9 student. I can also give it a writing sample and ask it to write the essay in the same style.
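To make the contrast with Google concrete, here is a minimal sketch of that kind of iterative prompting, assuming access to OpenAI’s Python client; the model name and the prompts are illustrative placeholders, not a recommendation:

```python
# A minimal sketch of iterative prompting, assuming the OpenAI Python
# client (pip install openai) and an OPENAI_API_KEY in the environment.
# The model name and prompts are illustrative placeholders.
from openai import OpenAI

client = OpenAI()

def ask(history, prompt):
    """Add a prompt to the running conversation and return the model's reply."""
    history.append({"role": "user", "content": prompt})
    reply = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model; any chat model would do
        messages=history,
    ).choices[0].message.content
    history.append({"role": "assistant", "content": reply})
    return reply

# The same chain of requests described above: pros & cons, point form,
# rough draft, improved draft, then a Grade 9 vocabulary rewrite.
steps = [
    "Create a pros and cons list for a four-day school week.",
    "Put that list in point form.",
    "Use it to write a rough draft of an essay.",
    "Improve on the essay.",
    "Rewrite the essay using the vocabulary of a Grade 9 student.",
]

history = []
for step in steps:
    draft = ask(history, step)

print(draft)  # the final, iterated essay
```

Each step builds on the model’s previous answer because the whole conversation history is sent back with every request – exactly the kind of iteration a search engine can’t do.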

LLMs are not just a better Google, they are a paradigm shift. If we are trying to have conversations about how to catch cheaters (students using Chat GPT to do their work), we are stuck in the old paradigm. That said, I openly admit this is a much bigger problem in online learning, where we don’t see and work closely with students in front of us. And we are heading into an era where there will be no way to verify what’s student work and what’s not, so it’s time to recognize the paradigm shift and start asking ourselves new questions…

The biggest questions we need to ask ourselves are: how can we teach students to use AI effectively to help them learn, and what assignments can we create that ask them to use AI effectively to help them develop and share ideas and new learning?

Back when some teachers were saying, “Wikipedia is not a valid website to use for research or to cite,” many more progressive educators were saying, “Wikipedia is a great place to start your research,” and, “Make sure you include the date you quoted the Wikipedia page, because the page changes over time.” The new paradigm will see some teachers making students write essays in class on paper or on wifi-less, internet-less computers, while other teachers send students to Chat GPT and help them understand how to write better prompts.

That’s the difference between old and new paradigm thinking and learning. The transition is going to be messy. Mistakes are going to be made, both by students and teachers. Where I’m excited is in thinking about how learning experiences are going to change. The thing about a paradigm shift is that it’s not just a slow transition but a leap into new territory. The learning experiences of the future will not be the same, and we can either try to hold on to the past, or we can get excited about the possibilities of the future.

AI and humans together

On Threads, Hank Green said, “AI isn’t scary because of what it’s going to do to humans, it’s scary because of what it’s going to allow humans to do to humans.”

I recently shared examples of this in High versus low trust societies: more sophisticated scams, sensationalized clickbait news titles and articles, and clever sales pitches, all ‘enhanced’ and improved by Artificial Intelligence. None of these are things AI is doing to us. All of them are ways AI can be used by people to take advantage of other people.

I quoted Hank’s Thread and said, “It’s just a tool, but so are guns, and look at how well we (mis)manage those!”

Overall I’m excited about how we will use AI to improve what we can do. There are already fields of medicine where AI can do thousands of hours of work in just a few hours. Drug discovery, for example: “A multi-institutional team led by Harvard Medical School researchers has launched a platform that aims to optimize AI-driven drug discovery by developing more realistic data sets and higher-fidelity algorithms.”

The true power and potential of AI isn’t what AI can do on its own, it’s what humans and AI can do together.

But I also worry about people using amazing AI tools as weapons. For example, creating viruses or even dirty bombs. These are things that are out of reach for most people now, but AI might make such weapons both more affordable and more available… to anyone and everyone.

All this to say that Hank Green is right. “AI isn’t scary because of what it’s going to do to humans, it’s scary because of what it’s going to allow humans to do to humans.”

We are our own worst enemy.

The true danger and threat of AI isn’t what AI can do on its own, it’s what humans and AI can do together.

High versus low trust societies

I love when someone adds to my perspective on social media. That’s exactly what happened after I posted Basic assumptions a couple of days ago. The post reflected that “people no longer give each other the benefit of the doubt that intentions are good. This used to be a basic assumption we operated on, the premise that we can start with the belief that everyone is acting in good faith.”

I shared the post on Twitter and Chris Kalaboukis and I had the following conversation thread:

Chris: Reading your post: could we be transitioning from a high-trust to a low-trust society?

Dave: Yes, that seems like an appropriate conclusion. Is there an author that speaks of this idea?

Chris: Not that I can recall, however, if you look at the attributes of low-trust societies you see a lot of what is happening now.

Dave: So true! The circle of high trust seems to be shrinking and it really seems like a step backwards… tribalism trumps the collective of a greater community.

Chris: It is. It seems that even our institutions are driving us towards more tribalism and division.

Dave: And how do you suppose we correct this course? I honestly don’t have a clue, and see things getting worse before they get better.

Chris: I think that in reality, most people prefer to live in a high-trust society. We need leaders and media who support that vision.

Dave: I think the biggest problem right now is that most leaders do not want to step into a limelight where both social media and news outlets are only interested in focussing on the dirt. It seems everyone is measured by their worst transgressions, regardless of many positive deeds.

Chris: If it bleeds, it leads. We’ve never been able to communicate with more people at the same time, but the only communication which seems to get through is negative. It’s all about keeping your attention to sell more ads.

Dave: I sound like quite the pessimist, that’s not usually my stance on things, but I do struggle to see a way forward from here.

—–

The idea Chris shared that we could be ‘transitioning from a high-trust to a low-trust society’ seems insightful and really intrigues me. It isn’t happening at just one level, but at many!

• Scam phone calls and emails are perfect examples. We used to operate from a position of trust, but now unknown calls and unsolicited emails are all necessarily met with skepticism.

• Sensationalized news leads with misleading headlines that are more about getting attention and clicks than about providing truthful news. And if the news slant doesn’t match your beliefs, it’s ‘fake news’.

• Sales pitches and advertising promise almost everything under the sun: you aren’t buying a product with a basic function, you are buying a product that is going to change your life or transform how you do ‘X’ or use ‘Y’… your results will surprise you and you’ll be amazed!

• If you are even slightly left wing you are ‘woke’ or ‘Antifa’ in the most derogatory way these words can be used. If you are even slightly right wing you are ‘Alt-right’ and racist. No one gets to sit on a spectrum; you are viewed as an extreme on one side or the other. And even agreeing with the other side on a single topic makes you less trustworthy to your own.

These are but a few ways we’ve become a lower-trust society. Ad hominem and straw man attacks get more attention than sound arguments. A well-told lie is easily shared while complex truths are not. Saying a situation is complex and sharing nuance does not make for catchy sound bites, and isn’t going to go viral on TikTok or Instagram Reels. No, but the snarky personal attack will, as will a one-sided, extreme view that packs a powerful punch.

What’s worse is that moderate voices get shut out. And in general many people feel silenced or would rather not share a view that is even slightly controversial. So the extreme voices get even more airtime and attention.

I feel this often. Writing every day, and sometimes picking controversial topics to discuss, I find myself tiptoeing and treading very carefully. As I said in my Twitter conversation with Chris above, “It seems everyone is measured by their worst transgressions, regardless of many positive deeds.” I sometimes wonder which one thing I say will get blown out of proportion. If I write one single inappropriate or strongly biased phrase, will it define me? Will it undermine the 1,500+ posts that I’ve written, and make me out to be something or someone I’m not?

This sounds paranoid, but I wrote one post a few years ago that a friend private messaged me about, then called me and said I’d gone too far with my opinion on a specific point. I totally saw his point, went back and adjusted my post to tone it down… but I feel like that one issue, that one strong and overly biased opinion shared publicly put a rift in our friendship. And that’s someone I respect, not some stranger coming at me, not someone that doesn’t know my true character. My opinion in his eyes is now less trustworthy, and holds less value. That said, I appreciated the feedback, and respect that he took the time to share it privately. That’s rare these days.

The path forward is not easy. We aren’t just swaying slightly towards a lower-trust society, we are on a full pendulum swing away from a high-trust one. Tribalism, nationalism, and extremism are pulling our world apart. Who do you trust? What institutions? Which governments? Who do you consider a neighbour? Who will you break bread with? Who do you believe?

The circles of trust are getting smaller, and the mechanisms to share bias and misinformation are growing. We are devolving into a less trusting society or rather societies, and it’s undermining our sense of community. We need messages of kindness, love, and peace to prevail. We need tolerance, acceptance, and more than anything trustworthy institutions and leaders. We need moderates and centrists to voice compromise and minimize extremist views. We need to rebuild a high trust society… together.

AI is Coming… to a school near you.

Miguel Guhlin asked on LinkedIn:

“Someone asked these questions in response to a blog entry, and I was wondering, what would YOUR response be?

1. What role/how much should students be using AI, and does this vary based on grade level?

2. What do you think the next five years in education will look like in regards to AI? Complete integration or total ban of AI?”

I commented:

1. Like a pencil or a laptop, AI is a tool to use sometimes and not use other times. The question is about expectations and management.

2. Anywhere that enforces a total ban on AI is going to be playing a never-ending and losing game of catch-up. That said, I have no idea what total integration will look like. Smart teachers are already using AI to develop and improve their lessons, and those teachers will know that students can, will, and should use these tools as well. But as in question 1… when it’s appropriate. Just because a laptop might be ‘completely integrated’ into a classroom as a tool students use doesn’t mean everything they do in a classroom is with and on a laptop.

I’ve already dealt with some sticky issues around the use of AI in a classroom and online. One situation last school year was particularly messy, with a teacher using Chat GPT as an AI detector rather than a purpose-built AI detection tool. It turns out that Chat GPT is not a good AI detector. It might be better now, but I can confirm that in early 2023 it was very bad at this. I even put some of my own work into it, and Chat GPT told me a couple of paragraphs were written by it, even though I wrote the piece about 12 years earlier.

But what do we do in the meantime, especially in my online school where very little, if any, work is supervised? Do we give up on policing altogether and just let AI do the assignments as we try to AI-proof them? Do we give students grades for work that isn’t all theirs? How is that fair?

This is something we will figure out. AI, like laptops, will be integrated into education. Back in 2009 I presented on the topic “The POD’s are Coming!” (slideshow here), about Personally Owned Devices (laptops etc.) coming into our classrooms, and the fear of these devices. We are at that same point with AI now. We’ll get through this and our classrooms will adapt (again).

And in a wonderful full-circle coincidence, one of the images I used in the POD’s post above was a posterized quote by Miguel Guhlin.

It’s time to take the leap. AI might be new… but we’ve been here before.

Asimov’s Robot Visions

I’m listening to Isaac Asimov’s book Robot Visions on Audible: short stories that center around his Three Laws of Robotics (Asimov’s 3 Laws).

• The First Law: A robot may not injure a human being or, through inaction, allow a human being to come to harm.

• The Second Law: A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.

• The Third Law: A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
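One way to see how rigid this hierarchy is: a lower law only ever applies once every higher law is satisfied. Here is a toy sketch of that precedence (my own illustration, not anything from Asimov’s stories or a real robotics system):

```python
# A toy illustration of the Three Laws' strict precedence; not from
# Asimov's stories or any real robotics system.
def decide(order_given: bool, order_harms_human: bool,
           action_endangers_robot: bool) -> str:
    """Resolve a robot's duty, checking the Laws from highest to lowest."""
    if order_given and order_harms_human:
        # First Law outranks the Second: an order to harm a human is refused.
        return "refuse the order"
    if order_given:
        # Second Law outranks the Third: obey, even when
        # action_endangers_robot is True; self-preservation deliberately
        # never enters this branch.
        return "obey the order"
    if action_endangers_robot:
        # Third Law: with no higher law in play, avoid self-destruction.
        return "avoid the action"
    return "proceed"

# An order to harm a human is refused, whatever else is true:
print(decide(order_given=True, order_harms_human=True,
             action_endangers_robot=False))  # refuse the order
# A dangerous but harmless-to-humans order is still obeyed:
print(decide(order_given=True, order_harms_human=False,
             action_endangers_robot=True))   # obey the order
```

The strictness is the point: in this toy version the robot’s self-preservation never even enters the decision once an order is on the table.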

These short stories all focus on ways that these laws can go wrong, or perhaps awry is the better term. A few things make these stories insightful, but they are also very dated. The early ones were written in the 1940s, and the conventions of the time, including conversational language and sexist undertones, are blatantly exposed as ‘olde’.

It also seems that Asimov made two assumptions worth thinking about. First, that all robots and Artificial Intelligence would be constructed with the 3 laws at the core of all intelligence built into these machines. Many, many science fiction stories that followed also use these laws. However, creators of Large Language Models like Chat GPT are struggling to figure out what guard rails to put on their AI to prevent it from acting maliciously when meeting sometimes unscrupulous human requests.

Secondly, a few of the stories include a robopsychologist. That’s right, a person (always female in these stories) who is an expert in the psychology of robots, whose sole purpose is to get inside the minds of robots.

Asimov was openly concerned with AI, specifically robots, acting badly, endangering humans, and even following our instructions too literally, with undesirable consequences. He thought his 3 laws were a good start, and perhaps they are, but they are just a start. And with new AIs coming out with more computing power, more training, and fewer restrictions, I think Asimov’s concerns may prove prophetic.

The guard rails are off and there is no telling what unexpected consequences and results we will see in the coming months and years.

The task of education today

This quote was shared with us in our Principals’ meeting yesterday:

“This is the task of education today: to confront the almost unimaginable design challenge of building an education system that provides for the re-creation of civilization during a world system transition. This challenge brings us face to face with the importance of education for humanity and the basic questions that structure education as a human endeavor.” ~Zachary Stein

One of our Assistant Superintendents added, “We are trying to build a utopian society in triage conditions.”

There is no doubt that it is much harder to be an educator today than it was when I started my career over 25 years ago. And as we navigate ‘building an education system that provides for the re-creation of civilization during a world system transition,’ we are bound to struggle a bit. How do we mark assignments that are written or co-written by AI? What skills are going to be needed for the top jobs of 2030? How different will that be for 2040?

Ultimately we want to support the education of students who will become kind, contributing citizens. But how do we do this in a world where truth seems to depend on the sources you get it from, and politicians, religions, and corporations are all pushing conflicting information and agendas? I think this goes beyond just working on competencies like critical thinking. It requires mental gymnastics that most adults struggle with.

Meanwhile, the majority of school schedules still put students into blocks of time based on the subjects they are learning. “Let’s think critically, and do challenging problems, but only in this one narrow field of study.” We aren’t meeting the design challenges we face today if that’s what we continue to do.

More than ever, students – future citizens – need to understand complex issues from multiple perspectives. They need to understand nuance, and navigate when to defend an idea, when to compromise, and when to avoid engagement altogether. They need to be prepared to say “I don’t know,” and then do the hard work of finding out, when information is incomplete or even suspect. They need to be prepared to say, “I was wrong,” and “I am sorry,” and also be prepared to stick to their convictions and defend ideas that aren’t always the accepted norm.

That’s not the future I prepared students for 25 years ago. It is indeed an ‘almost unimaginable design challenge’, and as we navigate new challenges we have to recognize that mistakes will be made… but not changing is far scarier than trying.

The enemy of knowledge

“The greatest enemy of knowledge is not ignorance, it is the illusion of knowledge.” ~ Stephen Hawking

The illusion of knowledge is more ignorant than ignorance itself. This idea is more relevant today than at any time in history. Some examples:

1. Every religion starts with the premise that it shares true knowledge and all the other religions share illusions. So every devout religious person loves their own illusions, or at the very least believes anyone of a different faith lives in an illusion of ignorance.

2. Anyone who believes in a flat earth, or thinks no one ever landed on the moon, lives in an illusion of knowledge. They perceive themselves as more knowledgeable than scientists, experts, and even general employees in the flight and space industry.

3. AI is already generating incredibly persuasive deep fakes, and while we used to need a discerning eye to catch a lie, soon we will need to be more discerning to catch the truth. The illusion of knowledge will be more rampant than actual, factual knowledge.

We are moving from an era of knowledge seekers to an era of illusions and ignorance.

The truth is out there… it’s just a lot harder to find, and even harder to defend.

4-day work weeks

An interesting article by Ryan Hogg, ‘Employees are so sick of the five-day workweek that most would take a pay cut to make a four-day week happen’, states: “In the battle for a four-day workweek employees seem ready to put their money where their mouth is—they’ll take a pay cut if it means having an extra day of free time.”

At a time when inflation and the cost of living are extremely high, people would rather sacrifice money for time. This isn’t about a lack of ambition or drive, it’s about wanting balance. It’s about prioritizing wellbeing over profit.

The article continued: “Last year, the U.K. piloted the world’s biggest-ever four-day week trial, made up of more than 60 companies and nearly 3,000 employees. Most businesses maintained or improved their productivity, while the trial also revealed that quit rates among staff plummeted.”

Of the businesses involved in the trial, the majority chose to continue with the scheme.

It can work.

I read another article, which I can’t find to source right now, mentioning how many big companies are struggling with high absenteeism, with employees taking more sick time than they ever have before. Employees are taking days off in far greater quantities than my generation and our parents’ generation ever did, and these absences are costing companies far more than expected. Apparently this isn’t just an issue of people being sicker, but rather of employees taking more time for ‘mental health days’, essentially just taking a break from the grind of a 5-day work week. A shorter week could help reduce this.

I think the 4-day week could work for schools too. Add one hour to each of the 4 remaining school days and you’ve got two-thirds of the missing school day covered. Add 30 more daily minutes of collaborative/prep time and teachers would be working the same hours, while embedding some needed prep time into their schedules. Same hours of work, and close to the same number of class hours, so there isn’t even a reason to reduce pay.
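Here is a quick check of that schedule arithmetic, assuming the roughly six-hour school day implied by the ‘two-thirds covered’ claim (illustrative numbers, not an actual timetable):

```python
# Back-of-the-envelope check of the 4-day schedule arithmetic.
# Assumes a 6-hour school day, the figure implied by the claim above.
school_day_hours = 6
days_remaining = 4

extra_class_hours = 1.0 * days_remaining     # one added hour per day = 4 hours
print(extra_class_hours / school_day_hours)  # 0.666... -> 2/3 of the lost day

extra_prep_hours = 0.5 * days_remaining      # 30 added minutes per day = 2 hours
total_recovered = extra_class_hours + extra_prep_hours
print(total_recovered)  # 6.0 -> a full day's hours, so total work time is unchanged
```

So teachers would put in the same total hours, recover about two-thirds of the lost class time as instruction, and bank the remaining two hours as embedded collaborative/prep time.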

Doing this, students and teachers would have 4-day weeks and 3-day weekends. I wonder what that would do to student absenteeism? I wonder how well students would perform? This year our senior PE classes start an hour early, and we haven’t noticed an issue with students struggling through a longer school day.

It could be piloted in a high school, but probably not in the younger grades, because it would be challenging to arrange child care/supervision on the weekday that students don’t have school. I’d be happy to volunteer my school to try it out… I wonder what teachers, students, and parents would think of this?

It’s about the nuances

Today I ran into a teacher who was a favourite of my two daughters. He brought up a current geopolitical issue that I won’t discuss here, because there is too much nuance and I’m not prepared to write a dissertation of my thoughts… and anything less will only cause me grief. In fact, even a dissertation would cause me grief, because I’d be bound to garner disagreement and even anger. Why? Because no matter what position I hold, no matter how nuanced or not, it will upset people.

We’ve reached an impasse in public conversation when nuance is not part of the conversation. Everything is black & white, and any shade of grey is ‘othered’ to the opposing view. This is unhealthy. Very unhealthy.

It took my conversation today, where we agreed yet were equally reserved, to make me realize that a more public conversation can’t happen for me. I’m not knowledgeable enough. I haven’t done the hard work to have a strong and well-defended view of a sensitive issue. I’d ask questions that could and would piss off people on either side of the issue.

I’ve said before,

“We want to live, thrive, and love in a pluralistic society. We just need to recognize that in such a society we must be tolerant and accepting of opposing views, unaccepting of hateful and hurtful acts, and smart enough to understand the difference.”

I don’t believe that distinction is being made right now. I don’t see an openness to nuance. I don’t see a way forward where we are moving in the right direction. An upcoming election in the US, coupled with AI-generated fake news and polarized positions on the left and right, is going to lead to a shitstorm. It’s going to get ugly, it might get violent, and it will not get better until it gets worse.

We need to find a way to bring back nuanced debate and conversation… where different opinions are met with interest not scorn, with acceptance not ridicule. Discourse can be had without anger, and nuanced opinions will lead to solutions where now we only find conflict.

Different, not easier

Yesterday I saw this question asked by Dean Shareski on LinkedIn,

“I talk to educational leaders every day and for the most part, they are willing and in many cases excited to embrace the potential of Generative AI. When you consider its role in education, what are the specific elements that excite you and what are the aspects that give you pause?”

I commented:

“What excites me is how we can collaborate with AI to generate and iterate ‘with’ AI in ways that would never have been possible before. What gives me pause is when tools are used to make work easier and the level of challenge becomes low. Different, challenging work is where we need to head, not just easier work, or work avoidance by using AI… so the work itself needs to be rethought, rather than just replaced with AI.”