Tag Archives: AI

Keeping the friction

I’ve been a proponent of integrating technology into schools and classrooms for a couple decades. And in many ways I’m excited about AI and what it has to offer in the field of education.

But I have one major concern above all others: Making learning easier is not the goal.

Decreasing the challenge doesn’t foster meaningful learning. Reducing the required effort doesn’t make the learning more memorable. Encouraging deeper thinking is the goal, not doing the thinking for you.

We need to make sure that AI is not taking the friction out of learning but rather maintaining or increasing the friction in the best places to promote meaningful learning. Friction is required.

AI image woes

A few years ago I switched from looking for royalty-free images to add to my Daily-Ink blog posts to using AI. The main reason for this is that I found myself spending almost as much time searching for images as I was writing the blog post itself. This was not efficient.

While I’ve had a few challenges along the way, for the most part I’ve got image creation down to a consistent 3-5 minutes. This is great, and so much less stressful… except for when it isn’t. The past few days have been a struggle. I couldn’t get the AI to give me what I wanted. Even when I asked for clarification, and a description of my image was recited back to me in incredible detail, the end product did not match what I hoped for.

Three of the last four days I was at the gym, walking on the treadmill, still trying to get my post published, delayed by failed attempts to get the image I hoped for. In two of those three cases I settled for something close; for the other one I used a sample Wikipedia image the AI found for me. I can’t believe how hard it is to get AI to create the image of a clock showing the time 5 o’clock!?!

I was tempted to say that AI image creation is getting dumber, but I think what’s happening is that I’ve just started to expect a lot more. The clock is a bad example, but in many cases I’m expecting a level of sophistication I haven’t asked for previously. I want specific perspectives. I’m asking for complex scenarios, and I’m challenging the AI to create ‘unnatural’ situations, like a teacher in a circle of desks with the students all sitting looking out and away from the teacher.

That sounds like an easy request, but in millions of reference images of teachers, the AI has been trained to have students face the teacher. So despite continued attempts, with the AI actually describing in detail what I’m asking for before giving it to me, I still got an image of the students facing inward, towards the teacher. Again, and again, and again.

So I’m going to dumb it down. I’m going to ask for less complex images. I’m going to settle for an image that might not be perfect, and most importantly I’m going to spend less time on images and more time writing.

_____

Postscript: My one and only image request for this post – ‘Create a stylized, abstract watercolour image that looks like an AI image gone wrong, with an uncanny valley styled mishmash of items.’

Thinking Requires Effort

I recently read a great article by Alec Couros, The Radical Act of Thinking. In it he said, “The challenge isn’t finding the tool anymore. The challenge is avoiding it. We’ve reached the point where AI is the path of least resistance for almost every task.”

And then he concluded with this:

To succeed, we need to fundamentally reframe “effort.” We have to stop viewing the struggle of thinking as an inefficiency to be solved, and start protecting it as the very thing that helps us grow.

Here are a few ways that I see teachers doing this at Inquiry Hub:

  1. Community video or podcast challenges. Part of the challenge might include creating the video in a specific genre, or a meta part of the presentation where students explicitly describe what they have learned.
  2. Personalized inquiry projects. This is offered through a course designed around the process of learning, not content. So it doesn’t matter if a student is learning to code, designing a website, publishing a book, learning a specific skill in art, composing a song, starting a business, or even learning to crochet… the inquiry is designed around students learning skills they want to learn.
  3. Solving problems in class. I’ve questioned the value of homework for over 15 years now. Watching our senior math teacher teach Math & Physics, I see him focusing on the why of questions. I see his students working in pairs and groups to solve problems together on whiteboards. I see students actively struggling and learning in class, where they have access to support, and the focus is on the struggle and understanding the problem.

Something else that we do is be careful not to add things to students’ loads unnecessarily. I can’t tell you how many times I’ve heard well-intentioned educators say, “You know what would be a good project for your students to do?” followed by a legitimately good idea. But we are not an alternate school, we are a regular school with an alternative approach. Our students still need to fulfill the entire regular curriculum on top of the inquiries they do for credit. As good as other ideas may be, they become make-work activities that not all students are interested in, and this just invites students to use AI or to feel like the work is busywork.

Will Richardson asks, “Every time you’re about to implement a new program or pedagogy or technology or initiative or building project or anything else, ask and answer this simple question: ‘In service of what?’”

When we add anything to our schedule, it’s to serve one of two purposes:

1. Integrate curriculum or make the curriculum more engaging. Our students go on to universities, colleges, and technical institutes, and they need the required courses to get there and do well. But the required curriculum doesn’t need to be taught in a linear, boring fashion. When a project is added in class, the intent is to meaningfully cover more curriculum in less time.

2. We add things in service of students. A recent example: For the last 10 years our PAC has fundraised to provide students with FoodSafe every 2nd year. So all our students learn life skills around preparing and serving food. This year our PAC is also providing our seniors with first aid training. The plan is that they will alternate years between FoodSafe and first aid so that every student who goes through Inquiry Hub will have these life skills when they leave the school. Carving out 8 hours of training time over 2 days involves our senior teachers reworking their schedule… in the service of giving our students a life skill.

I won’t pretend that everything we do is AI proof, and that there aren’t lessons and activities where students could avoid thinking using a tool that does the work for them. I also won’t pretend that every assignment and project is ‘in service’ of authentic learning for students. But I will say that we’ve worked hard to make the learning meaningful for students. We provide them with opportunities to work in our community towards common goals, and we provide them with opportunities to pursue projects meaningful to them, focusing on the process of learning… on the struggle, with a perspective that failure and struggle are a path to real learning, not a barrier.

I’ve said before,

We talk a lot about ‘learning through failure’ in education, but we don’t really mean failure. Because when a student takes lessons from something not working, then it’s a learning opportunity and not actually a failure.

This fits with what Alec said above,

To succeed, we need to fundamentally reframe “effort.” We have to stop viewing the struggle of thinking as an inefficiency to be solved, and start protecting it as the very thing that helps us grow.

The secret sauce is in providing the space and time for students to struggle out in the open, facing challenges or learning life skills that they will use. However, you don’t create these opportunities by continually adding things to a student’s plate. Adding more to their plates only invites them to find tools to do the work for them.

Thinking requires effort, and providing students with opportunities to demonstrate that effort in meaningful ways is, in my mind, the project of schools. Reducing busywork, and maximizing the problem-solving time, in a community of learners who find benefit from working together, is what schools should be in service of.

AI – Alternate Identities

I just watched a video clip of Sir Ken Robinson promoting a product to reduce your blood sugar. I’ve already shared ‘An AI Advertisement’ with a fictitious expert, and broken down the flaws with that ad. But now we have an actual (now deceased) celebrity figure doing the promotional plug. It looks and sounds like him, but he never said anything he says in this video advertisement. I know this, but how many people will recognize him and pay a little more attention to this advertising scam because it is delivered by someone famous?

This is just the beginning. We are moving into a ‘post truth era’, where nothing is inherently believable. A decade from now we’ll have multiple alternate identities to choose from… Was the real Al Gore the one warning us about global warming, or was the real one promoting fracking, or alternative medicine, or socialist communism over capitalism? Every video will seem equally real, every source seemingly legitimate. One real, all the others alternative histories indistinguishable from reality.

Will it only be famous people that fall victim to these alternate identities, or are we all going to be replicated? When I’m in my late 80s, will I be watching a video of 50-year-old me, oblivious to whether this recording actually happened or if it was invented with a perfect imitation of myself?

The implications for scams are immeasurable. Live video of a seemingly real son or daughter extracting banking data from a senior parent. A meticulously created alternative you moving all assets over to someone else. The scams are limited only by imagination, not by technology or capability.

Alternate identities indistinguishable from reality, all playing out as if real. Sir Ken Robinson plugging health supplements is only just the beginning… We are in for some reality-warping performances from AI alternatives to us, and to the people we think we trust… This is only just the beginning!

Post Truth Era

Never mind the ridiculous videos of Mr. Rogers chatting with Tupac Shakur or Bigfoot vlogging, these AI videos seem real enough while fully intending us to know they are AI. What we are seeing now is an indistinguishable bending of real and fake with videos that are completely altering our ability to know what is real and what isn’t.

Voice mimicking was already almost perfect. I saw a video post today from a man whose dad called him to ask what their shared bank account password was. One problem: his dad died last year; he just hadn’t taken his name off the account yet. He said it sounded so real that had his father been alive, he probably would have shared the password, thinking his dad forgot.

Now AI videos are just as good as AI audio, and the combination of the two truly is steering us into a post-truth era. People are sharing AI videos completely unaware that they are fake. Even news stations are getting it wrong.

Soon websites will become bastions of truth. Want to know what someone actually said? Go to ‘their name’ .com or .org and see the actual video shared there. Anything else will be questionable, and wherever else the video is shared must be watched with skepticism. Subtle or overt, very important changes to a message will occur when someone, ultimately anyone, takes the original video and makes an AI version that delivers their message instead of the intended one.

Following specific domains, and maybe a handful of legitimate news channels, is the only suggestion I have. Legislation won’t keep up, and the fakes are just getting better. Essentially, find reliable sites and distrust everything else. Intuition and common sense won’t be enough.

Digital dog sitter

I went to a store yesterday after work. It was a cold, rainy evening and already dark at around 5:30pm. I picked up the couple items I came for and headed back to my car. Just as I was getting in, I heard a dog barking at me from inside the car next to me. When I looked over, I saw the dog in the back seat and a note on the electric car’s digital display that read:

My driver will be back soon

Then in smaller font:

Don’t worry! The heater is on and it’s 20°C

With the 20°C in very large font, which could easily be read from a distance.

Considering the taboo normally associated with leaving a pet unattended in a car, I thought this was very clever. Highlighting the temperature of the car removed any concern that the dog’s life is in danger from overheating, and noting the driver will be back shortly eases any anxiety for dog lovers who might worry for the dog’s wellbeing.

This also made me think of kids we see today being babysat by technology. The parent in the grocery store handing over their phone to the kid sitting in the front of the grocery cart. The kid in the back seat of a car watching a movie. The kid at home on the iPad while dinner is being made.

What will this look like when we have robots ‘adding value’ to these experiences? Will dog owners send their pets for walks while they step into a store, with the robot babysitter cleaning up the poop the dog might leave on the walk? Will kids be playing in the back yard with their robot babysitter rather than having their eyes glued to a screen?

And is this an improvement to what we have now?

I think for dogs it will be, but I wonder about kids. What kinds of bonds will kids build with their robotic babysitters? Will we be able to tell when a teenager has been raised more by robots than by humans? What amount of robot time will be considered too much? Will a parent who lets a robot babysit their kid for hours and hours be judged like a dog owner who left his dog in a hot car?

When we think of robots that we will soon have in our homes, we think of the conveniences they will provide. What happens when one of those conveniences is helping to raise our kids? What impact will it have? There’s a difference between dog sitting and babysitting that makes this question very interesting. And while I find the digital note in a car telling everyone the dog is comfortable and will be attended to soon quite clever, I’m not sure how clever it will be to have robots attending to our kids more than their parents do.

Infinite within the finite

Civilization is built on infinite growth within a finite system. Until our values move away from a focus on consumerism and wealth accumulation, we are never going to achieve either planetary or human well-being. The energy demands are just too great and simultaneously too destructive.

Will AI solve or magnify these problems? I fear it will indeed magnify them. It’s not just the energy demands of these Artificial Intelligence machines that’s the issue, it’s the promise of more goods at a cheaper price. It’s the promise of every gadget you desire, affordably made in automated dark factories by intelligent robots that don’t need the lights on. It’s the promise of a luxury electric car for $15,000-20,000; a $5,000 robot that does all your chores at home; a 3D printer that can manufacture high-quality, factory-grade products in the comfort of your own home. All that’s needed are the resources to build these things… unlimited resources taken from a planet with limited resources.

That’s right, to make this amazing, almost limitless future possible, we just need infinite resources from a finite planet. Meanwhile, wealth accumulation is being concentrated, the middle class is shrinking, and we are madly extracting resources from the earth, with little concern over the environmental impact.

It’s. Just. Not. Sustainable.

Lie with confidence

Be controversial but wrong, say it with confidence, and watch the likes and re-shares come your way. I had an Instagram video shared with me. The ‘influencer’ who posted it has over 600,000 followers and she claims to be an autoimmune specialist.

“You’ve got to see this,” she says, after explaining that a man tested his blood before and after EMF (electromagnetic field) exposure. Then the clip changes to a guy looking at an on-screen image of what he claims to be red blood cells in “pretty perfect blood… I mean, these cells are absolutely amazing cells… it may even be hard actually to mess them up.”

Then they do a ‘phone test’ where the test subject sits between two cell phones, with a third one between his legs on the chair, to test how “these EMFs are affecting his ‘perfect blood.’” Admitting that this is “a bit of a risky game,” he then pricks his finger to draw a drop of blood after this supposed EMF exposure. They put a drop of the blood on a microscope slide and we switch views to see the screen again.

The contrast from the original image is comical. Worse yet, the person is scrolling on the screen to a point that would go far beyond the edge of a drop of blood on a microscope slide. The difference in the slides is described as “a lot of inflammation. It’s all over.” After a very non-medical, exaggerated analysis, it concludes with, “None of this is good.”

When the video got to me it had 336,000 views and over 9,500 likes. And again, it was sent to me by someone who was concerned by this and wanted to share it.

We live in an era where confidence trumps competence. Be controversial and convincing and you are going to get not just attention, but believers. If I were to make a video debunking this, it wouldn’t get traction. Even scientists with large followings would likely not get 336,000 views on a debunking video.

So the incentive is huge. This influencer probably gained thousands of followers from this video. She made hundreds if not thousands of dollars from it going viral. And so it pays to put intentionally fake pseudo-scientific crap on the web. Just pick a controversial topic, lie with confidence, and watch the profits flow in. No backlash, no consequences, just greed, and incentives to continue to lie.

My fear? I see this getting worse, not better. AI will only serve to exacerbate the problem with more convincing lies that cater to wider audiences. It feels like, as a society, we are actually getting dumber, and social media is incentivized to make the problem exponentially worse.

Where else have we seen lying with confidence working? Everywhere from biased news outlets, to product advertising, to politics. Whether selling ideas, products, or partisanship, lying with confidence seems to gain far more traction than telling the truth.

_____

Update: After posting this (and probably thanks to re-watching the above video a few times to get the quotes right), I opened Instagram and the first post had dramatic music and warned against wearing polyester on planes.

I took a screenshot and didn’t watch the rest of the video. People actually fall for this crap? 🤦‍♂️

An AI advertisement

I scrolled past this ad a few times before paying any attention to it. But then it gave off an uncanny valley feeling that made me look a little closer. I think it was the very staged first question that bothered me most, and yesterday I finally took the time to watch it through a critical lens. It’s an ad for a Tai Chi app, but I cropped the video to hide the brand because I don’t want to amplify it, I want to critique it.

Here is the ad:

And here is a list of telltale signs that suggest it is AI.

1. Look at the opening image. The woman is talking at a 90° angle to the stage, and there is no one at the podium below her.

2. The ‘expert’ is a perfectly chiseled man who is never named. No recognition of him as an expert in the field… because he’s fictitious.

3. Obviously fake audience members. The first image shows a blurred bearded man who doesn’t seem real to me. The second image has a man wearing a partial microphone like the expert.

4. The painfully fake script.

“Isn’t a gym better?”

“Gym doesn’t work after 40.”

This isn’t necessarily evidence of AI, it could just be bad writing, but it comes off feeling very wrong and unnatural. It’s like there was an intent in the text to make the expert sound like English is his second language, but his voice doesn’t carry that same suggestion.

5. Comments are turned off. There is no benefit in having viewers outing the ad as fake. It’s better to allow the ad to fool more people without being called out.

The reality is that I could pick this ad out as fake, but that’s only because it was done poorly. We are going to see a lot more ads done this way and they are going to be good enough to fool us completely. It’s just a matter of time, and that time is approaching very quickly.

Do not amplify if you can not verify

This is a simple, but potent message. Before hitting the ‘Like’ or ‘Share’ button, before telling someone about the interesting fact you heard online, verify it in some way. Is it true?

Do some Ground Truthing. Can you verify the claim? Is it real or AI? Is it worthy of your amplification, or are you just contributing to the spread of something unworthy of being shared?

How much better would the internet be if everyone paused and verified what they were sharing before amplifying misinformation, disinformation, fake news, and AI deep fakes?

It’s a worthy and effective mantra:

Do not amplify if you can not verify.