Tag Archives: Artificial Intelligence

Keeping the friction

I’ve been a proponent of integrating technology into schools and classrooms for a couple of decades. And in many ways I’m excited about AI and what it has to offer in the field of education.

But I have one major concern above all others: Making learning easier is not the goal.

Decreasing the challenge doesn’t foster meaningful learning. Reducing the required effort doesn’t make the learning more memorable. Encouraging deeper thinking is the goal, not doing the thinking for you.

We need to make sure that AI is not taking the friction out of learning but rather maintaining or increasing the friction in the best places to promote meaningful learning. Friction is required.

AI image woes

A few years ago I switched from looking for royalty-free images to add to my Daily-Ink blog posts to using AI. The main reason is that I found myself spending almost as much time searching for images as I was spending writing the blog post itself. This was not efficient.

While I’ve had a few challenges along the way, for the most part I got image creation consistently down to around 3-5 minutes. This is great, and so much less stressful… except when it isn’t. The past few days have been a struggle. I couldn’t get the AI to give me what I wanted. Even when I asked for clarification, and a description of my image was recited back to me in incredible detail, the end product did not match what I hoped for.

On three of the last four days I was at the gym, walking on the treadmill, still trying to get my post published, delayed by failed attempts to get the image I hoped for. In all three cases I settled for something close. Actually, two out of three; for the other one I used a sample Wikipedia image the AI found for me. I can’t believe how hard it is to get AI to create an image of a clock showing the time 5 o’clock!

I was tempted to say that AI image creation is getting dumber, but I think what’s happening is that I’ve just started to expect a lot more. The clock is a bad example, but in many cases I’m expecting a level of sophistication I haven’t asked for previously. I want specific perspectives. I’m asking for complex scenarios, and I’m challenging the AI to create ‘unnatural’ situations, like a teacher in a circle of desks with the students all sitting looking out and away from the teacher.

That sounds like an easy request, but in millions of reference images of teachers, the AI has been trained to have students face the teacher. So despite continued attempts, with the AI actually describing in detail what I’m asking for before giving it to me, I still got an image of the students facing inward, towards the teacher. Again, and again, and again.

So I’m going to dumb it down. I’m going to ask for less complex images. I’m going to settle for an image that might not be perfect, and most importantly I’m going to spend less time on images and more time writing.

—

Post script: My one and only image request for this post – ‘Create a stylized, abstract watercolour image that looks like an AI image gone wrong, with an uncanny valley styled mishmash of items.’

Abnormally Normal

If I wanted to make light of the sense I’m feeling, I’d say that I feel ‘a disturbance in the force’. Or I’d reference that meme of a dog in a house fire, sitting at a table having coffee, as if the world is fine.

But the new normal is not normal. The dichotomy of politics, the hatred between religious extremists, the focus on vengeance and public shaming in social media, police violence against citizens, the inability to share middle-ground opinions without fear of being ‘othered’ by both sides of the political dichotomy… it’s like we’ve slipped into a dystopian movie, and we are left wondering if this is real life.

It is.

We are bearing witness. We are seeing the collapse of modern society. Sovereignty used to matter, it doesn’t matter anymore. Neighbourly love used to matter, it doesn’t anymore. The rule of law used to matter, it doesn’t anymore. Civility, etiquette, respect, and even kindness used to matter, they don’t matter anymore.

And yet in our day-to-day lives not much is different. We can rant because we don’t like what we see, or we can move forward blissfully blind to the world beyond our own existence. Our tolerances vary, but the shenanigans that alter what’s normal in society seem to slip ever further into the abnormal without us being able to influence them in any way.

Pick a decade after WWII and tell me how it was more abnormal than what we are seeing today. I can’t.

When I say, “We are seeing the collapse of modern society,” I am not being hyperbolic. I’ve only mentioned social/political abnormalities, without mentioning climate change, microplastics, artificial intelligence, or even cost of living and the decline of the middle class. Factor all these things in and the new normal is anything but normal. Except that’s exactly the point… somehow this is what normal is.

AI – Alternate Identities

I just watched a video clip of Sir Ken Robinson promoting a product to reduce your blood sugar. I’ve already shared ‘An AI Advertisement’ featuring a fictitious expert and broken down the flaws in that ad. But now we have an actual (now deceased) celebrity figure doing the promotional plug. It looks and sounds like him, but he never said anything he says in this video advertisement. I know this, but how many people will recognize him and pay a little more attention to this advertising scam because it is delivered by someone famous?

This is just the beginning. We are moving into a ‘post truth era’, where nothing is inherently believable. A decade from now we’ll have multiple alternate identities to choose from… Was the real Al Gore the one warning us about global warming, or was the real one promoting fracking, or alternative medicine, or socialist communism over capitalism? Every video will seem equally real, every source seemingly legitimate. One real, all the others alternative histories indistinguishable from reality.

Will it only be famous people who fall victim to these alternate identities, or are we all going to be replicated? When I’m in my late 80s, will I be watching a video of 50-year-old me, oblivious to whether the recording actually happened or was invented with a perfect imitation of myself?

The implications for scams are immeasurable. Live video of a seemingly real son or daughter extracting banking data from a senior parent. A meticulously created alternative you moving all assets over to someone else. The scams are limited only by imagination, not by technology or capability.

Alternate identities indistinguishable from reality, all playing out as if real. Sir Ken Robinson plugging health supplements is only the beginning… We are in for some reality-warping performances from AI alternatives to us, and to the people we think we trust… This is only just the beginning!

Post Truth Era

Never mind the ridiculous videos of Mr. Rogers chatting with Tupac Shakur or Bigfoot vlogging; those AI videos seem real enough while fully intending us to know they are AI. What we are seeing now is an indistinguishable blending of real and fake, with videos that are completely altering our ability to know what is real and what isn’t.

Voice mimicking was already almost perfect. I saw a video post today from a man whose dad called him to ask what their shared bank account password was. One problem: his dad died last year; he just hadn’t taken his name off the account yet. He said it sounded so real that had his father been alive, he probably would have shared the password, thinking his dad had forgotten.

Now AI videos are just as good as AI audio, and the combination of the two truly is steering us into a post-truth era. People are sharing AI videos completely unaware that they are fake. Even news stations are getting it wrong.

Soon websites will become bastions of truth. Want to know what someone actually said? Go to ‘their name’ .com or .org and see the actual video shared there. Anything else will be questionable, and wherever else the video is shared must be watched with skepticism. Subtle or overt, important changes to a message will occur as a result of someone, ultimately anyone, taking the original video and making an AI version that delivers their message instead of the intended one.

Following specific domains, and maybe a handful of legitimate news channels, is the only suggestion I have. Legislation won’t keep up, and the fakes are just getting better. Essentially, find reliable sites and distrust everything else. Intuition and common sense won’t be enough.

Digital dog sitter

I went to a store yesterday after work. It was a cold, rainy evening and already dark at around 5:30pm. I picked up the couple items I came for and headed back to my car. Just as I was getting in, I heard a dog barking at me from inside the car next to me. When I looked over, I saw the dog in the back seat and a note on the electric car’s digital display that read:

My driver will be back soon

Then in smaller font:

Don’t worry! The heater is on and it’s 20°C

With the 20°C in very large font, which could easily be read from a distance.

Considering the taboo normally associated with leaving a pet unattended in a car, I thought this was very clever. Highlighting the temperature of the car removes any concern that the dog’s life is in danger from overheating, and noting that the driver will be back shortly eases any anxiety for dog lovers who might worry about the dog’s wellbeing.

This also made me think of kids we see today being babysat by technology. The parent in the grocery store handing over their phone to the kid sitting in the front of the grocery cart. The kid in the back seat of a car watching a movie. The kid at home on the iPad while dinner is being made.

What will this look like when we have robots ‘adding value’ to these experiences? Will dog owners send their pets for walks while they step into a store, with the robot babysitter cleaning up any poop the dog leaves along the way? Will kids be playing in the back yard with their robot babysitter rather than having their eyes glued to a screen?

And is this an improvement to what we have now?

I think for dogs it will be, but I wonder about kids. What kinds of bonds will kids build with their robotic babysitters? Will we be able to tell when a teenager has been raised more by robots than by humans? What amount of robot time will be considered too much? Will a parent who lets a robot babysit their kid for hours and hours be judged like a dog owner who left his dog in a hot car?

When we think of robots that we will soon have in our homes, we think of the conveniences they will provide. What happens when one of those conveniences is helping to raise our kids? What impact will it have? There’s a difference between dog sitting and babysitting that makes this question very interesting. And while I find the digital note in a car telling everyone the dog is comfortable and will be attended to soon quite clever, I’m not sure how clever it will be to have robots attending to our kids more than their parents do.

Infinite within the finite

Civilization is built on infinite growth within a finite system. Until our values move away from a focus on consumerism and wealth accumulation, we are never going to get to either environmental/planetary or human well-being. The energy demands are just too great and simultaneously too destructive.

Will AI solve or magnify these problems? I fear it will indeed magnify them. It’s not just the energy demands of these artificial intelligence machines that are the issue, it’s the promise of more goods at a cheaper price. It’s the promise of every gadget you desire, affordably made in dark factories by intelligent robots that don’t need the lights on. It’s the promise of a luxury electric car for $15,000-20,000; a $5,000 robot that does all your chores at home; a 3D printer that can manufacture high-quality, factory-grade products in the comfort of your own home. All that’s needed are the resources to build these things… unlimited resources taken from a planet with limited resources.

That’s right, to make this amazing, almost limitless future possible, we just need infinite resources from a finite planet. Meanwhile, wealth accumulation is being concentrated, the middle class is shrinking, and we are madly extracting resources from the earth, with little concern over the environmental impact.

It’s. Just. Not. Sustainable.

An AI advertisement

I scrolled past this ad a few times before paying any attention to it. But then it gave off an uncanny valley feeling that made me look a little closer. I think it was the very staged first question that bothered me most, and yesterday I finally took the time to watch it through a critical lens. It’s an ad for a Tai Chi app, but I cropped the video to hide the brand because I don’t want to amplify it, I want to critique it.

Here is the ad:

And here is a list of telltale signs that suggest it is AI.

1. Look at the opening image. The woman is talking at a 90° angle to the stage, and there is no one at the podium below her.

2. The ‘expert’ is a perfectly chiseled man who is never named. No recognition of him as an expert in the field… because he’s fictitious.

3. Obviously fake audience members. The first image shows a blurred bearded man who doesn’t seem real to me. The second image has a man wearing a partial microphone like the expert.

4. The painfully fake script.

“Isn’t a gym better?”

“Gym doesn’t work after 40.”

This isn’t necessarily evidence of AI; it could just be bad writing, but it comes off feeling very wrong and unnatural. It’s like the text was written to make the expert sound like English is his second language, but his voice doesn’t carry that same suggestion.

5. Comments are turned off. There is no benefit in having viewers outing the ad as fake. It’s better to allow the ad to fool more people without being called out.

The reality is that I could pick this ad out as fake, but that’s only because it was done poorly. We are going to see a lot more ads done this way and they are going to be good enough to fool us completely. It’s just a matter of time, and that time is approaching very quickly.

AI Infiltration

Do you want an AI to be able to read and reply to your email? Wouldn’t that be great? Yes, but I’m not doing it!

My personal email is a gateway to everything I do on the web, including my digital banking. It also includes access to EVERY web tool that I use. I can’t count the number of times I’ve used ‘Forgot my password’ on a website or an app and retrieved that information in my email. So my email gives me, and anyone or anything that has my password, a lot of control over the online tools I regularly use.

As an aside, this is why two-factor authentication is so important: it protects you from someone gaining full control of everything you do online simply by having access to your email. Yet, to me, this protection isn’t enough to justify giving an AI agent access to my email. To me, that allows too much access to my whole digital life.

It’s not the reading of my email I’m concerned about. Frankly, I’d love to have an AI respond to basic email communication on my behalf, or add items to my calendar for me. That would be great. But to do that I’m essentially saying to an AI company, “I’m an open book, go ahead and read me in order to train your AI model.” And I’m also allowing an agent full access to my digital life.

What happens when a ‘helpful’ agent decides that in order to help me it needs access to my online banking to make a purchase? Or worse yet, what happens when an AI is injected with a virus designed to collect my passwords, update those passwords, and then delete the ‘Forgot password’ emails so I don’t even know they were changed?

We’ve already seen countless examples of people tricking an AI into giving access to programming information that should have been kept private, or convincing an AI to respond to inappropriate questions it was trained not to answer. Knowing this is not terribly hard to do, what makes you believe an AI agent with full access to your email, your life online, can’t be convinced or exploited to share your information and access in a way that completely compromises you and your personal information?

I’m not convinced the risk is worth the reward. As I use AI more, I’m using it as a tool to help me understand and connect to the world in better, more efficient ways. But I’m not ready to let AI into my email and into my digital life. I wonder when the horror stories of full identity theft are going to start. And I’m guessing those stories will begin with, “I gave an AI agent access to my email.”

Will AI undermine social media?

What if AI-created media completely changes our online habits? I’ve already noticed that I’m disappointed when I realize a video that caught my attention is not real… that it’s not (for example) a video of a house cat scaring a bear away from a child, but rather an AI-imagined scenario. Right now that’s about 5-10% of my feed, but what happens when that percentage is over 50%?

Am I going to pay as much attention to what I watch and read when I know more than half of my feed is artificially concocted to attract and hold that attention? Will the appeal be there?

I’m already gravitating to podcast conversations, and to smaller communities of people I actually know, as places to get new information. Will my social media stream look the same as it does today? Or will it shrink away from seeking new, but likely artificially created, information toward smaller communities that I know are real?

And how will this affect younger generations and their addiction to their phones? Maybe it will just redirect their attention to seeking real connections, but they’ll still do that digitally, not changing habits so much as where their attention goes. But maybe, just maybe, AI infiltration, or perhaps I should say infestation, of social media will see us all living a little further away from our screens.