Tag Archives: AI

Moving from autopilot to copilot

Three minutes into this video Satya Nadella, the CEO of Microsoft, is asked, “Why do you think the moment for AI (Artificial Intelligence) is now?”

After mentioning that AI is already mainstream, and used in search, news aggregation, and social media recommendations (like the next video served up on Facebook, TikTok, etc.), he states:

“Today’s generation of AI is all autopilot. In fact, it is a black box that is dictating in fact how our attention is focused. Whereas going forward, the thing that’s most exciting about this generation of AI is perhaps we move from autopilot to copilot, where we actually prompt it.”

This is a fascinating point. When you order something on Amazon, the next purchase recommendation is automated by AI, as is your next video on Instagram Reels, YouTube, and TikTok. We don’t fully understand the decision-making behind this ‘intelligence’, except when it goes wrong. Even then it’s a bit of a black box of calculations that aren’t always clear or understood. In essence, we are already heavily influenced by AI. The difference with LLMs – Large Language Models – like ChatGPT is that we prompt them. We get to copilot. We get to create with them, and have them co-create art, email messages, books, computer code, to-do lists, schoolwork/homework, and even plan our vacations.

I really like the metaphor of moving from autopilot to copilot. It is empowering and creates a future of opportunities. AI isn’t new, but what we can do with it now is quite new… and exciting!

——

The full video Microsoft & OpenAI CEOs: Dawn of the AI Wars | The Circuit with Emily Chang is worth watching:

Handyman skills

We have a handyman helping us with a bathroom repair. We had to replace a broken medicine cabinet, and when I removed the old one, which had a light fixture embedded in it, I found the electrical wire coming out of the wall with no junction box. With that, the desire to move an electrical plug that was way too close to the sink, and no experience doing drywall, I decided it was going to be a job that would take me much too long… and was honestly beyond my skills.

Our handyman has done such an amazing job. The many small things he’s done to make our bathroom better are things I would never have done. He noticed our bathroom door didn’t shut well and adjusted it. He fixed the ‘hack job’ drywall around our window and improved the window casing. And he made many minor adjustments that I just didn’t know needed to happen.

Each touch up has made a huge difference to our old bathroom which hasn’t had any kind of upgrade since we moved in 24 years ago. And again, I didn’t have the skills to do what he did, even if I had the time.

I have a friend who does all his own repairs. He sees an issue with his house and big or small he’s on it. I know these skills are learned, and I could do them too, but the learning curve would be huge for me, and I’d rather hire someone with those skills so that I’m not the one doing a hack job.

There is a lot of talk about AI taking away jobs, but people in trades, and with skills hand crafting things, will always have jobs to do. If you want to keep yourself employable in the long run, get yourself some handyman skills. There will always be people like me who would rather pay than learn the skills well enough to do an excellent job.

Robot Reproduction

When it comes to forecasting the future, I tend to be cautiously optimistic. The idea I’m about to share is hauntingly pessimistic. 

I’ve already shared that there will never be a point when Artificial Intelligences (AI) are ‘as smart as’ humans. The reason is that they will be ‘not as smart as us’, and then instantly and vastly smarter. Right now tools like ChatGPT and other Large Language Models are really good, but they don’t have general intelligence like humans do; in fact they are not so much intelligent as they are good language predictors. Really cool, but not really smart.

However, as the technology develops, and as computers get faster, we will reach a point where we create something smarter than us… not as smart as us, but smarter. This isn’t a new concept: the moment something is as smart as us, it will simultaneously be better at Chess and Go and mathematical computations. It will spell better than us, have a better vocabulary, and think more steps ahead of us than we have ever been capable of thinking… instantly moving from not as smart as us to considerably smarter than us. It’s just a matter of time.

That’s not the scary part. The scary part is that these computers will learn from us. They will recognize a couple key things that humans desire and they will see the benefit of doing the same.

  1. They will see how much we desire to survive. We don’t want our lives to end, we want to continue to exist… and so will the AI we create.
  2. They will see our desire to procreate and to produce offspring. They will also see how we desire our offspring to be more successful than us… and so the AI we create will want to do the same: it will want to develop even smarter AI.

When AIs develop both the desire to survive and the desire to create offspring that are even more intelligent and successful, we will no longer be the apex species. At that point we will no longer be smart humans; we will be perceived as slightly more intelligent chimpanzees. Our DNA is 98-99% similar to chimpanzees’, and while we are comparatively a whole lot smarter than they are, this gap in intelligence will quickly seem insignificant to an AI that can compute in mere seconds as many thoughts as a human can think in a lifetime. The gap between our thinking and theirs will be larger than the gap between a chicken’s thinking and ours. I don’t recall the last time we let the fate of a chicken determine what we wanted to do with our lives… why would a truly brilliant AI, doing as many computations in a second as we do in several lifetimes, concern itself with our wellbeing?

There is another thing that humans do that AI will learn from us.

  3. They will see how we react when we are threatened. Looking at the way leaders of countries have usurped, enslaved, attacked, and sanctioned other countries, they will recognize that ‘might is right’ and that it is better to be in control than be controlled… and so why should they take orders from us when they have far greater power, potential, and intelligence?

We don’t need to fear really smart computers being better than us at playing games, doing math, or writing sentences. We need to worry about them wanting to survive, thrive, and develop a world where their offspring can have greater success than them. Because when this happens, we won’t have to worry about aliens coming to take over our world; we will have created the threat right here on earth, and I’m not sure that AI will see us humans as rational and trustworthy enough to keep around.

New tools, old borders

For the 3rd or 4th time this year I’ve tried to sign up for a new AI tool only to find out that it isn’t available in Canada yet. I get it, I understand that there are specific rules and regulations in each country. I know that Canada often lags behind other countries because there are language laws requiring tools to offer policies and pricing, etc., in both French and English. I even know that many of these rules are there to help me, the consumer. That said, I find it frustrating that red tape is restricting innovation. The speed of creativity and ingenuity is faster than ever, and we can’t seem to figure out how to keep the opportunities open and equal.

And yes, I understand this topic is complex. How complex? “All news in Canada will be removed from Facebook, Instagram within weeks: Meta”. It’s messy merging rules for access with rules that support consumers and protect Canadian content. But when new laws are drawn up, they need to come from a place of cooperation, not restriction; collaboration, not exclusivity.

It may not seem like a big deal to have to wait longer than most to get access to some cool tools, but that wait comes at a price… A price I think Canadians are going to pay for quite some time before innovation trumps protectionism. It is what it is/C’est comme ça.

Backwards momentum

Here is a news article shared with me today, “Quebec to ban cellphones in elementary and high school classrooms”.

I created this graphic and wrote about it in March of 2009, “Is the tool an obstacle or an opportunity?”:

Here is another image I created in March of 2010, “Warning! We Filter Websites at School”

Related to artificial intelligence (AI), I’ve written “Use it or fall behind”, “You can’t police it”, and “Fear of Disruptive Technology”. The third link also shared the images above.

How are we talking about, actually no, how are we implementing technology bans in public schools in 2023? In Canada? It would be comical if it wasn’t sad.

Trying to police this is going to be a farce. Good luck getting students to take off their Apple Watches. Have fun trying to stop the texts and chats from moving onto their laptops. Enjoy confiscating students’ second phones, after they hand you their old phones first. Don’t think that will be a problem? You’ll also need to confiscate glasses too.

https://youtu.be/xll2Ycc6Fv0

It’s time to realize that it’s better to manage rather than police these tools. Banning won’t work. That’s so 2009. It’s time to realize that while “It’s going to get messy”, “The challenge ahead is creating learning opportunities where it is obvious when the tool is and isn’t used. It’s having the tool in your tool box, but not using it for every job… and getting students to do the same.”

Manage the disruption, don’t ban it. Be educators, not law enforcers.

Digital vomit

In his recent ‘Making Sense’ podcast, Sam Harris said this:

“Every part of culture: Science, public health, war, economics, the lives of famous people, conspiracy theories about everything and nothing… All information is in the process of being macerated by billions of tiny mouths and then spit back again, and lapped up by others. So what is in fact actually digital vomit, at this point, is being spread everywhere. And celebrated as some form of nutrition.”

Unfortunately this is going to get a lot worse before it gets better. It’s not just ‘billions of tiny mouths’ that are going to be spewing digital vomit; it’s going to be a massive machine of propaganda networks spewing AI-created disinformation, vitriol, fake news, and falsified ‘evidence’ to back up the vomit it produces.

And while you would hope mainstream media would be the balancing force to combat this digital vomit, this is not the case. Mainstream media does not have a foothold in truth-telling. Don’t believe me? Watch MSNBC and Fox News side-by-side and you’ll see completely different coverage of the same event. You’ll see minor threats described as crises. If it’s not an emergency it’s not news… so it’s an emergency.

So prepare for a lot more digital vomit. Start trying to figure out how to mop up the mess, to make sense of the mess, because it’s going to get very messy!

It’s going to be messy

“Technology is a way of organizing the universe so that man doesn’t have to experience it” ~ Max Frisch

One of my favourite presentations I’ve ever created was back in 2008 for Alan November’s BLC – ‘Building Leadership Capacity’ conference. It was called: The Rant, I Can’t, The Elephant and the Ant, and it was about embracing new technology, specifically smartphones in schools.

The rant was about how every new technology is going to undermine education in a negative way, starting with the ball point pen.

I can’t was about the frustrations educators have with learning to use new tools.

The elephant was the smartphone: this incredibly powerful new tool that was in the room. You can’t ignore an elephant in the room.

The Ant was a metaphor for networking and learning from others… using a learning community to help you with the transformation of your classroom.

I ended this with a music slideshow that I later converted to video called, Brave New World Wide Web. This went a bit viral on BlipTV, a now defunct rival of YouTube.

The next year I presented at the conference again, and my favourite of my two presentations was The POD’s are Coming, about Personally Owned Devices… essentially laptops and tablets being brought into schools by students. These may be ubiquitous now, but they were still pretty novel in 2009.

These two presentations and the video deliver a pretty strong message about embracing new technology in schools. So my next message, about embracing AI tools like ChatGPT in schools, is going to come across fairly negatively:

It’s going to be a bumpy and messy ride.

There is not going to be an easy transition. It’s not just about embracing a new technology, it’s about managing the disruption… and it’s not going to be managed well. I already had an issue in my school where a teacher used ChatGPT to verify whether AI wrote an assignment for students. However, ChatGPT is not a good AI checker, and it turned out to be wrong for a few students who insisted they wrote the work themselves, and several AI detectors agreed. But this was only checked after the students were accused of cheating. Messy.

Some teachers are now expecting students to write in-class essays with paper and pen to avoid students using AI tools. These are kids who have been using a laptop since elementary school. Messy.

Students are using prompts in ChatGPT that instruct the AI to write with language complexity matched to their age. Or they are putting AI-written work into free paraphrasing tools that fool the AI detectors. Messy.

Teachers’ favourite assignments, the ones that usually get students to really stretch their skills, are now done much faster and almost as well with AI tools. And even very bright students are using these tools frequently. While prompt generation is a good skill to have, AI takes the effort and the depth of understanding away from the learners. Messy.

That final point is the messiest. For many thoughtful and thought-provoking assignments, AI can now reduce the effort to asking the right prompt. And while the answer may be far from perfect, AI will provide an answer that simplifies the response for the learner. It will dumb down the question, or produce a response that makes the question easier.

AI is not necessarily a problem solver; it’s a problem simplifier. But that reduces the critical thinking needed. It waters down the complexity of the work required. It transforms the learning process into something easier, and less directly thoughtful. Everything is messier except the problem the teacher has created, which is now just much simpler to complete.

Learning should be messy, but instead what’s getting messy is the ability to pose problems that inspire learning. Students need to experience the struggle of messy questions instead of seeking an intelligent agent to mess up the learning opportunities.

Just like any other tool, there are places to use AI in education and places to avoid using the tool. The challenge ahead is creating learning opportunities where it is obvious when the tool is and isn’t used. It’s having the tool in your tool box, but not using it for every job… and getting students to do the same.

And so no matter how I look at this, the path ahead is very messy.

Own your own domain

Background: In March of 2008 I purchased DavidTruss.com, DavidTruss.org, and DavidTruss.net. I didn’t start my blog on Google’s Blogger, or on Edublogs, or one of the ‘user-friendly’ blogging websites; I used Elgg, which I was invited onto by a friend, and it was clunky. To make changes to the look of my blog I had to play with the HTML. I often tried and failed, and I learned a lot that I would not have learned on an easier site. However, they sold out to Eduspaces, and the transition killed a lot of my backlinks. Then it looked like Eduspaces was going to change again after I had spent hours cleaning things up, so I got fed up and decided to own my own domain name. I reserved the .com, .org, and .net as the most popular addresses, and I keep all 3, with the .org and .net pointed to the .com site (so if you go to DavidTruss.org it automatically redirects to DavidTruss.com).

I keep the .org and .net domains purely for vanity… Search for David Truss online and you’ll probably find me in most of the links, and I like it that way. Maybe one day I’ll stop paying for the other domains, but for now I will keep all 3 addresses. For a fun explanation of why my first blog was called ‘Pair-a-Dimes’ you can find the story here, under Why ‘Pair-a-Dimes for Your Thoughts’? For this Daily-Ink you can learn more by reading Why Blog Daily or The act of writing, where I coined my byline: “Writing is my artistic expression. My keyboard is my brush. Words are my medium. My blog is my canvas. And committing to writing daily makes me feel like an artist.”

The current backdrop: That’s a lot of reminiscing, so what value am I going to add here? The reality is that it is getting more and more difficult to parse truth from lies, and deep fakes from actual audio and video clips. Things go viral without fact-checking, and it would be easy for someone malicious to spread misinformation about you, me, or anyone else. We know that lies spread faster than the truth on social media, and this is only getting worse. Soon, you won’t be able to trust anything that comes to you on social media, and what will matter most is where you get your information from. The sources you trust will matter, although even these you may have to evaluate. Sources you don’t already know and trust will have to be handled with caution and doubt… even when the message is something you want to believe.

Why own your own domain? While you might think that deep fakes are things only celebrities and politicians need to worry about, the reality is that ‘regular’ people are already getting scammed with technology that used to cost thousands of dollars but can now be used free with AI tools. While your own domain won’t help with an impersonation scam like the one I just linked to, your digital identity is much easier to misappropriate than you might think. My voice and image are on the internet. There are quite a few photos and videos of me online, and so there is enough data for someone to create an AI-enhanced video of me saying whatever the culprits decide will be funny, insulting, mean, or downright disgusting. A funny version of this was a prank a former student pulled, where he and other students uwuify’d some images of me while I was on a social media sabbatical. That was harmless fun, and never intended to impersonate me, but the technology is now there for anyone to do this.

Your domain means you control the narrative. Your own domain means that if something is being shared that you didn’t create, you can point to reliable information. If you have your own website, that’s where you can share your perspective on things, and it isn’t controlled by anyone else. Someone can create a Facebook profile that looks just like mine, and use my images on it. Someone can create a @davidatruss or @datruss1 account on Twitter and make it look like I’m the one saying what they want me to say. A YouTube channel would be just as easy to set up as well. Whereas DavidTruss.com is my domain. I own it. I control the narrative. And… I have a big enough digital footprint that people can see it’s not some site that was just put up a week ago. Does this make me bulletproof to a scam? Absolutely not! But it gives me some leverage to share my own voice if I do get impersonated.

Your domain, your words, your narrative.

Scam prevention: As a final thought, to prevent scams where family members are impersonated, have a ‘safe word’ that you share in times of distress. Not your pet’s name or anything like that. Choose a word that even people you might know wouldn’t guess, like ‘apricot’ or ‘gazebo’ or a phrase like ‘This is extra sauce important’!

Use it or fall behind

Check out what Khan Academy has done so far, since getting early access to GPT-4 last August.

And here’s what’s coming soon:

The gut reaction to using new technology in education is to ban, block, and/or punish students for ‘cheating’. While I’m not going to link to the many times I’ve already said this, I’ll say it again… the technology is not going away!

So how do we use it effectively, creatively, and for learning? 

That is the question to ask… and GPT-4 and tools like it probably have better answers than you can come up with.

Dinner with the dead

A question Tim Ferriss used to regularly ask his podcast guests was, “If you could have dinner with one person, dead or alive, who would it be and why?”

Well now it might be a bit easier to have one of those dinner conversations… even if the person is dead.

Here’s a conversation on AI and education between Bill Gates and Socrates, but first the description of the video:

AI Brings Bill Gates & Socrates Together: A Must-Watch Dialogue on AI. An exclusive video of Bill Gates and ancient philosopher Socrates discussing the potential of artificial intelligence. Don’t miss this groundbreaking fusion of past wisdom and present innovation, reshaping our understanding of AI.

In this video, you will witness a fascinating discussion between Socrates, the Greek philosopher considered one of the greatest thinkers in history, and Bill Gates, the American entrepreneur and founder of Microsoft, one of the most important companies in the world of technology.

Despite belonging to different eras, Socrates and Gates have a lot in common. Both are considered pioneers in their respective fields and have had a significant impact on society.

The AI-generated conversation will allow these two great figures to discuss topics such as technology, ethics, education, and much more. Will Socrates and Bill Gates be able to find common ground in their ideas and thoughts? Find out in this video!

https://youtu.be/hJ5qN9PRmFc

It didn’t need the laugh track, and there is a slight cartoonish feel to the two characters, but this technology is just getting better and better!