Tag Archives: Artificial Intelligence

From the horse's mouth

I first saw some really good deep fakes of famous people, ones that both looked and sounded real, over a year ago, and the technology is far better now. I just watched this NBC Nightly News clip on TikTok:

The whole video is cautionary and a little scary, looking at Artificial Intelligence (AI) as a possible threat to humanity. At the 2:17 mark, this was said:

“AI tools can already mimic voices, ace exams, create art, and diagnose diseases. And they are getting smarter every day.

In two years by the time of the election, human beings will not be able to tell the difference between what is real and what is fake.”

It occurred to me that the lead-up to the next US election is going to include countless deep fakes that will be virally shared, reposted, and re-shared, and that will be far more convincing than anything we’ve seen so far. The clever ones won’t be far-fetched in content; they will be convincing because they will subtly send people down a specific narrative without being outrageous, egregious, and easy to spot. For example: Biden supporting and endorsing some ultra-left wing, ‘woke’ group that makes the right outraged. Or Trump telling a friend how he hates guns and the NRA. Each of these could be completely fabricated and completely convincing.

This is really scary, because where you get your news will be vitally important. Hopefully major news outlets will vet the videos and verify authenticity before sharing, but digital newsrooms are always worried about missing the scoop and letting other outlets go viral first. Many less reputable sites will share the fake videos simply because they fit their narrative. Other sites will knowingly share the fake videos because their intent is to mislead, and to feed anger and vitriol to their naive followers.

Conspiracies will be magnified, and any mention that the videos are fake will be countered with claims that the reports are just the government’s way of keeping the truth from you. The message being: the fakes are real, and the reports calling them fake are the real fake news. People already believe fake articles with no fact-checking; they are going to be completely fooled by extremely convincing fake videos.

My advice: find websites that you trust and make sure the web URL is correct. Follow people you trust and question anything suspicious from them that isn’t from your trusted sites. If it’s not from your trusted sites, if it’s not straight from the horse’s mouth and it seems suspicious, consider it fake until you verify it.

How serious is this? You don’t have to be famous to be deep faked. Here is a story of a mom who was convinced she was talking to her kidnapped daughter, certain it was her daughter’s voice on the phone with the kidnappers, but the call was a fake and her daughter was safe upstairs in her room. The point of the video is to have a safe word with your family, because anyone’s voice can be easily faked.

We are entering a time when people are defining truth differently, when fakes seem more like truth, and when sources of information are going to be as important as the information itself.

Unrecognizable future

Did you ever see the movie Blast from the Past? Here is the premise from Wikipedia:

The film focuses on a naive 35-year-old man, Adam Webber, who has spent his entire life (1962-1997) living in a Cold War-era fallout shelter built by his survivalist, anti-Communist father, who believes the United States has suffered a Soviet nuclear attack (in reality, a plane crashed into their house). When the doors unlock after 35 years (the amount of time his father believes the nuclear fallout will take to clear), Adam emerges into the modern world, where his innocence and old-fashioned views put him at comedic odds with others.

It’s a cute movie. It makes me wonder what it would be like for someone today to jump ahead 35 years and see the future. What does life look like in 2058?

Will work look anything like it does today? What about transportation? Money? Space flight? Religion? Politics? Artificial intelligence (AI) and computing?

There is so much that could, and will, be different from today. Some of these things might look similar, but some are going to look considerably different. I can’t imagine a world 35 years from now being less different from today than 1997 was from 1962. Life in 2058 will be unrecognizable compared to life today.

For example: The entire monetary system is going to be digital. The workforce is going to look mostly different, depending on how much AI has displaced today’s jobs. That same AI is going to change the way we integrate technology. Today it’s our phones that seem to be getting smarter and smarter, but 35 years from now it will be integrated, cyborg-like technology that enhances and enriches our lives.

I can imagine glasses or contact lenses or implants that give us information like map directions, names and details about the people we meet, and even health data, like the heads-up display in a military jet today. I can imagine this pairing with a hearing implant so that we could have a video conversation the way we FaceTime on our phones today. Phones won’t be something we carry; they will be embedded in the tools we wear or the implants we integrate.

If you don’t think that things will be drastically different than they are now, then you haven’t paid attention to the exponential rate of change since the 1990s. In 1990 there was no Amazon, Deep Blue was 7 years away from beating world chess champion Garry Kasparov, there were no smartphones, no mass-market electric cars, and Google was still 8 years away from being a company. Author Arthur C. Clarke said, “Any sufficiently advanced technology is indistinguishable from magic.” Smartphones today would seem like magic in 1990. What does 2058 hold that will seem like nothing less than magic to us today? It’s cliché to say, but time will tell.

Fear & Teaching

I just read an interesting article, ‘ChatGPT is going to change education, not destroy it‘, and got to this passage about a teacher using it in her classroom:

“Not all these approaches will be instantly successful, of course. Donahoe and her students came up with guidelines for using ChatGPT together, but “it may be that we get to the end of this class and I think this absolutely did not work,” she says. “This is still an ongoing experiment.””

The moment I read this I thought, ‘This is a teacher I’d love to work with!’ What’s her approach? Let me summarize it: ‘Here is a new tool, how can I use it in my classroom to help my students learn? Oh, and sometimes what I try won’t work, but if every experiment worked well then we wouldn’t be learning.’

I see so much fear when a new tool enters schools: ban calculators, ban smartphones, ban Wikipedia, ban ChatGPT… But there are always teachers doing the opposite, wanting to use new tools rather than ban them. Teachers who are willing to try new things. Teachers who know that some lessons will flop, and go in unexpected and unintended directions, yet see the value in trying. These teachers can look long term and see the worthy benefits of trying something new; they are unafraid to have a lesson fail on the path to being innovative.

It’s that lack of fear of flopping that I love to see in teachers. There’s a wide gap between ‘That failed, how embarrassing. I’ll never do that again!’ and ‘Well, that didn’t work! I wonder what I can do next time to make it better?’ The former is quite fixed in their ways, and the latter is considerably more flexible. While fear rules the former, there is a kind of fearlessness in the latter.

Tools like ChatGPT are absolutely going to change education. I’m excited to see fearless educators figuring out how best to use it (and the many new tools like it) in their classrooms and with their students. The teachers willing to iterate, try, fail, and learn to use these tools are going to take their students a lot farther, and those students will learn a lot more, than in places where these tools are banned, blocked, and shunned.

Capability and Time

This post is about Artificial General Intelligence, but I’m going on a bird walk first.

My dad has had a health setback and will be on a long road to recovery. I heard him talking about his recovery plans and I felt I had to share a personal example with him. When I went through 6 months of chronic fatigue, I finally found relief after I discovered that I had a severe Vitamin D deficiency and started taking high-dose supplements. Here’s the thing: I saw positive results in just 3 days… I went from hearing my alarm and wondering how I could physically get out of bed, to feeling normal when I woke up. However, it took over 6 months before my treadmill workouts were back to within 90% of what I could do before the fatigue hit me. My capabilities improved, but much more slowly than I expected. We are good at setting goals and knowing what’s possible, but we often overestimate how fast we can achieve those goals.

But I digress (my dad’s favourite thing to say when he finishes a bird walk on his way to making a point).

ChatGPT and similar language-prediction software are still pretty far away from artificial general intelligence, and the question is: how far away are we? How far away are we from a computer being able to out-think, out-comprehend, and out-problem-solve the brightest of humans? Not just on one task, like competing in chess or Go, but in ‘general’ terms, on any task.

I think we are further away in time than most people think (at least those people who think artificial general intelligence is possible). I think at least one, if not a few, technological leaps need to happen first, and I think they will take longer than expected.

The hoverboard and flying cars in the Back to the Future sequel may not be too far away, but the ‘future’ in that 1989 movie was supposed to be 2015.

Are we going to achieve Artificial General Intelligence any time soon? I doubt it. I think we need a couple of quantum leaps in our knowledge first. But when this happens, computers will instantly be much smarter than us. They will be far more capable than humans at that point. So the new question isn’t about when this will happen, but rather what we do when it does happen. Because I’m not a fan of a non-human intelligence looking at humans the way we think about stupid chickens, or even smart pets. What happens when the Artificial Intelligence we create sees us as stupid, weak animals? Well, I guess time will tell (but I don’t think that’s any time soon).

Fully integrated and invisible AI

We are moving into a new economic revolution. Not since the Industrial Revolution have we seen technology that will change the nature of work so drastically. Artificial Intelligence is about to be woven so deeply into our lives that we will not know where it starts and where it ends. And while it’s not completely new to have our work enhanced by AI, the depth of influence and ease of use will make it transformational even as it slowly becomes invisible.

What do I mean by invisible? We already use simple forms of AI in everyday life without thinking about it: we have autocorrect fixing our spelling; we have cars that warn us when we drift out of our lane or that flash in our mirrors when it’s not safe to change lanes; and we trust autopilot to do the majority of the flying on plane trips around the world. The leap to self-driving cars might have seemed incredible a few years ago, but now you can board a self-driving taxi in San Francisco.

GPT-3, and now GPT-4, are going to change the very nature of work for many people in the coming months and years. Have a look at what Microsoft Copilot is about to offer:

More specifically:

Soon tools like this, aptly named Copilot, will become as useful and as integrated into what we do as autocorrect is today. Take meeting notes? Why bother; just record the meeting and ask the AI to generate both the notes and the next-step tasks. Create a PowerPoint to present new information? Instead, share the information with Copilot and have it create the PowerPoint. Build a website? How about sketching it on the back of a napkin, sharing a picture of it, and having GPT-4 write the code and build the website.
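To make the meeting-notes idea concrete, here is a minimal sketch of how that kind of summarization could be wired up with the OpenAI Python library. The model name, prompt wording, and the summarize_meeting function are my own assumptions for illustration; this is not how Copilot itself is built.

```python
from openai import OpenAI

client = OpenAI()  # assumes an OPENAI_API_KEY environment variable is set


def summarize_meeting(transcript: str) -> str:
    """Ask a chat model for concise meeting notes plus next-step tasks."""
    response = client.chat.completions.create(
        model="gpt-4",  # assumed model name; any capable chat model would do
        messages=[
            {
                "role": "system",
                "content": (
                    "Summarize the meeting transcript into concise notes, "
                    "then list the next-step tasks as bullet points."
                ),
            },
            {"role": "user", "content": transcript},
        ],
    )
    return response.choices[0].message.content


if __name__ == "__main__":
    # A stand-in transcript; a real one would come from the meeting recording.
    print(summarize_meeting("Sam: Budget draft is due Friday. Lee: I'll book the venue."))
```

The point isn’t this particular snippet; it’s that the summarizing step becomes one function call, quietly buried inside the tools we already use.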

AI is going to redefine the work of many people faster than at any time in history, and the technology is going to be so integrated into the things we do daily that the use of AI will quickly become invisible… ever-present, very useful, and unnoticed.

Feeding the (AI) brain

I worry about Artificial Intelligence being trained with the internet as its main source of information. One of the biggest challenges in teaching AI is teaching it how to group things. Unless a group is clearly identified, it’s not a group. That’s okay when counting items, but not when sorting ideas. What is fact vs fiction? What is truth vs a lie vs an embellishment vs an exaggeration vs a theory vs a really, really bad theory?

There are some dark places on the internet. There are some deeply flawed ideas about culture, race, gender, politics, and even health and fitness. There are porn sites that objectify women, and anti-science websites that read like they are reporting facts. There is a lot of ‘stupid shit’ on the internet. How is this information grouped by not-yet-intelligent AI systems?

There is the old saying, ‘Garbage in, garbage out’, and essentially that’s my concern. Any form of artificial general intelligence is only as good as the intelligence put into the system, and while the internet is a great source of intelligent information, it’s also a cesspool of ridiculous information that’s just as easy to find. I’m not sure these two dichotomous forms of information are being grouped by AI systems in a meaningful and wise way… mainly because we aren’t smart enough to program these systems well enough to know the difference.

The tools we have for searching the internet are based on algorithms that are constantly gamed by SEO techniques, and search is based on words, not ideas. The best ideas on the internet are not necessarily the ones most linked to, and bad ideas often get more clicks, likes, and attention. How does an AI weigh this? How does it group these ideas? And what conclusions does the AI draw? Because the reality is that the AI needs to make decisions or it wouldn’t be considered intelligent. Are those decisions ones ‘we’ are going to want it to make? If the internet is the main database of information, then I doubt it.

When search engines become answer engines

One of the most alarming things I’ve read and heard about since I started mentioning ChatGPT and the use of predictive AI tools is that the model for profitability of content creators is going to have to change. With Google and Bing both embedding AI-enhanced ‘answers’ in their search results, this is going to have a dramatic impact on the website visits (click-throughs and advertising views) that content creators count on.

Here is a link to a very long but interesting essay by Alberto Romero on the subject: Google vs Microsoft: Microsoft’s New Bing Is a Paradigm Change for Search and the Browser

This is an excerpt from the section titled, ‘With great power comes great responsibility’,

“Giving value back to creators
One of the core business aspects of search is the reciprocal relationship between the owners of websites (content creators and publishers) and the owners of the search engines (e.g. Google and Microsoft). The relationship is based on what Nadella refers to as “fair use.” Website owners provide search engines with content and the engines give back in form of traffic (or maybe revenue, etc.). Also, search engine owners run ads to extract some profit from the service while keeping it free for the user (a business model that Google popularized and on top of which it amassed a fortune).”

and a little further down,

“…Sridhar Ramaswamy, ex-Google SVP and founder of Neeva (a direct competitor of Bing and Google Search), says that “as search engines become answer engines, referral traffic will drop! It’s happened before: Google featured snippets caused this on 10-20% of queries in the past.”

So getting an answer directly from your search query already has a track record of reducing referral traffic, and now search is going to get significantly better at answering questions without you needing to click through to a website.

What is human-created (as opposed to Artificial Intelligence-created) content going to look like in the future, when search answers the questions that would normally require you to visit a website? What happens to creator and publisher profitability when search engines become answer engines?

AI, Evil, and Ethics

Google is launching Bard, its version of ChatGPT, connected to search and connected live to the internet. Sundar Pichai, CEO of Google and Alphabet, shared yesterday, “An important next step on our AI journey“. In discussing the release of Bard, Sundar said,

We’ll combine external feedback with our own internal testing to make sure Bard’s responses meet a high bar for quality, safety and groundedness in real-world information.

Following the link above led me to this next link:

“In addition to producing responses that humans judge as sensible, interesting, and specific to the context, dialog models should adhere to Responsible AI practices, and avoid making factual statements that are not supported by external information sources.”

I am quite intrigued by what principles Google is using to guide the design and use of Artificial Intelligence. You can go to the links for the expanded description, but here are Google’s Responsible AI practices:

“Objectives for AI applications

We will assess AI applications in view of the following objectives. We believe that AI should:

1. Be socially beneficial.

2. Avoid creating or reinforcing unfair bias.

3. Be built and tested for safety.

4. Be accountable to people.

5. Incorporate privacy design principles.

6. Uphold high standards of scientific excellence.

7. Be made available for uses that accord with these principles.”

But these principles aren’t enough; they are the list of ‘good’ directions, and so there is also a list of ‘Thou Shalt Nots’ added below the principles:

“AI applications we will not pursue

In addition to the above objectives, we will not design or deploy AI in the following application areas:

  1. Technologies that cause or are likely to cause overall harm. Where there is a material risk of harm, we will proceed only where we believe that the benefits substantially outweigh the risks, and will incorporate appropriate safety constraints.

  2. Weapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people.

  3. Technologies that gather or use information for surveillance violating internationally accepted norms.

  4. Technologies whose purpose contravenes widely accepted principles of international law and human rights.

As our experience in this space deepens, this list may evolve.”

I remember when Google used to share its motto “Don’t be evil”.

These principles remind me of that motto. The interesting vibe I get from the principles, and from the ‘Thou Shalt Not’ list of applications Google will not pursue, is this:

‘How can we say we will try to be ethical without: a) mentioning ethics; and b) admitting that this is an imperfect science and that we are guaranteed to make mistakes along the way?’

Here is the most obvious statement that these Google principles and guidelines are all about ethics without using the word ethics:

“…we will not design or deploy AI in the following application areas:

  1. Technologies that cause or are likely to cause overall harm. Where there is a material risk of harm, we will proceed only where we believe that the benefits substantially outweigh the risks, and will incorporate appropriate safety constraints.”

You can’t get to “Where there is a material risk of harm, we will proceed only where we believe that the benefits substantially outweigh the risks“ without talking about ethics. Who is the ‘we’ in ‘we believe’? Who is deciding which benefits outweigh which risks? Who determines what is ‘substantial’ in the weighing of benefits versus risks? And going back to Principle 2, how is bias being determined or measured?

The cold hard reality is that the best Google, ChatGPT, and every other AI and predictive text model can do is ‘try to do less evil than good’, or maybe just ‘make it harder to do evil than good.’

The ethics will always trail the technological capabilities of the tool, and guiding principles are a method to catch wrongdoing, not prevent it. With respect to the list of applications AI will not be used for, “As our experience in this space deepens, this list may evolve“ is a way of saying, ‘We will learn of ways this tool will be abused, and then add to this list.’

The best possible goal for the designers of these AI technologies is to do less evil than good… The big question is: how do we do this ethically when it seems these companies are scared to talk directly about ethics?

You can’t police it

I said ChatGPT is a game changer in education in my post Fear of Disruptive Technology. Then I started to see more and more social media posts claiming that Artificial Intelligence (AI) detectors can identify passages written by AI. But that doesn’t mean a person can’t use a tool like ChatGPT and then do some of their own editing. Or, as seen here: just feed the AI-written text into CopyGenius.io, hit ‘rephrase’, and AI detectors can no longer recognize the text as written by AI.

The first instinct with new technology is to ban and/or police it: No cell phones in class, leave them in your lockers; You can’t use Wikipedia as a source; Block Snapchat, Instagram, and TikTok on the school wifi. These are all gut reactions to new technologies that frame the problem as policeable… Teachers are not teachers, they aren’t teaching, when they are acting as police.

Read that last sentence again. Let it sink in. We aren’t going to police away student use of AI writing tools, so maybe we should embrace these tools and help manage how they are used. We can teach students how to use AI appropriately and effectively. Teach rather than police.

Fear of Disruptive Technology

There is a lot of fear around how the Artificial Intelligence (AI) tool ChatGPT is going to disrupt teaching and learning. I’ve already written about this chatbot:

Next level artificial intelligence

And,

Teaching in an era of AI

And,

The future is now.

To get an understanding of the disruption that is upon us, in the second post, Teaching in an era of AI, I had ChatGPT write an essay for me. Then I noted:

“This is a game changer for teaching. The question won’t be how do we stop students from using this, but rather how do we teach students to use this well? Mike Bouliane said in a comment on yesterday’s post, “Interesting post Dave. It seems we need to get better at asking questions, and in articulating them more precisely, just like in real life with people.”

Indeed. The AI isn’t going away, it’s just going to get better.”

And that’s the thing about disruptive technology, it can’t be blocked, it can’t be avoided, it needs to be embraced. Yet I’ve seen conversations online where people are trying to block it in schools. I haven’t seen this kind of ‘filter and hide from students’ philosophy since computers and then phones started to be used in schools. It reminds me of a blog post I wrote in 2010, Warning! We Filter Websites at School, where I shared this tongue-in-cheek poster for educators in highly filtered districts to put up on their doors:

Well, now the fervour is back, and much of the talk is about how to block ChatGPT and how to detect its use. And while there are some conversations about how to use it effectively, doing so means disrupting what most teachers assign to students, and it also disrupts assessment practices. Nobody likes so much disruption to their daily practice happening all at once. So the block, filter, and policing (catching cheaters) discussions begin.

Here is a teacher of senior AP Literature who uses ChatGPT to improve her students’ critical thinking and writing. Note how she doesn’t use the tool for the whole process. Appropriate, not continuous, use of the tool:

@gibsonishere on TikTok

Again going all the way back to 2010, I said in Transformative or just flashy educational tools?,

“A tool is just a tool! I can use a hammer to build a house and I can use the same hammer on a human skull. It’s not the tool, but how you use it that matters.”

And here is something really important to note:

The. Technology. Is. Not. Going. Away.

In fact, AI is only going to get better… and be more disruptive.

Employers are not going to pretend that it doesn’t exist. Imagine an employer saying, “Yes, I know we have power drivers, but to test your skill we want you to screw in this 3-inch screw with a handheld screwdriver”… and then not letting the new employee use power tools in their daily work.

ChatGPT is very good at writing code, and many employers test job candidates by asking them to write code. Are they just going to pretend that ChatGPT can’t write the same code much faster? I can see a performance test for new programmers looking something like this in the future: “We asked ChatGPT to write the code to perform ‘X’, and this is the code it produced. How would you improve this code to make it more effective and elegant?”
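As a purely hypothetical illustration (the task and both versions of the code below are my own invention, not from any real hiring test), such a question might hand a candidate something like this and ask for improvements:

```python
# The AI-generated-style answer: it works, but it checks for duplicates
# with a nested loop, which is O(n^2).
def has_duplicates_naive(items):
    for i in range(len(items)):
        for j in range(i + 1, len(items)):
            if items[i] == items[j]:
                return True
    return False


# One improvement a candidate might offer: compare the list's length to the
# size of a set built from it, which is O(n) and states the intent directly.
def has_duplicates(items):
    return len(set(items)) != len(items)


print(has_duplicates_naive([1, 2, 3, 2]))  # True
print(has_duplicates([1, 2, 3]))           # False
```

The point of the exercise isn’t the snippet itself; it’s whether the candidate can read, critique, and improve machine-written code rather than pretend the machine isn’t in the room.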

Just like the TikTok teacher above, employers will expect the tool to be used and will want their employees to know how to use it critically and effectively to produce better work than they could without it.

I’m reminded of this cartoon I created back in 2009:

The title of the accompanying post asks, Is the tool an obstacle or an opportunity? The reality is that AI tools like ChatGPT are going to be very disruptive, and we will be far better off looking at these tools as opportunities rather than obstacles… Because if we choose to see these tools as obstacles, then we are the actual obstacles getting in the way of progress.