Tag Archives: AI

AI takes down EDU-giant

From the Wall Street Journal:

How ChatGPT Brought Down an Online Education Giant
“Chegg’s stock is down 99%, and students looking for homework help are defecting to ChatGPT”

This is an excellent example of job loss due to AI. From the article:

“Since ChatGPT’s launch, Chegg has lost more than half a million subscribers who pay up to $19.95 a month for prewritten answers to textbook questions and on-demand help from experts. Its stock is down 99% from early 2021, erasing some $14.5 billion of market value.”

Chegg is a clear loser, but so is just about every website that offered to write essays for students. Imagine watching your profits disappear and seeing your entire business model collapse before your eyes.

This is just one example. There are many fields and jobs that either have disappeared or are going to disappear. Just think about the shift that’s already happening. People who thought they were in stable jobs, stable careers, are now realizing that they might be obsolete.

There will be new jobs, but more often than not there will be a condensing of jobs… one person where there used to be five, maybe ten people. For example, if a company had 5 writers producing daily articles for websites, they could lay off all but their best writer, who now acts as an editor for articles written by AI. Keep the person that understands the audience best, and that person ensures the AI writing is on point.

Code writing, data analysis, legal services, finance, and, as mentioned above, media and marketing: these are but a handful of areas where AI is going to undermine the job market. Jobs are going to disappear, if they haven’t already. This has been said a lot, but the demise of a company like Chegg, with no vision for how it can pivot, is a perfect example of how this isn’t just a problem of the future; it’s happening right now.

Where will this lead in the next 5 years? What does the future job market look like?

Will there be new jobs? Of course! Will the job loss outpace the creation of new jobs? Very likely. And so where does that lead us?

Maybe it’s time to take a hard look at Universal Basic Income. Maybe it’s time to embrace AI and really think about how to use it in a way that helps us prosper, rather than just to help us write emails and word things better. Maybe it’s time to accept that the AI-infused world we live in now is going to undermine the current job market and forever change whole industries. This isn’t some dystopian future; this is happening right now.

Looking at AI and the future of schools

There is no doubt that Artificial Intelligence (AI) is going to influence the way we do school in the very near future. I have been pondering what that influence will look like. What are the implications now, and what will they be in just a few short years?

Now: AI is going to get messy. Unlike when Google and Wikipedia came out and we were dealing with plagiarism issues, AI writing is not Google-able, and there are two key issues with this. First, you can create assignments that are not Google-able, but you are much more limited in what you can create that is un-AI-able. That is to say, you can ask a question that isn’t easily answerable by a Google search, but AI is quite imaginative and creative and can postulate things that a Google search can’t answer, then share a coherent response. The second issue is that AI detectors are not evidence of cheating. If I find the exact source that was plagiarized, it’s easy to say that a student copied it, but if a detector says that something is 90% likely to be written by AI, that doesn’t mean it’s only 10% likely to be written by a person. For example, I could write that last sentence in 3 different ways and an AI detector would come up with 3 different percentages of likelihood that it is AI-written. Same sentence, different percentage of likelihood, and all written by me.

So we are entering a messy stage of students choosing to use AI to do the work for them, or to help them do the work, or even to discuss a topic and argue with them so that they can come up with their own, better responses. We can all agree that the three uses I shared above are progressively ‘better’ uses of AI, but again, all of them are using AI in some way. The question is, are we going to try to police this, or try to teach appropriate use at the appropriate time? And even when we do, what do we do when we suspect misuse but can’t prove it? Do we give full marks and move on? Do we challenge the student? What’s the best approach?

So we are in an era where it is more and more difficult to figure out when a student is misusing AI, and we are further challenged with the burden of proof. Do we now start only marking things we see students do in supervised environments? That seems less than ideal. The obvious choice is to be explicit about expectations and to teach good use of AI, not pretend we can carry on and expect students not to use it.

The near future: I find the possible direction of AI use in schools quite exciting to consider. Watch this short video of Sal Khan and his son, Imran, working with an OpenAI tool to solve a math question without the AI giving away the answer.

When I see something like this video, made almost 6 months ago, I wonder: what’s going to be possible in another couple of years? How much will an AI ‘know’ about a student’s approach to learning, about their challenges? About how best to entice learning specifically for each student? And then what is the teacher’s role?

I’m not worried about teachers becoming redundant; on the contrary, I’m excited about what’s possible in this new era. When 80% of the class is getting exactly the instruction they need to progress to a grade standard in the required content, how much time does a teacher have during class to meet with and support the other 20% of students who struggle? When a large part of the curriculum is covered by AI, meeting and challenging students at their ideal points of challenge rather than moving the whole class at one targeted pace, how much ‘extra’ time is available to do some really interesting experiments or projects? What can be done to take ideas from a course across multiple disciplines and teach students how to make real-world connections with the work they are studying?

Students generally spend between 5 and 6 hours a day in class at school. If we are ‘covering’ what we need to with AI assistance in less than 3 hours, what does the rest of the time at school look like? Student directed inquiries based on their passions and interests? Real world community connections? Authentic leadership opportunities? Challenges and competitions that force them to be imaginative and creative? The options seem both exciting and endless.

The path from ‘now’ to ‘the near future’ is going to be messy. That said, I’m quite excited about seeing how the journey unfolds. While it won’t be a smooth ride, it will definitely be one that is both a great adventure and one that is headed to a pretty fantastic destination.

_____
Update: Inspired by my podcast conversation with Dean Shareski, here.

Ability and Agility

I love this quote, shared in a video on LinkedIn:

“It used to be about ability. And now, in a changing world, I think what we should be looking for is agility. I want to know how quickly do you change your mind? How fast are you to admit you’re wrong? Because what that means is you’re not just going to be reacting to a pandemic or to AI, you’re actually going to be anticipating those problems and seeing around corners, and then leading change as opposed to being a victim of it.” ~Adam Grant

It’s more than just anticipating problems, it’s about being agile, understanding challenges, and addressing them while they are small. It’s about understanding your strengths, and the strengths of your team… as well as weaknesses.

It’s about Agile Ability, which is why I titled this ‘Ability AND Agility’, rather than ‘Ability VERSUS Agility’. We need to embrace our failures and learn from them, recognize problems early, even predict them and be preemptive. This is so different from a culture of accountability and blame.

The desired student, employee, partner, colleague of the future will learn what they need to on the job. They’ll be exceptional because of their agility and willingness to learn, not just because of what they came to the table already knowing.

Not all voices are equal

I love the Bill Nye analogy about the climate debate. He says that if the debate were authentic, rather than having two talking heads debating, it would be hundreds of scientists on one side versus one climate denier on the other.

I saw a social media clip yesterday where a microbiologist was debunking a self declared holistic practitioner on the consumption of unpasteurized milk. The microbiologist wrote his master’s thesis on bacterial infections in cow’s mammary glands.

The self-declared expert espousing unscientific and incorrect information on social media is not an equal voice to an expert. Do they have a right to share their views? Sure. Do they deserve an audience? No.

I wish that I knew how to make the situation better but I don’t have answers. I’m extremely pro ‘free speech’. I think people are entitled to share their views. However, when I see misinformation and disinformation being shared by people with large audiences, I shudder. I worry about how their messages are consumed, by how many people they lead down a bad path.

In 2024 no one, and I mean NO ONE, should believe the earth is flat and yet the group of flat earth believers is getting larger. Imagine being able to own a telescope and see images from the James Webb telescope and still believing something that societies 5,000+ years ago already knew was wrong.

Not all voices are equal, and some deserve a larger platform than others. Who decides? Who polices? I don’t know, but I do know that we are entering (have entered) an era where false information spreads significantly faster than correct information. Corrected information and updated facts don’t get the same play time on social media. So we are essentially living in an era of disinformation.

This doesn’t feel like progress, and as AI models continue to learn from the inputs we are providing, this scares me. I saw a stat that as much as 80% of the internet could be AI generated by the end of 2026. How much of that generated information will be based on incorrect assumptions and conclusions? How much of it will be intentionally misguided? Who is deciding which voices the AI models listen to?

We can’t continue to let ill-informed people have equal voices to those that have more informed perspectives… But I’m not informed enough to know how to change this.

AI Deep Dive

I’ve been working with Joe Truss on a project called ‘Book of Codes’, where we are examining the building blocks of the universe. The main premise is that we live in a Tetraverse, a universe where the smallest possible length in the universe (the Planck length) must be the edge length of a tetrahedron.

We put our ‘We Live in a Tetraverse‘ video into Google’s NotebookLM and had it create this Deep Dive Conversation.

I think this is quite insightful. I’ve been thinking about how to use this tool since I shared it, by putting my blog posts into it and having it do a Deep Dive on the content. Since then, I’ve read a few things that have questioned just how useful this kind of podcast really is.

Alan Levine says in ‘Wow Us with your AI Generated Podcast…’, “In one sample listen, you might be wowed. But over a series, Biff and Buffy sound like a bunch of gushing sycophants, those office butt kissers you want to kick in the pants.”

And in a comment response, Aaron Davis mentions my blog’s Deep Dive: “I agree with you Alan about the initial amazement about what is possible, I am not sure how purposeful it is. I listened to David Truss’ podcast he posted and was left thinking about my experience with David Truss’ writing. I imagine that such tools may provide a possible entry way into new content, but I am not sure what is really gained by putting this into an audio format?”

It’s true, I’ve done this a few times and while it can be impressive, I do see that this can get a bit old pretty fast. Except for one thing… I think that if you are asking it for a general summary of light content, you are going to get a light and fluffy Deep Dive response. However, if you want to understand something really challenging, different, or dense, this could be a really good way to get a general understanding of tough-to-understand content. The Deep Dive into the Tetraverse video actually did a really good job of describing new content in a clear way. I found the kaleidoscope metaphor it mentioned an insightful analogy, and I think that listening to the audio first would help someone appreciate the video even more.

Like any new and shiny tool, this Deep Dive podcast on Google’s NotebookLM will get a lot of play and then dwindle in use… but that doesn’t make it useless. I think it will find its rightful place as a way to take dense material and make it digestible. It will be a great content introduction, an insightful entry into new learning. It won’t become something you listen to alongside your favourite podcast episodes. Still, it will have a purpose, and you might find yourself going to it, or to a similar tool, when you have too much content to summarize, or when the content is significantly challenging to parse.

The best questions

There is a cliché saying that ‘There is no such thing as a dumb question.’ Tell that to a teacher who has just started an engaging discussion in a class and a kid undermines the flow of the conversation with a dumb, often unrelated question. The reality is that questions have innate and even measurable value, and there is depth and quality to good question asking.

Think about how important good questioning is in the new world of AI. We need not look far on social media these days to find a post about how to generate intelligent prompts… intelligent questions, well posed, and designed to give you back optimum responses. Design the right question and you increase the chances of an ideal answer.

What’s the best way to promote good questioning in schools? How do we teach ‘Asking good questions?’

At Inquiry Hub we have students design their own inquiries. They take a course built around students figuring out what their inquiry question is, then answering it. And they don’t do this once; they do this several times a year for their first two years, then in Grade 11 they design a full-year course.

All the while, students are asking questions, then seeking answers. It’s the practice of asking the questions and not just seeking the answers that makes this process special. They aren’t just asking questions Google or AI can produce answers to. They are not answering a question the teacher asked. They are forming the questions and thus the direction of the learning.

You don’t start asking better and better questions just by answering other people’s questions. You don’t ask better and better questions without practicing forming the questions yourself. Students need to be designing the questions. Because if they are only in charge of answering them, there will be tools and upcoming technologies that will find the same or better answers, faster. The future innovators of the world will be better at writing the best questions, not just answering them.

How good, how soon?

I am still a little freaked out by how good Google NotebookLM’s AI ‘Deep dive conversations’ are. The conversations are so convincing. The little touches it adds, like extended pauses after words like ‘and’, are an excellent example of this.

In the one created for my blog, the male voice asked, “It actually reminds me, you ever read Atomic Habits by James Clear?” And the female voice’s response is, “I haven’t. No.”

Think about what’s happening here in order to continue the conversation in a genuine way. The male voice can now make a point and provide the female voice ‘new’, previously unknown information. But this whole conversation is actually developed by a single AI.

How soon before you have an entire conversation with a customer service representative oblivious to the fact that you are actually talking to an AI? Watch a newscast or a movie unaware that the people you are watching are not really people?

I shared close to 2,000 blog posts I’ve written into the notebook. If I shared my podcasts too and it replicated my voice, I wonder how long it would be before a digital me could be set to write my posts and then simultaneously do live readings of them on my blog? Writing and sounding just like me… without me having to do it!

As a scary extension of this, could I learn something from the new content that it produces? Could I gain insights from the digital me that I would struggle to come up with myself?

This is just the beginning. How much of the internet is going to end up being AI generated and filled with AI reactions and responses to other AIs? And how much longer after that before we notice?

Playing with Google NotebookLM

Today I added all the posts on my Daily-Ink blog, from the time I started blogging daily in July 2019 up to yesterday, into Google NotebookLM. The first thing I did was to ask it ‘What does David Truss write about on his Daily-Ink Blog?’ I’ll share the response below, but what I really wanted to do was to create an Audio Overview, a ‘Deep dive conversation’ that is basically two AI ‘personalities’ discussing the content I shared.

I didn’t know what to expect, and this is the Deep Dive Conversation into Daily-Ink that was generated.

I am blown away by both the conversational aspect of this AI dialogue as well as the way it stuck to a single theme of habits while covering a fair bit of the things I wrote about.

I’m going to be playing with this tool a lot more in the future… and this is just the beginning of the use of AI to summarize and help us learn from large amounts of information. Think of the possibilities to come!

__

As I mentioned above, the first question I asked on Google NotebookLM was, ‘What does David Truss write about on his Daily-Ink Blog?’ and this was the response:

David Truss writes about a variety of topics in his blog Daily-Ink, but common themes include:
Education: This is a frequent topic for Truss, who is an educator. He has blogged about topics like inquiry-based learning and teaching, blended learning and distributed learning, leadership and innovation in education, and student voice and choice. For instance, in one post, he discusses his visit to Inquiry Hub Secondary, a school where “students design the school” themselves.
Technology: Truss often discusses technology in his blog posts. Specific topics of interest include blended learning, distributed learning, and artificial intelligence. He also used his blog to document his experiences using new technologies. For example, he has written about Dall-E, an AI image generation software and Bing Copilot, an AI assistant.
Personal Reflections: Beyond his professional interests, Truss also writes about his personal life, sharing reflections on topics such as relationships, personal growth, mindfulness, and daily routines. For example, he writes about the challenges of maintaining routines during the pandemic and reflects on the value of taking walks. He also frequently documents his journey with archery and maintaining an active lifestyle.
The Blogging Process: Truss seems to view blogging as a personal practice and frequently shares his thoughts on the writing process itself. He sees writing as an art form and his blog as his canvas. He discusses his commitment to writing daily, the challenges he faces, and the satisfaction he derives from it.
In addition to these recurring themes, Truss also uses his blog to:
Share Links and Resources: Truss frequently shares links to articles, videos, and other resources that he finds interesting or valuable. He often provides context for these links, explaining why he finds them noteworthy and encouraging his readers to explore them further.
Engage in Conversations: Truss uses his blog as a platform for engaging in conversations with his readers. He actively responds to comments, asking for feedback and acknowledging different perspectives.
Overall, David Truss uses his Daily-Ink blog as a space for reflecting on his experiences, exploring his interests, and connecting with others. He is open about his process, his struggles, and his growth, offering a glimpse into his life and his thoughts.

Information is free, Truth takes effort

We live in an era where:

Lies spread faster than the truth

There is worldwide concern over false news and the possibility that it can influence political, economic, and social well-being. To understand how false news spreads, Vosoughi et al. used a data set of rumor cascades on Twitter from 2006 to 2017. About 126,000 rumors were spread by ∼3 million people. False news reached more people than the truth; the top 1% of false news cascades diffused to between 1000 and 100,000 people, whereas the truth rarely diffused to more than 1000 people. Falsehood also diffused faster than the truth. The degree of novelty and the emotional reactions of recipients may be responsible for the differences observed.

Science, 9 Mar 2018, p. 1146-1151

Media, and even more so social media, can’t be trusted. In fact, if it is eye-catching and click-bait worthy, it will be sensationalized and potentially untrue. We live in an era of unlimited information, and much of it is not factual, and not easily verifiable.

What can we do? I’ve said before that ‘Web Domains Matter’, and they do, but we still need to recognize that even news sites considered reputable have biases.

So we are required to take new information in as sceptics. Meanwhile, we have to balance our scepticism with a dose of common sense, or we could easily fall down the conspiracy rabbit hole. This is the new normal; this is being information savvy. This does not mean we will get to the Truth, because it’s not just the information coming in that has bias: we have our own biases too.

We all have work to do to understand some sort of relevant small ‘t’ truth that is in fact closely related to the capital ‘T’ Truth; to find our way amidst an endless stream of information that favours misinformation, fake news, and half-truths. The rabbit hole runs deep, and we are all on a journey down it… with Artificial Intelligence creating a whole new level of convincing fakes that are easily believed, and algorithmically shared far more than anything truthful.

Start with the source, where is the information coming from? Apply a sliding scale of scepticism depending on the reliability of the source. Then be savvy in deciding what to believe and what to dismiss.

Source, scepticism, and savviness… the new path to information literacy.

AI Image Fails

I use AI images to accompany roughly 18 or 19 out of every 20 Daily-Ink posts. My general rule is that I’ll try one or two requests and pick from those. I don’t want to spend 10 or 15 minutes of my precious morning schedule searching for images; they are the side quest, my writing is the adventure.

However, it being the weekend, yesterday I had time to play… and yet I failed.

Here was my original request for my last blog post:

When that didn’t work, I got more and more detailed, even pausing the requests to ask Copilot (which uses DALL•E 3 to create images) if it knew what a waterpolo cap looked like. It described it perfectly… then I got it to reiterate my request before continuing. This is what I got:

But the caps still came out with helmet masks and at no point was the shooter facing the net. I finally gave up and cropped an image. Here is what I used, and then the full image:

Below are many of the fails. I recognize these are not common requests, and the images have some redeeming qualities, but there is still a way to go when it comes to AI text-to-image requests. So, when you see a less-than-perfect image added to my Daily-Ink posts, please recognize that I’m trying, but I’m not wasting time trying to get everything just right… I’d rather use that time to write, meditate, or exercise.