Tag Archives: Artificial Intelligence

Top Risks 2024

I’d never heard of Eurasia Group before a good friend of mine, an investor, shared the infographic below with me yesterday. According to their website,

In 1998, Ian Bremmer founded Eurasia Group, the first firm devoted exclusively to helping investors and business decision-makers understand the impact of politics on the risks and opportunities in foreign markets. Ian’s idea—to bring political science to the investment community and to corporate decision-makers—launched an industry and positioned Eurasia Group to become the world leader in political risk analysis and consulting.

According to their ‘Top Risks 2024’ report:

2024. Politically it’s the Voldemort of years. The annus horribilis. The year that must not be named.

Three wars will dominate world affairs: Russia vs. Ukraine, now in its third year; Israel vs. Hamas, now in its third month; and the United States vs. itself, ready to kick off at any moment.

Russia-Ukraine … is getting worse. Ukraine now stands to lose significant international interest and support. For the United States in particular, it’s become a distant second (and increasingly third or lower) policy priority. Despite hundreds of thousands of casualties, millions of displaced people, and a murderous hatred for the Russian regime shared by nearly every Ukrainian that will define the national identity of tens of millions for decades. Which is leading to more desperation on the part of the Ukrainian government, while Vladimir Putin’s Russia remains fully isolated from the West. The conflict is more likely to escalate, and Ukraine is on a path to being partitioned.

Israel-Hamas … is getting worse. There’s no obvious way to end the fighting, and whatever the military outcome, a dramatic increase in radicalization is guaranteed. Of Israeli Jews, feeling themselves globally isolated and even hated after facing the worst violence against them since the Holocaust. Of Palestinians, facing what they consider a genocide, with no opportunities for peace and no prospects of escape. Deep political divisions over the conflict run throughout the Middle East and across over one billion people in the broader Muslim world, not to mention in the United States and Europe.

And then there’s the biggest challenge in 2024 … the United States versus itself. Fully one-third of the global population will go to the polls this year, but an unprecedentedly dysfunctional US election will be by far the most consequential for the world’s security, stability, and economic outlook. The outcome will affect the fate of 8 billion people, and only 160 million Americans will have a say in it, with the winner to be decided by just tens of thousands of voters in a handful of swing states. The losing side—whether Democrats or Republicans—will consider the outcome illegitimate and be unprepared to accept it. The world’s most powerful country faces critical challenges to its core political institutions: free and fair elections, the peaceful transfer of power, and the checks and balances provided by the separation of powers. The political state of the union … is troubled indeed.

None of these three conflicts have adequate guardrails preventing them from getting worse. None have responsible leaders willing and able to fix, or at least clean up, the mess. Indeed, these leaders see their opponents (and their opponents’ supporters) as principal adversaries—“enemies of the people”—and are willing to use extralegal measures to ensure victory. Most problematically, none of the belligerents agree on what they’re fighting over.

Think about this: the Russia-Ukraine and Israel-Hamas wars both take a back seat to the US election as the top risk of 2024. Neither has a positive outcome in sight, and yet they still don’t pose the same threat as a tight election result in the United States. I wish I could disagree, but I too see this as a genuine concern. What makes it worse is Risk #4 – Ungoverned AI, and specifically disinformation:

In a year when four billion people head to the polls, generative AI will be used by domestic and foreign actors—notably Russia—to influence electoral campaigns, stoke division, undermine trust in democracy, and sow political chaos on an unprecedented scale. Sharply divided Western societies, where voters increasingly access information from social media echo chambers, will be particularly vulnerable to manipulation. A crisis in global democracy is today more likely to be precipitated by AI-created and algorithm-driven disinformation than any other factor.

I want to explore the other risks as well, but by far my biggest concern for 2024 is the US election. My greatest fear is a close and contested election. The fallout would be tragic not just for the US, but for the entire world. I wish this were hyperbole, but it’s not, and reading a report like this only magnifies concerns I already had. Buckle up, we are in for quite a ride in 2024.

You can get the full Top Risks 2024 white paper on their website (or click the image below).

Conversational AI interface

A future where we have conversations with an AI in order to have it do tasks for us is closer than you might think. Watch this clip (full video here):

Imagine starting your day by asking an AI ‘assistant’, “What are the emails I need to look at today?” Then saying something like, “Respond to the first 2 for me, and draft an answer for the 3rd one for me to approve. And remind me what meetings I have today.” All while on the treadmill, or while shaving or even showering.

The AI calculates the calories you burn on your treadmill, tracks what you eat, and even offers suggestions like, “You might want to add some protein to your meal. May I suggest eggs? You also have some tuna in the pantry, or a protein shake if you don’t want to make anything.”

Later, the ever-present AI is in the room with you during a meeting, and afterwards you request of it, “Send us all a message with a summary of what we discussed, and include a ‘To Do’ list for each of us.”

Sitting too long at work? The AI could suggest standing up, or taking the stairs instead of the elevator. Hungry? Maybe your AI assistant recommends a snack because it read your sugar levels from your watch’s health monitor, just as you are starting to feel hungry.

It could even remind you to call your mom, or do something kind for someone you love… and do so in a way that helps you feel good about it, not like it’s nagging you.

All this and a lot more without looking at a screen or typing information into a laptop. This won’t be ready by the end of 2024, but it’s closer to 2024 than it is to 2030. This kind of futuristic engagement with a conversational AI really is just around the corner. And those ready to embrace it are going to leave behind those who don’t, much like someone insisting on horse travel in an era of automobiles. Are you ready for the next level of AI?

What are you outsourcing?

Alec Couros recently came to Coquitlam and gave a presentation on “The Promise and Challenges of Generative AI”. In this presentation he had a quote, “Outsource tasks, but not your thinking.”

I just googled it and found this LinkedIn post by Aodan Enright. (Worth reading but not directly connected to the use of AI.)

It’s incredible what is possible with AI… and it’s just getting better. People are starting businesses, writing books, creating new recipes, and in the case of students, writing essays and doing homework. I just saw a TikTok of a student who goes to their lecture and records it, runs it through AI to take out all the salient points, then has the AI tool create cue cards and test questions to help them study for upcoming tests. That’s pretty clever.

What’s also clever, but perhaps not wise, is having an AI tool write an essay for you, then running the essay through a paraphraser that breaks the AI structure of the essay so that it isn’t detectable by AI detectors. If you have the AI use the vocabulary of a high school student, and throw in a couple of run-on sentences, then you’ve got an essay that neither AI detectors nor teachers would be able to flag as cheating. However, what have you learned?

This is a worthy point to think about, and to discuss with students: How do you use AI to make your tasks easier, but not do the thinking for you?

Because if you are using AI to do your thinking, you are essentially learning how to make yourself redundant in a world of ever-smarter AI. Don’t outsource your thinking… Keep your thinking cap on!

Is Generative AI “just a tool”?

Dean Shareski asks,

Is Generative AI “just a tool”? When it comes to technology, the “just a tool” phrase has been used for decades. I understand the sentiment in that it suggests that as humans, we are in control and it simply responds to our lead. But I worry that this at times, diminishes the potential and also suggests that it’s neutral which many argue that technology is never neutral. I’d love to know what you think, is Generative AI “just a tool”?

My quick answer is ‘No’, because, as Dean suggests, tools have never been neutral.

Here are some things I’ve said before in ‘Transformative or just flashy educational tools?’. That was 13 years ago, and I too used the term ‘just a tool’:

“A tool is just a tool! I can use a hammer to build a house and I can use the same hammer on a human skull. It’s not the tool, but how you use it that matters.

A complementary point: If I have a hammer and try to use it as a screwdriver, I won’t get much value from its use.”

My main message was, “A tool is just a tool! It’s not the tool, but how you use it that matters.”

I went on to ask, “What should we do with tools to make them great?” And I gave 6 suggestions:

  1. Give students choice
  2. Give students a voice
  3. Give students an audience
  4. Give students a place to collaborate
  5. Give students a place to lead
  6. Give students a digital place to learn

This evolved into a presentation, 7 Ways to Transform Your Classroom. But going back to Dean’s question: first of all, I don’t think technology is ever neutral. I also think that technology need not be transformative, yet it always has the potential to be so if it genuinely advances capabilities beyond what was available before. So the problem is really with the word ‘just’ in ‘just a tool’. Nothing is just a tool. Generative AI is not “just a tool”.

We respond to technology, and technology transforms us. Just as Marshall McLuhan proposed that the communication medium itself, not the messages it carries, is what really matters (“The medium is the message”), the technology is the transformer: it alters us through our use of it; it is the medium that becomes the message.

Generative AI is going to transform a lot of both simple and complex tasks, a lot of jobs, and ultimately us.

What does this mean for schools, and for teaching and learning? There will be teachers and students trying to do old things in new ways, using AI like we used a thesaurus, a calculator, Wikipedia, and other tools to ‘enhance’ what we could do before. There will also be innovative teachers using AI to co-create assignments and to answer new questions, co-write essays, and co-produce images and movies.

There will be students using generative AI to do their work for them. There will be teachers fearing these tools, there will be teachers trying to police their use, and it’s going to get messy… That said, we will see more and more generative AI being the avenue through which students are themselves designing tasks and conceiving of new ways to use these tools.

And of course we will see innovative teachers using these tools in new ways…

How different is a marketing class when AI is helping students start a business and market a product with a $25 budget to actually get it going?

How different is a debate where generative AI is listening and assisting in response to the opposing team’s rebuttal?

How different are the special effects in an AI assisted movie? How much stronger is the AI assisted plot line? How much greater is the audience when the final product is shared online, with AI assisted search engine optimization?

Technology is not ‘just a tool’. We are going to respond to Generative AI, and Generative AI is going to transform us. It is not just going to disrupt education; it’s going to disrupt our job market, our social media (it already has), and our daily lives. These tools will not be just like the tools that came before them… changes will be accelerated, and the future is exciting.

‘Just a tool’ is deceiving, and I have to agree with Dean that it underplays how truly transformative these tools can be. I’m not sure I would have said this 13 years ago, but I also didn’t see technology as being this transformative back then. Generative AI isn’t a flash in the pan; it’s not something trendy that’s going to fade in popularity. No, it’s a tool that’s going to completely change landscapes… and ultimately us.

Alien perspective

I think jokes like this are funny:

…because they hold a bit of truth.

We aren’t all that intelligent.

We draw imaginary lines on the globe to separate us. We fight wars in the name of angry Gods that are more concerned with our devotion than for peace and love. We care more about greed than about the environment. We spend more on weapons of destruction than we do on feeding the needy. We judge each other on superficial differences. We have unbelievable intellect, capable of incredible technological advancement, yet we let our monkey brains prevail.

Sure, we exhibit some intelligence; we are intelligent viruses.

At least that’s what I think an objective alien visiting our planet would think.

Conversation on an alien ship observing earth:

“Give them another 100 years… if they figure out how to not kill each other and the planet, then let’s introduce ourselves.”

Right now I’m not terribly optimistic about what those aliens will find in our future. ‘Civilized’ humans? A desolate planet? Artificial intelligence treating us like we treat ‘unintelligent’ animals? Or more of the same bickering, posturing, warring, and separatist views of humans trying to usurp dominance over each other?

It would be funny if it wasn’t sad.

New learning paradigm

I heard something in a meeting recently that I haven’t heard in a while. It was in a meeting with some online educational leaders across the province, and the topic of Chat GPT and AI came up. It’s really challenging in an online course, with limited opportunities for supervised work or tests, to know if a student is doing the work, or a parent, a tutor, or an Artificial Intelligence tool. That’s when a conversation came up that I’ve heard before. It was a bit of a stand-on-a-soapbox diatribe: “If an assignment can be done by Chat GPT, then maybe the problem is in the assignment.”

That’s almost the exact line we started to hear about 15 years ago about Google… I might even have said it: “If you can just Google the answer to the question, then how good is the question?” Back then, this prompted some good discussions about assessment and what we valued in learning. But that line is far more relevant to Google than it is to AI.

I can easily create a question that would be hard to Google. It is significantly harder to do the same with LLMs (Large Language Models) like Chat GPT. With a Google search I can’t find critical thinking challenges that haven’t already been shared by someone else. However, I can ask Chat GPT to create answers to almost anything. Furthermore, I can ask it to create things like a pros and cons list, then put those in point form, then do a rough draft of an essay, then improve on the essay. I can even ask it to use the vocabulary of a Grade 9 student. I can also give it a writing sample and ask it to write the essay in the same style.
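The chained-prompt workflow above can be sketched in code. This is purely illustrative: `build_prompt_chain` is a hypothetical helper I made up for this sketch, and in practice each prompt would be sent to an LLM along with the model’s previous answer as context.

```python
# Illustrative sketch of the chained-prompt workflow described above.
# build_prompt_chain is a hypothetical helper, not a real API; in practice
# each prompt here would be sent to an LLM together with its prior response.

def build_prompt_chain(topic: str, grade_level: int) -> list[str]:
    """Return the sequence of prompts an essay-building chain might use."""
    return [
        f"Create a pros and cons list for: {topic}.",
        "Rewrite that list in point form.",
        "Use those points to write a rough draft of an essay.",
        "Improve the essay's structure and flow.",
        f"Rewrite the essay using the vocabulary of a Grade {grade_level} student.",
    ]

# Print the chain so the step-by-step refinement is visible.
for step, prompt in enumerate(build_prompt_chain("year-round schooling", 9), 1):
    print(f"Step {step}: {prompt}")
```

The point of the sketch is that each step refines the previous output, which is exactly what makes LLM-generated work so much harder to design around than a single Google search.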

LLMs are not just a better Google; they are a paradigm shift. If we are trying to have conversations about how to catch cheaters (students using Chat GPT to do their work), we are stuck in the old paradigm. That said, I openly admit this is a much bigger problem in online learning, where we don’t see and work closely with students in front of us. And we are heading into an era where there will be no way to verify what is student work and what is not, so it’s time to recognize the paradigm shift and start asking ourselves new questions…

The biggest questions we need to ask ourselves are how can we teach students to effectively use AI to help them learn, and what assignments can we create that ask them to use AI effectively to help them develop and share ideas and new learning?

Back when some teachers were saying, “Wikipedia is not a valid website to use as research and to cite,” many more progressive educators were saying, “Wikipedia is a great place to start your research,” and, “Make sure you include the date you quoted the Wikipedia page, because the page changes over time.” The new paradigm will see some teachers making students write essays in class on paper or on computers with no internet access, while other teachers will be sending students to Chat GPT and helping them understand how to write better prompts.

That’s the difference between old and new paradigm thinking and learning. The transition is going to be messy. Mistakes are going to be made, by both students and teachers. Where I’m excited is in thinking about how learning experiences are going to change. The thing about a paradigm shift is that it’s not a slow transition but a leap into new territory. The learning experiences of the future will not be the same, and we can either try to hold on to the past, or get excited about the possibilities of the future.

AI and humans together

On Threads, Hank Green said, “AI isn’t scary because of what it’s going to do to humans, it’s scary because of what it’s going to allow humans to do to humans.”

I recently shared, in ‘High versus low trust societies’, examples of this: more sophisticated scams, sensationalized clickbait news titles and articles, and clever sales pitches, all ‘enhanced’ and improved by Artificial Intelligence. None of these are things AI is doing to us. All of them are ways AI can be used by people to take advantage of other people.

I quoted Hank’s Thread and said, “It’s just a tool, but so are guns, and look at how well we (mis)manage those!”

Overall I’m excited about how we will use AI to improve what we can do. There are already fields of medicine where AI can do thousands of hours of work in just a few hours. For example, in drug discovery: “A multi-institutional team led by Harvard Medical School researchers has launched a platform that aims to optimize AI-driven drug discovery by developing more realistic data sets and higher-fidelity algorithms.”

The true power and potential of AI isn’t what AI can do on its own, it’s what humans and AI can do together.

But I also worry about people using amazing AI tools as weapons. For example, creating viruses or even dirty bombs. These are things that are out of reach for most people now, but AI might make such weapons both more affordable and more available… to anyone and everyone.

All this to say that Hank Green is right. “AI isn’t scary because of what it’s going to do to humans, it’s scary because of what it’s going to allow humans to do to humans.”

We are our own worst enemy.

The true danger and threat of AI isn’t what AI can do on its own, it’s what humans and AI can do together.

AI is Coming… to a school near you.

Miguel Guhlin asked on LinkedIn:

“Someone asked these questions in response to a blog entry, and I was wondering, what would YOUR response be?

1. What role/how much should students be using AI, and does this vary based on grade level?

2. What do you think the next five years in education will look like in regards to AI? Complete integration or total ban of AI?”

I commented:

1. Like a pencil or a laptop, AI is a tool to use sometimes and not use other times. The question is about expectations and management.

2. Anywhere that enforces a total ban on AI is going to be playing a never-ending and losing game of catch-up. That said, I have no idea what total integration will look like. Smart teachers are already using AI to develop and improve their lessons; those teachers will know that students can, will, and should use these tools as well. But as in question 1… when it’s appropriate. Just because a laptop might be ‘completely integrated’ into a classroom as a tool students use doesn’t mean everything they do in a classroom is with and on a laptop.

I’ve already dealt with some sticky issues around the use of AI in a classroom and online. One situation last school year was particularly messy, with a teacher using Chat GPT as an AI detector rather than using dedicated AI detection tools. It turns out that Chat GPT is not a good AI detector. It might be better now, but I can confirm that in early 2023 it was very bad at this. I even put some of my own work into it, and Chat GPT told me that a couple of paragraphs were written by it, even though I wrote the piece about 12 years earlier.

But what do we do in the meantime? Especially in my online school, where very little, if any, work is supervised? Do we give up on policing altogether and just let AI do the assignments as we try to AI-proof them? Do we give students grades for work that isn’t all theirs? How is that fair?

This is something we will figure out. AI, like laptops, will be integrated into education. Back in 2009 I presented on the topic, “The POD’s are Coming!” (slideshow here), about Personally Owned Devices (laptops etc.) coming into our classrooms, and the fear of these devices. We are at that same point with AI now. We’ll get through this and our classrooms will adapt (again).

And in a wonderful full-circle coincidence, one of the images I used in the POD’s post above was a posterized quote by Miguel Guhlin.

It’s time to take the leap. AI might be new… but we’ve been here before.

Asimov’s Robot Visions

I’m listening to Isaac Asimov’s book Robot Visions on Audible: short stories that center around his Three Laws of Robotics (Asimov’s 3 Laws).

• The First Law: A robot may not injure a human being or, through inaction, allow a human being to come to harm.

• The Second Law: A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.

• The Third Law: A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
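The priority ordering in the laws, where each law applies only so far as it doesn’t conflict with an earlier one, can be sketched as a simple rule cascade. This is my own illustrative sketch, not anything from Asimov, and it reduces his stories’ rich ambiguity to three booleans purely to show the ordering.

```python
# An illustrative sketch (not from Asimov) of the laws' priority ordering:
# each law yields to the laws that come before it.

def action_permitted(harms_human: bool,
                     disobeys_order: bool,
                     obeying_would_harm_human: bool) -> bool:
    """Decide whether a robot may take an action under the Three Laws."""
    if harms_human:
        return False  # First Law: never harm a human, no exceptions
    if disobeys_order and not obeying_would_harm_human:
        return False  # Second Law: obey, unless obeying breaks the First Law
    return True       # Third Law: otherwise free to act (and self-preserve)

# A robot may ignore an order precisely when obeying it would harm a human.
print(action_permitted(harms_human=False,
                       disobeys_order=True,
                       obeying_would_harm_human=True))
```

Asimov’s stories are, in effect, explorations of every place where real situations refuse to collapse into clean booleans like these.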

These short stories all focus on ways these laws can go wrong, or perhaps awry is the better term. A few things make these stories insightful, but they are also very dated. The early ones were written in the 1940s, and the conventions of the time, including conversational language and sexist undertones, are blatantly exposed as ‘olde’.

It also seems that Asimov made two assumptions worth thinking about. First, that all robots and Artificial Intelligence would be constructed with the 3 laws at the core of the intelligence built into these machines. Many, many science fiction stories that followed also use these laws. However, creators of Large Language Models like Chat GPT are struggling to figure out what guardrails to put on their AI to prevent it from acting maliciously when meeting sometimes unscrupulous human requests.

Secondly, a few of the stories include a Robopsychologist. That’s right: a person (always female) who is an expert in the psychology of robots, a psychologist whose sole purpose is to get inside the minds of robots.

Asimov was openly concerned with AI, and specifically robots, acting badly, endangering humans, and even following our instructions too literally, with undesirable consequences. But he thought his 3 laws were a good start. Perhaps they are, but they are just a start. And with new AIs coming out with more computing power, more training, and fewer restrictions, I think Asimov’s concerns may prove prophetic.

The guardrails are off, and there is no telling what unexpected consequences and results we will see in the coming months and years.

Playing with Bing copilot

I asked Bing copilot about myself:

What does David Truss write about on http://davidtruss.com

The response:

David Truss writes about various topics related to education, learning, and technology on his website http://davidtruss.com. He shares his thoughts, experiences, and reflections as an educator, a student, a thinker, and a dreamer. He also showcases his presentations, podcasts, resources, and projects that he has created or contributed to. Some of the themes that he explores on his website are:

  • Inquiry-based learning and teaching
  • Blended learning and distributed learning
  • Leadership and innovation
  • Student voice and choice
  • Global citizenship and social responsibility
  • Personalized learning and professional growth

You can find more information about David Truss and his work on his website 1 or on his blog Pair-a-Dimes for Your Thoughts.

—–

I followed up more specifically with my daily blog:

What does David Truss write about on http://daily-ink.davidtruss.com?

The response:

David Truss writes about his personal and professional experiences, reflections, and insights on his daily blog Daily-Ink. He describes his writing as his artistic expression, and his keyboard as his brush. He commits to writing daily to feel like an artist. He covers various topics related to education, learning, technology, leadership, innovation, global citizenship, and more. Some of his recent posts are:

  • Rewind: He looks back at the past year of the pandemic and how it changed his life and work. He also rewinds to previous years and how different decisions and events shaped his career and family. He wonders how he will look at February 2021 a decade from now.
  • Headspace: He shares his thoughts on meditation and mindfulness, and how they help him cope with stress and anxiety. He recommends some apps and podcasts that he uses to practice meditation and mindfulness.
  • Cruise ships and education: He draws an analogy between cruise ships and education systems, and how they both need to adapt to the changing needs and expectations of their customers and stakeholders. He argues that education needs to be more personalized, flexible, and relevant for the learners of today and tomorrow.

—–

I had to update the links to the posts because they all went to my blog’s home page, but those were the correct post titles. Also, the ‘Headspace’ post made no mention of meditation and mindfulness, though I do mention those frequently in other posts… and that particular post isn’t really typical of the things I write about.

I then switched from asking questions to clicking on Bing’s copilot suggested questions. The first one was:

What is David Truss’s background?

The response:

David Truss is a Canadian educator and administrator who is currently the principal of three programs in Coquitlam, BC: Coquitlam Open Learning, Inquiry Hub Secondary School, and Suwa’lkh Learning Centre. He has a background in inquiry-based learning, blended learning, distributed learning, leadership, innovation, global citizenship, and social responsibility. He is also a blogger, podcaster, author, and speaker who shares his thoughts and experiences on education, learning, and technology. He has a website Pair-a-Dimes where he showcases his work and resources. He also wrote an eBook called Twitter Edu where he shares his tips and tricks on using Twitter for professional learning and networking.

—–

This time I didn’t change the links… because I think it’s valuable to mention how wrong they are. The link to my blog, Pair-a-Dimes, above actually goes to a Barbara Bray podcast about me, and the link to my Twitter book actually goes to the Encyclopedia Britannica page for Liz Truss, former Prime Minister of the United Kingdom. The rest of the data is quite accurate (other than that I no longer run Suwa’lkh), but it seems copilot has a linking issue.

The next suggested question:

Can you show me some of his projects?

Sure, I can show you some of his projects that he has created or contributed to. Here are some examples:

  • Inquiry Hub Secondary School: This is a high school in Coquitlam, BC, Canada that focuses on independent learning and inquiry-based projects. David Truss is the principal of this school and one of the founders. He also shares his experiences and insights on leading and learning in this innovative environment. You can learn more about this school on its website 1 or on Wikipedia 2.
  • 10 Truss Houses That Brilliantly Blend Architecture With Engineering: This is an article that David Truss wrote for Architizer Journal, where he showcases 10 residential projects that use trusses as a major design element. He explores how these structural systems frame spatial experiences and respond to diverse landscapes. You can read his article here 3.
  • Twitter Edu: This is an eBook that David Truss co-authored with Eric Sheninger, where they share their tips and tricks on using Twitter for professional learning and networking. They explain how to create a personal learning network, find and share resources, and connect with other educators around the world. You can download their eBook here .

—–

The middle point is not remotely about me, and the last one didn’t link to my e-book. This request was the first that really deviated from my profile, with the architecture article about truss supports. I had to do a search to find this article since it wasn’t linked, and the link it does provide in the results above goes to an article about Liz Truss.

I followed one more copilot suggested question:

What is David Truss’s Twitter handle?

This was the response:

David Truss’s Twitter handle is @datruss. He is a Canadian educator, administrator, blogger, podcaster, author, and speaker. He tweets about education, learning, technology, leadership, innovation, global citizenship, and more. He has over 18.5K followers and follows over 10.6K accounts. He joined Twitter in March 2008 and has posted over 55.6K tweets as of April 2023. You can follow him on Twitter to get his latest updates and insights

—–

Once again the links went to articles about Liz, but the information was correct other than the totals for my followers/following, which are both around 13K.

Overall, I’m impressed by the accuracy of copilot and disappointed by the lack of reliable links. I think of Bing copilot a bit like I think of Wikipedia: it’s a great place to start your research, but not a definitive source to base any answers on. It’s a copilot when you are taking off on a search, but once you are in the air, you need to take over the controls and guide the search plane to more credible or primary sources of information.

One final thought: the HTML code that pasted onto my blog from copilot was very messy. I’d suggest pasting without formatting and adding the correct links after you have pasted everything… lesson learned!