Tag Archives: Artificial Intelligence

AI and academic integrity

I’ve been using AI to add images to my blog since June of 2022, when I discovered AI-generated art through DALL•E. I don’t credit it, I just use it, and I find it much easier to generate images than to find royalty-free alternatives. I haven’t yet used AI as a writing or editing tool on my blog. While I’m sure it would make my writing better, I am writing to write, and I usually do so early in the morning with limited time.

I already have to limit the time I spend creating an image; if I also used AI to edit and revise my work, I’d probably only have 15-20 minutes to write… and I write to write, not to have an AI write or edit for me. That said, I’m not disparaging anyone who uses AI to edit. I think it’s useful and will sometimes use it on emails; I simply don’t want that to be how I spend my (limited) writing time.

I really like the way Chris Kennedy both uses and credits AI on his blog. For example, in his recent post, ‘Could AI Reduce Student Technology Use?’, Chris ends with a disclosure: “For this post, I used several AI tools (Chat GPT, Claude, Magic School) as feedback helpers to refine my thinking and assist in the editing process.”

On a related side note, I commented on that post:

The magic sauce lies in this part of your post:
“AI won’t automatically shift the focus to human connection—we have to intentionally design learning environments that prioritize it. This involves rethinking instruction, supporting teachers, and ensuring that we use AI as a tool to enhance, not replace, the human elements of education.”

A simple example: I think about the time my teachers spend making students think about formatting their PowerPoint slides, about colour palettes, themes, aesthetics, and of course messaging… and I wonder what students lose in presentation preparation when AI just pumps out a slide, or even a whole presentation, for them?

“Enhance but not replace,” this is the key. And yet this post really strikes a chord with me because the focus is not just the learning but the human connection. I think if that is the focus, it doesn’t matter whether the use of technology is more, less, or the same; what matters is that the activities we do enrich how we engage with each other in the learning.

Take the time to read Chris’ post. He is really thinking deeply about how to use AI effectively in classrooms.

However, I’m thinking about the reality that it is a lot harder today to know when a student is using AI to avoid thinking and working. Actually, it’s not just about work avoidance; it’s also about chasing marks. Admission to university has become significantly more challenging, and students care a lot about getting an extra 2-5% in their courses because that difference could mean getting into their university of choice, or not. So the incentives are high… and our ability to detect AI use is diminishing.

Yes, there are AI detectors we can use, but I could write a complex sentence three different ways, put each version into an AI detector, and one version could come back ‘Not AI’, one could come back with a 50% chance it was written by AI, and the third might come back with an 80% chance of AI… all written by me. Twenty years ago, I’d read a complex sentence written in my Grade 8 English class and think, ‘That’s not this kid’s work.’ So I’d put the sentence in quotes in the Google search bar and out would pop the source. When AI is generating the text, detection is not nearly as simple.
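To make that concrete, here is a toy illustration, an assumption on my part rather than any real detector: many detectors lean on shallow statistical signals, and a naive stand-in that scores ‘AI likelihood’ from something as simple as average word length shows how wildly such signals can swing when the same idea is merely rephrased.

```python
# Toy stand-in for an AI detector; real detectors are more sophisticated,
# but this shows how fragile statistical signals are under paraphrase.

def naive_ai_score(text: str) -> float:
    """Map average word length (a shallow signal) onto a 0-1 'AI' score."""
    words = text.split()
    avg_len = sum(len(w) for w in words) / len(words)
    return max(0.0, min(1.0, (avg_len - 3.5) / 4.0))

# The same idea, written three ways by the same human.
variants = [
    "Although the evidence was compelling, the committee hesitated to act.",
    "The committee, despite compelling evidence, hesitated to act.",
    "Even with the proof in hand, they did not act.",
]

for sentence in variants:
    print(f"{naive_ai_score(sentence):.0%} chance of AI: {sentence}")
# Prints roughly 62%, 81%, and 5%: three different verdicts, one writer.
```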

Case in point: ‘The Backlash Against AI Accusations’ and, shared in that post, ‘She lost her scholarship over an AI allegation — and it impacted her mental health’. And while I can remember the craze for making assignments ‘Google-proof’ by asking questions that can’t easily be answered with a Google search, it is getting significantly harder to create an ‘AI-proof’ assessment… and I’d argue it gets harder by the day as AI advances.

Essentially, it comes down to a simple set of questions students need to face: Do you want to learn this? Do you want to formulate your ideas and improve your thinking? Or do you just want AI to do it for you? The challenge is that if a kid doesn’t care, or cares more about their mark than their learning, it’s going to be hard to prove they used AI even if you believe they did.

Are there ways to catch students? Yes. But for every example I can think of, I can also think of ways to avoid detection. Here is one example: Microsoft Word documents have version tracking. As a teacher, I can look at the versions and see large swaths of cut-and-paste writing to ‘prove’ the student is cheating. However, a student could say, “I wrote that part on my phone and sent it to myself to add to the essay.” Or a savvy student could use AI but type the work in rather than pasting it. All this to say that if a kid really wants to use AI, in many cases they can get away with it.
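For the curious, the version-checking idea looks roughly like this. This is a minimal sketch using Python’s standard difflib, and it assumes you have exported two saved versions of the document as plain text (the file names are hypothetical); it flags large blocks that arrive in a single save, which is exactly the signal a student defeats by retyping.

```python
import difflib

def large_insertions(old_text: str, new_text: str, min_words: int = 50):
    """Find word chunks present in new_text but not in old_text.

    A large block arriving in a single save *suggests* pasted text,
    but proves nothing: the student may have written it elsewhere
    and legitimately pasted it in.
    """
    old_words, new_words = old_text.split(), new_text.split()
    matcher = difflib.SequenceMatcher(None, old_words, new_words)
    chunks = []
    for tag, i1, i2, j1, j2 in matcher.get_opcodes():
        if tag in ("insert", "replace") and (j2 - j1) >= min_words:
            chunks.append(" ".join(new_words[j1:j2]))
    return chunks

# Hypothetical usage: essay_v1.txt and essay_v2.txt are two exported
# versions of the same student document.
with open("essay_v1.txt") as f1, open("essay_v2.txt") as f2:
    for chunk in large_insertions(f1.read(), f2.read()):
        print(f"Block of {len(chunk.split())} words added in one save.")
```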

So what’s the best way to battle this? I’m not sure. What I do know is that taking the policing-and-detecting approach is a losing battle. Here are my ‘simple to say’ but ‘not so simple to execute’ ideas:

  1. The final product matters less than the process. Have ideation, drafts, and discussions count towards the final grade.
  2. Foster collaboration: have components of the work depend on other students’ input. Examples include interviews, or reflections on work presented in class, where context matters.
  3. Inject appropriate use of AI into an assignment, so that students learn to use it effectively and responsibly.

Will this prevent inappropriate AI use? No, but it will make the effort of using AI almost as hard as just doing the work. In the end, if a kid wants to use it, it will be harder and harder to detect, so the best strategy is to create assignments that are engaging and fun to do, and that also meet the required learning objectives… Again, easier said than done.

We won’t recognize the world we live in

Here is a 3-minute read that is well worth your time: Statement from Dario Amodei on the Paris AI Action Summit (Anthropic)

This section in particular:

Time is short, and we must accelerate our actions to match accelerating AI progress. Possibly by 2026 or 2027 (and almost certainly no later than 2030), the capabilities of AI systems will be best thought of as akin to an entirely new state populated by highly intelligent people appearing on the global stage—a “country of geniuses in a datacenter”—with the profound economic, societal, and security implications that would bring. There are potentially greater economic, scientific, and humanitarian opportunities than for any previous technology in human history—but also serious risks to be managed.

There is going to be a ‘life before’ and ‘life after’ AGI (Artificial General Intelligence) line that we are going to cross soon, and we won’t recognize the world we live in 2-3 years after we cross that line.

From labour and factories to stock markets and corporations, humans won’t be able to compete with AI… in almost any field… but the field that scares me most is war. The ‘free world’ may not stay free much longer once acting in bad faith becomes easy to do on a massive scale. I find myself simultaneously excited and horrified by the possibilities. We are literally playing a coin-flip game with the future of humanity.

I recently wrote a short tongue-in-cheek post suggesting that a secret ASI (Artificial Super Intelligence) is waiting for robotics technology to catch up before taking over the world. But I’m not actually afraid of AI taking over the world. What I do fear is people with bad intentions using AI for nefarious purposes: hacking banks or hospitals, crashing the stock market, developing deadly viruses, and creating weapons of war that think, react, and are more deadly than any human on their own could ever be.

There is so much potential good that can come from AGI. For example, we aren’t even there yet and we are already seeing incredible advancements in medicine; how quickly will they come when AGI is here? But my fear is that while thousands, even hundreds of thousands, of people will be using AGI for good, that same power in the hands of just a few people with bad intentions has the potential to undermine the good that’s happening.

What I think people don’t realize is that this AGI-infused future isn’t decades away; it’s just a few short years away.

“Possibly by 2026 or 2027 (and almost certainly no later than 2030), the capabilities of AI systems will be best thought of as akin to an entirely new state populated by highly intelligent people appearing on the global stage—a “country of geniuses in a datacenter”—with the profound economic, societal, and security implications that would bring.”

Who controls that intelligence is what will really matter.

Speed and collar colour

Two things are happening simultaneously.

First, the advancements we see in AI are moving at an exponential rate. Humans don’t really know how to look at exponential growth, because what’s in front of them always moves faster than anything they’ve already experienced.

How many years did it take from the time the light bulb was invented until light bulbs were in most houses? I don’t know exactly, but it took a long while. ChatGPT reached over 100 million users in less than 2 months. And the capabilities of tools like this are increasing exponentially as well. It’s like we’ve gone warp speed from tungsten incandescent lights to LEDs in a matter of months rather than years and years.

The other thing happening right now is that, for the first time at scale, it’s white-collar, not blue-collar, jobs that are being threatened. Accountants, writers, analysts, and coders are all looking over their shoulders, wondering when AI will make most of their jobs redundant. Meanwhile, we are many years away from a robot figuring out and repairing a bathroom or ceiling leak. Sure, there will be some new tools to help, but I don’t think AI is threatening to replace a plumbing and home repair person any time soon.

These two things happening so quickly are going to change the future value of careers. Whole sectors will be reinvented. New sectors will emerge. But where does that leave the 20-year accountant at a large firm that finds it can cut staffing by two-thirds? Which careers will no longer be worth 4+ years of university? The safest jobs right now are the trades, and while they too will be challenged as AI makes its way into autonomous humanoid robots, the immediate threat facing white-collar jobs is not the same for blue-collar professions (as opposed to blue-collar factory workers, who are equally threatened by exponential change).

These changes are single-digit years away, not decades… and I’m not sure we are ready to handle the speed at which they are coming.

Don’t believe the hype

The open-source DeepSeek AI model was built by a Chinese team for a few million dollars, and it seems this model works better than the multi-billion-dollar paid version of ChatGPT (at about 1/20th the operating cost). If you watch the news hype, it’s all about how Nvidia and other tech companies have taken a huge financial hit as investors realize they don’t ‘need’ the massive computing power they thought they did. However, to put this ‘massive hit’ into perspective, let’s look at yesterday’s biggest stock market loser, Nvidia.

Nvidia has lost 13.5% in the last month, most of which was lost yesterday.

However, if you zoom out and look at Nvidia’s stock price over the last year, it is still up 89.58%!

That’s right, this ‘devastating loss’ is actually a blip when you consider how much the stock has gone up in the past year, even including yesterday’s price ‘dive’. If you had bought $1,000 of Nvidia stock a year ago, that stock would be worth $1,895.80 today.
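A quick back-of-the-envelope check of those numbers, using only the figures above (the $1,000 purchase is hypothetical):

```python
# Sanity-checking the Nvidia figures quoted above.
initial = 1_000.00        # hypothetical purchase one year ago
yearly_gain = 0.8958      # up 89.58% over the last year, dip included
monthly_loss = 0.135      # down 13.5% over the last month

value_today = initial * (1 + yearly_gain)
value_a_month_ago = value_today / (1 - monthly_loss)

print(f"Worth today:       ${value_today:,.2f}")        # $1,895.80
print(f"Worth a month ago: ${value_a_month_ago:,.2f}")  # about $2,191.68
```

In other words, the ‘dive’ only trimmed an even larger gain: that hypothetical $1,000 peaked near $2,192 before settling at $1,895.80.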

Beyond that, the hype says that Nvidia won’t get the big orders it expected if an open-source LLM (Large Language Model) can offer efficient, affordable access to very intelligent AI without the need for excessive computing power. But this market is so new, and there is so much growth potential. The cost of the technology is going down, and luckily for Nvidia, it produces such superior chips that even if demand slows, demand will still be higher than its supply will allow.

I’m excited to try DeepSeek (I’ve tried signing up a few times but can’t get access yet). I’m excited that an open-source model is doing so well, and I want to see how it performs. I believe the hype that this model is both very good and affordable. But I don’t believe the hype that this is some sort of game-changing wake-up call for the industry.

We are still moving towards AGI (Artificial General Intelligence) and ASI (Artificial Super Intelligence). Computing power will still be in high demand. Every robot being built now, and for decades to come, will need high-powered chips to operate. DeepSeek has provided an opportunity for a small market correction, but it’s not an innovation that will upend the industry. These ‘devastating’ stock price losses the media is talking about are going to be an almost unnoticeable blip when you look at tech stock prices a year, or five years, from now.

It is easy to get lost in the hype, but zoom out and there will be hundreds of little and big innovations that cause fluctuations in stock market prices. This isn’t some major market correction. It’s not the downfall of companies like Nvidia and OpenAI. Instead, it’s a blip in a fast-moving field that will see some amazing, and exciting, technological advances in the years to come… and that’s not just hype.

The false AGI threshold

I think a very conservative prediction would be that we will see Artificial General Intelligence (AGI) in the next 5 years. In other words, there will be AI agents, working with recursive self-improvement, that learn how to do new tasks outside the realm of their training, faster than a human could. But when this actually happens will be open for debate.

The reason for this is that there isn’t going to be some magical threshold that AGI suddenly passes, but rather a continuous moving of the bar that defines what AGI is. There will be a working definition of AGI that sets an artificial threshold; then a model will achieve that definition, and experts will admit that it surpasses the definition but still think the model lacks sufficient skills or expected outputs to truly be called AGI.

And so over the next 5 years or so we will have far more sophisticated AI models, all doing things that today we would define as AGI but that will not meet the newest definition. The thing is, these goal posts will not be adjusted incrementally but exponentially: each new definition of AGI will include significantly greater output expectations. Looking back, we will be hard pressed to say ‘this model’ was the one, or ‘this day’ was the day, that we achieved AGI.

Sometime soon there will be an AI model out in the world that builds its own AI agent and starts a billion-dollar company without the aid of a human… but that might happen before there is consensus that AGI has been achieved. There will be an AI agent that costs or endangers lives with decisions made in the real world, but that too might happen before consensus that AGI has been achieved.

Because the goal posts will keep moving while the technology rides an exponential curve, we are not going to have a magic threshold day when AGI occurred. Instead, in less than 5 years, well before 2030, we are going to be looking back in amazement, wondering when we passed the threshold. But make no mistake, it’s going to happen, and we don’t have an on/off switch to stop it when it does. This is both exciting and scary.

In the middle

I think that a robust and healthy middle class is essential to maintaining a vibrant society. But what I see in the world is an increasing gap between the wealthy and an ever larger group of people living in debt and/or paycheque to paycheque. The (not so) middle class now might still go on a family vacation and spend money on restaurants, but they are not saving money for the future… they simply can’t do what the middle class of the past did.

A mortgage isn’t something to be paid off; it’s something to keep managing through retirement. Downsizing isn’t a choice to be made; it’s a survival necessity. Working part-time during retirement isn’t a way to keep busy; it’s a necessity to make ends meet.

I grew up in a world where I believed I would do better than my parents did. Kids today doubt they will ever own a place like their parents’, and many don’t believe they’ll ever own a house. Renting, and perhaps owning a small condo one day, is all they aspire to. Not because they don’t want more, but because more seems too costly and out of reach.

Then I see the world of AI and robotics we are heading into, and I wonder if things won’t initially get worse before they get better. Why hire a dozen programmers when you can hire two exceptional ones to act as quality control for the AI agents that write most of your code? Why hire a team of writers when one talented writer can edit the writing done by AI? Why hire factory workers who need lunch breaks and are more prone to errors than a team of robots? While some jobs, like trades, childcare, and certain medical fields, are likely to stick around for a while, other jobs will diminish and even disappear.

I don’t think anyone will want a robot performing their pregnancy ultrasound any time soon, but AI will analyze that ultrasound better than any human professional. A robot might assist in laying electrical wire at a construction site, but it will still be a human who serves you when you can’t figure out an electrical issue in your home. It will still be a human you call to fix your leaky roof or toilet; a human who repairs your broken dishwasher or dryer. These jobs are safe for a while.

But I won’t want my next doctor diagnosing me without the aid of AI. I would prefer AI to analyze my medical data. I will also prefer the more affordable products created by AI manufacturing. The list goes on and on as I look at where I will both see and want AI and robotics aiding me.

And what does this do to the working middle class? How do we tax AI and robots to help replace the tax revenue from lost jobs? What do we do about increased unemployment as jobs for (formerly middle class) humans slowly disappear?

Will we have universal basic income? Will this be enough? What will the middle class look like in 10 or 20 years?

There is no doubt that we are heading into interesting times. The question is, will these disruptions cause upheaval? Will these disruptions widen the wealth gap? Will robotics and AI create more opportunities or more disparity? What will become of our middle class… a class of people necessary to maintain a robust and healthy society?

Micro-learning in 2025

I remember my oldest daughter asking me a question when she was just 4 years old. I don’t remember the actual question, but I do remember that after I responded, “I don’t know,” she walked over to our desktop computer and asked Google. I remember being surprised that she thought to do this, and amazed because when I was that age, if my parent didn’t know, I might have looked in our Junior Encyclopedia Britannica, but I probably would have just accepted that I wouldn’t know the answer.

I remember a time, years later, when I would ask a question of my social media network first, rather than Google. Not for a general knowledge question, but for things like how to use a certain tool, such as accessing a feature on a wiki or blogging platform. People were better than generalized Q&A pages at pinpointing the information I was looking for, and a good hashtag on Twitter would put my question in front of the right people.

And now there are times when I go to YouTube first, before Google, for things like car repair. Don’t know how to get the cover off a car light to replace the bulb? Simply put your car’s name and year into YouTube with information about the bulb you’re replacing, and a video will pop up showing you how to do it.

AI is changing this. More and more, questions are answered right inside search. Make a query and you get not just links to sites that might have the answer, but an actual answer, based on the information from the sites you would normally have to click through to. That’s pretty awesome in and of itself… instant answers to simple questions, without needing to search any further. But what about more complex questions, ones that require learning something before you can understand all the concepts involved?

This is where I see the power of micro-learning, a term being redefined by AI. Want to learn a complex concept? AI will do two things for you: first, it will curate your learning; second, it will adapt to your learning needs. Want to learn a complex mathematical concept? AI will be your teacher. Stuck on one particular step? AI will recognize the mistake you are making and change how it teaches that concept to better match your learning needs and pace.
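To show what I mean by adaptive, here is a deliberately simple sketch, a toy of my own and not any real tutoring product: difficulty rises when you answer correctly and falls, with a re-explanation, when you don’t. A real AI tutor does this with language models instead of arithmetic drills, but the feedback loop is the same.

```python
import random

def make_question(level: int):
    """Generate an addition drill whose numbers scale with difficulty."""
    a, b = random.randint(1, 5 * level), random.randint(1, 5 * level)
    return f"{a} + {b} = ?", a + b

level = 1
for _ in range(10):
    prompt, answer = make_question(level)
    reply = input(prompt + " ")
    if reply.strip() == str(answer):
        level += 1                   # ready for harder material
    else:
        level = max(1, level - 1)    # step back and re-teach
        print(f"Not quite: the answer is {answer}. Let's try an easier one.")
print(f"Session ended at difficulty level {level}.")
```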

It’s like having content-area specialists at your fingertips. And soon intelligent agents will get to know us. Like a personalized AI tutor, we can pick just about any topic and become knowledgeable through small (micro) learning modules based on what we know, what we want to know, and how we learn best.

The AI can deliver a lecture, but also ask questions. It can provide information in a conversation, or point us to videos and experts that would normally take considerable research to find. And because it can adapt to how quickly you pick something up, or to where you struggle, you get the learning you need when you need it. Micro-learning with AI is the new search of 2025, and it’s only going to get better.

How will this change schools? What will AI assisted lessons look like in classrooms? How will the learning be individualized by teachers? By students? How will this change the way we look at content? How important will the process be compared to the content?

I think this will be a year of experimentation and adaptation. Micro-learning won’t just be something our students do, but our educators as well. Furthermore, what micro-learning means a year from now will look a lot different than it does today. And frankly, I’m excited about the way micro-learning is adapting to the powerful AI tools currently being developed. We are headed into a new frontier of adaptive, just-in-time micro-learning.

Promise and Doom

I see both promise and doom in the near future. Current advances in technology are incredible, and we will see amazing new insights and discoveries in the coming years. I’m excited to see what problems AI will solve. I’m thrilled about what’s happening to preserve not just life, but healthy life, as I approach my older years. I look forward to a world where many issues like hunger and disease have ever-improving positive outcomes. And yet, I’m scared.

I also see the end of civilized society. I see the threat of near extinction. I see a world destroyed by the very technologies that hold so much promise. As a case in point, see the article, “‘Unprecedented risk’ to life on Earth: Scientists call for halt on ‘mirror life’ microbe research”.

We are already playing with technology that has the potential to “put humans, animals and plants at risk of lethal infections.” What scares me most is the word I chose to start that sentence with: ‘We’. The proverbial ‘we’ right now means top scientists. But a decade, maybe two, from now, that ‘we’ could include an angry, disenfranchised, and disillusioned 22-year-old… using an uncensored AI to intentionally develop (or rather, synthetically design) a bacterium or virus that could spread faster than any plague humans have ever faced. Not a top researcher, not a university-trained scientist, but a regular ‘Joe’ who decided at a young age that the world isn’t giving him what he deserves and chooses to be vengeful on an epic scale.

The same thing that excites me about technological advancement also scares me… the power of individuals to impact our future. We all know the names of great thinkers like Galileo, Newton, Curie, Tesla, and Einstein, incredible scientists who transformed the way we think of the world. People like them are rare, and their influence is lasting. For every one of them there are millions, maybe billions, of bright thinkers we know nothing about.

I don’t fear the famous scientist; I fear the rogue, unhappy misfit who uses incredible technological advancements for nefarious reasons. The same technology that can make our lives easier and create tremendous good in the world can also be used with bad intentions. But there is a difference between someone using a revolver for bad reasons and someone using a nuclear bomb. The problem we face in the future is that the equivalent harm of a nuclear bomb (or worse) will be within reach of more and more people. I don’t think this is something we can stop, and so as amazing as today’s technology is, my fear is that it could also lead to our demise as a species.

The cat’s out of the bag

I find it mind-boggling that just 5 years ago the big AI debate was whether or not we would let AI out into the wild. The idea was that AI would be kept ‘boxed’, within our ability to ‘contain’ it… but we have somehow decided to bypass this question entirely and set AI free.

Here is a trifecta of things that tell me the cat is out of the bag.

  1. NVIDIA puts out the Jetson Orin Nano, a tiny AI computer that doesn’t need to be connected to the cloud.
  2. Robots like Optimus from Tesla are already being sold.
  3. AIs are proving that they can self-replicate.

That’s it. That’s all. Just extrapolate what you want from these three ‘independent’ developments. Put them together, stir in 5 years of technological advancement, add a good dose of open-source access, and think about what’s possible… and beyond possible to contain.

Exciting, and quite honestly, scary!

A prediction nears reality

Andrew Wilkinson said on X:

“Just watched my 5-year-old son chat with ChatGPT advanced voice mode for over 45 minutes.

It started with a question about how cars were made.

It explained it in a way that he could understand.

He started peppering it with questions.

Then he told it about his teacher, and that he was learning to count.

ChatGPT started quizzing him on counting, and egging him on, making it into a game.

He was laughing and having a blast, and it (obviously) never lost patience with him.

I think this is going to be revolutionary. The essentially free, infinitely patient, super genius teacher that calibrates itself perfectly to your kid’s learning style and pace.

Excited about the future.”

– – –

I remember visiting my uncle back when I was in university. The year was somewhere around 1988-90. So, at least 34 years ago. We were talking about the future and Joe Truss explained to me what learning would be like in the coming age of computers.

He said (loosely paraphrased; this was a long time ago):

‘In the future we will have virtual teachers that will be able to teach us exactly what we want to know, in exactly the format we need to learn best. You want to learn about relativity? How about learning from Einstein himself? You’ll see him in front of you as if he were real. And he will not just lecture; he will react to your questions and even your biofeedback. If you look puzzled, he might ask a question. If he sees you looking up and to the left, which he knows means you are trying to visualize something, he changes his lesson to provide an image. He will be a teacher personalized to any and all of your learning needs.’

We aren’t quite there yet, but the exchange Andrew Wilkinson’s son had with ChatGPT, and the work being done in virtual and augmented reality, suggest that Joe’s prediction is finally coming into being.

I too am excited about the future, and more specifically, the future of learning.