Tag Archives: AI

Right and wrong

I was talking with a colleague yesterday and he shared two interesting things with me. The first was that he has a friend who works for a large company, I think he said Oracle, but I’m not 100% sure. He told me that this friend has unlimited holidays, but the output expectations are so high that she can’t really take advantage of this. The premise is that you can take more time off than just the designated 10-15 days a year (as a traditional US company would allow) as long as you get your job done. The catch is, the workload probably doesn’t even allow that much time off.

That’s a case of ‘The right idea but the wrong outcome’.

The other thing he said was a prediction that I agree with. He predicts that very soon we’ll see the implementation of 4-day work weeks. The reason he thinks this will happen sooner rather than later is AI and robotics. Essentially the economy requires citizens to have buying power, and so you need a paid workforce… but there won’t be enough jobs to sustain everyone putting in 40-hour, 5-day work weeks, and each worker will also be more efficient thanks to their use of AI and robotics.

That’s a case of ‘The right idea but for the wrong reason’. The societal benefits of a 4-day work week shouldn’t have to wait for technological advancement in my humble opinion.

I would like to think that we are advanced enough as a species that we could do the right things for the right reasons, but more often than not we have to accept the wrong to get the right. We have ‘just’ wars, citizen surveillance to fight terrorism, over-censorship to reduce perceived conflict… the morality of these is dependent on how one is affected.

If you live in a country where you have many freedoms but fear violence, you might appreciate heavy surveillance. If you live in a country where expressing your opinion could get you jailed, surveillance feels Orwellian.

‘The right idea but the wrong outcome.’

‘The right idea but for the wrong reason.’

Right and wrong.

“Don’t Bring a Résumé. Bring Receipts.”

In the article ‘The Proof Economy’, Anand Sanwal says, “Don’t Bring a Résumé. Bring Receipts.” Anand starts with two definitions, saying that we’ve moved from the Parchment Economy to a Proof Economy:

“We’ve entered the Proof Economy, a world where the most valuable signal isn’t where you went to school, what your GPA was, or which honors you collected, but what you’ve actually done and can do. In this new landscape, demonstrated ability trumps pedigree, and what you’ve built matters more than where you studied.

Meanwhile, the Parchment Economy, that centuries-old system where formal credentials and institutional validation serve as proxies for capability, is losing its monopoly on opportunity. The elaborate dance of transcripts, recommendation letters, diplomas and prestige markers is becoming increasingly irrelevant in field after field.”

This is something I’ve been describing for a while now, without properly defining the difference between the two ‘economies’. Beyond credentialed professionals like doctors, engineers, and lawyers, what now matters most is your portfolio, not your schooling certificates. ‘What is it that you can do better than others to earn a spot in our organization?’ (Regardless of your credentials.)

Anand says,

“When anyone can access expertise through prompts and build a prototype video, software product or design via AI, the value shifts decisively from knowledge possession to knowledge application.”

But for me the most interesting section in his article is:

What Education Needs to Become

If we accept that we’re entering the Proof Economy, schools can’t just add a few electives or rethink assessment to focus on progress and not perfection.

They need to rewire what they reward.

We should expect:

  • Projects over problem sets: Real-world challenges that apply knowledge, not just recall it.
  • Portfolios over transcripts: A body of work that shows thinking, skill, and growth.
  • Public work over private grading: Output that lives in the world, not a Google Doc.
  • Coaching over compliance: Adults who challenge and support, not just evaluate.
  • Failure as fuel: A system that treats failed attempts as essential steps, not permanent marks.

At Inquiry Hub Secondary our students are still entrenched in the old public education system in that they complete required courses to meet provincial high school graduation requirements, and most of them still head off to university, college, or a technical institute to further their studies. However, along the way they are given the time, space, and credits (towards their graduation) to produce documentation of learning in areas of interest. They have an opportunity to design and build projects (documented receipts) that most other students could only get done on their own time, outside of traditional classrooms.

They also get to live in an environment where they have to cooperate with fellow students in scrum projects with tight timelines and defined roles (not just group projects with everyone having identical outcomes and expectations). They have to do frequent presentations, alone and in groups, with training to give and receive feedback with radical candour. They understand iteration, they pivot based on where their learning takes them, and they embrace failure as learning opportunities because sometimes obstacles become the way. And they are provided with greater and greater autonomy over their time as they progress from Grade 9 to 12.

Essentially, Inquiry Hub students still get their résumé of courses, but they are also provided the opportunity to bring receipts.

Students choose, AI delivers

Thinking about AI use in schools, AI can currently assist with the vast majority of assignments quite easily. Students can use this to extend their learning, or to have it do the work for them and make the work significantly easier. And then teachers become police… not teachers, trying to figure out how a student is using AI to cheat.

Two big takeaways, one being a positive shift, the other being a challenge:

  1. Process matters more than final products.
  2. Students will choose whether AI helps them or does the work for them: will it assist their thinking, or do the thinking for them?

We have control over whether we focus on process or content. Students have the choice as to whether they use AI to help them think or to think for them. A focus on process can reduce how much a student relies on AI… but a student can always get AI to assist them with the next step.

I’m excited about how students will use AI to dig deeper and extend their learning. I’m equally concerned for students who are choosing to use AI to take the friction out of learning… opting out of thinking for themselves. Whichever of these approaches students choose, AI will deliver.

A bigger gig economy

The gig economy is a system where people work as freelancers or take on side jobs for companies instead of having a regular full‑time position. Uber drivers are a great example of this. There are a few reasons why I think the gig economy is going to grow:

  1. High prices are making a side hustle of some sort essential if you want to enjoy things beyond what salaries allow.
  2. Companies like the structure because pay is based on performance rather than a set salary.
  3. Entertainment is shifting to live performances (gigs) as a primary form of earnings. Getting your music streamed is not enough to keep most musicians going without a concert tour.
  4. A trend in social media now is a lot of affiliate marketing. Only the biggest social media stars can make this a full-time living; for the vast majority, affiliate marketing is nothing more than part of the gig economy.
  5. We are going to see a wave of AI trainers needed to train robots to do everyday skills. Work as a maid in a hotel? We’ll pay you to wear a GoPro for two weeks while you work. That video will train an AI that’s going to take your job less than a decade later.

Companies are afraid to hire full time staff. Money is better spent on technology than on training a human on a fixed salary. As a result, the gig economy is just going to get bigger and bigger.

UCI rather than UBI

As AI and robotics continue to scale at unimaginable speeds, with AI getting exponentially smarter and robots increasingly more agile, we’ve got to realize how many jobs will disappear in a very short time period. This isn’t a gradual transition, it’s not a move from one field to another like farmers transitioning into factory workers during the industrial revolution. It’s a massive shift from human labour to machine labour that the world’s economies simply aren’t designed to absorb.

I’ve seen a growth in the number of people talking about the need for Universal Basic Income (UBI), but I fear this isn’t enough. Giving millions if not billions of people a basic income, with no real means for most of them to supplement those incomes, is an insufficient solution. We don’t need UBI, we need UCI – Universal Comfortable Income. It’s not going to be enough to give people a basic survival income. We are going to need to see governments, and maybe even companies, share their resources and wealth with people, or else who is going to have the resources to buy the products and services AI and robots will offer?

The potential for dissatisfaction and ultimately unrest seems scary to me. A world with a couple dozen trillion-dollar companies, and a handful of trillionaires running them, is also a world with vast populations of people eking out a subsistence lifestyle, unable to do more than meet their survival needs. A basic income that still requires additional earnings before people can enjoy the offerings of a fully automated economy will not hold off a revolt for long.

Maybe I’m wrong. Maybe there are other solutions to this problem. Maybe I’m too bullish about how far things will advance in a short time. That said, the potential for the scenario above to occur in the next decade is not zero. It might be a pessimistic bad-case or even worst-case scenario, but it’s possible… and scary. If things advance as fast as I think they will, we can’t continue to have UBI conversations, we need to move the goal posts and start really thinking of UCI.

A tidal wave of spam

Head of Product at Twitter/x.com, Nikita Bier, said this on February 11th, 2 months ago today:

“Prediction: In less than 90 days, all channels that we thought were safe from spam & automation will be so flooded that they will no longer be usable in any functional sense: iMessage, phone calls, Gmail.

And we will have no way to stop it.”

Anthropic’s newest AI model, called Claude Mythos, is not being released to the public due to concerns about its ability to uncover high-severity cybersecurity flaws in major operating systems and web browsers. But make no mistake, this AI version and more (some privately owned and some free and open source) will be available in the next month. With this will come a tidal wave of security breaches, identity theft, and corporate as well as personal blackmail crimes.

The fact is that these AI models are professional lock pickers put in the hands of anyone who wants to use them. Almost no skill needed. Unlike the movies, where the people doing a heist needed to recruit that one-of-a-kind safe cracker with crazy skills, now a 15-year-old in his parents’ basement can do it without leaving the house.

This wave of ‘safe crackers’ is going to be let loose soon. But something else is headed this way, and that’s the scammer coming for you and me via our phones, laptops, and social media accounts. Scams used to show up in poorly written emails, or broken-English texts and phone calls, that made them easy to detect. Now three things have fundamentally changed:

1. The quality of the messaging is flawless;

2. Spammers and scammers can now target you with enough personal information to seem legitimate;

3. The sheer volume of spam coming our way. One spammer used to mean one phone call at a time, followed up by a real person. But with AI agents, one command could unleash wave upon wave of simultaneous emails, texts, and messages across many social media platforms.

The biggest problem with AI in the next 5 years isn’t what AI can do on its own, but rather what people with bad intentions can… and will… do with AI. It’s bad faith actors who will be our nemesis. Ultimately, the tidal wave is coming, “And we will have no way to stop it.”

So easy to cheat

The article, ‘Smart Glasses for Exam Cheating: Best Models, Prices and Risks in 2026’, shares multiple options that can provide AI-delivered test answers, in seconds, via a small earpiece, or even projected text answers which can only be heard or seen by the user. We aren’t far away from contact lenses that can do the same. Banned? Of course. Easily detected? Not all models, with stealthier, better-hidden models being developed every day. And what happens when these are as invisible as contact lenses?

Make no mistake, cheating has been around as long as tests have. In some respects this is not new. But most methods of cheating demand guessing what questions will be on the test in advance. Methods like these are responsive to every question asked. And responses arrive at a natural speed. While you are still reading the question, a response is already headed your way. No need to shift your eyes from the screen or test paper. No hidden notes to conceal, and no wrong answers unless you are choosing to get less than a perfect score, so as not to seem suspiciously smart.

I remember a friend telling me about him and his friends getting hold of their ethics exam a couple days before they had to write it. The irony of cheating on an ethics exam is not lost on me. They memorized the questions and answers, and all chose different ones to get wrong, while still achieving high ‘A’s. Then on the day of the test my friend was horrified when his friend raised his hand 30 minutes into a 3-hour exam, and shared a typo on a question that no one should have gotten to in such a short time. Despite this poor choice, they all got their ‘A’s.

That’s going to be the new challenge in cheating: how not to do so well that you draw attention to yourself. A good problem to have, for a cheater.

So here we are in a new era of cheating. Prescription glasses, hidden cameras and microphones, and curated wrong answers. And in all honesty, less and less opportunity for detection. Ultimately, it’s the tests that will need to change.

Way more Waymo

Here is a statistic from the company Waymo:

“In less than two years, the company’s average weekly paid robotaxi trips have grown tenfold, from 50,000 per week in May 2024 to 500,000 per week today.” Source: Waymo’s skyrocketing ridership in one chart

This is amazing growth, and it’s not an isolated statistic. We are seeing this kind of growth in robotics-focused manufacturing, and we are seeing it in the use of AI to do many jobs that humans used to do.
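
As a back-of-the-envelope check, that quoted tenfold jump implies a compound growth rate of roughly 10% per month. A minimal sketch of the arithmetic, assuming the “less than two years” window is treated as 24 months (the quote gives no exact end date):

```python
# Implied compound monthly growth for Waymo's quoted ridership jump.
# Assumption: the "less than two years" window is treated as 24 months;
# the quote itself gives no exact end date.

start_trips = 50_000   # weekly paid robotaxi trips, May 2024 (from the quote)
end_trips = 500_000    # weekly paid robotaxi trips "today" (from the quote)
months = 24            # assumed window length

# Solve start * (1 + r) ** months == end for r.
monthly_rate = (end_trips / start_trips) ** (1 / months) - 1

print(f"~{monthly_rate:.1%} compound growth per month")  # → ~10.1%
```

A shorter window would only make the implied rate steeper.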

Are we ready for this? Are we ready for the gig economy to be eaten up by automation? Are we ready for not just blue collar but also white collar jobs to dwindle as AI takes over these jobs at an exponential rate? Are we ready for AI teachers, AI servants, AI drivers, AI delivery, AI accountants, AI lawyers, AI programmers, and AI in fields we thought would always need humans in them? Are we ready for way more of this kind of Waymo growth occurring simultaneously across many sectors?

We aren’t ready. Yet this is coming our way. It’s that simple.

Our responses in each case will be reactionary. For every current Waymo passenger there are probably a few potential customers thinking, ‘That’s scary, I’m not ready to put my life in the hands of a robot driver on the highway or the busy streets of downtown at rush hour.’ But those hesitant ranks will dwindle. For every worker who thinks, ‘My job is safe, they’ll always need me,’ there are others who thought the same just a few years ago, and they are now looking for a job, often in a different sector than the one they’ve been in.

Yes, there are limitations to this growth in some sectors. Yes, new jobs may come up that are uniquely human in nature. Yes, there are yet unharnessed opportunities for people to make a greater income (with less effort) in areas that they would not have imagined just a few months ago. It’s not all doom and gloom… but make no mistake, the exponential growth of AI powered advances will be drastically affecting all of our everyday lives sooner than most people realize. Waymo’s growth is emblematic of the kind of growth we will see in almost every aspect of our lives.

Who will get us there?

Stephen Downes shared the following on LinkedIn:

“I was asked, “Please provide a brief abstract that summarises your views on the impact of AI on higher education.”

As far more than the language models that have captured the attention of the world over the last few years, artificial intelligence (AI) represents a significant increase in human capability, augmenting and sometimes exceeding our natural capacities to perceive, reason, create and remember. Ubiquitous access to these capabilities changes the definition of what it means to learn and to be educated. Skills once reserved to the domain of experts are now in the hands of everyday people, while most every discipline is devising new models, methods and pragmatics of work alongside, or teaming with, these new tools. This challenges educators along a number of fronts, impacting how they teach, what they teach, and even what it means to teach. Today’s educator in a world of AI is responsible for far more than passing along knowledge (indeed, the machine can do most of that). We will be responsible for challenging students both young and old to find new ways of seeing and creating, leading them through demonstration of dedication, resilience and passion, and modeling for them the best values of civil and social responsibility, contribution and care.

Thoughts?” ~ Stephen Downes

Although my thoughts align with K-12 education as well as higher education, these thoughts come to me in the form of a question:

Who is going to get us there?

Who is the ‘We’ that Stephen is talking about when he says, “We will be responsible for challenging students both young and old to find new ways of seeing and creating, leading them through demonstration of dedication, resilience and passion, and modeling for them the best values of civil and social responsibility, contribution and care”?

I love this vision of what teaching can become; I just don’t see a clear path to take us there.

‘We’ won’t get there following the guidance of financially lucrative edu-tech businesses, products, and tools… their locked-in subscriptions will tout measures of success that don’t align with this vision, even when they say that they will.

‘We’ won’t get there like we did with Web2.0 tools in the late 2000’s and early 2010’s, on the backs of tech savvy educators leading the charge.

‘We’ won’t get there because of some governmental vision pushing a new AI enhanced curriculum, or even new guidelines that somehow redefine for teachers, “how they teach, what they teach, and even what it means to teach”.

I hope I’m not coming off as a pessimist. I’m excited about what’s possible. I just fear that ‘we’ aren’t going to get ‘there’ any time soon unless ‘We’ align philosophy, policy, and economic support for the transformation of schools into something different.

Short of that, I fear that ‘We’ will be having the same ‘20th century schools in a 21st century world’ conversation in another 10 years… which I’ve heard since getting into education in the late 1900’s.

AI Agents and Trust

I read this in the Superhuman Newsletter today,

“Agents need authorization, not just authentication…

The winners in enterprise AI won’t have the most features. They’ll be the ones enterprises can safely trust.”

I am still very far away from letting any kind of AI agent access my email. I don’t care how efficient the tool might make me; I don’t care if it can prioritize and reduce my attention on unimportant information. The reality is that my email is the gateway to every login credential and password for every online identity I have… and it’s not only the agent itself I fear, it’s the vulnerabilities it opens me up to if a bad actor can trick the agent into giving them access.

Maybe I’m just paranoid, but I don’t think enough kinks have been worked out in the area of privacy and security. Oh, and to add an important PSA: make sure your email password is different from all other passwords you use online. I’d rather be paranoid than overconfident when it comes to online safety and security.
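
That PSA can even be sketched as a quick self-audit. A minimal example, using entirely made-up site names and passwords, that flags any account sharing a password with your email account:

```python
# Hypothetical self-audit: flag accounts that reuse your email password.
# All site names and passwords below are made-up illustration data.

def find_email_password_reuse(email_site, credentials):
    """Return sites (other than the email provider) whose password
    matches the email account's password."""
    email_password = credentials[email_site]
    return sorted(
        site for site, pw in credentials.items()
        if site != email_site and pw == email_password
    )

saved = {
    "mail.example.com": "hunter2",   # the gateway account
    "shop.example.com": "hunter2",   # reused -- risky
    "bank.example.com": "x9!kQ#v7",  # unique -- good
}

print(find_email_password_reuse("mail.example.com", saved))
# Any site listed here would expose your inbox if that site were breached.
```

In practice a password manager does this audit for you, but the principle is the same: the email password must be unique.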