Tag Archives: future

Right and wrong

I was talking with a colleague yesterday and he shared two interesting things with me. The first was that he has a friend who works for a large company, I think he said Oracle, but I’m not 100% sure. He told me that this friend has unlimited holidays, but the output expectations are so high that she can’t really take advantage of this. The premise is that you can take more time off than just the designated 10-15 days a year (as a traditional US company would allow) as long as you get your job done. The catch is, the workload probably doesn’t even allow that much time off.

That’s a case of ‘The right idea but the wrong outcome’.

The other thing he said was a prediction that I agree with. He predicts that very soon we’ll see the implementation of 4-day work weeks. The reason he thinks this will happen sooner rather than later is AI and robotics. Essentially, the economy requires citizens to have buying power, and so you need a paid workforce… but there won’t be enough jobs to sustain everyone putting in 40-hour, 5-day work weeks, and each worker will also be more efficient thanks to their use of AI and robotics.

That’s a case of ‘The right idea but for the wrong reason’. The societal benefits of a 4-day work week shouldn’t have to wait for technological advancement in my humble opinion.

I would like to think that we are advanced enough as a species that we could do the right things for the right reasons, but more often than not we have to accept the wrong to get the right. We have ‘just’ wars, citizen surveillance to fight terrorism, over-censorship to reduce perceived conflict… the morality of these is dependent on how one is affected.

If you live in a country where you have many freedoms but fear violence, you might appreciate heavy surveillance. If you live in a country where expressing your opinion could get you jailed, surveillance feels Orwellian.

‘The right idea but the wrong outcome.’

‘The right idea but for the wrong reason.’

Right and wrong.

“Don’t Bring a Résumé. Bring Receipts.”

In the article ‘The Proof Economy’, Anand Sanwal says, “Don’t Bring a Résumé. Bring Receipts.” Anand starts with two definitions, saying that we’ve moved from the Parchment Economy to a Proof Economy:

“We’ve entered the Proof Economy, a world where the most valuable signal isn’t where you went to school, what your GPA was, or which honors you collected, but what you’ve actually done and can do. In this new landscape, demonstrated ability trumps pedigree, and what you’ve built matters more than where you studied.

Meanwhile, the Parchment Economy, that centuries-old system where formal credentials and institutional validation serve as proxies for capability, is losing its monopoly on opportunity. The elaborate dance of transcripts, recommendation letters, diplomas and prestige markers is becoming increasingly irrelevant in field after field.”

This is something I’ve been describing for a while now, without properly defining the difference between the two ‘economies’. Beyond credentialed professionals like doctors, engineers, and lawyers, what now matters most is your portfolio, not your schooling certificates. ‘What is it that you can do better than others to earn you a spot in our organization?’ (Regardless of your credentials.)

Anand says,

“When anyone can access expertise through prompts and build a prototype video, software product or design via AI, the value shifts decisively from knowledge possession to knowledge application.”

But for me the most interesting section in his article is:

What Education Needs to Become

If we accept that we’re entering the Proof Economy, schools can’t just add a few electives or rethink assessment to focus on progress and not perfection.

They need to rewire what they reward.

We should expect:

  • Projects over problem sets: Real-world challenges that apply knowledge, not just recall it.
  • Portfolios over transcripts: A body of work that shows thinking, skill, and growth.
  • Public work over private grading: Output that lives in the world, not a Google Doc.
  • Coaching over compliance: Adults who challenge and support, not just evaluate.
  • Failure as fuel: A system that treats failed attempts as essential steps, not permanent marks.

At Inquiry Hub Secondary our students are still entrenched in the old public education system in that they complete required courses to meet provincial high school graduation requirements, and most of them still head off to university, college, or a technical institute to further their studies. However, along the way they are given the time, space, and credits (towards their graduation) to produce documentation of learning in areas of interest. They have an opportunity to design and build projects (documented receipts) that most other students could only get done on their own time, outside of traditional classrooms.

They also get to live in an environment where they have to cooperate with fellow students in scrum projects with tight timelines and defined roles (not just group projects with everyone having identical outcomes and expectations). They have to do frequent presentations, alone and in groups, with training to give and receive feedback with radical candour. They understand iteration, they pivot based on where their learning takes them, and they embrace failure as learning opportunities because sometimes obstacles become the way. And they are provided with greater and greater autonomy over their time as they progress from Grade 9 to 12.

Essentially, Inquiry Hub students still get their résumé of courses, but they are also provided the opportunity to bring receipts too.

Students choose, AI delivers

Thinking about AI use in schools, AI can easily assist with the vast majority of current assignments. Students can use it to extend their learning, or to do the work for them and make the work significantly easier. And then teachers become police… not teachers, trying to figure out how a student is using AI to cheat.

Two big takeaways, one being a positive shift, the other being a challenge:

  1. Process matters more than final products.
  2. Students will choose how they use AI. Will they choose to have AI assist their thinking or do the thinking for them?

We have control over whether we focus on process or content. Students have the choice as to whether they use AI to help them think or to think for them. A focus on process can reduce how much a student relies on AI… but a student can always get AI to assist them with the next step.

I’m excited about how students will use AI to dig deeper and extend their learning. I’m equally concerned for students who are choosing to use AI to take the friction out of learning… opting out of thinking for themselves. Whichever of these approaches students choose, AI will deliver.

UCI rather than UBI

As AI and robotics continue to scale at unimaginable speeds, with AI getting exponentially smarter and robots increasingly more agile, we’ve got to realize how many jobs will disappear in a very short time period. This isn’t a gradual transition, it’s not a move from one field to another like farmers transitioning into factory workers during the industrial revolution. It’s a massive shift from human labour to machine labour that the world’s economies simply aren’t designed to absorb.

I’ve seen a growth in the number of people talking about the need for Universal Basic Income (UBI), but I fear this isn’t enough. The idea of giving millions if not billions of people a basic income, with no real means for most of them to supplement it, is an insufficient solution. We don’t need UBI, we need UCI – Universal Comfortable Income. It’s not going to be enough to give people a basic survival income. We are going to need to see governments, and maybe even companies, share their resources and wealth with people, or else who is going to have the resources to buy the products and services AI and robots will offer?

The potential for dissatisfaction and ultimately unrest seems scary to me. A world with a couple dozen trillion-dollar companies, and a handful of trillionaires running them, is also a world with vast populations of people eking out a subsistence lifestyle, unable to do more than meet their survival needs. A basic income that still requires additional sources of income to enjoy the offerings of a fully automated economy will not hold off revolt for long.

Maybe I’m wrong. Maybe there are other solutions to this problem. Maybe I’m too bullish about how far things will advance in a short time. That said, the potential for the scenario above to occur in the next decade is not zero. It might be a pessimistic bad-case or even worst-case scenario, but it’s possible… and scary. If things advance as fast as I think they will, we can’t continue to have UBI conversations; we need to move the goal posts and start really thinking about UCI.

A tidal wave of spam

Head of product at Twitter/x.com, Nikita Bier, said this on February 11th, two months ago today:

“Prediction: In less than 90 days, all channels that we thought were safe from spam & automation will be so flooded that they will no longer be usable in any functional sense: iMessage, phone calls, Gmail.

And we will have no way to stop it.”

Anthropic’s newest AI model, called Claude Mythos, is not being released to the public due to concerns about its ability to uncover high-severity cybersecurity flaws in major operating systems and web browsers. But make no mistake, this AI version and more (some privately owned and some free and open source) will be available in the next month. With this will come a tidal wave of security breaches, identity theft, and corporate as well as personal blackmail crimes.

The fact is that these AI models are professional lock pickers put in the hands of anyone who wants to use them. Almost no skill needed. Unlike the movies, where the people doing a heist needed to recruit that one-of-a-kind safe cracker with crazy skills, now a 15-year-old in his parents’ basement can do it without leaving the house.

This wave of ‘safe crackers’ is going to be let loose soon. But something else is headed this way, and that’s the scammer coming for you and me via our phones, laptops, and social media accounts. These scams used to show up in poorly written emails, or broken-English texts and phone calls, which made them easy to detect. Now three things have fundamentally changed:

1. The quality of the messaging is flawless;

2. Spammers and scammers can now target you with enough personal information to seem legitimate;

3. The sheer volume of spam coming our way. One spammer used to mean one phone call at a time, followed up by a real person. But with AI agents, one command could unleash wave upon wave of simultaneous emails, phone texts, and messages across many social media platforms.

The biggest problem with AI in the next 5 years isn’t what AI can do on its own, but rather what people with bad intentions can… and will… do with AI. It’s bad faith actors who will be our nemesis. Ultimately, the tidal wave is coming, “And we will have no way to stop it.”

Way more Waymo

Here is a statistic from the company Waymo:

“In less than two years, the company’s average weekly paid robotaxi trips have grown tenfold, from 50,000 per week in May 2024 to 500,000 per week today.” Source: Waymo’s skyrocketing ridership in one chart

This is amazing growth. It’s not an isolated statistic. We are seeing this kind of growth in robotics-focused manufacturing, and we are seeing it in the use of AI to do many jobs that humans used to do.
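As a rough back-of-envelope sketch of what that statistic implies (assuming roughly 24 months between the two figures, which the quote only bounds as “less than two years”), the tenfold jump works out to a compound growth rate of about ten percent per month:

```python
def implied_monthly_growth(start: float, end: float, months: int) -> float:
    """Compound monthly growth rate that turns `start` into `end` over `months` months."""
    return (end / start) ** (1 / months) - 1

# Waymo's quoted figures: 50,000 weekly trips (May 2024) -> 500,000 weekly trips.
# The 24-month window is an assumption for illustration, not from the source.
rate = implied_monthly_growth(50_000, 500_000, 24)
print(f"{rate:.1%} per month")  # prints: 10.1% per month
```

Sustained compounding at that pace is what makes the growth feel sudden: each month’s increase looks modest, but the curve doubles roughly every seven months.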

Are we ready for this? Are we ready for the gig economy to be eaten up by automation? Are we ready for not just blue collar but also white collar jobs to dwindle as AI takes over these jobs at an exponential rate? Are we ready for AI teachers, AI servants, AI drivers, AI delivery, AI accountants, AI lawyers, AI programmers, and AI in fields we thought would always need humans in them? Are we ready for way more of this kind of Waymo growth occurring simultaneously across many sectors?

We aren’t ready. Yet this is coming our way. It’s that simple.

Our responses in each case will be reactionary. For every current Waymo passenger there are probably a few potential customers thinking, ‘That’s scary, I’m not ready to put my life in the hands of a robot driver on the highway or the busy streets of downtown at rush hour.’ But that hesitancy will dwindle. For every worker who thinks, ‘My job is safe, they’ll always need me,’ there are others who thought the same just a few years ago, and they are now looking for a job, often in a different sector than the one they’ve been in.

Yes, there are limitations to this growth in some sectors. Yes, new jobs may come up that are uniquely human in nature. Yes, there are yet-unharnessed opportunities for people to make a greater income (with less effort) in areas that they would not have imagined just a few months ago. It’s not all doom and gloom… but make no mistake, the exponential growth of AI-powered advances will drastically affect all of our everyday lives sooner than most people realize. Waymo’s growth is emblematic of the kind of growth we will see in almost every aspect of our lives.

Who will get us there?

Stephen Downes shared the following on LinkedIn:

“I was asked, “Please provide a brief abstract that summarises your views on the impact of AI on higher education.”

As far more than the language models that have captured the attention of the world over the last few years, artificial intelligence (AI) represents a significant increase in human capability, augmenting and sometimes exceeding our natural capacities to perceive, reason, create and remember. Ubiquitous access to these capabilities changes the definition of what it means to learn and to be educated. Skills once reserved to the domain of experts are now in the hands of everyday people, while most every discipline is devising new models, methods and pragmatics of work alongside, or teaming with, these new tools. This challenges educators along a number of fronts, impacting how they teach, what they teach, and even what it means to teach. Today’s educator in a world of AI is responsible for far more than passing along knowledge (indeed, the machine can do most of that). We will be responsible for challenging students both young and old to find new ways of seeing and creating, leading them through demonstration of dedication, resilience and passion, and modeling for them the best values of civil and social responsibility, contribution and care.

Thoughts?” ~ Stephen Downes

Although my thoughts align with K-12 education as well as higher education, these thoughts come to me in the form of a question:

Who is going to get us there?

Who is the ‘We’ that Stephen is talking about when he says, “We will be responsible for challenging students both young and old to find new ways of seeing and creating, leading them through demonstration of dedication, resilience and passion, and modeling for them the best values of civil and social responsibility, contribution and care”?

I love this vision of what teaching can become; I just don’t see a clear path to take us there.

‘We’ won’t get there following the guidance of financially lucrative edu-tech businesses, products, and tools… their locked-in subscriptions will tout measures of success that don’t align with this vision, even when they say that they will.

‘We’ won’t get there like we did with Web2.0 tools in the late 2000’s and early 2010’s, on the backs of tech savvy educators leading the charge.

‘We’ won’t get there because of some governmental vision pushing a new AI enhanced curriculum, or even new guidelines that somehow redefine for teachers, “how they teach, what they teach, and even what it means to teach”.

I hope I’m not coming off as a pessimist. I’m excited about what’s possible. I just fear that ‘we’ aren’t going to get ‘there’ any time soon unless ‘We’ align philosophy, policy, and economic support for the transformation of schools into something different.

Short of that, I fear that ‘We’ will be having the same ‘20th century schools in a 21st century world’ conversation in another 10 years… which I’ve heard since getting into education in the late 1900’s.

The worst it will be from now on

I used Google’s Notebook LM in September 2024 and I was impressed with the podcast it created, sharing a summary of my blog. A month later I had it do the same for a video I created with Joe Truss. That video covers a novel theory, not a general-knowledge concept, and yet the AI grasped the majority of the concepts and produced a very good summary.

Today I went back to Notebook LM because I heard it can now do video summaries. Again, I was impressed. While the accompanying visuals were not ideal (we discuss complex geometry), the audio summary was excellent, and it was valuable to see which takeaways were summarized and how the ideas were structured.

I then explained some of the geometric issues and the AI produced a PDF with the correct geometry. Joe and I then tried creating a slide deck, another new feature. The resulting text was excellent again, but some of the images were not quite accurate. Still, we could see that correcting the details and providing other sources would give us impressive results.

Reflecting on these improvements it occurred to me just how good this tool is now, and yet this is the worst version it will ever be. Artificial Intelligence and robotics are both advancing exponentially in capabilities. It’s exciting to think that what we are capable of using these tools for now will be considered simplistic if not archaic in just a few years.

Today I saw a video about a Chinese company that is selling a three-and-a-half-foot-tall humanoid robot for the price of a new iPhone. It is not a simplistic toy; it is extremely agile and comes with a fully programmable operating system, meaning it is completely trainable for skills it doesn’t come with. That same company is expecting to reach 1,000 units produced a month by the end of this year.

We are in an era where advances happen daily, and what we marvel at today will be commonplace tomorrow. Every day the advances get a little better and so we are perpetually living with the worst technology we’ll ever know.

Enshittification

I asked Copilot to search my blog for, “posts where technology improves while systems (work, economy, institutions, structures) get worse.”

It shared the following summary:

 

The reason I asked for this is because I wanted to look back on posts that reminded me of this skit out of Norway. It is, as the Threads post suggests, “utterly brilliant”!

You can find the video and more information on the website at the end of the video.

We aren’t imagining this, things are getting intentionally worse. On social media we are not the customer, we are the product sold to advertisers. And even when we are the customer, we don’t buy anything outright anymore, no, we get locked into subscriptions.

Copilot didn’t find the one post I was looking for, but I found it to share here:

Sometime technology s(UX)

I’ll end here with a couple of paragraphs from that post; no need to try writing something I already said:

“I want to use my credit card at a gas station, not only must I put in my pin, I need to say how much I want to spend as a maximum. Every instant teller I go to asks me what language I want to work in… how hard would it be for the machine to know my preference after asking once? And as for autocorrect… it’s getting worse, not better.

I love my tech, but it seems to me that technology is all about adding features, and not about user experience (UX). The user is forgotten as new bells and whistles are added. Or things are so locked down that I need Face ID, a confirmation text, and coming soon, a DNA scan. Between new features and new security measures, there seems little time spent thinking about what the experience is for the end user.”

Dystopian hiring

We aren’t that far away from a rather dystopian world where so much of our lives is monitored and recorded that we will be an open book.

Imagine going for a job interview and before you arrive a digital, AI private detective has tracked every possible video, image, and written document that you’ve shared publicly, and given you a score based on company criteria that you are not privy to. And maybe that tracking will go beyond publicly shared data and reveal even more about you, like medical information scraped from a data breach you know nothing about.

Imagine going into that interview where you are subjected to a ‘voluntary’ brain scan as part of the process… one that you agree to knowing full well that you won’t get the job if you don’t volunteer.

That scan will check to see if you are being honest during the interview, and it will also do things like measure the size of your anterior midcingulate cortex, which will let the company know if you are someone who does or does not challenge themselves. The company hiring you will know more about you than your friends and family do.

And for a real dystopian plot twist: it’s an android interviewing you for a mundane job that androids consider too menial to do! Even without this twist, I wonder what the job market will look like in 20 years. What role will humans play in the overall workforce? What jobs are uniquely human, and what jobs can a brilliant if not superintelligent AI do?

I’m not sure how much the job market will truly change in just 20 years, but at the rate of advancement that I’m seeing in robotics and artificial intelligence, I really think a major disruption in what we call work is coming. The disruption will be uneven at first, taking more jobs in some sectors than in others, but sooner than we would want to envision, the disruption is coming to almost every sector. What will that really mean for humans and the things we define as work?