Tag Archives: Artificial Intelligence

UCI rather than UBI

As AI and robotics continue to scale at unimaginable speeds, with AI getting exponentially smarter and robots increasingly agile, we’ve got to realize how many jobs will disappear in a very short time period. This isn’t a gradual transition; it’s not a move from one field to another, like farmers becoming factory workers during the industrial revolution. It’s a massive shift from human labour to machine labour that the world’s economies simply aren’t designed to absorb.

I’ve seen a growing number of people talking about the need for Universal Basic Income (UBI), but I fear this isn’t enough. I fear that giving millions, if not billions, of people a basic income, with no real means for most of them to supplement it, is an insufficient solution. We don’t need UBI, we need UCI – Universal Comfortable Income. It’s not going to be enough to give people a basic survival income. We are going to need to see governments, and maybe even companies, share their resources and wealth with people; otherwise, who is going to have the resources to buy the products and services AI and robots will offer?

The potential for dissatisfaction and ultimately unrest seems scary to me. A world with a couple dozen trillion-dollar companies, and a handful of trillionaires running them, is also a world with vast populations of people eking out a subsistence lifestyle, unable to do more than meet their survival needs. A basic income that still requires additional sources of income before people can enjoy the offerings of a fully automated economy will not hold off a revolt for long.

Maybe I’m wrong. Maybe there are other solutions to this problem. Maybe I’m too bullish about how far things will advance in a short time. That said, the potential for the scenario above to occur in the next decade is not zero. It might be a pessimistic bad-case or even worst-case scenario, but it’s possible… and scary. If things advance as fast as I think they will, we can’t continue to have UBI conversations, we need to move the goal posts and start really thinking of UCI.

A tidal wave of spam

Nikita Bier, Head of Product at Twitter/x.com, said this on February 11th, two months ago today:

“Prediction: In less than 90 days, all channels that we thought were safe from spam & automation will be so flooded that they will no longer be usable in any functional sense: iMessage, phone calls, Gmail.

And we will have no way to stop it.”

Anthropic’s newest AI model, called Claude Mythos, is not being released to the public due to concerns about its ability to uncover high-severity cybersecurity flaws in major operating systems and web browsers. But make no mistake, this AI version and more (some privately owned and some free and open source) will be available in the next month. With this will come a tidal wave of security breaches, identity theft, and corporate as well as personal blackmail crimes.

The fact is that these AI models are professional lock pickers put in the hands of anyone who wants to use them. Almost no skill needed. Unlike the movies, where the people doing a heist needed to recruit that one-of-a-kind safe cracker with crazy skills, now a 15-year-old in his parents’ basement can do it without leaving the house.

This wave of ‘safe crackers’ is going to be let loose soon. But something else is headed this way and that’s the scammer coming for you and me via our phones, laptops, and social media accounts. These used to show up in poorly written emails, or broken English texts and phone calls that made them easy to detect. Now three things have fundamentally changed:

1. The quality of the messaging is flawless;

2. The ability of spammers and scammers to target you with enough personal information to seem legitimate;

3. The sheer volume of spam coming our way. One spammer used to mean one phone call at a time, followed up by a real person. But with AI agents, one command could unleash wave upon wave of simultaneous emails, texts, and messages across many social media platforms.

The biggest problem with AI in the next 5 years isn’t what AI can do on its own, but rather what people with bad intentions can… and will… do with AI. It’s bad faith actors who will be our nemesis. Ultimately, the tidal wave is coming, “And we will have no way to stop it.”

Way more Waymo

Here is a statistic from the company Waymo:

“In less than two years, the company’s average weekly paid robotaxi trips have grown tenfold, from 50,000 per week in May 2024 to 500,000 per week today.” Source: Waymo’s skyrocketing ridership in one chart

This is amazing growth. It’s not an isolated statistic. We are seeing this kind of growth in robotics-focused manufacturing, and we are seeing it in the use of AI to do many jobs that humans used to do.
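To put that tenfold figure in perspective, here’s a quick back-of-the-envelope calculation. The 24-month compounding window is my assumption; the quote only says “less than two years”:

```python
# Back-of-the-envelope: what steady monthly growth rate turns
# 50,000 weekly robotaxi trips into 500,000?
# (Assumption: roughly 24 months of smooth compounding --
# the source only says "less than two years".)

start_trips = 50_000    # weekly trips, May 2024
end_trips = 500_000     # weekly trips, ~two years later
months = 24             # assumed compounding period

monthly_factor = (end_trips / start_trips) ** (1 / months)
print(f"Implied monthly growth: {(monthly_factor - 1) * 100:.1f}%")
# -> Implied monthly growth: 10.1%
```

In other words, tenfold in two years works out to roughly 10% growth compounded every single month, sustained for two years straight. That’s the shape of the curve we keep underestimating.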

Are we ready for this? Are we ready for the gig economy to be eaten up by automation? Are we ready for not just blue collar but also white collar jobs to dwindle as AI takes over these jobs at an exponential rate? Are we ready for AI teachers, AI servants, AI drivers, AI delivery, AI accountants, AI lawyers, AI programmers, and AI in fields we thought would always need humans in them? Are we ready for way more of this kind of Waymo growth occurring simultaneously across many sectors?

We aren’t ready. Yet this is coming our way. It’s that simple.

Our responses in each case will be reactive. For every current Waymo passenger there are probably a few potential customers thinking, ‘That’s scary, I’m not ready to put my life in the hands of a robot driver on the highway or the busy streets of downtown at rush hour.’ But that hesitancy will dwindle. For every worker who thinks, ‘My job is safe, they’ll always need me,’ there are others who thought the same just a few years ago and are now looking for a job, often in a different sector than the one they’ve been in.

Yes, there are limitations to this growth in some sectors. Yes, new jobs may come up that are uniquely human in nature. Yes, there are yet unharnessed opportunities for people to make a greater income (with less effort) in areas that they would not have imagined just a few months ago. It’s not all doom and gloom… but make no mistake, the exponential growth of AI powered advances will be drastically affecting all of our everyday lives sooner than most people realize. Waymo’s growth is emblematic of the kind of growth we will see in almost every aspect of our lives.

Who will get us there?

Stephen Downes shared the following on LinkedIn:

“I was asked, “Please provide a brief abstract that summarises your views on the impact of AI on higher education.”

As far more than the language models that have captured the attention of the world over the last few years, artificial intelligence (AI) represents a significant increase in human capability, augmenting and sometimes exceeding our natural capacities to perceive, reason, create and remember. Ubiquitous access to these capabilities changes the definition of what it means to learn and to be educated. Skills once reserved to the domain of experts are now in the hands of everyday people, while most every discipline is devising new models, methods and pragmatics of work alongside, or teaming with, these new tools. This challenges educators along a number of fronts, impacting how they teach, what they teach, and even what it means to teach. Today’s educator in a world of AI is responsible for far more than passing along knowledge (indeed, the machine can do most of that). We will be responsible for challenging students both young and old to find new ways of seeing and creating, leading them through demonstration of dedication, resilience and passion, and modeling for them the best values of civil and social responsibility, contribution and care.

Thoughts?” ~ Stephen Downes

Although my thoughts align with K-12 education as well as higher education, these thoughts come to me in the form of a question:

Who is going to get us there?

Who is the ‘We’ that Stephen is talking about when he says, “We will be responsible for challenging students both young and old to find new ways of seeing and creating, leading them through demonstration of dedication, resilience and passion, and modeling for them the best values of civil and social responsibility, contribution and care”?

I love this vision of what teaching can become; I just don’t see a clear path to take us there.

‘We’ won’t get there following the guidance of financially lucrative edu-tech businesses, products, and tools… their locked-in subscriptions will tout measures of success that don’t align with this vision, even when they say they will.

‘We’ won’t get there like we did with Web2.0 tools in the late 2000’s and early 2010’s, on the backs of tech savvy educators leading the charge.

‘We’ won’t get there because of some governmental vision pushing a new AI enhanced curriculum, or even new guidelines that somehow redefine for teachers, “how they teach, what they teach, and even what it means to teach”.

I hope I’m not coming off as a pessimist. I’m excited about what’s possible. I just fear that ‘we’ aren’t going to get ‘there’ any time soon unless ‘We’ align philosophy, policy, and economic support for the transformation of schools into something different.

Short of that, I fear that ‘We’ will be having the same ‘20th century schools in a 21st century world’ conversation in another 10 years… which I’ve heard since getting into education in the late 1900’s.

The worst it will be from now on

I used Google’s Notebook LM in September 2024 and I was impressed with the podcast it created, sharing a summary of my blog. A month later I had it do the same for a video I created with Joe Truss. The video covers a novel theory, not a general-knowledge concept, and yet the AI grasped the majority of the concepts and produced a very good summary.

Today I went back to Notebook LM because I heard it can now do video summaries. Again, I was impressed. While the accompanying visuals were not ideal (we discuss complex geometry), the audio summary was excellent, and it was valuable to see which takeaways were summarized and how the ideas were structured.

I then explained some of the geometric issues and the AI produced a PDF with the correct geometry. Joe and I then tried creating a slide deck, another new feature. The resulting text was again excellent, but some of the images were not quite accurate; still, we could see how correcting the details and providing other sources would get it to give us impressive results.

Reflecting on these improvements it occurred to me just how good this tool is now, and yet this is the worst version it will ever be. Artificial Intelligence and robotics are both advancing exponentially in capabilities. It’s exciting to think that what we are capable of using these tools for now will be considered simplistic if not archaic in just a few years.

Today I saw a video about a Chinese company that is selling a three-and-a-half-foot-tall humanoid robot for the price of a new iPhone. It is not a simplistic toy; it is extremely agile and comes with a fully programmable operating system, meaning it is completely trainable for skills it doesn’t come with. That same company expects to reach 1,000 units produced a month by the end of this year.

We are in an era where advances happen daily, and what we marvel at today will be commonplace tomorrow. Every day the advances get a little better and so we are perpetually living with the worst technology we’ll ever know.

Dystopian hiring

We aren’t that far away from a rather dystopian world where so much of our lives are monitored and recorded that we will be an open book.

Imagine going for a job interview and before you arrive a digital, AI private detective has tracked every possible video, image, and written document that you’ve shared publicly, and given you a score based on company criteria that you are not privy to. And maybe that tracking will go beyond publicly shared data and reveal even more about you, like medical information scraped from a data breach you know nothing about.

Imagine going into that interview, where you are subjected to a ‘voluntary’ brain scan as part of the process… one you agree to knowing full well that you won’t get the job if you don’t volunteer.

That scan will check to see if you are being honest during the interview, and it will also do things like measure the size of your anterior midcingulate cortex, which will let the company know whether or not you are someone who challenges themselves. The company hiring you will know more about you than your friends and family do.

And for a real dystopian plot twist: it’s an android interviewing you for a mundane job that androids consider too menial to do! Even without this twist, I wonder what the job market will look like in 20 years. What role will humans play in the overall workforce? What jobs are uniquely human, and what jobs can a brilliant if not superintelligent AI do?

I’m not sure how much the job market will truly change in just 20 years, but at the rate of advancement I’m seeing in robotics and artificial intelligence, I really think a major disruption in what we call work is coming. The disruption will be uneven at first, taking more jobs in some sectors than in others, but sooner than we would want to envision it will reach almost every sector. What will that really mean for humans and the things we define as work?

Is Artificial Intelligence Reducing Our Intelligence?

Joe Truss shared a great article with me, ‘The hidden cost of letting AI make your life easier‘, by Shai Tubali on Big Think. Towards the end of the article, Shai shares this:

“[Sven Nyholm’s] deeper worry is not that AI will outperform humans, but that it will appear to do so, especially to non-expert eyes. “Current forms of AI threaten meaningful activities,” he argues, “because they look far more intelligent than they are.” This appearance invites trust. People begin to treat AI as an oracle, mistaking an impressive engineering achievement for understanding. As misplaced confidence grows, judgment weakens. Skills develop less fully. Capacities are handed over too easily, and with them, forms of meaning that depend on effort.

Nyholm links this directly to the value of processes, including confusion, detours, and lingering with complexity. He punctures the idea that everything should be fast and efficient. Speed may feel pleasant, he concedes, yet it undermines patient thinking and reconsideration. He points to an Anthropic advertisement promising a paper completed in a single day: brainstorming in the morning, drafting by noon, polishing by afternoon. What disappears in this vision is the slow work of searching, getting lost, following the wrong thread, and returning with insight. “Many ideas,” Nyholm says, “come from looking for one thing and finding something else instead.” When AI delivers tidy, unified answers, it spares us that work. In doing so, it risks weakening our capacity to break complex problems into parts, examine assumptions, and think things through with precision.”

AI reduces the productive effort and struggle that make both learning and understanding stick. Accessing information is profoundly different from understanding it, and it directs the learner toward an answer instead of a learning process. This article reinforced some ideas I’ve already shared.

In ‘Keeping the friction‘ I said, “Decreasing the challenge doesn’t foster meaningful learning. Reducing the required effort doesn’t make the learning more memorable. Encouraging deeper thinking is the goal, not doing the thinking for you.”

And in ‘What’s the real AI risk in education?‘ I said, “Real learning has a charge to it, it needs to come with some challenge, and hardship. If the learning experience is too easy, it won’t be remembered. If there isn’t enough challenge, if the answers are provided rather than constructed, the learning will soon be forgotten. Remove being stuck, struggling, and failure, and you’ve removed the greatest part of a learning experience.”

I see this in my own learning. There are times I sit and read a full article, like the one shared above, but there are other times I don’t bother and just drop a long article into an LLM and ask for a bulleted summary of the key ideas. However, I remember the articles I read far better than the ones where I only read the AI summaries.

How deep would my learning and understanding be if I only went as far as to read AI summaries? How much will my confidence and my belief in understanding grow, without the depth of knowledge to support my confidence and understanding? Would I be creating a kind of false fluency in topics where I lack true depth of understanding?

The convenience of using AI might not just be changing how we learn, it might be changing what we believe learning is… perceiving learning as having access to information rather than having a deep understanding of a topic we needed to wrestle with to truly understand it. In this way, the convenience of using AI to think for us might just be reducing our intelligence.

What’s the real AI risk in education?

I read a great article on LinkedIn by Ken Shelton. He looked at two articles:

“On one side:
AI as productivity infrastructure.
On the other:
AI as compliance enforcement.

But in both cases, the conversation centers on efficiency and policing, not on whether learning itself has been redesigned for an AI-rich world. Using historical context, one could reasonably make similar arguments around the implementation of technology as well. If students are learning to “sound human” to avoid detection…If institutions are investing in increasingly sophisticated surveillance tools…If teachers are primarily using AI to move faster within the same structures…Then we have to ask, as I have shared in previous posts:

Are we adapting learning?
Or are we simply optimizing and defending legacy systems?”

I found his article more interesting than the two he shared. I especially loved his final paragraph:

“The risk isn’t just that AI is moving too fast. The risk is that our response remains reactive, oscillating between efficiency and enforcement, without addressing purpose, power, and pedagogy. Therefore, the real inflection point isn’t technological, it’s analytical and philosophical.”

My thoughts: In education, go ahead and use AI to make teaching and lessons better, use it to help students learn, and also help them understand how to use AI to enrich their learning. But don’t use it to make learning easier. Real learning has a charge to it, it needs to come with some challenge, and hardship. If the learning experience is too easy, it won’t be remembered. If there isn’t enough challenge, if the answers are provided rather than constructed, the learning will soon be forgotten. Remove being stuck, struggling, and failure, and you’ve removed the greatest part of a learning experience.

So educators need to do two things: First, they need to use AI to make what they are doing even better. And secondly, they need to shift the learning experience to one where they no longer need to worry about policing and compliance. For example: The work isn’t finished with an essay, but with students defending their points in the essay against other students with slightly different points or different perspectives. The students who wrote the essay with AI and didn’t fully comprehend the topic can’t argue their perspective as well as the ones who were willing to do the work… and if they did use AI and can then argue the points better than their peers, that only proves that they understand how to use AI as a learning tool and not a tool to do the work for them. Because the real risk of AI in education is that the AI is doing the work, the struggle, and the learning for the student.

The problem we face is how learning can be circumvented by AI. And so the challenge for educators is to make it more challenging to use AI inappropriately, and to use AI to aid in making learning experiences more challenging. This is not an easy task, but it’s one we need to figure out and do well if we want our students to be learners who will have significance in a world where AI is all around us.

_________
Update: Just found this LinkedIn post by William (Bill) Ferriter, and it has two awesome images to fit with the above.

Update 2: I forgot about this post: Thinking Requires Effort

Reducing busywork, and maximizing the problem-solving time, in a community of learners who find benefit from working together, is what schools should be in service of.

Oblivious to what’s coming

If you talk to people about LLMs like ChatGPT, Perplexity, or Claude, you’ll still hear things like, ‘They hallucinate and will make up fake research’, and something I heard recently, ‘they actually make work harder because workers need to spend more time editing and cleaning up what they produce’. What people who say this don’t realize is that those experiences are pre-January 2026, and we are now fully into February 2026. Yes, things are moving that fast! Furthermore, what most people, including me, have not been paying attention to is that when we use the free versions of these tools, we are essentially months and months behind what the latest models can do.

Matt Shumer’s ‘Something Big Is Happening‘, was written just 4 days ago and has already been seen by millions of people. Yes, it’s a bit of a long read, but it is also a ‘must read’. Here is an excerpt:

“Dario Amodei, who is probably the most safety-focused CEO in the AI industry, has publicly predicted that AI will eliminate 50% of entry-level white-collar jobs within one to five years. And many people in the industry think he’s being conservative. Given what the latest models can do, the capability for massive disruption could be here by the end of this year. It’ll take some time to ripple through the economy, but the underlying ability is arriving now.

This is different from every previous wave of automation, and I need you to understand why. AI isn’t replacing one specific skill. It’s a general substitute for cognitive work. It gets better at everything simultaneously. When factories automated, a displaced worker could retrain as an office worker. When the internet disrupted retail, workers moved into logistics or services. But AI doesn’t leave a convenient gap to move into. Whatever you retrain for, it’s improving at that too.”

I recently shared my thoughts on the upcoming ‘Fiscal year end squeeze‘, where I said, “Corporations care about pleasing shareholders and maintaining stock value over caring for the people who work for them. This is the ugly side of capitalism. Eliminate thousands of salaries and suddenly the balance sheet proves to be more profitable. Never mind that these are people’s careers and livelihood that are being cut short. And never mind about loyalty to the company.” What I’m realizing now, after reading Matt’s article, is that the situation is far worse than I thought, because AI is coming after not just these jobs, but almost every other job these newly unemployed people will be looking for.

If I were out of a job right now, I’d be paying the monthly fees for the two or three best AI models out there and learning how to power-use them. I wouldn’t be looking for a job; I’d be trying to find a niche where I could work for myself, or maybe become a contractor doing things for people who don’t realize that AI is good enough to get the work done faster than they can. Because the reality is that the vast majority of people in the world are oblivious to just how fast this disruption is coming, and unlike other disruptions in the past, this one is going to happen everywhere, all at once. Most people can’t fathom how disruptive this will be, and even as I share this as a warning… I’m not sure I fully grasp the full impact either.

Not if, when

The only thing I use AI for when I write my blog is to make an accompanying image. I don’t use it for editing, and as a result I’ll often not notice a typo, or I’ll create a sentence that doesn’t flow, or I’ll repeat a word a little too frequently in a paragraph. What I’m saying is that I’ll make mistakes that could be caught if I used an artificial intelligence to aid in my editing.

That said, I already do use some AI, because a little red line under a word lets me know I’ve misspelled it. We often forget that we’ve been using forms of artificial intelligence for a long time now. But I’m specifically talking about using AI as an editor or even as a co-writer. This is something I have not intentionally done yet. However, if I’m honest, the main reason for this is simply time.

I’m already pressed for time to get my writing done in the morning. I recently wrote about how frustrated I was with AI images, the fact that they weren’t giving me exactly what I wanted and wasted too much time. I don’t see myself spending time using AI as an editor on top of this… But it’s coming.

The reason it’s coming is that while writing every day has improved the quality of my writing, I’m sure it has also reinforced some of the weaknesses in my style. Doing something repetitively without meaningful feedback doesn’t necessarily make you better. I know that having an editor would make me better. And the reality is, I have an editor available to me whenever I want one. So now it’s just a matter of deciding when.

The ‘when’ is probably after retirement. I think that when I’m not trying to stick an entire routine of habits into under 2 1/2 hours before work, I’ll have time for things like putting my writing into an AI editor. I’ll probably be writing on my laptop instead of my phone, while enjoying a morning coffee. I’ll have the convenience of multiple tabs open in my browser rather than having to use my finger to copy and paste information. And most importantly, I’ll have more time to learn, to get feedback and discern: does this AI suggestion make my writing better, or does it make my writing more vanilla?

The point is, it’s going to happen. To have a tool like this, literally at my fingertips and not to use it is silly. Especially when it can help me, with the right prompt, to become better at something I love to do.