Tag Archives: technology

UCI rather than UBI

As AI and robotics continue to scale at unimaginable speeds, with AI getting exponentially smarter and robots increasingly agile, we’ve got to realize how many jobs will disappear in a very short time period. This isn’t a gradual transition; it’s not a move from one field to another, like farmers becoming factory workers during the industrial revolution. It’s a massive shift from human labour to machine labour that the world’s economies simply aren’t designed to absorb.

I’ve seen growing numbers of people talking about the need for Universal Basic Income (UBI), but I fear this isn’t enough. I fear that giving millions if not billions of people a basic income, with no real means for most of them to supplement it, is an insufficient solution. We don’t need UBI, we need UCI – Universal Comfortable Income. It’s not going to be enough to give people a basic survival income. We are going to need to see governments, and maybe even companies, share their resources and wealth with people, or else who is going to have the resources to buy the products and services AI and robots will offer?

The potential for dissatisfaction and ultimately unrest seems scary to me. A world with a couple dozen trillion-dollar companies, and a handful of trillionaires running them, is also a world with vast populations eking out a subsistence lifestyle, unable to do more than meet their survival needs. A basic income that still requires additional sources of income before people can enjoy the offerings of a fully automated economy will not hold off a revolt for long.

Maybe I’m wrong. Maybe there are other solutions to this problem. Maybe I’m too bullish about how far things will advance in a short time. That said, the potential for the scenario above to occur in the next decade is not zero. It might be a pessimistic bad-case or even worst-case scenario, but it’s possible… and scary. If things advance as fast as I think they will, we can’t keep having UBI conversations; we need to move the goal posts and start really thinking about UCI.

So easy to cheat

The article, ‘Smart Glasses for Exam Cheating: Best Models, Prices and Risks in 2026’, shares multiple options that can provide AI-delivered test answers in seconds, via a small earpiece or projected text that only the user can hear or see. Banned? Of course. Easily detected? Not all models, with stealthier and better-hidden models being developed every day. And we aren’t far away from contact lenses that can do the same… what happens when the technology is that invisible?

Make no mistake, cheating has been around as long as tests have. In some respects this is not new. But most methods of cheating demand guessing in advance what questions will be on the test. Methods like these respond to every question asked. And the speed of the responses feels natural: while you are still reading the question, an answer is already headed your way. No need to shift your eyes from the screen or test paper, no hidden notes to conceal, and no wrong answers unless you choose to score less than perfect so you don’t seem suspiciously smart.

I remember a friend telling me about how he and his friends got hold of their ethics exam a couple of days before they had to write it. The irony of cheating on an ethics exam is not lost on me. They memorized the questions and answers, and each chose different ones to get wrong while still achieving high ‘A’s. Then, on the day of the test, my friend was horrified when one of his friends raised his hand 30 minutes into a 3-hour exam and pointed out a typo on a question that no one should have reached in such a short time. Despite this poor choice, they all got their ‘A’s.

That’s going to be the new challenge in cheating: how to avoid doing so well that you bring attention to yourself. A good problem to have, for a cheater.

So here we are in a new era of cheating. Prescription glasses, hidden cameras and microphones, and curated wrong answers. And in all honesty, fewer and fewer opportunities for detection. Ultimately, it’s the tests that will need to change.

Way more Waymo

Here is a statistic from the company Waymo:

“In less than two years, the company’s average weekly paid robotaxi trips have grown tenfold, from 50,000 per week in May 2024 to 500,000 per week today.” Source: Waymo’s skyrocketing ridership in one chart

This is amazing growth, and it’s not an isolated statistic. We are seeing this kind of growth in robotics-focused manufacturing, and in the use of AI to do many jobs that humans used to do.
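To put that tenfold figure in perspective, here is a rough back-of-the-envelope sketch of the implied compound growth. The elapsed time of about 21 months (May 2024 to early 2026) is my assumption; the quote only says “less than two years”:

```python
import math

# Waymo's reported growth: 50,000 weekly paid trips (May 2024) to 500,000 "today".
# Assumed elapsed time: roughly 21 months (May 2024 to early 2026).
start_trips, end_trips, months = 50_000, 500_000, 21

monthly_factor = (end_trips / start_trips) ** (1 / months)  # compound monthly growth factor
doubling_months = math.log(2) / math.log(monthly_factor)    # months to double at that pace

print(f"~{(monthly_factor - 1) * 100:.1f}% growth per month")   # ~11.6%
print(f"doubling roughly every {doubling_months:.1f} months")   # ~6.3 months
```

Under that assumption, ridership doubles about every half-year, which is exactly the kind of compounding described below across other sectors.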

Are we ready for this? Are we ready for the gig economy to be eaten up by automation? Are we ready for not just blue collar but also white collar jobs to dwindle as AI takes over these jobs at an exponential rate? Are we ready for AI teachers, AI servants, AI drivers, AI delivery, AI accountants, AI lawyers, AI programmers, and AI in fields we thought would always need humans in them? Are we ready for way more of this kind of Waymo growth occurring simultaneously across many sectors?

We aren’t ready. Yet this is coming our way. It’s that simple.

Our responses in each case will be reactionary. For every current Waymo passenger there are probably a few potential customers thinking, ‘That’s scary, I’m not ready to put my life in the hands of a robot driver on the highway or the busy streets of downtown at rush hour.’ But that hesitation will dwindle. For every worker who thinks, ‘My job is safe, they’ll always need me,’ there are others who thought the same just a few years ago and are now looking for a job, often in a different sector than the one they’ve been in.

Yes, there are limitations to this growth in some sectors. Yes, new jobs may come up that are uniquely human in nature. Yes, there are as-yet unharnessed opportunities for people to make a greater income (with less effort) in areas they would not have imagined just a few months ago. It’s not all doom and gloom… but make no mistake, the exponential growth of AI-powered advances will drastically affect all of our everyday lives sooner than most people realize. Waymo’s growth is emblematic of the kind of growth we will see in almost every aspect of our lives.

Who will get us there?

Stephen Downes shared the following on LinkedIn:

“I was asked, “Please provide a brief abstract that summarises your views on the impact of AI on higher education.”

As far more than the language models that have captured the attention of the world over the last few years, artificial intelligence (AI) represents a significant increase in human capability, augmenting and sometimes exceeding our natural capacities to perceive, reason, create and remember. Ubiquitous access to these capabilities changes the definition of what it means to learn and to be educated. Skills once reserved to the domain of experts are now in the hands of everyday people, while most every discipline is devising new models, methods and pragmatics of work alongside, or teaming with, these new tools. This challenges educators along a number of fronts, impacting how they teach, what they teach, and even what it means to teach. Today’s educator in a world of AI is responsible for far more than passing along knowledge (indeed, the machine can do most of that). We will be responsible for challenging students both young and old to find new ways of seeing and creating, leading them through demonstration of dedication, resilience and passion, and modeling for them the best values of civil and social responsibility, contribution and care.

Thoughts?” ~ Stephen Downes

My thoughts apply to K-12 education as well as higher education, and they come to me in the form of a question:

Who is going to get us there?

Who is the ‘We’ that Stephen is talking about when he says, “We will be responsible for challenging students both young and old to find new ways of seeing and creating, leading them through demonstration of dedication, resilience and passion, and modeling for them the best values of civil and social responsibility, contribution and care”?

I love this vision of what teaching can become; I just don’t see a clear path to take us there.

‘We’ won’t get there following the guidance of financially lucrative edu-tech businesses, products, and tools… their locked-in subscriptions will tout measures of success that don’t align with this vision, even when they claim they do.

‘We’ won’t get there like we did with Web 2.0 tools in the late 2000s and early 2010s, on the backs of tech-savvy educators leading the charge.

‘We’ won’t get there because of some governmental vision pushing a new AI-enhanced curriculum, or even new guidelines that somehow redefine for teachers, “how they teach, what they teach, and even what it means to teach”.

I hope I’m not coming off as a pessimist. I’m excited about what’s possible. I just fear that ‘we’ aren’t going to get ‘there’ any time soon unless ‘We’ align philosophy, policy, and economic support for the transformation of schools into something different.

Short of that, I fear that ‘We’ will be having the same ‘20th century schools in a 21st century world’ conversation in another 10 years… which I’ve heard since getting into education in the late 1900’s.

The worst it will be from now on

I used Google’s NotebookLM in September 2024 and was impressed with the podcast it created summarizing my blog. A month later I had it do the same for a video I created with Joe Truss. The video covers a novel theory, not a general-knowledge concept, and yet the AI grasped the majority of the concepts and produced a very good summary.

Today I went back to NotebookLM because I heard it can now do video summaries. Again, I was impressed. While the accompanying visuals were not ideal (we discuss complex geometry), the audio summary was excellent, and it was valuable to see which takeaways were summarized and how the ideas were structured.

I then explained some of the geometric issues and the AI produced a PDF with the correct geometry. Joe and I then tried creating a slide deck, another new feature. The resulting text was again excellent, and while some of the images were not quite accurate, we could see the possibility of correcting the details and providing other sources to get impressive results.

Reflecting on these improvements, it occurred to me just how good this tool is now, and yet this is the worst version it will ever be. Artificial intelligence and robotics are both advancing exponentially in capability. It’s exciting to think that what we are capable of using these tools for now will be considered simplistic, if not archaic, in just a few years.

Today I saw a video about a Chinese company that is selling a three-and-a-half-foot-tall humanoid robot for the price of a new iPhone. It is not a simplistic toy; it is extremely agile and comes with a fully programmable operating system, meaning it is completely trainable for skills it doesn’t come with. That same company expects to reach 1,000 units produced per month by the end of this year.

We are in an era where advances happen daily, and what we marvel at today will be commonplace tomorrow. Every day the advances get a little better and so we are perpetually living with the worst technology we’ll ever know.

Enshittification

I asked Copilot to search my blog for, “posts where technology improves while systems (work, economy, institutions, structures) get worse.”

It shared the following summary:

I asked because I wanted to look back on posts that reminded me of this skit out of Norway. It is, as the Threads post suggests, “utterly brilliant”!

You can find the video and more information on the website at the end of the video.

We aren’t imagining this; things are getting intentionally worse. On social media we are not the customer, we are the product sold to advertisers. And even when we are the customer, we don’t buy anything outright anymore, no, we get locked into subscriptions.

Copilot didn’t find the one post I was looking for, but I found it to share here:

Sometimes technology s(UX)

I’ll end here with a couple of paragraphs from that post; no need to try writing something I already said:

“I want to use my credit card at a gas station, not only must I put in my pin, I need to say how much I want to spend as a maximum. Every instant teller I go to asks me what language I want to work in… how hard would it be for the machine to know my preference after asking once? And as for autocorrect… it’s getting worse, not better.

I love my tech, but it seems to me that technology is all about adding features, and not about user experience (UX). The user is forgotten as new bells and whistles are added. Or things are so locked down that I need Face ID, a confirmation text, and coming soon, a DNA scan. Between new features and new security measures, there seems little time spent thinking about what the experience is for the end user.”

Robots, robots, everywhere

In the world of robots two things are happening at lightning speed:

  1. Capabilities – A year ago humanoid robots were clunky, unstable, and for lack of a better word, robotic.
  2. Production – A year ago if a company could produce 5,000 robots in a year, they were industry leaders.

Have a look at this video and you’ll see just how much farther along robots and their production have advanced: ‘China’s New AI Robots Shock Everyone With Impossible Skills’

It might be cliché to say, but the future has arrived. First in factories, then in homes! If one thing is certain about our future, it is that humanoid robots will be all around us. We’ll have to wait and see how this impacts work, chores, and even social interactions, because there isn’t going to be time to think through the long-term implications before they arrive… everywhere, very quickly.

Is Artificial Intelligence Reducing Our Intelligence?

Joe Truss shared a great article with me, ‘The hidden cost of letting AI make your life easier‘, by Shai Tubali on Big Think. Towards the end of the article, Shai shares this:

“[Sven Nyholm’s] deeper worry is not that AI will outperform humans, but that it will appear to do so, especially to non-expert eyes. “Current forms of AI threaten meaningful activities,” he argues, “because they look far more intelligent than they are.” This appearance invites trust. People begin to treat AI as an oracle, mistaking an impressive engineering achievement for understanding. As misplaced confidence grows, judgment weakens. Skills develop less fully. Capacities are handed over too easily, and with them, forms of meaning that depend on effort.

Nyholm links this directly to the value of processes, including confusion, detours, and lingering with complexity. He punctures the idea that everything should be fast and efficient. Speed may feel pleasant, he concedes, yet it undermines patient thinking and reconsideration. He points to an Anthropic advertisement promising a paper completed in a single day: brainstorming in the morning, drafting by noon, polishing by afternoon. What disappears in this vision is the slow work of searching, getting lost, following the wrong thread, and returning with insight. “Many ideas,” Nyholm says, “come from looking for one thing and finding something else instead.” When AI delivers tidy, unified answers, it spares us that work. In doing so, it risks weakening our capacity to break complex problems into parts, examine assumptions, and think things through with precision.”

AI reduces the productive effort and struggle that makes both learning and understanding stick. Accessing information is profoundly different from understanding information; it directs the learner towards an answer instead of a learning process. This article reinforced some ideas I’ve already shared.

In ‘Keeping the friction‘ I said, “Decreasing the challenge doesn’t foster meaningful learning. Reducing the required effort doesn’t make the learning more memorable. Encouraging deeper thinking is the goal, not doing the thinking for you.”

And in ‘What’s the real AI risk in education?‘ I said, “Real learning has a charge to it, it needs to come with some challenge, and hardship. If the learning experience is too easy, it won’t be remembered. If there isn’t enough challenge, if the answers are provided rather than constructed, the learning will soon be forgotten. Remove being stuck, struggling, and failure, and you’ve removed the greatest part of a learning experience.”

I see this in my own learning. There are times I sit and read a full article, like the one shared above, but there are other times that I don’t bother and just throw a long article into an LLM and ask for a bulleted summary of the key ideas. However, I remember articles I read far better than articles where I only read the AI summaries.

How deep would my learning and understanding be if I only went as far as reading AI summaries? How much would my confidence and belief in my understanding grow without the depth of knowledge to support them? Would I be creating a kind of false fluency in topics where I lack true depth of understanding?

The convenience of using AI might not just be changing how we learn; it might be changing what we believe learning is… perceiving learning as having access to information rather than as a deep understanding of a topic we had to wrestle with to truly understand. In this way, the convenience of letting AI think for us might just be reducing our intelligence.

Chore masters

I grew up watching The Jetsons, expecting that one day I’d get to travel around in flying cars. In that cartoon, the Jetson family had a robotic maid named Rosie, who was always cleaning up after them. While I’m not sure flying cars are going to be widespread in the next few years, I do think we are going to see a lot of robots doing chores for us.

Just a couple weeks ago I was in a store and watched a robot make a latte for one of the customers. His order included a choice of milk art to go on top… a little flair to add to the experience of having a machine be your barista.

How long before we see somewhat intelligent, human-like robots in every house, each doing mundane chores we’d all rather not do? I’m sure these robots won’t put the wrong items in the dryer… like I do. I’m sure they won’t complain about yard work… like I do. I’m sure they won’t sit on the couch at the end of a long day wishing there weren’t chores to get done… like I do.

I look forward to having a chore-master robot that will do the mundane things I don’t like to do. I’d be thrilled to not do dishes again. I’d love to not spend time folding down boxes and putting out the garbage and recycling. I’d have no issue with never vacuuming again. I’m ready; I’m just wondering how long I’ll have to wait.

Oblivious to what’s coming

If you talk to people about LLMs like ChatGPT, Perplexity, or Claude, you’ll still hear things like, ‘They hallucinate and will make up fake research’, and something I heard recently: ‘they actually make work harder because workers need to spend more time editing and cleaning up what they produce’. What people who say this don’t realize is that those complaints describe pre-January 2026 models, and we are now fully into February 2026. Yes, things are moving that fast! Furthermore, what most people, including me, have not been paying attention to is that when we use the free versions of these tools, we are essentially months and months behind what the latest models can do.

Matt Shumer’s ‘Something Big Is Happening‘, was written just 4 days ago and has already been seen by millions of people. Yes, it’s a bit of a long read, but it is also a ‘must read’. Here is an excerpt:

“Dario Amodei, who is probably the most safety-focused CEO in the AI industry, has publicly predicted that AI will eliminate 50% of entry-level white-collar jobs within one to five years. And many people in the industry think he’s being conservative. Given what the latest models can do, the capability for massive disruption could be here by the end of this year. It’ll take some time to ripple through the economy, but the underlying ability is arriving now.

This is different from every previous wave of automation, and I need you to understand why. AI isn’t replacing one specific skill. It’s a general substitute for cognitive work. It gets better at everything simultaneously. When factories automated, a displaced worker could retrain as an office worker. When the internet disrupted retail, workers moved into logistics or services. But AI doesn’t leave a convenient gap to move into. Whatever you retrain for, it’s improving at that too.”

I recently shared my thoughts on the upcoming ‘Fiscal year end squeeze‘, where I said, “Corporations care about pleasing shareholders and maintaining stock value over caring for the people who work for them. This is the ugly side of capitalism. Eliminate thousands of salaries and suddenly the balance sheet proves to be more profitable. Never mind that these are people’s careers and livelihood that are being cut short. And never mind about loyalty to the company.” What I’m realizing now, after reading Matt’s article, is that the situation is far worse than I thought, because AI is coming after not just these jobs, but almost every other job these newly unemployed people will be looking for.

If I were out of a job right now, I’d be paying the monthly fees for the 2-3 best AI models out there and learning how to power-use them. I wouldn’t be looking for a job; I’d be trying to find a niche where I could work for myself, or maybe become a contractor doing things for people who don’t realize that AI is good enough to get the work done faster than they can. Because the reality is that the vast majority of people in the world are oblivious to just how fast this disruption is coming, and unlike past disruptions, this one is going to happen everywhere, all at once. Most people can’t fathom how disruptive this will be, and even as I share this warning… I’m not sure I fully grasp the full impact either.