Tag Archives: future

UCI rather than UBI

As AI and robotics continue to scale at unimaginable speeds, with AI getting exponentially smarter and robots becoming increasingly agile, we’ve got to realize how many jobs will disappear in a very short time period. This isn’t a gradual transition; it’s not a move from one field to another like farmers transitioning into factory workers during the industrial revolution. It’s a massive shift from human labour to machine labour that the world’s economies simply aren’t designed to absorb.

I’ve seen a growth in the number of people talking about the need for Universal Basic Income (UBI), but I fear this isn’t enough. I fear that giving millions if not billions of people a basic income, with no real means for most of them to supplement it, is an insufficient solution. We don’t need UBI, we need UCI – Universal Comfortable Income. It’s not going to be enough to give people a basic survival income. We are going to need to see governments, and maybe even companies, share their resources and wealth with people, or else who is going to have the resources to buy the products and services AI and robots will offer?

The potential for dissatisfaction and ultimately unrest seems scary to me. A world with a couple dozen trillion-dollar companies, and a handful of trillionaires running them, is also a world with vast populations of people eking out a subsistence lifestyle, unable to do more than meet their survival needs. A basic income that must be supplemented before people can enjoy the offerings of a fully automated economy will not hold off a revolt for long.

Maybe I’m wrong. Maybe there are other solutions to this problem. Maybe I’m too bullish about how far things will advance in a short time. That said, the potential for the scenario above to occur in the next decade is not zero. It might be a pessimistic bad-case or even worst-case scenario, but it’s possible… and scary. If things advance as fast as I think they will, we can’t continue to have UBI conversations; we need to move the goal posts and start really thinking about UCI.

A tidal wave of spam

Nikita Bier, head of product at Twitter/x.com, said this on February 11th, two months ago today:

“Prediction: In less than 90 days, all channels that we thought were safe from spam & automation will be so flooded that they will no longer be usable in any functional sense: iMessage, phone calls, Gmail.

And we will have no way to stop it.”

Anthropic’s newest AI model, called Claude Mythos, is not being released to the public due to concerns about its ability to uncover high-severity cybersecurity flaws in major operating systems and web browsers. But make no mistake, this model and others like it (some privately owned and some free and open source) will be available in the next month. With them will come a tidal wave of security breaches, identity theft, and corporate as well as personal blackmail crimes.

The fact is that these AI models are professional lock pickers put in the hands of anyone who wants to use them, with almost no skill needed. Unlike the movies, where the people pulling off a heist needed to recruit that one-of-a-kind safe cracker with crazy skills, now a 15-year-old in his parents’ basement can do it without leaving the house.

This wave of ‘safe crackers’ is going to be let loose soon. But something else is headed this way too: the scammer coming for you and me via our phones, laptops, and social media accounts. Scams used to show up in poorly written emails, or broken-English texts and phone calls that made them easy to detect. Now three things have fundamentally changed:

1. The quality of the messaging is now flawless;

2. Spammers and scammers can target you specifically, sharing enough personal information to seem legitimate;

3. The sheer volume of spam coming our way has exploded. One spammer used to mean one phone call at a time, followed up by a real person. But with AI agents, one command can unleash wave upon wave of simultaneous emails, texts, and messages across many social media platforms.

The biggest problem with AI in the next 5 years isn’t what AI can do on its own, but rather what people with bad intentions can… and will… do with AI. It’s bad faith actors who will be our nemesis. Ultimately, the tidal wave is coming, “And we will have no way to stop it.”

Way more Waymo

Here is a statistic from the company Waymo:

“In less than two years, the company’s average weekly paid robotaxi trips have grown tenfold, from 50,000 per week in May 2024 to 500,000 per week today.” Source: Waymo’s skyrocketing ridership in one chart
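The quote describes tenfold growth over roughly two years. As a back-of-envelope sketch of what that implies week to week, here is a minimal calculation; note the ~100-week span is my assumption from “less than two years,” not a figure Waymo published:

```python
# Back-of-envelope check of the reported tenfold growth in weekly robotaxi trips.
start_trips = 50_000    # weekly paid trips, May 2024
end_trips = 500_000     # weekly paid trips, "today"
weeks = 100             # assumed span: roughly two years

growth_factor = end_trips / start_trips          # overall multiple (10x)
weekly_rate = growth_factor ** (1 / weeks) - 1   # implied compound weekly growth

print(f"Overall growth: {growth_factor:.0f}x")
print(f"Implied compound weekly growth: {weekly_rate:.2%}")
```

A tenfold jump sounds dramatic, and it is, yet it works out to only about 2.3% compound growth per week, which is exactly the kind of quiet exponential curve that catches people off guard.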

This is amazing growth, and it’s not an isolated statistic. We are seeing this kind of growth in robot-focused manufacturing, and we are seeing it in the use of AI to do many jobs that humans used to do.

Are we ready for this? Are we ready for the gig economy to be eaten up by automation? Are we ready for not just blue collar but also white collar jobs to dwindle as AI takes over these jobs at an exponential rate? Are we ready for AI teachers, AI servants, AI drivers, AI delivery, AI accountants, AI lawyers, AI programmers, and AI in fields we thought would always need humans in them? Are we ready for way more of this kind of Waymo growth occurring simultaneously across many sectors?

We aren’t ready. Yet this is coming our way. It’s that simple.

Our responses in each case will be reactionary. For every current Waymo passenger there are probably a few potential customers thinking, ‘That’s scary, I’m not ready to put my life in the hands of a robot driver on the highway or the busy streets of downtown at rush hour.’ But that hesitation will dwindle. For every worker who thinks, ‘My job is safe, they’ll always need me,’ there are others who thought the same just a few years ago and are now looking for a job, often in a different sector than the one they’ve been in.

Yes, there are limitations to this growth in some sectors. Yes, new jobs may come up that are uniquely human in nature. Yes, there are as-yet unharnessed opportunities for people to make a greater income (with less effort) in areas they would not have imagined just a few months ago. It’s not all doom and gloom… but make no mistake, the exponential growth of AI-powered advances will drastically affect all of our everyday lives sooner than most people realize. Waymo’s growth is emblematic of the kind of growth we will see in almost every aspect of our lives.

Who will get us there?

Stephen Downes shared the following on LinkedIn:

“I was asked, “Please provide a brief abstract that summarises your views on the impact of AI on higher education.”

As far more than the language models that have captured the attention of the world over the last few years, artificial intelligence (AI) represents a significant increase in human capability, augmenting and sometimes exceeding our natural capacities to perceive, reason, create and remember. Ubiquitous access to these capabilities changes the definition of what it means to learn and to be educated. Skills once reserved to the domain of experts are now in the hands of everyday people, while most every discipline is devising new models, methods and pragmatics of work alongside, or teaming with, these new tools. This challenges educators along a number of fronts, impacting how they teach, what they teach, and even what it means to teach. Today’s educator in a world of AI is responsible for far more than passing along knowledge (indeed, the machine can do most of that). We will be responsible for challenging students both young and old to find new ways of seeing and creating, leading them through demonstration of dedication, resilience and passion, and modeling for them the best values of civil and social responsibility, contribution and care.

Thoughts?” ~ Stephen Downes

Although my thoughts apply to K-12 education as well as higher education, they come to me in the form of a question:

Who is going to get us there?

Who is the ‘We’ that Stephen is talking about when he says, “We will be responsible for challenging students both young and old to find new ways of seeing and creating, leading them through demonstration of dedication, resilience and passion, and modeling for them the best values of civil and social responsibility, contribution and care”?

I love this vision of what teaching can become; I just don’t see a clear path to take us there.

‘We’ won’t get there following the guidance of financially lucrative edu-tech businesses, products, and tools… their locked-in subscriptions will tout measures of success that don’t align with this vision, even when they say they will.

‘We’ won’t get there like we did with Web2.0 tools in the late 2000’s and early 2010’s, on the backs of tech savvy educators leading the charge.

‘We’ won’t get there because of some governmental vision pushing a new AI enhanced curriculum, or even new guidelines that somehow redefine for teachers, “how they teach, what they teach, and even what it means to teach”.

I hope I’m not coming off as a pessimist. I’m excited about what’s possible. I just fear that ‘we’ aren’t going to get ‘there’ any time soon unless ‘We’ align philosophy, policy, and economic support for the transformation of schools into something different.

Short of that, I fear that ‘We’ will be having the same ‘20th century schools in a 21st century world’ conversation in another 10 years… which I’ve heard since getting into education in the late 1900’s.

The worst it will be from now on

I used Google’s Notebook LM in September 2024 and I was impressed with the podcast it created, sharing a summary of my blog. A month later I had it do the same for a video I created with Joe Truss. The video covers a novel theory, not a general-knowledge concept, and yet the AI grasped the majority of the concepts and produced a very good summary.

Today I went back to Notebook LM because I heard it can now do video summaries. Again, I was impressed. While the accompanying visuals were not ideal (we discuss complex geometry), the audio summary was excellent, and it was valuable to see what takeaways were summarized and how the ideas were structured.

I then explained some of the geometric issues and the AI produced a PDF with the correct geometry. Joe and I then tried creating a slide deck, another new feature. The resulting text was again excellent, but some of the images were not quite accurate; still, we could see that correcting the details and providing other sources would give us impressive results.

Reflecting on these improvements, it occurred to me just how good this tool is now, and yet this is the worst version it will ever be. Artificial intelligence and robotics are both advancing exponentially in their capabilities. It’s exciting to think that what we are capable of using these tools for now will be considered simplistic if not archaic in just a few years.

Today I saw a video about a Chinese company that is selling a three-and-a-half-foot-tall humanoid robot for the price of a new iPhone. It is not a simplistic toy; it is extremely agile and comes with a fully programmable operating system, meaning it is completely trainable for skills it doesn’t come with. That same company is expecting to reach 1,000 units produced a month by the end of this year.

We are in an era where advances happen daily, and what we marvel at today will be commonplace tomorrow. Every day the advances get a little better and so we are perpetually living with the worst technology we’ll ever know.

Enshittification

I asked Copilot to search my blog for, “posts where technology improves while systems (work, economy, institutions, structures) get worse.”

It shared a summary of relevant posts.

The reason I asked for this is that I wanted to look back on posts that reminded me of this skit out of Norway. It is, as the Threads post suggests, “utterly brilliant”!

You can find the video, and more information, on the website shown at the end of the skit.

We aren’t imagining this; things are getting intentionally worse. On social media we are not the customer, we are the product sold to advertisers. And even when we are the customer, we don’t buy anything outright anymore; instead, we get locked into subscriptions.

Copilot didn’t find the one post I was looking for, but I found it to share here:

Sometimes technology s(UX)

I’ll end here with a couple of paragraphs from that post, since there’s no need to rewrite something I already said:

“I want to use my credit card at a gas station, not only must I put in my pin, I need to say how much I want to spend as a maximum. Every instant teller I go to asks me what language I want to work in… how hard would it be for the machine to know my preference after asking once? And as for autocorrect… it’s getting worse, not better.

I love my tech, but it seems to me that technology is all about adding features, and not about user experience (UX). The user is forgotten as new bells and whistles are added. Or things are so locked down that I need Face ID, a confirmation text, and coming soon, a DNA scan. Between new features and new security measures, there seems little time spent thinking about what the experience is for the end user.”

Dystopian hiring

We aren’t that far away from a rather dystopian world where so much of our lives are monitored and recorded that we will be an open book.

Imagine going for a job interview and before you arrive a digital, AI private detective has tracked every possible video, image, and written document that you’ve shared publicly, and given you a score based on company criteria that you are not privy to. And maybe that tracking will go beyond publicly shared data and reveal even more about you, like medical information scraped from a data breach you know nothing about.

Imagine going into that interview, where you are subjected to a ‘voluntary’ brain scan as part of the process… one you agree to knowing full well that you won’t get the job if you don’t volunteer.

That scan will check to see if you are being honest during the interview, and it will also do things like measure the size of your anterior midcingulate cortex, which will let the company know whether you are someone who challenges themselves. The company hiring you will know more about you than your friends and family do.

And for a real dystopian plot twist: it’s an android interviewing you for a mundane job that androids consider too menial to do! Even without this twist, I wonder what the job market will look like in 20 years. What role will humans play in the overall workforce? What jobs are uniquely human, and what jobs can a brilliant if not superintelligent AI do?

I’m not sure how much the job market will truly change in just 20 years, but at the rate of advancement I’m seeing in robotics and artificial intelligence, I really think a major disruption in what we call work is coming. The disruption will be uneven at first, taking more jobs in some sectors than in others, but sooner than we would want to envision it will reach almost every sector. What will that really mean for humans and the things we define as work?

Robots, robots, everywhere

In the world of robots two things are happening at lightning speed:

  1. Capabilities – A year ago humanoid robots were clunky, unstable, and for lack of a better word, robotic.
  2. Production – A year ago if a company could produce 5,000 robots in a year, they were industry leaders.

Have a look at this video and you’ll see just how much farther along robots and their production have advanced: ‘China’s New AI Robots Shock Everyone With Impossible Skills’.

It might be cliché to say, but the future has arrived. First in factories, then in homes! If one thing is certain about our future, it is that humanoid robots will be all around us. We’ll have to wait and see how this impacts work, chores, and even social interactions, because there isn’t going to be time to think through the long-term implications before they arrive… everywhere, very quickly.

Is Artificial Intelligence Reducing Our Intelligence?

Joe Truss shared a great article with me, ‘The hidden cost of letting AI make your life easier’, by Shai Tubali on Big Think. Towards the end of the article, Shai shares this:

“[Sven Nyholm’s] deeper worry is not that AI will outperform humans, but that it will appear to do so, especially to non-expert eyes. “Current forms of AI threaten meaningful activities,” he argues, “because they look far more intelligent than they are.” This appearance invites trust. People begin to treat AI as an oracle, mistaking an impressive engineering achievement for understanding. As misplaced confidence grows, judgment weakens. Skills develop less fully. Capacities are handed over too easily, and with them, forms of meaning that depend on effort.

Nyholm links this directly to the value of processes, including confusion, detours, and lingering with complexity. He punctures the idea that everything should be fast and efficient. Speed may feel pleasant, he concedes, yet it undermines patient thinking and reconsideration. He points to an Anthropic advertisement promising a paper completed in a single day: brainstorming in the morning, drafting by noon, polishing by afternoon. What disappears in this vision is the slow work of searching, getting lost, following the wrong thread, and returning with insight. “Many ideas,” Nyholm says, “come from looking for one thing and finding something else instead.” When AI delivers tidy, unified answers, it spares us that work. In doing so, it risks weakening our capacity to break complex problems into parts, examine assumptions, and think things through with precision.”

AI reduces the productive effort and struggle that makes both learning and understanding stick. Accessing information is profoundly different from understanding it, and AI directs the learner towards an answer instead of a learning process. This article reinforced some ideas I’ve already shared.

In ‘Keeping the friction‘ I said, “Decreasing the challenge doesn’t foster meaningful learning. Reducing the required effort doesn’t make the learning more memorable. Encouraging deeper thinking is the goal, not doing the thinking for you.”

And in ‘What’s the real AI risk in education?‘ I said, “Real learning has a charge to it, it needs to come with some challenge, and hardship. If the learning experience is too easy, it won’t be remembered. If there isn’t enough challenge, if the answers are provided rather than constructed, the learning will soon be forgotten. Remove being stuck, struggling, and failure, and you’ve removed the greatest part of a learning experience.”

I see this in my own learning. There are times I sit and read a full article, like the one shared above, but there are other times I don’t bother and just throw a long article into an LLM and ask for a bulleted summary of the key ideas. However, I remember the articles I read far better than the ones where I only read the AI summary.

How deep would my learning and understanding be if I only went as far as to read AI summaries? How much will my confidence and my belief in understanding grow, without the depth of knowledge to support my confidence and understanding? Would I be creating a kind of false fluency in topics where I lack true depth of understanding?

The convenience of using AI might not just be changing how we learn; it might be changing what we believe learning is… perceiving learning as having access to information rather than as a deep understanding of a topic we’ve wrestled with to truly understand it. In this way, the convenience of using AI to think for us might just be reducing our intelligence.

Chore masters

I grew up watching the Jetsons, expecting that one day I’d get to travel around in flying cars. And in this cartoon, the Jetson family had a robotic maid named Rosie, who was always cleaning up behind them. While I’m not sure if flying cars are going to be widespread in the next few years, I think we are going to see a lot of robots doing chores for us.

Just a couple of weeks ago I was in a store and watched a robot make a latte for one of the customers. The customer’s order included a choice of milk art to go on top… a little flair to add to the experience of having a machine be your barista.

How long before we see somewhat intelligent, human-like robots in every house, each doing mundane chores we’d all rather not do? I’m sure these robots won’t put the wrong items in the dryer… like I do. I’m sure they won’t complain about yard work… like I do. I’m sure they won’t sit on the couch at the end of a long day wishing there weren’t chores to get done… like I do.

I look forward to having a chore master robot that will do the mundane things I don’t like to do. I’d be thrilled to never do dishes again. I’d love to not spend time folding down boxes and putting out the garbage and recycling. I’d have no issues with the idea of never vacuuming again. I’m ready; I’m just wondering how long I’ll have to wait.