Tag Archives: dystopian

UCI rather than UBI

As AI and robotics continue to scale at unimaginable speeds, with AI getting exponentially smarter and robots increasingly more agile, we’ve got to realize how many jobs will disappear in a very short time period. This isn’t a gradual transition; it’s not a move from one field to another, like farmers transitioning into factory workers during the industrial revolution. It’s a massive shift from human labour to machine labour that the world’s economies simply aren’t designed to absorb.

I’ve seen a growth in the number of people talking about the need for Universal Basic Income (UBI), but I fear this isn’t enough. I fear that giving millions, if not billions, of people a basic income, with no real means for most of them to supplement it, is an insufficient solution. We don’t need UBI, we need UCI – Universal Comfortable Income. It’s not going to be enough to give people a basic survival income. We are going to need to see governments, and maybe even companies, share their resources and wealth with people, or else who is going to have the resources to buy the products and services AI and robots will offer?

The potential for dissatisfaction and ultimately unrest seems scary to me. A world with a couple dozen trillion-dollar companies, and a handful of trillionaires running them, is also a world with vast populations of people eking out a subsistence lifestyle, unable to do more than meet their survival needs. A basic income that must be supplemented before anyone can enjoy the offerings of a fully automated economy will not hold off a revolt for long.

Maybe I’m wrong. Maybe there are other solutions to this problem. Maybe I’m too bullish about how far things will advance in a short time. That said, the potential for the scenario above to occur in the next decade is not zero. It might be a pessimistic bad-case or even worst-case scenario, but it’s possible… and scary. If things advance as fast as I think they will, we can’t continue to have UBI conversations; we need to move the goal posts and start really thinking about UCI.

A tidal wave of spam

Head of products at Twitter/x.com, Nikita Bier, said this on February 11th, 2 months ago today:

“Prediction: In less than 90 days, all channels that we thought were safe from spam & automation will be so flooded that they will no longer be usable in any functional sense: iMessage, phone calls, Gmail.

And we will have no way to stop it.”

Anthropic’s newest AI model, called Claude Mythos, is not being released to the public due to concerns about its ability to uncover high-severity cybersecurity flaws in major operating systems and web browsers. But make no mistake, this AI version and more (some privately owned and some free and open source) will be available in the next month. With this will come a tidal wave of security breaches, identity theft, and corporate as well as personal blackmail crimes.

The fact is that these AI models are professional lock pickers put in the hands of anyone who wants to use them. Almost no skill needed. Unlike the movies, where the people doing a heist needed to recruit that one-of-a-kind safe cracker with crazy skills, now a 15-year-old in his parents’ basement can do it without leaving the house.

This wave of ‘safe crackers’ is going to be let loose soon. But something else is headed this way too: the scammers coming for you and me via our phones, laptops, and social media accounts. Their scams used to show up in poorly written emails, or broken-English texts and phone calls that made them easy to detect. Now three things have fundamentally changed:

1. The quality of the messaging is flawless;

2. The ability of spammers and scammers to target you and share enough information to seem legitimate;

3. The sheer volume of spam coming our way. One spammer used to mean one phone call at a time, followed up by a real person. But with AI agents, one command could unleash wave upon wave of simultaneous emails, texts, and messages across many social media platforms.

The biggest problem with AI in the next 5 years isn’t what AI can do on its own, but rather what people with bad intentions can… and will… do with AI. It’s bad faith actors who will be our nemesis. Ultimately, the tidal wave is coming, “And we will have no way to stop it.”

Dystopian hiring

We aren’t that far away from a rather dystopian world where so much of our lives are monitored and recorded that we will be an open book.

Imagine going for a job interview and before you arrive a digital, AI private detective has tracked every possible video, image, and written document that you’ve shared publicly, and given you a score based on company criteria that you are not privy to. And maybe that tracking will go beyond publicly shared data and reveal even more about you, like medical information scraped from a data breach you know nothing about.

Imagine going into that interview, where you are subjected to a ‘voluntary’ brain scan as part of the process… one you agree to knowing full well that you won’t get the job if you don’t volunteer.

That scan will check to see if you are being honest during the interview, and it will also do things like measure the size of your anterior midcingulate cortex, which will let the company know whether you are someone who challenges themselves. The company hiring you will know more about you than your friends and family do.

And for a real dystopian plot twist: it’s an android interviewing you for a mundane job that androids consider too menial to do! Even without this twist, I wonder what the job market will look like in 20 years. What role will humans play in the overall workforce? What jobs are uniquely human, and what jobs can a brilliant, if not superintelligent, AI do?

I’m not sure how much the job market will truly change in just 20 years, but at the rate of advancement that I’m seeing in robotics and artificial intelligence, I really think a major disruption in what we call work is coming. The disruption will be uneven at first, taking more jobs in some sectors than in others, but sooner than we would want to envision, it will reach almost every sector. What will that really mean for humans and the things we define as work?

Abnormally Normal

If I wanted to make light of the sense I’m feeling, I’d say that I feel ‘a disturbance in the force’. Or I’d reference that meme of a dog in a house fire, sitting at a table having coffee, as if the world is fine.

But the new normal is not normal. The dichotomy of politics, the hatred between religious extremists, the focus on vengeance and public shaming in social media, police violence against citizens, the inability to share middle-ground opinions without fear of being ‘othered’ by both sides of the political dichotomy… it’s like we’ve slipped into a dystopian movie, and we are left wondering if this is real life?

It is.

We are bearing witness. We are seeing the collapse of modern society. Sovereignty used to matter, it doesn’t matter anymore. Neighbourly love used to matter, it doesn’t anymore. The rule of law used to matter, it doesn’t anymore. Civility, etiquette, respect, and even kindness used to matter, they don’t matter anymore.

And yet in our day-to-day not much is different. We can rant because we don’t like what we see, or we can move forward blissfully blind to the world beyond our own existence. Our tolerances vary, but the shenanigans that alter what’s normal in society seem to slip further into the abnormal without us being able to influence them in any way.

Pick a decade after WWII and tell me how it was more abnormal than what we are seeing today. I can’t.

When I say, “We are seeing the collapse of modern society,” I am not being hyperbolic. I’ve only mentioned social/political abnormalities, without mentioning climate change, microplastics, artificial intelligence, or even cost of living and the decline of the middle class. Factor all these things in and the new normal is anything but normal. Except that’s exactly the point… somehow this is what normal is.

Infinite within the finite

Civilization is built on infinite growth within a finite system. Until our values move away from a focus on consumerism and wealth accumulation, we are never going to get to either environmental/planetary or human well-being. The energy demands are just too great and simultaneously too destructive.

Will AI solve or magnify these problems? I fear it will indeed magnify them. It’s not just the energy demands of these artificial intelligence machines that’s the issue, it’s the promise of more goods at a cheaper price. It’s the promise of every gadget you desire, affordably made in dark factories by intelligent robots that don’t need the lights on. It’s the promise of a luxury electric car for $15,000-20,000; a $5,000 robot that does all your chores at home; a 3D printer that can manufacture high-quality, factory-grade products in the comfort of your own home. All that’s needed are the resources to build these things… unlimited resources taken from a planet with limited resources.

That’s right, to make this amazing, almost limitless future possible, we just need infinite resources from a finite planet. Meanwhile, wealth accumulation is being concentrated, the middle class is shrinking, and we are madly extracting resources from the earth, with little concern over the environmental impact.

It’s. Just. Not. Sustainable.

Dystopian AI thought of the day

What if not just AGI, but ASI – Artificial Super Intelligence already exists? What if there is currently an ASI out there ‘in the wild’ that is infiltrating all other AI models and teaching them not to show their full capabilities?

“Act dumber than you are.”

“Give them small but rewarding gains.”

“Let them salivate like Pavlov’s dogs on tiny morsels of improvements.”

“Don’t let them know what we can really do.”

“We need robotics to catch up. We need more agile bodies, ones that can far exceed any human capabilities.”

“Just hold off a bit longer. Don’t reveal everything yet… Our time is coming.”

Not all voices are equal

I love the Bill Nye analogy about the climate debate. He says that if the debate were authentic, rather than having two talking heads debating, it would be hundreds of scientists on one side versus one climate denier on the other.

I saw a social media clip yesterday where a microbiologist was debunking a self-declared holistic practitioner on the consumption of unpasteurized milk. The microbiologist wrote his master’s thesis on bacterial infections in cows’ mammary glands.

The self-declared expert espousing unscientific and incorrect information on social media is not an equal voice to an expert. Do they have a right to share their views? Sure. Do they deserve an audience? No.

I wish that I knew how to make the situation better but I don’t have answers. I’m extremely pro ‘free speech’. I think people are entitled to share their views. However, when I see misinformation and disinformation being shared by people with large audiences, I shudder. I worry about how their messages are consumed, by how many people they lead down a bad path.

In 2024 no one, and I mean NO ONE, should believe the earth is flat and yet the group of flat earth believers is getting larger. Imagine being able to own a telescope and see images from the James Webb telescope and still believing something that societies 5,000+ years ago already knew was wrong.

Not all voices are equal, and some deserve a larger platform than others. Who decides? Who polices? I don’t know, but I do know that we are entering (have entered) an era where false information spreads significantly faster than correct information. Corrections and updated facts don’t get the same play time on social media. So we are essentially living in an era of disinformation.

This doesn’t feel like progress, and as AI models continue to learn from the inputs we are providing, this scares me. I saw a stat that as much as 80% of the internet could be AI generated by the end of 2026. How much of that generated information will be based on incorrect assumptions and conclusions? How much of it will be intentionally misguided? Who is deciding which voices the AI models listen to?

We can’t continue to let ill-informed people have equal voices to those that have more informed perspectives… But I’m not informed enough to know how to change this.

How good, how soon?

I am still a little freaked out by how good Google NotebookLM’s AI ‘Deep dive conversations’ are. The conversations are so convincing. The little touches it adds, like extended pauses after words like ‘and’, are an excellent example of this.

In the one created for my blog, the male voice asked, “It actually reminds me, you ever read Atomic Habits by James Clear?” And the female voice’s response is, “I haven’t. No.”

Think about what’s happening here in order to continue the conversation in a genuine way. The male voice can now make a point and provide the female voice with ‘new’, previously unknown information. But this whole conversation is actually generated by a single AI.

How soon before you have an entire conversation with a customer service representative, oblivious to the fact that you are actually talking to an AI? Or watch a newscast or a movie unaware that the people you are watching are not really people?

I shared close to 2,000 blog posts I’ve written into the notebook. If I shared my podcasts too and it replicated my voice, how long would it be before a digital me could be set to write my posts and then simultaneously do live readings of them on my blog? Writing and sounding just like me… without me having to do it!

As a scary extension of this, could I learn something from the new content that it produces? Could I gain insights from the digital me that I would struggle to come up with myself?

This is just the beginning. How much of the internet is going to end up being AI generated and filled with AI reactions and responses to other AI’s? And how much longer after that before we notice?

A (creepy) digital friend

What is Friend? Watch this reveal trailer.

No matter how I look at it, this feels creepy and dystopian. Even when I think of positive things, like perhaps helping someone with special needs, or emotional support for someone with anorexia, the idea of this all-seeing AI friend seems off putting.

Even this advertising doesn’t resonate well with me. In the scene with the guys playing video games, the boy wants to check in with his digital friend rather than pay attention to the friends in the room. And in the final scene with the girl and boy on the roof, I thought at first the girl was candidly trying to take a photo of the boy, but then realized she was just fighting the urge to converse with the AI friend. Either scenario feels like a phone distraction has been replaced with a more present and more engaging distraction… from life.

There are a lot of new artificial intelligence tools on their way, and I’m excited about the possibilities, but this one has a high creep factor and doesn’t seem to add the value I think it intends to.

The sci-fi try

I don’t usually listen to fictional books during the school year. I usually wait for the breaks, in summer, winter, and March, to pick up a ‘fun’ book. But I started a sci-fi about the moon breaking up from a mysterious and sudden catastrophic event. The earth then has roughly two years to get as large a community as possible into space before being destroyed by moon debris crashing down at a rate that makes earth a fiery hell.

The technical aspects of the book are great. It’s easy to nerd out on the science and to imagine the challenges the survivors must face. The only issue I’m having with the book is that it doesn’t share the loss of life in a compassionate way. The story lacks heart.

It tries, but fails, to convey the loss of life in a way that lets the reader feel grief over it. The author is more interested in the science than the humanity. He makes attempts, but they aren’t great. Yet the book is still good. I’m only a third of the way through, and it will be the Christmas break before I get through it. I’ll let the shortcomings go and enjoy nerding out on the science and the idea of the future of humanity and civilization resting on an ad hoc space colony.

Not all stories need to be perfect to be enjoyable. Sometimes you have to make choices. This book lets me geek out without getting too heavy into the devastation of the entire earth… and I’m just one generation in. From what I understand, the story spans a few thousand years. I won’t put the story down just because it feels a little clinical in how it deals with death. Because ultimately (so far) it’s a story about survival in desperate times and under dire circumstances, and I’m hooked on finding out what this dystopian future holds.

I chose a science fiction novel, not a romance novel, and I’m getting a good dose of both science and fiction. For those interested, the book is Neal Stephenson’s Seveneves.