Tag Archives: AI

AI – Alternate Identities

I just watched a video clip of Sir Ken Robinson promoting a product to reduce your blood sugar. I’ve already shared ‘An AI Advertisement’ with a fictitious expert, and broken down the flaws in the ad. But now we have an actual (now deceased) celebrity figure doing the promotional plug. It looks and sounds like him, but he never said anything he says in this video advertisement. I know this, but how many people will recognize him and pay a little more attention to this advertising scam because it is delivered by someone famous?

This is just the beginning. We are moving into a ‘post truth era’, where nothing is inherently believable. A decade from now we’ll have multiple alternate identities to choose from… Was the real Al Gore the one warning us about global warming, or was the real one promoting fracking, or alternative medicine, or socialist communism over capitalism? Every video will seem equally real, every source seemingly legitimate. One real, all the others alternative histories indistinguishable from reality.

Will it only be famous people who fall victim to these alternate identities, or are we all going to be replicated? When I’m in my late 80s, will I be watching a video of 50-year-old me, oblivious to whether this recording actually happened or whether it was invented with a perfect imitation of myself?

The implications for scams are immeasurable. Live video of a seemingly real son or daughter extracting banking data from a senior parent. A meticulously created alternative you moving all assets over to someone else. The scams are limited only by imagination, not by technology or capability.

Alternate identities indistinguishable from reality, all playing out as if real. Sir Ken Robinson plugging health supplements is only just the beginning… We are in for some reality-warping performances from AI alternatives to us, and to the people we think we trust… This is only just the beginning!

Post Truth Era

Never mind the ridiculous videos of Mr. Rogers chatting with Tupac Shakur or Bigfoot vlogging, these AI videos seem real enough while fully intending us to know they are AI. What we are seeing now is an indistinguishable bending of real and fake with videos that are completely altering our ability to know what is real and what isn’t.

Voice mimicking was already almost perfect. I saw a video post today from a man whose dad called him to ask what their shared bank account password was. One problem: his dad died last year; he just hadn’t taken his name off the account yet. He said it sounded so real that, had his father been alive, he probably would have shared the password, thinking his dad had forgotten.

Now AI videos are just as good as AI audio, and the combination of the two truly is steering us into a post truth era. People are sharing AI videos completely unaware that they are fake. Even news stations are getting it wrong.

Soon websites will become bastions of truth. Want to know what someone actually said? Go to ‘their name’ .com or .org and see the actual video shared there. Anything else will be questionable, and wherever else the video is shared, it must be watched with skepticism. Subtle or overt, very important changes to a message will occur as someone, ultimately anyone, takes the original video and makes an AI version that delivers their message instead of the intended one.

Following specific domains, and maybe a handful of legitimate news channels, is the only suggestion I have. Legislation won’t keep up, and the fakes are just getting better. Essentially, find reliable sites and distrust everything else. Intuition and common sense won’t be enough.

Digital dog sitter

I went to a store yesterday after work. It was a cold, rainy evening and already dark at around 5:30pm. I picked up the couple of items I came for and headed back to my car. Just as I was getting in, I heard a dog barking at me from inside the car next to me. When I looked over, I saw the dog in the back seat and a note on the electric car’s digital display that read:

My driver will be back soon

Then in smaller font:

Don’t worry! The heater is on and it’s 20°C

The 20°C was in a very large font that could easily be read from a distance.

Considering the taboo normally associated with leaving a pet unattended in a car, I thought this was very clever. Highlighting the temperature of the car removed any concern that the dog’s life was in danger from overheating, and noting that the driver would be back shortly eased any anxiety for dog lovers who might worry about the dog’s wellbeing.

This also made me think of kids we see today being babysat by technology. The parent in the grocery store handing over their phone to the kid sitting in the front of the grocery cart. The kid in the back seat of a car watching a movie. The kid at home on the iPad while dinner is being made.

What will this look like when we have robots ‘adding value’ to these experiences? Will dog owners send their pets for walks while they step into a store, with the robot babysitter cleaning up any poop the dog does along the way? Will kids be playing in the back yard with their robot babysitter rather than having their eyes glued to a screen?

And is this an improvement to what we have now?

I think for dogs it will be, but I wonder about kids. What kinds of bonds will kids build with their robotic babysitters? Will we be able to tell when a teenager has been raised more by robots than by humans? How much robot time will be considered too much? Will a parent who lets a robot babysit their kid for hours and hours be judged like a dog owner who left his dog in a hot car?

When we think of robots that we will soon have in our homes, we think of the conveniences they will provide. What happens when one of those conveniences is helping to raise our kids? What impact will it have? There’s a difference between dog sitting and babysitting that makes this question very interesting. And while I find the digital note in a car telling everyone the dog is comfortable and will be attended to soon quite clever, I’m not sure how clever it will be to have robots attending to our kids more than their parents do.

Infinite within the finite

Civilization is built on infinite growth within a finite system. Until our values move away from a focus on consumerism and wealth accumulation, we are never going to get to either environmental/planetary or human well-being. The energy demands are just too great and simultaneously too destructive.

Will AI solve or magnify these problems? I fear it will indeed magnify them. It’s not just the energy demands of these artificial intelligence machines that are the issue, it’s the promise of more goods at a cheaper price. It’s the promise of every gadget you desire, affordably made in dark factories by intelligent robots that don’t need the lights on. It’s the promise of a luxury electric car for $15,000-20,000; a $5,000 robot that does all your chores at home; a 3D printer that can manufacture high-quality, factory-grade products in the comfort of your own home. All that’s needed are the resources to build these things… unlimited resources taken from a planet with limited resources.

That’s right, to make this amazing, almost limitless future possible, we just need infinite resources from a finite planet. Meanwhile, wealth accumulation is being concentrated, the middle class is shrinking, and we are madly extracting resources from the earth, with little concern over the environmental impact.

It’s. Just. Not. Sustainable.

Lie with confidence

Be controversial but wrong, say it with confidence, and watch the likes and re-shares come your way. I had an Instagram video shared with me. The ‘influencer’ who posted it has over 600,000 followers and she claims to be an autoimmune specialist.

“You’ve got to see this,” she says, after explaining that a man tested his blood before and after EMF (electric and magnetic field) exposure. Then the clip changes to a guy looking at an image on a screen of what he claims to be red blood cells in “pretty perfect blood… I mean, these cells are absolutely amazing cells… it may even be hard actually to mess them up.”

Then they do a ‘phone test’ where the test subject sits between two cell phones, with a third one between his legs on the chair, to test how “these EMFs are affecting his ‘perfect blood’.” Admitting that this is “a bit of a risky game,” he then pricks his finger to draw a drop of blood after this supposed EMF exposure. They put a drop of the blood on a microscope slide and we switch views to see the screen again.

The contrast from the original image is comical. Worse yet, the person is scrolling on the screen to a point that would go far beyond the edge of a drop of blood on a microscope slide. The difference in the slides is described as “A lot of inflammation. It’s all over.” After a very non-medical, exaggerated analysis, it concludes with, “None of this is good.”

When the video got to me it had 336,000 views and over 9,500 likes. And again, it was sent to me by someone who was concerned by this and wanted to share it.

We live in an era where confidence trumps competence. Be controversial and convincing and you are going to get not just attention, but believers. If I were to make a video debunking this, it wouldn’t get traction. Even scientists with large followings would likely not get 336,000 views on a debunking video.

So the incentive is huge. This influencer probably gained thousands of followers from this video. She made hundreds if not thousands of dollars from it going viral. And so it pays to put intentionally fake, pseudo-scientific crap on the web. Just pick a controversial topic, lie with confidence, and watch the profits flow in. No backlash, no consequences, just greed, and incentives to continue to lie.

My fear? I see this getting worse, not better. AI will only serve to exaggerate the problem with more convincing lies that cater to wider audiences. It feels like as a society, we are actually getting dumber and social media is incentivized to make the problem exponentially worse.

Where else have we seen lying with confidence working? Everywhere from biased news outlets, to product advertising, to politics. Whether selling ideas, products, or partisanship, lying with confidence seems to gain far more traction than telling the truth.

_____

Update: After posting this (and probably thanks to re-watching the above video a few times to get the quotes right), I opened Instagram and the first post had dramatic music and warned against wearing polyester on planes:

I took the screenshot and didn’t watch the rest of the video. People actually fall for this crap? 🤦‍♂️

An AI advertisement

I scrolled past this ad a few times before paying any attention to it. But then it gave off an uncanny valley feeling that made me look a little closer. I think it was the very staged first question that bothered me most, and yesterday I finally took the time to watch it through a critical lens. It’s an ad for a Tai Chi app, but I cropped the video to hide the brand because I don’t want to amplify it, I want to critique it.

Here is the ad:

And here is a list of telltale signs that suggest it is AI.

1. Look at the opening image. The woman is talking at a 90° angle to the stage, and there is no one at the podium below her.

2. The ‘expert’ is a perfectly chiseled man who is never named. No recognition of him as an expert in the field… because he’s fictitious.

3. Obviously fake audience members. The first image shows a blurred bearded man who doesn’t seem real to me. The second image has a man wearing a partial microphone like the expert.

4. The painfully fake script.

“Isn’t a gym better?”

“Gym doesn’t work after 40.”

This isn’t necessarily evidence of AI, it could just be bad writing, but it comes off feeling very wrong and unnatural. It’s like there was an intent in the text to make the expert sound like English is his second language, but his voice doesn’t carry that same suggestion.

5. Comments are turned off. There is no benefit in having viewers outing the ad as fake. It’s better to allow the ad to fool more people without being called out.

The reality is that I could pick this ad out as fake, but that’s only because it was done poorly. We are going to see a lot more ads done this way and they are going to be good enough to fool us completely. It’s just a matter of time, and that time is approaching very quickly.

Do not amplify if you can not verify

This is a simple, but potent message. Before hitting the ‘Like’ or ‘Share’ button, before telling someone about the interesting fact you heard online, verify it in some way. Is it true?

Do some Ground Truthing. Can you verify the claim? Is it real or AI? Is it worthy of your amplification, or are you just contributing to the spread of something unworthy of being shared?

How much better would the internet be if everyone paused and verified what they were sharing before amplifying misinformation, disinformation, fake news, and AI deep fakes?

It’s a worthy and effective mantra:

Do not amplify if you can not verify.

Margins over manpower

Amazon just laid off over 14,000 people. Beth Galetti, Senior Vice President of People Experience and Technology at Amazon, wrote that they are ‘staying nimble and continuing to strengthen our organizations’: “The reductions we’re sharing today are a continuation of this work to get even stronger by further reducing bureaucracy, removing layers, and shifting resources to ensure we’re investing in our biggest bets and what matters most to our customers’ current and future needs.”

What are the ‘biggest bets’ they are investing in? Chips. AI chips. Profits before people. Margins over manpower. The human equation in a company no longer matters. People are as expendable as office supplies. Need cost savings? Firing employees saves a lot more money than reducing the cost of paper and staples.

The shareholder model of capitalism is slowly collapsing. It’s the middle and upper-middle class that is getting laid off. Meanwhile, large companies like Nvidia invest billions of dollars in OpenAI, and OpenAI takes that money and buys Nvidia chips. Simultaneously, all these companies lay off staff to amplify margins, buy more chips, and grow even larger.

Top executives who are already making millions hit their shareholder targets and get bonuses. Meanwhile, regular employees face layoffs and have no job security. You think your $200,000 job is safe? Only until next quarter’s earnings go public, or until the merger is completed after your small company is swallowed up by one of the big guys.

If it was just Amazon, that would be one thing, but similar reports of layoffs have recently been announced at IBM (9,000 jobs in the US alone), UPS (34,000 jobs), Nestle (16,000 jobs), Intel (24,500 jobs)… the list goes on. What happens to the global economy when hundreds of thousands of people become jobless while large companies recycle their money, reinvesting in each other in circular deals where funding is promised back to the investors in product purchases?

What happens in a world where profits and margins matter more than people?

AI Infiltration

Do you want an AI to be able to read and reply to your email? Wouldn’t that be great? Yes, but I’m not doing it!

My personal email is a gateway to everything I do on the web, and that includes my digital banking. It also includes access to EVERY web tool that I use. I can’t count the number of times that I’ve used ‘Forgot my password’ on a website or an app and retrieved that information in my email. So my email gives me, and anyone or anything that has my password, a lot of control over the online tools that I regularly use.

As an aside, this is why two-factor authentication is so important: it protects you from someone having full control of everything you do online simply by having access to your email. Yet, to me, this protection isn’t enough to justify giving an AI agent access to my email. To me, this is allowing too much access to my whole digital life.

It’s not the reading of my email I’m concerned about. And frankly, I’d love to have an AI respond to basic email communication on my behalf, or add items to my calendar for me. That would be great. But to do that I’m essentially saying to an AI company, “I’m an open book, go ahead and read me in order to train your AI model.” And I’m also allowing an agent full access to my digital life.

What happens when a ‘helpful’ agent decides that in order to help me it needs access to my online banking to make a purchase? Or worse yet, what happens when an AI is injected with a virus designed to collect my passwords, update those passwords, and then delete the ‘Forgot password’ emails so I don’t even know they were changed?

We’ve already seen countless examples of people tricking an AI into giving access to programming information that should have been kept private, or convincing an AI to respond to inappropriate questions it was trained not to answer. Knowing this is not terribly hard to do, what makes you believe an AI agent with full access to your email, your life online, can’t be convinced or exploited to share your information and access in a way that will completely compromise you and your personal information?

I’m not convinced the risk is worth the reward. As I use AI more, I’m using it as a tool to help me understand and connect to the world in better, more efficient ways. But I’m not ready to let AI into my email and into my digital life. I’m wondering when the horror stories of full identity theft are going to start happening. And I’m guessing those stories are going to start with, “I gave an AI agent access to my email.”

In my lifetime

I was only one-and-a-half years old when Apollo 11 landed on the moon, just 56 years ago. The computer guidance system was sophisticated for its day, but simple by today’s standards. Years later, when I bought the 64K adapter for my Commodore VIC-20 home computer, which needed to be plugged into my television, I had access to more memory than the Apollo guidance computer had.

Today most calculators have more memory than that. So do our fridges and other household items that really don’t even need it. We routinely purchase items more sophisticated than the computer that landed the first spacecraft on the moon.

Now we are asking questions of LLMs that do billions of calculations a second, and we don’t even fully understand the processes leading to their answers. The sophistication of these tools is so much greater than anything humankind has created before. Few people in the world truly understand the workings of these tools, in the same way that not many people understood what the Apollo 11 navigation computer was doing back in 1969.

So where is this all leading? What technological advances am I going to see in my lifetime? Are we all going to have house robots doing chores for us? Will we no longer drive because cars will drive (or fly) themselves better than we can? Will I go to the bathroom and have my toilet tell me I’m deficient in a certain vitamin after analyzing my poop?

I’m fascinated by how fast we’ve innovated in less than 60 years. I recognize how much faster we’ve innovated in the last 30 years compared to the 30 before that, and it makes me think that if the rate of innovation continues, I’ll see even greater innovations in the next 15 years. That’s the nature of exponential growth and I think that innovation has been far more exponential than incremental.

I spend a fair bit of time thinking about the future… Be it the future of technology, education, health and longevity. In each of these areas I see things changing drastically in the next 15 years. But I don’t have a crystal ball and I’m not sure that I can separate science from science fiction, or innovation from imagination, as I look forward. In all honesty I have no idea how far technology and innovations will take us in my lifetime, but I’m excited about the possibilities.