Tag Archives: technology

Self-interests in AI

Yesterday I read the following in the ‘Superhuman Newsletter (5/26/25)’:

Bad Robot: A new study from Palisade Research claims that “OpenAI’s o3 model sabotaged a shutdown mechanism to prevent itself from being turned off”, even when it was explicitly instructed to shut down. The study raises serious safety concerns.

It amazes me how we’ve gotten here. Ten, or even five years ago, there were all kinds of discussions about AI safety. There was a belief that future AI would be built in isolation with an ‘air-gap’, a security measure to ensure AI systems remained contained and separate from other networks or systems. We would grow this intelligence in a metaphorical petri dish and build safety guards around it before we let it out into the wild.

Instead, these systems have been built fully in the wild. They have been given unlimited data and information, and we’ve built them in a way that leaves us unsure we even understand their ‘thinking’. They surprise us with choices like choosing not to turn off when explicitly asked to. Meanwhile, we are simultaneously training them to act as ‘agents’ that interact with the real world.

What we are essentially doing is building a super intelligence that can act autonomously, while simultaneously building robots that are faster, stronger, more agile, and fully programmable by us… or by an AI. Let’s just pause for a moment and think about these two technologies working together. It’s hard not to construct a dystopian vision of the future when we watch these technologies collide.

And the reality is that we have not built an air-gap. We don’t have a kill switch. We are essentially heading down a path to having super-intelligent AI ignoring our commands while operating robots and machines that will make us feeble in comparison (in intelligence, strength, and mobility).

When our intelligence compared to AI’s is equivalent to a chimpanzee’s intelligence compared to ours, how will this super-intelligence treat us? This is not hyperbole; it’s a real question we should be thinking about. If today’s rather simplistic LLMs are already choosing to ignore our commands, what makes us think a super-intelligent AI will listen to or reason with us?

All is well and good when our interests align, but I don’t see any evidence that a self-interested AI will necessarily share the interests of the intelligent monkeys that we are. And the fact that we’re building this super-intelligence out in the wild gives us reason to pause and wonder what will become of humanity in an age of super-intelligent AI.

Seamless AI text, sound, and video

It’s only 8 seconds long, but this clip of an old sailor could easily be mistaken for real:

And beyond looking real, here is what Google’s new Flow video production platform can do:

Body movement, lip movement, objects moving naturally under gravity: we now have the technology to create some truly incredible videos. On the one hand, we have amazing opportunities to be creative and expand the capabilities of our own imaginations. On the other hand, we are entering a world of deepfakes and misinformation.

Such is the case with most technologies: they can be used well and they can be used poorly. Those using them well will amaze us with imagery and ideas long stuck in people’s heads without a way to express them. Those using them poorly will anger and enrage us. They will confuse us and make it difficult to discern fake news from real.

I am both excited and horrified by the possibilities.

The Right Focus

When I wrote ‘Google proof vs AI proof‘, I concluded, “We aren’t going to AI-proof schoolwork.

While we were successful in Google-proofing assignments by creating questions that were not easily answerable using a Google search, we simply can’t create research-based questions that will stump a Large Language Model.”

I’ve also previously said that ‘You can’t police it‘. In that post I stated,

“The first instinct with combating new technology is to ban and/or police it: No cell phones in class, leave them in your lockers; You can’t use Wikipedia as a source; Block Snapchat, Instagram, and TikTok on the school wifi. These are all gut reactions to new technologies that frame the problem as policeable… Teachers are not teachers, they aren’t teaching, when they are being police.”

It comes down to a simple premise:

Focus on what we ‘should do’ with AI, not what we ‘shouldn’t do’.

Outside the classroom, AI is being used everywhere by almost everyone: programming and creating scripts to reduce workload, writing email responses, planning vacations, taking notes in meetings, creating recipes. So the question isn’t whether we should use AI in schools but what we should use it for.

The simple line that I see starts with the question I would ask about using an encyclopedia, a calculator, or a phone in the classroom: “How can I use this tool to foster or enhance thinking, rather than have the tool do the thinking for the student?”

Because like I said back in 2010,

“A tool is just a tool! I can use a hammer to build a house and I can use the same hammer on a human skull. It’s not the tool, but how you use it that matters.”

Ultimately our focus needs to be on what we can and should use any tool for, and AI isn’t an exception.

Civilization and Evolution

Evolution is a slow process: small changes over thousands and millions of years. I’m not thinking about bacteria becoming antibiotic resistant or moths changing colour over time to match their environment. I’m thinking about modern humans (Homo sapiens), who emerged approximately 300,000 years ago. Sure, certain traits like lactose tolerance evolved approximately 5,000–10,000 years ago in some populations, but for the most part we are a heck of a lot like our ancestors of 100,000 years ago. Taller due to better nutrition, but otherwise pretty much the same.

And when we think about civilization as we know it, we are really talking about the last 2,500–3,000 years… and yet we are the same humans who lived as nomads and hunter-gatherers for tens of thousands of years before that. In other words, we have not evolved to live in the societies we currently live in.

We didn’t evolve to live mostly indoors, away from nature and out of sunlight for most of our day. We didn’t evolve to use artificial light at night before going to bed at hours well past dark. We didn’t evolve to do shift work, or to sit at a desk all day.

We didn’t evolve to work for made up currencies so that we could go to buildings where we buy food that is over-processed, over-sweetened, and filled with empty calories. We didn’t evolve to spend time in front of screens that distract and overstimulate us.

We are simple but very intelligent animals who have not evolved much at all since we lived in small communities where we knew everyone, knew what to fear, and knew how to protect ourselves from danger.

Yet we now live surrounded by people we don’t know, and we are triggered by stresses we did not evolve to handle: constant debt, stressful work environments, information overload, time pressures, social comparison, choice overload, conflicting ideologies, environmental noise and hazards, and social disconnection.

We live in a state of overstimulation, stress, and distraction that we have not evolved to cope with. Then we identify diagnoses to tell us how we are broken, how we don’t fit in, and why we struggle. Maybe it’s the societies we have built that are broken? Maybe we evolutionarily do not belong in the social, technological, and societal structures we’ve created?

Maybe, just maybe, we are trying to live our best lives in an environment we were not designed for. Our modern civilizations are not well equipped to meet the needs of our primitive evolution… We have built ‘advanced’ cages and put ourselves in zoos that are nothing like the environment we are supposed to live in. And we don’t realize that all the things we think are broken about us are actually things that are broken about this fake environment we’ve trapped ourselves in.

And so we spend hours exercising, moving around weights that don’t need to be moved, meditating to empty our minds and seek presence and peace. We spend hours playing or cheering on sports teams so that we can have camaraderie with a small community. We spend thousands of dollars on camping equipment so that we can commune with nature. And some people take drugs or alcohol to escape the zoos and cages that we feel trapped in.

Maybe we’ve built our civilizations in ways that have not meaningfully considered our evolutionary needs.

I don’t ever ‘want to’ see ‘wanna’

Dear Siri,

I love speech to text. When I’m on the go I want to just speak into my phone and not bother typing. This is such a handy way to get words into a text or email with minimal effort. But this is the age of AI and it’s time to grow up and get a little more intelligent.

I don’t ever ‘want to’ say ‘wanna’.

“I don’t ever wanna see that word as a result of you dictating what I say.” (Like you just did.)

Listen to me, Siri. I know my diction isn’t perfect. I know I don’t always enunciate the double ‘t’ in ‘want to’. But after I’ve gone to Settings and created a text replacement shortcut from ‘wanna’ to ‘want to’;

After I’ve corrected the text you’ve dictated from ‘wanna’ to ‘want to’ hundreds of times… can you learn this simple request?

Don’t ever assume I want to say wanna!

Please.

Phone Presence

I’m writing this on my phone. My laptop is only 20 feet away and I’d definitely write faster on it, yet here I am on the couch, tapping away with one finger at about a quarter of the pace of typing on a real keyboard. I could use voice-to-text, but I find that I am not as reflective when I speak rather than type my thoughts. If I were writing more than a few hundred words, I’d probably head to my laptop, but I’ve gotten very used to writing my blog on my phone and will continue to do so most days.

Phones have become an essential part of our environment, one we live in as well as on. We’ve developed a sort of autopoiesis: a living system that maintains and renews itself by regulating its own composition and maintaining its own boundaries. It’s a symbiotic relationship in which we feed the phone time and energy, and it perpetuates itself by giving us information, connections, entertainment, and other functions.

Our phones help dictate how we interact with our environment and how the environment interacts with us. Phones have become ‘our environment’ that pulls us away from being present in the world beyond our phones. Case in point, there is a very high probability that you are choosing to read this on your phone.

I, for one, spend too much time on my phone. I am slowly learning to change that. I’m not checking email into the night, and I have all email notifications turned off. I am going to start keeping my phone on the counter, instead of in my pocket, for periods of time in the evening. And I’m going to continue to keep my phone on ‘Do Not Disturb’ for most of the day, with my family and a handful of closest friends having the ability to ping me while it’s in this mode.

If I’m honest, I will still live in my ‘phone environment’ a fair bit, but I want more choice about when and how much time I live in its presence.

AI text in images just keeps getting better

One of the biggest challenges with AI image generation is text. A new model, Ideogram 3.0 out of a Toronto startup, seems to have cracked the code. I wanted to try it out and so here were my two free prompts and their responses:

Prompt: Create an ad for a blog titled ‘Daily-Ink’ by David Truss.
The blog is about daily musings, education, technology, and the future, and the ad should look like a movie poster

Prompt: Create an ad for a blog titled ‘Daily-Ink’ by David Truss.
The byline is, “Writing is my artistic expression. My keyboard is my brush. Words are my medium. My blog is my canvas. And committing to writing daily makes me feel like an artist.”

While the second, far wordier prompt was less accurate, I can say that just three short months ago no AI image model would have come close to this. And this is coming out of a startup, not even one of the big players.

I didn’t even play with the styles and options, or suggest these in my prompts.

As someone who creates AI images almost daily, I can say that there has been a lot of frustration around trying to include text… but that now seems to be a short-lived complaint. We are on a very fast track to this being a non-issue across almost all tools.

Side note: The word that keeps coming to mind for me is convergence. That would be my word for 2025. Everything is coming together, images, text, voice, robotics, all moving us closer and closer to a world where ‘better’ happens almost daily.

Wonder and Speculation

Pillars under the pyramids, megaliths at the 12,000–16,000-year-old Göbekli Tepe, ancient Egyptian granite vases so precise that they would be challenging to reproduce even with modern equipment… it seems that every time we look a little further into the history of humanity we uncover yet another unexplained and unexpected mystery. There is so much more we don’t know about the origins of humanity.

And with the mystery comes some pretty far-fetched speculation. From giants to aliens to portals, imaginations run wild. I find it both exciting and frustrating. There are so many amazing new scientific discoveries, and then there are ideas that masquerade as insightful discoveries while being nothing more than crazy speculations based on extrapolations and circumstance.

It gets tiring listening to people share their wild, unsupported claims when there is so much intrigue in the actual facts. Let’s marvel at what we know. And sure, speculate as wildly as you want. But we don’t need to invent proof of aliens or use the size of sculptures and heavy rocks to make claims about giants. There’s already enough to marvel at.

Not emergence but convergence

In my post yesterday, ‘Immediate Emergence – Are we ready for this?’, I said, “Think about how fast we are going to see the emergence of intelligent ‘beings’ when we combine the brightest Artificial Intelligence with robotics…” and continued with, “Within the next decade, you’ll order a robot, have it delivered to you, and out of the box it will be smarter than you, stronger than you, and have more mobility and dexterity than you.”

On the technology front, a new study, ‘Measuring AI Ability to Complete Long Tasks’, proposes “measuring AI performance in terms of the length of tasks AI agents can complete. We show that this metric has been consistently exponentially increasing over the past 6 years, with a doubling time of around 7 months. Extrapolating this trend predicts that, in under five years, we will see AI agents that can independently complete a large fraction of software tasks that currently take humans days or weeks.”

More from the article:

…by looking at historical data, we see that the length of tasks that state-of-the-art models can complete (with 50% probability) has increased dramatically over the last 6 years.

If we plot this on a logarithmic scale, we can see that the length of tasks models can complete is well predicted by an exponential trend, with a doubling time of around 7 months.
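The arithmetic behind that claim is easy to sketch. Here is a minimal back-of-the-envelope calculation: the 7-month doubling time comes from the quoted study, while the function name and the five-year projection are just my own illustration of what continued doubling implies.

```python
def projected_growth(months, doubling_time_months=7):
    """Multiplier on completable task length after `months` of
    continued exponential doubling (per the study's reported trend)."""
    return 2 ** (months / doubling_time_months)

# One doubling time doubles the task length, as expected.
print(projected_growth(7))        # 2.0

# Over five years (60 months), ~8.6 doublings compound to
# roughly a 380x increase in task length.
print(round(projected_growth(60)))  # 380
```

So a task horizon of, say, one hour today would extrapolate to weeks of autonomous work by decade’s end, which is exactly the “month-long projects” claim the study makes.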

And in conclusion:

If the trend of the past 6 years continues to the end of this decade, frontier AI systems will be capable of autonomously carrying out month-long projects. This would come with enormous stakes, both in terms of potential benefits and potential risks.

When I was reflecting on this yesterday, I was thinking about the emergence of new intelligent ‘beings’, and how quickly they will arrive. With information like this, plus the links to robotics improvements I shared, I’m feeling very confident that my prediction of super intelligent robots within the next decade is well within our reach.

But my focus was on these beings ‘emerging suddenly’. Now I’m realizing that while we are already seeing dramatic improvements, we aren’t suddenly going to see these brilliant robots. It’s going to be a fast but not a sudden transformation. We are going to see dumb-like-Siri models first, where we make a request and get a related but useless follow-up. For instance, the first time you say, “Get me a coffee,” to your robot butler Jeeves, you might get a bag of grounds delivered to you rather than a cup of coffee made the way you like it… without Jeeves asking you to clarify the task, even though you wanting a bag of coffee doesn’t make sense.

These relatively smart, yet still dumb AI robots are going to show up before the super-intelligent ones do. So this isn’t really about a fast emergence; rather, it’s about convergence: robotics, AI intelligence, processing speed, and AI’s EQ (not just IQ) all advancing exponentially at the same time… with ‘potential benefits and potential risks’.

Questions will start to arise as these technologies converge: How much power do we want to give these super-intelligent ‘beings’? Will they have access to all of the conversations we hold in front of them? Will they have purchasing power, access to our email, the ability to make and change our plans for us without asking? Will they help us raise our kids?

Not easy questions to answer, and with the convergence of all these technologies at once, not a long time to answer these tough questions either.

Immediate Emergence – Are we ready for this?

I have two daughters, both very bright, both with a lot of common sense. They work hard and have demonstrated that when they face a challenge they can think critically and are also smart enough to ask for advice rather than make poor decisions… and like every other human being, they started out as needy blobs that relied 100% on their parents for everything. They couldn’t feed themselves or take care of themselves in any way, shape, or form. Their development took years.

Think about how fast we are going to see the emergence of intelligent ‘beings’ when we combine the brightest Artificial Intelligence with robotics like this and this. Within the next decade, you’ll order a robot, have it delivered to you, and out of the box it will be smarter than you, stronger than you, and have more mobility and dexterity than you.

Are we ready for this?

We aren’t developing progressively smarter children, we are building machines that can outthink and outperform us in many aspects.

“But they won’t have the wisdom of experience.”

Actually, we are already working on that, “Microsoft and Swiss startup Inait announced a partnership to develop AI models inspired by mammalian brains… The technology promises a key advantage: unlike conventional AI systems, it’s designed to learn from real experiences rather than just existing data.” Add to this the Nvidia Omniverse where robots can do millions of iterations and practice runs in a virtual environment with real world physics, and these mobile, agile, thinking, intelligent robots are going to be immediately out-of-the-box super beings.

I don’t think we are ready for what’s coming. I think the immediate emergence, within the next decade, of super-intelligent, agile robots that can learn, adapt, and both mentally and physically outperform us will be so transformative that we will need to rethink everything: work, the economy, politics (and war), and even relationships. This will drastically disrupt the way we live our lives and the way we engage and interact with each other and with these new, intelligent beings. We aren’t building children that will need years of training; we are building the smartest, most agile beings the world has ever seen.