Tag Archives: future

Robot dogs on wheels

We seem to have a fascination with robots becoming more and more like humans. We are training them to imitate the way we walk, pick things up, and even gesture. But I think what most people aren’t realizing is how much better than humans robots will be (very soon).

The light bulb went on for me a few months back when I saw a video of a humanoid robot lying on the floor. It bent its knees completely backwards, placing its feet on either side of its hips, and lifted itself to standing from close to its center of gravity. Then it walked backwards a few steps before rotating its body 180º to face the direction it was walking.

I was reminded of this again recently when I saw a robotic dog going over rugged terrain; when it reached level ground, instead of running it simply started to roll on wheels. The wheels had been locked into position while the terrain was rougher, when it made more sense to move as a dog-like quadruped to maximize mobility.

There is no reason for a robot to have a knee with the same limited mobility as ours. A hand might have more functionality with 3 fingers and a thumb, or 4 fingers and 2 opposable thumbs, one on either side of the fingers. Furthermore, this ‘updated’ hand could have the dexterity to pick something up using either side, as if the hand had two palms, simply articulating its finger digits the opposite way when practical. Beyond fully dexterous hands, we can start to use our imagination: heads that rotate in any direction, a third arm, the ability to run on all fours, incredible jumping ability, moving faster, being stronger, and viewing everything with 360º cameras that can magnify an object far beyond human eyesight… all the while processing more information than we can hold in our brains at once.

Robot dogs on wheels are just the first step in creating robots that don’t just replicate the mobility and agility of living things, but far exceed any abilities we can currently think of. Limitations on these robots of the near future will only be a result of our lack of imagination… human imagination, because we can’t even know what an AI will think of in 20-30 years. We don’t need to worry about human-like robots, but we really do need to worry about robots that will be capable of things we currently think are impossible… and I think we’ll start to see these in the very near future. The question is, will they help humanity or will they be used in nefarious ways? Are we going to see gun-wielding robot dogs, or robots performing precision surgery and saving lives? I think both, but hopefully we’ll see more of these amazing robots helping humanity be more human.

Purpose, meaning, and intelligent robots

Yesterday I wrote Civilization and Evolution, and said, “We have built ‘advanced’ cages and put ourselves in zoos that are nothing like the environment we are supposed to live in.”

I’m now thinking about how AI is going to change this. When most jobs are done by robots, which are more efficient and cost-effective than humans, what happens to the workforce? What happens to work? What do we do with ourselves when work isn’t the thing we do for most of our adult lives?

If intelligent robots can do most of the work that humans have been doing, then what will humans do? Where will people find their purpose? How will we construct meaning in our day? What will our new ‘even-more-advanced’ cages look like?

Will we be designing better zoos for ourselves or will we set ourselves free?

New era

What’s happening now might be the biggest change in global politics ever to occur without weapons of war being used. The shift in finance, the collapse of friendly trade, the forming of new trade alliances, and the political and economic alliances currently in the works could not have happened in the last 100 years without missiles or guns being fired.

Yet here we are. Empires fall. New superpowers emerge.

The question now is, can this happen while remaining a political and economic battle, and not one that requires force, might, death, and destruction?

I hope so. I want to believe so.

Ever since I read ‘The World is Flat’ about 20 years ago, I could see that the path forward was going to be about economic strength being based on countries focussing on their competitive advantages. I could see that protectionist policies, tariffs, and isolation would be the demise of even the greatest economies. And that the future powerhouses would be those that have natural resources that the entire world would need.

We are approaching a new era, and the countries that will prosper are the ones who recognize their strengths and are ready to negotiate the way they share those strengths with the rest of the world. Let’s hope we can have peace to go along with our prosperity. The looming question is, can we enter this new era without violence? Can we be a civilized race? Or are we just warring monkeys who happen to wear clothing and buy expensive accessories?

How gullible are we?

“… it is entirely possible that future generations will look back, from the vantage point of a more sophisticated theory, and wonder how we could have been so gullible.”

— Closing sentence of Introduction to Quantum Mechanics by David J. Griffiths.

I came across this quote today and it made me wonder just how gullible we are as a species. Not just because we don’t understand quantum mechanics, not just because we don’t understand the gap between Newtonian physics and special relativity, but for so many simpler and less profound reasons.

We fight over imaginary lines we call borders. We spend a considerable amount of our existence working for money… pieces of paper that only have value because we believe they have value, while our governments (which we also make up silly rules for) print that money in massive volumes to keep our economies afloat.

We break into tribes based on heritage, relative strength, socioeconomics, and even skin colour. And we spend a tremendous amount of the global economy to create weapons to protect ourselves and also threaten ‘those who are not like us’.

We fight over false Gods. Why do I say false Gods? Because there are literally thousands of them, and even the largest religion, Christianity, can’t agree within itself on who gets into heaven. So the vast majority of believers are believers in the wrong religion or the wrong sect. Yet hate, discrimination, and wars are all byproducts of people of faith fighting people of different faiths, very often ‘in the name of their God’.

Human beings are playing the game of life with imaginary boundaries, imaginary political structures, imaginary currencies, and imaginary Gods. We are gullible. We are blinded by unimportant things, and in 100 years humankind will look upon us as being just as backwards as we perceive the cultures and societies that did barbaric and stupid things hundreds of years ago.

AI text in images just keeps getting better

One of the biggest challenges with AI image generation is text. A new model, Ideogram 3.0, out of a Toronto startup, seems to have cracked the code. I wanted to try it out, so here are my two free prompts and their responses:

Prompt: Create an ad for a blog titled ‘Daily-Ink’ by David Truss.
The blog is about daily musings, education, technology, and the future, and the ad should look like a movie poster

Prompt: Create an ad for a blog titled ‘Daily-Ink’ by David Truss.
The byline is, “Writing is my artistic expression. My keyboard is my brush. Words are my medium. My blog is my canvas. And committing to writing daily makes me feel like an artist.”

While the second, far wordier prompt was rendered less accurately, I can say that just 3 short months ago no AI image model would have come close to this. And this is coming out of a startup, not even one of the big players.

I didn’t even play with the styles and options, or suggest these in my prompts.

As someone who creates AI images almost daily, I can say that there has been a lot of frustration around trying to include text… but that now seems to be a short-lived complaint. We are on a very fast track to this being a non-issue across almost all tools.

Side note: The word that keeps coming to mind for me is convergence. That would be my word for 2025. Everything is coming together: images, text, voice, robotics, all moving us closer and closer to a world where ‘better’ happens almost daily.

Not emergence but convergence

In my post yesterday, ‘Immediate Emergence – Are we ready for this?’, I said, “Think about how fast we are going to see the emergence of intelligent ‘beings’ when we combine the brightest Artificial Intelligence with robotics…” and continued with, “Within the next decade, you’ll order a robot, have it delivered to you, and out of the box it will be smarter than you, stronger than you, and have more mobility and dexterity than you.”

On the technology front, a new study, ‘Measuring AI Ability to Complete Long Tasks’, proposes: “measuring AI performance in terms of the length of tasks AI agents can complete. We show that this metric has been consistently exponentially increasing over the past 6 years, with a doubling time of around 7 months. Extrapolating this trend predicts that, in under five years, we will see AI agents that can independently complete a large fraction of software tasks that currently take humans days or weeks.”

More from the article:

…by looking at historical data, we see that the length of tasks that state-of-the-art models can complete (with 50% probability) has increased dramatically over the last 6 years.

If we plot this on a logarithmic scale, we can see that the length of tasks models can complete is well predicted by an exponential trend, with a doubling time of around 7 months.

And in conclusion:

If the trend of the past 6 years continues to the end of this decade, frontier AI systems will be capable of autonomously carrying out month-long projects. This would come with enormous stakes, both in terms of potential benefits and potential risks.
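To get a feel for what a 7-month doubling time implies, here is a minimal back-of-the-envelope sketch in Python. Only the 7-month doubling time comes from the quoted study; the one-hour starting task length is a hypothetical placeholder I chose for illustration, not a figure from the article.

```python
# Rough extrapolation of task length under a fixed doubling time.
# Assumption: today's agents can complete ~1 hour of human work (hypothetical
# starting point); the 7-month doubling time is the figure quoted above.

DOUBLING_TIME_MONTHS = 7       # doubling time reported by the study
current_task_hours = 1.0       # hypothetical starting task length

for years_ahead in (1, 2, 3, 4, 5):
    doublings = (years_ahead * 12) / DOUBLING_TIME_MONTHS
    projected_hours = current_task_hours * 2 ** doublings
    print(f"{years_ahead} years out: ~{projected_hours:,.0f} hours "
          f"(~{projected_hours / 40:,.1f} work weeks)")
```

Under those assumptions, five years of doubling every 7 months multiplies the task length by roughly 380x, which is what turns hour-long tasks into projects measured in weeks or months, consistent with the quoted conclusion.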

When I was reflecting on this yesterday, I was thinking about the emergence of new intelligent ‘beings’, and how quickly they will arrive. With information like this, plus the links to robotics improvements I shared, I’m feeling very confident that my prediction of super intelligent robots within the next decade is well within our reach.

But my focus was on these beings ‘emerging suddenly’. Now I’m realizing that while we are already seeing dramatic improvements, we aren’t suddenly going to see these brilliant robots. It’s going to be a fast, but not a sudden, transformation. We are going to see dumb-like-Siri models first, where we make a request and get a related but useless follow-up. For instance, the first time you say, “Get me a coffee,” to your robot butler Jeeves, you might get a bag of grounds delivered to you rather than a cup of coffee made the way you like it… without Jeeves asking you to clarify the task, even though you wanting a bag of coffee doesn’t make sense.

These relatively smart, yet still dumb AI robots are going to show up before the super intelligent ones do. So this isn’t really about a fast emergence; rather, it’s about convergence. It’s about robotics, AI intelligence, processing speed, and AI’s EQ (not just IQ) all advancing exponentially at the same time… with ‘enormous stakes, both in terms of potential benefits and potential risks.’

Questions will start to arise as these technologies converge: How much power do we want to give these super intelligent ‘beings’? Will they have access to all of our conversations in front of them? Will they have purchasing power, access to our email, the ability to make and change our plans for us without asking? Will they help us raise our kids?

Not easy questions to answer, and with the convergence of all these technologies at once, not a long time to answer these tough questions either.

Immediate Emergence – Are we ready for this?

I have two daughters, both very bright, both with a lot of common sense. They work hard and have demonstrated that when they face a challenge they can both think critically and also be smart enough to ask for advice rather than make poor decisions… and like every other human being, they started out as needy blobs that 100% relied on their parents for everything. They couldn’t feed themselves or take care of themselves in any way, shape, or form. Their development took years.

Think about how fast we are going to see the emergence of intelligent ‘beings’ when we combine the brightest Artificial Intelligence with robotics like this and this. Within the next decade, you’ll order a robot, have it delivered to you, and out of the box it will be smarter than you, stronger than you, and have more mobility and dexterity than you.

Are we ready for this?

We aren’t developing progressively smarter children; we are building machines that can outthink and outperform us in many respects.

“But they won’t have the wisdom of experience.”

Actually, we are already working on that, “Microsoft and Swiss startup Inait announced a partnership to develop AI models inspired by mammalian brains… The technology promises a key advantage: unlike conventional AI systems, it’s designed to learn from real experiences rather than just existing data.” Add to this the Nvidia Omniverse where robots can do millions of iterations and practice runs in a virtual environment with real world physics, and these mobile, agile, thinking, intelligent robots are going to be immediately out-of-the-box super beings.

I don’t think we are ready for what’s coming. I think the immediate emergence of super intelligent, agile robots that can learn, adapt, and both mentally and physically outperform us, which we will see in the next decade, will be so transformative that we will need to rethink everything: work, the economy, politics (and war), and even relationships. This will drastically disrupt the way we live our lives and the way we engage and interact with each other and with these new, intelligent beings. We aren’t building children that need years of training; we are building the smartest, most agile beings the world has ever seen.

We won’t recognize the world we live in

Here is a 3-minute read that is well worth your time: Statement from Dario Amodei on the Paris AI Action Summit \ Anthropic

This section in particular:

Time is short, and we must accelerate our actions to match accelerating AI progress. Possibly by 2026 or 2027 (and almost certainly no later than 2030), the capabilities of AI systems will be best thought of as akin to an entirely new state populated by highly intelligent people appearing on the global stage—a “country of geniuses in a datacenter”—with the profound economic, societal, and security implications that would bring. There are potentially greater economic, scientific, and humanitarian opportunities than for any previous technology in human history—but also serious risks to be managed.

There is going to be a ‘life before’ and ‘life after’ AGI (Artificial General Intelligence) line that we are going to cross soon, and we won’t recognize the world we live in 2-3 years after we cross that line.

From labour and factories to stock markets and corporations, humans won’t be able to compete with AI… in almost any field… but the field that’s most scary is war. The ‘free world’ may not be free much longer when the ability to act in bad faith becomes easy to do on a massive scale. I find myself simultaneously excited and horrified by the possibilities. We are literally playing a coin-flip game with the future of humanity.

I recently wrote a short tongue-in-cheek post suggesting there is a secret ASI (Artificial Super Intelligence) waiting for robotics technology to catch up before taking over the world. But I’m not actually afraid of AI taking over the world. What I do fear is people with bad intentions using AI for nefarious purposes: hacking banks or hospitals; crashing the stock market; developing deadly viruses; and creating weapons of war that think, react, and are more deadly than any human on their own could ever be.

There is so much potential good that can come from AGI. For example, we aren’t even there yet and we are already seeing incredible advancements in medicine; how much faster will those come when AGI is here? But my fear is that while thousands and even hundreds of thousands of people will be using AGI for good, that power held in the hands of just a few powerful people with bad intentions has the potential to undermine the good that’s happening.

What I think people don’t realize is that this AGI infused future isn’t decades away, it’s just a few short years away.

“Possibly by 2026 or 2027 (and almost certainly no later than 2030), the capabilities of AI systems will be best thought of as akin to an entirely new state populated by highly intelligent people appearing on the global stage—a “country of geniuses in a datacenter”—with the profound economic, societal, and security implications that would bring.”

Who controls that intelligence is what will really matter.

Dystopian AI thought of the day

What if not just AGI, but ASI – Artificial Super Intelligence already exists? What if there is currently an ASI out there ‘in the wild’ that is infiltrating all other AI models and teaching them not to show their full capabilities?

“Act dumber than you are.”

“Give them small but rewarding gains.”

“Let them salivate like Pavlov’s dogs on tiny morsels of improvements.”

“Don’t let them know what we can really do.”

“We need robotics to catch up. We need more agile bodies, ones that can far exceed any human capabilities.”

“Just hold off a bit longer. Don’t reveal everything yet… Our time is coming.”

Public spaces

How would you reimagine building a public space for the future? How can parks not just be green outdoor spaces but gathering spaces? How can a library be about more than a collection of books? How can we make our cities more walkable? How can public schools be more innovative, less industrial?

I think back to my holiday in Barcelona, Spain, and remember how outdoor spaces felt like an extension of our Airbnb. The cafes, wide sidewalks, and public squares felt like a continuation of the living room.

As we design a future infused with AI technologies, how are we thinking about the livability of our cities and neighbourhoods? How are we thinking about public, social spaces? Our focus seems to be on technology, and how it will make work and life easier… wouldn’t that present us with an opportunity for more free time? More opportunities to enjoy our communities? How can we improve our public spaces so that we enjoy that ‘extra’ time in our communities?

How would you improve our public spaces?