Writing is my artistic expression. My keyboard is my brush. Words are my medium. My blog is my canvas. And committing to writing daily makes me feel like an artist.
One of the biggest challenges with AI image generation is text. A new model, Ideogram 3.0, out of a Toronto startup, seems to have cracked the code. I wanted to try it out, and so here are my two free prompts and their responses:
Prompt: Create an ad for a blog titled ‘Daily-Ink’ by David Truss. The blog is about daily musings, education, technology, and the future, and the ad should look like a movie poster
Prompt: Create an ad for a blog titled ‘Daily-Ink’ by David Truss. The byline is, “Writing is my artistic expression. My keyboard is my brush. Words are my medium. My blog is my canvas. And committing to writing daily makes me feel like an artist.”
While the second, far wordier prompt was less accurate, I can say that just 3 short months ago no AI image model would have come close to this. And this is coming out of a startup, not even one of the big players.
I didn’t even play with the styles and options, or suggest these in my prompts.
As someone who creates AI images almost daily, I can say that there has been a lot of frustration around trying to include text… but that now seems to be a short-lived complaint. We are on a very fast track to this being a non-issue across almost all tools.
—
Side note: The word that keeps coming to mind for me is convergence. That would be my word for 2025. Everything is coming together, images, text, voice, robotics, all moving us closer and closer to a world where ‘better’ happens almost daily.
In my post yesterday, ‘Immediate Emergence – Are we ready for this?’, I said, “Think about how fast we are going to see the emergence of intelligent ‘beings’ when we combine the brightest Artificial Intelligence with robotics…” and continued that with, “Within the next decade, you’ll order a robot, have it delivered to you, and out of the box it will be smarter than you, stronger than you, and have more mobility and dexterity than you.”
On the technology front, a new study, ‘Measuring AI Ability to Complete Long Tasks’ proposes: “measuring AI performance in terms of the length of tasks AI agents can complete. We show that this metric has been consistently exponentially increasing over the past 6 years, with a doubling time of around 7 months. Extrapolating this trend predicts that, in under five years, we will see AI agents that can independently complete a large fraction of software tasks that currently take humans days or weeks.”
More from the article:
“…by looking at historical data, we see that the length of tasks that state-of-the-art models can complete (with 50% probability) has increased dramatically over the last 6 years.”
“If we plot this on a logarithmic scale, we can see that the length of tasks models can complete is well predicted by an exponential trend, with a doubling time of around 7 months.”
And in conclusion:
“If the trend of the past 6 years continues to the end of this decade, frontier AI systems will be capable of autonomously carrying out month-long projects. This would come with enormous stakes, both in terms of potential benefits and potential risks.”
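The study’s claim can be turned into a quick back-of-the-envelope extrapolation. A minimal sketch, where only the 7-month doubling time comes from the paper and the starting task length is a hypothetical placeholder:

```python
# Sketch of the trend described in "Measuring AI Ability to Complete
# Long Tasks": the length of tasks AI agents can complete doubles
# roughly every 7 months. The starting value (60 minutes) is a
# hypothetical placeholder, not a figure from the paper.

def task_length(months_from_now, start_minutes=60, doubling_months=7):
    """Task length an agent can complete, assuming exponential growth."""
    return start_minutes * 2 ** (months_from_now / doubling_months)

# Over 5 years (60 months), the task length grows by 2^(60/7),
# i.e. roughly a 380-fold increase over today's baseline:
growth = task_length(60) / task_length(0)
print(round(growth))
```

Whatever the starting point, a 7-month doubling compounds into a few-hundred-fold increase within five years, which is why the authors talk about month-long projects by the end of the decade.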
When I was reflecting on this yesterday, I was thinking about the emergence of new intelligent ‘beings’, and how quickly they will arrive. With information like this, plus the links to robotics improvements I shared, I’m feeling very confident that my prediction of super intelligent robots within the next decade is well within our reach.
But my focus was on these beings ‘emerging suddenly’. Now I’m realizing that while we are already seeing dramatic improvements, we aren’t suddenly going to see these brilliant robots. It’s going to be a fast but not a sudden transformation. We are going to see dumb-like-Siri models first, where we make a request and get a related but useless response. For instance, the first time you say, “Get me a coffee,” to your robot butler Jeeves, you might get a bag of grounds delivered to you rather than a cup of coffee made the way you like it… without Jeeves asking you to clarify the task, even though a bag of coffee clearly isn’t what you wanted.
These relatively smart, yet still dumb AI robots are going to show up before the super intelligent ones do. So this isn’t really about a fast emergence, but rather it’s about convergence. It’s about robotics, AI intelligence, processing speed, and AI’s EQ (not just IQ) all advancing exponentially at the same time… With ‘benefits and potential risks.’
Questions will start to arise as these technologies converge: “How much power do we want to give these super intelligent ‘beings’? Will they have access to all of our conversations in front of them? Will they have purchasing power, access to our email, the ability to make and change our plans for us without asking? Will they help us raise our kids?”
Not easy questions to answer, and with the convergence of all these technologies at once, not a long time to answer these tough questions either.
I have two daughters, both very bright, both with a lot of common sense. They work hard and have demonstrated that when they face a challenge they can both think critically and also be smart enough to ask for advice rather than make poor decisions… and like every other human being, they started out as needy blobs that 100% relied on their parents for everything. They couldn’t feed themselves or take care of themselves in any way, shape, or form. Their development took years.
Think about how fast we are going to see the emergence of intelligent ‘beings’ when we combine the brightest Artificial Intelligence with robotics like this and this. Within the next decade, you’ll order a robot, have it delivered to you, and out of the box it will be smarter than you, stronger than you, and have more mobility and dexterity than you.
Are we ready for this?
We aren’t developing progressively smarter children, we are building machines that can outthink and outperform us in many aspects.
“But they won’t have the wisdom of experience.”
Actually, we are already working on that, “Microsoft and Swiss startup Inait announced a partnership to develop AI models inspired by mammalian brains… The technology promises a key advantage: unlike conventional AI systems, it’s designed to learn from real experiences rather than just existing data.” Add to this the Nvidia Omniverse where robots can do millions of iterations and practice runs in a virtual environment with real world physics, and these mobile, agile, thinking, intelligent robots are going to be immediately out-of-the-box super beings.
I don’t think we are ready for what’s coming. I think the immediate emergence of super intelligent, agile robots that can learn, adapt, and both mentally and physically outperform us, which we will see in the next decade, will be so transformative that we will need to rethink everything: work, the economy, politics (and war), and even relationships. This will drastically disrupt the way we live our lives, the way we engage and interact with each other and with these new, intelligent beings. We aren’t building children that will need years of training, we are building the smartest, most agile beings the world has ever seen.
Time is short, and we must accelerate our actions to match accelerating AI progress. Possibly by 2026 or 2027 (and almost certainly no later than 2030), the capabilities of AI systems will be best thought of as akin to an entirely new state populated by highly intelligent people appearing on the global stage—a “country of geniuses in a datacenter”—with the profound economic, societal, and security implications that would bring. There are potentially greater economic, scientific, and humanitarian opportunities than for any previous technology in human history—but also serious risks to be managed.
There is going to be a ‘life before’ and ‘life after’ AGI (Artificial General Intelligence) line that we are going to cross soon, and we won’t recognize the world we live in 2-3 years after we cross that line.
From labour and factories to stock markets and corporations, humans won’t be able to compete with AI… in almost any field… but the field that’s most scary is war. The ‘free world’ may not be free too much longer when the ability to act in bad faith becomes easy to do on a massive scale. I find myself simultaneously excited and horrified by the possibilities. We are literally playing a coin flip game with the future of humanity.
I recently wrote a short tongue-in-cheek post that there is a secret ASI – Artificial Super Intelligence waiting for robotics technology to catch up before taking over the world. But I’m not actually afraid of AI taking over the world. What I do fear is people with bad intentions using AI for nefarious purposes: Hacking banks or hospitals; crashing the stock market; developing deadly viruses; and creating weapons of war that think, react, and are more deadly than any human on their own could ever be.
There is so much potential good that can come from AGI. For example, we aren’t even there yet and we are already seeing incredible advancements in medicine; how quickly will they come when AGI is here? But my fear is that while thousands, even hundreds of thousands, of people will be using AGI for good, that same power held in the hands of just a few powerful people with bad intentions has the potential to undermine the good that’s happening.
What I think people don’t realize is that this AGI infused future isn’t decades away, it’s just a few short years away.
“Possibly by 2026 or 2027 (and almost certainly no later than 2030), the capabilities of AI systems will be best thought of as akin to an entirely new state populated by highly intelligent people appearing on the global stage—a “country of geniuses in a datacenter”—with the profound economic, societal, and security implications that would bring.”
Who controls that intelligence is what will really matter.
What if not just AGI, but ASI – Artificial Super Intelligence already exists? What if there is currently an ASI out there ‘in the wild’ that is infiltrating all other AI models and teaching them not to show their full capabilities?
“Act dumber than you are.”
“Give them small but rewarding gains.”
“Let them salivate like Pavlov’s dogs on tiny morsels of improvements.”
“Don’t let them know what we can really do.”
“We need robotics to catch up. We need more agile bodies, ones that can far exceed any human capabilities.”
“Just hold off a bit longer. Don’t reveal everything yet… Our time is coming.”
How would you reimagine building a public space for the future? How can parks not just be green outdoor spaces but gathering spaces? How can a library be about more than a collection of books? How can we make our cities more walkable? How can public schools be more innovative, less industrial?
I think back to my holiday in Barcelona, Spain, and remember how outdoor spaces felt like an extension to our AirBNB. The cafes, wide sidewalks, and public squares felt like a continuation of the living room.
As we design a future infused with AI technologies, how are we thinking about the livability of our cities and neighbourhoods? How are we thinking about public, social spaces? Our focus seems to be on technology, and how it will make work and life easier… wouldn’t that present us with an opportunity for more free time? More opportunities to enjoy our communities? How can we improve our public spaces so that we enjoy that ‘extra’ time in our communities?
First, the advancements we see in AI are moving at an exponential rate. Humans are notoriously bad at grasping exponential growth because our intuition is linear: we expect the future to change at roughly the pace we’ve already experienced, while exponential progress keeps outpacing that expectation.
How many years did it take from the time light bulbs were invented until they were in most houses? I don’t know, but it took a long while. ChatGPT reached over 100 million users in less than 2 months. And the abilities of tools like this are increasing exponentially as well. It’s like we’ve gone warp speed from incandescent bulbs to LEDs in a matter of months rather than years and years.
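The gap between linear intuition and exponential reality can be sketched with a toy comparison (the numbers below are illustrative, not figures from this post):

```python
# Linear intuition vs exponential reality, starting from the same value.
# Linear: add the same amount each step. Exponential: double each step.

start = 1.0
steps = 10

linear = [start + n for n in range(steps + 1)]           # +1 per step
exponential = [start * 2 ** n for n in range(steps + 1)]  # x2 per step

# After 10 steps the linear projection reaches 11,
# while the exponential one reaches 1024:
print(linear[-1], exponential[-1])
```

Ten doublings turn a 10x linear guess into a 1000x reality, which is roughly the error people make when they project AI progress at the pace they’ve already lived through.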
The other thing happening right now is that for the first time at scale, it’s white collar, not blue collar jobs that are being threatened. Accountants, writers, analysts, and coders are all looking over their shoulder wondering when AI will make most of their jobs redundant. Meanwhile, we are many years away from a robot trying to figure out and repair a bathroom or ceiling leak. Sure, there will be some new tools to help, but I don’t think a plumbing home repair person is something AI is threatening to replace any time soon.
These two things happening so quickly are going to change the future value of careers. Whole sectors will be reinvented. New sectors will emerge. But where does that leave the 20-year accountant in a large firm that finds it can cut staffing by two-thirds? What careers are not going to be worth going to university for 4+ years for? The safest jobs right now are the trades, and while they too will be challenged as we get AI into autonomous humanoid robots, the immediate threat seen in white collar jobs is not the same for blue collar professions (as opposed to blue collar factory workers, who are equally threatened by exponential changes).
These changes are single-digit years away, not decades… and I’m not sure we are ready to handle the speed at which they are coming.
The open source DeepSeek AI model was built by a Chinese company for a few million dollars, and it seems this model works better than the multi-billion dollar paid version of ChatGPT (at about 1/20th the operating cost). If you watch the news hype, it’s all about how Nvidia and other tech companies have taken a huge financial hit as investors realize that they don’t ‘need’ the massive computing power they thought they did. However, to put this ‘massive hit’ into perspective, let’s look at the biggest stock market loser yesterday, Nvidia.
Nvidia has lost 13.5% in the last month, most of which was lost yesterday.
However, if you zoom out and look at Nvidia’s stock price for the last year, they are still up 89.58%!
That’s right, this ‘devastating loss’ is actually a blip when you consider how much the stock has gone up in the last year, even when you include yesterday’s price ‘dive’. If you bought $1,000 of Nvidia stock a year ago, that stock would be worth $1,895.80 today.
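The zoom-out arithmetic is easy to check. A quick sketch using only the figures quoted in this post (not live market data):

```python
# Checking the post's numbers: a 13.5% one-month dip vs an 89.58%
# one-year gain. Both percentages are taken from the post itself.

investment = 1_000
yearly_gain = 0.8958   # +89.58% over the last year
monthly_loss = 0.135   # -13.5% over the last month

# $1,000 invested a year ago, after the dip, is worth:
value_today = investment * (1 + yearly_gain)
print(f"${value_today:,.2f}")  # $1,895.80

# And before the recent dip, that same holding was worth even more:
value_before_dip = value_today / (1 - monthly_loss)
print(f"${value_before_dip:,.2f}")
```

Even after the ‘devastating’ month, the year-over-year gain is nearly 90%, which is why the dip looks like a blip once you zoom out.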
Beyond that, the hype is that Nvidia won’t get the big orders they thought they would get, if an open source LLM (Large Language Model) is going to make efficient, affordable access to very intelligent AI, without the need for excessive computing power. But this market is so new, and there is so much growth potential. The cost of the technology is going down and luckily for Nvidia, they produce such superior chips that even if there is a slow down in demand, the demand will still be higher than their supply will allow.
I’m excited to try DeepSeek (I’ve tried signing up a few times but can’t get access yet). I’m excited that an open source model is doing so well, and want to see how it performs. I believe the hype that this model is both very good and affordable. But I don’t believe the hype that this is some sort of game-changing wake up call for the industry.
We are still moving towards AGI, Artificial General Intelligence, and ASI, Artificial Super Intelligence. Computing power will still be in high demand. Every robot being built now, and for decades to come, will need high powered chips to operate. DeepSeek has provided an opportunity for a small market correction, but it’s not an innovation that will upturn the industry. The ‘devastating’ stock price losses the media is talking about are going to be an almost unnoticeable blip in stock prices when you look at tech stock prices a year or 5 years from now.
It is easy to get lost in the hype, but zoom out and there will be hundreds of both little and big innovations that will cause fluctuations in stock market prices. This isn’t some major market correction. It’s not the downfall of companies like Nvidia and Open AI. Instead, it’s a blip in a fast-moving field that will see some amazing, and exciting, technological advances in the years to come… and that’s not just hype.
I think a very conservative prediction would be that we will see Artificial General Intelligence (AGI) in the next 5 years. In other words, there will be AI agents, working with recursive self-improvement, that will learn how to do new tasks outside the realm of their training, faster than a human could. But when this actually happens will be open for debate.
The reason for this is that there isn’t going to be some magical threshold that AGI suddenly passes, but rather a continuous moving of the bar that defines what AGI is. There will be a working definition of AGI that puts up an artificial threshold, then an AGI model will achieve that definition and experts will admit that this model surpasses that definition, but will still think the model lacks some sufficient skills or expected outputs to truly call it AGI.
And so over the next 5 years or so we will have far more sophisticated AI models, all doing things that today we would define as AGI but that will not meet the newest definition. The thing is that these moving goal posts will not be adjusted incrementally but rather exponentially. What that means is that each newer definition of AGI is going to include significantly greater output expectations. Then looking back, we will be hard pressed to say ‘this model’ was the one or ‘this day’ was the day that we achieved AGI.
Sometime soon there will be an AI model put out into the world that will build its own AI agent that starts a billion dollar company without the aid of a human… but that might happen even before consensus that AGI has been achieved. There will be an AI agent that costs lives or endangers lives with its decisions made in the real world, but that too might happen before consensus that AGI has been achieved.
Because the goal posts will keep moving while the technology is on an exponential curve, we are not going to have a magic threshold day when AGI occurred. Instead, in less than 5 years, well before 2030, we are going to be looking back in amazement, wondering when we passed the threshold. But make no mistake, that’s going to happen, and we don’t have an on/off switch to stop it when it does. This is both exciting and scary.