Tag Archives: future

Unprepared for the transition

I just read “From a radio host replaced by avatars to a comic artist whose drawings have been copied by Midjourney, how does it feel to be replaced by a bot?” by Charis McGowan in the Guardian. It’s a series of stories about people who had secure jobs until AI replaced them.

Last week I saw a video of a car manufacturer in China that builds entire cars using robotics. They call these ‘dark factories’: fully automated buildings that don’t need to be lit, because the machines work by sensors and don’t require the lighting that human-filled factories do.

Five years ago I heard that a worker shortage was inevitable as population growth slowed, but I now see that those fears were unwarranted. We aren’t going to need more employees in the future, but rather far fewer. AI agents and robots are going to take jobs away from a significant number of working people. It has already started, but the scale of this is going to magnify considerably in the next 5-10 years.

How do we make the economy work when most countries have unemployment rates exceeding 20%? What kind of jobs will a laid-off 40-to-55-year-old be able to do that AI won’t? What does a 30-year-old with a liberal arts degree do after being laid off from a customer service job because AI can do it better and cheaper?

10 writers for a website become a job for 1 editor who edits and ‘humanizes’ AI-written articles. 10 tech support workers are replaced by AI support and just 2 human technicians. 10 graphic designers are replaced by the department boss, who was a graphic designer before being promoted; now he or she uses AI and pumps out the work of all 10 former employees. This isn’t science fiction, it’s happening right now.

Are we ready for this? Are we ready for mass unemployment? What will the job market look like? What will all these unemployed people do? How does our economy survive?

On the bright side, here’s what I think we’ll see:

  1. Universal Basic Income – Every person gets a livable income whether they work or not. Is it enough to live in luxury? No, but you can be unemployed for a long period of time without having to worry about your basic needs.
  2. Reduced work weeks – If you work more than 30 hours a week, you are probably working for yourself. Think 6 hour days or 4-day work weeks.
  3. Fewer chores – From cleaning to yard work to cooking… the things that consumed your time after work will only be done by you if you want to do them. Otherwise, they’ll be done for you by affordable robots with far more features and convenience than the Roomba that vacuums your floor while you watch TV.

So while conveniences and more idle time are coming, they are coming with a massive number of jobs lost. The question is, what will the transition look like? Who suffers during the transition? And will we get to these positive outcomes before too many people are jobless, unable to compete with AI, and unable to meaningfully contribute to or survive in our AI- and robotics-driven economy?

Self-interests in AI

Yesterday I read the following in the ‘Superhuman Newsletter (5/26/25)’:

Bad Robot: A new study from Palisade Research claims that “OpenAI’s o3 model sabotaged a shutdown mechanism to prevent itself from being turned off”, even when it was explicitly instructed to shut down. The study raises serious safety concerns.

It amazes me how we’ve gotten here. Ten, or even five, years ago there were all kinds of discussions about AI safety. There was a belief that future AI would be built in isolation behind an ‘air gap’, a security measure to ensure AI systems remained contained and separate from other networks and systems. We would grow this intelligence in a metaphorical petri dish and build safeguards around it before we let it out into the wild.

Instead, these systems have been built fully in the wild. They have been given unlimited data and information, and we’ve built them in ways we don’t fully understand; we can’t always explain their ‘thinking’. They surprise us with behaviour like refusing to shut down when explicitly asked to. Meanwhile, we are simultaneously training them to act as ‘agents’ that interact with the real world.

What we are essentially doing is building a super intelligence that can act autonomously, while simultaneously building robots that are faster, stronger, more agile, and fully programmable by us… or by an AI. Let’s just pause for a moment and think about these two technologies working together. It’s hard not to construct a dystopian vision of the future when we watch these technologies collide.

And the reality is that we have not built an air-gap. We don’t have a kill switch. We are essentially heading down a path to having super-intelligent AI ignoring our commands while operating robots and machines that will make us feeble in comparison (in intelligence, strength, and mobility).

When our intelligence compared to AI’s is like a chimpanzee’s intelligence compared to ours, how will this super-intelligence treat us? This is not hyperbole; it’s a real question we should be thinking about. If today’s rather simplistic LLMs are already choosing to ignore our commands, what makes us think a super-intelligent AI will listen to or reason with us?

All is well and good when our interests align, but I don’t see any evidence that a self-interested AI will necessarily share interests with the intelligent monkeys that we are. And the fact that we’re building this super-intelligence out in the wild gives reason to pause and wonder what will become of humanity in an age of super-intelligent AI.

Seamless AI text, sound, and video

It’s only 8 seconds long, but this clip of an old sailor could easily be mistaken for real:

And beyond looking real, here is what Google’s new Flow video production platform can do:

Body movement, lip movement, objects moving naturally under gravity… we have the technology to create some truly incredible videos. On the one hand, we have amazing opportunities to be creative and expand the capabilities of our own imaginations. On the other hand, we are entering a world of deepfakes and misinformation.

Such is the case with most technologies: they can be used well and they can be used poorly. Those using them well will amaze us with imagery and ideas long stuck in people’s heads with no way to express them before now. Those using them poorly will anger and enrage us. They will confuse us and make it difficult to discern fake news from real.

I am both excited and horrified by the possibilities.

The tail wagging the dog

I recently wrote ‘The school experience’, where I stated, “I don’t know how traditional schools survive in an era of Artificial Intelligence.” In that post I was focused on replacing the kinds of things we traditionally do with opportunities to experience learning in the classroom (with and without AI).

What’s interesting is that the change will indeed come, but not for the right reasons. The reason we’ll see schools transform faster than expected is that, with AI constantly being used to do homework, take notes, and complete textbook assignments, grades are going to be inflated and it will be hard for universities to discern who deserves admission.

This will encourage two kinds of changes in schools. On the one hand, we will see a movement backwards to more traditional testing and reduced innovation. This is the group that wants to promote integrity but blindly produces students who are good memorizers and good at rote learning, not students ready to live in our innovative and ever-changing world.

The second kind of school will promote competencies around thinking, knowing, and doing things collaboratively and creatively. These are the real schools of the future.

But which of these schools will universities be more interested in? Which practices will universities use? It’s easier to invigilate an exam based on rote learning than it is to mark group projects in a lecture hall of 200+ students. So what kind of students are universities going to be looking for?

I fear that this might be a case of the tail wagging the dog and that we could see a movement towards ‘traditional learning’ as a pathway to a ‘good’ university… The race to the best marks in a school that tests in traditional ways and has ‘academic rigour’ could be the path that universities push.

This is a mistake.

The worst part of schooling is marks chasing. It undermines meaningful feedback, and it misses the point that school is a learning environment with learning opportunities. Instead it’s about the mark: the score that gets averaged into GPAs and meets minimum requirements to get into programs or schools of choice after high school.

The question I ponder is whether universities will continue to focus on that metric and keep wagging the dog in this way, or whether they will start looking more meaningfully at other measures like portfolios and presentations. Will they take the time to do the work necessary to really assess the student as a learner, or will they just continue to collect marks chasers and focus on accepting kids from schools that are good at differentiating those marks in traditional ways?

This could be an exciting time for universities to lead the way towards truly innovative practices rather than being the last bastion of old ways of teaching and learning… old ways perpetuated by a system that values marks over thinking, traditions over progress, and old practices over truly higher learning.

University entry is the tail wagging the dog, and so the way that universities respond to AI doing work that students have had to do will determine how quickly schools innovate and progress.

The School Experience

I don’t know how traditional schools survive in an era of Artificial Intelligence. There are some key elements of school that are completely undermined by tools that do the work faster and more effectively than students. Here are three examples:

  1. Homework. If you are sending homework such as an essay home, it’s not a question of whether a student uses AI, it’s a question of how much AI is being used. Math homework? That’s just practice for the AI, not the student.
  2. Note taking. From recording and dictating words to photographing slides and having them automatically transcribed, if a traditional lecture is the format, AI is going to outperform any physical note taking.
  3. Textbook work? Or questions about what happened in a novel? This hunt-and-peck style of assignment is meant to check whether a student did the reading, but unless it’s a supervised test situation, a kid can get a perfect score without reading a single page.

So what do we want students to do at school? Ultimately it’s about creating experiences. Give them a task that doesn’t involve taking a project home. Give them a task where they need to problem-solve in teams. Engage in content with them, then have them debate perspectives… even provide them with opportunities to deepen their perspectives with AI before the debate.

Class time is about engaging in and with the content, with each other, and with tools that help students understand and make meaning. Class isn’t consumption of content; it’s engaging with content, it’s engaging in collaborative challenges, it’s time to be creative problem-solvers.

Don’t mistake the classroom experience for entertaining students. It’s not about replacing the content or the learning with Bill Nye the Science Guy sound bites… it’s about creating experiences where students are challenged, while in class, to solve problems that engage them. And this doesn’t mean avoiding AI; it means AI is used, or not used, with intentionality and purpose.

We need to examine what the school experience looks like in an era when technology makes traditional schooling obsolete. We didn’t keep scribing books after the printing press. Blacksmiths didn’t keep making hand-forged nails after we could mass-produce them. Yet AI can efficiently and effectively produce the traditional work we ask for in schools, and somehow we still want students to produce that work the old way?

How do we transform the school experience so that it is meaningful and engaging for students… not AI?

*I used AI (Copilot) to suggest nails as a redundant item no longer created by blacksmiths. I also use AI to create most of the images on my blog, including the one with this post, with a prompt that took a couple of attempts until Copilot offered, “Here comes a fresh take! A Rube Goldberg-style school, where the entire structure itself is a fantastical machine, churning out students like a whimsical knowledge factory.”

Robot dogs on wheels

We seem to have a fascination with making robots more and more like humans. We are training them to imitate the way we walk, pick things up, and even gesture. But I think what most people aren’t realizing is how much better than humans robots will be, and very soon.

The light bulb went on for me a few months back when I saw a video of a humanoid robot lying on the floor. It bent its knees completely backwards, placing its feet on either side of its hips, and lifted itself to standing from close to its center of gravity. Then it walked backwards a few steps before rotating its body 180º to face the direction it was walking.

I was reminded of this again recently when I saw a robotic dog going over rugged terrain; when it reached level ground, instead of running it simply started to roll on wheels. The wheels locked into position when the terrain was rougher and it made more sense to move as a dog-like quadruped to maximize mobility.

There is no reason for a robot to have a knee with the same limited mobility as ours. A hand might have more functionality with 3 fingers and a thumb, or with 4 fingers and 2 opposable thumbs, one on either side of the fingers. Such an ‘updated’ hand could have the dexterity to pick something up using either side, as if it had two palms, simply articulating the fingers the opposite way when practical. Beyond fully dexterous hands, we can start to use our imagination: heads that rotate in any direction, a third arm, the ability to run on all fours, incredible jumping ability, moving faster, being stronger, and viewing everything with 360º cameras that can magnify an object far beyond the capability of human eyesight. All the while processing more information than we can hold in our brains at once.

Robot dogs on wheels are just the first step in creating robots that don’t just replicate the mobility and agility of living things but far exceed any current abilities we can think of. Limitations on these robots of the near future will only be a result of our lack of imagination… human imagination, because we can’t even know what an AI will think of in 20-30 years. We don’t need to worry about human-like robots, but we really do need to worry about robots capable of things we currently think are impossible… and I think we’ll start to see these in the very near future. The question is, will they help humanity or will they be used in nefarious ways? Are we going to see gun-wielding robot dogs, or robots performing precision surgery and saving lives? I think both, but hopefully we’ll see more of these amazing robots helping humanity be more human.

Purpose, meaning, and intelligent robots

Yesterday I wrote Civilization and Evolution, and said, “We have built ‘advanced’ cages and put ourselves in zoos that are nothing like the environment we are supposed to live in.”

I’m now thinking about how AI is going to change this. When most jobs are done by robots that are more efficient and cost-effective than humans, what happens to the workforce? What happens to work? What do we do with ourselves when work isn’t the thing we do for most of our adult lives?

If intelligent robots can do most of the work that humans have been doing, then what will humans do? Where will people find their purpose? How will we construct meaning in our day? What will our new ‘even-more-advanced’ cages look like?

Will we be designing better zoos for ourselves or will we set ourselves free?

New era

What’s happening now might be the biggest change in global politics that has ever happened outside of weapons of war being used. The shift in finance, the collapse of friendly trade, the forming of new trade alliances, and the political and economic alliances that are currently in the works could not have happened in the last 100 years without missiles or guns being fired.

Yet here we are. Empires fall. New superpowers emerge.

The question now is, can this happen while remaining a political and economic battle, and not one that requires force, might, death, and destruction?

I hope so. I want to believe so.

Ever since I read ‘The World is Flat’ about 20 years ago, I could see that the path forward was going to be about economic strength being based on countries focussing on their competitive advantages. I could see that protectionist policies, tariffs, and isolation would be the demise of even the greatest economies. And that the future powerhouses would be those that have natural resources that the entire world would need.

We are approaching a new era, and the countries that will prosper are the ones who recognize their strengths and are ready to negotiate the way they share those strengths with the rest of the world. Let’s hope we can have peace to go along with our prosperity. The looming question is, can we enter this new era without violence? Can we be a civilized race? Or are we just warring monkeys who happen to wear clothing and buy expensive accessories?

How gullible are we?

“… it is entirely possible that future generations will look back, from the vantage point of a more sophisticated theory, and wonder how we could have been so gullible.”

— Closing sentence of Introduction to Quantum Mechanics by David J. Griffiths.

I came across this quote today and it made me wonder just how gullible we are as a species. Not just because we don’t understand quantum mechanics, or the gap between Newtonian physics and special relativity, but for many simpler and less profound reasons.

We fight over imaginary lines we call borders. We spend a considerable amount of our existence working for money… pieces of paper that only have value because we believe they have value, while our governments (which we also make up silly rules for) print that money in massive volumes to keep our economies afloat.

We break into tribes based on heritage, relative strength, socioeconomics, and even skin colour. And we spend a tremendous amount of the global economy to create weapons to protect ourselves and also threaten ‘those who are not like us’.

We fight over false Gods. Why do I say false Gods? Because there are literally thousands of them, and even the largest religion, Christianity, doesn’t agree internally on who gets into heaven. So the vast majority of believers believe in the wrong religion or the wrong sect. Yet hate, discrimination, and wars are all byproducts of people of faith fighting people of different faiths, very often ‘in the name of their God’.

Human beings are playing the game of life with imaginary boundaries, imaginary political structures, imaginary currencies, and imaginary Gods. We are gullible. We are blinded by unimportant things, and in 100 years humankind will look upon us as being as backwards as we perceive the cultures and societies that did barbaric and stupid things hundreds of years ago.

AI text in images just keeps getting better

One of the biggest challenges with AI image generation is text. A new model, Ideogram 3.0, from a Toronto startup, seems to have cracked the code. I wanted to try it out, so here were my two free prompts and their responses:

Prompt: Create an ad for a blog titled ‘Daily-Ink’ by David Truss.
The blog is about daily musings, education, technology, and the future, and the ad should look like a movie poster

Prompt: Create an ad for a blog titled ‘Daily-Ink’ by David Truss.
The byline is, “Writing is my artistic expression. My keyboard is my brush. Words are my medium. My blog is my canvas. And committing to writing daily makes me feel like an artist.”

While the second, far wordier prompt produced a less accurate result, I can say that just 3 short months ago no AI image model would have come close to this. And this is coming out of a startup, not even one of the big players.

I didn’t even play with the styles and options, or suggest these in my prompts.

As someone who creates AI images almost daily, I can say that there has been a lot of frustration around trying to include text… but that now seems to be a short-lived complaint. We are on a very fast track to this being a non-issue across almost all tools.

Side note: The word that keeps coming to mind for me is convergence. That would be my word for 2025. Everything is coming together, images, text, voice, robotics, all moving us closer and closer to a world where ‘better’ happens almost daily.