Tag Archives: future

The quest for food

I’m on holidays and I’ve had the privilege of watching a few sunrises over the ocean. Before the sun rises, when the day has brightened but the glare doesn’t yet get in the way, birds nosedive for small fish feeding in the turmoil of the ocean as waves crash near the shore. I’m reminded of another privilege we all have: we don’t have to spend most of our day seeking food.

These diving birds must constantly be on the move, seeking their next meal. Food is life, and the quest for food takes up a significant part of most birds’ and mammals’ days. We don’t have to do that. We have the luxury of grocery stores, restaurants, refrigerators, and ways to store food without it going bad. Much of our innovation, and the convenience that follows, comes from our ability to spend precious time on something other than the quest for food.

But it’s not just about innovation and convenience; it’s also about creativity. I think we are on the threshold of a new era of creativity. AI and robotics are going to move us into an era of greater innovation and convenience, and ultimately give us more precious time to design, create, and be artistically inspired.

The quest for food will be replaced by the quest for self-expression. A new chapter is about to be written… it will feel much more like fiction than reality.

The greatest threat to mankind

I recently wrote about the Top Risks of 2024, which were, in order of concern:

  • The United States versus itself
  • The Middle East on the brink
  • Partitioned Ukraine

Any of these three risks could have dire consequences for the stability of global politics and global trade, and could fuel conflicts far beyond the borders of the countries mentioned.

These are imminent dangers that leave the rest of the world feeling like pawns on a chessboard filled with ‘other’ power pieces making all the strategic moves. But there is one danger on the geopolitical chessboard that I think will become the biggest threat we face in the near future, and that’s the pawns themselves. Not the powerful pieces, but rather a rogue ‘nobody’.

While people fear Artificial Intelligence and the rise of AI robots, what I fear is rogue humans using AI with harmful intent. The future will permit individuals with evil intentions to have too much power. It comes down to two well-known adages: information is power, and power corrupts.

The problem isn’t a rogue leader, or a rogue country; it’s a rogue individual with too much information and too much power. A perfect example? See #5 in this article: ‘Why we’ll never actually destroy the last samples of smallpox’:

5) We could always recreate smallpox from genetic information

One could argue that in the information and genetics age, nothing really dies forever. It just dies until the technology to resurrect it appears. And for smallpox, that time is now.

The technology is here. And so is the necessary information: the complete DNA sequences of roughly 50 smallpox samples are available to the general public. This means that people could make smallpox in the lab. “Someone could if they wished recreate live virus from scratch just from that public information.”

We are less than a decade away from one intelligent crackpot, working in his or her (more likely an incel ‘his’) basement lab, creating or recreating a deadly virus and having it spread COVID-19 style across the globe.

We are 15-20 years away from some crackpot scientist developing a nuclear bomb from parts and resources ordered online… without ever raising red flags to warn of his intentions.

The greatest threat to mankind isn’t wealthy people, politicians, and powerful countries; it’s one individual with malice in his heart and access to more knowledge, information, and power than anyone should ever have.

It’s already here!

Just yesterday morning I wrote:

Robots will be smarter, stronger, and faster than humans not after years of programming, but simply after the suggestion that the robot try something new. Where do I think this is going, and how soon will we see it? I think Arthur C. Clarke was right… the most daring prophecies seem laughably conservative.

Then last night I found this post by Zain Khan on LinkedIn:

🚨 BREAKING: OpenAI just made intelligent robots a reality

It’s called Figure 01 and it’s built by OpenAI and robotics company Figure:

  • It’s powered by an AI model built by OpenAI
  • It can hear and speak naturally
  • It can understand commands, plan, and carry out physical actions

Watch the video below to see how realistic its speech and movement abilities are. The ability to handle objects so delicately is stunning.

Intelligent robots aren’t a decade away. They’re going to be here any day now.

This video, shared in the post, is mind-blowingly impressive!

This is just the beginning… we are moving exponentially fast into a future that is hard to imagine. Last week I would have guessed we were 5-10 years away from this, and it’s already here! Where will we really be with AI robotics 5 years from now?

(Whatever you just guessed is probably laughably conservative.)

The most daring prophecies

In the early 1950s Arthur C. Clarke said,

“If we have learned one thing from the history of invention and discovery, it is that, in the long run — and often in the short one — the most daring prophecies seem laughably conservative.”

As humans we don’t understand exponential growth. The well-known wheat or rice on a chessboard problem is a perfect example:

If a chessboard were to have wheat placed upon each square such that one grain were placed on the first square, two on the second, four on the third, and so on (doubling the number of grains on each subsequent square), how many grains of wheat would be on the chessboard at the finish?

The answer: 2⁶⁴ − 1, or 18,446,744,073,709,551,615… which is over 2,000 times the annual world production of wheat.
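To make the doubling concrete, here is a minimal Python sketch of the calculation. It is pure arithmetic, with no outside data assumed: it sums the grains square by square and shows how lopsided exponential growth is, with the second half of the board dwarfing the first.

```python
# Wheat on a chessboard: 1 grain on the first square, doubling on each
# of the 64 squares. The grand total works out to 2^64 - 1.
grains_per_square = [2**square for square in range(64)]

total = sum(grains_per_square)
first_half = sum(grains_per_square[:32])

print(f"Whole board:       {total:,} grains")        # 18,446,744,073,709,551,615
print(f"First 32 squares:  {first_half:,} grains")   # only ~4.29 billion
print(f"Last square alone: {grains_per_square[-1]:,} grains")

assert total == 2**64 - 1
```

The last square by itself holds more grain than the previous 63 squares combined, which is exactly the kind of runaway growth our intuition keeps underestimating.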

All this to say that we are ill-prepared to understand how quickly AI and robotics are going to change our world.

1. Robots are being trained to interact with the world through verbal commands. They used to be trained to do specific tasks like ‘find one of a set of items in a bin and pick it up’. While the robot was sorting, it could only sort the specific items it had been trained on. Now, there are robots that sense and interpret the world around them.

“The chatbot can discuss the items it sees—but also manipulate them. When WIRED suggests Chen ask it to grab a piece of fruit, the arm reaches down, gently grasps the apple, and then moves it to another bin nearby.

This hands-on chatbot is a step toward giving robots the kind of general and flexible capabilities exhibited by programs like ChatGPT. There is hope that AI could finally fix the long-standing difficulty of programming robots and having them do more than a narrow set of chores.”

The article goes on to say,

“The model has also shown it can learn to control similar hardware not in its training data. With further training, this might even mean that the same general model could operate a humanoid robot.”

2. Robot learning is becoming more generalized: ‘Eureka! NVIDIA Research Breakthrough Puts New Spin on Robot Learning’.

“A new AI agent developed by NVIDIA Research that can teach robots complex skills has trained a robotic hand to perform rapid pen-spinning tricks — for the first time as well as a human can…

Eureka has also taught robots to open drawers and cabinets, toss and catch balls, and manipulate scissors, among other tasks.

The Eureka research, published today, includes a paper and the project’s AI algorithms, which developers can experiment with using NVIDIA Isaac Gym, a physics simulation reference application for reinforcement learning research. Isaac Gym is built on NVIDIA Omniverse, a development platform for building 3D tools and applications based on the OpenUSD framework. Eureka itself is powered by the GPT-4 large language model.”

3. Put these ideas together, then fast-forward the training exponentially. We have robots that understand what we are asking them, and that are trained and positively reinforced in a virtual physics lab. These robots practice how to do a new task before actually doing it… not practicing a few times, or even a few thousand times, but running millions of practice simulations in seconds. Just like the chess engines that learned the game by playing themselves millions of times, we will have robots that, when asked to do a task, ‘rehearse’ it over and over again in a simulator, then do the task for the first time as if they had already done it perfectly thousands of times. (A toy sketch of this rehearse-then-act idea follows below.)
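None of the articles above publish training code, so the following is only a toy illustration of the rehearse-then-act idea: attempt a task many times in a cheap simulation, keep the best-scoring attempt, and only then act once for real. The task, the stand-in simulator, and the random-search strategy are all invented for this sketch; real systems such as Eureka use LLM-written reward functions and large-scale reinforcement learning inside a physics simulator like Isaac Gym.

```python
import random

# Toy task: pick a launch force (0..10) so a projectile lands near a target.
TARGET_DISTANCE = 7.3

def simulate(force: float) -> float:
    """Very rough stand-in for a physics simulator: returns landing distance."""
    return force * 1.05 + random.gauss(0, 0.05)  # simple linear model plus noise

def score(force: float) -> float:
    """Negative distance error: higher is better."""
    return -abs(simulate(force) - TARGET_DISTANCE)

def rehearse(num_trials: int) -> float:
    """'Practice' the task many times in simulation before ever acting."""
    best_force, best_score = 0.0, float("-inf")
    for _ in range(num_trials):
        candidate = random.uniform(0, 10)
        candidate_score = score(candidate)
        if candidate_score > best_score:
            best_force, best_score = candidate, candidate_score
    return best_force

if __name__ == "__main__":
    force = rehearse(num_trials=100_000)  # rehearse 100,000 times before acting once
    print(f"First real attempt: force {force:.3f}, "
          f"lands at ~{simulate(force):.2f} (target {TARGET_DISTANCE})")
```

The point of the sketch is the shape of the loop, not the numbers: the expensive trial and error happens in simulation, so the very first real-world attempt already benefits from all of that practice.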

In our brains, we think about learning a new task as a clunky, slow experience. Learning takes time. When a robot can think and interact in our world while simultaneously rehearsing new tasks millions of times virtually in the blink of an eye, we will see them leap forward in capabilities at a rate that will be hard to comprehend.

Robots will be smarter, stronger, and faster than humans not after years of programming, but simply after the suggestion that the robot try something new. Where do I think this is going, and how soon will we see it? I think Arthur C. Clarke was right…

…the most daring prophecies seem laughably conservative.

AI, Content and Context

I found this quote very interesting. On his podcast, Diary of a CEO, Steven Bartlett is talking to Daniel Priestley, and Steven mentions that OpenAI’s Sam Altman believes we are not far away from a one-person company making a billion dollars, using AI rather than other employees. Daniel pushes back and says that while that might happen, a more likely and more repeatable scenario would be a five-person team. Then he says this:

“AI is very good at content but not context. And having 5 people who share a context and create a context, together… then the content can happen using AI. AI without that context, it doesn’t know what to do, so it doesn’t have any purpose.”

Daniel Priestley

Like I shared before, “The true power and potential of AI isn’t what AI can do on its own, it’s what humans and AI can do together.”

This idea of context versus content seems to capture the ingredients that make this marriage so ideal. It’s noticeable when generating AI images, as I’ve done for quite some time, creating images to go with this blog. For example, I’ll describe something like a guy on a treadmill, and maybe one of the four images created will have the guy facing backwards on the treadmill – content correct, but not context. As well, AI is largely unaware of its own biases, which humans can more easily see. These context errors are common.

But just as AI will be better teaming with humans, humans are also better when they team with other humans rather than working solo. We miss context too, and we struggle to see our own biases, unless we have people around us to both share and create the context.

The best innovations of the future are going to come from small teams of people providing rich contexts for AI. And while AI will get better at both context and content, it’s going to be a while before AI can do both of these really well. It’s what AI and humans can do together that will be really exciting to see.

Google proof vs AI proof

I remember the fear mongering when Google revolutionized search. “Students are just going to Google their answers, they aren’t going to think for themselves.” Then came the EDU-gurus proclaiming, “If students can Google the answers to your assignments, then the assignments are the problem! You need to Google-proof what you are asking students to do!”

In reality this was a good thing. It provoked a lot of reworking of assignments, and promoted more critical thinking, first from teachers, then from students. It is possible to be creative and ask a question that requires thoughtful and insightful responses not easily found on Google, or that yields so few useful search results that it is easy to tell whether a student created the work themselves or copied it from the internet.

That isn’t the case for Artificial Intelligence. AI is different. I can think of a question that would get no useful search responses on Google yet be completely answerable using AI. Unless you are watching students do the work with pen and paper in front of you, you really don’t know if the work is AI-assisted. So what next?

Ultimately the answer is two-fold:

How do we bolster creativity and productivity with AND without the use of Artificial Intelligence?

This isn’t a ‘make it Google proof’ kind of question. It’s more challenging than that.

I got to hear John Cohn, recently retired from MIT, speak yesterday. There are two things he said that kind of stuck with me. The first was a loose quote from a Business Review article: ‘AI won’t take over people, but people with AI are going to take over people.’

This is insightful. The reality is that the people who are going to be successful and influential in the future are those who understand how to use AI well. So we would be doing students a disservice not to bring AI into the classroom.

The other thing he said that really struck me was, “If you approach AI with fear, good things won’t happen, and the bad things still will.”

We can’t police its use, but we can guide students to use it appropriately… and effectively. I really like this AI Acceptable Use Scale shared by Cari Wilson:

This is one way to embrace AI rather than fear and avoid it in classrooms. Again I ask:

How do we bolster creativity and productivity with AND without the use of Artificial Intelligence?

One way is to question the value of homework. Maybe it’s time to revisit our expectations of what is done at home. Give students work that bolsters creativity at home, and keep the real work of school at school. But whether or not homework is something that changes, what we do need to change is how we think about embracing AI in schools, and how we help students navigate its appropriate, effective, and even ethical use. If we don’t, then we really aren’t preparing our kids for today’s world, much less the future.

We aren’t going to AI proof schoolwork.

Beyond a simple blood test

I have to go get some blood work done. It’s time to check a few levels, and make sure that I’m in the healthy range. I have an issue maintaining my Vitamin D levels, and cholesterol issues run in my family… and also in me. So I’ll head to the medical center and line up this weekend for them to poke me in the arm and fill a few small vials of blood. In a week or so I’ll get a call from my doctor after she looks at the results.

I wonder how far away we are from being able to do this from home? Prick your finger, put a drop of blood on a sensor, and get a full spectrum of results. Add to this a health monitor on your smartwatch that tracks heart rate and rhythm, as well as activity, and you’ve got a full-service health monitoring system that can be preemptive and preventative. And add to this a toilet that analyzes your urine, and you’ve got a regular no-line-up doctor’s visit without ever leaving your home.

Cholesterol levels seem high? The monitor will tell me, and my doctor. Vitamin D levels low? My watch tells me to double my morning dose. Imagine your watch telling you that you should go to the hospital because it detected a heart arrhythmia that is consistent with the early signs of a heart attack. Wouldn’t that be so much better than not knowing?

The possibilities of what you can do to improve your health with a system like this are incredible. Is this possible in the next 5 years? I think so! It’s going to be amazing to see the way technology enhances our healthcare system in the next decade. We will be able to regularly and continuously monitor things that used to require several doctor and clinic visits on a yearly (or longer) basis. And when you do go to the doctor, your complaints about your health won’t just be anecdotal; you’ll have streams of data to share. This is exciting for everyone except hypochondriacs… these poor people are going to have a lot more to worry about!

almost free

The internet needs a makeover. I remember when, if I wanted to make a fun certificate or a personalized card, I could just do a Google search and find a free resource. Now when you do it, the top 10+ sites found in the search all require you to register, log in, sign up, or sign in with Google or Facebook. Don’t worry, your first 30 days are free, or you’ll need to put your email in to get promotional spam sent to your inbox.

I get it. It costs money to run a website. I know, I pay to keep DavidTruss running and thanks to some affiliate links I’ve made about $35-$40 over the past 15 years. Add another $15 if you include royalties from my ebook, which I give away free everywhere except on Amazon where I couldn’t lower the price. This is my sarcastic way of saying that I don’t make any money off of my blogging and I actually have to pay to keep it running. That’s fine for me, I don’t do this for an income, but most websites need a flow of cash coming in to keep them going.

But no matter how you look at it, things on the internet have gotten a lot less free over the past decade. My blog’s Facebook page doesn’t make it onto most people’s stream because I don’t pay to boost the posts. Twitter, since it became X, has been all about seeing paid-for blue check profiles and my stream feels like it caters to ‘most popular or outlandish tweets’ rather than people I actually enjoy following. Even news sites are riddled with flashy advertising and gimmicky headlines to keep your eyes on those ads.

There needs to be a way to keep things ‘almost free’ on the internet, while not inundating us with attention-seeking ads, or making us register and give away our email address to be spammed by promotional messages we don’t want. I think it will come. I think there will be an opportunity to choose between ads or micropayments. Read the kind of news you want or listen to a podcast for a penny. Like what you read/hear? Give a dime, or a quarter, or even a dollar if you really like it. There are already people donating this way during live events on YouTube, Twitch, and other similar sites; it just needs to get to the point where it’s happening on any web page. I’d rather pay a tiny bit than be inundated with ads. It’s coming, but not before it gets worse… we now have ads coming to Netflix and Prime. They want us to pay MORE to avoid them. The model is still about exploitation rather than building a fan base. Subscriptions will dominate for a while and so will models that upsell you to reduce the clutter… but eventually, eventually we will see the return of the ‘almost free’.

Conversational AI interface

A future where we have conversations with an AI in order to have it do tasks for us is closer than you might think. Watch this clip (full video here):

Imagine starting your day by asking an AI ‘assistant’, “What are the emails I need to look at today?” Then saying something like, “Respond to the first 2 for me, and draft an answer for the 3rd one for me to approve. And remind me what meetings I have today.” All while on the treadmill, or while shaving, or even showering.

The AI calculates the calories you used on the treadmill, tracks what you eat, and even gives suggestions like, “You might want to add some protein to your meal. May I suggest eggs? You also have some tuna in the pantry, or a protein shake if you don’t want to make anything.”

Later, you have the ever-present AI in the room with you during a meeting, and afterwards you ask it, “Send us all a message with a summary of what we discussed, and include a ‘To Do’ list for each of us.”

Sitting for too long at work? The AI could suggest standing up, or using the stairs instead of the elevator. Hungry? Maybe your AI assistant recommends a snack because it read your sugar levels off of your watch’s health monitor, and it does this just as you are starting to feel hungry.

It could even remind you to call your mom, or do something kind for someone you love… and do so in a way that helps you feel good about it, not like it’s nagging you.

All this and a lot more without looking at a screen, or typing information into a laptop. This won’t be ready by the end of 2024, but it’s closer to 2024 than it is to 2030. This kind of futuristic engagement with a conversational AI is really just around the corner. And those ready to embrace it are really going to leave those who don’t behind, much like someone insisting on horse travel in an era of automobiles. Are you ready for the next level of AI?

What are you outsourcing?

Alec Couros recently came to Coquitlam and gave a presentation on “The Promise and Challenges of Generative AI”. In this presentation he had a quote, “Outsource tasks, but not your thinking.”

I just googled it and found this LinkedIn post by Aodan Enright. (Worth reading but not directly connected to the use of AI.)

It’s incredible what is possible with AI… and it’s just getting better. People are starting businesses, writing books, creating new recipes, and, in the case of students, writing essays and doing homework. I just saw a TikTok of a student who goes to their lecture and records it, runs the recording through AI to pull out all the salient points, then has the AI tool create cue cards and test questions to help them study for upcoming tests. That’s pretty clever.

What’s also clever, but perhaps not wise, is having an AI tool write an essay for you, then running the essay through a paraphraser that breaks up the AI structure of the essay so that it isn’t detectable by AI detectors. If you have the AI use the vocabulary of a high school student, and throw in a couple of run-on sentences, then you’ve got an essay that AI detectors, and teachers too, would be hard pressed to flag as cheating. However, what have you learned?

This is a worthy point to think about, and to discuss with students: How do you use AI to make your tasks easier, but not do the thinking for you?

Because if you are using AI to do your thinking, you are essentially learning how to make yourself redundant in a world of ever-smarter AI. Don’t outsource your thinking… Keep your thinking cap on!