Tag Archives: innovation

It’s all happening so fast

I subscribe to superhuman.ai, a daily email newsletter. Most days I peruse it for about 3-5 minutes before work, primarily focussing on the ‘Today in AI’ section. It’s fascinating to see how the field of AI is rapidly advancing. On weekends the email shifts topics. Saturday is a robotics special and Sundays are focused on scientific and technological breakthroughs outside of AI.

Here are some videos shared in yesterday’s robotics-focused Superhuman update:

Then here are five sections from today’s email. Two related to technological advances:

Star Power: France just took a massive lead in the race to near-limitless clean energy. The country’s CEA WEST Tokamak reactor has shattered China’s record, maintaining a hydrogen plasma reaction for 22 minutes and 17 seconds flat. While it’s not commercial-ready yet, it’s a major leap in fusion research and has huge implications for the development of ITER, the world’s largest fusion project, in the south of France. 

Two-way Street: Chinese researchers have built the world’s first two-way brain-computer interface (BCI). Unlike conventional BCIs that just decode brain signals, this system creates a feedback loop where both the brain and the machine learn from each other and improve at working together over time.

And 3 related to health and longevity:

Cancer Counter: Scientists at Memorial Sloan Kettering have reported promising results from a small trial that used personalized mRNA vaccines to fight pancreatic cancer. Out of the 16 participants who were administered the vaccine, at least half generated long-lasting cancer-fighting T cells, with early results suggesting fewer recurrences. Researchers estimate these T cells could persist for years, offering hope for a future breakthrough.

Fountain of Youth: Japanese bioengineers claim to have found the ‘rewind’ button for aging. Noticing that older cells were considerably larger in size than younger ones, the scientists discovered that they were packed in a layer of the AP2A1 protein. This led them to conclude that blocking the protein could reverse aging — a potential breakthrough for anti-aging treatments. We’ll believe it when we see it.

Follicle Fix: Research teams around the world may be getting closer to reversing hair loss with a host of innovative new treatments. They’re currently testing a sugar-based gel that could stimulate blood supply to hair follicles, potentially offering a simple, affordable cure for baldness. Also, a new topical gel, PP405, aims to “wake up” dormant hair follicle stem cells, while exosome-based therapies show promise in regrowing hair naturally.

Two years ago, I would have said we were 15-20 years away from intelligent robots living among us. Now I think wealthy people will have these in their houses before the end of the year, and they will become even more affordable and mainstream before the end of 2026.

Two years ago I actually believed and shared that my kids would be the first generation to routinely live past 100 years old, barring accidents and rare diagnoses that haven’t yet been cured. Now I can actually conceive of this being true for my generation.

I thought Universal Basic Income was going to be a thing in the 2040s or 2050s… Now I look at how intelligent LLMs are, and how advanced robots are, and I wonder how we’ll make it through the 2020s without needing to financially support both white collar and blue collar workers who are pushed out of jobs by AI and robots.

The speed of innovation is accelerating and right now we are just scratching the surface of AI-inspired innovation. What happens when an AI with the equivalent knowledge of 100,000 plus of our most intelligent humans starts to make intuitive connections between entire bodies of knowledge from science, technology, politics, economics, culture, nature, and even art?

In 1985 the movie Back to the Future took us forward to 2015, where there were hovering skateboards. Forty years on, rather than thirty, we still haven’t gotten there. But look at the progress in robotics from 2015 to 2025. This is going to advance exponentially from 2025 to 2030.

If the Back to the Future movie were made today, and the future Marty McFly went to was 2055, I bet the advancements of our imagination would be underwhelming compared to what would actually be possible. While I don’t think we will be there yet with space travel and things like a Mars space station, I think the innovations here on earth will far exceed what we can think of right now.

It’s all happening so fast!

Don’t believe the hype

The open source DeepSeek AI model has been built by the Chinese for a few million dollars, and it seems this model works better than the multi-billion dollar paid version of ChatGPT (at about 1/20th the operating cost). If you watch the news hype, it’s all about how Nvidia and other tech companies have taken a huge financial hit as investors realize that they don’t ‘need’ the massive computing power they thought they did. However, to put this ‘massive hit’ into perspective, let’s look at the biggest stock market loser yesterday, Nvidia.

Nvidia has lost 13.5% in the last month, most of which was lost yesterday.

However, if you zoom out and look at Nvidia’s stock price for the last year, they are still up 89.58%!

That’s right, this ‘devastating loss’ is actually a blip when you consider how much the stock has gone up in the last year, even when you include yesterday’s price ‘dive’. If you bought $1,000 of Nvidia stock a year ago, that stock would be worth $1,895.80 today.
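The $1,895.80 figure above is just the one-year return applied to the original position; a quick sketch of that arithmetic, using the 89.58% figure quoted above:

```python
# Growth of a $1,000 position given the stated 1-year return of 89.58%.
initial = 1_000.00
one_year_return = 0.8958  # +89.58% over the year, as quoted above

final_value = initial * (1 + one_year_return)
print(f"${final_value:,.2f}")  # → $1,895.80
```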

Beyond that, the hype is that Nvidia won’t get the big orders they thought they would get if an open source LLM (Large Language Model) can provide efficient, affordable access to very intelligent AI without the need for excessive computing power. But this market is so new, and there is so much growth potential. The cost of the technology is going down, and luckily for Nvidia, they produce such superior chips that even if there is a slowdown in demand, the demand will still be higher than their supply will allow.

I’m excited to try DeepSeek (I’ve tried signing up a few times but can’t get access yet). I’m excited that an open source model is doing so well, and want to see how it performs. I believe the hype that this model is both very good and affordable. But I don’t believe the hype that this is some sort of game-changing wake up call for the industry.

We are still moving towards AGI, Artificial General Intelligence, and ASI, Artificial Super Intelligence. Computing power will still be in high demand. Every robot being built now, and for decades to come, will need high powered chips to operate. DeepSeek has provided an opportunity for a small market correction, but it’s not an innovation that will upturn the industry. These ‘devastating’ stock price losses the media is talking about are going to be an almost unnoticeable blip when you look at tech stock prices a year or 5 years from now.

It is easy to get lost in the hype, but zoom out and there will be hundreds of both little and big innovations that will cause fluctuations in stock market prices. This isn’t some major market correction. It’s not the downfall of companies like Nvidia and Open AI. Instead, it’s a blip in a fast-moving field that will see some amazing, and exciting, technological advances in the years to come… and that’s not just hype.

The cat’s out of the bag

I find it mind-boggling that just 5 years ago the big AI debate was whether or not we would let AI out into the wild. The idea was that AI would be sort of ‘boxed’ and within our ability to ‘contain’… but we have somehow decided to just bypass this question and set AI free.

Here is a trifecta of things that tell me the cat is out of the bag.

  1. NVIDIA puts out the Jetson Orin Nano, a tiny AI computer that doesn’t need to be connected to the cloud.
  2. Robots like Optimus from Tesla are already being sold.
  3. AIs are proving that they can self-replicate.

That’s it. That’s all. Just extrapolate what you want to from these three ‘independent’ developments. Put them together, stir in 5 years of technological advancement. Add a good dose of open source access and think about what’s possible… and beyond possible to contain.

Exciting, and quite honestly, scary!

Positively Subversive Leadership

A colleague in my district was recently complaining that when she went to her staff to see what their needs were, they said they needed new desks. This is an equipment update that:

  1. Will require a fair percentage of the school’s expendable budget.
  2. Won’t help the school progress or move forward with its practice.

As such, my colleague was disappointed that this is what she’d be spending so much of her budget on.

I suggested that she only purchase tables for two students and not individual desks. This meets the need of replacing old desks but invites, if not almost forces, a different way of running a classroom. Students must be in pairs or fours, the tables fold to get out of the way, and they are usually on wheels which invites them being moved regularly.

Instead of randomly replacing desks in different classrooms, she could put all the new desks in one classroom, and distribute the still good desks in that room to where they are needed. Who gets the new double desks? The teacher that wants to have them. If there is more than one, she can have discussions about how the teachers will use them to help her decide.

Transitioning to 2-student tables provides so many opportunities for collaboration, and a teacher excited about using them will change their practice even if they already grouped students together often. This takes a school need and turns it into an opportunity. If someone else is asking for those same tables and the budget isn’t there for them, that invites a conversation about how single desks can be used together to achieve the same goals.

It’s not fun having to use a lot of your budget to replace old items, but this can provide a chance to upgrade in a way that invites a different approach. This is a way to take an expense and turn it into an opportunity.

Dear Grocery Store

It’s time to enter the 21st century and provide intelligent search in your stores.

Create an App that is a shopping list that automatically sorts products by the aisle they are in. Have sensors that the app detects so that once in that aisle, you can see right where the product can be found.
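The sorting part is trivial once the store exposes its aisle data. A minimal sketch in Python; the product-to-aisle map here is invented for illustration, since a real app would pull it from the store’s inventory system:

```python
# Hypothetical product-to-aisle map; a real app would fetch this
# from the store's inventory database.
AISLE_OF = {
    "bread": 1,
    "taco shells": 4,
    "salsa": 4,
    "ground beef": 12,
    "milk": 14,
}

def sort_by_aisle(shopping_list):
    """Order the list by aisle so the shopper walks the store once."""
    # Unknown items sort to the end (aisle 999) rather than crashing.
    return sorted(shopping_list, key=lambda item: AISLE_OF.get(item, 999))

print(sort_by_aisle(["milk", "taco shells", "bread", "ground beef"]))
# → ['bread', 'taco shells', 'ground beef', 'milk']
```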

Yes, I know you make a lot of extra money selling stuff because people are searching, can’t find their products, and see other products to pick up, but here’s a better way to do it!

You have an extreme amount of data about what people buy together. Instead of relying on them accidentally stumbling upon something as they search, use the data you have:

1. Put things together that people have bought together on their past shopping experiences. There are hundreds of millions of orders you have already tracked on every receipt you’ve ever printed and stored. Yes, that might be challenging for someone who doesn’t use the app, however, you’re going to design the app so that it would be stupid not to use it.

Who knew that people who bought tacos also bought mango curry sauce… you do because the data says so. So, why not put them next to each other in the grocery store?
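Mining those stored receipts for co-purchases can start as simply as counting how often pairs of items land on the same receipt. A toy sketch with invented receipt data:

```python
from collections import Counter
from itertools import combinations

# Toy receipts standing in for the store's purchase history; real input
# would be the millions of stored receipts mentioned above.
receipts = [
    {"taco shells", "mango curry sauce", "milk"},
    {"taco shells", "mango curry sauce"},
    {"bread", "milk"},
]

# Count how often each pair of items appears on the same receipt.
pair_counts = Counter()
for receipt in receipts:
    for pair in combinations(sorted(receipt), 2):
        pair_counts[pair] += 1

# The most frequent pair is a candidate for shelving side by side.
print(pair_counts.most_common(1))
# → [(('mango curry sauce', 'taco shells'), 2)]
```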

2. Use the bottom 1/4 of the App to a) suggest related items when an item is added to the list, and b) show related items or data-suggested items while looking for the next item on the list.

3. Personalize suggestions based on individualized purchase history. If someone adds taco shells regularly, remind them about taco shells or salsa the next time they buy ground beef.

4. Make the App features great. Make it ideal for other lists too, make it indispensable. Add features that might suggest other items if you are on a diet. Or let a vegetarian know that a sauce has meat products and suggest another one that doesn’t.

5. Don’t game this too much. We aren’t stupid. If we see that every item you suggest is by Kraft, or we see the same items again and again that your App is pushing to us for advertising kickbacks, we’ll know the App is made for you and not for us. Here’s a magical idea: don’t just say you put the customer first, actually put them first!

The days of forcing people down every aisle to get them to buy more products are over… or they will be for you if your competitor takes this App idea before you do.

The quest for food

I’m on holidays and I’ve had the privilege of watching a few sunrises over the ocean. Before the sun rises, when the day has brightened but before the glare gets in the way, birds nosedive for small fish feeding on the turmoil of the ocean as waves crash near the shore. I’m reminded of another privilege we all have: we don’t have to spend most of our day seeking food.

These diving birds must constantly be on the move, seeking their next meal. Food is life, and the quest for food makes up a significant part of most birds’ and mammals’ days. We don’t have to do that. We have the luxury of grocery stores, restaurants, refrigerators, and means to store food without it going bad. Much of our innovation and subsequent convenience comes from our ability to spend precious time not in the quest for food.

But it’s not just about innovation and convenience, it’s also about creativity. I think we are on the threshold of a new era of creativity. AI and robotics are going to move us into an era of greater innovation and convenience, and ultimately give us more precious time to design, create, and be artistically inspired.

The quest for food will be replaced by the quest for self-expression. A new chapter is about to be written… it will feel much more like fiction than reality.

It’s already here!

Just yesterday morning I wrote:

Robots will be smarter, stronger, and faster than humans not after years of programming, but simply after the suggestion that the robot try something new. Where do I think this is going, and how soon will we see it? I think Arthur C. Clarke was right… the most daring prophecies seem laughably conservative.

Then last night I found this post by Zain Khan on LinkedIn:

🚨 BREAKING: OpenAI just made intelligent robots a reality

It’s called Figure 01 and it’s built by OpenAI and robotics company Figure:

  • It’s powered by an AI model built by OpenAI
  • It can hear and speak naturally
  • It can understand commands, plan, and carry out physical actions

Watch the video below to see how realistic its speech and movement abilities are. The ability to handle objects so delicately is stunning.

Intelligent robots aren’t a decade away. They’re going to be here any day now.

This video, shared in the post, is mind-blowingly impressive!

This is just the beginning… we are moving exponentially fast into a future that is hard to imagine. Last week I would have guessed we were 5-10 years away from this, and it’s already here! Where will we really be with AI robotics 5 years from now?

(Whatever you just guessed is probably laughably conservative.)

The most daring prophecies

In the early 1950s Arthur C. Clarke said,

“If we have learned one thing from the history of invention and discovery, it is that, in the long run — and often in the short one — the most daring prophecies seem laughably conservative.”

As humans we don’t understand exponential growth. The well-known wheat or rice on a chessboard problem is a perfect example:

If a chessboard were to have wheat placed upon each square such that one grain were placed on the first square, two on the second, four on the third, and so on (doubling the number of grains on each subsequent square), how many grains of wheat would be on the chessboard at the finish?

The answer: 2^64 − 1, or 18,446,744,073,709,551,615… which is over 2,000 times the annual world production of wheat.
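The chessboard total is easy to verify directly, summing the doubling squares:

```python
# One grain on square 1, doubling each square: 1 + 2 + 4 + ... for 64 squares.
total_grains = sum(2**square for square in range(64))

# The closed form: a doubling series over n squares sums to 2^n - 1.
assert total_grains == 2**64 - 1

print(f"{total_grains:,}")  # → 18,446,744,073,709,551,615
```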

All this to say that we are ill-prepared to understand how quickly AI and robotics are going to change our world.

1. Robots are being trained to interact with the world through verbal commands. They used to be trained to do specific tasks like ‘find one of a set of items in a bin and pick it up’. While the robot was sorting, it was only sorting specific items it was trained to do. Now, there are robots that sense and interpret the world around them.

“The chatbot can discuss the items it sees—but also manipulate them. When WIRED suggests Chen ask it to grab a piece of fruit, the arm reaches down, gently grasps the apple, and then moves it to another bin nearby.

This hands-on chatbot is a step toward giving robots the kind of general and flexible capabilities exhibited by programs like ChatGPT. There is hope that AI could finally fix the long-standing difficulty of programming robots and having them do more than a narrow set of chores.”

The article goes on to say,

“The model has also shown it can learn to control similar hardware not in its training data. With further training, this might even mean that the same general model could operate a humanoid robot.”

2. Robot learning is becoming more generalized: ‘Eureka! NVIDIA Research Breakthrough Puts New Spin on Robot Learning’.

“A new AI agent developed by NVIDIA Research that can teach robots complex skills has trained a robotic hand to perform rapid pen-spinning tricks — for the first time as well as a human can…

Eureka has also taught robots to open drawers and cabinets, toss and catch balls, and manipulate scissors, among other tasks.

The Eureka research, published today, includes a paper and the project’s AI algorithms, which developers can experiment with using NVIDIA Isaac Gym, a physics simulation reference application for reinforcement learning research. Isaac Gym is built on NVIDIA Omniverse, a development platform for building 3D tools and applications based on the OpenUSD framework. Eureka itself is powered by the GPT-4 large language model.”

3. Put these ideas together, then fast forward the training exponentially. We have robots that understand what we are asking them, which are trained and positively reinforced in a virtual physics lab. These robots are practicing how to do a new task before actually doing it… not practicing a few times, or even a few thousand times, but doing millions of practice simulations in seconds. Just like the chess bots that learned to play chess by playing themselves millions of times, we will have robots where we ask them to do a task and they ‘rehearse’ it over and over again in a simulator, then do the task for the first time as if they had already done it perfectly thousands of times.

In our brains, we think about learning a new task as a clunky, slow experience. Learning takes time. When a robot can think and interact in our world while simultaneously rehearsing new tasks millions of times virtually in the blink of an eye, we will see them leap forward in capabilities at a rate that will be hard to comprehend.

Robots will be smarter, stronger, and faster than humans not after years of programming, but simply after the suggestion that the robot try something new. Where do I think this is going, and how soon will we see it? I think Arthur C. Clarke was right…

…the most daring prophecies seem laughably conservative.

Sensing our world

I’m going to need glasses and hearing aids in the next few years. I already use small magnification readers when text is small or my eyes are fatigued, and I know that my hearing has diminished. One example of my hearing issue is that when we shut down our gas fireplace it beeps, but I don’t hear the beep. That sound is no longer in the range that I can hear. I only know it’s there because my wife and daughter mentioned it.

It’s only in relatively recent history that we’ve had access to technologies that allow us to enhance our senses when they fall below ‘normal’ capabilities. Before that we just lived less vivid lives as our senses worsened.

Having my family ask me ‘Can’t you hear that?’ and listening to nothing but their voices, knowing full well that I’m missing something, is a little disconcerting. How are they getting to experience a sound that is outside the range of my capability? But the reality is that there are sounds they too can’t hear, which dogs and other animals can.

This makes me wonder what our world really looks and sounds like. What are we incapable of sensing and hearing, and how does that alter our reality? And for that matter, how do we perceive the world differently not just from other species but from each other? We can agree that certain colours on a spectrum are red, green, and blue, but is my experience of blue the same as yours? If it was, wouldn’t we all have the same favourite colour?

A few years back I had an eye condition that affected my vision at the focal point of my left eye. Later, I accidentally discovered that this eye doesn’t distinguish the difference between some blues and greens, but only at the focal point. I learned this playing a silly bubble bursting game on my phone. Without playing this game I might not have realized the limitations of my vision, and would have been ignorantly blind to my limited vision.

That’s the thought of the day for me: how are we ignorantly blind to the limitations of our senses? What are we missing that our world tries to share with us? How will technology aid us in seeing what can’t be seen, and hearing what we can’t usually hear, beyond what we have already accomplished in detecting? Our houses have carbon monoxide detectors, and we have sensors for radiation that are used in different occupations. We have sensors that detect infrared light, and accurately measure temperature and humidity. This kind of sense-enhancing technology isn’t new.

Still, while we have sensors and tools to detect these things for us, we can’t fully experience aspects of our world that are present but undetectable by our senses. It makes me wonder just how much of our world we don’t experience. We are blessed with amazing senses and we have some incredible tools to help us observe the world in greater detail, but what are we missing? What are we ignorantly (or should I say blissfully) unaware of?

Nature-centric design

I came across this company, Oxman.com, and it defines Nature-centric design as:

Nature-centric design views every design construct as a whole system, intrinsically connected to its environment through heterogeneous and complex interrelations that may be mediated through design. It embodies a shift from consuming Nature as a geological resource to nurturing her as a biological one.
Bringing together top-down form generation with bottom-up biological growth, designers are empowered to dream up new, dynamic design possibilities, where products and structures can grow, heal, and adapt.

Here is a video, Nature x Humanity (OXMAN), that shows how this company is using glass, biopolymers, fibres, pigments, and robotics for large scale digital manufacturing, to rethink architecture to be more in tune with nature and less of an imposition on our natural world.


This kind of thinking, design, and innovation excites me. It makes me think of Antoni Gaudí-styled architecture, but with the added bonus of materials and designs that are less about aesthetics alone and more about the symbiotic, naturally infused use of materials that help us share our world with other living organisms, rather than constructions that impose themselves, cancer-like, on our world, scarring and damaging the very environment that sustains our life.

Imagine living in a building that allows more natural air flow, is cheaper to heat and cool, and has a positive emissions footprint, while also being a place that makes you feel like you are in a natural rather than concrete environment. Fewer corners, less uniformity, and less institutional homes, schools, and office buildings: spaces that are more inviting, more naturally lit, and more comfortable to be in.

This truly is architectural design of the future, and it has already started… I can’t wait to see how these kinds of innovations shape the world we live in!