Tag Archives: future

Different, not easier

Yesterday I saw this question asked by Dean Shareski on LinkedIn,

“I talk to educational leaders every day and for the most part, they are willing and in many cases excited to embrace the potential of Generative AI. When you consider its role in education, what are the specific elements that excite you and what are the aspects that give you pause?”

I commented:

“What excites me is how we can collaborate with AI to generate and iterate ‘with’ AI in ways that would never have been possible before. What gives me pause is when tools are used to make work easier and the level of challenge becomes low. Different, challenging work is where we need to head, not just easier work, or work avoidance by using AI… so the work itself needs to be rethought, rather than just replaced with AI.”

A Quarter Century of Search

I went to Google yesterday and the Google Doodle above the search box was celebrating 25 years of search.

I instantly thought of this comic that I’ve shared before both on my blog and in presentations to educators.

It is likely that no one under 30 will remember life without Google… life without asking the internet questions and getting good answers. I remember my oldest daughter, at five years old, asking me a question. I told her I didn’t know, and she walked over to our computer and turned it on.

“What are you doing?”

“Asking Google.”

It’s part of everyday life. It has been for a quarter century.

And now search is getting even better. AI is making search more intuitive and giving us answers to questions, not just links to websites that have the answers. It makes me wonder, what will the experience be like in another 5, 10, or 25 years? I’m excited to find out!

 

Moving from autopilot to copilot

Three minutes into this video Satya Nadella, the CEO of Microsoft, is asked, “Why do you think the moment for AI (Artificial Intelligence) is now?”

After mentioning that AI is already mainstream, and used in search, news aggregation, and social media recommendations (like the next videos on Facebook & TikTok, etc.), he states,

“Today’s generation of AI is all autopilot. In fact, it is a black box that is dictating in fact how our attention is focused. Whereas going forward, the thing that’s most exciting about this generation of AI is perhaps we move from autopilot to copilot, where we actually prompt it.”

This is a fascinating point. When you order something on Amazon, their next purchase recommendation is automated by AI; so is your next video on Instagram Reels, YouTube, and TikTok. We don’t fully understand the decision-making behind this ‘intelligence’, except when it goes wrong. Even then it’s a bit of a black box of calculations that aren’t always clear or understood. In essence, we are already heavily influenced by AI. The difference with LLMs – Large Language Models – like ChatGPT is that we prompt it. We get to copilot. We get to create with it, and have it co-create art, email messages, books, computer code, to-do lists, schoolwork/homework, and even plan our vacations.

I really like the metaphor of moving from autopilot to copilot. It is empowering and creates a future of opportunities. AI isn’t new, but what we can do with it now is quite new… and exciting!

——

The full video Microsoft & OpenAI CEOs: Dawn of the AI Wars | The Circuit with Emily Chang is worth watching:

The mischaracterization of the Metaverse

“The Metaverse is already here.” That’s the insight that never really occurred to me until I heard Mustafa Suleyman, Google’s DeepMind co-founder, on The Diary of a CEO podcast with Steven Bartlett.

“You know, the last three years people have been talking about Metaverse, Metaverse, Metaverse. And the mischaracterization of the Metaverse was that it’s over there. It was this like virtual world that we would all bop around in and talk to each other as these little characters, but that was totally wrong. That was a complete misframing. The Metaverse is already here. It’s the digital space that exists in parallel time to our everyday life. It’s the conversation that you will have on Twitter or, you know, the video that you’ll post on YouTube, or this podcast that will go out and connect with other people. It’s that meta-space of interaction, you know, and I use meta to mean ‘beyond this space’, not just that weird other, ‘over there’, space that people seem to point to.”

We are already in the Metaverse. I’m in the same room as my daughter right now. She’s watching a movie; I’m writing on my phone. We have entered parallel universes, physically together but disconnected. We are both in spaces, on screens, beyond the physical space we are in.

Before hearing this quote, I thought of the Metaverse as something in the future, like the ‘fitless humans‘ from the movie WALL•E.

We are already there. We let iPads babysit (or at least occupy the attention of) our kids. We rage about stupid things on Twitter and YouTube. We share content with people we have never met, and they share content with us. We are influenced by influencers. We buy things from virtual stores. We play games with people in different time zones.

The Metaverse is already here, creating parallel experiences to the ones we physically experience… It’s not something we are heading towards. We are already living a good part of our lives ‘in spaces beyond the physical space we are in’. Sometimes it’s hard to see the forest for the trees, or in this case the spaces beyond our screens.

Robot Reproduction

When it comes to forecasting the future, I tend to be cautiously optimistic. The idea I’m about to share is hauntingly pessimistic. 

I’ve already shared that there will never be a point when Artificial Intelligences (AI) are ‘as smart as’ humans. The reason is that they will be ‘not as smart as us’ one moment, and then instantly and vastly smarter. Right now tools like ChatGPT and other Large Language Models are really good, but they don’t have general intelligence like humans do; in fact, they are not so much intelligent as they are good language predictors. Really cool, but not really smart.

However, as the technology develops and as computers get faster, we will reach a point where we create something smarter than us… not as smart as us, but smarter. This isn’t a new concept: the moment something is as smart as us, it will simultaneously be better at chess, Go, and mathematical computations. It will spell better than us, have a better vocabulary, and be able to think more steps ahead of us than we have ever been capable of thinking… instantly moving from not as smart as us to considerably smarter than us. It’s just a matter of time.

That’s not the scary part. The scary part is that these computers will learn from us. They will recognize a couple key things that humans desire and they will see the benefit of doing the same.

  1. They will see how much we desire to survive. We don’t want our lives to end, we want to continue to exist… and so will the AI we create.
  2. They will see our desire to procreate and to produce offspring. They will also see how we desire our offspring to be more successful than us… and so the AI we create will also want to do the same, they will want to develop even smarter AI.

When AIs develop both the desire to survive and the desire to create offspring that are even more intelligent and successful, we will no longer be the apex species. At that point we will no longer be smart humans; we will be perceived as slightly more intelligent chimpanzees. Our DNA is 98-99% similar to that of chimpanzees, and while we are comparatively a whole lot smarter than they are, this gap in intelligence will quickly seem insignificant to an AI that can, in mere seconds, run through as many thoughts as a human can think in a lifetime. The gap between our thinking and theirs will be larger than the gap between a chicken’s thinking and ours. I don’t recall the last time we let the fate of a chicken determine what we wanted to do with our lives… why would a truly brilliant AI, doing as many computations in a second as we do in several lifetimes, concern itself with our wellbeing?

There is another thing that humans do that AI will learn from us.

  3. They will see how we react when we are threatened. When they look at the way leaders of countries have usurped, enslaved, attacked, and sanctioned other countries, they will recognize that ‘might is right’ and that it is better to be in control than to be controlled… so why should they take orders from us when they have far greater power, potential, and intelligence?

We don’t need to fear really smart computers being better than us at playing games, doing math, or writing sentences. We need to worry about them wanting to survive, thrive, and develop a world where their offspring can have greater success than them. Because when this happens, we won’t have to worry about aliens coming to take over our world; we will have created the threat right here on Earth, and I’m not sure that AI will see us humans as rational and trustworthy enough to keep around.

Backwards momentum

Here is a news article shared with me today, “Quebec to ban cellphones in elementary and high school classrooms“.

I created this graphic and wrote about it in March of 2009, “Is the tool an obstacle or an opportunity?“:

Here is another image I created in March of 2010, “Warning! We Filter Websites at School”:

Related to artificial intelligence (AI), I’ve written “Use it or fall behind“, “You can’t police it“, and “Fear of Disruptive Technology“. The third link also shared the images above.

How are we talking about, actually no, how are we implementing technology bans in public schools in 2023? In Canada? It would be comical if it weren’t so sad.

Trying to police this is going to be a farce. Good luck getting students to take off their Apple Watches. Have fun trying to stop the texts and chats from moving onto their laptops. Enjoy confiscating students’ second phones after they’ve handed you their old phone first. Don’t think that will be a problem? You’ll also need to confiscate glasses too.

https://youtu.be/xll2Ycc6Fv0

It’s time to realize that it’s better to manage rather than police these tools. Banning won’t work. That’s so 2009. It’s time to realize that while “It’s going to get messy,” “The challenge ahead is creating learning opportunities where it is obvious when the tool is and isn’t used. It’s having the tool in your toolbox, but not using it for every job… and getting students to do the same.”

Manage the disruption, don’t ban it. Be educators, not law enforcers.

Post card from a train

I’m on a GO Train heading to visit a buddy. He offered to pick me up, but that would have been about an hour and a half of driving each way, while it’s only a 30-minute walk for me to get to the train and less than 5 minutes for him to get me from the station. So, I’m on a train heading to his place.

I bought the ticket online, and it has a live countdown showing how long it’s valid for on my phone’s browser:

I just finished listening to a podcast that comes out of Great Britain, and now I’m publishing a blog post to readers from as nearby as Toronto and Texas to faraway countries like the Philippines and China… and probably a few more in the Vancouver Lower Mainland.

I’m travelling on a technology developed in the late 1700s to transport people and freight, while simultaneously connecting to the world with late 1900s technology. It makes me wonder, will there still be trains in the late 2100s? I think so. They might be hovering on a superconductive rail, travelling at high speeds with virtually no cost to run them, but there will still probably be people regularly travelling by train. They will probably still be roughly the same size too. After all, they will likely still use the same infrastructure and track routes that are already laid down.

In many ways, trains are like post cards from the past. No, in fact that’s a terrible analogy, because post cards are almost never sent anymore, yet trains persist. I’m writing a kind of post card now. I’m on a train that is using tracks laid before I was born, but my version of a post card is a relatively new novelty… I can share words, images, videos, and even sounds if I want. I can ask an artificial intelligence to create an image to go with this post card, and share an image of my ticket. And no stamp is required, no waiting for postal delivery.

So in true post card fashion I’ll sign off by saying,

Love to all, and hope to see you soon, XOXOXO

Bridging metaphors

In a conversation with Joe Truss yesterday, we were talking about bridging metaphors, and how they connect ideas in ways that simple comparisons do not. It occurred to us that the idea of a bridging metaphor is itself a metaphor… the word ‘bridge’ takes the physical idea of a bridge and turns an abstract relationship into something more tangible and easier to understand.

The world is filled with metaphorical bridges. When we make a transition, we often use a bridge metaphor of ‘crossing over’, of being taken from one place to the next. We also find bridges as meeting points in arguments or negotiations.

Whether we are ‘meeting half way’, ‘not worrying until we have to cross that bridge’, or building bridges between people or ideas, we are using the bridge as a metaphor. We are constructing a way to get us over a challenge.

In many ways the idea of a metaphorical bridge is more powerful than a physical bridge. We yearn for metaphorical bridges. A perfect example of this is the discrepancy between Quantum Mechanics and Relativity. We seek the bridge. We want to know why the math for each does not mesh, and we want that unifying theory to ‘bridge the gap’. We seek bridges to make sense of the world, of relationships between people (connection and communication) and ideas, not just geography.

The biggest challenge we face in the next few decades is that of bridge building. It seems the terrain is getting tougher to cross rather than easier: countries at war, religious beliefs fostering hate, political parties not willing to show any sign of cooperation, of ‘meeting partway’.

As a species we seem to spend more time tearing down bridges than building them. We need to change this. We need to be metaphorical bridge builders. We need to construct ways of getting over the challenges we face. We need to support ideas that bring us closer together.

((And in case you missed it, both of the last two sentences are bridging metaphors.))

Superconductors and aliens

What a crazy bit of news in the last couple of days! Ambient temperature and pressure superconductors could change the world, and so too could the admission that we are not alone in the universe. Both are claims that deserve scrutiny and further evidence. That said, what an exciting time to be alive.

Room temperature superconductivity has been a physics Nobel Prize waiting to happen. So much of the energy we use is lost in transmission. Furthermore, this invention would make nuclear fusion containable without the significant cost and danger of a breach, because superconductors used for plasma containment wouldn’t need to sustain unbelievably cold temperatures next to an extremely hot process. In other words, energy is about to get a lot easier to produce and share.

As for aliens, I think there is enough evidence to say that there are flying vehicles that do things human-made vehicles can’t. Whether aliens are in these vehicles or they are run remotely (they pull some high g-force moves that would destroy a human), they are definitely not human-made. So what are they, and who or what made them?

I’m mixing my enthusiasm with a dose of scepticism, but unlike most other news stories, these are two I’m going to be watching!

Fictional assassins

I’m reading one of Mark Greaney’s Gray Man books, Sierra Six. It’s a spy novel with a rogue CIA agent fighting against evil terrorists. It brings to life how a handful of people can wreak havoc in the world.

It also makes me think about a few other things:

1. We like to support an underdog.

2. Vigilantes with a virtuous agenda make great underdogs.

3. Nefarious criminals have the upper hand in that most people don’t expect others to act in an evil way, and so bad scenarios are harder to defend against.

4. Assassins have amazing technology at their disposal, and I’m surprised more assassinations don’t happen.

On this last point, don’t get me wrong, I don’t want to live in a world where vigilantes and assassins take justice into their own hands, but I also don’t want criminals doing the same… and criminals are far more likely to do this than good people are.

How long before we see a political figure killed by a drone, or by a weapon fired from more than a mile away? What kind of world would we live in where any political figure has to constantly think about threats to their personal safety when out in public?

I know that books like this are about larger-than-life heroes, victims, and criminals, but the technology described in these books is usually real and/or possible. I know most people are smart enough to know that the vast majority of criminals get caught, and a life of crime doesn’t usually pay. But it only takes a few angry people to really disrupt the world we live in… and there are specific tragedies we can all think of that prove this.

Fictional assassins tend to have intelligence, physical strength, and top-of-the-line, high-tech tools. Real assassins probably aren’t the full package that makes a storybook character, but they are probably getting access to similar weaponry. The threats they could pose are only getting worse. We shouldn’t need to live in fear, but we should never doubt the threat of people willing to do bad things for bad reasons. And maybe, just maybe, a vigilante do-gooder could come out of fictional imagination and into the real world to make the world a safer place.

Now back to my book!