A prediction nears reality

Andrew Wilkinson said on X:

“Just watched my 5-year-old son chat with ChatGPT advanced voice mode for over 45 minutes.

It started with a question about how cars were made.

It explained it in a way that he could understand.

He started peppering it with questions.

Then he told it about his teacher, and that he was learning to count.

ChatGPT started quizzing him on counting, and egging him on, making it into a game.

He was laughing and having a blast, and it (obviously) never lost patience with him.

I think this is going to be revolutionary. The essentially free, infinitely patient, super genius teacher that calibrates itself perfectly to your kid’s learning style and pace.

Excited about the future.”

– – –

I remember visiting my uncle, Joe Truss, back when I was in university. The year was somewhere around 1988-90, so at least 34 years ago. We were talking about the future, and he explained to me what learning would be like in the coming age of computers.

He said (loosely paraphrased; this was a long time ago):

‘In the future we will have virtual teachers that will be able to teach us exactly what we want to know, in exactly the format we need to learn best. You want to learn about relativity? How about learning from Einstein himself? You’ll see him in front of you like he is real. And he will not just lecture you; he will react to your questions and even your biofeedback. If you look puzzled, he might ask a question. If he sees you looking up and to the left, which he knows means you are trying to visualize something, he changes his lesson to provide an image. He will be a teacher personalized to any and all of your learning needs.’

We aren’t quite there yet, but the exchange Andrew Wilkinson’s son had with ChatGPT, and the work being done in virtual and augmented reality, suggest that Joe’s prediction is finally coming into being.

I too am excited about the future, and more specifically, the future of learning.

AI, Batman, and the Borg

In one of my earliest blog posts, originally written back in November 2006, I wrote:

“I come from the Batman era, adding items to my utility belt while students today are the Borg from Star Trek, assimilating technology into their lives.”

I later noted that students were not the ‘digital natives’ I thought they were. I went back and forth on the idea a few times on my blog after that, ultimately looking more at ‘digital exposure’ and, rather than lumping students and kids together as digital immigrants or natives, seeing that everyone sits on a spectrum based on their exposure and interest.

Many of us are already a blend of Batman and Borg. We wear glasses and hearing aids that assist and improve our senses. We track our fitness on our phones and smartwatches. We even have pacemakers that keep our hearts regular when our bodies don’t do a good job of it. More ubiquitously, almost all of us count on our phones for maps, and we can even get an augmented view of the world, with directions superimposed on our camera view.

How else are we going to be augmenting our reality with new AI tools in the next 10 to 20 years?

We now have tools that can read and help us respond to emails, suggest our next meal based on the ingredients in our fridge, plan our next vacation, and even drive us to our destination without our assistance.

What’s next?

I know there are some ideas in the works that I’m excited to see developed. For example, I’m looking forward to getting glasses or contact lenses with a full heads-up display. I walk up to someone and their name becomes visible to me on a virtual screen. I look at a phone number and I can call it with an eye gesture. I see something I want to know more about, anything from an object to a building to a person, and I can learn more with a gesture.

I don’t think this technology is too far away. But what else is in store for us? What new tools are we adding to our utility belts, and what new technologies are going to enhance our senses?

I used to make the Batman/Borg comparison to look at how we add versus integrate technology into our lives, but I think everyone will be doing more and more of both. The questions going forward are: how much do we add, how reliant do we become, and how different will we be as a result? Would 2024 me even recognize the integrated capabilities of 2044 me, or will that future me be as foreign and advanced as a person from 2024 would seem to someone from 1924?

I’m excited about the possibilities!

Future Tech: Prescription Glasses Metaphor

It’s the early 2030s and you are walking downtown, heading to a specialist appointment. You don’t know where the office is, but you aren’t looking at a map on your phone; you haven’t done that in a few years. Instead, you are looking through a contact lens that acts like a heads-up display, giving you augmented reality directions. An arrow flashes three times in your view, showing you that you need to turn right in 15 meters, while an implant in your right elbow vibrates in a long-short-short pattern, which you have set to mean a map direction.

Had the vibration been in both elbows, with a short-short, pause, short-short pattern, you would know it was a phone call from one of your chosen favourites, and your heads-up display would have shown you the caller’s name and/or photo. You have it set so that you need to look down and to the right to see the name and face of the person calling, but it could be set to come up right in your line of sight. If the call was from an unknown number, you would not have been bothered at all. Instead, the call would have been answered by an AI assistant that (for voice calls) sounds like you, using a decision tree to decide whether the call is worth interrupting you for, should be saved as a message for later, or should simply be blocked as spam. Since you are just walking, it might have offered you a text version of the message on your augmented display; had you been in the specialist’s office, the AI would have waited until your appointment was over to notify you of the message.
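To make that imagined routing logic concrete, here is a minimal sketch of the decision tree in Python. Everything in it is speculative: the names (Call, Context, route_call) and the rules are just my reading of the scenario above, not any real device’s API.

```python
from dataclasses import dataclass
from enum import Enum, auto

class Context(Enum):
    """Where the wearer is right now (illustrative only)."""
    WALKING = auto()
    IN_APPOINTMENT = auto()

@dataclass
class Call:
    caller_id: str
    is_favourite: bool   # one of your chosen favourites
    is_known: bool       # in your contacts
    looks_like_spam: bool

def route_call(call: Call, context: Context) -> str:
    """Return how the imagined assistant handles an incoming call."""
    if call.is_favourite:
        # Both elbows: short-short, pause, short-short, plus the caller's
        # name/photo on the heads-up display when you glance down-right.
        return "haptic alert in both elbows + name/photo on HUD"
    if not call.is_known:
        if call.looks_like_spam:
            return "block the call silently"
        if context is Context.IN_APPOINTMENT:
            return "AI answers in your voice; notify you after the appointment"
        return "AI answers in your voice; offer a text transcript on the HUD"
    # Known but not a favourite: a quieter default notification.
    return "standard notification"

# Example: an unknown caller while you are walking downtown.
print(route_call(Call("555-0100", False, False, False), Context.WALKING))
```

The point of the sketch is simply that the filtering happens before you are ever interrupted; the caller’s status and your current context decide the output channel, not the caller.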

You arrive at the specialist’s office, and because you connected on LinkedIn, your AI has identified the secretary from her profile picture; her name pops up above her head so that you can greet her appropriately. You wave your hand over a scanner, stare briefly at a ‘Yes’ box to indicate that you want to share your personal medical information, and you are done signing in. Because this is a medical specialist, that data includes the last year’s worth of bio information like heart rate, blood-sugar levels, and even blood pressure, all collected by a small implant in a just slightly more sophisticated way than the current Apple Watch.

Had the doctor not needed to run some specific tests on you, all of this could have been done remotely, with the Zoom call happening through your contact lenses and an implant in your ear. To get around the fact that there is no camera pointed at your face, the person you are talking to sees a perfect rendition of it, and even if you were watching, you wouldn’t know that it wasn’t actually you. The rendition even uses your voice intonation to help determine the emphasis of your facial expressions, so if you said the same thing twice, it would look subtly different rather than robotic.

Is this a future you want? Because it’s coming… and you will embrace it. You will participate in it. Because not to do so would put you at a disadvantage.

When my youngest daughter was 9, we took a trip to England and France. In Paris we went to the Eiffel Tower, and it was there that we learned our daughter needed glasses. We had just come back from China, where we had been living for 2 years, and we hadn’t had eye exams in almost 3 years. We got to the top of the tower and my wife started pointing out things to look at, and my daughter couldn’t see any of them. At 9, she didn’t know that she couldn’t see well; she thought everyone saw the way she did. Distant hills were supposed to be blurry. What about street and store signs? Who needed to see those? They were in Chinese anyway, and we couldn’t read them. Once we returned home, we went to the optometrist, and our daughter has worn glasses or contacts ever since.

The future I shared above is a future with metaphorical 30/20 vision. It is the ability to see and feel things that people today cannot see or feel without augmentation… and this will be the new version of 20/20 vision. The same way that my daughter was disadvantaged without her glasses, any person not augmenting their life with technology will be disadvantaged compared to those around them. They will be less connected, less informed, less able to see. It would be like my daughter realizing that she couldn’t see like everyone else and still deciding not to get glasses. Augmented reality will be the prescription glasses of the future, and you can choose to use the prescription or stay in the dark.