Tag Archives: technology

The most daring prophecies

In the early 1950s, Arthur C. Clarke said,

“If we have learned one thing from the history of invention and discovery, it is that, in the long run — and often in the short one — the most daring prophecies seem laughably conservative.”

As humans, we don’t understand exponential growth. The well-known wheat on a chessboard problem is a perfect example:

If a chessboard were to have wheat placed upon each square such that one grain were placed on the first square, two on the second, four on the third, and so on (doubling the number of grains on each subsequent square), how many grains of wheat would be on the chessboard at the finish?

The answer: 2⁶⁴ − 1, or 18,446,744,073,709,551,615… which is over 2,000 times the annual world production of wheat.
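You can verify that total with a couple of lines of Python (a minimal check of the chessboard sum, nothing more):

```python
# Each square doubles the last: 1, 2, 4, ... up to 2**63 grains on square 64.
total = sum(2**square for square in range(64))  # equals 2**64 - 1
print(f"{total:,}")  # 18,446,744,073,709,551,615
```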

All this to say that we are ill-prepared to understand how quickly AI and robotics are going to change our world.

1. Robots are being trained to interact with the world through verbal commands. They used to be trained for specific tasks, like ‘find one of a set of items in a bin and pick it up’, and while such a robot was sorting, it could only sort the specific items it was trained on. Now, there are robots that sense and interpret the world around them.

“The chatbot can discuss the items it sees—but also manipulate them. When WIRED suggests Chen ask it to grab a piece of fruit, the arm reaches down, gently grasps the apple, and then moves it to another bin nearby.

This hands-on chatbot is a step toward giving robots the kind of general and flexible capabilities exhibited by programs like ChatGPT. There is hope that AI could finally fix the long-standing difficulty of programming robots and having them do more than a narrow set of chores.”

The article goes on to say,

“The model has also shown it can learn to control similar hardware not in its training data. With further training, this might even mean that the same general model could operate a humanoid robot.”

2. Robot learning is becoming more generalized: ‘Eureka! NVIDIA Research Breakthrough Puts New Spin on Robot Learning’.

“A new AI agent developed by NVIDIA Research that can teach robots complex skills has trained a robotic hand to perform rapid pen-spinning tricks — for the first time as well as a human can…

Eureka has also taught robots to open drawers and cabinets, toss and catch balls, and manipulate scissors, among other tasks.

The Eureka research, published today, includes a paper and the project’s AI algorithms, which developers can experiment with using NVIDIA Isaac Gym, a physics simulation reference application for reinforcement learning research. Isaac Gym is built on NVIDIA Omniverse, a development platform for building 3D tools and applications based on the OpenUSD framework. Eureka itself is powered by the GPT-4 large language model.”

3. Put these ideas together, then fast-forward the training exponentially. We have robots that understand what we are asking them, and that are trained and positively reinforced in a virtual physics lab. These robots practice a new task before actually doing it… not a few times, or even a few thousand times, but millions of simulated attempts in seconds. Just like the chess bots that learned to play by playing themselves millions of times, we will have robots that, when asked to do a task, ‘rehearse’ it over and over again in a simulator, then perform it for the first time as if they had already done it perfectly thousands of times (see the sketch below).
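To make that concrete, here is a toy illustration of the rehearse-then-act loop. This is a hedged sketch, not NVIDIA’s Eureka or Isaac Gym: just simple hill-climbing against a made-up simulator, with every name and number invented for illustration.

```python
import random

def simulate(params):
    """Toy stand-in for a physics simulator: score one practice attempt.
    (Real systems use engines like Isaac Gym; the 'ideal grip' here is made up.)"""
    ideal_grip = 0.7
    return -abs(params["grip"] - ideal_grip) + random.gauss(0, 0.01)

def rehearse(episodes=100_000):
    """Hill-climb in simulation: try a small variation, keep it if it scores better."""
    best = {"grip": random.random()}
    best_score = simulate(best)
    for _ in range(episodes):
        trial = {"grip": best["grip"] + random.gauss(0, 0.05)}
        score = simulate(trial)
        if score > best_score:
            best, best_score = trial, score
    return best  # the robot's 'first' real attempt starts from here

print(rehearse())  # converges near {'grip': 0.7} after many virtual tries
```

The point isn’t the particular algorithm; it’s that the expensive part (practice) happens virtually, so the first physical attempt already benefits from thousands of trials.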

In our brains, we think about learning a new task as a clunky, slow experience. Learning takes time. When a robot can think and interact in our world while simultaneously rehearsing new tasks millions of times virtually in the blink of an eye, we will see them leap forward in capabilities at a rate that will be hard to comprehend.

Robots will be smarter, stronger, and faster than humans not after years of programming, but simply after the suggestion that the robot try something new. Where do I think this is going, and how soon will we see it? I think Arthur C. Clarke was right…

…the most daring prophecies seem laughably conservative.

Google proof vs AI proof

I remember the fear mongering when Google revolutionized search. “Students are just going to Google their answers, they aren’t going to think for themselves.” Then came the EDU-gurus proclaiming, “If students can Google the answers to your assignments, then the assignments are the problem! You need to Google proof what you are asking students to do!”

In reality this was a good thing. It provoked a lot of reworking of assignments and promoted more critical thinking, first from teachers, then from students. It is possible to craft a question that demands thoughtful, insightful responses not easily found on Google, or that returns so few useful results that it is easy to tell whether a student created the work themselves or copied it from the internet.

That isn’t the case for Artificial Intelligence. AI is different. I can think of a question that would get no useful search results on Google yet would be completely answerable using AI. Unless you are watching students do the work with pen and paper in front of you, you really don’t know if the work is AI-assisted. So what next?

Ultimately, the answer is twofold:

How do we bolster creativity and productivity with AND without the use of Artificial Intelligence?

This isn’t a ‘make it Google proof’ kind of question. It’s more challenging than that.

I got to hear John Cohn, recently retired from MIT, speak yesterday. There are two things he said that stuck with me. The first was a loose quote from a Business Review article: “AI won’t take over people, but people with AI are going to take over people.”

This is insightful. The reality is that the people who are going to be successful and influential in the future are those who understand how to use AI well. So we would be doing students a disservice by not bringing AI into the classroom.

The other thing he said that really struck me was, “If you approach AI with fear, good things won’t happen, and the bad things still will.”

We can’t police its use, but we can guide students to use it appropriately… and effectively. I really like this AI Acceptable Use Scale shared by Cari Wilson:

This is one way to embrace AI rather than fear and avoid it in classrooms. Again I ask:

How do we bolster creativity and productivity with AND without the use of Artificial Intelligence?

One way is to question the value of homework. Maybe it’s time to revisit our expectations of what is done at home. Give students work that bolsters creativity at home, and keep the real work of school at school. But whether or not homework changes, what we do need to change is how we think about embracing AI in schools, and how we help students navigate its appropriate, effective, and even ethical use. If we don’t, then we really aren’t preparing our kids for today’s world, much less the future.

We aren’t going to AI proof schoolwork.

Beyond a simple blood test

I have to go get some blood work done. It’s time to check a few levels and make sure that I’m in the healthy range. I have an issue maintaining my Vitamin D levels, and cholesterol issues run in my family… and also in me. So I’ll head to the medical center and line up this weekend for them to poke me in the arm and fill a few small vials of blood. In a week or so I’ll get a call from my doctor after she looks at the results.

I wonder how far away we are from being able to do this from home. Prick your finger, put a drop of blood on a sensor, and get a full spectrum of results. Add to this a health monitor on your smartwatch that tracks heart rate and rhythm, as well as activity, and you’ve got a full-service health monitoring system that can be preemptive and preventative. And add to this a toilet that analyzes your urine, and you’ve got a regular no-line-up doctor’s visit without ever leaving your home.

Cholesterol levels seem high? The monitor will tell me, and my doctor. Vitamin D levels low? My watch tells me to double my morning dose. Imagine your watch telling you that you should go to the hospital because it detected a heart arrhythmia that is consistent with the early signs of a heart attack. Wouldn’t that be so much better than not knowing?
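The alerting side of this could start as something as simple as threshold rules over a stream of readings. A minimal sketch, with every name and range invented for illustration (placeholders, not clinical values):

```python
# Hypothetical home readings; keys, values, and thresholds are all illustrative.
READINGS = {"vitamin_d": 18.0, "ldl_cholesterol": 165.0, "resting_hr": 62}

RULES = [
    ("vitamin_d", lambda v: v < 20, "Vitamin D low: consider doubling your morning dose."),
    ("ldl_cholesterol", lambda v: v > 130, "LDL high: flag these results for your doctor."),
    ("resting_hr", lambda v: v > 100, "Resting heart rate elevated: keep monitoring."),
]

def check(readings):
    """Return the message for every rule its reading triggers."""
    return [msg for key, triggered, msg in RULES if triggered(readings[key])]

for alert in check(READINGS):
    print(alert)
```

A real system would of course look at trends over time rather than single readings, which is exactly what continuous monitoring makes possible.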

The possibilities of what you can do to improve your health with a system like this are incredible. Is this possible in the next 5 years? I think so! It’s going to be amazing to see the way technology enhances our healthcare system in the next decade. We will be able to regularly and continuously monitor things that used to require several doctor and clinic visits a year, or even less often. And when you do go to the doctor, your complaints about your health won’t just be anecdotal; you’ll have streams of data to share. This is exciting for everyone except hypochondriacs… these poor people are going to have a lot more to worry about!

Conversational AI interface

A future where we have conversations with an AI in order to have it do tasks for us is closer than you might think. Watch this clip (full video here):

Imagine starting your day by asking an AI ‘assistant’, “What are the emails I need to look at today?” Then saying something like, “Respond to the first 2 for me, and draft an answer to the 3rd one for me to approve. And remind me what meetings I have today.” All while on the treadmill, or while shaving or even showering.

The AI calculates the calories you used on your treadmill, tracks what you eat, and even gives suggestions like, “You might want to add some protein to your meal. May I suggest eggs? You also have some tuna in the pantry, or a protein shake if you don’t want to make anything.”

Later, the ever-present AI is in the room with you during a meeting, and afterwards you request of it, “Send us all a message with a summary of what we discussed, and include a ‘To Do’ list for each of us.”

Sitting for too long at work? The AI could suggest standing up, or using the stairs instead of the elevator. Hungry? Maybe your AI assistant recommends a snack because it read your sugar levels off of your watch’s health monitor, and it does this just as you are starting to feel hungry.

It could even remind you to call your mom, or do something kind for someone you love… and do so in a way that helps you feel good about it, not like it’s nagging you.
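The pattern underneath all of these scenarios is an assistant routing a spoken request to the right tool. Here is a deliberately tiny sketch of that routing idea; the tools and the keyword matching are invented for illustration, and a real assistant would use an LLM with function calling rather than keywords:

```python
# Hypothetical 'tools' the assistant can call; both are stand-ins.
def summarize_inbox() -> str:
    return "3 emails need your attention today."

def todays_meetings() -> str:
    return "Stand-up at 9:00, budget review at 14:00."

TOOLS = {
    "email": summarize_inbox,
    "meeting": todays_meetings,
}

def assistant(utterance: str) -> str:
    """Route a spoken request to the first tool whose keyword appears in it."""
    for keyword, tool in TOOLS.items():
        if keyword in utterance.lower():
            return tool()
    return "Sorry, I don't have a tool for that yet."

print(assistant("What are the emails I need to look at today?"))  # matches "email"
print(assistant("Remind me what meetings I have today."))         # matches "meeting"
```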

All this and a lot more without looking at a screen, or typing information into a laptop. This won’t be ready by the end of 2024, but it’s closer to 2024 than it is to 2030. This kind of futuristic engagement with a conversational AI is really just around the corner. And those ready to embrace it are really going to leave those who don’t behind, much like someone insisting on horse travel in an era of automobiles. Are you ready for the next level of AI?

What are you outsourcing?

Alec Couros recently came to Coquitlam and gave a presentation on “The Promise and Challenges of Generative AI”. In this presentation he had a quote: “Outsource tasks, but not your thinking.”

I just googled it and found this LinkedIn post by Aodan Enright. (Worth reading but not directly connected to the use of AI.)

It’s incredible what is possible with AI… and it’s just getting better. People are starting businesses, writing books, creating new recipes, and, in the case of students, writing essays and doing homework. I just saw a TikTok of a student who records their lectures, runs the recording through AI to extract all the salient points, then has the AI tool create cue cards and test questions to help them study for upcoming tests. That’s pretty clever.
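That study workflow is easy to picture in code. A rough sketch, assuming you already have a text transcript of the lecture saved locally, and using the OpenAI Python client as one possible backend (any LLM API would do; the file name and prompts are made up):

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def ask(prompt: str) -> str:
    """Send one prompt to the model and return its reply."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

# 'lecture_transcript.txt' is a hypothetical transcript from a recording app.
transcript = open("lecture_transcript.txt").read()
points = ask(f"List the salient points of this lecture:\n\n{transcript}")
cards = ask(f"Turn these points into question/answer cue cards:\n\n{points}")
print(cards)
```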

What’s also clever, but perhaps not wise, is having an AI tool write an essay for you, then running the essay through a paraphraser that breaks up the AI structure of the essay so that it isn’t detectable by AI detectors. If you have the AI use the vocabulary of a high school student, and throw in a couple of run-on sentences, then you’ve got an essay that neither AI detectors nor teachers would be likely to flag as cheating. However, what have you learned?

This is a worthy point to think about, and to discuss with students: How do you use AI to make your tasks easier, but not do the thinking for you?

Because if you are using AI to do your thinking, you are essentially learning how to make yourself redundant in a world of ever-smarter AI. Don’t outsource your thinking… Keep your thinking cap on!

The right tool for the job

Last weekend’s Coquitlam Crunch walk was cold. We were the only ones in the parking lot at 8:30am.

We walked about a third of the way up before putting on our grip-on cleats, and the cold air was a lot more difficult for me to tackle than the actual walking conditions were. Still, we usually do the walk in 55 to 56 minutes and it took us 1 hour: a four-minute difference.

Today was another story. It started the same, with just us in the parking lot, but the lot was very slushy and slippery, so Dave and I put our over-shoe cleats on right away.

Walking conditions this time were much harder to tackle. One thing that added to the challenge was that we had to stop at least 10 times for Dave to adjust his cleats, which kept slipping off of his shoes. I don’t think Strava counted all the adjustment stops because when I stopped my timer it said 1 hour and 14 minutes, but it saved the time as 1 hour and 11 minutes.

That’s a significantly slower time due to the slippery, slushy conditions. We don’t mind, it wasn’t a race, and we love the opportunity to be together, get some exercise, and also feel the accomplishment of ‘just doing it’ even when conditions are less than favourable. But one thing that was quite clear was that my cleats provided a much better experience than Dave’s. In essence, my cleats were a tool that I used, but didn’t have to think about, didn’t have to manage. I put them on at the start, they did their job, and I took them off at the end. Dave’s cleats needed his attention. They took away from the flow of the experience… they interrupted our walk.

Don’t get me wrong, this wasn’t a big deal; it didn’t ruin our walk or anything like that, the cleats simply required our attention. On the way down Dave suggested that we think about a metaphor for the experience, and the best one we came up with was, “Sometimes it’s worth getting a great tool instead of accepting and tolerating the use of a good tool.”

The cleats I own were just $21 on Amazon, only a few dollars more than the ones Dave has. The cost difference isn’t much, but the experience is so much better. Unfortunately, after our walk last week I forgot to share the link with Dave until yesterday, so while he’ll get his by Monday and be ready for next week, they didn’t come in time for today’s walk.

It’s a good lesson to think about though. Sometimes we just use a tool because it’s the one we have, the one we’ve always used, or the one that is easy to access, rather than seeking the best tool for the job. Sometimes it’s worth the time and research, and/or the extra cost, to get a tool that does the job extremely well… and reduce the challenges of using a less than ideal tool.

In the grand scheme of things, we’ll probably only need these cleats 1-3 more times this entire year, and if Dave stuck with his, it wouldn’t be a big deal. But there are things in our lives that we readily tolerate that could become ‘invisible’ and require less of our time, energy, focus, and attention… working seamlessly because we have found the right tool for the job.

Not so techie

I’ve shared before that I’m not as tech savvy as most people think. The reality is that I’m just willing to spend a lot of time getting to the bottom of an issue, and so my savviness has more to do with patience than with prowess. That said, I’m getting very frustrated with the technology challenges coming my way that I can’t solve. A couple of days ago the WordPress App stopped working. I could no longer save anything on it, and so I couldn’t write posts on my phone. I deleted and re-installed the app, I tried logging in with my back-up access account, and then I gave up and finally moved to the Jetpack App that I had been begrudgingly avoiding. I didn’t want to make the switch because it forces block editing, which I think is clunky and works against me rather than helping me with my writing. Now that app won’t work with my blog either. Maybe that’s a good thing, because I wanted to write on my laptop rather than my phone, so this might be the push that I needed.

Still, this wasn’t my only technology challenge this week, or even today. My wife is with her parents and her dad can’t access his Shaw mail. It has to be an issue with his machine, because my wife can access the account on her phone and I can access it on my computer. The account uses web-based access, so I suggested updating Chrome, and then we tried Firefox; he can log into the account, but on either browser he can’t click on any of the items in his inbox to read them. The fact that I’m trying to give support over FaceTime doesn’t make it any easier. I have TeamViewer (to take over a computer remotely) on my mother-in-law’s computer, but not on my father-in-law’s, and while I’ll set that up soon, I didn’t feel like doing it for what I thought was a minor issue. With my wife there, the support itself went quickly, even if we couldn’t figure out the issue.

So here is my little rant: why does it seem that more things are breaking than working these days? I have to manually share my blog posts on social media because the tools I try to use (and have even paid for) don’t work consistently. My wife gets a new phone and I spend a week fixing issues that were not a problem with the old phone. I upload a new plugin (after the issue with the WordPress login – yes, I had already thought about that being an issue), and it takes an hour to move from the free version to the paid one. I get stuck on a technical issue and Google searches seem less helpful than they used to be. I buy a new toaster oven and the extra features make it harder to use and less convenient than the old one. I can’t decide if I’m getting old and curmudgeonly, or if things are being made less convenient and harder to repair.

In any event, I’m not feeling so techie right now. I seem to be coming across issues that are too hard for me to fix, and my patience is thinning. Cue the memes about old people not understanding technology… I hope that’s not me despite my little rant.

Sensing our world

I’m going to need glasses and hearing aids in the next few years. I already use small magnification readers when text is small or my eyes are fatigued, and I know that my hearing has diminished. One example of my hearing issue is that when we shut down our gas fireplace it beeps, but I don’t hear the beep. That sound is no longer in the range that I can hear. I only know it’s there because my wife and daughter mentioned it.

It’s only in relatively recent history that we’ve had access to technologies that allow us to enhance our senses when they fall below ‘normal’ capabilities. Before that we just lived less vivid lives as our senses worsened.

Having my family ask me, ‘Can’t you hear that?’ while I hear nothing but their voices, knowing full well that I’m missing something, is a little disconcerting. How are they experiencing a sound that is outside the range of my capability? But the reality is that there are sounds they can’t hear either, which dogs and other animals can.

This makes me wonder what our world really looks and sounds like. What are we incapable of seeing and hearing, and how does that alter our reality? And for that matter, how do we perceive the world differently, not just from other species but from each other? We can agree that certain colours on a spectrum are red, green, and blue, but is my experience of blue the same as yours? If it were, wouldn’t we all have the same favourite colour?

A few years back I had an eye condition that affected my vision at the focal point of my left eye. Later, I accidentally discovered that this eye doesn’t distinguish between some blues and greens, but only at the focal point. I learned this playing a silly bubble-bursting game on my phone. Without that game I might never have realized this limitation of my vision, and would have remained ignorantly blind to it.

That’s the thought of the day for me: how are we ignorantly blind to the limitations of our senses? What are we missing that our world tries to share with us? How will technology aid us in seeing what can’t be seen, and hearing what we can’t usually hear… beyond what we have already accomplished in detecting? Our houses have carbon monoxide detectors, and we have sensors for radiation that are used in different occupations. We have sensors that detect infrared light, and accurately measure temperature and humidity. This kind of sense-enhancing technology isn’t new.

Still, while we have sensors and tools to detect these things for us, we can’t fully experience aspects of our world that are present but undetectable by our senses. It makes me wonder just how much of our world we don’t experience. We are blessed with amazing senses, and we have some incredible tools to help us observe the world in greater detail, but what are we missing? What are we ignorantly (or should I say blissfully) unaware of?

Nature-centric design

I came across this company, Oxman.com, and it defines Nature-centric design as:

Nature-centric design views every design construct as a whole system, intrinsically connected to its environment through heterogeneous and complex interrelations that may be mediated through design. It embodies a shift from consuming Nature as a geological resource to nurturing her as a biological one.
Bringing together top-down form generation with bottom-up biological growth, designers are empowered to dream up new, dynamic design possibilities, where products and structures can grow, heal, and adapt.

Here is a video, Nature x Humanity (OXMAN), that shows how this company is using glass, biopolymers, fibres, pigments, and robotics for large-scale digital manufacturing, rethinking architecture to be more in tune with nature and less of an imposition on our natural world.


This kind of thinking, design, and innovation excites me. It makes me think of Antoni Gaudí-styled architecture, but with materials and designs that are less about pure aesthetics and more about symbiosis: constructions that help us share our world with other living organisms, rather than imposing themselves like a cancer, scarring and damaging the very environment that sustains our lives.

Imagine living in a building that allows more natural airflow, is cheaper to heat and cool, and has a positive emissions footprint, while also making you feel like you are in a natural rather than a concrete environment. Fewer corners, less uniformity, and ultimately less institutional homes, schools, and office buildings; ones that are more inviting, more naturally lit, and more comfortable to be in.

This truly is architectural design of the future, and it has already started… I can’t wait to see how these kinds of innovations shape the world we live in!

Digital distraction

Last night we went out for a wonderful dinner. In the restaurant we had a booth next to a round table with a mother and three daughters. I’d guess the kids’ ages to be about 7, 12, and 14. My youngest daughter was sitting next to me and whispered, “They are all on devices.”

When I looked, the 7-year-old had an anime video playing on her laptop, which was about 8-10 inches (20-25cm) from her face. The 12-year-old had over-ear headphones on and was endlessly scrolling social media. The 14-year-old was opposite me, on the far side of her mom, and all I could see was that she had one earbud in and was bouncing between drawing (she definitely has some art skills) and scrolling on her phone.

The whole table sat in what was mostly silence, eating slowly. This continued from the time they sat down until we left the restaurant.

My daughter then pointed out the table behind us, where a boy of about 5 had his face over a tablet, lit up by its glow because he was so close to it.

It’s the era of digital babysitting and digital distraction… but distraction from what? Mealtime, family time, conversation, social engagement? …All of the above.

I think this form of distraction is fundamentally changing the way we socialize, and that will affect our sense of family, community, and culture.

What happens when our screens become more important than the people around us?