
How good, how soon?

I am still a little freaked out by how good Google NotebookLM’s AI ‘Deep Dive’ conversations are. The conversations are so convincing. The little touches it adds, like extended pauses after words like ‘and’, are an excellent example of this.

In the one created for my blog, the male voice asked, “It actually reminds me, you ever read Atomic Habits by James Clear?” And the female voice responded, “I haven’t. No.”

Think about what’s happening here in order for the conversation to continue in a genuine way. The male voice can now make a point and give the female voice ‘new’, previously unknown information. But the whole conversation is actually generated by a single AI.

How soon before you have an entire conversation with a customer service representative, oblivious to the fact that you are actually talking to an AI? Or watch a newscast or a movie, unaware that the people you are watching are not really people?

I shared close to 2,000 blog posts I’ve written into the notebook. If I shared my podcasts too and it replicated my voice, how long would it be before a digital me could write my posts and then do live readings of them on my blog? Writing and sounding just like me… without me having to do it!

As a scary extension of this, could I learn something from the new content that it produces? Could I gain insights from the digital me that I would struggle to come up with myself?

This is just the beginning. How much of the internet is going to end up being AI-generated and filled with AI reactions and responses to other AIs? And how much longer after that before we notice?

A (creepy) digital friend

What is Friend? Watch this reveal trailer.

No matter how I look at it, this feels creepy and dystopian. Even when I think of positive uses, like helping someone with special needs or providing emotional support for someone with anorexia, the idea of this all-seeing AI friend seems off-putting.

Even this advertising doesn’t resonate well with me. In the scene with the guys playing video games, the boy wants to check in with his digital friend rather than pay attention to his friends in the room. And in the final scene with the girl and boy on the roof, I thought at first the girl was candidly trying to take a photo of the boy, but then realized she was just fighting the urge to converse with the AI friend. Either scenario feels like a phone distraction has been replaced with a more present and more engaging distraction… from life.

There are a lot of new artificial intelligence tools on their way, and I’m excited about the possibilities, but this one has a high creep factor and doesn’t seem to be adding the value I think it intends to.

You’ve made good time

It wasn’t a question, but rather a statement: “You’ve made good time.”

We were on our way to holiday in Kelowna and our youngest daughter was spying on us. Well, not really spying; that suggests something clandestine, and this was fully consensual. We share our locations with each other on our phones.

I think my daughter uses it on my wife and me more routinely than we do on her, and that’s perfectly ok with me. I tend to use it when I’m headed to bed and she’s not home yet, and sometimes when I’m the first one home from work and wondering where everyone else is.

I’ll sometimes get texts from my daughter that say, “You’re still at work?”, and I know that again is more of a statement than a question… she checked my location before asking. Then the conversation moves to dinner plans or evening plans, and maybe even a request for a drive so that she doesn’t have to take her car somewhere she may be having drinks. Again, this is perfectly fine with me.

I can see how this tool can be weaponized by a controlling parent or spouse, but in the hands of mutually respectful people it is really handy. It allows us to connect and feel connected, even when we are headed on holidays. And it changes the conversation from ‘Where are you?’ to the follow up questions that matter more.

It shouldn’t be this hard

I get really frustrated when things don’t work like they should. I’m entering medical claims into my online insurance claim form and the form won’t let me upload the requested evidence of receipts. Is my file too big? No. How do I know? Because it’s my second time through and I’ve made the file smaller this time.

I’ve refreshed the page and restarted my claim from scratch. I’ve made the file into a different format… and I’m watching the little spin-wheel loading symbol go around and around and around. I’m now going to start again on a different browser.

It amazes me how, in such a technologically advanced age, we run into issues like this so often. I’ve had people tell me they wanted to leave a comment on my blog but couldn’t figure out a way to sign in. But if I don’t require a sign-in by email, Facebook, or WordPress, then I’m inviting spam messages. It shouldn’t be a process that doesn’t work; people sign in to things all the time.

I’m now on a second web browser and the file still isn’t uploading. I’m going to give up and try again later. Maybe restart my computer first. Maybe reduce the file size some more. Basically I’m going to waste a whole bunch of time doing something that should have been finished over 30 minutes ago.

It really shouldn’t be this hard. I feel for elderly people who run into issues like this, then spend 45 minutes on hold waiting to ask for help, then get even more flustered trying to follow instructions over the phone. Maybe AI will help, eventually, but I see things getting more frustrating rather than better in the short term. It all boils down to bad user experience and ultimately bad customer service.

(And as a final thought, I was trying to cut/paste a few words in the last sentence on my blogging app and my highlight feature froze on the wrong words… I had to save the draft to be able to do anything else. A small inconvenience but still, one of those little things that should just work!)

The Nvidia omniverse

The future is here. In this quick video, Nvidia CEO Breaks Down Omniverse, Jensen Huang discusses a virtual space where robots and Artificial Intelligence (AI) practice and rehearse their actions before trying things out in the real world. Specifically, he discusses car manufacturing and trying out designs of both machines and factories before physically building them, reducing rebuilding and redesigning time.

This is a game changer not just in the design of systems but in the building of physical things. The design phase of new products just got a steroid boost, and the world we know is no longer the world we live in.

We are now in an era of AI-assisted design and manufacturing that is going to explode with amazing new products. Robots using AI in a virtual omniverse, trying out new creative ways to build new items faster and more efficiently. Robots building robots that are tested in a virtual world, tweaked by AI, and retested virtually, tweaking the design of the very robots that will be building the new robots. Let that last sentence sink in… robots and AI redesigning other robots and AI… machines building and designing machines.

But that in itself isn’t the steroid boost. The real power comes from practicing everything first in this virtual omniverse world. Trying out the physics and dynamics of the new tools in an environment where they can be tested thousands and even millions of times before actually being built. This is where the learning is accelerated, and where things will move so much faster than we’ve ever seen before.
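
For the technically curious, the heart of that idea can be sketched as a simple loop: propose a design, score it in simulation, keep the best, repeat thousands of times before anything gets physically built. This is only an illustrative sketch in Python; the simulator and the design-tweaking step here are hypothetical stand-ins, not Nvidia’s actual Omniverse tools.

```python
import random

def simulate(design: dict) -> float:
    """Hypothetical physics simulation that scores a design (a stand-in, not Omniverse)."""
    # Toy scoring: reward throughput, penalize cost.
    return design["throughput"] - 0.5 * design["cost"]

def propose_variation(design: dict) -> dict:
    """Hypothetical AI step: nudge the design slightly and see if it scores better."""
    return {
        "throughput": design["throughput"] + random.uniform(-1, 1),
        "cost": max(0.0, design["cost"] + random.uniform(-1, 1)),
    }

best = {"throughput": 10.0, "cost": 8.0}
best_score = simulate(best)

# Thousands of virtual trials before a single physical prototype is built.
for _ in range(10_000):
    candidate = propose_variation(best)
    score = simulate(candidate)
    if score > best_score:
        best, best_score = candidate, score

print(f"Best virtual design after 10,000 trials: {best} (score {best_score:.2f})")
```

The point isn’t the code itself; it’s that the expensive part, building and testing, gets replaced by something that is nearly free to run millions of times.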

Products used to take years of development to be built, but now a lot of that time is going to happen virtually… and with iterations happening not sequentially but simultaneously. So development and production design that once took years can now happen in moments. With the omniverse we are going to see an acceleration of design and production that will make the next few years unrecognizable.

It makes me wonder: what amazing new products await us in the next 5-10 years?

AI and languages

I just watched a video where the new GPT-4o seamlessly translated a conversation between an Italian and an English speaker. I know this isn’t the first tool to do this, but it’s the first time I’ve seen an example where I thought about how useful this is. It gave me the realization that instant language translation will revitalize diversity of language.
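
For anyone curious what the plumbing might look like, here is a rough sketch of text-based translation through a large language model. The real GPT-4o demo was live voice, which is far more involved; this is just the text version of the idea, and it assumes the openai Python client with an API key set in the OPENAI_API_KEY environment variable. The model name and prompt are illustrative choices, not a recipe from the video.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def translate(text: str, source: str, target: str) -> str:
    """Translate text between two languages using a chat model (illustrative sketch)."""
    response = client.chat.completions.create(
        model="gpt-4o",  # assumed model name for this sketch
        messages=[
            {"role": "system",
             "content": f"You are an interpreter. Translate the user's {source} into natural {target}."},
            {"role": "user", "content": text},
        ],
    )
    return response.choices[0].message.content

# One turn of an Italian-to-English conversation.
print(translate("Buongiorno! Come stai oggi?", "Italian", "English"))
```

Each turn of a conversation would just be another call in the opposite direction, with speech-to-text and text-to-speech layered on top for the live voice experience.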

In my travels, I’ve noticed that English is becoming more and more widespread. Not everyone knows English, but recently in both France and Spain I had far fewer challenges communicating compared to my travels to France 12 years earlier. I think this stems from a move towards everyone desiring to speak a common language. Want to be able to talk to people in most parts of the world? Learn English.

But maybe that desire will diminish now. If I get to speak in my mother tongue and someone who speaks English can hear a seamless translation, do I really need to learn English? Maybe in the future people will be less likely to pick up a new language? Will we see a slowdown in the acquisition of the English language?

While I think we’ll see this shift, it won’t be drastic. Yet I can see both positives and negatives to this. A positive is that people will be more likely to hold on to the language of their heritage. A negative could be that in countries with high immigration there will be less incentive to learn the country’s primary language. While this won’t necessarily cause an issue communicating, since these AI tools can help, it could potentially undermine the social fabric of the country.

And maybe that’s not as big a concern as I’m making it out to be?

Still, I’m excited about the ease with which I’ll be able to travel to countries where the primary language isn’t English. I look forward to having conversations I could not have previously had. Tools like this make almost every person in the entire world a possible acquaintance, colleague, and friend. That’s a pretty exciting thing to think about.

~~~

As an aside, a lot of AI image creators still have issues with text, as the image accompanying this post demonstrates. This was my prompt: An English, Spanish, and French person sitting at a table, each saying “Good Morning” in their own language, in a speech bubble.

Ashtrays and Newspaper Racks

If you are Gen X, then at some point in your schooling you probably made your parents an ashtray out of clay. I did, and my parents didn’t even smoke. And if you were in a woodworking class you probably made some sort of newspaper or magazine rack, something your parents might have had in your living room. Depending on how good it was, this wooden creation may or may not have been as prominently displayed in your house as the ashtray. But these were a couple of ‘practical’ things we made in school ‘back in the day’.

Both my daughters, who went to different middle schools, made gum ball machines out of wood, which used a mason jar to hold the gum balls. And I think for both of them the other option was a birdhouse. These were their versions of ashtrays and newspaper racks.

I bet most kids today will come home from school at some point with a 3D printed keychain. Most houses don’t have ashtrays, or newspapers or even magazines. Most parents wouldn’t know where to go to buy loose gum balls to put in a school made gum ball machine. Times change and so do the crafts students create at school.

Some of the other things students might (and do) create at school these days include apps, websites, and online businesses. These are the modern-day ashtrays: a bit more practical, and a lot more relevant. That said, I hope kids still get a chance to work with clay and wood. I still want to see art that is 3D but not 3D printed. No one needs a newspaper rack or gum ball machine, but bird houses can still be made.

There are cookie-cutter style ‘everyone makes the same design’ kinds of bird houses, and then there are versions of the same project that are open to design thinking and personalization. And it really doesn’t have to be a bird house… just a hands-on creation using tools rather than a keyboard. But when I said, “I still want to see art that is 3D but not 3D printed,” I should also have mentioned that I want kids to 3D print things too.

The message of this little, nostalgic visit down memory lane isn’t just to say bring back the old hands-on projects, and do away with the new ones. Rather it’s to say we need both. We need students creating physical crafts, with their hands, at school and we need them designing new digital products with new tools as well. I’d be a bit concerned if kids today came home with ashtrays, but I’d still love to see them producing creative works that involve building and creating physical things with their hands.

I also wonder what the 2050 version of the school-made ashtray will be.

Ancient Wisdom

Watch this video about tomorrow’s solar eclipse.

Predicting the next total eclipse is not a simple math problem; it involves several independent factors. Yet, as the video mentions, an ancient Babylonian tablet tracked all the solar eclipses from 347 to 258 BCE.

It makes me wonder about the wisdom of some ancient civilizations. What did they know that has been lost to us? From medicine to space to science, what knowledge was previously discovered and has since had to be rediscovered and relearned?

And what did the ancients know that we still don’t know?

How soon, Siri?

I’m excited to see how Siri will be updated with the advancements seen in Artificial Intelligence. AI has come a long way and I think it’s time Siri got a serious upgrade.

I will often ask Siri a question and the response I get is, “Here is what I found,” with web links from a simple Google search.

I want Siri to give me the details in a conversation. I want Siri to ask me follow-up questions so its response is better. I want Siri to figure out better searches based on my previous lines of questioning. I want a fluid conversation, not just a simple and often unhelpful question and response.

Essentially I want a Siri that feels less like a voice response to a simple query and more like a personal assistant. When is this upgrade going to happen?

It’s already here!

Just yesterday morning I wrote:

Robots will be smarter, stronger, and faster than humans not after years of programming, but simply after the suggestion that the robot try something new. Where do I think this is going, and how soon will we see it? I think Arthur C. Clarke was right… the most daring prophecies seem laughably conservative.

Then last night I found this post by Zain Khan on LinkedIn:

🚨 BREAKING: OpenAI just made intelligent robots a reality

It’s called Figure 01 and it’s built by OpenAI and robotics company Figure:

  • It’s powered by an AI model built by OpenAI
  • It can hear and speak naturally
  • It can understand commands, plan, and carry out physical actions

Watch the video below to see how realistic its speech and movement abilities are. The ability to handle objects so delicately is stunning.

Intelligent robots aren’t a decade away. They’re going to be here any day now.

This video, shared in the post, is mind-blowingly impressive!

This is just the beginning… we are moving exponentially fast into a future that is hard to imagine. Last week I would have guessed we were 5-10 years away from this, and it’s already here! Where will we really be with AI robotics 5 years from now?

(Whatever you just guessed is probably laughably conservative.)