Writing is my artistic expression. My keyboard is my brush. Words are my medium. My blog is my canvas. And committing to writing daily makes me feel like an artist.
I am still a little freaked out by how good Google NotebookLM’s AI ‘Deep dive conversations’ are. The conversations are so convincing. The little touches it adds, like extended pauses after words like ‘and’, are an excellent example of this.
In the one created for my blog, the male voice asked, “It actually reminds me, you ever read Atomic Habits by James Clear?” And the female voice’s response was, “I haven’t. No.”
Think about what’s happening here to keep the conversation feeling genuine. The male voice can now make a point and provide the female voice with ‘new’, previously unknown information. But this whole conversation is actually generated by a single AI.
How soon before you have an entire conversation with a customer service representative, oblivious to the fact that you are actually talking to an AI? Or watch a newscast or a movie unaware that the people you are watching are not really people?
I shared close to 2,000 blog posts I’ve written into the notebook. If I shared my podcasts too and it replicated my voice, I wonder how long it would be before a digital me could be set to write my posts and then simultaneously do live readings of them on my blog? Writing and sounding just like me… without me having to do it!
As a scary extension of this, could I learn something from the new content that it produces? Could I gain insights from the digital me that I would struggle to come up with myself?
This is just the beginning. How much of the internet is going to end up being AI generated and filled with AI reactions and responses to other AIs? And how much longer after that before we notice?
Today I added all the posts on my Daily-Ink blog, from the time I started blogging daily in July 2019 up to yesterday, into Google NotebookLM. The first thing I did was to ask it ‘What does David Truss write about on his Daily-Ink Blog?’ I’ll share the response below, but what I really wanted to do was to create an Audio Overview, a ‘Deep dive conversation’ that is basically two AI ‘personalities’ discussing the content I shared.
I am blown away by both the conversational aspect of this AI dialogue and the way it stuck to a single theme of habits while covering a fair bit of what I wrote about.
I’m going to be playing with this tool a lot more in the future… and this is just the beginning of the use of AI to summarize and help us learn from large amounts of information. Think of the possibilities to come!
__
As I mentioned above, the first question I asked on Google NotebookLM was, ‘What does David Truss write about on his Daily-Ink Blog?’ and this was the response:
David Truss writes about a variety of topics in his blog Daily-Ink, but common themes include:
● Education: This is a frequent topic for Truss, who is an educator. He has blogged about topics like inquiry-based learning and teaching, blended learning and distributed learning, leadership and innovation in education, and student voice and choice. For instance, in one post, he discusses his visit to Inquiry Hub Secondary, a school where “students design the school” themselves.
● Technology: Truss often discusses technology in his blog posts. Specific topics of interest include blended learning, distributed learning, and artificial intelligence. He also uses his blog to document his experiences with new technologies. For example, he has written about DALL-E, an AI image generation tool, and Bing Copilot, an AI assistant.
● Personal Reflections: Beyond his professional interests, Truss also writes about his personal life, sharing reflections on topics such as relationships, personal growth, mindfulness, and daily routines. For example, he writes about the challenges of maintaining routines during the pandemic and reflects on the value of taking walks. He also frequently documents his journey with archery and maintaining an active lifestyle.
● The Blogging Process: Truss seems to view blogging as a personal practice and frequently shares his thoughts on the writing process itself. He sees writing as an art form and his blog as his canvas. He discusses his commitment to writing daily, the challenges he faces, and the satisfaction he derives from it.
In addition to these recurring themes, Truss also uses his blog to:
● Share Links and Resources: Truss frequently shares links to articles, videos, and other resources that he finds interesting or valuable. He often provides context for these links, explaining why he finds them noteworthy and encouraging his readers to explore them further.
● Engage in Conversations: Truss uses his blog as a platform for engaging in conversations with his readers. He actively responds to comments, asking for feedback and acknowledging different perspectives.
Overall, David Truss uses his Daily-Ink blog as a space for reflecting on his experiences, exploring his interests, and connecting with others. He is open about his process, his struggles, and his growth, offering a glimpse into his life and his thoughts.
I use AI images to accompany roughly 18 to 19 out of 20 Daily-Ink posts. My general rule is that I’ll try one or two requests and pick from those. I don’t want to spend 10 or 15 minutes of my precious morning schedule searching for images; they are the side quest, my writing is the adventure.
However, it being the weekend, yesterday I had time to play… and yet I failed.
When that didn’t work, I got more and more detailed, even pausing the requests to ask Copilot (which uses DALL•E 3 to create images) if it knew what a water polo cap looked like. It described one perfectly… then I had it reiterate my request before continuing. This is what I got:
But the caps still came out with helmet masks and at no point was the shooter facing the net. I finally gave up and cropped an image. Here is what I used, and then the full image:
Below are many of the fails. I recognize these are not common requests, and the images have some redeeming qualities, but there is still a way to go when it comes to AI text-to-image requests. So, when you see a less-than-perfect image added to my Daily-Ink posts, please recognize that I’m trying, but I’m not wasting time trying to get everything just right… I’d rather use that time to write, meditate, or exercise.
No matter how I look at it, this feels creepy and dystopian. Even when I think of positive uses, like helping someone with special needs or providing emotional support for someone with anorexia, the idea of this all-seeing AI friend seems off-putting.
Even this advertising doesn’t resonate well with me. In the scene with the guys playing video games, the boy wants to check in with his digital friend rather than pay attention to his friends in the room. And in the final scene with the girl and boy on the roof, I thought at first the girl was candidly trying to take a photo of the boy, but then realized she was just fighting the urge to converse with the AI friend. Either of those scenarios feels like she has replaced a phone distraction with a more present and more engaging distraction… from life.
A lot of new artificial intelligence tools are on their way, and I’m excited about the possibilities, but this one has a high creep factor and doesn’t seem to add the value I think it intends to.
After discussing the fact that people stuck in an echo chamber of like-minded people will start to call a real photograph an AI-generated fake… he says,
“Here’s what keeps me up at night: We’re converging on a point where it is easier to claim that real images are fake than it is to prove that images are generated using AI, or manipulated using AI. And that means we have no reasonable expectation of any image or any video or any audio being real. And we don’t have the tools or the media literacy to really do this analysis.
…and we are in the situation we’re in now where people can choose their own reality and live in a reality dysfunction. And AI provides the tools and capabilities to make that reality dysfunction into our lived reality.”
Indeed, our shared reality has collapsed. AI-generated fakes spread like wildfire through echo chambers of like-minded groups, and even when they are discovered to be fake, there is no effort to make corrections if the fake fits the group’s narrative… and any real media that doesn’t fit that same reality is easily dismissed as fake.
Maya Angelou said, “We are more alike, my friends, than we are unalike.” I would have agreed with that when we had a common shared reality, but I question it now in a world filled with AI-generated fakes and lacking the media savvy to determine what really is real. The collapse of a shared reality is a threat to our world, whether the split is socioeconomic, political, or religious. We are growing increasingly unalike.
“The mass media serve as a system for communicating messages and symbols to the general populace. It is their function to amuse, entertain, and inform, and to inculcate individuals with the values, beliefs, and codes of behavior that will integrate them into the institutional structures of the larger society. In a world of concentrated wealth and major conflicts of class interest, to fulfill this role requires systematic propaganda.”
~ Edward Herman & Noam Chomsky, Manufacturing Consent, 1988
Brace yourself. We are in for a whirlwind of propaganda designed to both sway and embolden our beliefs. We will need to question the sources of our information. We will need to understand where the bias of the message is coming from. We will need to fact-check for reliability, accuracy, and exaggeration.
We will be entertained. We will be angered. We will be emboldened. We will be ridiculed by those who disagree with us. And we will be the ones ridiculing others who hold different beliefs than us. Systematic propaganda will slowly lead us to more polarized views.
This is not a test of your emergency broadcasting system. This is also not an emergency. No, it is the emergence of political propaganda in a post-truth era. Find your own truth, fabricate your own truth. Because the media outlets you believed in are not the ones in existence today… and they do not transmit Truth.
Marshall McLuhan was correct: ‘the medium is the message’, and the medium is designed not to inform but to entertain; to excite; to anger; to draw attention, clicks, and eyes to advertising. No, not to inform you but to trick you, sway your opinion, and lie to you.
Are your beliefs your own, or have they been manufactured, manipulated, and swayed by the media you watch? Will you be able to answer that question as convincingly a year from now? Will your beliefs be yours, or will they be governed by the propaganda you choose to watch and believe in? Be warned: the answer to that question might not be the answer you currently believe.
The future is here. In this quick video, Nvidia CEO Breaks Down Omniverse, Jensen Huang discusses a virtual space where robots and Artificial Intelligence (AI) practice and rehearse their actions before trying things out in the real world. Specifically, he discusses car manufacturing and trying out designs of both machines and factories before physically building them, reducing rebuilding and redesign time.
This is a game changer in the design not just of systems but of physical things. The design phase of new products just got a steroid boost, and the world we know is no longer the world we live in.
We are now in an era of AI-assisted design and manufacturing that is going to explode with amazing new products. Robots using AI in a virtual omniverse, trying out creative new ways to build items faster and more efficiently. Robots building robots that are tested in a virtual world, tweaked by AI, and retested virtually, tweaking the design of the very robots that will build the new robots. Let that last sentence sink in… robots and AI redesigning other robots and AI… machines designing and building machines.
But that in itself isn’t the steroid boost. The real power comes from practicing everything first in this virtual omniverse. Trying out the physics and dynamics of new tools in an environment where they can be tested thousands and even millions of times before anything is actually built. This is where learning is accelerated, and where things will move faster than we’ve ever seen before.
Products used to take years of development to be built, but now much of that time will be spent virtually… with iterations happening not sequentially but simultaneously. Development and production design that once took years can happen in moments. With the omniverse we are going to see an acceleration of design and production that will make the next few years unrecognizable.
It makes me wonder: what amazing new products await us in the next 5-10 years?
I just watched a video where the new ChatGPT-4o seamlessly translated a conversation between an Italian and an English speaker. I know this isn’t the first tool to do this, but it’s the first example that made me think about how useful this is. It gave me the realization that instant language translation will revitalize diversity of language.
In my travels, I’ve noticed that English is becoming more and more widespread. Not everyone knows English, but recently in both France and Spain I had far fewer challenges communicating than in my travels to France 12 years earlier. I think this stems from a move towards everyone desiring to speak a common language. Want to be able to talk to people in most parts of the world? Learn English.
But maybe that desire will diminish now. If I can speak in my mother tongue and someone who speaks English hears a seamless translation, do I really need to learn English? Maybe in the future people will be less likely to pick up a new language. Will we see a slowdown in the acquisition of English?
While I think we’ll see this shift, it won’t be drastic. Yet I can see both positives and negatives to it. A positive is that people will be more likely to hold on to the language of their heritage. A negative is that in countries with high immigration, the motivation to learn the country’s home language might diminish. While this won’t necessarily cause communication issues, since these AI tools can help, it could potentially undermine the social fabric of the country.
And maybe that’s not as big a concern as I’m making it out to be?
Still, I’m excited about the ease with which I’ll be able to travel to countries where the primary language isn’t English. I look forward to having conversations I could not have previously had. Tools like this make almost every person in the entire world a possible acquaintance, colleague, and friend. That’s a pretty exciting thing to think about.
~~~
As an aside, a lot of AI image creators still have issues with text, as the image accompanying this post demonstrates. This was my prompt: An English, Spanish, and French person sitting at a table, each saying “Good Morning” in their own language, in a speech bubble.
It’s interesting how the idea that ‘learning can be fun’ has been translated into the gamification of education, which in turn has devolved into making games that are essentially about practice pages that are ‘fun and interactive’.
I think AI has the ability to change this. Learning can be less about practice questions and more about deeper learning. Instead of playing a game with progressively harder, very predictable levels, the learning could authentically go where a student is interested. Two students could start the same entertaining journey but end up learning and achieving vastly different outcomes. Not just higher math skills, but practical learning: a puzzle about the wiring of some gadget could lead to learning basic electronics, which in turn could lead to learning about electrical engineering.
The more common approach in machine-assisted learning is to have specific goals and be responsive to the learner’s ability. The more advanced approach is to have general objectives and be responsive to the learner’s interests.
It’s not just the outcomes of these that are drastically different, it’s also the entire approach to what it means to say, ‘Learning can be fun’.
I’m excited to see how Siri will be updated with the advancements seen in Artificial Intelligence. AI has come a long way and I think it’s time Siri got a serious upgrade.
I will often ask Siri a question and the response I get is, “Here is what I found,” with web links from a simple Google search.
I want Siri to give me the details in a conversation. I want Siri to ask me follow-up questions so its response is better. I want Siri to figure out better searches based on my previous lines of questioning. I want a fluid conversation, not just a simple and often unhelpful question and response.
Essentially, I want a Siri that feels less like a voice response to a simple query and more like a personal assistant. When is this upgrade going to happen?