Tag Archives: future

almost free

The internet needs a makeover. I remember when I wanted to make a fun certificate or a personalized card, I could just do a Google search and find a free resource. Now when you do it, the top 10+ sites found in the search all require you to register, login, sign up, or sign in with Google or Facebook. Don’t worry, your first 30 days are free, or you’ll need to put your email in to get promotional spam sent to your inbox.

I get it. It costs money to run a website. I know, I pay to keep DavidTruss running and thanks to some affiliate links I’ve made about $35-$40 over the past 15 years. Add another $15 if you include royalties from my ebook, which I give away free everywhere except on Amazon where I couldn’t lower the price. This is my sarcastic way of saying that I don’t make any money off of my blogging and I actually have to pay to keep it running. That’s fine for me, I don’t do this for an income, but most websites need a flow of cash coming in to keep them going.

But no matter how you look at it, things on the internet have gotten a lot less free over the past decade. My blog’s Facebook page doesn’t make it onto most people’s stream because I don’t pay to boost the posts. Twitter, since it became X, has been all about seeing paid-for blue check profiles and my stream feels like it caters to ‘most popular or outlandish tweets’ rather than people I actually enjoy following. Even news sites are riddled with flashy advertising and gimmicky headlines to keep your eyes on those ads.

There needs to be a way to keep things ‘almost free’ on the internet, while not inundating us with attention-seeking ads, or making us register and give away our email address to be spammed by promotional messages we don’t want. I think it will come. I think there will be an opportunity to choose between ads or micropayments. Read the kind of news you want or listen to a podcast for a penny. Like what you read/hear? Give a dime, or a quarter, or even a dollar if you really like it. People are already donating this way on live events on YouTube and Twitch and similar sites; it just needs to get to the point where it’s happening on any web page. I’d rather pay a tiny bit than be inundated with ads. It’s coming, but not before it gets worse… we now have ads coming to Netflix and Prime, and they want us to pay MORE to avoid them. The model is still about exploitation rather than building a fan base. Subscriptions will dominate for a while, and so will models that upsell you to reduce the clutter… but eventually, eventually we will see the return of the ‘almost free’.

Conversational AI interface

A future where we have conversations with an AI in order to have it do tasks for us is closer than you might think. Watch this clip (full video here):

Imagine starting your day by asking an AI ‘assistant’, “What are the emails I need to look at today?” Then saying something like, “Respond to the first 2 for me, and draft an answer for the 3rd one for me to approve. And remind me what meetings I have today.” All while on the treadmill, shaving, or even showering.

The AI calculates the calories you used on your treadmill, tracks what you eat, and even gives suggestions like, “You might want to add some protein to your meal. May I suggest eggs? You also have some tuna in the pantry, or a protein shake if you don’t want to make anything.”

Later, the ever-present AI is in the room with you during a meeting, and afterwards you request of it, “Send us all a message with a summary of what we discussed, and include a ‘To Do’ list for each of us.”

Sitting for too long at work? The AI could suggest standing up, or using the stairs instead of the elevator. Hungry? Maybe your AI assistant recommends a snack because it read your sugar levels off of your watch’s health monitor, and it does this just as you are starting to feel hungry.

It could even remind you to call your mom, or do something kind for someone you love… and do so in a way that helps you feel good about it, not like it’s nagging you.

All this and a lot more without looking at a screen, or typing information into a laptop. This won’t be ready by the end of 2024, but it’s closer to 2024 than it is to 2030. This kind of futuristic engagement with a conversational AI is really just around the corner. And those ready to embrace it are really going to leave those who don’t behind, much like someone insisting on horse travel in an era of automobiles. Are you ready for the next level of AI?

What are you outsourcing?

Alec Couros recently came to Coquitlam and gave a presentation on “The Promise and Challenges of Generative AI”. In this presentation he had a quote, “Outsource tasks, but not your thinking.”

I just googled it and found this LinkedIn post by Aodan Enright. (Worth reading but not directly connected to the use of AI.)

It’s incredible what is possible with AI… and it’s just getting better. People are starting businesses, writing books, creating new recipes, and in the case of students, writing essays and doing homework. I just saw a TikTok of a student who goes to their lecture and records it, runs it through AI to take out all the salient points, then has the AI tool create cue cards and test questions to help them study for upcoming tests. That’s pretty clever.

What’s also clever, but perhaps not wise, is having an AI tool write an essay for you, then running the essay through a paraphraser that breaks the AI structure of the essay so that it isn’t detectable by AI detectors. If you have the AI use the vocabulary of a high school student, and throw in a couple of run-on sentences, then you’ve got an essay that not only AI detectors but teachers too would be hard pressed to flag as cheating. However, what have you learned?

This is a worthy point to think about, and to discuss with students: How do you use AI to make your tasks easier, but not do the thinking for you?

Because if you are using AI to do your thinking, you are essentially learning how to make yourself redundant in a world of ever-smarter AI. Don’t outsource your thinking… Keep your thinking cap on!

Nature-centric design

I came across this company, Oxman.com, and it defines Nature-centric design as:

Nature-centric design views every design construct as a whole system, intrinsically connected to its environment through heterogeneous and complex interrelations that may be mediated through design. It embodies a shift from consuming Nature as a geological resource to nurturing her as a biological one.
Bringing together top-down form generation with bottom-up biological growth, designers are empowered to dream up new, dynamic design possibilities, where products and structures can grow, heal, and adapt.

Here is a video, Nature x Humanity (OXMAN), that shows how this company is using glass, biopolymers, fibres, pigments, and robotics for large scale digital manufacturing, to rethink architecture to be more in tune with nature and less of an imposition on our natural world.

Nature x Humanity (OXMAN) from OXMAN on Vimeo.

This kind of thinking, design, and innovation excites me. It makes me think of Antoni Gaudí-styled architecture, but with the added bonus of materials and designs that are less about aesthetics alone and more about a symbiotic, naturally infused use of materials: constructions that help us share our world with other living organisms, rather than imposing themselves like a cancer, scarring and damaging the very environment that sustains our life.

Imagine living in a building that allows more natural air flow, is cheaper to heat and cool, and has a positive emissions footprint, while also making you feel like you are in a natural rather than concrete environment. Fewer corners, less uniformity, and ultimately less institutional homes, schools, and office buildings: spaces that are more inviting, more naturally lit, and more comfortable to be in.

This truly is architectural design of the future, and it has already started… I can’t wait to see how these kinds of innovations shape the world we live in!

Digital distraction

Last night we went out for a wonderful dinner. In the restaurant we had a booth next to a round table with a mother and 3 daughters. I’d guess the kids’ ages to be about 7, 12, and 14. My youngest daughter was sitting next to me and whispered, “They are all on devices.”

When I looked, the 7 year old had an Anime video playing on her laptop, which was about 8-10 inches (20-25cm) from her face. The 12 year old had over-ear headphones on and was endlessly scrolling on social media. The 14 year old was opposite me and all I could see was that she had one earbud in, on the far side of her mom, and she was bouncing between drawing (she definitely had some art skills) and scrolling on her phone.

The whole table sat in what was mostly silence, eating slowly. This continued from the time they sat down until we left the restaurant.

My daughter then pointed out the table behind us, where a boy, about 5, had his face over a tablet, lit up by its light since he was so close to it.

It’s the era of digital babysitting, digital distractions, but creating distraction from what? Mealtime, family time, conversation, social engagement? …All of the above.

I think this form of distraction is fundamentally changing the way we socialize and this will affect our sense of family, community, and culture.

What happens when our screens become more important than the people around us?

AI, Batman, and the Borg

In one of my earliest blog posts, originally written back in November 2006, I wrote:

“I come from the Batman era, adding items to my utility belt while students today are the Borg from Star Trek, assimilating technology into their lives.”

I later noted that students were not the ‘digital natives’ I thought they were. Then I went back and forth on the idea a few times on my blog after that, ultimately looking more at ‘digital exposure’ and not lumping students/kids together as digital immigrants or natives, but rather seeing that everyone is on a spectrum based on their exposure and interest.

Many of us are already a blend of Batman and Borg. We wear glasses and hearing aids that assist and improve our senses. We track our fitness on our phones and smart watches. We even have pacemakers that keep our hearts regular when our bodies don’t do a good job of it. In a more ubiquitous use of smart tools, almost all of us count on our phones to share maps with us, and we can even get an augmented view of the world with directions showing up superimposed on our camera view.

How else are we going to be augmenting our reality with new AI tools in the next 10 to 20 years?

We now have tools that can: read and help us respond to emails; decide our next meal based on the ingredients we have in our fridge; plan our next vacation; and even drive us to our destination without our assistance.

What’s next?

I know there are some ideas in the works that I’m excited to see developed. For example, I’m looking forward to getting glasses or contact lenses with full heads-up display information. I’m walking up to someone and their name becomes visible to me on a virtual screen. I look at a phone number and I can call it with an eye gesture. I see something I want to know more about, anything from an object, to a building, to a person, and I can learn more with a gesture.

I don’t think this technology is too far away. But what else are we in store for? What new tools are we adding to our utility belts, what new technologies are going to enhance our senses?

I used to make a Batman/Borg comparison to look at how we add versus integrate technology into our lives, but I think everyone will be doing more and more of both. The questions going forward are how much do we add, how reliant do we get, and how different will we be as a result? Would 2024 me even recognize the integrated capabilities of 2044 me, or will that future me be as foreign and advanced as a person from 1924 looking at a person in 2024?

I’m excited about the possibilities!

Is Generative AI “just a tool”?

Dean Shareski asks,

Is Generative AI “just a tool”? When it comes to technology, the “just a tool” phrase has been used for decades. I understand the sentiment in that it suggests that as humans, we are in control and it simply responds to our lead. But I worry that this at times, diminishes the potential and also suggests that it’s neutral which many argue that technology is never neutral. I’d love to know what you think, is Generative AI “just a tool”?

My quick answer is ‘No’, because as Dean suggests tools have never been neutral.

Here are some things I’ve said before in ‘Transformative or just flashy educational tools?‘. This was 13 years ago, and I too used the term, ‘just a tool’:

“A tool is just a tool! I can use a hammer to build a house and I can use the same hammer on a human skull. It’s not the tool, but how you use it that matters.

A complementary point: If I have a hammer and try to use it as a screwdriver, I won’t get much value from its use.”

My main message was, “A tool is just a tool! It’s not the tool, but how you use it that matters.”

I went on to ask, “What should we do with tools to make them great?” And I gave 6 suggestions:

  1. Give students choice
  2. Give students a voice
  3. Give students an audience
  4. Give students a place to collaborate
  5. Give students a place to lead
  6. Give students a digital place to learn

This evolved into a presentation, 7 Ways to Transform Your Classroom, but going back to Dean’s question, first of all I don’t think technology is ever neutral. I also think that technology need not be transformative yet always has the potential to be so if it really does advance capabilities beyond what was available before. So the problem is really about the term ‘just’ in ‘just a tool’. Nothing is just a tool. Generative AI is not “just a tool”.

We respond to technology, and technology transforms us. Just as Marshall McLuhan proposed that the communication medium itself, not the messages it carries, is what really matters (“The medium is the message”), the technology is the transformer: it alters us through our use of it; it is the medium that becomes the message.

Generative AI is going to transform a lot of both simple and complex tasks, a lot of jobs, and ultimately us.

What does this mean for schools, and for teaching and learning? There will be teachers and students trying to do old things in new ways, using AI like we used a thesaurus, a calculator, Wikipedia, and other tools to ‘enhance’ what we could do before. There will also be innovative teachers using AI to co-create assignments and to answer new questions, co-write essays, and co-produce images and movies.

There will be students using generative AI to do their work for them. There will be teachers fearing these tools, there will be teachers trying to police their use, and it’s going to get messy… That said, we will see more and more generative AI being the avenue through which students are themselves designing tasks and conceiving of new ways to use these tools.

And of course we will see innovative teachers using these tools in new ways…

How different is a marketing class when AI is helping students start a business and market a product with a $25 budget to actually get it going?

How different is a debate where generative AI is listening and assisting in response to the opposing team’s rebuttal?

How different are the special effects in an AI assisted movie? How much stronger is the AI assisted plot line? How much greater is the audience when the final product is shared online, with AI assisted search engine optimization?

Technology is not ‘just a tool’. We are going to respond to Generative AI, and Generative AI is going to transform us. It is not just going to disrupt education; it’s going to disrupt our job market, our social media (it already has), and our daily lives. Many tools to come will not be just like the tools that came before them… changes will be accelerated, and the future is exciting.

‘Just a tool’ is deceiving, and I have to agree with Dean that it underplays the possibilities of how these tools can be truly transformative. I’m not sure I would have said this 13 years ago, but I also didn’t see technology being as transformative back then. Generative AI isn’t a flash in the pan; it’s not something trendy that’s going to fade in popularity. No, it’s a tool that’s going to completely change landscapes… and ultimately us.

Alien perspective

I think jokes like this are funny:

…because they hold a bit of truth.

We aren’t all that intelligent.

We draw imaginary lines on the globe to separate us. We fight wars in the name of angry Gods that are more concerned with our devotion than for peace and love. We care more about greed than about the environment. We spend more on weapons of destruction than we do on feeding the needy. We judge each other on superficial differences. We have unbelievable intellect, capable of incredible technological advancement, yet we let our monkey brains prevail.

Sure, we exhibit some intelligence; we are intelligent viruses.

At least that’s what I think an objective alien visiting our planet would think.

Conversation on an alien ship observing earth:

“Give them another 100 years… if they figure out how to not kill each other and the planet, then let’s introduce ourselves.”

Right now I’m not terribly optimistic about what those aliens will find in our future. ‘Civilized’ humans? A desolate planet? Artificial intelligence treating us like we treat ‘unintelligent’ animals? Or more of the same bickering, posturing, warring, and separatist views of humans trying to assert dominance over each other?

It would be funny if it wasn’t sad.

Intolerance for bad faith actors

I have always been a pretty strong advocate for free speech. To me it’s the underpinning of a robust democratic society. We don’t have to like what someone says, but they have a right to say it as long as it isn’t hate speech or harmful to someone. We shouldn’t allow racism, threats, and doxing, but we should allow differences of opinions and even angry rants when they are not threatening to a person or group of people.

But I’m struggling with the lack of good faith that I’m seeing. In our country, I see a lot of protests and anger towards our Prime Minister. I believe people should be allowed to protest and share their concerns, but when I see articles like ‘Attack on Trudeau unsurprising, experts say, warning of future violence against politicians’ stating that he was “pelted with gravel while at a campaign stop in London, Ont.”, or read that he was heckled so loudly that he couldn’t continue a speech… that is going way too far. This isn’t protest: it’s fascist, intolerant, and oppressive.

There is a difference between voicing concerns and harassment. There is a difference between protesting and threatening, there is a difference between peaceful, civil behavior and what seems to be happening today.

If I were to describe my politics, I’m definitely left of center. And while I fundamentally disagree with many things Ben Shapiro thinks and says, I get upset when I read articles saying he couldn’t even speak at a university because of safety concerns… And that was 6 years ago! Things are even worse now. Much worse.

When I recently read, “The presidents of three of the nation’s top universities are facing intense backlash, including from the White House, after being accused of evading questions during a congressional hearing about whether calls by students for the genocide of Jews would constitute harassment under the schools’ codes of conduct.” I am deeply concerned. Should students be allowed to protest? Absolutely! Should they be allowed to promote genocide of any person or people as part of their protest? Absolutely not.

It’s an easy line to draw. Absolutely not. That’s acting in bad faith. That’s undermining our democracy and our freedoms.

We need to differentiate how we handle protests and free speech by people who are acting in good faith from those acting in bad faith. The very rights and freedoms we are given in a free and democratic society depend on us doing so. When we give those freedoms to people that abuse them, we subvert our own liberty. We diminish our freedoms and allow others, with harmful words and actions, to impose less civil values on us.

When free speech is misused, it harms us all. When violence is advocated or permitted; when protests prevent civil conversation and debate; when harassment is permitted; we all suffer. We can’t let people acting in bad faith weaken our civil liberties. We can’t just expect people to act in good faith, the minority who don’t will be too disruptive. We need to squash the bad faith actors. The trick is that we need to do so with legal actions. We need to have zero tolerance for intolerance, and we need to create laws that clearly restrict and penalize threats, hate crimes, and malice.

This is known as the paradox of tolerance, “The paradox of tolerance states that if a society’s practice of tolerance is inclusive of the intolerant, intolerance will ultimately dominate, eliminating the tolerant and the practice of tolerance with them. Karl Popper described it as the seemingly self-contradictory idea that, in order to maintain a tolerant society, the society must retain the right to be intolerant of intolerance.”

Instead, what I am seeing is things like this happening:

People who have caused over a decade of harm to others do not deserve a social media platform. That’s not censorship, that’s prevention of further malice, pain, and suffering to innocent people. As I contemplate leaving Twitter, news like this makes me lean towards shutting down my account. But I don’t pretend that will have any meaningful impact beyond my own peace of mind.

The acceptance of bad faith actors has been building over the past decade, and we are deep into the consequences now. Free speech should only be a right for people who act in good faith. There can be disagreement, there can be discourse, there can even be civil arguments and protests. What there can’t be are bad faith actors and activists using free speech as a mechanism to promote harmful ideas, hate, violence, and disruptions to public discourse. For this we need zero tolerance.


Related: Ideas on a Spectrum

New learning paradigm

I heard something in a meeting recently that I haven’t heard in a while. It was in a meeting with some online educational leaders across the province, and the topic of Chat GPT and AI came up. In an online course, with limited opportunities for supervised work or tests, it’s really challenging to know whether the work is being done by the student, a parent or tutor, or an Artificial Intelligence tool. That’s when a conversation came up that I’ve heard before. It was a bit of a stand-on-a-soapbox diatribe: “If an assignment can be done by Chat GPT, then maybe the problem is in the assignment.”

That’s almost the exact line we started to hear about 15 years ago about Google… I might even have said it, “If you can just Google the answer to the question, then how good is the question?” Back then, this prompted some good discussions about assessment and what we valued in learning. But this is far more relevant to Google than it is to AI.

I can easily create a question that would be hard to Google. It is significantly harder to do the same with LLMs (Large Language Models) like Chat GPT. With a Google search I can’t find critical thinking challenges that haven’t already been shared by someone else, but I can ask Chat GPT to create answers to almost anything. Furthermore, I can ask it to create things like pros and cons lists, then put those in point form, then do a rough draft of an essay, then improve on the essay. I can even ask it to use the vocabulary of a Grade 9 student. I can also give it a writing sample and ask it to write the essay in the same style.
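That chained refinement can be sketched as a simple prompt pipeline. This is only a minimal illustration, not anyone’s actual workflow: `ask_llm`, `make_prompt_pipeline`, and `echo_llm` are hypothetical names, and the `echo_llm` stub exists purely so the sketch runs on its own without any real AI service.

```python
def make_prompt_pipeline(ask_llm):
    """Chain prompts so each step refines the previous output:
    pros/cons list -> point form -> rough draft -> improved draft."""
    def run(topic):
        pros_cons = ask_llm(f"List the pros and cons of: {topic}")
        outline = ask_llm(f"Turn this into point form:\n{pros_cons}")
        draft = ask_llm(f"Write a rough essay draft from:\n{outline}")
        final = ask_llm(
            "Improve this draft using the vocabulary of a Grade 9 student:\n"
            + draft
        )
        return final
    return run

# Stand-in "model" so the example is self-contained; it just echoes
# the first line of each prompt. A real ask_llm would call whichever
# LLM API you actually use.
def echo_llm(prompt):
    return f"[response to: {prompt.splitlines()[0]}]"

essay = make_prompt_pipeline(echo_llm)("school start times")
```

Swapping `echo_llm` for a real model call turns this same structure into the pros-and-cons, point-form, draft, and revision flow described above; each step hands its output to the next prompt.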

LLMs are not just a better Google; they are a paradigm shift. If our conversations are about how to catch cheaters, students using Chat GPT to do their work, we are stuck in the old paradigm. That said, I openly admit this is a much bigger problem in online learning, where we don’t see and work closely with students in front of us. We are heading into an era where there will be no way to verify what’s student work and what’s not, so it’s time to recognize the paradigm shift and start asking ourselves new questions…

The biggest questions we need to ask ourselves are how can we teach students to effectively use AI to help them learn, and what assignments can we create that ask them to use AI effectively to help them develop and share ideas and new learning?

Back when some teachers were saying, “Wikipedia is not a valid website to use as research and to cite,” many more progressive educators were saying, “Wikipedia is a great place to start your research,” and, “Make sure you include the date you quoted the Wikipedia page, because the page changes over time.” The new paradigm will see some teachers making students write essays in class on paper or on computers without wifi or internet access, while other teachers will be sending students to Chat GPT and helping them understand how to write better prompts.

That’s the difference between old and new paradigm thinking and learning. The transition is going to be messy. Mistakes are going to be made, both by students and teachers. Where I’m excited is in thinking about how learning experiences are going to change. The thing about a paradigm shift is that it’s not just a slow transition but a leap into new territory. The learning experiences of the future will not be the same, and we can either try to hold on to the past, or we can get excited about the possibilities of the future.