Tag Archives: technology

Not so techie

I’ve shared before about how I’m not as tech savvy as most people think. The reality is that I’m just willing to spend a lot of time getting to the bottom of an issue, and so my savviness has more to do with patience than with prowess. That said, I’m getting very frustrated with the technology challenges that keep coming my way that I can’t solve. A couple of days ago the WordPress App stopped working. I could no longer save anything on it, and so I couldn’t write posts on my phone. I deleted and re-installed the app, I tried logging in with my back-up access account, and then I gave up and finally moved to the Jetpack App that I had been begrudgingly avoiding. I didn’t want to make the switch because it forces block editing, which I think is clunky and works against me rather than helping me with my writing. Now that app won’t work with my blog either. Maybe that’s a good thing, because I wanted to write on my laptop rather than my phone, so this might be the push I needed.

Still, this wasn’t my only technology challenge this week, or even today. My wife is with her parents and her dad can’t access his Shaw mail. It has to be an issue with his machine, because my wife can access the account on her phone and I can access it on my computer. The account uses web-based access, so I suggested updating Chrome, and then we tried Firefox; in both browsers he can log into the account, but he can’t click on any of the items in his inbox to read them. The fact that I’m trying to give support over Facetime doesn’t make it any easier. I have Teamviewer (to take over a computer remotely) on my mother-in-law’s computer, but not on my father-in-law’s, and while I’ll set that up soon, I didn’t feel like doing that for what I thought was a minor issue. With my wife there, the support itself went fast, even if we couldn’t figure out the issue.

So here is my little rant: why does it seem that there are a lot more things breaking rather than working these days? I have to manually share my blog posts on social media because the tools I try to use (and have even paid for) don’t seem to work consistently. My wife gets a new phone and I spend a week sorting out issues that were not a problem with the old phone. I install a new plugin (after the issue with the WordPress login – yes, I already thought about that being the problem), and it takes an hour to move from the free version to the paid one. I get stuck on a technical issue and Google searches seem less helpful than they used to be. I buy a new toaster oven and the extra features make it harder to use and less convenient than the old one. I can’t decide if I’m getting old and curmudgeonly, or if things are being made less convenient and harder to repair.

In any event, I’m not feeling so techie right now. I seem to be coming across issues that are too hard for me to fix, and my patience is thinning. Cue the memes about old people not understanding technology… I hope that’s not me despite my little rant.

Sensing our world

I’m going to need glasses and hearing aids in the next few years. I already use small magnification readers when text is small or my eyes are fatigued, and I know that my hearing has diminished. One example of my hearing issue is that when we shut down our gas fireplace it beeps, but I don’t hear the beep. That sound is no longer in the range that I can hear. I only know it’s there because my wife and daughter mentioned it.

It’s only in relatively recent history that we’ve had access to technologies that allow us to enhance our senses when they fall below ‘normal’ capabilities. Before that we just lived less vivid lives as our senses worsened.

Having my family ask me ‘Can’t you hear that?’ and listening to nothing but their voices, knowing full well that I’m missing something, is a little disconcerting. How are they getting to experience a sound that is outside the range of my capability? But the reality is that there are sounds they too can’t hear, which dogs and other animals can.

This makes me wonder what our world really looks and sounds like. What are we incapable of sensing, and how does that alter our reality? And for that matter, how do we perceive the world differently, not just from other species but from each other? We can agree that certain colours on a spectrum are red, green, and blue, but is my experience of blue the same as yours? If it were, wouldn’t we all have the same favourite colour?

A few years back I had an eye condition that affected my vision at the focal point of my left eye. Later, I accidentally discovered that this eye doesn’t distinguish between some blues and greens, but only at the focal point. I learned this playing a silly bubble-bursting game on my phone. Without playing that game I might never have noticed, and I would have remained ignorantly blind to my own limited vision.

That’s the thought of the day for me: how are we ignorantly blind to the limitations of our senses? What are we missing that our world tries to share with us? How will technology aid us in seeing what can’t be seen, and hearing what we can’t usually hear, beyond what we have already accomplished in detecting? Our houses have carbon monoxide detectors, and we have sensors for radiation that are used in different occupations. We have sensors that detect infrared light, and accurately measure temperature and humidity. This kind of sense-enhancing technology isn’t new.

Still, while we have sensors and tools to detect these things for us, we can’t fully experience aspects of our world that are present but undetectable by our senses. It makes me wonder just how much of our world we don’t experience. We are blessed with amazing senses and we have some incredible tools to help us observe the world in greater detail, but what are we missing? What are we ignorantly (or should I say blissfully) unaware of?

Nature-centric design

I came across this company, Oxman.com, and it defines Nature-centric design as:

Nature-centric design views every design construct as a whole system, intrinsically connected to its environment through heterogeneous and complex interrelations that may be mediated through design. It embodies a shift from consuming Nature as a geological resource to nurturing her as a biological one.
Bringing together top-down form generation with bottom-up biological growth, designers are empowered to dream up new, dynamic design possibilities, where products and structures can grow, heal, and adapt.

Here is a video, Nature x Humanity (OXMAN), that shows how this company is using glass, biopolymers, fibres, pigments, and robotics for large-scale digital manufacturing, rethinking architecture to be more in tune with nature and less of an imposition on our natural world.


This kind of thinking, design, and innovation excites me. It makes me think of Antoni Gaudí-styled architecture, but with the added bonus that the materials and designs are less about aesthetics alone and more about a symbiotic, naturally infused use of materials that helps us share our world with other living organisms, rather than our constructions imposing themselves like a cancer on our world, scarring and damaging the very environment that sustains our lives.

Imagine living in a building that allows more natural air flow, is cheaper to heat and cool, and has a positive environmental footprint, while also being a place that makes you feel like you are in a natural rather than a concrete environment. Fewer corners, less uniformity, and ultimately less institutional homes, schools, and office buildings: spaces that are more inviting, more naturally lit, and more comfortable to be in.

This truly is architectural design of the future, and it has already started… I can’t wait to see how these kinds of innovations shape the world we live in!

Digital distraction

Last night we went out for a wonderful dinner. In the restaurant we had a booth next to a round table with a mother and three daughters. I’d guess the kids’ ages to be about 7, 12, and 14. My youngest daughter was sitting next to me and whispered, “They are all on devices.”

When I looked, the 7-year-old had an Anime video playing on her laptop, which was about 8-10 inches (20-25cm) from her face. The 12-year-old had over-ear headphones on and was endlessly scrolling through social media. The 14-year-old was opposite me, on the far side of her mom; all I could see was that she had one earbud in and was bouncing between drawing (she definitely had some art skills) and scrolling on her phone.

The whole table sat in what was mostly silence, eating slowly. This continued from the time they sat down until we left the restaurant.

My daughter then pointed out the table behind us, where a boy, about 5, had his face over a tablet, lit up by its glow because he was so close to it.

It’s the era of digital babysitting and digital distraction. But distraction from what? Mealtime, family time, conversation, social engagement? …All of the above.

I think this form of distraction is fundamentally changing the way we socialize and this will affect our sense of family, community, and culture.

What happens when our screens become more important than the people around us?

AI, Batman, and the Borg

In one of my earliest blog posts, originally written back in November 2006, I wrote:

“I come from the Batman era, adding items to my utility belt while students today are the Borg from Star Trek, assimilating technology into their lives.”

I later noted that students were not the ‘digital natives’ I thought they were. I went back and forth on the idea a few times on my blog after that, ultimately looking more at ‘digital exposure‘ and no longer lumping students/kids together as digital immigrants or natives, but rather seeing that everyone is on a spectrum based on their exposure and interest.

Many of us are already a blend of Batman and Borg. We wear glasses and hearing aids that assist and improve our senses. We track our fitness on our phones and smart watches. We even have pacemakers that keep our hearts regular when our bodies don’t do a good job of it. In a more ubiquitous use of smart tools, almost all of us count on our phones to share maps with us, and we can even get an augmented view of the world with directions showing up superimposed on our camera view.

How else are we going to be augmenting our reality with new AI tools in the next 10 to 20 years?

We now have tools that can: read and help us respond to emails; decide our next meal based on the ingredients we have in our fridge; plan our next vacation; and even drive us to our destination without our assistance.

What’s next?

I know there are some ideas in the works that I’m excited to see developed. For example, I’m looking forward to getting glasses or contact lenses with a full heads-up display. I walk up to someone and their name becomes visible to me on a virtual screen. I look at a phone number and I can call it with an eye gesture. I see something I want to know more about, anything from an object, to a building, to a person, and I can learn more with a gesture.

I don’t think this technology is too far away. But what else is in store for us? What new tools are we adding to our utility belts, and what new technologies are going to enhance our senses?

I used to make a Batman/Borg comparison to look at how we add versus integrate technology into our lives, but I think everyone will be doing more and more of both. The questions going forward are how much do we add, how reliant do we get, and how different will we be as a result? Would 2024 me even recognize the integrated capabilities of 2044 me, or will that future me be as foreign and advanced as a person from 1924 looking at a person in 2024?

I’m excited about the possibilities!

Is Generative AI “just a tool”?

Dean Shareski asks,

Is Generative AI “just a tool”? When it comes to technology, the “just a tool” phrase has been used for decades. I understand the sentiment in that it suggests that as humans, we are in control and it simply responds to our lead. But I worry that this at times, diminishes the potential and also suggests that it’s neutral which many argue that technology is never neutral. I’d love to know what you think, is Generative AI “just a tool”?

My quick answer is ‘No’, because, as Dean suggests, tools have never been neutral.

Here are some things I’ve said before in ‘Transformative or just flashy educational tools?‘. That was 13 years ago, and I too used the term ‘just a tool’:

“A tool is just a tool! I can use a hammer to build a house and I can use the same hammer on a human skull. It’s not the tool, but how you use it that matters.

A complementary point: if I have a hammer and try to use it as a screwdriver, I won’t get much value from its use.”

My main message was, “A tool is just a tool! It’s not the tool, but how you use it that matters.”

I went on to ask, “What should we do with tools to make them great?” And I gave 6 suggestions:

  1. Give students choice
  2. Give students a voice
  3. Give students an audience
  4. Give students a place to collaborate
  5. Give students a place to lead
  6. Give students a digital place to learn

This evolved into a presentation, 7 Ways to Transform Your Classroom. But going back to Dean’s question: first of all, I don’t think technology is ever neutral. I also think that technology need not be transformative, yet it always has the potential to be, if it really does advance capabilities beyond what was available before. So the problem is really with the word ‘just’ in ‘just a tool’. Nothing is just a tool. Generative AI is not “just a tool”.

We respond to technology, and technology transforms us. Just as Marshall McLuhan proposed that the communication medium itself, not the messages it carries, is what really matters (“The medium is the message”), the technology is the transformer: it alters us by our use of it; it is the medium that becomes the message.

Generative AI is going to transform a lot of both simple and complex tasks, a lot of jobs, and ultimately us.

What does this mean for schools, and for teaching and learning? There will be teachers and students trying to do old things in new ways, using AI like we used a thesaurus, a calculator, Wikipedia, and other tools to ‘enhance’ what we could do before. There will also be innovative teachers using AI to co-create assignments and to answer new questions, co-write essays, and co-produce images and movies.

There will be students using generative AI to do their work for them. There will be teachers fearing these tools, there will be teachers trying to police their use, and it’s going to get messy… That said, we will see generative AI increasingly becoming the avenue through which students themselves design tasks and conceive of new ways to use these tools.

And of course we will see innovative teachers using these tools in new ways…

How different is a marketing class when AI is helping students start a business and market a product with a $25 budget to actually get it going?

How different is a debate where generative AI is listening and assisting in response to the opposing team’s rebuttal?

How different are the special effects in an AI assisted movie? How much stronger is the AI assisted plot line? How much greater is the audience when the final product is shared online, with AI assisted search engine optimization?

Technology is not ‘just a tool’. We are going to respond to Generative AI, and Generative AI is going to transform us. It is not just going to disrupt education; it’s going to disrupt our job market, our social media (it already has), and our daily lives. Many of the tools to come will not be just like the tools that came before them… change will be accelerated, and the future is exciting.

‘Just a tool’ is deceiving, and I have to agree with Dean that it underplays how truly transformative these tools can be. I’m not sure I would have said this 13 years ago, but I also didn’t see technology as being this transformative back then. Generative AI isn’t a flash in the pan; it’s not something trendy that’s going to fade in popularity. No, it’s a tool that’s going to completely change landscapes… and ultimately us.

New learning paradigm

I heard something in a meeting recently that I haven’t heard in a while. It was in a meeting with some online educational leaders across the province, and the topic of Chat GPT and AI came up. It’s really challenging in an online course, with limited opportunities for supervised work or tests, to know whether the work is being done by the student, a parent, a tutor, or an Artificial Intelligence tool. That’s when a point came up that I’ve heard before, a bit of a stand-on-a-soapbox diatribe: “If an assignment can be done by Chat GPT, then maybe the problem is in the assignment.”

That’s almost the exact line we started to hear about 15 years ago about Google… I might even have said it myself: “If you can just Google the answer to the question, then how good is the question?” Back then, this prompted some good discussions about assessment and what we valued in learning. But the line is far more relevant to Google than it is to AI.

I can easily create a question that would be hard to Google. It is significantly harder to do the same with LLMs – Large Language Models like Chat GPT. A Google search can only surface critical thinking challenges that someone else has already shared, but I can ask Chat GPT to create answers to almost anything. Furthermore, I can ask it to create things like pros and cons lists, then put those in point form, then do a rough draft of an essay, then improve on the essay. I can even ask it to use the vocabulary of a Grade 9 student, or give it a writing sample and ask it to write the essay in the same style.

LLMs are not just a better Google; they are a paradigm shift. If we are trying to have conversations about how to catch cheaters (students using Chat GPT to do their work), we are stuck in the old paradigm. I openly admit this is a much bigger problem in online learning, where we don’t see and work closely with students in front of us. But we are heading into an era where there will be no way to verify what is student work and what is not, so it’s time to recognize the paradigm shift and start asking ourselves new questions…

The biggest questions we need to ask ourselves are how can we teach students to effectively use AI to help them learn, and what assignments can we create that ask them to use AI effectively to help them develop and share ideas and new learning?

Back when some teachers were saying, “Wikipedia is not a valid website to use for research or to cite,” more progressive educators were saying, “Wikipedia is a great place to start your research,” and, “Make sure you include the date you quoted the Wikipedia page, because the page changes over time.” The new paradigm will see some teachers making students write essays in class on paper or on wifi-less, internet-less computers, while other teachers will be sending students to Chat GPT and helping them understand how to write better prompts.

That’s the difference between old and new paradigm thinking and learning. The transition is going to be messy. Mistakes are going to be made, both by students and teachers. Where I’m excited is in thinking about how learning experiences are going to change. The thing about a paradigm shift is that it’s not just a slow transition but a leap into new territory. The learning experiences of the future will not be the same, and we can either try to hold on to the past, or we can get excited about the possibilities of the future.

AI is Coming… to a school near you.

Miguel Guhlin asked on LinkedIn:

“Someone asked these questions in response to a blog entry, and I was wondering, what would YOUR response be?

1. What role/how much should students be using AI, and does this vary based on grade level?

2. What do you think the next five years in education will look like in regards to AI? Complete integration or total ban of AI?”

I commented:

1. Like a pencil or a laptop, AI is a tool to use sometimes and not use other times. The question is about expectations and management.

2. Anywhere that enforces a total ban on AI is going to be playing a never-ending and losing game of catch-up. That said, I have no idea what total integration will look like. Smart teachers are already using AI to develop and improve their lessons, and those teachers know that students can, will, and should use these tools as well. But like in question 1… when it’s appropriate. Just because a laptop might be ‘completely integrated’ into a classroom as a tool students use doesn’t mean everything they do in a classroom is with and on a laptop.

I’ve already dealt with some sticky issues around the use of AI in the classroom and online. One situation last school year was particularly messy, with a teacher using Chat GPT as an AI detector rather than other AI detection tools. It turns out that Chat GPT is not a good AI detector. It might be better now, but I can confirm that in early 2023 it was very bad at this. I even put some of my own work into it, and Chat GPT told me that a couple of paragraphs were written by it, even though I wrote the piece about 12 years earlier.

But what do we do in the meantime? Especially in my online school, where very little, if any, work is supervised? Do we give up on policing altogether and just let AI do the assignments as we try to AI-proof them? Do we give students grades for work that isn’t all theirs? How is that fair?

This is something we will figure out. AI, like laptops, will be integrated into education. Back in 2009 I presented on the topic “The POD’s are Coming!” (Slideshow here), about Personally Owned Devices (laptops etc.) coming into our classrooms, and the fear of those devices. We are at that same point with AI now. We’ll get through this and our classrooms will adapt (again).

And in a wonderful full-circle coincidence, one of the images I used in that POD’s post was a posterized quote by Miguel Guhlin.

It’s time to take the leap. AI might be new… but we’ve been here before.

Paper maps

I remember driving from Toronto, Ontario to Phoenix, Arizona over 3 days with only paper maps. That’s over 3,500 kilometres of travel before the era of ‘Sat Nav’ and GPS. A wrong turn wasn’t met with auditory instructions to turn around, or automatic rerouting. No, it was followed by obliviousness until you saw a highway sign telling you that you were on the wrong road, or until you looked for the name of an exit you had seen, only to realize it was on a different highway.

You’d drive into a city at night and then start looking for a hotel. No Google searches or cell phones to call anywhere in advance. No instructions for getting back on the highway unless you asked at the front desk.

But it was usually the last 5-10 kilometres that were the toughest. Highways are well labeled on maps, but side streets are a whole other story. You could spend 4 hours traveling at the maximum speed limit and make great time, only to flounder in the last few kilometres and spend 20 minutes lost and frustrated.

I remember being lost in a suburban community with my wife once, and we gave up, deciding to just follow another car out of the maze of houses we were in… only to be disappointed when he turned into his driveway. We were quite embarrassed after circling the cul-de-sac while the person we had followed stood outside his car, wondering who we were and why we had followed him.

It was a different time, and not one I’m yearning to repeat. I’m happy to have Waze or Google Maps take all the mystery out of driving somewhere I’ve never been before. I am glad my daughters don’t have to navigate with a paper map, although they will never know the joy of driving over the crease where you folded the map, leaving behind a section you had spent hours driving through. Unfortunately, phones can be as big a distraction as a paper map, or an even bigger one, but make no mistake: when it came to trying to read a paper map, your attention was definitely not on the road.

We are over the fold now, never to return to the era of paper maps. There are no creases on a digital map, no more folding, unfolding, and refolding. No more getting totally lost with no clue what to do next. We can always have a little voice telling us that we are rerouting, and always have a digital line we can follow.

The easy way out

I love the ingenuity of students when it comes to avoiding work. I remember a student showing me how playing 3 French YouTube videos simultaneously in different tabs somehow fooled the Rosetta Stone language learning software into thinking he was responding to oral tests correctly. How on earth did he figure that out?

Here’s a video of a kid who, while doing an online math quiz for homework, figured out that the web browser’s ‘inspect element’ developer tool can reveal the correct answer. Just hover over the code for the multiple choice questions: it highlights each choice on the page, and the code tells you whether that choice is true or false.

(Video: “Kids know every trick in the book… I mean computer” by @imemezy on TikTok)
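To make the trick concrete, here is a minimal sketch of how a quiz page can leak its answers. The markup, the li.choice selector, and the data-correct attribute below are hypothetical, and the quiz site in the video may structure its code differently; the point is simply that if the grading flag is sent to the browser, anyone can read it in the developer tools or pull it out from the console.

```typescript
// Hypothetical quiz markup – the kind of thing 'inspect element' exposes:
//
//   <li class="choice" data-correct="false">12</li>
//   <li class="choice" data-correct="true">15</li>
//
// If the page grades answers client-side, the flag ships with the HTML,
// and a few lines in the browser console list every correct answer:
const correctAnswers = Array.from(
  document.querySelectorAll<HTMLElement>("li.choice")
)
  .filter((choice) => choice.dataset.correct === "true")
  .map((choice) => choice.textContent?.trim());

console.log(correctAnswers); // e.g. ["15"] – no math required
```

The real fix is for the quiz platform to keep grading on the server, but that is the developer’s problem; what matters here is how little effort the workaround takes once a student thinks to look.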

If there is an easy way to solve things, students will figure it out.

There isn’t an AI detector that can determine with full certainty that someone cheated using a tool like Chat GPT. And if you find one, it probably would not detect anything if the student also used an AI paraphrasing tool to rework the final product. It would be harder still if their prompt said something like, ‘Use grammar, sentence structure, and word choice that a Grade 10 student would use’.

So AI will be used for assignments. Students will go into the inspector view of a web page’s code and find the right answers, and it’s probably already the case that shy students have trained an AI tool to speak in their voice so that they can submit oral (and even video) work without actually having to read anything aloud.

These tools are getting better and better, and thus much harder to detect.

I think tricks and tools like this invite educators to be more creative about what they do in class. We are seeing some of this already, but we are also seeing a lot of backsliding: school districts blocking AI tools, teachers giving tests on computers that are blocked from accessing the internet, and even teachers making students, who are used to working with computers, write paper tests.

Meanwhile other teachers are embracing the changes. Wes Fryer created AI Guidelines for students to tell them how to use these tools appropriately for school work. That seems far more enabling than locking tools down and blocking them. Besides, I think that if students are going to use these tools outside of school anyway, we should focus on teaching them appropriate use rather than creating a learning environment that is nothing like the real world.

All that said, if you send home online math quizzes, some students will find an easy way to avoid doing the work. If you have students write essays at home and aren’t actively having them revise that work in class, some will use AI. Basically, some students will cheat the system, and themselves of the learning experience, if they are given the opportunity to do so.

The difference is that innovative, creative teachers will use these tools to enhance learning, and they will be in a position to learn along with students how to embrace these tools openly, rather than having kids sneakily use them to avoid work or to lessen the work they need to do… either way, kids are going to use these tools.