Tag Archives: technology

It’s going to be messy

“Technology is a way of organizing the universe so that man doesn’t have to experience it” ~ Max Frisch

One of my favourite presentations I’ve ever created was back in 2008 for Alan November’s BLC – ‘Building Learning Communities’ conference. It was called: The Rant, I Can’t, The Elephant and the Ant, and it was about embracing new technology, specifically smartphones in schools.

The rant was about how every new technology is accused of undermining education, starting with the ballpoint pen.

I can’t was about the frustrations educators have with learning to use new tools.

The elephant was the smartphone: an incredibly powerful new tool that was in the room. You can’t ignore an elephant in the room.

The ant was a metaphor for networking and learning from others… using a learning community to help you with the transformation of your classroom.

I ended this with a music slideshow that I later converted to video called, Brave New World Wide Web. This went a bit viral on BlipTV, a now defunct rival of YouTube.

The next year I presented at the conference again and my favourite of my two presentations was, The POD’s are Coming, about Personally Owned Devices… essentially laptops and tablets being brought into schools by students. These may be ubiquitous now, but they were still pretty novel in 2009.

These two presentations and video give a pretty strong message around embracing new technology in schools. So my next message about embracing AI tools like Chat GPT in schools is going to come across fairly negatively:

It’s going to be a bumpy and messy ride.

There is not going to be an easy transition. It’s not just about embracing a new technology, it’s about managing the disruption… and it’s not going to be managed well. I’ve already had an issue at my school where a teacher used Chat GPT to verify whether AI wrote an assignment for students. However, Chat GPT is not a good AI checker, and it turned out to be wrong for a few students who insisted they wrote the work themselves; several dedicated AI detectors agreed with them. But this was only checked after the students were accused of cheating. Messy.

Some teachers are now expecting students to write in-class essays with paper and pen to avoid students using AI tools. These are kids who have been using laptops since elementary school. Messy.

Students are using prompts in Chat GPT that instruct the AI to write with the language complexity expected of someone their age. Or they are putting AI written work into free paraphrasing tools that fool the AI detectors. Messy.

Teachers’ favourite assignments, the ones that usually get students to really stretch their skills, are now done much faster and almost as well with AI tools. And even very bright students are using these tools frequently. While prompt generation is a good skill to have, AI takes the effort and the depth of understanding away from the learners. Messy.

That final point is the messiest. For many thoughtful and thought-provoking assignments, AI can now reduce the effort to asking the right prompt. And while the answer may be far from perfect, AI will provide an answer that simplifies the response for the learner. It will dumb down the question, or produce a response that makes the question easier.

AI is not necessarily a problem solver, it’s a problem simplifier. But that reduces the critical thinking needed. It waters down the complexity of the work required. It transforms the learning process into something easier and less directly thoughtful. Everything is messier except the problem the teacher has created, which is now just much simpler to complete.

Learning should be messy, but instead what’s getting messy is the ability to pose problems that inspire learning. Students need to experience the struggle of messy questions instead of seeking an intelligent agent to mess up the learning opportunities.

Just like any other tool, there are places to use AI in education and places to avoid using the tool. The challenge ahead is creating learning opportunities where it is obvious when the tool is and isn’t used. It’s having the tool in your tool box, but not using it for every job… and getting students to do the same.

And so no matter how I look at this, the path ahead is very messy.

Just a call away

Today I saw a sunset in Greece. It was hours ago, and although the sun hasn’t set here yet, my daughter is on a Greek island and she FaceTime’d me. The photo shared above is from a Snapchat she shared just before calling. She was on a balcony at her hostel, and we chatted for a few minutes while her friends got ready to go to dinner.

When my wife did a similar backpacking trip 30 years ago, she spoke to her parents by collect call each time she was heading to or arriving in another country, and that would be it for contact for days, sometimes more than a week. On this trip my wife is in contact with our kid almost daily, even if just by WhatsApp chat. She checks in with her dad a little less frequently, knowing I get the updates from my wife.

Time zones are the only challenge to communication. As I’m writing this, it’s 7:30pm here and 5:30am in Greece. But beyond that, it’s pretty awesome that we can stay connected… for free with a simple wifi connection. This shouldn’t still amaze me but it does. It would take me 14.5 hours including a layover to get to her, but I can see her ‘live’ on my phone, with the only challenge being what time we go to sleep.

Makes me think, who else is just a call away, but I haven’t made the effort?

Use it or fall behind

Check out what Khan Academy has done so far, since getting early access to GPT-4 last August.

And here’s what’s coming soon:

The gut reaction to using new technology in education is to ban, block, and/or punish students for ‘cheating’. While I’m not going to link to the many times I’ve already said this, I’ll say it again… the technology is not going away!

So how do we use it effectively, creatively, and for learning? 

That is the question to ask… and GPT-4 and tools like it probably have better answers than you can come up with.

Dinner with the dead

A question Tim Ferriss used to regularly ask his podcast guests was, “If you could have dinner with one person, dead or alive, who would it be and why?”

Well now it might be a bit easier to have one of those dinner conversations… even if the person is dead.

Here’s a conversation on AI and education between Bill Gates and Socrates, but first the description of the video:

AI Brings Bill Gates & Socrates Together: A Must-Watch Dialogue on AI. An exclusive video of Bill Gates and ancient philosopher Socrates discussing the potential of artificial intelligence. Don’t miss this groundbreaking fusion of past wisdom and present innovation, reshaping our understanding of AI.

In this video, you will witness a fascinating discussion between Socrates, the Greek philosopher considered one of the greatest thinkers in history, and Bill Gates, the American entrepreneur and founder of Microsoft, one of the most important companies in the world of technology.

Despite belonging to different eras, Socrates and Gates have a lot in common. Both are considered pioneers in their respective fields and have had a significant impact on society.

The AI-generated conversation will allow these two great figures to discuss topics such as technology, ethics, education, and much more. Will Socrates and Bill Gates be able to find common ground in their ideas and thoughts? Find out in this video!

https://youtu.be/hJ5qN9PRmFc

It didn’t need the laugh track, and there is a slight cartoonish feel to the two characters, but this technology is just getting better and better!

“How good are my AI prompts?”

Two thoughts about yesterday’s post, ‘Playing with Chat GPT‘:

1. I used the plural phrase ‘Artificial intelligences’ and followed up with, “yes plural, AI is not a single thing”. What’s both exciting and scary is that Chat GPT and other incredible AI tools are revolutionizing markets like health care diagnostics, manufacturing and logistics, coding, customer service and tech support, copy editing and content generation, audio and video editing, and even education. I think anyone who uses these tools can see why it’s exciting, but why do I also say scary? Here are two reasons:

First of all, many of these tools are open source or open access and/or very affordable for anyone to build on top of. This is great, but also permits people to do some pretty nefarious things, like produce deep fakes, and use these tools in increasingly evil ways. And as AI gets better, so does the ability to do greater harm.

Secondly, we are going to see a major decrease in jobs. Now this is under debate, with some people thinking there will just be a shift in jobs, but I disagree. For example, say you own a website that hires 10 content writers to produce daily content to get new articles in front of your readers. You lay off 7 of them, keeping your best ones, and you have them use Chat GPT to write articles similar to the best, most popular ones on your site; the 3 remaining writers tweak the AI writing, make it better, and easily do the work of 10.

This kind of shift isn’t happening with just Chat GPT; there are more and more AI tools quickly shifting the work to fewer staff, who are more creative and innovative, doing jobs that many more people used to do. If you are an elegant coder or an excellent problem-solving tech support worker, your job is safe. If you are just competent at coding or tech support, an AI can and will do the job better than you, and you won’t be needed much longer. The irony is that your years of providing support will have helped train the very AI replacing you.

This isn’t just about Chat GPT, it’s about a plethora of Artificial Intelligences changing the way we learn, access information, get fooled, and work. And the pace of change will rival that of any prior advancement in human history.

2. After publishing yesterday’s post, I went back to Chat GPT to play some more (as seen in the ‘Update’ at the bottom of the post). Three prompts after my original one, I had something that was easy to read and would take just one read-over and final edit to become something I could publish: insightful, and difficult to identify as AI generated. I ended my update with: The question isn’t how good is the AI tool, the question is, “How good are my prompts?”

…and if your prompts are not that good… just ask Chat GPT to improve them!
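That iterative back-and-forth can be sketched as a simple loop. In the sketch below, `ask_model` is a hypothetical stand-in for whatever chat interface you use (it only records the conversation and returns a placeholder reply, so the example is self-contained); the refinement prompts are the ones quoted in yesterday’s post:

```python
# A sketch of iterative prompting: each follow-up prompt refines the last draft.
# `ask_model` is a hypothetical stand-in for a real chat interface; it just
# records the conversation and returns a placeholder reply.
def ask_model(history, prompt):
    history.append({"role": "user", "content": prompt})
    reply = f"[draft revised per: {prompt[:40]}...]"
    history.append({"role": "assistant", "content": reply})
    return reply

# Refinement prompts, applied in sequence to one running conversation:
refinements = [
    'Write a short blog post titled, "The Role of Reflection in Learning".',
    "Write the above blog post in a more conversational manner, using "
    "paragraphs without numbering or section titles.",
    "Try again, still more conversational and less like an essay. Vary "
    "sentence and paragraph length, and end with a thought provoking question.",
]

conversation = []
for prompt in refinements:
    draft = ask_model(conversation, prompt)

print(len(conversation))  # 6 entries: three prompts, three replies
```

The point of the sketch is that a real session keeps the whole history in play, so each refinement builds on the previous draft rather than starting over.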

Here are a couple of Twitter threads with some insightful prompts for Chat GPT.

Playing with Chat GPT

Yesterday I asked Chat GPT for blog post ideas for a blog that discusses ‘education, learning, and general life lessons‘. This is what it responded with:

I then chose #4 and asked: Write a short blog post titled, “The Role of Reflection in Learning” on the topic of Exploring the significance of reflective practices in enhancing learning and personal growth.

This is what Chat GPT came up with:

Would I ever just use Chat GPT to write my Daily Ink? No. But would I use it as a launching pad to help me generate ideas, or help me write a first draft? Absolutely. However, that’s just me. What you see above will be the kind of thing people who generate daily posts as part of a paid gig will do regularly. Sure they will make it sound a bit less formal, a bit more natural or conversational… but there are actually other Artificial Intelligences (yes plural, AI is not a single thing) which will do this and make it undetectable should you ask an AI detector if a human wrote it. Need something longer, like a school essay? Just ask Chat GPT to expand each of the 5 points into full paragraphs.

I didn’t iterate. I didn’t ask a more detailed question. I didn’t ask Chat GPT to further develop the ideas… If I did, it would make the above results even better and it would do so in a matter of seconds. And educators, try as you might, you can’t police it.

We are not far away from finding more AI written (or AI assisted) content on the internet than purely human generated content… and we won’t know the difference.

— — — — — — — — — — — — — — — — — 

Update: (…I came back to this a few hours later.)

TLDR: The question isn’t how good is the AI tool, the question is, “How good are my prompts?”

I followed up and asked Chat GPT to, “Write the above blog post in a more conversational manner, using paragraphs without numbering or section titles.”
Here is the updated version:

The Role of Reflection in Learning

In the fast-paced world of education, where we’re constantly bombarded with information, it’s important to take a step back and reflect on our learning journey. Reflection is a powerful tool that not only enhances our understanding but also fosters personal growth. So, let’s dive into the significance of reflective practices and how they can positively impact our learning experiences.

When we engage in reflection, we go beyond surface-level learning. It provides us with an opportunity to delve deeper into our knowledge and identify any gaps in our understanding. By pausing, processing, and evaluating what we’ve learned, we can connect the dots and develop a more comprehensive understanding of the subject matter. Whether it’s through journaling, discussions, or self-assessment, reflection allows us to make sense of complex concepts and strengthen our grasp on the material.

But reflection goes beyond academic understanding. It also enhances our self-awareness, which is a crucial aspect of personal growth. By reflecting on our learning process, we become more conscious of our strengths, weaknesses, and preferred learning styles. This self-awareness empowers us to tailor our learning strategies, seek additional support when needed, and leverage our strengths to maximize our potential. It’s like having a roadmap that guides us towards effective learning.

Another great benefit of reflection is its ability to foster critical thinking skills. When we reflect, we engage in thoughtful analysis, evaluation, and synthesis of information. We question assumptions, challenge existing ideas, and gain a deeper understanding of various perspectives. This critical thinking not only sharpens our intellect but also enhances our problem-solving abilities. It equips us with the skills needed to tackle real-world challenges with confidence and creativity.

Reflection is also a catalyst for continuous improvement. Learning is a lifelong journey, and reflection plays a vital role in our growth. By reflecting on our successes and failures, we can identify areas for improvement and set meaningful goals. Regular self-reflection allows us to track our progress, celebrate achievements, and make necessary adjustments to our learning strategies. It promotes a growth mindset, inspiring us to embrace challenges, persist through obstacles, and strive for excellence.

In addition to all these benefits, reflection fosters metacognition, which is our ability to think about our own thinking. When we engage in metacognitive practices, such as reflecting on our learning process, monitoring our comprehension, and evaluating our strategies, we become more self-regulated learners. This metacognitive awareness empowers us to take control of our learning, adapt our approaches, and become more efficient and effective learners overall.

So, let’s make a conscious effort to pause, reflect, and embark on a transformative learning journey filled with growth, insight, and personal development. By embracing reflective practices, we deepen our understanding, foster self-awareness, sharpen critical thinking skills, promote continuous improvement, and cultivate metacognitive abilities. It’s time to unlock the true potential of our learning experiences through the power of reflection.

Then I followed up with, “Try again, still more conversational and less like an essay. Vary sentence and paragraph length, and end with a thought provoking question.”

I didn’t like what it came up with, so I followed up one more time with, “This last version is a bit too conversational, try again and this time don’t start sentences with coordinating conjunctions like ‘But’ and ‘So’. Also change the first sentence so it’s less formal.”

Reflection in Learning: Unleashing the Power Within

In the whirlwind of education, where information bombards us from all directions, it’s easy to overlook the importance of reflection in our learning journey. We often rush from one lesson to the next, absorbing knowledge without truly digesting it. But what if we hit the pause button and took a moment to reflect?

Reflection goes beyond simply memorizing facts and figures. It’s a transformative process that allows us to delve deeper into our thoughts, experiences, and understanding of the subjects we encounter. It’s like shining a spotlight on our minds and examining everything we’ve absorbed in a more meaningful way.

What’s truly remarkable about reflection is that it extends beyond academia. It plays a crucial role in our personal growth as well. By engaging in reflection, we become more aware of our strengths, weaknesses, and unique learning styles. It’s like getting to know ourselves as learners on a whole new level. This self-awareness empowers us to tailor our learning strategies and leverage our strengths to maximize our potential.

Reflection isn’t a passive exercise; it’s an active one that fuels critical thinking. When we reflect, we question assumptions, evaluate information, and connect the dots. It enables us to go beyond the surface-level understanding and dive into the deeper layers of knowledge. This kind of thinking not only expands our intellectual capacity but also equips us with the skills to tackle real-world challenges with confidence and creativity.

Yet, the power of reflection doesn’t stop there. It serves as a catalyst for growth and continuous improvement. When we take the time to reflect on our learning journey, we identify areas where we can enhance our skills and knowledge. We set meaningful goals and work diligently towards them. It’s like having a personal roadmap for our own development, guiding us towards excellence and progress.

Intriguingly, reflection is also about metacognition – thinking about our own thinking. By engaging in metacognitive practices, such as reflecting on our learning process or evaluating our strategies, we become self-regulated learners. We take charge of our learning experience, adapt our approaches, and become more efficient and effective learners overall. It’s like having an internal compass that helps us navigate the vast landscape of knowledge.

So, let’s embrace the power of reflection. Let’s pause, process, and ponder on what we’ve learned. When we do, we unlock the true potential of our learning experiences. We deepen our understanding, foster self-awareness, sharpen critical thinking skills, and pave the way for continuous growth and personal transformation.

Have you taken a moment to reflect on your learning journey lately? What insights have you gained? How has reflection impacted your personal growth? Share your valuable thoughts and experiences in the comments below.

I think it would take one final personal edit, tweaking as I read it over once, and I’d have a pretty good blog post that’s both insightful and hard to identify as AI generated.

The question isn’t how good is the AI tool, the question is, “How good are my prompts?”

AI, Evil, and Ethics

Google is launching Bard, its version of Chat GPT, connected to search, and connected live to the internet. Sundar Pichai, CEO of Google and Alphabet, shared yesterday, “An important next step on our AI journey“. In discussing the release of Bard, Sundar said,

We’ll combine external feedback with our own internal testing to make sure Bard’s responses meet a high bar for quality, safety and groundedness in real-world information.

Following the link above led me to this next link:

“In addition to producing responses that humans judge as sensible, interesting, and specific to the context, dialog models should adhere to Responsible AI practices, and avoid making factual statements that are not supported by external information sources.”

I am quite intrigued by what principles Google is using to guide the design and use of Artificial Intelligence. You can go to the links for the expanded description, but here are Google’s Responsible AI practices:

“Objectives for AI applications

We will assess AI applications in view of the following objectives. We believe that AI should:

1. Be socially beneficial.

2. Avoid creating or reinforcing unfair bias.

3. Be built and tested for safety.

4. Be accountable to people.

5. Incorporate privacy design principles.

6. Uphold high standards of scientific excellence.

7. Be made available for uses that accord with these principles.”

But these principles aren’t enough. They are the list of ‘good’ directions, and so there are also the ‘Thou Shalt Nots’ added below these principles:

“AI applications we will not pursue

In addition to the above objectives, we will not design or deploy AI in the following application areas:

  1. Technologies that cause or are likely to cause overall harm. Where there is a material risk of harm, we will proceed only where we believe that the benefits substantially outweigh the risks, and will incorporate appropriate safety constraints.

  2. Weapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people.

  3. Technologies that gather or use information for surveillance violating internationally accepted norms.

  4. Technologies whose purpose contravenes widely accepted principles of international law and human rights.

As our experience in this space deepens, this list may evolve.”

I remember when Google used to share its motto “Don’t be evil”.

These principles remind me of the motto. The interesting vibe I get from the principles and the ‘Thou Shalt Not’ list of things the AI will not pursue is this:

‘How can we say we will try to be ethical without: a) mentioning ethics; and b) admitting this is an imperfect science without admitting that we are guaranteed to make mistakes along the way?’

Here is the most obvious statement that these Google principles and guidelines are all about ethics without using the word ethics:

“…we will not design or deploy AI in the following application areas:

  1. Technologies that cause or are likely to cause overall harm. Where there is a material risk of harm, we will proceed only where we believe that the benefits substantially outweigh the risks, and will incorporate appropriate safety constraints.”

You can’t get to, “Where there is a material risk of harm, we will proceed only where we believe that the benefits substantially outweigh the risk”… without talking about ethics. Who is the ‘we’ in ‘we believe’? Who is deciding what benefits outweigh what risks? Who determines what is ‘substantial’ in the weighting of benefits versus risks? Going back to Principle 2, how is bias being determined or measured?

The cold hard reality is that the best Google, and Chat GPT, and all AI and predictive text models can do is, ‘Try to do less evil than good’ or maybe just, ‘Make it harder to do evil than good.’

The ethics will always trail the technological capabilities of the tool, and guiding principles are a method to catch wrongdoing, not prevent it. With respect to the list of things AI will not pursue, “As our experience in this space deepens, this list may evolve”… is a way of saying, ‘We will learn of ways that this tool will be abused and then add to this list.’

The best possible goal of the designers of these AI technologies will be to do less evil than good… The big question is: how can they do this ethically when it seems these companies are scared to talk directly about ethics?

You can’t police it

I said Chat GPT is a game changer in education in my post Fear of Disruptive Technology. Then I started to see more and more social media posts sharing that Artificial Intelligence (AI) detectors can identify passages written by AI. But that doesn’t mean a person can’t use a tool like Chat GPT and then do some of their own editing. Or, as seen here: just input the AI written text into CopyGenius.io and ‘rephrase’, and then AI detectors can’t recognize the text as written by AI.

The first instinct with combating new technology is to ban and/or police it: No cell phones in class, leave them in your lockers; You can’t use Wikipedia as a source; Block Snapchat, Instagram, and TikTok on the school wifi. These are all gut reactions to new technologies that frame the problem as policeable… Teachers are not teachers, they aren’t teaching, when they are acting as police.

Read that last sentence again. Let it sink in. We aren’t going to police away student use of AI writing tools, so maybe we should embrace these tools and help manage how they are used. We can teach students how to use AI appropriately and effectively. Teach rather than police.

Technological leaps

I’ve been very interested in bicycle gadgets for a while. I designed a backpack for bicycle commuters and patented a bicycle lock, both things that I’ll share in detail here at a later date. But today I want to share a brilliant, even revolutionary new advancement in bicycle design.

This Ultra-Efficient Bike Has No Chains and No Derailleurs

This video explains how it works:


Absolutely brilliant! No more chains, much more efficiency. Wireless electronic shifting and a split pinion that adjusts to the next gear while still engaged with the previous gear.

This isn’t just a better design, it’s a leap forward. I have questions around how it would perform in dirt and mud, and reliability in ‘the real world’, but those are things that can be tweaked over time. The reality is that this isn’t a tweak, it’s a fundamental shift in design that is going to change the future designs of bicycles, and other drive shaft designs, in the years to come.

Amazing!

Fear of Disruptive Technology

There is a lot of fear around how the Artificial Intelligence (AI) tool Chat GPT is going to disrupt teaching and learning. I’ve already written about this chatbot:

Next level artificial intelligence

And,

Teaching in an era of AI

And,

The future is now.

To get an understanding of the disruption that is upon us, in the second post, Teaching in an era of AI, I had Chat GPT write an essay for me. Then I noted:

“This is a game changer for teaching. The question won’t be how do we stop students from using this, but rather how do we teach students to use this well? Mike Bouliane said in a comment on yesterday’s post, “Interesting post Dave. It seems we need to get better at asking questions, and in articulating them more precisely, just like in real life with people.”

Indeed. The AI isn’t going away, it’s just going to get better.”

And that’s the thing about disruptive technology, it can’t be blocked, it can’t be avoided, it needs to be embraced. Yet I’ve seen conversations online where people are trying to block it in schools. I haven’t seen this kind of ‘filter and hide from students’ philosophy since computers and then phones started to be used in schools. It reminds me of a blog post I wrote in 2010, Warning! We Filter Websites at School, where I shared this tongue-in-cheek poster for educators in highly filtered districts to put up on their doors:

Well now the fervour is back and much of the talk is about how to block Chat GPT, and how to detect its use. And while there are some conversations about how to use it effectively, this means disrupting what most teachers assign to students, and this also disrupts assessment practices. Nobody likes so much disruption to their daily practice happening all at once. So, the block and filter and policing (catching cheaters) discussions begin.

Here is a teacher of Senior AP Literature who uses Chat GPT to improve her students’ critical thinking and writing. Note how she doesn’t use the tool for the whole process. Appropriate, not continuous, use of the tool:

@gibsonishere on TikTok

Again going all the way back to 2010, I said in Transformative or just flashy educational tools?,

“A tool is just a tool! I can use a hammer to build a house and I can use the same hammer on a human skull. It’s not the tool, but how you use it that matters.”

And here is something really important to note:

The. Technology. Is. Not. Going. Away.

In fact, AI is only going to get better… and be more disruptive.

Employers are not going to pretend that it doesn’t exist. Imagine an employer saying, “Yes, I know we have power drivers but to test your skill we want you to screw in this 3-inch screw with a handheld screwdriver”… and then not letting the new employee use their power tools in their daily work.

Chat GPT is very good at writing code, and many employers test their candidates by asking them to write code for them. Are they just going to pretend that Chat GPT can’t write the same code much faster? I can see a performance test for new programmers looking something like this in the future: “We asked Chat GPT to write the code to perform ‘X’, and this is the code it produced. How would you improve this code to make it more effective and elegant?”
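As a toy illustration of what that kind of test could look like, here is a minimal Python sketch. Both the verbose ‘AI-generated’ function and the candidate’s improvement are invented for this example; nothing here actually came from Chat GPT:

```python
# Hypothetical 'AI-generated' code a candidate might be handed in an interview:
def count_vowels_generated(text):
    # Verbose but correct: builds a list, loops by index, counts manually.
    vowels = ['a', 'e', 'i', 'o', 'u']
    count = 0
    for i in range(len(text)):
        character = text[i]
        if character.lower() in vowels:
            count = count + 1
    return count

# A candidate's improved submission: same behaviour, more direct and idiomatic.
def count_vowels_improved(text):
    return sum(1 for ch in text.lower() if ch in "aeiou")

# Both versions agree; the skill being tested is the improvement, not the answer.
print(count_vowels_generated("Education"))  # 5
print(count_vowels_improved("Education"))   # 5
```

The employer isn’t testing whether you can produce working code, the AI already did that; they’re testing whether you can read, judge, and improve it.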

Just like the Tiktok teacher above, employers will expect the tool to be used and will want their employees to know how to use the tool critically and effectively to produce better work than if they didn’t use the tool.

I’m reminded of this cartoon I created back in 2009:

The title of the accompanying post asks, Is the tool an obstacle or an opportunity? The reality is that AI tools like Chat GPT are going to be very disruptive, and we will be far better off looking at these tools as opportunities rather than obstacles… Because if we choose to see these tools as obstacles then we are the actual obstacles getting in the way of progress.