Tag Archives: technology

Gangsta AI in the hood

This is next-level music production and creation. The quality of this remix is unreal. I think this is one of the best remixes of a song I’ve ever heard… and I’m not even a blues fan.

Here is Coolio’s Gangsta’s Paradise.

And here is the AI blues version.

I don’t know how I feel about liking AI-created music so much. To me, it’s the creative endeavours of humankind that make us such unique beings in the galaxy, if not the universe.

Then I hear this and I think: we are not alone anymore. I expect AI to ‘out-intelligence us’ soon enough, but I wasn’t expecting such a quick transition to ‘out-create us artistically’! Sure, this is based on a song by Coolio, which is itself based on Pastime Paradise by Stevie Wonder… so it is not truly original. But we are still in the very early stages of AI musical creativity, and I fear that just as we can’t trust video clips anymore without questioning whether they are AI, soon we won’t be able to listen to a great new song without wondering which AI model created it.

Loving the song version but feeling like AI is getting pretty gangsta and taking over the formerly human creative hood.

Bells and whistles

My wife bought a scale that tells you more than your weight. It’s called Hume, and it gives you a whole bunch of data about your body. You stand on it barefoot and hold a handle with sensors on it, and it gives you your fat percentage, lean mass, subcutaneous fat mass, body water percentage, heart rate, and more.

It seems interesting and I’ll try it for a few more weeks, but I’m questioning its accuracy. First of all, it had my body fat percentage go up over 3% in 10 days. That seems odd. And it says my heart rate is higher than I think it is. Yesterday and today I cancelled and redid my first weigh-in because it said my heart rate was above 80 yesterday and 78 today. I know that when I wake up in the morning my heart rate is not that high. My second reading today was 70. I then took out our blood pressure monitor, which also measures heart rate, and it measured mine at 57.
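Just to put that discrepancy into numbers, here’s a rough back-of-the-envelope check, treating the blood pressure monitor’s 57 bpm as the reference value:

```python
# Rough error check on the scale's heart-rate reading,
# treating the blood pressure monitor's 57 bpm as the reference.
scale_bpm = 78      # the scale's reading
reference_bpm = 57  # the blood pressure monitor's reading

error_bpm = abs(scale_bpm - reference_bpm)
error_pct = error_bpm / reference_bpm * 100

print(f"Off by {error_bpm} bpm, or about {error_pct:.0f}%")
# → Off by 21 bpm, or about 37%
```

Even if the monitor itself is a little off, an error in that range is hard to explain away as measurement noise.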

Sometimes I think too many features are put into things, and they come at a cost to other features. In this day and age, heart rate should be a low bar for accuracy. Being off by about 20 beats per minute is not acceptable. I don’t want ‘all the bells and whistles’ if they are not hitting accurate notes.

Again, I’ll try this new toy out for a couple more weeks, but I have to say I’m disappointed so far, and if it can’t get heart rate and body fat percentage right, can you blame me for questioning the other results? And if I’m not accurately tracking these data points, why would I use the product, no matter what bells and whistles it claims to offer?

The infinite classroom

I recently heard someone describe AI as the infinite classroom… You can get anytime learning, catered just to you. And for a moment I thought, ‘I remember Google being described like that, and YouTube too.’ Now, I know that the ‘catered to you’ part of Artificial Intelligence is a richer experience than Google or YouTube, but that doesn’t mean that we haven’t kind of been here before. The guy went on to say that schools today are irrelevant. He was American and his focus wasn’t K-12 education but rather ‘investing’ $200k+ for a college degree that could be irrelevant by the time you get it.

Still, this made me think of all the digital distractions that make school less appealing and engaging compared to out-of-school offerings and opportunities… from AI providing meaningful, just-in-time learning, to social media, to gaming. Be it for learning or entertainment, the competition for attention outside of school is significant.

So how do we engage students in schools when an infinite classroom, as well as unlimited distractions, exists outside of them?

What we shouldn’t do is bring back more traditional testing to ensure students don’t cheat using AI. What we also shouldn’t do is try to compete with the outside world and attempt to make schools more entertaining.

What we should do is create rich experiences where students are exposed to concepts and ideas that they would not have found on their own. We should provide social opportunities to learn together. We should provide opportunities for student voice and choice.

It’s not about competing with the infinite. It’s about cultivating learning experiences where students feel invested in the experience. It’s about fostering curiosity and providing shared learning opportunities that challenge students meaningfully.

In a world of infinite distractions, engagement in schools needs to be community and relationship focused. If it’s just about accumulating information and content, then classrooms as we know them will be no match for the infinite classroom (and unlimited distractions) that out-of-school opportunities will provide.

Not Ready, and Ready

I’m not ready to connect AI to my email, to let it view my calendar, to let it automate my communication, or to let it write for me unsupervised. I don’t trust AI to organize my life in any way.

But…

I am ready to share all my health data. I’m ready for AI to know everything about my health that I can provide it. I want to get a DEXA scan and share it with ChatGPT or some other tool for feedback.

Analyze and diagnose me, but don’t run my life.

That’s my current line… let’s see how it changes in a year.

Instant feedback

Yesterday, at our welcome back session for principals, one of the assistant superintendents asked us a couple of questions to get feedback on what we thought was important for our district visioning. This is a typical kind of exercise to start the year. Usually the data is collected, and then in a later session we look at it and the trends.

But instead, he had us do the activity individually, then connect as a table group to prioritize our results. Then one person per table put the top five answers into a Microsoft Form.

The assistant superintendent then used Copilot, Microsoft’s AI assistant, to not only collate the data but also look for trends. He did this during a break so that we were not waiting on him. Then we came back and discussed not only the results but also his line of questioning.

Probably my favourite part of this is when he told Copilot, ‘here is the data’, but forgot to paste it in. Why? Because it’s important to model that you can make mistakes when trying something new.

I was discussing with a colleague before the meeting that I was hoping to see this happen. I’m tired of people collecting large amounts of data that will then take hours to assess, when we have new technology that can find trends invisible to us in mere seconds.

In the meeting we still did a lot of activities to connect us to our peers, and we still had great table talks and meaningful conversations, but when it came time to collect and assess data we didn’t go old school; instead, we took advantage of the technology available to us in a meaningful way. And yes, more analysis of the data may come later, and not all of it using AI, but to have this powerful tool available and not both use it and model it would be a real shame.

It was really great to see this happen in yesterday’s first meeting of the year.

No more manuals

One of the things I’ve been using ChatGPT and Copilot for has been to save me time looking through device manuals for things I don’t know how to do.

Today’s request, “This is a photo of my washing machine. It has a ‘Clean washer’ option on the dial, but I don’t know the instructions for that. Could you please let me know what they are?”

It was simple enough: I just needed to add bleach. But asking the question rather than flipping through a 40+ page manual with tiny font is just so much better. Recently I’ve also used AI instructions to fine-tune the settings on my hot tub. Again, this was so much easier than pulling out the manual.

It’s like having a specialist at your disposal. Soon AI will be the manual. There will be a QR code on the device, and when you scan it with your phone it will send you to an intelligent, personified AI manual that asks you how it can help. Then, in conversation, text, and just-in-time video, it will guide you step by step.

You know those annoying ‘Getting Started’ instructions that help you set up things like a remote control? Well, AI can make those instructions both easier to follow and interactive for when things don’t go as smoothly as expected.

We are just a few years away from never having a paper manual shipped with a new device again, because there will be an AI agent designed to help you with far more detailed and context-specific assistance than an analog booklet could ever provide.

“Oh no, AI is making us dumber!”

Except it’s not.

People forget that we were worried about the internet and Google. And before that, writing utensils:

“Students today depend too much upon ink. They don’t know how to use a pen knife to sharpen a pencil. Pen and ink will never replace the pencil.”
~ National Association of Teachers Journal, 1907


“Students today depend on these expensive fountain pens. They can no longer write with a straight pen and nib. We parents must not allow them to wallow in such luxury to the detriment of learning how to cope in the real business world which is not so extravagant.”
~ From PTA Gazette, 1941

I pulled those quotes from a presentation I did 16 years ago. I did another presentation at that time where I shared a quote from 1842 discussing how books would become useless “when the pupils are furnished with slates”.

We are used to pronouncing ‘the sky is falling’ when the next advancement comes along. Google was going to make us dumber. It didn’t. Smartphones were going to make us dumber, but they didn’t. They did, however, change the things we think about and remember. For example, I used to carry around a few dozen phone numbers memorized in my head; now I don’t even know my own daughter’s numbers. They are neatly stored in my phone.

AI will do the same. It will adjust what we remember, fine-tune what we think about and ask, and help direct our thinking… but it won’t make us dumber.

When I was a kid, I thought my dad was the smartest guy in the world. I can’t think of a question I asked him that he didn’t know the answer to. Sometimes he’d even bring me a file on the topic I asked about.

I remember absolutely blowing away a teacher and my fellow students on a project I did on harnessing the ocean for power. I had newspaper clippings, magazine articles, even textbook sources that I shared on the classroom overhead projector. It looked like I spent hours upon hours doing research. I didn’t. I asked my dad what he knew and he gave me a thick file with all the resources I needed. He was my Google long before Google was a thing.

It made me look good. It made my work a lot easier. It didn’t make me dumber.

I’ll admit that there is something fundamentally different about AI compared to advances like the slate, the pen, the internet, Google, and other ‘technological advances’. As Artificial Intelligence becomes smarter than us, we can rely on it in ways that we couldn’t with other advances. And it will take a while for us to figure out how to create tasks in schools that utilize AI effectively, rather than having AI do all the work. It was hard but not impossible to ‘Google-proof’ an assignment, and that challenge is significantly magnified by AI. But the opportunities are also magnified.

What happens when AI can individualize student learning and what we consider the ‘core curriculum’ can be taught in less than half of a school day? How exciting can school be for the other half of the day? What curiosities can we foster? How student directed (and thus more engaging) can that other half of the day be?

We are only dumber using AI if we decide that we will passively let it do the work for us, but let’s not pretend students were not already using ‘cut-and-paste’ to get assignments done. Let’s not pretend work avoidance wasn’t already a thing. Let’s not pretend that we don’t already spend a lot of time in schools teaching students to be compliant rather than to think for themselves.

AI will only make us dumber if we try to continue doing what we have done before, but allow AI to do the work for us. If we truly use AI in collaborative and inspirational ways, we are opening an exciting new door to what human potential really can be.

It should be getting easier, but it isn’t

Sometimes it’s hard to believe that we live in the 21st century. It should be getting easier, but it isn’t. All of our schools just got new photocopy machines, and there is a one-hour video tutorial to learn how to use them. More videos and instructions are required if you are the one doing any kind of basic maintenance, like replacing the toner.

Related to this, when was the last time you bought a new TV and instantly knew how to use the remote? I find it incredibly ironic that there is nothing universal about a universal remote control. I don’t know, call me crazy, but I would think that in this day and age the tools we use would get simpler, not more complicated.

Borrow a friend’s car and try to fill the gas tank, and you’re left puzzled as to where the release for the gas tank’s door is. Go to the gas station and there’s a process to get your rewards card punched in, because you don’t have room in your wallet for 17 rewards program cards. Try to connect your phone to that same borrowed car, and you’re worried you’re going to have to cancel another user’s profile, or you’re faced with a touch screen menu that just doesn’t make intuitive sense.

How is it that the user interface of almost everything we do now is more complicated than necessary? Why is it that every single place we go online we’re expected to log in or create an account, or at the very least close one or two pop-up invitations to do so? I’m looking at a website for 30 seconds, to find one piece of information; do I really need to decide whether or not I want to accept cookies?

My microwave has a touch dial where I have to spin my finger in a circle to get to the appropriate time. I don’t think I can ever hit the time I want without toggling back and forth. This takes me significantly longer than if I had to punch three number keys on a touchpad and hit ‘Start’. There is nothing convenient about this. And that’s my point…

We live in an era when things should be getting a lot easier, user interfaces should be intuitive and natural to use, but instead everything seems to be getting a little more difficult. I just don’t get it.

The real alignment problem

‘The alignment problem in artificial intelligence refers to the challenge of ensuring that AI systems act in accordance with human values and intentions. It involves making sure that these systems pursue the goals we set for them without unintended consequences or harmful behaviors.’

~ Auto-generated on DuckDuckGo

The real alignment problem is this: which human values are being aligned?

Do you want AI aligned with strict religious beliefs? Nihilism? Capitalism?

The point is, we can’t agree on what human values we want, so how does AI align pluralistically? And furthermore, when AI achieves superintelligence, why would it bother to align with us?

The real alignment problem comes in two parts:

  • The what: align with which human values?
  • The why: why would a superintelligent AI want to align with our values?

The first part is something we will have to figure out. The second might just be decided for us, and not necessarily in our favour.

Self-interests in AI

Yesterday I read the following in the ‘Superhuman Newsletter (5/26/25)’:

Bad Robot: A new study from Palisade Research claims that “OpenAI’s o3 model sabotaged a shutdown mechanism to prevent itself from being turned off”, even when it was explicitly instructed to shut down. The study raises serious safety concerns.

It amazes me how we’ve gotten here. Ten, or even five, years ago there were all kinds of discussions about AI safety. There was a belief that future AI would be built in isolation with an ‘air-gap’, a security measure to ensure AI systems remained contained and separate from other networks or systems. We would grow this intelligence in a metaphorical petri dish and build safety guards around it before we let it out into the wild.

Instead, these systems have been built fully in the wild. They have been given unlimited data and information, and we’ve built them in a way that we aren’t sure we understand their ‘thinking’. They surprise us with choices like choosing not to turn off when explicitly asked to. Meanwhile, we are simultaneously training them to use ‘agents’ that interact with the real world.

What we are essentially doing is building a super intelligence that can act autonomously, while simultaneously building robots that are faster, stronger, more agile, and fully programmable by us… or by an AI. Let’s just pause for a moment and think about these two technologies working together. It’s hard not to construct a dystopian vision of the future when we watch these technologies collide.

And the reality is that we have not built an air-gap. We don’t have a kill switch. We are essentially heading down a path to having super-intelligent AI ignoring our commands while operating robots and machines that will make us feeble in comparison (in intelligence, strength, and mobility).

When our intelligence compared to AI’s is equivalent to a chimpanzee’s intelligence compared to ours, how will this super-intelligence treat us? This is not hyperbole; it’s a real question we should be thinking about. If today’s rather simplistic LLM AI models are already choosing to ignore our commands, what makes us think a super-intelligent AI will listen to or reason with us?

That’s all well and good when our interests align, but I don’t see any evidence that self-interested AI will necessarily have interests aligned with the intelligent monkeys that we are. And the fact that we’re building this super-intelligence out in the wild gives reason to pause and wonder what will become of humanity in an age of super-intelligent AI.