Author Archives: David Truss

Appreciate the tiny wins

Tiny wins are often hard to see. They don’t seem significant, but they accumulate.

James Clear explains in Atomic Habits that getting 1% better every day compounds into being roughly 37 times better after a year.
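Clear's 37× figure is just daily compounding arithmetic; a quick sketch (assuming a flat 1% gain for 365 straight days, per the book's framing):

```python
# The "1% better every day" claim from Atomic Habits, as arithmetic.
# Assumes a constant 1% compounding improvement, every day for a year.
daily_improvement = 0.01
one_year_better = (1 + daily_improvement) ** 365
print(f"{one_year_better:.1f}x after a year of 1% daily gains")   # ≈ 37.8x

# Clear also runs the reverse: 1% worse daily shrinks you to nearly zero.
one_year_worse = (1 - daily_improvement) ** 365
print(f"{one_year_worse:.2f}x after a year of 1% daily decline")  # ≈ 0.03x
```

The asymmetry is the point: the same tiny daily rate compounds toward roughly 38x in one direction and toward almost nothing in the other.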

You don’t go heavier on a lift in the gym, but you eke out a couple extra reps.

You walk into a coffee shop and get right to the counter just before a rush of people who have to line up behind you.

You hit almost every green light on your way home from work.

You actually enjoy a meal that sounds too healthy to be tasty.

You write a single sentence and suddenly your muse has arrived.

We don’t always see them, and we rarely celebrate them, but the tiny things we choose to pay attention to and appreciate can be the highlight of the day… or the precursor to more wins, big and small, in the future.

What’s the real AI risk in education?

I read a great article on LinkedIn by Ken Shelton. He looked at two articles:

“On one side:
AI as productivity infrastructure.
On the other:
AI as compliance enforcement.

But in both cases, the conversation centers on efficiency and policing, not on whether learning itself has been redesigned for an AI-rich world. Using historical context, one could reasonably make similar arguments around the implementation of technology as well. If students are learning to “sound human” to avoid detection… If institutions are investing in increasingly sophisticated surveillance tools… If teachers are primarily using AI to move faster within the same structures… Then we have to ask, as I have shared in previous posts:

Are we adapting learning?
Or are we simply optimizing and defending legacy systems?”

I found his article more interesting than the two he shared. I especially loved his final paragraph:

“The risk isn’t just that AI is moving too fast. The risk is that our response remains reactive, oscillating between efficiency and enforcement, without addressing purpose, power, and pedagogy. Therefore, the real inflection point isn’t technological, it’s analytical and philosophical.”

My thoughts: In education, go ahead and use AI to make teaching and lessons better, use it to help students learn, and also help them understand how to use AI to enrich their learning. But don’t use it to make learning easier. Real learning has a charge to it; it needs to come with some challenge and hardship. If the learning experience is too easy, it won’t be remembered. If there isn’t enough challenge, if the answers are provided rather than constructed, the learning will soon be forgotten. Remove being stuck, struggling, and failing, and you’ve removed the greatest part of a learning experience.

So educators need to do two things: First, they need to use AI to make what they are doing even better. Second, they need to shift the learning experience to one where they no longer need to worry about policing and compliance. For example: the work isn’t finished with an essay, but with students defending the points in their essay against other students with slightly different points or perspectives. Students who wrote the essay with AI and didn’t fully comprehend the topic can’t argue their perspective as well as those who were willing to do the work… and if they did use AI and can still argue the points better than their peers, that only proves they understand how to use AI as a learning tool rather than a tool that does the work for them. Because the real risk of AI in education is that the AI does the work, the struggle, and the learning for the student.

The problem we face is how learning can be circumvented by AI. And so the challenge for educators is to make it more challenging to use AI inappropriately, and to use AI to aid in making learning experiences more challenging. This is not an easy task, but it’s one we need to figure out and do well if we want our students to be learners who will have significance in a world where AI is all around us.

_________
Update: Just found this LinkedIn post by William (Bill) Ferriter, and it has two awesome images to fit with the above.

Update 2: I forgot about this post: Thinking Requires Effort

Reducing busywork, and maximizing the problem-solving time, in a community of learners who find benefit from working together, is what schools should be in service of.

Oblivious to what’s coming

If you talk to people about LLMs like ChatGPT, Perplexity, or Claude, you’ll still hear things like, ‘They hallucinate and will make up fake research,’ and something I heard recently: ‘They actually make work harder because workers need to spend more time editing and cleaning up what they produce.’ What people who say this don’t realize is that this is pre-January 2026 thinking, and we are now fully into February 2026. Yes, things are moving that fast! And furthermore, what most people, including me, have not been paying attention to is that when we use the free versions of these tools, we are essentially months and months behind what the latest models can do.

Matt Shumer’s ‘Something Big Is Happening’ was written just 4 days ago and has already been seen by millions of people. Yes, it’s a bit of a long read, but it is also a ‘must read’. Here is an excerpt:

“Dario Amodei, who is probably the most safety-focused CEO in the AI industry, has publicly predicted that AI will eliminate 50% of entry-level white-collar jobs within one to five years. And many people in the industry think he’s being conservative. Given what the latest models can do, the capability for massive disruption could be here by the end of this year. It’ll take some time to ripple through the economy, but the underlying ability is arriving now.

This is different from every previous wave of automation, and I need you to understand why. AI isn’t replacing one specific skill. It’s a general substitute for cognitive work. It gets better at everything simultaneously. When factories automated, a displaced worker could retrain as an office worker. When the internet disrupted retail, workers moved into logistics or services. But AI doesn’t leave a convenient gap to move into. Whatever you retrain for, it’s improving at that too.”

I recently shared my thoughts on the upcoming ‘Fiscal year end squeeze’, where I said, “Corporations care about pleasing shareholders and maintaining stock value over caring for the people who work for them. This is the ugly side of capitalism. Eliminate thousands of salaries and suddenly the balance sheet proves to be more profitable. Never mind that these are people’s careers and livelihood that are being cut short. And never mind about loyalty to the company.” What I’m realizing now, after reading Matt’s article, is that the situation is far worse than I thought, because AI is coming after not just these jobs, but almost every other job these newly unemployed people will be looking for.

If I were out of a job right now, I’d be paying the monthly fees for the 2-3 best AI models out there and learning how to power-use them. I wouldn’t be looking for a job; I’d be trying to find a niche where I could work for myself, or maybe become a contractor doing things for people who don’t realize that AI is good enough to get the work done faster than they can do it. Because the reality is that the vast majority of people in the world are oblivious to just how fast this disruption is coming, and unlike other disruptions in the past, this one is going to happen everywhere, all at once. Most people can’t fathom how disruptive this will be, and even as I share this as a warning… I’m not sure I fully grasp the full impact either.

Archiving memories

There is a quote I often hear about the fact that nobody will know your name in 3 generations. This makes me think of my grandparents, and the stories they used to tell. I was fortunate enough to get some video recordings of both of my grandmothers (Granny T & Granny B), but not of my grandfathers. Today I dug up the 10-page story that my grandfather, Motel Truss, had recorded a few months before he died. I don’t have the recording itself, but I have the document that my mom transcribed from the recording for him. He recorded it at the request of the Barbados Jewish Community. Later, a book was written, ‘Peddlers All: Stories of the First Ashkenazi Jewish Settlers in Barbados’, and my grandparents were all mentioned in it as well.

I think towards the end of the year I am going to try to document images and stories of each of my grandparents. Nothing extravagant, but something that my kids, and maybe their grandkids could look at to learn a bit about their distant ancestors. It was a very different time, with completely different hardships and challenges, and I think their stories are worth documenting and sharing.

Post Truth Era

Never mind the ridiculous videos of Mr. Rogers chatting with Tupac Shakur or Bigfoot vlogging; those AI videos seem real enough while fully intending us to know they are AI. What we are seeing now is an indistinguishable blending of real and fake, with videos that are completely altering our ability to know what is real and what isn’t.

Voice mimicking was already almost perfect. I saw a video post today from a man whose dad called him to ask what their shared bank account password was. One problem: his dad died last year; he just hadn’t taken his name off of the account yet. He said it sounded so real that had his father been alive, he probably would have shared the password, thinking his dad forgot.

Now AI videos are just as good as AI audio, and the combination of the two truly is steering us into a post-truth era. People are sharing AI videos completely unaware that they are fake. Even news stations are getting it wrong.

Soon websites will become bastions of truth. Want to know what someone actually said? Go to ‘their name’ .com or .org and see the actual video shared there. Anything else will be questionable, and wherever else the video is shared must be watched with skepticism. Subtle or overt, important changes in a message will occur as a result of someone, ultimately anyone, taking the original video and making an AI version that delivers their message instead of the intended one.

Following specific domains, and maybe a handful of legitimate news channels, is the only suggestion I have. Legislation won’t keep up, and the fakes are just getting better. Essentially, find reliable sites and distrust everything else. Intuition and common sense won’t be enough.

Foundational Geometry of the Cosmic Matrix in the Tetraverse

This is the next installment in the Book of Codes series that I do with Joe Truss.

Foundational Geometry of the Cosmic Matrix in the Tetraverse

It’s an easier video to understand if you are willing to take the time to watch ‘We Live in a Tetraverse’, our introductory video based on the premise that the smallest building blocks in the universe must be tetrahedral.

Joe and I spent 4 hours putting the final touches on the ‘Foundational Geometry of the Cosmic Matrix in the Tetraverse’ video this morning, after working on it almost every Sunday morning for a few months now. Here are other videos in the series:

Secret Origins of the Enneagram

A Short Take on Assembly Theory in the Tetraverse Model: A Geometric Representation

A Dimensional Twist of the Tetraverse (A response video to Klee Irwin’s 20 Group Twist)

As always, feedback is greatly appreciated.

Take action despite fear and doubt

This weekend I had the opportunity to see Chris Williamson speak at the Vogue Theatre.

A few things he said seemed to circle around a theme of taking action despite fear and doubt. Here are some of the ideas he shared:
(I took notes not perfect quotes, but all the ideas below came from Chris.)

He quoted Christopher Hitchens: “In life we must choose our regrets.” This is a feature, not a bug. You can’t pick the right path and not still have regrets for not making another choice, for not choosing another path. Which regret do you want? Which regret can you not live with?

Contemplate the consequences of inaction. Don’t pretend that inaction does not have a price. (i.e. the anxiety cost of ‘I still have X to do today.’)

Belief: Self-belief never wavers when the hero decides on his journey… but there is doubt ALL ALONG THE WAY! That’s why it’s so easy to fall back into old patterns.

We aren’t afraid of failure, we are afraid of what others will say when we fail… Don’t outsource your self-image to the opinions of others.

Best question to ask: What is it that ‘you tomorrow’ would want ‘you today’ to do? Optimize for your future self.

Don’t follow what most people do… you don’t want the results they get.

You make the most progress when things are hard… and looking back, in retrospect, would you avoid them if you could, now that you’ve accomplished those hard things?

You don’t need to be certain, just confident that you are moving in the right direction. Have a bias for action.

He also quoted Jocko Willink regarding the fact that you can’t fake bravery. Pretending to be brave when you are scared IS bravery. Motivation is similar, just do the thing… Preparing isn’t the thing, neither is telling people, writing about the fact that you are going to do the thing, reading about it, or fantasizing about it. Again, just do the thing.

And finally, on this topic, an audience member quoted Chris during the Q&A: “The magic that you are looking for is in the thing that you are avoiding.”

~~~~~~~~
How much of our lives are spent questioning ourselves, doubting ourselves, and avoiding action for fear of an outcome we don’t want?

I’ve shared this before, but when my wife and I were deciding whether to take our young family to China for jobs as principal and teacher at a Foreign National school, we discussed it for over 2 hours late one night. We didn’t come to any conclusion, and the next night after work we put the kids down to sleep and sat down to continue the conversation. We made tea and popcorn and prepared for another marathon discussion, and then one of us (neither of us remembers who) said, “If we don’t do this, will we regret it?” Absolutely. We had decided. The discussion moved to how to tell the kids. Any regrets for going would be overshadowed by the regret of not going.

As a photographer, I never regretted taking a photo, but I regretted the photographs that I never took.

We avoid time under tension, even though we know it strengthens us: “We cannot strengthen our resilience unless we face things that challenge us for longer than we could previously tolerate.”

And as a final thought from me: avoidance is easy. How much time do we spend in a state of busyness rather than dealing with business? We avoid the real task by doing other things, or worse yet, by doing something that’s merely a distraction. Some things get automated, habits get ritualized, and the work just gets done. But sometimes the struggle is real. Avoidance becomes the easy task, and the hard part isn’t the work itself but actually getting down to work. Because once you start, the work gets done.

~~~~~~~~
Also related: Be Fearless, James Clear on The pain of inaction, and many posts on failure.

Crappy user experience

A bit of a rant here. I’m doing some medical expense claiming, and my provider has an app that is not designed with the end user in mind. First, I have to go through 3 different pages to make a claim. Then, after all the claim details are entered, I have to scroll down a confirmation page that puts my address on individual lines, taking up so much screen real estate that the ‘Consent and Declaration’ is hidden under the ‘Submit’ button. So you end up hitting the Submit button only to learn that you need to scroll down and click the consent, which opens up in another page.

Also, I use my laptop and phone for much of the day and don’t need reading glasses, but my pharmacy prints the details I need to make the above claim in a tiny, hard-to-read font. This is so unnecessary. It’s already really confusing trying to locate all the information, which is spread out across 3 different sections of the prescription receipt; does it also need to be in microscopic font? This is the only thing I’ve had to put reading glasses on to see in the last few months.

I get tired of user interfaces that are designed for the product and not the user. The insurance company probably doesn’t want claims to be easy to make; they’d rather you had to go through a more challenging process. The pharmacy changed their format so that the prescription receipt gets printed on a small sticker, and I’m sure cost saving was more important to them than providing a readable receipt for their aging customers. This kind of behaviour may or may not be intentional, but it is ignorant of the end user’s experience. I’ve complained before about inconsistencies in remote controls, apps that want your attention at the cost of your convenience, and how it feels like we are decades behind where we should be when doing things like setting up a new printer. I would say that over 95% of the things I rant about are related to products and services providing crappy user experiences.

How hard would it be to have the customer in mind as a priority, rather than an afterthought?

Do not amplify if you can not verify

This is a simple, but potent message. Before hitting the ‘Like’ or ‘Share’ button, before telling someone about the interesting fact you heard online, verify it in some way. Is it true?

Do some Ground Truthing. Can you verify the claim? Is it real or AI? Is it worthy of your amplification, or are you just contributing to the spread of something unworthy of being shared?

How much better would the internet be if everyone paused and verified what they were sharing before amplifying misinformation, disinformation, fake news, and AI deep fakes?

It’s a worthy and effective mantra:

Do not amplify if you can not verify.

Ground Truthing

The term ‘ground truthing’ was shared with me last night by a friend, Neil. I had never heard the term before, so I did a quick MS365 Copilot request to learn more.

Ground truthing is the process of collecting data on-site (in the real world) to validate and calibrate information obtained from remote sensing technologies, models, or other indirect methods… It’s essentially a reality check to ensure that what the data suggests matches what’s actually happening on the ground.

While it is primarily used in Geography & Remote Sensing, Environmental Science, Agriculture, and Machine Learning & AI, I think it’s a term (or at least a practice) we are going to see a lot more use of in the future. More and more, when I receive information I’m immediately questioning if it’s real. Anything remotely controversial, or surprising, easily falls into a category of doubt… ‘I wonder if this is real or AI?’ But more recently, almost every video and article I see seems to sit in an uncanny valley of almost true or almost real. Before I accept new information, I have to ask myself, ‘Where can I verify this?’ In other words, ‘How can I ground truth this?’

Here is a simple example, in that the information is obviously false, but the deep fake is impressively realistic.

I also saw a video of physicist Brian Cox saying that comet ATLAS 3i was definitely a spaceship. I didn’t bother fact-checking it, I knew it was fake, but enough of his followers questioned these kinds of videos that Brian came out on social media to say this:

“I keep seeing AI shite of me popping up on YouTube. The general rule is that if I appear to say something that you agree with and you are a UFO nobber, flat earth bell end or think comet ATLAS 3i is a spaceship, it’s fake.”

Where it gets more complicated is where actual facts are taken and then exaggerated. On the same theme of science and space, I recently saw a video that was talking about the theory that our entire universe might be in a massive black hole. From Copilot:

Some physicists propose that our universe might exist inside a black hole. This idea stems from the observation that black holes warp space and time so intensely that they could create a new, self-contained universe within. The consistent spin direction of many galaxies could be a result of the angular momentum inherited from the parent black hole, influencing the structure and motion of matter in our universe.

This is indeed a theory being considered by some scientists, and I find it very interesting. So when a video about it comes up on my social media stream, I watch it. But when, 20 seconds in, I hear the narrator say that this is now considered true, I can’t even get myself to watch to the end of the video. These kinds of videos really piss me off. I am angered that someone would create a video based on factual, interesting, and novel ideas, but exaggerate the information and outright lie about it for the sake of views, clicks, and likes.

All 3 of these examples are actually easy, because my BS detector goes off. Where I’m concerned now is where that detector does not go off. What happens when the lies are more subtle, when the information is more nuanced? For example, do I really understand the issues happening in one of the many global conflicts right now? What’s the bias of the news or broadcasting station sharing the information? Where do I get more authentic information? How do I go about ground truthing what I’ve heard? Can I even get access to information ‘from the ground’?

It’s getting to the point where I have to question almost everything I hear. Is it real, what is the source, and where can I verify this? I hadn’t heard the term ground truthing before last night, but I realize that I’ve already started doing it, and I’m going to be doing a whole lot more of it in the future.