Author Archives: David Truss

Reimagining Schools

Since November I’ve been connecting, every few weeks, with Will Richardson and a group of educational leaders from around BC, Canada, in a professional development session run by the BCPVPA (BC Principals and Vice Principals Association) called ‘Reimagining Schools: Confronting Education’. Right off the bat, Will shared some framings:

• Everything is nature
• We’re not facing “problems” to be solved. We are in a predicament.
• Our predicament stems from the fact that we are out of relationship with each other and all living things on the planet. All of our challenges flow from this disconnect.
• Education is complicit in creating these challenges.
• Collapse is not new. What’s new is that systemic privilege is no longer a buffer.
• Our personal challenge is to face reality or “sit with the shit” and not run from complex, difficult questions.

A few deep thoughts have brewed from these sessions, and yet, oddly enough, the two most impactful things came from outside the sessions.

First, a conflict within me. Will shared a post on LinkedIn where he said,

“I think it’s telling that for all of the conferences and presentations and talks and essays and “achievements” that people post and discuss here, only about 2% of them seem to make any note of the fact that they are happening while:

~ecological limits are being breached
~social trust is eroding
~ AI is reshaping cognition
~ politics are destabilizing
~ inequality is deepening
~ biodiversity is declining at alarming rates

I mean, without using those contexts as a lens for our gatherings or our teaching or writing, what is the actual relevance that we can claim, not just around education, but around living life on the planet in general?

It’s either denial or ignorance. Or maybe it’s concern that if we ground our work in those lenses, no one will show up or read or listen…”

I commented:

“I’d push back a bit and ask what is the conference about?

There’s a cognitive dissonance that is invited when the mind has to weigh these things AND also take in information that people are going to a conference to learn about.

I’m seeing your question play out on social media where people are being called out for not being political and sharing their political stance… on a channel where politics is never discussed.

“There needs to be a balance. We can’t stick our heads in the sand, but we also can’t pretend (and I do intentionally mean pretend) that acknowledging major issues of global concern is equivalent to somehow authentically addressing them… and topically addressing them when our topic isn’t directly affected by them is, to me, worse than not mentioning them. It can be a distraction without gain to the intended message.”

Will responded:

“Dave Truss So, I’ll push back a bit on the push back. 🤣

I don’t think it’s a “calling out” as much as it is a reminder. And I don’t disagree that just naming them doesn’t authentically “address” them, but it does provide a different lens for whatever question is in front of us at that moment. Every topic is affected by them. Every one.

Modernity wants to separate everything out into pieces and ignore the interconnectedness of the whole. This is the world we live in right now. It’s all entangled.”

The comment conversation continued, and is worth reading, but doesn’t resolve my conflicted feelings about this. On the one hand, I completely agree with Will: if we aren’t bringing a contextual lens to what we are sharing, we are somehow missing the interconnectedness of some of the things we should most value and care about. But on the other hand, I’m sitting in a place right now where just two days ago I wrote about being ‘Intentionally disconnected’ because paying attention to the rather disturbing world events right now feels like too much. I ended the post saying, “for now I lack the capacity to engage. It seems like a futile activity that will anger and upset me, with no gain. It is rare for me to actively choose to be uninformed, but right now is one of those times.”

Therein lies the conflict. I agree with Will, yet I don’t think I’m the only one who isn’t ready to face the harsh realities of the predicaments we are in… especially when I’m trying to learn something new. I think for our students it’s the same. The last of the framings above is, ‘Our personal challenge is to face reality or “sit with the shit” and not run from complex, difficult questions.’

I get it, I really do. But when I’m at a conference or when a student sits in a class, do we really need to ‘sit in it’? Do we need to connect everything we do to the predicaments we live in? Do we need this lens to permeate what we are learning? If I channel my inner Will Richardson I think I’d ask myself, ‘But what value is the learning if it isn’t addressing the predicaments we are in?’ … Again, I’m left conflicted.

For example, can I teach students about using AI in an ethical way and not mention the cost of the energy drain? Is mentioning this once enough, or should that be the bigger lesson? Do I need to bring the dire state of the world into every lesson, predicament after predicament? Is this even healthy? Maybe I’m just too stuck in the current educational context to see the bigger picture? I really don’t think that these sessions answered this for me, and yet I feel I have a deeper understanding of the need to confront hard truths… and to ensure that what we choose to teach is taught through the lens of a world facing environmental, political, and social challenges. Will shared the following quote in one of our sessions:

“If we fully accept the world as it is—in all its harsh realities— then we can develop the very qualities we need to be in that world and not succumb to that harshness. We find our courage, morality, and gentle, nonaggressive actions by clear seeing and acceptance. As we accept what is, we become people who stand in contrast to what is, freed from the aggression, grasping and confusion of this time. With that clarity, we can contribute things of eternal importance no matter what’s going on around us—how to live exercising our best human qualities, and how to support others to discover these qualities in themselves.”
~ Margaret Wheatley “So Far From Home”

The second insight I’d like to share came after our first session. Will invited any of us who could stay on to do so. During that after-session conversation I mentioned that I was retiring. The topic of my school, Inquiry Hub, came up and I mentioned that I was proud of what our team has been able to do, transforming the learning outside of the traditional high school box. And yet, I was disappointed that our little school has not had a greater impact on the rest of the district. Will responded saying something like, ‘Dave, if you were able to do that, you would be a unicorn because I haven’t seen that happen yet.’

That simple statement had an unburdening effect on me. It is sad, yet it comforted me. For the past 13 years my small team of teachers and I have created a very special place for self-directed learners to have some true agency over what they are learning, while still providing an opportunity for them to meet all the requirements they need for their post high school ambitions. It has been an amazing ride, and the fact that it didn’t really spread beyond our walls isn’t something that should weigh on me as I head into retirement. The true test of my leadership will be whether the school thrives after I’m gone.

Overall, I really enjoyed the sessions with Will, and with the other educational leaders from across BC. I appreciated the experience of sitting in the discomfort of knowing things must change in education, and sitting in the predicament rather than cherry-picking shallow solutions and discussing them like we were solving all the world’s problems. I encourage educators to follow Will on his journey to confront education and reimagine schools, and to join one of his cohorts of educators on similar journeys of discovery.

Is Artificial Intelligence Reducing Our Intelligence?

Joe Truss shared a great article with me, ‘The hidden cost of letting AI make your life easier‘, by Shai Tubali on Big Think. Towards the end of the article, Shai shares this:

“[Sven Nyholm’s] deeper worry is not that AI will outperform humans, but that it will appear to do so, especially to non-expert eyes. “Current forms of AI threaten meaningful activities,” he argues, “because they look far more intelligent than they are.” This appearance invites trust. People begin to treat AI as an oracle, mistaking an impressive engineering achievement for understanding. As misplaced confidence grows, judgment weakens. Skills develop less fully. Capacities are handed over too easily, and with them, forms of meaning that depend on effort.

Nyholm links this directly to the value of processes, including confusion, detours, and lingering with complexity. He punctures the idea that everything should be fast and efficient. Speed may feel pleasant, he concedes, yet it undermines patient thinking and reconsideration. He points to an Anthropic advertisement promising a paper completed in a single day: brainstorming in the morning, drafting by noon, polishing by afternoon. What disappears in this vision is the slow work of searching, getting lost, following the wrong thread, and returning with insight. “Many ideas,” Nyholm says, “come from looking for one thing and finding something else instead.” When AI delivers tidy, unified answers, it spares us that work. In doing so, it risks weakening our capacity to break complex problems into parts, examine assumptions, and think things through with precision.”

AI reduces the productive effort and struggle that make both learning and understanding stick. Accessing information is profoundly different from understanding it, and directs the learner towards an answer instead of a learning process. This article reinforced some ideas I’ve already shared.

In ‘Keeping the friction‘ I said, “Decreasing the challenge doesn’t foster meaningful learning. Reducing the required effort doesn’t make the learning more memorable. Encouraging deeper thinking is the goal, not doing the thinking for you.”

And in ‘What’s the real AI risk in education?‘ I said, “Real learning has a charge to it, it needs to come with some challenge, and hardship. If the learning experience is too easy, it won’t be remembered. If there isn’t enough challenge, if the answers are provided rather than constructed, the learning will soon be forgotten. Remove being stuck, struggling, and failure, and you’ve removed the greatest part of a learning experience.”

I see this in my own learning. There are times I sit and read a full article, like the one shared above, but there are other times that I don’t bother and just throw a long article into an LLM and ask for a bulleted summary of the key ideas. However, I remember articles I read far better than articles where I only read the AI summaries.

How deep would my learning and understanding be if I only went as far as to read AI summaries? How much will my confidence and my belief in understanding grow, without the depth of knowledge to support my confidence and understanding? Would I be creating a kind of false fluency in topics where I lack true depth of understanding?

The convenience of using AI might not just be changing how we learn, it might be changing what we believe learning is… perceiving learning as having access to information rather than having a deep understanding of a topic we needed to wrestle with to truly understand. In this way, the convenience of using AI to think for us might just be reducing our intelligence.

Appreciate the tiny wins

Tiny wins are often hard to see. They don’t seem significant, but they accumulate.

James Clear explains in Atomic Habits that getting 1% better daily compounds into becoming about 37 times better in a year.
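Clear’s arithmetic is easy to check in a couple of lines (a minimal sketch; the 1%-per-day rate is his illustrative figure, not a measurement of real learning):

```python
# Atomic Habits arithmetic: 1% better (or worse) every day, compounded over a year.
days = 365

improve = 1.01 ** days   # 1% better each day
decline = 0.99 ** days   # 1% worse each day

print(f"1.01^365 = {improve:.2f}")   # about 37.78 -- roughly "37 times better"
print(f"0.99^365 = {decline:.4f}")   # about 0.0255 -- nearly down to zero
```

The flip side is the other half of Clear’s argument: the same tiny daily change, run in the opposite direction, decays to almost nothing.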

You don’t go heavier on a lift in the gym, but you eke out a couple of extra reps.

You walk into a coffee shop and get right to the counter before a rush of people who have to line up behind you.

You hit almost every green light on your way home from work.

You actually enjoy a meal that sounds too healthy to be tasty.

You write a single sentence and suddenly your muse has arrived.

We don’t always see them, we rarely celebrate them, but the tiny little things that we can choose to pay attention to and appreciate can be the highlight of the day… or the precursor to more wins, big and small, in the future.

What’s the real AI risk in education?

I read a great article on LinkedIn by Ken Shelton. He looked at two articles:

“On one side:
AI as productivity infrastructure.
On the other:
AI as compliance enforcement.

But in both cases, the conversation centers on efficiency and policing, not on whether learning itself has been redesigned for an AI-rich world. Using historical context, one could reasonably make similar arguments around the implementation of technology as well. If students are learning to “sound human” to avoid detection… If institutions are investing in increasingly sophisticated surveillance tools… If teachers are primarily using AI to move faster within the same structures… Then we have to ask, as I have shared in previous posts:

Are we adapting learning?
Or are we simply optimizing and defending legacy systems?”

I found his article more interesting than the two he shared. I especially loved his final paragraph:

“The risk isn’t just that AI is moving too fast. The risk is that our response remains reactive, oscillating between efficiency and enforcement, without addressing purpose, power, and pedagogy. Therefore, the real inflection point isn’t technological, it’s analytical and philosophical.”

My thoughts: In education, go ahead and use AI to make teaching and lessons better, use it to help students learn, and also help them understand how to use AI to enrich their learning. But don’t use it to make learning easier. Real learning has a charge to it, it needs to come with some challenge, and hardship. If the learning experience is too easy, it won’t be remembered. If there isn’t enough challenge, if the answers are provided rather than constructed, the learning will soon be forgotten. Remove being stuck, struggling, and failure, and you’ve removed the greatest part of a learning experience.

So educators need to do two things: First, they need to use AI to make what they are doing even better. And secondly, they need to shift the learning experience to one where they no longer need to worry about policing and compliance. For example: The work isn’t finished with an essay, but with students defending their points in the essay against other students with slightly different points or different perspectives. The students who wrote the essay with AI and didn’t fully comprehend the topic can’t argue their perspective as well as the ones who were willing to do the work… and if they did use AI and can then argue the points better than their peers, that only proves that they understand how to use AI as a learning tool and not a tool to do the work for them. Because the real risk of AI in education is that the AI is doing the work, the struggle, and the learning for the student.

The problem we face is how learning can be circumvented by AI. And so the challenge for educators is to make it more challenging to use AI inappropriately, and to use AI to aid in making learning experiences more challenging. This is not an easy task, but it’s one we need to figure out and do well if we want our students to be learners who will have significance in a world where AI is all around us.

_________
Update: Just found this LinkedIn post by William (Bill) Ferriter, and it has two awesome images to fit with the above.

Update 2: I forgot about this post: Thinking Requires Effort

Reducing busywork, and maximizing the problem-solving time, in a community of learners who find benefit from working together, is what schools should be in service of.

Oblivious to what’s coming

If you talk to people about LLMs like ChatGPT, Perplexity, or Claude, you’ll still hear things like, ‘They hallucinate and will make up fake research,’ and something I heard recently: ‘they actually make work harder because workers need to spend more time editing and cleaning up what they produce.’ What people who say this don’t realize is that this is pre-January-2026 thinking, and we are now fully into February 2026. Yes, things are moving that fast! And furthermore, what most people, including me, have not been paying attention to is that when we use the free versions of these tools, we are essentially months and months behind what the latest models can do.

Matt Shumer’s ‘Something Big Is Happening‘, was written just 4 days ago and has already been seen by millions of people. Yes, it’s a bit of a long read, but it is also a ‘must read’. Here is an excerpt:

“Dario Amodei, who is probably the most safety-focused CEO in the AI industry, has publicly predicted that AI will eliminate 50% of entry-level white-collar jobs within one to five years. And many people in the industry think he’s being conservative. Given what the latest models can do, the capability for massive disruption could be here by the end of this year. It’ll take some time to ripple through the economy, but the underlying ability is arriving now.

This is different from every previous wave of automation, and I need you to understand why. AI isn’t replacing one specific skill. It’s a general substitute for cognitive work. It gets better at everything simultaneously. When factories automated, a displaced worker could retrain as an office worker. When the internet disrupted retail, workers moved into logistics or services. But AI doesn’t leave a convenient gap to move into. Whatever you retrain for, it’s improving at that too.”

I recently shared my thoughts on the upcoming ‘Fiscal year end squeeze‘, where I said, “Corporations care about pleasing shareholders and maintaining stock value over caring for the people who work for them. This is the ugly side of capitalism. Eliminate thousands of salaries and suddenly the balance sheet proves to be more profitable. Never mind that these are people’s careers and livelihood that are being cut short. And never mind about loyalty to the company.” What I’m realizing now, after reading Matt’s article, is that the situation is far worse than I thought, because AI is coming after not just these jobs, but almost every other job these newly unemployed people will be looking for.

If I were out of a job right now, I’d be paying the monthly fees for the 2-3 best AI models out there and learning how to power-use them. I wouldn’t be looking for a job; I’d be trying to find a niche where I could work for myself, or maybe become a contractor doing things for people who don’t realize that AI is good enough to get the work done faster than they can do it. Because the reality is that the vast majority of people in the world are oblivious to just how fast this disruption is coming, and unlike past disruptions this one is going to happen everywhere, all at once. Most people can’t fathom how disruptive this will be, and even as I share this as a warning… I’m not sure I fully grasp the full impact either.

Archiving memories

There is a quote I often hear about the fact that nobody will know your name in 3 generations. This makes me think of my grandparents, and the stories they used to tell. I was fortunate enough to get some video recordings of both of my grandmothers (Granny T & Granny B), but not of my grandfathers. Today I dug up the 10-page story that my grandfather, Motel Truss, had recorded a few months before he died. I don’t have the recording itself, but I have the document that my mom transcribed from the recording for him. He recorded it at the request of the Barbados Jewish Community. Later, a book was written, ‘Peddlers All: Stories of the First Ashkenazi Jewish Settlers in Barbados’, and my grandparents were all mentioned in it as well.

I think towards the end of the year I am going to try to document images and stories of each of my grandparents. Nothing extravagant, but something that my kids, and maybe their grandkids could look at to learn a bit about their distant ancestors. It was a very different time, with completely different hardships and challenges, and I think their stories are worth documenting and sharing.

Post Truth Era

Never mind the ridiculous videos of Mr. Rogers chatting with Tupac Shakur or Bigfoot vlogging, these AI videos seem real enough while fully intending us to know they are AI. What we are seeing now is an indistinguishable bending of real and fake with videos that are completely altering our ability to know what is real and what isn’t.

Voice mimicking was already almost perfect. I saw a video post today from a man whose dad called him to ask what their shared bank account password was. One problem: His dad died last year, he just hadn’t taken his name off of the account yet. He said it sounded so real that had his father been alive, he probably would have shared the password, thinking his dad forgot.

Now AI videos are just as good as AI audio, and the combination of the two truly is steering us into a post-truth era. People are sharing AI videos completely unaware that they are fake. Even news stations are getting it wrong.

Soon websites will become bastions of truth. Want to know what someone actually said? Go to ‘their name’ .com or .org and see the actual video shared there. Anything else will be questionable, and wherever else the video is shared must be watched with skepticism. Subtle or overt changes to a message will occur as a result of someone, ultimately anyone, taking the original video and making an AI version that delivers their message instead of the intended one.

Following specific domains, and maybe a handful of legitimate news channels, is the only suggestion I have. Legislation won’t keep up, and the fakes are just getting better. Essentially, find reliable sites and distrust everything else. Intuition and common sense won’t be enough.

Foundational Geometry of the Cosmic Matrix in the Tetraverse

This is the next installment in the Book of Codes series that I do with Joe Truss.

Foundational Geometry of the Cosmic Matrix in the Tetraverse

It’s an easier video to understand if you are willing to take the time to watch ‘We Live in a Tetraverse’, our introductory video based on the premise that the smallest building blocks in the universe must be tetrahedral.

Joe and I spent 4 hours putting the final touches on the ‘Foundational Geometry of the Cosmic Matrix in the Tetraverse’ video this morning, after working on it almost every Sunday morning for a few months now. Here are other videos in the series:

Secret Origins of the Enneagram

A Short Take on Assembly Theory in the Tetraverse Model: A Geometric Representation

A Dimensional Twist of the Tetraverse (A response video to Klee Irwin’s 20 Group Twist)

As always, feedback is greatly appreciated.

Take action despite fear and doubt

This weekend I had the opportunity to see Chris Williamson speak at the Vogue Theatre.

A few things he said seemed to circle around a theme of taking action despite fear and doubt. Here are some of the ideas he shared:
(I took notes, not perfect quotes, but all the ideas below came from Chris.)

He quoted Christopher Hitchens: “In life we must choose our regrets.” This is a feature, not a bug. You can’t pick the right path and avoid regrets over the choices you didn’t make, the paths you didn’t take. Which regret do you want? Which regret can you not live with?

Contemplate the consequences of inaction. Don’t pretend that inaction does not have a price. (e.g. the anxiety cost of ‘I still have X to do today.’)

Belief: Self-belief never wavers when the hero decides on his journey… But there is doubt ALL ALONG THE WAY! That’s why it’s so easy to fall back into old patterns.

We aren’t afraid of failure, we are afraid of what others will say when we fail… Don’t outsource your self-image to the opinions of others.

Best question to ask: What is it that ‘you tomorrow‘ would want ‘you today‘ to do? Optimize for your future self.

Don’t follow what most people do… you don’t want the results they get.

You make the most progress when things are hard… and looking back, in retrospect, would you avoid them if you could, now that you’ve accomplished those hard things?

You don’t need to be certain, just confident that you are moving in the right direction. Have a bias for action.

He also quoted Jocko Willink regarding the fact that you can’t fake bravery. Pretending to be brave when you are scared IS bravery. Motivation is similar, just do the thing… Preparing isn’t the thing, neither is telling people, writing about the fact that you are going to do the thing, reading about it, or fantasizing about it. Again, just do the thing.

And finally, on this topic, an audience member quoted Chris during the Q&A: “The magic that you are looking for is in the thing that you are avoiding.”

~~~~~~~~
How much of our lives are spent questioning ourselves, doubting ourselves, and avoiding action for fear of an outcome we don’t want?

I’ve shared this before, but when my wife and I were deciding if we were going to take our young family to China to take jobs as principal and teacher in a Foreign National school, we discussed it for over 2 hours late one night. We didn’t come to any conclusion, and the next night after work we put the kids down to sleep, and we sat down to continue the conversation. We made tea and popcorn and prepared for another marathon discussion, and then one of us (neither of us remember who) said, “If we don’t do this, will we regret it?” Absolutely. We had decided. The discussion moved to how to tell the kids. Any regrets for going would be overshadowed by the regret of not going.

As a photographer, I never regretted taking a photo, but I regretted the photographs that I never took.

We avoid time under tension, even though we know it strengthens us: “We cannot strengthen our resilience unless we face things that are challenging us for longer than we could previously tolerate.”

And as a final thought from me, avoidance is easy: “How much time do we spend in a state of busyness rather than dealing with business? Avoiding the real task by doing other things, or worse yet doing something that’s merely a distraction. Some things get automated, habits get ritualized, and the work just gets done. But sometimes the struggle is real. Avoidance becomes the easy task, and the work isn’t the work; actually getting down to work is. Because once you start, the work gets done.”

~~~~~~~~
Also related: Be Fearless, James Clear on The pain of inaction, and many posts on failure.

Crappy user experience

A bit of a rant here. I’m doing some medical expense claiming, and my provider has an app that is not designed with the end user in mind. First, I have to go to 3 different pages to make a claim. Then, after all the claim details are entered, I have to scroll down on a confirmation page that lists my address on individual lines taking up so much screen real estate that the ‘Consent and Declaration’ is hidden under the ‘Submit’ button. So you end up hitting Submit only to learn that you need to scroll down and click the consent, which opens up on another page.

Also, I use my laptop and phone for much of the day and don’t need reading glasses, but my pharmacy prints the details I need to make the above claim in a tiny, hard-to-read font. This is so unnecessary. It’s already really confusing trying to locate all the information, which is spread across 3 different sections of the prescription receipt; does it also need to be in microscopic font? This is the only thing I’ve had to put reading glasses on to see in the last few months.

I get tired of user interfaces that are designed for the product and not the user. The insurance company probably doesn’t want claims to be easy to make; they’d rather you had to go through a more challenging process. The pharmacy changed their format so that the prescription receipt gets printed on a small sticker, and I’m sure cost saving was more important to them than providing a readable receipt for their aging customers. This kind of behaviour may or may not be intentional, but it is ignorant of the end user’s experience. I’ve complained before about inconsistencies in remote controls, apps that want your attention at the cost of your convenience, and how it feels like we are decades behind where we should be when doing things like setting up a new printer. I would say that over 95% of the things I rant about are related to products and services providing crappy user experiences.

How hard would it be to have the customer in mind as a priority, rather than an afterthought?