Tag Archives: AI

I don’t ever ‘want to’ see ‘wanna’

Dear Siri,

I love speech-to-text. When I’m on the go, I want to just speak into my phone and not bother typing. This is such a handy way to get words into a text or email with minimal effort. But this is the age of AI, and it’s time to grow up and get a little more intelligent.

I don’t ever ‘want to’ say ‘wanna’.

“I don’t ever wanna see that word as a result of you dictating what I say.” (Like you just did.)

Listen to me, Siri. I know my diction isn’t perfect. I know I don’t always enunciate the double ‘t’ in ‘want to’. But after I’ve gone to Settings and created a text replacement shortcut from ‘wanna’ to ‘want to’;

After I’ve corrected the text you’ve dictated from ‘wanna’ to ‘want to’ hundreds of times… can you learn this simple request?

Don’t ever assume I want to say wanna!

Please.

Consciousness and AI

I have a theory about consciousness being on a spectrum. That itself isn’t anything new, but I think the factors that play into consciousness are: basic needs, computational ‘processing’, and idleness. Consciousness comes from having more processing time than is needed to meet basic needs, along with the inability of processing (early thinking) to sit idle, and so, for lack of a better word, desires are created.

Think of a very simple organism when all of its needs are met. This isn’t a real thought process I’m going to share but rather a meta look at this simple organism: “I have enough heat, light, and food, what should I do with myself?”

  • Seek better food
  • Move closer to the heat or light source
  • Try different food
  • Join another organism that can help me
  • Find efficiencies
  • Find easier ways to move
  • Hunt

At first, these are not conscious decisions; they are only a choice among simple processes. But eventually, the desires grow. Think of decisions that start like, “If I store more energy I can survive longer in times of famine,” and evolve into more of a desire not just for survival but for pleasure (for lack of a better word): “I like this food more than other kinds and want more of it.” …All stemming from having idle processing/thinking time.

I don’t know when ‘the lights turn on’, when an organism moves from running basic decisions of survival to wanting and desiring things, and being conscious. I believe consciousness is on a spectrum, and it is idle processing/thinking time that eventually gets an organism to sentience. It’s sort of like the bottom of Maslow’s hierarchy pyramid (physiological and safety needs) must be met, AND there then needs to be extra, unnecessary processing time, idle time that the processor then uses for what I’m calling desires… interests beyond basic needs.

Our brains are answer-making machines. We ask a question and the brain answers, whether we want it to or not. If I ask, “What does a purple elephant with yellow polka dots look like?” you will inevitably picture it, simply from reading the question. I think that is what happens at a very fundamental level of consciousness. When all needs are met, the processors in the brain don’t suddenly stop and sit idle. Instead, the questions arise: “How do I get more food?”, “Where would be a better place for me to move to?” Eventually all needs are met, but the questions keep coming, at first based on simple desires, then growing more and more complex over generations and eons of time.

So why did I title this ‘Consciousness and AI’? I think one of the missing ingredients in developing Artificial General (or Super) Intelligence is that we are just giving AIs tasks to complete and process at faster and faster speeds, and when the processing of these tasks is completed, the AI sits idle. An AI has no built-in desire, the way an organic organism does, to use that idle time to ask questions, to want something beyond completing the ‘basic needs’ tasks it is asked to do.

If we figure out a way to make AI curious, to have it desire to learn more, and to not let itself be idle, at that point it will be a very short path to AI being a lot smarter than us.
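To make that idea concrete, here’s a toy sketch, a thought experiment in code rather than a real AI architecture. Every name in it (the needs, the topics, the `idle_cycle` method) is invented for illustration; it only shows the shape of the argument: a task-driven agent stops when its needs are met, while a “curious” one spends the spare cycles generating questions.

```python
import random

# Toy model of the idle-time argument above. Nothing here is a real
# AI architecture; all needs, topics, and thresholds are illustrative.

class CuriousAgent:
    def __init__(self):
        self.needs = {"energy": 5, "warmth": 5}  # basic needs, arbitrary units
        self.questions = []                      # self-generated "desires"

    def needs_met(self) -> bool:
        return all(level >= 3 for level in self.needs.values())

    def satisfy_needs(self):
        # Survival processing: top up whichever need is lowest.
        lowest = min(self.needs, key=self.needs.get)
        self.needs[lowest] += 1

    def idle_cycle(self):
        # The key move: idle time isn't idle, it generates questions.
        topics = ["better food", "easier movement", "cooperation", "efficiency"]
        self.questions.append(f"How could I find {random.choice(topics)}?")

    def tick(self):
        if not self.needs_met():
            self.satisfy_needs()   # a purely task-driven AI stops here
        else:
            self.idle_cycle()      # a "curious" agent keeps processing

agent = CuriousAgent()
for _ in range(10):
    agent.tick()
print(agent.questions)  # desires only accumulate once needs are met
```

Run it and the questions list grows only after the needs are satisfied, which is the whole point: the desires come from the surplus processing, not from the survival tasks themselves.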

I’m currently listening to Annaka Harris’ audiobook ‘LIGHTS ON: How Understanding Consciousness Helps Us Understand the Universe’ on Audible, and it’s inspiring a lot of my thinking. That said, this post is me rehashing an idea that I had back in December 2019, when I wrote ‘What does it mean to be conscious?’… I go into this idea of idle time further in that post:

“…life requires consciousness, and it starts with the desire to reproduce. From there, consciousness coincidentally builds with an organism’s complexity and boredom, or idle processing time, when brains do not have to worry about basic survival. Our consciousness is created by the number of connections in our brains, and the amount of freedom we have to think beyond our basic survival.”

My conclusions in that post focused more on animal life, but listening to Annaka’s documentary of interviews with scientists, I’m realizing that I really do think there is some level of consciousness right down to the simplest life forms. If it’s idle time and desires that bring about sentience, then figuring out how to make AIs naturally curious will be the path to artificial general intelligence… because they are already at a place where they have unused processing time, which continues to grow exponentially.

What does it mean to be literate?

Can you read? Can you do basic math? Is that enough?

The critical thinking required to make sense of the world today is ever increasing. We have a world leader using magical math to make a trade deficit calculation into a reciprocal tariff calculation, and claiming that this is, “Kind reciprocal, not full reciprocal.”

What? Help me make it make sense?

Meanwhile, I saw a video that someone created using AI. He uploaded a PDF article for two AIs to discuss; one of the AIs was a version of himself, with his voice, and the other was a woman at a desk. The only thing that suggested to me that the conversation was between two AIs was some awkward hand gestures. Take those movements away, or make them a bit more natural/realistic, and I would have no idea that I was watching an AI conversation.

Meanwhile, in Egypt, there are some wild claims about structures under the great pyramids, and while the evidence is unclear, I’ve seen many videos explaining these not-yet-proven structures. These claims include that they are a network of power sources connected to other structures around the world; another theory claims that aliens created them.

And speaking of aliens, wasn’t it just a few short months ago that we ‘discovered’ aliens living in our oceans? What ever happened to that story?

It’s becoming almost impossible to be informationally literate today. By the time you have a chance to seriously fact-check something, the story is already old and there are new crazy claims requiring your skeptical attention. What’s the source of this information? Where did they get their data? What’s the bias of the news source? How is the data being manipulated? Who paid for the study? Is this a real quote? Is this video real, or CGI, or AI?

Who is fact-checking the fact checkers? Meanwhile, here in Canada, a fact checker hired by one of our news stations was let go because trolls who don’t like seeing their favourite political party fact-checked brought so much negative attention to her that the station dropped her.

What? Help me make it make sense?

The reality is that reading, writing, and doing basic math is not enough to be functionally and informationally literate today. The critical thinking required simply to consume the information being thrown at us is overly demanding. I think the way forward, for the short term, is to find trusted sources and rely on them… and yet that’s the very thing that seems to have gotten us into trouble. How many people get their news from just one or two biased sources? I’m literally suggesting that you find an echo chamber to sit in… hopefully one that echoes facts, common sense, and some semblance of the truth.

AI text in images just keeps getting better

One of the biggest challenges with AI image generation is text. A new model, Ideogram 3.0, out of a Toronto startup, seems to have cracked the code. I wanted to try it out, so here are my two free prompts and their responses:

Prompt: Create an ad for a blog titled ‘Daily-Ink’ by David Truss.
The blog is about daily musings, education, technology, and the future, and the ad should look like a movie poster

Prompt: Create an ad for a blog titled ‘Daily-Ink’ by David Truss.
The byline is, “Writing is my artistic expression. My keyboard is my brush. Words are my medium. My blog is my canvas. And committing to writing daily makes me feel like an artist.”

While the second, far more wordy prompt was less accurate, I can say that just 3 short months ago no AI image model would have come close to this. Now, this is coming out of a startup, not even one of the big players.

I didn’t even play with the styles and options, or suggest these in my prompts.

As someone who creates AI images almost daily, I can say that there has been a lot of frustration around trying to include text… but that now seems to be a short-lived complaint. We are on a very fast track to this being a non-issue across almost all tools.

Side note: The word that keeps coming to mind for me is convergence. That would be my word for 2025. Everything is coming together, images, text, voice, robotics, all moving us closer and closer to a world where ‘better’ happens almost daily.

Not emergence but convergence

In my post yesterday, ‘Immediate Emergence – Are we ready for this?’, I said, “Think about how fast we are going to see the emergence of intelligent ‘beings’ when we combine the brightest Artificial Intelligence with robotics…” and continued with, “Within the next decade, you’ll order a robot, have it delivered to you, and out of the box it will be smarter than you, stronger than you, and have more mobility and dexterity than you.”

On the technology front, a new study, ‘Measuring AI Ability to Complete Long Tasks’, proposes “measuring AI performance in terms of the length of tasks AI agents can complete. We show that this metric has been consistently exponentially increasing over the past 6 years, with a doubling time of around 7 months. Extrapolating this trend predicts that, in under five years, we will see AI agents that can independently complete a large fraction of software tasks that currently take humans days or weeks.”

More from the article:

…by looking at historical data, we see that the length of tasks that state-of-the-art models can complete (with 50% probability) has increased dramatically over the last 6 years.

If we plot this on a logarithmic scale, we can see that the length of tasks models can complete is well predicted by an exponential trend, with a doubling time of around 7 months.

And in conclusion:

If the trend of the past 6 years continues to the end of this decade, frontier AI systems will be capable of autonomously carrying out month-long projects. This would come with enormous stakes, both in terms of potential benefits and potential risks.
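To put that 7-month doubling time in perspective, here’s the arithmetic as a minimal sketch. The 1-hour starting task length is a placeholder assumption for illustration, not a figure from the study:

```python
# Extrapolating the trend from 'Measuring AI Ability to Complete Long Tasks':
# the length of tasks agents can complete doubles roughly every 7 months.
DOUBLING_TIME_MONTHS = 7

def task_length_multiplier(months_ahead: float) -> float:
    """Growth factor after a given number of months at one doubling per 7 months."""
    return 2 ** (months_ahead / DOUBLING_TIME_MONTHS)

# Placeholder assumption: today's frontier agents reliably handle ~1-hour tasks.
start_hours = 1.0
for years in (1, 3, 5):
    hours = start_hours * task_length_multiplier(years * 12)
    print(f"{years} year(s) out: ~{hours:,.0f} hours (~{hours / 8:,.1f} eight-hour workdays)")
```

At that rate you get roughly a 380× increase over five years, which is how a one-hour task horizon becomes weeks or months of human work by the end of the decade.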

When I was reflecting on this yesterday, I was thinking about the emergence of new intelligent ‘beings’, and how quickly they will arrive. With information like this, plus the links to robotics improvements I shared, I’m feeling very confident that my prediction of super intelligent robots within the next decade is well within our reach.

But my focus was on these beings ‘emerging suddenly’. Now I’m realizing that while we are already seeing dramatic improvements, we aren’t suddenly going to see these brilliant robots. It’s going to be a fast, but not a sudden, transformation. We are going to see dumb-like-Siri models first, where we make a request and get a related but useless follow-up. For instance, the first time you say, “Get me a coffee,” to your robot butler Jeeves, you might get a bag of grounds delivered to you rather than a cup of coffee made the way you like it… without Jeeves ever asking you to clarify the task, even though you wanting a bag of coffee doesn’t make sense.
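The fix for that failure mode is conceptually simple, even if building it isn’t: when confidence in an interpretation is low, ask instead of act. A minimal sketch, where the intents and confidence numbers are all invented for the example:

```python
# Toy illustration of the "Jeeves" problem: an ambiguous request should
# trigger a clarifying question, not a confident wrong action.

KNOWN_INTENTS = {
    "get me a coffee": [
        ("brew a cup of coffee", 0.55),
        ("buy a bag of coffee grounds", 0.45),
    ],
}

CONFIDENCE_THRESHOLD = 0.8  # below this, a well-designed robot asks first

def respond(request: str) -> str:
    candidates = KNOWN_INTENTS.get(request.lower(), [])
    if not candidates:
        return "Sorry, I don't understand that request."
    best_action, confidence = max(candidates, key=lambda c: c[1])
    if confidence < CONFIDENCE_THRESHOLD:
        options = " or ".join(action for action, _ in candidates)
        return f"Just to check: did you mean {options}?"
    return f"On it: {best_action}."

print(respond("Get me a coffee"))
# -> "Just to check: did you mean brew a cup of coffee or buy a bag of coffee grounds?"
```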

These relatively smart, yet still dumb, AI robots are going to show up before the super intelligent ones do. So this isn’t really about a fast emergence; rather, it’s about convergence. It’s about robotics, AI intelligence, processing speed, and AI’s EQ (not just IQ) all advancing exponentially at the same time… with ‘benefits and potential risks’.

Questions will start to arise as these technologies converge: “How much power do we want to give these super intelligent ‘beings’? Will they have access to all of our conversations in front of them? Will they have purchasing power, access to our email, the ability to make and change our plans for us without asking? Will they help us raise our kids?”

Not easy questions to answer, and with the convergence of all these technologies at once, not a long time to answer these tough questions either.

Immediate Emergence – Are we ready for this?

I have two daughters, both very bright, both with a lot of common sense. They work hard and have demonstrated that when they face a challenge they can both think critically and also be smart enough to ask for advice rather than make poor decisions… and like every other human being, they started out as needy blobs that 100% relied on their parents for everything. They couldn’t feed themselves or take care of themselves in any way, shape, or form. Their development took years.

Think about how fast we are going to see the emergence of intelligent ‘beings’ when we combine the brightest Artificial Intelligence with robotics like this and this. Within the next decade, you’ll order a robot, have it delivered to you, and out of the box it will be smarter than you, stronger than you, and have more mobility and dexterity than you.

Are we ready for this?

We aren’t developing progressively smarter children, we are building machines that can outthink and outperform us in many aspects.

“But they won’t have the wisdom of experience.”

Actually, we are already working on that: “Microsoft and Swiss startup Inait announced a partnership to develop AI models inspired by mammalian brains… The technology promises a key advantage: unlike conventional AI systems, it’s designed to learn from real experiences rather than just existing data.” Add to this Nvidia’s Omniverse, where robots can do millions of iterations and practice runs in a virtual environment with real-world physics, and these mobile, agile, thinking, intelligent robots are going to be immediately out-of-the-box super beings.

I don’t think we are ready for what’s coming. I think the immediate emergence of super intelligent, agile robots, which can learn, adapt, and both mentally and physically outperform us, and which we will see in the next decade, will be so transformative that we will need to rethink everything: work, the economy, politics (and war), and even relationships. This will drastically disrupt the way we live our lives and the way we engage and interact with each other and with these new, intelligent beings. We aren’t building children that will need years of training; we are building the smartest, most agile beings the world has ever seen.

Morality police

I have regularly created AI images to go with my blog posts since June 2022. I try not to spend too much time creating them because I’d rather be writing blog posts than image prompts. But sometimes the images I try to create just don’t convey what I want them to, or they come across as a bit too deep in the uncanny valley, feeling unnatural. That happened with my post image 4 days ago, and I used the image anyway because I was pressed for time.

(Look carefully at this image and you’ll see a lot wrong with it.)

I made 5 or 6 attempts to adjust my prompt, but still kept getting bad results, so I made do with the only one that resembled what I wanted.

And then for the past couple days I had a different challenge. I don’t know if it’s because of using the version of Bing’s Copilot that is associated with my school account, but my attempts to create images were blocked.


However, Grok 3, a much less restricted AI, had no problem creating these images for me.


I’m a little bothered by the idea that an AI is limiting which image prompts I can use. The first one is social commentary; the second, while a ‘hot topic’, certainly isn’t worthy of being restricted.

It begs the question: who are the morality police deciding what we can and cannot use AI to draw? The reality is that there are tools out there with no filters that can create any image you want, no matter how tasteless or inappropriate, and I’m not sure that’s ideal… but neither is being prevented from making images like the ones I requested. What is it about these image requests that makes them inappropriate?

I get that this is a small issue in comparison to what’s happening in the US right now. The morality police are in full force there with one group, the Christian far right, using the influence they have in the White House to impose their morality on others. This is a far greater concern than restrictions to image prompts in AI… but these are both concerns on the same continuum.

Who decides? Why do they get to decide? What are the justifications for their decisions?

It seems to me that the moral decisions being made recently have not been made by the right people asking the right questions… and it concerns me greatly that people are imposing their morals on others in ways that limit our choices and our freedoms.

Who gets to be the morality police? And why?

It’s all happening so fast

I subscribe to superhuman.ai, a daily email newsletter. Most days I peruse it for about 3-5 minutes before work, primarily focussing on the ‘Today in AI’ section. It’s fascinating to see how rapidly the field of AI is advancing. On weekends the email shifts topics: Saturdays are a robotics special, and Sundays are focused on scientific and technological breakthroughs outside of AI.

Here are some videos shared in yesterday’s robotics-focused Superhuman update:

Then here are five sections from today’s email. Two relate to technological advances:

Star Power: France just took a massive lead in the race to near-limitless clean energy. The country’s CEA WEST Tokamak reactor has shattered China’s record, maintaining a hydrogen plasma reaction for 22 minutes and 17 seconds flat. While it’s not commercial-ready yet, it’s a major leap in fusion research and has huge implications for the development of ITER, the world’s largest fusion project, in the south of France. 

Two-way Street: Chinese researchers have built the world’s first two-way brain-computer interface (BCI). Unlike conventional BCIs that just decode brain signals, this system creates a feedback loop where both the brain and the machine learn from each other and improve at working together over time.

And three relate to health and longevity:

Cancer Counter: Scientists at Memorial Sloan Kettering have reported promising results from a small trial that used personalized mRNA vaccines to fight pancreatic cancer. Out of the 16 participants who were administered the vaccine, at least half generated long-lasting cancer-fighting T cells, with early results suggesting fewer recurrences. Researchers estimate these T cells could persist for years, offering hope for a future breakthrough.

Fountain of Youth: Japanese bioengineers claim to have found the ‘rewind’ button for aging. Noticing that older cells were considerably larger in size than younger ones, the scientists discovered that they were packed in a layer of the AP2A1 protein. This led them to conclude that blocking the protein could reverse aging — a potential breakthrough for anti-aging treatments. We’ll believe it when we see it.

Follicle Fix: Research teams around the world are possibly getting closer to reversing hair loss with a host of innovative new treatments. They’re currently testing a sugar-based gel that could stimulate blood supply to hair follicles, potentially offering a simple, affordable cure for baldness. Also, a new topical gel, PP405, aims to “wake up” dormant hair follicle stem cells, while exosome-based therapies show promise in regrowing hair naturally.

Two years ago, I would have said we were 15-20 years away from intelligent robots living among us; now I think wealthy people will have them in their houses before the end of the year, and they will become even more affordable and mainstream before the end of 2026.

Two years ago I actually believed and shared that my kids would be the first generation to routinely live past 100 years old, barring accidents and rare diagnoses that haven’t yet been cured. Now I can actually conceive of this being true for my generation.

I thought Universal Basic Income was going to be a thing in the 2040s or 2050s… Now I look at how intelligent LLMs are, and how advanced robots are, and I wonder how we’ll make it through the 2020s without needing to financially support both white collar and blue collar workers who are pushed out of jobs by AI and robots.

The speed of innovation is accelerating, and right now we are just scratching the surface of AI-inspired innovation. What happens when an AI with the equivalent knowledge of 100,000-plus of our most intelligent humans starts to make intuitive connections between entire bodies of knowledge from science, technology, politics, economics, culture, nature, and even art?

In 1985, the movie Back to the Future took us forward to 2015, where there were hovering skateboards. Forty years on, rather than thirty, we still haven’t gotten there. But look at the progress in robotics from 2015 to 2025. This is going to advance exponentially from 2025 to 2030.

If the Back to the Future movie were made today, and the future Marty McFly went to was 2055, I bet the advancements of our imagination would be underwhelming compared to what would actually be possible. While I don’t think we will be there yet with space travel and things like a Mars space station, I think the innovations here on earth will far exceed what we can think of right now.

It’s all happening so fast!

Grok’s peace advice

Grok 3 (beta) is now free. It’s supposed to be the smartest LLM out there, so I asked it:

What’s the best way forward for peace in the Ukraine?

Here is Grok’s response:

The best way forward for peace in Ukraine is a complex and debated question, with no single solution universally agreed upon. However, based on current dynamics and informed perspectives, a sustainable peace likely hinges on a combination of military strength, diplomatic negotiation, and long-term security guarantees for Ukraine, while addressing Russia’s actions and the broader geopolitical context as of February 21, 2025.

One widely supported approach is empowering Ukraine to negotiate from a position of strength. This means continued and enhanced military support from Western allies—more weapons, resources, and training—to shift the battlefield balance and deter further Russian escalation. The idea is that Russia, under Vladimir Putin, responds primarily to force rather than goodwill. If Ukraine can impose significant costs on Russia, it might compel Moscow to consider a ceasefire or settlement that doesn’t equate to Ukraine’s capitulation. For instance, Ukraine’s recent Kursk offensive and drone strikes on Russian infrastructure demonstrate how a stronger Ukrainian position can alter the conflict’s trajectory, potentially bringing Russia to the table.

Diplomacy remains critical, but it must be realistic. Ukraine’s leadership, including President Volodymyr Zelenskyy, has emphasized that any peace deal requires credible security guarantees to prevent future Russian aggression—ideally NATO membership, though this faces resistance from key members like the U.S. and Germany. Alternatives like bilateral security pacts with firm commitments (e.g., from the U.S.) or a multinational peacekeeping force could serve as stopgaps, though their effectiveness is questioned without long-term enforcement mechanisms. Negotiations should start with achievable steps—local ceasefires, prisoner exchanges, or securing food exports—building momentum toward broader territorial and sovereignty discussions, rather than aiming for an immediate grand bargain that risks collapsing under maximalist demands from either side.

Russia’s stance complicates this. Putin’s terms—Ukraine ceding occupied territories, abandoning NATO aspirations, and demilitarizing—amount to surrender, not peace, and are designed to leave Ukraine vulnerable to future attacks. Historical precedents like the Minsk agreements show that ceasefires without enforcement often fail. Thus, peace must avoid rewarding aggression or compromising Ukraine’s sovereignty, a principle echoed by many Western leaders and analysts who argue that a weak deal would embolden authoritarian powers globally.

The human cost—thousands dead, millions displaced, and ongoing rights violations—underscores urgency, but rushing to a flawed agreement could prolong instability. A just peace likely requires Russia’s withdrawal to pre-2022 lines (or at least pre-2014 borders, including Crimea, though this is less feasible short-term), coupled with Ukraine’s integration into Western security structures over time. This balances immediate de-escalation with deterrence against future threats.

In short, the best path forward combines arming Ukraine to hold its ground, pursuing phased diplomacy with clear red lines, and securing enforceable guarantees—acknowledging that peace without strength risks being temporary, while strength without talks prolongs suffering. The war’s end remains uncertain, but Ukraine’s resilience and Western resolve will shape its terms.

Then I asked:

Is this Trump’s approach?

Grok responded:

You’re asking whether the approach I outlined for peace in Ukraine aligns with Donald Trump’s strategy, given the context of the ongoing war and his recent actions as of February 21, 2025. Based on available information and Trump’s public statements, I can assess how my previous response compares to what’s known about his approach, without reproducing or altering my prior answer.

Trump’s stated intentions for Ukraine, since his inauguration on January 20, 2025, emphasize a rapid end to the conflict through negotiations, often framed as a signature achievement of his presidency. He’s claimed he could resolve the war “in 24 hours” during his campaign and has since initiated direct talks with Vladimir Putin, starting with a notable call on February 12, 2025, followed by outreach to Volodymyr Zelenskyy. His team, including figures like Secretary of State Marco Rubio and National Security Adviser Michael Waltz, has been tasked with launching peace talks, with reports of planned meetings in Saudi Arabia as early as mid-February 2025. This aligns with a push for immediate diplomacy, consistent with my suggestion of phased negotiations starting with achievable steps like ceasefires.

However, Trump’s approach diverges in key ways from the broader strategy I described. My response emphasized empowering Ukraine militarily to negotiate from strength, with sustained Western support and security guarantees like NATO membership or robust bilateral pacts—aiming for a peace that deters future Russian aggression. Trump, conversely, has signaled concessions that lean toward Russia’s demands. His Defense Secretary Pete Hegseth has publicly dismissed Ukraine reclaiming all pre-2014 territories (including Crimea) as “unrealistic” and ruled out NATO membership as part of any deal, suggesting a freeze along current lines instead. Reports of a leaked “100-day plan” from Ukrainian outlet strana.today indicate Trump might push Ukraine to cede occupied territories, withdraw from Russia’s Kursk region, and accept neutrality—terms that echo Putin’s preconditions rather than a bolstered Ukrainian position.

Trump’s rhetoric also hints at economic leverage over military buildup. He’s threatened Russia with tariffs and sanctions to force a deal, while suggesting Ukraine compensate the U.S. with natural resources for past aid—ideas absent from my outline, which focused on deterrence through strength. His February 19, 2025, BBC interview, where he said Russia “has the cards” due to territorial gains, suggests a willingness to accept a settlement favoring Moscow’s current position, contrasting with my call for Russia’s withdrawal to pre-2022 or pre-2014 lines.

On diplomacy, Trump’s unilateral outreach to Putin—often sidelining European allies and initially excluding Ukraine—clashes with my emphasis on inclusive, phased talks with firm red lines. European leaders and Zelenskyy have protested this, insisting on Ukraine’s central role, though Trump later clarified Kyiv would be involved. Still, his approach seems more deal-driven than principle-driven, prioritizing speed over the long-term stability I highlighted.

So, is this Trump’s approach? Not entirely. While we share a focus on negotiation as a path forward, his emerging strategy tilts toward quicker, Russia-friendly concessions—freezing lines, dropping NATO aspirations, and limiting U.S. military commitment—whereas I advocated for a stronger Ukraine backed by sustained Western resolve to ensure a durable peace. Trump’s moves reflect his dealmaking persona, but they risk a temporary truce over the lasting resolution I described. The details of his plan are still fluid, with talks ongoing, but current signals suggest a narrower, more pragmatic scope than the comprehensive framework I proposed.

Then finally I requested:

Outline 3 key points of advice you would give Trump to improve his strategy, briefly outlining pitfalls in his approach and why your advice is worth following.

Grok responded:

Here’s tailored advice for Donald Trump to refine his Ukraine peace strategy, based on the gaps between his emerging approach and a framework for sustainable peace as of February 21, 2025. Each point addresses a pitfall in his current trajectory and explains why the tweak could yield a stronger outcome.

  1. Bolster Ukraine’s Leverage Before Talks, Don’t Rush a Weak Deal
    Pitfall: Trump’s push for a quick resolution—freezing lines and conceding territory—risks locking in Russian gains, signaling that aggression pays off. His February 19, 2025, BBC comment that Russia “has the cards” underestimates Ukraine’s resilience and hands Putin a propaganda win without forcing real compromise.
    Advice: Ramp up military aid—drones, long-range missiles, air defenses—for three to six months before finalizing talks. This strengthens Ukraine’s hand (e.g., replicating the Kursk surprise) and pressures Russia into a fairer bargain, avoiding a Minsk-style flop that unravels fast.
    Why It’s Worth It: A deal from strength deters future invasions and burnishes Trump’s image as a tough negotiator, not a capitulator—key for his legacy and global credibility.
  2. Secure Enforceable Guarantees, Not Just a Handshake
    Pitfall: Trump’s plan, as leaked via strana.today, leans on neutrality and troop pullbacks without ironclad security for Ukraine. Excluding NATO or firm U.S. commitments leaves Kyiv exposed—Putin’s violated every post-1991 agreement (Budapest Memorandum, Minsk) when it suited him.
    Advice: Push for a binding multilateral pact—think U.S.-UK-France troop rotations in Ukraine or automatic sanctions triggers if Russia reattacks. Pair it with economic carrots for Moscow (e.g., phased tariff relief) tied to compliance.
    Why It’s Worth It: This balances Trump’s dealmaking flair with durability, preventing a repeat war that could tank his “peace in 24 hours” brag and drag the U.S. back in later.
  3. Loop in Allies Fully, Don’t Go Lone Wolf
    Pitfall: Sidelining Europe and initially Ukraine (e.g., his solo Putin call on February 12) alienates partners who’ve bankrolled much of the war effort—$100 billion-plus from the EU versus U.S. aid. It risks fracturing NATO unity and leaving Trump isolated if talks falter.
    Advice: Host a broader summit—Saudi Arabia works—where Zelenskyy, NATO leaders, and Putin’s team hash it out together. Let allies co-own the process, sharing costs and credit.
    Why It’s Worth It: Unity amplifies pressure on Russia and spreads responsibility. Trump still gets the spotlight as the deal’s architect, but with less blowback if Putin balks—smart politics and strategy.

These tweaks elevate Trump’s approach from a rushed fix to a robust win, aligning his instincts with the war’s brutal realities. They’d give him a shot at a historic triumph that lasts, not just a photo-op that fades.

Grok seems quite intelligent.

I wonder how long it will be before AI models become the most valuable political and economic advisors we can use.

I was especially impressed with the eloquent wisdom shared in the last paragraph in Grok’s first response to me, “In short, the best path forward combines arming Ukraine to hold its ground, pursuing phased diplomacy with clear red lines, and securing enforceable guarantees—acknowledging that peace without strength risks being temporary, while strength without talks prolongs suffering. The war’s end remains uncertain, but Ukraine’s resilience and Western resolve will shape its terms.”

If you ask me, we (and more specifically, those with power and influence) should already be paying attention to what these intelligent machines have to say.

__

*The two images shared with this post were created by Grok to go with the questions I asked above.

AI and academic integrity

I’ve been using AI to add images to my blog since June of 2022, when I discovered AI-generated art via DALL•E. I don’t credit it, I just use it, and I find it much easier to generate images than to find royalty-free alternatives. I haven’t yet used AI as a writing or editing tool on my blog. While I’m sure it would make my writing better, I am writing to write, and I usually do so early in the morning with limited time.

I already have to limit the time I spend creating an image; if I also had to use AI to edit and revise my work, I’d probably only have 15-20 minutes to write… and I write to write, not to have an AI write or edit for me. That said, I’m not disparaging anyone who uses AI to edit. I think it’s useful and I will sometimes use it on emails; I simply don’t want that to be how I spend my (limited) writing time.

I really like the way Chris Kennedy both uses AI and also credits it on his blog. For example, in his recent post, ‘Could AI Reduce Student Technology Use?’ Chris ends with a disclosure: “For this post, I used several AI tools (Chat GPT, Claude, Magic School) as feedback helpers to refine my thinking and assist in the editing process.”

Related side note, I commented on that post,

The magic sauce lies in this part of your post:
“AI won’t automatically shift the focus to human connection—we have to intentionally design learning environments that prioritize it. This involves rethinking instruction, supporting teachers, and ensuring that we use AI as a tool to enhance, not replace, the human elements of education.”

A simple example: I think about the time my teachers spend making students think about formatting their PowerPoint slides, think about colour palettes, themes, aesthetics, and of course messaging… and I wonder what students lose in presentation preparation when AI just pumps out a slide or even a whole presentation for them?

“Enhance but not replace,” this is the key, and yet this post really strikes a chord with me because the focus is not just the learning but the human connection, and I think if that is the focus it doesn’t matter if the use of technology is more, less, or the same, what matters is that the activities we do enrich how we engage with each other in the learning.

Take the time to read Chris’ post. He is really thinking deeply about how to use AI effectively in classrooms.

However, I’m thinking about the reality that it is a lot harder today to know when a student is using AI to avoid thinking and working. Actually, it’s not just about work avoidance; it’s also about chasing marks. Admission to university has gotten significantly more challenging, and students care a lot about getting an extra 2-5% in their courses because that difference could mean getting into their university of choice or not. So incentives are high… and detecting AI use is getting a lot harder.

Yes, there are AI detectors that we can use, but I could write a complex sentence in three different ways, put each into an AI detector, and one version could say ‘Not AI’, one could say there’s a 50% chance it was written by AI, and the third might say an 80% chance of AI… all written by me. 20 years ago, I’d read a complex sentence written in my Grade 8 English class and think, ‘That’s not this kid’s work.’ So I’d put the sentence in quotes in the Google search bar and out would pop the source. When AI is generating the text, detection is not nearly as simple.

Case in point: ‘The Backlash Against AI Accusations’, and shared in that post, ‘She lost her scholarship over an AI allegation — and it impacted her mental health’. And while I can remember the craze about making assignments ‘Google proof’ by asking questions that can’t easily be answered with Google searches, it is getting significantly harder to create an ‘AI proof’ assessment… and I’d argue that this is getting even harder on a daily basis with AI advances.

Essentially, it comes down to a simple set of questions that students need to face: Do you want to learn this? Do you want to formulate your ideas and improve your thinking? Or do you just want AI to do it for you? The challenge is, if a kid doesn’t care, or cares more about their mark than their learning, it’s going to be hard to prove they used AI even if you believe they did.

Are there ways to catch students? Yes. But for every example I can think of, I can also think of ways to avoid detection. Here is one example: Microsoft Word documents have version tracking. As a teacher, I can look at the versions and see large swaths of cut-and-pasted writing to ‘prove’ the student is cheating. However, a student could say, “I wrote that part on my phone and sent it to myself to add to the essay.” Or a savvy student could use AI but type the work in rather than pasting it. All this to say that if a kid really wants to use AI, in many cases they can get away with it.
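For what it’s worth, that version-history check is easy to automate, and, as the example above shows, just as easy to defeat. Here’s a minimal sketch using Python’s difflib, assuming two consecutive drafts exported as plain text (the filenames and the 80-word threshold are hypothetical choices, not a real detection standard):

```python
import difflib

def large_insertions(old_draft: str, new_draft: str, min_words: int = 80):
    """Flag blocks that appear in the newer revision as one large insertion,
    the cut-and-paste pattern a teacher might notice in version history."""
    old_words, new_words = old_draft.split(), new_draft.split()
    matcher = difflib.SequenceMatcher(None, old_words, new_words)
    flagged = []
    for tag, _, _, j1, j2 in matcher.get_opcodes():
        if tag in ("insert", "replace") and (j2 - j1) >= min_words:
            flagged.append(" ".join(new_words[j1:j2]))
    return flagged

# Usage sketch: compare two saved drafts. A long flagged block is only a
# hint, not proof -- as noted above, a student may have typed it elsewhere.
old = open("draft_v1.txt").read()
new = open("draft_v2.txt").read()
for block in large_insertions(old, new):
    print(block[:120], "...")
```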

So what’s the best way to battle this? I’m not sure. What I do know is that taking the policing and detecting approach is a losing battle. Here are my ‘simple to say’ but ‘not so simple to execute’ ideas:

  1. The final product matters less than the process. Have ideation, drafts, and discussions count towards the final grade.
  2. Foster collaboration: have components of the work depend on other students’ input. Examples include interviews, or reflections on work presented in class, where context matters.
  3. Inject appropriate use of AI into an assignment, so that students learn to use it appropriately and effectively.

Will this prevent inappropriate AI use? No, but it will make the effort of using AI almost as hard as just doing the work. In the end, if a kid wants to use it, it will be harder and harder to detect, so the best strategy is to create assignments that are engaging and fun to do, and that also meet the required learning objectives… Again, easier said than done.