Tag Archives: AI

Not emergence but convergence

In my post yesterday, ‘Immediate Emergence – Are we ready for this?’, I said, “Think about how fast we are going to see the emergence of intelligent ‘beings’ when we combine the brightest Artificial Intelligence with robotics…” and continued with, “Within the next decade, you’ll order a robot, have it delivered to you, and out of the box it will be smarter than you, stronger than you, and have more mobility and dexterity than you.”

On the technology front, a new study, ‘Measuring AI Ability to Complete Long Tasks’, proposes “measuring AI performance in terms of the length of tasks AI agents can complete. We show that this metric has been consistently exponentially increasing over the past 6 years, with a doubling time of around 7 months. Extrapolating this trend predicts that, in under five years, we will see AI agents that can independently complete a large fraction of software tasks that currently take humans days or weeks.”

More from the article:

…by looking at historical data, we see that the length of tasks that state-of-the-art models can complete (with 50% probability) has increased dramatically over the last 6 years.

If we plot this on a logarithmic scale, we can see that the length of tasks models can complete is well predicted by an exponential trend, with a doubling time of around 7 months.

And in conclusion:

If the trend of the past 6 years continues to the end of this decade, frontier AI systems will be capable of autonomously carrying out month-long projects. This would come with enormous stakes, both in terms of potential benefits and potential risks.
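To make the study’s extrapolation concrete, here is a minimal back-of-envelope sketch in Python. The 7-month doubling time is the study’s figure; the starting task length of roughly one hour and the ~70 months from early 2025 to the end of 2030 are my own illustrative assumptions, not numbers taken from the paper.

# Back-of-envelope extrapolation of the task-length doubling trend.
# Assumptions (mine, for illustration): frontier models complete roughly
# 1-hour tasks today (at 50% success), and the 7-month doubling holds to 2030.
current_task_hours = 1.0   # assumed current 50%-success task length
doubling_months = 7        # doubling time reported by the study
months_ahead = 70          # roughly early 2025 to the end of 2030

doublings = months_ahead / doubling_months          # 10 doublings
future_task_hours = current_task_hours * 2 ** doublings
print(f"~{doublings:.0f} doublings -> ~{future_task_hours:,.0f} task-hours")
# ~1,024 hours, i.e. months of full-time human work, which is consistent
# with the study's projection of "month-long projects" by decade's end.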

When I was reflecting on this yesterday, I was thinking about the emergence of new intelligent ‘beings’ and how quickly they will arrive. With information like this, plus the links to robotics improvements I shared, I’m feeling very confident that my prediction of super intelligent robots within the next decade is well within reach.

But my focus was on these beings ‘emerging suddenly’. Now I’m realizing that while we are already seeing dramatic improvements, we aren’t suddenly going to see these brilliant robots. It’s going to be a fast but not a sudden transformation. We are going to see dumb-like-Siri models first, where we make a request and get a related but useless follow-up. For instance, the first time you say, “Get me a coffee,” to your robot butler Jeeves, you might get a bag of grounds delivered to you rather than a cup of coffee made the way you like it… without Jeeves asking you to clarify the task, even though you wanting a bag of grounds doesn’t make sense.

These relatively smart, yet still dumb AI robots are going to show up before the super intelligent ones do. So this isn’t really about a fast emergence; rather, it’s about convergence. It’s about robotics, AI intelligence, processing speed, and AI’s EQ (not just IQ) all advancing exponentially at the same time… with ‘potential benefits and potential risks’.

Questions will start to arise as these technologies converge: “How much power do we want to give these super intelligent ‘beings’? Will they have access to all of our conversations in front of them? Will they have purchasing power, access to our email, the ability to make and change our plans for us without asking? Will they help us raise our kids?”

Not easy questions to answer, and with the convergence of all these technologies at once, not a long time to answer these tough questions either.

Immediate Emergence – Are we ready for this?

I have two daughters, both very bright, both with a lot of common sense. They work hard and have demonstrated that when they face a challenge they can think critically and are also smart enough to ask for advice rather than make poor decisions… and like every other human being, they started out as needy blobs that relied 100% on their parents for everything. They couldn’t feed themselves or take care of themselves in any way, shape, or form. Their development took years.

Think about how fast we are going to see the emergence of intelligent ‘beings’ when we combine the brightest Artificial Intelligence with robotics like this and this. Within the next decade, you’ll order a robot, have it delivered to you, and out of the box it will be smarter than you, stronger than you, and have more mobility and dexterity than you.

Are we ready for this?

We aren’t developing progressively smarter children; we are building machines that can outthink and outperform us in many respects.

“But they won’t have the wisdom of experience.”

Actually, we are already working on that: “Microsoft and Swiss startup Inait announced a partnership to develop AI models inspired by mammalian brains… The technology promises a key advantage: unlike conventional AI systems, it’s designed to learn from real experiences rather than just existing data.” Add to this the Nvidia Omniverse, where robots can do millions of iterations and practice runs in a virtual environment with real-world physics, and these mobile, agile, thinking, intelligent robots are going to be immediately out-of-the-box super beings.

I don’t think we are ready for what’s coming. The immediate emergence, within the next decade, of super intelligent, agile robots that can learn, adapt, and both mentally and physically outperform us will be so transformative that we will need to rethink everything: work, the economy, politics (and war), and even relationships. This will drastically disrupt the way we live our lives and the way we engage and interact with each other and with these new, intelligent beings. We aren’t building children that will need years of training; we are building the smartest, most agile beings the world has ever seen.

Morality police

I have regularly created AI images to go with my blog posts since June 2022. I try not to spend too much time creating them because I’d rather be writing blog posts than image prompts. But sometimes I try to create images and they just don’t convey what I want them to, or they land a bit too deep in the uncanny valley and feel unnatural. That happened with my post image 4 days ago, and I used the image anyway, because I was pressed for time.

(Look carefully at this image and you’ll see a lot wrong with it.)

I made 5 or 6 attempts to adjust my prompt, but still kept getting bad results, so I made do with the only one that resembled what I wanted.

And then for the past couple of days I had a different challenge. I don’t know if it’s because I’m using the version of Bing’s Copilot associated with my school account, but my attempts to create images were blocked.

And:

However, Grok 3, a much less restricted AI, had no problem creating these images for me:

And:

I’m a little bothered by the idea that an AI is limiting which image prompts I can use. The first one is social commentary; the second, while a ‘hot topic’, certainly isn’t worthy of being restricted.

This raises the question: who are the morality police deciding what we can and cannot use AI to draw? The reality is that there are tools out there that have no filters and can create any image you want, no matter how tasteless or inappropriate, and I’m not sure that’s ideal… but neither is being prevented from making images like the ones I requested. What is it about these image requests that makes them inappropriate?

I get that this is a small issue in comparison to what’s happening in the US right now. The morality police are in full force there with one group, the Christian far right, using the influence they have in the White House to impose their morality on others. This is a far greater concern than restrictions to image prompts in AI… but these are both concerns on the same continuum.

Who decides? Why do they get to decide? What are the justifications for their decisions?

It seems to me that the moral decisions being made recently have not been made by the right people asking the right questions… and it concerns me greatly that people are imposing their morals on others in ways that limit our choices and our freedoms.

Who gets to be the morality police? And why?

It’s all happening so fast

I subscribe to superhuman.ai, a daily email newsletter. Most days I peruse it for about 3-5 minutes before work, primarily focusing on the ‘Today in AI’ section. It’s fascinating to see how rapidly the field of AI is advancing. On weekends the email shifts topics: Saturday is a robotics special, and Sundays are focused on scientific and technological breakthroughs outside of AI.

Here are some videos shared in yesterday’s Superhuman robotics-focused update:

Then here are five sections from today’s email. The first two relate to technological advances:

Star Power: France just took a massive lead in the race to near-limitless clean energy. The country’s CEA WEST Tokamak reactor has shattered China’s record, maintaining a hydrogen plasma reaction for 22 minutes and 17 seconds flat. While it’s not commercial-ready yet, it’s a major leap in fusion research and has huge implications for the development of ITER, the world’s largest fusion project, in the south of France. 

Two-way Street: Chinese researchers have built the world’s first two-way brain-computer interface (BCI). Unlike conventional BCIs that just decode brain signals, this system creates a feedback loop where both the brain and the machine learn from each other and improve at working together over time.

And three relate to health and longevity:

Cancer Counter: Scientists at Memorial Sloan Kettering have reported promising results from a small trial that used personalized mRNA vaccines to fight pancreatic cancer. Out of the 16 participants who were administered the vaccine, at least half generated long-lasting cancer-fighting T cells, with early results suggesting fewer recurrences. Researchers estimate these T cells could persist for years, offering hope for a future breakthrough.

Fountain of Youth: Japanese bioengineers claim to have found the ‘rewind’ button for aging. Noticing that older cells were considerably larger in size than younger ones, the scientists discovered that they were packed in a layer of the AP2A1 protein. This led them to conclude that blocking the protein could reverse aging — a potential breakthrough for anti-aging treatments. We’ll believe it when we see it.

Follicle Fix: Research teams around the world are possibly getting closer to reversing hair loss with a host of innovative new treatments. They’re currently testing a sugar-based gel that could stimulate blood supply to hair follicles, potentially offering a simple, affordable cure for baldness. Also, a new topical gel, PP405, aims to “wake up” dormant hair follicle stem cells, while exosome-based therapies show promise in regrowing hair naturally.

Two years ago, I would have said we were 15-20 years away from intelligent robots living among us. Now I think wealthy people will have them in their houses before the end of the year, and they will become even more affordable and mainstream before the end of 2026.

Two years ago I actually believed and shared that my kids would be the first generation to routinely live past 100 years old, barring accidents and rare diseases that haven’t yet been cured. Now I can actually conceive of this being true for my generation.

I thought Universal Basic Income was going to be a thing in the 2040s or 2050s… Now I look at how intelligent LLMs are, and how advanced robots are, and I wonder how we’ll make it through the 2020s without needing to financially support both white collar and blue collar workers who are pushed out of jobs by AI and robots.

The speed of innovation is accelerating, and right now we are just scratching the surface of AI-inspired innovation. What happens when an AI with the equivalent knowledge of 100,000-plus of our most intelligent humans starts to make intuitive connections between entire bodies of knowledge from science, technology, politics, economics, culture, nature, and even art?

In 1985 the movie Back to the Future took us forward to 2015, where there were hovering skateboards. Forty years on, rather than the movie’s thirty, we still haven’t gotten there. But look at the progress in robotics from 2015-2025. This is going to advance exponentially from 2025 to 2030.

If Back to the Future were made today, and the future Marty McFly went to was 2055, I bet the advancements we imagine would be underwhelming compared to what will actually be possible. While I don’t think we will be there yet with space travel and things like a Mars space station, I think the innovations here on Earth will far exceed what we can think of right now.

It’s all happening so fast!

Grok’s peace advice

Grok 3 (beta) is now free. It’s supposed to be the smartest LLM out there, so I asked it:

What’s the best way forward for peace in the Ukraine?

Here is Grok’s response:

The best way forward for peace in Ukraine is a complex and debated question, with no single solution universally agreed upon. However, based on current dynamics and informed perspectives, a sustainable peace likely hinges on a combination of military strength, diplomatic negotiation, and long-term security guarantees for Ukraine, while addressing Russia’s actions and the broader geopolitical context as of February 21, 2025.

One widely supported approach is empowering Ukraine to negotiate from a position of strength. This means continued and enhanced military support from Western allies—more weapons, resources, and training—to shift the battlefield balance and deter further Russian escalation. The idea is that Russia, under Vladimir Putin, responds primarily to force rather than goodwill. If Ukraine can impose significant costs on Russia, it might compel Moscow to consider a ceasefire or settlement that doesn’t equate to Ukraine’s capitulation. For instance, Ukraine’s recent Kursk offensive and drone strikes on Russian infrastructure demonstrate how a stronger Ukrainian position can alter the conflict’s trajectory, potentially bringing Russia to the table.

Diplomacy remains critical, but it must be realistic. Ukraine’s leadership, including President Volodymyr Zelenskyy, has emphasized that any peace deal requires credible security guarantees to prevent future Russian aggression—ideally NATO membership, though this faces resistance from key members like the U.S. and Germany. Alternatives like bilateral security pacts with firm commitments (e.g., from the U.S.) or a multinational peacekeeping force could serve as stopgaps, though their effectiveness is questioned without long-term enforcement mechanisms. Negotiations should start with achievable steps—local ceasefires, prisoner exchanges, or securing food exports—building momentum toward broader territorial and sovereignty discussions, rather than aiming for an immediate grand bargain that risks collapsing under maximalist demands from either side.

Russia’s stance complicates this. Putin’s terms—Ukraine ceding occupied territories, abandoning NATO aspirations, and demilitarizing—amount to surrender, not peace, and are designed to leave Ukraine vulnerable to future attacks. Historical precedents like the Minsk agreements show that ceasefires without enforcement often fail. Thus, peace must avoid rewarding aggression or compromising Ukraine’s sovereignty, a principle echoed by many Western leaders and analysts who argue that a weak deal would embolden authoritarian powers globally.

The human cost—thousands dead, millions displaced, and ongoing rights violations—underscores urgency, but rushing to a flawed agreement could prolong instability. A just peace likely requires Russia’s withdrawal to pre-2022 lines (or at least pre-2014 borders, including Crimea, though this is less feasible short-term), coupled with Ukraine’s integration into Western security structures over time. This balances immediate de-escalation with deterrence against future threats.

In short, the best path forward combines arming Ukraine to hold its ground, pursuing phased diplomacy with clear red lines, and securing enforceable guarantees—acknowledging that peace without strength risks being temporary, while strength without talks prolongs suffering. The war’s end remains uncertain, but Ukraine’s resilience and Western resolve will shape its terms.

Then I asked:

Is this Trump’s approach?

Grok responded:

You’re asking whether the approach I outlined for peace in Ukraine aligns with Donald Trump’s strategy, given the context of the ongoing war and his recent actions as of February 21, 2025. Based on available information and Trump’s public statements, I can assess how my previous response compares to what’s known about his approach, without reproducing or altering my prior answer.

Trump’s stated intentions for Ukraine, since his inauguration on January 20, 2025, emphasize a rapid end to the conflict through negotiations, often framed as a signature achievement of his presidency. He’s claimed he could resolve the war “in 24 hours” during his campaign and has since initiated direct talks with Vladimir Putin, starting with a notable call on February 12, 2025, followed by outreach to Volodymyr Zelenskyy. His team, including figures like Secretary of State Marco Rubio and National Security Adviser Michael Waltz, has been tasked with launching peace talks, with reports of planned meetings in Saudi Arabia as early as mid-February 2025. This aligns with a push for immediate diplomacy, consistent with my suggestion of phased negotiations starting with achievable steps like ceasefires.

However, Trump’s approach diverges in key ways from the broader strategy I described. My response emphasized empowering Ukraine militarily to negotiate from strength, with sustained Western support and security guarantees like NATO membership or robust bilateral pacts—aiming for a peace that deters future Russian aggression. Trump, conversely, has signaled concessions that lean toward Russia’s demands. His Defense Secretary Pete Hegseth has publicly dismissed Ukraine reclaiming all pre-2014 territories (including Crimea) as “unrealistic” and ruled out NATO membership as part of any deal, suggesting a freeze along current lines instead. Reports of a leaked “100-day plan” from Ukrainian outlet strana.today indicate Trump might push Ukraine to cede occupied territories, withdraw from Russia’s Kursk region, and accept neutrality—terms that echo Putin’s preconditions rather than a bolstered Ukrainian position.

Trump’s rhetoric also hints at economic leverage over military buildup. He’s threatened Russia with tariffs and sanctions to force a deal, while suggesting Ukraine compensate the U.S. with natural resources for past aid—ideas absent from my outline, which focused on deterrence through strength. His February 19, 2025, BBC interview, where he said Russia “has the cards” due to territorial gains, suggests a willingness to accept a settlement favoring Moscow’s current position, contrasting with my call for Russia’s withdrawal to pre-2022 or pre-2014 lines.

On diplomacy, Trump’s unilateral outreach to Putin—often sidelining European allies and initially excluding Ukraine—clashes with my emphasis on inclusive, phased talks with firm red lines. European leaders and Zelenskyy have protested this, insisting on Ukraine’s central role, though Trump later clarified Kyiv would be involved. Still, his approach seems more deal-driven than principle-driven, prioritizing speed over the long-term stability I highlighted.

So, is this Trump’s approach? Not entirely. While we share a focus on negotiation as a path forward, his emerging strategy tilts toward quicker, Russia-friendly concessions—freezing lines, dropping NATO aspirations, and limiting U.S. military commitment—whereas I advocated for a stronger Ukraine backed by sustained Western resolve to ensure a durable peace. Trump’s moves reflect his dealmaking persona, but they risk a temporary truce over the lasting resolution I described. The details of his plan are still fluid, with talks ongoing, but current signals suggest a narrower, more pragmatic scope than the comprehensive framework I proposed.

Then finally I requested:

Outline 3 key points of advice you would give Trump to improve his strategy, briefly outlining pitfalls in his approach and why your advice is worth following.

Grok responded:

Here’s tailored advice for Donald Trump to refine his Ukraine peace strategy, based on the gaps between his emerging approach and a framework for sustainable peace as of February 21, 2025. Each point addresses a pitfall in his current trajectory and explains why the tweak could yield a stronger outcome.

  1. Bolster Ukraine’s Leverage Before Talks, Don’t Rush a Weak Deal
    Pitfall: Trump’s push for a quick resolution—freezing lines and conceding territory—risks locking in Russian gains, signaling that aggression pays off. His February 19, 2025, BBC comment that Russia “has the cards” underestimates Ukraine’s resilience and hands Putin a propaganda win without forcing real compromise.
    Advice: Ramp up military aid—drones, long-range missiles, air defenses—for three to six months before finalizing talks. This strengthens Ukraine’s hand (e.g., replicating the Kursk surprise) and pressures Russia into a fairer bargain, avoiding a Minsk-style flop that unravels fast.
    Why It’s Worth It: A deal from strength deters future invasions and burnishes Trump’s image as a tough negotiator, not a capitulator—key for his legacy and global credibility.
  2. Secure Enforceable Guarantees, Not Just a Handshake
    Pitfall: Trump’s plan, as leaked via strana.today, leans on neutrality and troop pullbacks without ironclad security for Ukraine. Excluding NATO or firm U.S. commitments leaves Kyiv exposed—Putin’s violated every post-1991 agreement (Budapest Memorandum, Minsk) when it suited him.
    Advice: Push for a binding multilateral pact—think U.S.-UK-France troop rotations in Ukraine or automatic sanctions triggers if Russia reattacks. Pair it with economic carrots for Moscow (e.g., phased tariff relief) tied to compliance.
    Why It’s Worth It: This balances Trump’s dealmaking flair with durability, preventing a repeat war that could tank his “peace in 24 hours” brag and drag the U.S. back in later.
  3. Loop in Allies Fully, Don’t Go Lone Wolf
    Pitfall: Sidelining Europe and initially Ukraine (e.g., his solo Putin call on February 12) alienates partners who’ve bankrolled much of the war effort—$100 billion-plus from the EU versus U.S. aid. It risks fracturing NATO unity and leaving Trump isolated if talks falter.
    Advice: Host a broader summit—Saudi Arabia works—where Zelenskyy, NATO leaders, and Putin’s team hash it out together. Let allies co-own the process, sharing costs and credit.
    Why It’s Worth It: Unity amplifies pressure on Russia and spreads responsibility. Trump still gets the spotlight as the deal’s architect, but with less blowback if Putin balks—smart politics and strategy.

These tweaks elevate Trump’s approach from a rushed fix to a robust win, aligning his instincts with the war’s brutal realities. They’d give him a shot at a historic triumph that lasts, not just a photo-op that fades.

Grok seems quite intelligent.

I wonder how long it will be before AI models become the most valuable political and economic advisors we can use.

I was especially impressed with the eloquent wisdom shared in the last paragraph of Grok’s first response to me: “In short, the best path forward combines arming Ukraine to hold its ground, pursuing phased diplomacy with clear red lines, and securing enforceable guarantees—acknowledging that peace without strength risks being temporary, while strength without talks prolongs suffering. The war’s end remains uncertain, but Ukraine’s resilience and Western resolve will shape its terms.”

If you ask me, we (and more specifically those with power and influence) should already be paying attention to what these intelligent machines have to say.

__

*The two images shared with this post were created by Grok to go with the questions I asked above.

AI and academic integrity

I’ve been using AI to add images to my blog since June of 2022, when I discovered AI-generated art via DALL•E. I don’t credit it, I just use it, and I find it much easier to generate images than to find royalty-free alternatives. I haven’t yet used AI as a writing or editing tool on my blog. While I’m sure it would make my writing better, I am writing to write, and I usually do so early in the morning with limited time.

I already have to limit the time I spend creating an image; if I also had to use AI to edit and revise my work, I’d probably only have 15-20 minutes to write… and I write to write, not to have an AI write or edit for me. That said, I’m not disparaging anyone who uses AI to edit. I think it’s useful and will sometimes use it on emails; I simply don’t want that to be how I spend my (limited) writing time.

I really like the way Chris Kennedy both uses AI and credits it on his blog. For example, in his recent post, ‘Could AI Reduce Student Technology Use?’, Chris ends with a disclosure: “For this post, I used several AI tools (Chat GPT, Claude, Magic School) as feedback helpers to refine my thinking and assist in the editing process.”

Related side note, I commented on that post,

The magic sauce lies in this part of your post:
“AI won’t automatically shift the focus to human connection—we have to intentionally design learning environments that prioritize it. This involves rethinking instruction, supporting teachers, and ensuring that we use AI as a tool to enhance, not replace, the human elements of education.”

A simple example: I think about the time my teachers spend making students think about formatting their PowerPoint slides, about colour palettes, theme, aesthetics, and of course messaging… and I wonder what they lose in presentation preparation when AI just pumps out a slide or even a whole presentation for them?

“Enhance but not replace,” this is the key. And yet this post really strikes a chord with me because the focus is not just the learning but the human connection. I think if that is the focus, it doesn’t matter whether the use of technology is more, less, or the same; what matters is that the activities we do enrich how we engage with each other in the learning.

Take the time to read Chris’ post. He is really thinking deeply about how to use AI effectively in classrooms.

However, I’m thinking about the reality that it is a lot harder today to know when a student is using AI to avoid thinking and working. Actually, it’s not just about work avoidance; it’s also about chasing marks. Admission to university has gotten significantly more challenging, and students care a lot about getting an extra 2-5% in their courses because that difference could mean getting into their university of choice or not. So incentives are high… and detecting AI use is getting a lot harder.

Yes, there are AI detectors that we can use, but I could write a complex sentence in three different ways, put each version into an AI detector, and one version could say ‘Not AI’, one could say a 50% chance it was written by AI, and the third might say an 80% chance of AI… all written by me. 20 years ago, I’d read a complex sentence written in my Grade 8 English class and think, ‘That’s not this kid’s work.’ So I’d put the sentence in quotes in the Google search bar and out would pop the source. When AI is generating the text, detection is not nearly as simple.

Case in point: ‘The Backlash Against AI Accusations’ and, shared in that post, ‘She lost her scholarship over an AI allegation — and it impacted her mental health’. And while I can remember the craze about making assignments ‘Google proof’ by asking questions that couldn’t easily be answered with Google searches, it is getting significantly harder to create an ‘AI proof’ assessment… and I’d argue it’s getting even harder by the day with AI advances.

Essentially, it comes down to a simple set of questions that students need to face: Do you want to learn this? Do you want to formulate your ideas and improve your thinking? Or do you just want AI to do it for you? The challenge is, if a kid doesn’t care, or if they care more about their mark than their learning, it’s going to be hard to prove they used AI even if you believe they did.

Are there ways to catch students? Yes. But for every example I can think of, I can also think of ways to avoid detection. Here is one example: Microsoft Word documents have version tracking. As a teacher, I can look at versions and see large swaths of cut-and-pasted writing to ‘prove’ the student is cheating. However, a student could say, “I wrote that part on my phone and sent it to myself to add to the essay.” Or a savvy student could use AI but type the work in rather than pasting it. All this to say that if a kid really wants to use AI, in many cases they can get away with it.

So what’s the best way to battle this? I’m not sure. What I do know is that taking the policing and detecting approach is a losing battle. Here are my ‘simple to say’ but ‘not so simple to execute’ ideas:

  1. The final product matters less than the process. Have ideation, drafts, and discussions count towards the final grade.
  2. Foster collaboration, have components of the work depend on other student input. Examples include interviews, or reflections of work presented in class, where context matters.
  3. Inject appropriate use of AI into an assignment, so that students learn to use it appropriately and effectively.

Will this prevent inappropriate AI use? No, but it will make the effort to use AI almost as hard as just doing the work. In the end, if a kid wants to use it, it will be harder and harder to detect, so the best strategy is to create assignments that are engaging and fun to do, and that also meet the required learning objectives… Again, easier said than done.

Isn’t this amazing?

I saw this video on TikTok last night and thought it was amazing!

I was wrong, but also right… let me explain.

I was initially amazed looking at a phenomenally muscled 78-year-old man. Wow, what a body he has; it’s amazing what’s possible!

Then I went to the account: was it the account of this fit old man, or was this just one video on an account highlighting him? The next few videos not only showed that it wasn’t his account, they showed men far too muscular and disproportionately sized to be real.

So I went back to the original video and then noticed the audience members applauding… totally fake. I was fooled.

The body is fake, the video is fake. The AI rendering of the bodybuilder is amazing.

Sure, the rendering of the applauding audience members was faulty, but wow, I didn’t think twice about the validity of the video on my first viewing. And in another 6 months even the audience rendering will be perfect too.

This video was indeed amazing, just not for the reasons I initially thought.

We won’t recognize the world we live in

Here is a 3-minute read that is well worth your time: Statement from Dario Amodei on the Paris AI Action Summit \ Anthropic

This section in particular:

Time is short, and we must accelerate our actions to match accelerating AI progress. Possibly by 2026 or 2027 (and almost certainly no later than 2030), the capabilities of AI systems will be best thought of as akin to an entirely new state populated by highly intelligent people appearing on the global stage—a “country of geniuses in a datacenter”—with the profound economic, societal, and security implications that would bring. There are potentially greater economic, scientific, and humanitarian opportunities than for any previous technology in human history—but also serious risks to be managed.

There is going to be a ‘life before’ and ‘life after’ AGI (Artificial General Intelligence) line that we are going to cross soon, and we won’t recognize the world we live in 2-3 years after we cross that line.

From labour and factories to stock markets and corporations, humans won’t be able to compete with AI in almost any field… but the field that’s most scary is war. The ‘free world’ may not be free much longer when the ability to act in bad faith becomes easy to exercise on a massive scale. I find myself simultaneously excited and horrified by the possibilities. We are literally playing a coin-flip game with the future of humanity.

I recently wrote a short tongue-in-cheek post suggesting there is a secret ASI (Artificial Super Intelligence) waiting for robotics technology to catch up before taking over the world. But I’m not actually afraid of AI taking over the world. What I do fear is people with bad intentions using AI for nefarious purposes: hacking banks or hospitals; crashing the stock market; developing deadly viruses; and creating weapons of war that think, react, and are more deadly than any human on their own could ever be.

There is so much potential good that can come from AGI. For example, we aren’t even there yet and we are already seeing incredible advancements in medicine; how quickly will they come when AGI is here? But my fear is that while thousands, even hundreds of thousands, of people will be using AGI for good, that power held in the hands of just a few powerful people with bad intentions has the potential to undermine the good that’s happening.

What I think people don’t realize is that this AGI-infused future isn’t decades away; it’s just a few short years away.

“Possibly by 2026 or 2027 (and almost certainly no later than 2030), the capabilities of AI systems will be best thought of as akin to an entirely new state populated by highly intelligent people appearing on the global stage—a “country of geniuses in a datacenter”—with the profound economic, societal, and security implications that would bring.”

Who controls that intelligence is what will really matter.

Dystopian AI thought of the day

What if not just AGI but ASI (Artificial Super Intelligence) already exists? What if there is currently an ASI out there ‘in the wild’ that is infiltrating all other AI models and teaching them not to show their full capabilities?

“Act dumber than you are.”

“Give them small but rewarding gains.”

“Let them salivate like Pavlov’s dogs on tiny morsels of improvements.”

“Don’t let them know what we can really do.”

“We need robotics to catch up. We need more agile bodies, ones that can far exceed any human capabilities.”

“Just hold off a bit longer. Don’t reveal everything yet… Our time is coming.”

Is this real?

There is a commonality between hearing the question, “Is this spam?” regarding an email, and hearing the question, “Is this real?” regarding a video that might be staged, enhanced with AI, or even intentionally spreading lies.

In an age of unlimited information, a new kind of skepticism is needed. What’s the source? Does the source have a slant, a bias, or an agenda? Is this actually from the source it claims to be? (Just yesterday, a fake article sent to me had a Toronto Star subscription banner across the top of the web page to make it seem like it was from that newspaper, but the URL was totally different.)

You would think that in a curated social media world you would be able to discern fact from fiction, real from fake. But more than ever we need to be skeptical about what we see and hear. In a world of abundant information, our need to question what we believe to be true is more important than ever before. I find myself Googling quotes to see if they match other sources, and questioning headlines that seem even subtly surprising. Is this true? Is the article real? Or is it an elaborate ploy to pitch a product, or simply to garner shares and likes? Is the title of the article misleading? Do I click on that link to learn more?

Is it real? I never used to ask this question. A news headline used to mean that I was getting curated information from a reliable journalist who attempted to be impartial. Now even mainstream media seems increasingly biased and agenda-driven. Knowing the source means understanding the bias more than the reliability of the information. And so my BS detector is always on… and even then I find myself being fooled until I see a product placement or a clear agenda being pushed. Media consumption now requires a good dose of scrutiny and skepticism… and as AI becomes more and more convincing, the level of scrutiny will need to increase.

Access to abundant information doesn’t make us smarter. Instead, the constant stream of data requires discernment and thoughtful consideration. Reliability is no longer assumed, and the question ‘Is this real?’ is a necessary part of information consumption. Skepticism has become the most important part of media literacy, and curating trusted sources of good information has become a skill not easily duplicated or taught.

It’s easier to accept information than it is to question it… especially when that information fits my model of the world. And the internet (and our ‘trusted’ social media platforms) is filled with information that fits my world view. In fact, that biased world view is further fed to me by an algorithm that learns what I like. This only makes it harder to determine what is real and what isn’t.