Tag Archives: future

Fully integrated and invisible AI

We are moving into a new economic revolution. Not since the Industrial Revolution have we seen technology that will change the nature of work so drastically. Artificial Intelligence is about to be woven so deeply into our lives that we will not know where it starts and ends. And while it’s not completely new to have our work enhanced by AI, the depth of influence and ease of use will make it transformational, even as it slowly becomes invisible.

What do I mean by invisible? We already use simple forms of AI in everyday life without thinking about it: we have autocorrect fixing our spelling; we have cars that warn us when we drift out of our lane and that flash in our rear-view mirrors when it’s not safe to change lanes; and we trust autopilot to do the majority of the flying on plane trips around the world. The leap to self-driving cars might have seemed incredible a few years ago, but now you can board a self-driving taxi in San Francisco.

Chat GPT-3, and now Chat GPT-4, are going to change the very nature of work for many people in the coming months and years. Have a look at what Microsoft Copilot is about to offer:


Soon tools like this, aptly named Copilot, will become as useful and integrated into what we do as autocorrect is today. Take meeting notes? Why bother; just record the meeting and ask AI to generate both the notes and the next-step tasks. Create a PowerPoint to present new information? Instead, share the information with Copilot and have it create the PowerPoint. Create a website? How about sketching it on the back of a napkin, sharing a picture of it, and having Chat GPT-4 write the code and build the website.
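As a rough illustration of the ‘record the meeting and ask AI to generate the notes’ idea, here is a minimal Python sketch using the OpenAI library. To be clear, this is an assumption-laden sketch, not how Copilot actually works: the model name, prompt, and transcript file are all stand-ins.

```python
# A minimal sketch of "record the meeting, ask AI for notes and tasks".
# Assumptions: the OpenAI Python library (v1+), an OPENAI_API_KEY in the
# environment, and a plain-text transcript file. Not Copilot's actual pipeline.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

with open("meeting_transcript.txt") as f:  # hypothetical transcript file
    transcript = f.read()

response = client.chat.completions.create(
    model="gpt-4",  # assumed model name; use whatever model is available
    messages=[
        {"role": "system",
         "content": "Summarize this meeting as concise notes, "
                    "then list the next-step tasks with owners."},
        {"role": "user", "content": transcript},
    ],
)

print(response.choices[0].message.content)  # the notes and task list
```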

AI is going to redefine the work of many people faster than at any time in history, and the technology is going to be so integrated into the things we do daily that the use of AI will quickly become invisible… ever present, very useful, and unnoticed.

Feeding the (AI) brain

I worry about the training of Artificial Intelligence using the internet as the main source of information. One of the biggest challenges in teaching AI is teaching it how to group things. Unless a group is clearly identified, it’s not a group. That’s fine when counting items, but not when sorting ideas. What is fact vs fiction? What is truth vs a lie vs an embellishment vs an exaggeration vs a theory vs a really, really bad theory?

There are some dark places on the internet. There are some deeply flawed ideas about culture, race, gender, politics, and even health and fitness. There are porn sites that objectify women, and anti-science websites that read like they are reporting facts. There is a lot of ‘stupid shit’ on the internet. How is this information grouped by not-yet-intelligent AI systems?

There is the old saying, ‘Garbage in, garbage out’, and essentially that’s my concern. Any form of artificial general intelligence is only as good as the intelligence put into the system, and while the internet is a great source of intelligent information, it’s also a cesspool of ridiculous information that’s equally easy to find. I’m not sure these two dichotomous kinds of information are being grouped by AI systems in a meaningful and wise way… mainly because we aren’t smart enough to program these systems well enough to know the difference.
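To make the grouping problem concrete, here is a toy Python sketch (using scikit-learn) of a text classifier: it learns ‘credible’ vs ‘dubious’ purely from the handful of human-supplied labels it is given. The tiny dataset below is invented purely for illustration.

```python
# Toy illustration of 'garbage in, garbage out': a classifier only learns
# the groupings present in its (human-labeled) training data; it has no
# notion of truth. The examples and labels below are made up.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    "Peer-reviewed study finds vaccine reduces severe illness",
    "Randomized controlled trial confirms treatment is effective",
    "Miracle cure THEY don't want you to know about!!!",
    "Shocking secret: this one trick reverses aging overnight",
]
labels = ["credible", "credible", "dubious", "dubious"]  # human judgment calls

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)

# The prediction reflects patterns in the labels, not actual truth:
print(model.predict(["Scientists report results of a randomized trial"]))
```

Flip the labels and the model will confidently call junk ‘credible’. Scale that problem up to the whole internet, and the concern is obvious.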

The tools we have for searching the internet are based on algorithms that are constantly gamed by SEO techniques, and search is based on words, not ideas. The best ideas on the internet are not necessarily the ones most linked to, and often bad ideas get more clicks, likes, and attention. How does an AI weigh this? How does it group these ideas? And what conclusions does the AI make? Because the reality is that the AI needs to make decisions or it wouldn’t be considered intelligent. Are those decisions ones ‘we’ are going to want it to make? If the internet is the main database of information, then I doubt it.

Monochromatic cars

In a world of flashy outfits and accessories, how has the car remained a single colour for so long? I understand that for some people resale value is important, but there are a lot of old cars out there just waiting to become someone’s work of art. We live in a world where so many people do things to stand out, yet car paint has stayed monochrome. One colour per car.

I think this is going to change a lot in the next few years. We are going to see some crazy looking cars, and people will use them to express their identity, their uniqueness. Maybe not just artwork, maybe different colours for different parts of the car. Shades of a colour accented with darker hues on fenders and over wheels. A complementary colour on the hood.

It’s coming. Flashy cars for flashy personalities. And crazy artwork for people wanting to express their love for creativity. It’s just a matter of time.

When search engines become answer engines

One of the most alarming things I’ve read and heard about since I started mentioning Chat GPT and the use of predictive AI tools is that the profitability model for content creators is going to have to change. With Google and Bing both embedding AI-enhanced ‘answers’ in their search results, there is going to be a dramatic impact on the website visits (click-throughs and advertising views) that content creators count on.

Here is a link to a very long but interesting essay by Alberto Romero on the subject: Google vs Microsoft: Microsoft’s New Bing Is a Paradigm Change for Search and the Browser

This is an excerpt from the section titled, ‘With great power comes great responsibility’:

“Giving value back to creators
One of the core business aspects of search is the reciprocal relationship between the owners of websites (content creators and publishers) and the owners of the search engines (e.g. Google and Microsoft). The relationship is based on what Nadella refers to as “fair use.” Website owners provide search engines with content and the engines give back in form of traffic (or maybe revenue, etc.). Also, search engine owners run ads to extract some profit from the service while keeping it free for the user (a business model that Google popularized and on top of which it amassed a fortune).”

and a little further down,

“…Sridhar Ramaswamy, ex-Google SVP and founder of Neeva (a direct competitor of Bing and Google Search), says that “as search engines become answer engines, referral traffic will drop! It’s happened before: Google featured snippets caused this on 10-20% of queries in the past.”

So, getting a response from your search query already has a historical track record of reducing referral traffic, and now search is going to get significantly better at answering questions without you needing to click through to a website.

What is human-created (as opposed to Artificial Intelligence-created) content going to look like in the future, when search answers the questions that would normally require you to visit a website? What happens to creator and publisher profitability when search engines become answer engines?

Disengaged

It’s apparent in schools, and it’s apparent in the workforce… there are students and young adults who are disengaged from the societal norms and constructs around school and work. They are questioning why they need to conform, why they need to participate. There is a dissatisfaction with complying with the expectations that school is necessary, or that a ‘9-5’ job is somehow meaningful.

Some will buck the norm, find innovative alternatives, and create their own niches in the world. Others, many others, will struggle, wallow in unhappiness, and fight mental health demons that will leave them feeling defeated, riddled with anxiety, or fully disengaged from a world they feel they don’t fit into. Some will escape this; some will find pharmaceutical ways to reduce or enhance their disconnect. Some of these will be doctor-prescribed, others will be legally or illegally self-prescribed.

The fully immersive worlds of addictive, time-sucking on-demand television series, first-person online games, and the glamorous ‘living my best life’, ‘you will never be as happy as me’ illusions on social media certainly don’t help. Neither does unlimited access to porn, violence, and anti-Karen social justice warriors dishing out revenge and hate in the name of justice. The choices are fully immersed, unhappily jealous, or infuriatingly angry… and disengaged from the world. Real life is not as interesting or as engaging as the experiences our technological tools can provide. School is hard, a full day at work is boring, and it’s easier to disengage than participate.

The question is, will this disengaged group find their way? Or will they find themselves in their 30s living in their parents’ basements or subsisting on minimal income, working only enough to survive and never enough to thrive?

School and work can’t compete with the sheer entertainment value this group gets from disengaging, so what’s the path forward? We can’t make them buy in if they refuse, and we can’t let school-aged students wallow in school-less escapes from an engaged and full life. I don’t have any solutions, but I have genuine concerns for a growing number of disengaged young adults who seem dissatisfied with living in a world they don’t feel they can participate in meaningfully.

What does the future hold for those who disengage by choice?

Website domains matter

I think in an era of fake news and deepfakes, we are going to see a resurgence and refocus on web page branding. When you can’t even trust a video, much less a news article, the source of your information will become even more important.

I was on Twitter recently, after the tragic earthquake in Turkey and Syria, when I came across a video of what was claimed to be a nuclear reactor explosion in Turkey. The hashtags suggested that it was a video from the recent crisis, but with a little digging I discovered that it was an explosion from many years ago, nowhere near Turkey as suggested. The video had tens of thousands of views, likes, and retweets. I didn’t take the video at face value, but many others did. I reported the tweet, but doubt that it was removed before it was shared many more times.

Although I wasn’t fooled this time, I have been fooled before and I will be fooled again. That said, part of my ‘bullshit detector’ is paying attention to the source. Recently I saw a hard-to-believe article online from a major news station… except that the page was only designed to look like the major news station’s site, and it had a completely different web address. The article was fake. What drew my attention to it being fake was that it seemed more like an advertisement than a news article; otherwise I probably would have been fooled. As soon as I was suspicious, the first thing I did was ask myself whether this really was the news organization I thought it was. I went into my browser history and looked for the website this morning to take a screenshot of the article, and I found this:

The website is down… which is good, but again I wonder how many people it fooled. It was surprisingly high up in a Google search just a few days ago, and so I clicked thinking it would be legitimate.

When looking for information about controversial people or topics, it’s going to get harder and harder to know if the source of the information is reliable. One surefire way to check is to look at the website’s domain. In some cases, even if the source is legitimate, you might still have to question its accuracy, and use a tool like Media Bias/Fact Check to see what kind of bias the site tends to hold. But you will build a repertoire of reliable sites and go to them first.
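That domain check can even be sketched in code. Here is a minimal Python example of the idea: pull the hostname out of a link and compare it against your own vetted list. The URLs and the trusted list here are hypothetical, and a real checker would also need to handle subdomains and redirects.

```python
# A minimal sketch of the 'check the domain' habit. The trusted list and
# sample URLs are hypothetical; a real tool would also handle subdomains,
# redirects, and punycode lookalikes.
from urllib.parse import urlparse

TRUSTED = {"bbc.com", "reuters.com", "apnews.com"}  # your own vetted sources

def looks_legitimate(url: str) -> bool:
    host = (urlparse(url).hostname or "").lower()
    host = host.removeprefix("www.")  # treat 'www.bbc.com' as 'bbc.com'
    return host in TRUSTED

print(looks_legitimate("https://www.bbc.com/news/some-story"))      # True
print(looks_legitimate("https://bbc.com.breaking-newz.xyz/story"))  # False: lookalike domain
```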

More and more, the web domain will be the ultimate litmus test that helps you determine whether a claim or a quote (delivered in written, audio, or even video format) is legitimate, because fake news and deepfakes will become more convincing, more authentic-looking, and more prevalent… and that trend has already started.

AI, Evil, and Ethics

Google is launching Bard, its version of Chat GPT, connected to search, and connected live to the internet. Sundar Pichai, CEO of Google and Alphabet, shared yesterday, “An important next step on our AI journey“. In discussing the release of Bard, Sundar said,

“We’ll combine external feedback with our own internal testing to make sure Bard’s responses meet a high bar for quality, safety and groundedness in real-world information.”

Following the link above led me to this next page, which states:

“In addition to producing responses that humans judge as sensible, interesting, and specific to the context, dialog models should adhere to Responsible AI practices, and avoid making factual statements that are not supported by external information sources.”

I am quite intrigued by what principles Google is using to guide the design and use of Artificial Intelligence. You can go to the links for the expanded description, but here are Google’s Responsible AI practices:

“Objectives for AI applications

We will assess AI applications in view of the following objectives. We believe that AI should:

1. Be socially beneficial.

2. Avoid creating or reinforcing unfair bias.

3. Be built and tested for safety.

4. Be accountable to people.

5. Incorporate privacy design principles.

6. Uphold high standards of scientific excellence.

7. Be made available for uses that accord with these principles.”

But these principles aren’t enough; they are the list of ‘good’ directions, and so the ‘Thou Shalt Nots’ are added below these principles:

“AI applications we will not pursue

In addition to the above objectives, we will not design or deploy AI in the following application areas:

  1. Technologies that cause or are likely to cause overall harm. Where there is a material risk of harm, we will proceed only where we believe that the benefits substantially outweigh the risks, and will incorporate appropriate safety constraints.

  2. Weapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people.

  3. Technologies that gather or use information for surveillance violating internationally accepted norms.

  4. Technologies whose purpose contravenes widely accepted principles of international law and human rights.

As our experience in this space deepens, this list may evolve.”

I remember when Google used to share its motto “Don’t be evil”.

These principles remind me of the motto. The interesting vibe I get from the principles and the ‘Thou Shalt Not’ list of applications Google will not pursue is this:

‘How can we say we will try to be ethical without: a) mentioning ethics; and b) admitting that this is an imperfect science and that we are guaranteed to make mistakes along the way?’

Here is the most obvious statement that these Google principles and guidelines are all about ethics without using the word ethics:

“…we will not design or deploy AI in the following application areas:

  1. Technologies that cause or are likely to cause overall harm. Where there is a material risk of harm, we will proceed only where we believe that the benefits substantially outweigh the risks, and will incorporate appropriate safety constraints.”

You can’t get to, “Where there is a material risk of harm, we will proceed only where we believe that the benefits substantially outweigh the risks”, without talking about ethics. Who is the ‘we’ in ‘we believe’? Who is deciding what benefits outweigh what risks? Who determines what is ‘substantial’ in the weighing of benefits versus risks? Going back to Principle 2, how is bias being determined or measured?

The cold hard reality is that the best that Google, Chat GPT, and all AI and predictive text models can do is ‘try to do less evil than good’, or maybe just ‘make it harder to do evil than good.’

The ethics will always trail the technological capabilities of the tool, and guiding principles are a method to catch wrongdoing, not prevent it. With respect to the list of applications AI will not pursue, “As our experience in this space deepens, this list may evolve” is a way of saying, ‘We will learn of ways that this tool will be abused, and then add to this list.’

The best possible goal of the designers of these AI technologies will be to do less evil than good… The big question is: how can they do this ethically when it seems these companies are scared to talk directly about ethics?

You can’t police it

I said Chat GPT is a game changer in education in my post Fear of Disruptive Technology. Then I started to see more and more social media posts sharing that Artificial Intelligence (AI) detectors can identify passages written by AI. But that doesn’t mean a person can’t use a tool like Chat GPT and then do some of their own editing. Or, as seen here: just input the AI-written text into CopyGenius.io and ‘rephrase’, and then AI detectors can’t recognize the text as written by AI.

The first instinct when combating new technology is to ban and/or police it: no cell phones in class, leave them in your lockers; you can’t use Wikipedia as a source; block Snapchat, Instagram, and TikTok on the school wifi. These are all gut reactions to new technologies that frame the problem as policeable… Teachers are not teachers, they aren’t teaching, when they are acting as police.

Read that last sentence again. Let it sink in. We aren’t going to police away student use of AI writing tools, so maybe we should embrace these tools and help manage how they are used. We can teach students how to use AI appropriately and effectively. Teach rather than police.

Technological leaps

I’ve been very interested in bicycle gadgets for a while. I designed a backpack for bicycle commuters and patented a bicycle lock, both things that I’ll share in detail here at a later date. But today I want to share a brilliant, even revolutionary, advancement in bicycle design.

This Ultra-Efficient Bike Has No Chains and No Derailleurs

This video explains how it works:


Absolutely brilliant! No more chains, much more efficiency. Wireless electronic shifting and a split pinion that adjusts to the next gear while still engaged with the previous gear.

This isn’t just a better design, it’s a leap forward. I have questions about how it would perform in dirt and mud, and about its reliability in ‘the real world’, but those are things that can be tweaked over time. The reality is that this isn’t a tweak; it’s a fundamental shift in design that is going to change future bicycle designs, and other driveshaft designs, in the years to come.

Amazing!

Fear of Disruptive Technology

There is a lot of fear around how the Artificial Intelligence (AI) tool Chat GPT is going to disrupt teaching and learning. I’ve already written about this chatbot:

Next level artificial intelligence

And,

Teaching in an era of AI

And,

The future is now.

To get an understanding of the disruption that is upon us, in the second post, Teaching in an era of AI, I had Chat GPT write an essay for me. Then I noted:

“This is a game changer for teaching. The question won’t be how do we stop students from using this, but rather how do we teach students to use this well? Mike Bouliane said in a comment on yesterday’s post, ‘Interesting post Dave. It seems we need to get better at asking questions, and in articulating them more precisely, just like in real life with people.’

Indeed. The AI isn’t going away, it’s just going to get better.”

And that’s the thing about disruptive technology, it can’t be blocked, it can’t be avoided, it needs to be embraced. Yet I’ve seen conversations online where people are trying to block it in schools. I haven’t seen this kind of ‘filter and hide from students’ philosophy since computers and then phones started to be used in schools. It reminds me of a blog post I wrote in 2010, Warning! We Filter Websites at School, where I shared this tongue-in-cheek poster for educators in highly filtered districts to put up on their doors:

Well now the fervour is back, and much of the talk is about how to block Chat GPT and how to detect its use. And while there are some conversations about how to use it effectively, doing so disrupts what most teachers assign to students, and it also disrupts assessment practices. Nobody likes so much disruption to their daily practice happening all at once. So, the block, filter, and policing (catching cheaters) discussions begin.

Here is a teacher of Senior AP Literature who uses Chat GPT to improve her students’ critical thinking and writing. Note how she doesn’t use the tool for the whole process. Appropriate, not continuous, use of the tool:

@gibsonishere on TikTok

Again going all the way back to 2010, in Transformative or just flashy educational tools? I said,

“A tool is just a tool! I can use a hammer to build a house and I can use the same hammer on a human skull. It’s not the tool, but how you use it that matters.”

And here is something really important to note:

The. Technology. Is. Not. Going. Away.

In fact, AI is only going to get better… and be more disruptive.

Employers are not going to pretend that it doesn’t exist. Imagine an employer saying, “Yes, I know we have power drivers, but to test your skill we want you to screw in this 3-inch screw with a handheld screwdriver”… and then not letting the new employee use power tools in their daily work.

Chat GPT is very good at writing code, and many employers test their candidates by asking them to write code for them. Are they just going to pretend that Chat GPT can’t write the same code much faster? I can see a performance test for new programmers looking something like this in the future: “We asked Chat GPT to write the code to perform ‘X’, and this is the code it produced. How would you improve this code to make it more effective and elegant?”
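To make that hypothetical test concrete, here is a Python sketch of what such a question might look like; the task, the ‘AI-generated’ function, and the candidate’s improvement are all invented for illustration.

```python
# Hypothetical interview exercise: improve this working-but-clunky function
# that an AI supposedly produced. (Both versions are invented examples.)

def sum_evens(numbers):
    # "AI-generated": correct, but needlessly indexed and verbose.
    total = 0
    for i in range(len(numbers)):
        if numbers[i] % 2 == 0:
            total = total + numbers[i]
    return total

def sum_evens_improved(numbers: list[int]) -> int:
    """Return the sum of the even values in `numbers`."""
    # A candidate's tightened version: same behavior, more idiomatic.
    return sum(n for n in numbers if n % 2 == 0)

# Both versions agree on the same input, but one reads far better.
assert sum_evens([1, 2, 3, 4]) == sum_evens_improved([1, 2, 3, 4]) == 6
```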

Just like the TikTok teacher above, employers will expect the tool to be used, and will want their employees to know how to use it critically and effectively to produce better work than they could without it.

I’m reminded of this cartoon I created back in 2009:

The title of the accompanying post asks, Is the tool an obstacle or an opportunity? The reality is that AI tools like Chat GPT are going to be very disruptive, and we will be far better off looking at these tools as opportunities rather than obstacles… because if we choose to see these tools as obstacles, then we are the actual obstacles getting in the way of progress.