Tag Archives: Artificial Intelligence

Fully integrated and invisible AI

We are moving into a new economic revolution. Not since the Industrial Revolution have we seen technology that will change the nature of work so drastically. Artificial Intelligence is about to be woven so deeply into our lives that we will not know where it starts and ends. And while it’s not completely new to have our work enhanced by AI, the depth of influence and ease of use will make it transformational while also slowly becoming invisible.

What do I mean by invisible? We already use simple forms of AI in everyday life without thinking about it: we have autocorrect fixing our spelling; we have cars that warn us when we drift out of our lane or flash in our rear-view mirrors when it’s not safe to change lanes; and we trust autopilot to do the majority of the flying on plane trips around the world. The leap to self-driving cars might have seemed incredible a few years ago, but now you can board a self-driving taxi in San Francisco.

GPT-3, and now GPT-4, are going to change the very nature of work for many people in the coming months and years. Have a look at what Microsoft Copilot is about to offer.


Soon tools like this, aptly named Copilot, will become as useful and integrated into what we do as autocorrect is today. Take meeting notes? Why bother; just record the meeting and ask AI to generate both the notes and the next-step tasks. Create a PowerPoint to present new information? Instead, share the information with Copilot and have it create the PowerPoint. Create a website? How about sketching it on the back of a napkin, sharing a picture of it, and having GPT-4 write the code and build the website.

AI is going to redefine the work of many people faster than at any other time in history, and the technology is going to be so integrated into the things we do daily that the use of AI will quickly become invisible… ever present, very useful, and unnoticed.

Feeding the (AI) brain

I worry about the training of Artificial Intelligence using the internet as the main source of information. One of the biggest challenges in teaching AI is teaching it how to group things. Unless a group is clearly identified, it’s not a group. That’s ok when counting items, but not when grouping ideas. What is fact vs fiction? What is truth vs a lie vs an embellishment vs an exaggeration vs a theory vs a really, really bad theory?

There are some dark places on the internet. There are some deeply flawed ideas about culture, race, gender, politics, and even health and fitness. There are porn sites that objectify women, and anti-science websites that read like they are reporting facts. There is a lot of ‘stupid shit’ on the internet. How is this information grouped by not-yet-intelligent AI systems?

There is the old saying, ‘Garbage in, garbage out’, and essentially that’s my concern. Any form of artificial general intelligence is only as good as the intelligence put into the system, and while the internet is a great source of intelligent information, it’s also a cesspool of ridiculous information that’s equally easy to find. I’m not sure these two dichotomous forms of information are being grouped by AI systems in a meaningful and wise way… mainly because we aren’t smart enough to program these systems well enough to know the difference.

The tools we have for searching the internet are based on algorithms that are constantly gamed by SEO techniques, and search is based on words, not ideas. The best ideas on the internet are not necessarily the ones most linked to, and often bad ideas get more clicks, likes, and attention. How does an AI weigh this? How does it group these ideas? And what conclusions does the AI draw? Because the reality is that the AI needs to make decisions or it wouldn’t be considered intelligent. Are those decisions ones ‘we’ are going to want it to make? If the internet is the main database of information, then I doubt it.

When search engines become answer engines

One of the most alarming things I’ve read and heard about since I started writing about Chat GPT and the use of predictive AI tools is that the profitability model for content creators is going to have to change. With Google and Bing both embedding AI-enhanced ‘answers’ as part of their search results, there is going to be a dramatic impact on the website visits (click-throughs and advertising views) that content creators count on.

Here is a link to a very long but interesting essay by Alberto Romero on the subject: Google vs Microsoft: Microsoft’s New Bing Is a Paradigm Change for Search and the Browser

This is an excerpt from the section titled ‘With great power comes great responsibility’:

“Giving value back to creators
One of the core business aspects of search is the reciprocal relationship between the owners of websites (content creators and publishers) and the owners of the search engines (e.g. Google and Microsoft). The relationship is based on what Nadella refers to as “fair use.” Website owners provide search engines with content and the engines give back in form of traffic (or maybe revenue, etc.). Also, search engine owners run ads to extract some profit from the service while keeping it free for the user (a business model that Google popularized and on top of which it amassed a fortune).”

and a little further down,

“…Sridhar Ramaswamy, ex-Google SVP and founder of Neeva (a direct competitor of Bing and Google Search), says that “as search engines become answer engines, referral traffic will drop! It’s happened before: Google featured snippets caused this on 10-20% of queries in the past.”

So, getting a direct answer to your search query already has a historical track record of reducing referral traffic, and now search is going to get significantly better at answering questions without needing a click through to a website.

What is human-created (as opposed to Artificial Intelligence-created) content going to look like in the future, when search answers the questions that would normally require you to visit a website? What happens to creator and publisher profitability when search engines become answer engines?

AI, Evil, and Ethics

Google is launching Bard, its version of Chat GPT, connected to search, and connected live to the internet. Sundar Pichai, CEO of Google and Alphabet, shared yesterday, “An important next step on our AI journey“. In discussing the release of Bard, Sundar said,

We’ll combine external feedback with our own internal testing to make sure Bard’s responses meet a high bar for quality, safety and groundedness in real-world information.

Following the link above led me to this next link:

“In addition to producing responses that humans judge as sensible, interesting, and specific to the context, dialog models should adhere to Responsible AI practices, and avoid making factual statements that are not supported by external information sources.”

I am quite intrigued by what principles Google is using to guide the design and use of Artificial Intelligence. You can go to the links for the expanded description, but here are Google’s Responsible AI practices:

“Objectives for AI applications

We will assess AI applications in view of the following objectives. We believe that AI should:

1. Be socially beneficial.

2. Avoid creating or reinforcing unfair bias.

3. Be built and tested for safety.

4. Be accountable to people.

5. Incorporate privacy design principles.

6. Uphold high standards of scientific excellence.

7. Be made available for uses that accord with these principles.”

But these principles aren’t enough; they are the list of ‘good’ directions, and so the ‘Thou Shalt Nots’ are added below these principles:

“AI applications we will not pursue

In addition to the above objectives, we will not design or deploy AI in the following application areas:

  1. Technologies that cause or are likely to cause overall harm. Where there is a material risk of harm, we will proceed only where we believe that the benefits substantially outweigh the risks, and will incorporate appropriate safety constraints.

  2. Weapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people.

  3. Technologies that gather or use information for surveillance violating internationally accepted norms.

  4. Technologies whose purpose contravenes widely accepted principles of international law and human rights.

As our experience in this space deepens, this list may evolve.”

I remember when Google used to share its motto “Don’t be evil”.

These principles remind me of that motto. The interesting vibe I get from the principles and the ‘Thou Shalt Not’ list of applications Google says it will not pursue is this:

‘How can we say we will try to be ethical without: a) mentioning ethics; and b) admitting that this is an imperfect science and that we are guaranteed to make mistakes along the way?’

Here is the most obvious statement that these Google principles and guidelines are all about ethics without using the word ethics:

“…we will not design or deploy AI in the following application areas:

  1. Technologies that cause or are likely to cause overall harm. Where there is a material risk of harm, we will proceed only where we believe that the benefits substantially outweigh the risks, and will incorporate appropriate safety constraints.”

You can’t get to, “Where there is a material risk of harm, we will proceed only where we believe that the benefits substantially outweigh the risks“… without talking about ethics. Who is the ‘we’ in ‘we believe’? Who is deciding what benefits outweigh what risks? Who determines what is ‘substantial’ in the weighing of benefits versus risks? Going back to Principle 2, how is bias being determined or measured?

The cold hard reality is that the best that Google, Chat GPT, and all AI and predictive text models can do is ‘Try to do less evil than good’, or maybe just ‘Make it harder to do evil than good.’

The ethics will always trail the technological capabilities of the tool, and guiding principles are a method to catch wrongdoing, not prevent it. With respect to the list of applications AI will not be used for, “As our experience in this space deepens, this list may evolve“… is a way of saying, ‘We will learn of ways that this tool will be abused and then add to this list.’

The best possible goal of the designers of these AI technologies will be to do less evil than good… The big question is: how can they do this ethically when it seems these companies are scared to talk directly about ethics?

You can’t police it

I said Chat GPT is a game changer in education in my post Fear of Disruptive Technology. Then I started to see more and more social media posts sharing that Artificial Intelligence (AI) detectors can identify passages written by AI. But that doesn’t mean a person can’t use a tool like Chat GPT and then do some of their own editing. Or, as seen here: just input the AI-written text into CopyGenius.io and ‘rephrase’, and then AI detectors can’t recognize the text as written by AI.

The first instinct with combating new technology is to ban and/or police it: No cell phones in class, leave them in your lockers; You can’t use Wikipedia as a source; Block Snapchat, Instagram, and TikTok on the school wifi. These are all gut reactions to new technologies that frame the problem as policeable… Teachers are not teachers, they aren’t teaching, when they are acting as police.

Read that last sentence again. Let it sink in. We aren’t going to police away student use of AI writing tools, so maybe we should embrace these tools and help manage how they are used. We can teach students how to use AI appropriately and effectively. Teach rather than police.

Fear of Disruptive Technology

There is a lot of fear around how the Artificial Intelligence (AI) tool Chat GPT is going to disrupt teaching and learning. I’ve already written about this chatbot:

Next level artificial intelligence

And,

Teaching in an era of AI

And,

The future is now.

To get an understanding of the disruption that is upon us, in the second post, Teaching in an era of AI, I had Chat GPT write an essay for me. Then I noted:

“This is a game changer for teaching. The question won’t be how do we stop students from using this, but rather how do we teach students to use this well? Mike Bouliane said in a comment on yesterday’s post, ‘Interesting post Dave. It seems we need to get better at asking questions, and in articulating them more precisely, just like in real life with people.’

Indeed. The AI isn’t going away, it’s just going to get better.”

And that’s the thing about disruptive technology, it can’t be blocked, it can’t be avoided, it needs to be embraced. Yet I’ve seen conversations online where people are trying to block it in schools. I haven’t seen this kind of ‘filter and hide from students’ philosophy since computers and then phones started to be used in schools. It reminds me of a blog post I wrote in 2010, Warning! We Filter Websites at School, where I shared this tongue-in-cheek poster for educators in highly filtered districts to put up on their doors:

Well now the fervour is back, and much of the talk is about how to block Chat GPT and how to detect its use. And while there are some conversations about how to use it effectively, doing so means disrupting what most teachers assign to students, and it also disrupts assessment practices. Nobody likes so much disruption to their daily practice happening all at once. So, the block, filter, and policing (catching cheaters) discussions begin.

Here is a teacher of Senior AP Literature who uses Chat GPT to improve her students’ critical thinking and writing. Note how she doesn’t use the tool for the whole process. Appropriate, not continuous, use of the tool:

@gibsonishere on TikTok

Again going all the way back to 2010, I said in Transformative or just flashy educational tools?,

“A tool is just a tool! I can use a hammer to build a house and I can use the same hammer on a human skull. It’s not the tool, but how you use it that matters.”

And here is something really important to note:

The. Technology. Is. Not. Going. Away.

In fact, AI is only going to get better… and be more disruptive.

Employers are not going to pretend that it doesn’t exist. Imagine an employer saying, “Yes, I know we have power drivers, but to test your skill we want you to screw in this 3-inch screw with a handheld screwdriver”… and then not letting the new employee use power tools in their daily work.

Chat GPT is very good at writing code, and many employers test job candidates by asking them to write code for them. Are they just going to pretend that Chat GPT can’t write the same code much faster? I can see a performance test for new programmers looking something like this in the future: “We asked Chat GPT to write the code to perform ‘X’, and this is the code it produced. How would you improve this code to make it more effective and elegant?”
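To make that concrete, here is a minimal sketch of what such a test might look like. The task, the ‘AI-produced’ function, and the improved version are all hypothetical examples of mine, not actual Chat GPT output:

```python
# Hypothetical interview task: "Chat GPT produced this function to find
# the duplicate entries in a list. How would you improve it?"

def find_duplicates(items):
    """Works, but compares every pair of items: O(n^2) time."""
    duplicates = []
    for i in range(len(items)):
        for j in range(i + 1, len(items)):
            if items[i] == items[j] and items[i] not in duplicates:
                duplicates.append(items[i])
    return duplicates

def find_duplicates_improved(items):
    """One possible candidate answer: a single pass with sets, O(n) time."""
    seen, duplicates = set(), set()
    for item in items:
        if item in seen:
            duplicates.add(item)
        else:
            seen.add(item)
    return sorted(duplicates)

print(find_duplicates([3, 1, 2, 3, 1]))           # [3, 1]
print(find_duplicates_improved([3, 1, 2, 3, 1]))  # [1, 3]
```

The point of a test like this wouldn’t be the specific task; it would be whether the candidate can read, critique, and improve generated code.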

Just like the TikTok teacher above, employers will expect the tool to be used and will want their employees to know how to use the tool critically and effectively to produce better work than if they didn’t use the tool.

I’m reminded of this cartoon I created back in 2009:

The title of the accompanying post asks, Is the tool an obstacle or an opportunity? The reality is that AI tools like Chat GPT are going to be very disruptive, and we will be far better off looking at these tools as opportunities rather than obstacles… Because if we choose to see these tools as obstacles then we are the actual obstacles getting in the way of progress.

The future is now

I’ve shared Chat GPT a couple of times (1, 2), and I really think that tools like this are going to create a massive shift in jobs, education, and creativity. It can be seen as both scary and exciting.

On another front, scientists have achieved ‘ignition’ in a nuclear fusion test. This is the creation of a fusion reaction where the energy output is greater than the energy input. For over a decade this was an unachievable goal: any reaction created required so much energy to produce that the costs were greater than the returns.

If you showed someone from 1995 the technology we had 25 years later in 2020, they would be impressed and amazed. I’m reminded of Arthur C. Clarke’s quote, “Any sufficiently advanced technology is indistinguishable from magic”, and what I’m seeing right now seems like magic.

I think the leaps in technology and ingenuity that will happen in the next 25 years will so far exceed what we saw happen in the last 25 years that it will feel more like we are 50 years in the future rather than just 25. So, buckle up and get ready for a wild ride into the future… it’s just getting started now!

Teaching in an era of AI

Yesterday I wrote about Next level artificial intelligence, and since then I’ve learned quite a bit about Chat GPT’s capabilities. It’s an amazing tool and people are catching on, with over 1 million users trying it out in just 6 days. The implications for teaching are huge. Students can use this AI to have original work created for them that will not show up in a Google search if a teacher suspects plagiarism.

For years now I’ve heard the mantra that teachers need to ask questions you can’t find the answer to on Google. But those questions could be uniquely answered by Chat GPT with no way for a teacher to know it wasn’t done by a student… other than the fact that the writing might be of a higher quality than usual.

I don’t see Chat GPT completely doing all the work, but it will be a tool students use to build on. For instance, I asked this AI to do the following:

Write a 5 paragraph essay that compares and contrasts Winston Smith in George Orwell’s novel 1984 with Bernard Marx in Aldous Huxley’s novel Brave New World.

This is what it came up with in about 5 seconds:

Winston Smith and Bernard Marx are both characters in dystopian novels, but they have some significant differences. In George Orwell’s 1984, Winston is a rebellious member of a totalitarian society who secretly resists the oppressive government. In contrast, Bernard Marx in Aldous Huxley’s Brave New World is a member of a highly technologically advanced society where the government has complete control over its citizens.

One of the main differences between the two characters is their relationship with their respective governments. Winston is actively opposed to the government, and he goes to great lengths to avoid detection and to resist their control. Bernard, on the other hand, is more accepting of the government’s authority, and he is content to live within the strict societal boundaries set by the ruling class.

Another significant difference is their social status. Winston is a low-ranking member of society who works in the Ministry of Truth. He is not well-liked and is considered an outsider by many of his coworkers. Bernard, on the other hand, is a high-ranking member of society and is considered to be one of the ruling elite. He is well-liked and respected by his peers, and he has access to privileges and luxuries that are not available to the lower classes.

Additionally, the two characters have different views on love and relationships. In 1984, love is considered to be a dangerous emotion that must be suppressed. Winston, however, longs for intimacy and connection, and he ultimately falls in love with Julia despite the risks. In Brave New World, love is viewed as a primitive and outdated concept. Bernard, like many other members of society, is content with superficial relationships and casual sex.

Overall, Winston Smith and Bernard Marx are two characters who live in vastly different societies, and they have different attitudes towards the government and their own social status. While Winston is a rebellious outsider who longs for love and connection, Bernard is a conformist member of the ruling class who is content with the superficial relationships of his society.

Now I don’t think this is a great piece of literature, but I will tell you that the comparing and contrasting it did offers insights I might not have made if I were writing this essay on my own. This would be a great starting point for a more comprehensive essay… which I could further get Chat GPT to assist me with.

This is a game changer for teaching. The question won’t be how do we stop students from using this, but rather how do we teach students to use this well? Mike Bouliane said in a comment on yesterday’s post, “Interesting post Dave. It seems we need to get better at asking questions, and in articulating them more precisely, just like in real life with people.”

Indeed. The AI isn’t going away, it’s just going to get better. When I asked the question above, I didn’t ask the AI to give specific examples as evidence. That was my first and only attempt. What happens if I iterate a bit more? Add more detail? Put the result back into the AI and ask it to make improvements?

We aren’t going to be able to teach around this tool, so we are probably going to have to teach students when and how to use it. Teaching in the age of very articulate AI is going to have to look different than teaching just a few years ago.

Next level artificial intelligence

Most of the images I use to accompany my blog are created by artificial intelligence (AI). For yesterday’s post on Childhood memories I wrote about a memory of being frozen from the walk home from school on a cold day. I put the following into DALL•E: “A boy walking on a sidewalk in a snowstorm, in an abstract watercolour”, and chose one of the 4 images this AI created:

Impressive to think that this artwork was created by a computer.

Well now there is a new AI called Chat GPT.

Chat GPT is language based… and when I say language based I mean just about any language, including code. Apparently it can write the code to create an app if you tell it what features you want the app to have. You can share code in any computer language and ask this AI to find errors. Or you can input an error message you got from your code and it will find the problem.
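As a hedged illustration of that debugging workflow (the snippet and its bug are my own invented example, not something Chat GPT produced), this is the kind of code you could paste in with the prompt “Find the error in this code”:

```python
# A deliberately buggy function to paste into Chat GPT. (Hypothetical example.)

def average(numbers):
    total = 0
    for i in range(len(numbers) - 1):  # bug: the loop skips the last element
        total += numbers[i]
    return total / len(numbers)

# Chat GPT would typically spot the off-by-one error and suggest
# range(len(numbers)), or the more idiomatic fix:

def average_fixed(numbers):
    return sum(numbers) / len(numbers)

print(average([2, 4, 6]))        # 2.0, wrong
print(average_fixed([2, 4, 6]))  # 4.0, right
```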

That’s great for coding, but it’s also very articulate and creative in most languages. I was interested in what it could write, and so my first attempt was to have it write a blog post for me. This is what I asked: “Read a few posts at https://daily-ink.davidtruss.com/ and write a post that the author would write”. It didn’t do exactly what I asked. Instead it created a promotional advertisement for my blog. Even though I didn’t get what I asked for, read this and see how well it is written:

As a writer, I am constantly on the lookout for inspiration and new ideas. One of my favorite sources of inspiration is reading the work of other writers. Recently, I came across the website https://daily-ink.davidtruss.com/, and I was blown away by the quality of the writing I found there.

The posts on this website cover a wide range of topics, from personal reflections on life and love to thought-provoking discussions of current events. Each post is beautifully written, with clear and concise language that is a joy to read.

One of the things that really stood out to me about this website is the author’s ability to convey complex ideas in a way that is accessible to a wide audience. Whether you are a seasoned writer or someone who is just starting out, you will find something to love on this website.

In addition to the high-quality writing, I also appreciated the website’s clean and simple design. The layout is easy to navigate, and the posts are organized in a way that makes it easy to find what you are looking for.

If you are looking for a new source of inspiration, I highly recommend checking out https://daily-ink.davidtruss.com/. I guarantee that you will not be disappointed.

Not a person, an artificial intelligence wrote that!

Here was my second attempt: “Write a short news story about the coming of age of artificial intelligence.”

And this is what Chat GPT came up with in just a few seconds after my query:

In recent years, there has been a significant increase in the capabilities of artificial intelligence. This technology, which was once only found in science fiction novels, is now becoming a reality.

One of the key developments in the field of AI has been the advancement of machine learning algorithms. These algorithms allow computers to learn from data and improve their performance on a variety of tasks. This has led to the creation of intelligent systems that can perform a wide range of tasks, from recognizing images and speech, to playing games and even driving cars.

As AI technology continues to improve, it is being used in an increasingly wide range of applications. It is being used in healthcare to help diagnose diseases, in finance to make better investment decisions, and even in the criminal justice system to help predict the likelihood of recidivism.

Despite the many benefits of AI, there are also concerns about the potential negative impacts it could have. Some worry about the potential loss of jobs as more tasks are automated, and there are also concerns about the ethical implications of creating machines that can make decisions without human input.

Overall, the coming of age of AI is a significant development that has the potential to change our world in many ways. As the technology continues to evolve, it will be important to carefully consider the potential benefits and drawbacks of this powerful technology.

Impressive! You literally cannot tell that a human didn’t write that article. Wow. AI is going to change the world as we know it… this is absolutely a new level of intelligence.

AI video

If you’ve seen my blog on social media or on my website (as opposed to in your mailbox), you’ve seen the images I create in DALL•E 2, like these, along with the requests that created them:

“A man walking in a forest path and hugging a tree in a pastel drawing”

“A Picasso drawing of a young man walking in a forest surrounded by clocks”

“A small man on a giant piece of paper trying to write with an oversized pen that is bigger than the person”

“A person standing at the intersection of a path that leads to a choice of a dark, loomy forest or a bright open field in a van Gogh style”

“A transparent head with a tree, an ocean wave, cogs, math symbols, a protractor, and science lab equipment inside the brain”

“A hand squeezing the film coming out of a reel of film with a movie projector in the background in a vector style”

“A vapourware drawing of a kid running with a kite in his hand”

“Headphones in the ear of a dark haired man, in a Kandinsky style”

—-

I write a description and it gives me 4 options to choose from. I don’t always get exactly what I had in mind, but if it’s too far off from my expectation then I just refine my wording and try again.
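Incidentally, the same describe-and-pick loop can be run through OpenAI’s API rather than the web page. Here is a rough sketch using the legacy Python SDK; I use the web interface myself, so treat the code details as an assumption:

```python
# A rough sketch of the describe-and-pick workflow via OpenAI's image API,
# using the legacy (pre-1.0) openai Python SDK. The prompt is one of mine
# from the list above; the API key is a placeholder.
import openai

openai.api_key = "sk-..."  # your own API key goes here

response = openai.Image.create(
    prompt="A man walking in a forest path and hugging a tree in a pastel drawing",
    n=4,               # ask for four candidates, just like the web interface
    size="1024x1024",
)

# Print the URL of each generated option so you can pick the best one
for image in response["data"]:
    print(image["url"])
```

If none of the four options fit, you refine the prompt and call it again, exactly as I do on the website.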

Well now Meta AI has come up with Make-A-Video, which creates video clips from descriptions. The examples on the website download onto my phone as images rather than video clips, so you’ll need to visit the site to see them.

This is exciting stuff in the field of video creation, and soon it will become so good that you won’t be able to distinguish it from a real video. The creativity that’s possible is exciting. I think this is just the tip of the iceberg, and soon we’ll see all kinds of fascinating uses for this tool.

Artificial Intelligence is just getting better and better, and in the field of creative arts it is going to really change the landscape of what is possible.

(Also, I recognize that there are some scary deep fake implications, but for now I’m just excited to see what people do creatively with this tool.)