Tag Archives: Google

New learning paradigm

I heard something in a meeting recently that I haven’t heard in a while. It was in a meeting with some online educational leaders across the province, and the topic of ChatGPT and AI came up. In an online course, with limited opportunities for supervised work or tests, it’s really challenging to know whether the work is being done by the student, a parent, a tutor, or an Artificial Intelligence tool. That’s when a conversation came up that I’ve heard before. It was a bit of a soapbox diatribe: “If an assignment can be done by ChatGPT, then maybe the problem is in the assignment.”

That’s almost the exact line we started to hear about 15 years ago about Google… I might even have said it: “If you can just Google the answer to the question, then how good is the question?” Back then, this prompted some good discussions about assessment and what we valued in learning. But that line is far more relevant to Google than it is to AI.

I can easily create a question that would be hard to Google. It is significantly harder to do the same with LLMs, Large Language Models like ChatGPT. A Google search can only find critical thinking challenges that someone else has already shared. However, I can ask ChatGPT to create answers to almost anything. Furthermore, I can ask it to create things like pros and cons lists, then put those in point form, then do a rough draft of an essay, then improve on the essay. I can even ask it to use the vocabulary of a Grade 9 student. I can also give it a writing sample and ask it to write the essay in the same style.

LLMs are not just a better Google; they are a paradigm shift. If we are trying to have conversations about how to catch cheaters, students using ChatGPT to do their work, we are stuck in the old paradigm. That said, I openly admit this is a much bigger problem in online learning, where we don’t see and work closely with students in front of us. And we are heading into an era where there will be no way to verify what is student work and what is not, so it’s time to recognize the paradigm shift and start asking ourselves new questions…

The biggest questions we need to ask ourselves are: How can we teach students to effectively use AI to help them learn? And what assignments can we create that ask them to use AI effectively to help them develop and share ideas and new learning?

Back when some teachers were saying, “Wikipedia is not a valid website to use as research and to cite,” many more progressive educators were saying, “Wikipedia is a great place to start your research,” and, “Make sure you include the date you quoted the Wikipedia page, because the page changes over time.” The new paradigm will see some teachers making students write essays in class on paper or on internet-less computers, and other teachers sending students to ChatGPT and helping them understand how to write better prompts.

That’s the difference between old and new paradigm thinking and learning. The transition is going to be messy. Mistakes are going to be made, both by students and teachers. Where I’m excited is in thinking about how learning experiences are going to change. The thing about a paradigm shift is that it’s not just a slow transition but a leap into new territory. The learning experiences of the future will not be the same, and we can either try to hold on to the past, or we can get excited about the possibilities of the future.

A quarter century of search

I went to Google yesterday and the Google Doodle above the search box was celebrating 25 years of search.

I instantly thought of this comic that I’ve shared before both on my blog and in presentations to educators.

It is likely that no one under 30 will remember life without Google… Life without asking the internet questions and getting good answers. I remember my oldest at 5 years old asking me a question. I responded that I didn’t know and she walked over to our computer and turned it on.

“What are you doing?”

“Asking Google.”

It’s part of everyday life. It has been for a quarter century.

And now search is getting even better. AI is making search more intuitive and giving us answers to questions, not just links to websites that have the answers. It makes me wonder, what will the experience be like in another 5, 10, or 25 years? I’m excited to find out!


My life before Google

I shared this in a post a few years ago:

I grew up in a pre-Google era, but I had something better… I had my dad. It seemed that no matter what question I might ask, my dad had, and still has, a comprehensive answer. My only hesitation to ask him a question was that I needed to be sure I was interested enough to get his extensive and detailed answer.

He didn’t just have verbal answers, he had books, thousands of books, and files, and files, and still more files. In Grade 11 I had to do a project on tidal power, and so I asked dad if he had any information for me. He did, and after moving some file boxes around he found it. No easy task when there were layers of boxes to reorganize… and not a box, or a file was labelled!!!

The tidal energy file was 2-3 centimetres thick and I blew away my class and teacher with the research I shared. In a pre-Google era it would have taken 15-20 hours searching library bookshelves and microfiche to collect research, newspaper clippings, and magazine articles that I had at my fingertips.

This was the life most people lived before Google:

I always had the information I wanted, I just had to ask my dad.

When search engines become answer engines

One of the most alarming things I’ve read and heard about since I started mentioning ChatGPT and the use of predictive AI tools is that the model for profitability of content creators is going to have to change. With Google and Bing both embedding AI-enhanced ‘answers’ as part of their search results, this is going to have a dramatic impact on the website visits (click-throughs and advertising views) that content creators count on.

Here is a link to a very long but interesting essay by Alberto Romero on the subject: Google vs Microsoft: Microsoft’s New Bing Is a Paradigm Change for Search and the Browser

This is an excerpt from the section titled, ‘With great power comes great responsibility’,

“Giving value back to creators
One of the core business aspects of search is the reciprocal relationship between the owners of websites (content creators and publishers) and the owners of the search engines (e.g. Google and Microsoft). The relationship is based on what Nadella refers to as “fair use.” Website owners provide search engines with content and the engines give back in form of traffic (or maybe revenue, etc.). Also, search engine owners run ads to extract some profit from the service while keeping it free for the user (a business model that Google popularized and on top of which it amassed a fortune).”

and a little further down,

“…Sridhar Ramaswamy, ex-Google SVP and founder of Neeva (a direct competitor of Bing and Google Search), says that “as search engines become answer engines, referral traffic will drop! It’s happened before: Google featured snippets caused this on 10-20% of queries in the past.”

So, getting a direct response to your search query already has a track record of reducing referral traffic, and now search is going to get significantly better at answering questions without needing to click through to a website.

What is human (as opposed to Artificial Intelligence) created content going to look like in the future when search answers your questions that would normally require you to visit a website? What happens to creator and publisher profitability when search engines become answer engines?

AI, Evil, and Ethics

Google is launching Bard, its version of ChatGPT, connected to search and connected live to the internet. Sundar Pichai, CEO of Google and Alphabet, shared yesterday, “An important next step on our AI journey“. In discussing the release of Bard, Sundar said,

We’ll combine external feedback with our own internal testing to make sure Bard’s responses meet a high bar for quality, safety and groundedness in real-world information.

Following the link above led me to this next link:

“In addition to producing responses that humans judge as sensible, interesting, and specific to the context, dialog models should adhere to Responsible AI practices, and avoid making factual statements that are not supported by external information sources.”

I am quite intrigued by what principles Google is using to guide the design and use of Artificial Intelligence. You can go to the links for the expanded description, but here are Google’s Responsible AI practices:

“Objectives for AI applications

We will assess AI applications in view of the following objectives. We believe that AI should:

1. Be socially beneficial.

2. Avoid creating or reinforcing unfair bias.

3. Be built and tested for safety.

4. Be accountable to people.

5. Incorporate privacy design principles.

6. Uphold high standards of scientific excellence.

7. Be made available for uses that accord with these principles.”

But these principles aren’t enough; they are the list of ‘good’ directions, and so there are also the ‘Thou Shalt Nots’ added below these principles:

“AI applications we will not pursue

In addition to the above objectives, we will not design or deploy AI in the following application areas:

  1. Technologies that cause or are likely to cause overall harm. Where there is a material risk of harm, we will proceed only where we believe that the benefits substantially outweigh the risks, and will incorporate appropriate safety constraints.

  2. Weapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people.

  3. Technologies that gather or use information for surveillance violating internationally accepted norms.

  4. Technologies whose purpose contravenes widely accepted principles of international law and human rights.

As our experience in this space deepens, this list may evolve.”

I remember when Google used to share its motto “Don’t be evil”.

These principles remind me of the motto. The interesting vibe I get from the principles and the ‘Thou Shalt Not’ list of things the AI will not pursue is this:

‘How can we say we will try to be ethical without: a) mentioning ethics; and b) admitting this is an imperfect science in which we are guaranteed to make mistakes along the way?’

Here is the most obvious statement that these Google principles and guidelines are all about ethics without using the word ethics:

“…we will not design or deploy AI in the following application areas:

  1. Technologies that cause or are likely to cause overall harm. Where there is a material risk of harm, we will proceed only where we believe that the benefits substantially outweigh the risks, and will incorporate appropriate safety constraints.”

You can’t get to, “Where there is a material risk of harm, we will proceed only where we believe that the benefits substantially outweigh the risks“… without talking about ethics. Who is the ‘we’ in ‘we believe’? Who is deciding which benefits outweigh which risks? Who determines what is ‘substantial’ in the weighing of benefits versus risks? Going back to Principle 2, how is bias being determined or measured?

The cold hard reality is that the best Google, ChatGPT, and all AI and predictive text models can do is ‘try to do less evil than good’, or maybe just ‘make it harder to do evil than good.’

The ethics will always trail the technological capabilities of the tool, and guiding principles are a method to catch wrongdoing, not prevent it. With respect to the list of things AI will not pursue, “As our experience in this space deepens, this list may evolve“ is a way of saying, ‘We will learn of ways that this tool will be abused and then add to this list.’

The best possible goal of the designers of these AI technologies will be to do less evil than good… The big question is: how can they do this ethically when it seems these companies are scared to talk directly about ethics?

Beyond Google

About 17 or 18 years ago, when my oldest was 5 or 6, she asked me a question and I responded, “I don’t know.” So, she walked into our office, went to our desktop computer, and asked Google. She didn’t think twice about it; she just went to find the answer on the search engine that became a verb: ‘Have a question? Just Google it.’

But shortly after that I started to learn that for some things my network was better than Google.

Now social media sites are the new Google. Articles like this one, ‘Many Gen Zers don’t use Google. Here’s why they prefer to search on TikTok and Instagram,’ explain that for many searches the younger generation is bypassing Google and going directly to TikTok, Instagram, YouTube, and even Pinterest to look for things that older folks would Google.

Looking for makeup tips? TikTok or Instagram. Looking for help changing your car’s turn signal light? YouTube. There are many reasons to trust your network, or people using appropriate tags that you search on social media, more than some website that has maximized its SEO and finds itself at the top of a Google search… with little reference to what you are actually searching for.

It’s now an era where a Google search is just one of many search tools that might be used to answer questions you might have. Social networks and platforms are taking us on a journey beyond Google.

Simple Wikipedia

I think Wikipedia is a GREAT resource! 

To me it should be the ‘first place to go’ for students… The start of research, before digging deeper and finding other sites.

BUT…

The language can be a bit tough for ELL or younger students! 
So, check this out:

Simple English Wikipedia: Wikipedias are places where people work together to write encyclopedias in different languages. We use Simple English words and grammar here. The Simple English Wikipedia is for everyone! That includes children and adults who are learning English.

AND for younger students:

Wiki for Kids delivers search results from the “Simple Edition” of Wikipedia and is powered by Google SafeSearch for added safety.