Tag Archives: learning

Empowering students

There is an element of control that needs to be given up by teachers if they are truly empowering students. There has to be a willingness to accept a potential outcome that is less than ideal… An understanding that students won’t always hit the high standard you expect.

This isn’t about lowering standards or expectations, in fact, if you are empowering students you need to make your high expectations clear. Rather, this is the realization that students bite off more than they can chew (or rather can do), and then they end up scrambling to do less and still produce a good product or presentation. It’s an acceptance that a student’s vision doesn’t match yours but their outcome is still good, or (and this is the tough part for teachers) good enough. It’s about mistakes being honoured as learning opportunities rather than as something to penalize.

Empowering students doesn’t happen with outcomes that are exactly what the teacher envisioned and expected. Outcomes will vary. Results will be less predictable. But the learning will be rich, authentic, and far more meaningful and memorable for the students… as long as they feel empowered, and are given the space to have autonomy, lead, and learn in ways that they choose.

And while that won’t always end with results that the teacher envisioned or expected, it will always end with learners feeling like they owned their own learning. Shouldn’t that be the essence of a great learning experience?

——

Related: Teacher as Compass

Edu-tainment and the future

It’s interesting how the idea that ‘learning can be fun’ has been translated into the gamification of education, which in turn has devolved into making games that are essentially about practice pages that are ‘fun and interactive’.

I think AI has the ability to change this. Learning can be less about practice questions and more about deeper learning. Instead of playing a game with progressively harder, very predictable levels, the learning could authentically go where a student is interested. Two students could start the same entertaining journey but end up learning and achieving vastly different outcomes. Not just higher math skills, but practical learning. A puzzle about figuring out the wiring of some gadget could lead to learning basic electronics, and that could lead to learning about electrical engineering.

The more common approach in machine-assisted learning is to have specific goals and be responsive to the learner’s ability. The more advanced approach is to have general objectives and be responsive to the learner’s interests.

It’s not just the outcomes of these that are drastically different, it’s also the entire approach to what it means to say, ‘Learning can be fun’.

Knowing/doing gap

It has become abundantly clear that the old adage that “Knowing is half the battle” is a pile of BS that should never have become a quotable quote. People know that smoking is bad for them; they know that a second serving of dessert is not a good idea; they know that they should work out. The only time knowing matters is after a life-threatening experience, a kick in the gut that says ‘get your shit together or else.’

Prior to that, knowing is maybe 5% of the battle. Doing is the real threshold to get more than halfway to the goal you are hoping for.

Doing something to change, no matter how small, is how you get to the goals you want to achieve… to accomplish positive change.

What you do could be less than 1% of what you hope to accomplish. It could be a small step in the right direction. It could even be an initial step in the wrong direction (but with the right intention). What matters is action.

Doing is more than half the battle.

Change doesn’t happen because you know it should, it happens because you took action.

It’s already here!

Just yesterday morning I wrote:

Robots will be smarter, stronger, and faster than humans not after years of programming, but simply after the suggestion that the robot try something new. Where do I think this is going, and how soon will we see it? I think Arthur C. Clarke was right… the most daring prophecies seem laughably conservative.

Then last night I found this post by Zain Khan on LinkedIn:

🚨 BREAKING: OpenAI just made intelligent robots a reality

It’s called Figure 01 and it’s built by OpenAI and robotics company Figure:

  • It’s powered by an AI model built by OpenAI
  • It can hear and speak naturally
  • It can understand commands, plan, and carry out physical actions

Watch the video below to see how realistic its speech and movement abilities are. The ability to handle objects so delicately is stunning.

Intelligent robots aren’t a decade away. They’re going to be here any day now.

This video, shared in the post, is mind-blowingly impressive!

This is just the beginning… we are moving exponentially fast into a future that is hard to imagine. Last week I would have guessed we were 5-10 years away from this, and it’s already here! Where will we really be with AI robotics 5 years from now?

(Whatever you just guessed is probably laughably conservative.)

The most daring prophecies

In the early 1950s, Arthur C. Clarke said,

“If we have learned one thing from the history of invention and discovery, it is that, in the long run — and often in the short one — the most daring prophecies seem laughably conservative.”

As humans we don’t understand exponential growth. The well known wheat or rice on a chessboard problem is a perfect example:

If a chessboard were to have wheat placed upon each square such that one grain were placed on the first square, two on the second, four on the third, and so on (doubling the number of grains on each subsequent square), how many grains of wheat would be on the chessboard at the finish?

The answer: 2⁶⁴ − 1, or 18,446,744,073,709,551,615… which is over 2,000 times the annual world production of wheat.
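The doubling is easy to verify in a few lines of code (a quick sketch; the per-square grain counts come straight from the puzzle):

```python
# Each of the 64 squares holds double the grains of the previous one:
# 1, 2, 4, 8, ... so the total is a geometric series summing to 2^64 - 1.
total = sum(2**square for square in range(64))

assert total == 2**64 - 1
print(f"{total:,}")  # 18,446,744,073,709,551,615
```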

All this to say that we are ill-prepared to understand how quickly AI and robotics are going to change our world.

1. Robots are being trained to interact with the world through verbal commands. They used to be trained to do specific tasks, like ‘find one of a set of items in a bin and pick it up’, and while sorting, a robot could only handle the specific items it was trained on. Now, there are robots that sense and interpret the world around them.

“The chatbot can discuss the items it sees—but also manipulate them. When WIRED suggests Chen ask it to grab a piece of fruit, the arm reaches down, gently grasps the apple, and then moves it to another bin nearby.

This hands-on chatbot is a step toward giving robots the kind of general and flexible capabilities exhibited by programs like ChatGPT. There is hope that AI could finally fix the long-standing difficulty of programming robots and having them do more than a narrow set of chores.”

The article goes on to say,

“The model has also shown it can learn to control similar hardware not in its training data. With further training, this might even mean that the same general model could operate a humanoid robot.”

2. Robot learning is becoming more generalized: ‘Eureka! NVIDIA Research Breakthrough Puts New Spin on Robot Learning’.

“A new AI agent developed by NVIDIA Research that can teach robots complex skills has trained a robotic hand to perform rapid pen-spinning tricks — for the first time as well as a human can…

Eureka has also taught robots to open drawers and cabinets, toss and catch balls, and manipulate scissors, among other tasks.

The Eureka research, published today, includes a paper and the project’s AI algorithms, which developers can experiment with using NVIDIA Isaac Gym, a physics simulation reference application for reinforcement learning research. Isaac Gym is built on NVIDIA Omniverse, a development platform for building 3D tools and applications based on the OpenUSD framework. Eureka itself is powered by the GPT-4 large language model.”

3. Put these ideas together, then fast-forward the training exponentially. We have robots that understand what we are asking them, and that are trained and positively reinforced in a virtual physics lab. These robots practice a new task before actually doing it… not a few times, or even a few thousand times, but millions of simulated attempts in seconds. Just like the chess bots that learned by playing themselves millions of times, we will have robots that, when asked to do a task, ‘rehearse’ it over and over again in a simulator, then perform the task for the first time as if they had already done it perfectly thousands of times.

In our brains, we think about learning a new task as a clunky, slow experience. Learning takes time. When a robot can think and interact in our world while simultaneously rehearsing new tasks millions of times virtually in the blink of an eye, we will see them leap forward in capabilities at a rate that will be hard to comprehend.
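The ‘rehearse in simulation first’ idea can be sketched in toy form. Everything here is made up for illustration: the task, the scoring function, and the hypothetical optimum of 0.7 stand in for a real physics simulation. The point is only that huge numbers of cheap simulated attempts let the first real attempt start from the best behaviour found so far:

```python
import random

def simulated_score(candidate):
    # Toy stand-in for a physics simulation: score a candidate
    # behaviour by how close it lands to an (unknown) optimum of 0.7.
    return 1.0 - abs(candidate - 0.7)

def rehearse(num_trials, rng):
    # Run many cheap simulated attempts and keep the best one found,
    # the way a robot might rehearse a task before trying it for real.
    best, best_score = None, float("-inf")
    for _ in range(num_trials):
        candidate = rng.random()
        score = simulated_score(candidate)
        if score > best_score:
            best, best_score = candidate, score
    return best, best_score

rng = random.Random(42)
best, score = rehearse(1_000_000, rng)
# After a million rehearsals, the first "real" attempt already
# starts from a behaviour tuned very close to the optimum.
print(f"best candidate = {best:.4f}, simulated score = {score:.4f}")
```

Real systems like Eureka use reinforcement learning rather than this random search, but the economics are the same: simulated attempts are nearly free, so the real-world attempt begins where the rehearsal left off.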

Robots will be smarter, stronger, and faster than humans not after years of programming, but simply after the suggestion that the robot try something new. Where do I think this is going, and how soon will we see it? I think Arthur C. Clarke was right…

…the most daring prophecies seem laughably conservative.

Habits and goals

I know setting goals works. I have a few fitness goals and they inspire me to work harder. But I’ve never really been someone who sets a lot of goals and I am not overly goal driven. I can remember teaching students SMART Goals and watching them create goals that I just knew they wouldn’t hit… SMART or not.

Habits are what get you to your goals. Habits are the repeated patterns that lead you to improvements. Goals are just lofty ideas until you build systems and habits that move you towards them.

A career goal doesn’t happen if you haven’t built solid habits and routines to consistently do your current job well.

A target weight is a goal you want to hit. A habit of regular exercise and a habit of eating well are what get you there.

Goals are important, but it’s the habits you create that get you to those goals.

Ego in the way

This is one of the most enjoyable graduation addresses that I’ve ever heard. Rick Rigsby’s “Lessons from a third grade dropout” shares some wonderful insights with a delivery that leaves you wanting more.

Two ‘truth bombs’ that he delivers are the following quotes:

“Ego is the anesthesia that deadens the pain of stupidity.”

And,

“Pride is the burden of a foolish person.”

I see it more and more, people’s egos and their pride get in the way of many things ranging from being a lifelong learner, to being a decent human being. Rick got to experience the wisdom of a 3rd grade dropout who was one of the smartest people he knew. That’s a gift, an opportunity for insight.

We often see social media posts where someone mistreats or underestimates a person of lower social status, and then learns the error of their ways when this person is smarter or more helpful than expected… or that ‘lowly’ person outwits the more affluent or pompous person.

This ‘underdog as hero’ message is prevalent in movies too.

The other message in these stories is ‘don’t be a jerk’.

Despite all these social media and movie ‘lessons’ we see shared, there seems to be no shortage of egotistical and pride-filled people in the world. In fact, many people think you need this to be great. Where would some of the most noted (and notorious) athletes, movie stars, and politicians be without their inflated egos? You don’t get attention when you are selfless. Maybe you can, and maybe if more people did, this would trend more, and the big egos would get less attention. Maybe.

_____

But there’s a lot more to this speech than just those two quotes. There are also valuable lessons on failure.

“Wisdom will come to you from the unlikeliest of sources. A lot of times from failure. When you hit rock-bottom, remember this: while you’re struggling, rock-bottom can also be a great foundation on which to build, and on which to grow.

I’m not worried that you’ll be successful, I’m worried that you won’t fail from time to time. The person that gets up off the canvas and keeps growing, that’s the person that will continue to grow their influence.”

Truth.

Watch the video and enjoy this inspirational speech.

Content Free Learning (in a world of AI)

Yesterday, when I took a look at how it’s easier to make school work Google proof than it is to make school work AI proof, I said:

How do we bolster creativity and productivity with AND without the use of Artificial Intelligence?

This got me thinking about using AI effectively, and that led me to thinking about ‘content free’ learning. Before I go further, I’d like to define that term. By ‘content free’ I do NOT mean that there is no content. Rather, what I mean is learning regardless of content. That is to say, it doesn’t matter if it’s Math, English, Social Studies, Science, or any other subject, the learning is the same (or at least similar). So keeping with the Artificial Intelligence theme, here are some questions we can ask to promote creativity and productivity in any AI infused classroom or lesson:

“What questions should we ask ourselves before we ask AI?”
“What’s a better question to ask the AI?”
“How would you improve on this response?”
“What would your prompt be to create an image for this story?”
“How could we get to a more desired response faster?”
“What biases do you notice?”
“Who is our audience, and how do we let the AI know this?”
“How do we make these results more engaging for the audience?”
“If you had to argue against this AI, what are 3 points you or your partner would start with?”

In a Math class, solving a word problem, you could ask AI, “What are the ‘knowns and unknowns’ in the question?”

In a Social Studies class, looking at a historical event, you could ask AI, “What else was happening in the world during this event?” Or you could have it create narratives from different perspectives, before having a debate from the different perspectives.

In each of these cases, there can be discussion about the AI responses which are what students are developing and thinking about… and learning about. The subject matter can be vastly different but the students are asked to think metacognitively about the questions and tasks you give AI, or to do the same with the results an AI produces.

A great example of this is the Foundations of Inquiry courses we offer at Inquiry Hub. Students do projects on any topic of interest, and they are assessed on their learning regardless of the content. See the chart of Curricular Competencies and Content in the course description. As described in the Goals and Rationale:

At its heart inquiry is a process of metacognition. The purpose of this course is to bring this metacognition to the forefront AS the learning and have students demonstrate their ability to identify the various forms of inquiry – across domains and disciplines and the stages of inquiry as they move through them, experience failure and stuckness at each level. Foundations of Inquiry 10 recognizes that competence in an area of study requires factual knowledge organized around conceptual frameworks to facilitate knowledge retrieval and application. Classroom activities are designed to develop understanding through in-depth study both within and outside the required curriculum.

This delves into the idea of learning and failure, which I’ve spoken about a lot before. In each of the examples above, we are asking students challenging questions. We are asking them to think critically about what we are asking AI; to think about how we can improve on AI responses; or to think about how to use AI responses as a launching point to new questions and directions. The use of AI isn’t to ‘get to’ the answer but rather to get to a challenging place that stumps students and forces them to think critically about the questions and responses they get from AI.

And sometimes the activity will be too easy, other times too hard, but even those become learning opportunities… content free learning opportunities.

Google proof vs AI proof

I remember the fear mongering when Google revolutionized search. “Students are just going to Google their answers, they aren’t going to think for themselves.” Then came the EDU-gurus proclaiming, “If students can Google the answers to your assignments, then the assignments are the problem! You need to Google proof what you are asking students to do!”

In reality this was a good thing. It provoked a lot of reworking of assignments and promoted more critical thinking, first from teachers, then from students. It is possible to be creative and ask a question that requires thoughtful, insightful responses that are not easily found on Google, or that would return so few useful results that it would be easy to tell whether a student created the work themselves or copied it from the internet.

That isn’t the case for Artificial Intelligence. AI is different. I can think of a question that would get no useful search results on Google yet be completely answerable using AI. Unless you are watching students do the work with pen and paper in front of you, you really don’t know if the work is AI assisted. So what next?

Ultimately the answer is two-fold:

How do we bolster creativity and productivity with AND without the use of Artificial Intelligence?

This isn’t a ‘make it Google proof’ kind of question. It’s more challenging than that.

I got to hear John Cohn, recently retired from MIT, speak yesterday. There are two things he said that stuck with me. The first was a loose quote of a Business Review article: ‘AI won’t take over people, but people with AI are going to take over people.’

This is insightful. The reality is that the people who are going to be successful and influential in the future are those that understand how to use AI well. So, we would be doing students a disservice to not bring AI into the classroom.

The other thing he said that really struck me was, “If you approach AI with fear, good things won’t happen, and the bad things still will.”

We can’t police its use, but we can guide students to use it appropriately… and effectively. I really like this AI Acceptable Use Scale shared by Cari Wilson:

This is one way to embrace AI rather than fear and avoid it in classrooms. Again I ask:

How do we bolster creativity and productivity with AND without the use of Artificial Intelligence?

One way is to question the value of homework. Maybe it’s time to revisit our expectations of what is done at home. Give students work that bolsters creativity at home, and keep the real work of school at school. But whether or not homework is something that changes, what we do need to change is how we think about embracing AI in schools, and how we help students navigate its appropriate, effective, and even ethical use. If we don’t, then we really aren’t preparing our kids for today’s world, much less the future.

We aren’t going to AI proof schoolwork.

We Live in a Tetraverse

Whenever you see a movie like the Matrix, data sets, information, and all storage are shown in cubes.

Even beyond the movies, it is clear that we represent the world on three axes: X, Y, and Z.

In the words of Joe Truss, these three axes are ‘necessary but not sufficient’ to really understand the world we live in. We Live in a Tetraverse:

This is the first video in a series called ‘Book of Codes’, which over time will help you discover for yourself the power you inherently have as a natural geometer. Join Joe and Dave Truss as they discuss the building blocks of a tetraverse… where the foundations of life, and everything in our universe, are built around the unique geometric shapes formed by stacked and interlocking triangles.

The Book of Codes will awaken the natural geometer within you.

 

Some people spend their weekends watching sports… while the Super Bowl was on, I was putting the finishing touches on this video. That’s not a slag on anyone who enjoys watching sports, it’s just not my thing. What I do enjoy is nerding out and thinking about how I can use geometry to make sense of challenging ideas in mathematics and physics that are actually way beyond my capabilities to calculate and understand without the geometry. Thanks to Joe, I have almost weekly meetings on Sunday mornings to learn from him and to think deeply about the hidden geometry behind our universe and all life within it.

We record most of our meetings. This is hopefully the first of many we will share. While there are more videos to come, don’t expect them too soon… I only really get a chance to work on them on the weekends and editing video takes a lot of time. Still, I hope you enjoy this video, and as always, feedback is appreciated.