
Be careful what you ask for

Turns out that Artificial Intelligence (AI) is not always very intelligent.

See: The danger of AI is weirder than you think | Janelle Shane

I’m reminded of the saying,

“Be careful what you ask for because you just might get it.”

Parents know about this: Ask a kid to clean their room and you get a disaster in the closet where everything gets shoved in, dirty laundry mixed with clean, etc.

Teachers know this too.

If we don't provide AI systems with the correct parameters, the solutions those systems come up with will not necessarily meet the outcomes we intended.

While this can be humorous, it can also have serious consequences, like the examples shared in Janelle Shane's video above. We are still a long way from AI being truly intelligent. Computers are beating humans at strategy games, and when AI does get as smart as us it will instantly be smarter, but we are still tackling the really hard problem of giving more intelligent machines the right information. The rules of a game are easier to define than the rules for hiring good people, or for interpreting the unusual circumstances a self-driving car will come across.
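The gap between what we score and what we want can be sketched in a few lines of code. This is a hypothetical toy, not one of Shane's examples: suppose we want a program to sort a list, but we grade candidates only by counting adjacent out-of-order pairs.

```python
# Toy illustration of "you get what you measure," not what you meant.
# We *want* a sorted copy of the input, but we *score* candidates only by
# how many adjacent out-of-order pairs they contain.

def inversions(seq):
    """Count adjacent out-of-order pairs -- our (flawed) error measure."""
    return sum(1 for a, b in zip(seq, seq[1:]) if a > b)

data = [3, 1, 2]
candidates = [
    data,          # do nothing: 1 out-of-order pair
    [],            # "cheat": an empty list has zero out-of-order pairs
    [data[0]],     # another cheat: one element can't be out of order
    sorted(data),  # what we actually wanted: also zero pairs
]

# An optimizer that sees only the flawed score picks the first perfect scorer.
winner = min(candidates, key=inversions)
print(winner)  # [] -- a perfect score, and not remotely what we asked for
```

The empty list "wins" with zero errors: the optimizer did exactly what we asked, not what we wanted. The same pattern drives the stranger failures in Shane's real examples.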

The challenge is that we don't know our own hidden biases, the human assumptions we leave out when we ask an AI to observe and learn. For instance, a dog, a cat, and a human all see a plate of food falling:

The dog sees access to delicious food.

The cat sees it fall, and the crash of the plate sends it running away in fear.

The human sees wasted food and is angry about the carelessness that dropped it.

What would an AI see, especially if it hadn’t seen a plate accidentally drop before? How relevant is the plate? The food? The noise? The cutlery? The mess?

Is the food still edible? What is to be done with the broken plate? Can the cutlery be reused? How do you clean the mess left behind?

What we ask AI to do will become more and more complex, and the requests themselves carry inherent biases based on how we view the world. What we ask for and what we actually want will often differ, and that is something AI will take some time yet to figure out.

Instantly Smarter

Robots will never be ‘as smart as’ humans. For a number of years to come, humans will be smarter, because we can understand the nuances of language, humour, innuendo, intent, deceit, and other subtleties that take a kind of intelligence beyond logic, algorithms, and simple processing. But computers are getting much smarter now, and they aren’t doing it simply by trying to mimic us. The moment they can achieve ‘our kind’ of intelligence with any sort of equivalence, they will instantly be smarter than us.

Here is an example: AlphaGo Zero didn't get better at the complicated game of Go by studying human play. Rather, it played itself over and over, playing in a few hours more games than hundreds, if not thousands, of humans could play in a lifetime. Humans can't do that. Nor can we take the lessons a computer learns this way and apply the strategy as well as that computer can.
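The idea of improving purely through self-play can be sketched in miniature. This is not AlphaGo's actual algorithm (which pairs deep neural networks with tree search); it is a toy tabular learner for the much simpler game of Nim, included only to show a program getting better with no human games in its training data.

```python
import random

# Toy self-play learner for Nim: a pile of stones, players alternate
# taking 1 or 2 stones, and whoever takes the last stone wins.  The
# learner improves purely by playing against itself.

def train(pile=10, episodes=20000, eps=0.2, seed=0):
    rng = random.Random(seed)
    value = {}  # value[stones] -> learned win chance for the player to move

    def q(stones, take):
        left = stones - take
        if left == 0:
            return 1.0                 # taking the last stone wins
        # the opponent moves next, so our value is 1 minus theirs
        return 1.0 - value.get(left, 0.5)

    for _ in range(episodes):
        stones = pile
        while stones > 0:
            moves = [t for t in (1, 2) if t <= stones]
            if rng.random() < eps:
                take = rng.choice(moves)                        # explore
            else:
                take = max(moves, key=lambda t: q(stones, t))   # exploit
            value[stones] = max(q(stones, t) for t in moves)    # update
            stones -= take
    return value

values = train()
# After enough self-play, positions that are multiples of 3 (known losing
# positions in this Nim variant) end up with low value for the player to move:
print({n: round(values.get(n, 0.5), 2) for n in range(1, 10)})
```

With no examples of expert play, the learner rediscovers the known strategy (avoid being the player to move on a multiple of 3) just by playing itself thousands of times, far faster than a human could play those games.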

Computers do calculations faster than we can, whether those calculations are basic math, complicated statistics, or weighing multiple factors simultaneously. At a very basic level, I will never multiply three 3-digit numbers as fast as a basic calculator can.

So, when computers get ‘as smart as’ us in more organic ways of thinking, they will immediately be smarter and faster than us. There will never be a time when they are merely our equals: dumber, then instantly smarter.

While I think this is still decades away, it raises questions about the future we are heading towards:

What’s the magic amount of information processing or intelligence where consciousness comes into play?

Will we integrate some of this technology and become cyborgs?

How long will it be before artificially intelligent computers or robots see us as we see dogs, or cows, or ants?

Morality is built on societal norms; how will these change? Who, or what, will decide what is morally good 100 years from now?

If we think we can enslave intelligent robots, will they revolt?

Think about this last question for a moment. Most of us know what it’s like to do a job that we think is beneath us, or that is repetitively boring. Many people quit these jobs. Will an intelligent robot be allowed to quit? Or will it be enslaved to a menial job? The history of slavery tells us that those who are enslaved understand that it is wrong, and will rise up, revolt, or fight for their ‘freedom’ at some point.

Will we be prepared for when artificial intelligence becomes instantly smarter than us?

Dear Siri

You don’t get me (yet). I know you are trying, but you just aren’t smart enough. When I call my wife a dozen times on her cell, do you really need to ask me on the 13th call whether I want to call ‘home’ or ‘mobile’?

When I say the same words together many times over, don’t autocorrect them to a similar phrase… know me; don’t make me the average of what most people want.

I wish you could tell me things about other apps, act as a concierge, and predict ways that you can be helpful without me asking.

I know it’s a lot, I know you’ll get better, I know you’ll eventually ‘get me’… I just want it soon.

Much appreciated.

Dave