
AI, Evil, and Ethics

Google is launching Bard, its version of ChatGPT, connected to search and connected live to the internet. Sundar Pichai, CEO of Google and Alphabet, shared yesterday, “An important next step on our AI journey”. In discussing the release of Bard, Sundar said,

We’ll combine external feedback with our own internal testing to make sure Bard’s responses meet a high bar for quality, safety and groundedness in real-world information.

Following the link above led me to this next link:

“In addition to producing responses that humans judge as sensible, interesting, and specific to the context, dialog models should adhere to Responsible AI practices, and avoid making factual statements that are not supported by external information sources.”

I am quite intrigued by what principles Google is using to guide the design and use of Artificial Intelligence. You can go to the links for the expanded description, but here are Google’s Responsible AI practices:

“Objectives for AI applications

We will assess AI applications in view of the following objectives. We believe that AI should:

1. Be socially beneficial.

2. Avoid creating or reinforcing unfair bias.

3. Be built and tested for safety.

4. Be accountable to people.

5. Incorporate privacy design principles.

6. Uphold high standards of scientific excellence.

7. Be made available for uses that accord with these principles.”

But these principles aren’t enough; they are only the list of ‘good’ directions, and so a list of ‘Thou Shalt Nots’ is added below them:

“AI applications we will not pursue

In addition to the above objectives, we will not design or deploy AI in the following application areas:

  1. Technologies that cause or are likely to cause overall harm. Where there is a material risk of harm, we will proceed only where we believe that the benefits substantially outweigh the risks, and will incorporate appropriate safety constraints.

  2. Weapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people.

  3. Technologies that gather or use information for surveillance violating internationally accepted norms.

  4. Technologies whose purpose contravenes widely accepted principles of international law and human rights.

As our experience in this space deepens, this list may evolve.”

I remember when Google used to share its motto “Don’t be evil”.

These principles remind me of that motto. The interesting vibe I get from the principles, and from the ‘Thou Shalt Not’ list of applications Google will not pursue, is this:

‘How can we say we will try to be ethical without: a) mentioning ethics; and b) admitting this is an imperfect science in which we are guaranteed to make mistakes along the way?’

Here is the most obvious statement that these Google principles and guidelines are all about ethics without using the word ethics:

“…we will not design or deploy AI in the following application areas:

  1. Technologies that cause or are likely to cause overall harm. Where there is a material risk of harm, we will proceed only where we believe that the benefits substantially outweigh the risks, and will incorporate appropriate safety constraints.”

You can’t get to “Where there is a material risk of harm, we will proceed only where we believe that the benefits substantially outweigh the risks” without talking about ethics. Who is the ‘we’ in ‘we believe’? Who is deciding which benefits outweigh which risks? Who determines what is ‘substantial’ in the weighing of benefits versus risks? And going back to Principle 2, how is bias being determined or measured?

The cold hard reality is that the best Google, ChatGPT, and all AI and predictive text models can do is, ‘Try to do less evil than good,’ or maybe just, ‘Make it harder to do evil than good.’

The ethics will always trail the technological capabilities of the tool, and guiding principles are a method to catch wrongdoing, not prevent it. With respect to the list of applications AI will not pursue, “As our experience in this space deepens, this list may evolve” is a way of saying, ‘We will learn of ways this tool will be abused, and then add to this list.’

The best the designers of these AI technologies can hope for is to do less evil than good… The big question is: how can they do this ethically when it seems these companies are scared to talk directly about ethics?

Profit and greed

Watching the price of gas top $2.30 a litre and knowing that big oil companies are making billions in profit is maddening. Never waste a good crisis. Now don’t get me wrong, there is nothing wrong with a company taking profits, but I really wonder where the world is headed when company shareholders care more about increasing profit than anything else.

I think too many people confuse democracy and capitalism. They think a free world should include unfettered opportunities for business and profit. But a model where corporate value is tied to how much return shareholders can get for their investment is not designed to make the world better or more free. It’s designed to put more wealth into the hands of the rich, who can afford to buy stocks and shares in companies.

Greed is the underlying driver of such a system. Not democracy, not freedom, greed.

‘This approach isn’t as good for the environment, but it is more profitable.’

‘We could pay our employees a better living wage, but that would hurt our profits.’

‘If we lay people off and increase efficiency, we can meet our targets and get our bonuses.’

There are stories like this one, where a boss decided that every employee would earn at least $70k a year, and the company is still thriving five years later. And there is a local company run by Paul Macdonald that donates 50% of corporate profits to charity.

But these stories are anomalies. They shouldn’t be. There shouldn’t be a reason that the world we live in is driven by profit and greed. I hope to see more people doing what these guys are doing. They are earning a good living, and making the world around them better. That shouldn’t be a novel idea, it should be what drives us.

Five to Eight Percent

When I think about the modern company with shareholders, I can’t help but think that this system is designed to undermine ethical and environmental progress. There are companies laying off workers right now while providing shareholders huge dividends and returns. The system is flawed. These returns help drive the company stock price up at the expense of growing the company ethically… instead of helping workers keep their jobs and keeping their wages fair in comparison to what shareholders receive.

What if companies promised shareholders a maximum of a 5-8% return? Any company profits beyond that would be invested back into the company, towards employees, and/or towards environmental or community initiatives. If this were the case, companies would still have the same commitment to meet shareholder targets, but those targets wouldn’t be based on greed. Instead they would be focussed on doing the most good.

I’m not an economist and don’t know all the ins and outs of how this would work. I don’t know what the magic return percentage should be. But I do know that the current model is based on greed and unsustainable growth. If companies capped shareholder returns at a safe investment amount, and promised to do good with what would have been additional returns, I think there would still be a market for the stocks… and these companies could help make the world a better place.
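To make the idea concrete, here is a rough sketch of how a capped-return split might work. The numbers, the 8% cap, and the function name are all made up for illustration; this is not a real economic model, just the arithmetic of the proposal.

```python
# A made-up sketch of the capped-return idea: shareholders get at most
# a fixed percentage return on their equity, and everything above the
# cap is redirected to employees, community, and environment.

def allocate_profits(profit, shareholder_equity, cap=0.08):
    """Split profits between capped shareholder returns and reinvestment."""
    capped_return = min(profit, shareholder_equity * cap)
    reinvested = profit - capped_return
    return capped_return, reinvested

# A hypothetical company with $100M of shareholder equity and $15M profit:
returns, reinvested = allocate_profits(15_000_000, 100_000_000)
print(f"Shareholders receive: ${returns:,.0f}")     # $8,000,000
print(f"Reinvested in good:   ${reinvested:,.0f}")  # $7,000,000
```

In a lean year where profit falls below the cap, shareholders would simply receive all of it and nothing would be diverted, so the cap only bites when the company is doing well.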

Having choice

There are billions of people in our world who are constrained by not having enough choice.

How many people in the world don’t have a choice of what their next meal will be? How much they will get? How nourishing it is?

How many children must work, and do not have the opportunity to go to school?

How many children do not have a choice of more than one thing to wear? Or are forced to wear something for religious reasons?

How many people pray to an unjust and cruel God, afraid to ask questions for fear of the wrath of their own family or community (and not of God)?

How many people are not given the chance to speak out against their ruling government for fear of imprisonment or death?

Basic human and civil liberties have improved over the past 50 years, and simple metrics like reductions in poverty and in deaths by malnutrition tell us this. But in an ever-shrinking world brought together by the internet, inequalities are far more visible. And the sensitive nature of some of these topics is such that people speaking out can face ridicule, harassment, and might even fear for their lives.

Some people are given less choice about how they get to live their lives: the language they speak, their geography, their ethnicity, their gender, their sexual orientation, their parents, their social and economic status can all in some way limit or privilege the choices a person has. But many are not limited in their ability to see what others have, and even show off, that they do not have. Affluence and privilege are flaunted openly and excessively. This creates an even bigger divide, because the rich and the famous so obviously have choices that others do not. Agency feels relative when comparing those who have much of it with those who do not.

How important is the right to basic survival (food and shelter)?

How important is the right to a good education?

How important are civil rights and freedoms?

These are all vitally important when they are not available, and easy to undervalue when they are readily available. When we are given the freedom and choices others are not, what is our obligation to speak up and to help the less fortunate?

What obligation should the wealthiest people of the world, those with the most choice, have towards those with less choice?

If you earned $1,300.00 a day for 2,000 years, you still wouldn’t be a billionaire. If you spent $36,000.00 a day for 75 years, you still would not have spent a billion dollars. How is it that the number of billionaires in the world is growing? What does this small group of people need this much money for?
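The arithmetic above is easy to verify for yourself; a few lines of Python show just how far short of a billion both scenarios fall:

```python
# Checking the arithmetic on just how big a billion dollars is.
daily_earnings = 1_300 * 365 * 2_000   # $1,300/day, every day, for 2,000 years
daily_spending = 36_000 * 365 * 75     # $36,000/day, every day, for 75 years

print(f"${daily_earnings:,}")  # $949,000,000 — still under a billion
print(f"${daily_spending:,}")  # $985,500,000 — still under a billion
```

Both totals land tens of millions of dollars short of $1,000,000,000.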

Inequalities are so blatantly obvious in our world today. Some are being addressed in amazing ways, but globally, inequalities are being exacerbated. Geography, wealth, culture, and history matter significantly, and they all factor into the choices people have and, in many cases, the choices people don’t have. I think the most powerful choice we can make is to choose what we value, and to devote time, effort, and compassion to those with less choice than us, rather than valuing fortune, fame, and financial affluence. This is a choice we can all make.