Tag Archives: morality

Morality police

I have regularly created AI images to go with my blog posts since June 2022. I try not to spend too much time creating them because I’d rather be writing blog posts than image prompts. But sometimes the images I create just don’t convey what I want them to, or they land a bit too far into the uncanny valley and feel unnatural. That happened with my post image 4 days ago, and I used the image anyway because I was pressed for time.

(Look carefully at this image and you’ll see a lot wrong with it.)

I made 5 or 6 attempts to adjust my prompt, but still kept getting bad results, so I made do with the only one that resembled what I wanted.

And then for the past couple of days I had a different challenge. I don’t know if it’s because I was using the version of Bing’s Copilot associated with my school account, but my attempts to create images were blocked.

And:

However, Grok 3, a much less restricted AI, had no problem creating these images for me:

And:

I’m a little bothered by the idea that an AI is limiting which image prompts I can use. The first one is social commentary; the second one, while a ‘hot topic’, certainly isn’t worthy of being restricted.

It begs the question: who are the morality police deciding what we can and cannot use AI to draw? The reality is that there are tools out there with no filters that can create any image you want, no matter how tasteless or inappropriate, and I’m not sure that’s ideal… but neither is being prevented from making images like the ones I requested. What is it about these image requests that makes them inappropriate?

I get that this is a small issue in comparison to what’s happening in the US right now. The morality police are in full force there with one group, the Christian far right, using the influence they have in the White House to impose their morality on others. This is a far greater concern than restrictions to image prompts in AI… but these are both concerns on the same continuum.

Who decides? Why do they get to decide? What are the justifications for their decisions?

It seems to me that the moral decisions being made recently have not been made by the right people asking the right questions… and it concerns me greatly that people are imposing their morals on others in ways that limit our choices and our freedoms.

Who gets to be the morality police? And why?

Morality and Accountability

I saw this question and response on BlueSky Social and it got me thinking:

Why are ethics questions always like:

“is it ethical to steal bread to feed your starving family?”

And not:

“is it ethical to hoard bread when families are starving?”

Existential Comics @existentialcoms

___

Because the first question shifts the blame to the desperate, making their morality the focus, while the second question demands accountability from the powerful. It’s easier to question survival than to challenge greed.

Debayor @debayoorr.bsky.social

___

That last sentence really struck a chord with me: “It’s easier to question survival than to challenge greed.”

We separate morality from accountability in ways that don’t really make sense. To me it’s the difference between a socialist and a capitalist democracy. A socialist democracy infuses accountability with morality, while a capitalist democracy separates the two.

Another way to look at this is with a quote from the comic book Spider-Man: “With great power comes great responsibility.” A socialist democracy takes the quote literally. A capitalist democracy redirects the focus: “Holding great power becomes my responsibility.”

Accountability to others versus accountability to power and self. Morality takes a back seat to greater control and greater success. And that is who we idolize… the rich and famous. The ones with power and influence. Morality doesn’t come into play. Accountability doesn’t come into play.

If you came from another planet and witnessed the accumulation of wealth that happens at the expense of so many who lack wealth, what would you think of the morality of humans? Who would you admire more, the mother or father stealing a loaf of bread to feed their family, or the limo-driven CEOs who earn 1,000% or more than the thousands of employees under them?

____

AI, Evil, and Ethics

Google is launching Bard, its version of ChatGPT, connected to search and live to the internet. Sundar Pichai, CEO of Google and Alphabet, shared yesterday, “An important next step on our AI journey”. In discussing the release of Bard, Sundar said,

We’ll combine external feedback with our own internal testing to make sure Bard’s responses meet a high bar for quality, safety and groundedness in real-world information.

Following the link above led me to this next link:

“In addition to producing responses that humans judge as sensible, interesting, and specific to the context, dialog models should adhere to Responsible AI practices, and avoid making factual statements that are not supported by external information sources.”

I am quite intrigued by what principles Google is using to guide the design and use of Artificial Intelligence. You can go to the links for the expanded description, but here are Google’s Responsible AI practices:

“Objectives for AI applications

We will assess AI applications in view of the following objectives. We believe that AI should:

1. Be socially beneficial.

2. Avoid creating or reinforcing unfair bias.

3. Be built and tested for safety.

4. Be accountable to people.

5. Incorporate privacy design principles.

6. Uphold high standards of scientific excellence.

7. Be made available for uses that accord with these principles.”

But these principles aren’t enough; they are the list of ‘good’ directions, and so there is also a list of ‘Thou Shalt Nots’ added below these principles:

“AI applications we will not pursue

In addition to the above objectives, we will not design or deploy AI in the following application areas:

  1. Technologies that cause or are likely to cause overall harm. Where there is a material risk of harm, we will proceed only where we believe that the benefits substantially outweigh the risks, and will incorporate appropriate safety constraints.

  2. Weapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people.

  3. Technologies that gather or use information for surveillance violating internationally accepted norms.

  4. Technologies whose purpose contravenes widely accepted principles of international law and human rights.

As our experience in this space deepens, this list may evolve.”

I remember when Google used to share its motto “Don’t be evil”.

These principles remind me of the motto. The interesting vibe I get from the principles and the ‘Thou Shalt Not’ list of things the AI will not pursue is this:

‘How can we say we will try to be ethical without: a) mentioning ethics; and b) admitting this is an imperfect science and that we are guaranteed to make mistakes along the way?’

Here is the most obvious statement that these Google principles and guidelines are all about ethics without using the word ethics:

“…we will not design or deploy AI in the following application areas:

  1. Technologies that cause or are likely to cause overall harm. Where there is a material risk of harm, we will proceed only where we believe that the benefits substantially outweigh the risks, and will incorporate appropriate safety constraints.”

You can’t get to “Where there is a material risk of harm, we will proceed only where we believe that the benefits substantially outweigh the risks”… without talking about ethics. Who is the ‘we’ in ‘we believe’? Who is deciding what benefits outweigh what risks? Who determines what is ‘substantial’ in the weighing of benefits versus risks? Going back to Principle 2, how is bias being determined or measured?

The cold, hard reality is that the best Google, ChatGPT, and all AI and predictive text models can do is ‘try to do less evil than good’, or maybe just ‘make it harder to do evil than good.’

The ethics will always trail the technological capabilities of the tool, and guiding principles are a method to catch wrongdoing, not prevent it. With respect to the list of things AI will not pursue, “As our experience in this space deepens, this list may evolve”… is a way of saying, ‘We will learn of ways that this tool will be abused and then add to this list.’

The best possible goal of the designers of these AI technologies will be to do less evil than good… The big question is: how can they do this ethically when it seems these companies are scared to talk directly about ethics?

The paradox of tolerance

This is a brilliant 1-minute TikTok on The Paradox of Tolerance by user @TheHistoryWizard.

Here is the key part: “We must be intolerant of intolerance.” Racists, sexists, etc. are intolerant of people, as opposed to being “intolerant of ideas that are intolerant of people”.

This is an important distinction. These two things are not on an equal footing. Being intolerant of intolerance holds a moral high ground that intolerant, ignorant people do not hold.