Digital Morality: A New Moral Dilemma

Is this real? Photo by Danka & Peter on Unsplash

“Believe nothing you hear, and only one half that you see.” — “The System of Dr. Tarr and Prof. Fether” by Edgar Allan Poe

Digital Morality is something you’re going to hear about a lot in the future. It may not go by my little coined phrase, but the concept will be the same:

What we’re discussing here isn’t exactly new, but the ramifications will become so important that they could ruin marriages, destroy a country or even bring the global economy to its knees.

So what’s the issue? We’ve been airbrushing, “Photoshopping”, auto tuning, glamor shotting, CGI’ing … pretty much cheating our entire lives with photos, music and video. What’s the big deal now?

There is a big difference between knowing something is “fake” and something being passed off as The Real Deal.

When you look at a magazine cover or glamor shot, you know you’re being fooled. Before digital wizardry they used camera tricks, special lighting, makeup and airbrushing to make the models look better. There is no question of morality there. The viewer knows what they are looking at isn’t real.

… and if they don’t? They ought to.

When Cher’s “Believe” came out, the mass populace heard an effect called “auto tune”: a digital process that could make voices sound pitch-bendy and robotically cool. What a lot of people didn’t initially realize is that auto tune wasn’t just for vocal effects. It could literally “fix” poor singing by automatically “tuning” the voice to the correct notes the artist didn’t (or couldn’t) hit. At first this was expensive and easy to pick out, but over time it got cheaper and better. Next thing you know, you can get an auto tune application for your phone and fool your friends. Still, most people are aware that auto tune exists, and if you watch your favorite pop artist live, you may understand why auto tune is in every major recording studio.

Popular Auto Tune at work …
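The core trick of pitch correction is simple enough to sketch. Detect the frequency the singer actually produced, snap it to the nearest note on the equal-tempered scale, and shift the audio by the ratio between the two. Here is a minimal Python sketch of just that snapping arithmetic (not a real pitch detector, and certainly not Antares’ actual algorithm):

```python
import math

A4 = 440.0  # reference pitch in Hz

def nearest_note_hz(f_hz: float) -> float:
    """Snap a detected frequency to the nearest equal-tempered semitone."""
    # MIDI note numbering: 69 is A4; each semitone is a factor of 2**(1/12)
    midi = 69 + 12 * math.log2(f_hz / A4)
    return A4 * 2 ** ((round(midi) - 69) / 12)

def correction_ratio(f_hz: float) -> float:
    """Pitch-shift ratio a corrector would apply to land on the note."""
    return nearest_note_hz(f_hz) / f_hz

# A singer aiming for A4 (440 Hz) but hitting 452 Hz is nearly half a
# semitone sharp; the corrector shifts the voice down about 2.7%.
print(nearest_note_hz(452.0))   # 440.0
print(correction_ratio(452.0))  # ~0.973
```

Apply the correction instantly and the snap sounds jarring and robotic (the Cher effect); apply it gently and nobody in the audience can tell.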

What about movies? CGI has gotten better over the years, but watch any modern superhero movie and you can usually pick out what is real and what is computerized hocus pocus (in some cases, it doesn’t even look like they tried for realism). Regardless, you know what you’re watching isn’t real; whether your eyes bought into the illusion or not, you are fully aware that what you are watching didn’t really happen.

So digital nonsense has been around for a long time. Why am I just now calling its morality into question?

Back in 2016, Adobe showed off something called VoCo. Given 20 minutes of audio of a person speaking, it could convincingly generate that person’s voice reading any text aloud. Baidu’s Deep Voice can now do the same with 3.7 seconds of audio.

Using this technology, you could fool pretty much anyone over the phone. (Faked phone calls are a cornerstone of the 9/11 conspiracy theory, which claims the calls from the hijacked planes weren’t real.)

This is definitely consideration for #DigitalMorality.

Last year, something known as DeepFakes appeared. I called it out on my podcast, Passenger Seat Radio, as having dire potential to cause mayhem and chaos (or worse), saying it could easily be the new number one threat to society. This digital process lets almost anyone with ordinary off-the-shelf computer hardware (no Industrial Light & Magic workstations required) and free software (not $50,000-per-module special effects suites) take hundreds of images of a person’s face, feed them into the application, and have that face superimposed realistically onto a real video. It was initially used to make “deepfake” porn videos of famous celebrity women, some of which were frighteningly convincing.
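For the technically curious: the widely reported design behind the original DeepFakes application is a pair of autoencoders that share one encoder. The shared encoder learns to compress any face; each person gets their own decoder that learns to redraw that specific person. Feed person A’s frames through the encoder and out person B’s decoder, and B’s face appears with A’s pose and expression. Here is a heavily simplified PyTorch sketch; the layer sizes, data and training loop are illustrative stand-ins, not the real application’s architecture:

```python
import torch
import torch.nn as nn

def make_decoder() -> nn.Module:
    # Turns a 256-dim latent code back into a 64x64 RGB face crop
    return nn.Sequential(nn.Linear(256, 1024), nn.ReLU(),
                         nn.Linear(1024, 64 * 64 * 3), nn.Sigmoid())

# One shared encoder, one decoder per identity
encoder = nn.Sequential(nn.Flatten(),
                        nn.Linear(64 * 64 * 3, 1024), nn.ReLU(),
                        nn.Linear(1024, 256))
decoder_a = make_decoder()  # learns to draw person A
decoder_b = make_decoder()  # learns to draw person B

opt = torch.optim.Adam([*encoder.parameters(),
                        *decoder_a.parameters(),
                        *decoder_b.parameters()], lr=1e-4)
loss_fn = nn.MSELoss()

# Stand-in batches of aligned 64x64 face crops; real training uses the
# "hundreds of images" of each person mentioned above
faces_a = torch.rand(8, 3, 64, 64)
faces_b = torch.rand(8, 3, 64, 64)

for _ in range(10):  # real training runs for many thousands of steps
    recon_a = decoder_a(encoder(faces_a)).view(-1, 3, 64, 64)
    recon_b = decoder_b(encoder(faces_b)).view(-1, 3, 64, 64)
    loss = loss_fn(recon_a, faces_a) + loss_fn(recon_b, faces_b)
    opt.zero_grad()
    loss.backward()
    opt.step()

# The swap: encode A's frame, decode it with B's decoder
swapped = decoder_b(encoder(faces_a)).view(-1, 3, 64, 64)
```

Nothing in that sketch needs special hardware, and that is exactly the point.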

While DeepFakes may seem benign as a Super Threat (unless you’re an attractive actress with a porn-star body double), it doesn’t take much imagination to see what could happen if someone of importance were portrayed in a less than desirable video. Imagine the President of the United States “deepfaked” into a video declaring war on North Korea.

These fake videos already exist, and it is only the sheer infancy of the technology that keeps them from being taken for the real thing. Unfortunately, in our era of social media and fake news, there aren’t many people verifying sources and checking things out before sharing and blasting the videos they find in their daily feeds.

#DigitalMorality …? You bet your ass it is.

At the end of the day, when someone is dragged through the media, the eventual outcome is rarely remembered; only that there was a scandal.

So now we can’t believe what we hear … or the videos that we see.

Surely “a picture is still worth 1000 words”, no?

Digital tomfoolery has been around since the first version of Paint appeared on Windows. Since then, the tools available to home users have grown exponentially more powerful. Magic that used to be reserved for digital wizards (background dropping, object removal, full-blown digital beautification) can be had in editing software that costs under $50, or nothing at all.

In Search Of … Missing Persons …

Half the pictures on Tinder or Snapchat are probably altered in one way or another. But we’ve come to almost expect people to lie about how they look, especially in a social media situation. There is almost an assumption that people mess with their photos to appear … better.

But that process requires deliberate action. A conscious decision to deceive. That is part of the #DigitalMorality question in itself, and it is obvious.

All this has been leading up to what I consider a new threat to the morality of the pixel world.

Artificial intelligence.

Recently, smartphones have gotten powerful enough to employ “machine learning” or “artificial intelligence” to help take better photos. Despite having lesser lenses and/or sensors, these smartphones are able to take better pictures by using special software that performs “post-processing” digital magic.

So what? Your smartphone can take better pictures — where is the #DigitalMorality clause in that?

Currently, this A.I. technology primarily fixes lighting issues using something Google calls “Night Sight”.

Doesn’t even look like the same picture, does it? You might even say that the one taken on the right is “daytime” — while the other is “evening”…
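Google has described Night Sight as capturing a burst of short exposures, aligning and merging them to cut noise, then applying learned tone and color adjustments. The exact pipeline is Google’s, but a crude numpy approximation of the merge-then-brighten idea looks like this (none of the alignment or machine-learned parts are modeled):

```python
import numpy as np

def crude_night_mode(frames: np.ndarray, gamma: float = 0.45) -> np.ndarray:
    """Rough stand-in for a burst low-light pipeline.

    frames: (N, H, W, 3) stack of short exposures, values in [0, 1].
    Averaging N aligned frames cuts random sensor noise by roughly
    sqrt(N); the gamma curve then lifts the shadows like a tone map.
    """
    merged = frames.mean(axis=0)           # noise reduction via averaging
    return np.clip(merged ** gamma, 0, 1)  # brighten the dark regions

# Simulate ten noisy exposures of the same dim scene
rng = np.random.default_rng(0)
scene = np.full((4, 4, 3), 0.05)  # a nearly black "true" scene
burst = np.clip(scene + rng.normal(0, 0.02, (10, 4, 4, 3)), 0, 1)

print(burst[0].mean())                 # ~0.05: what one frame captured
print(crude_night_mode(burst).mean())  # ~0.26: visibly brightened output
```

Notice that even the toy version splits in two: the averaging only works with light the sensor actually captured, while the tone curve decides how the scene ought to look.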

Now, the naysayers will say, “But Shane … that’s not cheating. You can do the same thing with a tripod and a longer shutter speed; all you’re doing is letting in more light.” But that’s not correct.

If you take a photograph in a manner that exposes the sensor (or film) to more light, the camera is merely seeing “better” because there is more light to see; the image is still a direct record of the scene.

What Google’s A.I. is doing is altering the image. It is using “intelligence” to change the image into something that it was not originally. Comparing this to a longer shutter cycle is not accurate.

A closer approximation would be loading the digital photo into Paint Shop Pro and manually “fixing” it with plugins or your own adjustments to contrast, brightness and/or color. Would anyone argue that the photo hasn’t been “altered” at that point? Of course not.
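To make that comparison concrete, here is the same kind of manual “fixing” in a few lines of Python with the Pillow library. The file name and enhancement factors are placeholders:

```python
from PIL import Image, ImageEnhance

photo = Image.open("vacation.jpg")                   # placeholder file
photo = ImageEnhance.Brightness(photo).enhance(1.3)  # 30% brighter
photo = ImageEnhance.Contrast(photo).enhance(1.2)    # punchier contrast
photo = ImageEnhance.Color(photo).enhance(1.1)       # slightly richer color
photo.save("vacation_fixed.jpg")
```

Five lines, deliberately typed by a human. That deliberateness is the only thing separating it from what the phone now does on its own.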

Same thing for the A.I. in the Pixel 3 smartphone. You didn’t open Photoshop — but your photo was still altered.

What’s next? Google’s A.I. decides you don’t need that tree in your picture. Or you turn on a “Just One” filter and anyone in the photo other than the primary subject is removed by A.I. Maybe A.I. can make you skinnier, or make subtle changes to your face to enhance your attractive features. A lot of this stuff exists now, but you have to explicitly tell your software/camera/phone to do it.

At some point, no photo taken will show the actual subject matter. It will have been “improved”, “enhanced” or otherwise “altered” … all in the name of better pictures, and possibly without your consent or knowledge.

Fellow Medium author Bennat Berger published a story discussing Facebook and its use of A.I. to “better” the pictures of its users. He said (and I even highlighted):

Such technology may improve these photos in the eyes of many, but it’s essentially robbing them of the honesty that has historically made photography such a cherished medium.

Aside from the obvious deception of sending people photos of yourself that are outright lies (you don’t look that good, you’re not that thin, your face isn’t that clear … your wife used to be in that picture), it breaches #DigitalMorality.

Where does it end, this disguising, lying and altering of what we see into what we believe it should be?

Remember when photos, videos and recordings were legal and valid forms of evidence in court? It used to be that eyewitness testimony was considered rock solid at trial, but it has been called into question time and time again.

Then photographic (and later video) evidence was considered the gold standard.

Can it still be considered such? … and if not — what else can be?

I’m crazy? Move along? Nothing to see here? Well … maybe there used to be, but nobody knows for sure. It was taken with a Pixel 6 camera with “Just One” turned on.

I write, blog, record and review anything that interests me — including humanity, parenting, gizmos & gadgets, video games and media.
