The camera never lies. Except, of course, it does – and seemingly more often with each passing day.

Nowadays it is routine to make quick edits to photos on a smartphone – boosting the colours or adjusting the brightness with a couple of taps.

Google’s latest phones, the Pixel 8 and Pixel 8 Pro, go a step further. They use artificial intelligence (AI) to alter how people look in your photographs.

Perhaps you have been in a group photo where one person is looking away from the camera or not smiling. Google’s phones can now trawl your pictures and use machine learning to swap that expression for a smiling one taken from another shot. Google calls the feature “Best Take”.

https://www.bbc.com/news/technology-67170014

The phones can do more still. Using a tool called Magic Editor, you can erase, move or resize elements of a picture – people, buildings and more. The editor relies on a generative AI model, trained on vast numbers of other photographs, to work out what should fill the empty space left behind.

And Magic Editor works not only on pictures taken with the phone, but on any image in your Google Photos library.

‘Icky and creepy’

For some observers this raises fresh questions about how we take photographs.

Google’s new AI technology has been described variously by tech commentators and reviewers as potentially “icky” (The Verge), “creepy” (Tech Radar) and having the potential to “pose serious threats to people’s (already fragile) trust of online content” (Cnet).

Andrew Pearsall, a professional photographer, and senior lecturer in Journalism at the University of South Wales, agreed that AI manipulation held dangers.

“One simple manipulation, even for aesthetic reasons, can lead us down a dark path,” he said.

He said the risks were greater for those who used AI in professional contexts, but there were implications for everyone to consider.

“You’ve got to be very careful about ‘When do you step over the line?’.

“It’s quite worrying now you can take a picture and remove something instantly on your phone. I think we are moving into this realm of a kind of fake world.”

Speaking to the BBC, Google’s Isaac Reynolds, who leads the team developing the camera systems on the firm’s smartphones, said the company takes the ethical consideration of its consumer technology seriously.

He was quick to point out that features like Best Take were not “faking” anything.

This photograph has been edited using Google’s AI Magic Editor to shift the position and size of the people in the foreground

Camera quality and software are key to the company competing with Samsung, Apple and others – and these AI features are seen as a unique selling point.

And all of the reviewers who raised concerns about the tech praised the quality of the camera system’s photos.

“You can finally get that shot where everyone’s how you want them to look – and that’s something you have not been able to do on any smartphone camera, or on any camera, period,” Reynolds said.

“If there was a version [of the photo you’ve taken] where that person was smiling, it will show it to you. But if there was no version where they smiled, yeah, you won’t see that,” he explained.

For Mr Reynolds, the final image becomes a “representation of a moment”. In other words, that specific moment may not have happened but it’s the picture you wanted to happen created from multiple real moments.

‘People don’t want reality’

Professor Rafal Mantiuk, an expert in graphics and displays at the University of Cambridge, said it was important to remember that the use of AI in smartphones was not to make the photographs look like real life.

“People don’t want to capture reality,” he said. “They want to capture beautiful images. The whole image processing pipeline in smartphones is meant to produce good-looking images – not real ones.”

The physical limitations of smartphones mean they rely on machine learning to “fill in” information that doesn’t exist in the photo.

This helps improve zoom and low-light photographs, and – in the case of Google’s Magic Editor feature – can either add elements that were never there or swap in elements from other photos, such as replacing a frown with a smile.
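The article does not describe the models Google uses, but the basic idea of “filling in” pixels the camera never captured can be shown with a toy example. The sketch below uses simple neighbour averaging – a crude stand-in for the learned generative models real phone pipelines use – to estimate masked pixels from their surroundings; the function name and approach are illustrative, not Google’s.

```python
def inpaint(grid, missing, rounds=200):
    """Fill the `missing` cells of a 2-D greyscale grid by repeatedly
    replacing each missing cell with the average of its 4-neighbours.

    Toy illustration only: real smartphone inpainting uses trained
    generative models, not this diffusion-style averaging.
    """
    h, w = len(grid), len(grid[0])
    out = [row[:] for row in grid]  # copy so the input is untouched
    miss = set(missing)
    for _ in range(rounds):
        for y, x in miss:
            nbrs = [out[ny][nx]
                    for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1))
                    if 0 <= ny < h and 0 <= nx < w]
            out[y][x] = sum(nbrs) / len(nbrs)
    return out
```

Repeated averaging converges towards a smooth interpolation of the surroundings – enough to hide a small gap, but nothing like the semantically aware fills a learned model produces.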

Manipulation of photographs is not new – it’s as old as the art form itself. But thanks to artificial intelligence, it has never been easier to alter what a photo shows.

Earlier this year Samsung came in for criticism over the way it used deep-learning algorithms to improve the quality of photos of the Moon taken with its smartphones. Tests found that no matter how poor the starting image, the phone always produced a usable one.

In other words – your Moon photo was not necessarily a photo of the Moon you were looking at.

The company acknowledged the criticism, saying it was working to “reduce any potential confusion that may occur between the act of taking a picture of the real Moon and an image of the Moon”.

On Google’s new tech, Reynolds says the company adds metadata to its photos – the digital footprint of an image – using an industry standard to flag when AI is used.
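Reynolds does not name the standard, but the IPTC photo-metadata vocabulary does define a “digital source type” value for images composited with the help of a trained algorithm. Assuming – and this is only an assumption – that such a flag ends up embedded in a file’s XMP metadata, a crude check is to scan the raw bytes for it. This hypothetical sketch is not a spec-compliant metadata parser:

```python
from pathlib import Path

# IPTC's digital-source-type vocabulary includes this value for media
# composited using a trained (generative) model. Whether Google writes
# exactly this value is an assumption made for illustration.
AI_COMPOSITE_MARKER = b"compositeWithTrainedAlgorithmicMedia"

def looks_ai_edited(path: str) -> bool:
    """Heuristically report whether a file's bytes contain the
    AI-composite marker. A real tool would parse the XMP/EXIF
    structures properly rather than scan raw bytes."""
    return AI_COMPOSITE_MARKER in Path(path).read_bytes()
```

A spec-compliant checker would parse the XMP packet (or a C2PA provenance manifest) rather than grep the file, but the sketch shows where such a flag would live.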

“It is a question that we talk about internally. And we’ve talked at length. Because we’ve been working on these things for years. It’s a conversation, and we listen to what our users are saying,” he says.

Google is clearly confident users will agree – the AI features of its new phones are at the heart of its advertising campaign.

So, is there a line Google would not cross when it comes to image manipulation?

Mr Reynolds said the debate around the use of artificial intelligence was too nuanced to simply point to a line in the sand and declare everything beyond it a step too far.

“As you get deeper into building features, you start to realise that a line is sort of an oversimplification of what ends up being a very tricky feature-by-feature decision,” he says.

Even as these new technologies raise ethical considerations about what is and what isn’t reality, Professor Mantiuk said we must also consider the limitations of our own eyes.

He said: “The fact that we see sharp colourful images is because our brain can reconstruct information and infer even missing information.

“So, you may complain cameras do ‘fake stuff’, but the human brain actually does the same thing in a different way.”

SOURCE: BBC
