Has Google Safeguarded Its New AI Photo Features or Not?
I’m having a hard time reconciling this assertion by Jay Peters for The Verge that Google didn’t build any safeguards into its new AI camera features…
There’s nothing inherently wrong with manipulating your own photos. People have done it for a very long time. But Google’s tools put powerful photo manipulation features — the kinds of edits that were previously only available with some Photoshop knowledge and hours of work — into everyone’s hands and encourage them to be used on a wide scale, without any particular guardrails or consideration for what that might mean. Suddenly, almost any photo you take can be instantly turned into a fake.
…with these, well, safeguards described just a few paragraphs later:
Using Best Take does not add metadata to photos, Marconi says, but there are some restrictions on the feature that could prevent it from being used nefariously. Best Take does not generate new facial expressions, and it “uses an on-device face detection algorithm to match up a face across six photos taken within seconds of each other,” according to Marconi. It also can’t pull expressions from photos outside that timeframe; Marconi says the source images for Best Take “requires metadata that shows they were taken within a 10-second window.”
I’m as wary as the next person about this acceleration into AI-manipulated content (okay, probably a little less wary than the next person, since I think the bumpy road will smooth out and people will adapt), but it feels unfair to characterize Google as having given no forethought to the potential downsides of its technology. Their direct quotes demonstrate otherwise.
Related: This discussion between David Pierce and Allison Johnson on The Vergecast, about how making these kinds of AI edits to your own photos will make you feel, has stuck with me:
Continue listening to hear more about the guardrails that Google has put into its new Magic Editor.