The paper, titled “Automatic Face Aging in Videos via Deep Reinforcement Learning,” was recently accepted at this year’s Conference on Computer Vision and Pattern Recognition (CVPR), one of the most prestigious conferences in AI.
Hollywood, which has a long history of unconvincing aging in movies, would obviously be one possible user of this technology. Khoa Luu, one of the co-authors of the paper, said in an email that he has also received some interest and questions from police departments who think that this technology could help in missing children cases.
Recognizing faces that have been altered by time or by deliberate disguise is a major research concern in AI. Other recent papers include “On Matching Faces with Alterations due to Plastic Surgery and Disguise,” whose authors argue that surgery and disguise are two of the bigger challenges to face recognition. Another paper, from 2017, uses a generative adversarial network (GAN) to estimate what people might look like when older or younger.
Before AI-powered surveillance technology gets too powerful, however, there are legal and ethical concerns. Microsoft’s president, Brad Smith, has called for the U.S. government to start regulating face recognition technology and limiting where it can be deployed (although he hasn’t supported calls by some privacy advocates for a broader moratorium). Meanwhile, Congress and several U.S. states are now considering passing legislation against AI-altered videos and audio.
But the march toward using our faces as identification, or against us in constant surveillance systems, has already started. Face recognition is becoming part of the airport experience, and police departments around the country are trying out similar software. The technology needs more public debate and policy before additional wrinkles like synthetic aging software can be added to the mix.