The dataset of more than 10 million images had been linked to China's crackdown on ethnic Muslims.
Microsoft has quietly taken down a massive facial recognition database containing more than 10 million images of roughly 100,000 people.
The images were gathered from search engines and published in 2016 as a dataset called MS Celeb, which was used to train facial recognition systems around the world, including by military researchers and Chinese firms such as SenseTime and Megvii, the Financial Times reported Thursday. The dataset -- previously used in an AI project to recognize celebrities -- had been linked to China's efforts to crack down on ethnic minorities in the country.
"The site was intended for academic purposes," Microsoft said in a statement. "It was run by an employee that is no longer with Microsoft and has since been removed."
Facial-recognition technology is commonly used for everyday tasks like unlocking phones and tagging friends on social media, but privacy concerns persist. Advances in artificial intelligence and the proliferation of cameras have made it increasingly easy to watch and track what individuals are doing.
Law enforcement agencies frequently rely on the technology to help with investigations, but the software isn't without its flaws. Software used by the UK's Metropolitan Police was reported earlier this year to produce a high rate of false matches.
Many of the people featured in the dataset were not asked for their consent to be included; their images were scraped from the internet under the Creative Commons license, according to the FT. The Creative Commons license allows academic reuse of photos, but that permission is granted by the image's copyright holder, not the photo's subject.
The Chinese government used facial recognition software to track and control 11 million Uighurs, a largely Muslim minority, in the country, The New York Times reported in April. Tapping an expansive network of surveillance cameras, the technology looked for Uighurs based on their appearance, kept tabs on their movement and put millions in detention camps, the Times reported.
In December, Microsoft called for government regulation that requires facial-recognition technology to be independently tested to ensure accuracy. "Unless we act, we risk waking up five years from now to find that facial recognition services have spread in ways that exacerbate societal issues," Microsoft chief counsel Brad Smith wrote in a blog post.