A cybersecurity company claims that a Clearview AI server was publicly exposed, temporarily leaking the source code associated with its facial recognition technology. This is a worrying situation, since the start-up's database contains several billion photos of social network users that could be exploited by malicious actors.

Trouble continues for Clearview AI! After being accused of scraping photos of social media users to feed its facial recognition system, the New York start-up has seen its source code and some of its private keys exposed publicly following a security lapse. TechCrunch reported on Friday, April 17, 2020, that the firm SpiderSilk noticed that one of Clearview AI's servers was exposed and that its configuration allowed anyone to register an account.


A DANGEROUS DATABASE

With billions of images, the Clearview AI database can recognize large numbers of people from a single photograph. The start-up has repeatedly insisted that its software is accessible only to the authorities; however, many organizations and companies have already used it. The leak of the source code behind this technology raises fears that it could fall into the wrong hands.

Security breach exposed Clearview AI source code and app data

The exposed server contained, in addition to the source code associated with the company's database, multiple copies of its application for devices running Windows, macOS, Android, and iOS. SpiderSilk took screenshots of the latter, which was recently removed from the App Store for violating Apple's terms of use. The cybersecurity firm also spotted Clearview AI's authentication tokens for the Slack collaboration platform, which would technically have allowed them to read internal exchanges had they wished to do so.


CLEARVIEW AI CONDUCTS AN INTERNAL AUDIT

Some 70,000 videos filmed in a residential building, reportedly located in Manhattan (New York) but not formally identified at this time, were also stored in the start-up's cloud. Asked about them by TechCrunch, the founder of Clearview AI assured that these "were filmed with the explicit authorization of the place's owner within the framework of the prototyping of a CCTV".

Moreover, Hoan Ton-That also maintained that the security lapse his company has just suffered "exposed no biometric or personal data" and indicated that a "full internal audit is underway" to determine its origin. As a reminder, the attorney general of the US state of Vermont has opened an investigation to determine whether Clearview AI violated certain data protection rules. The social networks from which the start-up likely collected users' photos, notably Facebook, Twitter, and YouTube, have urged the firm to put an end to these practices.