Post By: David Lukić
While AI has humble roots in the Turing machine, which provided the blueprint for modern-day computers, it has evolved considerably since then. Today it powers millions of applications: forecasting weather, filtering spam messages, predicting search queries, streamlining workflows, responding to voice commands, and identifying people through facial recognition software. Many of the AI technologies in existence today rely on machine-learning algorithms that allow them to categorize data and act on it more effectively.
Despite these great strides, new uses of AI create potential cybersecurity issues. Being aware of the privacy concerns involved can help you establish compliance measures governing its use.
Science Behind Facial Recognition
Facial recognition software identifies or verifies a person from a digital image or video frame. The technology uses biometrics to map facial features, such as the shape of your eyes or the distance from your forehead to your chin. This set of measurements is compared with existing information in a database, such as DMV records. Ultimately, the facial recognition program converts your unique facial features into a mathematical representation, sometimes called a faceprint, to differentiate your face from the many others in the database.
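The matching step described above can be sketched in a few lines of code. This is a toy illustration only: real systems derive embeddings with hundreds of dimensions from a trained neural network, and the vectors, threshold, and function names below are illustrative assumptions, not part of any specific product.

```python
import math

def euclidean_distance(a, b):
    """Distance between two facial feature vectors (embeddings)."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def is_match(probe, reference, threshold=0.6):
    """Declare a match when the two embeddings are closer than the threshold."""
    return euclidean_distance(probe, reference) < threshold

# Toy four-dimensional embeddings; real faceprints are far larger.
stored = [0.12, 0.87, 0.33, 0.54]
probe = [0.10, 0.90, 0.30, 0.55]

print(is_match(probe, stored))  # the toy vectors are close, so this prints True
```

The key point for privacy is that the stored faceprint, not the photo itself, is what gets compared, yet a database of such faceprints is still sensitive personal data.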
Controversy Surrounding AI Company’s Breached Data
Facial recognition has been successfully used to find missing persons and capture fugitives. However, companies that retain large amounts of facial recognition data are prime targets for hackers who may use this data for nefarious purposes, such as:
- Stealing identities
- Creating illegal forms of identification or granting access to unauthorized personnel
- Supporting ransomware campaigns
- Falsely imprisoning people
- Using the images in hacktivism attacks
In one alarming example of the danger of breached facial recognition data, the Manhattan-based tech company Clearview AI had its data breached. The company maintains a database of more than 3 billion images assembled from various social media platforms and compares them with photos of suspects submitted by law enforcement agencies.
In February 2020, the company experienced a data breach after a hacker gained unauthorized access to its client list. The company sent a notification letter to its clients, which included law enforcement agencies across the United States as well as various corporations. The hacker was able to gather information about the clients, including:
- Their names
- The number of user accounts each client created
- The number of searches each client conducted
The company tried to assure its clients that its system was still safe to use and that the hacker had not obtained their actual search histories. Cybersecurity experts have questioned whether the company was honest about the breach's full extent, and its implications remain unknown, since neither clients nor the public know how the breached data may be used. Some believe the attack was retaliation for the company's controversial practice of taking images from social media and making them available to law enforcement agencies, including state-based agencies as well as the FBI and the Department of Homeland Security.
Shortly afterward, the company disclosed that it had experienced a second data security breach, despite having previously stated that security was its top priority.
As you can see, the fallout from a breach of facial recognition data can have a significant impact on a business, making it especially important to establish rules surrounding the use of AI within your organization.
Protections Against Facial Recognition Breaches
As a compliance officer, you are the strongest defense against facial recognition breaches. You can safeguard your organization from possible data breaches by taking the following steps:
- Establish clear policies for the use of AI – Ensure that personal information is not associated with images. Restrict computer usage to business use only. Do not permit employees to use peer-to-peer websites or unapproved software.
- Set up VPNs for remote workers – Secure remote workers' connections so they do not rely on public Wi-Fi, where their transmissions may be intercepted.
- Use ID monitoring or identity scan services – These services monitor various data points, including whether images are recreated and how they are used.
- Restrict access – Set up your computer system so only people who actually need access to facial recognition or other programs have this access.
- Secure all company computers – Use strong passwords, time-out functions, and other safeguards to protect all company computers.
- Update security software – Set all company assets to update automatically when there are new versions or patches.
- Encrypt data – Encrypt all data transmissions, including company email.
- Train employees – Teach employees about privacy and data security.
- Manage portable media use – Have set guidelines pertaining to portable media use, such as flash drives, which can prevent a security risk if used inappropriately.
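The first policy above, keeping personal information out of image records, can be sketched with a simple pseudonymization step: replacing a person's name with a keyed hash before storing image metadata, so a leaked record cannot be linked back to the individual without the secret key. The key handling and field names here are hypothetical, shown only to illustrate the idea.

```python
import hashlib
import hmac
import os

# Assumed setup: in practice, load this key from a secrets manager,
# never generate it ad hoc or hard-code it.
SECRET_KEY = os.urandom(32)

def pseudonymize(name: str) -> str:
    """Return an opaque, repeatable token in place of a person's name."""
    return hmac.new(SECRET_KEY, name.encode("utf-8"), hashlib.sha256).hexdigest()

# A stored metadata record carries the token, not the name.
record = {"image_id": "img_0042", "subject": pseudonymize("Jane Doe")}
print(record)
```

Because the hash is keyed, the same name always maps to the same token within your system (so records can still be grouped), while an attacker who steals only the database cannot reverse the tokens.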
While facial recognition and AI can help improve security, they also pose security risks of their own. As a compliance officer, you will need to carefully consider whether to integrate AI into your organization and what measures to put in place to prevent data breaches. The tips above can help you adopt the newest technology without compromising your cybersecurity.
About the Author: David Lukić is an information privacy, security, and compliance consultant at IDstrong.com. His passion for making cybersecurity accessible and engaging has led him to share the knowledge he has gained.