
To open a new bank account, one needs two things: money and a face. Puzzled? Don’t be. Welcome to the world of AI-enabled face recognition technology (FRT), which allows an ‘instant and highly secure verification’ process before a new bank account is opened. In the UAE, a leading bank collaborated with the Ministry of Interior’s facial recognition verification system to become the first bank in the country to perform seamless and efficient verification with enhanced security measures against fraud and other forms of identity theft. In practice, this means UAE citizens and residents can open a new account remotely, without visiting a branch, using just their faces.

The FRT algorithm completes the process in two phases: identification and verification. It first detects a person’s face in the image and calculates roughly 68 defining facial landmarks, producing a digital model of the face. In verification (one-to-one matching), two facial images are compared to assess whether they refer to the same person; in identification (one-to-many matching), the digital model is compared against the known faces in a database. Facial recognition systems can identify people in photos, videos, or in real time.
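The two matching modes can be sketched with face-embedding vectors and cosine similarity. This is a minimal illustration only, not any vendor’s actual pipeline; the `verify`/`identify` helpers and the 0.6 threshold are assumptions for demonstration:

```python
from math import sqrt

def cosine_similarity(a, b):
    """Cosine similarity between two face-embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = sqrt(sum(x * x for x in a)) * sqrt(sum(y * y for y in b))
    return dot / norm

def verify(probe, enrolled, threshold=0.6):
    """1:1 verification: does the probe match a single enrolled template?"""
    return cosine_similarity(probe, enrolled) >= threshold

def identify(probe, gallery, threshold=0.6):
    """1:N identification: best-matching identity in the gallery, or None."""
    best_id, best_score = None, threshold
    for identity, embedding in gallery.items():
        score = cosine_similarity(probe, embedding)
        if score >= best_score:
            best_id, best_score = identity, score
    return best_id
```

Real systems produce much higher-dimensional embeddings from a neural network and tune the threshold to trade false accepts against false rejects, but the comparison logic follows this shape.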

But what was wrong with fingerprints?

Technologists feel that fingerprint software has proven a decent security measure over the years; however, with cybercriminals becoming cleverer by the day, it is increasingly deemed ineffective against personal cyber breaches. Fingerprint scanning remains a valid way to verify a user’s identity, but, especially in the COVID-19 scenario, hygiene has become a major issue with shared fingerprint scanners, for fear of spreading unwanted bacteria. Also, while fingerprint patterns are extremely difficult to recreate, the technology may fail if the finger is dirty, wet, or scarred.

Similarly, facial recognition or Face ID has major security benefits: one’s unique facial contours can unlock devices, and it is far more inclusive than fingerprint biometrics, making technology effortless to access. However, it is not without glitches. Constant recording and scanning can make people feel perpetually monitored and analysed. Moreover, lighting, makeup, or even ageing can affect recognition results. On the privacy front, many consider the technology a threat.

Common FRT services

The global contactless biometrics technology market is expected to reach $22.44 billion by 2026, growing at a CAGR of 18.3% from 2019 to 2026. A component of this market, FRT is used in around 160 countries, including the UAE, USA, China, Russia, Brazil, Japan, and Australia, for a variety of purposes across sectors such as banking, law enforcement, airports and border control, human-trafficking checkpoints, healthcare, marketing and retail, and a host of other areas.

What’s in FRT for telcos?

Beyond providing the connectivity backbone for seamless operations, telecom operators can use FRT to verify the identity of people opening new mobile phone accounts, helping them combat fraud such as scam calls and related abuse. Vendors, in turn, can develop the technology for use cases beyond those mentioned above; for instance, it is being tested at street crossings to catch jaywalkers.

Developments in FRT

The Face Recognition Vendor Test (FRVT) conducted by the National Institute of Standards and Technology (NIST) suggests that developers train FRT algorithms on large volumes of images so that they learn to handle both intrinsic variation (the face itself) and extrinsic variation (pose, lighting, occlusion).

According to NIST, a leading algorithm in 2018 made 20 times fewer misses than its predecessor by using a convolutional neural network (CNN), which learns filters that extract image features. The best modern FRT algorithms have reached near-perfect accuracy in human identification; however, most algorithms still fall far short of such results. Besides, as with other biometric identification methods, accuracy rates vary widely across industries.
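The filter idea at the heart of a CNN can be illustrated with a plain 2-D convolution. This is a toy sketch only: in a real CNN the kernel values below would be learned from training data, not hand-written, and would be stacked in many layers.

```python
def convolve2d(image, kernel):
    """Slide a small filter over a 2-D image (valid padding) to get a feature map."""
    kh, kw = len(kernel), len(kernel[0])
    h, w = len(image), len(image[0])
    out = []
    for i in range(h - kh + 1):
        row = []
        for j in range(w - kw + 1):
            # Sum of element-wise products between the filter and the window.
            row.append(sum(image[i + di][j + dj] * kernel[di][dj]
                           for di in range(kh) for dj in range(kw)))
        out.append(row)
    return out

# A hand-coded vertical-edge filter; a trained CNN learns kernels like this itself.
edge_kernel = [[1, 0, -1],
               [1, 0, -1],
               [1, 0, -1]]
```

Applied to a face image, banks of such learned filters respond to edges, textures, and eventually higher-level facial features, which is what makes CNN-based matching so much more accurate than earlier hand-engineered approaches.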

Biases and inequalities remain

There have been many instances of racial biases in the use of facial recognition technology.

Coded Bias, an American documentary film, shows how MIT Media Lab researcher and doctoral candidate Joy Buolamwini had an unpleasant experience when AI facial recognition software failed to register her dark-skinned face. When she held a white mask over her face, however, the computer responded positively, indicating that the algorithm favoured lighter skin.

Moreover, participants at a Las Vegas hackathon recently discovered that a Twitter algorithm was coded with unspoken bias against older people, differently-abled individuals, and Muslims, in addition to dark-skinned people.

According to weforum.org, organizations such as NIST have conducted studies on AI bias, including through its FRVT program, which evaluated 189 software algorithms from 99 developers and reached “alarming conclusions”. Moreover, widely used FRT systems such as Amazon’s Rekognition, Microsoft Azure’s Face, China’s Face++, and India’s FaceX have all shown inaccuracies in detecting certain facial features.

Despite these contradictions, experts say that continued improvements in FRT could eventually help reduce ethnic bias and help secure the lives of law-abiding citizens. This, however, calls for specific regulations that set standards of quality and a defined level of accuracy for use by any authority. In addition, a solid data protection law to prevent misuse of the personal data gathered by these systems, along with accountability mechanisms in cases of misuse, has to be established.

NIST is currently preparing a document titled ‘a proposal for identifying and managing bias in artificial intelligence’ that hopes to find ways to identify and manage biases in AI through public discourse. Once complete, the researchers intend to use the responses to address the issue through collaborative events and awareness campaigns.