So far, this series has introduced the concept of the Zero Trust Framework (ZTF), how it compares to other Cybersecurity measures, and its advantages and implementation considerations. In this article, the authors discuss multi-factor authentication (MFA) in the ZTF, with a focus on biometrics.

MFA is at the very heart of the ZTF. Because of the advancements that have been made in other forms of authentication technology, it is now possible to eradicate the password entirely. Whether this will ever truly happen depends less on scientific reason and more upon human psychology. But the ultimate goal of this article is to lay the foundation for turning this into an actual reality.

As discussed, there are RSA tokens, smart cards, FOBs, challenge/response questions, etc. However, these can still be tampered with to varying degrees. Therefore, some other modality that is nearly impossible for a malicious threat actor to tamper with must also be deployed. This is where Biometrics comes into play. Three modalities stand out:

  1. Fingerprint Recognition
  2. Iris Recognition
  3. Facial Recognition

A primary advantage of using these three is that they are proven and very durable in nature. Thus, they can fit into just about any type of ZTF environment that has been created. Finally, all three can be deployed in contactless form – an important consideration in a pandemic-sensitive world.

The Longest Known Biometric: Fingerprint Recognition

Fingerprint Recognition is the longest-standing biometric technology: its use dates back centuries, and it eventually became the de facto standard for law enforcement identification.

The details of the fingerprint are broken down into three distinct levels:

  1. Level 1: The pattern images which are present in the fingerprint.
  2. Level 2: The minutiae points of the fingerprint (this is where the bulk of the unique features are actually extracted).
  3. Level 3: The shapes and the images of the ridges, and its associated pores.

It is important to note at this point that most biometric fingerprint systems collect images only at Levels 1 and 2. Only the most powerful fingerprint recognition systems collect Level 3 details, which are used primarily for identification purposes. The Level 1 and 2 features include the following:

  1. Arches: These are the ridges which just flow in one direction, without doubling back, or going backwards. These only comprise about 5% of the features of the fingerprint.
  2. Loops: These are the ridges that double back, running from either left to right or vice versa. There are two distinct types of loops: a) radial loops, which angle downwards; and b) ulnar loops, which angle upwards. These make up 65% of the features within the fingerprint.
  3. Whorls: The ridges in the fingerprint that make a circle around a core, and these comprise 30% of the features in the fingerprint.

The Process of Fingerprint Recognition

Fingerprint recognition follows a distinct methodology which can be broken down into the following steps:

  1. The actual, raw images of the fingerprint are acquired through the sensor technology, and a quality check is performed. The biometric system examines the raw images for extraneous data that could interfere with the acquisition of unique features. If too much of an obstruction is found, the fingerprint device automatically discards that image and prompts the end user to place their finger on the platen again so that another raw image can be collected. Accepted raw images are then sent to the processing unit located within the fingerprint recognition device.
  2. With the raw images accepted by the system, the unique features are extracted and stored as the enrollment template. If fingerprint recognition is being used on a smartphone, a smart card can be utilized to store the enrollment template and can even provide some processing features for the smartphone.
  3. Once the end user wishes to gain physical or logical access, they place their finger onto the sensor of the fingerprint recognition system so that the raw images and unique features can be extracted as described above; this becomes the verification template. The enrollment and verification templates are then compared to one another to determine their degree of similarity or dissimilarity.
  4. If the enrollment and verification templates are deemed to be close in similarity, the end user is then verified and/or identified and is then granted either the physical or logical access they are seeking.
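The four steps above can be sketched in Python. The function names and the pixel-based quality heuristic are illustrative assumptions for this article, not part of any real fingerprint SDK:

```python
def quality_check(image):
    """Step 1: reject raw images with too much obstruction (here: too many dark pixels)."""
    noise = sum(1 for px in image if px < 30)
    return noise / len(image) < 0.2

def extract_features(image):
    """Stand-in for minutiae extraction: quantize pixels into a feature vector."""
    return tuple(px // 16 for px in image)

def enroll(image):
    """Steps 1-2: acquire, quality-check, and store the enrollment template."""
    if not quality_check(image):
        return None  # device discards the image and re-prompts the user
    return extract_features(image)

def verify(image, enrolled, threshold=0.9):
    """Steps 3-4: build a verification template and compare it to the enrolled one."""
    if enrolled is None or not quality_check(image):
        return False
    candidate = extract_features(image)
    matches = sum(a == b for a, b in zip(candidate, enrolled))
    return matches / len(enrolled) >= threshold
```

In a real system, `extract_features` would run the minutiae-extraction and matching algorithms described in the next section.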

The Matching Algorithm

As mentioned, it is the matching algorithm that compares the enrollment template with the verification template. To ascertain the degree of similarity, or closeness, between the two, a certain methodology must be followed:

  1. Whatever data is collected from the raw image of the fingerprint, it must have some sort of commonality with the enrollment biometric template which is already stored in the database. This intersection of data is known as the core, or the maximum curvature in a ridgeline.
  2. Any extraneous objects which could possibly interfere with the unique feature extraction process must be removed before the process of verification/identification can actually occur. Some of these extraneous objects can be the various differences found in the size, pressure, and the rotation angle of the fingerprint, and these can be normalized and removed by the matching algorithm.
  3. In the final stage, the unique features collected from the raw data (which become the verification template) are compared to the enrollment template. At this stage, the matching algorithm does the bulk of its work, based upon three types of correlation:
    • Correlation Based Matching: When two fingerprints are superimposed, differences at the pixel level are calculated. Although it is strived for, perfect alignment of the superimposed fingerprint images is nearly impossible to achieve. A notable disadvantage with this correlation method is that performing these types of calculations can be very intensive from a processing perspective, which can be a great strain on computing resources.
    • Minutiae-Based Matching: In fingerprint recognition, this is the most widely used type of matching algorithm. With this method, the distances and angles between the minutiae are calculated and compared with one another. There is global minutiae matching as well as local minutiae matching; the latter focuses upon a central minutia and its two nearest neighboring minutiae.
    • Ridge Feature Matching: With this matching method, the minutiae of the fingerprint are combined with other finger-based features such as shape and size, the number and position of various singularities, as well as global and local textures. This technique is especially useful if the raw image of the fingerprint is poor in quality, as these extra features can help compensate for that deficit.
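As an illustration, local minutiae-based matching can be sketched as follows. The `(x, y, angle)` representation, the tolerances, and the greedy pairing strategy are simplifying assumptions, not a production algorithm:

```python
import math

def minutiae_similarity(enrolled, candidate, dist_tol=10.0, angle_tol=0.3):
    """Fraction of enrolled minutiae that pair with a candidate minutia
    within a distance and angle tolerance. Each minutia is (x, y, theta)."""
    used = set()
    paired = 0
    for (x1, y1, t1) in candidate:
        for i, (x2, y2, t2) in enumerate(enrolled):
            if i in used:
                continue
            dist = math.hypot(x1 - x2, y1 - y2)
            # smallest angular difference, wrapped into [-pi, pi]
            dtheta = abs((t1 - t2 + math.pi) % (2 * math.pi) - math.pi)
            if dist <= dist_tol and dtheta <= angle_tol:
                used.add(i)
                paired += 1
                break
    return paired / max(len(enrolled), 1)
```

The tolerances are what absorb the small differences in pressure and rotation angle mentioned above: the same finger scores near 1.0 despite slight shifts, while a different finger scores near 0.0.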

It should be noted that with all Biometric modalities, the raw templates that are collected are never stored permanently in the database of the system being used. Rather, they are all converted into a mathematical file, or statistical profile, used for comparison between the enrollment and verification templates. With regard to Fingerprint Recognition, the raw images are converted into a binary mathematical file, such as the example that follows:

000111110001111000011100000

This is what makes Biometric modalities so difficult to exploit: what can a cybercriminal do with a binary mathematical file? It is not the same as stealing a credit card number.
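A toy sketch of such a conversion: quantized feature values become fixed-width bit fields concatenated into the stored template. The 3-bit encoding is an invented example for illustration; real systems use proprietary, far richer encodings:

```python
def to_binary_template(features, bits=3):
    """Quantize each feature value into a fixed-width binary field and
    concatenate the fields into the stored template string."""
    max_val = (1 << bits) - 1  # clamp values to what fits in the field
    return "".join(format(min(f, max_val), "0{}b".format(bits)) for f in features)
```

The resulting bit string is meaningful only to the matching algorithm that produced it, which is the point the article is making about its limited value to an attacker.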

The Most Stable Biometric: Iris Recognition

The iris lies between the pupil and the white of the eye, which is known as the sclera. The color of the iris varies from individual to individual, but there is a commonality to the colors, and these include green, blue, brown, hazel, and, in the most extreme cases, a combination of these colors. The color of the iris is primarily determined by the DNA code inherited from our parents. 

The unique pattern of the iris starts to form during embryonic development, usually in the third month of gestation. The phenotype of the iris is shaped and formed in a process known as chaotic morphogenesis, and the unique structures of the iris are completely formed during the first two years of childhood.

The primary purpose of the iris is to control the diameter and the size of the pupil. The pupil is that part of the eye which allows for light to enter into the eye, which in turn reaches the retina, which is located in the back of the eye. 

Of course, the amount of light which can enter the pupil is a direct function of how much it can expand and contract, which is governed by the muscles of the iris.  The iris is primarily composed of two layers: (1) A fibrovascular tissue known as the stroma, and (2) the sphincter muscles, a group of muscles that connects to the stroma. 

Sphincter muscles are responsible for the contraction of the pupil, and the dilator muscles govern the expansion of the pupil. Observing an iris in the mirror, one notices a radiating pattern, called the trabecular meshwork. When Near Infrared Light (NIR) is flashed onto the iris, many unique features can be observed. These features include ridges, folds, freckles, furrows, arches, crypts, coronas, as well as other patterns which appear in various, discernable fashions. 

Finally, the collarette is the thickest region of the iris and also contains unique features. It divides the iris into two distinct regions: the pupillary zone (which forms the boundary of the pupil) and the ciliary zone (which fills the rest of the iris). The iris is deemed one of the most unique structures of human physiology; in fact, each individual has a different iris structure in each eye, and scientific studies have shown that even identical twins have different iris structures.

The Algorithms – Iris Codes

The idea of using the iris to confirm an individual’s identity dates back to 1936, when an ophthalmologist, Frank Burch, first proposed it. The idea was patented in 1987, and by the mid-nineties, Dr. John Daugman of the University of Cambridge had developed the first mathematical algorithms for it.

Traditional iris recognition technology requires that the end user stand no more than ten inches away from the camera. With the NIR light shined into the iris, various grayscale images are then captured, and then compiled into one primary composite photograph. Software then removes any obstructions from the iris, which can include portions of the pupil, eyelashes, eyelids, and any resulting glare from the iris camera. 

From this composite image, the unique features of the iris (as described above) are then “zoned off” into hundreds of phasors (also known as vectors), whose measurements and amplitude levels are extracted using Gabor wavelet mathematics and then converted into a small binary mathematical file, no greater than 500 bytes. Because of this very small template size, verification of an individual can occur in less than one second. In traditional iris recognition methods, this mathematical file becomes the actual iris biometric template, also known as the “IrisCode”. However, in order to positively verify or identify an individual against the database, the iris-based enrollment and verification templates (the IrisCodes) must first be compared with one another. To accomplish this, the IrisCodes are compared byte by byte, looking for any dissimilarities in the string of binary digits.

In other words, to what extent do the zeroes and ones in the iris-based enrollment and verification templates match up against one another?  The answer is found by using a technique known as “Hamming Distances”, which is still used in modern iris recognition algorithms. 

After these distances are measured, tests of statistical independence are carried out using Boolean operations, such as the Exclusive OR (XOR) operator combined with mask bits. If the two templates fail the test of statistical independence – that is, they agree far more closely than chance would allow – the individual is positively verified or identified; if the test is passed, the person is NOT positively verified or identified.
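The XOR-and-mask comparison can be sketched as follows. The 16-bit codes in the test are toy examples (real IrisCodes are far longer), and the 0.32 threshold is a commonly cited Daugman-style decision point, assumed here for illustration:

```python
def hamming_distance(code_a, code_b, mask_a, mask_b):
    """Fraction of mutually usable bits on which two iris codes disagree.
    Mask bits mark regions obscured by eyelids, lashes, or glare."""
    usable = mask_a & mask_b            # bits valid in both templates
    n_usable = bin(usable).count("1")
    if n_usable == 0:
        return 1.0                      # nothing comparable: treat as a non-match
    disagreements = (code_a ^ code_b) & usable
    return bin(disagreements).count("1") / n_usable

def is_match(code_a, code_b, mask_a, mask_b, threshold=0.32):
    """Same iris if the templates agree too closely to be statistically independent."""
    return hamming_distance(code_a, code_b, mask_a, mask_b) < threshold
```

Two captures of the same iris produce a small Hamming distance; two different irises behave like independent coin flips, with a distance near 0.5 or above.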

Next up: No discussion of biometrics and authentication would be complete without mentioning facial recognition technology. The next article in this series will tackle this controversial biometric.


Keesing Technologies

Keesing Platform forms part of Keesing Technologies
The global market leader in banknote and ID document verification


Ravi Das is a Cybersecurity Consultant and Business Development Specialist. He also does Cybersecurity Consulting through his private practice, RaviDas Tech, Inc. He is also studying for his Certificate In Cybersecurity through the ISC2.


Anthony Figueroa is the CTO & Co-Founder of Rootstrap that has built innovative solutions for Masterclass, Google, and Salesforce that help solve their most pressing business challenges. He loves world-changing technologies, building relationships, and solving complex problems. He's passionate about bridging the gap between business and technical strategy. His mission is to help companies create impactful digital products that delight users.


Patrick Ward is the VP of Marketing for Rootstrap, a custom software development consultancy that digitally transforms companies like Masterclass and Google, and Founder of NanoGlobals, an expert-led platform that helps mid-size tech companies tap into global markets through offshoring and international market expansion. A writer by trade, Patrick's international brand and B2B marketing expertise has been featured in The New York Times, Ad Age, Business Insider, Morning Brew, and Hacker Noon.
