In Part 1 of this series, Ravi Das provided an overview of retinal recognition, which he describes as the "mother" of biometrics, and the process behind it. He now digs deeper into the advantages and disadvantages of retinal recognition.
Just like the other biometric modalities, retinal recognition has its own set of pros and cons. They can be described as follows:
- The retina is deemed to be very stable and hardly ever changes over an individual's lifetime. In this regard, it is considered to be the most reliable biometric technology available in the marketplace today.
- Given the small file size of retinal recognition templates, the system can confirm an individual's identity very quickly; verification can take place in less than two seconds.
- Because of the high number of unique data points the retina possesses, there is almost no chance that a confirmed identity belongs to anyone other than that person. In other words, the statistical probability of an impostor being falsely accepted by a retinal recognition system is almost non-existent.
- Since the retina is located within the eye itself, it is not exposed to the harshness of the external environment in the way that hand geometry recognition and fingerprint recognition are.
- On the other hand, there is a very negative attitude towards retinal recognition amongst the public, at least here in the United States. For example, because of the sheer intrusiveness involved, many people perceive that it poses a serious health risk to the eye, although no such cases have ever been documented.
- There is a very strong unease about having to place the eye into a receptacle and having an infrared light beam shone directly onto it.
- When compared to all the other biometric modalities, retinal recognition demands the highest levels of cooperation and motivation from the end user in order to capture high-quality raw images. As a result, the ability-to-verify metric can be as low as 85% (for other modalities it is as high as 99% or even 100%).
- Because of the attention required of the end user, it can take numerous attempts and a long time to obtain the required results. If the process is not carried out correctly, this can lead to a very high false rejection rate (which occurs when a legitimate individual is improperly denied access to either physical or logical resources by the retinal recognition system).
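The error metrics referred to in the list above (the false rejection rate, the false acceptance rate, and the ability-to-verify rate) can be illustrated with a minimal sketch. The `error_rates` function and the attempt-log format below are hypothetical, invented purely for illustration; they are not the interface of any real retinal recognition system:

```python
# Hypothetical sketch: deriving FRR, FAR, and the ability-to-verify (ATV)
# rate from a log of scan attempts. All names and data are illustrative.

def error_rates(attempts):
    """attempts: list of dicts with keys:
       'genuine'  - True if the user is a legitimate enrollee
       'accepted' - True if the system granted access
       'captured' - True if a usable raw image was acquired
    """
    genuine = [a for a in attempts if a["genuine"]]
    impostor = [a for a in attempts if not a["genuine"]]

    # FRR: fraction of legitimate users who were wrongly denied
    frr = sum(1 for a in genuine if not a["accepted"]) / len(genuine)
    # FAR: fraction of impostors who were wrongly accepted
    far = sum(1 for a in impostor if a["accepted"]) / len(impostor)
    # ATV: fraction of all attempts where a usable image was captured
    atv = sum(1 for a in attempts if a["captured"]) / len(attempts)
    return frr, far, atv
```

For example, a log of ten genuine attempts with one rejection and ten impostor attempts with no acceptances would yield an FRR of 10% and a FAR of 0% — the pattern the article describes: false acceptances are almost non-existent, while false rejections driven by poor image capture are the dominant failure mode.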
Overall, the effectiveness and viability of retinal recognition can be examined against seven criteria which are also used by the biometrics industry here in the United States:
- Universality: Each and every individual, unless afflicted with blindness or some other serious disease of the eye, has a retina that can be scanned.
- Uniqueness: Except for the DNA strand, it is the retina which possesses the greatest number of differing data points in the entire human anatomy.
- Permanence: Unless an individual has diabetes, glaucoma, high blood pressure, or cardiac disease, the retina will hardly ever change in structure or physiological makeup during the person's lifetime.
- Collectability: Even if the individual is fully cooperative, it can still be difficult to collect a high-quality raw image of the retina, because the scan area is so small compared to the other biometric modalities.
- Performance: Because of its stability, the retina delivers extremely high levels of accuracy. In fact, under optimal conditions, the error rate can be as low as 1 in 1,000,000.
- Acceptability: As mentioned earlier, the acceptance of retinal recognition by the general public is extremely low.
- Resistance to circumvention: Because of its stability and richness, it is almost impossible to spoof a retinal recognition system.
Taking all of this into consideration, the market applications are extremely limited for retinal recognition. Thus, retinal recognition is used primarily in physical access entry applications, where extremely high levels of security are required.
This includes military installations and bases, nuclear facilities, and laboratories where very high-caliber research and development is taking place.
Government-based security applications, to a certain degree, also make use of retinal recognition. One of the best examples of this is the State of Illinois, where this modality was used to identify welfare recipients in order to reduce fraud (such fraud occurred when an individual used several aliases to receive multiple payments).
Overall, it is expected that the use of iris recognition will grow rapidly over a short period of time, but retinal recognition will continue to occupy only the very limited pool of applications just described, and will not grow beyond that point.