Once upon a time, your fingerprints were considered one of the most secure ways of proving your identity. Nowadays, they are among the first things to get faked, as American researchers recently demonstrated with a new “universal fingerprint” that tricks the fingerprint sensors on smartphones.
The DeepMasterPrint was shown to spoof the most common fingerprint sensors three out of every four times. These lower-end security sensors are found in the earlier, most widely used models of mobile devices.
It is much less effective against the more advanced fingerprint recognition sensors. However, such high-end biometric sensor systems are much rarer, far more expensive, and generally cannot fit on small smart devices.
In recent years, studies and incidents alike have shown that fingerprint recognition systems are vulnerable to “dictionary attacks.” These are brute-force hacking attempts that throw millions or even billions of guesses, whether passwords or fingerprints, at a security system until one finally sticks.
A dictionary attack employs MasterPrints, each of which bears an unnerving resemblance to many other fingerprints. That resemblance gives a MasterPrint a higher chance of being mistaken for the fingerprint of the authorized owner.
MasterPrints can be real fingerprints or artificial prints made by mixing and matching patterns that are fairly common. A 2017 study by researchers at New York University (NYU) reported creating artificial MasterPrints with feature-level details that could spoof fingerprint sensors.
Another group of NYU researchers worked alongside their counterparts at the University of Michigan (UMich) to create DeepMasterPrints. The successor to MasterPrints, this new universal fingerprint is convincing at the finer image level of detail rather than just the feature level, making it a more powerful means of bypassing fingerprint security systems.
DeepMasterPrints are produced by a method called Latent Variable Evolution. Using this method, the researchers trained a generative adversarial network (GAN) to produce increasingly convincing fake fingerprints.
A GAN is made up of two machine-learning models. The “generator” was trained on images of real fingerprints and produced fake fingerprints to try to trick the other model, the “discriminator,” which learns to tell real prints from fakes.
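To give a rough sense of how that works, here is a minimal, hypothetical GAN training loop written in PyTorch. The network sizes, image dimensions, and hyperparameters are illustrative assumptions, not details from the study, which used a far more elaborate network trained on real fingerprint scans.

```python
# Minimal GAN sketch (PyTorch assumed): a generator learns to produce
# fingerprint-like images while a discriminator learns to tell them
# from real scans. All sizes and hyperparameters are illustrative.
import torch
import torch.nn as nn

LATENT_DIM = 100          # size of the random noise vector fed to the generator
IMG_PIXELS = 64 * 64      # a small grayscale fingerprint patch, flattened

generator = nn.Sequential(
    nn.Linear(LATENT_DIM, 256), nn.ReLU(),
    nn.Linear(256, IMG_PIXELS), nn.Tanh(),   # output pixels in [-1, 1]
)
discriminator = nn.Sequential(
    nn.Linear(IMG_PIXELS, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),         # probability that the input is real
)

opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
loss_fn = nn.BCELoss()

def train_step(real_batch):
    """One adversarial round: the discriminator learns to spot fakes,
    then the generator learns to fool the updated discriminator."""
    batch = real_batch.size(0)
    real_labels = torch.ones(batch, 1)
    fake_labels = torch.zeros(batch, 1)

    # Discriminator update: real scans should score 1, fakes should score 0.
    fakes = generator(torch.randn(batch, LATENT_DIM)).detach()
    d_loss = loss_fn(discriminator(real_batch), real_labels) + \
             loss_fn(discriminator(fakes), fake_labels)
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator update: its new fakes should now be scored as real.
    g_loss = loss_fn(discriminator(generator(torch.randn(batch, LATENT_DIM))), real_labels)
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    return d_loss.item(), g_loss.item()

# Example: one step on a dummy batch standing in for real fingerprint images.
dummy_real = torch.rand(32, IMG_PIXELS) * 2 - 1
print(train_step(dummy_real))
```

Repeating this step many times pushes the generator toward fakes that the discriminator can no longer reliably reject.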
The back-and-forth between the generator and the discriminator produced fake fingerprints of steadily improving quality. The researchers then steered this process so that the impostor fingerprints matched as many real fingerprints as possible.
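That steering step is where Latent Variable Evolution gets its name: rather than retraining the GAN, candidate inputs to the generator are evolved so that the resulting print fools a fingerprint matcher for as many enrolled prints as possible. The sketch below is only a stand-in for that idea; the generator and the matcher are placeholders, and the published work used a more sophisticated evolutionary search scored against real fingerprint-matching software.

```python
# Hedged sketch of Latent Variable Evolution: search the generator's latent
# space for a vector whose output "image" matches as many enrolled prints as
# possible. Both the generator and the matcher below are toy placeholders.
import numpy as np

LATENT_DIM = 100
rng = np.random.default_rng(0)

def generator(z):
    """Placeholder for a trained GAN generator: latent vector -> image."""
    return np.tanh(z)

def matcher_score(image, enrolled_templates):
    """Placeholder fitness: how many enrolled prints the image would match."""
    return sum(float(np.dot(image, t) > 0.5 * np.sqrt(len(image)))
               for t in enrolled_templates)

enrolled = [rng.standard_normal(LATENT_DIM) for _ in range(200)]

# Simple hill-climbing evolution over the generator's latent space.
best_z = rng.standard_normal(LATENT_DIM)
best_fit = matcher_score(generator(best_z), enrolled)
for generation in range(30):
    candidates = [best_z + 0.3 * rng.standard_normal(LATENT_DIM) for _ in range(20)]
    for z in candidates:
        fit = matcher_score(generator(z), enrolled)
        if fit > best_fit:
            best_z, best_fit = z, fit

print(f"best latent vector matches {int(best_fit)} of {len(enrolled)} enrolled prints")
```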
The fingerprint recognition systems of smartphones and other mobile devices use capacitive sensors to scan fingers. Because of the size constraints, deliberate design choices, and intended use of a mobile device, the sensor has to make a lot of compromises.
For example, it will accept a partial print that matches enough features of the phone owner’s fingerprint. It will also clear a print that has been rotated away from the sensor’s original orientation. These lax protocols make capacitive sensors more vulnerable than they should be.
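As a purely hypothetical illustration, and not any vendor’s actual algorithm, the following sketch shows how such a lenient matcher might accept a partial, slightly rotated print once a handful of minutiae line up:

```python
# Hypothetical lenient matcher: accept a print if, at some allowed rotation,
# enough of its minutiae land close to minutiae in the enrolled template.
import math

def rotate(point, degrees):
    """Rotate an (x, y) minutia location around the origin."""
    rad = math.radians(degrees)
    x, y = point
    return (x * math.cos(rad) - y * math.sin(rad),
            x * math.sin(rad) + y * math.cos(rad))

def matches(candidate, template, min_hits=3, tolerance=2.0, max_rotation=30):
    """Accept if, at some rotation within +/- max_rotation degrees, at least
    min_hits candidate minutiae fall within `tolerance` units of the template."""
    for angle in range(-max_rotation, max_rotation + 1, 5):
        rotated = [rotate(p, angle) for p in candidate]
        hits = sum(
            any(math.dist(p, t) <= tolerance for t in template)
            for p in rotated
        )
        if hits >= min_hits:
            return True
    return False

enrolled = [(10, 4), (12, 20), (25, 7), (30, 18), (40, 33)]
partial_swipe = [(10.5, 4.2), (12.3, 19.6), (24.8, 7.4)]   # only 3 of 5 minutiae
print(matches(partial_swipe, enrolled))  # True: a partial, slightly off print passes
```

Lowering the required number of hits or widening the tolerance makes unlocking more convenient and the sensor less secure, which is exactly the trade-off small sensors are forced into.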
Smartphone sensors are supposed to meet at least a second-tier level of security: the average fingerprint sensor is expected to produce a false positive only once in every 1,000 attempts.
The NYU-UMich research team reported that their DeepMasterPrints could defeat the fingerprint recognition system of the newest smartphones on 11 occasions out of every 50 attempts (22 percent). Against earlier, less capable models of capacitive sensors, the universal fingerprint achieved much greater chances of success.
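Putting the two figures quoted above side by side shows how far the attack exceeds that nominal error rate:

```python
# Comparing the nominal false-positive rate to the reported DeepMasterPrint results.
expected_false_positive_rate = 1 / 1000   # one false positive per 1,000 attempts
observed_success_rate = 11 / 50           # 11 successful spoofs out of 50 attempts

print(f"{observed_success_rate:.0%}")                        # 22%
print(observed_success_rate / expected_false_positive_rate)  # 220.0
```

In other words, the reported success rate is roughly 220 times the false-positive rate the sensor is designed to allow.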
According to the researchers, their findings are vital to improving fingerprint recognition systems. Patching this gap matters all the more because smartphones are so numerous and are used for so many sensitive tasks.