Last year, when the Internal Revenue Service (IRS) signed an $86 million contract with identity verification provider ID.me, it was a huge vote of confidence for this technology. Taxpayers would be able to verify their identities online using facial biometrics, a move intended to improve the administration of federal tax matters.
Following widespread opposition from privacy advocates and bipartisan legislators, the IRS made an about-face in February, announcing it would abandon the plan. Critics objected to the requirement that taxpayers submit their biometrics, in the form of a selfie, as part of the new identity verification program. The very public unwinding of the IRS' relationship with ID.me has sparked broader concern.
Though the IRS has agreed to continue offering ID.me's facial-matching biometric technology as an option that taxpayers can opt out of, confusion is still brewing. The high-profile complaints against the IRS have, for the time being, eroded public trust in biometric authentication technology and given fraudsters reason to celebrate. And there are other issues to consider as the ID.me debacle fades in the rearview mirror.
Don't underestimate the political value of a controversy.
This recent dispute demonstrates the need for better education about biometric technology: its capabilities and limitations, the difference between facial recognition and facial matching, the use cases and potential privacy concerns, and the policies and regulations that can help protect consumer rights and interests.
Biometrics are commonly captured with explicit user consent for a single, one-time purpose that benefits the user, such as identity verification and authentication to protect the user's identity from fraud. When social media platforms or other internet sites collect biometric data, however, disclosure of that activity tends to be buried in privacy policies, described in terms incomprehensible to the average user. Organizations deploying this kind of "scraping" technology should be required, as ID.me was, to educate users and obtain explicit informed consent.
Different biometric technologies that appear to perform the same function are not created equal. Benchmarks like the NIST Face Recognition Vendor Test (FRVT) provide a thorough analysis of biometric matching technologies and a standard method of evaluating their accuracy and their ability to avoid demographic performance bias across attributes such as skin tone, age, and gender. Biometric technology companies should be held accountable not only for the ethical use of biometrics, but also for the equitable use of biometrics that works well for the entire population they serve.
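The kind of demographic-bias check a benchmark like FRVT formalizes can be illustrated in a few lines: compare the false non-match rate (the rate at which genuine users are wrongly rejected) across demographic groups at a single operating threshold. The scores, groups, and threshold below are toy assumptions for illustration, not data from any real FRVT evaluation.

```python
# Sketch: comparing false non-match rates (FNMR) across demographic groups.
# All scores, group labels, and the threshold are illustrative assumptions.

def false_non_match_rate(genuine_scores, threshold):
    """Fraction of genuine comparisons wrongly rejected at a threshold."""
    misses = sum(1 for s in genuine_scores if s < threshold)
    return misses / len(genuine_scores)

# Hypothetical similarity scores for genuine (same-person) comparisons,
# grouped by a demographic attribute.
genuine_scores_by_group = {
    "group_a": [0.91, 0.88, 0.95, 0.79, 0.86],
    "group_b": [0.84, 0.77, 0.90, 0.81, 0.73],
}

THRESHOLD = 0.80  # a single operating point chosen for the whole population

for group, scores in genuine_scores_by_group.items():
    fnmr = false_non_match_rate(scores, THRESHOLD)
    print(f"{group}: FNMR = {fnmr:.2f}")
```

A matcher whose FNMR diverges sharply between groups at the same threshold is rejecting legitimate users from one group more often than another, which is exactly the kind of inequity these benchmarks are designed to surface.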
Politicians and privacy advocates are holding biometrics technology providers to a high standard. And they should: the stakes are high, and privacy matters. As such, these businesses must be transparent, clear and, perhaps most importantly, proactive in communicating how their technology works to everyone it touches. One misinformed, fiery speech from a politician looking to win hearts can wreak havoc on an otherwise consistent and focused consumer education effort. Sen. Ron Wyden, a member of the Senate Finance Committee, was among those who publicly urged the IRS to drop the facial recognition requirement.

Never mind that facial recognition is already in use at airports, at government facilities and in many workplaces, often without millions of Americans even being aware of it. ID.me and the IRS allowed the public to be misinformed, and allowed the agency's use of facial matching technology to be portrayed as unusual and nefarious.
Honesty is the best policy.
ID.me's own response was late and convoluted, if not misleading. In January, CEO Blake Hall said the company did not use 1:many facial recognition technology, in which one face is compared against others stored in a central repository. After a week of controversy, Hall acknowledged that ID.me does use 1:many matching, but only once, during enrollment. An ID.me engineer noted that incongruity in a prescient Slack channel post:
"We could disable the 1:many face search, but then lose a valuable fraud-fighting tool. Or we could change our public stance on using 1:many face search. But it seems we can't keep doing one thing and saying another, as that's bound to land us in hot water."
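The distinction at the heart of that controversy, 1:1 verification versus 1:many identification, can be sketched in a few lines. The embeddings and threshold below are toy values; real systems derive face embeddings from trained neural networks and calibrate their operating points carefully.

```python
# Sketch: 1:1 verification vs. 1:many identification over face embeddings.
# The vectors and threshold are toy values, not output of a real face model.
import math

def cosine_similarity(a, b):
    """Similarity between two embedding vectors (1.0 = identical direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

THRESHOLD = 0.95

def verify(probe, enrolled):
    """1:1 — compare the probe against a single claimed identity."""
    return cosine_similarity(probe, enrolled) >= THRESHOLD

def identify(probe, gallery):
    """1:many — search the probe against every identity in a repository."""
    best_id, best_score = None, -1.0
    for identity, embedding in gallery.items():
        score = cosine_similarity(probe, embedding)
        if score > best_score:
            best_id, best_score = identity, score
    return best_id if best_score >= THRESHOLD else None

gallery = {
    "alice": [0.9, 0.1, 0.0],
    "bob":   [0.1, 0.9, 0.2],
}
probe = [0.88, 0.12, 0.01]

print(verify(probe, gallery["alice"]))  # 1:1 check against one record
print(identify(probe, gallery))         # 1:many search of the whole gallery
```

The privacy implications differ sharply: 1:1 verification only confirms a claim the user makes about themselves, while a 1:many search requires a central repository of faces that the probe is compared against, which is why the two were worth distinguishing publicly.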
Transparent and consistent communication with the public and key influencers, through print and digital media as well as other creative channels, will help combat misinformation and provide assurance that facial biometric technology, when used with explicit informed consent to protect consumers, is more secure than legacy alternatives.
Prepare for regulation.
Despite the rise in cybercrime, state and federal lawmaking has lagged, leaving policymakers caught in the push-pull between privacy and security. From there, they must act. Agency heads can claim that their actions are driven by a commitment to citizens' safety, security and privacy, but Congress and the White House must decide what sweeping regulations can protect all Americans from the current cyber threat.
The California Consumer Privacy Act (CCPA) and its historic European cousin, the General Data Protection Regulation (GDPR), model how to ensure that users understand the kinds of data that organizations collect from them, how it is used, how to monitor and manage that data, and how to opt out of data collection. To date, officials in Washington have left data protection regulation to the states. The Biometric Information Privacy Act (BIPA) in Illinois, as well as similar laws in Texas and Washington, requires organizations to obtain permission before collecting or disclosing biometric data.
If legislators were to develop and pass legislation that combines the principles of the CCPA and GDPR with the biometric-specific protections of BIPA, they might lay the right foundation for secure and convenient biometric authentication technology.
The future of biometrics
Biometric technology providers and government agencies must be good shepherds of the technology they offer and procure, most importantly when it comes to educating the public. Some hide behind the ostensible fear of giving cybercriminals too much information about the technology itself. But where there is a lack of communication and transparency, there will be opportunistic critics who deliberately misrepresent biometric facial matching technology to advance their own agendas.
In painting facial recognition and biometrics companies as bad actors, several politicians have missed the opportunity to go after the real offenders: cybercriminals and identity thieves.
Tom Thimot is CEO of authID.ai.
Welcome to the VentureBeat community!

DataDecisionMakers is where experts, including the technical people doing data work, can share data-related insights and innovation.

If you want to read about cutting-edge ideas, up-to-date information, best practices, and the future of data and data tech, join us at DataDecisionMakers.

You might even consider contributing an article of your own!