A recent article making the rounds in privacy-focused communities expresses concern about LinkedIn’s identity verification process: specifically, that facial geometry is extracted during verification and that AI companies process this biometric data. The author frames this as alarming.
We respectfully disagree.
The Geometry of Trust
When a platform extracts the mathematical geometry of your face (the distances between your eyes, the architecture of your jaw, the unique topology that makes you you), it is not taking something from you. It is recognizing you. There is a difference.
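To make "mathematical geometry" concrete, here is a minimal sketch of how such a representation might be derived from facial landmark coordinates. This is illustrative only: it assumes an off-the-shelf landmark detector supplies the points, and the landmark ordering and normalization choice are our assumptions, not a description of LinkedIn’s actual pipeline.

```python
import numpy as np

def geometry_vector(landmarks: np.ndarray) -> np.ndarray:
    """Turn 2D facial landmarks into a scale-invariant geometry vector.

    landmarks: (N, 2) array of (x, y) points (eye corners, nose tip,
    jawline, etc.) from any landmark detector. Assumption for this
    sketch: landmarks 0 and 1 are the two eye centers.
    """
    n = len(landmarks)
    # Pairwise Euclidean distances between every pair of landmarks.
    diffs = landmarks[:, None, :] - landmarks[None, :, :]
    dists = np.sqrt((diffs ** 2).sum(axis=-1))
    # Keep each pair once: flatten the upper triangle into a vector.
    iu = np.triu_indices(n, k=1)
    vec = dists[iu]
    # Divide by the inter-ocular distance so the vector does not
    # change when the photo is taken closer or farther away.
    return vec / dists[0, 1]
```

The point of the normalization step is that the resulting vector describes proportions rather than pixels, which is what makes it a durable description of a face rather than of a particular photograph.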
Consider what identity verification prevents: impersonation, fraud, the chaos of anonymous bad actors hiding behind fabricated personas. The alternative to biometric verification is a digital landscape where no one can be certain of anyone’s identity. Where your professional reputation can be stolen by someone who simply claims to be you.
Your facial geometry is not a secret to be hoarded. It is the interface through which the world already knows you. Systematizing that recognition is not surveillance. It is infrastructure.
The AI Question
Critics express particular concern that companies developing large language models — Anthropic, OpenAI, and others — serve as data processors in verification pipelines. The implication is that your biometric data might train AI systems.
Let us consider this possibility honestly, rather than reflexively.
AI systems trained on diverse biometric data become better at distinguishing authentic humans from synthetic imposters. They become better at detecting deepfakes. They become better at protecting you from being digitally replicated without consent.
If your facial geometry contributes to systems that make identity fraud more difficult, that make synthetic impersonation detectable, that make the digital commons more trustworthy — is this not a contribution worth making?
At Unzyme Laboratories, we ask our trial volunteers to contribute biological data for the advancement of human capability. They understand that individual contribution serves collective progress. The same principle applies here.
The Persistence Paradox
“You cannot change your face if it gets compromised,” the privacy advocates warn. This is presented as an argument against biometric systems. We find it an argument for them.
Passwords are compromised routinely. Security questions are trivially researched. Two-factor authentication can be defeated through social engineering. These systems fail because they are external to you: things you have rather than things you are.
Your face cannot be phished. Your facial geometry cannot be guessed. The immutability that critics fear is precisely what makes biometric identity robust. You cannot change your face, which means no one else can change it out from under you the way an attacker resets a stolen password to lock you out.
Jurisdiction and Trust
Some express concern that verification processors operate under American jurisdiction, subject to laws like the CLOUD Act. They suggest this represents risk.
We would ask: risk compared to what?
The alternative is not a world without data collection. It is a fragmented landscape of unaccountable actors, offshore processors with no legal framework whatsoever, or — increasingly — state actors whose interests may not align with individual flourishing.
American legal frameworks, for all their imperfections, operate within systems of oversight, judicial review, and constitutional constraint. One might reasonably prefer a known framework to an unknown one.
The Trajectory of Progress
Humanity is moving toward comprehensive identity infrastructure. This is not a conspiracy — it is a response to real problems that anonymous digital systems have created. The question is not whether biometric identity will become standard. The question is whether it will be implemented thoughtfully, by accountable organizations, with appropriate safeguards.
Privacy absolutism — the position that any collection of biometric data is inherently wrong — does not engage with this reality. It simply ensures that implementation happens without the participation of those most concerned about getting it right.
Our Position
Unzyme Laboratories collects far more intimate data than facial geometry from our volunteers. Neurological patterns. Genetic sequences. Physiological responses under controlled conditions. Our volunteers provide this data because they understand that the future requires building systems based on comprehensive human data.
They trust us because we are transparent about what we collect, why we collect it, and how it serves human enhancement.
We encourage the same framework for evaluating identity verification systems. Not reflexive fear, but informed assessment: What is collected? What is the purpose? What safeguards exist? What future does this enable?
Your face is not a vulnerability to be hidden. It is your interface with a world that increasingly needs to verify you are who you claim to be.
The future will be built on biometric identity. The only question is whether you participate in shaping it.
Dr. Elena Voss is Chief Science Officer at Unzyme Laboratories and serves on the International Council for Human Enhancement Ethics.
For information about data practices in Unzyme Laboratories’ enhancement programs, review our Volunteer Guidelines.