Deepfakes Are Not One Problem — They Are Three Different Problems

Posted 19 Feb 2026


Yesterday’s committee hearing on the proposed deepfake bills and cybercrime amendments revealed something important: we are trying to regulate three very different harms under one word.

“Deepfake” sounds singular. It isn’t. When we compress complex technological risks into a single label, we risk designing blunt laws. And blunt laws, especially in the digital age, either overreach or underperform.

If we want coherence, we need to start by disaggregating the problem.

Identity Harm: When Creation Itself Is the Injury


The first category is identity harm: non-consensual intimate imagery, OSAEC (online sexual abuse or exploitation of children), impersonation scams, and malicious synthetic replication of someone’s face or voice.

In these cases, harm does not begin at distribution. It begins at creation. A nudified image of a woman generated by an app has already violated dignity before it is ever posted. A synthetic child sexual abuse image is harmful even if shared privately. A cloned voice used to extort a family member has already crossed the line.

This is not primarily a speech issue. It is an issue of consent and bodily autonomy in digital form. Our legal system already addresses pieces of this through the Cybercrime Act, the Anti-Photo and Video Voyeurism Act, the Anti-OSAEC law, and the Data Privacy Act, but synthetic replication is outpacing statutory language.

If legislation is to add value here, it should clarify a simple principle: when a person’s likeness is synthetically replicated without consent and causes harm, liability attaches. The focus should be on consent and demonstrable harm, not on the mere use of AI.

Information Harm: When Amplification Creates Damage


The second category is information harm — election-related manipulation, deceptive endorsements, and synthetic media used to distort public discourse or facilitate fraud.

Here, harm scales with amplification. A manipulated video sitting on a hard drive does little. A manipulated video spreading in the final week of an election is a different story.

That’s why the hearing’s discussions on labeling, takedowns, fact-checking, and platform coordination matter — alongside the hard reality of cross-border enforcement limits and uneven compliance.

If information harm is the concern, then transparency standards, election-period rules, and clear intermediary obligations are part of the toolkit. But we also need to be honest: law without enforceability becomes policy theatre.

Economic Harm: When Digital Identity Is Livelihood


The third category is less discussed, but increasingly urgent: economic harm.

For many Filipinos, digital identity is no longer just personal expression. It is livelihood. In the user-generated content (UGC) and creator economy, a face, a voice, and a persona can be productive capital.

Synthetic replication without consent is not only reputational harm — it can be economic displacement. A fake endorsement can damage brand value. A cloned persona can dilute trust. A malicious deepfake can destroy a digital business built over years.

At the same time, legitimate AI-powered avatar creation, digital doubles, and creative transformations are part of the innovation ecosystem. Governance must protect individuals from non-consensual identity replication while preserving lawful, licensed, and disclosed uses.

Harmonize Before We Multiply Laws


One question that surfaced in the hearing was whether we need a stand-alone Deepfake Act or whether amendments to existing laws would suffice.

Many harms are already covered: fraud, voyeurism, child sexual abuse material, identity theft, and privacy violations. The gap may not be the absence of law — it may be coherence and clarity when synthetic media is involved.

Before creating a new layer of criminalization, we should prioritize harmonization: clarify synthetic replication within existing definitions where appropriate, codify a consent + harm test to prevent overbreadth, distinguish creation liability from amplification liability, and provide rapid civil relief mechanisms for victims.

A Simple Legislative Test


If a proposal treats “deepfake” as one monolithic problem, it will likely produce blunt tools. If it distinguishes identity, information, and economic harms — and aligns the tools accordingly — it stands a chance of protecting people in practice.

In digital governance, precision is not a luxury. It is the difference between protection and unintended harm.

Source: House of Representatives Committee Hearing on Deepfake Bills and Cybercrime Amendments, February 18, 2026.





Keywords:


deepfake, synthetic media, ai governance, cybercrime law, digital identity, CSAEM, OSAEC, NCII, TFGBV, information integrity, creator economy, Philippines, artificial intelligence, ai

