A Whole-of-Society Approach to AI Governance: Reflections from the House ICT Committee’s AI Hearing
Posted 10 Dec 2025
Artificial intelligence is reshaping the daily lives of Filipinos in ways that cut across classrooms, workplaces, platforms, families, and public institutions. In response, the House Committee on Information and Communications Technology (ICT) convened a hearing on 19 proposed AI-related bills on December 3, 2025 to better understand the current landscape and explore potential governance pathways.
Crucially, this session was not designed to finalize implementing rules or technical details. It was an exploratory, agenda-setting hearing—an opportunity to map stakeholder positions, identify concerns, and surface pathways for eventual bill consolidation.
Yet even within this limited mandate, deeper governance signals emerged: infrastructure gaps, uneven sectoral participation, asymmetrical influence, and the need for legislation that reflects Philippine lived realities rather than inherited foreign templates.
These insights matter not only for legislative drafting but for shaping a whole-of-society governance approach, in which educators, parents, SMEs, agencies, technologists, civil society, and national security institutions all have a meaningful role.
What the Hearing Revealed
1. The Illusion of Urgency Without Foundational Infrastructure
The Department of Budget and Management (DBM), as co-chair of the Medium-Term Information and Communications Technology Harmonization Initiative (MITHI) Steering Committee, confirmed the issuance of MITHI Resolution No. 2025-05, which is an inter-agency policy on the responsible use of artificial intelligence and machine learning in government. The resolution establishes core requirements: human oversight in all AI-supported decisions, fairness, transparency, privacy protection, and a prohibition on using AI for surveillance without lawful authority.
DBM also reported it is developing a closed, Filipino-trained large language model, disconnected from the public internet, to ensure AI-generated outputs align with local language, institutional context, and data sensitivity protocols.
Despite these concrete steps, such infrastructure-focused groundwork is largely absent from the 19 proposed legislative measures. As the Securities and Exchange Commission (SEC) noted, “across all the proposed bills, data governance is mentioned only once.” The agency emphasized that “any derivative of AI should be based on the consideration of data governance,” underscoring that trustworthy AI systems depend on reliable, well-governed data.
These contributions, along with broader assessments of national readiness, reveal persistent structural gaps:
- Absence of sovereign compute capability
- Fragmented data standards and non-interoperable datasets across agencies
- Limited capacity to test, audit, or explain algorithmic decisions
- Shortage of technical personnel trained in AI governance
- Insufficient safeguards for algorithmic decision systems
In practice, this means the Philippines cannot regulate AI systems it cannot host, verify, or assess independently.
This tension points to a key insight: legislative ambition must match institutional and infrastructure readiness. Without it, even well-designed laws risk governing from a distance, while critical AI functions remain dependent on external platforms, foreign models, and data ecosystems beyond national control.
2. Civil Society Showed Up Ready—With Frameworks, Not Just Opinions
The hearing featured a wide range of voices—government, civil society, academia, and private organizations. But the depth and direction of their contributions varied significantly.
Data and AI Ethics PH delivered one of the most grounded, implementable interventions. Their contribution included:
- Access, Literacy, and Trust as the foundation of responsible AI use.
- The 4E Framework (Education, Engineering, Enforcement, Ethics) as a unifying structure for the 19 AI bills.
- Bill mapping showing enforcement dominates (18 of 19 bills), while engineering is considerably underrepresented.
- Aligning AI oversight with existing laws on privacy, cybercrime, labor, and IP.
- A layered model of AI (data → algorithms → inference → deployment → societal impact) to distinguish foundational risks from surface symptoms.
- Prioritizing governance at the deployment layer, where AI affects real people and institutions.
The Analytics and Artificial Intelligence Association of the Philippines (AAP) grounded its recommendations in three existing national frameworks: the National AI Strategy (NAISPH), the Philippine National Standards on AI (PNS-AI), and the Philippine Skills Framework for AI (PSF-AI). AAP urged Congress to consolidate the bills into a single, standards-aligned law that leverages NAISPH’s whole-of-government architecture, assigning strategy, R&D, and enforcement to agencies already tasked under the strategy, rather than creating redundant structures.
CyberGuardians PH shared grounded insights from their AI literacy work with students and teachers: college scholars are "dumbing down their language" to avoid false AI detection, while teachers lack clear guidelines on handling AI-assisted work. They recommended aligning AI regulation with RA 9262 (Anti-VAWC Law) to combat technology-facilitated gender-based violence, respecting intellectual property rights, and embedding safety-by-design in educational AI tools. They also stressed that any national AI body must formally include civil society, youth, and marginalized groups.
The organizations' inputs were feasible, actionable, and aligned with international norms emphasizing literacy, ethics, safety-by-design, and institutional accountability.
Other stakeholders presented optimistic narratives about innovation, upcoming events, and global AI trends. These contributions helped contextualize opportunity but offered limited guidance for drafting enforceable, sector-specific legislation.
A whole-of-society governance framework requires balancing these narratives so that public-interest and technical expertise receive proportionate weight in shaping legislation that works in practice, not just in theory.
3. Influence, Readiness, and the Weight of Contribution
During the hearing, one group positioned itself as a neutral convener on AI governance, offering policy ideas aligned with global models and promoting its annual forum and an unpublished Philippine AI Report.
It received extended committee engagement—multiple follow-up questions, invitations to future events, and offers to review its forthcoming report—while contributors sharing on-the-ground work (such as AI literacy programs, local-language models, and bill mapping) were given less opportunity to elaborate.
Both convening and implementation are essential to a healthy AI ecosystem. But in the context of lawmaking, demonstrated public-interest contribution, such as developing community safeguards, advancing local technical research, strengthening agency capacity, supporting vulnerable sectors, or enabling ethical and rights-based AI engineering, must carry significant weight.
In emerging policy spaces, visibility and narrative fluency can unintentionally outweigh demonstrated implementation readiness. The situation is further complicated by the fact that a sitting bill sponsor also holds a leadership role in the same organization. While procedurally valid, such overlaps create governance risks that warrant transparent acknowledgment.
As the committee moves toward forming a Technical Working Group (TWG), it has a critical chance to recalibrate: ensure that influence reflects not just who can articulate a global vision, but who is already building and testing AI in Philippine realities.
4. Agency Participation and Uneven Representation
The hearing reflected varying levels of agency preparedness, which is understandable for an exploratory session.
- The Department of Information and Communications Technology (DICT) expressed support for consolidating the 19 legislative measures and emphasized the need for a unified body to coordinate currently siloed AI initiatives.
- The Department of Justice (DOJ) recommended that the proposed AI office focus on research and policy-making, align enforcement with the Cybercrime Prevention Act (RA 10175) to avoid legal conflicts, and use technologically neutral language to ensure future adaptability.
- The Commission on Higher Education (CHED) confirmed that AI literacy is being integrated across IT and computer science curricula, with a forthcoming technical panel on AI to oversee standards. AI is not yet a standalone subject but is embedded in digital literacy from K–12 through higher education.
- The Department of Budget and Management (DBM) went beyond principles, sharing that it has issued MITHI Resolution No. 2025-05. It also disclosed that it is developing a closed, Filipino-trained large language model, disconnected from the internet, to ensure data sovereignty and contextual relevance, and that it has procured NVIDIA chips for this sovereign infrastructure.
- The Securities and Exchange Commission (SEC) highlighted that data governance is mentioned only once across all 19 bills and advocated for regulatory sandboxes to enable adaptive, market-responsive oversight.
- The Technical Education and Skills Development Authority (TESDA) reported concrete action: 100,000 BPO workers are targeted for AI upskilling, with a pilot already underway in Albay in partnership with local government and industry.
However, critical perspectives were missing or underheard.
The Department of Health (DOH) was present but did not deliver remarks, leaving health-sector AI risks, such as diagnostic tools or patient data systems, unaddressed.
The Commission on Human Rights (CHR), though formally invited, did not have a representative in attendance, resulting in no human rights input on algorithmic bias, surveillance, or digital due process.
Others, including the National Intelligence Coordinating Agency (NICA), the Department of National Defense (DND), and the Department of Science and Technology (DOST), had to volunteer input near the end of the session, underscoring the importance of including security and scientific perspectives in future deliberations.
These patterns are typical of broad hearings but highlight a clear need: as the TWG forms, structured, sector-specific consultations must ensure that health, human rights, national security, science, labor, education, and local government voices are not just invited, but intentionally centered, in shaping an AI law that reflects the full spectrum of Philippine realities.
Why the Philippines Needs Its Own AI Law
AI governance models borrowed from foreign jurisdictions, such as the EU AI Act, OECD standards, and UNESCO guidance, offer useful vocabulary and principles. But the Philippines faces realities that differ fundamentally from those contexts:
- A large diaspora workforce and an economy heavily reliant on IT-BPM and overseas employment
- A significant youth population encountering AI first through social media, gig work, and classrooms with uneven support
- Deep inequalities in connectivity, device access, and digital literacy
- Long-standing governance challenges in data quality, public-sector capacity, and enforcement
- Exposure to foreign influence operations and information warfare, as flagged by national security agencies
These conditions produce a distinct risk profile.
A Philippine AI Omnibus Law should therefore:
- Protect workers at risk of displacement
- Safeguard children and youth
- Strengthen public-school and LGU capacity
- Establish sovereignty over data and compute
- Embed rights-based protections aligned with local realities
- Integrate national security, public welfare, and community resilience
Foreign models should inform, but not define, our legislative architecture.
What the Hearing Got Right
Despite its constraints, the hearing demonstrated:
- Cross-sector recognition that AI governance is urgent.
- Agency awareness of capacity limitations.
- Civil society’s readiness to provide grounded, technical, rights-based insights.
- The presence of multiple bill sponsors, reflecting bipartisan interest.
These are strong foundations for future deliberations.
Closing Insight
The ICT Committee hearing revealed a landscape marked by momentum, complexity, and uneven readiness. Yet it also demonstrated that:
- Civil society is capable of offering governance-ready frameworks.
- Agencies understand their own capacity constraints.
- The Philippines has the opportunity to craft an AI law rooted in the lived realities of Filipino workers, students, parents, and communities.
A whole-of-society governance approach, supported by a Philippine AI Law, will allow the country not merely to regulate AI, but to steward its use toward dignity, safety, innovation, and national resilience.
AI governance is not about drafting the fastest law. It is about crafting the right one, anchored in readiness, rights, and the Filipino experience.
Watch the full session here:
Keywords:
artificial intelligence, ai, ai policy philippines, ai legislation philippines, ai governance philippines, whole-of-society governance, data ethics, ai ethics, philippine ai magna carta, multisector ai policy


