AI Safety Governance in Southeast Asia: How ASEAN Is Shaping Responsible AI Policy
Table of Contents
- Why AI Safety Governance in Southeast Asia Matters Now
- The ASEAN AI Governance Landscape in 2026
- National AI Safety Strategies Across Southeast Asia
- Singapore’s Leadership in AI Safety Governance
- AI Safety Governance Challenges Facing ASEAN Nations
- The Southeast Asian Way: A Distinct Governance Model
- ASEAN AI Safety Governance and Global Frameworks
- Frontier AI Risks and Regional Preparedness
- Building AI Safety Capacity Across Southeast Asia
- The Future of AI Safety Governance in ASEAN
📌 Key Takeaways
- Trillion-dollar stakes: AI could boost Southeast Asia’s GDP by 10–18%, but only if governance keeps pace with deployment across a region of over 700 million people.
- ASEAN’s own path: The region favors voluntary harmonization over prescriptive regulation, engaging industry as partners rather than adversaries.
- Singapore leads regionally: Ranked 11th globally for responsible AI, Singapore chairs the ASEAN Working Group on AI Governance and hosts the region’s only AI Safety Institute.
- Shared challenges persist: Low-resource languages, infrastructure gaps, talent shortages, and cybersecurity vulnerabilities remain barriers across all member states.
- Global voice needed: Southeast Asia risks having AI governance norms set without its input unless it strengthens international representation immediately.
Why AI Safety Governance in Southeast Asia Matters Now
As artificial intelligence systems grow more powerful and pervasive, the question of who shapes global AI governance has never been more urgent. While the United States, European Union, and China dominate conversations about AI regulation, a region of over 700 million people—Southeast Asia—is charting its own course for AI safety governance. This is not simply a regional concern. How Southeast Asia manages AI safety governance will influence global norms, affect cross-border digital trade, and determine whether developing nations have a meaningful seat at the table when the rules of the AI era are written.
A landmark 2025 Brookings Institution report titled “AI Safety Governance: The Southeast Asian Way” makes a compelling case: the Association of Southeast Asian Nations (ASEAN) cannot afford to be a bystander as AI governance models harden elsewhere. With internet penetration exceeding 73% and a digital economy projected to approach one trillion dollars by 2030, the region is deeply digitally connected yet underrepresented in global AI policy discourse. The stakes are enormous—AI could generate a 10–18% increase in GDP across the region, valued at approximately one trillion dollars, but only if governance frameworks ensure these technologies are deployed safely and equitably.
Understanding how AI safety governance in Southeast Asia is evolving offers critical lessons for policymakers, technologists, and business leaders worldwide. The region’s approach—pragmatic, pluralistic, and cooperative—presents a viable alternative to both the EU’s prescriptive regulation and America’s market-driven approach. For organizations looking to understand global AI regulatory trends, Southeast Asia is an essential case study.
The ASEAN AI Governance Landscape in 2026
The ASEAN AI governance landscape has undergone rapid transformation. At the regional level, several foundational frameworks now provide the scaffolding for AI safety governance in Southeast Asia. The ASEAN Digital Masterplan 2025, adopted in 2021, laid the groundwork by recognizing AI as a strategic priority. Building on this, the ASEAN Guide on AI Governance and Ethics (AIGE), released in 2024, established seven core ethical AI principles oriented primarily toward the private sector. Its approach aligns more closely with the US National Institute of Standards and Technology AI Risk Management Framework than with the EU AI Act’s binding regulatory model.
In January 2025, ASEAN expanded the AIGE to specifically address generative AI, reflecting the rapid adoption of large language models across the region. The ASEAN Responsible AI Roadmap 2025–2030, released in March 2025, provides a strategic timeline for implementation. Meanwhile, the Digital Economy Framework Agreement negotiations, launched in 2023, aim to create binding commitments on cross-border digital trade that encompass AI governance provisions. The establishment of the Working Group on AI Governance (WG-AI) in 2024, chaired by Singapore, created the first dedicated institutional mechanism for coordinating regional AI policy.
What distinguishes ASEAN’s approach is its emphasis on harmonization rather than unification. Rather than imposing a single regulatory framework—an impossibility given the diversity of the ten member states plus Timor-Leste as an observer—the region aims to create interoperable national policies. This pragmatic stance acknowledges that a country like Singapore, ranked 11th globally in the 2024 Global Index on Responsible AI, operates in a fundamentally different context from Timor-Leste, where 48.3% of the population is multidimensionally poor and digital literacy remains extremely limited.
National AI Safety Strategies Across Southeast Asia
AI safety governance in Southeast Asia operates on two tiers: regional frameworks and national strategies. At the national level, the eleven countries of mainland and maritime Southeast Asia span a remarkable spectrum of AI governance maturity. Six nations—Singapore, Indonesia, Malaysia, Thailand, Vietnam, and the Philippines—have published national AI strategies and adopted or are close to adopting comprehensive soft regulatory frameworks. The remaining five—Laos, Myanmar, Brunei, Cambodia, and Timor-Leste—have yet to publish national AI strategies, though several are making progress.
Indonesia, with over 280 million people, more than 1,200 ethnic groups, and 694 local languages, represents one of the most complex governance environments on earth. Its National Strategy for AI 2020–2045 is ambitious, and AI is expected to add 366 billion dollars to its GDP over the next decade. The country enacted a Personal Data Protection Law in 2022 and was the first Southeast Asian nation to complete UNESCO’s AI Readiness Assessment. However, a devastating June 2024 cyberattack impacting 282 government agencies—with only 2% of data recovered—exposed critical vulnerabilities that directly affect AI safety readiness.
The Philippines presents a unique case. Its BPO sector constitutes 9% of GDP and employs approximately 1.3 million people, making AI-driven job displacement an existential economic concern. With up to 1.1 million jobs projected to disappear from the labor market, the Philippines is the only ASEAN nation actively pursuing hard law for AI regulation, with four comprehensive bills under consideration in Congress. The country plans to propose a regional AI regulatory framework during its 2026 ASEAN chairmanship.
Thailand and Vietnam have both drawn inspiration from the EU AI Act’s risk-based approach. Thailand’s Draft Royal Decree establishes three risk categories for AI systems, while Vietnam’s Draft Law on Digital Technology Industry applies similar principles adapted to its national security concerns. Malaysia, serving as 2025 ASEAN Chair, launched its National AI Office in December 2024 and is targeting 900 AI startups through an AI sandbox by 2026. It has also proposed the ASEAN AI Safety Network (AI SAFE) as a formalized mechanism for regional collaboration on AI safety research.
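The risk-based model that Thailand and Vietnam are adapting from the EU AI Act can be sketched in a few lines of code. The tier names, example use cases, and obligations below are illustrative assumptions modeled loosely on the EU AI Act, not the actual categories in Thailand’s Draft Royal Decree or Vietnam’s Draft Law:

```python
# Illustrative sketch of a risk-tiered AI classification scheme, loosely
# modeled on the EU AI Act's approach. Tier names, use-case mappings, and
# obligations are hypothetical examples, not the categories in either
# country's draft legislation.

from enum import Enum


class RiskTier(Enum):
    UNACCEPTABLE = "prohibited"   # banned outright
    HIGH = "high-risk"            # conformity assessment before deployment
    LIMITED = "limited-risk"      # transparency obligations only


# Hypothetical mapping from AI use case to risk tier.
USE_CASE_TIERS = {
    "social_scoring": RiskTier.UNACCEPTABLE,
    "medical_diagnosis": RiskTier.HIGH,
    "credit_scoring": RiskTier.HIGH,
    "customer_chatbot": RiskTier.LIMITED,
}


def obligations(use_case: str) -> str:
    """Return the regulatory obligation attached to a use case's tier."""
    tier = USE_CASE_TIERS.get(use_case, RiskTier.LIMITED)
    if tier is RiskTier.UNACCEPTABLE:
        return "deployment prohibited"
    if tier is RiskTier.HIGH:
        return "pre-deployment conformity assessment required"
    return "transparency notice to users required"


if __name__ == "__main__":
    for case in USE_CASE_TIERS:
        print(f"{case}: {obligations(case)}")
```

The design point this illustrates is why "harmonization rather than unification" matters: two countries can share the tier structure while mapping the same use case to different tiers, and their systems remain interoperable as long as the obligations attached to each tier are mutually recognized.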
Singapore’s Leadership in AI Safety Governance
No discussion of AI safety governance in Southeast Asia is complete without examining Singapore’s outsized role. Despite its small size, Singapore has emerged as the unambiguous regional leader and a significant player in global AI governance. It is the sole Southeast Asian member of the International Network of AI Safety Institutes, chairs the ASEAN Working Group on AI Governance, and has produced at least 45 AI-related policy documents since 2020.
Singapore’s National AI Strategy 2.0, updated in 2023, positions the city-state as both an AI innovator and a governance pioneer. The Model AI Governance Framework for Generative AI, released in 2024, provides practical guidance for organizations deploying frontier AI systems. Critically, Singapore designated the Digital Trust Centre at Nanyang Technological University as its AI Safety Institute in 2024, giving the region its first institutionalized center for AI safety research.
The AI Verify Foundation, established under Singapore’s leadership, develops open-source AI governance testing tools including Project Moonshot. Meanwhile, AI Singapore developed the SEA-LION family of open-source large language models specifically tailored for Southeast Asian languages—a critical contribution given the region’s extraordinary linguistic diversity. In 2025, Singapore and Japan published a Joint Testing Report evaluating guardrails on non-English LLMs, addressing a significant gap in global AI safety research that has been overwhelmingly English-centric.
Singapore’s approach illustrates a key principle of AI safety governance in Southeast Asia: pragmatic engagement with industry. Unlike the EU’s more adversarial stance toward Big Tech, Singapore actively collaborates with major technology companies for capacity building, talent development, and infrastructure investment while maintaining robust governance standards.
AI Safety Governance Challenges Facing ASEAN Nations
Despite significant progress, AI safety governance in Southeast Asia faces persistent shared challenges that threaten to widen the gap between policy ambition and implementation reality. These challenges are structural and require sustained attention from both national governments and regional institutions.
The most fundamental barrier is the lack of quality datasets for the region’s languages. Languages like Khmer, Lao, and Burmese are classified as “low-resource languages” in the AI research community, meaning there is insufficient digitized text and speech data to train reliable AI models. Cambodia’s AI Forum has signed an MOU with AI Singapore to develop an open-source Khmer LLM, but such efforts remain in early stages. Without representative training data, AI systems deployed in Southeast Asia risk producing biased, inaccurate, or culturally inappropriate outputs—a safety concern that Western-centric governance frameworks rarely address.
Cybersecurity vulnerabilities represent another critical challenge. Indonesia’s 2024 cyberattack demonstrated that even the region’s largest economy remains exposed to systemic digital risks. As AI systems become integrated into critical infrastructure—from Indonesia’s Surabaya AI-powered traffic management to Thailand’s medical AI applications—the security implications multiply exponentially.
Infrastructure inequality creates a stark digital divide. Electrification rates in Indonesia range from 14.06% in Highland Papua to 100% in Jakarta. In Timor-Leste, a government official noted that “in remote or mountainous regions, a civil servant who knows how to use a computer might be hard to find.” These disparities mean that AI safety governance must account for radically different deployment contexts within single countries, let alone across the region.
Talent shortages compound every other challenge. The pay differential tells the story: an AI research internship at a US or UK NGO pays approximately $26 per hour, while equivalent work at a Manila consulting firm pays roughly $4 per hour. This wage gap drives a persistent brain drain that depletes the human capital needed for effective AI governance. Regulatory fragmentation—where multiple agencies within a single country formulate AI policy in silos, as seen in Thailand and Malaysia—further dilutes limited expertise across competing institutional mandates.
The Southeast Asian Way: A Distinct Governance Model
The Brookings report identifies four distinctive features that define the “Southeast Asian Way” of AI safety governance, distinguishing it from approaches adopted by the EU, US, and China. Understanding this model is essential for anyone tracking global responsible AI frameworks.
First, the region practices localized governance grounded in pluralism and pragmatism. Rather than seeking regulatory uniformity, ASEAN nations develop context-specific risk assessments that reflect their unique economic structures, cultural values, and developmental priorities. Malaysia’s National Guidelines on AI explicitly incorporate Islamic precepts alongside international standards. Vietnam frames data protection through a national security lens. The Philippines prioritizes labor market impacts. This pluralism is not a weakness—it reflects the genuine diversity of a region spanning high-income Singapore and low-income Timor-Leste.
Second, the approach leverages regional cooperation as a strategic asset. ASEAN’s institutionalized platforms—the Working Group on AI, the ASEAN Digital Senior Officials Meeting (ADGSOM), and various ministerial mechanisms—provide proven channels for cross-border coordination. Complementary national strengths create opportunities for mutual reinforcement: Vietnam’s data protection expertise, Singapore’s governance testing tools, and Thailand’s medical AI applications can be shared through regional mechanisms.
Third, Southeast Asia embraces inclusive, multi-stakeholder governance. Co-creation with industry, academia, and global partners characterizes the regional approach. Rather than the EU’s apprehension about Big Tech influence, ASEAN nations pragmatically engage major technology companies for capacity building while developing governance guardrails. This inclusive stance extends to international dialogue partners—Japan, China, the US, Australia, India, and South Korea all maintain active AI cooperation programs with ASEAN.
Fourth, the region invests in open-source AI safety initiatives. Singapore’s AI Verify Foundation and the SEA-LION language models exemplify a commitment to shared technical baselines that lower the barrier to entry for smaller nations. These open-source tools enable even resource-constrained countries to implement AI safety testing and culturally attuned model development without building capabilities from scratch.
ASEAN AI Safety Governance and Global Frameworks
AI safety governance in Southeast Asia does not develop in isolation—it actively engages with and adapts global frameworks. However, the region’s international representation remains a critical weakness. Only Singapore has attended all three major AI summits: the UK AI Safety Summit in 2023, the Seoul AI Summit in 2024, and the Paris AI Action Summit in 2025. Indonesia and the Philippines each attended two of the three; the remaining members attended at most one, and Malaysia, Myanmar, Brunei, and Timor-Leste attended none.
This underrepresentation matters because global AI governance norms are being established now. The OECD AI Principles, the G7 Hiroshima Process, and the International Network of AI Safety Institutes are all setting standards that will affect how AI is developed and deployed worldwide. If Southeast Asia’s unique perspectives—particularly regarding low-resource languages, developing-country infrastructure constraints, and diverse cultural contexts—are not incorporated into these norms, the resulting frameworks will be ill-suited for over 700 million people.
ASEAN’s dialogue partner relationships offer a pathway to greater influence. Japan is co-developing non-Western language models and committed to training 100,000 AI professionals across the region. China has agreed to a joint guide on cross-border data flows and an action plan extending through 2030. The US, through USAID, helped develop the ASEAN Responsible AI Roadmap, though recent funding cuts have introduced uncertainty. Australia has identified generative AI and cybersecurity as cooperation priorities with ASEAN.
The Brookings report proposes that ASEAN could serve as a vital “interstitial connection” between the major AI governance blocs. By formulating cooperative agreements with the International Network of AISIs, India’s AI Safety Institute, and China’s AI Safety and Development Association, ASEAN could bridge divides that currently fragment global AI governance efforts.
Frontier AI Risks and Regional Preparedness
While much of AI safety governance in Southeast Asia focuses on near-term risks—misinformation, deepfakes, cybercrime, and job displacement—the Brookings report argues that the region must also prepare for frontier and catastrophic AI risks. Close to 70% of AISA roundtable participants favored government focus on societal risks over existential ones, reflecting a pragmatic prioritization. Yet ignoring frontier risks entirely would leave the region dangerously exposed.
The report proposes a Southeast Asia Frontier AI and Emergency Risk-management (SAFER) Special Taskforce to oversee a 2025–2028 strategic plan for anticipatory governance. This taskforce would identify tangible frontier and catastrophic risks relevant to the region, conduct scenario-based planning exercises, establish an interdisciplinary observatory monitoring agentic AI and AGI trajectories, create a dedicated diplomatic track for coordinated ASEAN positions, and build an ASEAN technical corps for participating in international standard-setting. The SAFER taskforce could be integrated with the existing Working Group on AI or report to ADGSOM, with a sunset clause requiring renewal by consensus.
Myanmar’s situation illustrates how AI risks compound in fragile contexts. Characterized as an AI “risk-critical” environment following the 2021 military coup, Myanmar’s Safe City surveillance project—featuring AI-enabled facial and license plate recognition—has been criticized as a tool for suppressing dissidents rather than ensuring public safety. The country’s Digital Economy Roadmap 2030 contains no mention of AI safety or ethical governance, highlighting how political instability can derail even basic governance aspirations.
Building AI Safety Capacity Across Southeast Asia
The Brookings report’s recommendations for strengthening AI safety governance in Southeast Asia follow an Eisenhower Matrix prioritization. High-urgency, high-importance actions include leveling all countries up to a baseline standard of AI safety governance, bolstering frontier AI preparedness, developing a regional talent pool, enabling regulatory interoperability, and increasing international representation.
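The Eisenhower Matrix the report applies reduces to a simple urgency/importance bucketing, which can be sketched as follows. The five “do first” actions come from the report’s recommendations; the two lower-quadrant entries are hypothetical placeholders added only to exercise the other quadrants:

```python
# Minimal sketch of Eisenhower Matrix prioritization: bucket actions into
# quadrants by urgency and importance.

def quadrant(urgent: bool, important: bool) -> str:
    """Return the Eisenhower quadrant for an (urgency, importance) pair."""
    if urgent and important:
        return "do first"
    if important:
        return "schedule"
    if urgent:
        return "delegate"
    return "deprioritize"


# The first five actions are the report's high-urgency, high-importance
# recommendations; the last two are hypothetical examples, added only to
# show the lower quadrants.
actions = [
    ("baseline AI safety governance for all members", True, True),
    ("frontier AI preparedness", True, True),
    ("regional talent pool", True, True),
    ("regulatory interoperability", True, True),
    ("international representation", True, True),
    ("long-term compute-sharing study", False, True),   # hypothetical
    ("routine progress reporting", True, False),        # hypothetical
]

for name, urgent, important in actions:
    print(f"{name}: {quadrant(urgent, important)}")
```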
Capacity building must address the enormous disparities between member states. The report proposes a minimum viable baseline that every ASEAN nation should achieve: a published national AI strategy, comprehensive soft regulation, and policy consideration of frontier AI including AGI, agentic AI, and generative AI. Singapore, with its established ecosystem, can lead through a “lead-and-leverage” model where proven approaches are adapted and adopted by other countries. Cambodia’s practical approach—developing Khmer-language AI tools like Translatekh and Sarika while acknowledging its role as a technology consumer rather than developer—offers a realistic template for less-developed members.
Regional data and compute sharing represents an underexploited opportunity. Safety-critical datasets, public interest data, and shared computing resources could dramatically accelerate AI safety capabilities across the region. ASEAN-level training modules for civil servants would build the institutional knowledge needed to implement and enforce governance frameworks. Japan’s commitment to training 100,000 AI professionals and South Korea’s ROK-ASEAN Digital Academy provide external support mechanisms that can be leveraged strategically.
The private sector’s role is equally important. Vietnam’s FPT Corporation established an Ethical AI Committee in December 2024, demonstrating that industry-led governance initiatives can complement government efforts. Brunei assembled a 25-person AI governance working group that produced a comprehensive guide within just three months by adapting existing ASEAN-level documents—showing that even resource-constrained nations can move quickly when they leverage regional frameworks rather than building from scratch.
The Future of AI Safety Governance in ASEAN
AI safety governance in Southeast Asia stands at an inflection point. The frameworks are being built, the institutions are being established, and the regional consensus is forming. But the window for shaping global AI governance norms is narrowing. As the UNESCO Recommendation on the Ethics of AI and other international instruments move toward implementation, Southeast Asia must ensure its voice is heard.
The Philippines’ upcoming 2026 ASEAN chairmanship presents an immediate opportunity to advance regional AI governance ambitions. Malaysia’s AI SAFE Network proposal and the SAFER taskforce concept provide concrete institutional mechanisms for deepening cooperation. The Digital Economy Framework Agreement negotiations, expected to conclude soon, could embed AI governance commitments into binding regional trade law for the first time.
For organizations, researchers, and policymakers engaging with Southeast Asia, understanding the region’s distinct approach to AI safety governance is no longer optional—it is essential. The “Southeast Asian Way” of governance—pluralistic, pragmatic, cooperative, and inclusive—offers a model that other developing regions in Africa, Latin America, and South Asia may well adapt for their own contexts. In a world where AI governance risks becoming a tool of geopolitical competition, Southeast Asia’s emphasis on practical cooperation and shared technical baselines points toward a more constructive path forward.
The question is no longer whether AI safety governance in Southeast Asia will matter globally, but whether the region will seize the moment to shape it on its own terms. The foundations are in place. The urgency is clear. What remains is the political will and sustained investment to translate frameworks into action.
Frequently Asked Questions
What is the ASEAN Guide on AI Governance and Ethics?
The ASEAN Guide on AI Governance and Ethics (AIGE) is a voluntary framework adopted in 2024 that outlines seven ethical AI principles for member states. It provides practical guidance for organizations deploying AI systems across Southeast Asia, drawing from international best practices while respecting regional diversity. An expanded version covering generative AI was released in January 2025.
How does AI safety governance in Southeast Asia differ from the EU AI Act?
Southeast Asia takes a more inclusive, multi-stakeholder approach compared to the EU’s prescriptive regulatory model. While the EU AI Act imposes binding risk-based classifications, ASEAN favors voluntary guidelines and harmonization over unified hard law. The region also engages more pragmatically with Big Tech companies for capacity building rather than viewing them primarily as regulatory targets.
Which Southeast Asian countries lead in AI safety governance?
Singapore is the clear regional leader, ranked 11th globally in the 2024 Global Index on Responsible AI. It chairs the ASEAN Working Group on AI Governance and is the only Southeast Asian member of the International Network of AI Safety Institutes. Indonesia, Malaysia, Thailand, Vietnam, and the Philippines also have published national AI strategies and comprehensive governance frameworks.
What are the biggest challenges to AI governance in Southeast Asia?
Key challenges include limited technical capacity and talent pipelines, infrastructure gaps between urban and rural areas, lack of quality datasets for low-resource languages like Khmer and Burmese, cybersecurity vulnerabilities, regulatory fragmentation across overlapping government agencies, and constrained budgets that force governments to prioritize economic development over safety regulation.
What is the proposed ASEAN AI Safety Network?
The ASEAN AI Safety Network (AI SAFE) is a proposal by Malaysia during its 2025 ASEAN chairmanship to establish formalized collaborative mechanisms on AI safety research and development across the region. It aims to facilitate shared technical baselines, capacity building, and regional cooperation on frontier AI preparedness among all member states.
How much could AI contribute to Southeast Asia’s economy?
AI could increase Southeast Asia’s GDP by 10 to 18 percent, valued at approximately one trillion US dollars by 2030. The regional digital economy overall is projected to grow to nearly one trillion dollars by 2030, and Indonesia alone expects AI to add 366 billion dollars to its GDP over the next decade.