DPM Heng Swee Keat said the programme is the first of its kind and will give AI developers the tools needed to conduct their own assessments.
As part of the Generative AI Evaluation Sandbox for Trusted AI, the programme will comprise standardised evaluation tests that guide companies in setting up guardrails to prevent their systems from making errors or showing bias. He added: “Critically, the sandbox will equip app developers with the skills and methodologies to conduct generative AI evaluation. Today, these capabilities reside largely with AI model developers.”
IMDA said the sandbox initiative will put AI models to the test in various fields, like human resources and security, to expose gaps in the way AI is currently assessed. An IMDA spokesman told The Straits Times: “Large language models today are trained on Internet data, which may not be representative of the nuances of Singapore’s cultural context. For example, in terms of knowledge understanding, it may not appreciate that within racial groups, there is a diversity of faiths and languages.”
The alliance will discuss AI standards and best practices, and create a neutral platform for collaboration on governing AI.