There’s growing noise around ‘regulating’ AI. Some claim it’s too early, citing that precautionary regulations could impede technical developments; others call for action, advocating measures that could mitigate the risks of AI.
It’s an important problem. And both ends of the debate make compelling arguments. AI applications have the potential to improve output, productivity, and quality of life. Forestalling AI developments that facilitate these advancements carries a big opportunity cost. Equally, the risks of broad-scope AI applications shouldn’t be dismissed. There are near-term implications, like job displacement and autonomous weapons, and longer-term risks, like value misalignment and the control problem.
Regardless of where one sits on the ‘AI regulation’ spectrum, few would disagree that policymakers should have a firm grasp on the development and implications of AI. It’s unsurprising, given the rapid developments, that most do not.
ASYMMETRY OF KNOWLEDGE
Policymakers are still very much at the beginning of learning about AI. The US government held public hearings late last year to ‘survey the current state of AI’. Similarly, the UK House of Commons undertook an inquiry to identify AI’s ‘potential value’ and ‘prospective problems’.
While these broad inquiries signify positive engagement, they also highlight policymakers’ relatively poor understanding, particularly compared to industry’s. This is understandable given the majority of AI development and …