Quick Read
- AI experts propose calculating a ‘Compton constant’ to assess risks of AI escaping human control.
- Max Tegmark emphasizes the need for rigorous safety measures before deploying advanced AI systems.
- A global consensus on AI safety could drive international regulations and prevent existential threats.
Why AI Safety Is a Growing Concern
Artificial intelligence (AI) has made remarkable strides in recent years, but these advances carry significant risks. Experts like Max Tegmark, a physicist and AI researcher at the Massachusetts Institute of Technology (MIT), are calling for rigorous safety measures to ensure that super-intelligent AI systems remain under human control. Tegmark’s recent paper introduces the ‘Compton constant’: the probability that humanity loses control of an advanced AI system.
The Compton Constant: A New Safety Metric
The term ‘Compton constant’ is inspired by the safety calculations that physicist Arthur Compton carried out before the Trinity nuclear test in 1945. Compton estimated that the probability of the test triggering a runaway fusion reaction in the atmosphere was vanishingly small (reportedly less than one in three million), which gave scientists and policymakers the confidence to proceed with the test. Tegmark argues that AI companies should take a similar approach by calculating the risks associated with Artificial Super Intelligence (ASI), a theoretical system that would surpass human intelligence in every domain.
According to Tegmark, a consensus on the Compton constant among AI companies could foster political will to establish global safety regulations. This would ensure that the development of powerful AI systems does not pose existential threats to humanity. “It’s not enough to say ‘we feel good about it,’” Tegmark stated. “They have to calculate the percentage.”
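Tegmark’s point is that the risk should be stated as a number rather than a feeling. His paper does not prescribe a single formula here, but the idea can be illustrated with a toy calculation: estimate the probability of each failure mode, combine them into an overall loss-of-control probability, and compare the result against an agreed tolerance. The failure-mode names and probability values below are purely hypothetical, and the independence assumption is a simplification.

```python
# Toy illustration of a 'Compton constant' style check (a sketch, not
# Tegmark's actual method). Each failure mode carries an estimated
# probability; assuming independence, the chance that at least one
# occurs is 1 minus the product of the per-mode survival probabilities.

def loss_of_control_probability(failure_modes: dict[str, float]) -> float:
    """P(at least one failure mode occurs), assuming independence."""
    p_all_contained = 1.0
    for p in failure_modes.values():
        p_all_contained *= 1.0 - p
    return 1.0 - p_all_contained

# Hypothetical per-mode estimates a lab's safety team might supply.
estimates = {
    "goal_misspecification": 1e-4,
    "containment_breach": 5e-5,
    "deceptive_alignment": 2e-4,
}

compton_constant = loss_of_control_probability(estimates)
TOLERANCE = 1 / 3_000_000  # echoing Compton's pre-Trinity threshold

print(f"Estimated Compton constant: {compton_constant:.2e}")
print("within tolerance" if compton_constant <= TOLERANCE else "exceeds tolerance")
```

In this toy example the combined probability (about 3.5 × 10⁻⁴) far exceeds the Trinity-style tolerance, which is exactly the kind of explicit comparison Tegmark wants companies to make before deployment.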
Global Collaboration on AI Safety
The call for rigorous safety measures aligns with the Singapore Consensus on Global AI Safety Research Priorities. This report, co-authored by Tegmark and other leading experts like Yoshua Bengio, outlines three key areas for AI safety research:
- Developing methods to measure the impact of current and future AI systems.
- Specifying how AI systems should behave and designing mechanisms to ensure compliance.
- Managing and controlling AI behavior to prevent unintended consequences.
These priorities aim to create a framework for the safe development of AI technologies, balancing innovation with responsibility.
Lessons from History: The Risks of Unchecked Development
The rapid development of AI has drawn parallels to other technological advancements that lacked sufficient oversight. For instance, the social media industry faced criticism for ignoring risks during its expansion, leading to issues like misinformation and social polarization. Similarly, experts warn that the race to develop Artificial General Intelligence (AGI) could result in catastrophic outcomes if safety is not prioritized.
Daniel Kokotajlo, a former researcher at OpenAI, estimates a 50% chance that AGI will be achieved within three years. However, he has expressed concern about the industry’s readiness to manage such powerful systems. “The world isn’t ready, and we aren’t ready,” Kokotajlo said.
The Role of Governments and Public Accountability
While some governments, like the European Union, have taken steps to regulate AI, others lag behind. In the absence of effective oversight, insiders and whistleblowers play a crucial role in holding companies accountable. However, their efforts are often hindered by confidentiality agreements and non-disparagement clauses.
Experts argue that public involvement is essential to ensure transparency and accountability in AI development. As Tegmark noted, “International collaboration has come roaring back,” signaling a renewed focus on global cooperation to address these challenges.
Environmental and Ethical Implications
Beyond safety, the environmental and ethical costs of AI development also warrant attention. Training and running large AI models consumes significant energy and natural resources. For example, ChatGPT reportedly uses approximately 500 milliliters of water for every 10–15 responses it generates. With millions of active users, the cumulative impact is substantial.
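Taking the reported figure at face value, a back-of-the-envelope calculation shows how quickly this scales. The daily response volume below is an illustrative assumption, not a reported statistic.

```python
# Back-of-the-envelope water-use estimate from the reported figure of
# ~500 ml of water per 10-15 responses. The daily response count is a
# purely illustrative assumption.

ML_PER_BATCH = 500             # milliliters of water per batch of responses
RESPONSES_PER_BATCH = 12.5     # midpoint of the reported 10-15 range
DAILY_RESPONSES = 100_000_000  # assumed volume, for illustration only

liters_per_day = DAILY_RESPONSES / RESPONSES_PER_BATCH * ML_PER_BATCH / 1000
print(f"~{liters_per_day:,.0f} liters per day")  # ~4,000,000 liters/day
```

Under these assumptions, the service would consume roughly four million liters of water per day, which is why the cumulative footprint of large-scale AI inference is drawing scrutiny.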
Moreover, AI systems often rely on large-scale data extraction, raising ethical concerns about the use of information gathered without consent. Naomi Klein describes this as “the largest and most consequential theft in human history,” highlighting the need for ethical guidelines in AI development.
As AI continues to evolve, the importance of rigorous safety measures, ethical considerations, and global collaboration cannot be overstated. By adopting metrics like the Compton constant and fostering international cooperation, the industry can mitigate risks and ensure that AI serves humanity responsibly.
Sources: The New York Times, The Guardian