How we think about tradeoffs when communicating surprising or nuanced findings.
Shared components of AI lab commitments to evaluate and mitigate severe risks.
Suggested priorities for the Office of Science and Technology Policy as it develops an AI Action Plan.
Why legible and faithful reasoning is valuable for safely developing powerful AI.
List of frontier safety policies published by AI companies, including Amazon, Anthropic, Google DeepMind, G42, Meta, Microsoft, OpenAI, and xAI.
Why pre-deployment testing is not an adequate framework for AI risk management.