PauseAI values

What do we want?

Globally halt frontier AI development until we know how to do it safely and under democratic control. See our proposal.

What are our values?

  • Humanity first. We believe that AI should be developed in a way that benefits humanity, or not at all.
  • Community. A feeling of community doesn’t just come from a shared goal (e.g. pausing AI) or shared values, but also from social activity. That’s why it’s important to get people to meet, organize events, have social gatherings and create IRL friendships. It’s not just about constructive action; it’s also about making friends and feeling at home in a group.
  • Anyone can contribute. Many AI safety and AI governance organizations rely solely on paid staff. This has its merits, but it leaves the potential of volunteers untapped. That’s where PauseAI is different: by fostering volunteers and encouraging action, we can get stuff done even without a lot of funding.
  • Transparency by default. Do and discuss things publicly and openly, unless there’s a good reason not to. Meetings are open to join, the website is open source, and the Discord server is joinable. Being approachable makes it easier for newcomers to feel welcome and help out.
  • Honesty. We don’t have any weird incentives (e.g. having a stake in an AI company), so we are free to say what we believe. We do not sugarcoat our message to make it more palatable.
  • Diversity in risks, uniformity in desires. Whether you’re worried about x-risk, cybersecurity hazards or the impact of AI on our democracy, we are unified in our desire to pause AI development.
  • No partisan politics. Humans are tribal creatures, which leads us to bundle viewpoints into groups (left/right). AI safety is not partisan, and we want to keep it that way. We do not let our other political views distract us from our shared goal.

What type of culture do we want to foster?

  • Action-oriented. We want to be a group that gets stuff done. The perfect is the enemy of the good. We cannot give in to the comfort of just talking about things. We need to act.
  • Friendly. We want to be a group that people like to be part of. We want to be welcoming to new members.
  • Open. We want to be open to new ideas, new people, new ways of doing things. We want to be open to criticism. Our goal is to prevent AI risks. We should be open to the possibility that we’re wrong about how to do that.
  • Reasonable. Because our concerns are often dismissed as crazy, we need to be extra careful not to look crazy. We emphasize that many people in our group have technical backgrounds, and we show that we know what we’re talking about.