
Theory of Change

How does PauseAI plan to achieve its mission?

What do we want?

Our proposal describes what we want: Globally halt frontier AI development until we know how to do it safely and under democratic control.

How do we pause?

Pausing is costly because there are strong incentives to develop increasingly powerful AI. Companies can make a lot of money by being the first to develop a more capable AI, and countries can gain a lot of power by doing the same. This is why we do not expect any single company, or any single country, to pause on its own.

This means we need an international pause. We can get there in two ways:

  1. An international treaty. We banned blinding laser weapons and CFCs through treaties, so we can also ban superhuman AI through a treaty. These treaties are often initiated by a small group of countries, which other countries then join. A summit is often the event where such a treaty is initiated and signed. We need to convince our politicians to initiate treaty negotiations.
  2. A unilateral supply-chain pause. The AI supply chain is highly centralized. Virtually all AI chips used in training runs are designed by NVIDIA and produced by TSMC, which in turn relies on lithography machines from ASML. If any of these near-monopolies were to pause, the entire AI industry would pause. We can achieve this by lobbying these companies, and by lobbying the governments that have leverage over them.

Why don’t we have a pause yet?

The problem is not a lack of concerned experts (86% of AI researchers believe the control problem is real and important). The problem is not a lack of public support for our proposal (a large majority of people already want AI development to be slowed down). However, there are some important reasons why we don’t have a pause yet:

  • Race dynamics. AI creates a lot of value, especially for whoever has the most powerful model. The desire to be the first to develop a new AI is very strong, both for companies and for countries: companies understand that the best-performing AI model can command a far higher price, and countries understand that leading the race brings strategic and economic power. The people within AI labs tend to understand the risks, but they have strong incentives to focus on capabilities rather than safety. Politicians are often not sufficiently aware of the risks, and even if they were, they might still not want to slow down AI development in their country because of the economic and strategic benefits. This is why we need an international pause. That’s the whole point of our movement.
  • Invisible risks. We tend to make policies based on past experiences, and we have no past experience with risks like these. We will never be able to make policy based on seeing an AI take over the globe. However, some risks may only become visible when it is too late to act on them.
  • Lack of urgency. People underestimate the pace of AI progress. Even experts in the field have been consistently surprised by how quickly AI has been improving.
  • Our psychology. Read more about how our psychology makes it very difficult for us to internalize how bad things can get.

What do we do to get there?

  1. Grow the movement. The larger our group is, the more we can do. We grow our movement through radical transparency, online community building, and fostering local groups. We empower our volunteers to take action, and we make it easy for them to do so. Read more about our growth strategy.
  2. Protests. Protests increase public awareness and support. They are also a great way to recruit new members and improve community feeling. Because our subject is relatively new, even small protests can get very good media coverage. We encourage our members to organize protests in their own cities.
  3. Lobbying. Every volunteer can become an amateur lobbyist. We send emails to politicians, we meet with them, and we stay in touch. We ask them to put AI risks on the agenda and to start drafting a treaty. The core issue that we’re trying to solve is a lack of information, emotional internalization, and insight in the political sphere.
  4. Inform the public. We make people aware of the risks we’re facing and what we can do to prevent them. We write articles, make videos, design images, join debates, give talks, and organize events.

How do we communicate?

  • Defer to experts. We are warning people about a scenario so extreme and scary that a gut-level response is to dismiss it as crazy talk. Show the expert polls and surveys. The top three most cited AI scientists are all warning about x-risk. Deferring to them is a good way to make our case.
  • Use simple language. You can show you understand the technology and you’ve done your homework, but excessive jargon can make people lose interest. We want to reach as many people as possible. This means we need to use simple language and avoid jargon.
  • Show our emotions. Seeing emotions gives others permission to feel emotions themselves. We are worried, we are angry, we are eager to act. Showing how you feel can be scary, but in our case we need to. Our message can only be received if it matches how we deliver it.
  • Emphasize uncertainty. Don’t say AI will take over, or that we will reach AGI in x years. Nobody can predict the future. There is a significant chance that AI will go wrong soon, and that should be enough to act on. Don’t let uncertainty be the reason to not act. Refer to the Precautionary Principle, and make the point that we should err on the side of caution.
  • Make individuals feel responsible. Nobody wants to feel like they have a strong responsibility to make things go well. Our brains steer us away from this, because we all have a deep desire to believe that someone is in charge, protecting us. But there are no adults in the room right now. You need to be the one to do this. Choose to take responsibility.
  • Inspire hope. When hearing about the dangers of AI and the current race to the bottom, many of us feel dread, and dread keeps us from acting. Fatalism is comfortable, because a lack of hope means we don’t have to work towards a good outcome. This is why we need to emphasize that our case is not lost. AGI is not inevitable, technology has been successfully banned internationally before, and our proposal has broad public support.

Let’s get to it

Join PauseAI and take action!