Yud calls out the Fake News(tm)

The end goal is "safe AI which will create utopia", but the community believes it is very hard to create such an AI. So it's thought that someone else will first create a world-destroying AI.

Potential pivotal acts:

- human intelligence enhancement powerful enough that the best enhanced humans are qualitatively and significantly smarter than the smartest non-enhanced humans
- a limited Task AGI that can:
  - upload humans and run them at speeds more comparable to those of an AI
  - prevent the origin of all hostile superintelligences (in the nice case, only temporarily and via strategies that cause only acceptable amounts of collateral damage)
  - design or deploy nanotechnology such that there exists a direct route to the operators being able to do one of the other items on this list (human intelligence enhancement, prevent emergence of hostile SIs, etc.)

Later in the page, 'borderline-astronomically-significant' is defined in context as "unclear whether it would be capable of causing the alignment problem to be solved" (the premise being that if the alignment problem isn't solved, humanity goes extinct).

Borderline-astronomically-significant cases:

- unified world government with powerful monitoring regime for 'dangerous' technologies
- widely used gene therapy that brought anyone up to a minimum equivalent IQ of 120

https://arbital.com/p/pivotal/
