Proposal Summary (<20 words)
Mox, a coworking & events space in SF for AI safety, EA charities, labs & startups.
Description of activities over the funding period
Please describe the activities you are requesting funding for over your ideal funding period.
Mox is a 2-floor, 20k sq ft venue in San Francisco. We’re requesting funding to build a premier hub that brings together EA & AI safety folks with the SF tech scene and the major labs. We’re inspired by what Constellation, Lighthaven, and FAR Labs have achieved in Berkeley, and hope to build on their examples in the city that is ground zero for transformative AI work.
The main elements of Mox:
- Coworking & offices: We host daytime members, who use Mox as their primary workplace. Currently our members are small teams and individuals, with a mix of EA orgs, AI safety researchers, and startup founders. We’re also recruiting “anchor” orgs like Epoch AI to situate their offices here.
- Community space: We’re positioned as a “weekend office” for folks at eg Anthropic, OpenAI, and METR to work and mingle. We encourage member-run gatherings like blog club, paper reading groups, lightning talks, and yoga.
- Public events: As a large, central venue with easy access to both SF and East Bay, Mox is ideal for hackathons, speaker talks, happy hours, unconferences and the occasional party. We organize our own events, and also rent our space to aligned organizers.
- Project incubation: As a medium-term goal, we’d like to host external fellowships or incubators (eg for MATS, FLF, or Apart), or run our own in-house accelerator.
Mox is still early (we opened 1 month ago), and we’re not yet sure what our long-term breakdown across coworking, events, community, and incubation will be. We intend to stay flexible over the next few months, try out many approaches, and double down on whatever shows promise.
Background
What skills, attributes, or experience do you (and your team, if relevant) have that speak to your ability to execute on the activities you’re proposing?
- We’ve operated Manifund for the last 2 years, moving >$5m to projects in AI safety, biosecurity, animal welfare, and other EA and charitable causes.
- We ran AI safety regranting in 2023 & 2024, with regrantors like Neel Nanda, Leopold Aschenbrenner, Dan Hendrycks, Adam Gleave, Ryan Kidd, and Evan Hubinger, and have raised funding for a larger 2025 version.