Bitcoin Funding Protects Against AI Apocalypse

With a strong overlap between the futurist and transhumanist community and the cryptocurrency world, Bitcoin wealth is trickling into AI research.

Before Wall Street became interested in Bitcoin, the crypto community belonged to nerds and futurists. Even in 2018, this community still exists, and it continues to channel its newly acquired Bitcoin wealth into research on the threat of AI.

[Embedded tweet listing OpenAI supporters]

— OpenAI (@OpenAI) February 21, 2018

The Machine Intelligence Research Institute (MIRI), a research non-profit prominent on the LessWrong discussion boards, collected more than $2.5 million last year, including more than $700,000 from Vitalik Buterin, the founder of Ethereum.

MIRI is the continuation of the Singularity Institute, and its fundraising drive supports research into both the potential and the threats of AI.

But while the movement's thought leader, Eliezer Yudkowsky, is busy pondering arcane issues of AI, he has not been blind to the day-to-day troubles of the cryptocurrency world, namely the recent BitGrail hack:

Bank customer: Give me $10 million
Bank teller: I’m required to ask if you’re sure you have that much money in your account
Bank customer: Here’s a screenshot of my online balance
Bank teller: Here’s $10,000,000

— Eliezer Yudkowsky (@ESYudkowsky) February 11, 2018

The LessWrong and MIRI teams have also moved beyond Bitcoin in their fundraising efforts, as newer digital assets may likewise help fund the Institute's AI research.


The non-profit has also opened a wallet to Ethereum donations, receiving nearly 150 ETH in the past month alone.

But even for the LessWrong community of futurists, predicting Bitcoin's rise proved close to impossible, and Ethereum's ascent also caught some unawares.

“We vaguely converged onto the right answer in an epistemic sense. And 3 – 15% of us, not including me, actually took advantage of it and got somewhat rich,” wrote Scott Alexander on a LessWrong blog.

Given that LessWrong has bet on far more outlandish outcomes, such as cryonic immortality and malicious AI, in this case they were not entirely right, but perhaps a bit less wrong.
