Dear new reader,
This blog is intended as a conversational, topic-specific (neuroscience and rationalism) subthread of the commentary surrounding the blog Slate Star Codex by Scott Alexander.
Slate Star Codex is a current hotspot for fascinatingly divisive yet thoughtful discussion of rationality, politics, philosophy, and other such things. It grew out of Scott Alexander’s earlier blogging in various places, notably as Yvain on LessWrong.
LessWrong is now a mostly mothballed archive of rationality blogging, notably organized into a series of posts known as the Sequences. LessWrong also has a useful wiki on concepts in rationality and topics discussed in the Sequences. If you do decide to explore the Sequences, you will likely find yourself referring to the wiki for explanations of rationality in-group jargon.
Importantly, a new organization, CFAR, was spawned by the excitement and momentum around learning about and improving rationality, a movement that got its start as a cohesive popular phenomenon during the heyday of LessWrong. CFAR is moving ahead wonderfully with its agenda of teaching rationality concepts and practices to a wider audience.
Many of the blog posts in the LessWrong Sequences were contributed by Eliezer Yudkowsky, who works for MIRI. MIRI is an organization focused on developing the value-alignment algorithms necessary for Friendly Artificial General Intelligence, in the hopes both of bringing this about and of preventing the terrible counter-possibility of Unfriendly General AI. The Sequences have been compiled into a book called Rationality: From AI to Zombies. Eliezer formerly posted occasionally on a site called Overcoming Bias, which is primarily the work of Robin Hanson.
Goodness! That turned out to be a lot of background for a little twig of a subthread on a rather large tree!