Archive of comments from SSC: book-review-hive-mind

On the morality of abortion

Peter Gerdes says:

No! Almost all science selects where to focus in part based on concerns of funding/offense/misperception.

For instance, EVERY discussion (except maybe Dawkins, Singer and a few others) of either disability or abortion deliberately avoids poking the elephant in the room: there is an incredibly strong argument that we have a moral imperative to abort the disabled. You might say: that’s not science. True. But tallying up the average QALY loss a disability brings is science. Enumerating the tests that could be done if this were to become common practice, and how effective they are, is science. Lastly, even suggestive questions are fair game: if poor timing (try again later) is an acceptable consideration for abortion, why isn’t the higher expected life quality of a retry an even stronger consideration?

Why bring this up? Because in both cases what we do is avoid making well-supported inferences (not sure which re: IQ, but they exist) so that people can adopt the useful guise of neutrality/non-judgmentality, not to mention the practical benefit of avoiding all the disclaimers and caveats required to avoid being misinterpreted.

The point is that these hot buttons excite people so they are unable to process other information. If every time it was necessary to improve women’s access to abortions or fight restrictive laws someone chimed in ‘definitely, think of the utility gain from aborting disabled fetuses’ nothing useful would get accomplished. Similarly, talking about either immigration or race here would drown out anything else he has to say.

————–

Sigh, I do hope the abortion/disability hot button fades. I really see no difference (ok, extra pain and months of discomfort for the woman) between choosing to carry a disabled fetus to term rather than abort and reconceive (when possible), and administering poison or physical injury to a fetus. Same outcome (assuming abortion is not itself a substantial wrong), same choice, same moral question. Yes, it bothers me the same way that seeing people stand by and defend deliberate attempts to injure a fetus so as to produce a disabled child would.

 

  • Mary says:

    Tallying up the QALY loss to having your life cut off in utero is even easier.

     

    • Neurno says:

      Yes it is! It’s none. No QALY loss. Because you are your mind, not your neurons. A fetus’ brain is not yet capable of sustaining consciousness because its neurons are in GABA-polarity-flipped, connection-forming mode. Not until the GABAergic neurons flip to their normal (inhibitory) polarity is the neocortex capable of sustaining consciousness. At the time of birth, the chloride ion concentration inside the neurons switches from high to low. This causes the polarity of the GABAergic neurons to flip, and consciousness comes online for the first time.
      In rats it has been shown that suppressing the signals that trigger the chloride-ion concentration switch (such as the mother’s oxytocin) makes it harder for the switch to occur and makes the newborn mammal more likely to die of anoxia from failing to begin to breathe.
      In other words, your first breath of air corresponds closely to your first moment of consciousness, and thus the beginning of your existence as a sentient being (sapience develops later, obviously). Why? Because the same mechanism underlies both: the initial booting-up of the brain due to the chloride-ion concentration change.

      Fascinatingly, this lack of consciousness does not mean that the brain is not yet learning. Thus, the human mind which boots up at birth has some pre-encoded associations already (such as the sound of the mother’s voice as heard through amniotic fluid.)

      Anonymous says:

      Killing a fetus might well avoid most of the problems associated with killing a thinking human with dreams and preferences, but I don’t see how it gets around the loss of QALYs – unless you are assuming that the fetus will later be replaced by another.

       

      Mary says:

      “No QALY loss. Because you are your mind, not your neurons. A fetus’ brain is not yet capable of sustaining consciousness ”

      Not only switching the goalposts, as Anonymous observed, but making an argument that obviously can be extended much further, because your brain is not capable of sustaining consciousness in deep sleep, and so your abilities before and after are moot — you can be killed in your sleep with no loss of QALYs.

      Neurno says:

      @Mary: Not only switching the goalposts, as Anonymous observed, but making an argument that obviously can be extended much further, because your brain is not capable of sustaining consciousness in deep sleep, and so your abilities before and after are moot — you can be killed in your sleep with no loss of QALYs.

      I was hoping you’d bring up sleeping/unconscious people so I’d have an excuse to talk about my theories in those areas! This is a pretty topic-specific moral dilemma right now, but it has the potential to expand rapidly in the foreseeable future…
      So the difference here is between a brain that has never yet hosted a conscious mind, and a brain that has hosted a conscious mind. In the brain that has hosted a conscious mind, there is a previously existing mind that may come to exist again in that brain (in accordance with the probability that the person will ever wake up again. In this view, someone who is a ‘vegetable’ with no predicted chance of recovery also loses moral value.). Thus, you are effectively killing that extant mind which is attached to that body.
      If, on the other hand, the brain has never hosted a conscious mind, then there is no mind dependent on that brain which you would be depriving of its needed life-support system.
      Sleeping is different from deep anaesthesia. There is still a flickering partial presence of a mind in the case of sleep (relative to the depth of the sleep). Deep anaesthesia, which the pre-birth state is equivalent to, does not allow for this flicker of mind. So the mind is entirely halted, paused.
      The best analogy I can come up with at the moment is that of sabotaging an astronaut’s life support system. If you do so while the astronaut is in it, it is killing them. If you do so knowing that the astronaut will need it to survive a few hours from now, you are still responsible for their death should it occur. If you are destroying a space suit that has never been used, that belongs to no one, and by destroying it you are ensuring it will never be used… who is morally harmed by this? It is only the material cost of the space suit which is lost, not any harm done to a morally relevant being.

      The place into which this moral dilemma extends into the future is uploading. Hypothetically, consider the situation where at some point in the future uploading has been shown to be a safe and effective way of transferring a conscious mind into digital form, but it requires that the physical brain be destroyed in the process (I predict this will be true for a long while before nondestructive uploading becomes possible). In this case, is it morally acceptable to choose to upload oneself? Given that you are asking for your brain to be destroyed, if you were your physical brain this would be a sort of suicide / self-murder. But, if we correctly acknowledge that the mind is the being, and safely transferring the mind intact means that no being has been lost, then no one is killed.
      Similarly, if you choose to make a digital backup of yourself once you are uploaded, this digital backup is not a moral being until it is run for the first time. Once it has begun running for the first time, taking it back offline again is murder if you intend to not let it come back online again.

       

      @Anonymous
      December 9, 2015 at 7:45 pm
      Killing a fetus might well avoid most of the problems associated with killing a thinking human with dreams and preferences, but I don’t see how it gets around the loss of QALYs – unless you are assuming that the fetus will later be replaced by another.

      My personal moral values do not include QALYs for potential but never-yet-extant minds, only for minds that do exist or have-existed-and-likely-will-resume-existing. Thus, I do not believe that all women who have hit puberty are morally obligated to spend every possible moment of their child-bearing lives pregnant with as many babies as can be implanted in them with In Vitro Fertilization. It violates my moral instincts (and thus I have adjusted my moral philosophy, as most people do) to hold a worldview in which every woman must spend every year of her childbearing existence gestating a new batch of the maximum number of fetuses which can likely be birthed live (twelve? more?). Is that, may I ask, what you believe to be morally correct, or is it just what you thought that I believed?

       

      anon says:

      Someone who genuinely thinks fetuses are people and values the sanctity of human life and so on isn’t going to be convinced based on QALYs that killing what they consider a baby is ok. Hell, you don’t even have to think fetuses are people to be skeptical; it’s only an “incredibly strong argument” if you’re a consequentialist, which few people are.

       


      • Neurno says:

        @ Anon:
        I believe you have misunderstood me. I am telling you that fetuses are not hosts to human minds (yet).
        I do value the continuance of the lives of human minds. I also value those human minds having interesting and pleasurable experiences.
        I do not morally value human bodies, or even brains, in so far as no human mind is affected by them.
        Abortion, to my mind, has only the moral impacts that are derived from the human minds experiencing it.

         

        • anon says:

          No, I didn’t misunderstand you at all. You don’t think fetuses are people in a meaningful sense, other people do. These people are not obligated to endorse abortion based on any number of QALYs gained or lost, because the ethical systems they use to make decisions don’t consider QALYs a compelling reason to take a human life, which they hold as a sacred value.

           

      • Neurno says:

        @ Anon: I disagree with your use of the word “think” here. I don’t “think” that fetuses are not people, I made a scientific argument for why they are not. I would be happy to have this scientifically refuted by empirical evidence if I am incorrect.
        However, my scientific statement is not equivalent to someone else “thinking” that fetuses are people. If you “think” a thing without empirical evidence to support it, and someone presents you with contradictory empirical evidence, you should investigate your beliefs. If you find in the course of your investigations that the belief you hold is incorrect, you should change that belief.
        If a person is not their mind, at what point is there a sensible definition for what a person is?
        I have worked in a lab where I was given living brain tumors freshly excised from a patient. My job was to carefully dissociate the tumor cells from each other and grow them in a petri dish to study them. The goal was noble: to learn how to better stop cancer. But each of these cells, the cancer being highly mutated, had a slightly different genome. So I was growing hundreds of thousands of genetically distinct brain cells in the lab, and when we were done studying them, we destroyed them. Was the destruction of each of those human cancer cells morally equivalent to the murder of a human being?
        If genetically unique human brain tissue doesn’t count as a human, what does?

       

 

In which JBeshir makes an excellent observation about genetic engineering being both potentially helpful and quite terrifying

JBeshir says:

I’d say that we shouldn’t do that because we went down that road before in history and it went very wrong (thinking of the early 20th century US stuff). The impulsive rejection of trying-to-try anything that’s too exact a fit to that precedent is probably correct, even if you are immune to public pressure.

Similar to why we shouldn’t try to set up a single party state with a centralised administration of the economy/society and internal selection procedures to ensure everyone gets fed and no one is too poorly off and political pandering can’t interfere with reasoned administration; we have a fairly good idea that when you try-to-try that the incentives on people within your system work out badly. You end up putting absolute monsters in charge, and your ‘reasoned administration’ goes to hell. Or rather, brings hell to it.

If genetics are a big deal, we should act accordingly, but we can afford to avoid getting close to systems which behaved badly in the past, and we should.

(I think we can let the fact that we’re gaining terrifying understanding of genetic engineering deal with it. What exactly it is we’re going to use to deal with our terrifying understanding of genetic engineering is another problem.)

On a hypothetical treatment for low IQ

Just to put in a link to the super-stylized essays of La Griffe du Lion, which more than a few people have found hugely illuminating:

http://www.lagriffedulion.f2s.com/

 

  • Neurno says:

    After Scott linked La Griffe in his review above, I went and read those two articles, and then a bunch more. I was hopeful at first that I might have found someone who thinks like me, but ultimately I was disappointed. He seems quite adept at making the accurate observation that some people are substantially less smart than others. What he lacks is any sensible idea about what to do next. He responds not with empathy towards these people, not with an evident desire to help deliver them from their affliction, but rather with cruel, snide remarks about how we should be careful not to hire them for anything important, not let them into college, etc.
    We can do better. Much better. Easily.
    Did anyone here notice the post I made in the last open thread (OT36: Nes Threadol Hayah Sham) in which I offered a hypothetical cure for mediocre intelligence?
    If not you can visit my SSC subthread blog at neurorationalist dot wordpress dot com to see my post without having to wade through a lot of discussion about gun policy, or just do an in-page search in the comments.
    I offered a hypothetical about a dangerous brain surgery which had an 80% chance of upgrading the recipient two standard deviations of intelligence, and a 20% chance of killing them. Obviously, such a surgery would not fix our situation; too many people would die.
    Here’s a brighter hypothetical for you… What if we had a treatment available now, cheaper to apply per person than a single dose of aspirin, which would have a disproportionately greater effect on those with lower intelligence? For those 1 SD or more below humanity’s norm, it would raise them an average of 1 SD. For those within ±0.5 SD of the norm, it would raise them about 0.5 SD. For those higher, it would have a decreasing effect.
    Now, the downsides. First, the treatment is accompanied by a minor cold with some sneezing and coughing. Second, it only works on men. Third, it only works on a very specific part of men, their gametes. Ok, I lied, it doesn’t directly improve the intelligence of the men who receive it at all, it only affects their potential offspring.
    That’s it for downsides. No surgery, no significant risk of death or worsening of intelligence, and no effect on the brains of already extant humans. Just a shift in the next generation.
    What do you think, is it still worth applying the treatment even given these downsides?
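
A minimal sketch of the population arithmetic implied by this hypothetical, assuming a normal IQ distribution (mean 100, SD 15). The effect between -1 and -0.5 SD and the taper above +0.5 SD are unspecified above, so the ramps in the code are invented for illustration:

```python
# Toy simulation of the hypothetical germline treatment's effect on the
# next generation's IQ distribution. Effect sizes follow the comment above;
# the ramp between -1 and -0.5 SD and the fade to zero above +0.5 SD are
# unspecified there and are assumptions made for illustration only.
import numpy as np

rng = np.random.default_rng(42)
iq = rng.normal(100.0, 15.0, size=1_000_000)  # baseline generation

def boost_in_sd(z):
    """Treatment effect (in SD units) as a function of baseline z-score."""
    if z <= -1.0:
        return 1.0                            # "1 SD or more below": +1 SD
    if z <= -0.5:
        return -z                             # assumed ramp from +1.0 to +0.5
    if z <= 0.5:
        return 0.5                            # "within +/-0.5 SD": +0.5 SD
    return max(0.0, 0.5 * (2.0 - z) / 1.5)    # assumed fade to zero by +2 SD

z = (iq - 100.0) / 15.0
next_gen = iq + 15.0 * np.vectorize(boost_in_sd)(z)

print(f"mean IQ: {iq.mean():.1f} -> {next_gen.mean():.1f}")
print(f"share below IQ 85: {(iq < 85).mean():.1%} -> {(next_gen < 85).mean():.1%}")
```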

     

On other important factors besides IQ

Richard Metzler says:

Does the book mention the possible effects of emigration as well? It would seem that having a share of the smartest, most ambitious people move to richer countries imposes a brain drain that makes it harder for developing countries to escape the trap – unless a significant share of the emigrants return to their home countries eventually, with the knowledge and habits they formed in the rich societies.

I can see that there are arguments here for why IQ might influence nations more than individuals, and I’m supportive of efforts to raise IQ (nutrition, education, whatever), but what I’m wary of is any assumption that IQ could be significant when actually compared to other causal factors involved in a country’s economic fortunes and development. If it were important, and assuming that IQ has an important heritable component, how could we explain the rapid (genetically speaking) rise and fall of many nations’ economic development in directions that defied IQ? Greece seemed pretty good at solving coordination problems in ancient times when it was uniting countless city-states to fight the much larger Persian army. Yet today, not so much. Likewise the Middle East has seen extremely varied fortunes. The genetics may have changed a little, but not THAT much. Is it IQ levels, or is it mostly political and cultural problems resulting in institutional failures to address security, fairness and fiscal responsibility?

It seems to me that the primary path by which we could improve our nations’ economic fortunes is still the obvious one – good economic policy. If I were looking at immigration, I think I’d be thinking about a combination of security, cultural compatibility and humanitarian concerns/compassion rather than IQ level. You’re just not going to influence IQ levels enough to matter, given that it’s probably a very weak lever at best.

I think it was a bit unfortunate to let the rationality/IQ -> property-rights-friendly-institutions claim go completely unquestioned – markets are awesomely useful, but isn’t there a fairly strong case that mixed economies have consistently proven to provide the best outcomes in comparison to one extreme or the other? And even if we are neutral on that, I wasn’t aware that IQ correlated with pro-laissez-faire voting habits. Assuming any particular position on a long-contested political issue equates to rationality is an unreasonably strong claim – it’s silly to make out that politics is that straightforward. Perhaps I misinterpret “property-rights friendly institutions” to mean economic-right when it just means the government doesn’t rob people of their property, which would be much more reasonable.

Otherwise I found this like most of Scott’s reviews – the first half made me feel like the book author had some points but hadn’t properly addressed certain issues, and then just as I was about to start attacking my keyboard to comment, I find that Scott has already considered the vast majority of my concerns and already written them up more eloquently than I could. Enjoyable read.

 

  • Neurno says:

    Wow, another great comment. (For those here who haven’t yet checked out CitizensEarth’s blog, you totally should, it’s got some great ideas in it that could benefit from intelligent discussion.) You make an excellent point about society-wide policies, such as economic policy. I’d like to chime in with a historical example about my hobbyhorse, science.
    Are you familiar with the Golden Age of Arab Science? doi:10.1096/fj.06-0803ufm
    As in, that period of time from about 750-1258 C.E. when the Islamic states were the intellectual center of the Western world (East Asia was off doing its own thing). Yeah, that time when any scientist who wanted to learn about the best science available travelled to the main cities of the Arabic-Muslim Empire, and tons of scientific progress was made by the Arabic scientists who took up the mantle of progress from the Greeks. What an amazing, and statistically disproportionately impressive, time for science! Too bad it got the plug pulled on it when, around 1258 C.E., Islamic law shifted to strongly discourage new scientific innovation. Existing discoveries were still honored, but with scientific innovation stifled, the Golden Age of Science foundered. The mantle of science moved on. Ever since that time, those regions have consistently underperformed in scientific innovation. That is a social wound only beginning to heal to this day. Even now, many brilliant Arabic scientists go to other places (such as America, yay!) to study and innovate. If you love science, and are good at it, and have the capacity to go live in the part of the world where most of the other best scientists are… why not go? I know I would. Who cares about silly arbitrary things like ethnic background and national borders? Science seeks to transcend all that.
    If you, poor unenlightened being, do care about things like national borders and national economies, and comparative achievements in innovation, it would behoove you to make sure that your government does not repress scientific innovation. For the scientists will either stay, but not accomplish much, or leave to find freer intellectual waters, leaving your arbitrary nation-state bereft.

Introduction to this blog/subthread thing

Dear new reader,

This blog is intended as a conversational, topic-specific (neuroscience and rationalism) subthread of the commentary on the blog Slate Star Codex by Scott Alexander.

Slate Star Codex is a current hotspot for fascinatingly divisive yet thoughtful discussion on rationality, politics, philosophy, and random other such things. It in turn originates from Scott Alexander’s earlier blogging in various places, notably as Yvain on LessWrong.

LessWrong is now a mostly mothballed archive of rationality blogging, notably organized into a series of posts known as the Sequences. LessWrong also has a useful wiki on concepts in rationality and topics discussed in the Sequences. If you do decide to go explore the Sequences, you will likely find yourself referring to the wiki for explanation of rationality in-group jargon terms.

Importantly, a new organization, CFAR, was spawned by the excitement and momentum around learning about and improving rationality, which got its start as a cohesive popular movement during the heyday of LessWrong. CFAR is moving ahead wonderfully with their agenda of teaching rationality concepts and practices to a wider audience.

Many of the blog posts in the LessWrong Sequences were contributed by Eliezer Yudkowsky, who works for MIRI. MIRI is an organization focused on developing the necessary value-alignment algorithms for Friendly General Artificial Intelligence, in the hopes of both bringing this about and of preventing the terrible counter-possibility of Unfriendly General AI. The Sequences have been compiled into a book called Rationality: From AI to Zombies. Eliezer formerly posted on a site called Overcoming Bias, which is primarily the work of Robin Hanson.

Goodness! That turned out to be a lot of background to describe for a little-twig-of-a-subthread on a rather large tree!

Enjoy!

-Neurno

Continuing my conversation on rationality and cryogenics

Dear Dr Dealgood, Deiseach, and whomever else it may concern:

 If I understand correctly your (and some others’) objection to my announcement, “Cryogenic brain preservation is dead! Long live brain preservation! CLARITY is the new best way (for now)!”, you seem to be saying, “I am unsure if this apparently truthy evidence has value because almost all Rationalists believe (and, more importantly, are acting on) something contradictory.”

If that is indeed what you are intending to say, that sounds like a very strange and disappointing comment from someone who so recently claimed to seek to follow the true way of rationality rather than the false way of Rationality. The way of rationality, as I understand it, would suggest that you update your beliefs as appropriate based on the strength of the evidence, erect a signpost for others saying “I found something useful here once upon a time,” and then move on. If you should find yourself torn between holding a unique set of views based on the unique set of information you personally possess due to your unique life experiences and learnings-to-date, versus aligning your conceptions of truth and the nature of the universe to the beliefs of those you feel social pull towards… please, for your sake and mine, choose the unique set of beliefs!

For my own sake, I would love it if you would take some time to self-examine and lay out more of your unique constellation of beliefs. I am most especially interested in the places where your beliefs are most unique, or are shared by others in various pieces but form a unique pattern as a set. It is by deliberately seeking out and updating on the surprisingly-unusual-but-true beliefs of others that I strive to protect myself from becoming mired in the dark bog of stale shared belief-sets such as Rationality. New-and-surprising evidence which turns out, upon investigation, to be true is the brightest of beacons (the most concentrated transfer of bits) leading us along the maze-like intertwining paths of rationality towards the one true goal: objective reality.

So please, if you discover that I am wrong, tell me. If, however, you suspect me to be correct-but-alone, join me! (At least until I go astray again.) This is a cooperative adventure that does not reward the strategy of blindly following a leader or a social consensus. Which is not to say that social opinions can never be useful (e.g. prediction markets), but we must be ever vigilant against the human tendency to erroneously give widely held beliefs more weight than they deserve.

 

The comments that this post is in response to:

Dr Dealgood says:

December 7, 2015 at 12:27 pm

Can you practically use CLARITY on a whole human brain? When I looked it up the only protocols I found were for <=50 mL samples, which is excellent for studying mice but raises questions about how well the process would scale. I’m not an expert here by any means but given that it takes 5+ days to fix a mouse brain the rate of perfusion might be an obstacle in larger organs.

Also it’s a bit of a moot point because, while this is a potentially workable idea for preserving brains nobody is actually doing it. Almost all of the advocacy and all of the money in the Rationalist sphere is focused on freezing.
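
A back-of-envelope sketch of that scaling worry: if passive clearing is roughly diffusion-limited, processing time should grow with the square of the distance reagents must travel. The 5-day mouse figure comes from the comment above; the brain dimensions below are rough illustrative guesses, not measurements:

```python
# Back-of-envelope scaling check. If passive clearing is roughly
# diffusion-limited, processing time grows with the square of the distance
# reagents must travel. The 5-day whole-mouse-brain figure is quoted above;
# the size numbers below are rough illustrative guesses, not measurements.
MOUSE_DAYS = 5.0
MOUSE_HALF_MM = 4.0  # ~8 mm mouse brain, reagents entering from both sides

def clearing_days(half_thickness_mm):
    """Scale the mouse figure by (distance ratio)^2, assuming pure diffusion."""
    return MOUSE_DAYS * (half_thickness_mm / MOUSE_HALF_MM) ** 2

print(f"whole human brain (~100 mm): ~{clearing_days(50.0):,.0f} days")
print(f"~1 cm slabs: ~{clearing_days(5.0):.0f} days")
```

Under these assumed numbers an intact human brain would take on the order of years to clear, while ~1 cm slabs take about a week, which is consistent with why the whole-human-brain work discussed later in this post processed the brain in slices.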

 

Deiseach says:

December 7, 2015 at 5:52 pm

Neurno, right now people are forking out good money to have themselves, or their heads, frozen and preserved, or paying for the upkeep of frozen deceased family members, via that horse-and-carriage technology.

That’s my main beef: people are being sold a bill of goods that cannot be fulfilled. Better preservation techniques, invention of however the fuck you are going to read engrams or whatever, animal testing of both that shows they work and you get out the other end something almost entirely approximating what you put in – fine, once those bugs are worked out, then sell people “step right up, sign up for our process, and wake up in the wonderful world of tomorrow”.

As it is? Right now? And the companies that started forty years or so ago and froze people in the 60s? I think you’d be as well off to be turned into an Egyptian mummy.

Also – so you slice up the brain into sections? Well, if you can put Humpty-Dumpty back together again, then I think okay. I’d really like to see some animal tests done first, though.

It sounds rather too like Victor Frankenstein stitching separate body parts back together into a coherent whole and getting the resultant jigsaw to work.

Archiving my posts from SSC: ot36-nes-threadol-hayah-sham

Please note: for any new material, I have underlined my name (or “edit:”) and also colored it in red.

Neurno says:

Intro: I am usually a lurker and thorough reader of SSC and comments, and have read much of Scott Alexander’s work on SSC and Jackdaws…Sphinx, and much of LessWrong, and some Overcoming Bias. Writing doesn’t often come easily to me, so I rarely get around to expressing my viewpoint. This weekend, laid up sick in bed, and not having much else to do (since I’d just finished Hive Mind: why…IQ…), my reluctance to engage was overcome by a series of comments about the left/right political balance and lack of radical left commentators on SSC. I thought, “Hey, what about me? Oh yeah, I rarely say much.” And then the metaphorical floodgates burst open on my tiny dam, and a relative flood (for me) of writing poured out. And then some of it got eaten by the spam filter, and the rest got rather buried in a morass of (to me) totally worthless political grumblings about irrelevant petty issues like gun control.

So, I decided to pull my comments, some context, and some of the many enjoyable responses out of the morass and place them somewhere tidier and more under my control. So, here they are. Please feel free to continue engaging with me on these issues here or on SSC comment threads.

On the political spectrum make-up of the SSC commentariat

Neurno says:

December 6, 2015 at 1:13 pm

I consider myself to be a far-left rationalist (I was raised a progressive Quaker, and went left-er and more rational and atheist from there). On political test maps (e.g. the Political Compass) I find myself placed so far left that the multiple-choice tests often don’t even have choices for my true full views on political matters. I enjoy polite and well-thought-out comments from all areas of the political spectrum. Whenever I consider an unsatisfying political comment, I try to imagine what a smarter, more rational version of that person would say if they had approximately the same underlying values (edit: I meant ‘same underlying values’ as implied by their comments, not the same underlying values as held by me).

 

On HBD

Neurno says:

  • December 6, 2015 at 2:34 pm
  • I apologize in advance for a somewhat off-topic comment about HBD, but I feel my radical left viewpoint is being under-represented. As a radical leftist I believe that equality-of-mind is far more important than equality-of-capital. As a neuroscientist, I see the brain as a machine that can be taken apart, fixed/upgraded, and put back together. Hypothetically speaking, if a radical brain surgery existed which had an 80% chance of upgrading the recipient two standard deviations of IQ and a 20% chance of killing them or making them much worse off, at what level of intelligence should we consider an adult intelligent enough to make the decision as to whether to accept this surgery? My guess is around half a standard deviation above average IQ. (Less than that, and it should be mandatory.) A good way to figure out exactly where this cutoff should be would be to give each subject a factual document regarding the surgery; if they were then able to pass a test on the material, they would be allowed to make up their own mind.
  • In my worldview, minor differences between races (although real) seem like a petty and irrelevant topic for discussion.

 

  • Dr Dealgood says:
  • December 6, 2015 at 4:22 pm
  • @Neurno,
  • When you get to the point that you think killing 14% of the population (~45 million people) is a reasonable move, you should really step back and reevaluate the situation. Are those two standard deviations such a life-and-death issue?
  • The irony being that, since you subscribe to racial IQ differences, by your own estimation your hypothetical would end up killing proportionally many more African Americans (about 19% or 7.3 million). That’s going further than any actual racist I’ve ever met would admit to.
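
The arithmetic behind these percentages can be reconstructed, assuming normal IQ distributions; the population sizes and the subgroup mean of 85 used below are the stock figures this thread is arguing over, used only to check the math, not as endorsed inputs:

```python
# Reconstructing the quoted figures. Assumes normal IQ distributions; the
# population sizes and subgroup mean of 85 are the contested stock numbers
# from this thread, used here only to check the arithmetic.
from statistics import NormalDist

phi = NormalDist().cdf  # standard normal CDF
P_DEATH = 0.2           # the hypothetical surgery's 20% mortality
CUTOFF = 107.5          # mandatory below +0.5 SD on a mean-100, SD-15 scale

frac_us = P_DEATH * phi((CUTOFF - 100) / 15)
print(f"US overall: {frac_us:.0%} killed, ~{frac_us * 320e6 / 1e6:.0f} million")

frac_sub = P_DEATH * phi((CUTOFF - 85) / 15)   # subgroup mean of 85 assumed
print(f"subgroup:   {frac_sub:.0%} killed, ~{frac_sub * 39e6 / 1e6:.1f} million")
```

This reproduces the ~14% / ~45 million and ~19% / ~7.3 million figures to within rounding.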

 

  • Not Robin Hanson says:
  • December 6, 2015 at 5:28 pm
  • Might it not be more utilitarian the other way around? My impression is that there are strongly superlinear returns to IQ, so if you could administer the treatment to X people it would be best to choose those who already have the highest IQ. (At least to first order, but it seems futile to consider much more when hypotheticals are generally so underspecified.)
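
A minimal sketch of that allocation logic, assuming (purely for illustration, since the thread specifies no returns curve) that social value grows quadratically with z-score:

```python
# Illustration of why convex ("superlinear") returns favor treating the
# already-smart: with a convex value function, +1 SD at the top is worth
# more than +1 SD at the bottom. The quadratic form is an invented
# stand-in; the thread specifies no actual returns curve.
def value(z):
    return (z + 3.0) ** 2  # convex and positive over the plausible z range

for z0 in (-2.0, 0.0, 2.0):
    gain = value(z0 + 1.0) - value(z0)
    print(f"baseline z={z0:+.0f}: marginal value of +1 SD = {gain:.0f}")
# Gains come out 3, 7, 11: increasing in baseline z, so a utilitarian with
# this value function would target the treatment at the high end first.
```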

 

  • Jiro says:
  • December 6, 2015 at 7:22 pm
  • When you get to the point that you think killing 14% of the population (~45 million people) is a reasonable move, you should really step back and reevaluate the situation.
  • He only proposes letting the population (take a risk that it would) kill itself. He isn’t proposing that *he* kill anyone.

 

  • stargirl says:
  • December 6, 2015 at 7:51 pm
  • I would personally take the treatment if offered. But he is not offering to “let” people take the treatment. He is arguing that anyone below an IQ of approx 107.5 should be forced to take the treatment.

 

  • Neurno says:
  • December 6, 2015 at 8:04 pm
  • Dr. Dealgood:
  • You are absolutely correct that 80% is an unconscionably low success rate. I would never approve a mandatory operation with such a low survival rate. I just wanted to start with a provocative number to get people thinking and talking about it. At what survivability rate do you think it would be ethical/reasonable to allow people to choose such a treatment of their own free will? I’m thinking maybe 90%?
  • NotRobinHanson: You are absolutely correct. Thus, I have only offered such to those for whom I suspect an increase in intelligence would be of substantial benefit to society, and at the current level of risk made it clear to them that the offer is only valid if they can confirm that they are already in imminent danger of death from a currently-incurable illness (e.g. advanced pancreatic cancer).

 

  • stargirl says:
  • December 6, 2015 at 8:36 pm
  • The tables have now obviously flipped around lol.
  • I think people should be able to get any medical treatments they want provided they can show they understand the situation. I would probably institute a waiting period though.
  • I think people should be able to have doctors cut off their limbs or make them blind if that is what they want after serious consideration. If people want the IQ treatment, I do not think society has any right to stop them from undergoing the procedure.
  • Maybe if this procedure is destroying society we should put a stop to it. But the damage has to be clear, proven and large before we have a right to stop people from getting the treatments they think will improve their lives.

 

On nanomachines

  • rsaarelm says:
  • December 6, 2015 at 3:42 am
  • I think most transhumanists believe that physically possible nanomachines are going to happen. The Drexler debate was about whether it’s physically possible to build dry nanotechnology where you move individual atoms around into a crystalline structure instead of just setting up a wet nanotechnology protein soup and hoping something useful comes out of it. We already know that wet nanotechnology is possible because we have living cells.
  • TheAncientGeek says:
  • December 6, 2015 at 6:11 am
  • https://en.wikipedia.org/wiki/Drexler%E2%80%93Smalley_debate_on_molecular_nanotechnology
    • Faradn says:
    • December 6, 2015 at 4:53 pm
    • Yeah, Smalley’s objections are part of what I was thinking of. Nanotech as biomimicry with some limitations of application would still be on the table, but not nanotech as magic.
  • Neurno says:
  • December 6, 2015 at 7:45 pm
  • My perception (as a far-left transhumanist rationalist, and neuroscientist) is that I don’t have a very good understanding of the full potential of dry nanomachines. However, I suspect, and have read multiple papers that seem to more or less agree, that the likely near-term potential is of a machine less powerful than engineered bacteria (wet nanotechnology). Is that the full physically possible extent of the power of dry nanotechnology? I dunno, maybe.
  • So, I suspect the future will not have powerful dry nanotechnology (I.e. Drexlerian). But I don’t suspect this with more than about 75% confidence.
  • So, update on this as you will, keeping in mind that the simple plural of anecdote is not data of the kind that issues from a controlled experiment. 😉
    • Jeffrey Soreff says:
    • December 6, 2015 at 10:43 pm
    • So, I suspect the future will not have powerful dry nanotechnology (I.e. Drexlerian). But I don’t suspect this with more than about 75% confidence.
    • Any specific reasons? A scanning tunneling microscope was used to position xenon atoms on a nickel crystal with atomic precision back in 1989.
    • https://en.wikipedia.org/wiki/IBM_%28atoms%29
    • Now, whether anyone is actually ever going to pay for developing atomically precise mass production technology is anyone’s guess, but I think there is a strong case that it would work if it were built.
  • Neurno says:
  • December 9, 2015 at 4:41 am
  • Yes, good point. I guess what I meant to say is that I think it only 25% probable that anyone will successfully build working Drexlerian-style powerful dry nanomachines in the next 100 years. Now that you point it out, I agree that I have no evidence that the existence and function of such machines (if built) would be physically impossible.

 

 

On transhumanism, LessWrong, Yudkowsky, cryogenics / brain preservation / CLARITY

Faradn says:

December 6, 2015 at 12:07 am

It seems a lot of transhumanists and transhumanist-adjacent rationalists believe that Drexlerian nanomachines are going to happen, when the current scientific consensus seems to be that it’s physically impossible. Am I misunderstanding the typical transhumanist position, the scientific consensus, or maybe both?

 

 

  • John Schilling says:
  • December 6, 2015 at 12:37 am
  • Where do you find a scientific consensus about Drexlerian nanomachines being physically impossible? The general argument is that nanomachines are not ascribed any properties not already demonstrated by bacteria, and bacteria exist, so nanomachines physically can exist.
  • That we don’t have a clue how to design a bacterium that spends its time doing microscopic bits of some arbitrary macroscopic project of interest to us is another matter. Drexler and company are clearly a tad optimistic on the engineering side.
  • Dr Dealgood says:
  • December 6, 2015 at 12:38 am
  • No, not really.
  • Most of the big disconnects between lowercase-r rationality and uppercase-R Rationality come from their founder, who founded the group with the explicit goal of training people to be rational enough to agree with his extremely fringe views on things like AI and cryonics.
  • Since most of his apocalyptic AI scenarios rely heavily on it quickly devising and releasing something like Molecular Assemblers, it makes sense that the idea was hardcoded into the Rationalist system despite not making much rational sense.
    • Samuel Skinner says:
    • December 6, 2015 at 12:52 pm
    • You sure? Bioweapons as well as the long game fit fine in a doomsday AI fear.
    • anon says:
    • December 6, 2015 at 3:31 pm
    • While I agree completely that Yudkowsky founded LW to spread his fringe views (mainly the fringe view that people should give him their money) I think you’re underestimating the damage a superintelligent machine could do even with just today’s technology. There’s plenty it could do just over the internet, and it would be trivial for something several orders of magnitude smarter than human beings to cajole, bribe or blackmail real people into doing everything else.
    • Neurno says:
    • December 6, 2015 at 7:00 pm
    • Dr. Dealgood: I disagree with the factual implications of your comment, and I take issue with you insulting my meme-group, which I perceive you as describing as Yudkowsky’s fringe. He is not the origin of my ideas, he is just a popular spokesperson. Furthermore, I have rather a lot of experiential evidence to support my views on these “fringe” ideas. For example, cryogenics works, for the definition of work meaning “can successfully preserve the information contained within the brain of the subject as represented by the structure of neurons and glia, arrangement of synapses, and types and approximate quantities of proteins present in specific parts of each cell”. Furthermore, the digitizing of this information is already possible with current technology. As a neuroscientist I have carefully cryosectioned, labelled, and imaged with laser microscopy many mammalian brain samples. I don’t know when it will be possible to digitally emulate a mind based on this brain data, but my understanding of computational neuroscience suggests that it is quite possible. However, asserting that I support cryogenics would be incorrect. I don’t support cryogenics, because it is outdated. There is a better, safer (less likely to lose brain information), cheaper, easier method that does not require an expensive cryogenics company: hydrogel embedding (CLARITY). (If researching this, do not confuse it with brain plasticization, which risks losing information.) I have informed Yudkowsky that he is factually incorrect in this regard, but I don’t know if he has yet researched the topic himself and correspondingly updated his views.
    • Furthermore, my understanding of the current research in computational neuroscience speaks to the near-term plausibility of biomimetic AI. I am not qualified to speak on the subject of non-biomimetic AI. I am unsure whether biomimetic AI presents any existential risk, but I do believe it will be commonplace within a couple of decades.
    • I do doubt that non-biological nanomachines will ever work well, although some crude ones have been made in labs. Don’t underestimate the power of biological nanomachines/ engineered cells though! I’ve only dabbled in genetic engineering/virus manufacture/stem cell engineering, and yet my humble projects have done some pretty impressive things on occasion!
    • Again, on any topic outside his field, please consider Yudkowsky to be a well-intentioned popular science-and-philosophy writer (whom I happen to like a lot even if he’s not as correct as he seems to think he is about a lot of current science!). Don’t judge the capacities of science and the possibilities that the future holds by any perceived flaws or limitations in his writing.
      • Dr Dealgood says:
      • December 6, 2015 at 8:09 pm
      • I might have phrased my comment too impolitely, since this is a rationalist space. After all I don’t particularly like it when other people metaphorically come into my house and spit on my carpet. But it seems pretty clear that these positions have for the most part been adopted without or even against the available evidence.
      • Going back to cryonics for example: I’m not a neuroscience guy myself, though I am working with a glial cell transcription factor at the moment, but I would contest your characterization of it nonetheless. There’s a huge difference between freezing or perfusing a 10 micrometer slice of tissue that you never intend to thaw and cryonically preserving an entire human brain. Maybe at some future date we will be able to do that, but in the meantime paying even a single dime to companies like Alcor is utterly irrational.
      • For another example, take nanotechnology. Yudkowsky seems not to know the difference between wet protein or nucleic acid enzymes which actually exist and dry Drexlerian assemblers which probably can’t exist, and that leads him to some very bizarre statements. Like the idea that, once his hypothetical UFAI cracks protein folding (without actually having done any non-simulated experiments naturally) it can whip up the sequence that codes for build-anything nanobots, send it to some guy, and FOOM… world destroyed. It’s just more grey goo hysteria, except with a killer AI instead of a sloppy chemist.
      • There’s other examples that I’m not really qualified to speak on, like the QM stuff or the basic computer science behind his conception of AIs, but the responses I’ve heard from other people who know what they’re talking about sound like my responses to the above. It’s not encouraging, particularly for a self-proclaimed expert in rationality, to pick irrational positions so consistently.
        • Neurno says:
        • December 6, 2015 at 9:51 pm
        • Dear Dr Dealgood:
        • Thank you for your apology, and in return I also apologize for being touchy about it. To give some context to my reaction, I labored intensely for many years in intellectual isolation researching what I felt to be my possible avenue of contribution to the meta-progress of the human race: radical biological intelligence enhancement via genetic modification of consenting adults. Then, relatively recently, Overcoming Bias spawned LessWrong, and suddenly an intellectual community willing to seriously discuss the issues I believe to be of utmost importance sprang up.
        • It is the willingness of people like yourself to discuss these issues that I primarily value, but I do also very much appreciate Yudkowsky’s popularization of the ideas. Why, despite his frequent factual errors? Because I cannot communicate these things nearly so compellingly or clearly. My partner and my close friends did not understand when I tried to explain why I have been laboring on this personal project for so many years. After getting them to read LessWrong, they do get it. They get my quest, why I care, why it might really matter. That is no small thing, and for that, I am willing to overlook rather a lot of factual failings. Hopefully, if the new book form (Rationality: From AI to Zombies, which I haven’t read yet, so maybe it’s already somewhat better) gets enough traction, a second edition with the factual failings ‘updated’ by scientists from each specific field referred to can be issued.
        • nope says:
        • December 7, 2015 at 3:32 am
        • @Neurno: if you’re still around, I’d like to pick your brains re: intelligence enhancement in adults. My partner and I are involved in various things related to human intelligence, but on the enhancement side, the only significant possibilities look pre-natal. If this is wrong, we would be very excited to learn why. throwaway5283 at google mail dot com if you’re interested.
        • Deiseach says:
        • December 7, 2015 at 6:25 am
        • There’s a huge difference between freezing or perfusing a 10 micrometer slice of tissue that you never intend to thaw and cryonically preserving an entire human brain. Maybe at some future date we will be able to do that, but in the meantime paying even a single dime to companies like Alcor is utterly irrational.
        • Thank you for succinctly stating my lack of interest in (I can’t really call it opposition to) cryonics. I get that, for raising money for research purposes, companies need to do the whole “we can freeze you and thaw you out in the future” bit, but it does feel to me like taking advantage of the vulnerable, the desperate, and the grieving right now as any people or parts thereof frozen under current technology have, I submit, little to no chance of being successfully re-thawed (that’s not getting into “Oh, the future won’t thaw them out, they’ll read their brain engrams and copy them into a new body/upload to virtual space”).
        • I’m not saying it will never work, just that there is a lot of research and trial-and-error and experimenting on animals to see if it can be done right (and I’m sorry all the animal-rights people, but we’re talking about doing this to chimpanzees and other high-level primates to see if it can work for us).
        • Right now? Donate for research, certainly. Pay to have your dead body frozen and kept in storage for fifty-plus years? That’s hucksterism on the P.T. Barnum scale.
        • Murphy says:
        • December 7, 2015 at 9:10 am
        • Personally I don’t think it would work (my opinion: <1% chance), but I hesitate to call it irrational.
        • It’s a long-odds bet with massive potential payoff.
        • If I was a multi-millionaire or billionaire I could see myself taking that bet.
        • Even if it doesn’t work and nobody can extract a whole personality from a frozen brain, there’s good odds that the future equivalent of archaeologists will find the very well preserved bodies of people from, say, a century in their past to be highly useful for understanding our current society, health, diseases, etc.
        • Neurno says:
        • December 7, 2015 at 11:49 am
        • @nope: email sent. Let me know here if you don’t receive it.
        • @Deiseich and Murphy:
        • Would you please please please stop discussing horse-and-carriage technology in the discussion about whether it is better to cross a continent on foot or with a cheap jet plane ticket? Seriously, no! That is not the issue at all! Eghads! What are you, Amish, Mennonite, Shaker? Why would you ever ever freeze a brain?! No. Bad. Wrong. Multiple vastly superior options exist.
        • The best option currently is, as I said, hydrogel embedding. This requires only a dead brain (or more conveniently a severed head) to be immersed in a bucket of ~10% paraformaldehyde solution, and the sealed bucket placed in a refrigerator. In a few days, or weeks, whichever is convenient, an expert can come along and convert the brain tissue to a hydrogel embedded sample.
        • This hydrogel embedded sample is optically clear and can be very effectively immunolabeled and confocal laser microscope scanned over and over with no loss of information. The sample is physically stable (strong and plastic-y)(unlike brain tissue preserved in paraformaldehyde and/or alcohol), and will be stable at room temperature for many decades (at least). Scientists can safely study your brain many times over without damaging the sample or its precious information, and many separate attempts to digitize the information can be made (and compared with each other to get the best total info). The hydrogel embedding and initial paraformaldehyde preservation are quite cheap (under $100) and so useful to scientists that they might well pay your descendants for the privilege of non-harmfully studying it.
        • Because there is pretty much no downside to asking your grandkids to keep your preserved brain in a bucket in their garage, you might as well do it, both for the slim chance that you might be successfully uploaded and for the hugely probable case that your brain would be of great benefit to science at no loss to you. Having scientists non-destructively studying your brain for generations to come would only increase the odds that you might be successfully digitized and emulated someday!
        • Dr Dealgood says:
        • December 7, 2015 at 12:27 pm
        • Can you practically use CLARITY on a whole human brain? When I looked it up the only protocols I found were for <=50 mL samples, which is excellent for studying mice but raises questions about how well the process would scale. I’m not an expert here by any means but given that it takes 5+ days to fix a mouse brain the rate of perfusion might be an obstacle in larger organs.
        • Also it’s a bit of a moot point because, while this is a potentially workable idea for preserving brains nobody is actually doing it. Almost all of the advocacy and all of the money in the Rationalist sphere is focused on freezing.
        • Murphy says:
        • December 7, 2015 at 12:41 pm
        • @Neurno
        • I’m even more skeptical but hey, if it’s really cheap. Let us know if/when a company starts doing this commercially.
        • If I could get my brain preserved somewhere safe like that for, say, $1000 I’d possibly go for it. It’d be cheaper than many currently popular death-rituals.
        • Neurno says:
        • December 7, 2015 at 2:42 pm
        • @Murphy:
        • No company necessary if you’re able/willing to talk a friend/relative into the DIY option. Just ask that your brain be stuck in a bucket with some paraformaldehyde and ask a scientist to come study it with CLARITY.
        • That being said, I can see how a lot of people might not be so into the DIY option. I recognize that I may be somewhat unusual in enjoying studying/handling the brain tissue of the recently deceased. I do hope someone starts such a business soon!
        • @Dr Dealgood:
        • I use this forum for my DIY nitty-gritty on CLARITY.
        • forum.claritytechniques dot org
        • clarityresourcecenter dot org
        • Here is a link to a paper in which it was used on a whole human brain from a brain bank (processed in about 1 cm thick slices) which had been stored a long while previously in formalin.
        • It is possible to do this in much thicker sections, it just takes longer before the brain is optically clear and ready to observe. It’s actually better for information preservation to use the slower method, but it’s hard to be sufficiently patient! I hate having to leave a sample in the clearing solution for months just to be able to answer the question that I prepared it for!
        • http://onlinelibrary.wiley.com/doi/10.1111/nan.12293/full
        • Actually, it can be done not just on brain tissue but on the whole body (but why bother for other than medical research? You are your brain). Note of caution while researching this: images of whole animal hydrogel preservation samples are not for the weak of stomach.
        • Dr Dealgood says:
        • December 7, 2015 at 4:00 pm
        • Thanks for the link. I had seen the bit on preserving other organs, I think it was mentioned in passing by a team optimizing the protocol for whole mouse brains.
        • I also think our definitions of whole are a bit different. To me, once you section a brain it is by definition no longer a whole brain. Assuming arguendo that you can scan and emulate brains, you’d still presumably want it in as few pieces as possible.
        • But yeah, definitely going to check that place out.
        • Deiseach says:
        • December 7, 2015 at 5:52 pm
        • Neurno, right now people are forking out good money to have themselves, or their heads, frozen and preserved, or paying for the upkeep of frozen deceased family members, via that horse-and-carriage technology.
        • That’s my main beef: people are being sold a bill of goods that cannot be fulfilled. Better preservation techniques, invention of however the fuck you are going to read engrams or whatever, animal testing of both that shows they work and you get out the other end something almost entirely approximating what you put in – fine, once those bugs are worked out, then sell people “step right up, sign up for our process, and wake up in the wonderful world of tomorrow”.
        • As it is? Right now? And the companies that started forty years or so ago and froze people in the 60s? I think you’d be as well off to be turned into an Egyptian mummy.
        • Also – so you slice up the brain into sections? Well, if you can put Humpty-Dumpty back together again, then I think okay. I’d really like to see some animal tests done first, though.
        • It sounds rather too like Victor Frankenstein stitching separate body parts back together into a coherent whole and getting the resultant jigsaw to work.

On why so many different neurotransmitters might have evolved

Oleg S says:

December 5, 2015 at 1:58 am

Is there any idea of why there are so many different neurotransmitters in the brain?

Ok, I understand that glutamate is the major excitatory neurotransmitter and GABA is the major inhibitory neurotransmitter. Excitatory and inhibitory synapses are modeled really well by artificial neural networks with positive and negative weights on connections. But there are also D-serine, serotonin, acetylcholine, dopamine, norepinephrine, histamine and a whole lot of other compounds that somehow stimulate or affect neurons. Wouldn’t it be much easier to drop them and use plain excitation/inhibition networks?

Of course I understand that Nature has her own reasons. Still, the question is: what do those additional neurotransmitters do that cannot be captured by classic artificial neural networks, and why is this function so vital that they persist through almost the entire animal kingdom?
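
For concreteness, a minimal sketch of the kind of “plain excitation/inhibition network” the question refers to: one signed weight per connection and a fixed threshold per neuron. The sizes and numbers are illustrative only:

```python
# Minimal "plain excitation/inhibition network" of the kind the question
# refers to: one signed weight per connection (positive = excitatory, like
# glutamate; negative = inhibitory, like GABA) and a fixed threshold per
# neuron. Sizes and numbers are illustrative only.
import numpy as np

rng = np.random.default_rng(0)
n_in, n_out = 8, 4
W = rng.normal(0.0, 1.0, size=(n_out, n_in))  # signs encode excite/inhibit
THRESHOLD = 1.0                                # fixed firing threshold

def step(x):
    """A neuron fires (1) if its summed signed input exceeds its threshold."""
    return (W @ x > THRESHOLD).astype(int)

x = rng.integers(0, 2, size=n_in)  # a binary input spike pattern
print("input: ", x)
print("output:", step(x))
```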

 

  • Daniel Speyer says:
  • December 5, 2015 at 4:26 am
  • A partial reason is that they do more than excite or inhibit. A lot of the receptors for the less common neurotransmitters are G-protein-coupled, which means they release special signaling molecules inside the cell. This can trigger a complex chain of other reactions, which I don’t think anyone has fully mapped. Sometimes there can be an excitatory or inhibitory effect, but sometimes something gets phosphorylated.
    • Oleg S. says:
    • December 5, 2015 at 6:18 am
    • Ok, I can understand that there are some sort of motor-like neurons, which secrete some less common neurotransmitters that activate GPCRs, which in turn trigger other reactions that ultimately lead to changes in patterns of gene expression, secretion of hormones, and other rather specific events.
    • However, take for example dopamine signaling. It can function very much like ion-channel signaling: once the D1 receptor is activated by dopamine, the corresponding G-protein binds to adenylate cyclase, which converts ATP to cAMP, which activates PKA, which phosphorylates a Na+ ion channel, which opens, and so an action potential is generated and sent down. Binding of dopamine to D2 receptors inhibits adenylate cyclase, so these receptors can be regarded as inhibitory.
    • The serotonin receptor 5-HT4 works very similarly to D1 – it also activates adenylate cyclase. So from an internal point of view these receptors work very similarly. And one member of the serotonin receptor family, 5-HT3, is an ion channel that functions very much like other ion channels, causing depolarization when it binds serotonin.
    • But the distribution and overall effects of these neurotransmitters are profoundly different. Dopamine receptors are abundant in the CNS, and are implicated in motivation, pleasure, cognition, learning, etc. It’s as if I had two types of electric wires in my house: copper wires for mundane appliances and silver wires for devices which help me earn money.
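
From the cell’s point of view, each cascade Oleg describes collapses to a sign. A toy rendering (mine; the real kinetics are continuous and far messier than a boolean):

def dopamine_effect(receptor, dopamine_bound):
    # Toy model of the cascades above. D1: dopamine -> Gs protein -> adenylate
    # cyclase UP -> cAMP -> PKA -> Na+ channel phosphorylated and open -> excitatory.
    # D2: same machinery, but adenylate cyclase DOWN -> inhibitory.
    if not dopamine_bound:
        return 0
    return +1 if receptor == "D1" else -1  # D2

The puzzle, as Oleg says, is why such different plumbing persists when the output alphabet is this small.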
      • Neurno says:
      • December 6, 2015 at 4:13 pm
      • I think it would be totally possible to design a brain/mind with a simpler set of neurotransmitters, but that would require a substantial (inefficient) redesign of neurons. Consider, for instance, the frontal cortical pyramidal neurons that receive glutamate/GABA as instructions to raise/lower their probability of firing in a near-future, time-sensitive way. But they also use background diffused dopamine levels to adjust variables such as firing threshold and “mental exhaustion threshold” (poetic license for clarity’s sake!). This could be accomplished just as well by having a direct connection from the dopamine-diffusing neurons to every single target neuron to adjust all their thresholds, but that would require far more neural tissue and physiological effort, with high associated costs. So there is no need to code these neurotransmitters into an AI as such, but there is a substantial need for them in the messy part-digital, part-analogue system of a meat brain.
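
Neurno’s “background variable” framing maps neatly onto code: a broadcast neuromodulator is one shared scalar, whereas doing the same job with direct synapses costs one connection per target neuron. A minimal sketch (my own, under exactly that assumption):

import numpy as np

rng = np.random.default_rng(0)
n_neurons = 1000
drive = rng.normal(size=n_neurons)      # summed synaptic input per neuron

# Point-to-point scheme: the modulating cell needs a dedicated weight to every
# target just to shift its threshold -- n_neurons extra connections ("tissue").
per_target_weights = np.full(n_neurons, 0.2)
fires_wired = drive + per_target_weights > 1.0

# Broadcast scheme: one diffuse dopamine level shifts every threshold at once --
# a single shared variable, no extra wiring.
dopamine_level = 0.2
fires_diffuse = drive + dopamine_level > 1.0

assert (fires_wired == fires_diffuse).all()  # same computation, far less hardware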
  • onyomi says:
  • December 5, 2015 at 11:56 am
  • I can’t even understand how there could be so many different dials, controls, etc. in an airplane cockpit; and yet…
  • http://etc.usf.edu/clippix/pix/portion-of-a-control-panel-in-an-airplane-cockpit_medium.jpg
  • Scott Alexander says:
  • December 5, 2015 at 7:26 pm
  • I don’t have a good answer, but here’s a crackpot theory:
  • One of the main ways we get new genes is by mutations that randomly duplicate old genes. The copies then randomly diverge through genetic drift, and evolution gets the chance to do one thing with one copy while leaving the other copy mostly intact.
  • (warning: the following is a really really dumb toy example and bears no relation to real dopamine receptors)
  • So suppose that at first all we had was D1 receptors doing everything. Then the gene randomly duplicates, so we have two copies of the D1 receptor gene. Then they drift apart, and one becomes modern D1 and the other modern D2. And suppose that, by coincidence of whatever drift they’re getting, D1 ends up expressed more in the reward system and D2 in the motor system. And suppose evolution wants to implement something like “sex should be very rewarding, so when you get sex, increase stimulation of dopamine receptors in the reward system”. And suppose evolution doesn’t want you to be having weird muscle tics and seizures every time you have sex because you’re also overstimulating the motor system, or to give you Parkinson’s Disease every time you have a dry spell. Evolution can make the D1 receptors sensitive to sex, and the D2 receptors not sensitive to it, and get what it wants. Now there’s evolutionary pressure to keep the D1/D2 distinction and it will be preserved.
  • Fast forward a few million years and you’ve got the modern picture of 5-7 different dopamine receptors plus serotonin, norepinephrine, glutamate, GABA, and a million other things.
  • In other words, the more neurotransmitters you have, the more finesse evolution can use when tuning different systems up or down. If we only had one neurotransmitter, glutamate, and evolution wanted us to have less sex for some reason, all it could do is tone down glutamate, in which case we’d be generally more tired and relaxed and non-thing-doing. But that would be bad for other reasons; for example, we would also seek less food. If instead there’s a single neurotransmitter involved specifically in sex, evolution can just turn that one down.
  • I realize that this has the problem of neurotransmitters not really corresponding well to simple things like sex, but it may be they correspond better to very complicated hidden variables that evolution frequently wants to tune, or at least they did in the past, or at least as much as they can given that evolution is inherently hard and inefficient.
  • I have no idea if this is actually true or not and other people may correct me if they know better.
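
Scott’s toy example can be made quantitative in a dozen lines (still a dumb toy, and mine rather than his): with one shared receptor gene, tuning the reward side drags the motor side along; after duplication, the two knobs move independently.

import random

def fitness(reward_gain, motor_gain):
    # Hypothetical fitness landscape (numbers invented for illustration):
    # reward signaling near 3 is ideal; motor overstimulation above 1 costs dearly.
    return -(reward_gain - 3.0) ** 2 - 10 * max(0.0, motor_gain - 1.0) ** 2

def hill_climb(coupled, steps=2000):
    r = m = 1.0
    for _ in range(steps):
        dr = random.gauss(0, 0.05)
        dm = dr if coupled else random.gauss(0, 0.05)  # one gene vs. two genes
        if fitness(r + dr, m + dm) > fitness(r, m):
            r, m = r + dr, m + dm
    return fitness(r, m)

random.seed(1)
print("one shared receptor:", round(hill_climb(coupled=True), 2))   # stuck at a compromise
print("after duplication:  ", round(hill_climb(coupled=False), 2))  # climbs to ~0, the optimum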
    • Oleg S. says:
    • December 6, 2015 at 3:17 am
    • The corresponding analogy in silico, as I understand it, would be: “Let’s have a genetic algorithm evolve us the best neural network for some particular purpose. We’ll have several types of neurons and some basic architecture of connections, and the prevalence of each type of neuron (or the strength of their connections) will be varied in the GA.” A type of neuron in the artificial network would then correspond to a neurotransmitter/receptor. If we design the network really well, we’ll be able to assign the same type label to neurons that have similar function, and optimize them in the GA separately.
    • One way to avoid a lot of receptors would be to compartmentalize neurons: have neurons that do a certain aspect of information processing (like emotions, motor control, image recognition) located near each other (like in the amygdala, cerebellum, visual cortex – I’m simplifying of course), and then have handles on blood and nutrient flows to those regions. The two ways are very similar from a computational point of view – we can label groups of neurons in whatever way we want. Probably Nature uses both ways to control the neuronal populations responsible for particular tasks.
    • A way to test this theory would be to engineer a small animal (like the roundworm C. elegans) that has all its amine neurotransmitter receptors internalized at some point in life and replaced by glutamate/GABA channels, and then to see how well it does. The null hypothesis would be that once the neural circuitry is established (the animal is adult) and the receptors are replaced by their analogues, the behavior of the modified animals should be basically the same as that of the control group.
    • However, my gut feeling is that worms lacking all amine (dopamine, serotonin) receptors and the enzymes producing those neurotransmitters would not develop normally. So I expect that, apart from being tools to regulate different information-processing systems on an evolutionary scale, different neurotransmitters may have something to do with neural development in each individual animal.
      • nope says:
      • December 6, 2015 at 4:19 am
      • Wouldn’t compartmentalization be worse for more general abilities? And for communication between modules?
    • nope says:
    • December 6, 2015 at 4:14 am
    • This was the first thing that intuitively occurred to me. I can’t really think of an easier way for biology to stumble onto selectivity in up/down-regulation than this one, which may simply speak to my level of sophistication on this issue. Are there any examples of organisms displaying high behavioral complexity with few neurotransmitters? And does neurotransmitter complexity correlate decently with behavioral complexity? Sounds testable!
    • Neurno says:
    • December 6, 2015 at 4:21 pm
    • edit: @Scott Alexander
    • This fits well as a simplified explanation of what I understand the current somewhat-more-complicated scientific explanation to be for the question “how did all these similar but different genes/neurotransmitters/receptors come to be?”
    • JuanPeron says:
    • December 7, 2015 at 2:00 pm
    • This seems to be a solid biological path to acquiring more degrees of freedom in a system. That doesn’t tell us whether it’s the true explanation, but it’s really promising as a way to get more mental states without adding tons of new neurons or whole brain regions. Basic excitation neural nets seem to be fully representative of brains (Turing complete and all that), but we can use more complicated signaling/neurotransmitter systems to shrink those nets.
    • If you want some, but not all, regions of the brain to change their behavior in the face of tiredness, sex, etc., you can either add a lot of new neurons to modulate the new effect, or slap some new transmitters on the regions you want to alter and keep brain size the same.

 

On the dangers of Genetic Engineering / CRISPR

Anonymous says:

  December 5, 2015 at 12:01 am

  There seems to have been a flurry of long-form articles published about CRISPR over the past few weeks. For example:

http://www.nytimes.com/2015/11/15/magazine/the-crispr-quandary.html
http://www.newyorker.com/magazine/2015/11/16/the-gene-hackers
https://www.sciencenews.org/article/gene-drives-spread-their-wings

Reading these, I’ve been especially interested in seeing how the mainstream discussion of CRISPR-ethics starts to unfold. For now, the articles seem to focus nearly exclusively on the problems/risks involved with editing human DNA, with some also mentioning ecological risks of gene drives gone wrong in plants or animals. While there’s obviously a lot to unpack with these issues already, I’ve been surprised by the relatively small concern there seems to be about deliberately edited viruses used for bioterrorism, which is the particular risk that scares me the most. Apart from one vague mention at the end of a Wired article from July (http://www.wired.com/2015/07/crispr-dna-editing-2/), I haven’t found anything that talks specifically about CRISPR as a weapon.  Could someone more knowledgeable about biology than I am please let me know why we shouldn’t be completely terrified about this? Is there some reason why it would be insanely, unrealistically hard to, say, make a couple choice edits to an influenza virus and create something way more deadly and infectious? Because right now I’m having a hard time seeing why CRISPR isn’t a “black ball” discovery, to use Bostrom’s term.

    • Dr Dealgood says:
    • December 5, 2015 at 1:14 am
    • CRISPR is a tool that’s mainly used for precisely modifying eukaryotic genomes: viral genomes are small enough that you really don’t need anything very sophisticated to make new viruses. In fact CRISPR is often delivered via a modified and very safe derivative of HIV called a lentiviral vector.
    • And as for being used as a weapon on its own, the only thing I’ve heard of close to that might be the idea to use it for gene drives against mosquitoes. Basically adding in “selfish genes” (that are more likely to be passed on) which also code for a particular trait, so that in a few generations much or most of the targeted population would have it. For obvious reasons this isn’t practical as a weapon against humans: even if millions were hit initially and it went completely undiscovered it would still be somewhere on the order of centuries before it would be a problem.
    • (EDIT: Didn’t see you already mentioned gene drives, presumably you knew about that already. Whoops.)
    • Anyway, as a general rule I would argue that we should avoid worrying about things we don’t understand. Wild speculation doesn’t make anyone safer and can distract you from risks which you do have meaningful control over.
    • svalbardcaretaker says:
    • December 5, 2015 at 8:16 am
    • We’ve had the technology to make deadly plagues since the early 2000s. Interestingly enough, the effort to keep that out of the public’s eye seems to have been successful; digging it up is somewhat convoluted.
    • CRISPR only makes stuff like that easier; so in short, there is every right to be terrified.
    • Neurno says:
    • December 6, 2015 at 3:47 pm
    • As a researcher who has been hacking genomes since well before convenient and powerful tools such as CRISPR… This is a real issue; the world has been in danger from this for well over a decade now, and the government has been scrambling quietly and largely ineffectually to try to reduce this danger. Fortunately, so far, the only people smart/educated enough to be a risk for designing such a weapon have been too wise to do so. Let us all hope that remains true for the foreseeable future, at least until the government comes up with some better defenses. I recommend increasing government funding for counter-bioterrorism research.

 

On dabbling in religion

Context: Maware wrote in response to a line of comments following a question about which religion each of us would choose if we were dabbling in religion…

Maware says:

  • December 6, 2015 at 12:06 pm
  • This is because they are so irreligious that they can say “wouldn’t it be nice to be this?” The people who daydream about living on a south seas island have nothing in common with those who live on one, and if they had to actually deal with the reality of living on said island, would go nuts and probably disdain the islanders.
  • The people who hate a religion are closer to it than those who view it as a tourist spot with lovely architecture and quaint local rituals.

 

 

  •  Jiro says:
    • December 6, 2015 at 12:52 pm
    • The people who daydream about living on a south seas island have nothing in common with those who live on one
    • Somewhat unrelated comment: remember back when the same was true for sci-fi fans wishing they could live on a space colony?

 

  •   Neurno says:
    • December 6, 2015 at 3:30 pm
    • As an educated rationalist who grew up as a non-Mormon in an apparently happy-go-lucky, outwardly cheerful, highly repressive and bigoted but low-crime (other than domestic abuse) small Mormon town… I would like to very strongly second Maware’s point. Fie on religious conformity approaching the power of law or uniform social sanction. I can assure you that bad things lurk down that road for free-thinkers.

 

On the potential value of a materialist pseudo-religion

A Postdoc says:

December 5, 2015 at 4:12 am

I was pondering recently whether part of the “purpose” of religion is to hack the intensely social nature of human cognition to get people to do things. It’s just easier to make people care about doing something if it “makes God happy” or “defeats the demons” than for some abstract reason like “it will make society better.” This seems to still be a true thing about human cognition (for instance, look how angry we get about terrorists while ignoring problems with a much larger body count but no human face.) So maybe we need a religion that includes both untrue-but-psychologically-motivating aliefs (“malaria nets make God happy!”) and true-but-abstract beliefs (“God is just a convenient label for an abstract set of moral principles.”) I’m not sure how well people could handle the cognitive dissonance in practice, but I feel like it would be an interesting experiment.

Neurno says:

December 6, 2015 at 4:00 pm

Absolutely not. I see religion as a failing of weak minds. Improve the weak minds, and I predict religion (along with any apparent social need for it) will simply disappear due to disinterest.

Dr Dealgood says:

December 5, 2015 at 10:39 am

  • We haven’t had a very good track record with materialist pseudo-religions over the last few centuries, it’s probably for the best to avoid repeating that mistake. I’m also a little fuzzy on why the world needs yet another competing unifying force: if anything it seems like they’re the root of a lot of our present issues.
  • As for religions compatible with science, why not go back to classical sources? Stoicism is probably the most logical choice, but I’ve heard good things about Neoconfucianism (New Confucianism on the other hand seems like a bit of a mess). If you really wanted to you could even go for something like an updated Hermeticism: alchemy and astrology historically developed into chemistry and physics, so the idea of using those disciplines to seek enlightenment still sort of makes sense.

 

 

Deiseach says:

December 5, 2015 at 11:47 am

The kind of “religion of Humanity” which R.H. Benson describes in his 1907 apocalyptic SF novel “Lord of the World”? Based on scientific understanding, where Mankind is the only transcendent thing?

There was but one hope on the religious side, as he had told Mabel a dozen times, and that was that the Quietistic Pantheism which for the last century had made such giant strides in East and West alike, among Mohammedans, Buddhists, Hindus, Confucianists and the rest, should avail to check the supernatural frenzy that inspired their exoteric brethren. Pantheism, he understood, was what he held himself; for him “God” was the developing sum of created life, and impersonal Unity was the essence of His being; competition then was the great heresy that set men one against another and delayed all progress; for, to his mind, progress lay in the merging of the individual in the family, of the family in the commonwealth, of the commonwealth in the continent, and of the continent in the world. Finally, the world itself at any moment was no more than the mood of impersonal life. It was, in fact, the Catholic idea with the supernatural left out, a union of earthly fortunes, an abandonment of individualism on the one side, and of supernaturalism on the other. It was treason to appeal from God Immanent to God Transcendent; there was no God transcendent; God, so far as He could be known, was man.

Yet these two, husband and wife after a fashion — for they had entered into that terminable contract now recognised explicitly by the State—these two were very far from sharing in the usual heavy dulness of mere materialists. The world, for them, beat with one ardent life blossoming in flower and beast and man, a torrent of beautiful vigour flowing from a deep source and irrigating all that moved or felt. Its romance was the more appreciable because it was comprehensible to the minds that sprang from it; there were mysteries in it, but mysteries that enticed rather than baffled, for they unfolded new glories with every discovery that man could make; even inanimate objects, the fossil, the electric current, the far-off stars, these were dust thrown off by the Spirit of the World—fragrant with His Presence and eloquent of His Nature. For example, the announcement made by Klein, the astronomer, twenty years before, that the inhabitation of certain planets had become a certified fact—how vastly this had altered men’s views of themselves. But the one condition of progress and the building of Jerusalem, on the planet that happened to be men’s dwelling place, was peace, not the sword which Christ brought or that which Mahomet wielded; but peace that arose from, not passed, understanding; the peace that sprang from a knowledge that man was all and was able to develop himself only by sympathy with his fellows. To Oliver and his wife, then, the last century seemed like a revelation; little by little the old superstitions had died, and the new light broadened; the Spirit of the World had roused Himself, the sun had dawned in the west; and now with horror and loathing they had seen the clouds gather once more in the quarter whence all superstition had had its birth.

(After Mabel has seen a volor crash and people killed for the first time in her life; there are government officials who mercy-kill the very badly hurt, not likely to survive victims. “Down the steps of the great hospital on her right came figures running now, hatless, each carrying what looked like an old-fashioned camera. She knew what those men were, and her heart leaped in relief. They were the ministers of euthanasia.”)

“My dear, it’s all very sad; but you know it doesn’t really matter. It’s all over.”

“And — and they’ve just stopped?”

“Why, yes.”

Mabel compressed her lips a little; then she sighed. She had an agitated sort of meditation in the train. She knew perfectly that it was sheer nerves; but she could not just yet shake them off. As she had said, it was the first time she had seen death.

“And that priest — that priest doesn’t think so?”

“My dear, I’ll tell you what he believes. He believes that that man whom he showed the crucifix to, and said those words over, is alive somewhere, in spite of his brain being dead: he is not quite sure where; but he is either in a kind of smelting works being slowly burned; or, if he is very lucky, and that piece of wood took effect, he is somewhere beyond the clouds, before Three Persons who are only One although They are Three; that there are quantities of other people there, a Woman in Blue, a great many others in white with their heads under their arms, and still more with their heads on one side; and that they’ve all got harps and go on singing for ever and ever, and walking about on the clouds, and liking it very much indeed. He thinks, too, that all these nice people are perpetually looking down upon the aforesaid smelting-works, and praising the Three Great Persons for making them. That’s what the priest believes. Now you know it’s not likely; that kind of thing may be very nice, but it isn’t true.”

Mabel smiled pleasantly. She had never heard it put so well.

“No, my dear, you’re quite right. That sort of thing isn’t true. How can he believe it? He looked quite intelligent!”

“My dear girl, if I had told you in your cradle that the moon was green cheese, and had hammered at you ever since, every day and all day, that it was, you’d very nearly believe it by now. Why, you know in your heart that the euthanatisers are the real priests. Of course you do.”

John Schilling says:

December 5, 2015 at 1:20 pm

Materialist pseudoreligions, e.g. Marxism, Gaian environmentalism, have stumbled into the religion niche inadvertently and often in opposition to their founders’ intent to Not Start A Religion Because Religions Are Reactionary Nonsense.

It seems like it would be worth trying to design a few nontheistic religions with deliberate intent and through selective appropriation of the good parts of traditional religions, to see if it would do any better. I can think of one obvious example that shan’t be named, that has turned out to be fairly successful and mostly harmless except for all the vindictiveness towards apostates and critical heretics. Probably we could do better; maybe we could do well enough to base a society on the results.

 

  • Protagoras says:
  • December 5, 2015 at 6:08 pm
  • I think someone may have mentioned it elsewhere in this very thread, but one theory that seems to militate against any such materialist pseudoreligion being worthwhile is that our brains seem to be mostly wired for social interaction, with the other things we do with them being mostly lucky side effects. This is presumably the reason people mistakenly try to interact with the inanimate world as if it were consciously motivated. It’s plausible that this is also responsible for some of the benefits of religion; if the main goal is changing yourself, rather than understanding or changing the world (and there are plenty of cases where changing yourself seems extremely valuable), interacting with the world as if it were conscious may well get more of your brain involved and make it easier to make more extensive changes. Obviously, if there’s any merit to that theory, trying to construct a religion without the supernatural elements isn’t going to be very productive. It may be possible to get the benefits without taking the supernatural elements fully seriously; it’s not clear how this works. But if you wish to do supernatural pretense, there are existing religions that are tolerant of doubt and metaphorical interpretations. No need to invent a new one.

 

Max says:

December 6, 2015 at 2:43 pm

Why does religious belief have to be compatible with science and rationality? Science and rationality are tools to help man understand his physical world and its systems. It’s a perversion, even a subversion, of religion to presume it has the same purpose.

Because when the preacher goes and says evolution is wrong because “holy book”, and that “love is the most important thing” – but said love is very hard to find among its practitioners, and you can see the corruption without even trying hard – you kinda start doubting the whole thing very fast. And wondering if your purpose is to spend your life on things which your intuition tells you are wrong in many cases.

Old religions worked all right when they were compatible with the general worldview. But even then not everything was peachy either – hence churches generally tended to become very corrupt and very prone to violence in order to keep the population “believing in the right thing”.

 

  • Tar Far says:
  • December 6, 2015 at 8:06 pm
  • I don’t know what your preacher said, and I take it as a given that some number of preachers are corrupt, but my general impression of the religious response against evolutionary science comes out of a fear that fallible men will interpret evolution to mean that there is nothing divine or sacred about our bodies, that there is no higher purpose for living besides perpetuating the species, that morality and virtue are relative, etc.
  • Isn’t such a fear justified?

 

Max says:

December 7, 2015 at 3:41 pm

Nope. Because Truth should be sacred, no matter how much it can hurt.

Yeah, it is easier to accept lies and give rationalization and justification for them. “The road to hell is paved with good intentions.”

The challenge is to accept that people die, that people commit extremely cruel and violent acts – not because of Satanic corruption, but of their own volition. Because acceptance of Truth is the first step towards understanding and finding solutions.

If you give in to Lies, even comforting ones – that is a path… leading exactly where the old religions and ideologies have led us so far.

On my tactical approach to discussing HBD (above)

Technically Not Anonymous says:

December 6, 2015 at 8:34 pm

This should be interesting. The Future Primaeval announces they’re done pretending to not be completely evil (and confirms my suspicions that that is totally a thing ~the group which shall not be named~ deliberately does.)

Neurno says:

December 7, 2015 at 12:46 pm

@technically not anonymous:

I have seen this tactic taken before by ‘dark enlightenment’ types. Only after seeing this comment thread did I finally realize how I might use this concept to my advantage. In my perception the ‘dark enlightenment’ types are often evil black-robed philosophers going about disguised under robes of grey. Upon garnering what they feel to be a sufficient audience, they dramatically cast aside their grey robes and reveal that they were black-robed all along. “Haha,” they say, “I tricked you into taking my ideas seriously when normally you would have dismissed me out of hand! Now the seed of petty, small-minded, hateful philosophy has been planted in your brain and soon you shall grow to be like me!” Thus do they attempt to win converts.

Having finally grokked this, I have taken their strategy and reversed it, to great success! I came skulking into this comment thread in tattered robes of darkest black, posing as a highly controversial and somewhat frightening Mad Scientist. Once my controversy had gathered me an audience, I cast aside my robes of black and revealed myself to be clad in robes of shining grey! “Haha,” I declared, “I tricked you! I got you to think about my ideas seriously, when normally you would have dismissed them out of hand for being too science-y and uncontroversial! Now I have planted the seeds of science and rational thought about boring matters of potentially great importance in your brain, and soon you shall grow to be like me!”

I don’t know how necessary or efficacious this gambit actually was, because I lack an adequate control group, but it certainly was fun! For Fun Theory! For the bright shining destiny of Humankind! For the painstakingly slow and precise advancement of potentially-boring but also potentially-hugely-important plans/tools/concepts for the meta-advancement of human sapience! Huzzah!

 

Cost-Benefit Analysis of Nuclear Power

My argument is not that nuclear power can’t be done wrong, but rather, that it can be done right. The obstacles are primarily social and economic rather than physical or technological.

My stance on the environment can be taken to be in line with that described by CitizensEarth at

https://citizensearth.wordpress.com/visions-of-earth/

Big picture

These are long term nuclear goals, which can be achieved piecemeal, while also expanding solar and wind power.

If the premise that nuclear power can be done safely and cost-effectively without generating harmful waste that is difficult to dispose of is correct, several important things follow from this:

  • Electricity can be non-polluting and carbon-neutral. This allows:
    • Reduced need for fossil fuels stops worsening climate change and pollution
    • Current home and industrial uses of fossil fuel power can be replaced with electric power even in situations where currently the carbon and pollution released by producing the electricity would outweigh the benefits of switching away from fossil fuel usage
      • For example: gas heating and cooking elements could be replaced with electric ones.
    • Freight transport can be done via electric train and large nuclear-powered shipping vessels
    • Industrial operations can mostly be converted to use primarily electricity / electric motors
      • Mining operations for uranium and thorium can be done with electric-powered mining equipment, thus making the fission fuel gathering carbon neutral
      • Mining operations for other things, such as building materials
      • Factory operations can use electrical power, for producing basic high-energy-requirement goods such as aluminum, concrete and glass
      • Factory operations for producing solar panels and wind power generators can be run on electricity, making solar and wind power more completely carbon-neutral from the outset
    • Powering production of carbon-neutral biodiesel (for use in industrial applications where direct electrical power would be impractical, such as freight trucks and some mining/construction operations)
      • One way this can be accomplished is by using greenhouses with electric lighting to grow high-oil-yield algae
    • Large-scale carbon sequestration can be powered by electricity including:
      • Pumping collected carbon dioxide into underground reservoirs
      • Powering machines/greenhouse lighting for growing or gathering plant materials for conversion to inorganic carbon (charcoal) which can be used as a fertilizer additive (biochar) to sequester carbon in topsoil while improving topsoil quality and crop yields (this is complementary with biodiesel production)

Bonus benefit: Reduced x-risk from loss of sunlight due to particulate matter in the atmosphere. This applies to comet impacts, supervolcano explosions, and nuclear war (in which the phenomenon is known as nuclear winter).

Downsides

-Centralized power in the hands of large corporations or governments

-Risk of dramatic dangerous failures

-Uranium and Thorium mining will initially not be carbon neutral

 

Why now?

Key changes that have occurred since the early days of nuclear power development

  • Recognizing Thorium as a valid energy source (more abundant in earth’s crust, less costly to mine, total fissionable materials should be considered as easily mineable uranium plus easily mineable thorium)
  • Improvements in robotics making it easier to safely reprocess fissionable materials with less cost and risk to workers
  • Fusion has been explored thoroughly over the past 40 years, and can now be more confidently assessed as not near-term viable
  • We are now more certain than ever that fossil fuels are a really bad plan
  • Energy demands have continued to increase, and can be expected to increase even more (at least if we want to bring the entire population of the world up to 1st world consumption levels)
  • Solar and Wind have had substantial research and development and yet retain some apparently unavoidable drawbacks:
    • primarily: poor production/replacement cost to lifetime energy output ratios
    • other costs/limitations such as substantial land use, rare mineral requirements, high variability of power supply, etc. which make them undesirable as the primary source of energy for humanity
  • Long-term experience with molten salts as the primary thermal conductor/coolant instead of water for high-temperature systems has shown this method to be safe and reliable. This experience has come partly from early alternative fission reactor design explorations, but mostly from industrial solar applications (parabolic mirror arrays). This means that fission reactors can be made much safer and more efficient, cheaper to build, cost-effective at smaller sizes and in more locations, and (where appropriate) can even be buried partially or completely underground to greatly increase safety.
  • Another former downside of reprocessing nuclear waste / breeder reactors was the increased availability of weapons-grade plutonium as a result. This is now less of an issue because of widespread nuclear development around the world: China, India, Pakistan, and even North Korea have made nuclear weapons on their own. As far as proliferation is concerned, the bird has flown the coop.
  • If more energy is utilized from the fissionable material through reprocessing, vastly less initial fissionable material will be necessary, thus greatly reducing mining costs.
  • The most dangerous portion of the nuclear waste is the actinides, which reprocessing/breeder reactors not only can eliminate but can actually harvest the substantial energy from, turning a huge negative into a positive
  • The current method of dealing with nuclear waste is encasing it in glass, encasing the resulting radioactive glass in metal, and burying the metal containers deep in seismically stable bedrock. If molten glass is used in the reactor as the initial coolant, and all the reprocessing is done as part of the normal function of the reactor, the waste will be prepackaged for disposal.


What about Fukushima and Chernobyl?

A key factor about Fukushima and Chernobyl (and indeed, all currently active nuclear power plants) is that they are water cooled. They use water as the primary coolant: the medium by which heat is conveyed from the fissionable material to the turbine generators which convert the heat energy to mechanical energy and then to electrical energy.

Unfortunately, water at a temperature useful for power generation is very near its boiling point. If it does boil, it rapidly and dramatically increases in pressure as it becomes gaseous water (steam), followed by a rapid decrease in pressure if it then cools back to liquid water. This fluctuation in pressure is very difficult to deal with, not only because of the difficulty of physically containing it, but because venting the gaseous water away means losing your (radiation-polluted) coolant, thus potentially allowing the fissionable material to overheat and cause a catastrophic meltdown. This is what happened at Chernobyl. A combination of reactor design flaws and operator error during a test allowed steam pressure to accumulate to dangerous levels. This resulted in steam explosions, which in turn meant that the reactor lost its coolant, which led the graphite moderator (the material that slows neutrons so the chain reaction can be sustained and controlled) to catch fire. This fire spread radiation-polluted ash from the burning graphite into the atmosphere, where it spread widely and contaminated a wide region.
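
The scale of that problem is easy to put numbers on (standard steam-table values; the framing is mine):

# Why boiling coolant is so hard to contain, in round numbers:
liquid_m3_per_kg = 0.001   # liquid water near 100 C
steam_m3_per_kg  = 1.673   # saturated steam at 100 C and 1 atm
print(f"volume expansion on boiling: ~{steam_m3_per_kg / liquid_m3_per_kg:.0f}x")
# To keep water liquid at a useful ~315 C, a pressurized-water reactor must hold
# its primary loop at roughly 150 atmospheres -- hence the massive pressure vessels.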

The big difference with a molten-rock-cooled reactor is that the molten rock is nowhere near its boiling point. Thus, operating the reactor is not a dangerous balancing act, but a comparatively very safe affair. Safer still is the fact that a molten-rock-cooled reactor will “fail safe”. Instead of the coolant boiling away (as water does), the overheated rock would melt the armatures holding back the neutron absorber that can completely stop the fission reaction. The absorber would fall into place, the reaction would stop, and the molten rock would cool to a solid lump, trapping the radioactive material inside. This is a tricky situation in which to restart the reactor, but it offers no risk of coolant loss, reactor meltdown, or radioactive material being released in any way to the outside. No boiling-coolant risk, thus no coolant-loss risk, no air-exposed-moderator fire risk, no Chernobyl-style disasters.
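
In software terms, the fail-safe logic above reduces to something like this (a toy state sketch of my own; the temperatures are illustrative, not engineering figures):

def molten_rock_reactor(core_temp_c, armature_melt_c=700):
    # Overheating *passively* melts the armatures and the neutron absorber
    # drops in: no sensor, valve, or operator anywhere in the loop.
    if core_temp_c < armature_melt_c:
        return "operating: salt liquid, far below its boiling point"
    return "shut down: absorber dropped, fission stopped, coolant freezing solid"

def water_cooled_reactor(core_temp_c, boil_c_at_pressure=345):
    # Contrast: overheat (or lose pressure) and the coolant flashes to steam,
    # removing cooling exactly when it is needed most -- fail-dangerous.
    if core_temp_c < boil_c_at_pressure:
        return "operating: coolant liquid only while pressure is maintained"
    return "emergency: coolant boiling off, pressure spiking, core uncovering"

print(molten_rock_reactor(900))
print(water_cooled_reactor(400))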

 

What about Fukushima?

Again, Fukushima is a water-cooled reactor, which means it fails-dangerous if anything goes wrong. A big earthquake and flooding went wrong. It failed-dangerous, as could be expected from a system set up in such a way that fail-dangerous is the default. Fortunately, it only failed a little bit, and thus was only a little bit dangerous compared to Chernobyl. Still bad. Still not how we should design a potentially dangerous system such as a nuclear reactor. Again, this would not be the case for a fail-safe design, where loss of operator control would result in the reactor ceasing fission automatically and safely sealing all the fissionable materials (surrounded by their fully engaged neutron absorber) in solid rock.

Furthermore, using molten rock as a coolant means you can build the reactor in safer places to begin with, such as deep underground. In this case, it would make more sense to consider the reactor to be a sort of “artificial geothermal source” rather than what we envision as a nuclear reactor of the water-cooled style. The reactor could be a mostly passive device which would fail-safe if overheated, and we would pump water onto/across the surface of the molten rock heat exchanger to generate steam to turn the electrical generators. Again, if for whatever reason this secondary coolant supply was cut off, the reactor would overheat (with no risk of phase change of the primary coolant!) and fail-safe by automatically shutting itself down.

 

Do we know this would work?

Yes, we can be pretty sure it would (I estimate about as sure as knowing that a jet plane is safe to use as a method of travel, which is pretty darn sure). The United States has already built and operated such a molten-rock-cooled reactor (the Molten-Salt Reactor Experiment at Oak Ridge, which ran from 1965 to 1969), and it worked fine. When it was decided to shut it down and stick with water-cooled reactors (for political reasons), the reactor shut down safely just as anticipated. So we already have a historical example of a successful trial run with a successful shutdown. “Not only can we expect this to work, it did work.”

 

Why aren’t people talking about this already?

Well, they are, but it hasn’t gotten a lot of press. One of the ideas being talked about is a specific category of molten-rock-cooled reactors called the LFTR (Liquid Fluoride Thorium Reactor). As you can guess from the name, the designers of reactors in this category propose using fluoride salt as the molten rock that cools the reactor, and thorium as the primary fuel source. There are multiple different reactor designs in this category, and other possible categories as well, such as one which uses a different molten rock as a coolant and/or uranium as the fuel source. What I’ve described about fail-safes and safer operating constraints (not having the balancing act of almost-boiling coolant) applies to molten-rock-cooled reactors in general, not just Liquid Fluoride Thorium Reactors.

Both China and India have announced large research projects exploring thorium as a nuclear power fuel source, although it appears that India is focusing mainly on trying to adapt water-cooled reactors to be able to use thorium as a fuel source (because India has a whole bunch of easily available thorium). This relatively conservative take, rather than going for the safer-but-stranger option of molten-rock-cooled reactors, retains many of the same drawbacks (fail-dangerous) as current uranium-fueled water-cooled reactors.

For more on LFTR technology, check out the Wikipedia page, the YouTube link below, or look them up on Google Scholar.

https://en.wikipedia.org/wiki/Liquid_fluoride_thorium_reactor

https://youtu.be/nYxlpeJEKmw

 

Show me the Money

So, does the math work out? Is the cost of mining and refining (with electrical power) uranium and thorium, plus the cost of carefully disposing of the waste, plus the cost of all the infrastructure needed to support this, really worth the amount of power produced?

How does this compare to other expandable non-fossil-fuel sources such as wind and solar?

 

(spreadsheet in progress)

 

Getting there from here

The big problem is that fossil fuel industries are currently not being held to account for their externalities (air pollution, greenhouse gasses). Nuclear power plants are (more or less), and we want that to be even more the case. If the full externalities of waste control and harm prevention are on the producing industry, then the cleanest sources will also be the cheapest!

Holding fossil fuel industries to account for the complete cost of preventing or cleaning up all the pollutants released, including fully sequestering as much CO2 and CO as they release, will solve this problem. Wind, solar, and clean nuclear will seem like very good investments when compared to fossil fuels and old wasteful nuclear.

Part of the resistance to adopting the idea of waste-responsibility has been the (accurate!) fear that wind and solar would not be sufficient to power their own production and all of society’s needs (residential, freight, and industrial). With the additional option of clean nuclear, the math works out and society can continue to grow and flourish without destroying the environment or ourselves. Yay!
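
To make the accounting claim concrete, here is the shape of the argument with invented numbers (every figure below is made up for illustration, not a real cost estimate):

sources = {
    # name: (private cost per kWh, externality cost per kWh) -- all hypothetical
    "coal":          (0.05, 0.08),   # cheap at the meter, expensive to the world
    "clean nuclear": (0.09, 0.01),
    "wind/solar":    (0.10, 0.01),
}
for name, (private, external) in sources.items():
    print(f"{name:13s} sticker: ${private:.2f}/kWh   full cost: ${private + external:.2f}/kWh")
# Once externalities land on the producer, the "expensive" clean options win.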

When cost-effective fusion power comes along, we can replace the moderately clean fission plants with the even cleaner fusion plants and have access to even more energy. Fission is a key bridge to get us there, though.

 

Sources or GTFO!

Alright, if you’ve gotten this far you may be thinking, “Nice pie in the sky dreams there, bucko, but this doesn’t match up with anything I’ve heard before. The logic sounds nice, the math seems promising, but I don’t believe the premises!”

To this I would reply, “What exactly don’t you believe?”

I would expect to get a list back (please feel free to suggest more!) that sounds something like this: I don’t believe molten-fuel reactors are a real, functional thing; I don’t believe the nuclear waste produced will be safe enough to dispose of by encasing in glass and burying; I don’t believe we can use thorium as a fuel; I don’t believe we can cost-effectively mine thorium and uranium with electrical power; I don’t believe we can cost-effectively run industrial-scale factories on electrical power rather than fossil fuels; etc.

 

(This part is boring so I’ve put off working on it, but I’ll get here.)

Ideas for Orthogonal Politics

Towards the development of a science-guided Orthogonal Party Platform

Orthogonal Politics: pulling the tug-o’-war rope in a third direction, aiming to avoid directions where anyone is currently pulling exactly opposite you.

 

(note: please taboo all extant political parties/figures/platforms and discuss only specific issues and their merits. The point is to develop well-founded empirical evidence-based opinions on under-discussed issues.)

 

Priorities – largely Fun theory based re: Eliezer Yudkowsky               

( http://lesswrong.com/lw/xy/the_fun_theory_sequence/ )

 

  1. Survival of sapient society – reduction of extinction risk
    1. Generally speaking, I prefer a universe with sapient minds in it to one without, even if this entails more sapient suffering than would occur in a universe without.
  2. Alleviation/prevention of sapient suffering
    1. Good lives > poor lives.
    2. Effective Altruism; planning and working to avoid dystopian situations
    3. Avoiding Repugnant Conclusion from following only Priority 1 (many sapient minds with lives only barely worth living)
      1. https://thingofthings.wordpress.com/2015/10/09/on-the-allegedly-repugnant-conclusion/
  3. Scientific Progress / Exploration
    1. seeking knowledge and satisfying curiosity as a fundamental moral goal, as part of living good lives.
  4. Sustainability / Sustainable Growth
    1. Growth supports 1 & 3, and doing it wisely supports 2, but can also be considered a goal in and of itself.  I prefer a future with a mostly populated galaxy/galactic cluster to a mostly unpopulated one. More minds living good lives > fewer minds living good lives.
    2. transition to longer term, less polluting power sources: first fission, later fusion
    3. try to avoid growing at too fast a rate, past capacity to develop physical/social/political infrastructure to support the growth
  5. Application / Realization
    1. Some sort of minimal framework overarching government/agreement/contract with these principles, as outlined by Scott Alexander?
      1. http://slatestarcodex.com/2014/06/07/archipelago-and-atomic-communitarianism/
    2. unwarranted force prevention / penalization

 

Questions:

  1. Is this too much for the most basic overarching framework? Not enough?
  2. What sort of enforcement? Taxation? How to deal with violations? How to define violations?

 

Plank 1: More public funding for fission research.

-Focusing on reducing externalities (cost of structures, risks, danger and quantity of waste produced, cost of storing/disposing of waste, inefficiency of delivery)

Premise: Although it is theoretically possible to substantially redesign fission reactors to have decreased externalities and possibly to harness fuels like thorium, relatively little public research has been done on this issue.

Premise: Fossil fuels are limited and have many pollution issues (esp. air pollution leading to lung cancer downwind, and global warming contributions), and have better uses to which they can be put (e.g. synthesizing plastic)

Premise: Much funding and scientific effort has gone into fusion research with no payoff so far, making it seem like a dead end for at least the near future (until some significant breakthrough in physics, or the development of large, expensive space-based facilities).

Premise: Wind and solar have drawbacks and do not seem to offer a sufficient supply of power for humanity given costs of production, rare elements required that are in limited supply, etc. and given the assumption of substantially growing power-needs in a well-developed future world (e.g. all nations being 1st world nations).

 

Plank 2: Pre-registration of all publicly funded scientific studies

Premise: The scientific benefits would be huge: p-hacking is currently a really problematic issue in the life sciences (biology/neuroscience/psychology) and can most effectively be addressed by requiring scientists to preregister planned experiments and report their findings whether positive or negative, instead of only when positive (see the simulation sketch below). Negative reports could be shorter in length, and need not be published in a specific journal, but they must be publicly available, precisely describe the methods used, and be web-searchable so that meta-analyses may use them.

Premise: The additional costs of this policy would be small, as would the additional burden on scientists.
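
For anyone who wants to see the problem concretely, here is a minimal simulation (my illustration) of one common form of p-hacking: peeking at accumulating data and stopping as soon as p < 0.05, on data with no real effect at all.

import numpy as np
from scipy import stats

def null_experiment(max_n=100, peek_every=10, alpha=0.05):
    # Both groups are drawn from the same distribution: any "effect" is noise.
    a = np.random.normal(0, 1, max_n)
    b = np.random.normal(0, 1, max_n)
    for n in range(peek_every, max_n + 1, peek_every):
        if stats.ttest_ind(a[:n], b[:n]).pvalue < alpha:
            return True   # stop early and report a "significant" finding
    return False

trials = 5000
rate = sum(null_experiment() for _ in range(trials)) / trials
print(f"false positive rate with peeking: {rate:.1%}")  # roughly 15-20%, not the nominal 5%

Preregistration pins down the sample size and the analysis in advance, which is exactly what kills this inflation.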

 

Plank 3: More public funding (direct support and medical research emphasis) for early-development humans (pre-conception to age 4 or 5).

Premise: Average IQ and average mental health are very important factors for the well-being / success of a society.

Hive Mind: How Your Nation’s IQ Matters So Much More Than Your Own

by Garett Jones  Link: http://amzn.com/B015PS7DBK

Premise: Based on the current scientific understanding of human brain development, factors such as the health of the originating eggs and sperm, the womb environment, and the social/physical environment of the early developmental years are vastly disproportionately important for maximizing the eventual IQ and mental health of the resulting human (within the potentials/limitations of their genes).

Premise: Funding for improvement of health/intelligence/well-being of people in society should be distributed where it can be expected to have maximal effect, and thus the bulk of the research and monetary focus should be on at-risk fetuses and infants rather than elementary-school-age children, teenagers, or adults.

 

Plank 4: All political debates should have politically-neutral subtitles describing argumentative fallacies and fact-checking all factual statements (expressed or implied). For example: Clearer thinking’s subtitling of the debates. http://www.clearerthinking.org/#!the-2016-presidential-debates–subtitled/wt7g0

 

Plank 5: Build Lunar colony, then asteroid colony and Mars colony

 

Plank 6: Effective altruism advancement

 

Comments copied from SSC

Vaniver says:

November 25, 2015 at 11:55 am

Plank 1: More public funding for fission research.

Fission research? What we really need is the moral / legal authority to incentivize fission plants correctly relative to coal plants (i.e. tax coal plants correctly for their exhaust, and I’m not talking about the carbon component). Right now fission’s externalities are correctly priced but other power production isn’t (and thus is implicitly subsidized).

That is, yes, we could have even nicer fission plants. But we could also have even nicer coal plants. We need to cut at the root of the problem and correct distortions that make coal look better than fission, at which point research into superior fission will happen naturally.

Plank 4: All political debates should have politically-neutral subtitles describing argumentative fallacies and fact-checking all factual statements (expressed or implied).

This does not seem safe to trust the government to do.

  • science says:
  • November 25, 2015 at 1:02 pm
  • The situation around insurance for fission plants makes me quibble with your “externalities are correctly priced” point. Ditto for waste disposal and decommissioning generally.
  • Agreed though that the situation with coal is far worse.
  • Neurno says:
  • November 25, 2015 at 3:03 pm
  • I’m in agreement that fixing the unfair coal/gas incentives, so that nuclear is less unequally treated, would also be a good solution. I just thought that would be harder to get past the fossil fuel private interest groups. I figured public funding for fission research would be less controversial: it lowers barriers to nuclear development (i.e. the research to reduce the externalities gets done) without triggering the NIMBY backlash that has been such a problem for nuclear power (because research funding is not specific to any location), and without triggering significant opposition from the fossil fuel lobby.
  • As to plank 4: good point. Still, it might be possible to have some different branch of government oversee it?

Mark Atwood says:

November 25, 2015 at 1:27 pm

Plank 5: all issuance of financial instruments (stock IPOs, bond issues, treasury sales, etc) will be done via a public reverse dutch auction, open to everyone who can scratch together the cash to cover their advance bid, managed by a TTP who gets paid a flat fee for doing it.

Plank 6: all corporations/501cXs must publish their full cash accounting, and every complete tax return in every jurisdiction they file in, in addition to the pile of lies they currently publish as GAAP/SarbOx/etc. crap.

Plank 6a: the only legal records retention policy for corporations/501cXs is “all records must be retained. forever.” Storage is cheap.

Plank 7: all law, rulemaking, court filing, decree, judgement, regulation must be published as a push to a DVCS with hash based integrity chains, with full actual human authorship attribution (e.g. which clerk or lobbyist actually wrote the words, and who they worked for at the time) attached to every single line of text.

Plank 8: there shall be a private right of action for perjury, which will not be tried in the same court that the perjury is alleged to have occurred.

Plank 8a: “Qualified Immunity” is gone. It has no original basis. It was something made up from scratch by friend-of-cop prosecutors, and then upheld by judges who used to be those prosecutors.

Plank 9: “Orphan works” shall enter the public domain. Copyright can be maintained by someone other than the original creator only by paying an annual filing fee, which increases by 50% each year.

Plank 9a: improperly filing a DMCA takedown will be punished with a *mandatory* fine equal to the three times maximum penalty for the copyright infringement claimed, and the fine is split three ways between the poster of the content that was taken down, the service provider that was hosting it, and the court that ruled on it.

Plank 10: If a device has a connector, plug, socket, interface, or API, or modulates, transmits, receives, or stores any analog signal or binary bitstream over any medium, it may not be sold unless the complete specification of such interfaces or signals is made publicly and freely available.

Yes, I know that all of these are politically impossible, and I know why they are. And I firmly believe that the people who would oppose this, and both their stated and their actual reasons for opposing it, are at least 3/4s of what is wrong with everything.
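
Plank 7’s “hash based integrity chains” amount to a blockchain-without-the-coin. A minimal sketch, assuming nothing beyond the Python standard library (the record fields are my guesses at what Atwood intends; the names are invented):

import hashlib, json

def append_record(chain, text, author, employer):
    # Each record binds its text and attribution to the hash of the previous
    # record, so tampering with any line breaks every hash downstream of it.
    prev = chain[-1]["hash"] if chain else "0" * 64
    body = {"text": text, "author": author, "employer": employer, "prev": prev}
    body["hash"] = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    chain.append(body)

def verify(chain):
    prev = "0" * 64
    for rec in chain:
        assert rec["prev"] == prev, "link to previous record broken"
        body = {k: rec[k] for k in ("text", "author", "employer", "prev")}
        prev = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        assert rec["hash"] == prev, "record has been altered"

chain = []
append_record(chain, "Sec. 1: ...", "clerk J. Smith", "Senate Judiciary Cmte.")
append_record(chain, "Sec. 2: ...", "lobbyist A. Jones", "Acme Corp.")
verify(chain)  # passes; edit any earlier record and verify() raises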

 

  • Neurno says:
  • November 25, 2015 at 3:14 pm
  • Thanks for contributing! I think in a throwing-around-ideas context like this there’s no reason to censor oneself based on imagined political feasibility of good ideas. I’m now going to research the concepts you’ve referenced that I don’t understand yet to see if I agree.
  • Planks 6,6a,9: I’ve had similar thoughts and I’m in agreement!
  • Plank 9a: nice addition to 9! Hadn’t occurred to me before, but I like it. And it seems a fitting general principle for dealing with presumably deliberate false accusations of fine-worthy crimes.

Deiseach says:

November 25, 2015 at 3:25 pm

Plank 6a: the only legal records retention policy for a corporations/501cXs is “all records must be retained. forever.” Storage is cheap.

Not when it’s “typed out on a manual typewriter and carbon copy with actual carbon paper” from thirty-forty years ago, it isn’t. Our council archivist is currently tearing out her hair because of all the files shoved into boxes and landed in to her for archiving after the amalgamation and musical chairs type change of departments and bodies in the council.

We have a lot of stuff on paper in physical files and if by “storage is cheap”, you mean something like “everything digitised and in the cloud” then some poor bugger is going to have to sit down and digitise forty years’ worth of records, which is not going to be fast, cheap or easy.

Even today not everything is done online, though that will change in future (even a dinosaur like myself can recognise that).

A new startup that is only in existence six months? Yeah, everything is digital.

A long-established law firm, bank or other entity? Decades’ worth of paper that you can’t just dump because there’s deeds, wills, contracts, etc. there.

Plank 9: “Orphan works” shall enter the public domain. Copyright can only be maintained by someone other than the original creator only by paying an annual filing fee, which increases by 50% each year.

What if the original creator wants to sign over one of their works to another party for charitable purposes, like J.M. Barrie did with Peter Pan and Great Ormond Street Children’s Hospital?

Under Plank 9, I could see all the profits being eaten up by filing fees instead of going to charitable purposes.

And of course, that would mean no barrier to making “Tinkerbell Does Neverland” type movies and books (though that probably isn’t much of a barrier now either).

    • The Nybbler says:
    • November 25, 2015 at 3:58 pm
    • To plank 6a: Storage is cheap. Lawsuits aren’t. Someone — an actual person or persons — is going to have to go through all those records to see if they are relevant during the discovery phase of every lawsuit. That gets real expensive real fast, and is one of the reasons for retention policies.
      • Mark Atwood says:
      • November 25, 2015 at 4:36 pm
      • one of the reasons for retention policies
      • Bullcrap. Discovery orders do not have to go back to the beginning of time, just back to the beginning of the action being litigated.
      • My own personal and lived experience of watching retention policies get newly mandated into organizations, or watching existing retention policies get tightened down, is that they *always* come after someone got caught red-handed via their own records, and so decided to avoid having that ever happen again by preemptively committing “destruction of evidence”.
    • Chalid says:
    • November 25, 2015 at 5:37 pm
    • Plank 6 would mean disclosing all kinds of proprietary/competitive data, and would put any such corporation at a big disadvantage.
    • Haltingthoughts says:
    • November 25, 2015 at 11:08 pm
    • Excellent proposals.
    • There was a PDF from a retired judge that had a very good treatment of the issues with criminal justice; not sure where it went.
    • Plank 8b: in lieu of an improper-prosecution suit, or in the absence of one, there is a private right to bring criminal charges.
    • Plank 8c: experts providing instruction relevant to the interpretation of evidence shall be permitted. (Oftentimes calling experts on Bayesian reasoning about evidence is not allowed.)
    • Plank 8d: what can be done about sovereign immunity?
    • Plank 8e: relax standing requirements for at least constitutional challenges if not more.
    • Plank 8f: require discovery of all evidence and prosecute those not adhering to it.
    • Plank 8g: require courts to decide all required questions. (It is crazy that sexual orientation has not yet been clearly defined as a suspect class.)
    • Murphy says:
    • November 26, 2015 at 7:57 am
    • Hmm. My problem with 6a is that it gets more and more expensive over time, and storage is only cheap on the small scale. Your little 4 TB drive is cheap. A multi-petabyte array in a datacentre is not cheap at all.
    • There are companies that are hundreds of years old. They get put at a significant competitive disadvantage.
    • What about file formats and readers? There’s archive material from the ’70s which is incredibly hard to read nowadays because the machines to read it are mostly gone. Does a company have to spend a fortune every 15 years on copying all their backup tapes to newer formats?
    • What about backups? If a company has 2 copies of tapes from 70 years ago archived in separate locations and one burns down, does it then have to spend a fortune reading and copying all that old archive data to make a new second backup location?
    • I agree that the current 7 years or similar is far too short for many things, but I’m not sure “forever” is practical either. (Rough numbers in the sketch after this sub-thread.)
      • Neurno says:
      • November 27, 2015 at 1:12 pm
      • Hmm, but if the data is properly compressed/simplified (i.e. text files, not PDFs), then storing data of the same scope as the pre-digital records shouldn’t be a problem. I mean, there’s room for entire libraries’ worth of compressed text on the little 8 GB flash memory chip I carry on my keychain… I think you may be getting thrown off by the fact that digital storage has allowed companies to keep far more total information than they ever did on dead-tree media.
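
A back-of-envelope check on both sides of this sub-thread, in Python; every price and size below is an assumption chosen for illustration:

    # Murphy's worry: a multi-petabyte archive, re-copied to fresh
    # media every 15 years. Assume ~$30/TB raw disk (2015-ish),
    # tripled to cover redundancy, power, and administration.
    archive_tb = 5_000                        # a 5 PB archive
    cost_per_tb = 30 * 3                      # $/TB fully loaded (assumed)
    per_copy = archive_tb * cost_per_tb
    print(f"one full copy: ${per_copy:,}")              # $450,000
    print(f"amortized over 15 years: ${per_copy / 15:,.0f}/yr")

    # Neurno's point: plain text is tiny. Assume a 300-page book is
    # ~600 KB of text and compresses about 3:1.
    book_bytes = 600_000 / 3
    flash_chip = 8e9                          # the 8 GB keychain drive
    print(f"books per 8 GB: {flash_chip / book_bytes:,.0f}")  # ~40,000

Both can be true at once: “forever” is cheap for paper-era volumes of text, and expensive for the telemetry, email, and media a modern firm generates.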
  • Chalid says:
  • November 25, 2015 at 11:43 pm
  • Most of the standard boring good-government checklist ought to apply here. Tax reform and simplification, more high-skill immigration, reform social programs to minimize disincentives, reduce corporate/agricultural subsidies including tax expenditures, roll back the drug war’s worst excesses, patent reform, etc.
  • More rationality-oriented additions might be issuing prizes instead of patents, encouragement of prediction markets, and increased funding for basic research.
    • Neurno says:
    • November 27, 2015 at 1:06 pm
    • Yes, those seem like worthy but boring additions. I guiltily confess I was hoping for more new/weird/exciting ideas.
    • I like the prizes-instead-of-patents idea. I’d heard of it in the context of chasing specific scientific or engineering goals, but hadn’t yet heard it suggested as a complete replacement for the faulty patent system.
    • I had also wanted to say something about encouragement of prediction markets and/or their incorporation into political governance, but found myself stumbling over the details every time I tried to imagine the specifics.
      • Psmith says:
      • November 27, 2015 at 3:14 pm
      • “I guiltily confess I was hoping for more new/weird/exciting ideas.”
      • Vote on values, bet on beliefs!
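
On the “stumbling over the specifics” point above: the usual mechanism behind “vote on values, bet on beliefs” (Hanson’s futarchy) is a subsidized automated market maker, most often the logarithmic market scoring rule. A minimal sketch in Python; the liquidity parameter b and the outcome labels are arbitrary assumptions:

    import math

    # Logarithmic market scoring rule (LMSR) market maker.
    # b bounds the subsidizer's worst-case loss (b * ln(num outcomes)).
    class LMSR:
        def __init__(self, outcomes, b=100.0):
            self.b = b
            self.q = {o: 0.0 for o in outcomes}  # shares sold per outcome

        def _cost(self):
            return self.b * math.log(sum(math.exp(v / self.b) for v in self.q.values()))

        def price(self, outcome):
            # The current price doubles as the market's implied probability.
            denom = sum(math.exp(v / self.b) for v in self.q.values())
            return math.exp(self.q[outcome] / self.b) / denom

        def buy(self, outcome, shares):
            # A trader pays the change in the cost function.
            before = self._cost()
            self.q[outcome] += shares
            return self._cost() - before

    m = LMSR(["policy helps", "policy hurts"])
    print(round(m.price("policy helps"), 3))  # 0.5: market starts agnostic
    paid = m.buy("policy helps", 50)          # a confident trader buys in
    print(round(paid, 2))                     # ~28.09 paid for 50 shares
    print(round(m.price("policy helps"), 3))  # ~0.622: the bet moved the price

Each share pays $1 if its outcome occurs, so traders profit exactly when they buy probabilities the market has set too low; that is the sense in which the price aggregates beliefs.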