Archiving my posts from SSC: ot36-nes-threadol-hayah-sham

Please note: for any new material, I have underlined my name (or “edit:”) and also colored it in red.

Neurno says:

Intro: I am usually a lurker and thorough reader of SSC and comments, and have read much of Scott Alexander’s work on SSC and Jackdaws…Sphinx, and much of LessWrong, and some Overcoming Bias. Writing doesn’t often come easily to me, so I rarely get around to expressing my viewpoint. This weekend, laid up sick in bed and having not much else to do (since I’d just finished Hive Mind: why…IQ…), my reluctance to engage was overcome by a series of comments about the left/right political balance and the lack of radical-left commentators on SSC. I thought, “Hey, what about me? Oh yeah, I rarely say much.” And then the metaphorical floodgates burst open on my tiny dam, and a relative flood (for me) of writing poured out. And then some of it got eaten by the spam filter, and the rest got rather buried in a morass of (to me) totally worthless political grumblings about irrelevant petty issues like gun control.

So, I decided to pull my comments, some context, and some of the many enjoyable responses out of the morass and place them somewhere tidier and more under my control. Here they are. Please feel free to continue engaging with me on these issues here or on SSC comment threads.

On the political spectrum make-up of the SSC commentariat

Neurno says:

December 6, 2015 at 1:13 pm

I consider myself to be a far-left rationalist (I was raised progressive Quaker, and went left-er, more rational, and atheist from there). On political test maps (e.g. Political Compass) I find myself placed so far left that the multiple-choice tests often don’t even have choices for my true full views on political matters. I enjoy polite and well-thought-out comments from all areas of the political spectrum. Whenever I read an unsatisfying political comment, I try to imagine what a smarter, more rational version of that person would say if they had approximately the same underlying values (edit: I meant ‘same underlying values’ as implied by their comments, not the same underlying values as held by me.)

 

On HBD

Neurno says:

  • December 6, 2015 at 2:34 pm
  • I apologize in advance for a somewhat off-topic comment about HBD, but I feel my radical-left viewpoint is being under-represented. As a radical leftist I believe that equality-of-mind is far more important than equality-of-capital. As a neuroscientist, I see the brain as a machine that can be taken apart, fixed/upgraded, and put back together. Hypothetically speaking, if a radical brain surgery existed which had an 80% chance of upgrading the recipient by two standard deviations of IQ and a 20% chance of killing them or making them much worse off, at what level of intelligence should we consider an adult intelligent enough to make the decision as to whether to accept this surgery? My guess is around half a standard deviation above average IQ (less than that, and it should be mandatory). A good way to figure out exactly where this cutoff should be would be to give each subject a factual document regarding the surgery; if they were then able to pass a test on the material, they would be allowed to make up their own mind. (edit: a rough expected-value sketch of this gamble follows below.)
  • In my worldview, minor differences between races (although real) seem like a petty and irrelevant topic for discussion.
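
edit: for anyone who wants the bare arithmetic behind this hypothetical, here is a minimal expected-value sketch (toy Python; the utility assigned to death is an invented assumption, and choosing that number is exactly the contested part):

```python
# Toy sketch: survivors gain `gain_sd` standard deviations; death is assigned
# an arbitrary utility penalty (a made-up number, not a claim).

def expected_gain_sd(p_survive, gain_sd=2.0, death_penalty_sd=-10.0):
    return p_survive * gain_sd + (1 - p_survive) * death_penalty_sd

for p in (0.80, 0.90, 0.95, 0.99):
    print(f"survival {p:.0%}: expected outcome {expected_gain_sd(p):+.2f} SD")
```

Under these invented numbers the gamble only turns positive somewhere between 80% and 90% survival, which is roughly where the thread below ends up.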

 

  • Dr Dealgood says:
  • December 6, 2015 at 4:22 pm
  • @Neurno,
  • When you get to the point that you think killing 14% of the population (~45 million people) is a reasonable move, you should really step back and reevaluate the situation. Are those two standard deviations such a life-and-death issue?
  • The irony being that, since you subscribe to racial IQ differences, by your own estimation your hypothetical would end up killing proportionally many more African Americans (about 19% or 7.3 million). That’s going further than any actual racist I’ve ever met would admit to.

 

  • Not Robin Hanson says:
  • December 6, 2015 at 5:28 pm
  • Might it not be more utilitarian the other way around? My impression is that there are strongly superlinear returns to IQ, so if you could administer the treatment to X people it would be best to choose those who already have the highest IQ. (At least to first order, but it seems futile to consider much more when hypotheticals are generally so underspecified.)

 

  • Jiro says:
  • December 6, 2015 at 7:22 pm
  • When you get to the point that you think killing 14% of the population (~45 million people) is a reasonable move, you should really step back and reevaluate the situation.
  • He only proposes letting the population (take a risk that it would) kill itself. He isn’t proposing that *he* kill anyone.

 

  • stargirl says:
  • December 6, 2015 at 7:51 pm
  • I would personally take the treatment if offered. But he is not offering to “let” people take the treatment. He is arguing that anyone below an IQ of approx 107.5 should be forced to take the treatment.

 

  • Neurno says:
  • December 6, 2015 at 8:04 pm
  • Dr. Dealgood:
  • You are absolutely correct that 80% is an unconscionably low success rate. I would never approve a mandatory operation with such a low survival rate. I just wanted to start with a provocative number to get people thinking and talking about it. At what survivability rate do you think it would be ethical/reasonable to allow people to choose such a treatment of their own free will? I’m thinking maybe 90%?
  • NotRobinHanson: You are absolutely correct. Thus, I have only offered such to those for whom I suspect an increase of intelligence would be of substantial benefit to society, and at the current level of risk I have made it clear to them that the offer is only valid if they can confirm that they are already in imminent danger of death from a currently-incurable illness (e.g. advanced pancreatic cancer).

 

  • stargirl says:
  • December 6, 2015 at 8:36 pm
  • The tables have now obviously flipped around lol.
  • I think people should be able to get any medical treatments they want provided they can show they understand the situation. I would probably institute a waiting period though.
  • I think people should be able to have doctors cut off their limbs or make them blind if that is what they want after serious consideration. If people want the IQ treatment, I do not think society has any right to stop them from undergoing the procedure.
  • Maybe if this procedure is destroying society we should put a stop to it. But the damage has to be clear, proven and large before we have a right to stop people from getting the treatments they think will improve their lives.

 

On nanomachines

  • rsaarelm says:
  • December 6, 2015 at 3:42 am
  • I think most transhumanists believe that physically possible nanomachines are going to happen. The Drexler debate was about whether it’s physically possible to build dry nanotechnology where you move individual atoms around into a crystalline structure instead of just setting up a wet nanotechnology protein soup and hoping something useful comes out of it. We already know that wet nanotechnology is possible because we have living cells.
  • TheAncientGeek says:
  • December 6, 2015 at 6:11 am
  • https://en.wikipedia.org/wiki/Drexler%E2%80%93Smalley_debate_on_molecular_nanotechnology
    • Faradn says:
    • December 6, 2015 at 4:53 pm
    • Yeah, Smalley’s objections are part of what I was thinking of. Nanotech as biomimicry with some limitations of application would still be on the table, but not nanotech as magic.
  • Neurno says:
  • December 6, 2015 at 7:45 pm
  • My perception (as a far-left transhumanist rationalist and neuroscientist) is that I don’t have a very good understanding of the full potential of dry nanomachines. However, I suspect, and have read multiple papers that seem to more or less agree, that the likely near-term potential is a machine less powerful than engineered bacteria (wet nanotechnology). Is that the full physically possible extent of the power of dry nanotechnology? I dunno, maybe.
  • So, I suspect the future will not have powerful dry nanotechnology (i.e. Drexlerian). But I don’t suspect this with more than about 75% confidence.
  • So, update on this as you will, keeping in mind that the simple plural of anecdote is not the kind of data that issues from a controlled experiment. 😉
    • Jeffrey Soreff says:
    • December 6, 2015 at 10:43 pm
    • So, I suspect the future will not have powerful dry nanotechnology (i.e. Drexlerian). But I don’t suspect this with more than about 75% confidence.
    • Any specific reasons? A scanning tunneling microscope was used to position xenon atoms on a nickel crystal with atomic precision back in 1989.
    • https://en.wikipedia.org/wiki/IBM_%28atoms%29
    • Now, whether anyone is actually ever going to pay for developing atomically precise mass production technology is anyone’s guess, but I think there is a strong case that it would work if it were built.
  • Neurno says:
  • December 9, 2015 at 4:41 am
  • Yes, good point. I guess what I meant to say is that I think it only 25% probable that anyone will successfully build working Drexlerian-style powerful dry nanomachines in the next 100 years. Now that you point it out, I agree that I have no evidence that the existence and function of such machines (if built) would be physically impossible.

 

 

On transhumanism, LessWrong, Yudkowsky, cryogenics / brain preservation / CLARITY

Faradn says:

December 6, 2015 at 12:07 am

It seems a lot of transhumanists and transhumanist-adjacent rationalists believe that Drexlerian nanomachines are going to happen, when the current scientific consensus seems to be that it’s physically impossible. Am I misunderstanding the typical transhumanist position, the scientific consensus, or maybe both?

 

 

  • John Schilling says:
  • December 6, 2015 at 12:37 am
  • Where do you find a scientific consensus about Drexlerian nanomachines being physically impossible? The general argument is that nanomachines are not ascribed any properties not already demonstrated by bacteria, and bacteria exist, so nanomachines physically can exist.
  • That we don’t have a clue how to design a bacterium that spends its time doing microscopic bits of some arbitrary macroscopic project of interest to us is another matter. Drexler and company are clearly a tad optimistic on the engineering side.
  • Dr Dealgood says:
  • December 6, 2015 at 12:38 am
  • No, not really.
  • Most of the big disconnects between lowercase-r rationality and uppercase-R Rationality come from their founder, who created the group with the explicit goal of training people to be rational enough to agree with his extremely fringe views on things like AI and cryonics.
  • Since most of his apocalyptic AI scenarios rely heavily on it quickly devising and releasing something like Molecular Assemblers, it makes sense that the idea was hardcoded into the Rationalist system despite not making much rational sense.
    • Samuel Skinner says:
    • December 6, 2015 at 12:52 pm
    • You sure? Bioweapons, as well as the long game, fit fine in a doomsday AI scenario.
    • anon says:
    • December 6, 2015 at 3:31 pm
    • While I agree completely that Yudkowsky founded LW to spread his fringe views (mainly the fringe view that people should give him their money) I think you’re underestimating the damage a superintelligent machine could do even with just today’s technology. There’s plenty it could do just over the internet, and it would be trivial for something several orders of magnitude smarter than human beings to cajole, bribe or blackmail real people into doing everything else.
    • Neurno says:
    • December 6, 2015 at 7:00 pm
    • Dr. Dealgood: I disagree with the factual implications of your comment, and I take issue with you insulting my meme-group, which I perceive you as describing as Yudkowsky’s fringe. He is not the origin of my ideas; he is just a popular spokesperson. Furthermore, I have rather a lot of experiential evidence to support my views on these “fringe” ideas. For example, cryogenics works, for the definition of work meaning “can successfully preserve the information contained within the brain of the subject as represented by the structure of neurons and glia, arrangement of synapses, and types and approximate quantities of proteins present in specific parts of each cell”. Furthermore, the digitizing of this information is already possible with current technology. As a neuroscientist I have carefully cryosectioned, labelled, and imaged with laser microscopy many mammalian brain samples. I don’t know when it will be possible to digitally emulate a mind based on this brain data, but my understanding of computational neuroscience suggests that it is quite possible. However, asserting that I support cryogenics would be incorrect. I don’t support cryogenics because it is outdated. There is a better, safer (less likely to lose brain information), cheaper, easier method that does not require an expensive cryogenics company: hydrogel embedding (CLARITY). (If researching this, do not confuse it with brain plastination, which risks losing information.) I have informed Yudkowsky that he is factually incorrect in this regard, but I don’t know if he has yet researched the topic himself and correspondingly updated his views.
    • Furthermore, my understanding of the current research in computational neuroscience speaks to the near-term plausibility of biomimetic AI. I am not qualified to speak on the subject of non-biomimetic AI. I am unsure whether biomimetic AI presents any existential risk, but I do believe it will be commonplace within a couple of decades.
    • I do doubt that non-biological nanomachines will ever work well, although some crude ones have been made in labs. Don’t underestimate the power of biological nanomachines/ engineered cells though! I’ve only dabbled in genetic engineering/virus manufacture/stem cell engineering, and yet my humble projects have done some pretty impressive things on occasion!
    • Again, on any topic outside his field, please consider Yudkowsky to be a well-intentioned popular science-and-philosophy writer (whom I happen to like a lot even if he’s not as correct as he seems to think he is about a lot of current science!). Don’t judge the capacities of science and the possibilities that the future holds by any perceived flaws or limitations in his writing.
      • Dr Dealgood says:
      • December 6, 2015 at 8:09 pm
      • I might have phrased my comment too impolitely, since this is a rationalist space. After all I don’t particularly like it when other people metaphorically come into my house and spit on my carpet. But it seems pretty clear that these positions have for the most part been adopted without or even against the available evidence.
      • Going back to cryonics for example: I’m not a neuroscience guy myself, though I am working with a glial cell transcription factor at the moment, but I would contest your characterization of it nonetheless. There’s a huge difference between freezing or perfusing a 10 micrometer slice of tissue that you never intend to thaw and cryonically preserving an entire human brain. Maybe at some future date we will be able to do that, but in the meantime paying even a single dime to companies like Alcor is utterly irrational.
      • For another example, take nanotechnology. Yudkowsky seems not to know the difference between wet protein or nucleic acid enzymes which actually exist and dry Drexlerian assemblers which probably can’t exist, and that leads him to some very bizarre statements. Like the idea that, once his hypothetical UFAI cracks protein folding (without actually having done any non-simulated experiments naturally) it can whip up the sequence that codes for build-anything nanobots, send it to some guy, and FOOM… world destroyed. It’s just more grey goo hysteria, except with a killer AI instead of a sloppy chemist.
      • There are other examples that I’m not really qualified to speak on, like the QM stuff or the basic computer science behind his conception of AIs, but the responses I’ve heard from other people who know what they’re talking about sound like my responses to the above. It’s not encouraging, particularly for a self-proclaimed expert in rationality, to pick irrational positions so consistently.
        • Neurno says:
        • December 6, 2015 at 9:51 pm
        • Dear Dr Dealgood:
        • Thank you for your apology, and in return I also apologize for being touchy about it. To give some context to my reaction: I labored intensely for many years in intellectual isolation, researching what I felt to be my possible avenue of contribution to the meta-progress of the human race, radical biological intelligence enhancement via genetic modification of consenting adults. Then, relatively recently, Overcoming Bias spawned LessWrong, and suddenly an intellectual community willing to seriously discuss the issues I believe to be of utmost importance sprang up.
        • It is the willingness of people like yourself to discuss these issues that I primarily value, but I do also very much appreciate Yudkowsky’s popularization of the ideas. Why, despite his frequent factual errors? Because I cannot communicate these things nearly so compellingly or clearly. My partner and my close friends did not understand when I tried to explain why I have been laboring on this personal project for so many years. After getting them to read LessWrong, they do get it. They get my quest, why I care, why it might really matter. That is no small thing, and for that, I am willing to overlook rather a lot of factual failings. Hopefully if the new book form (Rationality: From AI to Zombies, which I haven’t read yet, so maybe it’s already somewhat better) gets enough traction, a second edition with the factual failings ‘updated’ by scientists from each specific field referred to could be issued.
        • nope says:
        • December 7, 2015 at 3:32 am
        • @Neurno: if you’re still around, I’d like to pick your brains re: intelligence enhancement in adults. My partner and I are involved in various things related to human intelligence, but on the enhancement side, the only significant possibilities look pre-natal. If this is wrong, we would be very excited to learn why. throwaway5283 at google mail dot com if you’re interested.
        • Deiseach says:
        • December 7, 2015 at 6:25 am
        • There’s a huge difference between freezing or perfusing a 10 micrometer slice of tissue that you never intend to thaw and cryonically preserving an entire human brain. Maybe at some future date we will be able to do that, but in the meantime paying even a single dime to companies like Alcor is utterly irrational.
        • Thank you for succinctly stating my lack of interest in (I can’t really call it opposition to) cryonics. I get that, for raising money for research purposes, companies need to do the whole “we can freeze you and thaw you out in the future” bit, but it does feel to me like taking advantage of the vulnerable, the desperate, and the grieving right now, as any people or parts thereof frozen under current technology have, I submit, little to no chance of being successfully re-thawed (and that’s not getting into “Oh, the future won’t thaw them out, they’ll read their brain engrams and copy them into a new body/upload to virtual space”).
        • I’m not saying it will never work, just that there is a lot of research and trial-and-error and experimenting on animals to see if it can be done right (and I’m sorry all the animal-rights people, but we’re talking about doing this to chimpanzees and other high-level primates to see if it can work for us).
        • Right now? Donate for research, certainly. Pay to have your dead body frozen and kept in storage for fifty-plus years? That’s hucksterism on the P.T. Barnum scale.
        • Murphy says:
        • December 7, 2015 at 9:10 am
        • Personally I don’t think it would work (my opinion: <1% chance), but I hesitate to call it irrational.
        • It’s a long-odds bet with massive potential payoff.
        • If I was a multi-millionaire or billionaire I could see myself taking that bet.
        • Even if it doesn’t work and nobody can extract a whole personality from a frozen brain, there are good odds that the future equivalent of archaeologists will find the very well preserved bodies of people from, say, a century in their past to be highly useful for understanding our current society, health, diseases, etc.
        • Neurno says:
        • December 7, 2015 at 11:49 am
        • @nope: email sent. Let me know here if you don’t receive it.
        • @Deiseach and Murphy:
        • Would you please please please stop discussing horse-and-carriage technology in a discussion about whether it is better to cross a continent on foot or with a cheap jet plane ticket? Seriously, no! That is not the issue at all! Egads! What are you, Amish, Mennonite, Shaker? Why would you ever, ever freeze a brain?! No. Bad. Wrong. Multiple vastly superior options exist.
        • The best option currently is, as I said, hydrogel embedding. This requires only a dead brain (or more conveniently a severed head) to be immersed in a bucket of ~10% paraformaldehyde solution, and the sealed bucket placed in a refrigerator. In a few days, or weeks, whichever is convenient, an expert can come along and convert the brain tissue to a hydrogel embedded sample.
        • This hydrogel-embedded sample is optically clear and can be very effectively immunolabeled and scanned with a confocal laser microscope over and over with no loss of information. The sample is physically stable (strong and plastic-y), unlike brain tissue preserved in paraformaldehyde and/or alcohol, and will be stable at room temperature for many decades (at least). Scientists can safely study your brain many times over without damaging the sample or its precious information, and many separate attempts to digitize the information can be made (and compared with each other to get the best total info). The hydrogel embedding and initial paraformaldehyde preservation are quite cheap (under $100) and so useful to scientists that they might well pay your descendants for the privilege of non-harmfully studying your brain.
        • Because there is pretty much no downside to asking your grandkids to keep your preserved brain in a bucket in their garage, you might as well: there is a slim chance that you might be successfully uploaded, and a hugely probable case that your brain would be of great benefit to science at no loss to you. Having scientists non-destructively studying your brain for generations to come would only increase the odds that you might be successfully digitized and emulated someday!
        • Dr Dealgood says:
        • December 7, 2015 at 12:27 pm
        • Can you practically use CLARITY on a whole human brain? When I looked it up the only protocols I found were for <=50 mL samples, which is excellent for studying mice but raises questions about how well the process would scale. I’m not an expert here by any means but given that it takes 5+ days to fix a mouse brain the rate of perfusion might be an obstacle in larger organs.
        • Also it’s a bit of a moot point because, while this is a potentially workable idea for preserving brains, nobody is actually doing it. Almost all of the advocacy and all of the money in the Rationalist sphere is focused on freezing.
        • Murphy says:
        • December 7, 2015 at 12:41 pm
        • @Neurno
        • I’m even more skeptical but hey, if it’s really cheap. Let us know if/when a company starts doing this commercially.
        • If I could get my brain preserved somewhere safe like that for, say, $1000 I’d possibly go for it. It’d be cheaper than many currently popular death-rituals.
        • Neurno says:
        • December 7, 2015 at 2:42 pm
        • @Murphy:
        • No company necessary if you’re able/willing to talk a friend/relative into the DIY option. Just ask that your brain be stuck in a bucket with some paraformaldehyde and ask a scientist to come study it with CLARITY.
        • That being said, I can see how a lot of people might not be so into the DIY option. I recognize that I may be somewhat unusual in enjoying studying/handling the brain tissue of the recently deceased. I do hope someone starts such a business soon!
        • @Dr Dealgood:
        • I use these sites for my DIY nitty-gritty on CLARITY:
        • forum.claritytechniques dot org
        • clarityresourcecenter dot org
        • Here is a link to a paper in which it was used on a whole human brain from a brain bank (processed in about 1 cm thick slices) which had been stored a long while previously in formalin.
        • It is possible to do this in much thicker sections, it just takes longer before the brain is optically clear and ready to observe. It’s actually better for information preservation to use the slower method, but it’s hard to be sufficiently patient! I hate having to leave a sample in the clearing solution for months just to be able to answer the question that I prepared it for!
        • http://onlinelibrary.wiley.com/doi/10.1111/nan.12293/full
        • Actually, it can be done not just on brain tissue but on the whole body (but why bother for other than medical research? You are your brain). Note of caution while researching this: images of whole animal hydrogel preservation samples are not for the weak of stomach.
        • Dr Dealgood says:
        • December 7, 2015 at 4:00 pm
        • Thanks for the link. I had seen the bit on preserving other organs, I think it was mentioned in passing by a team optimizing the protocol for whole mouse brains.
        • I also think our definitions of whole are a bit different. To me, once you section a brain it is by definition no longer a whole brain. Assuming arguendo that you can scan and emulate brains, you’d still presumably want it in as few pieces as possible.
        • But yeah, definitely going to check that place out.
        • Deiseach says:
        • December 7, 2015 at 5:52 pm
        • Neurno, right now people are forking out good money to have themselves, or their heads, frozen and preserved, or paying for the upkeep of frozen deceased family members, via that horse-and-carriage technology.
        • That’s my main beef: people are being sold a bill of goods that cannot be fulfilled. Better preservation techniques, invention of however the fuck you are going to read engrams or whatever, animal testing of both that shows they work and you get out the other end something almost entirely approximating what you put in – fine, once those bugs are worked out, then sell people “step right up, sign up for our process, and wake up in the wonderful world of tomorrow”.
        • As it is? Right now? And the companies that started forty years or so ago and froze people in the 60s? I think you’d be as well off to be turned into an Egyptian mummy.
        • Also – so you slice up the brain into sections? Well, if you can put Humpty-Dumpty back together again, then I think okay. I’d really like to see some animal tests done first, though.
        • It sounds rather too like Victor Frankenstein stitching separate body parts back together into a coherent whole and getting the resultant jigsaw to work.

On why so many different neurotransmitters might have evolved

Oleg S says:

December 5, 2015 at 1:58 am

Is there any idea of why there are so many different neurotransmitters in the brain?

Ok, I understand that glutamate is the major excitatory neurotransmitter and GABA is the major inhibitory neurotransmitter. Excitatory and inhibitory synapses are modeled really well by artificial neural networks with positive and negative weights on connections. But there are also D-serine, serotonin, acetylcholine, dopamine, norepinephrine, histamine, and a whole lot of other compounds that somehow stimulate or affect neurons. Wouldn’t it be much easier to drop them and use plain excitation/inhibition networks?

Of course I understand that Nature has her own reasons. Still, the question is what those additional neurotransmitters do that cannot be captured by classic artificial neural networks, and why this function is so vital that they persist through almost the entire animal kingdom.

 

  • Daniel Speyer says:
  • December 5, 2015 at 4:26 am
  • A partial reason is that they do more than excite or inhibit. A lot of the receptors for the less common neurotransmitters are G-protein-coupled, which means they release special signaling molecules inside the cell. This can trigger a complex chain of other reactions, which I don’t think anyone has fully mapped. Sometimes there can be an excitatory or inhibitory effect, but sometimes something gets phosphorylated.
    • Oleg S. says:
    • December 5, 2015 at 6:18 am
    • Ok, I can understand that there are some sort of motor-like neurons, which excrete some less common neurotransmitters that activate GPCRs, which in turn trigger other reactions that ultimately lead to changes in patterns of gene expression, secretion of hormones, and other rather specific events.
    • However, take for example dopamine signaling. It functions very much like ion channels: for example, once the D1 receptor is activated by dopamine, the corresponding G-protein binds to adenylate cyclase, which converts ATP to cAMP, which activates PKA, which phosphorylates a Na+ ion channel, which opens, and so an action potential is generated and sent down. Binding of dopamine to D2 receptors inhibits adenylate cyclase, so these receptors can be regarded as inhibitory.
    • The serotonin receptor 5-HT4 works very similarly to D1 – it also activates adenylate cyclase. So, from an internal point of view these receptors work very similarly. And one member of the serotonin receptor family, 5-HT3, is an ion channel that functions very much like other ion channels, causing depolarization when bound to serotonin.
    • But the distribution and overall effect of these neurotransmitters are profoundly different. Dopamine receptors are abundant in the CNS, and are implicated in motivation, pleasure, cognition, learning, etc. It’s like having two types of electric wires in my house: copper wires for mundane appliances and silver wires for devices which help me to earn money.
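
edit: Oleg’s point – that receptors with quite different internal machinery converge on the same excite/inhibit outcomes – can be summarized in a few lines of toy Python (a sketch of the wiring only, no kinetics; everything here is illustrative, not a model):

```python
# Toy wiring of the cascades described above.

def metabotropic_effect(receptor):
    """D1 and 5-HT4 stimulate adenylate cyclase; D2 inhibits it."""
    stimulates_ac = receptor in ("D1", "5-HT4")
    camp = "up" if stimulates_ac else "down"          # adenylate cyclase: ATP -> cAMP
    pka = "active" if camp == "up" else "inactive"    # cAMP activates PKA
    channel = "phosphorylated, open" if pka == "active" else "closed"
    return f"{receptor}: cAMP {camp}, PKA {pka}, Na+ channel {channel}"

def ionotropic_effect(receptor):
    """5-HT3 is itself a cation channel: ligand binding depolarizes directly."""
    return f"{receptor}: channel opens, membrane depolarizes"

for r in ("D1", "D2", "5-HT4"):
    print(metabotropic_effect(r))
print(ionotropic_effect("5-HT3"))
```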
      • Neurno says:
      • December 6, 2015 at 4:13 pm
      • I think it would be totally possible to design a brain/mind with a simpler set of neurotransmitters, but that would require a substantial (inefficient) redesign of neurons. Consider, for instance, the frontal cortical pyramidal neurons that receive glutamate/GABA as instructions to increase/lower probability of firing in a near-future time sensitive way. But then they use background diffused dopamine levels to adjust variables such as firing threshold and “mental exhaustion threshold” (poetic license for clarity’s sake!). This could be accomplished as well by having a direct connection from the dopamine-diffusing neurons to every single target neuron to adjust all their thresholds, but that would require far more neural tissue and physiological effort, with high associated costs. So there is no need to code these neurotransmitters into an AI as such, but a substantial need for them in the messy part-digital, part-analogue system of a meat brain.
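
edit: a minimal sketch of the trade-off just described (toy Python using NumPy; the numbers and the threshold rule are invented for illustration, not biophysics). One broadcast dopamine level retunes every neuron’s effective firing threshold at once; wiring the same control explicitly would cost one extra synapse per target neuron just to deliver a single shared scalar:

```python
import numpy as np

rng = np.random.default_rng(0)
n_neurons = 1000
drive = rng.normal(size=n_neurons)   # summed glutamate/GABA input per neuron
base_threshold = 0.5

def fires(drive, dopamine_level):
    # One diffuse scalar lowers the effective threshold of every neuron at once.
    return drive > base_threshold - 0.3 * dopamine_level

for da in (0.0, 0.5, 1.0):
    print(f"dopamine {da:.1f}: {int(fires(drive, da).sum())} of {n_neurons} fire")
```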
  • onyomi says:
  • December 5, 2015 at 11:56 am
  • I can’t even understand how there could be so many different dials, controls, etc. in an airplane cockpit; and yet…
  • http://etc.usf.edu/clippix/pix/portion-of-a-control-panel-in-an-airplane-cockpit_medium.jpg
  • Scott Alexander says:
  • December 5, 2015 at 7:26 pm
  • I don’t have a good answer, but here’s a crackpot theory:
  • One of the main ways we get new genes is by mutations that randomly duplicate old genes. The genes randomly diverge through genetic drift, and then evolution gets the chance to do one thing with one copy while leaving the other copy mostly intact.
  • (warning: the following is a really really dumb toy example and bears no relation to real dopamine receptors)
  • So suppose that at first all we had was D1 receptors doing everything. Then the gene randomly duplicates, so we have two copies of the D1 receptor gene. Then they drift apart, and one becomes modern D1 and the other modern D2. And suppose by coincidence of whatever drift they’re getting, D1 is expressed more in the reward system and D2 in the motor system. And suppose evolution wants to implement something like “sex should be very rewarding, so when you get sex, increase stimulation of dopamine receptors in the reward system”. And suppose evolution doesn’t want you to be having weird muscle tics and seizures every time you have sex because you’re also overstimulating the motor system, or make you get Parkinson’s Disease every time you have a dry spell. Evolution can make the D1 receptors sensitive to sex, and the D2 receptors not sensitive to it, and get what it wants. Now there’s evolutionary pressure to keep the D1/D2 distinction and it will be preserved.
  • Fast forward a few million years and you’ve got the modern picture of 5-7 different dopamine receptors plus serotonin, norepinephrine, glutamate, GABA, and a million other things.
  • In other words, the more neurotransmitters you have, the more finesse evolution can use when tuning different systems up or down. If we only had one neurotransmitter, glutamate, and evolution wanted us to have less sex for some reason, all it could do is tone down glutamate, in which case we’d be generally more tired and relaxed and non-thing-doing. But that would be bad for other reasons; for example, we would also gather less food. If there’s a single neurotransmitter involved in sex, then it can just tune that one down.
  • I realize that this has the problem of neurotransmitters not really corresponding well to simple things like sex, but it may be they correspond better to very complicated hidden variables that evolution frequently wants to tune, or at least they did in the past, or at least as much as they can given that evolution is inherently hard and inefficient.
  • I have no idea if this is actually true or not and other people may correct me if they know better.
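
edit: the “duplication buys finesse” argument can be made concrete with a toy optimization (hypothetical Python; the fitness surface and all numbers are invented). With one shared receptor gene, any tuning of the reward system drags the motor system along with it; after duplication, the two copies can be tuned independently and the best attainable fitness can only improve:

```python
def fitness(reward_gain, motor_gain):
    # Invented fitness surface: wants strong reward signaling, moderate motor gain.
    return -(reward_gain - 2.0) ** 2 - (motor_gain - 1.0) ** 2

gains = [x / 10 for x in range(0, 40)]

# One shared receptor gene: both systems are forced to use the same gain.
best_shared = max(fitness(g, g) for g in gains)

# After duplication: the D1-like and D2-like copies are tuned independently.
best_split = max(fitness(r, m) for r in gains for m in gains)

print(f"best fitness, one shared gene:  {best_shared:.2f}")   # -0.50
print(f"best fitness, duplicated genes: {best_split:.2f}")    #  0.00
```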
    • Oleg S. says:
    • December 6, 2015 at 3:17 am
    • The corresponding analogy in silico, as I understand it, would be: “Let’s have a genetic algorithm evolve us the best neural network for some particular purpose. We’ll have several types of neurons and some basic architecture of connections, and the prevalence of each type of neuron (or the strength of their connections) will be varied in the GA.” A type of neuron in the artificial network would then correspond to a neurotransmitter/receptor. If we design the network really well, we’ll be able to assign the same type label to neurons that have a similar function, and optimize them in the GA separately. (edit: see the sketch after this comment.)
    • One way to avoid a lot of receptors would be to compartmentalize neurons: have neurons that do a certain aspect of information processing (like emotions, motor control, image recognition) located near each other (like in the amygdala, cerebellum, visual cortex – I’m simplifying, of course), and then have handles on blood and nutrient flows to those regions. The two ways are very similar from a computational point of view – we can label groups of neurons in whatever way we want. Probably Nature uses both ways to control the neuronal population responsible for a particular task.
    • A way to test this theory is to engineer a small animal (like the roundworm C. elegans) that has all its amine neurotransmitter receptors internalized at some point of life and replaced by glutamate/GABA channels, and then to see how well it would do. The null hypothesis would be that once neural circuitry is established (the animal is adult) and receptors are replaced by their analogues, the behavior of modified animals should be basically the same as the control group’s.
    • However, my gut feeling is that worms lacking all amine (dopamine, serotonin) receptors and the enzymes producing those neurotransmitters would not develop normally. So, I expect that apart from being tools for regulating different information-processing systems on an evolutionary scale, different neurotransmitters may have something to do with neural development in each individual animal.
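
edit: a minimal sketch of the genetic-algorithm analogy above (toy Python; the genome is just the prevalence of each “neuron type”, and the target mix standing in for fitness is invented – a real experiment would score the evolved network on an actual task):

```python
import random

random.seed(0)
TYPES = ["glutamatergic", "GABAergic", "dopaminergic", "serotonergic"]
TARGET = [0.6, 0.25, 0.1, 0.05]   # stand-in "optimal" prevalence mix

def fitness(genome):
    return -sum((g - t) ** 2 for g, t in zip(genome, TARGET))

def mutate(genome, sigma=0.05):
    g = [max(1e-9, x + random.gauss(0, sigma)) for x in genome]
    total = sum(g)
    return [x / total for x in g]   # prevalences stay normalized to 1

population = [mutate([0.25] * 4, sigma=0.2) for _ in range(50)]
for _ in range(200):
    population.sort(key=fitness, reverse=True)
    elite = population[:10]                     # selection
    population = elite + [mutate(random.choice(elite)) for _ in range(40)]

best = max(population, key=fitness)
print({t: round(p, 2) for t, p in zip(TYPES, best)})   # converges toward TARGET
```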
      • nope says:
      • December 6, 2015 at 4:19 am
      • Wouldn’t compartmentalization be worse for more general abilities? And for communication between modules?
    • nope says:
    • December 6, 2015 at 4:14 am
    • This was the first thing that intuitively occurred to me. I can’t really think of an easier way for biology to stumble onto selectivity in up/down-regulation than this one, which may simply speak to my level of sophistication on this issue. Are there any examples of organisms displaying high behavioral complexity with few neurotransmitters? And does neurotransmitter complexity correlate decently with behavioral complexity? Sounds testable!
    • Neurno says:
    • December 6, 2015 at 4:21 pm
    • edit: @Scott Alexander
    • This fits well as a simplified explanation of what I understand the current somewhat-more-complicated scientific explanation to be for the question “how did all these similar but different genes/neurotransmitters/receptors come to be?”
    • JuanPeron says:
    • December 7, 2015 at 2:00 pm
    • This seems to be a solid biological path to acquiring more degrees of freedom in a system. That doesn’t reflect on whether it’s the true explanation, but it’s really promising as a way to get more mental states without adding tons of new neurons or whole brain regions. Basic excitatory/inhibitory neural nets seem to be fully representative of brains (Turing complete and all that), but we can use more complicated signaling/neurotransmitter systems to shrink those nets.
    • If you want to have some, but not all, regions of the brain change their behavior in the face of tiredness, sex, etc, you can either add a lot of new neurons to modulate the new effect, or slap some new transmitters on the regions you want to alter and keep brain size the same.

 

On the dangers of Genetic Engineering / CRISPR

Anonymous says:

  December 5, 2015 at 12:01 am

  There seems to have been a flurry of long-form articles published about CRISPR over the past few weeks. For example:

http://www.nytimes.com/2015/11/15/magazine/the-crispr-quandary.html
http://www.newyorker.com/magazine/2015/11/16/the-gene-hackers
https://www.sciencenews.org/article/gene-drives-spread-their-wings

Reading these, I’ve been especially interested in seeing how the mainstream discussion of CRISPR-ethics starts to unfold. For now, the articles seem to focus nearly exclusively on the problems/risks involved with editing human DNA, with some also mentioning ecological risks of gene drives gone wrong in plants or animals. While there’s obviously a lot to unpack with these issues already, I’ve been surprised by the relatively small concern there seems to be about deliberately edited viruses used for bioterrorism, which is the particular risk that scares me the most. Apart from one vague mention at the end of a Wired article from July (http://www.wired.com/2015/07/crispr-dna-editing-2/), I haven’t found anything that talks specifically about CRISPR as a weapon.  Could someone more knowledgeable about biology than I am please let me know why we shouldn’t be completely terrified about this? Is there some reason why it would be insanely, unrealistically hard to, say, make a couple choice edits to an influenza virus and create something way more deadly and infectious? Because right now I’m having a hard time seeing why CRISPR isn’t a “black ball” discovery, to use Bostrom’s term.

    • Dr Dealgood says:
    • December 5, 2015 at 1:14 am
    • CRISPR is a tool that’s mainly used for precisely modifying eukaryotic genomes: viral genomes are small enough that you really don’t need anything very sophisticated to make new viruses. In fact CRISPR is often delivered via a modified and very safe derivative of HIV called a lentiviral vector.
    • And as for being used as a weapon on its own, the only thing I’ve heard of close to that might be the idea to use it for gene drives against mosquitoes. Basically adding in “selfish genes” (that are more likely to be passed on) which also code for a particular trait, so that in a few generations much or most of the targeted population would have it. For obvious reasons this isn’t practical as a weapon against humans: even if millions were hit initially and it went completely undiscovered it would still be somewhere on the order of centuries before it would be a problem.
    • (EDIT: Didn’t see you already mentioned gene drives, presumably you knew about that already. Whoops.)
    • Anyway, as a general rule I would argue that we should avoid worrying about things we don’t understand. Wild speculation doesn’t make anyone safer and can distract you from risks which you do have meaningful control over.
    • svalbardcaretaker says:
    • December 5, 2015 at 8:16 am
    • We’ve had the technology to make deadly plagues since the early 2000s. Interestingly enough, the effort to keep that out of the public’s eye seems to have been successful; digging it up is somewhat convoluted.
    • CRISPR only makes stuff like that easier; so in short, there is every right to be terrified.
    • Neurno says:
    • December 6, 2015 at 3:47 pm
    • As a researcher who has been hacking genomes since well before convenient and powerful tools such as CRISPR existed… This is a real issue; the world has been in danger from this for well over a decade now, and the government has been scrambling quietly and largely ineffectually to try to reduce this danger. Fortunately, so far, the only people smart/educated enough to be a risk for designing such a weapon have been too wise to do so. Let us all hope that remains true for the foreseeable future, at least until the government comes up with some better defenses. I recommend increasing government funding for counter-bioterrorism research.

 

On dabbling in religion

Context: Maware wrote in response to a line of comments following a question about which religion we would each choose if we were dabbling in religion…

Maware says:

  • December 6, 2015 at 12:06 pm
  • This is because they are so irreligious that they can say “wouldn’t it be nice to be this?” The people who daydream about living on a south seas island have nothing in common with those who live on one, and if they had to actually deal with the reality of living on said island, would go nuts and probably disdain the islanders.
  • The people who hate a religion are closer to it than those who view it as a tourist spot with lovely architecture and quaint local rituals.

 

 

  •  Jiro says:
    • December 6, 2015 at 12:52 pm
    • The people who daydream about living on a south seas island have nothing in common with those who live on one
    • Somewhat unrelated comment: remember back when the same was true for sci-fi fans wishing they could live on a space colony?

 

  •   Neurno says:
    • December 6, 2015 at 3:30 pm
    • As an educated rationalist who grew up as a non-Mormon in an apparently happy-go-lucky, outwardly-cheerful, highly repressive and bigoted, but low-crime (other than domestic abuse) small Mormon town…. I would like to very strongly second Maware’s point. Fie on religious conformity approaching the power of law or uniform social sanction. I can assure you that bad things lurk down that road for free-thinkers.

 

On the potential value of a materialist pseudo-religion

A Postdoc says:

December 5, 2015 at 4:12 am

I was pondering recently whether part of the “purpose” of religion is to hack the intensely social nature of human cognition to get people to do things. It’s just easier to make people care about doing something if it “makes God happy” or “defeats the demons” than for some abstract reason like “it will make society better.” This seems to still be a true thing about human cognition (for instance, look how angry we get about terrorists while ignoring problems with a much larger body count but no human face.) So maybe we need a religion that includes both untrue-but-psychologically-motivating aliefs (“malaria nets make God happy!”) and true-but-abstract beliefs (“God is just a convenient label for an abstract set of moral principles.”) I’m not sure how well people could handle the cognitive dissonance in practice, but I feel like it would be an interesting experiment.

Neurno says:

December 6, 2015 at 4:00 pm

Absolutely not. I see religion as a failing of weak minds. Improve the weak minds, and I predict religion (along with any apparent social need for it) will simply disappear due to disinterest.

Dr Dealgood says:

December 5, 2015 at 10:39 am

  • We haven’t had a very good track record with materialist pseudo-religions over the last few centuries; it’s probably for the best to avoid repeating that mistake. I’m also a little fuzzy on why the world needs yet another competing unifying force: if anything, it seems like they’re the root of a lot of our present issues.
  • As for religions compatible with science, why not go back to classical sources? Stoicism is probably the most logical choice, but I’ve heard good things about Neoconfucianism (New Confucianism on the other hand seems like a bit of a mess). If you really wanted to you could even go for something like an updated Hermeticism: alchemy and astrology historically developed into chemistry and physics, so the idea of using those disciplines to seek enlightenment still sort of makes sense.

 

 

Deiseach says:

December 5, 2015 at 11:47 am

The kind of “religion of Humanity” which R.H. Benson describes in his 1907 SF apocalyptic novel “Lord of the World”? Based on scientific understanding, where Mankind is the only transcendent thing?

There was but one hope on the religious side, as he had told Mabel a dozen times, and that was that the Quietistic Pantheism which for the last century had made such giant strides in East and West alike, among Mohammedans, Buddhists, Hindus, Confucianists and the rest, should avail to check the supernatural frenzy that inspired their exoteric brethren. Pantheism, he understood, was what he held himself; for him “God” was the developing sum of created life, and impersonal Unity was the essence of His being; competition then was the great heresy that set men one against another and delayed all progress; for, to his mind, progress lay in the merging of the individual in the family, of the family in the commonwealth, of the commonwealth in the continent, and of the continent in the world. Finally, the world itself at any moment was no more than the mood of impersonal life. It was, in fact, the Catholic idea with the supernatural left out, a union of earthly fortunes, an abandonment of individualism on the one side, and of supernaturalism on the other. It was treason to appeal from God Immanent to God Transcendent; there was no God transcendent; God, so far as He could be known, was man.

Yet these two, husband and wife after a fashion — for they had entered into that terminable contract now recognised explicitly by the State—these two were very far from sharing in the usual heavy dulness of mere materialists. The world, for them, beat with one ardent life blossoming in flower and beast and man, a torrent of beautiful vigour flowing from a deep source and irrigating all that moved or felt. Its romance was the more appreciable because it was comprehensible to the minds that sprang from it; there were mysteries in it, but mysteries that enticed rather than baffled, for they unfolded new glories with every discovery that man could make; even inanimate objects, the fossil, the electric current, the far-off stars, these were dust thrown off by the Spirit of the World—fragrant with His Presence and eloquent of His Nature. For example, the announcement made by Klein, the astronomer, twenty years before, that the inhabitation of certain planets had become a certified fact—how vastly this had altered men’s views of themselves. But the one condition of progress and the building of Jerusalem, on the planet that happened to be men’s dwelling place, was peace, not the sword which Christ brought or that which Mahomet wielded; but peace that arose from, not passed, understanding; the peace that sprang from a knowledge that man was all and was able to develop himself only by sympathy with his fellows. To Oliver and his wife, then, the last century seemed like a revelation; little by little the old superstitions had died, and the new light broadened; the Spirit of the World had roused Himself, the sun had dawned in the west; and now with horror and loathing they had seen the clouds gather once more in the quarter whence all superstition had had its birth.

(After Mabel has seen a volor crash and people killed for the first time in her life; there are government officials who mercy-kill the very badly hurt, not likely to survive victims. “Down the steps of the great hospital on her right came figures running now, hatless, each carrying what looked like an old-fashioned camera. She knew what those men were, and her heart leaped in relief. They were the ministers of euthanasia.”)

“My dear, it’s all very sad; but you know it doesn’t really matter. It’s all over.”

“And — and they’ve just stopped?”

“Why, yes.”

Mabel compressed her lips a little; then she sighed. She had an agitated sort of meditation in the train. She knew perfectly that it was sheer nerves; but she could not just yet shake them off. As she had said, it was the first time she had seen death.

“And that priest — that priest doesn’t think so?”

“My dear, I’ll tell you what he believes. He believes that that man whom he showed the crucifix to, and said those words over, is alive somewhere, in spite of his brain being dead: he is not quite sure where; but he is either in a kind of smelting works being slowly burned; or, if he is very lucky, and that piece of wood took effect, he is somewhere beyond the clouds, before Three Persons who are only One although They are Three; that there are quantities of other people there, a Woman in Blue, a great many others in white with their heads under their arms, and still more with their heads on one side; and that they’ve all got harps and go on singing for ever and ever, and walking about on the clouds, and liking it very much indeed. He thinks, too, that all these nice people are perpetually looking down upon the aforesaid smelting-works, and praising the Three Great Persons for making them. That’s what the priest believes. Now you know it’s not likely; that kind of thing may be very nice, but it isn’t true.”

Mabel smiled pleasantly. She had never heard it put so well.

“No, my dear, you’re quite right. That sort of thing isn’t true. How can he believe it? He looked quite intelligent!”

“My dear girl, if I had told you in your cradle that the moon was green cheese, and had hammered at you ever since, every day and all day, that it was, you’d very nearly believe it by now. Why, you know in your heart that the euthanatisers are the real priests. Of course you do.”

John Schilling says:

December 5, 2015 at 1:20 pm

Materialist pseudoreligions, e.g. Marxism, Gaian environmentalism, have stumbled into the religion niche inadvertently and often in opposition to their founders’ intent to Not Start A Religion Because Religions Are Reactionary Nonsense.

It seems like it would be worth trying to design a few nontheistic religions with deliberate intent and through selective appropriation of the good parts of traditional religions, to see if it would do any better. I can think of one obvious example that shan’t be named, that has turned out to be fairly successful and mostly harmless except for all the vindictiveness towards apostates and critical heretics. Probably we could do better; maybe we could do well enough to base a society on the results.

 

  • Protagoras says:
  • December 5, 2015 at 6:08 pm
  • I think someone may have mentioned it elsewhere in this very thread, but one theory that seems to militate against any such materialist pseudoreligion being worthwhile is that our brains seem to be mostly wired for social interaction, with the other things we do with them being mostly lucky side effects. This is presumably the reason people mistakenly try to interact with the inanimate world as if it were consciously motivated. It’s plausible that this is also responsible for some of the benefits of religion; if the main goal is changing yourself, rather than understanding or changing the world (and there are plenty of cases where changing yourself seems extremely valuable), interacting with the world as if it were conscious may well get more of your brain involved and make it easier to make more extensive changes. Obviously, if there’s any merit to that theory, trying to construct a religion without the supernatural elements isn’t going to be very productive. It may be possible to get the benefits without taking the supernatural elements fully seriously; it’s not clear how this works. But if you wish to do supernatural pretense, there are existing religions that are tolerant of doubt and metaphorical interpretations. No need to invent a new one.

 

Max says:

December 6, 2015 at 2:43 pm

Why does religious belief have to be compatible with science and rationality? Science and rationality are tools to help man understand his physical world and its systems. It’s a perversion, even a subversion, of religion to presume it has the same purpose.

Because when the preacher goes and says evolution is wrong because “holy book”, and that “love is the most important thing” – but said love is very hard to find among its practitioners, and you can see the corruption without even trying hard – you kinda start doubting the whole thing very fast. And wondering whether your purpose is to spend your life on things which your intuition tells you are wrong in many cases.

Old religions worked all right when they were compatible with the general worldview. But even then not everything was peachy either – hence churches generally tended to become very corrupt and very violence-prone in order to keep the population “believing in the right thing”.

 

  • Tar Far says:
  • December 6, 2015 at 8:06 pm
  • I don’t know what your preacher said, and I take it as a given that some number of preachers are corrupt, but my general impression of the religious response against evolutionary science comes out of a fear that fallible men will interpret evolution to mean that there is nothing divine or sacred about our bodies, that there is no higher purpose for living besides perpetuating the species, that morality and virtue are relative, etc.
  • Isn’t such a fear justified?

 

Max says:

December 7, 2015 at 3:41 pm

Nope. Because Truth should be sacred, no matter how much it can hurt.

Yeah it is easier to accept lies and give rationalization and justification for them. “The road to hell is paved with best intentions”

The challenge is to accept that people die, and that people do extremely cruel and violent things – not because of Satanic corruption, but of their own volition. Because acceptance of Truth is the first step towards understanding and finding solutions.

If you give in to Lies, even comforting ones – that is a path… to exactly where old religions and ideologies have led us so far.

On my tactical approach to discussing HBD (above)

Technically Not Anonymous says:

December 6, 2015 at 8:34 pm

This should be interesting. The Future Primaeval announces they’re done pretending to not be completely evil (and confirms my suspicions that that is totally a thing ~the group which shall not be named~ deliberately does.)

Neurno says:

December 7, 2015 at 12:46 pm

@technically not anonymous:

I have seen this tactic taken before by ‘dark enlightenment’ types. Only after seeing this comment thread did I finally realize how I might use this concept to my advantage. In my perception the ‘dark enlightenment’ types are often evil black-robed philosophers going about disguised under robes of grey. Upon garnering what they feel to be a sufficient audience, they dramatically cast aside their grey robes and reveal that they were black-robed all along. “Haha,” they say, “I tricked you into taking my ideas seriously when normally you would have dismissed me out of hand! Now the seed of petty, small-minded, hateful philosophy has been planted in your brain and soon you shall grow to be like me!” Thus do they attempt to win converts.

Having finally grokked this, I have taken their strategy and reversed it, to great success! I came skulking into this comment thread in tattered robes of darkest black, posing as a highly controversial and somewhat frightening Mad Scientist. Once my controversy had gathered me an audience, I cast aside my robes of black and revealed myself to be clad in robes of shining grey! “Haha,” I declared, “I tricked you! I got you to think about my ideas seriously, when normally you would have dismissed them out of hand for being too science-y and uncontroversial! Now I have planted the seeds of science and rational thought about boring matters of potentially great importance in your brain, and soon you shall grow to be like me!”

I don’t know how necessary or efficacious this gambit actually was, because I lack an adequate control group, but it certainly was fun! For Fun Theory! For the bright shining destiny of Humankind! For the painstakingly slow and precise advancement of potentially-boring but also potentially-hugely-important plans/tools/concepts for the meta-advancement of human sapience! Huzzah!

 
