Lessons from zombie-psychosis:
Cotard’s Syndrome—in which a person can believe that they’re dead, that their organs are rotting, or that they don’t exist—was first identified by the French neurologist Jules Cotard more than a century ago, in 1882. But the condition is so rare that it’s still far from fully understood. […] … But Cotard’s Syndrome isn’t simply interesting from a neuroscientific or psychological perspective. In the world of artificial intelligence, roboticists are working to build ever-more complex machines that replicate human behavior. One of the central questions is whether machines can truly become self-aware. Could understanding Cotard’s Syndrome provide the answer?
This could go so wrong …
Swiss banking giant UBS wants to talk to you about robotic emotion simulation, for some reason. It’s not at all badly done (irrespective of what it’s selling).
Building on [Herbert A.] Simon’s achievements in the field of artificial intelligence, we take a journey to explore the latest innovations in AI and, most importantly, its human element, to ultimately answer the controversial questions: What physical human characteristics and emotions must a robot have to make people react to it? And, obversely, Can AI recognize human emotions? …
The ad (if that’s what it is) has interactive features that seek to make some of its questions performative. It begins to fold back upon itself only in the final section, when it suggests:
Breakthroughs in data processing and conversation systems are helping more and more companies to implement AI in their operations. According to some experts, well-advanced artificial intelligence could someday not only assist businesses in doing their jobs more efficiently, but also bring a more human touch back to customer service, leading consumers to prefer sophisticated and professional AI service to today’s human variety.
Puzzle resolved. We’re exploring a projection of UBS’s customer interface, from the near future.
Great Filter calculation proceeds, around the back:
… according to a new paper published in the journal Astrobiology, recent discoveries of exoplanets combined with a broader approach to answering this question have allowed researchers to conclude that, unless the odds of advanced life evolving on a habitable planet are immensely low, then humankind is not the universe’s first technological, or advanced, civilization. […] “The question of whether advanced civilizations exist elsewhere in the universe has always been vexed with three large uncertainties in the Drake equation,” said Adam Frank, professor of physics and astronomy at the University of Rochester and co-author of the paper, in a press release. […] … “Thanks to NASA’s Kepler satellite and other searches, we now know that roughly one-fifth of stars have planets in ‘habitable zones,’ where temperatures could support life as we know it. So one of the three big uncertainties has now been constrained,” explained Frank.
However, the universe is more than 13 billion years old. “That means that even if there have been a thousand civilizations in our own galaxy, if they live only as long as we have been around — roughly ten thousand years — then all of them are likely already extinct,” explained Sullivan. “And others won’t evolve until we are long gone.”
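The logic Frank and Sullivan sketch can be made concrete with a back-of-envelope calculation in the spirit of the Drake equation. The star count and the per-planet probability below are illustrative assumptions for demonstration, not figures from the paper; only the “roughly one-fifth” habitable-zone fraction comes from Frank’s quote above.

```python
# Back-of-envelope, Drake-style estimate. All constants here except
# F_HABITABLE are ASSUMED round numbers for illustration.

N_STARS_GALAXY = 2e11   # rough star count in the Milky Way (assumption)
F_HABITABLE = 0.2       # fraction of stars with habitable-zone planets
                        # ("roughly one-fifth", per Frank's quote)

def civilizations_ever(p_techno):
    """Expected number of technological civilizations ever to arise in
    the galaxy, given a per-habitable-planet probability p_techno that
    one evolves there at some point in cosmic history."""
    return N_STARS_GALAXY * F_HABITABLE * p_techno

# Even a tiny per-planet probability yields many civilizations over
# cosmic history:
print(civilizations_ever(1e-8))   # 400.0

# Conversely, the per-planet odds would have to fall below roughly
# 1 / (stars x habitable fraction) for humanity to be the galaxy's first:
threshold = 1.0 / (N_STARS_GALAXY * F_HABITABLE)
print(f"{threshold:.1e}")         # 2.5e-11
```

The point of the paper, as the quotes suggest, is that “immensely low” has a precise meaning: unless the per-planet probability is below that threshold, someone has very likely come before us — which says nothing about whether they are still around.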
(Apologies for the image quality — stumped in my search for a better one.)
Lots of stimulation in this John Horgan interview with Eliezer Yudkowsky (via). Among the gems:
Horgan: I’ve described the Singularity as an “escapist, pseudoscientific” fantasy that distracts us from climate change, war, inequality and other serious problems. Why am I wrong?
Yudkowsky: Because you’re trying to forecast empirical facts by psychoanalyzing people. This never works.
(Note on ‘Singularity’ FWIW by EY here: “I think that the ‘Singularity’ has become a suitcase word with too many mutually incompatible meanings and details packed into it, and I’ve stopped using it.”)
One more EY snippet: “… human axons transmit information at around a millionth of the speed of light; even when it comes to heat dissipation, each synaptic operation in the brain consumes around a million times the minimum heat dissipation for an irreversible binary operation at 300 Kelvin, and so on. Why think the brain’s software is closer to optimal than the hardware? Human intelligence is privileged mainly by being the least possible level of intelligence that suffices to construct a computer; if it were possible to construct a computer with less intelligence, we’d be having this conversation at that level of intelligence instead.”
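The “minimum heat dissipation” EY invokes is the Landauer limit: erasing one bit irreversibly must dissipate at least k·T·ln 2. A quick sketch checks the order of magnitude of his “million times” claim; the synaptic energy figure below is an assumed ballpark from the physiology literature, not from the interview.

```python
import math

# Landauer limit: minimum heat dissipated by an irreversible one-bit
# operation at temperature T is k_B * T * ln(2).
K_B = 1.380649e-23        # Boltzmann constant, J/K (exact SI value)
T = 300.0                 # temperature in kelvin, per EY's quote

landauer_joules = K_B * T * math.log(2)
print(f"{landauer_joules:.2e} J")   # ~2.87e-21 J per bit

# A synaptic event is often ballparked at ~1e-15 J (an ASSUMPTION for
# illustration). Compare it to the Landauer bound:
synapse_joules = 1e-15
ratio = synapse_joules / landauer_joules
print(f"~{ratio:.0f}x the Landauer bound")
```

With these assumed numbers the ratio comes out in the hundreds of thousands — the same order of magnitude as EY’s “around a million times,” which is the point: biological hardware sits many decades above the thermodynamic floor.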
I consider that we are still monkeys; we just came down from the trees rather recently, and it’s astonishing how well we can do. The fact that we can even write down partial differential equations, let alone solve them, to me is a miracle. The fact that we ourselves at the moment have very limited understanding of things doesn’t surprise me at all. […] If you go far enough in the future, we’ll be asking totally different questions. We’ll be thinking thoughts which at the moment we can’t even imagine. So I think to say that a question is unanswerable is ludicrous. All you can say is that it’s not going to be answered in the next hundred years, or the next two hundred years… To say there are unanswerable questions makes no sense. But if history comes to a stop, if we descend into barbarism or if we become extinct, then the questions won’t be answered. But to me that’s just a historical accident.
Now, do I think that wellbeing is a higher value than truth? No. I hope I would never cling to something because it made me happy, if I suspected it wasn’t true. Philosophy involves a restless search for the truth, an unceasing examination of one’s assumptions. I enjoy that search, which is why I didn’t stop at Stoicism, but have kept on looking, because I don’t think Stoicism is the whole truth about reality. But what gives me the motive to keep on looking is ultimately a sort of Platonic faith that the truth is good, and that it’s good for me. Why bother searching unless you thought the destination was worth reaching?
If the apparent, empirical, psychological, or anthropological subject were the real agent of the philosophical enterprise, this question would make a lot of sense.
A cluster of crucial arguments here, launched by an exotic question:
What if artificial intelligence is so unfamiliar that we have a hard time recognising it? Could our machines have become self-aware without our even knowing it? The huge obstacle to addressing such questions is that no one is really sure what consciousness is, let alone whether we’d know it if we saw it. …
Despite decades of focused effort, computer scientists haven’t managed to build a conscious AI system intentionally, so it can’t be easy. For this reason, even those who fret the most about artificial intelligence, such as University of Oxford philosopher Nick Bostrom, doubt that AI will catch us completely unawares. And yet, there is reason to think that conscious machines might be a byproduct of some other effort altogether. …
A couple years back, I published a piece in Scientia Salon, “Back to Square One: Toward a Post-intentional Future,” that challenged the intentional realist to warrant their theoretical interpretations of the human. What is the nature of the data that drives their intentional accounts? What kind of metacognitive capacity can they bring to bear?
I asked these questions precisely because they cannot be answered. The intentionalist has next to no clue as to the nature, let alone the provenance, of their data, and even less inkling as to the metacognitive resources at their disposal. They have theories, of course, but it is the proliferation of theories that is precisely the problem. Make no mistake: the failure of their project, their consistent inability to formulate their explananda, let alone provide any decisive explanations, is the primary reason why cognitive science devolves so quickly into philosophy. …