Quotable (#195)

James Thompson on the lessons of AlphaGo for intelligence research:

Very interestingly, getting more computer power does not help AlphaGo all that much. Between the first match against the professional European Champion Fan Hui and the test match against World Champion Lee Sedol, AlphaGo improved to a 99% win rate against its version of six months earlier. Against the world champion Lee Sedol, AlphaGo played a divine move: a move with a human probability of only 1 in 1000, but one revealed 50 moves later to have been key to influencing power and territory in the centre of the board. (The team do not yet have techniques to show exactly why it made that move.) Originally seen by commentators as a fat-finger mis-click, it was the first indication of real creativity. Not a boring machine. […] The creative capabilities of the deep knowledge system are only one aspect of this incredible achievement. More impressive is the rate at which it learnt the game, going up the playing hierarchy from nothing, 1 rank a month, to world champion in 18 months, and it is nowhere near asymptote yet. It does not require the computer power to compute 200 million positions a second that IBM's Deep Blue required to beat Kasparov. Talk about a mechanical Turk! AlphaGo needed to look at only 100,000 positions a second for a game that was one order of magnitude more complicated than chess. It becomes more human, comparatively, the more you find out about it, yet what it does now is not rigid and handcrafted, but flexible, creative, deep and real. …

And on an optimistic note:

What about us poor humans, of the squishy sort? Fan Hui found his defeat liberating, and it lifted his game. He has risen from 600th position to 300th position as a consequence of thinking about Go in a different way. Lee Sedol, at the very top of the mountain till he met AlphaGo, rated it the best experience of his life. The one game he won was based on a divine move of his own, another "less than 1 in 1000" move. He will help overturn convention, and take the game to new heights. […] All the commentary on the Singularity is that when machines become brighter than us they will take over, reducing us to irrelevant stupidity. I doubt it. They will drive us to new heights.

Quotable (#149)

A ruined empire on the brink:

All around the Web, in print, and on radio comes the claim that America has entered its “Weimar” phase. Economic collapse, political paralysis, rampant homosexuality, a desperate, disoriented populace open to the ravings of a demagogue – that is the portrait we get of Germany between the end of World War I in 1918 and the Nazi seizure of power in 1933. That is where America is supposedly situated in 2016. […] Yes, Weimar Germany ended badly, horribly so. But …

Much tying-itself-in-knots follows (not entirely uninterestingly).

The historical analogy is far stronger than the apologetic analysis. What Weitz refuses to contemplate is that the set of outcomes he dogmatically defends as “social progress” is a partisan agenda (the New England Utopia) masquerading as a universal value. What left-liberals see as unambiguous advance looks to everyone else like losing. As the Internet decentralizes media, the progressive narrative monopoly is coming apart in the hurricane, and nostalgic preaching for the old religion won’t glue it back together. Weitz is right about one thing, though: there’s no doubt political developments could be blown in very ugly directions.

It’s chicken (the edge of the cliff version).
Left-Liberals: Stick with our vector for social development, or we’ll all go over the edge.
Mashed-Right: There have been far too many concessions already …

You have to swerve, Weitz pleads. Even if they do this time, they won’t forever, and it’s already far less obvious that they will. Compared to what we’re used to, that makes it a whole new world.

Game Over

Go is done, as a side-effect of general machinic ‘beating humans at stuff’ capability:

“This is a really big result, it’s huge,” says Rémi Coulom, a programmer in Lille, France, who designed a commercial Go program called Crazy Stone. He had thought computer mastery of the game was a decade away.

The IBM chess computer Deep Blue, which famously beat grandmaster Garry Kasparov in 1997, was explicitly programmed to win at the game. But AlphaGo was not preprogrammed to play Go: rather, it learned using a general-purpose algorithm that allowed it to interpret the game’s patterns, in a similar way to how a DeepMind program learned to play 49 different arcade games.

This means that similar techniques could be applied to other AI domains that require recognition of complex patterns, long-term planning and decision-making, says Hassabis. “A lot of the things we’re trying to do in the world come under that rubric.”

UF emphasis (to celebrate one of the most unintentionally comedic sentences in the history of the earth).

We’re entering the mopping-up stage at this point.

Eliezer Yudkowsky is not amused.

The Wired story.

Quotable (#134)

In The New Yorker, John Cassidy lucidly rehearses the core game theoretic model of economic crisis:

… deciding whether to invest in financial assets or any other form of capital can be viewed as a huge n-person game (one involving more than two participants), in which there are two options: trust in a good outcome, which will lead you to make the investment, or defect from the game and sit on your money. If you don’t have a firm idea about what is going to happen and the payoffs are extremely uncertain, the optimal strategy may well be to defect rather than to trust. And if everybody defects, bad things result.
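Cassidy's n-person trust/defect game can be sketched numerically. The payoff values (+2.0 for trusting when enough others trust, -3.0 when they don't, 0.0 for defecting) and the 50% participation threshold below are illustrative assumptions, not figures from the article:

```python
# Toy sketch of the n-person trust/defect investment game.
# Payoff numbers and the 50% threshold are illustrative assumptions.

def payoff(action, n_trusting, n_players):
    """Payoff to one player, given how many players chose to trust."""
    share = n_trusting / n_players
    if action == "trust":
        # Investing pays off only if enough others invest too.
        return 2.0 if share >= 0.5 else -3.0
    # Defecting (sitting on your money) is safe but unproductive.
    return 0.0

def expected_trust_payoff(p_good):
    """Expected payoff of trusting, given a subjective probability
    p_good that enough others will also trust."""
    return p_good * 2.0 + (1.0 - p_good) * -3.0
```

With these numbers the break-even belief is p_good = 0.6: any less confidence that others will invest makes defection (payoff 0.0) individually optimal, and if everyone reasons this way the bad aggregate outcome follows — which is exactly the coordination failure the quote describes.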

Does anybody seriously expect honesty from the status quo within this context? ‘Optimism’ is a fundamental building-block of regime stability. Expect it to be very carefully nurtured, with whatever epistemological flexibility is found helpful.

(Stay to the end of the article for the ominous nonlinear dynamics that correspond to narrative dike-breaking.)

Twitter cuts (#75)


Quotable (#77)

The crack artificial intelligence creeps in through is an unpatchable coordination problem. That gets ever easier to see:

Stephen Hawking deftly framed the issue when he wrote that, in the short term, A.I.’s impact depends on who controls it; in the long term, it depends on whether it can be controlled at all. … One obvious example is autonomous killing machines. More than 50 nations are developing battlefield robots. The most sought-after will be robots that make the “kill decision” — the decision to target and kill someone — without a human in the loop. Research into autonomous battlefield robots and drones is richly funded today in many nations, including the United States, the United Kingdom, Germany, China, India, Russia and Israel. These weapons aren’t prohibited by international law, but even if they were, it’s doubtful they’ll conform to international humanitarian law or even laws governing armed conflict. How will they tell friend from foe? Combatant from civilian? Who will be held accountable? That these questions go unanswered as the development of autonomous killing machines turns into an unacknowledged arms race shows how ethically fraught the situation is.

Twitter cuts (#36)

Andreessen tweet-storming on prices and information:
