The idea that European political fragmentation, despite its evident costs, also brought great benefits, enjoys a distinguished lineage. In the closing chapter of The History of the Decline and Fall of the Roman Empire (1789), Edward Gibbon wrote: ‘Europe is now divided into twelve powerful, though unequal, kingdoms.’ Three of them he called ‘respectable commonwealths’, the rest ‘a variety of smaller, though independent, states’. The ‘abuses of tyranny are restrained by the mutual influence of fear and shame’, Gibbon wrote, adding that ‘republics have acquired order and stability; monarchies have imbibed the principles of freedom, or, at least, of moderation; and some sense of honour and justice is introduced into the most defective constitutions by the general manners of the times.’ […] In other words, the rivalries between the states, and their examples to one another, also ameliorated some of the worst possibilities of political authoritarianism. Gibbon added that ‘in peace, the progress of knowledge and industry is accelerated by the emulation of so many active rivals’. Other Enlightenment writers, David Hume and Immanuel Kant for example, saw it the same way. From the early 18th-century reforms of Russia’s Peter the Great, to the United States’ panicked technological mobilisation in response to the Soviet Union’s 1957 launch of Sputnik, interstate competition was a powerful economic mover. More important, perhaps, the ‘states system’ constrained the ability of political and religious authorities to control intellectual innovation. If conservative rulers clamped down on heretical and subversive (that is, original and creative) thought, their smartest citizens would just go elsewhere (as many of them, indeed, did).
Political disintegration combined with cultural-market integration was the key.
In 18th-century Europe, the interplay between pure science and the work of engineers and mechanics became progressively stronger. This interaction of propositional knowledge (knowledge of ‘what’) and prescriptive knowledge (knowledge of ‘how’) constituted a positive feedback or autocatalytic model. In such systems, once the process gets underway, it can become self-propelled. In that sense, knowledge-based growth is one of the most persistent of all historical phenomena – though the conditions of its persistence are complex and require above all a competitive and open market for ideas.
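The autocatalytic dynamic described above can be sketched as a toy model (the coupling values and step counts are invented for illustration, not taken from the text): each stock of knowledge grows in proportion to the other, so once the loop is running, growth feeds on itself.

```python
# Toy model of the propositional/prescriptive feedback loop.
# 'what' = propositional knowledge, 'how' = prescriptive knowledge;
# each grows in proportion to the other (illustrative parameters only).

def simulate(coupling, steps=10, what=1.0, how=1.0):
    for _ in range(steps):
        what, how = what + coupling * how, how + coupling * what
    return what, how

# A strong interaction (open market for ideas) versus a suppressed one:
open_market = simulate(coupling=0.2)
closed = simulate(coupling=0.02)
```

With symmetric starting stocks the recurrence is simply geometric, which is the point: the feedback makes knowledge growth self-propelled rather than additive, and choking the interaction (the low-coupling case) collapses it back toward stagnation.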
The conclusion to a tangled little piece of esoteric amusement:
… And so, in the book itself, like one of the horror protagonists he discusses, Sandifer continually, compulsively – and less and less convincingly – says no, asserts that nothing is wrong, that he’s in control, that he’s not unhealthi[l]y interested in his subjects, that he knows they’re wrong and evil (did you know he thinks they’re wrong and evil? let’s say it again to make sure), that he may be gazing into the abyss but – rest easy – it’s not gazing into him, that nothing is off here, dear reader, oh no, that the trio is just as dismissible as you thought when you began reading, let me just reiterate that once again for clarity, no there is not anything going on over there in the shadows –
He’s of the Devil’s party, but he doesn’t know it.
(If you’ve no idea at all what’s going on here, that’s almost certainly for the best.)
There are a lot of excitable feedback circuits to be discovered on the way down the slope. This looks like one:
Analyzing data from a large, worldwide sample, two Chinese psychologists report people whose countries are more involved with wars and similar conflicts experience higher levels of existential fear, which drive them to greater religiosity. […] Previous surveys have found highly religious Americans tend to be more supportive of war, as well as of torturing one’s opponents. This raises a profound and troubling question: Could it be that armed conflict and intense religiosity are in a mutually reinforcing relationship? […] “The relationship between war and religiousness may be bidirectional,” write Hongfei Du and Peilian Chi of the University of Macau. “War strengthens individuals’ religiousness (due to) their worries about war, while fundamental religious beliefs result in violent conflicts and war.”
The Du and Chi paper is here.
(Societies are partially-efficient homeostats.)
This is one of the greatest things ever written, period.
‘SOCI’ abbreviates ‘self-organizing collective intelligence’.
The basic dynamics of a SOCI is as follows. It begins as some sort of attractor — some aesthetic sensibility or yearning — that is able to grab the attention and energy of some group of people. Generally one that is very vague and abstract. Some idea or notion that only makes sense to a relatively small group. […] But, and this is the key move, when those people apply their attention and energy to the SOCI, this makes it more real, easier for more people to grasp and to find interesting and valuable. Therefore, more attractive to more people and their attention and energy. […] … If the SOCI has enough capacity within its collective intelligence to resolve the challenge, it “levels up” and expands its ability to attract more attention and energy. If not, then it becomes somewhat bounded (at least for the present) and begins to find the limit of “what it is”.
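The loop described in the excerpt — attention makes the SOCI more real, which attracts more attention, until a challenge either ‘levels it up’ or bounds it — can be sketched in a few lines (all the numbers here are invented, purely to show the shape of the dynamic):

```python
# Minimal sketch of the SOCI feedback loop (invented parameters).
# Attention compounds; each challenge either levels the SOCI up
# (it attracts even faster) or bounds it (it finds "what it is").

def run_soci(capacity, challenges, attention=1.0, growth=1.5):
    for challenge in challenges:
        attention *= growth                    # positive feedback phase
        if capacity * attention >= challenge:
            growth += 0.1                      # levels up: steeper attraction
        else:
            return attention                   # bounded, at least for now
    return attention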
Greenhall then narrates the story of Bitcoin to date, within this framework. The sheer magnitude of the innovation it has introduced emerges starkly.
My sense is that over just the next five years this new form of SOCI will go through its gestation, birthing and childhood development stages. The result will be a form of collective intelligence that is so much more capable than anything in the current environment that it will sweep away even the most powerful contemporary collective intelligences (in particular both corporations and nation states) in establishing itself as the new dominant form of collective intelligence on the Earth. […] And whoever gets there first will “win” in a fashion that is rarely seen in history.
This will look prophetic not too far down the road.
On history, cybernetics, and the end of trust:
We’re not undergoing, since 2008, an abrupt and unexpected “economic crisis,” we’re only witnessing the slow collapse of political economy as an art of governing. Economics has never been a reality or a science; from its inception in the 17th century, it’s never been anything but an art of governing populations. Scarcity had to be avoided if riots were to be avoided – hence the importance of “grains” – and wealth was to be produced to increase the power of the sovereign. “The surest way for all government is to rely on the interests of men,” said Hamilton. Once the “natural” laws of economy were elucidated, governing meant letting its harmonious mechanism operate freely and moving men by manipulating their interests. Harmony, the predictability of behaviors, a radiant future, an assumed rationality of the actors: all this implied a certain trust, the ability to “give credit.” Now, it’s precisely these tenets of the old governmental practice which management through permanent crisis is pulverizing. We’re not experiencing a “crisis of trust” but the end of trust, which has become superfluous to government. Where control and transparency reign, where the subjects’ behavior is anticipated in real time through the algorithmic processing of a mass of available data about them, there’s no more need to trust them or for them to trust. It’s sufficient that they be sufficiently monitored. As Lenin said, “Trust is good, control is better.”
(Emphasis in original.)
“Cybernetic government is inherently apocalyptic.” — Twice a day, even stopped communists can see the time.
An exponential tech (deep-time) tweet-storm:
UPI (among others) reports:
SpaceX and Tesla CEO Elon Musk said he plans to send humans to Mars by 2025. … […] “Mars is the next natural step. In fact, it’s the only planet we have a shot at establishing a self-sustaining city on,” he said. “Once we do establish such a city, there will be strong forcing function for the improvement of space flight technology that will then enable us to establish colonies elsewhere in the solar system and ultimately extend beyond our solar system.”
‘Forcing functions’ play a critical role in Musk’s thinking. Beginning to do something is catalytic. It activates the positive cybernetics required to carry the process forward. That’s why Musk likes to get started with things he wants to see done, at the earliest opportunity, and certainly before there’s any basis for a confident forecast — in the absence of forcing functions — that they’re ultimately doable at all.
Every significant business leader of recent times has had a cybernetic heuristic of some kind. They function as entrepreneurial propellant. Musk’s might well be the most dynamic we’ve seen yet.
ADDED: On-topic Reddit meanderings.
Craig Hickman on deepening neuro-technological darkness:
The convergence of knowledge and technology for the benefit or enslavement of society (CKTS) is the core aspect of 21st century science initiatives across the global system, which is based on five principles: (1) the interdependence of all components of nature and society (the so called network society, etc.), (2) enhancement of creativity and innovation through evolutionary processes of convergence that combine existing principles, and divergence that generates new ones (control of creativity and innovation by corporate power), (3) decision analysis for research and development based on system-logic deduction (data-analysis, machine learning, AI, etc.), (4) higher-level cross-domain languages to generate new solutions and support transfer of new knowledge (new forms of non-representational systems and mappings, topological, etc.). As civilization and societal challenges become more and more dependent on external and internalized artificial mechanisms and technological systems we are faced with the convergence of “NBIC” technological reorganization of corporate and socio-cultural fields of business, inquiry, and research into: nanotechnology, biotechnology, information technology, and cognitive and neurosciences. But it is the neuroscientific breakthroughs and initiatives that will underpin the forms of global governance: political and economic systems of rules, negotiations, and navigation systems of impersonal and indifferent regulatory and reason-based imperialism of the future capitalist regimes as they begin to marshal every aspect of life into a data-centric vision of command and control.
The subsequent list of ‘neuro-‘ prefixed social management disciplines, accompanied by short introductions, is a treasure.
ADDED: Highly relevant.
In The New Yorker, John Cassidy lucidly rehearses the core game theoretic model of economic crisis:
… deciding whether to invest in financial assets or any other form of capital can be viewed as a huge n-person game (one involving more than two participants), in which there are two options: trust in a good outcome, which will lead you to make the investment, or defect from the game and sit on your money. If you don’t have a firm idea about what is going to happen and the payoffs are extremely uncertain, the optimal strategy may well be to defect rather than to trust. And if everybody defects, bad things result.
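Cassidy's n-person game can be sketched as a threshold coordination game (the payoff numbers and the 50% critical mass below are hypothetical, chosen only to exhibit the two equilibria):

```python
# Sketch of the n-person trust game (hypothetical payoffs).
# Investing pays off only if a critical mass of the other players
# also invests; defecting (sitting on cash) yields a safe trickle.

def payoff(invest, n_investing, n_players, good=2.0, bad=-1.0, safe=0.1):
    if not invest:
        return safe
    return good if n_investing / n_players > 0.5 else bad

# When trust is widespread, investing beats defecting...
everyone_in = payoff(True, n_investing=90, n_players=100)
# ...but a lone investor amid mass defection is punished, so
# universal defection is self-reinforcing once it takes hold.
lone_investor = payoff(True, n_investing=1, n_players=100)
```

Both ‘everybody trusts’ and ‘everybody defects’ are stable under these payoffs, which is exactly why the narrative management discussed below becomes load-bearing: expectations alone select the equilibrium.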
Does anybody seriously expect honesty from the status quo within this context? ‘Optimism’ is a fundamental building-block of regime stability. Expect it to be very carefully nurtured, with whatever epistemological flexibility is found helpful.
(Stay to the end of the article for the ominous nonlinear dynamics that correspond to narrative dike-breaking.)