Indefinitely Scalable Computing = Artificial Life Engineering

The traditional CPU/RAM computer architecture is increasingly unscalable, presenting a challenge for the industry—and it is too fragile to be securable even at its current scale, presenting a challenge for society as well. This paper argues that new architectures and computational models, designed around software-based artificial life, can offer radical solutions to both problems. The challenge for the soft alife research community is to harness the dynamics of life and complexity in service of robust, scalable computations—and in many ways we can keep doing what we are doing, provided we use indefinitely scalable computational models to do so. This paper reviews the argument for robustness in scalability, delivers that challenge to the soft alife community, and summarizes recent progress in architecture and program design for indefinitely scalable computing via artificial life engineering.

The future of software artificial life

Focusing particularly on models that span traditionally separate representational levels, the ‘soft alife’ (Bedau, 2003) research community has used digital computers to investigate everything from artificial physics and chemistry to artificial biology and ecology. Collectively, the community has sharpened the understanding of life-like systems, deepened the understanding of their range, and offered illuminating examples of their complexities. Alife models and techniques have found applications in contexts such as film production (Bajec and Heppner, 2009; Reynolds, 1987) and video games (e.g., Grand, 2003). The productivity of the community has been admirable, particularly given its modest size, but we believe a far greater destiny lies in store for it: Software-based artificial life is to be the architectural foundation of truly scalable and robust digital computing. That is because, fundamentally, the mechanisms required for robust scalability—to switch energy and perform work, to adapt to local conditions or maintain invariants despite them, to increase parallel processing to suit available resources—are precisely what life does.

Future computer programs will be less like frozen entities chained to the static memory locations where they were loaded, and more like yeasty clusters of digital cells, moving and growing, healing and reproducing, cooperating and competing for computing resources on a vast digital landscape—one that will itself be growing and changing, as we build and upgrade it even while it is operating. Any program instance will be finite, but the substrate architecture itself will be indefinitely scalable, defined as supporting open-ended computational growth without requiring substantial re-engineering (Ackley and Cannon, 2011). An indefinitely scalable machine will ultimately provide an aggregate computational power to dwarf our current visions for even high-performance computing.

To get there from here, we need to reveal, remove, and reimplement design elements that block indefinite scalability. Traditional models of computation, as well as important areas of soft alife research, embody several such assumptions. We need to raise awareness of the costs of such designs, and advocate for existing alternatives as well as develop new ones. To get there from here, ultimately a significant societal investment will be required, to back an expanding software artificial life community as it fleshes out a body of scientific and engineering knowledge around robust artificial life for computer architecture.
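To make the contrast with static, globally addressed memory concrete, the sketch below (ours, purely illustrative, and not the authors' Movable Feast Machine implementation; all names are hypothetical) shows the shape of an indefinitely scalable update rule: each event touches only a bounded window around one randomly chosen grid site, so no global clock or global memory view is required, and further tiles of grid could in principle be attached without re-engineering the rule.

    // Illustrative sketch only: an asynchronous, locality-bounded update loop.
    // Every event reads and writes only a small window around one randomly
    // chosen site, so there is no global clock and no globally shared memory.
    #include <cstdint>
    #include <random>
    #include <vector>

    constexpr int W = 64, H = 64;  // one finite tile of a conceptually unbounded grid
    constexpr int R = 2;           // event-window radius: all the state one event may touch

    struct Site { uint8_t type = 0; };  // 0 = empty; nonzero = some kind of 'digital cell'

    std::vector<Site> grid(W * H);
    std::mt19937 rng(1234);

    Site& at(int x, int y) {            // toroidal wrap keeps the sketch self-contained
      return grid[((y + H) % H) * W + ((x + W) % W)];
    }

    // One asynchronous event: a cell copies itself into a random empty site
    // inside its event window -- growth driven purely by local interactions.
    void doEvent(int x, int y) {
      Site& center = at(x, y);
      if (center.type == 0) return;
      std::uniform_int_distribution<int> off(-R, R);
      int nx = x + off(rng), ny = y + off(rng);
      if (at(nx, ny).type == 0) at(nx, ny).type = center.type;
    }

    int main() {
      at(W / 2, H / 2).type = 1;  // seed a single cell
      std::uniform_int_distribution<int> px(0, W - 1), py(0, H - 1);
      for (long e = 0; e < 1000000; ++e)  // events fire at random sites; no global barrier
        doEvent(px(rng), py(rng));
    }

Growing such a computation means adding more tiles and more event sources, not widening any shared bus or address space.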
There is much to be invented and discovered, but the payoff will be immense: the development of a tough and savvy computing base of great capability—not a system promising freedom from all risk or fault, but a system for which risk management and fault tolerance have always been inescapable parts of life.

To get there from here, we also need to get going. Recently we have presented a framework called bespoke physics to ground the effort (Ackley, 2013a); in addition, we have made the case to communities focusing on operating systems (Ackley and Cannon, 2011), spatial computing (Ackley et al., 2013), and general computing (Ackley, 2013b). Here our purpose is to sound a vigorous call to arms and place a challenge before the soft alife community, and to provide an update on our own recent progress in indefinitely scalable computing via artificial life engineering.

In the rest of this section we expand on the twin challenges—scalability and security—now facing traditional computer architecture. Section ‘Beyond serial determinism’ proposes an alternative and grounds it briefly in history, then Section ‘Challenges to the soft alife community’ highlights common architectural assumptions—such as synchronous updating and perfect reliability—that can yield evocative models in the small but lead to engineering dead ends in the large. Section ‘Programming the Movable Feast Machine’ reports progress on our tool-building efforts for indefinitely scalable architecture, along with first benchmarks on (finitely) parallel hardware. Finally, Section ‘A call to action’ touches on our next steps and appeals to the soft artificial life community for help.

Computer architecture at a crossroads

The rise of digital computing over the last seventy years has been a stunning feat of technological research and development, with revolutionary economic and societal impacts. But recently the growth rate of traditional serial deterministic computing has plateaued, as further clock speed increases consumed exorbitant resources for diminishing returns. Now ‘multicore’ machines exchange strict serial determinism for a modicum of parallelism, creating some uncertainty about the exact sequencing of operations while preserving overall input-output determinism for well-formed programs. But even there, the requirement for cache coherence—so that the architecture presents a unified Random Access Memory to all processors—is demanding increasingly heroic engineering (e.g., Xu et al., 2011), even when only considering scalability within a single chip.

At the same time, given recent high-profile computer penetrations and security failures (Harding, 2014; Perlroth, 2013, and many others reported and not), there is underappreciated irony in the computing industry’s determined preservation of CPU and RAM architectures, which—by fundamental design—are all but impossible to keep secure. Because programs and data can be placed anywhere in RAM, storage location provides precious little clue to the identity or provenance of its content. And central processing means that the same tiny spots of silicon run everything—whether code of the long-trusted servant or the drive-by scum of the internet.
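The point about location and provenance can be seen in miniature in the toy simulation below (ours, for illustration only; the buffer and 'return slot' names are hypothetical). A flat byte array stands in for RAM; nothing in the memory itself distinguishes attacker-supplied data from the state that decides what runs next, so a single unchecked copy quietly redirects control.

    // Toy simulation of the fragility argument: simulated 'RAM' in which a control
    // slot sits directly after a request buffer, and nothing marks which bytes are
    // trusted. One missing bounds check hands over control.
    #include <cstdint>
    #include <cstdio>
    #include <cstring>

    uint8_t ram[32];            // simulated memory: locations carry no provenance
    constexpr int BUF = 0;      // request buffer occupies ram[0..15]
    constexpr int RET = 16;     // 'which handler runs next' lives at ram[16]

    void handleRequest(const char* input) {
      // The bug: copies however many bytes the caller supplies, with no bounds check.
      std::memcpy(ram + BUF, input, std::strlen(input));
    }

    int main() {
      ram[RET] = 1;                             // 1 = the long-trusted servant
      handleRequest("AAAAAAAAAAAAAAAA\x07");    // 17 bytes: the last one lands on RET
      std::printf("control now dispatches to handler %d\n", ram[RET]);  // prints 7, not 1
    }

In a real CPU/RAM machine the overwritten slot is typically a return address or function pointer, at which point the attacker's chosen code runs with the program's full authority.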
Undeniably, serial deterministic computing with CPU and RAM has great strengths: It is flexible and efficient, and its behavior can be predicted accurately by chaining simple logical inferences—which programmers do routinely as they imagine the execution of their code. But that predictability exists only so long as hardware and software and user all act as anticipated. If anything amiss is detected, fail-stop error handling—that is, halting the machine—is the traditional response. It’s game over; no further predictions are required.

Fail-stop is efficient to implement, and it is tolerable assuming the only unexpected events are rare faults due to random cosmic rays or other blind physical processes, resulting in nothing more than the occasional system crash without lasting damage or too much lost work. That assumption, however, holds only for small and isolated systems. By contrast, as the high-performance computing (HPC) community contemplates the move to exascale computers, the cost of using fail-stop to preserve program determinism is increasingly seen as untenable (Cappello et al., 2009). And for today’s networked CPU/RAM computers, the unexpected is typically neither rare nor random. As app installs and updates bring ever more software bugs, and ever more value at risk attracts ever more malicious actors, the only safe prediction is that the first flaw loses the machine to an attacker’s control.

A horror-movie nightmare, where hearing one senseless incantation causes immediate and enduring loss of volition, is quite literally true in our digital environments. We are now building millions of computers per week according to that staggeringly fragile blueprint. Hardware switches are packed millions to the square millimeter and controlled by a software house of cards—a rickety skyscraper of cards, lashed together by a single thread of execution.

It’s a devil’s bargain that we have accepted, it seems, because its efficiency and flexibility strengths were immediate while its security and scalability weaknesses have overwhelmed us only gradually. Now we are so deeply invested in the architecture that we blame only the imperfect tenants, never the doomed buildings: We blame the programmers with their buggy code and the harried managers shipping it, the miscreants with their malware and the clueless users clicking it. And we have accepted the bargain, it seems, because we thought there was no fundamental alternative, or that any alternative would involve unaffordable exotic hardware and space-shuttle-grade software.

Beyond serial determinism

There is another approach to building digital computers, a direction suggested decades ago but still mostly unexplored today, that leads to robustness instead of fragility. It is built not on static correctness but on dynamic stability, aimed not at efficient completion but at continuous creation, acting not via local changes to a frozen ocean but via collective stabilization of restless seas. It is neither free from faults nor paralyzed by them, but born to them, expecting and accommodating them—even exploiting them. The proposal is to extract large quantities of robust, useful computation from vast ecosystems of engineered, software-based artificial life.
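The flavor of dynamic stability instead of fail-stop can be sketched in a few lines (again ours, a minimal illustration in the spirit of self-stabilization (Dijkstra, 1974), not the mechanism of any particular system). A value is held redundantly at several sites; random faults keep corrupting copies, and instead of halting, a repair process keeps re-establishing the majority, so the stored answer persists as an ongoing achievement of the ensemble rather than a frozen bit pattern.

    // Minimal sketch of computation-as-dynamic-stability: three replicas of one
    // value, a fault process that keeps flipping bits, and a repair process that
    // scrubs replicas back to the majority. No fail-stop anywhere; the value
    // survives because it is continuously re-created, not because nothing ever
    // goes wrong. The sketch assumes repair outruns corruption (here, at most
    // one bit flip per scrub cycle).
    #include <cstdint>
    #include <cstdio>
    #include <random>

    constexpr int N = 3;                 // replica count; a majority masks one bad copy
    uint8_t replica[N] = {42, 42, 42};   // the 'answer' being kept alive
    std::mt19937 rng(7);

    uint8_t majority() {                 // read = vote among the replicas
      if (replica[0] == replica[1] || replica[0] == replica[2]) return replica[0];
      return replica[1];                 // else [0] is the odd one out; [1] and [2] agree
    }

    int main() {
      std::uniform_int_distribution<int> which(0, N - 1), bit(0, 7);
      for (long e = 0; e < 1000000; ++e) {
        // Fault process: every cycle some replica takes a random bit flip.
        replica[which(rng)] ^= static_cast<uint8_t>(1u << bit(rng));
        // Repair process: scrub every replica back to the current majority.
        uint8_t m = majority();
        for (int i = 0; i < N; ++i) replica[i] = m;
      }
      std::printf("after a million faults the answer is still %d\n", majority());
    }

A robust-first architecture distributes the same idea across space and time rather than inside one loop, but the moral is identical: stored state is maintained by continuous collective work, not presumed by fiat.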

References

[1] David H. Ackley, et al. A Movable Architecture for Robust Spatial Computing. Comput. J., 2013.
[2] Luke Harding. The Snowden Files: The Inside Story of the World's Most Wanted Man. 2014.
[3] Shlomi Dolev. Self-Stabilization. J. Aerosp. Comput. Inf. Commun., 2004.
[4] John S. McCaskill, et al. Living Technology: Exploiting Life's Principles in Technology. Artificial Life, 2010.
[5] David H. Ackley. Beyond efficiency. Commun. ACM, 2013.
[6] David H. Ackley, et al. Pursue Robust Indefinite Scalability. HotOS, 2011.
[7] Randall D. Beer. The Cognitive Domain of a Glider in the Game of Life. Artificial Life, 2014.
[8] Craig W. Reynolds. Flocks, herds, and schools: A distributed behavioral model. SIGGRAPH '87, 1987.
[9] Franck Cappello, et al. Toward Exascale Resilience. Int. J. High Perform. Comput. Appl., 2009.
[10] Chrystopher L. Nehaniv. Asynchronous Automata Networks Can Emulate Any Synchronous Automata Network. Int. J. Algebra Comput., 2004.
[11] M. Bedau. Artificial life: organization, adaptation and complexity from the bottom up. Trends in Cognitive Sciences, 2003.
[12] Iztok Lebar Bajec, et al. Organized flight in birds. Animal Behaviour, 2009.
[13] Wolfgang Banzhaf, et al. Artificial Chemistries – A Review. Artificial Life, 2001.
[14] Steve Grand. Creation: Life and How to Make It. 2001.
[15] David H. Ackley. Bespoke Physics for Living Technology. Artificial Life, 2013.
[16] Edsger W. Dijkstra. Self-stabilizing systems in spite of distributed control. Commun. ACM, 1974.
[17] Martin Gardner. Mathematical Games: The fantastic combinations of John Conway's new solitaire game "Life". Scientific American, 1970.
[18] J. von Neumann. The General and Logical Theory of Automata. 1963.
[19] Yi Xu, et al. A composite and scalable cache coherence protocol for large scale CMPs. ICS '11, 2011.