Engineering Societies in the Agents World

Abstract. Social Order becomes a major problem in MAS and in computer-mediated human interaction. After explaining the notions of Social Order and Social Control, I claim that there are multiple and complementary approaches to Social Order and to its engineering: all of them must be exploited. In computer science one tries to solve this problem by rigid formalisation and rules, constraining infrastructures, security devices, etc. I think that a more socially oriented approach is also needed. My point is that Social Control – and in particular decentralised and autonomous Social Control – will be one of the most effective approaches.

1 The Framework: Social Order vs Social Control

This is an introductory paper: I will not propose any solution to the problem of social order in engineering cybersocieties, neither theoretical solutions nor, even less, practical ones. I just want to contribute to circumscribing and clarifying the problem, to identifying relevant issues, and to discussing some notions for a possible ontology in this domain. I take a cognitive and social perspective; however, I claim that this is relevant not only for the newborn computational social sciences, but also for the networked society and for MAS. There is a dialectic relationship: on the one hand, in MAS and cybersocieties we should be inspired by human social phenomena; on the other hand, by computationally modelling social phenomena we should provide a better understanding of them. In particular, I try to understand what Social Order¹ is, and to describe different approaches to and strategies for Social Order, with special attention to Social Control and its means.

* This work has been and is being developed within the ALFEBIITE European Project: A Logical Framework For Ethical Behaviour Between Infohabitants In The Information Trading Economy Of The Universal Information Ecosystem. IST1999-10298.

¹ The spreading identification between "social order" and cooperation is troublesome. I use "social order" here to mean "desirable", good social order (from the point of view of an observer or designer, or from the point of view of the participants). However, more generally, social order should be conceived as any form of systemic phenomenon or structure which is sufficiently stable, or better, which is either self-organising and self-reproducing through the actions of the agents, or consciously orchestrated by (some of) them. Social order is neither necessarily cooperative nor a "good" social function. Also systematic dys-functions (in Merton's terminology) are forms of social order [10]. See Section 3.

Since the agents (either human or artificial) are relatively autonomous, and act in an open world on the basis of their subjective and limited points of view and for their own interests or goals, Social Order is a problem. There is no possibility of applying a pre-determined, "hardwired" or designed social order. Social order has to be continuously restored and adjusted, dynamically produced by and through the action of the agents themselves; this is why Social Control is necessary.

There are multiple and complementary approaches to Social Order and to its engineering: all of them must be exploited. In computer science one tries to solve this problem by rigid formalisation and rules, constraining infrastructures, security devices, etc. I think that a more socially oriented approach is also needed. My point is that Social Control – and in particular decentralised and autonomous Social Control – will be one of the most effective approaches.

2 The Big Problem: Apocalypse Now
I feel that the main trouble of infosocieties, distributed computing, the Agent-based paradigm, etc. will be – quite soon – that of "social order" in the virtual or artificial society, in the net, in MASs. Currently the problem is mainly perceived in terms of "security", and in terms of crises, breakdowns, and overload, but it is more general. The problem is how to obtain, from local design and programming, and from local actions, interests, and views, some desirable and relatively predictable/stable emergent result. This problem is particularly serious in open environments and MASs, or with heterogeneous and self-interested agents, where a simple organisational solution does not work. The problem has several facets (emergent computation and indirect programming [5,18]; reconciling individual and global goals [25,31]; the trade-off between initiative and control; etc.). Let me just sketch some of these perspectives on the problem.

2.1 Towards Social Computing: Programming (with) 'the Invisible Hand'?

Let me consider the problem from a computational and engineering perspective. It has been remarked how we are moving towards a new "social" computational paradigm [19,20]. I believe that this should be taken in a radical way, where "social" does not mean only organisation, roles, communication and interaction protocols, and norms (and other forms of coordination and control); it should also be taken in terms of spontaneous orders and self-organising structures. That is, one should consider the emergent character of computation in Agent-Based Computing. In a sense, the current paradigm of computing is going beyond strict 'programming', and this is particularly true in the agent paradigm and in large and open MASs.

On the one hand, the agents acquire more and more features such as:

• adaptivity: either in the sense that they learn from their own experience and from previous stimuli; or in the sense that there may be some genetic recombination, mutation, and selection; or in the sense that they are reactive and opportunistic, able to adapt their goals and actions to local, unpredictable and evolving environments.

• autonomy and initiative: the agent takes care of the task/objective, executing it when it finds an opportunity, and proactively, without the direct command or the direct control of the user; it is possible to delegate not only a specified action or task but also an objective to bring about in any way, and the agent will find its own way on the basis of its own learning and adaptation, its own local knowledge, its own competence and reasoning, problem solving and discretion.

• distribution and decentralisation: MAS can be open and decentralised. It is neither established nor predictable which agent will be involved, which task it will adopt, and how it will execute or solve it. During the execution the agent may remain open and reactive to incoming inputs and to the dynamics of its internal state (for example, resource shortage, or a change of preferences). Assigned tasks are (in part) specified by the delegated agents: nobody knows the complete plan. Nobody entirely knows who delegated what to whom. In other words, nobody will be able to specify where, when, why, or by whom a given piece of the resulting computation is being run (and, especially, how). The actual computation is just emergent. Nobody directly wrote the program that is being executed (a toy illustration of this follows below).
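To make this point concrete, here is a minimal, purely illustrative Python sketch (not from the original argument, and with all names – Agent, step, run, threshold – invented for the example). Each agent is delegated only an objective (keep its own load at or below a threshold), not a fixed sequence of actions; it decides locally, seeing only a couple of randomly chosen neighbours; and a roughly balanced global allocation of work emerges that no single piece of code computes or plans.

    import random

    class Agent:
        """A toy autonomous agent: it is delegated an objective (keep its own
        load at or below a threshold), not a plan, and decides locally."""

        def __init__(self, name, threshold):
            self.name = name
            self.threshold = threshold          # the delegated objective
            self.load = random.randint(0, 10)   # initial local state

        def step(self, neighbours):
            # Purely local, self-interested decision: if the objective is not
            # met, shed one unit of work to the least-loaded visible neighbour.
            if self.load > self.threshold and neighbours:
                target = min(neighbours, key=lambda a: a.load)
                if target.load < self.load:
                    target.load += 1
                    self.load -= 1

    def run(n_agents=20, threshold=5, rounds=100):
        agents = [Agent(f"a{i}", threshold) for i in range(n_agents)]
        for _ in range(rounds):
            for agent in agents:
                # Each agent only ever sees two random neighbours: nobody holds
                # the global plan, nobody knows who delegated what to whom.
                others = [a for a in agents if a is not agent]
                agent.step(random.sample(others, 2))
        return sorted(a.load for a in agents)

    if __name__ == "__main__":
        # The roughly even distribution printed here is an emergent outcome:
        # no line of code computes it directly.
        print(run())

This is of course only a caricature; its sole purpose is to show that the "program" which evens out the load exists nowhere as an explicit, centrally written piece of code, but only as a pattern emerging from local decisions.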
We are closer to Adam Smith's notion of 'the invisible hand' than to a model of a plan or a program as a pre-specified sequence of steps to be passively executed. This is, in my view, the problem of Emergent Computation (EC) as it applies to DAI/MAS. Forrest [18] presents the problem of Emergent Computation as follows: 'The idea that interactions among simple deterministic elements can produce interesting and complex global behaviours is well accepted in the sciences. However, the field of computing is oriented towards building systems that accomplish specific tasks, and emergent properties of complex systems are inherently difficult to predict and control. ... It is not obvious how architectures that have many interactions with often unpredictable and self-organising effects can be used effectively. The premise of EC is that interesting and useful computational systems can be constructed by exploiting interactions among agents. The important point is that the explicit instructions are at a different (and lower) level than the phenomena of interest. There is a tension between low-level explicit computations and direct programming, and the patterns of their interaction'. Thus, there is some sort of 'indirect programming': implementing computations indirectly as emergent patterns.

Strangely enough, Forrest (following the fashion of the moment of opposing an anti-symbolic paradigm to GOFAI) does not mention DAI, AI agents or MAS at all; she refers only to connectionist models, cellular automata, biological and ALife models, and to the social sciences. However, higher-level components (complex AI agents, cognitive agents) also give rise to precisely the same phenomenon (like humans!). More than this, I claim that the 'central themes of EC' as identified by Todd [38] are among the most typical DAI/MAS issues. The central themes of EC in fact include [38]:

• self-organisation, with no central authority to control the overall flow of computation;

• collective phenomena emerging from the interactions of locally-communicating autonomous agents;

• global cooperation among agents, to solve a common goal or share a common resource, being balanced against competition between them, to create a more efficient overall system;

• learning and adaptation (and autonomous problem solving and negotiation) replacing direct programming for building working systems;

• dynamic system behaviour taking precedence over traditional AI static data structures.

In sum, agent-based computing, complex AI agents, and MASs are simply meeting the problems of human society: functions and 'the invisible hand'; the problem of a spontaneous emergent order, of beneficial self-organisation, of the impossibility of planning; but also the problem of harmful self-organising behaviours. Let us look at the same problem from other perspectives.

2.2 Modelling Emergent and Unaware Cooperation among Intentional Agents

Macy [33] is right when he claims that social cooperation d