MSSCI2024: Mini-Symposium on Strong Collective Intelligence | April 6, 2024
Conference website | https://mssci2024.github.io/index.html
Submission link | https://easychair.org/conferences/?conf=mssci2024
Poster | download
Abstract registration deadline | March 30, 2024
Submission deadline | April 1, 2024
This mini-symposium aims to raise awareness of a topic that is critically important to research in every field, yet may lie entirely outside the awareness of the broader research community: the limits to the problems that human groups can reliably define or solve in the absence of "strong collective intelligence" (strong CI), where strong CI is defined as general rather than narrow problem-solving ability at the group level.
By drawing an analogy between strong CI and the collective intelligence among cells that nature has achieved through multicellularity, the limits to the complexity of structures attainable without multicellularity can help us understand why there might be limits to the complexity of problems that can be defined and/or solved without strong CI. Multicellularity can solve complex problems like vision and cognition, while single-celled organisms can only solve simpler problems, such as forming the kinds of simple structures that slime-mold cells can aggregate into.
The idea that there are limits to the problem-solving ability of human groups is not new. What is novel about this mini-symposium is the suggestion that such limits may arise because groups lack general problem-solving ability at the group level (collective intelligence), and that even where groups do have such ability, as measured by the general collective intelligence or C factor, that collective intelligence might not be sufficient. General problem-solving ability at the group level solves problems in a way that optimizes collective outcomes for all, whereas general problem-solving ability at the individual level solves problems in a way that optimizes outcomes from the individual's perspective.
The presence of any such limits would suggest that while a given idea might be useful in solving a specific group problem, past those limits of complexity groups might reliably fail to solve that problem with the very same idea. Persistent and/or intractable problems might often lie past these limits. If so, then to reliably succeed in executing any such idea, a group needs to increase its strong collective intelligence. In other words, part of an idea is its execution: an idea executed by a vastly more intelligent collective intelligence is not only being applied to the very different problem of optimizing collective outcomes, it is also being executed by a vastly more powerful intelligence, and would therefore be expected to have very different outcomes.
This analogy, in which different single-celled strategies compete to solve problems like vision and cognition even though those problems are solvable only through multicellularity, might be generalized. Where the correct definition of problems and the discovery of solutions lie beyond the hidden limits to the problem-solving ability of human groups that lack strong CI, competing to "win" by proving one solution better than another, while the real solution lies entirely elsewhere, might have profound and unintended consequences. For example, we might believe we are solving a given problem like AI alignment, while the actual effect of our actions is the exact opposite, such as inadvertently racing to make AI more dangerous.
If, as theorized, the collective intelligence of networks of entities in natural systems can be generalized into a model for strong CI, as envisioned by a hypothetical GCI platform capable of replicating nature's ability to explore a far wider range of collective problem-solving strategies, and potentially capable of exploring the space of any possible collective problem-solving strategy, then any current limits to the ability of groups of humans to solve problems without such infrastructure are limits that invite exploration. Those interested in presenting their thoughts on the topic are therefore invited to give a five-minute talk on the subject and to submit a 300-word abstract describing their proposed talk.
Whether it is technologically feasible to achieve such a level of strong CI is immaterial to understanding the limits to human problem-solving that its absence imposes. The practical challenges of implementing a technology-enabled strong CI, as envisioned by the idea of a General Collective Intelligence (GCI) platform, as well as the theoretical and empirical evidence supporting the potential of such technology, may be legitimate targets of skepticism, but they are irrelevant to the importance of understanding these limits.