Strange Attractor Design

We had a series of aha moments. Herman was explaining a recent design choice. We connected it with a prior design choice, and this took us to a transcending moment of seeing how these patterns are at work. This unleashed a raw flurry of cascading aha moments, which are roughly captured here.

Designing Networks

Consider a group of businesses and organizations that are all connecting to each other. Very quickly, the amount of information being tracked about all the others in the network overwhelms the nodes. (Herman drew the actual network math, I drew a dense network cluster.) The organizations have information they need to exchange in order to unlock value they benefit from. So, how do they reorganize to enable optimal information flow patterns in a network with limited trust between selfish nodes? So often when asked to design these things, people think in terms of idealistic, utopian design conditions: how much functionality can be provided so that any information can flow in any direction? However, all the nodes find this scary. Rather than designing for optimal participant conditions, design for participants who are “willing to cooperate, yet anxious.”

As a network solution, you create an intermediary to reduce the network connection count. If I know the right network members, I have two-degree access to everyone without having to know most of them directly. Hence my “count of nodes” divided by “count of hubs,” which is a rough, back-of-the-napkin way of saying that each intermediary reduces the number of nodes you need to connect with directly.
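To make that napkin math a bit more concrete, here is a minimal sketch (my own illustration, not from Herman’s drawing; the node and hub counts are made up) comparing the direct connections needed in a full mesh versus a hub-and-spoke arrangement:

```python
# Back-of-the-napkin comparison: full mesh vs. hub-and-spoke connections.
# The node/hub counts below are illustrative, not from the original drawing.

def full_mesh_links(nodes: int) -> int:
    """Every node connects directly to every other node."""
    return nodes * (nodes - 1) // 2

def hub_spoke_links(nodes: int, hubs: int) -> int:
    """Each node connects only to the hubs; hubs also connect to each other."""
    return nodes * hubs + hubs * (hubs - 1) // 2

if __name__ == "__main__":
    nodes, hubs = 100, 3
    print("full mesh:", full_mesh_links(nodes))            # 4950 links
    print("hub and spoke:", hub_spoke_links(nodes, hubs))  # 303 links
    # Each member maintains `hubs` connections instead of `nodes - 1`,
    # yet still reaches everyone in two hops through the hubs.
```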

Trust

The next problem: why would we trust this intermediary? Again, lots of design decisions opt for central control of these hubs, what Valdis calls the “queen in between” kind of model. Most of us are skeptical about the power that aggregates in these hubs, so we need ways to ensure they are reliable. Solutions: a) limit what information flows through – only a certain type or amount; b) make all the nodes equal in some way; and c) make the flow of that information extremely transparent.

When we don’t have much trust in the health of the network, in the other nodes, or in the intermediaries, we have to manage high amounts of information about all of them. We act paranoid. When we have significant trust in the health of the network, its nodes, and its intermediaries, we act generously. We can handle more “hops” between members, since the trust is in the network as much as it is in the nodes of that network.

One of the ahas here for me was actually a reminder more than an aha. It is not the FORM of the network that determines the way it needs to be governed; it is the evolution of that network that determines the governance. How did it mature? If trusting in the past has led to positive outcomes, trust continues to grow and less governance is needed. However, if trust has been broken, more rules get created to qualify when and how to trust, and bureaucracy begins to flourish. But I will get back to that later.

Where you see the figure 8 on the side, I am explaining polarity management to Herman. We are not just greedy and selfish, nor are we just open, generous cooperators. We exist in the tension between these extremes. And even the skeptics are willing to navigate the risks of cooperation when the benefits might be higher.

Incentives

Herman mentioned how the use of financial incentives can often be misleading. It is not the amount of the financial incentive that allows a node to participate in the network; it is the ability of that node to maintain identity with their peer group. Can they save face and keep their pride while taking the risk of cooperation? Yes, if there is some token amount of money involved. Or it can be another kind of token.

We have another case where credits were used between players to negotiate their placement in a queue. I might let you pass me in line if you give me your credit. I can still save face with my peers for letting you cut, because I got some benefit for it. This design may require an arbiter to witness the transaction, but the arbiter does not need to judge whether it can happen or not. Tokens can make visible an exchange that allows both sides to save face and enables action in cooperator networks.
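Here is a minimal sketch of that credit-for-place exchange, assuming a made-up Member/Queue structure of my own (the real case surely differs); the point is that the arbiter only records the swap so it is visible, it does not judge it:

```python
# Minimal sketch of a credit-mediated queue swap (names are illustrative).
# The "arbiter" is just the ledger: it records the exchange so it is visible.
from dataclasses import dataclass, field

@dataclass
class Member:
    name: str
    credits: int = 1

@dataclass
class Queue:
    members: list = field(default_factory=list)
    ledger: list = field(default_factory=list)  # transparent record of swaps

    def swap_for_credit(self, mover: "Member", yielder: "Member") -> bool:
        """mover pays one credit to yielder to take the spot ahead of them."""
        i, j = self.members.index(yielder), self.members.index(mover)
        if mover.credits < 1 or j != i + 1:
            return False  # no credit to offer, or not directly behind
        mover.credits -= 1
        yielder.credits += 1
        self.members[i], self.members[j] = mover, yielder
        self.ledger.append((mover.name, yielder.name, 1))  # visible to all
        return True

alice, bob = Member("alice"), Member("bob")
q = Queue(members=[alice, bob])
q.swap_for_credit(mover=bob, yielder=alice)
print([m.name for m in q.members], q.ledger)  # ['bob', 'alice'] [('bob', 'alice', 1)]
```

Both sides come out of the exchange with something to show their peers: the yielder gained a credit, and the record of that gain is what lets them save face.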


Identity

We find the issue of identity to be a huge factor in the behavior of nodes in a network.

We talked about how a transaction between two people always seems to have at least a third party. I am not only negotiating with you for what the transaction is, I am also thinking about what my group or tribe will think of me for engaging in that transaction. In fact, I may have more anxiety about what they think than about whether you are making a good transaction with me or not. Most of us dedicate much of our time to identity formation, with everything we say and do designed, intentionally or not, to reinforce what we want to think of ourselves. We may not do a good job of it all the time, but that is a driving force.

Choices

I told Herman about a conversation I had with Benjamin Ellis about making choices. Benjamin was talking about a graphic designer who provided three choices, and we quickly riffed on why this works so well. We think that selecting out the bad option of the three builds the sense of confidence in decision making that makes the next choice, between the remaining two, faster and seemingly easier.

Rules, Rules, and more Rules

We also talked about how the centralized-control design model has the negative side effect of generating bureaucracy: it tends to create rules to follow, and when those rules conflict, it creates rules about the conflicts, in an infinite cascade of rule making that ends up grinding cooperation to a halt. Thus we said such designs are limited in size by the ability to moderate the rules. This did help us transcend the 150 limit of community cooperation (the limit on the ability of members of a network to track each other’s trustability). However, when you design for robustness (bookmark a conversation on resilience, robustness, and anti-fragility)… when you design for robustness, you put the value and power at the edges of the network operating on principles instead of rules, and allow it to learn and grow from a simple structure, you get agility and adaptability in the network… and it can become scale-free? I think.
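As a rough illustration of that last, tentative point (my sketch, not anything from the conversation): one standard way a network grown incrementally from a simple seed can end up scale-free is preferential attachment, where new nodes tend to link to already well-connected nodes and a few hubs emerge:

```python
# Rough sketch: grow a network one node at a time, attaching preferentially
# to well-connected nodes. This is one standard route to a scale-free shape,
# where a few hubs carry most of the connections while most nodes stay sparse.
import random
from collections import Counter

def grow_network(n_nodes: int, links_per_new_node: int = 2, seed: int = 42):
    random.seed(seed)
    # Start from a tiny fully connected seed of `links_per_new_node + 1` nodes.
    edges = [(a, b) for a in range(links_per_new_node + 1)
                    for b in range(a + 1, links_per_new_node + 1)]
    # Each node appears in this list once per link it has, so sampling from it
    # picks nodes in proportion to their degree (preferential attachment).
    endpoints = [node for edge in edges for node in edge]
    for new in range(links_per_new_node + 1, n_nodes):
        targets = set()
        while len(targets) < links_per_new_node:
            targets.add(random.choice(endpoints))
        for t in targets:
            edges.append((new, t))
            endpoints.extend((new, t))
    return edges

edges = grow_network(500)
degree = Counter(node for edge in edges for node in edge)
print("top hubs (node, degree):", degree.most_common(5))
print("median degree:", sorted(degree.values())[len(degree) // 2])
# A handful of early, well-connected nodes become hubs while most nodes keep
# only a few links: power concentrates at the hubs unless the design counters it.
```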

Optimize for what?

We moved to a meta-level discussion about how any group tends to design to optimize for one factor. When we succeed at that optimization, we also learn that there are negative side effects that may inhibit our achievement of another (possibly more important) goal. So we have to switch and optimize the design for something else. This can be long-term human social evolution at work – when we exhaust the potential of something, we have to evolve to work around that. This often involves making distinctions between things that were previously hooked together in order to unlock value… but I digress. Will save that for another conversation… back to the design at hand.

Rituals

In talking about identity, we had taken a side track to talk about the power of ritual. I had shared a story about being uncomfortable, with a growing sense of dis-ease, when I was in a group that went through a series of actions meant to create bonding, culminating in a big ritual together. Rituals can help ease the anxiety of cooperation by bonding everyone into a peer group – we are all smart or stupid together, so I can work within that group more easily. Everyone has crossed a threshold together, and you feel it in an embodied form. However, we need to take into account that humans have a spectrum of tolerance to peer influence. Some of us just don’t care what our peers think, and others can’t make a decision without being aligned with the group. It isn’t a stable known to design for.

There is that formula that Herman wrote. 🙂

Design to Evolve Cooperation

In pink at the bottom it says “It isn’t the form==> governance. It is TRUST==>governance.”

On the left in blue it says, “Design big” from the start and you get “bureaucracy, low trust, not agile, ultra specific and rule based.”

If you start small and simple and mature through the growth of trust, you get evolving cooperation. What is the least that is needed? What is the smallest channel that enables a flow of information that benefits the network? Design for learning and self-evolution. Design to grow, using minimal rituals as a foundation of trust to help mediate the anxiety of cooperation. When evolving cooperation against the will or inclination of players, manage their fear and sense of security. Sometimes tokens or credits can be used to mediate that trust.

I am very interested in how tokens are used to engender trust and enable flows. How does a token act as an object of trust transfer? Trust tends not to be transferable, whereas tokens can be. How do you route the trust into the token and the process, so that it can be transferred?
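One possible way to picture routing trust into the token and the process (my sketch, assuming a hash-chained transfer record, which is just one mechanism among many): the trust lives in an append-only history that anyone can verify, so the token’s record travels with it even between strangers:

```python
# Sketch: trust routed into the process rather than the counterparties.
# A token carries a verifiable, append-only history of transfers, so a
# recipient checks the record instead of having to trust the sender.
import hashlib
from dataclasses import dataclass

@dataclass(frozen=True)
class Transfer:
    token_id: str
    from_holder: str
    to_holder: str
    prev_hash: str

    def digest(self) -> str:
        payload = f"{self.token_id}|{self.from_holder}|{self.to_holder}|{self.prev_hash}"
        return hashlib.sha256(payload.encode()).hexdigest()

def append_transfer(history: list, token_id: str, from_holder: str, to_holder: str) -> list:
    """Add a transfer that is chained to the previous record."""
    prev = history[-1].digest() if history else "genesis"
    return history + [Transfer(token_id, from_holder, to_holder, prev)]

def verify(history: list) -> bool:
    """Anyone can re-check the whole chain; the trust sits in the process."""
    prev = "genesis"
    for t in history:
        if t.prev_hash != prev:
            return False
        prev = t.digest()
    return True

history = []
history = append_transfer(history, "token-1", "mint", "alice")
history = append_transfer(history, "token-1", "alice", "bob")
print(verify(history))  # True: bob can accept the token without knowing alice well
```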

This is where we start really transcending that 150 limit.

 

Sorry this isn’t an essay yet. It is just notes from an extended conversation.