Designing beyond the machine: Tokens, Blockchain & Contracts - with Michael Zargham

BOUNDARYLESS CONVERSATIONS PODCAST - SEASON 4 EP #7


Podcast Notes

Michael Zargham, founder and CEO of BlockScience and Board Member of the Metagov Project, shares his views on how Blockchain and other emerging technologies are making new ways of organizing possible. Yet, Michael believes that, so far, we are not fully using the potential of these affordances to create something new. 

Smart contracts are becoming widespread, but does the relationship between crypto and organizing stop there? What’s next? Which new “non-familiar” possibilities of design will we see unlocked in the next few years? Michael describes how systems designers will need to be humble and leave space for systems to evolve through enabling constraints. He also believes that the gap between the complexity of organizational design and transparency of decision-making is closing through “healthy DAOs”, blurring the line between those making the rules and those acting upon the rules.   

Michael holds a Ph.D. in systems engineering from the University of Pennsylvania, where he studied optimization and control of decentralized networks. Thanks to his experience, Michael Zargham has an uncommon point of view on designing beyond the machine.

 

Key highlights: 

  • How new affordances for organizing are created by Blockchain and other emerging technologies; 
  • The gap between the complexity of organizational design and how it is documented;
  • The “Animating purpose” is core to what the organization does and why;  
  • How to design mechanisms without being mechanistic;
  • How designers need to leave empty space and provide enabling constraints; 
  • Systems engineers as civil engineers: the civil servant ethics approach;
  • Finances as constraints rather than goals in emerging mutualist institutions.

 

Topics (chapters):

00:00 Michael Zargham’s quote

00:59 Intro and Michael Zargham’s bio

02:33 New technologies, new affordances.

06:04 Beyond Smart contracts: how deep is the relationship between crypto and organization?

09:03 The new “non-familiar” possibilities of designing the next generation of institutions

14:41 How an organization can “use” conviction voting

19:55 The gap between organizational design and the documentation of the organizational design

24:23 “Animating purpose” is core to the organization and what it does

29:49 A new era of “Design as a participatory system”? 

33:26 The role of the designer: risks and opportunities. 

40:59 The civil servant ethics approach for designers

46:18 Michael Zargham’s breadcrumbs

 

To find out more about Michael Zargham’s work:

 

Other references and mentions:

  • Conviction voting and the Gardens template built by the OneHive community (including the Celeste dispute resolution protocol, used on the Gnosis chain)
  • The Token Engineering Commons
  • Delphia, the data trust / data DAO project Michael is working on
  • Eric Alston, institutional economist and constitutional scholar
  • Stafford Beer and the viable system model
  • Donella Meadows, “Dancing With Systems”
  • The “Design as Participation” paper mentioned by Simone
  • Jeff Emmett and Commons Stack

Michael’s suggested breadcrumbs (things listeners should check out):

  • Mint and Burn, the podcast hosted by Kelsie Nabben
  • “Mutualism” by Sara Horowitz
  • “Engineering a Safer World” by Nancy Leveson

Recorded on 12 October 2022.

 

Get in touch with Boundaryless:

Find out more about the show and the research at Boundaryless at https://boundaryless.io/resources/podcast 

Music 

Music from Liosound / Walter Mobilio. Find his portfolio here: https://blss.io/Podcast-Music 

Transcript

Simone Cicero:
Hello, everybody. Welcome back to the Boundaryless Conversations podcast where we meet with pioneers, thinkers, doers, entrepreneurs. And we speak about the future of business models, organizations, markets and society in this rapidly changing world we live in. I’m Simone Cicero and today I’m joined by my usual co-host, Stina Heikkila. Ciao Stina!

Stina Heikkila:
Hello, hello.

Simone Cicero:
And we’re also joined by Michael Zargham, Founder and CEO of BlockScience, Board member of the Metagov Project. Michael also holds a Ph.D. in Systems Engineering from the University of Pennsylvania, where he studied things such as optimization and control of decentralized networks. And he’s also working with the Crypto Economics Institute at the University of Wien.

Today, with Michael we are going to talk about the role of some new technologies enabling, I would say, a complexity-aware approach to organizing that picks from cybernetics, for example, and basically large-scale systems engineering. So, Z, as you suggested to be called, I hope you will enjoy the conversation with us. It’s great to have you, and welcome.

Michael Zargham:
Thank you so much for having me.

Simone Cicero:
Maybe we can start really from this assumption that was baked into my introduction. So, are these new technologies really making it possible for us to approach organizing, in private and in public, in a new way? For example, building cooperatives in a certain new way, or writing policies and enacting them in society in a radically new way, finally really picking from the story of cybernetics or large-scale systems engineering. Is something really new possible today that we can obtain from these emerging technologies, Z?

Michael Zargham:
I will start by saying that I think that the new technologies are providing new affordances and that those affordances create new opportunities. But I will probably start by describing the ways in which I don’t think that they’re new before we get there. So, the ways in which they’re not new is that they embody sort of algorithmic policymaking in sort of shared systems. And we can and actually do use sort of non-crypto based technologies to do this in small and medium sized groups, and even in some quite large groups.

So, if we take for a moment, sort of a blockchain and a particular one with, say, smart contracts available on it, generally, what you have is a kind of shared data structure and a set of rules about how those data structures are mutated, sort of by whom, under what circumstances and what happens when you call those methods. And if you really look carefully at that and squint a bit, you realize that if you’re just working with your friends, you might do that in a shared spreadsheet. And then you might just normatively enforce who can change what when, and kind of keep an eye on it. And it’s not really a big deal because you trust everybody, and it’s a small group.

As that set of people who are sharing that information footprint grows, you might move from something that was just a shared spreadsheet that more or less everyone can edit, to something with permission controls on it, to something with much more specifically codified rules about who can mutate what when. You might move to a custom piece of software, you might move to a tool like a SaaS product for which you can configure the policies and rules. But then you’re trusting the SaaS product service provider to execute the rules that you’ve made. And then as you sort of move up and up this stack to things that are more and more, let’s say, like sensitive, both in terms of whether the users can mutate the system in a certain way, or the people who are administering that system can mutate it in a certain way.

The emphasis here being on the administrators’ sort of duty to the users. We’ve run into encounters with, for example, GitHub taking down repos, and then sort of ending up in little squabbles with the users about whether that was appropriate, as an example of where sensitivity might arise. But as we keep moving up sort of this chain towards larger groups of people and/or more sensitivity to sort of manipulation, either by the system stewards or by its users, that’s where these sort of new, more cryptography-based software systems become relevant, and those affordances of being able to make algorithmic policy and enshrine it in a sort of shared computer-like structure become really relevant to organizational design.

Simone Cicero:
What’s really decisively new in terms of the penetration of cryptography in the way we organize in society? I guess it’s really related to trust and you spoke about this idea of having a very large spreadsheet that many, many people can edit. And I guess for example, in this case, cryptography, as a technology as it’s now deployed in smart contracts and in blockchains, allows us, for example, to distribute the access rights and something like that. So, making it possible to have these very large spreadsheets right, that everybody can interact with. Is there any other new pattern that is worth mentioning? I’m thinking of, for example, smart contracts.

Michael Zargham:
Yes. So, I think the key here is understanding what the smart contracts are: they’re essentially cryptographically enforced, sort of essentially unique microservices, right? Like they’re these data structures and there are rules about mutating them. What they do is they open up a large design space, they’re extremely composable, which means that I could build a little chunk of data and rules about how to mutate it, you could build a little chunk of data and some rules about mutating it. And then Stina could build another one that actually calls the methods and uses the data structures of both of ours, and thus make a new thing that is actually made up of facets of the exposed interfaces of both of ours.

So, it’s extremely generative. It’s not diminishing to say that it’s actually just a bunch of data structures and codifying the rules for mutating them. Because that is like fundamental building blocks of not just a software design, but like some kind of organizational design equipment, but with the caveat that the technology still only encodes process or procedure. And to some extent, like canonical facts or memory, there’s going to be a complementary component that resides within the human users within the community members. It’s going to include things like norms or substantive judgments about how the affordances of the technology are used within a particular context.

And so we can’t just look at the tech. But it’s important to understand that that technology, as simple as it is, if you think of it as these kinds of shared databases with cryptographic enforcement of the rules on how those databases are mutated, as really basic building blocks for higher order structures, and that design freedom associated with writing smart contracts, and composing smart contracts into higher order ecosystems, does provide us a lot of opportunity. And to be totally honest, I don’t think we’ve seen anywhere near the potential of that design space. I think we’re still doing a lot of recreating things that are familiar to us, and we haven’t really yet probed the kinds of organizational structures that are made possible by this sort of, I guess, again, organic compositional ecosystem design.
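To make the idea of composable data structures with codified mutation rules a bit more concrete, here is a minimal, hypothetical Python sketch (not from the episode, and not actual smart-contract code): two small "contracts" each guard their own data with rules about who may mutate it, and a third composes them through their exposed interfaces.

```python
# Hypothetical sketch: shared data structures plus codified mutation rules,
# composed by a third structure that only uses their exposed interfaces.

class Counter:
    """A shared integer that only its listed editors may increment."""
    def __init__(self, editors):
        self.editors = set(editors)
        self.value = 0

    def increment(self, caller):
        if caller not in self.editors:
            raise PermissionError(f"{caller} may not mutate this counter")
        self.value += 1
        return self.value


class Registry:
    """A shared mapping that anyone may read but only the owner may write."""
    def __init__(self, owner):
        self.owner = owner
        self.entries = {}

    def register(self, caller, key, value):
        if caller != self.owner:
            raise PermissionError(f"{caller} may not mutate this registry")
        self.entries[key] = value


class Composite:
    """A third 'contract' built from the interfaces of the other two."""
    def __init__(self, counter: Counter, registry: Registry):
        self.counter = counter
        self.registry = registry

    def record_participation(self, caller, name):
        # Calls the methods of both underlying structures; the rules they
        # encode still apply, whoever composes them.
        count = self.counter.increment(caller)
        self.registry.register(self.registry.owner, name, count)


if __name__ == "__main__":
    counter = Counter(editors={"alice", "bob"})
    registry = Registry(owner="carol")
    composite = Composite(counter, registry)
    composite.record_participation("alice", "alice-joined")
    print(registry.entries)  # {'alice-joined': 1}
```

On a real chain the enforcement would be cryptographic and consensus-based rather than a Python permission check, but the compositional pattern being described is the same: small chunks of data with rules about mutating them, assembled into higher-order structures.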

Simone Cicero:
First of all, I want to double click on the question of composability and modularity, which seems to be a common thread through many of the conversations we are having with people that are marginally involved in the crypto space or, in general, looking into the most advanced technologies and applications of technology nowadays. Essentially, three things emerge from these new enablers and new affordances that you speak about: there is shared data with regulated access, there is self-enforcement or self-execution of contracts, and then there is this idea of composability.

So, when you were speaking, I felt like we’re moving away from letting people design on a blank page into designing on a page with some kind of squared guidelines, like the ones you have when you’re kids. When kids go to school they have these guidelines, and now basically they can design stuff that is natively integrable, natively composable. What are these new, currently non-familiar possibilities that we have thanks to these affordances in designing the next generation of institutions, whether public, private, open, or whatever? So, what kind of new things can we envision?

Michael Zargham:
I would very much like to see designs that leverage multiple timescales of evolution. I can kind of double click on that a little bit. So, right now, when we have, you know, traditional organizational design, we have to think about the temporality of different processes. Some things are daily rituals, some things might be weekly review meetings. You might have monthly, quarterly, annual sort of events, processes or decisions. In the smart contract world, because we’re using a high degree of automation, we at least have the option to use something closer to a continuous time dynamic; things that have characteristic timescales that are not predefined by discrete chunks in time. But just have much like natural systems, sort of, I don’t want to say like Eigenvalues, but what I really mean is temporal modes in the continuous sense.

And so like one experiment that I did on this particular type of organizational design is an algorithm called conviction voting. Conviction voting is more or less just a signaling structure. So, people have a certain amount of voice credit, whether it’s pre-assigned to every member, or it’s based on tokens. It doesn’t actually matter what the thing is, it’s just that you have a weight, and then you have preferences. And if you express preferences, so you vote for things, you’re basically charging up an action potential on the thing that you voted on over time, based on a — There’s a coefficient in that algorithm, it’s called alpha. It’s got sort of a half-life-like character to it. And it sort of determines the rate at which your broadcast preference charges up. And you can change your broadcast preference whenever you want, actually.

So, if your friend convinces you of something and you change your mind, you can switch your signaling and you’ll sort of discharge the thing you had previously signaled on and charge the thing that you’re signaling on now. And that is very much like — I refer to it as sensor fusion, because if you have lots of people doing this all at the same time, and they are changing at different moments, there’s not like a fixed window for voting, but rather, if they’re moved to change their mind, they simply can. That integrator, the discounted accumulator in it, basically absorbs and integrates all of the information that’s noisy across space and time into signals across the things being signaled upon. So, it takes some of the high frequency noise out and gives you a slower moving signal and we can use that for decision making.

In the implementations of conviction voting that exist, they’re used for participatory budgeting, and there are trigger thresholds. So, if you accumulate enough signal behind a particular funding proposal, the funds get issued to the fundee, and the particular amount of conviction or energy that needs to accumulate behind a specific proposal is determined by how much of the shared funding someone wants. So, the less of the communal funding pool you’re proposing for, the less conviction needs to be accumulated in order for that proposal to pass. And that manner of sort of participatory budgeting has been used by a DAO called OneHive for some years now. And it is currently in use by the Token Engineering Commons, and I believe a few others. There’s a template that OneHive built called Gardens, that has conviction voting as one of its available building blocks.
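As a rough illustration of the mechanism described above, here is a simplified, hypothetical Python sketch of a conviction-voting accumulator with a pool-fraction-dependent passing threshold. The parameter names (alpha, the requested share of the pool) follow the episode's description; the exact formulas in production implementations such as Gardens may differ.

```python
# Simplified, hypothetical conviction-voting sketch (not production code).

def update_conviction(prev_conviction: float, staked_weight: float, alpha: float) -> float:
    """Discounted accumulator: conviction 'charges up' toward a plateau while
    weight stays staked, and decays once it is withdrawn (half-life set by alpha)."""
    return alpha * prev_conviction + staked_weight


def passing_threshold(requested: float, pool: float, base: float = 100.0) -> float:
    """Illustrative threshold: the larger the share of the communal pool a
    proposal asks for, the more conviction it must accumulate before it passes."""
    fraction = requested / pool
    return base / (1.0 - fraction) ** 2  # grows steeply as the request approaches the whole pool


if __name__ == "__main__":
    alpha = 0.9          # decay factor per time step (closer to 1 = slower charging)
    conviction = 0.0
    stake = 25.0         # voting weight currently signalling on this proposal
    threshold = passing_threshold(requested=1_000, pool=10_000)

    for step in range(1, 61):
        conviction = update_conviction(conviction, stake, alpha)
        if conviction >= threshold:
            print(f"proposal passes at step {step} "
                  f"(conviction {conviction:.1f} >= threshold {threshold:.1f})")
            break
    else:
        print(f"not enough conviction after 60 steps ({conviction:.1f} < {threshold:.1f})")
```

With alpha close to 1, conviction charges and discharges slowly, which is what gives the system its natural timescale: small requests cross their threshold quickly, while proposals asking for a large share of the pool need sustained support over a longer period.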

But the reason I bring this one up is because it was originally created in a large part just to demonstrate that there are things in the design space that are a little different from the things that we most commonly use and even call voting. That is, of course, not a sort of magic wand answer, it’s not the right fit for every participatory budgeting situation. But in DAOs, there’s often an extremely do-ocratic modality where people want to get out of each other’s way, and just give them space to build. And so it made a mechanism for the community to sort of do a kind of consent based participatory voting where if you built up enough weight behind something it would pass without necessarily grabbing everyone’s attention and calling a vote.

Simone Cicero:
Can you give an example of how these could be used by an organization in a certain context? For example, with conviction voting, which is one of these, let’s say, primitives that are extremely modular and that you can now mount on top of each other, which is, I think, also part of the thesis of the Metagov Project. Maybe you can also double click on it later on.

Michael Zargham:
Sure. I mean, it’s funny, because I was attempting to use that as an example. Specifically the accumulator function, which is a continuous time process with a natural timescale, which is, again, this kind of unintuitive thing, because it’s not common in organizational design, but it is possible in smart contracts. And in, say, the OneHive DAO, it’s their mechanism for participatory budgeting. So, you take your, basically, token weighted voting in their case, and you broadcast your preferences across the proposals that are outstanding. And then when enough people over enough time support a proposal, it passes and the funding for that proposal gets sent to the proposers, and they can proceed with whatever project that they were asking for funds for.

It’s really concretely just a participatory budgeting tool. And it’s used in a particular kind of community where their desired experience is not a lot of friction, that they want to be able to get funds to the people who need them without having a lot of bureaucracy or a lot of discussions. They can kind of locally whip votes with their friends and say, hey, I really want to do this thing, like, do you want to support me? And if they do, there’s a temporal element, it has to charge up. But part of the reason it has to charge up is to make sure that people really do have conviction in that. If they then change their mind or shift their voting voice elsewhere, before the critical threshold is reached, then the proposal doesn’t pass. So, it places a little bit of a temporal element. And it’s that same temporal element that gives the whole system a natural timescale. In tuning it, you can tune up that sort of charging up factor to make things pass either very quickly, or potentially take weeks to pass if you’re talking about potentially larger sums of money.

The conviction voting algorithm is one of the “pillars” in something called the Gardens, which was a template created by a community called OneHive. And in the Gardens, they have several other components that are sort of available and work together. Probably the most important one, in my opinion, is the community covenant. So, OneHive has a sort of content-addressed, meaning sort of cryptographically hashed or named, file that contains their sort of mission and values. It’s a type of constitution that says sort of what they exist for. And that object is what they call the community covenant. And when you join that community, you sort of sign a transaction that is essentially signing your, like, consent to abide by that constitution within the community. So, that’s one of the pillars.

Another one of the pillars is called decision voting, which is a much more conventional voting, when you need to hold a vote to say change the constitution. If you were to actually go in there and want to make an edit to that file, then you would have to basically propose that amendment as a proposal to decision voting and decision voting is a relatively high quorum, sort of standard voting procedure. I think it has both token weighted and non-token weighted variants, depending on how you configure the software.

And the sort of fourth thing is they have a court system or dispute resolution system called Celeste. And part of the reason they have that is because if someone makes a proposal, say to the participatory budgeting, the conviction voting piece, and other people within the community view it as potentially conflicting with the sort of values of the core mission of the org per the community covenant, then there’s a mechanism to sort of challenge it, which is different from saying, hey, I’m voting no, in the sense that I’m voting against this proposal. I’m saying, challenging whether it falls within the mandate or the mission of the org, and that would go to the dispute resolution protocol, Celeste.

And so you end up with these sort of four components: the community covenant, the sort of court system/dispute resolution protocol called Celeste, the decision voting, which is again, a more conventional voting mechanism, and conviction voting, which is used largely for participatory budgeting, together within like an app that allows you to sort of navigate those mechanisms and allows the community to coordinate. And there’s a handful of communities on the Gnosis chain that use that particular pattern. And it’s just something I’ve been watching because I worked on this conviction voting algorithm. And I had used it as a case study in one of my cybernetics and DAOs papers.
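Putting the four pillars together, here is a schematic Python sketch (an illustrative assumption, not the actual Gardens implementation) of how a community configuration like the one described might route proposals: a content-addressed covenant that members consent to on joining, conviction voting for funding, decision voting for amendments, and a Celeste-style challenge path for proposals disputed as off-mission.

```python
# Schematic sketch of the four-pillar pattern described above (hypothetical,
# not the real Gardens code): covenant, conviction voting, decision voting,
# and a dispute-resolution hook for challenged proposals.

import hashlib
from dataclasses import dataclass, field


@dataclass
class Covenant:
    text: str

    @property
    def content_hash(self) -> str:
        # "Content addressed" here just means the covenant is referenced by the
        # hash of its text, so any amendment produces a new identifier.
        return hashlib.sha256(self.text.encode()).hexdigest()


@dataclass
class Proposal:
    kind: str                  # "funding" or "amendment"
    payload: dict
    challenged: bool = False   # someone disputes its fit with the covenant


@dataclass
class Garden:
    covenant: Covenant
    members: set = field(default_factory=set)

    def join(self, member: str) -> str:
        # Joining is modelled as signing consent to the current covenant hash.
        self.members.add(member)
        return self.covenant.content_hash

    def route(self, proposal: Proposal) -> str:
        if proposal.challenged:
            return "dispute resolution (Celeste-style challenge against the covenant)"
        if proposal.kind == "funding":
            return "conviction voting (participatory budgeting)"
        if proposal.kind == "amendment":
            return "decision voting (high-quorum vote to change the covenant)"
        raise ValueError(f"unknown proposal kind: {proposal.kind}")


if __name__ == "__main__":
    garden = Garden(Covenant("We exist to fund public goods for our community."))
    print(garden.join("alice")[:12], "...covenant hash signed on joining")
    print(garden.route(Proposal("funding", {"amount": 500})))
    print(garden.route(Proposal("amendment", {"new_text": "..."})))
    print(garden.route(Proposal("funding", {"amount": 500}, challenged=True)))
```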

Simone Cicero:
One key message that I bring home from this initial part of the conversation is that now, as a designer, you can mess much more with incentives, right, to incentivize a certain behavior. So, for example, conviction voting incentivizes a certain type of decision based on conviction. As you said, it’s like you are programming the way to interact with a certain decision, which is an element of organizing, embedding inside these primitives certain values, for example, that your organization chooses to embody. Right? And also another very interesting point is that now it’s much easier for everyone to see the rules, let’s say, and to participate in the ways that the system is designed to let you participate. So, maybe in the past, there was much more of a barrier to participation. Now, essentially, it’s much more possible to make things clearer and design larger-scale systems that incentivize certain behaviors. So, unless you want to double click — [crosstalk]

Michael Zargham:
I don’t know if that’s actually true. So, what’s interesting is people say this a lot, that it’s much more possible. I think it was possible before, and I think even now, it’s still possible, but people don’t do it as much as they should. Because there’s a big difference between having a smart contract, which is technically verifiable, and a smart contract that is actually community legible. And so what we’re starting to see is this gap emerge between the complexity of the organizational design, and the documentation of that organizational design. And again, I don’t know that that’s even new itself. It’s common in extant organizations. But what I think is really key is the communities like OneHive that put a lot of effort into building out the documentation that clearly articulates how it works, as well as how and what mechanisms are available for change.

And so when I think about these systems, I generally think about the community policymaking or sort of rulemaking in terms of there’s like, kind of above and below the line. Above the line is governance, where you’re sort of deciding what the governing processes or protocols or rules of your community are going to be. And below the line is acting in accordance with them. Now, in many traditional organizations, there’s a stark gap between the people who are above and below that line. Some people are acting within the rules, and other people are acting upon the rules. And often, like all too often also feeling as though they can act outside of those rules.

Smart contracts basically ensure that no one can act outside of those rules, because the rules are enshrined in the software itself. But there’s still this separation between activities that are within the rules and activities upon the rules. I think what’s particularly interesting about at least sort of healthy DAOs is that they provide a lot of documentation about both the processes that are defined by the rules, and the processes about modifying the rules. And they give everyone sort of an opportunity to engage in the actions upon the rules. And if we continue to see that, that dynamic grow, I think that does lead to a more practical form of democracy, where we have the lived experience of influencing the systems that constrain our own activities.

Simone Cicero:
It’s great to look into this dichotomy, right, between the potential to engage more participation and, on the other hand, the tendency, let’s say, to make things complex, which then requires lots of documentation, lots of intentionality, also from the participant’s perspective. So, it’s great that you actually point out that it’s not free, I would say, to design a much more participatory organization. Something still needs to happen at the social level, not just at the programmable elements of what somebody would call the trustware. If the listeners have listened to our podcast with Chase, for example, we spoke about this difference between the socialware and the trustware: you always need both. You cannot fall into the trap of thinking that if your trustware, your programmable organization, is perfect, it will grant you participation for free. No, that’s not going to happen. You still need fairly thorough elements of socialware, engagement and information and communication and documentation, to get really everybody to participate, which is maybe the high-level objective that we have.

Stina Heikkila:
It’s a good bridge, what you said about the high-level objective, because I’m listening and of course, coming a little bit with the, let’s say, a newbie mind or maybe not newbie, but — [crosstalk]

Simone Cicero:
A non-engineer mind!

Stina Heikkila:
A non-engineer, let’s put it that way. Exactly. So, I’m thinking about your title – Z – like “systems engineer” and everything that you’ve talked about, these mechanisms. It’s quite clear what these mechanisms allow. And what keeps coming back to my mind is like, okay, but for what — There are — I think you said now in the end that you create this lived experience of democracy and being able to participate. So, it’s clearly like, an objective sort of in itself, but I’m also thinking more in society, and in transitions and in the bigger, complex systems that we are living in. Like, do you need to set a direction for this kind of work and all these people collaborating? They have the ways to do so, but how do — [crosstalk] Why?

Michael Zargham:
Yeah. So, first off, I’ll note that I think that the why is very much local to a context. But I’m going to 100% agree that animating purpose is a critical element to an organization. So, it’s not just organizing for the sake of organizing. I mean, you could do it, in which case, it might be sociality, which is the purpose and you’re engaging with friends and whatnot, but like setting pure sociality aside, like I generally view the effort to design an organization emerging from some animating purpose. And I’ll put a pin in too much technical detail on animating purpose, but I will sort of flag it with two pointers. One is, I find it very useful to look at the cybernetics literature, viable system model, Stafford Beer, and some of that literature that I think was referred to briefly earlier, as a place to read about sort of the role of purpose within designing.

But the other pin is more recently, I’ve done some work on animating purpose with a friend named Eric Alston, who’s worth looking up. He’s an institutional economist and constitutional scholar. But to make this much more practical, I’m currently working on an organization which is a Data Trust or a data DAO with a large tech company, actually. And the animating purpose of this organization is to make data contributors, sort of human users, first class citizens within a particular data economy. That project is called Delphia. It involves basically an investment app that is soliciting data from its users to participate in basically enriching their own ability to invest and make money through traditional finance, not DeFi, basically. But enriching their capacity to make bets in the stock market that in turn the users benefit from. But those users are being asked to provide large amounts of potentially very valuable sort of first-party data, like directly from the users, not necessarily buying data on the market.

And the thing about that kind of system is that right now, our economy runs on data, it’s largely a kind of gnarly surveillance machine. And people are really the natural resource being mined, they’re not first class citizens within the systems that use the data that’s created through their day to day interactions. And so this particular organizational design is a complement to a larger system. So, you have this compositionality. The larger system includes a tech company, which could be thought of as a data grid, and a hedge fund, basically, that’s responsible for monetizing. And you’ve got this new interface to this community of people giving data, and we want to make sure that those humans have their, basically, their rights respected. For once in the data economy, we don’t just assume that we can mine those resources unsustainably and sort of benefit from it indefinitely.

And it’s a pretty complex organizational design problem simply because there’s a tension between, say, the capacity to incentivize or compensate the data contributors, since the value of the data is not as simple as, okay, I bought your data. Like, it goes through a complex manufacturing process to transform lots of individuals’ individual data into datasets and signals, which in turn can get monetized through trading. And so attribution is subjective. You ultimately have time lags, you have a lot of challenges that make it so it’s not like an objective fact how valuable the data is.

And so the effort to build up a data trust that’s capable of advocating and negotiating with the sort of data grid and the sort of hedge fund monetization entity is the purpose of this organization. And I am in the process of constituting it with, actually, my friend and colleague, Eric Alston, who I mentioned earlier. We’re writing the constituting principles and working with lawyers to create a legal entity that will actually wrap this sort of community org. Again, its purpose is to give people a platform to have their voices heard within the particular data economy that is being instantiated.

Simone Cicero:
Feels like these new approaches, these new technologies give us the possibility to design as systems, right? There is this paper called Design as Participation that I always refer to, which essentially pushes us into the perspective of looking into the system we are designing as if we are participating in it. So, not as owners or shapers or whatever, but more as part of the system. For example, when you say you’re building the legal entity and so on. So, it makes me think that we don’t have these legal entities because we never design things as systems. We always design things from the perspective of one particular entity trying to exploit the situation to generate a certain advantage. So, maybe we are finally stepping into an age of design as participatory systems where we can solve these tragedies of misaligned incentives. And if yes, what is the responsibility of the designer?

Michael Zargham:
First, I do think that we have an increase in the frequency of participatory — designing as participants. I actually associate this with Donella Meadows and sort of Dancing With Systems, this idea that you can’t control these things, you can only interact with them. And that as a designer, we need to be humble in order to actually have success.

Simone Cicero:
Yeah. Yeah. Sorry. Just a little interruption because exactly, you mentioned Donella Meadows, and I was thinking, as you were speaking, how can we be designers of mechanisms without being mechanistic, right?

Michael Zargham:
A hundred percent. So, I mean, one of the ways that I think about designing without being mechanistic is thinking about holding space. So, like, if you’re too mechanistic, you’re trying to create deterministic outcomes. You’re like, this will happen, that will happen, that will happen. It’s very much trying to make a domino effect or to assert a particular outcome. Whereas when you’re designing for something that has like a livingness to it, then actually the tricky part isn’t the stuff that you assert. It’s the stuff that you don’t assert. And so you can think about it as, like, structuring space. You’re creating, I don’t know, an amphitheater. Right? Most of it is empty space. Sure you’re providing some structure and that structure does affect what the space is useful for. But it does very little to force a particular activity. If someone — [crosstalk]

Simone Cicero:
Enabling constraints, essentially.

Michael Zargham:
Yes. Exactly. So, as a designer, you’re providing enabling constraints. But you’re doing so in a way that is ideally maximizing for sort of the expressivity while minimizing the sort of risk of, say, harm or misuse or what you would consider abuse. And actually being willing to sort of leave space for the unknown is a big part of what is differentiating really mechanistic design, for which a really concrete example is like a people mover, like in the airport, versus like even a hallway. Like, you could have an art exhibit in a hallway because the hallway still has — is not so imposing. Whereas, like, a people mover would quite literally mechanically move you down the hall. So, it assumes the only thing you could possibly want to do is move along it and it literally carries you versus, again, just creating a long narrow space.

And so I start trying to, like, build intuition for the differences between things that are more sort of open or multi-use, or things that might need to evolve over time or get used differently in a future context or by different users. At least for me, that’s the kind of mental tool that I use to try to stay away from things that are, like, overly mechanistic.

Simone Cicero:
Right. Can you maybe just double click on the role of the designer, right? In terms of both the risk of ending up with these anti-patterns that you mentioned before, like surveillance or things like that. And also, what’s in it for the designer? So, why should I design a participatory system?

Michael Zargham:
I think I have three things that I need to hit. Let’s see if I can — I want to talk about engineering and sort of civil engineering. I want to talk about hard-fought simplicity, and I want to touch on sort of like engineering ethics. So, I’m going to see if I can get to those three things. The reason that I like thinking about it as digital civil engineering is because I come from an engineering family and my dad’s a civil engineer and I grew up in a household where he’s a public servant. He works for the New York State Department of Transportation. And I got to see relatively firsthand this sort of process through which you would sort of identify the need for, in his case designing bridges; the sort of need for a bridge emerges from the sort of existing state.

The jurisdiction has processes for determining and allocating budget and saying we’re going to need to build a bridge here because the transportation network needs to be able to move people goods, etc, along this particular dimension. But that goes through a process of design that leads to something that is going to meet those needs that were specified. The designs are done by very technical people with a very deep degree of expertise. There’s a degree of sort of regulatory oversight in terms of who is allowed to sign off on these things, what’s acceptable in terms of a bridge. Like, there’s a massive amount of technical complexity. But at the end of the day, what really matters is the sort of properties of that bridge. Like, how many lanes does it have, how much traffic can it move through it? Is it safe? Is it well maintained? And so an everyday human driving their car over the bridge is not going to think about all of the technical complexity. They’re just going to engage with the properties or the affordances of that technical thing.

And so I use that as a sort of example for this hard-fought simplicity thing. Because honestly, like bridges, especially in the northeast, they have elements that you probably aren’t even aware of. Things like plates that expand and contract in temperature to make sure that the bridge remains safe despite the changes in temperatures over the seasons, it doesn’t break down and crack. These are things that most people don’t need to think about, probably shouldn’t need to think about, and that they nonetheless benefit from: the technical depth and the oversight that went into the sort of production of that shared infrastructure.

And so if we think about, like, organization, like Org Tech as institutional infrastructure, then we can imagine that there’s places where lots of detailed design might go into either a specific bridge or a pattern for a type of bridge over time and we say, okay, cool, these things are robust and reliable. They have properties that we can trust and respect to meet the needs of people. And ultimately, that technical complexity gets kind of boxed away from the end user, not because they shouldn’t have the right to say go read the plans. They’re generally on file somewhere.

But they shouldn’t necessarily be expected to understand all of the details of the implementation. What really should matter to them is where the bridge is, basically, is it large enough to meet the needs of the community that it’s serving? Is it safe? Is it — maybe some of the traffic pattern information is particularly relevant to the people using it. But like the detailed structural engineering and/or the detailed bridge engineering work is really not something that they can or should need to engage with. And even if they do need to sort of feed back on that system and say, hey, this isn’t working for us or it’s not what we needed or wanted. They’re not going to do that at the level of the specific bridge implementation. They’re going to do that at the level of the properties or affordances of the bridge. So, I’m using that as a couple examples, I guess.

On one hand, it gives you a sense of this notion of hard-fought simplicity, which is we can have engineers and architects working really hard to accomplish particular properties and still give the end user the property rather than try to give the end user all the details about the technicals that went into creating the property. And then the other piece is the engineering ethics element where those engineers have a real obligation to the health, safety, and well-being of the people who are using the infrastructures that they produced, which is quite different from the modality we have in most software engineering and business building. Where all of the duties are effectively fiduciary duties to shareholders as opposed to duties to sort of end users or stakeholders. And so I try really hard to keep my sort of architecture and sort of systems engineering work in the mental modality of a civil engineer.

And in doing so, sort of mapping out stakeholders, potentially several orders of stakeholders. Like people who are directly impacted or indirectly impacted and how the things that we’re designing bear on those people, as well as including meaningful feedback loops. We sometimes joke about the qualitative-quantitative sandwich in my team. Where, like, the qualitative work is the bread and the quantitative or technical work is the meat. So, the first half of the bread is basically really understanding context, going into an environment and understanding who the stakeholders are, what’s the purpose of the system, and things that are fundamentally in this sort of subjective, intersubjective regime that you have to go talk to people to understand. Then you can kind of take the requirements that you develop to refine a technical design that actually meets those needs and potentially get it deployed. But then after you finish that technical work again and you’ve deployed the thing, the back half has to be another sort of more qualitative or social scientific activity, which is like, is this really meeting your needs? Is it continuing to meet your needs? Like, is this what you wanted at all?

And so you end up with basically the social elements as both the beginning and the end, and the technical elements exist basically to close the gap between the wants and needs on the front half and the outcomes on the back half. And then you can imagine continuously iterating, not necessarily fast, meaning we’re not updating things like bridges constantly, but we are maintaining them, taking them through end of life, building new ones. Like, our technical infrastructures should be under constant monitoring, revision, and updating. And I think the same is actually true for our institutional infrastructures. And one of the reasons that we get challenges is because a lot of communities think that once they get the right process, procedures, software, or other institutional infrastructure that it’s just set forever. And in truth, these are, like, the kinds of things that need to be evaluated and revised over time to make sure that they’ve remained fit with the needs of that community they’re serving.

Simone Cicero:
That’s really interesting. I think it brings up a totally different approach to designing and building initiatives, which resonates a bit more with that of a civil servant than that of an entrepreneur, as we are used to. So, maybe, you know, especially as you speak, you speak often about infrastructures, for example, institutions. It’s like we are kind of entering an age where it’s going to be more possible to design institutional initiatives that are much more, you know, actualizing whole systems instead of just actualizing our own interests. And therefore, we need a different ethics as designers. You know, we kind of have to embrace more of a civil servant ethic than that of a traditional private limited-liability entrepreneur. What do you think?

Michael Zargham:
Yeah. So, I agree, but I want to extend this a little further because we’re still talking about it now in terms of this polarity between private and public sector. So, I did very much bring with me the lineage of engineering in the public sector sense. And I am using it as a foil to the sort of corporate, sort of private sector profit maximization sort of sector. But actually there’s a sort of third sector, and it’s the one that I actually really ascribe to. And I really like this terminology from Sara Horowitz, mutualism. But so the sort of mutualist or sort of third sector here is something that’s more fundamentally community driven, more fundamentally purpose driven. And in this sort of purpose driven modality, we are not necessarily looking to a sort of corporate model or to a state based model of design. And we’re looking to our, again, I guess I’ll use the term, our communities, our communities of practice with shared mission.

And so as someone who is, say, helping to create a DAO or helping to create a new cooperative organization, what I’m really looking at is who is the mass of stakeholders within this particular system? What is their animating purpose or goals? What are their financial constraints? So, I tend to look at organizations’ finances, not as goals, but as constraints. Meaning they have to produce enough revenue to continue to act, to live. Sort of like an energy analogy in a biological system: you have to go eat, but you don’t just exist to eat; you don’t see arbitrarily, you know, sort of fat organisms just like sitting around. You see them running around and doing stuff and whatever it is that they want to do. But they still need to maintain enough resources to survive.

And so I’m looking at these organizations from this perspective that they’re going to need to have endogenous sort of revenue production models so they are able to live, whether that’s providing services that get paid for or collecting dues, or… you name it, but you produce some sort of economic engine that allows them to persist. But then you don’t use that economic engine to get more money, you use that economic engine to pursue an animating purpose or mission. And so this sort of way of thinking is, I think, a little different from the state or the corporate model because the state model is also expending money to provide services, but it’s still coming from a very top-down mechanism, at least in large jurisdictions. It’s probably fair to say that small municipalities act closer to what I’m describing, but I’d like to see this extended beyond thinking about governmental organizations to things that are more diverse.

I would love to have a mutualist ISP, Internet Service Provider, maybe mutualist style grid providers. And these are organizations that would be institutions, maintaining infrastructures, which serve large populations, but they exist as part of that society, that civic society. And there are lots of interesting questions around, like, what does it mean to be a good citizen within systems like this? In particular, because attention is a scarce resource and another trap you see a lot of people falling into is expecting everyone to participate in governance all the time, but good faith participation in governance is hard.

It may be more realistic to say, you’re going to live in a sort of pluralverse of communities, each of which is attending to the various resources that you depend on in your life or that you choose to participate in in your life. And you’re going to decide which ones you allocate attention to, to be a sort of contributor, maintainer, governor type. And others, you’re just going to say, oh, you know what? I trust the people who are going to do that. And so remembering that, like, good faith participation in governance is attention-intensive, and actually, being thoughtful about where we choose to allocate that attention and being reasonable in our expectations about how others might expend their attention.

Stina Heikkila:
Thank you so much, Z. I have been quite quiet in this episode. I think I was sort of —

Simone Cicero:
I think we ought to put a label, like, “only for engineers”, on this episode today.

Michael Zargham:
Sorry.

Stina Heikkila:
No. It was not to say that it was not, let’s say, accessible. I think it definitely was. It’s more that, for crafting something that can push the conversation further, I was mainly just trying to listen and capture these points. So, I really like what you shared in this idea of new types of institutions. I work a lot in public policies. I work a lot with city governments in my work. And I definitely see sort of this frontier that you’re talking about, even though you might also think there’s a challenge there in terms of capabilities and people’s, yeah, different backgrounds, and how do we actually make that — these kinds of projects also more multidisciplinary. Right? So, that can be like a kind of closing question, if you will.

Before we say goodbye and so on, we wanted to ask you to leave some further thoughts with our listeners that are, I mean, could be linked to what we discussed, but really something that is on your mind at the moment, something that — a reference, a book, a podcast, a movie, whatever that you think would be nice to leave with our audience.

Michael Zargham:
Sure. A few thoughts. So, in terms of less — a bit less technical, but very much in the vein that I’m discussing, a friend of mine, Kelsie Nabben, hosts a podcast called Mint and Burn. I think that would be worth checking out. I alluded to a book called Mutualism by Sara Horowitz, which I strongly recommend. And on the slightly more technical side, there’s a book called Engineering a Safer World by Nancy Leveson, which is still pretty accessible. Although she’s a systems engineer, aerospace engineer at MIT, the book is a bit more about how we think about system science and safety and sort of redefine safety in our engineered systems to be more human-centric.

Stina Heikkila:
Okay. Yeah. Great. So, we look forward to looking into all that. Again, we really want to thank you for being part of the conversations today. I hope you enjoyed it as well.

Michael Zargham:
It was great. Thank you so much for having me. Sorry if I went a little too far down the technical rabbit holes. You guys met Jeff. He’s like, one of my main collaborators for, sort of, turning it down.

Simone Cicero:
Right.

Stina Heikkila:
We had Jeff in another event linked to the research we were mentioning on our white paper. And they did a session actually on this conviction voting, which is part of the reason why I felt very comfortable following that. So, we can definitely put that in the show notes. I think we have a video somewhere of their presentation on that.

Michael Zargham:
Cool.

Simone Cicero:
Yeah. Let’s thank Jeff Emmett from Commons Stack and any other initiative that was important to get this off the ground. So, thank you so much, Z. And I think I bring home this idea of designing mechanisms without being mechanistic. That stays in my mind at the end of the conversation and it was really great.

Stina Heikkila:
So, thank you very much. And to our listeners, all the references and things that we have been mentioning, you can find in the show notes to this episode. So, you go to boundaryless.io/resources/podcast. Look for Michael Zargham’s episode there, and you’ll find all the links. And until we meet next time, we’ll catch up soon, and remember to think Boundaryless.