#84 Gardening Platforms and the Future of Open Ecosystems with Alex Komoroske

BOUNDARYLESS CONVERSATIONS PODCAST - EPISODE 84


In our first episode this season, we dive into the world of open platforms and the future of organizing. With his thought-provoking presentation “Gardening Platforms”, our guest Alex Komoroske helps create a shift in mindset towards a more achievable and sustainable approach to creating impactful technology solutions. We explore his key learnings from leading products at Google Chrome, and how he continues to apply them in his own journey today. Touching on questions of morality, control, and designing for nefarious users, there’s so much Alex offers in this podcast, and we hope you take as much from it as we did.

 

The YouTube video for this podcast is linked here.

 

 

Podcast Notes

Alex Komoroske is truly one to get inspired by when we think of the potential that powerful technology solutions can create. For 13 years, he worked at Google, building some of the most cherished products to come from the organization.

Heading product for something that affects billions of lives is no easy feat, and Alex did just that for Google Chrome. He then took his vast ocean of knowledge to Stripe, where for the last two years he has helped shape the company as its Head of Strategy.

In his flip-book-style presentation, aptly titled “Gardening Platforms”, he discusses the fundamental emergent power dynamics of platforms, how to evolve an existing platform to continuously improve it, and how to create a new platform from scratch.

This got us truly hooked, and we are glad that Alex joined us to discuss this, and many more mind-tinkering concepts, on our podcast. We touch upon the inherently complex and evolutionary nature of platforms, the role of composability and modularity, and what it means to be sensitive to the socio-technical implications of your solutions.

Get ready to be inspired by his high-impact journey, brutal honesty, and some remarkable recommendations of books and concepts that have shaped his thoughts and processes.

 

Key highlights

  • Platforms can be imagined as a swarm of energy between multiple entities interacting with one another.
  • Why you don’t always need a big-bang product launch, and why self-accumulating users are often what’s best for the platform’s success.
  • Why a gardener mindset, rather than a builder mindset, is poised to bring a product more success.
  • How organizations can prepare and design for open platforms that are resilient and expansive, but chaotic and hard to control.

 

This podcast is also available on Apple Podcasts, Spotify, Google Podcasts, Soundcloud, and other podcast streaming platforms.

 

Topics (chapters):

(00:00) Quote by Alex Komoroske

(00:39) Simone welcomes audience

(01:26) Alex Introduction

(04:28) Platforms as Complex Adaptive Systems and Gardening Platforms

(09:08) “Coherence” as an attribute for platform development in Open vs. Closed Platforms

(12:22) Rebound from Centralized Platforms

(16:36) Translating chaotic and complex ecosystems into actionable initiatives in organizations

(29:04) Trust, Organizational Accountability, and Micro enterprises

(39:18) Moral implications of building platforms and open source movements

(49:08) Design for Participation

(53:42) Power of Decision Makers in influencing platform design

(01:00:09) Breadcrumbs (Suggested content from Alex)

 

To find out more about Alex’s work:

 

 

Other references and mentions:

Alex’s suggested breadcrumbs (things listeners should check out):

Recorded on 30th August 2023.

Get in touch with Boundaryless:

Find out more about the show and the research at Boundaryless at https://boundaryless.io/resources/podcast

 

 

Music:

Music from Liosound / Walter Mobilio. Find his portfolio here: https://blss.io/Podcast-Music

Transcript

Simone Cicero:

Hello everybody, welcome back to the Boundaryless Conversations podcast. On this podcast we meet with pioneers, thinkers and doers, and we talk about the future of business models, organizations, markets and society in this rapidly changing world. Today I am joined by a new regular co-host, my colleague at Boundaryless, Shruthi Prakash, who is joining from Indonesia. Hello Shruthi, good to have you.

 

Shruthi Prakash:

Hi, nice to be here as well, Simone, thank you.

 

Simone Cicero:

Thank you so much. And today we are joined by kind of a true champion of open platform thinking, Alex Komoroske. Alex, hello.

 

Alex Komoroske:

Hi, it’s great to be here.

 

Simone Cicero:

I hope I pronounced your surname well,

 

Alex Komoroske:

That’s correct.

 

Simone Cicero:

otherwise.

 

Alex Komoroske:

That’s good. Yeah, that’s good.

 

Simone Cicero:

Good. And I mean, Alex, thank you so much for joining us today. I think for those that think about open platforms, and more specifically their implications in organizational matters, you are really an inspiration. And I really encourage the people that haven’t familiarized themselves with your content yet to do it as soon as possible, because it’s really, really important. I recently met Alex because we shared speaking as guest lecturers at the Summer of Protocols, which is a program that the Ethereum Foundation started before the summer to go deep into the implications of the evolution between platforms and protocols, essentially: the role that protocols play in society, in technology, and so on. We will have an extensive bio for Alex in the podcast notes, and you’ll find all the links to your decks and the other pieces of work that you have been churning out in the last few years. But I think we should definitely peek into your expertise on platform development. I just mentioned that you have been working on things like the Chrome Web Platform, which has billions of users daily at the moment, and at Stripe on strategy. But maybe you can give our listeners a quick one-minute overview of what drives your work and interest and research today when it comes to platforms.

 

Alex Komoroske:

Yeah, a lot of it grew out of necessity. I just spent many years working on and leading Chrome’s Web Platform PM team, where you’ve got to reason through how to navigate a platform problem. I think product managers are often under the misunderstanding that they’re in way more control of their product and its usage than they actually are, because this notion of “I’m the CEO of the product” has this kind of control mentality. Platform PMs have to contend with the fact that there’s at least another layer of actors in the middle to influence before you get to the outcomes in the real world that you care about. As an open platform PM, where you have multiple browser vendors, many of whom, you know, don’t always get along, trying to figure out how to do something coherent is a place where you fundamentally have to come to grips with what is happening in the platform and the power dynamics of it. Later, I also realized that some of the tactics I used to navigate large organizations were basically exactly the same as the strategies I was intuitively using to navigate large open ecosystems, and they kind of fit together in my head. And since then, I’ve gotten to learn quite a bit more about complex adaptive systems and other lenses on power dynamics, to understand and sort of unearth the intuition that was developed in the forge of many years of experience in a very challenging environment, to unpack it and see where else it might apply.

 

Simone Cicero:

I think it’s good to maybe double-click quickly on this idea of platforms as complex adaptive systems, which is, I think, one of the major elements of the way you look at platforms. And it’s been interesting for me, because when we started our work at Boundaryless, or even before Boundaryless, 10 years ago, when I started to do platform design, you know, kind of dropping these canvases for helping people grapple with the complexity of creating a multi-sided system, with so many people, so many players in the ecosystem and so on, there’s always some kind of reductionist approach that you have to embrace, because otherwise the complexities are kind of overwhelming, right? But besides the necessary simplifications you have to make to be able to try and enact a design in a platform, can you maybe expand on how you go from your vision as a platform developer, the ideas that you may have about the ecosystem, the services the ecosystem needs and so on, and maybe introduce a few key concepts that people can use to really understand your idea of gardening platforms and platforms as complex systems? I’m thinking of, for example, the way you introduced the idea of use cases, layers, horizontal and vertical layers. Maybe you can give us a few pointers that can help the listeners start understanding your point of view on platforms as complex adaptive systems.

 

Alex Komoroske:

I think the simplest way into the core of it, and why I titled that deck the way I did, is: it’s the gardening mindset, as opposed to the builder mindset. The builder mindset is highly engineered, highly focused on control. There are no externalities to the system. You figure out what to build, and then you build it, and then it works. In a gardening mindset, you very clearly have some other agents in the thing that you are trying to influence and create, but you fundamentally can’t control them directly. You can put up different scaffolding, you can trim back things. You can definitely have an impact on the outcome, but in a fundamental way, you don’t directly control it. And that’s the mindset shift I think is so important. When I talk about some of the emotional feeling of navigating complex problems, which platforms are a perfect type of, but which I think describes a lot of the problems we face day to day in our jobs, people often say, I think incorrectly: that sounds like nihilism. You sound like you’re saying, give up, don’t do anything, nothing you do matters. And I think that’s fundamentally wrong. I think this is actually quite an empowered stance. By acknowledging your own limitations, the actual space of actions that you are permitted to make based on the constraints on you, you actually realize how much power you do have to choose among those decisions in a coherent way that leads to large coherent outcomes. People, I think, often want to have control in the small, which is an illusion to some degree, and so they end up giving up on the ability to influence the large arcs of their system. One way of looking at this is: when you have uncoordinated entities, lots of different agents, lots of different developers or people or coworkers or whatever, by default it’s just Brownian motion. Everybody’s just going in their own direction, and at the level of the whole there is no coherence, just an overall entropic kind of expansion. However, if you can get even the teensiest amount of bias, like people are 10% more likely to go in that direction, the coherence of the overall behavior of the swarm goes up significantly, right? Just a little bit of an edge, a consistent edge, and it pulls everybody in that direction. And this is one of the reasons that North Stars can be very powerful, clarifying concepts within an organization, within an ecosystem, within a platform: having a very clear, plausible, and inspiring North Star that everybody knows about and everybody shares can lead to very coherent outcomes. And creating such a North Star is quite difficult. They’re often very low resolution. They must be, because you can’t control the details of it; you can only control the grand sweep of the thing, or give one of the paths it might take on the grand sweep of things. And I think that’s very confusing to people, because everyone’s used to control in the small, as opposed to this kind of sweeping arc in the bigger picture.

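(A toy way to see that bias claim: in the sketch below, each agent takes random steps, and a small consistent edge in one direction produces steady drift of the whole swarm. The model, the agent counts, and the function name are illustrative assumptions, not anything from the episode.)

```python
import random

def swarm_drift(n_agents: int = 1000, n_steps: int = 500, bias: float = 0.1) -> float:
    """Each agent steps +1 or -1 at random; `bias` is the extra probability
    of stepping in the shared direction. Returns mean displacement per step."""
    total = 0
    for _ in range(n_agents):
        pos = 0
        for _ in range(n_steps):
            p_forward = 0.5 + bias / 2  # a small, consistent edge
            pos += 1 if random.random() < p_forward else -1
        total += pos
    return total / (n_agents * n_steps)

# With bias=0 the swarm's net motion hovers near zero (pure Brownian noise);
# with bias=0.1 the mean displacement per step is about 0.1, so the swarm as
# a whole moves coherently even though each agent still looks mostly random.
print(swarm_drift(bias=0.0))   # ~0.0
print(swarm_drift(bias=0.1))   # ~0.1
```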
 

Simone Cicero:

You mentioned a very important word that we use a lot in our work, which is coherence, right? So my question is: is coherence really an attribute that we should be seeking in modern platform development? And I especially want to ask this question in relationship to the idea of what we normally call open platforms versus closed platforms; let’s say you can think of maybe aggregators, or platforms which are more end-user oriented, versus the more open ones, which are typically more like enabling platforms. So is coherence important, especially when we think about building an open versus a more closed, use-case-oriented platform?

 

Alex Komoroske:

I think you do need coherence of some degree, because otherwise you get background noise, which is diffuse; it’s entropy. And so you need something that gives a bias, so that it accumulates into something more than background noise. And that requires some kind of coherence, multiple agents pointing in roughly-ish the same direction, at least in some pockets. So I would argue that coordinating and getting 18 or so different actors to all agree on a direction from a standstill is basically impossible. It’s really, really hard. However, if you see that a couple of them are already pointing in roughly the right direction, they could say, hey, we should work together, and, oh, OK, that becomes easy. And you know, you can tweak, and instead of being default divergent, you become default convergent. OK, and so now this gets momentum. And now other people nearby, who at the beginning maybe weren’t 100 percent sure how they were oriented, as this gets momentum, it pulls their incentives in just a little bit. So people that before were marginally out are now marginally in; they come in. This creates an even stronger pull. And over time, this gravity-well dynamic can pull in more people once it gets a bunch of momentum. People’s opinion of what they want to do individually is overwhelmed by: whatever, I’ll just go and do the thing everybody else is doing, because it’s better to do something with momentum. So you can get these coherent things, but it’s quite hard to do from a standstill. And I do think that you do need local pockets; if everyone’s doing their own individual thing that never adds up to anything more, you don’t get anything, you just get noise. There’s a section at the end of an essay I have about Schelling points and organizations, a treatment of game-theoretic Schelling points. The very last section of that essay almost gets poetic, and some people called me out, like, you’re writing poetry, what are you trying to do here? But it talks about the importance of these little things: there’s all this background noise happening, all this stuff that’s just random, and every so often a little Schelling point, a little seed of an idea that is coherent, pops out, and can then start accumulating energy from nearby things. And it can sometimes grow into something very large. But I think you do need those Schelling points, those little pockets, little seeds of coherence. Otherwise you don’t get a thing collectively. And that’s really the challenge in open platforms: how do you design for the layers that have coherence at the protocol layer to allow incoherence above, the scaffolding for the incoherence and the exploratory energy? You need something that coordinates and pulls people in, and makes them default to going from, again, default divergent to default convergent at that level.

 

Simone Cicero:

I know that Shruthi has a question on the organizational implications of this, but I want to add a little point before that, if you allow me. Because when you were talking about this, I was kind of reconnecting this conversation, and I feel like when you speak about this, your use of the concept of a North Star, for example, and this idea of things coalescing around these kinds of elements, is very important. But what I’m seeing, for example, now, when I look at the Web3 narrative, right, it’s more like an idea that everybody’s going to build a little piece, and then you’re free to compose everything together, right? So there is this lack of… It’s like there is a rebound from the centralized platforms of the early 2000s, where everybody felt, you are too controlling, you are too dictatorial. So we want to develop a new ecosystem which is made of small pieces, loosely related, kind of loosely coupled, and we just build our own small piece, and then everybody will compose, and will take Lego bricks and make something more complex. So how do you feel about this kind of shift of narrative, from more coherent platform visions into a universe of small projects that develop small pieces, and then somebody will bundle them together?

 

Alex Komoroske:

I think it’s great. I view platform development as a swarm of energy, of uncoordinated entities. Sometimes it’s literal developers inside of an organization that you can visualize as a swarm. Sometimes it’s users of the platform, or people on an open source project who are contributors. And then there’s the thing that makes something coherent out of it, the selection pressure that selects it into a thing that is more coherent. I think the power in these approaches has tilted way too much into the selection pressure side; we are overdue for a swing back. I personally think that generative AI helps significantly with that, allowing a lot of tinkerer energy, a new era of tinkerer energy and empowered things. One of my favorite essays is a very old essay from Clay Shirky in 2004 about situated software. And I think this is a very powerful idea whose time has come because of generative AI. That’s a whole other trend that we could go down. But I do think that, I 100% agree. I would point out what happens in a lot of those things, in the swarming behavior: you have to have some kind of coherent API, which is a boundary between systems that people know is roughly constant. Not necessarily that it never changes, but something that you can rely on as a thing that these things will plug into together. Like Unix: everything’s a file. Great boundary. It just fits in really, really nicely, and everything can plug together like Lego bricks. Gordon Brander, who I don’t know if he got to talk at the Summer of Protocols, but he’s doing Subconscious and he’s a good friend, and I would say a literal genius, talks quite a bit about the power of these clarifying protocols for things to fit together. Protocols, by the way, don’t have to emerge on purpose. Sometimes they emerge because everyone just starts using some protocol of some open source system, like the boundary of that system; people start going, yeah, I’ll use that. And then over time, more people glom onto it, and it becomes a formal boundary that people now rely on. And now when the owner of that project goes to tweak that boundary, people go, whoa, whoa, we’re all relying on that thing. So it can absolutely happen unexpectedly, with people voting with their feet, kind of, choosing what code to build on, what things they think have value to build on top of. So protocols can emerge unintentionally. You get some real weird things, though, when you get things that weren’t designed to be, weren’t conceptually wanting to be, federated protocols. With federated protocols, you have to assume all your agents are gonna move very, very slowly. And so that means you want the barest semantics, really small, sparse semantics. You don’t want a lot of sugar on top, because these are extra semantics you have to maintain and change over time. And those things can be intentional.

 

Shruthi Prakash:

I’ll take a cue from that, Alex. So, for example, you speak about semantics being a conscious choice, maybe, when you start off. I want to understand how, in complex ecosystems like this, where, like you said, it’s chaotic, even though it might be expansive in nature, but it’s essentially chaotic, right, how does all of that translate into an organization? How do you see the connection between the two, whether that be a corporate or a startup, either way? How do you see that translating into, or impacting, our organizational functions today?

 

Alex Komoroske:

I think to some degree an organization is an assemblage of people that agree to subvert their individual goals to a collective set of goals in that context. And, you know… oh my gosh, the economist who talks about, oh my gosh, this is so embarrassing. The guy who talks about transaction costs as the fundamental determinant of where the boundaries show up.

 

Simone Cicero:

Coase, Coase theory.

 

Alex Komoroske:

Who, it’s.

 

Alex Komoroske:

Coase, thank you.

 

Alex Komoroske:

Yeah, Coase’s theory of the firm. Yeah. And I think one of my favorite little anecdotes is that back before midway through the 19th century, the United States was a plural noun: the United States are a great place to live. And then sometime after the Civil War, it became a singular noun, the United States is, and that emphasizes the assemblage of them versus the individual swarm of it. And there’s something very different about the way you think about which one matters more, the collection or the individuals in that context. And which one has more agency? Which one has the power to decide, the rightful power to decide? To some degree, the agents have to decide that they want to participate in that thing. And so you see it coming together where people say, oh, our goals are similar, and we should, you know, work together so that we don’t have to rebuild a bunch of things, or we’ll get more momentum than we would get individually. But then you might find that you have a disagreement and you’re like, see ya, bye, I’m gonna fork and go over in this other direction. So when it’s very informal, it’s very easy to break up. In the same way, when you’re just dating someone informally, it’s very easy to break up. When you’re married, it’s way harder to break up, and so you’re more committed. One of the things that you’ve done when you get married is you have made commitments to demonstrate: here’s how seriously I feel about this relationship, I’m gonna make it really hard for myself to leave, to show you how committed I am to this relationship. And you get some of the same dynamics in organizations: you sign a contract with the company that you are currently an employee of, and it has certain very clear transfers of agency in certain contexts. So I think it shows up wherever people think it would be more useful to work together on something. Coordination: when I did my original slime mold deck, Venkat did a really nice piece that was very flattering, about coordination headwinds and how they rule everything around us. They’re everywhere. I know I use this lens a lot, but everyone wants and expects hard things to be hard for interesting reasons, and the reality is the vast majority of hard things are hard for totally boring reasons. It’s just that the coordination challenge of getting lots of individual agents to point in roughly the same direction for some period of time is really, really hard. And the amount of energy in society that goes into this is huge. I think to some degree, technologies like AI and the internet change this cost on the margin, but on a fundamental level, I don’t think this is just, man, humanity is kind of silly that we spend all our time coordinating. No, I think it’s a fundamental thing, a thing that’s expected to show up in any complex adaptive system. And over and over again, we’re increasing the information transfer rate, which allows larger coordination structures to be plausible. I think it’s a fundamental, first-principles kind of characteristic: it emerges in any plausible universe that must have this dynamic.

 

Simone Cicero:

Is there any chance, if I understand well, that what you are, I don’t want to say arguing for, but maybe suggesting, is that when you deal with complex ecosystems and complex platforms, a couple of useful things to do are agreeing on the North Star on one side, so the big vision maybe, and agreeing on a small set of kind of commitments, which could be, for example, interfaces, right? That we all agree we can discuss maybe once and for all, and then kind of stick to this promise as we develop our own piece of the puzzle.

 

Alex Komoroske:

I think so. But also, the thing I see people do is like, okay, I want to coordinate with all 18 of these different people. Step one, get them all to agree on a thing. That’s impossible, that’s a miracle, stop right there.

 

Like, that will not work. So instead, what I do is I say: listen, I’m gonna sketch out where I’m going and why I think it’s interesting, and why other people might find it interesting too. And I’m not gonna try to convince you; I’m just gonna do it. This doesn’t require me to convince you to take action on this. I’m gonna start tinkering on these very short-term things that accumulate value in that direction. And if you think it’s worthwhile, feel free to pitch in. That’s great. And you can even help tweak the vision. That’s great, that’s fine too. But you come to me if you want to participate. Because when you find someone who has a self-selecting interest in your thing, they are way more motivated. As opposed to trying to force someone; it’s like trying to force someone to be in love with you. It doesn’t work, it doesn’t work that way. You can make it more likely that someone falls in love with you by being a caring, thoughtful person, and, you know, letting them know that you would welcome their attention and interest. But don’t try to force somebody to. And so once you get momentum, if you find a set of actions that will get you some momentum, even small amounts, you might find that people go, oh, that’s kind of cool, I wanna be part of that. And then that gets more momentum, and then it builds. And before you know it, you actually have quite a large thing. And if you didn’t, it’s okay. This is where I have an essay about the doorbell in the jungle, this notion that people do strategies and they do this big grand thing, and they say, well, we shouldn’t even do any of this unless this big grand thing is possible. And they go for the hardest part first: first, we’re gonna do this big bold bet. My God, no. Sketch out a thing that is coherent, that people go, I can see how that could work; again, a North Star that is plausible and inspiring, and it should be like two pages, it should not be long. It should be: ah, here’s why this thing could work. And people go, ah, I guess, yeah, sure, okay. And then figure out the small number of actions that you could take right now that would very likely pay for themselves, which is to say very low cost, or very clear value that’s unlocked. Like, very clear: I’ve got eight developers, eight customers, who are banging on the door, asking for exactly this thing and telling me to take their money. Neat: if I invest three hours of effort to build that feature, I will get money for it. So, what’s the likelihood I regret doing that work? Very low, because worst-case scenario, it paid for itself.

 

And then you accumulate, and then what you do is you just look for more signals of demand. And if at any point you stop seeing demand, just pause, just stop developing it; that’s fine. But when you start seeing demand again, start investing again. And if you have multiple of these, eight or so of these in different directions, the likelihood that any one of them is firing on demand at any given time is pretty high. And so this looks, again, to people like nihilism. They say, oh, you’re saying you don’t have a strategy. No, I’m saying I have a really good strategy. It is a meta-strategy that works quite well and is likely to work no matter what is happening in the rest of the world. But people so often just go, yeah, step one, convince all 18 people that this is a good idea. Like, that’s not gonna work, you know?

 

Simone Cicero:

I mean, that’s extremely interesting, because we at Boundaryless are arguing as well for organizations to embrace a different organizational structure, which is a little less, well, actually not a little, but a lot less bureaucratic than usual. So when you say, for example, you shouldn’t have a strategy in mind, you should kind of let the strategy unfold itself, I think it’s really resonating with what we do, for example, with the 3EO model, which is the organizational framework we have developed following the example of Haier, Rendanheyi, and other models. And I recall that when we interviewed, for example, the CEO of GE Appliances, a US-based company that just a couple of years ago adopted the Rendanheyi model, which is based on micro-entrepreneurial units generating their own P&L, essentially, he said: we don’t have a strategy, the units have strategies. Which resonates a lot with what you’re saying. But maybe my little question before we jump into other topics is: how do you organizationally deal with this? We are used to organizations being much more directive, you know, in terms of strategy, in terms of management. I can think of, for example, the recent turn of narrative at Airbnb: we’ve been listening to Brian Chesky in the last six months saying, we stopped doing Agile, independent development; we just centralized all releases into twice-a-year release trains and so on. So what I’m asking you is: how can the management in organizations be comfortable with building products and platforms in this way, with so little control over the outcomes?

 

Alex Komoroske:

I think this is one of those things where my answer sounds so pat and so silly: it’s trust, trust makes it all work. And especially when you’ve hired very good people, it’s a “when you love it, let it go” kind of thing, you know. People will know and do things that you don’t fully understand, and they won’t be able to represent to you why they are a good idea. And sometimes it’s literally information they cannot share with you because of the power dynamics and, you know, what have you, or it wouldn’t make sense, or it would take forever to explain. And so it’s just a lot of: hire good people, give them quite a bit of leeway, and help ensure that they can think long-term. You can think of trust as the lubricant of an organization. It’s the thing that allows the organization to navigate ambiguity and to get stronger from it, not weaker. By default, ambiguity destroys an organization in a low-trust environment. Every little thing becomes, oh, I think that person is just trying to be lazy, or do this thing for their own personal gain. Great, I mean, you’re screwed at that point; you descend into this finger-pointing kind of thing. A lens I think is interesting for organizations, by the way, one of my favorite little lenses, and we use this a fair bit on Flux, is Kayfabe. Kayfabe is this notion from professional wrestling, and what it means is a thing that everybody knows is fake, that everybody acts like is real. And I think Kayfabe is a useful lens for understanding what actually happens in organizations. Kayfabe is also a little bit of a lubricant in organizations, in the same way that politeness is: a lot of politeness is a lot of crushed-up little white lies, kind of, but they lubricate interactions in society. So a little bit of Kayfabe is good. It helps make sure everybody is like, yeah, we’re going to be bold or whatever. What can happen in organizations in general, and this happens, I think, inextricably to some degree in all organizations over time, is that the Kayfabe starts off grounded, just a little bit off the ground truth. But then you add more layers, and then you add more pressure for big bold bets. And that means it’s harder for people to raise disconfirming evidence up the chain to their manager and say, I don’t think that’s possible. And so you now have people just playing the game. So now you have something that’s decoherent: at a certain point, the Kayfabe delaminates from underlying reality and becomes free-floating. And this is a very dangerous situation, because you now have a thing that looks like it’s going in a coherent direction. It is not. The other thing that is very dangerous about this is that you, as a participant in an organization, have to make a decision: do I try to point out the ground-truth reality, or do I play along with the Kayfabe? And everybody has built all their plans on the Kayfabe, not ground reality. And so the more people that have done that, the harder and more expensive it is for you to go for the ground truth. And so when you point out, hey, actually this thing down here, a lot of people who built all their plans and all their promo plans on plans that are based on non-reality will be very mad at you. And, you know, they may be quite passive-aggressive to you.
So there’s actually a very strong accelerating momentum to just play along with the Kayfabe. And this is one of the reasons Kayfabe is such a dangerous factor in large organizations. It happens, I think, in every organization at some point, to some degree, and it can be quite dangerous.

 

Shruthi Prakash:

I think,

 

Alex Komoroske:

Sorry, I went in a very weird direction with that. I just.

 

Shruthi Prakash:

no, it’s really interesting for us to hear as well. I really liked your point on trust. Earlier, when I was listening to some of your podcasts from before, one point that I noted was how, if I’m not wrong, Google made early releases of products which were, let’s say, hidden or available only to early customers, and how that basically increased accountability to the customers, essentially. So that’s something I wanted to touch upon here, and see how trust in that organizational sense relates back to the customer, keeping your ear to the ground, that kind of a mindset in terms of working as well.

 

Alex Komoroske:

It’s interesting. I wouldn’t have made the connection to trust per se. I think that to some degree, you want to get feedback from real situations as quickly as possible, because a lot of ideas that sound great, you do them in reality and they just kind of don’t work, even though there’s something about them that sounds coherent. So you want to get as short a feedback loop as possible for the core dynamic of your thing, and then do iteration cycles on top of that. And that’s one of the reasons, you know, it’s so easy to sit there and polish this thing that you want. This is actually another place where, when organizations are dominated by Kayfabe, everybody wants to polish this big thing that’s gonna solve all the problems, this new product or whatever, and they sit there and they polish it, and they polish it, and they polish it. Actually, having real users see it might ground-truth it and have people go, I don’t want this. And that would be very dangerous to learn politically within the organization. And so people kind of go, oh, we’re just gonna make sure it’s really good. It’s like, no, actually, the core concept might not be useful. It might not be a viable concept in the market. And so that’s why I always look for: how can you de-risk it? People have this intuition that big, bold launches are the way you ship things: you’ve got this perfect moment and everyone’s gonna go, whoa, and come into it; everyone’s gonna listen and hear about it. I think this tactic is extraordinarily dangerous and unnecessary in a lot of cases. When you have something that people who are interested can self-select into and accumulate, and then it has momentum as they accumulate, you don’t need a big bang. In fact, actually a big bang is bad, because maybe a key opinion maker ate a bad burrito that day or something and they’re grumpy, and they write a tweet that says, I hate this thing for these reasons. Like, crap, now my thing is dead because that guy ate a bad burrito that day. So I think it’s much better to de-risk the underlying reality. This is where the frame goes less from, how can I maximize the number of people who hear about this and love it, to actually, how can I minimize the number of people who use the experience and have such a bad time that they will never use it again? And then, secondarily, maximize the absolute number of people who have a good-enough or actively great experience. People focus on the latter part, but they don’t focus on the former part. And if you do that, then you start realizing, oh, wait a second, I should do an alpha test or some small thing in a quiet, out-of-the-way corner, for a self-selecting population of resilient, engaged, motivated users. It’s a small group of people, so if you burn through them and they have a really bad experience, you’ve only burned through a small portion. But also they were less likely to get burned, because it says: warning, this thing might blow up in your face. They use it anyway, it blows up in their face, and they go, well, they did tell me it was gonna blow up in my face. So, you know, they’re less likely to have a negative surprise about the underlying reality. Which means that in the future, when you say, hey, we made a big update, it no longer explodes in people’s faces as often, they might try it again. So you haven’t burned them out.

 

Simone Cicero:

I’m considering one thing. So basically, I think the original consideration that led us to connect trust with de-risking and accountability is this idea that you want to hire great people, you want to give them lots of freedom in developing the platform, but then, what are they accountable to at the end of the day? Organizationally speaking, and again clicking back to this connection between the product and the organization, in a Conway’s Law lens, let’s say, I can imagine an organization embracing this micro-entrepreneurial structure where teams have their own P&L, they kind of have to create their own sustainability, accountability to customers, either internal or external, and develop these kinds of chaotic platforms where new products, for example, can pop out very easily, as long as there is some customer validation. But then the question that I have is, how do you see this playing out over the long term, from the perspective of the impact on the very idea of the organization that we have? You mentioned Coase before, right? And it’s clear that transaction costs are going down, so it’s much easier to collaborate internally and externally. So over the long term, I kind of feel like our concept of an organization is really no longer up to the challenge, let’s say. So how do you see that emerging? And maybe, if I can add a little bit of a twist on top of it, what is the role of things such as protocols and Web3 in this transition from organizations as closed systems into networks of small teams that develop and connect pieces all together as they bring them to the market?

 

Alex Komoroske:

Yeah, I do think that we’re going to see a reconfiguration to some degree. If you go back, by the way, and read things that were written in the early 2000s (Yochai Benkler’s The Wealth of Networks, The Starfish and the Spider, I’m blanking on the authors’ names, some of Clay Shirky’s work), I think it’s very prescient. And it’s a trip reading it, because it’s not how it played out. It played out like that, and then it just looped right into a top-down, aggregator-first kind of thing. If you read Tim Wu’s The Master Switch, the thing that really surprised me about that book was I had this notion, before I read it, of, oh, tech is special, this has never happened before, this is a totally different dynamic. And that book is like, bah. Basically every technical revolution, radio and TV and movies, they all went through this whoa, and then whoop, into a centralized thing. And so my hope is that we are actually at the era of a new set of enabling technologies that might allow a new era of tinkering and bottoms-up things. Those might plausibly be some of the innovations in Web3, potentially; I’m not an expert in that. I do buy that lowercase-c crypto is useful; I don’t necessarily know if I buy all the other components of the Web3 vision. And I also think that generative AI really does play a role in reducing the transaction costs. I think we will see some different equilibrium. By the way, on the way to go fast: when people are like, oh, we should go fast, going fast almost always requires taking shortcuts. And a shortcut takes the form of some kind of externality. The externality might be to whatever sucker is sitting in this seat in three years, so it might be an externality in time; it might be an externality to other adjacent things. And so often when things go fast, what they’re doing is pumping externalities out into another part of the system. And you can visualize this with this mental model in organizations: someone says, wow, it’s so foggy in here, I built a little machine that’s gonna pump the fog out of our room so we can see clearly and execute. And what they don’t realize is that the machine is powered by coal. And so actually it’s not fog, it’s smog. Everybody is creating clarity in their local pocket and creating lack of clarity around them. And so everyone is fighting in this escalating race where everybody is pumping tons of coal into it, making the overall thing much more expensive and challenging. So just remember, I think that when you set out to do a smaller thing, the externalities might come back to bite you, and sometimes you can line it up so it doesn’t really matter. But, for example, one of the challenges of the situation of micro-entrepreneurship that you’re describing is that, to the extent there is a shared resource, a resource in common, it’s a literal tragedy-of-the-commons kind of situation. And one of the shared resources across companies is brand. To the extent the brand means something, it is a very, very valuable resource that is also quicker to erode than to build.
And so one team can say, effectively (they won’t ever think they’re doing this), oh, I can capitalize on the amount of trust we have accrued in that brand to go really fast on this thing, in a way that actually will burn trust overall on net, but it’ll make my project look really good. And they don’t think of it that way, of course. But that is an underlying dynamic that happens, and that’s one of the reasons it’s very hard to challenge. This is why, if you’re gonna do this kind of organization, it’s best for external customers to also see the units as separate. You might (I’m just thinking off the top of my head) see Y Combinator and its individual companies as an example of this. Y Combinator as a brand means something: okay, selection pressure, a level of quality. But then each individual company is a separate entity. And so if one of them does something really bad, and everyone says, wow, what a bad mistake that was, it has a little bit of an effect on the overall collective, but not a massive one. Whereas if they were all branded as literally being part of a company called Y Combinator, as separate divisions, it would hurt the overall system quite a bit if one of them explodes. If that makes sense, if that tracks.

 

Simone Cicero:

Yeah, I think it’s really interesting, and I think your consideration around speed and externalities connects with the idea of discounting, right? Often, to do things faster, you kind of discount the future implications of what you’re doing, and you can create debt, for example, things like that. And you also mentioned brand, and so I also connected it with this idea of the commons. So you have maybe organizational commons that you don’t want to impact and kind of sacrifice to the will of a small team versus a whole organization. So, putting all this together makes me reflect a little bit on some points that Shruthi was also making before we jumped on this call, which are related to the moral implications, okay, of what you do with a platform. So is there an inherently, I would say, better way of doing platforms? I mean, it’s very much related to the open source movement that you have been leading to some extent with the time you spent on Chrome. So how does this conversation around speed and consideration and building trust connect with the idea of switching from an age of exploitation into an age of cooperation, and kind of trying to, for example, avoid reinventing the wheel all the time, and trying to maybe be a bit more thoughtful as we build platforms?

 

Alex Komoroske:

Okay, so there are a number of pieces I want to unpack in there. On reinventing the wheel: to some degree, evolutionary systems look really sloppy and really wasteful, but that’s actually load-bearing; the replication with variation is what gives you the thing to select over. So it doesn’t have to be, oh, everyone has to do exactly the same thing. No, the variation is good. The variation is the raw material that, with the selection pressure, leads to innovation. So I don’t think it’s that everyone should use the same thing. You want there to be a pull towards the same things, though. One of the rules of thumb I used to have, when we were working on web standards, was: imagine each individual web standard that you’re gonna propose, and imagine the number of characters of normative spec text that must be added to the canon of all the public specs that people rely on to describe this idea. As that goes up, the cost grows exponentially. So if you can say, hey, it’s basically this thing over here, plus this little extra thing (like, we’re going to take an existing CSS property, display, and add a new value to it), that is so much simpler than describing a whole new styling system accessed via JavaScript. It’s like, whoa, what? So you want to rely on the existing pieces as much as possible where they fit. And that is a natural convergence that leads to a building up of things that are coherent. But you also do want some space to be able to do things separately. There’s a pattern that I like for this kind of thing, by the way. What you do is you have, in a spec, some kind of open-ended field that points maybe to a URI, some other semantic that you’re referencing. And then you buy a domain as a foundation, a number of the major providers or whatever; let’s imagine this is about credentials, so you say, verifycredentials.org. You create this domain, you just stand up a MediaWiki instance on it, and you make sure that it’s a small foundation of a number of different companies that just pay the hosting costs, and that’s it. It’s just kind of a shared common little thing. And then you say, hey, anybody can create a page on this wiki, with their username prepended. So it can be Alex’s-dash-whatever semantic. Anyone can create one, and they can document: here’s the fields, here’s the methods, here’s the events, here’s what I mean by this. And then, when anybody creates a new one, first they must search for it. They search for what they want their thing to do, and then you say, oh, is it like one of these five? And then you show how often each one is referenced, how many stars it has or whatever, and you sort based on that. So this allows anybody to create a semantic, whatever, fine, but then it also creates a preferential attachment effect. If you’re about to create a semantic and you look at one and go, oh, this one has a thousand stars, and oh yeah, that’s pretty close to what I meant, and oh, they actually thought about this edge case I hadn’t thought about. Yeah, I’ll use that.
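(A minimal sketch of the registry pattern described above, as one might implement it in Python. Everything here, the in-memory store, the names SemanticRegistry, register, and promote, is an illustrative assumption, not any real system from the conversation.)

```python
from dataclasses import dataclass

@dataclass
class Semantic:
    name: str              # e.g. "alexs-tab-strip" (prefixed) or "tab-strip" (promoted)
    description: str       # fields, methods, events, intended meaning
    references: int = 0    # how many projects point at this entry

class SemanticRegistry:
    def __init__(self):
        self.entries: dict[str, Semantic] = {}

    def search(self, query: str) -> list[Semantic]:
        """Return existing semantics matching the query, most-referenced first.
        Sorting by references is what creates the preferential-attachment
        effect: popular entries surface first and keep accumulating users."""
        hits = [s for s in self.entries.values()
                if query.lower() in s.description.lower()]
        return sorted(hits, key=lambda s: s.references, reverse=True)

    def register(self, author: str, name: str, description: str) -> Semantic:
        """Anyone may add a semantic, but always under their own prefix."""
        entry = Semantic(f"{author}-{name}", description)
        self.entries[entry.name] = entry
        return entry

    def promote(self, prefixed_name: str) -> Semantic:
        """Once an entry clearly wins, promote it out of the sandbox
        to the unprefixed name."""
        entry = self.entries.pop(prefixed_name)
        unprefixed = prefixed_name.split("-", 1)[1]
        promoted = Semantic(unprefixed, entry.description, entry.references)
        self.entries[unprefixed] = promoted
        return promoted
```

(The key design choice the sketch tries to capture: creation is permissionless, but search-before-create plus popularity sorting gives the optional-but-default convergence Alex goes on to describe.)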
So now at a certain point, if one of these pops out and starts being used (oh yeah, that one, that’s the tab-strip semantic, sure, that’s the one we all use), at that point it’s a no-brainer to promote it out of the sandbox into the, you know, unprefixed version. And then you might even decide to standardize it formally after that, but you might not need to, because everyone goes, yeah, tab-strip, you know. So in this way, you’ve discovered the semantics by allowing people to do whatever they want, but giving a little teensy bit of an optional-but-default convergence kind of energy. And this is a very powerful pattern. So, to unpack and go back to the moral component.

 

The tactics I’m describing in all of these things are amoral, which is to say they have no moral component attached to them. What matters is what you do with them: what kind of net change do you cause to happen, directly or indirectly, in the world? And this is true, I think, for the vast majority of tactics: they are amoral. You can’t say affirmatively whether they are good or bad apart from what ends they are put to. And I think my frame of morality is: you should think as long-term and as broadly as your time horizon allows. So you should be thinking not just about the effects that directly affect you in the future, but those that affect things indirectly: what kinds of things do I think will happen in the world as a result? If someone makes this open source little thing that makes it really easy to clone someone’s voice, for example, and someone clones someone’s voice and uses it to do all kinds of nefarious, terrible fraud or whatever: who could have thought? Well, you should have thought. Do you feel good about that thing that you just did? So, like, own the implications. You know, in the tech industry we have this thing, and someone actually, literally, gave me something very similar to this advice at one point. They said, Alex, you would be even further in your career today if you just stopped thinking through the implications of your actions. Which I think is a very revealing statement, because I think the fundamental definition of morality is to think through the implications of your actions. Not dwell on it forever (The Good Place’s later seasons have a nice framing on this, about how that can get absurd), but of course you should be thinking. And when you see disconfirming evidence, like, won’t it be used for such-and-such, you go, oh crap, I should think about that. Oh, maybe if I do it like this, that makes that kind of thing a little less likely to happen. You have to be thinking through the implications. And this is where I used to believe, on a fundamental basis, that open systems were morally superior, period. I no longer believe that to be the case. I think that many open systems tend to be more morally superior for society, but that is not always the case. And there’s a number of places where they allow people to find... I have an essay I published many years ago, it’s pretty dark, called The Runaway Engine in Society. It frames society as an evolutionary search through an evolving fitness landscape, using different technologies and different substrates: biological evolution, cultural evolution, algorithms and search, AI. And the conclusion it comes to is: as you take away the gatekeepers, as you make it so that anybody can tweak any of these things, you very quickly fall down whatever the actual incentive gradient is, which in the case of humanity is whatever terrible heuristics were burned into the firmware of our brains in a high-friction evolutionary environment. Some of the heuristics that were burned in, that made a ton of sense in our evolutionary environment, were: if you see fat or sugar, just eat as much as you can, you know? And gossip all the time. And if you see somebody that you don’t recognize, just kill them, just in case.
These terrible heuristics are absolutely horrendous in a modern society that can provide those things. And so open systems tend to allow people to fall into these traps, where people just do, not what they want to want, but what they actually want. Like, one of the things I want to want is to read interesting pieces by authors I disagree with that will challenge my worldview. That’s what I want to want. But in reality, I want to read stuff that makes me feel like a smart person for what I already believe, especially when I’m stressed or busy, right? And this is true, I think, for the vast majority of people. And this is one of the reasons that gatekeepers in news, you know, back in the past, were bad in a lot of ways, but they also helped make sure of: okay, listen, we’re all going to not share a bunch of sensationalist crap, or whatever; we’re going to try to aim for an even-minded thing. And, I don’t know, it’s debatable whether that was better or worse for society. But I do not think it’s a slam dunk on either side. And so that’s why I think it’s more complicated than just “open systems are more moral”. But again, one of my rules of thumb is, for every decision I make, even micro decisions, I try to imagine someone showing me a video of this decision in 10 years, at a party with all my friends and family. I want to optimize to, at the very least, not be embarrassed by it, and ideally to be actively proud. And that is my moral compass. That is what I try to think about on a long-term time horizon. When you’re no longer at that place, you’re in a different phase of your life, you’re probably working at a different company, working on different projects. It’s very easy in the thick of it, in the heat of the moment, to have this totalizing idea about what needs to happen in this particular thing and what’s most important. But taking that long-term perspective backwards often helps you go, oh my God, wait a second, no, I should stand by my principles on this, because this is something that I would be absolutely embarrassed to look back on in the future. This, by the way, I think sometimes makes you a less effective corporate employee, just in general, because the reality of collaborating in an organization sometimes requires, to some degree... You know, I was talking to somebody

 

Shruthi Prakash:

Thanks.

 

Alex Komoroske:

recently, and they were saying their management coach had told them: “Do you want to be right, or do you want to be effective?” That was a very clarifying frame. But sometimes the thing that makes you effective in an organization is precisely to do a thing that is probably not the right thing from the broadest possible perspective. It’s locally effective but globally ineffective, or leads to a bad outcome.

 

Simone Cicero:

This makes me think of an essay from Joichi Ito, and I don’t remember the other author, I will put it in the notes, called Design for Participation. You were looking at the implications of what you design from a time-horizon perspective, and now you refer to looking at the implications from a network perspective: looking at what we design from the perspective of us being participants in a broader system of interactions, and trying to take in all the points of view. It’s a kind of moral compass we can tap into.

 

Alex Komoroske:

I would agree that, all else equal, a system that allows more individuals, more people, to participate as agents with agentic power in the system is better, without question, and I think morally better, almost, because treating people as ends, not means, is just generally a good moral principle. But I would argue it’s possible to get this wrong; the main thing is that you have to think systemically about the system. You have to think not just about the local incentives, like, yeah, we’ll have people participate.

 

Yes, but if you have a dynamic that will very clearly lead to people being incentivized to defraud their co-participants and use nasty tactics to win, then design for participation has actually created that negative outcome. So design for participation, but also make sure the incentives are aligned and that there are self-healing mechanisms within the system that prevent people from doing things that turn out to be harmful. As a general rule of thumb: Goodhart’s Law is the notion that once you start optimizing for a metric, it ceases to measure what you care about. The intuition for it is that the interest of the individual members of the swarm is different from the interest of the collective, the overall goal. Goodhart’s Law is inevitable, but there are different ways to make it run faster or slower. One way to make it run faster is to make it so that all that matters is this one metric, nothing else, the only legible signal in the entire system; then you’re going to get all kinds of fraud.

 

Yes, if there are no real names in the thing, no direct connection to the real world, you’re going to get a whole bunch of fraud. And the second thing that matters is how clever the participants are. If they’re really clever, Goodhart’s Law will run faster, because they’ll figure out all kinds of weird little loopholes and one-weird-tricks, and they’ll discover an “innovation” that only improves the metric, not the underlying reality, which is its own form of kayfabe.

 

Simone Cicero:

Yeah, yeah, also we should be careful with what metrics we set, you know. That’s another key point.

 

Alex Komoroske:

And on a fundamental basis, no individual metric escapes this. You can do things like a portfolio of metrics and check metrics, and these help reduce the effect. But fundamentally, if what you care about is long-term outcomes (I don’t want you chasing a short-term result; I actually want you to care about the long term), well, the long term takes a long time to happen. Along the way, you don’t know whether this person is a bad actor or not. So it’s fundamentally impossible to steer based on long-term metrics. You must use proxy metrics. But then the proxy metric will become your primary metric, because it’s the one that constantly determines how you’re rewarded and how your performance is seen. Everyone will focus on the proxy metric, and they’ll forget that the proxy metric is a means to an end. Another thing that happens all the time is that people focus on things that are easy to measure, not things that matter. It’s the streetlight fallacy: the drunk is looking for his keys under the streetlight, and the person helping him asks, “Oh, did you drop the keys near here?” He says, “Oh no, I dropped them way over there, but it’s dark; I can’t see over there.”

So yes, you get this kind of thing with metrics. People measure whatever is easiest to operationalize as a proxy, but it’s often wildly disjoint from the underlying reality you care about. The way you keep on track with this is trust: giving people the space to take a long-term perspective, and holding them accountable on a long-term perspective. That’s how you handle the ambiguity. When participants start to see the collective as an end in itself, a mission-driven thing they morally care about, not just something within which they optimize for their reward or perf, that’s when you start getting really good results. And that requires them to see the collective goal as something quite a bit bigger: an infinite-game mindset, as opposed to a finite-game mindset.
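To make the proxy-metric trap concrete, here is a minimal, purely illustrative sketch in TypeScript (not from the conversation; every name and number is hypothetical). Agents are rewarded only on a legible proxy, imitation spreads the best-rewarded strategy, and the proxy climbs while the real value decays:

```typescript
// Toy Goodhart's Law simulation. Each agent produces work with a hidden
// realValue (what the collective actually cares about) and a legible
// proxyScore (what everyone is rewarded on). "Gaming" effort inflates
// the proxy without adding real value.

interface Work {
  realValue: number;
  proxyScore: number;
}

// gaming in [0, 1]: the fraction of effort spent optimizing the metric
// itself rather than the underlying goal.
function produce(gaming: number): Work {
  const effort = Math.random();
  const honest = effort * (1 - gaming);
  return {
    realValue: honest,
    proxyScore: honest + 2 * effort * gaming, // gaming pays double on the proxy
  };
}

function simulate(rounds: number, agents: number): void {
  // Start nearly honest, with small random gaming levels.
  let gaming = Array.from({ length: agents }, () => Math.random() * 0.1);
  for (let r = 0; r < rounds; r++) {
    const works = gaming.map(g => produce(g));
    // The proxy is the only legible signal, so imitation follows it:
    // everyone drifts 10% toward the strategy the proxy rewarded most.
    const best = works.reduce((b, w, i) => (w.proxyScore > works[b].proxyScore ? i : b), 0);
    gaming = gaming.map(g => g + 0.1 * (gaming[best] - g));
    if (r % 25 === 0) {
      const avgReal = works.reduce((s, w) => s + w.realValue, 0) / agents;
      const avgProxy = works.reduce((s, w) => s + w.proxyScore, 0) / agents;
      console.log(`round ${r}: proxy=${avgProxy.toFixed(2)} real=${avgReal.toFixed(2)}`);
    }
  }
}

simulate(100, 50);
```

Making the participants cleverer, for instance by letting them jump straight to the proxy-maximizing strategy instead of drifting toward it, collapses the real value in a handful of rounds: exactly the “run faster or slower” knob Alex describes.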

 

Simone Cicero:

Thank you so much. Shruthi, I know that you maybe have a last question before we move to the closing.

Shruthi Prakash:

Sure. Thank you, Simone. The point I wanted to talk about a bit more: you said that it’s good to design for the nefarious users, or for the most sensitive cases, right? And that’s where I wanted to touch upon how the power of decision makers influences how we operate as an ever-evolving ecosystem. This is something you touched upon, but for me as a woman, for example, openness is not innate. It’s not something that comes easily, and it doesn’t feel as secure for me as it maybe does for others. So how do you design for those sensitivities, and how do you do that progressively as you get more and more open?

 

Alex Komoroske:

Hmm. Yeah, being open is, to some degree, a privilege. It has to come from a position of strength. When you’re unsure, locally, in the sense of “do I belong here? is my vision welcome?”, that can come across. It’s easy to say, “oh, everything should be open,” but honestly an open system can be a very scary place, especially for people who don’t want their contact information public, or who might be harassed for various reasons. Part of the point of an open system is that you don’t know who will participate. Pretty fundamentally, that’s the whole thing, right? And that means you’re going to get some bad actors, and they’re going to come in and be mean and do bad things and just not live up to the code. That’s why it’s important to have a code of conduct, and why it’s important to proactively and actively police those kinds of things. And I think it’s important to bake this in, both in how you design the system and in how you design the system that builds the system, in terms of how you run the project.

I do believe that people want to be good, but often they’re put in situations where they’re incentivized to be bad. It’s very easy to get into those situations when there’s anonymity: “oh, this is just some random persona on the thing; I’m just saying what I feel.” No, you’re telling an individual that you think they’re terrible and worthless. There’s something about seeing someone in person that lets you understand their direct humanity and feel a level of trust in them, and the internet allows you to talk with anybody. Here’s one way of looking at it: trust ultimately comes from the expectation of future interactions, direct or indirect, with a counterparty. What I mean by indirect is, maybe this person tells their friends, “oh my god, don’t talk to Alex, that guy’s a jerk.” That’s an indirect future interaction. So trust requires expected, repeated future interactions. If you’re in a small town and someone cuts you off at the stop sign and speeds past you, you probably won’t flip them off, because you’re likely to see them again at the grocery store or at church. If someone cuts you off in New York City, you’re going to flip them off; you will never see that person again. And there’s something about an open system where you don’t know when you’ll see someone again: “oh cool, if I have a bad interaction with this community, I’ll just burn that account and go do something elsewhere.” It puts people in a transactional mood, or enables them to be transactional, and that can create all kinds of weird oddities in the way the underlying system works.

Recognizing that there are situations in which people will do things that are probably not what anyone wants is not dwelling on the negative. “I hope people will be better”? Hope is not a strategy, man. You’ve got to be realistic about the kinds of things that might happen, and ask: what would we do if that did happen?
And that’s why from the very beginning you have a code of conduct, and then you develop it and grow it as you see real situations that you need to protect against. But you can’t just say, “I’m sure everyone will use this in a great way.” No, they’re going to do weird things. On the web platform it could be really fun sometimes to see how people used stuff. I remember the canonical example: somebody built a thing using service workers to transcode an atypical image format into a JPEG, live, before it was passed through to render in the browser. We saw that and went, “Oh my God, that’s crazy. Does that even work?” And it did. Then we had this moment of, “Wait, is that a security hole?” And no, it was fine, actually totally fine; it was within the security model. So you have these moments where people will do weird things with your platform, and ideally you want your reaction to be, “Oh, cool. Weird, but cool,” as opposed to, “Oh no, we have to shut that down.” You have to be realistic and actually imagine how people will nefariously use the thing. That’s why red teaming is really useful, by the way. When you’re just brainstorming, people will say, “What a negative thought you’re having.” But when you put on the role of an adversary, saying, “I’m going to try to break this system,” people allow you to do it, and you don’t feel the shame of doing it. The role-playing aspect of red teaming is one of the social pieces that makes it work. Hopefully that makes sense.
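As a concrete illustration of the pattern Alex describes (a sketch, not the actual code his team saw), here is roughly what a service worker that transcodes an unsupported image format into JPEG looks like in TypeScript. The fetch-interception calls are the standard Service Worker API; the “.xyz” format and the decodeToJpeg helper are hypothetical stand-ins:

```typescript
// sw.ts: a service worker that intercepts requests for a hypothetical
// ".xyz" image format and responds with a JPEG the browser can render.

// Hypothetical decoder, e.g. backed by a WASM build of the codec.
declare function decodeToJpeg(bytes: ArrayBuffer): Promise<ArrayBuffer>;

// In a real project, type the event as FetchEvent via the "webworker" lib.
self.addEventListener('fetch', (event: any) => {
  const url = new URL(event.request.url);
  if (!url.pathname.endsWith('.xyz')) return; // everything else passes through untouched

  event.respondWith(
    (async () => {
      const original = await fetch(event.request);                   // fetch the raw bytes
      const jpeg = await decodeToJpeg(await original.arrayBuffer()); // transcode live
      return new Response(jpeg, { headers: { 'Content-Type': 'image/jpeg' } });
    })()
  );
});
```

One plausible reading of why this stays within the security model: a service worker only controls pages on its own origin, so it can only rewrite responses for a site that has already chosen to install it.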

 

Simone Cicero:

I mean, it makes a lot of sense. In general, the conversation we’re having makes me think that there’s no shortcut. You cannot just sit back and watch things nicely develop. You have to seek an equilibrium between trusting the ecosystem and, at the same time, taking your own responsibility: detailing the North Star and the code of conduct, overseeing the interfaces that emerge, and nudging the system in what you believe is a positive way. For example, when you spoke about how you can design attractors for coherence to emerge. So it’s really a careful process, right? It’s kind of balancing

 

Simone Cicero:

this emergence with… Yeah, go ahead.

 

Alex Komoroske:

That’s, I think, the fundamental reality of it being a complex system. There is no solution. There is no one easy fix. There is no “do this and it always works.” There are some things that will be in tension, some things that you’re going to be constantly surfing and evaluating. And that active stance, knowing that this will be a constant surfing through an ever-evolving situation, is more important than anything else. That’s what I mean by the gardening mindset versus the builder mindset: it’s going to be fluid, a little bit weird, a little bit unexpected, ever-evolving. The main thing for people to keep in mind is to be open to that.

 

Simone Cicero:

Yeah, gardens need their gardeners, right? That’s the point. And lastly, Alex, to finish the conversation: we normally close the episodes with what we call the breadcrumbs. Any suggestions you want to share with our audience? You dropped some during the conversation, some essays or blogs or books, but maybe you can mention a couple of things people should be looking into, especially if they want to understand deeply what your stance is on platforms and these kinds of complex systems.

 

Alex Komoroske:

So I have a lot of my favorite essays and articles linked from komoroske.com.

And I think they all kind of talk to each other; they’re interrelated in various ways. The Gardening Platforms deck, I think, is useful, hopefully. I also really like The Doorbell in the Jungle, by the way; it’s one of my favorite little pieces of relatively concrete product guidance wrapped in a fanciful metaphor. The other things I really recommend: first of all, Flux Collective, at read.fluxcollective.org, a newsletter that I and a bunch of good friends and collaborators work on on a weekly basis. And another that I think deserves even more attention than it gets is Dimitri Glazkov’s whatdimitrilearned.substack.com. I think it’s phenomenal. Dimitri was the sort of uber-TL of Blink for many years; we were collaborators, and we still talk almost every day. He’s a brilliant engineer and architect, but he also understands the human component of code and the systems and how they’re built. His essays on that blog are just exceptionally insightful. Unlike what you find in lots of other places, which focus just on the technical decisions, he sees the whole system as a system built by humans: what that implies, how it works within organizations, different tactics, and the weird hang-ups you’ll get stuck on. I just think it’s exceptional.

 

Simone Cicero:

Thank you so much. We will put these breadcrumb recommendations into the podcast notes. So, listeners, you should check the full notes on our website, boundaryless.io/resources/podcast, where you will find Alex’s episode with the full transcript and notes. First of all, Alex, it was great to have you. I hope you enjoyed the conversation as much as we did.

 

Alex Komoroske:

Yeah, thank you.

 

Simone Cicero:

Thank you.

 

Alex Komoroske:

Yeah, thanks for having me.

 

Alex Komoroske:

Sorry, I went in lots of different directions with this. Maybe it’s different from what you were expecting, but I had a lot of fun.

 

Simone Cicero:

No, but I mean, that’s why we wanted to have you here: because you can entertain the complexity of really building platforms in a thoughtful way, which I think is very enriching for our listeners to confront themselves with as they develop these systems. And thank you so much, Shruthi, for your contribution.

 

Shruthi Prakash:

Thank you. Thanks, Alex, for joining today. It was really great hearing from you. And thank you, Simone, as well.

 

Simone Cicero:

And for our listeners, as always, stay tuned and until we catch up again, remember to think Boundaryless.