#100 – Learning Centered Organizations for the 21st Century with Indy Johar
BOUNDARYLESS CONVERSATIONS PODCAST - EPISODE 100
Indy Johar, Mission Steward of Dark Matter Labs, joins the podcast to help us rethink how we organize ourselves and our systems, addressing the inadequacies of traditional business models, which are no longer sufficient for managing today's complexity.
Bringing light to how “risk management should be localized, with decision-making power and accountability being as close to the problem as possible,” Indy argues for an organization that encourages participants to be citizens rather than employees and fosters continuous and accountable learning among individuals.
Tune into this episode and learn from Indy, a veteran in building mission-driven organizations, who has always stayed ahead of the curve.
The YouTube video for this podcast is linked here.
Podcast Notes
Indy – who joins the podcast for the third time – besides stewarding the mission at Dark Matter Labs, has also recently joined RMIT University, where he teaches no less than “Planetary Civics”, a testament to his influence in the space of social innovation.
As always, Indy comes with a profound understanding of designing complex organizations for the 21st century, from his hands-on experience in creating radical innovations in governance, architecture, and social systems, particularly in sustainability and collaborative economy.
He starts the podcast with philosophical considerations about the nature of complex organizations and then highlights the need for a radical shift toward a learning-centered organization.
In the conversation, we question the traditional metrics of productivity and value and further advocate for new metrics that account for collective intelligence and systemic contributions rather than individual output.
There was no better way to celebrate our 100th episode, because this one is a landmark. Grab a notepad and pick up a pen, because there's much to pin down.
Key highlights
- Organizational structures in a complex world, and a shift from command-and-control to systemic learning and adaptability.
- Decentralized risk management and the need to localize decision-making to enhance responsiveness and effectiveness.
- Dark Matter Labs’ mission-driven approach, emphasizing partnerships over traditional consultancy to enable sustainable change.
- Need for a paradigm shift in how value is defined and measured within organizations, moving towards incorporating multi-dimensional success indicators.
- Redefining organizational roles to encourage craft and citizenship, where individuals contribute to the organization’s decision-making processes.
- Rethinking legal structures governing organizations, moving towards frameworks that support distributed risk and empower collective action.
This podcast is also available on Apple Podcasts, Spotify, Google Podcasts, Soundcloud and other podcast streaming platforms.
Topics (chapters):
00:00 Learning Centered Organizations for the 21st Century with Indy Johar – Intro
01:40 Introducing Indy Johar
10:31 Value of Learning as a Strategic Advantage
14:41 Accelerating Learning and Modifying Organizational Agendas
19:23 Structure in a post-managerial economy
24:46 The Design of Dark Matter Labs
34:19 Citizenship inside Organizations
41:54 Citizen or Employee?
47:25 Money making machine to avenues of value
55:16 Breadcrumbs and Suggestions
To find out more about his work:
- Indy Johar | LinkedIn
- Indy Johar – London, United Kingdom, Project00.cc
- Indy Johar (@indy_johar) / X
- Indy Johar – Medium
- Dark Matter Labs
Other references and mentions:
- Redrawing the Human Development Thesis for the 21st Century — with Indy Johar – Boundaryless
- Dark Matter Labs: rethinking organizing #BeyondTheRules – with Indy Johar and Annette Dhami – Boundaryless
- Marshall Plan
- The Deep Dive (thedeepdivepod.com)
Podcast recorded on 19 March 2024.
Get in touch with Boundaryless:
Find out more about the show and the research at Boundaryless at https://boundaryless.io/resources/podcast
- Twitter: https://twitter.com/boundaryless_
- Website: https://boundaryless.io/contacts
- LinkedIn: https://www.linkedin.com/company/boundaryless-pdt-3eo
Transcript
Simone Cicero
Hello everybody, and welcome back to the Boundaryless Conversations podcast. On this podcast, we meet with pioneers, thinkers and doers, and we talk about the future of business models, organizations, markets and society in our rapidly changing world. Today, I’m joined as usual by my regular co-host, my colleague at Boundaryless, Shruthi Prakash. Hello, Shruthi.
Shruthi Prakash
Hi everyone.
Simone Cicero
Good to have you. And yeah, of course, we have very special guests as we often do, but this time is even more special, let’s say. We are lucky to have a friend who comes to the podcast for the third time. We have today with us Indy Johar. Hello, Indy. Thank you so much.
Indy Johar
Hi Simone and Shruthi, how are you doing?
Simone Cicero
I mean, Indy is a well-known face for all our listeners. Indy is the founder of Dark Matter Labs, an organization that I would say is exploring the edges of what it means to be an organization in a changing world. He also has a long track record of founding so many great initiatives that I'm not going to repeat today, because Indy gets bored with long introductions.
I must say that since the last time we spoke, Indy is now also teaching a new, very interesting subject at RMIT University in Australia. He's now teaching planetary civics, which I found an intriguing topic to share and teach about. So maybe, Indy, we can talk about this later on as well.
But as we discussed in the intro, we agreed that it's a good idea to use this conversation as a way to reconnect and to feed the conversation that we've been having over the last 10 years. Some of that conversation is also accessible to our listeners if they check the previous episodes we had. But as a start, I think it would be great to hear from you where you are at the moment with your work.
And maybe you can mention some of the key things that you keep in your mind as you go ahead with your incredible projects and missions. And then we can start from there to untangle this knot and go deeper into any direction we can think about.
Indy Johar
Hi Simone, thank you so much for inviting me back, and thank you to both of you for the great work that you're doing. I think it's an important, critical exploration. As I've said before, I think the real challenge of the 21st century is how we organize ourselves: how we organize ourselves and how we organize ourselves in relationship to the world. And I think everything else falls out of that. Most of our challenges lie in the misorganization and the malorganization of ourselves; it is through that misorganization that we construct most of the externalities we are facing, in many formats. So firstly, thank you for all the great work that you're doing in building this conversation, and building it with a genuine space of inquiry and unknownness in it. To get into this conversation, the thing I'm really finding fascinating right now, and the thing that I'm really holding, is that if you're building an organism in a complex, emergent, imagined world, then your theory of organizing has to be rooted not in control but in building the learning capacity of the organism. And if every agent in the organism is able to learn and adjust and support the learning of the rest of the system, you build a completely different theory of organizing. One of the profound challenges we face, in an increasingly complex, emergent world where every action and every agency is relational and situational to its context, is that theories of command and control, of centralized action, centralized prescription, and centralized instruction, are no longer sufficient to work in complexity. For organisms, or meta-organisms, to be relevant, they have to create a new theory of organizing rooted in building systemic learning capacity and the systemic, what I would call, relationality of the system, so it is able to learn and be in relational dialogue with each other.
Now that looks like a philosophical question and it probably is at a top root level, but if you follow that hypothesis all the way through, it changes the very theory and practice of organizing and what the nature of an organism is.
And I think for me, if you're going to be successful in the 21st century, you have to move from a control-oriented system of organizing to a learning-orientated, agent-fied system of organizing, where people are not instructable systems, but learning systems, and building not only individual learning, but building the accumulative learning, the compound learning of the system.
And I would say one thing for me is that this requires very intentional design. It requires intentionally countering the legal structure of organisms, which accumulates centralized risk. A legal board accumulates centralized risk, and because it accumulates centralized risk, it seeks to accumulate the power to manage that centralized risk. And because it seeks to accumulate that power, it thereby accrues the power to manage the building of options to manage that risk. And because it accumulates a centralized power of managing those options, it has to become command-and-instruction orientated to be able to manage all of that.
And this is an informational impossibility. In a complex, entangled world, that endeavor is informationally impossible. I think this is the philosophical endgame of what I'd call industrial, linear command-and-control organizing: it has become informationally impossible. One resolution of this, I would argue, has been building the platform organism as a means to layer out that complexity, thereby moving the theory of control into what I would call algorithmic or parametric theories of control, which allow for particular situational variance, which allow a degree of freedom for the agent in a prescribed system. And that creates some degree of freedom.
But I think the problem is that it's still fundamentally rooted in a theory of control, just with a greater degree of freedom. It doesn't invite, and this is, I think, the tipping point, it doesn't invite learning, and it doesn't invite care. It increases the freedom space of operation, but it doesn't build embodied agency and embodied relationality; it doesn't build embodied theories of care. And when you don't do that, you don't get any of the systemic-level benefits of an evolutionary system. I know this beginning part of the conversation has been very philosophical, but one of the most fascinating moments for me was the DM gathering this year. At the gathering, which we hold every year, we held a pop-up studio where the whole team comes together. We're about 65 of us. We were all in one space together for a whole week. And that space is entirely about exploring what it is that we're learning in the system, compounding and challenging our theories of what we're learning, and being able to hold convergence and divergence.
This is the other thing: when we think about the intelligence of a system, we often think through binary, singular logics of convergence, “a thought”, “an idea”. But actually, resilient intelligences are able to hold quantum decisions; they hold multiple decisions in their mind simultaneously. We often resort to these binary modalities. For a resilient intelligence, you have to be able to accept an idea being appropriate and situationally correct in Halifax and another idea being equally appropriate and situationally correct in Mannheim. They may look like divergent ideas, but they are situationally appropriate, and at a meta-level they may still operate with meta-coherences and other forms of divergences. And they may create pathways for each other in a second horizon of diffusion landscapes.
And why I'm saying all this is that, for the first time, what I experienced was that the collective intelligence of the system was way smarter than anything I could have predicted. And that was a really interesting one. It's taken eight years to get there, right? Eight years where what was fascinating was that the nuance and particularity of the system, and its self-corrective behaviors and self-challenging behaviors, were alive.
And you know, DM has been a hypothesis, we’ve been running a hypothesis, but for the first time it was visible. And I think there is something really interesting about how do you build an organism which is rooted in compound learning at speed. And in order to build that, you also then have to build a new theory of citizenship, a behavioral model, which is rooted into being part of this compound learning institution and relationality.
And thereby you also have to shift your theory of metrics: like, I don't even know, how do you classify the growing collective intelligence of a system? How do you hold that intelligence in the system? What are the metrics of success in that system, in a different way? And then that opens up a whole different theory of what I'd call value. The value is no longer a single-point productivity value; it's a compounding value of the system in a completely different way. So the dynamics of this start to shift. And I find that super fascinating.
Simone Cicero
Yeah, you touched upon a few bits that I was noting down as you were talking, but let me first draw out a few bits of what you shared for our audience. What I captured is that you say risk management as a strategic driver for organizations is no longer workable in a complex world, because, at least in my perception, due to all these long tails and fat tails, you can try to manage risk, but at the end of the day you cannot really manage risk in an interconnected, complex world. So the question moves on: the priority should then be learning, and not the top-down managing of options.
But then, you know, my question was about value. So what is the value of learning and how do you learn? What is this entity that learns? And you spoke about, you know, the system, the meta-organism that learns and the need for a new kind of theory of citizenship, also citizenship of organizations, to be part of something that learns.
My immediate question is, and you also said at the end that we need new metrics to measure the collective intelligence we are building inside the system. And it's not about productivity, so it's not about risk management either. Actually, productivity for me is an even older metric than risk management; the normal way we consider organizations in the 21st century is actually to move from productivity to risk management.
And what you're advocating for is to go beyond that into creating a learning system. So my question for you is, what is the value of learning? And how do you actually learn? Can we get into more of why building a learning system is important and a strategic advantage today?
Indy Johar
Yeah, so I think what you’re trying to do is move agency as close to the problem space as possible. So first space is how do you move agency to be as proximate to the problem space as possible? And then how do you build the innovation and risk response capacity and the risk accountability capacity to be as close to the problem space as possible?
And that means you have to build the understanding of the system to be able to actually see risks, local and global risks, and to act with agency at those moments in time. So why you're building a learning system is not to build the learning system for its own sake. Too often, the default is to try to build the learning capacity of the C-suite: let's make the C-suite smarter because they'll make better decisions. But they can never make good enough decisions, because the informational problem is so large that they can never know enough. So the only option you have is to build the whole system to be a learning agent, so that the agency at every point in the system is operating with a risk management theory and a learning and craft management theory. And this also means the organism has to move towards a theory of craft as a means of organizing.
And why I use the word craft is that it becomes a dialogue, a dialectic dialogue with situations, as opposed to a service design or service execution problem. So the ability to continuously explore and improve in relationship to context becomes a really key, vital component. And that requires a new theory of accountability, but also a new theory of freedom in a system, to be able to operationalize that. So it's really rebuilding the human economy through a craft theory.
And this also means that you have to expand the theory of human contributions to being what I would call multi-intelligence systems. Like it's the emotional intelligence, the kind of cognitive intelligence, the creative intelligence, the kind of system-level intelligence, to kind of being able to operate at multiple computes, right? Multiple computes of care at every level. And that becomes the human contribution space, which I also think is a unique new component to that landscape.
And that, I think, is a different type of machine-human relationship as well. What the relationship of machine systems to human systems is also becomes fundamentally different, because currently I think what we're doing is using machine systems to control, or define the parameters of control of, human systems, rather than machines being ennobling systems for human capacity.
Shruthi Prakash
I was just listening and there are so many tangents I could go down, so I'm choosing which one. What stood out to me is one of the points you made: that, let's say, you're creating options, you are creating situational variance and degrees of freedom. These were some things that you had mentioned, right?
And all of this resides in organizations which have rather archaic systems of control. Meanwhile, you are also trying to compound learning and accelerate this process. And like you said, it has already taken an organization that's really forward-looking, like Dark Matter Labs, eight years to maybe achieve that. So how do you accelerate this process even further, and sort of modify this agenda of organizations?
Indy Johar
I think you have to start with a different theory of value. I think you have to start with a thesis that we have massively underexplored the human economy in our system. If you think that humans are the most impressive general intelligence systems that we have in the world, have we unlocked the full capacity of every human in the system? No. We have made them quasi-instructible machine systems, where we prescribe the action through an idea of prescription, because the model of the organism is a machine. The theoretical model is an input-output machine with humans acting as quasi-machine components in that system. And I think you have to change the theory of the model that people are running. One of the first responses is to say, oh, you make it parametric, so the degrees of freedom are increased, so you allow some evolution.
But I think what this is about is fundamentally unlocking a new human contribution component, recognizing that actually in craft, in multiple intelligences, we have to create a new theory of where value is constructed. And if that value is constructed in a learning sense, in an embodied sense, in an agent-fied sense, as opposed to a control sense, we unlock a whole amount of capacity in the system.
So I think what your purpose is, in terms of unlocking the human component of the economy, is a key thing. The second thing is that complexity will demand this of us. If you're operating in complexity, what we're seeing is many, many organizations starting to accrue risk which is centrally unmanageable. And the systemic failure to be able to manage and operate in volatile, increasingly uncertain and high-risk environments is actually starting to cause a failure of what I would call linear, predictable control or intelligence systems. They're just not going to be efficient. So I'm not even making a moral argument now. I'm just talking about efficacy, the efficacy of a system to operate in complexity and uncertainty. So I think there's a second dimension to it, which is also changing.
This also changes the theory of labor as well, because it challenges the idea of humans as units of labor with prescribed theories of action. It talks about embodied ownership and starts to talk about a new form of relationship. So I think this is a pathway laid out for us to unlock what I think will be a key part of the human economy system, and of what the human economy is. And then there is a more fundamental question of what our technological, institutional and legal frameworks are that support the unlocking and unfurling of this new human economy. That, I think, is the next part of the story.
And you could argue, if you look back at history, whether it's, you know, the Japanese innovation systems like total plant management, they were all about this learning, compounding capacity of a system and building new forms of it. So we've been on this route for a while. We just haven't unleashed it at the full system level. I think this has been an evolution that we've been working towards, especially as we operate in complexity.
And I think this is going to become more and more de rigueur in how we have to organize, and thereby will change things. To put it in one sentence: I think we're going to have to move away from the idea of the chief executive officer to the chief learning officer. So if you wanted a brand word for it, it'd be the chief learning officer.
And the purpose of that learning officer role would be to maximize and ensure the compound learning of the system, as opposed to dictating what the outcomes or outputs of the system would be. It's effectively about the learning capacity of the system, and that would be the key indicator of efficacy. And thereby also a chief citizenship officer: how do we embody citizenship into those behaviors? So I think there's a completely different mindset available, and then a model available on the other side.
Simone Cicero
I have a couple of reflections. I would say I have two areas of questions that are rattling around in my mind. One is, how do we actually translate this into an operational element, into an operational perspective? And on the other side, I have some questions around what it entails in terms of deep structural changes: what type of institutions we build and what type of things we do. So the first question for me would be, when you speak about craft, I perceive it as a bit of a nudge towards doing the work, towards learning through the doing of work.
So not through the managing aspect of work, which is a deep critique of bureaucratic organizations. I have the perception that it's similar to the work we do when we speak about micro-entrepreneurial organizations and pushing people to actually take care of developing value propositions, responding to problems, and creating products and solutions or whatever. So there is this post-bureaucratic thinking, exactly, post-managerial. But then there is another piece of the question that I have, which is: to what extent can we just change the structure of organizations that operate inside the markets that we know, versus having to really rethink the kind of work we do? What is your reflection on this?
Indy Johar
There are so many different layers to that question. I really like your frame about the post-managerial economy, because there are so many different dimensions. One, I think it changes role theory and power theory in organisms, right? And decision models. For example, in a complex system you want the decision space to be driven through what I'd call declarative announcements: I am going to make this decision.
And you want to decentralize and distribute the power for declarative decision models. Not consensus, declarative. You just declare: I'm going to try this out, I've thought about this, this is what I'm going to do.
And that opens up a space for what I would call informal consultation. Then there might be spaces of what I'd call consultative decisions, where you know there are other agents impacted. Then there might be a consensus-based one, where you say, is there a no?
And if there isn't a no, I'm going to go ahead and do it. Or there is a total consent-based model. But this shifts the whole power of decision-making to the declarative. So you've changed the theory. And this means that in a declarative system, could those be machine-assisted declarations? How do you start to compound this new model of decision-making? And I think you're absolutely right to say that the decision space is also a space of craft. It is a space not of managerial decisions, but of decisions in relationship to power and accountability and work. So there's a new bundling of those things, as opposed to the unbundling of those things into different spaces. And that requires these new forms of embodied human intelligence. And it shifts to the background of the organism what I'd call the compound learning frameworks. And that will, I think, almost certainly be machine-assisted. So I think that's one thing: what does it do to decision power and agency in the system?
Secondly, I think it changes all sorts of things: organisms like this have to be operated through theories of rhythm. Coherence in a system like this comes not only through learning, but through the rhythm of the organism. A lot of the work we've been doing with Politics for Tomorrow, for example, has been around 10 by 100, which is a theory of rhythm, of 100-day rhythms. The rhythms become a very key driver in patterning and cohering a system, and letting it cohere and decohere. And coherence and decoherence are both important. This is where we often go wrong: over-coherence is actually a systemic self-destroying engine. So what you have to do is provide coherence and active decoherence as a learning mechanism. And that also requires micro-behaviours: what do you do every day? What do you do every week? How do you shift the grammar and syntax of an organisation?
So is the grammar and syntax of an organization prescriptive? Is it descriptive? Is it explorative? Those are choices. What is the tone of life? If it's prescriptive, and thereby instructive, it's basically top-down, executive. If it's explorative, it's invitational. And I think you have to control and influence those things, the grammar and syntax of an organism, the rhythm of an organism, the roles, what I call the role and power of the organism, in really radical ways.
And you've got to create the social incentives of the system to be rooted in those behaviors as well. Whilst this is, yeah, I think this is also part of a new human-machine capability, and I would almost certainly say the background of this stuff will be machine-assisted in pretty radical formats to be able to build that, and thereby starts to build a different theory of organizing. So I think there are lots of practical components to this, which are rooted in how organisms are coded, I suppose.
Simone Cicero
Can I ask you, because the points that you bring up are quite cross-domain. You're talking about an organization that is based on craft and actually doing the work, an organization based on rhythm, on practices of making knowledge explicit and understanding things relationally. These are things that we to some extent apply to our own organization; we teach customers how to do that. But the question I want to bring up is: why is Dark Matter not a consulting company or a software development company or whatever, but rather engages in missions, which is also something you brought up last time we spoke?
And in these missions, it's kind of entangling itself with local systems and trying to do work which is very much about going beyond the idea of selling products or services to the market, and much more about rethinking models. So, with a follow-up question to come, the first question is: why is Dark Matter on a mission kind of model instead of a traditional business model?
And what does it entail from the perspective of having a sustainable organization, one which doesn't run into massive problems in terms of financial capabilities or workers' security, or anything that comes with running an organization where people can get paid, be happy to do the work, have their dreams, fulfill their needs, and maybe engage entrepreneurially?
What are the implications of not just changing the way we work, but also structurally changing the things we work on, in a way that stresses the traditional frames of what a business model is, what a service price is, what value in the market is?
Indy Johar
Yeah, okay, there’s loads of layers to that conversation and I think there’s at least three questions. So I’m going to try to… No, no, no, it’s fine.
Simone Cicero
Yeah, sorry about that.
Indy Johar
So, one: missions. The first-order answer is that one of the things we recognize in a complex, entangled world is that I don't believe you can consult change. Consulting change is actually impossible, because in order to consult change, you have to prescribe the idea of what the change is in order to create a contract, a determinate, fixed-boundary contract as to what that theory of change looks like. And thereby most consulting, in my problematized way of putting it, ends up becoming performative, becomes theater. Because actually, the theory of change requires an entanglement, or, to use the phrase you've sometimes used, skin in the game. You need to be a partner in that theory of change. So that's one thing. Behind missions, even more importantly, is the question of what the relationship of change is. And one of the big things for Dark Matter has been that we want to partner for change.
And literally, that's how we built our economy. Our whole model has been built around partnering with organizations for that change model, as opposed to consulting. Because we think consulting actually creates an asymmetry of power, and the asymmetry of power reduces the outcome for the organization that wants that change. So even if a corporate comes to you and says, we want to do this, once you introduce asymmetry of power, what you end up with is a listening mechanism that's broken.
So it's difficult for the organization to hear what's required, or to allocate the resources required, because it all feels like a game to operationalize the contract in a very particular way. Whereas actually, I think we need a new theory of partnering for outcomes in that model. So that's one thing. This is why, for us, missions are a means to say we're not there to transact. Our top-line issue is not the consulting revenue that we bring in. We don't even call it consulting revenue. It's partnership frameworks.
Second is that we're intentional about the markets or the worlds that we're creating. So our missions are not just language; they're intentional intervention points as part of a theory of effect or change in those landscapes. Now, the more difficult problem that I also want to put here is that more and more I worry that missions as a language aren't sufficient.
And why I say that is that in a complex, entangled world, any form of single-point optimization is problematic. And this may be the big paradigm leap: I think missions might be great for going to the moon, but they're terrible for the transition that we face. It is not sufficient for Genoa to become decarbonized, because it could become decarbonized, become a net-zero city, by massively increasing inequality, by not increasing economic justice, by massively increasing environmental damage around the planet, and other things. So the question is, does any single-point optimization create harm? And I worry that our mission language is in fact still a single-point optimization problem. I think the systemic challenge we face is that we need to do multi-point optimizations in an embodied sense. And this is where I think there is a systemic problem in the coding of our language.
And this is something we're looking at. So I bring this up as part of a learning: we think there's a problem with mission theory, because it is systemically creating single-point optimizations and thereby codes a goal which actually drives systemic externalities.
Even if you code other checks and balances into the system, the single-point focus of the goal creates that. And I don't think we yet understand how to do multi-point optimization. How do you build systems which are, you know... profit is a single-point optimization system. Profit can be a proof-of-value function, but if you optimize towards profit, you can be self-destructive: destructive to your organism in the long term, destructive to the ecosystems, the social, environmental and legal substrates, that you're reliant on. So how do you optimize? Currently, our theory of multi-point optimization has not been optimization but regulation alongside one space of optimization. So we regulate the other systems, i.e. legally control them. But it's not possible to regulate those issues, because they all require not just control or conservation, but renewal. So I think there's something really interesting in the mission question, and we're in the middle of that question.
And we’re organized around a whole set of these arcs, these trajectories, and a whole set of these laboratories. And that creates what I would call the matrix that’s at the heart of DM. And that matrix is what thereby creates the capacity for compound learning.
So it's the multiplicity of missions and the multiplicity of labs, which are things like property and beyond, capital systems, many-to-many contracting, and beyond the rules. These compounding functions are then the intersection of these missions and labs. And that's how we've designed ourselves. So that's step two. And there was a final question, which was...?
Simone Cicero
Yeah, it was more on the implications in terms of organizational challenges and getting everybody to be paid well, and that kind of stuff.
Indy Johar
Yeah. So, yeah, I think this is a really interesting point. I do worry that we often think about business models from the ideal framework of stasis.
Stable equilibrium, stable inputs, stable outputs. And that's the ideal goal. In a complex, uncertain system, an uncertain world, maybe this theory of stable input, stable output is an illusion. So one of the things we've done in DM is not to talk about trying to achieve that goal, but to recognize that DM is operating in seasons: season one, season two, season three, season four.
And what you’re trying to create is these transition steps. And the business model or the value model for season one might be different from the value model for season two, might be quite different from the value model for season three, and might be very different from season four.
And the second thing is that you're trying to keep the capabilities of the system growing and the outcome value growing, and the input value continuing to pour in in some fashion. So there's a kind of dynamic balance that you're holding, as opposed to a static, predictable balance. So the question is, how do you build the capacity to dynamically hold those balances in different formats?
And I think that's a really different class of business models, and a different class of value models, that need to be constructed in those frameworks. And, I don't know... what I do know is that there's a different class of behavior here, and too many of the conversations I get involved in end up with people linearly trying to build these stable business models. If you accept instability as the reality, then the question becomes: how do you have these evolutionary business models, as opposed to static or supposedly optimal business models? These systemic, evolutionary business models account for value in different formats. And that all sits as part of, yeah, the kind of question space I'm really holding onto.
Shruthi Prakash
When I was listening to you, I remembered another podcast that we had with the head of strategic innovation at UNDP. A point that they continuously reiterated was that they moved from a project approach to a portfolio approach. And that sounded similar to the single-point optimization versus, let's say, multi-point optimization. One of the examples they quoted was very interesting. They had studied about 160-odd countries and noticed that in half of them, for every 1% increase in GDP, 64,000-odd people were falling below the poverty line.
So the fact that that kind of correlation can be made, and therefore that we have to look at a multi-pronged approach rather than a single approach, was really interesting, and I thought it related to what you said as well. What I wanted to ask you about was also, yeah.
Indy Johar
Would it be possible, just on that point, to add a little bit of clarification? One is that I think a portfolio of options, if they're learning options, is multiple points of intervention. I suppose what I'm talking about is every point of intervention itself being multi-optimizing.
So it's the point of intervention being multi-optimizing, not just the portfolio as multiple points of intervention.
And so it's these two layered problems that I think are really, really critical. And then the second dimension I would raise, and this is a challenge of portfolio theory, is that portfolio theory has systemically been a function of what I would call the boardroom. It is: how does a single identity understand the risks that are allocated to that singular identity, and how does it create portfolio options relative to that singular identity?
The problem in an entangled world, or a city or a place, is that they're not singular identities. There is a multitude of identities. And for that multitude of identities, portfolio construction has to be built not through the C-suite, but through a cumulative, relational idea of portfolio building, built in relationship rather than centralized. So I just wanted to nuance some of that, because there's been a lot of stuff on portfolios and everyone's talking about them. But there's a really particular problem space here: remember, portfolio theories largely come from finance; they come from capital allocation and risk management theory. Whereas if you're looking at a city or a place, a place is full of a multitude of actors with a multitude of identities, with differential forms of risk appetite and capabilities. So the construction of a portfolio there has to be a different theory of portfolio construction. Too often, I think, portfolio construction has still been massively a tool of centralization: centralized risk management through a portfolio, dealing with complexity rather than multitudes. And I think there's something really interesting there. Sorry.
Shruthi Prakash
No, please. I'm glad you clarified; I think these points need to be touched upon as well. So what I was going to ask you was, now that we've gotten an idea of this, how do you create this form of citizenship inside organizations, and how do you navigate all of it, while also managing, let's say, the politicization of the workplace, or managing how you bring your whole self to work, and so on? How do you go about that?
Indy Johar
Yeah, I think it's a really difficult question, Shruthi, actually, because the problem is that we have weaponized and instrumentalized labor, so the reactionary stance of employees, of being a good 21st-century employee, is to act to prevent the extraction capacities of an organism. To be a good employee of the 21st century, you have to fight the machine which is driving extraction. And that is the activist position right now: I'm going to defend my time, I'm going to defend my hours, I'm going to defend my contribution. So it's all based on a theory of defensibility against the extraction of the system.
So one of the big things is that most people who are really thinking about this space come into DM from one of two angles. One, defensibility; that's what they've been coded to understand. And two, they come in from consensus or consent. These are the two dominant cultural narratives, as I call them, that you fall into. One, they come in saying, I'm going to defend myself. And two, they come in asking, how do you make decisions? Well, all decisions should be consensus-orientated.
And actually, both are problematic. One is problematic because, if you default to consensus, Simone will know his context far better than I'll ever know his context. So if he and I are making a consensus-based decision, the decision will be suboptimal. We have to move our theory of decisions outside theories of consensus to be able to deal with complexity and situationality.
So that’s one thing that becomes really problematic. And the other thing is if you come from the theory of defensibility, in a complex system, what you create is asymmetries in the system, where some people, 30%, 20%, will end up doing vast amounts of the work. Because in every organism, there is what I’d call known work, known unknown work. And let’s use the Donald Rumsfeld strategy of unknown unknown work, which just happens to turn out.
So you need relationships to be able to not only deal with the predictable, but you need relationships to deal with the unpredictable and the unknown. And theories of labor are all rooted in the theories of the known known, right? And actually, we don’t have a theory of labor or contribution rooted in the unknown. So what ends up happening is a few people end up holding the unknown.
But if you want to create equitable organisms, equitable developmental organisms, you have to be able to create the relationships to actually deal with the whole spectrum. And I think that's where citizenship becomes really critical: in recoding ourselves outside the lock-ins of labor and anti-labor as a theory of organizing, into something that transcends that theory. I think this is a new dimension. One of the big problems is that we're locked into a labor and anti-labor theory, as opposed to transcending that problem into a new space. And that requires us to deal with a new thesis of work and an equitable relationship with the work, and thereby a leadership, an invitation of leadership, in every act of work. An invitation of leadership in every act of work means that you are inviting care and extraordinary responsibility in every relationship, as opposed to privileging certain aspects of the work of the organism over other work. And, you know, our pay formula does not differentiate between, I don't know, coders and financiers and designers. It doesn't differentiate between admin and non-admin.
We don't do that. We think every space is a space of leadership, and every space requires human intelligence and human care. And if everything is done at an extraordinary level, the cumulative effects are extraordinary. So I think we have to challenge both these frameworks. And so citizenship means, I think, actually taking that challenge on. If I want to be a citizen of DM, I have to contribute to the learning of DM. I have to contribute to being a declarative decision maker, I have to contribute to being part of the rhythm of the organism, I have to contribute to making myself vulnerable in sharing my own learning and my own fragilities in the system, and to be able to dynamically operate with those things. I have to be able to operationalise on the basis of recognising the impacts of my decisions relative to my colleagues, and be able to be proactive about that in different formats. So it requires a new theory of responsibility, which is often missing in these codes.
We often talk about roles, but actually this is a new theory of actually being a responsible agent in the system, which is where I would class citizenship in these frameworks.
Simone Cicero
Thank you so much. I mean, this conversation around citizenship is very interesting, especially as we look into citizenship as an alternative to labor for organizations; it strikes a few chords. My first reaction is about being a citizen, or more like a dynamic thought about becoming a citizen.
And if I think about the concept of a city, the context and idea of citizenship comes from the very early days of cities, like the polis. And it was intended very differently than it is today. Citizenship today, if you think about being a citizen of Italy, for example, comes with rights more than with duties you have to perform.
So it is predicated on industrial ideas and concepts that a state has a budget and there’s services and you can, you know, as a citizen in a modern institution, so 21st century nation, you basically start with a lot of rights and very few obligations, right?
Instead, if I look at citizenship inside organizations in the way you are actually practicing it, it resonates much more with citizenship in the age of, I don't know, ancient Rome or ancient Athens, right? Where, at some point there was a war, and you had to pick up your sword and go to war for your city. You have a lot of duties that come with this citizenship. So my question for you is, how do we recast this idea of citizenship in organizations that require us to really go into the unknown, as you said? What does it mean to be a citizen of an organization instead of being a worker or an employee?
Indy Johar
Yeah, and I think this is where people have to feel they are part of it. It is theirs. It is their fate, a community of fate bound into their futures. So you have to be able to operate through that theory. You have to be able to create a landscape of decision models which recognize that model without falling back into consensus. You have to be able to create new forms of, like I said, pay structures; you have to be able to create the institutional logics to work through that reality. You have to onboard people into that reality. So it requires a whole plethora of frameworks, but I increasingly can't see a way of operating in either the complexity or the uncertainty without actually building these capabilities.
Simone Cicero
And as a citizen of an organization like this, what shouldn't you expect? What are the things you shouldn't expect? I'm referring to the beliefs of employment or labor. What are the beliefs that you have to let go of to become a citizen of an organization that engages with complexity fully, in this craft-based manner and in this entangled manner, as you seem to point out?
Indy Johar
I don't know. It's a really good question. I suppose what I can say is that citizenship is an invitation for leadership. It's an invitation for everyone to be a leader. And I think that's the root of it: any part of the system should be led, should be explorative, should be discovery orientated. No part of the system is just routine.
So what you're building is this unfurling capacity of the system. And this unfurling capacity becomes a really key part of being in this. I think what it also demands is an embodiment of being. The detachment of employment also becomes really important here: there's an emotional detachment in our theory of labor where we can emotionally detach and execute. And I think this no longer allows for emotional detachment. The dehumanization of that detached theory of labor was a functional necessity, perhaps, for the industrial economy.
But perhaps what this requires is not emotional detachment, but actually bringing a completeness of your beingness to the table. It does require a form of being able to hold divergences. So, you know, what may be appropriate in your context may not be appropriate in mine, and vice versa. So being able to cognitively hold divergence, which I think is a much bigger task than most people think, becomes really critical.
Being able to operate multiple computes. One of the things I often find in rooms or conversations is people go, it's either this or it's that. And actually what you find is that the smartest move might be at the intersection of three simultaneous computes. It's not A or B; it's A and B and C, and the intersection of those that constructs that compute. So it requires a different theory of holding tolerance for divergence and complexities in different formats. It requires making yourself more legible; I think the legibility of action and the reflectiveness of action become a key combination. It requires quite high levels of auto-reflective behavior, and the inner voice of a system, your ability to hear yourself and hear your actions, needs to be very strong. These are capabilities that are required. So there's a whole bunch of stuff that I think is really critical in building an organism of that type.
And I also think this is fundamentally rooted in a different theory of machine-learned bureaucracy as well. I think there's a new machine-learned component at the back end of this that actually accelerates these capabilities in different formats as well.
Simone Cicero
My curiosity at the moment, as we edge towards the end of the conversation for today, is really about how, how the hell, we build a way for people to start to engage with this theory of organizing. How do we onboard them? How do we ramp them into this? What are the avenues by which we can get talented people engaging with organizing in the way you are talking about, which is challenging not just from the perspective of being entrepreneurial and leaderful, but also from the perspective of changing the work we do?
Because, you know, I don't think we can have the type of organization you're talking about in a disembodied manner. You cannot just have a post-bureaucratic organization of the kind you're talking about doing, I don't know, digital marketing.
I don't think so, because the value avenues you can get from that work are too narrow. It's going to be just money, and it's not going to work; at least it's not going to work over the long term. So the question is really, how do we make people reflect and shift their focus from the money-making machine into looking for different avenues of value, and being inside the work at the community level, at the environmental level, at the city level, in this context where the difference can really be made when we think about transitioning into something new?
Indy Johar
Yeah, I mean, again, I want to be careful. I don't have a problem with the idea of profit. Profit for me is a signal function; it's a signal function that the system is creating surplus. The problem is that most of our theory of profit is largely rooted in extractionism. So the profit signal function is a false signal, because it's effectively rooted in extracting from the system to be able to create that profit. The signal function is mal-signalled: the signal it's giving is not representative of actual system-level generational value. So one of the key things for me is how you take these system-level, multi-optimization frameworks: how do you optimize, yes, for price and for profit, but also for system flourishing simultaneously? That's really critical.
I do think, in terms of making this case, as you rightly say, that this is rooted in a new theory of politics. Italy was fantastic in terms of leading the kind of labor conversations. And I wonder whether this is a new theory beyond labor, which is effectively opening up conversations about new freedoms, freedoms to be human, and a new freedom economy rooted in being radically human. What does a radically human economy look like? What does it require from us? And how does this radically human economy work with the new theory of machines and machine learning capabilities to unlock a whole new class of value? So in my view this is a paradigm, and a politics of a paradigm, because I think we can also recognize that people are enslaved in the theory of labor, and that enslavement is felt by people.
And actually, unless your wages are growing, whatever, 10, 15% a year, you very quickly become enslaved in the system. So the question is, how do we start to build a new political economy beyond labor? How do we unlock that and talk about it? I think that’s a key part of the story. Then the second part is how do you build what I would call the new bureaucratic capabilities to be able to build towards a human economy? That, I think, is also a key part of the story. Most of our bureaucratic systems are rooted in theories of control, not ennoblement. We haven’t built the back end of a machine learning system which is rooted in building the learning capacity of thousands of people. What does it mean to build those sorts of economies, where the back end of organizations is rooted in asking better questions, in all sorts of learning capacities in the system? We haven’t built that. And then I think it also thereby opens up a new thesis of where value lies. I think value lies in craft and multi-point optimising, and in building economies which are completely different from the industrial economies of production that we see today. So for me, this changes theories of labour, changes theories of bureaucracy, and changes theories of value as well. And if you look at the scale of the challenge that we face, I think this is the future that we’re having to deal with.
So, you know, and I think this is a political theory as well.
Simone Cicero
Yeah, I mean, I was wondering about this in the background a little bit, especially on that question. But I don’t think we want to ask another question; rather, maybe we can underline some of the key questions we are left with as we close this conversation. For me it’s mostly: where is this political discourse going to happen? In what place, in what layer, so that we can change the forcing functions on our organizing? Because now it’s just profit, it’s just being on the market.
How do we change the forces that we apply to our organizations? Where is this political conversation going to happen, so that the underlying organizing systems change their priorities and the humans inside these systems are supported with this freedom to care, as you often say?
Indy Johar
Simone, can I just, with regards to that: I think the forcing function is no longer morality. So often this conversation has been a moral conversation. I think this is an efficacy conversation about operating in complexity and uncertainty. I think this is the most efficacious way of organizing resources, capabilities and systems in complexity and in uncertainty, and I think it also creates a new thesis of value formation which is completely appropriate in those contexts. So I’m no longer trying to argue this from a moral landscape of rights.
Simone Cicero
No, but I’m not making a moral point. What I’m saying is that so far, let’s say, the operating system of organizing has been predicated on this forcing function: market success. So when you say it’s not going to be a moral thing but rather an efficacy question, for me it means that most likely the forcing functions are going to switch as a consequence of catastrophic events.
Indy Johar
So, like, okay, Italy, in terms of the Marshall Plan, was one of the most interesting landscapes anywhere in the world, right? There was a massive transformation of Italy as a result of the Marshall Plan and how the Marshall Plan was coded. So crisis will open up opportunities for new negotiations of this story, but I also think, simultaneously, there’s going to be a whole shadow of organisms that are going to start to grow into this new landscape of value as well.
And I think this is because there’s just so much resilience in these sorts of ways of organizing, in those frameworks. So I think there’ll be a crisis-driven approach, and I think you’re seeing organisms like this starting to become kind of powerful systems. And I think what this also does, and I haven’t mentioned this before, is change the theory of overhead. Typically in an organization you say this much is overhead, this much is whatever, X, Y. I think this changes the theory of overhead in these organisms as well. The percentage of what I would call learning function activity is much larger, and the management function is much lower, but the efficacy of your talent is much higher. Right?
So the proportionality that has been coded into our brains is all changing. And I think this is what people aren’t getting. If you’re building an organism like this, the background numbers are fundamentally different. I think your overheads are much higher, but the efficacy of your base organism value is much higher as a result.
And the one-to-one ratio changes: one person working at the front of the organism is probably as valuable as three people, but your overheads are much higher. So there’s a completely different thesis at the back end of this, which I don’t think we yet understand.
Simone Cicero
I agree. Shruthi.
Shruthi Prakash
Thank you, Indy, for the discussion up until now. What I wanted to touch upon as we close this podcast is what we call breadcrumbs on our podcast. Any suggestions for books, movies or other recommendations that our listeners can gain from?
Indy Johar
There are thousands of them. The one for this moment in time is a relatively recent book by Katharina Pistor, The Code of Capital. One, I think it’s a brilliant book and everyone should read it. I would invite people to read how we code capital for control, and to think about what coding capital for learning could look like and how we go beyond that. And two, she’s the most incredibly kind, generous intellect that I’ve ever met.
So I think both those things mean that it’s really great to read her work and to advocate for her, because to have someone so brilliant be so kind and humble is a really, really genius contribution to the world.
Simone Cicero
Thank you so much. I also want to take the chance to refer our listeners to Philip McKenzie’s Deep Dive podcast, from which we borrow this idea of the breadcrumbs. I really encourage listeners to check that archive, because I feel there are a lot of conversations with you there that they can also dive into and learn from. So kudos.
And yeah, it was an amazing conversation. I feel like we have lots to experiment with and explore, hopefully together in the future. Thank you so much for your time, Indy. It was great to have you, and I hope you also enjoyed the conversation.
Indy Johar
I loved it, and so I want to thank you. I hardly ever talk about these sorts of things with anyone. Typically most of my podcast conversations are outward facing, facing into the world, as opposed to this particular type of conversation about how we organize for this. So I really welcome it, and I’m deeply privileged to have the space to have these sorts of conversations with you, Simone, and with all the great work that you and Shruthi are doing in terms of advancing these things. So I thank you for holding the space, and thank you for holding the knowledge as this whole ecosystem grows.
I want to thank you deeply both of you.
Simone Cicero
Thank you so much. Shruthi, thank you for your time as well.
Shruthi Prakash
Thank you. Thank you. Thanks, Simone. Thank you, Indy, for joining. It was great having you.
Simone Cicero
And for our listeners, of course, you can check our website: go to boundaryless.io/resources/podcast and you will find this conversation with Indy, with the full transcript, the references and some other elements. And until we speak again, please remember to think boundaryless.
Any form of single point optimization is problematic. And this may be the big paradigm leap: I think missions might be great for going to the moon, but they’re terrible for the transition that we face. So it is not sufficient for Genoa to become decarbonized, to become a net zero city, by massively increasing inequality, failing to increase justice, and massively increasing environmental damage around the planet.
And we think there’s a problem with the mission theory because it is systemically creating single point optimizations and thereby codes a goal which actually drives systemic externalities.