#96 – Building Calm Technology and Organizations with Amber Case

BOUNDARYLESS CONVERSATIONS PODCAST - EPISODE 96


We rarely (if ever) get to host a cyborg anthropologist on our podcast, so we are thrilled to have Amber Case join us as a special guest for this episode. 

 

Amber, who works at the intersection of humans and technology, shares her journey into envisioning what calm technology is. Calm Technology is a design philosophy that advocates for technology that integrates into our lives, enhancing our ability to focus rather than distracting us. She takes us through everyday objects that have been designed this way and leaves us questioning what “transparent technology” really means.

 

We stretch the boundaries to include a conversation on organizations and governance: a leading thread is to keep asking questions and not drink the kool-aid of truthiness in how we think about tech and organizing.

 

Tune in to this provocation that leaps into the future of technology, its impact on society, and how you can navigate these landscapes with mindful intent.

 

 

The YouTube video for this podcast is linked here.

Podcast Notes

This podcast started on a personal note – Amber shared her experience of being a child of engineers and of having early exposure to the natural world, both of which she believes set the precedent for what she does today.

 

She takes us through a wealth of experience gained across a multifaceted career – she has been a renowned book author, TED speaker, serial startup founder, advisor, co-founder of DAOs, Research Director at the Metagovernance Project, and most recently the founder of the Calm Tech Institute.

Amber advocates for a future where technology “gets out of your way and lets you live your life.” This episode will elicit new questions for you and your career.

 

Parts of this episode were recorded as audio only.

 

Key highlights

  • Critical intersection between human behavior and technology, emphasizing the need for a deeper understanding of our digital coexistence.
  • Calm Technology as an essential approach to creating designs that respect human attention without overwhelming users.
  • Building tools that help users focus on the task and not the tool.
  • Complexities of governance in technological and organizational contexts, with a focus on the potentials and pitfalls of decentralized autonomous organizations (DAOs).
  • Ways to shift towards innovations that serve humanity and contribute to a more equitable future.
  • Developing a questioning mindset for driving meaningful technological advancements.

 

This podcast is also available on Apple Podcasts, Spotify, Google Podcasts, SoundCloud, and other podcast streaming platforms.

 

Topics (chapters):

00:00 Building Calm Technology and Organizations – Intro

02:09 The start of a cyborg anthropologist

11:19 What happens as technology fades into the background / disappears?

19:55 Technology as facilitator vs. distractor

30:35 How to limit and grow technology

48:23 Organizations shaping themselves to build better technology

59:07 Breadcrumbs and Suggestions

 


Recorded on 22nd February 2024.

 

Get in touch with Boundaryless:

Find out more about the show and the research at Boundaryless at https://boundaryless.io/resources/podcast

Transcript

Simone Cicero 

Hello everybody and welcome back to the Boundaryless Conversations Podcast. On this podcast, we meet with pioneers, thinkers and doers, and we talk about the future of business models, organizations, markets and society in our rapidly changing world. Today I’m joined again by my co-host from Jakarta. Shruthi, you’re back in Jakarta.

 

Shruthi Prakash 

Yeah, I’m back. Thank you. Hi everyone.

 

Simone Cicero

Thank you so much for joining us. It’s a very late conversation for you today. So thank you again for your extra effort. Today we are also joined by Amber Case. I’m always hesitant, let’s say, with writing these short intros, but today we are lucky to have a guest who has a Wikipedia page. So I’ll let the Wikipedia page talk, and the page says she’s a cyborg anthropologist, a designer, and public speaker who explores the interaction between humans and technology.

 

Amber, besides that, is a renowned book author, TED speaker, serial startup founder, advisor, and co-founder of DAOs. She’s also Research Director at the Metagovernance Project and most recently founder of the Calm Tech Institute. Hello, Amber. It’s so nice to have you with us today.

 

Amber Case 

Hi, nice to see you. It’s great to be able to talk with you again. I think we’ve met quite a bit ago and really connected and got along. So it’ll be fun to look back on that and see what’s changed and talk a little bit about the future.

 

Simone Cicero 

Yeah, it’s been more than 10 years, because I met Amber for the first time in 2011, if I’m not wrong. She was a speaker at a super interesting Italian conference called Frontiers of Interaction – props to the organizers. You were talking about interaction design and already speaking about a lot of the topics that ended up being part of your work and your first book, if I’m not wrong, or maybe the second, Calm Technology: Principles and Patterns for Non-Intrusive Design, which came out in 2015. So I’m sure we’re going to talk about these topics today. But maybe, Amber, you can start by giving us a rough idea of your trajectory as an independent researcher and entrepreneur – moving from interaction design into governance, organizational settings, you know, startups, whatever. So maybe, yeah, that could be an initial framing that can help us move forward in the conversation.

 

Amber Case

Yeah, so I grew up in Denver, Colorado. My parents were broadcast engineers, so they put television on the air. And my grandpa was the head of the mathematics department at the University of Utah. So he was really interested in early computer graphics and helped teach the people that brought that into existence. So this goes from the 60s and 70s, working on big computers the size of a gymnasium, to growing up with an Atari computer when I was a kid, yet reading the 1960 World Book Encyclopedia, having my parents keep little laboratories at the house where they did circuit board design, and learning how to solder as soon as I had hand-eye coordination. So everything that I grew up with was partially technology and then a lot of being outside in the backyard, catching bugs and watching trees grow.

 

And then, when I was old enough, reading things like Plato and Aristotle and all of the kind of human side, and noticing huge differences in all of these pieces of information. First, when I read Plato for the first time – I think I was 10 – I thought it was a contemporary author, or at least I thought that Plato was around in 1958, which is when the book I was reading was translated. I did not understand that this person had lived a long time ago and had taken very long walks to talk with his friends about governance and politics and rhetoric and all these different things, and that there was a long lineage from Socrates to Plato to Aristotle, and then Aristotle being the advisor to Alexander the Great, the teacher to Alexander the Great – that you could have this huge history of empire.

 

And then reading the World Book Encyclopedia and comparing things to this 1960 edition, everything also seemed the same. You had redwood trees, and colleges and universities seemed to be pretty much the same. The University of Chicago was a little smaller in real life. But the big difference was that when I got to the article called Computer, the computer was this huge device, and there were a lot of people working on it, also called computers, even though they were humans, doing lots and lots of math. My grandpa was also a computer, because he did the firing tables for the Mercury-Redstone rocket, which was the first rocket to go to space. And he had to do it by hand or, you know, alongside a computer – there was so much manual work.

 

Amber Case

But when I looked at the entry for computer, the computer was big. And when I looked at the computer in my house, only 20 years later, that computer was smaller. And that was the first time when I thought: there are some human universals. There are some things that don’t change, but the technology is unstable and it changes in some way. I wonder, in 20, 30 years, are we going to have wristwatches with information on them? Are we going to have the mobile phone?

 

So when the iPhone came out in 2007, I was getting my degree in anthropology, specifically focusing on cyborg anthropology. And I got to write my thesis on the mobile phone right when it came out, specifically the iPhone, because I wanted to learn about how that touchscreen versus the physical buttons might affect how people interacted and how that would change the world. And so this was this weird thing of going from being really into math and science and engineering, and just having those be my first language alongside music. My dad built speakers, my mom played the piano. Between the two of them, it was very musical. And then my mom’s best friend, who is a mathematician and a math professor, said, why don’t you get a degree in something that’s hard for you? What’s the hardest thing for you? And for me, it was social studies.

 

And so I deliberately went to a private liberal arts college to learn how to think. Instead of taking a job – you know, the job offer I had was at Lockheed Martin, which makes all sorts of advanced technologies. I had been a part of their program in high school, and they said, you don’t need a college degree. You can just work for us out of high school. And so it was very exciting, until my mom’s best friend said, you know, you should really try to challenge yourself and do something that’s hard.

 

And so I took the anthropology degree, learned about cyborg anthropology, and that’s where I kind of got my start. I think it was really about looking at the human factors involved in using technology, and also looking at technology as a continuum that’s not just about this one thing that we see right now, but about 1,000; 2,000; 5,000 years of history – whether that’s the governance of trees, or the alongside technology of a water wheel or a river – and to not think of what’s now as the whole thing. And so, having a bit of a history, being able to go back and see what software was there in the 90s – that’s when I was writing my thesis – I discovered these papers from Mark Weiser and John Seely Brown from the 90s at Xerox PARC, where a lot of people were thinking about what’s going to happen when technology is cheap instead of expensive: then our attention will be the most expensive thing, and how that technology works with or against our attention will make or break that technology. And when I saw those papers, they read just like a paper today, and I cried, because I thought I had gotten a little bit closer to the truth.

 

And then I discovered that Mark Weiser had died when he was 46 and he wasn’t around anymore. And it was just like rediscovering Plato all over again. I hoped to meet this Plato person who I thought was a real person that was born in 1958. This was not correct. Here’s yet another person who seemed to have something that was classic and wasn’t around. And so I waited a very, very long time in my view, thinking that people would discover Mark Weiser too.

 

And barely anybody wrote about it. And so I started giving these talks – which were somewhat well received in the beginning, but not as well received as they are now – about how our attention is going to be a big mess. How do we make things that work with our attention rather than against it, and try to do a good job? And so that’s kind of how I got here and why I got interested in this specific subject, trying to bring it to the era in which I think it’s most needed.

 

Simone Cicero 

I mean, I’m so fascinated by your work, I must admit. When we spoke for the first time, when we met for the first time, I remember that you had this quote – maybe it was your own words or maybe you were quoting someone else, I don’t remember – that the best technologies are the ones that tend to disappear over time, right? Which makes me very thoughtful about what happens as technology disappears.

 

So what does it mean to disappear? Does it disappear by creating more space for us, or maybe it disappears by becoming ingrained in the background, in the scaffold that we move through? And so, in a way, by disappearing, it’s influencing our lives even more than a visible technology would.

 

And this reflection is very much connected with, I think, the general question concerning technology that everybody has been studying, every philosopher has been studying, for ages now. If I think about Heidegger’s work, for example, in explaining that at some point technology tends to become a force in itself, right, that we cannot even control – it kind of develops on its own as the spirit of modernity. So what is your reflection, after all these years, on what really happens as technology disappears and maybe gets into the background?

 

Amber Case 

Sure. That quote is from Mark Weiser, and it has been woefully misinterpreted throughout the ages – the ages between now and 1990 or 1989. And by that I mean, when people think of the term disappear, they think: here is an invisible system that knows what you need before you need it, knows your preferences, and is doing stuff for you in the background. And that’s not what Mark Weiser meant.

 

What he meant is that a good technology is useful when it’s invisible. By invisible, we mean that you focus on the task and not the tool. And so both the terms invisible and calm are kind of misnomers. We think of calm as being a yoga room with a nice yoga mat and a pretty view, and you are wearing expensive yoga clothes and there’s nothing there. And also we think of invisible as like this smart thing in the background. But really what it means is: when I’m using my glasses, I don’t notice the glasses, I just look through them. When I’m wearing the glasses – and we’re all wearing glasses here – I am focusing on the view, not the glasses. The glasses are a pass-through.

 

When I look out the window, unless the window is dirty, I’m focusing on the view outside the window, and it’s giving me something special. It’s giving me the view of the outside without me having to be outside. It’s pretty magical technology. It’s a pass-through. When I’m writing, even if I have a fluffy pen that’s really extra, I’m focusing on the task and not the tool. I’m focusing on writing.

 

When I’m in a car, I’m not focusing on everything in the car. I’m focusing on the peripheral attention that informs me about what I need to do. Actually, being in a car and driving can be calm, not because you have no information, but because you have lots of information and it allows your brain to make calculations. And so calming means putting things into the periphery so that you can be attuned to more without being overburdened with your primary attention. And that’s a version of calm. And then pass-through, which I think is a better phrase to use than invisible. I think invisible got everybody down the wrong route. Pass-through is something that might take you a really long time to learn how to use.

 

It takes you a long time, many years, to learn how to use a pencil or a pen, to learn to write in whatever language you write in, to learn how to speak in that language, to learn how to ride a bike.

 

Once you have learned it, you can just do it. You don’t have to reconfigure your pencil every time you use it. You don’t have to reconfigure your glasses every time you use them. There are affordances on those glasses that the optometrists can set that maybe they get bent a little bit. This is a pair of vintage glasses, so it had to be adjusted a little bit, and it gets conformed to your face. It’s not smart. It’s not trying to conform itself to your face perfectly. Same in a car. It’s not perfect when you sit in the car, but you have these levers that allow you to adjust the seat to your needs. You can move the mirror to your needs. It’s not trying to do this automatically for you, because if it did, it’d probably get it wrong. 

 

And I think that’s a really important thing: you can adjust it to yourself. It takes you a long time to learn. But then you get familiar with something. And once you’re familiar, it disappears. And I think one of the most invisible technologies is electricity. We don’t think about the light switch on the wall in our environment. But when we walk into a room, we use it. We look for it. We find it. We tap it. We switch it. And then here is a light. And very often, the lights that are smarter, that detect our presence or movement and, if we don’t move enough, turn themselves off, are really annoying, because they’ve made the decision for us. The governance is happening outside of us, and we are not the ones steering or in control. Again, the car is doing lots and lots of stuff for you. The motor is going, the oil is operating properly. There’s coolant, there’s pistons, there’s brake pads, there’s vibration handling, there’s suspension – there’s all sorts of little metastable systems in a car, just like we have in our body.

 

Breathing, blood, pulse – all of these things are happening behind the scenes so that we can make decisions and steer ourselves. And it’s the same in a car. And so I think when people talk about doing something in the background – you know, electricity, HVAC, heating and air conditioning – these are all things that are done behind the scenes, set and created by control systems engineers and civil engineers who are making systems that work really well within a certain boundary level, to a certain threshold, that can withstand a decent amount of disturbances and then go back to that kind of stable mean, just like an airplane wing can withstand a certain amount of turbulence. And we might get scared when it gets a little bumpy, but it’s been designed for that.
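To make the control-systems point concrete, here is a minimal sketch – our illustration, not anything from the episode, with made-up set point and band values – of a deadband controller that absorbs small disturbances and only steers when a threshold is crossed:

```python
# Toy deadband (hysteresis) controller, sketching the "metastable" behavior
# described above. All values and names are illustrative assumptions.

def thermostat_step(reading: float, setpoint: float, band: float) -> str:
    """Do nothing while the reading stays within +/- band of the set point,
    so small disturbances are absorbed rather than fought."""
    if reading < setpoint - band:
        return "heat"   # steer the system back up
    if reading > setpoint + band:
        return "cool"   # steer the system back down
    return "idle"       # inside the threshold: stay calm, change nothing

# Small turbulence around a 21 C set point is absorbed; a real excursion acts.
for temp in [21.0, 21.6, 20.5, 24.0]:
    print(temp, "->", thermostat_step(temp, setpoint=21.0, band=1.0))
```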

 

And then we look at a lot of the systems we build today that crumble under any sort of disturbance because they’re not made to be metastable. And so I really like the term cybernetics because it’s about steering.

 

Because it’s about getting informed and steering and making those decisions, the governance is with people, the individual. The governance is not being made for you by somebody else, where you have no mouth but you must scream. And I think that’s a really big difference between how we built technology in the past and how we build it now. We built Wi-Fi in a weird way. Wi-Fi could be more like electricity, where…

 

Most of the time electricity works, and when it doesn’t, it’s pretty obvious. But a lot of the time Wi-Fi doesn’t totally work, and it could work like electricity. It’s not really a total given. And yes, it’s really complicated. But the thing about electricity is it’s dangerous. We know it’s dangerous. You need to be certified as an electrician to work with it. And it’s behind the scenes doing a lot of stuff for us. So I think the kind of innovation gotchas, the calm tech gotchas, are: you know, the term calm is not what you think. There’s just as much information in a forest as there is in a city, but one relaxes us and the other one does not. Why? Because of the way in which a jungle or a forest engages our attention. We tune to it. And if you’re part of an indigenous tribe, you are very tuned to it for the last thousand years that you’ve lived on that land. And you know exactly what the land is saying. You know exactly what the bugs are saying. You know if there’s going to be a weather event. You know whether you’re going to be able to gather from a specific place or not. You know the shape of the animals, and you understand how long that particular rock and that tree have been there. In a city, the way in which things are communicated is pretty loud. There’s less nuance. And if you read into it very deeply, as all humans have the ability to do,

 

you can sense a lot of information, like, oh, that coffee shop probably won’t exist for a couple of years. Or you, you know, use Esri software and get an ArcGIS landscape analysis for 50 years in the future, and you do urban planning with that, right? There’s so much more meaning and overlay and density that you can do with that to inform. But a lot of times you just walk down the street and it’s extremely loud and it’s hard to have your thoughts, and it’s not very uniform. So there’s a lot of applications and gotchas in this whole thing. Which is to say: when you hear a term like the future is invisible, or the future intelligent agents will talk to us, and you don’t know the history of intelligent agents or why we even have the term artificial intelligence – it’s really, really important to go back in time and say, where did this term come from? What does it actually mean? What do people think it means? And what are we told that we’re supposed to react to it with, from movies and television shows?

 

Shruthi Prakash 

Sure, sure, sure. I’ve just been listening, and I have so many thoughts as well. I think I wanted to, you know, primarily sort of hear your thoughts on what role, let’s say, developers of these technologies play, right, in ensuring that their creations don’t inadvertently become an obstruction and are transparent to some extent. And how do you therefore sort of attain this balance between their product being a facilitator versus a distractor, essentially? And I think the analogy of the glasses really helps in this, but just a few more thoughts on it would be helpful.

 

Amber Case 

Sure. We could go back to early technology. It was like: here is a vessel that is holding liquid. And it’s holding it for me, so I don’t have to think about it. This is pretty innovative. Baskets, ceramics, chairs – the chair that I’m sitting in, I have to physically think about it to remember, oh, I’m sitting in a chair. So I think it’s important to think about how the waves of technology in our lives have changed. And Mark Weiser talks about this, in which we had the mainframe era, in which you might rent space on a giant mainframe sponsored by some college or university that might cost several million to install and be very hard to maintain.

 

And the computing was far away from you. So you had to go to it and you had to share it. Then we had a distributed era, and I call it distributed because it is. It’s distributed and decentralized, in which everybody had a PC or Mac at their house. Not everybody, but many people had PCs and Macs at their house. And what you did with that is, you know, Microsoft would release its operating system on a CD-ROM. You had to do it very well. And if you went viral, it just meant that you’d sell lots and lots and lots and lots of disks.

 

And at 40 to 200 bucks a disk, or whatever the price was, you knew how much money you could make, you knew your product lifecycle, and because it was baked into a CD, you needed to make sure it was very, very good. And of course you’d have updates, because it was very hackable. But it was a different thing, because then the technology was near to you. And when people started to use the web, they were very nervous about putting their information outside of themselves, like credit cards. Paul Graham’s ViaWeb was one of the startups that helped people to be able to buy stuff online, you know, and then eBay – and now we’re fairly comfortable with it, or perhaps not, but it’s the norm to store your information outside of yourself, whether that’s photos on iCloud or your reactions to political ideologies on social media. So much information is now remote, and now again it’s almost like you’re sharing a mainframe, putting your information out there. The problem is that when people had their computers at their houses, they could change those systems. It was under their control in a much more reasonable way, and you could modify your operating system, get to really know it for a while, understand how to program different things in it or modify it. And even when we had an era in which there was like cPanel and PHP and forums, you could still install that on your own kind of shared server, and you could make decisions on running forums about who to allow into the forum. And I see that now with Discord channels, where you can decide who is in your Discord channel.

 

It’s very similar, but you don’t own Discord in any way. And if Discord were to go away as a startup, you wouldn’t have anything left. Whereas if you ran some PHP app on a shared server, you would have access even if that project stopped being supported. You could still run it. You could still have a forum for like 30 years if you wanted to, I suppose. And so it’s really important to think about: the innovations and the user experience are being run more and more by centralized companies that are remote, that are often propped up by venture capital. And so you don’t really own the stuff that you’re using, and you don’t get to make as many decisions about what you want. Your account can also be deleted for no good reason at any time. So if I have 50,000 pins on Pinterest, because I’m really interested in exploring the latent space around specific colors, shapes, and ideas,

 

I know that that could be taken away at any time. And the hundreds and hundreds of hours of labor that I’ve done to explore those things are all for naught. And so of course I’m looking for export tools to do that sort of thing. And so I think, you know, before it’s like, if you know how to make a ceramic cup, there’s a governance around how that knowledge is transferred from generation to generation. There’s a culture around it. And within that culture is a governance.

 

And it doesn’t need to be put into a constitution. It’s just: here’s the way a thing is done. And it can be fun. It can be empowering. It can be interesting. It can be multi-generational. But the way in which something is done now is, you know, you might have a BTS fan club Discord server, which has all sorts of emoji clicking and voting to put flair on your profile, so that people understand where you’re from and how old you are and what character you like and all these things.

 

That is also a culture, and that culture of governance is emergent and being shared. But again, all that goes away if Discord says, we don’t exist anymore. And it’s not that these things are ever permanent. You might look at a tribe and say, wow, you make pottery better than any factory can ever make it, right? Because you have a thousand years of history going into that artisan product.

 

And the invisible thing that we don’t think about or see is the governance through culture and the kind of indigenous cybernetics that like predate any Western ideas of cybernetics. And so it becomes really important to look at everything through a much wider systemic lens that involves the culture, the history, the incentive models versus just the technology itself. 

 

And I think this is kind of the hard thing that I see when people say, you know, blockchain or anything – and I’ve been involved in a lot of these projects and even, you know, co-founded DAO tooling platforms and things like that – is that there’s an expectation that just because you have a new technology, people will suddenly be able to do governance better. The main thing is that people need to know how to do governance. And then the software shows a record of that governance.

 

Just like saying, you know, if you get used to Robert’s Rules of Order, for instance, when you’re making decisions or, you know, running something, you practice with it. You know, either you do student government or you play-act something through it, and you do rehearsal through games until you understand why this is useful, and then you wield it over time. And you learn how, oh, Robert’s Rules is really helpful for not having someone take over a city hall meeting by being a troll. There’s kind of a rate limiter; this itself is kind of a dynamic equilibrium system. And so I think there’s this issue that we conflate – we focus on the tool and not the task. And we get so excited about it. And that’s when technology is most visible.
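As a loose illustration of that “rate limiter” remark, here is a toy floor-control sketch – our own construction, with hypothetical names – in which nobody speaks twice until everyone queued has spoken, so a single troll cannot dominate a meeting:

```python
# Toy "Robert's Rules as rate limiter": one speaking turn per person per round.
# The round-reset rule is an illustrative simplification.
from collections import deque

class Floor:
    def __init__(self):
        self.queue = deque()   # members waiting to speak this round
        self.spoken = set()    # members who already spoke this round

    def request(self, speaker):
        # Rate limit: a member who already spoke waits for the next round.
        if speaker in self.spoken or speaker in self.queue:
            return False
        self.queue.append(speaker)
        return True

    def next_speaker(self):
        speaker = self.queue.popleft()
        self.spoken.add(speaker)
        if not self.queue:     # round over: everyone may speak again
            self.spoken.clear()
        return speaker

floor = Floor()
for name in ["troll", "alice", "troll", "bob"]:  # the troll's second request
    print(name, "queued" if floor.request(name) else "must wait")
```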

 

You know, the killer app for electricity was the washer and dryer, you know, the electric stove – all of these time-saving things. Especially in, like, the Hill Country in Texas, women didn’t have to age 20 years prematurely because they were, you know, heating up irons and, you know, tending fire and doing all these complicated tasks; they were able to be more free. It was a great emancipator in some respects.

 

The killer app for the phone, the killer app for Wi-Fi – it’s not necessarily time-saving stuff, it’s more time-sucking stuff. And so we’re having a completely different era, a little bit different than having Bob Frankston’s spreadsheet software show up on an early computer for business tasks. We have a lot of fun that we can do with our phones.

 

But a lot of it is super consumptive, and on interfaces that are being governed outside of ourselves, that we’re beholden to. So I think the web has become a lot more about consuming than creating. And we’re losing a bit of that homesteading aspect of: you can make a website and put anything on it and then share that with people. Now it’s more like, I’m going to put in this link and get angry about it.

 

And so our attention’s really been used against us. And I think also it’s just because it’s expensive to run these websites. You do need to have advertisers. Whereas if you made Minecraft, for instance, you can have a 12-year-old run a Minecraft server. You don’t need to support the central server. And so it’s a completely different equation. I think we can make stuff very differently, but we’ve been locked almost into this era where we think it has to be done remotely and it just costs a lot of bandwidth. And it’s just, you know, norms happen and we autocomplete to those norms. And so now, even thinking about computing being in the world on, you know, pads, tabs and boards, as Mark Weiser talked about, versus, you know, being in your head, or being in a heads-up display – we think, because we’ve seen it in movies, that it should be in your heads-up display, that it should be the color blue, that it should be very exciting. But we go to movies to be entertained. We don’t go to movies to be bored.

 

But in real life, we don’t want to be entertained by our technology. We just want it to work. We don’t want the washing machine to beep really loudly. We want to be able to change the tone. We don’t want to have these decisions made for us based on an earlier era

 

of a suburban home that needed to have something that made a loud enough noise that we could hear it through the floorboards from the basement, right? So I think there are a lot of these issues that we keep running into that are based on either old notions of governance or the idea that things should be a certain way. And I think that’s just how things are right now. And we’re in a state of what Douglas Rushkoff would call present shock. We can’t even deal with the present moment – not to mention Alvin Toffler’s future shock, in which we can’t handle the future; we can’t even handle the now.

 

Simone Cicero 

So let me try to bounce back some ideas. There’s a lot here that I’ve been thinking and writing about. My first reaction to this is: how do we do this? Because there is a certain element of – of course, when you talk about technology as aligned to our needs or our objectives, more in terms of alignment, right, the glasses metaphor was very powerful: something that aligns without even standing in the middle.

 

But there is some element of restraint to what you talk about when you talk about technology, some limits, let’s say, that technology shouldn’t maybe surpass or go beyond, OK? At least that’s how I feel it.

 

And yeah, you also made a lot of references to what happened in the past with CD-ROMs, for example, and software development and so on. And then I also feel like there’s some kind of ethical element in what you talk about, some moral discussion and so on. So my question, the thing that is hovering in my mind, is really: how do we do this?

 

How do we enact this view of technology without sounding either nostalgic or – I don’t want to say fascist – on the other side, where we kind of impose on someone how technology should be? Because, I don’t want to say totalitarian, but when you said, for example, games – games create situations that lead to, you know, these kinds of dystopian mechanisms, with people playing games and burning a lot of time, you know, doing pointless things, basically.

 

And I recall that recently in China they had this legislation that you cannot play games for more than a certain amount of hours during the day – you know, they would deactivate your accounts after a while. And I mean, how do I feel about that? You know, is it a way to regulate technology that should be welcomed or not?

 

So, down this reflection, I came upon your idea that design is governance, right? So somehow, should this critical conversation around technology be brought back into this space where, designing and using a technology – an organization, a technology, or whatever we’re designing together as a system that uses a technology – shouldn’t we approach these questions as a system, more, I would say, in the smallness of the things that we build, instead of thinking about this as a universal problem?

 

I don’t know if I convey my point, but reducing the size of this debate to the systems we build for us – it could be a company for its customers, or it can be an organization for the people that interact in it. By reducing the space of the conversation around what kind of limits we want to impose on a certain technology, maybe we bring it to a dimension that is not imposing, but rather is more like a co-creative process where everybody can be thoughtful about how we use a certain technology.

 

Amber Case 

Hmm. Those are really good points. I feel like, first, ethics are hard to talk about, because ethics with aesthetics could be considered fascist. So we have to really think about how civil engineers might approach ethics. We don’t say that a bridge is only for certain people. We say a bridge is for everyone. And we know that a bridge can withstand a certain weight limit. And so we post it on the bridge and say, here are the height and weight limit restrictions of the bridge. And there’s legislation around who can go down what roads, and, if you have hazardous materials, what you need to do so that people can be safe. And we don’t say this bridge is only for white people. We used to do that. But the whole point is, the bridge is like: you can go down this bridge, and here are some affordances if you’re walking or you’re bicycling. And also, if you’re a boat, you need to go under it. And there are a lot of different considerations that go with that bridge, which is why it takes a really long time to build a bridge. There’s also soil analysis and endangered species analysis and lots of things.

 

But the ethics are set up around that particular project. They’re not set up as like, here is an ethical theory that can be applied to every single system.

 

It’s always about: here’s a practical application of what we need for this project in order to make sure that there’s the least amount of harm. And there are a lot of simulation programs that are created in order to make models of the bridge, so that you can run certain cargo across it, understand whether it’s going to break or not, understand the different issues that are going to happen with that bridge under different temperatures – so that you don’t have to just build that bridge and watch it break when some wacky weather system comes through.

 

And of course, every engineering student gets that rolling bridge video where the bridge broke and it was terrible because they didn’t calculate it. Everyone gets this catastrophic failure as the first very, very visual learning experience of what not to do and what to learn from. And it’s so memorable because the ethics are born from a failure that’s so visual. But with a lot of these systems that we’re dealing with right now,

 

the failure is not visual, unless a journalist comes in and writes a story about an ocular implant that helped people to see, that went out of business, and now people can’t see anymore. So those stories are harder to tell and less easy to show – you know, you don’t start a computer engineering class like that.

 

First off, you don’t get a lot of engineering principles in a computer engineering class. But you don’t start a software class, or even an introduction to programming, with a catastrophic failure event that shows you what can happen to the people that use your software when you don’t consider their needs. We don’t educate like that. We talk constantly about ethics in a vacuum, as if it’s a separate system. In engineering, it’s embedded into the process.

 

And so I think, you know, you don’t even go to school to learn software. You can learn to code with a learn-to-code book from, you know, the SuperHi course, which I recommend – it’s a great book. But the problem is that when we build software, we don’t often see or experience how terrible it is for someone to use it; it’s not real to us because we can’t see them. On a bridge, we can see if someone dies – it gets on the television.

 

It gets on the web, it gets on TikTok, it goes everywhere. It’s viral, and we see it, and we’re associated with that failure. But, you know, what engineer at Facebook was associated with – you know, who made that decision that said, hey, early on, in what, maybe 2007, we’re going to list all of the groups that you’re a member of on your page?

 

And people who were part of groups that were not in political alignment with their parents had to leave home. Who made that decision for somebody else? Who talked about the ramifications? And the posters on the wall say move fast and break things, so you can see what happens. And this doesn’t mean that Facebook or Meta engineers are better or worse than any other engineers.

 

When we look at the history of bridge design, road design, car design, people get hurt, and then there’s a regulation. It’s nice if you can try to figure out what might happen beforehand and do it in a simulation. But it’s really hard to do that with software and people because there just aren’t that many tools to do that yet. There’s a company called BlockScience that’s working on that so that people don’t really get hurt. 

 

Or you can build better economic models and better software models, so that you can fulfill the constraints of what you need to do in a way where you can understand the behavior of the system before you implement it in the real world. And this is just good engineering. But I think it’s a bit harder right now, because we’re in a kind of unstable media landscape that’s being funded by venture capitalists who are being funded by wealthy families

 

who need to make a return on their investment. So the incentive model is not: make a long-term piece of software. The incentive model is: within X amount of years, in order for us to return the money that we raised from these private families, you need to have a three to 10X return, ideally a 100X return, in terms of acquisition, or you need to go public. And the economics of those are not necessarily people-serving. Now, there’s a real middle class in Germany that’s about

 

kind of these old families that are blue-collar workers that make good intermediary stuff, you know, good bolts and wheels, that have been around for a long time. And they don’t need to get conglomerated. They do not need to have venture capital, and they do not need to go public. They’re family-run companies. And I think having a family-run software company sounds weird. But if you look at Panic – the software company that produces Coda and a lot of Mac tools – they do a really good job of saying, hey, we don’t want to be bought. They’ve been offered to be bought by Apple before. And, we don’t want to go public.

 

We’re going to have a small company that makes high-quality software. And we’re going to make enough to raise our families and have maybe 26 people. So I think there are these notions that we get when we look at innovation – that it looks like an innovation center, or you’re celebrating raising money and not actually building the product. And you’re doing something that’s incremental, not the next generation, because you aren’t even asking the right questions. And so when we look at ethics in a vacuum and try to apply it without good civil engineering skills, or even mechanical engineering skills, or, I would say, control systems theory skills, it becomes really hard to say: we’re going to do an AI ethical system. Where are the ethics? When? What’s the application? Be specific.

 

If we were going to have an AI assistant – which I like to call alongside technology, not artificial intelligence – helping a doctor make sense of their notes, to reduce the time that it takes to make sense of patient notes: what possible failures could that system have? Can you inform the doctor in that system – can you give them options to reduce their labor, first off, in diagnosing and treating disease? Can you bubble up information for them? Can you tell them if you have a confidence threshold, and can you have a network of people who are paid to check that work? And, you know, how do you assist somebody to have a good system, so that you can take doctors that are maybe mediocre at their task, or overwhelmed, towards becoming more expert-like?
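The confidence-threshold idea is easy to sketch in code. This toy example – ours, with a made-up 0.9 threshold and made-up labels – routes low-confidence suggestions to paid human reviewers instead of showing them to the doctor:

```python
# Toy confidence-threshold triage for an "alongside" assistant: suggestions
# the model is unsure about go to a human reviewer, not to the doctor.
# The threshold and routing labels are illustrative assumptions.

def triage(suggestion, confidence, threshold=0.9):
    if confidence >= threshold:
        return ("show_to_doctor", suggestion)   # bubble up, clearly labeled
    return ("send_to_reviewer", suggestion)     # a paid person checks the work

print(triage("flag possible anemia; see CBC panel", confidence=0.95))
print(triage("rare-disease match", confidence=0.40))
```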

 

And for expert doctors, getting rid of some of the administrative bureaucratic issues so that they can do their work better. And then also just remembering that people are reproducible and training them well and having a very good culture can very often outperform an AI. And by that, I mean, you can do a lot of call center automation by having a machine sort people through that system.

 

But when people are really in pain, they just want a person to help them. A person that’s very familiar with the system, can help them out, and kind of has superpowers administratively to be able to usher them through that system. So there keeps being this idea that it’s all or nothing: AI will completely take over, or this certain thing will happen. In reality, it’s more like this issue that happened in 1956, where a bunch of people that were talking about cybernetics wanted to have a conference where Norbert Wiener, the grandfather of cybernetics, didn’t show up and talk the whole time. They wanted to just have a place to talk that might not be cybernetics. They didn’t want this person to just say, this is cybernetics. And so they created this field called artificial intelligence. They got a summer course in 1956 at Dartmouth, and they got to talk amongst themselves.

 

And the issue with the term artificial intelligence is that it was so exciting for the press,

 

and so much about the human use of human beings and automation and the Industrial Revolution, you know, that we started thinking about it as its own system, outside of where it should be – which is in the control systems theory, steering, cybernetics place. I don’t think most people have even heard about control systems. It’s how you experience most of the world:

 

when you use a complicated object outside of yourself – whether that’s an electric toothbrush or a car or an airplane – those systems behind the scenes are set up to ensure your safety, or to ensure that they work within a certain threshold, so that we can operate very complex and intense machinery and steer it. And we’re allowed, as fallible human beings, to drive cars on the road, you know, or have trains – there’s human oversight on the system.

 

So I think we’ve gotten to a situation where we’re almost in this container of abstraction, in which, simply because of the term AI, it’s almost somewhat sci-fi – when in reality it’s how people use it that is the threat, how you create

 

viruses. And then you can ask it questions. And it’s very interesting because it’s very hackable. You can set up an AI in a box and say, here are your parameters for safety. And then all you need to do to hack it is to tell the AI that the things I’m requesting are still within the parameters of your safety. And here’s why. And it’ll say, oh, OK, I’ll just do that thing that you asked. And people with an English degree are really good at hacking that because they can wield the human language and convince the AI. 

 

So it’s a hard thing, because I feel like right now we’re in this extremely narrow idea of technology. It’s not that we can’t regulate – we can regulate as much as we want, and that’s always going to lag behind what’s being made. But I think how we conceive of technology is very narrow. And therefore we keep making the same stuff again and again, where we could really explore

 

different things that could be made, or go back and say, hey, wouldn’t it be better if we were to tap to pair our phone to our car when we step into the car? Who invented that? Oh yeah, it was Manu Chatterjee in 2008 at Palm. It was the only demo that made Steve Jobs throw it against the wall in anger that he hadn’t invented it. With just a little bit of historical investigation, there are better ways to do things. And yet, right now, we’re very, very attracted to specific ideas of how things are made, because that’s what we see.

 

And somehow we’re so entranced by the new and so entranced by the visual that it’s really hard to think about other senses and other ways of interaction, and how humans digest things, and how to make stuff that empowers humans. And the term empower – that’s not even really a real word, you know. Just like with ethics, we apply these words arbitrarily because they make us feel good. We’re told that we need to apply them. But what does it really mean? It’s the same with diversity. What does it really mean? Does that also include people who are 90 years old? Does that also include people who don’t have use of their right hand? You know, how do you define it?

 

And so it’s really important to think about how things could be. And when you have a constraint – for instance, I store my data outside of myself and then somebody hacks into it and they get all the data – sometimes the answer is inverting it. What would an inverted system look like? Well, it would be: I share my data temporarily with a third party for the purpose of diagnosing a disease.

 

They send me that diagnosis, which I own. And if somebody hacks into the system, they only get what’s shared at that period of time. So people don’t have to both store the information and keep it safe, in addition to providing medical tools. Maybe you log in with your data in a way that’s private, so that you can prove that you actually have multiple sclerosis on the forum about multiple sclerosis, and you’re not an advertiser coming in. Maybe some of that information is stored on your computer, close to you, like you would store the deed to your house in your house or in a safe.
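Here is a minimal sketch of that inverted model – our own illustration, with hypothetical field names and a one-hour window – in which data is shared only for a stated purpose and a limited time, so a breach exposes at most what is in flight:

```python
# Toy purpose- and time-scoped data grant, sketching the "inverted" model
# described above. Names, fields, and the expiry window are illustrative.
from dataclasses import dataclass
import time

@dataclass
class DataGrant:
    payload: dict      # e.g. just the scans needed for one diagnosis
    purpose: str       # the single purpose this grant is scoped to
    expires_at: float  # epoch seconds; after this, nothing is readable

    def read(self, purpose):
        if purpose != self.purpose:
            raise PermissionError("grant is scoped to a single purpose")
        if time.time() > self.expires_at:
            raise PermissionError("grant expired; nothing left to leak")
        return self.payload

# The patient shares for one hour, for one purpose, and keeps the diagnosis.
grant = DataGrant({"mri": "..."}, "diagnose-ms", expires_at=time.time() + 3600)
scans = grant.read("diagnose-ms")
```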

 

I just think that right now – I think it goes in waves. We’re in one of those very, very boring, extremely uncreative, very scared phases of tech history. And, you know, possibly in five or 10 years we’ll be in another

 

period in which it is reasonable to have meetups and talk about different stuff. Right now it’s very hard to talk about different, fun stuff, because somehow a lot of people – especially a lot of people in school – think that there’s a social network and that’s it, and the funding is available for only specific kinds of research, not general research.

 

Simone Cicero

I wanted to use a few minutes of the remaining time to talk about organizations, right? And I know Shruthi also has some nuances about this – maybe you can add them, Shruthi. But my question was more like: how much of these different technologies that we’re going to build in 10 years, as you are wishing for and somehow expecting – how much of this depends on how different the organizations that are actually going to build these technologies are? And I’m talking more specifically about organizations that go more in the direction of enabling, and to some extent making the work of the people more emergent – like self-managing organizations or DAOs – versus traditionally hierarchical or bureaucratic organizations where someone states the objective, and it can be profitability or it can be a certain purpose.

 

And then, to some extent, the organization imposes these on the people, like a non-calm technology would do – versus what a calm organization could be, meaning an organization that to some extent doesn’t do much, but rather becomes the tool that aligns with people’s purpose and self-determination. Which I think is very fundamental to building a post-industrial society, because a lot of the industrial society has been about kind of impeding people from determining themselves, giving them instead something to do, like a bullshit job and everything else that David Graeber talked about.

 

I don’t know if you want to add a little nuance and then maybe you can use a few minutes for this.

 

Shruthi Prakash 

Sure. I think to just segue from what Simone said as well, I think for me it was more about, let’s say, how do you enable organizations to feel that tactile sense of that problem, right? And increasingly so, like how do they get closer to the problem and therefore design for the good of the consumers or the users and so on and balance it against profitability as well.

 

Amber Case

Yeah, that’s a good question. I think there are a lot of different ways to look at this. I think we need more rehearsal and we need more play, because DAOs, decentralized autonomous organizations – they’re not self-managing, they’re not decentralized, and they’re not organizations. They’re more disorganizations, with a record on a blockchain of what has happened.

 

So, the way I see it is that in the past, at least in the United States, 50 years ago, anybody could set up a small group, like a chrysanthemum club. And that would be self-similar at a regional level, a city level, a state level, a national level, and even an international level. You understood that anybody in your organization could do it.

 

You might amend your articles of organization at some point, but it was really easy – comparatively easy – to run and join and have in your free time. I think a lot of people have less free time now, not more. They aren’t used to self-governing at these small levels. And they aren’t necessarily used to being part of a governance organization. And so we say: join a DAO, it’s self-managing.

 

And then you have to call up people manually, because some people are on Signal, some people are on Telegram, some people are on WhatsApp, somebody forgot their private keys to vote with. There’s so much more overhead in getting them to vote than if they were just in your little regional gardening club and you were to say, let’s say, I use Robert’s Rules of Order. And so you have a bunch of people who haven’t ever experienced governance before, thinking that a DAO will actually solve their problems – when in reality, when things are good and, let’s say, you use a token and the token price is high.

 

Everything is fine, but the minute you have to do taxes and figure out treasury, you have a bunch of people who have never done that before. Which is why I say: if you do a DAO, you should probably work at a co-op for a while to see what it’s like before you take that on. People are like, let’s build no governance tools. It’s like, let’s remember governance how it was when it was more human-scale, and let’s reteach that.

 

And I think Daniel Golliher’s Maximum New York class is a really good example of trying to re-embed those governance tools at the more local level, and even re-teaching people at City Hall who are working in government how to use Robert’s Rules of Order and have a history. So I think we have this kind of misnomer that these things are self-organizing, and we have to look back at where the term DAO came from to begin with. It came from a 1996 paper by Dilger about a decentralized autonomous system in the home – a home automation system – which is basically trying to make a metastable system in the home that handles your thermostat and handles all these things for you.

 

And I would say the closest thing to that is probably an HVAC system that understands the humidity in your house and keeps it within a certain range – basically a spaceship. You could go even further and say, well, let’s have an air purifier turn on when this level gets hit. And if there’s a forest fire outside, we’re going to know that this is coming, and you could hit yes on your phone to order a MERV 13 filter and install that in your furnace.
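That Dilger-style automation is simple enough to sketch. In this toy version – ours, with a made-up PM2.5 threshold and a stand-in for the phone prompt – the reversible step is automated, while the irreversible one waits for a human tap:

```python
# Toy home-automation rule in the spirit of the system described above:
# automate the safe, reversible action; keep a human in the loop for the rest.
# The threshold and action names are illustrative assumptions.

def check_air(pm25, threshold, confirm):
    actions = []
    if pm25 > threshold:
        actions.append("purifier_on")        # reversible: automate it
        if confirm("Smoke detected. Order a MERV 13 filter?"):
            actions.append("order_filter")   # irreversible: a human decides
    return actions

# Simulated forest-fire day; the lambda stands in for a yes/no phone prompt.
print(check_air(pm25=80.0, threshold=35.0, confirm=lambda msg: True))
```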

 

That’s more what it was supposed to be like, rather than an actual system that has people in the system. It’s really hard to do governance. It’s really hard to do politics at a city level. You have to have people, and it’s very people-centered – how they behave with each other. You have to have a little bit of understanding of arbitration and litigation. On most of the projects that I’ve been on in this space, there have been conflicts in a way that I’ve never experienced before with any other project.

 

And it’s not like I’ve tried deliberately to set up ways to conflict. No, I’ve tried in good faith to do things. And so has everybody. But either lack of experience or misunderstandings or a system of economics failing has caused conflict. And then we’ve had to rely on people that are good at people in the community to kind of straighten things out. 

 

So I think that’s the hardest thing: when we assume that a new system is better than the old but forget how important people are to the system, we get in really big trouble, because it’s really people. And if you don’t have that culture – the reason why people do participate in governance is usually either you’re multi-generational and you got trained in it, you practiced it as a kid, you had enough fun and games with it in school, or there’s some huge economic incentive for you to do it, or there’s a people incentive and it’s still done behind the scenes. And then, you know, you have the board meeting that registers what you agreed upon at the country club the night before.

 

It’s still human. 

 

And the record – when we look at just the record and we think we can automate that, that’s hard. If we think that things can be more transparent because we’ve made a DAO – you’re still not seeing the Signal messages behind the scenes before a resolution passes. You’re still not seeing the phone calls. You’re not seeing the late-night meetups. You’re not seeing the parties. And so it’s not really decentralized if it’s still a couple of people. What’s more interesting is probably just expanding people’s notions of how governance really happens across lots of different landscapes, and how to do things a little bit better as a human and become smarter. The technology doesn’t become smarter.

 

But maybe if you want to work on a project with a bunch of people around the world that aren’t in the same room, that tool might be really useful. But you need to kind of rehearse first to see what person might get angry or like what person never votes.

 

So that when you have an important resolution to pass, and the software only allows three days for that, and in order to change the voting duration it requires, you know, four out of the six people to say yes – and one of them is on vacation, or just had a kid, or lost their private keys, and they aren’t there. You need to make sure everybody’s there, or you need to redo the resolution, or the token price goes down and then there’s no way to get it out. There are so many things that can happen that you really need to know the people that you’re working with, or at least know their personalities if they’re anonymous.
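The failure mode described here fits in a few lines. In this sketch – ours, with an assumed 4-of-6 quorum – support that cannot reach its keys inside the voting window simply does not count:

```python
# Toy quorum check: a resolution needs 4 of 6 signers within the window, so
# unreachable supporters (vacation, lost keys) sink it regardless of support.
# The quorum value and the counts are illustrative assumptions.

def passes(supporters, reachable, quorum=4):
    # Only supporters who can actually cast a vote inside the window count.
    return min(supporters, reachable) >= quorum

print(passes(supporters=5, reachable=6))  # True: everyone can reach their keys
print(passes(supporters=5, reachable=3))  # False: two signers unreachable
```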

 

So I'd say it doesn't necessarily solve anything. It just re-represents the same problems that we already have, but it seems new. And just because something's new doesn't mean it's better. It has the same number of problems as the old; we just don't know what those problems are yet. That's not to say write off the whole thing.

 

It's just a new way of doing stuff, but it's still the same people, the same human universals, the same Aristotelian system behind the scenes that we're used to. So when we think of the question of whether people should be allowed to play video games for a long time or not, the answer isn't to turn off the video games; it's to understand what in the structure of society makes people want to play video games, look at that as a system, and reduce those levers, so that people don't jump to video games just because they're stuck in some way they don't like.

 

Or understand that if people play video games, they might be rehearsing governance, because if you have to govern a Minecraft server, you are getting an introduction to governance as a 12-year-old. So hey, maybe it's good, right? Maybe it turns people into a specific kind of person. But understanding that is really, really important.

 

And that's the output, but it's not the reason why people are doing a certain thing. So regulation is not going to change anything. It's just going to make people use weird VPNs to try and figure out how to get around the system.

 

Simone Cicero 

I mean, I could go on for ages about this. It just quickly reminded me, as we close, of the work of Yuk Hui, who speaks about this idea of cosmotechnics: the responsibility to look critically at technology and give it meaning as a cultural element, as a cultural process, needs to be there if we really want to relate to technology in a way that is different from the universalizing idea we have been buying into for the last few decades, right?

 

So that piece of work is my breadcrumb for our listeners. And my dream is to bring him on the podcast at some point in the future, so stay tuned. But Shruthi, I'll leave the rest to you.

 

Shruthi Prakash

Yeah, I mean, it was really interesting to hear what we heard today. And I think what we wanted to close the session with is to also hear some of your breadcrumbs, as we call them. So maybe suggestions of books, audio, podcasts, movies, or anything that inspires you and that our listeners can learn from as well.

 

Amber Case

Definitely. I love Yuk Hui. I love the idea of cosmotechnics. In my interpretation, it's that every single culture has its own technology. I've been trying to write about indigenous cybernetics as: hey, look, it doesn't need to be in a peer-reviewed research paper for people to understand that maybe you bury a cow skull and then something interesting happens. There's all of this knowledge, and just because we haven't discovered it yet in the West doesn't mean it's not real.

 

It's that we don't have the sensory apparatus to understand it. But I like cosmotechnics because I've encouraged people around the world to ask, well, for instance, what would an actual Indian website look like? Depending on the region, of course; not what we think of as a website, which is, here's the page with CSS and it looks like this. What does a Japanese website look like? We know what that looks like, because there are a lot of Japanese websites and they're very intense.

 

I love that there's the West and the East represented. I think it's really important to have multiple perspectives inside your mind, past and present, without a delineation between them. Try to understand Aristotle, try to understand Daoism, try to understand Confucianism. Just try to understand all of these different perspectives.

 

If you get lost or want to talk, I think it's really fun to use GPT or Claude to have a dialogue with a text or with a body of work, the way Plato might have. It doesn't matter if it's correct or not. It's about learning how to ask questions and probe more deeply into the past, letting that tool aid you in a way that might otherwise take you weeks at a library, and then going and reading those primary sources that keep coming up.
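As a minimal sketch of what that kind of Socratic back-and-forth could look like in practice, here is a short loop against the OpenAI chat completions API; the model name and the system prompt are illustrative choices, and Claude's API would work the same way.

```python
# A minimal Socratic reading companion, assuming the openai Python package
# and an OPENAI_API_KEY in the environment. Model and prompt are illustrative.
from openai import OpenAI

client = OpenAI()
history = [{
    "role": "system",
    "content": ("You are a dialogue partner in the manner of Plato. "
                "Answer briefly, ask one probing question back, and "
                "point to primary sources worth reading."),
}]

while True:
    question = input("you> ")
    if not question:  # empty line ends the dialogue
        break
    history.append({"role": "user", "content": question})
    reply = client.chat.completions.create(model="gpt-4o", messages=history)
    answer = reply.choices[0].message.content
    history.append({"role": "assistant", "content": answer})
    print(answer)
```

The point, as Amber says, is not that the answers are authoritative; keeping the running history is what lets you probe a line of questioning deeply and collect the primary sources it keeps surfacing.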

 

I think it's just really important to ask a lot of questions about all the terms that you use, all the assumptions that you have. Where do we get the idea that we need to have heads-up displays? Why is the color of technology blue? Why does this need to be this? Why do we assume that a machine should talk to us? In what cases would it be useful for a technology to have a conversation with us? Where do we get this idea of an intelligent system in the home? Where do DAOs come from?

 

Why do we get so excited about X? What are the different kinds of economic systems? There's spectacle economics, for instance, where you get really excited about a thing during the Super Bowl and that's it, which is a lot of crypto. How do you make systems that don't just succumb to the second law of thermodynamics and go cold? How do you have something that integrates over time? How do you have something that improves over time? How do you build cultures that make people want to maintain things, instead of just having the fun part, where they jump on the bouncy castle and then leave without deflating it, and it sits in the rain and molds over? You know, this is what we see in a lot of organizations.

 

We want the fun bouncy-castle part. We don't want the maintenance part. How do you make the maintenance fun? How do you make care interesting? What does attention really mean? What do attention and care mean? How does omotenashi, from Japanese culture, get applied to modern technology?

 

All this asking of questions and exploring is something people don't do when they say: I need to make a startup; I was told it should look like this, and it needs to have this server, and it needs to use this software. It's a very interesting thing. I think when we off-road a little bit, we can be surprised, and we can get things like Figma or the new Dyson hairdryer, things that are actually category changers, because the question that was asked was: how do you make a really seamless experience for real-time collaboration? How long can we spend with WebAssembly, or whatever Figma used, so that we can have a good experience? Google was all about: how do we not make some super-fast, multimedia, cool solution to browsing and surfing the internet, but just make it fast when someone has a query? How do we connect people to the information that other people have made?

 

And with Dyson, it's: we haven't done anything with the hairdryer since the forties. What's the most annoying thing that keeps people from using the hairdryer? The sound. How do we reduce the sound of the hairdryer? So those are the interesting questions to ask and answer. And when we only think about what's happening right now, we miss out.

 

So I would say the biggest thing is: ask more questions, and figure out how to ask better questions. The quality of your questions will be directly related to the quality of your life and of what you can build.

 

So yeah, that should be my further reading, which is really further questions.

 

Simone Cicero

Thank you so much. I mean, it's been a very deep and important conversation, I think, for anybody who is involved in either building technology directly or building organizations that build technology, which is basically everyone today, because everything we build has at least some kind of digital footprint or extension.

 

So thank you so much. I hope you also enjoyed the conversation.

 

Amber Case

Absolutely, yeah, it's been great. And it's been great to talk with you again. Thank you for hosting this. Really appreciate it.

 

Simone Cicero

Yes, after 13 years. I will put the original interview that we had in the notes. It's written, and it's still online on my blog after 13 years, so people who are curious can check out our interview. It was about what it means to be a cyborg anthropologist. That was the topic. So thank you so much. It was amazing to have you.

 

And yeah, thank you, Shruthi, for the questions and for the effort of taking this call so late. It's probably midnight there now.

 

Shruthi Prakash 

Yeah, it is, it is. But thank you, it was really nice listening to you and learning from you.

 

Amber Case 

Thank you so much and see you again, hopefully not in 13 years.

 

Simone Cicero 

Thank you.

 

Shruthi Prakash 

For sure.

 

Simone Cicero

Thank you. And for our listeners, of course, you will find all the information about Amber and her work, along with all the podcast notes and transcripts, on our website. If you go to www.boundaryless.io/resources/podcast, you will find Amber's episode there. Thank you, everybody. And until we speak again, remember to think Boundaryless.