Human-Centered Design with Mira Lane – Transcript

This Week in Machine Learning & AI

Sam Charrington: Today we’re excited to present the final episode in our AI for the Benefit of Society series, in which we’re joined by Mira Lane, Partner Director for Ethics and Society at Microsoft. Mira and I focus our conversation on the role of culture and human-centered design in AI. We discuss how Mira defines human-centered design, its connections to culture and responsible innovation, and how these ideas can be scalably implemented across large engineering organizations. Before diving in I’d like to thank Microsoft once again for their sponsorship of this series. Microsoft is committed to ensuring the responsible development and use of AI and is empowering people around the world with this intelligent technology to help solve previously intractable societal challenges spanning sustainability, accessibility, and humanitarian action. Learn more about their plan at Microsoft.ai. Enjoy.

Mira Lane: [00:00:09] Thank you, Sam. Nice to meet you.

Sam Charrington: [00:00:11] Great to meet you and I’m excited to dive into this conversation with you. I saw that you are a video artist and technologist by background. How did you come to, you’re looking away, is that correct?

Mira Lane: [00:00:28] No, that’s absolutely true.

Sam Charrington: [00:00:30] Okay. So I noted that you’re a video artist. How did you come to work at the intersection of ethics and society and AI?

Mira Lane: [00:00:42] For sure. So let me, Sam, let me give you a little bit of a background on how I got to this point. I actually have a mathematics and computer science background from the University of Waterloo in Canada. So I’ve had an interesting journey, but I’ve been a developer, program manager, and designer, and when I think about video art and artificial intelligence, I’ll touch artificial intelligence first and then the video art, but a few years ago I had the opportunity to take a sabbatical and I do this every few years. I take a little break, reflect on what I’m doing, retool myself as well.

 So I decided to spend three months just doing art. A lot of people take a sabbatical and they travel but I thought I’m just gonna do art for three months and it was luxurious and very special. But then I also thought I’m going to reflect on my career at the same time and I was looking at what was happening in the technology space and feeling really unsettled about where technology was going, how people were talking about it, the way I was seeing it affect our societies and I thought I want to get deeper into the AI space. So when I came back to Microsoft, I started poking around the company and said is there a role in artificial intelligence somewhere in the company? And something opened up for me in our AI and Research group where they were looking for a design manager. So I said absolutely. I’ll run one of these groups for you, but before I take the role, I’m demanding that we have an ethics component to this work because what they were doing is they were taking research that was in the AI space and figuring out how do we productize this?

 Because at that point, research was getting so close to engineering that we were developing new techniques and you were actually able to take those to market fairly quickly and I thought this is a point where we can start thinking about responsible innovation and let’s make that a formalized practice. So me taking the role for the design manager was contingent on us creating a spot for ethics at the same time and so backing up a little bit, the video part comes in because I have traditionally been a really analog artist. I’ve been a printmaker, a painter, and during my sabbatical, I looked at digitizing some of the techniques that I was playing with on the analog side. I thought let me go play in the video space for a while.

 So for three months, like I said, I retooled and I started playing around with different ways of recording, editing, and teaching myself some of these techniques and one of the goals I set out at the time was well, can I get into a festival? Can I get into a music or video festival? So that was one of my goals at the end of the three months. Can I produce something interesting enough to get admitted into a festival? And I won a few, actually.

Sam Charrington: [00:03:46] That’s fantastic.

Mira Lane: [00:03:46] So I was super pleased. I’m like okay, well that means I’ve got something there I need to continue practicing. But that for me opened up a whole new door and one of the things that I did a few years ago also was to explore art with AI, and could we create a little AI system that could mimic my artwork and become a little co-collaborator with myself? So we can dig into that if you want, but it was a really interesting journey around can AI actually complement an artist or even replace an artist? So there are interesting learnings that came out of that experience.

Sam Charrington: [00:04:25] Okay. Interesting, interesting. We’re accumulating a nice list of things to touch on here.

Mira Lane: [00:04:30] Yeah, absolutely.

Sam Charrington: [00:04:31] Ethics and your views on that was at the top of my list, but before we got started, you mentioned work that you’ve been doing exploring culture and the intersection between culture and AI and I’m curious what that means for you. It’s certainly a topic that I hear brought up quite a bit. Particularly when I’m talking to folks in enterprises that are trying to adopt AI technologies and you hear all the time well one of the biggest things we struggle with is culture. So maybe, I don’t know if that’s the right place to start, but maybe we’ll start there. What does that mean for you when you think about culture in AI?

Mira Lane: [00:05:12] Yeah, no, that’s a really good question, and I agree that one of the biggest things is culture and the reason why I say that is if you look at every computer scientist that’s graduating, none of us have taken an ethics class and you look at the impact of our work, it is touching the fabric of our society. Like it’s touching our democracies and our freedoms, our civil liberties, and those are powerful tools that we’re building, yet none of us have gone through a formal ethics course and so the discipline is not used to talking about this. A few years ago you’re like I’m just building a tool. I’m building an app. I’m building a platform that people are using, and we weren’t super introspective about that.

 It wasn’t part of the culture, and so when I think about culture in the AI space, because we’re building technologies that have scale and power, and are building on top of large amounts of data that empower people to do pretty impressive things, this whole question of culture and asking ourselves, well what could go wrong? How could this be used? Who is going to use it directly or indirectly? And those are parts of the culture of technology that I don’t think have been formalized. You usually hear designers talking about that kind of thing. It’s part of human-centered design. But even in the human-centered design space, it’s really about what is my ideal user or my ideal customer and not thinking about how could we exploit this technology in a way that we hadn’t really intended?

 We’ve talked about that from an engineering context the way we do threat modeling. How could a system be attacked? How do you think about denial of service attacks? Things like that. But we don’t talk about it from a how could you use this to harm communities? How could you use this to harm individuals or how could this be inadvertently harmful? So those parts of culture are things that we’re grappling with right now and we’re introducing into our engineering context. So my group sits at an engineering level and we’re trying to introduce this new framework around responsible innovation and there are five big components to that. One is being able to anticipate, look ahead, anticipate different futures, look around corners and try to see where the technology might go. How someone could take it, insert it into larger systems, how you could do things at scale that are powerful that you may not intend to do.

 There’s a whole component around this responsible innovation that is around reflection and looking at yourselves and saying where do we have biases? Where are we assuming things? What are our motivations? Can we have an honest conversation about our motivations? Why are we doing this and can we ask those questions? How do we create the space for that? We’ve been talking about diversity and inclusion like how do you bring diverse voices into the space, especially people that would really object to what you’re doing and how do you celebrate that versus tolerate that? There’s a big component around our principles and values and how do you create with intention and how do you ensure that what you build aligns with those principles, aligns with those values, and is still trustworthy?

 So there’s a whole framework around how we’re thinking about innovation in the space and at the end of the day it comes down to the culture of the organization that you’re building because if you can’t operate at scale, then you end up only having small pockets of us that are talking about this versus how do we get every engineer to ask what’s this going to be used for? And who’s going to use it? Or what if this could happen? And we need people to start asking those types of questions and then start talking about how do we architect things in a way that’s responsible. But I’d say most engineers probably don’t ask those types of questions right now. So we’re trying to build that into the culture of how we design and develop new technologies.

Sam Charrington: [00:09:14] Mm-hmm (affirmative). One of the things that I often find frustrating about this conversation particularly when talking to technology vendors is this kind of default answer of well, we just make the guns, we don’t shoot them. We just make the technologies. They can be used for good. They can also be used for bad, but we’re focused on the good aspects. It sounds like, well, I’m curious, how do you articulate your responsibility with the tools that you’re creating? Or Microsoft’s responsibility with the tools it’s creating. Do you have a-

Mira Lane: [00:09:55] Well I have a very similar reaction to you when I hear oh, we’re just making tools. I think, well, fine. That’s one perspective, but the responsible perspective is we’re making tools and we understand that they can be used in these ways and we’ve architected them so that they cannot be misused and we know that there will be people that misuse them. So I think you’re hearing a lot of this in the technology space and every year there’s more and more of it where people are saying look, we have to be responsible. We have to be accountable. So I think we’ll hear fewer and fewer people saying what you’re hearing, what I’m hearing as well.

 But one of the things we have to do is we have to avoid the ideal path and just talking only about the ideal path. Because it’s really easy to just say here’s the great ways that this technology is going to be used and not even talk about the other side because then, again, we fall into that pattern of well, we only thought about it from this one perspective, and so one of the things that my group is trying to do is to make it okay to talk about here’s how it could go wrong so that it becomes part of our daily habit and we do it at various levels. We do it at our all hands, so when people are showing our technology, we have them show the dark side of it at the same time so that we can talk about that in an open space and it becomes okay to talk about it. No one wants to share the bad side of technology. No one wants to do that. But if we make it okay to talk about it, then we can start talking about well, how do we prevent that?

 So we do that at larger forums and I know this is a podcast, but I wanted to show you something. So I’ll talk about it, but we created, it’s almost like a game, but it’s a way for us to look at different stakeholders and perspectives of what could happen. So how do we create a safe environment where you can look at one of our ethical principles. You can look at a stakeholder that is interacting with the system and then you say well if the stakeholder for example is a woman in a car and your system is a voice recognition system, what would she say if she gave it a one star review? She would probably say I had to yell a lot and it didn’t recognize me because we know that most of our systems are not tuned to be diverse, right? So we start creating this environment for us to talk about these types of things so that it becomes okay again. How do we create safe spaces? Then as we develop our scenarios, how do we bring those up and track them and say, well how do we fix it now that we’ve excavated these issues? Well, let’s fix it and let’s talk about it.

 So that’s, again, part of culture. How do we make it okay to bring up the bad parts of things, right? So it’s not just the ideal path.

Sam Charrington: [00:12:46] Mm-hmm (affirmative). Do you run into, or run up against engineers or executives that say, introspection, safe spaces, granola? What about the bottom line? What does this mean for us as a business? How do we think about this from a shareholder perspective?

Mira Lane: [00:13:09] It’s interesting, I don’t actually hear a lot of that pushback because I think internally at Microsoft, there is this recognition of hey, we want to be really thoughtful and intentional and I think the bigger issue that we hear is how do we do it? It’s not that we don’t want to. It’s well, how do we do it and how do we do it at scale? So what are the different things you can put in place to help people bring this into their practice? And so there isn’t a pushback around well, this is going to affect my bottom line, but there’s more of an understanding that yeah, if we build things that are thoughtfully designed and intentional and ethical that it’s better for our customers. Our customers want that too, but then again the question is how do we do it and where is it manifest?

 So there’s things that we’re doing in that space. When you look at AI, a big part of it is data. So how do you look at the data that’s being used to power some of these systems and say is this a diverse data set? Is this well rounded? Do we have gaps here? What’s the bias in here? So we start looking at certain components of our systems and helping to architect it in a way that’s better. I think all of our customers would want a system that recognized all voices, right? Because again, to them, they wouldn’t want a system that just worked for men and didn’t work for women. So again, it’s a better product as a result. So if we can couch it in terms of better product, then I think it makes sense versus if it’s all about us philosophizing and only doing that, I don’t know if that’s the best. Only doing that is not productive, right?

Sam Charrington: [00:14:59] Do you find that the uncertainty around ethical issues related to AI has been an impediment to customers adopting it? Does that get in the way? Do they need these issues to be figured out before they dive in?

Mira Lane: [00:15:22] I don’t think it’s getting in the way, but I think what I’m hearing from customers is help us think about these issues and a lot of people, a lot of customers don’t understand AI deeply, right? It’s a complex space and a lot of people are ramping up in it. So the question is more about what should I be aware of? What are the questions that I should be asking and how can we do this together? We know you guys are thinking about this deeply. We’re getting just involved in it, a customer might say, and so it’s more about how do we educate each other? And for us if we want to understand, how do you want to use this? Because sometimes we don’t always know the use case for the customer so we want to deeply understand that to make sure that what we’re building actually works for what they are trying to do, and from their perspective they want to understand well how does this technology work and where will it fail and where will it not work for my customers?

 So the question of ethics is more about we don’t understand the space well enough, help us understand it and we are concerned about what it could do and can we work together on that? So it’s not preventing them from adopting it, but there’s definitely a lot of dialog. It comes up quite a bit around we’ve heard this. We’ve heard bias is an issue. Well, what does that mean?

Sam Charrington: [00:16:47] Right.

Mira Lane: [00:16:47] So I think that’s an education opportunity.

Sam Charrington: [00:16:49] When you think about ethics from a technology innovation perspective, are there examples of things that you’ve seen either that Microsoft is doing or out in the broader world that strike you as innovative approaches to this problem?

Mira Lane: [00:17:12] Yeah, I’ll go back to the data side of things just briefly, but there’s this concept called data sheets, which I think is super interesting. You’re probably really familiar with that and-

Sam Charrington: [00:17:25] I’ve written about some of the work that Timnit Gebru and some others with Microsoft have done around data sheets for data sets.

Mira Lane: [00:17:31] Exactly, and the interesting part for us is how do you put it into the platform? How do you bake that in? So one of the pieces of work that we’re doing is we’re taking this notion of data sheets and we are applying it into how we are collecting data and how we’re building out our platform. So I think that that’s, I don’t know if it’s super novel because to me it’s like a nutrition label for your data. You want to understand: how was it collected? What’s in it? How can you use it? But I think that that’s one where now as people leave the group you want to make sure that there’s some history and understanding of the composition of it. There’s some regulation around how we manage it internally and how we manage data in a thoughtful way.

 I think that’s just a really interesting concept that we should be talking about more as an industry and then can we share data between each other in a way that’s responsible as well?
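The “nutrition label” idea Mira describes can be sketched as a small structured record that travels with a dataset. This is purely illustrative; the field names and example values below are assumptions for the sketch, not Microsoft’s actual datasheet specification:

```python
from dataclasses import dataclass, field

@dataclass
class Datasheet:
    """A minimal 'nutrition label' for a dataset (fields are illustrative)."""
    name: str
    collection_method: str                      # how the data was gathered
    composition: str                            # what's in it
    intended_uses: list = field(default_factory=list)   # what it's good for
    known_gaps: list = field(default_factory=list)      # coverage/bias gaps

# Hypothetical voice-data example echoing the one-star-review scenario above
voice = Datasheet(
    name="voice-commands-v1",
    collection_method="Opt-in recordings from consenting volunteers",
    composition="10k utterances, English only",
    intended_uses=["in-car voice commands"],
    known_gaps=["few non-native accents", "skewed toward male speakers"],
)
```

A record like this would let a second team ("group B" in Mira's terms) check composition and known gaps before reusing the data, rather than relying on institutional memory.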

Sam Charrington: [00:18:24] Right. I don’t know that the data sheet, I think inherent to the idea was that hey, this isn’t novel. In fact, look at electrical components and all these other industries that do this. It’s just “common sense”. But what is a little novel, I think, is actually doing it. So since that paper was published, several companies have published similar takes, model cards, and there have been a handful and every time I hear about them I ask okay, when is this? When are you going to be publishing these for your services and the data sets that you’re publishing? And no one’s done it yet. So it’s intriguing to hear you say that you’re at least starting to think in this way internally. Do you have a sense for what the path looks like to publishing these kinds of things, whether it’s a data sheet or a card or some kind of set of parameters around bias either in a data set or a model, for a commercial public service?

Mira Lane: [00:19:41] Yeah, absolutely. We’re actually looking at doing this for facial recognition and we’ve publicly commented about that. We’ve said, hey, we’re going to be sharing for our services what it’s great for, what it’s not, and so that stuff is actually actively being worked on right now. You’ll probably see more of this in the next few weeks, but there is public comment that’s going to come out with more details about it and I’ll say that on the data sheet side, I think a large portion of it is it needs to get implemented in the engineering systems first and you need to find the right place to put it. So that’s the stuff that we’re working on actively right now.

Sam Charrington: [00:20:25] Can you comment more on that? As you say that, it does strike me a little bit as one of these iceberg kind of problems. It looks very manageable above the waterline but if you think about what goes into the creation of a data set or a model, there’s a lot of complexity and certainly at the scale that Microsoft is working, it needs to be automated. What are some of the challenges that have come into play in trying to implement an idea like that?

Mira Lane: [00:21:01] Well, let me think about this for a second so I can frame it the right way. The biggest challenge for us on something like that is really thinking through the data collection effort first and spending a little bit of time there. That’s where we’re actually spending quite a bit of time as we look at, so let me back up for a second. I work in an engineering group that touches all the speech, language, and vision technologies and we do an enormous amount of data collection to power those technologies. One of the things that we’re first spending time on is looking at exactly how we’re collecting data and going through those methodologies and saying is this the right way that we should be doing this? Do we want to change it in any way? Do we want to optimize it? Then we want to go and apply that back in.

 So you’re right, this is a big iceberg because there’s so many pieces connected to it and the spec for data sheets, the ones we’ve seen, are large and so what we’ve done is ask how do we grab the core pieces of this and implement and create the starting point for it? And then scale over time: add versioning, being able to add your own custom schema to it. But what is the minimum piece that we can put into this system and then make sure that it’s working the way we want it to?

 So it’s about decomposing the problem and saying which ones do we want to prioritize first? For us, we’re spending a lot of time just looking at the data collection methodologies first because there’s so much of that going on and at the same time, what is the minimum part of the data sheet spec that we want to go and put in and then let’s start iterating together on that.

Sam Charrington: [00:22:41] It strikes me that these will be most useful when there’s kind of broad industry adoption or at least coalescence around some standard whether it’s a standard minimum that everyone’s doing and potentially growing over time. Are you involved in or aware of any efforts to create something like that?

Mira Lane: [00:23:02] Well I think that that’s one piece where it’s important. I would say also in a large corporation, it’s important internally as well because we work with so many different teams and we’re interfacing with, we’re a platform but we interface with large parts of our organization and being able to share that information internally, that is a really important piece to the puzzle as well. I think the external part as well, but the internal one is not any less important in my eyes because that’s where we are. We want to make sure that if we have a set of data, that this group A is using it in one way. If group B wants to use it, we want to make sure they have the rights to use it. They understand what it’s composed of, what its orientation is and so that if they pick it up, they do it with full knowledge of what’s in it. So for us internally it’s a really big deal. Externally, I’ve heard pockets of this but I don’t think I can really comment on that yet with full authority.

Sam Charrington: [00:24:03] I’m really curious about the intersection between ethics and design and you mentioned human-centered design earlier. My sense is that that phrase kind of captures a lot of that intersection. Can you elaborate on what that means for you?

Mira Lane: [00:24:20] Yeah, yeah. So when you look at traditional design functions, when we talk about human-centered design, there’s lots of different human-centered design frameworks. The one I typically pick up is Don Norman’s emotional design framework where he talks about behavioral design, reflective design, and visceral design. And so behavior is how is something functioning? What is the functionality of it? Reflective is how does it make you feel about yourself? How does it play to your ego and your personality? And visceral is the look and feel of that.

 That’s a very individual oriented approach to design and when I think about these large systems, you actually need to bring in the ecosystem into that. So how does this object you’re creating or this system you’re creating, how does it fit into the ecosystem? So one of the things we’ve been playing around with is we’ve actually reached into adjacent areas like agriculture and explored how do you do sustainable agriculture? What are some of those principles and methodologies and how do you apply that into our space? So a lot of the conversations we’re having is around ecosystems and how do you insert something into the ecosystem and what happens to it? What is the ripple effect of that? And then how do you do that in a way that keeps that whole thing sustainable? It’s a good solution versus one that’s bad and causes other downstream effects.

 So I think that those are changes that we have to have in our design methodology. We’re moving away from the one artifact and thinking about it from a here’s how the one user’s going to work with it versus how is the society going to interact with it? How are different communities going to interact with it and what does it do to that community? It’s a larger problem and so there’s this shift in design thinking that we’re trying to do with our designers. So they’re not just doing UI. They’re not just thinking about this one system. They’re thinking about it holistically. And there isn’t a framework that we can easily pick up, so we have to kind of construct one as we’re going along.

Sam Charrington: [00:26:28] Yeah, for a while a couple of years ago maybe I was in search of that framework and I think the motivation was just really early experiences of seeing AI shoved into products in ways that were frustrating or annoying. For example, a Nest thermostat. It’s intended to be very simple, but it’s making these decisions for you in a way that you can’t really control and it started me down this path of what does it mean to really build out a discipline of design that is aware of AI and intelligence? I’ve joked on the podcast before, I call it intelligent design, but that’s an overloaded term.

Mira Lane: [00:27:23] Totally is.

Sam Charrington: [00:27:24] But is there a term for that now or people thinking about that? How far have we come in building out a discipline or a way of thinking of what it means to build intelligence into products?

Mira Lane: [00:27:37] Yeah, we have done a lot of work around education for our designers because we found a big gap between what our engineers were doing and talking about and what our designers had awareness over. So we actually created a deep learning for designers workshop. It was a two day workshop and it was really intensive. So we took neural nets, convolutions, all these concepts and introduced them to designers in a way that designers would understand it. We brought it to here’s how you think about it in terms of Photoshop. Here’s how you think about it in terms of the tools you’re using and the words you use there, here’s how it applies. Here’s an exercise where people had to get out of their seats and create this really simple neural net with human beings and then we had coding as well. So they were coding in Python and in notebooks, so they were exposed to it and we exposed them to a lot of the techniques and terminology in a way that was concrete and they were able to then say oh, this is what style transfer looks like. Oh, this is how we constructed a bot.

 So first on the design side, I think having the vocabulary to be able to say oh, I know what this word means. Not just I know what it means, but I’ve experienced it, so that I can have a meaningful discussion with my engineer, I think that that was an important piece, and then understanding how AI systems are just different from regular systems. They are more probabilistic in nature. The defaults matter. They can be self learning and so how do we think about these and starting to showcase case studies with our designers to understand that these types of systems are quite different from the deterministic type of systems that they may have designed for in the past. Again, I think it comes back to culture, and we keep doing these workshops. Every quarter we’ll do another one because we have so much demand for it and we found even engineers and PMs will come to our design workshops. But kind of democratizing the terminology a little bit and making it concrete to people is an important part of this.

Sam Charrington: [00:29:48] It’s interesting to think about what it does to a designer’s design process to have more intimate knowledge of these concepts. At the same time a lot of the questions that come to mind for me are much higher level concepts in the domain of design. For example, we talk about user experience. To what degree should a user experience AI if that makes any sense? Should we be trying to make AI or this notion of intelligence invisible to users or very visible to users? This has come up recently in, for example, I’m thinking of Google Duplex when they announced that that system was gonna be making phone calls to people and there was a big kerfuffle about whether that should be disclosed.

Mira Lane: [00:30:43] Yeah.

Sam Charrington: [00:30:43] I don’t know that there’s a right answer. In some ways you want some of this stuff to be invisible. In other ways, tying back to the whole ethics conversation, it does make sense that there’s some degree of disclosure.

Mira Lane: [00:30:57] Yeah, absolutely.

Sam Charrington: [00:30:58] I imagine as a designer, this notion of disclosure can be a very nuanced thing. What does that even mean?

Mira Lane: [00:31:03] Yeah, yeah. And it’s all context dependent and it’s all norm dependent as well because if you were to look into the future and say are people more comfortable, I mean look at airports for example. People are walking through just using face ID, using the clear system and a few years ago, I think if you ask people would you feel comfortable doing that? Most people would say no, I don’t feel comfortable doing that. I don’t want that. So I think in this space because it’s really fluid and new norms are being established and things are being tested out, we have to be on top of how people are feeling and thinking about these technologies so that we understand where some disclosure needs to happen and where things don’t.

 In a lot of cases you almost want to assume disclosure for things that are very consequential and high stakes, where there is opportunity for deception. In the Duplex case you have to be thoughtful about that. So this isn’t one where you can say okay, you should always disclose. It just depends on the context. So we have this notion of consequential scenarios: scenarios where there’s automated decision making, where the stakes are high. Those ones we put a little bit more due diligence over and start to be more thoughtful about. Then we have other types of scenarios which are more systems-oriented and operationally oriented, and they end up getting treated differently, but we haven’t been able to create an exact way you approach every single scenario. So it is super context dependent and expectation dependent.

 Maybe after a while you get used to your Nest thermostat and you’re fine with the way it’s operating, right? So I don’t know. These social norms are interesting because they are, someone will go and establish something or they’ll test the waters. Google Glass tested the waters and there was a cultural response, right? People responded and said I don’t want to be surveilled. I want to be able to go to a bar and get a drink and not have someone recording me.

Sam Charrington: [00:33:21] Right.

Mira Lane: [00:33:22] So I think we have to understand where society is relative to the technologies we're inserting into it. So again, it comes back to: are we listening to users, or are we just putting tech out there? I think we really have to start listening to users. My group has a fairly large research component, and we spend a lot of time talking to people, especially in the places where we're going to be putting some tech, to understand what it's going to do to the dynamic and how they're reacting to it.

Sam Charrington: [00:33:52] Mm-hmm (affirmative). Yeah, it strikes me that maybe it's the engineer background in me that's looking for a framework, a flowchart, for how we can approach this problem, and I need to embrace more of the design mindset: every product, every situation is different, and it's more about a principled approach as opposed to a process.

Mira Lane: [00:34:18] Absolutely. It's more about a principled and intentional approach. So with everything that you're choosing, are you intentional about that choice? Are you very thoughtful about things like defaults? Because we know that people don't change them. How do you think about every single design choice and stay principled, intentional, and evidence-driven? We push this onto our teams, and I think some of them maybe don't enjoy working with us sometimes as a result, but we say, look, we're going to give you some recommendations that are principled, intentional, and evidence-driven, and if you don't agree, we want to hear back from you with your evidence and why you're saying this is a good or bad idea.

Sam Charrington: [00:34:59] Mm-hmm (affirmative).

Mira Lane: [00:35:00] That’s the way you have to operate right now because it is so context driven.

Sam Charrington: [00:35:04] I wonder if you can talk through some examples of how human-centered design, AI, all these things come together in the context of kind of concrete problems that you’ve looked at.

Mira Lane: [00:35:13] Yeah, I was thinking about this, because a lot of the work that we do is fairly confidential, but there's one I can touch on, which was shared at Build earlier this year: a meeting room device that we're working on that recognizes who's in the room and does transcription of the meeting. To me, as someone who is a manager, I love the idea of having a device in the room that captures the action items, who was there, and what was said.

 So we started looking at this, and we said, okay, let's look at different types of meetings and people, and let's look at categories of people that this might affect differently. How do you think about introverts in a meeting? How do you think about women and minorities? There are subtle dynamics happening in meetings that can reinforce certain stereotypes or relationships. So we started interviewing people in the context of this meeting room device, and this is research that's pretty well recognized, it's not novel research, but it reinforced the fact that when you put in something that monitors everyone in a room, certain categories of people behave differently, and you see larger discrepancies and impact with women, minorities, and more junior people. So we said, wow, this is really interesting, because as soon as you put a recording device in a room, it's going to subtly shift the dynamic. Some people might talk less, or feel like they're being observed, or, if there's a manager in the room along with the device, they'll behave differently. Does that result in a good meeting or a bad one? We're not sure. But it will affect the dynamic.

 And so then we took a lot of this research back to the product team and said, well, how do we design this with privacy first in mind, so that users feel empowered to opt into it? We've had discussions like that, especially around these types of devices where we've seen big impacts on how people behave. But it's not a hard guideline; there's not really a hard set of rules around what you have to do, because all meetings are different. You have brainstorming meetings that are more about fluid ideas, where you don't really care who said what, it's about getting the ideas out. You have ones where you're shipping something important and you want to know who said what, because there are clear action items attached. So trying to create a system that works across so many nuanced conversations and different scenarios is not easy.

 So what we do is run alongside the product team. While they're engineering, developing their work, we take the research we've gathered and create alternatives for them at the same time. We can say, hey, here's option A, B, C, D, and E. Let's play with these, and maybe we come up with a version that mixes them all together. It gives them options to think about, because again, it comes back to "I might not have time to think about all of this." So how do we empower people with ideas and concrete things to look at?

Sam Charrington: [00:38:35] Yeah, I think that's a great example of the complexity, or maybe complexity's not the right word, but the idea that your initial reaction might be the exact opposite of what you need to do.

Mira Lane: [00:38:51] Yep.

Sam Charrington: [00:38:51] As you were saying this, I was just like oh, just hide the thing so no one knows it’s there. It doesn’t change the dynamic. It’s like that’s exactly wrong.

Mira Lane: [00:38:58] You don’t want to do that. Don’t hide it.

Sam Charrington: [00:38:59] Right, right.

Mira Lane: [00:39:01] Yeah. And maybe that's another piece, sorry to interrupt, but one of the things I've noticed is that our initial reaction is often wrong. So how do we hold that reaction while giving ourselves space to explore other things, keep an open mind, and say, okay, I have to adjust and change? Hiding it would absolutely be an interesting option, but then you have so many issues with that, right? Again, it's about having an open mindset and being able to challenge yourself in this space.

Sam Charrington: [00:39:33] If we buy into the idea that folks working with AI need to be more thoughtful and intentional, and incorporate more of this design thinking into their work, do you have a sense for where this does, or should, or needs to live within a customer organization?

Mira Lane: [00:40:01] Yeah, and this is a terrible answer, but I think it needs to live everywhere, in some ways, because one thing that we're noticing is we have corporate-level things that happen. We have the AETHER board, an advisory board that looks at AI technologies and advises, and that's at a corporate level. That's a really interesting way of approaching it, but it can't live alone. The thing we've learned is that if we pair it with groups like mine that sit in the engineering context, that are able to translate principles, concepts, and guidelines into practice, that sort of partnership is really powerful, because we can take those principles and say, well, here's where it really worked and here's where it kind of didn't. We can also surface issues and say, we're grappling with this issue that you hadn't thought about. How do you think about this, and can we create a broader principle around it?

 So I think there's this strong cycle of feedback that happens. At the corporate level, you establish what your values, guidelines, and approaches are. In the engineering context, you have a team that can problem-solve and apply them. Then you can create a really tight feedback loop between that engineering team and your corporate team so that you're continually reinforcing each other, because the worst thing would be to have just a corporate-level thing that's pure PR speak. You don't want that.

Sam Charrington: [00:41:23] Right. Right.

Mira Lane: [00:41:24] The worst thing would also be to have it only at the engineering level, because then you'd have a very distributed mechanism of doing something that may not cohesively ladder up to your principles. So I think you need both, working off each other, to have something really effective. Maybe there are other things as well, but so far this has been a really productive and iterative experiment that we're doing.

Sam Charrington: [00:41:50] Do any pointers come to mind for folks that want to explore this space more deeply? Do you have a top three favorite resources or initial directions?

Mira Lane: [00:42:02] Well, it depends on what you want to explore. I was reading the AI Now report the other day. It's a fairly large report, around 65 pages, on the impact of AI across different systems and industries. So if you're looking to get up to speed on what areas AI is going to impact, I would start with groups like that, because I've found they're super thoughtful about how they go into each space, understand it, and bubble up some of the scenarios. If you're thinking about AI from an impact perspective, those types of resources are really interesting.

 On the engineering side, I actually spend a lot of time in a few Facebook groups. There are some big AI groups on Facebook, and they're always sharing the latest: here's what's going on, try this technique. That keeps me up to speed on what's happening, and also arXiv, just to see what research is being published.

 On the design side, I'm sort of mixed. I haven't really found a strong spot yet; I wish I had something in my back pocket to refer to. But on the theory side, what has been super interesting is going back to a few people who have written commentary on sustainable design. So I refer back to Wendell Berry quite a bit, the agriculturalist and poet, actually, who has really introspected on how agriculture could be reframed. Ursula Franklin is also a commentator from Canada. She did a lot of radio broadcasts a long time ago, and she has a whole series on technology and its societal impact. If you replaced a few of those words with some of our new-age words, it would still hold true. So I think there's a lot of theory out there, but not a lot of "here are really great examples of what you can do," because we're all still feeling out the space and we haven't found perfect patterns yet that you can democratize and share broadly.

Sam Charrington: [00:44:18] Well, Mira, thanks so much for taking the time to chat with us about this. It's a really interesting space, one that I enjoy coming back to periodically. I personally believe the intersection of AI and design is wide open, and it should and will be further developed. I'm looking forward to keeping an eye on it, and I appreciate you taking the time to chat with me about it.

Mira Lane: [00:44:49] Thank you so much, Sam. It was wonderful talking to you.

Sam Charrington: [00:44:52] Thank you.
