Talk 227 – AI for Accessibility Interview

Sam Charrington: [00:00:00] Today we’re joined by Wendy Chisholm, Lois Brady and Matthew Guggemos. Wendy is a principal accessibility architect at Microsoft and one of the chief proponents of the AI for Accessibility program, which extends grants to AI-powered accessibility projects within the areas of employment, daily life, and communication and connection. Lois and Matthew are co-founders, and CEO and CTO respectively, of iTherapy, an AI for Accessibility grantee and creator of the Inner Voice app, which utilizes visual language to strengthen communication in children on the autism spectrum. In our conversation, we discuss the intersection of AI and accessibility, the lasting impact that innovation in AI can have for people with disabilities and society as a whole, and the importance of programs like AI for Accessibility in bringing projects in this area to fruition.

This episode is part of a series of shows on the topic of AI for the benefit of society that we’re excited to have partnered with Microsoft to produce. Before we proceed, I’d like to thank Microsoft for their support and their sponsorship of this series. Microsoft is committed to ensuring the responsible development and use of AI and is empowering people around the world with intelligent technology to solve previously intractable societal challenges spanning sustainability, accessibility and humanitarian action. Learn more at microsoft.ai.

Enjoy the show.

Sam Charrington: [00:02:06] All right, everyone. I am here with Wendy Chisholm, Lois Brady, and Matthew Guggemos. Wendy is a principal accessibility architect at Microsoft, and Lois is co-founder and CEO of iTherapy. And Matthew is a co-founder and CTO at iTherapy. Welcome, all of you, to This Week in Machine Learning and AI.

Wendy Chisholm: [00:02:33] Thank you.

Lois Brady: [00:02:33] Thank you.

Matthew Guggemos: [00:02:34] Thanks for having us.

Sam Charrington: [00:02:35] Fantastic. Fantastic. I think five people is the largest interview that I’ve done. But we had the advantage of all being in the same room. In this case, I’m seated with Wendy, actually in a studio in Redmond, and Lois and Matthew are joining us remotely. And today we’ll be talking about some of the work that Microsoft is doing around AI for accessibility. I’m really looking forward to digging into that. But before we do, I’d like for our audience to get to know each of you a little bit better and what you’re working on. So, let’s start with you, Wendy. How did you get involved in working on this intersection of accessibility and artificial intelligence?

Wendy Chisholm: [00:03:20] Yeah. It’s a fun story. It starts 25 years ago when I was working on my computer science degree. I’ve just always been very curious about not only technology, but the humans that use it. And if we’re building technology and no one’s using it, why are we doing it? So, I was studying computer science and psychology, and one of my professors asked me to tutor a student in statistics. And I said yes and I met him, and he was blind. I had not ever met anyone who was blind before. I wasn’t really sure what I was doing, but I was very curious to learn about his experience and what I could do.

So, we got very creative and used Legos to teach bar graphs, and I used a pin to poke holes in a piece of paper to create a scatter plot. And with the backdrop of my computer science degree, I was like, “There’s got to be something that computers can do.” And that really has been my trajectory ever since. It’s taken me around the world. I got to work at MIT with Tim Berners-Lee. I got to write an O’Reilly book and go to Foo Camp. I got to do a lot of really cool things.

So, what that eventually did was, by understanding the diversity of human experience worldwide and how culture and language can play into that experience, and trying to shift how people think about and understand the experience of the billion people on the planet who have disabilities, myself included, it’s just been a very interesting thing. So, what I ended up doing was consulting for a while, and helping companies try to make their websites more accessible. And I realized that people get very excited about it; they’d think, “Yeah, I want to do this. It’s a great idea.” But then not a lot would change. So, I decided to join a large corporation to understand how those decisions are made, and quickly learned that it’s the tooling that can really help people make better decisions. And a lot of times, it’s not that people are making decisions that make the world less accessible out of malice. Just a lot of folks don’t know. So, the more that we can infuse our engineering systems with the knowledge and information that’s needed to help people make good decisions, then it’s more likely they’ll end up with more accessible outcomes.

So, that’s kind of what I’ve been doing the last while, and that’s what led me to AI and machine learning. ‘Cause to me the real juice in this, again, is how we bring technology together with humans, and it’s about helping people make good decisions. So, that’s kind of how I ended up there.

Sam Charrington: [00:06:10] Okay. Maybe to contextualize what iTherapy is up to, I’ll have you talk a little bit about the AI for Accessibility project at Microsoft, and your role in particular with that project.

Wendy Chisholm: [00:06:26] Yeah. What’s super cool about the program is that there’s been this long history of innovation … for people with disabilities that ends up impacting all of us. And my favorite example of that is that smartphones wouldn’t exist if it weren’t for people with disabilities. And the reason is that when you use an onscreen keyboard to text or to type, that keyboard was actually created a long time ago for people with physical disabilities who couldn’t actually type, whether from weakness or loss of a limb or anything. And now it’s being used by all of us, because we’re not carrying keyboards around with us so we can type on our phones. The phones create such a limiting experience for us, and that’s what technology for people with disabilities is about: really, what abilities do you have, and let’s amplify those.

So, because of that long history of innovation, what we’re doing in the AI for Accessibility program is just funding projects that are working on that next wave of innovation. So, I get to spend my time talking to people who are working on that next wave and then kind of figuring out who to fund and how to piece that all together into this … what this future is gonna look like for all of us. Because like I said, when we innovate for people with disabilities, we end up impacting all of us. So, I get to kind of look into the future and place some bets, basically.

Sam Charrington: [00:08:02] That sounds like a ton of fun.

Wendy Chisholm: [00:08:03] It is, yeah. I get to talk to really smart people like Lois and Matthew, and give them money and support to do cool stuff.

Sam Charrington: [00:08:12] Lois, can you share a little bit about your background and iTherapy?

Lois Brady: [00:08:18] Absolutely, Sam. Like Wendy, our journey started about 25 years ago when I became a speech language pathologist, working with people who had communication challenges in one way or another. At that time, technology was all about getting the old technology from the general population and then we had to adapt it to our students who needed communication technology. When the iPad came out, that kind of flipped the model for us, and it was very impressive. And I noticed how a lot of my students who had significant challenges, complex communication needs were gravitating towards the iPad, and it encouraged me to write a book called Apps for Autism.

So, as I was writing that book, I noticed that there were certain features that kids would attend to and use, and they’d use them without us even prompting them, and there were certain features that they didn’t really care about. So, taking all of those things that all my students really loved, we put them into Inner Voice to capture their attention and teach them communication, and also just to give kids with complex communication needs or complex sensory needs the latest, greatest technology and just let them communicate more or less like everyone else. So, we created the app Inner Voice, and then when we saw the artificial intelligence grant, we thought it was absolutely perfect to keep our students and our ideas rolling along, make it easier, faster, more fluent, and really put our kids with communication challenges at the forefront of technology instead of 10 years behind everyone else, using all of the equipment that everyone else no longer uses. So, this has been wonderful for us, because as our kids use this, they really draw in everybody around them thinking, “Oh, this is so cool. I want to see what you’re doing. I want to come and use it also.” So, it’s been a big boost for us.

Sam Charrington: [00:10:24] And I understand that you’re also trained in animal assistive therapy and you have a therapy pig named Buttercup. Is that correct?

Lois Brady: [00:10:36] Absolutely. Buttercup is more famous than anything else I’ve ever done. He’s absolutely wonderful. And I chose a potbelly pig because I specifically work with and specialize in autism, and a lot of kids with autism definitely have trouble with dogs, because they’ve heard them bark before or they’ve been jumped on, or something’s happened, so they already have a preconceived notion about a dog. Or a cat. They may have gotten scratched. But a pig, they had absolutely no idea what to do with a pig, and it was breaking new ground. And it’s just a matter of getting something that captures their attention and using it to create communication opportunities, much like the iPad.

When the iPad came along, it was the exact same thing. We’re using this high interest item, whether it’s a pig or an iPad, to capture their interest and then teach them communication with it. He’s great. He’s great.

Sam Charrington: [00:11:34] I bring that up mostly because I have a daughter who is studying psychology and loves animals, and her goal is to do animal assisted therapy. And I can now tell her that I interviewed an animal assisted therapist and finally get her to listen to one of my podcast episodes.

Wendy Chisholm: [00:11:56] Perfect. We’re here to help.

Sam Charrington: [00:11:58] So Matthew, how about you? Can you speak a little bit about your background and the … maybe go and take us into a little bit more detail into the iTherapy app and how it uses AI?

Matthew Guggemos: [00:12:12] Absolutely. My background, first actually, was as a musician. So, I am actually a drummer. I really have been interested in just how you learn skills. Drumming … takes a long time to learn, takes a lot of study, and that’s very similar to how speech and language develop. Speech and language, those are probably two of the most difficult skills people can learn. So, I was really fascinated by how you actually learn to communicate using words. And just like Lois, I specialized with people with autism. I’m a certified autism specialist in addition to being a speech pathologist, and early on when I started working with kids with autism, one of the most difficult things about teaching them skills was to show them things that captured their interest, and interest leads to learning. Because if you’re not interested in something, it’s hard to pay attention. It’s hard to learn things you don’t pay attention to.

When the iPad came out, I noticed, “Wow,” you could really capture people’s interest with the iPad, just like Lois was saying. So, the first version of Inner Voice just used facial recognition technology to model speech. This is kind of similar to how a musician learns. You go to a drum teacher and he shows you something, you copy it, and then you practice. And it was really interesting how I could get tons of kids to imitate what was on the screen, but not necessarily what I would model for them. So, the way we’ve woven in the AI component using the Azure vision services, to me, really, it’s great. We’ve been testing it with users now and it kind of mimics the way people learn language.

For example, you look at something. Let’s say you’re a child. You look at something, you point to it, your parents call it … you point to an animal, your parents say, “That’s a pig,” and then the child says, “Pig.” And then they know what a pig is. So, what we’ve done with the vision services is that now a user, let’s say a kid who’s using Inner Voice, can use this feature we’ve called Visual Language, and he or she can take a picture of a pig, for example, and what will happen is the photo will get sent to the cloud and then that’ll get paired with text. And the text appears back on the screen, and the avatar, which is from our original version, will read the text. So, they can see themselves saying the word, and then pair that word with the image and the written text. And this is something that we’ve developed called Multisensory Semiotics. So, semiotics is how you assign meaning to a symbol.

By using this AI technology, we’ve been able to pair speech and language with photos and it can be interest driven. So, let’s say a kid wants to know what something is in the room. They can take a picture of it and it can be labeled and spoken for him or her, and then they can imitate it and learn kind of through self motivation what things are called and how to label or describe things in their environment.
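
(To make the Visual Language flow above a little more concrete, here is a minimal sketch of the photo-to-cloud-to-spoken-text loop on iOS. This is not iTherapy’s actual code: it assumes the Azure Computer Vision “describe” REST endpoint with placeholder credentials, and it uses the system speech synthesizer in place of the InnerVoice avatar, which would normally display and voice the caption.)

```swift
import Foundation
import AVFoundation

// Hypothetical sketch of the Visual Language loop: photo -> cloud caption -> spoken text.
// The endpoint, key, and use of AVSpeechSynthesizer (instead of the InnerVoice avatar)
// are placeholders, not iTherapy's actual implementation.
let visionEndpoint = "https://<your-resource>.cognitiveservices.azure.com"
let visionKey = "<subscription-key>"
let synthesizer = AVSpeechSynthesizer()

func describeAndSpeak(_ imageData: Data) {
    // Azure Computer Vision "describe" returns natural-language captions for an image.
    guard let url = URL(string: "\(visionEndpoint)/vision/v3.2/describe?maxCandidates=1") else { return }
    var request = URLRequest(url: url)
    request.httpMethod = "POST"
    request.setValue("application/octet-stream", forHTTPHeaderField: "Content-Type")
    request.setValue(visionKey, forHTTPHeaderField: "Ocp-Apim-Subscription-Key")
    request.httpBody = imageData

    URLSession.shared.dataTask(with: request) { data, _, _ in
        // Pull the top caption (e.g. "a pig standing in the grass") out of the JSON response.
        guard let data = data,
              let json = try? JSONSerialization.jsonObject(with: data) as? [String: Any],
              let description = json["description"] as? [String: Any],
              let captions = description["captions"] as? [[String: Any]],
              let caption = captions.first?["text"] as? String else { return }

        DispatchQueue.main.async {
            // Show the text and read it aloud, pairing the image with the written and spoken word.
            print(caption)
            synthesizer.speak(AVSpeechUtterance(string: caption))
        }
    }.resume()
}
```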

Sam Charrington: [00:15:53] Wendy, I’m curious, when you think about, again, this intersection of AI and accessibility, on the one hand AI presents an opportunity to allow us to, I guess, connect more directly or to provide better support for people that need it. On the other hand, there’s also a risk that the people that need support get left behind or left out of the innovation that’s happening with AI, and I’m curious how you think about those two. Are there other factors that you think about? What’s the perspective that you bring to supporting organizations that are trying to do work in kind of the confluence of these fields?

Wendy Chisholm: [00:16:41] That’s a great question. There are several different ways that we can go with that. On the one hand, the people being left behind and not included is near and dear to my heart, because as Lois talked about, technology and how people use it, it’s really … I guess what I really want to focus on is that it’s so important, as we’re building new technology, that we really bring everyone who’s going to be using it around the table and involve them in the process. And I think that’s why, when we see these amazing innovations, like the onscreen keyboard, that was because someone, a developer, was working with someone with a disability. And so I think one of the things I’m excited about is that we’re specifically looking for projects that are firmly grounded in the community that they’re intending to benefit. And ideally, I want to be a shark tank of entrepreneurs with disabilities.

So, part of that is just making sure that folks have the skills to really contribute. It also means that we’re looking at data sets and we’re looking at bias in data sets. And I think one of the things that we’re excited about is the photographs that are taken by people who are blind or have low vision. They’re not gonna be perfectly framed. So, how are they gonna … how does that affect the models and the training that’s been done before? So, we want to make sure that there’s a good diversity there so that folks are included. And then we get all the good innovative juicy stuff, too. And I just, I have so many great stories about that where when you really include people and bring them around the table, that you just … you get to do some really good stuff.

Sam Charrington: [00:18:34] I’d love to hear some of those stories.

Wendy Chisholm: [00:18:36] Yeah. And I think the other thing about it, too, is just … so, Saqib, who’s one of our engineers that worked on the Seeing AI project, is still working on that. I’m not sure if you’re familiar with Seeing AI.

Sam Charrington: [00:18:49] Tell us about that, please.

Wendy Chisholm: [00:18:50] So, it’s an iPhone app right now, and there’s a really great demo of Saqib and one of my colleagues, Anne Taylor. And the reason I love this demo is that … so, she’s blind. She’s using Seeing AI. And he writes on a piece of paper, “Accessibility is awesome.” So, she’s able to feel where the paper is and take a picture of it. The text, the handwriting, is recognized and read out loud to her. And for her, that means now that she can read cards or business cards, or letters from family and friends. I think it’s really, it’s empowering. And I think that’s really cool. And I think when you start looking at the dream that Saqib has, of when he’s walking around with a friend who knows him and his preferences, his friend is able to recognize what he’s interested in and maybe tell him, “This is what’s changed since the last time we were here,” or give some color about what’s happening in the space.

But when you look at that, that’s just another eyes free interface. And it’s something that I think we’d all use. When I’m touristing in a new place and I don’t know the language, I don’t want to be looking at my phone for directions, ’cause I don’t want to appear to be a tourist. And I want some of that feedback in my ear, and getting it tailored to me, that’s something we can all use. So again, that’s kind of what I feel the power is for us, is … and that goes kind of back to the decision making as well. In those instances, we’re helping people make better decisions, ’cause we’re giving them information that they didn’t have before. So, it doesn’t really matter if you’re disabled or not, right? You want information so you can make better decisions. And that’s really what this is about, I think.

Sam Charrington: [00:20:37] Yeah, I love this recurring theme of the innovations that we are creating to support people with disabilities, kind of coming back full circle and impacting the way we use technology, the way everyone uses technology.

Wendy Chisholm: [00:20:57] Exactly. And I mean, where I really hope the world goes is that by making sure that we’re all at the table and contributing, we’re not creating barriers and disability kind of disappears. Because one of my favorite quotes is, “It’s the stairs that make the building inaccessible, not the wheelchair.” And to me, that’s the beauty of it, right? If we can really design the world such that everyone can participate, it’s just not really a thing anymore. We’re all benefiting. We’re all benefiting from these connections with other people. Again, that’s the juicy part with AI, because our devices now have so many sensors in them and can give us information that we may miss, whether we’re eyes busy or we’re blind. There’s just so much opportunity there.

Sam Charrington: [00:21:47] Absolutely. Lois, can you elaborate on the kind of experiences you’ve seen with the users of the Inner Voice app and the kind of impact it’s had for them?

Lois Brady: [00:21:59] Absolutely. We’ve been using this in our clinic and in school districts. And we’re even branching out now into hospitals with folks who may have had strokes or head injuries. But initially it was made for video self-modeling, where you take a picture and you see yourself producing the target language or the target word, and then we added in the vision, where they can take a picture, just like Wendy said. It’s amazing. They can take a picture of words or a thing, and then the avatar says what that is, and across the board, from the youngest student we have, who is probably around two, to some of the oldest, who are in their 80s, it’s amazement. Their mouths just drop, and it becomes then … I call it like an electric communication environment. Now everyone’s coming over, asking about it, wanting to use it. And it just produces this place where now we want to talk, and the students want to talk about what they’re doing, and then our students start adding in characters that they like and they make their characters talk.

So, it’s all about just providing these wonderful opportunities for not only the student to talk, but people to come in and say, “Oh my god, show me what you’re doing. What’s going on?” Where our kids quite literally never had those opportunities before, so they’re leading the pack in that matter, because they do have all this wonderful AI embedded because of the Azure services. So, they’re leading the way and nobody’s ever seen this. So, they get to be the cool kid. And I think Wendy hit it a little bit before when she was talking about the universal design, because then I went and used it with a student who was bilingual and didn’t have any English at all, and using a translator app, we were able to put in one language and speak a different language. So, the technology’s just absolutely amazing and we can almost hit any kind of a challenge and overcome it right now. And it’s never happened that way for us before.

Technology was something that was cumbersome and hard to use, but no more. Everyone has it in the palm of their hands. And again, our kids are like leading the way at this point in time.
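
(A minimal sketch of the translate-then-speak step Lois mentions, assuming the Azure Translator v3.0 REST API as the backend; the transcript does not say which translator app was actually used, and the key, region, and target language below are placeholders.)

```swift
import Foundation
import AVFoundation

// Hypothetical sketch of "type in one language, hear it spoken in another", assuming the
// Azure Translator v3.0 REST API. The key, region, and target language are placeholders,
// and this is not necessarily how the app mentioned in the conversation worked.
let speechSynthesizer = AVSpeechSynthesizer()

func translateAndSpeak(_ text: String, to targetLanguage: String = "es") {
    guard let url = URL(string:
        "https://api.cognitive.microsofttranslator.com/translate?api-version=3.0&to=\(targetLanguage)")
    else { return }

    var request = URLRequest(url: url)
    request.httpMethod = "POST"
    request.setValue("application/json", forHTTPHeaderField: "Content-Type")
    request.setValue("<translator-key>", forHTTPHeaderField: "Ocp-Apim-Subscription-Key")
    request.setValue("<resource-region>", forHTTPHeaderField: "Ocp-Apim-Subscription-Region")
    request.httpBody = try? JSONSerialization.data(withJSONObject: [["Text": text]])

    URLSession.shared.dataTask(with: request) { data, _, _ in
        // The response is an array of results, each carrying a list of translations.
        guard let data = data,
              let results = try? JSONSerialization.jsonObject(with: data) as? [[String: Any]],
              let translations = results.first?["translations"] as? [[String: Any]],
              let translated = translations.first?["text"] as? String else { return }

        DispatchQueue.main.async {
            let utterance = AVSpeechUtterance(string: translated)
            // Pick a system voice for the target language, if one is installed.
            utterance.voice = AVSpeechSynthesisVoice(language: targetLanguage)
            speechSynthesizer.speak(utterance)
        }
    }.resume()
}
```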

Sam Charrington: [00:24:23] Matthew, can you share a little bit about your experience in using this … using AI as a technologist, building it into the application? What was your background with regards to AI and how did that inform the way you incorporated it into the app?

Matthew Guggemos: [00:24:44] Well, I’ve always loved technology. I’ve always been someone who just loves to read about technology or learn how it works. My first and foremost background, in terms of, I guess, scientific background, is communication sciences. So, I looked into how AI could be used for that field. And probably the one that first caught my eye, and how Lois and I developed our Visual Language feature, was the character recognition, the feature I was talking about before, pairing images to text. So, I thought, “Wow, that is a fantastic way to help with literacy and possibly help people who maybe just want to learn a different language.” It could be applied, if you have no disability at all, it could be applied to that.

The other aspect was I got really fascinated by the smart bot technology that particularly Microsoft has now. So, they have a number of these services, text to speech and then the language understanding and Q&A frameworks. It’s almost sort of like they’re … you can see a lot of this in their Cortana app that they’ve released. And that, tying back to my background as a musician, you have to practice things to be good at them. And I thought, “What a great way to make practice motivational for kids or anyone by …” you can make a bot that will interact with you and you can ask it questions. You could find out information. And one of the trickiest things to teach any person with communication challenges is to initiate a communication exchange. But a bot is kind of a friendly place to start. You can say, “What’s the weather? What time is it?” Or “What’s a platypus? What’s a potbelly pig?” And the … I had to tie that in.

So, the bot can come back with an answer, and it’s motivating, because being motivated really is a huge factor in communication. And AI can answer infinite questions about a subject that maybe an individual’s only interested in. So, I think it’s sort of a long answer to your question, but there’s a host of reasons why I got into AI and particularly for communication sciences.
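
(For a rough sense of the question-and-answer piece Matthew describes, here is a minimal sketch against a QnA Maker-style knowledge base. The endpoint, knowledge-base ID, and key are placeholders; the transcript does not detail which Azure bot or Q&A services Inner Voice actually uses or how.)

```swift
import Foundation

// Hypothetical sketch of asking a QnA Maker-style knowledge base a question
// ("What's a potbelly pig?") and getting back an answer that could then be spoken.
// Endpoint, knowledge-base ID, and key are placeholders, not iTherapy's setup.
func askBot(_ question: String, completion: @escaping (String?) -> Void) {
    let endpoint = "https://<your-resource>.azurewebsites.net"
    let kbId = "<knowledge-base-id>"
    guard let url = URL(string: "\(endpoint)/qnamaker/knowledgebases/\(kbId)/generateAnswer") else {
        completion(nil); return
    }

    var request = URLRequest(url: url)
    request.httpMethod = "POST"
    request.setValue("application/json", forHTTPHeaderField: "Content-Type")
    request.setValue("EndpointKey <endpoint-key>", forHTTPHeaderField: "Authorization")
    request.httpBody = try? JSONSerialization.data(withJSONObject: ["question": question])

    URLSession.shared.dataTask(with: request) { data, _, _ in
        // The service returns a ranked list of answers; take the top one.
        guard let data = data,
              let json = try? JSONSerialization.jsonObject(with: data) as? [String: Any],
              let answers = json["answers"] as? [[String: Any]],
              let answer = answers.first?["answer"] as? String else {
            completion(nil); return
        }
        completion(answer)
    }.resume()
}

// Example: askBot("What's a potbelly pig?") { print($0 ?? "No answer found") }
```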

Wendy Chisholm: [00:27:12] We’re gonna see how many times we can say pig in this podcast.

Sam Charrington: [00:27:19] Did you specifically work on the integration of the Azure services into the app? Or was that work done by other folks or via Microsoft?

Matthew Guggemos: [00:27:35] Oh, so we … our role specifically, Lois and I do this, we work on the UX/UI design first and foremost, and then we work closely with a developer who has helped us design Inner Voice from the beginning. His name is Junichi Fujita. He’s an amazing guy. So, we went through some of these tutorials about how to integrate the Azure services into our existing code, because basically you’re leveraging the cloud-based services from our iOS code, because currently Inner Voice is on iOS. So, we contracted with Junichi, who did the coding for us, because I’m actually not a coder. I’m a speech pathologist. So, we come up with the designs, find the technology, make sure it’s feasible for what we want to do, and then he is brilliant at being able to translate that pretty much exactly to our specifications. So, we’ve worked with him for a long time.

Sam Charrington: [00:28:36] Are you aware of any challenges or impediments that you and he ran into in incorporating AI and these cognitive services into the application?

Matthew Guggemos: [00:28:50] They were actually stunningly easy to integrate. The biggest problem was the UI/UX stuff. So, on that one we kept going back and forth. “Well, how should it look? What’s the easiest way for it to be used by users?” ‘Cause we do a lot of user testing, so we don’t just think, “Hey, I think people will like this.” We actually design something based on observation and interviews with people, and then we try it with them, and then they either like it or think, “Oh, this is no good.” And then we go back and redesign it. So, most of the challenges were just in that aspect, but in terms of integrating the Azure services into our code, it was really easy.

Wendy Chisholm: [00:29:30] Yeah, they’ve had an easy time, which is great. I think when I looked at some of our other grantees, I think they’re gonna be pushing the limits of the technology. In particular, when we look at some of the work that Zyrobotics is doing, they’re looking at speech recognition for students with nontypical speech patterns. So, they’re really having to train the data and expand what it can recognize.

We’ve got another grantee from the University of Iowa, and she’s using a camera to help athletes who are blind and running around jogging tracks. So, the cool part about that one is with a jogging track, you have clear lines, in most tracks, right? Kind of indicating where the lanes are. So, once they get those recognizers working in real time, they’ll be able to tell someone they’re starting to veer out of their lane. Now, the problem is getting that working fast enough on the device in real time so that you’re not getting it like, “Oops,” a few seconds later and you’ve run into somebody. And these are athletes who are actively competing. And if we can get … solve some of those issues, the independence is great.
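
(As a rough illustration of the real-time constraint Wendy describes: the vision side, finding the lane lines in each camera frame, is the hard part, but the per-frame decision step that follows it might look something like this sketch. The offset input, thresholds, and debounce window are assumptions for illustration, not the Iowa team’s actual design.)

```swift
import Foundation

// Hypothetical sketch of the per-frame decision step for lane-keeping feedback.
// Assumes some on-device vision model already reports the runner's lateral
// offset from the lane center, normalized so that -1.0 and 1.0 are the lane
// edges; the thresholds and debounce window are made-up values.
struct LaneKeeper {
    var warningThreshold = 0.7      // start warning at 70% of the way to an edge
    var minSecondsBetweenCues = 1.0 // don't repeat the cue on every frame
    private var lastCueTime = Date.distantPast

    // Called once per camera frame; returns a spoken or haptic cue if one is needed.
    mutating func update(lateralOffset: Double, at time: Date = Date()) -> String? {
        guard abs(lateralOffset) >= warningThreshold,
              time.timeIntervalSince(lastCueTime) >= minSecondsBetweenCues else {
            return nil
        }
        lastCueTime = time
        return lateralOffset > 0 ? "Drifting right, move left" : "Drifting left, move right"
    }
}

// At roughly 30 frames per second, the detection plus this decision step has to
// finish in well under 33 ms per frame for the cue to arrive before the runner
// has actually left the lane.
```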

Sam Charrington: [00:30:45] That’s incredible. I happen to live very close to a school for the blind and I see their track all the time and they’ve got these … I don’t know if listeners have ever seen these, but they’ve basically got these guide wires along the lanes-

Wendy Chisholm: [00:31:01] Yeah, exactly.

Sam Charrington: [00:31:01] So, they can still participate in the activity, but I can envision that bringing AI to a device the athlete can wear would totally eliminate the need for a specialized setup. They could compete with other athletes.

Wendy Chisholm: [00:31:25] Exactly.

Sam Charrington: [00:31:26] And be on a level playing field with the help of a model that’s running on a phone.

Wendy Chisholm: [00:31:31] Exactly. Yeah. So, there’s gonna be some really hard challenges with that one. And I can’t talk yet about this next round, but we’re in the midst of interviewing our … basically what we do, we kind of are operating like a shark tank. So, after we review a bunch of the applications, we’ll invite them in for a pitch meeting, just to kind of meet folks and get a sense of what they’re really doing. And so I’m very excited about some of the ones we’re hearing this week. This is pitch week for us. So, we’ll be announcing in January, then, this kind of next round.

But, again, now that there’s some maturity to the program, and it helps that Saqib has been out there talking about it, too, we’re getting some really cool applications. I think we were very lucky with iTherapy and Zyrobotics in our first round, and now it’s really growing. So, we’ve got some fun things in the works.

But yeah, and I think, too, there are just so many opportunities to use the Internet of Things and all these sensors. For my own part, I have PTSD, and so I have so many sensors kind of monitoring different aspects of heart rate and trying to predict just how I’m feeling. And I still have not been able to pull together a dashboard that brings together all my calendar and health data, and mood data, and all of this stuff. I’m looking forward to some of what we can do. There’s just so much more that we know, and there are so many patterns that we can recognize. And then if we can kind of pull that together, again, it’s about the decisions that it allows us to make. So, I’m really excited about that.

Sam Charrington: [00:33:17] It’s funny that you mention patterns, because that’s the word that was floating in my mind around this next question, and that’s really about the patterns that you see as you work with organizations that are using AI in an accessibility context. And I think the broader part of the question is around … as we’ve started to work with accessibility in a broader computing context, we’ve developed specifications and guidelines that, at this point, are fairly well understood and codified. And I’m curious do you see AI for Accessibility evolving in a similar way or is there not a need for that? How do you see that evolving?

Wendy Chisholm: [00:34:14] That’s a really great question. My work at MIT was on some of those standards.

Sam Charrington: [00:34:20] Okay.

Wendy Chisholm: [00:34:21] That’s right in my wheelhouse. I’m not sure about that, honestly. I do think, though, as I’m reviewing … so, part of my time, too, is giving feedback to folks who … let me put it this way, I do a lot of educating about possibilities. So, I do spend time, when people are creating plans for how to use AI in their organizations, making sure that they’re considering all scenarios and not accidentally creating more barriers. So, I know there’s work going on with other types of guidelines, like there’s some work right now at W3C in terms of cognitive disabilities and some new suggestions on how to make websites more accessible for folks who have learning disabilities or even emotional disabilities. I am very curious to see where things go in terms of augmented reality and VR. I was looking at manufacturing, someone talking about manufacturing the other day and how to bring AI onto the manufacturing floor. And really looking at … there are opportunities here, I think, for people with disabilities to be employed in some of these new jobs, as long as people really consider how it’s being designed.

So for example, in a manufacturing scenario, getting feedback from a bot about how something is working, or being notified that, “Hey, we’ve noticed, we’ve detected that this run has some errors in it. We’re seeing some patterns.” And just making sure that if the information is being presented audibly, it’s also gonna be in text. What’s interesting to me is that I think a lot of the potential issues I’m seeing have already been documented, and it’s kind of just the same things over and over. And it doesn’t surprise me, because that’s kind of what I’ve seen in my career, right? I analyzed Java years ago and said, “Here are the things that we need to do to make sure that if you’re using Java, your applications are gonna be accessible.” And the concepts haven’t really shifted that much. I think if you have visual information, you need to make sure you also have auditory and tactile information, just because you never know the scenario someone’s gonna be in.

And again, I just, I’m gonna tie it back ’cause I really want to drive the point home that anytime you do that-

Sam Charrington: [00:36:45] The pig?

Wendy Chisholm: [00:36:45] Huh?

Sam Charrington: [00:36:45] The pig?

Wendy Chisholm: [00:36:46] Oh, the pig. Yeah. Oh my god.

Matthew Guggemos: [00:36:50] That’s the thematic continuity there.

Wendy Chisholm: [00:36:52] How can I integrate pig into that? That’s a good challenge. Because no matter what you’re doing, it could be used by a pig. No, I’m kidding. You never know the scenario that someone’s gonna be in. And you never know the kind of scenario, the environment that someone’s in, right? So, for captions, especially on that manufacturing floor, it’s gotta be really loud. So, I was actually really surprised that they were designing something that was audible without being visual, because I’m like, “Isn’t it gonna be noisy?” I think everyone is gonna be experiencing hearing loss on this floor. It was just surprising to me.

So yeah, I don’t know that we’ll have new standards, but you never know. It kind of depends on what evolves. Actually, now that I think about it, I think with data sets, I think we are gonna have to have some standards that clearly ensure that we have good diversity in all the conversations going on about bias. That’s a big part of it right there, is just making sure that we aren’t accidentally continuing to discriminate against people with disabilities, ’cause unfortunately, that is quite a reality, especially when you look at … yeah. Well, I’ll just say that. And that’s part of the culture change I think AI can really help with, is ensuring that we’re supporting a culture of more diversity.

Sam Charrington: [00:38:10] I think one of the things that’s most exciting about all the work that’s happening in so many places about AI for social good, and various aspects of it, is just how intersectional it is. The issues, the folks that are working on AI ethics and AI bias, and now the work that we’re discussing here around accessibility, it all ties in in so many ways. And I guess what’s bringing that thought to mind is just that I have lots of conversations about bias, bias in data sets, bias in AI systems, and to think about how important that is in this context, and then how, as we overcome those issues and create new technologies here, that feeds into the technologies that we all benefit from. I can’t help but think it creates exciting opportunities for folks that want to kind of jump into this field.

Wendy Chisholm: [00:39:11] Well, that’s the thing and I think that’s, when I’ve been talking … so, some of our applicants aren’t as familiar with disabilities. And that’s great. They’re familiar with AI and machine learning and they can see, “Oh, this is how I can get the data you’re looking for.” And that’s really a very cool thing, is that if we can bring together people who are looking for ways that their work in AI and machine learning can impact humans all over the planet, I think that’s a very exciting opportunity, where we can really start making those partnerships and bringing people together, kind of matchmaking. Like, “Hey, we see over here that this community has this need. Is there anyone out there doing something similar and we can pair you together, and then you can test this out and continue to evolve your work in a way that’s really gonna be impactful for somebody.”

And one of the things we keep saying is we’re funding projects that are developed with or by people with disabilities. And again, that’s because when it’s grounded in reality, then you know you get something good. Lois and Matthew talk about how much of their time is around the UX, and I think that’s so important. That’s where the really good stuff comes from, is when you’re really looking at how people use this and how they are gonna integrate it into their daily lives, what it allows people to do.

Sam Charrington: [00:40:35] Lois, when you look forward, what do you see in terms of incorporating AI more deeply in what iTherapy’s doing?

Lois Brady: [00:40:48] Well, we have plans probably up and through the next 10 years to continue using this. We have great plans. It’s made a big difference with a lot of our students; however, one of the main pain points that they have is that by the time they come up with something to say, or try to join a conversation, the conversation’s done. It’s fluency. It’s the rate that they speak. And that’s across the board and across abilities and ages: if you’re using a device to speak and not using your own voice, it takes a lot longer. So that is probably the gap I would love to bridge, that someone who cannot use their natural voice can jump into a conversation and speak like anyone else. That’s gonna be difficult and that’s absolutely gonna be AI. Currently, it takes so long for people to either generate a sentence or search for the word they want to say that most folks check out of the conversation, and our folks don’t really get to have one-on-one conversations, unless they’re pre-made and scripted.

So, I think my ultimate, ultimate goal is that folks who have challenges speaking can speak like just about anyone else.

Wendy Chisholm: [00:42:12] Yeah, that’s a really beautiful moonshot. There are a lot of these things in AI where we look for that real-time feedback, like with the jogging track. And here, real-time speech generation. And we see that in a lot of scenarios and I think that’s … yeah, I agree. That’s the vision. That’s a good vision.

Sam Charrington: [00:42:32] And Matthew, what are you most excited about from an AI perspective?

Matthew Guggemos: [00:42:39] Well, we are looking into coming up with some diagnostic tools, which I think are going to be really cool. We’re hoping to come up with … we’re just in the preliminary pieces of this now. But using AI to analyze data, possibly through a Q&A framework, and to help people create solutions for communication delays or disorders. So basically, one of the ways that we, as speech pathologists, evaluate people now is through kind of a paper-and-pencil test. Then you have to score it, and it just becomes kind of cumbersome. So, what we’re hoping is … we’re taking a look at some kind of far-out ideas. In fact, I read this article in Wired magazine about a guy named Karl Friston who’s into this concept called free energy. So, we’re thinking of some ways to take that concept and apply it to diagnosing communication disorders, so I think that’s pretty exciting.

So, that should be in the next few years. That’s my, I guess, goal for the future, is to develop that, among other things.

Sam Charrington: [00:44:00] Nice. Nice. So Wendy, I gather that by the time folks get an opportunity to listen to this podcast, you’ll be just in the midst of finalizing your next round of grantees.

Wendy Chisholm: [00:44:15] Yeah.

Sam Charrington: [00:44:16] For folks that hear this and are interested in working with this program, what does the process tend to look like and when should they start looking for the next round to open up?

Wendy Chisholm: [00:44:28] We accept applications any time.

Sam Charrington: [00:44:30] Oh, really?

Wendy Chisholm: [00:44:30] Yeah.

Sam Charrington: [00:44:30] Okay.

Wendy Chisholm: [00:44:31] We’re constantly kind of reviewing them. And the link to apply-

Sam Charrington: [00:44:37] Feel free.

Wendy Chisholm: [00:44:38] … is really easy. It’s AKA.ms/grant. Super easy. So, AKA.ms/grant. Yeah. You pull together an application and we’ll review that. We’re really looking for projects that are gonna elevate the industry. We’re not as interested in funding just someone to develop an application. We want someone who is going to contribute something back, because for us, that’s really how we’re gonna raise the water and bring up the boat, you know? So, part of what we’re looking at is someone willing to contribute a data set or a model, or some other learning, whether a research paper or something like that. And obviously someone who’s grounded in community, whether by themselves or through partnership, and something that’ll be feasible to accomplish in a year.

So, while we really are looking for projects that will go beyond a year and are something that can be built on, the grants are year-long grants. So, there needs to be some deliverable in that year. But yeah, we encourage anyone to apply. I mean, literally anyone from anywhere. We’ve done a push in Latin America. We’re gonna be reaching out to Asia soon. We do have grantees in different regions. We have a few folks in Europe, one in India, and we’re really looking to expand that. I especially encourage folks in Asia and Latin America, because I recently learned, just to spend a moment on Latin America, that the unemployment rate in general for people with disabilities is usually twice that of people without, and the average age in Latin America is 30 years old. And to me, that’s prime employment age. So, we really want to shift those employment numbers.

To that point, we really are looking for applications and projects that are gonna have an impact in our focus areas, which are employment, daily life, and communication and connection. For example, Lois and Matthew, this is communication and connection. The jogging track, that’s daily life, because that’s out and about, being independent. And then for employment, another one we’ve got, from Vanderbilt, is a good example, where they’re building a bot that can help someone who has autism and is interested in practicing for job interviews and stuff like that. It does some really cool modification of the interview to help someone practice. So, yeah. I encourage anybody.

And NGOs, non-profits, individuals, companies. As long as you meet some of those criteria, go for it.

Sam Charrington: [00:47:33] And are they fixed-size grants? Or what’s the range?

Wendy Chisholm: [00:47:37] They’re not. Yeah, so right now we have a couple of categories. We’ve got one set of grants that’s just called Azure Credits, and so we’re just giving folks credits to play and learn and see what they come up with, make some progress on their idea. The other one is kind of an Azure Credits plus, and that one, the category that Matthew and Lois are in, is where we have a community, we have a lot of support in terms of education, and we’ve got folks on staff who can help with technical questions and integration with Azure. We’ve got great connections with cognitive services and Microsoft Research. So, we can really support folks in their development and give them cash for engineering costs or data acquisition, data cleaning, data labeling. Kind of some of those pieces just needed to build.

And right now, we’re not really saying much about how much it is. We’re new. We’re kind of experimenting with how much we give people and what results we get. So, yeah. We’ll see how it goes.

Sam Charrington: [00:48:46] Do you by any chance have a wishlist of ideas that you’d love to fund?

Wendy Chisholm: [00:48:53] I do. Yes. We have several moonshots that I’d love to see. I want to see a self-driving wheelchair. I want to see a dashboard for people with PTSD. Like I said, I want that dashboard for myself, one that pulls together all the data that I have and really learns and can start recommending and predicting. I want Saqib to get his digital assistant so that he can be out and about as he’s traveling and get information to make decisions. I’d love for my friend who is deaf … she recently went in for a surgery and her sign language interpreter was late, so she had to lip-read and it was very scary. She didn’t have all the information. It would have helped if she had had kind of a backup interpreter that could help her in that situation. And you know, the doctor couldn’t wait.

So, I think those are some of the things that we’re looking for. Yeah. And then I’m just curious to learn what other ideas people have out there. Like I said, there were some things I heard this week I hadn’t even thought were needed. And now I really want to fund them.

Sam Charrington: [00:50:00] Fantastic, fantastic. Well, Wendy, Lois, and Matthew, thank you so much for taking the time to chat with us about AI for Accessibility.

Wendy Chisholm: [00:50:11] Thank you.

Sam Charrington: [00:50:13] Lois and Matthew in particular, congratulations for what you’re doing. It sounds like a great application with a lot of impact and Wendy, great program. Thank you.

Wendy Chisholm: [00:50:23] Thank you. Yeah, thanks.

Matthew Guggemos: [00:50:24] Thanks so much.

Lois Brady: [00:50:25] Thank you, Sam.