Naila Murray obtained a BSE in electrical engineering from Princeton University in 2007. In 2012, she received her Ph.D. from the Universitat Autonoma de Barcelona, in affiliation with the Computer Vision Center. She joined Xerox Research Centre Europe in 2013 as a research scientist in the computer vision team, working on topics including fine-grained visual categorization, image retrieval, and visual attention. From 2015 to 2019, she led the computer vision team at Xerox Research Centre Europe and continued in that role after the lab's acquisition and transition to NAVER LABS Europe. In 2019, she became the director of science at NAVER LABS Europe. In 2020, she joined Meta AI’s FAIR team, where she served as a senior research manager. She now serves as director of AI research at Meta. She has served as area chair for ICLR 2018, ICCV 2019, ICLR 2019, CVPR 2020, ECCV 2020, and CVPR 2022, and program chair for ICLR 2021. Her current research interests include few-shot learning and domain adaptation.
There are few things I love more than cuddling up with an exciting new book. There are always more things I want to learn than time I have in the day, and I think books are such a fun, long-form way of engaging (one where I won’t be tempted to check Twitter partway through). This book roundup is a selection from the last few years of TWIML guests, counting only the ones related to ML/AI published in the past 10 years. We hope that some of their insights are useful to you! If you liked their book or want to hear more about them before taking the leap into longform writing, check out the accompanying podcast episode (linked on the guest’s name). (Note: These links are affiliate links, which means that ordering through them helps support our show!)

Adversarial ML
Generative Adversarial Learning: Architectures and Applications (2022), Jürgen Schmidhuber

AI Ethics
Sex, Race, and Robots: How to Be Human in the Age of AI (2019), Ayanna Howard
Ethics and Data Science (2018), Hilary Mason

AI Sci-Fi
AI 2041: Ten Visions for Our Future (2021), Kai-Fu Lee

AI Analysis
AI Superpowers: China, Silicon Valley, And The New World Order (2018), Kai-Fu Lee
Rebooting AI: Building Artificial Intelligence We Can Trust (2019), Gary Marcus
Artificial Unintelligence: How Computers Misunderstand the World (The MIT Press) (2019), Meredith Broussard
Complexity: A Guided Tour (2011), Melanie Mitchell
Artificial Intelligence: A Guide for Thinking Humans (2019), Melanie Mitchell

Career Insights
My Journey into AI (2018), Kai-Fu Lee
Build a Career in Data Science (2020), Jacqueline Nolis

Computational Neuroscience
The Computational Brain (2016), Terrence Sejnowski

Computer Vision
Large-Scale Visual Geo-Localization (Advances in Computer Vision and Pattern Recognition) (2016), Amir Zamir
Image Understanding using Sparse Representations (2014), Pavan Turaga
Visual Attributes (Advances in Computer Vision and Pattern Recognition) (2017), Devi Parikh
Crowdsourcing in Computer Vision (Foundations and Trends® in Computer Graphics and Vision) (2016), Adriana Kovashka
Riemannian Computing in Computer Vision (2015), Pavan Turaga

Databases
Machine Knowledge: Creation and Curation of Comprehensive Knowledge Bases (2021), Xin Luna Dong
Big Data Integration (Synthesis Lectures on Data Management) (2015), Xin Luna Dong

Deep Learning
The Deep Learning Revolution (2016), Terrence Sejnowski
Dive into Deep Learning (2021), Zachary Lipton

Introduction to Machine Learning
A Course in Machine Learning (2020), Hal Daume III
Approaching (Almost) Any Machine Learning Problem (2020), Abhishek Thakur
Building Machine Learning Powered Applications: Going from Idea to Product (2020), Emmanuel Ameisen

ML Organization
Data Driven (2015), Hilary Mason
The AI Organization: Learn from Real Companies and Microsoft’s Journey How to Redefine Your Organization with AI (2019), David Carmona

MLOps
Effective Data Science Infrastructure: How to make data scientists productive (2022), Ville Tuulos

Model Specifics
An Introduction to Variational Autoencoders (Foundations and Trends® in Machine Learning) (2019), Max Welling

NLP
Linguistic Fundamentals for Natural Language Processing II: 100 Essentials from Semantics and Pragmatics (2013), Emily M. Bender

Robotics
What to Expect When You’re Expecting Robots (2021), Julie Shah
The New Breed: What Our History with Animals Reveals about Our Future with Robots (2021), Kate Darling

Software How To
Kernel-based Approximation Methods Using Matlab (2015), Michael McCourt
Over the past couple of weeks I got to sit on the other side of the (proverbial) interview table and take part in a few fantastic podcasts and video conversations about the state of machine learning in the enterprise. We also cover current trends in AI, and some of the exciting plans we have in store for TWIMLcon: AI Platforms. Each of these chats has its own unique flavor and I’m excited to share them with you.

The New Stack Makers Podcast. I had a great chat with my friend Alex Williams, founder of The New Stack, a popular tech blog focused on DevOps and modern software development. We focused on MLOps and the increasingly significant convergence of software engineering and data science.

Minter Dialogue. I spoke with Minter Dial, host of the popular podcast Minter Dialogue and author of the book Heartificial Empathy: Putting Heart into Business and Artificial Intelligence. We had a wide-ranging conversation about the future of AI, AI ethics, and the state of AI in business.

Datamation. In this video chat with James Maguire for Datamation, we discuss some of the key trends surrounding AI in the enterprise, and the steps businesses are taking to operationalize and productionize machine learning.

Hope you enjoy the talks! If you're not already registered for TWIMLcon, we'd love to have you join us! Register now!
Another #TWIMLcon short with the wonderful Rosie Pongracz and Trisha Mahoney, from a Founding sponsor who you all know, IBM. Rosie is the Worldwide Director of Technical Go-to-Market and Evangelism for Data Science and Trisha is a Senior Tech Evangelist. We chat about the latest IBM research, projects, and products, including AI Fairness 360, which will be the focus of Trisha’s session at TWIMLcon. The IBM booth also promises to bring the heat, with a variety of open source projects and resources for the data science community. See you there! Sam Charrington: [00:00:00] All right everyone. I've got Rosie Pongracz and Trisha Mahoney from IBM on. Rosie is the Worldwide Director of Technical Go-to-Market and Evangelism for Data Science and AI and Trisha is a Senior Tech Evangelist in Machine Learning & AI, and they are both instrumental in IBM's support for the TWIMLcon: AI Platforms conference. Rosie and Trisha, it's so exciting to be able to talk to you. Rosie Pongracz: [00:00:27] We are excited to be here, Sam! So happy to be a supporter of TWIMLcon and all the great work you do. Trisha Mahoney: [00:00:33] Thanks for having us, Sam. Sam Charrington: [00:00:34] Absolutely. Thank you. So, I don't know if it makes sense to say who is IBM? [laughs] You know, in this context, I think most people who hear this know what IBM is but, you know, maybe you can talk a little bit about the company's involvement in the AI Platform space and, you know, what really kind of created the interest in supporting this conference. Rosie Pongracz: [00:01:00] Absolutely. So, yes, I would imagine most of the listeners already know IBM. We are long-standing, I'd say, evangelists, product producers, supporters of open source, anything for AI. And I'd say most of the current recognition goes back to Watson, of course, and the Jeopardy challenge. But from that, IBM has evolved...what was that, almost ten years ago, to create some significant products. Not only have we made our way to the cloud, should I say, and support hybrid clouds for our clients, bringing them through the digital transformation, but we also have a good range of tools that help people not only do data science and machine learning but also scale those, operationalize those, and bring them to production. I think if anything, IBM is known for its expertise in enterprise scale and a wide range of industry solutions. And that's really what we're doing. We're involved in open source. So quite a few open-source projects that are AI and data science and ML related, as well as products that can help our clients bring that AI to their business. Sam Charrington: [00:02:16] Awesome. And I know that I've covered some of those products in our recent e-books in the platform space. Both the Fabric for Deep Learning open source project, which I talked about in our Kubernetes for ML and DL e-book, as well as the Watson Studio products which I believe came up in the ML Platforms e-book. Are there other products that IBM is kind of focused on in this space? Rosie Pongracz: [00:02:43] I think you captured the main ones. Especially the ones my team has been involved in. There's Watson Studio, Watson Machine Learning, Watson OpenScale. And if you look at Studio, it's more or less an IDE of sorts for data scientists, built on Jupyter Notebooks. ML, uh Watson ML, is for running those machine learning algorithms. And then Watson OpenScale is for operating at scale.
And actually one of the big pieces of that pipeline, if you look at all those pieces along the pipeline or the platform, if you will, is one of the areas that Trisha's going to be talking about, which is AI fairness and bias, which is a really important piece of the pipeline that we're proud to be incorporating. I think you caught all the products. There's a significant amount of open source that we're also involved in and, like I said, bringing those into our products and also supporting those communities like the Jupyter community, like the Linux Foundation AI. Those are also very important projects and places where IBM has been involved as well. Sam Charrington: [00:03:53] That's right. We recently did a podcast with Luciano Resende, who is at IBM and works on the Jupyter Enterprise Hub project, I believe is the name of it? Rosie Pongracz: [00:04:03] Yup. Jupyter Enterprise Gateway is correct. Yes. Sam Charrington: [00:04:05] Got it. Jupyter Enterprise Gateway. Rosie Pongracz: [00:04:07] Yeah. Sam Charrington: [00:04:08] So in addition to all of the products and open source that you're working on in this space, you're also out there evangelizing the whole idea of MLOps. You ran a workshop on this topic at the OSCON conference recently. Maybe talk a little bit about your perspective on MLOps and why that's so interesting to you. Rosie Pongracz: [00:04:29] Yeah. I think it goes back to where IBM can really make a difference: we have literally hundreds of years, decades, of experience in helping our enterprise clients do things at scale. And that is across industry. So if you look at all of the products that we have and you also look at something like Cloud Pak for Data, which is bringing those containerized applications to any cloud, really, it is about giving our clients flexibility, helping them modernize. It's helping do things at scale. Now a lot of our clients also have businesses that they're trying to transform, so when you talk about MLOps, certainly, you look at data science, I kind of look at that akin to the desktop a developer works on. It's great to be able to develop those algorithms on your desktop and test them out on data sets, but when you really want to implement it, there's a whole kind of DevOps cycle, if you will, applying that to AI and then machine learning. And IBM has been there with its clients in the early days of Java. It's been there in the early days of cloud. And we're also taking that now into kind of the next realm, if you will, the next era of bringing AI to businesses at scale. So how do you take your current applications and embed AI in those? Or how are you creating new ways to use your data and to modernize your business? And IBM, you know, it's just near and dear to our clients' hearts. It's near and dear to who we are as a company in being able to do things at scale. And you have to have a platform. You have to have a way to operationalize that. It's great to run little science experiments to try things out and test things and fail fast, but when you start to operationalize, that's where ML at scale, MLOps, is really going to start to be important. Sam Charrington: [00:06:25] Mm-hmm [affirmative].
I was at the last IBM Think conference, which is its big user conference, and had an opportunity to hear Rob Thomas talk about, you know, one of the key things that he sees as being a determinant of enterprises finding success in machine learning and AI: the number of experiments that they're able to run, and being able to scale that so that they can run those experiments en masse. Rosie Pongracz: [00:06:51] Yeah, absolutely. That's an important piece of what IBM is helping enable our clients to do. And with our products that is definitely what we're striving for. You've got to be able to experiment. And then when you do want to operationalize, you've got to be able to do that at scale. Some of the clients we work with have some of the biggest applications running for their enterprise, for their customers. And they depend on IBM to do that. So how do we bring that into, you know, this experimentation mode? Because you're absolutely right. Now it's not, you know, much more in...it's not about, you know, building one app and then releasing that. As you know, the world is very much agile; you've got to fail fast. You've got to experiment. You've got to understand. And with data science, that is absolutely sort of the MO. That's sort of the way you operate: how do you know what works? And then, you know, you also have to retrain. So there's a lot of differences to building AI and building data science in a [inaudible] scale that is slightly different than just building applications, if you will. Sam Charrington: [00:07:55] Mm-hmm [affirmative]. Mm-hmm [affirmative]. So, Trisha, you're going to be speaking at the conference. Tell us a little bit about your topic and what attendees can expect when they come to your session. Trisha Mahoney: [00:08:06] Right. So, I'm going to be speaking on AI Fairness 360. And this is a comprehensive toolkit created by IBM researchers. And what we focus on is detecting and understanding and mitigating unwanted machine learning bias. So the toolkit is open source. It's in Python and it contains over 75 fairness metrics, ten bias mitigation algorithms, and fairness metric explanations. So one of the key components to this is that it has some of the most cutting-edge metrics and algorithms across academia and industry today. So it's not just an IBM thing, it includes algorithms from researchers at Google, Stanford, Cornell. That's just a few. But what it really focuses on is teaching people how to learn to measure bias in their data sets and models, and how to apply fairness algorithms throughout the pipeline. So, you know, the big focus is on data science leaders, practitioners, and also legal and ethics stakeholders who would be a part of this. So, just a few things that I'll go through in the talk are when you would apply pre-processing algorithms to manipulate your training data, in-processing algorithms for incorporating fairness into your training algorithms themselves, as well as post-processing de-biasing algorithms. And, you know, one of the key things we wanted to get across is, I'm working on an O'Reilly book on AI fairness and bias with our researchers. So, you know, the key thing is that this is a problem we think may prevent AI from reaching its full potential if we can't remove bias. So, the thing we want to get across is that this is a long data science initiative.
If you want to remove bias throughout your pipeline, it involves a lot of stakeholders in your company, and it can be very complex. So the way you define fairness and bias leads down into the types of metrics and algorithms you use. So, you know, there are a lot of complexities. And the hope is that data science teams work with people throughout their org; they can't really make these decisions on their own, as they may actually break the law in some cases with their algorithms. So, you know, I'll go into, in the short period of time, some of the trade-offs that data science teams have to make between model accuracy and removing bias, and talk about what they do for acceptable thresholds for each. And the last thing, on the MLOps piece, is I'll also do a demo in Watson OpenScale. And this is where you, you know, have models in production and you need to detect and remove bias from models that, you know, aren't in an experimentation environment, right? So within Watson OpenScale, you can automatically detect fairness issues at run time. And we essentially just do this by comparing the difference between rates at which different groups receive the same outcomes. So are different minority groups, or men and women, being approved for loans at the same rate? So that's just an example. So those are kind of the top things that I'll go through on the toolkit. And I've heard many people say that other bias talks focus on the problem that we have, but AI Fairness 360 is one of the few that's bringing a solution to the table on how to fix this within the machine learning pipeline. Sam Charrington: [00:11:29] Yeah, I think that's one of the most exciting things about the talk from our perspective: it's not just talking about the challenges that exist, but also how to integrate a concrete toolkit into your pipeline. Whether it's Fairness 360 or something else, how to integrate tools into your pipeline so that you can detect and mitigate bias very concretely, as opposed to talking about it abstractly. Trisha Mahoney: [00:11:58] Correct. And I think the bridge that this creates is, you know, there are a lot of new fairness research techniques out there, but this toolkit sort of gets them into production and makes them accessible in a way that data scientists can use. So, I think this is considered the most comprehensive toolkit to do that on the market today. Sam Charrington: [00:12:18] Mm-hmm [affirmative]. So Rosie, in addition to Trisha's session, you'll also be exhibiting at the conference in our community hall. What can attendees expect to see at the IBM booth there? Rosie Pongracz: [00:12:30] Yeah, we're excited to be there too. So you'll see several things. We are going to be talking about the relevant open source projects like AI Fairness 360 that Trisha mentioned and also AI Explainability 360, which is another new toolkit. And we have, actually, a whole host of projects that I won't go into here, but we can talk through those and see where IBM is contributing and working on open source projects like the Jupyter Enterprise Gateway that you mentioned as well. They'll also see our products, and how those work together in helping operationalize and bring AI platforms to reality.
And we'll also be talking about our data science community, which is a place where not only can product users go and share and collaborate, but also we have some great technical, solution-type content, with the goal being that IBM has a lot of deep, rich solutions that we're building. As I mentioned earlier, industry-specific or transformation types of projects, and those are the types of materials that we're building there. We've heard many people, both academic and industry, say it's great to talk about all this theoretical AI, and what we'd really like to see is how people are putting that to work in solutions. So that's something that we're trying to bring to life on the community with many of our IBM experts, from our implementation folks to our research folks. Sam Charrington: [00:14:01] Fantastic. Fantastic. Well, I'm really looking forward to seeing both of you at the event. And I am very grateful for your and IBM's support of the conference. Rosie Pongracz: [00:14:14] We are really excited to support what you're doing, Sam. I know you and I have worked together for many years through some technology transitions, so this is really appropriate and fun and fitting that we get to work together on something as exciting as what you're doing at TWIMLcon. Sam Charrington: [00:14:29] Absolutely. Thank you both. Rosie Pongracz: [00:14:31] Thank you. TWIMLcon: AI Platforms will be held on October 1st and 2nd at the Mission Bay Conference Center in San Francisco. Click here to learn more.
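For readers who want to try the AI Fairness 360 toolkit Trisha describes above, here is a minimal, illustrative sketch of the pre-processing workflow she mentions: measure bias in a dataset, then mitigate it with a reweighing algorithm. It assumes the open-source aif360 Python package and uses a made-up toy dataset; exact class and method names may differ across versions of the library.

```python
# Minimal, illustrative AI Fairness 360 sketch (assumes `pip install aif360 pandas`).
# The toy loan data and column names below are made up for demonstration.
import pandas as pd
from aif360.datasets import BinaryLabelDataset
from aif360.metrics import BinaryLabelDatasetMetric
from aif360.algorithms.preprocessing import Reweighing

# Toy loan-approval data: `sex` is the protected attribute (1 = privileged group).
df = pd.DataFrame({
    "sex":    [1, 1, 1, 1, 0, 0, 0, 0],
    "income": [60, 80, 55, 90, 52, 75, 48, 70],
    "label":  [1, 1, 0, 1, 0, 1, 0, 0],   # 1 = loan approved
})

dataset = BinaryLabelDataset(
    df=df,
    label_names=["label"],
    protected_attribute_names=["sex"],
    favorable_label=1,
    unfavorable_label=0,
)

privileged = [{"sex": 1}]
unprivileged = [{"sex": 0}]

# Measure bias before mitigation: difference in favorable-outcome rates between groups.
metric = BinaryLabelDatasetMetric(
    dataset, unprivileged_groups=unprivileged, privileged_groups=privileged
)
print("Statistical parity difference (before):",
      metric.statistical_parity_difference())

# Pre-processing mitigation: reweigh training examples to balance outcomes across groups.
rw = Reweighing(unprivileged_groups=unprivileged, privileged_groups=privileged)
dataset_transf = rw.fit_transform(dataset)

metric_transf = BinaryLabelDatasetMetric(
    dataset_transf, unprivileged_groups=unprivileged, privileged_groups=privileged
)
print("Statistical parity difference (after reweighing):",
      metric_transf.statistical_parity_difference())
```

The in-processing and post-processing algorithms Trisha lists follow the same pattern: compute a metric, apply a mitigation algorithm at that stage of the pipeline, and re-check the metric.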
Bits & Bytes

Microsoft open sources Bing vector search. The company published its vector search toolkit, Space Partition Tree and Graph (SPTAG) [Github], which provides tools for building, searching, and serving large-scale vector indexes.

Intel makes progress toward optical neural networks. A new article on the Intel AI blog (which opens with a reference to TWIML Talk #267 guest Max Welling’s 2018 ICML keynote) describes research by Intel and UC Berkeley into new nanophotonic neural network architectures. A fault-tolerant architecture is presented, which sacrifices accuracy to achieve greater robustness to manufacturing imprecision.

Microsoft research demonstrates realistic speech with little labeled training data. Researchers have crafted an “almost unsupervised” text-to-speech model that can generate realistic speech using just 200 transcribed voice samples (about 20 minutes’ worth), together with additional unpaired speech and text data.

Google deep learning model demonstrates promising results in detecting lung cancer. The system demonstrated the ability to detect lung cancer from low-dose chest computed tomography imagery, outperforming a panel of radiologists. Researchers trained the system on more than 42,000 CT scans. The resulting algorithms turned up 11% fewer false positives and 5% fewer false negatives than their human counterparts.

Facebook open-sources Pythia for multimodal vision and language research. Pythia [Github] [arXiv] is a deep learning framework for vision and language multimodal research that helps researchers build, reproduce, and benchmark models. Pythia is built on PyTorch and designed for Visual Question Answering (VQA) research, and includes support for multitask learning and distributed training.

Facebook unveils what its secretive robotics division is working on. The company outlined some of the focus areas for its robotics research team, which include teaching robots to learn how to walk on their own, using curiosity to learn more effectively, and learning through tactile sensing.

Dollars & Sense

Algorithmia raises $25M Series B for its AI platform
Icometrix, a provider of brain imaging AI solutions, has raised $18M
Quadric, a startup developing a custom-designed chip and software suite for autonomous systems, has raised $15M in a funding round
Novi Labs, a developer of AI-driven unconventional well planning software, has raised $7M

To receive the Bits & Bytes to your inbox, subscribe to our Newsletter.
Bits & Bytes

Google announces TensorFlow 2.0 Alpha, TensorFlow Federated, TensorFlow Privacy. At the 3rd annual TensorFlow Developer Summit, Google announced the first alpha release of TensorFlow 2.0 and several other new releases, including: TensorFlow Federated, a new open-source framework that allows developers to use all the ML-training features from TF while keeping the data local; TensorFlow Privacy, which uses differential privacy to process data in a private manner; extensions to TensorFlow Extended (TFX), a platform for end-to-end machine learning; and Activation Atlases, which attempts to visualize and explain how neural networks process images.

Google open sources GPipe, a library for parallel training of large-scale neural networks. GPipe, which is based on Lingvo (a TensorFlow framework for sequence modeling), is applicable to any network consisting of multiple sequential layers and allows researchers to “easily” scale performance. [Paper]

Facebook AI researchers create a text-based adventure to study how AI agents speak and act. Researchers from Facebook and University College London specifically investigated the impact of grounding dialogue (a collection of mutual knowledge, beliefs, and assumptions essential for communication between two people) on AI agents.

Google announces Coral platform for building IoT hardware with on-device AI. Coral targets developers creating IoT hardware from prototyping to production. It is powered by a TPU that is specifically designed to run at the edge and is available in beta.

Google and DeepMind are using AI to predict the energy output of wind farms. Google announced that it has made energy produced by wind farms more viable using DeepMind’s ML algorithms to better predict the wind output.

Ben-Gurion U. develops new AI platform for ALS care. Researchers at Ben-Gurion University have used ML models to develop a new method of monitoring and predicting the progression of neurodegenerative disease, helping to identify markers for personalized patient care and improve drug development.

Google rolls out AI grammar checker for G Suite users. Google applies ML techniques to understand complex grammar rules and identify “tricky” grammatical errors made by G Suite users.
Dollars & Sense

PolyAI, a London, UK-based platform for conversational AI, raised $12M in Series A funding
Wade & Wendy, a NYC-based AI recruitment platform, closed a $7.6M Series A funding round
Brodmann17, a Tel Aviv-based provider of vision-first technology for automated driving, raised $11M in Series A funding
Paradox.ai, a Scottsdale-based assistive intelligence platform, raised $13.34M in Series A funding
Apple acquires patents from AI security camera maker Lighthouse
Horizon Robotics, a China-based AI chip maker, raises $600M
ELSA, a US-based AI language learning app, raised $7M
Modulate, a Cambridge-based ML startup, raised $2M in seed funding
Zone7, which uses AI to predict injuries in sports, has secured $2.5M
DataRobot acquires a data collaboration platform company, Cursor
Splice Machine announced that it has raised $16M for its unified ML platform
Senseon has raised $6.4M to tackle cybersecurity threats with an AI ‘triangulation’ approach
Ctrl-labs, a New York startup, announced that it has raised $28M in a funding round led by GV, Google’s venture capital arm
Armorblox, a Sunnyvale, CA-based provider of a natural language understanding platform for cybersecurity, raised $16.5M Series A funding
ViSenze, a Singapore-based AI startup, has raised $20M in Series C funding
BlackBerry announces the acquisition of Cylance, a cybersecurity and AI firm

To receive the Bits & Bytes to your inbox, subscribe to our Newsletter.
Sam Charrington: Today we're excited to continue the AI for the Benefit of Society series that we've partnered with Microsoft to bring to you. Today we're joined by Hanna Wallach, principal researcher at Microsoft Research. Hanna and I really dig into how bias and a lack of interpretability and transparency show up across machine learning. We discuss the role that human biases, even those that are inadvertent, play in tainting data, whether deployment of fair ML algorithms can actually be achieved in practice, and much more. Along the way, Hanna points us to a ton of papers and resources to further explore the topic of fairness in ML. You'll definitely want to check out the show notes page for this episode, which you'll find at twimlai.com/talk/232. Before diving in I'd like to thank Microsoft for their support of the show and their sponsorship of this series. Microsoft is committed to ensuring the responsible development and use of AI and is empowering people around the world with this intelligent technology to help solve previously intractable societal challenges, spanning sustainability, accessibility and humanitarian action. Learn more about their plan at Microsoft.ai. Enjoy. Sam Charrington: [00:02:18] All right everyone, I am on the line with Hanna Wallach. Hanna is a principal researcher at Microsoft Research in New York City. Hanna, welcome to This Week in Machine Learning and AI. Hanna Wallach: [00:00:11] Thanks, Sam. It's really awesome to be here. Sam Charrington: [00:00:14] It is a pleasure to have you on the show, and I'm really looking forward to this conversation. You are clearly very well known in the machine learning and AI space. Last year, you were the program chair at one of the largest conferences in the field, NeurIPS. In 2019, you'll be its general chair. But for those who don't know about your background, tell us a little bit about how you got involved and started in ML and AI. Hanna Wallach: [00:00:48] Sure. Absolutely. So I am a machine learning researcher by training, as you might expect. I've been doing machine learning for about 17 years now. So since way before this stuff was even remotely fashionable, or popular, or cool, or whatever it is nowadays. In that time, we've really seen machine learning change a lot. It's sort of gone from this weirdo academic discipline only of interest to nerds like me, to something that's so mainstream that it's on billboards, it's in TV shows, and so on and so forth. It's been pretty incredible to see that shift over that time. I got into machine learning sort of by accident, I think that's often what happens. I had taken some undergrad classes on information theory and stuff like that, found that to be really interesting, but thought that I was probably going to go into human-computer interaction research. But through a research assistantship during the summer between my undergrad degree and my Master's degree, I ended up discovering machine learning, and was completely blown away by it. I realized that this is what I wanted to do. I've been focusing on machine learning in various different forms since then. My PhD was specifically on Bayesian latent variable methods, typically for analyzing text and documents. So topic models, that kind of thing. But during my PhD, I really began to realize that I'm not particularly interested in analyzing documents for the sake of analyzing documents, I'm interested in analyzing documents because humans write documents to communicate with one another.
It's really that underlying social process that I'm most interested in. So then during my postdoc, I started to shift direction from primarily looking at text and documents to thinking really about those social processes. So not just what are people saying, but also who’s interacting with whom, and thinking about machine learning methods for analyzing the structure and content of social processes in combination. I then dove into this much more when I got a faculty job, because I was hired as part of UMass Amherst’s Computational Social Science Initiative. So at that point I started focusing really in depth on this idea of using machine learning to study society. I established collaborations with a number of different social scientists, focusing on a number of different topics. Over the years, I've mostly ended up working with political scientists, and often study questions relating to government transparency, still looking at sort of this whole idea that a social process consists of individuals, or groups of individuals, interacting with one another, information that might be used in or arise from these interactions, and then the fact that these things might change over time. I often use one or two of these modalities, so structure, content, or dynamics, to learn about one or more of the other ones as well. As I continued to work in this space, I started to think more, not just about how we can use machine learning to study society, but about the fact that machine learning is becoming much more prevalent within society. About four years ago, I started really thinking more about these issues of fairness, accountability, transparency, and ethics. It was a pretty natural fit for me to start moving in this direction. Not only was I already thinking about questions to do with people, but I've done a lot of diversity and inclusion work in my non-research life. So I'm one of the co-founders of the Women in Machine Learning workshop, and I also co-founded two organizations to get more women involved in free and open source software development. So issues related to fairness and stuff like that are really something that I tend to think about a lot in general. So I ended up making sort of this shift a little bit in my research focus. That's not to say that I don't still work on things to do with core computational social science, but increasingly my research is focusing on the ways that machine learning impacts society. So fairness, accountability, transparency, and ethics. Sam Charrington: [00:05:53] We will certainly dive deep into those topics. But before we do, you've mentioned a couple of times the term computational social science. That's not a term that I've heard before, I don't believe. Can you ... Is that ... I guess I'm curious how established that is as a field, or is it something that is specific to that institution that you were working at? Hanna Wallach: [00:06:19] Sure. So this is really a discipline that started emerging in maybe sort of 2009, 2008, that kind of time. By 2010, which is when I was hired at UMass, it really was sort of its own little emerging field with a bunch of different computer scientists and social scientists really committed to pushing this forward as a discipline. The basic idea, of course, is, you know, social scientists study society and social processes, and they've been doing this for decades. But often using qualitative methods.
But of course, as more of society moves towards digitized interaction methods, and online platforms, and other kinds of things like that, we're beginning to see much more of this sort of digital data. At the same time, we've seen this massive increase, as I've said, in the popularity of machine learning and machine learning methods that are really suitable for analyzing data about social processes in society. So computational social science is really the sort of emerging discipline at the intersection of computer science, the social sciences, and statistics as well. The real goal is to develop and use computational and statistical methods, so machine learning methods, for example, to understand society, social processes, and answer questions that are substantively interesting to social scientists. At this point, there are people at a number of different institutions focusing on computational social science. So yes, of course, UMass, as I've mentioned before. But also Northwestern, Northeastern, University of Washington, which in fact have been doing this for years, and of course, Microsoft Research is no exception in this regard. Part of the reason why I joined Microsoft Research was that we have a truly exceptional group of researchers in computational social science here. That was really very appealing to me. Sam Charrington: [00:08:31] Oh, awesome, awesome. So you talked about your transition to focusing on fairness, accountability, transparency, and ethics in machine learning and AI. Can you talk a little bit about what those terms mean to you, and your broader research? Hanna Wallach: [00:08:54] Yeah, absolutely. So I think the bulk of my own research in that sort of broad umbrella falls within two categories. So the first is fairness, and the second is what I would sort of describe as interpretability of machine learning. So in that fairness bucket, really, much of my research is focused on studying the ways in which machine learning can inadvertently harm or disadvantage groups of people or individual people in various different, usually unintended, ways. I'm interested in understanding not only why this occurs, but what we can do to mitigate it, and what we can do to really develop fairer machine learning systems. So systems that don't inadvertently harm individuals or groups of people. In the intelligibility bucket, so there, I'm really interested in how we can make machine learning methods that are interpretable to humans in different roles for particular purposes. There has been a lot of research in this area over the past few years, focusing oftentimes on developing simple machine learning models that can be easily understood by humans simply by exposing their internals, and also on developing methods that can generate explanations for either entire models or the predictions of models. Those models might be potentially very complex. My own work typically focuses really more on the human side of intelligibility, so what is it that might make a system intelligible or interpretable to a human trying to carry out some particular task? I do a lot of human subjects experiments to really try and understand some of those questions with a variety of different folks here at Microsoft Research. Sam Charrington: [00:11:01] On the topic of fairness and avoiding inadvertent harm, there are a lot of examples that I think many of our audience would be familiar with, the ProPublica work into the use of machine learning systems in the justice process, and others.
Are there examples that come to mind for you that are maybe less well known, but that illustrate for you the importance of that type of work? Hanna Wallach: [00:11:36] Yes. So when I typically think about this space, I tend to think about it in terms of the types of different harms that can occur. I have some work with Aaron Shapiro, Solon Barocas, and Kate Crawford on the different types of harms that can occur. Kate Crawford actually did a fantastic job of talking about this work in her invited talk at the NeurIPS conference in 2017. But to give you some concrete examples, many of the examples that people are most familiar with are these scenarios, as you mentioned, where machine learning systems are being used to allocate or withhold resources, opportunities, or information. So one example would be the COMPAS recidivism prediction system being used to make decisions about whether people should be released on bail. Another example would be from a news story that happened in November, where Amazon revealed that it had abandoned an automated hiring tool because of fears that the tool would reinforce existing gender imbalances in the workplace. So there you're looking at these existing gender imbalances, and seeing that this tool is perhaps withholding opportunities from women in the tech industry in an undesirable way. There was a lot of coverage about this very sensible decision that Amazon made to abandon that tool. Some other examples would be more related to quality-of-service issues, even when no resources or opportunities are being allocated or withheld. So a great example there would be the work that Joy Buolamwini and Timnit Gebru did focusing on the ways that commercial gender classification systems might perform less well, so less accurately, for certain groups of people. Another example you might think of, let's say, is speech recognition systems. You can imagine systems that work really well for people with certain types of accents, or for people with voices at certain pitches. But less well for other people, certainly for me. I'm British, and I have a lisp. I know that oftentimes speech recognition systems don't do a great job of understanding what I'm saying. This is much less of an issue nowadays, but you know, five or so years ago, this was really frustrating for me. Some other examples are things like stereotyping. So here the most famous example of stereotyping in machine learning is Latanya Sweeney's work from 2013, where she showed that advertisements shown on web searches for different people's names would more typically be advertisements that reinforced stereotypes about black criminality when people searched for stereotypically black-sounding names than when people searched for stereotypically white-sounding names. So there the issue is this sort of reinforcement of these negative stereotypes within society by the placement of particular ads for particular different types of searches. So another example of stereotyping in machine learning would be the work done by Joanna Bryson and others at Princeton University on stereotypes in word embeddings. There has also been some similar work done by my colleague, Adam Kalai, here at Microsoft Research.
Both of these groups of researchers showed that if you train word embedding methods, so things like Word2Vec, that try to identify a low-dimensional embedding for word types based on the surrounding words that are typically used in conjunction with them in sentences, you end up seeing that these word embeddings reinforce existing gender stereotypes. For example, the word man ends up being embedded much closer to programmer, and similarly woman ends up being embedded much closer to homemaker, than vice versa. So that would be another kind of example. Then we see other kinds of examples of unfairness and harms within machine learning as well. So for example, over- and under-representation. So Matthew Kay and some others at the University of Washington have this really nice paper where they show that for professions with an equal or higher percentage of men than women, the image search results are much more heavily skewed towards images of men than reality. So that would be another kind of example. What you'll see from all of these examples that I've mentioned is that they affect a really wide range of systems and types of machine learning applications. The types of harms or unfairness that might occur are also pretty wide ranging as well, going from, yes, sure, allocation or withholding of resources, opportunities, or information, but moving beyond that to stereotyping and representation and so on. Sam Charrington: [00:17:02] So often when thinking about fairness and bias in machine learning and the types of harm that can come about when unfair systems are developed, all roads kind of lead back to the data itself, and the biases that are inherent in that data. Given that machine learning and AI is so dependent on data, and often much of the data that we have is biased, what can we do about that, and what are the kinds of things that your research is exploring to help us address these issues? Hanna Wallach: [00:17:41] Absolutely. Yeah, so you've hit on a really important point there, which is that in a lot of the sort of public discourse about fairness in machine learning, you have people making comments about algorithms being unfair, or algorithms being biased. Really, I think this misses some of the most fundamental points about why this is such a challenging landscape. So I want to just emphasize a couple of those here in response to your question. So the first thing is that machine learning is all about taking data, finding patterns in that data, and then often training systems to mimic the decisions that are represented within that data. Of course, we know that the society we live in is not fair. It is biased. There are structural disadvantages and discrimination all over the place. So it's pretty inevitable that if you take data from a society like that, and then train machine learning systems to find patterns expressed in that data, and to mimic the decisions made within that society, you will necessarily reproduce those structural disadvantages, that bias, that discrimination, and so on. So you're absolutely right that a lot of this does indeed come from data. But the other point that I want to make is that it's not just from data and it's not from algorithms per se. The issue is really, as I see it, and as my colleagues here at Microsoft Research see it, about people and people's decisions at every point in that machine learning life cycle.
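The word-embedding associations Hanna describes a few paragraphs up are straightforward to probe yourself. Below is a small, illustrative sketch using the gensim library and its pretrained Google News word2vec vectors; this is not code from the studies mentioned in the interview, and the exact analogy results will vary with the embedding you load, so treat the output as indicative rather than definitive.

```python
# Illustrative probe of gender associations in word embeddings
# (assumes `pip install gensim`; downloads ~1.6 GB of pretrained vectors).
import gensim.downloader as api

# Pretrained word2vec vectors trained on Google News.
vectors = api.load("word2vec-google-news-300")

# Analogy-style query: "man is to programmer as woman is to ...?"
print(vectors.most_similar(positive=["woman", "programmer"],
                           negative=["man"], topn=5))

# Compare raw similarities for a few occupation words directly.
for word in ["programmer", "homemaker", "nurse", "engineer"]:
    print(word,
          "man:", round(vectors.similarity("man", word), 3),
          "woman:", round(vectors.similarity("woman", word), 3))
```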
So I've done some work on this with a number of people here at Microsoft; most recently I put together a tutorial on machine learning and fairness in collaboration with my colleague Jenn Wortman Vaughan. The way we really think about this is that you have to prioritize fairness at every stage of that machine learning lifecycle. You can't think about it as an afterthought. The reason why is that decisions that we make at every stage can fundamentally impact whether or not a system treats people fairly. So I think it's really important when we're thinking about fairness in machine learning to not just sort of make general statements about algorithms being unfair, or systems being unfair, but really to go back to those particular points and think about how unfairness can kind of creep in at any one of those stages. That might be as early as the task definition stage, so when you're sitting down to develop some machine learning system, it's really important to ask the question of who does this take power from, and who does this give power to? The answers to that question often reveal a lot about whether or not that technology should even be built in the first place. Sometimes the answer to addressing fairness in machine learning is simply, no, we should not be building that technology. But there are all kinds of other decisions and assumptions at other points in that machine learning life cycle as well. So the way we typically like to think about it is that a machine learning model, or method, is effectively an abstraction of the world. In making that abstraction, you necessarily have to make a bunch of assumptions about the world. Some of these assumptions will be more or less justified, some of these assumptions will be a better fit for reality than others. But if you're not thinking really carefully about what those assumptions are when you are developing your machine learning system, this is one of the most obvious places where you can inadvertently end up introducing bias or unfairness. Sam Charrington: [00:21:42] Can you give us some concrete examples there? Hanna Wallach: [00:21:45] Yeah. Absolutely. One common example of this form would be stuff to do with teacher evaluation. So there have been a couple of high-profile lawsuits about this kind of thing. But I think it illustrates the point nicely. So it's common for teachers to be evaluated based on a number of different factors, including their students' test scores. Indeed, many of the methods that have been developed to analyze teacher quality using machine learning systems have really focused predominantly on students' test scores. But this assumes that students' test scores are in fact an accurate predictor of teacher quality. This isn't actually always the case. A good teacher should obviously do more than test prep. So any system that really looks just at test scores when trying to predict teacher quality is going to do a bad job of capturing these other properties. So that would be one example. Another example involves predictive policing. So a predictive policing system might make predictions about where crimes will be committed based on historic arrest data. But an implicit assumption here is that the number of arrests in an area is an accurate proxy for the amount of crime. It doesn't take into account the fact that policing practices can be racially biased, or that there might be historic over-policing in less affluent neighborhoods. I'll give you another example as well.
So many machine learning methods work by defining some objective function, and then learning the parameters of the model so as to optimize that objective function. So for example, if you define an objective function in the context of, let's say, a search engine, that prioritizes user clicks, you may end up with search results that don't necessarily reflect what you want them to. This is because users may click on certain types of search results over other search results, and that might not be reflective of what you want to be showing when you show users a page of search results. So as a concrete example, on many search engines, if you search for the word boy, you see a bunch of pictures of male children. But if you search for the word girl, you see a bunch of pictures of grown-up women. These are pretty different to each other. This probably comes from the fact that search engines typically optimize for clicks, among other metrics. This really shows how hard it can be to even address these kinds of fairness issues, because in different circumstances the word girl may be referring to a child or a woman, and users search for this term with different intentions. In this particular example, as you can probably imagine, one of these intentions might be more prevalent than the other. Sam Charrington: [00:24:57] You've identified lots of opportunities for pitfalls in the process of fielding systems, going all the way back to the way you define your system, and state your intentions, and formulate the problem that you're going after. Beyond simply being mindful of the potential for bias and unfairness, and I realize that that's not simple, that it's work to be mindful of this, but beyond that, what does your research offer in terms of how to overcome these kinds of issues? Hanna Wallach: [00:25:43] Yeah, this is a really good question. It's a question that I get a lot from people: what can we actually do in practice? There are a number of things that can be done in practice. Not all of them are easy things to do, as you say. So one of the most important things is that issues relating to fairness in machine learning are fundamentally socio-technical. They're not going to be addressed by computer scientists or developers alone. It's really important to involve a range of diverse stakeholders in these conversations when we're developing machine learning systems so that we have a bunch of different perspectives represented. So moving beyond just involving computer scientists and developers on teams, it's really important that we involve social scientists, lawyers, policy makers, end users, people who are going to be affected or impacted by these systems down the line, and so on and so forth. That's one really concrete thing you can do. There is a project that came out of the University of Washington called the Diverse Voices project. It provides a way of getting feedback from stakeholders on tech policy documents. It's really good; they have a great how-to guide that I definitely recommend checking out. But many of the things that they recommend doing there, you can also think about when you're trying to get feedback from stakeholders on, let's say, the definition of a machine learning system. So that task definition stage. Some of these could even potentially be expanded to consider other stages of that machine learning pipeline as well. So there are a number of things that you can do at every single stage of the machine learning pipeline.
In fact, this tutorial that I mentioned earlier, that I worked on with my colleague Jenn Wortman Vaughan, actually has guidelines for every single step of the pipeline. But to give you examples, here are some things, for instance, that you can do when you're selecting a data source. So for example, it's really important to think critically before even collecting any data. It's often very tempting to say, oh, there is already some dataset that I can probably repurpose for this. But it's really important to take that step back and, before immediately acting based on availability, to actually think about whether that data source is appropriate for the task you want to use it for. There are a number of reasons why it might not be; it could be to do with biases in the data source selection process. There might be societal biases present in the data source itself. It might be that the data source doesn't match the deployment context; that's a really important one that people really should be taking into account. Where are you thinking about deploying your machine learning system, and does the data you have available for training and development match that context? As another example, still related to data, it's really important to think about biases in the technology used to collect data. So as an example here, there was an app released in the city of Boston back in 2011, I think it was called Street Bump. The way it worked is it used iPhone data, and specifically the sort of positional movement of iPhones as people were driving around, to gather data on where there were potholes that should be repaired by the city. But pretty quickly, the city of Boston figured out that this actually wasn't a great way to get that kind of data, because back in 2011, the people who had iPhones were typically quite affluent and only lived in certain neighborhoods. So that would be an example about thinking carefully about the technology even used to collect data. It's also really important to make sure that there is sufficient representation of different subpopulations who might be ultimately using or affected by your machine learning system, to make sure that you really do have good representation overall. Moving on to things like the model, there are a number of different things that you can do there, for instance, as well. So in the case of a model, I mentioned a bit about assumptions being really important. It's great to really clearly define all of your assumptions about the model, and then to question whether there might be any explicit or implicit biases present in those assumptions. That's a really important thing to do when you're thinking about choosing any particular model or model structure. You could even, in some scenarios, include some quantitative notion of parity, for instance, in your model objective function as well. There have been a number of academic papers that take that approach in the literature over the past few years. Sam Charrington: [00:30:43] Can you give an example of that last point? Hanna Wallach: [00:30:46] Yeah, sure. So imagine you have some kind of a machine learning classifier that's going to make decisions of the form, let's say, loan, no loan, hire, no hire, bail, no bail, and so on. The way we normally develop these classifiers is to take a bunch of labeled data, so data points labeled with, let's say, loan, no loan, and then we train a model, a machine learning model, a classifier, to optimize accuracy on that training data.
So you end up setting the parameters of that model such that it does a good job of accurately predicting those labels from the training data. So the objective function that's typically used is one that considers, usually, only accuracy. But something else you can do is define some quantitative definition of fairness, some quantitative fairness metric, and then try to simultaneously optimize both of these objectives: classifier accuracy and whatever your chosen fairness metric is. There are a number of these different quantitative metrics that have been proposed out there, and they all typically are looking at parity across groups of some sort. So I think it's really important to remember that even though these are often referred to as fairness metrics, they're really parity metrics. They neglect many of the really important other aspects of fairness, like justice, and due process, and so on and so forth. But it is absolutely possible to take these parity metrics and to incorporate them into the objective function of, say, a classifier, and then to try to prioritize satisfying and optimizing that fairness metric at the same time as optimizing classifier accuracy. There have been a number of papers that focus on this kind of approach; many of them will focus on one particular type of classifier, so like SVMs, or neural networks, or something like that, and one particular fairness metric. There are a bunch of standard fairness metrics that people like to look at. I actually have some work with some colleagues here at Microsoft where we have a slightly more general way of doing this that will work with many different types of classifiers, and many different types of fairness metrics. So there is no reason to start again from scratch if you want to switch to a different classifier or a different fairness metric. We actually have some open source Python code available on GitHub that implements our approach. Sam Charrington: [00:33:27] So you've talked about the idea that people are fundamentally the root of the issue, that these are societal issues, that they're not going to be solved by technological advancements or processes alone. At the same time, there has been a ton of new research happening in this area by folks in your group and elsewhere. Does that lead to a mismatch between what's happening in academia and on the technical side, and the way this stuff actually gets put into practice? Hanna Wallach: [00:34:11] That's an awesome question. The simple answer is yes. This actually relates to one of my most recent research projects, which I'm really, really excited about. So last summer, some of my colleagues and I, specifically Jenn Wortman Vaughan, Miro Dudík, and Hal Daumé, along with our incredible intern, Ken Holstein from CMU, conducted the first systematic investigation of industry practitioners' challenges and needs for support relating to developing fairer machine learning systems. This work actually came about because we were thinking about ways of developing interfaces for that fair classification work that I mentioned a minute ago. Through a number of conversations with people in different product groups here at Microsoft and people at other companies, we realized that these kinds of classification tasks, while they're incredibly well studied within the fairness and machine learning literature, are maybe less common than we had thought in practice within industry.
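As a concrete illustration of folding a parity metric into classifier training, here is a small sketch using the open-source fairlearn package, which implements a reductions-style approach of the kind Hanna describes. The transcript does not name the specific GitHub repository, so treat this as an illustrative stand-in rather than the exact code she refers to; the toy data below is made up, and API details may differ across versions.

```python
# Illustrative fairness-constrained classification with fairlearn
# (assumes `pip install fairlearn scikit-learn numpy`); toy data is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression
from fairlearn.reductions import ExponentiatedGradient, DemographicParity
from fairlearn.metrics import demographic_parity_difference

rng = np.random.default_rng(0)
n = 1000
sensitive = rng.integers(0, 2, size=n)                 # a binary group attribute
X = np.column_stack([rng.normal(size=n), sensitive])   # features, correlated with the group
y = (X[:, 0] + 0.8 * sensitive + rng.normal(scale=0.5, size=n) > 0.5).astype(int)

# Unconstrained baseline classifier, optimized for accuracy only.
baseline = LogisticRegression().fit(X, y)
print("Parity gap (baseline):",
      demographic_parity_difference(y, baseline.predict(X),
                                    sensitive_features=sensitive))

# Reduction: optimize accuracy subject to a demographic-parity constraint.
mitigator = ExponentiatedGradient(LogisticRegression(),
                                  constraints=DemographicParity())
mitigator.fit(X, y, sensitive_features=sensitive)
print("Parity gap (constrained):",
      demographic_parity_difference(y, mitigator.predict(X),
                                    sensitive_features=sensitive))
```

The ExponentiatedGradient reduction repeatedly reweights the training data and re-fits the base classifier, typically trading a little accuracy for a smaller demographic-parity gap, and the same pattern works with other base classifiers and parity constraints.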
So that got us thinking about whether there might actually be a mismatch between the academic literature on fairness and machine learning, and practitioners' actual needs. What we ended up doing was this super interesting research project that was a pretty different style of research for me and for my colleagues. I am a machine learning researcher, and so are Jenn, Hal, and Miro; Ken, our intern, is an HCI researcher. What we ended up doing was this qualitative HCI work to really understand what it is that practitioners are facing in reality when they try to develop fairer machine learning systems. To do this, we conducted semi-structured interviews with 35 people, spanning 25 different teams, in 10 different companies. These people were in a number of different roles, ranging from social scientist, data labeler, product manager, and program manager to data scientist and researcher. Where possible, we tried to interview multiple people from the same team in order to get a variety of perspectives on that team's challenges and needs for support. We then took our findings from these interviews and developed a survey, which was then completed by another 267 industry practitioners, again in a variety of different companies and a variety of different roles. What we found, at a high level, was that yes, there is a mismatch between the academic literature on fairness in machine learning and industry practitioners' actual challenges and needs for support on the ground.

So firstly, much of the machine learning literature on fairness focuses on classification, and on supervised machine learning methods. In fact, what we found is that industry practitioners are grappling with fairness issues in a much wider range of applications beyond classification or prediction scenarios. Many times the systems they're dealing with involve really rich, complex interactions between users and the system, for example chatbots, or adaptive tutoring, or personalized retail, and so on and so forth. As a result, they often struggle to use existing fairness research from the literature, because the things that they're facing are much less amenable to these quantitative fairness metrics. Indeed, very few teams have fairness KPIs or automated tests that they can use within their domain. One of the other things that we found is that the machine learning literature typically assumes access to sensitive attributes like race or gender for the purpose of auditing systems for fairness. But in practice, many teams have no access to these kinds of attributes, and certainly not at the level of individuals. So they expressed needs for support in detecting biases and unfairness with access only to coarse-grained, partial, or indirect information. This is something that we've seen much less focus on in the academic literature.

Sam Charrington: [00:38:41] That last point is an interesting one, and one that I've brought up on the podcast previously. In many of the places you might want to use an approach like that, it's forbidden, from a regulatory perspective, to use the information that you want to use in your classifier to achieve fairness in any part of the decisioning process.

Hanna Wallach: [00:39:04] Exactly. This sets up this really difficult tension between doing the right thing in practice from a machine learning perspective, and what is legally allowed.
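One concrete way to picture auditing with only indirect or noisy access to a sensitive attribute is randomized response, a long-standing survey technique (and a simple local differential-privacy mechanism). The sketch below is purely illustrative, with invented numbers and a hypothetical accept/reject model; it is not any specific system discussed here. Each person's reported group label is deliberately randomized, yet group-level acceptance rates can still be recovered from the noisy reports.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative setup: a deployed model's accept/reject decisions, plus a
# binary sensitive attribute the team is NOT allowed to store directly.
n = 200_000
true_group = rng.integers(0, 2, size=n)
# Hypothetical decisions with a built-in selection-rate gap to detect.
accept = rng.random(n) < np.where(true_group == 1, 0.55, 0.70)

# Randomized response: each person reports their true group with
# probability q, otherwise reports a coin flip. No individual report
# reveals the true attribute with certainty.
q = 0.5
coin = rng.random(n) < q
reported = np.where(coin, true_group, rng.integers(0, 2, size=n))

# De-biasing: P(report=1) = q * P(true=1) + (1 - q) * 0.5, and similarly
# for the joint rate with "accept", so both can be inverted from the
# noisy reports alone.
p_rep1 = reported.mean()
p_true1 = (p_rep1 - 0.5 * (1 - q)) / q

joint_rep1 = (accept & (reported == 1)).mean()
joint_true1 = (joint_rep1 - 0.5 * (1 - q) * accept.mean()) / q

rate_g1 = joint_true1 / p_true1
rate_g0 = (accept.mean() - joint_true1) / (1 - p_true1)

print(f"estimated acceptance rate, group 1: {rate_g1:.3f}")
print(f"estimated acceptance rate, group 0: {rate_g0:.3f}")
print(f"estimated gap: {rate_g1 - rate_g0:+.3f}  (true gap is about -0.150)")
```

Only aggregate rates are recoverable; no individual's true attribute is stored, which is the kind of compromise such collection schemes aim for between auditing needs and privacy or legal constraints.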
I'm actually working on a paper at the moment with Zack Conard, a law student at Stanford University, on exactly this issue: the challenge between what you want to do from a machine learning perspective, and what you are required to do from a legal perspective, based on humans and how humans behave, and hundreds of years of law in that realm. It's really challenging, and there is this complicated trade-off there that we really need to be thinking about.

Sam Charrington: [00:39:48] It does make me wonder if techniques like, or analogous to, differential privacy could be used to provide a regulatorily acceptable way to access protected attributes, so that they can be incorporated into algorithms like this.

Hanna Wallach: [00:40:07] Yeah, so there was some work on exactly this kind of topic at the FAT/ML workshop co-located with ICML last year. This work was proposing the use of encryption and the like in order to collect and make available such information, but in a way that users would feel as if their privacy was being respected, and so that people who wanted to use that information would be able to use it for purposes such as auditing. I think that's a really promising approach, although there is obviously a bunch of non-trivial challenges involved in thinking about how you might make that a reality. It's a really complicated landscape, but definitely one that's worth thinking about.

Sam Charrington: [00:40:54] Was there a third area that you were about to mention?

Hanna Wallach: [00:40:58] Yeah, so one of the main themes that we found in our work studying industry practitioners is a real mismatch in the focus on different points in the machine learning life cycle. The machine learning literature typically assumes no agency over data collection. This makes sense, right? If you're a machine learning academic, you typically work with standard datasets that have been collected and made available for years. You don't typically think about having agency over that data collection process. But of course, in industry, that's exactly where practitioners often do have the most control. They are in charge of that data collection or data curation process, and in contrast, they often have much less control over the methods or models themselves, which are often embedded within much bigger systems. So it's much harder to intervene from a perspective of fairness with the models than it is with the data. We found that really interesting, this difference in emphasis between models versus data in these different groups of people. Of course, many practitioners voiced needs for support in figuring out how to leverage that agency over data collection to create fairer datasets for use in developing their systems.

Sam Charrington: [00:42:20] So you mentioned the FAT/ML workshop. I'm wondering, as we come to a close, if there are any resources, events, or pointers; I'm sure there are tons of things that you'd love to point people at. But what are your top three or four things that you would suggest people take a look at as they're trying to wrap their heads around this area, and how to either have an impact as a researcher, or how to make good use of it as a practitioner?

Hanna Wallach: [00:42:55] Yeah. Absolutely. So there are a number of different places with resources to learn more about this kind of stuff.
So first, I've mentioned a couple of times this tutorial that I put together with Jenn Wortman Vaughan, which will be available publicly online very soon. It is in fact being broadcast next week, so it should be up by the time this podcast goes live. I would definitely recommend that people check that out to really get a sense of how we, at Microsoft, are thinking about fairness in machine learning. Then, moving beyond that and thinking more about the academic literature, the FAT/ML workshop maintains a list of resources on the workshop website. That's another really, really great place to look for things to read about this topic. The FAT* conference is a relatively new conference on fairness, accountability, and transparency, not just in machine learning but across all of computer science and computational systems. There, again, I recommend checking out the website to see the publications that were there last year, and also the publications that will be there this year. There are a number of really interesting papers that I haven't read yet, but I'm super excited to read, being presented at this year's conference. That conference also has tutorials on a range of different subjects, so it's also worth looking at the various tutorials there. At last year's conference, Arvind Narayanan presented this amazing tutorial on quantitative fairness metrics, and why they're not a one-size-fits-all solution, why there are trade-offs between them, why you can't just take one of these definitions, optimize for it, and call it quits. So I definitely recommend checking that out.

Some other places that are worth looking for resources on this: the AI Now Institute, which was co-founded by Kate Crawford, who is also here at Microsoft Research, and Meredith Whittaker, who is at Google, has some incredibly awesome resources. They've put out a number of white papers and reports over the past couple of years that really get at the crux of why these are complicated socio-technical issues. So I strongly recommend reading pretty much everything that they put out. I would also recommend checking out some of the material put out by Data & Society, an organization here in New York led by Danah Boyd; they too have a number of really interesting things that you can read about these different topics. Then the final thing I want to emphasize is the Partnership on AI, which was formed a couple of years ago by Microsoft and a bunch of other companies working in this space to foster cross-company collaboration on these complicated societal issues that relate to AI and machine learning. The Partnership has been really ramping up over the past couple of years, and they also have some good resources that are worth checking out.

Sam Charrington: [00:46:22] Oh, that's great. That is a great list that will keep us busy for a while. Hanna, thank you so much for taking the time to chat with us. It was really a great conversation, and I appreciate it.

Hanna Wallach: [00:46:34] No problem. Thank you for having me. This has been really great.

Sam Charrington: [00:46:38] Awesome, thank you.
Bits & Bytes

Facebook and Google researchers build RL framework to study language evolution. Their research paper proposes a computational framework that allows agents to interact in a series of games, and uses it to demonstrate that symmetric communication protocols (i.e. languages) emerge and evolve without any innate, explicit mechanisms built into the agents.

Google releases synthetic speech dataset to help researchers combat "deep fakes." Google has published a dataset of synthetic speech containing thousands of phrases spoken by deep learning based text-to-speech models. By training models on both real and computer-generated speech, researchers can develop systems that learn to distinguish between the two.

Carbon Relay to optimize energy efficiency in data centers with AI. Carbon Relay launched a new data center energy management product that uses deep reinforcement learning and augmented intelligence to offer customers energy efficiency improvements. Google announced its success with a similar internal project last year.

AWS open sources Neo-AI project to accelerate ML on edge devices. The project optimizes TensorFlow, MXNet, PyTorch, ONNX, and XGBoost models to perform at up to twice the speed of the original model, with no loss in accuracy, on multiple hardware platforms. It's based on the TVM and Treelite compilers developed at the University of Washington.

Microsoft and MIT work to detect 'blind spots' in self-driving cars. A model developed by MIT and Microsoft researchers identifies instances where autonomous cars have learned actions from training examples that could cause real-world errors on the road.

Amazon facial-identification software used by police falls short on tests for accuracy and bias. AWS may be digging itself a deeper and deeper hole as it attempts to refute claims of bias against its facial-recognition software, Rekognition, which is marketed to local and federal law enforcement as a crime-fighting tool yet struggles to pass basic tests of accuracy.

Spell expands cloud AI platform. The Spell platform uses Kubernetes to automatically scale models as necessary and provides metrics, monitoring, and logs for everything running in real time. Its latest edition adds team and collaboration features. The company also announced new funding; see below.

Dollars & Sense

Augury, an Israel-based start-up that provides an AI solution to predict industrial equipment failure, raised $25M in Series C funding
Aureus Analytics, a predictive analytics platform for the insurance industry, has secured $3.1M in funding
Mimiro, a London, UK-based ML platform for analyzing the risk of financial crime, raised $30M in Series B funding
Carbon Relay, a Boston and Washington, D.C.-based company, raised $5M to tackle data center cooling with AI
Verbit Software Ltd, a provider of automated video and speech transcription services powered by AI, has raised a $23M round of funding led by Viola Ventures
Cinnamon AI, a provider of AI solutions, has secured $15M in Series B funding
Sonasoft Corp. announced that it has signed a letter of intent to acquire Hotify
Rover180 has acquired Vemity, an Indiana-based AI automation and ML company
Sherpa.ai, a Palo Alto, California-based provider of an AI-powered digital predictive assistant, raised $8.5M in Series A funding
AInnovation, a Chinese AI solutions provider, raised approximately $60M in Series A and A+ financing rounds
Zeta Global has acquired Silicon Valley-based AI company Temnos
Adjust announced that it has entered into a definitive agreement to acquire cybersecurity and AI start-up Unbotify
Spell closed $15M in new funding from Eclipse Ventures and Two Sigma Ventures

To receive the Bits & Bytes to your inbox, subscribe to our Newsletter.
Bits & Bytes

Google introduces Feast: an open source feature store for ML. GO-JEK and Google announced the release of Feast, which allows teams to manage, store, and discover features to use for ML projects.

Amazon CEO Jeff Bezos is launching a new conference dedicated to AI. The new AI-specific conference, re:MARS, will be held in Las Vegas between June 4th and 7th this year. Should be an interesting event.

Mayo Clinic research uses AI for early detection of silent heart disease. A Mayo Clinic study finds that applying AI to electrocardiogram (EKG) test results offers a simple, affordable early indicator of asymptomatic left ventricular dysfunction, a precursor to heart failure.

Microsoft announces ML.NET 0.9. Microsoft's open-source and cross-platform ML framework, ML.NET, was updated to version 0.9. New and updated features focus on expanded model interpretability capabilities, GPU support for ONNX models, new Visual Studio project templates in preview, and more.

Intel and Alibaba team up on new AI-powered 3D athlete tracking technology. At CES 2019, Intel and Alibaba announced a new collaboration to develop AI-powered 3D athlete tracking technology to be deployed at the 2020 Olympic Games.

Baidu unveils open source edge computing platform and AI boards. OpenEdge, an open source computing platform, enables developers to build edge applications with more flexibility. The company also announced new AI hardware development platforms: the BIE-AI-Box, developed with Intel for in-car video analysis, and the BIE-AI-Board, co-developed with NXP for object classification.

Qualcomm shows off an AI-equipped car cockpit at CES 2019. At CES, Qualcomm introduced the third generation of its Snapdragon Automotive Cockpit Platforms. The upgraded version covers various aspects of the in-car experience, from voice-activated interfaces to traditional navigation systems. Their keynote featured a nice demo of "pedestrian intent prediction" based on various computer vision techniques, including object detection and pose estimation.

Dollars & Sense

Fractal Analytics, an AI firm based in India which, among other things, owns Qure.ai (see my interview with CEO Prashant Warier), raised $200M from private equity investor Apax
Standard has acquired Explorer.ai, a mapping and computer vision start-up
AnyVision, an Israeli AI-based object recognition company, has raised $15M from Lightspeed Venture Partners
Spell, an NYC-based AI and ML platform startup, raised $15M
HyperScience, an edge ML company, has raised $30M
WeRide.ai, a Chinese autonomous driving technology specialist, raised Series A funding from SenseTime Technology and ABC International
Exscientia, a UK-based AI-driven drug discovery company, has raised $26 million
CrowdAnalytix raises $40 million in strategic investment for crowdsourced AI algorithms

To receive the Bits & Bytes to your inbox, subscribe to our Newsletter.
Bits & Bytes

Microsoft leads the AI patent race. According to EconSight research findings, Microsoft leads the AI patent race going into 2019, with 697 patents that the firm classifies as having a significant competitive impact as of November 2018. Out of the top 30 companies and research institutions defined by EconSight in their recent analysis, Microsoft has created 20% of all patents in the global group of patent-producing companies and institutions.

AI hides data from its creators to cheat at its appointed task. Research from Stanford and Google found that an ML agent intended to transform aerial images into street maps and back was hiding information it would need later.

Tech Mahindra launches GAiA for enterprises. GAiA is the first commercial version of the open source Acumos platform, explored in detail in my conversation with project sponsor Mazin Gilbert about a year ago.

Taiwan AI Labs and Microsoft launch AI platform to facilitate genetic analysis. The new AI platform, "TaiGenomics," utilizes AI techniques to process, analyze, and draw inferences from vast amounts of medical and genetic data provided by patients and hospitals.

Google to open AI lab in Princeton. The AI lab will comprise a mix of faculty members and students. Elad Hazan and Yoram Singer, who both work at Google and Princeton and are co-developers of the AdaGrad algorithm, will lead the lab. The group will focus on developing efficient methods for faster training.

IBM designs AI-enabled fingernail sensor to track diseases. This tiny, wearable fingernail sensor can track disease progression and share details on medication effectiveness for Parkinson's disease and cardiovascular health.

ZestFinance and Microsoft collaborate on AI solution for credit underwriting. Financial institutions will be able to use the Zest Automated Machine Learning (ZAML) tools to build, deploy, and monitor credit models using the Microsoft Azure cloud and ML Server.

Dollars & Sense

Sophia Genetics, a Swiss startup, raises $77M to expand its AI diagnostic platform
Baraja, a LiDAR start-up, has raised $32M in a Series A round of funding
Semiconductor firm QuickLogic announced that it has acquired SensiML, a specialist in ML for IoT applications
Donnelley Financial Solutions announced the acquisition of eBrevia, a provider of AI-based data extraction and contract analytics software solutions
Graphcore, a UK-based AI chipmaker, has secured $200M in funding; investors include BMW Ventures and Microsoft
Dataiku Inc, offering an enterprise data science and ML platform, has raised $101M in Series C funding
Ada, a Toronto-based company focused on automating customer service, has raised $19M in funding

To receive the Bits & Bytes to your inbox, subscribe to our Newsletter.
Bits and Bytes Google won’t renew its military AI contract. According to company sources, Google is planning to close the military AI project after the current contract expires in March 2019. Google staff had expressed their unhappiness over project Maven earlier this year. Nvidia Introduces HGX-2 for HPC and AI. Nvidia has introduced a unified computing platform HGX-2 for both artificial intelligence and high-performance The new cloud server platform allows high-precision calculations for scientific computing and simulations and will also enable AI training and inference. Nvidia Jetson platform goes GA. Nvidia announced the general availability of Nvidia Isaac, which brings AI capabilities to robots for manufacturing, logistics, agriculture, construction and many other industries. The Isaac platform includes new hardware, software, and a realistic robot simulator. Qualcomm reveals new XR platform. Qualcomm introduced its new dedicated Extended Reality (XR) platform Qualcomm® Snapdragon™ XR1. The XR1 platform includes an on-device AI engine so that Augmented Reality (AR) developers can take advantage of AI features like better pose prediction and object classification. Microsoft’s AI bot also calls humans, but only in Chinese. After Google’s Duplex demo, which showed an AI calling to make a reservation and conversing with an employee, Microsoft CEO Satya Nadella demoed a similar service at an event in London. Unlike Duplex, which is just a demo at this point, Microsoft says that thousands of users in China have conversed with its AI, called Xiaoice and it can also call for the conversation. Intel AI Lab open-sources deep NLP library. Intel AI Lab has open-sourced a library for deep-learning-based natural language processing to help researchers and developers create conversational agents like chatbots and virtual assistants. Facebook researchers demonstrate musical style transfer. Researchers from Facebook Artificial Intelligence Research (FAIR) have developed an AI system that can translate music between different styles. Particularly impressive was the ability to translate a whistled version of the Raiders of the Lost Ark theme song to a variety of instruments and classical styles. The accompanying paper, A Universal Music Translation Network, is available on arXiv. Dollars & Sense Flock, a London-based startup focused on the use of real-time data in insurance, has raised a £2.25M Seed funding ForwardX, maker of Ovis, the self-driving suitcase you never knew you needed, raised $10M Series A funding Chinese facial recognition technology developer SenseTime Group Ltd said it has raised $620 million in a second round of funding Kneron, provider of edge artificial intelligence (AI) solutions, announced completion of their series A1 financing of US$18 millionled by Horizons Ventures Weights & Biases, a San Francisco, CA-based enterprise AI platform, received $5m in Series A funding Kontrol Energy Corp, a leader in the energy efficiency market announced acquisition of iDimax will be adding Artificial Intelligence (AI) across its energy software platform PayPal announced that it has acquired Jetlore, an artificial intelligence-powered prediction platform SafeToNet, the UK-based cyber safety company, has acquired the Toronto-based artificial intelligence (AI) and natural language processing startup, VISR Sign up for our Newsletter to receive the Bits & Bytes weekly to your inbox.