Running out of gift ideas and need a little inspiration? The TWIML team has you covered! We put together a Holiday Gift Guide featuring some of our favorite AI-enabled products. It’s probably no surprise if you listen to the podcast, but AI has found its way into a bunch of different areas. This is just a small sampling of the nifty gadgets and services that caught our attention this holiday season. Surprise the AI enthusiast (or non-enthusiast) in your life with:

The Drone: ActiveTrack + Advanced Pilot Assist System
Sam felt this list wouldn’t be complete without at least one drone, and this is the one on his wish list. The DJI Mavic Air 2 has all the usual drone things: good sensors, a nice camera, etc., but what makes this drone unique is the stellar AI software. The ActiveTrack 3.0 and Advanced Pilot Assist System 3.0 features allow you to focus on a subject and have it tracked and filmed while the drone is in flight. There’s a pretty good review of the drone here.

Personal Trainer for Running
For those of us missing the gym, running is often the only refuge. We like the Vi app because it uses AI to personalize workouts, offer coaching, and give daily challenges, all helpful qualities when you’re trying to build healthy habits!

Coding Robots for Kids & Adults
This one is for the kiddos! (And anyone learning to code.) We’re fans of the little Root Coding Robot, which complements any level of coding experience. It’s super interactive, with three learning levels full of lessons, projects, and activities. If you’re curious about how iRobot is using AI, check out this TWIML interview on Re-Architecting Data Science at iRobot with Angela Bassa, the company’s Global Head of Analytics & Data Science. For older kids or young-at-heart adults, DJI’s RoboMaster S1 is a neat choice too and lets users program with simple AI building blocks like person detection.

Data-based Skincare
In an effort to create a skincare program based on data science, Proven established The Skin Genome Project™, which became the most comprehensive skincare database you can find and winner of MIT’s 2018 Artificial Intelligence Award. With a database that accounts for over 20,000 skincare ingredients, 100,000 products, 8 million testimonials, and even the climate you live in, they’re able to curate skincare formulas based on your skin.

We hope you enjoy our top picks!
The issue of bias in AI was the subject of much discussion in the AI community last week. The publication of PULSE, a machine learning model by Duke University researchers, sparked a great deal of it. PULSE proposes a new approach to the image super-resolution problem, i.e. generating a faithful higher-resolution version of a low-resolution image. In short, PULSE works by using a novel technique to efficiently search the space of high-resolution artificial images generated by a GAN and identify ones that downscale to the low-resolution input image (a rough sketch of this search appears below). This is in contrast to previous approaches to solving this problem, which work by incrementally upscaling the low-resolution images and which are typically trained in a supervised manner with low- and high-resolution image pairs. The images identified by PULSE are higher resolution and more realistic than those produced by previous approaches, and without the latter’s characteristic blurring of detailed areas.

However, what the community quickly identified was that the PULSE method didn’t work so well on non-white input images. An example using a low-res image of President Obama was one of the first to make the rounds, and Robert Ness used a photo of me to create this example:

I’m going to skip a recounting of the unfortunate Twitter firestorm that ensued following the model’s release. For that background, Khari Johnson provides a thoughtful recap over at VentureBeat, as does Andrey Kurenkov over at The Gradient. Rather, I’m going to riff a bit on the idea of where bias comes from in AI systems. Specifically, in today’s episode of the podcast featuring my discussion with AI ethics researcher Deb Raji, I note, “I don’t fully get why it’s so important to some people to distinguish between algorithms being biased and data sets being biased.”

Bias in AI systems is a complex topic, and the idea that more diverse data sets are the only answer is an oversimplification. Even in the case of image super-resolution, one can imagine an approach based on the same underlying dataset that exhibits less biased behavior, for example by adding constraints to the loss or search function, or by otherwise weighting the kinds of errors we see here more heavily. See AI artist Mario Klingemann’s Twitter thread for his experiments in this direction. Not electing to consider robustness to dataset biases is a decision that the algorithm designer makes. All too often, the “decision” to trade accuracy for a minority subgroup against better overall accuracy is an implicit one, made without sufficient consideration. But what if, as a community, our assessment of an AI system’s performance was expanded to consider notions of bias as a matter of course?

Some in the research community choose to abdicate this responsibility by taking the position that there is no inherent bias in AI algorithms and that it is the responsibility of the engineers who use these algorithms to collect better data. However, as a community, each of us, and especially those with influence, has a responsibility to ensure that technology is created mindfully, with an awareness of its impact. On this note, it’s important to ask the more fundamental question of whether a less biased version of a system like PULSE should even exist, and who might be harmed by its existence. See Meredith Whittaker’s tweet and my conversation with Abeba Birhane on Algorithmic Injustice and Relational Ethics for more on this.
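To make the search idea above a bit more concrete, here is a minimal sketch, in PyTorch, of the kind of GAN latent-space search PULSE performs: optimize a latent code so that the generated high-resolution image downscales to the observed low-resolution input. The `generator`, `lr_image`, and `latent_dim` names are hypothetical placeholders, and the actual PULSE implementation adds further constraints (for example, keeping the latent code close to the region the GAN was trained on) that are omitted here.

```python
import torch
import torch.nn.functional as F

def latent_search(generator, lr_image, latent_dim=512, steps=500, lr=0.1):
    """Search a GAN's latent space for a high-res image that downscales to lr_image."""
    z = torch.randn(1, latent_dim, requires_grad=True)
    optimizer = torch.optim.Adam([z], lr=lr)
    target_size = lr_image.shape[-1]  # spatial size of the low-res target

    for _ in range(steps):
        optimizer.zero_grad()
        hr_candidate = generator(z)  # synthesize a candidate high-res image
        downscaled = F.interpolate(hr_candidate, size=target_size,
                                   mode="bicubic", align_corners=False)
        # Downscaling-consistency loss: the candidate, once shrunk, should
        # match the observed low-resolution image.
        loss = F.mse_loss(downscaled, lr_image)
        loss.backward()
        optimizer.step()

    return generator(z).detach()
```

The commented loss term is also where the kinds of interventions discussed above would enter: additional constraints or reweighted error terms could be added to the same objective without changing the underlying dataset.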
A full exploration of the many issues raised by the PULSE model is far beyond the scope of this article, but there are many great resources out there that might be helpful in better understanding these issues and confronting them in our work. First off, there are the videos from the tutorial on Fairness, Accountability, Transparency, and Ethics in Computer Vision presented by Timnit Gebru and Emily Denton. CVPR organizers regard this tutorial as “required viewing for us all.” Next, Rachel Thomas has composed a great list of AI ethics resources on the fast.ai blog. Check out her list and let us know what you find most helpful. Finally, there is our very own Ethics, Bias, and AI playlist of TWIML AI Podcast episodes. We’ll be adding my conversation with Deb to it, and it will continue to evolve as we explore these issues via the podcast. I’d love to hear your thoughts on this. (Thanks to Deb Raji for providing feedback and additional resources for this article!)
Today we’re joined by Yashar Hezaveh, Assistant Professor at the University of Montreal and Research Fellow at the Center for Computational Astrophysics at the Flatiron Institute. Yashar and I caught up to discuss his work on gravitational lensing, which is the bending of light from distant sources due to the effects of gravity. In our conversation, Yashar and I discuss how machine learning can be applied to undistort images, including some of the various techniques used and how the data is prepared to get the best results. We also discuss the intertwined roles of simulation and machine learning in generating images, incorporating other techniques such as domain transfer and GANs, and how he assesses the results of this project. You might have seen the news yesterday that MIT researcher Katie Bouman produced the first image of a black hole. What’s been less reported is that the algorithm she developed to accomplish this is based on machine learning. Machine learning is having a huge impact in the fields of astronomy and astrophysics, and I’m excited to bring you interviews with some of the people innovating in this area. For even more on this topic, I’d also suggest checking out the following interviews: TWIML Talk #117 with Chris Shallue, where we discuss the discovery of exoplanets; TWIML Talk #184 with Viviana Acquaviva, where we explore dark energy and star formation; and, if you want to go way back, TWIML Talk #5 with Joshua Bloom, which provides a great overview of the application of ML in astronomy.