Bits & Bytes

Microsoft leads the AI patent race. According to EconSight research, Microsoft leads the AI patent race going into 2019 with 697 patents that the firm classifies as having a significant competitive impact as of November 2018. Among the top 30 companies and research institutions in EconSight's recent analysis, Microsoft has created 20% of all patents in the global group of patent-producing companies and institutions.

AI hides data from its creators to cheat at its appointed task. Researchers from Stanford and Google found that an ML agent intended to transform aerial images into street maps and back was hiding information it would need later.

Tech Mahindra launches GAiA for enterprises. GAiA is the first commercial version of the open source Acumos platform, explored in detail in my conversation with project sponsor Mazin Gilbert about a year ago.

Taiwan AI Labs and Microsoft launch AI platform to facilitate genetic analysis. The new AI platform "TaiGenomics" utilizes AI techniques to process, analyze, and draw inferences from vast amounts of medical and genetic data provided by patients and hospitals.

Google to open AI lab in Princeton. The AI lab will comprise a mix of faculty members and students. Elad Hazan and Yoram Singer, who both work at Google and Princeton and are co-developers of the AdaGrad algorithm, will lead the lab. The group will focus on developing efficient methods for faster training.

IBM designs AI-enabled fingernail sensor to track diseases. This tiny, wearable fingernail sensor can track disease progression and share details on medication effectiveness for Parkinson's disease and cardiovascular health.

ZestFinance and Microsoft collaborate on AI solution for credit underwriting. Financial institutions will be able to use the Zest Automated Machine Learning (ZAML) tools to build, deploy, and monitor credit models using the Microsoft Azure cloud and ML Server.

Dollars & Sense

Swiss startup Sophia Genetics raises $77M to expand its AI diagnostic platform

Baraja, a LiDAR startup, has raised $32M in a Series A round of funding

Semiconductor firm QuickLogic announced that it has acquired SensiML, a specialist in ML for IoT applications

Donnelley Financial Solutions announced the acquisition of eBrevia, a provider of AI-based data extraction and contract analytics software

Graphcore, a UK-based AI chipmaker, has secured $200M in funding; investors include BMW Ventures and Microsoft

Dataiku Inc, offering an enterprise data science and ML platform, has raised $101M in Series C funding

Ada, a Toronto-based company focused on automating customer service, has raised $19M in funding

To receive the Bits & Bytes to your inbox, subscribe to our Newsletter.
Last week on the podcast I interviewed Clare Gollnick, CTO of Terbium Labs, about the reproducibility crisis in science and its implications for data scientists. We also got into an interesting conversation about the philosophy of data, a topic I hadn't previously given much thought. The interview seemed to really resonate with listeners, judging by the number of comments we've received via the show notes page and Twitter, and I think there are several reasons for this.

I'd recommend listening to the interview if you haven't already. It's incredibly informative, and Clare does an excellent job explaining some of the main points of the reproducibility crisis. The short of it is that many researchers in the natural and social sciences report not being able to reproduce each other's findings. A 2016 Nature survey found that more than 70% of researchers have tried and failed to reproduce another scientist's experiments, and more than half have failed to reproduce their own. This concerning finding has far-reaching implications for the way scientific studies are performed.

Gollnick suggests that one contributing factor is "p-hacking": examining one's experimental data until patterns are found that meet the criteria for statistical significance, before determining a specific hypothesis about the underlying causal relationship. P-hacking is also known as "data fishing" for good reason: you're working backward from your data to a pattern, which breaks the assumptions upon which statistical significance is determined in the first place.

Clare points out that data fishing is exactly what machine learning algorithms do, though: they work backward from data to patterns or relationships. Data scientists can thus fall victim to the same errors as natural scientists. P-hacking in the sciences is particularly similar to developing overfitted machine learning models. Fortunately for data scientists, it is well understood that cross-validation, by which a hypothesis is generated on a training dataset and then tested on a validation dataset, is a necessary practice. As Gollnick points out, testing on the validation set is a lot like making a very specific prediction that's unlikely to occur unless your hypothesis is true, which is essentially the scientific method at its purest.

Beyond the sciences, there's growing concern about a reproducibility crisis in machine learning itself. A recent blog post by Pete Warden speaks to some of the core reproducibility challenges faced by data scientists and other practitioners. Warden points to the iterative nature of current approaches to machine and deep learning, and to the fact that data scientists cannot easily record their steps through each iteration. The deep learning stack also has many moving parts, and changes in any of its layers, whether the framework, the GPU drivers, or the training and validation datasets, can affect results. Finally, with opaque models like deep neural networks, it's difficult to understand the root cause of differences between expected and observed results. These problems are compounded by the fact that many published papers fail to explicitly mention their simplifying assumptions or implementation details, making it harder for others to reproduce their work. Efforts to reproduce deep learning results are further confounded by the fact that we really don't know why, when, or to what extent deep learning works.
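To make the data-fishing point concrete, here's a minimal sketch in Python (a hypothetical example of my own, assuming numpy and scikit-learn are available; nothing here is from the episode). It fabricates pure noise, scans hundreds of candidate features for the one that best "explains" the labels, and shows that the resulting pattern looks convincing on the training data but collapses on a held-out validation set:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Pure noise: 200 samples, 500 candidate features, random binary labels.
# Any "pattern" found here is spurious by construction.
X = rng.normal(size=(200, 500))
y = rng.integers(0, 2, size=200)

X_train, X_val, y_train, y_val = train_test_split(
    X, y, test_size=0.5, random_state=0)

# "Data fishing": scan every feature and keep the one most correlated
# with the training labels, a pattern chosen only after seeing the data.
scores = [abs(np.corrcoef(X_train[:, j], y_train)[0, 1]) for j in range(500)]
best = int(np.argmax(scores))

# Fit on the fished feature: training accuracy looks better than chance.
model = LogisticRegression().fit(X_train[:, [best]], y_train)
print("train accuracy:", model.score(X_train[:, [best]], y_train))

# The held-out set plays the role of the specific, falsifiable prediction:
# on fresh data the "discovery" should fall back to roughly 50%.
print("val accuracy:", model.score(X_val[:, [best]], y_val))
```

The validation set is what rescues this workflow: because it played no part in selecting the feature, it delivers the honest verdict that the pattern was noise all along.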
It's this lack of understanding that led Google's Ali Rahimi, in an award acceptance speech at the 2017 NIPS conference, to liken modern machine learning to alchemy. He explained that while alchemy gave us metallurgy, modern glass making, and medications, alchemists also believed they could cure illnesses with leeches and transmute base metals into gold. Similarly, while deep learning has given us incredible new ways to process data, Rahimi called for the systems responsible for critical decisions in healthcare and public policy to be "built on top of verifiable, rigorous, thorough knowledge."

Gollnick and Rahimi are united in advocating for a deeper understanding of how and why the models we use work. Doing so might mean a trip back to basics, as far back as the foundations of the scientific method. Gollnick mentioned in our conversation that she's been fascinated recently with the "philosophy of data," that is, the philosophical exploration of scientific knowledge, what it means to be certain of something, and how data can support these pursuits. It stands to reason that any thought exercise that forces us to face tough questions about issues like explainability, causation, and certainty could be of great value as we broaden our application of modern machine learning methods. Guided by the work of philosophers of science like Karl Popper and Thomas Kuhn, and reaching as far back as David Hume, this type of deep introspection into our methods could prove useful for the field of AI as a whole.

What do you think? Does AI have a reproducibility crisis? Should we bother philosophizing about the new tools we've made, or just get to building with them? Sign up for our Newsletter to receive this weekly to your inbox.
Bits & Bytes

Amazon to design its own AI chips for Alexa and Echo devices. This announcement follows similar moves by rivals Apple and Google, both of which have developed custom AI silicon. Amazon, which reportedly has nearly 450 people on staff with chip expertise, sees custom AI chips as a way to make its AI devices faster and more efficient.

Google's Cloud TPU AI accelerators now available to the public. Cloud TPUs are custom chips optimized for accelerating ML workloads in TensorFlow. Each boasts up to 180 teraflops of computing power and 64 gigabytes of high-bandwidth memory. Last week Google announced their beta availability via the Google Cloud. Cloud TPUs are available in limited quantities today and cost $6.50 per TPU-hour; at that rate a full day of training comes to 24 × $6.50 = $156, so users can train a ResNet-50 neural network on ImageNet in less than a day for under $200.

Finding pixie dust unavailable, Oracle sprinkles AI buzzword on cloud press release. The company applied "AI" to its Cloud Autonomous Services, including its Autonomous PaaS, Autonomous Database, and Autonomous Data Warehouse products, to make them "self-driving, self-securing and self-repairing" software. Oh boy! In other news, the company ran the same play for a suite of AI-powered finance applications.

LG to introduce new AI tech for its smartphones. Following the launch of its ThinQ and DeepThinQ platforms earlier this year, as previously noted in this newsletter, LG will introduce new Voice AI and Vision AI features for its flagship V30 smartphone at the gigantic Mobile World Congress event next week.

Applitools updates AI-powered visual software testing platform. I hadn't heard of this company before, but it's a pretty cool use case. The company released an update to its Applitools Eyes product, a tool that lets software development and test groups ensure a visually consistent user experience as their application evolves. The company uses AI and computer vision techniques to detect changes to rendered web pages and applications and report the ones that shouldn't be there.

Dollars & Sense

OWKIN, a company using transfer learning to accelerate drug discovery and development, closes an $11M Series A financing

Ditto, a UK AI startup, raises £4 million to bring the expert system back via "software advisor" bots that aim to replicate human expertise and accountability

Palo Alto-based Uncommon.co raises $18M in Series A funding for Uncommon IQ, its AI-powered talent marketplace

Sign up for our Newsletter to receive the Bits & Bytes weekly to your inbox.