Monday, 30 June 2014
Influential Papers for 2013
Googlers across the company actively engage with the scientific community by publishing technical papers, contributing open-source packages, working on standards, introducing new APIs and tools, giving talks and presentations, participating in ongoing technical debates, and much more. Our publications offer technical and algorithmic advances, feature aspects we learn as we develop novel products and services, and shed light on some of the technical challenges we face at Google. Below are some of the especially influential papers co-authored by Googlers in 2013. In the coming weeks we will be offering a more in-depth look at some of these publications.
Algorithms
Online Matching and Ad Allocation, by Aranyak Mehta [Foundations and Trends in Theoretical Computer Science]
Matching is a classic problem with a rich history and a significant impact, both on the theory of algorithms and in practice. There has recently been a surge of interest in the online version of the matching problem, due to its application in the domain of Internet advertising. The theory of online matching and allocation has played a critical role in the design of algorithms for ad allocation. This monograph provides a survey of the key problems and algorithmic techniques in this area, and provides a glimpse into their practical impact.
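To make the setting concrete, here is a toy Python sketch (ours, not the monograph's) of the classic unweighted online matching problem and the well-known RANKING algorithm, which matches each arriving query to the highest-ranked eligible advertiser that is still free; the survey also covers richer variants such as budgeted AdWords allocation and stochastic arrival models.

import random

# Advertisers are known up front; queries arrive one at a time and each
# must be matched irrevocably on arrival. RANKING fixes one random
# permutation of advertisers and achieves a 1 - 1/e competitive ratio
# in the unweighted case.
def ranking_matching(advertisers, query_stream, eligible):
    # Lower rank = higher priority.
    rank = {a: r for r, a in enumerate(random.sample(advertisers, len(advertisers)))}
    matched = {}  # query -> advertiser, decided at arrival time
    for query in query_stream:
        free = [a for a in eligible(query) if a not in matched.values()]
        if free:
            matched[query] = min(free, key=rank.get)
    return matched

# Hypothetical advertisers, queries, and eligibility lists.
ads = ["sports_shoes", "phone_repair", "travel_deals"]
eligible_ads = {
    "running shoes": ["sports_shoes"],
    "cracked screen": ["phone_repair"],
    "cheap flights": ["travel_deals", "sports_shoes"],
}
print(ranking_matching(ads, eligible_ads.keys(), eligible_ads.get))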
Computer Vision
Fast, Accurate Detection of 100,000 Object Classes on a Single Machine, by Thomas Dean, Mark Ruzon, Mark Segal, Jonathon Shlens, Sudheendra Vijayanarasimhan, Jay Yagnik [Proceedings of IEEE Conference on Computer Vision and Pattern Recognition]
In this paper, we show how to use hash table lookups to replace the dot products in a convolutional filter bank with the number of lookups independent of the number of filters. We apply the technique to evaluate 100,000 deformable-part models requiring over a million (part) filters on multiple scales of a target image in less than 20 seconds using a single multi-core processor with 20GB of RAM.
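As a rough illustration of the general idea only (the sketch uses sign-random-projection hashing as a stand-in for the paper's winner-take-all scheme), each filter can be indexed in a fixed number of small hash tables, so scoring an image patch costs a constant number of lookups regardless of how many filters exist:

import numpy as np

# Toy sketch: hash-table lookups in place of per-filter dot products.
# Only the top-voted filters would need exact evaluation afterwards.
rng = np.random.default_rng(0)
dim, n_filters, n_bands, bits = 64, 1000, 16, 8

filters = rng.normal(size=(n_filters, dim))
projections = rng.normal(size=(n_bands, bits, dim))

def band_keys(vector):
    # One integer key per band, built from the signs of `bits` random projections.
    signs = (np.einsum('bkd,d->bk', projections, vector) > 0).astype(int)
    return [int(''.join(map(str, band)), 2) for band in signs]

tables = [dict() for _ in range(n_bands)]
for fid, f in enumerate(filters):
    for b, key in enumerate(band_keys(f)):
        tables[b].setdefault(key, []).append(fid)

patch = filters[123] + 0.1 * rng.normal(size=dim)  # a patch resembling filter 123
votes = np.zeros(n_filters)
for b, key in enumerate(band_keys(patch)):  # n_bands lookups, not n_filters dot products
    for fid in tables[b].get(key, []):
        votes[fid] += 1
print(int(votes.argmax()))  # very likely 123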
Distributed Systems
Photon: Fault-tolerant and Scalable Joining of Continuous Data Streams, by Rajagopal Ananthanarayanan, Venkatesh Basker, Sumit Das, Ashish Gupta, Haifeng Jiang, Tianhao Qiu, Alexey Reznichenko, Deomid Ryabkov, Manpreet Singh, Shivakumar Venkataraman [SIGMOD]
In this paper, we describe Photon, a geographically distributed system for joining multiple continuously flowing streams of data in real time with high scalability and low latency. The streams may be unordered or delayed. Photon fully tolerates infrastructure degradation and datacenter-level outages without any manual intervention while joining every event exactly once. Photon is currently deployed in production, processing millions of events per minute at peak with an average end-to-end latency of less than 10 seconds.
Omega: flexible, scalable schedulers for large compute clusters, by Malte Schwarzkopf, Andy Konwinski, Michael Abd-El-Malek, John Wilkes [SIGOPS European Conference on Computer Systems (EuroSys)]
Omega addresses the need for increasing scale and speed in cluster schedulers using parallelism, shared state, and lock-free optimistic concurrency control. The paper presents a taxonomy of design approaches and evaluates Omega using simulations driven by Google production workloads.
Human-Computer Interaction
FFitts Law: Modeling Finger Touch with Fitts' Law, by Xiaojun Bi, Yang Li, Shumin Zhai [Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI 2013)]
Fitts’ law is a cornerstone of graphical user interface research and evaluation. It can precisely predict cursor movement time given an on-screen target’s location and size. In the era of finger-touch-based mobile computing, however, the conventional form of Fitts’ law loses its power because targets are often smaller than the finger width. Researchers at Google, Xiaojun Bi, Yang Li, and Shumin Zhai, devised finger Fitts’ law (FFitts law) to address this fundamental problem.
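For readers unfamiliar with the model, here is a minimal sketch of the conventional Shannon formulation of Fitts’ law (the constants below are illustrative, not fitted values from the paper):

import math

def fitts_movement_time(distance, width, a=0.2, b=0.1):
    # MT = a + b * log2(D / W + 1); a and b are empirically fitted constants
    # (illustrative values here) and the log term is the index of difficulty in bits.
    return a + b * math.log2(distance / width + 1.0)

# Shrinking a 10 mm target to 4 mm at the same 40 mm distance raises the
# predicted movement time -- and small targets are exactly where finger
# touch breaks the conventional model.
print(fitts_movement_time(40, 10))  # ~0.43 s with these illustrative constants
print(fitts_movement_time(40, 4))   # ~0.55 s

Roughly speaking, FFitts law keeps this form but derives an effective target width from the distribution of finger touch points, which restores predictive power once targets shrink below the finger width.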
Information Retrieval
Top-k Publish-Subscribe for Social Annotation of News, by Alexander Shraer, Maxim Gurevich, Marcus Fontoura, Vanja Josifovski [Proceedings of the 39th International Conference on Very Large Data Bases]
The paper describes how scalable, low latency content-based publish-subscribe systems can be implemented using inverted indices and modified top-k document retrieval algorithms. The feasibility of this approach is demonstrated in the application of annotating news articles with social updates (such as Google+ posts or tweets). This application is cast as publish-subscribe, where news articles are treated as subscriptions (continuous queries) and social updates as published items with a high update frequency.
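A toy sketch of that inversion (illustrative only, not the paper's actual scoring or index maintenance), with hypothetical article and update data:

from collections import defaultdict
import heapq

# Articles become indexed "subscriptions"; each incoming social update is
# run as a query against the inverted index, updating per-article top-k heaps.
K = 3
inverted_index = defaultdict(list)  # term -> [article_id, ...]
top_k = defaultdict(list)           # article_id -> min-heap of (score, update)

def add_subscription(article_id, terms):
    for term in set(terms):
        inverted_index[term].append(article_id)

def publish(update_text, score):
    matched = set()
    for term in update_text.lower().split():
        matched.update(inverted_index.get(term, []))
    for article_id in matched:
        heap = top_k[article_id]
        heapq.heappush(heap, (score, update_text))
        if len(heap) > K:
            heapq.heappop(heap)  # keep only the K highest-scoring updates

add_subscription("article-42", ["world", "cup", "final"])
publish("watching the world cup final tonight", score=0.9)
print(top_k["article-42"])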
Machine Learning
Ad Click Prediction: a View from the Trenches, by H. Brendan McMahan, Gary Holt, D. Sculley, Michael Young, Dietmar Ebner, Julian Grady, Lan Nie, Todd Phillips, Eugene Davydov, Daniel Golovin, Sharat Chikkerur, Dan Liu, Martin Wattenberg, Arnar Mar Hrafnkelsson, Tom Boulos, Jeremy Kubica [KDD]
How should one go about making predictions in extremely large scale production systems? We provide a case study for ad click prediction, and illustrate best practices for combining rigorous theory with careful engineering and evaluation. The paper contains a mix of novel algorithms, practical approaches, and some surprising negative results.
Learning Kernels Using Local Rademacher Complexity, by Corinna Cortes, Marius Kloft, Mehryar Mohri [Advances in Neural Information Processing Systems (NIPS 2013)]
This paper shows how the notion of local Rademacher complexity, which leads to sharp learning guarantees, can be used to derive algorithms for the important problem of learning kernels. It also reports the results of several experiments with these algorithms which yield performance improvements in some challenging tasks.
Efficient Estimation of Word Representations in Vector Space, by Tomas Mikolov, Kai Chen, Greg S. Corrado, Jeffrey Dean [ICLR Workshop 2013]
We describe a simple and speedy method for training vector representations of words. The resulting vectors naturally capture the semantics and syntax of word use, such that simple analogies can be solved with vector arithmetic. For example, the vector difference between 'man' and 'woman' is approximately equal to the difference between 'king' and 'queen', and vector displacements between any given country's name and its capital are aligned. We provide an open-source implementation as well as pre-trained vector representations at http://word2vec.googlecode.com
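A toy illustration of the analogy arithmetic (the random vectors below are stand-ins for real trained embeddings such as the released word2vec vectors):

import numpy as np

# With real embeddings the nearest neighbor of king - man + woman is "queen";
# with these random stand-ins the printed answer is arbitrary.
rng = np.random.default_rng(0)
vocab = ["man", "woman", "king", "queen", "paris", "france"]
vectors = {w: rng.normal(size=300) for w in vocab}

def cosine(a, b):
    return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

def most_similar(query_vec, exclude):
    return max((w for w in vectors if w not in exclude),
               key=lambda w: cosine(query_vec, vectors[w]))

analogy = vectors["king"] - vectors["man"] + vectors["woman"]
print(most_similar(analogy, exclude={"king", "man", "woman"}))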
Large-Scale Learning with Less RAM via Randomization, by Daniel Golovin, D. Sculley, H. Brendan McMahan, Michael Young [Proceedings of the 30th International Conference on Machine Learning (ICML)]
We show how a simple technique -- using limited precision coefficients and randomized rounding -- can dramatically reduce the RAM needed to train models with online convex optimization methods such as stochastic gradient descent. In addition to demonstrating excellent empirical performance, we provide strong theoretical guarantees.
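A minimal sketch of the randomized-rounding idea (illustrative, not the paper's exact fixed-point encoding), using a coarse grid of roughly 13 fractional bits:

import math
import random

def randomized_round(value, resolution=1.0 / (1 << 13)):
    # Store a coefficient on a coarse grid, rounding up or down with
    # probability proportional to proximity, so the stored value is an
    # unbiased estimate of the true one.
    scaled = value / resolution
    lower = math.floor(scaled)
    frac = scaled - lower
    rounded = lower + (1 if random.random() < frac else 0)
    return rounded * resolution

print(randomized_round(0.123456789))  # close to the input; exact only in expectation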
Machine Translation
Source-Side Classifier Preordering for Machine Translation, by Uri Lerner, Slav Petrov [Proc. of EMNLP '13]
When translating from one language to another, it is important to not only choose the correct translation for each word, but to also put the words in the correct word order. In this paper we present a novel approach that uses a syntactic parser and a feature-rich classifier to perform long-distance reordering. We demonstrate significant improvements over alternative approaches on a large number of language pairs.
Natural Language Processing
Token and Type Constraints for Cross-Lingual Part-of-Speech Tagging, by Oscar Tackstrom, Dipanjan Das, Slav Petrov, Ryan McDonald, Joakim Nivre [Transactions of the Association for Computational Linguistics (TACL '13)]
Knowing the parts of speech (verb, noun, etc.) of words is important for many natural language processing applications, such as information extraction and machine translation. Constructing part-of-speech taggers typically requires large amounts of manually annotated data, which is unavailable for many languages and domains. In this paper, we introduce a method that instead relies on a combination of incomplete annotations projected from English with incomplete crowdsourced dictionaries in each target language. The result is a 25 percent error reduction compared to the previous state of the art.
Universal Dependency Annotation for Multilingual Parsing, by Ryan McDonald, Joakim Nivre, Yoav Goldberg, Yvonne Quirmbach-Brundage, Dipanjan Das, Kuzman Ganchev, Keith Hall, Slav Petrov, Hao Zhang, Oscar Tackstrom, Claudia Bedini, Nuria Bertomeu Castello, Jungmee Lee [Association for Computational Linguistics]
This paper discusses a public release of syntactic dependency treebanks (https://code.google.com/p/uni-dep-tb/). Syntactic treebanks are manually annotated data sets containing full syntactic analyses for a large number of sentences (http://en.wikipedia.org/wiki/Dependency_grammar). Unlike other syntactic treebanks, the universal data set normalizes syntactic phenomena across languages wherever possible, producing a harmonized set of multilingual data. Such a resource will help large-scale multilingual text analysis and evaluation.
Networks
B4: Experience with a Globally Deployed Software Defined WAN, by Sushant Jain, Alok Kumar, Subhasree Mandal, Joon Ong, Leon Poutievski, Arjun Singh, Subbaiah Venkata, Jim Wanderer, Junlan Zhou, Min Zhu, Jonathan Zolla, Urs Hölzle, Stephen Stuart, Amin Vahdat [Proceedings of the ACM SIGCOMM Conference]
This paper presents the motivation, design, and evaluation of B4, a Software Defined WAN for our data center to data center connectivity. We present our approach to separating the network’s control plane from the data plane to enable rapid deployment of new network control services. Our first such service, centralized traffic engineering, allocates bandwidth among competing services based on application priority, dynamically shifting communication patterns, and prevailing failure conditions.
Policy
When the Cloud Goes Local: The Global Problem with Data Localization, by Patrick Ryan, Sarah Falvey, Ronak Merchant [IEEE Computer]
Ongoing efforts to legally define cloud computing and regulate separate parts of the Internet are unlikely to address underlying concerns about data security and privacy. Data localization initiatives, led primarily by European countries, could actually bring the cloud to the ground and make the Internet less secure.
Robotics
Cloud-based robot grasping with the google object recognition engine, by Ben Kehoe, Akihiro Matsukawa, Sal Candido, James Kuffner, Ken Goldberg [IEEE Int’l Conf. on Robotics and Automation]
What if robots were not limited by onboard computation, algorithms did not need to be implemented on every class of robot, and model improvements from sensor data could be shared across many robots? With wireless networking and rapidly expanding cloud computing resources, this possibility is quickly becoming reality. We present a system architecture, implemented prototype, and initial experimental data for a cloud-based robot grasping system that incorporates a Willow Garage PR2 robot with onboard color and depth cameras, Google’s proprietary object recognition engine, the Point Cloud Library (PCL) for pose estimation, Columbia University’s GraspIt! toolkit and OpenRAVE for 3D grasping, and our prior approach to sampling-based grasp analysis to address uncertainty in pose.
Security, Cryptography, and Privacy
Alice in Warningland: A Large-Scale Field Study of Browser Security Warning Effectiveness, by Devdatta Akhawe, Adrienne Porter Felt [USENIX Security Symposium]
Browsers show security warnings to keep users safe. How well do these warnings work? We empirically assess the effectiveness of browser security warnings, using more than 25 million warning impressions from Google Chrome and Mozilla Firefox.
Social Systems
Arrival and departure dynamics in Social Networks, by Shaomei Wu, Atish Das Sarma, Alex Fabrikant, Silvio Lattanzi, Andrew Tomkins [WSDM]
In this paper, we consider the natural arrival and departure of users in a social network, and show that the dynamics of arrival, which have been studied in some depth, are quite different from the dynamics of departure, which are not as well studied. We show unexpected properties of a node's local neighborhood that are predictive of departure. We also suggest that, globally, nodes at the fringe are more likely to depart, and subsequent departures are correlated among neighboring nodes in tightly-knit communities.
All the news that's fit to read: a study of social annotations for news reading, by Chinmay Kulkarni, Ed H. Chi [In Proc. of CHI2013]
As news reading becomes more social, how do different types of annotations affect people's selection of news articles? This crowdsourcing experiment shows that, unsurprisingly, strangers' opinions have no persuasive effect, while, surprisingly, annotations from unknown branded companies do. Friend annotations work best: they help users decide what to read and provide social context that improves engagement.
Software Engineering
Does Bug Prediction Support Human Developers? Findings from a Google Case Study, by Chris Lewis, Zhongpeng Lin, Caitlin Sadowski, Xiaoyan Zhu, Rong Ou, E. James Whitehead Jr. [International Conference on Software Engineering (ICSE)]
"Does Bug Prediction Support Human Developers?" was a study that investigated whether software engineers changed their code review habits when presented with information about where bug-prone code might be lurking. Much to our surprise we found out that developer behavior didn't change at all! We went on to suggest features that bug prediction algorithms need in order to fit with developer workflows, which will hopefully result in more supportive algorithms being developed in the future.
Speech Processing
Statistical Parametric Speech Synthesis Using Deep Neural Networks, by Heiga Zen, Andrew Senior, Mike Schuster [Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP)]
Conventional approaches to statistical parametric speech synthesis use decision tree-clustered context-dependent hidden Markov models (HMMs) to represent probability densities of speech given text. This paper examines an alternative scheme in which the mapping from an input text to its acoustic realization is modeled by a deep neural network (DNN). Experimental results show that DNN-based speech synthesizers can produce more natural-sounding speech than conventional HMM-based ones using similar model sizes.
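A minimal sketch of what mapping text to acoustics with a DNN means in practice (layer sizes and feature choices below are illustrative):

import numpy as np

# A feedforward network maps a frame's linguistic feature vector (phoneme
# identity, position, prosodic context, ...) directly to acoustic parameters
# (spectral coefficients, F0, ...), replacing the decision-tree-clustered HMM
# lookup. Weights here are random and untrained; a real system learns them
# from aligned text and audio and drives a vocoder with the outputs.
rng = np.random.default_rng(0)
n_linguistic, n_hidden, n_acoustic = 300, 1024, 127  # illustrative sizes

W1, b1 = 0.01 * rng.normal(size=(n_hidden, n_linguistic)), np.zeros(n_hidden)
W2, b2 = 0.01 * rng.normal(size=(n_acoustic, n_hidden)), np.zeros(n_acoustic)

def acoustic_frame(linguistic_features):
    hidden = np.tanh(W1 @ linguistic_features + b1)
    return W2 @ hidden + b2

print(acoustic_frame(rng.normal(size=n_linguistic)).shape)  # one acoustic vector per frame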
Accurate and Compact Large Vocabulary Speech Recognition on Mobile Devices, by Xin Lei, Andrew Senior, Alexander Gruenstein, Jeffrey Sorensen [Interspeech]
In this paper we describe the neural network-based speech recognition system that runs in real time on Android phones. With a neural network acoustic model replacing the previous Gaussian mixture model, and a compressed language model applied with on-the-fly rescoring, the word error rate is reduced by 27% while the storage requirement is reduced by 63%.
Statistics
Pay by the Bit: An Information-Theoretic Metric for Collective Human Judgment, by Tamsyn P. Waterhouse [Proc CSCW]
There's a lot of confusion around quality control in crowdsourcing. For the broad problem subtype we call collective judgment, I discovered that information theory provides a natural and elegant metric for the value of contributors' work: the mutual information between their judgments and the questions' answers, each treated as a random variable.
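A rough sketch of the metric, using plug-in probability estimates (the judgment data below are hypothetical):

from collections import Counter
from math import log2

def mutual_information(pairs):
    # Empirical mutual information (in bits) between a contributor's judgments
    # and the true answers; `pairs` is a list of (judgment, answer) tuples.
    n = len(pairs)
    joint = Counter(pairs)
    judgments = Counter(j for j, _ in pairs)
    answers = Counter(a for _, a in pairs)
    mi = 0.0
    for (j, a), c in joint.items():
        p_joint = c / n
        p_j, p_a = judgments[j] / n, answers[a] / n
        mi += p_joint * log2(p_joint / (p_j * p_a))
    return mi

# A contributor who always answers correctly conveys more information
# than one whose answers are unrelated to the truth.
print(mutual_information([("cat", "cat"), ("dog", "dog")] * 50))  # ~1 bit
print(mutual_information([("cat", "cat"), ("cat", "dog")] * 50))  # ~0 bits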
Structured Data Management
F1: A Distributed SQL Database That Scales, by Jeff Shute, Radek Vingralek, Bart Samwel, Ben Handy, Chad Whipkey, Eric Rollins, Mircea Oancea, Kyle Littlefield, David Menestrina, Stephan Ellner, John Cieslewicz, Ian Rae, Traian Stancescu, Himani Apte [VLDB]
In recent years, conventional wisdom has been that when you need a highly scalable, high throughput data store, the only viable options are NoSQL key/value stores, and you need to work around the lack of transactional consistency, indexes, and SQL. F1 is a hybrid database we built that combines the strengths of traditional relational databases with the scalability of NoSQL systems, showing it's not necessary to compromise on database functionality to achieve scalability and high availability. The paper describes the F1 system, how we use Spanner underneath, and how we've designed schema and applications to hide the increased commit latency inherent in distributed commit protocols.
Thursday, 26 June 2014
Segmenting Brand and Generic Paid Search Traffic in Google Analytics
Many advertisers with paid search campaigns advertise on queries mentioning their brand (e.g., “Motorola smartphone” for Motorola) and also on generic searches (e.g., “smartphone reviews”). Because the performance metrics for ads shown against brand and generic queries can be vastly different, many advertisers prefer to analyze these two groups separately. For example, all else being equal, searches containing the advertiser’s brand name often have higher clickthrough-rates than those that don’t.
Automatic classification
To make analysis of brand and generic performance as easy as possible, we’re introducing a new feature which automatically identifies brand-aware paid search clicks tracked in Google Analytics. We use a combination of signals (including clickthrough rate, the query text, domain name, and others) to identify query terms which show awareness of your brand. You can review our suggested brand terms and then accept or decline each of them. It’s also easy to add additional brand terms that we’ve missed.
With the resulting list of brand terms, we classify your paid search traffic in GA so that you can split your “paid search” channel into two separate channels: “brand paid search” and “generic paid search”. This can be done both for Multi-Channel Funnels (for attribution purposes) and for the main Google Analytics channel grouping. See this straightforward step-by-step guide to get started.
Industry feedback
Back in 2012, George Michie from the Rimm-Kaufman Group, a leading online marketing agency, called analyzing brand and generic paid search together “the cardinal sin of paid search”. We showed him a preview of our new solution and here’s his reaction:
"I've been arguing for many years that advertisers should look at their brand and generic paid search separately. There are massive differences in overall performance - but also in more specific areas, like attribution and new customer acquisition.
Google Analytics now makes it a lot easier for advertisers to segment brand and generic paid search into separate channels. I'm sure this feature will help many more advertisers measure these important differences - and more importantly, take action on these new insights."
Getting started
Note that this feature works for all paid search advertising, not just Google AdWords. It will roll out to all users in the coming weeks.
To get started, use the step-by-step guide to set up separate brand paid search and generic paid search channels. We’ve already suggested brand terms for every GA view with sufficient paid search traffic.
Posted by: Frank Uyeda, Software Engineer, Google Analytics
Tuesday, 24 June 2014
Introducing the new Google Analytics Partner Gallery
Google Analytics has a vibrant ecosystem of analytics practitioners, advocates, and developers who drive great conversations, learning, and sharing among passionate users. A central part of this ecosystem is our partners, who help users quickly increase the business value of Google Analytics through implementation expertise, analysis, and integrations.
To make it easier to find services and apps that are important to your business, we’ve re-launched the App Gallery as the Partner Gallery, the new destination to find partners and review their offerings. It includes:
- Certified Partners, vetted by Google and meeting rigorous qualification standards. These include agencies and consultancies that offer web analytics implementations, analysis services, and website testing and optimization services.
- Ready-to-use applications that extend Google Analytics in new and exciting ways. These include solutions that help analysts, marketers, IT teams, and executives get the most out of Google Analytics and complement its functionality.
The Partner Gallery includes new features and improvements:
- A brand new look and layout.
- A combined view of both services and apps so you don’t need to visit multiple sites to find a solution.
- New search capabilities and category selection making it easier to filter and find what you’re looking for.
- Sorting of Google Analytics Certified Partners by your location, so you can find partners with an office near you.
- Media assets like screenshots, videos, and case studies that highlight customer success stories and illustrate app features.
- Comments and ratings to review user experiences and provide feedback.
Visit the Partner Gallery to browse partner services and apps. If you’re interested in the Google Analytics Certified Partner or Technology Partner programs, learn how to become a partner.
Pete Frisella, Developer Advocate, Google Analytics Developer Relations team
Thursday, 19 June 2014
New Google Analytics Premium Feature: Unsampled Reports in the Management API
Today, we are adding Unsampled Reports to the Google Analytics Management API for Google Analytics Premium customers.
Accurate analysis when you’re not online
Enterprise analytics users need to execute complicated, ad hoc reports and download them into their own systems. The Unsampled Reports feature provides accurate analysis of large unsampled data sets.
Easily integrate data
This enhancement to our Management API offers a new way to access unsampled data, so you’re free to spend more time on other strategic areas of your business. It also increases the integrity of the data in your internal systems and provides the flexibility to access your data in a way that best fits your business needs. For example, you can integrate the API into your Business Intelligence (BI) system to retrieve unsampled data, and to provide accurate metrics that support your critical business decisions.
How it works
When you create an Unsampled Report using the API, it is processed in an offline manner. The completed reports are available through the API and under the Customization tab in the Unsampled Reports section. You can define whether you would like the report to be saved in Google Drive or in Google Cloud Storage. Read the Unsampled Reports API documentation for more details.
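As a rough sketch of what this looks like from the API (assuming the Google API Python client and the v3 Management API; the IDs and report definition below are placeholders):

# `analytics` is an already-authorized v3 service object, e.g. built with
# googleapiclient.discovery.build('analytics', 'v3', ...) after OAuth setup.
def create_unsampled_report(analytics, account_id, web_property_id, profile_id):
    return analytics.management().unsampledReports().insert(
        accountId=account_id,
        webPropertyId=web_property_id,
        profileId=profile_id,
        body={
            'title': 'Q2 transactions by source',
            'start-date': '2014-04-01',
            'end-date': '2014-06-30',
            'metrics': 'ga:sessions,ga:transactions',
            'dimensions': 'ga:source,ga:medium',
        },
    ).execute()

# Processing happens offline: poll the created report until it completes,
# then fetch the result from the Google Drive or Cloud Storage location
# named in the response.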
Posted by Yaniv Yaakubovich, Product Manager, Google Analytics Premium
Wednesday, 18 June 2014
2014 Google PhD Fellowships: Supporting the Future of Computer Science
Posted by David Harper, Google University Relations & Beate List, Google Research Programs
Nurturing and maintaining strong relations with the academic community is a top priority at Google. Today, we’re announcing the 2014 Google PhD Fellowship recipients. These students, recognized for their incredible creativity, knowledge and skills, represent some of the most outstanding graduate researchers in computer science across the globe. We’re excited to support them, and we extend our warmest congratulations.
The Google PhD Fellowship program supports PhD students in computer science or closely related fields and reflects our commitment to building strong relations with the global academic community. Now in its sixth year, the program covers North America, Europe, China, India and Australia. To date we’ve awarded 193 Fellowships in 72 universities across 17 countries.
As we welcome the 2014 PhD Fellows, we hear from two past recipients, Cynthia Liem and Ian Goodfellow. Cynthia studies at the Delft University of Technology, and was awarded a Fellowship in Multimedia. Ian is about to complete his PhD at the Université de Montréal in Québec, and was awarded a Fellowship in Deep Learning. Recently interviewed on the Google Student blog, they expressed their views on how the Fellowship affected their careers.
Cynthia has combined her dual passions of music and computing to pursue a PhD in music information retrieval. She speaks about the fellowship and her links with Google:
“Through the Google European Doctoral Fellowship, I was assigned a Google mentor who works on topics related to my PhD interests. In my case, this was Dr. Douglas Eck in Mountain View, who is part of Google Research and leads a team focusing on music recommendation. Doug has been encouraging me in several of my academic activities, most notably the initiation of the ACM MIRUM Workshop, which managed to successfully bring music retrieval into the spotlight of the prestigious ACM Multimedia conference.”
Ian is about to start as a research scientist on Jeff Dean’s deep learning infrastructure team. He was also an intern at Google, and contributed to the development of a neural network capable of transcribing the address numbers on houses from Google Street View photos. He describes the connection between this intern project and his PhD study supported by the Fellowship:
“The project I worked on during my internship was the basis for a publication at the International Conference on Learning Representations …. my advisor let me include this paper in my PhD thesis since there was a close connection to the subject area.… I can show that some of the work developed early in the thesis has had a real impact.”
We’re proud to have supported Cynthia, Ian, and all the other recipients of the Google PhD Fellowship. We continue to look forward to working with, and learning from, the academic community with great excitement and high expectations.
Tuesday, 17 June 2014
Moving from Data to Decisions in the next Analytics Academy course
Today we’re excited to announce our next Analytics Academy course, Ecommerce Analytics: From Data to Decisions. As the name suggests, we’ve designed this course specifically to help marketers and analysts who work in ecommerce understand how Analytics data can be used to make decisions and take actions that improve their ecommerce performance.
In the course, you’ll join instructor Justin Cutroni to explore topics through the lens of a fictional online retailer, The Great Outdoors. This practical example will help bring common ecommerce questions to life with relevant planning, reporting and analysis examples.
By participating in the course, you’ll learn how to:
- select and customize meaningful reports that align with your ecommerce measurement plan
- use segmentation to compare interesting subsets of your online audience
- and conduct actionable in-depth analyses in Google Analytics.
In addition to teaching you how to make the most of reporting features like segmentation, the course has a special focus on the new Enhanced Ecommerce for Google Analytics. This set of new features, which was announced in May, helps ecommerce companies understand the customer journey and merchandising tactics at a much deeper level. The course will introduce you to powerful analysis tools, like the Product List Performance report, the Shopping Behavior report and the Checkout Behavior report.
Sign up for the Ecommerce Analytics course now and join us when it opens on July 8, 2014.
Happy Learning!
Posted by: Christina Macholan & The Google Analytics Education Team