Monday, 31 August 2015

VLDB 2015 and Database Research at Google



This week, Kohala, Hawaii hosts the 41st International Conference on Very Large Data Bases (VLDB 2015), a premier annual international forum for data management and database researchers, vendors, practitioners, application developers and users. As a leader in database research, Google will have a strong presence at VLDB 2015, with many Googlers publishing work, organizing workshops and presenting demos.

The research Google is presenting at VLDB involves the work of Structured Data teams who are building intelligent and efficient systems to discover, annotate and explore structured data from the Web, surfacing them creatively through Google products (such as structured snippets and table search), as well as engineering efforts that create scalable, reliable, fast and general-purpose infrastructure for large-scale data processing (such as F1, Mesa, and Google Cloud's BigQuery).

If you are attending VLDB 2015, we hope you’ll stop by our booth and chat with our researchers about the projects and opportunities at Google that go into solving interesting problems for billions of people. You can also learn more about our research being presented at VLDB 2015 in the list below (Googlers highlighted in blue).

Google is a Gold Sponsor of VLDB 2015.

Papers:
Keys for Graphs
Wenfei Fan, Zhe Fan, Chao Tian, Xin Luna Dong

In-Memory Performance for Big Data
Goetz Graefe, Haris Volos, Hideaki Kimura, Harumi Kuno, Joseph Tucek, Mark Lillibridge, Alistair Veitch

The Dataflow Model: A Practical Approach to Balancing Correctness, Latency, and Cost in Massive-Scale, Unbounded, Out-of-Order Data Processing
Tyler Akidau, Robert Bradshaw, Craig Chambers, Slava Chernyak, Rafael Fernández-Moctezuma, Reuven Lax, Sam McVeety, Daniel Mills, Frances Perry, Eric Schmidt, Sam Whittle

Resource Bricolage for Parallel Database Systems
Jiexing Li, Jeffrey Naughton, Rimma Nehme

AsterixDB: A Scalable, Open Source BDMS
Sattam Alsubaiee, Yasser Altowim, Hotham Altwaijry, Alex Behm, Vinayak Borkar, Yingyi Bu, Michael Carey, Inci Cetindil, Madhusudan Cheelangi, Khurram Faraaz, Eugenia Gabrielova, Raman Grover, Zachary Heilbron, Young-Seok Kim, Chen Li, Guangqiang Li, Ji Mahn Ok, Nicola Onose, Pouria Pirzadeh, Vassilis Tsotras, Rares Vernica, Jian Wen, Till Westmann

Knowledge-Based Trust: A Method to Estimate the Trustworthiness of Web Sources
Xin Luna Dong, Evgeniy Gabrilovich, Kevin Murphy, Van Dang, Wilko Horn, Camillo Lugaresi, Shaohua Sun, Wei Zhang

Efficient Evaluation of Object-Centric Exploration Queries for Visualization
You Wu, Boulos Harb, Jun Yang, Cong Yu

Interpretable and Informative Explanations of Outcomes
Kareem El Gebaly, Parag Agrawal, Lukasz Golab, Flip Korn, Divesh Srivastava

Take me to your leader! Online Optimization of Distributed Storage Configurations
Artyom Sharov, Alexander Shraer, Arif Merchant, Murray Stokely

TreeScope: Finding Structural Anomalies In Semi-Structured Data
Shanshan Ying, Flip Korn, Barna Saha, Divesh Srivastava

Workshops:
Workshop on Big-Graphs Online Querying - Big-O(Q) 2015
Workshop co-chair: Cong Yu

3rd International Workshop on In-Memory Data Management and Analytics
Program committee includes: Sandeep Tata

High-Availability at Massive Scale: Building Google's Data Infrastructure for Ads
Invited talk at BIRTE by: Ashish Gupta, Jeff Shute

Demonstrations:
KATARA: Reliable Data Cleaning with Knowledge Bases and Crowdsourcing
Xu Chu, John Morcos, Ihab Ilyas, Mourad Ouzzani, Paolo Papotti, Nan Tang, Yin Ye

Error Diagnosis and Data Profiling with Data X-Ray
Xiaolan Wang, Mary Feng, Yue Wang, Xin Luna Dong, Alexandra Meliou

Friday, 28 August 2015

Announcing Google’s 2015 Global PhD Fellows



In 2009, Google created the PhD Fellowship program to recognize and support outstanding graduate students doing exceptional research in Computer Science and related disciplines. Now in its seventh year, our fellowship programs have collectively supported over 200 graduate students in Australia, China and East Asia, India, North America, Europe and the Middle East who seek to shape and influence the future of technology.

Reflecting our continuing commitment to building mutually beneficial relationships with the academic community, we are excited to announce the 44 students from around the globe who are recipients of the award. We offer our sincere congratulations to Google’s 2015 Class of PhD Fellows!

Australia

  • Bahar Salehi, Natural Language Processing (University of Melbourne)
  • Siqi Liu, Computational Neuroscience (University of Sydney)
  • Qian Ge, Systems (University of New South Wales)

China and East Asia

  • Bo Xin, Artificial Intelligence (Peking University)
  • Xingyu Zeng, Computer Vision (The Chinese University of Hong Kong)
  • Suining He, Mobile Computing (The Hong Kong University of Science and Technology)
  • Zhenzhe Zheng, Mobile Networking (Shanghai Jiao Tong University)
  • Jinpeng Wang, Natural Language Processing (Peking University)
  • Zijia Lin, Search and Information Retrieval (Tsinghua University)
  • Shinae Woo, Networking and Distributed Systems (Korea Advanced Institute of Science and Technology)
  • Jungdam Won, Robotics (Seoul National University)

India

  • Palash Dey, Algorithms (Indian Institute of Science)
  • Avisek Lahiri, Machine Perception (Indian Institute of Technology Kharagpur)
  • Malavika Samak, Programming Languages and Software Engineering (Indian Institute of Science)

Europe and the Middle East

  • Heike Adel, Natural Language Processing (University of Munich)
  • Thang Bui, Speech Technology (University of Cambridge)
  • Victoria Caparrós Cabezas, Distributed Systems (ETH Zurich)
  • Nadav Cohen, Machine Learning (The Hebrew University of Jerusalem)
  • Josip Djolonga, Probabilistic Inference (ETH Zurich)
  • Jakob Julian Engel, Computer Vision (Technische Universität München)
  • Nikola Gvozdiev, Computer Networking (University College London)
  • Felix Hill, Language Understanding (University of Cambridge)
  • Durk Kingma, Deep Learning (University of Amsterdam)
  • Massimo Nicosia, Statistical Natural Language Processing (University of Trento)
  • George Prekas, Operating Systems (École Polytechnique Fédérale de Lausanne)
  • Roman Prutkin, Graph Algorithms (Karlsruhe Institute of Technology)
  • Siva Reddy, Multilingual Semantic Parsing (The University of Edinburgh)
  • Immanuel Trummer, Structured Data Analysis (École Polytechnique Fédérale de Lausanne)
  • Margarita Vald, Security (Tel Aviv University)

North America

  • Waleed Ammar, Natural Language Processing (Carnegie Mellon University)
  • Justin Meza, Systems Reliability (Carnegie Mellon University)
  • Nick Arnosti, Market Algorithms (Stanford University)
  • Osbert Bastani, Programming Languages (Stanford University)
  • Saurabh Gupta, Computer Vision (University of California, Berkeley)
  • Masoud Moshref Javadi, Computer Networking (University of Southern California)
  • Muhammad Naveed, Security (University of Illinois at Urbana-Champaign)
  • Aaron Parks, Mobile Networking (University of Washington)
  • Kyle Rector, Human Computer Interaction (University of Washington)
  • Riley Spahn, Privacy (Columbia University)
  • Yun Teng, Computer Graphics (University of California, Santa Barbara)
  • Carl Vondrick, Machine Perception, (Massachusetts Institute of Technology)
  • Xiaolan Wang, Structured Data (University of Massachusetts Amherst)
  • Tan Zhang, Mobile Systems (University of Wisconsin-Madison)
  • Wojciech Zaremba, Machine Learning (New York University)

Thursday, 27 August 2015

Real-Time Data Validation with Google Tag Assistant Recordings

We’ve said it before and we’ll say it again: great analytics can only happen with great data.  

That's why we've made it a priority to help our users confirm that their data is top-quality. Last year we released our automated data diagnostics feature, and now we’re proud to announce the launch of another powerful new feature: Google Tag Assistant Recordings.  

This tool helps you instantly validate your Google Analytics or Google Analytics Premium implementation. If it finds data quality issues, it helps you troubleshoot them and then recheck them on the spot.  It’s available as part of the Google Tag Assistant Chrome Extension.
"Tag Assistant Recordings is fast becoming one of my favorite tools for debugging Google Analytics Premium installations!  I use it multiple times a day with my Premium clients to help explain odd trends in their data or debug configuration issues. Already I'm building it into my core workflow." 

- Dan Rowe, Director of Analytics at Analytics Pros

What can I use it for?

Tag Assistant Recordings works with all kinds of data events: purchases, logins, and so on. What if you sell flowers online and want to confirm that Enhanced Ecommerce is capturing the checkout flow correctly? With Tag Assistant Recordings, you can record yourself going through the checkout process as you buy a dozen red roses, and then review what Google Analytics captured.

If you find that your account isn’t set up properly — if the sale wasn't recorded or was mis-labeled — you can make adjustments and test it all over again instantly.  With Tag Assistant Recordings, you know you’re capturing all the data that’s important to you.

Tag Assistant Recordings can be particularly useful when (1) you’re in the process of implementing Google Analytics or Google Analytics Premium, (2) you’ve recently made updates to your site, or (3) you’re making changes to your Google Analytics or Google Analytics Premium configuration. It works even if your new site or your updates aren't visible to the public yet, so you can feel confident before you go live.

Tag Assistant Recordings can also help if you want to reconfigure your Google Analytics account to better reflect your business.  For example, you may want to configure multi-channel funnels to detect your AdWords channel.  Tag Assistant Recordings lets you set up this new functionality in Google Analytics and test immediately whether everything is working as you expect.  

"Tag Assistant Recordings has already been a HUGE help! Analytics Pros and About.com were working on an issue with sessions double-counting and Tag Assistant Recordings let us narrow down precisely which hits were having new sessions counted. It saved us hours of time and helped us jump right to where the problem was. So, in summary, this is awesome!"  

- Greg McDonald, Business Intelligence Analyst at About.com

How does it work?

Tag Assistant Recordings works through the Google Tag Assistant Chrome Extension, so you’ll need to download the extension if you aren’t already using it.  From there, setup is easy.  Simply open Google Tag Assistant, record the user flow you’d like to check, and then view the full report in Tag Assistant.  You’ll want to view both tabs in the report (Tag Assistant and Google Analytics) to verify that you see the intended tags.  Keep in mind that the Google Analytics data is only available if you have access to the appropriate property or view.


Here's a nifty bonus: If you find a problem, and you think you have fixed it by changing settings from within Google Analytics, return to the Google Analytics tab in Tag Assistant Recordings and click the “Update” button. You'll see instantly how your configuration changes would have affected this recording.

We hope that Google Tag Assistant will be a valuable new tool in your analytics toolkit.  

Why not start using it today?


Posted by:  Ajay Nainani, Frank Kieviet, and Jocelyn Whittenburg, Google Analytics team

Friday, 21 August 2015

Affiliate Attribution: Putting the Pieces Together

Originally Posted on the Adometry M2R Blog
Recently I was reminded of an article from a little while back, titled, “2013: The Year of Affiliate Attribution?” It’s an interesting take and worthwhile read for those interested in affiliate marketing and the associated measurement challenges. Given that some time has passed, I thought it would be interesting to take a look at progress to date towards realizing a more holistic and accurate view of affiliate performance as part of a comprehensive cross-channel strategy.
Most affiliate managers share a similar goal: to manage affiliates holistically, meaning investing in those that predominantly drive net-new customers independent of other paid marketing investments. Ultimately, this model allows them to optimize CPA by managing commissions, coupon discounts, and brand appropriateness based on the true “incremental value” provided to the business. Unfortunately, due to a lack of transparency and inadequate measurement, many marketers fall short of this goal. The result is the ongoing, nagging question: “Is my affiliate strategy working, and am I overpaying for what I’m getting?”

Why ‘Affiliate Attribution’ Is Hard

Affiliate marketers’ challenges range from competing against affiliates in PPC ad programs to concerns about questionable business practices employed by some “opportunistic” affiliates offering marginal value, but still receiving credit for sales that likely would have happened regardless. Which brings us to the central question:
How do marketers determine how much credit an affiliate should receive?

As you may know, opinions about how much conversion credit affiliates deserve for any given transaction vary widely. While there are a number of factors that influence affiliate performance (e.g. where they appear in the sales funnel, industry/sector, time-to-purchase length, etc.) for most brands the attribution model that is utilized will have a significant impact on which affiliates are over- and under-valued.
For example, in a last-click world, affiliates that enter the purchase path towards the bottom of the funnel often hold their own; yet when brands begin measuring on a full-funnel basis that incorporates impression data, many struggle to prove their incremental value, since the consumer has had many exposures to marketing long before reaching the affiliate site. Conversely, affiliates that act predominantly at the top or middle of the funnel (content, loyalty, etc.) are usually undervalued by last-click but can garner more credit under a full-funnel, data-driven attribution methodology. These are broad generalizations meant only as examples, and it’s not necessarily a zero-sum game.
Another challenge is that fractional, data-driven attribution is difficult to implement for some types of promotions. One instance is cash back, loyalty and reward sites, which must know the exact commission amount they will receive for each transaction so that they can pass on discounts to members. Given the complexity of more sophisticated attribution models, this data isn’t readily available.
Lastly, there are several organizational challenges that inhibit the use of data-driven attribution among affiliate marketers. Some industry experts have indicated that many publishers, as many as 70-80%, strip impression tracking code from affiliate URLs. Another measurement challenge we see frequently is brands managing affiliates at the channel level, leaving little sub-channel categorization, which is where significant optimization opportunities exist.
Affiliate Attribution and the Performance Marketing Goldmine
Of course, part of our work at Adometry is helping customers address these challenges (and more) to ensure they are measuring affiliate contributions accurately and able to take appropriate action based on fully-attributed results.
Some key advantages of using data-driven attribution to measure affiliate sales include:
  • The ability to create a unified framework for comparing performance (clicks and impressions) in which affiliates compete for budgets on equal footing,
  • Increased visibility into which publishers are truly driving net-new customers, by identifying which are an integral part of a multi-touch path and which are expendable,
  • The knowledge required to implement a publisher category taxonomy, allowing more insight into how different types of publishers perform by funnel stage and where efficiency can be improved,
  • Insight into the true incremental value publishers are providing, and the ability to offer commission rates that reflect this actual value,
  • A better understanding of the affiliate channel’s role in the overall mix, further informing marketers’ use of complementary tactics to maximize affiliate contributions in concert with other channels,
  • The ability to use actual performance data to counter myths and frustrations with affiliates (cookie stuffing, stealing conversions, etc.).
Taken separately, each of these represents a significant opportunity to be more effective in how you identify and utilize affiliate attribution to drive new opportunities. Together, they represent a fundamental improvement in how you manage your overall marketing spending, strategic planning and optimization efforts.
Top-performing affiliates, particularly those at the top and middle of the funnel, also stand to benefit from a more transparent, accurate and fair system for crediting conversions. In fact, several large-scale, forward-thinking affiliates are already investing in data-driven attribution to arm themselves with the data required to effectively compete and win business as brands become more sophisticated and judicious with their affiliate budgets.
It’s an exciting time for performance marketing. Change is always hard, but in this case it’s absolutely change for the better. And frankly, it’s time. What are your thoughts and experiences with measuring affiliate performance and attribution?

Posted by Casey Carey, Google Analytics team

Google Faculty Research Awards: Summer 2015



We have just completed another round of the Google Faculty Research Awards, our annual open call for research proposals on Computer Science and related topics, including systems, machine learning, software engineering, security and mobile. Our grants cover tuition for a graduate student and provide both faculty and students the opportunity to work directly with Google researchers and engineers.

This round we received 805 proposals, about the same as last round, covering 48 countries on 6 continents. After expert reviews and committee discussions, we decided to fund 113 projects, with 27% of the funding awarded to universities outside the U.S. The subject areas that received the highest level of support were systems, machine perception, software engineering, and machine learning.

The Faculty Research Awards program plays a critical role in building and maintaining strong collaborations with top research faculty globally. These relationships allow us to keep a pulse on what’s happening in academia in strategic areas, and they help to extend our research capabilities and programs. Faculty also report, through our annual survey, that they and their students benefit from a direct connection to Google as a source of ideas and perspective.

Congratulations to the well-deserving recipients of this round’s awards. If you are interested in applying for the next round (deadline is October 15), please visit our website for more information.

Thursday, 20 August 2015

The Next Chapter for Flu Trends



When a small team of software engineers first started working on Flu Trends in 2008, we wanted to explore how real-world phenomena could be modeled using patterns in search queries. Since its launch, Google Flu Trends has provided useful insights and served as one of the early examples of “nowcasting” based on search trends, which is increasingly used in health, economics, and other fields. We’ve used search signals to create prediction models, updating and improving those models over time as we compared our predictions to real-world cases of flu.

Instead of maintaining our own website going forward, we’re now going to empower institutions who specialize in infectious disease research to use the data to build their own models. Starting this season, we’ll provide Flu and Dengue signal data directly to partners including Columbia University’s Mailman School of Public Health (to update their dashboard), Boston Children’s Hospital/Harvard, and Centers for Disease Control and Prevention (CDC) Influenza Division. We will also continue to make historical Flu and Dengue estimate data available for anyone to see and analyze.

Flu continues to affect millions of people every year, and while it’s still early days for nowcasting and similar tools for understanding the spread of diseases like flu and dengue fever—we’re excited to see what comes next. To download the historical data or learn more about becoming a research partner, please visit the Flu Trends web page.

Wednesday, 19 August 2015

Google Analytics User Conference: G’day Australia

The Australian Google Analytics User Conference is worth clearing your diaries for, with some of the most well-known and respected international industry influencers making their way to Sydney and Melbourne to present at the conference this September.

Hosted by Google Certified Partner Loves Data, the conference will cover the latest features, what’s trending and popular, and best practices, and will uncover ways to get the most out of Google Analytics. Topics covered include: making sure digital analytics is indispensable to your organisation; applying analytics frameworks across your whole organisation; improving your data quality and collection; data insights you can action; and presenting data to get results.

Presenting the keynote is Jim Sterne, Chairman of the Digital Analytics Association, founder of eMetrics and also known as the godfather of analytics. Joining him are two speakers from Google in the US: Krista Seiden, Google Product Manager and Analytics Advocate and Mike Kwong, Senior Staff Software Engineer.

Other leading international industry influencers presenting at the conference include Simo Ahava (Google Developer Expert; Reaktor), Chris Chapo (Enjoy), Benjamin Mangold (Loves Data), Lea Pica (Consultant, Leapica.com), Chris Samila (Optimizely), Carey Wilkins (Evolytics) and Tim Wilson (Web Analytics Demystified).  

Expect to network with other like-minded data enthusiasts, marketers, developers and strategists, plus get to know the speakers better during the Conference’s Ask Me Anything session. We’ve even covered our bases for those seeking next-level expertise with a marketing or technical masterclass available the day before the conference. Find out more information about the speakers and check out the full program.

Last year’s conference sold out way in advance and this year’s conference is heading in the same direction. Book your tickets now to avoid disappointment. 

Event details Sydney
Masterclass & Conference | 8 & 9 September 2015

Event details Melbourne
Masterclass & Conference | 10 & 11 September 2015

Posted by Will Pryor, Google Analytics team

Tuesday, 18 August 2015

Pulling Back the Curtain on Google’s Network Infrastructure



Pursuing Google’s mission of organizing the world’s information to make it universally accessible and useful takes an enormous amount of computing and storage. In fact, it requires coordination across a warehouse-scale computer. Ten years ago, we realized that we could not purchase, at any price, a datacenter network that could meet the combination of our scale and speed requirements. So, we set out to build our own datacenter network hardware and software infrastructure. Today, at the ACM SIGCOMM conference, we are presenting a paper with the technical details on five generations of our in-house data center network architecture. This paper presents the technical details behind a talk we presented at the Open Networking Summit a few months ago.

From relatively humble beginnings, and after a misstep or two, we’ve built and deployed five generations of datacenter network infrastructure. Our latest-generation Jupiter network has improved capacity by more than 100x relative to our first generation network, delivering more than 1 petabit/sec of total bisection bandwidth. This means that each of 100,000 servers can communicate with one another in an arbitrary pattern at 10Gb/s.
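
As a quick sanity check of the arithmetic behind that claim, here is a back-of-the-envelope computation (a sketch using only the numbers quoted above):

```python
# Back-of-the-envelope check of the figure quoted above: 100,000 servers
# each talking at 10 Gb/s adds up to 1 petabit/sec of bisection bandwidth.
servers = 100_000
per_server_gbps = 10                      # 10 Gb/s per server
total_gbps = servers * per_server_gbps    # aggregate traffic crossing the fabric
total_pbps = total_gbps / 1_000_000       # 1 petabit = 1,000,000 gigabits
print(total_pbps)                         # -> 1.0 petabit/sec
```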

Such network performance has been tremendously empowering for Google services. Engineers were liberated from optimizing their code for various levels of the bandwidth hierarchy. For example, there were initially painful tradeoffs between careful data locality (placing servers under the same top-of-rack switch) and the correlated failures caused by a single switch failure. A high-performance network supporting tens of thousands of servers with flat bandwidth also enabled individual applications to scale far beyond what was otherwise possible, and enabled tight coupling among multiple federated services. Finally, we were able to substantially improve the efficiency of our compute and storage infrastructure. As quantified in this recent paper, scheduling a set of jobs over a single larger domain supports much higher utilization than scheduling the same jobs over multiple smaller domains.

Delivering such a network meant we had to solve some fundamental problems in networking. Ten years ago, networking was defined by the interaction of individual hardware elements, e.g., switches, speaking standardized protocols to dynamically learn what the network looks like. Based on this dynamically learned information, switches would set their forwarding behavior. While robust, these protocols targeted deployment environments with perhaps tens of switches communicating between multiple organizations. Configuring and managing switches in such an environment was manual and error prone. Changes in network state would spread slowly through the network using a high-overhead broadcast protocol. Most challenging of all, the system could not scale to meet our needs.

We adopted a set of principles for organizing our networks that has since become the primary driver of networking research and industrial innovation: Software Defined Networking (SDN). We observed that we could arrange emerging merchant switch silicon in a Clos topology to scale to the bandwidth requirements of a data center building. The topology of all five generations of our data center networks follows the blueprint below. Unfortunately, this meant that we would potentially require 10,000+ individual switching elements. Even if we could overcome the scalability challenges of existing network protocols, managing and configuring such a vast number of switching elements would be impossible.
So, our next observation was that we knew the exact configuration of the network we were trying to build. Every switch would play a well-defined role in a larger ensemble. That is, from the perspective of a datacenter building, we wished to build a logical building-scale network. For network management, we built a single configuration for the entire network and pushed that single global configuration to all switches. Each switch would simply pull out its role in the larger whole. Finally, we transformed routing from a pair-wise, fully distributed (but somewhat slow and high-overhead) scheme to a logically centralized scheme under the control of a single dynamically elected master, as illustrated in the figure below from our paper.
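
To make the “one global configuration, each switch pulls out its role” idea concrete, here is a minimal sketch; the function and switch names are hypothetical and purely illustrative, not Google’s actual configuration system:

```python
# Hypothetical sketch of a single building-wide fabric config from which each
# switch extracts only its own role. Names are illustrative, not Google's.
from typing import Dict

def build_global_config(num_spines: int, num_tors: int) -> Dict[str, dict]:
    """Generate one configuration object for the entire fabric."""
    config = {}
    for s in range(num_spines):
        config[f"spine-{s}"] = {"role": "spine",
                                "links": [f"tor-{t}" for t in range(num_tors)]}
    for t in range(num_tors):
        config[f"tor-{t}"] = {"role": "tor",
                              "links": [f"spine-{s}" for s in range(num_spines)]}
    return config

def pull_own_role(global_config: Dict[str, dict], switch_id: str) -> dict:
    """Each switch pulls out just the piece of the global config that applies to it."""
    return global_config[switch_id]

fabric = build_global_config(num_spines=4, num_tors=8)
print(pull_own_role(fabric, "tor-3"))
```
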
Our SIGCOMM paper on Jupiter is one of four Google-authored papers being presented at the conference. Taken together, these papers show the benefits of embedding research within production teams, reflecting both collaborations with PhD students carrying out extended research efforts with Google engineers during internships and key insights from deployed production systems:

  • Our work on Bandwidth Enforcer shows how we can allocate wide area bandwidth among tens of thousands of individual applications based on centrally configured policy, substantially improving network utilization while simultaneously isolating services from one another.
  • Condor addresses the challenges of designing data center network topologies. Network designers can specify constraints for data center networks; Condor efficiently generates candidate network designs that meet these constraints, and evaluates these candidates against a variety of target metrics.
  • Congestion control in datacenter networks is challenging because of tiny buffers and very small round trip times. TIMELY shows how to manage datacenter bandwidth allocation while maintaining highly responsive and low latency network roundtrips in the data center.

These efforts reflect the latest in a long series of substantial Google contributions in networking. We are excited about being increasingly open about the results of our research work: to solicit feedback on our approach, to influence future research and development directions so that we can benefit from community-wide efforts to improve networking, and to attract the next generation of great networking thinkers and builders to Google. Our focus on Google Cloud Platform further increases the importance of being open about our infrastructure. Since the same network that has powered Google’s infrastructure for a decade also underpins our Cloud Platform, all developers can leverage the network to build highly robust, manageable, and globally scalable services.

Say hello to the Enigma conference



USENIX Enigma is a new conference focused on security, privacy and electronic crime through the lens of emerging threats and novel attacks. The goal of this conference is to help industry, academic, and public-sector practitioners better understand the threat landscape. Enigma will have a single track of 30-minute talks that are curated by a panel of experts, featuring strong technical content with practical applications to current and emerging threats.
Google is excited to both sponsor and help USENIX build Enigma, since we share many of its core principles: transparency, openness, and cutting-edge security research. Furthermore, we are proud to provide Enigma with engineering and design support, as well as volunteer participation in program and steering committees.

The first instantiation of Enigma will be held January 25-27 in San Francisco. You can sign up for more information about the conference or propose a talk through the official conference site at http://enigma.usenix.org

Monday, 17 August 2015

Google Analytics Conference Nordic in Stockholm, Sweden

Join the Google Analytics Certified Partners for Google Analytics Conference Nordic in Sweden. 

The event takes place August 26 in Stockholm, Sweden, and is followed by a workshop on August 27.

Started as an initiative by Outfox, who gathered the other Google Analytics Certified Partners, the conference now returns for the fifth consecutive year.

Our Stockholm conference includes:

 • Case studies from businesses and other organizations, such as The Swedish Society for Nature Conservation, Viaplay, and Storebrand. In other words, Google Analytics for sales, entertainment, non-profits, insurance, and more!
 • Expert presentations by Google Analytics Certified Partners.
 • Opportunities to interact with peers and experts
 • ...much more!

Two top speakers from Google, Sagnik Nandy and Daniel Waisberg, will be joining the conference.

Sagnik Nandy is a technical leader and manager of several Analytics and Reporting efforts at Google. He has hands-on experience building, scaling, deploying and managing large-scale systems used by millions of websites around the world.

Daniel Waisberg is Analytics Advocate at Google, where he is responsible for fostering Google Analytics by educating and inspiring Online Marketing professionals. Both at Google and in his previous positions, Daniel has worked with some of the biggest Internet brands to measure and optimize online behavior.

Besides meeting Google, you’ll meet several Nordic Google Analytics Certified Partners. You will also meet and learn from several end users who use Google Analytics on a daily basis.

To join us in Stockholm in August, visit the conference site and secure your ticket.


Posted by Lars Johansson, Google Analytics Certified Partner and Google Analytics Premium Authorized Reseller

KDD 2015 Best Research Paper Award: “Algorithms for Public-Private Social Networks”



The 21st ACM conference on Knowledge Discovery and Data Mining (KDD’15), a main venue for academic and industry research in data management, information retrieval, data mining and machine learning, was held last week in Sydney, Australia. In the past several years, Google has been actively participating in KDD, with several Googlers presenting work at the conference in the research and industrial tracks. This year Googlers presented 12 papers at KDD (listed below, with Googlers in blue), all of which are freely available at the ACM Digital Library.

One of these papers, Efficient Algorithms for Public-Private Social Networks, co-authored by Googlers Ravi Kumar, Silvio Lattanzi, Vahab Mirrokni, former Google intern Alessandro Epasto and research visitor Flavio Chierichetti, was awarded Best Research Paper. The inspiration for this paper comes from studying social networks and the importance of addressing privacy issues in analyzing such networks.

Privacy issues dictate the way information is shared among the members of the social network. In the simplest case, a user can mark some of her friends as private; this would make the connections (edges) between this user and these friends visible only to the user. In a different instantiation of privacy, a user can be a member of a private group; in this case, all the edges among the group members are to be considered private. Thus, each user in the social network has her own view of the link structure of the network. These privacy issues also influence the way in which the network itself can be viewed and processed by algorithms. For example, one cannot use the list of private friends of user X for suggesting potential friends or public news items to another user on the network, but one can use this list for the purpose of suggesting friends for user X.

As a result, enforcing these privacy guarantees translates to solving a different algorithmic problem for each user in the network, and for this reason, developing algorithms that process these social graphs and respect these privacy guarantees can become computationally expensive. In a recent study, Dey et al. crawled a snapshot of 1.4 million New York City Facebook users and reported that 52.6% of them hid their friends list. As more users make a larger portion of their social neighborhoods private, these computational issues become more important.

Motivated by the above, this paper introduces the public-private model of graphs, where each user (node) in the public graph has an associated private graph. In this model, the public graph is visible to everyone, and the private graph at each node is visible only to each specific user. Thus, any given user sees their graph as a union of their private graph and the public graph.
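
As a toy illustration of that union view (a sketch, not the paper’s implementation), each user’s graph can be modeled as the shared public edge set plus that user’s own private edges:

```python
# Toy illustration of the public-private graph model: everyone sees the public
# edges, and each user additionally sees only their own private edges.
from collections import defaultdict

public_edges = {("alice", "bob"), ("bob", "carol"), ("carol", "dave")}

private_edges = defaultdict(set)
private_edges["alice"] = {("alice", "eve")}              # visible only to alice
private_edges["frank"] = {("frank", "bob"), ("frank", "carol")}

def user_view(user: str) -> set:
    """The graph as seen by `user`: the public graph plus the user's private edges."""
    return public_edges | private_edges[user]

print(user_view("alice"))   # includes alice's private edge to eve
print(user_view("bob"))     # does not include alice's private edge
```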

From an algorithmic point of view, the paper explores two powerful computational paradigms for efficiently studying large graphs, namely sketching and sampling, and focuses on some key problems in social networks such as similarity ranking and clustering. In the sketching model, the paper shows how to efficiently approximate the neighborhood function, which in turn can be used to approximate various notions of centrality for each node - centrality scores such as PageRank have important applications in ranking and recommender systems. In the sampling model, the paper focuses on all-pair shortest path distances, node similarities, and correlation clustering, and develops algorithms that compute these notions efficiently on a given public-private graph. The paper also illustrates the effectiveness of the model and the computational efficiency of the algorithms through experiments on real-world social networks.
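
To make the sketching idea concrete, here is a heavily simplified sketch in which exact reachability sets stand in for the probabilistic sketches used in the paper: per-node summaries are precomputed once on the public graph, and each user’s private edges are merged in at query time.

```python
# Simplified illustration of "precompute on the public graph, merge each
# user's private edges at query time". Exact sets stand in for the
# probabilistic sketches the paper uses, so only the structure is faithful.
from collections import defaultdict

def adjacency(edges):
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    return adj

pub = adjacency({("a", "b"), ("b", "c"), ("c", "d"), ("d", "e")})

# Precomputed once for everyone: each node's 1-hop "sketch" on the public graph.
public_one_hop = {u: set(nbrs) for u, nbrs in pub.items()}

def two_hop_size(user, private_neighbors):
    """Size of the 2-hop neighborhood in `user`'s own public-private view."""
    neighbors = pub[user] | set(private_neighbors)    # merge the user's private edges
    reach = set(neighbors)
    for n in neighbors:
        reach |= public_one_hop.get(n, set())         # reuse the public sketches
    reach.discard(user)
    return len(reach)

print(two_hop_size("a", private_neighbors={"e"}))     # a's private friend e also brings in d
```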

The public-private model is an abstraction that can be used to develop efficient social network algorithms. This work leaves a number of interesting open research directions, such as: obtaining efficient algorithms for the densest subgraph/community detection problems, influence maximization, computing other pairwise similarity scores, and, most importantly, recommendation systems.

KDD’15 Papers, co-authored by Googlers:

Efficient Algorithms for Public-Private Social Networks (Best Paper Award)
Flavio Chierichetti, Alessandro Epasto, Ravi Kumar, Silvio Lattanzi, Vahab Mirrokni

Large-Scale Distributed Bayesian Matrix Factorization using Stochastic Gradient MCMC
Sungjin Ahn, Anoop Korattikara, Nathan Liu, Suju Rajan, Max Welling

TimeMachine: Timeline Generation for Knowledge-Base Entities
Tim Althoff, Xin Luna Dong, Kevin Murphy, Safa Alai, Van Dang, Wei Zhang

Algorithmic Cartography: Placing Points of Interest and Ads on Maps
Mohammad Mahdian, Okke Schrijvers, Sergei Vassilvitskii

Stream Sampling for Frequency Cap Statistics
Edith Cohen

Dirichlet-Hawkes Processes with Applications to Clustering Continuous-Time Document Streams
Nan Du, Mehrdad Farajtabar, Amr Ahmed, Alexander J. Smola, Le Song

Adaptation Algorithm and Theory Based on Generalized Discrepancy
Corinna Cortes, Mehryar Mohri, Andrés Muñoz Medina (now at Google)

Estimating Local Intrinsic Dimensionality
Laurent Amsaleg, Oussama Chelly, Teddy Furon, Stéphane Girard, Michael E. Houle, Ken-ichi Kawarabayashi, Michael Nett

Unified and Contrasting Cuts in Multiple Graphs: Application to Medical Imaging Segmentation
Chia-Tung Kuo, Xiang Wang, Peter Walker, Owen Carmichael, Jieping Ye, Ian Davidson

Going In-depth: Finding Longform on the Web
Virginia Smith, Miriam Connor, Isabelle Stanton

Annotating needles in the haystack without looking: Product information extraction from emails
Weinan Zhang, Amr Ahmed, Jie Yang, Vanja Josifovski, Alexander Smola

Focusing on the Long-term: It's Good for Users and Business
Diane Tang, Henning Hohnhold, Deirdre O'Brien

Thursday, 13 August 2015

Google’s Course Builder 1.9 improves instructor experience and takes Skill Maps to the next level



(Cross-posted on the Google for Education Blog)

When we last updated Course Builder in April, we said that its skill mapping capabilities were just the beginning. Today’s 1.9 release greatly expands the applicability of these skill maps for you and your students. We’ve also significantly revamped the instructor’s user interface, making it easier to get the job done while staying out of your way as you create your online courses.

First, a quick update on project hosting. Course Builder has joined many other Google open source projects on GitHub (download it here). Later this year, we’ll consolidate all of the Course Builder documentation, but for now, get started at Google Open Online Education.

Now, about those features:
  • Measuring competence with skill maps
    In addition to defining skills and prerequisites for each lesson, you can now apply skills to each question in your courses’ assessments. By completing the assessments and activities, learners will be able to measure their level of competence for each skill. For instance, here’s what a student taking Power Searching with Google might see:
This information can help guide them on which sections of the course to revisit. Or, if a pre-test is given, students can focus on the lessons addressing their skill gaps.

To determine how successful the content is at teaching the desired skills across all students, an instructor can review students’ competencies on a new page in the analytics section of the dashboard.

  • Improving usability when creating a course
    Course Builder has a rich set of capabilities, giving you control over every aspect of your course -- but that doesn’t mean it has to be hard to use. Our goal is to help you spend less time setting up your course and more time educating your students. We’ve completely reorganized the dashboard, reducing the number of tabs and making the settings you need clearer and easier to find.
We also added in-place previewing, so you can quickly edit your content and immediately see how it will look without needing to reload any pages.
For a full list of the other features added in this release (including the ability for students to delete their data upon unenrollment and removal of the old Files API), see the release notes. As always, please let us know how you use these new features and what you’d like to see in Course Builder next to help make your online course even better.

In the meantime, take a look at a couple of recent online courses that we’re pretty excited about: Sesame Street’s Make Believe with Math and our very own Computational Thinking for Educators.

Tuesday, 11 August 2015

The neural networks behind Google Voice transcription



Over the past several years, deep learning has shown remarkable success on some of the world’s most difficult computer science challenges, from image classification and captioning to translation to model visualization techniques. Recently we announced improvements to Google Voice transcription using Long Short-term Memory Recurrent Neural Networks (LSTM RNNs)—yet another place neural networks are improving useful services. We thought we’d give a little more detail on how we did this.

Since it launched in 2009, Google Voice transcription had used Gaussian Mixture Model (GMM) acoustic models, the state of the art in speech recognition for 30+ years. Sophisticated techniques like adapting the models to the speaker's voice augmented this relatively simple modeling method.

Then around 2012, Deep Neural Networks (DNNs) revolutionized the field of speech recognition. These multi-layer networks distinguish sounds better than GMMs by using “discriminative training,” differentiating phonetic units instead of modeling each one independently.

But things really improved rapidly with Recurrent Neural Networks (RNNs), and especially LSTM RNNs, first launched in Android’s speech recognizer in May 2012. Compared to DNNs, LSTM RNNs have additional recurrent connections and memory cells that allow them to “remember” the data they’ve seen so far—much as you interpret the words you hear based on previous words in a sentence.
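
For readers curious about the gating mechanism in code, here is a minimal single-cell LSTM step in NumPy; it is an illustrative sketch only, not Google’s production recognizer:

```python
# Minimal single-step LSTM cell, showing the input/forget/output gates and the
# memory cell described above. Illustrative only.
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h_prev, c_prev, W, U, b):
    """One LSTM time step; W, U, b stack parameters for the three gates
    (i, f, o) and the candidate memory content (g)."""
    hidden = h_prev.shape[0]
    z = W @ x + U @ h_prev + b               # shape (4 * hidden,)
    i = sigmoid(z[0 * hidden:1 * hidden])    # input gate
    f = sigmoid(z[1 * hidden:2 * hidden])    # forget gate: what to keep of c_prev
    o = sigmoid(z[2 * hidden:3 * hidden])    # output gate
    g = np.tanh(z[3 * hidden:4 * hidden])    # candidate memory content
    c = f * c_prev + i * g                   # the memory cell "remembers" the past
    h = o * np.tanh(c)                       # hidden state passed to the next step
    return h, c

hidden, inputs = 8, 4
rng = np.random.default_rng(0)
W = rng.normal(size=(4 * hidden, inputs))
U = rng.normal(size=(4 * hidden, hidden))
b = np.zeros(4 * hidden)
h = c = np.zeros(hidden)
for t in range(5):                           # run over a short dummy input sequence
    h, c = lstm_step(rng.normal(size=inputs), h, c, W, U, b)
print(h.shape, c.shape)                      # (8,) (8,)
```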

By then, Google’s old voicemail system, still using GMMs, was far behind the new state of the art. So we decided to rebuild it from scratch, taking advantage of the successes demonstrated by LSTM RNNs. But there were some challenges.
An LSTM memory cell, showing the gating mechanisms that allow it to store
and communicate information. Image credit: Alex Graves
There’s more to speech recognition than recognizing individual sounds in the audio: sequences of sounds need to match existing words, and sequences of words should make sense in the language. This is called “language modeling.” Language models are typically trained over very large corpora of text, often orders of magnitude larger than the acoustic data. It’s easy to find lots of text, but not so easy to find sources that match naturally spoken sentences. Shakespeare’s plays in 17th-century English won’t help on voicemails.

We decided to retrain both the acoustic and language models, and to do so using existing voicemails. We already had a small set of voicemails users had donated for research purposes and that we could transcribe for training and testing, but we needed much more data to retrain the language models. So we asked our users to donate their voicemails in bulk, with the assurance that the messages wouldn’t be looked at or listened to by anyone—only to be used by computers running machine learning algorithms. But how does one train models from data that’s never been human-validated or hand-transcribed?

We couldn’t just use our old transcriptions, because they were already tainted with recognition errors—garbage in, garbage out. Instead, we developed a delicate iterative pipeline to retrain the models. Using improved acoustic models, we could recognize existing voicemails offline to get newer, better transcriptions the language models could be retrained on, and with better language models we could recognize again the same data, and repeat the process. Step by step, the recognition error rate dropped, finally settling at roughly half what it was with the original system! That was an excellent surprise.
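
In outline, that iterative pipeline looks roughly like the sketch below; the recognize/train/score callables are hypothetical placeholders supplied by the caller, and the real pipeline is considerably more involved:

```python
# Rough outline of the iterative retraining loop described above. The
# recognize / train / error_rate callables are hypothetical placeholders.
def retrain_until_stable(voicemails, dev_set, acoustic_model, language_model,
                         recognize, train_acoustic, train_language, error_rate,
                         max_rounds=10, min_improvement=0.001):
    """Alternate between recognizing untranscribed voicemails and retraining on
    the machine-generated transcripts, until held-out error stops improving."""
    best_error = error_rate(acoustic_model, language_model, dev_set)
    for _ in range(max_rounds):
        # 1. Re-recognize the donated voicemails with the current models.
        transcripts = [recognize(acoustic_model, language_model, vm)
                       for vm in voicemails]
        # 2. Retrain both models on these newer, better transcriptions.
        acoustic_model = train_acoustic(voicemails, transcripts)
        language_model = train_language(transcripts)
        # 3. Stop once the error rate on a held-out set stops dropping.
        error = error_rate(acoustic_model, language_model, dev_set)
        if best_error - error < min_improvement:
            break
        best_error = error
    return acoustic_model, language_model
```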

There were other (not so positive) surprises too. For example, sometimes the recognizer would skip entire audio segments; it felt as if it was falling asleep and waking up a few seconds later. It turned out that the acoustic model would occasionally get into a “bad state” where it would think the user was not speaking anymore and what it heard was just noise, so it stopped outputting words. When we retrained on that same data, we’d think all those spoken sounds should indeed be ignored, reinforcing that the model should do it even more. It took careful tuning to get the recognizer out of that state of mind.

It was also tough to get punctuation right. The old system relied on hand-crafted rules or “grammars,” which, by design, can’t easily take textual context into account. For example, in an early test our algorithms transcribed the audio “I got the message you left me” as “I got the message. You left me.” To try and tackle this, we again tapped into neural networks, teaching an LSTM to insert punctuation at the right spots. It’s still not perfect, but we’re continually working on ways to improve our accuracy.

In speech recognition as in many other complex services, neural networks are rapidly replacing previous technologies. There’s always room for improvement of course, and we’re already working on new types of networks that show even more promise!

Thursday, 6 August 2015

The reusable holdout: Preserving validity in adaptive data analysis



Machine learning and statistical analysis play an important role at the forefront of scientific and technological progress. But with all data analysis, there is a danger that findings observed in a particular sample do not generalize to the underlying population from which the data were drawn. A popular XKCD cartoon illustrates that if you test sufficiently many different colors of jelly beans for correlation with acne, you will eventually find one color that correlates with acne at a p-value below the infamous 0.05 significance level.
Image credit: XKCD
Unfortunately, the problem of false discovery is even more delicate than the cartoon suggests. Correcting reported p-values for a fixed number of multiple tests is a fairly well understood topic in statistics. A simple approach is to multiply each p-value by the number of tests, but there are more sophisticated tools. However, almost all existing approaches to ensuring the validity of statistical inferences assume that the analyst performs a fixed procedure chosen before the data are examined. For example, “test all 20 flavors of jelly beans”. In practice, however, the analyst is informed by data exploration, as well as the results of previous analyses. How did the scientist choose to study acne and jelly beans in the first place? Often such choices are influenced by previous interactions with the same data. This adaptive behavior of the analyst leads to an increased risk of spurious discoveries that are neither prevented nor detected by standard approaches. Each adaptive choice the analyst makes multiplies the number of possible analyses that could possibly follow; it is often difficult or impossible to describe and analyze the exact experimental setup ahead of time.
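
For reference, the simple correction mentioned above (multiply each p-value by the number of tests, i.e. the Bonferroni correction) is a one-liner:

```python
# Bonferroni correction: multiply each p-value by the number of tests,
# capping the result at 1.
def bonferroni(p_values):
    m = len(p_values)
    return [min(1.0, p * m) for p in p_values]

# 20 jelly-bean flavors, one of which came out "significant" at 0.05:
raw = [0.35, 0.62, 0.04, 0.88, 0.51] + [0.2] * 15
print(bonferroni(raw)[2])   # 0.04 * 20 = 0.8 -> no longer significant
```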

In The Reusable Holdout: Preserving Validity in Adaptive Data Analysis, a joint work with Cynthia Dwork (Microsoft Research), Vitaly Feldman (IBM Almaden Research Center), Toniann Pitassi (University of Toronto), Omer Reingold (Samsung Research America) and Aaron Roth (University of Pennsylvania), to appear in Science tomorrow, we present a new methodology for navigating the challenges of adaptivity. A central application of our general approach is the reusable holdout mechanism that allows the analyst to safely validate the results of many adaptively chosen analyses without the need to collect costly fresh data each time.

The curse of adaptivity

A beautiful example of how false discovery arises as a result of adaptivity is Freedman’s paradox. Suppose that we want to build a model that explains “systolic blood pressure” in terms of hundreds of variables quantifying the intake of various kinds of food. In order to reduce the number of variables and simplify our task, we first select some promising looking variables, for example, those that have a positive correlation with the response variable (systolic blood pressure). We then fit a linear regression model on the selected variables. To measure the goodness of our model fit, we crank out a standard F-test from our favorite statistics textbook and report the resulting p-value.
Inference after selection: We first select a subset of the variables based on a data-dependent criterion and then fit a linear model on the selected variables.
Freedman showed that the reported p-value is highly misleading - even if the data were completely random with no correlation whatsoever between the response variable and the data points, we’d likely observe a significant p-value! The bias stems from the fact that we selected a subset of the variables adaptively based on the data, but we never account for this fact. There is a huge number of possible subsets of variables that we selected from. The mere fact that we chose one test over the other by peeking at the data creates a selection bias that invalidates the assumptions underlying the F-test.
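
The paradox is easy to reproduce in a few lines. The sketch below loosely follows Freedman’s original setup (screen the variables at a lenient significance level on pure noise, then refit on the survivors); the exact numbers are illustrative:

```python
# Reproduce Freedman's paradox on pure noise: screen variables by their
# individual correlation with y, refit on the survivors, and read off the
# overall F-test p-value. A sketch; numbers are illustrative.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n, d = 100, 50
X = rng.normal(size=(n, d))
y = rng.normal(size=n)                     # the response is independent noise

# Adaptive step: keep variables whose univariate correlation looks "promising".
keep = []
for j in range(d):
    r = np.corrcoef(X[:, j], y)[0, 1]
    t = r * np.sqrt((n - 2) / (1 - r * r))
    if 2 * stats.t.sf(abs(t), n - 2) < 0.25:
        keep.append(j)

# Refit a linear model on the selected variables and run the usual F-test.
Z = np.column_stack([np.ones(n), X[:, keep]])
beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
rss = ((y - Z @ beta) ** 2).sum()
tss = ((y - y.mean()) ** 2).sum()
k = len(keep)
F = ((tss - rss) / k) / (rss / (n - k - 1))
print(k, F, stats.f.sf(F, k, n - k - 1))   # tiny p-value despite pure noise
```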

Freedman’s paradox bears an important lesson. Significance levels of standard procedures do not capture the vast number of analyses one can choose to carry out or to omit. For this reason, adaptivity is one of the primary explanations of why research findings are frequently false, as argued by Gelman and Loken, who aptly refer to adaptivity as the “garden of forking paths”.

Machine learning competitions and holdout sets

Adaptivity is not just an issue with p-values in the empirical sciences. It affects other domains of data science just as well. Machine learning competitions are a perfect example. Competitions have become an extremely popular format for solving prediction and classification problems of all sorts.

Each team in the competition has full access to a publicly available training set which they use to build a predictive model for a certain task such as image classification. Competitors can repeatedly submit a model and see how the model performs on a fixed holdout data set not available to them. The central component of any competition is the public leaderboard which ranks all teams according to the prediction accuracy of their best model so far on the holdout. Every time a team makes a submission they observe the score of their model on the same holdout data. This methodology is inspired by the classic holdout method for validating the performance of a predictive model.
Ideally, the holdout score gives an accurate estimate of the true performance of the model on the underlying distribution from which the data were drawn. However, this is only the case when the model is independent of the holdout data! In contrast, in a competition the model generally incorporates previously observed feedback from the holdout set. Competitors work adaptively and iteratively with the feedback they receive. An improved score for one submission might convince the team to tweak their current approach, while a lower score might cause them to try out a different strategy. But the moment a team modifies their model based on a previously observed holdout score, they create a dependency between the model and the holdout data that invalidates the assumption of the classic holdout method. As a result, competitors may begin to overfit to the holdout data that supports the leaderboard. This means that their score on the public leaderboard continues to improve, while the true performance of the model does not. In fact, unreliable leaderboards are a widely observed phenomenon in machine learning competitions.
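
A small simulation in the spirit of the experiments in the paper makes the effect visible (the details here are illustrative, not the authors’ exact setup): when feature selection is allowed to peek at the holdout, the holdout score looks impressive even though the data are pure noise.

```python
# Overfitting a fixed holdout by reusing it adaptively: select features whose
# correlation with the label agrees on train and holdout, then vote with them.
# Illustrative sketch only; labels and features are pure noise.
import numpy as np

rng = np.random.default_rng(1)
n, d = 1000, 1000

def noise():
    return rng.normal(size=(n, d)), rng.choice([-1.0, 1.0], size=n)

X_train, y_train = noise()
X_hold,  y_hold  = noise()        # the fixed holdout behind the "leaderboard"
X_new,   y_new   = noise()        # truly fresh data, never looked at

def corrs(X, y):
    return (X * y[:, None]).mean(axis=0)

keep = np.sign(corrs(X_train, y_train)) == np.sign(corrs(X_hold, y_hold))
w = np.sign(corrs(X_train, y_train)) * keep   # sign-vote over "agreeing" features

def accuracy(X, y):
    return np.mean(np.sign(X @ w) == y)

print("holdout accuracy:", accuracy(X_hold, y_hold))    # well above 0.5
print("fresh-data accuracy:", accuracy(X_new, y_new))   # about 0.5
```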

Reusable holdout sets

A standard proposal for coping with adaptivity is simply to discourage it. In the empirical sciences, this proposal is known as pre-registration and requires the researcher to specify the exact experimental setup ahead of time. While possible in some simple cases, it is in general too restrictive as it runs counter to today’s complex data analysis workflows.

Rather than limiting the analyst, our approach provides means of reliably verifying the results of an arbitrary adaptive data analysis. The key tool for doing so is what we call the reusable holdout method. As with the classic holdout method discussed above, the analyst is given unfettered access to the training data. What changes is that there is a new algorithm in charge of evaluating statistics on the holdout set. This algorithm ensures that the holdout set maintains the essential guarantees of fresh data over the course of many estimation steps.
The limit of the method is determined by the size of the holdout set - the number of times that the holdout set may be used grows roughly as the square of the number of collected data points in the holdout, as our theory shows.
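
A minimal sketch of such a holdout-guarding algorithm is shown below, in the spirit of the Thresholdout mechanism from the paper: answer each query with the training estimate unless it drifts too far from a noise-protected holdout estimate. The budget bookkeeping and parameter choices from the paper are omitted here.

```python
# Simplified sketch of a reusable-holdout mechanism: only reveal (noisy)
# holdout information when the training estimate has started to drift from it.
# Budget accounting and the paper's parameter settings are omitted.
import numpy as np

class ReusableHoldout:
    def __init__(self, train, holdout, threshold=0.04, sigma=0.01, seed=0):
        self.train, self.holdout = train, holdout
        self.threshold, self.sigma = threshold, sigma
        self.rng = np.random.default_rng(seed)

    def query(self, phi):
        """phi maps a dataset to a mean statistic, e.g. a model's accuracy."""
        train_val = phi(self.train)
        hold_val = phi(self.holdout)
        # Compare against a noisy threshold; answer with the holdout value
        # (plus noise) only when the analyst appears to be overfitting.
        if abs(hold_val - train_val) > self.threshold + self.rng.laplace(0, self.sigma):
            return hold_val + self.rng.laplace(0, self.sigma)
        return train_val
```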

Armed with the reusable holdout, the analyst is free to explore the training data and verify tentative conclusions on the holdout set. It is now entirely safe to use any information provided by the holdout algorithm in the choice of new analyses to carry out, or the tweaking of existing models and parameters.

A general methodology

The reusable holdout is only one instance of a broader methodology that is, perhaps surprisingly, based on differential privacy—a notion of privacy preservation in data analysis. At its core, differential privacy is a notion of stability requiring that any single sample should not influence the outcome of the analysis significantly.
Example of a stable learning algorithm: Deletion of any single data point does not affect the accuracy of the classifier much.
A beautiful line of work in machine learning shows that various notions of stability imply generalization. That is, any sample estimate computed by a stable algorithm (such as the prediction accuracy of a model on a sample) must be close to what we would observe on fresh data.

What sets differential privacy apart from other stability notions is that it is preserved by adaptive composition. Combining multiple algorithms that each preserve differential privacy yields a new algorithm that also satisfies differential privacy albeit at some quantitative loss in the stability guarantee. This is true even if the output of one algorithm influences the choice of the next. This strong adaptive composition property is what makes differential privacy an excellent stability notion for adaptive data analysis.

In a nutshell, the reusable holdout mechanism is simply this: access the holdout set only through a suitable differentially private algorithm. It is important to note, however, that the user does not need to understand differential privacy to use our method. The user interface of the reusable holdout is the same as that of the widely used classical method.

Reliable benchmarks

A closely related work with Avrim Blum dives deeper into the problem of maintaining a reliable leaderboard in machine learning competitions (see this blog post for more background). While the reusable holdout could directly be used for this purpose, it turns out that a variant of the reusable holdout, which we call the Ladder algorithm, provides even better accuracy.

This method is not just useful for machine learning competitions, since there are many problems that are roughly equivalent to that of maintaining an accurate leaderboard in a competition. Consider, for example, a performance benchmark that a company uses to test improvements to a system internally before deploying them in a production system. As the benchmark data set is used repeatedly and adaptively for tasks such as model selection, hyper-parameter search and testing, there is a danger that eventually the benchmark becomes unreliable.

Conclusion

Modern data analysis is inherently an adaptive process. Attempts to limit what data scientists will do in practice are ill-fated. Instead, we should create tools that respect the usual workflow of data science while at the same time increasing the reliability of data-driven insights. It is our goal to continue exploring techniques that can help create more reliable validation techniques and benchmarks that track true performance more accurately than existing methods.