Tuesday, 31 March 2015

Google Analytics Introduces Product Release Notes

Ever feel like you just can’t keep up with all the new features in Google Analytics? We hear you! To help you keep track of everything that’s going on, we’ve started publishing Release Notes in our product Help Center.

Release notes will be updated periodically and will have the most comprehensive list of new features or changes to the Google Analytics product. So, if you see something new in your account and have questions, we recommend starting here. We’ll point you to the relevant documentation to get you up to speed on everything you need to know.

We're happy to be adding another resource to keep our users informed. Check it out today!

Posted by Louis Gray, Analytics Advocate

Thursday, 26 March 2015

Solutions Guide for Implementing Google Analytics via Google Tag Manager

Marketers, developers, and practitioners of analytics depend on having the right data at the right time - but implementing analytics code or AdWords pixels can be a less than fun (or easy) experience. Google Tag Manager makes tagging simple and fast by letting you add tags with a simple UI instead of code, while also offering advanced tracking features used by some of the web’s top sites.

Today we’re excited to announce the launch of the Solutions Guide section on the Google Analytics and Google Tag Manager Help Centers. The Solutions Guide area is focused on providing actionable, hands-on, step-by-step instructions for implementing Google Analytics, AdWords, DoubleClick, and other third-party tags via Google Tag Manager.

In this guide, you’ll learn:
  • When and why to use Google Tag Manager
  • Best practices for naming conventions and setup tips
  • When to choose the Data Layer or the Tag Manager UI
  • How to implement GA event tracking, custom dimensions & cross-domain tracking
  • How to set up AdWords, DoubleClick, and Dynamic Remarketing tags in GTM
We’re thrilled to share this with you and hope you find it helpful as you implement Google Tag Manager.

Check out the new GTM Solutions Guide today!

Happy Tagging.

Posted by Krista Seiden, Analytics Advocate

Wednesday, 25 March 2015

Evolving Beyond The Conversion With Neil Hoyne

Measurement is constantly evolving, and while metrics by themselves each tell us something interesting, they do not necessarily tell the whole story or, equally important, what to do next. In essence, our tools provide the what, but not always the why. As marketers and analysts, we need to put in the work and be able to take the next steps with our data: tell the whole story to our teams and stakeholders and be consultative in decision making and direction.

This is really important to get right, because the use of data is still new territory for many companies (frequently, decisions are still based simply on how marketers feel). And while this may have been fine in a pre-digital age, the future of your company may very well depend on embracing analytics. With the fragmentation of users and channels, there’s simply too much for anyone to track by intuition alone. So it comes down to knowing what really works and why - these are the modern keys to success.

In this recent talk, Googler Neil Hoyne, Global Program Manager for Customer Analytics, shares how to embrace the above as well as take the next steps with your measurement.

A few key takeaways:
  • You need to evolve your measurement plan to better fit the state of the web and the complex customer journey (see our recent measurement guide for help).
  • Question whether you have the right goals or need to adjust them, and don’t be afraid to change goals if need be. Make sure you have the right macro and micro conversions.
  • Build an attribution model (also see our guide) that works for your brand, considering the unique factors that make up your business and what messages make sense in each different context (for example, mobile, social, email, etc). 
  • Measure your customers in a user-centric way and move beyond the old session-based world.
Watch the whole talk embedded below:
And be sure to connect with Neil on Twitter and Google+.

Posted by the Google Analytics Team

Monday, 16 March 2015

Google Computer Science Capacity Awards



One of Google's goals is to surface successful strategies that support the expansion of high-quality Computer Science (CS) programs at the undergraduate level. Innovation in teaching and technology, together with better engagement of women and underrepresented minority students, is necessary for creating inclusive, sustainable, and scalable educational programs.

To address issues arising from the dramatic increase in undergraduate CS enrollments, we recently launched the Computer Science Capacity Awards program. For this three-year program, select educational institutions were invited to contribute proposals for innovative, inclusive, and sustainable approaches to address current scaling issues in university CS educational programs.

Today, after an extensive proposal review process, we are pleased to announce the recipients of the Capacity Awards program:

Carnegie Mellon University - Professor Jacobo Carrasquel
Alternate Instructional Model for Introductory Computer Science Classes
CMU will develop a new instructional model consisting of two optional mini lectures per week given by the instructor, and problem-solving sessions with flexible group meetings that are coordinated by undergraduate and graduate teaching assistants.

Duke University - Professor Jeffrey Forbes
North Carolina State University - Professor Kristy Boyer
University of North Carolina - Professor Ketan Mayer-Patel
RESEARCH TRIANGLE PEER TEACHING FELLOWS: Scalable Evidence-Based Peer Teaching for Improving CS Capacity and Diversity
The project hopes to increase CS retention and diversity by developing a highly scalable, effective, evidence-based peer training program across three universities in the North Carolina Research Triangle.

Mount Holyoke College - Professor Heather Pon-Barry
MaGE (Megas and Gigas Educate): Growing Computer Science Capacity at Mount Holyoke College
Mount Holyoke’s MaGE program includes a plan to grow enrollment in introductory CS courses, particularly for women and other underrepresented groups. The program also includes a plan of action for CS students to educate, mentor, and support others in inclusive ways.

George Mason University - Professor Jeff Offutt
SPARC: Self-PAced Learning increases Retention and Capacity
George Mason University wants to replace the traditional course model for CS-1 and CS-2 with an innovative teaching model of self-paced introductory programming courses. Students will periodically demonstrate competency with practical skills demonstrations similar to those used in martial arts.

Rutgers University - Professor Andrew Tjang
Increasing the Scalability and Diversity in the Face of Large Growth in Computer Science Enrollment
Rutgers' program addresses scalability issues with technology tools, as well as collaborative spaces. It also emphasizes outreach to Rutgers’ women’s college and includes original research on success in CS programs to create new courses that cater to the changing environment.

University of California, Berkeley - Professor John DeNero
Scaling Computer Science through Targeted Engagement
Berkeley’s program plans to increase Software Engineering and UI Design enrollment by 500 total students/year, as well as increase the number of women and underrepresented minority CS majors by a factor of three.

Each of the selected schools brings a unique and innovative approach to addressing current scaling issues, and we are excited to collaborate in developing concrete strategies for sustainable and inclusive educational programs. Stay tuned over the coming year, when we will report on the recipients' progress and share results with the broader CS education community.

Monday, 9 March 2015

Announcing the Google MOOC Focused Research Awards



Last year, Google and Tsinghua University hosted the 2014 APAC MOOC Focused Faculty Workshop, an event designed to share, brainstorm and generate ideas aimed at fostering MOOC innovation. As a result of the ideas generated at the workshop, we solicited proposals from the attendees for research collaborations that would advance important topics in MOOC development.

After expert reviews and committee discussions, we are pleased to announce the following recipients of the MOOC Focused Research Awards. These awards cover research exploring new interactions to enhance learning experience, personalized learning, online community building, interoperability of online learning platforms and education accessibility:

  • “MOOC Visual Analytics” - Michael Ginda, Indiana University, United States
  • “Improvement of students’ interaction in MOOCs using participative networks” - Pedro A. Pernías Peco, Universidad de Alicante, Spain
  • “Automated Analysis of MOOC Discussion Content to Support Personalised Learning” - Katrina Falkner, The University of Adelaide, Australia
  • “Extending the Offline Capability of Spoken Tutorial Methodology” - Kannan Moudgalya, Indian Institute of Technology Bombay, India
  • “Launching the Pan Pacific ISTP (Information Science and Technology Program) through MOOCs” - Yasushi Kodama, Hosei University, Japan
  • “Fostering Engagement and Social Learning with Incentive Schemes and Gamification Elements in MOOCs” - Thomas Schildhauer, Alexander von Humboldt Institute for Internet and Society, Germany
  • “Reusability Measurement and Social Community Analysis from MOOC Content Users” - Timothy K. Shih, National Central University, Taiwan

In order to further support these projects and foster collaboration, we have begun pairing the award recipients with Googlers pursuing online education research as well as product development teams.

Google is committed to supporting innovation in online learning at scale, and we congratulate the recipients of the MOOC Focused Research Awards. It is our belief that these collaborations will further develop the potential of online education, and we are very pleased to work with these researchers to jointly push the frontier of MOOCs.

Thursday, 5 March 2015

Build a loyal user base with three new Mobile App Analytics reports

Successful developers understand that in order to have a popular app, retaining a loyal user base is just as important as driving new installs. Today at the Game Developers Conference in San Francisco, we introduced new reports that will help you measure this in two meaningful ways. We’re happy to announce that Mobile App Analytics will now let you understand how users come back to your app day after day, and provide the rich insights you need to measure their value over time. Let’s take a look at how these new reports can help make your app a hit.

Active Users
The Active Users report displays your 1-day, 7-day, 14-day, and 30-day trailing active users next to each other in one easy-to-view dashboard. The new overview gives immediate insight into how users interact with your app over time, along with drop-off rate comparisons. With this report, an app download becomes only the beginning of a potentially valuable relationship with your new users.

Benchmark active users at 1, 7, 14, and 30 days by selecting the segments you want.
While these metrics help you monitor your active user trends, when put into context they can answer important questions about your user acquisition strategies. For example, if you are investing in different campaigns, you can compare the cost of retaining users acquired via paid traffic versus organic to understand if you are attracting the right type of users. Not only can you measure your cost effectiveness, but you can also continue to monitor whether or not the users you paid for are still coming back after the campaign is over. This is particularly important when trying to keep your loyal user base engaged and happy with your app.
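For readers who want to sanity-check these trailing windows against their own raw data, here is a minimal sketch of the underlying arithmetic. The session log format, user IDs, and dates below are hypothetical and purely for illustration; this is not how the report itself pulls data.

```python
from datetime import date, timedelta

# Hypothetical session log: (user_id, session_date) pairs.
sessions = [
    ("u1", date(2015, 3, 1)), ("u1", date(2015, 3, 4)),
    ("u2", date(2015, 3, 2)), ("u3", date(2015, 3, 30)),
]

def trailing_active_users(log, as_of, window_days):
    """Count distinct users with at least one session in the trailing window."""
    start = as_of - timedelta(days=window_days - 1)
    return len({user for user, day in log if start <= day <= as_of})

as_of = date(2015, 3, 31)
for window in (1, 7, 14, 30):
    print(f"{window}-day active users:", trailing_active_users(sessions, as_of, window))
```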

Lior Romano, Founder and CEO of Gentoo Labs (the makers of Contacts+ for iOS and Android), was one of the first customers to try out this new report during our beta test period. He found the Active Users report especially useful for managing and organizing all their information at a glance: “We love the new Google Analytics Active Users feature -- it's a real time-saver! We get a quick overview of the 1/7/14/30-day active user trends side by side in a snap, which helps us to easily track our main metrics.”

Cohort Analysis
After learning how many users have opened your app, the next step in driving engagement is understanding when they come back. Cohort Analysis is a user analysis technique that lets you analyze and compare your users by looking at their customer journey. Using Cohort Analysis, you can see when users come back to your app and how they behave over time after the day of their first session, and you can further filter the information by day, week, or month. We’ve also added the ability to compare different segments of users based on the day of the first install.
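To make the cohort idea concrete, here is a minimal sketch of building a day-based retention table from a hypothetical session log: users are grouped by the date of their first session, and we count how many return N days later. The log and user IDs are made up for illustration; this is not the report's actual implementation.

```python
from collections import defaultdict
from datetime import date, timedelta

# Hypothetical session log: user_id -> dates on which the user opened the app.
sessions = {
    "u1": {date(2015, 3, 1), date(2015, 3, 2), date(2015, 3, 4)},
    "u2": {date(2015, 3, 1), date(2015, 3, 3)},
    "u3": {date(2015, 3, 2)},
}

# Group users into cohorts by the date of their first session.
cohorts = defaultdict(set)
for user, days in sessions.items():
    cohorts[min(days)].add(user)

# For each cohort, count how many users came back n days after their first session.
for cohort_day in sorted(cohorts):
    users = cohorts[cohort_day]
    row = [
        f"day {n}: {sum(1 for u in users if cohort_day + timedelta(days=n) in sessions[u])}/{len(users)}"
        for n in range(5)
    ]
    print(cohort_day, "|", "  ".join(row))
```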

In order to validate your user acquisition strategies, Cohort Analysis lets you compare different periods or campaigns. For example, you can compare different weeks or months to measure the retention effectiveness of a single channel and see whether you continue to attract valuable users throughout a campaign. The flexibility of the report also allows you to see how much time users spend in an app as they come back day after day. With these valuable insights, Mobile App Analytics users can tailor their acquisition campaigns or app experience, just as our partner E-Nor did: “Cohort analysis in GA made it easy for E-Nor to gauge the effectiveness of lead nurturing efforts during an app free-trial promotion campaign. The analysis clearly showed that many users responded well to email and in-app reminders, resulting in over 50% retention between the 3rd and 5th day post sign-up, as opposed to 30% in the 1st and 2nd day.”

See at a glance when users are coming back to your app.

Lifetime Value
Analyzing retention is a great way to ensure users stick with your app and come back day after day. With Lifetime Value reporting, you’ll get a full picture of these users’ value over time. To get the most out of this report, it’s important to start with a clear definition of what a user’s value means to you based on your business objectives. Once you’ve defined the value, you can access the report to measure certain variables such as revenue per user and number of screen views per user over a period of 90 days. For example, if the goal of your app is to get users to purchase virtual or material goods, you’ll want to use this report to get a clear view of when they make a purchase and how much they are spending in your app over time.

Lifetime Value is a key metric for measuring the effectiveness of your acquisition campaigns. If your cost to acquire a new user is higher than the average value that user generates over time, you might want to optimize your campaigns so that acquisition costs stay in line with the lifetime revenue users generate. Lifetime Value is particularly valuable if you offer in-app purchases, but it can also surface many other useful insights, such as the number of times users open your app, total screen views, and goal completions.
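As a back-of-the-envelope illustration of that comparison, the sketch below averages 90-day revenue per user from a hypothetical purchase log and checks it against an assumed acquisition cost. All figures and field names are invented for the example.

```python
# Hypothetical revenue recorded per user within 90 days of their first session.
revenue_90d = {"u1": 4.99, "u2": 0.0, "u3": 12.50, "u4": 0.0}

cost_per_acquired_user = 2.40  # assumed average acquisition cost per user

avg_ltv_90d = sum(revenue_90d.values()) / len(revenue_90d)
print(f"90-day average LTV per user: ${avg_ltv_90d:.2f}")
print(f"Cost per acquired user:      ${cost_per_acquired_user:.2f}")

if avg_ltv_90d < cost_per_acquired_user:
    print("Acquisition cost exceeds 90-day value -- consider optimizing campaigns.")
else:
    print("Users are worth more within 90 days than they cost to acquire.")
```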

Session duration per user compared to goal completions over a 60-day window.

How to get started
The Cohort Analysis report is now available in beta and can be found under the ‘Audience’ section of your Google Analytics account. The Lifetime Value and Active Users reports are coming soon to all Analytics accounts.

To get started, log in to your Analytics account and look for the new reports under the Audience section.


Posted by Gene Chan and Rahul Oak on behalf of the Google Analytics Team

Wednesday, 4 March 2015

A step closer to quantum computation with Quantum Error Correction



Computer scientists have dreamt of large-scale quantum computation since at least 1994 -- the hope is that quantum computers will be able to process certain calculations much more quickly than any classical computer, helping to solve problems ranging from complicated physics or chemistry simulations to solving optimization problems to accelerating machine learning tasks.

One of the primary challenges is that quantum memory elements (“qubits”) have always been too prone to errors. They’re fragile and easily disturbed -- any fluctuation or noise from their environment can introduce memory errors, rendering the computations useless. As it turns out, getting even just a small number of qubits together to repeatedly perform the required quantum logic operations and still be nearly error-free is just plain hard. But our team has been developing the quantum logic operations and qubit architectures to do just that.

In our paper “State preservation by repetitive error detection in a superconducting quantum circuit”, published in the journal Nature, we describe a superconducting quantum circuit with nine qubits where, for the first time, the qubits are able to detect and effectively protect each other from bit errors. This quantum error correction (QEC) can overcome memory errors by applying a carefully choreographed series of logic operations on the qubits to detect where errors have occurred.
Photograph of the device containing nine quantum bits (qubits). Each qubit interacts with its neighbors to protect them from error.

So how does QEC work? In a classical computer, we can monitor bits directly to detect errors. However, qubits are much more fickle -- measuring a qubit directly will collapse entanglement and superposition states, removing the quantum elements that make it useful for computation.

To get around this, we introduce additional ‘measurement’ qubits, and perform a series of quantum logic operations that look at the 'measurement' and 'data' qubits in combination. By looking at the state of these pairwise combinations (using quantum XOR gates), and performing some careful cross-checking, we can pull out just enough information to detect errors without altering the information in any individual qubit.
The basics of error correction. ‘Measurement’ qubits can detect errors on ‘data’ qubits through the use of quantum XOR gates.
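A loose classical analogy (not a simulation of real quantum gates or of this device) is syndrome extraction in a bit-flip repetition code: by reading only the parities of neighboring data bits, you can locate a single flipped bit without ever reading the data bits themselves. The sketch below is purely illustrative.

```python
import random

def syndrome(data_bits):
    """Parities of neighboring data bits -- what the 'measurement' elements report."""
    return [data_bits[i] ^ data_bits[i + 1] for i in range(len(data_bits) - 1)]

def locate_single_flip(before, after):
    """Compare syndromes before/after a possible error to find which bit flipped."""
    changed = [i for i, (a, b) in enumerate(zip(before, after)) if a != b]
    if not changed:
        return None                                # no error detected
    if len(changed) == 2:
        return changed[1]                          # interior bit touches two parity checks
    return 0 if changed[0] == 0 else len(before)   # end bit touches only one

# Encode a logical 0 across five data bits, then flip one at random.
data = [0, 0, 0, 0, 0]
clean = syndrome(data)
flipped = random.randrange(len(data))
data[flipped] ^= 1
print("flipped bit:", flipped, "| detected at:", locate_single_flip(clean, syndrome(data)))
```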

We’ve also shown that storing information in five qubits works better than just storing it in one, and that with nine qubits the error correction works even better. That’s a key result -- it shows that the quantum logic operations are trustworthy enough that by adding more qubits, we can detect more complex errors that otherwise may cause algorithmic failure.

While the basic physical processes behind quantum error correction are feasible, many challenges remain, such as improving the logic operations behind error correction and testing protection from phase-flip errors. We’re excited to tackle these challenges on the way towards making real computations possible.

Monday, 2 March 2015

Large-Scale Machine Learning for Drug Discovery



Discovering new treatments for human diseases is an immensely complicated challenge: even after extensive research to develop a biological understanding of a disease, an effective therapeutic that can improve quality of life must still be found. This process often takes years of research, requiring the creation and testing of millions of drug-like compounds in an effort to find just a few viable drug treatment candidates. These high-throughput screens are often automated in sophisticated labs and are expensive to perform.

Recently, deep learning with neural networks has been applied in virtual drug screening [1, 2, 3], which attempts to replace or augment the high-throughput screening process with the use of computational methods in order to improve its speed and success rate [4]. Traditionally, virtual drug screening has used only the experimental data from the particular disease being studied. However, as the volume of experimental drug screening data across many diseases continues to grow, several research groups have demonstrated that data from multiple diseases can be leveraged with multitask neural networks to improve the virtual screening effectiveness.
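The core idea behind a multitask network is a shared representation feeding one output "head" per assay, so that data from every disease shapes the shared layers. Below is a minimal numpy sketch of that structure on toy data; the architectures, features, and training procedures in the paper are far larger and differ in their details, so treat this only as an illustration of the shared-layer idea.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: the same compound features, with two separate assay labels (tasks).
n, d, h, tasks = 200, 16, 8, 2
X = rng.normal(size=(n, d))
true_w = rng.normal(size=(d, tasks))
Y = (X @ true_w + 0.1 * rng.normal(size=(n, tasks)) > 0).astype(float)

# Shared hidden layer, plus one logistic output head per task.
W1 = rng.normal(scale=0.1, size=(d, h)); b1 = np.zeros(h)
W2 = rng.normal(scale=0.1, size=(h, tasks)); b2 = np.zeros(tasks)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.1
for step in range(500):
    H = np.tanh(X @ W1 + b1)            # shared representation used by every task
    P = sigmoid(H @ W2 + b2)            # per-task probabilities
    G = (P - Y) / n                     # gradient of mean cross-entropy wrt logits
    dW2 = H.T @ G; db2 = G.sum(axis=0)
    dH = G @ W2.T * (1 - H ** 2)        # backprop through tanh
    dW1 = X.T @ dH; db1 = dH.sum(axis=0)
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

H = np.tanh(X @ W1 + b1)
P = sigmoid(H @ W2 + b2)
print("per-task training accuracy:", ((P > 0.5) == Y).mean(axis=0))
```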

In collaboration with the Pande Lab at Stanford University, we’ve released a paper titled "Massively Multitask Networks for Drug Discovery", investigating how data from a variety of sources can be used to improve the accuracy of determining which chemical compounds would be effective drug treatments for a range of diseases. In particular, we carefully quantified how the amount and diversity of screening data from a variety of diseases with very different biological processes can be used to improve the virtual drug screening predictions.

Using our large-scale neural network training system, we trained at a scale 18x larger than previous work with a total of 37.8M data points across more than 200 distinct biological processes. Because of our large scale, we were able to carefully probe the sensitivity of these models to a variety of changes in model structure and input data. In the paper, we examine not just the performance of the model but why it performs well and what we can expect for similar models in the future. The data in the paper represents more than 50M total CPU hours.
This graph shows a measure of prediction accuracy (ROC AUC is the area under the receiver operating characteristic curve) for virtual screening on a fixed set of 10 biological processes as more datasets are added.

One encouraging conclusion from this work is that our models are able to utilize data from many different experiments to increase prediction accuracy across many diseases. To our knowledge, this is the first time the effect of adding additional data has been quantified in this domain, and our results suggest that even more data could improve performance even further.

Machine learning at scale has significant potential to accelerate drug discovery and improve human health. We look forward to continued improvement in virtual drug screening and its increasing impact in the discovery process for future drugs.

Thank you to our other collaborators David Konerding (Google), Steven Kearnes (Stanford), and Vijay Pande (Stanford).

References:

1. Thomas Unterthiner, Andreas Mayr, Günter Klambauer, Marvin Steijaert, Jörg Kurt Wegner, Hugo Ceulemans, Sepp Hochreiter. Deep Learning as an Opportunity in Virtual Screening. Deep Learning and Representation Learning Workshop: NIPS 2014

2. George E. Dahl, Navdeep Jaitly, and Ruslan Salakhutdinov. Multi-task neural networks for QSAR predictions. arXiv preprint arXiv:1406.1231, 2014.

3. Junshui Ma, Robert P. Sheridan, Andy Liaw, George Dahl, and Vladimir Svetnik. Deep neural nets as a method for quantitative structure-activity relationships. Journal of Chemical Information and Modeling, 2015.

4. Peter Ripphausen, Britta Nisius, Lisa Peltason, and Jürgen Bajorath. Quo Vadis, Virtual Screening? A Comprehensive Survey of Prospective Applications. Journal of Medicinal Chemistry 2010 53 (24), 8461-8467