Friday, 27 December 2013

Klarna tracks third-party iframe with Universal Analytics’ cookieless approach

Klarna is one of Europe’s biggest providers of in-store credit and invoice-based payment solutions for the ecommerce sector. The company enables the end-consumer to order and receive products, then pay for them afterwards. Klarna assesses the credit and fraud risk for the merchant, allowing the merchant to offer a zero-friction checkout process – a win-win for the merchant-customer relationship.


Third-party domains pose a problem
Merchants use Klarna’s iframed checkout solution. The iframe sits on the merchant’s domain, but its actual contents are hosted on Klarna’s own domain. Browsers such as Safari on iPhone and iPad, as well as newer desktop browsers such as Internet Explorer 10, block third-party cookies by default. Many analytics solutions, however, rely on cookies. To avoid losing nearly all iPhone visits and many desktop visits, Klarna wanted to address this problem.

A cookieless approach to the rescue
Working with Google Analytics Certified Partner Outfox, Klarna found exactly what it needed in Universal Analytics, which introduces a set of features that change the way data is collected and organized in Google Analytics accounts. In addition to standard Google Analytics features, Universal Analytics provides new data collection methods, simplified feature configuration, custom dimensions and metrics, and multi-platform tracking.
“Thanks to Universal Analytics we can track the iframe on our merchants’ domains and be sure we get all traffic.”
- David Fock, Vice President Commerce, Klarna

In Klarna’s new cookieless approach, the “storage: none” option is set when the Universal Analytics tracker is created. The checkout iframe meanwhile supplies a unique, non-personally identifiable ‘client ID’. These measures cause Universal Analytics to disable cookies and instead use the client ID as a session identifier. Because no cookies are in use, browsers that block third-party cookies are no longer an issue.

Virtual pageviews are sent on checkout form interactions. Custom dimensions and metrics are used to tag each visit, with a dimension indicating which merchant is hosting the iframe and a metric recording the cart value the user brings to the checkout.
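Put together, the setup can be sketched in analytics.js command syntax. This is an illustration only: the property ID, client ID, dimension and metric indexes, and page path are placeholders rather than Klarna’s actual configuration, and the small `ga` stub merely stands in for the official analytics.js snippet so the sketch runs on its own.

```javascript
// Stand-in for the analytics.js command queue so this sketch is
// self-contained; in production the official snippet defines `ga`.
const commands = [];
const ga = (...args) => commands.push(args);

// Placeholder client ID; a real integration generates a unique,
// non-personally-identifiable ID per checkout session.
const sessionClientId = '35009a79-1a05-49d7-b876-2b884d0f825b';

// 'storage: none' disables cookies entirely; the supplied client ID
// then serves as the session identifier.
ga('create', 'UA-XXXXX-Y', { storage: 'none', clientId: sessionClientId });

// Tag the visit: which merchant hosts the iframe (custom dimension)
// and the cart value the user brings to checkout (custom metric).
ga('set', 'dimension1', 'example-merchant');
ga('set', 'metric1', 149.5);

// Virtual pageview sent on a checkout form interaction.
ga('send', 'pageview', '/checkout/address-entered');
```

Because the client ID replaces the cookie as the session key, the same commands work identically in browsers that block third-party cookies.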

Complete tracking and assured analysis
With Universal Analytics features, Klarna ensures iframe tracking is complete across all browsers. By using the virtual pageviews as URL goals and funnel steps, goal-flow visualizations reveal bottlenecks in the checkout flow. The new custom dimensions and metrics, together with ecommerce tracking, mean that reports can now be set up to show how each merchant’s cart value correlates with its final transaction value.

Be sure to check out the whole case study here.

Posted by the Google Analytics Team

Thursday, 19 December 2013

Wrangle Your Site Categories And Product Types With Content Grouping

Viewing your site content in logical groups is important for sites and businesses of all types. It lets you understand how different categories of products work together and which buckets generate the most revenue – or, if you run a news site, which categories are hottest and most in demand. Some of you have been analyzing these things via Advanced Segments, but we want to make this even easier and more useful across the product. That’s why we’re excited to launch Content Grouping.

Content Grouping allows sites to group their pages through tracking code, a UI-based rules editor, and/or UI-based extraction rules. Once implemented, Content Groupings become a dimension of the content reports and allow users to visualize their data based on each group in addition to the other primary dimensions.
We’ve been hard at work refining Content Grouping based on tester feedback to create a simplified experience that has been unified with the familiar Channel Grouping interface. Content Grouping supports three methods for creating groups: 1) Tracking Code, 2) Rules, 3) Extraction. You can use a single method or a combination of all of them. 
This will help you wrangle those long lists of tens, hundreds, or thousands of URLs, most of which receive only a tiny share of pageviews (or entrances, exits, etc.) – individually uninteresting, but together telling a meaningful story. We would like to help you grasp and represent this data in a grouped format, so you can understand the overall areas of your website (e.g. “product pages”, “search pages”, “watch pages”).
Content Grouping lets you group content into a logical structure that reflects how you think about your site. You can view aggregated metrics by group name, and then drill in to individual URLs, page titles, or screen names. For example, you can see the aggregated number of pageviews for all pages in /Men/Shirts rather than for each URL or page title, and then drill in to see statistics for individual pages.
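For the tracking-code method, a page can declare its group through the contentGroup field before the pageview is sent. The sketch below follows the documented analytics.js syntax; the property ID, group index, and group value are placeholders, and the small `ga` stub only stands in for the loaded analytics.js library so the example is self-contained.

```javascript
// Stand-in for the analytics.js command queue; in a real page the
// official analytics.js snippet defines `ga`.
const commands = [];
const ga = (...args) => commands.push(args);

// Tracking-code method: declare which content group this page belongs
// to (index and value are placeholders), then send the pageview.
ga('create', 'UA-XXXXX-Y', 'auto');
ga('set', 'contentGroup1', 'Men/Shirts');
ga('send', 'pageview');
```

Rule-based and extraction-based groups need no code at all; they are configured entirely in the admin interface.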

Watch the video below to learn more:


Be sure to visit our Help Center to learn how to get started with Content Grouping today.

Happy Analyzing!

Posted by Russell Ketchum, Google Analytics Team

Monday, 16 December 2013

Groundbreaking simulations by Google Exacycle Visiting Faculty



In April 2011, we announced the Google Exacycle for Visiting Faculty, a new academic research awards program donating one billion core-hours of computational capacity to researchers. The Exacycle project enables massive parallelism for doing science in the cloud, and inspired multiple proposals aiming to take advantage of cloud scale. Today, we would like to share some exciting results from a project built on Google’s infrastructure.

Google Research Scientist Kai Kohlhoff, in collaboration with Stanford University and Google engineers, investigated how an important signalling protein in the membrane of human cells can switch off and on by changing its three-dimensional structure following a sequence of local conformational changes. This research can help to better understand the effects of certain chemical compounds on the human body and assist future development of more potent drug molecules with fewer side effects.

The protein, known as the beta-2 adrenergic receptor, is a G protein-coupled receptor (GPCR), a primary drug target that plays a role in several debilitating health conditions. These include asthma, type-2 diabetes, obesity, and hypertension. The receptor and its close GPCR relatives bind to many familiar molecules, such as epinephrine, beta-blockers, and caffeine. Understanding their structure, function, and the underlying dynamics during binding and activation increases our chances to decode the causes and mechanisms of diseases.

To gain insights into the receptor’s dynamics, Kai performed detailed molecular simulations using hundreds of millions of core hours on Google’s infrastructure, generating hundreds of terabytes of valuable molecular dynamics data. The Exacycle program enabled the realization of simulations with longer sampling and higher accuracy than previous experiments, exposing the complex processes taking place on the nanoscale during activation of this biological switch.

The paper summarizing the work of Kai and his collaborators is featured on the January cover of Nature Chemistry, to be published on December 17, 2013, with cover artwork by Google R&D UX Creative Lead Thor Lewis. The online version of the paper was published on the journal’s website today.

We are extremely pleased with the results of this program. We look forward to seeing this research continue to develop.

Friday, 13 December 2013

Using Universal Analytics to Measure Movement

The following is a guest post by Benjamin Mangold, Director of Digital & Analytics at Loves Data, a Google Analytics Certified Partner.

Universal Analytics includes new JavaScript tracking code for websites and new mobile SDKs. But Universal Analytics is a lot more than that: it also gives us the Measurement Protocol, which allows us to send data to Google Analytics without using the tracking code or SDKs at all.

Earlier this year, the team at Loves Data used Universal Analytics and the Measurement Protocol to measure our caffeine consumption and tie it to the team’s productivity. Our next challenge: getting our team’s movement into Google Analytics. With the help of an Xbox Kinect, movement-recognition software, and of course the Measurement Protocol, we started getting creative!



Business Applications and Analysis Opportunities

Measuring movement is fun, but although we can count total and unique dance moves, you might be wondering about the business applications. This is where the power of measuring offline interactions really shows. The Measurement Protocol enables business applications such as:
  • Measuring in-store purchases and tying purchases to your online data
  • Understanding behaviour across any connected device, including gaming consoles
  • Comparing offline billboard impressions to online display ad impressions
  • Getting insights into your audience’s online to offline journey
Once you have tied your online and offline data together you can begin to analyze the full impact of your different touch points. For example, if you are collecting contact details online, you can use Google Analytics to then understand who actually converts offline, whether this conversion is attending an information session or making a purchase at a cash register. The analysis possibilities made available by the Measurement Protocol are truly amazing.
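At its core, a Measurement Protocol hit is just an HTTP POST of URL-encoded parameters to the Google Analytics collection endpoint. The sketch below builds such a payload in Node.js; the property ID, client ID, and event names are placeholders, and the parameter names follow the public Measurement Protocol reference.

```javascript
// Build a Measurement Protocol "event" hit as a URL-encoded payload.
// tid (property ID) and cid (client ID) are placeholders.
const params = {
  v: '1',                                      // protocol version
  tid: 'UA-XXXXX-Y',                           // property ID (placeholder)
  cid: '35009a79-1a05-49d7-b876-2b884d0f825b', // anonymous client ID
  t: 'event',                                  // hit type
  ec: 'offline',                               // event category
  ea: 'in-store-purchase',                     // event action
  ev: '1'                                      // event value
};

const payload = new URLSearchParams(params).toString();

// Sending the hit is a plain HTTP POST with `payload` as the body:
//   POST https://www.google-analytics.com/collect
console.log(payload);
```

Because the hit is plain HTTP, anything that can make a web request – a point-of-sale system, a games console, or a Kinect pipeline – can report into the same Google Analytics property as your website.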

Wednesday, 11 December 2013

How attribution modeling increases profit for Baby Supermall

"Attribution modeling changes everything."

That's what Joe Meier of Baby Supermall told us recently. If you're looking for alphabets or monkeys on your new baby bedding, Baby Supermall is the place to be. But those products have an unusually long buying cycle. "Our typical customer is a pregnant mother-to-be," says Meier. "They have months to make a decision."

In this video, Meier describes how Google Analytics’ attribution modeling tool let them measure the impact of different marketing touch points before customers finally made a purchase, so they could figure out which of their marketing activities led all those moms (and dads) to visit the Baby Supermall site. It also saved him from the monster 80-megabyte spreadsheets he'd been building while trying to work those patterns out manually.

Result? “We’re spending our money more efficiently than we were before. We know what we’re getting for it,” says Meier. By linking their Google Analytics and AdWords accounts, Baby Supermall was able to see the impact of different keywords and optimize their AdWords ads, bringing in “tens of thousands of dollars in additional sales every week."

He calls the results "groundbreaking." Check out the video:


(PS: Don't miss their site if you happen to like very cute baby bedding.)

Happy Analyzing!

Posted by: Suzanne Mumford, Google Analytics Marketing

Googler Moti Yung elected as 2013 ACM Fellow



Yesterday, the Association for Computing Machinery (ACM) released the list of those who have been elected ACM Fellows in 2013. I am excited to announce that Google Research Scientist Moti Yung is among the distinguished individuals receiving this honor.

Moti was chosen for his contributions to computer science and cryptography that have provided fundamental knowledge to the field of computing security. We are proud of the breadth and depth of his contributions, and believe they serve as motivation for computer scientists worldwide.

On behalf of Google, I congratulate our colleague, who joins the 17 ACM Fellows and other professional society awardees at Google in exemplifying our extraordinarily talented people. You can read a more detailed summary of Moti’s accomplishments below, including the official citation from ACM.

Dr. Moti Yung: Research Scientist
For contributions to cryptography and its use in security and privacy of systems

Moti has made key contributions to several areas of cryptography including (but not limited to!) secure group communication, digital signatures, traitor tracing, threshold cryptosystems, and zero-knowledge proofs. Moti's work often seeds a new area of theoretical cryptography while also finding broad applications. For example, in 1992, Moti co-developed a protocol by which users can jointly compute a group key from their own private information that is secure against coalitions of rogue users. This work led to the growth of the broadcast encryption research area and has applications to pay TV, network communication, and sensor networks.
Moti is also a long-time leader of the security and privacy research communities, having mentored many of the leading researchers in the field, and serving on numerous program committees. A prolific author, Moti routinely publishes 10+ papers a year, and has been a key contributor to principled and consistent anonymization practices and data protection at Google.

Thursday, 5 December 2013

Fairmont Gets Deeper Understanding of Social Interactions for Real Results

How do you improve social messaging for some of the world's most prestigious hotels? If you're Fairmont Raffles Hotels, you turn to Google Analytics. 

Fairmont is famous for its nearly 100 global luxury hotels, from the original Raffles Hotel in Singapore to the grand Empress Fairmont in Victoria, B.C. The variety of the properties can make social impact tricky to measure, says Barbara Pezzi, Director of Analytics & SEO.

Charmingly direct, Pezzi says her team tried other social media analytics tools and found that "the metrics were really lame. Number of likes and retweets — that didn't really tell us anything." They wanted to know exactly who they were attracting and how.

Once the Fairmont team began using Google Analytics, they were able to see their audiences more clearly and tailor messages to fit. The results were impressive: a doubling of bookings and revenue from social media. 

Here's the whole story:



"It was a big revelation for everyone" — when it comes to analytics, those are the magic words.

Learn more about Google Analytics and Google Analytics Premium here.

Posted by Suzanne Mumford, Google Analytics Marketing

Tuesday, 3 December 2013

Google Analytics Dashboards for Quick Insights

The following is a guest post by Benjamin Mangold, Director of Digital & Analytics at Loves Data, a Google Analytics Certified Partner.

Creating custom Google Analytics Dashboards is a great way to monitor performance and get quick insights into the success of key aspects of your websites and mobile apps. You can create dashboards to meet your particular needs, from understanding marketing campaign performance, to content engagement levels, to trends in goal conversions and e-commerce transactions.

Sample custom dashboard

The dashboards you create will depend on who is going to use them. You will want the dashboard used by your marketing manager to be different to the dashboard that is seen by your technical team - and different again for your CEO. You should always tie dashboards to the types of questions the particular person or stakeholder is going to ask. Basing your dashboards on particular roles or job functions within your organisation is a good place to start thinking about the type of dashboards you will want to design.
Dashboard Widgets

Each dashboard is made up of widgets, which are pieces of information or data drawn from your Google Analytics reports. There are a number of different widgets, and the ones you add to your dashboard will depend on the type of trends and insights you want to surface.


Metric widgets present a single piece of data on your dashboard along with a small sparkline.

Timeline widgets give a detailed sparkline showing trends by day. This widget allows you to show a single metric or compare two metrics.

Geomap widgets allow you to display a map within your dashboard. You can show the location of your visitors and even compare conversion rates or engagement by geographic location.

Table widgets display a table that combines information (a dimension) with up to two metrics.

Pie widgets present a pie or doughnut chart and are useful for visual comparisons.

Bar widgets are also useful for presenting comparisons. This widget allows you to pivot by an additional dimension and switch between horizontal and vertical layout.

In most cases you will want to use the ‘standard’ widgets. These present data that has been processed into the standard reports. You can also include ‘real-time’ widgets, but it is important to know that these will not be included if you are exporting or scheduling the dashboard.

Widget Filters

Filters can be applied to widgets within your dashboard, allowing you to further define what each widget presents. For example, if you want a metric widget showing the total number of visits from your Google AdWords campaigns, you could add a filter that only includes visits where the source is ‘google’ and the medium is ‘cpc’.


Sharing Dashboards

Once you have created your custom dashboards, you can keep them private, share them with everybody who has access to the reporting view, or even share them with the wider Google Analytics community. The Google Analytics Solutions Gallery is a crowdsourced collection of customizations and includes a number of great dashboards that you can add to your account.

Have a great dashboard? Want to win prizes? Loves Data, a Google Analytics Certified Partner, is running a competition for the best Google Analytics dashboard. Judges include Google’s own Justin Cutroni, Daniel Waisberg and Adam Singer. The competition closes on December 31, 2013, and winners will be announced in late January 2014.

Posted by Benjamin Mangold, Google Analytics Certified Partner

Free Language Lessons for Computers



Not everything that can be counted counts.
Not everything that counts can be counted.

50,000 relations from Wikipedia. 100,000 feature vectors from YouTube videos. 1.8 million historical infoboxes. 40 million entities derived from webpages. 11 billion Freebase entities in 800 million web documents. 350 billion words’ worth from books analyzed for syntax.

These are all datasets that we’ve shared with researchers around the world over the last year from Google Research.

But data by itself doesn’t mean much. Data is only valuable in the right context, and only if it leads to increased knowledge. Labeled data is critical to train and evaluate machine-learned systems in many arenas, improving systems that can increase our ability to understand the world. Advances in natural language understanding, information retrieval, information extraction, computer vision, etc. can help us tell stories, mine for valuable insights, or visualize information in beautiful and compelling ways.

That’s why we are pleased to be able to release sets of labeled data from various domains and with various annotations, some automatic and some manual. Our hope is that the research community will use these datasets in ways both straightforward and surprising, to improve systems for annotation or understanding, and perhaps launch new efforts we haven’t thought of.

Here’s a listing of the major datasets we’ve released in the last year, or you can subscribe to our mailing list. Please tell us what you’ve managed to accomplish, or send us pointers to papers that use this data. We want to see what the research world can do with what we’ve created.

50,000 Lessons on How to Read: a Relation Extraction Corpus

What is it: A human-judged dataset of two relations involving public figures on Wikipedia: about 10,000 examples of “place of birth” and 40,000 examples of “attended or graduated from an institution.”
Where can I find it: https://code.google.com/p/relation-extraction-corpus/
I want to know more: Here’s a handy blog post with a broader explanation, descriptions and examples of the data, and plenty of links to learn more.

11 Billion Clues in 800 Million Documents

What is it: We took the ClueWeb corpora and automatically labeled concepts and entities with Freebase concept IDs, an example of entity resolution. This dataset is huge: nearly 800 million web pages.
Where can I find it: We released two corpora: ClueWeb09 FACC and ClueWeb12 FACC.
I want to know more: We described the process and results in a recent blog post.

Features Extracted From YouTube Videos for Multiview Learning

What is it: Multiple feature families from a set of public YouTube videos of games. The videos are labeled with one of 30 categories, and each has an associated set of visual, auditory, and textual features.
Where can I find it: The data and more information can be obtained from the UCI machine learning repository (multiview video dataset), or from Google’s repository.
I want to know more: Read more about the data and uses for it here.

40 Million Entities in Context

What is it: A disambiguation set consisting of pointers to 10 million web pages with 40 million entities that have links to Wikipedia. This is another entity resolution corpus, since the links can be used to disambiguate the mentions, but unlike the ClueWeb example above, the links are inserted by the web page authors and can therefore be considered human annotation.
Where can I find it: Here’s the WikiLinks corpus, and tools can be found to help use this data on our partner’s page: Umass Wiki-links.
I want to know more: Other disambiguation sets, data formats, ideas for uses of this data, and more can be found at our blog post announcing the release.

Distributing the Edit History of Wikipedia Infoboxes

What is it: The edit history of 1.8 million infoboxes in Wikipedia pages in one handy resource. Attributes on Wikipedia change over time, and some of them change more than others. Understanding attribute change is important for extracting accurate and useful information from Wikipedia.
Where can I find it: Download from Google or from Wikimedia Deutschland.
I want to know more: We posted a detailed look at the data, the process for gathering it, and where to find it. You can also read a paper we published on the release.
Note the change in the capital of Palau.


Syntactic Ngrams over Time

What is it: We automatically syntactically analyzed 350 billion words from the 3.5 million English-language books in Google Books, then collated and released billions of unique tree fragments with counts, sorted into types. The underlying corpus is the same one that underlies the recently updated Google Ngram Viewer.
Where can I find it: http://commondatastorage.googleapis.com/books/syntactic-ngrams/index.html
I want to know more: We discussed the nature of dependency parses and described the data and its release in a blog post. We also published a paper about the release.

Dictionaries for linking Text, Entities, and Ideas

What is it: A large database of 175 million strings paired with 7.5 million concepts, annotated with counts, mined from Wikipedia. The concepts in this case are Wikipedia articles, and the strings are anchor-text spans that link to the concepts in question.
Where can I find it: http://nlp.stanford.edu/pubs/crosswikis-data.tar.bz2
I want to know more: A description of the data, several examples, and ideas for uses for it can be found in a blog post or in the associated paper.

Other datasets

Not every release had its own blog post describing it. Here are some other releases: