Friday, 27 December 2013

Klarna tracks third-party iframe with Universal Analytics’ cookieless approach

Klarna is one of Europe's biggest providers of in-store credit and invoice-based payment solutions for the ecommerce sector. The company enables the end-consumer to order and receive products, then pay for them afterwards. Klarna assesses the credit and fraud risk for the merchant, allowing the merchant to have a zero-friction checkout process – a win-win for the merchant-customer relationship.


Third-party domains pose a problem
Merchants use Klarna’s iframed checkout solution. The iframe is located on the merchant’s domain, but the actual iframe contents are hosted on Klarna’s own domain. Browsers such as Safari on iPhone and iPad, and later-generation desktop browsers such as Internet Explorer 10, block third-party cookies by default. Many analytics solutions, however, rely on cookies. To avoid losing nearly all iPhone visits and many desktop visits, Klarna wanted to address this problem.

A cookieless approach to the rescue
Working with Google Analytics Certified Partner Outfox, Klarna found exactly what it needed in Universal Analytics, which introduces a set of features that change the way data is collected and organized in Google Analytics accounts. In addition to standard Google Analytics features, Universal Analytics provides new data collection methods, simplified feature configuration, custom dimensions and metrics, and multi-platform tracking.
“Thanks to Universal Analytics we can track the iframe on our merchants’ domains and be sure we get all traffic.”
- David Fock, Vice President Commerce, Klarna

In Klarna’s new cookieless approach, the “storage: none” option is selected when the Universal Analytics tracker is created. The checkout iframe meanwhile supplies a unique, non-personally identifiable ‘client ID’. These measures cause Universal Analytics to disable cookies and instead use the client ID as the session identifier. Because no cookies are in use, browsers that block third-party cookies are no longer an issue at all.
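For reference, both pieces of that configuration are standard analytics.js tracker options. A minimal sketch of the idea (the property ID and client ID below are placeholders, not Klarna's actual setup):

ga('create', 'UA-XXXXX-Y', {
  'storage': 'none',      // disable all cookie storage
  'clientId': '35009a79-1a05-49d7-b876-2b884d0f825b'  // non-personally identifiable ID supplied by the iframe
});
ga('send', 'pageview');

With ‘storage’ set to ‘none’, analytics.js keeps no state in the browser, so the supplied client ID is what ties hits from the same checkout session together.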

Virtual pageviews are sent on checkout form interactions. Custom dimensions and metrics are used to tag each visit, with a dimension indicating which merchant is hosting the iframe and a metric capturing the cart value the user brings to the checkout.
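A hit of that shape might look like the following sketch (the dimension and metric slot numbers, the merchant name, and the virtual URL are illustrative assumptions; slots are configured per property):

ga('set', 'dimension1', 'example-merchant');  // which merchant hosts the iframe
ga('set', 'metric1', 249.50);                 // cart value brought to the checkout
ga('send', 'pageview', '/virtual/checkout/address-entered');  // virtual pageview for a form interaction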

Complete tracking and assured analysis
With Universal Analytics features, Klarna ensures iframe tracking is complete across all browsers. By using the virtual pageviews as URL goals and funnel steps, goal-flow visualizations can be used to find bottlenecks in the checkout flow. The new custom dimensions and metrics, together with ecommerce tracking, mean that reports can now be set up to reveal how each merchant’s cart value correlates with its final transaction value.

Be sure to check out the whole case study here.

Posted by the Google Analytics Team

Thursday, 19 December 2013

Wrangle Your Site Categories And Product Types With Content Grouping

Viewing your site content in logical groups is important for sites and businesses of all types. It lets you understand how different categories of products work together and which buckets generate the most revenue. Or, if you run a news site, it lets you see which categories are the hottest and most in demand. Some of you have been analyzing these things via Advanced Segments, but we want to make this even easier and more useful across the product. That’s why we’re excited to launch Content Grouping.

Content Grouping allows sites to group their pages through tracking code, a UI-based rules editor, and/or UI-based extraction rules. Once implemented, Content Groupings become a dimension of the content reports and allow users to visualize their data based on each group in addition to the other primary dimensions.
We’ve been hard at work refining Content Grouping based on tester feedback to create a simplified experience that has been unified with the familiar Channel Grouping interface. Content Grouping supports three methods for creating groups: 1) Tracking Code, 2) Rules, 3) Extraction. You can use a single method or a combination of all of them. 
This will help you wrangle those long lists of tens, hundreds, or thousands of URLs, most of which receive a tiny portion of the pageviews (or entrances, exits, etc.); each one is individually uninteresting, but together they tell a meaningful story. We would like to help you grasp and represent this data in a grouped format, so you can understand the overall areas of the website (e.g. “product pages”, “search pages”, “watch pages”).
Content Grouping lets you group content into a logical structure that reflects how you think about your site. You can view aggregated metrics by group name, and then drill in to individual URLs, page titles, or screen names. For example, you can see the aggregated number of pageviews for all pages in /Men/Shirts rather than for each URL or page title, and then drill in to see statistics for individual pages.
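With the tracking-code method, for example, a page declares its own group before the pageview is sent. A minimal analytics.js sketch (the property ID and the choice of slot 1 are assumptions; grouping slots are set up in the admin interface):

ga('create', 'UA-XXXXX-Y', 'auto');
ga('set', 'contentGroup1', '/Men/Shirts');  // assign this page to the group in content grouping slot 1
ga('send', 'pageview');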

Watch the video below to learn more:


Be sure to visit our Help Center to learn how to get started with Content Grouping today.

Happy Analyzing!

Posted by Russell Ketchum, Google Analytics Team

Monday, 16 December 2013

Groundbreaking simulations by Google Exacycle Visiting Faculty



In April 2011, we announced the Google Exacycle for Visiting Faculty, a new academic research awards program donating one billion core-hours of computational capacity to researchers. The Exacycle project enables massive parallelism for doing science in the cloud, and inspired multiple proposals aiming to take advantage of cloud scale. Today, we would like to share some exciting results from a project built on Google’s infrastructure.

Google Research Scientist Kai Kohlhoff, in collaboration with Stanford University and Google engineers, investigated how an important signalling protein in the membrane of human cells can switch off and on by changing its three-dimensional structure following a sequence of local conformational changes. This research can help to better understand the effects of certain chemical compounds on the human body and assist future development of more potent drug molecules with fewer side effects.

The protein, known as the beta-2 adrenergic receptor, is a G protein-coupled receptor (GPCR), a primary drug target that plays a role in several debilitating health conditions. These include asthma, type-2 diabetes, obesity, and hypertension. The receptor and its close GPCR relatives bind to many familiar molecules, such as epinephrine, beta-blockers, and caffeine. Understanding their structure, function, and the underlying dynamics during binding and activation increases our chances to decode the causes and mechanisms of diseases.

To gain insights into the receptor’s dynamics, Kai performed detailed molecular simulations using hundreds of millions of core hours on Google’s infrastructure, generating hundreds of terabytes of valuable molecular dynamics data. The Exacycle program enabled the realization of simulations with longer sampling and higher accuracy than previous experiments, exposing the complex processes taking place on the nanoscale during activation of this biological switch.

The paper summarizing the work of Kai and his collaborators is featured on the January cover of Nature Chemistry, with artwork by Google R&D UX Creative Lead Thor Lewis, to be published on December 17, 2013. The online version of the paper was published on the journal’s website today.

We are extremely pleased with the results of this program. We look forward to seeing this research continue to develop.

Friday, 13 December 2013

Using Universal Analytics to Measure Movement

The following is a guest post by Benjamin Mangold, Director of Digital & Analytics at Loves Data, a Google Analytics Certified Partner.

Universal Analytics includes new JavaScript tracking code for websites and new mobile SDKs. But Universal Analytics is a lot more than that - it also gives us the Measurement Protocol, which allows us to send data to Google Analytics without the need to use the tracking code or SDKs.

Earlier this year, our team at Loves Data used Universal Analytics and the Measurement Protocol to measure our caffeine consumption and tie it to the team’s productivity. Our next challenge: getting our team’s movement into Google Analytics. With the help of an Xbox Kinect, movement-recognition software, and of course the Measurement Protocol, we started getting creative!
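Under the hood, a Measurement Protocol hit is just an HTTP POST to www.google-analytics.com/collect. Here is a minimal Node.js sketch of the kind of call a Kinect integration could make each time a move is recognized (the property ID, client ID, and event names are placeholder assumptions, not our production code):

// Send a Measurement Protocol event hit for a recognized dance move.
var https = require('https');
var querystring = require('querystring');

function sendMoveEvent(clientId, moveName) {
  var payload = querystring.stringify({
    v: 1,               // protocol version
    tid: 'UA-XXXXX-Y',  // tracking ID (placeholder)
    cid: clientId,      // anonymous client ID for this person
    t: 'event',         // hit type
    ec: 'movement',     // event category (assumed naming)
    ea: 'dance-move',   // event action
    el: moveName        // event label, e.g. 'moonwalk'
  });
  var req = https.request({
    host: 'www.google-analytics.com',
    path: '/collect',
    method: 'POST',
    headers: {'Content-Type': 'application/x-www-form-urlencoded'}
  });
  req.end(payload);  // write the body and fire the hit
}

sendMoveEvent('35009a79-1a05-49d7-b876-2b884d0f825b', 'moonwalk');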



Business Applications and Analysis Opportunities

Measuring movement is fun, but although we can measure total and unique dance moves, you might be wondering about the business applications. This is where the power of measuring offline interactions really starts to show. The Measurement Protocol enables business applications such as:
  • Measuring in-store purchases and tying purchases to your online data
  • Understanding behaviour across any connected device, including gaming consoles
  • Comparing offline billboard impressions to online display ad impressions
  • Getting insights into your audience’s online to offline journey
Once you have tied your online and offline data together, you can begin to analyze the full impact of your different touch points. For example, if you are collecting contact details online, you can use Google Analytics to then understand who actually converts offline, whether this conversion is attending an information session or making a purchase at a cash register. The analysis possibilities made available by the Measurement Protocol are truly amazing.
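As a concrete sketch of the first bullet above: a point-of-sale system could post a transaction hit with the same client ID that was captured online, so the in-store purchase joins that customer's online sessions in your reports. Reusing the https and querystring modules from the sketch above (all IDs and values are again placeholders):

function sendInStorePurchase(onlineClientId, transactionId, revenue) {
  var payload = querystring.stringify({
    v: 1,
    tid: 'UA-XXXXX-Y',    // tracking ID (placeholder)
    cid: onlineClientId,  // client ID captured during the online visit
    t: 'transaction',     // ecommerce transaction hit type
    ti: transactionId,    // transaction ID from the register
    tr: revenue,          // transaction revenue
    cu: 'USD'             // currency code
  });
  var req = https.request({
    host: 'www.google-analytics.com',
    path: '/collect',
    method: 'POST',
    headers: {'Content-Type': 'application/x-www-form-urlencoded'}
  });
  req.end(payload);
}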

Wednesday, 11 December 2013

How attribution modeling increases profit for Baby Supermall

"Attribution modeling changes everything."

That's what Joe Meier of Baby Supermall told us recently.  If you're looking for alphabets or monkeys on your new baby bedding, Baby Supermall is the place to be. But those products have an unusually long buying cycle. "Our typical customer is a pregnant mother-to-be," says Meier. "They have months to make a decision."

In this video, Meier describes how Google Analytics’ attribution modeling tool let them measure the impact of different marketing touch points before customers finally made a purchase, so they could figure out which of their marketing activities led all those moms (and dads) to visit the Baby Supermall site. It also saved him from the monster 80-megabyte spreadsheets he'd been building as he tried to figure those patterns out manually.

Result? “We’re spending our money more efficiently than we were before. We know what we’re getting for it,” says Meier. By linking their Google Analytics and AdWords accounts, Baby Supermall was able to see the impact of different keywords and optimize their AdWords ads, bringing in “tens of thousands of dollars in additional sales every week.”

He calls the results "groundbreaking." Check out the video:


(PS: Don't miss their site if you happen to like very cute baby bedding.)

Happy Analyzing!

Posted by: Suzanne Mumford, Google Analytics Marketing

Googler Moti Yung elected as 2013 ACM Fellow



Yesterday, the Association for Computing Machinery (ACM) released the list of those who have been elected ACM Fellows in 2013. I am excited to announce that Google Research Scientist Moti Yung is among the distinguished individuals receiving this honor.

Moti was chosen for his contributions to computer science and cryptography that have provided fundamental knowledge to the field of computing security. We are proud of the breadth and depth of his contributions, and believe they serve as motivation for computer scientists worldwide.

On behalf of Google, I congratulate our colleague, who joins the 17 ACM Fellows and other professional society awardees at Google, in exemplifying our extraordinarily talented people. You can read a more detailed summary of Moti’s accomplishments below, including the official citation from ACM.

Dr. Moti Yung: Research Scientist
For contributions to cryptography and its use in security and privacy of systems

Moti has made key contributions to several areas of cryptography including (but not limited to!) secure group communication, digital signatures, traitor tracing, threshold cryptosystems and zero-knowledge proofs. Moti's work often seeds a new area in theoretical cryptography while also finding broad application. For example, in 1992, Moti co-developed a protocol by which users can jointly compute a group key using their own private information that is secure against coalitions of rogue users. This work led to the growth of the broadcast encryption research area and has applications to pay-TV, network communication and sensor networks.
Moti is also a long-time leader of the security and privacy research communities, having mentored many of the leading researchers in the field, and serving on numerous program committees. A prolific author, Moti routinely publishes 10+ papers a year, and has been a key contributor to principled and consistent anonymization practices and data protection at Google.

Thursday, 5 December 2013

Fairmont Gets Deeper Understanding of Social Interactions for Real Results

How do you improve social messaging for some of the world's most prestigious hotels? If you're Fairmont Raffles Hotels, you turn to Google Analytics. 

Fairmont is famous for its nearly 100 global luxury hotels, from the original Raffles Hotel in Singapore to the grand Empress Fairmont in Victoria, B.C.  The variety of the properties can make social impact tricky to measure, says Barbara Pezzi, Director of Analytics & SEO.

Charmingly direct, Pezzi says her team tried other social media analytics tools and found that "the metrics were really lame. Number of likes and retweets — that didn't really tell us anything." They wanted to know exactly who they were attracting and how.

Once the Fairmont team began using Google Analytics, they were able to see their audiences more clearly and tailor messages to fit. The results were impressive: a doubling of bookings and revenue from social media. 

Here's the whole story:



"It was a big revelation for everyone" — when it comes to analytics, those are the magic words.

Learn more about Google Analytics and Google Analytics Premium here.

Posted by Suzanne Mumford, Google Analytics Marketing

Tuesday, 3 December 2013

Google Analytics Dashboards for Quick Insights

The following is a guest post by Benjamin Mangold, Director of Digital & Analytics at Loves Data, a Google Analytics Certified Partner.

Creating custom Google Analytics Dashboards is a great way to monitor performance and get quick insights into the success of key aspects of your websites and mobile apps. You can create dashboards to meet your particular needs, from understanding marketing campaign performances, to content engagement levels, and even trends relating to goal conversions and e-commerce transactions.

Sample custom dashboard (click for full-size image)

The dashboards you create will depend on who is going to use them. You will want the dashboard used by your marketing manager to be different to the dashboard that is seen by your technical team - and different again for your CEO. You should always tie dashboards to the types of questions the particular person or stakeholder is going to ask. Basing your dashboards on particular roles or job functions within your organisation is a good place to start thinking about the type of dashboards you will want to design.
Dashboard Widgets

Each dashboard is made up of widgets, which are pieces of information or data from your Google Analytics reports. There are a number of different widgets, and the ones you add to your dashboard will depend on the type of trends and insights you want to provide.


Metric widgets present a single piece of data on your dashboard along with a small sparkline.

Timeline widgets give a detailed sparkline showing trends by day. This widget allows you to show a single metric or compare two metrics.

Geomap widgets allow you to display a map within your dashboard. You can show the location of your visitors and even compare conversion rates or engagement by geographic location.

Table widgets display a table that combines information (a dimension) with up to two metrics.

Pie widgets present a pie or doughnut chart and are useful for visual comparisons.

Bar widgets are also useful for presenting comparisons. This widget allows you to pivot by an additional dimension and switch between horizontal and vertical layout.

In most cases you will want to use the ‘standard’ widgets. These present data that has been processed into the standard reports. You can also include ‘real-time’ widgets, but it is important to know that these will not be included if you are exporting or scheduling the dashboard.

Widget Filters

Filters can be applied to widgets within your dashboard, allowing you to further define what is presented. For example, if you want a metric widget to show the total number of visits from your Google AdWords campaigns, you could add the following filter, which only includes visits where the source is ‘google’ and the medium is ‘cpc’.


Sharing Dashboards

Once you have created your custom dashboards you can keep them private, share them with everybody who has access to the reporting view, or even share them with the wider Google Analytics community. The Google Analytics Solutions Gallery is a crowdsourced collection of customizations and includes a number of great dashboards that you can add to your account.

Have a great dashboard? Want to win prizes? Loves Data, a Google Analytics Certified Partner, is running a competition for the best Google Analytics dashboard. Judges include Google’s own Justin Cutroni, Daniel Waisberg and Adam Singer. The competition closes on December 31, 2013, and winners will be announced in late January 2014.

Posted by Benjamin Mangold, Google Analytics Certified Partner

Free Language Lessons for Computers



Not everything that can be counted counts.
Not everything that counts can be counted.

50,000 relations from Wikipedia. 100,000 feature vectors from YouTube videos. 1.8 million historical infoboxes. 40 million entities derived from webpages. 11 billion Freebase entities in 800 million web documents. 350 billion words’ worth from books analyzed for syntax.

These are all datasets that Google Research has shared with researchers around the world over the last year.

But data by itself doesn’t mean much. Data is only valuable in the right context, and only if it leads to increased knowledge. Labeled data is critical to train and evaluate machine-learned systems in many arenas, improving systems that can increase our ability to understand the world. Advances in natural language understanding, information retrieval, information extraction, computer vision, etc. can help us tell stories, mine for valuable insights, or visualize information in beautiful and compelling ways.

That’s why we are pleased to be able to release sets of labeled data from various domains and with various annotations, some automatic and some manual. Our hope is that the research community will use these datasets in ways both straightforward and surprising, to improve systems for annotation or understanding, and perhaps launch new efforts we haven’t thought of.

Here’s a listing of the major datasets we’ve released in the last year; to stay up to date, you can subscribe to our mailing list. Please tell us what you’ve managed to accomplish, or send us pointers to papers that use this data. We want to see what the research world can do with what we’ve created.

50,000 Lessons on How to Read: a Relation Extraction Corpus

What is it: A human-judged dataset of two relations involving public figures on Wikipedia: about 10,000 examples of “place of birth” and 40,000 examples of “attended or graduated from an institution.”
Where can I find it: https://code.google.com/p/relation-extraction-corpus/
I want to know more: Here’s a handy blog post with a broader explanation, descriptions and examples of the data, and plenty of links to learn more.

11 Billion Clues in 800 Million Documents

What is it: We took the ClueWeb corpora and automatically labeled concepts and entities with Freebase concept IDs, an example of entity resolution. This dataset is huge: nearly 800 million web pages.
Where can I find it: We released two corpora: ClueWeb09 FACC and ClueWeb12 FACC.
I want to know more: We described the process and results in a recent blog post.

Features Extracted From YouTube Videos for Multiview Learning

What is it: Multiple feature families from a set of public YouTube videos of games. The videos are labeled with one of 30 categories, and each has an associated set of visual, auditory, and textual features.
Where can I find it: The data and more information can be obtained from the UCI machine learning repository (multiview video dataset), or from Google’s repository.
I want to know more: Read more about the data and uses for it here.

40 Million Entities in Context

What is it: A disambiguation set consisting of pointers to 10 million web pages with 40 million entities that have links to Wikipedia. This is another entity resolution corpus, since the links can be used to disambiguate the mentions, but unlike the ClueWeb example above, the links are inserted by the web page authors and can therefore be considered human annotation.
Where can I find it: Here’s the WikiLinks corpus, and tools to help use this data can be found on our partner’s page: UMass Wiki-links.
I want to know more: Other disambiguation sets, data formats, ideas for uses of this data, and more can be found at our blog post announcing the release.

Distributing the Edit History of Wikipedia Infoboxes

What is it: The edit history of 1.8 million infoboxes in Wikipedia pages in one handy resource. Attributes on Wikipedia change over time, and some of them change more than others. Understanding attribute change is important for extracting accurate and useful information from Wikipedia.
Where can I find it: Download from Google or from Wikimedia Deutschland.
I want to know more: We posted a detailed look at the data, the process for gathering it, and where to find it. You can also read a paper we published on the release.
Note the change in the capital of Palau.


Syntactic Ngrams over Time

What is it: We automatically syntactically analyzed 350 billion words from the 3.5 million English-language books in Google Books, then collated and released a set of fragments -- billions of unique tree fragments with counts, sorted into types. The underlying corpus is the same one that underlies the recently updated Google Ngram Viewer.
Where can I find it: http://commondatastorage.googleapis.com/books/syntactic-ngrams/index.html
I want to know more: We discussed the nature of dependency parses and described the data and release in a blog post. We also published a paper about the release.

Dictionaries for linking Text, Entities, and Ideas

What is it: A large database of 175 million strings paired with 7.5 million concepts, annotated with counts, which were mined from Wikipedia. The concepts in this case are Wikipedia articles, and the strings are anchor-text spans that link to the concepts in question.
Where can I find it: http://nlp.stanford.edu/pubs/crosswikis-data.tar.bz2
I want to know more: A description of the data, several examples, and ideas for uses for it can be found in a blog post or in the associated paper.

Other datasets

Not every release had its own blog post describing it. Here are some other releases:

Wednesday, 27 November 2013

Full Customer Journey: Three Lenses of Measurement

My son is a LEGO enthusiast, and even though I don’t build that often, I am usually involved in the acquisition process of LEGO sets or digital goods. To name a few: we build with bricks, plan with their software, play with their apps, buy through their website, and consume content on their social profiles. Quite a lot of touch points with their brand, and that’s not all!

For my part, I am very curious about how they measure and optimize their customer experiences, so I like to use them as an example of how challenging the measurement world has become. The way we look at this challenge at Google is through three lenses of measurement:
  1. Holistic Measurement: how can we understand our customers using multiple devices through multiple touch points? 
  2. Full Credit Measurement: how can we attribute the credit of bringing new and returning customers to marketing campaigns?
  3. Active Measurement: how can we make sure that data is accessible, accurate and comprehensive?
This is the kind of challenge that we try to solve, and it drives our thinking. Paul Muret, VP of Engineering at Google, discussed these three challenges, and how we should face them, in his article in the Harvard Business Review. Here is an excerpt:
This is creating tremendous opportunities for business teams to engage customers throughout their new and more complex buying journeys. But before you can take advantage, you have to understand that journey by measuring and analyzing the data in new ways that value these moments appropriately. The payoff is better alignment between marketing messages and consumers’ intent during their paths to purchase - and ultimately, better business results.
Below is a presentation I delivered in Dublin at a Google Think event earlier this year, discussing each of the challenges in depth.


Tuesday, 26 November 2013

SUPERWEEK 2014: January 21-23, Hungary

The following is a guest post contributed by Zoltán Bánóczy, founder of AALL Ltd. and the SUPERWEEK Conference series.

In the fourth week of the New Year, many of us will enjoy the gorgeous view pictured below as the actual backdrop for one of the year’s most exciting analytics conferences. Speakers hailing from Jerusalem to Copenhagen to San Francisco to Ahmedabad promise to deliver insightful talks about a wide range of topics surrounding the modern digital industry.


The three-day SUPERWEEK 2014 begins on January 21st, located on the beautiful mountaintop of Galyatető, at the highest-lying four-star hotel in Hungary. Fly to Budapest easily from across Europe and rely on our SUPERBUS shuttle buses, available as an option with your package. Conference goers can expect advanced talks at the sessions, data-based opinions shared during the panels, and Google Tag Manager deep dives - some say even deeper than the Mariana Trench.

In his keynote, titled “Driving an Obsession with Actionable Analytics,” Avinash Kaushik will share a collection of strategies to help you ensure that the focus of your analytics effort is on taking action, not data regurgitation. Caleb Whitmore (Analytics Pros) will provide hands-on training, and conference goers can complete the GAIQ exam right afterwards. Excitingly, we also get the opportunity to ask Avinash about life itself in his Q&A session, “Search, Social, Analytics, Life: AMA (ask me anything)”.

Speakers include industry thought leaders, Top Contributors to the AdWords forums and many Google Analytics Certified Partner companies - all from about 10 countries.


We’ll try to cover the latest in the industry: predictive analytics (Ravi Pathak, India), Universal Analytics & Google Tag Manager implementations (Yehoshua Coren - Israel, Doug Hall - UK, and Julien Coquet - France), PPC / display advertising (Jacob Kildebogaard - Denmark and Oliver Schiffers - Germany), A/B testing, privacy (Aurélie Pols - Spain) and even analytics expert “The Professor”, Phil Pearce from the UK.

Join us for the emblematic, traditional evening campfire, built from logs over two meters long, where a wide range of (mulled) wine and a mellow mood will be served.

Keep up to date on the agenda and other programmes by following us at @superweek2014 (or #spwk during the event) on Twitter.

Posted by Zoltán Bánóczy, Google Analytics Certified Partner

Released Data Set: Features Extracted From YouTube Videos for Multiview Learning


“If it looks like a duck, swims like a duck, and quacks like a duck, then it probably is a duck.”

Performance of machine learning algorithms, supervised or unsupervised, is often significantly enhanced when a variety of feature families, or multiple views of the data, are available. For example, in the case of web pages, one feature family can be based on the words appearing on the page, and another can be based on the URLs and related connectivity properties. Similarly, videos contain both audio and visual signals where in turn each modality is analyzed in a variety of ways. For instance, the visual stream can be analyzed based on the color and edge distribution, texture, motion, object types, and so on. YouTube videos are also associated with textual information (title, tags, comments, etc.). Each feature family complements others in providing predictive signals to accomplish a prediction or classification task, for example, in automatically classifying videos into subject areas such as sports, music, comedy, games, and so on.

We have released a dataset of over 100k feature vectors extracted from public YouTube videos. These videos are labeled with one of 30 classes, each class corresponding to a video game (with some amount of class noise): each video shows gameplay of a video game, for example for teaching purposes. Each instance (video) is described by three feature families (textual, visual, and auditory), and each family is broken into subfamilies yielding up to 13 feature types per instance. Neither video identities nor class identities are released.

We hope that this dataset will be valuable for research on a variety of multiview related machine learning topics, including multiview clustering, co-training, active learning, classifier fusion and ensembles.

The data and more information can be obtained from the UCI machine learning repository (multiview video dataset), or from here.

Monday, 25 November 2013

Intuit Team Crunches its Own Numbers with Google Analytics Premium

Intuit products like Quicken and TurboTax have been putting the power of numbers in the hands of users since 1983.

Which is why we're so pleased that when Intuit wanted to boost the power of analytics for one of their own teams recently, they turned to Google Analytics Premium. The details are in our new case study, which you'll find here.

The study has the full story of Intuit's Channel Marketing Team, which now uses Google Analytics Premium to measure data for multiple business segments. Once they began using it, Intuit discovered that they had been under-reporting the success of their SEO traffic by at least 27% and conversions by up to 200%.  

Those are exactly the kind of vital numbers that Google Analytics Premium is designed to provide.  

Intuit used Blast Analytics and Marketing, a Google Analytics Certified Partner, to build out their solution, which was configured to match Intuit's own organizational structure. That structure helped Intuit "democratize" its data so that now anyone on the team can get what they need right away, in real time. Instead of the two days it used to take to request and deliver reports, it takes two hours or less.

Ken Wach, Vice President of Marketing at Intuit, put it simply: “Google Analytics Premium increased the speed and accuracy of actionable data that drives our business.”


Post by Suzanne Mumford, Google Analytics Marketing

The MiniZinc Challenge



Constraint Programming is a style of problem solving in which the properties of a solution are first identified, and a large space of candidate solutions is then searched to find the best. Good constraint programming depends on modeling the problem well and on searching effectively. Poor representations or slow search techniques can make the difference between finding a good solution and finding no solution at all.

One example of constraint programming is scheduling: for instance, determining a schedule for a conference where there are 30 talks (that’s one constraint), only eight rooms to hold them in (that’s another constraint), and some talks can’t overlap (more constraints).
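To make that concrete, here is a toy backtracking search in JavaScript for exactly this kind of scheduling problem. It is only a sketch of the idea (real solvers such as or-tools add constraint propagation, learning, and far smarter search strategies):

// Assign each talk a (room, slot) pair such that no room is double-booked
// and talks listed as conflicting never share a time slot.
function schedule(talks, numRooms, numSlots, conflicts) {
  var assignment = {};  // talk name -> {room, slot}

  function violates(talk, room, slot) {
    return talks.some(function (other) {
      var a = assignment[other];
      if (!a) return false;
      if (a.room === room && a.slot === slot) return true;  // room clash
      return a.slot === slot &&
             (conflicts[talk] || []).indexOf(other) !== -1;  // conflicting talks overlap
    });
  }

  function solve(i) {
    if (i === talks.length) return true;  // every talk placed
    for (var r = 0; r < numRooms; r++) {
      for (var s = 0; s < numSlots; s++) {
        if (!violates(talks[i], r, s)) {
          assignment[talks[i]] = {room: r, slot: s};
          if (solve(i + 1)) return true;
          delete assignment[talks[i]];  // backtrack and try elsewhere
        }
      }
    }
    return false;  // no feasible placement under current choices
  }

  return solve(0) ? assignment : null;
}

// Five talks, two rooms, three slots; talks A and B must not overlap.
console.log(schedule(['A', 'B', 'C', 'D', 'E'], 2, 3, {A: ['B'], B: ['A']}));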

Every year, some of the world’s top constraint programming researchers compete for medals in the MiniZinc challenge. Problems range from scheduling to vehicle routing to program verification and frequency allocation.

Google’s open source solver, or-tools, took two gold medals and two silver medals. The gold medals were in parallel and portfolio search, and the silver medals were in fixed and free search. Google’s success was due in part to integrating a SAT solver to handle boolean constraints, and a new presolve phase inherited from integer programming.

Laurent Perron, a member of Google’s Optimization team and a lead contributor to or-tools, noted that every year brings fresh techniques to the competition: “One of the big surprises this year was the success of lazy-clause generation, which combines techniques from the SAT and constraint programming communities.”

If you’re interested in learning more about constraint programming, you can start at the Wikipedia page, or have a look at or-tools.

The full list of winners is available here.

Friday, 22 November 2013

New Research Challenges in Language Understanding



We held the first global Language Understanding and Knowledge Discovery Focused Faculty Workshop in Nanjing, China, on November 14-15, 2013. Thirty-four faculty members joined the workshop, arriving from 10 countries and regions across APAC, EMEA and the US. Googlers from Research, Engineering and University Relations/University Programs also attended the event.

The 2-day workshop included keynote talks, panel discussions and break-out sessions [agenda]. It was an engaging and productive workshop, and we saw lots of positive interactions among the attendees. The workshop encouraged communication between Google and faculty around the world working in these areas.

Research in text mining continues to explore open questions relating to entity annotation, relation extraction, and more. The workshop’s goal was to brainstorm and discuss relevant topics to further investigate these areas. Ultimately, this research should help provide users with search results that are much more relevant to them.

At the end of the workshop, participants identified four topics representing challenges and opportunities for further exploration in Language Understanding and Knowledge Discovery:

  • Knowledge representation, integration, and maintenance
  • Efficient and scalable infrastructure and algorithms for inferencing
  • Presentation and explanation of knowledge
  • Multilingual computation

Going forward, Google will be collaborating with academic researchers on a position paper related to these topics. We also welcome faculty interested in contributing to further research in this area to submit a proposal to the Faculty Research Awards program. Faculty Research Awards are one-year grants to researchers working in areas of mutual interest.

The faculty attendees responded positively to the focused workshop format, as it allowed time to go in depth into important and timely research questions. Encouraged by their feedback, we are considering similar workshops on other topics in the future.

Thursday, 21 November 2013

New Secondary Dimensions Provide Deeper Insights Into Your Users

Today we’ve added many new secondary dimensions to standard reports, including the much-asked-for Custom Dimensions.



Custom Dimensions is a new Universal Analytics feature that allows you to bring custom business data into Google Analytics. For example, a custom dimension can be used to collect friendly page names, whether the user is logged in, or a user tier (like Gold, Platinum, or Diamond).
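For instance, a site could attach the user tier to a hit when it sends a pageview. A quick analytics.js sketch (the slot number and values are assumptions; dimension slots are configured per property in the admin settings):

ga('create', 'UA-XXXXX-Y', 'auto');
ga('send', 'pageview', {'dimension1': 'Gold'});  // record this visitor's tier alongside the hit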

By using Custom Dimensions in secondary dimensions, you can now refine standard reports to obtain deeper insights.




In the report above, Direct Traffic delivers the most traffic, but these are Gold users (lower value). At the same time, Google Search delivers the third and fourth most site traffic, and these are Diamond users (high value). Therefore, the data shows this site should continue to invest in Google Search to attract more high-value users.

The new data in secondary dimensions gives analysts a powerful new tool. We’d love to hear about any new insights in the comments.

Posted by Nick Mihailovski, Product Manager

Wednesday, 20 November 2013

Insights on the fly: Introducing executive reporting from DoubleClick Search

The following post originally appeared on the DoubleClick Advertiser Blog.

Search marketers managing multiple campaigns across multiple accounts have to visualize their data in many different ways and tailor reporting for each group of stakeholders. Often, this means spending time pulling and aggregating reports, building macro-enabled spreadsheets, and wrangling your data into a specific format for a specific presentation -- only to do it all over again in a slightly different way the next time around. 

DoubleClick Search believes in making search marketing faster -- and we’ve invested in time-saving features like bulk editing enhancements, new scheduling options, and automated rules. Today, we’re excited to announce executive reporting, a fundamentally new way to report on and share your search campaign data.  

With executive reporting, you can quickly get to the insights you need. Take the data from all your search campaigns, segment as needed, present it in an easily consumable visual format, and share with team members and stakeholders -- all within the UI, without spending hours downloading, reconciling, and updating spreadsheets.

Click image for full-sized version

As we designed executive reporting, we worked closely with our clients to ensure our solution was built to address the unique needs of search marketers, agency account managers, and executives. Matt Grebow, Sr. Manager, Search Marketing at TSA, who participated heavily in our feedback sessions, shared his needs for richer export fidelity with the engineering team.

“Most reporting platforms let you export data in a raw format, but this means extensive formatting in Excel and a lot of coding. DoubleClick Search Executive Reporting is flexible enough to use across clients with different goals. We can create templates on the fly and export reports in a client-ready format.”

Three ways to get started with executive reporting
  • Daily account management and stakeholder communication: As an account manager, you can easily pick the subset of data and the visualizations you need for each set of stakeholders. The reports will stay up to date, and you can have them ready for meetings, or download and share through email at a moment’s notice -- saving you time for strategy.
  • High-level team management and oversight: As a business leader, you can see an overview of your entire business in one place. If you’re needed for an escalation, you can quickly pull reports to understand account health and spot issues -- so you’re never unprepared.
  • Market insights for competitive advantage: Another advantage of seeing your entire business at a glance: if you manage a large volume of accounts, you can quickly analyze market-level data and see which account or campaigns are underperforming. Then, dig in to understand why and get them back on track.
Keep an eye on the blog next week for a follow-up “Success with DS” post on how to get the most out of executive reporting. In the meantime, give the new reports a try and let your account team know what you think. If you don’t see the ‘Executive Reports’ tab in the DoubleClick Search interface, ask your account team to enable it for you.

Over the coming months, we’ll continue to invest in easy, flexible reporting options for DoubleClick Search. If you have a data warehouse, business intelligence tool, or visualization software and you’re interested in seeing your search data alongside other metrics for reporting purposes, check out our reporting API, currently in open whitelist.

Posted by the DoubleClick Team

Tuesday, 19 November 2013

Optimizing AdSense Revenue Using Google Analytics

Recently Google Analytics launched two important new capabilities for its AdSense integration: AdSense Exits reports and AdSense Revenue as an experiment objective. They both come as great additions for websites that use AdSense for monetization. In this post I will go over the AdSense-Analytics integration and how it can be used to optimize AdSense revenue.

Integrating AdSense and Google Analytics

Before going further into the wonders of the Analytics-AdSense marriage, you should first make sure that your accounts are linked properly. Here is how to do it: after logging into Google Analytics, follow the steps in the screenshot below (Admin => AdSense Linking => Link Accounts):

AdSense and Analytics Integration (click for full size)

You will be sent to your AdSense account in order to confirm the linking and then you will be sent back to Google Analytics to choose which profiles should include this data. If you have any problems or additional questions, take a look at the AdSense Help Center. After the integration is complete the following metrics will be available on your Google Analytics account:
  • AdSense revenue: revenue generated by AdSense ads.
  • Ads clicked: the number of times AdSense ads were clicked.
  • AdSense CTR (click-through rate): the percentage of page impressions that resulted in a click on an ad.
  • AdSense eCPM: AdSense revenue per 1,000 page impressions (see the worked example below the list).
  • AdSense ads viewed: the number of ads viewed.
  • AdSense page impressions: the number of pageviews during which an ad was displayed.
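To make the eCPM line concrete, the metric is simply revenue normalized per thousand page impressions:

AdSense eCPM = (AdSense revenue ÷ AdSense page impressions) × 1,000

So, with made-up figures, $12 of revenue across 4,800 page impressions gives an eCPM of ($12 ÷ 4,800) × 1,000 = $2.50.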

AdSense Reports On Google Analytics

Currently, there are 3 out-of-the-box AdSense reports available on Analytics: Pages, Referrers and Exits. You can find them here (direct link to report).

1. AdSense Pages

This report provides information about which pages contributed most to AdSense revenue. It will show each of the pages on the website and how well they performed in terms of AdSense. For each page in the website that contains an AdSense unit we will be able to analyze the following metrics: AdSense revenue, AdSense ads clicked, AdSense CTR, AdSense eCPM, AdSense ads viewed and AdSense page impressions. 

This report provides an interesting view of which page performed best, and it can be used to optimize website content. For example, if you find that posts about celebrities generate more revenue than posts about soccer, you might consider writing more about celebrities (if your main objective is to make money on AdSense.)

2. AdSense Referrers

This report provides information about the performance of domains that referred visitors who generated AdSense revenue. This information is extremely valuable; however, I suggest using a different report, since it provides more in-depth information: “All Traffic”. 

The AdSense Referrers report only displays information about websites that generated AdSense revenue; it does not provide information on other types of traffic sources and campaigns. For this reason, I believe the All Traffic report presents a more complete view. To find the report, go to this page (direct link to report) and click on the AdSense tab just above the chart.

3. AdSense Exits

The AdSense Exits report shows the number of sessions that ended due to a user clicking on an AdSense ad. This is an interesting metric, as it can show which pages have a high "conversion rate": the ratio between visits to a page and visits that ended with a click on an AdSense unit on that page. If your monetization comes through AdSense, this report gives you just that: an AdSense conversion rate per page.

Optimizing AdSense revenue using Google Analytics

Below is an example of how to use the integration from my Analytics for Publishers eBook. Most websites work with templates and each template may have different AdSense placements; this means that an important analysis would be to compare performance by template (or by category) rather than by page. 

In order to analyze template performance, we will need to create one segment per template. If you want to learn more about creating Segments, check this Help Center article. For example, let’s suppose your website has the following page templates:
  • Analytics pages with URLs structured as example.com/analytics/...
  • Testing pages with URLs structured as example.com/testing/...
  • Targeting pages with URLs structured as example.com/targeting/...
In this case you would create three segments using the dimension Page, each containing its unique pattern: /analytics/ for analytics pages, /testing/ for testing pages, and /targeting/ for targeting pages. Below is an example of how the segment would look for the analytics pages: 

Analyzing template performance using segments (click for full size) 

After creating the segments for all three templates, you will be able to choose all of them in the top-left corner of the screen (just above the chart, see bubble #1 above) to see a comparison between them. Below is a screenshot showing how such a comparison would look:

Table comparison metrics for different visitor segments (click for full size)
In the table above we are able to compare pages by all available metrics. For example, we can see that while the Analytics section has higher revenue, this is related to its number of impressions, which is also significantly higher. When we analyze further, we see that the Testing and Targeting sections have good potential, with the same CTR but significantly higher AdSense eCPM. Based on these metrics we can understand which templates and content types are the most effective.

As mentioned above, once you find out which pages are performing well and which pages are not, you can use Content Experiments to optimize them. Here is a Content Experiments guide.

Closing Thoughts

Here are a few takeaways for you to start optimizing today!
  1. Understand which content type and subject generates the highest revenue and create content based on this data.
  2. Understand which page templates bring the best results by using advanced segments.
  3. Analyze AdSense performance to learn which segments have a good CTR; this might bring insight into which audience to target.

Unique Strategies for Scaling Teacher Professional Development



Research shows that professional development for educators has a direct, positive impact on students, so it’s no wonder that institutions are eager to explore creative ways to enhance professional development for K-12 teachers. Open source MOOC platforms, such as Course Builder, offer the flexibility to extend the reach of standard curriculum; recently, several courses have launched that demonstrate new and creative applications of MOOCs. With their wide reach, participant engagement, and rich content, MOOCs that offer professional development opportunities for teachers bring flexibility and accessibility to an important area.

This summer, the ScratchEd team at Harvard University launched the Creative Computing MOOC, a six-week, self-paced workshop focused on building computational thinking skills in the classroom. As a MOOC, the course had 2,600 participants, who created more than 4,700 Scratch projects and engaged in 3,500 forum discussions, compared to the “in-person” class held last year, which reached only 50 educators.

Other creative uses of Course Builder for educator professional development come from National Geographic and Annenberg Learner, who joined forces to develop Water: The Essential Resource, a course developed around California’s Education and Environment Initiative. The Friday Institute’s MOOC, Digital Learning Transitions, focused on the benefits of utilizing educational technology and reached educators across 50 states and 68 countries worldwide. The course design included embedded peer support, project-based learning, and case studies; a post-course survey showed an overwhelming majority of respondents “were able to personalize their own learning experiences” in an “engaging, easy to navigate” curriculum and greatly appreciated the 24/7 access to materials.

In addition to participant surveys, course authors using the Course Builder platform are able to conduct deeper analysis via web analytics and course data to assess course effectiveness and make improvements for future courses.

New opportunities to experience professional development MOOCs are rapidly emerging; the University of Adelaide recently announced their Digital Technology course to provide professional development for primary school teachers on the new Australian curriculum, the Google in Education team just launched a suite of courses for teachers using Google technologies, and the Friday Institute course that aligns with the U.S.-based Common Core State Standards is now available.

We’re excited about the innovative approaches underway and the positive impact they can have for students and teachers around the world. We also look forward to seeing new, creative applications of MOOC platforms in uncharted territory.

Monday, 18 November 2013

Learning what moves the needle most with Data-Driven Attribution

"Tremendously useful."  That's what Chris Bawden of the TechSmith Corporation says about Data-Driven Attribution.

What is Data-Driven Attribution? Well, in August we launched a new leap in technology that uses algorithmic models and reports to help take the guesswork out of attribution. And it's available now to Google Analytics Premium customers around the world.

Data-Driven Attribution uses statistical probabilities and economic algorithms to analyze each customer's journey in a new way. You define the results that count — sales, sign-ups, or whatever matters to you — and the model assigns value to marketing touchpoints automatically, comparing actions and probabilities to show you which digital channels and keywords move the needle most.

The bottom line: better returns on your marketing and ad spend. 

We checked in with companies using DDA and results have been strong:
  • "Data Driven Attribution really showed us where we were driving conversions," says Will Lin, Senior Director of Global eMarketing for HomeAway. They saw a 23% increase in attributed conversions for their test keywords after making changes suggested by Data Driven Attribution. Download case study.
  • TechSmith Corporation saw a 19% increase in attributed conversions under the Data Driven Attribution model. "It uncovered growth potential we would have not seen otherwise," reports Nicole Remington, their Search Marketing Manager. Download case study.
  • And the digital analytics firm MaassMedia saw display leads increase 10% while costs per lead remained flat. "We now have a much more accurate measure of how display impacts our business," one of their clients told them. Download case study.
In short, the early returns for DDA users have been strong. Some of the key advantages of this model:

Algorithmic and automatic: The model distributes credit across marketing channels scientifically, based on success metrics you define. 

Transparent: Our unique Model Explorer gives you full insight into how marketing touch points are valued — no “black box” methodology.

Actionable: Detailed insights into both converting and non-converting paths offer clear guidance for your marketing decisions.

Cross-platform: DDA is deeply integrated with other Google products like AdWords, the Google Display Network, and YouTube, and you can pull in data from most any digital channel.

You'll learn much more about the benefits of Data-Driven Attribution when you download our cheat sheet. Or to learn more about Google Analytics Premium, contact your Google Account Manager or visit google.com/analytics/premium.

Posted by Bill Kee, Product Manager for Attribution, and Jody Shapiro, Product Manager for Google Analytics Premium

Friday, 15 November 2013

Moore’s Law Part 4: Moore's Law in other domains

This is the last entry of a series focused on Moore’s Law and its implications moving forward, edited from a white paper on Moore’s Law written by Google University Relations Manager Michel Benard. This series quotes major sources about Moore’s Law and explores how they believe Moore’s Law will likely continue over the course of the next several years. We also explore whether there are fields other than digital electronics that either have an emerging Moore's Law situation, or hold the promise of such a law driving their future performance.

--

The quest for Moore’s Law and its potential impact in other disciplines is a journey the technology industry is just starting: crossing the Rubicon from the semiconductor industry to other, less explored fields, but with the particular mindset created by Moore’s Law. Our goal is to explore whether Moore’s Law opportunities are emerging in other disciplines, as well as their potential impact. As such, we have interviewed several professors and researchers and asked them if they could see emerging ‘Moore’s Laws’ in their disciplines. Listed below are some highlights of those discussions, ranging from CS+ to potentials in the energy sector:

Sensors and Data Acquisition
Ed Parsons, Google Geospatial Technologist
The More than Moore discussion can be extended beyond the main chip, to the same board as the main chip or to the device that a user is carrying. Greater sensor capabilities (for the measurement of pressure, electromagnetic field and other local conditions) allow sensors to be included in smartphones, glasses, or other devices and to perform local data acquisition. This trend is strong, and should allow future devices benefiting from Moore’s Law to receive enough data to perform more complex applications.

Metcalfe’s Law states that the value of a telecommunication network is proportional to the square of the number of connected nodes in the system. This law can be used in parallel with Moore’s Law to evaluate the value of the Internet of Things. The network itself can be seen as composed of layers: at the user’s local level (to capture data related to the user's body, or to immediately accessible objects), locally around the user (for example, data from the same street as the user), and finally globally (to get data from the global internet). The extrapolation made earlier in this series (several TB available in flash memory) will lead to the ability to construct, exchange and download/upload entire contexts for a given situation or application, and to use these contexts with little or even no network activity.
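Stated as a formula, with n the number of connected nodes, Metcalfe's Law says the network value V grows as

V(n) ∝ n²

so doubling the number of connected devices roughly quadruples the value of the network, a dynamic that compounds with the falling device costs driven by Moore's Law.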

Future of Moore’s Law and its impact on Physics
Sverre Jarp, CERN
CERN and its experiments with the Large Electron-Positron Collider (LEP) and Large Hadron Collider (LHC) generate data on the order of a petabyte per year; this data has to be filtered, processed and analyzed in order to find meaningful physics events leading to new discoveries. In this context Moore’s Law has been particularly helpful in allowing computing power, storage and networking capabilities at CERN and at other High Energy Physics (HEP) centers to scale up regularly. Several generations of hardware and software have been exhausted during the journey from mainframes to today’s clusters.

CERN has a long tradition of collaboration with chip manufacturers, hardware and software vendors to understand and predict next trends in the computing evolution curve. Recent analysis indicates that Moore’s Law will likely continue over the next decade. The statement of ‘several TB of flash memory availability by 2025’ may even be a little conservative according to most recent analysis.

Big Data Visualizations
Katy Börner, Indiana University
Thanks to Moore’s Law, the amount of data available for any given phenomenon, whether sensed or simulated, has been growing by several orders of magnitude over the past decades. Intelligent sampling can be used to filter out the most relevant bits of information and is practiced in Physics, Astronomy, Medicine and other sciences. Subsequently, data needs to be analyzed and visualized to identify meaningful trends and phenomena, and to communicate them to others.

While most people learn in school how to read charts and maps, many never learn how to read a network layout—data literacy remains a challenge. The Information Visualization Massive Open Online Course (MOOC) at Indiana University teaches students from more than 100 countries not only how to read but also how to design meaningful network, topical, geospatial, and temporal visualizations. Using the tools introduced in this free course, anyone can analyze, visualize, and navigate complex data sets to understand patterns and trends.

Candidate for Moore’s Law in Energy
Professor Francesco Stellacci, EPFL
It is currently hard to see a “Moore’s Law” applying to candidates in energy technology. Nuclear fusion could hold some positive surprises, if several significant breakthroughs are made in the process of creating usable energy with this technique. For any other technology, growth will be slower. Today's best solar cells have an efficiency of about 30%, which could of course scale higher (though obviously by no more than a factor of about 3, since efficiency cannot exceed 100%). Cost could also be driven down by an order of magnitude. Best estimates show, however, a combined performance improvement of roughly a factor of 30 spread over many years.

Further Discussion of Moore’s Law in Energy
Ross Koningstein, Google Director Emeritus
As of today there is no obvious Moore’s Law in the energy sector that could decrease some major cost by 50% every 18 months. However, material properties at the nanoscale and chemical processes such as catalysis are being investigated, and could lead to promising results. The applications targeted are hydrocarbon creation at scale and improvement of oil-refinery processes, where breakthroughs in micro/nano-scale catalysts are being pursued. Hydrocarbons are much more compatible at scale with the existing automotive/aviation and natural gas distribution systems. Here in California, Google Ventures has invested in Cool Planet Energy Systems, a company with neat technology that can convert biomass to gasoline/jet fuel/diesel with impressive efficiency.

One of the challenges is the ability to run many experiments at low cost per experiment, instead of only a few expensive experiments per year. Discoveries are likely to happen faster if more experiments are conducted. But this requires heavier investment, which is difficult to achieve in slim-margin businesses. Disruptive businesses are therefore likely to be nurtured by new players, alongside those existing players that decide to fund significant new investments.

Of course, these discussions could be opened for many other sectors. The opportunities for more discourse on the impact and future of Moore’s Law on CS and other disciplines are abundant, and can be continued with your comments on the Research at Google Google+ page. Please join, and share your thoughts.