Thursday, 30 April 2015

Introducing Search Response and Airings Data in TV Attribution

The following is a cross post from Adometry by Google, a Marketing Analytics and Attribution product

Mass media drives people to interact with brands in compelling ways. When a TV or radio ad creates an I-want-to-know, I-want-to-go, or an I-want-to-buy moment in the mind of a consumer, many pursue it online. Immediately - and on whatever screen they have handy.

Last year, we announced Adometry TV Attribution, which measures the digital impact of offline channels such as television and radio. Now, we’re moving TV Attribution forward by integrating Google Search query data and Rentrak airings data to help marketers better understand the important moments their broadcast investments create.

New Search Behavior, New Search Analysis
Broadcast media doesn’t just drive consumers directly to websites — it drives searches. Now, TV Attribution lets you analyze minute-by-minute aggregated Google Search query data against spot-related keywords to detect and attribute search “micro-conversions” to specific TV airings.
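To make the idea concrete, here is a minimal, purely illustrative sketch of this kind of airing-level lift analysis. It is not Adometry's model; the function, window sizes, and data below are hypothetical placeholders.

```python
# Illustrative sketch only: a simple baseline-vs-post-airing lift check for one spot.
# Assumes you already have minute-level query counts for a spot-related keyword.
from statistics import mean

def search_lift(query_counts_per_minute, airing_minute, baseline_window=30, response_window=5):
    """Compare query volume just after a TV airing to the pre-airing baseline."""
    baseline = query_counts_per_minute[airing_minute - baseline_window:airing_minute]
    response = query_counts_per_minute[airing_minute:airing_minute + response_window]
    baseline_rate = mean(baseline)
    response_rate = mean(response)
    # "Micro-conversions" attributed to the spot: queries above the expected baseline.
    incremental_queries = max(0.0, (response_rate - baseline_rate) * response_window)
    return baseline_rate, response_rate, incremental_queries

# Example: per-minute counts for a keyword, with a spot airing at minute index 60.
counts = [12] * 60 + [55, 48, 30, 20, 15] + [12] * 30
print(search_lift(counts, airing_minute=60))
```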

With insights on the entire digital customer journey — including search behaviors — brands can better evaluate broadcast network and daypart, specific ad creative, and keyword performance. As a result, brands can:
  • Assess Immediate Influence: See which messages are sticking in the minds of consumers to both maximize TV interest and choose ideal keywords for SEO and paid search strategies.
  • Evaluate Awareness Goals: Optimize against a digital signal even when a site visit isn’t the primary goal, such as in brand awareness or sponsorship campaigns.
  • Analyze Competitive Category: Glean which generic keywords drive category interest for the industry — insights that aren’t possible through site traffic analysis alone.

Rentrak Partnership Speeds TV Attribution Insights
Knowing when your spots aired and collecting that data for timely TV attribution analysis can be a challenge. Marketers who buy broadcast media through agencies often don’t have direct access to this data. And once data is obtained — after coordinating with multiple agencies, partners, and TV measurement companies — the time lag makes for outdated analysis. 

TV Attribution now solves these challenges through a new partnership with Rentrak, the leading and trusted source for TV airings information.

What Rentrak Integration Delivers
Integrating directly with Rentrak TV Essentials, TV Attribution now overcomes some of the biggest hurdles in TV measurement, with increases in: 
  • Actionability: TV Attribution can more quickly and easily obtain TV data for analysis without time-consuming coordination from you or your agencies.  
  • Accuracy: Rentrak provides a comprehensive data set with aggregated viewership information from more than 30 million televisions across the country, and from more than 230 networks.
  • Frequency: A direct relationship means more frequent reporting since there is no longer a manual find-and-transfer of data required from TV buying partners.
“What makes this partnership so exciting is that it removes the biggest barrier to truly measuring TV effectiveness: timely access to spot airings data, including impressions,” said Tony Pecora, CMO for SelectQuote. “Rather than hunting and gathering data, we are now able to spend our time evaluating insights and optimizing our marketing investments across both TV and digital. As a CMO, this is a really big win for our business.”

Want to Get Moving?

The gap between offline and digital measurement continues to close. Learn more about how Adometry TV Attribution, now with Google Search query data and integrated Rentrak airings data, can help you gain more actionable cross-channel insights.

Posted by Dave Barney, Product Manager

Tuesday, 28 April 2015

Supercharge your Google Analytics with SkyGlue

The following is a guest post from SkyGlue, a Google Analytics Technology Partner

SkyGlue is a powerful add-on tool for Google Analytics that helps web analysts to get more out of Google Analytics. With SkyGlue, you can automate Event Tracking for your website, zoom in on visitor analytics, and export and integrate your Google Analytics data with your own database or CRM. 
Automatic Event Tracking: Custom data collection without IT help
Your website probably offers many ways for visitors to interact with your content, so you need to know what your visitors do on your site, not just which pages they visit. You can collect important data about interactions like clicks, downloads, and modal popups using Google Analytics Event Tracking, but it requires a fair amount of additional setup. And if you don’t have the IT resources to set up Event Tracking, you’re missing out on collecting this important data.
SkyGlue helps you gain independence from IT resources by automating Event Tracking with on-the-fly customization through the SkyGlue web portal. By adding one line of JavaScript to your website, the SkyGlue app can track interactions with any HTML element on your website and then send this data to your Google Analytics account.
SkyGlue Event Tracking visual overlay
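SkyGlue's own snippet is a single line of JavaScript on the page, so the following is purely illustrative: a Python sketch of the kind of Event hit that ultimately lands in a Google Analytics property, sent through the public Measurement Protocol. The tracking ID and client ID are placeholders.

```python
# Illustrative only: shows the shape of a Google Analytics Event hit sent via the
# Measurement Protocol. This is not SkyGlue's implementation.
import requests

def send_ga_event(tracking_id, client_id, category, action, label=None, value=None):
    payload = {
        "v": "1",            # Measurement Protocol version
        "tid": tracking_id,  # GA property ID, e.g. "UA-XXXXXXX-1" (placeholder)
        "cid": client_id,    # anonymous client identifier
        "t": "event",        # hit type
        "ec": category,      # event category, e.g. "button"
        "ea": action,        # event action, e.g. "click"
    }
    if label is not None:
        payload["el"] = label
    if value is not None:
        payload["ev"] = str(value)
    return requests.post("https://www.google-analytics.com/collect", data=payload)

# e.g. send_ga_event("UA-XXXXXXX-1", "555", "download", "click", label="whitepaper.pdf")
```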
Visitor analytics + Data export
SkyGlue supports multiple approaches to visitor tracking and offers special reports that let you see the entire sequence of visits and interactions. Fully integrated with Google Analytics advanced segments, these reports let you zoom in on selected groups of visitors, helping you understand your customers’ behavior, discover patterns, identify technical glitches, improve customer service, and find ways to increase conversion and retention rates. You can also use SkyGlue to export your Google Analytics data on a daily basis and integrate it with your own CRM and other data sources.
SkyGlue Individual Visitor Report (not based on real data)
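For the daily export described above, here is a minimal sketch of one way to pull yesterday's event data yourself using the Google Analytics Core Reporting API (v3). This is not SkyGlue's implementation; the view ID, key file, and field choices are placeholders.

```python
# A minimal sketch of a daily pull from the Google Analytics Core Reporting API (v3).
from googleapiclient.discovery import build
from oauth2client.service_account import ServiceAccountCredentials

SCOPES = ["https://www.googleapis.com/auth/analytics.readonly"]
credentials = ServiceAccountCredentials.from_json_keyfile_name("service-key.json", SCOPES)
analytics = build("analytics", "v3", credentials=credentials)

response = analytics.data().ga().get(
    ids="ga:12345678",                      # placeholder GA view (profile) ID
    start_date="yesterday",
    end_date="yesterday",
    metrics="ga:totalEvents,ga:sessions",
    dimensions="ga:date,ga:eventCategory,ga:eventAction",
).execute()

for row in response.get("rows", []):
    print(row)   # e.g. hand each row off to your CRM or warehouse loader here
```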
SkyGlue puts some of Google Analytics’ most powerful features in the hands of every analyst. Use it to automate Event Tracking, get access to visitor analytics reports, and export and integrate Google Analytics data with other data sources.
SkyGlue is free to try and takes only a few minutes to set up - check it out and see customer reviews in the Google Analytics Partner Gallery.
For more information, visit the SkyGlue website and read real-world examples of how SkyGlue has already helped many businesses and organizations get more out of Google Analytics.
- The Google Analytics Developer Relations team

Wednesday, 15 April 2015

Google Handwriting Input in 82 languages on your Android mobile device



Entering text on mobile devices is still considered inconvenient by many; touchscreen keyboards, although much improved over the years, require a lot of attention to hit the right buttons. Voice input is an option, but there are situations where it is not feasible, such as in a noisy environment or during a meeting. Handwriting can be a natural and intuitive way to enter text, complementing typing and speech input. However, until recently there have been many languages where enabling this functionality presented significant challenges.

Today we launched Google Handwriting Input, which lets users handwrite text on their Android mobile device as an additional input method for any Android app. Google Handwriting Input supports 82 languages in 20 distinct scripts, and works with both printed and cursive writing input with or without a stylus. Beyond text input, it also provides a fun way to enter hundreds of emojis by drawing them (simply press and hold the ‘enter’ button to switch modes). Google Handwriting Input works with or without an Internet connection.
By building on large-scale language modeling, robust multi-language OCR, and incorporating large-scale neural networks and approximate nearest neighbor search for character classification, Google Handwriting Input supports languages that can be challenging to type on a virtual keyboard. For example, keyboards for ideographic languages (such as Chinese) are often based on a particular dialect of the language; if a user does not know that dialect, the keyboard may be hard to use. Additionally, keyboards for complex script languages (like many South Asian languages) are less standardized and may be unfamiliar. Even for languages where virtual keyboards are more widely used (like English or Spanish), some users find that handwriting is more intuitive, faster, and generally more comfortable.
Writing 'Hello' in Chinese, German, and Tamil.
Google Handwriting Input is the result of many years of research at Google. Initially, cloud-based handwriting recognition supported the Translate apps on Android and iOS, Mobile Search, and Google Input Tools (in Chrome, ChromeOS, Gmail and Docs, translate.google.com, and the Docs symbol picker). However, other products required recognizers to run directly on an Android device without an Internet connection. So we worked to make recognition models smaller and faster for use in Android handwriting input methods for Simplified and Traditional Chinese, Cantonese, and Hindi, as well as multi-language support in Gesture Search. Google Handwriting Input combines these efforts, allowing recognition both on-device and in the cloud (by tapping on the cloud icon) in any Android app.
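As a toy illustration of the nearest-neighbor character classification mentioned above (the production recognizer pairs large neural networks with approximate nearest neighbor search at far greater scale), here is a sketch using exact k-NN over hypothetical feature vectors:

```python
# A toy illustration only: exact k-NN over made-up feature vectors stands in for the
# approximate nearest neighbor search used at scale; the feature extraction is hypothetical.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

# Pretend each handwritten sample has been turned into a fixed-length feature vector
# (in practice, e.g. by a neural network); labels are the characters they represent.
train_features = np.random.rand(1000, 64)          # placeholder training features
train_labels = np.random.choice(list("你好hello"), 1000)

classifier = KNeighborsClassifier(n_neighbors=5)
classifier.fit(train_features, train_labels)

query = np.random.rand(1, 64)                      # features for a new ink sample
print(classifier.predict(query))
```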

You can install Google Handwriting Input from the Play Store here. More information and FAQs can be found here.

Tuesday, 14 April 2015

DAA San Francisco presents ‘Optimizing Your Analytics Career’

The Digital Analytics Association (DAA) San Francisco Chapter is hosting an evening of networking and conversation on the topic of ‘Optimizing Your Analytics Career’. 

We’d like to invite you to join us for an evening of networking and to hear from a great panel of seasoned experts and analytics newbies on their career paths, goals, and what the future holds. Come with your questions in mind and ask our experts everything you’ve wanted to know about excelling in the analytics industry.

Moderator: Krista Seiden, Analytics Advocate, Google

Panelists:
By attending you will:
  • Network with industry leaders and enjoy great casual conversations with your fellow analytics peers
  • Hear from our seasoned experts about their career paths, interests, educational opportunities, and skillsets needed in the analytics industry
Theme: Optimizing Your Analytics Career
When: Thursday April 23rd, 6:00pm - 8:00pm (career panel starting around 6:40pm)
Where: Roe Restaurant, 651 Howard St, San Francisco, CA 94105
Cost: The cost to attend the event is free for DAA members, $15 for non-members and $5 for students (students use this promo code: SFstudent). Students can join the DAA for $39 and also get admission to our annual symposium for free.

This will be an excellent opportunity to connect with your fellow analytics professionals and learn more about advancing in your profession. Join us!

Event website and registration: register here.

This event is organized by local DAA members and volunteers. We encourage you to become a member of the DAA and join our local efforts. Become a member and reach out to one of the local chapter leaders, Krista, Charles or Feras.

Posted by Krista Seiden, Analytics Advocate

Wednesday, 8 April 2015

Beyond Short Snippets: Deep Networks for Video Classification



Convolutional Neural Networks (CNNs) have recently shown rapid progress in advancing the state of the art of detecting and classifying objects in static images, automatically learning complex features in pictures without the need for manually annotated features. But what if one wanted not only to identify objects in static images, but also analyze what a video is about? After all, a video isn’t much more than a string of static images linked together in time.

As it turns out, video analysis provides even more information to the object detection and recognition task performed by CNNs by adding a temporal component through which motion and other information can also be used to improve classification. However, analyzing entire videos is challenging from a modeling perspective because one must model variable-length videos with a fixed number of parameters. Not to mention that modeling variable-length videos is computationally very intensive.

In Beyond Short Snippets: Deep Networks for Video Classification, to be presented at the 2015 Computer Vision and Pattern Recognition conference (CVPR 2015), we[1] evaluated two approaches - feature pooling networks and recurrent neural networks (RNNs) - capable of modeling variable-length videos with a fixed number of parameters while maintaining a low computational footprint. In doing so, we were able to show not only that learning a high-level global description of the video’s temporal evolution is very important for accurate video classification, but also that our best networks exhibited significant performance improvements over previously published results on the Sports-1M dataset (1 million sports videos).

In previous work, we employed 3D convolutions (meaning convolutions over time and space) over short video clips - typically just a few seconds - to learn motion features from raw frames implicitly and then aggregate predictions at the video level. For purposes of video classification, these low-level motion features only marginally outperformed models in which no motion was modeled.

To understand why, consider the following two images which are very similar visually but obtain drastically different scores from a CNN model trained on static images:
Slight differences in object poses/context can change the predicted class/confidence of CNNs trained on static images.
Since each individual video frame forms only a small part of the video’s story, static frames and short video snippets (2-3 seconds) use incomplete information and could easily confuse subtle, fine-grained distinctions between classes (e.g., Tae Kwon Do vs. Systema) or rely on portions of the video irrelevant to the action of interest.

To get around this frame-by-frame confusion, we used feature pooling networks that independently process each frame and then pool/aggregate the frame-level features over the entire video at various stages. Another approach we took was to utilize an RNN (built from Long Short-Term Memory units) instead of feature pooling, allowing the network itself to decide which parts of the video are important for classification. By sharing parameters through time, both the feature pooling and RNN architectures are able to maintain a constant number of parameters while capturing a global description of the video’s temporal evolution.
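As a rough sketch of the feature-pooling idea (not the paper's implementation), the snippet below max-pools per-frame CNN features into a single fixed-length video descriptor and applies one shared linear classifier, so videos of any length use the same parameters; the LSTM variant would instead feed the same per-frame features through a recurrent network. Feature dimensions and class counts are placeholders.

```python
# Minimal numpy sketch of feature pooling over time followed by a shared classifier.
import numpy as np

def classify_video(frame_features, weights, bias):
    """frame_features: (num_frames, feature_dim) array of per-frame CNN activations."""
    pooled = frame_features.max(axis=0)            # (feature_dim,) global video descriptor
    logits = pooled @ weights + bias               # single shared linear classification layer
    probs = np.exp(logits - logits.max())
    return probs / probs.sum()

# Two videos of different lengths share the exact same classifier parameters.
rng = np.random.default_rng(0)
weights, bias = rng.normal(size=(2048, 487)), np.zeros(487)   # e.g. 487 Sports-1M classes
short_clip = rng.normal(size=(30, 2048))     # 30 frames sampled at 1 fps
long_clip = rng.normal(size=(300, 2048))     # 300 frames, same parameter count
print(classify_video(short_clip, weights, bias).shape,
      classify_video(long_clip, weights, bias).shape)
```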

In order to feed the two aggregation approaches, we compute an image “pixel-based” CNN model, based on the raw pixels in the frames of a video. We processed videos for the “pixel-based” CNNs at one frame per second to reduce computational complexity. Of course, at this frame rate implicit motion information is lost.

To compensate, we incorporate explicit motion information in the form of optical flow - the apparent motion of objects across a camera's viewfinder due to the motion of the objects or the motion of the camera. We compute optical flow images over adjacent frames to learn an additional “optical flow” CNN model.
Left: Image used for the pixel-based CNN; Right: Dense optical flow image used for optical flow CNN
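For illustration, dense optical flow between adjacent frames can be computed with OpenCV's Farneback method; the paper's exact flow computation and parameters may differ, and the file names below are placeholders.

```python
# Rough sketch: dense optical flow between two adjacent (grayscale) frames.
import cv2

prev = cv2.cvtColor(cv2.imread("frame_000.jpg"), cv2.COLOR_BGR2GRAY)
curr = cv2.cvtColor(cv2.imread("frame_001.jpg"), cv2.COLOR_BGR2GRAY)

# Positional args: pyr_scale, levels, winsize, iterations, poly_n, poly_sigma, flags.
flow = cv2.calcOpticalFlowFarneback(prev, curr, None, 0.5, 3, 15, 3, 5, 1.2, 0)

# flow has shape (height, width, 2): per-pixel horizontal and vertical displacement,
# which can be rendered as an image like the one captioned above and fed to the flow CNN.
```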
The pixel-based and optical flow based CNN model outputs are provided as inputs to both the RNN and pooling approaches described earlier. These two approaches then separately aggregate the frame-level predictions from each CNN model input, and average the results. This allows our video-level prediction to take advantage of both image information and motion information to accurately label videos of similar activities even when the visual content of those videos varies greatly.
Badminton (top 25 videos according to the max-pooling model). Our methods accurately label all 25 videos as badminton despite the variety of scenes in the various videos because they use the entire video’s context for prediction.
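The late fusion step itself is simple to sketch: each model produces per-class probabilities for the video, and the final prediction averages them (the values below are made up).

```python
# Hedged sketch of late fusion: average the per-class probabilities from the two models.
import numpy as np

pixel_probs = np.array([0.70, 0.20, 0.10])   # e.g. from the pixel-based CNN + aggregation
flow_probs = np.array([0.55, 0.35, 0.10])    # e.g. from the optical-flow CNN + aggregation

video_probs = (pixel_probs + flow_probs) / 2.0
predicted_class = int(np.argmax(video_probs))
print(video_probs, predicted_class)
```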
We conclude by observing that although very different in concept, the max-pooling and the recurrent neural network methods perform similarly when using both images and optical flow. Currently, these two architectures are the top performers on the Sports-1M dataset. The main difference between the two was that the RNN approach was more robust when using optical flow alone on this dataset. Check out a short video showing some example outputs from the deep convolutional networks presented in our paper.


[1] Research carried out in collaboration with University of Maryland, College Park PhD student Joe Yue-Hei Ng and University of Texas at Austin PhD student Matthew Hausknecht, as part of a Google Software Engineering Internship.

Tackling Quantitative PR Measurement with AirPR & Google Analytics

The following is a guest post by Leta Soza. Leta is the PR Engineer at AirPR where she lives and breathes PR strategy, content marketing, community cultivation, and analytics. Her analytics adoration stems from the firmly rooted belief that you can’t manage what you can’t measure, so bring on the data. She works with everyone from Fortune 500 companies to innovative startups in order to assist them in proving the ROI of their PR efforts while optimizing their strategies. 

It’s no secret that PR has historically been difficult to measure… quantitatively that is.

PR pros have always had to rely on less than stellar metrics (AVEs, impressions calculations, etc.) to show ROI, and with seemingly no viable reporting alternatives, PR has basically been relegated to the budgetary back seat.

For years, the industry has struggled to prove its value, lagging behind in technological innovation. But as every aspect of business becomes driven by data, vanity metrics are becoming unacceptable and PR is being held accountable for demonstrating its impact on the bottom line.

At AirPR, we’ve made it our mission to provide analytics, insights, and measurement solutions for the rapidly evolving PR industry. Our Analyst product focuses on increasing overall PR performance while seeking to solve systemic industry challenges through the application of big data.

Analyst, our measurement and insights solution, was created to assist PR and communication professionals in understanding what’s moving the needle in terms of their business objectives. 

Interested in how many potential customers came to your website from that press hit? Curious which authors drove the most social amplification during a specific quarter? Want to more deeply understand message pull-through or even attribute revenue? Analyst simplifies getting these answers.

One of the key features of Analyst is our unique integration with Google Analytics. Our integration arms Analyst users with a comprehensive snapshot of the PR activities driving business objectives, along with the insights to understand which media placements (earned or owned) are achieving specific company aims, giving PR professionals a single dashboard dedicated to the performance of their efforts. Completing the GA integration creates a comprehensive view of the most meaningful and actionable PR data in aggregate, and lets users click into any piece of data for more context.
AirPR Analyst Dashboard (click for full-sized image)

In PR, attribution is key, so we leverage Google Analytics data to display PR-specific performance and demonstrate ROI. Our aim: to change the way the industry thinks about PR analytics, insights, and measurement, and to provide the solutions that support this shift.

To quote legendary management consultant Peter Drucker, “In this new era of ‘big data’ it is even more important to convert raw data to true information.” Our goal is to deliver actionable and meaningful information. When decision makers understand what’s working, they can increase effort on certain aspects, eliminate others, and make impactful budget allocation decisions for future PR campaigns, much like they do for advertising.

To learn more about AirPR Analyst, check us out in the Google Analytics app gallery.

Posted by Leta Soza, PR Engineer at AirPR 

Monday, 6 April 2015

Skill maps, analytics and more with Google’s Course Builder 1.8



Over the past couple of years, Google’s Course Builder has been used to create and deliver hundreds of online courses on a variety of subjects (from sustainable energy to comic books), making learning more scalable and accessible through open source technology. With the help of Course Builder, over a million students of all ages have learned something new.

Today, we’re increasing our commitment to Course Builder by bringing rich, new functionality to the platform with a new release. Of course, we will also continue to work with edX and others to contribute to the entire ecosystem.

This new version enables instructors and students to understand prerequisites and skills explicitly, introduces several improvements to the instructor experience, and even allows you to export data to Google BigQuery for in-depth analysis.
  • Drag and drop, simplified tabs, and student feedback
We’ve made major enhancements to the instructor interface, such as simplifying the tabs and clarifying which part of the page you’re editing, so you can spend more time teaching and less time configuring. You can also structure your course on the fly by dragging and dropping elements directly in the outline.

Additionally, we’ve added the option to include a feedback box at the bottom of each lesson, making it easy for your students to tell you their thoughts (though we can't promise you'll always enjoy reading them).
  • Skill Mapping
You can now define prerequisites and skills learned for each lesson. For instance, in a course about arithmetic, addition might be a prerequisite for the lesson on multiplying numbers, while multiplication is a skill learned. Once an instructor has defined the skill relationships, they will have a consolidated view of all their skills and the lessons they appear in, such as this list for Power Searching with Google:
Instructors can then enable a skills widget that shows at the top of each lesson and which lets students see exactly what they should know before and after completing a lesson. Below are the prerequisites and goals for the Thinking More Deeply About Your Search lesson. A student can easily see what they should know beforehand and which lessons to explore next to learn more.
Skill maps help a student better understand which content is right for them. And, they lay the groundwork for our future forays into adaptive and personalized learning. Learn more about Course Builder skill maps in this video.
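As an illustrative aside (this is not Course Builder's internal schema), a skill map boils down to a small data structure mapping each lesson to the skills it requires and the skills it teaches, which is enough to tell a student what to know beforehand and where to go next:

```python
# Hypothetical skill-map structure; lesson names and skills are placeholders.
SKILL_MAP = {
    "Multiplying numbers": {"prerequisites": {"addition"}, "teaches": {"multiplication"}},
    "Dividing numbers": {"prerequisites": {"multiplication"}, "teaches": {"division"}},
}

def next_lessons(known_skills, skill_map):
    """Lessons whose prerequisites are already covered by the student's skills."""
    return [lesson for lesson, info in skill_map.items()
            if info["prerequisites"] <= set(known_skills)]

print(next_lessons({"addition"}, SKILL_MAP))                     # ['Multiplying numbers']
print(next_lessons({"addition", "multiplication"}, SKILL_MAP))   # both lessons are unlocked
```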
  • Analytics through BigQuery
One of the core tenets of Course Builder is that quality online learning requires a feedback loop between instructor and student, which is why we’ve always had a focus on providing rich analytical information about a course. But no matter how complete, sometimes the built-in reports just aren’t enough. So Course Builder now includes a pipeline to Google BigQuery, allowing course owners to issue super-fast queries in a SQL-like syntax using the processing power of Google’s infrastructure. This allows you to slice and dice the data in an infinite number of ways, giving you just the information you need to help your students and optimize your course. Watch these videos on configuring and sending data.
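As an example of what that pipeline enables, here is a minimal sketch of querying exported course data with the BigQuery Python client; the project, dataset, table, and column names are hypothetical placeholders for whatever your deployment exports.

```python
# Minimal sketch: count active students per lesson from a hypothetical exported events table.
from google.cloud import bigquery

client = bigquery.Client(project="my-course-project")   # placeholder project

query = """
    SELECT lesson_id, COUNT(DISTINCT student_id) AS active_students
    FROM `my-course-project.coursebuilder_export.events`
    WHERE event_type = 'attempt-lesson'
    GROUP BY lesson_id
    ORDER BY active_students DESC
"""

for row in client.query(query).result():
    print(row.lesson_id, row.active_students)
```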

To get started with your own course, follow these simple instructions. Please let us know how you use these new features and what you’d like to see in Course Builder next. Need some inspiration? Check out our list of courses (and tell us when you launch yours).

Keep on learning!