Wednesday, 27 November 2013

Full Customer Journey: Three Lenses of Measurement

My son is a LEGO enthusiast, and even though I don’t build that often, I am usually involved in acquiring LEGO sets or digital goods. To name a few touch points: we build with their bricks, plan with their software, play with their apps, buy through their website and consume content on their social profiles. Quite a lot of contact with their brand, and that’s not all!

For my part, I am very curious about how they measure and optimize their customer experiences, so I like to use them as an example of how challenging the measurement world has become. The way we look at this challenge at Google is through three lenses of measurement:
  1. Holistic Measurement: how can we understand our customers using multiple devices through multiple touch points? 
  2. Full Credit Measurement: how can we attribute the credit of bringing new and returning customers to marketing campaigns?
  3. Active Measurement: how can we make sure that data is accessible, accurate and comprehensive?
These are the kinds of challenges we try to solve for, and they drive our thinking. Paul Muret, VP of Engineering at Google, discussed these three challenges, and how we should face them, in his article in the Harvard Business Review. Here is an excerpt:
This is creating tremendous opportunities for business teams to engage customers throughout their new and more complex buying journeys. But before you can take advantage, you have to understand that journey by measuring and analyzing the data in new ways that value these moments appropriately. The payoff is better alignment between marketing messages and consumers’ intent during their paths to purchase - and ultimately, better business results.
Below is a presentation I delivered in Dublin at a Google Think event earlier this year, in which I discuss each of the challenges in depth.


Tuesday, 26 November 2013

SUPERWEEK 2014: January 21-23, Hungary

The following is a guest post contributed by Zoltán Bánóczy, founder of AALL Ltd. and the SUPERWEEK Conference series.

In the fourth week of the New Year, many of us will enjoy the gorgeous view pictured below as the actual backdrop for one of the year’s most exciting analytics conferences. Speakers hailing from Jerusalem to Copenhagen to San Francisco to Ahmedabad promise to deliver insightful talks about a wide range of topics surrounding the modern digital industry.


The three-day SUPERWEEK 2014 begins on January 21st on the beautiful mountaintop of Galyatető, at the highest-lying four-star hotel in Hungary. Fly to Budapest easily from across Europe and rely on our SUPERBUS shuttle buses, available as an option with your package. Conference goers can expect advanced talks at the sessions, data-based opinions shared during the panels, and Google Tag Manager deep dives - some say even deeper than the Mariana Trench.

In his keynote, titled “Driving an Obsession with Actionable Analytics,” Avinash Kaushik will share a collection of strategies to help you ensure that the focus of your analytics effort is on taking action rather than data regurgitation. Caleb Whitmore (Analytics Pros) will provide “hands-on” training, and conference goers can complete the GAIQ exam right afterwards. Excitingly, we also get the opportunity to ask Avinash about Life! in his Q&A session, “Search, Social, Analytics, Life: AMA (ask me anything)”.

Speakers include industry thought leaders, Top Contributors to the AdWords forums and many Google Analytics Certified Partner companies - all from about 10 countries.


We’ll try to cover the latest in the industry: predictive analytics (Ravi Pathak, India), Universal Analytics & Google Tag Manager implementations (Yehoshua Coren - Israel, Doug Hall - UK, and Julien Coquet - France), PPC / display advertising (Jacob Kildebogaard - Denmark and Oliver Schiffers - Germany), A/B testing, privacy (Aurélie Pols - Spain) and even “The Professor” himself, analytics expert Phil Pearce from the UK.

Join us for the emblematic, traditional evening campfire, built from logs more than two meters long, where a wide range of (mulled) wine and a mellow mood will be served.

Keep up to date on the agenda and other programmes by following us at @superweek2014 (or #spwk during the event) on Twitter.

Posted by Zoltán Bánóczy, Google Analytics Certified Partner

Released Data Set: Features Extracted From YouTube Videos for Multiview Learning


“If it looks like a duck, swims like a duck, and quacks like a duck, then it probably is a duck.”

Performance of machine learning algorithms, supervised or unsupervised, is often significantly enhanced when a variety of feature families, or multiple views of the data, are available. For example, in the case of web pages, one feature family can be based on the words appearing on the page, and another can be based on the URLs and related connectivity properties. Similarly, videos contain both audio and visual signals, and each modality can in turn be analyzed in a variety of ways. For instance, the visual stream can be analyzed based on the color and edge distribution, texture, motion, object types, and so on. YouTube videos are also associated with textual information (title, tags, comments, etc.). Each feature family complements the others in providing predictive signals to accomplish a prediction or classification task, for example, automatically classifying videos into subject areas such as sports, music, comedy, games, and so on.

We have released a dataset of over 100k feature vectors extracted from public YouTube videos. These videos are labeled with one of 30 classes, each class corresponding to a video game (with some amount of class noise): each video shows gameplay of a video game, for example for instructional purposes. Each instance (video) is described by three feature families (textual, visual, and auditory), and each family is broken into subfamilies, yielding up to 13 feature types per instance. Neither video identities nor class identities are released.
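As a quick illustration of the sort of multiview experiment this enables, below is a minimal late-fusion sketch in Python: train one classifier per feature family and average the predicted class probabilities. The arrays are synthetic stand-ins, not the released feature files, and the per-view dimensionalities are invented for the example.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.RandomState(0)
n = 500
y = rng.randint(0, 30, size=n)            # 30 game classes, as in the dataset
views = {
    "text": rng.rand(n, 100),              # stand-in for the textual feature family
    "visual": rng.rand(n, 64),             # stand-in for the visual feature family
    "audio": rng.rand(n, 32),              # stand-in for the auditory feature family
}

# Late fusion: fit one model per view, then average the class probabilities.
models = {name: LogisticRegression(max_iter=1000).fit(X, y)
          for name, X in views.items()}
proba = np.mean([models[name].predict_proba(views[name]) for name in views],
                axis=0)
print("fused training accuracy:", (proba.argmax(axis=1) == y).mean())
```

Co-training, multiview clustering, and classifier ensembles essentially replace that final averaging step with more sophisticated ways of combining the views.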

We hope that this dataset will be valuable for research on a variety of multiview related machine learning topics, including multiview clustering, co-training, active learning, classifier fusion and ensembles.

The data and more information can be obtained from the UCI machine learning repository (multiview video dataset), or from here.

Monday, 25 November 2013

Intuit Team Crunches its Own Numbers with Google Analytics Premium

Intuit products like Quicken and TurboTax have been putting the power of numbers in the hands of users since 1983.

Which is why we're so pleased that when Intuit wanted to boost the power of analytics for one of their own teams recently, they turned to Google Analytics Premium. The details are in our new case study, which you'll find here.

The study has the full story of Intuit's Channel Marketing Team, which now uses Google Analytics Premium to measure data for multiple business segments. Once they began using it, Intuit discovered that they had been under-reporting the success of their SEO traffic by at least 27% and conversions by up to 200%.  

Those are exactly the kind of vital numbers that Google Analytics Premium is designed to provide.  

Intuit used Blast Analytics and Marketing, a Google Analytics certified partner, to build out their solution, which was configured to match Intuit's own organizational structure. That structure helped Intuit "democratize" its data so that now anyone on the team can get what they need right away, in real time. Instead of the two days it used to take to request and deliver reports, it takes two hours or less.

As Ken Wach, Vice President of Marketing at Intuit, put it simply: “Google Analytics Premium increased the speed and accuracy of actionable data that drives our business.”


Post by Suzanne Mumford, Google Analytics Marketing

The MiniZinc Challenge



Constraint Programming is a style of problem solving where the properties of a solution are first identified, and then a large space of candidate solutions is searched to find the best one. Good constraint programming depends on modeling the problem well, and on searching effectively. Poor representations or slow search techniques can make the difference between finding a good solution and finding no solution at all.

One example of constraint programming is scheduling: for instance, determining a schedule for a conference where there are 30 talks (that’s one constraint), only eight rooms to hold them in (that’s another constraint), and some talks can’t overlap (more constraints).
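To make the scheduling example concrete, here is a toy model written with the Python wrapper of Google’s or-tools constraint solver (discussed below). The talk, room and slot counts and the specific “can’t overlap” pairs are invented for illustration; a real conference model would add many more constraints.

```python
from ortools.constraint_solver import pywrapcp

solver = pywrapcp.Solver("conference_toy")

num_talks, num_rooms, num_slots = 6, 2, 3   # made-up sizes for illustration
slots = [solver.IntVar(0, num_slots - 1, "slot_%i" % t) for t in range(num_talks)]
rooms = [solver.IntVar(0, num_rooms - 1, "room_%i" % t) for t in range(num_talks)]

# No two talks may occupy the same (slot, room) pair.
solver.Add(solver.AllDifferent(
    [(slots[t] * num_rooms + rooms[t]).Var() for t in range(num_talks)]))

# Some talks can't overlap (e.g. shared speaker): force different time slots.
for a, b in [(0, 1), (2, 3)]:
    solver.Add(slots[a] != slots[b])

# Search strategy: pick the first unbound variable, try its smallest value first.
db = solver.Phase(slots + rooms,
                  solver.CHOOSE_FIRST_UNBOUND,
                  solver.ASSIGN_MIN_VALUE)
solver.NewSearch(db)
if solver.NextSolution():
    for t in range(num_talks):
        print("talk %d -> slot %d, room %d" % (t, slots[t].Value(), rooms[t].Value()))
solver.EndSearch()
```

Note how the two levers described above appear directly in the code: the model (the AllDifferent and inequality constraints) and the search strategy (the Phase that decides which variable to try next and with which value).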

Every year, some of the world’s top constraint programming researchers compete for medals in the MiniZinc challenge. Problems range from scheduling to vehicle routing to program verification and frequency allocation.

Google’s open source solver, or-tools, took two gold medals and two silver medals. The gold medals were in parallel and portfolio search, and the silver medals were in fixed and free search. Google’s success was due in part to integrating a SAT solver to handle Boolean constraints, and a new presolve phase inherited from integer programming.

Laurent Perron, a member of Google’s Optimization team and a lead contributor to or-tools, noted that every year brings fresh techniques to the competition: “One of the big surprises this year was the success of lazy-clause generation, which combines techniques from the SAT and constraint programming communities.”

If you’re interested in learning more about constraint programming, you can start at the Wikipedia page, or have a look at or-tools.

The full list of winners is available here.

Friday, 22 November 2013

New Research Challenges in Language Understanding



We held the first global Language Understanding and Knowledge Discovery Focused Faculty Workshop in Nanjing, China, on November 14-15, 2013. Thirty-four faculty members joined the workshop arriving from 10 countries and regions across APAC, EMEA and the US. Googlers from Research, Engineering and University Relations/University Programs also attended the event.

The 2-day workshop included keynote talks, panel discussions and break-out sessions [agenda]. It was an engaging and productive workshop, and we saw lots of positive interactions among the attendees. The workshop encouraged communication between Google and faculty around the world working in these areas.

Research in text mining continues to explore open questions relating to entity annotation, relation extraction, and more. The workshop’s goal was to brainstorm and discuss relevant topics to further investigate these areas. Ultimately, this research should help provide users search results that are much more relevant to them.

At the end of the workshop, participants identified four topics representing challenges and opportunities for further exploration in Language Understanding and Knowledge Discovery:

  • Knowledge representation, integration, and maintenance
  • Efficient and scalable infrastructure and algorithms for inferencing
  • Presentation and explanation of knowledge
  • Multilingual computation

Going forward, Google will be collaborating with academic researchers on a position paper related to these topics. We also welcome faculty interested in contributing to further research in this area to submit a proposal to the Faculty Research Awards program. Faculty Research Awards are one-year grants to researchers working in areas of mutual interest.

The faculty attendees responded positively to the focused workshop format, as it allowed time to go in depth into important and timely research questions. Encouraged by their feedback, we are considering similar workshops on other topics in the future.

Thursday, 21 November 2013

New Secondary Dimensions Provide Deeper Insights Into Your Users

Today we’ve added many new secondary dimensions to standard reports, including the much-asked-for Custom Dimensions.



Custom Dimensions is a new Universal Analytics feature that allows you to bring custom business data into Google Analytics. For example, a custom dimension can be used to collect friendly page names, whether the user is logged in, or a user tier (like Gold, Platinum, or Diamond).
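As an illustration, if the user tier were stored in custom dimension index 1, a hit carrying it could be sent with the Universal Analytics Measurement Protocol as in the sketch below. This is only a hedged example: the property ID, client ID and choice of index 1 are placeholders, and in practice you would usually set the dimension in your tracking code rather than server-side.

```python
import requests

payload = {
    "v": "1",                     # Measurement Protocol version
    "tid": "UA-XXXXX-Y",          # placeholder property ID
    "cid": "35009a79-1a05-49d7-b876-2b884d0f825b",  # anonymous client ID
    "t": "pageview",              # hit type
    "dp": "/pricing",             # page being viewed
    "cd1": "Diamond",             # custom dimension index 1 = user tier (assumed)
}
requests.post("https://www.google-analytics.com/collect", data=payload)
```

Once hits carry the dimension, it becomes available as a secondary dimension in the standard reports, as in the example that follows.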

By using Custom Dimensions in secondary dimensions, you can now refine standard reports to obtain deeper insights.




In the report above, Direct Traffic delivers the most traffic, but these are Gold users (lower value). At the same time, Google Search delivers the third- and fourth-most site traffic, and these are Diamond users (high value). The data therefore shows that this site should continue to invest in Google Search to attract more high-value users.

The new data in secondary dimensions gives analysts a powerful new tool. We’d love to hear about any new insights in the comments.

Posted by Nick Mihailovski, Product Manager

Wednesday, 20 November 2013

Insights on the fly: Introducing executive reporting from DoubleClick Search

The following post originally appeared on the DoubleClick Advertiser Blog.

Search marketers managing multiple campaigns across multiple accounts have to visualize their data in many different ways and tailor reporting for each group of stakeholders. Often, this means spending time pulling and aggregating reports, building macro-enabled spreadsheets, and wrangling your data into a specific format for a specific presentation -- only to do it all over again in a slightly different way the next time around. 

DoubleClick Search believes in making search marketing faster -- and we’ve invested in time-saving features like bulk editing enhancements, new scheduling options, and automated rules. Today, we’re excited to announce executive reporting, a fundamentally new way to report on and share your search campaign data.  

With executive reporting, you can quickly get to the insights you need. Take the data from all your search campaigns, segment as needed, present it in an easily consumable visual format, and share with team members and stakeholders -- all within the UI, without spending hours downloading, reconciling, and updating spreadsheets.  

Click image for full-sized version

As we designed executive reporting, we worked closely with our clients to ensure our solution was built to address the unique needs of search marketers, agency account managers, and executives. Matt Grebow, Sr. Manager, Search Marketing at TSA, who participated heavily in our feedback sessions, shared his need for richer export fidelity with the engineering team.

“Most reporting platforms let you export data in a raw format, but this means extensive formatting in Excel and a lot of coding. DoubleClick Search Executive Reporting is flexible enough to use across clients with different goals. We can create templates on the fly and export reports in a client-ready format.”

Three ways to get started with executive reporting
  • Daily account management and stakeholder communication: As an account manager, you can easily pick the subset of data and the visualizations you need for each set of stakeholders. The reports will stay up to date, and you can have them ready for meetings, or download and share through email at a moment’s notice -- saving you time for strategy.
  • High-level team management and oversight: As a business leader, you can see an overview of your entire business in one place. If you’re needed for an escalation, you can quickly pull reports to understand account health and spot issues -- so you’re never unprepared.
  • Market insights for competitive advantage: Another advantage of seeing your entire business at a glance: if you manage a large volume of accounts, you can quickly analyze market-level data and see which account or campaigns are underperforming. Then, dig in to understand why and get them back on track.
Keep an eye on the blog next week for a follow-up “Success with DS” post on how to get the most out of executive reporting. In the meantime, give the new reports a try and let your account team know what you think. If you don’t see the ‘Executive Reports’ tab in the DoubleClick Search interface, ask your account team to enable it for you. 

Over the coming months, we’ll continue to invest in easy, flexible reporting options for DoubleClick Search. If you have a data warehouse, business intelligence tool, or visualization software and you’re interested in seeing your search data alongside other metrics for reporting purposes, check out our reporting API, currently in open whitelist.

Posted by the DoubleClick Team

Tuesday, 19 November 2013

Optimizing AdSense Revenue Using Google Analytics

Recently Google Analytics launched two important new capabilities for its AdSense integration: AdSense Exits reports and AdSense Revenue as an experiment objective. They both come as great additions for websites that use AdSense for monetization. In this post I will go over the AdSense Analytics integration and how it can be used to optimize AdSense revenue.

Integrating AdSense and Google Analytics

Before going further into the wonders of the Analytics AdSense marriage, you should first be sure that your accounts are linked properly. Here is how to do it. First follow the steps in the screenshot below after logging into Google Analytics (Admin => AdSense Linking => Link Accounts): 

AdSense and Analytics Integration (click for full size)

You will be sent to your AdSense account in order to confirm the linking and then you will be sent back to Google Analytics to choose which profiles should include this data. If you have any problems or additional questions, take a look at the AdSense Help Center. After the integration is complete the following metrics will be available on your Google Analytics account:
  • AdSense revenue: revenue generated by AdSense ads.
  • Ads clicked: the number of times AdSense ads were clicked.
  • AdSense CTR (click-through rate): the percentage of page impressions that resulted in a click on an ad.
  • AdSense eCPM: AdSense revenue per 1,000 page impressions.
  • AdSense ads viewed: number of ads viewed.
  • AdSense Page Impressions: the number of pageviews during which an ad was displayed.

AdSense Reports On Google Analytics

Currently, there are three out-of-the-box AdSense reports available in Analytics: Pages, Referrers and Exits. You can find them here (direct link to report).

1. AdSense Pages

This report provides information about which pages contributed most to AdSense revenue. It shows each page on the website and how well it performed in terms of AdSense. For each page that contains an AdSense unit, you will be able to analyze the following metrics: AdSense revenue, AdSense ads clicked, AdSense CTR, AdSense eCPM, AdSense ads viewed and AdSense page impressions. 

This report provides an interesting view of which page performed best, and it can be used to optimize website content. For example, if you find that posts about celebrities generate more revenue than posts about soccer, you might consider writing more about celebrities (if your main objective is to make money on AdSense.)

2. AdSense Referrers

This report provides information about the performance of domains that referred visitors who generated AdSense revenue. This information is extremely valuable; however, I suggest using a different report, since it provides more in-depth information: “All Traffic”. 

The AdSense Referrers report only displays information about websites that generated AdSense revenue; it does not provide information on other types of traffic sources and campaigns. For this reason, I believe the All Traffic report presents a more complete view. To find the report, go to this page (direct link to report) and click on the AdSense tab just above the chart.

3. AdSense Exits

The AdSense Exits report shows the number of sessions that ended due to a user clicking on an AdSense ad. This is an interesting metric, as it can show which pages have a high “conversion rate”, i.e. the ratio between visits to a page and visits that left the website by clicking on an AdSense unit on that page. If you monetize through AdSense, this report gives you just that: an AdSense conversion rate per page.

Optimizing AdSense revenue using Google Analytics

Below is an example of how to use the integration from my Analytics for Publishers eBook. Most websites work with templates and each template may have different AdSense placements; this means that an important analysis would be to compare performance by template (or by category) rather than by page. 

In order to analyze template performance, we will need to create one segment per template. If you want to learn more about creating Segments, check this Help Center article. For example, let’s suppose your website has the following page templates:
  • Analytics pages with URLs structured as example.com/analytics/...
  • Testing pages with URLs structured as example.com/testing/...
  • Targeting pages with URLs structured as example.com/targeting/...
In this case you would create three segments using the dimension Page, each containing its unique pattern: /analytics/ for analytics pages, /testing/ for testing pages, and /targeting/ for targeting pages. Below is an example of how the segment would look for the analytics pages: 

Analyzing template performance using segments (click for full size) 

After creating the segments for all three templates, you will be able to choose all of them in the top-left corner of the screen (just above the chart, see bubble #1 above) to see a comparison between them. Below is a screenshot showing how such a comparison would look: 

Table comparison metrics for different visitor segments (click for full size)
In the table above we are able to compare the pages by all available metrics. For example, we can see that while the Analytics section has higher revenue, this is related to its number of impressions, which is also significantly higher. When we analyze further, we see that the Testing and Targeting sections have good potential, with the same CTR but significantly higher AdSense eCPM. Based on these metrics we can understand which templates and content types are the most effective. 
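If you prefer to run the same comparison offline, a rough sketch with pandas over an exported page-level report is shown below. The file name and column names ("page", "adsense_revenue", "adsense_impressions", "adsense_clicks") are assumptions about your export, not official Google Analytics field names.

```python
import pandas as pd

df = pd.read_csv("adsense_pages.csv")   # hypothetical export of the AdSense Pages report

def template(path):
    # Map a page path to its template, following the URL patterns above.
    if path.startswith("/analytics/"):
        return "analytics"
    if path.startswith("/testing/"):
        return "testing"
    if path.startswith("/targeting/"):
        return "targeting"
    return "other"

df["template"] = df["page"].apply(template)
summary = df.groupby("template").agg(
    revenue=("adsense_revenue", "sum"),
    impressions=("adsense_impressions", "sum"),
    clicks=("adsense_clicks", "sum"),
)
summary["ctr"] = summary["clicks"] / summary["impressions"]
summary["ecpm"] = summary["revenue"] / summary["impressions"] * 1000
print(summary.sort_values("ecpm", ascending=False))
```

Sorting by eCPM (revenue per 1,000 impressions) surfaces the templates with the most headroom, mirroring the segment comparison above.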

As mentioned above, once you find out which pages are performing well and which pages are not, you can use Content Experiments to optimize them. Here is a Content Experiments guide.

Closing Thoughts

Here are a few takeaways for you to start optimizing today!
  1. Understand which content type and subject generates the highest revenue and create content based on this data.
  2. Understand which page templates bring the best results by using advanced segments.
  3. Analyze AdSense performance to learn which segments have a good CTR; this might bring insight into which audience to target.

Unique Strategies for Scaling Teacher Professional Development



Research shows that professional development for educators has a direct, positive impact on students, so it’s no wonder that institutions are eager to explore creative ways to enhance professional development for K-12 teachers. Open source MOOC platforms, such as Course Builder, offer the flexibility to extend the reach of standard curriculum; recently, several courses have launched that demonstrate new and creative applications of MOOCs. With their wide reach, participant engagement, and rich content, MOOCs that offer professional development opportunities for teachers bring flexibility and accessibility to an important area.

This summer, the ScratchEd team out of Harvard University launched the Creative Computing MOOC, a six-week, self-paced workshop focused on building computational thinking skills in the classroom. As a MOOC, the course had 2,600 participants, who created more than 4,700 Scratch projects and engaged in 3,500 forum discussions, compared to the “in-person” class held last year, which reached only 50 educators.

Other creative uses of Course Builder for educator professional development come from National Geographic and Annenberg Learner, who joined forces to develop Water: The Essential Resource, a course built around California’s Education and Environment Initiative. The Friday Institute’s MOOC, Digital Learning Transitions, focused on the benefits of utilizing educational technology and reached educators across 50 states and 68 countries worldwide. The course design included embedded peer support, project-based learning, and case studies; a post-course survey showed an overwhelming majority of respondents “were able to personalize their own learning experiences” in an “engaging, easy to navigate” curriculum and greatly appreciated the 24/7 access to materials.

In addition to participant surveys, course authors using the Course Builder platform are able to conduct deeper analysis via web analytics and course data to assess course effectiveness and make improvements for future courses.

New opportunities to experience professional development MOOCs are rapidly emerging; the University of Adelaide recently announced their Digital Technology course to provide professional development for primary school teachers on the new Australian curriculum, the Google in Education team just launched a suite of courses for teachers using Google technologies, and the Friday Institute course that aligns with the U.S. based Common Core State Standards is now available.

We’re excited about the innovative approaches underway and the positive impact they can have for students and teachers around the world. We also look forward to seeing new, creative applications of MOOC platforms in uncharted territory.

Monday, 18 November 2013

Learning what moves the needle most with Data-Driven Attribution

"Tremendously useful."  That's what Chris Bawden of the TechSmith Corporation says about Data-Driven Attribution.

What is Data-Driven Attribution? Well, in August we launched a technological leap that uses algorithmic models and reports to help take the guesswork out of attribution. And it's available now to Google Analytics Premium customers around the world.

Data-Driven Attribution uses statistical probabilities and economic algorithms to analyze each customer's journey in a new way. You define the results that count — sales, sign-ups, or whatever matters to you — and the model assigns value to marketing touchpoints automatically, comparing actions and probabilities to show you which digital channels and keywords move the needle most. 
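For intuition about what comparing converting and non-converting paths can look like, here is a deliberately simplified removal-effect toy in Python. It is not Google's Data-Driven Attribution model; it only illustrates the general idea of crediting a channel by how much conversions drop when journeys touching that channel are taken away. The paths and conversion flags are invented.

```python
paths = [
    (["display", "search", "email"], 1),   # (touchpoint path, converted?)
    (["search", "email"], 1),
    (["display"], 0),
    (["email"], 0),
    (["search"], 1),
]

def conversions(paths, removed=None):
    """Total conversions if journeys touching `removed` are assumed to fail."""
    return sum(conv for path, conv in paths
               if removed is None or removed not in path)

base = conversions(paths)
channels = {c for path, _ in paths for c in path}
removal_effect = {c: base - conversions(paths, removed=c) for c in channels}

# Normalize removal effects into fractional credit per channel.
total = sum(removal_effect.values()) or 1
credit = {c: effect / total for c, effect in removal_effect.items()}
print(credit)   # in this toy data, search earns the most credit
```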

The bottom line: better returns on your marketing and ad spend. 

We checked in with companies using DDA and results have been strong:
  • "Data Driven Attribution really showed us where we were driving conversions," says Will Lin, Senior Director of Global eMarketing for HomeAway. They saw a 23% increase in attributed conversions for their test keywords after making changes suggested by Data Driven Attribution. Download case study.
  • TechSmith Corporation saw a 19% increase in attributed conversions under the Data Driven Attribution model. "It uncovered growth potential we would have not seen otherwise," reports Nicole Remington, their Search Marketing Manager. Download case study.
  • And the digital analytics firm MaassMedia saw display leads increase 10% while costs per lead remained flat. "We now have a much more accurate measure of how display impacts our business," one of their clients told them. Download case study.
In short, the early returns for DDA users have been strong. Some of the key advantages of this model:

Algorithmic and automatic: The model distributes credit across marketing channels scientifically, based on success metrics you define. 

Transparent: Our unique Model Explorer gives you full insight into how marketing touch points are valued — no “black box” methodology.

Actionable: Detailed insights into both converting and non-converting paths offer clear guidance for your marketing decisions.

Cross-platform: DDA is deeply integrated with other Google products like AdWords, the Google Display Network, and YouTube, and you can pull in data from most any digital channel.

You'll learn much more about the benefits of Data-Driven Attribution when you download our cheat sheet. Or to learn more about Google Analytics Premium, contact your Google Account Manager or visit google.com/analytics/premium.

Posted by Bill Kee, Product Manager for Attribution, and Jody Shapiro, Product Manager for Google Analytics Premium

Friday, 15 November 2013

Moore’s Law Part 4: Moore's Law in other domains

This is the last entry of a series focused on Moore’s Law and its implications moving forward, edited from a white paper on Moore’s Law written by Google University Relations Manager Michel Benard. This series quotes major sources about Moore’s Law and explores how they believe Moore’s Law will likely continue over the course of the next several years. We will also explore whether there are fields other than digital electronics that either have an emerging Moore’s Law of their own, or the promise of such a law driving their future performance.

--

The quest for Moore’s Law and its potential impact in other disciplines is a journey the technology industry is just starting: crossing the Rubicon from the semiconductor industry into other, less explored fields, while carrying the particular mindset that Moore’s Law created. Our goal is to explore whether there are Moore’s Law opportunities emerging in other disciplines, and what their potential impact would be. To that end, we interviewed several professors and researchers and asked them whether they could see emerging ‘Moore’s Laws’ in their discipline. Listed below are some highlights of those discussions, ranging from CS+ to potential in the energy sector:

Sensors and Data Acquisition
Ed Parsons, Google Geospatial Technologist
The More than Moore discussion can be extended beyond the main chip, to the board that carries it or to the device the user is carrying. Greater sensor capabilities (for the measurement of pressure, electromagnetic fields and other local conditions) allow sensors to be included in smartphones, glasses, or other devices and to perform local data acquisition. This trend is strong, and should allow future devices benefiting from Moore’s Law to receive enough data to support more complex applications.

Metcalfe’s Law states that the value of a telecommunication network is proportional to the square of the number of connected nodes in the system. This law can be used in parallel with Moore’s Law to evaluate the value of the Internet of Things. The network itself can be seen as composed of layers: at the user’s local level (to capture data related to the user’s body, or to immediately accessible objects), locally around the user (such as data from within the same street as the user), and finally globally (data from the global internet). The extrapolation made earlier in this series (several TB available in flash memory) will make it possible to construct, exchange and download/upload entire contexts for a given situation or application, and to use these contexts with very little or even no network activity.

Future of Moore’s Law and its impact on Physics
Sverre Jarp, CERN
CERN and its experiments with the Large Electron-Positron Collider (LEP) and the Large Hadron Collider (LHC) generate data on the order of a petabyte per year; this data has to be filtered, processed and analyzed in order to find meaningful physics events leading to new discoveries. In this context Moore’s Law has been particularly helpful in allowing computing power, storage and networking capabilities at CERN and at other High Energy Physics (HEP) centers to scale up regularly. Several generations of hardware and software have been exhausted during the journey from mainframes to today’s clusters.

CERN has a long tradition of collaboration with chip manufacturers and hardware and software vendors to understand and predict the next trends in the computing evolution curve. Recent analysis indicates that Moore’s Law will likely continue over the next decade. The statement of ‘several TB of flash memory availability by 2025’ may even be a little conservative according to the most recent analysis.

Big Data Visualizations
Katy Börner, Indiana University
Thanks to Moore’s Law, the amount of data available for any given phenomenon, whether sensed or simulated, has been growing by several orders of magnitude over the past decades. Intelligent sampling can be used to filter out the most relevant bits of information and is practiced in Physics, Astronomy, Medicine and other sciences. Subsequently, data needs to be analyzed and visualized to identify meaningful trends and phenomena, and to communicate them to others.

While most people learn in school how to read charts and maps, many never learn how to read a network layout—data literacy remains a challenge. The Information Visualization Massive Open Online Course (MOOC) at Indiana University teaches students from more than 100 countries not only how to read but also how to design meaningful network, topical, geospatial, and temporal visualizations. Using the tools introduced in this free course, anyone can analyze, visualize, and navigate complex data sets to understand patterns and trends.

Candidate for Moore’s Law in Energy
Professor Francesco Stellacci, EPFL
It is currently hard to see a “Moore’s Law” applying to candidates in energy technology. Nuclear fusion could hold some positive surprises, if several significant breakthroughs are made in the process of creating usable energy with this technique. For any other technology, growth will be slower. The best solar cells today have about 30% efficiency, which could of course scale higher (though obviously not by much more than a factor of three). Cost could also be driven down by an order of magnitude. Best estimates, however, show a combined performance improvement of roughly a factor of 30 over many years.

Further Discussion of Moore’s Law in Energy
Ross Koningstein, Google Director Emeritus
As of today there is no obvious Moore’s Law in the energy sector that could decrease major costs by 50% every 18 months. However, material properties at the nanoscale, and chemical processes such as catalysis, are being investigated and could lead to promising results. The targeted applications are hydrocarbon creation at scale and improvement of oil-refining processes, where breakthroughs in micro- and nano-scale catalysts are being pursued. Hydrocarbons are much more compatible at scale with the existing automotive/aviation and natural gas distribution systems. Here in California, Google Ventures has invested in Cool Planet Energy Systems, a company with neat technology that can convert biomass to gasoline/jet fuel/diesel with impressive efficiency.

One of the challenges is the ability to run many experiments at low cost per experiment, instead of only a few expensive experiments per year. Discoveries are likely to happen faster if more experiments are conducted. This requires heavier investment, which is difficult to achieve within slim-margin businesses. The nurturing of disruptive businesses is therefore likely to come from new players, alongside those existing players that decide to fund significant new investments.

Of course, these discussions could be opened for many other sectors. The opportunities for more discourse on the impact and future of Moore’s Law on CS and other disciplines are abundant, and can be continued with your comments on the Research at Google Google+ page. Please join, and share your thoughts.

Sharpen Your Analysis Skills Over The Holidays At The Analytics Academy

Last month, we launched the Analytics Academy, a new hub for all users to participate in free, online, community-based video courses about digital analytics and Google Analytics. 

We’re pleased to share that more than 145,000 students signed up for the Academy. Our team is delighted that so many are interested in advancing their skills as marketers, analysts and business owners. And while the current window to earn a certificate has ended, the good news is that the educational materials remain available for everyone to access. 

Don’t fall behind your peers: the Holidays present the perfect opportunity to put some time aside and learn the latest in analytics. Start your journey in the Academy today by completing Digital Analytics Fundamentals. This way, you’ll be ready to go when we announce the next course in early 2014. 

Some key highlights from the course include:
  • An overview of today’s digital measurement landscape
  • Guidance on how to build an effective measurement plan
  • Best practices for collecting actionable data
  • Descriptions of key digital measurement concepts, terminology and analysis techniques
  • Deep-dives into Google Analytics reports with specific examples for evaluating your digital marketing performance
And a quick bonus for everyone: we recently conducted a Hangout on Air with Google Analytics Evangelist (and course Instructor) Justin Cutroni and Digital Marketing Evangelist Avinash Kaushik that’s a must-watch for all course participants and the Analytics community as a whole. We’ve embedded it below in case you were unable to attend live.


The next course is scheduled to start in early 2014 and will cover how to progress from measurement planning to implementation. We’ll be sharing more information with you soon. 

Posted by the Google Analytics Team

Thursday, 14 November 2013

The first detailed maps of global forest change



Most people are familiar with exploring images of the Earth’s surface in Google Maps and Earth, but of course there’s more to satellite data than just pretty pictures. By applying algorithms to time-series data it is possible to quantify global land dynamics, such as forest extent and change. Mapping global forests over time not only enables many science applications, such as climate change and biodiversity modeling efforts, but also informs policy initiatives by providing objective data on forests that are ready for use by governments, civil society and private industry in improving forest management.

In a collaboration led by researchers at the University of Maryland, we built a new map product that quantifies global forest extent and change from 2000 to 2012. This product is the first of its kind, a global 30 meter resolution thematic map of the Earth’s land surface that offers a consistent characterization of forest change at a resolution that is high enough to be locally relevant as well. It captures myriad forest dynamics, including fires, tornadoes, disease and logging.

Global 30 meter resolution thematic maps of the Earth’s land surface: Landsat composite reference image (2000), summary map of forest loss, extent and gain (2000-2012), individual maps of forest extent, gain, loss, and loss color-coded by year. Click to enlarge
The satellite data came from the Enhanced Thematic Mapper Plus (ETM+) sensor onboard the NASA/USGS Landsat 7 satellite. The expertise of NASA and USGS, from satellite design to operations to data management and delivery, is critical to any earth system study using Landsat data. For this analysis, we processed over 650,000 ETM+ images in order to characterize global forest change.

Key to the study’s success was the collaboration between remote sensing scientists at the University of Maryland, who developed and tested models for processing and characterizing the Landsat data, and computer scientists at Google, who oversaw the implementation of the final models using Google’s Earth Engine computation platform. Google Earth Engine is a massively parallel technology for high-performance processing of geospatial data, and houses a copy of the entire Landsat image catalog. For this study, a total of 20 terapixels of Landsat data were processed using one million CPU-core hours on 10,000 computers in parallel, in order to characterize year 2000 percent tree cover and subsequent tree cover loss and gain through 2012. What would have taken a single computer 15 years to perform was completed in a matter of days using Google Earth Engine computing.
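Researchers with Earth Engine access can also work with the published layers programmatically. The sketch below uses the Earth Engine Python API; the asset ID and band name are assumptions based on the public Hansen/UMD dataset naming, and the bounding box is an arbitrary illustrative region.

```python
import ee

ee.Initialize()

# Assumed asset ID for the 2000-2012 global forest change product.
gfc = ee.Image("UMD/hansen/global_forest_change_2013")
loss = gfc.select("loss")        # assumed band: 1 where forest was lost, 2000-2012

# Approximate forest-loss area inside an arbitrary bounding box (result in m^2).
region = ee.Geometry.Rectangle([-62.0, -22.0, -60.0, -20.0])
loss_area = loss.multiply(ee.Image.pixelArea()).reduceRegion(
    reducer=ee.Reducer.sum(), geometry=region, scale=30, maxPixels=1e9)
print(loss_area.getInfo())       # divide by 1e6 to get square kilometers
```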

Global forest loss totaled 2.3 million square kilometers and gain 0.8 million square kilometers from 2000 to 2012. Among the many results is the finding that tropical forest loss is increasing with an average of 2,101 additional square kilometers of forest loss per year over the study period. Despite the reduction in Brazilian deforestation over the study period, increasing rates of forest loss in countries such as Indonesia, Malaysia, Tanzania, Angola, Peru and Paraguay resulted in a statistically significant trend in increasing tropical forest loss. The maps and statistics from this study fill an information void for many parts of the world. The results can be used as an initial reference for countries lacking such information, as a spur to capacity building in such countries, and as a basis of comparison in evolving national forest monitoring methods. Additionally, we hope it will enable further science investigations ranging from the evaluation of the integrity of protected areas to the economic drivers of deforestation to carbon cycle modeling.

The Chaco woodlands of Bolivia, Paraguay and Argentina are under intensive pressure from agroindustrial development. Paraguay’s Chaco woodlands within the western half of the country are experiencing rapid deforestation in the development of cattle ranches. The result is the highest rate of deforestation in the world. Click to enlarge
Global map of forest change: http://earthenginepartners.appspot.com/science-2013-global-forest

If you are curious to learn more, tune in next Monday, November 18 to a live-streamed, online presentation and demonstration by Matt Hansen and colleagues from UMD, Google, USGS, NASA and the Moore Foundation:

Live-stream Presentation: Mapping Global Forest Change
Live online presentation and demonstration, followed by Q&A
Monday, November 18, 2013 at 1pm EST, 10am PST
Link to live-streamed event: http://goo.gl/JbWWTk
Please submit questions here: http://goo.gl/rhxK5X

For further results and details of this study, see High-Resolution Global Maps of 21st-Century Forest Cover Change in the November 15th issue of the journal Science.

Wednesday, 13 November 2013

Moore’s Law, Part 3: Possible extrapolations over the next 15 years and impact



This is the third entry of a series focused on Moore’s Law and its implications moving forward, edited from a white paper on Moore’s Law written by Google University Relations Manager Michel Benard. This series quotes major sources about Moore’s Law and explores how they believe Moore’s Law will likely continue over the course of the next several years. We will also explore whether there are fields other than digital electronics that either have an emerging Moore’s Law of their own, or the promise of such a law driving their future performance.

--

More Moore
We examine data from the ITRS 2012 Overall Roadmap Technology Characteristics (ORTC 2012) and select notable interpolations. The chart below shows chip size trends up to the year 2026 along with the “Average Moore’s Law” line. Additionally, in the ORTC 2011 tables we find data on 3D chip layer increases (up to 128 layers), including costs. Finally, the ORTC 2011 index sheet estimates that the DRAM cost at production will be ~0.002 microcents per bit by ~2025. From these sources we draw three More Moore (MM) extrapolations for the year 2025 (a toy doubling calculation follows the list below):

  • 4Tb Flash multi-level cell (MLC) memory will be in production
  • There will be ~100 billion transistors per microprocessing unit (MPU)
  • 1TB RAM Memory will cost less than $100
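The arithmetic behind figures like these is simply repeated doubling. A toy calculation, using a placeholder 2013 baseline rather than an actual ITRS figure:

```python
def extrapolate(baseline, start_year, end_year, doubling_period_years=2.0):
    """Project a quantity forward assuming Moore's-Law-style doubling."""
    doublings = (end_year - start_year) / doubling_period_years
    return baseline * 2 ** doublings

# Hypothetical 128 Gb flash die in 2013, doubling every two years:
print(extrapolate(128, 2013, 2025))   # 8192 Gb, i.e. a few Tb per die by 2025
```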


More than Moore
It should be emphasized that “More than Moore” (MtM) technologies do not constitute an alternative or even a competitor to the digital trend as described by Moore’s Law. In fact, it is the heterogeneous integration of digital and non-digital functionalities into compact systems that will be the key driver for a wide variety of application fields. Whereas MM may be viewed as the brain of an intelligent compact system, MtM refers to its capabilities to interact with the outside world and the users.

As such, functional diversification may be regarded as a complement to digital signal and data processing in a product. This includes interaction with the outside world through sensors and actuators and the subsystem for powering the product, implying analog and mixed-signal processing, the incorporation of passive and/or high-voltage components, micro-mechanical devices enabling biological functionalities, and more. While MtM looks very promising for a variety of diversification topics, the ITRS study does not give figures from which “solid” extrapolations can be made. However, we can make some safe (and not-so-safe) bets looking toward 2025, and examine what these extrapolations mean for the user.

Today we have 1TB hard disk drives (HDDs) for $100, but the access speed to data on the disk does not allow us to take full advantage of this data in a fully interactive, or even practical, way. More importantly, the size and construction of HDDs do not allow them to be incorporated into mobile devices. Solid state drives (SSDs), in comparison, have similar data transfer rates (~1Gb/s), latencies typically 100 times lower than HDDs, and a significantly smaller form factor with no moving parts. The promise of offering several TB of flash memory cost-effectively by 2025, in a device carried along during the day (e.g. a smartphone, watch or clothing), represents a paradigm shift with regard to today’s situation; it will empower users by moving them from an environment where local data needs to be refreshed frequently (as with augmented reality applications) to one where full contextual data is available locally and refreshed only when critically needed.

If data on the order of terabytes is pre-loaded, a complete contextual data set can be loaded before an action or a movement, and the device can apply its local intelligence as the action progresses, regardless of network availability or performance. This opens up the possibility of combining local 3D models and remote inputs, allowing applications like 3D conferencing to become available. The development and use of 3D avatars could even facilitate many social interaction models. To benefit from such applications, the use of personal devices such as Google Glass may become pervasive, allowing users to navigate 3D scenes and environments naturally, as well as facilitating 3D conferencing and its “social” interactions.

The opportunities for more discourse on the impact and future of Moore’s Law on CS and other disciplines are abundant, and can be continued with your comments on the Research at Google Google+ page. Please join, and share your thoughts.

Tuesday, 12 November 2013

Launching Real-Time Events and Conversions Reports out of Beta

It wasn’t too long ago that we launched events and conversions in Google Analytics Real-Time reports. We’ve heard from many of you who are using Real-Time reporting to test changes to your site, create dashboards to monitor your traffic and goals, or make rapid decisions about what content to promote today.   

That’s why we’re excited to announce that Real-Time events and conversions reports will be coming out of Beta to all users over the next few weeks. Based on your feedback, we’ve refined these reports to make them even more valuable:  you’ll soon see them in App profiles, and we’ve also added dedicated metrics for Unique Visitors.  

Events and Conversions in App Profiles

Once the development is done, launching a new version of your app is always a bit nerve-wracking. So many problems can happen at that stage, and most of the channels for finding out that things went wrong aren’t very helpful for understanding how your whole user base is experiencing your app. Wouldn’t you like to know what your users are experiencing, not just what your servers or social media are telling you? With event tracking in real-time, you can use event labels and values to measure interactions on your users’ devices, so you can best understand and respond to what users are seeing in the wild.
(Click image for full-sized version)
Perhaps your latest release adds some new social features, and you want to maximize the number of people sharing content. You can add an event to a small interaction, or a view snippet to a dedicated dialog screen. Once you’ve done that, create a goal based on that interaction, and see it show up in real-time as users help spread the word.
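If your app reports through the Universal Analytics Measurement Protocol rather than an SDK, such a share event could be sent as in the hedged sketch below; the property ID, app name and event category/action/label values are placeholders.

```python
import requests

payload = {
    "v": "1",                 # Measurement Protocol version
    "tid": "UA-XXXXX-Y",      # placeholder property ID
    "cid": "555",             # anonymous client/device ID
    "t": "event",             # hit type
    "an": "MyApp",            # application name (placeholder)
    "av": "2.1.0",            # application version (placeholder)
    "ec": "social",           # event category
    "ea": "share",            # event action
    "el": "new-share-dialog", # event label
    "ev": "1",                # event value
}
requests.post("https://www.google-analytics.com/collect", data=payload)
```

A goal defined on that event then surfaces in the Real-Time Conversions report as the hits arrive.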

Unique Visitors for Events and Conversions

Google Analytics users are a creative bunch, and use events and conversions for an incredibly wide variety of things - from caffeine to detailed web interactions. We’re always doing our best to help you understand your users better, which is why we’ve added Active Visitors metrics to the Real-Time Events and Conversions reports. Sharing a link or staying on a page for several minutes is great, but it’d be even better to be able to understand what percentage of your users hit a certain event or reach a particular goal in real time.  


When you create a new advertising campaign, blog post, or social media engagement, your traffic usually goes up - but without knowledge of how individual users behave, it’s difficult to gauge the quality of your traffic as it changes. By looking at Active Visitor Conversions in real-time, you can better understand your conversion funnel as it’s happening: whether users are just browsing, or whether they’re actively engaging and converting.  

We hope you find the new reports valuable! We’d love to hear how you use them - let us know in the comments. Happy analyzing!

Posted by Jon Mesh, Google Analytics team

Moore’s Law, Part 2: More Moore and More than Moore

This is the second entry of a series focused on Moore’s Law and its implications moving forward, edited from a white paper on Moore’s Law written by Google University Relations Manager Michel Benard. This series quotes major sources about Moore’s Law and explores how they believe Moore’s Law will likely continue over the course of the next several years. We will also explore whether there are fields other than digital electronics that either have an emerging Moore’s Law of their own, or the promise of such a law driving their future performance.

--

One of the fundamental lessons derived from the past successes of the semiconductor industry comes from the observation that most of the innovations of the past ten years—those that have indeed revolutionized the way CMOS transistors are manufactured nowadays—were initiated 10–15 years before they were incorporated into the CMOS process. Strained silicon research began in the early 90s, high-κ/metal-gate work was initiated in the mid-90s and multiple-gate transistors were pioneered in the late 90s. This fundamental observation generates a simple but fundamental question: “What should the ITRS do to identify now what the extended semiconductor industry will need 10–15 years from now?”
- International Technology Roadmap for Semiconductors 2012

More Moore
As we look at the years 2020–2025, we can see that the physical dimensions of CMOS manufacture are expected to be crossing below the 10 nanometer threshold. It is expected that as dimensions approach the 5–7 nanometer range it will be difficult to operate any transistor structure that utilizes metal-oxide semiconductor (MOS) physics as its basic principle of operation. Of course, we expect that new devices, like the very promising tunnel transistors, will allow a smooth transition from traditional CMOS to this new class of devices to reach these new levels of miniaturization. However, it is becoming clear that fundamental geometrical limits will be reached in the above timeframe. By fully utilizing the vertical dimension, it will be possible to stack layers of transistors on top of each other, and this 3D approach will continue to increase the number of components per square millimeter even when horizontal physical dimensions are no longer amenable to any further reduction. It seems important, then, that we ask ourselves a fundamental question: “How will we be able to increase computation and memory capacity when the devices’ physical limits are reached?” It becomes necessary to re-examine how we can get more information into a finite amount of space.

The semiconductor industry has thrived on Boolean logic; after all, for most applications the CMOS devices have been used as nothing more than an “on-off” switch. Consequently, it becomes of paramount importance to develop new techniques that allow the use of multiple (i.e., more than 2) logic states in any given and finite location, which evokes the magic of “quantum computing” looming in the distance. Short of reaching this goal, however, an active field of research involves increasing the number of states available, e.g. to 4–10 states, and increasing the number of “virtual transistors” by 2 every 2 years.


More than Moore
During the blazing progress propelled by Moore’s Law of semiconductor logic and memory products, many “complementary” technologies have progressed as well, although not necessarily scaling with Moore’s Law. Heterogeneous integration of multiple technologies has generated “added value” for devices with multiple applications, beyond the traditional semiconductor logic and memory products that had led the semiconductor industry from the mid 60s to the 90s. A variety of wireless devices contain typical examples of this confluence of technologies, e.g. logic and memory devices, display technology, microelectromechanical systems (MEMS), RF and Analog/Mixed-signal technologies (RF/AMS), etc.

The ITRS has incorporated More than Moore and RF/AMS chapters in the main body of the ITRS, but it is uncertain whether this is sufficient to encompass the plethora of associated technologies now entangled in modern products, or the multi-faceted consumer public that has become an influential driver of the semiconductor industry, demanding custom functionality in commercial electronic products. In the next blog of this series, we will examine selected data from the ITRS Overall Roadmap Technology Characteristics (ORTC) 2012 and attempt to extrapolate the progress of the next 15 years, and its potential impact.

The opportunities for more discourse on the impact and future of Moore’s Law on CS and other disciplines are abundant, and can be continued with your comments on the Research at Google Google+ page. Please join, and share your thoughts.

Monday, 11 November 2013

Moore’s Law, Part 1: Brief history of Moore's Law and current state

This is the first entry of a series focused on Moore’s Law and its implications moving forward, edited from a white paper on Moore’s Law written by Google University Relations Manager Michel Benard. This series quotes major sources about Moore’s Law and explores how they believe Moore’s Law will likely continue over the course of the next several years. We will also explore whether there are fields other than digital electronics that either have an emerging Moore’s Law of their own, or the promise of such a law driving their future performance.


---

Moore's Law is the observation that over the history of computing hardware, the number of transistors on integrated circuits doubles approximately every two years. The period often quoted as "18 months" is due to Intel executive David House, who predicted that period for a doubling in chip performance (being a combination of the effect of more transistors and their being faster). -Wikipedia

Moore’s Law is named after Intel co-founder Gordon E. Moore, who described the trend in his 1965 paper. In it, Moore noted that the number of components in integrated circuits had doubled every year from the invention of the integrated circuit in 1958 until 1965 and predicted that the trend would continue "for at least ten years". Moore’s prediction has proven to be uncannily accurate, in part because the law is now used in the semiconductor industry to guide long-term planning and to set targets for research and development.

The capabilities of many digital electronic devices are strongly linked to Moore's law: processing speed, memory capacity, sensors and even the number and size of pixels in digital cameras. All of these are improving at (roughly) exponential rates as well (see Other formulations and similar laws). This exponential improvement has dramatically enhanced the impact of digital electronics in nearly every segment of the world economy, and is a driving force of technological and social change in the late 20th and early 21st centuries.

Most improvement trends have resulted principally from the industry’s ability to exponentially decrease the minimum feature sizes used to fabricate integrated circuits. Of course, the most frequently cited trend is in integration level, which is usually expressed as Moore’s Law (that is, the number of components per chip doubles roughly every 24 months). The most significant trend is the decreasing cost-per-function, which has led to significant improvements in economic productivity and overall quality of life through proliferation of computers, communication, and other industrial and consumer electronics.

Transistor counts for integrated circuits plotted against their dates of introduction. The curve shows Moore's law - the doubling of transistor counts every two years. The y-axis is logarithmic, so the line corresponds to exponential growth

All of these improvement trends, sometimes called “scaling” trends, have been enabled by large R&D investments. In the last three decades, the growing size of the required investments has motivated industry collaboration and spawned many R&D partnerships, consortia, and other cooperative ventures. To help guide these R&D programs, the Semiconductor Industry Association (SIA) initiated the National Technology Roadmap for Semiconductors (NTRS) in 1992. Since its inception, a basic premise of the NTRS has been that continued scaling of electronics would further reduce the cost per function and promote market growth for integrated circuits. Thus, the Roadmap has been put together in the spirit of a challenge—essentially, “What technical capabilities need to be developed for the industry to stay on Moore’s Law and the other trends?”

In 1998, the SIA was joined by corresponding industry associations in Europe, Japan, Korea, and Taiwan to participate in a 1998 update of the Roadmap and to begin work toward the first International Technology Roadmap for Semiconductors (ITRS), published in 1999. The overall objective of the ITRS is to present industry-wide consensus on the “best current estimate” of the industry’s research and development needs out to a 15-year horizon. As such, it provides a guide to the efforts of companies, universities, governments, and other research providers or funders. The ITRS has improved the quality of R&D investment decisions made at all levels and has helped channel research efforts to areas that most need research breakthroughs.

For more than half a century these scaling trends have continued, and sources in 2005 expected them to continue until at least 2015 or 2020. However, the 2010 update to the ITRS has growth slowing at the end of 2013, after which time transistor counts and densities are to double only every three years. Accordingly, since 2007 the ITRS has addressed the concept of functional diversification under the title “More than Moore” (MtM). This concept addresses an emerging category of devices that incorporate functionalities that do not necessarily scale according to “Moore's Law,” but provide additional value to the end customer in different ways.

The MtM approach typically allows for the non-digital functionalities (e.g., RF communication, power control, passive components, sensors, actuators) to migrate from the system board-level into a particular package-level (SiP) or chip-level (SoC) system solution. It is also hoped that by the end of this decade, it will be possible to augment the technology of constructing integrated circuits (CMOS) by introducing new devices that will realize some “beyond CMOS” capabilities. However, since these new devices may not totally replace CMOS functionality, it is anticipated that either chip-level or package level integration with CMOS may be implemented.

The ITRS provides a very comprehensive analysis of the perspective for Moore’s Law when looking towards 2020 and beyond. The analysis can be roughly segmented into two trends: More Moore (MM) and More than Moore (MtM). In the next blog in this series, we will look into the recent conclusions mentioned in the ITRS 2012 report on both trends.

The opportunities for more discourse on the impact and future of Moore’s Law on CS and other disciplines are abundant, and can be continued with your comments on the Research at Google Google+ page. Please join, and share your thoughts.

Tuesday, 5 November 2013

Improve your website’s performance with the new Speed Suggestions report

Users prefer fast sites, and businesses benefit too: faster sites tend to have lower bounce rates, increased customer satisfaction and better engagement. Site owners agree, and it shows in the actions they take to optimize site speed: our own benchmarks over the last two years show the web is getting faster (and not only on desktop; even mobile access is around 30% faster compared to last year).

Making your own site faster is something you can act on today and one of the best ways to improve user experience. To help, we’re excited to launch the new Speed Suggestions report in our suite of website performance reports. Not only can you measure and visualize the performance of your website, but you can now also speed up the slowest pages with concrete and actionable suggestions.

Speed Suggestions report


The new Speed Suggestions report shows the average page load time for top visited pages on your website and integrates with the PageSpeed Insights tool to surface suggestions for improving the pages for speed. The PageSpeed Insights tool analyzes the contents of a web page and generates a speed score and concrete suggestions.  The speed score indicates the amount of potential improvement on the page.  The closer the score is to 100, the more optimized the page is for speed.
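The same analysis is available programmatically through the public PageSpeed Insights API, which can be handy for checking many pages in bulk. A hedged sketch follows; the endpoint version and response structure have changed over time, so it simply dumps the raw JSON rather than assuming specific field names.

```python
import json
import requests

# Public PageSpeed Insights endpoint (current version; add a 'key' parameter for higher quota).
API = "https://www.googleapis.com/pagespeedonline/v5/runPagespeed"
resp = requests.get(API, params={"url": "http://www.example.com"})
resp.raise_for_status()
data = resp.json()
print(json.dumps(data, indent=2)[:2000])   # inspect the speed score and suggestions
```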

In the report, you can click through a suggestions link to see a page with all of the suggestions sorted by their impact on site speed. Example suggestions include reducing the amount of content that needs to load before your users can interact with the page, minifying JavaScript, and reducing redirects. Note that if you rewrite your URLs before displaying them in Analytics, or your pages require a login (see the help article for more details), then the PageSpeed Insights tool may not be able to analyze the page and generate a score and suggestions.


If you would like to dig into which of your pages take the most time for your users to load, check out the existing Page Timings report, which breaks down the average page load time for each page. Once you’ve identified your slowest pages, you can use the new Speed Suggestions report to improve them. For more general suggestions on how to improve your website, check out these performance articles, and read more about the new report in the detailed help center article. As always, we welcome feedback on ways to improve the report for our users.

For more help, visit our Google Developers site with tools, tips and ideas on making the web faster.

Posted by Chen Xiao, Google Analytics Team