Tuesday, 16 December 2014

Little Box Challenge Academic Awards



Last July, Google and the Institute of Electrical and Electronics Engineers Power Electronics Society (IEEE PELS) announced the Little Box Challenge, a competition designed to push the forefront of new technologies in the research and development of small, high power density inverters.

In parallel, we announced the Little Box Challenge award program designed to help support academics pursuing groundbreaking research in the area of increasing the power density for DC-to-AC power conversion. We received over 100 proposals and today we are proud to announce the following recipients of the academic awards:

Primary Academic Institution / Principal Investigator
  • University of Colorado Boulder
  • National Taiwan University of Science and Technology
  • Universidad Politécnica de Madrid
  • Texas A&M University
  • ETH Zürich
  • University of Bristol
  • Case Western Reserve University
  • University of Illinois Urbana-Champaign
  • University of Stuttgart
  • Queensland University of Technology

The recipients hail from many different parts of the world and were chosen based on their very strong and thoughtful entries dealing with all the issues raised in the request for proposals. Each of these researchers will receive approximately $30,000 US to support their research into high power density inverters, and all are encouraged to use this work in an attempt to win the $1,000,000 US grand prize for the Little Box Challenge.

There were many submissions beyond those chosen here that reviewers also considered to be very promising. We encourage all those who did not receive funding to still participate in the Little Box Challenge, and pursue improvements not only in power density, but also in the reliability, efficiency, safety, and cost of inverters (and of course, to attempt to win the grand prize!)

Friday, 12 December 2014

Keeping the GA Web Experience Modern

We're continuing to bring you new features and technologies in the design of Google Analytics to provide the best user experience. With this in mind, starting January 31, 2015 we will no longer support official compatibility of Google Analytics with Microsoft Internet Explorer 9 (IE9). While you can continue to use IE9 after we discontinue support, some features may not work properly going forward. This update maintains our practice of supporting the newest browsers while discontinuing support for the third-oldest version, as we previously announced in September 2013.

We will continue to support the latest versions of Chrome, Firefox, Internet Explorer 10 or higher, Safari and other modern browsers. Of course, you will still be able to measure visits from users of all browsers, including IE 9. We will send further reminders prior to the deprecation, but do advise you begin preparing and implementing plans for this change at your earliest convenience.

Call for Research Proposals to participate in the Open Web of Things Expedition



Imagine a world in which access to networked technology defies the constraints of desktops, laptops or smartphones. A future where we work seamlessly with connected systems, services, devices and “things” to support work practices, education, and daily interactions. While the Internet of Things (IoT) conjures a vision of “anytime, any place” connectivity for all things, the realization is complex given the need to work across interconnected and heterogeneous systems, and the special considerations needed for security, privacy, and safety.

Google is excited about the opportunities the IoT presents for future products and services. To further the development of open standards, facilitate ease of use, and ensure that privacy and security are fundamental values throughout the evolution of the field, we are in the process of establishing an open innovation and research program around the IoT. We plan to bring together a community of academics, Google experts and potentially other parties to pursue an open and shared mission in this area.

As a first step, we are announcing an open call for research proposals for the Open Web of Things:

  • Researchers interested in the Expedition Lead Grant should build a team of PIs and put forward a proposal outlining a draft research roadmap, both for their team(s) and for how they propose to integrate related research implemented outside their labs (e.g., Individual Project Grants).
  • For the Individual Project Grants we are seeking research proposals relating to the IoT in the following areas: (1) user interface and application development, (2) privacy & security, and (3) systems & protocols research.

Importantly, we are open to new and unorthodox solutions in all three of these areas, for example, novel interactions, usable security models, and new approaches for open standards and evolution of protocols.

Additionally, to facilitate hands-on work supporting our mission-driven research, we plan to provide participating faculty with access to hardware, software and systems from Google. We look forward to your submission by January 21, 2015 and expect to select proposals in early spring. Selected PIs will be invited to participate in a kick-off workshop at Google shortly after.

Thursday, 11 December 2014

Refreshing “The Customer Journey to Online Purchase” - New Insights into Marketing Channels

Last year we introduced “The Customer Journey to Online Purchase” -- a tool that helped marketers visualize the roles played by marketing channels like paid search, email and display ads in their customers' journeys.

The goal was to help marketers learn more about the customer journeys for their industries. If social makes your customers aware, and email makes them convert -- or vice versa -- you can make sure you're in both places with the right kind of message.

Today we're happy to introduce a new, improved version of the Customer Journey to Online Purchase, with a few key enhancements. We've refreshed the data based on millions of consumer interactions, updated the industry classifications, and split out paid search so you can see the influence of brand and generic search terms on the purchase decision.

In each industry you can now see journeys for small, medium and large companies, which can often be quite different.
For instance, the above image shows the journey for customers of small businesses in the shopping industry. Note that organic search is very often an "assist" interaction for these customers.
Now here's the same journey for large shopping businesses. Note that display clicks and social are strongly assisting interactions -- while display didn’t even appear for the small businesses above. For both small and large businesses, a direct website visit is most likely to be the last interaction. Across industries, the differences from small to large businesses illustrate how different marketing strategies and customer profiles may lead to different buying behavior.

And there's more! Now you can drill down into each marketing channel for a closer look at the role it plays based on its position in the purchase path. Channels that occur more frequently in the beginning of the path are more likely to help generate awareness for your product, while the end of the path is closer to the customer’s purchase decision.
In these charts, for example, we see the different roles that different channels play in the Shopping industry. One interesting insight is that all channels -- even those traditionally thought of as “upper funnel” or “lower funnel” -- occur throughout the purchase path, but a given channel may be more common at particular stages depending on its role (and depending on the industry).

Each marketing campaign and channel may have a different impact on customers depending on when they interact with it. Using what you learn from this tool, you can help adapt your marketing messaging to be more relevant and useful for your customers.

Try the Customer Journey to Online Purchase today. And for more helpful marketing insights, check out Measure What Matters Most: our new guide chock-full of suggestions on how to measure the impact of your marketing -- across channels -- to complement what you learn from the Customer Journey tool and take action to improve your marketing.

Happy analyzing!


Learning Digital Skills online with Google Activate



According to Eurostat data, over 5 million people under age 25 are currently out of work in Europe, in contrast to an increasing demand for people with digital skills such as Digital Marketing, Big Data, Ecommerce, Mobile App Development and Cloud Computing. In particular, Spanish employers are finding it difficult to find individuals with the right skills, due to the lack of available digital education.

In an effort to make contributions towards solving Spain's unemployment in this sector, Google Spain, the Spanish Ministry of Industry through their business school EOI, Universidad Complutense de Madrid and the Interactive Advertising Bureau (IAB) are collaborating to build Google Activate, a series of massive open online courses (MOOCs) dedicated to teaching digital skills to young unemployed people in Spain. This is an example of how online education can be scaled to address educational and economic issues.

The inspiration for Google Activate began with the summer 2012 launch of Course Builder, an experimental platform developed on Google technologies designed to provide the capability for anyone to create an online environment that can be used for a wide variety of education-related activities. In September of that same year, Course Builder was made available in Europe, as part of the Google Faculty Summit in London.

Among the early adopters of Course Builder in Europe was a partnership that included the University of Alicante, who in October 2012 launched Unimooc Aemprende, a MOOC for entrepreneurs. This is just one example of the use of Course Builder to build a MOOC designed to solve a broad problem, in this case the acquisition of skills for launching a small business. More than 30,000 people have participated in Unimooc since its launch.

As of today, more than 148,000 people have registered for Activate, with 13% of participants earning a certificate, which is obtained after 13 exams certified by the EOI, Universidad Complutense de Madrid or the IAB. Such certificates are being used by the awardees in their LinkedIn profiles to position themselves for jobs in the digital economy, where many jobs are being created. More than 19,000 students are already certified in one of the 5 digital areas.

Google Activate plans to increase the number of students with digital skills to 160,000, and to expand further to other countries around the world.

Tuesday, 9 December 2014

Google Analytics in AdMob helps mobile app developer Eltsoft go global

Cross-posted on the Inside AdMob Blog 

Since March 2014, Google Analytics has been fully available in AdMob, and now app developers are increasingly seeing results by combining data from both platforms. Here’s one story that illustrates the power of AdMob and Google Analytics together.

Passion for languages and learning
Jason Byrne and his business partner Robert Diem are passionate about making a difference in education. They came together during their time as professors in Japan to found Eltsoft LLC, a company that builds mobile language learning apps for iOS and Android. Together, they started creating a series of fun tools that allow users to study whenever they want, wherever they are.

Global expansion
Their most popular app is English Grammar, which has been downloaded by more than a million people looking to sharpen their English-language skills in nearly 120 countries.

All of the company’s apps are available for free or as paid versions. To increase revenue, they chose AdMob to earn money from the free versions of their apps with advertising. “AdMob monetization is central to our success because it delivers high-quality, appropriate ads to our audience in their native languages, wherever they live,” says Jason. 

The Google Analytics data within AdMob helped them understand more about their users. "Our app, English Grammar, has users from all around the world, so we turned to data from Google Analytics and AdMob to understand which languages we should consider for localization. For example, we knew we had to prioritize German and French, but we discovered other languages that we didn't expect, such as Russian and Japanese."

A data-driven approach to marketing 
Eltsoft uses data to focus their marketing campaigns and assess where to use their resources most effectively. “Google Analytics keeps making campaign analysis simpler and clearer,” Jason says. “Data from various sources - Google Analytics and Google Play, for instance - are now all in one place. That helps me understand what’s happening with our ad campaigns.” 

While data analysis helped Eltsoft validate some of their hypotheses, it also uncovered opportunities according to Jason: “The greatest takeaway for me is that the results are never really what I expect. I am often surprised. Analytics has given us great insights into who our users are, and has provided a very important lesson in the value of surveying our user base. Our simple assumptions are often inaccurate.”

Replicating successful strategies
Eltsoft has developed a way to calculate the value of users by using a combination of AdMob metrics (like ad request values) and Analytics metrics (like user counts and sessions per user). Having Google Analytics in AdMob has unlocked such analysis because the data is available in the same interface.
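A blended metric of this kind can be sketched as a back-of-the-envelope calculation. The segment data, field names and formulas below are invented for illustration; they are not Eltsoft's actual figures or model:

```python
# Hypothetical sketch: combining ad-revenue metrics (AdMob-style) with
# usage metrics (Analytics-style) to compare the value of user segments.
# All numbers here are invented for illustration.

segments = {
    # country: (ad_revenue_usd, users, sessions)
    "DE": (220.0, 3_000, 15_000),
    "RU": (180.0, 4_000, 12_000),
}

def value_per_user(revenue, users):
    """Average ad revenue earned per user in the segment."""
    return revenue / users

def sessions_per_user(sessions, users):
    """Engagement: how often a typical user opens the app."""
    return sessions / users

for country, (revenue, users, sessions) in segments.items():
    print(f"{country}: ${value_per_user(revenue, users):.4f}/user, "
          f"{sessions_per_user(sessions, users):.1f} sessions/user")
```

Seeing both numbers side by side in one interface is the point: a segment with high engagement but low revenue per user might, for instance, be a localization candidate.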

As a result, Eltsoft can now understand what works best for their users. “For example, we’ve made changes to our apps, and Analytics has really helped us to track the effectiveness of those changes. I would say six months ago, that our success was a mystery. The data said we were doing well, but the whys were not clear. Therefore, we couldn’t replicate or push forward. But today, we understand what’s happening and can project our future success. We have not only the data, but can control certain variables allowing us to understand that data.”

“Google Analytics data is literally a goldmine,” says Jason.

If you want to learn more about how Eltsoft is using Google Analytics and AdMob, download the full case study.

Want to learn how to get the most from Analytics in AdMob? Sign up for our free online course, Mobile App Analytics Fundamentals.



Posted by Russell Ketchum, Lead Product Manager, Google Analytics for Mobile Apps

MOOC Research and Innovation



Recently, Tsinghua University and Google collaborated to host the 2014 APAC MOOC Focused Faculty Workshop in Shanghai, China. The workshop brought together 37 professors from 12 countries in APAC, NA and EMEA to share, brainstorm and generate important topics of mutual interest in the research behind MOOCs and how to foster MOOC innovation.

During the 2-day workshop, faculty and Googlers shared lessons learned and best practices for the following focus areas:
  • Effectiveness of hybrid learning models.
  • Topics in adaptive learning, and how MOOCs can be tailored to individual students by integrating them into a student's timetable / semester / curriculum.
  • Standards and practices for interoperability between online learning platforms.
  • Current focuses and important topics for future MOOC research.

In addition to discussing these focus areas, there was ample time for participants to brainstorm and discuss innovative research ideas for the next steps in potential research collaboration. Emerging from these discussions were the following themes identified as important future research topics:
  • Adding new interactions to MOOCs, including social features and gamification.
  • Building a data & analytics infrastructure that provides a foundation for personalized learning.
  • Interoperability across platforms, and providing access to online content for audiences with limited access.

Google is committed to supporting research and innovation in online learning at scale, through both grants and our open source Course Builder platform, and we are excited to pursue potential research collaborations with partner universities to move forward on the topics discussed. Stay tuned for future announcements on research and collaboration aimed at enabling further MOOC innovation.

Monday, 8 December 2014

High Quality Object Detection at Scale



Update - 26/02/2015
We recently discovered a bug in the evaluation methodology of our object detector. Consequently, the large numbers we initially reported below are not realistic, due to the fact that our separately trained context extractor was contaminated with half of the validation set images. Therefore, our initial results were overly optimistic and were not attainable by the methodology described in the paper. Re-evaluating our initial results, we have restricted ourselves to reporting only the single-model results on the other half of the dedicated validation set without retraining the models. With the updated evaluation, we are still able to report the best single-model result on the ILSVRC 2014 detection challenge data set, with 0.43 mAP when combining both Selective Search and MultiBox proposals with our post-classification model. The original draft of our paper "Scalable, High Quality Object Detection" has been updated to reflect this information. We are deeply sorry if our initial reported results caused any confusion in the community. Original post follows below. 
-C. Szegedy, S. Reed, D. Erhan, and D. Anguelov

The ILSVRC detection challenge is an influential academic benchmark for measuring the quality of object detection. This summer, the GoogLeNet team reported top results in the 2014 edition of the challenge, with ~2X improvement over the previous year’s best results. However, the quality of our results came at a high computational cost: processing each image took about two minutes on a state-of-the-art workstation.

Naturally, we began to think of how we could both improve the accuracy and reduce the computation time needed. Given the already high quality of previous results like those of GoogLeNet[6], we expected that further improvements to detection quality would be increasingly hard to achieve. In our recent paper Scalable, High Quality Object Detection[7], we detail advances that instead have resulted in an accelerated rate of progress in object detection:
Evolution of detection quality over time. On the y axis is the mean average precision of the best published results at any given time. The blue line shows results using individual models; the red line shows multi-model ensembles. Overfeat[8] was the state of the art at the end of last year, followed by R-CNN[1], published in May. The later measurement points are the results of our team.[6,7]
As seen in the plot above, the mean average precision has been improved since August from 0.45 to 0.56: a 23% relative gain. The new approach can also match the quality of the former best solution with 140X reduced computational resources.

Most current approaches for object detection employ two phases[1]: in the first phase, some hand-engineered algorithm proposes regions of interest in the image. In the second phase, each proposed region is run through a deep neural network, identifying which proposed patches correspond to an object (and what that object is).
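The two-phase structure can be sketched in a few lines of code. The proposal generator and classifier below are trivial stand-ins (a fixed grid of boxes and a mean-brightness score), not the actual methods from the paper; only the control flow mirrors the approach described:

```python
# Illustrative two-phase detection skeleton. A real system would use a
# hand-engineered proposer (e.g. Selective Search) in phase 1 and a deep
# network in phase 2; the stand-ins here just demonstrate the pipeline.

def propose_regions(image):
    """Phase 1: return candidate boxes (x, y, w, h). Here: the four quadrants."""
    h, w = len(image), len(image[0])
    return [(x, y, w // 2, h // 2) for x in (0, w // 2) for y in (0, h // 2)]

def classify_patch(image, box):
    """Phase 2: score a patch per class. Dummy score: mean pixel brightness."""
    x, y, bw, bh = box
    pixels = [image[r][c] for r in range(y, y + bh) for c in range(x, x + bw)]
    score = sum(pixels) / len(pixels) / 255.0
    return {"object": score, "background": 1.0 - score}

def detect(image, threshold=0.5):
    """Run every proposed region through the (expensive) second phase."""
    detections = []
    for box in propose_regions(image):
        scores = classify_patch(image, box)
        if scores["object"] >= threshold:
            detections.append((box, scores["object"]))
    return detections

# toy 4x4 grayscale "image" with a bright top-left quadrant
img = [[255, 255, 0, 0],
       [255, 255, 0, 0],
       [0, 0, 0, 0],
       [0, 0, 0, 0]]
print(detect(img))  # → [((0, 0, 2, 2), 1.0)]
```

The cost problem discussed next comes from the inner loop: every proposal pays for a full second-phase evaluation.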

For the first phase, the common wisdom[1,2,3,4] was that it took skillfully crafted code to produce high quality region proposals. This has come with a drawback though: these methods don’t produce reliable scoring for the proposed regions. This forces the second phase to evaluate most of the proposed patches in order to achieve good results.

So we revisited our prior “MultiBox” work[5], in which we let the computer learn to pick the proposals to see whether we could avoid relying on any of the hand-crafted methods above. Although the MultiBox method, using previous generation vision network architectures, could not compete with hand-engineered proposal approaches, there were several advantages of fully relying on machine learning only. First, the quality of proposals increases with each new improved network architecture or training methodology without additional programming effort. Second, the regions come with confidence scores which are used for trading off running time versus quality. Additionally, the implementation is simplified.
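The running-time/quality trade-off that scored proposals enable can be sketched as simply ranking proposals by confidence and evaluating only the top few in the expensive second phase (a generic illustration, not the paper's exact procedure):

```python
# With confidence scores attached to proposals, the second phase can be
# limited to a budget of the most promising regions; a bigger budget buys
# quality, a smaller one buys speed. Box names and scores are made up.

def select_proposals(proposals, budget):
    """proposals: list of (box, confidence); keep the `budget` most confident."""
    ranked = sorted(proposals, key=lambda p: p[1], reverse=True)
    return ranked[:budget]

scored = [("box_a", 0.91), ("box_b", 0.15), ("box_c", 0.64), ("box_d", 0.40)]
print(select_proposals(scored, budget=2))  # → [('box_a', 0.91), ('box_c', 0.64)]
```

Hand-engineered proposers without reliable scores offer no such knob, which is why they force the second phase to evaluate most patches.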

Once we used new variants of the network architecture introduced in [6], MultiBox also started to perform much better: now we could match the coverage of alternative methods with half as many proposal patches. Also, we changed our networks to take the context of objects into account, fueling additional quality gains for the second phase. Furthermore, we came up with a new way to train deep networks to learn more robustly even when some objects are not annotated in the training set, which improved both phases.

Besides the significant gains in mean average precision, we can now cut the number of evaluated patches dramatically at a modest loss of quality: the task that used to take 2 minutes of processing time for a single image on a workstation with the GoogLeNet ensemble (of 6 networks) is now performed in under a second using a single network, without using GPUs. If we constrain ourselves to a single category like "dog", we can now process 50 images/second on the same machine with a more streamlined approach[7] that skips the proposal generation step altogether.

As a core area of research in computer vision, object detection is used for providing strong signals for photo and video search, while high quality detection could prove useful for self-driving cars and automatically generated image captions. We look forward to the continuing research in this field.

References:

[1]  Rich feature hierarchies for accurate object detection and semantic segmentation
by Ross Girshick and Jeff Donahue and Trevor Darrell and Jitendra Malik (CVPR, 2014)

[2]  Prime Object Proposals with Randomized Prim’s Algorithm
by Santiago Manen, Matthieu Guillaumin and Luc Van Gool

[3]  Edge boxes: Locating object proposals from edges
by C. Lawrence Zitnick and Piotr Dollár (ECCV 2014)

[4]  BING: Binarized normed gradients for objectness estimation at 300fps
by Ming-Ming Cheng, Ziming Zhang, Wen-Yan Lin and Philip Torr (CVPR 2014)

[5]  Scalable Object Detection using Deep Neural Networks
by Dumitru Erhan, Christian Szegedy, Alexander Toshev, and Dragomir Anguelov

[6]  Going deeper with convolutions
by Christian Szegedy, Wei Liu, Yangqing Jia, Pierre Sermanet, Scott Reed, Dragomir Anguelov, Dumitru Erhan, Vincent Vanhoucke and Andrew Rabinovich

[7]  Scalable, high quality object detection
by Christian Szegedy, Scott Reed, Dumitru Erhan and Dragomir Anguelov

[8]  OverFeat: Integrated Recognition, Localization and Detection using Convolutional Networks
by Pierre Sermanet, David Eigen, Xiang Zhang, Michael Mathieu, Rob Fergus and Yann LeCun


* A PhD student at University of Michigan -- Ann Arbor and Software Engineering Intern at Google

Ringing in the New Year - Behavior Trends and Insights

Last month we published an analysis of how people behave before and during the Thanksgiving holiday in the US. We saw the most important days of the year for retailers, how to take advantage of the top transaction days, and when to take action.

Today we are looking at the patterns of behavior over the holidays and into the new year with the objective of understanding how digital marketers can prepare for 2015.

After looking at data from the previous three years, we found two interesting insights:
  1. User behavior is significantly different from country to country, but very consistent from year to year within a particular country.
  2. The beginning of January can be a great time to offer new deals outside of the US.
Read on to learn more about the analysis we performed and how to take advantage of the trends we found; it will help you get a head start on 2015!

User Behavior Trends
Patterns can tell us a lot about data: they are intuitive and convey a lot of information at a glance. With that in mind, we produced the charts below to show how people behave around Christmas and New Year's Eve. We wanted to understand the differences between cultures, so we focused on the trends from three large economies: US, UK and France. All the charts show data from December 11 to January 14 for the last three years, and the two vertical grey areas represent Christmas Day and New Year's Eve.


United States: the transactions trend clearly shows that users purchase mostly up to a week before Christmas Day and no improvement is seen in early January, although sessions do return to normal quickly after New Year’s Eve. Publishers should take advantage of this rebound in sessions, while retailers may want to wait on providing deals until sales bounce back fully.

United Kingdom: transactions decline sharply until Christmas and then start rising sharply from December 26; about a week after New Year's Eve they reach levels equal to or higher than pre-holiday levels, so you might consider creating marketing campaigns and promotions to take advantage of January's rise. Sessions follow a similar pattern.

France: transactions and sessions follow a similar pattern to the UK, but with a significant decline around New Year's Eve. As you can see, there is a major spike on the second Wednesday of January every year: that's the day winter sales begin in France! Unlike in the US, January is an important month for French retailers; perhaps we should call it the French Cyber Wednesday?
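One simple, generic way to make these cross-year, cross-country comparisons is to index each day's transactions against a pre-holiday baseline, so that different sites and seasons line up on the same scale. The numbers below are made up for illustration, not the data behind the charts above:

```python
# Index a daily transaction series to a pre-holiday baseline so that
# different years/countries can be overlaid and compared. The sample
# series is invented, not the Google Analytics data used in this post.

def index_to_baseline(daily, baseline_days=7):
    """Express each day as a percentage of the mean of the first `baseline_days`."""
    baseline = sum(daily[:baseline_days]) / baseline_days
    return [round(100 * d / baseline, 1) for d in daily]

# hypothetical daily transactions, starting December 11
transactions = [100, 104, 98, 110, 107, 95, 86, 70, 55, 40]
print(index_to_baseline(transactions))
```

Overlaying the indexed series for each of the last three years makes the year-to-year consistency within a country (and the differences between countries) easy to see.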

Get a head start on 2015
So how can you take advantage of those trends to be more successful during the coming year? Here are some ideas for you to act upon right now:
  1. Look at your own data for previous years to understand the patterns for your existing and previous customers.
  2. Check your Benchmarking reports to learn more about how other websites of your size and in your vertical performed.
  3. Use Google Trends to check trends from previous years related to your vertical and country.
  4. Make sure you match your marketing efforts to your local post-holiday trend.
About the Data & Charts 
In order to perform this analysis, we looked at billions of sessions from authorized Google Analytics users who have shared their website data anonymously (read more).


Thursday, 4 December 2014

What we can learn about effective, meaningful and diverse organizations



By becoming more conscious of our own stereotypes and biases, and making use of the insights revealed by the research on bias and stereotype threat, unconscious decision making, and cognitive illusions, each of us can bring more to our work and create diverse, innovative, and meaningful organizations.

Since 2009, I’ve been reading literature about the challenges and successes in making diverse teams effective, and speaking about this research. My goal is to help everyone understand more about unconscious decision-making and other barriers to inclusion, and through knowledge, combat these effects.

A short summary:
  • A team that is heterogeneous in meaningful ways is good for innovation, and good for business.
  • There are many challenges to making such teams effective, such as unconscious decision making, stereotype threat, and other cognitive illusions.
  • There is repeatable quantitative research which shows ways to combat some of these effects.
  • The barriers to effectiveness may seem overwhelming, but there is hope! Meaningful change is possible, and some examples of successful change are cited below.
In a bit more detail:
  1. Diversity is good for innovation and business. There is a correlation between financial success and the diversity of leadership teams, as shown in research by Catalyst, McKinsey and Cedric Herring. Further, research shows a strong correlation between having women on teams and innovation, and between the presence of women and the social skills required to get ideas percolating into the open.
  2. We all make decisions unconsciously, influenced by our implicit associations. As an example of these effects, a large proportion of CEOs are taller than the average population and height is strongly correlated with financial and career success. It’s long been argued that women and underrepresented minorities are not represented in CEO leadership because there aren’t enough qualified individuals in the labor pool. This “pipeline issue” argument can’t be made for short and average-height people, however. Simple, repeatable tests measure, via response time and error rate, the implicit associations we have between concepts. These associations are created as an adaptive response, but we must understand our own implicit biases in order to make better decisions.
  3. Stereotype threat plays a role in preventing people from being fully effective. The low representation of women and minorities in science has long been the source of a troubling question: is this an indication of a difference in innate ability (see Ben Barres's response to Lawrence Summers' remarks), or the result of some other effect? Claude Steele and his colleagues elegantly showed that two groups of people can have similar or opposite reactions, depending on the way a situation is presented. These and other experiments show that stereotype threat can compromise the performance of the subject of a stereotype, if he or she knows about the stereotype and cares about it.
  4. Change is possible. The above and other challenges may make it seem nearly impossible to create a diverse and highly functioning organization, but dramatic change can be made. Take, for example, the discovery of biased decision making and effective changes made via the use of data in the MIT Science Faculty Study, or the amazing changes at Harvey Mudd College, which not only increased participation of women as Computer Science majors from 12% to 40% in five years, but also increased the total number of CS majors from roughly 25-30 per year to 70 CS graduates in the class of 2014.
If you’re interested in learning more, watch the video about the data on diversity below. You can read the full research in the November issue of Communications of the Association of Computing Machinery. You can read even more using the full bibliography.

Tuesday, 2 December 2014

Automatically making sense of data



While the availability and size of data sets across a wide range of sources, from medical to scientific to commercial, continues to grow, there are relatively few people trained in the statistical and machine learning methods required to test hypotheses, make predictions, and otherwise create interpretable knowledge from this data. But what if one could automatically discover human-interpretable trends in data in an unsupervised way, and then summarize these trends in textual and/or visual form?

To help make progress in this area, Professor Zoubin Ghahramani and his group at the University of Cambridge received a Google Focused Research Award in support of The Automatic Statistician project, which aims to build an "artificial intelligence for data science".

So far, the project has mostly been focusing on finding trends in time series data. For example, suppose we measure the levels of solar irradiance over time, as shown in this plot:

This time series clearly exhibits several sources of variation: it is approximately periodic (with a period of about 11 years, known as the Schwabe cycle), but with notably low levels of activity in the late 1600s. It would be useful to automatically discover these kinds of regularities (as well as irregularities), to help further basic scientific understanding, as well as to help make more accurate forecasts in the future.

We can model such data using non-parametric statistical models based on Gaussian processes. Such methods require the specification of a kernel function which characterizes the nature of the underlying function that can accurately model the data (e.g., is it periodic? is it smooth? is it monotonic?). While the parameters of this kernel function are estimated from data, the form of the kernel itself is typically specified by hand, and relies on the knowledge and experience of a trained data scientist.
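To make the role of the kernel concrete, here is a minimal Gaussian process regression sketch in NumPy (our illustration, not the project's code): a hand-specified periodic kernel, fit to a period-one signal, extrapolates the cycle beyond the observed data precisely because the kernel encodes that assumption.

```python
import numpy as np

def periodic_kernel(xa, xb, period=1.0, lengthscale=1.0):
    # ExpSineSquared kernel: encodes the assumption that the underlying
    # function repeats exactly every `period` units.
    d = np.abs(xa[:, None] - xb[None, :])
    return np.exp(-2.0 * np.sin(np.pi * d / period) ** 2 / lengthscale ** 2)

def gp_posterior_mean(x_train, y_train, x_test, kernel, noise=1e-2):
    # Standard GP predictive mean: K_* (K + sigma^2 I)^{-1} y.
    K = kernel(x_train, x_train) + noise * np.eye(len(x_train))
    return kernel(x_test, x_train) @ np.linalg.solve(K, y_train)

x = np.linspace(0.0, 4.0, 40)
y = np.sin(2.0 * np.pi * x)      # a signal with period 1
x_new = np.array([4.25])         # a quarter-cycle past the observed data
mean = gp_posterior_mean(x, y, x_new, periodic_kernel)
# Because the kernel encodes periodicity, the forecast continues the
# cycle: sin(2 * pi * 4.25) = 1, and mean[0] is close to 1.
```

Had we chosen a purely smooth (e.g. squared-exponential) kernel instead, the forecast would decay toward the mean outside the data; the choice of kernel, not the data alone, determines what the model can extrapolate.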

Prof Ghahramani's group has developed an algorithm that can automatically discover a good kernel, by searching through an open-ended space of sums and products of kernels as well as other compositional operations. After model selection and fitting, the Automatic Statistician translates each kernel into a text description describing the main trends in the data in an easy-to-understand form.
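A toy version of this search can be sketched as follows. This is our simplified illustration, not the group's algorithm: it scores each candidate expression by held-out prediction error rather than the marginal likelihood the real system uses, and it enumerates only pairwise sums and products of two base kernels.

```python
import itertools
import numpy as np

# Base kernels, written as functions of pairwise distance d.
def se(d):  return np.exp(-0.5 * (d / 0.5) ** 2)           # smooth, local
def per(d): return np.exp(-2.0 * np.sin(np.pi * d) ** 2)   # period 1

BASE = {"SE": se, "PER": per}

def heldout_score(kernel, x, y, split=30, noise=1e-2):
    # Fit a GP mean on the first `split` points, score on the remainder.
    # (A stand-in for the marginal-likelihood comparison the real system does.)
    xt, yt, xv, yv = x[:split], y[:split], x[split:], y[split:]
    K = kernel(np.abs(xt[:, None] - xt[None, :])) + noise * np.eye(split)
    Ks = kernel(np.abs(xv[:, None] - xt[None, :]))
    pred = Ks @ np.linalg.solve(K, yt)
    return -np.mean((yv - pred) ** 2)

# Candidate expressions: the base kernels plus their pairwise sums and products.
candidates = list(BASE.items())
for (na, ka), (nb, kb) in itertools.combinations(BASE.items(), 2):
    candidates.append((f"{na} + {nb}", lambda d, a=ka, b=kb: a(d) + b(d)))
    candidates.append((f"{na} * {nb}", lambda d, a=ka, b=kb: a(d) * b(d)))

x = np.linspace(0.0, 4.0, 40)
y = np.sin(2.0 * np.pi * x)  # purely periodic data
best_name, _ = max(candidates, key=lambda c: heldout_score(c[1], x, y))
# The winning expression contains PER, matching the data's structure.
```

The real search space is open-ended (compositions can be nested to any depth), so the actual algorithm explores it greedily rather than exhaustively.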

The compositional structure of the space of statistical models neatly maps onto compositionally constructed sentences allowing for the automatic description of the statistical models produced by any kernel. For example, in a product of kernels, one kernel can be mapped to a standard noun phrase (e.g. ‘a periodic function’) and the other kernels to appropriate modifiers of this noun phrase (e.g. ‘whose shape changes smoothly’, ‘with growing amplitude’). The end result is an automatically generated 5-15 page report describing the patterns in the data with figures and tables supporting the main claims. Here is an extract of the report produced by their system for the solar irradiance data:
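The translation step above can be sketched in a few lines. The phrase tables here are hypothetical stand-ins for the project's actual grammar, but they show the compositional idea: head noun phrase from the first kernel in a product, post-modifiers from the rest, and sums rendered as additive components.

```python
# Hypothetical phrase tables, illustrating (not reproducing) the mapping
# from kernel expressions to natural-language descriptions.
NOUN = {
    "PER": "a periodic function",
    "SE": "a smooth function",
    "LIN": "a linearly varying function",
}
MODIFIER = {
    "SE": "whose shape changes smoothly",
    "PER": "modulated by a periodic function",
    "LIN": "with linearly growing amplitude",
}

def describe_product(kernels):
    # In a product, the first kernel becomes the noun phrase and each
    # remaining kernel becomes a post-modifier of that phrase.
    head, *rest = kernels
    phrase = NOUN[head]
    for k in rest:
        phrase += " " + MODIFIER[k]
    return phrase

def describe_sum(products):
    # A sum of kernels corresponds to additive components of the signal.
    parts = [describe_product(p) for p in products]
    return "The data can be described as a sum of: " + "; ".join(parts) + "."

summary = describe_sum([["PER", "SE"], ["LIN"]])
# -> "The data can be described as a sum of: a periodic function whose
#     shape changes smoothly; a linearly varying function."
```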
Extract of the report for the solar irradiance data, automatically generated by the Automatic Statistician.
The Automatic Statistician is currently being generalized to find patterns in other kinds of data, such as multidimensional regression problems and relational databases. A web-based demo of a simplified version of the system was launched in August 2014, allowing users to upload a dataset and receive an automatically produced analysis within a few minutes. An expanded version of the service will be launched in early 2015 (we will post details when available). We believe this will have many applications for anyone interested in Data Science.

Monday, 1 December 2014

Advances in Variational Inference: Working Towards Large-scale Probabilistic Machine Learning at NIPS 2014



At Google, we continually explore and develop large-scale machine learning systems to improve our users’ experience, such as providing better video recommendations, deciding on the best language translation in a given context, or improving the accuracy of image search results. The data used to train these systems often contains many inconsistencies and missing elements, making progress towards large-scale probabilistic models designed to address these problems an important and ongoing part of our research. One principled and efficient approach for developing such models relies on an approach known as Variational Inference.

A renewed interest and several recent advances in variational inference1,2,3,4,5,6 have motivated us to support and co-organise this year’s workshop on Advances in Variational Inference as part of the Neural Information Processing Systems (NIPS) conference in Montreal. These advances include new methods for scalability using stochastic gradient methods, the ability to handle data that arrives continuously as a stream, inference in non-linear time-series models, principled regularisation in deep neural networks, and inference-based decision making in reinforcement learning, amongst others.
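For readers unfamiliar with the framework, all of these methods optimise a common objective: the evidence lower bound (ELBO) on the log marginal likelihood. In standard textbook notation (not specific to any one of the papers cited):

```latex
\log p_\theta(x)
  \;=\; \mathcal{L}(\theta,\phi)
  \;+\; \mathrm{KL}\!\left(q_\phi(z \mid x)\,\|\,p_\theta(z \mid x)\right),
\qquad
\mathcal{L}(\theta,\phi)
  \;=\; \mathbb{E}_{q_\phi(z \mid x)}\!\left[\log p_\theta(x, z) - \log q_\phi(z \mid x)\right].
```

Since the KL term is non-negative, \(\mathcal{L}\) is a lower bound on \(\log p_\theta(x)\); maximising it over the variational parameters \(\phi\) tightens the bound, while maximising over the model parameters \(\theta\) fits the model. The stochastic-gradient advances above are, in essence, efficient Monte Carlo estimators of the gradients of this bound.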

Whilst variational methods have clearly emerged as a leading approach for tractable, large-scale probabilistic inference, there remain important trade-offs in speed, accuracy, simplicity and applicability between variational and other approximate schemes. The goal of the workshop will be to contextualise these developments and address some of the many unanswered questions through:

  • Contributed talks from 6 speakers who are leading the resurgence of variational inference, and shaping the debate on topics of stochastic optimisation, deep learning, Bayesian non-parametrics, and theory.
  • 34 contributed papers covering significant advances in methodology, theory and applications including efficient optimisation, streaming data analysis, submodularity, non-parametric modelling and message passing.
  • A panel discussion with leading researchers in the field that will further interrogate these ideas. Our panelists are David Blei, Neil Lawrence, Shinichi Nakajima and Matthias Seeger.

The workshop presents a fantastic opportunity to discuss the opportunities and obstacles facing the wider adoption of variational methods. The workshop will be held on the 13th December 2014 at the Montreal Convention and Exhibition Centre. For more details see: www.variationalinference.org.

References:

1. Rezende, D. J., Mohamed, S., and Wierstra, D., Stochastic Backpropagation and Approximate Inference in Deep Generative Models, Proceedings of the 31st International Conference on Machine Learning (ICML-14), 2014.

2. Gregor, K., Danihelka, I., Mnih, A., Blundell, C., and Wierstra, D., Deep AutoRegressive Networks, Proceedings of the 31st International Conference on Machine Learning (ICML-14), 2014.

3. Mnih, A., and Gregor, K., Neural Variational Inference and Learning in Belief Networks, Proceedings of the 31st International Conference on Machine Learning (ICML-14), 2014.

4. Kingma, D. P., and Welling, M., Auto-Encoding Variational Bayes, Proceedings of the International Conference on Learning Representations (ICLR), 2014.

5. Broderick, T., Boyd, N., Wibisono, A., Wilson, A. C., and Jordan, M. I., Streaming Variational Bayes, Advances in Neural Information Processing Systems (pp. 1727-1735), 2013.

6. Hoffman, M. D., Blei, D. M., Wang, C., and Paisley, J., Stochastic Variational Inference, Journal of Machine Learning Research, 14:1303–1347, 2013.

With Google Analytics Premium and DoubleClick: Matalan increases conversion rate 28%

This post originally appeared on the DoubleClick Advertiser blog as part of the with DoubleClick series, highlighting stories and perspectives from industry leaders about how they are succeeding with an integrated digital marketing platform.

As one of the UK's leading family clothing retailers, Matalan must be nimble -- faster decisions mean better customer engagement and more sales. So they worked with Morpheus Media to implement Google Analytics Premium with DoubleClick Campaign Manager. Google Analytics' powerful insights helped Matalan make better business decisions, faster.

Matalan was already using DoubleClick Campaign Manager to centralize their digital marketing and reports. Adding Google Analytics Premium side-by-side showed them campaign effectiveness even more clearly, allowing them to uncover the hidden value of their digital marketing efforts, like transactions where digital advertising had assisted a conversion on another channel.

“It’s really helpful to be able to see one channel that might not be a heavy hitter in terms of revenue or traffic has an impact in creating a conversion on another channel,” says Lee Pinnington, Matalan's Multi-Channel Marketing Director.

With a complete view of their digital marketing ROI thanks to the integration of DoubleClick Campaign Manager and Google Analytics Premium, Matalan was able to put their marketing dollars where they would truly be most effective. And the results were dramatic: a 28% rise in conversion rate and significant growth in both site visits and revenue.

To learn more about Matalan's approach, check out the full case study.