Tuesday, 30 September 2014

Sudoku, Linear Optimization, and the Ten Cent Diet



(cross-posted on the Google Apps Developer blog, and the Google Developers blog)

In 1945, future Nobel laureate George Stigler wrote an essay in the Journal of Farm Economics titled The Cost of Subsistence about a seemingly simple problem: how could a soldier be fed for as little money as possible?

The “Stigler Diet” became a classic problem in the then-new field of linear optimization, which is used today in many areas of science and engineering. Any time you have a set of linear constraints such as “at least 50 square meters of solar panels” or “the amount of paint should equal the amount of primer” along with a linear goal (e.g., “minimize cost” or “maximize customers served”), that’s a linear optimization problem.

At Google, our engineers work on plenty of optimization problems. One example is our YouTube video stabilization system, which uses linear optimization to eliminate the shakiness of handheld cameras. A more lighthearted example is in the Google Docs Sudoku add-on, which instantaneously generates and solves Sudoku puzzles inside a Google Sheet, using the SCIP mixed integer programming solver to compute the solution.
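
For readers curious how a puzzle becomes an optimization problem, here is a minimal sketch of the standard formulation of Sudoku as an integer program, solved with SCIP through the or-tools Python wrapper. It only encodes the rules of the puzzle; the variable names and the commented-out clue are illustrative, and the exact model used by the add-on may differ:

    from ortools.linear_solver import pywraplp

    solver = pywraplp.Solver.CreateSolver("SCIP")
    # x[r][c][v] == 1 means that cell (r, c) holds the value v + 1.
    x = [[[solver.BoolVar(f"x_{r}_{c}_{v}") for v in range(9)]
          for c in range(9)] for r in range(9)]

    for r in range(9):
        for c in range(9):
            solver.Add(sum(x[r][c][v] for v in range(9)) == 1)  # one value per cell
    for v in range(9):
        for r in range(9):
            solver.Add(sum(x[r][c][v] for c in range(9)) == 1)  # once per row
        for c in range(9):
            solver.Add(sum(x[r][c][v] for r in range(9)) == 1)  # once per column
        for br in range(0, 9, 3):
            for bc in range(0, 9, 3):
                solver.Add(sum(x[br + i][bc + j][v]
                               for i in range(3) for j in range(3)) == 1)  # once per 3x3 box

    # A clue would be added as, e.g., solver.Add(x[0][0][4] == 1) for a 5 in the corner.
    if solver.Solve() in (pywraplp.Solver.OPTIMAL, pywraplp.Solver.FEASIBLE):
        for r in range(9):
            print([next(v for v in range(9) if x[r][c][v].solution_value() > 0.5) + 1
                   for c in range(9)])
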
Today we’re proud to announce two new ways for everyone to solve linear optimization problems. First, you can now solve linear optimization problems in Google Sheets with the Linear Optimization add-on written by Google Software Engineer Mihai Amarandei-Stavila. The add-on uses Google Apps Script to send optimization problems to Google servers. The solutions are displayed inside the spreadsheet. For developers who want to create their own applications on top of Google Apps, we also provide an API to let you call our linear solver directly.
Second, we’re open-sourcing the linear solver underlying the add-on: Glop (the Google Linear Optimization Package), created by Bruno de Backer with other members of the Google Optimization team. It’s available as part of the or-tools suite and we provide a few examples to get you started. On that page, you’ll find the Glop solution to the Stigler diet problem. (A Google Sheets file that uses Glop and the Linear Optimization add-on to solve the Stigler diet problem is available here. You’ll need to install the add-on first.)
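
To give a feel for the Python API, here is a minimal sketch of the paint-and-primer example above, solved with Glop through the or-tools wrapper. The prices and quantities are made up, and the current wrapper API may differ slightly from the 2014 release:

    from ortools.linear_solver import pywraplp

    # Buy paint and primer as cheaply as possible, subject to "at least 20 litres
    # in total" and "the amount of paint should equal the amount of primer".
    solver = pywraplp.Solver.CreateSolver("GLOP")
    paint = solver.NumVar(0, solver.infinity(), "paint_litres")
    primer = solver.NumVar(0, solver.infinity(), "primer_litres")

    solver.Add(paint + primer >= 20)          # linear constraint
    solver.Add(paint == primer)               # another linear constraint
    solver.Minimize(8 * paint + 5 * primer)   # linear goal: total cost

    if solver.Solve() == pywraplp.Solver.OPTIMAL:
        print(paint.solution_value(), primer.solution_value(),
              solver.Objective().Value())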

Stigler posed his problem as follows: given nine nutrients (calories, protein, Vitamin C, and so on) and 77 candidate foods, find the foods that could sustain soldiers at minimum cost.
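
In modern notation, with x_i the annual quantity of food i purchased, c_i its cost, a_ji the amount of nutrient j supplied by one unit of food i, and r_j the yearly requirement for nutrient j, the problem is the linear program

    \min_{x \ge 0} \; \sum_{i=1}^{77} c_i x_i
    \quad \text{subject to} \quad
    \sum_{i=1}^{77} a_{ji} x_i \ge r_j, \qquad j = 1, \dots, 9.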

The Simplex algorithm for linear optimization was two years away from being invented, so Stigler had to do his best, arriving at a diet that cost $39.93 per year (in 1939 dollars), or just over ten cents per day. Even that wasn’t the cheapest diet. In 1947, Jack Laderman used Simplex, nine calculator-wielding clerks, and 120 person-days to arrive at the optimal solution.

Glop’s Simplex implementation solves the problem in 300 milliseconds. Unfortunately, Stigler didn’t include taste as a constraint, and so the poor hypothetical soldiers will eat nothing but the following, ever:

  • Enriched wheat flour
  • Liver
  • Cabbage
  • Spinach
  • Navy beans

Is it possible to create an appealing dish out of these five ingredients? Google Chef Anthony Marco took it as a challenge, and we’re calling the result Foie Linéaire à la Stigler:
This optimal meal consists of seared calf liver dredged in flour, atop a navy bean purée with marinated cabbage and a spinach pesto.

Chef Marco reported that the most difficult constraint was making the dish tasty without butter or cream. That said, I had the opportunity to taste our linear optimization solution, and it was delicious.

Monday, 29 September 2014

Collaborative Mathematics with SageMathCloud and Google Cloud Platform



(cross-posted on the Google for Education blog and Google Cloud Platform blog)

Modern mathematics research is distinguished by its openness. The notion of "mathematical truth" depends on theorems being published with proof, letting the reader understand how new results build on the old, all the way down to basic mathematical axioms and definitions. These new results become tools to aid further progress.

Nowadays, many of these tools come either in the form of software or theorems whose proofs are supported by software. If new tools produce unexpected results, researchers must be able to collaborate and investigate how those results came about. Trusting software tools means being able to inspect and modify their source code. Moreover, open source tools can be modified and extended when research veers in new directions.

In an attempt to create an open source tool to satisfy these requirements, University of Washington Professor William Stein built SageMathCloud (or SMC). SMC is a robust, low-latency web application for collaboratively editing mathematical documents and code. This makes SMC a viable platform for mathematics research, as well as a powerful tool for teaching any mathematically-oriented course. SMC is built on top of standard open-source tools, including Python, LaTeX, and R. In 2013, William received a Google Research Award that provided Google Cloud Platform credits for SMC development. This allowed William to extend SMC to use Google Compute Engine as a hosting platform, achieving better scalability and global availability.
SMC allows users to interactively explore 3D graphics with only a browser
SMC has its roots in 2005, when William started the Sage project in an attempt to create a viable free and open source alternative to existing closed-source mathematical software. Rather than starting from scratch, Sage was built by making the best existing open-source mathematical software work together transparently and filling in any gaps in functionality.

During the first few years, Sage grew to have about 75K active users, while the developer community matured with well over 100 contributors to each new Sage release and about 500 developers contributing peer-reviewed code.

Inspired by Google Docs, William and his students built the first web-based interface to Sage in 2006, called The Sage Notebook. It worked well for a small group (such as a single class), but it was designed for a small number of users and soon became difficult to maintain for larger groups, let alone the whole web.

As growth in new Sage users began to stall in 2010, due largely to installation complexity, William turned his attention to expanding Sage's availability to a broader audience. Based on his experience teaching his own courses with Sage, and feedback from others doing the same, William began building a new web-hosted version of Sage that could scale to the next generation of users.

The result is SageMathCloud, a highly distributed multi-datacenter application that creates a viable way to do computational mathematics collaboratively online. SMC uses a wide variety of open source tools, from languages (CoffeeScript, node.js, and Python) to infrastructure-level components (especially Cassandra, ZFS, and bup) and a number of in-browser toolkits (such as CodeMirror and three.js).

Latency is critical for collaborative tools: like an online video game, everything in SMC is interactive. The initial versions of SMC were hosted at UW, where the distance between Seattle and faraway continents was a significant issue, even on the fastest networks. The global coverage of Google Cloud Platform provides a low-latency connection to SMC users around the world that is both fast and stable. It's not uncommon for long-running research computations to last days or even weeks, and here the robustness of Google Compute Engine, with machines live-migrating during maintenance, is crucial. Without it, researchers would often face multiple restarts and delays, or would invest in engineering around the problem, taking time away from the core research.

SMC sees use across a number of areas, especially:

  • Teaching: any course with a programming or math software component, where you want all your students to be able to use that component without dealing with the installation pain. Also, SMC allows students to easily share files, and even work together in realtime. There are dozens of courses using SMC right now.
  • Collaborative Research: all co-authors of a paper can work together in an SMC project, both writing the paper there and doing research-level computations.

Since SMC launched in May 2013, more than 20,000 monthly active users have started using Sage through it. We look forward to seeing whether SMC boosts the number of active Sage users, and are excited to learn about the collaborative research and teaching that it makes possible.

Tuesday, 23 September 2014

The Top 3 Google Analytics Configuration Issues Impacting your Data (and How to Fix Them)

Good data is important. How important? Studies show that inaccurate data has a direct impact on the bottom line of 88% of companies. In fact, the average company loses 12% of its revenue due to bad data. As you know, Google Analytics is a powerful product with a wealth of features to help you optimize your results online. However, to unleash the power of Google Analytics’ marketing tools, you must ensure the data collected is complete and of the highest quality. The insights that fuel action in Analytics depend on good data, especially for some of our advanced algorithmic marketing functionalities like data-driven attribution.

Since its release two months ago, our popular new diagnostics tool has been working hard to ensure you’re getting the best results. Today, we’d like to share insights into some of the most common account errors, along with likely causes and suggested solutions. In particular, we’ll look at some solutions for when our diagnostics tool is telling you the following: “Bad Default URL,” “Clicks and Sessions Discrepancy,” and “No Goal Conversions.” Read on to understand the impact of these issues as well as their common causes.


Bad Default URL
“Data without quality is useless.”
João Correia, Analytics Strategist at Blast Analytics & Marketing

When you create a Google Analytics account for website tracking, one of the first things we ask for is a default URL. This is generally the homepage of your website. Diagnostics ensures that you have tagged your default URL correctly for this property, and warns you if this is not the case. Having a properly tagged website is an essential step towards being able to understand consumer behavior.

This warning is generally caused by either missing or malformed tracking code installed on your default URL, or more simply a typo in the URL that was entered. If the default URL is incorrect, simply log in to your Google Analytics account, click the “Admin” button in the header, and click “Property Settings” to adjust your default URL. If the tracking code is flawed, you’ll want to talk to your webmaster and ask to have the tracking code correctly installed.

Beyond the default URL, we also check for tracking code health across your site. We look for pages that have missing or malformed tags, and we continually run these checks, ensuring that new pages you launch in the future are also properly tagged.
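
The Diagnostics checks run automatically inside Google Analytics, but the idea of a site-wide tag audit is easy to sketch. The snippet below is only an illustration of that idea, not how the Diagnostics tool itself works; the page list and the UA-style property ID pattern are placeholders:

    import re
    import requests

    # Hypothetical page list; a real audit would crawl the sitemap.
    PAGES = ["https://www.example.com/", "https://www.example.com/pricing"]
    # Universal Analytics property IDs look like UA-XXXXXXX-Y.
    TRACKER_RE = re.compile(r"UA-\d{4,10}-\d{1,4}")

    for url in PAGES:
        html = requests.get(url, timeout=10).text
        ids = set(TRACKER_RE.findall(html))
        status = "tagged with " + ", ".join(sorted(ids)) if ids else "NO TRACKING CODE FOUND"
        print(f"{url}: {status}")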

Clicks / Session Discrepancies
"Diagnostics helped me identify and fix an AdWords data discrepancy in my account.  Without the tool, I may have never even realized that my data was inconsistent.  This is a great tool!"
Monika Rut-Koroglu, Digital Analytics and Optimization at FXCM

Google Analytics offers rich capabilities that help users share data with linked AdWords accounts and gain unique and powerful marketing insights. It’s common to expect the number of clicks you see in AdWords to match the number of sessions you see in Analytics, but this is not always the case. This discrepancy can slow down meaningful analysis, and is a situation that can and should be rectified.

The most common causes of this issue have to do with your configuration settings. For example, when you send ad clicks through a third party that redirects to your site, the third party will often drop the vital tagging parameters that Analytics and AdWords need to associate clicks with sessions. Other examples are having AdWords auto-tagging disabled, and redirecting users to mobile sites while unintentionally dropping tagging parameters.
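
To see why a redirect breaks the link, recall that AdWords auto-tagging appends a gclid query parameter to your landing page URL; a redirect that rebuilds the destination URL without copying that parameter severs the connection between the click and the session. Here is a rough sketch of a parameter-preserving redirect (the domains and parameter values are hypothetical):

    from urllib.parse import urlencode, urlsplit, parse_qsl

    def redirect_target(incoming_url, destination):
        """Build a redirect URL that carries over tagging parameters such as gclid."""
        incoming_params = dict(parse_qsl(urlsplit(incoming_url).query))
        # A redirect that ignores incoming_params silently drops the gclid that
        # AdWords auto-tagging appended, breaking the click/session association.
        keep = {k: v for k, v in incoming_params.items()
                if k in ("gclid", "utm_source", "utm_medium", "utm_campaign")}
        sep = "&" if urlsplit(destination).query else "?"
        return destination + (sep + urlencode(keep) if keep else "")

    print(redirect_target("https://tracker.example.com/click?gclid=TeSter-123",
                          "https://www.example.com/landing"))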

Fixes for these issues can vary; we have a detailed guide to walk users through this, or you can follow prompts in Google Analytics when we identify specific actions for you to take. If you have a third party who uses redirects and drops parameters, talk to them to resolve the issue. If auto-tagging is disabled on your AdWords accounts, consider enabling it.

No Goal Conversions
“[Google Analytics Diagnostics] is a great idea... Just discovered it the other day on my iPad. Helpful to let me redefine my goals better and find out what's not working.”
Sherri Matthew, Harpist and Small Business Owner

Google Analytics goals offer a valuable way to identify and track the outcomes that matter most to you, and to help you drive more of them. Sometimes goals break and stop this critical stream of insights from reaching you. We run diagnostic checks to ensure your goals continually identify a steady flow of high-value customers, and we warn you if this flow breaks.

The most common cause of goal breakage is a goal based on a URL that changes. If your webmaster updates the URLs on your site, and the URLs in the goal settings aren’t updated accordingly, the goal will stop tracking. The second most common cause is event tracking on your site changing without the events listed in the goal being updated accordingly.

If you’ve had a goal break for these reasons, visit the “Admin” section via a link in the header of your Google Analytics account, and click “Goals” to correct your goal configurations.

More About Diagnostics
Google Analytics Diagnostics scans for problems every day (with some exceptions). It inspects your site tagging, account configuration, and reporting data for potential data-quality issues.  Only users with Edit permission can see and respond to diagnostics messages. Diagnostics honors the first response to a message; for example, when a user ignores a message, it is ignored for all users.

The tool currently scans for dozens of issues, and dozens more are planned. Just keep an eye on your account over time - it will notify you if and when new issues or opportunities are detected.


- Frank Kieviet and Matt Matyas, Google Analytics Team

Monday, 22 September 2014

Introducing Structured Snippets, now a part of Google Web Search



Google Web Search has evolved in recent years with a host of features powered by the Knowledge Graph and other data sources to provide users with highly structured and relevant data. Structured Snippets is a new feature that incorporates facts into individual result snippets in Web Search. As seen in the example below, interesting and relevant information is extracted from a page and displayed as part of the snippet for the query “nikon d7100”:
The WebTables research team has been working to extract and understand tabular data on the Web with the intent to surface particularly relevant data to users. Our data is already used in the Research Tool found in Google Docs and Slides; Structured Snippets is the latest collaboration between Google Research and the Web Search team employing that data to seamlessly provide the most relevant information to the user. We use machine learning techniques to distinguish data tables on the Web from uninteresting tables, e.g., tables used for formatting web pages. We also have additional algorithms to determine quality and relevance that we use to display up to four highly ranked facts from those data tables. Another example of a structured snippet for the query “superman”, this time as it appears on a mobile phone, is shown below:
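
To give a flavor of that distinction, here is a toy sketch of the kind of surface features one might compute when deciding whether a table is relational data or page layout. It is purely illustrative and is not the WebTables pipeline itself:

    def table_features(rows):
        """Toy features for telling data tables from layout tables (illustrative only)."""
        n_rows = len(rows)
        n_cols = max(len(r) for r in rows)
        total_cells = sum(len(r) for r in rows)
        numeric_cells = sum(1 for r in rows for c in r if c.replace(".", "", 1).isdigit())
        return {
            "rows": n_rows,
            "cols": n_cols,
            "numeric_ratio": numeric_cells / total_cells if total_cells else 0.0,
            "header_like_first_row": all(not c.replace(".", "", 1).isdigit() for c in rows[0]),
        }

    # A relational-looking spec table vs. a single-row layout table.
    print(table_features([["Sensor", "Megapixels"], ["DX CMOS", "24.1"]]))
    print(table_features([["nav", "content", "sidebar"]]))
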
Fact quality will vary across results based on page content, and we are continually enhancing the relevance and accuracy of the facts we identify and display. We hope users will find this extra snippet information useful.

Thursday, 18 September 2014

Sign in to edx.org with Google (and Facebook, and...)



Google is passionate about online education. In addition to our own Course Builder project, we’re also partners with edX, a not-for-profit that shares our desire for scalable, quality education for everyone. Their software, Open edX, lets people make educational content and deliver it online to anybody, anytime, anywhere. It powers their own site, edx.org, and is also used by companies and universities worldwide.

Today we’re very pleased to announce that you can now sign in to edx.org with your Google or Facebook account:
Until recently, users who wanted to take advantage of the high quality content on edx.org needed to create a new account first. This is a painful, error-prone process (really, who wants to worry about yet another password?). So we added support for over 60 external authentication providers to Open edX, covering everything from open standards like OpenID and OAuth 2.0 to custom university single sign-on systems. For their edx.org site, edX decided to let users pick between Google, Facebook, and a custom username and password.

If you run Open edX, you can also use this feature now. The authentication module is extensible so you can add any third-party provider you want if your favorite is not yet supported. And the feature is completely configurable, so you can pick whatever third-party authentication systems are best for your users, including none at all. It’s totally up to you.
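
For readers curious what “sign in with Google” involves under the hood, the sketch below walks through the standard OAuth 2.0 authorization-code flow against Google's documented endpoints. It is a generic illustration of the protocol rather than the Open edX authentication module itself, and the client ID, secret, and redirect URI are placeholders:

    from urllib.parse import urlencode
    import requests

    CLIENT_ID = "your-client-id.apps.googleusercontent.com"                 # placeholder
    CLIENT_SECRET = "your-client-secret"                                    # placeholder
    REDIRECT_URI = "https://lms.example.com/auth/complete/google-oauth2/"   # placeholder

    # Step 1: send the user's browser to Google's consent screen.
    auth_url = "https://accounts.google.com/o/oauth2/v2/auth?" + urlencode({
        "client_id": CLIENT_ID,
        "redirect_uri": REDIRECT_URI,
        "response_type": "code",
        "scope": "openid email profile",
    })
    print("Visit:", auth_url)

    # Step 2: Google redirects back with ?code=...; exchange it for tokens.
    def exchange_code(code):
        resp = requests.post("https://oauth2.googleapis.com/token", data={
            "code": code,
            "client_id": CLIENT_ID,
            "client_secret": CLIENT_SECRET,
            "redirect_uri": REDIRECT_URI,
            "grant_type": "authorization_code",
        })
        return resp.json()  # contains an access_token and an id_token identifying the user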

By simultaneously increasing user choice, convenience, and security, we hope to make open online education even easier and safer to use, whether people pick Course Builder or Open edX for authoring and delivering courses. We’re very grateful to our partners at edX for working with us in this exciting field.

Wednesday, 17 September 2014

Enhanced Google Analytics Audience Capabilities Come to Apps

Good news for mobile app developers: Audience Demographics and Interests Reporting and Remarketing are now available for apps in Google Analytics. These are just some of the improvements for audience segmentation and remarketing we're announcing today, all of which should make it even easier for our advertisers to reach their high-value customer segments.

In-App Audience Demographics Reporting and Remarketing

Good analytics are especially important to app developers. At Google I/O, Hovhannes Avoyan, the CEO and Founder of PicsArt, had this to say:

“We need analytics to help us understand who our users are, how they interact with our application, how our application performs. With all that knowledge, we want to apply different monetization strategies to different kinds of users.” 

Now developers can see just how different user segments engage and monetize with In-App Audience Demographics Reporting.

And it's more than just data. Analysts and developers can blend audience demographic and behavior data into detailed audience lists to be targeted with in-app remarketing campaigns. In short, all the great remarketing capabilities available to Google Analytics users on the web are now available for apps as well.

New In-App Audience Demographics Reporting

Segmentation and remarketing lists get an upgrade
Creating remarketing lists for apps and the web is now even easier with recent upgrades to both segmentation and audience building. A streamlined flow for creating audiences lets users go from segment to audience in just a few clicks (plus a few bonus admin features like list renaming and automatic list sizing).

New Audience Builder Experience, now supporting App lists
If you prefer to stand on the shoulders of remarketing giants, you can import audience definitions that Analytics power users have developed and shared, either via template links or from the solutions gallery. This simplifies things dramatically for new users: a process that could be complicated and time-consuming can now be done with 6 clicks in under 1 minute. Give it a try: import our Engagement Pack of Core Remarketing Lists.

On the segmentation side, users have told us they wanted segments to be more discoverable, easier to manage, and more intuitive to build. We've been listening, and have made interface improvements, adding a simple “Add Segment” button within reports, a new segment-selection interface, hover-over segment definitions, and a 1-click action dialogue to Share, Edit, Copy, Remove, or Remarket to a segment. 

New Segmentation Experience: fewer errors for better analysis

Measure remarketing performance with the new Display Targeting report
Once you’ve found a segment, created an audience, and activated your remarketing campaign, close the loop by measuring the performance of those audiences across all remarketing campaigns. Open the new AdWords Display Targeting report in the Acquisition section to see all your active remarketing lists, along with impressions, spend, behavior, and conversion rates under the “Interests and Remarketing” tab.

New Remarketing List Performance in Display Targeting Report
You can learn how to update your SDK to enable these features in our Help Center, or get started now by creating some remarketing lists. We hope that these improvements make your audience segmentation and remarketing, in apps and on the web, more intuitive and more effective. We’d love to hear from you! Please leave questions or feedback in the comments, and stay tuned for more audience-related improvements.


Posted by Dan Stone, Product Manager, and Kanu Singhal, Technical Lead from the Google Analytics Audience Team

Thursday, 11 September 2014

Course Builder now supports the Learning Tools Interoperability (LTI) Specification



Since the release of Course Builder two years ago, it has been used by individuals, companies, and universities worldwide to create and deliver online courses on a variety of subjects, helping to show the potential for making education more accessible through open source technology.

Today, we’re excited to announce that Course Builder now supports the Learning Tools Interoperability (LTI) specification. Course Builder can now interoperate with other LTI-compliant systems and online learning platforms, allowing users to interact with high-quality educational content no matter where it lives. This is an important step toward our goal of making educational content available to everyone.

If you have LTI-compliant software and would like to serve its content inside Course Builder, you can do so by using Course Builder as an LTI consumer. If you want to serve Course Builder content inside another LTI-compliant system, you can use Course Builder as an LTI provider. You can use either of these features, both, or none—the choice is entirely up to you.
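
To make the consumer role concrete, an LTI 1.0 “basic launch” is an OAuth 1.0a-signed form POST from the consuming platform to the tool. The sketch below signs such a launch with the oauthlib package; it is a generic illustration of the specification, not the Course Builder module's own code, and the launch URL, credentials, and field values are placeholders:

    from urllib.parse import urlencode
    from oauthlib.oauth1 import Client, SIGNATURE_HMAC, SIGNATURE_TYPE_BODY

    # Hypothetical tool endpoint and credentials shared between consumer and provider.
    LAUNCH_URL = "https://tool.example.com/lti/launch"
    params = {
        "lti_message_type": "basic-lti-launch-request",
        "lti_version": "LTI-1p0",
        "resource_link_id": "unit-3-quiz",   # identifies the placement in the course
        "user_id": "student-42",
        "roles": "Learner",
    }

    client = Client("consumer_key", client_secret="consumer_secret",
                    signature_method=SIGNATURE_HMAC,
                    signature_type=SIGNATURE_TYPE_BODY)
    uri, headers, body = client.sign(
        LAUNCH_URL, http_method="POST", body=urlencode(params),
        headers={"Content-Type": "application/x-www-form-urlencoded"})
    # `body` now contains the original fields plus the oauth_* parameters;
    # POSTing it to LAUNCH_URL performs the launch.
    print(body)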

The Course Builder LTI extension module, now available on GitHub, supports LTI version 1.0, and its LTI provider is certified by IMS Global, the nonprofit member organization that created the LTI specification. Like Course Builder itself, this module is open source and available under the Apache 2.0 license.

As part of our continued commitment to online education, we are also happy to announce we have become an affiliate member of IMS Global. IMS Global shares our desire to provide education online at scale, and we look forward to working with the IMS community on LTI and other online education technologies.

Wednesday, 10 September 2014

New Benchmarking Reports Help Twiddy Boost Email Open Rates by 500%

If you’ve ever wondered how your website is performing compared to the competition, our new Benchmarking reports in Google Analytics will help you find out.

Analytics users can now compare their results to peers in their industry, choosing from 1600 industry categories, 1250 markets and 7 size buckets. Benchmarking leverages the footprint of Google Analytics and can help you set meaningful targets, spot trends occurring across industries and answer a whole array of questions: Which channels should you be investing more in? How does your mobile engagement compare to your peers? How unique is your audience?

The new Benchmarking reports display acquisition and engagement metrics — like sessions and bounce rate — by Channel, Location, or Device Category dimensions. To ensure total data transparency, the number of properties contributing to the benchmark is displayed once you choose the industry, market and size. A helpful heat map feature makes it easy to see areas of strength and opportunity, and where to devote more resources.

Benchmarking in Action: Twiddy Finds a New Email Marketing Opportunity

Twiddy.com, a vacation rentals company in the Outer Banks (a popular summer getaway destination), has been using Benchmarking reports to help focus its marketing resources. A look at their peer benchmarks by channel showed that Twiddy was doing many things well during its peak summer booking season. Still, “it was clear we were missing a huge opportunity in email marketing,” reports CMO Ross Twiddy. His team used Google Analytics data to revamp their email marketing and improve the flow and process.

Email opportunity: Visitors from email spend nearly twice as long on site as the average, but user sessions generated from email are 82% below average and new users from the channel fall 91% below similar sites.

Twiddy even used Google Analytics to choose the best-selling messages for their email campaigns. Their analysis helped them zero in on the factors that were most consistent in repeat bookings: the price range, location, rental type, and even vacation week most likely to convert for each customer. “We launched an email last week based on our findings, and it shattered our email marketing records: a 48% average open rate and a 40% clickthrough rate,” says Ross.

Twiddy is happy with the new visibility they’ve gained: “The Benchmarking reports were powerful enough for us to reallocate time, budget and resources towards running down the deficiency. We can’t wait to start testing the reports out more broadly during the next peak booking season.”

Get Started with Benchmarking

Benchmarking reports can be found in the “Audience” section of the reporting interface and are rolling out over the next few weeks to all Google Analytics users who have opted in to share their data anonymously. If you want to join in, simply check the “Share anonymously with Google and others” box in the Account Settings tab of your account admin page. This is only the beginning for benchmarking within Google Analytics. We’ll be expanding these capabilities in the coming months, both incorporating conversion metrics and adding support for mobile apps. For more information on Benchmarking reports, check out our Help Center.

Posted by: Nikhil Roy, Product Manager, Google Analytics

Friday, 5 September 2014

Building a deeper understanding of images



The ImageNet large-scale visual recognition challenge (ILSVRC) is the largest academic challenge in computer vision, held annually to test state-of-the-art technology in image understanding, both in the sense of recognizing objects in images and locating where they are. Participants in the competition include leading academic institutions and industry labs. In 2012 it was won by DNNResearch using the convolutional neural network approach described in the now-seminal paper by Krizhevsky et al.[4]

In this year’s challenge, team GoogLeNet (named in homage to LeNet, Yann LeCun's influential convolutional network) placed first in the classification and detection (with extra training data) tasks, doubling the quality on both tasks over last year's results. The team participated with an open submission, meaning that the exact details of its approach are shared with the wider computer vision community to foster collaboration and accelerate progress in the field.
The competition has three tracks: classification, classification with localization, and detection. The classification track measures an algorithm’s ability to assign correct labels to an image. The classification with localization track is designed to assess how well an algorithm models both the labels of an image and the location of the underlying objects. Finally, the detection challenge is similar, but uses much stricter evaluation criteria. As an additional difficulty, this challenge includes a lot of images with tiny objects that are hard to recognize. Superior performance in the detection challenge requires pushing beyond annotating an image with a “bag of labels” -- a model must be able to describe a complex scene by accurately locating and identifying many objects in it. As examples, the images in this post are actual top-scoring inferences of the GoogLeNet detection model on the validation set of the detection challenge.
This work was a concerted effort by Wei Liu, Yangqing Jia, Pierre Sermanet, Scott Reed, Drago Anguelov, Dumitru Erhan, Andrew Rabinovich and myself. Two of the team members -- Wei Liu and Scott Reed -- are PhD students who are a part of the intern program here at Google, and actively participated in the work leading to the submissions. Without their dedication the team could not have won the detection challenge.

This effort was accomplished by using the DistBelief infrastructure, which makes it possible to train neural networks in a distributed manner and rapidly iterate. At the core of the approach is a radically redesigned convolutional network architecture. Its seemingly complex structure (typical incarnations of which consist of over 100 layers with a maximum depth of over 20 parameter layers) is based on two insights: the Hebbian principle and scale invariance. As a consequence of a careful balancing act, the depth and width of the network are both increased significantly at the cost of a modest growth in evaluation time. The resultant architecture leads to an over 10x reduction in the number of parameters compared to most state-of-the-art vision networks. This reduces overfitting during training and allows our system to perform inference with a low memory footprint.
For the detection challenge, the improved neural network model is used in the sophisticated R-CNN detector by Ross Girshick et al.[2], with additional proposals coming from the multibox method[1]. For the classification challenge entry, several ideas from the work of Andrew Howard[3] were incorporated and extended, specifically as they relate to image sampling during training and evaluation. The systems were evaluated both stand-alone and as ensembles (averaging the outputs of up to seven models) and their results were submitted as separate entries for transparency and comparison.
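
The ensembling step itself is simple: average the per-model class distributions and read off the top-ranked labels. The sketch below uses random numbers in place of real model outputs, just to show the mechanics:

    import numpy as np

    # Placeholder outputs: 7 models, each giving a probability distribution
    # over the 1000 ILSVRC classes for one image.
    model_probs = np.random.dirichlet(np.ones(1000), size=7)

    # Average the distributions, then take the top-5 labels used by the
    # classification metric.
    ensemble = model_probs.mean(axis=0)
    top5 = np.argsort(ensemble)[::-1][:5]
    print(top5, ensemble[top5])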

These technological advances will enable even better image understanding on our side and the progress is directly transferable to Google products such as photo search, image search, YouTube, self-driving cars, and any place where it is useful to understand what is in an image as well as where things are.

References:

[1] Erhan D., Szegedy C., Toshev, A., and Anguelov, D., "Scalable Object Detection using Deep Neural Networks", The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2014, pp. 2147-2154.

[2] Girshick, R., Donahue, J., Darrell, T., & Malik, J., "Rich feature hierarchies for accurate object detection and semantic segmentation", arXiv preprint arXiv:1311.2524, 2013.

[3] Howard, A. G., "Some Improvements on Deep Convolutional Neural Network Based Image Classification", arXiv preprint arXiv:1312.5402, 2013.

[4] Krizhevsky, A., Sutskever, I., and Hinton, G., "ImageNet classification with deep convolutional neural networks", Advances in Neural Information Processing Systems, 2012.

Wednesday, 3 September 2014

Working Together to Support Computer Science Education



(Cross-posted from the Google for Education blog)

Computer Science (CS) education in K-12 is receiving an increasing amount of attention from media and policy makers. Education groups have been working for years to build the infrastructure needed to support CS both inside and outside the school environment, including standards development and dissemination, models for teacher professional development, research, resources for educators, and the building of peer-driven and peer-supported communities of learning.

At Google, we strive to increase opportunities in CS and be a strong contributor to the community of those seeking to improve CS education through our engagement in research, curriculum resource development and dissemination, professional development of teachers, tools development, and large-scale efforts to engage young women and underrepresented groups in computer science. However, despite these efforts, there are still many challenges to overcome to improve the state of CS education.

For example, many people confuse computer science with education technology (the use of computing to support learning in other disciplines) and computer literacy (a very basic understanding of a limited number of computer applications). This confusion leads to the assumption that computer science education is taking place, when in fact in many schools it is not.

Women and minorities are still underrepresented in computer science education and in the high tech workplace. In her introduction to Jane Margolis’ Stuck in the Shallow End: Education, Race, and Computing, distinguished scientist Shirley Malcolm refers to computer science as “privileged knowledge” to which minority students often have no access. This statement is supported by data from the College Board and the National Center for Women and Information Technology.

Poverty also has a significant but often ignored impact on access to technology and quality computer science education. At present there are more than 16 million U.S. children living in poverty; these children are the least likely to have access to computer science knowledge and tools in their schools and homes.

There are many organizations and programs that focus on CS education, working hard to address these and other issues. This gives Google the unique opportunity to analyze gaps in existing efforts and apply our resources towards the programs that are most needed. In so doing, we hope to help uncover new strategies and create sustainable improvements to CS education.

Achieving systemic and sustained change in K-12 CS education is a complex undertaking that requires strategic support that complements both existing formal school programs and extracurricular education. Google is proud to be a member of the community committed to making tangible improvements to the state of CS education. In future blog posts, we will introduce you to some of the programs and resources that Google has been working on.

Tuesday, 2 September 2014

Hardware Initiative at Quantum Artificial Intelligence Lab



The Quantum Artificial Intelligence team at Google is launching a hardware initiative to design and build new quantum information processors based on superconducting electronics. We are pleased to announce that John Martinis and his team at UC Santa Barbara will join Google in this initiative. John and his group have made great strides in building superconducting quantum electronic components of very high fidelity. He was recently awarded the London Prize, recognizing his pioneering advances in quantum control and quantum information processing. With an integrated hardware group, the Quantum AI team will now be able to implement and test new designs for quantum optimization and inference processors based on recent theoretical insights, as well as what we have learned from the D-Wave quantum annealing architecture. We will continue to collaborate with D-Wave scientists and to experiment with the “Vesuvius” machine at NASA Ames, which will be upgraded to a 1000 qubit “Washington” processor.