Monday, 31 March 2014

Celebrating the First Set of Google Geo Education Awardees and Announcing Round Two



Google's GeoEDU Outreach program is excited to announce the opening of the second round of our Geo Education Awards, aimed at supporting qualifying educational institutions that are creating content and curricula for their mapping, remote sensing, or GIS initiatives.

If you are an educator in these areas, we encourage you to apply for an award. To celebrate the first round of awardees, and give a sense of the kind of work we have supported in the past, here are brief descriptions of some of our previous awards.

Nicholas Clinton, Tsinghua University
Development of online remote sensing course content using Google Earth Engine

Nick is building 10 labs for an introductory remote sensing class. Topics include electromagnetic radiation, image processing, time series analysis, and change detection. The labs are currently being taught, and the materials will be made available when the course is complete. From Lab 6:
Let's look at some imagery in Earth Engine. Search for the place 'Mountain View, CA, USA.' What the heck is all that stuff!? We are looking at this scene because of the diverse mix of things on the Earth's surface.
Add the Landsat 8 32-day EVI composite. What do you observe? Recall that the more vegetative cover, the higher the index. It looks like the "greenest" targets in this scene are golf courses.
Let's say we don't really care about vegetation (not true, of course!), but we do care about water.  Let's see if the water indices can help us decipher our Mountain View mystery scene.
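A quick aside on the indices these lab excerpts refer to: vegetation and water indices are simple normalized differences of two spectral bands. Here is a rough, standalone Python sketch (the reflectance values are invented for illustration; the labs themselves use Earth Engine's precomputed composites):

```python
# Spectral indices are normalized band ratios; values range from -1 to 1.
# The formulas below are the standard definitions.

def ndvi(nir, red):
    # Normalized Difference Vegetation Index: higher => denser vegetation,
    # because healthy plants reflect near-infrared and absorb red light.
    return (nir - red) / (nir + red)

def ndwi(green, nir):
    # A common Normalized Difference Water Index: higher => open water,
    # because water reflects green light and absorbs near-infrared.
    return (green - nir) / (green + nir)

# Hypothetical reflectances for a turf pixel and an open-water pixel:
print(round(ndvi(nir=0.45, red=0.08), 2))   # strongly positive => vegetated
print(round(ndwi(green=0.06, nir=0.02), 2)) # positive => likely water
```

The lab's observation that golf courses look "greenest" follows directly: irrigated turf has a high near-infrared/red contrast, so its vegetation-index values are among the highest in an urban scene.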

Dana Tomlin, University of Pennsylvania
Geospatial Programming: Child's Play

Dana is creating documentation, lesson plans, sample scripts, and homework assignments for each week in a 13-week, university-level course on geospatial programming. The course uses the Python computer programming language to utilize, customize, and extend the capabilities of three geographic information systems: Google’s Earth Engine, ESRI’s ArcGIS, and the open-source QGIS.

Declan G. De Paor, Old Dominion University
A Modular Approach to Introducing Google Mapping Technologies into Geoscience Curricula Worldwide

Declan's award supports senior student Chloe Constants, who is helping design Google Maps Engine and Google Earth Engine modules for existing geoscience coursework, primarily focused on volcanic and tectonic hazards and digital mapping. Declan and Chloe will present the modules at faculty development workshops, in person and online. They see GME/GEE as a terrific way to offer authentic undergraduate research experiences to non-traditional geoscience students.

Mary Elizabeth Killilea, New York University
Google Geospatial Tools in a Global Classroom: “Where the City Meets the Sea: Studies in Coastal Urban Environments"

Mary and the Global Technology Services team at NYU are developing a land-cover change lab using Google Earth Engine. NYU has campuses around the world, so their labs are written to be used globally. In fact, students at four campuses around the globe are currently collecting and sharing data for the lab. Students at each site analyze their local cities, but do so in a global context.

One group of students used Android mobile devices to collect land use data in New York's Battery Park.
While others in the same course collected these points in Abu Dhabi. Upon collection, the observations were automatically uploaded, mapped, and shared.

Scott Nowicki and Chris Edwards, University of Nevada, Las Vegas
Advanced Manipulation and Visualization of Remote Sensing Datasets with Google Earth Engine

Scott and Chris are taking biology, geoscience, and social science students on a field trip to collect geological data, and are producing screencast tutorials that show how these data can be queried, downloaded, calibrated, manipulated, and interpreted using free tools, including Google Earth Engine. The tutorials may be freely incorporated into any geospatial course. All of the field-site data and analyses will be publicly released and published, with a full description of the features available to investigate and guidance on how best to interpret both the remote sensing datasets and the ground-truth activities.

Steven Whitmeyer and Shelley Whitmeyer, James Madison University
Using Google Earth to Model Geologic Change Through Time

Steven and Shelley are building exercises for introductory geoscience courses focusing on coastal change, and glacial landform change. These exercises incorporate targets and goals of the Next Generation Science Standards. They are also developing tools to create new tectonic reconstructions of how continents and tectonic plates have moved since Pangaea breakup. Some of the current animations are available here and here.

We hope this overview of previous award recipients gives you a sense of the range of educational activities our GeoEDU awards are supporting. If you are working on innovative geospatial education projects, we invite you to apply for a GeoEDU award.

Friday, 28 March 2014

Sending data from Lantronix to Google Analytics

The following is a guest post from Kurt Busch, CEO, and Mariano Goluboff, Principal Field Applications Engineer at Lantronix.

Background
Google Analytics makes it easy to create custom dashboards that present data in the format that best drives business processes. We’ve put together a solution that makes several of our networking and remote-access devices easily configurable to deliver end-device data to Google Analytics. We use the Lantronix PremierWave family of devices to connect to an end device via a serial port (such as RS-232/485) or Ethernet, intelligently extract useful data, and send it to Google Analytics for use in M2M applications.

What you need
To get started, grab the Pyserial module and load it on your Lantronix PremierWave XC HSPA+. You’ll also want a device with a serial port that produces data you’d like to see in Google Analytics. A digital scale like the 349KLX is a good choice.

Architecture overview
With the Measurement Protocol, part of Universal Analytics, it is now possible to send data to Analytics from more than just web browsers.

Lantronix integrated the Measurement Protocol using an easy-to-deploy Python script. Because PremierWave and xSenso devices can execute Python natively, it is straightforward to deploy intelligent applications that leverage Python’s ease of programming and extensive libraries.

The demonstration consists of a scale with an RS-232 output, connected to a Lantronix PremierWave XC HSPA+. The Python script running on the PremierWave XC HSPA+ parses the data from the scale, and sends the weight received to Google Analytics, where it can then be displayed.

The hardware setup is shown in the picture below.



The technical details
The Python program demonstrated by Lantronix uses the Pyserial module to read and parse the scale's output. The serial port is easily initialized with Pyserial:
class ser349klx:
    # Set up the serial port. Pass the device as '/dev/ttyS1' or '/dev/ttyS2' for
    # serial port 1 or 2 (respectively) on the PremierWave EN or XC HSPA+.
    def __init__(self, device, weight, ga):
        # Keep retrying until the serial port opens successfully.
        while True:
            try:
                ser = serial.Serial(device, 2400, interCharTimeout=0.2, timeout=1)
                break
            except Exception:
                pass
        self.ser = ser
        self.weight = weight
        self.ga = ga

The scale used constantly sends the current weight via the RS-232 port, with each value separated by a carriage return:

def receive_line(self):
    buffer = ''
    while True:
        buffer = buffer + self.ser.read(self.ser.inWaiting())
        if '\r' in buffer:
            lines = buffer.split('\r')
            # Return the most recent complete reading; the final element
            # may be a partial line still being received.
            return lines[-2]
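To see why the method returns `lines[-2]`, here is a standalone sketch of the same buffer-splitting logic with an invented buffer (no serial port needed):

```python
# The scale streams readings separated by carriage returns, and a read from
# the port typically ends mid-reading, so the final split element is partial.
buffer = '00123\r00124\r001'  # two complete readings plus a partial one
lines = buffer.split('\r')    # ['00123', '00124', '001']
print(lines[-2])              # prints 00124, the most recent complete reading
```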

The code that reads a new weight is called from a loop, which waits for 10 consecutive, equal, non-zero values (letting the weight settle) before sending the value to Google Analytics, as shown below:
# This runs a continuous loop listening for lines coming from the
# serial port and processing them.
def getData(self):
    count = 0
    prev = 0.0
    while True:
        time.sleep(0.1)
        try:
            val = self.receive_line()
            self.weight.value = float(val[-5:]) * 0.166
            if prev == self.weight.value:
                count += 1
                if (count == 10) and (str(prev) != '0.0'):
                    self.ga.send("{:.2f}".format(prev))
            else:
                count = 0
            prev = self.weight.value
        except Exception:
            pass
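The settling rule is independent of the serial I/O, so it can be sketched on its own. In this standalone rewrite (the function name and list-based interface are ours, not Lantronix's), a value is reported only once it has repeated the required number of times and is non-zero:

```python
def settled_readings(readings, n=10):
    # Report a value only after it has been seen n additional consecutive
    # times (mirroring the count/prev logic above) and is non-zero.
    sent = []
    count = 0
    prev = 0.0
    for value in readings:
        if prev == value:
            count += 1
            if count == n and value != 0.0:
                sent.append(value)
        else:
            count = 0
        prev = value
    return sent

# An empty scale (all zeros) reports nothing; a settled weight reports once.
print(settled_readings([0.0] * 20 + [2.5] * 11))  # prints [2.5]
```

Filtering out the zero readings avoids sending an event every time the scale sits empty, and the repeat count suppresses the jitter while a weight is being placed.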

Since the Google Analytics Measurement Protocol uses standard HTTP requests to send data from devices other than web browsers, the ga.send method is easily implemented using the Python urllib and urllib2 modules, as seen below:

class gaConnect:
    def __init__(self, tracking, mac):
        self.tracking = tracking
        self.mac = mac

    def send(self, data):
        values = {'v': '1',
                  'tid': self.tracking,
                  'cid': self.mac,
                  't': 'event',
                  'ec': 'scale',
                  'ea': 'weight',
                  'el': data}
        res = urllib2.urlopen(urllib2.Request(
            "http://www.google-analytics.com/collect",
            urllib.urlencode(values)))
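For reference, the hit that this method produces is just a form-encoded POST body. Here is a minimal reconstruction using Python 3's standard library (the post's own code targets Python 2's urllib/urllib2 modules; the tracking ID, MAC address, and weight below are placeholder values):

```python
from urllib.parse import urlencode

# The same Measurement Protocol fields used in the send method above.
values = {'v': '1',                    # protocol version
          'tid': 'UA-XXXX-Y',          # placeholder tracking ID
          'cid': '00:80:a3:01:02:03',  # placeholder device MAC as client ID
          't': 'event',                # hit type
          'ec': 'scale',               # event category
          'ea': 'weight',              # event action
          'el': '12.50'}               # event label: the measured weight
body = urlencode(values)
print(body)
# v=1&tid=UA-XXXX-Y&cid=00%3A80%3Aa3%3A01%3A02%3A03&t=event&ec=scale&ea=weight&el=12.50
```

POSTing that body to the collect endpoint is all the protocol requires, which is why it works so naturally from embedded devices.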

The last piece is to create a Google Analytics connect object tied to the user’s Analytics account:

ga = gaConnect("UA-XXXX-Y", dev.mac)

The MAC address of the PremierWave device is used to send unique information from each device.

Results
With these pieces put together, it’s quick and easy to get data from the device to Google Analytics, and then use the extensive custom reporting and modeling that is available to view the data. For example, see the screenshot below of real-time events:



Using Lantronix hardware, you can connect your serial devices or analog sensors to the network via Ethernet, Wi-Fi, or Cellular. Using Python and the Google Analytics Measurement Protocol, the data can be quickly and easily added to your custom Google Analytics reports and dashboards for use in business intelligence and reporting.

Posted by Aditi Rajaram, the Google Analytics team


Thursday, 27 March 2014

More Thoughts on Facebook/Oculus

In yesterday’s post on the Oculus acquisition (link), I focused on the idea that Facebook sees virtual reality as the future of social networking. I was skeptical of the Oculus deal because I see VR as a more fundamental change, with much broader implications for the industry.

There is an alternate view, as a commenter on the post pointed out. Let’s assume for a minute that Mark Zuckerberg actually does understand the potential for VR to be far more than a communication technology. As I wrote last year (link), I think 3D displays combined with 3D printing and gesture interfaces can be the foundation of sensory computing, the next big computing revolution as important as the graphical interface revolution of the 1980s. Maybe Mark sees that, and the deal is really about him buying Oculus to remake computing, with Facebook just the vehicle he used to make the deal.

If so, bravo.

The companies that should be driving this new revolution are Microsoft and Apple, but computing incumbents are usually too wrapped up in their existing businesses to spot the next generation. Maybe it takes a relative outsider like Mark to see the potential. And in this case, it’s an outsider with oodles of money. Which is good news, because the new world of sensory computing will need a lot of investment to get it started.

When building a computing platform business, you have to hit a very careful balance between creating infrastructure that no one else can build, and leaving opportunity on the table so that others will invest. A successful computing platform is not a singular entity; it’s an ecosystem that wraps the platform vendor, developers, partners, and users into a network in which everyone invests and everyone benefits from the synergy between the parts.

In this world, the platform vendor is a bit like the conductor of an orchestra. You don’t play the instruments yourself; you make sure they all come together to make beautiful music.

To make sensory computing happen, Oculus will need to focus on four areas: technology standards, interface, economic model, and management.

Technology standards. Someone needs to define how the apps, software extensions, and accessories in the sensory computing world will communicate with each other. For example, how does a gesture recognition tool like Leap Motion communicate with the Oculus hardware and with applications built for it? Oculus needs to take the responsibility for creating the communication standards and APIs, and needs to write the sample code that will make it easy for developers to integrate them. This can’t be left to committees or the open source crowd. Committees are too slow, and open source is too chaotic.

Oculus also needs to write drivers. Lots and lots of drivers, to integrate its systems with the rest of the computing world. Most engineers hate writing drivers; they are boring and difficult and no one ever comes up to you and says, “killer keyboard driver, dude.” As a result platform vendors usually try to leave that detail to the open source community. But that doesn’t work, because volunteer open source developers are even less likely to do unsexy work. Building a platform without drivers is like building a city without sewer pipes. Mark, this is where your money will come in handy. Hire a bunch of good driver developers and put them to work interfacing Oculus with everything.

Some of the first drivers you need are 3D printer drivers. Make it easy for people to create in 3D and bring their creations into the real world.

Interface. Imagine the Mac without menus and windows and icons. A new computing paradigm needs new interface standards: how do we grab objects in a virtual world, how do we control the device, how do we move ourselves around, and how do we do all of that without inducing motion sickness (one of the biggest complaints from early users of the Oculus Rift hardware)? There’s some very subtle and challenging work to be done here. Oculus and its software partners have made a good start in the area of gaming (for example, how do you separate where you’re looking from where you’re shooting from where you want to move?). That same level of thinking needs to go into all aspects of sensory computing.

Economic model. The platform vendor needs to make sure that the people creating accessories and apps for the new platform have a reasonable chance to make money. The platform does not need to guarantee profit for everyone, but the good apps and accessories must have a reasonable chance to rise to the top and be rewarded. The App Store, for all its flaws, accomplished this on iOS. Facebook failed very badly in this area with its platform; it’ll have to do much better for sensory computing to succeed.

To go along with that economic model, you need evangelists: marketing/business managers who know how to recruit and motivate partners and developers. If Oculus had a staff of evangelists in place, they would have fanned out yesterday to explain the deal with Facebook and make sure it didn’t cause developers like Mojang to turn away.

Management. To run all of this, Oculus needs experienced people who have created platforms before and know how to avoid all the mistakes you can make along the way. This is a specialized area of knowledge, and not something you can learn on the job. Platform management is a skill set that doesn’t exist in either Facebook or Oculus today, and it’s also not available in Irvine, where Oculus is based. But it is available in Silicon Valley, 300 miles to the north.

The biggest challenge of all is figuring out how to make all of these changes and additions without overwhelming Oculus and losing its beautiful energy and vision and focus. I watched Palm turn from a spunky innovator into a bloated bureaucracy, and I don’t wish that fate on Oculus.

Some of the work, like driver creation, can be done in parallel without too much disruption to the core of Oculus. But many of the other changes reach into the heart of the company. It’ll take unusually skilled and patient management to implement all these changes. Mark Zuckerberg doesn’t have the time to do this, and I think Oculus doesn’t have all the bench strength it needs today. One of Mark’s key moves in growing Facebook was hiring experienced managers to supplement his skills. I think Oculus needs the same thing.


The big question

What does Mark Zuckerberg really want to do with Oculus? At this point there is enough contradictory information out there that you can read anything into the deal. But the most hopeful quote came from Oculus CEO Brendan Iribe, who described the early discussions with Zuckerberg (link):

“We showed him some of the internal prototypes, and he got so excited about the vision of what we were doing and about the potential that this is truly the next computing platform. He actually said that to us. And it’s like, ‘Wow! We are looking at this whole thing being just that gaming platform. But tell us more, Mark.’ And he started to describe it, and we started to believe it too. And we started to relate it to a lot of the experiences we were having.”

I’m still very skeptical about the risks in the deal, but computing desperately needs new leadership and ideas, and I hope the combination of Oculus and Zuckerberg will deliver them. I want to believe.

Making Sense of MOOC Data



In order to further evolve the open education system and online platforms, Google’s course design and development teams continually experiment with massive open online courses. Recently, at the Association for Computing Machinery’s Learning@Scale conference in Atlanta, GA, several members of our team presented findings about our online courses. Our research focuses on learners’ goals and activities, as well as self-evaluation as an assessment tool. In this post, I will present highlights from our research, as well as how we’ve applied it to our current course, Making Sense of Data.

Google’s five online courses over the past two years have provided an opportunity for us to identify learning trends and refine instructional design. As we posted previously, learners register for online courses for a variety of reasons. During registration, we ask learners to identify their primary goal for taking the class. We found that just over half (52.5%) of 41,000 registrants intended to complete the Mapping with Google course; the other half aimed to learn portions of the curriculum without earning a certificate. Next we measured how well participants achieved those goals by observing various interaction behaviors in the course, such as watching videos, viewing text lessons, and completing activities. We found that 42.4% of 21,000 active learners (those who did something in the course other than register) achieved the goals they selected during registration. Similarly, for our Introduction to Web Accessibility course, we found that 56.1% of 4,993 registrants intended to complete the course. Based on their interactions with course materials, we measured that 49.5% of 1,037 active learners achieved their goals.

Although imperfect, these numbers are more accurate measures of course success than completion rates. Because students come to the course for many different reasons, course designers should make it easier for learners to meet a variety of objectives. Since many participants in online courses may just want to learn a few new things, we can help them by releasing all course content at the outset of the course and enabling them to search for specific topics of interest. We are exploring other ways of personalizing courses to help learners achieve individual goals.

Our research also indicates that learners who complete activities are more likely to complete the course than peers who complete none. Activities include auto-graded multiple-choice or short-answer questions that encourage learners to practice skills from the course and receive instant feedback. In the Mapping with Google course, learners who completed at least sixty percent of the course activities were much more likely to submit final projects than peers who finished fewer activities. This leads us to believe that, as course designers, we should pay more attention to creating effective, relevant activities than to focusing so heavily on course content. We hypothesize that learners also use activities’ instant feedback to decide whether to spend time reviewing the associated content. In that scenario, learners could benefit from experiencing activities before course content.

As technological solutions for assessing qualitative work are still evolving, an active area of our research involves self-evaluation. We are also intrigued by previous research showing the links between self-evaluation and enhanced metacognition. In several courses, we have asked learners to submit projects aligned with course objectives, calibrate themselves by evaluating sample work, then apply a rubric to assess their own work. Course staff graded a random sample of project submissions, then compared the learners’ scores with course staff’s scores. In general, we found a moderate agreement on Advanced Power Searching (APS) case studies (55.1% within 1 point of each other on a 16-point scale), with an increased agreement on the Mapping projects (71.6% within 2 points of each other on a 27-point scale). We also observed that students submitted high quality projects overall, with course staff scoring 73% of APS assignments a B (80%) or above; similarly, course staff evaluated 94% of Mapping projects as a B or above.

What changed between the two courses that allowed for a higher agreement with the mapping course? The most important change seems to be more objective criteria for the mapping project rubric. We also believe that we haven’t given enough weight to teaching learners how to evaluate their own work. We plan to keep experimenting with self-evaluation in future courses.


Since we are dedicated to experimenting with courses, we have not only applied these findings to the Making Sense of Data course, but we have also chosen to experiment with new open-source software and tools. We’re exploring the following aspects of online education in this class:

  • Placing activities before content
  • Reduced use of videos
  • Final project that includes self-reflection without scores
  • New open-source technologies, including authoring the course using edX studio and importing it into cbX (running on Google’s AppEngine platform) as well as Oppia explorations

We hope that our research and the open-source technologies we’re using will inspire educators and researchers to continue to evolve the next generation of online learning platforms.

Wednesday, 26 March 2014

Facebook, Ego, and Oculus Rift

When a big company is still controlled by its founders, its greatest strength is that it has the resources and the freedom to do almost anything, regardless of the shortsighted fears of investors. That’s also its greatest weakness. Case in point, Facebook.

I can rationalize reasons why Oculus VR is a good fit for Facebook, but I think the official explanation for the deal is pretty thin. To me, it says more about Facebook’s ego than it does about a coherent long-term strategy. Deals like this between dissimilar companies have a long history of failure in Silicon Valley; to make it work, Facebook will need to be skilled in some areas where it has little experience. The company is also creating important new competitors to itself, in ways that echo Google’s Motorola acquisition. I’m a huge fan of Oculus Rift, so I hope the deal ends better than the Motorola one. But history makes me skeptical.


Why Facebook wanted Oculus

Facebook’s explanation is that virtual reality is a new platform that, like mobile, could revolutionize social interaction. Facebook says it wants to be at the leading edge of that 3D social revolution, rather than trailing it the way it did mobile. That makes sense superficially, but the more you think about it, the shakier it sounds as the reasoning for this particular deal.

First of all, if you believe VR is a new platform, it’s not clear why you need to buy a hardware goggles company. It’s not like Oculus Rift is the only pair of 3D goggles in development. With Facebook’s market strength, it could have set a software standard and easily gotten it adopted by all the 3D vision companies. A small minority investment in Oculus would have been enough to secure their support. If you wanted a play in social VR, why not snap up SecondLife? Linden Lab has invested more than a decade in building software infrastructure for social VR, and would have cost a lot less than $2 billion.

Maybe you feel that the hardware and software have to be developed together. That’s a very Apple-like attitude, and therefore trendy in Silicon Valley. There have been persistent rumors that Facebook was working on its own phone. Maybe Facebook decided that it was too late to join the phone business, but it could get a jump on everyone else in 3D.

But if hardware-software integration is the key, you’d want to drive deep integration between Oculus and Facebook’s software. You wouldn’t promise to run Oculus as a separate company, which is what Facebook claims it’ll do.

I think the real reason not to buy something like SecondLife is that it’s no longer trendy. Nothing smells worse in Silicon Valley than a company that failed to live up to its over-the-top hype, and the hype for SecondLife was astonishing about seven years ago. Oculus, on the other hand, is still at the takeoff stage in the hype cycle. It is the subject of a cult in the PC gaming community. The company promised to hit a sweet spot of affordability and quality for VR, and hardcore gamers embraced it enthusiastically through one of the first blowout Kickstarter campaigns. Although Oculus wasn’t mainstream news, there are literally millions of Oculus Rift-related videos on YouTube, most of them from enthusiasts drooling over the prototypes.

The cool factor. So by buying Oculus, Facebook makes itself cooler. The trouble is, it makes Oculus less cool. The enthusiasts who embraced Oculus because of its perceived authenticity and deep ties to the gaming world are appalled at the thought of it being owned by Facebook, which is seen as the poster child for lame low-res social gaming. It’s as if Motel 6 bought the Ritz-Carlton. The Verge has a nice roundup of the angst here. My favorite quote: “even Microsoft would have...been better than Facebook.”

Fear of missing out. I wonder if another motivation behind the Oculus purchase was the fear that if Facebook didn’t act, someone else would buy the company. If you feel VR is important and if Oculus is a leader, then maybe you buy it just so you don’t get closed out. The big VCs who invested in Oculus have a playbook for acquisitions, and it usually involves creating competitive bids, or the fear of them, to drive up the price. If Facebook was afraid that a competitor might buy the company, it might have felt the need to make a deal fast at an aggressive price.

Fighting Google. I think the primary motivation for the Oculus purchase was competition with Google. Both companies are led by ambitious technophile founders, and both have more money than they can count. Google has a cool new thermostat company, lots of neat special projects, and a very strong play in mobile that it is leveraging to push its own services, to the potential detriment of Facebook. Google also has a smart glasses initiative. Now Facebook has its own headgear, and the hottest new technology in gaming. The social aspect is important, but I think Facebook just wanted to be a bigger, more dynamic player. As Harry McCracken put it over at Time, “the world's biggest social network is no longer satisfied with just being a social network.” (link).

Isn’t it interesting how companies impose their own mental paradigms on technologies? Google looks at glasses and sees a way to search and consume web services on the go. Facebook looks at goggles and sees a new means for social communication.

That’s exactly what scares the fans of Oculus. They wanted the next great gaming experience, not a communication tool.


Risks of the deal

That brings us to the dangers in the Oculus deal. Let’s start with the thing not to worry about: the money. Facebook has more cash than it can possibly spend. An acquisition like this is just a way of recycling some of it. It’s kind of like Japan Inc. buying golf courses in the US in the 1980s. They had to do something with the money.

What I’m worried about are the odds that the deal won’t live up to Facebook’s lofty expectations. Let’s start with the risks to Oculus.

Loved to death. Whenever a big company buys a little one, there’s a big risk that the acquiring company will smother its new acquisition to death with enthusiasm. Everyone in the parent company is excited about the sexy new partner and has great ideas on how they can work together. There’s no way for the acquired company to deal with even a small fraction of these new ideas; usually it was working flat out just to do the basics prior to the deal and has no bandwidth for anything else.

Often the acquirer will be aware of this mismatch, and authorize the acquisition to hire a bunch of new employees to deal with the overload. But then the acquisition finds itself consumed by the hiring process, and its capacity for work actually goes down while the new employees are hired and trained. Usually first hiring priority has to be given to the parent company’s own employees, meaning the acquisition gets flooded with the parent company’s culture and business practices, and loses much of the distinctiveness that made it valuable in the first place.

To avoid smothering the acquisition, senior management in the parent company has to rigorously limit contact with the acquisition, and allow it to gradually staff up and grow into its new role. Does Facebook have that sort of discipline? So far it’s saying the right things, but the proof will be in actions, not words.

Arrested evolution. I’ve seen this happen over and over again. New device paradigms, if they succeed at all, usually create their own new usage patterns that nobody can predict in advance. To put it another way, we don’t know what the killer app for VR will be yet. Oculus is still in early beta on its first product, so it hasn’t had much opportunity to learn from the market. Chances are that when it ships, it’ll find that customer reactions pull it in directions it didn’t expect. Features that the company expected to be hot will go by the wayside, while something they casually tossed in at the last minute will turn out to be the biggest differentiator. A nimble startup can usually pivot to follow these discoveries. Will Facebook be open enough to let Oculus find its own way in the market, even if that leads it away from Facebook’s core business? If so, it would be a rare big company indeed.

Dealing with developers. Although Oculus and Facebook agree that its long-term future extends beyond gaming, I think it’s fair to say that unless the company is successful in gaming it may not be able to branch to those other markets. Success in gaming means recruiting developers to support Rift. Oculus had a lot of momentum prior to the acquisition, but we’ve already seen one developer (Mojang, the creator of Minecraft) decommit because of the Facebook deal (link).

I don’t think Mojang is necessarily an opinion leader among hardcore gamers, but Facebook’s history with developers worries me a lot. At one time, in the race to defeat MySpace, Facebook embraced developers enthusiastically. It made itself a welcoming platform for them, and many companies, especially game creators, jumped in enthusiastically. But although Facebook offered a lot of technical support for developers, it never put much effort into helping them make money. It was almost as if the company lost interest in developers once MySpace was out of the way.

I’m not saying that Facebook deliberately mistreated developers, but I think it never understood that a successful platform has to be both technically cool and financially rewarding to developers. Facebook never made the economics of its platform work, and as a result its developer base withered away. Some of the survivors switched to mobile instead and became leaders in the new generation of mobile games. To this day, if you get them talking in private they’ll tell you about their lingering distrust of Facebook.

Facebook seems a lot more comfortable evangelizing developers to use its login and advertising APIs, rather than creating an economic and technology platform that makes them successful. But that won’t be enough to make Oculus a winner. If Facebook is serious about VR as the next big paradigm, it needs to change itself to embrace VR developers and help them succeed as businesses. Will Facebook learn how to take care of a platform business? Or will it take orders from tiny little Oculus in this area? To me, that’s one of the most important unknowns in the Oculus deal.

Indigestion. My other concern is that Oculus could create internal and external problems for Facebook. Working on VR may pull Facebook’s attention away from other, more pressing competitive threats. To me, the most important near-term challenge to Facebook is the rise of the Asian messaging networks that combine free short messaging with games and other online services. The acquisition of WhatsApp was meant to counter that, but Facebook still has to figure out how it’ll be integrated with the core company. Do Mark Zuckerberg and his management team have enough time and brain juice to figure out how to integrate both WhatsApp and Oculus?

Buying Oculus also creates potential new enemies for Facebook. Until the acquisition, companies like Sony and Microsoft had good reasons to view Facebook as a potential partner in their struggles against Google (remember, Microsoft owns about 1.6% of Facebook). But Oculus founder Palmer Luckey has been outspoken in his criticism of both PS4 and Xbox, saying they don’t have the power to do proper VR. And he has speculated about building mobile wireless chips into the Rift goggles, making them a long-term competitor to the smartphone (link). How will Apple feel about Facebook buying a company that says it’s going to make the smartphone obsolete?

When Google bought Motorola Mobility in 2012, I saw the shock and fear it generated in the Android licensee base. Even though Google eventually sold off most of Motorola, those companies will never again fully trust Google. I don’t think Oculus is the same level of shock to Facebook’s allies, but I suspect they’re now asking themselves whether they can trust Facebook as a partner in the future. That could hurt Facebook in ways it doesn’t even imagine today.

So I’m hopeful because I believe in the potential for VR, but I’m also very worried. For the Oculus deal to work, Facebook needs to understand developers much more deeply, exercise self-restraint organizationally, and navigate a very tricky landscape of allies who are now also competitors. None of those skills are particular strengths of Facebook today.

I hope they can learn quickly.

_____

Edit: There's an alternate view of the deal: What if Mark Zuckerberg really does want to make Oculus into the next generation of computing, not just a social extension? That creates another set of challenges, which I discuss here.

Tell a Meaningful Story With Data




This article was originally posted on Google Think Insights.

Most organizations recognize that being a successful, data-driven company requires skilled developers and analysts. Fewer grasp how to use data to tell a meaningful story that resonates both intellectually and emotionally with an audience. Marketers are responsible for this story; as such, they’re often the bridge between the data and those who need to learn something from it, or make decisions based on its analysis. As marketers, we can tailor the story to the audience and effectively use data visualization to complement our narrative. We know that data is powerful. But with a good story, it’s unforgettable.

Rudyard Kipling once wrote, “If history were taught in the form of stories, it would never be forgotten.” The same applies to data. Companies must understand that data will be remembered only if presented in the right way. And often a slide, spreadsheet or graph is not the right way; a story is.

Executives and managers are being bombarded with dashboards brimming with analytics. They struggle with data-driven decision making because they don’t know the story behind the data. In this article, I explain how marketers can make that data more meaningful through the use of storytelling.

The power of a meaningful story

In her “Persuasion and the Power of Story” video, Stanford University Professor of Marketing Jennifer L. Aaker explains that stories are meaningful when they are memorable, impactful and personal. Through the use of interesting visuals and examples, she details the way people respond to messaging when it’s delivered either with statistics or through story. Although engagement differs between the two approaches, she does not suggest one over the other. Instead, Aaker surmises that the future of storytelling incorporates both, stating, “When data and stories are used together, they resonate with audiences on both an intellectual and emotional level.”

 In his book Facts Are Sacred, Simon Rogers discusses the foundations of data journalism and how The Guardian is using data to tell stories. He identifies ten lessons he’s learned from building and managing The Guardian’s Datablog, a pioneering website in the field. I found three of the lessons particularly insightful:
  1. Data journalism (and analytics in a broader sense) is a form of curation. There is so much data and so many data types that only experienced analysts can separate the wheat from the chaff. Finding the right information and the right way to display it is like curating an art collection. 
  2. Analysis doesn’t have to be long and complex. The data collection and analysis process can often be rigorous and time consuming. That said, there are instances when it should be quick, such as when it’s in response to a timely event that requires clarification. 
  3. Data analysis isn’t about graphics and visualizations; it’s about telling a story. Look at data the way a detective examines a crime scene. Try to understand what happened and what evidence needs to be collected. The visualization—it can be a chart, map or single number—will come naturally once the mystery is solved. The focus is the story. 
Stories, particularly those that are meaningful, are an effective way to convey data. Now let’s look at how we can customize them for our audiences.

Identify the audience

Most captivating storytellers grasp the importance of understanding the audience. They might tell the same story to a child and adult, but the intonation and delivery will be different. In the same way, a data-based story should be adjusted based on the listener. For example, when speaking to an executive, statistics are likely key to the conversation, but a business intelligence manager would likely find methods and techniques just as important to the story.

In a Harvard Business Review article titled “How to Tell a Story with Data,” Dell Executive Strategist Jim Stikeleather segments listeners into five main audiences: novice, generalist, management, expert and executive. The novice is new to a subject but doesn’t want oversimplification. The generalist is aware of a topic but looks for an overview and the story’s major themes. The management seeks in-depth, actionable understanding of a story’s intricacies and interrelationships with access to detail. The expert wants more exploration and discovery and less storytelling. And the executive needs to know the significance and conclusions of weighted probabilities.

Discerning an audience’s level of understanding and objectives will help the storyteller to create a narrative. But how should we tell the story? The answer to this question is crucial because it will define whether the story will be heard or not.

Using data visualization to complement the narrative

Analytics tools are now ubiquitous, and with them come a laundry list of visualizations—bar and pie charts, tables and line graphs, for example—that can be incorporated into reports and articles. With these tools, however, the focus is on data exploration, not on aiding a narrative. While there are examples of visualizations that do help tell stories, they’re rare and not often used in meetings and conferences. Why? Because finding the story is significantly harder than crunching numbers.

In their “Narrative Visualization: Telling Stories with Data” paper, Stanford researchers discuss author versus reader-driven storytelling. An author-driven narrative doesn’t allow the reader to interact with the charts. The data and visualizations are chosen by the author and presented to the reader as a finished product, similar to a printed magazine article. Conversely, the reader-driven narrative provides ways for the reader to play with data.

With the advent of data journalism, we’re now seeing these two approaches used together. According to the Stanford researchers, “These two visual narrative genres, together with interaction and messaging, must balance a narrative intended by the author with story discovery on the part of the reader.”

A good example of a hybrid author-reader approach is the presentation of The Customer Journey to Online Purchase tool. A few short paragraphs explain why the tool was created and how it works, and an interactive chart allows marketers to break down the information by industry and country. Additional interactive data visualizations provide even more context.

Another extremely efficient and visual way to tell a story is by using maps. In a tutorial on visualization, I show how a large data set can be transformed and incorporated into a story. It’s an example of how to take charts and graphs to the next level in order to add value to the story. In this case, I use Google Fusion Tables and some publicly available data to illustrate analytics data with colorful, interactive maps. The visualization provides more content for those interested in diving deeper into the data.



A good data visualization does a few things. It stands on its own; if taken out of context, the reader should still be able to understand what a chart is saying because the visualization tells the story. It should also be easy to understand. And while too much interaction can distract, the visualization should incorporate some layered data so the curious can explore.

Marketers are responsible for messaging; as such, they’re often the bridge between the data and those who need to learn something from it, or make decisions based on its analysis. By rethinking the way we use data and understanding our audience, we can create meaningful stories that influence and engage the audience on both an emotional and logical level.

Posted by Daniel Waisberg, Analytics Advocate

Tuesday, 25 March 2014

Understand the full value of TrueView ads with the new Video Campaigns report

Advertisers know that video ads have the ability to reach and convince customers in ways that other formats can’t, but traditional TV ads are often prohibitively expensive, difficult to target, and hard to measure. That’s why so many advertisers have looked to YouTube TrueView ads for their video needs.  With more than 1 billion unique users each month from across the world and with 40% of that traffic on mobile, YouTube is one of the best places to reach your target audience with high-quality, compelling video.

We’ve heard lots of feedback from loyal Google Analytics users asking for better TrueView reporting, which is why we’re so excited to announce a new Google Analytics Video Campaigns report that focuses on your TrueView ads. With this new report rolling out over the next few days, users can now see the detailed effects of their TrueView campaigns on their website traffic and revenue. You can access the new report under Acquisition > AdWords > Video Campaigns.


If you’ve never created a TrueView ad, it’s easy to do with AdWords for Video. Just head into AdWords, and under the +Campaign button, select Online Video.  


Once you’ve created an auto-tagged TrueView ad in AdWords and linked your Google Analytics and AdWords accounts, your TrueView-ad-driven traffic will show up in the Video Campaigns report after about 24 hours. This report has the familiar look and feel of the other AdWords reports but includes TrueView-specific metrics like Paid Views, Cost Per View, and Website Clicks. There are also new metric groups like Engagement, which helps you understand how users engage with your video and your website.  

Using this newly available data, you can fine-tune your TrueView campaign settings to optimize for views, clicks, or goal conversions. You can also segment the reports by Ad Content or Video, helping you analyze the quality of your video creatives in the context of your website goals. 

In addition, since TrueView ads are often more brand-focused, traffic they generate to your site will often be indirect traffic.  In order to analyze this type of traffic, check out the new Google Display Network Impression Reporting pilot, which can help you understand conversions that resulted from unclicked impressions or video views.  With this report, it’s possible to see how your TrueView ads are generating value beyond just direct clicks; you can dive deeper to understand how impressions, views, and clicks all contributed directly or indirectly to conversions on your site.


To get started with Video Campaigns reporting, simply link your AdWords and Google Analytics accounts and start an auto-tagged TrueView campaign via AdWords for video. After that, head over to the new report to fine-tune your budgets and targeting.  See you on YouTube!

Posted by Jon Mesh, Google Analytics Product Manager

Thursday, 20 March 2014

Berkeley Earth Maps Powered by Google Maps Engine now available in the Google Maps Gallery



Google Maps is a familiar and versatile tool for exploring the world, but adding new data on top of Google Maps has traditionally required expending effort for both data management and website scripting. Google recently expanded Google Maps Engine and debuted an updated Google Maps Gallery. These tools aim to make it easier for users and organizations to integrate their geographic data with Google Maps and share it with the world. At Berkeley Earth we had an early opportunity to work with these new tools.

The use of Google Maps Engine eliminates the need for users to run their own map-serving Web servers. Maps Engine also handles mundane mapping tasks, such as automatically converting georeferenced image files into beautiful map layers that can be viewed in Google Maps, no programming required.


Annual average land-surface temperature during the period 1951-1980 as estimated by Berkeley Earth.

Similarly, one can take tables of location data and map them onto a Google Map using geographic markers and popup message boxes that make it easy to explore georeferenced information.


Map of the more than 40,000 temperature stations used by the Berkeley Earth analysis. On the left is part of the original table of data. On the right is its representation in Google Maps Engine.

When mapping locations, the new Maps Engine tools allow users to upload their own geographic markers or choose from Google’s many selections; the geographic marker icons used in the temperature station map above were uploaded by us. Alternatively, we could have used one of the stock icons provided by Maps Engine. In addition, users can customize the content and appearance of the popup message boxes by using HTML. If the georeferenced data can be linked to the web addresses of already existing online content, one can also incorporate images or outgoing links within the message boxes, helping the user find more information about the content presented in the map.
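As a rough illustration of the kind of popup customization described above (the class names, placeholder syntax, and URLs here are hypothetical stand-ins for columns in an uploaded data table, not Maps Engine's actual templating), an HTML info-window body might look something like this:

```html
<!-- Hypothetical popup template for one temperature station.
     {name}, {photo_url}, and {station_url} represent fields
     drawn from the uploaded table of station data. -->
<div class="station-popup">
  <h3>{name}</h3>
  <!-- An image pulled in from already existing online content -->
  <img src="{photo_url}" alt="Photo of station {name}" width="200">
  <p>Temperature records for this station, as used in the
     Berkeley Earth analysis.</p>
  <!-- An outgoing link so the user can find more information -->
  <a href="{station_url}">View full station data</a>
</div>
```

Because the popup is plain HTML, anything that works in a web page (images, links, styled text) can appear in the message box.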

The ease of putting image layers into the new Maps Engine has allowed Berkeley Earth to create and share many scalable maps of climate and weather information that are fun to explore. Incorporating these maps in our website and posting them on the Google Maps Gallery provides the public with a new tool to help locate local weather stations, learn about local climate, and download various kinds of weather and climate data.

Now, anyone can easily learn about both the weather in their city and the climate of the entire globe from a single, simple interface. Google Maps Engine and the new Maps Gallery have allowed us to bring the story of climate to a broad audience in a way that can be easily understood.

Tuesday, 18 March 2014

New tools to grow your mobile app business

Today at the Game Developers Conference in San Francisco we will be announcing two key launches powered by Google Analytics and Google Tag Manager. You can follow the livestream today at 10:00AM PDT (5:00PM UTC) with the Google Analytics sessions from 2:30PM PDT.

Announcement #1: Bringing the power of Google Analytics to AdMob
We’re happy to announce that Google Analytics is fully available in the AdMob interface on a new Analyze tab. App developers now have a one-stop way to measure success and adjust their earning strategies based on what they learn.

Today’s app developers have to make decisions quickly and implement them seamlessly if they want to stay relevant. It also helps if every business decision is backed up and validated by reliable data. Until now, app developers using AdMob and Google Analytics had to use two separate tools to monetize and measure. Starting today, they’re now in one place.

More than just Google Analytics inside AdMob
The new tab is simpler, yes. But app businesses can also now make decisions faster without losing data accuracy. They’ll also benefit from a new set of features that make measurement the foundation of all monetization programs:
  • A drop-down menu to switch between individual app reports
  • A new home page with combined Google Analytics and AdMob reporting
  • A new Analyze tab with all Google Analytics reports
To see the new feature in action, sign in to your AdMob account and look for the Analyze tab at the top of the page. 


Your new Home tab in AdMob will now incorporate data on how your app is monetizing as well as how it is performing overall, with insights on in-app purchases, traffic, and ads metrics, all in one tab: a feature unique to AdMob.


Get started in one click with Google Analytics and AdMob 
1. Log in to AdMob or open a new account, and sign up for Google Analytics (GA) in the new Analyze tab. 
2. If you are already using Google Analytics for your apps, you can link your existing account with AdMob in the Analyze tab. 
3. If you are not using Google Analytics, you can sign up via AdMob and complete the process without leaving the interface.

Announcement #2: New Content Experiments with Google Tag Manager
People have a lot of choice when it comes to apps, and keeping them engaged is a challenge. Businesses that experiment with different app layouts have a better chance of finding the best-performing solution and keeping users engaged. A few months ago we announced Google Tag Manager for apps; today we are enabling content experiments: an easy way to set up and run experiments that change anything from in-app promotions to menu layout. With Google Tag Manager you can modify app configuration for existing users without having to ship a new version.

But how can we be sure that we are changing it for the best? Wouldn’t it be better if you could validate business decisions with data? Now you can run content experiments on a subset of your users to choose the best option: Where should you show promotions? How often? Data in Google Analytics will answer your questions, and you can be sure your decisions are backed by data.

Google Tag Manager has been built to be very intuitive, even for people not familiar with coding. Businesses can now let their marketers or business analysts run experiments without requiring a developer to be involved. App experiments are now accessible to everyone.



Getting started with Google Tag Manager
  1. Sign up for an account at www.google.com/tagmanager and create a mobile container
  2. Download the SDK for either Android or iOS. 
  3. Start programming! Use the SDK to instrument configuration and events you care about in your app.
  4. When you’re ready to dynamically change your app, use the Google Tag Manager interface to start configuring. Remember to press the “Publish” button to push your rules and configurations to your users.
Posted by Russell Ketchum, Lead Product Manager, Google Analytics for Mobile Apps and Google Tag Manager