Wednesday, 30 April 2014

A Billion Words: Because today's language modeling standard should be higher



Language is chock full of ambiguity, and it can turn up in surprising places. Many words are hard to tell apart without context: most Americans pronounce “ladder” and “latter” identically, for instance. Keyboard inputs on mobile devices have a similar problem, especially for IME keyboards. For example, the input patterns for “Yankees” and “takes” look very similar:
Photo credit: Kurt Partridge

But in this context -- the previous two words, “New York” -- “Yankees” is much more likely.

One key way computers use context is with language models. These power predictive keyboards, as well as speech recognition, machine translation, spelling correction, query suggestions, and more. Often the models are specialized: word order in queries can be very different from word order on web pages. In every case, an accurate language model with wide coverage drives the quality of these applications.
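To make the idea concrete, here is a toy sketch of how an n-gram language model uses the previous word to rank candidates. The tiny corpus and add-one smoothing below are illustrative only, not the benchmark's data or modeling approach:

```python
from collections import defaultdict

# A minimal bigram language model: count word pairs in a (made-up) corpus,
# then score candidate next words given the previous word.
corpus = (
    "new york yankees win again . "
    "the new york yankees play tonight . "
    "it takes time . he takes the train ."
).split()

bigram = defaultdict(lambda: defaultdict(int))
unigram = defaultdict(int)
for prev, word in zip(corpus, corpus[1:]):
    bigram[prev][word] += 1
    unigram[prev] += 1

def prob(word, prev):
    """P(word | prev) with add-one smoothing over the observed vocabulary."""
    vocab = len(set(corpus))
    return (bigram[prev][word] + 1) / (unigram[prev] + vocab)

# Given the context word "york", "yankees" outscores "takes".
print(prob("yankees", "york") > prob("takes", "york"))  # -> True
```

A real keyboard or recognizer would use much longer contexts and smarter smoothing, but the principle is the same: context reshapes the probability of each candidate.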

Due to interactions between components, error attribution can be tricky when evaluating the quality of such complex systems. Good engineering practice is to evaluate the quality of each module separately, including the language model. We believe the field could benefit from a large, standard data set with benchmarks for easy comparison and for experiments with new modeling techniques.

To that end, we are releasing scripts that convert a set of public data into a language modeling benchmark of over a billion words, with standardized training and test splits, described in an arXiv paper. Along with the scripts, we’re releasing the processed data, including the training and test sets, in one convenient location. This will make it much easier for the research community to quickly reproduce results, and we hope it will speed up progress on these tasks.

The benchmark scripts and data are freely available, and can be found here: http://www.statmt.org/lm-benchmark/

The field needs a new and better standard benchmark. Currently, researchers report results on data sets of their choice, and those results are very hard to reproduce because there is no standard for preprocessing. We hope this release will solve both problems and become the standard benchmark for language modeling experiments. As more researchers adopt it, comparisons will become easier and more accurate, and progress will be faster.

To all the researchers out there: try out this benchmark, run your experiments, and let us know how it goes -- or publish, and we’ll enjoy finding your results at conferences and in journals.

Wednesday, 23 April 2014

Sharing is Caring - Unleash your productivity with asset sharing in Google Analytics


Innovation happens on every level

Within your organization there are multiple people working on different sides of the same problem. Making it easy for people to quickly and effectively share innovative solutions is a key enabler for more productivity, and better decisions. 

We are proud to announce a series of asset sharing tools within Google Analytics that make it even easier to spread your innovative solutions and assets. Our permalink solution is a simple-to-use, privacy-friendly way to share Google Analytics configurations across your organization, and beyond.

Narrow the focus for precise insights

Our popular segments feature helps you narrow the focus of your analysis. Are you testing a hypothesis about new or recurring customers? Is this report more meaningful if you focus on a particular region? By sharing a segment, you share a certain point of view on a problem. Invite others to your view by sharing a segment you built, or a custom report.

Define success, and spread the love

Goals in Google Analytics help advertisers map real business value to a conversion signal. Track users' site engagement, media interactions, or sales events through Goal tracking. Now it is easier than ever to share your definition of success across other views, or with other people in your organization.

Capture everything with Custom Channels Groupings

It all starts with traffic to your website. You spend a tremendous amount of effort and resources on getting people to visit. Custom Channel Grouping within Multi-Channel Funnels enables you to categorize all of your traffic, including traffic that is specific to your business model. Sharing this important view is now easier than ever: create a Custom Channel Grouping and share it across your organization.

Assign partial value to your marketing efforts

Custom Attribution Models allow Google Analytics users to assign partial value to the channel interactions which drive business value. You invest time and effort to build a customized attribution model, which reflects the nuances of your business. Now it is easier than ever to ensure all stakeholders are working off the same consistent definition of attribution.

“Amazing feature! I tried it … and like it.”
Sebastian Pospischil, Director Digital Analytics, United Digital Group

How it works

Permalink is a simple-to-use, privacy-friendly way to share configuration assets. When you ‘share’ an asset, we create a copy of that asset or configuration and generate a unique URL that points to the copy. The copy remains private and can only be accessed by someone with the URL. If you want to share your asset, just share the URL. The recipient clicks the URL and is brought to a simple dialog to import the assets into his or her Google Analytics views. This feature also supports Dashboards and Custom Reports.

Check out our Solutions Gallery within your Google Analytics account via the “Import from Gallery” button or directly at the standalone site for inspiration, and consider sharing your own permalinks via the “Share in Solutions Gallery” link. 

Happy Analyzing.

Posted by Stefan Schnabl, on behalf of the Google Analytics team

Thursday, 17 April 2014

Understanding multi-device user behavior in a single view

In this constantly connected world, users can interact with your business across many digital touchpoints: websites, mobile apps, web apps, and other digital devices. So to help you understand what users do in the increasingly diverse digital landscape, we’re enabling the ability to see web and app data in the same reporting view.



Here’s a bit more detail on this change:

Analyze app and web data in the same reporting view
Now you can see all the data you send to one Google Analytics property in a single reporting view, regardless of the collection method you use or where the data comes from. If you send data from the web and from a mobile app to one property, both data sets appear in your reports.

If you want to isolate data from one source, such as seeing only web data in your reports, you can set up a filter to customize what you see. You can also use other tools to isolate each data set, including customizations in standard reports, dashboards, custom reports, and secondary dimensions.

If you don’t send web and app data to the same property, this change doesn’t affect your data or your account.

Measure web apps
We’ve also added some new app-specific fields to the analytics.js JavaScript web collection library, including screen name, app name, app version, and exception tracking. These changes allow the JavaScript tracking code to take advantage of the app tracking framework, so you can more accurately collect data on your web apps.

Benefit from consistent dimension & metric names
Until today, some metrics and dimensions used different names in app views and web views, even though they presented exactly the same data. Now, all metric, dimension, and segment names are the same, regardless of whether they’re used for web or app data. This gives you a clear and consistent way to analyze and refer to all of your Google Analytics data.

Visitors are now Users and Visits are Sessions
There are two big changes to the names in Google Analytics: First, the Visitors web metric and the Active Users app metric are now unified under the same name, Users. Second, Visits are now referred to as Sessions everywhere in Google Analytics.

We’ll be making these changes starting today and rolling them out incrementally over the next week. Visit our developer site for more information on these changes.
Posted by Nick Mihailovski, Product Manager

Wednesday, 16 April 2014

Lens Blur in the new Google Camera app



One of the biggest advantages of SLR cameras over camera phones is the ability to achieve shallow depth of field and bokeh effects. Shallow depth of field makes the object of interest "pop" by bringing the foreground into focus and de-emphasizing the background. Achieving this optical effect has traditionally required a big lens and aperture, and therefore hasn’t been possible using the camera on your mobile phone or tablet.

That all changes with Lens Blur, a new mode in the Google Camera app. It lets you take a photo with a shallow depth of field using just your Android phone or tablet. Unlike a regular photo, Lens Blur lets you change the point or level of focus after the photo is taken. You can choose to make any object come into focus simply by tapping on it in the image. By changing the depth-of-field slider, you can simulate different aperture sizes, to achieve bokeh effects ranging from subtle to surreal (e.g., tilt-shift). The new image is rendered instantly, allowing you to see your changes in real time.

Lens Blur replaces the need for a large optical system with algorithms that simulate a larger lens and aperture. Instead of capturing a single photo, you move the camera in an upward sweep to capture a whole series of frames. From these photos, Lens Blur uses computer vision algorithms to create a 3D model of the world, estimating the depth (distance) to every point in the scene. Here’s an example -- on the left is a raw input photo, in the middle is a “depth map” where darker things are close and lighter things are far away, and on the right is the result blurred by distance:

Here’s how we do it. First, we pick out visual features in the scene and track them over time, across the series of images. Using computer vision algorithms known as Structure-from-Motion (SfM) and bundle adjustment, we compute the camera’s 3D position and orientation and the 3D positions of all those image features throughout the series.

Once we’ve got the 3D pose of each photo, we compute the depth of each pixel in the reference photo using Multi-View Stereo (MVS) algorithms. MVS works the way human stereo vision does: given the location of the same object in two different images, we can triangulate the 3D position of the object and compute the distance to it. How do we figure out which pixel in one image corresponds to a pixel in another image? MVS measures how similar they are -- on mobile devices, one particularly simple and efficient way is computing the Sum of Absolute Differences (SAD) of the RGB colors of the two pixels.
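The SAD similarity measure mentioned above is simple enough to show directly. The pixel values here are made-up examples; a lower score means the two pixels are more likely to correspond:

```python
def sad(rgb_a, rgb_b):
    """Sum of Absolute Differences between two RGB pixels (lower = more similar)."""
    return sum(abs(a - b) for a, b in zip(rgb_a, rgb_b))

# Identical pixels score 0; dissimilar pixels score higher.
print(sad((120, 80, 40), (120, 80, 40)))  # -> 0
print(sad((120, 80, 40), (130, 70, 45)))  # -> 25
```

Its appeal on mobile devices is exactly this simplicity: a handful of subtractions and additions per pixel pair, with no multiplications.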

Now it’s an optimization problem: we try to build a depth map where all the corresponding pixels are most similar to each other. But that’s typically not a well-posed optimization problem -- you can get the same similarity score for different depth maps. To address this ambiguity, the optimization also incorporates assumptions about the 3D geometry of a scene, called a "prior,” that favors reasonable solutions. For example, you can often assume two pixels near each other are at a similar depth. Finally, we use Markov Random Field inference methods to solve the optimization problem.
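A toy version of this energy makes the trade-off visible. Here each pixel on a tiny scanline picks one of three depth labels; the data costs are invented for illustration, the smoothness prior penalizes neighbors with different depths, and brute force stands in for real MRF inference:

```python
import itertools

# data_cost[pixel][depth_label]: how poorly each label matches (e.g. a SAD
# score). Values are made up for this sketch.
data_cost = [
    [0, 5, 9],
    [6, 1, 8],
    [7, 2, 6],
]
LAMBDA = 2  # weight of the smoothness prior

def energy(labels):
    """Data term plus a prior that neighboring pixels have similar depth."""
    data = sum(data_cost[i][d] for i, d in enumerate(labels))
    smooth = sum(LAMBDA * abs(a - b) for a, b in zip(labels, labels[1:]))
    return data + smooth

# Exhaustively try every labeling of the 3-pixel scanline (27 options).
best = min(itertools.product(range(3), repeat=3), key=energy)
print(best)  # -> (0, 1, 1)
```

Even in this tiny example, the prior matters: the middle and right pixels agree on depth 1 because deviating would pay a smoothness penalty. Real scenes have millions of pixels and many depth labels, which is why efficient MRF inference methods are needed instead of enumeration.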

Having computed the depth map, we can re-render the photo, blurring pixels by differing amounts depending on the pixel’s depth, aperture and location relative to the focal plane. The focal plane determines which pixels to blur, with the amount of blur increasing proportionally with the distance of each pixel to that focal plane. This is all achieved by simulating a physical lens using the thin lens approximation.
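One simplified form of that thin-lens relationship: blur grows with the inverse-depth distance from the focal plane and scales with the simulated aperture. The function and constants below are an illustrative sketch, not the app's actual rendering model:

```python
def blur_radius(depth, focal_depth, aperture):
    """Simplified thin-lens blur: radius grows with the inverse-depth
    distance from the focal plane and scales with aperture size.
    Units and constants here are illustrative."""
    return aperture * abs(1.0 / depth - 1.0 / focal_depth)

# Pixels on the focal plane stay sharp; nearer and farther pixels get blurred.
print(blur_radius(2.0, 2.0, 50.0))  # on the focal plane -> 0.0
print(blur_radius(1.0, 2.0, 50.0))  # nearer  -> 25.0
print(blur_radius(8.0, 2.0, 50.0))  # farther -> 18.75
```

Note the asymmetry this produces: near objects blur faster than far ones at the same distance from the focal plane, matching the behavior of a physical lens. Dragging the depth-of-field slider corresponds to changing the aperture term.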

The algorithms used to create the 3D photo run entirely on the mobile device, and are closely related to the computer vision algorithms used in 3D mapping features like Google Maps Photo Tours and Google Earth. We hope you have fun with your bokeh experiments!

Monday, 14 April 2014

Improving Your Data Quality: Google Analytics Diagnostics

Google Analytics is a powerful product with a wealth of features. Analytics data can fuel powerful actions like improving websites, streamlining mobile apps, and optimizing marketing investment. To realize this power, you must configure Analytics well and ensure high quality data. For these reasons, we’re starting a beta test with some of our users on Analytics Diagnostics that are aimed at finding data-quality issues, making you aware of them, and helping you fix them.

Analytics Diagnostics frequently scans for problems. It inspects your site tagging, account configuration, and reporting data for potential data-quality issues, looking for things like:
  • Missing or malformed Analytics tags 
  • Filters that conflict
  • The presence of (other) entries in reports
Here’s what it looks like:


As we get lots more feedback and improve the diagnostics system, we will release this to all of our users. It will take some time to get there; in the meantime, you are welcome to express interest in trying out the diagnostics system on your own GA accounts.

Posted by the Google Analytics Team

Friday, 11 April 2014

New user and sequence based segments in the Core Reporting API

Segmentation is one of the most powerful analysis techniques in Google Analytics. It’s core to understanding your users, and allows you to make better marketing decisions. Using segmentation, you can uncover new insights such as:
  • How loyalty impacts content consumption
  • How search terms vary by region
  • How conversion rates differ across demographics
Last year, we announced a new version of segments that included a number of new features.

Today, we’ve added this powerful functionality to the Google Analytics Core Reporting API. Here's an overview of the new capabilities we added:

User Segmentation
Previously, advanced segments were solely based on sessions. With the new functionality in the API, you can now define user-based segments to answer questions like “How many users had more than $1,000 in revenue across all transactions in the date range?”

Example: &segment=users::condition::ga:transactionRevenue>1000

Try it in the Query Explorer.
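For reference, a sketch of how that segment parameter fits into a Core Reporting API (v3) request URL. The view (profile) ID is a placeholder, and a real request would also carry OAuth credentials:

```python
from urllib.parse import urlencode

# Build a Core Reporting API (v3) query using the user-based segment above.
params = {
    "ids": "ga:XXXXXX",  # placeholder view (profile) ID
    "start-date": "2014-03-01",
    "end-date": "2014-03-31",
    "metrics": "ga:users,ga:transactionRevenue",
    "segment": "users::condition::ga:transactionRevenue>1000",
}
url = "https://www.googleapis.com/analytics/v3/data/ga?" + urlencode(params)
print(url)
```

The `urlencode` call percent-escapes the `::` and `>` characters in the segment definition, which is required for the parameter to survive the HTTP request intact.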

Sequence-based Segments
Sequence-based segments provide an easy way to segment users based on a series of interactions. With the API, you can now define segments to answer questions like “How many users started at page 1, then later, in a different session, made a transaction?”

Example: segment=users::sequence::ga:pagePath==/shop/search;->>perHit::ga:transactionRevenue>10

Try it in the Query Explorer.

New Operators
To simplify building segments, we added a number of new operators for filtering on dimensions whose values are numbers and for limiting metric values to ranges. Additionally, we updated segment definitions in the Management API segments collection.

Partner Solutions
Padicode, one of our Google Analytics Technology Partners, used the new sequence-based segments API feature in their funnel analysis product, PadiTrack.

PadiTrack allows Google Analytics customers to create ad-hoc funnels to identify user flow bottlenecks. By fixing these bottlenecks, customers can improve performance, and increase overall conversion rate.

The tool is easy to use and allows customers to define an ad-hoc sequence of steps. The tool uses the Google Analytics API to report how many users completed, or abandoned, each step.


According to Claudiu Murariu, founder of Padicode, “For us, the new API has opened the gates for advanced reporting outside the Google Analytics interface. The ability to do a quick query and find out how many people added a product to the shopping cart and at a later time purchased the products allows managers, analysts, and marketers to easily understand completion and abandonment rates. Now, analysis is about people and not abstract terms such as visits.”

The PadiTrack conversion funnel analysis tool is free to use. Learn more about PadiTrack on their website.

Resources

We’re looking forward to seeing what people build using this powerful new functionality.

Posted by Nick Mihailovski, Product Manager, Google Analytics team

Wednesday, 9 April 2014

Smarter remarketing with Google Analytics



Sometimes, less is more.
While many marketers love the hundreds of dimensions they can use to create remarketing lists in Google Analytics, others have told us that the sheer number of possibilities can be overwhelming.

So to simplify the product while still ensuring great results for our users, we’re proud to announce a new type of remarketing list: one that’s managed automatically.

Introducing: Smart Lists with Google Analytics.
Now when creating a new remarketing list, you’ll have the option to have Analytics manage your list for you.

Smart List option in the Remarketing Interface

How does it work?
Smart Lists are built using machine learning across the millions of Google Analytics websites that have opted in to share anonymized conversion data. They use dozens of signals, such as visit duration, page depth, location, device, referrer, and browser, to predict which of your users are most likely to convert during a later visit.

Based on their on-site actions, Analytics is able to calibrate your remarketing campaigns to align with each user’s value.

If you use eCommerce transaction tracking and have enough traffic and conversions, your Smart List will be automatically upgraded. Marked as [My Smart List], your list will be customized based on the unique characteristics that cause your visitors to convert. Only you will have access to this list, and no new data will be shared whether you use this feature or not (learn more).

For practitioners, the promise of big data is also the burden - there are so many analyses to run, so much opportunity.  With Smart Lists, as with Data Driven Attribution, Google Analytics is  operationalizing statistical analysis - making us not just smarter marketers - but faster and more nimble. 

While we might have been able to achieve similar results with ongoing statistical analysis and a complex cookie structure, Smart Lists are simply plug and play. This speeds us along, so we can focus not on list management, but on growing the business. 
-- Melissa Shusterman, Engagement Director, www.maassmedia.com

For best results, make sure your Google Analytics goals and transactions are being imported into AdWords, then combine your Smart List with Conversion Optimizer using Target CPA or ROAS in AdWords.

If you’re new to remarketing, the Smart List is a great way to get started with strong performance results.  As you get comfortable with remarketing you can tailor your creatives and apply a variety of remarketing best practices.

If you’re a remarketer already employing a sophisticated list strategy, stay tuned while we gear up to extend this signal directly for your current lists as an optimization signal used in AdWords bidding.

We’ll be continuing to iterate on these models in order to help users better understand and act on their data. We’re also working on surfacing these signals elsewhere in your reports and in the product so you can dive into what factors help predict whether a user will likely convert.

We welcome your feedback and ideas. Please leave them right in the comments!

Happy Analyzing,
Ismail Sebe and Dan Stone
on behalf of the Google Analytics Team

Monday, 7 April 2014

Analytics & AdWords Bulk Account Linking

To maximize marketing investment and return, advertisers need insights into the effectiveness of their ads. However, gaining such insights is often overly cumbersome. This is why we’re pleased to announce that in the coming weeks, the Google Analytics and AdWords account linking process is becoming even more streamlined, making it easier for advertisers to quickly gain rich insights. The new linking process allows you to link multiple AdWords accounts all at once. This enables more tightly controlled linking access for each Google Analytics property. 

Enable Bulk AdWords Account Linking
Many Google Analytics users have multiple AdWords accounts. Until now, each AdWords account had to be individually linked. The new account linking wizard allows you to select any of the AdWords accounts in which you have Administrative access. The following screenshot shows what the wizard looks like for a user who has access to an AdWords MCC containing many AdWords accounts. Note that you can select multiple accounts:

Discover Unlinked Accounts
Many users want to quickly find unlinked AdWords accounts and link to them, and the new wizard makes this easy. A quick glance at the AdWords account list in the screenshot above shows which accounts are and aren’t linked. To link additional accounts, just mark the “X” in front of each account, and then continue.


Gain More Granular Control
With this launch, linking to AdWords now takes place at the Analytics property level instead of the account level. This is a benefit for those with many properties in a single Analytics account; if you have different teams of people managing each property, you no longer need to give them access to the full Analytics account in order to link to AdWords. Now, you can simply give that team access to only the appropriate property, and they can manage AdWords links. All it takes is property-level Edit permission to create and update AdWords links. This is another Analytics feature enabling large-scale Analytics customers to better control access to their Analytics accounts.

Visit The New AdWords Linking Section
Once the new linking process has launched to your account, you’ll be able to see all these features. Log in to your Analytics account, click the Admin button in the header, and you’ll see a new AdWords Linking section in the Property column:


These great new features are rolling out now and should fully launch to everyone in the coming weeks. Here’s what one of our users had to say:

"The linking process is now a lot more straightforward as I do not need to toggle between 2 different interfaces. Everything can be done in GA. In addition, all of the accounts that I manage are automatically listed in the interface so I do not need to look for them. This is a vast improvement from the previous experience." Sam Chew, Digital Manager, Air Asia

Log into your Analytics account soon to update your AdWords account links and gain rich marketing insights.

Posted by Dan Fielder and Matt Matyas, Google Analytics Team

Wednesday, 2 April 2014

Universal Analytics: Out of beta, into primetime

Universal Analytics is the re-imagining of Google Analytics for today’s multi-screen, multi-device world and all the measurement challenges that come with it. Since we launched UA in beta, we’ve seen some exciting use cases. Today we’re happy to finally announce: Universal Analytics is out of beta and everyone can use it with the same robust set of features you’re used to with classic Analytics!


Feature parity with Classic Analytics, new reports, better user-centric analysis
When we first introduced Universal Analytics and ran the beta trial, the number one request from our testers was for full access to all Google Analytics features and tools. Bringing Universal Analytics out of beta means that all the features, reports, and tools of Classic Analytics are now available in the product, including Remarketing and Audience reporting.

We’re also gradually rolling out the User ID feature to help you better understand your customers’ full journey. This feature shows anonymous engagement activity across different screens and visits to your site to provide a more user-centric view of your traffic, and help you build a more tailored experience for your customers as well. It will also enable new Cross Device reporting that shows how your users are interacting with your business across multiple devices. 

Additionally, Universal Analytics is now covered by our Premium service-level agreement, which means the level of service and additional product features Premium users have come to expect will carry over when their accounts upgrade to Universal Analytics.

New Cross-Device Reports in GA let you see the full customer journey (click image for full-size).

Time Zone Based Processing: Fresher, more timely data
Today, all properties are processed in Pacific Standard Time. If you’re in a different time zone, this can create a lag in the data you see in your reports. With time zone based processing, you’ll see fresher data in your reports in a more timely manner.

Updates to the Measurement Protocol: User Agent / IP Override 
A top developer request, this feature allows developers to proxy data from devices and intranets through internal servers and on to Google Analytics. To support this, we added two fields to the Measurement Protocol that set the IP address and User Agent directly. With these features, we are also announcing the deprecation of the legacy mobile snippets; users should update their code to use the Measurement Protocol.
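As a sketch of what a proxied hit looks like, here is a Measurement Protocol pageview carrying the IP and User Agent override fields (`uip` and `ua`). The tracking ID, client ID, and values below are placeholders:

```python
from urllib.parse import urlencode
from urllib.request import Request, urlopen

# A Measurement Protocol hit sent by an internal proxy server, forwarding
# the original client's IP and user agent via the override fields.
payload = {
    "v": "1",                # protocol version
    "tid": "UA-XXXXX-Y",     # placeholder tracking ID
    "cid": "555",            # anonymous client ID
    "t": "pageview",         # hit type
    "dp": "/intranet/home",  # page path
    "uip": "203.0.113.5",    # IP override: the original client's IP
    "ua": "Mozilla/5.0 (compatible; ExampleAgent/1.0)",  # UA override
}
body = urlencode(payload).encode("utf-8")
req = Request("https://www.google-analytics.com/collect", data=body)
# urlopen(req)  # uncomment to actually send the hit
print(body.decode("utf-8"))
```

Without the override fields, reports would attribute every hit to the proxy server's own IP and user agent; with them, geographic and device reports reflect the original clients.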

Our early Universal Analytics adopters have already seen some great results. This case study highlights some of the inspired ways our Certified Partner InfoTrust LLC has helped Beckfield College unlock the full capabilities of Universal Analytics including the use of Remarketing and Audience Reporting:

"Once we saw more than 25% of visits to Beckfield College's website were coming from a mobile device, we migrated them to Universal Analytics with plans on leveraging its cross-device tracking capabilities, and better understanding the full visitor journey across devices." -- James Love, InfoTrust LLC

If you use Google Analytics today, get started with Universal Analytics by upgrading your account. Learn more about the process, including the auto-upgrade process and timeline, in the Universal Analytics Upgrade Center.

If you are new to Google Analytics, learn more about Universal Analytics in the Help Center. 

We’ll share more creative implementations, case studies, and Universal Analytics resources in the coming months that we hope will inspire you to continue to grow your business with the insights you gain using Google Analytics. 

Posted by Nick Mihailovski, Product Manager, Google Analytics

Sawasdee ka, Voice Search




Typing on mobile devices can be difficult, especially when you're on the go. Google Voice Search gives you a fast, easy, and natural way to search by speaking your queries instead of typing them. In Thailand, Voice Search has been one of the most requested services, so we’re excited to now offer users there the ability to speak queries in Thai, adding to over 75 languages and accents in which you can talk to Google.

To power Voice Search, we teach computers to understand the sounds and words that build spoken language. We trained our speech recognizer to understand Thai by collecting speech samples from hundreds of volunteers in Bangkok, which enabled us to build the recognizer in a fraction of the time it took to build other models. Volunteers were asked to read popular queries in their native tongue in a variety of acoustic conditions, such as in restaurants, on busy streets, and inside cars.

Each new language for voice recognition requires our research team to tackle new challenges, and Thai was no exception:
  • Segmentation is a major challenge in Thai, as the Thai script has no spaces between words, so it is harder to know when a word begins and ends. Therefore, we created a Thai segmenter to help our system recognize words better. For example: ตากลม can be segmented to ตาก ลม or ตา กลม. We collected a large corpus of text and asked Thai speakers to manually annotate plausible segmentations. We then trained a sequence segmenter on this data allowing it to generalize beyond the annotated data.
  • Numbers are an important part of any language: when the string “87” appears on a web page, we need to know how people would say it. As with over 40 other languages, we included a number grammar for Thai that tells the system “87” is read as แปดสิบเจ็ด.
  • Thai users often mix English words with Thai, such as brand or artist names, in both spoken and written Thai which adds complexity to our acoustic models, lexicon models, and segmentation models. We addressed this by introducing ‘code switching’, which allows Voice Search to recognize when different languages are being spoken interchangeably and adjust phonetic transliteration accordingly.
  • Many Thai users leave out accents and tone markers when they search (e.g., โน๊ตบุก instead of โน้ตบุ๊ก, or หมูหยอง instead of หมูหย็อง), so we created a special algorithm to restore accents and tones in the search results we provide, ensuring Thai users see properly formatted text in the majority of cases.
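The segmentation challenge can be illustrated with a dictionary-based sketch. The tiny dictionary below covers the post's example, where ตากลม splits as either ตาก|ลม or ตา|กลม; a real segmenter is a trained sequence model rather than exhaustive search:

```python
# Toy dictionary covering the ambiguous example from the post.
DICT = {"ตา", "ตาก", "กลม", "ลม"}

def segmentations(text):
    """Return every way to split unspaced `text` into dictionary words."""
    if not text:
        return [[]]
    results = []
    for i in range(1, len(text) + 1):
        word = text[:i]
        if word in DICT:
            results += [[word] + rest for rest in segmentations(text[i:])]
    return results

for seg in segmentations("ตากลม"):
    print(" ".join(seg))  # both plausible splits are found
```

Enumerating the candidates is the easy part; the hard part, which the trained segmenter handles, is ranking them so the split a human would choose comes out on top.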

We’re particularly excited that Voice Search can help people find locally relevant information, ranging from travel directions to the nearest restaurant, without having to type long phrases in Thai.

Voice Search is available for Android devices running Jelly Bean and above. It will be available for older Android releases and iOS users soon.


Tuesday, 1 April 2014

Making Blockly Universally Accessible


We work hard to make our products accessible to people everywhere, in every culture. Today we’re expanding our outreach efforts to support a traditionally underserved community -- those who call themselves "tlhIngan."

Google's Blockly programming environment is used in K-12 classrooms around the world to teach programming. But the world is not enough. Students on Qo'noS have had difficulty learning to code because most of the teaching tools aren't available in their native language. Additionally, many existing tools are too fragile for their pedagogical approach. As a result, Klingons have found it challenging to enter computer science. This is reflected in the fact that less than 2% of Google engineers are Klingon.

Today we launch a full translation of Blockly in Klingon. It incorporates Klingon cultural norms to facilitate learning in this unique population:

  • Blockly has no syntax errors. This reduces frustration, and reduces the number of computers thrown through bulkheads.
  • Variables are untyped. Type errors can too easily be perceived as a challenge to the honor of a student's family (and we’ve seen where that ends).
  • Debugging and bug reports have been omitted; our research indicates that, in the event of a bug, Klingons prefer the entire program to simply blow up.

Get a little keyboard dirt under your fingernails. Learn that although ghargh is delicious, code structure should not resemble it. And above all, be proud that tlhIngan maH. Qapla'!

You can try out the demo here or get involved here.

Mastering the science of random chance: Dataless Decision Making comes to Analytics Academy

The world of digital analytics changes fast. From Attribution Modeling to Universal Analytics. From App Analytics to Remarketing. From Tag Management to Audience Reporting. We’re constantly trying to help analysts and marketers measure their business and make better decisions. But sometimes we ignore alternative ways to make business decisions.

That’s why we’re excited to introduce our next Analytics Academy course: Data-less Decision Making.



In this four-unit course, we’ll present some of the most popular ways to avoid using data when making business decisions.  We’ll cover everything from mystical tools, like crystal balls and divining rods, to traditional data-avoidance techniques, like coin flipping. You’ll find that once you adopt these methods you’ll be able to make hundreds, and maybe thousands, of decisions a day!

Still wondering if this course is right for you? Check out our FAQ for more information.

We hope you enjoy the course!

Posted by the Google Analytics Education Team

Introducing iPierce

Confidential documents obtained from Apple indicate that, despite all the rumors, the company’s first entry into wearable computing will not be a watch. Instead, Apple plans a line of intelligent body piercings, collectively known as iPierce.

The first iPierce devices will be an eyebrow ring and a navel barbell (which Apple calls a stud). Both include a microphone, speaker, Bluetooth LE, WiFi, motion sensor, GPS, 4 MP camera, four gigs of RAM, and a lithium ion battery. The ring also includes a small low-power laser (more on that below), and the stud has a USB connector that enables it to be tethered directly to a smartphone or Mac. The eyebrow ring weighs one ounce, and the navel stud two ounces. The small size and light weight of the devices were made possible by a custom A7 processor, designed by Apple, that incorporates the CPU, memory, and radio controllers on a single die.

The iRing, as Apple calls it, will come in a single model, but can be customized with interchangeable colored gemstones created at Apple’s new sapphire factory in Arizona. There will be three models of iStud: a star, an Apple logo, and an adorable little kitty playing with a ball. All were designed by Jonathan Ive and each is carved from a single piece of surgical steel.

The iRing can also be installed in the ear or other fleshy appendage, but the iStud unfortunately cannot be worn in the tongue due to interference between the low-power radio and a user’s dental fillings. In early testing there were three cases of minor burns caused by inductive heating of the user’s fillings when they coupled with the radio frequency of the WiFi transceiver. Apple is researching ways to implant a small external antenna that would enable iStud to safely operate inside the mouth.

One of the breakthrough features of iPierce is that the devices don’t need to be charged. Special piezoelectric chips in the device convert the user’s body motions into electricity and trickle-charge the battery. In most cases, that is enough to keep the device charged, but if power becomes low the devices can also digest the user’s blood cells to produce additional energy. This will not have a noticeable effect on the user unless they use a lot of WiFi, in which case they might become slightly anemic. For this reason, each iPierce will come with a bottle of iron pills. The pill bottle was designed by Jonathan Ive and was carved from a single piece of brushed aluminum.


The software

All iPierce devices come bundled with the standard applications you’d expect: messaging, notifications, ringtones, relational database, and an app store. iPierce is controlled through a combination of Siri speech recognition and gesture recognition (for example, raising your eyebrow is equivalent to a swipe up on iPhone).

The iStud includes a color-changing LED that illuminates when the user receives a call or message. It flashes red for text messages, green for phone calls, and blue for App Store updates. The other colors are reserved for use by developers. There is also a vibration mode for use in libraries and other quiet settings.

The low-power laser in iRing can be used to display a screen image directly on the user’s eyeball (since the name Retina Display was already in use, Apple calls this technology iEye). One interesting application of this technology is that if a user has two iRings (one for each eye) they can automatically superimpose an image over anything that the user doesn’t want to see. For example, a large Hibiscus plant can be superimposed over an overflowing trash dumpster. Apple has written software that automatically detects any social media post that has a spoiler to a television show listed in the user’s Preferences, and replaces it with a quote from Hunter S. Thompson.

Apple says third party developers are working on iEye apps that will completely replace the user’s surroundings with synthetic environments. For example, a user could choose to live in a Lord of the Rings environment, with his or her friends replaced by characters from the movie, licensed through the App Store. For an extra fee, you can even make Siri talk like Gollum. ("Nasty little hobbitses wants to find a restaurant, do they? Siri never gets invited to eat at restaurants. All the hobbitses say is 'Siri calculate the tip.' Next time Siri sends you to a Taco Bell with a dirty bathroom.")


Availability

iPierce devices will be sold and installed only at Apple Stores. Apple has quietly trained more than three thousand store employees in how to install iPierce, assisted by a custom piercing device that I’m told resembles “a highly instrumented staple gun.” The staple gun was designed by Jonathan Ive and was carved from a single piece of titanium.

Like the iPhone battery, iPierce devices are not removable. But users will be able to buy screw-on upgrades.


Background and future plans

The iPierce project originated in 2009, when a super-secret team at Apple working on a smart watch presented their first prototype to Steve Jobs. I’m told by a contact at Apple that Jobs was aghast. “He shouted, ‘That’s the stupidest idea I’ve ever heard. Nobody wears watches anymore. I’d rather have a nail driven through my head than wear a watch.’” 

When the team returned to its confidential off-campus location in Sunnyvale, CA, it realized that no one was sure if Jobs’ last comment was hyperbole or an instruction on the product they should build. The team decided it was safest to assume it was an order, and switched their work to body piercings.

"We leaked the plans for a watch as a decoy," my Apple contact told me. "We figured we could probably get Google and Samsung to waste $50 million each working on a clone. I've got a bet with a friend that if we plant a rumor that we're working on a smart airplane we can make Google buy Boeing."

Now that iPierce is finally near completion, Apple's next wearable initiative will be iTat, a line of touch-sensitive LED tattoos. When paired with an iPierce, these tattoos could be programmed to display photographs, movies, and games, and of course could also be used as a flashlight.


Posted April 1, 2014

Eight other April Firsts on Mobile Opportunity: 
2013: The truth about Google Street View
2012: Twitter at Gettysburg
2011: The microwave hairdryer, and four other colossal tech failures you've never heard of
2010: The Yahoo-New York Times merger
2009: The US government's tech industry bailout
2008: Survey: 27% of early iPhone adopters wear it attached to a body piercing
2007: Twitter + telepathy = Spitr, the ultimate social network
2006: Google buys Sprint