“Apple” Category

Apple’s Advantage

Monday’s WWDC Keynote was easily the largest set of changes made to Apple’s platforms since iOS 2 was announced in 2008. The effects of what was announced will be felt and discussed for years to come.
There is a lot to think through and write about, which I will be doing in the coming weeks. However, something struck me during the keynote that felt fairly small at the time but, thinking about it afterward, could end up being important to Apple’s future success.

Apple announced further updates to their cloud service, where you can save all of the photos and videos you take, all of your documents and all of your data. Apple announced that their Touch ID feature, which identifies you using your fingerprint, will now be accessible to third-party developers as well. And Apple announced a new app and framework for centralizing all of your health and fitness data, which—given your permission—can automatically be sent to your doctor.
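
For developers, the Touch ID piece is exposed through the LocalAuthentication framework. Here is a minimal sketch of what third-party use might look like, assuming that API (the function name, fallback behavior and prompt text are my own illustrative choices, not Apple’s sample code). The important part for this discussion: the app only ever receives a yes-or-no answer, and the fingerprint data itself never leaves the device.

```swift
import LocalAuthentication

// A sketch of third-party Touch ID use via the LocalAuthentication framework.
// The function name and prompt text are illustrative; the app receives only a
// success-or-failure result, never the fingerprint itself.
func unlockWithTouchID(completion: @escaping (Bool) -> Void) {
    let context = LAContext()
    var error: NSError?

    // Make sure the device supports Touch ID and has a fingerprint enrolled.
    guard context.canEvaluatePolicy(.deviceOwnerAuthenticationWithBiometrics,
                                    error: &error) else {
        completion(false) // Fall back to your own passcode or password prompt.
        return
    }

    // The system presents and handles the fingerprint prompt on the app's behalf.
    // Note: the reply block is not guaranteed to run on the main queue.
    context.evaluatePolicy(.deviceOwnerAuthenticationWithBiometrics,
                           localizedReason: "Unlock your account") { success, _ in
        completion(success)
    }
}
```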

That’s in addition to storing your contacts, calendar and reminders, and tracking your location over time (and keeping that data on your device) so your iPhone can provide you with timely updates on how long it will take to get home or to work in current traffic. Taken together, Apple is asking you to store nearly all of your intimate information on their devices and servers, and even to provide the most intimate—your health data—to your doctor.

And yet I’ve heard little or no consternation over Apple’s consolidating our most private data, in an era when our government maintains call logs, collects security and encryption exploits, breaks into private services to collect data, and has lied to the public about the extent of what it is doing.
That should be surprising, especially considering how much push-back companies like Google and Facebook have received for collecting and using our personal data. On the whole, people seem to trust Apple to respect their personal data.

The reason, I think, starts with the fact that Apple’s business is *not* their users’ data. Their business is selling devices and services to their users. As a result, Apple’s interest in their users’ data is not to generate revenue (which is inherently Google and Facebook’s interest), but rather to use it in such a way that they can create compelling and meaningful products for their customers. Their incentives are aligned with their users’ incentives because of their business model.

Second, Apple takes this relationship very seriously. iOS makes it very clear when applications are requesting access to our personal data. Apple has worked quite hard to make sure that the *user* decides what and how much they want to share.
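
The new HealthKit framework makes that explicit in its API. Here is a rough sketch of the permission model, assuming the HKHealthStore interface Apple described (the specific data types below are arbitrary examples of mine): an app has to enumerate exactly which types it wants, the user grants or denies each one individually, and the app is never even told which read permissions were declined.

```swift
import HealthKit

// A sketch of HealthKit's permission model. The data types requested are
// arbitrary examples; the user approves or denies each one individually.
let healthStore = HKHealthStore()

func requestHealthAccess(completion: @escaping (Bool) -> Void) {
    // Health data isn't available on every device.
    guard HKHealthStore.isHealthDataAvailable() else {
        completion(false)
        return
    }

    // The app must declare exactly which types it wants to read or write.
    let readTypes: Set<HKObjectType> = [
        HKObjectType.quantityType(forIdentifier: .heartRate)!,
        HKObjectType.quantityType(forIdentifier: .stepCount)!
    ]

    // The system shows the permission sheet. `success` means the request was
    // processed, not that access was granted; for privacy, the app is never
    // told which read permissions the user declined.
    healthStore.requestAuthorization(toShare: nil, read: readTypes) { success, _ in
        completion(success)
    }
}
```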

I don’t think Google or Facebook could announce that they are going to collect their users’ health data and optionally send it to their doctors without some reasonably large amount of criticism and fear of abuse. The reason is obvious: their primary business is utilizing user data to generate revenue, so why couldn’t they do the same with health data?

As time goes on, the integration of our smartphones and health-tracking devices, and the increasingly sophisticated use of the data they generate together, will become the primary space where meaningful development in technology occurs. There’s huge potential for what Apple has announced with HealthKit. If it takes off, it will be a single place to store all of our health data. That will not only benefit doctors, who will be able to see that data in one place for the first time, but by aggregating it for each individual (and potentially for groups), we will be able to see trends and correlations between our decisions and our health that we simply could not see before.

That has the potential for both better decision-making and for doctors to get ahold of us when something appears to be seriously wrong that we ourselves may not even be aware of. There is incredible potential here, and I think Apple is the only company that can pull it off. This puts Apple in a unique position as we continue into the future and provides a special advantage that no other company has.

June 3rd, 2014

Where Does the iPad Go From Here?

When Apple released the iPad Mini a year ago, I wrote that the Mini wasn’t just a secondary option to the main iPad, but that it *was* the iPad. What I meant is that the Mini fulfilled much of the original iPad’s vision better than the original or any of its full-sized successors did. Because the Mini was so much lighter and so much easier to hold, it was not only more enjoyable to use while sitting on the couch or in bed, but it also opened up contexts that the full-sized iPad’s size and weight didn’t allow. The iPad’s promise was powerful computing available to you in something you could comfortably hold in your hands, and the Mini fully delivered on it.

With this year’s full-sized iPad, though, the line between the two devices blurred. It’s still discernible, but it’s murkier. The iPad Mini is still superior for reading since it’s lighter, but the difference has narrowed considerably. Forget how many grams each device is; the iPad Air is quite comfortable to hold one-handed. Not as nice as the Mini, but nice.

The Mini narrowed the performance and capability gap as well. The Mini now runs the ridiculously fast A7 processor, same as the iPad Air and iPhone 5S. For many people, the Mini is big enough to write on as well, or to make presentations in Keynote. The full-sized iPad is still superior for those tasks, and is especially superior for tasks like sketching which benefit from a larger screen, but the difference really isn’t that large. They are both quite capable devices for whatever task people want to use them for. The comparison is much more akin to a 13-inch MacBook versus a 15-inch than it is to an iPhone versus an iPad.

Which raises the question: where is the iPad going? More specifically, where are iPad applications going?

The original iPad was released in 2010 with the full iWork suite, and along with the iPad 2 in 2011, Apple released GarageBand and iMovie as well. GarageBand in particular feels like the high-water mark for creative applications on the iPad. Not only was GarageBand incredibly powerful and feature-deep, but it felt like it was made for the iPad all along.

There are many other applications that are powerful in different ways as well. Paper is a wonderfully simple application, but is capable of remarkably beautiful and intricate work (Paper’s Tumblr makes that clear). Editorial is a well-constructed text editor that is, in some ways, superior to desktop-based text editors. Djay is, appropriately, a DJ application that could only exist on the iPad. And on and on.

I think, though, that we’re beginning to see diminishing returns for increasing the capabilities of existing iPad applications or pushing them into new spaces. Generally speaking, while iPad hardware is dramatically faster and more capable than it was in 2010 (or even last year), applications haven’t advanced at anywhere near the same pace. There are a number of obvious reasons for that, of course; faster hardware doesn’t necessarily imply that software—which is dependent on many things, but is especially dependent on good ideas and demand for them—will advance at a similar pace.

But there’s another reason: iOS still hews closely to the one-app-at-a-time, app-as-silo concept that began with the iPhone in 2007. For some purposes, this actually makes the iPad a better tool than the PC; the iPad’s limited ability to multitask makes it easier to focus on writing or other focused tasks like sketching. But it also significantly constrains what’s possible on the device. Writing an email or article that references a website or note, or doing any work that requires looking at one application’s content while working in another, requires flipping back and forth between applications, which makes for an excellent test of patience. And there is virtually no support for allowing two applications to work together on a single task by passing data between them.

Many people have suggested that renaming the iPad the “iPad Air” sets the stage for an iPad Pro, presumably with a larger screen. It’s difficult to disagree with that, but I don’t think an iPad with merely a larger screen would justify a “pro” moniker. The difference between what it is capable of and what the iPad Air is capable of would be fairly small; it would be better at sketching, but not much else. Where it would make sense, though, is if the one-app-at-a-time model were relaxed. Using two applications on screen at once, a la Microsoft’s Metro, would certainly benefit from a larger screen. And building support for allowing applications to work together on tasks and share their data would justify the new name that much more as well.

While conceptually these two changes are fairly simple (I wrote about what applications working together could look like last year), the details are always where it gets difficult. How do you enter “split-screen” mode? How do you get out? What affordances do you provide to users so they understand what’s going on? Do you allow the user to drag content back and forth between applications (I’d hope so!)? How do you indicate what can and can’t be dragged? How do you implement the API to do so? And so on. None of it is easy. It’s inherently complex, and while we all want iOS to become more powerful, these changes are fundamental to iOS’s conceptual design, and a wrong move could endanger what’s made iOS so convincing in the first place: its simplicity.

Nonetheless, if iOS and the iPad are going to continue to progress and become more capable, then these sorts of changes are inevitable. That’s (generally) where we’re going. The rest is details. It’s also, coincidentally, where most of the work is.

November 4th, 2013

The 5C

In an excellent interview with Business Week, Tim Cook explained their thinking for the iPhone 5C:

We never had an objective to sell a low-cost phone. Our primary objective is to sell a great phone and provide a great experience, and we figured out a way to do it at a lower cost. Therefore, we can pass that on. And we figured out a way to sell 4S at substantially less than we were selling it for before, and we’re passing it on. So we think there will be a lot more people in our tent, and we can really serve a lot more people. And that feels good.

The iPhone 5C is fascinating to me because nearly everyone—including John Gruber—got it wrong: it isn’t a “cheap” iPhone. Rather, it’s something that’s both much more obvious and surprising.

Implicit in the idea that Apple should release a cheaper iPhone is that it would be a secondary model for people who want an affordable prepaid iPhone and for international markets; that is, an implicit assumption was that the iPhone/iPhone 5S would remain the mainstream iPhone. That isn’t what Apple is doing with the iPhone 5C.

Instead, Apple has taken the strategy they’ve followed since releasing the iPhone 4—take last year’s model and make it available at $99—and created a distinct product from it, and made it the mainstream iPhone.

Rather than move the iPhone down market with the iPhone 5C, Apple moved the “regular” iPhone—this year, the iPhone 5S—up market to become the pro version, and establish the iPhone 5C as the “regular” iPhone. The iPhone 5C is now the iPhone that really is good enough for everyone. The A6 processor is fast, so is LTE, and the iPhone 5’s camera is very, very good. The colors lend it a feeling of accessibility, too; it feels less serious than the iPhone 5’s aluminum design, more fun, and the colors allow for a greater feeling of personalization and whimsy. (The cases only amplify that, misplaced circles aside.) It’s a very good phone at a reasonable $99 price-point, and it’s going to look much better in the store to potential customers than last year’s iPhone model did.1

Apple’s marketing certainly seems to be trumpeting this, too. Apple’s home page features the iPhone 5C, not the 5S, and it’s playing heavily on the 5C’s colors. They featured an iPhone 5C ad, not one for the 5S. Tim Cook and Phil Schiller referred to the iPhone 5S as Apple’s most “forward-looking” iPhone yet. Apple is positioning the iPhone 5C as Apple’s iPhone for everyone, and the iPhone 5S for people who want the best.

That makes some sense on the face of it; it allows Apple to sell a “new” iPhone at $99 with 16GB of storage, but with lower cost of goods sold, which means they can maintain their margin. It may also allow Apple to push the envelope a bit more at the top end because they no longer need to manufacture enough iPhone 5Ss to satisfy nearly everyone purchasing a new iPhone at launch. But if the iPhone is under mortal threat from low-end, commodity Android-powered smartphones, then this decision seems bizarre. It won’t compete with those devices. The iPhone 5C is cheaper, but it’s not much cheaper.

But it starts to make a lot of sense if you think that smartphones aren’t yet so far along that low-end, cheap models are good enough compared to the iPhone. If Apple can still provide superior hardware and software that, combined, make for a genuinely better device in ways that are palpable to regular customers, then Apple has no need to bloody itself in the low-end washing machine.

And that’s exactly what Apple seems to think. Tim Cook explains what he thinks makes Apple so special, and what makes this strategy possible:

You look at innovation like the iPhone’s camera and the detail that went into the camera. Most people hear the word camera, and they think of hardware. And hardware is really important to it, you know? With the stuff we did with the flash on this. But it’s software, and it’s the silicon—I mean, it’s everything.

So the way I think about Apple is that the magic of this place really comes up at its best when hardware, software, and services come together. And it’s sort of the intersection of those things is where things get incredibly magical. So facilitating that to happen and getting the collaboration level for that to happen is the magic here.

And one of my proudest moments is when other people see that. They don’t know that they’re seeing that, and that’s also the beauty. They don’t have to do it. But look at these (gesturing to iPhones). These are perfect examples where the hardware and the software and the service begin to blend. In some ways you don’t tell one from the other.

The iPhone’s camera is the perfect example of what Cook is arguing. The iPhone’s camera—a cellphone camera!—is now so good that many people have nearly no need for a dedicated point-and-shoot camera. This is only true, though, because Apple has focused on developing the camera in a way that can’t be captured so well on a specification sheet but really does make for a better camera. Rather than boost their sensor’s megapixel count, Apple has kept it steady at 8 megapixels for three iPhone models, and instead has boosted the sensor’s size. They’ve focused on doing rather incredible things with the iPhone’s Image Signal Processor to make for, and choose, better photos. While these things don’t translate well to selling points for cell carrier sales associates, they do make for a truly better camera, and customers do notice the difference. As a result, the iPhone feels like a device in a class of its own.

The obvious choice would have been to make a more affordable iPhone. I don’t think Apple is religiously opposed to making a cheaper iPhone, but they will only do so if they can make a convincing product. What Cook is saying is that making truly good products comes first. Eventually, I believe, Apple will do exactly that. That shouldn’t be a surprise; the iPhone 5C is highly reminiscent of my first Apple product, and one of my favorite devices ever: the iPod Mini. The iPod Mini had less storage than even the third-generation iPod (10GB versus the Mini’s 4GB), and wasn’t that much cheaper than the third-generation iPod ($299 versus $249), either. Critics at the time were perplexed; if Apple was going to make a lower-end iPod to compete with more affordable devices, the iPod Mini certainly wasn’t it.

But it didn’t matter, because it was a damned good product. For me (as a high school student at the time), the lower price finally made it attainable, and the colors were fun in a way the regular iPod never was. The iPod Mini was incredibly successful, and it wasn’t the end; Apple replaced it with the iPod Nano in 2005 at lower prices, and introduced the iPod Shuffle—a completely different kind of music player—in 2005 as well at even lower prices.

I think the iPhone will follow precisely the same path. That is, I believe Apple will build some kind of “iPhone” product for the low-end eventually, but it may not look like an iPhone at all.2

In that sense, what Apple did was incredibly obvious: it’s what they’ve been doing since Steve Jobs returned to Apple. They don’t identify price points and product attributes and then create a product to fill them, as most companies do. They create genuinely good, convincing products that solve real needs for people first.

If you’ve been concerned about where Apple is going under Tim Cook, this should be a sign that there’s nothing to be concerned about. Apple’s unrelenting focus on making truly great products is not only still there, but seems to be reinvigorated under Cook’s new management team.

There have been a lot of headlines lately with some variation of “Is Innovation Finished At Apple?” I believe the best may still be ahead of Apple.

  1. Yes, technically, it’s almost identical to last year’s iPhone 5. But for people looking at several phones, a colorful iPhone is going to be a lot more exciting than the iPhone you’ve seen everywhere for a year.
  2. This may be where Apple’s wrist device fits in.
September 24th, 2013

Apple’s Best, Apple’s Worst

This section from USA Today’s interview with Craig Federighi and Jonathan Ive highlights what makes Apple an exceptional company, and also their greatest weakness:

“This right here is what I love about Apple, this incredibly sophisticated powerful technology that you’re almost not aware of, it absolutely blows me away,” he says. “You can’t get this without working cross-functionally.”

Federighi is quick to admit that any engineer tasked with such a challenge would be sure to call attention to his brilliant work. “You know, you’re going to have some big message saying ‘Scanning!’ and buzz-buzz-zzz-zzz later it says ‘Authenticated,’ blink-blink-blink, with 10 seconds of animation,” he says, as Ive starts laughing.

“Ultimately we realized all that had to disappear,” says Federighi. “If it disappears, we know we’ve done it.”

That’s absolutely what is special about Apple: the focus on using technology in truly meaningful ways rather than to pad specification lists. But Apple’s talent for doing so with hardware and software hasn’t really extended into web services.

I’m not sure why that is. It could be that while good design and the integration of hardware and software have been Apple’s cultural values since the very beginning, building terrific web services has never been something they’ve truly believed in as a company. I wonder, then, whether part of Tim Cook’s management re-organization (whose results this USA Today piece seems meant to show) is building that cultural focus on, and appreciation for, web services.

September 19th, 2013

Basil 2.0

Today, Apple released iOS 7—the most substantial update to iOS since the App Store was introduced in 2008. With it, I am introducing a new version of Basil that is, yes, re-designed for iOS 7, but more importantly is a re-thinking of how key parts of the application work. I think it’s both simpler and more powerful. I’m very proud of it.

You can read more about it on the Basil weblog, or go and get it on the App Store.

I hope you all love it. It’s a privilege to work on an application so many people find useful.

September 18th, 2013

Touch ID and Accessibility

Steven Aquino:

My idea here is not so much of convenience (which is nice) but rather of usability. I know many folks with vision- and motor-related issues who bemoan iOS’s passcode prompt because not only does it take time, but also entering in said code isn’t necessarily an easy task. In fact, more than a few lament this so often that they forego a passcode altogether because it’s time-consuming and a pain (sometimes literally) to enter.

September 11th, 2013

Shawn’s Thoughts On the iPhone Event

From Shawn Blanc’s thoughts on Tuesday’s iPhone event:

Alas, Apple still has issues with off-device photo storage, syncing, etc. It’d be great if Apple took that same energy for innovation they are putting on the iPhone’s camera (hardware and software side) and devote it to vastly improving photo storage and organization with iCloud and multiple devices.

This is easily my biggest pain-point in day-to-day use with my iPhone.

September 11th, 2013

“Worth A Thousand Words”

Patrick Rhone on Tuesday’s iPhone event:

The real story, in my opinion, is the one worth a thousand words on each of those aforementioned tech blogs but not getting near that sort of coverage — the new camera in the iPhone 5S. The camera? Yes, the camera.

September 11th, 2013

“Yeah, That Feature Should be Easy to Do”

Today, Basil received a nice 1.6 update. This update brings a really obvious, simple feature that many other recipe applications include: ingredient scaling. It’s really convenient while in the kitchen to be able to adjust a recipe’s yield according to your needs (the amount of jambalaya you need for two on a quiet weeknight is a little different than the amount you need to make while having guests), and it’s conceptually very simple.

Except it’s not so simple.

It sounds like it, of course; when you think of doing ingredient scaling, it’s just taking an amount at the beginning of a line (“1 item,” “2 tablespoons”) and scaling it by some factor—cutting it in thirds, doubling it, whatever. That’s the approach most applications take for ingredient scaling: they assume all ingredients are phrased such that there is an amount at the beginning of the line and then a food item. Based on that assumption, they identify a number at the beginning of the ingredient line and scale it by whatever scale the user selects. In the basic cases, like “2 cups white sugar”, this works OK. Not great, but OK.
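
To make that concrete, here’s roughly what the naive approach looks like written out (a sketch, not Basil’s code):

```swift
// The naive approach: find a number at the start of the line and multiply it.
// Not Basil's implementation—just the common assumption, written out.
func naiveScale(_ ingredient: String, by factor: Double) -> String {
    let parts = ingredient.split(separator: " ", maxSplits: 1)
    guard let first = parts.first, let amount = Double(String(first)) else {
        return ingredient // No leading number it understands; leave the line alone.
    }
    let rest = parts.count > 1 ? String(parts[1]) : ""
    return "\(amount * factor) \(rest)"
}

naiveScale("2 cups white sugar", by: 2)  // "4.0 cups white sugar" — works, more or less
naiveScale("1/2 cup white sugar", by: 2) // "1/2 cup white sugar" — silently does nothing
```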

But the problem is that not all ingredient lines are the basic case, because there are many ways to express things. What about ingredients like “1 (12 ounce) box of pasta” versus “2 medium cloves garlic (about 1 teaspoon)”? In the first case, the measurement amount shouldn’t be adjusted, but in the second, it should be. In the second case, assuming that you only have to adjust the amount at the beginning of the line can be very confusing while cooking: after I double the recipe, do I use 4 cloves of garlic, or 1 teaspoon? This assumption—made because parsing imprecise language like this is difficult—actually makes cooking harder for the user, because they have to remember how their application fails at correctly scaling ingredients.

A new feature for cooking isn’t helpful if using it actually makes it harder to cook. There are many other cases; what about scaling amounts expressed as words, like “half”? Or what about when you cut “1/2 cup” in fourths and it tells you to use “1/8 cup” rather than “2 tablespoons?”
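
That last case is less about parsing than about presentation: once you’ve scaled an amount, you have to choose a unit a cook would actually measure with. A sketch of the idea (again, not Basil’s implementation; the thresholds and the decimal output are simplifications):

```swift
// Present a scaled volume in a unit a cook would actually measure with:
// 1 cup = 16 tablespoons, 1 tablespoon = 3 teaspoons. A simplification—real
// code would also render proper fractions like "1/4" instead of 0.25.
func formatScaledCups(_ cups: Double) -> String {
    // Whole cups and common fractions of a cup read fine as cups.
    let commonFractions = [0.25, 1.0 / 3.0, 0.5, 2.0 / 3.0, 0.75]
    if cups >= 1 || commonFractions.contains(where: { abs($0 - cups) < 0.001 }) {
        return "\(cups) cup"
    }
    // Awkward fractions fall back to tablespoons, then teaspoons.
    let tablespoons = cups * 16
    return tablespoons >= 1 ? "\(tablespoons) tablespoons"
                            : "\(tablespoons * 3) teaspoons"
}

formatScaledCups(0.5 * 0.25) // "2.0 tablespoons" rather than "1/8 cup"
formatScaledCups(0.5 * 0.5)  // "0.25 cup" — a quarter cup reads fine as cups
```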

Scaling ingredients isn’t useful if it makes you think about these things while cooking or doesn’t work for some ingredients. It’s a distraction.

I avoided building a scaling feature for Basil for that reason: I didn’t want to add more noise if it had all of these caveats. But I’m happy to announce that Basil now has ingredient scaling, and it does its best to do the right thing in all of these cases. It handles the cases discussed above, and many others, too. It will do its best to present ingredient amounts in a way that makes sense (no “1/9 cup sugar” or similarly bizarre things).

It isn’t perfect. There are probably cases I’ve missed, or cases it’s not handling properly, but I am committed to addressing those.

Working on ingredient scaling, though, reminded me that problems which seem exceedingly simple and easy to solve on the surface are usually icebergs waiting to sink you. They may look simple, but they tend to conceal much complexity underneath the surface that is only obvious once you’ve thought through the problem. These “yeah, that should be easy to do” features have, more often than not, been the hardest problems to solve in my experience.

This experience is also another reminder that while users like receiving new features and they’re exciting to announce, over the longer term users only want a new feature if it improves their net experience in some way. If it ends up not being useful in many cases, or adds complexity to the process of using your application, then that feature could make your application less useful than it was before.

August 6th, 2013

Mansfield

Bob Mansfield was removed from Apple’s executive team page this weekend. Apple said in a statement that he will not be a part of Apple’s executive team, but will continue to work on special projects at Apple and report to Tim Cook. John Gruber reports that there’s no intrigue behind the move—he really is just going to be focusing on special projects.

Erica Ogg writes:

The reason for the reassignment of Mansfield is only one of many questions about what is going on at Apple right now. He’s not the only high-profile executive whose job title at Apple isn’t very clear or defined. While that might be a sign of turmoil, as was the case last year when Mansfield tried to retire, it’s quite possible that Apple is now moving key executives onto secret projects that it won’t reveal until it’s ready.

That certainly seems right. It’s probably better to read this as a result of a greater focus at Apple on new product lines they’re developing, most likely a wearable device. Which is exciting. We’re seeing Apple’s future take shape here.

July 29th, 2013

In Apple case, ‘The line between the legal and the illegal seems so thin’

Antitrust scholar Randal Picker:

There’s a simple story about publishers wanting to change prices and failing until the white knight appears in the form of Steve Jobs. But there’s a flip side to that. Apple would say: “we didn’t do anything here that we didn’t have an independent interest in doing, independent of whatever happened in e-book prices.”

July 11th, 2013

Innovation Policy

Reihan Salam links to a post by Ross Eisenbrey which argues that government, not business or markets, is primarily responsible for innovation we’ve seen in technology:

Mazzucato suggests that, given the extent to which tech companies like Apple and Intel owe their great good fortune to the federal government’s investment in R&D, they should share more of their profits with the taxpayers. Instead, of course, Apple has been offshoring profits to avoid taxation and most of the tech industry is contributing to the efforts of the U.S Chamber of Commerce and the rest of the organized business lobby to cut corporate taxes and shrink the government. As Mazzucato makes clear, cutting taxes and the government is no recipe for an innovative, competitive future—just the opposite. 

Mazzucato points out that many of the iPhone’s core technologies, such as solid state storage, capacitive sensors and GPS, have their roots in government-sponsored labs. She presumes, then, that government is largely responsible for the innovation itself, and so (1) we should continue supporting government-funded research projects, and (2) those companies that benefit from taxpayer “risk-taking” should “share” more of their profits with the government.

But as Salam points out, companies have no moral obligation to do so. The government funded much of that research for its own purposes. Salam writes:

The U.S. government devised the technologies Mazzucato identifies for its own, usually defense-oriented reasons. Mazzucato implicitly suggests that in a counterfactual universe in which the Cold War had never taken place, and in which defense expenditures hadn’t diverted spending from other domains or forced higher tax levels, etc., innovations in information technology would not have taken place either. The decades that preceded the Cold War, during which there was considerable private sector innovation in early information technologies, suggests that this is not the case, but of course we can’t really say.

What’s worse, though, is that requiring some formal compensation for that research would undermine the very innovation that Eisenbrey and Mazzucato claim the government was actually responsible for. Salam again:

As Amar Bhidé often notes, an Englishman pioneered the World Wide Web under the auspices of the government-financed CERN laboratory in Switzerland, yet the U.S. has been the main source of consumer internet innovation. U.S. internet firms do not, however, pay the Swiss and other European governments a formal innovation bounty. Part of the reason is that everyone profits from the free flow of knowledge, which is why excessive patents are such an economic scourge.

The reason is that doing so would reduce that free flow of information, and therefore the actual work it takes to create a useful product and bring it to market. Eisenbrey conveniently skips over the 1970s, 1980s, 1990s, and 2000s where technology companies invested huge sums of capital and work into developing these base technologies into something useful for consumers, and into something consumers would buy. There’s an implied derision at that effort as something other than innovation, but it absolutely is, and it’s what actually makes those technologies useful for people. Without those companies continuing to iterate on solid state storage, and without other companies creating salable products that utilize it, solid state storage never would have been anything more than a curiosity in a lab. Similarly, without Intel developing the microprocessor, and without Apple, Microsoft and the PC makers creating PCs for those microprocessors that appealed to consumers, they never would have developed like they have, either. And as a result, those technologies never would have evolved enough to create a handheld, touchscreen phone that’s always connected to the web. It would have been impossible.

Note, however, that this story doesn’t minimize the role of government-funded research. Rather, it shows that such research plays a role in innovation, but it is not the entire story by any stretch. And it shows, too, why the free flow of ideas and technology is so important. Without it, there can be very little actual innovation, because “innovation” inherently means seeing a connection between disparate ideas and technologies, seeing how they can fulfill a need for people, and putting them together such that it creates something that didn’t really exist before. Innovation may be greater than the sum of its parts, but it is nonetheless the summation of many different things that already exist. Placing formal restrictions on those ideas, such as overbearing patents or, in this case, requirements to pay back more to a government that claims responsibility for them, along with state direction of innovation, impedes that flow of ideas, and innovation along with it.

This is why simplistic stories about where new ideas and products come from, and simplistic moral stories about who deserves what for those ideas and products, can be so damaging—they elide much more complicated systems.

July 11th, 2013

Apple’s New Beginning

At WWDC this year, Apple announced the most dramatic update to iOS since iOS 2 introduced the App Store and SDK for developers. The update is also the most controversial.

In 2012, Tim Cook fired Scott Forstall, Apple’s senior vice-president (SVP) of iOS, and distributed his responsibilities among Apple’s executive team. Craig Federighi became SVP of software, unifying the iOS and OS X teams under one leader. Eddy Cue received the title SVP of “Internet Software and Services,” unifying Apple’s iTunes, iBooks and App stores, iCloud, Maps and Siri. And in the shake-up, Jonathan Ive assumed responsibility for Human Interface design in addition to hardware design.

At the time, Adam Lashinsky reported that Forstall refused to sign an apology for iOS 6’s Maps release, and that sealed his fate. While his refusal may have been what precipitated his firing, Lashinsky says that Forstall didn’t get along with Ive. I’ve heard similar things as well, and that those difficulties were not isolated to Ive.

I don’t think Cook decided to fire Forstall simply because he didn’t get along with others in Apple’s management team, or because he refused to take public responsibility for Maps’s issues. Rather, I believe it was a result of Cook re-organizing Apple’s management structure to reflect his becoming CEO. Cook is not Steve Jobs, and he does not pretend to be, so he decided to change Apple’s structure to reflect that.

Jobs fulfilled a rather unique role as CEO of Apple. Rather than oversee and manage the work of others, Jobs was intimately involved in product and product design decisions. No detail was too small for Jobs’s attention. Jobs both originated product ideas and took part in iterating on them, but more importantly, Jobs acted as a filter for Apple. All product decisions ultimately passed through him; he approved new products and new directions, and vetoed them, too. As a result of this role, collaboration among his lieutenants and teams wasn’t as important; indeed, Jobs was known to foster competition and even conflict among individuals and teams to try to elicit their very best work, and then he would choose what he saw as the superior work and direction for Apple.

Cook, as far as I know, doesn’t scrutinize each pixel of an application’s design. He doesn’t have Jobs’s ability to understand what makes for a good product. Jobs was one-of-a-kind, and Cook recognizes that. Recognizing that, however, means that he couldn’t continue Jobs’s management style. Instead, Cook needs to rely on his management team to replace Jobs’s role. Each member must take absolute responsibility for their area of focus and must be incredibly talented at managing it. Most crucially, though, because Apple no longer has the singular filter that all larger product decisions pass through, that management team must work together. Apple could withstand conflict and islands—even benefit from them—with Jobs because each area ultimately ran through him, and because he directed each area. Since Cook can’t fill that role, he needs supremely talented people in charge of each area working with each other to set Apple’s direction. Jobs’s Apple could feed off of discord, but Cook’s Apple must feed off of collaboration.

In Apple’s introduction video for iOS 7, Jonathan Ive says that “We see iOS 7 as defining an important direction and in many ways, a beginning.” While Ive may have meant they saw it as a new beginning for iOS, iOS 7 also marks the beginning of Apple’s new management structure.

When Cook fired Forstall last year, it wasn’t clear what it meant or whether it was for the better. Embarrassingly, Cook hired John Browett to head retail in April 2012 and fired him in October along with Forstall. One way to read that—and many did—was that Cook’s leadership was failing; he had hired an obviously bad fit for retail and was forced to get rid of him in six months. In that light, Forstall’s firing and Apple’s management restructuring looked like it could be the result of a struggling management team.

Until WWDC this year, it wasn’t clear whether that view was correct, or whether it was a part of Cook’s reorganizing Apple to work best under new leadership. Today, though, I think it’s clearly the latter view that was correct.

With that out of the way, I believe that this year’s keynote was meant to establish the foundation for Tim Cook’s Apple. In episode 19 of the Accidental Tech Podcast, Marco Arment, John Siracusa and Casey Liss discuss the keynote introduction video, which explains Apple’s motivation. Siracusa says that while he liked the video, he thinks that it suggested they were about to introduce something groundbreaking, and that OS X Mavericks, the new Mac Pro and iOS 7 didn’t live up to that. Siracusa might be right, but I think he misses its intent. The video wasn’t meant as self-congratulation for being great; rather, it was meant to affirm Apple’s motivation for what they do. Along with their “Our Signature” ad, I think it is the equivalent of the “Think Different” campaign for the Cook era.

Famously, Jobs said that the people at Apple shouldn’t try to make decisions by asking themselves what he would have done. Instead, he said, they should just do what’s right. Clearly, Cook took that to heart. This is Cook’s Apple, and they are not constraining themselves by what feels Jobs-like. Cook hasn’t confused the trappings of Jobs’s Apple—how Jobs managed the company—for its heart: an irrepressible desire to make insanely great products that improve people’s lives and give them joy.

Apple, then, has changed significantly since 2011. Things are quite different at the top than they were then, and to my eyes, Apple seems more open to the world than it’s ever been in important ways, too. But those changes have all been made so that Apple can continue doing what they always have. This may be Cook’s Apple, but the core is just as it’s always been.

July 9th, 2013

“Fertile Ground”

I haven’t posted anything this week both because I was in San Francisco until Wednesday evening and because iOS 7 has taken me a while to wrap my head around, but Marco Arment wrote what I think is one of the more interesting pieces about the update:

One of my favorite patterns in our industry is when the old and established are wiped out by disruption, irrelevance, or changing fashions. Like a forest fire, clearing out the old is very destructive and shouldn’t be taken lightly. But what’s left behind is a clean slate and immense opportunity.

I don’t think we’ve ever had such an opportunity en masse on iOS. After what we saw of iOS 7 yesterday, I believe this fall, we’ll get our chance.

He’s absolutely right. iOS 7 is so different—both aesthetically and functionally—that nearly all applications will need substantial updates. Applications that go unchanged will feel old and wrong in an even more dramatic way than applications that weren’t updated for retina displays. As a result, there’s going to be an App Store clearing.

But more importantly, this is an opportunity for new entrants into almost every category because it is so different. Application makers have a chance to re-think not only how they look, but what they do, and how they do it. The design concepts Apple introduced are in their infancy, and I think we all are going to help define them with the applications we build.

While talking with people at WWDC, I found myself starting to think that iOS 7 is early days in much the same way the iPhone was when the App Store was released. We have the chance to define the interfaces for the next five years. That’s incredibly exciting.

June 14th, 2013

On the Philosophy of Google Glass

Technology inherently alters how we live, what we do, how we think, how we die. Modern medicine has turned many diseases that, when contracted, brought death more often than not into things of the past or minor annoyances, and injuries that would have been debilitating or deadly into ones that can be overcome. The motion picture didn’t simply make plays something that could be watched over and over again by many different audiences, but created a mass entertainment accessible to nearly everyone, and changed how we learn about news events and conceive of them. The car allowed for suburban sprawl, and for people to travel where they please, whether for the evening or for extended trips across thousands of miles; and in so doing, the car changed what the American “ideal” is, spawned innumerable cultural groups centered around it, and helped construct a rite of passage into adulthood for teenagers. Getting your driver’s license and first car is the first step toward being an adult, but also a right to freedom, to being able to go wherever you please on your own or with friends on nothing more than a moment’s notice.

For as long as humans have been creating tools to influence the world around us, the tools—technology—have been influencing us, too. It’s an inevitable byproduct of using something, and this isn’t exactly a new insight. Smartphones and now “wearable” computers like Google Glass are merely the latest human technology to influence their creators.

But while they may only be the latest example of something that’s been happening for as long as humans have created tools, there is, I think, something very different about so-called wearable computers like Google Glass. They have the potential to integrate themselves so deeply into the user that over time, and as they develop further, there will be little reason to differentiate between the device and the user. Removing your Glass device will feel very much like losing a limb or sense—something that you’ve grown used to depending on and using is gone. Through this much deeper integration, these devices could fundamentally alter the human experience and what it means to be human.

That might sound alarmist, like science fiction, or—if you own a smartphone—just remind you of that small moment of dread, like something’s wrong, when you leave the house without your phone.

Wearable computing has much more potential than “wearable” implies. Instead, through overlays on our vision (or more direct connections with the brain, potentially), things like Google Glass can become another sensory input as well as an output. Google Glass already allows you to look something up on Google (“Google, how tall is Mt. Everest?”) and get directions without ever pulling the phone out of your pocket, or using your hands at all; you ask, and the information is spoken to you or overlaid at the top of your vision. It’ll notify you about your flight this afternoon or messages you receive, and you can reply to the message, too. You can snap a photo or video. All without using your hands, and it’s all—again—on top of your vision.

The ultimate goal is to form a direct connection between our brains and the web, and all that that entails. Google Glass is merely a first step toward that, and merely a hack that hijacks our vision to provide input to our brains and hijacks our voice for control. A direct connection with the brain is obviously ideal; there are no “glasses” to wear, nor any need to use voice to control it, which isn’t very efficient. In Steven Levy’s In the Plex, he recounts a conversation he had with Larry Page and Sergey Brin in 2004:

Back in 2004, I asked Page and Brin what they saw as the future of Google search. “It will be included in people’s brains,” said Page. “When you think about something and don’t really know much about it, you will automatically get information.”

“That’s true,” said Brin. “Ultimately I view Google as a way to augment your brain with the knowledge of the world. Right now you go into your computer and type a phrase, but you can imagine that it will be easier in the future, that you can have just devices you talk to, or you can have computers that pay attention to what’s going on around them and suggest useful information.”

The web’s information will be our brain’s information, and the web’s services will be our brain’s tools. We would be able to immediately answer whatever question we have, or find whatever information we need. If you’re fixing your sink, instructions on how to do so (or maybe a video?) are just a thought away. In a few moments, you’ll be able to make fabulous chicken tikka masala. Humanity’s knowledge will have a direct pipe into our brains. And you’ll be able to do incredible things, too. You could snap a photo with a thought, send a message to someone or file away something you’ve come across to a note-taking service. You could control your home’s lights and television. You could… well, the list goes on.1

And, of course, you’ll be able to check Twitter and Facebook, and post to them, wherever you are and while doing whatever else.

I say all this because I don’t think there’s a significant difference between doing all of this through a Google Glass-like device or some direct brain connection, as Page proposes. If they’re successful at their purpose, both will be quickly adapted into our senses. Just as we’ve gotten used to being able to pull out our smartphones whenever we have a spare moment or need to settle some dispute or trivia, we’ll reflexively ask Glass the answer to a question, or to snap a photo, or to check the news real quick, or to look through our Facebook and Twitter stream, even at moments when we probably shouldn’t. And since the amount of effort it takes to do so will be so much smaller than it is with a smartphone (which is already terribly small), we will do all of it with that much more frequency. No event will be complete without taking a photo and posting it to our social network of choice, because unless it’s documented and unless we’ve stuck it in everyone else’s stream, then it didn’t really happen.

I don’t think that’s a positive, and that’s to say nothing of the social effects of having a web-connected camera and microphone strapped to our faces. (Dustin Curtis touches on it in his piece about his experience with Glass.) But what I find most troubling is the philosophy underlying Larry Page and Sergey Brin’s thoughts on devices like Glass. They say that Glass’s goal is to get technology “out of the way,” but that isn’t it. The idea is that we will all be better off if we’re always connected to the web, always on, and have uninterrupted and instantaneous access to it and humanity’s “knowledge.” The idea that Page expresses is that if I can immediately learn about something I don’t know much about, I’ll be better off. I’ll be able to make smarter decisions and live a deeper, richer life by spending the time it would have taken to research and learn about something on more meaningful and substantive tasks.

I think, though, that is a terribly deluded and shallow understanding of what it means to “learn” about something. When we—humans—learn about something, we are not simply committing facts to our memory so we can recall them in the future. That’s a very tiny part of a much larger and much more important process. To “learn” about something is to study the information (when historical events occurred, what happened, etc), find connections between it and other things we’ve learned and experiences we’ve had, and to synthesize it into something greater—knowledge. Knowing, say, the Pythagorean Theorem in isolation isn’t of much use, but connecting it to your need to identify another object’s location suddenly makes it very useful. And more abstractly, knowing Roman and Greek history isn’t very useful all on its own, but being able to learn from it and apply its lessons to current political difficulties might prove very beneficial.

Synthesizing information into knowledge isn’t an instantaneous process because that’s not how we work. We form conclusions and connections between new information and other things we know by thinking through it and living it. Conveniently, and crucially, taking time to learn something or to answer our own question by poring through books and articles and our own investigation allows us time to do that. We have little choice but to draw conclusions and form connections between what we’re looking at and what we already know or have seen before because our brains are working over the material for the answer we seek. We find knowledge when we engage our brains. And, moreover, we often stumble into things unintentionally while actually looking for something altogether unrelated. Things that end up becoming more important than what we were originally looking for in the first place.

Page’s idea—that we would be fundamentally better off if we had immediate access to all of humanity’s information—ignores that. It provides facts, but elides conclusions and connections. What’s worse, it starves us of opportunities to use our skill for critical thinking, and since it is a skill and is therefore something that must be developed and practiced, it starves us of the chance to develop it.

I find that troubling. Glass is not a technology that is designed to amplify our own innate abilities as humans or to make us better as humans, but rather one that acts as a crutch to lean on in place of exercising the very thing that makes us human. I don’t find that exciting. I find that disturbing.

This may all sound like so much hyperbole. After all, we’ve adapted just fine to prior new technologies, despite Luddite claims that they would destroy us. And, undoubtedly, we will adjust to this sort of thing, too, and the world will not come crashing down. I think, though, that this sort of thing—a more intimate connection between us and computers—is a path we are heading down, and since its more intimate nature also makes it more influential over us, we should deeply consider what it’s intended to accomplish for us and what might happen to us in the process.

Technology isn’t a force of nature that just happens. It’s something that we create, and so we should question why we are creating it. This has always been true, but I think it’s even more important now that we do so.

Technology, I think, should exist to improve our lives as humans, to magnify the good and minimize the bad, rather than change our nature or experience. That’s what I believe.

That’s what I believe, and you may disagree. I would suspect many do, especially those with more of a bent toward transhumanism. And that’s fine. But we should be having a much larger discussion about our technology’s intent than we are now, because it’s only increasing in importance.

  1. If the idea of a direct brain interface seems ridiculous, it shouldn’t; researchers have shown that monkeys can control a robot’s motion with their brain activity, and the means they use are relatively rudimentary. The brain appears to be able to adjust to and incorporate new sensory inputs.
May 31st, 2013