“Apple” Category

Apple Watch

The phone dominates your attention. For nearly every use, the phone has your undivided attention. Browsing the web, Twitter, Instagram, Snapchat, watching video, reading, messaging—all require focus on a screen that fills your vision, demands your primary attention, and generally asks for some kind of interaction. Everything else, too, is always just a home button or notification-tap away.

Is that a shock when the phone is the single gateway to nearly everything? The PC is now for doing work, but the phone is for messaging, taking photos, sharing them, the web, Twitter, Facebook, finding places to go, getting directions there, and even making calls.

That is the reason we find ourselves, when we receive a message and pull out our phones to respond, often descending into a muscle-memory check of our other iMessages, emails and Twitter stream. We pull out our phone for one purpose, like responding to a message or checking our schedule, and end up spending several mindless minutes (or, if I am honest, more than “several minutes”) checking in on whatever it is. We find ourselves doing this even when we shouldn’t. We do it while seeing friends, while out to dinner with them, and while at home with family, when we should be spending time with them or doing other things.

I used “we” above because I think anyone with a smartphone, or anyone who knows people with them, can find truth in it to a greater or lesser extent.

My concern with wrist-worn “smartwatches,” starting with the Pebble, is that they appear to exist primarily to push the notifications we receive on our phones to our wrists. They seem to exist to make dealing with phone calls, messages, and updates easier: seeing them, ignoring them, replying to them. They are there to make dealing with our phones more convenient. And in large part, that is how smartwatches have been designed and used. “It’s there so I don’t have to pull my phone out of my pocket.”

But that idea of what smartwatches are for, making it more convenient to deal with the flood of notifications and information our phones provide us, is unimaginative. I think what the smartwatch can do is make the phone unnecessary for many purposes, create new purposes altogether, and allow us to benefit from a wrist-sized screen’s limitations.

The Apple Watch

On September 9th, Apple introduced their long-awaited watch, appropriately named the Apple Watch (from here on, “the Watch”). We won’t be able to fully understand what Apple’s built until next year, but they did provide a fairly detailed look at the Watch and the software it runs.

It appears that, in contrast to Google’s approach with Android Wear (which is heavily focused on showing single bits of information or points of interaction on the screen, and relies on swiping between cards of data and interaction), Apple intends the Watch to run fairly sophisticated applications. The Watch retains the iPhone’s touch interface, but Apple has designed new means of interaction specific to a small screen. In addition to the tap, the Watch brings the “force touch,” a firmer press used to bring up different options within applications (like, say, the shuffle and AirPlay buttons within the music application), and the “digital crown,” a repurposing of the normal watch’s crown into a sort of scroll wheel for the Watch. Using the digital crown, users can zoom in and out of maps and scroll through lists with precision and without covering the small screen. And, most interestingly, they have replaced the familiar vibration alert of our phones with a light “tap” from the Watch to notify the user.

What this allows is fairly sophisticated applications. You can not only search for locations around you, but you can zoom in and out of maps. You can scroll through your emails, messages, events or music. You can control your Apple TV.

This subsumes many of the reasons we pull out our phones during the day. We can check our schedule for the day, check a message when it’s received and send a quick reply, find a place to get a drink after dinner (and get directions there without having to walk while staring at a phone), ignore a phone call by placing a hand over the wrist, or put something on the Apple TV.

But what force touch and the digital crown will not do is make the Watch’s small screen as large as a phone’s. You can’t type out a reply to a message or email. You can’t browse the web for something. You can’t dig through a few months of your email to find a certain one. You can’t mindlessly swipe through Twitter (well, you could, but it’s going to be pretty difficult). That, though, is an advantage the Watch has over the phone. Because it is inherently limited, it has to be laser-focused on a single purpose, and while using it, you are focused on accomplishing one thing. It’s a lot harder to lose yourself in a 1.5-inch screen than it is in a 4-plus-inch screen.

That’s going to be one of the Watch’s primary reasons for existing: allowing us to do many of the things we do on our phones right now, but in a way that’s limited and, thus, less distracting. If you’re out to dinner and receive a message (and haven’t turned on Do Not Disturb), you’re going to be a lot less likely to spend a couple minutes on a reply, and then Instagram, if you’re checking and responding to it on the Watch. It just doesn’t work that way.

In that way, I think Apple has embraced the wrist-worn watch’s inherent limitations. Rather than try to work around them, they are using them. They’ve built new means of interaction (force touch, the digital crown, Taptic feedback) that allow fairly sophisticated applications, but they didn’t use them to cram iOS in its entirety into the Watch.

What I think Apple is trying to do is build a new mode of personal computing on the wrist, molded from the wrist’s inherent limitations and the opportunities they create.

Truly Personal

In his introduction to the Watch, Jony Ive ends with a statement of purpose of sorts for it. He says,

I think we are now at a compelling beginning, actually designing technology to be worn. To be truly personal.

That sounds like a platitude, but I think it defines what Apple is trying to do. Taptic feedback, which Dave Hamilton describes as feeling like someone tapping you on the wrist, is a much less intrusive and jolting way of receiving a notification than a vibration against your leg or the terrible noise a phone makes on a table. More generally, focusing the Watch’s use on quick, single purposes is less intrusive, too.

What is interesting to me, though, is that they are using the Watch’s nature to do things in a more personal—human—way, and to do things that the phone can’t. When providing directions, the Watch shows them on the screen just as you would expect on a phone, but it also does something neat: when it’s time to turn, it will let you know using its Taptic feedback, and it differentiates between left and right. As a result, there is no need to stare at your phone while walking somewhere and following directions.

They’ve also created a new kind of messaging. Traditionally, “messages” are words sent from one person to another using text or speech. Since messages are communication through words, something inherently mental or intellectual rather than emotional, they are divorced from emotion. We can try to communicate emotion through text or speech (emoticons serve exactly that purpose), but the emotion is always translated into text or speech, and then thought about by the recipient, rather than felt. In person, we can communicate emotion with our facial expressions, body gestures, and through touch. There’s a reason hugging your partner before they leave on a long trip is so much more powerful than a text message saying you’ll miss them.

In a small way, using the Watch, Apple is trying to create a new way to communicate that can capture some of that emotion. Because the Watch can effectively “tap” your wrist, others can tap out a pattern on their Watch, and it will re-create those taps on your wrist, almost as if they were tapping you themselves. You could send a tap-tap to your partner’s wrist while they are away on a trip just to say that you’re thinking about them. Isn’t that so much more meaningful than a text message saying the same thing? Doesn’t it carry more emotion and resonance?

That’s what they mean by making technology more personal. It means making it more human.

The Watch is not about making it more convenient to deal with notifications and information sent to us. It’s not even about, as I described above, keeping your phone in your pocket more often (although that will be a result). The Watch is creating a new kind of computing on our wrists, one with different purposes than the phone, the tablet, and the PC. The Watch is for quickly checking and responding to messages, checking your schedule, finding somewhere to go and getting directions there, helping you lead a more active (healthier) life, and enabling a more meaningful form of communication. And it will do all that without sucking our complete attention into it, like the phone, tablet and PC do.

The Watch is for doing things with the world and people around us. Finding places to go, getting there, exercising, checking in at the airport, and sending more meaningful messages. Even notifying you of a new message (if you don’t have Do Not Disturb turned on) while out to dinner with family or friends serves this purpose, because if you have to see it, you can do so in a less disruptive way and get back to what you are doing—spending time with people important to you.

The Watch is a new kind of computing born of, and made better by, its limitations. And I can’t wait.

September 15th, 2014

The Successor to Siri and Open Data

The founders of Siri are working on a new service called Viv that can link disparate sources of information together to answer questions:

But Kittlaus points out that all of these services are strictly limited. Cheyer elaborates: “Google Now has a huge knowledge graph—you can ask questions like ‘Where was Abraham Lincoln born?’ And it can name the city. You can also say, ‘What is the population?’ of a city and it’ll bring up a chart and answer. But you cannot say, ‘What is the population of the city where Abraham Lincoln was born?’” The system may have the data for both these components, but it has no ability to put them together, either to answer a query or to make a smart suggestion. Like Siri, it can’t do anything that coders haven’t explicitly programmed it to do.

Viv breaks through those constraints by generating its own code on the fly, no programmers required. Take a complicated command like “Give me a flight to Dallas with a seat that Shaq could fit in.” Viv will parse the sentence and then it will perform its best trick: automatically generating a quick, efficient program to link third-party sources of information together—say, Kayak, SeatGuru, and the NBA media guide—so it can identify available flights with lots of legroom. And it can do all of this in a fraction of a second.

If I understand the advancement they’ve made, the service (1) will allow third parties to link in their information or services and define what they are in a structured fashion (so Yelp could define its information set as points of interest, user ratings and reviews, and Uber could make its car service available), and (2) will know how to connect multiple information sets and/or services together so that it can answer a user’s question or fulfill their request.

The Wired article linked above provides an example of what this would look like. A user tells Viv that they need to pick up a bottle of wine that pairs well with lasagna on the way to their brother’s house.

Providing a solution to that requires the interaction of many different information sets and services. Viv would (1) use the user’s contacts to look up their brother’s address, (2) use a mapping service to create a route from the user’s current location to their brother’s house, along with some radius around the route within which the user is willing to deviate to pick up the bottle of wine, (3) identify what ingredients compose “lasagna,” (4) identify what wines pair well with those ingredients, and (5) find stores within the specified radius of the user’s route that carry that wine.
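Viv’s internals aren’t public, but the behavior described above (services chained so that one service’s output feeds the next) can be sketched as a tiny forward-chaining planner. Here is a minimal sketch in Python, where every service, fact name, and datum is invented purely for illustration:

```python
# Toy sketch of on-the-fly service composition in the spirit of the Viv
# description above. Each "service" declares the facts it needs and the
# fact it produces; plan() keeps running whichever services have their
# inputs available until the goal fact exists. Everything here
# (services, fact names, pairings) is invented for illustration.

SERVICES = [
    (("brother",),      "address", lambda name: "742 Evergreen Terrace"),
    (("address",),      "route",   lambda addr: f"route via {addr} (2 mi radius)"),
    (("dish",),         "wine",    lambda dish: "chianti" if dish == "lasagna" else "chardonnay"),
    (("route", "wine"), "store",   lambda route, wine: f"store stocking {wine} along {route}"),
]

def plan(goal, facts):
    """Chain services, feeding known facts in, until `goal` is derived."""
    while goal not in facts:
        progress = False
        for needs, makes, fn in SERVICES:
            if makes not in facts and all(n in facts for n in needs):
                facts[makes] = fn(*(facts[n] for n in needs))
                progress = True
        if not progress:
            raise LookupError(f"no service chain reaches {goal!r}")
    return facts[goal]

# "A bottle of wine that pairs with lasagna, on the way to my brother's."
print(plan("store", {"brother": "Bob", "dish": "lasagna"}))
```

The hard engineering is, of course, in doing this across thousands of real services from a parsed natural-language query, but the shape of the composition is the same.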

That’s incredibly complicated. If Viv can do that not just for pre-planned scenarios (as Siri and Google Now currently do), but for arbitrary scenarios, provided it has the necessary information and services, then its creators must also have made an advancement in natural language recognition to support it.

What most intrigues me, though, is the founders’ vision for providing Viv as a “utility” akin to electricity, so that any device could tap into the service and use its power. Effectively, what they are trying to build is a structured, universal data source. I wrote about this idea in 2012, after Apple released Siri, and it’s something I’ve been thinking about for the last five years. The idea is to structure the world’s data so that it can be retrieved in a useful (read: computer-usable) form.

It’s incredibly ambitious. With a sophisticated natural language front-end, users could ask for information on virtually anything and receive it immediately. You could, while cooking (is it obvious I make an application for cooking?), ask for healthy substitutes for butter, or the proper technique for blanching vegetables. The service would also have an API so that other software and services could access it. Imagine a hypothetical research application that allows you to request (not search!) the average temperature for each year in Los Angeles from 1900 to 2010, and get back the data, along with the data assembled into a chart. And then imagine requesting the average temperature for Los Angeles for 1900-2010 along with the amount of CO2 emissions for each year in the same range, with the data charted.
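No such service exists yet, so the following is purely hypothetical: a sketch of the shape of that exchange, where a structured request returns machine-usable rows rather than web pages. The `request` function and every field name are invented:

```python
# Hypothetical sketch of querying a structured "data utility." The
# request() function and all of its parameters are invented; the point
# is the shape of the exchange: a structured query in, rows keyed by
# year out, ready to chart without scraping or transcription.

import matplotlib.pyplot as plt

def request(measure, place, years):
    """Stand-in for a data-utility API call; returns {year: value}."""
    return {year: None for year in years}  # a real service would fill these in

years = list(range(1900, 2011))
temps = request("mean annual temperature", "Los Angeles", years)
co2 = request("CO2 emissions", "Los Angeles", years)

# Both series come back keyed by year, so charting them together is a
# simple join rather than a data-cleaning project.
fig, ax_temp = plt.subplots()
ax_temp.plot(years, [temps[y] for y in years], label="temperature")
ax_co2 = ax_temp.twinx()  # second axis for the emissions series
ax_co2.plot(years, [co2[y] for y in years], label="CO2 emissions")
plt.show()
```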

That’s a rather mundane example, actually. Imagine what kinds of analyses would be possible if the world’s data were not only made available, but were immediately available in a structured format, and were constantly updated as the data is produced. There is the potential here, I think, for this to be as important as the advent of the Internet itself.

What concerns me, though, is how this will be made accessible. The article quotes Dag Kittlaus as saying that they envision deriving revenue from referrals made within the service. So, if you buy something through Amazon or request an Uber ride through Viv, they will earn a referral fee for it.

That makes perfect sense and is fairly brilliant. But what about making scientific data accessible? Will that require some kind of payment to access? Will I only be able to access that information through some kind of front-end, like a research application that I’ve paid for (and whose developers pay some kind of fee for access)? That would certainly be an advancement over where we are today in terms of making data accessible, but it would also prevent the incredible innovation that open access could allow. Imagine if Wikipedia were a for-profit operation and, instead of being publicly available, were only accessible through subscription or through some kind of front-end. It would not be nearly the same thing.

It is heartening, though, that they are thinking so deeply about a business model. It would be a shame if such a terrific idea and incredible technology failed (or was absorbed by another company) because they hadn’t considered it. However, I hope they are also considering what open access to certain kinds of data (historical, political, scientific) could allow.

August 12th, 2014

Apple’s Advantage

Monday’s WWDC Keynote was easily the largest set of changes made to Apple’s platforms since iOS 2 was announced in 2008. The effects of what was announced will be felt and discussed for years to come.
There is a lot to think through and write about, which I will be doing in the coming weeks. However, something struck me during the keynote that felt fairly small but that, upon reflection, could end up being important to Apple’s future success.

Apple announced further updates to their cloud service, where you can save all of the photos and videos you take, all of your documents and all of your data. Apple announced that their Touch ID feature, which identifies you using your fingerprint, will now be accessible to third-party developers as well. And Apple announced a new app and framework for centralizing all of your health and fitness data, which—given your permission—can automatically be sent to your doctor.

That’s in addition to storing your contacts, calendar and reminders, and tracking your location over time (and keeping that data on your device) so your iPhone can provide you with timely updates on how long it will take to get home or to work in current traffic. Combined, Apple is asking you to store nearly all of your intimate information on their devices and servers, and even to provide the most intimate of it—your health data—to your doctor.

And yet I’ve heard little or no consternation over Apple’s consolidating our most private data, in an era when our government maintains call logs, collects security and encryption exploits, breaks into private services to collect data, and has lied to the public about the extent of what it is doing.
That should be surprising, especially considering how much push-back companies like Google and Facebook have received for collecting and using our personal data. On the whole, people seem to trust Apple to respect their personal data.

The reason, I think, starts with the fact that Apple’s business is *not* their users’ data. Their business is selling devices and services to their users. As a result, Apple’s interest in their users’ data is not to generate revenue (which is inherently Google and Facebook’s interest), but rather to use it in such a way that they can create compelling and meaningful products for their customers. Their incentives are aligned with their users’ incentives because of their business model.

Second, Apple takes this relationship very seriously. iOS makes it very clear when applications are requesting access to our personal data. Apple has worked quite hard to make sure that the *user* decides what and how much they want to share.

I don’t think Google or Facebook could announce that they are going to collect their users’ health data and optionally send it to their doctors without some reasonably large amount of criticism and fear of abuse. The reason is obvious: their primary business is utilizing user data to generate revenue, so why couldn’t they do the same with health data?

As time goes on, the integration of our smartphones and health-tracking devices, and the increasingly sophisticated use of the data they generate together, will become the primary space where meaningful development in technology occurs. There’s huge potential in what Apple has announced with HealthKit. If it takes off, it will be a single place to store all of our health data. This will not only benefit doctors, who will be able to see that data in one place for the first time, but by aggregating it together for each individual (and potentially for groups), we will be able to see trends and correlations between our decisions and our health that we just could not see before.

That has the potential both for better decision-making and for doctors to get ahold of us when something appears to be seriously wrong that we ourselves may not even be aware of. There is incredible potential here, and I think Apple is the only company that can pull it off. This puts Apple in a unique position as we continue into the future, and it provides a special advantage that no other company has.

June 3rd, 2014

Where Does the iPad Go From Here?

When Apple released the iPad Mini a year ago, I wrote that the Mini wasn’t just a secondary option to the main iPad, but that it was the iPad. What I meant is that the Mini fulfilled much of the original iPad’s vision better than the original or any of its full-sized successors did. Because the Mini was so much lighter and so much easier to hold, it was not only more enjoyable to use while sitting on the couch or in bed, but it opened up contexts that the full-sized iPad’s size and weight didn’t allow. The iPad’s promise was powerful computing available to you in something you could comfortably hold in your hands, and the Mini fully delivered on it.

With this year’s full-sized iPad, though, the line between the two devices blurred. It’s still discernible, but it’s murkier. The iPad Mini is still superior for reading since it’s lighter, but the difference has narrowed considerably. Forget how many grams each device is; the iPad Air is quite comfortable to hold one-handed. Not as nice as the Mini, but nice.

The Mini narrowed the performance and capability gap as well. It now runs the ridiculously fast A7 processor, same as the iPad Air and iPhone 5S. For many people, the Mini is big enough to write on as well, or to make presentations in Keynote. The full-sized iPad is still superior for those tasks, and especially for tasks like sketching that benefit from a larger screen, but the difference really isn’t that large. They are both quite capable devices for whatever tasks people want to use them for. The comparison is much more akin to a 13-inch MacBook versus a 15-inch than it is to an iPhone versus an iPad.

Which raises the question: where is the iPad going? More specifically, where are iPad applications going?

The original iPad was released in 2010 with the full iWork suite, and along with the iPad 2 in 2011, Apple released GarageBand and iMovie as well. GarageBand in particular feels like the high-water mark for creative applications on the iPad. Not only was GarageBand incredibly powerful and feature-deep, but it felt like it had been made for the iPad all along.

There are many other applications that are powerful in different ways as well. Paper is a wonderfully simple application, but is capable of remarkably beautiful and intricate work (Paper’s Tumblr makes that clear). Editorial is a well-constructed text editor that is, in some ways, superior to desktop-based text editors. Djay is, appropriately, a DJ application that could only exist on the iPad. And on and on.

I think, though, that we’re beginning to see diminishing returns from increasing the capabilities of existing iPad applications or pushing them into new spaces. Generally speaking, while iPad hardware is dramatically faster and more capable than it was in 2010 (or even last year), applications haven’t advanced at anywhere near the same pace. There are a number of obvious reasons for that, of course; faster hardware doesn’t necessarily imply that software—which is dependent on many things, but especially on good ideas and demand for them—will advance at a similar pace.

But there’s another reason: iOS still hews closely to the one-app-at-a-time, app-as-silo concept that began with the iPhone in 2007. For some purposes, this actually makes the iPad a better tool than the PC; the iPad’s limited ability to multitask makes it easier to concentrate on focused tasks like writing or sketching. But it also significantly constrains what’s possible on the device. Writing an email or article that references a website or note, or doing any work that requires looking at one application’s content while working in another, requires flipping back and forth between applications, which makes for an excellent test of patience. And there is virtually no support for allowing two applications to work together on a single task by passing data between them.

Many people have suggested that renaming the iPad the “iPad Air” sets the stage for an iPad Pro, presumably with a larger screen. It’s difficult to disagree with that, but I don’t think an iPad with merely a larger screen would justify a “pro” moniker. The difference between what it could do and what the iPad Air can do would be fairly small; it would be better for sketching, but not much else. Where it would make sense is if the one-app-at-a-time model were relaxed. Using two applications on screen at once, à la Microsoft’s Metro, would certainly benefit from a larger screen. And building support for allowing applications to work together on tasks and share their data would justify the new name that much more as well.

While conceptually these two changes are fairly simple (I wrote about what applications working together could look like last year), the details are always where it gets difficult. How do you enter “split-screen” mode? How do you get out? What affordances do you provide to users so they understand what’s going on? Do you allow the user to drag content back and forth between applications (I’d hope so!)? How do you indicate what can and can’t be dragged? How do you implement the API to do so? And so on. None of it is easy. It’s inherently complex, and while we all want iOS to become more powerful, these changes are fundamental to iOS’s conceptual design, and a wrong move could endanger what’s made iOS so convincing in the first place: its simplicity.

Nonetheless, if iOS and the iPad are going to continue to progress and become more capable, then these sorts of changes are inevitable. That’s (generally) where we’re going. The rest is details. It’s also, coincidentally, where most of the work is.

November 4th, 2013

The 5C

In an excellent interview with Businessweek, Tim Cook explained their thinking behind the iPhone 5C:

We never had an objective to sell a low-cost phone. Our primary objective is to sell a great phone and provide a great experience, and we figured out a way to do it at a lower cost. Therefore, we can pass that on. And we figured out a way to sell 4S at substantially less than we were selling it for before, and we’re passing it on. So we think there will be a lot more people in our tent, and we can really serve a lot more people. And that feels good.

The iPhone 5C is fascinating to me because nearly everyone—including John Gruber—got it wrong: it isn’t a “cheap” iPhone. Rather, it’s something that’s both much more obvious and surprising.

Implicit in the idea that Apple should release a cheaper iPhone was that it would be a secondary model for people who want an affordable prepaid iPhone and for international markets; that is, the assumption was that the iPhone/iPhone 5S would remain the mainstream iPhone. That isn’t what Apple is doing with the iPhone 5C.

Instead, Apple has taken the strategy they’ve followed since releasing the iPhone 4—take last year’s model and make it available at $99—and created a distinct product from it, and made it the mainstream iPhone.

Rather than move the iPhone down market with the iPhone 5C, Apple moved the “regular” iPhone—this year, the iPhone 5S—up market to become the pro version, and established the iPhone 5C as the “regular” iPhone. The iPhone 5C is now the iPhone that really is good enough for everyone. The A6 processor is fast, so is LTE, and the iPhone 5’s camera is very, very good. The colors lend it a feeling of accessibility, too; it feels less serious than the iPhone 5’s aluminum design, more fun, and the colors allow for a greater feeling of personalization and whimsy. (The cases only amplify that, misplaced circles aside.) It’s a very good phone at a reasonable $99 price-point, and it’s going to look much better in the store to potential customers than last year’s iPhone model did.1

Apple’s marketing certainly seems to be trumpeting this, too. Apple’s home page features the iPhone 5C, not the 5S, and it plays heavily on the 5C’s colors. They featured an iPhone 5C ad, not one for the 5S. Tim Cook and Phil Schiller referred to the iPhone 5S as Apple’s most “forward-looking” iPhone yet. Apple is positioning the iPhone 5C as its iPhone for everyone, and the iPhone 5S as the iPhone for people who want the best.

That makes some sense on the face of it; it allows Apple to sell a “new” iPhone at $99 with 16GB of storage, but with lower cost of goods sold, which means they can maintain their margin. It may also allow Apple to push the envelope a bit more at the top end because they no longer need to manufacture enough iPhone 5Ss to satisfy nearly everyone purchasing a new iPhone at launch. But if the iPhone is under mortal threat from low-end, commodity Android-powered smartphones, then this decision seems bizarre. It won’t compete with those devices. The iPhone 5C is cheaper, but it’s not much cheaper.

But it starts to make a lot of sense if you think smartphones aren’t yet so far along that low-end, cheap models are good enough compared to the iPhone. If Apple can still provide superior hardware and software that, combined, make for a genuinely better device in a way that is palpable to regular customers, then Apple has no need to bloody itself in the low-end washing machine.

And that’s exactly what Apple seems to think. Tim Cook explains what he thinks makes Apple so special, and what makes this strategy possible:

You look at innovation like the iPhone’s camera and the detail that went into the camera. Most people hear the word camera, and they think of hardware. And hardware is really important to it, you know? With the stuff we did with the flash on this. But it’s software, and it’s the silicon—I mean, it’s everything.

So the way I think about Apple is that the magic of this place really comes up at its best when hardware, software, and services come together. And it’s sort of the intersection of those things is where things get incredibly magical. So facilitating that to happen and getting the collaboration level for that to happen is the magic here.

And one of my proudest moments is when other people see that. They don’t know that they’re seeing that, and that’s also the beauty. They don’t have to do it. But look at these (gesturing to iPhones). These are perfect examples where the hardware and the software and the service begin to blend. In some ways you don’t tell one from the other.

The iPhone’s camera is the perfect example of what Cook is arguing. The iPhone’s camera—a cellphone camera!—is now so good that many people have nearly no need for a dedicated point-and-shoot camera. This is only true, though, because Apple has focused on developing the camera in ways that can’t be captured so well on a specification sheet but really do make for a better camera. Rather than boost the sensor’s megapixel count, Apple has kept it steady at 8 megapixels for three iPhone models and instead has boosted the sensor’s size. They’ve focused on doing rather incredible things with the iPhone’s Image Signal Processor to take, and choose, better photos. While these things don’t translate well into selling points for cell carrier sales associates, they do make for a truly better camera, and customers do notice the difference. As a result, the iPhone feels like a device in a class of its own.

The obvious choice was to make a more affordable iPhone. I don’t think Apple is religiously opposed to making a cheaper iPhone, but they will only do so if they can make a convincing product. What Cook is saying is that making truly good products comes first. Eventually, I believe, Apple will do exactly that. That shouldn’t be a surprise; the iPhone 5C is highly reminiscent of my first Apple product, and one of my favorite devices ever: the iPod Mini. The iPod Mini had less storage than even the third-generation iPod (10GB versus the Mini’s 4GB), and wasn’t that much cheaper than the third-generation iPod ($299 versus the Mini’s $249), either. Critics at the time were perplexed; if Apple was going to make a lower-end iPod to compete with more affordable devices, the iPod Mini certainly wasn’t it.

But it didn’t matter, because it was a damned good product. For me (a high school student at the time), the lower price finally made it attainable, and the colors were fun in a way the regular iPod never was. The iPod Mini was incredibly successful, and it wasn’t the end: Apple replaced it with the iPod Nano in 2005 at lower prices, and introduced the iPod Shuffle—a completely different kind of music player—that same year at even lower prices.

I think the iPhone will follow precisely the same path. That is, I believe Apple will build some kind of “iPhone” product for the low-end eventually, but it may not look like an iPhone at all.2

In that sense, what Apple did was incredibly obvious: it’s what they’ve been doing since Steve Jobs returned to Apple. They don’t identify price-points and product attributes and then create a product to fill them, as most companies do. They create genuinely good, convincing products that solve real needs for people first.

If you’ve been concerned about where Apple is going under Tim Cook, this should be a sign that there’s nothing to be concerned about. Apple’s unrelenting focus on making truly great products is not only still there, but seems to be reinvigorated under Cook’s new management team.

There have been a lot of headlines lately with some variation of “Is Innovation Finished At Apple?” I believe the best may still be ahead of Apple.

  1. Yes, technically, it’s almost identical to last year’s iPhone 5. But for people looking at several phones, a colorful iPhone is going to be a lot more exciting than the iPhone you’ve seen everywhere for a year.
  2. This may be where Apple’s wrist device fits in.

September 24th, 2013

Apple’s Best, Apple’s Worst

This section from USA Today’s interview with Craig Federighi and Jonathan Ive highlights what makes Apple an exceptional company, and also their greatest weakness:

“This right here is what I love about Apple, this incredibly sophisticated powerful technology that you’re almost not aware of, it absolutely blows me away,” he says. “You can’t get this without working cross-functionally.”

Federighi is quick to admit that any engineer tasked with such a challenge would be sure to call attention to his brilliant work. “You know, you’re going to have some big message saying ‘Scanning!’ and buzz-buzz-zzz-zzz later it says ‘Authenticated,’ blink-blink-blink, with 10 seconds of animation,” he says, as Ive starts laughing.

“Ultimately we realized all that had to disappear,” says Federighi. “If it disappears, we know we’ve done it.”

That’s absolutely what is special about Apple: the focus on using technology in truly meaningful ways rather than to pad specification lists. But Apple’s talent for doing so with hardware and software hasn’t really extended into web services.

I’m not sure why that is. It could be that while Apple culturally believes in good design and in integrating hardware and software (values that have been with the company since the very beginning), building terrific web services has never been something they’ve truly believed in as a company. I wonder, then, whether a part of Tim Cook’s management re-organization (whose results this USA Today piece seems meant to show) is building that cultural focus on, and appreciation for, web services.

September 19th, 2013

Basil 2.0

Today, Apple released iOS 7—the most substantial update to iOS since the App Store was introduced in 2008. With it, I am introducing a new version of Basil that is, yes, re-designed for iOS 7, but more importantly is a re-thinking of how key parts of the application work. I think it’s both simpler and more powerful. I’m very proud of it.

You can read more about it on the Basil weblog, or go get it on the App Store.

I hope you all love it. It’s a privilege to work on an application so many people find useful.

September 18th, 2013

Touch ID and Accessibility

Steven Aquino:

My idea here is not so much of convenience (which is nice) but rather of usability. I know many folks with vision- and motor-related issues who bemoan iOS’s passcode prompt because not only does it take time, but also entering in said code isn’t necessarily an easy task. In fact, more than a few lament this so often that they forego a passcode altogether because it’s time-consuming and a pain (sometimes literally) to enter.

September 11th, 2013

Shawn’s Thoughts On the iPhone Event

From Shawn Blanc’s thoughts on Tuesday’s iPhone event:

Alas, Apple still has issues with off-device photo storage, syncing, etc. It’d be great if Apple took that same energy for innovation they are putting on the iPhone’s camera (hardware and software side) and devote it to vastly improving photo storage and organization with iCloud and multiple devices.

This is easily my biggest pain-point in day-to-day use with my iPhone.

September 11th, 2013

“Worth A Thousand Words”

Patrick Rhone on Tuesday’s iPhone event:

The real story, in my opinion, is the one worth a thousand words on each of those aforementioned tech blogs but not getting near that sort of coverage — the new camera in the iPhone 5S. The camera? Yes, the camera.

September 11th, 2013

“Yeah, That Feature Should be Easy to Do”

Today, Basil received a nice 1.6 update. This update brings a really obvious, simple feature that many other recipe applications include: ingredient scaling. It’s really convenient, while in the kitchen, to be able to adjust a recipe’s yield according to your needs (the amount of jambalaya you need for two on a quiet weeknight is a little different from the amount you need when having guests), and it’s conceptually very simple.

Except it’s not so simple.

It sounds like it, of course; when you think of ingredient scaling, it’s just taking an amount at the beginning of a line (“1 item,” “2 tablespoons”) and scaling it by some factor—cutting it in thirds, doubling it, whatever. That’s the approach most applications take: they assume all ingredients are phrased such that there is an amount at the beginning of the line and then a food item. Based on that assumption, they identify a number at the beginning of the ingredient line and scale it by whatever factor the user selects. In the basic cases, like “2 cups white sugar,” this works OK. Not great, but OK.

But the problem is that not all ingredient lines are the basic case, because there are many ways to express things. What about ingredients like “1 (12 ounce) box of pasta” versus “2 medium cloves garlic (about 1 teaspoon)”? In the first case, the parenthetical measurement shouldn’t be adjusted, but in the second, it should be. In the second case, assuming that you only have to adjust the amount at the beginning of the line can be very confusing while cooking: after I double the recipe, do I use 4 cloves of garlic, or 1 teaspoon? This assumption—made because parsing imprecise language like this is difficult—actually makes cooking harder for the user, because they have to remember how their application fails at correctly scaling ingredients.

A new feature for cooking isn’t helpful if using it actually makes it harder to cook. There are many other cases: what about scaling amounts expressed as words, like “half”? Or what about when you cut “1/2 cup” in fourths and it tells you to use “1/8 cup” rather than “2 tablespoons”?

Scaling ingredients isn’t useful if it makes you think about these things while cooking or doesn’t work for some ingredients. It’s a distraction.

I avoided building a scaling feature for Basil for that reason: I didn’t want to add more noise if it came with all of these caveats. But I’m happy to announce that Basil now has ingredient scaling, and it does its best to do the right thing in the cases discussed above, and many others, too. It presents ingredient amounts in a way that makes sense (no “1/9 cup sugar” or similarly bizarre things).

It isn’t perfect. There are probably cases I’ve missed, or cases it’s not handling properly, but I am committed to addressing those.
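Basil’s actual implementation isn’t public, but a minimal sketch of the kind of scaling-and-display logic described above might look something like this; the unit table, the regular expression, and the “friendly fraction” rule are all illustrative assumptions:

```python
from fractions import Fraction
import re

# Teaspoons per unit (an illustrative subset; a real parser would need
# many more units, plurals, abbreviations, and mixed numbers like "1 1/2").
TSP_PER_UNIT = {"teaspoon": 1, "tablespoon": 3, "cup": 48}

def scale_line(line, factor):
    """Scale a simple ingredient line like '1/2 cup white sugar'."""
    m = re.match(r"(\d+(?:/\d+)?)\s+(teaspoon|tablespoon|cup)s?\b(.*)", line)
    if m is None:
        return line  # no confident parse: leave the line untouched
    amount = Fraction(m.group(1)) * factor
    return display(amount, m.group(2)) + m.group(3)

def display(amount, unit):
    """Re-express an amount in a unit cooks actually measure with."""
    tsp = amount * TSP_PER_UNIT[unit]
    # Prefer the largest unit that yields a "friendly" fraction, so a
    # quarter of "1/2 cup" becomes "2 tablespoons", not "1/8 cup".
    for name, per in (("cup", 48), ("tablespoon", 3), ("teaspoon", 1)):
        qty = tsp / per
        if qty.denominator in (1, 2, 3, 4):
            return f"{qty} {name}{'s' if qty > 1 else ''}"
    return f"{tsp} teaspoon{'s' if tsp > 1 else ''}"

print(scale_line("1/2 cup white sugar", Fraction(1, 4)))
# -> 2 tablespoons white sugar
```

Even this toy version shows where the icebergs are: every new phrasing (“half,” parentheticals, ranges) means another parsing rule and another display rule.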

Working on ingredient scaling, though, reminded me that problems which seem exceedingly simple on the surface are usually icebergs waiting to sink you. They tend to conceal a great deal of complexity that becomes obvious only once you’ve thought through the problem. These “yeah, that should be easy to do” features have, more often than not, been the hardest problems to solve in my experience.

This experience is also another lesson that while users like receiving new features and they’re exciting to announce, over the longer term users only want a new feature if it improves their net experience in some way. If it ends up not being useful in many cases, or adds complexity to the process of using your application, then that feature could make your application less useful than it was before.

August 6th, 2013

Mansfield

Bob Mansfield was removed from Apple’s executive team page this weekend. Apple said in a statement that he will no longer be a part of Apple’s executive team, but will continue to work on special projects at Apple and report to Tim Cook. John Gruber reports that there’s no intrigue behind the move—he really is just going to be focusing on special projects.

Erica Ogg writes:

The reason for the reassignment of Mansfield is only one of many questions about what is going on at Apple right now. He’s not the only high-profile executive whose job title at Apple isn’t very clear or defined. While that might be a sign of turmoil, as was the case last year when Mansfield tried to retire, it’s quite possible that Apple is now moving key executives onto secret projects that it won’t reveal until it’s ready.

That certainly seems right. It’s probably best to read this as a result of a greater focus at Apple on the new product lines they’re developing, most likely a wearable device. Which is exciting. We’re watching Apple’s future take shape here.

July 29th, 2013

In Apple case, ‘The line between the legal and the illegal seems so thin’

Antitrust scholar Randal Picker:

There’s a simple story about publishers wanting to change prices and failing until the white knight appears in the form of Steve Jobs. But there’s a flip side to that. Apple would say: “we didn’t do anything here that we didn’t have an independent interest in doing, independent of whatever happened in e-book prices.”

July 11th, 2013

Innovation Policy

Reihan Salam links to a post by Ross Eisenbrey which argues that government, not business or markets, is primarily responsible for the innovation we’ve seen in technology:

Mazzucato suggests that, given the extent to which tech companies like Apple and Intel owe their great good fortune to the federal government’s investment in R&D, they should share more of their profits with the taxpayers. Instead, of course, Apple has been offshoring profits to avoid taxation and most of the tech industry is contributing to the efforts of the U.S. Chamber of Commerce and the rest of the organized business lobby to cut corporate taxes and shrink the government. As Mazzucato makes clear, cutting taxes and the government is no recipe for an innovative, competitive future—just the opposite.

Mazzucato points out that many of the iPhone’s core technologies, such as solid state storage, capacitive sensors and GPS, have their roots in government-sponsored labs. She presumes, then, that government is largely responsible for the innovation itself, and so (1) we should continue supporting government-funded research projects, and (2) those companies that benefit from taxpayer “risk-taking” should “share” more of their profits with the government.

But as Salam points out, companies have no moral obligation to do so. The government funded much of that research for its own purposes. Salam writes:

The U.S. government devised the technologies Mazzucato identifies for its own, usually defense-oriented reasons. Mazzucato implicitly suggests that in a counterfactual universe in which the Cold War had never taken place, and in which defense expenditures hadn’t diverted spending from other domains or forced higher tax levels, etc., innovations in information technology would not have taken place either. The decades that preceded the Cold War, during which there was considerable private sector innovation in early information technologies, suggests that this is not the case, but of course we can’t really say.

What’s worse, though, is that requiring some formal compensation for that research would undermine the very innovation that Eisenbrey and Mazzucato claim the government was actually responsible for. Salam again:

As Amar Bhidé often notes, an Englishman pioneered the World Wide Web under the auspices of the government-financed CERN laboratory in Switzerland, yet the U.S. has been the main source of consumer internet innovation. U.S. internet firms do not, however, pay the Swiss and other European governments a formal innovation bounty. Part of the reason is that everyone profits from the free flow of knowledge, which is why excessive patents are such an economic scourge.

The reason is that doing so would reduce that free flow of information, and therefore impede the actual work it takes to create a useful product and bring it to market. Eisenbrey conveniently skips over the 1970s, 1980s, 1990s, and 2000s, decades in which technology companies invested huge sums of capital and work into developing these base technologies into something useful for consumers, and into something consumers would buy. There’s an implied derision at that effort as something other than innovation, but it absolutely is innovation, and it’s what actually makes those technologies useful for people. Without those companies continuing to iterate on solid state storage, and without other companies creating salable products that utilize it, solid state storage never would have been anything more than a curiosity in a lab. Similarly, without Intel developing the microprocessor, and without Apple, Microsoft and the PC makers creating PCs around those microprocessors that appealed to consumers, neither would have developed like they have. And as a result, those technologies never would have evolved enough to create a handheld, touchscreen phone that’s always connected to the web. It would have been impossible.

Note, however, that this story doesn’t minimize the role of government-funded research. Rather, it shows that such research plays a role in innovation, but is not the entire story by any stretch. And it shows, too, why the free flow of ideas and technology is so important. Without it, there can be very little actual innovation, because “innovation” inherently means seeing a connection between disparate ideas and technologies, seeing how they can fulfill a need for people, and putting them together to create something that didn’t really exist before. Innovation may be greater than the sum of its parts, but it is nonetheless the summation of many different things that already exist. Placing formal restrictions on those ideas (such as overbearing patents or, in this case, requirements to pay back more to a government that claims responsibility for them, along with state direction of innovation) impedes that flow of ideas, and innovation as a result.

This is why simplistic stories about where new ideas and products come from, and simplistic moral stories about who deserves what for those ideas and products, can be so damaging—they elide much more complicated systems.

July 11th, 2013

Apple’s New Beginning

At WWDC this year, Apple announced the most dramatic update to iOS since iOS 2 introduced the App Store and SDK for developers. The update is also the most controversial.

In 2012, Tim Cook fired Scott Forstall, Apple’s senior vice-president (SVP) of iOS, and distributed his responsibilities among Apple’s executive team. Craig Federighi became SVP of software, unifying the iOS and OS X teams under one leader. Eddy Cue received the title of SVP of Internet Software and Services, unifying Apple’s iTunes, iBooks and App stores, iCloud, Maps and Siri. And in the shake-up, Jonathan Ive assumed responsibility for Human Interface design in addition to hardware design.

At the time, Adam Lashinsky reported that Forstall refused to sign an apology for iOS 6’s Maps release, and that this sealed his fate. While his refusal may have been what precipitated his firing, Lashinsky also says that Forstall didn’t get along with Ive. I’ve heard similar things as well, and that those difficulties were not isolated to Ive.

I don’t think Cook decided to fire Forstall simply because he didn’t get along with others on Apple’s management team, or because he refused to take public responsibility for Maps’s issues. Rather, I believe it was a result of Cook re-organizing Apple’s management structure to fit his becoming CEO. Cook is not Steve Jobs, and he does not pretend to be, so he decided to change Apple’s structure to reflect that.

Jobs filled a unique role as CEO of Apple. Rather than oversee and manage the work of others, Jobs was intimately involved in product and product-design decisions. No detail was too small for his attention. Jobs both originated product ideas and took part in iterating on them, but more importantly, he acted as a filter for Apple. All product decisions ultimately passed through him; he approved new products and new directions, and vetoed them, too. Because of this role, collaboration among his lieutenants and teams wasn’t as important; indeed, Jobs was known to foster competition and even conflict among individuals and teams to try to elicit their very best work, and then he would choose what he saw as the superior work and direction for Apple.

Cook, as far as I know, doesn’t scrutinize each pixel of an application’s design. He doesn’t have Jobs’s ability to understand what makes for a good product. Jobs was one-of-a-kind, and Cook recognizes that. Recognizing that, however, means that he couldn’t continue Jobs’s management style. Instead, Cook needs to rely on his management team to fill Jobs’s role. Each member must take absolute responsibility for their area of focus and must be incredibly talented at managing it. Most importantly, though, because Apple no longer has the singular filter that all larger product decisions pass through, that management team must work together. Apple could withstand conflict and islands—even benefit from them—under Jobs because each area ultimately ran through him, and because he directed each area. Since Cook can’t fill that role, he needs supremely talented people in charge of each area, working with each other to set Apple’s direction. Jobs’s Apple could feed off of discord, but Cook’s Apple must feed off of collaboration.

In Apple’s introduction video for iOS 7, Jonathan Ive says, “We see iOS 7 as defining an important direction and in many ways, a beginning.” While Ive may have meant it as a new beginning for iOS, iOS 7 also marks the beginning of Apple’s new management structure.

When Cook fired Forstall last year, it wasn’t clear what it meant or whether it was for the better. Embarrassingly, Cook had hired John Browett to head retail in April 2012 and fired him that October along with Forstall. One way to read that—and many did—was that Cook’s leadership was failing; he had hired an obviously bad fit for retail and was forced to get rid of him within six months. In that light, Forstall’s firing and Apple’s management restructuring looked like they could be the result of a struggling management team.

Until WWDC this year, it wasn’t clear whether that view was correct, or whether it was a part of Cook’s reorganizing Apple to work best under new leadership. Today, though, I think it’s clearly the latter view that was correct.

With that out of the way, I believe this year’s keynote was meant to establish the foundation for Tim Cook’s Apple. In episode 19 of the Accidental Tech Podcast, Marco Arment, John Siracusa and Casey Liss discuss the keynote’s introduction video, which explains Apple’s motivation. Siracusa says that while he liked the video, he thinks it suggested they were about to introduce something groundbreaking, and that OS X Mavericks, the new Mac Pro and iOS 7 didn’t live up to that. Siracusa might be right, but I think he misses its intent. The video wasn’t meant as self-congratulation for being great; rather, it was meant to affirm Apple’s motivation for what they do. Along with their “Our Signature” ad, I think it is the equivalent of the “Think Different” campaign for the Cook era.

Famously, Jobs said that the people at Apple shouldn’t try to make decisions by asking themselves what he would have done. Instead, he said, they should just do what’s right. Clearly, Cook took that to heart. This is Cook’s Apple, and they are not constraining themselves by what feels Jobs-like. Cook hasn’t mistaken the trappings of Jobs’s Apple—how Jobs managed the company—for its heart: an irrepressible desire to make insanely great products that improve people’s lives and give them joy.

Apple, then, has changed significantly since 2011. Things are quite different at the top than they were then, and to my eyes, Apple seems more open to the world than it’s ever been in important ways, too. But those changes have all been made so that Apple can continue doing what they always have. This may be Cook’s Apple, but the core is just as it’s always been.

July 9th, 2013