“Original” Category

No Answer

You have all the answers to my questions
Even ones I didn’t have
Why should I know anything at all?

You know everything I need
Everything I may need
You hold it all for me
So I waste no time

But still I wonder, why don’t I wonder?
Like I did as a kid
But no answer

March 31st, 2014

Tony Fadell Explains Nest’s Sale to Google

Last week, Nest was acquired by Google for $3.2 billion.

There are only a few companies that have truly excited me in the last few years, and Nest is at the top. They worked on a very original, very real problem—thermostats were not only a user interface disaster and something nearly everyone hated to use, but also an opportunity to do something meaningful: use energy more efficiently in people’s homes while also improving their relationship with an important but ignored device. In addition, the thermostat was clearly the first product in a much larger plan, and a very good first step in a very good plan.

So, when I heard that Nest had sold to Google, I felt a pang of disappointment. Not because it was Google (which, if Nest were to be acquired, makes more sense than any other company I can think of), but rather because Nest is an incredibly ambitious company that, I think, had the opportunity to be as important to the next wave of development in personal computing and the Internet as Apple and Google were—and potentially as large. They were a key member of Silicon Valley’s next generation of meaningful companies, I thought.

Of course, nearly every bit of that can (and will) still be true, with the notable exception of remaining independent. They can still do all of that, but they will do so under Google’s banner, and for Google’s benefit. And that’s fine, all else being equal. Before I continue, though, we need to discuss why Nest decided to sell in the first place, and “for the money” doesn’t count, because I know Tony Fadell, Matt Rogers and everyone else there didn’t do it for the cash.

Here’s why they sold, according to Fadell:

I was spending nearly ninety percent of my time on building the infrastructure of the company and I wasn’t able to spend enough time and cycles on what I love doing: products and creating differentiated experiences for our customers. That is where my love is and Google offered to let us focus on that, but with scale that will help bring our horizon closer to us, faster. Google offers to bring that scale to us. For me, ultimately building great products is key.

Fadell cites European distribution as a specific example of what he means by “scale”—physical distribution and dealing with legal issues surrounding something as regulated and disparate as energy. Fadell wants to focus his time on developing products rather than handling all the issues surrounding them.

It’s hard to argue with that. Nest clearly wants to move quickly. The Nest thermostat is a shockingly good first product, and Nest Protect—which they released just two years later—is at least as good. Nest Protect also began revealing their larger strategy. Owning either one of them is great, but owning both of them makes each one better. Since they use your home’s wireless network, the Nest thermostat will automatically augment itself with the Protect’s motion sensors. And more importantly, if the Protect senses rising levels of carbon monoxide, the thermostat will shut off your home’s furnace. Their strategy, then, appears to be modular devices that are convincing on their own but that, when used together, not only function better but begin to form the basis for a connected home.

Being a part of Google will allow them to realize that strategy faster by increasing their resources so they can focus their bandwidth on developing product. Google also is doing industry-leading work in learning systems and web services, which obviously will benefit Nest. Like I said, of all the companies in the world that could have acquired Nest (which, admittedly, is a fairly short list), Google is the best fit.

But Google didn’t agree to acquire Nest entirely for Nest’s benefit. They did it, I assume, because Nest fills in particularly important holes in Google’s capabilities and in Google’s future development. While Google has been very good at building web applications, web services and a mobile operating system, they’ve done very little to prove that they can design and make hardware that real consumers will pay real money for. There’s a lot more involved there than design and supply chain. Making hardware also involves decidedly businessy things: identifying a target market, figuring out what price that market will pay at the necessary levels of sales and margin, and then manufacturing a quality product efficiently enough to hit that margin. Nest has shown that not only can they do all of that, but they can produce an exceptional product that customers truly love. That’s invaluable, and it’s something Google hasn’t done.

Nest also provides an entry path for Google into the home. Getting into the connected home requires building hardware, and it requires a no-bullshit vision for how the connected home can improve people’s lives in substantive ways. Nest provides both of those things.

It sounds like a symbiotic relationship, then. Google can provide Nest what it needs and Nest can provide Google something it needs, too. In Nest’s ideal vision of the relationship, Nest will remain largely independent—their own brand, leadership, teams and products. People and resources may flow across the Nest-Google boundary, but the two entities will nevertheless remain distinct. But in Google’s, Nest will begin to overlap and merge with Google itself. If Google wants the Nest acquisition to result in an improved capability for creating hardware products that consumers really want, then that necessarily requires Nest’s leadership to extend outside of Nest itself—which would require splitting their time, too. This is because while Nest may become functionally a discrete unit within Google (the “connected home” unit, let’s say), if it is to have any effect on the rest of Google, there has to be some sort of crossover. This may mean putting Nest’s leadership (whether that’s Matt Rogers, or another member of the team) in charge of Google’s hardware, or even having people in leadership roles move back and forth across the boundary. In any case, the boundary begins to smear, and Fadell’s reason for doing the deal—to focus his team’s time exclusively on product—begins to seem less likely.

Of course, that’s not necessarily negative. Perhaps a Nest-infused Google, and a Google-infused Nest, is better for everyone involved—Nest, Google, and us. I think there’s a good argument to be made there. But inherently, as that occurs, Nest begins to fade as a distinct entity, and it becomes more Google.

I think the most optimistic comparison for this acquisition is Disney’s 2006 acquisition of Pixar. Pixar remained an independent studio, kept their leadership, kept their campus, kept their culture, and created some of their most artistically and commercially successful films afterward. In return, Disney received Ed Catmull and John Lasseter’s services for turning around their declining animation department. And turn it around they did; Disney Animation Studios is enjoying something of a renaissance. Frozen, released in November 2013, was Disney Animation’s biggest hit since The Lion King. The Pixar acquisition is one of the most successful acquisitions in history.

That could be how it works out here, too. I suspect, though, that while Pixar has thus far been able to retain its independence, Nest will not retain independence to the same extent. I have two main reasons for thinking so. First, the Disney-Pixar deal was incredibly specific in its intent: the deal was that Catmull and Lasseter would oversee Disney Animation and Pixar would remain its own studio. The Google-Nest deal, as far as I can tell, doesn’t appear to be nearly as well-defined. As a result, blurring will happen with relative ease. Second, while in the movie business it’s actually beneficial for Pixar to remain independent in substance and in brand—it allows them to experiment in ways they couldn’t necessarily do if it were all a single studio, and it also allows them to release multiple movies per year in a way that doesn’t make it feel like Disney movies are competing with each other for people’s attention—that structure doesn’t make nearly as much sense for Google and Nest. In reality, centralizing their hardware operation makes much more sense than continuing Nest as a parallel operation to Google’s other hardware operations. As a result, I think what we are more likely to see is Nest more or less become a part of Google while the brand continues on as Google’s “connected home” brand.

In the short-term, then, I think there’s very good reason to be excited about the deal. I bet we are going to see even more incredible things come out of Nest than we would have seen otherwise, and probably faster as well. But long-term, I’m disappointed. Nest is one of those rare companies that identified a brilliant product idea, in a large market, that would allow them to develop into something much greater in the future. And along the way, they built a first-rate company in all areas. I believed Nest would be one of the most important companies in the world for the next twenty years. And while they may still be integral to personal computing and the web’s future, it will likely be under Google’s banner. For better or for worse.

January 21st, 2014

Where Does the iPad Go From Here?

When Apple released the iPad Mini a year ago, I wrote that the Mini wasn’t just a secondary option to the main iPad, but that it is the iPad. What I meant is that the Mini fulfilled much of the original iPad’s vision better than the original or any of its full-sized successors did. Because the Mini was so much lighter and so much easier to hold, the Mini was not only more enjoyable to use while sitting down on the couch or in bed, but opened up contexts that the full-sized iPad’s size and weight didn’t allow. The iPad’s promise was powerful computing available to you in something you could comfortably hold in your hands, and the Mini fully delivered on it.

With this year’s full-sized iPad, though, the line between the two devices blurred. It’s still discernible, but it’s murkier. The iPad Mini is still superior for reading since it’s lighter, but the difference has narrowed considerably. Forget how many grams each device is; the iPad Air is quite comfortable to hold one-handed. Not as nice as the Mini, but nice.

The Mini narrowed the performance and capability gap as well. The Mini now runs the same ridiculously fast A7 processor as the iPad Air and iPhone 5S. For many people, the Mini is big enough to write on as well, or to make presentations in Keynote. The full-sized iPad is still superior for those tasks, and is especially superior for tasks like sketching which benefit from a larger screen, but the difference really isn’t that large. They are both quite capable devices for whatever task people want to use them for. The comparison is much more akin to a 13-inch MacBook versus a 15-inch than it is to an iPhone versus an iPad.

Which raises the question: where is the iPad going? More specifically, where are iPad applications going?

The original iPad was released in 2010 with the full iWork suite, and along with the iPad 2 in 2011, Apple released GarageBand and iMovie as well. GarageBand in particular feels like the high-water mark for creative applications on the iPad. Not only was GarageBand incredibly powerful and feature-deep, but it felt like it was made for the iPad all along.

There are many other applications that are powerful in different ways as well. Paper is a wonderfully simple application, but is capable of remarkably beautiful and intricate work (Paper’s Tumblr makes that clear). Editorial is a well-constructed text editor that is, in some ways, superior to desktop-based text editors. Djay is, appropriately, a DJ application that could only exist on the iPad. And on and on.

I think, though, that we’re beginning to see diminishing returns for increasing the capabilities of existing iPad applications or pushing them into new spaces. Generally speaking, while iPad hardware is dramatically faster and more capable than it was in 2010 (or even last year), applications haven’t advanced at anywhere near the same pace. There are a number of obvious reasons for that, of course; faster hardware doesn’t necessarily imply that software—which is dependent on many things, but is especially dependent on good ideas and demand for them—will advance at a similar pace.

But there’s another reason: iOS still hews closely to the one-app-at-a-time, app-as-silo concept that began with the iPhone in 2007. For some purposes, this actually makes the iPad a better tool than the PC; the iPad’s limited ability to multitask makes it easier to focus on writing or other focused tasks like sketching. But it also significantly constrains what’s possible on the device. Writing an email or article that references a website or note, or doing any work that requires looking at one application’s content while working in another, requires flipping back and forth between applications, which makes for an excellent test of patience. And there is virtually no support for allowing two applications to work together on a single task by passing data between them.

Many people have suggested that renaming the iPad the “iPad Air” sets the stage for an iPad Pro, presumably with a larger screen. It’s difficult to disagree with that, but I don’t think an iPad with merely a larger screen would justify a “pro” moniker. The difference between what it is capable of and what the iPad Air is capable of would be fairly small; it would be better at sketching, but not much else. But where it would make sense is if the one-app-at-a-time model were relaxed. Using two applications on screen at once, à la Microsoft’s Metro, would certainly benefit from a larger screen. And building support for allowing applications to work together on tasks and share their data would justify the new name that much more as well.

While conceptually these two changes are fairly simple (I wrote about what applications working together could look like last year), the details are always where it gets difficult. How do you enter “split-screen” mode? How do you get out? What affordances do you provide to users so they understand what’s going on? Do you allow the user to drag content back and forth between applications (I’d hope so!)? How do you indicate what can and can’t be dragged? How do you implement the API to do so? And so on. None of it is easy. It’s inherently complex, and while we all want iOS to become more powerful, these changes are fundamental to iOS’s conceptual design, and a wrong move could endanger what’s made iOS so convincing in the first place: its simplicity.

Nonetheless, if iOS and the iPad are going to continue to progress and become more capable, then these sorts of changes are inevitable. That’s (generally) where we’re going. The rest is details. It’s also, coincidentally, where most of the work is.

November 4th, 2013

Nest Protect

Today, Nest announced their first new product since the Nest thermostat—Nest Protect. Nest Protect is a smoke and carbon monoxide alarm.

For an excellent look at Nest Protect, and a profile of why they made it and of the design process, you should read Steven Levy’s piece for Wired.

…Wait, what? A smoke alarm?

Yes. Nest’s latest product is a $130 smoke alarm.

Nest’s basic strategy should be obvious now: take things we use in our daily lives but don’t at all enjoy using, or actively abhor using, and think through them so that they’re both better devices and delightful to own and use. (It’s also worth noting that they’re choosing product categories that are very large and universally needed.)

It’s more than that, though. The Nest thermostat and Nest Protect are standalone devices, but they work together. If you have a Nest thermostat and smoke alarms installed in your home, the smoke alarms will inform the thermostat when there’s movement in the home—which should make the Nest thermostat’s “Auto-Away” feature much more accurate, and thus able to be that much more efficient with a home’s energy use.

What’s even more illuminating about Nest’s vision, though, is that if a Nest smoke alarm senses carbon monoxide, it will tell the thermostat to turn off the home’s furnace, which is a likely cause of carbon monoxide poisoning.

That’s truly smart. Nest has not only built two devices that work together to efficiently manage your home’s energy and protect you from fire, but they’ve created two devices that can actively judge the situation and work together to intervene in your home to keep you safe.
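To make the interplay concrete: the behavior amounts to a simple rule evaluated whenever the Protect reports its sensor readings. The sketch below is purely illustrative; Nest has not published how the devices coordinate, so every type name, threshold, and method here is an assumption of mine, not their API.

```swift
// Hypothetical sketch of the kind of rule the two devices imply. None of
// these types, thresholds, or method names come from Nest; they only
// illustrate "rising CO means shut off the likely source."

struct ProtectReading {
    let carbonMonoxidePPM: Double   // current CO concentration
    let motionDetected: Bool        // also feeds the thermostat's Auto-Away
}

protocol FurnaceControlling {
    func recordOccupancy(_ occupied: Bool)
    func shutOffFurnace()
}

final class ConnectedHomeRules {
    private let coAlarmThresholdPPM = 70.0   // assumed threshold, for illustration only
    private let thermostat: FurnaceControlling

    init(thermostat: FurnaceControlling) {
        self.thermostat = thermostat
    }

    func handle(_ reading: ProtectReading) {
        // Motion improves the thermostat's occupancy model (Auto-Away).
        thermostat.recordOccupancy(reading.motionDetected)

        // Rising carbon monoxide: tell the thermostat to shut off the
        // furnace, the most likely source of the CO.
        if reading.carbonMonoxidePPM >= coAlarmThresholdPPM {
            thermostat.shutOffFurnace()
        }
    }
}
```

The point is only the shape of the logic: occupancy information flows one way to make Auto-Away smarter, and a safety condition flows the other way to act on the furnace.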

We’ve been hearing about the “smart home” for a very long time now, but this is the first time we’re legitimately there. Tony Fadell seemed to confirm this as Nest’s intent while talking with Steven Levy:

In other words, Nest isn’t only about beautifying the thermostat or adding features to the lowly smoke detector. “We’re about creating the conscious home,” Fadell says. “To take a truly important device that has had no great innovation and make that device really, really great.” Left unsaid is a grander vision, with even bigger implications: many devices sensing the environment, talking to one another, and doing our bidding unprompted.

That’s a grand dream, and I think the Nest Protect—ostensibly just a smoke alarm—is going to be a key cog within their plan. Think about it: it’s not just a smoke alarm, but an Internet-connected computer with sophisticated sensors and software in every bedroom and on every floor. It knows when you wake up (since it has a motion-tracking sensor), when you go to bed, and even when you get up in the middle of the night. Along with the Nest thermostat, they also know when you leave for the day and when you get home. There’s a lot of immediate information there to begin doing some incredible things, and it’s something that could serve as a platform for all kinds of other services as well.

So yes, it is “just” a smoke alarm. And a very good one. But I think it’s also a piece of a much larger plan: make products that are so good that they can stand on their own and you’ll have to have them, but also work together to create something we’ve never seen before.

October 8th, 2013

The 5C

In an excellent interview with Businessweek, Tim Cook explained their thinking for the iPhone 5C:

We never had an objective to sell a low-cost phone. Our primary objective is to sell a great phone and provide a great experience, and we figured out a way to do it at a lower cost. Therefore, we can pass that on. And we figured out a way to sell 4S at substantially less than we were selling it for before, and we’re passing it on. So we think there will be a lot more people in our tent, and we can really serve a lot more people. And that feels good.

The iPhone 5C is fascinating to me because nearly everyone—including John Gruber—got it wrong: it isn’t a “cheap” iPhone. Rather, it’s something that’s both much more obvious and surprising.

Implicit in the idea that Apple should release a cheaper iPhone was that it would be a secondary model for people who want an affordable prepaid iPhone and for international markets; that is, the assumption was that the iPhone/iPhone 5S would remain the mainstream iPhone. That isn’t what Apple is doing with the iPhone 5C.

Instead, Apple has taken the strategy they’ve followed since releasing the iPhone 4—take last year’s model and make it available at $99—and created a distinct product from it, and made it the mainstream iPhone.

Rather than move the iPhone down market with the iPhone 5C, Apple moved the “regular” iPhone—this year, the iPhone 5S—up market to become the pro version, and establish the iPhone 5C as the “regular” iPhone. The iPhone 5C is now the iPhone that really is good enough for everyone. The A6 processor is fast, so is LTE, and the iPhone 5’s camera is very, very good. The colors lend it a feeling of accessibility, too; it feels less serious than the iPhone 5’s aluminum design, more fun, and the colors allow for a greater feeling of personalization and whimsy. (The cases only amplify that, misplaced circles aside.) It’s a very good phone at a reasonable $99 price-point, and it’s going to look much better in the store to potential customers than last year’s iPhone model did.1

Apple’s marketing certainly seems to be trumpeting this, too. Apple’s home page features the iPhone 5C, not the 5S, and it’s playing heavily on the 5C’s colors. They featured an iPhone 5C ad, not one for the 5S. Tim Cook and Phil Schiller referred to the iPhone 5S as Apple’s most “forward-looking” iPhone yet. Apple is positioning the iPhone 5C as Apple’s iPhone for everyone, and the iPhone 5S for people who want the best.

That makes some sense on the face of it; it allows Apple to sell a “new” iPhone at $99 with 16GB of storage, but with lower cost of goods sold, which means they can maintain their margin. It may also allow Apple to push the envelope a bit more at the top end because they no longer need to manufacture enough iPhone 5Ss to satisfy nearly everyone purchasing a new iPhone at launch. But if the iPhone is under mortal threat from low-end, commodity Android-powered smartphones, then this decision seems bizarre. It won’t compete with those devices. The iPhone 5C is cheaper, but it’s not much cheaper.

But it starts to make a lot of sense if you think that smartphones aren’t so far along that the low-end, cheap models are good enough compared to the iPhone. If Apple can still provide superior hardware and software that, combined, make for a genuinely better device in a way that is palpable to regular customers, then Apple has no need to bloody itself in the low-end washing machine.

And that’s exactly what Apple seems to think. Tim Cook explains what he thinks makes Apple so special, and what makes this strategy possible:

You look at innovation like the iPhone’s camera and the detail that went into the camera. Most people hear the word camera, and they think of hardware. And hardware is really important to it, you know? With the stuff we did with the flash on this. But it’s software, and it’s the silicon—I mean, it’s everything.

So the way I think about Apple is that the magic of this place really comes up at its best when hardware, software, and services come together. And it’s sort of the intersection of those things is where things get incredibly magical. So facilitating that to happen and getting the collaboration level for that to happen is the magic here.

And one of my proudest moments is when other people see that. They don’t know that they’re seeing that, and that’s also the beauty. They don’t have to do it. But look at these (gesturing to iPhones). These are perfect examples where the hardware and the software and the service begin to blend. In some ways you don’t tell one from the other.

The iPhone’s camera is the perfect example of what Cook is arguing. The iPhone’s camera—a cellphone camera!—is now so good that many people have nearly no need for a dedicated point-and-shoot camera. This is only true, though, because Apple has focused on developing the camera in a way that can’t be captured so well on a specification sheet but really does make for a better camera. Rather than boost their sensor’s megapixel count, Apple has kept it steady at 8 megapixels for three iPhone models, and instead has boosted the sensor’s size. They’ve focused on doing rather incredible things with the iPhone’s Image Signal Processor to make for, and choose, better photos. While these things don’t translate well to selling points for cell carrier sales associates, they do make for a truly better camera, and customers do notice the difference. As a result, the iPhone feels like a device in a class of its own.

The obvious choice was to make a more affordable iPhone. I don’t think Apple is religiously opposed to making a cheaper iPhone, but they will only do so if they can make a convincing product. What Cook is saying is that making truly good products comes first. Eventually, I believe, Apple will do exactly that. That shouldn’t be a surprise; the iPhone 5C is highly reminiscent of my first Apple product, and one of my favorite devices ever: the iPod Mini. The iPod Mini had less storage than even the third-generation iPod (10GB versus the Mini’s 4GB), and wasn’t that much cheaper than the third-generation iPod ($299 versus $249), either. Critics at the time were perplexed; if Apple was going to make a lower-end iPod to compete with more affordable competing devices, the iPod Mini certainly wasn’t it.

But it didn’t matter, because it was a damned good product. For me (as a high school student at the time), the lower price finally made it attainable, and the colors were fun in a way the regular iPod never was. The iPod Mini was incredibly successful, and it wasn’t the end; Apple replaced it with the iPod Nano in 2005 at lower prices, and that same year introduced the iPod Shuffle—a completely different kind of music player—at even lower prices.

I think the iPhone will follow precisely the same path. That is, I believe Apple will build some kind of “iPhone” product for the low-end eventually, but it may not look like an iPhone at all.2

In that sense, what Apple did was incredibly obvious: it’s what they’ve been doing since Steve Jobs returned to Apple. They don’t identify price-points and product attributes and then create a product to fill them, as most companies do. They create genuinely good, convincing products that solve real needs for people first.

If you’ve been concerned about where Apple is going under Tim Cook, this should be a sign that there’s nothing to be concerned about. Apple’s unrelenting focus on making truly great products is not only still there, but seems to be reinvigorated under Cook’s new management team.

There have been a lot of headlines lately with some variation of “Is Innovation Finished At Apple?” I believe the best may still be ahead of Apple.

  1. Yes, technically, it’s almost identical to last year’s iPhone 5. But for people looking at several phones, a colorful iPhone is going to be a lot more exciting than the iPhone you’ve seen everywhere for a year.
  2. This may be where Apple’s wrist device fits in.
September 24th, 2013

Designing Basil’s New Timers

With iOS 7, I am releasing a large update to Basil. There’s a lot to talk about, and I’ll do so more this week, but I wanted to discuss the process around designing one of the main features: recipe timers.

When I released the first version of Basil, it would turn any time directions (the “15 minutes” in “Simmer for 15 minutes on low heat…”) into a button. When pressed, a timer would slide up on screen from the bottom. It was one of my favorite features because it made cooking a little easier in an immediately understandable way.
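Basil’s parsing code isn’t public, but the idea is simple enough to sketch: scan each direction step for phrases like “15 minutes” and turn each matched span into a tappable timer button. The pattern, types, and names below are my own illustration, assuming a plain regular-expression approach:

```swift
import Foundation

// Rough sketch: find spans like "15 minutes" or "1 hour" in a direction step
// so they can be turned into tappable timer buttons. The pattern and names
// are illustrative, not Basil's actual code.

struct TimeDirection {
    let range: Range<String.Index>  // where the phrase appears in the step
    let seconds: TimeInterval       // the duration it represents
}

func timeDirections(in step: String) -> [TimeDirection] {
    let pattern = #"(\d+)\s*(seconds?|minutes?|mins?|hours?|hrs?)"#
    guard let regex = try? NSRegularExpression(pattern: pattern, options: .caseInsensitive) else {
        return []
    }
    let wholeRange = NSRange(step.startIndex..., in: step)
    return regex.matches(in: step, range: wholeRange).compactMap { match in
        guard let phraseRange = Range(match.range, in: step),
              let amountRange = Range(match.range(at: 1), in: step),
              let unitRange = Range(match.range(at: 2), in: step),
              let amount = Double(step[amountRange]) else { return nil }
        let unit = step[unitRange].lowercased()
        let multiplier: TimeInterval = unit.hasPrefix("h") ? 3600 : (unit.hasPrefix("s") ? 1 : 60)
        return TimeDirection(range: phraseRange, seconds: amount * multiplier)
    }
}

// "Simmer for 15 minutes on low heat" yields one match worth 900 seconds,
// which the UI can render as a button over that span of text.
```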

As soon as Basil hit the App Store, customers pointed out two large limitations. First, if you started a timer and switched to another application, it froze the timer. That’s not good, since switching to Safari to do things like look up how to properly slice a carrot into matchsticks or a substitute for masa is pretty typical in the kitchen. Second, Basil only allowed you to have a single timer running at a time, which made it difficult to time, say, two things cooking on the stove.

I’m happy to announce that the new version of Basil coming out this week addresses both of those limitations. But that’s not what I want to talk about.

Since I’ve been aware of these limitations since just after Basil’s release, you may be wondering why I waited until now to address them. And you should be; they’re Basil’s biggest usability issues. You may especially be wondering why it’s taken so long since these limitations aren’t exactly a technically Hard Problem to solve.

It took this long because handling multiple timers turned out to be a fairly challenging design problem to solve while retaining Basil’s overall simplicity. Of course, I could have simply extended Basil’s current timer concept and just shown multiple timers across the bottom of the screen. It would look something like this:

[Image: Timers]

That would have worked, but there are several serious problems with it. First, it magnifies an issue with the original design: the timers cover up a portion of the direction step at the bottom of the screen. Second, with multiple timer views across the bottom of the view, it’s also highly distracting. Third, it’s not at all clear which direction step each timer is associated with—which is a very bad thing when you’re cooking multiple things at once and forget which timer is for what.

So that concept wouldn’t work. The next route I briefly considered was using the same basic concept, but pushing the recipe view up, and putting the timers underneath it. That would solve the first problem, but not problems two and three, and it only solves the first problem by reducing the recipe’s screen space. That didn’t seem like a viable solution, either.

The third direction I considered was creating a timer “manager” of sorts that would only be visible when the user had opened a timer. If there were any open timers, Basil would show a little circle in the bottom-right of the view indicating how many active timers there were. Tapping it would open a view across the screen showing what timers were open for each recipe. It would be like a global look at all timers for your recipes, and you could pause or stop any of them, from anywhere in the application. It would look something like this (in sketch form, anyway):

[Image: Timer Manager]

There are some advantages to this approach. It would solve problems one and two, since it would only be on screen when you want to glance at your timers. Also, it provides a lot of power: you can see where your timers are for your recipes from anywhere within Basil, so you can quickly check on the time for your beans on the stove while looking at your famous burger recipe to get them ready for the grill.

But as you can see from the illustration, it only adds that power by introducing enormous complexity. Now, you have a button/indicator view that appears in the screen’s right corner when you open a timer, and now you have a view that could literally show tens of timers at a time—which would mean that, for it to be useful, you would have to clear any timers you open but don’t end up wanting to use. And even when you manage it correctly, it’s still confusingly complex. There’s a lot going on in one view to comprehend when you’re trying to cook.

I dismissed that design path because it would effectively destroy the timer feature’s purpose by introducing complexity. If it’s too much to deal with, you won’t use it, and that’s especially true when it’s for an application that won’t be your sole focus most of the time. Implementing the timer manager would technically solve the original timer implementation’s limitations, and technically make it much more powerful, but substantively it would make the feature completely useless. And worse, it would water down Basil’s core focus: making things simple so you can focus on cooking.

It was frustrating to realize how wrong the timer manager design was. I’d spent a lot of time and brain power heading down that path, and it was completely wrong. What it forced me to do, though, was think about the feature’s purpose again with a fresh mind. Instead of thinking about how to make Basil’s timer feature—something that’s bolted on top of the recipe view—more powerful, I thought about how I could make it simpler.

In Basil’s original design, I thought of the timers like a digital equivalent to an actual timer—something separate from the recipe itself that would slide on and off screen as needed. But then I thought that distinction didn’t make sense; a timer is conceptually tied to the direction step it comes from. If step two says to sauté onions for five minutes, that timer for five minutes should be tied to step two. So why not just put the timer in the direction step itself? Like this:

[Image: DirectionTimer]

While this doesn’t achieve the same power as the timer manager, it not only solves problems one and two, but it also solves problem three by directly tying the timer to its direction step. There’s no question what a timer is tied to when you glance at it, and there’s no extra text to read to indicate it, either.

By doing so, this design path both simplifies the recipe view visually, which is highly desirable for cooking, and naturally presents multiple timers. I’m extremely proud that it makes a more powerful feature possible by simplifying the design concept. Hopefully, if I’ve done my job, none of this will ever occur to the user. It should seem obvious, and they should never have to think about it.

I wanted to step through some of the design process and my thinking for this feature because often when we use applications, how an interface is conceptually designed seems obvious, and implementing new features seems like it should be incredibly easy. Designing interfaces, though, is never easy, and while much of it may seem obvious, the details that only become apparent once you’ve thought through every part of it are the ones that cause a design to fail.

September 16th, 2013

No, Mr. President. Snowden is a Patriot.

Earlier this month, in a press conference where he stated that he supported “greater oversight, greater transparency, and constraints” on the government’s digital surveillance programs, President Obama said that he doesn’t believe Snowden is a patriot:

No, I don’t think Mr. Snowden was a patriot. As I said in my opening remarks, I called for a thorough review of our surveillance operations before Mr. Snowden made these leaks.

My preference — and I think the American people’s preference — would have been for a lawful, orderly examination of these laws, a thoughtful fact-based debate that would then lead us to a better place. Because I never made claims that all the surveillance technologies that have developed since the time some of these laws had been put in place somehow didn’t require potentially some additional reforms. That’s exactly what I called for.

Apparently for Obama, calling for a “thorough review” of government surveillance absolves him of any responsibility for their current state, or for pushing for—and getting—a reform-free renewal of the FISA law in 2012.

Obama said Snowden should have used authorized channels to question the government’s actions, and that he would have been protected by a presidential policy directive Obama issued. He said this, apparently, despite the fact that his directive doesn’t offer protection for contractors like Snowden, nor were the procedures actually in effect in June 2013 when Snowden released classified documents.

He said this while knowing that his administration prosecuted an NSA employee, Thomas Drake, who did precisely what Obama says whistleblowers should do: he raised his concerns through authorized channels within the government. After later leaking non-classified information to a reporter, Drake was charged with several felonies that could have resulted in 35 years in prison, and the federal government attempted to coerce him into helping them prosecute other whistleblowers.

So, then, Snowden had very little choice but to do what he did. He saw that the government was violating the privacy of Americans on a massive scale, and that the government was actively hiding the nature of these programs and lying to the public about them. Following proper “procedure” would have resulted in his prosecution, and likely wouldn’t have led to the information being disseminated so widely, so he released it to journalists.

His choice—which came at incredible personal cost to him (losing his job, home, family and the right to live in his own country)—is what’s allowed the public to have any knowledge at all of what our government is doing in our name, and to have anything resembling an “open debate and democratic process” with regard to this issue. The actions of a man Obama says isn’t a patriot.

This administration has worked very hard to preserve the government’s surveillance and to prevent the public from understanding its actual extent. Obama’s calls for a “thorough review” and “greater oversight, greater transparency” are merely words. Words that come too late, and words that contradict his administration’s actions. Snowden’s actions are what have allowed us to have knowledge of what the government is doing.

At great risk to himself, he helped expose the government’s surveillance programs so the public could know what was being done to us in our name by our elected leaders. If that isn’t patriotic, what is?

August 20th, 2013

“Yeah, That Feature Should be Easy to Do”

Today, Basil received a nice 1.6 update. This update brings a really obvious, simple feature that many other recipe applications include: ingredient scaling. It’s really convenient while in the kitchen to be able to adjust a recipe’s yield according to your needs (the amount of jambalaya you need for two on a quiet weeknight is a little different than the amount you need to make while having guests), and it’s conceptually very simple.

Except it’s not so simple.

It sounds like it, of course; when you think of doing ingredient scaling, it’s just taking an amount at the beginning of a line (“1 item,” “2 tablespoons”) and scaling it by some factor—cutting it in thirds, doubling it, whatever. That’s the approach most applications take for ingredient scaling: they assume all ingredients are phrased such that there is an amount at the beginning of the line and then a food item. Based on that assumption, they identify a number at the beginning of the ingredient line and scale it by whatever scale the user selects. In the basic cases, like “2 cups white sugar”, this works OK. Not great, but OK.
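To make that assumption concrete, here is roughly what the naive approach looks like, as a sketch in Swift rather than any particular application’s code: find a leading number, multiply it, and put it back.

```swift
import Foundation

// The naive approach: assume the line starts with a plain number, scale it,
// and leave everything else untouched. This is a sketch of the assumption,
// not any particular application's code.
func naivelyScale(_ line: String, by factor: Double) -> String {
    let pattern = #"^(\d+(?:\.\d+)?)"#
    guard let regex = try? NSRegularExpression(pattern: pattern),
          let match = regex.firstMatch(in: line, range: NSRange(line.startIndex..., in: line)),
          let numberRange = Range(match.range(at: 1), in: line),
          let amount = Double(line[numberRange]) else { return line }
    let scaled = amount * factor
    let formatted = scaled == scaled.rounded() ? String(Int(scaled)) : String(scaled)
    return line.replacingCharacters(in: numberRange, with: formatted)
}

// naivelyScale("2 cups white sugar", by: 2) -> "4 cups white sugar"
// naivelyScale("2 medium cloves garlic (about 1 teaspoon)", by: 2)
//     -> "4 medium cloves garlic (about 1 teaspoon)", where the parenthetical is now wrong
// It also has no idea what to do with "half", and it mangles fractions like "1/2 cup".
```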

But the problem is that not all ingredient items are the basic case, because there are many ways to express things. What about ingredients like “1 (12 ounce) box of pasta” versus “2 medium cloves garlic (about 1 teaspoon)”? In the first case, the parenthetical measurement shouldn’t be adjusted, but in the second, it should be. In the second case, assuming that you only have to adjust the amount at the beginning of the line can be very confusing while cooking: After I double the recipe, do I use 4 cloves of garlic, or 1 teaspoon? This assumption—made because parsing imprecise language like this is difficult—actually makes cooking harder for the user, because they have to remember how their application fails at correctly scaling ingredients.

A new feature for cooking isn’t helpful if using it actually makes it harder to cook. There are many other cases; what about scaling amounts expressed as words, like “half”? Or what about when you cut “1/2 cup” in fourths and it tells you to use “1/8 cup” rather than “2 tablespoons?”

Scaling ingredients isn’t useful if it makes you think about these things while cooking or doesn’t work for some ingredients. It’s a distraction.

I avoided building a scaling feature for Basil for that reason: I didn’t want to add more noise if it had all of these caveats. But I’m happy to announce that Basil now has ingredient scaling, and it does its best to do the right thing in all of these cases. It handles the cases discussed above, and many others, too. It will do its best to present ingredient amounts in a way that makes sense (no “1/9 cup sugar” or similarly bizarre things).
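Basil’s actual parser isn’t something I can show here, but the flavor of the harder version is roughly this: keep amounts as exact fractions rather than floating-point numbers, and step down to a smaller unit when the scaled result would read awkwardly. Everything in the sketch below (the Fraction type, the single cup-to-tablespoon rule, the names) is my own simplified illustration, not Basil’s code:

```swift
// Sketch of the friendlier version: keep amounts as exact fractions and step
// down to a smaller unit when the result reads awkwardly. The Fraction type
// and the single cup-to-tablespoon rule are a simplified illustration.
struct Fraction {
    var numerator: Int
    var denominator: Int

    static func * (lhs: Fraction, rhs: Fraction) -> Fraction {
        Fraction(numerator: lhs.numerator * rhs.numerator,
                 denominator: lhs.denominator * rhs.denominator).reduced()
    }

    func reduced() -> Fraction {
        func gcd(_ a: Int, _ b: Int) -> Int { b == 0 ? a : gcd(b, a % b) }
        let g = gcd(abs(numerator), abs(denominator))
        return g == 0 ? self : Fraction(numerator: numerator / g, denominator: denominator / g)
    }

    var text: String {
        denominator == 1 ? "\(numerator)" : "\(numerator)/\(denominator)"
    }
}

// Scale an amount given in cups and, when the result is an awkward fraction
// of a cup, present it in tablespoons instead (1 cup = 16 tablespoons).
func scaledCups(_ amount: Fraction, by factor: Fraction) -> String {
    let result = amount * factor
    let tablespoons = result * Fraction(numerator: 16, denominator: 1)
    if result.denominator > 4 && tablespoons.denominator == 1 {
        return "\(tablespoons.text) tablespoons"
    }
    return "\(result.text) cup"
}

// Cutting "1/2 cup" in fourths:
// scaledCups(Fraction(numerator: 1, denominator: 2),
//            by: Fraction(numerator: 1, denominator: 4))   -> "2 tablespoons", not "1/8 cup"
```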

It isn’t perfect. There are probably cases I’ve missed, or cases it’s not handling properly, but I am committed to addressing those.

Working on ingredient scaling, though, reminded me that problems which seem exceedingly simple and easy to solve on the surface are usually icebergs waiting to sink you. They may look simple, but they tend to conceal much complexity underneath the surface that is only obvious once you’ve thought through the problem. These “yeah, that should be easy to do” features have, more often than not, been the hardest problems to solve in my experience.

This experience is also another lesson that while users like receiving new features and they’re exciting to announce, over the longer term users only want a new feature if it improves their net experience in some way. If it ends up not being useful in many cases, or adds complexity to the process of using your application, then that feature could make your application less useful than it was before.

August 6th, 2013

Fitbit Flex

In the beginning of June, I began wearing a Fitbit Flex. I decided to purchase one for two reasons. First, wearable computing is growing very quickly, and is increasingly interesting as well. Second, the idea of measuring (a few elements of) my health on a daily basis is fascinating to me, so I thought it might be beneficial to do so.

The Flex has become a much larger part of my daily routine than I thought it would. If you are unfamiliar with it, the Flex is a relatively unassuming wristband-bound device that tracks how many steps you take and your movement while you sleep and syncs that data to a companion iPhone application using Bluetooth. The iPhone application shows you (approximately) how many steps you took each day, what (approximate) distance you walked, and (approximately) how many calories you’ve burned. The application also allows you to log meals.

Typically when I begin using some kind of system that’s supposed to improve my life in some way but requires constant input from me, I use it for a while when I’m excited about it, until one day when something gets in the way (life) or I just forget, and from then on I forget to use it altogether. To-do applications are the best example of this; to be useful, they require constant user input. As a result, once the user fails to enter a new task, the motivation to continue using it begins to disappear.

I assumed the Flex and meal logging would follow that same pattern—I would use it for a couple weeks because I’d still be excited about a new piece of technology, but once that newness wore off and once I forgot to log a day’s meals, I would stop paying attention to it. And then it would be another device in a drawer.

After nearly two months of use, though, I’m still using it. And not just some of it, like step counts—I’m using meal logging and sleep tracking as well.

I think that’s because the Flex provides a certain amount of information without any effort on my part. As long as I wear it, I can see how much I’ve moved around each day, and a (good enough) approximation of how many calories I’ve burned. That’s quite informative on its own. Actual data (over a fairly large period of time) makes it much clearer to me that I need a sustained plan for getting in better shape, and crucially, it also is much more rewarding on days when I am quite active. Seeing how many miles I’ve moved, and feeling the pleasant little buzz the Flex makes when I cross my daily goal, is surprisingly powerful. It’s certainly more powerful than the vague sense I had before that I wasn’t active enough.

As a result of that “free” information, I have a large motivation to also log my meals, because doing so will show me my (rough) caloric input versus my (rough) caloric output. We all know that certain foods are very high in calories, carbohydrates and fats, but it’s an amazing thing when you can get a fairly good idea of how many calories you’ve already eaten for the day, how many you’ve burned, and what that piece of chocolate cake is going to do. Suddenly, there’s a higher internal cost to eating badly, because you can see it all laid out for you.

But interestingly, logging my meals—something I’ve gotten in the habit of doing for the above reasons—has had a more subtle effect, too: eating well consistently is much more satisfying than it otherwise would be. Each time I check in that I had fruit or vegetables rather than chips or french fries, it’s gratifying. It makes it a bit of an accomplishment. I didn’t expect that at all, but at least for me, there’s absolutely a feeling of accomplishment that results from consistently eating and logging healthier meals and snacks.

Because I now have immediately accessible information about how much I exercise and what I eat, it’s given me much more motivation to plan my meals and exercise, and to stick with it. Before, it was fairly easy to skip out on a day’s exercise (which turned into a few days, and then weeks…) or to shrug off a day of poor meal choices as a one-time fluke (even if it wasn’t), because I could. It’s much harder for me to do that, though, when the information is right in front of my face. That’s important.

What’s important, too, is that it hasn’t turned me into a calorie-counting, joyless automaton. It hasn’t turned eating into something that’s purely for providing energy and nutrients. I don’t always avoid large meals or dessert because it might bump me over my calorie target for the day. In fact, it’s been liberating, because it’s helped me impose a bit of discipline in my regular eating, so having an occasional large meal or a dessert doesn’t feel shameful at all—it’s something that I can enjoy without worrying that I’m eating terribly. I know I’m consistently eating well, so occasional indulgence won’t hurt.

It’s interesting how powerful a little information can be.

July 23rd, 2013

Apple’s New Beginning

At WWDC this year, Apple announced the most dramatic update to iOS since iOS 2 introduced the App Store and SDK for developers. The update is also the most controversial.

In 2012, Tim Cook fired Scott Forstall, Apple’s senior vice-president (SVP) of iOS, and distributed his responsibilities among Apple’s executive team. Craig Federighi became SVP of software, unifying the iOS and OS X teams under one leader. Eddy Cue received the title SVP of “Internet Software and Services,” unifying Apple’s iTunes, iBooks and App stores, iCloud, Maps and Siri. And in addition to hardware design, Jonathan Ive assumed responsibility for Human Interface design as well in the shake-up.

At the time, Adam Lashinsky reported that Forstall refused to sign an apology for iOS 6’s Maps release, and that sealed his fate. While his refusal may have been what precipitated his firing, Lashinsky says that Forstall didn’t get along with Ive. I’ve heard similar things as well, and that those difficulties were not isolated to Ive.

I don’t think Cook decided to fire Forstall simply because he didn’t get along with others in Apple’s management team, or because he refused to take public responsibility for Maps’s issues. Rather, I believe it was a result of Cook re-organizing Apple’s management structure to reflect his becoming CEO. Cook is not Steve Jobs, and he does not pretend to be, so he decided to change Apple’s structure to reflect that.

Jobs fulfilled a rather unique role as CEO of Apple. Rather than oversee and manage the work of others, Jobs was intimately involved in product and product design decisions. No detail was too small for Jobs’s attention. Jobs both originated product ideas and took part in iterating on them, but more importantly, Jobs acted as a filter for Apple. All product decisions ultimately passed through him; he approved new products and new directions, and vetoed them, too. As a result of this role, collaboration among his lieutenants and teams wasn’t as important; indeed, Jobs was known to foster competition and even conflict among individuals and teams to try to elicit their very best work, and then he would choose what he saw as the superior work and direction for Apple.

Cook, as far as I know, doesn’t scrutinize each pixel of an application’s design. He doesn’t have Jobs’s ability to understand what makes for a good product. Jobs was one-of-a-kind, and Cook recognizes that. Recognizing that, however, means that he couldn’t continue Jobs’s management style. Instead, Cook needs to rely on his management team to replace Jobs’s role. Each member must take absolute responsibility for their area of focus and must be incredibly talented at managing it. Most importantly, though, because Apple no longer has the singular filter that all larger product decisions pass through, that management team must work together. Apple could withstand conflict and islands—even benefit from them—with Jobs because each area ultimately ran through him, and because he directed each area. Since Cook can’t fill that role, he needs supremely talented people in charge of each area working with each other to set Apple’s direction. Jobs’s Apple could feed off of discord, but Cook’s Apple must feed off of collaboration.

In Apple’s introduction video for iOS 7, Jonathan Ive says that “We see iOS 7 as defining an important direction and in many ways, a beginning.” While Ive may have meant they saw it as a new beginning for iOS, iOS 7 also marks the beginning of Apple’s new management structure.

When Cook fired Forstall last year, it wasn’t clear what it meant or whether it was for the better. Embarrassingly, Cook hired John Browett to head retail in April 2012 and fired him in October along with Forstall. One way to read that—and many did—was that Cook’s leadership was failing; he had hired an obviously bad fit for retail and was forced to get rid of him in six months. In that light, Forstall’s firing and Apple’s management restructuring looked like it could be the result of a struggling management team.

Until WWDC this year, it wasn’t clear whether that view was correct, or whether it was a part of Cook’s reorganizing Apple to work best under new leadership. Today, though, I think it’s clearly the latter view that was correct.

With that out of the way, I believe that this year’s keynote was meant to establish the foundation for Tim Cook’s Apple. In episode 19 of the Accidental Tech Podcast, Marco Arment, John Siracusa and Casey Liss discuss the keynote introduction video, which explains Apple’s motivation. Siracusa says that while he liked the video, he thinks that it suggested they were about to introduce something groundbreaking, and that OS X Mavericks, the new Mac Pro and iOS 7 didn’t live up to that. Siracusa might be right, but I think he misses its intent. This wasn’t meant to congratulate themselves for being great; rather, it was meant to affirm Apple’s motivation for what they do. Along with their “Our Signature” ad, I think they are the equivalent of the “Think Different” campaign for the Cook era.

Famously, Jobs said that the people at Apple shouldn’t try to make decisions by asking themselves what he would have done. Instead, he said, they should just do what’s right. Clearly, Cook took that to heart. This is Cook’s Apple, and they are not constraining themselves by what feels Jobs-like. Cook hasn’t confused the trappings of Jobs’s Apple—how Jobs managed the company—for its heart: an irrepressible desire to make insanely great products that improve people’s lives and give them joy.

Apple, then, has changed significantly since 2011. Things are quite different at the top than they were then, and to my eyes, Apple seems more open to the world than it’s ever been in important ways, too. But those changes have all been made so that Apple can continue doing what they always have. This may be Cook’s Apple, but the core is just as it’s always been.

July 9th, 2013

Spying in the U.S.A.

I don’t have much time lately, so this piece will be shorter than I intended, but I want to write briefly about the NSA’s spying programs.

The NSA is collecting, it seems, phone records for all U.S. citizens. They are collecting “metadata” on our phone calls—that is, what number a subscriber called, where they called from, and for how long the call lasted. The Obama administration tried to minimize this by claiming that the information collected did not include the subscriber’s identity or the conversations themselves.

That’s brazenly dishonest and preys on the public’s ignorance. While the program may not directly collect the subscriber’s identity, or the identity of the person they called (maybe! we really have no way to know, since the program is classified and this statement came from an anonymous administration official), it’s trivial for the government to look up who that phone number belongs to. Moreover, the information they are collecting is arguably more important than the conversations themselves, since they provide a very detailed picture of where a person goes, who they’re in contact with and at what times. The only way it could be reassuring that the NSA is not collecting identities and recording conversations is if you don’t understand how powerful a phone call’s metadata is, and how easy it is to look up who owns a phone number. This administration’s attempt to minimize it is, therefore, absolutely vile and reprehensible. It is not only dishonest, but disturbing. They are trying to use the public’s ignorance of the issue, which is a result of their tireless effort to keep these programs a secret, against us to prevent us from coming to an accurate understanding of precisely what the government is doing.

That is as worrying to me as the programs themselves. This administration has pursued leakers with a zeal and ferocity that this country has never seen before. President Obama claims that he does so in order to protect the lives of U.S. troops and intelligence agents in the field, but the administration’s response to these leaks shows another purpose: they are attempting to batten down the hatches to prevent the public from gaining an understanding of precisely what the government is doing to us in our name.

The end result is this: we have a government that collects all of our phone records, has access to nearly all of our personal, private online communications, and that not only refuses to tell us what they are doing, but attempts to cloud the issue and mislead us. They say that there are ample rules and safeguards in place to ensure this information is not abused. Bullshit. The only court overseeing it is a secret court whose decisions we cannot see, and who provides no real direct oversight whatever. And even if that were the case—even if they were using this information correctly and guarding it jealously from abuse—recording our every communication is inherently an abuse of our rights. It is an abrogation of the right to privacy when the government can record any and all communications we make. There is no privacy, just the government’s word that they won’t use their great power to harm us. This is not the beginning of a slippery slope; we have already fallen straight on our asses, slid down it and lie crumpled and battered at the bottom.

Our government spies on us all and lies to us about it. There is no gray here. It is wrong, and our government is doing us harm. Our government has convinced itself not just that it needs this power, but that it deserves this power, and that it deserves our trust. It does not deserve the power to collect any communications it pleases, nor does it deserve our trust that it will not use it to harm us.

In Obama’s 2009 Inaugural Address, he said:

The question we ask today is not whether our government is too big or too small, but whether it works — whether it helps families find jobs at a decent wage, care they can afford, a retirement that is dignified. Where the answer is yes, we intend to move forward. Where the answer is no, programs will end. And those of us who manage the public’s dollars will be held to account, to spend wisely, reform bad habits, and do our business in the light of day, because only then can we restore the vital trust between a people and their government.

If there is a tight summation of Obama’s governing philosophy, this is it—the question is not how big government is, but whether it works. Through his actions, Obama showed his philosophy to be false. He did not restore the “vital trust” between the people and our government—he undermined it by doing his business behind closed doors and by ruthlessly ensuring that anyone who dared leak information to the public about that business was prosecuted. But worse, this shows why the size of government—that is, the extent of government’s power and its reach into our lives—is so vitally important. As the government’s power coalesces, and as it reaches further into our lives, the likelihood of abuse of power increases and the magnitude of its effect does, too. Worse, this philosophy justifies the very abuses we see now and the abuses that Obama criticized so much before running for president: if those in power feel they can use it without “unnecessarily” (the definition of which is, conveniently, left to the government) impinging on our rights while still protecting our security, then it is justified—because the standard used is effectiveness.

That philosophy demands a dangerous level of confidence in government at every level: confidence in its motivations and in its ability to prevent abuse. As James Madison wrote, “If angels were to govern men, neither external nor internal controls on government would be necessary.” But humans are not angels, and power tends to alter people’s sense of right and wrong in unbecoming ways. That recognition implies that we should be skeptical of government and skeptical of increases in its power, because abuse is such an easy road to end up on. Obama dispenses with that skepticism and replaces it with a focus on how well government works. The NSA’s spying, more espionage prosecutions than under any prior president, and the government’s attempts to mislead us are the direct result.

There’s nothing else to be said than this: it is wrong, and it should not stand.

June 26th, 2013

On the Philosophy of Google Glass

Technology inherently alters how we live, what we do, how we think, how we die. Modern medicine has turned many diseases that, when contracted, more often than not brought death into things of the past or minor annoyances, and injuries that would have been debilitating or deadly into ones that can be overcome. The motion picture didn’t simply make plays something that could be watched over and over again by many different audiences; it created a mass entertainment accessible to nearly everyone, and changed how we learn about news events and conceive of them. The car allowed for suburban sprawl, and for people to travel where they please, whether for the evening or for extended trips across thousands of miles; and in so doing, the car changed what the American “ideal” is, spawned innumerable cultural groups centered around it, and helped construct a rite of passage into adulthood for teenagers. Getting your driver’s license and first car is the first step toward being an adult, but it is also a right to freedom, to being able to go wherever you please, on your own or with friends, at a moment’s notice.

So, for as long as humans have been creating tools to influence the world around us, the tools—technology—have been influencing us, too. It’s an inevitable byproduct of using something, and this isn’t exactly a new insight. Smartphones, and now “wearable” computers like Google Glass, are merely the latest human technology to influence their creators.

But while they may only be the latest example of something that’s been happening for as long as humans have created tools, there is, I think, something very different about so-called wearable computers like Google Glass. They have the potential to integrate themselves so deeply into the user that over time, and as they develop further, there will be little reason to differentiate between the device and the user. Removing your Glass device will feel very much like losing a limb or sense—something that you’ve grown used to depending on and using is gone. Through this much deeper integration, these devices could fundamentally alter the human experience and what it means to be human.

That might sound alarmist, like science fiction, or—if you own a smartphone—just remind you of that small moment of dread, like something’s wrong, when you leave the house without your phone.

Wearable computing has much more potential than “wearable” implies: through overlays on our vision (or, potentially, more direct connections with the brain), things like Google Glass can become another sensory input as well as an output. Google Glass already allows you to look something up on Google (“Google, how tall is Mt. Everest?”) and get directions without ever pulling the phone out of your pocket, or using your hands at all; you ask, and the information is spoken to you or overlaid at the top of your vision. It’ll notify you about your flight this afternoon or about messages you receive, and you can reply to those messages, too. You can snap a photo or shoot a video. All without using your hands, and it’s all—again—on top of your vision.

The ultimate goal is to form a direct connection between our brains and the web, and all that it entails. Google Glass is merely a first step toward that, and merely a hack that hijacks our vision to provide input to our brains and hijacks our voice for control. A direct connection with the brain is obviously the ideal; there would be no “glasses” to wear, and no need to use voice to control it, which isn’t very efficient. In Steven Levy’s In the Plex, he recounts a conversation he had with Larry Page and Sergey Brin in 2004:

Back in 2004, I asked Page and Brin what they saw as the future of Google search. “It will be included in people’s brains,” said Page. “When you think about something and don’t really know much about it, you will automatically get information.”

“That’s true,” said Brin. “Ultimately I view Google as a way to augment your brain with the knowledge of the world. Right now you go into your computer and type a phrase, but you can imagine that it will be easier in the future, that you can have just devices you talk to, or you can have computers that pay attention to what’s going on around them and suggest useful information.”

The web’s information will be our brain’s information, and the web’s services will be our brain’s tools. We’ll be able to immediately answer whatever question we have, or find whatever information we need. If you’re fixing your sink, instructions on how to do so (or maybe a video?) are just a thought away. In a few moments, you’ll be able to make fabulous chicken tikka masala. Humanity’s knowledge will have a direct pipe into our brains. And you’ll be able to do incredible things, too. You could snap a photo with a thought, send a message to someone, or file away something you’ve come across to a note-taking service. You could control your home’s lights and television. You could… well, the list goes on.1

And, of course, you’ll be able to check Twitter and Facebook, and post to them, wherever you are and while doing whatever else.

I say all this because I don’t think there’s a significant difference between doing all of this through a Google Glass-like device or through some direct brain connection, as Page proposes. If they’re successful at their purpose, both will quickly be absorbed into our senses. Just as we’ve gotten used to being able to pull out our smartphones whenever we have a spare moment or need to settle some dispute or bit of trivia, we’ll reflexively ask Glass for the answer to a question, or to snap a photo, or to check the news real quick, or to look through our Facebook and Twitter streams, even at moments when we probably shouldn’t. And since the amount of effort it takes to do so will be so much smaller than it is with a smartphone (which is already terribly small), we will do all of it that much more frequently. No event will be complete without taking a photo and posting it to our social network of choice, because unless it’s documented, and unless we’ve stuck it in everyone else’s stream, it didn’t really happen.

I don’t think that’s a positive, and that’s to say nothing of the social effects of having a web-connected camera and microphone strapped to our faces. (Dustin Curtis touches on this in his piece about his experience with Glass.) But what I find most troubling is the philosophy underlying Larry Page and Sergey Brin’s thoughts on devices like Glass. They say that Glass’s goal is to get technology “out of the way,” but that isn’t quite it. The idea is that we will all be better off if we’re always connected to the web, always on, with uninterrupted and instantaneous access to it and to humanity’s “knowledge.” The idea Page expresses is that if I can immediately learn about something I don’t know much about, I’ll be better off: I’ll be able to make smarter decisions, and to live a deeper, richer life by spending the time it would have taken to research and learn about something on more meaningful and substantive tasks.

I think, though, that this is a terribly deluded and shallow understanding of what it means to “learn” about something. When we—humans—learn about something, we are not simply committing facts to memory so we can recall them in the future. That’s a very small part of a much larger and much more important process. To “learn” about something is to study the information (when historical events occurred, what happened, and so on), to find connections between it and other things we’ve learned and experiences we’ve had, and to synthesize it into something greater—knowledge. Knowing, say, the Pythagorean Theorem in isolation isn’t of much use, but connecting it to your need to identify another object’s location suddenly makes it very useful. And more abstractly, knowing Roman and Greek history isn’t very useful all on its own, but being able to learn from it and apply its lessons to current political difficulties might prove very beneficial.
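
To make that concrete, here is a small worked example of my own (an illustration I’m adding, not something from Page or Levy): the theorem on its own is a bare fact, but connect it to the question of how far away something actually is and it becomes a tool.

    % Illustration: the Pythagorean Theorem applied to locating an object.
    % If something sits 3 meters east and 4 meters north of you, the
    % straight-line distance d to it is
    \[
      d = \sqrt{3^2 + 4^2} = \sqrt{9 + 16} = \sqrt{25} = 5\,\mathrm{m}.
    \]

The arithmetic is trivial; the useful part is the connection between the formula and the question you actually had.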

Synthesizing information into knowledge isn’t an instantaneous process, because that’s not how we work. We form conclusions and connections between new information and other things we know by thinking through it and living with it. Conveniently, and crucially, taking the time to learn something, or to answer our own question by poring through books and articles and our own investigation, gives us the time to do exactly that. We have little choice but to draw conclusions and form connections between what we’re looking at and what we already know or have seen before, because our brains are working over the material for the answer we seek. We find knowledge when we engage our brains. And, moreover, we often stumble into things unintentionally while looking for something altogether unrelated, things that end up becoming more important than what we were originally looking for in the first place.

Page’s idea—that we would be fundamentally better off if we had immediate access to all of humanity’s information—ignores that. It provides facts, but elides conclusions and connections. What’s worse, it starves us of opportunities to exercise critical thinking; and since critical thinking is a skill, something that must be developed and practiced, it starves us of the chance to develop it at all.

I find that troubling. Glass is not a technology that is designed to amplify our own innate abilities as humans or to make us better as humans, but rather one that acts as a crutch to lean on in place of exercising the very thing that makes us human. I don’t find that exciting. I find that disturbing.

This may all sound like so much hyperbole. After all, we’ve adapted just fine to prior new technologies, despite Luddite claims that they would destroy us. And, undoubtedly, we will adjust to this sort of thing, too, and the world will not come crashing down. I think, though, that this sort of thing—a more intimate connection between us and computers—is the path we are heading down, and since its more intimate nature also makes it more influential over us, we should think deeply about what it’s intended to accomplish for us and what might happen to us in the process.

Technology isn’t a force of nature that just happens. It’s something that we create, and so we should question why we are creating it. This has always been true, but I think it’s even more important now that we do so.

Technology, I think, should exist to improve our lives as humans, to magnify the good and minimize the bad, rather than change our nature or experience. That’s what I believe.

That’s what I believe, and you may disagree. I would suspect many do, especially those with more of a bent toward transhumanism. And that’s fine. But we should be having a much larger discussion about our technology’s intent than we are now, because it’s only increasing in importance.

  1. If the idea of a direct brain interface seems ridiculous, it shouldn’t; researchers have shown that monkeys can control a robot’s motion with their brain activity, and the means they use are relatively rudimentary. The brain appears to be able to adjust to and adopt new sensory inputs.
May 31st, 2013

“Free Trials and Tire Kickers”

Marco Arment argues that introducing free trials for higher-priced applications in the App Store would undermine people’s tendency to try out a number of inexpensive applications, even ones they don’t end up using long-term, simply because they’re so affordable:

If the App Store mostly moved to higher purchase prices with trials, rather than today’s low purchase prices and no trials, this pattern would almost completely disappear. Instead, we’d get the free trials for almost everything, and then we’d only end up paying for the one that we liked best, or the cheapest one that solved the need, or maybe none of them if we didn’t need them for very long or decided that none were worth their prices.

In this type of market, the winners can make a lot more, because you can indeed charge more money. But the “middle class” — all of those apps that get tried but not bought — all make much less.

I think Marco’s right. (Please do read his entire piece. It’s very good.)

Since releasing Basil last year, I’ve been thinking a lot about this, and about paid upgrades, a related topic. Trials seem like they would be a positive thing for developers: users could try out our applications, see how good they are, and then, theoretically, be willing to pay a higher price, and do so in enough volume that our current sales would increase or, at minimum, wouldn’t suffer. Charging $10 for an application sounds a hell of a lot better than charging $2.99 or $3.99.

Marco is right that this would fundamentally change the nature of the App Store. Rather than spend a couple of bucks here and there to try out new applications, users would more likely try a large number of applications for free and end up paying only for the one that best fits their needs. Of course, that may be more fair; users pay only for the application they need, and only the developer who provided it gets paid. But as Marco points out, that erodes the entertainment aspect of the App Store.

As a result, since that market would resemble the PC or Mac software market, he argues the outcome probably would, too: a relatively small number of developers and companies would do especially well, and most others would make very little. That’s convincing.

I don’t think there’s a net benefit to introducing trials. That market may support deeper, more full-featured applications, but it could also throw out one of the App Store’s greatest attributes: the ability for a single developer or small team to take one good idea, turn it into an application, and make it accessible to a huge audience—all while possibly making a decent income and having a chance at a huge success.

Rather than hope for trials or even paid upgrades, I think developers need to use the tools we already have: in-app purchase and subscriptions. IAP allows developers to reach a wide audience with a low initial price (or even free), and to make more from the customers who are willing to pay for more. Paper for iPad is an excellent example of how to do this. The application comes with a “pen” drawing tool for free, but pencil, marker, paintbrush and color-mixing tools are available through IAP. There’s nothing predatory or abusive about Paper; it’s a beautiful, useful application, and the tools available for purchase make it even more useful.
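
To make that model concrete, here is a minimal sketch, in Swift, of what the “free core, paid tools” approach might look like with Apple’s StoreKit framework. It’s purely illustrative (the class, product identifier and unlock mechanism are hypothetical placeholders, not Paper’s actual implementation), but it shows how little machinery the model requires.

    import Foundation
    import StoreKit

    // A minimal sketch of the "free core, paid extras" model described above.
    // The product identifier and persistence are hypothetical placeholders.
    final class ToolStore: NSObject, SKProductsRequestDelegate, SKPaymentTransactionObserver {
        // Hypothetical identifier for an extra drawing tool sold via IAP.
        private let toolIdentifier = "com.example.sketchapp.paintbrush"
        private var productRequest: SKProductsRequest?

        override init() {
            super.init()
            // Listen for purchase results delivered by the App Store.
            SKPaymentQueue.default().add(self)
        }

        // Ask the App Store for the product so it can be offered for sale.
        func fetchTool() {
            let request = SKProductsRequest(productIdentifiers: [toolIdentifier])
            request.delegate = self
            productRequest = request // keep a strong reference while the request is in flight
            request.start()
        }

        // Once the product comes back, queue a payment for it.
        func productsRequest(_ request: SKProductsRequest, didReceive response: SKProductsResponse) {
            guard let product = response.products.first else { return }
            SKPaymentQueue.default().add(SKPayment(product: product))
        }

        // Unlock the tool when the purchase completes.
        func paymentQueue(_ queue: SKPaymentQueue, updatedTransactions transactions: [SKPaymentTransaction]) {
            for transaction in transactions where transaction.transactionState == .purchased {
                UserDefaults.standard.set(true, forKey: toolIdentifier)
                queue.finishTransaction(transaction)
            }
        }
    }

A real application would also handle failed and restored transactions and verify receipts, but the basic shape is exactly this: the application is free or cheap up front, and the extras that make it more useful are unlocked by small purchases.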

Those are the kinds of things we should be thinking about. Not only is hoping and waiting for trials unproductive, it limits what your application is capable of. IAP is an incredible tool: it allows for unique, powerful applications while keeping them accessible to a very large audience. That capability shouldn’t be shunned; instead, we should think about how to use it to build businesses that are sustainable for us and useful for customers.

May 10th, 2013

The Camera iPhone Ad

Ken Segall on Apple’s camera iPhone ad:

What this commercial does so well is capture the human side of technology. It’s a reflection of daily life, and it’s easy to see ourselves in it. The ad shows us how essential our phones have become, enabling us to capture the people, places and images we don’t want to forget.

What’s powerful about this ad, to me, is that it’s just people living, experiencing and enjoying little moments and big moments, and the iPhone is just there to capture some of it. Not to be front and center, not to be the focus of attention—just to snap a little part of it and continue on. It’s not that the iPhone is incidental to these moments; in fact, in many of them it’s integral (kids videotaping their friends skating, snapping random photos of puddles). But none of these little vignettes shows someone with their head buried in an app, ignoring everything around them—the iPhone is there to capture certain moments, or to make them better.

Of course, the iPhone certainly does allow people to bury their heads and disappear from what’s going on around them, and people (we) certainly do that. I think, though, that’s counter to the iPhone’s spirit, and I love that this ad embodies the idea that the iPhone is meant to make day-to-day life better, rather than to capture our lives altogether.

The ad doesn’t provide a ready-made tagline for why you should purchase the iPhone. There’s no explicit or implicit comparison to competing devices (except for the ending “Every day, more photos are taken with an iPhone than any other camera,” but that says more about what the iPhone is than about what the competition isn’t). It’s simply an affirmation of what Apple believes the iPhone to be, what its intent is, and that intent is much larger than the feature set.

And it’s a powerful ad because of that. I think this is Apple’s best ad since the “Think Different” campaign, and it is very much the same kind of ad: it’s about what Apple is, not what their products do.

May 9th, 2013

Design Products As a User

When Apple announced the iPhone in 2007, they presented it as a device that did three things: made calls, played music and video, and browsed the web. When I saw the announcement, I knew that day that I had to have one when it was released.

That wasn’t because I was an Apple geek (although I certainly was). It was because I immediately knew what it did and what it would do for me. It would do what I’d tried to do with a PocketPC for a couple years—put the web, my contacts and calendar in my pocket, wherever I am—and combine my phone and iPod into a single device that is superior to them at their intended function. I knew it because that’s how Apple presented it. They presented it as a device that did those three things.

They could have presented it as a technological marvel, a device that combines a high-resolution multitouch screen, a fast mobile processor, cell and WiFi radios, and proximity, ambient-light and accelerometer sensors into a handheld device with desktop-class software and surprisingly good battery life, a PC in your pocket. But they didn’t; rather, they presented it in terms of what it did for users and what they would find useful about it.

This isn’t important just for how the iPhone was presented, however. Presenting it that way made it immediately intelligible to me and to many others, even those for whom the technology underneath is closer to magic than science: they presented it from the user’s perspective rather than the creator’s, and showed what role it could play in our lives, rather than making the viewer do that translation on their own. That’s an important lesson in how to market a product, but what’s even more important is that this focus on what it does for the user didn’t start when Apple began creating the presentation to introduce the iPhone—it began all the way at the beginning of the project itself. They envisioned and designed the product as a user, rather than as a designer or engineer.

What this means is starting with a problem or unfulfilled need that people have, something that, if it were solved, would make people’s lives better in some way. Then you must understand precisely what that problem is, what the person really wants, and what its underlying causes are. Only then do you start designing a product or service. Done this way, the entire product-creation process—from generating ideas (“ideation,” a word I loathe) to packaging and delivering it to customers—happens within the context of solving a concrete problem. Every design and engineering decision is made within that context, and there is a built-in decision process for whether to add or remove something, and a metric for how well each part succeeds: does it better solve the problem for the user?

This goes beyond “empathizing” with users.1 Instead, it means thinking as a user, from beginning to end, and using that perspective to decide what you should or shouldn’t do, and what your product or service should or shouldn’t be.

Apple does this better than any other company, in part because they are ruthlessly focused on it. One of Jobs’s better-known sayings was that he was as proud of the products Apple didn’t ship as of the ones it did. This line is held up as a reminder that to do great work, companies must focus. But focus on what? Designing as a user provides an answer: focus on what will do the most good for users. All decisions flow from that.

  1. “Empathize with your users” has always seemed a rather disgusting concept to me—it shouldn’t be some great insight to empathize with your users, and in fact it sounds clinically calculating: empathize with your users and you will have more success! Empathy—understanding the feelings of others and caring about them—should be the starting point for any business, rather than something to bolt on in order to increase sales.
April 3rd, 2013