“Original” Category

Apple Watch

The phone dominates your attention. For nearly every use, the phone has your undivided attention. Browsing the web, Twitter, Instagram, Snapchat, watching video, reading, messaging—all require focus on a screen that fills your vision, demands your primary attention, and generally asks for some kind of interaction. And everything else is always just a home button or notification-tap away.

Is that any surprise when the phone is the single gateway to nearly everything? The PC is now for doing work, but the phone is for messaging, taking photos, sharing them, the web, Twitter, Facebook, finding places to go, getting directions there, and even making calls.

That is why, when we receive a message and pull out our phones to respond, we so often descend into a muscle-memory check of our other iMessages, email, and Twitter stream. We pull out our phone for one purpose, like responding to a message or checking our schedule, and end up spending several mindless minutes (or, if I am honest, more than “several minutes”) checking in on whatever it is. We find ourselves doing this even when we shouldn’t. We do it while seeing friends and family, while out to dinner with them, and while at home, when we should be spending time with them or doing other things.

I used “we” above because I think anyone with a smartphone, or anyone who knows people with them, can find truth in it to a greater or lesser extent.

My concern with wrist-worn “smartwatches,” starting with the Pebble, is that they appear to exist primarily to push the notifications we receive on our phones to our wrists. They seem to exist to make dealing with phone calls, messages, and updates easier: seeing them, ignoring them, replying to them. They are there to make dealing with our phones more convenient. And in large part, that is how smartwatches have been designed and used. “It’s there so I don’t have to pull my phone out of my pocket.”

But that idea of what smartwatches are for, making it more convenient to deal with the flood of notifications and information our phones provide us, is unimaginative. I think what the smartwatch can do is make the phone unnecessary for many purposes, create new purposes altogether, and allow us to benefit from a wrist-sized screen’s limitations.

The Apple Watch

On September 9th, Apple introduced their long-awaited watch, appropriately named the Apple Watch (hereafter “the Watch”). We won’t be able to fully understand what Apple has built until next year, but they did provide a fairly detailed look at the Watch and the software it runs.

It appears that, in contrast to Google’s approach with Android Wear (which is heavily focused on showing single bits of information or points of interaction on the screen, and relies on swiping between cards of data and interaction), Apple intends the Watch to run fairly sophisticated applications. The Watch retains the iPhone’s touch interface, but Apple has designed new means of interaction specific to a small screen. In addition to the tap, the Watch brings the “force tap,” which is used to bring up different options within applications (like, say, the shuffle and AirPlay buttons within the music application), and the “digital crown,” a repurposing of the traditional watch’s crown into a sort of scroll wheel for the Watch. Using the digital crown, users can zoom in and out of maps and scroll through lists with precision and without covering the small screen. And, most interestingly, Apple has replaced the familiar vibration alert of our phones with a light “tap” from the Watch to notify the user.

What this allows is fairly sophisticated applications. You can not only search for locations around you, but you can zoom in and out of maps. You can scroll through your emails, messages, events or music. You can control your Apple TV.

This subsumes many of the reasons we pull out our phones during the day. You can check your schedule for the day, check a message when it’s received and send a quick reply, find a place to get a drink after dinner (and get directions there without having to walk while staring at your phone), ignore a phone call by placing your hand over your wrist, or put something on the Apple TV.

But what force taps and the digital crown will not do is make the Watch’s small screen as large as a phone’s. You can’t type out a reply to a message or email. You can’t browse the web for something. You can’t dig through a few months of your email to find a certain one. You can’t mindlessly swipe through Twitter (well, you could, but it’s going to be pretty difficult). That, though, is an advantage the Watch has over the phone. Because it is inherently limited, it has to be laser-focused on a single purpose, and while using it, so are you. It’s a lot harder to lose yourself in a 1.5-inch screen than in a 4-plus-inch screen.

That’s going to be one of the Watch’s primary reasons for existing: allowing us to do many of the things we do on our phones right now, but in a way that’s limited and, thus, less distracting. If you’re out to dinner and receive a message (and haven’t turned on Do Not Disturb), you’re going to be a lot less likely to spend a couple of minutes on a reply, and then on Instagram, if you’re checking and responding to it on the Watch. It just doesn’t work that way.

In that way, I think Apple has embraced the wrist-worn watch’s inherent limitations. Rather than try to work around them, they are using them. They’ve built new means of interaction (force tap, digital crown, “taptic” feedback) that allow fairly sophisticated applications, but they didn’t use them to cram iOS in its entirety into the Watch.

What I think Apple is trying to do is build a new mode of personal computing on the wrist, one molded from the wrist’s inherent limitations and the opportunities they create.

Truly Personal

In his introduction to the Watch, Jony Ive ends with a statement of purpose of sorts. He says,

I think we are now at a compelling beginning, actually designing technology to be worn. To be truly personal.

That sounds like a platitude, but I think it defines what Apple is trying to do. “Taptic feedback,” which Dave Hamilton describes as feeling like someone tapping you on the wrist, is a much less intrusive and jolting way of getting a notification than a vibration against your leg or the terrible noise a phone makes on a table. More generally, so is focusing the Watch’s use on quick, single purposes.

What is interesting to me, though, is that they are using the Watch’s nature to do things in a more personal—human—way, and to do things that the phone can’t. When providing directions, the Watch shows them on the screen just as you would expect on a phone, but it also does something neat: when it’s time to turn, it lets you know using its Taptic feedback, and it differentiates between left and right. As a result, there is no need to stare at your phone while walking somewhere and following directions.

They’ve also created a new kind of messaging. Traditionally, “messages” are words sent from one person to another as text or speech. Since messages are communication through words, something inherently mental or intellectual rather than emotional, they are divorced from emotion. We can try to communicate emotion through text or speech (emoticons serve exactly that purpose), but that emotion is always translated into words, and then thought about by the other person rather than felt. In person, we can communicate emotion with our facial expressions, body gestures, and touch. There’s a reason hugging your partner before they leave on a long trip is so much more powerful than a text message saying you’ll miss them.

In a small way, with the Watch, Apple is trying to create a new way to communicate that can capture some of that emotion. Because the Watch can effectively “tap” your wrist, others can tap out a pattern on their Watch, and it will re-create those taps on your wrist, almost as if they were tapping you themselves. You could send a tap-tap to your partner’s wrist while they are away on a trip just to say that you’re thinking about them. Isn’t that a much more meaningful way to say it than a text message? Doesn’t it carry more emotion and resonance?

That’s what they mean by making technology more personal. It means making it more human.

The Watch is not about making it more convenient to deal with notifications and information sent to us. It’s not even about, as I described above, keeping your phone in your pocket more often (although that will be a result). The Watch is creating a new kind of computing on our wrists, one meant for different purposes than the phone, the tablet, and the PC. The Watch is for quickly checking and responding to messages, checking your schedule, finding somewhere to go and getting directions there, for helping you lead a more active (healthier) life, and for a more meaningful form of communication. And it will do all of that without sucking our complete attention into it, like the phone, tablet, and PC do.

The Watch is for doing things with the world and people around us. Finding places to go, getting there, exercising, checking in at the airport, and sending more meaningful messages. Even notifying you of a new message (if you don’t have Do Not Disturb turned on) while out to dinner with family or friends serves this purpose, because if you have to see it, you can do so in a less disruptive way and get back to what you are doing—spending time with people important to you.

The Watch is a new kind of computing born of, and made better by, its limitations. And I can’t wait.

September 15th, 2014

I Want to Know

When I was growing up, I was fascinated by space. One of my earliest memories—and I know this is strange—is, when I was four or five years old, trying to grasp the concept of emptiness in space. I imagined the vast emptiness of space between galaxies, nothing but emptiness. I tried to imagine what that meant, but most of all, I tried to imagine what it would look like.

That question, what color empty space would be, rolled around my brain the most. I couldn’t shake it. I would be doing something—playing Nintendo, coloring, whatever—and that question would pop into my head again. What does “nothing” look like? First, I imagined that it would look black, the black of being deep in a forest at night. But that didn’t seem right, either; black is still “something.” And then, I remember, I realized I was thinking about a much worse question. I wasn’t trying to imagine what the emptiness of space would look like. I was trying to imagine what nothing would look like.

I have that memory, I think, because thinking about that sort of broke my brain. I couldn’t comprehend what nothing is.

That question, of course, leads toward the central question of what our universe is and how it was created. I think that’s why space—the planets, stars, galaxies—so fascinated me then; it’s this thing so alien to our world, that dwarfs it on a scale that’s incomprehensible to us, and yet it is us. We aren’t something held apart from it, but intimately a part of it and its history.

Trying to understand the physics of our universe, its structure, and its history is also an attempt to understand ourselves. I think, at some gut level, I understood that as a kid.

I poured myself into learning about our solar system and galaxy. My parents’ Windows PC had Encarta installed, and I was enthralled. I spent countless hours reading everything I could find within Encarta (which, at the time, felt like a truly magical fount of knowledge) about Mercury, Venus, Mars, Jupiter, Saturn, Uranus, Neptune and Pluto. And when I exhausted that source, I asked for books about space, and I obsessed over them. They were windows into these incredible places, and I couldn’t believe that we were a part of such a wondrous universe.

Through elementary school, my love for space continued to blossom. Back then, the people of NASA were my heroes. To my eyes, they were the ones designing and launching missions across our solar system so we could understand even more about it. Many of the photos of Jupiter, Saturn, Uranus, and Neptune that I was so enraptured by were taken by spacecraft designed, built, and launched by people at NASA. They were the people who had risked their lives to leave Earth and go to the Moon, to do something that most people up until just decades prior couldn’t even imagine being possible. And they were the people exploring Mars with a little robotic rover called Sojourner at that very moment.

They were my heroes because they were the people pushing us to explore our solar system, to learn what was out there and what came before us. I felt like I was living during a momentous time in the history of humanity, and that I would live to see advances as incredible as 1969’s Moon landing. There wasn’t a doubt in my mind.

That year, in 1997, I was nine years old. It’s been seventeen years.

Since then, we have indeed made great advances. We’ve sent three separate rovers to Mars, and we’ve discovered that Mars certainly had liquid water on its surface long ago in its history. We landed a probe on the surface of Saturn’s moon Titan, and it sent back these photos. We’ve discovered that our galaxy is teeming with solar systems.

All truly great things. But we are no closer today to landing humans on Mars than we were in 1997. In fact, we are no closer to putting humans back on the Moon today than we were in 1997.

Some people would argue that’s nothing to be sad about, because there isn’t anything to be gained by sending humans to Mars, or anywhere else. Sending humans beyond Earth, the argument goes, is incredibly expensive and offers us nothing that can’t be gained through robotic exploration.

Humanity has many urges, but our grandest and noblest is our constant curiosity. Throughout our history as a species, we have wondered what is over that hill, over that ridge, beyond the horizon, and, sitting around our fires, what the lights in the sky are. In every age, someone has wondered, and because they wondered, they wandered beyond the border that marks where our knowledge of the world ends, into the unknown. We never crossed mountains, deserts, plains, continents, and oceans because we did a return-on-investment analysis and decided the economic benefits outweighed the costs. We did so because we had to in order to survive, and because we had to know what was there. We were curious, so we stepped out of what we knew into certain danger.

And yet that tendency of ours to risk everything to learn what lies beyond what we know is also integral to all of the progress we have made as a species. When we were first building rockets capable of leaving Earth’s atmosphere, it was hardly obvious what they would allow us to do. Would anyone then have known that rocketry would let us place satellites into orbit, enabling worldwide communication, weather prediction, and the ability to locate yourself to within a few feet anywhere on Earth? The economic benefits that result from progress are hardly ever obvious beforehand.

But it is more than that. It isn’t just that exploration drives concrete economic benefits. We think in narratives. Since the Enlightenment and the industrial revolution, we have built a narrative of progress. With each year that passes, we feel that things improve. Our computers get faster, smaller, more capable; we develop new drugs and treatments for diseases and conditions that, before, would have been crippling or a death sentence; with each year, our lives improve. For a century and a half or so, that feeling hasn’t been too far from reality. But most of all, we have continued to do something that cuts to the very center of what it means to be human: we have explored. We have explored the most dangerous parts of Earth, we have explored our oceans, we have put humans into space, and humans have set foot on another world. There is a reason that, when we think of our greatest achievements as a species, landing on the Moon comes to mind with ease. At a very deep level within us, exploring the unknown is tied up with what it means to progress.

As exciting and useful as it is to send probes to other planets and moons, it fails to capture our imagination in the same way that sending people does. The reason is that doing so—exploring the unknown ourselves—is such an incredible risk. What Buzz Aldrin, Neil Armstrong, and Michael Collins did in 1969 was unfathomably dangerous. They knew—everyone knew—that there was a very good chance they would not make it back to Earth. But they accepted that risk, because for them, learning about the unknown was worth it.

Abandoning human exploration of space, then, has consequences more far-reaching than its proponents intend. We would not just be abandoning putting humans into space; at some fundamental level, we would be resigning ourselves to staying here. We would have decided, as a species, that we have gone far enough, that our borders end at our planet’s atmosphere, and that the rest of the solar system and galaxy can be left to nature. And with that decision, we would resign ourselves to no longer exploring in the general sense.

That’s why it is so vital that we continue exploring. Pushing on the edge of what’s possible is what fuels our desire and ability to explore in all other areas, too.

There are still incredible mysteries for us to unlock. We don’t know whether Mars had life early in its history. We don’t know whether, in Europa’s and Enceladus’s oceans, there are lifeforms swimming through them as I write this. We don’t know whether there is intelligent life living on planets in solar systems in the Milky Way and beyond. We don’t know how life began on Earth, let alone how life began at all. And most of all, we don’t know whether it is possible for us to move beyond our own solar system.

But what I do know is this: I want to know. I want to know.

August 8th, 2014

Apple’s Advantage

Monday’s WWDC keynote introduced easily the largest set of changes to Apple’s platforms since iOS 2 was announced in 2008. The effects of what was announced will be felt and discussed for years to come.
There is a lot to think through and write about, which I will be doing in the coming weeks. However, something struck me during the keynote that felt fairly small at the time but that, upon further thought, could end up being important to Apple’s future success.

Apple announced further updates to their cloud service, where you can save all of the photos and videos you take, all of your documents, and all of your data. Apple announced that their Touch ID feature, which identifies you using your fingerprint, will now be accessible to third-party developers as well. And Apple announced a new app and framework for centralizing all of your health and fitness data, which—given your permission—can automatically be sent to your doctor.

That’s in addition to storing your contacts, calendar, and reminders, and tracking your location over time (and keeping that data on your device) so your iPhone can provide you with timely updates on how long it will take to get home or to work in current traffic. Combined, Apple is asking you to store nearly all of your intimate information on their devices and servers, and even to provide the most intimate of it—your health data—to your doctor.

And yet I’ve heard little or no consternation over Apple’s consolidating our most private data, in an era in which our government maintains call logs, collects security and encryption exploits, breaks into private services to collect data, and has lied to the public about the extent of what it is doing.
That should be surprising, especially considering how much push-back companies like Google and Facebook have received for collecting and using our personal data. On the whole, people seem to trust Apple to respect their personal data.

The reason, I think, starts with the fact that Apple’s business is *not* their users’ data. Their business is selling devices and services to their users. As a result, Apple’s interest in their users’ data is not to generate revenue (which is inherently Google’s and Facebook’s interest), but rather to use it in such a way that they can create compelling and meaningful products for their customers. Their incentives are aligned with their users’ incentives because of their business model.

Second, Apple takes this relationship very seriously. iOS makes it very clear when applications are requesting access to our personal data. Apple has worked quite hard to make sure that the *user* decides what and how much they want to share.

I don’t think Google or Facebook could announce that they are going to collect their users’ health data and optionally send it to their doctors without a substantial amount of criticism and fear of abuse. The reason is obvious: their primary business is utilizing user data to generate revenue, so why wouldn’t they do the same with health data?

As time goes on, the integration of our smartphones and health-tracking devices, and the increasingly sophisticated use of the data they generate together, will become the primary space where meaningful development in technology occurs. There’s huge potential in what Apple has announced with HealthKit. If it takes off, it will be a single place to store all of our health data. This will not only benefit doctors, who will be able to see that data for the first time, but by aggregating it for each individual (and potentially for groups), we will be able to see trends and correlations between our decisions and our health that we simply could not see before.
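To make that a little more concrete, here is a minimal sketch (in present-day Swift, and purely illustrative rather than anything Apple showed) of what it looks like for a third-party app to write a single sample into HealthKit, with the user's explicit permission:

```swift
import HealthKit

let healthStore = HKHealthStore()

// The type of data we want to write: step counts (force-unwrapped for brevity).
let stepType = HKQuantityType.quantityType(forIdentifier: .stepCount)!

// Nothing is stored or shared until the user explicitly grants permission.
healthStore.requestAuthorization(toShare: [stepType], read: [stepType]) { authorized, _ in
    guard authorized else { return }

    // A single sample: 500 steps over the last ten minutes.
    let now = Date()
    let sample = HKQuantitySample(type: stepType,
                                  quantity: HKQuantity(unit: .count(), doubleValue: 500),
                                  start: now.addingTimeInterval(-600),
                                  end: now)

    healthStore.save(sample) { success, error in
        // The sample now lives in the user's central, on-device health store,
        // where other apps the user approves (a doctor-facing app, say) can read it.
    }
}
```

The point isn't the specific calls; it's that the data lands in one user-controlled store on the device, and every read and write goes through an explicit permission grant.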

All of that has the potential to enable both better decision-making and doctors reaching us when something appears to be seriously wrong that we ourselves may not even be aware of. There is incredible potential here, and I think Apple is the only company that can pull it off. That puts Apple in a unique position going forward and provides an advantage that no other company has.

June 3rd, 2014

No Answer

You have all the answers to my questions
Even ones I didn’t have
Why should I know anything at all?

You know everything I need
Everything I may need
You hold it all for me
So I waste no time

But still I wonder, why don’t I wonder?
Like I did as a kid
But no answer

March 31st, 2014

Tony Fadell Explains Nest’s Sale to Google

Last week, Nest was acquired by Google for $3.2 billion.

There are only a few companies that have truly excited me in the last few years, and Nest is at the top. They worked on a very original, very real problem—thermostats were not only a user-interface disaster that nearly everyone hated to use, but also an opportunity to do something meaningful: use energy more efficiently in people’s homes while improving people’s relationship with an important but ignored device. In addition, the thermostat clearly was the first product in a much larger plan. And it was a very good first step in a very good plan.

So, when I heard that Nest had sold to Google, I felt a pang of disappointment. Not because it was Google (which, if Nest were to be acquired, makes more sense than any other company I can think of), but because Nest is an incredibly ambitious company that, I think, had the opportunity to be as important to the next wave of development in personal computing and the Internet as Apple and Google were—and potentially as large. They were a key member of Silicon Valley’s next generation of meaningful companies, I thought.

Of course, nearly every bit of that can (and will) still be true, with the notable exception of remaining independent. They can still do all of that, but they will do so under Google’s banner, and for Google’s benefit. And that’s fine, all else being equal. Before I continue, though, we need to discuss why Nest decided to sell in the first place, and “for the money” doesn’t count, because I know Tony Fadell, Matt Rogers and everyone else there didn’t do it for the cash.

Here’s why they sold, according to Fadell:

I was spending nearly ninety percent of my time on building the infrastructure of the company and I wasn’t able to spend enough time and cycles on what I love doing: products and creating differentiated experiences for our customers. That is where my love is and Google offered to let us focus on that, but with scale that will help bring our horizon closer to us, faster. Google offers to bring that scale to us. For me, ultimately building great products is key.

Fadell cites European distribution as a specific example of what he means by “scale”—physical distribution and the legal issues surrounding something as regulated and disparate as energy. He wants to focus his time on developing products rather than handling everything around them.

It’s hard to argue with that. Nest clearly wants to move quickly. The Nest thermostat is a shockingly good first product, and Nest Protect—which they released just two years later—is at least as good. Nest Protect also began revealing their larger strategy. Owning either one is great, but owning both makes each better. Since both devices are on your home’s wireless network, the Nest thermostat can automatically augment itself with the Protect’s motion sensors. And more importantly, if the Protect senses rising levels of carbon monoxide, the thermostat will shut off your home’s furnace. Their strategy, then, appears to be modular devices that are convincing on their own, but that, when used together, not only function better but begin to form the basis of a connected home.

Being a part of Google will allow them to realize that strategy faster, by increasing their resources so they can focus their bandwidth on developing products. Google is also doing industry-leading work in learning systems and web services, which will obviously benefit Nest. Like I said, of all the companies in the world that could have acquired Nest (which, admittedly, is a fairly short list), Google is the best fit.

But Google didn’t agree to acquire Nest entirely for Nest’s benefit. They did it, I assume, because Nest fills particularly important holes in Google’s capabilities and in Google’s future development. While Google has been very good at building web applications, web services, and a mobile operating system, they’ve done very little to prove that they can design and make hardware that real consumers will pay real money for. There’s a lot more involved there than design and supply chain. Making hardware also involves businessy things like identifying a target market, figuring out what price that market will pay at the necessary levels of sales and margin, and then manufacturing a quality product efficiently enough to hit that margin. Nest has shown that not only can they do all of that, but they can produce an exceptional product that customers truly love. That’s invaluable, and it’s something Google hasn’t done.

Nest also provides Google an entry path into the home. Getting into the connected home requires building hardware, and it requires a no-bullshit vision for how the connected home can improve people’s lives in substantive ways. Nest provides both of those things.

It sounds like a symbiotic relationship, then. Google can provide Nest what it needs, and Nest can provide Google something it needs, too. In Nest’s ideal vision of the relationship, Nest will remain largely independent—its own brand, leadership, teams, and products. People and resources may flow across the Nest-Google boundary, but the two entities will nevertheless remain distinct. In Google’s vision, though, Nest will begin to overlap and merge with Google itself. If Google wants the Nest acquisition to result in an improved capability for creating hardware products that consumers really want, then that necessarily requires Nest’s leadership to extend outside of Nest itself—which would require splitting their time, too. While Nest may become functionally a discrete unit within Google (the “connected home” unit, let’s say), if it is to have any effect on the rest of Google, there has to be some sort of crossover. That may mean putting Nest’s leadership (whether that’s Matt Rogers or another member of the team) in charge of Google’s hardware, or even having people in leadership roles move back and forth across the boundary. In any case, the boundary begins to smear, and Fadell’s reason for doing the deal—to focus his team’s time exclusively on product—begins to seem less likely.

Of course, that’s not necessarily negative. Perhaps a Nest-infused Google, and a Google-infused Nest, is better for everyone involved—Nest, Google, and us. I think there’s a good argument to be made there. But inherently, as that occurs, Nest begins to fade as a distinct entity, and it becomes more Google.

I think the most optimistic comparison for this acquisition is Disney’s 2006 acquisition of Pixar. Pixar remained an independent studio, kept their leadership, kept their campus, kept their culture, and created some of their most artistically and commercially successful films afterward. In return, Disney received Ed Catmull and John Lasseter’s services in turning around its declining animation department. And turn it around they did; Disney Animation Studios is enjoying something of a renaissance. Frozen, released in November 2013, was Disney’s biggest hit since The Lion King. The Pixar acquisition is one of the most successful acquisitions in history.

That could be how it works out here, too. I suspect, though, that while Pixar has thus far been able to retain its independence, Nest will not retain its independence to the same extent. I have two main reasons for thinking so. First, the Disney-Pixar deal was incredibly specific in its intent: Catmull and Lasseter would oversee Disney Animation, and Pixar would remain its own studio. The Google-Nest deal, as far as I can tell, doesn’t appear to be nearly as well-defined. As a result, blurring will happen with relative ease. Second, in the movie business it’s actually beneficial for Pixar to remain independent in substance and in brand—it allows them to experiment in ways they couldn’t necessarily do as part of a single studio, and it allows them to release multiple movies per year in a way that doesn’t feel like Disney movies competing for people’s attention—but that structure doesn’t make nearly as much sense for Google and Nest. In reality, centralizing their hardware operation makes much more sense than running Nest in parallel with Google’s other hardware operations. As a result, I think what we are more likely to see is Nest more or less become a part of Google, while the brand continues on as Google’s “connected home” brand.

In the short term, then, I think there’s very good reason to be excited about the deal. I bet we are going to see even more incredible things come out of Nest than we would have seen otherwise, and probably faster as well. But in the long term, I’m disappointed. Nest is one of those rare companies that identified a brilliant product idea, in a large market, that would allow them to develop into something much greater in the future. And along the way, they built a first-rate company in all areas. I believe Nest would have been one of the most important companies in the world for the next twenty years. And while they may still be integral to the future of personal computing and the web, it will likely be under Google’s banner. For better or for worse.

January 21st, 2014

Where Does the iPad Go From Here?

When Apple released the iPad Mini a year ago, I wrote that the Mini wasn’t just a secondary option to the main iPad, but that it was the iPad. What I meant is that the Mini fulfilled much of the original iPad’s vision better than the original or any of its full-sized successors did. Because the Mini was so much lighter and so much easier to hold, it was not only more enjoyable to use while sitting on the couch or in bed, but it opened up contexts that the full-sized iPad’s size and weight didn’t allow. The iPad’s promise was powerful computing available to you in something you could comfortably hold in your hands, and the Mini fully delivered on it.

With this year’s full-sized iPad, though, the line between the two devices blurred. It’s still discernible, but it’s murkier. The iPad Mini is still superior for reading since it’s lighter, but the difference has narrowed considerably. Forget how many grams each device is; the iPad Air is quite comfortable to hold one-handed. Not as nice as the Mini, but nice.

The Mini narrowed the performance and capability gap as well. It now runs the ridiculously fast A7 processor, the same as the iPad Air and iPhone 5S. For many people, the Mini is big enough to write on as well, or to make presentations in Keynote. The full-sized iPad is still superior for those tasks, and it’s especially superior for tasks like sketching that benefit from a larger screen, but the difference really isn’t that large. They are both quite capable devices for whatever task people want to use them for. The comparison is much more akin to a 13-inch MacBook versus a 15-inch than it is to an iPhone versus an iPad.

Which raises the question: where is the iPad going? More specifically, where are iPad applications going?

The original iPad was released in 2010 with the full iWork suite, and along with the iPad 2 in 2011, Apple released GarageBand and iMovie as well. GarageBand in particular feels like the high-water mark for creative applications on the iPad. Not only was GarageBand incredibly powerful and feature-deep, but it felt like it had been made for the iPad all along.

There are many other applications that are powerful in different ways as well. Paper is a wonderfully simple application, but is capable of remarkably beautiful and intricate work (Paper’s Tumblr makes that clear). Editorial is a well-constructed text editor that is, in some ways, superior to desktop-based text editors. Djay is, appropriately, a DJ application that could only exist on the iPad. And on and on.

I think, though, that we’re beginning to see diminishing returns from increasing the capabilities of existing iPad applications or pushing them into new spaces. Generally speaking, while iPad hardware is dramatically faster and more capable than it was in 2010 (or even last year), applications haven’t advanced at anywhere near the same pace. There are a number of obvious reasons for that, of course; faster hardware doesn’t necessarily imply that software—which depends on many things, but especially on good ideas and demand for them—will advance at a similar pace.

But there’s another reason: iOS still hews closely to the one-app-at-a-time, app-as-silo concept that began with the iPhone in 2007. For some purposes, this actually makes the iPad a better tool than the PC; the iPad’s limited ability to multitask makes it easier to concentrate on writing or other focused tasks like sketching. But it also significantly constrains what’s possible on the device. Writing an email or article that references a website or note, or doing any work that requires looking at one application’s content while working in another, requires flipping back and forth between applications, which makes for an excellent test of patience. And there is virtually no support for allowing two applications to work together on a single task by passing data between them.

Many people have suggested that renaming the iPad the “iPad Air” sets the stage for an iPad Pro, presumably with a larger screen. It’s difficult to disagree with that, but I don’t think an iPad with merely a larger screen would justify a “pro” moniker. The difference between what it could do and what the iPad Air can do would be fairly small; it would be better for sketching, but not much else. Where it would make sense is if the one-app-at-a-time model were relaxed. Using two applications on screen at once, à la Microsoft’s Metro, would certainly benefit from a larger screen. And building support for allowing applications to work together on tasks and share their data would justify the new name that much more.

While conceptually these two changes are fairly simple (I wrote about what applications working together could look like last year), the details are always where it gets difficult. How do you enter “split-screen” mode? How do you get out? What affordances do you provide to users so they understand what’s going on? Do you allow the user to drag content back and forth between applications (I’d hope so!)? How do you indicate what can and can’t be dragged? How do you implement the API to do so? And so on. None of it is easy. It’s inherently complex, and while we all want iOS to become more powerful, these changes are fundamental to iOS’s conceptual design, and a wrong move could endanger what’s made iOS so convincing in the first place: its simplicity.

Nonetheless, if iOS and the iPad are going to continue to progress and become more capable, then these sorts of changes are inevitable. That’s (generally) where we’re going. The rest is details. It’s also, coincidentally, where most of the work is.

November 4th, 2013

Nest Protect

Today, Nest announced their first new product since the Nest thermostat—Nest Protect. Nest Protect is a smoke and carbon monoxide alarm.

For an excellent look at Nest Protect, and a profile of why they made it and how they designed it, you should read Steven Levy’s piece for Wired.

…Wait, what? A smoke alarm?

Yes. Nest’s latest product is a $130 smoke alarm.

Nest’s basic strategy should be obvious now: take things we use in our daily lives but don’t at all enjoy using, or actively abhor using, and think through them so that they’re both better devices and delightful to own and use. (It’s also worth noting that they’re choosing product categories that are very large and universally needed.)

It’s more than that, though. The Nest thermostat and Nest Protect are standalone devices, but they work together. If you have a Nest thermostat and smoke alarms installed in your home, the smoke alarms will inform the thermostat when there’s movement in the home—which should make the Nest thermostat’s “Auto-Away” feature much more accurate, and thus able to be that much more efficient with a home’s energy use.

Even more illuminating of Nest’s vision, though, is that if a Nest Protect senses carbon monoxide, it will tell the thermostat to turn off the home’s furnace, a likely source of carbon monoxide poisoning.

That’s truly smart. Nest has not only built two devices that work together to efficiently manage your home’s energy and protect you from fire, but they’ve created two devices that can actively judge the situation and work together to intervene in your home to keep you safe.

We’ve been hearing about the “smart home” for a very long time now, but this is the first time we’re legitimately getting there. Tony Fadell seemed to confirm this as Nest’s intent while talking with Steven Levy:

In other words, Nest isn’t only about beautifying the thermostat or adding features to the lowly smoke detector. “We’re about creating the conscious home,” Fadell says. “To take a truly important device that has had no great innovation and make that device really, really great.” Left unsaid is a grander vision, with even bigger implications: many devices sensing the environment, talking to one another, and doing our bidding unprompted.

That’s a grand dream, and I think the Nest Protect—ostensibly just a smoke alarm—is going to be a key cog within their plan. Think about it: it’s not just a smoke alarm, but an Internet-connected computer with sophisticated sensors and software in every bedroom and on every floor. It knows when you wake up (since it has a motion-tracking sensor), when you go to bed, and even when you get up in the middle of the night. Along with the Nest thermostat, they also know when you leave for the day and when you get home. There’s a lot of immediate information there to begin doing some incredible things, and it’s something that could serve as a platform for all kinds of other services as well.

So yes, it is “just” a smoke alarm. And a very good one. But I think it’s also a piece of a much larger plan: make products that are so good they stand on their own and you have to have them, but that also work together to create something we’ve never seen before.

October 8th, 2013

The 5C

In an excellent interview with Businessweek, Tim Cook explained Apple’s thinking behind the iPhone 5C:

We never had an objective to sell a low-cost phone. Our primary objective is to sell a great phone and provide a great experience, and we figured out a way to do it at a lower cost. Therefore, we can pass that on. And we figured out a way to sell 4S at substantially less than we were selling it for before, and we’re passing it on. So we think there will be a lot more people in our tent, and we can really serve a lot more people. And that feels good.

The iPhone 5C is fascinating to me because nearly everyone—including John Gruber—got it wrong: it isn’t a “cheap” iPhone. Rather, it’s something both more obvious and more surprising.

Implicit in the idea that Apple should release a cheaper iPhone is that it would be a secondary model for people who want an affordable prepaid iPhone and for international markets; that is, the assumption was that the main iPhone line—this year, the iPhone 5S—would remain the mainstream iPhone. That isn’t what Apple is doing with the iPhone 5C.

Instead, Apple has taken the strategy they’ve followed since releasing the iPhone 4—take last year’s model and make it available at $99—and created a distinct product from it, and made it the mainstream iPhone.

Rather than move the iPhone downmarket with the iPhone 5C, Apple moved the “regular” iPhone—this year, the iPhone 5S—upmarket to become the pro version, and established the iPhone 5C as the “regular” iPhone. The iPhone 5C is now the iPhone that really is good enough for everyone. The A6 processor is fast, so is LTE, and the iPhone 5’s camera is very, very good. The colors lend it a feeling of accessibility, too; it feels less serious and more fun than the iPhone 5’s aluminum design, and the colors allow for a greater sense of personalization and whimsy. (The cases only amplify that, misplaced circles aside.) It’s a very good phone at a reasonable $99 price point, and it’s going to look much better in the store to potential customers than last year’s iPhone model did.1

Apple’s marketing certainly seems to be trumpeting this, too. Apple’s home page features the iPhone 5C, not the 5S, and it plays heavily on the 5C’s colors. They featured an iPhone 5C ad, not one for the 5S. Tim Cook and Phil Schiller referred to the iPhone 5S as Apple’s most “forward-looking” iPhone yet. Apple is positioning the iPhone 5C as its iPhone for everyone, and the iPhone 5S as the iPhone for people who want the best.

That makes some sense on the face of it; it allows Apple to sell a “new” iPhone at $99 with 16GB of storage, but with lower cost of goods sold, which means they can maintain their margin. It may also allow Apple to push the envelope a bit more at the top end because they no longer need to manufacture enough iPhone 5Ss to satisfy nearly everyone purchasing a new iPhone at launch. But if the iPhone is under mortal threat from low-end, commodity Android-powered smartphones, then this decision seems bizarre. It won’t compete with those devices. The iPhone 5C is cheaper, but it’s not much cheaper.

But it starts to make a lot of sense if you think that smartphones aren’t so far along that low-end, cheap models are good enough compared to the iPhone. If Apple can still provide superior hardware and software that, combined, make for a genuinely better device—one whose difference is palpable to regular customers—then Apple has no need to bloody itself in the low-end washing machine.

And that’s exactly what Apple seems to think. Tim Cook explains what he thinks makes Apple so special, and what makes this strategy possible:

You look at innovation like the iPhone’s camera and the detail that went into the camera. Most people hear the word camera, and they think of hardware. And hardware is really important to it, you know? With the stuff we did with the flash on this. But it’s software, and it’s the silicon—I mean, it’s everything.

So the way I think about Apple is that the magic of this place really comes up at its best when hardware, software, and services come together. And it’s sort of the intersection of those things is where things get incredibly magical. So facilitating that to happen and getting the collaboration level for that to happen is the magic here.

And one of my proudest moments is when other people see that. They don’t know that they’re seeing that, and that’s also the beauty. They don’t have to do it. But look at these (gesturing to iPhones). These are perfect examples where the hardware and the software and the service begin to blend. In some ways you don’t tell one from the other.

The iPhone’s camera is the perfect example of what Cook is arguing. It—a cellphone camera!—is now so good that many people have nearly no need for a dedicated point-and-shoot camera. This is only true, though, because Apple has focused on developing the camera in ways that can’t be captured well on a specification sheet but really do make for a better camera. Rather than boost the sensor’s megapixel count, Apple has kept it steady at 8 megapixels for three iPhone models and instead has increased the sensor’s size. They’ve focused on doing rather incredible things with the iPhone’s Image Signal Processor to capture, and choose, better photos. While these things don’t translate well into selling points for cell-carrier sales associates, they do make for a truly better camera, and customers notice the difference. As a result, the iPhone feels like a device in a class of its own.

The obvious choice was to make a more affordable iPhone. I don’t think Apple is religiously opposed to making a cheaper iPhone, but they will only do so if they can make a convincing product. What Cook is saying is that making truly good products comes first. Eventually, I believe, Apple will do exactly that. That shouldn’t be a surprise; the iPhone 5C is highly reminiscent of my first Apple product, and one of my favorite devices ever: the iPod Mini. The iPod Mini had less storage than even the third-generation iPod (4GB versus 10GB), and wasn’t that much cheaper, either ($249 versus $299). Critics at the time were perplexed; if Apple was going to make a lower-end iPod to compete with more affordable devices, the iPod Mini certainly wasn’t it.

But it didn’t matter, because it was a damned good product. For me (a high school student at the time), the lower price finally made it attainable, and the colors were fun in a way the regular iPod never was. The iPod Mini was incredibly successful, and it wasn’t the end: Apple replaced it with the iPod Nano in 2005 at lower prices, and had introduced the iPod Shuffle—a completely different kind of music player—earlier that year at even lower prices.

I think the iPhone will follow precisely the same path. That is, I believe Apple will build some kind of “iPhone” product for the low-end eventually, but it may not look like an iPhone at all.2

In that sense, what Apple did was incredibly obvious: it’s what they’ve been doing since Steve Jobs returned to Apple. They don’t identify price points and product attributes and then create a product to fill them, as most companies do. They create genuinely good, convincing products that solve real needs for people first.

If you’ve been concerned about where Apple is going under Tim Cook, this should be a sign that there’s nothing to be concerned about. Apple’s unrelenting focus on making truly great products is not only still there, but seems to be reinvigorated under Cook’s new management team.

There have been a lot of headlines lately with some variation of “Is Innovation Finished At Apple?” I believe the best may still be ahead of Apple.

  1. Yes, technically, it’s almost identical to last year’s iPhone 5. But for people looking at several phones, a colorful iPhone is going to be a lot more exciting than the iPhone you’ve seen everywhere for a year.
  2. This may be where Apple’s wrist device fits in.
September 24th, 2013

Designing Basil’s New Timers

With iOS 7, I am releasing a large update to Basil. There’s a lot to talk about, and I’ll do so more this week, but I wanted to discuss the process around designing one of the main features: recipe timers.

When I released the first version of Basil, it would turn any time mentioned in a recipe’s directions (the “15 minutes” in “Simmer for 15 minutes on low heat…”) into a button. When pressed, a timer would slide up from the bottom of the screen. It was one of my favorite features because it made cooking a little easier in an immediately understandable way.

As soon as Basil hit the App Store, customers pointed out two large limitations. First, if you started a timer and switched to another application, it froze the timer. That’s not good, since switching to Safari to do things like look up how to properly slice a carrot into matchsticks or a substitute for masa is pretty typical in the kitchen. Second, Basil only allowed you to have a single timer running at a time, which made it difficult to time, say, two things cooking on the stove.

I’m happy to announce that the new version of Basil coming out this week addresses both of those limitations. But that’s not what I want to talk about.

Since I’ve been aware of these limitations since just after Basil’s release, you may be wondering why I waited until now to address them. And you should be; they’re Basil’s biggest usability issues. You may especially be wondering why it’s taken so long, since these limitations aren’t exactly a technically Hard Problem to solve.

It took this long because handling multiple timers turned out to be a fairly challenging design problem to solve while retaining Basil’s overall simplicity. Of course, I could have simply extended Basil’s current timer concept and just shown multiple timers across the bottom of the screen. It would look something like this:

[Image: Timers]

That would have worked, but there are several serious problems with it. First, it magnifies an issue with the original design: the timers cover up a portion of the direction steps at the bottom of the screen. Second, with multiple timer views across the bottom of the screen, it’s also highly distracting. Third, it’s not at all clear which direction step each timer is associated with—which is a very bad thing when you’re cooking multiple things at once and forget which timer is for what.

So that concept wouldn’t work. The next route I briefly considered was using the same basic concept, but pushing the recipe view up and putting the timers underneath it. That would solve the first problem, but not problems two and three, and it would only solve the first problem by reducing the recipe’s screen space. That didn’t seem like a viable solution, either.

The third direction I considered was creating a timer “manager” of sorts that would only be visible when the user had opened a timer. If there were any open timers, Basil would show a little circle in the bottom-right of the view indicating how many active timers there were. Tapping it would open a view across the screen showing what timers were open for each recipe. It would be like a global look at all timers for your recipes, and you could pause or stop any of them, from anywhere in the application. It would look something like this (in sketch form, anyway):

[Image: Timer Manager]

There are some advantages to this approach. It would solve problems one and two, since it would only be on screen when you want to glance at your timers. It would also provide a lot of power: you could see all of your recipes’ timers from anywhere within Basil, so you could quickly check on the time for your beans on the stove while looking at your famous burger recipe to get them ready for the grill.

But as you can see from the illustration, it only adds that power by introducing enormous complexity. Now you have a button/indicator that appears in the corner of the screen when you open a timer, and a view that could literally show tens of timers at a time—which would mean that, for it to be useful, you would have to clear any timers you opened but didn’t end up using. And even when you manage it correctly, it’s still confusingly complex. There’s a lot going on in one view to comprehend when you’re trying to cook.

I dismissed that design path because it would effectively destroy the timer feature’s purpose by introducing complexity. If it’s too much to deal with, you won’t use it, and that’s especially true when it’s for an application that won’t be your sole focus most of the time. Implementing the timer manager would technically solve the original timer implementation’s limitations, and technically make it much more powerful, but substantively it would make the feature completely useless. And worse, it would water down Basil’s core focus: making things simple so you can focus on cooking.

It was frustrating to realize how wrong the timer manager design was. I’d spent a lot of time and brainpower heading down that path, and it was a dead end. What it forced me to do, though, was think about the feature’s purpose again with a fresh mind. Instead of thinking about how to make Basil’s timer feature—something bolted on top of the recipe view—more powerful, I thought about how I could make it simpler.

In Basil’s original design, I thought of the timers as a digital equivalent to an actual kitchen timer—something separate from the recipe itself that would slide on and off screen as needed. But then I realized that distinction didn’t make sense; a timer is conceptually tied to the direction step it comes from. If step two says to sauté onions for five minutes, that five-minute timer should be tied to step two. So why not just put the timer in the direction step itself? Like this:

[Image: DirectionTimer]
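As a rough sketch of that idea (hypothetical types, and not Basil's actual code), the timer simply becomes part of the direction step's own model, so whatever view renders the step can render its timer inline:

```swift
import Foundation

// Hypothetical model, for illustration only.
struct DirectionStep {
    let text: String                 // e.g. "Sauté the onions for 5 minutes."
    let timerDuration: TimeInterval? // parsed from the text; nil if no time is mentioned
    var timerEndDate: Date?          // set when the user starts this step's timer

    mutating func startTimer() {
        guard let duration = timerDuration else { return }
        timerEndDate = Date().addingTimeInterval(duration)
    }

    // Deriving the remaining time from an end date, rather than counting ticks,
    // also keeps a running timer accurate while the app is in the background.
    var secondsRemaining: TimeInterval? {
        guard let end = timerEndDate else { return nil }
        return max(0, end.timeIntervalSinceNow)
    }
}
```

Because each step owns its own optional timer, multiple simultaneous timers fall out of the model naturally: any step whose timer is running simply shows it.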

While this doesn’t achieve the same power as the timer manager, it not only solves problems one and two, but it also solves problem three by directly tying the timer to its direction step. There’s no question which step a timer belongs to when you glance at it, and there’s no extra text to read to indicate it, either.

This design path both simplifies the recipe view visually—which is highly desirable for cooking—and naturally accommodates multiple timers. I’m extremely proud that it makes a more powerful feature possible by simplifying the design concept. Hopefully, if I’ve done my job, none of this will ever occur to the user. It should seem obvious, and they should never have to think about it.

I wanted to step through some of the design process and my thinking for this feature because often when we use applications, how an interface is conceptually designed seems obvious, and implementing new features seems like it should be incredibly easy. Designing interfaces, though, is never easy, and while much of it may seem obvious, the details that only become apparent once you’ve thought through every part of it are the ones that cause a design to fail.

September 16th, 2013

No, Mr. President. Snowden is a Patriot.

Earlier this month, in a press conference where he said that he supported “greater oversight, greater transparency, and constraints” on the government’s digital surveillance programs, President Obama stated that he doesn’t believe Snowden is a patriot:

No, I don’t think Mr. Snowden was a patriot. As I said in my opening remarks, I called for a thorough review of our surveillance operations before Mr. Snowden made these leaks.

My preference — and I think the American people’s preference — would have been for a lawful, orderly examination of these laws, a thoughtful fact-based debate that would then lead us to a better place. Because I never made claims that all the surveillance technologies that have developed since the time some of these laws had been put in place somehow didn’t require potentially some additional reforms. That’s exactly what I called for.

Apparently, for Obama, calling for a “thorough review” of government surveillance absolves him of any responsibility for its current state, or for pushing for—and getting—a reform-free renewal of the FISA law in 2012.

Obama said Snowden should have used authorized channels to question the government’s actions, and that he would have been protected by a presidential policy directive Obama issued. He said this even though his directive doesn’t offer protection to contractors like Snowden, nor were its procedures actually in effect in June 2013 when Snowden released classified documents.

He said this while knowing that his administration prosecuted an NSA employee, Thomas Drake, who did precisely what Obama says whistleblowers should do: he raised his concerns through authorized channels within the government. After later leaking non-classified information to a reporter, Drake was charged with several felonies that could have resulted in 35 years in prison, and the federal government attempted to coerce him into helping them prosecute other whistleblowers.

So, then, Snowden had little choice but to do what he did. He saw that the government was violating the privacy of Americans on a massive scale, and that it was actively hiding the nature of these programs and lying to the public about them. Following proper “procedure” would have resulted in his persecution, and likely wouldn’t have led to the information being disseminated so widely, so he released it to journalists.

His choice—which came at incredible personal cost to him (losing his job, home, family and the right to live in his own country)—is what’s allowed the public to have any knowledge at all of what our government is doing in our name, and to have anything resembling an “open debate and democratic process” with regard to this issue. The actions of a man Obama says isn’t a patriot.

This administration has worked very hard to preserve the government’s surveillance and to prevent the public from understanding its actual extent. Obama’s calls for a “thorough review” and “greater oversight, greater transparency” are merely words. Words that come too late, and words that contradict his administration’s actions. Snowden’s actions are what have allowed us to know what the government is doing.

At great risk to himself, he helped expose the government’s surveillance programs so the public could know what was being done to us in our name by our elected leaders. If that isn’t patriotic, what is?

August 20th, 2013

“Yeah, That Feature Should be Easy to Do”

Today, Basil received a nice 1.6 update. This update brings a really obvious, simple feature that many other recipe applications include: ingredient scaling. It’s really convenient while in the kitchen to be able to adjust a recipe’s yield according to your needs (the amount of jambalaya you need for two on a quiet weeknight is a little different than the amount you need to make while having guests), and it’s conceptually very simple.

Except it’s not so simple.

It sounds like it, of course; when you think of doing ingredient scaling, it’s just taking an amount at the beginning of a line (“1 item,” “2 tablespoons”) and scaling it by some factor—cutting it in thirds, doubling it, whatever. That’s the approach most applications take for ingredient scaling: they assume all ingredients are phrased such that there is an amount at the beginning of the line and then a food item. Based on that assumption, they identify a number at the beginning of the ingredient line and scale it by whatever scale the user selects. In the basic cases, like “2 cups white sugar”, this works OK. Not great, but OK.
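Here is a minimal sketch of that naive approach; the function name and formatting are my own, not Basil’s code, and the next paragraph walks through where the assumption breaks down.

    import Foundation

    // Naive scaling: find a number at the start of the line and multiply it.
    // This illustrates the common assumption, not Basil's implementation.
    func naiveScale(_ ingredient: String, by factor: Double) -> String {
        let pattern = "^([0-9]+(\\.[0-9]+)?)"
        guard let range = ingredient.range(of: pattern, options: .regularExpression),
              let amount = Double(ingredient[range]) else {
            return ingredient // no leading number; leave the line untouched
        }
        let scaled = amount * factor
        let formatted = scaled == scaled.rounded() ? String(Int(scaled)) : String(scaled)
        return ingredient.replacingCharacters(in: range, with: formatted)
    }

    print(naiveScale("2 cups white sugar", by: 2))
    // "4 cups white sugar": the basic case works
    print(naiveScale("2 medium cloves garlic (about 1 teaspoon)", by: 2))
    // "4 medium cloves garlic (about 1 teaspoon)": the parenthetical amount never changes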

But the problem is that not all ingredient lines are the basic case, because there are many ways to express things. What about ingredients like “1 (12 ounce) box of pasta” versus “2 medium cloves garlic (about 1 teaspoon)”? In the first case, the parenthetical measurement shouldn’t be adjusted, but in the second, it should be. Assuming you only have to adjust the amount at the beginning of the line makes the second case very confusing while cooking: after I double the recipe, do I use 4 cloves of garlic, or 1 teaspoon? This assumption—made because parsing imprecise language like this is difficult—actually makes cooking harder for the user, because they have to remember how their application fails at correctly scaling ingredients.

A new feature for cooking isn’t helpful if using it actually makes it harder to cook. There are many other cases; what about scaling amounts expressed as words, like “half”? Or what about when you cut “1/2 cup” in fourths and it tells you to use “1/8 cup” rather than “2 tablespoons?”

Scaling ingredients isn’t useful if it makes you think about these things while cooking or doesn’t work for some ingredients. It’s a distraction.

I avoided building a scaling feature for Basil for that reason: I didn’t want to add more noise if it had all of these caveats. But I’m happy to announce that Basil now has ingredient scaling, and it does its best to do the right thing in all of these cases. It handles the cases discussed above, and many others, too. It will do its best to present ingredient amounts in a way that makes sense (no “1/9 cup sugar” or similarly bizarre things).
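As one example of the kind of presentation logic involved, here is a small sketch that converts awkwardly small cup amounts into tablespoons or teaspoons, so that 1/2 cup cut in fourths reads as “2 tablespoons” rather than “1/8 cup.” The thresholds and conversions are illustrative assumptions, not Basil’s actual rules, and proper fraction display (“1/4 cup” rather than “0.25 cups”) is omitted.

    // Illustrative only: present small cup amounts in tablespoons or teaspoons.
    // 1 cup = 16 tablespoons, 1 tablespoon = 3 teaspoons.
    func friendlyCupAmount(_ cups: Double) -> String {
        func format(_ value: Double) -> String {
            return value == value.rounded() ? String(Int(value)) : String(value)
        }
        if cups >= 0.25 {               // a quarter cup and up reads fine as cups
            return "\(format(cups)) cup" + (cups == 1 ? "" : "s")
        }
        let tablespoons = cups * 16
        if tablespoons >= 1 {
            return "\(format(tablespoons)) tablespoon" + (tablespoons == 1 ? "" : "s")
        }
        let teaspoons = tablespoons * 3
        return "\(format(teaspoons)) teaspoon" + (teaspoons == 1 ? "" : "s")
    }

    print(friendlyCupAmount(0.5 * 0.25))  // "2 tablespoons": 1/2 cup cut in fourths
    print(friendlyCupAmount(0.5 * 0.5))   // "0.25 cups": 1/4 cup still reads fine as cups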

It isn’t perfect. There are probably cases I’ve missed, or cases it’s not handling properly, but I am committed to addressing those.

Working on ingredient scaling, though, reminded me that problems which seem exceedingly simple and easy to solve on the surface are usually icebergs waiting to sink you. They may look simple, but they tend to conceal a great deal of complexity beneath the surface that only becomes obvious once you’ve thought through the problem. In my experience, these “yeah, that should be easy to do” features have, more often than not, been the hardest problems to solve.

This experience is another lesson that while users like receiving new features and they’re exciting to announce, over the longer term users only want a new feature if it improves their net experience in some way. If it ends up not being useful in many cases, or adds complexity to the process of using your application, then that feature could make your application less useful than it was before.

August 6th, 2013

Fitbit Flex

At the beginning of June, I began wearing a Fitbit Flex. I decided to purchase one for two reasons. First, wearable computing is growing very quickly and becoming increasingly interesting. Second, the idea of measuring (a few elements of) my health on a daily basis is fascinating to me, so I thought it might be beneficial to do so.

The Flex has become a much larger part of my daily routine than I thought it would. If you are unfamiliar with it, the Flex is a relatively unassuming wristband device that tracks how many steps you take and your movement while you sleep, and syncs that data to a companion iPhone application over Bluetooth. The application shows you (approximately) how many steps you took each day, what (approximate) distance you walked, and (approximately) how many calories you’ve burned. It also lets you log meals.

Typically, when I begin using some kind of system that’s supposed to improve my life but requires constant input from me, I use it for a while when I’m excited about it, until one day something gets in the way (life) or I just forget, and from then on I stop using it altogether. To-do applications are the best example of this; to be useful, they require constant user input. As a result, once the user fails to enter a new task, the motivation to continue using the application begins to disappear.

I assumed the Flex and meal logging would follow that same pattern—I would use it for a couple weeks because I’d still be excited about a new piece of technology, but once that newness wore off and once I forgot to log a day’s meals, I would stop paying attention to it. And then it would be another device in a drawer.

After nearly two months, though, I’m still using it. And not just parts of it, like step counts—I’m using meal logging and sleep tracking as well.

I think that’s because the Flex provides a certain amount of information without any effort on my part. As long as I wear it, I can see how much I’ve moved around each day, and a (good enough) approximation of how many calories I’ve burned. That’s quite informative on its own. Actual data (over a fairly long period of time) makes it much clearer to me that I need a sustained plan for getting in better shape, and crucially, it’s also much more rewarding on days when I am quite active. Seeing how many miles I’ve moved, and feeling the pleasant little buzz the Flex makes when I cross my daily goal, is surprisingly powerful. It’s certainly more powerful than the vague sense I had before that I wasn’t active enough.

As a result of that “free” information, I have a large motivation to also log my meals, because doing so will show me my (rough) caloric input versus my (rough) caloric output. We all know that certain foods are very high in calories, carbohydrates and fats, but it’s an amazing thing when you can get a fairly good idea of how many calories you’ve already eaten for the day, how many you’ve burned, and what that piece of chocolate cake is going to do. Suddenly, there’s a higher internal cost to eating badly, because you can see it all laid out for you.

But interestingly, logging my meals—something I’ve gotten in the habit of doing for the above reasons—has had a more subtle effect, too: eating well consistently is much more satisfying than it otherwise would be. Each time I check in that I had fruit or vegetables rather than chips or french fries, it’s gratifying. It makes it a bit of an accomplishment. I didn’t expect that at all, but at least for me, there’s absolutely a feeling of accomplishment that results from consistently eating and logging healthier meals and snacks.

Because I now have immediately accessible information about how much I exercise and what I eat, it’s given me much more motivation to plan my meals and exercise, and to stick with it. Before, it was fairly easy to skip out on a day’s exercise (which turned into a few days, and then weeks…) or to shrug off a day of poor meal choices as a one-time fluke (even if it wasn’t), because I could. It’s much harder for me to do that, though, when the information is right in front of my face. That’s important.

What’s important, too, is that it hasn’t turned me into a calorie-counting, joyless automaton. It hasn’t turned eating into something that’s purely for providing energy and nutrients. I don’t always avoid large meals or dessert because it might bump me over my calorie target for the day. In fact, it’s been liberating, because it’s helped me impose a bit of discipline in my regular eating, so having an occasional large meal or a dessert doesn’t feel shameful at all—it’s something that I can enjoy without worrying that I’m eating terribly. I know I’m consistently eating well, so occasional indulgence won’t hurt.

It’s interesting how powerful a little information can be.

July 23rd, 2013

Apple’s New Beginning

At WWDC this year, Apple announced the most dramatic update to iOS since iOS 2 introduced the App Store and SDK for developers. The update is also the most controversial.

In 2012, Tim Cook fired Scott Forstall, Apple’s senior vice-president (SVP) of iOS, and distributed his responsibilities among Apple’s executive team. Craig Federighi became SVP of software, unifying the iOS and OS X teams under one leader. Eddy Cue received the title SVP of “Internet Software and Services,” unifying Apple’s iTunes, iBooks and App stores, iCloud, Maps and Siri. And in the shake-up, Jonathan Ive assumed responsibility for Human Interface design in addition to hardware design.

At the time, Adam Lashinsky reported that Forstall refused to sign an apology for iOS 6’s Maps release, and that this sealed his fate. While his refusal may have been what precipitated his firing, Lashinsky says that Forstall didn’t get along with Ive. I’ve heard similar things as well, and that those difficulties were not limited to Ive.

I don’t think Cook decided to fire Forstall simply because he didn’t get along with others in Apple’s management team, or because he refused to take public responsibility for Maps’s issues. Rather, I believe it was a result of Cook re-organizing Apple’s management structure to reflect his becoming CEO. Cook is not Steve Jobs, and he does not pretend to be, so he decided to change Apple’s structure to reflect that.

Jobs fulfilled a rather unique role as CEO of Apple. Rather than oversee and manage the work of others, Jobs was intimately involved in product and product design decisions. No detail was too small for Jobs’s attention. Jobs both originated product ideas and took part in iterating on them, but more importantly, Jobs acted as a filter for Apple. All product decisions ultimately passed through him; he approved new products and new directions, and vetoed them, too. As a result of this role, collaboration among his lieutenants and teams wasn’t as important; indeed, Jobs was known to foster competition and even conflict among individuals and teams to try to elicit their very best work, and then he would choose what he saw as the superior work and direction for Apple.

Cook, as far as I know, doesn’t scrutinize each pixel of an application’s design. He doesn’t have Jobs’s ability to understand what makes for a good product. Jobs was one-of-a-kind, and Cook recognizes that. Recognizing that, however, means that he couldn’t continue Jobs’s management style. Instead, Cook needs to rely on his management team to replace Jobs’s role. Each member must take absolute responsibility for their area of focus and must be incredibly talented at managing it. Most importantly, though, because Apple no longer has the singular filter that all larger product decisions pass through, that management team must work together. Apple could withstand conflict and islands—even benefit from them—with Jobs because each area ultimately ran through him, and because he directed each area. Since Cook can’t fill that role, he needs supremely talented people in charge of each area working with each other to set Apple’s direction. Jobs’s Apple could feed off of discord, but Cook’s Apple must feed off of collaboration.

In Apple’s introduction video for iOS 7, Jonathan Ive says that “We see iOS 7 as defining an important direction and in many ways, a beginning.” While Ive may have meant they saw it as a new beginning for iOS, iOS 7 also marks the beginning of Apple’s new management structure.

When Cook fired Forstall last year, it wasn’t clear what it meant or whether it was for the better. Embarrassingly, Cook hired John Browett to head retail in April 2012 and fired him in October along with Forstall. One way to read that—and many did—was that Cook’s leadership was failing; he had hired an obviously bad fit for retail and was forced to get rid of him in six months. In that light, Forstall’s firing and Apple’s management restructuring looked like it could be the result of a struggling management team.

Until WWDC this year, it wasn’t clear whether that view was correct, or whether it was a part of Cook’s reorganizing Apple to work best under new leadership. Today, though, I think it’s clearly the latter view that was correct.

With that out of the way, I believe that this year’s keynote was meant to establish the foundation for Tim Cook’s Apple. In episode 19 of the Accidental Tech Podcast, Marco Arment, John Siracusa and Casey Liss discuss the keynote introduction video, which explains Apple’s motivation. Siracusa says that while he liked the video, he thinks that it suggested they were about to introduce something groundbreaking, and that OS X Mavericks, the new Mac Pro and iOS 7 didn’t live up to that. Siracusa might be right, but I think he misses its intent. This wasn’t meant to congratulate themselves for being great; rather, it was meant to affirm Apple’s motivation for what they do. Along with their “Our Signature” ad, I think they are the equivalent of the “Think Different” campaign for the Cook era.

Famously, Jobs said that the people at Apple shouldn’t try to make decisions by asking themselves what he would have done. Instead, he said, they should just do what’s right. Clearly, Cook took that to heart. This is Cook’s Apple, and they are not constraining themselves by what feels Jobs-like. Cook hasn’t confused the trappings of Jobs’s Apple—how Jobs managed the company—for its heart: an irrepressible desire to make insanely great products that improve people’s lives and give them joy.

Apple, then, has changed significantly since 2011. Things are quite different at the top than they were then, and to my eyes, Apple seems more open to the world than it’s ever been in important ways, too. But those changes have all been made so that Apple can continue doing what it always has. This may be Cook’s Apple, but the core is just as it’s always been.

July 9th, 2013

Spying in the U.S.A.

I don’t have much time lately, so this piece will be shorter than I intended, but I want to write briefly about the NSA’s spying programs.

The NSA is collecting, it seems, phone records for all U.S. citizens. They are collecting “metadata” on our phone calls—that is, what number a subscriber called, where they called from, and how long the call lasted. The Obama administration tried to minimize this by claiming that the information collected did not include the subscriber’s identity or the conversations themselves.

That’s brazenly dishonest and preys on the public’s ignorance. While the program may not directly collect the subscriber’s identity, or the identity of the person they called (maybe! we really have no way to know, since the program is classified and this statement came from an anonymous administration official), it’s trivial for the government to look up who that phone number belongs to. Moreover, the information they are collecting is arguably more important than the conversations themselves, since it provides a very detailed picture of where a person goes, who they’re in contact with and at what times. The only way it could be reassuring that the NSA is not collecting identities and recording conversations is if you don’t understand how powerful a phone call’s metadata is, and how easy it is to look up who owns a phone number. This administration’s attempt to minimize it is, therefore, absolutely vile and reprehensible. It is not only dishonest, but disturbing. They are trying to use the public’s ignorance of the issue, which is a result of their tireless effort to keep these programs a secret, against us to prevent us from coming to an accurate understanding of precisely what the government is doing.

That is as worrying to me as the programs themselves. This administration has pursued leakers with a zeal and ferocity that this country has never seen before. President Obama claims that he does so in order to protect the lives of U.S. troops and intelligence agents in the field, but the administration’s response to these leaks shows another purpose: they are attempting to batten down the hatches to prevent the public from gaining an understanding of precisely what the government is doing to us in our name.

The end result is this: we have a government that collects all of our phone records, has access to nearly all of our personal, private online communications, and that not only refuses to tell us what they are doing, but attempts to cloud the issue and mislead us. They say that there are ample rules and safeguards in place to ensure this information is not abused. Bullshit. The only court overseeing it is a secret court whose decisions we cannot see, and who provides no real direct oversight whatever. And even if that were the case—even if they were using this information correctly and guarding it jealously from abuse—recording our every communication is inherently an abuse of our rights. It is an abrogation of the right to privacy when the government can record any and all communications we make. There is no privacy, just the government’s word that they won’t use their great power to harm us. This is not the beginning of a slippery slope; we have already fallen straight on our asses, slid down it and lie crumpled and battered at the bottom.

Our government spies on us all and lies to us about it. There is no gray here. It is wrong, and our government is doing us harm. Our government has convinced itself not just that it needs this power, but that it deserves this power, and that it deserves our trust. It does not deserve the power to collect any communications it pleases, nor does it deserve our trust that it will not use it to harm us.

In Obama’s 2009 Inaugural Address, he said:

The question we ask today is not whether our government is too big or too small, but whether it works — whether it helps families find jobs at a decent wage, care they can afford, a retirement that is dignified. Where the answer is yes, we intend to move forward. Where the answer is no, programs will end. And those of us who manage the public’s dollars will be held to account, to spend wisely, reform bad habits, and do our business in the light of day, because only then can we restore the vital trust between a people and their government.

If there is a tight summation of Obama’s governing philosophy, this is it—the question is not how big government is, but whether it works. Through his actions, Obama showed his philosophy to be false. He did not restore the “vital trust” between the people and our government—he undermined it by doing his business behind closed doors and by ruthlessly ensuring that anyone who dared leak information to the public about that business would be prosecuted. But worse, this shows why the size of government—that is, the extent of government’s power and its reach into our lives—is so vitally important. As the government’s power coalesces, and as it reaches further into our lives, the likelihood of abuse of power increases and the magnitude of its effect does, too. Worse, this philosophy justifies the very abuses we see now and the abuses that Obama criticized so much before running for president: if those in power feel they can use it without “unnecessarily” (the definition of which is, conveniently, left to the government) impinging on our rights while still protecting our security, then it is justified—because the standard used is effectiveness.

That philosophy provides for a dangerous level of confidence in all levels of government, and confidence in their motivations and ability to prevent abuse. As James Madison wrote, “If angels were to govern men, neither external nor internal controls on government would be necessary.” But humans are not angels, and power tends to alter people’s sense of right and wrong in unbecoming ways. That recognition implies that we should be skeptical of government and skeptical of increases in government power, because abuse is so easy a road to end up on. Obama dispenses with that skepticism and embraces a focus on how well it works. The NSA’s spying, the most espionage prosecutions of any president in history, and the government’s attempts at misleading us are the direct result of that.

There’s nothing else to be said than this: it is wrong, and it should not stand.

June 26th, 2013

On the Philosophy of Google Glass

Technology inherently alters how we live, what we do, how we think, how we die. Modern medicine has turned many diseases that, when contracted, brought death more often than not into things of the past or minor annoyances, and injuries that would have been debilitating or deadly into ones that can be overcome. The motion picture didn’t simply make plays something that could be watched over and over again by many different audiences, but created a mass entertainment accessible by nearly everyone, and changed how we learn about news events and conceive of them. The car allowed for suburban sprawl, and for people to travel where they please, whether for the evening or for extended trips across thousands of miles; and in so doing, the car changed what the American “ideal” is, spawned innumerable cultural groups centered around it, and helped construct a rite of passage into adulthood for teenagers. Getting your driver’s license and first car is the first step toward being an adult, but it’s also a right to freedom, to being able to go wherever you please on your own or with friends on nothing more than a moment’s notice.

So for as long as humans have been creating tools to influence the world around us, the tools—technology—have been influencing us, too. It’s an inevitable byproduct of using them, and this isn’t exactly a new insight. Smartphones and now “wearable” computers like Google Glass are merely the latest human technology to influence their creators.

But while they may only be the latest example of something that’s been happening for as long as humans have created tools, there is, I think, something very different about so-called wearable computers like Google Glass. They have the potential to integrate themselves so deeply into the user that over time, and as they develop further, there will be little reason to differentiate between the device and the user. Removing your Glass device will feel very much like losing a limb or sense—something that you’ve grown used to depending on and using is gone. Through this much deeper integration, these devices could fundamentally alter the human experience and what it means to be human.

That might sound alarmist, like science fiction, or—if you own a smartphone—just remind you of that small moment of dread, like something’s wrong, when you leave the house without your phone.

Wearable computing has much more potential than “wearable” implies. Instead, through overlays on our vision (or more direct connections with the brain, potentially), things like Google Glass can become another sensory input as well as an output. Google Glass already allows you to look something up on Google (“Google, how tall is Mt. Everest?”) and get directions without ever pulling the phone out of your pocket, or using your hands at all; you ask, and the information is spoken to you or overlaid at the top of your vision. It’ll notify you about your flight this afternoon or messages you receive, and you can reply to the message, too. You can snap a photo or video. All without using your hands, and it’s all—again—on top of your vision.

The ultimate goal is to form a direct connection between our brains and the web, and all that it entails. Google Glass is merely a first step toward that: a hack that hijacks our vision to provide input to our brains and hijacks our voice for control. A direct connection with the brain is obviously ideal; there are no “glasses” to wear, and no need to use voice to control it, which isn’t very efficient. In In the Plex, Steven Levy recounts a conversation he had with Larry Page and Sergey Brin in 2004:

Back in 2004, I asked Page and Brin what they saw as the future of Google search. “It will be included in people’s brains,” said Page. “When you think about something and don’t really know much about it, you will automatically get information.

“That’s true,” said Brin. “Ultimately I view Google as a way to augment your brain with the knowledge of the world. Right now you go into your computer and type a phrase, but you can imagine that it will be easier in the future, that you can have just devices you talk to, or you can have computers that pay attention to what’s going on around them and suggest useful information.

The web’s information will be our brain’s information, and the web’s services will be our brain’s tools. We would be able to immediately answer whatever question we have, or find whatever information we need. If you’re fixing your sink, instructions on how to do so (or maybe a video?) are just a thought away. In a few moments, you’ll be able to make fabulous chicken tikka masala. Humanity’s knowledge will have a direct pipe into our brains. And you’ll be able to do incredible things, too. You could snap a photo with a thought, send a message to someone or file away something you’ve come across to a note-taking service. You could control your home’s lights and television. You could… well, the list goes on.1

And, of course, you’ll be able to check Twitter and Facebook, and post to them, wherever you are and while doing whatever else.

I say all this because I don’t think there’s a significant difference between doing all of this through a Google Glass-like device or through some direct brain connection, as Page proposes. If they’re successful at their purpose, both will be quickly absorbed into our senses. Just as we’ve gotten used to being able to pull out our smartphones whenever we have a spare moment or need to settle some dispute or bit of trivia, we’ll reflexively ask Glass for the answer to a question, or to snap a photo, or to check the news real quick, or to look through our Facebook and Twitter streams, even at moments when we probably shouldn’t. And since the amount of effort it takes to do so will be so much smaller than it is with a smartphone (which is already terribly small), we will do all of it that much more frequently. No event will be complete without taking a photo and posting it to our social network of choice, because unless it’s documented and we’ve stuck it in everyone else’s stream, it didn’t really happen.

I don’t think that’s a positive, and that’s to say nothing of the social effects of having a web-connected camera and microphone strapped to our faces. (Dustin Curtis touches on it in his piece about his experience with Glass.) But what I find most troubling is the philosophy underlying Larry Page and Sergey Brin’s thoughts on devices like Glass. They say that Glass’s goal is to get technology “out of the way,” but that isn’t it. The idea is that we will all be better off if we’re always connected to the web, always on, with uninterrupted and instantaneous access to it and to humanity’s “knowledge.” The idea Page expresses is that if I can immediately learn about something I don’t know much about, I’ll be better off. I’ll be able to make smarter decisions and live a deeper, richer life by spending the time it would have taken to research and learn about something on more meaningful and substantive tasks.

I think, though, that is a terribly deluded and shallow understanding of what it means to “learn” about something. When we—humans—learn about something, we are not simply committing facts to our memory so we can recall them in the future. That’s a very tiny part of a much larger and much more important process. To “learn” about something is to study the information (when historical events occurred, what happened, etc), find connections between it and other things we’ve learned and experiences we’ve had, and to synthesize it into something greater—knowledge. Knowing, say, the Pythagorean Theorem in isolation isn’t of much use, but connecting it to your need to identify another object’s location suddenly makes it very useful. And more abstractly, knowing Roman and Greek history isn’t very useful all on its own, but being able to learn from it and apply its lessons to current political difficulties might prove very beneficial.

Synthesizing information into knowledge isn’t an instantaneous process because that’s not how we work. We form conclusions and connections between new information and other things we know by thinking through it and living it. Conveniently, and crucially, taking time to learn something or to answer our own question by poring over books and articles and conducting our own investigation allows us time to do that. We have little choice but to draw conclusions and form connections between what we’re looking at and what we already know or have seen before because our brains are working over the material for the answer we seek. We find knowledge when we engage our brains. And, moreover, we often stumble into things unintentionally while actually looking for something altogether unrelated. Things that end up becoming more important than what we were originally looking for in the first place.

Page’s idea—that we would be fundamentally better off if we had immediate access to all of humanity’s information—ignores that. It provides facts, but elides conclusions and connections. What’s worse, it starves us of opportunities to use our skill for critical thinking, and since it is a skill and is therefore something that must be developed and practiced, it starves us of the chance to develop it.

I find that troubling. Glass is not a technology that is designed to amplify our own innate abilities as humans or to make us better as humans, but rather one that acts as a crutch to lean on in place of exercising the very thing that makes us human. I don’t find that exciting. I find that disturbing.

This may all sound like so much hyperbole. After all, we’ve adapted just fine to prior new technologies, despite Luddite claims that it will destroy us. And, undoubtedly, we will adjust to this sort of thing, too, and the world will not come crashing down. I think, though, that this sort of thing—a more intimate connection between us and computers—is a path we are heading down, and since its more intimate nature also makes it more influential over us, I think we should deeply consider what it’s intended to accomplish for us and what might happen to us in the process.

Technology isn’t a force of nature that just happens. It’s something that we create, and so we should question why we are creating it. This has always been true, but I think it’s even more important now that we do so.

Technology, I think, should exist to improve our lives as humans, to magnify the good and minimize the bad, rather than change our nature or experience. That’s what I believe.

That’s what I believe, and you may disagree. I would suspect many do, especially those with more of a bent toward transhumanism. And that’s fine. But we should be having a much larger discussion about our technology’s intent than we are now, because it’s only increasing in importance.

  1. If the idea of a direct brain interface seems ridiculous, it shouldn’t; researchers have shown that monkeys can control a robot’s motion with their brain activity, and the means they use are relatively rudimentary. The brain appears to be able to adjust to and adapt to new sensory inputs.
May 31st, 2013