Earlier this month, Apple introduced iOS 9 with new search and Siri features. In iOS 9, users will be able to search for both specific activities and content contained in iOS applications. This means users should be able to search for “sunburn” to find information on treating sunburns provided by iOS medical applications, and tapping the result will open it directly in the application. Additionally, these features will allow users to reference whatever they are looking at in a reminder they create through Siri. That is, when looking at an Amazon product page, users will be able to tell Siri “Remind me to buy this tonight,” and Siri will add a reminder with the link included.
Prior to iOS 8, an application’s functionality and content were indivisible from the application itself. If a user was looking at a photo in their photo library and wanted to edit it using a more powerful editing application they had installed, they had to leave Photos, open the editing application, and find and open the photo again there. If the user needed to access a text document they had stored in Pages, they had to launch Pages. In iOS 8, Apple eliminated the redundancy in the former example through extensions, which allow developers to atomize their application’s functionality so users can invoke it outside the scope of the application itself.1
The latter example is still true in iOS 8: content is indivisible from the application itself. iOS 9, however, begins to decouple content and tasks from the application by making them searchable through what used to be called Spotlight on iOS but is now just Search.
The features around Search and Siri Reminders are genuinely useful. It is flexible and convenient to be able to move over to the resurrected Search page on the home screen and type in, say, “Denver” to find my flight or Airbnb reservation. What I find more interesting than the user-facing features, though, are the tools provided to developers to make this possible, and the direction that task and content search suggest iOS may be heading.
To allow iOS’s new Search feature to surface tasks and content contained within applications, developers must indicate to the system what within their application is content that should be surfaced, and what type of content it is (image, audio, event, etc.). Developers do much the same thing for tasks. Somewhat similarly, extensions indicate to the system what kind of content they can consume.
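To make that concrete, here is a minimal sketch of what indexing one piece of content looks like with the new Core Spotlight APIs (written in current Swift syntax; the identifiers and field values are hypothetical):

```swift
import CoreSpotlight
import MobileCoreServices

// A minimal sketch of indexing one piece of in-app content with Core Spotlight.
// The identifiers and field values here are hypothetical.
let attributes = CSSearchableItemAttributeSet(itemContentType: kUTTypeText as String)
attributes.title = "Treating Sunburn"
attributes.contentDescription = "First-aid steps for treating a sunburn."

let item = CSSearchableItem(uniqueIdentifier: "article-sunburn",
                            domainIdentifier: "articles",
                            attributeSet: attributes)

CSSearchableIndex.default().indexSearchableItems([item]) { error in
    if let error = error {
        print("Indexing failed: \(error.localizedDescription)")
    }
}
```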
This is referred to as “deep linking,” because it allows users to follow a “link” to somewhere deep within an application for some kind of task or content, exactly like clicking a link in Google to a news article within a website, as opposed to going to the website’s home page and moving through its hierarchy to the article. “Deep linking,” while apt, is also somewhat misleading, because this allows much more than just search. When developers update their applications to take advantage of Apple’s new APIs for identifying content and tasks to the system, they will be helping the system structure what data is on the user’s device, and what kind of data it is. The system will know what content is on a user’s device, what kind of content it is, and what kind of content applications provide: what photos, contacts, events (say, hotel reservations), and music the user has.
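The other half of the deep link is handling the launch when a user taps a result in Search. A sketch under the same assumptions; the openContent(withIdentifier:) helper is a hypothetical stand-in for an app’s own navigation:

```swift
import UIKit
import CoreSpotlight

// Hypothetical navigation helper; a real app would open the matching screen.
func openContent(withIdentifier identifier: String) { /* navigate to the item */ }

// In the app delegate: when the user taps a search result, the system
// launches the app and hands it the item's unique identifier.
func application(_ application: UIApplication,
                 continue userActivity: NSUserActivity,
                 restorationHandler: @escaping ([UIUserActivityRestoring]?) -> Void) -> Bool {
    if userActivity.activityType == CSSearchableItemActionType,
       let identifier = userActivity.userInfo?[CSSearchableItemActivityIdentifier] as? String {
        openContent(withIdentifier: identifier)
        return true
    }
    return false
}
```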
Using these tools, we could begin to construct an understanding of what the user is doing. Applications are indicating to the system what tasks the user is performing (editing a text document, browsing a web page, reading a book), as well as what kind of content they are interacting with. From this, we can make inferences about the user’s intent. If the user is reading a movie review in the New York Times application, they may want to see showtimes for that movie at a local theater. If the user is a student writing an essay about the Ming dynasty in China, they may want access to books they have read on the topic, or other directly relevant sources (and you can imagine such a tool being even more granular than “related to the Ming dynasty”). Apple is clearly moving in this direction in iOS 9 through what it calls “Proactive,” which notifies you when it is time to leave for an appointment, but there is the possibility of doing much more, and doing it across all applications on iOS.
Additionally, extensions could be the embryonic stage of application functions broken out from the application and its user interface shell: one-purpose utilities that take in some kind of content, transform it, and provide something else back. A Yelp “extension” (herein I will call them “utilities,” to distinguish between what an extension currently is and what I believe it could evolve into) could, for example, take in a location and food keywords, and provide back highly rated restaurants that match. A Fandango utility could similarly provide movie showtimes, or even allow the purchase of movie tickets. A Wikipedia utility could provide background information on any subject. And on and on.
In a remarkable piece titled Magic Ink, Bret Victor describes what he calls the “Information Ecosystem”: a platform where applications (what he calls “views”) indicate to the system some topic of interest to the user, and utilities (what he calls “translators”) take in some kind of content and transform it into something else. The platform’s job is to route these inputs to all applications and translators. It would provide some topic of interest inferred from the user; as I described above, this may be a text document where the user is writing about the Ming dynasty, or a movie review the user is reading in a web browser. Applications and translators can then consume these topics of interest, along with the information other utilities provide. The Fandango utility I described above could consume the movie review’s keywords, for example, and provide back to the platform movie showtimes in the area. The Wikipedia utility could consume the text document and provide back information on the Ming dynasty.
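To be clear, no such API exists; but to give the idea a concrete shape, here is a purely hypothetical sketch of what a translator might look like in Swift terms, using the Fandango example:

```swift
// Purely hypothetical: no such API exists. This just gives Victor's
// "translator" idea a concrete shape in Swift terms.

// A topic of interest inferred by the platform from what the user is doing.
struct Topic {
    let keywords: [String]   // e.g. ["Inside Out", "movie"] or ["Ming dynasty"]
}

// A translator declares what it consumes and produces so the platform
// can chain translators and route their output to interested applications.
protocol Translator {
    var consumes: String { get }   // e.g. "public.movie-review"
    var produces: String { get }   // e.g. "public.movie-showtimes"
    func translate(_ topic: Topic) -> [Any]
}

// A hypothetical Fandango translator: movie topics in, local showtimes out.
struct ShowtimesTranslator: Translator {
    let consumes = "public.movie-review"
    let produces = "public.movie-showtimes"

    func translate(_ topic: Topic) -> [Any] {
        // Look up showtimes for the movie named in the topic's keywords…
        return []
    }
}
```

The important part is the pair of declared content types: they are what would let the platform chain translators together without either side knowing about the other.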
What is important here is that the user intent inferred from what the user is doing and the specific content they are working with, together with the utilities described above, could be chained together and used by separate applications in ways that were not explicitly designed beforehand. Continuing the movie review case, while the user is reading a review of Inside Out in the New York Times application, they could invoke Fandango to find local showtimes and purchase tickets. This could occur either by opening the Fandango application, which would immediately display the relevant showtimes, or through Siri (“When is this playing?”). More interesting, one could imagine a new kind of topical research application that, upon noticing that the user is writing an essay related to the Ming dynasty, pulls up any number of relevant sources, from Wikipedia (using the Wikipedia utility) to online sources (papers, websites). Perhaps the user has read several books about the Ming dynasty within iBooks, and has highlighted them and added notes. If iBooks identifies that information to the system, such a research application could bring up not just the books, but the specific sections relevant to what the user is writing, and the passages they highlighted or annotated. Through the platform Victor describes, the research application could do so without being explicitly designed to interface with iBooks. As a result, the work the user has done in one application can flow into another application in a new form and for a new purpose.
To further illustrate what this may allow, I am going to stretch the above research application example. Imagine that a student is writing an essay on global warming in Pages on the iPad in the left split-view, and has the research application open on the right. As the user is writing, the text will be fed into a topic processor, and “global warming” will be identified as a topic of interest by iOS. Because earlier that week they had added a number of useful articles and papers to Instapaper from Safari, Instapaper will see “global warming” as a topic of interest, and serve up to the system all articles and papers related to the topic. Then, a science data utility the user had installed at the beginning of the semester would also take in “global warming” as a topic, and would offer data on the change in global temperature since the Industrial Revolution. The research application, open on the right side of the screen, will see the articles and papers brought forward by Instapaper and the temperature data provided by the science data utility, and make them immediately available. The application could group the papers and articles together as appropriate, and show some kind of preview of the temperature data, which could then be opened into a charting application (say, Numbers) to create a chart of the rise in temperatures to put in the essay. And the research application could adjust what it provides as the user writes, without them doing anything at all.
What we would have is the ability to do research in disparate applications, and have a third application organize that research for the user in a relevant manner. Incredibly, that application could also provide access to relevant historical data. All of this would be done without the research application needing to build in the ability to search the web and academic papers for certain topics (although it could, of course). Rather, the application is free to focus on organizing research in a meaningful and useful way in response to what the user is doing, and its developers would just need to design for content types, not very specific data formats coming from very specific sources.
Utilities, too, would not necessarily need to be installed with a traditional application, or “installed” at all. Because they are faceless functions, they could be listed and installed separately from applications themselves, and Apple could integrate certain utilities into the operating system to provide system-wide functionality without any work on the user’s part, much as Apple currently integrates Fandango for movie times and Yelp for restaurant data and reviews. Siri would obviously be a beneficiary of this, but all applications could be smarter and more powerful as a result.
Apple hasn’t built the Information Ecosystem in iOS 9. While iOS 9’s new search APIs allow developers to identify what type of content something is, we do not yet have more sophisticated types (like book notes and highlights), nor a system for declaring new types in a general way that all applications can see (like a “movie showtimes” type).2 Such a system will be integral to realizing what Victor describes, and is by no means a trivial problem. But the component parts are increasingly coming into existence. I don’t know if that is the direction Apple is heading, but it certainly *could be*, based on the last few years of iOS releases. What is clear, though, is that Apple is intent on inferring more about what the user is doing and their intent, and providing useful features with it. iOS 7 began passively remembering frequently visited locations and indicating how long it would take to get to, say, the user’s office in the morning. iOS 9 builds on that concept by notifying the user when they need to leave for an appointment to arrive on time, and by automatically starting a playlist the user likes when they get in the car. Small steps, but the direction of those steps is obvious.
Building the Information Ecosystem would go a long way toward expanding the power of computing: breaking applications–discrete and indivisible units of data and function–into their component parts, freeing that data to flow into other parts of the system, capturing user intent, and letting functionality augment other areas in unexpected ways.
I believe the Information Ecosystem ought to be the future of computing. I hope Apple is putting the blocks in place to build something like it.
Jupiter beckons in the distance, a small light, the greatest planet of all
I stare through the window, timeless, as the light slowly grows larger
I wonder what it will be like to see it with my own eyes
Swirls of orange and red and brown, a globe so large I can’t comprehend
The Jovian moons circling around the greatest planet of all, enraptured,
It is growing larger through the window
Through the window that separates me from the void,
Separates warmth and air and life from emptiness and death
This is what we have constructed
To ferry us across the great emptiness of space
It is larger still, I see color!
To see it with our own eyes
I see the moons!
To see if there is life beyond our little blue dot, so far away
To strike off into the unknown once again
To extend humanity beyond our home
I see it, I see it! I see!
But oh, this is the dream of a child
A great dream, but a dream
Remembered by an old man,
What could have been
The phone dominates your attention. For nearly every use, the phone has your undivided attention. Browsing the web, Twitter, Instagram, Snapchat, watching video, reading, messaging—all require focus on a screen that fills your vision, your primary attention, and generally some kind of interaction. Everything else, too, is always a home button or notification-tap away at all times.
Is that a shock when the phone is the single gateway to nearly everything? The PC is now for doing work, but the phone is for messaging, taking photos, sharing them, the web, Twitter, Facebook, finding places to go, getting directions there, and even making calls.
That is why, when we receive a message and pull out our phones to respond, we often descend into a muscle-memory check of our other iMessages, emails, and Twitter stream. We pull out our phone for one purpose, like responding to a message or checking our schedule, and end up spending several mindless minutes (or, if I am honest, more than “several minutes”) checking in on whatever it is. We find ourselves doing this even when we shouldn’t: while seeing friends and family, while out to dinner with them, while at home when we should be spending time together or doing other things.
I used “we” above because I think anyone with a smartphone, or anyone who knows people with them, can find truth in it to a greater or lesser extent.
My concern with wrist-worn “smartwatches,” starting with the Pebble, is that they appear to exist primarily to push the notifications we receive on our phones to our wrists. They seem designed to make dealing with phone calls, messages, and updates easier: seeing them, ignoring them, replying to them. They are there to make dealing with our phones more convenient. And in large part, that is how smartwatches have been designed and used. “It’s there so I don’t have to pull my phone out of my pocket.”
But that idea of what smartwatches are for, making it more convenient to deal with the flood of notifications and information our phones provide us, is unimaginative. I think what the smartwatch can do is make the phone unnecessary for many purposes, create new purposes altogether, and allow us to benefit from a wrist-sized screen’s limitations.
On September 9th, Apple introduced their long-awaited watch, appropriately named the Apple Watch (herein, “the Watch”). We won’t be able to fully understand what Apple has built until next year, but they did provide a fairly detailed look at the Watch and the software it runs.
It appears that, in contrast to Google’s approach with Android Wear (which is heavily focused on showing single bits of information or points of interaction on the screen, and relies on swiping between cards of data and interaction), Apple intends the Watch to run fairly sophisticated applications. The Watch retains the iPhone’s touch interface, but Apple has designed new means of interaction specific to a small screen. In addition to the tap, the Watch brings the “force touch,” which is used to bring up different options within applications (like, say, the shuffle and AirPlay buttons within the music application), and the “digital crown,” a repurposing of the traditional watch’s crown into a sort of scroll wheel for the Watch. Using the digital crown, users can zoom in and out of maps and scroll through lists with precision and without covering the small screen. And, most interestingly, Apple has replaced the familiar vibration alert of our phones with a light “tap” from the Watch to notify the user.
These interactions allow fairly sophisticated applications. You can not only search for locations around you but also zoom in and out of maps. You can scroll through your emails, messages, events, or music. You can control your Apple TV.
This subsumes many of the reasons we pull out our phones during the day. We can check our schedule, check a message when it’s received and send a quick reply, find a place to get a drink after dinner (and get directions there without having to walk while staring at a phone), ignore a phone call by placing a hand over the wrist, or put something on the Apple TV.
But what force touch and the digital crown will not do is make the Watch’s small screen as large as a phone’s. You can’t type out a reply to a message or email. You can’t browse the web for something. You can’t dig through a few months of your email to find a certain one. You can’t mindlessly swipe through Twitter (well, you could, but it’s going to be pretty difficult). That, though, is an advantage the Watch has over the phone. Because it is inherently limited, it has to be laser-focused on a single purpose, and while using it, you are limited to accomplishing that one thing. It’s a lot harder to lose yourself in a 1.5-inch screen than in a 4-plus-inch one.
That’s going to be one of the Watch’s primary purposes for existing: allowing us to do many of the things we do on our phones right now, but in a way that’s limited and, thus, less distracting. If you’re out to dinner and receive a message (and haven’t turned on Do Not Disturb), you’re going to be a lot less likely to spend a couple of minutes on a reply, and then on Instagram, if you’re checking and responding to it on the Watch. It just doesn’t work that way.
In that way, I think Apple has embraced the wrist-worn watch’s inherent limitations. Rather than try to work around them, they are using them. They’ve built new means of interaction (force touch, the digital crown, taptic feedback) that allow fairly sophisticated applications, but they didn’t use them to cram iOS in its entirety into the Watch.
What I think Apple is trying to do is build a new mode of personal computing on the wrist, molded by the wrist’s inherent limitations and the opportunities they create.
In his introduction to the Watch, Jony Ive ends with a statement of purpose of sorts. He says,
I think we are now at a compelling beginning, actually designing technology to be worn. To be truly personal.
That sounds like a platitude, but I think it defines what Apple is trying to do. Taptic feedback, which Dave Hamilton describes as feeling like someone tapping you on the wrist, is a much less intrusive and jolting way of getting a notification than a vibration against your leg or the terrible noise a phone makes on a table. More generally, so is focusing the Watch’s use on quick, single purposes.
What is interesting to me, though, is that they are using the Watch’s nature to do things in a more personal—human—way, and to do things that the phone can’t. When providing directions, the Watch shows them on the screen just as you would expect on a phone, but it also does something neat: when it’s time to turn, it will let you know using its taptic feedback, and it differentiates between left and right. As a result, there is no need to stare at your phone while walking somewhere and following directions.
They’ve also created a new kind of messaging. Traditionally, messages are words sent from one person to another using text or speech. Since messages are communication through words, something inherently mental or intellectual rather than emotional, they are divorced from emotion. We can try to communicate emotion through text or speech (emoticons serve exactly that purpose), but that emotion is always translated into words and then thought about by the recipient, rather than felt. In person, we can communicate emotion with our facial expressions, body gestures, and touch. There’s a reason hugging your partner before they leave on a long trip is so much more powerful than a text message saying you’ll miss them.
In a small way, Apple is using the Watch to create a new way to communicate that can capture some of that emotion. Because the Watch can effectively “tap” your wrist, others can tap out a pattern on their Watch, and it will re-create those taps on your wrist, almost as if they were tapping you themselves. You could send a tap-tap to your partner’s wrist while they are away on a trip just to say that you’re thinking about them. Isn’t that so much more meaningful than a text message saying the same thing? Doesn’t it carry more emotion and resonance?
That’s what they mean by making technology more personal. It means making it more human.
The Watch is not about making it more convenient to deal with notifications and information sent to us. It’s not even about, as I described above, keeping your phone in your pocket more often (although that will be a result). The Watch is creating a new kind of computing on our wrists that will serve different purposes than the phone, the tablet, and the PC. The Watch is for quickly checking and responding to messages, checking your schedule, finding somewhere to go and getting directions there, helping you lead a more active (healthier) life, and enabling a more meaningful form of communication. And it will do all that without sucking our complete attention into it, like the phone, tablet, and PC do.
The Watch is for doing things with the world and people around us. Finding places to go, getting there, exercising, checking in at the airport, and sending more meaningful messages. Even notifying you of a new message (if you don’t have Do Not Disturb turned on) while out to dinner with family or friends serves this purpose, because if you have to see it, you can do so in a less disruptive way and get back to what you are doing—spending time with people important to you.
The Watch is a new kind of computing born of, and made better by, its limitations. And I can’t wait.
When I was growing up, I was fascinated by space. One of my earliest memories—and I know this is strange—is, when I was four or five years old, trying to grasp the concept of emptiness in space. I imagined the vast emptiness of space between galaxies, nothing but emptiness. I tried to imagine what that meant, but most of all, I tried to imagine what it would look like.
That question, what color empty space would be, rolled around my brain the most. I couldn’t shake it. I would be doing something–playing Nintendo, coloring, whatever–and that question would pop into my head again. What does “nothing” look like? First, I imagined that it would look black, the black of being deep in a forest at night. But that didn’t seem right, either; black is still “something.” And then, I remember, I realized I was thinking about a much worse question. I wasn’t trying to imagine what the emptiness of space would look like. I was trying to imagine what nothing would look like.
I have that memory, I think, because thinking about that sort of broke my brain. I couldn’t comprehend what nothing is.
That question, of course, leads down toward the central question of what our universe is and how it was created. I think that’s why space–the planets, stars, galaxies–so fascinated me then; it’s this thing so alien to our world, that dwarfs it on a scale incomprehensible to us, and yet it is us. We aren’t something held apart, separate from it, but intimately a part of it and its history.
Trying to understand the physics of our universe, its structure and its history, is also an attempt to understand ourselves. I think, at some gut level, I understood that as a kid.
I poured myself into learning about our solar system and galaxy. My parents’ Windows PC had Encarta installed, and I was enthralled. I spent countless hours reading everything I could find within Encarta (which, at the time, felt like a truly magical fount of knowledge) about Mercury, Venus, Mars, Jupiter, Saturn, Uranus, Neptune and Pluto. And when I exhausted that source, I asked for books about space, and I obsessed over them. They were windows into these incredible places, and I couldn’t believe that we were a part of such a wondrous universe.
Through elementary school, my love for space continued to blossom. Back then, the people of NASA were my heroes. To my eyes, they were the people designing and launching missions across our solar system so we could understand even more about it. Many of the photos of Jupiter, Saturn, Uranus, and Neptune that I was so enraptured by were taken by spacecraft designed, built, and launched by people at NASA. They were the people who had risked their lives to leave Earth and go to the Moon, to do something that most people up until just decades prior couldn’t even imagine as being possible. And they were the people exploring Mars with a little robotic rover called Sojourner that very moment.
They were my heroes because they were the people pushing us to explore our solar system, to learn what was out there and what came before us. I felt like I was living during a momentous time in the history of humanity, and that I would live to see advances as incredible as 1969’s Moon landing. There wasn’t a doubt in my mind.
That year, in 1997, I was nine years old. It’s been seventeen years.
Since then, we have indeed made great advances. In that time, we’ve sent three separate rovers to Mars, and we discovered that Mars certainly had liquid water on its surface long ago in its history. We landed a probe on the surface of Saturn’s moon Titan, which sent back these photos. We’ve discovered that our galaxy is teeming with solar systems.
All truly great things. But we are no closer today to landing humans on Mars than we were in 1997. In fact, we are no closer to putting humans back on the Moon today than we were in 1997.
Some people would argue that’s nothing to be sad about, because there isn’t anything to be gained by sending humans to Mars, or anywhere else. Sending humans outside Earth is incredibly expensive and offers us nothing that can’t be gained through robotic exploration.
Humanity has many urges, but our grandest and noblest is our constant curiosity. Throughout our history as a species, we have wondered what is over that hill, over that ridge, beyond the horizon, and, when we sat around our fires, what the lights in the sky are. In every age, someone has wondered, and because they wondered, they wandered beyond the border that marks where our knowledge of the world ends, and into the unknown. We never crossed mountains, deserts, plains, continents, and oceans because we did a return-on-investment analysis and decided there were economic benefits beyond the cost of doing so. We did so because we had to in order to survive, and because we had to know what was there. We were curious, so we stepped out of what we knew into certain danger.
And yet that tendency of ours to risk everything to learn what lies beyond everything we know is also integral to all of the progress we have made as a species. While we were working on rockets capable of leaving Earth’s atmosphere, it would hardly have been obvious what they would allow us to do. Would someone then have known that rocketry would let us place satellites into orbit, enabling worldwide communication, weather prediction, and the ability to locate yourself to within a few feet anywhere on Earth? The economic benefits that result from progress are hardly ever obvious beforehand.
But it is more than that. It isn’t just that exploration drives concrete economic benefits. We think in narratives. Since the Enlightenment and the Industrial Revolution, we have built a narrative of progress. With each year that passes, we feel that things improve. Our computers get faster, smaller, more capable; we develop new drugs and treatments for diseases and conditions that would once have been crippling or a death sentence; with each year, our lives improve. For a century and a half or so, that feeling hasn’t been too far from reality. But most especially, we have continued to do something that cuts to the very center of what it means to be human: we have explored. We have explored the most dangerous parts of Earth, we have explored our oceans, we have put humans into space, and humans have set foot on a foreign body. There is a reason that, when we think of our greatest achievements as a species, landing on the Moon comes to mind with ease. At a very deep level within us, exploring the unknown is tied up with what it means to progress.
As exciting and useful as it is to send probes to other planets and moons, it fails to capture our imagination in the same way that sending people does. The reason is that doing so–exploring the unknown ourselves–is such an incredible risk. What Buzz Aldrin, Neil Armstrong, and Michael Collins did in 1969 was unfathomably dangerous. They knew–everyone knew–that there was a very good chance they would not get back to Earth. But they accepted that risk, because for them, learning about the unknown was worth it.
Abandoning human exploration of space, then, has consequences more far-reaching than its proponents intend. We would not just be abandoning putting humans into space; at some fundamental level, we would be resigning ourselves to staying here. We would have decided, as a species, that we have gone far enough, that our borders end at our planet’s atmosphere, and that the rest of the solar system and galaxy belong to nature. And with that decision, we would resign ourselves to no longer exploring in the general sense.
That’s why it is so integral that we continue exploring. Pushing on the edge of what’s possible is what fuels our desire and ability to explore in all other areas, too.
There are still incredible mysteries for us to unlock. We don’t know whether Mars had life early in its history. We don’t know whether there are lifeforms swimming through Europa’s and Enceladus’s oceans as I write this. We don’t know whether there is intelligent life on planets in solar systems in the Milky Way and beyond. We don’t know how life began on Earth, let alone how life began at all. And most of all, we don’t know whether it is possible for us to move beyond our own solar system.
But what I do know is this: I want to know. I want to know.
Monday’s WWDC keynote brought easily the largest set of changes to Apple’s platforms since iOS 2 was announced in 2008. The effects of what was announced will be felt and discussed for years to come.
There is a lot to think through and write about, which I will be doing in the coming weeks. However, something struck me during the keynote that felt fairly small but, upon reflection, could end up being important to Apple’s future success.
Apple announced further updates to their cloud service, where you can save all of the photos and videos you take, all of your documents, and all of your data. Apple announced that their Touch ID feature, which identifies you using your fingerprint, will now be accessible to third-party developers as well. And Apple announced a new app and framework for centralizing all of your health and fitness data, which—given your permission—can automatically be sent to your doctor.
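The Touch ID piece is exposed to developers through the LocalAuthentication framework, and the shape of that API matters here: the app receives only a success or failure result, never the fingerprint itself. A minimal sketch, in current Swift syntax:

```swift
import LocalAuthentication

// A minimal sketch of third-party Touch ID use via LocalAuthentication.
// The app receives only a success/failure result, never the fingerprint.
let context = LAContext()
var authError: NSError?

if context.canEvaluatePolicy(.deviceOwnerAuthenticationWithBiometrics, error: &authError) {
    context.evaluatePolicy(.deviceOwnerAuthenticationWithBiometrics,
                           localizedReason: "Unlock your account") { success, _ in
        if success {
            // Proceed with the sensitive action.
        } else {
            // Fall back to a passcode or password.
        }
    }
}
```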
Those announcements come in addition to storing your contacts, calendars, and reminders, and tracking your location over time (keeping that data on your device) so your iPhone can provide timely updates on how long it will take to get home or to work in current traffic. Combined, Apple is asking you to store nearly all of your intimate information on their devices and servers, and even to provide the most intimate—your health data—to your doctor.
And yet I’ve heard little to no consternation over Apple’s consolidation of our most private data, in an era when our government maintains call logs, collects security and encryption exploits, breaks into private services to collect data, and has lied to the public about the extent of what it is doing.
That should be surprising, especially considering how much push-back companies like Google and Facebook have received for collecting and using our personal data. On the whole, people seem to trust Apple to respect their personal data.
The reason, I think, starts with the fact that Apple’s business is *not* their users’ data. Their business is selling devices and services to their users. As a result, Apple’s interest in their users’ data is not to generate revenue (which is inherently Google’s and Facebook’s interest), but rather to use it to create compelling and meaningful products for their customers. Their incentives are aligned with users’ incentives because of their business model.
Second, Apple takes this relationship very seriously. iOS makes it very clear when applications are requesting access to our personal data. Apple has worked quite hard to make sure that the *user* decides what and how much they want to share.
I don’t think Google or Facebook could announce that they are going to collect their users’ health data and optionally send it to their doctors without some reasonably large amount of criticism and fear of abuse. The reason is obvious: their primary business is utilizing user data to generate revenue, so why couldn’t they do the same with health data?
As time goes on, the integration of our smartphones and health-tracking devices, and the increasingly sophisticated use of the data they generate, will become the primary space where meaningful development in technology occurs. There’s huge potential in what Apple has announced with HealthKit. If it takes off, it will be a single place to store all of our health data. This will not only benefit doctors, who will be able to see that data in one place for the first time, but by aggregating it for each individual (and potentially for groups), we will be able to see trends and correlations between our decisions and our health that we just could not see before.
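To give a sense of the model, here is a minimal sketch of reading a single data type (step counts) through HealthKit, in current Swift syntax. Nothing is readable until the user explicitly grants access to that type:

```swift
import HealthKit

// A minimal sketch of reading step counts from HealthKit. The user must
// grant read access to each data type individually.
let store = HKHealthStore()

if HKHealthStore.isHealthDataAvailable(),
   let stepType = HKQuantityType.quantityType(forIdentifier: .stepCount) {

    store.requestAuthorization(toShare: nil, read: [stepType]) { success, _ in
        // "success" means the request was processed; HealthKit deliberately
        // does not reveal whether the user granted read access.
        guard success else { return }

        let query = HKSampleQuery(sampleType: stepType,
                                  predicate: nil,
                                  limit: HKObjectQueryNoLimit,
                                  sortDescriptors: nil) { _, samples, _ in
            // Hand the samples to the app's dashboard, chart, etc.
            print("Fetched \(samples?.count ?? 0) step samples")
        }
        store.execute(query)
    }
}
```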
All of that has the potential to enable better decision-making, and to let doctors get ahold of us when something appears to be seriously wrong that we ourselves may not even be aware of. There is incredible potential here, and I think Apple is the only company that can pull it off. This puts Apple in a unique position as we continue into the future, and gives it a special advantage that no other company has.
You have all the answers to my questions
Even ones I didn’t have
Why should I know anything at all?
You know everything I need
Everything I may need
You hold it all for me
So I waste no time
But still I wonder, why don’t I wonder?
Like I did as a kid
But no answer