“Free” electric vehicle chargers are an excellent example of what happens when valuable resources are priced at zero:

The bad moods stem from the challenges drivers face finding recharging spots for their battery-powered cars. Unlike gas stations, charging stations are not yet in great supply, and that has led to sharp-elbowed competition. Electric-vehicle owners are unplugging one another’s cars, trading insults, and creating black markets and side deals to trade spots in corporate parking lots. The too-few-outlets problem is a familiar one in crowded cafes and airports, where people want to charge their phones or laptops. But the need can be more acute with cars — will their owners have enough juice to make it home? — and manners often go out the window.

There is always a cost, and the resource is always allocated in some manner. But when the price is zero, the allocation is usually far more arbitrary, inefficient, and unjust.

October 12th, 2015

Jonathan Blanks writes on why he owns guns and believes gun ownership is everyone’s right:

Like many Americans, my family history is closely tied to firearms. I was raised with a sense of duty to protect my loved ones. Danger wasn’t something that was abstract or imaginary in my family history or my upbringing, and so we had to learn to deal with it.

I’m not a Second Amendment absolutist, and I am open to changes to our gun laws. But gun ownership is important to me, and responsible individuals must be allowed to make the choice for themselves and their families if they want to own firearms.

Absolutely worth reading, especially with Twitter full of righteous stupidity like this:

So if you define “liberty” as the right to own a gun, go fuck yourself. You are a disgrace to all that’s good and right about America. – Mike Monteiro

Monteiro’s indignant tirade is representative of a sentiment that, in typical fashion, swept across Twitter and fell away in a matter of days—all anger, hate and righteous assurance that the speaker is absolutely right, and that the people they disagree with are not only wrong but utterly stupid or evil. For all of its righteousness and anger, though, it is hot air.

This sort of thing replaces thinking about an issue, reading about it and what other people think, and actually considering it, with posturing. It is, ultimately, self-aggrandizing. It accomplishes nothing besides making the person feel good about themselves for their 140 characters.

On the opposite end of the spectrum is Jonathan Blanks’s piece. He doesn’t argue by denigrating people he disagrees with, or questioning their intelligence or motivations—he makes a calm, reasoned argument for what he thinks.

Blanks recounts that, for his black family in Indiana, guns were vital to defense against the KKK, and that having a gun was necessary to defend his girlfriend’s friend from an abusive husband. His thesis is that people have the right to protect themselves, and that a gun is often the only effective way to do so.

That doesn’t just extend to defending yourself against other individuals, but also against an abusive state. The left is fond of arguing that gun ownership as a check against a government violating our liberty is absurd, because no one with a shotgun or AR-15 could successfully take on the U.S. military. This argument misses the point. The goal is not simply to defeat an abusive government, but to make it prohibitively difficult and bloody for the government to become tyrannical. And, indeed, defeating an abusive government would certainly be possible—the wars in Afghanistan, Iraq, Vietnam, and on and on show what guerrilla fighters can do against an overwhelmingly superior force.

But Blanks makes an even more important point: even if defeat were certain, it would not matter. Individuals have the right to defend themselves against violations of their rights, whether the threat comes from individuals or the government, and whether or not they will win. And guns are vital to that. There is no liberty if people cannot even attempt to protect themselves.

October 12th, 2015

My thanks to Transpose for sponsoring this week’s RSS feed.

Transpose is a dead-simple way for anyone to build databases for their data. Anyone can use Transpose, in any field, because of how it works: Transpose lets you create simple forms for collecting your data, and from there, you can filter, sort and search your data in any way you want. Even better, Transpose identifies names and locations to create suggested searches, so you can quickly drill down to what you are looking for. If you are creating a database of media contacts for your new project, or organizing purchase orders for your business, don’t use a spreadsheet: Transpose is the right tool for the job. You can create a form that is exactly right for your task, quickly enter your data, sort through it, and even export it in a variety of formats. And, of course, this is all online, not stuck in a spreadsheet on your PC. Transpose is a single, integrated place for data management, note-taking and to-do tracking, replacing disparate tools like Evernote and Trello.

Use the right tool for the job. Give Transpose a try today.

Sponsorship by the Syndicate.

September 29th, 2015

iOS 9 and the Information Ecosystem

June 25th, 2015

Earlier this month, Apple introduced iOS 9 with new search and Siri features. In iOS 9, users will be able to search for both specific activities and content contained in iOS applications. This means users should be able to search for “sunburn” to find information on treating sunburns provided by iOS medical applications, and tapping a result will open it directly in the application that provides it. Additionally, these features will allow users to reference whatever they are looking at in a reminder they create through Siri. That is, when looking at an Amazon product page, users will be able to tell Siri “Remind me to buy this tonight,” and Siri will add a reminder with the link included.

Prior to iOS 8, an application’s functionality and content were indivisible from the application itself. If a user was looking at a photo in their photo library and wanted to edit it using a more powerful editing application they had installed, they had to leave Photos, open the editing application, and find and open the photo again there. If the user needed to access a text document stored in Pages, they had to launch Pages. In iOS 8, Apple eliminated the redundancy in the former example through extensions, which let developers atomize their application’s functionality so that users can invoke it outside the application itself.1

The latter example, however, still holds in iOS 8: content remains indivisible from the application itself. iOS 9 begins to break content and tasks out of the application by making them searchable through what used to be called Spotlight on iOS and is now simply Search.

The features around Search and Siri Reminders are absolutely useful. It is flexible and convenient to be able to move over to the resurrected Search page on the home screen and type in, say, “Denver” to find my flight or Airbnb reservation. What I find more interesting than the user-facing features, though, are the tools provided to developers to make this possible, and the direction in which task and content search suggest iOS may be heading.

An Information Ecosystem

To allow iOS’s new Search feature to surface tasks and content contained within applications, developers must indicate to the system what within their application should be surfaced, and what type of content it is (image, audio, event, etc.). Developers do much the same thing for tasks. Somewhat similarly, extensions indicate to the system what kind of content they can consume.
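
For illustration, here is a minimal sketch of what that indication looks like with the CoreSpotlight APIs iOS 9 introduces, written against a hypothetical medical-articles app; the identifier and domain names are invented for the example:

```swift
import CoreSpotlight
import MobileCoreServices

// A minimal sketch: index one piece of in-app content for iOS 9 Search.
// CSSearchableItem and CSSearchableItemAttributeSet are the real
// CoreSpotlight types; the article data here is hypothetical.
func indexArticle(identifier: String, title: String, summary: String) {
    // Declare what kind of content this is (plain text, in this case).
    let attributes = CSSearchableItemAttributeSet(itemContentType: kUTTypeText as String)
    attributes.title = title
    attributes.contentDescription = summary

    let item = CSSearchableItem(uniqueIdentifier: identifier,
                                domainIdentifier: "articles",
                                attributeSet: attributes)

    // Hand the item to the system-wide index that Search draws from.
    CSSearchableIndex.defaultSearchableIndex().indexSearchableItems([item]) { error in
        if let error = error {
            print("Indexing failed: \(error.localizedDescription)")
        }
    }
}

// indexArticle("article-42", title: "Treating Sunburn",
//              summary: "First aid for mild and severe sunburns")
```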

This is referred to as “deep linking,” because it allows users to follow a “link” to a task or piece of content deep within an application, much like clicking a Google result that leads to a news article within a website, rather than landing on the site’s home page and navigating down to the article. “Deep linking,” while apt, is also somewhat misleading, because this enables much more than search. When developers adopt Apple’s new APIs for identifying content and tasks to the system, they will be helping the system structure what, and what kind of, data is on the user’s device. The system will know what content is on a user’s device, what kind of content it is, and what kind of content applications provide. The system will know what photos, contacts, events (say, hotel reservations), and music are on a user’s device.
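
The other half of the “link” is the application handling the tap on a search result, navigating straight to the indexed content. A sketch, reusing the hypothetical article identifiers from above:

```swift
import UIKit
import CoreSpotlight

class AppDelegate: UIResponder, UIApplicationDelegate {
    var window: UIWindow?

    func application(application: UIApplication,
                     continueUserActivity userActivity: NSUserActivity,
                     restorationHandler: ([AnyObject]?) -> Void) -> Bool {
        // Taps on CoreSpotlight results arrive as this activity type,
        // carrying the uniqueIdentifier the app supplied when indexing.
        if userActivity.activityType == CSSearchableItemActionType {
            if let identifier = userActivity.userInfo?[CSSearchableItemActivityIdentifier] as? String {
                showArticle(identifier)
                return true
            }
        }
        return false
    }

    // Hypothetical helper: navigate to the article with this identifier.
    func showArticle(identifier: String) {
        print("Opening article \(identifier)")
    }
}
```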

Using these tools, we could begin to construct an understanding of what the user is doing. Applications are indicating to the system what tasks the user is doing (editing a text document, browsing a web page, reading a book), as well as what kind of content it is they are interacting with. From this, we can make inferences about what the user’s intent is. If the user is reading a movie review in the New York Times application, they may want to see show times for that movie at a local theater. If the user is a student writing an essay about the Ming dynasty in China, they may want access to books they have read on the topic, or other directly relevant sources (and you can imagine such a tool being even more granular than being related to “the Ming dynasty”). Apple is clearly moving in this direction in iOS 9 through what it is calling “Proactive,” which notifies you when it is time to leave for an appointment, but there is the possibility of doing much more, and doing it across all applications on iOS.

Additionally, extensions could be the embryonic stage of application functions broken out from the application and user interface shell, one-purpose utilities that can take in some kind of content, transform it, and provide something else. A Yelp “extension” (herein I will call them “utilities” to distinguish between what an extension currently is and what I believe it could evolve into) could, for example, take in a location and food keywords, and provide back highly rated restaurants associated with the food keywords. A Fandango extension could similarly provide movie show times, or even allow the purchase of movie tickets. A Wikipedia extension could provide background information on any subject. And on and on.
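
Nothing like this exists in Apple’s SDKs today, but a small sketch makes the idea concrete. Something along these lines, where every type and name is purely hypothetical, is what I have in mind by “utility”:

```swift
// Purely hypothetical: a "utility" is a faceless function that declares the
// kinds of structured content it consumes and produces, so the system can
// route content between utilities and applications.
protocol Utility {
    /// Content types this utility consumes, e.g. "location", "food-keywords".
    var inputTypes: [String] { get }
    /// Content type it produces, e.g. "restaurant-listing".
    var outputType: String { get }
    /// Transform structured input into structured output.
    func transform(input: [String: Any]) -> [String: Any]
}

// A Yelp-like utility: takes a location plus food keywords and returns
// highly rated restaurants. The implementation is a stub.
struct RestaurantUtility: Utility {
    let inputTypes = ["location", "food-keywords"]
    let outputType = "restaurant-listing"

    func transform(input: [String: Any]) -> [String: Any] {
        // A real utility would query a service; this echoes canned data.
        return ["restaurants": ["Lucia's", "Il Forno"]]
    }
}
```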

In a remarkable piece titled Magic Ink, Bret Victor describes what he calls the “Information Ecosystem.” Victor imagines a platform where applications (what he calls “views”) indicate to the system some topic of interest from the user, and utilities (what he calls “translators”) take in some kind of content and transform it into something else. The platform then provides these as inputs to all applications and translators. For example, the platform might surface a topic of interest inferred from the user’s activity: as I described above, this may be a text document where the user is writing about the Ming dynasty, or a movie review the user is reading through a web browser. Applications and translators can then consume these topics of interest, along with the information provided by utilities. The Fandango utility I describe above could consume the movie review’s keywords, for example, and provide back to the platform movie show times in the area. The Wikipedia utility could consume the text document, and provide back information on the Ming dynasty.

What is important here is that the user intent inferred from what the user is doing and the specific content they are working with, together with the utilities described above, could be chained together and used by separate applications on the user’s behalf, in ways that were not explicitly designed beforehand. Continuing the movie review case, while the user is reading a review of Inside Out in the New York Times application, they could invoke Fandango to find local show times and purchase tickets. This could occur either by opening the Fandango application, which would immediately display the relevant show times, or through Siri (“When is this playing?”). More interesting, one could imagine a new kind of topical research application that, upon noticing that the user is writing an essay related to the Ming dynasty, pulls up any number of relevant sources, from Wikipedia (using the Wikipedia utility) to online sources (papers, websites). Perhaps the user has read several books about the Ming dynasty within iBooks, and has highlighted passages and added notes. If iBooks identifies that information to the system, such a research application could bring up not just the books, but the specific sections relevant to what the user is writing, along with the passages they highlighted or annotated. Through the platform Victor describes, the research application could do so without being explicitly designed to interface with iBooks. As a result, the work the user has done in one application can flow into another application in a new form and for a new purpose.

To further illustrate what this may allow, I am going to stretch the above research application example. Imagine that a student is writing an essay on global warming in Pages on the iPad in the left split-view, and has the research application open on the right. As the user is writing, the text will be fed into a topic processor, and “global warming” will be identified as a topic of interest by iOS. Because earlier that week they had added a number of useful articles and papers to Instapaper from Safari, Instapaper will see “global warming” as a topic of interest, and serve up to the system all articles and papers related to the topic. Then, a science-data utility the user had installed at the beginning of the semester will also take in “global warming” as a topic, and will offer data on the change in global temperature since the Industrial Revolution. The research application, open on the right side of the screen, will see the articles and papers brought forward by Instapaper and the temperature data provided by the science-data utility, and make them immediately available. The application could group the papers and articles together as appropriate, and show some kind of preview of the temperature data, which could then be opened in a charting application (say, Numbers) to create a chart of the rise in temperatures to put in the essay. And the research application could adjust what it provides as the user writes, without them doing anything at all.

What we would have is the ability to do research in disparate applications, and have a third application organize that research for the user in a relevant manner. Incredibly, that application could also provide access to relevant historical data. All of this would be done without the application needing to build in the ability to search the web and academic papers for certain topics (although it could, of course). Rather, the application is free to focus on organizing research in a meaningful and useful way in response to what the user is doing, and its developers need only design for content types, not for very specific data formats coming from very specific sources.

Utilities, too, would not necessarily need to be installed with a traditional application, or “installed” at all. Because they are faceless functions, they could be listed and installed separately from applications themselves, and Apple could integrate certain utilities into the operating system to provide system-wide functionality without any work on the user’s part. For example, utilities could be used in the same way that Apple currently integrates Fandango for movie times and Yelp for restaurant data and reviews. Siri would obviously be a beneficiary of this, but all applications could be smarter and more powerful as a result.

A Direction

Apple hasn’t built the Information Ecosystem in iOS 9. While iOS 9’s new search APIs allow developers to identify what type of content something is, we do not yet have more sophisticated types (like book notes and highlights), nor a system for declaring new types in a general way that all applications can see (like a “movie show times” type).2 Such a system will be integral to realizing what Victor describes, and it is by no means a trivial problem. But the component parts are increasingly coming into existence. I don’t know if that is the direction Apple is heading, but it certainly *could be*, based on the last few years of iOS releases. What is clear, though, is that Apple is intent on inferring more about what the user is doing and what their intent is, and on providing useful features with that knowledge. iOS 7 began passively remembering frequently-visited locations and indicating how long it would take to get to, say, the user’s office in the morning. iOS 9 builds on that concept by notifying the user when they need to leave for an appointment to get there on time, and by automatically starting a playlist the user likes when they get in the car. Small steps, but the direction of those steps is obvious.

Building the Information Ecosystem would go a long way toward expanding the power of computing: breaking applications—discrete and indivisible units of data and function—into their component parts, freeing their data to flow into other parts of the system and to capture user intent, and freeing their functionality to augment other areas in unexpected ways.

I believe the Information Ecosystem ought to be the future of computing. I hope Apple is putting the blocks in place to build something like it.

  1. Although for the photos example, Apple hasn’t totally finished the job: while you can use photo-editing extensions from within the Photos application, you can’t do so for photos added to emails or iMessages.
  2. We can declare new types, and other applications can use them, but as far as I know each developer must be aware of those types separately and build them into their application. This makes it difficult to use data types declared by third parties.

Today is a good day to re-read Ken White’s excellent overview of Oliver Wendell Holmes Jr.’s oft-referenced “shouting fire in a theater” quote:

Holmes’ quote is the most famous and pervasive lazy cheat in American dialogue about free speech.

January 7th, 2015

During a Reddit AMA earlier this week, Elon Musk said he hopes to announce his Mars transport system plans.

As you’d expect, I’m incredibly excited about this. We are obviously still a long way from the first manned mission to Mars, but there is finally substantive work being done to get us there. NASA, of course, announced late last year that the Orion space capsule is part of its plan for a manned mission to Mars, which is terribly exciting. Humans conducting experiments on Mars and exploring the planet is something I hope to see before I die.

But Musk and SpaceX, I think, are even more exciting, because Musk’s intent is not just to send a scientific mission to the planet. Musk’s intent is to send waves of one-way missions to Mars full of people to colonize the planet. Musk’s intent, then, is an order of magnitude more ambitious than NASA’s. Musk’s intent is to start humanity’s expansion through space.

In October, I lamented our lack of progress with space travel. I hope more than anything that, in two decades, I can look back at that piece and laugh—while watching the greatest explorers in the history of our species make one more giant leap for humanity.

January 7th, 2015

A Jovian Dream

October 1st, 2014

Jupiter beckons in the distance, a small light, the greatest planet of all
I stare through the window, timeless, as the light slowly grows larger
I wonder what it will be like to see it with my own eyes
Swirls of orange and red and brown, a globe so large I can’t comprehend
The Jovian moons circling around the greatest planet of all, enraptured,
It is growing larger through the window
Through the window that separates me from the void,
Separates warmth and air and life from emptiness and death
This is what we have constructed
To ferry us across the great emptiness of space
It is larger still, I see color!
To see it with our own eyes
I see the moons!
To see if there is life beyond our little blue dot, so far away
To strike off into the unknown once again
To extend humanity beyond our home
I see it, I see it! I see!

But oh, this is the dream of a child
A great dream, but a dream
Remembered by an old man,
What could have been

After China placed limitations on who may stand as a candidate for Hong Kong’s chief executive, the city erupted in protest. This, of course, presents a large challenge to Xi Jinping and the PRC. Edward Wong and Chris Buckley write for the Times:

China’s Communist Party has ample experience extinguishing unrest. For years it has used a deft mix of censorship, arrests, armed force and, increasingly, money, to repress or soften calls for political change.

But as he faces massive street demonstrations in Hong Kong pressing for more democracy in the territory, the toolbox of President Xi Jinping of China appears remarkably empty.

It’s an especially difficult challenge for Beijing because their options are so limited. If they come to an agreement with the protesters, or even remove the limits entirely, it will not only show weakness on Beijing’s part (something they are loath to do), but could encourage similar protests in China proper. But they also have little ability to clamp down on protests in Hong Kong.

This is as large a threat to the PRC as we’ve seen in years, and it has the potential to rival the 1989 protests.

September 29th, 2014

Apple Watch

September 15th, 2014

The phone dominates your attention. For nearly every use, the phone has your undivided attention. Browsing the web, Twitter, Instagram, Snapchat, watching video, reading, messaging—all require focus on a screen that fills your vision, your primary attention, and generally some kind of interaction. Everything else, too, is always a home button or notification-tap away at all times.

Is that any surprise, when the phone is the single gateway to nearly everything? The PC is now for doing work, but the phone is for messaging, taking photos, sharing them, the web, Twitter, Facebook, finding places to go, getting directions there, and even making calls.

That is the reason we find ourselves, when we receive a message and pull out our phones to respond, often descending into a muscle-memory check of our other iMessages, emails and Twitter stream. We pull out our phone for one purpose, like responding to a message or checking our schedule, and end up spending several mindless minutes (or, if I am honest, more than “several minutes”) checking in on whatever it is. We find ourselves doing this even when we shouldn’t: while seeing friends and family, while out to dinner with them, while at home when we should be spending time together or doing other things.

I used “we” above because I think anyone with a smartphone, or anyone who knows people with them, can find truth in it to a greater or lesser extent.

My concern with wrist-worn “smartwatches,” starting with the Pebble, is that they appear to exist primarily to push the notifications we receive on our phones to our wrists. They seem to exist to make dealing with phone calls, messages, and updates easier: seeing them, ignoring them, replying to them. They are there to make dealing with our phones more convenient. And in large part, that is how smartwatches have been designed and used: “It’s there so I don’t have to pull my phone out of my pocket.”

But that idea of what smartwatches are for, making it more convenient to deal with the flood of notifications and information our phones provide us, is unimaginative. I think what the smartwatch can do is make the phone unnecessary for many purposes, create new purposes altogether, and allow us to benefit from a wrist-sized screen’s limitations.

The Apple Watch

On September 9th, Apple introduced their long-awaited watch, appropriately named the Apple Watch (herein, “the Watch”). We won’t be able to fully understand what Apple has built until next year, but they did provide a fairly detailed look at the Watch and the software it runs.

It appears that, in contrast to Google’s approach with Android Wear (which is heavily focused on showing single bits of information or points of interaction on the screen, and relies on swiping between cards of data and interaction), Apple intends the Watch to run fairly sophisticated applications. The Watch retains the iPhone’s touch interface, but Apple has designed new means of interaction specific to a small screen. In addition to the tap, the Watch brings “Force Touch,” a firmer press used to bring up different options within applications (like, say, the shuffle and AirPlay buttons within the music application), and the “digital crown,” a repurposing of the traditional watch crown into a sort of scroll wheel. Using the digital crown, users can zoom in and out of maps and scroll through lists with precision and without covering the small screen. And, most interestingly, Apple has replaced the familiar vibration alert of our phones with a light “tap” from the Watch to notify the user.

The result is surprisingly capable applications. You can not only search for locations around you, but zoom in and out of maps. You can scroll through your emails, messages, events or music. You can control your Apple TV.

This subsumes many of the reasons we pull out our phones during the day. We can check our schedule for the day, check a message when it’s received and send a quick reply, find a place to get a drink after dinner (and get directions there without having to walk while staring at a phone), ignore a phone call by placing a hand over the wrist, or put something on the Apple TV.

But what Force Touch and the digital crown will not do is make the Watch’s small screen as large as a phone’s. You can’t type out a reply to a message or email. You can’t browse the web for something. You can’t dig through a few months of your email to find a certain one. You can’t mindlessly swipe through Twitter (well, you could, but it’s going to be pretty difficult). That, though, is an advantage the Watch has over the phone. Because it is inherently limited, it has to be laser-focused on a single purpose, and while using it, you are limited to accomplishing that one thing. It’s a lot harder to lose yourself in a 1.5-inch screen than in a 4-plus-inch screen.

That’s going to be one of the Watch’s primary reasons for existing: allowing us to do many of the things we do on our phones right now, but in a way that’s limited and, thus, less distracting. If you’re out to dinner and receive a message (and haven’t turned on Do Not Disturb), you’re going to be a lot less likely to spend a couple of minutes on a reply, and then on Instagram, if you’re checking and responding to it on the Watch. It just doesn’t work that way.

In that way, I think Apple has embraced the wrist-worn watch’s inherent limitations. Rather than try to work around them, they are using them. They’ve built new means of interaction (Force Touch, the digital crown, “taptic” feedback) that allow fairly sophisticated applications, but they didn’t use them to cram iOS in its entirety into the Watch.

What I think Apple is trying to do is build a new mode of personal computing on the wrist, one molded by the wrist’s inherent limitations and the opportunities those limitations create.

Truly Personal

Jony Ive’s introduction to the Watch ends with a statement of purpose of sorts. He says,

I think we are now at a compelling beginning, actually designing technology to be worn. To be truly personal.

That sounds like a platitude, but I think it defines what Apple is trying to do. “Taptic feedback,” which Dave Hamilton describes as feeling like someone tapping you on the wrist, is a much less intrusive and jolting way of receiving a notification than a vibration against your leg or the terrible noise a phone makes buzzing on a table. More generally, focusing the Watch on quick, single purposes serves the same end.

What is interesting to me, though, is that they are using the Watch’s nature to do things in a more personal—more human—way, and to do things that the phone can’t. When providing directions, the Watch shows them on the screen just as you would expect on a phone, but it also does something neat: when it’s time to turn, it lets you know using taptic feedback, and it differentiates between left and right. As a result, there is no need to stare at your phone while walking somewhere with directions.

They’ve also created a new kind of messaging. Traditionally, “messages” are words sent from one person to another using text or speech. Since messages are communication through words, something inherently mental or intellectual rather than emotional, they are divorced from emotion. We can try to communicate emotion through text or speech (emoticons serve exactly that purpose), but that emotion is always translated into text or speech and then thought about by the recipient, rather than felt. In person, we can communicate emotion with our facial expressions, body gestures, and through touch. There’s a reason hugging your partner before they leave on a long trip is so much more powerful than a text message saying you’ll miss them.

In a small way, using the Watch, Apple is trying to create a new way to communicate that can capture some of that emotion. Because the Watch can effectively “tap” your wrist, others can tap out a pattern on their Watch, and it will re-create those taps on your wrist, almost as if they were tapping you themselves. You could send a tap-tap to your partner’s wrist while they are away on a trip just to say that you’re thinking about them. Isn’t that so much more meaningful than a text message saying the same thing? Doesn’t it carry more emotion and resonance?

That’s what they mean by making technology more personal. It means making it more human.

The Watch is not about making it more convenient to deal with notifications and information sent to us. It’s not even about, as I described above, keeping your phone in your pocket more often (although that will be a result). The Watch is creating a new kind of computing on our wrists that will serve different purposes than the phone, the tablet, and the PC. The Watch is for quickly checking and responding to messages, checking your schedule, finding somewhere to go and getting directions there, helping you lead a more active (healthier) life, and enabling a more meaningful form of communication. And it will do all that without sucking our complete attention into it, like the phone, tablet and PC do.

The Watch is for doing things with the world and people around us. Finding places to go, getting there, exercising, checking in at the airport, and sending more meaningful messages. Even notifying you of a new message (if you don’t have Do Not Disturb turned on) while out to dinner with family or friends serves this purpose, because if you have to see it, you can do so in a less disruptive way and get back to what you are doing—spending time with people important to you.

The Watch is a new kind of computing born of, and made better by, its limitations. And I can’t wait.

The founders of Siri are working on a new service called Viv that can link disparate sources of information together to answer questions:

But Kittlaus points out that all of these services are strictly limited. Cheyer elaborates: “Google Now has a huge knowledge graph—you can ask questions like ‘Where was Abraham Lincoln born?’ And it can name the city. You can also say, ‘What is the population?’ of a city and it’ll bring up a chart and answer. But you cannot say, ‘What is the population of the city where Abraham Lincoln was born?’” The system may have the data for both these components, but it has no ability to put them together, either to answer a query or to make a smart suggestion. Like Siri, it can’t do anything that coders haven’t explicitly programmed it to do.

Viv breaks through those constraints by generating its own code on the fly, no programmers required. Take a complicated command like “Give me a flight to Dallas with a seat that Shaq could fit in.” Viv will parse the sentence and then it will perform its best trick: automatically generating a quick, efficient program to link third-party sources of information together—say, Kayak, SeatGuru, and the NBA media guide—so it can identify available flights with lots of legroom. And it can do all of this in a fraction of a second.

If I understand the advancement they’ve made, the service (1) allows third parties to link in their information or services and define what they are in a structured fashion (so Yelp could define its information set as points of interest, user ratings and reviews, and Uber could make its car service available), and (2) knows how to connect multiple pieces of information and/or services together so that it can answer a user’s question or fulfill their request.

The Wired article linked above provides an example of what this would look like. A user tells Viv that they need to pick up a bottle of wine that pairs well with lasagna on the way to their brother’s house.

Providing a solution to that requires the interaction of many different information sets and services. Viv would (1) use the user’s contacts to look up their brother’s address, (2) use a mapping service to create a route from the user’s current location to their brother’s house, along with some radius by which the user is willing to deviate from the route to pick up the wine, (3) identify what ingredients compose “lasagna,” (4) identify what wines pair well with those ingredients, and (5) find stores within the specified radius of the route that carry those wines.
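
To make the shape of that concrete, here is a toy sketch of those five steps as chained function calls. Every service and data value below is a made-up stub, not Viv’s actual code; the point is only that once services expose structured inputs and outputs, a small program like this can be generated and composed on the fly:

```swift
// All of these "services" are hypothetical stubs standing in for contacts,
// maps, recipe, pairing, and inventory data. Viv's trick is generating and
// chaining calls like these on the fly, rather than having them hand-written.

struct Location { let name: String }
struct Route { let start: Location; let end: Location }

func brotherAddress() -> Location {                      // (1) contacts
    return Location(name: "Brother's house")
}
func routeTo(destination: Location) -> Route {           // (2) mapping
    return Route(start: Location(name: "Current location"), end: destination)
}
func ingredientsOf(dish: String) -> [String] {           // (3) dish breakdown
    return ["tomato", "beef", "cheese"]                  // stub recipe data
}
func winePairings(ingredients: [String]) -> [String] {   // (4) pairings
    return ["Chianti", "Zinfandel"]                      // stub pairing data
}
func storesAlong(route: Route, stocking wines: [String]) -> [String] { // (5) inventory
    return ["Main St. Wine Shop"]                        // stub store data
}

// The generated "program": each step feeds the next.
let route = routeTo(brotherAddress())
let wines = winePairings(ingredientsOf("lasagna"))
let stops = storesAlong(route, stocking: wines)
print("Stop at \(stops.first!) for a bottle of \(wines.first!).")
```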

That’s incredibly complicated. If Viv can do that not just for pre-planned scenarios (as Siri and Google Now currently do), but for arbitrary scenarios, provided it has the necessary information and services, then its creators must also have made an advancement in natural-language recognition to support it.

What most intrigues me, though, is the founders’ vision for providing Viv as a “utility” akin to electricity, so that any device could tap into the service and use its power. Effectively, what they are trying to build is a structured, universal data source. I wrote about this idea when Apple released Siri in 2012 and it’s something I’ve been thinking about for the last 5 years. The idea is to structure the world’s data so that it can be retrieved in a useful (read: computer usable) form.

It’s incredibly ambitious. With a sophisticated natural-language front-end, users could ask for information on virtually anything and receive it immediately. You could, while cooking (is it obvious I make an application for cooking?), ask for healthy substitutes for butter, or the proper technique for blanching vegetables. The service would also have an API so that other software and services could access it. Imagine a hypothetical research application that allows you to request (not search!) the average temperature for each year in Los Angeles from 1900 to 2010, and get back the data, assembled into a chart. And then imagine requesting the average temperature for Los Angeles over that range along with the amount of CO2 emissions for each year in the same range, with the data charted.

That’s a rather mundane example, actually. Imagine what kind of analyses would be possible if the world’s data is not only made available, but is immediately available in a structured format, and is constantly updated as the data is produced. There is the potential here, I think, for this to be as important as the advent of the Internet itself.

What concerns me, though, is how this will be made accessible. The article quotes Dag Kittlaus as saying that they envision deriving revenue from referrals made within the service. So, if you buy something through Amazon or request an Uber ride through Viv, Viv will earn a referral fee for it.

That makes perfect sense and is fairly brilliant. But what about making scientific data accessible? Will that require some kind of payment to access? Will I only be able to access that information through some kind of front-end, like a research application that I’ve paid for (and where the application’s developers pay some kind of fee to get access)? That would certainly be an advancement over where we are today in terms of making data accessible, but it would also prevent incredible innovation that open access could allow. Imagine if Wikipedia was a for-profit operation and, instead of being publicly available, was only accessible through subscription or through some kind of front-end. It would not be nearly the same thing.

It is heartening, though, that they are thinking so deeply about a business model. It would be a shame if such a terrific idea and incredible technology failed (or were absorbed by another company) because they hadn’t considered it. However, I hope they are also considering what open access to certain kinds of data (historical, political, scientific) could allow.

August 12th, 2014

I Want to Know

August 8th, 2014

When I was growing up, I was fascinated by space. One of my earliest memories—and I know this is strange—is, when I was four or five years old, trying to grasp the concept of emptiness in space. I imagined the vast emptiness of space between galaxies, nothing but emptiness. I tried to imagine what that meant, but most of all, I tried to imagine what it would look like.

That question, what color empty space would be, rolled around my brain the most. I couldn’t shake it. I would be doing something—playing Nintendo, coloring, whatever—and that question would pop into my head again. What does “nothing” look like? First, I imagined that it would look black, the black of being deep in a forest at night. But that didn’t seem right, either; black is still “something.” And then, I remember, I realized I was thinking about a much worse question. I wasn’t trying to imagine what the emptiness of space would look like. I was trying to imagine what nothing would look like.

I have that memory, I think, because thinking about that sort of broke my brain. I couldn’t comprehend what nothing is.

That question, of course, leads toward the central question of what our universe is and how it was created. I think that’s why space—the planets, stars, galaxies—so fascinated me then; it’s this thing so alien to our world, that dwarfs it on a scale incomprehensible to us, and yet it is us. We aren’t something held apart from it, but intimately a part of it and its history.

Trying to understand the physics of our universe, its structure and history is also an attempt to understand ourselves. I think, at some gut level, I understood that as a kid.

I poured myself into learning about our solar system and galaxy. My parents’ Windows PC had Encarta installed, and I was enthralled. I spent countless hours reading everything I could find within Encarta (which, at the time, felt like a truly magical fount of knowledge) about Mercury, Venus, Mars, Jupiter, Saturn, Uranus, Neptune and Pluto. And when I exhausted that source, I asked for books about space, and I obsessed over them. They were windows into these incredible places, and I couldn’t believe that we were a part of such a wondrous universe.

Through elementary school, my love for space continued to blossom. Back then, the people of NASA were my heroes. To my eyes, they were the people designing and launching missions across our solar system so we could understand even more about it. Many of the photos of Jupiter, Saturn, Uranus, and Neptune that I was so enraptured by were taken by spacecraft designed, built and launched by people at NASA. They were the people who had risked their lives to leave Earth and go to the Moon, to do something that most people just decades prior couldn’t even imagine as possible. And they were the people exploring Mars with a little robotic rover called Sojourner at that very moment.

They were my heroes because they were the people pushing us to explore our solar system, to learn what was out there and what came before us. I felt like I was living during a momentous time in the history of humanity, and that I would live to see advances as incredible as 1969’s Moon landing. There wasn’t a doubt in my mind.

That year, in 1997, I was nine years old. It’s been seventeen years.

Since then, we have indeed made great advances. In that time, we’ve sent three separate rovers to Mars, and we discovered that Mars certainly had liquid water on its surface long ago in its history. We landed a probe on the surface of Saturn’s moon Titan, which sent back these photos. We’ve discovered that our galaxy is teeming with solar systems.

All truly great things. But we are no closer today to landing humans on Mars than we were in 1997. In fact, we are no closer to putting humans back on the Moon today than we were in 1997.

Some people would argue that’s nothing to be sad about, because there isn’t anything to be gained by sending humans to Mars, or anywhere else. Sending humans outside Earth is incredibly expensive and offers us nothing that can’t be gained through robotic exploration.

Humanity has many urges, but our grandest and noblest is our constant curiosity. Through our history as a species, we have wondered what is over that hill, over that ridge, beyond the horizon, and, as we sat around our fires, what the lights in the sky might be. Throughout, someone has wondered, and because they wondered, they wandered beyond the border that marks where our knowledge of the world ends, into the unknown. We never crossed mountains, deserts, plains, continents and oceans because we did a return-on-investment analysis and decided there were economic benefits beyond the cost of doing so. We did so because we had to in order to survive, and because we had to know what was there. We were curious, so we stepped out of what we knew into certain danger.

And yet that tendency of ours, to risk everything to learn what lies beyond everything we know, is also integral to all of the progress we have made as a species. While rockets capable of leaving Earth’s atmosphere were being developed, it would hardly have been obvious what they would allow us to do. Would anyone then have known that rocketry would allow us to place satellites into orbit, enabling worldwide communication, weather prediction and the ability to locate yourself to within a few feet anywhere on Earth? The economic benefits that result from progress are hardly ever obvious beforehand.

But it is more than that. It isn’t just that exploration drives concrete economic benefits. We think in narratives. Since the Enlightenment and the industrial revolution, we have built a narrative of progress. With each year that passes, we feel that things improve. Our computers get faster, smaller, more capable; we develop new drugs and treatments for diseases and conditions that before would have been crippling or a death sentence; with each year, our lives improve. For a century and a half or so, that feeling hasn’t been too far from reality. But most especially, we have continued to do something that cuts to the very center of what it means to be human: we have explored. We have explored the most dangerous parts of Earth, we have explored our oceans, we have put humans into space, and humans have set foot on a foreign body. There is a reason that, when we think of our greatest achievements as a species, landing on the Moon comes to mind with ease. At a very deep level, exploring the unknown is tied up with what it means to progress.

As exciting and useful as it is to send probes to other planets and moons, it fails to capture our imagination in the same way that sending people does. The reason is that doing so—exploring the unknown ourselves—is such an incredible risk. What Buzz Aldrin, Neil Armstrong and Michael Collins did in 1969 was unfathomably dangerous. They knew—everyone knew—that there was a very good chance they would fail to get back to Earth. But they accepted that risk, because for them, learning about the unknown was worth it.

Abandoning human exploration of space, then, has consequences more far-reaching than its proponents intend. We would not just be abandoning putting humans into space; at some fundamental level, we would be resigning ourselves to staying here. We would have decided, as a species, that we have gone far enough, that our borders end at our planet’s atmosphere, and that the rest of the solar system and galaxy belongs to nature. And with that decision, we would resign ourselves to no longer exploring in the general sense.

That’s why it is so integral that we continue exploring. Pushing on the edge of what’s possible is what fuels our desire and ability to explore in all other areas, too.

There are still incredible mysteries for us to unlock. We don’t know whether Mars had life early in its history. We don’t know whether, in Europa’s and Enceladus’s oceans, there are lifeforms swimming through them as I write this. We don’t know whether there is intelligent life living on planets in solar systems in the Milky Way and beyond. We don’t know how life began on Earth, let alone how life began at all. And most of all, we don’t know whether it is possible for us to move beyond our own solar system.

But what I do know is this: I want to know. I want to know.

Apple’s Advantage

June 3rd, 2014

Monday’s WWDC Keynote was easily the largest set of changes made to Apple’s platforms since iOS 2 was announced in 2008. The effects of what was announced will be felt and discussed for years to come.

There is a lot to think through and write about, which I will be doing in the coming weeks. One thing that struck me during the keynote, though, felt fairly small at the time but, on reflection, could end up being important to Apple’s future success.

Apple announced further updates to their cloud service, where you can save all of the photos and videos you take, all of your documents and all of your data. Apple announced that their Touch ID feature, which identifies you using your fingerprint, will now be accessible to third-party developers as well. And Apple announced a new app and framework for centralizing all of your health and fitness data, which—given your permission—can automatically be sent to your doctor.

That’s in addition to storing your contacts, calendar and reminders, and tracking your location over time (and keeping that data on your device) so your iPhone can provide you with timely updates on how long it will take to get home or to work in current traffic. Combined, Apple is asking you to store nearly all of your intimate information on their devices and servers, and even to provide the most intimate of it—your health data—to your doctor.

And yet I’ve heard little or no consternation over Apple’s consolidation of our most private data, in an era when our government maintains call logs, collects security and encryption exploits, breaks into private services to collect data, and has lied to the public about the extent of what it is doing.

That should be surprising, especially considering how much push-back companies like Google and Facebook have received for collecting and using our personal data. On the whole, people seem to trust Apple to respect their personal data.

The reason, I think, starts with the fact that Apple’s business is *not* their users’ data. Their business is selling devices and services to their users. As a result, Apple’s interest in their users’ data is not to generate revenue (which is inherently Google’s and Facebook’s interest), but rather to use it in such a way that they can create compelling and meaningful products for their customers. Their incentives are aligned with their users’ incentives because of their business model.

Second, Apple takes this relationship very seriously. iOS makes it very clear when applications are requesting access to our personal data. Apple has worked quite hard to make sure that the *user* decides what and how much they want to share.
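
HealthKit, the health framework announced at the keynote, is built around exactly that principle: the app must ask, type by type, and the user decides. A minimal sketch of how that looks to a developer, using step counts as an example and eliding error handling:

```swift
import HealthKit

// The user, not the app, decides what health data an app may read. The app
// must ask for each type explicitly; here we ask only for step counts.
let store = HKHealthStore()
let steps = HKQuantityType.quantityTypeForIdentifier(HKQuantityTypeIdentifierStepCount)!

store.requestAuthorizationToShareTypes(nil, readTypes: [steps]) { granted, _ in
    guard granted else { return } // the user declined; the app sees nothing

    let query = HKSampleQuery(sampleType: steps, predicate: nil,
                              limit: 10, sortDescriptors: nil) { _, samples, _ in
        print(samples ?? []) // up to ten step samples
    }
    store.executeQuery(query)
}
```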

I don’t think Google or Facebook could announce that they are going to collect their users’ health data and optionally send it to their doctors without some reasonably large amount of criticism and fear of abuse. The reason is obvious: their primary business is utilizing user data to generate revenue, so why couldn’t they do the same with health data?

As time goes on, the integration of our smartphones and health-tracking devices, and the increasingly sophisticated use of the data they generate together, will become the primary space where meaningful development in technology occurs. There’s huge potential in what Apple has announced with HealthKit. If it takes off, it will be a single place to store all of our health data. This will benefit doctors, who will be able to see that data in one place for the first time, and by aggregating it for each individual (and potentially for groups), we will be able to see trends and correlations between our decisions and our health that we simply could not see before.

That has the potential both to enable better decision-making and to let doctors get ahold of us when something appears to be seriously wrong that we ourselves may not even be aware of. There is incredible potential here, and I think Apple is the only company that can pull it off. This puts Apple in a unique position as we continue into the future, and provides a special advantage that no other company has.

No Answer

March 31st, 2014

You have all the answers to my questions
Even ones I didn’t have
Why should I know anything at all?

You know everything I need
Everything I may need
You hold it all for me
So I waste no time

But still I wonder, why don’t I wonder?
Like I did as a kid
But no answer

Tesla plans to build a massive new lithium-ion battery factory that would double world production volume. Doing so could dramatically change the car industry:

When Tesla first began working on its Model S saloon barely five years ago, lithium-ion batteries were priced at about $1,000 a kilowatt-hour (kWh). Manufacturers are notoriously secretive about pricing details, but industry insiders hint that prices have now slipped to anywhere from $400 to $750 a kWh. Even so, that means the 85 kWh pack in a Model S costs Tesla between $34,000 and $63,750. A study by the Boston Consulting Group projected that prices would need to come down to $200 or less per kWh to make electric vehicles truly competitive with the more familiar car that relies on internal combustion. The gigafactory would slash these production costs.
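
Spelling out the arithmetic in that quote (pack cost is simply capacity times price per kilowatt-hour), the $200/kWh target would cut even today’s most optimistic pack cost in half:

$$
85\ \text{kWh} \times \$400/\text{kWh} = \$34{,}000, \qquad 85\ \text{kWh} \times \$200/\text{kWh} = \$17{,}000
$$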

Tesla intends to do so, presumably, to reduce cost enough to make an affordable Tesla.

People enjoy making Hyperloop jokes about Elon Musk, but there’s really no one else in the world doing such ambitious work. How can you not love that?

March 4th, 2014

Julian Sanchez:

I’m perfectly open to the notion that it may be wise and justifiable to extend the protections of anti-discrimination law to groups not currently covered—but I also wish supporters of such reforms would acknowledge that there’s a genuine impingement on associational freedom involved in such extensions, and that no simple sweeping principle can obviate the need for a close examination of the tradeoffs in each case.

March 1st, 2014