“Web” Category

iOS 9 and the Information Ecosystem

Earlier this month, Apple introduced iOS 9 with new search and Siri features. In iOS 9, users will be able to search for both specific activities and content contained in iOS applications. What this means is that users should be able to search for “sunburn,” find treatment information provided by an iOS medical application, and tap the result to open that content directly in the application. Additionally, these features will allow users to reference whatever they are looking at in a reminder they create through Siri. That is, when looking at an Amazon product page, users will be able to tell Siri “Remind me to buy this tonight,” and Siri will add a reminder with the link included.

Prior to iOS 8, an application’s functionality and content were indivisible from the application itself. If a user was looking at a photo in their photo library and wanted to edit it using a more powerful editing application they had installed, they had to leave Photos, open the editing application, and find and open the photo again there. If the user needed to access a text document they had stored in Pages, they had to launch Pages. In iOS 8, Apple eliminated the redundancy in the former example through extensions, which let developers atomize their application’s functionality so that users can use it outside the application itself.1

The latter example is still true in iOS 8: content remains indivisible from the application itself. iOS 9, however, begins to break content and tasks out of the application by making them searchable through what used to be called Spotlight on iOS but is now simply Search.

The features around Search and Siri Reminders are absolutely useful. It is flexible and convenient to be able to move over to the resurrected Search page on the home screen and type in, say, “Denver” to find my flight or Airbnb reservation. What I find more interesting than the user-facing features here, though, are the tools provided to developers to make this possible, and the direction that task and content search suggest iOS may be heading in.

An Information Ecosystem

To allow iOS’s new Search feature to surface tasks and content that are contained within applications, developers must indicate to the system what within their application is content that should be surfaced, and what type of content it is (image, audio, event, etc.). Developers do much the same thing for tasks. Somewhat similarly, extensions indicate to the system what kind of content they can consume.
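
To make that concrete, here is a minimal sketch of how a medical application might index a sunburn article with the new Core Spotlight APIs. The titles and identifiers are hypothetical, and this is present-day Swift rather than 2015-era syntax, but the calls are the ones Apple introduced in iOS 9:

```swift
import CoreSpotlight
import MobileCoreServices

// Describe the content and its type so the system knows what it is.
let attributes = CSSearchableItemAttributeSet(itemContentType: kUTTypeText as String)
attributes.title = "Treating a Sunburn"                    // hypothetical content
attributes.contentDescription = "Home care for mild sunburns."
attributes.keywords = ["sunburn", "burn", "first aid"]

// Wrap it in a searchable item with an identifier the app can resolve later.
let item = CSSearchableItem(uniqueIdentifier: "article-sunburn-101",
                            domainIdentifier: "articles",
                            attributeSet: attributes)

// Hand it to the on-device index so it shows up in Search.
CSSearchableIndex.default().indexSearchableItems([item]) { error in
    if let error = error {
        print("Indexing failed: \(error.localizedDescription)")
    }
}
```

Tasks are advertised in much the same way, through NSUserActivity objects marked as eligible for search.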

This is referred to as “deep linking,” because it allows users to follow a “link” to somewhere deep within an application for some kind of task or content, exactly like clicking a link in Google to a news article within a website rather than going to the website’s home page and navigating through its hierarchy to the article. “Deep linking,” while apt, is also somewhat misleading, because this allows much more than just search. When developers update their applications to take advantage of Apple’s new APIs for identifying content and tasks to the system, they will be helping the system build a structured picture of what data is on the user’s device and what kind of data it is. The system will know what content is on a user’s device, what kind of content that is, and what kind of content each application provides: what photos, contacts, events (say, hotel reservations), and music the user has.
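
The other half of the deep link is handling the tap. When the user selects a result, the system hands the application an NSUserActivity describing what was chosen, and the application is expected to jump straight to that content rather than to its home screen. A minimal sketch, again in present-day Swift, where showContent is a hypothetical stand-in for the app’s own navigation:

```swift
import UIKit
import CoreSpotlight

class AppDelegate: UIResponder, UIApplicationDelegate {
    func application(_ application: UIApplication,
                     continue userActivity: NSUserActivity,
                     restorationHandler: @escaping ([UIUserActivityRestoring]?) -> Void) -> Bool {
        // A tapped Core Spotlight result arrives as this activity type, carrying
        // the unique identifier supplied when the item was indexed.
        if userActivity.activityType == CSSearchableItemActionType,
           let identifier = userActivity.userInfo?[CSSearchableItemActivityIdentifier] as? String {
            showContent(withIdentifier: identifier)   // hypothetical: jump straight to the content
            return true
        }
        return false
    }

    private func showContent(withIdentifier identifier: String) {
        // Hypothetical: look the item up and present it directly, rather than
        // dropping the user on the app's home screen.
    }
}
```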

Using these tools, we could begin to construct an understanding of what the user is doing. Applications are indicating to the system what tasks the user is doing (editing a text document, browsing a web page, reading a book), as well as what kind of content they are interacting with. From this, we can make inferences about the user’s intent. If the user is reading a movie review in the New York Times application, they may want to see show times for that movie at a local theater. If the user is a student writing an essay about the Ming dynasty in China, they may want access to books they have read on the topic, or other directly relevant sources (and you can imagine such a tool being even more granular than “the Ming dynasty”). Apple is clearly moving in this direction in iOS 9 through what it is calling “Proactive,” which notifies you when it is time to leave for an appointment, but there is the possibility of doing much more, and doing it across all applications on iOS.

Additionally, extensions could be the embryonic stage of application functions broken out from the application and user-interface shell: one-purpose utilities that can take in some kind of content, transform it, and provide something else. A Yelp “extension” (herein I will call them “utilities” to distinguish between what an extension currently is and what I believe it could evolve into) could, for example, take in a location and food keywords, and provide back highly rated restaurants associated with those keywords. A Fandango utility could similarly provide movie show times, or even allow the purchase of movie tickets. A Wikipedia utility could provide background information on any subject. And on and on.

In a remarkable piece titled Magic Ink, Bret Victor describes what he calls the “Information Ecosystem”: a platform where applications (what he calls “views”) indicate to the system some topic of interest from the user, and utilities (what he calls “translators”) take in some kind of content and transform it into something else. The platform then routes these as inputs to all applications and translators. It would provide some topic of interest inferred from the user; as I described above, this may be a text document where the user is writing about the Ming dynasty, or a movie review the user is reading in a web browser. Applications and translators can then consume these topics of interest, along with the information provided by utilities. The Fandango utility I describe above could consume the movie review’s keywords, for example, and provide back to the platform movie show times in the area. The Wikipedia utility could consume the text document and provide back information on the Ming dynasty.
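
None of this exists as an API today, but to make the shape of Victor’s platform a little more concrete, here is a purely hypothetical sketch of what utilities (his “translators”) and the routing platform might look like if expressed in Swift. The types and names are mine, invented only to illustrate the idea of utilities that consume one kind of typed content and produce another:

```swift
import Foundation

// Hypothetical content types the platform could route between applications and utilities.
enum Content {
    case topic(String)                        // e.g. "Ming dynasty", "Inside Out"
    case article(title: String, body: String)
    case showtimes(movie: String, times: [Date])
    case reference(title: String, url: URL)
}

// A utility ("translator" in Victor's terms): consumes some content, produces other content.
protocol Utility {
    func canConsume(_ content: Content) -> Bool
    func transform(_ content: Content) -> [Content]
}

// A hypothetical Wikipedia utility that turns a topic of interest into background references.
struct WikipediaUtility: Utility {
    func canConsume(_ content: Content) -> Bool {
        if case .topic = content { return true }
        return false
    }

    func transform(_ content: Content) -> [Content] {
        guard case let .topic(name) = content,
              let url = URL(string: "https://en.wikipedia.org/wiki/\(name.replacingOccurrences(of: " ", with: "_"))")
        else { return [] }
        return [.reference(title: name, url: url)]
    }
}

// The platform: applications publish topics of interest, utilities transform them,
// and any interested application can consume the results.
struct Platform {
    var utilities: [Utility] = []

    func resolve(_ topic: Content) -> [Content] {
        return utilities
            .filter { $0.canConsume(topic) }
            .flatMap { $0.transform(topic) }
    }
}
```

An application like the research tool described below would only need to publish a topic and consume whatever typed content comes back; it never needs to know which utilities produced it.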

What is important here is that the user intent inferred from what the user is doing and from the specific content they are working with, together with the utilities described above, could be chained together and used by separate applications in ways that were never explicitly designed beforehand. Continuing the movie review case, while the user is reading a review for Inside Out in the New York Times application, they could invoke Fandango to find local show times and purchase tickets. This could occur either by opening the Fandango application, which would immediately display the relevant show times, or through Siri (“When is this playing?”). More interestingly, one could imagine a new kind of topical research application that, upon noticing that the user is writing an essay related to the Ming dynasty, pulls up any number of relevant sources: Wikipedia entries (via the Wikipedia utility), papers, and websites. Perhaps the user has read several books about the Ming dynasty within iBooks, and has highlighted passages and added notes. If iBooks identifies that information to the system, such a research application could bring up not just the books, but the specific sections relevant to what the user is writing, and the passages they highlighted or annotated. Through the platform Victor describes, the research application could do so without being explicitly designed to interface with iBooks. As a result, the work the user has done in one application can flow into another application in a new form and for a new purpose.

To further illustrate what this may allow, I am going to stretch the research application example above. Imagine that a student is writing an essay on global warming in Pages on the iPad in the left split-view, with the research application open on the right. As the student writes, the text would be fed into a topic processor, and “global warming” would be identified as a topic of interest by iOS. Because earlier that week they had added a number of useful articles and papers to Instapaper from Safari, Instapaper would see “global warming” as a topic of interest and serve up to the system all articles and papers related to the topic. A science data utility the user had installed at the beginning of the semester would also take in “global warming” as a topic and offer data on the change in global temperature since the Industrial Revolution. The research application, open on the right side of the screen, would see the articles and papers brought forward by Instapaper and the temperature data provided by the science data utility, and make them immediately available. It could group the papers and articles as appropriate and show some kind of preview of the temperature data, which could then be opened in a charting application (say, Numbers) to create a chart of rising temperatures to put in the essay. And the research application could adjust what it provides as the user writes, without them doing anything at all.

What we would have is the ability to do research in disparate applications and have a third application organize that research in a relevant manner. Incredibly, that application could also provide access to relevant historical data. All of this would be done without the application needing to build in the ability to search the web and academic papers for certain topics (although it could, of course). Rather, the application is free to focus on organizing research in a meaningful and useful way in response to what the user is doing, and its developers would just need to design for content types, not for very specific data formats coming from very specific sources.

Utilities, too, would not necessarily need to be installed with a traditional application, or “installed” at all. Because they are faceless functions, they could be listed and installed separately from applications themselves, and Apple could integrate certain utilities into the operating system to provide system-wide functionality without any work on the user’s part, much as Apple currently integrates Fandango for movie times and Yelp for restaurant data and reviews. Siri would obviously be a beneficiary of this, but all applications could become smarter and more powerful as a result.

A Direction

Apple hasn’t built the Information Ecosystem in iOS 9. While iOS 9’s new search APIs allow developers to identify what type of content something is, we do not yet have more sophisticated types (like book notes and highlights), nor a system for declaring new types in a general way that all applications can see (like a “movie show times” type).2 Such a system will be integral to realizing what Victor describes, and it is by no means a trivial problem. But the component parts are increasingly coming into existence. I don’t know if that is the direction Apple is heading, but it certainly *could be*, based on the last few years of iOS releases. What is clear, though, is that Apple is intent on inferring more about what the user is doing and what their intent is, and on providing useful features with that knowledge. iOS 7 began passively remembering frequently visited locations and indicating how long it would take to get to, say, the user’s office in the morning. iOS 9 builds on that concept by notifying the user when they need to leave for an appointment to get there on time, and by automatically starting a playlist the user likes when they get in the car. Small steps, but the direction of those steps is obvious.

Building the Information Ecosystem would go a long way toward expanding the power of computing: breaking applications, those discrete and indivisible units of data and function, into their component parts would free data to flow into other parts of the system, capture user intent, and allow functionality to augment other areas in unexpected ways.

I believe that the Information Ecosystem ought to be the future of computing. I hope Apple is putting the blocks in place to build something like it.

  1. Although for the photos example, Apple hasn’t totally finished the job; while you can use photo-editing extensions from within the Photos application, you can’t do so for photos added to emails or iMessages.
  2. We can declare new types, and other applications can use them, but as far as I know each developer must be aware of those types separately and build them into their application. This makes it difficult to use data types declared by third parties.
June 25th, 2015

The Successor to Siri and Open Data

The founders of Siri are working on a new service called Viv that can link disparate sources of information together to answer questions:

But Kittlaus points out that all of these services are strictly limited. Cheyer elaborates: “Google Now has a huge knowledge graph—you can ask questions like ‘Where was Abraham Lincoln born?’ And it can name the city. You can also say, ‘What is the population?’ of a city and it’ll bring up a chart and answer. But you cannot say, ‘What is the population of the city where Abraham Lincoln was born?’” The system may have the data for both these components, but it has no ability to put them together, either to answer a query or to make a smart suggestion. Like Siri, it can’t do anything that coders haven’t explicitly programmed it to do.

Viv breaks through those constraints by generating its own code on the fly, no programmers required. Take a complicated command like “Give me a flight to Dallas with a seat that Shaq could fit in.” Viv will parse the sentence and then it will perform its best trick: automatically generating a quick, efficient program to link third-party sources of information together—say, Kayak, SeatGuru, and the NBA media guide—so it can identify available flights with lots of legroom. And it can do all of this in a fraction of a second.

If I understand the advancement they’ve made, the service (1) allows third parties to link in their information or services and define them in a structured fashion (so Yelp could define its information set as points of interest, user ratings, and reviews, and Uber could make its car service available), and (2) knows how to connect multiple sources of information and/or services together so that it can answer a user’s question or fulfill their request.

The Wired article linked above provides an example of what this would look like. A user tells Viv that they need to pick up a bottle of wine that pairs well with lasagna on the way to their brother’s house.

Providing a solution to that requires the interaction of many different information sets and services. Viv would (1) use the user’s contacts to look up their brother’s address, (2) use a mapping service to create a route from the user’s current location to their brother’s house, along with some radius around the route within which the user is willing to deviate to pick up the bottle of wine, (3) identify what ingredients compose “lasagna,” (4) identify what wines pair well with those ingredients, and (5) find stores within the specified radius of the user’s route that carry those wines.
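
Purely as an illustration (this reflects nothing about Viv’s actual implementation, and every interface below is invented), those five steps amount to composing independently provided services into a single chain:

```swift
import Foundation

// Hypothetical service interfaces; each would be supplied by a different third party.
struct Location { let latitude: Double; let longitude: Double }

protocol Contacts     { func address(forRelation relation: String) -> Location? }
protocol Maps         { func route(from: Location, to: Location, detourRadiusMiles: Double) -> [Location] }
protocol RecipeData   { func ingredients(of dish: String) -> [String] }
protocol WinePairings { func wines(pairingWith ingredients: [String]) -> [String] }
protocol StoreLocator { func stores(near corridor: [Location], stocking wines: [String]) -> [String] }

// The assistant's job is to generate a chain like this on the fly from the request:
// "Pick up a bottle of wine that pairs well with lasagna on the way to my brother's house."
func planWineStop(from current: Location,
                  contacts: Contacts, maps: Maps, recipes: RecipeData,
                  pairings: WinePairings, stores: StoreLocator) -> [String] {
    guard let destination = contacts.address(forRelation: "brother") else { return [] }   // (1)
    let corridor = maps.route(from: current, to: destination, detourRadiusMiles: 2)       // (2)
    let ingredients = recipes.ingredients(of: "lasagna")                                  // (3)
    let wines = pairings.wines(pairingWith: ingredients)                                  // (4)
    return stores.stores(near: corridor, stocking: wines)                                 // (5)
}
```

The hard part, and presumably Viv’s advancement, is generating that chain automatically from the sentence rather than having a programmer write it ahead of time.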

That’s incredibly complicated. If Viv can do that not just for pre-planned scenarios (as Siri and Google Now currently handle), but for arbitrary scenarios, provided it has the necessary information and services, then its creators must also have made an advance in natural language recognition to support it.

What most intrigues me, though, is the founders’ vision for providing Viv as a “utility” akin to electricity, so that any device could tap into the service and use its power. Effectively, what they are trying to build is a structured, universal data source. I wrote about this idea when Apple released Siri in 2012 and it’s something I’ve been thinking about for the last 5 years. The idea is to structure the world’s data so that it can be retrieved in a useful (read: computer usable) form.

It’s incredibly ambitious. With a sophisticated natural language front-end, users could ask for information on virtually anything and receive it immediately. You could, while cooking (is it obvious I make an application for cooking?), ask for healthy substitutes for butter, or the proper technique for blanching vegetables. The service would also have an API so that other software and services could access it. Imagine a hypothetical research application that allows you to request (not search!) the average temperature for each year in Los Angeles from 1900 to 2010, and get back the data, already assembled into a chart. And then imagine requesting the average temperature for Los Angeles over that same range along with the amount of CO2 emissions for each year, with the data charted.

That’s a rather mundane example, actually. Imagine what kind of analyses would be possible if the world’s data is not only made available, but is immediately available in a structured format, and is constantly updated as the data is produced. There is the potential here, I think, for this to be as important as the advent of the Internet itself.

What concerns me, though, is how this will be made accessible. The article quotes Dag Kittlaus as saying that they envision deriving revenue from referrals made within the service. So, if you buy something from Amazon or request an Uber ride through Viv, Viv will earn a referral fee for it.

That makes perfect sense and is fairly brilliant. But what about making scientific data accessible? Will that require some kind of payment to access? Will I only be able to access that information through some kind of front-end, like a research application that I’ve paid for (and where the application’s developers pay some kind of fee to get access)? That would certainly be an advancement over where we are today in terms of making data accessible, but it would also prevent incredible innovation that open access could allow. Imagine if Wikipedia was a for-profit operation and, instead of being publicly available, was only accessible through subscription or through some kind of front-end. It would not be nearly the same thing.

It is heartening, though, that they are thinking so deeply about a business model. It would be a shame if such a terrific idea and such incredible technology failed (or was absorbed by another company) because they hadn’t considered it. However, I hope they are considering, too, what open access to certain kinds of data (historical, political, scientific) could allow.

August 12th, 2014

Lowering the Gates

Matt Bischoff on the New York Times’ 10-free-articles-a-month limit:

Since The Times’s mobile products are partially supported by advertising, it’s counterintuitive to drive down the number of ad impressions by cutting off enthusiastic users just as they’re getting excited about the content. Ten articles per month just aren’t enough to justify keeping the apps installed; it’s almost insulting. The proof is in the plummeting App Store ratings as well as in the company’s usage statistics, which I suspect show readers returning less frequently since the change.

I read the Times every morning and have for the past five or six years. So let’s be honest: Not only are their subscription plans inscrutable (separate plans for smartphone and tablet access? Why?), but the new 10 articles-per-month limit is clearly designed to coerce people into subscribing. But instead of convincing more people to subscribe, it’s likely to piss more people off and turn them away from the Times.

It appears that the Times doesn’t have a unified strategy to transition their company to digital. Sad.

February 12th, 2014

Building the Insect Drone

Fascinating article in Popular Science about the development of insect-like drones:

They teamed up with Wood, whose lab had since joined Harvard’s Wyss Institute for Biologically Inspired Engineering, and together they applied for an Air Force grant. Wood’s group then used an image-capture system to record and analyze fly behavior before, during, and after collisions with glass. By closely observing the positions of the flies’ body parts, they could measure the exact flip and twist of wings and legs. 

When Guiler and Vaneck slowed down the film, they were amazed at what they saw. “I thought the fly would tumble a bit and lose a lot of altitude,” Vaneck says. “But the fly recovery was elegant. It happened so rapidly; it was breathtaking.”

One of the more thought-provoking articles I’ve read in a while. What struck me while reading it is how much we are now learning about life by observing and understanding how the simplest creatures—worms and insects—deal with and thrive in a complex, changing and threatening world. We may be the dominant and most intelligent species, but we have much to learn about how life succeeds. And by casting away the arrogance that being the dominant species engenders, we will learn much, much more.

January 27th, 2014

Tony Fadell Explains Nest’s Sale to Google

Last week, Nest was acquired by Google for $3.2 billion.

There are only a few companies that have truly excited me in the last few years, and Nest is at the top. They worked on a very original, very real problem—thermostats were not only a user interface disaster that nearly everyone hated to use, but also an opportunity to do something meaningful: use energy more efficiently in people’s homes while improving people’s relationship with an important but ignored device. The thermostat was also clearly the first product in a much larger plan. And it was a very good first step in a very good plan.

So, when I heard that Nest had sold to Google, I felt a pang of disappointment. Not because it was Google (which, if Nest were to be acquired, makes more sense than any other company I can think of), but rather because Nest is an incredibly ambitious company that, I think, had the opportunity to be as important to the next wave of development in personal computing and the Internet as Apple and Google were—and potentially as large. They were a key member of Silicon Valley’s next generation of meaningful companies, I thought.

Of course, nearly every bit of that can (and will) still be true, with the notable exception of remaining independent. They can still do all of that, but they will do so under Google’s banner, and for Google’s benefit. And that’s fine, all else being equal. Before I continue, though, we need to discuss why Nest decided to sell in the first place, and “for the money” doesn’t count, because I know Tony Fadell, Matt Rogers and everyone else there didn’t do it for the cash.

Here’s why they sold, according to Fadell:

I was spending nearly ninety percent of my time on building the infrastructure of the company and I wasn’t able to spend enough time and cycles on what I love doing: products and creating differentiated experiences for our customers. That is where my love is and Google offered to let us focus on that, but with scale that will help bring our horizon closer to us, faster. Google offers to bring that scale to us. For me, ultimately building great products is key.

Fadell cites European distribution as a specific example of what he means by “scale”: physical distribution, and dealing with legal issues surrounding something as regulated and disparate as energy. He wants to focus his time on developing products rather than on handling everything around them.

It’s hard to argue with that. Nest clearly wants to move quickly. The Nest thermostat is a shockingly good first product, and Nest Protect—which they released just two years later—is at least as good. Nest Protect also began revealing their larger strategy. Owning either one is great, but owning both makes each one better. Since the devices communicate over your home’s wireless network, the Nest thermostat will automatically augment itself with the Protect’s motion sensors. More importantly, if the Protect senses rising levels of carbon monoxide, the thermostat will shut off your home’s furnace. Their strategy, then, appears to be modular devices that are convincing on their own but that, when used together, not only function better but begin to form the basis for a connected home.

Being a part of Google will allow them to realize that strategy faster by increasing their resources so they can focus their bandwidth on developing product. Google also is doing industry-leading work in learning systems and web services, which obviously will benefit Nest. Like I said, of all the companies in the world that could have acquired Nest (which, admittedly, is a fairly short list), Google is the best fit.

But Google didn’t agree to acquire Nest entirely for Nest’s benefit. They did it, I assume, because Nest fills particularly important holes in Google’s capabilities and in Google’s future development. While Google has been very good at building web applications, web services, and a mobile operating system, they’ve done very little to prove that they can design and make hardware that real consumers will pay real money for. There’s a lot more involved there than design and supply chain. Making hardware also involves businessy things like identifying a target market, figuring out what price that market will pay at the necessary levels of sales and margin, and then manufacturing a quality product efficiently enough to hit that margin. Nest has shown that not only can they do all of that, but they can produce an exceptional product that customers truly love. That’s invaluable, and it’s something Google hasn’t done.

Nest also provides an entry path for Google into the home. Getting into the connected home requires building hardware, and it requires a no-bullshit vision for how the connected home can improve people’s lives in substantive ways. Nest provides both of those things.

It sounds like a symbiotic relationship, then. Google can provide Nest what it needs, and Nest can provide Google something it needs, too. In Nest’s ideal vision of the relationship, Nest will remain largely independent—its own brand, leadership, teams, and products. People and resources may flow across the Nest-Google boundary, but the two entities will nevertheless remain distinct. In Google’s, though, Nest will begin to overlap and merge with Google itself. If Google wants the Nest acquisition to result in an improved capability for creating hardware products that consumers really want, then Nest’s leadership necessarily has to extend outside of Nest itself—which would require splitting their time, too. While Nest may become functionally a discrete unit within Google (the “connected home” unit, let’s say), if it is to have any effect on the rest of Google, there has to be some sort of crossover. This may mean putting Nest’s leadership (whether that’s Matt Rogers or another member of the team) in charge of Google’s hardware, or even having people in leadership roles move back and forth across the boundary. In any case, the boundary begins to smear, and Fadell’s reason for doing the deal—to focus his team’s time exclusively on product—begins to seem less likely.

Of course, that’s not necessarily negative. Perhaps a Nest-infused Google, and a Google-infused Nest, is better for everyone involved—Nest, Google, and us. I think there’s a good argument to be made there. But inherently, as that occurs, Nest begins to fade as a distinct entity, and it becomes more Google.

I think the most optimistic comparison for this acquisition is Disney’s 2006 acquisition of Pixar. Pixar remained an independent studio, kept their leadership, kept their campus, kept their culture, and created some of their most artistically and commercially successful films afterward. In return, Disney received Ed Catmull and John Lasseter’s services for turning around their declining animation department. And turn it around they did; Disney Animation Studios is enjoying something of a renaissance. Frozen, released in November 2013, was Disney’s biggest hit since The Lion King. The Pixar acquisition is one of the most successful acquisitions in history.

That could be how it works out here, too. I suspect, though, that while Pixar has thus far been able to retain its independence, Nest will not retain independence to the same extent. I have two main reasons for thinking so. First, the Disney-Pixar deal was incredibly specific in its intent: the deal was Catmull and Lasseter would oversee Disney Animation and Pixar would remain its own studio. The Google-Nest deal, as far as I can tell, doesn’t appear to be nearly as well-defined. As a result, blurring will happen with relative ease. Second, while in the movie business it’s actually beneficial for Pixar to remain independent in substance and in brand—it allows them to experiment in ways they couldn’t necessarily do if it was all a single studio, and it also allows them to release multiple movies per year in a way that doesn’t feel like Disney movies are competing for people’s attention—that structure doesn’t make nearly as much sense for Google and Nest. In reality, centralizing their hardware operation makes much more sense than continuing Nest as a parallel operation to Google’s other hardware operations. As a result, I think what we are more likely to see is Nest more or less become a part of Google while the brand continues on as Google’s “connected home” brand.

In the short term, then, I think there’s very good reason to be excited about the deal. I bet we are going to see even more incredible things come out of Nest than we would have seen otherwise, and probably faster as well. But in the long term, I’m disappointed. Nest is one of those rare companies that identified a brilliant product idea, in a large market, that would allow them to develop into something much greater in the future. And along the way, they built a first-rate company in all areas. I believe Nest could have been one of the most important companies in the world for the next twenty years. And while they may still be integral to personal computing and the web’s future, it will likely be under Google’s banner. For better or for worse.

January 21st, 2014

The LACMA Art + Technology Lab

The Los Angeles County Museum of Art has announced an art and technology lab sponsored by Google and SpaceX that will allow artists to experiment with new technologies.

The lab will also feature talks, demonstrations and prototypes for the public.

I absolutely love this, and I’m particularly excited that it’s happening at LACMA—one of my favorite places to visit, and something that could help foster the technology community here in Los Angeles. In addition, I love that they’re facilitating experimentation with technology for the sake of experimentation itself. We need more blue sky projects in technology that aren’t necessarily directly actionable. Those sorts of projects can often be a fountain of inspiration and ideas.

December 30th, 2013

Lulu, the Men Rating App

Deborah Schoeneman wrote a good profile of Lulu, a men-rating application that allows women to “review” men they have gone out with, much as one would review a restaurant on Yelp:

Mr. Brockway has since gotten several more reviews (#DudeCanCook), none quite as glowing as the one written by his girlfriend, but he nonetheless has an exceptionally high 9.8 ranking. “There’s nothing I can do about it except be the best person I can be,” he said, adding: “It inspires guys to be good and treat girls the way they should be treated. Like angels.”

Not all men are so magnanimous about their presence on Lulu, of course. Last summer, Neel Shah, a comedy writer, was at a bar in Los Angeles on a date with a woman who pulled up his profile. “She started reading me these negative hashtags and I was like, ‘Uh, this is awkward,’ ” said Mr. Shah, 30, whose profile has been viewed 448 times and “favorite” eight times for an average score of 6.7. His hashtags include #TallDarkAndHandsome and #CleansUpGood, along with the less flattering #TemperTantrums and #WanderingEye.

Even in the best light, I find this idea—that an application like Lulu “inspires” men to be “the best person” they can be, through the threat of getting a poor review—to have the same problem that “inspiring” people to be good by threatening them with an eternity of torture has: are you really a good person if the central reason you’re doing it is to avoid something negative for yourself?

Lulu, and Lulu’s backers within the article, laud as a step forward in civility something that attempts to control people’s behavior through public shaming and the literal threat of not being able to get a date.

That’s gross enough, but I think it’s worse than that. Neel Shah goes on to describe how one reviewer (isn’t that word alone repulsive in this context? A reviewer of the desirability of a person?) said that laughing at his jokes may take some effort, which is benign enough on its own. But it isn’t on its own; it is a public comment on someone’s sense of humor, meant to be used in deciding whether to date them, and it contributes to their overall numerical ranking. Not only does Lulu attempt to publicly shame people (and laud them!), it also attempts to quantify things that are inherently unquantifiable. How do you reduce a person’s desirability to a score from one to ten?

December 18th, 2013

“Please stop live tweeting people’s private conversations”

Nisha Chittal:

Not to be the Grinch, but can we consider for a moment the fact that live-tweeting and broadcasting another person’s private conversations to the internet for our own entertainment is actually pretty creepy?

It’s not just creepy. It’s creepy at best, and a passive-aggressive, disgusting form of bullying at worst.

December 3rd, 2013

Anki Drive Is Racing Google, Not Other Toy Makers

Austin Carr on Anki Drive’s greater ambitions:

Today, Anki launched Drive to the world, and, even as a stepping stone, it’s an incredibly compelling product. Users can pit their cars against their AI-powered counterparts, zooming around a track and competing in a variety of scenarios–races, battles, and so forth–while using the iPhone as a steering wheel and control center. Sofman and team like to call it “the first video game in the real world.” But the larger promise of Anki is the underlying technology it’s inventing in the process, which could make the startup competitive in a whole host of industries. “This is a way to zigzag through a lot of really compelling products to get to the Holy Grails: full-blown autonomous driving, having [robotic] helpers around the house, health-care applications,” Sofman beams.

Well, that helps explain why Apple allowed Anki to introduce themselves on the center stage. That’s potentially very exciting.

October 22nd, 2013

Nest Protect

Today, Nest announced their first new product since the Nest thermostat—Nest Protect. Nest Protect is a smoke and carbon monoxide alarm.

For an excellent look at Nest Protect, and a profile of why Nest made it and how they designed it, you should read Steven Levy’s piece for Wired.

…Wait, what? A smoke alarm?

Yes. Nest’s latest product is a $130 smoke alarm.

Nest’s basic strategy should be obvious now: take things we use in our daily lives but don’t at all enjoy using, or actively abhor using, and think through them so that they’re both better devices and delightful to own and use. (It’s also worth noting that they’re choosing product categories that are very large and universally needed.)

It’s more than that, though. The Nest thermostat and Nest Protect are standalone devices, but they work together. If you have a Nest thermostat and smoke alarms installed in your home, the smoke alarms will inform the thermostat when there’s movement in the home—which should make the Nest thermostat’s “Auto-Away” feature much more accurate, and thus able to be that much more efficient with a home’s energy use.

But what’s even more illuminating for what Nest’s vision is, though, is that if a Nest smoke alarm senses carbon monoxide, it will tell the thermostat to turn off the home’s furnace, which is a likely cause of carbon monoxide poisoning.

That’s truly smart. Nest has not only built two devices that work together to efficiently manage your home’s energy and protect you from fire, but they’ve created two devices that can actively judge the situation and work together to intervene in your home to keep you safe.

We’ve been hearing about the “smart home” for a very long time now, but this is the first time we’re legitimately there. Tony Fadell seemed to confirm this as Nest’s intent while talking with Steven Levy:

In other words, Nest isn’t only about beautifying the thermostat or adding features to the lowly smoke detector. “We’re about creating the conscious home,” Fadell says. “To take a truly important device that has had no great innovation and make that device really, really great.” Left unsaid is a grander vision, with even bigger implications: many devices sensing the environment, talking to one another, and doing our bidding unprompted.

That’s a grand dream, and I think the Nest Protect—ostensibly just a smoke alarm—is going to be a key cog within their plan. Think about it: it’s not just a smoke alarm, but an Internet-connected computer with sophisticated sensors and software in every bedroom and on every floor. It knows when you wake up (since it has a motion-tracking sensor), when you go to bed, and even when you get up in the middle of the night. Along with the Nest thermostat, they also know when you leave for the day and when you get home. There’s a lot of immediate information there to begin doing some incredible things, and it’s something that could serve as a platform for all kinds of other services as well.

So yes, it is “just” a smoke alarm. And a very good one. But I think it’s also a piece of a much larger plan: make products that are so good that they can stand on their own and you’ll have to have them, but also work together to create something we’ve never seen before.

October 8th, 2013

Apple’s Best, Apple’s Worst

This section from USA Today’s interview with Craig Federighi and Jonathan Ive highlights what makes Apple an exceptional company, and also their greatest weakness:

“This right here is what I love about Apple, this incredibly sophisticated powerful technology that you’re almost not aware of, it absolutely blows me away,” he says. “You can’t get this without working cross-functionally.”

Federighi is quick to admit that any engineer tasked with such a challenge would be sure to call attention to his brilliant work. “You know, you’re going to have some big message saying ‘Scanning!’ and buzz-buzz-zzz-zzz later it says ‘Authenticated,’ blink-blink-blink, with 10 seconds of animation,” he says, as Ive starts laughing.

“Ultimately we realized all that had to disappear,” says Federighi. “If it disappears, we know we’ve done it.”

That’s absolutely what is special about Apple: the focus on using technology in truly meaningful ways rather than to pad specification lists. But Apple’s talent for doing so with hardware and software hasn’t really extended into web services.

I’m not sure why that is. It could be that while good design and the integration of hardware and software are values Apple has held since the very beginning, building terrific web services has never been something the company has truly believed in. I wonder, then, whether part of Tim Cook’s management re-organization (which this USA Today piece seems meant to show the results of) is building that cultural focus on, and appreciation for, web services.

September 19th, 2013

“The Boy Who Played With Fusion”

Popular Science’s Tom Clynes has an excellent profile of Taylor Wilson, a high schooler who built a fusion reactor:

Almost from the beginning, it was clear that the older of the Wilsons’ two sons would be a difficult child to keep on the ground. It started with his first, and most pedestrian, interest: construction. As a toddler in Texarkana, the family’s hometown, Taylor wanted nothing to do with toys. He played with real traffic cones, real barricades. At age four, he donned a fluorescent orange vest and hard hat and stood in front of the house, directing traffic. For his fifth birthday, he said, he wanted a crane. But when his parents brought him to a toy store, the boy saw it as an act of provocation. “No,” he yelled, stomping his foot. “I want a real one.”
This is about the time any other father might have put his own foot down. But Kenneth called a friend who owns a construction company, and on Taylor’s birthday a six-ton crane pulled up to the party. The kids sat on the operator’s lap and took turns at the controls, guiding the boom as it swung above the rooftops on Northern Hills Drive.

To the assembled parents, dressed in hard hats, the Wilsons’ parenting style must have appeared curiously indulgent. In a few years, as Taylor began to get into some supremely dangerous stuff, it would seem perilously laissez-faire. But their approach to child rearing is, in fact, uncommonly intentional. “We want to help our children figure out who they are,” Kenneth says, “and then do everything we can to help them nurture that.”

The story’s worth reading for many reasons, and there are many parts of it worth discussing, but I found the section above to be particularly important. Rather than discourage Wilson’s fascination with what are incredibly dangerous things (including mixing explosives and setting them off in the backyard for family and neighbors), or see it as a danger to him and others, his parents tried to nurture it. They have certainly been indulgent, but they also helped their child learn enough—and cultivate the right group of supporters around him—to do something very few people of any age are capable of doing.

Some might think that this anecdote doesn’t have much relevance for most parents and most schools with kids who are not as gifted as Wilson (clearly) is. But I think that’s bullshit. Children have a remarkable amount of curiosity. Kids want to learn things about the world around them—in that sense, Wilson is no different from any other kid. What’s different in this case is that his parents, and the number of professors and researchers he contacted, embraced his curiosity and helped give him the tools to explore things he was interested in.

Not every kid needs to achieve something as remarkable as Wilson has. But what I think this shows is that we are holding children back at home and in schools by not embracing their fascinations and harnessing them toward learning. If a child is fascinated by dinosaurs, is that not a perfect entry path toward learning about biology, evolution, and ecosystems, and even the solar system by way of the extinction of the dinosaurs? If a child loves airplanes, isn’t that a great opportunity to learn about aeronautics, and the value of thinking through different designs and testing their effectiveness? And on and on. Children shouldn’t be forced to learn about such things, but parents and teachers can embrace their enthusiasm and provide them with the resources, tools and—most importantly—the encouragement to explore further.

September 3rd, 2013

Interleaved Curriculums See Huge Improvements in Retention

In Tampa, a test is underway in eight middle school classrooms to see how effective “interleaved” curriculums are for learning:

Dr. Bjork and others have shown that studying mixed sets of related things — paintings, birds, baseball pitches — greatly improves people’s ability to make quick, accurate distinctions among them, compared with studying as usual, in blocks. Others have found the same improvements when the items being mixed are specific kinds of problems, like calculating volumes, or exponents.

A growing number of cognitive scientists now believe that this cocktail-shaker approach could improve students’ comprehension of a wide array of scientific concepts, whether chemical bonds, parallel evolution, the properties of elementary particles or pre-algebra.

Interleaved curriculums mix questions on related, but distinct, topics. This contrasts with the traditional approach to teaching most of us experienced in school, where students focus exclusively on one concept at a time and repeatedly answer questions about it. That approach is called a “blocked” curriculum.

The Institute of Education Sciences designed the test such that half of each class received “interleaved” homework assignments for two kinds of questions and blocked assignments for another two, while the other half of the class received blocked assignments for the first two kinds of questions and interleaved for the last two.

The test is fairly small, but the results were dramatic: at the end, students were given a surprise cumulative exam on the material covered. They answered 38 percent of the questions on the normal, “blocked” material correctly, and 72 percent of the questions on the interleaved material correctly.

That small a test doesn’t confirm interleaving is superior, and certainly doesn’t confirm that it’s superior for all students and all material. But the logic makes sense; by mixing prior material into homework assignments, students have to go through a step they don’t have to take when answering blocked questions: they have to identify what the question is asking, and then decide which tool they’ve learned will help them answer it. As a result, they should better understand the material itself, and they should form an association between the tool they’ve learned and the kind of problem it applies to.

In mathematics courses I took, I became very good at learning whatever concept we were focusing on at the moment and applying it to each question by rote. Rather than focus on identifying what the question was asking and figuring out the best way to solve it, I would instead identify the template that each question related to that topic followed, identify each constituent part, and insert them into the concept. Since that was quicker than trying to analyze each question as if it was unique (they nearly always were not), I did whatever would get me to the correct destination the fastest.

In addition, this also meant that by the end of the semester I had forgotten much of the material covered toward the beginning, and I would need to re-learn it for final exams. That suggests I wasn’t so much learning the material as learning a mechanical process to follow.

Interleaving is interesting to me because it should help reduce that tendency. Not only does it force students to interpret what each question is actually asking (and thus better understand the topic they learned), but it also builds in a review of prior material, which should reinforce it.

September 3rd, 2013

“What’s Lost When Everything Is Recorded”

Quentin Hardy asks a good question: what’s lost when everything is increasingly recorded?

Remember the get-to-know-me chat of a first date or that final (good or bad) conversation with someone you knew for years? Chances are, as time has passed, your memory of those moments has changed. Did you nervously twitch and inarticulately explain your love when you asked your spouse to marry you? Or, as you recall it, did you gracefully ask for her hand, as charming as Cary Grant?

Thanks to our near-endless access to digital recording devices, the less-than-Hollywood version of you will be immortalized on the home computer, or stored for generations in some digital computing cloud.

One thing I’d like to add that’s slightly tangential to his piece is that many of our memories are directly associated with emotions. We don’t just remember the day we graduated high school, but the excitement and nervousness of finishing our childhood and starting life as an adult. When we think about the death of a friend or family member, we remember the shock upon hearing it, the deep sadness immediately after, and our slow, non-linear path toward accepting it.

Many of these memories, too, result in new emotions that cause us to reflect on our current position in life. Thinking about high school graduation may evoke a longing to go back to high school, a nostalgia for it, or even mild satisfaction that that part of our life is finished because we’re in a much better place now. We may then consider what we did after, and where we are now; the emotions that surface due to that memory may cause us to consider our current position or current events from a different perspective than we would normally.

The emotions we associate with memories, then, are just as important as the literal event itself, or perhaps even more important. Those emotions help spur consideration about what those events mean to us and to our life, and help lead to conclusions about them. They help imbue events—things that we remember to have occurred—with personal meaning. Without those emotions, those events have very little meaning, if any at all.

I wonder, then, how recording—photos, video, captured text from conversations—that is only increasing in reach and fidelity will interact with the way we remember events and form meaning. Will being able to peek back at almost any moment in our past, with some kind of documentation for it, make our recollection more accurate and therefore provide a deeper emotional understanding of events and deeper conclusions about them? Or will this increasingly perpetual documentation smooth out the peaks in our memory—the big events that stand out from the more mundane ones—leading to less emotional analysis of them?

I don’t know. But it’s worth thinking about.

August 21st, 2013

Brains and Boltzmann Machines

The Boltzmann machine algorithm appears to be a good model for important brain processes like learning:

“It’s the best possibility we really have for understanding the brain at present,” said Sue Becker, a professor of psychology, neuroscience, and behavior at McMaster University in Hamilton, Ontario. “I don’t know of a model that explains a wider range of phenomena in terms of learning and the structure of the brain.”

Hinton, a pioneer in the field of artificial intelligence, has always wanted to understand the rules governing when the brain beefs a connection up and when it whittles one down — in short, the algorithm for how we learn. “It seemed to me if you want to understand something, you need to be able to build one,” he said. Following the reductionist approach of physics, his plan was to construct simple computer models of the brain that employed a variety of learning algorithms and “see which ones work,” said Hinton, who splits his time between the University of Toronto, where he is a professor of computer science, and Google.

I think this approach—trying to approximately model the brain’s functions—is ultimately going to provide much more understanding of our brains than studying actual brains and nervous systems. The latter is absolutely useful, but our tools for doing so are remarkably primitive, and even relatively simple brains are monumentally complex, which makes it difficult to truly understand how they function by studying them directly.

Learning more from computer models may seem counterintuitive, but building such models forces us to actually understand the system’s design, which should reveal more universal principles that can be applied to biological brains as well.
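
As a rough illustration of the kind of model Hinton builds, here is a small sketch of a restricted Boltzmann machine (the two-layer, practically trainable variant he popularized) updated with one step of contrastive divergence, written in Swift. It is a toy, not a brain, but the learning rule it implements (strengthen connections that agree with the data, weaken those that agree with the network’s own reconstruction) is the sort of rule the article describes:

```swift
import Foundation

/// A tiny restricted Boltzmann machine trained with one step of
/// contrastive divergence (CD-1).
struct RBM {
    var weights: [[Double]]       // [visible][hidden]
    var visibleBias: [Double]
    var hiddenBias: [Double]

    init(visible: Int, hidden: Int) {
        weights = (0..<visible).map { _ in (0..<hidden).map { _ in Double.random(in: -0.1...0.1) } }
        visibleBias = Array(repeating: 0, count: visible)
        hiddenBias = Array(repeating: 0, count: hidden)
    }

    func sigmoid(_ x: Double) -> Double { return 1 / (1 + exp(-x)) }

    /// Probability that each hidden unit turns on, given a visible vector.
    func hiddenProbabilities(given v: [Double]) -> [Double] {
        return hiddenBias.indices.map { j in
            sigmoid(hiddenBias[j] + v.indices.reduce(0.0) { $0 + v[$1] * weights[$1][j] })
        }
    }

    /// Probability that each visible unit turns on, given a hidden vector.
    func visibleProbabilities(given h: [Double]) -> [Double] {
        return visibleBias.indices.map { i in
            sigmoid(visibleBias[i] + h.indices.reduce(0.0) { $0 + h[$1] * weights[i][$1] })
        }
    }

    func sample(_ probabilities: [Double]) -> [Double] {
        return probabilities.map { Double.random(in: 0...1) < $0 ? 1.0 : 0.0 }
    }

    /// One CD-1 update: strengthen connections that agree with the data,
    /// weaken those that agree with the model's own reconstruction.
    mutating func train(on v0: [Double], learningRate: Double = 0.1) {
        let h0 = hiddenProbabilities(given: v0)
        let v1 = sample(visibleProbabilities(given: sample(h0)))
        let h1 = hiddenProbabilities(given: v1)

        for i in v0.indices {
            for j in h0.indices {
                weights[i][j] += learningRate * (v0[i] * h0[j] - v1[i] * h1[j])
            }
            visibleBias[i] += learningRate * (v0[i] - v1[i])
        }
        for j in h0.indices {
            hiddenBias[j] += learningRate * (h0[j] - h1[j])
        }
    }
}
```

Feed it binary feature vectors over and over and the weights get nudged until the hidden units come to encode recurring patterns in the data.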

If you can’t tell, I find the brain to be one of the most fascinating things we have to study. We are truly in the early days of understanding how brains work, and not only is discovering how they do wonderfully exciting, but I think it will provide us with the tools to build computer systems and software of an entirely new magnitude, and to answer many questions we’ve had about ourselves that have been relegated to religion and philosophy out of necessity.

July 30th, 2013