“Web” Category

Jeff Hawkins’ Real-Time Big Data Bet

Jeff Hawkins’ new company focuses on analyzing streaming data for patterns, rather than mining old datasets:

Data storage companies like EMC and Hewlett-Packard thrive on storing massive amounts of data cheaply. Data analysis companies including Microsoft, I.B.M., and SAS fetch that data and crunch the history to find patterns. They and others rely on both the traditional relational databases from Oracle, and newer “unstructured” databases like Hadoop.

Much of this will be a relic within a few years, according to Mr. Hawkins. “Hadoop won’t go away, but it will manage a lot less stuff,” he said in an interview at Numenta’s headquarters in Redwood City, Calif. “Querying databases won’t matter as much, as people worry instead about millions of streams of real-time data.” In a sensor-rich world of data feeds, he is saying, we will model ourselves more closely on the constant change that is the real world.

Interesting premise. I think this sort of thing is going to be one of the next big frontiers for the technology industry.

November 29th, 2012

Speaking of 3D Printers

Wagner Custom Skis makes skis tailored to each person and the conditions they want to ski in.

Is there any reason this company should have to worry about the manufacturing? They should be able to focus on creating skis that perfectly fit a person, rather than also trying to perfect manufacturing. 3D printing could eventually allow them to do that.

November 26th, 2012

The Softwarization of Everything

3D printing could mean the softwarization of physical objects:

Researchers at the Cornell University Creative Machines Lab recently developed a machine that they used to print components vital to working electronics, as well as a functioning electromagnet and battery. Home printers are poised to follow a similar development curve. Hod Lipson, the Creative Machines Lab director, says consumer-grade multi-material printers are less than a decade away.

Think about a world where 3D printers are affordable and the instructions for printing objects are distributed online like applications. Prices would decline and development would accelerate as people iterate on designs at something closer to software’s pace.

No longer will it be necessary to find suppliers and manufacturers to create a new hardware product—just distribute the instructions online.

November 26th, 2012

The Great Firewall

Eveline Chao has one of the better overviews of China’s Internet censorship I’ve seen:

Sina Weibo users can post anything they like, and often sensitive posts will even appear in their personal feed, but the post is blocked from search results. In other words, a user might have no idea their post has been “disappeared” and their friends and other users can’t see the post in their feeds. After a term has been unblocked, it quietly reappears in users’ feeds and search results.

Because what’s censored and how it’s censored is not uniform, the effect on speech may be even worse than a strict program for censoring defined topics in every case. Since people aren’t always sure what’s going to be censored (or in some cases, if it is censorship at all), there’s a chilling effect. You can get around a well-defined censorship program, but it’s much harder to get around one that is always changing.

November 21st, 2012

The Automated Future

For the last few decades, we have struggled with how to employ manufacturing workers who lost their well-paid jobs with great benefits due to a globalized economy. When workers in another part of the world are willing to work for a fraction of what U.S. workers are paid, it’s obvious why companies move their manufacturing operations: it’s a significant cost advantage and, worse, if they don’t, their competitors will. This is only more true today. In January, Charles Duhigg and Keith Bradsher reported for the New York Times that for technology products especially, the labor cost itself is less important. What matters is that Asia—especially China—is the only place where every part of the supply chain exists in one region that can manufacture quickly and at immense scale.

Manufacturing, too, is increasingly automated. The human’s role in actually putting things together is decreasing. Large-scale automation for identical products, like cars, has been a reality for decades. What’s happening now, though, is that small production runs are being automated as well. Rethink Robotics has created a robot called Baxter that can be “taught” how to do repetitive tasks, and can work around humans. Rethink Robotics says Baxter can work for the equivalent of $4 an hour. Vanguard Plastics, a 30-person company in Connecticut, is using Baxter for menial tasks. Vanguard’s president, Chris Budnick, says that workers who did these jobs before are not being laid off, but are now assigned to “higher-level” tasks like training Baxter for each new production run.

Robots like Baxter are a work multiplier. Whereas before Vanguard required humans to do menial tasks, now it only needs humans to train robots to do them. But many more people are required to do the menial tasks than to train the robots, so while no one may be losing their job now, Vanguard will need to find new productive tasks for those workers in the future—or eliminate their jobs. And as robots like Baxter get better, manufacturers will need even fewer employees to train them.

Other industries face very similar problems. Retail salespersons and cashiers, for example, account for nearly 6 percent of all jobs in the U.S., but are increasingly irrelevant. For many products, shopping online is more convenient and cheaper. Tower Records, Blockbuster and Borders all failed fundamentally because purchasing music, movies and books online is much better than paying more money for the privilege of driving to a store, hoping it has what you want and waiting in line. Even grocery stores are reducing their need for cashiers by employing self-checkout machines, which allow customers to scan and pay for items on their own and require only one employee to monitor several machines.

Almost all of the jobs lost to offshoring and automation have been low- or semi-skilled jobs. Manufacturing jobs required training, but certainly not several years of specialized education. Retail sales and cashier positions require almost zero training. It would appear, then, that since offshoring and automation are eliminating low- and semi-skilled jobs, we can re-orient our economy toward “knowledge work,” or work whose primary task is thinking. Examples of these kinds of jobs are software engineers, lawyers, doctors, accountants, managers and scientists. These kinds of jobs require a tremendous investment in education and training, and therefore seem not to fall prey to offshoring and automation.

In The Lights in the Tunnel, Martin Ford asks a very good question: “What is the likely economic impact of machines or computers that begin to catch up with—and maybe even surpass—the average person’s capability to do a typical job?” Or, more provocatively: If computers can already beat the best chess players in the world, isn’t it likely that they will also soon be able to perform many routine jobs?

Ford argues that not only is this true, as we’re seeing for manufacturing and retail jobs, but that it is also true for highly-skilled knowledge work jobs. Think about what a radiologist does. Much of what they do is read routine x-rays or CT and MRI scans to diagnose issues with patients. Since radiology is increasingly digital, and knowledge of what different conditions and diseases look like can be digitally represented and algorithmically identified, it’s likely that some of what human radiologists do today—the more routine, easy to identify cases—will be handled by computers instead. Doing so will dramatically decrease costs for hospitals because they will have to employ fewer doctors, who require large salaries, health insurance, vacation and sick days, and who have to be hired and managed. Computers don’t.

The same, of course, is true for much of what general practice doctors do as well. Computers like IBM’s Watson could diagnose patients with routine things like the flu and provide a treatment as well. In fact, because Watson would have access to exponentially more medical research, journal articles, studies and patient history (and aggregate patient data), Watson may very well provide better diagnoses and treatments than the average human doctor.

Ford points out this is true for other fields, too, like law. He writes:

Currently there are jobs in the United States for many thousands of lawyers who rarely, if ever, go into a courtroom. These attorneys are employed in the areas of legal research and contracts. They work at law firms and spend much of their time in the library or accessing legal databases through their computers. They research case law, and write briefs which summarize relevant court cases and legal strategies from the past.… Can a computer do the lawyer’s job? (70-71)

Is there any reason to think that computers will never be able to do this kind of basic research and summarization? I don’t think so. What this suggests is that automation will challenge many kinds of knowledge work just as much as low and semi-skilled work. Indeed, companies will have even more reason to automate these kinds of jobs, because they are generally very well-paid jobs.

Manufacturing and retail job elimination, then, is just the first wave of many to come. The question, though, is not how to get those jobs back and protect the ones that still exist. That isn’t going to happen, and trying is counter-productive and a waste of time. The question to ask is: when many of the jobs people depend on are automated, what kind of jobs will they do instead?

That question is, I think, the most important question to answer for the next few decades.

I have some ideas, but for now, I just want to ask the question and want you to think about it. How do we productively employ these people?

November 21st, 2012

If It’s In the Cloud, the Government Can Read It, Says Senate Bill

Senator Patrick Leahy is pushing a bill that would allow government agencies to access private information like email and Facebook posts without a warrant:

Leahy’s rewritten bill would allow more than 22 agencies — including the Securities and Exchange Commission and the Federal Communications Commission — to access Americans’ e-mail, Google Docs files, Facebook wall posts, and Twitter direct messages without a search warrant. It also would give the FBI and Homeland Security more authority, in some circumstances, to gain full access to Internet accounts without notifying either the owner or a judge.

I propose a compromise for the federal government: we, the people, will allow you to access our private information and communications without a warrant if you, the government, allow us to access yours, too. All email, phone calls and documents, any time we please. After all, if you’re not doing anything wrong then you’ve got nothing to hide, right?

UPDATE: Forbes reports that CNET’s article is incorrect, and that Patrick Leahy is not pushing for explicitly allowing government agencies to access private online communications without a warrant.

November 20th, 2012

Endless Musical Choice and the Endless Web

Mike Spies:

Because how else can you form a relationship with a record when you’re cursed with the knowledge that, just an easy click away, there might be something better, something crucial and cataclysmic? The tyranny of selection is the opposite of freedom. And the more you click, the more you enhance the disposability of your endeavor.

I don’t think this is a made-up problem or some kind of misplaced nostalgia for a past where new albums cost no less than $16, required a drive to a retail store to purchase, and gave us very little idea how good they were until we bought and listened to them.

This issue isn’t unique to music. The web brings an endless abundance of information to read and reference, and of new content, always something better behind a click or tap. When that’s the case, there’s no time to focus on what’s in front of you, to absorb everything in it, to understand what it is or what it’s saying, and to think about what it means or how it relates to other things. There’s always a better album, a better article, a new breaking story, so we’re always moving on toward greener pastures.

It’s easy to dismiss this kind of criticism as luddite nostalgia for a world that never really existed, or as “first-world problems”—in other words, the substance-free complaining of people who should be happy with what they have and should shut their mouths because they’re lucky to have it. But that dismissive retort, used all too often in the technology community, ignores what I think is one of its great challenges: to make the age of the web not just one of unlimited access to unlimited information, but one that empowers people as humans.

November 19th, 2012

The Republican Study Committee’s Excellent Paper On Copyright

The Republican Study Committee published an excellent paper criticizing current copyright law (PDF):

Today’s legal regime of copyright law is seen by many as a form of corporate welfare that hurts innovation and hurts the consumer. It is a system that picks winners and losers, and the losers are new industries that could generate new wealth and added value. We frankly may have no idea how it actually hurts innovation, because we don’t know what isn’t able to be produced as a result of our current system.

It’s cogently written and insightful. I highly recommend reading it.

Unfortunately, the RSC pulled the paper after only a day. Here’s their reason why:

“On issues where there are several different perspectives among our members, our policy briefs should reflect that. This policy brief presented one view among conservatives on U.S. copyright law,” said RSC spokesman Brian Straessle. “Due to an oversight in our review process, it did not account for the full range of perspectives among our members. It was removed from the website to address that concern.”

Perhaps. And if it re-appears soon along with arguments as to why our current copyright laws should be maintained, that will be fine. But if that isn’t the case, it’s shameful that the RSC pulled a smart paper.

November 19th, 2012

Walmart’s iPhone App Has “In-Store” Mode

Walmart is taking advantage of mobile devices to make their stores less sucky:

If you opt in, Walmart will use your location to provide you with an app designed specifically for that store. Head to another Walmart and your app will work for that store. It has useful features: You can make a list by speaking into the phone. You can search a product by typing in a word or phrase — tissues, say, or light bulbs — and the app will show you what aisle to go to. It has an interactive map. It shows you promotions specific to that store. And Walmart is testing a feature called “Scan & Go” that would let you scan items as you shop, so you can go quickly through self-checkout.

Cheers to Walmart for embracing mobile computing to make their stores better, and especially for using it to make the checkout process better. I go out of my way to avoid Walmart because, among other reasons, their checkout process is dreadful.

November 14th, 2012

Siri’s Future

Kontra:

A transactional Siri has the seeds to shake up the $500 billion global advertising industry. For a consumer with intent to purchase, the ideal input comes close to “pure” information, as opposed to ephemeral ad impression or a series of search results which need to be parsed by the user. Siri, well-oiled by the very rich contextual awareness of a personal mobile device, could deliver “pure” information with unmatched relevance at the time it’s most needed. Eliminating all intermediaries, Siri could “deliver” a customer directly to a vendor, ready for a transaction Apple doesn’t have to get involved in. Siri simply matches intent and offer more accurately, voluntarily and accountably than any other method at scale that we’ve ever seen.

Siri could undermine mobile advertising as a whole because it knows (or at least can know) what the user wants. There’s no need to display ads, because the user’s desire is already known: all Siri needs to do is connect the user to the appropriate service.

Fascinating point from Kontra. In large part, by only connecting users with services or vendors when they ask for it, this approach also avoids much of the privacy creepiness that Google’s approach entails. The user affirmatively requests Siri to do something for them, and then it uses their information to take an appropriate action.

November 13th, 2012

Siri as the Main Interface

Patrick Rhone:

In other words, what if when we slid to unlock instead of being met with rows and pages of icons we, instead, were met with Siri? What if our primary interaction with such devices was not touch, but voice? What would that look like? What would that feel like?

Exactly the question we should be asking. As voice gets better as an interface for using computers, there’s no reason it should be a supplementary interface—it could be the interface. There would be no reason that devices require a large screen for displaying information and interacting with it. What do those devices look like? How would their purpose change?

The implications for mobile computing are obvious, but one thing I’m wondering a lot about is how a voice interface, combined with applications, would completely change the purpose of the TV. Could the TV be used as a communal computer for basic tasks like getting weather information, movie times, news, et cetera? Wouldn’t it be a very good place to leave notes for other family members (When you get home, the TV could tell you that your spouse went to the grocery store, or that you need to take care of the broken toilet), manage a common calendar, and manage a connected home (lights, heating and cooling, et cetera)? What other uses could a TV-as-computer operated by voice allow that we haven’t thought of?

November 12th, 2012

Evernote 5’s New iOS Interface

Evernote completely re-designed the user interface for their iOS application in the new version, and I think it looks quite good. Definitely take a look; it’s an interface design I haven’t seen before, and it looks promising.

November 3rd, 2012

Google Now Is The Future

From Matthew Panzarino’s Nexus 4 review:

Search for your favorite sports team a few times? Now will start to tell you when they’re playing and what the score is. Spend a lot of time in one place and then move a distance away quickly and Now will know you’re traveling and recommend photo opportunities nearby. It’s breathtakingly brilliant and invasive all at once. But it’s also extremely useful.

November 2nd, 2012

The Verge’s Google Now Profile

The Verge’s Dieter Bohn has a terrific profile of Google Now, which attempts to provide information you want before you ask for it.

Google Now is one of the things Google’s working on that has me most excited.

October 30th, 2012

The Future of iOS

Something I’ve been thinking a lot about lately is where iOS is going. Apple just released the sixth version of iOS, and after more than five years, iOS is significantly more powerful than when the iPhone shipped in 2007. Then, the iPhone had no GPS (or a location-finding feature at all), no copy-and-paste, no voice control, no push notifications, no third-party applications, no multitasking (except for certain applications like iPod and Safari), and just a short list of ringtones—and that was it. It’s come a long way.

iOS 6 was not a significant update for new end-user features. It brought a new Maps application, updated Siri with new capabilities, and added Passbook—significant updates, but not nearly on the scale of past updates. It’s easy to read this as a sign that iOS has matured, and Apple is simply polishing it as-is without changing much. I think that’s dead wrong.

I think that’s wrong for two reasons in particular: remote view controllers and Siri. Both could make iOS a much more powerful operating system than we are used to. Let me explain why they are so significant.

Breaking Down the Sandbox

In iOS, the application model is quite simple: applications play in their own sandbox, and they can’t play in another application’s sandbox. This design is more secure because applications can’t damage other data on the device, but it is also very limiting. Applications have little ability to work together as a result.1

Earlier this month, though, Ole Begemann noticed something interesting in iOS 6: the Mail compose view, which pops up whenever users want to email something from another application, runs in a separate process from the open application. That sounds rather boring, but it’s actually very important. Here’s why.

Effectively, you can think of the Mail compose view as a mini “application” that, when the user wants to email something from an application they are using (let’s say from Basil), slides in on screen and allows them to do so. As a result, I didn’t have to write anything at all related to email for Basil, and yet my customers can email things from Basil to their heart’s content. Now imagine if I, and any other developer, could write mini applications that add functionality to other applications.

For example, Basil could declare that it handles web URLs. Then, I could build the recipe saving feature of Basil as a mini application (stored in Basil) that users can use from any other application that deals with the web. If someone who uses Basil is browsing recipe websites in Safari, they could tap a “Basil” icon in the action sheet, and my mini application would pop up, save the recipe to Basil and get out of the way—all without the user ever leaving Safari. The same could be true for Reeder (save recipes from cooking websites you subscribe to while browsing your feeds), Twitter applications, or anything else—and those developers wouldn’t have to know that Basil even exists. If their application declares that it works with web URLs and Basil declares it can handle them, users will get the functionality without any special work between me and the other developer.
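To make the idea concrete, the routing this would require could be sketched roughly as follows. This is entirely hypothetical — iOS exposes no such API, and every type and name here is invented — but it shows the shape of the system: applications register the data types their mini applications handle, and a host application asks the system for handlers without knowing which apps are installed.

```swift
// Hypothetical sketch: how the system might route data to "mini
// applications" that declare the types they handle. All names invented.

// A declared capability: "I can handle web URLs", "I can handle images", etc.
struct MiniAppRegistration {
    let appName: String
    let handledType: String           // e.g. "public.url"
    let handler: (String) -> String   // receives the data, returns a result
}

final class MiniAppRegistry {
    private var registrations: [MiniAppRegistration] = []

    // Called when an application installs and declares its mini applications.
    func register(_ registration: MiniAppRegistration) {
        registrations.append(registration)
    }

    // Called from the action sheet: every app that declared this type is
    // offered to the user, without the host app knowing any of them exist.
    func handlers(for type: String) -> [MiniAppRegistration] {
        registrations.filter { $0.handledType == type }
    }
}

let registry = MiniAppRegistry()

// Basil declares it can handle web URLs (to save recipes).
registry.register(MiniAppRegistration(
    appName: "Basil",
    handledType: "public.url",
    handler: { url in "Saved recipe from \(url)" }
))

// Safari (the host app) asks who can handle the URL the user is viewing.
for registration in registry.handlers(for: "public.url") {
    print(registration.appName, "→", registration.handler("example.com/recipe"))
}
```

The key design point is the indirection: Safari never names Basil, and Basil never names Safari. Each side declares or queries a type, and the system matches them.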

As a result, applications could work together. This would make iOS a much more powerful system. But it could be even better.


Imagine if developers could not only build mini applications for use with other applications, but also services without a user interface. So, for example, the developers of Tapity’s Languages application could write a service for translating words, and declare that it handles language (or some other generalized language-related function). Applications that deal heavily with text would then be able to translate into several different languages.

That’d be great. But now imagine that not only could these services be written for other applications, but could also be written for Siri using a Siri API.2 Using Languages’ theoretical service, users with the application installed on their iPhone could translate words by asking Siri, “What is ‘milk’ in Italian?”3 Or if you have a flight tracking application installed, you could ask Siri when a flight will arrive. Or if you have a news application, Siri could tell you what’s happening right now.

Those are just a few mundane examples, but the important part here is this: it would allow developers to extend Siri’s capabilities. If Apple builds a Siri API similar to this, Siri could become infinitely useful. It would truly be a new interface for all of iOS, and an interface for doing almost anything.
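To make the speculation concrete, a Siri service API might look something like this. Again, this is purely hypothetical — no such API exists, and every protocol and name here is invented — but it sketches the dispatch model: a service declares the category of request it handles, and Siri’s job is to classify the utterance and hand the parsed request to whichever installed service claims that category.

```swift
// Hypothetical sketch of a Siri service API. Nothing like this exists;
// all types and names are invented to illustrate the idea.

// A service declares the category of request it handles and, given a
// parsed request, returns an answer to speak or display.
protocol SiriService {
    var category: String { get }                  // e.g. "translation", "flights"
    func handle(_ request: [String: String]) -> String
}

// A toy version of Tapity's hypothetical Languages service.
struct TranslationService: SiriService {
    let category = "translation"
    private let italian = ["milk": "latte", "bread": "pane"]

    func handle(_ request: [String: String]) -> String {
        guard let word = request["word"],
              let translated = italian[word] else {
            return "I couldn't translate that."
        }
        return "'\(word)' in Italian is '\(translated)'."
    }
}

// Siri's side: after classifying the utterance into a category (the genuinely
// hard problem), dispatch to whichever installed service claims it.
func dispatch(category: String, request: [String: String],
              services: [SiriService]) -> String {
    guard let service = services.first(where: { $0.category == category }) else {
        return "Sorry, no installed app can help with that."
    }
    return service.handle(request)
}

let services: [SiriService] = [TranslationService()]
print(dispatch(category: "translation",
               request: ["word": "milk", "to": "Italian"],
               services: services))
// "'milk' in Italian is 'latte'."
```

All the intelligence lives in the classification step; once Siri knows the request is a “translation,” the hand-off to a third-party service is simple, which is exactly what would make such an API feasible to extend.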

I want to note that while it does appear likely that Apple is building remote view controllers, my second idea—remote services along with a Siri API—is entirely speculation. I have no idea if that’s what Apple is planning. What I want to show, though, is that iOS is far from maturity. Whatever Apple decides on, we could see dramatic changes to iOS as we know it in the next few years.

  1. It’s worth noting that in iOS 3.2, applications could declare that they can open certain file types. By doing so, users can “send” files from one application to another. For example, if you are emailed a PDF and tap on it, you are presented with the option to open the PDF in GoodReader (if you have it installed), because it declares itself for that file type. This feature made the sandbox model less limiting because users can now move files between applications, but that’s all it does—move them between applications. Applications still have little ability to work together.
  2. This, of course, isn’t exactly trivial; the API would have to identify what the user is requesting, what functionality category it falls under, and then provide any applications which declare they can handle it with the user’s request. This alone is a very difficult problem to solve, and one I might come back to in another article.
  3. This example may be difficult simply because while the service may be able to translate the word correctly, Siri may not be able to speak it in the new language correctly. But it could, of course, just show the answer on screen.
October 30th, 2012