“Web” Category

A Guide to Barbecuing for the Fourth

Over at the Basil blog, I wrote an overview of making great barbecue for the Fourth of July.

There’s not much better than good barbecue; it’s a lot of fun to do, and it’s really not very hard, either. If you want to give barbecue a try, I think this is a great place to start.

This weekend is going to be a bit of a cooking fest for me. For the Fourth of July itself, my girlfriend and I are going to be making grilled steak banh mi, with a creamy avocado and salsa verde sauce, cilantro and home-made pickled daikon and carrots. And on Saturday, my Dad and I are going to smoke a pork shoulder (my first time trying pork shoulder, so it should be fun).

I love cooking. It’s incredibly satisfying to make a good meal for family and friends yourself, and it’s so much fun learning new dishes and experimenting with them. If cooking isn’t quite your thing, give it a try sometime. You might like doing it, and it’s a terrific way to spend time with your significant other, family and friends. The time spent together with people while cooking in the kitchen or backyard is almost always special.

Happy Fourth of July to you all!

July 1st, 2013

Education, Technology, Passivity

Partly in response to my piece on the philosophy of Google Glass, Alan Jacobs wrote this:

But if awakening students from those slumbers has always been the task of the true educator, that task is all the more difficult in a time of technologies of knowledge, or “knowledge,” that asymptotically approach omnipresence. Google Glass, along with a whole range of similar technologies, enforces the very passivity which truly liberal education is concerned to defeat.

That’s absolutely right. Just as “the truth is in the cloud” for web services that sync data, I think we’re setting the stage for the web to become some sort of ostensibly neutral source of true knowledge. When we have immediate, unfettered access to the web’s information, it begins to take on a feeling of absolute truth, even for things that are inherently matters of opinion or taste.

In 2010, I argued that this idea of “openness” that Facebook (and to some extent, Google) has pushed so hard for—that we will all be better off if we share more of our lives and identity with the public—undermines public and private as separate spheres, and therefore also the space for people to form their own beliefs and identity. As the public sphere encroaches on and overruns the private sphere, it necessarily becomes harder to experiment with new tastes, opinions and beliefs, and to settle on certain ones, because the entire process is more open to observation and judgment.

These things—devices like Google Glass, which give us immediate access to the web, and social networks, which push us to share more and more of ourselves with the public—are intertwined. The result could be, or already is, a greater emphasis on “Well, everyone else thinks…” and less on “Well, I believe this to be true because…”. And that should be profoundly worrying.

June 6th, 2013

“Free”

Dr. Drang on “free”:

First, you have to recognize that you’ve been “the product” your entire life. So were your parents and so were your grandparents. Television and radio, newspapers and magazines—they all sell your attention to their primary customers: advertisers. Even things you “pay for” sell you off to advertisers because you really don’t pay for them—you only cover part of the costs. Despite this obvious and longstanding fact of life, while everyone bitches about commercials, no one says TV networks are insidious or underhanded because they run ads. I’ve never heard of anyone boycotting Mad Men because they don’t want to be a product sold by AMC.

June 3rd, 2013

Creating Machine Avatars For All

Dmitry Itskov wants to unlock us from the prisons of our physical bodies and upload “us”—the sum of our brain’s connections that create “us”—into affordable machine bodies. He believes this will free us to live better, more meaningful lives:

Mr. Itskov says he will invest at least part of his fortune in such ventures, but his primary goal with 2045 is not to become richer. In fact, the more you know about Mr. Itskov, the less he seems like a businessman and the more he seems like the world’s most ambitious utopian. He maintains that his avatars would not just end world hunger — because a machine needs maintenance but not food — but that they would also usher in a more peaceful and spiritual age, when people could stop worrying about the petty anxieties of day-to-day living.

“We need to show that we’re actually here to save lives,” he said. “To help the disabled, to cure diseases, to create technology that will allow us in the future to answer some existential questions. Like what is the brain, what is life, what is consciousness and, finally, what is the universe?”

As seems to happen, this is perfectly timed with my piece on Google Glass last week.

Perhaps I’m just cynical, but this story seems very much like a microcosm of much of technology, and especially Google Glass. There’s a very nice veneer on top of Itskov’s avatar idea, a veneer that says it’s there to help people and to solve real problems: curing diseases (because our bodies will no longer be organic), ending hunger (because the only sustenance we will require is electricity and maintenance), and freeing people from the “petty anxieties” of day-to-day life, like providing a living for yourself and your family. As a result, humanity will be free to tackle much larger issues.

I say it’s a veneer because whether or not Itskov’s vision has any chance of being realized (it certainly doesn’t in the relatively near future), his solution to these “problems” solves them by eliminating much of what makes us “human” in the first place. Who we are as individuals is not defined merely by the connections in our brains, but also by how we experience and interact with the physical world, and by our struggle to survive and improve our lot. Even if you can successfully map a person’s brain and re-create the individual within a computer, they inherently won’t be the same person, or feel the same way, by nature of their new body. Sudden immortality, coupled with no need to ever seek food and survive, could play havoc with a brain evolutionarily designed to focus primarily on survival.

In other words, the “solution” may destroy what’s worth saving in the first place: humanity.

June 3rd, 2013

On the Philosophy of Google Glass

Technology inherently alters how we live, what we do, how we think, how we die. Modern medicine has turned many diseases that once brought death more often than not into things of the past or minor annoyances, and injuries that would have been debilitating or deadly into ones that can be overcome. The motion picture didn’t simply make plays something that could be watched over and over again by many different audiences; it created a mass entertainment accessible to nearly everyone, and changed how we learn about news events and conceive of them. The car allowed for suburban sprawl, and for people to travel where they please, whether for the evening or for extended trips across thousands of miles; and in so doing, the car changed what the American “ideal” is, spawned innumerable cultural groups centered around it, and helped construct a rite of passage into adulthood for teenagers. Getting your driver’s license and first car is the first step toward being an adult, but it is also a right to freedom: to go wherever you please, on your own or with friends, on nothing more than a moment’s notice.

For as long as humans have been creating tools to influence the world around us, the tools—technology—have been influencing us, too. It’s an inevitable byproduct of using something, and this isn’t exactly a new insight. Smartphones, and now “wearable” computers like Google Glass, are merely the latest human technologies to influence their creators.

But while they may only be the latest example of something that’s been happening for as long as humans have created tools, there is, I think, something very different about so-called wearable computers like Google Glass. They have the potential to integrate themselves so deeply into the user that over time, and as they develop further, there will be little reason to differentiate between the device and the user. Removing your Glass device will feel very much like losing a limb or sense—something that you’ve grown used to depending on and using is gone. Through this much deeper integration, these devices could fundamentally alter the human experience and what it means to be human.

That might sound alarmist, like science fiction, or—if you own a smartphone—just remind you of that small moment of dread, like something’s wrong, when you leave the house without your phone.

Wearable computing has much more potential than “wearable” implies. Through overlays on our vision (or, potentially, more direct connections with the brain), things like Google Glass can become another sensory input as well as an output. Google Glass already allows you to look something up on Google (“Google, how tall is Mt. Everest?”) and get directions without ever pulling your phone out of your pocket, or using your hands at all; you ask, and the information is spoken to you or overlaid at the top of your vision. It’ll notify you about your flight this afternoon or about messages you receive, and you can reply to those messages, too. You can snap a photo or video. All without using your hands, and it’s all—again—on top of your vision.

The ultimate goal is to form a direct connection between our brains and the web, and all that it entails. Google Glass is merely a first step toward that: a hack that hijacks our vision to provide input to our brains and hijacks our voice for control. A direct connection with the brain is obviously the ideal; there would be no “glasses” to wear, and no need to use voice to control it, which isn’t very efficient. In In the Plex, Steven Levy recounts a conversation he had with Larry Page and Sergey Brin in 2004:

Back in 2004, I asked Page and Brin what they saw as the future of Google search. “It will be included in people’s brains,” said Page. “When you think about something and don’t really know much about it, you will automatically get information.

“That’s true,” said Brin. “Ultimately I view Google as a way to augment your brain with the knowledge of the world. Right now you go into your computer and type a phrase, but you can imagine that it will be easier in the future, that you can have just devices you talk to, or you can have computers that pay attention to what’s going on around them and suggest useful information.

The web’s information will be our brain’s information, and the web’s services will be our brain’s tools. We would be able to immediately answer whatever question we have, or find whatever information we need. If you’re fixing your sink, instructions on how to do so (or maybe a video?) are just a thought away. In a few moments, you’ll be able to make fabulous chicken tikka masala. Humanity’s knowledge will have a direct pipe into our brains. And you’ll be able to do incredible things, too. You could snap a photo with a thought, send a message to someone, or file away something you’ve come across to a note-taking service. You could control your home’s lights and television. You could… well, the list goes on.1

And, of course, you’ll be able to check Twitter and Facebook, and post to them, wherever you are and while doing whatever else.

I say all this because I don’t think there’s a significant difference between doing all of this through a Google Glass-like device or through some direct brain connection, as Page proposes. If they’re successful at their purpose, both will quickly be absorbed into our senses. Just as we’ve gotten used to being able to pull out our smartphones whenever we have a spare moment or need to settle some dispute or bit of trivia, we’ll reflexively ask Glass for the answer to a question, or to snap a photo, or to check the news real quick, or to look through our Facebook and Twitter streams, even at moments when we probably shouldn’t. And since the amount of effort it takes to do so will be so much smaller than it is with a smartphone (which is already terribly small), we will do all of it that much more frequently. No event will be complete without taking a photo and posting it to our social network of choice, because unless it’s documented, and unless we’ve stuck it in everyone else’s stream, it didn’t really happen.

I don’t think that’s a positive, and that’s to say nothing of the social effects of having a web-connected camera and microphone strapped to our faces. (Dustin Curtis touches on this in his piece about his experience with Glass.) But what I find most troubling is the philosophy underlying Larry Page and Sergey Brin’s thoughts on devices like Glass. They say that Glass’s goal is to get technology “out of the way,” but that isn’t it. The idea is that we will all be better off if we’re always connected to the web, always on, with uninterrupted and instantaneous access to it and to humanity’s “knowledge.” The idea Page expresses is that if I can immediately learn about something I don’t know much about, I’ll be better off: I’ll be able to make smarter decisions and live a deeper, richer life by spending the time it would have taken to research and learn about something on more meaningful and substantive tasks.

I think, though, that that is a terribly deluded and shallow understanding of what it means to “learn” about something. When we—humans—learn about something, we are not simply committing facts to memory so we can recall them in the future. That’s a very small part of a much larger and more important process. To “learn” about something is to study the information (when historical events occurred, what happened, and so on), to find connections between it and other things we’ve learned and experiences we’ve had, and to synthesize it into something greater—knowledge. Knowing, say, the Pythagorean Theorem in isolation isn’t of much use, but connecting it to your need to find another object’s location suddenly makes it very useful. And more abstractly, knowing Roman and Greek history isn’t very useful all on its own, but being able to learn from it and apply its lessons to current political difficulties might prove very beneficial.

Synthesizing information into knowledge isn’t an instantaneous process because that’s not how we work. We form conclusions and connections between new information and other things we know by thinking through it and living it. Conveniently, and crucially, taking time to learn something or to answer our own question by poring through books and articles and our own investigation allows us time to do that. We have little choice but to draw conclusions and form connections between what we’re looking at and what we already know or have seen before because our brains are working over the material for the answer we seek. We find knowledge when we engage our brains. And, moreover, we often stumble into things unintentionally while actually looking for something altogether unrelated. Things that end up becoming more important than what we were originally looking for in the first place.

Page’s idea—that we would be fundamentally better off if we had immediate access to all of humanity’s information—ignores that. It provides facts, but elides conclusions and connections. What’s worse, it starves us of opportunities to use our skill for critical thinking, and since it is a skill and is therefore something that must be developed and practiced, it starves us of the chance to develop it.

I find that troubling. Glass is not a technology that is designed to amplify our own innate abilities as humans or to make us better as humans, but rather one that acts as a crutch to lean on in place of exercising the very thing that makes us human. I don’t find that exciting. I find that disturbing.

This may all sound like so much hyperbole. After all, we’ve adapted just fine to prior new technologies, despite Luddite claims that they would destroy us. And, undoubtedly, we will adjust to this sort of thing, too, and the world will not come crashing down. I think, though, that this sort of thing—a more intimate connection between us and computers—is a path we are heading down, and since its more intimate nature also makes it more influential over us, we should deeply consider what it’s intended to accomplish for us and what might happen to us in the process.

Technology isn’t a force of nature that just happens. It’s something that we create, and so we should question why we are creating it. This has always been true, but I think it’s even more important now that we do so.

Technology, I think, should exist to improve our lives as humans, to magnify the good and minimize the bad, rather than change our nature or experience. That’s what I believe.

That’s what I believe, and you may disagree. I would suspect many do, especially those with more of a bent toward transhumanism. And that’s fine. But we should be having a much larger discussion about our technology’s intent than we are now, because it’s only increasing in importance.

  1. If the idea of a direct brain interface seems ridiculous, it shouldn’t; researchers have shown that monkeys can control a robot’s motion with their brain activity, and the means they use are relatively rudimentary. The brain appears to be able to adjust to and adapt to new sensory inputs.
May 31st, 2013

“Dueling philosophies” for Wearable Tech

While comparing Google Glass to a (theoretical) Apple-made watch, Ken Segall makes this observation:

Second, there’s the company’s love of humanity. That is, Apple has never created technology for technology’s sake. It creates technology that strikes a chord with human beings.

I’d restate that differently: Apple seeks to make technology that makes us better as humans, rather than trying to change what we are. The Mac, iLife, the iPod, iPhone and iPad all fit this very well. None of them try to redefine what it means to be human.

Google Glass, as a technology, begins to do exactly that. It’s a first stab at providing an immediate connection between the web and our brains, and it does so by overlaying an interface on our most important sensory input. There’s no meaning in “it gets technology out of the way,” as Google is wont to say, because Glass is intended to always be in the way, to become a part of us in a much more literal way than smartphones have.

Not only is that unappealing to me, but I find the idea—that we will be better off if we literally integrate the web into ourselves and thereby fundamentally change the human experience—disturbing.

May 31st, 2013

Jamelle Bouie’s “Star Trek: Into Darkness” Complaints

Jamelle Bouie wasn’t satisfied with Star Trek: Into Darkness:

These aren’t as disjointed as they look, and they all point to my main problem with Into Darkness: I don’t mind that Abrams and Lindelof wanted to remake “Space Seed” and Wrath. What I mind is that it was half-assed, with hardly any thought given to the characters. The first movie could get away with what it was—a long sequence of action set-pieces strung together by a threadbare plot. But this needed to have an actual core, and Abrams couldn’t deliver.

Jamelle is dead-on. My biggest misgiving with J.J. Abrams’s style of filmmaking is that he likes to create a facade of substance in his films without much at all underneath. They’re science fiction wrappers around little more than adventure blockbusters.

May 28th, 2013

Xbox One

The Xbox One sounds really nice, and the reason why has nothing to do with games:

“Xbox,” she says, and a small faint Xbox logo in the upper right-hand corner of the screen begins to glow; the Kinect is listening. “ESPN,” she finishes. The guide, which is currently highlighting Seattle’s local channel 4, switches to channel 206: ESPN. Because the Kinect’s voice control is already engaged, she doesn’t need to prompt it again, so she just says “watch.” There’s a flash as the connected DirecTV makes the change, and all of a sudden SportsCenter comes on the screen. The most shocking part about it is the ease; there’s no more hunting through your guide for FX or Travel Channel or whatever network or show you’re looking for. You can just say “Xbox, watch Travel Channel” or “Xbox, watch Sons of Anarchy,” and you’re there. If the show itself isn’t on, a global search will collate all of your options for watching it, from on-demand to streaming services.

Microsoft’s best asset for the post-PC future really is the Xbox, and I hope they start acting like it.

May 21st, 2013

A closer look at Paper’s zoom

The people at FiftyThree have a terrific look at designing Paper’s new zoom feature:

In this case, however, it was one of those ideas that our co-founder Andrew Allen had stored in the back of his mind, and we knew it was right the first time we looked at it in design. And yet, the road to release is long, and paved with a thousand thorny details. Your attention to those details is what sculpts the experience of the product, and we pride ourselves on being careful sculptors.

Paper’s zoom is a case study in good design. They didn’t toss in an under-conceived feature after someone requested it; instead, they thought through its purpose and implemented it to serve that purpose. The end result is a very natural, useful feature that’s better than most implementations because they considered its context.

I wanted to say, too, that the excerpt quoted above captures something I think everyone who works on software and really cares about the end result ends up feeling: the real work is in the tiny details that, if you do your job right, most people will never notice, but that add up to an experience users can absolutely discern, one that makes your product functionally and viscerally superior to others.

May 21st, 2013

The One-Person Product

A terrific retrospective from Marco Arment about his days working on Tumblr with David Karp:

David always had a vision for where he wanted to go next. I was never the “idea guy” — in addition to my coding and back-end duties, I often served as an idea editor. David would come in with a grand new feature idea, and I’d tell him which parts were infeasible or impossible, which tricky conditions and edge cases we’d need to consider, and which other little niceties and implementation details we should add. But the ideas were usually David’s, and the product roadmap was always David’s.

May 20th, 2013

Using Robots to Solve the Elderly Health Care Problem

Nick Bilton reports on the development of robots to aid in elderly care:

Sherry Turkle, a professor of science, technology and society at the Massachusetts Institute of Technology and author of the book “Alone Together: Why We Expect More From Technology and Less From Each Other,” did a series of studies with Paro, a therapeutic robot that looks like a baby harp seal and is meant to have a calming effect on patients with dementia, Alzheimer’s and in health care facilities. The professor said she was troubled when she saw a 76-year-old woman share stories about her life with the robot.

“I felt like this isn’t amazing; this is sad. We have been reduced to spectators of a conversation that has no meaning,” she said. “Giving old people robots to talk to is a dystopian view that is being classified as utopian.” Professor Turkle said robots did not have a capacity to listen or understand something personal, and tricking patients to think they can is unethical.

May 20th, 2013

“Google, the Villain”

Matt Alexander:

The reason advertising is the predominant means for Google’s revenue is because it is business model agnostic. Knowing it can advertise products to people atop its various platforms, Google can sustain and build virtually anything it wants for the maximum number of users. The cost of entry for a user into the Google ecosystem is a minuscule processing cycle occurring somewhere deep within an anonymous server farm to push a targeted ad towards you.

Perhaps you, as a technologist, balk at such an intrusion. Most, however, focus only upon the end-result.

For mobile, though, it isn’t the case that advertising allows Google to “sustain” anything it would like to build. Estimates place mobile’s contribution to Google’s total revenue at 13 percent for 2013, despite all of the Android and iOS devices in use. What’s allowed Google to experiment so much on the “desktop” web and on mobile is revenue generated from search advertising, which has been strong for years.

That success allowed the experimentation to happen. As search increasingly shifts to mobile devices, however, revenue generated from mobile will have to grow substantially to replace desktop-based search, and grow dramatically to offset mobile’s lower advertising margins. Which means that advertising probably is not, and will not be, something built atop Google’s new mobile products and services, but will instead be built into the products and services themselves to maximize effectiveness. You won’t see ads in a yellow box beside search results on Google Glass. The ad might be the pizza restaurant Glass suggests when you ask for a dinner recommendation, or the nice Thai place a couple blocks away that Google Now recommends because it knows it’s almost dinner time and, from your search results and emails, that you like Thai food.

Which is fine, in and of itself. But things can get into gray territory very quickly when a company is monitoring your location, search queries, emails, calendar, social network activity (if you use Google+), and so on, in order not only to provide you with specific recommendations and timely information (“you need to leave in ten minutes for your 2pm meeting”), but also to serve you better-targeted and timelier adverts. Many people are fine with that, of course (I have my misgivings, but don’t find it morally wrong), but many others might not feel so good about their activity being logged so they can be sold to more effectively. And it’s certainly a lot easier to cross the line into abusing that power to maximize revenue rather than to maximize utility for the user when your primary source of revenue is the advertisements themselves.

That says nothing, either, about whether the world Google is creating with Glass and Google Now is a desirable one. But, nonetheless, Google is not a villain, nor is Apple a savior. As Matt notes, Google is filled with incredibly bright people working feverishly to create things they think will make people’s lives a little better. That’s absolutely true, and I’ve no doubt that Sergey Brin and Larry Page’s motivations are the same. That does not mean, however, that the business model they choose is separable from the form their products and services take, nor that the products and services they create are necessarily beneficial. I don’t think it’s Google’s goal to push as many advertisements as possible through everyone’s eyeballs, but I’m also concerned about the philosophical motivation underlying Google’s path (and, for that matter, Facebook’s).

May 20th, 2013

Square Cash

Square has a teaser page up for Square Cash. The tag line is “Email money to anyone’s debit card.” (Small side-note: that’s one of the best teaser pages I’ve ever seen.)

Sign me up, please. This sounds awesome.

May 20th, 2013

Google+ Gets Good With Photos

The Verge’s Casey Newton has a good look at the new Google+ features:

Enter the data center. Google is betting that its powerful machine-learning algorithms will work not just to attract users to Google+, but to keep them there. And as with Instagram and Facebook before it, photographs will be central to the effort. Photographers were among the most enthusiastic early adopters of Google+, which alone among its peers displayed photos up to 2,048 pixels wide. (Last month Google began permitting full-size photos to be uploaded, though they count against the free 15GB shared-storage limit Google now has for Gmail, Drive, and Google+.) Until now, Google has worked to make Google+ the best online home for your photos. With today’s update, it wants to make your photos look better there than they do anywhere else.

There are certainly some neat ideas here; surfacing the better shots (ones that aren’t blurry, under- or over-exposed, or duplicates) is nice, for example, and if the auto-enhance isn’t overzealous, that’s convenient as well. (Some features, like combining multiple group photos so you get one where everyone’s smiling, or automatically smoothing people’s skin to hide imperfections, might be a little less exciting.)

Google announced a lot of exciting things. Google search by voice, where Google will in many cases reply with answers instead of just search results, is coming to the desktop as well, and is continuing to get much better. Google+ is generally nice, and the iOS application is quite lovely. Their new streaming music service looks pretty good. Although I think putting a screen in front of our eyes is a terrible idea, Google Glass is impressive work in many ways. Etc., etc. But while much of what Google is doing is nice in and of itself, there doesn’t appear to be a thread running through it all—it’s just a lot of stuff.

Google is doing incredible things and is capable of incredible things, but I think that’s their single biggest weakness: an inability to focus their work around a single thesis. I speculated in March that Larry Page was re-focusing Google; that may still be true, but this year’s Google I/O certainly doesn’t support that idea.

May 15th, 2013

Mathew Honan’s Fitbit Flex Review

Mathew Honan:

In well-defined markets, it’s rare to see a breakthrough device. And yet here we are. There are a lot of sleep and activity trackers to choose from right now, but none better than the Fitbit Flex. It is the most wearable, best-syncing device in the scrum, with the best app to boot. And it does all this at a great price.

This is one of the most interesting areas in technology right now.

May 10th, 2013