“Web” Category

Calca

Calca is an incredible new calculator for OS X and iOS. “Calculator,” though, is not the proper description; it’s a text editor that allows you to declare variables and do live calculations with them.

What a good idea, and something that should have been obvious before. It’s like writing out calculations on paper by hand, except the calculations happen live.
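If it helps to picture how a document like that behaves, here’s a rough sketch of the idea in Python. This is not Calca’s engine or its syntax—just an illustration, under my own assumptions, of treating a plain-text document as a series of variable declarations and expressions that get re-evaluated every time the text changes.

    # A rough sketch of the live-document idea (not Calca's engine or syntax).
    # Each line is treated as either a variable declaration or a bare expression,
    # and the whole document is re-evaluated whenever the text changes.
    def evaluate(document: str) -> str:
        env = {}  # variables declared so far, in document order
        out = []
        for line in document.splitlines():
            stripped = line.strip()
            if not stripped:
                out.append(line)
            elif "=" in stripped:
                # Naively treat any line with "=" as a declaration, e.g. "rate = 25".
                name, expr = stripped.split("=", 1)
                env[name.strip()] = eval(expr, {"__builtins__": {}}, env)
                out.append(line)
            else:
                # A bare expression: show its current value inline.
                value = eval(stripped, {"__builtins__": {}}, env)
                out.append(f"{line} => {value}")
        return "\n".join(out)

    print(evaluate("rate = 25\nhours = 6.5\nrate * hours"))
    # rate = 25
    # hours = 6.5
    # rate * hours => 162.5

Calca itself does far more than that, of course, but that tight loop between what you type and what you see is the core of its appeal.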

You’ve got to use it. It’s dramatically more useful than a normal calculator application or a spreadsheet for most of what we use them for; once you’ve used Calca, going back to either feels laughable.

Calca is one of my favorite new applications. Go get it.

July 28th, 2013

Scanadu Scout

Scanadu Scout is billed as a “medical tricorder,” and if it does what they say it does, that seems like a fair description.

It’s a little device you hold up to your forehead for ten seconds, and it tells you your heart rate, skin and core body temperature, blood pressure and a few other things, too. Pretty incredible.

July 24th, 2013

Fitbit Flex

At the beginning of June, I began wearing a Fitbit Flex. I decided to purchase one for two reasons. First, wearable computing is growing very quickly, and is increasingly interesting as well. Second, the idea of measuring (a few elements of) my health on a daily basis is fascinating to me, so I thought it might be beneficial to do so.

The Flex has become a much larger part of my daily routine than I thought it would. If you are unfamiliar with it, the Flex is a relatively unassuming wristband-bound device that tracks how many steps you take and your movement while you sleep, and syncs that data to a companion iPhone application over Bluetooth. The application shows you (approximately) how many steps you took each day, what (approximate) distance you walked, and (approximately) how many calories you’ve burned. It also allows you to log meals.

Typically, when I begin using some kind of system that’s supposed to improve my life in some way but requires constant input from me, I use it for a while when I’m excited about it, until one day something gets in the way (life) or I simply forget, and from then on I stop using it altogether. To-do applications are the best example of this; to be useful, they require constant user input. As a result, once the user fails to enter a new task, the motivation to continue using them begins to disappear.

I assumed the Flex and meal logging would follow that same pattern—I would use it for a couple weeks because I’d still be excited about a new piece of technology, but once that newness wore off and once I forgot to log a day’s meals, I would stop paying attention to it. And then it would be another device in a drawer.

After nearly two months of use, though, I’m still using it. And not just some of it, like step counts—I’m using meal logging and sleep tracking as well.

I think that’s because the Flex provides a certain amount of information without any effort on my part. As long as I wear it, I can see how much I’ve moved around each day, and a (good enough) approximation of how many calories I’ve burned. That’s quite informative on its own. Actual data (over a fairly long period of time) makes it much clearer to me that I need a sustained plan for getting in better shape, and crucially, it also makes the days when I am quite active much more rewarding. Seeing how many miles I’ve moved, and feeling the pleasant little buzz the Flex makes when I cross my daily goal, is surprisingly powerful. It’s certainly more powerful than the vague sense I had before that I wasn’t active enough.

As a result of that “free” information, I have a strong motivation to log my meals as well, because doing so will show me my (rough) caloric input versus my (rough) caloric output. We all know that certain foods are very high in calories, carbohydrates and fats, but it’s an amazing thing when you can get a fairly good idea of how many calories you’ve already eaten for the day, how many you’ve burned, and what that piece of chocolate cake is going to do. Suddenly, there’s a higher internal cost to eating badly, because you can see it all laid out for you.

But interestingly, logging my meals—something I’ve gotten in the habit of doing for the above reasons—has had a more subtle effect, too: eating well consistently is much more satisfying than it otherwise would be. Each time I check in that I had fruit or vegetables rather than chips or french fries, it’s gratifying; it feels like a bit of an accomplishment. I didn’t expect that at all, but at least for me, there’s absolutely a feeling of accomplishment that results from consistently eating and logging healthier meals and snacks.

Because I now have immediately accessible information about how much I exercise and what I eat, I have much more motivation to plan my meals and exercise, and to stick with the plan. Before, it was fairly easy to skip out on a day’s exercise (which turned into a few days, and then weeks…) or to shrug off a day of poor meal choices as a one-time fluke (even if it wasn’t), because I could. It’s much harder for me to do that, though, when the information is right in front of my face. That’s important.

What’s important, too, is that it hasn’t turned me into a calorie-counting, joyless automaton. It hasn’t turned eating into something that’s purely for providing energy and nutrients. I don’t avoid every large meal or dessert just because it might bump me over my calorie target for the day. In fact, it’s been liberating: because it’s helped me impose a bit of discipline on my regular eating, having an occasional large meal or a dessert doesn’t feel shameful at all—it’s something I can enjoy without worrying that I’m eating terribly. I know I’m consistently eating well, so occasional indulgence won’t hurt.

It’s interesting how powerful a little information can be.

July 23rd, 2013

“Zeitgeisty Books Bundle”

Tomely has put together a terrific ebook bundle that includes Jack Cheng’s These Days. The bundle is completely DRM-free, a portion of the proceeds goes to charity, and if These Days is any indication of the other books’ quality, it’s a damned good collection.

Plus, this is just a good idea. I love this kind of thing.

July 22nd, 2013

Digg Reader (Sponsor)

My thanks to Digg for sponsoring this week’s RSS feed. This one should be relevant to just about everyone, so go give it a look!


Digg (yes, that Digg) has released a new RSS Reader for the web, iPhone, and iPad (Android coming soon). The design is sleek and clean, and the apps are speedy and efficient.

Whether you’re a hardcore RSS junky or simply want all your favorite online reading in one place, Digg Reader is for you. It’s free and available today!

Sponsorship by The Syndicate.

July 13th, 2013

Innovation Policy

Reihan Salam links to a post by Ross Eisenbrey that argues that government, not business or markets, is primarily responsible for the innovation we’ve seen in technology:

Mazzucato suggests that, given the extent to which tech companies like Apple and Intel owe their great good fortune to the federal government’s investment in R&D, they should share more of their profits with the taxpayers. Instead, of course, Apple has been offshoring profits to avoid taxation and most of the tech industry is contributing to the efforts of the U.S. Chamber of Commerce and the rest of the organized business lobby to cut corporate taxes and shrink the government. As Mazzucato makes clear, cutting taxes and the government is no recipe for an innovative, competitive future—just the opposite.

Mazzucato points out that many of the iPhone’s core technologies, such as solid state storage, capacitive sensors and GPS, have their roots in government-sponsored labs. She concludes that government is therefore largely responsible for the innovation itself, and so (1) we should continue supporting government-funded research projects, and (2) those companies that benefit from taxpayer “risk-taking” should “share” more of their profits with the government.

But as Salam points out, companies have no moral obligation to do so. The government funded much of that research for its own purposes. Salam writes:

The U.S. government devised the technologies Mazzucato identifies for its own, usually defense-oriented reasons. Mazzucato implicitly suggests that in a counterfactual universe in which the Cold War had never taken place, and in which defense expenditures hadn’t diverted spending from other domains or forced higher tax levels, etc., innovations in information technology would not have taken place either. The decades that preceded the Cold War, during which there was considerable private sector innovation in early information technologies, suggests that this is not the case, but of course we can’t really say.

What’s worse, though, is that requiring some formal compensation for that research would undermine the very innovation that Eisenbrey and Mazzucato claim the government was actually responsible for. Salam again:

As Amar Bhidé often notes, an Englishman pioneered the World Wide Web under the auspices of the government-financed CERN laboratory in Switzerland, yet the U.S. has been the main source of consumer internet innovation. U.S. internet firms do not, however, pay the Swiss and other European governments a formal innovation bounty. Part of the reason is that everyone profits from the free flow of knowledge, which is why excessive patents are such an economic scourge.

The reason is that doing so would reduce that free flow of knowledge, and therefore undermine the actual work it takes to create a useful product and bring it to market. Eisenbrey conveniently skips over the 1970s, 1980s, 1990s, and 2000s, when technology companies invested huge sums of capital and effort into developing these base technologies into something useful for consumers, and into something consumers would buy. There’s an implied derision of that effort as something other than innovation, but it absolutely is innovation, and it’s what actually makes those technologies useful for people. Without those companies continuing to iterate on solid state storage, and without other companies creating salable products that utilize it, solid state storage never would have been anything more than a curiosity in a lab. Similarly, without Intel developing the microprocessor, and without Apple, Microsoft and the PC makers creating PCs built on those microprocessors that appealed to consumers, microprocessors never would have developed as they have, either. And as a result, those technologies never would have evolved enough to create a handheld, touchscreen phone that’s always connected to the web. It would have been impossible.

Note, however, that this story doesn’t minimize the role of government-funded research. Rather, it shows that such research plays a role in innovation, but that it is not the entire story by any stretch. And it shows, too, why the free flow of ideas and technology is so important. Without it, there can be very little actual innovation, because “innovation” inherently means seeing a connection between disparate ideas and technologies, seeing how they can fulfill a need for people, and putting them together such that the result is something that didn’t really exist before. Innovation may be greater than the sum of its parts, but it is nonetheless the summation of many different things that already exist. Placing formal restrictions on those ideas, such as overbearing patents or, in this case, requirements to pay back more to a government that claims responsibility for them, along with state direction of innovation, impedes that flow of ideas, and innovation with it.

This is why simplistic stories about where new ideas and products come from, and simplistic moral stories about who deserves what for those ideas and products, can be so damaging—they elide much more complicated systems.

July 11th, 2013

Apple’s New Beginning

At WWDC this year, Apple announced the most dramatic update to iOS since iOS 2 introduced the App Store and SDK for developers. The update is also the most controversial.

In 2012, Tim Cook fired Scott Forstall, Apple’s senior vice president (SVP) of iOS, and distributed his responsibilities among Apple’s executive team. Craig Federighi became SVP of software, unifying the iOS and OS X teams under one leader. Eddy Cue received the title SVP of “Internet Software and Services,” unifying Apple’s iTunes, iBooks and App stores, iCloud, Maps and Siri. And in addition to hardware design, Jonathan Ive assumed responsibility for Human Interface design in the shake-up.

At the time, Adam Lashinsky reported that Forstall refused to sign an apology for iOS 6’s Maps release, and that this sealed his fate. While his refusal may have been what precipitated his firing, Lashinsky also says that Forstall didn’t get along with Ive. I’ve heard similar things as well, and that those difficulties were not isolated to Ive.

I don’t think Cook decided to fire Forstall simply because he didn’t get along with others in Apple’s management team, or because he refused to take public responsibility for Maps’s issues. Rather, I believe it was a result of Cook re-organizing Apple’s management structure to reflect his becoming CEO. Cook is not Steve Jobs, and he does not pretend to be, so he decided to change Apple’s structure to reflect that.

Jobs fulfilled a rather unique role as CEO of Apple. Rather than oversee and manage the work of others, Jobs was intimately involved in product and product design decisions. No detail was too small for Jobs’s attention. Jobs both originated product ideas and took part in iterating on them, but more importantly, Jobs acted as a filter for Apple. All product decisions ultimately passed through him; he approved new products and new directions, and vetoed them, too. As a result of this role, collaboration among his lieutenants and teams wasn’t as important; indeed, Jobs was known to foster competition and even conflict among individuals and teams to try to elicit their very best work, and then he would choose what he saw as the superior work and direction for Apple.

Cook, as far as I know, doesn’t scrutinize each pixel of an application’s design. He doesn’t have Jobs’s ability to understand what makes for a good product. Jobs was one-of-a-kind, and Cook recognizes that. Recognizing that, however, means that he couldn’t continue Jobs’s management style. Instead, Cook needs to rely on his management team to replace Jobs’s role. Each member must take absolute responsibility for their area of focus and must be incredibly talented at managing it. Most crucially, though, because Apple no longer has the singular filter that all larger product decisions pass through, that management team must work together. Apple could withstand conflict and islands—even benefit from them—with Jobs because each area ultimately ran through him, and because he directed each area. Since Cook can’t fill that role, he needs supremely talented people in charge of each area working with each other to set Apple’s direction. Jobs’s Apple could feed off of discord, but Cook’s Apple must feed off of collaboration.

In Apple’s introduction video for iOS 7, Jonathan Ive says that “We see iOS 7 as defining an important direction and in many ways, a beginning.” While Ive may have meant they saw it as a new beginning for iOS, iOS 7 also marks the beginning of Apple’s new management structure.

When Cook fired Forstall last year, it wasn’t clear what it meant or whether it was for the better. Embarrassingly, Cook hired John Browett to head retail in April 2012 and fired him in October along with Forstall. One way to read that—and many did—was that Cook’s leadership was failing; he had hired an obviously bad fit for retail and was forced to get rid of him in six months. In that light, Forstall’s firing and Apple’s management restructuring looked like it could be the result of a struggling management team.

Until WWDC this year, it wasn’t clear whether that view was correct, or whether it was a part of Cook’s reorganizing Apple to work best under new leadership. Today, though, I think it’s clearly the latter view that was correct.

With that out of the way, I believe that this year’s keynote was meant to establish the foundation for Tim Cook’s Apple. In episode 19 of the Accidental Tech Podcast, Marco Arment, John Siracusa and Casey Liss discuss the keynote introduction video, which explains Apple’s motivation. Siracusa says that while he liked the video, he thinks it suggested they were about to introduce something groundbreaking, and that OS X Mavericks, the new Mac Pro and iOS 7 didn’t live up to that. Siracusa might be right, but I think he misses the video’s intent. It wasn’t meant to congratulate Apple for being great; rather, it was meant to affirm Apple’s motivation for what they do. Together with the “Our Signature” ad, I think it is the equivalent of the “Think Different” campaign for the Cook era.

Famously, Jobs said that the people at Apple shouldn’t try to make decisions by asking themselves what he would have done. Instead, he said, they should just do what’s right. Clearly, Cook took that to heart. This is Cook’s Apple, and they are not constraining themselves by what feels Jobs-like. Cook hasn’t confused the trappings of Jobs’s Apple—how Jobs managed the company—for its heart: an irrepressible desire to make insanely great products that improve people’s lives and give them joy.

Apple, then, has changed significantly since 2011. Things are quite different at the top than they were then, and to my eyes, Apple seems more open to the world than it’s ever been in important ways, too. But those changes have all been made so that Apple can continue doing what they always have. This may be Cook’s Apple, but the core is just as it’s always been.

July 9th, 2013

A Guide to Barbecuing for the Fourth

Over at the Basil blog, I wrote an overview of making great barbecue for the Fourth of July.

There’s not much better than good barbecue, it’s a lot of fun to do, and it’s really not very hard, either. If you want to give barbecue a try, I think this is a great place to start.

This weekend is going to be a bit of a cooking fest for me. For the Fourth of July itself, my girlfriend and I are going to be making grilled steak banh mi, with a creamy avocado and salsa verde sauce, cilantro and home-made pickled daikon and carrots. And on Saturday, my Dad and I are going to smoke a pork shoulder (my first time trying pork shoulder, so it should be fun).

I love cooking. It’s incredibly satisfying to make a good meal for family and friends yourself, and it’s so much fun learning new dishes and experimenting with them. If cooking isn’t quite your thing, give it a try sometime. You might like doing it, and it’s a terrific way to spend time with your significant other, family and friends. The time spent together with people while cooking in the kitchen or backyard is almost always special.

Happy Fourth of July to you all!

July 1st, 2013

Education, Technology, Passivity

Partly in response to my piece on the philosophy of Google Glass, Alan Jacobs wrote this:

But if awakening students from those slumbers has always been the task of the true educator, that task is all the more difficult in a time of technologies of knowledge, or “knowledge,” that asymptotically approach omnipresence. Google Glass, along with a whole range of similar technologies, enforces the very passivity which truly liberal education is concerned to defeat.

That’s absolutely right. Just as “the truth is in the cloud” for web services that sync data, I think we’re setting the stage for the web to turn into some sort of ostensibly neutral source of true knowledge. When we have immediate, unfettered access to the web’s information, it begins to take on a feeling of absolute truth, even for things that are inherently matters of opinion or taste.

In 2010, I argued that this idea of “openness” that Facebook (and to some extent, Google) has pushed so hard for—that we will all be better off if we share more of our lives and identity with the public—undermines public and private as separate spheres, and therefore also the space for people to form their own beliefs and identity. As the public sphere encroaches on and overruns the private sphere, it is necessarily harder to experiment with new tastes, opinions and beliefs, and to settle on certain ones, because the entire process is more transparent to observation and judgment.

These things—devices like Google Glass, which give us immediate access to the web, and social networks, which push us to share more and more of ourselves with the public—are intertwined. The result could be, or already is, a greater emphasis on “Well, everyone else thinks…” and less on “Well, I believe this to be true because…”. And that should be profoundly worrying.

June 6th, 2013

“Free”

Dr. Drang on “free”:

First, you have to recognize that you’ve been “the product” your entire life. So were your parents and so were your grandparents. Television and radio, newspapers and magazines—they all sell your attention to their primary customers: advertisers. Even things you “pay for” sell you off to advertisers because you really don’t pay for them—you only cover part of the costs. Despite this obvious and longstanding fact of life, while everyone bitches about commercials, no one says TV networks are insidious or underhanded because they run ads. I’ve never heard of anyone boycotting Mad Men because they don’t want to be a product sold by AMC.

June 3rd, 2013

Creating Machine Avatars For All

Dmitry Itskov wants to unlock us from the prisons of our physical bodies and upload “us”—the sum of our brain’s connections that create “us”—into affordable machine bodies. He believes this will free us to live better, more meaningful lives:

Mr. Itskov says he will invest at least part of his fortune in such ventures, but his primary goal with 2045 is not to become richer. In fact, the more you know about Mr. Itskov, the less he seems like a businessman and the more he seems like the world’s most ambitious utopian. He maintains that his avatars would not just end world hunger — because a machine needs maintenance but not food — but that they would also usher in a more peaceful and spiritual age, when people could stop worrying about the petty anxieties of day-to-day living.

“We need to show that we’re actually here to save lives,” he said. “To help the disabled, to cure diseases, to create technology that will allow us in the future to answer some existential questions. Like what is the brain, what is life, what is consciousness and, finally, what is the universe?”

As seems to happen, this is perfectly timed with my piece on Google Glass last week.

Perhaps I’m just cynical, but this story seems very much like a microcosm of much of technology, and especially Google Glass. There’s a very nice veneer on top of Itskov’s avatar idea, a veneer that says it’s there to help people and to solve real problems like curing diseases (because our bodies will no longer be organic) and ending hunger (because the only sustenance we will require is electricity and maintenance), and to free people of the “petty anxieties” of day-to-day life, like providing a living for yourself and your family. As a result, humanity will be free to tackle much larger issues.

I say it’s a veneer because whether Itskov’s vision has any chance of being realized or not (it certainly doesn’t in the relatively near future), his solution to these “problems” solves them by eliminating much of what makes us “human” in the first place. Who we are as individuals is not merely defined by the connections in our brains, but also by how we experience the physical world and interact with it, and by our struggle to survive and improve our lot. Even if you can successfully map a person’s brain and re-create the individual within a computer as a result, they inherently won’t be the same person, or feel the same way, by nature of their new body. Sudden immortality, coupled with no need to ever seek food and survive, could play havoc with a brain evolved to focus primarily on survival.

In other words, the “solution” may destroy what’s worth saving in the first place: humanity.

June 3rd, 2013

On the Philosophy of Google Glass

Technology inherently alters how we live, what we do, how we think, how we die. Modern medicine has turned many diseases that, when contracted, brought death more often than not into things of the past or minor annoyances, and injuries that would have been debilitating or deadly into ones that can be overcome. The motion picture didn’t simply make plays something that could be watched over and over again by many different audiences, but created a mass entertainment accessible to nearly everyone, and changed how we learn about news events and conceive of them. The car allowed for suburban sprawl, and for people to travel where they please, whether for the evening or for extended trips across thousands of miles; and in so doing, the car changed what the American “ideal” is, spawned innumerable cultural groups centered around it, and helped construct a rite of passage into adulthood for teenagers. Getting your driver’s license and first car is the first step toward being an adult, but also a right to freedom, to being able to go wherever you please on your own or with friends on nothing more than a moment’s notice.

For as long as humans have been creating tools to influence the world around us, the tools—technology—have been influencing us, too. It’s an inevitable byproduct of using something, and this isn’t exactly a new insight. Smartphones and now “wearable” computers like Google Glass are merely the latest human technology to influence their creators.

But while they may only be the latest example of something that’s been happening for as long as humans have created tools, there is, I think, something very different about so-called wearable computers like Google Glass. They have the potential to integrate themselves so deeply into the user that over time, and as they develop further, there will be little reason to differentiate between the device and the user. Removing your Glass device will feel very much like losing a limb or sense—something that you’ve grown used to depending on and using is gone. Through this much deeper integration, these devices could fundamentally alter the human experience and what it means to be human.

That might sound alarmist, like science fiction, or—if you own a smartphone—just remind you of that small moment of dread, like something’s wrong, when you leave the house without your phone.

Wearable computing has much more potential than “wearable” implies. Through overlays on our vision (or, potentially, more direct connections with the brain), things like Google Glass can become another sensory input as well as an output. Google Glass already allows you to look something up on Google (“Google, how tall is Mt. Everest?”) and get directions without ever pulling the phone out of your pocket, or using your hands at all; you ask, and the information is spoken to you or overlaid at the top of your vision. It’ll notify you about your flight this afternoon or messages you receive, and you can reply to those messages, too. You can snap a photo or video. All without using your hands, and it’s all—again—on top of your vision.

The ultimate goal is to form a direct connection between our brains and the web, and all that it entails. Google Glass is merely a first step toward that, a hack that hijacks our vision to provide input to our brains and hijacks our voice for control. A direct connection with the brain is obviously the ideal; there are no “glasses” to wear, and no need to use voice to control it, which isn’t very efficient. In In the Plex, Steven Levy recounts a conversation he had with Larry Page and Sergey Brin in 2004:

Back in 2004, I asked Page and Brin what they saw as the future of Google search. “It will be included in people’s brains,” said Page. “When you think about something and don’t really know much about it, you will automatically get information.”

“That’s true,” said Brin. “Ultimately I view Google as a way to augment your brain with the knowledge of the world. Right now you go into your computer and type a phrase, but you can imagine that it will be easier in the future, that you can have just devices you talk to, or you can have computers that pay attention to what’s going on around them and suggest useful information.”

The web’s information will be our brain’s information, and the web’s services will be our brain’s tools. We would be able to immediately answer whatever question we have, or find whatever information we need. If you’re fixing your sink, instructions on how to do so (or maybe a video?) are just a thought away. In a few moments, you’ll be able to make fabulous chicken tikka masala. Humanity’s knowledge will have a direct pipe into our brains. And you’ll be able to do incredible things, too. You could snap a photo with a thought, send a message to someone or file away something you’ve come across to a note-taking service. You could control your home’s lights and television. You could… well, the list goes on.1

And, of course, you’ll be able to check Twitter and Facebook, and post to them, wherever you are and while doing whatever else.

I say all this because I don’t think there’s a significant difference between doing all of this through a Google Glass-like device or through some direct brain connection, as Page proposes. If they’re successful at their purpose, both will be quickly absorbed into our senses. Just as we’ve gotten used to being able to pull out our smartphones whenever we have a spare moment or need to settle some dispute or bit of trivia, we’ll reflexively ask Glass the answer to a question, or to snap a photo, or to check the news real quick, or to look through our Facebook and Twitter streams, even at moments when we probably shouldn’t. And since the amount of effort it takes to do so will be so much smaller than it is with a smartphone (which is already terribly small), we will do all of it with that much more frequency. No event will be complete without taking a photo and posting it to our social network of choice, because unless it’s documented and stuck in everyone else’s stream, it didn’t really happen.

I don’t think that’s a positive, and that says nothing of the social effects of having a web-connected camera and microphone strapped to our faces. (Dustin Curtis touches on it in his piece about his experience with Glass.) But what I find most troubling is the philosophy underlying Larry Page and Sergey Brin’s thoughts on devices like Glass. They say that Glass’s goal is to get technology “out of the way,” but that isn’t it. The idea is that we will all be better off if we’re always connected to the web, always on, and have uninterrupted and instantaneous access to it and to humanity’s “knowledge.” The idea that Page expresses is that if I can immediately learn about something I don’t know much about, I’ll be better off. I’ll be able to make smarter decisions and live a deeper, richer life by spending the time it would have taken to research and learn about something on more meaningful and substantive tasks.

I think, though, that that is a terribly deluded and shallow understanding of what it means to “learn” about something. When we—humans—learn about something, we are not simply committing facts to memory so we can recall them in the future. That’s a very tiny part of a much larger and much more important process. To “learn” about something is to study the information (when historical events occurred, what happened, etc.), to find connections between it and other things we’ve learned and experiences we’ve had, and to synthesize it into something greater—knowledge. Knowing, say, the Pythagorean Theorem in isolation isn’t of much use, but connecting it to your need to identify another object’s location suddenly makes it very useful. And more abstractly, knowing Roman and Greek history isn’t very useful all on its own, but being able to learn from it and apply its lessons to current political difficulties might prove very beneficial.

Synthesizing information into knowledge isn’t an instantaneous process, because that’s not how we work. We form conclusions and connections between new information and other things we know by thinking through it and living it. Conveniently, and crucially, taking the time to learn something or to answer our own question by poring over books and articles and doing our own investigation allows us time to do that. We have little choice but to draw conclusions and form connections between what we’re looking at and what we already know or have seen before, because our brains are working over the material for the answer we seek. We find knowledge when we engage our brains. And, moreover, we often stumble into things unintentionally while actually looking for something altogether unrelated, things that end up becoming more important than what we were originally looking for in the first place.

Page’s idea—that we would be fundamentally better off if we had immediate access to all of humanity’s information—ignores that. It provides facts, but elides conclusions and connections. What’s worse, it starves us of opportunities to use our skill for critical thinking, and since it is a skill and is therefore something that must be developed and practiced, it starves us of the chance to develop it.

I find that troubling. Glass is not a technology that is designed to amplify our own innate abilities as humans or to make us better as humans, but rather one that acts as a crutch to lean on in place of exercising the very thing that makes us human. I don’t find that exciting. I find that disturbing.

This may all sound like so much hyperbole. After all, we’ve adapted just fine to prior new technologies, despite Luddite claims that they would destroy us. And, undoubtedly, we will adjust to this sort of thing, too, and the world will not come crashing down. I think, though, that this sort of thing—a more intimate connection between us and computers—is a path we are heading down, and since its more intimate nature also makes it more influential over us, we should deeply consider what it’s intended to accomplish for us and what might happen to us in the process.

Technology isn’t a force of nature that just happens. It’s something that we create, and so we should question why we are creating it. This has always been true, but I think it’s even more important now that we do so.

Technology, I think, should exist to improve our lives as humans, to magnify the good and minimize the bad, rather than change our nature or experience. That’s what I believe.

That’s what I believe, and you may disagree. I would suspect many do, especially those with more of a bent toward transhumanism. And that’s fine. But we should be having a much larger discussion about our technology’s intent than we are now, because it’s only increasing in importance.

  1. If the idea of a direct brain interface seems ridiculous, it shouldn’t; researchers have shown that monkeys can control a robot’s motion with their brain activity, and the means they use are relatively rudimentary. The brain appears to be able to adjust to and adapt to new sensory inputs.
May 31st, 2013

“Dueling philosophies” for Wearable Tech

While comparing Google Glass to a (theoretical) Apple-made watch, Ken Segall makes this observation:

Second, there’s the company’s love of humanity. That is, Apple has never created technology for technology’s sake. It creates technology that strikes a chord with human beings.

I’d restate that a different way: Apple seeks to make technology that makes us better as humans, rather than trying to change what we are. The Mac, iLife, the iPod, iPhone and iPad all fit this very well. None of them try to redefine what it means to be human.

Google Glass, as a technology, begins to do exactly that. It’s a first stab at providing an immediate connection between the web and our brains, and it does so by overlaying an interface on our most important sensory input. There’s no meaning in “it gets technology out of the way,” as Google is wont to say, because Glass is intended to always be in the way, to become a part of us in a much more literal way than smartphones have.

That’s not only unappealing to me; I find the idea—that we will be better off if we literally integrate the web into ourselves and therefore fundamentally change the human experience—disturbing.

May 31st, 2013

Jamelle Bouie’s “Star Trek: Into Darkness” Complaints

Jamelle Bouie wasn’t satisfied with Star Trek: Into Darkness:

These aren’t as disjointed as they look, and they all point to my main problem with Into Darkness: I don’t mind that Abrams and Lindelof wanted to remake “Space Seed” and Wrath. What I mind is that it was half-assed, with hardly any thought given to the characters. The first movie could get away with what it was—a long sequence of action set-pieces strung together by a threadbare plot. But this needed to have an actual core, and Abrams couldn’t deliver.

Jamelle is dead-on. My biggest misgiving with J.J. Abrams’s style of filmmaking is that he likes to create the facade of substance in his films without much at all underneath. They’re science fiction wrappers around little more than adventure blockbusters.

May 28th, 2013

Xbox One

The Xbox One sounds really nice, and the reason why has nothing to do with games:

“Xbox,” she says, and a small faint Xbox logo in the upper right-hand corner of the screen begins to glow; the Kinect is listening. “ESPN,” she finishes. The guide, which is currently highlighting Seattle’s local channel 4, switches to channel 206: ESPN. Because the Kinect’s voice control is already engaged, she doesn’t need to prompt it again, so she just says “watch.” There’s a flash as the connected DirecTV makes the change, and all of a sudden SportsCenter comes on the screen. The most shocking part about it is the ease; there’s no more hunting through your guide for FX or Travel Channel or whatever network or show you’re looking for. You can just say “Xbox, watch Travel Channel” or “Xbox, watch Sons of Anarchy,” and you’re there. If the show itself isn’t on, a global search will collate all of your options for watching it, from on-demand to streaming services.

Microsoft’s best asset for the post-PC future really is the Xbox, and I hope they start acting like it.

May 21st, 2013