
The Party After Trump

We are only a few weeks away from the 2016 general election. Unless the polls are fundamentally wrong, Donald Trump will lose, and he will lose by a significant margin. Trump and the movement he represents will go down to ignominious defeat.

I am no fan of Hillary Clinton. I find her corruption repulsive, her exposure of classified information dangerous and characteristic of her narcissism, and her platform little more than more government as the solution to every problem.

And yet, when Trump loses next month, we should all have a small moment of celebration. Trump is uniquely dangerous to our nation, and to the world, and we are fortunate that he appears too incompetent and appalling an individual to defeat even as poor a candidate as Hillary Clinton.

That moment will be short indeed.

After Trump loses the election, what Trump represents will not fade. While Trump’s outright racism, affection for authoritarianism, and desire to subordinate the United States to Russia are new, the seeds of Trumpism precede him. Trumpism was not created out of whole cloth by Trump. He saw a large contingent of the GOP frustrated by the party’s failure to repeal the Affordable Care Act and to slash immigration, after leaders made promises to voters that they knew they could not keep while President Obama was in office, and frustrated by a GOP in Congress that was largely unresponsive and uncaring toward the economic difficulties of Americans. He saw a contingent of voters still obsessed with the idea that President Obama was not born in the United States and is therefore ineligible to be president. He saw a president who used an executive order to force through changes to an immigration system that he did not have the votes in Congress to make. And he saw voters who distrust the media, and increasingly the GOP itself, and who were defined less and less by any belief besides opposition to immigration and to Democrats, separate from disagreements over ideology. Politics calcified into little more than fighting the enemy for no other reason than that they are the enemy and must, by definition, be defeated.

Racism on the right did not begin with Trump’s announcement in June 2015 that he would run for president. Despite many Republicans’ protests that the GOP is the party of Lincoln, the party has long had racism within it. In the last decade, many Republicans saw Barack Obama not just as a poor candidate for president, but as someone who was not eligible to be president and who did not have Americans’ interests at heart. The birthers based their skepticism on no evidence besides chain emails, his name, and his skin color. John McCain, to his lasting credit, pushed back against it when confronted with it at a campaign event during the 2008 race, but Republicans generally did not forcefully confront it. Many on the right oppose welfare spending out of blatant anti-black racism, arguing that blacks who live in poor neighborhoods have no excuse for their poverty but their own failings, and that those failings may be inherent characteristics. Those arguments rest on a willful misunderstanding of our history and are a re-packaged version of the idea that blacks are inferior to whites. The right’s opposition to illegal immigration, although not inherently racist in nature, certainly is supported by some out of bigotry against Hispanics. “They’re taking our jobs” is little more than xenophobia, even if there are good reasons to oppose lax immigration rules, both legal and illegal. Trump did not create racism in the GOP. He exploited it.

The appeal Trump made to Republican voters in the primaries was that he recognized their plight, and that he would make America great again. That slogan is integral to his appeal, because it speaks to every complaint this group of voters has. It implies that “real Americans” (and all that phrase implies) were screwed over by globalists in both parties, who pushed for free trade deals which, in their minds, gutted the American economy of well-paying industrial jobs for lower-skilled workers, and for lax immigration rules and lax enforcement, which allowed millions of illegal immigrants to enter the U.S. and steal their jobs. It implies that we must re-assert security within the country, against foreigners and the immediate descendants of immigrants, and in our cities. It implies that America has been weakened and reduced by a black president, someone who isn’t “really” American, and whose loyalties lie with foreigners. And it speaks to their desire to “take” control of the country from the “elites”—the people in the Republican and Democratic parties, in the media, and who run corporations; the people, they believe, who conspire to send jobs overseas and bring foreigners here to water down the power of “real” Americans (whites).

That slogan, along with his calls for building a wall across the U.S.-Mexican border, deporting all illegal immigrants, banning Muslims from entering the United States, slapping stiff tariffs on all trade with China, penalizing companies that move jobs overseas, and asserting “law and order” in our cities, speaks directly to this group of voters. The bare nature of it—the illegal immigrants, the Mexicans, and the Chinese stole your jobs, and I’m going to stop them—along with his insistence on describing illegal immigrants as “drug dealers” and “rapists,” was directed at them: the United States is now a third-world nation, your life is terrible, and it is not your fault. Our country is terrible because of the Mexicans, because of the Chinese, and because of the conniving elites that plotted to screw you over so they could get wealthy. He stiffened his appeal by pointing out that he knows what the elites do, because he has participated in their corrupt system. He turned his own corruption into an asset with a crowd increasingly angry at our institutions.

Making his appeal that way also framed his opponents as part of the problem. They are all officeholders, and did not deliver for this group of voters. In his telling, they not only did not deliver (because they are ineffective politicians), but also plotted amnesty for the hated illegal immigrants. They are simultaneously incapable and nefarious.

The group of voters, and the elements described above, preceded Trump. He saw them, exploited them ruthlessly, and amplified them. For him, they should not just be skeptical of the GOP leadership and distrusting of the media, but they should resist the “rigged” system by voting for change—for Donald Trump. They should not just support a stricter immigration policy and a secure border, but they should abhor immigrants, see them as the cause of our problems, and see free trade and immigration as a conspiracy to impoverish and debase whites. They should not just be skeptical about President Obama’s place of birth and his legitimacy as president (as shameful as that skepticism is), and question his refusal to acknowledge the threat of Islamic terrorism, but they should see the truth before their eyes that Obama is working for our enemies to weaken the United States.

Trumpism will not dissolve after November 8 because the conditions that gave birth to it already existed. As such, we must affirmatively decide what the GOP will be in 2017 and beyond.

The Future of the GOP

After Trump secured the Republican nomination in May 2016, Republican leaders—no matter how critical of Trump they had been before—began falling in line behind him. The party apparatus swung hard in his favor, denying attempts to reform the GOP and putting down an effort, led by Mike Lee at the Republican National Convention, to call for a roll-call vote on new rules. The GOP threw in with Trump.

Most of the party’s leaders have supported Trump. Reince Priebus, Paul Ryan, Marco Rubio, John McCain, and Ted Cruz offered their support, and claimed that while they disagree with Trump on many things, they believe Hillary Clinton is a fundamental threat to the country and must be defeated. Often tacitly, but sometimes explicitly, they argued that Trump’s brand of racist, xenophobic, conspiratorial and authoritarian populism is preferable to Hillary Clinton sitting in the Oval Office.

Many in the party did so out of fear that the contingent of voters described above would not vote for them if they did not back Trump, and out of hope of holding together the Republican coalition. They feared that without giving in to Trump, they, and the party, would be finished. They feared that group of voters—“the base.” Regardless of whether they believe any of that to be true, they have, with their actions, threatened to make Trump president.

That is unacceptable.

Trump is fundamentally not conservative. Trump represents an active disdain toward limited government and individual rights, toward the rule of law, and toward an aspirational view of the United States. Trump’s implicit—and often explicit—appeals to white nationalism, and his attacks on non-whites, reject an America defined by a shared love of liberty and belief in the power of the individual and community. In its place, Trumpism substitutes a respect for “white” culture and history, where whites have given light to a world in perpetual darkness. Institutions are not to be trusted because they are rigged against whites. Instead, Trump—the champion of disaffected whites—should be trusted, and he should be trusted with extraordinary powers to make this country great again.

In the last few weeks, Trump has made much of the subtext explicit, and extended it. In a speech on October 13, Trump claimed that there is a “global power structure” that has conspired to rob the working class of wealth and jobs, and to end our sovereignty as a nation. Citing documents leaked by Wikileaks, Trump said that Hillary Clinton is a part of the conspiracy, that she has plotted with international banks to plunder the nation and destroy our sovereignty, and that she is rigging the election with her co-conspirators in the media. What was once (crude) subtext in his slogan and statements is now simply the text itself. Trump uses the words of a dictator: there is a conspiracy against the people, to impoverish and disenfranchise them, and only a strongman like Trump can fight it.

The conservative vision of the United States—generally speaking—is that we should dream big, and be free to work tirelessly to achieve those dreams. We should work together, voluntarily and within our communities, to help people in need and to improve the places we live. It is a view of the world where individual rights are sacred, and where respect for people—all people—is integral. It is a view of the world where our country is defined not by a shared ethnicity, but by a thirst for liberty and self-determination. Ours is not a vision of being ruled by a strongman, or of needing a great leader as president.

Fundamental to this view of the world is the rule of law. Without a set of laws that are comprehensible to all, and that are applied equally to all, there can be no limited government whose primary role is to protect individual rights and provide space for a flourishing civil society. Without respect for our institutions, the rule of law will ultimately wither away.

Thus, Trumpism damages conservatism on two fronts. First, Trumpism challenges the idea that our nation is defined by ideas, and therefore challenges those ideas themselves. If our shared identity is not tied to a shared love of liberty, then what binds our nation together falls away. That inherently breaks the United States down into its constituent ethnic, religious, and cultural communities, and encourages people to fight for their communities to be empowered over others. If there is no shared identity, there is no reason to work for the benefit of everyone as a whole. Trump’s supporters offer a window into what that world looks like when they tell Hispanic Americans to “go home,” and when they threaten to intimidate non-white voters on election day, because for many of his supporters, being “American” is tied directly to ethnicity and culture. Second, Trumpism undermines faith in our institutions, and thus weakens the rule of law. If “the system”—the political parties, the government, the economy—is “rigged” against us, why should the Constitution be seen as anything more than an old piece of paper? Why should we not support a strongman who will right the system, provide real Americans (whites) with jobs and dignity, and send the “foreigners” back to “their” country?

I admit that causation does not only flow in one direction; Trumpism is a response to a decline of faith in our institutions, caused by many of the reasons described earlier in this piece. However, while Trumpism is a response, it is also an amplifier, and a sharpening of distrust of our institutions into conspiracy theories. Trumpism is also not only a threat to conservatism, but a threat to our form of government, through the same mechanisms described above.

Our party’s leaders have tried to placate Trumpism’s supporters, to save their own jobs and to hold together the Republican coalition. But a coalition that includes people who seek to undermine the party’s and country’s values is not a coalition worth having.

Trumpism cannot be worked with, it cannot be directed toward productive ends, and it cannot be negotiated with. It must be called what it is: racist, xenophobic, authoritarian, anti-American. It must be fought, and it must not be accepted into the party. As conservatives, we cannot let Trumpism control the party. Our party leaders’ support for it is unacceptable.

Either the party will be pushed back toward working to solve our country’s problems with conservative ideas, or it will give in to the ethnic authoritarianism of Trump. There is no middle ground, and the party has made its choice.

Either the Republican Party stands for conservatism, and for respecting all Americans, or it stands for ethnic authoritarianism. If our party will not stand for conservatism, it is incumbent upon us to abandon the party, and start over. Today, the party has refused to abandon Trump after he called all illegal immigrants rapists, drug dealers and criminals; after he said John McCain is not a war hero; after he called for religious tests to be administered for immigrants, and a ban on all Muslims entering the country; after he repeatedly praised Vladimir Putin; after he repeatedly said he would compel the military to target wives and children of terrorists; after he sought out the support of white supremacists; after it became clear he has assaulted women; after Trump charged Clinton with being part of a global conspiracy to rob the working class and destroy the United States’ sovereignty; and after he insisted that our democratic system is rigged and illegitimate. The party has stood by him, and supports putting him—a demagogue that believes in nothing besides his own self-aggrandizement and the power of government—in the White House.

It is time for us to recognize the truth: the GOP is not a party worth saving, and not a party that anyone can support with their conscience intact.

I will not vote for leaders that did not repudiate Trump, and that did not repudiate Trumpism. I will not donate to them, I will not volunteer for them, and I will not offer them public support. It is time to support people who stood with dignity, people like Ben Sasse, Mike Lee, and Justin Amash, and to make room for new leaders that won’t bow to a vile authoritarian because of the letter next to his name and for fear of losing their position.

It is also our responsibility to help define what our new party does stand for after the election. Most importantly, the party must represent all Americans—Americans of all ages, cultures, and ethnicities. Too often in the modern era, Republicans have given in to the idea that conservatism cannot appeal to non-whites, to the working class, or to the young. In 2012, we turned that idea into a campaign plank: Romney’s “47%” comment reflected the idea that conservative ideas fundamentally cannot appeal to a large part of the country, and thus that we should not even try. When Romney accepted Trump’s endorsement in 2012 (though it is worth noting that the Romney campaign did not exactly embrace the endorsement enthusiastically—quite the opposite), and joked about President Obama’s birth certificate, he threw a bone to the group of voters who believe President Obama is not a “real” American. Romney certainly did not believe there was doubt about Obama’s fidelity to America, but giving those voters a knowing wink did not just “excite the base” a little ahead of the election—it legitimized racism in the party and in the country, and said that our party stands with them.

Those were shameful moments for Romney, a good man, but those ideas, and our leaders’ willingness to condone and encourage them, are part of the reason we now have Donald Trump as our nominee for president. That is both because we breathed life into those ideas and those voters, and because such thinking is self-fulfilling: if we believe that non-whites, the working class, and the young will never support conservatism, then that belief is reflected in our policy proposals, goals, focus, and tone. If you all but tell non-whites that this party is not for them, why would they ever entertain the idea of supporting it? If you refuse to genuinely listen to other people’s experiences living in America, what they care about, and what ideas they have, how can you expect them to take your ideas seriously? How can you expect your proposals to reflect their experiences and concerns?

The future of conservatism begins with something simple: listening. Listen to blacks, Hispanics, Asians, homosexuals, the middle class, the poor, the young. Listen, and try to understand what their experience in America is, what issues affect them, and what they believe.

Listening to other people, and talking with them, will provide the grist for re-thinking what conservatism means in today’s world, and how we can address problems affecting all Americans. There already are many conservative thinkers doing precisely that. People like Reihan Salam, Yuval Levin, and Charles C. W. Cooke have dealt seriously with the United States as it is in 2016. We need to do so as a movement.

No matter what specific policy you advocate for, starting with a respect and love for all Americans, and by genuinely listening to their experiences, is where our future begins.

October 17th, 2016

A Danger to Our Political System

After defeating the British in the Revolutionary War, General George Washington almost certainly could have seized control, and made himself dictator. Washington was revered, the Continental Congress was weak, and the argument that the colonies needed the stable leadership of a tested leader in the post-war period would have been an easy one to make. But he did not.

Washington’s insistence on civilian control of the military is now a bedrock of the United States’ political system. The very thought of the military intervening in our country’s political decisions, much less overthrowing a democratically elected Congress or president, sends a shiver down the spine of Americans; it would cut to the deepest level of what it means to be American. That idea—that elected civilians are our leaders, and the military answers to them—has held strong throughout our history.

There is no physical barrier, however, that prevents the military from intervening in the political process, or even from removing elected leaders from office. There is no wall, no defense. The military controls the weapons, and could intervene if it pleased.

What has prevented it here is the norm created when Washington relinquished power to Congress. It hasn’t happened because it would violate an idea of what is acceptable in our country, an idea that defines our country.

That is not the only norm we depend on within our system. We have depended, too, on the idea that even if we wholly disagree with officials elected to office and want to see them unseated as soon as possible, they are afforded the respect of holding office. They were elected to office through our political system, and while we may think they should not be in office, we at least acknowledge they were elected. Similarly, elected leaders have respected a norm that they will not use their new powers to persecute the people they have replaced.

Together with the Constitution, which slows down and impedes the ability of majorities to enact sweeping changes to our laws, these norms (more numerous than the three discussed above) limit the scope of change possible in a single election. One general election will not mean that the previous administration will be thrown in prison and minority groups’ freedom of speech will be denied. It will not mean that the economy will be nationalized. It will not mean that all members of a racial group will be rounded up and placed in internment camps.

In doing so, these norms turn down the temperature of our political debate. When a single election does not directly decide people’s rights, or whether the last administration will be imprisoned, there is much less incentive for anyone to make drastic decisions, such as a president refusing to transfer power to the elected candidate. Such norms help ensure stability.

I fear we are well down the path of weakening the very norms that have girded our democracy.

I am, of course, writing about Donald J. Trump, who will be the Republican Party’s nominee for president.

Trump, though, did not start this erosion. We can trace it in its current form back at least to the 2000 election, and certainly to Obama’s presidency, with the right’s courting of birther conspiracy theorists who insisted President Obama is a foreigner and thus ineligible to hold office. We can lay blame on President George W. Bush for expanding the scope of executive power and legitimizing torture, and on President Obama for enshrining Bush’s expansion of power and expanding it further still. There is much blame to go around.

Trump is something altogether new, however. Whereas the past two presidents have undermined our norms at the edges while still paying respect to them (the role of the executive in our system, respect for the rights of all Americans, and the legitimacy of our political system itself), Trump has undermined our system’s norms whenever he has found it politically advantageous to do so.

Shadi Hamid writes:

His commitment to the protections enshrined in U.S. constitution are questionable, at best, and if we assume the worst, downright frightening (the difficulty with Trump is that he’s not precise with words, so it’s sometimes hard to make sense of what he’s saying). He has expressed support for registering Muslims in a database, elaborating that they could “sign up at different places.” When a reporter asked how this was different from requiring Jews to register in Nazi Germany, Trump said “you tell me,” prompting The Atlantic’s David Graham to note that “it’s hard to remember a time when a supposedly mainstream candidate had no interest in differentiating ideas he’s endorsed from those of the Nazis.” Trump, for good measure, has also refused to disavow President Franklin D. Roosevelt’s internment of Japanese-Americans.

That is not even close to an exhaustive list, and Trump has added to it since, by stating that Gonzalo Curiel, a federal judge presiding over a lawsuit he is involved in, should recuse himself from the case because Curiel is of Mexican descent and therefore, in Trump’s telling, cannot be impartial.

By doing so, Trump is using his position as a candidate for president to threaten a sitting judge, and is undermining the legitimacy of the judiciary. When a candidate for president uses his position to question a judge’s impartiality, the judiciary’s stature is weakened. What good are court rulings if the president declares that rulings running counter to his interests are biased and illegitimate? Through his statements, Trump lessens the standing of the judiciary, and raises the specter of ignoring rulings altogether if he is elected. After all, why should the president respect “biased” and illegitimate rulings from an unelected body of judges?

Trump, too, is fond of threatening people he finds disagreeable. He has threatened the Ricketts family and David French’s family with consequences if they do not fall in line, and has used lawsuits as a bludgeon against people in the past. Those threats appear to be part of who Trump is and what he believes a good leader to be. He is, after all, the man that complimented the Chinese Communist Party’s strength for putting down the 1989 Tiananmen democracy protest with tanks and bullets, and the man that said he would compel the U.S. military to carry out unlawful orders, even if they refused.

Is that our norm for the executive—the branch of government that signs and enforces laws drafted by the democratically elected legislature? Someone who questions the impartiality of a federal judge, and uses the judge’s race as the excuse? Someone who doesn’t recoil at the idea of placing American citizens of one religion in a database so they can be tracked by the federal government? Someone who finds murdering the wives and children of terrorists, as an intentional strategy, morally acceptable, and believes it is “leadership” to force the military to carry out such atrocities? Someone who thinks it is not beneath a president to threaten private citizens for crossing him?

Those are not the norms we have established, or the norms that have provided remarkable stability in our political system since our founding. They are the signs of someone who fancies himself an authoritarian, a person who believes anything, or anyone, that stands in his way is to be crushed. They are the marks of a demagogue willing to do anything in the pursuit of power.

Trump will likely not be elected president. Despite that, by allowing this man to be the Republican Party’s nominee for president, by allowing him to say and do the things he does, we are doing damage to our system of government. We are normalizing Trump’s behavior, normalizing his blatant use of racism and threats. He is raising the specter that things we did not think anyone would ever do could be done as a result of a single election.

Trump will not be the end of our system, even if elected. But he is accelerating the decline of what has helped make our form of government so strong and resilient. And for that, we—members of the party that has elevated this man to be our nominee—should be deeply ashamed.

There is no honor in sticking by a party that makes Trump our standard bearer, no good to come from party unity.

June 7th, 2016

Trump

The United States is a country founded on ideas. Ethnicity and religion are not what have bonded us from our founding. It is the fundamental ideas expressed in our Declaration of Independence, and in our fight for independence, that run through our country’s history. Our founding set forth that individuals are ends unto themselves, and deserve to be respected as such; that government’s role is not to be the ultimate source of authority and power within society, but merely to protect the people’s pre-existing rights; and that through our will and determination, there is no limit to what we can accomplish.

We have not always honored and lived up to those ideas. Our founding itself was stained with the deepest of shames, the enslavement of human beings, even as our founders argued for the dawn of a new beginning. We subjugated American Indians and cruelly abused them, treating them as less than human. We let the cancer of slavery metastasize until war was the only option remaining; and after slavery was broken, we allowed Jim Crow to replace it. We have not yet entirely grappled with what our country’s greatest shame means, nor have we left the effects of slavery to the pages of history. They remain with us today.

And yet America is a tremendous miracle. From British colonialism and abuse, we won our independence as a country, and forged one of the greatest works of humanity: the Constitution. The Constitution not only explicitly laid out the extent of the federal government’s powers and enumerated the rights of the people that must not be infringed, but created a political system that, through the separation of powers and the pitting of different power centers against each other, limited the government’s ability to fall under the dominance of a single group or a single passion of the moment, and to be used as a tool of repression, even when it represented the will of the majority. It is a marvel for all time.

Through our unique genesis, we forged an identity separate from ethnicity and religion. Our identity, what it is to be American, centers on our belief in respect for each other as individuals, and in our right to pursue our dreams. Because of that, our country has been able to absorb waves of immigrants, people utterly different from the people already here, and integrate them into our nation. Whatever our race, religion, and culture, if we share the same fundamental ideas, we are one people. Our identity is our ideas.

We have not always lived up to that, either. But it is remarkable how many different peoples have immigrated to the United States since our founding, and in the ensuing decades became as “American” as anyone else. That is the strength of our country: We will take anyone, if they believe there is a better tomorrow through work. We can all have different skin colors, follow a different religion (or none at all), eat different food, have differing ideas for what the good life is, even speak different languages—and be unified as a single people. That is a miracle, and despite not always living up to it, it also aptly captures something fundamental to our country.

Our country, at its best, is not about “staying with our own kind,” or taking from others to increase the lot of “our people.” Our country is about being different, having different ideas—but being on the whole unified under an assumption that we can create a better tomorrow for everyone through work.

That is also why I have found Donald Trump’s campaign for president so disturbing. Trump has built his campaign—to “make America great again”—on the belief that America is lost, that we are an embarrassment, that we are weak, and that we can only return to “greatness” on the back of a great leader. Trump has made his appeal not by arguing for how we can empower all of us, as Americans, to pursue our dreams for a better tomorrow, but by appealing to the ethnic and religious differences between Americans. He has not just argued that open immigration could be harmful and we should be cognizant of it, but that Mexicans are rapists, drug dealers and killers. He has not just pushed for being mindful of the threat posed by Islamic terrorism, but has flirted with the idea of registering all Muslim Americans in a database so they can be tracked, and with barring Muslim Americans traveling abroad from returning to their own country. He is a man that has played on conspiracy theory and overt racism.

Trump has praised the “strength” of repressive dictators such as Vladimir Putin and repressive governments such as the People’s Republic of China, and has said—often on the same day he threatened an individual or company with consequences if he is elected—that he would open up libel laws so journalists could be sued for writing or saying what he finds to be misleading or false.

Trump claims he is conservative. What I see is a man who, in order to rise to the top, willfully pulls on the ethnic and religious differences in our country, and uses and amplifies prejudice and hatred, to garner the support of whites. He is intentionally dividing us as a nation, pitting white Christians against Hispanics and Muslims, regular people against the wealthy and the “media elite,” and “Americans” (by which he means white people) against foreigners, a category that for him includes not only foreign nations but American citizens descended from immigrants. Trump is tearing at the very fabric of our nation.

He tears at it while also undermining the bedrock idea that the government does not lead our nation, but that individuals do. Ideology may not be fundamental to Trump, but a belief in the supremacy of great leaders, and in their necessity for a country to do great things, is. That belief underlies his fondness for Putin, a man unafraid of using the power of the state toward his own ends, and to crush his opposition. It underlies his praise for the PRC’s crushing of a budding protest movement in Tiananmen Square in 1989. And it underlies his support for the use of torture and for killing the families of terrorists—great leaders do what is necessary to win.

Trump, then, is a man willing to divide us as a people, so that he can lead us to “greatness.” Trump’s idea of leadership is not to respect the limits of the federal government’s power, and the presidency’s power, but to do whatever he thinks is necessary (laws, morals, and individual rights be damned) to show our strength and impose his will, both on the world and at home. Trump does not see himself as the leader of a country defined by its rights, but as someone smarter and stronger than everyone else, and thus entitled to impose his will on whomever he pleases. There is a reason that “little,” “loser,” “low-energy,” and “weak” are some of his most-used insults for his opponents, and that he speaks so often of being a “winner.”

I cannot support Trump because he is fundamentally destructive of what our country is. Trump is willfully tearing at what holds our country together and what defines us as a people. I cannot, and will not, support a man who appeals to our fears and our baser instincts, who turns every issue into one of us versus them, and who traffics in conspiracy and racism. I cannot, and will not, support a man who fancies himself an authoritarian, who threatens people for saying things he doesn’t like, and who threatens to undermine the First Amendment. I cannot, and I will not.

I will not support Donald Trump if he is the Republican Party’s nominee for president. If the GOP is remade in his image, I will leave the party. I owe the party no obligation, if the party has become destructive of what I cherish most. I cannot, and I will not.

I promise that I will fight Trump, the demagogue, now and if he wins the nomination. I will not accept it, and neither should you.

If, like me, you are a Republican, I appeal to you to vote in your state’s primary, and to vote against Donald Trump. He has not won yet, and we can still fight. Let us defeat him. Let us win a victory for what we love about our country.

February 29th, 2016

A Jovian Dream

Jupiter beckons in the distance, a small light, the greatest planet of all
I stare through the window, timeless, as the light slowly grows larger
I wonder what it will be like to see it with my own eyes
Swirls of orange and red and brown, a globe so large I can’t comprehend
The Jovian moons circling around the greatest planet of all, enraptured,
Captured
It is growing larger through the window
Through the window that separates me from the void,
Separates warmth and air and life from emptiness and death
This is what we have constructed
To ferry us across the great emptiness of space
It is larger still, I see color!
To see it with our own eyes
I see the moons!
To see if there is life beyond our little blue dot, so far away
To strike off into the unknown once again
To extend humanity beyond our home
I see it, I see it! I see!

But oh, this is the dream of a child
A great dream, but a dream
Remembered by an old man,
What could have been

October 1st, 2014

Apple Watch

The phone dominates your attention. For nearly every use, the phone has your undivided attention. Browsing the web, Twitter, Instagram, Snapchat, watching video, reading, messaging—all require focus on a screen that fills your vision and your primary attention, and generally some kind of interaction. Everything else, too, is always a home button or a notification-tap away.

Is that a shock when the phone is the single gateway to nearly everything? The PC is now for doing work, but the phone is for messaging, taking photos, sharing them, the web, Twitter, Facebook, finding places to go, getting directions there, and even making calls.

That is the reason we find ourselves, when we receive a message and pull out our phones to respond, often descending into a muscle memory check of our other iMessages, emails and Twitter stream. We pull out our phone for one purpose, like responding to a message or checking our schedule, and end up spending several mindless minutes (or, if I am honest, more than “several minutes”) checking in on whatever it is. We find ourselves doing this even when we shouldn’t. We do it while seeing friends and family, while out to dinner with them, while at home with family when we should be spending time with them or doing other things.

I used “we” above because I think anyone with a smartphone, or anyone who knows people with them, can find truth in it to a greater or lesser extent.

My concern with wrist-worn “smartwatches,” starting with the Pebble, is that they appear to exist primarily to push notifications that we receive on our phones to our wrists. They seem to exist to make dealing with phone calls, messages, and updates easier: seeing them, ignoring them, replying to them. They are there to make dealing with our phones more convenient. And in large part, that is how smartwatches have been designed and used. “It’s there so I don’t have to pull my phone out of my pocket.”

But that idea of what smartwatches are for, making it more convenient to deal with the flood of notifications and information our phones provide us, is unimaginative. I think what the smartwatch can do is make the phone unnecessary for many purposes, create new purposes altogether, and allow us to benefit from a wrist-sized screen’s limitations.

The Apple Watch

On September 9th, Apple introduced their long-awaited watch, appropriately named the Apple Watch (from here on, “the Watch”). We won’t be able to fully understand what Apple has built until next year, but they did provide a fairly detailed look at the Watch and the software it runs.

It appears that, in contrast to Google’s approach with Google Wear (which is heavily focused on showing single bits of information or points of interaction on the screen, and relies on swiping between cards of data and interaction), Apple intends the Watch to run fairly sophisticated applications. The Watch retains the iPhone’s touch interface, but Apple has designed new means of interaction specific to a small screen. In addition to the tap, the Watch brings the “force tap,” which is used to bring up different options within applications (like, say, the shuffle and AirPlay buttons within the music application), and the “digital crown,” a repurposing of the normal watch’s crown into a sort of scroll wheel for the Watch. Using the digital crown, users can zoom in and out of maps and scroll through lists with precision and without covering the small screen. And, most interestingly, they have replaced the familiar vibration alert in our phones with a light “tap” from the Watch to notify the user.
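
For a concrete sense of how crown-driven interaction could work, here is a hypothetical Swift sketch modeled on the WKCrownDelegate API that WatchKit eventually exposed; the map outlet, center coordinate, and zoom math are illustrative assumptions, not anything Apple has shown:

```swift
import WatchKit
import MapKit
import CoreLocation

// Hypothetical sketch: the digital crown as a zoom control for a map,
// so a finger never has to cover the small screen.
class MapController: WKInterfaceController, WKCrownDelegate {
    @IBOutlet var map: WKInterfaceMap!   // assumed storyboard outlet
    private var spanDegrees = 0.05       // current zoom level, in degrees

    override func willActivate() {
        super.willActivate()
        crownSequencer.delegate = self
        crownSequencer.focus()           // route crown rotation to this controller
    }

    // Each tick of the crown nudges the visible region in or out.
    func crownDidRotate(_ crownSequencer: WKCrownSequencer?, rotationalDelta: Double) {
        spanDegrees = min(1.0, max(0.002, spanDegrees * (1.0 - rotationalDelta)))
        let center = CLLocationCoordinate2D(latitude: 37.33, longitude: -122.03)
        let span = MKCoordinateSpan(latitudeDelta: spanDegrees, longitudeDelta: spanDegrees)
        map.setRegion(MKCoordinateRegion(center: center, span: span))
    }
}
```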

What this allows is fairly sophisticated applications. You can not only search for locations around you, but you can zoom in and out of maps. You can scroll through your emails, messages, events or music. You can control your Apple TV.

This subsumes many of the reasons we pull out our phones during the day. We can check our schedule for the day, check a message when it’s received and send a quick reply, find a place to get a drink after dinner (and get directions there without having to walk while staring at our phones), ignore a phone call by placing a hand over the wrist, or put something on the Apple TV.

But what force taps and the digital crown will not do is make the Watch’s small screen as large as a phone’s. You can’t type out a reply to a message or email. You can’t browse the web for something. You can’t dig through a few months of your email to find a certain one. You can’t mindlessly swipe through Twitter (well, you could, but it’s going to be pretty difficult). That, though, is an advantage the Watch has over the phone. Because it is inherently limited, it has to be laser-focused on a single purpose, and while using it, you are focused on accomplishing that one thing. It’s a lot harder to lose yourself in a 1.5-inch screen than in a screen of four or more inches.

That’s going to be one of the Watch’s primary purposes for existing: allowing us to do many of the things we do on our phones right now, but in a way that’s limited and, thus, less distracting. If you’re out to dinner and receive a message (and haven’t turned on Do Not Disturb), you’re going to be a lot less likely to spend a couple of minutes on a reply, and then on Instagram, if you’re checking and responding to it on the Watch. It just doesn’t work that way.

In that way, I think Apple has embraced the wrist-worn watch’s inherent limitations. Rather than try to work around them, they are using them. They’ve built new means of interaction (the force tap, the digital crown, “taptic” feedback) that allow fairly sophisticated applications, but they didn’t use them to cram iOS in its entirety into the Watch.

What I think Apple is trying to do is build a new mode of personal computing on the wrist, molded by the wrist’s inherent limitations and the opportunities those limitations create.

Truly Personal

Jony Ive ends his introduction to the Watch with a statement of purpose of sorts. He says,

I think we are now at a compelling beginning, actually designing technology to be worn. To be truly personal.

That sounds like a platitude, but I think it defines what Apple is trying to do. “Taptic feedback,” which Dave Hamilton describes as feeling like someone tapping you on the wrist, is a much less intrusive and jolting way of getting a notification than a vibration against your leg or the terrible noise a phone makes on a table. More generally, so is focusing the Watch’s use on quick, single purposes.

What is interesting to me, though, is that they are using the Watch’s nature to do things in a more personal—human—way, and to do things that the phone can’t. When providing directions, the Watch shows them on the screen just as you would expect on a phone, but it also does something neat: when it’s time to turn, it lets you know using its taptic feedback, and it differentiates between left and right. As a result, there is no need to stare at your phone while walking somewhere and following directions.
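
WatchKit’s haptic vocabulary, as it later shipped, includes a pair of directional patterns that fit this use. A minimal sketch; mapping the “up” and “down” haptics to right and left turns is my own assumption:

```swift
import WatchKit

// Hypothetical turn-cue helper: distinct haptics for left vs. right,
// so walking directions never require looking at a screen.
enum Turn { case left, right }

func signalTurn(_ turn: Turn) {
    switch turn {
    case .left:
        WKInterfaceDevice.current().play(.directionDown)
    case .right:
        WKInterfaceDevice.current().play(.directionUp)
    }
}
```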

They’ve also created a new kind of messaging. Traditionally, “messages” are words sent from one person to another using text or speech. Since messages are communication through words, something inherently mental or intellectual rather than emotional, they are divorced from emotion. We can try to communicate emotion through text or speech (emoticons serve exactly that purpose), but the emotion is always translated into text or speech, and then thought about by the recipient, rather than felt. In person, we can communicate emotion with our facial expressions, body gestures, and touch. There’s a reason hugging your partner before they leave on a long trip is so much more powerful than a text message saying you’ll miss them.

In a small way, using the Watch, Apple is trying to create a new way to communicate that can capture some of that emotion. Because the Watch can effectively “tap” your wrist, others can tap out a pattern on their Watch, and it will re-create those taps on your wrist, almost as if they were tapping you themselves. You could send a tap-tap to your partner’s wrist while they are away on a trip just to say that you’re thinking about them. Isn’t that so much more meaningful than a text message saying the same thing? Doesn’t it carry more emotion and resonance?
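
Apple has shown no public API for this tap messaging, but mechanically the idea is simple: record the time offsets of the sender’s taps, then replay them as haptics on the receiver’s wrist. A purely hypothetical sketch:

```swift
import WatchKit
import Foundation

// Hypothetical replay of a received tap pattern. Each offset is the
// time, in seconds, at which the sender tapped their own Watch.
func replayTaps(at offsets: [TimeInterval]) {
    for offset in offsets {
        DispatchQueue.main.asyncAfter(deadline: .now() + offset) {
            WKInterfaceDevice.current().play(.click)   // one "tap" on the wrist
        }
    }
}

// A quick tap-tap sent to a partner who is away on a trip:
// replayTaps(at: [0.0, 0.3])
```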

That’s what they mean by making technology more personal. It means making it more human.

The Watch is not about making it more convenient to deal with notifications and information sent to us. It’s not even about, as I described above, keeping your phone in your pocket more often (although that will be a result). The Watch is creating a new kind of computing on our wrists that will serve different purposes than the phone, the tablet, and the PC. The Watch is for quickly checking and responding to messages, checking your schedule, finding somewhere to go and getting directions there, helping you lead a more active (healthier) life, and enabling a more meaningful form of communication. And it will do that without sucking our complete attention into it, like the phone, tablet, and PC do.

The Watch is for doing things with the world and people around us. Finding places to go, getting there, exercising, checking in at the airport, and sending more meaningful messages. Even notifying you of a new message (if you don’t have Do Not Disturb turned on) while out to dinner with family or friends serves this purpose, because if you have to see it, you can do so in a less disruptive way and get back to what you are doing—spending time with people important to you.

The Watch is a new kind of computing born of, and made better by, its limitations. And I can’t wait.

September 15th, 2014

I Want to Know

When I was growing up, I was fascinated by space. One of my earliest memories—and I know this is strange—is, when I was four or five years old, trying to grasp the concept of emptiness in space. I imagined the vast emptiness of space between galaxies, nothing but emptiness. I tried to imagine what that meant, but most of all, I tried to imagine what it would look like.

That question, what color empty space would be, rolled around my brain the most. I couldn’t shake it. I would be doing something—playing Nintendo, coloring, whatever—and that question would pop into my head again. What does “nothing” look like? First, I imagined that it would look black, the black of being deep in a forest at night. But that didn’t seem right, either; black is still “something.” And then, I remember, I realized I was thinking about a much worse question. I wasn’t trying to imagine what the emptiness of space would look like. I was trying to imagine what nothing would look like.

I have that memory, I think, because thinking about that sort of broke my brain. I couldn’t comprehend what nothing is.

That question, of course, leads down toward the central question of what our universe is and how it was created. I think that’s why space—the planets, stars, galaxies—so fascinated me then; it’s this thing so alien to our world, that dwarfs it on a scale incomprehensible to us, and yet it is us. We aren’t something held apart from it, but intimately a part of it and its history.

Trying to understand the physics of our universe, its structure and history is also an attempt to understand ourselves. I think, at some gut level, I understood that as a kid.

I poured myself into learning about our solar system and galaxy. My parents’ Windows PC had Encarta installed, and I was enthralled. I spent countless hours reading everything I could find within Encarta (which, at the time, felt like a truly magical fount of knowledge) about Mercury, Venus, Mars, Jupiter, Saturn, Uranus, Neptune and Pluto. And when I exhausted that source, I asked for books about space, and I obsessed over them. They were windows into these incredible places, and I couldn’t believe that we were a part of such a wondrous universe.

Through elementary school, my love for space continued to blossom. Back then, the people of NASA were my heroes. To my eyes, they were the people designing and launching missions across our solar system so we could understand even more about it. Many of the photos of Jupiter, Saturn, Uranus, and Neptune that I was so enraptured by were taken by spacecraft designed, built, and launched by people at NASA. They were the people who had risked their lives to leave Earth and go to the Moon, to do something that most people up until just decades prior couldn’t even imagine as possible. And they were the people exploring Mars with a little robotic rover called Sojourner that very moment.

They were my heroes because they were the people pushing us to explore our solar system, to learn what was out there and what came before us. I felt like I was living during a momentous time in the history of humanity, and that I would live to see advances as incredible as 1969’s Moon landing. There wasn’t a doubt in my mind.

That year, in 1997, I was nine years old. It’s been seventeen years.

Since then, we have indeed made great advances. In that time, we’ve sent three separate rovers to Mars, and we discovered that Mars certainly had liquid water on its surface long ago in its history. We landed a probe on the surface of Saturn’s moon Titan, which sent back the first photos from that distant surface. We’ve discovered that our galaxy is teeming with solar systems.

All truly great things. But we are no closer today to landing humans on Mars than we were in 1997. In fact, we are no closer to putting humans back on the Moon today than we were in 1997.

Some people would argue that’s nothing to be sad about, because there isn’t anything to be gained by sending humans to Mars, or anywhere else. Sending humans outside Earth is incredibly expensive and offers us nothing that can’t be gained through robotic exploration.

Humanity has many urges, but our grandest and noblest is our constant curiosity. Through our history as a species, we have wondered what is over that hill, over that ridge, beyond the horizon, and, as we sat around our fires, what the lights in the sky are. Throughout, someone has wondered, and because they wondered, they wandered beyond the border that marks where our knowledge of the world ends, and into the unknown. We never crossed mountains, deserts, plains, continents, and oceans because we did a return-on-investment analysis and decided there were economic benefits beyond the cost of doing so. We did so because we had to in order to survive, and because we had to know what was there. We were curious, so we stepped out of what we knew into certain danger.

And yet that tendency of ours to risk everything to learn what is beyond what we know is also integral to all of the progress we have made as a species. While we were working on rockets capable of leaving Earth’s atmosphere, it was hardly obvious what they would allow us to do. Would someone then have known that rocketry would let us place satellites into orbit, enabling worldwide communication, weather prediction, and the ability to locate yourself to within a few feet anywhere on Earth? The economic benefits that result from progress are hardly ever obvious beforehand.

But it is more than that. It isn’t just that exploration drives concrete economic benefits. We think in narratives. Since the Enlightenment and the industrial revolution, we have built a narrative of progress. With each year that passes, we feel that things improve. Our computers get faster, smaller, more capable; we develop new drugs and treatments for diseases and conditions that, before, would be crippling or a death sentence; with each year, our lives improve. For a century and a half or so, that feeling hasn’t been too far from reality. But most especially, we have continued to do something that cuts to the very center of what it means to be human: we have explored. We explored the most dangerous parts of Earth, we explored our oceans, we put humans into space, and humans set foot on a foreign body. There is a reason that, when we think of our greatest achievements as a species, landing on the Moon comes to mind with ease. At a very deep level, exploring the unknown is tied up with what it means to progress.

As exciting and useful as it is to send probes to other planets and moons, it fails to capture our imagination in the same way that sending people does. The reason is that doing so—exploring the unknown ourselves—is such an incredible risk. What Buzz Aldrin, Neil Armstrong, and Michael Collins did in 1969 was unfathomably dangerous. They knew—everyone knew—that there was a very good chance they would fail to get back to Earth. But they accepted that risk, because for them, learning about the unknown was worth it.

Abandoning human exploration of space, then, has consequences more far-reaching than its proponents intend. We would not just be abandoning putting humans into space; at some fundamental level, we would be resigning ourselves to staying here. We will have decided, as a species, that we have gone far enough, that our border ends at our planet’s atmosphere, and that the rest of the solar system and galaxy will be left to nature. And with that decision, we will resign ourselves to no longer exploring in the general sense.

That’s why it is so essential that we continue exploring. Pushing on the edge of what’s possible is what fuels our desire and ability to explore in all other areas, too.

There are still incredible mysteries for us to unlock. We don’t know whether Mars had life early in its history. We don’t know whether there are lifeforms swimming through Europa’s and Enceladus’s oceans as I write this. We don’t know whether there is intelligent life on planets in solar systems in the Milky Way and beyond. We don’t know how life began on Earth, let alone how life began at all. And most of all, we don’t know whether it is possible for us to move beyond our own solar system.

But what I do know is this: I want to know. I want to know.

August 8th, 2014

Apple’s Advantage

Monday’s WWDC keynote introduced easily the largest set of changes to Apple’s platforms since iOS 2 was announced in 2008. The effects of what was announced will be felt and discussed for years to come.
There is a lot to think through and write about, which I will be doing in the coming weeks. However, something struck me during the keynote that felt fairly small but, upon reflection, could end up being important to Apple’s future success.

Apple announced further updates to their cloud service, where you can save all of the photos and videos you take, all of your documents, and all of your data. Apple announced that their Touch ID feature, which identifies you using your fingerprint, will now be accessible to third-party developers as well. And Apple announced a new app and framework for centralizing all of your health and fitness data, which—given your permission—can automatically be sent to your doctor.

That’s in addition to storing your contacts, calendar, and reminders, and tracking your location over time (while keeping that data on your device) so your iPhone can provide you with timely updates on how long it will take to get home or to work in current traffic. Combined, Apple is asking you to store nearly all of your intimate information on their devices and servers, and even to provide the most intimate—your health data—to your doctor.

And yet I’ve heard little or no consternation over Apple’s consolidation of our most private data, in an era when our government maintains call logs, collects security and encryption exploits, breaks into private services to collect data, and has lied to the public about the extent of what it is doing.
That should be surprising, especially considering how much push-back companies like Google and Facebook have received for collecting and using our personal data. On the whole, people seem to trust Apple to respect their personal data.

The reason, I think, starts with the fact that Apple’s business is *not* their users’ data. Their business is selling devices and services to their users. As a result, Apple’s interest in their users’ data is not to generate revenue (which is inherently Google and Facebook’s interest), but rather to use it in ways that create compelling and meaningful products for their customers. Their incentives are aligned with their users’ incentives because of their business model.

Second, Apple takes this relationship very seriously. iOS makes it very clear when applications are requesting access to our personal data. Apple has worked quite hard to make sure that the *user* decides what and how much they want to share.
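
HealthKit’s authorization flow, as it shipped, makes that model concrete: an app must ask for access type by type, and the user decides in the permission sheet. A minimal sketch (the surrounding app context is assumed):

```swift
import HealthKit

// Ask to write step counts and read steps and heart rate.
// The user can grant or deny each type individually.
let store = HKHealthStore()
let stepCount = HKQuantityType.quantityType(forIdentifier: .stepCount)!
let heartRate = HKQuantityType.quantityType(forIdentifier: .heartRate)!

store.requestAuthorization(toShare: [stepCount], read: [stepCount, heartRate]) { success, error in
    // "success" only means the request was processed; HealthKit deliberately
    // hides per-type read decisions so apps cannot infer what was withheld.
    guard success, error == nil else { return }
    // Proceed, coding defensively: queries simply return no data for
    // types the user declined to share.
}
```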

I don’t think Google or Facebook could announce that they are going to collect their users’ health data and optionally send it to their doctors without substantial criticism and fear of abuse. The reason is obvious: their primary business is utilizing user data to generate revenue, so why wouldn’t they do the same with health data?

As time goes on, the integration of our smartphones and health-tracking devices, and the increasingly sophisticated use of the data they generate together, will become the primary space where meaningful development in technology occurs. There’s huge potential in what Apple has announced with HealthKit. If it takes off, it will be a single place to store all of our health data. That will not only benefit doctors, who will be able to see it all in one place for the first time; by aggregating it together for each individual (and potentially for groups), we will be able to see trends and correlations between our decisions and our health that we simply could not see before.

That has the potential for both better decision-making and for doctors to get ahold of us when something appears to be seriously wrong that we ourselves may not even be aware of. There is incredible potential here, and I think Apple is the only company that can pull it off. This puts Apple in a unique position as we continue into the future and provides a special advantage that no other company has.

June 3rd, 2014

No Answer

You have all the answers to my questions
Even ones I didn’t have
Why should I know anything at all?

You know everything I need
Everything I may need
You hold it all for me
So I waste no time

But still I wonder, why don’t I wonder?
Like I did as a kid
But no answer

March 31st, 2014

Tony Fadell Explains Nest’s Sale to Google

Last week, Nest was acquired by Google for $3.2 billion.

There are only a few companies that have truly excited me in the last few years, and Nest is at the top. They worked on a very original, very real problem—thermostats were not only a user-interface disaster that nearly everyone hated to use, but also an opportunity to do something meaningful: use energy more efficiently in people’s homes while improving people’s relationship with an important but ignored device. In addition, it clearly was the first product in a much larger plan. And it was a very good first step in a very good plan.

So, when I heard that Nest had sold to Google, I felt a pang of disappointment. Not because it was Google (which, if Nest were to be acquired, makes more sense than any other company I can think of), but rather because Nest is an incredibly ambitious company that, I think, had the opportunity to be as important to the next wave of development in personal computing and the Internet as Apple and Google were—and potentially as large. They were a key member in Silicon Valley’s next generation of meaningful companies, I thought.

Of course, nearly every bit of that can (and will) still be true, with the notable exception of remaining independent. They can still do all of that, but they will do so under Google’s banner, and for Google’s benefit. And that’s fine, all else being equal. Before I continue, though, we need to discuss why Nest decided to sell in the first place, and “for the money” doesn’t count, because I know Tony Fadell, Matt Rogers and everyone else there didn’t do it for the cash.

Here’s why they sold, according to Fadell:

I was spending nearly ninety percent of my time on building the infrastructure of the company and I wasn’t able to spend enough time and cycles on what I love doing: products and creating differentiated experiences for our customers. That is where my love is and Google offered to let us focus on that, but with scale that will help bring our horizon closer to us, faster. Google offers to bring that scale to us. For me, ultimately building great products is key.

Fadell cites European distribution as a specific example of what he means by “scale”—physical distribution and dealing with the legal issues surrounding something as regulated and disparate as energy. Fadell wants to focus his time on developing products rather than handling all the issues that surround them.

It’s hard to argue with that. Nest clearly wants to move quickly. The Nest thermostat is a shockingly good first product, and Nest Protect—which they released just two years later—is at least as good. Nest Protect also began revealing their larger strategy. Owning either one of them is great, but owning both of them makes each one better. Since they use your home’s wireless network, the Nest thermostat will automatically augment itself with the Protect’s motion sensors. And more importantly, if the Protect senses rising levels of carbon monoxide, the thermostat will shut off your home’s furnace. Their strategy, then, appears to be modular devices that are convincing on their own, but when used together not only all function better, but begin to form the basis for a connected home.

Being a part of Google will allow them to realize that strategy faster by increasing their resources so they can focus their bandwidth on developing product. Google also is doing industry-leading work in learning systems and web services, which obviously will benefit Nest. Like I said, of all the companies in the world that could have acquired Nest (which, admittedly, is a fairly short list), Google is the best fit.

But Google didn’t agree to acquire Nest entirely for Nest’s benefit. They did it, I assume, because Nest fills in particularly important holes in Google’s capabilities and in Google’s future development. While Google has been very good at building web applications, web services and a mobile operating system, they’ve done very little to prove that they can design and make hardware that real consumers will pay real money for. There’s a lot more involved there than design and supply chain. To a much greater extent, making hardware involves doing businessy things like identifying a target market for it, identifying what price they’ll pay at necessary levels of sales and margin, and then manufacturing a quality product in an efficient enough way to hit that margin. Nest has shown that not only can they do all of that, but they can produce an exceptional product that customers truly love. That’s invaluable, and it’s something Google hasn’t done.

Nest also provides an entry path for Google into the home. Starting into the connected home requires building hardware, and it requires a no-bullshit vision for how the connected home can improve people’s lives in substantive ways. Nest provides both of those things.

It sounds like a symbiotic relationship, then. Google can provide Nest what it needs, and Nest can provide Google something it needs, too. In Nest’s ideal vision of the relationship, Nest will remain largely independent—their own brand, leadership, teams and products. People and resources may flow across the Nest-Google boundary, but the two entities will nevertheless remain distinct. In Google’s vision, though, Nest will begin to overlap and merge with Google itself. If Google wants the Nest acquisition to result in an improved capability for creating hardware products that consumers really want, then that necessarily requires Nest’s leadership to extend outside of Nest itself—which would require splitting their time, too. While Nest may become functionally a discrete unit within Google (the “connected home” unit, let’s say), if it is to have any effect on the rest of Google, there has to be some sort of crossover. That may mean putting Nest’s leadership (whether that’s Matt Rogers or another member of the team) in charge of Google’s hardware, or even having people in leadership roles move back and forth across the boundary. In any case, the boundary begins to smear, and Fadell’s reason for doing the deal—to focus his team’s time exclusively on product—begins to seem less likely.

Of course, that’s not necessarily negative. Perhaps a Nest-infused Google, and a Google-infused Nest, is better for everyone involved—Nest, Google, and us. I think there’s a good argument to be made there. But inherently, as that occurs, Nest begins to fade as a distinct entity, and it becomes more Google.

I think the most optimistic comparison for this acquisition is Disney’s 2006 acquisition of Pixar. Pixar remained an independent studio, kept their leadership, kept their campus, kept their culture, and created some of their most artistically and commercially successful films afterward. In return, Disney received Ed Catmull and John Lasseter’s services in turning around its declining animation department. And turn it around they did; Disney Animation Studios is enjoying something of a renaissance. Frozen, released in December 2013, was Disney’s biggest hit since The Lion King. The Pixar acquisition is one of the most successful acquisitions in history.

That could be how it works out here, too. I suspect, though, that while Pixar has thus far been able to retain its independence, Nest will not retain independence to the same extent. I have two main reasons for thinking so. First, the Disney-Pixar deal was incredibly specific in its intent: the deal was Catmull and Lasseter would oversee Disney Animation and Pixar would remain its own studio. The Google-Nest deal, as far as I can tell, doesn’t appear to be nearly as well-defined. As a result, blurring will happen with relative ease. Second, while in the movie business it’s actually beneficial for Pixar to remain independent in substance and in brand—it allows them to experiment in ways they couldn’t necessarily do if it was all a single studio, and it also allows them to release multiple movies per year in a way that doesn’t feel like Disney movies are competing for people’s attention—that structure doesn’t make nearly as much sense for Google and Nest. In reality, centralizing their hardware operation makes much more sense than continuing Nest as a parallel operation to Google’s other hardware operations. As a result, I think what we are more likely to see is Nest more or less become a part of Google while the brand continues on as Google’s “connected home” brand.

In the short term, then, I think there’s very good reason to be excited about the deal. I bet we are going to see even more incredible things come out of Nest than we would have seen otherwise, and probably faster as well. But in the long term, I’m disappointed. Nest is one of those rare companies that identified a brilliant product idea, in a large market, that would allow them to develop into something much greater in the future. And along the way, they built a first-rate company in all areas. I believed Nest would be one of the most important companies in the world for the next twenty years. And while they may still be integral to the future of personal computing and the web, it will likely be under Google’s banner. For better or for worse.

January 21st, 2014

Where Does the iPad Go From Here?

When Apple released the iPad Mini a year ago, I wrote that the Mini wasn’t just a secondary option to the main iPad; it was the iPad. What I meant is that the Mini fulfilled much of the original iPad’s vision better than the original or any of its full-sized successors did. Because the Mini was so much lighter and so much easier to hold, it was not only more enjoyable to use while sitting on the couch or in bed, but opened up contexts that the full-sized iPad’s size and weight didn’t allow. The iPad’s promise was powerful computing available to you in something you could comfortably hold in your hands, and the Mini fully delivered on it.

With this year’s full-sized iPad, though, the line between the two devices blurred. It’s still discernible, but it’s murkier. The iPad Mini is still superior for reading since it’s lighter, but the difference has narrowed considerably. Forget how many grams each device is; the iPad Air is quite comfortable to hold one-handed. Not as nice as the Mini, but nice.

The Mini narrowed the performance and capability gap as well. It now runs the ridiculously fast A7 processor, same as the iPad Air and iPhone 5S. For many people, the Mini is big enough to write on, or to make presentations in Keynote. The full-sized iPad is still superior for those tasks, and especially for tasks like sketching that benefit from a larger screen, but the difference really isn’t that large. They are both quite capable devices for whatever tasks people want to use them for. The comparison is much more akin to a 13-inch MacBook versus a 15-inch than to an iPhone versus an iPad.

Which raises the question: where is the iPad going? More specifically, where are iPad applications going?

The original iPad was released in 2010 with the full iWork suite, and along with the iPad 2 in 2011, Apple released GarageBand and iMovie as well. GarageBand in particular feels like the high-water mark for creative applications on the iPad. Not only was it incredibly powerful and feature-deep, but it felt like it had been made for the iPad all along.

There are many other applications that are powerful in different ways as well. Paper is a wonderfully simple application, but is capable of remarkably beautiful and intricate work (Paper’s Tumblr makes that clear). Editorial is a well-constructed text editor that is, in some ways, superior to desktop-based text editors. Djay is, appropriately, a DJ application that could only exist on the iPad. And on and on.

I think, though, that we’re beginning to see diminishing returns from increasing the capabilities of existing iPad applications or pushing them into new spaces. Generally speaking, while iPad hardware is dramatically faster and more capable than it was in 2010 (or even last year), applications haven’t advanced at anywhere near the same pace. There are a number of obvious reasons for that, of course; faster hardware doesn’t necessarily imply that software—which depends on many things, but especially on good ideas and demand for them—will advance at a similar pace.

But there’s another reason: iOS still hews closely to the one-app-at-a-time, app-as-silo concept that began with the iPhone in 2007. For some purposes, this actually makes the iPad a better tool than the PC; the iPad’s limited ability to multitask makes it easier to focus on writing or other focused tasks like sketching. But it also significantly constrains what’s possible on the device. Writing an email or article that references a website or note, or doing any work that requires looking at one application’s content while working in another, requires flipping back and forth between applications, which makes for an excellent test of patience. And there is virtually no support for allowing two applications to work together on a single task by passing data between them.

Many people have suggested that renaming the iPad the “iPad Air” sets the stage for an iPad Pro, presumably with a larger screen. It’s difficult to disagree with that, but I don’t think an iPad with merely a larger screen would justify a “pro” moniker. The difference between what it is capable of and what the iPad Air is capable of would be fairly small; it would be better at sketching, but not much else. But where it would make sense is if the one-app-at-a-time model is relaxed. Using two applications on screen at once, a la Microsoft’s Metro, would certainly benefit from a larger screen. And building support for allowing applications to work together on tasks and share their data would justify the new name that much more as well.

While conceptually these two changes are fairly simple (I wrote about what applications working together could look like last year), the details are always where it gets difficult. How do you enter “split-screen” mode? How do you get out? What affordances do you provide to users so they understand what’s going on? Do you allow the user to drag content back and forth between applications (I’d hope so!)? How do you indicate what can and can’t be dragged? How do you implement the API to do so? And so on. None of it is easy. It’s inherently complex, and while we all want iOS to become more powerful, these changes are fundamental to iOS’s conceptual design, and a wrong move could endanger what’s made iOS so convincing in the first place: its simplicity.
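To make the shape of the problem concrete, here is a purely hypothetical sketch, in Swift, of what a minimal pair of protocols for dragging typed content between applications might look like. Nothing like this exists in iOS today; every name below is invented for illustration.

```swift
import UIKit

// Hypothetical: what a source app offers at a touch point, as typed data.
protocol DraggableContentSource {
    func items(at point: CGPoint) -> [(typeIdentifier: String, data: Data)]
}

// Hypothetical: what a destination app accepts. Answering `canAccept` is what
// would let the system highlight valid drop targets (the "affordance" question).
protocol DraggableContentTarget {
    func canAccept(typeIdentifier: String) -> Bool
    func accept(typeIdentifier: String, data: Data)
}
```

Even at this toy scale, the hard questions show through: who defines the type identifiers, what happens when a drop fails halfway, and how any of it is communicated on screen.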

Nonetheless, if iOS and the iPad are going to continue to progress and become more capable, then these sorts of changes are inevitable. That’s (generally) where we’re going. The rest is details. It’s also, coincidentally, where most of the work is.

November 4th, 2013

Nest Protect

Today, Nest announced their first new product since the Nest thermostat—Nest Protect. Nest Protect is a smoke and carbon monoxide alarm.

For an excellent look at Nest Protect, and a profile of why they made it and of the design process, you should read Steven Levy’s piece for Wired.

…Wait, what? A smoke alarm?

Yes. Nest’s latest product is a $130 smoke alarm.

Nest’s basic strategy should be obvious now: take things we use in our daily lives but don’t at all enjoy using, or actively abhor using, and think through them so that they’re both better devices and delightful to own and use. (It’s also worth noting that they’re choosing product categories that are very large and universally needed.)

It’s more than that, though. The Nest thermostat and Nest Protect are standalone devices, but they work together. If you have a Nest thermostat and smoke alarms installed in your home, the smoke alarms will inform the thermostat when there’s movement in the home—which should make the Nest thermostat’s “Auto-Away” feature much more accurate, and thus able to be that much more efficient with a home’s energy use.

What’s even more illuminating about Nest’s vision, though, is that if a Nest smoke alarm senses carbon monoxide, it will tell the thermostat to turn off the home’s furnace—a likely source of carbon monoxide poisoning.

That’s truly smart. Nest has not only built two devices that work together to efficiently manage your home’s energy and protect you from fire, but they’ve created two devices that can actively judge the situation and work together to intervene in your home to keep you safe.
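The logic itself is simple enough to sketch. Nest’s actual protocol and APIs aren’t public, so everything below is invented for illustration, in Swift, but it captures the rule the two devices implement together:

```swift
// Hypothetical sketch of the interlock described above; all names invented.
enum ProtectEvent {
    case motionDetected
    case smokeDetected
    case carbonMonoxideDetected
}

final class Thermostat {
    private(set) var furnaceRunning = true
    private(set) var autoAway = false

    // The Protect broadcasts events over the home's wireless network;
    // the thermostat reacts to them.
    func handle(_ event: ProtectEvent) {
        switch event {
        case .motionDetected:
            autoAway = false        // someone is home; improve Auto-Away accuracy
        case .carbonMonoxideDetected:
            furnaceRunning = false  // the furnace is a likely CO source; shut it off
        case .smokeDetected:
            break                   // alarm only; the furnace isn't implicated
        }
    }
}
```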

We’ve been hearing about the “smart home” for a very long time now, but this is the first time we’re legitimately there. Tony Fadell seemed to confirm this as Nest’s intent while talking with Steven Levy:

In other words, Nest isn’t only about beautifying the thermostat or adding features to the lowly smoke detector. “We’re about creating the conscious home,” Fadell says. “To take a truly important device that has had no great innovation and make that device really, really great.” Left unsaid is a grander vision, with even bigger implications: many devices sensing the environment, talking to one another, and doing our bidding unprompted.

That’s a grand dream, and I think the Nest Protect—ostensibly just a smoke alarm—is going to be a key cog within their plan. Think about it: it’s not just a smoke alarm, but an Internet-connected computer with sophisticated sensors and software in every bedroom and on every floor. It knows when you wake up (since it has a motion-tracking sensor), when you go to bed, and even when you get up in the middle of the night. Along with the Nest thermostat, they also know when you leave for the day and when you get home. There’s a lot of immediate information there to begin doing some incredible things, and it’s something that could serve as a platform for all kinds of other services as well.

So yes, it is “just” a smoke alarm. And a very good one. But I think it’s also a piece of a much larger plan: make products that are so good that they can stand on their own and you’ll have to have them, but also work together to create something we’ve never seen before.

October 8th, 2013

The 5C

In an excellent interview with Businessweek, Tim Cook explained their thinking behind the iPhone 5C:

We never had an objective to sell a low-cost phone. Our primary objective is to sell a great phone and provide a great experience, and we figured out a way to do it at a lower cost. Therefore, we can pass that on. And we figured out a way to sell 4S at substantially less than we were selling it for before, and we’re passing it on. So we think there will be a lot more people in our tent, and we can really serve a lot more people. And that feels good.

The iPhone 5C is fascinating to me because nearly everyone—including John Gruber—got it wrong: it isn’t a “cheap” iPhone. Rather, it’s something that’s both much more obvious and surprising.

Implicit in the idea that Apple should release a cheaper iPhone is that it would be a secondary model for people who want an affordable prepaid iPhone and for international markets; that is, an implicit assumption was that the iPhone/iPhone 5S would remain the mainstream iPhone. That isn’t what Apple is doing with the iPhone 5C.

Instead, Apple has taken the strategy they’ve followed since releasing the iPhone 4—take last year’s model and make it available at $99—and created a distinct product from it, and made it the mainstream iPhone.

Rather than move the iPhone down market with the iPhone 5C, Apple moved the “regular” iPhone—this year, the iPhone 5S—up market to become the pro version, and established the iPhone 5C as the “regular” iPhone. The iPhone 5C is now the iPhone that really is good enough for everyone. The A6 processor is fast, so is LTE, and the iPhone 5’s camera is very, very good. The colors lend it a feeling of accessibility, too; it feels less serious than the iPhone 5’s aluminum design, more fun, and the colors allow for a greater feeling of personalization and whimsy. (The cases only amplify that, misplaced circles aside.) It’s a very good phone at a reasonable $99 price-point, and it’s going to look much better in the store to potential customers than last year’s iPhone model did.1

Apple’s marketing certainly seems to be trumpeting this, too. Apple’s home page features the iPhone 5C, not the 5S, and it’s playing heavily on the 5C’s colors. They featured an iPhone 5C ad, not one for the 5S. Tim Cook and Phil Schiller referred to the iPhone 5S as Apple’s most “forward-looking” iPhone yet. Apple is positioning the iPhone 5C as Apple’s iPhone for everyone, and the iPhone 5S for people who want the best.

That makes some sense on the face of it; it allows Apple to sell a “new” iPhone at $99 with 16GB of storage, but with lower cost of goods sold, which means they can maintain their margin. It may also allow Apple to push the envelope a bit more at the top end because they no longer need to manufacture enough iPhone 5Ss to satisfy nearly everyone purchasing a new iPhone at launch. But if the iPhone is under mortal threat from low-end, commodity Android-powered smartphones, then this decision seems bizarre. It won’t compete with those devices. The iPhone 5C is cheaper, but it’s not much cheaper.

But it starts to make a lot of sense if you think that smartphones aren’t so far along that low-end, cheap models are good enough compared to the iPhone. If Apple can still provide superior hardware and software that, combined, make for a genuinely better device in a way that is palpable to regular customers, then Apple has no need to bloody itself in the low-end washing machine.

And that’s exactly what Apple seems to think. Tim Cook explains what he thinks makes Apple so special, and what makes this strategy possible:

You look at innovation like the iPhone’s camera and the detail that went into the camera. Most people hear the word camera, and they think of hardware. And hardware is really important to it, you know? With the stuff we did with the flash on this. But it’s software, and it’s the silicon—I mean, it’s everything.

So the way I think about Apple is that the magic of this place really comes up at its best when hardware, software, and services come together. And it’s sort of the intersection of those things is where things get incredibly magical. So facilitating that to happen and getting the collaboration level for that to happen is the magic here.

And one of my proudest moments is when other people see that. They don’t know that they’re seeing that, and that’s also the beauty. They don’t have to do it. But look at these (gesturing to iPhones). These are perfect examples where the hardware and the software and the service begin to blend. In some ways you don’t tell one from the other.

The iPhone’s camera is the perfect example of what Cook is arguing. The iPhone’s camera—a cellphone camera!—is now so good that many people have nearly no need for a dedicated point-and-shoot camera. This is only true, though, because Apple has focused on developing the camera in ways that can’t be captured well on a specification sheet but really do make for a better camera. Rather than boost their sensor’s megapixel count, Apple has kept it steady at 8 megapixels for three iPhone models, and instead has increased the sensor’s size. They’ve focused on doing rather incredible things with the iPhone’s Image Signal Processor to make, and choose, better photos. While these things don’t translate well into selling points for cell-carrier sales associates, they do make for a truly better camera, and customers do notice the difference. As a result, the iPhone feels like a device in a class of its own.

The obvious choice was to make a more affordable iPhone. I don’t think Apple is religiously opposed to making a cheaper iPhone, but they will only do so if they can make a convincing product. What Cook is saying is that making truly good products comes first. Eventually, I believe, Apple will do exactly that. That shouldn’t be a surprise; the iPhone 5C is highly reminiscent of my first Apple product, and one of my favorite devices ever: the iPod Mini. The iPod Mini had less storage than even the third-generation iPod (10GB versus the Mini’s 4GB), and wasn’t that much cheaper than the third-generation iPod ($299 versus $249), either. Critics at the time were perplexed; if Apple was going to make a lower-end iPod to compete with more affordable competing devices, the iPod Mini certainly wasn’t it.

But it didn’t matter, because it was a damned good product. For me (a high school student at the time), the lower price finally made it attainable, and the colors were fun in a way the regular iPod never was. The iPod Mini was incredibly successful, and it wasn’t the end: Apple replaced it with the iPod Nano in 2005 at lower prices, and that same year introduced the iPod Shuffle—a completely different kind of music player—at even lower prices.

I think the iPhone will follow precisely the same path. That is, I believe Apple will build some kind of “iPhone” product for the low-end eventually, but it may not look like an iPhone at all.2

In that sense, what Apple did was incredibly obvious: it’s what they’ve been doing since Steve Jobs returned to Apple. They don’t identify price-points and product attributes and then create a product to fill it, as most companies do. They create genuinely good, convincing products that solve real needs for people first.

If you’ve been concerned about where Apple is going under Tim Cook, this should be a sign that there’s nothing to be concerned about. Apple’s unrelenting focus on making truly great products is not only still there, but seems to be reinvigorated under Cook’s new management team.

There have been a lot of headlines lately with some variation of “Is Innovation Finished At Apple?” I believe the best may still be ahead of Apple.

  1. Yes, technically, it’s almost identical to last year’s iPhone 5. But for people looking at several phones, a colorful iPhone is going to be a lot more exciting than the iPhone you’ve seen everywhere for a year.
  2. This may be where Apple’s wrist device fits in.

September 24th, 2013

Designing Basil’s New Timers

With iOS 7, I am releasing a large update to Basil. There’s a lot to talk about, and I’ll do so more this week, but I wanted to discuss the process around designing one of the main features: recipe timers.

When I released the first version of Basil, it would turn any time directions (the “15 minutes” in “Simmer for 15 minutes on low heat…”) into a button. When pressed, a timer would slide up on screen from the bottom. It was one of my favorite features because it made cooking a little easier in an immediately understandable way.
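The core of that feature is pattern matching. As a rough sketch in Swift (illustrative only, not Basil’s actual code): scan each direction step for a quantity followed by a time unit, and attach a duration to that range of text.

```swift
import Foundation

// Find time directions like "15 minutes" in a direction step, returning the
// matched range (to turn into a button) and the duration in seconds.
func timerRanges(in direction: String) -> [(range: Range<String.Index>, seconds: TimeInterval)] {
    let pattern = #"(\d+)\s*(seconds?|minutes?|hours?)"#
    let regex = try! NSRegularExpression(pattern: pattern, options: [.caseInsensitive])
    let ns = direction as NSString
    let matches = regex.matches(in: direction, range: NSRange(location: 0, length: ns.length))
    return matches.compactMap { match in
        guard let range = Range(match.range, in: direction),
              let value = Double(ns.substring(with: match.range(at: 1))) else { return nil }
        let unit = ns.substring(with: match.range(at: 2)).lowercased()
        let multiplier: TimeInterval = unit.hasPrefix("h") ? 3600 : unit.hasPrefix("m") ? 60 : 1
        return (range, value * multiplier)
    }
}
```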

As soon as Basil hit the App Store, customers pointed out two large limitations. First, if you started a timer and switched to another application, it froze the timer. That’s not good, since switching to Safari to do things like look up how to properly slice a carrot into matchsticks or a substitute for masa is pretty typical in the kitchen. Second, Basil only allowed you to have a single timer running at a time, which made it difficult to time, say, two things cooking on the stove.
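For what it’s worth, the first of those limitations has a well-known fix, sketched below in Swift under the assumption that the original implementation counted ticks while the app was frontmost: store the timer’s absolute end date instead, and recompute the remaining time whenever the app comes back.

```swift
import Foundation

// A timer that survives backgrounding: the end date is fixed at creation, so
// the remaining time is correct no matter how long the app was suspended.
struct RecipeTimer {
    let endDate: Date

    init(duration: TimeInterval) {
        endDate = Date().addingTimeInterval(duration)
    }

    var remaining: TimeInterval {
        max(0, endDate.timeIntervalSinceNow)
    }
}
```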

I’m happy to announce that the new version of Basil coming out this week addresses both of those limitations. But that’s not what I want to talk about.

Since I’ve been aware of these limitations since just after Basil’s release, you may be wondering why I waited until now to address them. And you should be; this was Basil’s biggest usability issue. You may especially be wondering why it’s taken so long, since these limitations aren’t exactly a technically Hard Problem to solve.

It took this long because handling multiple timers turned out to be a fairly challenging design problem to solve while retaining Basil’s overall simplicity. Of course, I could have simply extended Basil’s current timer concept, and just show multiple timers across the bottom of the screen. It would look something like this:

[Image: Timers]

That would have worked, but there are several serious problems with it. First, it magnifies an issue with the original design: the timers cover up a portion of the direction step at the bottom of the screen. Second, with multiple timer views across the bottom of the view, it’s also highly distracting. Third, it’s not at all clear which direction step each timer is associated with—which is a very bad thing when you’re cooking multiple things at once and forget which timer is for what.

So that concept wouldn’t work. The next route I briefly considered was using the same basic concept, but pushing the recipe view up, and putting the timers underneath it. That would solve the first problem, but not problems two and three, and it only solves the first problem by reducing the recipe’s screen space. That didn’t seem like a viable solution, either.

The third direction I considered was creating a timer “manager” of sorts that would only be visible when the user had opened a timer. If there were any open timers, Basil would show a little circle in the bottom-right of the view indicating how many active timers there were. Tapping it would open a view across the screen showing what timers were open for each recipe. It would be like a global look at all timers for your recipes, and you could pause or stop any of them, from anywhere in the application. It would look something like this (in sketch form, anyway):

[Image: Timer Manager]

There are some advantages to this approach. It would solve problems one and two, since it would only be on screen when you want to glance at your timers. Also, it provides a lot of power: you can see where your timers are for your recipes from anywhere within Basil, so you can quickly check on the time for your beans on the stove while looking at your famous burger recipe to get them ready for the grill.

But as you can see from the illustration, it only adds that power by introducing enormous complexity. Now you have a button/indicator view that appears in the screen’s corner whenever a timer is open, and a view that could show dozens of timers at a time—which means that, for it to be useful, you would have to clear any timers you opened but didn’t end up using. And even when you manage it correctly, it’s still confusingly complex. There’s a lot going on in one view to comprehend when you’re trying to cook.

I dismissed that design path because it would effectively destroy the timer feature’s purpose by introducing complexity. If it’s too much to deal with, you won’t use it, and that’s especially true when it’s for an application that won’t be your sole focus most of the time. Implementing the timer manager would technically solve the original timer implementation’s limitations, and technically make it much more powerful, but substantively it would make the feature completely useless. And worse, it would water down Basil’s core focus: making things simple so you can focus on cooking.

Realizing how wrong the timer manager design was proved frustrating. I’d spent a lot of time and brainpower heading down that path, and it was completely wrong. What it forced me to do, though, was think about the feature’s purpose again with a fresh mind. Instead of thinking about how to make Basil’s timer feature—something bolted on top of the recipe view—more powerful, I thought about how I could make it simpler.

In Basil’s original design, I thought of the timers like a digital equivalent to an actual timer—something separate from the recipe itself that would slide on and off screen as needed. But then I thought that distinction didn’t make sense; a timer is conceptually tied to the direction step it comes from. If step two says to sauté onions for five minutes, that timer for five minutes should be tied to step two. So why not just put the timer in the direction step itself? Like this:

[Image: DirectionTimer]

While this doesn’t achieve the same power as the timer manager, it not only solves problems one and two, but also solves problem three by directly tying the timer to its direction step. There’s no question what a timer is tied to when you glance at it, and there’s no extra text to read to indicate it, either.
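In model terms, the change is tiny, which is part of why it’s right. A sketch, with invented names, building on the RecipeTimer idea above:

```swift
// Hypothetical model for the design above: each direction step owns its timer.
struct DirectionStep {
    let text: String        // e.g. "Sauté onions for five minutes."
    var timer: RecipeTimer? // nil until the cook taps the duration

    // Because every step can carry its own timer, multiple simultaneous
    // timers fall out of the model for free; no global timer tray needed.
}
```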

This design path both simplifies the recipe view visually, which is highly desirable for cooking, and naturally accommodates multiple timers. I’m extremely proud that it makes a more powerful feature possible by simplifying the design concept. Hopefully, if I’ve done my job, none of this will ever occur to the user. It should seem obvious, and they should never have to think about it.

I wanted to step through some of the design process and my thinking for this feature because often when we use applications, how an interface is conceptually designed seems obvious, and implementing new features seems like it should be incredibly easy. Designing interfaces, though, is never easy, and while much of it may seem obvious, the details that only become apparent once you’ve thought through every part of it are the ones that cause a design to fail.

September 16th, 2013

No, Mr. President. Snowden is a Patriot.

Earlier this month, in a press conference where he said he supported “greater oversight, greater transparency, and constraints” on the government’s digital surveillance programs, President Obama stated that he doesn’t believe Snowden is a patriot:

No, I don’t think Mr. Snowden was a patriot. As I said in my opening remarks, I called for a thorough review of our surveillance operations before Mr. Snowden made these leaks.

My preference — and I think the American people’s preference — would have been for a lawful, orderly examination of these laws, a thoughtful fact-based debate that would then lead us to a better place. Because I never made claims that all the surveillance technologies that have developed since the time some of these laws had been put in place somehow didn’t require potentially some additional reforms. That’s exactly what I called for.

Apparently, for Obama, calling for a “thorough review” of government surveillance absolves him of any responsibility for its current state, or for pushing for—and getting—a reform-free renewal of the FISA law in 2012.

Obama said Snowden should have used authorized channels to question the government’s actions, and that he would have been protected by a presidential policy directive Obama issued. He said this despite the fact that his directive doesn’t offer protection to contractors like Snowden, and that the procedures weren’t actually in effect in June 2013 when Snowden released the classified documents.

He said this while knowing that his administration prosecuted an NSA employee, Thomas Drake, who did precisely what Obama says whistleblowers should do: he raised his concerns through authorized channels within the government. After later leaking non-classified information to a reporter, Drake was charged with several felonies that could have resulted in 35 years in prison, and the federal government attempted to coerce him into helping them prosecute other whistleblowers.

Snowden, then, had very little choice but to do what he did. He saw that the government was violating the privacy of Americans on a massive scale, actively hiding the nature of these programs, and lying to the public about them. Following proper “procedure” would have resulted in his prosecution, and likely wouldn’t have led to the information being disseminated so widely, so he released it to journalists.

His choice—which came at incredible personal cost to him (losing his job, home, family and the right to live in his own country)—is what’s allowed the public to have any knowledge at all of what our government is doing in our name, and to have anything resembling an “open debate and democratic process” with regard to this issue. The actions of a man Obama says isn’t a patriot.

This administration has worked very hard to preserve the government’s surveillance programs and to prevent the public from understanding their actual extent. Obama’s calls for a “thorough review” and “greater oversight, greater transparency” are merely words. Words that come too late, and words that contradict his administration’s actions. Snowden’s actions are what have allowed us to have any knowledge of what the government is doing.

At great risk to himself, he helped expose the government’s surveillance programs so the public could know what was being done to us in our name by our elected leaders. If that isn’t patriotic, what is?

August 20th, 2013

“Yeah, That Feature Should be Easy to Do”

Today, Basil received a nice 1.6 update. This update brings a really obvious, simple feature that many other recipe applications include: ingredient scaling. It’s really convenient while in the kitchen to be able to adjust a recipe’s yield according to your needs (the amount of jambalaya you need for two on a quiet weeknight is a little different than the amount you need to make while having guests), and it’s conceptually very simple.

Except it’s not so simple.

It sounds like it, of course; when you think of doing ingredient scaling, it’s just taking an amount at the beginning of a line (“1 item,” “2 tablespoons”) and scaling it by some factor—cutting it in thirds, doubling it, whatever. That’s the approach most applications take for ingredient scaling: they assume all ingredients are phrased such that there is an amount at the beginning of the line and then a food item. Based on that assumption, they identify a number at the beginning of the ingredient line and scale it by whatever scale the user selects. In the basic cases, like “2 cups white sugar”, this works OK. Not great, but OK.

But the problem is that not all ingredient items are the basic case, because there are many ways to express things. What about ingredients like “1 (12 ounce) box of pasta” versus “2 medium cloves garlic (about 1 teaspoon)”? In the first case, the parenthetical measurement shouldn’t be adjusted, but in the second, it should be. In the second case, assuming that you only have to adjust the amount at the beginning of the line can be very confusing while cooking: after I double the recipe, do I use 4 cloves of garlic, or 1 teaspoon? This assumption—made because parsing imprecise language like this is difficult—actually makes cooking harder for the user, because they have to remember how their application fails at scaling ingredients correctly.

A new feature for cooking isn’t helpful if using it actually makes it harder to cook. And there are many other cases: what about scaling amounts expressed as words, like “half”? Or what about when you cut “1/2 cup” in fourths and it tells you to use “1/8 cup” rather than “2 tablespoons”?
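The “1/2 cup” case, at least, is mechanical once the code is unit-aware. A minimal sketch in Swift (illustrative only, US volume units, not Basil’s actual code):

```swift
import Foundation

// Common US volume units, expressed in tablespoons.
let tablespoonsPer: [String: Double] = ["cup": 16, "tablespoon": 1, "teaspoon": 1.0 / 3.0]

func scale(amount: Double, unit: String, by factor: Double) -> String {
    guard let perUnit = tablespoonsPer[unit] else {
        return "\(amount * factor) \(unit)"  // unknown unit: scale the number only
    }
    let tablespoons = amount * perUnit * factor
    // Prefer whole tablespoons over awkward fractions of a cup.
    // (Real code would also render fractions like "1/4 cup"; decimals keep this short.)
    if tablespoons < 4, tablespoons == tablespoons.rounded() {
        return "\(Int(tablespoons)) tablespoon\(tablespoons == 1 ? "" : "s")"
    }
    return "\(tablespoons / 16) cup"
}

// scale(amount: 0.5, unit: "cup", by: 0.25) -> "2 tablespoons", not "1/8 cup"
```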

Scaling ingredients isn’t useful if it makes you think about these things while cooking or doesn’t work for some ingredients. It’s a distraction.

I avoided building a scaling feature for Basil for that reason: I didn’t want to add more noise if it had all of these caveats. But I’m happy to announce that Basil now has ingredient scaling, and it does its best to do the right thing in all of these cases. It handles the cases discussed above, and many others, too. It will do its best to present ingredient amounts in a way that makes sense (no “1/9 cup sugar” or similarly bizarre things).

It isn’t perfect. There are probably cases I’ve missed, or cases it’s not handling properly, but I am committed to addressing those.

Working on ingredient scaling, though, reminded me that problems which seem exceedingly simple and easy to solve on the surface are usually icebergs waiting to sink you. They may look simple, but they tend to conceal much complexity underneath the surface that is only obvious once you’ve thought through the problem. These “yeah, that should be easy to do” features have, more often than not, been the hardest problems to solve in my experience.

This experience is also another lesson that while users like receiving new features, and features are exciting to announce, over the longer term users only want a new feature if it improves their net experience in some way. If it ends up not being useful in many cases, or adds complexity to the process of using your application, then that feature can make your application less useful than it was before.

August 6th, 2013