Chi Ling Chan – The Stanford Daily

Standardized testing: the scourge of student life

Finals week is in the offing, and the congregation of bikes outside Green Library is growing. As Stanford students prepare for their finals, high school students across the U.S. are clutching their No. 2 pencils nervously, hedging their college bets with a second or third SAT or ACT. Each year, more than 1.5 million students spend four stress-packed hours bubbling in answers, doing everything they can to bump up their scores on standardized tests. Most call it the scourge of student life, but endure we must.

But really, must we? How did young children with an eager appetite for learning find themselves bent over an average of 113 standardized tests between pre-K and grade 12? And then, more of the same in college? I’m not even going to begin talking about testing in Asia, where I grew up, and where testing is a fact of student life more so than anywhere else. Exams ought to be examined: Why test at all? What are we measuring ourselves against, and who decides what’s worth testing?

“Why we test” is a question that cannot be divorced from “why we educate,” and the history of standardized testing provides some provisional answers. The earliest record of testing comes from China, where bureaucrat-hopefuls were tested on Confucian philosophy and poetry before being deemed fit to govern. The ancient Greeks tested students through Socratic dialogues that led not to a score but to more dialogue; the obsession with a “correct” response, they thought, was for shopkeepers. Then came the Industrial Revolution, and with it the emergence of a universal public education system that took kids off the farms and trained them to become industrial workers. That was part of the whole transformation of American society in the nineteenth century, and a shared experience across most industrializing economies of the time.

Historically, public education was foremost in the service of industry and state ideology rather than independent learning. Certainly there is a lot of meaningful learning that goes on in schools, but there is also an awful lot of control and indoctrination. The point of standardized testing was not – as is widely advertised – to facilitate learning, but to sort people into various stations in industry. Standardized testing rewards “correct answers” and conformism, not genuine inquiry, because it was not designed for that purpose in the first place.

In the history of testing, Stanford played a non-trivial role. The famed Stanford-Binet Intelligence Scale was developed by Stanford psychologist Lewis M. Terman to test for “cognitive ability and general intelligence” based on five weighted factors: knowledge, quantitative reasoning, visual-spatial processing, working memory, and fluid reasoning. The Stanford-Binet test quickly gained traction across the world, inspiring many variants that continue to be widely used in schools, workplaces and the military. In the illustrious and dismal history of standardized testing, some folks at Stanford got to decide for the world who is smart and who isn’t.

The problem with any test for “general intelligence” is that it elevates those who score well on standardized tests to being “generally” intelligent, a stand-in for intelligence of all kinds. The careless assumption – that if you are “generally” intelligent you must be better at everything you do than the guy whose test score doesn’t match up – has led to a most pernicious halo effect around test performance. A signal of test-taking ability became synonymous with capability and (worse yet) with potential itself, used as evidence of ability far beyond what its predictive power warranted.

Here and everywhere, test scores are used as a signal, accurate to several decimal places, for admission into schools and employment – never mind that there are different types of intelligence, and never mind that test scores are imperfect proxies that correlate with parental income and racial profile. At their worst, they become for young learners a measure of self-worth: Those who do well on tests bask in overconfidence about their own intelligence, and those who don’t suffer crises of confidence and (often unfounded) doubts about their own faculties and potential. Both forget that all a test score can tell us is that we are capable of performing well within the circumscribed confines of a test; tests measure maximal performance, not typical performance.

This is not to argue that standardized tests must go. Neither am I advocating for the sort of heart-above-head egalitarianism that accompanies popular arguments against aptitude tests (the irony of that position, of course, is that test-based selection used to be an enlightened policy among liberals and progressives to level a hereditary caste system). That tests are not perfect measures does not mean that they cannot be useful. If designed thoughtfully, tests can be part of a feedback mechanism that helps us learn better. And given that a public school system through which millions of students pass necessitates some measure of performance, the more constructive question is not whether or not we test, but how we test and – most importantly – why we test. Tests designed for sorting are going to be very different from tests designed for learning.

Regardless, tests should at no point be the be-all and end-all, as they are now in public education systems. Even in their most enlightened forms, they should be no more than a small part of a student’s education toolkit. From the perspective of learning, passing tests doesn’t begin to compare with inquiring into and pursuing topics that engage and excite us – as learners, not test-takers. Looking back on my education, the moments of insight and learning that I hold on to most dearly came about not because a test told me what I “needed to know,” or because I lost a point where I shouldn’t have. They came as a result of inquiry into questions that excited me – questions that certainly do not end when a test timer forces me to put down my pen.

This piece was inspired by a recent roundtable discussion “Examining Exams” with Dan Schwartz, organized by The Stanford Roundtable for Science, Technology and Society. Contact chiling ‘at’ stanford.edu if you would like to continue this conversation.

In Conversation with Sandra Liu Huang

To call Quora the chimera of Google and Facebook is not unapt – the question-and-answer startup has the boundlessness of Google (there is no limit to what questions one can ask) and the sociability of Facebook (users collaborate and suggest edits to others’ answers). In many ways, the idea itself is nothing new. When Quora first got off the ground in 2010, there were plenty of such sites on the Internet. Yet, as its founders saw, “no one had come along to build something that was really good yet.” What differentiated Quora was not so much the idea, but the product itself.

The lady behind some of the most important nuts and bolts that make Quora tick is Sandra Liu Huang, its director of product management. A Stanford alumna (class of 2002), Sandra made her foray into the tech scene more by accident than by intention. She came into Stanford with an inclination toward the humanities, but that changed when she stumbled upon CS 106A after a high school friend demanded she take him to the class. Several more computer science classes later, she found her own sweet spot in the program in Science, Technology and Society.

Into the Valley

Still, it did not become clear that she was going to end up working in Silicon Valley until she took Tom Krosnick’s popular class on Global Entrepreneurship Management. “That class got me asking several questions: What do I value? What am I good at doing? What will people pay for?” Several internships later, she decided that the high-velocity, action-oriented culture of Silicon Valley was the right fit for her. It also gave her the opportunity to engage intellectually with one of the problems at the core of STS: how innovations affect the way we live, act and adapt.

In 2002, she walked straight into the dot-com bust as a fresh graduate, eager to be at a high-velocity workplace. “At that time, given the relative shortage of venture capital, it really mattered that companies were adding real value,” she recalled. She landed her first job in marketing at Google, then a relatively small company with 2,000 employees.

Complexity and Consistency

Quora came along in 2009 with a simple proposition: If someone’s brain held a piece of useful knowledge, that knowledge should be able to find its way out there for everyone. Over time, it evolved into a space where people share experiences, from what it’s like to get accepted into Stanford to what it’s like to fly on Air Force One (Sandra’s personal favorite). With Hollywood wanting to turn a single Quora thread into a TV show, it is also proving to be fertile ground for the imagination.

What makes Quora such a gold mine for the curious and the inquisitive? Answer: the presence of other curious and inquisitive people. A critical mass of people with solid answers doesn’t snowball without a solid platform. As product manager, Sandra has made it her mission to make Quora a smooth place to share what we know, through good user interface design.

Getting things right is, as any experienced product designer would attest, an iterative process requiring trial and error. For Sandra – product design majors on campus might take note – two design principles are key. The first is the effective management of complexity. “We could make a design issue more complicated for users, or for backend engineers; there’s a very delicate tradeoff between the two.” The other is consistency: ensuring that components are familiar to users and that they rest on the same building blocks on the backend. She credited Stanford’s human-computer interaction classes with preparing her well for designing intuitive, empathetic interfaces.

Leaning in and leaning out

If a woman in tech is as rare as a girl on a skateboard, a woman tech executive remains, unfortunately, a curiosity. Despite various attempts to create a more gender-balanced industry – from Sheryl Sandberg’s “Lean In” movement to projects encouraging girls to code – the lack of representation of women in tech remains an outstanding problem in the Valley. This is especially true of management and executive-level positions.

“It’s a hard problem with many inflection points,” Sandra reflected. “A lot has to do with culture – some stereotypes find their way into people’s heads at an early age. On the leadership side, it’s all about keeping women in the workforce.” Although her identity as a woman never figured prominently in the early stages of her career, over time that perspective changed. “Having a family gives you a different experience,” the mother of two said. “It’s really important for companies to have policies that make it work for women.”

Clearly, the choice that women face is not as simple as one between leaning in and leaning out. Sandberg’s advice not to “leave before you leave” is an empowering and necessary one, but confidence can only do so much. Ultimately, the message of empowerment has to be complemented by structural changes in the workplace. “Your career is not lateral,” said Sandra, “and sometimes it’s not as simple as a choice between leaning in and leaning out.” Silicon Valley, it seems, needs to figure out how to make it easy for women to get back into the game even after they choose to lean out.

In the meantime, navigating the tech industry might take more than grit and gumption. “What, then, does it take?” one might ask. It wouldn’t be a bad idea to ask Quora.

Interview and profile by Chi Ling Chan, contactable at chiling ‘at’ stanford.edu.

Privacy is (not) dead

If you had been at Encina Hall last Thursday evening, you would have run into Dave Eggers, who was in conversation with creative writing professor Tobias Wolff about The Circle, Eggers’ recent dystopian novel about the death of privacy. In Eggers’ world, secrets are lies and privacy is theft; three wise men rule the world and demand that all aspects of human existence flow through their portal. Here, privacy is (really) dead and you are trapped in a hyper-visibility ruled by a digital aristocrazia that profits from your individuality. You are watched not by Big Brother, but by countless Small Brothers whose surreptitious presence lies behind the illuminated screens of the gadgets that never leave you. You live in a universal digital dorm room, a panopticon in which everyone is a cube, both watching and being watched by every other cube. You are part of a counterculture everybody else is part of, naked to the shapeless crowd with whom you conform. There are no secrets to be had, and you have no right to be forgotten.

The point of a dystopian novel is to show, with its Orwellian brushstrokes, that the truth is closer to fiction than one might think. Eggers adds to a growing number of works—both fiction and non-fiction—that are raising uncomfortable questions about the tyranny of transparency, perpetual presence in social networks and the voracious information appetites of both corporations and governments.

In some ways, these issues have been beaten to death. But their popularity also reflects growing concern about privacy in an information-saturated world. And this is as it should be: Measured against its potential implications, privacy remains an underaddressed concern. There is little consensus on what constitutes a reasonable expectation of privacy that society should work to uphold, and ‘regulation’ has thus been far more akin to firefighting than to a well-considered framework of rules.

Meanwhile, privacy norms have shifted surreptitiously, without our noticing: It is now normal, for example, for parents to track every movement of their children through phone apps; it is normal for employers to have access to employees’ work computers and emails without prior permission (why trust when you can track?); it is normal for governments and profit-making corporations, lurking behind ‘terms and conditions’ that were never designed to be read, to track our online behavior. Information that our parents’ generation might have jealously guarded is now shared across a dizzying number of channels and platforms, recombined and used in ways of which we have little knowledge. Someone who makes a deliberate effort to protect her privacy is easily taken as “having something to hide.”

“Privacy is dead. Get over it,” proclaimed Mark Zuckerberg in a TechCrunch interview in 2010, as he made the controversial decision to change the privacy settings of Facebook’s 350 million users. The implicit assumption of many technologists who are shaping the online ecosystem and architecture to reflect the new social dial tone of the 21st century is this: Humans are, by nature, social animals, and hyper-connectivity brings us closer to living authentic lives by eliminating loneliness and enabling full disclosure.

But even if privacy is dead, should we simply ‘get over it’? Perhaps I am indeed less sanguine, but I am of the view that our uniqueness as a species lies in our ability to stand apart from the crowd, to disentangle ourselves from society, to be let alone and to be able to think and act for ourselves. What people don’t share is just as important as what they do share. There is nothing contradictory about the Aristotelian notion that humans are social creatures and Mill’s vision of liberty being grounded in individual autonomy. After all, there is all the difference between saying ‘I want to be alone’, and saying ‘I want to be let alone.’ Defenders of privacy never said the former; Zuckerberg just seems to have missed the point.

Where reality might depart from dystopian sci-fi is this: The diminishing space and freedom to be let alone may have little to do with an insidious grand plan to undermine privacy in the abstract. Rather, there are simply actors doing what they can within the bounds prescribed by society, and it happens that we are moving in a public-by-default, private-through-effort direction. The speed with which technology is altering privacy norms has given us neither the time nor the chance to renegotiate the shifting lines between public and private. The litany of concerns – from government surveillance to the psychological implications of radical sharing – is long and daunting. Set against the immediate gratification of clicking ‘I agree,’ distributed long-term costs have never seemed more abstract. Why be bothered by Amazon tracking what you read and buy when the bargains of two-day shipping and freebies seem like no-brainers?

Against this backdrop, the main contribution that Eggers – alongside many others – makes to the privacy debate does not reside in his having anything new to say about privacy. In fact, little is new. He and the others are merely asking that we pause, that we linger over what it means to surrender shreds of our personal life. They ask a question with no easy answer: What is gained and what is lost? Whatever that answer may be, it is hardly time yet to ‘get over it,’ as Zuckerberg would have us do. Not now, and probably not ever.

Contact Chi Ling Chan at chiling ‘at’ stanford.edu.

The trouble with moral culpability

Lex talionis, a.k.a. “an eye for an eye”: the age-old principle that has us believe that every crime has its fitting punishment, and every wrongdoing a just recompense. Since Hammurabi’s Code, “retributive justice” – a most oxymoronic coupling, one might say – has worked its way into the justice system and permitted punishments ranging from incarceration to the death penalty. It has allowed us to nod our heads quietly as society systematically punishes people based on an elaborate system aimed at according a criminal his just deserts.

Central to any retributive punishment, of course, is the premise of moral blameworthiness, which presupposes moral agency and free will. When we fault or punish a person for doing something wrong, we are implicitly saying that the act was committed of his own volition, of his own free choice.

But the most cursory examination of scientific evidence should give us pause. We now know that when our biology changes, so do our decision-making processes and our desires. The drives we take for granted – be they sexual drive, aggression or motivation for work – depend on the intricate workings of our neural circuitry and biological machinery.

Damage to the amygdala, for example, can lead to a constellation of symptoms, including fear, blunting of emotion and overreaction. Frontal disinhibition is known to impair decision-making capacity and is implicated in personality changes, causing patients to lose their ability to control impulses. In one study, 57 percent of frontotemporal-dementia patients exhibited sociopathic behavior, compared with only 27 percent of Alzheimer’s patients. The former lacked premeditation and claimed remorse, but did not act on it or express concern for the consequences.

Clinical studies are also contributing evidence that not only are we not free to will, we are not free to “won’t.” For patients with Tourette’s syndrome, actions occur in the absence of free will. The neural machinery triggers actions that the patient has absolutely no control over. The same applies to people with chorea, and to a host of other conditions of which involuntary actions are symptomatic.

It seems that the more we know, the more free will looks like an illusion. Increasingly, scientific evidence is challenging the very premise of blameworthiness, dissociating moral culpability from the moral agent. If a person cannot help but behave in a way that runs roughshod over societal norms, for biological reasons that we are only beginning to get a handle on, can society reasonably fault him? On what grounds is systematic punishment – ranging from incarceration to capital punishment – justified?

As science reveals more about what drives behavior, “evil” or “ill will” – and all the words that underscore blameworthiness – are perhaps just words for what we cannot yet explain. As we get better at measuring brain activity, behavior that once seemed inexplicable and human “intent” that was once incomprehensible might finally be explainable, even if only partially. Strip away free will and all assumptions of intent, and we come face to face with disease. The dead men walking are not so much “evil” or offspring of the devil as they are plagued by their own biology. If we believe that, then capital punishment becomes a preposterous notion: Just as we do not kill patients infected with a disease, we should not kill criminals, no matter how heinous their crimes. Getting rid of an infected patient, the physician would tell us, does not get rid of the disease.

This is not an argument for the complete elimination of punishment, or of moral responsibility per se. After all, different types of punishment serve different purposes, and retribution is not the only reason we punish. But the evidence we glean from science does pose a challenge to the premise of moral culpability, which we often take for granted in our legal argot and moral reasoning. We are learning that biology and agency are not easily separable, and that we exist on a continuum along every possible axis used to measure human beings.

This presents some thorny problems for the judge. Supposing all conditions relating to the circumstances of a crime are held equal, should person A and person B (whom we must reasonably assume to differ along some axis) be ascribed equal culpability and hence receive the same punishment? Certainly the current legal system, at least in principle, does so in the name of equality before the law. But if we really are the unequal oddballs that we are, the question that hangs in the air is: Should it?

As always, the leap from the is to the ought is never an easy one to make, but my sense is that retributive justice, or the idea of “just deserts,” deserves far less currency than it has now. In place of retribution should be a focus on rehabilitation and recidivism reduction. This calls for more science, more innovation and more experimental approaches to figure out, to that end, what works and what doesn’t. Whatever emotional satisfaction retributive punishment might bring to the punisher or the victim, there is no “justice” to be had through retribution.

The lesson we’re taught in pre-school sandboxes still rings true: an eye for an eye makes the world go blind.

Contact Chi Ling Chan at chiling ‘at’ stanford.edu.

Litost, and Love

One summer day, I found myself sharing a bench outside Encina Hall with a stranger. In place of awkward silences, said stranger wanted to know what I was reading (Carl Jung). A long conversation about dreams and symbols ensued. And then, quite out of the blue, she asked:

“Are you happy at Stanford?”

Instinctively, I wanted to say “yes,” because 1) saying otherwise might require further explanation; 2) it is a standard answer almost expected of every Stanford student when a stranger asks (warning: duck syndrome); 3) it is not untrue.

“Yes,” I paused, “but I cry a lot too.” I thought back on the occasional nights when I cried myself to sleep, over heartbreaks, self-loathing, the general condition of being lost. “I guess I’m not out of the woods yet,” I said, seeing no reason to lie.

“Most people your age aren’t,” she said, “but it is especially hard for you Stanford students, as far as I can tell…” She had worked as a counselor on campus before. “In a generally happy place like this, there is not quite enough room for sadness.”

But we know there is no lack of that on campus; a recent survey by the Daily found that 23 percent of the student population had considered attempting suicide. A surprising statistic, perhaps, for a place that many from the outside call paradise. But I hardly think this is something exclusive to Stanford students; growing up and growing old in this wild and crazy world is just not the easiest thing we have to confront. Everyone has a secret sadness which the world knows not, and when that sadness isn’t given the attention it deserves, it turns into an internalized battle of ambivalences, of aggression turned inward. A human condition that has come to be known as depression.

The Czechs have a word with no exact translation into any other language. The word is litost, which is a feeling as infinite as an open accordion, a feeling that is the synthesis of many others: grief, sympathy, remorse and an indefinable longing. As Milan Kundera puts it, “It is a special sorrow, a state of torment caused by a sudden insight into one’s own miserable self.”

In our childhood we are neither privy nor subject to litost because we are capable of forgetting the self; this is why we do not see children wallowing in their own misery, and why we are so envious of them. Coming of age brings to bear a new consciousness: We become painfully aware of our faults and our shortcomings, and, if we fail to come to terms with the general imperfectability of being human, self-loathing emerges from the excesses of litost to cast a long shadow over the self. This new consciousness descends right as we lose the ability to forget, and we become incapable of forgiving or forgetting ourselves.

The only effective remedy for this rather dreadful state, as 200,000 years of experience with human existence have taught us, is love.

***

“Have you found love where you’re at?” she asked.

“Do you mean to ask if I am single, or do you mean love in the way I love food and books?”

“I mean, have you found love for yourself? Love for who you are?”

An impossible question to answer, I thought. In that long, wide pause several muses spoke.

One can find love in many ways: in a person, in an undertaking, in the (always) understated beauty of the world. Of all things, love for the self is the hardest. It presupposes knowing who the self is, which is impossible to know in the present tense. I can only know who I was and who I want to be; only the future self can tell me who I “am,” by which time only the past tense would be appropriate. If the present self always evades definition, how does one love that which one does not know?

“Sometimes,” I replied.

I traced the genealogy of all the moments that constitute “sometimes.” Growing up has meant the loss of absolute identifications: The child learns that Santa Claus is a lie (though why we lie to children is a question that still baffles me), and, as she grows out of her comfort zone to confront strangers appraising her, that “good” means different things in different contexts to different people. The self fragments and longs for reconstitution. Love as a source of absolute identification provides a permanent measure, an antidote to the dreadful litost. It enlarges us, and gives even sadness an aspect of purpose.

But love doesn’t come easily, even though it is sometimes the most taken-for-granted thing in the world.

It is often said that one cannot properly love others without having first loved the self. I often think that the reverse is more true: One who is incapable of loving another cannot properly love herself. And we learn to love ourselves through simple acts of loving and being loved by others.

I recently learned why it is that I feel this strange joy when I cook for others. It dawned on me one day, while I was cooking a pot of curry, that it had everything to do with my mother: That had been how my mother showed me love, every day for as long as I was living with her. Cooking became one of the ways I knew to love, too. I seldom tell the friends I cook for that I love them very much (cheesiness is a crime these days), but I hope they felt the love in their tummies.

In the end, the correct question to ask is not for what we live, but for whom we live. Without others, we are inscrutable to ourselves. Who “am” I without all the ties by which I am constituted? In all our searching, the only thing we have found that makes the emptiness and litost bearable, is each other.

***

“I hope you find it always,” she said.

“And same to you.”

We left it at that. I began my trek to Trader Joe’s just as the cerulean blue in the sky gave way to a dusky red. It was almost dinner time, and it was time to cook.

 

Contact Chi Ling Chan at chiling@stanford.edu.

On Changing the World

When you’re in the epicenter of technological innovations and at the cutting edge of science, it’s easy to get carried away. “Change-the-world” stories abound, and “creating impact” is the staple of many a student’s personal ambitions. In dining halls, one frequently has meals interrupted by a starry-eyed student-cum-entrepreneur promoting a new app or business idea that promises to be game-changing and disruptive. Usually, the “nothing-like-anything-you’ve-seen-before” app turns out to solve a problem that wasn’t even there to begin with. Do I really need yet another “game-changing” mobile payment app that lets me pay my friend who sits right across from me at the dinner table?

Here on the Stanford campus, making something cool beats making something that matters. Too often one encounters smart, well-trained engineers who could help cure cancer or fix healthcare.gov but who are working on a sexting app. And then there are those who take Peter Thiel’s drop-out-of-college advice seriously. Each year, more than a few students drop out of college to work full-time on their next big startup idea. A former lecturer of mine recently complained to me about a student who was not paying attention in class because he was working on his new startup during her lectures. By the end of the quarter, the sophomore told her he was dropping out to start a company.

It feels almost as if starting a startup should be on the bucket list of every Stanford student. “You’re a Stanford student,” I was once asked, “what’s your startup?” Maybe that explains why, instead of hearing “I’m going to solve Problem X,” we are increasingly hearing “I want to start a company.” On some rare occasions, one encounters real diamonds in the rough—like Code the Change, which tries to bring computer science and social change together to benefit non-profits—but an initiative like this one is the exception rather than the norm.

Sometimes, the “world” we think we know is in fact so small it extends no further than the Silicon Valley bubble. It’s a world where apps and new technology are being built to make lives better—but for a subset of society that is already very privileged. The “problems” that emerge out of need-finding (rather than need-solving) exercises tend to be very first-world, and in part this is because the ideas that are rewarded here in Silicon Valley are those that solve “problems” faced by a very narrow demographic—ours. But some such problems are really not even problems to begin with.

And when real problems are taken on, the approaches taken have a tendency to exhibit traits of what Evgeny Morozov calls a “solutionist” predisposition: the idea that there is a technological quick-fix for every problem in the world. For the record, “Africa? There’s an app for it” is a real headline on Wired. There is a tendency to fixate on what the new arsenal of digital technologies allow us to do without first inquiring what is worth doing.

Silicon Valley—and this is true, to some extent, of Stanford too—has no lack of damn-the-establishment hackers and utopian cyber-gurus who like to view technology as beyond politics and society. It is a worldview rooted in their belief in technological determinism—the reductionist perspective that presumes that everything, including social relations, will sort itself out as long as we have technological progress.

Solutions to real social problems—those that actually do make a difference—are usually less sexy. Technology can be part of the solution, but it seldom is, if ever, the silver bullet. That “paradigm-shifting” shiny new app might be spiffy and might have had venture capitalists pouring in millions in funding, but it does not save the world.

The pressing problems of our day – from poverty to inequality, public health to education systems – require adroit inventions and adaptations in politics and social relations. They often demand long and protracted institutional responses, not one-off hackathons. And then there are the real-life equivalents of NP-hard problems: problems to which there might not even exist a solution, and for which we can hope, at best, for a good response. Let’s also agree that there is no app that can “save Africa” – if “saving Africa” is even a sensible phrase to use at all.

If you’re on your way to building the next app that the first world will love for a few months and then forget, think hard about whether you’re delivering on the promise of making the world a better place. Don’t start a startup just because you can. Think outside of the technological quick-fix box. If you see a problem that could require a technological approach, try building non-technological solutions to fix it. And, finally, if “changing the world” is part of the personal narrative that motivates what you do, then perhaps the first step should be to try to understand how the world works.

Thanks to PQ for reading drafts of this column.

Contact Chi Ling Chan at chiling@stanford.edu.

The new digital age (part 2 of 2)

See part one of our coverage here

The new problem of the digital age — though some would call it a blessing — is that ordinary citizens now have the power to create major disruptions to democracies on a scale previously impossible, and previously mediated by institutions. Never before have democracies stood so precariously on the edge of chaos.

What Cohen calls the “empowering bias” of technology raises expectations of what ordinary citizens can do, but the success of a revolution depends on far more than technology alone. “No revolution is successful on the day of, or the day after,” Rice argued. “The Internet leaves little time for present-day leaders to operate.”

The balkanization of online information has created echo chambers that lead to the extreme polarization of political views we see in America today. The most pressing problems require time and political leadership, and both Schmidt and Rice fear that the digital age has compromised the opportunity for public deliberation.

And then there is, unavoidably, the question of privacy. Less was said of the NSA’s incursions, and more questions were asked about the accountability of private corporations. After all, the real threat to privacy may come not from governments, but from private corporations.

“Target knows more about you than the government does,” Rice quipped. “And whereas the NSA doesn’t care about what you said to your grandmother, Amazon cares.”

When corporations have control over so much of ordinary citizens’ data, what keeps them accountable? Should we, as a member of the audience suggested, hold elections for Google, which controls the data of much of our online existence?

Schmidt believes that the competitive market does the job: users’ vote of confidence is their continued use of Google’s services, which feeds on people’s trust. One could always switch to Bing — the difference that sets private corporations apart from government is that they have competitors that constituents can switch to at any time. They also don’t have a monopoly over the use of force, and are subject to the laws of the land.

It is refreshing that the chief architects of our new digital age are forthright about the limitations of technology, and have largely abandoned unqualified paeans to technology. But they are also, clearly, the chief optimists of this digital age.

“We have the right architecture,” Cohen said, “but we don’t know who the players will be.” Schmidt is a self-proclaimed “optimist about people” who believes the empowerment of technology is such a great story that he “never want(s) society to stop it or slow it down.”

Ultimately, the use of technology for good or ill depends on the guiding human hand in the new digital age. Forget all the talk about machines taking over. What happens in the future is up to us.

This post was originally published on thedishdaily.com before it was acquired by The Stanford Daily in summer 2014.

The new digital age (part 1 of 2)

It is almost a truism that the advent of Google is shaping not only Internet history but also human history.

“The point is not just to understand the world,” so spoke Marx, “but to change it.” Much is known about how Google, in its quest to organize all of the world’s information, has changed the world. But much less is known about how it understands the world, the new digital age and the role of technology in the socio-political landscape in which we live.

How will war, diplomacy and revolution change with increased access to information technologies? How much privacy and security must individuals relinquish in the new digital age? Is there anything technology can do about ongoing revolutions?

The CEMEX auditorium was packed to the brim Tuesday with listeners eager for answers to these questions. Onstage, technology met policy: Jared Cohen — a young Stanford alumnus and director of Google Ideas — was flanked by former Secretary of State Condoleezza Rice and Google Chief Eric Schmidt.

Right from the get-go, the spotlight was cast on the darkest and most autocratic places in the world. “The totalitarian cult of personality has been completely eliminated by the Internet,” Cohen said, “because the ability to create a society without doubt is no longer possible in our new digital age.”

This explains why there exists an upper bound to what China can do to its people before its legitimacy comes under threat, unlike places like North Korea, where the visibility of government abuse is something more ardently to be wished for than seriously to be expected. Even if the Internet does not lead to more knowledge, it has created greater awareness and visibility that bring to light atrocities and abuses before they become pervasive. Grievances become scalable.

“When a billion people in China are coming online – people who had no visibility in urban areas before – the grievances of one city or province have the potential to scale, and that would be game-changing,” Cohen said.

But there were also sobering analyses of what technology cannot do. This is where tech idealism meets realism. In the Syrian crisis, for example, technology is of very little help to refugees who have left behind everything that matters. Technological intervention is quite helpless without proper state intervention because, as Cohen reminded the audience, “States are still the dominant unit [of power] and they’re the ones who have to take charge.”

More disturbingly, Schmidt argued, technology can be used in the service of bad ends, especially when it is part of the panoptical apparatus of autocratic regimes. In the hands of the masses in societies that have not grown up with doubt and choice, unintended consequences abound, especially when the Internet is used to inflame tension and hatred.

Even in democracies, greater transparency brought about by the Internet does not always translate to better policies.

What worries Schmidt is not the fact that governments today collect large amounts of data on citizens, but that the data could be leaked. In the past, the dissemination of knowledge was limited by how fast one could make Xerox copies; today it merely takes a Manning or an Assange for millions of government documents to be stolen and distributed widely.

Schmidt and Cohen are, much to the surprise of some in the audience, decidedly anti-WikiLeaks. And for good reason: There is no way anybody could pore through every single document to ensure that they would be used judiciously.

“If you’re going to gather this information,” Schmidt advised, “be sure that it can’t be easily stolen.”

This is becoming harder to do in an age in which it is increasingly difficult for the CIA to keep secrets and public confidence in intelligence agencies has taken a beating following the Snowden revelations.

See part two here

This post was originally published on thedishdaily.com before it was acquired by The Stanford Daily in summer 2014.

Change, Inequality, and Indifference in Silicon Valley, Part I: When the Google Bus Stops

This column is the first part of a multi-part series. The next segment will be released in two weeks.

An earlier version of this piece appeared in the Dish Daily.

There are two men standing in front of a bus headed for Mountain View and the sign they carry reads “FUCK OFF, GOOGLE.”

For the Google employees aboard the “Google bus” taking them to work that day last month, confronting that jarring message probably wasn’t the best start to their morning. Then it got worse. Protestors in Oakland attacked the bus, smashing a window and distributing fliers calling for a moratorium on evictions of Oakland residents. It is tempting to see this as an isolated incident mounted by some disenchanted Luddites, but it simply isn’t. Just last week, San Francisco activists blocked an Apple bus, parading a wooden coffin bearing the words “Affordable Housing.”

Just what is going on? Why, for all the ostensibly democratizing nature of the technology celebrated by Silicon Valley, are its employees seen as “chums living on free 24/7 buffets driving up housing prices”?

There appears to be some sort of hipster-on-hipster hatred going on in the city, not unlike the fuzzie vs. techie tension so pervasive on the Stanford campus. But just as the psychology of the oft-parodied fuzzie vs. techie tension runs deeper than humanities majors hating on computer science, the housing protestors are not taking to the streets because they are against technology per se.

Tech work isn’t some harbinger of evil, and there is no reason to believe that Bay Area residents see tech as the enemy, even if viral photos of angry banners tempt us to. Soon after the incident, the University of San Francisco released a poll suggesting that bread-and-butter issues – for example, the declining affordability of housing – are at the heart of the debacle.

Partly because of the ongoing tech boom in the Bay Area and the recovering real estate market, housing prices have escalated dramatically: The median price of a home in San Francisco topped $1 million earlier this year. Renters have been hit especially hard, with the median rent for a two-bedroom apartment reaching $3,250, the highest in the country. Over the past year alone, the Bay Area as a whole has seen a 22 percent increase in home prices, and between 2010 and 2013, the median rent in San Francisco rose by 15 percent.

These rising prices have been met with community upheaval. 2013 alone saw 1,716 evictions, the majority of them of seniors and people with disabilities. Existing residents in Bay Area communities are being squeezed out by young yuppies working in the tech industry who desire to experience the thrills of urban life.

***

The root cause of the problem doesn’t lie solely with Google, Apple or the rest of the tech industry, although they were specifically called out in the protests. The simplest explanation is that, for a multitude of reasons – access to jobs being one, the allure of city life being another – many tech employees have chosen to live in San Francisco. In a way, this is a happy problem that other declining cities in the U.S. can only dream of, but fast growth and the rapid influx of workers are causing a housing crisis, making the Bay Area and San Francisco in particular victims of their own success.

The real fault, I think, lies in San Francisco’s housing policies, which have not responded to the surge in demand. Thanks to restrictive zoning laws, a Byzantine permit process and a pathological culture of NIMBYism, housing supply has stalled. Barely 1,500 new housing units per year have been built over the past two decades, which is half of what Seattle (boasting a tech economy not unlike the Bay Area’s) produces in a year. In 2011, only 269 units were built, but the city added over 40,000 new jobs in 2012 alone. Because infill development has been met with active resistance in San Francisco, regional population growth has been pushed elsewhere – to Oakland, the Brooklyn of the Bay Area.

Change, it must be understood, is not a tide that lifts all boats. When it comes too fast and with indifference to (un)fairness, some resistance can only be expected. In the face of change, it is easy to see who will have to make way: the poor, the older, the less code-fluent and, yes, the less white. The manifest dissatisfaction with tech money makes one thing clear: The tech sector, for all the paeans it sings to egalitarianism, is not exempt from the host of inequalities we see throughout the country.

Silicon Valley is a place where “the right kind” of nerdy does well – and “the right kind” generally happens to be white or Asian, and male. The culture and instincts so prevalent in the Bay Area are not free from ethnic or gender biases.

Follow the money, and these biases readily reveal themselves: In Silicon Valley, 89 percent of the founding teams of Series A startups are all-male, and only 3 percent are all-female. (Curiously, almost one-third of Massachusetts’ founding teams are all-female.) Eighty-three percent of founding teams that received Series A funding are all-white, compared to less than 1 percent that are all-black – and the latter get far less seed money than their Asian and white counterparts. Should we even be surprised to learn that the median income for households in the Bay Area headed by non-Hispanic whites is $77,000, compared with $26,000 for those headed by African-Americans?

The question remains: Why have tech companies become a target in recent protests? Why are lines being drawn for a battle seemingly between the techies and the non-techies?

A probable answer: This is not merely about housing policy, but also change, inequality and indifference.

Contact Chi Ling Chan at chiling@stanford.edu.

To Be Everything At Once

When I ask Stanford students why they do the things they do, their answers invariably fall into one of three categories: 1) they don’t have a choice (tuition is insane these days); 2) they simply love doing it, and that’s reason enough; 3) they are “keeping their options open.” Increasingly, I am hearing more of the third, perhaps because Stanford students live in a beckoning world with so many opportunities that they naturally want to make the most of them.

After all, since day one of freshman year we have been told – over and over, to the point of becoming trite – to explore, to try new things, to be unafraid of reimagining ourselves. It’s an uplifting message to start a starry-eyed, confused freshman on her exploration. I remember how it was when I first got to Stanford two years ago, feeling overwhelmed by the dizzying spread of everything on offer. Waltzing through the activities fair convinced me I’d need three lifetimes, maybe more, to fully experience Stanford. And there was the unmistakable sense of optimism in the air, the eucalyptic scent of possibilities. Something was changing, and you felt like you could become anybody you wanted to be.

Or to be everything at once. Lenka’s song could well be the theme song of most Stanford students: “All I wanna be/ all I wanna be, oh/ All I wanna be is everything.”

The one word that would come to characterize most people’s freshman year – I say “most people” because there are always the Type-A kids who knew they wanted to be Mr. President the first day they learned to spell the word “ambition” – is “trying.” Trying out for a new club, trying a new class, trying to find the self, whatever that last one means.

And Stanford does give us a lot of room to try, and try again. The Silicon Valley logic of “failing faster, failing often” doesn’t apply; we live in what is probably the most protective bubble you could find on earth.

So, when I hear people wanting to “keep their options open,” I wonder why some of us are still working so hard to put off choosing, even in such a protected environment.

Two reasons come to mind. The first is quite understandable: you haven’t found something you’re willing to bet on, to commit to, something you believe in enough to dedicate your time to. In the meantime, keep exploring until you hit on something that’s really up your alley. Some people spend their whole life exploring, and with some luck you might.

The second reason confounds me: you think you have found your calling, but you want to hold off pursuing it because you need to “keep your options open” — options have become things that are good in and of themselves.

I met a friend at a recruitment event recently. Dressed up in a spiffy suit, he told me he wanted to be a consultant.

“What happened to your plan of becoming a teacher? I thought you were going to fix the broken education system!”

“Eventually, eventually. For now I will keep my options open.”

I was starting to understand that it is a matter of strategy — because, as any investment banker will tell you, putting all your eggs into one basket is much riskier than spreading them out across different investment options.

“But life is not an investment decision, and you only live once!” I wanted to tell him, “How long are you going to postpone living up to what you believe in because of some nebulous ‘option’ in the offing?”

I held back; after all, there is really nothing wrong with wanting to be a consultant. But I walked away from the conversation feeling slightly perturbed, wondering if being at an elite institution actually makes it harder to find your true calling, because saying that you want to be a great full-time mother, or the teacher who makes math fun for kids, doesn’t seem like much of a return on a $60,000-a-year investment.

We are probably better off with more options than fewer, but I also can’t help noticing how “keeping your options open” could, more than anything, betray an intense fear of failure. Instead of dedicating our efforts to doing something wholeheartedly, we make a series of tepid commitments and hope the “options” they yield come to something.

Some call this playing smart, but this could also be timidity: because we have never experienced anything other than success, our sense of self has been built around our ability to succeed, and the chance of failure is just terrifying. Instead of being committed to a personal vision, or an idea, we grow to be committed to “success” — which in our society has very standard definitions — and end up telling ourselves that our personal calling can wait.

Can it really? I am inclined to think that as we grow older, every decision we make starts narrowing our window of possibilities: if you decide, for instance, to be a pre-med today, the possibility of being a concert pianist 10 years down the road is, by the most optimistic gauge, slim.

Keeping your options open gives you the illusion that you can be everything at once, but at some point you will need to make a choice. You can love Lenka’s song, but you can’t be everything at once: you have to decide who you are and who you are not.

And because every decision you make necessarily closes off doors, your true calling could well end up being outside your universe of possibilities, or a forgotten dream. It is in this way that Stanford, for all the possibilities it opens students to, can be paradoxically so limiting.

I do believe there is such a thing as a true calling, and it will find those who can say a hundred Noes for the sake of an overwhelming Yes. Sometimes what we need is not more options, but steadfast commitment and sheer perseverance to stand behind what we believe in.

The biggest tragedy, to my mind, is to graduate from Stanford with many achievements but little experience, great success but no vision. And there is little chance we can figure out what we actually believe in if we spend all that time laying out exit routes before we even find an entrance.

 

Chi Ling Chan can be reached at chiling@stanford.edu and on Twitter @callmechiling.

Life and Death in the Digital Cloud | Mon, 06 Jan 2014 | https://stanforddaily.com/2014/01/05/life-and-death-in-the-digital-cloud/

I have always thought that a day will come when human beings can live forever. It’s just a matter of time before we figure out how to do it. The more interesting question is what form our defiance of death will take.

Will we take anti-aging pills to magically stop our bodies from aging? Will we plug our preserved brains into new bodies made of sterner stuff? Or will we download all our memories into a supercomputer that replicates human consciousness before our bodies die?

After watching Charlie Brooker’s “Be Right Back,” an episode of the British TV series “Black Mirror,” I’m beginning to believe in another possibility: Instead of resurrection or preservation, technology could recreate you from the manifold digital footprints you leave in your virtual life.

Compiling a database of photographs, tweets, voice recordings, emails, chat logs and status updates, an artificial intelligence program could pick up a dead man’s online mannerisms, tastes, preferences and memories from his digital interactions to reverse-engineer his personality.

This was the unsettling prospect offered to the protagonist Martha, who had suffered the devastating loss of her fiance: a service that lets you talk to the dead. The computer gathers enough of her fiance’s digital footprints to let her talk on the phone with him. She enhances this virtual reincarnation by reminding him of certain memories he’s meant to have, which he retains.

The virtual life that once seemed nothing more than a thin simulacrum acquires a life of its own through artificial intelligence, which calculates a life “essence” from the bits of personality sprinkled into the digital infrastructure. “You,” or something that behaves and speaks very much like you, lives on in the machine.

This may sound like sci-fi taking reality to a logical extreme, but here’s something that might creep you out: It’s already happening.

LivesOn, a social media service in beta testing, uses artificial intelligence to mimic an individual’s Twitter activity, keeping an online persona alive after the physical self has kicked the bucket. “When your heart stops beating,” goes the company’s tagline, “you’ll keep tweeting.”

Other apps, like IfIDie1st, offer people a “once in a lifetime chance” at posthumous fame by broadcasting AI-generated messages on various social networking platforms. “Immortality is right around the corner,” intones the ghoulish advertisement, “along with death.”

We already speak of living multiple lives — one offline in the “real world” and as many virtual online lives as we care to manage. Some, particularly those of us who cannot live a waking day without smartphones and social media, already live in a culture of “real virtuality.” This occurs when digitalized networks of communication are so inclusive of cultural expressions and personal experiences that virtuality becomes a fundamental dimension of reality.

Our smartphones and iPads keep us perpetually connected, and increasingly our physical self is defined as much by our online profiles and postings as by our “real-life” interactions. Each tweet and “like” we share goes into weaving an elaborate digital garb that can represent us forever in the cloud. Your digital footprints are not easily erased; like an elephant, the Internet never forgets.

Rarely do we know how we look in our amorphous digital garb, or what it says about us. Data taken in aggregate can reveal information about us that we did not intend to share. Combining information from multiple sources, companies have long been able to develop dossiers from discrete consumer behavior data.

Target, for instance, is notorious for its ability to predict consumer patterns by analyzing shopping habits; the company even went so far as to discern a teen girl’s pregnancy before her parents did, so it could begin marketing maternity items to the mother-to-be.

Our ability to extrapolate habits from consumption data is constantly improving, and considering that habits — rather than conscious decision-making — shape 45 percent of the choices we make, the possibility of immortal virtual identities is not so far off.

Most people living in “real virtuality” are probably still too young to think about digital death while they are busy leading their digital lives. But the specter of death will not go away until scientists discover the secret to immortality for the offline self.

While we wait for that biological solution, it’s worth contemplating what happens to our digital identities when we die: Do we want to have them erased, or do we want to preserve them indefinitely in the ether(net)? Do we want to keep our digital selves alive for our loved ones? Will we create digital wills to address this, or will we let the corporate giants see to this for us?

These are new problems for a digital generation that’s living more online by the day. It is already a truism that the Internet has changed the world completely, and us along with it. And now it even appears that the perennial question of death is further complicated by the prospect of digital immortality enabled by apps. Congratulations, or should we say commiserations?

Chi Ling Chan can be reached at chiling@stanford.edu and on Twitter @callmechiling.

Conversational Jerks | Fri, 15 Nov 2013 | https://stanforddaily.com/2013/11/15/conversational-jerks/

A friend called me up one evening, barely concealing his exasperation. “I screwed up my presentation in class,” he confided disappointedly. “No one was even listening.” He had stayed up all night to prepare for his presentation, and went into that morning eager to test his ideas.

When he finally got to the speaker’s stand, he found himself before a dozen students whose faces were hidden behind laptops, and whose thumbs were too busy skating on the black mirrors of smartphones.

It is a disconcerting time to be standing before any audience. Today’s speakers, from the most esteemed lecturers to students nervously delivering their maiden speeches, can no longer expect the audience’s attention as a given. We live in a technological universe in which we are always connected, always communicating, shifting between conversations offline and a dozen conversations simultaneously happening online. For better or for worse, it has never been easier to tune out of a conversation.

A deafening silence hangs over classrooms as the virtual space clutters with chatter. We sit in the audience, our minds wandering about like busy bees taking technological sips in the virtual flowerbed, as the speaker’s voice eases into a comfortable white noise in the background. Occasionally we look up and afford him a moment of attention, judging within those few seconds whether he is worthy of continued attention, and at the slightest pause or stutter we switch our focus elsewhere.

Technology has made it possible to customize our lives: In between pauses we send emails, run online errands and browse Reddit. Life is too short to waste a moment on a boring conversation; technology is liberating, giving us the control we want over every second of our lives.

Well, yes, except that might also have made you a conversational jerk. There are times when tuning out behind gadgets can be rude. Ever been to a party where you’re talking to somebody who’s looking over your shoulder, or who, just when you’re about to deliver your punch line, lets a third person jump in on the conversation? Essentially you are doing the same thing, except from behind your device.

You could be hurting your friend, who expects no more than basic courtesy or a nod of affirmation from you at some point in his delivery. The laptop screen can feel like Harry Potter’s invisibility cloak, but it isn’t: Your uninterested face is in full view, signaling your absence.

Then again, should we kid ourselves that we have someone’s attention just because they are looking in our direction, and their thumbs are still? I will admit that I am not the best listener in the world: There are times when I accidentally space out, when I am too sleep-deprived to pay full attention, as much as I want to.

And, on some occasions, an overt sign of inattentiveness can be a protest against power, a deliberate signal to a speaker to please just stop talking. Maybe technology just offered us a more polite way to tell somebody “you’re boring” than ostentatiously reading a book or stomping out of the classroom. Not to mention the fact that it is quite an effective remedy against the Zzz monster on a sleep-deprived morning.

Which is why, for all my frustration with technology, I couldn’t quite make up my mind. If respect must be earned, then perhaps attention should be too. Can we blame people for tuning out if the speaker didn’t put in any effort to engage? When does that become disrespectful and make us conversational jerks?

Some teachers have resorted to phone stacks, or imposed a no-laptop rule because studies have found in-class multitasking to be distracting for other students. I am not a fan of overly restrictive rules, simply because I think there is too much diversity in circumstances for general dos and don’ts to be useful.

But here’s an experiment I did over the past few months: I have gone back to the old-school way of taking notes in class with pen and paper, resisting the temptation to pull out my laptop or phone, even during moments of boredom. It has worked out great so far, though, as with anything, you lose some and you win some.

The downside is that I can’t Google questions I have on the spot and have to leave them until after class. The upside? I am taking better notes, drawing more connections in lectures, and learning what makes a presentation boring so I know to avoid those pitfalls. Above all, I have become more patient, learning to appreciate the unedited moments: moments in which a speaker stutters or goes silent, and reveals himself in a way that would have gone unnoticed had I shifted my attention elsewhere.

There is no cut-and-dried rule one can rely on; it is something that requires constant experimenting, good judgment and sensitivity to expectations in different situations. My only hope is that we figure out what works best for ourselves, making conscious, deliberate choices that also consider how they affect the people around us. In other words, think before you toggle and switch. Some conversations are worth your undivided attention.

Contact Chi Ling Chan at chiling@stanford.edu

The Disenchanted Russia | Fri, 08 Nov 2013 | https://stanforddaily.com/2013/11/08/taking-pause/

Two months into my study abroad here in Moscow, I have this to say: Russia is a depressing place, even if there is, and I admit there is, beauty in its melancholy. On rainy days, when gloom and doom are the order of the day, I can almost taste the dolefulness that hangs in the air.

The further one goes from the capital, the more depressing it gets, a point made with great poignancy by a recent New York Times feature titled “The Russia Left Behind.” Signs of physical decay all around seem too eager to remind one of the profound political and social decay that Russia is going through. It is as if the sun has left for another planet.

And if Russians are one of the most pessimistic and disenchanted people in the world, they have reasons to be. Few peoples in the world can be said to have experienced a history as dramatic and traumatic as the Russians’. For every generation since 1917, the only constant has been change: not incremental, gradual changes, but sea changes that continually sent the country down topsy-turvy vortexes.

First there was the October Revolution in 1917, which promised colossal changes meant to shift Russia’s autocratic paradigm to a liberal parliamentary system. Alas, that was hijacked by another form of autocracy in the shape of communist despotism.

When the myth of the communists’ messianic mission evaporated, tectonic policy shifts through Gorbachev’s Glasnost and Perestroika sought to put Russia on a new path to modernization. But then came 1991: the Soviet Union collapsed like a house of cards, and the Communist project was pronounced dead.

Boris Yeltsin rose to power shortly after, and this time an anti-communist revolution took Russia by storm, promising democratic and economic reforms. For a moment Russia looked primed to set off on its own path to modernity, democracy and free markets, but instead of prosperity and freedom the Russians got economic meltdown, crime and ethnic strife.

The pendulum swung between autocracy and liberal reforms as revolutions upon revolutions put the Russians on a seesaw of hope and disillusionment. Reforms gave way, time and again, to the resurgent tradition of autocracy.

All that seems to be left now is a sense of disenchantment, summed up disgruntledly by ex-Prime Minister Viktor Chernomyrdin: “We wanted the best, but it turned out as always.” Everything has changed, but nothing has changed.

Except, perhaps, the people’s appetite for change. “Don’t talk to Russians about change or revolution,” a Russian friend told me. “They are sick and tired of it. People want evolution, not revolution.” But the “evolution” that Russians so desire is not going to happen on its own. As with any social and political change, incremental improvements can only be realized with painstaking persistence and a healthy dose of idealism. But judging from the political apathy manifest in the low turnout in the recent Moscow mayoral election and the sore lack of new political contenders, the younger generation has little of either.

The general attitude of the populace is to “wait and see,” which, unfortunately, is probably the only smart thing to do. “There is no way you can get to any political position of importance without playing dirty,” my friend reminded me. “Every level of the system reeks of rank corruption, and it is so hard to break in because it is an absolutely closed system.”

In Russia, it is almost impossible to enter the political game unless you are already part of the existing power structure. Attempting to challenge the system from the outside is like running up to a concrete wall and bashing your head against it: a pointless and dangerous venture. Change, it appears, has to wait.

But for how long? Four attempts to save Russia from decay have failed, and a lot of time has been lost. The rich are leaving for better places, and the less well-off stay put because they can’t afford to leave.

Social mobility continues to decline and is on its way to proving a point about post-socialist societies: that the initial accumulation of capital is also the final one. Seeing little incentive to change the status quo, the Russian oligarchs continue to sit comfortably on Russia’s oil wealth, overlooking a glamorous decay.

Each morning, on my commute, I come across this portrait I now consider to be distinctively Russian: potholes in the roads, slabs of concrete lying about on unkempt earth, open construction sites from which clouds of dust rise up with each passing of an old rickety Lada, chipped tessellations on the exteriors of apartments worn off by a combination of age and negligence.

At the academy where I study, an entire skyscraper stands despondently upon a wasteland with no past glory to speak of, for it had been abandoned before it was even completed. Yet despite all the profound decay and general gloominess, there are days when the sun peers through the thick cloud cover to cast a silver lining, bestowing upon Moscow a gorgeous radiance.

Perhaps Russia’s silver lining, too, lies in wait beneath the depressing cover of rank corruption. One thing, however, is quite certain: If Russia were to succeed in lifting itself out of its slow road to ruin, it would have done so in spite of its government, and not because of it.

Chi Ling Chan is currently studying abroad in Moscow, Russia, and can be reached at chiling ‘at’ stanford.edu.

The Fifth Estate | Wed, 30 Oct 2013 | https://stanforddaily.com/2013/10/30/the-fifth-estate/

Bill Condon’s “anti-WikiLeaks” film, “The Fifth Estate,” is a long-anticipated movie based on accounts by WikiLeaks insider Daniel Domscheit-Berg. Even prior to its release, controversies over the film abounded.

Julian Assange himself gave the film an unintended publicity push by writing pre-production letters urging Benedict Cumberbatch (who plays Assange in the film) not to take part in what he calls “a reactionary snoozefest that only the U.S. government would love.” Doing what it does best (i.e., spilling the beans), WikiLeaks leaked the movie screenplay online and dubbed it “a work of fiction masquerading as fact.”

Given the online buzz, the film was expected to do decently well at the box-office. After all, this is a biopic of one of the most controversial figures of the 21st century. If it couldn’t win over the guys in Reddit chat rooms, it would at least have the support of more than 60,000 ‘Cumberbitches’ (his die-hard fans) who would pay to see Cumberbatch play the silver-haired, digital anarchist that is Assange.

Alas, the $30 million production raked in a paltry $1.7 million at the U.S. box office, the worst opening weekend yet for any 2013 film release. It appears that Internet fame simply did not translate into real-world popularity.

The same, I think, can be said about WikiLeaks itself: just as Internet fandom doesn’t necessarily reap box office earnings, Internet leaks don’t always translate into real world political change.

The Internet, it must be said, has made information freer than it has ever been, enabling access and dissemination on a large scale. Whereas in the past exposés involved physically smuggling classified documents, today it is only a matter of dragging, dropping and clicking “Send.”

Seeing that technology has sounded the death knell for old-fashioned secrecy and ushered in a new age of radical transparency, Julian Assange took it upon himself to liberate secrets from the guarded wardrobes of the state. WikiLeaks, in his view, was performing a public service through the revelation of truths that the powerful seek to conceal.

The logic behind WikiLeaks is a simple one: give people information, and they will change the world. It is assumed that collecting and disseminating damning “state secrets” can easily upend structures that legitimize power.

“If we could find one moral man, one whistle-blower, someone willing to expose those secrets,” the on-screen Assange says with a gravity befitting a revolutionary at the cusp of change, “that man can topple the most powerful and most repressive of regimes.”

But, as it turns out, expecting political change to follow naturally from giving people access might be as naive as banking on Cumberbitches to move the film up the box office charts.

Truth be told, the disclosure of classified cables was a massive public relations disaster that caused the State Department much embarrassment. But the damage has been limited, and it has not produced significant changes in policy and politics.

Even the most controversial footage, like the infamous “Collateral Murder” video, which brought home the brutality of U.S. military actions in Iraq, has inspired surprisingly little reaction against U.S. military engagements abroad. For all its disclosures, a politically apathetic public greeted WikiLeaks.

Worse yet, instead of rallying behind WikiLeaks, American public opinion has turned against it. According to a CNN poll, 77 percent of Americans disapproved of WikiLeaks’ release of U.S. diplomatic and military documents, believing that the disclosures damaged U.S. foreign relations.

Public sentiment continues to err on the side of caution: Seventy-five percent believe that there are “some things the public does not have a right to know if it might affect national security.” Despite having taken the media by storm, WikiLeaks has had limited effects on public opinion.

Neither has there been any sign of a popular movement or youth rebellion against the gross power imbalance between state and citizens — not even after Edward Snowden’s leaks on Big Brother in the U.S. One would have expected Snowden’s PRISM and X-KEYSCORE exposés to make Americans more concerned about privacy, but that doesn’t seem to have happened.

The majority of Americans still have few qualms about the NSA spying on them. The most recent poll by Pew Research reveals that 56 percent of Americans choose security over privacy, a proportion that has surprisingly increased from the 2006 level of 51 percent.

Snowden’s revelations about NSA surveillance programs have done little to alter public views about the tradeoff between security and liberty, or public safety and personal privacy. This past summer, the U.S. House of Representatives put to a vote an amendment which, if passed, would have ended the NSA’s mass collection of phone records. The amendment was defeated.

Like Cumberbatch’s melodramatic portrayal of Assange in “The Fifth Estate,” whistle-blowers like WikiLeaks and Snowden might have provided more drama than real political change. This is not to say that their exposés amounted to little. They have gone some way toward promoting better political accountability around the world, reminding us that we cannot rely on anybody – not even the government – to tell us the truth.

But exposing secrets can only go so far. Real change takes more than freedom of information: information alone does not speak for itself, nor is it an agent of change — it is constrained by people’s willingness and ability to make sense of it, and more importantly, to act upon it online and off.

There should be no delusion that laptop-hogging hackers can change the world with the mere blow of a whistle. What comes after the sounding of the whistle will determine if the community of activists on the Internet deserves to be called “The Fifth Estate.” Cumberbitches didn’t save the movie from its box-office flop, and it is unlikely WikiLeaks can change the world — not in the way Assange imagined it would, and certainly not on its own.

Contact Chi Ling Chan at chiling ‘at’ stanford.edu

Smartphones and Networked Individualism | Fri, 25 Oct 2013 | https://stanforddaily.com/2013/10/24/smartphones-and-networked-individualism/

Call me a dinosaur, if you must. I have never owned a smartphone, and I still carry one of those old Nokia models that lets me text, call and nothing more.

“Why don’t you just get a smartphone so I can get hold of you anytime?” friends have frequently entreated. Even my parents have been, with all the best intentions, trying to lure me into their Whatsapp conversations.

Perhaps that makes me a less-than-ideal friend and daughter, but really, I don’t want to be contactable at every instant, or ever-present in the virtual world.

On some level it is impossible to dispute that having the Internet in your palm affords a number of conveniences. We now marvel at how we lived before we had Google Maps, Facebook and the World Wide Web in our hands, and we feel empowered by our possession of these myriad resources. Can we really do without a smartphone today, we the hyper-connected generation?

It’s not obvious that we can: we sleep with our phones, and take no ease unless we know where they are. We hold onto them like a rosary, reflexively thumbing them even as we speak.

We feel our smartphones flickering in the periphery in the middle of a lecture. In what seems like a nervous tic, we are compelled to look down at our devices every couple of minutes, as if there is always something very important to do or to attend to. You see, I’m not sure I would call being chained to a smartphone empowering.

I still startle each time I enter an eerily silent subway with passengers all hunched over their screens, tapping away at Candy Crush, scrolling down an endless Facebook newsfeed.

More troubling was realizing how family gatherings have become technology parties where both adults and children sit in the same living room, eyes glued to their devices. Or dinner parties, where people can be physically there but with their heads in the technological cloud, except during the occasional clinking of glasses. We can be so alone together these days.

Sociologists say we are living in an age of networked individualism — people are not hooked on gadgets, they argue, people are just hooked on each other.

We are increasingly networked as individuals in loose, fragmented networks that provide on-demand succor, rather than embedded in tight-knit groups. We can choose whom we want to interact with over the network, and overcome the physical constraints of the social environment we grow up in.

In the past, people lived in villages; today we live in cities; tomorrow we will live in huge server farms we call “the cloud.” For many of us — and you, reader, if you are reading this presently on your smartphone or laptop — tomorrow is already here.

There is a general sense that this represents some sort of freedom that we have never had. But for all the semblance of freedom we have gained, I can’t help wondering what we might have lost.

How much of our lives are we sealing away as we divide our attention among the interminable notifications, emails and social posts that aren’t really that pressing? Is Candy Crush really that much more rewarding than, say, a serendipitous conversation with a random stranger you meet on the train? Or reading a book, for that matter? Yes, I forgot to mention how smartphones have also taken the book away from people these days.

More so than that, I think we have adopted a new lifestyle without giving enough thought to what it means to be constantly sharing aspects of our lives on our thin simulacrums online. Do radical sharing, openness and personal transparency make us happier, or more lonely and divided?

Is social networking, which smartphones have made enticingly easy, really creating more authentic identities, or entrapping us in a hive mind where groupthink leads to the cult of the amateur and an amnesia of the self? And what about the massive amounts of personal data and digital footprints we leave behind in the public-by-default, private-through-effort Internet culture we live in?

Clearly these are not easy questions to answer, and I imagine it wouldn’t be a simple choice between having a smartphone or a dumbphone.

In fact, even the choice to stay out of the virtual network is increasingly an illusion, as maintaining an online presence has been normalized to the point that not participating makes you unusual, even suspicious.

What surprises me, though, is how rarely we even ask these questions before we adopt a lifestyle of hyper-connectivity.

I might be getting a little nostalgic here, but I do miss the days when serendipitous interactions occurred in the real — not virtual — world: random discussions over a book a fellow commuter is reading, or the nod of recognition from someone noticing that we were wearing the same T-shirt.

And more so than that, I miss a time when it was easier for everyone to be fully present at a get-together, enjoying each other’s company without the distractions of a flicker or buzz on their phones.

At least for now, I am quite contented with my old Nokia, and am holding out on getting a smartphone. So, leave me a message after the tone, and I promise to call you back.

Contact Chi Ling at chiling ‘at’ stanford.edu

Working Harder Doesn’t Pay Better | Thu, 17 Oct 2013 | https://stanforddaily.com/2013/10/16/working-harder-doesnt-pay-better/

For as long as I can remember, my mother has been working as an office secretary, a 9-to-5 job that isn’t the most exciting in the world. But it was a livelihood that, though not well paying, sustained the family.

Over the years, I have noticed two things: her pay has hardly risen (and has, in fact, decreased in real terms) and her working hours have become longer. This is in spite of the fact that her productivity has increased significantly: Whereas 20 years ago she would tabulate sums manually with a calculator, she now uses Excel to accomplish the same tasks at a much faster pace.

Where did all that working smarter and working better (what economists call “productivity gains”) go? It certainly wasn’t reflected in her paycheck, nor did it lead to shorter working hours. Across the world, as far as the data suggest, the majority of workers are working harder and becoming more productive, but they are earning less in terms of real income.

This should be puzzling to us. In 1930, Keynes predicted that within a century technology would advance sufficiently that developed countries like the United States and Great Britain would achieve a 15-hour work week. While it is true that technology has led to massive gains in overall productivity, a three- to four-hour workday simply hasn’t materialized.

If anything, people are working harder and being paid less (relative to their output): In the United States, for example, worker productivity increased by 254 percent from 1948 to 2011, but hourly compensation saw only a 113 percent increase, less than half of what it should be if pay followed productivity. Workers are constantly told to increase their productivity by obtaining new skills, but, as we can see, it just doesn’t pay off for the workers themselves.

At the same time, governments are investing heavily in technological innovation, seeing it as the key to increasing productivity, lowering costs of production and fostering greater competitiveness. While new technology is often readily adopted, the social costs that come with such changes are virtually never discussed.

It seems as if everybody has held a quasi-religious faith in the automatic beneficence of technological progress, assuming that greater investments in technology naturally bring about economic growth, jobs and shorter working hours.

But it hasn’t. Laborsaving technologies have not been used to save workers’ labor but rather to accrue greater profits for the top echelons. Productivity gains have gone to those with power, at the expense of those without it.

To the extent that there have been tangible benefits from automation, they have gone in only one direction: up. Today, labor’s share of income is declining, contributing to greater inequality while CEOs and managers sit on fat bonuses and paychecks. The Occupy movement shed some light on these inequalities, but little has changed since it fizzled out.

So what have we got with all that technological progress? Instead of being relieved of effort, workers have been relieved of their livelihoods. As I have written in a previous column, technology is putting people out of work faster than ever. Theoretically, that might be a good thing, because people are now relieved from mindless, menial jobs that sap the human soul. But in practice, unemployment in capitalist societies that provide few welfare provisions means a very hard life, not self-actualization.

Unless, of course, you are living in Switzerland, a country that is putting to a referendum a proposal to give every adult a basic income of 2,500 francs (about $2,800) regardless of employment status.

Such proposals would probably be met with suspicion even in the most liberal quarters of the United States: “Work” and “ethics” have such a strong link in people’s heads that paying someone for not working is simply unacceptable, and, well, “socialist.” In the United States, that label is apparently sufficient reason to kill a proposal regardless of its merits.

“Radical” proposals aside, it is probably time to take a break from singing paeans to technological progress to take stock of unexamined assumptions. We have blithely assumed that once we get the competition going in a capitalist system, it will promote technological progress and things will take care of themselves, when really this is an ill-defined faith that is rarely articulated with logical rigor and clarity.

When we examine these assumptions closely, we see that the causal chain is ambiguous at every link: Adoption of new technology does not necessarily lead to increases in productivity, and productivity gains might not translate into better welfare and standard of living for all.

Some will win and many will lose; there is nothing automatic about the magic loop between technology and welfare. The outcomes merely follow from the social choices made by those who have the power to decide who gets what.

The Importance of Being Alone | Mon, 07 Oct 2013 | https://stanforddaily.com/2013/10/07/the-importance-of-being-alone/

During my second week of junior year at Stanford, I found myself alone at a dingy cafe in Moscow drinking cheap Russian compote, listening to the breathy whir of the oven puffing heat into the frigid air. Outside people were coming and going while my headphones blasted The Beatles’ “Eleanor Rigby.” All the lonely people/ Where do they all come from…?

It suddenly hit me that it had been a while since I had had the chance to be alone, sitting midday lost in a sea of foreignness, doing nothing. There is something very precious about moments like these: moments of solitude, when one finds company in herself, her thoughts freed from the Internet hive mind and allowed to wander about on their own. This is one reason why I had come all the way to Moscow this fall: to have time alone.

Yes, solitude. That was what I had been craving so desperately while I had been at Stanford, so much so that I decided I had to take a quarter off. Two years at Stanford have taught me that there is no place in the world where it is harder to find time alone. Much too often, there is no time to breathe, let alone think. It is as if the entire campus has been skillfully organized to prevent feeling alone: There is always a mixer somewhere on campus, a party, a meeting or an interview to go to. There is always something, someplace to go to, and we inundate our calendars to keep loneliness at bay. There simply isn’t time– nor reason– to be alone.

For some of us, this tugging need to be somewhere doing something has made us incapable of solitude. A friend once confided to me that she felt unbearably anxious and uncomfortable every time she found herself sitting alone. She refused to write her papers in her own room, preferring to do so at a friend’s place, and she would skip going to the dining halls if she couldn’t find a dinner buddy. The conflation of aloneness with loneliness has become so entrenched that being alone is something to be averted at all costs, as if there were anything wrong with spending time with oneself.

In this age of hyper-connectivity, averting aloneness has become all too easy. With social media, everybody is practically one degree apart: All it takes is a Facebook message to find oneself in somebody’s company, if only virtually. The News Feed and other networking platforms have made it possible to be privy to friends’ lives even if we have no part in them, creating the illusion that we are somehow “connected” to the experiences of many distant others. A disturbing byproduct of being in constant company is that our minds, too, are never alone: They buzz with sound bites and opinions we conveniently lay claim to by clicking “share.” So much for independent thought. The canvassing of sound bites through social networking, online and off, is fast replacing sustained thought and introspection.

What happens when sociability leaves little room for solitude? What does it mean to be educated at a place where you are never alone, and where it takes enormous effort to find solitude?

I think it means we are at risk of losing an essential precondition for thinking deeply, or living the intellectual life that a liberal arts education is supposed to give us.

“In order to understand the world,” Albert Camus said, “one has to turn away from it on occasion.” The same might be said of understanding the self: If we are fearful of finding ourselves alone, then it is likely that we will never find ourselves.

There has always been a tug of war between the fear of loneliness and the need for solitude. Our language wisely captures the two sides of being alone: we have the word “loneliness,” which carries with it the reverberations of isolation and seclusion, and “solitude,” which rings with a quiet freedom and tranquility. The former drives us into an endless quest for company, at the expense of time and space for solitude. At Stanford, the pressures to be constantly networked and connected easily eclipse the importance of being alone.

So amid the avalanche of emails asking for your attendance at this event and that event, take time to be alone. Wake up at 8 a.m. when it is dead quiet and savor the precious silence uninterrupted by crashing bikes. Make yourself a cup of hot chocolate to last through the book you meant to read but never got to. Grab that instrument you have neglected for too long. Go to Lake Lagunita before the sun sets, right down into the middle of the lakebed, and take a deep, deep breath. Feel the freedom of solitude, weightless as feathers. Put your gadgets away, be uncontactable for a time; your friends will understand.

Because being alone is okay. And if you came into college with the hope of understanding the world and finding yourself, Camus’ advice is good to keep in mind.

Automation, Robots and the Disappearing Worker | Wed, 02 Oct 2013 | https://stanforddaily.com/2013/10/02/automation-robots-and-the-disappearing-worker/

Days ago, I came upon a letter sent to The Economist, written by a jobless man whose desperation is hard to miss:

“I am young and unemployed and face a lifetime on the dole. Why? This morning I collected my jobseekers allowance from my bank, where I have it paid directly into my account. I did not see a cashier, but withdrew money from a cash point. Then I went to the supermarket… I scanned the items at a self-serve till, no need for a check-out assistant. I went home, switched on my Chinese computer and applied for jobs online. I do not send letters through the post; e-mail is more convenient. I then shopped online, I rarely use local shops. Who can I blame for the lack of jobs?”

A century ago, the same question had been on the minds of a group of workers in Yorkshire, Lancashire and Nottingham. In an act of rebellion, they took it out on spinning jennies and power looms, smashing the machines that had left low-skilled, low-wage laborers without work. Posterity calls them “Luddites,” a word that is today synonymous with being an old fuddy-duddy, decidedly anti-technology and anti-progress.

At Stanford, and in Silicon Valley especially, being labeled a Luddite is almost tantamount to being sent into exile. After all, who can be anti-technology in the heart of techie paradise? Yet slapping the Luddite label on the unemployed does not make the problem go away. Being so close to Silicon Valley, we see technological progress creating a seemingly endless stream of lucrative jobs; what we don’t see, however, is how it is also eliminating other types of jobs and leaving the typical worker worse off than before.

There is no doubt, for one, that workers are increasingly being squeezed out by robots and automation. And we’re not talking only about jobs at the lowest end of the pay scale: what were once considered “middle-class” jobs are also being hollowed out as your property agent gives way to mobile applications like Airbnb and Trulia, and your baggage check-in staff at the airport are replaced by self check-in kiosks. With the advent of MOOCs, community college lecturers might in time find themselves up against superstar lecturers whose online courses call their necessity into question.

Today, a whole class of workers is being rendered irrelevant as technologies like the Internet, big data and artificial intelligence automate many routine tasks. It’s not as simple as robots replacing workers: digital technologies are creating new processes that enable us to do more with fewer people, making human jobs obsolete faster than skills and organizations can catch up.

Under such circumstances, the market does what it does best: It rewards whoever adds the greatest economic value as captured by the price mechanism. And they are, inescapably, the owners of capital and machines that bring about greater productivity and profits. With capital-based technological change comes a notable shift in income away from labor: in the United States, the share of compensation in gross domestic income is at a 60-year low, and the share of middle-class income has fallen from 62 percent in the 1980s to 45 percent today. Whither the American dream: there is probably no worse time to be a worker with no special skills.

Who is to be blamed? The popular rejoinder proffered by governments all over has been uniformly disingenuous: market forces. The inexorable forces of market competition, so the story goes, have led to innovations that increase productivity, and international trade has put downward pressure on wages.

For many governments, especially those insisting on welfare minimalism, the sole corrective has been to promote labor productivity: The onus is always on the workers to play catch-up with the robots. But increasingly this is not going to work, because better education will not do much to increase incomes or reduce inequality as long as the productivity gains of machines outpace those of workers, which they most certainly will.

“Market forces” is a convenient scapegoat because, being sufficiently nebulous, it doesn’t hold anyone responsible and creates the illusion that the plight of the middle-class is ‘inevitable’ in the face of unstoppable technological advancements and globalization. But if it is true that automation is efficient, it is simply untrue that it got there because of the market.

Many of the most important innovations were the result of public-sector investment. Silicon Valley, for example, did not come about through private capital: before there was Silicon Valley there was Microwave Valley, which was essentially a federal project specializing in electronic intelligence for the CIA and the military. Stanford had research labs working for the CIA, and several engineering doctoral theses were actually classified.

Before Google and Facebook became poster children for Silicon Valley, the largest employer in the valley had been Lockheed Martin. In short, what is now the world’s hotbed of innovation once started out as Uncle Sam’s experiment.

If Silicon Valley, and all the technological disruptions that have made less-skilled workers obsolete, is the result of government-driven market distortion, then the hollowing out of the middle class is a failure of government, not of “technology” or “market forces.”

Luddites past and present are not anti-technology in the abstract; rather, the real struggle is against the restructuring of social relations at their expense. Historically, technology both creates and destroys jobs; increasingly, though, the costs of technological transitions are going to fall on the workers and the less skilled.

It is no coincidence that the United States is seeing a more unequal distribution of wealth despite tremendous increases in economic productivity. For governments and technological optimists (of which we have no lack at Stanford) alike, there is a need to revisit the assumption that technological progress is a good in and of itself, one that can be allowed to eclipse notions of fairness and social betterment.

Contact Chi Ling Chan at chiling@stanford.edu
