Sunday, November 29, 2009

The Internet's future lies in our hands

Reading "The Future of the Internet and How to Stop It" was a little like reading the history of the Internet minus any comic genius or mentions to how Al Gore "supposedly" invented the Net. (I never believed he did, but have always found the story amusing.)

That said, the book provides some extremely valuable context for those of us who are new to Internet studies. The information is alarming yet Zittrain doesn't seem interested in sparking a hysterical frenzy. Instead, like any respectable lawyer, he presents a balanced and informed argument that is relevant to everyone.

The fact that Zittrain's main argument - that "restrictive tools and rash approaches to security challenges are endangering the health of the online ecosystem" - seems at once a bit far-fetched and terrifyingly accurate adds some intrigue to the book. (Nov. 28, 2007, CNET News article)

The book is obviously much more nuanced, but Zittrain's key argument can be summarized by a few lines published in the Nov. 28, 2007, CNET News article:
"You can call Zittrain's theme the AOL-ization of technology. Instead of personal computers being able to run any program from any source without approval from a third party--which many of us were used to in the 1980s and 1990s--Zittrain fears we're entering a world where centralized approval becomes necessary.
Examples are numerous: Apple's lockdown of the iPhone. Some Google applications that say developers can't "disparage" the company. Facebook.com's copyright policy for developers that says if the application permits file-sharing, they must "register an agent for notices of copyright infringements with the U.S. Copyright Office." Some terms of service agreements that require disclosure of source code."
Put another way, it is Zittrain's belief that the Internet has transformed from a "generative" one, in which innovation was driven by user-provided software or hardware, to a "tethered" one, in which users rely on "locked-down" appliances like iPhones and digital cameras to create new content, source code, programs, etc. He argues throughout the book that even PCs are in "lockdown" mode, preventing users from making changes that may or may not improve their experience.

I should clarify that when Zittrain uses the term "generative," he is referring to technologies and networks like your standard PC that allow "tinkering and all sorts of creative uses." When he uses the term "tethered," however, he is referring to networks and technologies that discourage any sort of tinkering: "Basically, 'take it or leave it' proprietary devices like Apple's iPhone or the TiVo, or online walled gardens like the old AOL and current cell phone networks" (http://techliberation.com/2008/03/23/review-of-zittrains-future-of-the-internet/).

The problem with "locked down" and/or tethered machines, he states, is that...:
"When endpoints are locked down, and producers are unable to deliver innovative products directly to users, openness in the middle of the network becomes meaningless. Open highways do not mean freedom when they are so dangerous that one never ventures from the house" (165).
Zittrain goes on to argue that the Internet's ultimate fate really does make a difference and that the ongoing battles between flexibility/openness and security/reliability need to be addressed with creativity rather than reactionary mandates supporting either doctrine. He supports this theory by noting that IBM, AT&T and Microsoft were all forced to unbundle their products when the U.S. government decided they were running afoul of antitrust regulations. Google hasn't met this fate, but Zittrain believes it is only a matter of time before the watchdogs catch on and force Google to dissolve some of its market share.

Like most authors we've read this semester, Zittrain doesn't offer a definite solution to the questions he poses. He says that publicity may help if Internet users can be persuaded to consider the trade-offs involved in using the Net.

Another solution he offers is something called the Green-Red split system, which involves one computer system that's locked down and totally reliable (Green) and another that's open to innovation (Red). Zittrain writes positively of the concept, but it seems a bit far-fetched to me. Getting your average citizen used to one system can be onerous enough. I can't imagine how many could navigate two systems, even if both were easily navigable.

To me, the most likely solution Zittrain offers is the concept of distributed control, the backbone of Wikipedia. He argues that the combination of limited regulation and a neighborhood watch-type environment is bound to be successful, as the combination can preserve the creative spirit which has spurred multiple innovations without getting embroiled in the worst of the security issues plaguing other options.

This is seen on page 147, when Zittrain writes:
"Wikipedia has since come to stand for the idea that involvement of people in the information they read - whether to fix a typographical error or to join a debate over its veracity or completeness - is an important end itself, one made possible by the recursive generativity of a network that welcomes new outposts without gatekeepers; of software that can be created and deployed at those outposts; and of an ethos that welcomes new ideas without gatekeepers, one that asks the people bearing those ideas to argue for and substantiate them to those who question."

Overall, Zittrain makes a very compelling argument for open-source code and the more general idea that users should have an active role in governing the Net. It'll be interesting to see how things continue to pan out and whether there's enough interest to generate more of a Wikipedia-like environment on the Web.

Monday, November 23, 2009

Where are the Cliff Notes when you really need them?

I've got to be honest with you guys. I am completely clueless about what to say about Galloway and Thacker's The Exploit. Even Habermas seemed to make more sense than this. That doesn't mean that I necessarily understood Habermas, but this is practically undecipherable.

Network theory seemed like such an easy concept until I started reading this. To me, network theory is the science behind how people, objects, diseases, and graphs are all interrelated and connected to one another. It's useful for explaining the spread of diseases such as influenza and AIDS, but it also has applications in fields as diverse as particle physics, economics, operations research and sociology, as noted in the Wikipedia entry on the theory.

Network theory and networks, in general, are limited, however, by their inability to predict where something like H1N1 - commonly referred to as swine flu - will strike next.

This is supported on page 95, when the authors write:
"While individuals, groups, or organizations may be responsible for 'causing' emerging infectious diseases, it is notoriously difficult to predict the exact consequences of such decisions or to foresee the results of such actions. This is because emerging infectious diseases are not weapons like missiles, planes, or bombs; they are networks, and it is as networks that they function and as networks that they are controlled."
Scientists can use past examples to hypothesize where something like H1N1 might strike next, but it's nothing more than a hypothesis, a guess based on data collected about previous occurrences. This is part of the problem researchers are facing with the current pandemic strain of H1N1. They know how it is transmitted and which groups are most likely to be impacted based on the genetic make-up of the strain, but the network doesn't reveal precisely whom it will strike or when. The network also doesn't explain why people who are otherwise perfectly healthy and have no underlying medical conditions have died from H1N1.
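For the programmers in the room, the authors' point is easy to see in miniature. Here's a toy contagion simulation I sketched (Python and the networkx library are my choices, not anything from the book, and every number is arbitrary). The network and the starting point are identical in every run, yet the size of the outbreak changes with each roll of the dice:

```python
import random
import networkx as nx

def spread(graph, patient_zero, transmit_prob, steps, rng):
    """Return the set of nodes infected after a fixed number of steps."""
    infected = {patient_zero}
    for _ in range(steps):
        newly_infected = set()
        for node in infected:
            for neighbor in graph.neighbors(node):
                if neighbor not in infected and rng.random() < transmit_prob:
                    newly_infected.add(neighbor)
        infected |= newly_infected
    return infected

# Same network, same patient zero, three different runs of chance.
G = nx.erdos_renyi_graph(100, 0.05, seed=1)
for trial in range(3):
    rng = random.Random(trial)
    print(f"trial {trial}: {len(spread(G, 0, 0.2, 5, rng))} nodes infected")
```

The disease functions as a network, and you can even control it as a network (cut edges, vaccinate hubs), but the exact consequences differ run to run - which is the authors' point.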

That said, networks are extremely powerful.

As stated on the back of the book:
"The network has become the core organizational structure for postmodern politics, culture, and life, replacing the modern era's hierarchical systems. From peer-to-peer file sharing and massive multiplayer online games to contagion vectors of digital or biological viruses and global affiliations of terrorist organizations, the network form has become so invasive that nearly every aspect of contemporary society can be located within it."
This is particularly apparent when you consider the hierarchy of mainstream media, which has changed dramatically with the advent of both the Internet and more recently, social media. Whereas mainstream media once remained largely one-directional with networks distributing content directly to the user with little to no feedback, current media entities are all but required to be in constant interaction with their users/clients/viewers. In a sense, they have given up some control in order to be relevant and remain connected to the larger network.

As stated on page 124:
"...the unidirectional media of the past were ignoring half the loop. At least television did not know if the home audience was watching or not. As mathematicians might say, television is a 'directed' or unidirectioal graph. Today's media have closed the loop; they are 'undirected' or bidirectional graphs. Today's media physically require the maintained, constant, continuous interaction of users. This is the political tragedy of interactivity...Television was a huge megaphone. The Internet is a high-bandwidth security camera."
Another, and probably bigger, point the authors emphasize throughout the text is the idea that there's a substantial division between networks and sovereignty. While sovereign power involves an entity - generally one individual - having supreme, independent authority over a territory, there are no widely accepted leaders or codes of law for forming or governing networks. Companies like Google may exert a lot of influence, but they are not sovereign powers in the sense that they do not officially 'control' the space. Their influence is not codified in or by law. Thus, to the authors, the very idea that networks, particularly the Internet, are either naturally or intentionally egalitarian is misleading.

As written on page 4:
"The network, it appears, has emerged as a dominant form describing the nature of control today, as well as resistance to it."
And then on page 5, when the authors write:
"Perhaps there is no greater lesson about networks than the lesson about control: networks, by their mere existence, are not liberating; they exercise novel forms of control that operate at a level that is anonymous and non-human, which is to say material."
Overall, I find this book extremely confusing. The authors supposedly challenge the "assumption that networks are inherently egalitarian" and "contend that there exist new modes of control entirely native to networks," yet they never move beyond mere speculation to offer their own theory about the actual effects of networks. It probably would have helped if I had made it through the whole book, but the authors seem to come up short when it comes to offering definitive solutions to the questions they raise.

Monday, November 16, 2009

Two degrees in London

I've always been intrigued by the theory that everyone in the world is somehow connected to everyone else. I'm not sure exactly where or when I started believing in the "Kevin Bacon" theory of connectivity, but it was long before the actor came out against, not the idea per se, but his name's attachment to it. You'd think that he would follow the ideology that "any PR is good PR," but evidently, this particular topic is not one of his favorites.

As I was reading Duncan Watts' Six Degrees - struggling to get through the mathematical mumbo jumbo, I might add - I kept thinking about my own example of the phenomenon. My story isn't as exciting as being fewer than six degrees away from, say, the Dalai Lama or the late Princess Diana or the Dixie Chicks' Natalie Maines (from whom I really am only two or three connections away - we share a college professor who became a good friend/mentor to me), but it seems emblematic of the issue at hand.

My anecdote takes place in London, England, where I spent a too-short weekend while studying abroad during my junior year at SMU. As a lifelong Girl Scout, I have always wanted to visit as many of the World Centers as possible, but particularly London's Pax Lodge. So when a girlfriend and I took the Eurostar under the English Channel, we naturally made a special trip to visit the center of Girl Scouts/Girl Guides in England. How the trip relates to the reading is that while visiting the Center, we met a few volunteers and mentioned that we were SMU students from Texas who were studying abroad in Paris. One of the young ladies mentioned that a friend of hers from high school was currently attending SMU. Since SMU is a relatively small school of about 4,000 undergraduates and 6,000 graduate students, we asked where she was from and for the name of her friend. To make a long story short, it turned out that her high school classmate was one of our best friends - and he happened to be studying in London while we were in Paris. The two hadn't seen each other since high school, so we took her name and number and promised to e-mail them to our friend. I'm not sure if the two ever met up, but we did try to put them in touch with each other.

Though I thought at the time that this was nothing more than an amazing coincidence and evidence that the world is actually very small, this chance meeting no longer surprises me as much. It actually seems to support Watts' idea that people of similar backgrounds who have never met are likely to run in the same circles and are therefore likely to come into contact with each other or each other's friends. In this case, it was likely or at least possible that this young lady and I would have met at some point or another because the three of us were all involved in scouting, were studying/living abroad at the same time and came from similar backgrounds. Most importantly, however, the young lady and I also shared a mutual friend.

This does not mean that I think chance or pure dumb luck is irrelevant when it comes to small-world networks. It's entirely possible that the young lady and I never would have met if my girlfriend and I had opted to visit Pax Lodge at another time or simply passed her door/office while she was out to lunch or on a bathroom break. Thus, chance definitely plays some role - but as Watts argues and I believe, it can't play the only role.

This claim is supported when Watts references a talk he heard by sociologist Harrison White. The gist of the talk, as Watts explains, is that "people know each other because of the things they do, or more generally the contexts they inhabit... All the things we do, all the features that define us, and all the activities we pursue that lead us to meet and interact with each other are contexts. So, the contexts in which each of us participates is an extremely important determinant of the network structure that we subsequently create" (115).

The argument continues when Watts writes: "By belonging to certain groups and playing certain roles, individuals acquire characteristics that make them more or less likely to interact with one another. Social identity, in other words, drives the creation of social networks."

Returning to my personal anecdote, what Watts seems to be arguing is that our mutual involvement in scouting (whether boy or girl) made it that much more likely that the three of us would come into contact with each other. That the connection took place in London makes the story more intriguing, but it is not as relevant as the fact that Scouts tend to run in similar circles and pursue similar activities.

As Watts states on page 116: "The more contexts two people share, the closer they are, and the more likely they are to be connected."

Makes sense to me.
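In fact, it makes enough sense that you can sketch it as a toy model. This is my own formalization, not Watts': suppose each shared context independently gives two people some small chance q of meeting. The chance of a tie then grows quickly with the number of shared contexts k:

```python
def connection_probability(shared_contexts, q=0.05):
    """Chance of a tie if each shared context gives a chance q of meeting."""
    return 1 - (1 - q) ** shared_contexts

for k in (1, 2, 5, 10):
    print(f"{k} shared context(s) -> {connection_probability(k):.0%} chance of a tie")
```

Under those made-up numbers, one shared context gives a 5 percent chance of a connection; ten shared contexts push it to roughly 40 percent. Scouting, studying abroad, and a mutual friend add up fast.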

Monday, November 09, 2009

Shaping identity before birth

When I applied for admission to the ATEC/EMAC program a few years ago, one of my primary interests was studying online social networking and how individuals use the Internet to shape their identities both on and offline. This remains my focus today, so boyd's argument that it's not only what we post about ourselves - but also what others post about us - that shapes our identity falls squarely within my field of interest.

Like many people, I have personal experience with loved ones posting information that we may not feel is quite ready for prime time. In my case, it was my dear, sweet husband. In his defense, he was understandably thrilled to find out that we really were going to have a second child and that he was going to be a daddy for the third time. I was - and still am - thrilled, but at the time I was not yet ready to declare it to the world through FB.

So, when I saw his status update and consequently picked my jaw up off the floor, I scrambled to get in touch with any of our "joint" friends who might have seen the update. Luckily, many of my friends hadn't seen the post, and those who had chose to remain discreet, offering little more than "Congratulations" when they went to my page and found that I hadn't posted the announcement - yet. Also luckily, my boss - whom I hadn't planned to tell for another six or seven weeks - has extremely limited access to my profile thanks to the block feature and had no idea that my husband had outed my secret to a few hundred friends, relatives, colleagues, grade school acquaintances... (I am lucky to have a great relationship with my boss, but I do try to maintain some work/home divide.)

Needless to say, I spent the better part of a day trying to head off any pandemonium about why I didn't tell someone first or in person by calling everyone I could get a hold of before they saw the news. This included my parents, grandparents and best friends, none of whom knew we might be expecting another child. And, when it was time to determine whether we wanted to find out the baby's sex, my husband asked whether it was OK if he posted the news on FB. We did find out the sex, but after a long discussion we opted against posting it on FB in order to deliver the news personally to close family/friends who had expressed interest.

I mention this anecdote because it is a perfect reflection of how our off-line identities can be and are influenced not only by what we post online about ourselves but also by what others post about us. The funny thing is that this is an entirely new phenomenon.

As boyd states, this is the first time in history that young people must publicly define themselves by writing themselves "into being as a precondition of social participation" (p. 120). Pre-Internet, I wouldn't have had to worry about my parents finding out that they were going to have another grandchild by logging into their e-mail or FB accounts. Since this is not something that people generally plaster on billboards or mass-produced fliers, the only way they could have found out would have been if a friend or relative who had heard the news accidentally spilled the beans in a letter or phone call. Since the chances are slim that I would have told anyone before my mother, it would have been virtually impossible for them to hear the news. Now, on the other hand, our baby's identity is being shaped before she even officially enters the world through my and others' use of online media to share news about her growth, antics inside the womb, etc. We haven't posted any sonogram pictures because of skepticism about how others may use them, but many parents-to-be freely do so, shaping their child's identity even more. Personally, I would love to do a survey 15 to 20 years from now of children whose parents posted their sonogram pictures online - particularly those whose parents posted the "What sex is the baby?" images. Discovering images of what makes you male/female online - and the comments thereafter - has to have at least some impact on a person's ego.

It'll also be interesting to see what my children consider private as they come of age in this new world order where many young people use "'security through obscurity' to achieve privacy." As boyd states, "To exist in mediated contexts, people must engage in explicit acts to write themselves into being. On social network sites, this means creating a profile and fleshing out the fields as an act of self-presentation." So, will my children consider the information I post about them TMI? Or will they mock my privacy settings and openly post even more details about themselves and their own children? Only time will tell, but the fact remains that what someone chooses to reveal about themselves online is not always nearly as telling as what others reveal about them or what they choose to keep hidden from view.

Monday, November 02, 2009

Control is an illusion.

I usually try to start my blog entries with some pithy or cute anecdote of something I've either witnessed or read about in the media. Not today. I really need to just get this out before I'm completely overwhelmed by the task of situating Nakamura and Foucault in the same context.

One of the ideas put forth by Nakamura that I find intriguing is the concept that the Internet "functions as a tourism machine; it reproduces digital images of race as Other" (326). She spends much of the article discussing how the Internet allows this function and coins the term "identity tourism" to explain just how the Net enables users to adopt personae other than their own. What is damaging about this, she argues, is that while people claim to be enriching the Internet by portraying diversity, they're really only fostering the continuation of racial stereotypes that exist off-line.

This can be seen in the following line: "I coined the term 'cybertype' to describe the distinctive ways that the Internet propagates, disseminates, and commodifies images of race and racism...cybertyping is the process by which computer/human interfaces, the dynamics and economics of access, and the means by which users are able to express themselves online interacts with the 'cultural layer' or ideologies regarding race that they bring with them into cyberspace" (318).

As well as this line: "Until we acquire some insight into racial cybertypes on the Internet, we are quite likely to be hoodwinked and bamboozled by the images of race we see on the net, images which bear no more relation to real people of color than minstrel shows do to dignified black people" (331).

This statement reinforces the concept that cybertyping is partly due to the fact that minorities - including women - have limited access to the Internet.

Though access is clearly improving among both minorities and women, the problem remains that most commercial sites tend to view these two groups, in particular, as nothing more than advertising and marketing opportunities. Rather than offering thoughtful discussions, many of these sites focus their attention on driving sales. They aim to convince women and minorities to empty their pocketbooks rather than engage their brains. Even the Oxygen network, a 24-hour cable-TV network geared toward women and founded by (among others) Oprah Winfrey, falls victim to this methodology, says Nakamura.

While the network actively bills itself as someplace where women can "take a breath" from the exhausting task of being female (Salon.com), the channel seems to be little more than a marketing outlet designed to convince women to keep shopping. (Full disclosure: I do not have cable and have never watched the Oxygen network. My observations are based on what I've read, seen and heard about the network through mainstream media organizations and the network's own Web site.) A cursory visit to the site uncovers ads for toothpaste and cell phones alongside video clips highlighting the Bad Girls Club, virtual makeovers and celebrity gossip.

Now, I am not in any way trying to slam the network. It clearly serves an audience and does it well. It wouldn't have remained in existence if it didn't meet those basic qualifications, even with Oprah's backing. My premise, and I believe Nakamura's, is that the site doesn't do enough to counter the white maleness of the Web.

So, how does this idea that the Internet is controlled by rich, white males relate to Foucault? In more ways than I imagined when I first read the two articles, it turns out. The chief similarity, however, has to do with the concept - whether real or imaginary - of control.

Whereas Nakamura argues that the Internet is largely controlled by white males, Foucault takes a more historical approach to show how many of society's fundamental institutions - prisons, schools, businesses, and other entities that have historically been founded and led by white males - reflect this fascination with control.

He does this by first describing the Panopticon, a circular prison with a surveillance tower at its center. The Panopticon is designed to instill in the prisoners a feeling of constant surveillance. The inmates cannot see who's watching them from the tower, but they constantly "feel" the presence of authority because of the omnipresent tower - much like I do today when passing through an airport or sitting in a classroom at UTD. I feel that someone is watching my every move, but I have no idea who's doing the watching, when they're watching, or even what they're looking for.

Foucault further explores this idea of an unseen authority when he argues that the Panopticon is designed to empower society. To him, the Panopticon's role is not one of submission, but of amplification: "although it arranges power, although it is intended to make it more economic and more effective, it does so not for power itself, not for the immediate salvation of a threatened society; its aim is to strengthen the social forces - to increase production, to develop the economy, spread education, raise the level of public morality; to increase and multiply" (472).

I'm not sure the idea that "our society is not one of spectacle, but of surveillance" is empowering. It actually seems a bit creepy to me, very "Big Brotherish."

Going back to Nakamura, though, I think she's right in the sense that "the digital divide is both a result of and a contributor to the practice of racial cybertyping." Since emerging media is all about collaboration, I do wonder how this will play out in the next few years as technology becomes more widespread. Will women take the reins from men? Will African-Americans? Hispanics? Will the groups work together? Will all their efforts ultimately fail because we're culturally programmed to view the Internet as the white man's domain? I, for one, will be watching with open eyes.

Monday, October 26, 2009

From the telephone tree to eHow

Woo hoo! Finally, someone who makes absolute and perfect sense to me. I knew when everyone discussing social and emerging media @sciwri09 kept quoting Clay Shirky that he must be important, but I never imagined that I'd find my head nodding in agreement throughout most of his 2008 book, Here Comes Everybody.

Throughout Here Comes Everybody, Shirky uses Internet mainstays such as MySpace, MeetUp, and Wikipedia to evaluate how the Internet impacts modern group dynamics. His key premise - that "revolution doesn't happen when society adopts new technology, it happens when society adopts new behaviors" (Shirky, 160) - is actually quoted on the book's cover.

I was particularly intrigued by his discussion in Chapter 5, where he argues that even though everyone has access to the same tools to contribute to an online space equally, that equal access has yet to lead to a "huge increase in equality of participation" (123).

According to Shirky, less than 2 percent of those who use Wikipedia ever contribute anything to the ecosystem, yet millions derive information and resources from the site. The same could be said of Flickr or any of a number of mailing lists that I've joined over the course of my professional career to keep up on topics in journalism, public relations, science writing, etc. For example, whereas I might occasionally contribute a handful of photos to the public Flickr stream or a comment on a PR listserv, my contributions pale in comparison to those of the more active participants, who are posting hundreds of photos at a time and/or actually initiating discussions on the listserv.

As Shirky states:"The most active contributor to a Wikipedia article, the most avid tagger of Flickr photos, and the most vocal participant in a mailing list all tend to be much more active than the median participant, so active in fact that any measure of 'average' participation becomes meaningless...As we get more weblogs, or more MySpace pages, or more YouTube videos, the gap between the material that gets the most attention and merely average attention will grow, as will the gap between average and median" (127).

Though Shirky doesn't mention eHow, it seems an apt comparison because the site encourages users to submit answers to questions/problems they're knowledgeable - or think they're knowledgeable - about. Some might argue that the site operates differently because writers are offered either a small stipend upfront or nothing in exchange for a share of ad revenue, but I would disagree, to a point. Yes, eHow offers some writers a stipend - but it's small enough to be laughable given the submission requirements. Others with less popular posts/submissions may never see a dime.

In a recent USA Today article, eHow founder Richard Rosenblatt credits his success to his decision to be a different type of publisher. Rather than guess what users want to read, Rosenblatt scours the Internet to gauge the most popular Web site links, clicks and searches. "We only make content we know there's a need for," he told USA Today writer Jefferson Graham.

According to stats posted in the USA Today article, eHow.com attracts about 50 million users a month - more than cnn.com, twitter.com and even weather.com.

The site works somewhat like Wikipedia. Anyone can post an article on eHow and see it on the Web site almost instantaneously. Where eHow differs from Wikipedia is that instead of letting other users edit and revise the content, a team of paid employees patrols eHow and removes roughly a quarter of unsolicited posts for reasons including inaccuracies.

Though I have rarely used eHow, I like the fact that at least 20 percent of the answers/articles are penned by real people who have some knowledge of the particular topic, whether it be how to apply wallpaper, make low-sodium smashed sweet potatoes or write an eHow article. It's peer-to-peer sharing rather than the talk-down approach often perpetuated by the mainstream media (no offense to my MM buddies, who really do push the powers-that-be to let them write above the seventh-grade reading level). The style of writing is much like blog posts: succinct and conversational. The site's submission guidelines strongly suggest that writers keep their articles to between 400 and 600 words, so the content is generally heavy on facts instead of fancy writing. One critical downside is that, unlike Wikipedia, the articles are generally short on source and/or reference material. (This is why I don't look to eHow for health information.)

All in all, I think Shirky is right on in his final assessment that for those born before 1980, new technology will "always have a certain provisional quality...When a real, once-in-a-lifetime change comes along, we are at risk of regarding it as a fad, as with the grown-ups arguing over the pocket calculator in my local paper." Having been born in late-late 1978, I tend to live in both extremes - I remember life pre-Internet, yet my professional experiences working for TV, wire service and newspaper companies have exposed me to these new tools and often trained me to use them as they emerged. It's a strange line to straddle, but one that, like Shirky, I hope to maintain so that I don't ever believe that any one ideology about technology is absolute.

Monday, October 19, 2009

The public sphere at #sciwri09

One idea that I found particularly interesting is Poster's claim that "the internet is above all a decentralized communication system...Anyone hooked up to the Internet may initiate a call, send a message that he or she has composed to one or multiple recipients, and receive messages in return."

To me, one of the key problems with this argument is that this particular communications system is accessible only to those "hooked up to the Internet." While the number of people without Web access is at its lowest point in history - particularly in the West (developed countries) - those in the developing world generally don't fare nearly as well in terms of access. Since one of Habermas' central ideas is that the "quality of society depends on our capacity to communicate, to debate and discuss," the fact that only a fraction, albeit a large one, of the world's population can communicate via the Internet seems to indicate that the Internet does not in fact contribute a "new quality to the public sphere" in all areas of the world. It does in the United States, but not so much globally.

Boeder gets at this notion that the public sphere - much like the Internet - is transnational yet not global, but his argument raises the question of whether the Internet is really a public sphere as defined by Habermas. To Habermas, the public sphere involved face-to-face discussions about the important issues of the day. He described it as "an area in social life where people can get together and freely discuss and identify societal problems, and through that discussion influence political action" (Wikipedia, public sphere). Boeder, on the other hand, argues that the public sphere is and has always been "more virtual: Its meaning lies in its abstraction...groups and individuals can indeed accomplish change by communicative action, and digital communications technology may empower them to do so."

I would argue, much like Poster, that the Internet is a modern-day public sphere for a number of reasons, chiefly that the "prevailing hierarchies of race, class and especially gender" don't matter. There is no social hierarchy on the Web. Instead, anyone can be anybody they want by merely stating that they are that gender, age, race, nationality, etc. Yes, power relations still exist in the sense that not everyone has equal access to the Internet and some only have access courtesy of a governmental or educational entity, but the general principle that everyone on the Internet is viewed as an equal seems to hold true.

This has become more apparent over the past few days as I've been attending the annual meeting of the National Association of Science Writers/Council for the Advancement of Science Writing in Austin. Probably 50 percent, maybe more, of the attendees have been tweeting the meeting. The rest have either refrained from entering the Twitter/FB/YouTube world or, like me, got away from home without their laptops and/or Twitter-enabled PDAs. What I find intriguing about the two groups is that those who are tweeting from the meeting have formed an online community in which they're discussing good lectures, interesting points made by speakers, key ideas... They're helping shape future meetings and providing input on the current one for meeting planners/attendees and those who couldn't make it. In essence, they're using the communicative tool of the Internet to support and enable change, leaving the rest of the attendees essentially out of the process/conversation.

Another interesting trend I've noticed addresses Poster's argument that everyone is equal on the Internet. A cursory glance at those using the hashtag #sciwri09 definitely supports this theory. For example, unless individuals indicated their name/position/title in their Twitter profiles, I didn't necessarily know before the conference whether someone I was following was a PhD astrophysicist or a fellow public information officer. @physicsdavid is but one example - and someone I urge you all to follow. (He participated in a great panel discussion on social media.)

Overall, I think Poster was pretty much right on when he claimed that the Internet is a "decentralized communication system," but only insofar as he's speaking about those with Internet access. Without Internet access, people have no way to enter this modern-day public sphere, and their lack of access/participation undoubtedly has at least some effect on how our global society operates and will continue to operate.

That's all for now. Back to #sciwri09. See you all next week.

Monday, October 12, 2009

Objectivity/transparency is impossible.

Karl Marx is not my friend. Granted, this is my first real exposure to Marx. Though I've heard a lot about his theories through the years, this is the first time I've ever actually sat down and read something by him. Needless to say, I now know why I've consciously avoided him all these years and I'm particularly glad to hear that I'm not the only newbie in the class.

Enough mindless ranting. I may be completely off-base here, but based on some crowdsourcing with classmates Aline McKenzie, Gary Hardee and some FB friends, it seems like Marx's main argument is that classes, particularly lower classes, exist primarily because their members accept the class structure and buy into its existence. Rather than instigating a revolution by demanding equality, they accept the social order as life and continue plodding along. While Marx limits his discussion to earlier time periods, his argument could be compared to the status of women's rights pre-feminism.

Marx also argues that those who control the means of material production do the same for idea production. This can be seen in the section of Part B titled "Ruling Class and Ruling Ideas" when he states:
"The ideas of the ruling class are in every epoch the ruling ideas, i.e. the class which is the ruling material force of society, is at the same time its ruling intellectual force. The class which has hte means of material production at its disposal, has control at the same time over the means of mental production, so that thereby, generally speaking, the ideas of those who lack the means of mental production are subject to it."
I think what he's trying to say here is that the ruling class, whatever it may be, uses its status to control not only the media but also the message delivered by the media in order to prop up its own beliefs/ideology. Today, we'd blame it on the modern-day trend of corporate ownership of media!

Hall makes a similar claim throughout "Encoding/Decoding." Like Marx, he seems to argue that how people identify themselves influences how they interpret the information fed to them by the media. He also argues that because of this, the mainstream media plays into the very power structure that it so often claims to denounce.

Since the general public has essentially become the "media" in recent years, this leads me to assume that if Hall were writing the same article today, he would argue that everyone with a Web site/Twitter account/FB page has a vested interest in society's power structure. Having written this piece before "we" became the media, though, Hall instead focuses his attention on how traditional broadcast media try to meet a very utopian ideal that everything they present is completely transparent, i.e., objective.

Hall argues that broadcasters fail miserably because they don't acknowledge that their ties to the ruling class (i.e. corporate owners) prevent them from being completely objective/transparent. They also fail because they're unable to recognize and address the fact that they cater to a specific audience of like-minded individuals rather than the general public they claim to serve. This last point can be seen when he states:
"More often broadcasters are concerned that the audience has failed to take the meaning as they - the broadcasters - intended. What they really mean to say is that viewers are not operating within the 'dominant' or 'preferred' code. Their ideal is 'perfectly transparent communication." Instead, what they have to confront is 'systematically distorted communication...' "
Now, I'm not as familiar with broadcast news outlets as I am with print ones, but the problems facing mainstream media today are hardly relegated to one particular medium. Newspapers, TV and radio stations, and magazines are all victims of the current climate, in which a few individuals/corporations own multiple media outlets, giving them much more control over content than in years past, when few people owned more than a single entity in the same market.

Today, several major corporations own television, radio and print media outlets in the same market - something almost unheard of back when credentialed journalists weren't constantly battling community bloggers for scoops. Federal law does limit what an individual or corporation can own, so many mainstream media outlets have simply started sharing content with their competition rather than shutting their doors. For instance, the San Antonio Express-News and the Houston Chronicle share feature stories. And locally, the Dallas Morning News and Fort Worth Star-Telegram share features, reviews and even sports coverage. The editors of both papers report that it's purely a cost-saving measure, but it also limits diversity. This is particularly apparent in arts coverage, which used to involve multiple critics from multiple news organizations. Though they often covered the same events, their varied insights and critiques served an important cultural role in that no one person or organization was either blessing or condemning a particular show/event/performance. Though KERA and D Magazine have stepped up their arts coverage since the DMN and Star-Telegram began sharing content, the diversity of opinion that once existed is but a fraction of its former glory.

In addition, as much as I'd like to say that the owners of mainstream media outlets have little, if any, influence on content, I'd be lying if I did. Despite assurances that the "newsroom makes all editorial decisions," every mainstream media organization has what it calls its "sacred cows" - the stories that the publisher/managing editor/owner thinks are worth coverage, so they're assigned and printed with little regard to their actual news value.

These examples are meant to serve as evidence that aspects of both Marx's and Hall's arguments remain in play today, much to my disappointment. As media conglomeration continues to accelerate, I think the key point to remember is that mainstream media outlets generally do have a point of view and that the best recourse is to follow multiple outlets - radio, television and print - preferably ones with different leanings.

Sunday, October 04, 2009

Pure data yet unquantifiable?

I know we're not supposed to delve too deeply into the structure or validity of these readings, but I have to say that I was extremely frustrated by Manovich's work. Not because of his arguments - which make perfect sense - but because of his use of the English language. Of the chapters we were asked to read, I had to read at least half the sentences two, three or even four times in order to have some vague idea of what Manovich was talking about. The whole book was filled with extraneous words, misused words, bad grammar, etc. There are too many instances to mention here, but needless to say, I found most of the text extremely frustrating. Did anyone else find themselves questioning whether an English-speaking editor ever glanced at this book??

Now that my beef with the author's grammar is out of the way, let's get on to the actual reading.

As so eloquently noted by Wikipedia, Manovich uses his 2001 book, The Language of New Media, to argue that there are five general principles underlying new media. These principles include:

  • Numerical representation: new media objects exist as data
  • Modularity: the different elements of new media exist independently
  • Automation: new media objects can be created and modified automatically
  • Variability: new media objects exist in multiple versions
  • Transcoding: a new media object can be converted into another format
I am particularly interested in his argument that new media objects are nothing more than data. He supports this argument by stating:
"All new media objects, whether created from scratch on computers or converted from analog media sources, are composed of digital code; they are numerical representations. This fact has two key consequences:
1. A new media object can be described formally (mathematically)...
2. A new media object is subject to algorithmic manipulation... we can automatically remove "noise" from a photograph,... In short, media becomes programmable." (27)
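Point 2 is the easiest to demonstrate. Here's a minimal sketch of what "media becomes programmable" means in practice - numpy and scipy are my stand-ins for any photo-editing tool, and the "photograph" here is synthetic:

```python
import numpy as np
from scipy.ndimage import median_filter

rng = np.random.default_rng(42)
image = np.full((64, 64), 128.0)                 # a flat gray 'photograph'
noisy = image + rng.normal(0, 40, image.shape)   # 'noise' is just more numbers
cleaned = median_filter(noisy, size=3)           # removing it is an algorithm

print(f"average error before: {np.abs(noisy - image).mean():.1f}")
print(f"average error after:  {np.abs(cleaned - image).mean():.1f}")
```

Run it and the filter cuts the average pixel error by more than half. Once the image is an array of numbers, "removing noise" is just arithmetic.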
What strikes me about this is that if new media is nothing more than data and can be described mathematically, why is it that we have yet to come up with any good, solid way to quantify the successes of social media - arguably the newest form of new media?

As a member of the social media team at UT Southwestern Medical Center, I am constantly asked by colleagues and higher-ups how we can quantify whether any of our tweets, FB posts and/or YouTube videos are driving traffic to our clinics and hospitals. My explanation that there's really no good way to do that has sufficed for now, but it's only a matter of time before that excuse runs its course. At this time, the best way to calculate the impact of our efforts would be to ask every single patient who walks through the doors whether they came to UT Southwestern because of a tweet. Obviously, this is next to impossible. Even if we were to use our electronic medical records system to track the response, it would still be incredibly challenging to garner enough responses to make the effort worthwhile.

Luckily for my team, we're not the only ones facing this challenge. Public relations groups and companies worldwide are struggling to explain to clients that while investing in social media is a smart move, it's not one that's easily quantified in terms of success. Yes, it's possible to generate reports on traffic for particular Web sites, and you can also gauge success by tracking the number of followers (Twitter), fans or friends (YouTube and Facebook) a particular company/group/cause enlists. However, trying to determine whether someone bought Huggies over Pampers because they saw a positive tweet is impossible without communicating directly with that particular consumer. So, here you have a medium that's supposedly pure data but whose results aren't quantifiable - at least in terms of social media.
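To be clear, the counting part is trivial. Here's the kind of crude arithmetic any of us can already do (all the posts and numbers below are hypothetical); what's missing is any column connecting a post to a patient walking through the door:

```python
posts = [
    {"name": "clinic-opening tweet", "impressions": 12_000, "clicks": 240, "replies": 18},
    {"name": "health-tip FB post",   "impressions": 3_500,  "clicks": 40,  "replies": 2},
]

for post in posts:
    rate = (post["clicks"] + post["replies"]) / post["impressions"]
    print(f'{post["name"]}: engagement rate {rate:.1%}')
# Note: there is no field here for "became a patient because of this post".
```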

A recent AdAge article touched on this discrepancy as the writer tried to explain how the advertising/public relations industry is trying to navigate the challenges posed in part by social media.

In this article, Tim Marklein, exec VP-measurement and strategy at Interpublic Group of Cos.' Weber Shandwick, said that "getting clients to understand the benefits of engagement over impressions is the biggest challenge agencies have."
"The beauty of engagement is that it's a deeper level of involvement with a brand than you had in the eyeball or impressions world," Mr. Marklein said in the same article. "People were comfortable knowing the value The Wall Street Journal had for whatever vertical they are in. But with the number of blogs out there and traditional media putting more emphasis on web properties, marketers are unsettled on the traditional things they believed in but now know they need new approaches to figure this out."
Another expert quoted in the article, Allyson Hugley, VP-insight creation at Publicis Groupe's MS&L Worldwide, said that the "rise of digital and social media has caused everyone to rethink their approach to measurement."

"We had to fine-tune our approach to measurement not as something that happens at the end of the discussion but from the beginning and throughout the process," Ms. Hugley said in the AdAge article.

Though social media isn't addressed in Manovich's book, I do wonder how he would explain this conundrum or whether he would simply call it irrelevant. After all, social media in itself clearly meets Manovich's definition of new media. My question is whether the manifestation of social media also meets his definition and can be quantified.

Sunday, September 27, 2009

Can the Mona Lisa be aura-free?

In "The Work of Art in the Age of Mechanical Reproduction," German cultural critic Walter Benjamin muses on what our ability to reproduce images in mass quantities does to art and culture. His basic argument is that mass production of art - paintings, sculpture, etc. - eliminates the "aura" of a work.

To Benjamin, the aura is precisely what reproduction destroys: "that which withers in the age of mechanical reproduction is the aura of the work of art...One might generalize by saying: the technique of reproduction detaches the reproduced object from the domain of tradition" (221).

He used the word to refer
"to the sense of awe and reverence one presumably experienced in the presence of unique works of art. According to Benjamin, this aura inheres not in the object itself but rather in external attributes such as its known line of ownership, its restricted exhibition, its publicized authenticity, or its cultural value...With the advent of art's mechanical reproducibility, and the development of forms of art (such as film) in which there is no actual original, the experience of art could be freed from place and ritual and instead brought under the gaze and control of a mass audience, leading to a shattering of the aura (Wikipedia)."
In 2005, former UC Santa Barbara graduate student David Roh took Benjamin's thesis and moved it a step further by examining whether it still holds true in the early 21st century:
"Walter Benjamin defines aura as the distance between a purveyor of the work of art and the work itself. With the advent of mechanical reproduction, he [Benjamin] argues, the distance has been closed, aura diminished, and the work of art democratized. Fast-forward nearly 70 years later, and we find that instead of aura having been completely eradicated by perfect and nearly limitless digital reproduction, the distance between the work of art and the purveyor (consumer) grows wider than ever. "
I would argue that they're both right. It may be taking the easy way out, but it's hard to disagree with Benjamin's belief that something is lost when a Kandinsky painting is mass-produced as a greeting card or refrigerator magnet. Having taken a half dozen Art History courses over the years and spent more time in art museums than at home, I think it's somewhat criminal how commercialized art has become. I have my fair share of prints of my favorite works, and I realize that art has always been commercial (how else could an artist survive?), but the commercialization does seem to have multiplied in recent years. For example, during a quick visit to the Dallas Museum of Art's gift shop after viewing the recent King Tut exhibit, I found plastic sarcophagi, fabric replicas of the head garments Tut wore and numerous poster-size images of objects not even included in the exhibition.

Though I used to be an avid collector of postcards of major works of art, I finally stopped because, as Benjamin argues, viewing a reproduction or copy is not nearly as satisfying as seeing the actual work of art in person. That's because postcards have no "aura" - no soul. They're mere imitations - and often bad ones at that - of something that should be considered non-transferable. Prints aren't much better, but since actual works of art are way out of my financial reach, I often go that route so I can have some semblance of art other than my own or my daughter's on display at home.

To me, it seems that the commercialization and reproduction in mass quantities of art has indeed caused art to lose its "aura."

However, Roh is also right in the sense that just because something is more available to the masses doesn't make it any less important or awe-inspiring when viewed in person. After all, reproductions can also be inspiring. I am not a big fan of Leonardo's Mona Lisa, but I can't ignore that while it is probably the most reproduced image in modern times, millions of people still flock to the Musée du Louvre to see the actual work hanging on the wall. While the story of the work is itself intriguing, many say they're inspired by all the reproductions to go see the real thing. And so, despite having seen countless reproductions, they still stand in line for hours at a time in order to stand six feet away from a 21-by-30-inch painting hidden behind a glass case several inches thick. Clearly, at least some of the work's aura must remain. (What's really funny is that after seeing the Mona Lisa in person, even after previously seeing countless reproductions, few leave the Louvre without yet another copy of the work on a postcard, magnet, coffee cup, T-shirt, etc.)

So, who's more right? Benjamin? Roh? Neither? Benjamin doesn't address it in this article, but I think the answer depends a lot on someone's answer to the following question: what is art?

Can a photograph be called art if it's a photograph of another work of art, say a painting or a sculpture? Or what about a film that's a compilation of previous films - can it still be considered art if the only new aspect is the way the clips are arranged? Is a postcard adorned with the image of a work of art actually art? For that matter, can anything commercially and/or mass-produced be called art?

Sunday, September 20, 2009

15th century model for the 21st century??

What is there to say about a book that captured my attention about as much as the dictionary? Not to be disparaging, as I recognize that this is an important topic, but the book needed some oomph.

That said, there was one passage, near the end of Chapter 5, that struck me as a very acute interpretation of something that holds true to this day. It reads:
"Yet however sophisticated present findings have become, we still have to call upon a fifteenth-century invention to secure them. Even at present, a given scholarly discovery, whatever its nature (whether it entails using a shovel or crane, a code book, a tweezer, or carbon 14), has to be registered in print - announced in a learned journal and eventually spelled out in full - before it can be acknowledged as a contribution or put to further use (141)."
I confront this reality on a daily basis as a science/medical writer at UT Southwestern Medical Center. In my role as a senior communications specialist (basically PR), I witness firsthand our researchers' struggle to publish sometimes groundbreaking discoveries in scientific journals that carry clout in their respective fields. Peer-reviewed journals such as the Proceedings of the National Academy of Sciences, Nature, the Archives of Internal Medicine, the New England Journal of Medicine (NEJM), and the Journal of the American Medical Association (JAMA) have such stringent acceptance policies that only the most thoroughly vetted - and usually staid - research ever sees the light of publication. Researchers must secure approval from numerous peers in order for a paper to be considered, much less accepted. And once accepted, they must go through what often seems like an endless cycle of revisions in which they must answer every question posed to them. All this to get some new findings into print.

Open-access online journals have proliferated in recent years, but open access doesn't mean that anything and everything will be published. There's still an approval process. In this sense, open access means that the material must be posted online for all to see, rather than available only to those with a subscription or some other inside track. The studies don't have to be written in lay language, either, which somewhat limits the material's accessibility.

The problem with this system is that in order to be taken seriously - and receive grant money - researchers must publish their findings, no matter how minuscule or incremental they may be. As Eisenstein stated, they must be "registered in print - announced in a learned journal" to be considered worthy of further attention.

It makes sense that a study first published in the New England Journal of Medicine would have more clout than one printed in, let's say, Vogue. But who's to say what is valuable and what isn't? For all we know, someone could have discovered the cure for cancer - but it was so far-fetched that the researcher and his/her findings were shunned or flat-out ignored, never to see the light of day in a "scholarly" journal of any repute. I know several researchers who have stopped short of submitting research that turned out to be revolutionary because they initially considered the idea too far-fetched to be taken seriously. What hope is there for scientific and medical advancements if researchers censor themselves as well as their peers?

One might argue that scientists should immediately post everything online, but few researchers I've spoken with have any interest in publicizing early findings. They say there's inherent danger in publicizing their results or study methods too early; doing so would create an environment ripe for poaching others' ideas. So the problem is two-pronged: researchers need to publish their findings in print to gain validation and support for their research, but at the same time they don't want to publish too early - or publish far-fetched (even if valid) results - for fear that they'll be subject either to poaching or to laughter.

I wish I had an answer to this problem. It seems very outdated to rely on a 15th-century model, but no better solution has emerged. In order for change to take place, there has to be both a new outlet and a collective belief amongst researchers that it's in their best interest to adapt to the new model.

Sunday, September 13, 2009

Remediation remediated

Disclaimer: I read this book a few semesters ago, yet I find it even more fascinating this time around. Maybe it's the fact that I have a much better understanding of what hypermediacy, immediacy and remediation are all about? Then again, maybe not. Suffice it to say that this wasn't my first pass at this reading.

This may sound funny, but reading this text makes me want to scream, cry and laugh all at the same time. Scream because the basic concepts presented here seem so simple - yet most who hold leadership positions in mainstream media corporations fail to grasp the concept that none of their so-called innovations are "new". Cry because if newspapers don't do a better job adapting to the changes brought about by new media, they will continue to falter. And laugh because I can remember listening to the Dallas Morning News' publisher and other members of upper management rave only a few years ago about how adding a team of "online" reporters and editors was going to revolutionize the news industry and turn the newspaper back into "the" source of information for Dallas-Fort Worth residents. Having a team devoted to producing content for the Web was going to provide the paper's online readers with the "immediacy" they desired while giving other reporters time to work on more nuanced articles for the daily paper. Unfortunately, most of those reporters and editors were axed in the latest round of layoffs. The Web site has certainly gotten better over the years, but I wouldn't necessarily call it a "must-read" for many locals.

Though the term remediation may be unfamiliar to some, it makes perfect sense when you think about it. There's nothing new about the practice - it's just a new term for what artists and others have done for centuries. I remember countless K-12 and college art classes where the assignment was to take a work of art and refashion it in another medium. We once fashioned a sarcophagus (a la King Tut) out of cardboard - a material that was still millennia away in Tut's time. In another class, we took still photographs and then digitally enhanced them using Photoshop - something impossible when the first photograph was taken. Newspapers have taken similar steps to reinvent themselves in different mediums.

As the authors mention numerous times, all one has to do is look at a single issue of USA Today to note how similar its layout is to that of the Web. Bolter and Grusin note this early on when they state:
"Although the paper has been criticized for lowering print journalism to the
level of television news, visually the USA Today does not draw primarily on
television. Its layout resembles a multimedia computer application more than it
does a television broadcast; the paper attempts to emulate in print the
graphical user interface of a web site."
USA Today isn't the only newspaper to rethink its use of graphics, photos and varied fonts as the Web has become practically omnipresent throughout society. Even the venerable Wall Street Journal has started using color photos and graphics throughout the edition. The same is true of the New York Times, once dubbed "The Gray Lady" for its lack of color. Locally, both the Dallas Morning News and the Fort Worth Star-Telegram have taken a stab at emulating USA Today's print edition from time to time. Whether their efforts have been successful is open for debate, but it is interesting how, as the Web has evolved, newspapers have become more like tabloids or magazines than what Western society has historically considered a newspaper. The stories are generally shorter and less nuanced. (There are certainly exceptions to this, but not as many as even a few years ago.) They're also more visually oriented than in years past, with multiple photos and graphics, some of which are only available to online readers.

Though I am no longer employed full-time by a newspaper, I'm still a voracious consumer of news - television, online, print, radio. What I find most frustrating about all of it is that everybody - not just newspapers - is constantly trying to be like everyone else. As the authors also noted, television news broadcasts are more like the Web than ever before, with multiple mini-screens and scroll bars fighting for your attention. Recently, many TV anchors have begun asking viewers to tweet answers to questions posed on the air; the results are shown later in the broadcast. How long will it be before consumers start broadcasting the news from their personal computers? Oh, wait. That's already happening. Rather than buy into the so-called "immediacy" that mainstream media outlets purport to deliver, many consumers are ditching it entirely and reporting the news that matters to them themselves. They're using PDAs, iPhones, laptops, etc. to report in real time what they're seeing, hearing and feeling. This sort of "immediacy" is what mainstream media strives to achieve but oftentimes misses because of a supposed lack of staff and/or money.

I've rambled on long enough about this particular reading, but suffice it to say that I'll be intrigued to hear what others think about remediation, hypermediacy and immediacy. Is it something that mainstream media can achieve, has already achieved, or will never be able to achieve? My bet is on the middle one. I think many outlets have achieved hypermediacy and immediacy, but only to a point. As for remediation - well, most mainstream media is the definition of remediation.

Monday, September 07, 2009

The state of education today

In 2008, Business Week ran an eight-part series by Don Tapscott, the author of Grown Up Digital. In the series, he examines how digital technology has affected the children of the baby boomers, a group he's nicknamed the "Net Generation."

Though the entire series is intriguing, I find the Nov. 30 article particularly relevant to our needs. In it, Mr. Tapscott describes a speech he delivered to a group of university presidents:

"The prevailing model of education, I said, made no sense for young people
today. This model revolves around the sage on the stage, the teacher who
delivers a one-size-fits-all, one-way lecture. This model, designed in the
Industrial Age, might have been a good way to condition young people for a
mass-production economy, but it makes sense neither for young people who have
grown up digital nor for the demands of this digital age."
What amazes me is how similar his argument is to the following statement McLuhan made in his 1969 Playboy interview:

"Our entire educational system is reactionary, oriented to past values and past
technologies, and will likely continue so until the old generation relinquishes
power. The generation gap is actually a chasm, separating not two age groups but
two vastly divergent cultures. I can understand the ferment in our schools,
because our educational system is totally rearview mirror. It's a dying and
outdated system founded on literate values and fragmented and classified data
totally unsuited to the needs of the first television generation."
Who would have - or, better yet, could have - imagined that 40 years later, we'd be having the exact same debate, albeit about the first Internet generation instead of the first television generation?

Few, if any, foresaw how pervasive the Internet would become when it was first introduced - yet here we are today, devoting countless hours to figuring out how best to educate children who literally grew up online.

Just as McLuhan said that the division between the first television generation and those educated beforehand was more a chasm than a simple generation gap, I would argue alongside Tapscott that the same could be said of the division between today's youth and even my generation - which witnessed the explosion of the Internet as we were graduating from high school in the mid- to late 90s.

Consider this: Until my sophomore year in high school, I was using an electric typewriter to prepare term papers. My half-brother, on the other hand, has to my knowledge never used a typewriter. When he graduates from Garland High School in May 2010, he'll have spent his entire academic career preparing presentations and term papers using a computer. He was creating PowerPoint presentations in elementary school, at an age when I was expected to present posters or overhead slides - and only if I wanted extra credit. And the time he has spent in any library is negligible compared to the years I devoted to the Corpus Christi Central Library, where in the late '80s and early '90s I used a card catalog to research everything from the anatomy of wolves to the history of China - all before I reached middle school.

Don't get me started on encyclopedias. It was a huge deal when my mother forked over who-knows-how-much for our family's first and only set of World Book Encyclopedias - yet I have never once seen my half-brother crack an encyclopedia, or even mention using one, for that matter. There is not - nor has there ever been - a set of encyclopedias at my dad's house. That's partly because what my mother considered a major investment in her children's education was made almost completely obsolete by the time my half-brother started school.

The problem this causes in education is that many - not all - teachers have yet to realize just how expansive this generational chasm really is. They look out over a roomful of students who have been online since birth and have no clue how to engage them.

In the Business Week article, Tapscott describes how the academics reacted when he questioned why it is taking so long for the educational system to change. One educator (whose age wasn't disclosed) blamed the problem on his/her colleagues' age: "Their average age is 57, and they're teaching in a 'post-Gutenberg' mode."

"Post-Gutenberg?" another president injected. "I don't think so...Our model of
learning is pre-Gutenberg. We've got a bunch of professors reading from
handwritten notes, writing on blackboards, and the students are writing down
what they say. This is a pre-Gutenberg model—the printing press is not even an
important part of the learning paradigm."
Unfortunately, the university president's assessment that many teachers are still following a pre-Gutenberg model remains right on the money. While this style of teaching is fine for many - but not all - nontraditional (i.e., older) students, the methodology simply doesn't serve the younger "Net" generation. They have grown up believing, rightfully so, that their education is in their own hands, and the teacher-focused, one-size-fits-all methodology is archaic and doesn't fit their lifestyle.

Luckily, many teachers are beginning to change their ways. As an education reporter at the Dallas Morning News for nearly five years, I witnessed firsthand how classrooms are becoming more "Net-generation" friendly. Lectures still have a place in the classroom, but teachers are encouraging more group interaction and fostering conversations rather than one-way lectures. They're also embracing the Internet as much as their superiors allow by using blogs, wikis and social networking applications to connect with students.

All in all, education seems to be moving in the right direction - just not nearly fast enough. With technology advancing as quickly as it does, educators will continue to be hard-pressed to keep up with the latest and greatest tools. My only hope is that they try - for our (my) children's sake.

Sunday, August 30, 2009

Plato and Saussure

In the August 24, 2009, edition of Wired magazine, writer Clive Thompson shares some not-so-surprising news about certain theorists' belief that technology has killed people's ability to write and that "texting has dehydrated language into 'bleak, bald, sad shorthand' (as University College of London English professor John Sutherland has moaned)."

Turns out, Thompson writes, that many of those theorists are completely off track, particularly when one takes into account the findings of Andrea Lunsford, a professor of writing and rhetoric at Stanford University who organized the Stanford Study of Writing to analyze college students' prose. Dr. Lunsford collected close to 15,000 student writing samples between 2001 and 2006, finding that "technology isn't killing our ability to write. It's reviving it—and pushing our literacy in bold new directions."

Dr. Lunsford's study revealed not only that today's students write more than at any time in recent history but also that they're "remarkably adept at what rhetoricians call kairos—assessing their audience and adapting their tone and technique to best get their point across. The modern world of online writing, particularly in chat and on discussion threads, is conversational and public, which makes it closer to the Greek tradition of argument than the asynchronous letter and essay writing of 50 years ago" (as told to Wired).

She added:
"For them, writing is about persuading and organizing and debating, even if it's over something as quotidian as what movie to go see."
This article seemed particularly relevant to me after coming across this passage in Plato's Phaedrus:
"Oratory is the art of enchanting the soul, the therefore he who would be an orator has to learn the differences of human souls - they are so many and of such a nature, and from them come the differences between man and man. .. he who knows all this, and knows also when he should speak and when he should refrain, and when he should use pithy sayings, pathetic appeals, sensational effects, and all the other modes of speech which he has learned; ... but if he fail in any of these points, whether in speaking or teaching or writing them...he who says 'I don't believe you' has the better of him (448)."
Or this one:
"with them the point is all-important (449)."
After reading both Plato's Phaedrus and the article referencing Dr. Lunsford's work, I would argue that Dr. Lunsford is on to something when she states, "I think we're in the midst of a literacy revolution the likes of which we haven't seen since Greek civilization."

As her research shows, today's students write more than ever before - just in different formats. Gone are the long, drawn-out writings of our parents' and, more likely, grandparents' generations. They've been replaced by 140-character tweets, Facebook postings and instant messages. What I - and many of my generation - like about much so-called modern writing is that today's writers get to the point. They don't fill their prose with unnecessary fluff and/or jargon. Every word - sometimes every single letter - is carefully chosen to deliver the maximum amount of content in the shortest time frame.

I would like to think that my writing is more like my peers' than my parents' or grandparents'. If so, it has more to do with my training and experience at daily newspapers, wire services and TV stations than with the more academic style of writing I learned in elementary, middle and high school. As oft-maligned as it is, news writing is very similar to the kind of writing both Plato and Dr. Lunsford describe. Journalists have one basic command: keep it short and to the point. News audiences have little tolerance - and even less time - for unnecessary details, so writers/anchors have to drill down to the most basic information for their stories.

You know the saying, "If it bleeds, it leads." Well, that's about as basic as it gets. With relatively little effort, a reporter can find and share the answers to these almost primal questions: who did what; to whom; where; when; and, if lucky, why they did it. The why matters less to the audience than the where (did it happen in my neighborhood?) and the who (do I know him/her?), but bonus points are awarded to the reporter who can get the criminal to say why he/she did something in a single soundbite. Exactly how someone did something is another question that always sparks interest - was the person killed with a gun/ax/refrigerator?? Who doesn't dawdle by accident scenes hoping - but not seriously (at least I don't) - to catch a glimpse of a bloody corpse...or go home after passing a wreck to see if news crews had any more luck? It's human nature to want to know what's happening around us, and reporters do an admirable job of finding and reporting the facts about issues people care about quickly and succinctly - something Plato seems to consider crucial to good rhetoric/writing.

Monday, January 19, 2009

Aesthetics of Interactive Arts: Assignment 1

1/ Multicolored candies, individually wrapped in cellophane. Ideal weight: 175 lb; installed dimensions variable, approximately 92 x 92 x 92 cm (36 x 36 x 36 in.). Collection Donna and Howard Stone, on extended loan to the Art Institute of Chicago, 1.1999.

Felix González-Torres used ordinary materials to extraordinary ends. From 1986 until his early death in 1996, he produced work of uncompromising beauty and simplicity, transforming the everyday into profound meditations on love and loss. González-Torres's quiet, elegiac oeuvre comprises serial works including lightbulb strings, candy spills, beaded curtains, language-based works, graph-paper drawings, and stacked-paper sculptures. This installation is an allegorical portrait of the artist's partner, Ross Laycock, who died of an AIDS-related illness in 1991. The 175 pounds of candy corresponds to Ross's ideal body weight. While the work is on display in the museum's contemporary galleries, viewers are encouraged to help themselves; as the pile diminishes, the candies are replaced.

SOURCE: http://www.artic.edu/aic/collections/artwork/152961