Thursday, July 15, 2010

Klaus Event

Chris Klaus spoke this evening at the Georgia Tech College of Computing building that bears his name. Klaus made his fortune as founder and former CTO of Internet Security Systems, a company he started in 1990 while still a Tech student. He has since stepped down from ISS and is now the founder and CEO of Kaneva, a 3-D virtual world.

His talk focused on trends in gaming, advice about technology start-ups, and bringing the gaming industry to Georgia. There was a sizable, interested crowd, with time for networking before and after the talk. This was the lowest female-to-male ratio I've experienced in a long time, and that's saying a lot at Tech. That could be because there were so many Tech alumni there, and in earlier years the ratio was even less balanced than it is now. Most, though not all, of the attendees were Tech affiliated, so I got plenty of chances to respond, "No, I'm a Tech alum" when people asked if I was a Tech student. The novelty hasn't worn off yet.

It was really cool to be able to converse about some of my research interests with people who are knowledgeable about related topics and have come by their expertise in different ways. The people I met were not in academia, but mostly worked in the private sector or had their own business or non-profit. It was a great way to close my last day of work at CACP, because in talking with so many people, I got a better picture of just how much I have learned both through CACP and through earlier experiences. I can finally talk about my background and interests with some degree of coherence because I can finally see some degree of coherence.

The speech emphasized several themes and trends that have been garnering lots of attention lately. For virtual worlds he mentioned scalability and simulation capacity. He offered a useful phrase, augmented reality, for a trend facilitated by wireless mobile devices: the real and the virtual increasingly interacting. Examples he gave were MyTown, Zillow, and geotagged games. He also used the word freemium to describe games and software that are offered free, with charges for upgrades. This is the business model he foresees having growing success, tied to the trend from value in content to value in aggregation. And of course, the really big-deal trends are user-generated content (so big it gets an acronym, UGC) and crowdsourcing (so big they had to make up a word for it).

None of these trends were new to me, but what was new was hearing them described in unabashed relation to earning lots and lots of money. At work, I do consider how virtual worlds and online gaming can improve employment, expand the economy, or earn people a living, but it never even crossed my mind to wonder how I could earn money from them. That is probably a useful exercise. I love to theorize about the economy in the abstract, but am quite out of touch with actual money and business. Klaus talked about using venture capital, angel investors, and technology incubators, all of which I understand in theory, but I wouldn't know the first thing about actually approaching them with an actual business plan. Do you just google venture capitalists and call them up?

Klaus gave a list of metrics that venture capitalists pay attention to when deciding if they will invest in your tech start-up. These include acquisition (getting people to your site), activation (getting people to sign up), referral (getting people to tell their friends), retention (getting people to come back), and revenue (getting paid).
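As a toy illustration (my own, with invented numbers, not figures from the talk), these stages form a funnel, and the conversion rate at each stage is what investors scrutinize:

```python
# Toy funnel with made-up numbers (not from Klaus's talk): each stage
# converts some fraction of the stage before it.
funnel = [
    ("acquisition", 10_000),  # got people to the site
    ("activation",   2_500),  # got people to sign up
    ("referral",       600),  # got people to tell their friends
    ("retention",      450),  # got people to come back
    ("revenue",        120),  # got paid
]

for (stage, count), (_, prev_count) in zip(funnel[1:], funnel):
    print(f"{stage}: {count / prev_count:.1%} of the previous stage")
```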

He mentioned a techie news site he likes, Techmeme, which is a prime example of the "gatewatching" trend I mentioned in a previous post. Klaus also mentioned the Glitch program, run by Georgia Tech and Morehouse College, that brings in high school students to test video games and get excited about computer science over the summer. And finally, it was highly amusing when someone during the Q&A asked Klaus if he had heard of Second Life (Kaneva's biggest competitor).

Tuesday, July 6, 2010

Copyright: "Protection" for Whom?

One of my future classmates shared an article about a discussion document floated by the Federal Trade Commission (FTC) considering extension of copyright protection for news agencies.

Media organisations would have the exclusive right, for a predetermined period, to publish their material online. The draft also considers curtailing fair use, the legal principle that allows search engines to reproduce headlines and links, so long as the use is selective and transformative (as with a list of search results). Jeff Jarvis, who teaches journalism students to become entrepreneurs at New York’s City University, says this sounds like an attempt to protect newspapers more than journalism.


Why do news corporations merit such protection? I understand the Econ 101 explanation for copyright laws (and wrote a cringeworthy paper on the subject in high school). It boils down to incentivizing the production of intangibles by lending them some of the properties of tangibles (i.e., excludability) to address some of the causes of market failure (i.e., free-riding).

Copyright laws, like any other laws, restrict the freedom of society, and therefore can only be justified if they also protect society. That was the original intent of copyright law-- to protect society from the underproduction of socially beneficial works. The Copyright Clause in the U.S. Constitution authorizes Congress "To promote the Progress of Science and useful Arts, by securing for limited Times to Authors and Inventors the exclusive Right to their respective Writings and Discoveries." So the purpose was promoting science and useful arts, while the method was granting exclusive rights. But the rhetoric has changed. What was once the method is taken for the purpose. For example, Ayn Rand advocates copyrights as protecting "a man’s right to the product of his mind" (rather than a society's right to progress). And what about these newspaper "protections" being considered? Are they protecting society from harmful underproduction?

First, let me note some distinctions to bear in mind. There are products associated with news, but news is not a product. News itself--what happens in the world--is the story of humanity and is not subject to markets, nor, a fortiori, to market failure. Newspapers and newspaper delivery, for instance, are products (the first a good, the second a service). More generally, communication of the news is a product, often entailing a combination of goods and services. Underproduced communication of the news would indeed be a social harm.

Communicating the news once required access to a printing press and physical delivery of newspapers. A journalist alone could not very practically communicate the news, given the high capital requirements and distribution costs. News corporations formed out of practical necessity, and coordinated the efforts of journalists, editors, photographers, printers, etc. to produce a periodical that could then be mass delivered. Physical limitations on communicative materials also required the news media to play a gatekeeping role, determining what news was fit to print in the limited space and also maintaining some semblance of credibility. So communicating the news required corporations, and corporations require incentives.

But now, a journalist (or anyone with knowledge of a news occurrence) does have a practical way of communicating the news. The capital requirements for communication are drastically different with the Internet. The coordinating role played by news corporations is no longer as necessary. Many functions of traditional news corporations can be crowdsourced now that traditional barriers to collaboration such as time and geography have been overcome. The need for mass media brands to signal credibility is reduced now that authors can easily link to other articles, multimedia, and primary sources, and people can comment directly on articles, discuss dubious claims, and even award reputation points. Plus, the general public is better equipped to check credibility for themselves (as I did, for example, by finding FTC documents after reading the first news article), especially as digital literacy rises.

Lower news communication costs, by challenging mass media hegemony, have enabled niche news production. This actually increases news coverage for everyone, and additionally can bring hope of social justice to sparse populations or interest groups who have traditionally been ignored by the mass media. The example I have in mind is people with rare types of disabilities. In place of gatekeeping, a number of sites have taken to gatewatching, helping audiences navigate the vastness of news channels. Gatewatchers publicize, rather than publish, news, by providing headlines, summaries, and links to stories they deem relevant or noteworthy. Gatewatching also occurs when we "like" or "digg" articles, link to them in our blogs, and share them on our social networking sites (like when Daniel called my attention to the article on copyrighting).

In another age, government intervention via copyright law did mitigate underproduction of news communication. But the proposed strengthening of copyright laws is unnecessary, as news communication is at no risk of being underproduced. In fact, if the government is going to intervene, better to do so by increasing broadband access than by propping up newspaper corporations.

Saturday, July 3, 2010

Empiricism: Abstract Nonsense Part II?

There are two types of friends-- those who tell you the truth, and those who tell you what you want to hear. The latter are enjoyable to be around, but in the long run are probably not so good for you.

You can learn a lot about society from its conceptual metaphors, and they are more influential than you might think. The common expression that numbers don't lie not only personifies data, it characterizes data as that first type of friend, the rare kind that tells it to you straight. Many a college admissions essay, my own included, is essentially an eager theme-and-variations on that sentiment. With my wordy ode to empiricism, I got into college, learned all about data, and got to know numbers really well, only to discover that maybe they're not exactly what I thought.

At work I have the onerous task of trying to determine the number of jobs that can be attributed to digital technology, and to analyze the trends and the potential for future employment in the information economy. So lately I've been deep in the databases of the Census, BLS (OES, CES, BDM, CPS, etc.), BEA, OECD, even AARP. I've fought the good fight with Excel.

The result? Whatever you want to hear. I can't post the graphs up here, but widely disparate trends and figures can come out of some straightforward analysis. As an exercise, go to BLS and use the data to justify that digital media-related employment is growing. And then use the data to justify that it is in decline. Anything I want to tell you, I can tell you, and back up with statistics and a graph. The problem is, I don't know what I want to tell you. I want the data to tell me what is true!
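Here is a minimal sketch of how that happens, on invented numbers (emphatically not real BLS figures): fit an ordinary trend line to the same series over two different windows, and the sign flips.

```python
# Invented employment series (thousands of jobs), NOT real BLS data.
employment = {
    2001: 520, 2002: 480, 2003: 450, 2004: 460, 2005: 475,
    2006: 490, 2007: 505, 2008: 495, 2009: 455, 2010: 470,
}

def trend(series, start, end):
    """Slope of a least-squares line fit over the years [start, end]."""
    years = [y for y in series if start <= y <= end]
    mean_x = sum(years) / len(years)
    mean_y = sum(series[y] for y in years) / len(years)
    num = sum((y - mean_x) * (series[y] - mean_y) for y in years)
    den = sum((y - mean_x) ** 2 for y in years)
    return num / den  # thousands of jobs per year

print(trend(employment, 2003, 2007))  # positive slope: "growing!"
print(trend(employment, 2001, 2010))  # negative slope: "declining!"
```

Same data, opposite stories, and nothing in the arithmetic tells you which window is the honest one.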

I ran into similar frustrations studying measurement and metrics at the National Center for Education Research and working at the Budget Office.

I know the surface-level explanations. Sometimes data collection is shoddy. Sometimes metrics are poorly designed, methods are lacking, people are careless, tools are faulty. Sometimes, as a twist on the expression goes, numbers don't lie, but liars use numbers. But I think the issue is more fundamental.

Data implies codification, representation, and reduction. Numbers, as I mentioned in the last post, are abstract-- so bogglingly abstract that, when you think about it, our haste to represent the world with them is remarkable. Empiricism, reductionism, and data are a critical part of understanding our world, and I know no good way around them, but I think they are ingrained too deeply, and too rarely questioned by the very people who use them most.

Categorically Abstract Nonsense

My abstract algebra professor, Dr. Ernie Croot, used a phrase that first amused and then intrigued me: "This is the power of abstract nonsense." That was several months ago, and I've been tossing it around in my head ever since. I kept finding it a more and more packed phrase, even if it was only meant to be casual and comical. It turns out the phrase abstract nonsense was actually coined by mathematicians in the 1940s or so with the emergence of category theory.

My math coursework included only a teasingly brief introduction to category theory, really just enough to know that it's out there and that it adds even more levels of abstraction to what I saw in my abstract algebra courses. From what I understand, category theory generalizes notions including rings (which themselves generalize integers), groups (which generalize mappings and symmetries) and modules (which generalize vector spaces AND rings AND (abelian) groups). Even the least abstract thing just mentioned, the integers, is abstract. They came from what? Our need to count? But could we even conceptualize counting before having integers?

When theories aim to be applicable to everything, are they useful to nothing? The most generalized and abstract theorem I learned in Dr. Croot's class was the Fundamental Theorem of Finitely Generated Modules over Principal Ideal Domains. (Really I just like the name of it and wanted an excuse to mention it!) Now, there are tons of finitely generated modules, over tons of principal ideal domains. And this theorem tells you about any of them. But if you pick a special principal ideal domain, the integers, you get as a corollary the Fundamental Theorem of Finite Abelian Groups, which was pretty much the culmination of an entire previous semester course in group theory. Which is more important, the more general theorem or the corollary?
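For the curious, here is the statement in invariant factor form, sketched from memory (existence part only, uniqueness omitted):

```latex
% Fundamental Theorem of Finitely Generated Modules over a PID
% (invariant factor form; existence part only):
M \;\cong\; R^{r} \oplus R/(d_1) \oplus \cdots \oplus R/(d_k),
\qquad d_1 \mid d_2 \mid \cdots \mid d_k.
% Specializing to R = \mathbb{Z} with r = 0: every finite abelian group
% decomposes as
G \;\cong\; \mathbb{Z}/d_1 \oplus \cdots \oplus \mathbb{Z}/d_k,
% which is exactly the Fundamental Theorem of Finite Abelian Groups.
```

Setting R to the integers collapses a whole semester of group theory into a corollary.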

When I start my graduate studies in economics, instead of math, I don't expect the abstraction (or the nonsense!) to vanish. And while pure mathematicians have no qualms about valuing abstraction qua abstraction, I'm not sure there's such a consensus among economists.

It seems to be a pattern for me lately: I read something that really fascinates me and then later find out it was written by a Marxist. Such is the case with Franco Moretti's Graphs, Maps, Trees: Abstract Models for Literary History. Moretti postulates:

Theories are nets, and we should evaluate them, not as ends in themselves, but for how they concretely change the way we work: for how they allow us to enlarge the ... field, and re-design it in a better way, replacing the old, useless distinctions ... with new temporal, spatial, and morphological distinctions.


Moretti's book comes from the field of literary history, and yet fits into a discussion that started with abstract algebra. (Only thanks to the hyperlinked Web did I ever come near it. I just can't get over the positive feedback between digital networks and knowledge networks.) Anyhow, I hope that in grad school I gain a better understanding of not only how to theorize, but why to theorize.

Sunday, June 27, 2010

Epic Win! Part 1

Georgia Tech has a pretty good sense of self-deprecating humor. We take a definite pride in our nerddom, our unshowered masses, our LAN parties and crazy mad Matlab skills. The Gamer is an archetypal figure, even a Tech cultural icon. I came so very close to getting out of Tech without getting into gaming. But alas, in the home stretch, I've been sucked in.

I'm not actually gaming (yet?), I'm just into gaming. There are several reasons I am intrigued. I'll set them out briefly here, talk a bit about the first, and elaborate on the rest in later posts.

1. The on the verge of an epic win mentality
2. The experience economy and the blur between production and consumption, and between labor and leisure
3. Network effects and increasing returns
4. Accessibility and design issues
5. Surveillance, power, media, and governance issues
6. Virtual currencies and economies
7. The psychology and philosophy of avatars and embodiment
8. Collaboration, communities, and sociological issues

I chose to discuss the "epic win" mentality first simply because it requires the least amount of background info. I recommend watching this video of a talk by Dr. Jane McGonigal called "Gaming can make a better world."

Side note, McGonigal is awesome. Her job title is Director of Games Research & Development at the Institute for the Future in Palo Alto. She did her Ph.D. at Berkeley. MIT Technology Review calls her one of the top 35 innovators changing the world through technology. She is also an engaging and energetic speaker who seems genuinely thrilled to be doing what she's doing, which is looking for ways that gaming can save the world. So I really do recommend the video.

In gaming, she says "an epic win is an outcome that is so extraordinarily positive you had no idea it was even possible until you achieved it. It was almost beyond the threshold of imagination. And when you get there you are shocked to discover what you are truly capable of. That is an epic win." She notes that
when we're in game worlds I believe that many of us become the best version of ourselves, the most likely to help at a moment's notice, the most likely to stick with a problem as long as it takes, to get up after failure and try again. And in real life, when we face failure, when we confront obstacles, we often don't feel that way. We feel overcome. We feel overwhelmed. We feel anxious, maybe depressed, frustrated or cynical. We never have those feelings when we're playing games, they just don't exist in games.


How do we start to explain this phenomenon? Here's how she begins to explain:

Whenever you show up in one of these online games especially in World of Warcraft, there are lots and lots of different characters who are willing to trust you with a world-saving mission, right away. But not just any mission, it's a mission that is perfectly matched with your current level in the game. Right? So, you can do it. They never give you a challenge that you can't achieve. But it is on the verge of what you're capable of. So, you have to try hard. But there is no unemployment in World of Warcraft. There is no sitting around wringing your hands. There is always something specific and important to be done. And there are also tons of collaborators. Everywhere you go, hundreds of thousands of people ready to work with you to achieve your epic mission.

It's not something that we have in real life that easily, this sense that at our fingertips are tons of collaborators. And also there is this epic story, this inspiring story of why we're there, and what we're doing. And then we get all this positive feedback. You guys have heard of leveling up and plus-one strength, and plus-one intelligence. We don't get that kind of constant feedback in real life.


She also states four things gaming makes people good at: urgent optimism (from always being on the verge of an epic win), trust and cooperation, blissful productivity (realizing that we're happier pursuing a purpose than just hanging out), and epic meaning (because gamers love to be attached to awe-inspiring missions and planetary-scale stories).

So how do we harness the positive power of gaming? How can we get that best self from games to come out and play in the real world? One possibility is to design games that have real-world impact (McGonigal gives some examples). Another is to design the real world--the workplace, for instance, or schools--to be more like gaming, at least in the positive senses. Technology can help, by giving feedback, facilitating collaboration, and even gearing tasks ("missions") towards skill levels.
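To make that last idea concrete, here is a toy sketch of my own (not something from McGonigal's talk) of what "gearing missions to skill level" could look like: always hand out the hardest task that is still within reach.

```python
# Toy sketch, my own illustration: pick the task just beyond the user's
# current skill -- achievable, but a stretch (the "verge of an epic win").
def pick_mission(skill, missions, stretch=0.15):
    """Return the hardest mission within `stretch` above current skill."""
    doable = [m for m in missions if m["difficulty"] <= skill * (1 + stretch)]
    return max(doable, key=lambda m: m["difficulty"], default=None)

missions = [
    {"name": "tutorial",   "difficulty": 10},
    {"name": "side quest", "difficulty": 48},
    {"name": "epic raid",  "difficulty": 90},
]
print(pick_mission(skill=45, missions=missions))  # -> the side quest
```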

June

Unpresuming June
Slyly, sweetly, summer slips away

Thursday, June 24, 2010

Faith and Science

I consider myself a scientist, a philosopher, and a person of faith. I am desperately, ardently seeking knowledge, meaning, and truth. At the foundation of my pursuit is my faith, and thereupon I build my philosophy, and thereupon science. There is a tendency among scientists to take the reverse approach, designating science as the fount of truth and undertaking the futile exercise of judging faith by the means, motives, and methods of science.

I say futile because faith, by its very definition, cannot be reasoned out with scientific rationality. Anything that can be so reasoned is, by definition, not faith. So you see how this leads to all manner of tautology and circular reasoning. If you start with science and build from there, all you get is more science. The methods of science are not suited for questions of faith. This sounds some combination of foolish, wild, unsatisfying, or frightening to anyone who is used to equating science with truth, and its rationality with finding truth (i.e., to nearly all of us).

And why is it that we have no problem “believing in” the scientific method itself? Things like deduction, probability, causation, natural laws, representation, and abstraction-- how can we even “prove” that they are “legitimate”? It’s sort of like (rough analogy warning) picking the axioms in mathematics. The legitimacy of deductive reasoning is an axiom of science. This sets up a whole lot of questions that I can’t even begin to address right now (but think about often). My point is that even going with just science requires a certain type of faith, just not what we typically think of as faith. When you decide how to decide what should qualify as truth, that is an act of faith.

So my approach, like I said, is to make faith the foundation. This approach does not exclude science but encompasses it.

If I may make another rough analogy: faith is the foundation of my house, philosophy the structure, and science the window. I, the resident of the house, depend on the foundation that supports everything (faith), go about my life according to the structure (philosophy), and view the world through the window (science). I understand that there’s a whole lot of the world that I can’t view through my window. In particular, I can’t view the house’s foundation, no matter how intently I look, but that doesn’t distress me or make me doubt the foundation. I nonetheless enjoy looking out the window, and feel compelled to look out it frequently and deduce what I can about the world.

The specifics of my faith itself--how it is formed and what it implies--I will have to save for later. (To put it very, very briefly, I am Catholic.) My philosophy as well I can't touch on tonight, but I hope to delve into it more later. For now I highly suggest Pope John Paul II's "Faith and Reason" (Fides et Ratio) if you're up for some more reading on this topic, from a very wise and articulate thinker. I haven’t yet settled on what the intent of this blog is, as you can probably tell considering how different today’s post is from yesterday’s ode to tabbed browsing. I only know that I intend to be thoughtful, and hope to get some feedback from other thoughtful people.

The one with the Anarchist Globetrotting Priest

It's a shame that the vast contributions of Ivan Illich have been obscured by his portrayal, in the later years of his life, as a troublesome anarchist. As his fascinating obituary notes, "He was far more significant as an archaeologist of ideas, someone who helped us to see the present in a truer and richer perspective, than as an ideologue."

I wasn't even aware of him until a few months ago, when I came across his 1973 Tools for Conviviality, a remarkable critique of industrialism and its technology. Nothing like "anarchy" jumped out at me. Rather, I was struck by his ability to see the big picture--he "gets it," which I've found to be rare among people who write about technology and society, who often get caught up in hype or triviality--and by his prescience.

Illich proposed the concept of convivial tools, or technologies that foster creative and autonomous interactions between people. He claimed that since people “need technology to make the most of the energy and imagination each has,” that “society must be reconstructed to enlarge the contribution of autonomous individuals and primary groups to the total effectiveness of a new system of production designed to satisfy the human needs which it also determines. In fact, the institutions of industrial society do just the opposite.” Here's an introductory excerpt:

I choose the term "conviviality" to designate the opposite of industrial productivity. I intend it to mean autonomous and creative intercourse among persons, and the intercourse of persons with their environment; and this in contrast with the conditioned response of persons to the demands made upon them by others, and by a man-made environment. I consider conviviality to be individual freedom realized in personal interdependence and, as such, an intrinsic ethical value. I believe that, in any society, as conviviality is reduced below a certain level, no amount of industrial productivity can effectively satisfy the needs it creates among society's members.

Present institutional purposes, which hallow industrial productivity at the expense of convivial effectiveness, are a major factor in the amorphousness and meaninglessness that plague contemporary society. The increasing demand for products has come to define society's process. I will suggest how this present trend can be reversed and how modern science and technology can be used to endow human activity with unprecedented effectiveness. This reversal would permit the evolution of a life style and of a political system which give priority to the protection, the maximum use, and the enjoyment of the one resource that is almost equally distributed among all people: personal energy under personal control.


Twenty to thirty years later, convivial tools (by a variety of other names) are the big deal. Look at Manuel Castells' The Rise of the Network Society, Kevin Kelly's 1998 New Rules for the New Economy, Yochai Benkler's 2006 The Wealth of Networks, and Adler and Heckscher's Collaborative Community, just to name a few. The "it" technologies that are part of the now widely proclaimed transformation from the industrial economy to the networked information economy (Benkler's phrase)--open source, peer publishing, web 2.0, user-developed virtual environments, and basically any technology based on collaboration, contribution, and flexibility--fit the convivial tools description. Decentralization, horizontal rather than vertical organizational structures, new modes of production, and interdependent autonomy are the themes.

I'm really interested in what these themes might (or might not) imply for the economy and employment. Recently I've been researching what digital media might mean for employment for people with disabilities, and if there is going to be an impact, I think the "conviviality" is key. It is the concept of autonomy through interdependent efficacy (rather than through isolated independence) that is so relevant, and that seems to be the link between disability studies, sociology, and employment.

So when I first came across Illich, I was impressed that he called it so far ahead. Then I was even more impressed when I saw the scope of his publications on education, technology, health, history, gender, pain (he was plagued by a painful facial tumor), and politics. Reading more about his life, I learned that he was also a Catholic priest, an activist, and a polyglot. He travelled the world and lived in other countries, working with, rather than opposed to, the cultures he encountered. He was global before being global was the thing. I think those are a few clues as to why he "got it"-- the breadth of his knowledge and insight, his cultural exposure, his intense experiences, maybe even his pain.

Wednesday, June 23, 2010

The Tabbed Revolution

I distinctly remember the first time I used tabbed browsing. (Which is kind of sad considering that I don't remember my first day of college or my first time driving a car.) Anyway, the first time I used tabbed browsing was before I had thoroughly embraced the Internet lifestyle. I used the Internet, daily even, but it was not yet the central tool and space of my productivity. I certainly didn't, at that point, spend countless hours reading and writing and thinking about the Internet. So all I thought about the tabbing capability of my newly downloaded Firefox was: that's kind of cool. I also learned the Ctrl+T shortcut, because that too was kind of cool.

Now, however, I do spend countless hours, and earn my keep, reading and writing and thinking about the Internet. Mainly I try to think about which features and uses are a big deal for society and what they could mean. But I have never run across anything so simple, yet so transformative, as tabbed browsing.

The Web is hyperlinks. What do you do on the Web? You follow hyperlinks. That is browsing. Before tabbed browsing, you started on your homepage, and to follow a link you either left the previous page behind or opened a whole new window. Since it is clunky to have lots of windows open at once, browsing was essentially linear. You went from one site to the next to the next in linear succession. You could "go back," of course, but doesn't even that phrase indicate an underlying linearity?

The Web is also a web. Its hyperlinks are not linear, and to traverse them that way is to grossly underutilize the Web's potential. With tabbed browsing, we are closer to browsing the webbed Web. You can leave one site open while opening up several of its links, and follow all sorts of paths and shapes and branches in all sorts of orders. (IE8 even color codes groups of tabs.) In the networked information economy, I can't imagine working any other way. With tabs it is so much easier to cross reference, to follow several trains of thought without having to remember to go back, to sample so much more with so much more flexibility. These days I'm hardly working if I don't have three windows, each with six tabs open.
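One way to see the difference is as a data structure (a toy analogy of my own, not a claim about how browsers are implemented): pre-tab browsing walks a single path with "back" as the only escape, while tabs let a session branch into a tree.

```python
# Toy analogy, my own (not actual browser internals): tabs turn a
# linear browsing history into a branching tree.
class Page:
    def __init__(self, url, parent=None):
        self.url = url
        self.parent = parent   # the page we came from ("back" goes here)
        self.children = []     # links opened from this page in new tabs

    def open_in_tab(self, url):
        """Follow a link in a new tab: branch without leaving this page."""
        child = Page(url, parent=self)
        self.children.append(child)
        return child

home = Page("example-news.com")
story = home.open_in_tab("example-news.com/story")
source = story.open_in_tab("example-journal.org/cited-paper")  # cross-reference
related = home.open_in_tab("example-news.com/related")  # home is still open
print([c.url for c in home.children])  # two branches from a single page
```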

Academic publishing is really a web too, when you think of all the links of people referencing each other. So if you're doing a literature review or reading about some topic in an online journal, tabbed browsing makes it so much more effective. Knowledge itself, even, is a web, and so also, I'm told, are our brains. Tabbed browsing allows us to conceptualize and internalize the Web as a web, rather than just a digital version of the more linear media we used before the Internet.

Tuesday, June 22, 2010

Last Month in Georgia

Since I only have one month left in Georgia, I am transitioning to this new blog from my previous blog, Georgia State of State. This post is essentially a copy of my last post on that blog.

This summer I am finally feeling my interests and plans coming together. Not that I have any definite, specific plan, but at least I am getting a better feel for how my values, interests, and talents fit together. That is both an exciting and a daunting feeling. Once you begin to verbalize your ambitions and their implications to yourself--to make them palpable--then the prospect of disappointing yourself is also palpable.

My interests are not as Georgia-specific as when I started the Georgia State of State, but my affection for Georgia has only grown. On July 20 I am leaving Atlanta to start a PhD in Economics at UC Berkeley, and though I have moved many times before, this is the first time I anticipate being homesick. This is my first attachment to a place. I have run so many times down its roads, admired the cityscape from every view at every time of day and year. Every idea, every trouble of the past four years, I have digested running through Atlanta. I know the streets and feel like they know me, having witnessed all my moods, trials, and triumphs.

I have seen (and contributed to) Georgia's quirkiness. I have also seen need personified. Here is the power of place attachment: desperation has a face and lives in Georgia. I used to know, conceptually, that people are in need, and that knowledge, intellectually, motivated me. Now I know: my people need me! I know it sounds ridiculous here and now to talk about my people. But how else can I refer to the multitudes who share my Piedmont Park and Freedom Parkway, my downtown views and strips of shade, my incredulity at the weather?

In four years in Georgia, I have become intellectual, but not an intellectual. Georgia does not breed intellectuals. It breeds decency. Though my studies take me ever farther into abstraction, esoterism, academe, I stay grounded in the understanding that academia may be a means, but never an end.

I got to live in Georgia at a most marvelous age--young but not too. Some days you can positively feel the glow of being young, capable, and striving. It burns and glows so intensely that others older and younger than you can sense it. They look to you to surprise them. And you hope beyond hope to never stop surprising yourself.