The Pauper Lords of Open Source

January 12th, 2007

I spent some time this week with some friends from the Portland Perl Mongers (PDX.pm) at 2007’s first meeting, devoted to Web app frameworks, games, and Martinis, in no particular order. Just as Seattle is blessed with a vibrant Ruby community (the Ruby Brigade meets up on Capitol Hill), Portland has a truly exceptional Perl community. At any given PDX.pm meeting, you’re likely to see any number of O’Reilly authors, high-level Perl monks, and authors of CPAN modules you use regularly.

(Whether or not you know that you use them, the Perl language and its Comprehensive Perl Archive Network, or CPAN, are the behind-the-scenes heavy lifters of a great deal of the dynamic Web. The “modules” on CPAN are rarely applications themselves, but are underlying code libraries or components that enable user applications.)

Many of these luminary or guru-type attendees at PDX.pm, therefore, are effectively vendors and colleagues to those of us who have written Perl software using CPAN modules — whether or not we’ve ever met. And yet, apart from the occasional pint of beer, or royalties on an O’Reilly book or two, most of us will never pay these Perl gurus (at least not directly) for their work. This got me thinking, and, as I get to below, also got me somewhat concerned.

Open Source Software (OSS), including Perl itself and the modules on CPAN, is much loved in the business world due, in part, to its compelling price tag. Some OSS, like the Linux kernel, is developed with a great deal of assistance from commercial entities, who are often motivated to increase compatibility with their product offerings (as in driver development) or to serve some internal functionality requirement. But a dirty secret — one whose propagation has been discouraged by the promoters of OSS as part of their mostly laudable efforts to gain for it the same credibility enjoyed by commercial software — is that a vast amount of OSS is in fact written and / or maintained by the “volunteer” developers of lore. These folks are, I must emphasize, not amateur or “unprofessional” — indeed, the reason that many OSS developers / maintainers undertake the task is for professional recognition or advancement — it is simply that they have no (or only intermittent) direct economic ties to the software they maintain. Therefore, in many cases, you might have a software module whose author maintains it “out of the goodness of his heart.” Without the financial lever, users of the module cannot compel him to resume development if it falls off of his list of priorities.

(Note that I do not condemn OSS on this account; from the user’s perspective, the benefit of having free and unfettered access to the source code, in combination with the gratis price, should more than make up for the incremental uncertainty added by the reduced extortibility of the author.)

What is gained for the author and maintainer of, for example, CPAN modules, is a measure of respect and credibility among his peers. The merry band of geeks at PDX.pm, for example, treats certain of its guru alumni like conquering heroes when they return to give a talk. And I have no doubt that such a status gains one an entree when seeking consulting contracts or employment. But it’s not guaranteed, and just as the user has no leverage over the author, the author has no leverage over his users by which to demand a quid pro quo.

A disturbing consequence of this economic model came up explicitly in the PDX.pm post-meeting chatter at the Lucky Lab, however: there exist major contributors to software projects relied upon and used by thousands (if not millions; and in the case of Perl and its indirect users, it would not be much hyperbole to say a billion) of people — contributors who essentially live in poverty.

Why is this, more so than other forms of poverty, especially disturbing? Well, for starters, these people are clearly creating economic value, but are martyrs to a system that doesn’t pay them back.

That open source is economically valuable may not be obvious to an Econ 101 student or the less thoughtful MBA, but even a nontechnical executive can look at the market cap of e.g. Red Hat ($4.2 B as of January 2007) and consider their ability to get there with only 1,100 employees (that’s $3.8 M per employee, comparable to $4.3 M for e.g. Microsoft, who is nearly 70x larger and is of course the global market leader and therefore presumably can get away with less SG&A per developer, and much more than $1.6 M for Oracle). If that doesn’t convince you, consider the replacement cost of many open-source products relative to their proprietary counterparts: even applying a devil’s advocate discount, we find that OSS developers are creating something worth money — yet they often capture little or none of that value for themselves.

So, although a brilliant developer on his own (or in loose collaboration with a few like minds across the world) can produce some useful thing that we believe to be worth money, that same individual has quite a challenge before him to monetize that work. And in fact, the rewards he seeks — recognition and social credit among peers — are best gained in precisely the circumstances that (are conventionally believed to) preclude monetization. By that, I mean that being a great software guru these days is best achieved by having others read and admire your code — which is done by open-sourcing it — and which in turn (notionally) stops you from extracting a fee from consumers.

Therefore, an injustice is done here because the rewards of good work are distributed not merely inequitably but in fact not at all to those who perform the work.

Another, and more practical, consequence is that there are any number of important parts of any sophisticated open-source software stack that are cared for by persons in limited or sometimes precarious personal situations. I mean cases like developers without health insurance, developers who share rooms with dodgy roommates, developers who could be staring down tax liens. You may point out that there are many more and poorer folks than OSS developers to worry about; of course this is true. But OSS developers are providing the raw materials from which innumerable startups (and incumbents!) are driving the highest-growth parts of our economy and building substantial empires.

There are very real risks for users of OSS that a dependency for some major project could fall victim to the personal hardships of a developer, if, for example, he is made to seek other work to support himself or if he falls ill without insurance. In the biggest cases (say, Google), the firm running the dependent project could simply take over the code — but in the case of a startup building on an OSS stack, the danger looms rather larger.

I have no grand plan to remedy this. And I don’t think that the (terminally broken anyhow) music copyright regime gives us much guidance. While in music it is the peripheral producers who languish unpaid and the stars who get big dollars, in the case of OSS developers, often the marginal producers are comfortably paid corporate contributors while the true “rock stars” of OSS go broke.

And although a large part of this problem comes from the effective waiver of copyright inherent in OSS, closed source software is not the answer. Closed source software (at least that which is meant to be used by developers or geeks) sucks too much.

Probably the closest I have to a recommendation on this is that users of OSS find and reward the “stars” of projects they depend heavily upon. And, having said that, I am off to send money to Bram’s Ugandan orphans (for the incomparable vim) and beer and pizza to the GNU Screen guys.

Rescinding a Dumb Policy

December 27th, 2006

A while ago I posted a dumb policy essentially stating that I wouldn’t accept entrepreneurs’ (Web-based) social networking requests while they were under active consideration for investment here at Voyager Capital.

This was ill-conceived. After all, anyone who is observing whether or not a given entrepreneur (Mr. X, let’s say) is garnering links from VC Y must already know that Mr. X has met with VC Y. Therefore, any explicit policy tying linking to investment status leaks information to the adversary.

The only solution? Throw in some randomness (like the occasional tricky check for slowplay or a bluff-raise). Therefore, I rescind my dumb-ass (and anyway kind of arrogant sounding) policy — let’s get linked.

Myspace: Exemplifying the "Worse is Better" Principle

December 22nd, 2006

I’d been holding out as long as I could, but the time finally has arrived when I have no excuse any longer not to have a Myspace account. Considering that it’s impossible to hear a Web deal pitch these days without some reference to the 800-pound, Goth-dressing, emo-listening gorilla that is Myspace (be it as an exit comp, a business partner, a go-to-market venue for reaching a demo, or whatever), it was clear that having some first-hand knowledge of the beast would be to Voyager’s advantage.

Nonetheless, I had hesitated up until this weekend. Prior to that, I’d visited Myspace perhaps ten times total — usually coming across someone’s personal page that ranked highly for some obscure Google query — and I really felt that each time my browser started rendering the mix of self-administered webcam stills, cacophonous and concurrent music and video widgets playing over one another, ill-considered typographical conventions, and color schemes from visual artists’ professional purgatory, I was actively losing IQ points.

(Nerd alert: I always thought one’s .plan was a perfectly good way to put up a personal profile that your friends could check and that you could, in turn, obsessively monitor to see who’s been checking you out and when. Alas.)

Well, no longer. Speaking with a couple of musician friends who use Myspace to promote their bands finally convinced me to take the plunge (along with the imperative thoroughly to understand the thing for business reasons). And, grudgingly, I must admit, the signup process beat my extraordinarily low expectations for aesthetics, etc. That is to say: the default color scheme is mercifully legible, and there are no Snoop Dogg vids playing while you register an account.

Still: judged by any reasonable standard, Myspace is terrible software. While registering an account, a field failed validation, and the form came back with the checked status of the checkboxes reset to defaults. The password is emailed to new users in plaintext. The cookie / session scheme is inconsistent, and various pages “forget” that you are logged in from time to time. The most basic types of usability enhancements — like, say, automatically hyperlinking the title of a favorite movie to a search under “movies” (and music under “music,” etc.) — are unimplemented. In-band signalling proliferates, facing the user with odd directives about pseudo-tags to include in text blocks to toggle HTML escaping.

All in all, the implementation of Myspace would probably get a “D” in Philip‘s MIT class on Web apps. But you can hire an awful lot of “A” earning MIT grads for $580 M.

The lesson here, I believe, is emphatically not that architecture and quality of implementation don’t matter. But it does prove that such quality is neither a necessary nor a sufficient condition for outsized success under the right circumstances. The distilled version of the lesson is probably something like this: if you have a credible shot at an acquisition exit (based on some non-earnings and non-quality metric, like user signups), and if you have hit the “hockey stick” of user uptake, then you should not throttle user growth (your putative value metric) in favor of fixing the problems — just do the bare minimum to keep things working while you stoke the fires.

Most startups hold their breath after launch wondering if they can get the dogs to eat the dog food. But if the dogs are ravenously devouring your dog food, however crappy the ingredients may be, it’s no time to worry about QA on the unidentified meat parts. “Just keep shovelin’ it out there” would be the Myspace motto.

Of course, I must return to the “under the right circumstances” caveat from above. Myspace’s growth curve overlapped nicely with a period of compressed risk premia and renewed enthusiasm in acquisitions in the Web space. If Myspace had “blown up” in, say, 2002, and had needed to demonstrate staying power for three years until the M&A environment was ready for an exit — well, then, the kind of architecture / scalability and user experience issues I mention above may well have been its downfall.

So, entrepreneurs of the world: worse is better in Web software (except when it’s not). And, the answer as to when it is better probably has more to do with macroeconomics and industry trends than with your technology and user demographic. Yet another example of the difference (and the not uncommon incompatibility) between the skills of building a superior product, creating a great and thriving company, and making a huge return on investment.

(Thank heavens for the fact that not all entrepreneurs focus as single-mindedly on the third skill (huge ROI), for many times the ability of companies to reap such rewards is dependent upon the skillful borrowing of innovations from companies that have focused on numbers one and two (product and culture). Identifying such skillful borrowing in the Seattle technology ecosystem is left as an exercise for the reader. My question for economists is, is it possible more justly to apportion the rewards of innovation to the innovators?)

The Paradox of Quality Site Visitors

October 10th, 2006

Last month I met with a friend who complained to me that his website — a high quality hobbyist community site — received X page views per day, but was turning over less than X / 2 dollars per year in revenue. Doing the math, that turns out to be under 0.15 cents per page view (X views per day is 365X views per year, and X / 2 dollars spread over 365X views is about 0.14 cents per view).

Exacerbating this, he then cited a conversation he’d had with a domain squatter (domain troll), who also was receiving X page views per day on a network of typo domains and similar. That domain squatter was making more like X * 5 dollars annually — ten times as much money as my friend with the valuable, sticky community!

The paradox, it seems, is this: in a pay-per-click driven world, site visitors who want to stay on your site — due to it having the once-much-lauded quality of “stickiness” — are worth much less than those who want to flee your site because it’s clearly not valuable, and hence will click through to somewhere else.
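The arithmetic above can be checked in a few lines of Perl (the value of X is arbitrary here, since it cancels out of the per-view figure):

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Back-of-the-envelope check of the numbers above: X page views per
# day; the community site turns over X / 2 dollars per year, while
# the squatter makes X * 5.
my $x              = 10_000;                  # arbitrary; X cancels out
my $views_per_year = $x * 365;
my $community      = ($x / 2) / $views_per_year * 100;   # cents per view
my $squatter       = ($x * 5) / $views_per_year * 100;

printf "community: %.3f cents/view\n", $community;    # ~0.137
printf "squatter:  %.3f cents/view\n", $squatter;     # ~1.370
printf "ratio:     %.0fx\n", $squatter / $community;  # 10x
```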

Curses::UI Escape key bindings are slow; here’s why.

September 14th, 2006

I am throwing together a quick Curses (console / terminal) based UI for a database here, prior to putting (or hiring someone to put!) a Web front-end on it. In keeping with my experience with elinks, I wanted the menubar activation key to be Escape. However, it was running slower than molasses in February — it seemed to take a FULL SECOND before the Esc key would register and focus / unfocus the menubar.

Well, poking around a bit gave me the answer. From man ncurses(3):

ESCDELAY
       Specifies the total time, in milliseconds, for which ncurses will
       await a character sequence, e.g., a function key.  The default
       value, 1000 milliseconds, is enough for most uses.  However, it is
       made a variable to accommodate unusual applications.

Duh. It was taking exactly a full second.
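For the record, the workaround is a one-liner: ncurses reads the ESCDELAY environment variable, so setting it before Curses::UI initializes the terminal shortens the timeout. A minimal sketch — the 25 ms figure is my own guess, not a recommended value; set it too low and multi-byte escape sequences (arrow and function keys) may get chopped up on slow links:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# ESCDELAY must be in the environment before initscr() runs, i.e.
# before Curses::UI brings up the terminal; a BEGIN block keeps this
# assignment ahead of module initialization.
BEGIN { $ENV{ESCDELAY} = '25' }   # milliseconds; the default is 1000

use Curses::UI;

my $cui = Curses::UI->new;
# ... menubar setup as before; Esc should now register promptly.
```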

Social Networking Connection Policy for Current / Prospective Investment Candidates

August 25th, 2006

*Update:* This policy is dumb-assed and I have rescinded it; see my more recent post (http://rlucas.net/blog/metablog/rescinded_social_networking_connection_policy.html).

Policy on adding connections: Please note that I do not accept connection invitations from folks who are, or in the immediate future will be, prospective investments for Voyager. This is to avoid either actual or apparent leaking of information or impropriety in investment decisions. (I am happy to re-connect after active investment consideration is over!)

A Rational Scheme for Medical Laboratory Results

August 20th, 2006

Medical laboratory results these days are a hodgepodge of numbers on various scales and with various units. For example, the Merck Manual lists various laboratory test normal ranges and their units:

Hematocrit: Male 41-50%, Female 35-46%
Hemoglobin: Male 13.8-17.2 g/dL, Female 12.0-15.6 g/dL
...
Sodium: 135-146 mmol/L

These “normal ranges” can be sort of misleading. If your value is numerically half of the lower end of the hematocrit range, for example (say, 20%), you would be sick but still alive. However, if your sodium concentration is only half of the lower end of its normal range (say, 70 mmol/L), you’d be dead.

This is crappy. It imposes a high cognitive load on doctors by requiring them to know a variety of “normal” ranges, it makes lab results opaque to patients and the uninitiated, and it has a “hidden memorization cost” of knowing the implications of going outside the normal range (such as the difference between having half the normal measurement for hematocrit vs. sodium, above).

I propose a replacement scheme for all scalar laboratory values (at least those in the main test batteries, like the chem-N and CBCs). In my scheme, all “unit” lab results are replaced (realistically, augmented) by “rational” values. Rational values are normalized at 100 for the center of the range. The “normal range” is represented by the range 90-110. The values associated with roughly 50% mortality are set at 50 and 150. The ranges 80-120, 70-130, and 60-140 will be pegged at some statistics-based measurement, either based upon standard deviation or upon increased chances of negative outcomes, whichever an appropriate standards body decides best (there are some labs for which it might not make sense to have it be standard deviation-based, others for which it would).

The correspondence of “unit” to “rational” measurements is not necessarily linear; the formulae to determine this will be decided per-test, reviewed annually by the standards body, and published as an appendix to standard references and on the Web.

The “core rational” lab values are those which are unadjusted for average adults. “Adjusted rational” lab values are adjusted for sex and body mass. “Peds adjusted rational” values are adjusted as above but with age ranges.

All lab reports will show these values on the summary page; “unit” measurements will be provided as well (they will doubtless remain indispensable for certain purposes). Color-coding would be straightforward: green for +/- 10, yellow for +/- 20, orange for +/- 30, and red for +/- 40.
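As a sketch of how the per-test mapping might work — assuming simple piecewise-linear interpolation between the anchor points described above. The sodium anchors below are illustrative placeholders, not clinical data:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Map a raw lab value onto the "rational" scale by piecewise-linear
# interpolation between (raw, rational) anchor pairs, sorted by raw.
sub rational {
    my ($value, @anchors) = @_;
    return $anchors[0][1]  if $value <= $anchors[0][0];
    return $anchors[-1][1] if $value >= $anchors[-1][0];
    for my $i (0 .. $#anchors - 1) {
        my ($x0, $y0) = @{ $anchors[$i] };
        my ($x1, $y1) = @{ $anchors[$i + 1] };
        if ($value <= $x1) {
            return $y0 + ($value - $x0) * ($y1 - $y0) / ($x1 - $x0);
        }
    }
}

# Illustrative anchors for serum sodium (mmol/L) -- NOT clinical data:
# ~50% mortality low -> 50, low normal (135) -> 90, midpoint -> 100,
# high normal (146) -> 110, ~50% mortality high -> 150.
my @sodium = ([110, 50], [135, 90], [140.5, 100], [146, 110], [170, 150]);

printf "Na 140.5 -> %.0f\n", rational(140.5, @sodium);  # 100 (mid-normal)
printf "Na 128.0 -> %.0f\n", rational(128,   @sodium);  # 79, below the green zone
```

In practice the standards body would publish the anchor table (and any non-linear formula) per test; the interpolation itself is the trivial part.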

This will become an ever more crucial part of diagnosis as we move toward greater automation (e.g. field lab-testing machines that paramedics could carry) and de-skilling of the medical profession (nurse practitioners, paramedics, self-administered care and monitoring, etc.). It also becomes a key part of the understanding required for personal medical choice as we move the economics of health care toward a (partial) patient-pays model.

If someone wants to give me a grant for a year of my time with a couple of assistants, I’ll go ahead and set this up. Drop me an email – rlucas@tercent.com.

Brooklyn Restaurant Review (Seattle, WA)

August 19th, 2006

Date: 2006-08-18
Reviewer: rlucas
Review: The “Brooklyn” Restaurant (Seattle, WA)
Summary: C minus for overall experience

We recently got hit with a heat wave here in Seattle, and so my wife and I decided to celebrate having survived another blistering weekend day by going out. I was looking for a steak, so we hit up the Brooklyn, where we’ve enjoyed happy hour before a time or two. Unfortunately, we had a very disappointing experience.

Despite our having made a reservation, our table wasn’t ready when we showed up five minutes fashionably late. The hostesses suggested we sit in the waiting area, which is a bit of a dismal nook (in retrospect, having gone for a Martini at that moment would have significantly improved the evening). When we were seated, it was in the bar area in front (at the time, I didn’t realize there was a real dining room in back or I would have agitated for it). This might have been OK during a less busy time or in the daylight. As it was, the room had noisy acoustics and folks were crammed in rather close to one another — our evening out together was now being augmented by the evenings out of various folks, such as some chatty tourists from Boston. Furthermore, at night, the high-pressure sodium (bright orange and buzzy) street lamps come on, and shine through the blinds in a most unflattering way.

This would have been water under the bridge if the rest of the experience had been excellent. It was not. Her seafood something-or-other was quite passable — B+ / A- — but my filet mignon, a dish I order once or twice a year, was a C cut of meat, too large by half and stringy and oddly marbled. Our waiter was professional and prompt, but — and it’s hard to say if this is an impression we formed due to the other factors or not — he seemed awful /weird/ in an inexplicable way. The wine by the glass selections were good, not excellent, but definitely above average for by-the-glass (handy descriptions of the 8-9 types provided a guide and doubtless helped us make satisfying choices). In the end, we left feeling like we’d dropped a c-note in vain. The Brooklyn is forever cast in my mind’s eye in the light of a high-pressure sodium streetlamp — with not a jot of the luxury and escapism that is the stock in trade of even merely good steakhouses.

Conclusion: Avoid the Brooklyn — and if you do end up there, avoid the front of the house in the evening, and the filet mignon unless you hear otherwise.

Picoformats – The Lazy, Curmudgeonly Answer to Microformats

August 19th, 2006

I like the idea of Microformats — they’re essentially loosely standardized schemata with a strictly standardized syntax (which plays nice with (X)HTML — hence the “h” in front of hCard, the new groovy Microformat version of vCard). They’re 90% of the way to my new Nirvana of all-text interoperability (I no longer trust binary formats unless the readers and writers are Open Source and old).

But I hate angle brackets, and I hate having to do crap that the computer should be doing.

So, I’m defining Picoformats. It’s like this. You can do a pReview, or pCard, or pCalendar, or whatever — we’ll call it a pThing. If there’s a corresponding Microformat or other similar format defined, you SHOULD include the constituent fields that the format defines as required; if not, you SHOULD try to include the same fields when you do the next pThing.

You SHOULD write in plain English, or whatever you like. You SHOULD write in the format you like, but you MUST NOT hand-code more markup than necessary, memorize any XML namespaces, or be tied to special-purpose tools.

You SHOULD prefix the constituent parts of your pThing schema with the name of the field, set apart in some reasonable way, like being the first thing on a newline before a colon, like this:

Name: Bob Smith  

or with some easily-added formatting, like this:

Name Bob Smith

or whatever happens to work for you. You MUST NOT freak out if the word you use to represent the field is not exactly the same as in the corresponding schema, although it’s AWFUL NICE if you keep it real close (like “Date Reviewed” instead of “dtreviewed”).

You SHOULD keep all the schema fields in the same document, right up next to each other, and delimited in some easy to dig way like starting on newlines or something.

You MUST trust that Google and its heirs will help people find your pThing, and that smart people like Ana-Maria will help computers understand your pThing. You SHOULD help them both out by writing a parser to a standard format if it’s easy to do.
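In that spirit, here is a toy pThing parser — a hypothetical sketch, not a standard. The field-name normalization is deliberately loose (lowercase, letters only), so that variants like “Date Reviewed” and “dtreviewed” land near the same key:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Treat any line beginning with a short word or two followed by a
# colon as a field; lowercase the name and strip non-letters so loose
# variants of a schema's field names collapse together.
sub parse_pthing {
    my ($text) = @_;
    my %fields;
    for my $line (split /\n/, $text) {
        next unless $line =~ /^\s*([A-Za-z][A-Za-z ]{0,30}):\s*(.+?)\s*$/;
        (my $key = lc $1) =~ s/[^a-z]//g;
        $fields{$key} = $2;
    }
    return \%fields;
}

my $f = parse_pthing(<<'END');
Date Reviewed: 2006-08-18
Reviewer: rlucas
Summary: C minus for overall experience
END

print "$f->{reviewer}, $f->{datereviewed}\n";  # rlucas, 2006-08-18
```

Mapping the loose keys onto a Microformat’s canonical field names would be the smart-people part; this just gets the text into key/value form.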

Now: to write a pReview or two.

Full Sail Session vs. Domestic Beers

August 19th, 2006

A few weeks ago, P—- made fun of me for ordering a bottle of Session brand beer at Captain Ankeny’s, one of downtown Portland’s best Wednesday night deals ($1 cheese pizza, $2.50 microbrews). “Session?” he chided, “why spend two bucks on that when you can get a pint of good stuff for fifty cents more?”

Fifty cents or no, I enjoyed my pizza and my Session. But the point he raised is a valid one: if you pour a Session out of its dark brown stubby bottle, you may be surprised to see a hue more reminiscent of Foster’s than Full Sail. It’s a golden lager, not an ale, and certainly not heavy on either hops or malt flavors.

Is this just a case of “meet the new boss, same as the old boss?” Are we being led astray by marketing, by the clever ruse of Full Sail Brewing, who now are trying to parlay their well-built brand into high-margin success by selling us swill at a premium?

This calls for a blind taste test. Three glasses, labelled underneath with the beer type, with equal volumes (~ 4 oz) of Session (bottle), Bud (can), and MGD (can). Mixed in a dark kitchen and scrambled around so I honestly didn’t know which was which when I took them into the living room.

Appearance:

Beer #1: Lightest of the three by a hair over #2. Head seemed to come down the quickest with the least residual foam on the sides. Steady but modest effervescence, with a few bubbles adhering to the glass.

Beer #2: A bit darker than #1, but still a light color; this is more like gold with a bit of copper in it than shiny 24k stuff. The head has gone down but left an even cylindrical section of foam still stuck to the sides of the glass. Effervescence is continuous at about the same rate as #1 but with no bubbles adhering to the bottom.

Beer #3: Significantly darker than the other two (actually, about as dark as both put together — when #1 and #2 are next to one another, they make about as dark an image as #3 alone). Still not “dark” by any means. The head has gone down leaving an uneven “mountain range” ringing the glass. Bubbles are slower and bigger; there are many large bubbles adhering to the bottom.

Smell:

Beer #1: Smells like cheap beer. Kind of a sharp odor, with a sweetness.

Beer #2: Smells like cheap beer, but a little more vegetable smell, and a smell like wet old clothes.

Beer #3: Smells like honey with a bit of spice. I am getting a feeling that a bias toward #3 may be setting in.

First taste:

Beer #1: Mild taste, a bit of sweetness, almost no hops.

Beer #2: Sweeter, still quite mild. Almost no hops. I think there’s a bit of a chemical aftertaste.

Beer #3: Sweeter still, with more alcohol / carbonation to attack the palate (I’m not skilled enough to discern the two). There is a slight bitter, asparagus-like aftertaste.

Second taste:

Beer #1: This time, felt the carbonation / alcohol a bit more (mouth sensitized?). The sugar is the dominant taste, though it is very dilute, like a little bit of honey dissolved in water. I think now that this is sweeter (in terms of sweetness, not in terms of sugar content) than #2.

Beer #2: Getting a bit more bitterness (hops?) out of it. The sugars are there — you can taste them — but they don’t manifest as sweetness so much as in added body. The aftertaste is not so much a taste as a feel — the residual taste, strictly speaking, is sugary, but there’s a kind of oily, chemical feel.

Beer #3: More alcohol, lots more hops, and a return to the honey-sweet type of sugar flavor (tempered by those hops, though). The aftertaste is the most pronounced of the three; however, it’s more of a lingering bitterness / acidity, rather than #2’s weird chemical residue.

Recommendation:

Beer #1: OK. Not great. Would drink it if it were free.

Beer #2: Sort of OK. Would drink it if it were free and I were trying to be polite. Would choose pretty much any microbrew or import over #2.

Beer #3: A little more interesting. “OK plus.” Would stand up next to mass-produced imports, or perhaps almost up to Sam Adams. Doesn’t seem like it would stand up to a real microbrew or a Muenchener bier.

Results (could you guess?):

Beer #1: Bud

Beer #2: MGD

Beer #3: Session

Conclusion:

My methodology may be questioned, but the results seem pretty firm. Session is different from the major domestics. It’s probably also not as good as a “real” microbrew, if, like me, you enjoy strong or novel flavors (various forms of hops shenanigans, strong IPAs, infusions, Rauchbiers, steam beers, bocks, etc.). I didn’t bother comparing it to a domestic “ultrapremium,” though that might be a more valid comparison. At the price we’re seeing locally ($10/12), Session is a relatively good buy.

Date: 2006-08-18
Reviewer: rlucas
Reviewed: Full Sail Brewing's "Session" beer
Summary: Compared Session to Budweiser (Bud) and Miller Genuine Draft (MGD)
Result: Session wins on hops, sugar, and alcohol.  MGD has a bad aftertaste.
Conclusion: Session is not just a marketing ploy, but an OK value beer.