Archive for the ‘ancient’ Category

Once Upon A Time

[Update: As is often the case when lots and lots of people (say, the whole Internet)
look at a problem, I came to this conclusion independently along with a
whole bunch of other folks.  I wrote this freshman effort at
blogging prior to becoming aware of the “Eternal September” concept;
however, this trope of pre/post-1993 Internet quality is much more
concisely described by the “Eternal September” entry in Wikipedia:
http://en.wikipedia.org/wiki/Eternal_September .  My take on it
doesn't put as much blame directly on AOL users as the folk wisdom of
Eternal September does; I try to look at structural differences in the
modes of communication and speculate as to their effects on the types
of interactions that went on.]

Once upon a time, the Internet was cool (circa pre-1993). At that time
there was a lot of info with a decent signal-to-noise ratio, and a lot
of knowledgeable people. You could read the FAQs for a newsgroup on a
subject (anything from hang gliding to Germany) and get a fairly good
dose of knowledge on the topic, as well as a direct line to a bunch of
people who knew it well. Is there a way to get something as cool as
that back out of today's incarnation of the Internet (that is, the
largely Web-mediated experience)? I hold that maybe there is some hope
and that we can get the Internet back to being somewhat collaborative
and useful again.

If the Internet was so grand, what did people
do with it back then? There was the normal Internet stuff that still
goes on today and will probably go on forever: email and FTP, which
respectively served the most personal and most technical needs of its
users (sending letters and distributing software). There was real-time
chatting of various types, much as there is today. But the big
difference in the way people interacted then and now is the difference
between Usenet and the Web.

Usenet (a.k.a. netnews or
newsgroups) provided for the syndication of so-called “news” messages
grouped into subject-matter categories. In practice, these newsgroups
weren't really news per se. They were rather forums for discussion and
debate by people, often quite knowledgeable people, about defined
subject areas (of all sorts, but most commonly political/religious
debate, hobbies, and computer/technical issues). People built up their
reputations by contributing constructively to these discussions, but the
most prestigious thing you could do within the context of a newsgroup
was to help maintain its FAQ. The Frequently Asked Questions list was
kind of a “greatest hits” of the newsgroup's content. Most of the
active newsgroups had these FAQs, and they were routinely made
available in the context of the newsgroup itself as well as being
archived and distributed as ends in themselves. The maintainers of an
FAQ of course had to be able contributors who would structure and even
add novel material to the FAQ, but the document really represented a
collaborative effort of the group's active members, and was often
largely paraphrased or excerpted from newsgroup postings (with
attribution; another honor for the constructive group member).

(There
was of course no such thing as a newsgroup that had only one member who
wrote the FAQ based upon his own discussion with himself and the
questions he had answered. The idea would be preposterous; newsgroups
were collaborative centers.)

(Note that the kind of knowledge
I'm discussing here is not the easy kind, like stock quotes, movie
times, sports scores, etc., which various companies have already
handled quite well [and which, I may add, were not nearly so easily
available during the Usenet era]. I call that the “easy” kind of
information because it's easy to imagine the SQL statement that
retrieves it, e.g. select showtime, location from movie_showings where
film_id = 38372 and city_name = 'boston'. I'm more interested in domain
knowledge of a particular field, such as “what are some good books I
should read to learn about hang gliding,” or “what does it mean if
program foo version 4.21 says 'error xyz-2'?”)

Sometime after
1993 a bunch of things started happening: commercial spam began to fill
up Usenet and folks' email boxes; waves of the uninitiated began
incurring the wrath of old-timers by their breaches of netiquette,
leading to a general lowering of the signal-to-noise ratio; and, of
course, people got turned on to this whole idea of the Web. Here was a
medium in which anyone could become a publisher! If you were an expert
on a topic, or if you had a cool digital photo, or if you just happened to
know HTML, you could publish a Web site and become sort of famous! Of
course, this was a pain in the ass: posting on Usenet just meant typing
an email message, but having a web page required knowing and doing a
lot of tedious but not very interesting stuff, so you really had to
have some time to put into it.

However, the Web had pictures and
clicking with the mouse, while Usenet had boring words and typing —
and AOL users were starting to come onto the Internet. So the Web took
over.

The dominant mode for interaction on the Internet — but
more importantly, for publishing of subject-matter knowledge — moved
away from Usenet to the Web. (Of course, Usenet is still around, and
the newsgroups generally put their FAQs on the Web, but a newcomer to
the Internet might never even hear of Usenet during his Web-mediated
experience.) Rather than posting an article to a group and waiting to
read other articles posted in response, you now published a “site” and
counted how many visitors came. (Plus, you could enjoy hours on the web
without ever using your keyboard, which meant of course that its users
were even physically disconnected from the means of actually inputting
any information.)

Everyone who was an aspirant to Web fame and
had an interest in model trains, say, would create his own model trains
Web site, provide his own set of (supposedly) interesting content, and,
often, maintain his own FAQ of questions asked of him by visitors to
the site. At first, these aspirants were individuals, but soon enough
affinity groups or associations and commercial interests got involved,
doing basically the same thing. Perhaps you see where I am going with
this, gentle reader. The way in which personal knowledge was packaged
up and distributed became centered on the individual, and the
relationship changed from one of collaboration between peers to one of
publisher and reader.

A well-known lament about web publishing
is that unlike print publishing, the cost is so low as to admit
amateurs, crazies, and just plain bad authors — anyone with sufficient
motivation to brave the arcana of FTP and HTML. On the other hand, I
have just complained that the model simultaneously changed from a
peer-to-peer to a client-server relationship. Could it be that both of
these charges are true? It seems this would be the worst of both
worlds: not only are people no longer as engaged in the constructive
addition to the commons, but those who control the production and
distribution of knowledge aren't even filtered out by the requirements
of capital investment. It's like creating a legislature by taking the
worst parts of both the House and the Senate. Sadly, this describes much
of the past ten years of the Internet's history.

However, there
is some hope. Whereas previously, “anyone” could have a Web site but
precious few put in the many hours it required in practice, the promise
of Weblogs is to actually open Web publishing to “anyone.” This won't
filter out the crazies, but at least it won't artificially inflate
their importance by raising the bar just high enough to keep everyone
else out. Comment forums, content-management systems, Wikis,
trackbacks, and the like are helping to re-enable the sort of
collaboration that made the Usenet system work.

Bottom line: it rather feels like we're almost back to 1993.

Next time: future directions, pitfalls, and why blogging (alone) is not the answer.

Check-cutters drop ball, bash Harvard, circle wagons; "consumerist" attitudes toward computing.

Paymaxx, a payroll services provider, recently confessed to a major
mistake that essentially made public many of their customers'
employees' W-2 forms. My firm uses Paymaxx to run payroll. So, as it
happens, does another Harvard-associated person's small computer firm.
This person, however, has more time (or more curiosity) than I, and
discovered a gaping hole in the system serving W-2 forms, a hole that
made it trivial to retrieve others' forms. This person did not create
the hole or “crack into” the system — just stumbled upon the hole left
open. What happened next was unfortunate.

The discoverer of the hole was in a bind; to confirm the existence
and nature of the hole, he necessarily performed some testing and
experiments. Upon forming a supported theory of the problem, he
contacted the company with his complaint, and a sales pitch for his
services to fix it. Was this morally correct? Certainly, he was
compelled to take action by knowledge that his security and privacy were
threatened; certainly, he was correct to inform the company. Certainly,
he was under no obligation to provide his expertise without
compensation. However, the quandary seems to center on the nature and
specificity of his notice / sales pitch to the company: did he wrongly
withhold information about the problem in such a manner as to constitute
(morally, if not legally) a form of extortion?

The response of Paymaxx was less than satisfactory as well. In a letter to its customers, Paymaxx stated:

The hacker, is a 21 year-old Harvard student (or
graduate) with a history of similar stunts. He was a PowerPayroll
customer for nearly four years. In mid-February when we informed him
(and the rest of our customer base) of the availability of 2004 W-2
information on-line, he e-mailed one of our sales reps informing him
that he had found a flaw in the security aspects of our on-line W-2
application and that he would tell us about it if we would hire his
firm. We considered this a sales pitch and dismissed him.

The remainder of the letter is a bunch of hand-waving.
However, it is this paragraph that is most troubling. Why was their
customer referred to as a “21-year old Harvard student”? This seems to
me nothing more than an attempt to excuse their incompetence by
averring that it required an evil genius from Harvard (that spooky and
much-maligned ivory tower of mysterious egghead commies) to get into
their systems. Bad job, Paymaxx — there went your opportunity to own
up to your screw-up, be clear about how and why you screwed up, and
demonstrate the objective steps you've taken to prevent it in the future.
Instead, you pled the Harvard defense, and tried to shift the blame
onto someone else. However, rather than inveigh against Paymaxx for
their wounded-animal response, I'd rather look to the systemic reasons
why we can expect this kind of problem throughout corporate America for
the foreseeable future. I'll begin with a brief technical description,
and then give my theory on the attitude that leads to this kind of
result.

The problem was, schematically, that the URLs for retrieving W-2 forms were like this:

http://bogus.paymaxx.com/w2form?123456

Where, as you might guess, the next employee's form is 123457. This
is not exactly how the problem manifested, but it's close enough to
illustrate: the engineers who put that into play were either lazy or
stupid, not taking into account that changing digits in the URL is
trivial. Put in the right number, and you get the W-2 form, with name,
address, and earnings.

(Merely to demonstrate that I am not declaiming against their engineers
uninformedly, let me state what needed to be done: 1. use HTTPS, if
they had not; 2. engineer the sharing of a true, non-trivially
guessable secret (for example, by snail-mailing a PIN to each
employee); and 3. put a guess-count limit on the retrieval dialog to
prevent brute-force attacks. In defense of Paymaxx,
they are probably just the first payroll company to get caught with
something like this — I have chosen to stay with them despite, and
somewhat because of, their experience with this problem, since now they
should be more rightly paranoid about security and because I don't
expect any better from other firms.)
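
To make the second and third items concrete, here is a minimal Ruby sketch of pairing a mailed PIN with a guess-count limit. Everything in it (the W2Store class, the MAX_ATTEMPTS value, the sample IDs) is hypothetical and invented for illustration; it is not Paymaxx's code, just one shape the idea could take.

# A sketch only; every name here (W2Store, MAX_ATTEMPTS, the sample IDs)
# is hypothetical and this is not Paymaxx's code.  Serving it over HTTPS
# is assumed; the PIN stands in for the snail-mailed secret.
class W2Store
  MAX_ATTEMPTS = 5  # lock the record after this many wrong PINs

  def initialize
    @records = {}   # employee_id => { pin, form, attempts }
  end

  def add(employee_id, pin, form)
    @records[employee_id] = { pin: pin, form: form, attempts: 0 }
  end

  # Return the W-2 only for a correct PIN on an unlocked record; guessing
  # employee numbers alone (the original hole) gets you nothing.
  def fetch(employee_id, pin_guess)
    rec = @records[employee_id]
    return nil unless rec
    return nil if rec[:attempts] >= MAX_ATTEMPTS   # brute-force limit
    if rec[:pin] == pin_guess
      rec[:attempts] = 0
      rec[:form]
    else
      rec[:attempts] += 1                          # count the bad guess
      nil
    end
  end
end

store = W2Store.new
store.add("123456", "8271-4410", "<W-2 data for employee 123456>")
store.fetch("123456", "0000-0000")   # => nil (wrong PIN, one strike recorded)
store.fetch("123456", "8271-4410")   # => "<W-2 data for employee 123456>"

The point is simply that the number in the URL stops being the only thing standing between the public and a W-2.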

I can only speculate at the reasons behind this goof, but it does
fit with a general pattern I have witnessed, of what I term a “consumer
attitude” to data and computing. This attitude is promoted by the false
promises of the software industry to liberate us from the burdensome
task of comprehension — the notion that all software can be
“intuitive” and that humans and computers can interact without the
humans holding up their end of the bargain. Holding this attitude leads
to the implicit adoption of certain maxims:

  • All that is displayed visually (representation) is the thing itself
    (underlying form) and can only be manipulated thereby, and conversely,
  • How something can be manipulated via a visual interface is the only means of manipulating it.
  • (or, things work as they apparently do, and they don't work in other ways.)
  • The visual interface must permit a user with no or cursory
    training to access any conceivable functionality (by conceivable, I
    mean conceivable by a lay person with experience in the problem domain
    and describable in plain language, for example, “move the invoice date
    to the first Monday of the month;” I except functionality that lay
    persons would not think themselves qualified to describe, such as
    certain mathematical wrangling), and therefore,
  • Any program functionality that is reasonably described in plain
    layman's terms by someone familiar with the problem domain should be
    simple to implement, by a layman who is made familiar with computing
    tools (rather than by a programmer who is made familiar with the
    problem domain).

The attitude brings with it the conceit of thinking that others will
share the attitude — an assumption that always proves fatally flawed,
for even in a world devoid of legitimate, curious “hackers,”
there will always be black-hat “crackers” who shun the maxims of
the consumer attitude in favor of experimenting, breaking things, and
seeking alternative scenarios. The consumer attitude is one of taking
the image on the screen at face value; of seeing the shiny parts of the
system as the important ones. It is also, unfortunately, the reigning
attitude in the business world, because having a “producer” orientation
to data and computing is hard and often unpleasant — much easier to
fire up Excel or Solitaire than to write code! The consumer attitude
makes one believe that links are something clicked upon and not
manipulated, and dulls one to critical and proactive thinking about
security.

I
am not suggesting that every executive be intimately familiar with Web
application security before leading his company to make use of the Web,
but in the Paymaxx case, it appears that even their engineers
manifested the consumer attitude, thinking shallowly about their
application's security. Hiring these engineers, therefore, was the big problem. If executives have ONE imperative in their relationship to technology, it's responsible vendor selection!

I suggest therefore that executives be made aware
of the existence of the consumer attitude and the problems with it, and
be trained to evaluate solutions and providers with an eye toward
avoiding “consumerist” technology thinking. Those who design, create,
manage, and maintain our technology infrastructure must have a
“producer's” attitude toward technology, understanding what the hard
problems are, and that they are hard, and not shying from depth of
understanding. Inevitably, this will grow to include executives at most
kinds of businesses, as all forms of organization rely increasingly on
information technology.

We are in a unique historical
moment with regard to this problem of attitude. The past century did
not suffer so greatly, for every shipping concern would naturally have
been managed by men who had sailed on ships, and every bridge-building
outfit would have been managed by engineers and architects — because
ship's officers and engineers had existed as professions for
generations. There might be one generation of management-age persons
who have a solid generalist background in computer science as of today,
and these few are a tiny fraction of the number needed to fill the
ranks of executive positions at IT-reliant firms. As a result, we are
stuck with dilettante consumers making critical decisions for
productive firms. Who would hire someone to oversee a pharmaceutical
plant's operations on the basis of his qualification of taking medicine
daily? It is absurd — but every time we put a “consumerist” person in
charge of an IT-reliant operation, we do the same thing.

There was a time when people did not hold a consumer attitude towards IT; indeed, the pendulum was too far in the other direction. People were scared witless about computers, and
they were seen as the domain of “wizards.” Indeed, secretaries became “pseudo-wizards” in their own right,
memorizing WordPerfect macros, and in effect writing their own programs
for routine tasks. This, of course, did not last: while some arcane jobs will always require engineers, for the
most part people got over their computer fears with training. 

It was accepted that to use a computer required
training and knowledge, as with using an automobile or a welder's
torch.  Then, with the rise of the Gog and Magog of Windows and
Macintosh, we found ourselves in the middle of an apocalyptic war
between two indistinguishable armies — meet the new boss, same as the
old boss. What they fought over was market share, but what they agreed
upon was promising the world that computers should be easy and
effortless.  Details of interface were the ideas in dispute,
rather than the underlying metaphors, attitudes, and concepts. And it
was amidst this battle — waged over the turf of the newly discovered
mass-market for computing — that the consumer attitude was
propagandized to the masses as well as the elites.

It made sense, too, in a world where computers were machines for
three families of applications: word processing and spreadsheets,
email, and custom (internal) applications. Word processing — at least
at a casual to moderate use level — is a great candidate for WYSIWYG,
know-nothing interfaces. Spreadsheets had the beautiful characteristic
of direct analog to well-understood ledger books and pocket
calculators, combined with a spatial orientation that paralleled the
WYSIWYG ideal of the word processor. Email was a finite
domain, and it had similar metaphors to familiar tools. And custom
applications, internal to a given organization, were the special
exceptions to the know-nothing rule — staffs were trained on workflow
processes, order entry “screens,” predefined queries written for a
particular purpose. Each internal application was like a special tool
inside the firm, usable for its one purpose, and only by those who were
trained.

And
how well this regime worked for a while! Get familiar with the clicking
and typing bits, and you've got the word processing, spreadsheet, and
email stuff down pat. Watch the training video or read the manual, and
you can use your company's order-tracking system or pull the
quarter-to-date sales figures from the Oracle database.  But what
happens as soon as Visual Basic for Applications is embedded in your
word processor?  What happens when your Excel model requires a
procedural language routine, or sources data from an external database?

If businesspeople are to operate effectively in the world of
computing, I believe that we must produce a thriving culture of rounded
generalist executives, interacting with honest vendors who make the problems
of computing as simple as possible — but no simpler! 
We must expect people to learn some of the underlying ideas behind the
abstractions; just as a freight forwarder must understand the underlying
limitations and strengths of various forms of transport, regulations,
etc., an author of a complex data report must understand the
limitations and strengths of his data sources, the concept of the
normalization of data, timeliness and validity, etc.

Future directions: why a
consumerist “know-nothing,” and a technician, “specialized tool” model
are both insufficient ways for businesspeople to approach computing.
Necessity of generalist computing knowledge. Folly of having businesses
driven by IT run by modern-computing-illiterate executives (would one
run an oil company with no chemical engineers or geologists on the
management team?). Folly of expecting interfaces to require a constant
amount of learning (zero) while they expose a geometrically expanding
range of functionality to the user. Uniqueness of the generalist
computing skill set and how it is already as important to an executive
to understand data as it is to understand accounting and bookkeeping —
even if this is not accepted today.

Class::MethodMaker v2 dies with cryptic "Unknown error" in compilation with bad arguments to use / require

If you use Class::MethodMaker and have a subtle error in your

use Class::MethodMaker [ whatever…];

line, such as not quoting a bareword, you can end up with this error:

Unknown error
Compilation failed in require.
BEGIN failed--compilation aborted.

If this happens, scrutinize your “use” lines and especially your C:MM line.
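
For illustration, here is a contrived Perl example of the kind of slip that triggers it; the package and field names are hypothetical, and the broken variant is left as a comment:

package Widget;   # hypothetical class
use strict;
use warnings;

# Wrong -- unquoted barewords in the spec; the failure surfaces only as
# "Unknown error / Compilation failed in require":
#   use Class::MethodMaker [ scalar => [ forename, surname ] ];

# Right -- quote the field names:
use Class::MethodMaker [ scalar => [ qw( forename surname ) ] ];

1;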

[FIX] Adobe Reader v.5 fails to open PDF with "There was an error opening this document. A temporary file could not be opened."

If you see “There was an error opening this document. A temporary file could not be opened.” when trying to open a PDF file, you may need to clean out C:\Documents and Settings\USERNAME\Local Settings\Temp\Acr*.tmp

Cheers to “gprellwitz” who suggests this here:

http://www.experts-exchange.com/Web/Graphics/Adobe_Acrobat/Q_20790405.html

Jeers to the MSFT developer who decided that “Documents and Settings” with spaces and mixed caps was a better home directory prefix than “home”.

[BUG] ActiveRecord woes with SQL Server odd names (spaces and brackets), ntext data types

ActiveRecord (latest versions of ODBC, DBI, and AR as of today
2005-12-01) seems to be having trouble with at least two things that
SQL Server 7 does:

1. The SQL Server adaptor (sqlserver_adaptor.rb) get_table_name sub
expects a name to have no whitespace in it.  The conditional and
regex need to be changed to look for bracketed names like [Poorly
Designed Table].  Then, the columns sub needs to know to take the
brackets off the ends of the names when it looks up the table by its
textual value.  To complicate this, according to
http://msdn2.microsoft.com/en-us/library/ms176027.aspx you can have
either double-quotes or square brackets as your delimiters in SQL
Server names, and you can even escape brackets by doubling.  I
have written hackish code that solves for simple [Dumb Name] tables but
not the whole enchilada, so I'm not posting it here yet. (A simplified,
standalone sketch of the delimiter handling appears at the end of this
post.)

2. The data type “ntext” seems to create memory allocation problems; I get an error of:

/usr/lib/ruby/site_ruby/1.8/DBD/ODBC/ODBC.rb:220:in `fetch': failed to allocate memory (NoMemoryError)

Running this on CYGWIN_NT-5.1 RANDALL-VAIO 1.5.19s(0.141/4/2) 20051102 13:29:13 i686 unknown unknown Cygwin on Win XP Pro.
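
Here is the simplified, standalone sketch mentioned above: a plain-Ruby illustration of the kind of bracket/quote stripping that get_table_name and columns would need, not the hackish adaptor patch itself, and it deliberately ignores the doubled-delimiter escaping described at the MSDN link.

# Standalone sketch, not a patch to the adaptor itself: pull a possibly
# bracketed or double-quoted table name out of a FROM clause, then strip
# the delimiters.  Doubled-delimiter escapes ([Na]]me], "Na""me") are not
# handled, per the caveat above.
def extract_table_name(sql)
  m = sql.match(/\bFROM\s+(\[[^\]]*\]|"[^"]*"|\S+)/i)
  return nil unless m
  unquote_sqlserver_name(m[1])
end

def unquote_sqlserver_name(name)
  name.sub(/\A\[(.*)\]\z/m, '\1').sub(/\A"(.*)"\z/m, '\1')
end

extract_table_name('SELECT * FROM [Poorly Designed Table] WHERE id = 1')
# => "Poorly Designed Table"
extract_table_name('SELECT * FROM plain_table')
# => "plain_table"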

WORKAROUND: Excel for Mac toolbars "trapped" off the screen

If you hook up an external monitor to your Mac OS X machine and run Excel 2004 for Mac on it, you might move your toolbars completely or partially over to the second desktop area. If you then remove the external monitor, it is possible for the toolbars to get “stuck” such that only a corner (like the resizing corner) is visible. You can resize them, but not move them back onto your main screen.

You can try to use “Reset” in the View:Toolbars:Customize Toolbars/Menus, but that doesn't work. There's some other reset-to-defaults choice somewhere that I tried (and can't find now) that didn't work either. Quitting and restarting does nothing.

Try going into your home directory (/Users/username) and nuking this file:

/Users/username/Library/Preferences/Microsoft/Excel Toolbars (11)

Upon restarting Excel, the toolbars were back in their normal locations. Problem solved (except for the braindead engineering).

[BUG/WORKAROUND] Microsoft Outlook 2003 / XP can't import vCard notes field; Entourage on Mac can

I am trying to sync my contact information between apps and machines. Here are my absolute, non-negotiable requirements:

(“Works
with” means I can round-trip in the given format — not necessarily
that it is native, and it's OK if I have to use a tool or a script
intermediary since I'll be scripting this stuff anyhow.)

1. Works with Outlook.

2. Works with Address Book on Mac.

3. Works with abook from the command line.

4. Is editable text in case I need to switch platforms, or do revision control, or any of a host of things.

I
was good with some kind of cobbled-together vCard solution, until I
discovered that MS Outlook 2003 on Windows XP could export note:
fields in .vcf files (oh, and to add insult to injury — there is no “Export”
option to vCard, you have to do some right-clicky nonsense or else
highlight all your contacts and “Forward as vCard”) but would not
import those same notes!

Outlook does not round trip a fully-spec'ed RFC standard?!?! What the hell?

Happily
(?) my workaround is to use MS Entourage for the Mac, which will
round-trip appropriately and sync itself with the Exchange server at my
work.  For those who cannot, perhaps there is a VBA
solution.  If you have one, please comment; I will update if I
discover how to get Outlook 2003 to accept the notes fields.

[HOWTO] Getting your profit-sharing plans rolled out of Fidelity's non-prototype retirement accounts as qualified distributions to separate IRAs or 401ks.

If you are a small business with a Profit Sharing Plan / defined
benefit plan set up through an independent benefit advisor firm,
someone may have counseled you to set up your investments at
Fidelity.  They will create a “Non-prototype retirement account”
in the name of your Profit Sharing Plan trust.  You can make
trades and do what you will (although as of late they refuse to let you
buy funds that have even potentially a short term sales charge, which
really drastically limits you) and it's all for the big pool of
money.  As a “non-prototype” plan, Fidelity washes their hands of
the actual record-keeping of who is owed what and how much is vested to
whom, etc.  That's why you're paying your independent benefits
advisor all those fees each year, right?

When you discover that the costs involved are so high as to cut
seriously into your returns, you'll want to dissolve your profit
sharing plan and distribute the assets among the beneficiaries so each
can put his funds into a low cost IRA.  Your advisor will have you
prepare corporate resolutions to that effect and tell you to distribute
the funds payable to the IRAs or 401ks of the beneficiaries, so that
they are qualified rollovers and so nobody has to withhold taxes for
the IRS.

Try telling that to Fidelity.  If your experience is like mine,
they'll have no idea, then check on things for you.  They'll come
back and say that they can make a check payable to the Trustees, to the
Plan, or they can do a qualified rollover to Fidelity.  They will
swear up and down that they can't send the money to a “contra FI”
(another bank).  They'll transfer you to “Retail Distribution,”
who will tell you that they can only pay out to the order of the
trustees, and that maybe you could have your trustees all sign the
check and then cash it at a bank, but that oh, yes, maybe, I suppose
you could get checkwriting privileges on the Fidelity account
itself.  If you are unlucky, you might try to do this.

However, if things get more and more fubared on your phone call, you
might get transferred to “Retirements Department” where someone puts
you on hold two more times to research and then discovers that yes,
those things mentioned above (only payable to the trustees or via a
Fidelity rollover) are trueish, but there is one magical thing to do
instead that will, without any fee, cause the funds to be sent to the
new banks and the new IRAs, and that is this.

Prepare a letter containing these magical 5 elements:

1. Direction to Fidelity to make a check payable to “Contra FI FBO
Employee Name Account” (e.g., “Vanguard Funds FBO John Smith Rollover
IRA”), in an exact dollar amount, and with the address to which to send
that check.

2. Certification by the trustees that the distribution is an “eligible rollover distribution.”

3. Statement that the trustees assume all responsibility for
record-keeping for the plan assets and for reporting the distribution
to the IRS for tax purposes.

4. Statement that the trustees indemnify and hold Fidelity harmless for
any liability with respect to processing the direct rollover.

5. Original signature with a bank's signature guarantee from EACH of
the trustees (each must take the letter to the bank and sign in their
presence).

Send this mystical incantation, the specs of which are not available to phone reps or on the web site, to:

Fidelity Investments
Attn: Distribution Services
PO BOX 770001
Cincy OH 45277-0035

To their (sort of) credit, they had previously hinted to another
trustee that they needed a “distribution letter,” but did not mention
the 5 requirements, and when I called back to ask about it, had to put
me on hold 6 times and transfer me twice to get me the magical list of
things to do.  My cell phone is now almost out of batteries after
nearly 40 minutes on the line with them.  They were certainly
polite about the whole thing but it does seem a bit disingenuous of
them to keep insisting that we roll into Fidelity IRAs and “forgetting”
about this handy exception.

Of course, if you do this, you had better be damn sure that your p's
are crossed and your q's are dotted with respect to telling Uncle Sam
about the whole thing since Fidelity has now washed its hands of you.

Enjoy your new, rolled-over, low-overhead IRAs and 401ks!

[WARN] MD5 sums irredeemably broken

The MD5 hash function is dangerously unusable at this point.  I
was under the impression, casually following crypto over the last
couple years, that it was weak but likely “good enough” for
non-military, non-banking types of applications.  Dead wrong.

There are now known attacks — and doubtless toolchains for specific
exploits — that permit creating two completely different (but valid)
pieces of plaintext that generate the same MD5 sum.

See http://www.doxpara.com for an example of two mocked-up HTML pages,
one for “Lockheed” and one for “Boeing,” that share the same MD5 hash
sum.

See also Wikipedia's MD5 entry (which does not NEARLY sufficiently raise the alarum on this) at http://en.wikipedia.org/wiki/Md5

You might pooh-pooh my admittedly somewhat superficial take on this,
but ignore me at your peril: bad guys are doubtless developing toolkits
for creating two docs, one legit, one malicious, that share the same
MD5 sum.

Bottom line: time to use SHA1 (for a while until someone figures out
how to do the same thing).  Simple enough on debian; “sha1sum” is
in coreutils and is a seeming drop-in replacement for MD5 sums.
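
If you are computing digests from code rather than the shell, the swap is just as mechanical; a minimal Ruby sketch using the standard-library Digest module (the file name is hypothetical):

require 'digest/sha1'

path = 'release-1.2.tar.gz'   # hypothetical file to check

# Old habit -- collision attacks make MD5 untrustworthy for this:
#   require 'digest/md5'
#   Digest::MD5.file(path).hexdigest

# Drop-in replacement, same pattern as swapping md5sum for sha1sum:
puts Digest::SHA1.file(path).hexdigest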

Unwire Portland (OR) Project: Public benefit through the "drinking fountain" model

Portland, Oregon is working toward a citywide, privately-operated
wireless network, under a public-private partnership model that
leverages city rights-of-way, among other assets, in return for certain
“public benefits.”  I strongly support this effort (the “Unwire
Portland” project).

The issue at hand is that the currently-proposed public benefit
structure is to create a “walled garden” of hand-picked sites that will
be freely available to the public.  A few moments' reflection
should alarm the reader: who will pick these sites, using what criteria
and what process for review, etc.?  Who will get sued when someone
inevitably disagrees with the choices?

My answer to these concerns is to do away with the “walled garden” and
in its place put a “drinking fountain” model, where each passerby may
take a small “trickle” of an unrestricted Internet connection for free.

I have put together a document supporting the adoption of the drinking fountain model here: http://rlucas.tercent.com/wifi.html

Your comments and suggestions are welcome.