rlucas.net: The Next Generation


Unwire Portland (OR) Project: Public benefit through the "drinking fountain" model

Portland, Oregon is working toward a citywide, privately-operated
wireless network, under a public-private partnership model that
leverages city rights-of-way, among other assets, in return for certain
“public benefits.”  I strongly support this effort (the “Unwire
Portland” project).

The issue at hand is that the currently-proposed public benefit
structure is to create a “walled garden” of hand-picked sites that will
be freely available to the public.  A few moments' reflection
should alarm the reader: who will pick these sites, using what criteria
and what process for review, etc.?  Who will get sued when someone
inevitably disagrees with the choices?

My answer to these concerns is to do away with the “walled garden” and
in its place put a “drinking fountain” model, where each passerby may
take a small “trickle” of an unrestricted Internet connection for free.

I have put together a document supporting the adoption of the drinking fountain model here: http://rlucas.tercent.com/wifi.html

Your comments and suggestions are welcome.

[FIX] XFree86 stuck at 640 x 480 under Linux with Dell Dimension or Optiplex

With a fresh install of Red Hat 9 on a Dell Dimension 4600, the only video mode that would work with XFree86 was 640 x 480, which is ludicrously big on a decent-sized monitor.  Changing the config didn't do anything, even though the config was well within my monitor's limits.

The solution was to go into the BIOS setup and change the Integrated Devices (LegacySelect Options) / Onboard Video Buffer setting from 1 MB to 8 MB.  I'm not sure what the tradeoff with other aspects of the system is, but X nicely starts up at 1280 x 1024.  Apparently, this is the solution for other Dell models as well, including the Optiplex GX260; mine had Dell BIOS Revision A08.  Also, it seems to be the case that the problem is general to XFree86, although it manifested for me under Red Hat 9.

Thanks to Erick Tonnel at Dell, who kindly provided the solution here:



Apple Security Update 2003-03-24 Breaks Many Things?

For Mac OS X users who installed the Software Update with a security component on 24 March 2003, some things might be broken if you use Apache, Sendmail, or the Perl PostgreSQL module DBD::Pg.

1) Regarding Sendmail:

See http://www.macosxhints.com/article.php?story=20030306145838840

(relevant error message: Sendmail might complain in /var/log/mail.log of “Deferred: Connection refused by localhost “)

(summary: Apple makes sendmail look at /etc/mail/submit.cf instead of sendmail.cf)

2) Regarding Apache:

See http://ganter.dyndns.org/misc/apple_ssl.php and http://apple.slashdot.org/comments.pl?sid=58276&threshold=1&commentsort=0&tid=172&mode=thread&cid=5640470

(relevant error message: Apache segfaults out on some SSL requests with the crash message [try /var/log/system.log]

Exception: EXC_BAD_ACCESS (0x0001) Codes: KERN_INVALID_ADDRESS (0x0001)

… and specifically complains that the error is in ssl_var_lookup_ssl)

(summary: Apple supplies a faulty libssl.so for Apache; a working version is provided)

(fix: see dyndns link above or try http://cyber.law.harvard.edu/blogs/gems/rlucas/libssl.so NO WARRANTY courtesy only mirror)

3) Regarding DBD::Pg
See http://gborg.postgresql.org/pipermail/dbdpg-general/2003-March/000039.html

(relevant error message:

dyld: perl Undefined symbols:

… and more, whenever a script uses DBD::Pg.)

(summary: Perl scripts now crash out. Might be because PostgreSQL was compiled before the security update. Does anyone know otherwise?)

I am going to install the July Security Update to see if it fixes things at all.

UPDATE: The July security update does not fix it. However, recompiling PostgreSQL fixes most of the errors (see 25 July 2003 entry for a persistent error with utf-8 support).

[BUG] Mail::Mailer, Mail::Internet, and MIME::Entity fork / eval oddity

The Perl module Mail::Mailer, and those modules that rely upon it (at
least, Mail::Internet and MIME::Entity), have an undocumented fork that
can wreak havoc with your code if you call the send() method within an
eval {} block.  The solution is to either be very anal about
checking for PIDs or to use a different means for sending your
messages, like MIME::Lite.

Briefly, the problem is that the sending procedure forks, using the
open("|-") idiom to create a filehandle for writing to the child, which
immediately exec()'s a sendmail (or whatever) process.  The parent
returns the filehandle, to which is printed the message; the filehandle
is then closed for final sending (this is all hidden in the
Mail::Internet and MIME::Entity classes' send() method).  However,
if you are running in taint mode with an insecure path (for one
example), the exec() will fail in the child and will die.

If you were running this in an eval {} block, and didn't account for
the possibility of a fork within the eval{}, you could find that both
code paths — the success AND the failure code blocks — get
executed.  Since this is often done for db transactions or other
things that might be shared external resources, this could lead to some
nasty race conditions.
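
The hazard is easy to reproduce without Mail::Mailer at all. The sketch below is a minimal stand-in (not Mail::Mailer's actual code): it uses the same open("|-") idiom, die()s in the child where the exec() would fail, and makes the duplicated code path visible through the child's exit status.

```perl
use strict;
use warnings;

# Minimal reproduction: a fork hidden inside an eval {}. The child's
# die() is caught by the CHILD'S OWN copy of the eval, so both the
# success path (parent) and failure path (child) end up running.
sub demo {
    my $parent = $$;
    local $SIG{PIPE} = 'IGNORE';    # the child exits at once; ignore EPIPE
    my $ok = eval {
        my $pid = open(my $fh, '|-');   # fork; parent gets a write handle
        defined $pid or die "cannot fork: $!";
        if ($pid == 0) {
            # child: stands in for Mail::Mailer's failed exec()
            die "exec failed\n";
        }
        print {$fh} "To: someone\n\nbody\n";
        close $fh;                  # close() also reaps the child into $?
        1;
    };
    if ($$ != $parent) {
        # we are the child, and the eval above just caught OUR die();
        # the caller's "failure" logic would run here, in a second
        # process, even though the parent saw success
        exit 42;                    # make the duplicate path observable
    }
    return ($ok ? 'success' : 'failure', $? >> 8);
}

my ($parent_path, $child_status) = demo();
print "parent took the $parent_path path; child exited with $child_status\n";
```

The parent reports the success path, while the child exits through what would have been the failure path: exactly the double execution described above.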

In defense of Mail::Mailer, it is *technically* the job of the coder to
check on forks, but this argument ad absurdum would have every line
that calls module code wrapped in an elaborate eval with checking of
the PIDs.  Clearly not OK.

I have explained this bug and opened it up to discussion on
perlmonks.org, at http://perlmonks.org/index.pl?node_id=459739 and have
reported the bug in Mail::Mailer under the MailTools distribution at

The workaround at present is to either 1. obsessively check the PIDs
before and after the eval, or 2. use MIME::Lite, which appears not to
fork.  NOT a valid workaround would be to ignore this because your
exec() hasn't died yet or to turn off taint mode.
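
Workaround 1 can be sketched as follows; forking_send() here is an invented stand-in for a Mail::Mailer-style send() that forks and may die in the child.

```perl
use strict;
use warnings;

# Stand-in for a send() that forks and die()s in the child,
# the way Mail::Mailer does when its exec() fails.
sub forking_send {
    my $pid = fork();
    defined $pid or die "cannot fork: $!";
    die "child: exec failed\n" if $pid == 0;   # simulate a failed exec()
    waitpid($pid, 0);                          # parent: reap and succeed
    return 1;
}

# The guard: snapshot $$ before the eval, and make any child that
# escapes the eval bail out before it can touch shared resources
# (database transactions, open handles, and so on).
my $pid_before = $$;
my $sent = eval { forking_send(); 1 };
if ($$ != $pid_before) {
    exit 0;    # we are an escaped child: leave immediately
}
print $sent ? "message sent\n" : "send failed: $@";
```

With the guard in place, only the parent reaches the success/failure branches; the child silently exits instead of re-running them.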

FIX: Can't locate object method "select_val" via package "DBI::st" under Class::DBI 0.95

[warning: see updates below]

Between Class::DBI 0.94 and 0.95, something changed causing my classes that override db_Main() to set the db connection (instead of, e.g. using set_db) to stop working.  Specifically, when their sequence functions were invoked, I got an error of:


Can't locate object method “select_val” via package “DBI::st” (perhaps you forgot to load “DBI::st”?) at /usr/lib/perl5/site_perl/5.6.1/Class/DBI.pm line …


I was able to replicate this on Mac OS X 10.2 with Perl 5.6.0 and Red Hat 9 with 5.6.1 (don't ask about the latter version magic…).


If you get select_val errors with Class::DBI 0.95, here are two workarounds:



I am not sure why this is (comments are welcome) and have submitted a bug to the developers as CPAN #5522.

Update: Thanks to Tony Bowden, maintainer of Class::DBI, for his reply:


Full context and any attached attachments can be found at:
<URL: http://rt.cpan.org/NoAuth/Bug.html?id=5522 >

On Mon, Mar 01, 2004 at 06:42:38PM -0500, Guest via RT wrote:
> In Class::DBI 0.95, setting up the DB connection by overriding db_Main
> breaks Ima::DBI->select_val and the methods that rely on it (like sequence
> and count_all)

You need to call Ima::DBI->connect rather than DBI->connect in your
overridden db_Main.



Still not certain, though, why it is that it breaks in 0.95 and not 0.94.

Update: Thanks again to Tony, who writes:

The reason this broke in going from 0.94 to 0.95, btw, is that the
select_val stuff was only added to Ima::DBI recently, so Class::DBI .94
didn't use these, but instead rolled its own long winded versions.

0.95 uses the new methods in Ima::DBI and is a lot better off for it! :-)


Update 23 April 2004: Things are all wacky now.  You should be careful and should probably NOT do any of the above.  See the 06 April 2004 exchanges on the CDBI mailing list.  If you do what is described above you stand to get reconnections that will mess up things like auto_increments in MySQL.  At present the issue of how to deal with db_Main() seems unresolved.

Once Upon A Time

[Update: As is often the case when lots and lots of people (say, the whole Internet)
look at a problem, I came to this conclusion independently along with a
whole bunch of other folks.  I wrote this freshman effort at
blogging prior to becoming aware of the “Eternal September” concept;
however, this trope of pre/post-1993 Internet quality is much more
concisely described by the “Eternal September” entry in Wikipedia:
http://en.wikipedia.org/wiki/Eternal_September .  My take on it
doesn't put as much blame directly on AOL users as the folk wisdom of
Eternal September does; I try to look at structural differences in the
modes of communication and speculate as to their effects on the types
of interactions that went on.]

Once upon a time, the Internet was cool (circa pre-1993). At that time
there was a lot of info with a decent signal to noise ratio, and a lot
of knowledgeable people. You could read the FAQs for a newsgroup on a
subject (anything from hang gliding to Germany) and get a fairly good
dose of knowledge on the topic, as well as a direct line to a bunch of
people who knew it well. Is there a way to get something as cool as
that back out of today's incarnation of the Internet (that is, the
largely Web-mediated experience)? I hold that maybe there is some hope
and that we can get the Internet back to being somewhat collaborative
and useful again.

If the Internet was so grand, what did people
do with it back then? There was the normal Internet stuff that still
goes on today and will probably go on forever: email and FTP, which
respectively served the most personal and most technical needs of its
users (sending letters and distributing software). There was real-time
chatting of various types, much as there is today. But the big
difference in the way people interacted then and now is the difference
between Usenet and the Web.

Usenet (a.k.a. netnews or
newsgroups) provided for the syndication of so-called “news” messages
grouped into subject-matter categories. In practice, these newsgroups
weren't really news per se. They were rather forums for discussion and
debate by people, often quite knowledgeable people, about defined
subject areas (of all sorts, but most commonly political/religious
debate, hobbies, and computer/technical issues). People built up their
reputations by contributing constructively to these discussions, but the
most prestigious thing you could do within the context of a newsgroup
was to help maintain its FAQ. The Frequently Asked Questions list was
kind of a “greatest hits” of the newsgroup's content. Most of the
active newsgroups had these FAQs, and they were routinely made
available in the context of the newsgroup itself as well as being
archived and distributed as ends in themselves. The maintainers of an
FAQ of course had to be able contributors who would structure and even
add novel material to the FAQ, but the document really represented a
collaborative effort of the group's active members, and was often
largely paraphrased or excerpted from newsgroup postings (with
attribution; another honor for the constructive group member).

(There was of course no such thing as a newsgroup that had only one member who
wrote the FAQ based upon his own discussion with himself and the
questions he had answered. The idea would be preposterous; newsgroups
were collaborative centers.)

(Note that the kind of knowledge
I'm discussing here is not the easy kind, like stock quotes, movie
times, sports scores, etc., which various companies have already
handled quite well [and which, I may add, were not nearly so easily
available during the Usenet era]. I call that the “easy” kind of
information because it's easy to imagine the SQL statement that
retrieves it, e.g. select showtime, location from movie_showings where
film_id = 38372 and city_name = 'boston'. I'm more interested in domain
knowledge of a particular field, such as “what are some good books I
should read to learn about hang gliding,” or “what does it mean if
program foo version 4.21 says 'error xyz-2?'”)

Sometime after
1993 a bunch of things started happening: commercial spam began to fill
up Usenet and folks' email boxes; waves of the uninitiated began
incurring the wrath of old-timers by their breaches of netiquette,
leading to a general lowering of the signal-to-noise ratio; and, of
course, people got turned on to this whole idea of the Web. Here was a
medium in which anyone could become a publisher! If you were expert on
a topic, or if you had a cool digital photo, or if you just happened to
know HTML, you could publish a Web site and become sort of famous! Of
course, this was a pain in the ass: posting on Usenet just meant typing
an email message, but having a web page required knowing and doing a
lot of tedious but not very interesting stuff, so you really had to
have some time to put into it.

However, the Web had pictures and
clicking with the mouse, while Usenet had boring words and typing —
and AOL users were starting to come onto the Internet. So the Web took
off.

The dominant mode for interaction on the Internet — but
more importantly, for publishing of subject-matter knowledge — moved
away from Usenet to the Web. (Of course, Usenet is still around, and
the newsgroups generally put their FAQs on the Web, but a newcomer to
the Internet might never even hear of Usenet during his Web-mediated
experience.) Rather than posting an article to a group and waiting to
read other articles posted in response, you now published a “site” and
counted how many visitors came. (Plus, you could enjoy hours on the web
without ever using your keyboard, which meant of course that its users
were even physically disconnected from the means of actually inputting
any information.)

Everyone who was an aspirant to Web fame and
had an interest in model trains, say, would create his own model trains
Web site, provide his own set of (supposedly) interesting content, and,
often, maintain his own FAQ of questions asked of him by visitors to
the site. At first, these aspirants were individuals, but soon enough
affinity groups or associations and commercial interests got involved,
doing basically the same thing. Perhaps you see where I am going with
this, gentle reader. The way in which personal knowledge was packaged
up and distributed became centered on the individual, and the
relationship changed from one of collaboration between peers to one of
publisher and reader.

A well-known lament about web publishing
is that unlike print publishing, the cost is so low as to admit
amateurs, crazies, and just plain bad authors — anyone with sufficient
motivation to brave the arcana of FTP and HTML. On the other hand, I
have just complained that the model simultaneously changed from a
peer-to-peer to a client-server relationship. Could it be that both of
these charges are true? It seems this would be the worst of both
worlds: not only are people no longer as engaged in the constructive
addition to the commons, but those that control the production and
distribution of knowledge aren't even filtered out by the requirements
of capital investment. It's like creating a legislature by taking the
worst parts of both the House and the Senate. Sadly, this describes much
of the past ten years of the Internet's history.

However, there
is some hope. Whereas previously, “anyone” could have a Web site but
precious few put in the many hours it required in practice, the promise
of Weblogs is to actually open Web publishing to “anyone.” This won't
filter out the crazies, but at least it won't artificially inflate
their importance by raising the bar just high enough to keep everyone
else out. Comment forums, content-management systems, Wikis,
trackbacks, and the like are helping to re-enable the sort of
collaboration that made the Usenet system work.

Bottom line: it rather feels like we're almost back to 1993.

Next time: future directions, pitfalls, and why blogging (alone) is not the answer.

FIX: Compiling SWI-Prolog on Mac OS X 10.2

SWI-Prolog is available as prepackaged binaries for Mac OS X 10.3+, but
not for 10.2.  If you try and install the 10.3 binary package, you
will get errors (at least, I did).  The answer is to compile from
source.  You are probably compile-savvy if you are looking to
install a Prolog interpreter, but if not, it's a fairly painless
./configure, make, make install process.

1. However, the docs warn that you'll want readline and a number of
other libraries installed.  There are some binary packages on the
SWI-Prolog site.  If you want to use those, and you don't have any
other versions of the libraries, so be it — but I would recommend
using Fink instead, so that you can install the most up-to-date versions.

2. Especially if using Fink, be sure to alert the ./configure script to
the locations by including CFLAGS="-I /sw/include" and LDFLAGS="-L /sw/lib"
(or wherever).

For me, all it took was pointing the configure script to the /sw tree and it compiled with no further questions.

FIX: GIMP can't open PostScript (PS, EPS, PDF) files under Windows

The GIMP (GNU Image Manipulation Program) is a neat tool for people with needs too casual or cheap for PhotoShop, but too much for various paintbrush type tools.

However, if you install the GIMP under Windows 2000, like me, EPS or PS PostScript files will not open properly, instead barfing with:

Error opening file: C:\temp\myfile.eps

PS: Can't interprete file [sic]

You'll need to do the following to make it work:

1. Install GhostScript for Windows.


2. Install the GIMP.


3. Set your environment variables to include:



Typical paths in which to find your GS stuff after a default install might be C:\gs\gs8.11\bin\gswin32c.exe and C:\gs\gs8.11\lib

(One way to get your environment set in Windows is Start: Settings:Control Panel:System:Advanced:Environment Variables.  In non-NT versions you might need to change AUTOEXEC.BAT to include SET directives)

4. Restart the GIMP and you should be up and running.

Check-cutters drop ball, bash Harvard, circle wagons; "consumerist" attitudes toward computing.

Paymaxx, a payroll services provider, recently confessed to a major
mistake that essentially made public many of their customers'
employees' W-2 forms. My firm uses Paymaxx to run payroll. So, as it
happens, does another Harvard-associated person's small computer firm.
This person, however, has more time (or more curiosity) than I, and
discovered a gaping hole in the system serving W-2 forms, a hole that
made it trivial to retrieve others' forms. This person did not create
the hole or “crack into” the system — just stumbled upon the hole left
open. What happened next was unfortunate.

The discoverer of the hole was in a bind; to confirm the existence
and nature of the hole, he necessarily performed some testing and
experiments. Upon forming a supported theory of the problem, he
contacted the company with his complaint, and a sales pitch for his
services to fix it. Was this morally correct? Certainly, he was
compelled to take action by knowledge that his security and privacy was
threatened; certainly, he was correct to inform the company. Certainly,
he was under no obligation to provide his expertise without
compensation. However, the quandary seems to center on the nature and
specificity of his notice / sales pitch to the company: did he wrongly
withhold information about the problem in a manner as to constitute
(morally, if not legally) a form of extortion?

The response of Paymaxx was less than satisfactory as well. In a letter to its customers, Paymaxx stated:

The hacker, is a 21 year-old Harvard student (or
graduate) with a history of similar stunts. He was a PowerPayroll
customer for nearly four years. In mid-February when we informed him
(and the rest of our customer base) of the availability of 2004 W-2
information on-line, he e-mailed one of our sales reps informing him
that he had found a flaw in the security aspects of our on-line W-2
application and that he would tell us about it if we would hire his
firm. We considered this a sales pitch and dismissed him.

The remainder of the letter is a bunch of hand-waving.
However, it is this paragraph that is most troubling. Why was their
customer referred to as a “21-year old Harvard student?” This seems to
me nothing more than an attempt to excuse their incompetence by
averring that it required an evil genius from Harvard (that spooky and
much-maligned ivory tower of mysterious egghead commies) to get into
their systems. Bad job, Paymaxx — there went your opportunity to own
up to your screw-up, be clear about how and why you screwed up, and
demonstrate the objective steps you've taken to prevent it in future.
Instead, you pled the Harvard defense, and tried to shift the blame
onto someone else. However, rather than inveigh against Paymaxx for
their wounded-animal response, I'd rather look to the systemic reasons
why we can expect this kind of problem throughout corporate America for
the foreseeable future. I'll begin with a brief technical description,
and then give my theory on the attitude that leads to this kind of
problem.

The problem was, schematically, that the URLs for retrieving W-2 forms were like this (host elided; the identifier is invented for illustration):

    https://.../w2form?id=123456

Where, as you might guess, the next employee's form is 123457. This
is not exactly how the problem manifested, but it's close enough to
illustrate: the engineers who put that into play were either lazy or
stupid, not taking into account that changing digits in the URL is
trivial. Put in the right number, and you get the W-2 form, with name,
address, and earnings.

(Merely to demonstrate that I am not declaiming against their engineers
uninformedly, let me state that what needs to have been done is to 1.
use HTTPS, if they had not, and 2. engineer the sharing of a true,
non-trivially guessable secret (for example by snail-mailing a PIN to
each employee), and 3. putting a guess-number-count limit on the
retrieval dialog to prevent brute-force attacks. In defense of Paymaxx,
they are probably just the first payroll company to get caught with
something like this — I have chosen to stay with them despite, and
somewhat because of, their experience with this problem, since now they
should be more rightly paranoid about security and because I don't
expect any better from other firms.)
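
Fixes 2 and 3 can be sketched in a few lines. Every name and number below is invented for illustration; this is of course not Paymaxx's actual code, just the shape of a mailed-PIN check with a guess-count limit.

```perl
use strict;
use warnings;

# Toy records: the PIN is the shared secret snail-mailed to the employee.
my %employee = (
    123456 => { pin => '8241', fails => 0, locked => 0,
                w2  => 'W-2 for #123456' },
);

# Return the W-2 only for a correct PIN; after three wrong guesses,
# lock the record so brute-forcing the PIN space is impossible.
sub fetch_w2 {
    my ($id, $pin) = @_;
    my $rec = $employee{$id} or return 'no such employee';
    return 'account locked' if $rec->{locked};
    if ($pin ne $rec->{pin}) {
        $rec->{locked} = 1 if ++$rec->{fails} >= 3;   # cap brute force
        return 'bad PIN';
    }
    $rec->{fails} = 0;
    return $rec->{w2};
}

print fetch_w2(123456, '8241'), "\n";              # correct PIN succeeds
print fetch_w2(123456, '0000'), "\n" for 1 .. 3;   # three bad guesses
print fetch_w2(123456, '8241'), "\n";              # now locked, even with the PIN
```

Guessing the next sequential ID gets an attacker nothing without the PIN, and the lockout keeps the PIN itself from being guessed by enumeration.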

I can only speculate at the reasons behind this goof, but it does
fit with a general pattern I have witnessed, of what I term a “consumer
attitude” to data and computing. This attitude is promoted by the false
promises of the software industry to liberate us from the burdensome
task of comprehension — the notion that all software can be
“intuitive” and that humans and computers can interact without the
humans holding up their end of the bargain. Holding this attitude leads
to the implicit adoption of certain maxims:

  • All that is displayed visually (representation) is the thing itself
    (underlying form) and can only be manipulated thereby, and conversely,
  • How something can be manipulated via a visual interface is the only means of manipulating it.
  • (or, things work as they apparently do, and they don't work in other ways.)
  • The visual interface must permit a user with no or cursory
    training to access any conceivable functionality (by conceivable, I
    mean conceivable by a lay person with experience in the problem domain
    and describable in plain language, for example, “move the invoice date
    to the first Monday of the month;” I except functionality that lay
    persons would not think themselves qualified to describe, such as
    certain mathematical wrangling), and therefore,
  • Any program functionality that is reasonably described in plain
    layman's terms by someone familiar with the problem domain should be
    simple to implement, by a layman who is made familiar with computing
    tools (rather than by a programmer who is made familiar with the
    problem domain).

The attitude brings with it the conceit of thinking that others will
share the attitude — an assumption that always proves fatally flawed,
for even imagining a world devoid of legitimate curious “hackers,”
there will always be black-hat “crackers” who shun the maxims of
consumer attitudes in favor of experimenting, breaking things, and
seeking alternative scenarios. The consumer attitude is one of taking
the image on the screen at face value; of seeing the shiny parts of the
system as the important ones. It is also, unfortunately, the reigning
attitude in the business world, because having a “producer” orientation
to data and computing is hard and often unpleasant — much easier to
fire up Excel or Solitaire than to write code! The consumer attitude
makes one believe that links are something clicked upon and not
manipulated, and dulls one to critical and proactive thinking about
data and computing.

I am not suggesting that every executive be intimately familiar with Web
application security before leading his company to make use of the Web,
but in the Paymaxx case, it appears that even their engineers
manifested the consumer attitude, thinking shallowly about their
application's security.  Hiring these engineers, therefore, was the big problem.  If executives have ONE imperative in their relationship to technology, it's responsible vendor selection!

I suggest therefore that executives be made aware
of the existence of the consumer attitude and the problems with it, and
be trained to evaluate solutions and providers with an eye toward
avoiding “consumerist” technology thinking. Those who design, create,
manage, and maintain our technology infrastructure must have a
“producer's” attitude toward technology, understanding what the hard
problems are, and that they are hard, and not shying from depth of
understanding. Inevitably, this will grow to include executives at most
kinds of businesses, as all forms of organization rely increasingly on
information technology.

We are in a unique historical
moment with regard to this problem of attitude. The past century did
not suffer so greatly, for every shipping concern would naturally have
been managed by men who had sailed on ships, and every bridge-building
outfit would have been managed by engineers and architects — because
ship's officers and engineers had existed as professions for
generations. There might be one generation of management-age persons
who have a solid generalist background in computer science as of today,
and these few are a tiny fraction of the number needed to fill the
ranks of executive positions at IT-reliant firms. As a result, we are
stuck with dilettante consumers making critical decisions for
productive firms. Who would hire someone to oversee a pharmaceutical
plant's operations on the basis of his qualification of taking medicine
daily? It is absurd — but every time we put a “consumerist” person in
charge of an IT-reliant operation, we do the same thing.

There was a time when people did not hold a consumer attitude towards IT; indeed, the pendulum was too far in the other direction. People were scared witless about computers, and
they were seen as the domain of “wizards.” Indeed, secretaries became “pseudo-wizards” in their own right,
memorizing WordPerfect macros, and in effect writing their own programs
for routine tasks. This, of course, did not last: while some arcane jobs will always require engineers, for the
most part people got over their computer fears with training. 

It was accepted that to use a computer required
training and knowledge, as with using an automobile or a welder's
torch.  Then, with the rise of the Gog and Magog of Windows and
Macintosh, we found ourselves in the middle of an apocalyptic war
between two indistinguishable armies — meet the new boss, same as the
old boss. What they fought over was market share, but what they agreed
upon was promising the world that computers should be easy and
effortless.  Details of interface were the ideas in dispute,
rather than the underlying metaphors, attitudes, and concepts. And it
was amidst this battle — waged over the turf of the newly discovered
mass-market for computing — that the consumer attitude was
propagandized to the masses as well as the elites.

It made sense, too, in a world where computers were machines for
three families of applications: word processing and spreadsheets,
email, and custom (internal) applications. Word processing — at least
at a casual to moderate use level — is a great candidate for WYSIWYG,
know-nothing interfaces. Spreadsheets had the beautiful characteristic
of direct analog to well-understood ledger books and pocket
calculators, combined with a spatial orientation that paralleled the
WYSIWYG ideal of the word processor. Email was a finite
domain, and it had similar metaphors to familiar tools. And custom
applications, internal to a given organization, were the special
exceptions to the know-nothing rule — staffs were trained on workflow
processes, order entry “screens,” predefined queries written for a
particular purpose. Each internal application was like a special tool
inside the firm, usable for its one purpose, and only by those who were
trained on it.

And how well this regime worked for a while! Get familiar with the clicking
and typing bits, and you've got the word processing, spreadsheet, and
email stuff down pat. Watch the training video or read the manual, and
you can use your company's order-tracking system or pull the
quarter-to-date sales figures from the Oracle database.  But what
happens as soon as Visual Basic for Applications is embedded in your
word processor?  What happens when your Excel model requires a
procedural language routine, or sources data from an external database?

If businesspeople are to operate effectively in the world of
computing, I believe that we must produce a thriving culture of rounded
generalist executives, interacting with honest vendors who make the problems
of computing as simple as possible — but no simpler! 
We must expect people to learn some of the underlying ideas behind the
abstractions; just as a freight forwarder must understand the underlying
limitations and strengths of various forms of transport, regulations,
etc., an author of a complex data report must understand the
limitations and strengths of his data sources, the concept of the
normalization of data, timeliness and validity, etc.

Future directions: why a
consumerist “know-nothing,” and a technician, “specialized tool” model
are both insufficient ways for businesspeople to approach computing.
Necessity of generalist computing knowledge. Folly of having businesses
driven by IT run by modern-computing-illiterate executives (would one
run an oil company with no chemical engineers or geologists on the
management team?). Folly of expecting interfaces to require a constant
amount of learning (zero) while they expose a geometrically expanding
range of functionality to the user. Uniqueness of the generalist
computing skill set and how it is already as important to an executive
to understand data as it is to understand accounting and bookkeeping —
even if this is not accepted today.

FIX: Suppress Perl "uninitialized value" warnings (strongish medicine for clueful users only)

If you have written any Perl of moderate complexity, and especially if your Perl of moderate complexity has included CGI and Database interactions, (or any time you have to deal with undef or NULL vs. a blank string, and you might have either of the two), you have run across warnings like this (maybe to STDERR, or maybe in your /etc/httpd/logs/error_log):

Use of uninitialized value in concatenation (.) or string at ...

Use of uninitialized value in numeric gt (>) at ...

etc.  How can you stop these error messages (warnings, really) from blocking up your logs and your STDERR?

In fact, you should be somewhat concerned at your uninitialized value warnings.  After all, they are there for a reason.  A really good piece of code ought not to generate them, at least in theory.  However, sometimes you want the benefit of use strict and -w warnings, and you have at once good reason not to want to know about uninitialized values.  What might these be?

  • You are doing string interpolation into something where undef and "" are equivalent for your purposes (most web page work)
  • You are doing some conditionals or string comparisons based upon data that come in from one crufty source or another, like CGI, and you don't want to make a separate case for undef and "".
  • Relative quick-and-dirtiness where you want at least use strict in order to prevent egregious code but you don't need to hear about the semantic difference between undefined and the empty string.

In these cases, if you are using Perl 5.6+, you are in luck.  You can carefully wrap the specific section of code that has a good reason for not caring about undef values in a block (curly braces) and write:

  {
    no warnings 'uninitialized';
    if ( CGI::param('name') ) {
      print "Hello, " . CGI::param('name');
    } else {
      print "Hi there.";
    }
  }
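
The suppression is lexically scoped, which is what makes this medicine safe in small doses: only the block you mark is exempted, and the rest of the file keeps its warnings. A quick self-contained check, using a __WARN__ handler to count what gets through:

```perl
use strict;
use warnings;

my $undef;       # deliberately uninitialized
my @caught;
local $SIG{__WARN__} = sub { push @caught, @_ };   # collect warnings

my $outside = "name: " . $undef;      # warns: uninitialized value
{
    no warnings 'uninitialized';
    my $inside = "name: " . $undef;   # silent inside this block only
}

print scalar(@caught), " warning(s) caught\n";
```

Exactly one warning fires (for the concatenation outside the block), confirming that the pragma does not leak past its enclosing braces.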