Archive for December 31st, 1969

Once Upon A Time

Wednesday, December 31st, 1969

[Update: As is often the case when lots and lots of people (say, the whole Internet)
look at a problem, I came to this conclusion independently along with a
whole bunch of other folks.  I wrote this freshman effort at
blogging prior to becoming aware of the “Eternal September” concept;
however, this trope of pre/post-1993 Internet quality is much more
concisely described by the “Eternal September” entry in Wikipedia:
http://en.wikipedia.org/wiki/Eternal_September .  My take on it
doesn't put as much blame directly on AOL users as the folk wisdom of
Eternal September does; I try to look at structural differences in the
modes of communication and speculate as to their effects on the types
of interactions that went on.]

Once upon a time, the Internet was cool (circa pre-1993). At that time
there was a lot of info with a decent signal-to-noise ratio, and a lot
of knowledgeable people. You could read the FAQs for a newsgroup on a
subject (anything from hang gliding to Germany) and get a fairly good
dose of knowledge on the topic, as well as a direct line to a bunch of
people who knew it well. Is there a way to get something as cool as
that back out of today's incarnation of the Internet (that is, the
largely Web-mediated experience)? I hold that maybe there is some hope
and that we can get the Internet back to being somewhat collaborative
and useful again.

If the Internet was so grand, what did people
do with it back then? There was the normal Internet stuff that still
goes on today and will probably go on forever: email and FTP, which
respectively served the most personal and most technical needs of their
users (sending letters and distributing software). There was real-time
chatting of various types, much as there is today. But the big
difference in the way people interacted then and now is the difference
between Usenet and the Web.

Usenet (a.k.a. netnews or
newsgroups) provided for the syndication of so-called “news” messages
grouped into subject-matter categories. In practice, these newsgroups
weren't really news per se. They were rather forums for discussion and
debate by people, often quite knowledgeable people, about defined
subject areas (of all sorts, but most commonly political/religious
debate, hobbies, and computer/technical issues). People built up their
reputations by contributing constructively to these discussions, but the
most prestigious thing you could do within the context of a newsgroup
was to help maintain its FAQ. The Frequently Asked Questions list was
kind of a “greatest hits” of the newsgroup's content. Most of the
active newsgroups had these FAQs, and they were routinely made
available in the context of the newsgroup itself as well as being
archived and distributed as ends in themselves. The maintainers of an
FAQ of course had to be able contributors who would structure and even
add novel material to the FAQ, but the document really represented a
collaborative effort of the group's active members, and was often
largely paraphrased or excerpted from newsgroup postings (with
attribution; another honor for the constructive group member).

(There
was of course no such thing as a newsgroup that had only one member who
wrote the FAQ based upon his own discussion with himself and the
questions he had answered. The idea would be preposterous; newsgroups
were collaborative centers.)

(Note that the kind of knowledge
I'm discussing here is not the easy kind, like stock quotes, movie
times, sports scores, etc., which various companies have already
handled quite well [and which, I may add, were not nearly so easily
available during the Usenet era]. I call that the “easy” kind of
information because it's easy to imagine the SQL statement that
retrieves it, e.g. select showtime, location from movie_showings where
film_id = 38372 and city_name = 'boston'. I'm more interested in domain
knowledge of a particular field, such as “what are some good books I
should read to learn about hang gliding,” or “what does it mean if
program foo version 4.21 says 'error xyz-2'?”)

Sometime after
1993 a bunch of things started happening: commercial spam began to fill
up Usenet and folks' email boxes; waves of the uninitiated began
incurring the wrath of old-timers by their breaches of netiquette,
leading to a general lowering of the signal-to-noise ratio; and, of
course, people got turned on to this whole idea of the Web. Here was a
medium in which anyone could become a publisher! If you were an expert on
a topic, or if you had a cool digital photo, or if you just happened to
know HTML, you could publish a Web site and become sort of famous! Of
course, this was a pain in the ass: posting on Usenet just meant typing
an email message, but having a web page required knowing and doing a
lot of tedious but not very interesting stuff, so you really had to
have some time to put into it.

However, the Web had pictures and
clicking with the mouse, while Usenet had boring words and typing —
and AOL users were starting to come onto the Internet. So the Web took
over.

The dominant mode for interaction on the Internet — but
more importantly, for publishing of subject-matter knowledge — moved
away from Usenet to the Web. (Of course, Usenet is still around, and
the newsgroups generally put their FAQs on the Web, but a newcomer to
the Internet might never even hear of Usenet during his Web-mediated
experience.) Rather than posting an article to a group and waiting to
read other articles posted in response, you now published a “site” and
counted how many visitors came. (Plus, you could enjoy hours on the web
without ever using your keyboard, which meant of course that its users
were even physically disconnected from the means of actually inputting
any information.)

Everyone who was an aspirant to Web fame and
had an interest in model trains, say, would create his own model trains
Web site, provide his own set of (supposedly) interesting content, and,
often, maintain his own FAQ of questions asked of him by visitors to
the site. At first, these aspirants were individuals, but soon enough
affinity groups or associations and commercial interests got involved,
doing basically the same thing. Perhaps you see where I am going with
this, gentle reader. The way in which personal knowledge was packaged
up and distributed became centered on the individual, and the
relationship changed from one of collaboration between peers to one of
publisher and reader.

A well-known lament about web publishing
is that unlike print publishing, the cost is so low as to admit
amateurs, crazies, and just plain bad authors — anyone with sufficient
motivation to brave the arcana of FTP and HTML. On the other hand, I
have just complained that the model simultaneously changed from a
peer-to-peer to a client-server relationship. Could it be that both of
these charges are true? It seems this would be the worst of both
worlds: not only are people no longer as engaged in the constructive
addition to the commons, but those that control the production and
distribution of knowledge aren't even filtered out by the requirements
of capital investment. It's like creating a legislature by taking the
worst parts of both the House and the Senate. Sadly, this describes much
of the past ten years of the Internet's history.

However, there
is some hope. Whereas previously, “anyone” could have a Web site but
precious few put in the many hours it required in practice, the promise
of Weblogs is to actually open Web publishing to “anyone.” This won't
filter out the crazies, but at least it won't artificially inflate
their importance by raising the bar just high enough to keep everyone
else out. Comment forums, content-management systems, Wikis,
trackbacks, and the like are helping to re-enable the sort of
collaboration that made the Usenet system work.

Bottom line: it rather feels like we're almost back to 1993.

Next time: future directions, pitfalls, and why blogging (alone) is not the answer.

Apple Security Update 2003-03-24 Breaks Many Things?

Wednesday, December 31st, 1969

For Mac OS X users who installed the Software Update with a security component on 24 March 2003: if you use Apache, Sendmail, or the Perl PostgreSQL driver DBD::Pg, some things might now be broken.


1) Regarding Sendmail:

See http://www.macosxhints.com/article.php?story=20030306145838840

(relevant error message: Sendmail might complain in /var/log/mail.log of “Deferred: Connection refused by localhost”)

(summary: Apple makes sendmail look at /etc/mail/submit.cf instead of sendmail.cf)


2) Regarding Apache:

See http://ganter.dyndns.org/misc/apple_ssl.php and http://apple.slashdot.org/comments.pl?sid=58276&threshold=1&commentsort=0&tid=172&mode=thread&cid=5640470

(relevant error message: Apache segfaults out on some SSL requests with the crash message [try /var/log/system.log]

Exception: EXC_BAD_ACCESS (0x0001) Codes: KERN_INVALID_ADDRESS (0x0001)

… and specifically complains that the error is in ssl_var_lookup_ssl)

(summary: Apple supplies a faulty libssl.so for Apache; a working version is provided)

(fix: see the dyndns link above, or try http://cyber.law.harvard.edu/blogs/gems/rlucas/libssl.so, a courtesy-only mirror offered with NO WARRANTY)
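For what it's worth, a hedged sketch of swapping in the replacement module follows. The /usr/libexec/httpd/ path is an assumption about where Apple's Apache keeps its modules on 10.2, so verify it before copying anything:

# The module path is an assumption; confirm it with `apxs -q LIBEXECDIR` and keep a backup.
sudo cp /usr/libexec/httpd/libssl.so /usr/libexec/httpd/libssl.so.broken
sudo cp ./libssl.so /usr/libexec/httpd/libssl.so   # the known-good copy you downloaded
sudo apachectl graceful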


3) Regarding DBD::Pg

See http://gborg.postgresql.org/pipermail/dbdpg-general/2003-March/000039.html

(relevant error message:

dyld: perl Undefined symbols:
_BIO_free
_BIO_new_mem_buf
_DH_check
_DH_generate_parameters
_DH_size
_ERR_get_error
_ERR_reason_error_string
_EVP_PKEY_free
_PEM_read_DHparams
_PEM_read_PrivateKey
_PEM_read_X509
_PEM_read_bio_DHparams
_SSL_CTX_ctrl
_SSL_CTX_free
_SSL_CTX_load_verify_locations
_SSL_CTX_new
_SSL_CTX_set_tmp_dh_callback
_SSL_CTX_set_verify
_SSL_CTX_set_verify_depth
_SSL_connect
_SSL_free

… and more, whenever a script uses DBD::Pg.)

(summary: Perl scripts that use DBD::Pg now crash out. This might be because PostgreSQL was compiled before the security update; does anyone know otherwise?)


I am going to install the July Security Update to see if it fixes things at all.


UPDATE: The July security update does not fix it. However, recompiling PostgreSQL fixes most of the errors (see the 25 July 2003 entry for a persistent error with UTF-8 support).
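For reference, a minimal sketch of the recompile-and-reinstall sequence; the source directory, version, and install paths here are assumptions, so match them to your own original build:

cd /usr/local/src/postgresql-7.3.2            # assumed source location and version; use your own
./configure                                    # re-use whatever configure options you built with originally
make
sudo make install
sudo ranlib /usr/local/pgsql/lib/libpq.a       # as noted in the DBD::Pg entry below
cd /path/to/DBD-Pg-1.21                        # then rebuild the Perl driver against the new libpq
perl Makefile.PL && make && sudo make install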

Solving a Real Problem

Wednesday, December 31st, 1969

OK, I have determined what blogs are for. They give an easy way to publish aggregated technical fix information in a search-engine-friendly format.

Aggregated: quite often, fixing a specific technical problem (even a common one!) requires chasing down a number of false leads across mailing list archives, tech docs, knowledge bases, etc. Putting the whole solution, once found, into a single blog entry (including links / attribution to the original solvers) makes sense.

Search-engine-friendly: mailing list archives are good, but only if they get web-published and Googled. Realistically, if it's not in Google, it doesn't exist; that is especially true of technical problems that could have any number of origins (imagine compiling an XML-to-Excel Perl module on Mac OS X: is your problem with GCC, libxml, Excel, Perl, or Mac OS X? Which mailing lists do you search first?). Additionally, most computer problems have a characteristic error message which appears with some limited amount of variation. That message is easy to post on a blog.

I originally believed that the answer to aggregating technical fix information was a search-engine feeder backed by an RDBMS, but it is clear that any schema will be too inflexible for the variety of problems. Better to post error messages verbatim, be as explicit about keywords as practicable (if it segfaults, include the words “crash”, “segmentation fault”, and “segfault” as an aid to searching), and let Google handle the hard stuff far better than a full-text search through an RDBMS could hope to.

Why do this? Well, this is a case of the comedy of the commons: figuring out a solution like this on one's own, or by searching through mailing lists piecemeal, can consume hours or days of productive time. However, posting a solution once found is trivial, taking mere minutes. If even one other person posts a solution that I find which saves me 3-4 hours, it's worth all the time I'll ever spend posting such things.

FIX: DBD::Pg _is_utf8_string bug with Perl 5.6.0 on Mac OS X 10.2.2

Wednesday, December 31st, 1969

After having fixed the DBD::Pg bug resulting from the faulty Apple Security Update, which necessitated recompiling Postgres and running sudo ranlib /usr/local/pgsql/lib/libpq.a, I discovered another bug.

My PostgreSQL was compiled with UTF-8 support, and my DBD::Pg was rebuilt/reinstalled after the Postgres recompile. However, my scripts were still bombing out with:


dyld: perl Undefined symbols:
_is_utf8_string
Trace/BPT trap

This posting speculates about the solution, which happily works:
http://www.geocrawler.com/mail/msg.php3?msg_id=10411736&list=105

Specifically, commenting out the code between the ifdefs in the section that refers to is_utf8_string (circa line 1482 of dbdimp.c), then running make clean / make / make install, allowed DBD::Pg 1.21 to install OK and stopped Perl from crashing out with the above error.
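For reference, a sketch of the edit-and-rebuild cycle; the build directory below is a hypothetical CPAN path, and the exact line number will drift between releases, so find the is_utf8_string block by searching rather than by line number:

cd ~/.cpan/build/DBD-Pg-1.21     # hypothetical path; use wherever you unpacked DBD::Pg 1.21
# (edit dbdimp.c here: comment out the #ifdef'd block that references is_utf8_string)
make clean
perl Makefile.PL
make
sudo make install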

Now, to see if that completely breaks something else…

FIX: Apache 1.3.2x compiling with mod_ssl on Red Hat 9 / shrike bombs out with krb5.h error

Wednesday, December 31st, 1969

If you try to compile Apache (1.3.28) with mod_ssl, following the plain vanilla directions in the relevant sources, under Red Hat 9 (and using RH9's installation of openssl), you are likely to get an ugly error like this:

gcc -c -I../../os/unix -I../../include -DLINUX=22 -I/usr/include/gdbm -DMOD_SSL=208115 -DUSE_HSREGEX -DEAPI -fpic -DSHARED_CORE `../../apaci` -DSHARED_MODULE -DSSL_COMPAT -DSSL_USE_SDBM -DSSL_ENGINE -DMOD_SSL_VERSION="2.8.15" mod_ssl.c && mv mod_ssl.o mod_ssl.lo

In file included from /usr/include/openssl/ssl.h:179,
                 from mod_ssl.h:116,
                 from mod_ssl.c:65:
/usr/include/openssl/kssl.h:72:18: krb5.h: No such file or directory
In file included from /usr/include/openssl/ssl.h:179,
                 from mod_ssl.h:116,
                 from mod_ssl.c:65:
/usr/include/openssl/kssl.h:132: parse error before "krb5_enctype"
/usr/include/openssl/kssl.h:134: parse error before "FAR"
/usr/include/openssl/kssl.h:135: parse error before '}' token
/usr/include/openssl/kssl.h:147: parse error before "kssl_ctx_setstring"
/usr/include/openssl/kssl.h:147: parse error before '*' token

...

Apparently, this is due to the compiler not finding the Kerberos headers referenced by the installed openssl package.  To fix it: 1. run make clean in your Apache source directory; 2. place the following lines into the configure script in the apache-1.3.xx directory, around line 96 (the exact location is not critical):

 

if pkg-config openssl; then
    CFLAGS="$CFLAGS `pkg-config --cflags openssl`"
    LDFLAGS="$LDFLAGS `pkg-config --libs-only-L openssl`"
fi

 

Many thanks to Matthias Saou for this solution, found at:

https://listman.redhat.com/archives/shrike-list/2003-April/msg00160.html
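If you want to see what the snippet will actually add before rebuilding, you can query pkg-config directly; on a stock Red Hat 9 box I would expect a Kerberos include path (something like -I/usr/kerberos/include) to show up in the --cflags output, though that exact path is an assumption:

pkg-config --cflags openssl
pkg-config --libs-only-L openssl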

Downgrading to Apache 1.3 from Apache 2 under Red Hat 9

Wednesday, December 31st, 1969

Apache 2.0 can be a real cast-iron bitch.  It's got this cool
support for threading that you think will make your life easier, but it
turns out to have all sorts of little API differences that break your
legacy apps in ways that are horrifyingly difficult to discern.
This might be the apps' fault or Apache's fault, but either way it
makes your life hard if you are used to things working smoothly under
Apache 1.3.x and then get jolted into the cruel world of 2.x.

 

Red Hat has put out Apache 2.0 since at least Red Hat 8.0.  Red
Hat 9 comes with Apache 2 as well.  However, Red Hat knows about
the problems as well as anybody: the non-gratis Red Hat Enterprise
Linux distros come with Apache 1.3x!  [UPDATE: Enterprise
Linux 2.x came with 1.3.x; Red Hat has made
the questionable choice of putting Apache 2 in Enterprise Linux
3.0 and removing things like the venerable Pine…] 
It's clear that if you actually want stability, you should use 1.3
until the rest of the world catches up with Apache 2.0.

(Other people know this too: see http://linux.derkeiler.com/Mailing-Lists/RedHat/2003-07/0726.html and http://lists.freshrpms.net/pipermail/rpm-list/2003-May/004682.html )

If you want to downgrade your Red Hat 9 Apache version, you can
either compile from source, which is fairly straightforward once you find
the fixes I mention below, or use the RPM-packaged versions.  If you want
mod_perl and mod_ssl, and if you are not very smart, like me, you really
are better off with the RPMs.

 

Unfortunately, the OpenSSL 0.9.7 that comes with Red Hat 9 will
prevent you from installing the mod_ssl 2.8 RPM to go with Apache 1.3.
So, here's how to install Apache 1.3 with mod_perl and mod_ssl (a
consolidated command sketch follows the numbered steps):

 

1. Erase the Apache 2.0 RPM from the system.  Nuke the dependent packages as well (mod_perl, mod_ssl, php, etc.).

2. Get:

apache-1.3.27-2.i386.rpm

mod_perl-1.26-5.i386.rpm

mod_ssl-2.8.12-2.i386.rpm

openssl-0.9.6b-32.7.i386.rpm

 Warning: these packages are
end-of-lifed and may present security hazards (read: your box could be
owned if you do this!).  I am no longer running a box with these
packages and I suggest you do not either!  I currently recommend
apachetoolbox (apachetoolbox.com) for a quick and relatively painless
recompile, rather than relying upon these old dusty rpms.

3. Install apache and mod_perl.  Ensure that it works fine (sanity check).

4. Back up /usr/share/ssl/* to e.g. /usr/share/ssl9.7a/

5. Back up /usr/bin/openssl to e.g. /usr/bin/openssl9.7a

6. Install openssl with a suitable command line:

rpm -ivh openssl-0.9.6b-32.7.i386.rpm --excludedocs --oldpackage --force

7. Now back up /usr/share/ssl/* to /usr/share/ssl9.6b/ and restore ssl9.7a/

8. Back up /usr/bin/openssl to openssl9.6b and restore openssl9.7a

9. You should now have both openssl 0.9.6b and 0.9.7a installed on your system.  You can verify this with rpm -q openssl

10. Now install the mod_ssl RPM.
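Here is the consolidated command sketch promised above. It simply restates steps 1 through 10 as shell commands; the package filenames and backup locations are the ones listed in the steps, but treat it as illustrative rather than a tested script:

rpm -e httpd mod_perl mod_ssl php                                         # step 1 (actual package names may differ; check rpm -qa)
rpm -ivh apache-1.3.27-2.i386.rpm mod_perl-1.26-5.i386.rpm                # steps 2-3
cp -a /usr/share/ssl /usr/share/ssl9.7a                                   # step 4
cp -p /usr/bin/openssl /usr/bin/openssl9.7a                               # step 5
rpm -ivh openssl-0.9.6b-32.7.i386.rpm --excludedocs --oldpackage --force  # step 6
cp -a /usr/share/ssl /usr/share/ssl9.6b                                   # step 7: back up the 0.9.6b files...
cp -a /usr/share/ssl9.7a/. /usr/share/ssl/                                # ...and restore the 0.9.7a ones
cp -p /usr/bin/openssl /usr/bin/openssl9.6b                               # step 8
cp -p /usr/bin/openssl9.7a /usr/bin/openssl
rpm -q openssl                                                            # step 9: expect both 0.9.6b and 0.9.7a
rpm -ivh mod_ssl-2.8.12-2.i386.rpm                                        # step 10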

 

Make sure you've used lokkit (or arranged it manually) to open up both
port 80 and port 443, or else you'll drive yourself crazy for 20 minutes,
like I did, wondering why httpd is running but not answering.

"Can't coerce GLOB to string in entersub" means "File not found"

Wednesday, December 31st, 1969

For users of the Perl modules XML::LibXML and XML::LibXSLT, you will save yourself much puzzlement if you understand that “Can't coerce GLOB to string in entersub” really means “file not found.”

NOTE that the file which is not found might be your XML, your XSLT, or the schema / DTD for these things! Maybe some -e tests are in order (but don't forget that filenames hidden in your XML pointing to bad DTD paths, for example, will throw the same cryptic error).
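As a quick sanity check before digging deeper, it can be worth verifying every path involved up front; the filenames below are placeholders for whatever your script actually passes to XML::LibXML and XML::LibXSLT:

# Placeholder filenames -- substitute the real XML, XSLT, and DTD/schema paths
# your script uses (including any DTD referenced from inside the XML itself).
for f in input.xml stylesheet.xsl document.dtd; do
  [ -r "$f" ] || echo "missing or unreadable: $f"
done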

See also http://maclux-rz.uibk.ac.at/~maillists/axkit-users/msg05794.shtml

[FIX] DBD::mysql installation on Red Hat 9 fails with "Unsuccessful Stat" messages.

Wednesday, December 31st, 1969

If you go to install the Perl module DBD::mysql on Red Hat 9 with MySQL 3.23 (and probably other versions as well), two gotchas might appear.  First, if the MySQL bin directory is not in your path, the build won't be able to pull the compile options automatically.  Make sure that when Makefile.PL runs (either because you're running it yourself or CPAN is), it can find and run mysql_config.

 

The second gotcha is that Red Hat 9 ships with LANG=en_US.UTF-8 set by default in the shell environment.  This will cause your generated Makefile to have some oddly malformed lines around line 89, and will cause a blizzard of these complaints:

Unsuccessful stat on filename containing newline at /usr/lib/perl5/5.8.0/ExtUtils/Liblist/Kid.pm line 97.

 

The solution, according to the kind David Ross at Linuxquestions.org, is to

export LANG=C

before running Makefile.PL. Many thanks.

 

http://www.linuxquestions.org/questions/history/62975
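Putting both gotchas together, a clean build might look like the sketch below; the MySQL bin directory shown is an assumption, so point it at wherever mysql_config actually lives on your box:

export PATH="$PATH:/usr/local/mysql/bin"   # assumed location of mysql_config; adjust to yours
export LANG=C                              # avoid the UTF-8 Makefile breakage
perl Makefile.PL
make && make test
sudo make install                          # or run make install as root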

 

 

FIX: SSH or telnet sessions timeout and drop connection on DSL or Cable modem behind NAT router

Wednesday, December 31st, 1969

I use SSH for everything from tunnelling outbound mail (in order to avoid port 25 blocks on the freenet providers, such as www.personaltelco.net) to simple terminal sessions.  Also, almost all of the time I am hooked up via a DSL or cable modem with a router in front of it playing NAT tricks to get me to the outside network.  After about an hour (sometimes less) the SSH session just hangs; from a Mac OS X terminal session it's simply unresponsive and needs to be killed, whereas PuTTY on Windows, once it realizes the connection is no good, pops up a “Network error: Software caused connection abort” message.  The problem seemed to be worse with DSL from Verizon and Qwest, and very mild with AT&T/Comcast cable in Cambridge, 02138.  (Advice: in Cambridge, you can't go wrong with the Comcast digital cable.  I was getting speeds of, I seem to remember, almost a megabyte per second down and could pull down entire ISOs in minutes; a Debian net install on an old P233 was disk-I/O limited rather than network limited, as far as I could tell.)

Happily, the good people at DSL Reports (www.dslreports.com) have put together an FAQ on this subject including some specific configuration options and links to more info:

http://www.dslreports.com/faq/7792

More on this as I determine if it actually works.
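For what it's worth, the client-side keepalive approach boils down to something like the sketch below. ServerAliveInterval only exists in newer OpenSSH clients, the 120-second value is an arbitrary assumption, and (per the caveat in the later update) keepalives are not always the right fix, so treat this as a sketch rather than the FAQ's exact advice:

# Ask the ssh client to send an application-level keepalive every two minutes.
cat >> ~/.ssh/config <<'EOF'
Host *
    ServerAliveInterval 120
EOF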

Update: So far, so good; a thunderstorm passing through caused a brief power cycle and that definitely reset the connection, but it seemed to hold otherwise, for example during lunch.  The real test will be leaving terminal sessions overnight.

Update: While looking for info on a superficially related problem, I came across this slashdot thread:

http://slashdot.org/askslashdot/00/06/24/0622236.shtml

This may also provide some assistance to seekers of info on this topic.  Importantly, however, you should also examine the lengthy parenthetical in http://blog.rlucas.net/ancient/info-what-happens-to-ssh-every-21115/ to determine whether this is really your problem; the link there to a TCP/IP theory page should help you as well.  This caveat is necessary because there are really two opposite problems that both manifest as “dropped SSH terminal sessions”: one, the NAT table on a cable/DSL router could be timing out (which argues for sending keepalive packets more frequently), or two, your connection could be flaking out briefly but coming back up fairly quickly (which argues against sending frequent keepalives).

FIX: "Undefined subroutine CGI::dump" crashes a formerly working script.

Wednesday, December 31st, 1969

Possible scenario: you wrote an ancient script using the CGI.pm module by Lincoln Stein, and it ran fine on your old Red Hat 6.2 box with Perl 5.00503 and an ancient version of CGI.pm.  However, after reinstalling your script on a newer box with Perl 5.6, or after upgrading your Perl and/or CGI.pm, your script is broken and says:

Undefined subroutine CGI::dump

Answer: in version 2.50 of CGI.pm, CGI::dump was changed to CGI::Dump. Try:

perl -pi -e 's/CGI::dump/CGI::Dump/' yourscript.pl