
INFO: Hawking PS12U Printserver CUPS URI for Linux printing

I have a Hawking Printserver, model number PS12U.  I had already set its IP address using the Windows software (it should be noted that you can ARP the printserver from Linux if need be; google for more info).  However, in order to set it up as a printer on my Linux machine, I needed the appropriate URI to feed to lpadmin.  I tried a number of things like ipp://, etc., but finally gave up and used “printconf.”  The proper URI / URL to use, it appears, for addressing the Hawking PS12U is:




Where the IP address in the middle is naturally the one you've set for the Printserver and the “lp1” to “lp3” corresponds to the physical port on the PS12U to which you've connected the printer.
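
As a sketch of the lpadmin invocation: the lpd:// scheme, the IP address, and the queue name below are all assumptions for illustration (hardware print servers of this era typically speak LPD, but go with whatever printconf generated for you):

```shell
# Hypothetical example: register the printer on physical port 1 (lp1)
# of the print server as a CUPS queue named "hawking".
# 192.168.1.50 stands in for whatever IP you gave the PS12U.
lpadmin -p hawking -E -v lpd://192.168.1.50/lp1
```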

I didn't say it was groundbreaking or an awesome fix, just info that I hadn't been able to find easily.

INFO: Apache SSL error: You have to perform a *full* server restart when you added or removed a certificate …

Have you seen this spurious error:

Ops, no RSA or DSA server certificate found?!
You have to perform a *full* server restart when you added or removed a certificate and/or key file

in your SSL error log (and of course your Apache didn't successfully start: ps aux | grep '[h]ttpd' | wc -l is zero; the [h] trick keeps grep from matching its own process…) when you were using

apachectl restart

…or some utility that in turn used that method for restarting Apache?  Try substituting

apachectl stop && apachectl start


FIX: Evil MSN Search spyware behavior for DNS errors in MSIE

In Microsoft Internet Explorer >= 5, a DNS error (asking for a hostname that doesn't exist) causes IE to pull an evil stunt and feed the requested URL to http://search.msn.com/dnserror.aspx , where it is used to search MSN and logged for who knows what nefarious purpose.  Rather than being opt-in behavior, this is opt-out, and rather than making it clear with a choice in options like “Automatically search when a site is not found”, opting out is a matter of making this choice:

Tools:Internet Options:Advanced:Search from the Address bar:Do not search from the Address bar.


INFO: What happens to ssh every 2:11:15?

I was getting a weird artifact in my logs.  A daemon process that was in charge of keeping an ssh connection open to a remote host was restarting ssh every two hours eleven minutes:

myuser 15208 0.0 0.0 0 0 Z 02:01 0:00 [ssh <defunct>]
myuser 15511 0.0 0.0 0 0 Z 04:12 0:00 [ssh <defunct>]
myuser 15548 0.0 0.0 0 0 Z 06:24 0:00 [ssh <defunct>]
myuser 15584 0.0 0.0 0 0 Z 08:35 0:00 [ssh <defunct>]
myuser 15619 0.0 0.3 3408 1704 S 10:46 0:00 ssh -T myhost ...

What the heck is going on? I was running this from behind a DSL modem, and I had experienced some intermittent problems with it before. Was it the modem? Googling on the model # indicated nothing similar reported by others. Was it my ISP or Telco? Phone calls to them indicated that 2 hours was the median time between dropped connections for some old modems, but not mine and not my circuit type. Hmm. Many people pointed to the TCP KeepAlive default of 7200 seconds — two hours — but my problem had a period of over two hours. Almost exactly, consistently, two hours eleven minutes.

As it turns out, the TCP KeepAlive time of 7200 seconds plus the default KeepAlive probe interval (75) times the default probe count (9) add up to 2:11:15.
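
You can check the arithmetic yourself:

```shell
# Default Linux TCP keepalive parameters (see /proc/sys/net/ipv4/):
# 7200 s idle time, then up to 9 probes at 75 s intervals.
ktime=7200; intvl=75; probes=9
total=$(( ktime + intvl * probes ))
printf '%d seconds = %d:%02d:%02d\n' "$total" \
  $(( total / 3600 )) $(( total % 3600 / 60 )) $(( total % 60 ))
# prints: 7875 seconds = 2:11:15
```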

If you want to change this for one reason or another, try:

echo "30" > /proc/sys/net/ipv4/tcp_keepalive_time

… or likewise (remember that you'll still have 11:15 worth of probe * count; lower those too if you need to know sooner). Better yet, read http://av.stanford.edu/books/tcpip/tcp_keep.htm for some actual theory on the subject.
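
Concretely, all three knobs live side by side under /proc; a sketch (the values are illustrative, you'll need root, and this writes live kernel state):

```shell
# Detect a dead peer after 30 + 5*3 = 45 seconds instead of 2:11:15.
echo "30" > /proc/sys/net/ipv4/tcp_keepalive_time
echo "5"  > /proc/sys/net/ipv4/tcp_keepalive_intvl
echo "3"  > /proc/sys/net/ipv4/tcp_keepalive_probes
```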

One good use for this information is if you want to keep a persistent connection open between two machines using, e.g., Net::SSH::sshopen2 for a bidirectional remote connection to a process executed on a remote machine, but you're on a kind of flaky connection that can cause the connection to get dropped often but briefly, and the nature of the stuff you're doing is such that you want it to re-connect and try again rather than obliviously sit through the blip.

(The reason I ramble so lengthily on what particularly one might use this for is that you do NOT want to follow these directions if you're having a more common “momentarily flaky” connection sequela, such as terminal sessions that you wish to keep open despite a moment of flakiness.  In that case, you do NOT want to enable short TCP keepalives, since they are really “detect deads,” and they will increase the likelihood that a blip in the connection will kill your terminal session.  In that case, you pretty much want to do the OPPOSITE of this, with two exceptions: 1. if you are behind a NAT router and your connection isn't actually flaky, you might really be seeing a timeout of the NAT table, not connection flakeage, and so you DO want to put a keepalive in shorter than the NAT table timeout [it's all a bit much, isn't it?]; 2. you are probably best off just using “screen” and doing a screen -r to reconnect to an old screen when you get reconnected [screen is awesome for all sorts of reasons, and with screen, if you can divorce yourself from the graphical burden, you've essentially got a total multitasking virtual desktop with persistent state as long as you've got a vt100 terminal].)

What I would recommend is the following:

1. Set up your local ssh_config to make sure you're using KeepAlive yes.

2. Set up your local tcp settings to have a short keepalive time and probe interval/count.  (Some kernels apparently don't behave with less than 90 seconds as the keepalive time but I have had success with much lower numbers.)

3. Set up your remote sshd_config to use ClientAliveInterval and ClientAliveCountMax with reasonable values.  This is sort of a reverse, in-band version of what the TCP keepalive is doing on the local machine: the ssh daemon will send an encrypted signal across the connection every ClientAliveInterval seconds and will hang up the connection if it misses CountMax of them in a row.  This makes sure that the process you run on the remote machine gets hung up OK.

4. Make sure that your sshopen2 call and the sending and receiving of things along it recognizes when the SSH connection gets closed out and deals with it, such as by an eval loop and a reconnection in the event of $@ .
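
In config-file terms, steps 1 and 3 might look like the following (host name and values are illustrative; KeepAlive was the client option's name in OpenSSH of this vintage, and newer versions call it TCPKeepAlive):

```
# ~/.ssh/config on the local machine (step 1)
Host myhost
    KeepAlive yes

# sshd_config on the remote machine (step 3): hang up after
# 3 missed in-band probes at 15-second intervals
ClientAliveInterval 15
ClientAliveCountMax 3
```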


FIX: Suppress Perl "uninitialized value" warnings (strongish medicine for clueful users only)

If you have written any Perl of moderate complexity, and especially if your Perl of moderate complexity has included CGI and Database interactions, (or any time you have to deal with undef or NULL vs. a blank string, and you might have either of the two), you have run across warnings like this (maybe to STDERR, or maybe in your /etc/httpd/logs/error_log):

Use of uninitialized value in concatenation (.) or string at ...

Use of uninitialized value in numeric gt (>) at ...

etc.  How can you stop these error messages (warnings, really) from blocking up your logs and your STDERR?

In fact, you should be somewhat concerned about your uninitialized value warnings.  After all, they are there for a reason.  A really good piece of code ought not to generate them, at least in theory.  However, sometimes you want the benefit of use strict and -w warnings, and at the same time have good reason not to want to know about uninitialized values.  What might those reasons be?

  • You are doing string interpolation into something where undef and "" are equivalent for your purposes (most web page work)
  • You are doing some conditionals or string comparisons based upon data that come in from one crufty source or another, like CGI, and you don't want to make a separate case for undef and "".
  • Relative quick-and-dirtiness where you want at least use strict in order to prevent egregious code but you don't need to hear about the semantic difference between undefined and the empty string.

In these cases, if you are using Perl 5.6+, you are in luck.  You can carefully wrap the specific section of code that has a good reason for not caring about undef values in a block (curly braces) and write:

  {
    no warnings 'uninitialized';
    if ( CGI::param('name') ) {
      print "Hello, " . CGI::param('name');
    } else {
      print "Hi there.";
    }
  }

FIX: GIMP can't open PostScript (PS, EPS, PDF) files under Windows

The GIMP (GNU Image Manipulation Program) is a neat tool for people whose needs are too casual or cheap for Photoshop, but too much for various paintbrush-type tools.

However, if you install the GIMP under Windows 2000, as I did, EPS or PS PostScript files will not open properly, instead barfing with:

Error opening file: C:\temp\myfile.eps

PS: Can't interprete file [sic]

You'll need to do the following to make it work:

1. Install GhostScript for Windows.


2. Install the GIMP.


3. Set your environment variables to include:



Typical paths in which to find your GS stuff after a default install might be C:\gs\gs8.11\bin\gswin32c.exe and C:\gs\gs8.11\lib

(One way to get your environment set in Windows is Start: Settings:Control Panel:System:Advanced:Environment Variables.  In non-NT versions you might need to change AUTOEXEC.BAT to include SET directives)

4. Restart the GIMP and you should be up and running.
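
For step 3, the GIMP's PostScript plugin and Ghostscript conventionally look at GS_PROG (the path to the Ghostscript executable) and GS_LIB (its library directory); treat the names and paths below as assumptions to verify against your own install:

```
SET GS_PROG=C:\gs\gs8.11\bin\gswin32c.exe
SET GS_LIB=C:\gs\gs8.11\lib
```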

FIX: Can't locate object method "select_val" via package "DBI::st" under Class::DBI 0.95

[warning: see updates below]

Between Class::DBI 0.94 and 0.95, something changed causing my classes that override db_Main() to set the db connection (instead of, e.g. using set_db) to stop working.  Specifically, when their sequence functions were invoked, I got an error of:


Can't locate object method “select_val” via package “DBI::st” (perhaps you forgot to load “DBI::st”?) at /usr/lib/perl5/site_perl/5.6.1/Class/DBI.pm line …


I was able to replicate this on Mac OS X 10.2 with Perl 5.6.0 and Red Hat 9 with 5.6.1 (don't ask about the latter version magic…).


If you get select_val errors with Class::DBI 0.95, here are two workarounds:



I am not sure why this is (comments are welcome) and have submitted a bug to the developers as CPAN #5522.

Update: Thanks to Tony Bowden, maintainer of Class::DBI, for his reply:


Full context and any attached attachments can be found at:
<URL: http://rt.cpan.org/NoAuth/Bug.html?id=5522 >

On Mon, Mar 01, 2004 at 06:42:38PM -0500, Guest via RT wrote:
> In Class::DBI 0.95, setting up the DB connection by overriding db_Main
> breaks Ima::DBI->select_val and the methods that rely on it (like sequence
> and count_all)

You need to call Ima::DBI->connect rather than DBI->connect in your
overriden db_Main.



Still not certain, though, why it is that it breaks in 0.95 and not 0.94.

Update: Thanks again to Tony, who writes:

The reason this broke in going from 0.94 to 0.95, btw, is that the
select_val stuff was only added to Ima::DBI recently, so Class::DBI .94
didn't use these, but instead rolled its own long winded versions.

0.95 uses the new methods in Ima::DBI and is a lot better off for it! 🙂


Update 23 April 2004: Things are all wacky now.  You should be careful and should probably NOT do any of the above.  See the 06 April 2004 exchanges on the CDBI mailing list.  If you do what is described above you stand to get reconnections that will mess up things like auto_increments in MySQL.  At present the issue of how to deal with db_Main() seems unresolved.

[FIX] XFree86 stuck at 640 x 480 under Linux with Dell Dimension or Optiplex

With a fresh install of Red Hat 9 on a Dell Dimension 4600, the only video mode that would work with XFree86 was 640 x 480, which is ludicrously big on a decent-sized monitor.  Changing the config didn't do anything, even though the config was well within my monitor's limits.

The solution was to go into the BIOS setup and change the Integrated Devices (LegacySelect Options) / Onboard Video Buffer setting from 1 MB to 8 MB.  I'm not sure what the tradeoff with other aspects of the system is, but X nicely starts up at 1280 x 1024.  Apparently, this is the solution for other Dell models as well, including the Optiplex GX260; mine had Dell BIOS Revision A08.  Also, it seems to be the case that the problem is general to XFree86, although it manifested for me under Red Hat 9.

Thanks to Erick Tonnel at Dell, who kindly provided the solution here:



[SANITY CHECK] Apache 2 hangs with lots of STDERR output from CGI

You are not crazy. It is not an infinite recursion in your logic. Your code doesn't take that long to execute.

If you output to STDERR (in Perl, this means Carp or warn or the venerable print STDERR, among others) from a CGI script under Apache 2.0, and you end up dumping more than approximately 4k (note that if you are using “warn” or “Carp” you may have extra stuff on there, so that you only output 3k or so but the extras bring it up to 4k), Apache 2 will hang forever (as of today, 30 March 2004).

See this bug report: http://nagoya.apache.org/bugzilla/show_bug.cgi?id=22030

There are some patches proposed in the link above on the Apache project bugzilla, but they are not production releases.

In case you were wondering, http://blogs.law.harvard.edu/rlucas/2003/08/26#a13 shows some helpful hints on how you can back down to version 1.3.

Question to all: what are folks' recommendations for an Apache 1.3 packaged install?  I would tend to prefer statically linked with SSL and mod_perl, but the only one I've seen folks using is from n0i.net, which isn't entirely satisfying because I don't speak Romanian.

Update: Using Apachetoolbox (see Google), you can fairly simply compile Apache 1.3 + mod_ssl + mod_perl + php and whatever 3rd-party modules you like.  This makes for a fine alternative to RPM versioning hell, or even to traipsing around your src tree typing make.  Be sure that if you do this, you specify mod_perl 1.29 rather than 1.99, if you compile mod_perl in.

Concentrating in History and Science at Harvard

Having had a not entirely satisfactory experience in the History and Science department at Harvard, I have been making notes ever since on how things might have been improved, and on things I wish I'd known before starting out.

Prospective concentrators in the department of History of Science at Harvard may want to check my notes, linked below:


These notes are not intended to be universally applicable — caveat lector — but would help someone like me who was considering, or struggling with, a concentration in History and Science.