rlucas.net: The Next Generation


FIX: "Undefined subroutine CGI::dump" crashes a formerly working script.

Possible scenario: you wrote an ancient script using the CGI.pm module by Lincoln Stein, and it ran fine on your old RedHat 6.2 box with Perl 5.00503 and an ancient version of CGI.pm.  However, after moving your script to a newer box with Perl 5.6, or after upgrading your perl and/or CGI.pm, your script breaks with:

Undefined subroutine CGI::dump

Answer: in version 2.50 of CGI.pm, CGI::dump was changed to CGI::Dump. Try:

perl -pi -e 's/CGI::dump/CGI::Dump/g' yourscript.pl
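(If you want a safety net, Perl's -i flag takes a backup extension: perl -pi.bak -e 's/CGI::dump/CGI::Dump/g' yourscript.pl leaves the original untouched in yourscript.pl.bak.)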


Class::MethodMaker v2 dies with cryptic "Unknown error" in compilation with bad arguments to use / require

If you use Class::MethodMaker and have a subtle error in your

use Class::MethodMaker [ whatever…];

line, such as not quoting a bareword, you can end up with this error:

Unknown error
Compilation failed in require.
BEGIN failed--compilation aborted.

If this happens, scrutinize your “use” lines and especially your C::MM line.

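As a minimal, hypothetical illustration (the exact diagnostics vary with your Perl and Class::MethodMaker versions), the difference can be as small as a pair of quotes:

# Broken: 'counter' is an unquoted bareword inside the spec, which can
# surface as the unhelpful "Unknown error" at compile time.
# use Class::MethodMaker [ scalar => [ counter ] ];

# Fixed: quote the name (or use qw//).
use Class::MethodMaker [ scalar => [ qw/ counter / ] ];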

[Gedankenexperiment] I have released EULAVirus 1.0 into the wild.

“I have created a computer virus and released it over the Internet.  It is named ‘EULAVirus’, version 1.0.

“The virus takes the following actions, besides replicating
itself.  It seeds a pseudorandom number generator with a number
based upon the machine's unique characteristics, such that the PRNG
sequence will always be the same for the same machine.  Then,
during a dialog box, wizard, browser window, or other interactive
session (“dialog”) when certain key words and / or pixel combinations
are detected, it takes a “fingerprint” of the dialog based upon certain
characteristics, and uses the PRNG to determine whether to act on that
dialog.  The manner in which this is done ensures that for a given
dialog on a given machine, the same action will always be taken. 
If a dialog box is acted upon, the virus will cause all text to be
scrolled through, and an approval button to be “clicked” (it does so by
interacting with the operating system at a lower level).  This all
takes place nearly instantly, so that any human watching the computer
perform this will be unable to perceive what has occurred, beyond
perhaps a brief flash of the dialog on a slow computer.

“I have deleted all traces of the virus and any of its documentation
from all computers I control, but not before propagating it out to the
Internet.  It is spreading rapidly but it is exceedingly stealthy,
and it is engineered to avoid detection at all costs.  In order to
prevent its detection, I will not say which operating systems it runs
on, nor will I identify specific vectors of transmission.”

Now: can a EULA (end-user license agreement) ever again be considered legally binding?

FIX: SSH or telnet sessions time out and drop connection on DSL or cable modem behind NAT router

I use SSH for everything from tunnelling outbound mail (to avoid the port 25 blocks on freenet providers such as www.personaltelco.net) to simple terminal sessions.  Also, almost all of the time I am hooked up via a DSL or cable modem with a router in front of it playing NAT tricks to get me to the outside network.  After about an hour (sometimes less) the SSH session just hangs; from a Mac OS X terminal it's simply unresponsive and needs to be killed, whereas PuTTY on Windows, once it realizes the connection is no good, pops up a “Network error: Software caused connection abort” message.  The problem seemed to be worse with DSL from Verizon and Qwest, and very mild with AT&T/Comcast cable in Cambridge, 02138.  (Advice: in Cambridge, you can't go wrong with the Comcast digital cable.  I was getting speeds of, as I recall, almost a megabyte per second down and could pull down entire ISOs in minutes; a Debian net install on an old P233 was disk-I/O limited, not network limited, as far as I could tell.)

Happily, the good people at DSL Reports (www.dslreports.com) have put together an FAQ on this subject including some specific configuration options and links to more info:

http://www.dslreports.com/faq/7792
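One commonly suggested remedy (I'm assuming a reasonably recent OpenSSH client here; PuTTY has an equivalent “Seconds between keepalives” setting under Connection) is to send application-level keepalives via ~/.ssh/config:

# Send a keepalive through the encrypted channel every 60 seconds,
# so the router's NAT table entry for the connection stays fresh.
Host *
    ServerAliveInterval 60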

More on this as I determine if it actually works.

Update: So far, so good; a thunderstorm passing through caused a brief power cycle and that definitely reset the connection, but it seemed to hold otherwise, for example during lunch.  The real test will be leaving terminal sessions overnight.

Update: While looking for info on a superficially related problem, I came across this slashdot thread:

http://slashdot.org/askslashdot/00/06/24/0622236.shtml

This may also provide some assistance to seekers of info on this topic.  Importantly, however, you should also examine the lengthy parenthetical in http://blog.rlucas.net/ancient/info-what-happens-to-ssh-every-21115/ to determine if this is really your problem; the link there to a TCP/IP theory page should help you as well.  This caveat is necessary because there are really two opposite problems that both manifest as “dropped SSH terminal sessions”: one, a NAT table on a cable/DSL router could be timing out (which argues for sending keepalive packets more frequently), or two, your connection could be flaking out briefly but coming back up fairly quickly (which argues against sending frequent keepalives).

[FIX] Adobe Reader v.5 fails to open PDF with "There was an error opening this document. A temporary file could not be opened."

If you see “There was an error opening this document.  A temporary file could not be opened.” when trying to open a PDF file, you may need to clean out C:\Documents and Settings\USERNAME\Local Settings\Temp\Acr*.tmp
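From a command prompt, that cleanup might look like the following (%USERNAME% assumes your profile directory matches your login name):

del /q "C:\Documents and Settings\%USERNAME%\Local Settings\Temp\Acr*.tmp"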

Cheers to “gprellwitz” who suggests this here:

http://www.experts-exchange.com/Web/Graphics/Adobe_Acrobat/Q_20790405.html

Jeers to the MSFT developer who decided that “Documents and Settings”, with spaces and mixed caps, was a better home directory prefix than “home”.

BUG/FIX: Empty "script" tags may cause IE to display nothing

I use Microsoft Internet Explorer version 6 (IE6) on Windows 2000 when
I have to (much better to use Mozilla or Opera in my opinion; even some
of the Microsoft guys are now eschewing IE for security reasons). 
I tested, under IE6, a document that passed some pretty strict
validation and was showing up fine in Mozilla.  The title appeared, but
the body was blank.  What?!

It turns out that the problem was independent of quirks mode on/off
(google for quirks mode if you don't know).  It was dependent upon
two “<script language='javascript' src='blah' />” tags in the
<head> section.  By changing the <script /> to
<script></script> (explicit closing tags), the body
reappeared OK.
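That is, each of the offending tags went from

<script language='javascript' src='blah' />

to

<script language='javascript' src='blah'></script>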

[FIX] DBD::mysql installation on Red Hat 9 fails with "Unsuccessful Stat" messages.

If you go to install the Perl module DBD::mysql on Red Hat 9 with MySQL 3.23 (and probably other versions as well), two gotchas might appear.  First, if the MySQL bin directory is not in your path, the module won't be able to pull its build options automatically.  Make sure that when Makefile.PL runs (either because you're running it or CPAN is), it can find and run mysql_config.


The second gotcha is that Red Hat shipped release 9 with LANG=en_US.UTF-8 set by default in the shell environment.  This will cause your makefile to have some oddly malformed lines around line 89, and will cause a blizzard of these complaints:

Unsuccessful stat on filename containing newline at /usr/lib/perl5/5.8.0/ExtUtils/Liblist/Kid.pm line 97.


The solution, according to the kind David Ross at Linuxquestions.org, is to

export LANG=C

before running Makefile.PL. Many thanks.


http://www.linuxquestions.org/questions/history/62975
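Putting both gotchas together, a typical build session might look like this (the MySQL path is illustrative; adjust to wherever mysql_config lives on your box):

export PATH="$PATH:/usr/local/mysql/bin"   # so Makefile.PL can run mysql_config
export LANG=C                              # avoid the malformed-makefile gotcha
perl Makefile.PL
make
make test
make install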


[BUG] ActiveRecord woes with SQL Server odd names (spaces and brackets), ntext data types

ActiveRecord (latest versions of ODBC, DBI, and AR as of today
2005-12-01) seems to be having trouble with at least two things that
SQL Server 7 does:

1. The SQL Server adapter (sqlserver_adapter.rb) get_table_name sub
expects a name to have no whitespace in it.  The conditional and
regex need to be changed to look for bracketed names like [Poorly
Designed Table].  Then, the columns sub needs to know to take the
brackets off the ends of the names when it looks up the table by its
textual value.  To complicate this, according to
http://msdn2.microsoft.com/en-us/library/ms176027.aspx you can have
either double-quotes or square brackets as your delimiters in SQL
Server names, and you can even escape brackets by doubling.  I
have written hackish code that solves for simple [Dumb Name] tables but
not the whole enchilada, so I'm not posting it here yet.

2. The data type “ntext” seems to create memory allocation problems; I get an error of:

/usr/lib/ruby/site_ruby/1.8/DBD/ODBC/ODBC.rb:220:in `fetch': failed to allocate memory (NoMemoryError)

Running this on CYGWIN_NT-5.1 RANDALL-VAIO 1.5.19s(0.141/4/2) 20051102 13:29:13 i686 unknown unknown Cygwin on Win XP Pro.

[HINT] Preprocessing mongo XML files for use with XML::Simple

If you are a reasonable Perlista, the first thing you will do when you
have to do some modest but non-trivial munging of data locked up in XML
is to use XML::Simple.  For purposes of comprehensibility and
transparency, the API is nearly perfect (apart from some defaults that
could more helpfully be set for strictness).

However, if you prototype on a small document and then try to use your
code on a much bigger XML document, you will find the drawback:
tree-building is costly, and you may spend the vast majority of your
program's time parsing the document.  One handy solution is to
preprocess your XML: run XML::Simple's XMLin sub, and use Data::Dumper
to spit out the resulting structure to a file.  When you want to use
it, you can simply “eval” it, for it defines a native Perl structure,
and you can use the remainder of your code unchanged.  This gave me a
2x to 10x speedup for certain documents and certain sizes.
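Here is a minimal sketch of the two halves (filenames are illustrative):

# preprocess.pl -- parse once and dump the tree as evalable Perl source
use strict;
use warnings;
use XML::Simple;
use Data::Dumper;

$Data::Dumper::Purity = 1;               # keep self-references eval-safe
my $tree = XMLin('big_document.xml');    # the expensive tree-building step
open my $out, '>', 'preprocessed_xml.dd' or die "Can't write: $!";
print {$out} Data::Dumper->Dump([$tree], ['tree']);
close $out;

# myscript.pl -- later, slurp and eval the dump instead of re-parsing
use strict;
use warnings;

our $tree;
{
    local $/;                            # slurp the whole file at once
    open my $in, '<', 'preprocessed_xml.dd' or die "Can't read: $!";
    eval <$in>;
    die $@ if $@;
}
# ...the rest of the code uses $tree exactly as before.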

However, now imagine that you have some real torture-test data: 10 MB,
heavily nested monstrosities of XML.  The Dumper output of the parsed
tree can now run to 100 MB!  Slurping this in and evaling it is now the
real problem.

Here's an idea: rather than slurping and evaling, try inlining it at
the compilation stage.  That's right — make use of Perl's much
more efficient way of slurping and evaling a filehandle with a pipe:

cat preprocessed_xml.dd myscript.pl | perl

It's somewhat unorthodox, but entirely functional.  Combined with
judicious use of gzip, this could be a very efficient way to get
little-changing XML documents into perl quickly — often very important
when doing dev work for which numerous iterations are required and for
which a minutes-long parse stage would adversely affect progress.
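With gzip in the mix, the same trick might look like this (filenames illustrative):

gzip -dc preprocessed_xml.dd.gz | cat - myscript.pl | perl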

Update: It occurred to me that
using Storable or a Cache::* module might be faster yet.  At this
point, my work proceeds with tolerable speed using Data::Dumper, plus I
like using Dumper so that I can edit the output structures by hand if
need be.  But perhaps you should try those modules if you need
even better performance, or cringe at the hackishness of catenating
files piped to perl.
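Should you try the Storable route, a sketch of the equivalent steps (binary rather than hand-editable; same illustrative filenames):

use strict;
use warnings;
use XML::Simple;
use Storable qw(store retrieve);

my $tree = XMLin('big_document.xml');
store($tree, 'preprocessed_xml.sto');           # write the binary image
my $thawed = retrieve('preprocessed_xml.sto');  # read it back, typically fast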

"Can't coerce GLOB to string in entersub" means "File not found"

For users of the Perl modules XML::LibXML and XML::LibXSLT, you will save yourself much puzzlement if you understand that “Can't coerce GLOB to string in entersub” really means “file not found.”

NOTE that the file which is not found might be your XML, your XSLT, or the schema / DTD for these things! Maybe some -e tests are in order (but don't forget that filenames hidden in your XML pointing to bad DTD paths, for example, will throw the same cryptic error).
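In that spirit, a few -e tests up front (filenames illustrative) fail loudly with a real filename instead of the cryptic GLOB message:

use strict;
use warnings;

for my $file ('input.xml', 'stylesheet.xsl') {
    die "No such file: $file\n" unless -e $file;
}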

See also http://maclux-rz.uibk.ac.at/~maillists/axkit-users/msg05794.shtml