Archive for the ‘bugfix’ Category

CHCBP-013 describes unlimited former spouse coverage under CHCBP

Tuesday, April 8th, 2014

(This is extremely specialized subject material. If you found it while searching, you probably understand this, but if you’re just a curious reader, you might want to skip it.)

The Continued Health Care Benefit Program (CHCBP) is analogous to COBRA for formerly TRICARE-covered persons; it is mostly a transitional way to get TRICARE-like benefits (TRICARE minus the military treatment facilities), self-paid, after one ceases to be eligible for TRICARE itself.

Generally the CHCBP eligibility period is limited. However, in the case of former spouses of servicemembers who have not remarried before age 55, it is possible to qualify for unlimited (lifetime) eligibility. This is obvious from law and from published stuff (e.g. continuing legal education materials for the tiny subset of lawyers for whose clients this is a very important topic), but it is nearly impossible to get anything from the “horse’s mouth,” that is, the actual CHCBP plan administrator, which is Humana Military Healthcare Services (Humana Military).

If you call Humana Military at 1-800-444-5445 and ask for the rules regarding former spouse unlimited eligibility, they will punt and tell you it’s something that is decided only at the point at which the normal 3-year eligibility has expired. If you persist and ask how it’s decided, they’ll tell you that there is a questionnaire which is sent, but that they don’t have access to it and can’t send a copy. If, incredulous, you insist that some person, perhaps that person’s manager, does in fact know where the questionnaire is, since it’s obviously already written and gets sent out to people all the time, you will get transferred to a kindly manager who takes your FAX number and swears up and down that they’ll send it to you tomorrow morning since the office is about to close.

If you’re a real ornery gadfly, though, and you’re ready to start writing letters to your congressman etc., you might try sending a letter to Humana Military asking for this information and documenting your attempts to get it and the earlier promises to send it that went unfulfilled. If you do all this, they will mail you a redacted copy of an actual form CHCBP-013, which had been sent to an actual former spouse whose eligibility was expiring. (Maybe, in fact, the first representative was truthful and this is not a document that they have ready, and they write up a new one every time, so the only way to send it out is to take the latest one and redact the policyholder’s personal info. I doubt it.) (For this courtesy I am grateful, O unnamed mail-replier at Humana Military, but the policy of keeping this information completely off the public Web is either backwards or sinister.)

Note that the operative requirements are:

1. A “signed statement of assurance that you have not remarried before the age of 55.”
2. Proof (the standard of which is left to the imagination) that you were “enrolled in, or covered by, an approved health benefits plan under Chapter 55, Title 10, United States Code as the dependent of a retiree at any time during the 18-month period before the date of the divorce, dissolution, or annulment (Both TRICARE and CHCBP would qualify as such a plan).” [sic — Humana’s language implies that the enrollee must have been a dependent of a “retiree,” but the actual language of 10 USC chapter 55, sec. 1078a(b)(3) talks about a “member or former member” of the armed forces]
3. “A copy of the final divorce decree or other authoritative evidence” that you are actually receiving or have a court order to receive part of the retired or retainer pay of the service member, or have a written agreement that the service member will elect to provide you an annuity. [DO NOT rely on this nor on Humana’s exact language, for this, get your ducks in a row with the statute etc. — it’s slightly complicated.]
4. The executed (signed) renewal notice page of CHCBP-013 and a check for the next premium payable to “United States Treasury.”

Given that this is a hugely important item to a few (but maybe not all that few) people out there, and that it relates to the execution of very clear federal law, I’m rather shocked that there’s no mention ***anywhere on the Internet prior to today(!!!)*** of CHCBP-013 and its contents and requirements.

So, without further ado, I present a redacted copy of form letter CHCBP-013, which shows the actual procedural requirements for obtaining unlimited former spouse CHCBP coverage.

Django auto_now types wreck get_or_create functionality

Tuesday, February 11th, 2014

I recently had occasion to lazily use the Django “get_or_create” convenience method to instantiate some database ORM records. This worked fine the first time through, and I didn’t have to write the tedious “query, check, then insert” rigamarole. So far so good. These were new records so the “or_create” part was operative.

Then, while actually testing the “get_” part by running over the same input dataset, I noticed it was nearly as slow as the INSERTs, even though it should have been doing indexed SELECTs over a modest, in-memory table. Odd. I checked the debug output from Django and discovered that get_or_create was invoking UPDATE calls.

The only field it was updating was:

updated_at = models.DateTimeField(auto_now=True)

Thanks, Django. You just created a tautological truth. It was updated at that datetime because … you updated it at that datetime.

Interestingly, its sister field did NOT get updated:

created_at = models.DateTimeField(auto_now_add=True, editable=False)

This field, true to its args, was not editable, even by whatever evil gnome was going around editing the updated_at field.
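
One way to sidestep the spurious UPDATE entirely (the model and field names below are hypothetical, and this is a sketch, not gospel): do the indexed SELECT yourself and only create on a miss, so that nothing ever calls save() on an existing row and auto_now never fires.

from django.db import models

class Widget(models.Model):  # hypothetical stand-in for the real model
    name = models.CharField(max_length=100, unique=True)
    created_at = models.DateTimeField(auto_now_add=True, editable=False)
    updated_at = models.DateTimeField(auto_now=True)

def get_or_create_quietly(name):
    """Like get_or_create, but guaranteed never to UPDATE an existing row."""
    try:
        return Widget.objects.get(name=name), False
    except Widget.DoesNotExist:
        return Widget.objects.create(name=name), True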

Recommendation: if you want actual timestamps in your database, why don’t you use a real database and put a trigger on it? ORM-layer nonsense is a surefire way to have everything FUBAR as soon as someone drops to raw SQL for performance or clarity.
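
For the record, a sketch of the trigger approach, assuming PostgreSQL (the table and column names are hypothetical): once the trigger is in place, the timestamp is maintained no matter who issues the UPDATE, be it the ORM, raw SQL, or a human at a console.

# One-time setup, e.g. from a Django shell; assumes PostgreSQL and an
# existing myapp_widget table with an updated_at column.
from django.db import connection

TRIGGER_SQL = """
CREATE OR REPLACE FUNCTION touch_updated_at() RETURNS trigger AS $$
BEGIN
    NEW.updated_at := now();
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER widget_touch
BEFORE UPDATE ON myapp_widget
FOR EACH ROW EXECUTE PROCEDURE touch_updated_at();
"""

cursor = connection.cursor()
cursor.execute(TRIGGER_SQL)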

Don’t Bother with S3FS for Mounting Amazon S3 on Mac OS X (2014)

Friday, January 24th, 2014

As of January 2014, it’s not worth bothering with the “s3fs” software for mounting Amazon S3 buckets on your local filesystem.

The idea of s3fs is simple and great: use FUSE (Filesystem in Userspace) to “mount” the S3 bucket the same way you’d mount, say, an NFS drive or a partition of a disk. Manipulate the files, let s3fs sync it in the background. Sure, you lose some reliability, but we’ve had NFS and SMB and all kinds of somewhat-latent-over-an-unreliable-link-but-mostly-with-filesystem-semantics software for decades now, right?

Well, forget it. s3fs as of January 2014, used on Mac OS X and against an existing set of buckets, is so utterly unreliable as to be useless.

First, s3fs cannot “see” existing folders. This is because folders are a bit of a hack on S3 and weren’t done in a standardized, documented way when s3fs was first written. Since then, at least two other ways of creating folders on S3 have gained currency: an older, now-deprecated convention from the Firefox plugin S3Fox, and a newer de facto standard from Amazon’s own management dashboard/browser for S3. Whatever the historical reason, you can’t see the existing folders.
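
To see why the tools disagree, remember that S3 has no real folders, only key names. Here’s a sketch with boto (the bucket name is hypothetical) of the zero-byte-key convention the AWS console uses; S3Fox used a different convention (a zero-byte “myfolder_$folder$” key), which is exactly why one tool’s “folder” is another tool’s mystery object:

from boto.s3.connection import S3Connection

conn = S3Connection()  # reads AWS credentials from the environment
bucket = conn.get_bucket('my-bucket')  # hypothetical bucket
# What the AWS console does when you click "Create Folder":
bucket.new_key('myfolder/').set_contents_from_string('')
# A "file in the folder" is just another key sharing the prefix:
bucket.new_key('myfolder/hello.txt').set_contents_from_string('hello')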

Second, although mailing list posts suggest that you can theoretically *create* an s3fs folder with the same name as your existing folder, whereupon its contents will magically become visible, empirically something rather different happens. A mkdir on the s3fs mount leads to the creation of a mangled *regular* file on the S3 dashboard. Now you have two “folders,” each of which is unusable as a folder on the other system (S3 dashboard, or s3fs). Argh.

Finally, you might say: OK, fine, this will just push me toward a flat, non-folder-nested S3 architecture. (Leave aside for the moment that the very reason you want to use S3 is probably exactly so that you can have lots and lots and lots and lots (like 10^8+) of files, in a way that would cripple any reasonable filesystem tools forced to see them all in one “directory.”) However, even that doesn’t work reliably, as s3fs demonstrated today when it went into “write-only mode,” such that I could create files locally that would show up on S3 but that subsequently would disappear from my local filesystem. WTF?!?

The unfortunate answer is: S3 is not a filesystem, and it was created by people who are smarter than you, who have very craftily calculated that if you are forced to weave the S3 API and its limitations into your application code, you will have a damn hard time ripping it out of your infrastructure, and so they are going to have you do just that. They do not want it to be used as a filesystem, and so guess what: you are *not* going to use it as a filesystem. Not gonna happen.

Say what you will, but our hometown heroes here in Seattle are no dummies. Embrace, extend, extinguish, baby. Not just for OS companies anymore…

(Yes, I know that s3fs is not an Amazon project. But it appears to be the community’s best attempt to put filesystem semantics around S3, and that attempt has been rejected by AWS’s immune system.)

MySQL silently ignores aggregate query errors by default

Monday, January 14th, 2013

In a SQL query, if you use aggregate functions (min, max, count, sum, etc.) and mix them with non-aggregate columns, you have to indicate how to “group” things. Otherwise, the output is not predictable.

MySQL by default will just ignore these problems and make up something. This can make bugs in complex queries hard to track down (and it virtually guarantees that a novice or dullard will slip some errors into such queries eventually).
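
For example (hypothetical table): in the query below, “name” is neither aggregated nor in the GROUP BY, so MySQL will silently pick an arbitrary name from each department rather than raising an error. With ONLY_FULL_GROUP_BY set, it refuses to run the query instead:

-- Which employee's name do you get per department? Whichever one MySQL
-- happens to serve up; the result can even vary between runs.
SELECT dept, name, COUNT(*) AS headcount
FROM employees
GROUP BY dept;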

You can fix this with:

SET SQL_MODE=(SELECT CONCAT(@@sql_mode,',ONLY_FULL_GROUP_BY'));

(That is, you want the “ONLY_FULL_GROUP_BY” option set. The SET above can be run at the mysql> prompt and affects that session only; thinking DBAs should strongly consider enforcing this as a server option.)

I am too tired and busy to give in to the temptation to unleash a rant about MySQL here, but PLEEEEZ. It’s the year 2013 and this is still an issue??

Hat tip Michael McLaughlin: http://blog.mclaughlinsoftware.com/2010/03/10/mysql-standard-group-by/

MySQL docs on this “extension”: http://dev.mysql.com/doc/refman/5.1/en/group-by-extensions.html

Force a reference to System.Core in Visual Studio 2010

Saturday, April 9th, 2011

There are reasons why you might need to add a reference to “System.Core” to your Microsoft Visual Studio project. (For example, if you wish to compile/build both inside the IDE and from the command-line with MSBuild.exe.)

However, if you try to do this through the IDE, it will barf at you: “A reference to ‘System.Core’ could not be added. This component is already automatically referenced by the build system.”

Alas, that’s a big fucking lie. It’s referenced by the IDE when it invokes the build system, but not by MSBuild itself. So sometimes you indeed must add such a reference, but you can’t do it from IDE-land. So close VS2010, fire up vim, and add a line to the .csproj file in question:

<Reference Include="System.Core" />
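
In context, it goes in the same <ItemGroup> as the rest of your references; a sketch (the neighboring Reference lines will vary by project):

<ItemGroup>
  <Reference Include="System" />
  <Reference Include="System.Core" />
  <Reference Include="System.Xml.Linq" />
</ItemGroup>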

Hat tip to Ashby at StackOverflow:
http://stackoverflow.com/questions/1302488/the-type-or-namespace-name-linq-does-not-exist-in-the-namespace-system-data/4331322#4331322

I got to this point when recompiling something for .NET 3.5 that had originally been coded/built for .NET 4.0; it wouldn’t import the LINQ namespace without being told to import System.Linq, but then couldn’t find it without the System.Core reference being made explicit. Arrgh.

Why is Port 21 apparently open on my firewall?

Monday, February 28th, 2011

Scenario: You set up a server somewhere on the public Internet.  You lock down its ports to the minimal subset you need using firewall(s).  Yet, somehow and for some strange reason, nmap reports that port 21 (FTP) is open on your server!  Sure enough, you do a “telnet myhost.cxm 21” and it connects!  Shit-damn, what’s going on??

Don’t bang your head against your iptables or pf or PIX or ASA config.  First, check to make sure that the environment you’re checking from behaves right.  Do a “telnet google.com 21” and see if it connects.

Some NAT setups in offices apparently try to do stateful inspection of outbound active FTP in order to rewrite the addresses/ports involved, and these can intercept outbound requests on port 21, making it seem like any host you probe is listening on that port.

Try nmap’ing or telnet’ing from an outside host directly connected to the public internet.  And make sure that your subsequent security scans/checks come from such a host.

secedit for setting security policy in windows server 2008 r2 server core

Saturday, January 15th, 2011

Concepts:

There is a policy running on the system.  There may be one or more databases in .sdb files, each of which represents a possible policy that could be run.  These are stored in c:\windows\security\database\.  The workflow is:

1. “Export” a policy out to a configuration .ini file (the docs for secedit say .inf, but it is clearly the venerable .ini format).  The exported policy comes either from a database .sdb file, or from the current running system policy (if no database is specified when running secedit.exe on the command line).
2. Edit this configuration .ini file (the docs ambiguously call it a “security template” as well, but the command line options all say “cfg”).
3. Create a new security database .sdb file with the “import” syntax.  Contrary to a lot of stuff on the Web, you don’t need to put it into some particular magic database (but see below for path gotchas) like the original secedit.sdb; put it in a new one.
4. Once you have a new, valid, legit database .sdb file, only then use “configure” to apply the database file to the current system.
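
Put together, the round trip looks something like this (a sketch; the paths are hypothetical):

REM 1. Export the current running policy to an editable config file:
secedit /export /cfg C:\temp\policy.ini

REM 2. Hand-edit C:\temp\policy.ini (careful: it is UTF-16).

REM 3. Sanity-check the edited file (but see the round-trip gotcha below):
secedit /validate C:\temp\policy.ini

REM 4. Import the edited config into a NEW database:
secedit /import /db C:\temp\newpolicy.sdb /cfg C:\temp\policy.ini
if not "%ERRORLEVEL%"=="0" echo import failed: %ERRORLEVEL%

REM 5. Apply the database to the running system:
secedit /configure /db C:\temp\newpolicy.sdb
if not "%ERRORLEVEL%"=="0" echo configure failed: %ERRORLEVEL%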

Lots of gotchas here.

Sometimes when secedit.exe fails, it is silent, like a good UNIX program, but it will return an %ERRORLEVEL% so check that or you will be bamboozled.  It is noisy when it succeeds and sometimes even noisier when it fails (except when it’s silent).

Secedit silently failed in many confusing ways if either the security database .sdb or the configuration .ini was located on my z: drive, which in this case was a VMware shared folder on a Mac OS X system.  Move stuff to a C: temp dir, then clean up afterwards, because hey, writing xcopy lines in batch files is fun.

The configuration .ini files are in full-on UTF-16 format.  Two fucking bytes per character.  Nice.

If you try to create a brand-new configuration .ini file without reference to anything, you do not get a listing of the default settings, but rather an unhelpful, nearly-empty file that informs you that Description=Default Security Settings. (Windows Server).

If you try to look up the values for the various sections, you just plain can’t get them anywhere.  If you go to Microsoft’s horrible javascript monstrosity of a reference site and click Technical Reference (for Windows Server Group Policy), you can get a gigantic, unstructured spreadsheet that has relatively lengthy (but non-technical) prose about the various settings.  But it won’t tell you which sections they go under in the .ini file, nor the requirements for each field, some of which are registry settings and some of which are not.  The companion search site is a horrifying jumble of shittiness that can’t search for shit but mollifies you with funny Engrish when it fails (I am not kidding: “We probably hit search limit.  Try to redefine your search string.”) and, as a bonus, demonstrates the sloth of Azure serving AJAX (because, you know, actually putting the documents into an html page where anyone’s browser “find” function could speedily search them would have been too straightforward for the generous and distinguished engineers of Microsoft, who are clearly working on better and more interesting problems than, oh, say, making sure the OS’s core security API is sensical and internally consistent).

You can maybe stay sane if you learn that the registry values are prefixed with an integer and a comma, where that int seems to specify the data type of the reg value (4 = REG_DWORD integer, 1 = REG_SZ text, 7 = REG_MULTI_SZ multi-line text).
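
For illustration, a typical [Registry Values] line (the setting name is real; the value itself is just an example):

[Registry Values]
MACHINE\System\CurrentControlSet\Control\Lsa\LimitBlankPasswordUse=4,1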

Specifically in the [Event Audit] section, there are values that are not registry values.  They are ints that appear to be bitmask math fanciness.  (Remember setting options on Visual Basic windows back in the ’90s, where you got to add up powers of 2?)  It so happens that they all have two bits, the first one being “log successes” and the second being “log failures.”  So 0 is neither, 1 is successes, 2 is failures, 3 is both.  But this isn’t, as far as I can tell, anywhere on MSFT’s site, and it’s sure as fuck not in the giant unstructured spreadsheet reference.
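
To make that concrete (the key is real; the value shown is just an example):

[Event Audit]
; 0 = audit nothing, 1 = successes, 2 = failures, 3 = both
AuditLogonEvents = 3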

Much of the configuration .ini file can be omitted (so you can just overlay the parts you want).  But you MUST include the [Version] and [Unicode] sections or it will barf.  Use secedit /validate to check it.  However, “validate” does not mean that it will actually round-trip and work right; it doesn’t check the security identifiers in the [Privilege Rights] section so it’s quite possible to have a valid cock-up (see round-trip gotcha below).

Biggest one: secedit CANNOT ROUND-TRIP.  The security policy “export” may well (and does for me) produce an output with entries in the [Privilege Rights] section that refer to “Classic .NET AppPool” among others.  If you try to import and configure with this, you’ll get “No mapping between account names and security IDs was done” in the error log.  It turns out you have to manually fix this by adding “IIS AppPool\” before the names of these AppPool entities.  (Hat tip)  If you want to actually find out whether that, or some other hackery, fixes it to something that can be mapped to an SID, find yourself the PsTools download and test the name with PsGetSid.exe.  Awesome.
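
The before-and-after looks something like this (the particular privilege and SID are illustrative, not necessarily what your export will contain):

; as exported by secedit (will NOT import back):
SeInteractiveLogonRight = *S-1-5-32-544,Classic .NET AppPool
; hand-fixed so the name maps back to a SID:
SeInteractiveLogonRight = *S-1-5-32-544,IIS AppPool\Classic .NET AppPool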

The “configure” option only really needs a .sdb database specified by /db.  If you give it an additional /cfg parameter, it will muddy up the .sdb with the contents of the specified config ini.  There is no benefit to using this, ever; the only thing it saves you is a step whose byproduct is a sanity-preserving intermediate-state backup.

The “overwrite” option doesn’t do what you think it does.  Especially with “configure.”  Just don’t use it, unless you are planning on destroying what is in your .sdb file(s).  The .ini configuration already wins in a tie.

The “configure” option is NOT ATOMIC, and it will happily set your system’s security policy partially to be what was in the file you indicated, and partially not (for example, with the broken round-tripping of IIS AppPool names).  There’s no way to find out whether or not the configuration will succeed, except to “suck it and see.”  And once it does partially, non-atomically make a goulash out of the then-current and database-specified settings, there’s no way to tell what succeeded or failed, except by reading the log, which is an unparseable mess.  Fantastic.

Web.config and App.config file gotchas

Friday, November 19th, 2010

If you try to use idiomatic .NET, and you have even modest configurability architecture requirements, you will almost certainly want to use the *.config system (App.config or Web.config). According to old hands at Win32 programming, this is quite a step forward from *.ini files or registry manipulation. Perhaps so.

However, the *.config regime is extraordinarily fragile and surprise-prone once you start trying to do more than just add name/value pairs to the <appSettings> section. The following are some gotchas that I hope you can avoid if you have to deal with this.

.EXE assemblies get App.config, and Web DLLs get Web.config, but non-Web DLLs (e.g. tests) get App.config.

.EXE assemblies look for filename.exe.config, which is in App.config format.  Normally, DLLs do NOT get a config file; rather, they inherit / acquire whatever config is in place in their runtime environment.  But there are two important exceptions.  Web services / sites get built as DLLs.  Their execution environment (presumably IIS or the dev server) looks for Web.config and its format.  Test projects (of the MS type that Visual Studio 2010 makes by default) get built as DLLs as well, but they get run by an execution environment (mstest? Visual Studio?) that looks for an App.config file.

So, to sum:

  • .EXE => App.config
  • .DLL Web project => Web.config, via its server runtime
  • .DLL Test project => App.config, via the test / IDE runtime
  • .DLL other => none, inherits runtime environment

Sections such as <appSettings> can be externalized into other files, but there are two subtly different and incompatible ways to do so.

Specifically, you can add a “file” or “configSource” attribute to your appSettings section.  If you use “file,” that file will be read and will override default values that are set in that section in the .config main file.  If you use “configSource,” however, you must not set any values in your .config main file, and instead must entirely scope out that section (and that section alone, save for the XML declaration) in the file whose name you specify.
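
A sketch of the two flavors (file names hypothetical; these are alternatives, not to be combined on the same section).  In the main .config:

<!-- Option 1, "file": entries in the external file override these defaults -->
<appSettings file="local-overrides.config">
  <add key="Color" value="blue" />
</appSettings>

<!-- Option 2, "configSource": no entries may appear inline at all -->
<appSettings configSource="appSettings.config" />

And the external appSettings.config that a configSource must point at, containing that entire section and nothing else:

<?xml version="1.0"?>
<appSettings>
  <add key="Color" value="red" />
</appSettings>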

Frustratingly, “file” and “configSource” have different rules for what may be included (relative / absolute paths, parent directories, etc.).  Especially restrictive are the rules for Web.config, I believe, though I don’t have them straight.  Effectively what this means is that if you have several Web projects that require a shared configuration section, you cannot put your customSection.config in a parent directory and have your projects pull it in (thereby keeping a Single Point of Truth); rather, you have to propagate multiple copies out to all of the sub-Projects (ick).

For more on this, see the MSDN documentation for configSource and for the appSettings “file” attribute.

Web.config settings are mostly inherited from machine.config down through a hierarchy, but inheritance gets confusing at the sub-directory level in IIS.

Sometimes, or at least most of the time by default, Web.config settings for a given directory are merged with those of parent directories, and are merged with machine-level config as well.  This can lead to somewhat unfortunate results if you have an app in a subdirectory of another app with divergent configuration requirements.  This fellow seems to have figured out how to resolve this.
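
One known trick (possibly the very one he used) is the inheritInChildApplications attribute on a <location> element; a sketch:

<!-- In the PARENT app's Web.config: wall off whatever the child
     app in the subdirectory should not inherit. -->
<location path="." inheritInChildApplications="false">
  <system.web>
    <!-- parent-only settings go here -->
  </system.web>
</location>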

Be alert, though, because not all settings *do* propagate properly.  First, a parent Web.config can indicate that its settings should not be inherited.  Second, collection settings are merged together, not replaced, by child specifications.  Third, some settings just seem stubbornly not to propagate (see this MSDN article, which suggests that “anonymousIdentification” does not propagate because it is a secret never-properly-set default magical element).  Finally, the above-cited MSDN article raises the good point that Web.config only applies to ASP.NET stuff, and that there is an entirely different regime for static content and plain old ASP files.  So watch yourself, there, Tex.

iTunes “Sound Enhancer” Considered Harmful

Monday, October 18th, 2010

I have now on two occasions, with two separate, quiet background, vocal-heavy songs, noticed significant and highly distracting audio artifacts introduced by the default “Sound Enhancer” on iTunes for Mac (specifically, iTunes 9.2.1 (5) on Mac OS X 10.6.4, on a Quad-Core Mac Pro with an embarrassingly large amount of RAM).  The two songs were “You make me feel so young” sung by Frank Sinatra (from “Songs for Swingin’ Lovers”) and “Do I love you?” sung by Peggy Lee (from “Beauty and the Beat!”).

Both of these problems occurred with CD-ripped highish-bitrate audio tracks (MP3 at 160kbps and AAC at 256kbps/VBR, respectively).  I originally thought the problem was that iTunes was using a faulty MP3 decoder until I determined that the problem occurred for AAC as well.  What finally sealed it was using QuickTime Player to listen to the same file and discovering the noise artifact had disappeared.

Compare these two, which I digitally captured using Audio Hijack (trial version; buy it if you like it!):

Version with iTunes “Sound Enhancer” enabled: http://rlucas.net/audio/do_i_love_you-itunes-sound-enhancer.aiff

Version played through Quicktime (no “Enhancer”): http://rlucas.net/audio/do_i_love_you-quicktime.aiff

(P.S., rightsholders don’t even think about flexing your DMCA at me.  These are 20-second audio quality demonstrations for nonprofit, educational and research purposes, and have no negative impact on market value of the works.  You are on notice that any DMCA abuses will be met with bad-faith treble-damage vengeance.)

Notice, on the second “Do I?” that the iTunes version has a big burst of static on the “I,” while the Quicktime version does not.  The artifact on “..so young” is similar, on the “You and I” at 1:57, but I don’t have the time to capture and post that as well.

The morals of this story:

  • TURN OFF “Sound Enhancer” in preferences.  Just do it.  Not worth it.
  • Upgrade your headphones.  Never noticed this until I moved up to some better hardware.  It gets lost in the Apple earbuds.
  • Don’t trust that because a device/program/product has a big following that it does the right thing, at least from a quality perspective.  (Yes, I know, this should be self-evident to anyone who’s been awake since, say, the Industrial Revolution, or since the invention of mass brewing, but give me a break; I’ve been in the grips of Apple fanboydom since 2003 or so.)
  • Trust your ears, debug your equipment, and don’t put up with shit.  I didn’t believe for a good long time that the problem existed at all, much less that it could have been upstream from my headphones, until I debugged it.  Digital audio holds a promise for mankind, damn it, and you’ve got to make it live up to its potential.

RubyGems assorted errors, mutating APIs, and fixes

Thursday, April 29th, 2010

If you’re trying to use a newer (e.g. 1.3.6) version of the ruby “gem” system to install ruby packages (like rails, rake, etc.) on a legacy system with an older (v. 1.9.0) ruby installed, you might find yourself running into problems like this:

$ gem install rake

ERROR: While executing gem ... (ArgumentError)
illegal access mode rb:ascii-8bit

If you run with debugging flags, you might get a slightly more informative stack trace:

$ gem --debug install rake
Exception `NameError' at /usr/local/lib/site_ruby/1.9/rubygems/command_manager.rb:163 - uninitialized constant
Gem::Commands::InstallCommand
Exception `Gem::LoadError' at /usr/local/lib/site_ruby/1.9/rubygems.rb:778 - Could not find RubyGem test-unit (>= 0)

Exception `Gem::LoadError' at /usr/local/lib/site_ruby/1.9/rubygems.rb:778 - Could not find RubyGem sources (>
0.0.1)

Exception `ArgumentError' at /usr/local/lib/site_ruby/1.9/rubygems/format.rb:50 - illegal access mode rb:ascii-8bit
ERROR: While executing gem ... (ArgumentError)
illegal access mode rb:ascii-8bit
/usr/local/lib/site_ruby/1.9/rubygems/format.rb:50:in `initialize'
/usr/local/lib/site_ruby/1.9/rubygems/format.rb:50:in `Gem::Format#from_file_by_path'
/usr/local/lib/site_ruby/1.9/rubygems/installer.rb:118:in `initialize'
/usr/local/lib/site_ruby/1.9/rubygems/dependency_installer.rb:257:in `Gem::DependencyInstaller#install'
/usr/local/lib/site_ruby/1.9/rubygems/dependency_installer.rb:240:in `Gem::DependencyInstaller#install'
/usr/local/lib/site_ruby/1.9/rubygems/commands/install_command.rb:119:in `execute'
/usr/local/lib/site_ruby/1.9/rubygems/commands/install_command.rb:116:in `execute'
/usr/local/lib/site_ruby/1.9/rubygems/command.rb:258:in `Gem::Command#invoke'
/usr/local/lib/site_ruby/1.9/rubygems/command_manager.rb:134:in `process_args'
/usr/local/lib/site_ruby/1.9/rubygems/command_manager.rb:104:in `Gem::CommandManager#run'
/usr/local/lib/site_ruby/1.9/rubygems/gem_runner.rb:58:in `Gem::GemRunner#run'
/usr/bin/gem:21

Ignore the first couple of exceptions and focus on the ArgumentError right before the stack trace. What you’re seeing there is the syntax for specifying an encoding (“rb:ascii-8bit”) when opening a file in binary-read mode, a syntax that didn’t make it into ruby core until version 1.9.1.

However, the relevant part of rubygems/format.rb that consults RUBY_VERSION to decide which syntax to use simply checks for RUBY_VERSION > ‘1.9’.
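
Since ‘1.9.0’ sorts after ‘1.9’ in a string comparison, that check wrongly matches Ruby 1.9.0 too. The shape of the fix is just a tighter guard; a sketch (illustrative only, not the verbatim file):

# Illustrative of the guard rubygems/format.rb needs around line 50:
# the 'rb:ascii-8bit' open-mode syntax only exists in ruby >= 1.9.1.
def open_gem_file(file_path)
  if RUBY_VERSION > '1.9.0'   # stock code tests > '1.9', which wrongly matches 1.9.0
    File.open(file_path, 'rb:ascii-8bit')
  else
    File.open(file_path, 'rb')
  end
end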

With that patch in place (checking for RUBY_VERSION > '1.9.0'), you’ll make some progress, but you’ll get stuck again with a similar error:

/usr/local/lib/site_ruby/1.9/rubygems/source_index.rb:91:in `IO#read': can't convert Hash into Integer (TypeError)

Although this one looks quite different, it’s the same effect at play: rubygems/source_index.rb checks for RUBY_VERSION < '1.9', when it really ought to check for RUBY_VERSION < '1.9.1', because it uses a 1.9.1+ specific API (specifying the encoding in IO#read).

I’ve added bugs at RubyForge: [#28154] Gem.binary_mode version test for Ruby 1.9 sets invalid rb:ascii-8bit mode and [#28155] source_index.rb uses 1.9.1 IO#read API under RUBY_VERSION 1.9.0; other 1.9.0 issues.

Hot-patching may be required if you find yourself needing to get gem 1.3.6 working under Ruby 1.9.0. (For example, this is the Ruby 1.9 that comes packaged with Debian 4.0.) If so, it should be safe to make the changes I’ve indicated above. I’m hesitating to provide a patch file as I am not certain that I’ve got this 100% right; YMMV. Thank goodness for open source.

That said, WTF is a core API item like IO#read doing changing between point versions, without it being loudly obvious in the docs??