rlucas.net: The Next Generation

Postgresql speedup of length measurements: use octet_length

I was looking at some very rough metrics of JSON blobs stored in Postgres, mainly doing some stats on the total size of the blob. What I really cared about was the amount of data coming back from an API call. The exact numbers not so much; I mainly cared if an API usually sent back 50 kilobytes of JSON but today was sending 2 bytes (or 2 megabytes) — which is about the range of sizes of blobs I was working with.

Naively, I used

SELECT source_id, sum(length(json_blob_serialized)) FROM my_table WHERE ... GROUP BY source_id;

But for larger (> 10k rows) aggregates, I was running into performance issues, up to minutes-long waits.

Turns out that length(text) is a slow function, or at least in the mix of locales and encodings I am dealing with, it is.

Substituting octet_length(text) was a 100x speedup. Be advised.
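Concretely, the drop-in fix is the same query with the one function swapped (the WHERE condition is still elided, as above):

SELECT source_id, sum(octet_length(json_blob_serialized)) FROM my_table WHERE ... GROUP BY source_id;

octet_length counts bytes rather than characters, so it skips the per-character multibyte/locale work that length(text) has to do; a byte count is arguably the better measure anyway when the question is “how much data came back.”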

Finally, I wouldn’t have known this necessarily without a process-of-elimination over the aggregates I was calculating in the query, COMBINED with the use of “EXPLAIN ANALYZE VERBOSE.” Remember to add “VERBOSE” or else you won’t be given information on when, where, and how the aggregates get calculated.
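For reference, the invocation is just the EXPLAIN prefix on the same query (PostgreSQL 9.0+ also accepts the parenthesized form EXPLAIN (ANALYZE, VERBOSE)):

EXPLAIN ANALYZE VERBOSE SELECT source_id, sum(octet_length(json_blob_serialized)) FROM my_table WHERE ... GROUP BY source_id;

With VERBOSE, each plan node’s output expressions are printed, so you can see which node computes the aggregate and, with ANALYZE’s per-node timings, where the time actually goes.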

“AttributeError: exp” from numpy when calling predict_proba()

If you’ve been trying out different types of scikit-learn classifier algorithms, and have been merrily going along calling predict(X) and predict_proba(X) on various classifiers (e.g. DecisionTreeClassifier, RandomForestClassifier), you might decide to try something else (like LogisticRegression), which will seem to work for calling predict(X) but maddeningly fails with “AttributeError: exp”.

If you follow the stack trace and the error occurs when np.exp is being called within _predict_proba_lr, you might have my problem, namely, some uncast booleans within your X. This causes the predict_proba method to fail with linear models (though not with the tree-based classifiers mentioned above).

You can fix this by converting your X to floats with X.astype(float) explicitly before passing X to predict_proba. Careful; if you have values that ACTUALLY don’t cast to float intelligently this will probably do terrible things to your model.
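In context, the workaround is a one-liner (clf and X here are hypothetical stand-ins for your fitted linear model and feature matrix):

X_float = X.astype(float)           # bools become 0.0 / 1.0
probs = clf.predict_proba(X_float)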

If you formed up your X as an np.array natively, you probably won’t see this behavior, since np.array’s constructor converts the bools for you (upcasting everything to a single numeric dtype). But if you started with a pd.DataFrame or pd.Series, *even if you converted it to an np.array*, the mixed ints and bools are stored as objects, and the bools will bomb out in predict_proba.

(np = numpy, pd = pandas by convention)

import numpy as np
import pandas as pd
a = np.array([1, 2, 3, True, False])   # bools upcast to ints at construction
b = pd.Series([1, 2, 3, True, False])  # mixed ints and bools: dtype=object
c = np.array(b)                        # still dtype=object
d = c.astype(float)                    # explicit cast: bools become 0.0 / 1.0

## native np.array is OK, because the bools were converted:
np.exp(a)
Out[64]: array([  2.71828183,   7.3890561 ,  20.08553692,   2.71828183,   1.        ])

## a pd.Series can usually be used where an np.array can, but not when exp can't handle bools:
np.exp(b)
Traceback (most recent call last):
  File "/opt/local/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/IPython/core/interactiveshell.py", line 2883, in run_code
    exec(code_obj, self.user_global_ns, self.user_ns)
  File "", line 1, in <module>
AttributeError: exp

## merely creating an explicit np.array from the Series first won't solve the problem:
np.exp(c)
Traceback (most recent call last):
  File "/opt/local/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/IPython/core/interactiveshell.py", line 2883, in run_code
    exec(code_obj, self.user_global_ns, self.user_ns)
  File "", line 1, in <module>
AttributeError: exp

## explicit cast to float works:
np.exp(d)
Out[67]: array([  2.71828183,   7.3890561 ,  20.08553692,   2.71828183,   1.        ])

Vagrant/Ansible SSH problem with older OpenSSH lacking ControlPersist

Intro; skip to the meat below if you found this through a Google search on the error about “-c ssh … ControlPersist”.

Vagrant is a Ruby-based abstraction layer that manages a mixture of VirtualBox (or other VM software), SSH, and “Provisioners” like Chef, Puppet, or in our case, Ansible. It’s meant mainly for setting up development and testing environments consistently; it lets you ignore the vagaries of each dev’s local box mess by actually running and testing the software inside a well-defined, consistently configured virtual machine.

Ansible is a Python-based configuration management tool with a much more straightforward “up and running” learning curve than its ostensible peers. Notably, it is generally “agentless”: all of the Ansible software runs on your local box (your devops guy’s box at world headquarters) without any part of Ansible being installed on your remote nodes. The actual work of configuring each node is done mainly by opening SSH connections to each box and running generic, non-Ansible software (such as that remote box’s package manager).

Vagrant can invoke Ansible as a provisioner. Ansible can also be invoked on its own to provision “real” machines, like EC2 instances or (does anyone even have those anymore?) actual physical machines.

The holy grail of devops here would be to re-use your dev, test, and prod configs, varying them only in the necessary parts. Ansible is modular enough to do this, and so in theory, you do something very schematically like this:

core_software: x, y, z ...
some_debugging_stuff: a, b, c ...
real_live_security_stuff: m, n, o ...

dev_vm: core_software, some_debugging_stuff
prod_box: core_software, real_live_security_stuff

You can now invoke Vagrant to create a VM and provision it with Ansible to give you a “dev_vm”, while directly using Ansible to create a “prod_box” at your data center. Theoretically, you now have some assurance that the two boxen have exactly the same core software of x, y, z.

The meat of it

Ansible’s heavy reliance upon outbound SSH connections from your local box is OK but throws some kinks in the works when you try to use identical Vagrant + Ansible configurations on two machines that do not share identical software versions like SSH (say, one brand-new Mac OS X and one older Linux). Specifically, you may see this fatal error while performing a “vagrant up” or “vagrant provision”:

using -c ssh on certain older ssh versions may not support ControlPersist

If it’s not clear, that error is coming from Ansible, which is sanity-checking the SSH options that Vagrant is asking it to use. Test your local system with:

$ man ssh | grep ControlPersist

If that fails, you have an SSH which is too old to support the ControlPersist option, but Ansible thinks it’s being asked to use that. (ControlPersist is used by default by recent Ansible versions to speed up the reinvocation of SSH connections, since Ansible uses lots and lots and lots of them.)

Optional: to help you understand and debug this, you’ll want more verbose output from Ansible. You can get it through the Vagrantfile you’re using by giving the option:

ansible.verbose = "vvv"

The error message you get from Ansible will suggest that you set ANSIBLE_SSH_ARGS="" as a remedy. If you try this on the command line while invoking Vagrant, merely by prepending it as in ANSIBLE_SSH_ARGS="" vagrant provision, it won’t work; the “-vvv” output from Ansible will show that it has still been invoked with a long list of ANSIBLE_SSH_ARGS including the troublesome ControlPersist.

Further Googling may suggest that you can override the ssh args either in an “ansible.cfg” file (in one of /etc/ansible/ansible.cfg, ./ansible.cfg, or ~/.ansible.cfg) or in the Vagrantfile with ansible.raw_ssh_args = []. It is possible that none of these will seem to work; read on.

After much stomping around and examining of the Vagrant source as of 4ef996d at https://github.com/mitchellh/vagrant/blob/master/plugins/provisioners/ansible/provisioner.rb, the problem became clear. Vagrant’s “get_ansible_ssh_args” function WILL permit you to set an empty list of ssh_args (thereby leading Vagrant to set ANSIBLE_SSH_ARGS="" for you), but only if NONE of the following are set: an array (more than one key) in config.ssh.private_key_path, true in config.ssh.forward_agent, or ANYTHING in raw_ssh_args. If anything is set in any of those, the ControlMaster and ControlPersist options will be added.

It’s kind of vexing because you don’t expect that setting forward_agent will cause these other things always to be set, even when you have tried explicitly to set raw_ssh_args to empty.

So in sum:

  • No ssh_args in any of the ansible.cfg files that may be looked at
  • Vagrantfile: ensure no more than one private ssh key in config.ssh.private_key_path
  • Vagrantfile: ensure config.ssh.forward_agent=false
  • Vagrantfile: ensure ansible block has ansible.raw_ssh_args=[]
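Putting the Vagrantfile side of that together, a minimal sketch (the box name and playbook path are hypothetical placeholders; the ssh and ansible settings are the ones that matter):

Vagrant.configure("2") do |config|
  config.vm.box = "some-box"                # hypothetical
  config.ssh.forward_agent = false          # true would force ControlMaster/ControlPersist
  # config.ssh.private_key_path: leave unset, or set a single key (not an array)

  config.vm.provision "ansible" do |ansible|
    ansible.playbook = "playbook.yml"       # hypothetical
    ansible.verbose = "vvv"                 # optional, for debugging
    ansible.raw_ssh_args = []               # lets Vagrant emit ANSIBLE_SSH_ARGS=""
  end
end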

(My problem was with OpenSSH 5.3p1 (Debian-3ubuntu7.1), Ansible 1.7.1, and Vagrant 1.6.3, and was specifically triggered by config.ssh.forward_agent=true.)

That solves it for me — I am happy with it because it’s almost 100% used for local Vagrant VMs. I have yet to see how managing remote boxes works from my older Linux machine with the ControlPersist optimization removed (though remember, in the case that you’re using Ansible directly and not through Vagrant, the above fix won’t apply.)

Pandas merge woes on MultiIndex, solved

Perhaps you are data munging in Python, using “pandas”, and you attempt to use DataFrame.merge() in order to put two DataFrames together.

“cannot join with no level specified and no overlapping names”

This happens when you have two DataFrames, one of which has a MultiIndex, that *could* play nicely together (e.g. you have “year, month” on the left and just “year” on the right) *but do not have index names set.*

You’ll need to explicitly set names with

leftdf.index.levels[0].name = "onename"
leftdf.index.levels[1].name = "twoname"
rightdf.index.levels[0].name = "onename"

Alternatively, you can make it work if you reindex the right-hand side by the left hand side:

rightdf2 = rightdf.reindex(index=leftdf.index, level=0) ## NOTE Assignment, does not modify rightdf in-place
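Here’s a self-contained sketch of the failure and the fix (frame contents made up for illustration; newer pandas prefers assigning to index.names rather than to levels[...].name):

import pandas as pd

left = pd.DataFrame(
    {"sales": [10, 20, 30]},
    index=pd.MultiIndex.from_tuples([(2013, 1), (2013, 2), (2014, 1)]),
)
right = pd.DataFrame({"target": [100, 200]}, index=[2013, 2014])

## left.join(right) here raises: cannot join with no level specified and no overlapping names

left.index.names = ["year", "month"]  # name the MultiIndex levels
right.index.name = "year"             # name the plain index to match
joined = left.join(right)             # pandas now lines up "year" on both sides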

CHCBP-013 describes unlimited former spouse coverage under CHCBP

(This is extremely specialized subject material. If you found it while searching, you probably understand this, but if you’re just a curious reader, you might want to skip it.)

The Continued Health Care Benefit Program is analogous to COBRA for former TRICARE-covered persons; it is mostly a transitional way to get TRICARE-like (TRICARE less military facilities) benefits self-paid after one ceases to be eligible for TRICARE itself.

Generally the CHCBP eligibility period is limited. However, in the case of former spouses of servicemembers who have not remarried before age 55, it is possible to qualify for unlimited (lifetime) eligibility. This is obvious from law and from published stuff (e.g. continuing legal education materials for the tiny subset of lawyers for whose clients this is a very important topic), but it is nearly impossible to get anything from the “horse’s mouth,” that is, the actual CHCBP plan administrator, which is Humana Military Healthcare Services (Humana Military).

If you call Humana Military at 1-800-444-5445 and ask for the rules regarding former spouse unlimited eligibility, they will punt and tell you it’s something that is decided only at the point at which the normal 3-year eligibility has expired. If you persist and ask how it’s decided, they’ll tell you that there is a questionnaire which is sent, but that they don’t have access to it and can’t send a copy. If, incredulous, you insist that some person, perhaps that person’s manager, does in fact know where the questionnaire is, since it’s obviously already written and gets sent out to people all the time, you will get transferred to a kindly manager who takes your FAX number and swears up and down that they’ll send it to you tomorrow morning since the office is about to close.

If you’re a real ornery gadfly, though, and you’re ready to start writing letters to your congressman etc., you might try sending a letter to Humana Military asking for this information and documenting your attempts to get it and the promises which were made and unfulfilled about sending it to you before. If you do all this, they will mail you a redacted copy of an actual form CHCBP-13, which had been sent to an actual former spouse whose eligibility was expiring. (Maybe, in fact, the first representative was truthful and this is not a document that they have ready, and they write up a new one every time and so the only way to send it out is to take the latest one and redact the policyholder’s personal info. I doubt it.) (For this courtesy I am grateful, O unnamed mail-replier at Humana Military, but the policy of keeping this information completely off the public Web is either backwards or sinister.)

Note that the operative requirements are:

1. A “signed statement of assurance that you have not remarried before the age of 55.”
2. Proof (the standard of which is left to the imagination) that you were “enrolled in, or covered by, an approved health benefits plan under Chapter 55, Title 10, United States Code as the dependent of a retiree at any time during the 18-month period before the date of the divorce, dissolution, or annulment (Both TRICARE and CHCPB would qualify as such a plan).” [sic — Humana’s language implies that the enrollee must have been a dependent of a “retiree” but the actual language of 10 USC 55 sec. 1078a (b) (3) talks about a “member or former member” of the armed forces]
3. “A copy of the final divorce decree or other authoritative evidence” that you are actually receiving or have a court order to receive part of the retired or retainer pay of the service member, or have a written agreement that the service member will elect to provide you an annuity. [DO NOT rely on this nor on Humana’s exact language, for this, get your ducks in a row with the statute etc. — it’s slightly complicated.]
4. The executed (signed) renewal notice page of CHCBP-013 and a check for the next premium payable to “United States Treasury.”

Given that this is a hugely important item to a few (but maybe not all that few) people out there, and that it relates to the execution of very clear federal law, I’m rather shocked that there was no mention ***anywhere on the Internet prior to today(!!!)*** of CHCBP-013 and its contents and requirements.

So, with no further ado, I present you a redacted copy of form letter CHCBP-13, which shows the actual procedural requirements for obtaining unlimited former spouse CHCBP coverage.

Django auto_now types wreck get_or_create functionality

I recently had occasion to lazily use the Django “get_or_create” convenience method to instantiate some database ORM records. This worked fine the first time through, and I didn’t have to write the tedious “query, check, then insert” rigamarole. So far so good. These were new records so the “or_create” part was operative.

Then, while actually testing the “get_” part by running over the same input dataset, I noticed it was nearly as slow as the INSERTs, even though it should have been doing indexed SELECTs over a modest, in-memory table. Odd. I checked the debug output from Django and discovered that get_or_create was invoking UPDATE calls.

The only field it was updating was:

updated_at = models.DateTimeField(auto_now=True)

Thanks, Django. You just created a tautological truth. It was updated at that datetime because … you updated it at that datetime.

Interestingly, its sister field did NOT get updated:

created_at = models.DateTimeField(auto_now_add=True, editable=False)

This field, true to its args, was not editable, even by whatever evil gnome was going around editing the updated_at field.

Recommendation: if you want actual timestamps in your database, why don’t you use a real database and put a trigger on it? ORM-layer nonsense is a surefire way to have everything FUBAR as soon as someone drops to raw SQL for performance or clarity.
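For the record, here’s roughly what that looks like in PostgreSQL (a sketch; the table name is a hypothetical stand-in for whatever table Django generated for a model like the above):

CREATE OR REPLACE FUNCTION set_updated_at() RETURNS trigger AS $$
BEGIN
  NEW.updated_at := now();
  RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER myapp_mymodel_updated_at
  BEFORE UPDATE ON myapp_mymodel
  FOR EACH ROW
  EXECUTE PROCEDURE set_updated_at();

The timestamp now lives with the data: anyone who UPDATEs the row, via the ORM or raw SQL, gets the column maintained for free.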

Don’t Bother with S3FS for Mounting Amazon S3 on Mac OS X (2014)

As of January 2014, it’s not worth bothering with the “s3fs” software for mounting Amazon S3 buckets on your local filesystem.

The idea of s3fs is simple and great: use FUSE (Filesystem in Userspace) to “mount” the S3 bucket the same way you’d mount, say, an NFS drive or a partition of a disk. Manipulate the files, let s3fs sync it in the background. Sure, you lose some reliability, but we’ve had NFS and SMB and all kinds of somewhat-latent-over-an-unreliable-link-but-mostly-with-filesystem-semantics software for decades now, right?

Well, forget it. s3fs as of January 2014, used on Mac OS X and against an existing set of buckets, is so utterly unreliable as to be useless.

First, s3fs cannot “see” existing folders. This is because folders are a bit of a hack on S3 and weren’t done in a standardized, documented way when s3fs was first written. However, since then, at least two other ways of creating folders on S3 have gained currency: an older, now-deprecated way used by the Firefox plugin S3Fox, and a newer de facto standard used by Amazon’s own management dashboard/browser for S3. Whatever the historical reason, you can’t see the existing folders.

Second, although mailing list posts suggest that you can theoretically *create* an s3fs folder with the same name as your existing folder, after which its contents will magically become visible, empirically something rather different happens. A mkdir on the s3fs mount leads to the creation of a mangled *regular* file on the S3 dashboard. Now you have two “folders,” each of which is unusable as a folder on the other (S3 dashboard, or s3fs) system. Argh.

Finally, you might say, ok, fine, this will just make me use flat-level, non-folder-nested choices about my S3 architecture. (Leave aside for the moment that the very reason you want to use S3 is probably exactly so that you can have lots and lots and lots and lots (like 10^8+) files in a way that will cripple any reasonable filesystem tools that see them all in one “directory.”) However, even that doesn’t work reliably, as s3fs demonstrated today when it went into “write-only mode” such that I could create files locally that would show up on S3 but that subsequently would disappear from my local filesystem. WTF?!?

The unfortunate answer is: S3 is not a filesystem, and it was created by people who are smarter than you, and who have very craftily calculated that if you are forced to weave in the S3 API and its limitations into your application code, you will have a damn hard time ripping it out of your infrastructure, and so they are going to have you do just that. They do not want it to be used as a filesystem, and so guess what: you are *not* going to use it as a filesystem. Not gonna happen.

Say what you will, but our hometown heroes here in Seattle are no dummies. Embrace, extend, extinguish, baby. Not just for OS companies anymore…

(Yes, I know that s3fs is not an Amazon project. But it appears to be the community’s best attempt to put filesystem semantics around S3, and that attempt has been rejected by AWS’s immune system.)

Never Trust Microsoft Products On Mac OS X, Again

Something in OS X 10.7+ and MS Office 2008 for Mac has made the entire Office suite, apparently, stop respecting symlinks.

I know that Windows took until well into the 21st century to really deal with symlinks even halfway, but come on. It worked before. There are some directories into which I routinely save document files which are literally not reachable by click navigation any other way but through symlinks.

If you click on the symlink in the “Save” dialog, it will change the name of the file you’re saving to the name of the symlink and generally fail all over itself.

Never Trust Microsoft Products On Mac OS X

In Microsoft Word 2008 for Mac OS X, if you open a “Find” dialog, and if it has focus, it will capture cmd-S (the universal “Save” command) and cause it to toggle the “sounds like” option in advanced find options. It will sort of capture cmd-W (the universal “close Window,” which will close the Find dialog). It will not capture cmd-Q (the universal “Quit”), which will quit the program and close all its windows, assuming your document has been saved.

Which you thought you’d been doing by hitting cmd-S. And then you wonder why your search results were so weird.

Thanks again, Mac BU!

LinkedIn CPC Ads behaving weirdly

This June/July, over at SimpleVerity, we’ve been running some very tiny, super-targeted CPC ad campaigns on LinkedIn to recruit trial users of our system. We’ve noticed the following odd behaviors, and I wanted to ask the Web if anyone else had seen this:

– Lower bids (bottom of the range) outperform higher bids on both CPC and conversion (a “learn more” form on my landing page). Weird, but perhaps explicable; it could be that for our particular appeal, the “lower ranked” ad slots work better.

– Temporary “spike” in CPC, and BIG spike in conversion, right after a change to the campaign configuration. This is strange, because it seems to result not only in a change in how the LinkedIn users react (could be a result of the ad rotation algorithms after a change), but also in how they behave once they get to my landing page. No theory for how this works.

– Auto-Optimize among campaigns seems to prematurely start making optimization decisions before reaching anything close to statistical significance. For example, I was running a campaign where the long-term equilibrium average CTR seemed to be about 0.9% (not bad, right?). I had 15 variations running. However, the auto-optimize algorithm started heavily favoring a few ads before all variations had received even 1,000 impressions. That’s just crazy; the ads being disfavored were, at that point, roughly 50/50 bets to exceed the average, yet the algorithm was “optimizing” them into oblivion (see the back-of-envelope sketch below).
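To put numbers on that: with under 1,000 impressions per variation, a CTR estimate around 0.9% is still extremely noisy. A back-of-envelope calculation (simple binomial model; the counts are illustrative, not our actual campaign data):

import math

def ctr_ci(clicks, impressions, z=1.96):
    # Normal-approximation 95% confidence interval for a click-through rate.
    p = float(clicks) / impressions
    half_width = z * math.sqrt(p * (1 - p) / impressions)
    return p - half_width, p + half_width

lo, hi = ctr_ci(9, 1000)  # 9 clicks on 1,000 impressions = 0.9% observed CTR
print("95% CI: {:.2%} to {:.2%}".format(lo, hi))  # roughly 0.31% to 1.49%

An interval that wide easily straddles the 0.9% long-run average, so favoring or killing variations at that sample size is premature.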

My takeaway is that, at least when working with heavily targeted LinkedIn CPC ads, you have to stay very, very close to the “metal” and pay attention to oddities. Don’t assume that paying the highest bid (or even paying more than the bare minimum CPC price) will work better; relentlessly experiment with spend and bid. Also, don’t let auto-optimize kick in until you are comfortable that you are approaching significance on the expected CTR (at the very least, the CTR that you are hoping to “beat”).