s3put just stops working with “broken pipe”

July 21st, 2015

So your cron job, which has been dutifully stuffing your backups into S3 nightly or hourly or whatever, just stops working. s3put breaks with the unhelpful complaint, “broken pipe.”

You can try running s3put with “--debug 2” added to your flags, and watch the lower protocol-level stuff seem to go along just fine until it barfs with the same error.

Check the size of your file. If you’ve got a backup that’s been slowly creeping up in size and is now over 5.0 GB, that’s your issue. AWS apparently has a 5 GB limit on a single-part HTTP PUT to S3.

s3put accepts a “--multipart” option, but only if it can find the necessary Python libraries, including “filechunkio,” so install filechunkio (e.g. “pip install filechunkio”) and try again. With any luck, you can just add --multipart to your s3put command and it will Just Work.
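Under the hood, the multipart path carves the file into chunk-sized pieces (that’s what filechunkio is for) and uploads each piece as a separate part. The arithmetic is simple; here’s a sketch of it in Python (the chunk size and file size below are made-up numbers, not s3put’s actual defaults):

```python
import math

def part_ranges(file_size, chunk_size):
    """Yield (offset, length) pairs that tile a file of file_size bytes."""
    part_count = int(math.ceil(file_size / float(chunk_size)))
    for i in range(part_count):
        offset = i * chunk_size
        yield offset, min(chunk_size, file_size - offset)

# e.g. a 5.3 GB backup in 50 MB parts:
parts = list(part_ranges(file_size=5300 * 1024 ** 2, chunk_size=50 * 1024 ** 2))
print(len(parts))    # 106 parts
print(parts[-1])     # the final (offset, length) pair
```

Each (offset, length) pair is what a FileChunkIO-style wrapper would read and PUT as one part.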

Python matrix initialization gotcha

June 30th, 2015

If you want to spin up a list of lists — a poor man’s matrix — in Python, you may want to initialize it first. That way you can use indices to point directly (random access) into the matrix, with something like:

lol[1][2][0] = 'foo'

without having to worry whether you’ve managed to make the matrix “big enough” through appending, looping, whatever.

If you are an idiot like me, you will skim StackOverflow and come away with the naive use of the “*” operator to create lists.

In [1]: lol = [[[None]*1]*3]*2

In [2]: lol
Out[2]: [[[None], [None], [None]], [[None], [None], [None]]]

That seems to work fine for our case — a small 3-D matrix (trivial in the third dimension I admit) initialized to None, the pseudo-undefined object of Python. Sounds good. Wait…

In [3]: lol[0][0][0] = 'asdf'

In [4]: lol
Out[4]: [[['asdf'], ['asdf'], ['asdf']], [['asdf'], ['asdf'], ['asdf']]]

Um. Since the “*” operator copies references rather than creating fresh inner lists, the same list objects were assigned to each of the slots in the matrix, so changing an element through one reference changes it everywhere.
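You can verify the aliasing directly: the replicated sublists are literally the same objects, which an identity check confirms (a minimal sketch):

```python
# Build the broken "matrix" with the * operator:
lol = [[[None] * 1] * 3] * 2

# The two rows are one and the same list object, and so are the
# three cells within each row:
print(lol[0] is lol[1])        # True
print(lol[0][0] is lol[0][1])  # True

# Which is why a single assignment shows up everywhere:
lol[0][0][0] = 'asdf'
print(lol)
```

(Note that “[None] * 1” itself is harmless: replicating immutable objects like None is fine; it’s the replicated *lists* that bite you.)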


To do what you actually want to do, use the list comprehension syntax and leave the monstrosity of the * operator alone:

In [21]: lolfixed = [[[None for k in range(1)] for j in range(3)] for i in range(2)]

In [22]: lolfixed
Out[22]: [[[None], [None], [None]], [[None], [None], [None]]]

In [23]: lolfixed[0][0][0] = 'asdf'

In [24]: lolfixed
Out[24]: [[['asdf'], [None], [None]], [[None], [None], [None]]]

Danielle Morrill is mostly right about VC deal sourcing – here’s how she’s wrong.

March 13th, 2015

Danielle Morrill has put out a fascinating TechCrunch article about the art vs. science of how VCs “source deals” (find investments). It’s a rare candid peek into a side of venture capital, and from a perspective, that most writing from outside the industry never reaches. Danielle is spot on with most of her article, but there are a few glaring holes in the picture she paints.

First, what’s right:

1. Old guard vs. new guard. Danielle is absolutely correct that in VC, as in most professions, the older cohort is in conflict with the newer, rising cohort. And in general, the older cohort is the group that controls the power and the economics: by definition, they’re the survivors who’ve made it to late-career stage and have done pretty well, and so they will tend to bring a status quo bias. Change always threatens the status quo — even in an industry that outwardly worships “disruption.”

2. Cargo cult or “pigeon superstition.” Danielle hits the nail on the head: most VCs — firms and individuals — refer to “pattern matching” and rely on it to a degree that is often indistinguishable from superstition. Generals are often guilty of “fighting the last war” instead of seeing new situations for what they are, and the same is true of investors.

Now, Danielle is a promoter and a hustler (and I mean both terms in the good sense), and it’s natural for her to think about the world — and to critique the VC industry — in terms of an aggressive outbound sales process. But there’s a problem with this approach. VC is not sales, and seeing only through a sales lens will give you a distorted view.

Here’s what Danielle’s article missed by a mile:

1. Deal Types, and why sourcing doesn’t suffice. From the investor’s perspective, there are two types of VC deals.

  • Type 1: Deals that will get done whether you do them or not.
  • Type 2: Deals that won’t necessarily get done if you don’t do them.

Type 1 deals are “obvious.” Given decent market conditions (e.g. not in a crash/panic), they are going to get funded by *somebody*. For example: I just heard a pitch from two Stanford grads, one of whom had already started and sold a company, and whose traffic was growing 10-12% a week in a reasonably hot sector. They want a reasonably sized and priced Series A round. That deal is going to happen, no matter what.

If you want to invest in a Type 1 deal, you have to *win* it. You probably have to “source” it to win it, but that alone won’t do. You either need to pay a higher price (valuation), and/or bring more value to the table (domain expertise, industry connections, personal competence / trust).

    • Pricing: To be able to profitably pay a higher valuation, you need to have a pricing knowledge edge (which is, in more traditional areas of finance, *the* edge — why do you think Wall Street pays all that money for high-performance computing?). To get that edge, you must know something that other investors don’t about the industry, technology, or people.
    • Value-add: To bring more value to the table, you need to have something rare and desirable, and which you can *apply* from the board / investor level. That typically means you’ve previously made deep “investment” of skill, connections, and knowledge in the relevant industry, technology, or people.

Winning Type 1 deals isn’t about sourcing. It’s about front-loaded work: work spent building up knowledge, connections, reputation, skill, etc., and then demonstrating and exploiting that front-loaded work to add value.

Type 2 deals aren’t obvious. Maybe the team is somehow incomplete; maybe the sector is out of favor. Maybe there’s “hair on the deal,” as we say when things are complicated. Maybe there’s no actual company yet, as happens with spin-outs or “incubated” ideas.

If you want to invest in a Type 2 deal, you have to *build* it. It’s true you need to “source” it, but oftentimes “it” doesn’t even look like a deal when you first learn of it. Maybe that means building up a team or negotiating a technology license from a corporate parent. Maybe it means helping make crucial first customer intros (and watching the results). Maybe it means “yak shaving,” getting some of that woolly hair off of the deal and cleaning it up. (I like that metaphor better than polishing the diamond-in-the-rough, but same idea.)

Almost always, it means building a syndicate. That starts with building consensus and credibility with the entrepreneurs and within one’s own firm. Then, it usually means building the co-investor relationship and trust needed to get the round filled out. (For Type 1 deals, it’ll tend to be easy to win over partners and co-investors. Not so Type 2.)

Winning Type 2 deals isn’t about sourcing. It’s about back-loaded work: building up a fundable entity and a syndicate to support it.

2. Pattern matching and non-obvious rationality.

The way that VCs approach “pattern matching” seems irrational when Danielle describes it. Why do those behaviors persist? It’s because they’re rational, in a non-obvious way.

If you lose money on a deal, you’ll be asked “what happened.” If your answer is “X,” and you’ve never encountered X before, there’s a narrative tidiness to the loss, especially if you resolve not to let X happen again.

If you lose money on a subsequent deal, and you also say it’s due to “X,” then things start to get problematic. Losing money on X twice starts to sound like folly. What’s the George W. Bush line? “Fool me twice … um, … you can’t get fooled again.”

If you lose money three times due to “X,” well, then, you will have real problems explaining to your upstream investors why that was a good use of their money. (Even if, in a Platonic, rational sense, it was.)

In an early-stage tech world swirling with risks, so many you can’t possibly control them all, you grab hold of a few risk factors that you *can* control — ones which, if they bite you again, will carry outsized career / reputation / longevity risk for you. And that gets called “pattern matching.”

(The same applies on the upside. If you attribute making money to Y once, it’s nice. But if you make money twice in a row, and claim that it was due to Y, and your early identification and exploitation of Y, you look like a prescient investing genius.)

Now, I don’t believe that the “X factors” and “Y factors” are all meaningless, or that pattern matching is a worthless idea. But even if you did believe that (overly cynical) idea, given the reasoning above, you should still consider it rational for VCs to behave exactly the same regarding “pattern recognition.”

3. Teams do work.

Although Danielle is right that sourcing, winning, and, ultimately, exiting profitable deals is the formula for individual success in VC, that framing ignores the very real role that firm “franchise” and teamwork can and should play.

Throwing ambitious investor types into the same ring and letting them fight it out like wild dogs isn’t the route to VC firm success. Well, in certain markets it probably works very well — but it’s incredibly wasteful of time and talent to have unmitigated, head-on competition between a firm’s own investors.

No. In fact, teams can and do work. Danielle’s own article manages to quote both Warren Buffett and his longtime partner, the less well-known but still mind-bogglingly successful investor Charlie Munger. Do you think Berkshire Hathaway board meetings are dominated by infighting as to whether Charlie or Warren deserves credit for the latest M&A deal? Hell, no.

Teams work in investing when, between teammates, there’s enough similarity to ease the building of mutual trust and respect, but enough difference to bring something new and useful to the shared perspective. That can be a difference in geographic, sector, or stage focus, as is classically the case. Or, I would argue, it can even be a difference in the part of the lifecycle of a VC investment that best suits a particular investor.

Let’s do a thought experiment. Let’s assume we have two partners in our firm, GedankenVC: Danielle Morrill’s clone, Danielle2, and Charlie Munger’s clone, Charlie2.

Assume there’s a hot new startup out there, let’s call it Software-Defined Uber for Shoes (SDUfS), led by a young charismatic team, who’s intent on building out a social media presence, throwing parties and events to attract energetic new employees, handing out free custom shoes around San Francisco, and otherwise making the best of their recent oversubscribed $2.5 M seed round.

Who’s going to source that deal? Danielle2 or Charlie2? (Sorry, Charlie.)

Now, fast-forward 3.5 years, and everything is amazing, if complicated. They’re on three continents, Goldman Sachs has done a private placement from their private client group, bringing equity capital raised to $600 M, and they’ve floated the first ever tranche of $750 M in Software-Defined Shoe Bonds (SDSBs). Revenues are forecast at $3 B next year, but there’s trouble going public because of regulatory uncertainty around the Argentine government’s treatment of their main on-demand shoe 3-D printing factory in suburban Buenos Aires, and the complicated capital structure. Underwriters and investors are skittish about ponying up for the IPO.

Who should be on the board of directors of SDUfS? Danielle2 or Charlie2? (No offense, Danielle.)

GedankenVC will do best if the person who can source that deal sources it, and if the person who can manage that complex cap structure to exit manages it. Teams do work.

(Disclaimer: I help “source deals” for Seattle-based B2B VC firm, Voyager Capital, but on this blog I speak only for myself.)

Avoid sequential scan in PostgreSQL link table with highly variant density

January 9th, 2015

I had a particularly knotty problem today. I have two tables:

data_file (id, filename, source, stuff)
extracted_item (id, item_name, stuff)

A data file comes in and we store mention of it in data_file. The file can have zero, or more commonly, some finite positive number of items in it. We extract some data, and store those extracted items in, you guessed it, extracted_item.

There are tens of sources, and over time, tens of thousands of data files processed. No one source accounted for more than, say, 10% of the extracted items.

Now, sometimes the same extracted item appears in more than one file. We don’t want to store it twice, so what we have is the classic “link table,” “junction table,” or “many-to-many table,” thus:

data_file_extracted_item_link (data_file_id, extracted_item_id)

There are of course indices on both data_file_id and extracted_item_id.

Now, most data files have a tiny few items (1 is the modal number of items per file), but a couple of my data sources send files with almost 1 million items per file. Soon my link table grew to over 100 million entries.

When I went to do a metrics query like this:

select count(distinct data_file.filename),
count(data_file_extracted_item_link.*)
from data_file
left join data_file_extracted_item_link
on data_file.id = data_file_extracted_item_link.data_file_id
where data_file.source = $1 and [SOME OTHER CONDITIONS]

I would sometimes get instant (40 ms) responses, and sometimes get minutes-long responses. It depended upon the conditions and the name of the source, or so it seemed.

EXPLAIN told me that sometimes the PostgreSQL planner was choosing a sequential scan (seqscan) of the link table, with its 100 million rows. This was absurd, since 1. there were indices available to scan, and 2. no source ever accounted for more than a few percent of the total link table entries.

It got to the point where it was faster by orders of magnitude for me to write two separate queries instead of using a join. And I do mean “write” — I could manually write out a new query and run it in a different psql terminal minutes before Postgres could finish the 100 million + row seqscan.

When I examined pg_stats, I was shocked to find this gem:

select tablename, attname, null_frac, avg_width, n_distinct, correlation from pg_stats where tablename='data_file_extracted_item_link';
tablename | attname | null_frac | avg_width | n_distinct | correlation
data_file_extracted_item_link | extracted_item_id | 0 | 33 | 838778 | -0.0032647
data_file_extracted_item_link | data_file_id | 0 | 33 | 299 | 0.106799

What was going on? Postgres thought there were only 299 different data files represented among the 100 million rows. Therefore, when I went to look at perhaps 100 different data files from a source, the query planner sensibly thought I’d be looking at something like a third of the entire link table, and decided a seqscan was the way to go.

It turns out that this is an artifact of the way the n_distinct is estimated. For more on this, see “serious under-estimation of n_distinct for clustered distributions” http://postgresql.nabble.com/serious-under-estimation-of-n-distinct-for-clustered-distributions-td5738290.html

Make sure you have this problem, and then, if you do, you can fix it by issuing two DDL statements (be sure to put these in your DDL / migrations with adequate annotation, and be aware they are PostgreSQL-specific).

First, choose a good number for n_distinct using guidance from http://www.postgresql.org/docs/current/static/sql-altertable.html

(In a nutshell, if you don’t want to be periodically querying and adjusting this with an actual empirical number, you can choose a negative number in [-1, 0), which tells the planner to assume the number of distinct values is abs(number) multiplied by the row count; -1 means every row is distinct.)

Then, you can simply

alter table data_file_extracted_item_link alter column data_file_id set (n_distinct = -0.5);
analyze data_file_extracted_item_link;

After which, things are better:

select tablename, attname, null_frac, avg_width, n_distinct, correlation from pg_stats where tablename='data_file_extracted_item_link';
tablename | attname | null_frac | avg_width | n_distinct | correlation
data_file_extracted_item_link | extracted_item_id | 0 | 33 | 838778 | -0.0032647
data_file_extracted_item_link | data_file_id | 0 | 33 | -0.5 | 0.098922

and no more grody seqscan.

Postgresql speedup of length measurements: use octet_length

January 8th, 2015

I was looking at some very rough metrics of JSON blobs stored in Postgres, mainly doing some stats on the total size of the blob. What I really cared about was the amount of data coming back from an API call. The exact numbers not so much; I mainly cared if an API usually sent back 50 kilobytes of JSON but today was sending 2 bytes (or 2 megabytes) — which is about the range of sizes of blobs I was working with.

Naively, I used

SELECT source_id, sum(length(json_blob_serialized)) FROM my_table WHERE [CONDITIONS] GROUP BY source_id;

But for larger (> 10k rows) aggregates, I was running into performance issues, up to minutes-long waits.

Turns out that length(text) is a slow function, or at least it is in the mix of locales and encodings I am dealing with.

Substituting octet_length(text) was a 100x speedup. Be advised.
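For reference, the substituted query has the same shape (a sketch, with the conditions elided as before):

```
SELECT source_id, sum(octet_length(json_blob_serialized))
FROM my_table
WHERE [CONDITIONS]
GROUP BY source_id;
```

Note that octet_length counts bytes rather than characters, which is exactly what you want when the question is “how much data came back,” not “how many characters.”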

Finally, I wouldn’t necessarily have known this without a process of elimination over the aggregates I was calculating in the query, combined with the use of “EXPLAIN ANALYZE VERBOSE.” Remember to add “VERBOSE” or else you won’t be given information on when, where, and how the aggregates get calculated.

“AttributeError: exp” from numpy when calling predict_proba()

November 13th, 2014

If you’ve been trying out different types of scikit-learn classifier algorithms, and have been merrily going along calling predict(X) and predict_proba(X) on various classifiers (e.g. DecisionTreeClassifier, RandomForestClassifier), you might decide to try something else (like LogisticRegression), which will seem to work for calling predict(X) but maddeningly fails with “AttributeError: exp”

If you follow the stack trace and the error is when “np.exp” is being called within _predict_proba_lr, you might have my problem, namely, you have some uncast booleans within your X. This causes the predict_proba method to fail with linear models (though not with the tree-based classifiers).

You can fix this by converting your X to floats with X.astype(float) explicitly before passing X to predict_proba. Careful; if you have values that ACTUALLY don’t cast to float intelligently this will probably do terrible things to your model.
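Here’s a minimal sketch of the cast (assuming numpy and pandas; the column names and values are hypothetical):

```python
import numpy as np
import pandas as pd

# A feature matrix that came out of pandas with a raw boolean column:
X = pd.DataFrame({"age": [25.0, 32.0], "is_member": [True, False]})

# Cast everything to float before handing it to predict_proba();
# the bools become 1.0 / 0.0 and np.exp has no trouble with the result.
X_float = X.astype(float).values
print(X_float.dtype)             # float64
print(np.exp(X_float[:, 1]))     # exp over the former bool column
```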

If you formed up your X as an np.array natively, you probably don’t get this behavior, since np.array’s constructor seems to convert your bools for you. But if you started with a pd.DataFrame or pd.Series, *even if you converted it to an np.array*, the bools will be treated as objects and they will bomb out in predict_proba.

(np = numpy, pd = pandas by convention)

import numpy as np
import pandas as pd
a = np.array([1,2,3,True, False])
b = pd.Series([1,2,3,True, False])
c = np.array(b)
d = c.astype(float)

## native np.array is OK, because the bools were converted at construction:
In [64]: np.exp(a)
Out[64]: array([ 2.71828183, 7.3890561 , 20.08553692, 2.71828183, 1. ])

## pd.Series can usually be used where np.array can, but not when exp can't handle bools:
In [65]: np.exp(b)
Traceback (most recent call last):
File "/opt/local/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/IPython/core/interactiveshell.py", line 2883, in run_code
exec(code_obj, self.user_global_ns, self.user_ns)
File "", line 1, in
AttributeError: exp

## merely explicitly creating an np.array first won't solve your problem:
In [66]: np.exp(c)
Traceback (most recent call last):
File "/opt/local/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/IPython/core/interactiveshell.py", line 2883, in run_code
exec(code_obj, self.user_global_ns, self.user_ns)
File "", line 1, in
AttributeError: exp

## explicit cast to float works:
In [67]: np.exp(d)
Out[67]: array([ 2.71828183, 7.3890561 , 20.08553692, 2.71828183, 1. ])

Vagrant/Ansible SSH problem with older OpenSSH lacking ControlPersist

September 16th, 2014

Intro; skip to the meat below if you found this through a google search on the error about “-c ssh … ControlPersist”

Vagrant is a Ruby-based abstraction layer that manages a mixture of VirtualBox (or other VM software), SSH, and “Provisioners” like Chef, Puppet, or in our case, Ansible. It’s meant mainly for setting up development and testing environments consistently; it lets you ignore the vagaries of each dev’s local box mess by actually running and testing the software inside a well-defined, consistently configured virtual machine.

Ansible is a Python-based configuration management tool that has a much more straightforward “up and running” learning curve than its ostensible peers. Notably, it is generally “agentless,” in the sense that all of the Ansible software gets run on your local box (your devops guy’s box at world headquarters) without any part of Ansible being installed on your remote nodes, and the actual process of configuring each node is done mainly by opening up ssh connections to each box and running generic, non-Ansible software (such as that remote box’s package manager).

Vagrant can invoke Ansible as a provisioner. Ansible can also be invoked to provision “real” machines, like EC2 instances or (does anyone even have those anymore?) actual physical machines.

The holy grail of devops here would be to re-use your dev, test, and prod configs, varying them only in the necessary parts. Ansible is modular enough to do this, and so in theory, you do something very schematically like this:

core_software: x, y, z ...
some_debugging_stuff: a, b, c ...
real_live_security_stuff: m, n, o ...

dev_vm: core_software, some_debugging_stuff
prod_box: core_software, real_live_security_stuff

You can now invoke Vagrant to create a VM and provision it with Ansible to give you a “dev_vm”, while directly using Ansible to create a “prod_box” at your data center. Theoretically, you now have some assurance that the two boxen have exactly the same core software of x, y, z.

The meat of it

Ansible’s heavy reliance upon outbound SSH connections from your local box is OK but throws some kinks in the works when you try to use identical Vagrant + Ansible configurations on two machines that do not share identical software versions like SSH (say, one brand-new Mac OS X and one older Linux). Specifically, you may see this fatal error while performing a “vagrant up” or “vagrant provision”:

using -c ssh on certain older ssh versions may not support ControlPersist

If it’s not clear, that error is coming from Ansible which is sanity-checking the SSH options which it’s being asked to use by Vagrant. Test your local system with:

$ man ssh | grep ControlPersist

If that fails, you have an SSH which is too old to support the ControlPersist option, but Ansible thinks it’s being asked to use that. (ControlPersist is used by default by recent Ansible versions to speed up the reinvocation of SSH connections, since Ansible uses lots and lots and lots of them.)

Optional: to help you understand and debug this, you’ll need to get Ansible more verbose. You can do this through the Vagrantfile you’re using by giving the option:

ansible.verbose = "vvv"

The error message you get from Ansible will suggest that you set ANSIBLE_SSH_ARGS=”” as a remedy. If you try this on the command line while invoking Vagrant merely by prepending it, like ‘ANSIBLE_SSH_ARGS=”” vagrant provision’, it won’t work; the “-vvv” output from Ansible will show that it’s been invoked with a long list of ANSIBLE_SSH_ARGS including the troublesome ControlPersist.

Further Googling may suggest that you can override the ssh args either in an “ansible.cfg” file (in one of /etc/ansible/ansible.cfg, ./ansible.cfg, or ~/.ansible.cfg) or in the Vagrantfile with “ansible.raw_ssh_args=[]”. It is possible that none of these will seem to work; read on.

After much stomping around and examining of the Vagrant source as of 4ef996d at https://github.com/mitchellh/vagrant/blob/master/plugins/provisioners/ansible/provisioner.rb the problem became clear: Vagrant’s “get_ansible_ssh_args” function WILL permit you to set an empty list of ssh_args (thereby leading to Vagrant setting ANSIBLE_SSH_ARGS=”” for you), but only if NONE of the following are set: an array (more than one key) in config.ssh.private_key_path, true in config.ssh.forward_agent, or ANYTHING in raw_ssh_args. If any of those is set at that point, the ControlMaster and ControlPersist options will be set.

It’s kind of vexing because you don’t expect that setting forward_agent will cause these other things always to be set, even when you have tried explicitly to set raw_ssh_args to empty.

So in sum:

  • No ssh_args in any of the ansible.cfg files that may be looked at
  • Vagrantfile: ensure no more than one private ssh key in config.ssh.private_key_path
  • Vagrantfile: ensure config.ssh.forward_agent=false
  • Vagrantfile: ensure ansible block has ansible.raw_ssh_args=[]
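Pulled together, the Vagrantfile side of that checklist looks roughly like this (a sketch; the box name and playbook path are hypothetical, and the syntax matches the Vagrant 1.6-era API):

```ruby
Vagrant.configure("2") do |config|
  config.vm.box = "precise64"   # hypothetical box

  # Keep the SSH setup simple enough for an old local OpenSSH:
  config.ssh.private_key_path = "~/.vagrant.d/insecure_private_key"  # one key, not an array
  config.ssh.forward_agent = false

  config.vm.provision "ansible" do |ansible|
    ansible.playbook = "playbook.yml"   # hypothetical playbook
    ansible.verbose = "vvv"             # optional, for debugging
    ansible.raw_ssh_args = []           # let Vagrant emit an empty ANSIBLE_SSH_ARGS
  end
end
```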

(My problem was with OpenSSH_5.3p1 Debian-3ubuntu7.1, Ansible 1.7.1, and Vagrant 1.6.3, and was specifically triggered by config.ssh.forward_agent=true.)

That solves it for me — I am happy with it because it’s almost 100% used for local Vagrant VMs. I have yet to see how managing remote boxes works from my older Linux machine with the ControlPersist optimization removed (though remember, in the case that you’re using Ansible directly and not through Vagrant, the above fix won’t apply.)

Pandas merge woes on MultiIndex, solved

July 3rd, 2014

Perhaps you are data munging in Python, using “pandas”, and you attempt to use DataFrame.merge() in order to put two DataFrames together.

“cannot join with no level specified and no overlapping names”

This happens when you have two DataFrames, one of which has a MultiIndex, which *could* play nice together (e.g. you have “year, month” on the left, and “year” on the right), *but which do not have index names set.*

You’ll need to explicitly set names with

leftdf.index.levels[0].name = "onename"
leftdf.index.levels[1].name = "twoname"
rightdf.index.levels[0].name = "onename"

Alternatively, you can make it work if you reindex the right-hand side by the left hand side:

rightdf2 = rightdf.reindex(index=leftdf.index, level=0) ## NOTE Assignment, does not modify rightdf in-place
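A tiny end-to-end illustration of the name-based alignment (assuming a reasonably recent pandas; the data are made up). Once the single index on the right carries the same name as a level of the MultiIndex on the left, a join lines them up:

```python
import pandas as pd

# left: MultiIndex (year, month); right: plain Index named "year"
leftdf = pd.DataFrame(
    {"sales": [10, 20, 30, 40]},
    index=pd.MultiIndex.from_tuples(
        [(2013, 1), (2013, 2), (2014, 1), (2014, 2)],
        names=["year", "month"],
    ),
)
rightdf = pd.DataFrame(
    {"gdp": [100, 110]},
    index=pd.Index([2013, 2014], name="year"),
)

# join aligns the single "year" index against the matching level by name:
merged = leftdf.join(rightdf)
print(merged)
```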

CHCBP-013 describes unlimited former spouse coverage under CHCBP

April 8th, 2014

(This is extremely specialized subject material. If you found it while searching, you probably understand this, but if you’re just a curious reader, you might want to skip it.)

The Continued Health Care Benefit Program is analogous to COBRA for former TRICARE-covered persons; it is mostly a transitional way to get TRICARE-like (TRICARE less military facilities) benefits self-paid after one ceases to be eligible for TRICARE itself.

Generally the CHCBP eligibility period is limited. However, in the case of former spouses of servicemembers who have not remarried before age 55, it is possible to qualify for unlimited (lifetime) eligibility. This is obvious from law and from published stuff (e.g. continuing legal education materials for the tiny subset of lawyers for whose clients this is a very important topic), but it is nearly impossible to get anything from the “horse’s mouth,” that is, the actual CHCBP plan administrator, which is Humana Military Healthcare Services (Humana Military).

If you call Humana Military at 1-800-444-5445 and ask for the rules regarding former spouse unlimited eligibility, they will punt and tell you it’s something that is decided only at the point at which the normal 3-year eligibility has expired. If you persist and ask how it’s decided, they’ll tell you that there is a questionnaire which is sent, but that they don’t have access to it and can’t send a copy. If, incredulous, you insist that some person, perhaps that person’s manager, does in fact know where the questionnaire is, since it’s obviously already written and gets sent out to people all the time, you will get transferred to a kindly manager who takes your FAX number and swears up and down that they’ll send it to you tomorrow morning since the office is about to close.

If you’re a real ornery gadfly, though, and you’re ready to start writing letters to your congressman etc., you might try sending a letter to Humana Military asking for this information and documenting your attempts to get it and the promises which were made and unfulfilled about sending it to you before. If you do all this, they will mail you a redacted copy of an actual form CHCBP-13, which had been sent to an actual former spouse whose eligibility was expiring. (Maybe, in fact, the first representative was truthful and this is not a document that they have ready, and they write up a new one every time and so the only way to send it out is to take the latest one and redact the policyholder’s personal info. I doubt it.) (For this courtesy I am grateful, O unnamed mail-replier at Humana Military, but the policy of keeping this information completely off the public Web is either backwards or sinister.)

Note that the operative requirements are:

1. A “signed statement of assurance that you have not remarried before the age of 55.”
2. Proof (the standard of which is left to the imagination) that you were “enrolled in, or covered by, an approved health benefits plan under Chapter 55, Title 10, United States Code as the dependent of a retiree at any time during the 18-month period before the date of the divorce, dissolution, or annulment (Both TRICARE and CHCPB would qualify as such a plan).” [sic — Humana’s language implies that the enrolee must have been a dependent of a “retiree” but the actual language of 10 USC 55 sec. 1078a (b) (3) talks about a “member or former member” of the armed forces]
3. “A copy of the final divorce decree or other authoritative evidence” that you are actually receiving or have a court order to receive part of the retired or retainer pay of the service member, or have a written agreement that the service member will elect to provide you an annuity. [DO NOT rely on this nor on Humana’s exact language, for this, get your ducks in a row with the statute etc. — it’s slightly complicated.]
4. The executed (signed) renewal notice page of CHCBP-013 and a check for the next premium payable to “United States Treasury.”

Given that this is a hugely important item to a few (but maybe not all that few) people out there, and that it relates to the execution of very clear federal law, I’m rather shocked that there’s no mention ***anywhere on the Internet prior to today(!!!)*** of CHCBP-013 and its contents and requirements.

So, with no further ado, I present you a redacted copy of form letter CHCBP-13, which shows the actual procedural requirements for obtaining unlimited former spouse CHCBP coverage.

Django auto_now types wreck get_or_create functionality

February 11th, 2014

I recently had occasion to lazily use the Django “get_or_create” convenience method to instantiate some database ORM records. This worked fine the first time through, and I didn’t have to write the tedious “query, check, then insert” rigamarole. So far so good. These were new records so the “or_create” part was operative.

Then, while actually testing the “get_” part by running over the same input dataset, I noticed it was nearly as slow as the INSERTs, even though it should have been doing indexed SELECTs over a modest, in-memory table. Odd. I checked the debug output from Django and discovered that get_or_create was invoking UPDATE calls.

The only field it was updating was:

updated_at = models.DateTimeField(auto_now=True)

Thanks, Django. You just created a tautological truth. It was updated at that datetime because … you updated it at that datetime.

Interestingly, its sister field did NOT get updated:

created_at = models.DateTimeField(auto_now_add=True, editable=False)

This field, true to its args, was not editable, even by whatever evil gnome was going around editing the updated_at field.

Recommendation: if you want actual timestamps in your database, why don’t you use a real database and put a trigger on it? ORM-layer nonsense is a surefire way to have everything FUBAR as soon as someone drops to raw SQL for performance or clarity.
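For the record, the database-side version of that recommendation is short. A sketch for PostgreSQL (the table and column names are hypothetical; adapt to your schema):

```
-- Maintain updated_at inside the database, where no ORM can skip it:
CREATE OR REPLACE FUNCTION touch_updated_at() RETURNS trigger AS $$
BEGIN
  NEW.updated_at := now();
  RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER my_table_touch_updated_at
  BEFORE UPDATE ON my_table
  FOR EACH ROW EXECUTE PROCEDURE touch_updated_at();
```

Now the timestamp is right no matter whether the write comes from the ORM or from raw SQL.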