Rails form_tag Changes in Rails 1.2

August 6th, 2007

I recently updated my dev machine (Mac OS X) to the latest Rails gems, and was getting deprecation warnings for using form_tag in its old, non-block, pre-Rails 1.2 way.

Then, in moving between my development and acceptance-testing boxes (you do have a mirror of your production environment running as an acceptance testing server before you push things from your laptop to production, right?) I started getting blank HTML pages out of the testing box. Whoops.

Well, one thing is that the two different versions of form_tag act differently with respect to output — so with the old one, you needed to put:

<%= form_tag ... %>  

While the new one takes:

<% form_tag ... do %>   ... <% end %>  

(Note lack of = sign in the new, block version.)

But that wasn’t it. My problem was that, even with the equal signs fixed, I was getting no love from my testing box. Things that should have been enclosed in the form tag block were just not happening.

My hunch was that the old version of Rails was barfing (this was sort of true: the new block form of form_tag is not backwards compatible). I updated Rails with a one-two punch of apt-get update; apt-get upgrade mixed with a gem install rails. No luck. Aha! Have to kill and restart the server process: still, I got no form tag love.

I checked the Rails version with rails -v and got 1.2.3, the latest version. gem list showed that 1.2.3 was coexisting with some older versions. Aha! And a real aha this time — for this was, it turns out, the problem.

Thanks to the folks who author the acts_as_authenticated wiki. It was there that I found a reminder that the RAILS_GEM_VERSION variable, in config/environment.rb, can be set to peg which, among several possible installed versions of Rails, the app will use.

It appears that if you comment out RAILS_GEM_VERSION, you get the latest installed version — which in my case fixed it to use 1.2.3, thereby giving me my form_tags back.
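For reference, the pin is just a constant near the top of config/environment.rb. A minimal sketch (the version string is whatever gem list shows installed on your box):

```ruby
# config/environment.rb -- pin the app to one of several installed Rails gems.
# Comment this line out to fall through to the newest installed version.
RAILS_GEM_VERSION = '1.2.3' unless defined? RAILS_GEM_VERSION
```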

God Help You If You Get Derailed: "Model is Deprecated"

August 3rd, 2007

The comprehensible but often superfluous model method in Rails is used in an ActionController to tell it about an ActiveRecord model that it ought to have loaded in order to have the AR classes available to it. It’s kind of got the feeling of a require or a use in Perl. It’s fairly straightforward to reason by analogy about what it does.

(The only confusing thing, I think, is that it works by imputing a filename from a symbol representing the class to do its “magic,” so if you define multiple AR classes in a single file, you’d want to make sure that the symbol that matches the filename containing the multiple classes is what you pass to the model method.)

But in a nutshell, you stick model :my_object_class in, say, the base ApplicationController class and you’re good to go to use MyObjectClass and any subclasses defined in the same file.

Well, that was the way you did it. Around Rails 1.2, it started barfing up preemptive deprecation warnings: model is deprecated and will be removed from Rails 2.0.

So, you follow the URL they give you for more info. Unhappily, nowadays (August 2007), the page they point you to, http://rubyonrails.org/deprecation, doesn’t say anything about model. WTF, guys?

Googling around tells you that you should use require_dependency instead. Oh, good. Way longer to type and harder to remember, but it’s OK, because require is part of the language itself and is familiar to those who understand it. Er, wait: it’s require_dependency, not require: it’s a Rails feature, not a Ruby core language feature.

Fine, you say, I’ll do it if I must. Just do a replace on those lines, and you’re good, right? Oh, wait again, now my app is bombing with an error 500, and the log says undefined method `ends_with?' for :my_object_class:Symbol. I won’t keep you in suspense: you can’t give a symbol :my_object_class to require_dependency, you have to give it a String ("my_object_class").
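To spell out the change (my_object_class being the hypothetical model from above), here is the before and after, plus a plain-Ruby check of why the Symbol blows up: judging by that error, require_dependency treats its argument as a file path and calls String-side methods like ends_with? on it, which Symbol doesn’t have.

```ruby
# Old (deprecated):  model :my_object_class
# New:               require_dependency 'my_object_class'

arg  = :my_object_class
path = arg.to_s                       # "my_object_class" -- what it wants
puts path
puts arg.respond_to?(:ends_with?)     # false: Symbol has no ends_with?
```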

All of this highlights a pretty big issue with Rails. It’s really an infantile, nascent culture. To keep up with it, you really need to be in constant conversation with the community (like constant: I mean, you need to be sitting in the session at RailsConf with Colloquy open on your MacBook, chatting about stuff on an hour-by-hour basis). This isn’t bad, but you better understand it. And you better be refactoring your app constantly in order to keep up with best practices and to be able to use new plugins, etc. — which also isn’t bad, but it’s expensive and a hassle.

And to anybody who thinks a Perl app is tough to maintain: yeah, right. Try Rails code from a year or so ago if all you know is Ruby and you haven’t been heavily engaged in Rails culture during that time.

“Convention over Configuration” is fine and dandy as long as you’re steeped in the culture that maintains the shared conventions.

VC Career Snippets: "The Wormhole"

July 16th, 2007

This is the first of a series of “snippets” about getting a job in VC. I get asked about this approximately weekly, so I am going to try and do a highlights reel of things I tell people or thoughts I come up with on the topic.

In a nutshell, VC is this weird parallel universe into which there are very few wormholes. To start a career in VC, it helps to distort the space-time continuum with an extremely concentrated mass of money that you already have.

Swik Has Jumped the Shark

July 2nd, 2007

Seattle-based Open Source startup Sourcelabs put together the Swik.net wiki a year or two ago. They seeded it with some links of moderate usefulness, and for a brief time, it was a decent, if hit-or-miss, way to find information about an open source project or tool that you were considering using.

No more. Not only are most of the pages I’m finding on Swik these days simply a one-link screen-scrape to the actually interesting page (which often ranks higher in Google alone, anyhow), but Swik has committed the cardinal sin of infovoria: playing audio that automatically starts on page load.

(They do this by means of an auto-starting video advertisement that spams up the top bit of the page. Not quite as egregious as a MIDI object, but every bit as annoying.)

Unbelievable. I thought we’d gotten past this with the turn of the millennium. But everything old, it seems, is new again. Swik, however, now goes in the same mental category as “About.com,” namely, ad-filled, nearly unusable results not to click when they appear in a web search.

Spry VPS Mixed Experience Report

June 27th, 2007

I’ve been using an “el cheapo,” $15 / month, 64M RAM Debian GNU/Linux VPS from Spry for a couple weeks now. Some things to note:

  • You get X amount of RAM with “burstable” extra RAM, but you’re not going to get any from the burst. Everyone else is using it.
  • There is NO SWAP as far as I can tell. When you hit 63.9M of RAM, processes start segfaulting and blowing up.
  • free and top will lie to you. ps -aux seems to tell the truth.
  • You really need to check /proc/user_beancounters as root to get the real number of (4k) pages of memory used, and the number of faults, if you’re curious.
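As a sketch of what reading that file looks like, here is a hedged Ruby parser for it; the column layout (uid/resource, held, maxheld, barrier, limit, failcnt) is assumed from typical OpenVZ output, so check it against your own box:

```ruby
# Parse /proc/user_beancounters text and pull one resource's numbers.
# Assumed columns: [uid:] resource held maxheld barrier limit failcnt
def beancounter(text, resource)
  text.each_line do |line|
    cols = line.split
    cols.shift if cols.first =~ /\A\d+:\z/   # drop the leading "uid:" column
    next unless cols.first == resource
    return { held: cols[1].to_i, failcnt: cols.last.to_i }
  end
  nil
end

# Usage (for real: File.read("/proc/user_beancounters"), as root):
sample = <<~EOS
       uid  resource      held  maxheld  barrier  limit  failcnt
      101:  privvmpages  12345    15000    16384  16384       42
EOS
p beancounter(sample, "privvmpages")   # held is in 4k pages
```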

Too early for a verdict. But don’t count on being able to run the same stuff on a 64M Spry VPS as you would on a 64M box with a half-gig of swap (grinding its way through but eventually getting the job done).

Relational Database Problems

June 26th, 2007

Many of these problems arise because RDBMSs are designed typically with the conceit of being the sources of truth within the organization (e.g. invoice #1234 exists because the database says so), but are often used to reflect external truths about the world, which are input messily and which themselves are often shifting or subject to change in ways not contemplated by the schema.

I. The entity deduplication problem.

Alice is entering information about companies, and creates a record #1001 for “Acme Widgets.” She then adds a bunch of information, including relating other tables to Company #1001. For example, she may put in a press clipping about Acme Widgets, which gets linked by a link table to Company.id==1001.

Then, Bob comes along entering information about companies, and creates a record #1234 for “Acme Widget Corporation.” He adds stuff, and includes a different press clipping, which gets linked to Company.id==1234.

Later, Charlie arrives, and notices that there are two records starting with “Acme.” He investigates a bit, and discovers that “Acme Widget Corporation” is the full legal name of the company that is familiarly called “Acme Widgets.”

How can Charlie cause the database only to reflect a single Acme which is associated correctly with both press clippings?
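One workable answer, sketched here in plain Ruby standing in for the two SQL statements you would run inside a single transaction (re-point the link rows, then delete the duplicate; table and column names are invented to match the story):

```ruby
companies = [
  { id: 1001, name: "Acme Widgets" },
  { id: 1234, name: "Acme Widget Corporation" },
]
clippings_companies = [                  # the press-clipping link table
  { clipping_id: 7, company_id: 1001 },
  { clipping_id: 8, company_id: 1234 },
]

def merge_companies!(companies, links, keep_id, dup_id)
  # UPDATE clippings_companies SET company_id = keep WHERE company_id = dup;
  links.each { |l| l[:company_id] = keep_id if l[:company_id] == dup_id }
  # DELETE FROM companies WHERE id = dup;
  companies.reject! { |c| c[:id] == dup_id }
end

# Keep #1234 (the full legal name), fold #1001 into it:
merge_companies!(companies, clippings_companies, 1234, 1001)
p clippings_companies.map { |l| l[:company_id] }  # both clippings now on 1234
```

The hard part Charlie still owns is deciding which record wins and reconciling conflicting fields, which no SQL will do for him.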

II. The undo problem.

I do some stuff. Then I change some stuff. Then later, I want to see how it was before I changed it. Tough luck.

III. Bitemporality

Let’s include financials on our companies! Acme revenues are $12 M. (wait a year.) Now Acme revenues are $15 M. But wait: we now need a couple of slots for revenues. Oh, ok, a revenue report is associated with a year. Acme 2006 revenues were $12 M, 2007 revenues were $15 M. Wait. I was wrong. Acme 2006 revenues were $9 M in reality. OK, update that. Now the boss calls up and wants to know who it was that we told the wrong 2006 number to. But we can’t, since we don’t know what we thought we knew when we thought we knew it.

(In fairness, bitemporality is somewhat easier than the other ones; you just double-time-stamp everything. But it’s still a pain in the ass.)
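A minimal sketch of that double-time-stamping, with made-up numbers from the story: each fact carries the year it is about (valid time) and the date we recorded it (transaction time), and corrections are new rows rather than updates, so you can always answer “what did we believe, and when?”

```ruby
Fact = Struct.new(:year, :revenue_m, :recorded_on)

facts = [
  Fact.new(2006, 12, "2007-01-15"),  # first belief about 2006
  Fact.new(2007, 15, "2008-01-20"),
  Fact.new(2006,  9, "2008-03-01"),  # correction: 2006 was really $9 M
]

# "What did we believe year's revenue was, as of a given date?"
def believed(facts, year, as_of)
  facts.select { |f| f.year == year && f.recorded_on <= as_of }
       .max_by(&:recorded_on)&.revenue_m
end

puts believed(facts, 2006, "2007-06-01")  # 12 -- the number we told people
puts believed(facts, 2006, "2008-06-01")  # 9  -- after the correction
```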

IV. The incomplete multi-table entity problem

This one is not really an RDBMS problem so much as a Web application architecture problem.

Doug wants to create a record in our database — let’s say, a “publication.” The publication must have an “editor” (n:1) and at least one “author” (n:m, m >= 1). Doug gets to inserting the publication, but is stopped because it has a null editor_id, violating the editor constraint. D’oh! OK, we can work around this: good databases permit deferring of constraints until the end of a transaction. But in order to make that work with your application, you now must tightly couple the transactionality between the RDBMS and the server part of the Web application.

Let’s try again with Doug. He inserts the publication, no sweat, and now can insert an editor. Then, he adds some other stuff, like some authors. (Maybe Doug is using an AJAX-y Web front-end that lets him add new authors on the fly without going to a new “screen,” or maybe he has to navigate between modal “screens” to do this.) Because he’s added multiple different entities (a publication, an editor, some authors), he gets to feeling that what he’s done is already written to the database. He leaves without hitting save. D’oh! Two things have now happened: first, the entire graph of all the entities he’s added is in limbo, and second, the server part of the Web app is holding open a (possibly scarce) connection to the RDBMS (which may or may not, depending on how fancy the example gets, be holding locks with a SELECT FOR UPDATE…).

The first sub-problem — the limbo — seems not that bad, because Doug is used to losing all his data when he forgets to hit “save” on a desktop application, so he’s fairly well trained and will avoid this. But if the program is “DRY” (Don’t Repeat Yourself), and especially if it had modal “screens,” Doug probably hit “save” or “update” or “submit” several times on interfaces that look like the normal (non-multi-table-entity graph) ones in order to add his multiple required entities (editors and authors). Therefore, Doug has a not unreasonable expectation that he’s done his part for saving, based on the fact that when he edits or creates an author entity in isolation, he uses that same workflow and it saves OK.

The second sub-problem — the RDBMS connection — is sort of more pernicious, because now Doug isn’t just losing (or jeopardizing) data he thought he’d saved; he’s potentially contributing to a denial or degradation of service for all users. This is sort of an artifact of the way that DB connection pooling evolved from the mid-90s to today. Traditionally, DB connections have been scarce and costly to set up (network IO in addition to whatever socket / semaphore / locks / whatever had to be written to disk). Therefore, the widespread practice evolved in Web application design to use connection pooling — where some subsystem in the server layer makes a bunch of connections to the database, and then does fast handoffs of DB connections to application requests, freeing them up when each request is done. That way, you can run, say, 300 near-simultaneous requests with, say, 30 database connections.

Of course, if you’re pooling database connections, you pretty much have to run your entire transaction, succeed or fail, within the space of one application request (hopefully a sub-500ms affair), since transactions bind to database connections, but connections don’t bind to end-clients (stateless HTTP again). You can try and bind the DB connection to a particular client’s cookie- or token-identified session (like Seaside does, I think), but then you lose out on the ostensible benefits of pooling — now you need 300 RDBMS connections for your 300 clients, and your DB machine is choked. What’s more, if Doug leaves his transaction open, and you don’t fairly quickly time it out and kill it, you could end up needing more than 300 DB connections for your 300 clients, because you also have 300 old “Dougs” who’ve left stale transactions open — and now your DB machine is crashed.
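The pooling arithmetic above can be sketched with a toy pool (names invented; a real pool adds timeouts, thread safety policy, and reconnection): 300 requests can share 30 connections only because each request checks its connection back in promptly, and a Doug who holds one open shrinks the pool for everyone.

```ruby
class TinyPool
  def initialize(size)
    @q = Queue.new
    size.times { |i| @q << "conn-#{i}" }   # stand-ins for real DB connections
  end

  def with_connection
    conn = @q.pop          # blocks when the pool is exhausted
    yield conn
  ensure
    @q << conn if conn     # check-in; skip this and the pool drains away
  end

  def available
    @q.size
  end
end

pool = TinyPool.new(30)
300.times { pool.with_connection { |c| c } }  # 300 "requests", 30 connections
puts pool.available  # 30: every connection came back
```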

I will update this post periodically with other relational database problems, and, I hope, solutions.

Apple Occasionally Kicks Ass: A Tech Support Experience

June 22nd, 2007

Last night, I was typing happily along on my MacBook Pro, unplugged and on battery power with about 40% indicated remaining, when bam, the power went right out like a light. Curse words. Flipped the thing over, and got no LED action from the battery charge indicators. Plugged it in and it would boot, but pull the plug and it goes down hard again.

I had just bought the thing in December 2006, refurbished direct from Apple and with a 3 year Applecare extended warranty.

Well, I hopped on the bike and made it to the Apple store in the peculiar ring of hell known as “University Village” (an upscale retail cesspool here in Seattle. Aside #1: Never go there. Aside #2: When you must, ride a motorcycle or bicycle, because parking is impossible, and the sheer Schadenfreude of watching the SUVs joust and jockey while you glide to within a few meters of your destination almost makes the place bearable).

The Apple storeites had me belly up to the Genius Bar (their open-air repair facility). I handed the MacBook over with a brief description of the problem. As the guy was keying in something (serial number?) he started to regale me with an explanation of the normal sleep procedure and how the battery is supposed to work. “Oh shit,” I thought, “here comes runaround city.”

The guy leaves, I get ready to get comfortable and either wait it out or raise sufficient hell to get what I need.

To my surprise and delight, when the “Genius” returned, he had a box in his hand just slightly bigger than the battery in question. Snip, snip, out comes the battery, and into my laptop it goes. Hands it over without a word and gets to printing me a receipt. Holy shit — he just silently and competently fixed my problem, without me paying a dime or sitting through any bullshit (other than the complimentary, brief lecture on hibernate vs. power down while he looked up the inventory for the replacement part)!

Too many other tech repair facilities have long algorithms they have to go through, debugging procedures, etc. — all of which ostensibly save money on parts but waste uncounted hours of techs’ and customers’ time. Apple, despite really dropping the ball when my wife’s laptop started crashing intermittently eleven-and-a-half months into owning it, really came through on this one.

Pros: This is exactly how customers should be treated. Fix it. Sign for it. Send ’em on their way. Go Apple!

Cons: This probably means that the Genius bar sees quite a few cases of MacBook Pro battery-sudden-death.

Truth and Fiction in Motorcycle Gear: Cortech DSX Jeans

June 21st, 2007

I’ve been looking for good, mostly-civilian-looking riding jeans with armor and abrasion resistance. Cortech DSX Jeans use leather instead of the kevlar / aramid that some (e.g. Draggin’ Jeans) use in the knees and seat. They do have some knee armor. Plus, the price was right (about $70 shipped) vs. the kevlar ones, so I ordered a pair.

Levi 501s in 38″ waist, 32″ length fit me pretty well. Snug, perhaps, but not tight (neither gangsta nor European). So, I was hoping to get a similar fit out of the Cortech DSX. At risk of giving you more info than you need, I have kind of thick legs, a residue from too much rowing in college.

No such luck. Cortech DSX jeans are cut for fatasses. The 38″ waist hangs off me like a 40″. I have to cinch it up so much with my belt that there’s a fold. I really ought to have it taken in 1-2 inches by a tailor. And my thickish legs flop around in the bagginess of the jeans.

The cut of the jeans is sort of a low-rider type thing. These are definitely cut to be worn lowish on the hips (lowish for men’s jeans, that is; not hip-huggers, but probably 4″ lower on the body than Western jeans).

Accordingly, the 32″ seems to be measured from the floor to a lower height than with e.g. Levis or more Western type jeans. When I sit on my bike, the jeans pull up almost to the top of my Cortech Tourmaster riding boots.

Long story short: if you wear a 38-32 in Levi 501s, you probably should order a 36-34 to get jeans that 1. fit snugly (and therefore keep the armor and leather in place), and 2. don’t ride up when you’re on the bike.

Other than that, I can’t complain. They are some nice jeans, they just happen to be cut without reference to the actual measurements on the tag…

Truth and Fiction in Motorcycle Gear: Olympia "Waterproof" Gloves

June 12th, 2007

Motorcycle gear is tough to buy. There are lots of places that sell some stuff, but precious few that sell a really good selection. Also, there are tons of vendors available online, but since you can’t touch or try the stuff before you get it, you’re SOL. My goal is to write up all the moto gear I buy so that folks can make good decisions about it.

To kick it off, I’m going to pick on Olympia’s “Waterproof” gloves.

My first pair of bike gloves was the Olympia Cold Throttle gloves. These worked well for me in coldish weather (say, 40’s; it would get chilly in there but livable). However, after a single rainy March day on the MSF course range, the gloves were completely soaked through. (In fact, they got totally soaked in the first 4 hours or so of rain.) I don’t mean that they leaked — which they might have — but that the leather itself was completely soaked. You could take the glove off and literally wring out the leather and have ounces of water come out. I couldn’t get the glove dry overnight, even putting it on top of a fan on top of a heater all night.

Still, when not wet they worked well and fit nicely. Plus, the longish nylon gauntlet played nice with my textile jacket and prevented water from coming up the sleeve, at least.

Then, a couple weeks into owning the gloves, the pull-cord-to-tighten-stopper-thingy (the thing you squeeze to let the cord loosen up) just snapped right off of one glove during a routine closure of the cord. University Honda was good enough to take them back on a warranty replacement and give me a new set. “Maybe,” I thought, “this pair will actually be waterproof!”

Well, no such luck. A 20 minute ride in a steady rain last Saturday soaked the leather straight through.

(Now, it should be said that all gloves will accumulate sweat inside of them, whether or not they keep out rain. This is, of course, what Gore-Tex ostensibly fixes. Note that it won’t fix it fully, because the sweat vapor really has nowhere to go when the outside of the glove is covered in water…)

So, although I have read good things about Olympia on the web, and although these gloves offered beefy knuckle protection and good insulation when dry, I can’t recommend the Olympia Cold Throttle “waterproof” gloves for any wet applications.

Highway 202 North of Snoqualmie Falls Sucks Right Now

May 30th, 2007

A quick note for any of you thinking of heading out to Snoqualmie Falls en moto right about now (late May 2007). The road getting there from I-90 was just fine, and indeed, at the falls there were plenty of bikers (no place to park, even!).

However, if you continue north on Highway 202 (SE Fall City-Snoqualmie Rd) after the falls, you’ll hit about a mile of twisties with heavily grooved pavement. It sucks. Be real aware of this if you’re planning to visit (or pass by) Snoqualmie Falls on your bike this summer — hills plus curves plus thick grooves make for an unpleasant if brief part of the trip.