Wednesday, December 22, 2004
more PostgreSQL performance junk
We think the big delete was taking so long because PostgreSQL may be trying to track rollback state for the whole delete operation in memory, or something along those lines, causing it to thrash. I haven't tried this on Oracle, but I'm guessing that it and other databases may be smarter about managing their physical storage (RAM vs. disk) directly rather than relying on the underlying OS.
But then again, if you're touching 100k rows at once, it's probably not a bad idea to commit every so often anyway, to avoid a long-running transaction that could hose other users.
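For what it's worth, here's roughly what I mean by committing every so often. This is just a sketch: the table, the id column, and the delete criteria are made up, and the batch size is arbitrary.

-- delete in chunks instead of in one giant transaction; run this outside
-- an explicit BEGIN/COMMIT so each statement commits on its own, and
-- repeat it until it reports "DELETE 0"
delete from big_table
where id in (
    select id from big_table
    where created < '2004-01-01'  -- hypothetical delete criteria
    limit 10000
);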
PostgreSQL on cygwin: "Bad system call"
Turns out I had forgotten the magic word: "CYGWIN=server". The first time I installed PostgreSQL (and read the docs), I had just set the variable on the command line (CYGWIN=server pg_ctl start ...) and never put it in my profile. Easy enough to fix.
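In case it helps anyone else, this is roughly the fix: put the export in your profile, then start the server as usual. The data directory and log file paths below are just examples.

# in ~/.profile: tell the Cygwin DLL to use cygserver for SysV IPC
export CYGWIN=server

# then, in a new shell, start PostgreSQL as usual (example paths)
pg_ctl -D /usr/local/pgsql/data -l /tmp/postgres.log start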
Wednesday, December 15, 2004
PostgreSQL performance of "where exists"
select foo_id from foo
where exists (select 1 from bar where bar.foo_id = foo.foo_id)

I was surprised to find out that this query actually ran faster when I restructured it with a SELECT DISTINCT and a JOIN:

select distinct(foo_id) from bar
join foo on bar.foo_id=foo.foo_id

Some references on the web I've found suggest that EXISTS is the preferred way to write this kind of query in general. Because it's a boolean condition, in theory the database needs to scan fewer rows, since it can stop as soon as the first match is found; and the DISTINCT can be expensive if the results from the join version would not otherwise be unique.
An ancient PostgreSQL mailing list post indicates that rewriting the query as a JOIN may be faster than EXISTS in PostgreSQL, because the join can take advantage of indexes while EXISTS does a nested loop. But, then again, I'm still using PostgreSQL 7.3.x, and EXISTS handling may well have been improved in 7.4.
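If you want to see which plan you're actually getting (nested loop vs. something that uses the indexes), running EXPLAIN ANALYZE on both forms is the easiest way I know to compare. This uses the same foo/bar tables as above.

explain analyze
select foo_id from foo
where exists (select 1 from bar where bar.foo_id = foo.foo_id);

explain analyze
select distinct(foo_id) from bar
join foo on bar.foo_id = foo.foo_id;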
Saturday, December 11, 2004
"Client CVS Branch" anti-pattern
Scenario: Your team developed a custom application for Client A. The application is generally useful, so it gets re-sold to Client B. Client B wants some customizations, which are at first superficial (CSS, images, etc.), and the client expects a quick turnaround. So, you need a way to store Client B’s new version of the app in source control somehow, and you take the first approach that comes to mind: you create a branch of the original, and make the customizations for Client B on the new branch.
At first, this works pretty well. But over time, you sell the same app to more clients, and they are each asking for more substantial new feature development. The strategy starts to break down, causing a whole series of problems:
- Bug fixes have to be explicitly merged into each client’s branch individually.
- Since the code in each client branch diverges over time as different things are added or changed in each one, merging bug fixes requires more and more manual conflict resolution. (The same goes for new features.) It also means re-testing the same fix in every branch.
- There is a potential for wheel reinvention. If you develop a new feature for Client C, and Client D asks for a feature that is “close but not quite” the same as Client C’s, it may get developed independently twice rather than being built once in a way that accommodates both sets of requirements.
- You fail to realize the economies of scale in support and maintenance that you should get from having a single solution, since each client effectively has their own one-off version.
The problem stems from the lack of a well-defined “trunk” in source control that provides the common baseline functionality. Instead, each client’s version branches off of another client’s branch (Client A’s) rather than from a common trunk, so there is no way to nail down which part of the code stays constant for all clients.
Here are a few ways to solve this problem:
- Have each client’s version be a branch from a common trunk, and have the discipline to make as much functionality in the trunk configurable at deployment/runtime as possible (the later the binding, the better). That way, you increase the percentage of code that all clients have in common and establish a common baseline version that multiple clients share. There is then also a well-defined process for upgrading a client’s branch to a new version of the baseline “core” code, with far fewer post-merge conflicts to resolve manually.
- Most teams won’t actually have the discipline to consistently put in the extra effort to make new features configurable. So you can take this a step further: give each client its own separate repository that contains only the customizations for that client, then apply those customizations as a patch against the common baseline version (see the sketch after this list). This forces you to think about whether any given piece of code should be common and shared across all clients or whether it is a client-specific customization.
- There is also a management aspect to solving this problem: make sure someone is accountable for the entire solution as deployed for all clients, not just each individual client’s project. When you’re only accountable for your own client, you’ll inevitably take the path of least resistance to keep your own client happy and not see the bigger picture of delivering for all clients more effectively (this is actually rational behavior according to game theory).
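To make the “patch against a baseline” idea a little more concrete, here is one way the overlay could work with CVS-era tools. This is only a sketch: the module names, release tag, and patch file are made up for illustration.

# check out the shared baseline ("core") at a known release tag
cvs checkout -r CORE_1_2 core-app

# check out Client B's customizations, kept in their own repository/module,
# and apply them on top of the baseline
cvs checkout client-b-custom
patch -p0 -d core-app < client-b-custom/client-b.patch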