Saturday, June 18, 2005

commons-logging classloader pain

I recently learned about commons-logging's classloader behavior the hard way when moving applications that use log4j from Tomcat to WebLogic. I had naively put commons-logging.jar in my WebLogic system classpath to emulate Tomcat's behavior, and things subsequently broke. This web site provided a very detailed and erudite explanation:

http://www.qos.ch/logging/classloader.jsp

Basically, it boils down to a weird subtlety in commons-logging's behavior. If commons-logging.jar is in the system classpath in a J2EE environment, the "current" or "thread context" classloader for each individual webapp can see WEB-INF/lib, but commons-logging itself is loaded by the system (parent) classloader. commons-logging looks for log4j using the thread-context classloader and therefore "sees" log4j in WEB-INF/lib, but then tries to actually load it through the system classloader, which can't see WEB-INF/lib, and breaks with a NoClassDefFoundError.
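
To make the mismatch concrete, here's a rough sketch--not commons-logging's actual discovery code, just an illustration of the two lookups involved:

import org.apache.commons.logging.LogFactory;

public class ClassLoaderMismatchDemo {

    public static void main(String[] args) throws Exception {
        // Discovery: the webapp's thread-context classloader can see WEB-INF/lib,
        // so this lookup succeeds and commons-logging concludes that log4j is available.
        ClassLoader context = Thread.currentThread().getContextClassLoader();
        context.loadClass("org.apache.log4j.Logger");

        // Loading: the log4j adapter is defined by whatever classloader loaded
        // commons-logging.jar. If that is the system classloader, it cannot see
        // WEB-INF/lib, this lookup fails, and you get the NoClassDefFoundError.
        ClassLoader defining = LogFactory.class.getClassLoader();
        defining.loadClass("org.apache.log4j.Logger");
    }
}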

Various solutions:

  • Only put commons-logging-api.jar in the system classpath, as Tomcat does. (I didn't realize there were two different variations of the JAR at first. I know, RTFM.) This stripped-down commons-logging will pretend that log4j doesn't exist. Downside: logging output from third-party APIs like Hibernate, Digester, etc. will go who-knows-where, but not the same place as your application's log4j output.

  • Put log4j in the system classpath too. This works, but has the bad side effect that all applications will share a single log4j configuration, which makes logging unmanageable.

  • Put commons-logging.jar in your WEB-INF/lib directory. I'd heard bad things about this, especially if you want to deploy the same WAR in Tomcat. But for whatever reason it worked fine with Tomcat 4.1.31 and WebLogic 8.1. So this may be my preferred solution.



Anyhow, if this was confusing to someone who has been doing J2EE professionally for 5-6 years, I can imagine how confusing it must be to someone just trying to do this for the first time. The BileBlog puts it a lot less charitably.

Saturday, June 04, 2005

cost of supporting old browsers

I've begun to question the wisdom of standards for websites that require support for older browsers like Netscape 4. Backwards compatibility is all well and good, but I feel like clients--government ones in particular--often formulate these requirements without any real cost-benefit analysis.

Keeping support for Netscape 4, IE 3, etc. has a very real cost, with respect to CSS and DHTML/Javascript one-offs, or having to pass on new patterns like AJAX. But many statistics show that fewer than 1% of all browsers are Netscape 4--even going back a year or two. (See here for an example.) Most people are using IE 6.

On the other hand, a commercial client I visited had done a very careful cost-benefit analysis on operating system support--they knew exactly how many of their users were still using Windows 98 and how much it would tick them off if they were forced to upgrade. So they might be stuck supporting it, but at least they could justify the decision.

So, before requiring support for Netscape 4, clients should do their homework and ask themselves what the benefit is--is anyone really still using it who can't upgrade? And how much extra time and money will the developers spend on workarounds for these old browsers, compared to the value delivered?

Saturday, February 05, 2005

Wireless adventure

I must have bad luck with my home wireless network. I had a D-Link DI-514 wireless router, which worked great until I got a new IBM Thinkpad X40 with the built-in Centrino. After various firmware upgrades, configuration changes, etc., I could not get the two to work together--I was getting roughly 30% packet loss and 500 ms ping times, with the notebook right next to the router antenna.

So I got a new DI-524 which had a big Centrino-verified logo on the front. But, after unpacking it and hooking it up, it won't talk to my cable modem (also a D-Link product, a DCM-200) and the "WAN" link light doesn't even come on.

I take the thing back to the store, bring home another DI-524 *and* a Linksys BEFW11S4 just to be safe. Sure enough, they *all* have the same problem with the WAN link light. What are the odds?

After some googling and a call to D-Link tech support (who was way more knowledgeable than I expected), it turns out that this mass-market consumer equipment doesn't do a good job of auto-sensing 10- vs. 100-mbit ethernet, and my old cable modem is 10-mbit only. After a firmware upgrade it all worked.

The design lesson here is the "principle of least astonishment"--I'm somewhat knowledgeable about networking, but every other piece of 100-mbit ethernet equipment I've used has coped with a 10-mbit device on the other end instead of just breaking. So when that link light didn't come on, I immediately assumed the unit was defective, especially since there was no documentation of this issue. I could have saved myself (and Best Buy's returns counter) a whole lot of trouble if the failure mode had been clearer.


Saturday, January 15, 2005

displaytag sorting/paging performance vs. native SQL

displaytag is an easy-to-use JSP tag library for printing out HTML data tables with sorting and paging. But there's a catch. In an enterprise application, collections are the result of some database query, so implementing sorting and paging entirely within the presentation layer is not going to be as efficient as doing both natively in the database with ORDER BY, LIMIT and OFFSET (assuming PostgreSQL syntax). displaytag instead expects the entire collection to be in memory: it sorts the Java collection itself and displays only the rows needed for the current page.
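
For comparison, here's roughly what "native" paging looks like when it's pushed down to the database with plain JDBC; the issue table and its columns are made up for illustration:

import java.sql.*;

public class NativePagingSketch {

    // Fetch one page of rows, letting PostgreSQL do the sorting and limiting.
    // pageSize and pageNumber are ints, so building the SQL by concatenation is safe here.
    public static void printPage(Connection conn, int pageSize, int pageNumber)
            throws SQLException {
        String sql = "SELECT id, summary, status FROM issue "
                   + "ORDER BY summary "
                   + "LIMIT " + pageSize + " OFFSET " + (pageSize * pageNumber);
        Statement st = conn.createStatement();
        ResultSet rs = st.executeQuery(sql);
        while (rs.next()) {
            System.out.println(rs.getLong("id") + ": " + rs.getString("summary"));
        }
        rs.close();
        st.close();
    }
}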

I ran some tests to see exactly how much this affects performance. In a non-scientific test (YMMV) with a 250-row result set, sorting with displaytag added a 34% overhead over a native ORDER BY. There was a much more dramatic difference for paging, though. In a test where I loaded 250 rows from the DB and displayed only 25 (pagesize=25), displaytag took 141% longer on average than a "LIMIT 25" query, which returns only the 25 rows that will actually be displayed. The overhead will scale with the total number of rows in the original, unlimited query.

So, displaytag clearly is not as efficient as sorting and paging in native SQL. But is it enough of a difference to matter? Well, it depends on your application--it's a classic tradeoff between performance and elegant design. Separation of concerns says paging results conceptually belongs in the presentation layer; in a layered J2EE application, you can push sorting/paging down through the business and persistence layers, but it isn't pretty.

My personal opinion, though, is that performance normally shouldn't be enough of a factor to dissuade you from using displaytag for sorting/paging results. If your query returns so many rows that you can't afford the overhead from the undisplayed rows, then you may not be thinking about paging the right way--paging should be thought of as purely a UI layer construct to save the user from scrolling or downloading a large HTML document, not to save the database from working too hard. How meaningful is it to jump from page 1 to page 47 out of 62 anyway? If your query returns more than, say, 300 rows (15 pages with 20 rows each), you probably should think about making the user provide additional search criteria first rather than blindly paging the output.

What about the other option--coupling the presentation layer directly to the database? It's easy to imagine an extension to displaytag that takes a SQL query rather than a Java collection, and handles sorting/paging natively by dynamically appending ORDER BY and LIMIT/OFFSET clauses. (Indeed, the MS toolset seems to actively encourage this pattern.)

This is fine for prototyping or simple apps that don't have much business logic beyond the basic CRUD transactions. But it can rapidly become unmaintainable--not only are there many more places to touch if the DB schema changes, but you are also at risk of introducing bugs by accessing tables directly and potentially bypassing domain logic (business rules) tied to particular fields. Still, it may be worth considering this as a strategy for hand-optimizing queries with special performance requirements that outweigh maintainability, especially if the query is more relational than object-oriented in nature.

In summary, displaytag is a good, simple choice for many applications that need HTML tables with sorting and paging. There may be a performance hit, but a clean, maintainable design often is more important.

Sunday, January 09, 2005

USB flash drives and Linux

I've had to set up a USB stick with Linux recently. It's fairly easy if you know the right magic.

Here are some links to useful resources:

Post on ExtremeTech.com
Flash memory HOWTO on ibiblio

Essentially it boils down to adding a line to /etc/fstab and then mounting /dev/sda1 or /dev/sdb1 on some mount point (directory). You don't strictly need to edit /etc/fstab, but if you don't, you'll need to be root to mount the stick and write anything to it. Here's the magic for /etc/fstab to give non-root users read/write access:
/dev/sda1 /mnt/usbstick vfat        rw,user,noauto 0 0

(assuming your mount point is /mnt/usbstick.)
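
With that entry in place, a normal (non-root) user can mount and unmount the stick by its mount point, along these lines:

mkdir /mnt/usbstick      # one time, as root
mount /mnt/usbstick      # as a normal user, thanks to the "user" option
cp somefile.txt /mnt/usbstick/
umount /mnt/usbstick     # always unmount before pulling the stick out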

If that doesn't work, check /var/log/messages and see if your stick is on /dev/sdb1 instead.

Still, I'm looking forward to the day when using USB sticks under a default Linux distro is as easy as it is under Windows. I hate to say it, but under Windows, they "just work."

Saturday, January 01, 2005

Avoiding Anemic Domain Models with Hibernate

One of Hibernate's most under-appreciated features is its ability to persist private fields. This feature is useful for avoiding what Martin Fowler calls the Anemic Domain Model anti-pattern, where domain objects (entities) are reduced to "dumb" record structures with no business logic. In an Anemic Domain Model, you lose all the benefits of OOP: polymorphism, data hiding, encapsulation, etc.

The Anemic Domain Model may have originally evolved from EJB CMP, which requires any persistent field to be accessible directly with a public getter/setter. Developers using POJO frameworks like Hibernate often duplicate the same pattern, though, simply replacing the entity beans with POJOs.

This is not just an academic discussion; this has real consequences for the quality of a codebase. (Academically, this is part of the OOP-RDBMS "impedance mismatch"--in particular, that there is no distinction between a setter/constructor call that actually mutates/constructs an object and one that is merely incidental to materializing an existing object's state from persistent storage.) Let's say you're developing a system for issue tracking with a business rule like "anyone can create a ticket or change its status, but only managers can raise it to 'critical.'" A fragment of an Issue object might look like this (some detail omitted to focus on encapsulation/data hiding issues):
public class Issue {

    private String m_status;

    public String getStatus() {
        return m_status;
    }

    public void setStatus(String newStatus) {
        // Business rule: only managers may raise an issue to critical.
        if (STATUS_CRITICAL.equals(newStatus) && !getCurrentUser().isManager()) {
            throw new SecurityException("critical.requires.manager");
        }
        m_status = newStatus;
    }
}
This looks great until you realize that setStatus(STATUS_CRITICAL) is also going to be called from the persistence layer in materializing an existing Issue that is already critical, not just when making an explicit change through the UI workflow. Since anyone can view any issue, SecurityException will be thrown when a non-manager tries to view an issue that is already critical. We immediately recognize that the persistence layer needs a way to get "privileged" access to set the underlying field directly, bypassing business logic.

The typical workaround is to give up encapsulation and move the business logic into the corresponding service layer object (e.g., stateless session bean) for issue transactions:
public class IssueManager {

    public Issue findIssueById(Long id) { /* ... */ }

    public Issue newIssue(/* ... fields ... */) {
        // begin TX
        Issue issue = new Issue();
        // ... set up the new issue ...
        if (STATUS_CRITICAL.equals(status) && !getCurrentUser().isManager()) {
            throw new SecurityException("critical.requires.manager");
        }
        issue.setStatus(status);
        // ...
        // commit TX
        return issue;
    }

    public void changeStatus(Long id, String status) {
        // begin TX
        Issue issue = findIssueById(id);
        if (STATUS_CRITICAL.equals(status) && !getCurrentUser().isManager()) {
            throw new SecurityException("critical.requires.manager");
        }
        issue.setStatus(status);
        // commit TX
    }
}
Now, two real consequences are apparent. First, giving up encapsulation leads to cut-and-paste programming, violating the "don't repeat yourself" principle; this increases the risk that the business rule gets forgotten somewhere else it's needed. Second, you lose polymorphism; it is now very difficult to have a subclass of Issue with slightly different business rules. (For example, maybe the base Issue has no restriction on setting status, but a specific type of issue has the critical-requires-manager rule.)

It's true that you could have two separate sets of getters/setters in the Issue itself, one that applies business logic and one that allows direct access and is only used by persistence. This would address the polymorphism issue. But if those direct accessors also have to be public (as EJB CMP requires), then you still lose data hiding; nothing prevents your service layer/transaction scripts from calling them directly.

If you're using Hibernate, though, there is a very elegant solution. Hibernate is effectively "privileged"--through runtime reflection it can get at private fields and private getters/setters directly. Hibernate gives you two options in the above scenario:
  • You can have two separate bean-style properties linked to the same underlying field, one with private getters/setters and the other with public ones. The private methods access the underlying field directly, and the public ones apply the business rules. This is the preferred approach, but it has the downside of verbosity, plus you have to use different property names in HQL (the private property) and everywhere else (the public one). See the sketch after this list.
  • Hibernate can also persist fields directly, using the "access" attribute on @hibernate.property and friends. The upside is that this is more concise, with only a single public bean-style property; but access="field" requires the mapped property name to exactly match the instance variable name, which is ugly if you use a Hungarian naming convention like "m_foo". You can do something like access="MyFieldAccessor" (a fully-qualified class name in practice), where MyFieldAccessor is a custom class implementing net.sf.hibernate.property.PropertyAccessor that encapsulates your naming convention (mapping bean property names to member variable names), but that requires extra effort.
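Here's a rough sketch of the first option, assuming the Hibernate mapping points at a property named "statusInternal" (STATUS_CRITICAL and getCurrentUser() are the same assumed helpers as in the fragment above):

public class Issue {

    private String m_status;

    // Mapped in Hibernate (and referenced in HQL) as "statusInternal":
    // these private accessors touch the field directly, bypassing business rules.
    private String getStatusInternal() {
        return m_status;
    }

    private void setStatusInternal(String status) {
        m_status = status;
    }

    // Public property used by application code: applies the business rule.
    public String getStatus() {
        return m_status;
    }

    public void setStatus(String newStatus) {
        if (STATUS_CRITICAL.equals(newStatus) && !getCurrentUser().isManager()) {
            throw new SecurityException("critical.requires.manager");
        }
        m_status = newStatus;
    }
}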
There are other uses for this feature in Hibernate:
  • Primary keys are generally supposed to be off-limits to normal business logic and set only within the persistence layer. So, "setId" methods can almost always be private or protected.
  • Collection getters and setters can also be kept private, to preserve data hiding (i.e., prevent representation exposure). Otherwise, when business logic can manipulate a collection directly, it's difficult to enforce business rules on the collection elements, or even to ensure the elements are of the correct type. (The latter may partially be addressed by generics in Java 5 and/or Hibernate 3.) Both ideas are sketched below.
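A quick sketch of both, continuing the same Issue example (comments are simplified to plain strings for illustration):

import java.util.*;

public class Issue {

    private Long m_id;
    private Set m_comments = new HashSet();

    public Long getId() {
        return m_id;
    }

    // Only the persistence layer sets the id; Hibernate calls this reflectively.
    private void setId(Long id) {
        m_id = id;
    }

    // The mapped collection property stays private...
    private Set getComments() {
        return m_comments;
    }

    private void setComments(Set comments) {
        m_comments = comments;
    }

    // ...while business logic controls how the collection is modified and exposed.
    public void addComment(String text) {
        m_comments.add(text);
    }

    public Set getCommentsReadOnly() {
        return Collections.unmodifiableSet(m_comments);
    }
}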
I believe JDO gets similar privileged access to persistent fields by instrumenting (enhancing) the bytecode of the persistent classes.