Wednesday, December 31, 2008

Why to Consider Using an Application Server Tier

Toward the end of .NET Rocks!' latest interview with Oren Eini (which I enjoyed), Oren mentioned that he was thinking about "how to kill three-tier architecture", going on to say that the one reason you wanted to have an application server was "connection pooling". Richard Campbell went on to say that you might use an application server if you have "application resources or some kind of a set of objects that are processor intensive or long running that are independent, the execution of that is somewhat independent with what the web server has to do".

(It's great that .NET Rocks! has transcripts, which makes it extremely easy to quote this stuff. Thanks Carl.)

I think the discussion above presents an incomplete view that sells an independent application server tier far short. No, you don't always need application servers, but they have practical uses beyond the scenarios mentioned above, and they are often borderline-essential for a robust web application architecture.

The rest of this post is primarily intended for my .NET brethren. Three-tier server architecture discussions get rather confusing in the Microsoft world because a) Microsoft's tooling doesn't really support it well out of the box (and in fact much of it pushes you very hard in the two-tier direction), and b) both the web server and application server typically end up being IIS. Java and LAMP developers are quite familiar with using Apache to front for Tomcat, JBoss, or their company's favorite expensive commercial JEE server.

Perhaps most importantly, having separate application servers is critical to the proper implementation of a DMZ, which you most likely want if you are running on the capital-I Internet. In a nutshell, this architecture allows you to better protect your database if your externally-facing web server tier is compromised. See the Wikipedia article on DMZs (in the network-security sense) for a better explanation.

Additionally, separate web and application tiers allow for limitless options in independent provisioning, scaling, and tuning. The power here should not be underestimated. You can tune your web servers for I/O throughput and your application servers for raw processing horsepower if that is what your application demands. If you are serving a lot of static content, you can move it forward to the web server tier and take the load off your more expensive application servers. You can use different load-balancing and encryption strategies. You can also choose to use a cheaper computing platform on one tier (typically, the web tier) to save on hardware costs and licensing fees.
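
As a small concrete illustration, here is roughly what the Apache-fronting-an-app-server arrangement mentioned above can look like on the web tier, with static content served locally and application requests proxied to the internal tier. This is only a sketch using Apache 2.2's mod_proxy_ajp; the hostname and paths are made up:

# httpd.conf on a DMZ web server -- illustrative sketch only
LoadModule proxy_module     modules/mod_proxy.so
LoadModule proxy_ajp_module modules/mod_proxy_ajp.so

# Static content is served straight off the (cheaper) web tier...
DocumentRoot "/var/www/static"

# ...while application requests go to the app server behind the inner firewall.
ProxyPass /app ajp://appserver.internal:8009/app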

I have a book that I like to refer people to when this subject comes up -- Architecting Enterprise Solutions: Patterns for High-Capability Internet-based Systems, by Paul Dyson and Andrew Longshaw. It's a patterns book that discusses the tradeoffs involved in robust Internet architectures. I'm not aware of anything else quite like it, so if you're interested (or if you would simply like to read something that will probably expand your engineering horizons), get it.


I've been known to use the title of this post, "Why to Consider Using an Application Server Tier", as an interview question. I guess I might not be able to do that anymore. But it makes for a fun blog discussion topic, which in my opinion is a worthwhile trade.

Tuesday, December 23, 2008

Integrating Log4Net and WF

Like many people, I've been trying to take greater advantage of the public library lately. Of course, the supply of technical books is fairly limited, so I basically get whatever relevant-looking titles they happen to have on the shelf on a given day. (That's to say nothing of the great digital offerings that most libraries now have, but they're a little hard to read on the bus.)

One of the books that I have now is Pro WF. In the process of working through this book and improving my core Workflow Foundation skills, I've been trying to build some useful tools, such as the NUnit test fixture class that I wrote about recently.

I've grown a little tired of writing CodeActivities here and there just to print out the current value of some variable, so I started thinking about something more general with wider applicability. Put that together with the fact that I've never been particularly satisfied with Microsoft-supplied logging solutions (a subject for another post), and what I decided to do was create a custom WF activity that logs using Log4Net and takes advantage of dependency properties to configure the variables to log. The source code is on my Code Gallery page here.

Using the Log4NetActivity is very straightforward. It does assume that you configure Log4Net prior to use -- that should be handled in the startup of your host application. Then you just need to give it the name of a logger along with a message.
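
For instance, a bare-bones console host might handle that configuration like this -- a minimal sketch, assuming an XML config file named "log4net.config" sitting alongside the executable:

using System.IO;
using System.Workflow.Runtime;

class WorkflowHost
{
    static void Main()
    {
        // Configure Log4Net once, before any workflow reaches a Log4NetActivity.
        log4net.Config.XmlConfigurator.Configure(new FileInfo("log4net.config"));

        using (WorkflowRuntime runtime = new WorkflowRuntime())
        {
            // ... create, start, and wait on workflows as usual ...
        }
    }
}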

The message can -- and probably will -- use the syntax of a format string, which can pull from up to ten dependency properties (Arg0 to Arg9) configured on the activity. That's the only remotely-interesting thing about the design itself. Originally I wanted to allow a dynamic set of dependency properties, but in the end I didn't see any way to make that work. The closest thing I could find was this excellent post and sample code by Guy Burstein, but had I gone that route, I believe the best I would have been able to do was force you to declare each argument to log before binding, and that would have been annoying.

The activity can also use the Exception-logging capability of the Log4Net APIs, but the Exception itself is not configurable via a dependency property, limiting its usefulness. (I have no better ideas here.)

Certainly nothing revolutionary, but hopefully somewhat useful.

Friday, December 19, 2008

Dozer Custom Converter for Joda DateTime

I mentioned in my last post how excited I was to try out Dozer. Today I got the chance to play around with it a little bit. As a proof-of-concept for myself, I wrote a custom converter to handle mapping Joda-Time DateTimes to and from Java Calendars. I thought this should be very easy, and I was happy to find out that it was. The first step was to implement the CustomConverter interface:
package com.scottmcmaster365.dozerdemo;

import java.util.Calendar;

import net.sf.dozer.util.mapping.MappingException;
import net.sf.dozer.util.mapping.converters.CustomConverter;

import org.joda.time.DateTime;

/**
 * Custom converter for Joda DateTimes.
 * @author Scott McMaster
 * http://www.scottmcmaster365.com/
 */
public class DateTimeConverter implements CustomConverter {

    @SuppressWarnings("unchecked")
    @Override
    public Object convert(Object existingDestinationFieldValue,
            Object sourceFieldValue, Class destinationClass, Class sourceClass)
    {
        if (sourceFieldValue == null)
        {
            return null;
        }

        if (sourceFieldValue instanceof Calendar)
        {
            // Note that DateTime is immutable, so we can't do much
            // with the existingDestinationFieldValue.
            return new DateTime(sourceFieldValue);
        }
        else if (sourceFieldValue instanceof DateTime)
        {
            // Reuse the existing destination Calendar if Dozer gave us one.
            Calendar result;
            if (existingDestinationFieldValue == null)
            {
                result = Calendar.getInstance();
            }
            else
            {
                result = (Calendar) existingDestinationFieldValue;
            }
            result.setTimeInMillis(((DateTime) sourceFieldValue).getMillis());
            return result;
        }

        throw new MappingException("Misconfigured/unsupported mapping");
    }
}

The logic here is very straightforward, although you could imagine creating many more Dozer converters for DateTimes. (One that accepts a formatted String comes immediately to mind.) You can wire the converter up with a little bit of mapping XML.
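
Based on the Dozer documentation, the wiring in dozermappings.xml looks something like the following; I'm assuming here that the demo classes live in the same package as the converter:

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE mappings PUBLIC "-//DOZER//DTD MAPPINGS//EN"
  "http://dozer.sourceforge.net/dtd/dozerbeanmapping.dtd">
<mappings>
  <mapping>
    <class-a>com.scottmcmaster365.dozerdemo.TypeWithCalendar</class-a>
    <class-b>com.scottmcmaster365.dozerdemo.TypeWithDateTime</class-b>
    <field custom-converter="com.scottmcmaster365.dozerdemo.DateTimeConverter">
      <a>value</a>
      <b>value</b>
    </field>
  </mapping>
</mappings>

Here are a couple of unit tests that demonstrate the capability. Not shown are a couple of little classes, TypeWithCalendar and TypeWithDateTime, which have bean properties called "value" of type java.util.Calendar and org.joda.time.DateTime, respectively.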
import static org.junit.Assert.assertEquals;

import java.util.ArrayList;
import java.util.Calendar;
import java.util.List;

import net.sf.dozer.util.mapping.DozerBeanMapper;

import org.joda.time.DateTime;
import org.junit.Before;
import org.junit.Test;

/**
 * Test mapping from Calendar <-> DateTime using Dozer.
 * @author Scott McMaster
 */
public class TestDozerDateTimeConverter
{
    private DozerBeanMapper mapper;

    @Before
    public void setUp()
    {
        // Point Dozer at the mapping file that registers the custom converter.
        List<String> mappingFiles = new ArrayList<String>();
        mappingFiles.add("dozermappings.xml");
        mapper = new DozerBeanMapper();
        mapper.setMappingFiles(mappingFiles);
    }

    @Test
    public void testToCalendarType()
    {
        DateTime then = new DateTime(1995, 11, 27, 0, 0, 0, 0);
        TypeWithDateTime joda = new TypeWithDateTime();
        joda.setValue(then);

        TypeWithCalendar calendar =
                (TypeWithCalendar) mapper.map(joda, TypeWithCalendar.class);

        assertEquals(then.getMillis(), calendar.getValue().getTimeInMillis());
    }

    @Test
    public void testToJodaType()
    {
        // Note: Calendar months are zero-based, so 10 is November.
        Calendar then = Calendar.getInstance();
        then.set(1995, 10, 27);
        TypeWithCalendar calendar = new TypeWithCalendar();
        calendar.setValue(then);

        TypeWithDateTime joda =
                (TypeWithDateTime) mapper.map(calendar, TypeWithDateTime.class);

        assertEquals(then.getTimeInMillis(), joda.getValue().getMillis());
    }
}

Tuesday, December 16, 2008

Dozer Can Help Me Manage Parallel Object Hierarchies!

Some time ago, I wrote a post called "Robust Architecture for Object-Relational Mapping", in which I discussed and diagrammed a separation of "Business/Domain" and "Persistence" objects into separate, parallel object hierarchies. The idea is to isolate lazy loading and other ORM features (as well as any database schema oddities) away from the actual objects that implement the business logic. The main drawback to this approach is that it requires you to implement potentially-tedious and error-prone logic to transfer data back and forth between the layers. Taking this further, I think it's perfectly reasonable for a modern enterprise application to have up to four parallel object hierarchies:
  1. Persistence objects, which can closely mirror the database schema if necessary and guarantee that "transparent" ORM stays that way.
  2. Business/Domain objects, which reflect the application's core entities and the business logic that manipulates them.
  3. View objects, which are data objects optimized to serve the View layer.
  4. Message objects, which are exposed to the outside world via services and need to remain stable in the face of changes to the business layer.

In anything other than a trivial application and given current technologies, it is very difficult to use a single object model to serve all of the needs described above. (Trust me, I've tried.) So in reality, we spend a lot of time in the transfer implementation, moving values back and forth between the different flavors of objects, either with tedious set/get code, a hand-rolled reflection-based framework, or (more likely) a combination of both.
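
To make the pain concrete, the transfer code in question tends to look like this -- a purely hypothetical sketch, with ContactEntity (persistence) and Contact (domain) as made-up names:

// Hand-rolled transfer code between parallel hierarchies; every new
// field means touching every method like these. (Hypothetical classes.)
public class ContactAssembler {

    public Contact toDomain(ContactEntity entity) {
        Contact contact = new Contact();
        contact.setFirstName(entity.getFirstName());
        contact.setLastName(entity.getLastName());
        contact.setEmail(entity.getEmail());
        return contact;
    }

    public void copyToEntity(Contact contact, ContactEntity entity) {
        entity.setFirstName(contact.getFirstName());
        entity.setLastName(contact.getLastName());
        entity.setEmail(contact.getEmail());
    }
}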

Clearly this is annoying. So I was excited today when one of my friends told me about Dozer, which purports to be a "powerful, yet simple Java Bean to Java Bean mapper". I haven't tried it yet -- just spent a lot of time going through the documentation. But it appears that Dozer will allow me to wire up hierarchies like those I describe above in a minimal-code, declarative fashion. I'm looking forward to trying it out and reporting my experiences.

Sunday, December 7, 2008

Unit Testing WF Workflows with NUnit

When I looked around for approaches to testing Windows Workflow Foundation workflows, I was surprised at how few examples I found. There is a fair amount of buzz around testing custom activities, but not much on entire workflows. Maybe that's because folks consider testing a complete workflow to be an "integration testing" task rather than a "unit testing" one. As an eternal pragmatist, I could not possibly care less about that distinction if I tried. I mean, you really should be testing your complete workflows to the extent possible, and if your unit testing tools work for that, why not use them? So I set out to create a base class using NUnit which could easily be extended by test fixtures that want to run workflows.

My first cut at this is on my Code Gallery page. The base class is called WorkflowTestFixture. It takes a generic type parameter which is the (sequential) workflow that you want to execute. The resulting test fixture code comes out looking like this:

[TestFixture]
public class TestMyWorkflow : ScottMcMaster365.WorkflowTestFixture<MyWorkflow>
{
    [Test]
    public void TestNormalExecution()
    {
        AddArgument("AccountNumber", 60005);
        Run();
        Assert.AreEqual(20000.00M, GetOutput("AvailableCredit"));
    }
}


AddArgument, Run, and GetOutput are base class methods which set up the workflow's inputs, synchronously execute the workflow, and retrieve output parameters, respectively. The actual implementation of WorkflowTestFixture<> is nothing special -- in fact, it comes out looking a lot like some of the code that K. Scott Allen posted for testing custom activities. (I'm not sure why Scott didn't create a base class for the stuff that he finds overly verbose. Maybe he did that in a later post that doesn't Google as well.)
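
In rough outline -- this is a sketch of the general shape, not the actual source from Code Gallery -- the base class boils down to something like this:

using System;
using System.Collections.Generic;
using System.Threading;
using System.Workflow.Runtime;

public abstract class WorkflowTestFixture<TWorkflow>
{
    private readonly Dictionary<string, object> arguments =
        new Dictionary<string, object>();
    private Dictionary<string, object> outputs;

    // Queue up a named input parameter for the workflow.
    protected void AddArgument(string name, object value)
    {
        arguments[name] = value;
    }

    // Synchronously run the workflow to completion (or termination).
    protected void Run()
    {
        using (WorkflowRuntime runtime = new WorkflowRuntime())
        {
            AutoResetEvent done = new AutoResetEvent(false);
            runtime.WorkflowCompleted +=
                delegate(object sender, WorkflowCompletedEventArgs e)
                {
                    // Capture the output parameters for later assertions.
                    outputs = e.OutputParameters;
                    done.Set();
                };
            runtime.WorkflowTerminated += delegate { done.Set(); };

            WorkflowInstance instance =
                runtime.CreateWorkflow(typeof(TWorkflow), arguments);
            instance.Start();
            done.WaitOne();
        }
    }

    // Fetch a named output parameter captured at completion.
    protected object GetOutput(string name)
    {
        return outputs[name];
    }
}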

Clearly there are some workflows and styles of workflow for which this approach will not work well. Right now I see it as primarily useful for sequential workflows which can execute directly to completion without a lot of external integrations with databases, services, etc. That said, I believe that still constitutes an important class of workflow. As for the limitations, I hope to remove some of them in the future. Suggestions are welcome.

Tuesday, November 25, 2008

Looking for Tips on Formatting Code in Blogger

OK, if you read my last post, you probably noticed that the formatting of the Java and (especially) XML code was, well, not so great. And it took me some time just to get it that good. I have some code-to-highlighted-HTML tools that I like to use, but when I push it into Blogger, it gets mangled in various ways. For all its many faults, Spaces at least didn't give me a lot of trouble in this department. I searched around a bit for techniques and tools to improve this situation. Offhand, I didn't find anything that really impressed me as a good solution. I'll keep looking and trying new things, but in the meantime, if you have some good tips here, please share them.

dbUnit and HSQLDB Databases with Schema Names

There are several ways to manage unit tests that involve a data layer. Someday I'll enumerate them in another post. But when I'm doing Java, one of my favorites is using HSQLDB as a fast, in-memory stand-in for a more heavyweight database. I also like dbUnit to manage setting and resetting the data in the in-memory database. I ran into an interesting problem when trying to use dbUnit with an HSQLDB database that employed named schemas (like, say, Oracle). In case you've never had to create your own schemas in DDL before, you can create the schema "SCOTTSCHEMA" like this:
Statement ddl = conn.createStatement();
ddl.executeUpdate("CREATE SCHEMA SCOTTSCHEMA AUTHORIZATION DBA");
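
(For completeness, conn here is nothing more than a plain JDBC connection to an in-memory HSQLDB database, along these lines -- "testdb" is an arbitrary database name:)

import java.sql.Connection;
import java.sql.DriverManager;

public final class InMemoryDb {
    /** Opens a connection to a private in-memory HSQLDB database. */
    public static Connection open() throws Exception {
        Class.forName("org.hsqldb.jdbcDriver");
        return DriverManager.getConnection("jdbc:hsqldb:mem:testdb", "sa", "");
    }
}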

Now, when you get ready to use dbUnit and create your IDatabaseConnection, you need to pass the schema name:

IDatabaseConnection dbUnitConn = new DatabaseConnection(conn, "SCOTTSCHEMA");

Now I should be able to load up XML files using dbUnit that look like this for a hypothetical "CONTACTS" table:

<?xml version="1.0" encoding="UTF-8"?>
<dataset>
  <SCOTTSCHEMA.CONTACTS FIRST_NAME="Scott" LAST_NAME="McMaster"/>
  <SCOTTSCHEMA.CONTACTS FIRST_NAME="Tracy" LAST_NAME="McMaster"/>
</dataset>

It turns out that passing the schema name in the second parameter is necessary, but not sufficient: If you try to run a database operation against dbUnitConn at this point, you'll get the following exception:

org.dbunit.dataset.NoSuchTableException: SCOTTSCHEMA.CONTACTS
    at org.dbunit.database.DatabaseDataSet.getTableMetaData(DatabaseDataSet.java:222)
    at org.dbunit.operation.DeleteAllOperation.execute(DeleteAllOperation.java:109)
    at org.dbunit.operation.CompositeOperation.execute(CompositeOperation.java:79)
    at ...

When I first hit this, it was easier to figure out what was going on from the dbUnit source code than from the documentation. I discovered rather quickly that you also need to set FEATURE_QUALIFIED_TABLE_NAMES in the IDatabaseConnection's config:

dbUnitConn.getConfig().setFeature(DatabaseConfig.FEATURE_QUALIFIED_TABLE_NAMES, true);

Now dbUnit will build its internal map of database tables prefixed with the schema name and will be able to find the table metadata when you execute an operation. It seems like it would make sense to default this feature to "true" when you create a database connection with a schema name...
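
To recap, the whole setup -- connection, feature flag, and a dataset load -- ends up looking something like this, sketched against the dbUnit 2.x API of the day and assuming the dataset above is saved as "contacts.xml":

import java.io.File;
import java.sql.Connection;

import org.dbunit.database.DatabaseConfig;
import org.dbunit.database.DatabaseConnection;
import org.dbunit.database.IDatabaseConnection;
import org.dbunit.dataset.IDataSet;
import org.dbunit.dataset.xml.FlatXmlDataSet;
import org.dbunit.operation.DatabaseOperation;

public final class ContactsFixture {
    /** Resets the SCOTTSCHEMA.CONTACTS table from contacts.xml. */
    public static void reset(Connection conn) throws Exception {
        IDatabaseConnection dbUnitConn = new DatabaseConnection(conn, "SCOTTSCHEMA");
        dbUnitConn.getConfig().setFeature(
                DatabaseConfig.FEATURE_QUALIFIED_TABLE_NAMES, true);

        // Delete everything in the dataset's tables, then insert its rows.
        IDataSet dataSet = new FlatXmlDataSet(new File("contacts.xml"));
        DatabaseOperation.CLEAN_INSERT.execute(dbUnitConn, dataSet);
    }
}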

Saturday, November 22, 2008

My New Blog Location

Well, I finally did it. My old buddy Chad has been on me about moving my blog to a different platform. I was reluctant, because I feel like I've built up a pretty good body of content on my old blog, and I didn't want to risk losing regular readers or anyone who found me while looking for solutions to their technical problems on Google.

But some time ago, I turned off comments due to an endless stream of spam appearing on ancient posts. I'm tired of not having comments. I want comments. And I want a whole lot of other features that Windows Live Spaces does not provide, even after all these years.

So here I am on Blogger. I plan to keep the content the same -- book and article reviews and suggestions, discussion of the software development tools that I use, and sample code in Java, .NET, Ruby, and anything else that I find interesting. I'm glad you found this and hope you'll stick with me while I get this blog going. Of course, you can always find my previous work here.