Toward the end of .NET Rocks!' latest interview with Oren Eini (which I enjoyed), Oren mentioned that he was thinking about "how to kill three-tier architecture", going on to say that the one reason you wanted to have an application server was "connection pooling". Richard Campbell went on to say that you might use an application server if you have "application resources or some kind of a set of objects that are processor intensive or long running that are independent, the execution of that is somewhat independent with what the web server has to do".
(It's great that .NET Rocks! has transcripts, which makes it extremely easy to quote this stuff. Thanks Carl.)
I think the discussion above presents an incomplete view that sells an independent application server tier far short. No, you don't always need application servers, but they have practical uses beyond the scenarios mentioned above, and they are often borderline-essential for a robust web application architecture.
The rest of this post is primarily intended for my .NET brethren. Three-tier server architecture discussions get rather confusing in the Microsoft world because a) Microsoft's tooling doesn't really support it well out of the box (and in fact much of it pushes you very hard in the two-tier direction), and b) both the web server and application server typically end up being IIS. Java and LAMP developers are quite familiar with using Apache to front for Tomcat, JBoss, or their company's favorite expensive commercial JEE server.
Perhaps most importantly, having separate application servers is critical to the proper implementation of a DMZ, which you most likely want if you are running on the capital-I Internet. In a nutshell, this architecture allows you to better protect your database if your externally-facing web server tier is compromised. See the Wikipedia article on DMZs for a better explanation.
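To make the DMZ point concrete, here is a rough sketch of the firewall rules involved, in iptables terms. All addresses and ports below are purely illustrative, and a real deployment would of course be more involved:

```shell
# Hypothetical three-tier layout (all addresses/ports are illustrative):
#   Internet -> web tier (DMZ, 10.0.1.x) -> app tier (10.0.2.x) -> database (10.0.3.x)

# Outer firewall: only HTTP/HTTPS may reach the web tier
iptables -A FORWARD -p tcp -d 10.0.1.0/24 -m multiport --dports 80,443 -j ACCEPT
iptables -A FORWARD -d 10.0.1.0/24 -j DROP

# Inner firewall: the web tier may talk only to the app tier, on the app port
iptables -A FORWARD -p tcp -s 10.0.1.0/24 -d 10.0.2.0/24 --dport 8080 -j ACCEPT
iptables -A FORWARD -s 10.0.1.0/24 -j DROP

# Only the app tier may reach the database (e.g. SQL Server on 1433)
iptables -A FORWARD -p tcp -s 10.0.2.0/24 -d 10.0.3.0/24 --dport 1433 -j ACCEPT
iptables -A FORWARD -d 10.0.3.0/24 -j DROP
```

The payoff: even if an attacker owns a box in the web tier, there is no direct network route from it to the database -- the worst they can do from there is talk to the application servers over the one port you allow.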
Additionally, separate web and application tiers give you nearly limitless options for independently provisioning, scaling, and tuning each tier. The power here should not be underestimated. You can tune your web servers for I/O throughput and your application servers for raw processing horsepower if that is what your application demands. If you are serving a lot of static content, you can move it forward to the web server tier and take the load off your more expensive application servers. You can use different load-balancing and encryption strategies. You can also choose to use a cheaper computing platform on one tier (typically, the web tier) to save on hardware costs and licensing fees.
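As a sketch of the static-content offload described above: in the Java/LAMP world this is often just a few lines of Apache httpd (mod_proxy) configuration on the web tier. The hostnames and ports here are made up for illustration:

```apache
# Web tier: serve static assets locally, proxy everything else to the app tier.
# Requires mod_proxy and mod_proxy_http to be loaded.
<VirtualHost *:80>
    ServerName www.example.com
    DocumentRoot /var/www/static

    # The "!" exclusion keeps /static/ out of the proxy, so images, CSS,
    # and JS are served directly from this box, off the app servers' backs
    ProxyPass /static/ !

    # All dynamic requests are forwarded to the application server tier;
    # a load balancer (or mod_proxy_balancer) could sit here instead
    ProxyPass        / http://app1.internal:8080/
    ProxyPassReverse / http://app1.internal:8080/
</VirtualHost>
```

The same split lets you tune each box differently: the web tier cares about connection handling and I/O, while app1.internal can be sized for CPU and memory.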
I have a book that I like to refer people to when this subject comes up -- Architecting Enterprise Solutions: Patterns for High-Capability Internet-based Systems, by Paul Dyson and Andrew Longshaw. It's a patterns book that discusses the tradeoffs involved in robust Internet architectures. I'm not aware of anything else quite like it, so if you're interested (or if you would simply like to read something that will probably expand your engineering horizons), get it.
I've been known to use the title of this post, "Why to Consider Using an Application Server Tier", as an interview question. I guess I might not be able to do that anymore. But it makes for a fun blog discussion topic, which in my opinion is a worthwhile trade.