The system was not keeping up with their traffic.
Typically, in a situation like this, we would recommend re-architecting the application, piece by piece, replacing IIS with LAMP and optimizing database access.
In this case, the client was low on budget and didn’t want to make too many changes. They were looking for a quick fix.
After a careful review of their .asp application, it became clear that we were dealing with a chaotic, buggy system, and that we would have to cut deep if we wanted to optimize the existing code.
So we decided to go with a different approach: keep everything as is, and put Nginx in front to reverse-proxy all incoming requests.
What is a Reverse Proxy?
A reverse proxy is a web server that sits in front of your back-end servers and handles all incoming requests from end users: caching, load balancing, and forwarding to the back end as necessary.
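In Nginx configuration, the simplest form of this looks roughly like the sketch below (iis.internal and example.com are placeholders, not names from our setup):

```nginx
# Minimal reverse-proxy sketch: Nginx listens publicly and forwards
# every request to the back end (iis.internal is a placeholder).
server {
    listen 80;
    server_name example.com;

    location / {
        proxy_pass       http://iis.internal;
        # Forward the original Host header so the right vhost answers
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```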
IIS was slow under this load. Nginx is extremely fast at serving static content.
If we can’t rewrite the code, let’s have Nginx handle all traffic, connect to IIS internally, and cache the response from IIS, so that future requests can be fulfilled without ever hitting IIS.
The idea is to go from a million users downloading an image from IIS to those same users downloading everything directly from Nginx. Nginx is faster, lighter weight, and scales easily.
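For reference, Nginx’s built-in proxy_cache module can implement this general idea directly: serve repeats from local disk and hit the back end only on a miss. A minimal sketch (the cache path, zone name, validity window, and iis.internal are all placeholders, not our production values):

```nginx
# Sketch of Nginx's native response caching: responses from the back
# end are stored on disk and served locally until they expire.
proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=iis_cache:10m
                 max_size=1g inactive=7d;

server {
    listen 80;

    location / {
        proxy_cache       iis_cache;
        proxy_cache_valid 200 1h;    # keep successful responses for an hour
        proxy_pass        http://iis.internal;
    }
}
```

Our actual setup cached to plain files via PHP instead, because the routing needed extra logic, but the principle is the same.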
Whenever we set up reverse proxies, one of our favorite options is Squid. It has been around for a long time, is very easy to set up, and provides a good reverse-proxy caching solution.
In this case however, incoming requests required further logic before a request could be routed to IIS. Nginx is just as fast and offers greater flexibility by letting us use PHP.
We provisioned a new dedicated server for the client and installed Nginx with PHP-FPM.
We analyzed all the requests the IIS system was handling. They were all HTTP GET requests, with varying parameters. IIS served several vhosts, so we had to properly distinguish http://DomainA.com/dosomething?a=b from http://DomainB.com/dosomething?a=b
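One way to keep the domains from colliding is to give each vhost its own server block and cache root in Nginx. A sketch, with hypothetical domain names and paths:

```nginx
# Sketch: one server block per domain, each with its own cache root,
# so /dosomething?a=b on DomainA and DomainB are cached separately.
server {
    listen 80;
    server_name domaina.com;
    root /var/www/cache/domaina.com;
}

server {
    listen 80;
    server_name domainb.com;
    root /var/www/cache/domainb.com;
}
```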
We configured Nginx to rewrite all requests for files that did not exist to a notfound.php script:
    # Leave directory requests alone
    if (-d $request_filename) {
        break;
    }
    # Anything that is not an existing file goes to notfound.php,
    # with the original URI passed along as the query string
    if (!-f $request_filename) {
        rewrite ^(.*)$ /notfound.php?$1 last;
    }
In notfound.php, we would connect to IIS to retrieve the image / static page / dynamic content, then save it locally.
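The script itself isn’t shown in this write-up; the following is a minimal sketch of that fetch-and-save logic, assuming an internal IIS address (iis.internal is a placeholder) and a cache root writable by PHP-FPM. Query parameters would additionally need to be encoded into the saved file name, which is omitted here:

```php
<?php
// Sketch of notfound.php: fetch the missing resource from IIS,
// save it locally, and return it to the client.

$docroot = '/var/www/cache';       // local cache root (assumption)
$backend = 'http://iis.internal';  // internal IIS address (placeholder)

// Nginx passes the original URI as the query string: /notfound.php?$1
$uri = '/' . ltrim($_SERVER['QUERY_STRING'], '/');

// Fetch from IIS, forwarding the original Host header so the
// correct vhost answers.
$ctx = stream_context_create([
    'http' => ['header' => 'Host: ' . $_SERVER['HTTP_HOST']],
]);
$body = file_get_contents($backend . $uri, false, $ctx);
if ($body === false) {
    http_response_code(502);
    exit;
}

// Save it locally, keyed by host, so Nginx serves the file
// directly on the next request.
$path = $docroot . '/' . $_SERVER['HTTP_HOST'] . $uri;
if (!is_dir(dirname($path))) {
    mkdir(dirname($path), 0755, true);
}
file_put_contents($path, $body);

echo $body;
```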
The IIS system served different content based on the user’s IP address and origin, so we had to take that into account in the file names we saved (e.g. /us/google/welcome.gif vs. /canada/yahoo/welcome.gif).
After testing everything locally, we had the client update their DNS, sending all traffic to Nginx instead of IIS.
The impact on performance has been very noticeable.
IIS CPU utilization dropped from 70% to below 5% at all times, and Nginx was barely breaking a sweat, handling the majority of requests locally and falling back to IIS only for parameter combinations it had never seen before.
We later developed a simple way to “expire” content on Nginx so that whenever the client updated the IIS Content Management System, the changes would propagate properly.
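One way to implement that kind of expiry (the mechanism isn’t detailed here): a small endpoint the CMS calls after an update, which deletes the locally cached copy so the next request falls through to IIS and gets re-cached. A sketch, with a hypothetical expire.php:

```php
<?php
// Sketch of a cache-expiry hook (expire.php is a hypothetical name):
// the CMS calls this after an update, and we delete the saved copy.

$docroot = '/var/www/cache';          // must match notfound.php's cache root
$host    = $_GET['host'] ?? '';
$uri     = $_GET['uri']  ?? '';

// Resolve the path and make sure the delete stays inside the cache root.
$path = realpath($docroot . '/' . $host . $uri);
if ($path !== false
    && strncmp($path, $docroot, strlen($docroot)) === 0
    && is_file($path)) {
    unlink($path);
    echo "expired\n";
} else {
    echo "not cached\n";
}
```

In production this endpoint would also need authentication so that arbitrary visitors can’t purge the cache.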
There is one aspect of this solution that is still lacking and worth mentioning. In the event of a sudden burst of new requests with never-seen-before parameters, the current implementation will forward all of those requests to IIS until the files are created locally. A better approach would be to queue requests for new content, so that IIS is hit no more than once per resource during such a burst.
Implementing a RabbitMQ/Cassandra queue for new requests would be the next step here, so we can avoid an initial slowdown when hit with a burst of new requests.
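Short of a full message queue, the same "fetch once" behavior can be approximated with an exclusive file lock, so that a burst of requests for the same new URL reaches IIS only once while the rest wait and then serve the freshly written file. A sketch (fetch_from_iis is a placeholder for the retrieval logic in notfound.php):

```php
<?php
// Sketch of stampede protection via flock(): the first request for a
// missing file takes the lock and fetches from IIS; concurrent
// requests block on the lock, then find the file already cached.

function fetch_once(string $cachePath, callable $fetchFromIis): string
{
    $lock = fopen($cachePath . '.lock', 'c');
    flock($lock, LOCK_EX);              // only one fetcher per path
    try {
        // Re-check after acquiring the lock: another request may have
        // filled the cache while we were waiting.
        if (!is_file($cachePath)) {
            file_put_contents($cachePath, $fetchFromIis());
        }
        return file_get_contents($cachePath);
    } finally {
        flock($lock, LOCK_UN);
        fclose($lock);
    }
}
```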
SPI engineers came up with a quick fix that didn’t involve any changes to the original application and made a huge impact on throughput and on the number of concurrent connections the service can handle.
If you’re dealing with massive traffic and you’re not using Nginx yet, you owe it to yourself to take it for a spin.