Fast Polling vs. Websockets

WebSockets are a great addition to the HTTP protocol suite, but there are numerous situations where they cannot be used.

  • Some companies have firewalls that will prevent WebSockets from working.
  • If you are deploying software in a shared hosting environment, you may not be permitted to use WebSockets.
  • If you are behind a reverse proxy that isn’t configured for, or whose software doesn’t support, pass-through of the WebSocket protocol, WebSockets won’t work.

Many people use NGINX because it is a very fast static WWW server and reverse proxy. The NGINX documentation explains how to set up a reverse proxy for WebSockets, but note that WebSocket proxying was unsupported before version 1.3.13.

People have implemented libraries of client- and server-side code to assure that something like WebSockets works. For example, Socket.IO claims to support WebSockets, falling back to Flash Sockets or long polling when they are unavailable. The idea is that you write your code as if WebSockets work, against a WebSocket-flavored API, and the communication is done under the hood in whatever manner works. This seems like a great solution because you can deploy your code and it should work even for IE6 users connecting to the Internet over a dial-up modem.

Let’s assume we’re in a situation where WebSockets don’t work though, and examine the alternatives.

Flash Sockets work on computers that have Flash installed. You cannot count on Flash being present, but it’s OK to fall back to polling when it isn’t. There’s another case where Flash may be problematic: I notice in my Safari browser on OS X Mavericks that Flash content appears, but you have to click on the Flash element before it is allowed to run. This is a battery-saving move by OS X that makes a lot of sense. But if the power saver stops Flash Sockets from working, your socket library may take a very long time to figure that out before falling back to polling.

The next option is long polling. For this technique, the browser does an XHR request and the server simply doesn’t respond until it has something to send. For example, the request might be “send me new messages” and the server will wait until there are new messages to send and sends them as a normal HTTP response.
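A minimal client-side sketch of that loop, using jQuery as the rest of this post does (the URL and `handleMessage` are illustrative placeholders, not part of any library):

```javascript
// Long poll loop: the request hangs until the server has messages to send.
function longPoll() {
    $.ajax('/messages', { dataType: 'json' })
        .done(function(messages) {
            messages.forEach(handleMessage);  // your application's handler
            longPoll();                       // reconnect immediately and wait again
        })
        .fail(function() {
            setTimeout(longPoll, 5000);       // back off before retrying
        });
}
```

The key point is that the `.done` handler immediately issues the next request, so from the server's point of view there is almost always a poll hanging open.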

There are issues with long polling, though.

If you want to do 2-way communication with the server, you are effectively using 2 sockets. One is tied up hanging/waiting for the long poll response, and the other is used by the client to send new information to the server.

Long polling is also problematic because the client has to be able to handle XHR errors, some of which are tricky or even impossible to handle.

Consider the case where the user is riding a train. The wireless signal and connection to the Internet are spotty, and even unavailable as the train goes through tunnels. If a long poll request has been made and the user’s connection drops, what happens?

Your client code may or may not be able to detect that the connection has been lost. It has been widely observed that none of the error handling mechanisms in XHR trigger when the connection is dropped while a long poll request is pending/outstanding. The usual answer is to implement a timer via setTimeout() to monitor the connection and use xhr.abort() to kill the hung request. The problem is that you incur a potentially large latency (the duration of your setTimeout call) before you know the connection has been dropped.
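A minimal sketch of that watchdog pattern (the names are illustrative; `xhr` is anything with an `abort()` method, such as the object returned by `$.ajax` or a raw `XMLHttpRequest`):

```javascript
// Watchdog for a pending long poll request. Aborts the request and notifies
// the caller if no response arrives within limitMs milliseconds.
function watchdog(xhr, limitMs, onDead) {
    var timer = setTimeout(function() {
        xhr.abort();   // kill the hung request
        onDead();      // caller can reconnect / start a new poll here
    }, limitMs);
    // returns a disarm function; call it from the success/error handlers
    return function clear() {
        clearTimeout(timer);
    };
}
```

Note the trade-off: the smaller limitMs is, the sooner you detect a dead connection, but the more likely you are to abort a long poll that was simply still waiting for data.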

Long polling also requires the server to cooperate, and that can be costly. For example, if we’re asking for new messages, the server has to loop: “are there new messages? are there new messages?” And both the client and the server have TCP settings that can time out the connection outside of the browser API.

All this leads to the one way to implement Comet-style bidirectional communication in a way guaranteed to work: short polling. Short polling is sure to work because it is just another port 80 request from the browser. If you can browse WWW pages, you can perform short polls.

For short polling, you fire off an XHR request and the server immediately responds. It is a form of Remote Procedure Call (RPC). The client uses setInterval() to send the XHR request. The response is routed to the appropriate handler logic.

As with any short XHR request, you have to deal with connection failures and timeouts. The answer is to use setTimeout() to monitor the connection and call xhr.abort() if the server doesn’t respond in a timely manner.

Short polling is slightly more expensive than WebSockets in some senses and less expensive in others.

Modern browsers will make multiple socket connections to a server during the fetch of a WWW page. Once the initial HTML is fetched from the server, the browser can concurrently download the static items it references, such as the CSS files, the images, and so on. These connections are generally keep-alive type so there isn’t a need to open a new socket for each static item – a time consuming and resource intensive operation.

After a timeout period elapses where the keep-alive sockets are not used, they will be closed by the server. Some servers will close these connections after just one second.

Short polling takes advantage of these keep-alive socket connections. As with an actual WebSocket, the underlying TCP/IP connection stays persistently open. The downside is that for each short poll request, the client sends over a complete set of HTTP headers, and the server’s response also contains a set of headers. I figure 200 users doing short polls once per second would consume about a megabit per second of bandwidth for the headers alone. If you can poll every 2 seconds instead, your bandwidth consumption is obviously cut in half.
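The arithmetic behind that estimate, assuming roughly 300 bytes of headers in each direction per poll (a typical figure, not a measurement):

```javascript
var headerBytes = 300 * 2;          // request + response headers, per poll
var bitsPerPoll = headerBytes * 8;  // 4800 bits
var users = 200,
    pollsPerSecond = 1;
var bitsPerSecond = bitsPerPoll * users * pollsPerSecond;
console.log(bitsPerSecond);         // 960000 -- just under 1 Mbit/s
```

Halving the poll rate halves bitsPerSecond, which is where the “cut in half” figure comes from.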

Something that is in short polling’s favor is that plain old HTTP protocol permits gzip encoding; you cannot count on WebSockets to support or implement any compression at all.

The following (untested) code illustrates how you would implement short polling on the client side, using jQuery.

// shortpoll.js
(function() {
    // constants
    // set this to the URL of your backend handler:
    var POLL_URL = '/remote_endpoint',
        // set this to the short poll frequency in milliseconds:
        POLL_FREQUENCY = 1000;  // every second

    // true private variables
    var queue = [],
        inProgress = false;

    // create namespace
    $.fastPoll = $.fastPoll || {};

    // send arbitrary messages to the server side
    $.fastPoll.sendMessage = function(message) {
        queue.push(message);
    };

    // application will replace this with an actual handler
    $.fastPoll.receiveMessage = function(message) {
        console.log('received message');
    };

    function shortPoller() {
        // only one outstanding request at a time
        if (inProgress) {
            return;
        }
        inProgress = true;
        var messages = queue;
        queue = [];

        $.ajax(POLL_URL, {
            type: 'POST',
            dataType: 'json',
            data: JSON.stringify(messages),
            contentType: 'application/json',
            timeout: 5000,  // optimistic 5 second timeout
            success: function(data) {
                // data is an array of messages
                for (var i = 0, len = data.length; i < len; i++) {
                    $.fastPoll.receiveMessage(data[i]);
                }
                inProgress = false;
            },
            error: function() {
                // error: put the unsent messages back at the front of the queue
                queue = messages.concat(queue);
                inProgress = false;
            }
        });
    }

    $(document).ready(function() {
        setInterval(shortPoller, POLL_FREQUENCY);
    });
}());
This implementation mimics the behavior of WebSockets. The application calls $.fastPoll.sendMessage(message) to send any arbitrary message to the server side. The application would override $.fastPoll.receiveMessage with a function that handles incoming messages. It would be trivial to implement an evented wrapper around this code.
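One way such a wrapper might look, assuming each message is an object with a `type` field (the field name is my own convention, not something the code above requires):

```javascript
// Dispatch incoming messages to per-type handlers.
var handlers = {};

// register a handler for a given message type
function on(type, fn) {
    handlers[type] = fn;
}

// route one incoming message to its handler, if any
function dispatch(message) {
    var fn = handlers[message.type];
    if (fn) {
        fn(message);
    }
}

// wire it up by replacing the default receiver:
// $.fastPoll.receiveMessage = dispatch;
```

Application code then calls on('chat', ...) and so on, and never touches the polling machinery.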

The shortPoller() function is called once per second as written. The frequency can be adjusted by changing POLL_FREQUENCY: 2000 for every 2 seconds, 500 for twice a second, and so on.

What’s sent to the server and received by the client are arrays of arbitrary messages. The code doesn’t care what the message is – it can be a complex JavaScript object or a primitive type (number, string, etc.). As long as it can be encoded as JSON, it will be sent.

I did not present any server logic, because that is dependent on the platform you might use. The server logic would simply iterate through the received message array and process each message. The client logic presented above does exactly that in the success handler of the $.ajax call.

Messages to be sent are simply added to the end of the queue[] array. The shortPoller function removes these and resets the array before doing the AJAX request. On error, the messages that failed to send are added back to the front of the queue[] array.

The inProgress variable assures that only one fast poll AJAX request at a time is pending.

The benefits of this implementation are that the error handling is trivial, only one socket is used for bi-directional communication, it acts like a WebSocket (programmatically), it runs in any browser (including IE6), it is not likely to be blocked by firewalls or mishandled by routers on the network between client and server, and it does not require any browser plugins like Flash or Java.

The downside is the bandwidth cost of sending and receiving the HTTP headers with each AJAX request. The bandwidth usage of HTTP headers has been analyzed at length elsewhere; you can control which headers the server sends, but the HTTP specification does require certain headers to be included.
