<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:media="http://search.yahoo.com/mrss/"><channel><title><![CDATA[Simon Baynes : Software Architect]]></title><description><![CDATA[I am a London based Software Architect. I specialise in building high performance, scalable, robust applications with high levels of test coverage.]]></description><link>https://baynesblog-ghost2.azurewebsites.net/</link><generator>Ghost 0.9</generator><lastBuildDate>Sat, 11 Apr 2026 19:27:02 GMT</lastBuildDate><atom:link href="https://baynesblog-ghost2.azurewebsites.net/rss/" rel="self" type="application/rss+xml"/><ttl>60</ttl><item><title><![CDATA[Google AMP, My Take]]></title><description><![CDATA[<p>Personally I think <a href="https://github.com/ampproject/amphtml">Google AMP</a> is a travesty. It is pitched as a way to speed up content delivery on mobile, which it undoubtedly achieves. However, it does this by bullying you into having to do it, as you will be penalised in search results for not having an AMP</p>]]></description><link>https://baynesblog-ghost2.azurewebsites.net/google-amp-my-take/</link><guid isPermaLink="false">cce36782-663c-40ed-9666-2d4c8d035c60</guid><category><![CDATA[Google AMP]]></category><dc:creator><![CDATA[Simon Baynes]]></dc:creator><pubDate>Thu, 07 Apr 2016 16:26:58 GMT</pubDate><media:content url="http://blog.bayn.es/content/images/2016/04/new-google-logo.jpg" medium="image"/><content:encoded><![CDATA[<img src="http://blog.bayn.es/content/images/2016/04/new-google-logo.jpg" alt="Google AMP, My Take"><p>Personally I think <a href="https://github.com/ampproject/amphtml">Google AMP</a> is a travesty. It is pitched as a way to speed up content delivery on mobile, which it undoubtedly achieves. 
However, it does this by bullying you into adopting it, as you will be penalised in search results for not having an AMP version of your content. It is not Google's role to police the World Wide Web by stopping publishers from jamming their pages with ads and slowing them down. Improving this situation should be driven by market forces, not by what Google thinks. What is wrong with responsive design? Google AMP feels like the bad old days of building WML pages.</p>

<blockquote>
  <p>Personally I think Google AMP is a travesty.</p>
</blockquote>

<p>Google AMP is limited: it does not allow you to do many things that a modern publisher would like to do. A good example is real-time updates. There is nothing in the current standard that supports live updates in an article; there is just an <a href="https://github.com/ampproject/amphtml/issues/909">issue on GitHub</a> with some underwhelming suggestions. If you want to do this in standard HTML there is no problem; there are several options, as long as you can scale your infrastructure. This is one of my main issues: it removes competitive advantage. If you have a great idea you want to deliver to your growing mobile audience and AMP doesn't support it, you are going to have to make suggestions to the specification and wait and see. This is terrible. AMP by its nature will always trail behind what can be done with HTML and JavaScript.</p>

<p>AMP adds another presentation you have to maintain. AMP doesn't replace the mobile representation of your pages, it adds another one. This means that a customer coming from a link in an email will end up on your standard HTML pages, whereas one coming from a Google search will see the AMP version. This adds a serious additional expense to publishers in design, development and testing. It also creates a confusing experience for customers.</p>

<blockquote>
  <p>AMP adds another presentation you have to maintain.</p>
</blockquote>

<p>So, coming back to my original point: this was pitched as a way to speed up content delivery on mobile, and I do not believe Google at all on this. For me it is something they had to do in response to the advertising dollar land grabs made by Facebook with Instant Articles, and Apple with Apple News. So my cynical view is that this is just a defensive manoeuvre by Google to protect its revenues.</p>

<blockquote>
  <p>So my cynical view is that this is just a defensive manoeuvre by Google to protect its revenues.</p>
</blockquote>]]></content:encoded></item><item><title><![CDATA[Introducing Html2Markdown]]></title><description><![CDATA[<p>I originally created <a href="https://github.com/baynezy/Html2Markdown">Html2Markdown</a> in 2013 as a .Net class library, so that I could create <a href="https://daringfireball.net/projects/markdown/">Markdown</a> versions of my blog posts and migrate to a new blogging engine. Back then, I only needed support for a subset of the standard, so that was all I included.</p>

<p>About six</p>]]></description><link>https://baynesblog-ghost2.azurewebsites.net/introducing-html2markdown/</link><guid isPermaLink="false">e587bebe-b277-4a60-b67c-19faeb6f1cc6</guid><category><![CDATA[HTML]]></category><category><![CDATA[Markdown]]></category><dc:creator><![CDATA[Simon Baynes]]></dc:creator><pubDate>Wed, 02 Dec 2015 16:06:02 GMT</pubDate><media:content url="http://blog.bayn.es/content/images/2015/12/Html2Markdown-medium.png" medium="image"/><content:encoded><![CDATA[<img src="http://blog.bayn.es/content/images/2015/12/Html2Markdown-medium.png" alt="Introducing Html2Markdown"><p>I originally created <a href="https://github.com/baynezy/Html2Markdown">Html2Markdown</a> in 2013 as a .Net class library, so that I could create <a href="https://daringfireball.net/projects/markdown/">Markdown</a> versions of my blog posts and migrate to a new blogging engine. Back then, I only needed support for a subset of the standard, so that was all I included.</p>

<p>About six months later I received my first pull request, adding <code>&lt;blockquote&gt;</code> support. At that point it was clear that other people had use cases for the library, so I decided to complete support for the <a href="https://daringfireball.net/projects/markdown/">Markdown specification</a>.</p>

<h2 id="tryitout">Try it out</h2>

<p>If you want to try it out, I've created an application that showcases what it can do.</p>

<p><a href="http://html2markdown.bayn.es">http://html2markdown.bayn.es</a></p>

<h3 id="nuget">NuGet</h3>

<p>If you want to include this library in your project then it is available as a <a href="https://www.nuget.org/packages/Html2Markdown/">NuGet package</a>.</p>

<pre><code>PM&gt; Install-Package Html2Markdown
</code></pre>]]></content:encoded></item><item><title><![CDATA[Real-time Web Apps with Server-Sent Events (pt 2)]]></title><description><![CDATA[<p>This is the second part of a two part series about building real-time web applications with server-sent events.</p>

<ul>
<li><a href="http://bayn.es/real-time-web-applications-with-server-sent-events-pt-1/">Building Web Apps with Server-Sent Events - Part 1</a></li>
</ul>

<h2 id="reconnecting">Reconnecting</h2>

<p>In this post we are going to look at handling reconnection if the browser loses contact with the server. Thankfully the native</p>]]></description><link>https://baynesblog-ghost2.azurewebsites.net/real-time-web-apps-with-server-sent-events-pt-2/</link><guid isPermaLink="false">c806ca6b-a1a1-4852-b642-8de413f21fc4</guid><category><![CDATA[Node.js]]></category><category><![CDATA[Server-Sent Events]]></category><dc:creator><![CDATA[Simon Baynes]]></dc:creator><pubDate>Thu, 20 Aug 2015 05:27:18 GMT</pubDate><media:content url="http://blog.bayn.es/content/images/2015/08/html5.png" medium="image"/><content:encoded><![CDATA[<img src="http://blog.bayn.es/content/images/2015/08/html5.png" alt="Real-time Web Apps with Server-Sent Events (pt 2)"><p>This is the second part of a two part series about building real-time web applications with server-sent events.</p>

<ul>
<li><a href="http://bayn.es/real-time-web-applications-with-server-sent-events-pt-1/">Building Web Apps with Server-Sent Events - Part 1</a></li>
</ul>

<h2 id="reconnecting">Reconnecting</h2>

<p>In this post we are going to look at handling reconnection if the browser loses contact with the server. Thankfully the native JavaScript functionality for SSEs (the <a href="https://developer.mozilla.org/en-US/docs/Web/API/EventSource">EventSource</a>) handles this automatically. You just need to make sure that your server-side implementation supports the mechanism.</p>

<p>When the browser reconnects to your SSE end point, it will send a special HTTP header, <code>Last-Event-Id</code>, in the reconnection request. In the previous part of this blog series we looked at just sending events with the <code>data</code> component, which looked something like this:-</p>

<pre><code>data: The payload we are sending\n\n
</code></pre>

<p>Now while this is enough for the events to make it to your client-side implementation, we need more information to handle reconnection. To do this we need to add an event id to the output.</p>

<p>E.g.</p>

<pre><code>id: 1439887379635\n
data: The payload we are sending\n\n
</code></pre>

<p>The important thing to understand here is that each event needs a unique identifier, so that on reconnection the client can tell the server (using the <code>Last-Event-Id</code> header) which event it received last.</p>
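<p>For illustration, the reconnection request the browser then sends looks something like this (a sketch; HTTP header names are case-insensitive and the exact set of headers varies by browser):</p>

```http
GET /api/updates HTTP/1.1
Accept: text/event-stream
Last-Event-Id: 1439887379635
```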

<h2 id="persistence">Persistence</h2>

<p>In our previous example we used <a href="http://redis.io/topics/pubsub">Redis Pub/Sub</a> to inform <a href="https://nodejs.org/">Node.js</a> that it needs to push a new SSE to the client. Redis Pub/Sub is topic-based communication, which means a message is delivered to all <em>connected clients</em> and then removed from the topic. So there is no persistence for clients that reconnect. To implement this we need to add a persistence layer, and in this demo I have chosen to use <a href="https://www.mongodb.org/">MongoDB</a>.</p>

<p>Essentially we will be pushing events into both Redis and MongoDB. Redis will still be our method of initiating an SSE getting sent to the browser, but we will also be storing that event in MongoDB so we can query it on a reconnection to get anything we've missed.</p>

<h2 id="thecode">The Code</h2>

<p>OK so let us look at how we can actually implement this.</p>

<h3 id="updateserverevent">Update ServerEvent</h3>

<p>We need to update the ServerEvent object to support having an <code>id</code> for an event.</p>

<pre><code>function ServerEvent(name) {
    // "name" doubles as the event id output by payload()
    this.name = name || "";
    this.data = "";
};

ServerEvent.prototype.addData = function(data) {
    var lines = data.split(/\n/);

    for (var i = 0; i &lt; lines.length; i++) {
        var element = lines[i];
        this.data += "data:" + element + "\n";
    }
}

ServerEvent.prototype.payload = function() {
    var payload = "";
    if (this.name != "") {
        payload += "id: " + this.name + "\n";
    }

    payload += this.data;
    return payload + "\n";
}
</code></pre>

<p>This is pretty straightforward string manipulation and won't impress anyone, but it is the foundation for what follows.</p>
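<p>To see the wire format this produces, here is the object exercised in a self-contained sketch (the <code>ServerEvent</code> code from above is repeated so the snippet runs on its own):</p>

```javascript
// ServerEvent as defined in the post, repeated here so the example is self-contained.
function ServerEvent(name) {
    // "name" doubles as the event id output by payload()
    this.name = name || "";
    this.data = "";
}

ServerEvent.prototype.addData = function (data) {
    var lines = data.split(/\n/);
    for (var i = 0; i < lines.length; i++) {
        this.data += "data:" + lines[i] + "\n";
    }
};

ServerEvent.prototype.payload = function () {
    var payload = "";
    if (this.name != "") {
        payload += "id: " + this.name + "\n";
    }
    payload += this.data;
    return payload + "\n"; // the blank line terminates the event
};

// A document's timestamp becomes the event id, exactly as in replaySSEs().
var evt = new ServerEvent(1439887379635);
evt.addData("The payload we are sending");
console.log(evt.payload());
// id: 1439887379635
// data:The payload we are sending
```

<p>Note how the timestamp passed in as <code>name</code> becomes the <code>id:</code> line, which is what the browser later echoes back in the <code>Last-Event-Id</code> header.</p>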

<h3 id="storeeventsinmongodb">Store Events in MongoDB</h3>

<p>We need to update the <code>post.js</code> code to also store new events in MongoDB.</p>

<pre><code>app.put("/api/post-update", function(req, res) {
    var json = req.body;
    json.timestamp = Date.now();

    eventStorage.save(json).then(function(doc) {
        dataChannel.publish(JSON.stringify(json));
    }, errorHandling);

    res.status(204).end();
});
</code></pre>

<p>The <code>event-storage</code> module looks as follows:</p>

<pre><code>var Q = require("q"),
    config = require("./config"),
    mongo = require("mongojs"),
    db = mongo(config.mongoDatabase),
    collection = db.collection(config.mongoScoresCollection);

module.exports.save = function(data) {
    var deferred = Q.defer();
    collection.save(data, function(err, doc){
        if(err) {
            deferred.reject(err);
        }
        else {
            deferred.resolve(doc);
        }
    });

    return deferred.promise;
};
</code></pre>

<p>Here we are just using basic MongoDB commands to save a new event into the collection. Yep, that is it: we are now additionally persisting the events so they can be retrieved later.</p>

<h3 id="retrievingeventsonreconnection">Retrieving Events on Reconnection</h3>

<p>When an <code>EventSource</code> reconnects after a disconnection, it passes a special header, <code>Last-Event-Id</code>, so we need to look for it and return the events that were broadcast while the client was disconnected.</p>

<pre><code>app.get("/api/updates", function(req, res){
    initialiseSSE(req, res);

    if (typeof(req.headers["last-event-id"]) != "undefined") {
        replaySSEs(req, res);
    }
});

function replaySSEs(req, res) {
    var lastId = req.headers["last-event-id"];

    eventStorage.findEventsSince(lastId).then(function(docs) {
        for (var index = 0; index &lt; docs.length; index++) {
            var doc = docs[index];
            var messageEvent = new ServerEvent(doc.timestamp);
            messageEvent.addData(doc.update);
            outputSSE(req, res, messageEvent.payload());
        }
    }, errorHandling);
};
</code></pre>

<p>What we are doing here is querying MongoDB for the events that were missed. We then iterate over them and output them to the browser.</p>

<p>The code for querying MongoDB is as follows:</p>

<pre><code>module.exports.findEventsSince = function(lastEventId) {
    var deferred = Q.defer();

    collection.find({
        timestamp: {$gt: Number(lastEventId)}
    })
    .sort({timestamp: 1}, function(err, docs) {
        if (err) {
            deferred.reject(err);
        }
        else {
            deferred.resolve(docs);
        }
    });

    return deferred.promise;
};
</code></pre>

<h2 id="testing">Testing</h2>

<p>To test this you will need to run both apps at the same time.</p>

<pre><code>node app.js
</code></pre>

<p>and </p>

<pre><code>node post.js
</code></pre>

<p>Once they are running, open two browser windows: <a href="http://localhost:8181/">http://localhost:8181/</a> and <a href="http://localhost:8082/api/post-update">http://localhost:8082/api/post-update</a>.</p>

<p>Now you can post updates as before. If you stop <code>app.js</code> but continue posting events, then when you restart <code>app.js</code> the <code>EventSource</code> will reconnect within 10 seconds and all the missed events will be delivered.</p>
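<p>The 10 second window is not magic: it comes from the <code>retry: 10000</code> line that <code>initialiseSSE</code> writes when the stream is first opened. The <code>retry</code> field tells the browser how many milliseconds to wait before attempting to reconnect, so the start of the stream looks like this:</p>

```
retry: 10000

id: 1439887379635
data: The payload we are sending
```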

<h2 id="conclusion">Conclusion</h2>

<p>This very simple code gives you an elegant and powerful push architecture for creating real-time apps.</p>

<h3 id="improvements">Improvements</h3>

<p>A possible improvement would be to render the events from MongoDB server-side when the page is first output, and then continue receiving updates client-side as they are pushed to the browser.</p>

<h3 id="download">Download</h3>

<p>If you want to play with this application you can fork or browse it on <a href="https://github.com/baynezy/RealtimeDemo/tree/part-2">GitHub</a>.</p>]]></content:encoded></item><item><title><![CDATA[Running a Script on Start Up on Windows 10]]></title><description><![CDATA[<p>Previously on Windows 7 I have used the <a href="http://windows.microsoft.com/en-gb/windows/run-program-automatically-windows-starts#1TC=windows-7">Startup folder</a> technique to run scripts periodically.</p>

<p>I spent some serious time head-scratching trying to replicate that on Windows 10. Thankfully I have managed to work it out, with the help of some friends.</p>

<p>Open the run dialog (<code>Windows+R</code>) then</p>]]></description><link>https://baynesblog-ghost2.azurewebsites.net/running-a-script-on-start-up-on-windows-10/</link><guid isPermaLink="false">c90885e6-a6a2-423e-8a1b-5918f4369d59</guid><category><![CDATA[Windows 10]]></category><dc:creator><![CDATA[Simon Baynes]]></dc:creator><pubDate>Mon, 10 Aug 2015 20:53:04 GMT</pubDate><media:content url="http://blog.bayn.es/content/images/2015/08/windows10-1.jpg" medium="image"/><content:encoded><![CDATA[<img src="http://blog.bayn.es/content/images/2015/08/windows10-1.jpg" alt="Running a Script on Start Up on Windows 10"><p>Previously on Windows 7 I have used the <a href="http://windows.microsoft.com/en-gb/windows/run-program-automatically-windows-starts#1TC=windows-7">Startup folder</a> technique to run scripts periodically.</p>

<p>I spent some serious time head-scratching trying to replicate that on Windows 10. Thankfully I have managed to work it out, with the help of some friends.</p>

<p>Open the run dialog (<code>Windows+R</code>), type <code>shell:startup</code>, then press <code>Enter</code>.</p>

<p>This will open up the startup folder. Once there you can create any shortcut you need.</p>

<p>Happy scripting.</p>]]></content:encoded></item><item><title><![CDATA[Real-time Web Apps with Server-Sent Events (pt 1)]]></title><description><![CDATA[<p>Recently I've been researching how to build real-time web applications, where content is <a href="https://en.wikipedia.org/wiki/Push_technology">pushed</a> to clients rather than them having to poll or refresh the browser. A lot of people at this point jump straight to <a href="https://en.wikipedia.org/wiki/WebSocket">websockets</a>. However, these can often be more powerful than you need. Websockets provide a</p>]]></description><link>https://baynesblog-ghost2.azurewebsites.net/real-time-web-applications-with-server-sent-events-pt-1/</link><guid isPermaLink="false">d94e0ecf-fc3b-40e1-bc22-f40df220455e</guid><category><![CDATA[Node.js]]></category><category><![CDATA[Server-Sent Events]]></category><category><![CDATA[Websockets]]></category><dc:creator><![CDATA[Simon Baynes]]></dc:creator><pubDate>Sun, 09 Aug 2015 07:41:13 GMT</pubDate><content:encoded><![CDATA[<p>Recently I've been researching how to build real-time web applications, where content is <a href="https://en.wikipedia.org/wiki/Push_technology">pushed</a> to clients rather than them having to poll or refresh the browser. A lot of people at this point jump straight to <a href="https://en.wikipedia.org/wiki/WebSocket">websockets</a>. However, these can often be more powerful than you need. Websockets provide a rich protocol to perform bi-directional, full-duplex communication. These are great when you want to do <a href="https://en.wikipedia.org/wiki/Multicast">multicast</a> communication between many-to-many clients. If all you need to achieve is one-to-many multicast from the server then <a href="https://en.wikipedia.org/wiki/Server-sent_events">Server-Sent Events (SSEs)</a> are a powerful and simpler alternative. SSEs are sent over traditional HTTP; which means you can use a standard webserver rather than getting a websockets server.</p>

<h2 id="browsersupport">Browser Support</h2>

<p>At first glance of the <a href="http://caniuse.com/">CanIUse website</a> it would appear that <a href="http://caniuse.com/#feat=websockets">websockets</a> have better browser support than <a href="http://caniuse.com/#feat=eventsource">SSEs</a>. However, there are many <a href="https://en.wikipedia.org/wiki/Polyfill">polyfills</a> to enable SSEs to function in unsupported browsers.</p>

<h2 id="exampleimplementation">Example Implementation</h2>

<h3 id="prerequisites">Pre-Requisites</h3>

<p>The following technologies are required for this demo application. </p>

<ul>
<li><a href="https://nodejs.org/download/">Node.js</a></li>
<li><a href="http://redis.io/download">Redis</a></li>
</ul>

<h3 id="buildinterface">Build Interface</h3>

<h4 id="indexhtmlhttpsrawgithubusercontentcombaynezyrealtimedemopart1staticindexhtml"><a href="https://raw.githubusercontent.com/baynezy/RealtimeDemo/part-1/static/index.html">index.html</a></h4>

<pre><code>&lt;!DOCTYPE html&gt;
&lt;html lang="en"&gt;
&lt;head&gt;
    &lt;title&gt;Realtime Demo&lt;/title&gt;
    &lt;link rel="stylesheet" href="https://maxcdn.bootstrapcdn.com/bootstrap/3.3.4/css/bootstrap.min.css" /&gt;
&lt;/head&gt;
&lt;body&gt;
    &lt;h1&gt;Realtime Demo&lt;/h1&gt;
    &lt;ul id="live-updates"&gt;&lt;/ul&gt;
    &lt;script src="https://code.jquery.com/jquery-2.1.4.min.js"&gt;&lt;/script&gt;
    &lt;script&gt;
        var live = {
            init : function() {
                var source = new EventSource("http://localhost:8081/api/updates");
                source.addEventListener("message", function(event) {
                    var data = jQuery.parseJSON(event.data);
                    live.addItem(data.update);
                }, false);
            },

            addItem : function(data) {
                $(live.constructItem(data)).hide().prependTo("#live-updates").fadeIn(1000);
            },

            constructItem : function(data) {
                return "&lt;li&gt;" + data + "&lt;/li&gt;";
            }
        };

        $(document).ready(function(){
            live.init();
        });
    &lt;/script&gt;
&lt;/body&gt;
&lt;/html&gt;
</code></pre>

<p>The main thing to focus on here is the creation of an <code>EventSource</code> object.</p>

<pre><code>var source = new EventSource("http://localhost:8081/api/updates");
</code></pre>

<p>This creates an open connection to the updates URI, which will be the end point serving our SSEs. We can now consume them by attaching a handler to the <code>message</code> event.</p>

<p>Now we need to write our Node end point to publish messages.</p>

<h3 id="serversenteventspublishing">Server-Sent Events Publishing</h3>

<h4 id="appjshttpsrawgithubusercontentcombaynezyrealtimedemopart1appjs"><a href="https://raw.githubusercontent.com/baynezy/RealtimeDemo/part-1/app.js">app.js</a></h4>

<pre><code>var express = require("express"),
    mustacheExpress = require("mustache-express"),
    dataChannel = require("./custom_modules/data-channel"),
    bodyParser = require("body-parser"),
    app = express();

app.engine("html", mustacheExpress());
app.set("views", "./views");
app.set("view engine", "html");
app.use(express.static("./static"));
app.use(bodyParser.json());
app.use(bodyParser.urlencoded({extended: true}));

app.get("/api/updates", function(req, res){
        initialiseSSE(req, res);
});

app.get("/api/post-update", function(req, res) {
    res.render("postupdate", {});
});

app.put("/api/post-update", function(req, res) {
    var json = JSON.stringify(req.body);
    dataChannel.publish(json);
    res.status(204).end();
});

function initialiseSSE(req, res) {
    dataChannel.subscribe(function(channel, message){
        var messageEvent = new ServerEvent();
        messageEvent.addData(message);
        outputSSE(req, res, messageEvent.payload());
    });

    res.set({
        "Content-Type": "text/event-stream",
        "Cache-Control": "no-cache",
        "Connection": "keep-alive",
        "Access-Control-Allow-Origin": "*"
    });

    res.write("retry: 10000\n\n");
}

function outputSSE(req, res, data) {
    res.write(data);
}

function ServerEvent() {
     this.data = "";
};

ServerEvent.prototype.addData = function(data) {
    var lines = data.split(/\n/);

    for (var i = 0; i &lt; lines.length; i++) {
        var element = lines[i];
        this.data += "data:" + element + "\n";
    }
}

ServerEvent.prototype.payload = function() {
    var payload = "";

    payload += this.data;
    return payload + "\n";
}

var server = app.listen(8081, function() {

});
</code></pre>

<p>Key parts of this to look at are the headers required to make the SSEs work correctly.</p>

<pre><code>res.set({
    "Content-Type": "text/event-stream",
    "Cache-Control": "no-cache",
    "Connection": "keep-alive",
    "Access-Control-Allow-Origin": "*"
});
</code></pre>

<p>The first three headers are mandatory, but the <code>Access-Control-Allow-Origin</code> header is optional, and is how you can control cross-domain access with <a href="https://en.wikipedia.org/wiki/Cross-origin_resource_sharing">CORS</a>.</p>

<p>Next is the construction of the SSEs. <a href="http://cjihrig.com/blog/about-me/">Colin Ihrig</a> does a fine write-up of <a href="http://cjihrig.com/blog/the-server-side-of-server-sent-events/">the server side of server-sent events</a>, which I used as a resource to put this together.</p>

<h4 id="datachanneljshttpsgithubcombaynezyrealtimedemoblobpart1custom_modulesdatachanneljs"><a href="https://github.com/baynezy/RealtimeDemo/blob/part-1/custom_modules/data-channel.js">data-channel.js</a></h4>

<pre><code>var redis = require("redis");

module.exports.subscribe = function(callback) {
    var subscriber = redis.createClient();

    subscriber.subscribe("liveupdates");

    subscriber.on("error", function(err){
        console.log("Redis error: " + err);
    });

    subscriber.on("message", callback);
};

module.exports.publish = function(data) {
    var publisher = redis.createClient();

    publisher.publish("liveupdates", data);
};
</code></pre>

<p>Here we are just utilising the <a href="http://redis.io/topics/pubsub">Pub/Sub</a> functionality of Redis, which is really simple as you can see.</p>

<p>To see this working, just download the full <a href="https://github.com/baynezy/RealtimeDemo/tree/part-1">working application</a> from GitHub. Then browse to <a href="http://localhost:8082/api/post-update/">http://localhost:8082/api/post-update/</a> and fill in the form while browsing <a href="http://localhost:8081">http://localhost:8081</a>, and you will see the events updating in real time.</p>

<h2 id="conclusion">Conclusion</h2>

<p>So, as you can hopefully see, this is really pretty straightforward. Node + Redis hugely simplifies the server-side functionality, and the client-side integration is uncomplicated.</p>

<p>I'll be doing a follow-up post on how to handle the reconnection.</p>]]></content:encoded></item><item><title><![CDATA[New Role]]></title><description><![CDATA[<p>Well I am pleased to announce that I have accepted the role of Technical Architect at <a href="http://www.hostelbookers.com">HostelBookers.com</a>. I had initially set my sights on contracting, but I felt this was an opportunity I would regret turning down.  </p>

<p>I am going to really miss my old colleagues at <a href="http://www.haymarket.com">Haymarket</a>. Easily</p>]]></description><link>https://baynesblog-ghost2.azurewebsites.net/new-role/</link><guid isPermaLink="false">44960578-2250-452d-9b2e-828cfb4226b4</guid><category><![CDATA[Personal]]></category><dc:creator><![CDATA[Simon Baynes]]></dc:creator><pubDate>Thu, 21 Jan 2010 18:27:40 GMT</pubDate><content:encoded><![CDATA[<p>Well I am pleased to announce that I have accepted the role of Technical Architect at <a href="http://www.hostelbookers.com">HostelBookers.com</a>. I had initially set my sights on contracting, but I felt this was an opportunity I would regret turning down.  </p>

<p>I am going to really miss my old colleagues at <a href="http://www.haymarket.com">Haymarket</a>. They are easily the most talented and hard-working people I have ever worked with. I&apos;ll be leaving many friends behind, which will be a wrench, but thankfully I&apos;m only 20 minutes down the Piccadilly Line so I am sure I&apos;ll see them all regularly.  </p>

<p>I start on 8th February, so I get a nice week of R&amp;R beforehand.</p>]]></content:encoded></item><item><title><![CDATA[Moving On]]></title><description><![CDATA[<p>Well after four and a half years I have decided to move on from Haymarket. It was a difficult decision but I felt that I needed a change. I will be leaving behind some proud achievements, great friends and colleagues. I&apos;ll be working till the end of the</p>

<p>Wish me luck.</p>]]></content:encoded></item><item><title><![CDATA[Moving from Apache to IIS]]></title><description><![CDATA[<p>My most recent project has been moving all our production websites to IIS from Apache. This is mainly quite low-level and not at all complicated. The real challenge comes because we are heavy users of <a href="http://httpd.apache.org/docs/1.3/mod/mod_rewrite.html">mod_rewrite</a>.  </p>

<p>Fortunately <a href="http://www.helicontech.com">Helicon</a> have a product called ISAPI rewrite 3 which has very similar</p>]]></description><link>https://baynesblog-ghost2.azurewebsites.net/moving-from-apache-to-iis/</link><guid isPermaLink="false">3324bae0-0aa0-4f00-94c4-6f6d3b950a2a</guid><category><![CDATA[IIS]]></category><dc:creator><![CDATA[Simon Baynes]]></dc:creator><pubDate>Tue, 24 Feb 2009 19:36:07 GMT</pubDate><content:encoded><![CDATA[<p>My most recent project has been moving all our production websites to IIS from Apache. This is mainly quite low-level and not at all complicated. The real challenge comes because we are heavy users of <a href="http://httpd.apache.org/docs/1.3/mod/mod_rewrite.html">mod_rewrite</a>.  </p>

<p>Fortunately <a href="http://www.helicontech.com">Helicon</a> have a product called ISAPI Rewrite 3, which has very similar syntax to mod_rewrite. There are a few gotchas to bear in mind. The one that I spent too long solving was that the URL you are assessing with <code>RewriteRule</code> in ISAPI Rewrite starts with a <code>/</code>, which is not the case in mod_rewrite. However, I was very thankful to discover that I did not need to change all my rules; I just had to add <code>RewriteBase /</code> just after the <code>RewriteEngine on</code> directive.  </p>
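<p>As an illustrative sketch (the rule itself is hypothetical, not taken from my actual config), a rewrite configuration fragment using this fix would look like:</p>

```apache
RewriteEngine on
# ISAPI Rewrite matches URLs with a leading slash, unlike mod_rewrite,
# so RewriteBase / lets mod_rewrite-style rules work unchanged.
RewriteBase /

# Hypothetical rule: redirect an old page to its new home.
RewriteRule ^old-page\.html$ /new-page.html [R=301,L]
```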

<p>The other huge one for me was that environment variables were not supported in ISAPI Rewrite. This did cause me to have to rearchitect the rewrites I had for my search. However, the upside of this was that I got a lot better at rewrites and realised that my initial implementation was overcomplicated.</p>

<p>I am in the process of migrating an application from Apache to IIS and during the testing I discovered a strange error. When trying to return an HTML file IIS was returning a 403.</p>]]></description><link>https://baynesblog-ghost2.azurewebsites.net/iis-returning-4031-error/</link><guid isPermaLink="false">3bd44cd2-472c-4f20-bd6c-59137f02007e</guid><category><![CDATA[IIS]]></category><dc:creator><![CDATA[Simon Baynes]]></dc:creator><pubDate>Mon, 16 Feb 2009 22:33:11 GMT</pubDate><content:encoded><![CDATA[<p>Had a strange issue with IIS this afternoon at work that I thought I would share.  </p>

<p>I am in the process of migrating an application from Apache to IIS, and during the testing I discovered a strange error. When trying to return an HTML file, IIS was returning a 403.1 status code and moaning about execute permissions.  </p>

<p>I spent a while checking all manner of security permissions only to discover that it was because the file was in a folder whose name ended in &apos;.com&apos;. Apparently this causes issues because IIS thinks that you are trying to execute a file with a &apos;.com&apos; extension.  </p>

<p>Sadly I had to recode my architecture to change the folder name, as there did not seem to be a workaround.  </p>

<p>Anyway, hope this helps someone someday who is scratching their head!</p>]]></content:encoded></item><item><title><![CDATA[Using Java to Optimise Looping Over Lists]]></title><description><![CDATA[<p>Once you have been using ColdFusion for a while you will have undoubtedly had to loop through a list. Now this is a very straightforward operation and one where you rarely have to consider performance. However, if you are reading in a 10,000+ row CSV file this iterative process</p>]]></description><link>https://baynesblog-ghost2.azurewebsites.net/using-java-to-optimise-looping-over-lists/</link><guid isPermaLink="false">7c651263-a2a6-40c8-b171-5dc27e2b576b</guid><category><![CDATA[ColdFusion]]></category><dc:creator><![CDATA[Simon Baynes]]></dc:creator><pubDate>Sun, 16 Jul 2006 10:24:23 GMT</pubDate><content:encoded><![CDATA[<p>Once you have been using ColdFusion for a while you will have undoubtedly had to loop through a list. Now this is a very straightforward operation and one where you rarely have to consider performance. However, if you are reading in a 10,000+ row CSV file, this iterative process can be very slow. Many of you would do something like this:-  </p>

<pre><code>&lt;cfloop list="#myCSV#" index="thisRow"&gt;
    &lt;!--- here is where you would do your row processing ---&gt;
&lt;/cfloop&gt;
</code></pre>

<p>Now there is essentially nothing wrong with this code; however, the processing is not optimal performance-wise. Fortunately you can tap into the power of the underlying Java to boost performance.  </p>

<pre><code>&lt;!--- here we use the java.lang.String class to convert our CSV into an array using the split() method ---&gt;
&lt;cfset myCSV = createObject("java", "java.lang.String").init(myCSV).split(chr(13) &amp; chr(10))&gt;

&lt;cfloop from="1" to="#arrayLen(myCSV)#" index="thisRow"&gt;
    &lt;!--- here is where you would do your row processing ---&gt;
&lt;/cfloop&gt;
</code></pre>
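For the curious, here is what that <code>createObject()</code> call delegates to, as a standalone Java sketch; the sample CSV data is invented purely for illustration:

```java
public class CsvSplitDemo {
    public static void main(String[] args) {
        // sample CSV content; in practice this would be read from a file
        String myCSV = "name,score\r\nalice,10\r\nbob,20";

        // String.split() takes a regex; "\r\n" matches the CRLF row delimiter,
        // just like chr(13) & chr(10) in the ColdFusion snippet above
        String[] rows = myCSV.split("\r\n");

        for (String row : rows) {
            System.out.println(row); // row processing goes here
        }
    }
}
```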

<p>I have seen this provide enormous performance gains.</p>]]></content:encoded></item><item><title><![CDATA[SQL Optimisation Tip]]></title><description><![CDATA[<p>There is a saying, &apos;There are many ways to skin a cat&apos;; this is rarely as true as it is with optimising SQL.  </p>

<p>However, I have a tip for you for SQL Server. You can specify that you wish to do a &apos;Dirty Read&apos;. This means</p>]]></description><link>https://baynesblog-ghost2.azurewebsites.net/sql-optimisation-tip/</link><guid isPermaLink="false">d03608ed-f377-4d7d-9641-6fcc4a6e5a90</guid><category><![CDATA[SQL Server]]></category><dc:creator><![CDATA[Simon Baynes]]></dc:creator><pubDate>Tue, 16 May 2006 13:51:45 GMT</pubDate><content:encoded><![CDATA[<p>There is a saying, &apos;There are many ways to skin a cat&apos;; this is rarely as true as it is with optimising SQL.  </p>

<p>However, I have a tip for you for SQL Server. You can specify that you wish to do a &apos;Dirty Read&apos;. This means that you want the select to go ahead without worrying whether there are locks on the table or row because of inserts, updates, and deletes. So if a row that fits your criteria is in the process of being deleted, it will still end up in your results.  </p>

<p>So obviously this is not a solution for every query, but it can really relieve some pressure if you do not mind the drawbacks.</p>

<pre><code>SELECT *
FROM myTable WITH (NOLOCK)
</code></pre>

<p>It will also work when you do joins, but you must include the <code>WITH (NOLOCK)</code> hint after every table reference.</p>

<pre><code>SELECT *
FROM myTable a WITH (NOLOCK)
INNER JOIN myOtherTable b WITH (NOLOCK)
    ON a.ID = b.ID
</code></pre>]]></content:encoded></item><item><title><![CDATA[Promotion]]></title><description><![CDATA[<p>I got promoted yesterday. I am now the Senior Developer at Haymarket Publishing. Well done me!</p>]]></description><link>https://baynesblog-ghost2.azurewebsites.net/promotion/</link><guid isPermaLink="false">59819743-d2bf-4d0c-bff1-6c6b9715088f</guid><category><![CDATA[Personal]]></category><dc:creator><![CDATA[Simon Baynes]]></dc:creator><pubDate>Thu, 20 Apr 2006 22:40:04 GMT</pubDate><content:encoded><![CDATA[<p>I got promoted yesterday. I am now the Senior Developer at Haymarket Publishing. Well done me!</p>]]></content:encoded></item><item><title><![CDATA[CGI Scope Fun and Games]]></title><description><![CDATA[<p>I recently discovered that if you use &lt;cfdump&gt; to output the CGI scope then it doesn&apos;t actually display all the keys in the CGI structure. It actually displays a defined list of keys which I unfortunately only realised after about an hour trying to work out</p>]]></description><link>https://baynesblog-ghost2.azurewebsites.net/cgi-scope-fun-and-games/</link><guid isPermaLink="false">3d8b8955-4ddc-423d-86d6-0379879cba5c</guid><category><![CDATA[ColdFusion]]></category><dc:creator><![CDATA[Simon Baynes]]></dc:creator><pubDate>Wed, 29 Mar 2006 15:21:44 GMT</pubDate><content:encoded><![CDATA[<p>I recently discovered that if you use &lt;cfdump&gt; to output the CGI scope then it doesn&apos;t actually display all the keys in the CGI structure. It actually displays a defined list of keys which I unfortunately only realised after about an hour trying to work out where the variables Apache was supposedly setting were.  </p>

<p>Very annoying.</p>]]></content:encoded></item><item><title><![CDATA[Using Array Notation in ColdFusion]]></title><description><![CDATA[<p>Referencing dynamic variable names can be tricky, unless you are aware of a few basic ColdFusion concepts.  </p>

<p>I see things like this:-  </p>

<pre><code>&lt;cfloop collection="#form#" item="iFormKey"&gt;  
    &lt;cfset temp = evaluate(form.#iFormKey#)&gt;
    #temp#
&lt;/cfloop&gt;
</code></pre>

<p>and it drives me insane. It is so unnecessary</p>]]></description><link>https://baynesblog-ghost2.azurewebsites.net/using-array-notation-in-coldfusion/</link><guid isPermaLink="false">e593c7e7-53b5-48bc-b8e2-88459b2d291d</guid><category><![CDATA[ColdFusion]]></category><dc:creator><![CDATA[Simon Baynes]]></dc:creator><pubDate>Wed, 22 Mar 2006 18:53:30 GMT</pubDate><content:encoded><![CDATA[<p>Referencing dynamic variable names can be tricky, unless you are aware of a few basic ColdFusion concepts.  </p>

<p>I see things like this:-  </p>

<pre><code>&lt;cfloop collection="#form#" item="iFormKey"&gt;  
    &lt;cfset temp = evaluate(form.#iFormKey#)&gt;
    #temp#
&lt;/cfloop&gt;
</code></pre>

<p>and it drives me insane. It is so unnecessary, not to mention messy.  </p>

<p>Here it is again but using array notation.</p>

<pre><code>&lt;cfloop collection="#form#" item="iFormKey"&gt;
    #form[iFormKey]#
&lt;/cfloop&gt;
</code></pre>

<p>Now this is not only optimal, it is also clear. If you find yourself using the <code>evaluate()</code> function, you are either going about it the wrong way or you are trying to cheat ColdFusion into doing something that it doesn&apos;t want to do.  </p>

<p>Also bear in mind that, barring a few exceptions, all ColdFusion variables are in a struct.  </p>

<pre><code>&lt;cfscript&gt;
    myVar1 = 1;
    myVar2 = 2;
    myVar3 = 3;
    myVar4 = 4;
    myVar5 = 5;

    for (i = 1; i LTE 5; i = i + 1) {
        // now we use the fact that by default any non-scoped variables are put in the variables scope to our advantage
        writeOutput(variables["myVar" &amp; i] &amp; "&lt;br /&gt;");
    }

&lt;/cfscript&gt;
</code></pre>

<p>So as you can see, array notation is very powerful and clean.</p>]]></content:encoded></item><item><title><![CDATA[Default Proxy for ColdFusion]]></title><description><![CDATA[<p>Ever had to develop an application that used <code>cfhttp</code>? Ever done this on a machine that is behind a proxy?  </p>

<p>If the answer to those two questions is yes then you may have written some bung code like this:-  </p>

<pre><code>&lt;cftry&gt;
    &lt;cfhttp url="http://www.simonbaynes.com/</code></pre>]]></description><link>https://baynesblog-ghost2.azurewebsites.net/default-proxy-for-coldfusion/</link><guid isPermaLink="false">3940a3ba-4f49-44c2-96ad-2c542016c238</guid><category><![CDATA[ColdFusion]]></category><category><![CDATA[Java]]></category><dc:creator><![CDATA[Simon Baynes]]></dc:creator><pubDate>Fri, 17 Mar 2006 15:09:11 GMT</pubDate><content:encoded><![CDATA[<p>Ever had to develop an application that used <code>cfhttp</code>? Ever done this on a machine that is behind a proxy?  </p>

<p>If the answer to those two questions is yes, then you may have written some bung code like this:-  </p>

<pre><code>&lt;cftry&gt;
    &lt;cfhttp url="http://www.simonbaynes.com/rss.cfm" proxyserver="255.255.255.255" port="80" throwonerror="true" /&gt;

    &lt;!--- if there is an error then we are on live ---&gt;
    &lt;cfcatch type="any"&gt;
        &lt;cfhttp url="http://www.simonbaynes.com/rss.cfm" throwonerror="true" /&gt;
    &lt;/cfcatch&gt;
&lt;/cftry&gt;
</code></pre>

<p>This is totally unnecessary, as with some <code>jvm.config</code> arguments you can set a default proxy for your ColdFusion instance.  </p>

<pre><code>-DproxySet=true -Dhttp.proxyHost=255.255.255.255 -Dhttp.proxyPort=80
</code></pre>]]></content:encoded></item></channel></rss>