Monday, October 19, 2015

Error handling in a request/response model with the mediator pattern

The mediator pattern

Addy Osmani gives a great introduction to the mediator pattern in his post Patterns For Large-Scale JavaScript Application Architecture. I’ve recently started working with this pattern, using the mediator.js implementation available via npm. Overall the pattern works well when notifying one module of changes in another; for example when publishing state changes.

However one area where the pattern seems to fall short is when one module wants to request information from another. For example, consider a module that manages a list of jobs. Now consider another module that wants to retrieve a job from that list. An initial implementation using the mediator pattern could look like:

A naive object request implementation

mediator.subscribe('job:loaded', function(job) {
      console.log('job loaded');
    });
    mediator.publish('job:load', jobId);

Some problems with the above implementation include:

  1. There is a memory leak with the subscription object returned from the subscribe method.

  2. A race condition exists whereby a second job:load event may be fired and complete before the first one completes; our listener would receive this second job object.

An improved object request implementation

The second issue can be resolved by appending the jobId value to the job:loaded event name, so we only receive the job we asked for. The first issue can be resolved by un-subscribing from the event in the callback function. It turns out this is a common enough pattern that we have the shorthand once method that accomplishes this for us.

mediator.once('job:loaded:'+jobId, function(job) {
      console.log('job loaded');
    });
    mediator.publish('job:load', jobId);

This second implementation indeed corrects the problems of the first implementation, but we still have a source of a potential memory leak: the job:loaded:# event may never fire. We are left with a dangling subscription; there is no error handling in the above implementation.

A possible gotcha: if the job load task is synchronous, the subscription must be created before the requesting event is triggered.

I also don’t like the verbosity of the approach. Triggering the event and registering a callback for the corresponding success event results in a lot of repeated boilerplate code.

Object request with error handling

The error handling issue can be resolved by introducing an error event in addition to the already used success event. To keep things clear I’ll adjust the naming convention of the events to look like:

  • job:load - an event requesting the job to be loaded

  • job:load:<jobId>:done - an event fired when the job is successfully loaded

  • job:load:<jobId>:error - an event fired when an error is encountered while loading the job

Secondly we need to set a timeout to ensure any unfulfilled subscriptions do not result in a memory leak.

The result of this is a whole heap of boilerplate code:

var complete = false;
    var done = mediator.subscribe('job:load:'+jobId+':done', function(job) {
      complete = true;
      console.log('job loaded');
    });
    var error = mediator.subscribe('job:load:'+jobId+':error', function(err) {
      complete = true;
      console.error('error loading job');
    });
    setTimeout(function() {
      if (!complete) {
        mediator.remove('job:load:'+jobId+':done', done);
        mediator.remove('job:load:'+jobId+':error', error);
        console.error('timeout loading job');
      }
    }, 2000);
    mediator.publish('job:load', jobId);

Introducing the mediator.request method

This can be simplified by encapsulating the above logic in a helper function I call mediator.request, and adopting the Promise API:

mediator.request('job:load',        // the request event
      jobId,                        // the request parameter
      'job:load:'+jobId+':done',    // the request success event
      'job:load:'+jobId+':error',   // the request error event
      2000                          // timeout after which an error event is published
    ).then(function(job) {
      console.log('job loaded');
    }, function(error) {
      console.error('error loading job');
    });

Lastly, we can simplify the above invocation by adopting a naming convention for our success and error events using the :<param>:done and :<param>:error suffixes respectively (allowing for overrides of course). The resulting API then looks like:

mediator.request('job:load', jobId, [options])
      .then(function(job) {
        console.log('job loaded');
      }, function(error) {
        console.error('error loading job');
      });

Concerns and conclusion

The above approach for dealing with a request/response communication model between modules using the mediator pattern is loosely based on the HTTP model, where the mediator events map to URLs. The proposed mediator.request method API is then analogous to the request npm module, and the API could be extended using that module as inspiration.

Finally I’ll mention that I have also considered that it may be an inappropriate use of the mediator pattern when a request/response form of inter-module communication is required. However I feel that by adopting the above API we can maintain the benefits of the loosely-coupled modular architecture provided by the mediator pattern, while addressing the real-world concern of one module requesting data from another.

Tuesday, July 7, 2015

Rx.js and d3.js in the Red Hat Summit Middleware Keynote Demo

At this year’s DevNation/Red Hat Summit I was part of the team that created the Red Hat Summit Middleware keynote demo. I made the custom front-ends in the demo using Reactive Extensions for javascript (Rx.js) to manipulate the data streams from the various back-end systems and to transform that data into a form that can drive the UI using d3.js.

If you missed the keynote, you can watch it on YouTube below (the keynote demo starts at 19m). In this blog post I will cover at a high level the UIs I created for the keynote demo, setting the stage for a more in-depth analysis with subsequent posts.

Rx.js in the Keynote Demo

This year’s Red Hat Summit Middleware keynote demo showcased 3 major technologies at Red Hat:

  • IoT using Red Hat JBoss A-MQ as the message broker, and Apache Spark for streaming analytics

  • Docker at Scale using OpenShift

  • Mobile with the Red Hat Mobile Application platform

To showcase the data we collected from the systems, I built 3 user interfaces using Rx.js to manipulate the data, and d3.js to visualize it. Developing these UIs provided me with an excellent opportunity to dive deep into these libraries and learn many of their ins and outs.

IoT Beacon Location

The data collected for the IoT portion of our keynote demo consisted of millions of scan events from 300 Gimbal BLE beacons handed out to DevNation attendees. The scanners were made using Raspberry Pis that transmitted the scan events to a Red Hat A-MQ message broker. These numerous events were then processed using Apache Spark, reducing the millions of beacon events into tens-of-thousands of meaningful "business events" representing attendee movements throughout the conference.

IoT Map Visualization

The Spark-processed data was retrieved from the message broker into a node.js back-end, was stored in a mongodb datastore, and then was re-broadcast to the browser clients. I used Rx.js throughout both the node.js back-end and in the browser to:

  • wrap a STOMP over Websocket protocol connection.

  • wrap the websocket broadcasts and clients

  • filter and map the data streams

  • bind to DOM events

  • etc.

Using Rx.js throughout the application meant that any idiosyncrasies of the various protocols were encapsulated within the Observable definition, and the rest of the application adhered to a consistent API for authoring asynchronous javascript code.

The map UI uses d3.js to bind the beacon location data to SVG circles in the browser. I then used the d3.js force layout to animate the circles and move them around the map, mimicking the movement of the real-world beacons.

Watch the video of the keynote linked above to see the UI backed with real data. I additionally embedded a code-pen that shows the beacon-location visualization backed by simulated data. (If the animation is already complete, hover over the codepen and hit the "rerun" button in the bottom right).

See the Pen Beacon Location by Brian Leathem (@bleathem) on CodePen.

IoT Broker Visualization

The map UI worked really well to display the business value of the beacon data collected. We needed a second UI however to demonstrate both the sheer volume of data that our A-MQ message broker was receiving from the Raspberry Pis, and the dramatic reduction in data volume as Spark processed the data.

Again using a STOMP over websockets connection to the message broker, I used Rx.js to manipulate the data, and d3.js to visualize it. This broker UI is running in the codepen below backed by simulated data.

See the Pen Broker by Brian Leathem (@bleathem) on CodePen.

1k Containers Visualization

The power of docker, kubernetes, and OpenShift was demonstrated with a rapid scale-up of 1026 containers live on stage. To visualize this scale-up, we developed a custom "hexboard" UI, where each hexagon represents an individual kubernetes pod. The lifecycle of each individual pod was reflected as a falling hexagon indicating each stage transition, culminating with the shadowman Red Hat logo when all pods were online.

I’ve included a codepen below showing the hexboard UI, again backed by "simulated" data.

See the Pen Hexboard by Brian Leathem (@bleathem) on CodePen.

Claim your container

Audience members were then provided an opportunity to claim their own piece of "the cloud" by drawing a sketch on their mobile devices and posting those sketches to an individual kubernetes pod via the Red Hat Mobile Application Platform back-end. The audience participation was incredible, and we quickly filled the hexboard with user-generated sketches!

The source

The source code for the beacon location UI and the 1k container UI is available on github. The goal over the next few weeks is to clean up the keynote demo code, and provide it in a form where folks can run it themselves. Stay tuned for further developments in this regard.

Wednesday, June 24, 2015

Rx.js Session at DevNation

I presented a session on Rx.js at DevNation this year. My goal was to impress upon the audience how Observables can be interpreted as a collection-in-time. This analogy was very well described by @jhusain in his Async Javascript at Netflix talk, which initially got me excited about functional reactive programming. My contribution to this idea is to present it in a visual way.

To visualize both a "regular" collection, as well as a collection-in-time, I used the d3.js library to visually present a collection of javascript objects that each represent a shape with a given size, color, and shape-type. In my presentation I embedded these visualizations as Codepens, which I’ve included in this blog post below.

The slides from my presentation are available at:

A Collection

Here is the visualization of a Collection. With a "regular" collection we can get a reference to every object in the collection at any given point in time. Go ahead, "grab" a shape with your mouse!

See the Pen Collection | Iden by Brian Leathem (@bleathem) on CodePen.

An Observable

And here we have the visualization of an Observable as a Collection-in-time. An Observable is different from a "regular" collection in that we cannot grab a reference to every object at any given point in time. The objects stream past us as time progresses.

See the Pen Observable by Brian Leathem (@bleathem) on CodePen.

Reactive Extensions

In the session I then proceeded to use these visualizations as a basis for describing some of the tools we use for manipulating Collections/Observables:


map

With the map function we can operate on each item in the collection. In our example mapping function we map each shape into a new shape of the same size, but of type square and color green.

.map(function(x) {
      return {
        color: 'green'
      , size: x.size
      , type: 'square'
      };
    })

Collection map

Here we have the above map function applied to a "regular" Collection:

See the Pen Operating on a Collection by Brian Leathem (@bleathem) on CodePen.

Observable map

The above map function applied to an Observable:

See the Pen Map an Observable by Brian Leathem (@bleathem) on CodePen.


mergeAll

The mergeAll function is used to "flatten" a 2-dimensional collection into a 1-dimensional collection. In this code sample we map each shape into a pair of shapes which we return as an array. The resulting "array of arrays" is then passed to the mergeAll function where it is flattened.

.map(function(x) {
        var y = _.clone(x);
        y.x = x.x + 80; // shift the first clone (position property assumed)
        y.color = 'green';
        var z = _.clone(x);
        y.size = y.size / 1.5;
        z.size = z.size / 1.5;
        return [y, z];
      })
      .mergeAll()
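
To make the flattening step concrete, here is a plain-array analogue of mergeAll; this is an illustration only, not the Rx API:

```javascript
// Flatten one level of nesting, analogous to what mergeAll does
// to the mapped pairs described above (illustration only, not Rx code).
function mergeAllArrays(nested) {
  return nested.reduce(function(flat, inner) {
    return flat.concat(inner);
  }, []);
}

var pairs = [[{type: 'circle'}, {type: 'circle'}], [{type: 'square'}]];
var flat = mergeAllArrays(pairs);
// flat now holds all 3 shapes in a single, one-dimensional collection
```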

Nested Collections

This visualization shows the above mapping without the mergeAll applied. Notice how the resulting collection consists of object pairs. We do not have a flat collection. Try to grab one of the shapes with your mouse and see for yourself!

See the Pen Map a nested Collection by Brian Leathem (@bleathem) on CodePen.

Nested Collections mergeAll

With the mergeAll function applied to the Nested collection we now have a flattened collection, which we can continue to operate on with our tool set.

See the Pen MergeAll a Collection by Brian Leathem (@bleathem) on CodePen.

Observable mergeAll

The mergeAll function applied to a 2-dimensional Observable.

See the Pen MergeAll an Observable by Brian Leathem (@bleathem) on CodePen.


flatMap

It turns out the map + mergeAll combination is a pattern we apply so often that we created the flatMap function as a shorthand. We can then rewrite the above transformation as:

.flatMap(function(x) {
        var y = _.clone(x);
        y.x = x.x + 80; // shift the first clone (position property assumed)
        y.color = 'green';
        var z = _.clone(x);
        y.size = y.size / 1.5;
        z.size = z.size / 1.5;
        return [y, z];
      })


reduce

A common use case for analyzing collections is the reduce function, where one iterates over a collection and "accumulates" a value for each object in the collection. In this code sample we accumulate the size of each shape, and use that to create a new shape of the accumulated size.

var outputData = inputData
      .reduce(function(acc, x) {
        return {
          color: 'green'
        , size: acc.size + x.size
        , type: 'square'
        };
      }, {size: 0});

Collection reduce

The above reduce function applied to a collection:

See the Pen Reduce a Collection by Brian Leathem (@bleathem) on CodePen.

Observable reduce

The reduce function applied to an Observable:


You will want to click the RERUN button that appears when you mouse-over this codepen. Then wait until the input Observable terminates to see the reduce result.

See the Pen Reduce an Observable by Brian Leathem (@bleathem) on CodePen.


zip

The last function we will look at is the zip function, which is used to combine many Observables into a single Observable. It accomplishes this by taking each Observable as a parameter, followed by a function that is used to "combine" the objects retrieved from each Observable.

In the following code sample we combine our shapes by creating a new shape with the color of the first shape, but the size and type of the second shape.

var outputData =, source2,
      function(x1, x2) {
        return {
          color: x1.color
        , size: x2.size
        , type: x2.type
        };
      });
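
The combining behaviour can be illustrated with a plain-array analogue of zip; again this is an illustration only, not the Rx API:

```javascript
// Pairwise combination, analogous to zipping two Observables
// (illustration only, not the Rx API).
function zipArrays(a, b, combine) {
  var out = [];
  for (var i = 0; i < Math.min(a.length, b.length); i++) {
    out.push(combine(a[i], b[i]));
  }
  return out;
}

var shapes1 = [{ color: 'red',  size: 10, type: 'circle' }];
var shapes2 = [{ color: 'blue', size: 20, type: 'square' }];
var combined = zipArrays(shapes1, shapes2, function(x1, x2) {
  return { color: x1.color, size: x2.size, type: x2.type };
});
// combined[0] is { color: 'red', size: 20, type: 'square' }
```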

Observable zip

See the Pen Zip an Observable by Brian Leathem (@bleathem) on CodePen.

The rest of the talk

In the remaining slides I discussed creating and subscribing to Observables, and went through a number of use cases and examples. I ended with a preview and brief code walk-through of the Red Hat Summit Middleware keynote demo, which I wrote using Rx.js. But that is a topic for another post.

The slides are available at:

Thursday, February 5, 2015

Google Play Services Oauth2 Token Lookup via Cordova

Delegating to 3rd parties to manage your authorization is incredibly helpful when developing a new application. A benefit to users and developers alike, this task is made all the easier by the number of social networks providing Oauth2 APIs that we can use for our authorization. In this blog post I will address using Google Play services on Android from a hybrid mobile Cordova application to retrieve an Oauth2 token that we can then use with Google’s Oauth2 REST API.

There are a number of blogs and how-tos on the web that show us how to use the Cordova InAppBrowser to trigger an Oauth2 token request. This approach works well, and indeed achieves the desired result of authenticating the user and retrieving an Oauth2 token. However the user experience is poor, requiring the user to enter their credentials. Why not leverage the already authenticated user logged into the mobile device? To achieve this we will have to make use of the Google Play services API.

Google Play services API

The Android documentation on Authorizing with Google for REST APIs is quite clear. We can use the Android API GoogleAuthUtil.getToken() method to retrieve an Oauth2 token for the logged-in user. The only missing link then is invoking the Android API from our javascript application.

A Cordova plugin

To close this gap, I created a Cordova plugin that invokes the GoogleAuthUtil API from a line of javascript, and returns the retrieved Oauth2 token to the javascript environment using a callback function. The Cordova Plugin Development Guide does a good job in describing how to author plugins. I recommend giving it a read if you are not familiar with developing Cordova plugins.

The only "gotcha" I had to deal with was the UserRecoverableAuthException that is thrown when first trying to retrieve the token. The above-mentioned Android documentation does a good job on describing how to catch the exception and retrieve appropriate permissions, but the Oauth2 token seems to get lost in the process. It turns out the token can be retrieved from an "Intent Extra" in the onActivityResult method of our plugin. Check out the plugin source if this is meaningful to you.

Consuming the plugin

The plugin I created is available on Github, and is installed using the command:

cordova plugin add

Remove this plugin with the command:

cordova plugin remove ca.bleathem.plugin.OauthGoogleServices

Invoke the plugin from your javascript:

window.cordova.plugins.oauth.getToken([scope], done, [err]);

  • scope (optional): the scope for the Oauth2 token request. Default:

  • done (required): a success callback invoked with the Oauth2 token as its single parameter

  • err (optional): a failure callback invoked when there is an error retrieving the token

Example usage

In my Angular.js application I used a Promise API to retrieve the token:

var localLogin = function() {
      var deferred = $q.defer();
      $window.cordova.plugins.oauth.getToken('openid', function(token) {
        deferred.resolve(token);
      }, function(error) {
        deferred.reject(error);
      });
      return deferred.promise;
    };
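
The same wrapper can also be written with a standard Promise instead of Angular's $q. This is a sketch: plugin stands for any object exposing the getToken(scope, success, error) callback signature described above, and fakePlugin is a stand-in used purely for illustration.

```javascript
// Wrap the callback-style plugin API in a standard Promise (sketch).
// 'plugin' is any object exposing getToken(scope, success, error).
function getTokenAsPromise(plugin, scope) {
  return new Promise(function(resolve, reject) {
    plugin.getToken(scope, resolve, reject);
  });
}

// Example with a stand-in plugin object (illustration only):
var fakePlugin = {
  getToken: function(scope, done) { done('token-for-' + scope); }
};
getTokenAsPromise(fakePlugin, 'openid').then(function(token) {
  console.log(token); // logs "token-for-openid"
});
```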

I then posted the token to my backend where the token was verified and used to lookup/create a user. I set up a fallback mechanism to use the InAppBrowser approach to retrieve an Oauth2 token in cases where the Google Play services API was not present:

if ($window.cordova && $window.cordova.plugins && $window.cordova.plugins.oauth) {
      return localLogin().then(verifyToken, remoteLogin);
    }

The Final Word

This was the first Cordova plugin I created, and I must say I’m impressed at how easy it was to implement. I’ll definitely keep this tool close-at-hand when developing hybrid mobile applications.

Hopefully this Cordova plugin is useful to someone else; it certainly is easier to use than setting up the InAppBrowser solution!

Wednesday, January 28, 2015

Scalable splash screens with Cordova

Adding a splash screen to your mobile application is useful to provide users with feedback that their application is starting while performing any initialization tasks. In this blog post I will summarize how I created a scalable splash screen and how I configured my Cordova application to use it.

Drawing the splash screen

If you’re not an artist (as I am not!) then creating a graphical splash screen can be a somewhat daunting task. Finding a relevant image to use as a starting point can be a big help. I used the Creative Commons Search tool to find my initial image.

Edit the image using your favorite image editing software (I’m a fan of GIMP). Android pre-defines a set of sizes which you should target with your image:

  • xhdpi: 640 x 960

  • hdpi: 480 x 800

  • mdpi: 320 x 480

  • ldpi: 240 x 320

Similarly Apple has a set of predefined sizes, see the Apple HIG for details, but I’ll be addressing specifically Android with this post.

To keep things simple, I created a single square image to use for both portrait and landscape orientations:


Making the splash screen scalable

Scaling the above image to fit the size and shape of a phone or tablet would distort the graphic and the text. To get around this we use the 9-patch file format to mark the areas of the image that can safely be stretched.

To convert the PNG file you created above into a 9-patch file, use the draw9patch application distributed with the android SDK. The basic steps of the conversion are as follows:

  1. Open your PNG file with the draw9patch application

  2. Drag with your mouse to mark the areas on the top and left margins where it is safe to stretch the image.

    • The right and bottom margins can be used to mark where content should be inserted into the image, and are useful when creating images for buttons. However this does not apply to splash screens, and you can safely ignore the right and bottom margins.

  3. Save your file with the *.9.png extension. The extension is critical, otherwise your splash screen image will not be interpreted as a 9-patch file, and the stretching will not be applied. I named mine splash.9.png.

Further details on using draw9patch can be found in the Android documentation.

Cordova build hooks

I like to structure my cordova projects to leave the platforms and plugins folders out of source control. I then use Cordova build hooks to install plugins and copy resources. For more information on build hooks, refer to this great post on using build hooks.

I place the above 9-patch image in the project folder config/android/res/screens and use the following build hook to copy this scm-controlled resource into the platforms folder:

#!/usr/bin/env node
    // This hook copies various resource files
    // from our version control system directories
    // into the appropriate platform specific location
    // [{source: target}]
    var filestocopy = [{
        "config/android/res/screens/splash.9.png":
        "platforms/android/res/drawable/splash.9.png"
    }];
    var fs = require('fs');
    var path = require('path');
    var rootdir = process.argv[2];
    filestocopy.forEach(function(obj) {
        Object.keys(obj).forEach(function(key) {
            var val = obj[key];
            var srcfile = path.join(rootdir, key);
            var destfile = path.join(rootdir, val);
            console.log("copying "+srcfile+" to "+destfile);
            var destdir = path.dirname(destfile);
            if (fs.existsSync(srcfile) && fs.existsSync(destdir)) {
                fs.createReadStream(srcfile).pipe(fs.createWriteStream(destfile));
            }
        });
    });

I’ve taken a one-size-fits-all approach with my splash screen. This works because I place my splash screen file in the platforms/android/res/drawable folder. If you want to create a different splash screen to accommodate each of the different screen sizes in either the portrait or landscape orientations, you can modify the above build hook to copy the appropriately sized files into each of the platforms/android/res/drawable-{port|land}-{ldpi|mdpi|hdpi|xhdpi}/ folders.

Cordova configuration

Finally we will install the cordova splashscreen plugin. This plugin allows us to manage the splash screen from our Cordova application.

We configure the default timeout and the name of our splash screen file in the config.xml of our project:


    <preference name="SplashScreen" value="splash" />
    <preference name="SplashScreenDelay" value="10000" />

Using an appropriate listener in your Cordova application, dismiss the splash screen:

document.addEventListener('deviceready', function() {
      navigator.splashscreen.hide();
    }, false);

Getting the above pieces correctly lined up was surprisingly difficult. If the files are not named correctly, or placed in the wrong folder, everything falls apart. The Cordova documentation on the topic provides some help, but leaves out a lot of important details. This is apparent in the number of forum, stack overflow, and github issue threads on the subject. Hopefully this post helps someone shortcut the frustration of getting this working.

Monday, January 26, 2015

Vert.x with gulp.js

Vert.x is often put forward as a polyglot alternative to node.js that runs on the JVM. A read through the vert.x javascript docs indicates that javascript is a first-class language in vert.x, and both node.js and vert.x use an event-driven, non-blocking I/O programming model. But to what degree will a node programmer feel at home in writing a vert.x application?

In this blog post I will look at using gulp, a node.js build tool, to build a vert.x 2 module.

Platform Installation

Before proceeding, be sure to have both the vert.x and node.js platforms installed. Vert.x will provide the run-time for our application, and node.js will provide us with the build environment for our project. Refer to the vert.x install docs and the node and npm install docs for further details.

Project layout

The gulp.js build tool has us apply transformations to streams of our source code, and as such doesn’t dictate how we structure our source code within our project. The structure I chose is as follows:

    ├── gulpfile.js
    ├── node_modules
    │   └── ...
    ├── package.json
    ├── src
    │   ├── app.js
    │   ├── mod.json
    │   └── ...
    ├── tasks
    │   ├── vertx.gulp.js
    │   └── zip.gulp.js
    └── vertx_modules
        └── ...

The package.json file manages the npm dependencies for our gulp.js build, where those dependencies are stored in the node_modules folder. The gulpfile.js file is our gulp build file, and incorporates individual build tasks defined in the tasks folder. The src folder contains the source for our vert.x module, and finally the vertx_modules folder contains the vert.x modules on which our application depends.

The gulp build

The gulp build file (gulpfile.js) is pretty straightforward; the top-level gulp file is used to configure the project, with individual tasks defined in separate files. These files are then included using the require statement.

process.env.VERTX_MODS = 'vertx_modules';
    var gulp = require('gulp');
var opts = {
      module: {
        group: 'ca.bleathem',
        artifact: 'demo',
        version: '0.0.1'
      },
      paths: {
        src: 'src/**/*',
        dist: 'dist'
      }
    }; = + '~'
                     + opts.module.artifact + '~'
                     + opts.module.version + '.zip';
    opts.paths.cp = 'src';
    require('./tasks/vertx.gulp.js')(gulp, opts);
    require('./tasks/zip.gulp.js')(gulp, opts);
    gulp.task('default', ['vertx']);

Notice the VERTX_MODS environment variable is set in the gulpfile. Using the build file to programmatically set environment variables depending on the deployment target (production/development) can be a powerful technique. Here we set VERTX_MODS to store vert.x modules in a folder paralleling the node.js modules.
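
As a sketch of that technique, the build file could derive the module folder from NODE_ENV; the vertxModsFor helper and the production path are assumptions for illustration only:

```javascript
// Pick the vert.x module folder per deployment target (sketch; the
// 'build/vertx_modules' production path is an assumption for illustration).
function vertxModsFor(env) {
  return env === 'production' ? 'build/vertx_modules' : 'vertx_modules';
}

process.env.VERTX_MODS = vertxModsFor(process.env.NODE_ENV);
```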

The *.gulp.js build files containing the individual gulp task definitions are stored in the tasks sub folder.

    ├── tasks
    │   ├── vertx.gulp.js
    │   └── zip.gulp.js

Let’s explore these vertx and zip tasks further.

The vert.x gulp task

The gulp plugin guidelines recommend not creating a plugin for a task that can "be done easily with an existing node module". To this end, we’ll start by seeing how far we can get by leveraging the abilities of node to spawn a child process. Below is a gulp task that runs the vert.x module that is our sample application:

var spawn = require('child_process').spawn
      , gutil = require('gulp-util');
    module.exports = function(gulp, opts) {
      gulp.task('vertx', [], function(done) {
        var child = spawn('vertx', ['runmod',, '-cp', opts.paths.cp ], {cwd: process.cwd()}),
            stdout = '',
            stderr = '';
        child.stdout.on('data', function (data) {
            stdout += data;
            gutil.log(data.slice(0, data.length - 1));
        });
        child.stderr.on('data', function (data) {
            stderr += data;
            gutil.log(, data.length - 1)));
        });
        child.on('close', function(code) {
            gutil.log('Done with exit code', code);
            done();
        });
      });
    };

The bulk of the above listing deals with re-directing and formatting the output of the vert.x child process. The invocation of the spawn function is the interesting part, and is where we pass our arguments to the vert.x process. In our case we want to run the module that is our sample project, and we set the vert.x classpath to our source folder to allow for on-the-fly code changes.

Invoking the build via the command gulp vertx will start vert.x, running the module in our project.

The zip gulp task

The distribution format for vert.x is a wonderfully simple zip format. This makes it easy to use the gulp-zip plugin to zip up the files and create a bundle for our module.

var zip = require('gulp-zip');
    module.exports = function(gulp, opts) {
      return gulp.task('zip', function() {
        return gulp.src(opts.paths.src)
          .pipe(zip(
          .pipe(gulp.dest(opts.paths.dist));
      });
    };

The above source transformation is a trivial one. Those familiar with gulp will recognize we could easily add additional stream transformations here, e.g. compiling CoffeeScript, minifying client code, compiling Sass, etc.

On to vert.x 3

The above build works well for vert.x 2. However vert.x 3 is around the corner and introduces many changes. The changes relevant to our gulp build include:

  1. Vert.x 3 will do away with modules and flatten the classpath across verticles. This will directly affect how we structure our source code and invoke vert.x from our gulpfile.

  2. Vert.x 3 will also resolve packaged verticles from npm, which will align nicely with our npm-based build approach.

Stay tuned for a new post addressing a gulp.js build targeting vert.x 3.

Tuesday, January 20, 2015

RichFaces 4.5.2.Final Release Announcement


RichFaces 4.5.2.Final is now available for download! Check out @Michal Petrov's blog for the detailed RichFaces 4.5.2.Final release announcement.