Setting up Chrome Extensions for use with ES6

First time setup of Chrome Extensions can be painful if you’ve never done it before. Add to that setting them up for use with ES6 and you can end up spinning your wheels longer than writing code. I recently went through this while creating Reading List, which makes heavy use of ES6 as well as Ramda for the functional work. While Babel setup is fairly easy, the module loading presented some challenges. Having originally gone with SystemJS I faced a lot of difficulty in getting the tests to run. After switching to Webpack, for all the horror stories I had heard about it, the issues I was facing were resolved within the hour.

TLDR: You can see an example of the setup here. It is somewhat barebones – intentionally – as so many JavaScript developers waste their time with tool configuration these days. This setup is meant to get you off the ground ASAP.

We’ll step through the setup as follows:

  • Transpiling – Babel
  • ES6 module bundling & loading – Webpack
  • Setting up the Chrome extension
  • Setting up unit tests

Transpiling – Babel

This part is pretty simple. Install the Babel tools we’ll need with the command below:
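The exact command isn’t reproduced here, but for the Babel 6-era tooling this post describes, it would look something like this (package names match the tools discussed below):

```shell
npm install --save-dev babel-core babel-preset-es2015 babel-loader babel-register
```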

What does this install? Because Babel can compile several ECMAScript specs we need to install the preset for the version we want to use, in this case ES2015 (ES6). If we wanted ES7 we could install a preset for that too. We also need to install babel-loader so that we can integrate with Webpack. Lastly, babel-register is needed so that we can run our Mocha tests.

Next step is to tell Babel what presets we want to enable. Create a .babelrc config file if you haven’t already and add the following:
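A minimal .babelrc for this setup would be:

```json
{
  "presets": ["es2015"]
}
```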

And of course if you want to use ES7 features you would add the ES7 preset to this config.

That takes care of Babel.

ES6 module bundling & loading – Webpack

We’ll be using the import / export statements from ES6, formatting our modules as ES6 rather than AMD or CommonJS. Webpack will bundle these modules up for loading in the browser. Install with:
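The install is a one-liner:

```shell
npm install --save-dev webpack
```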

Next we need to add a webpack.config.js file to the root of our project. Configure it like so:

The entry point for our app contains imports of the other files used in the project. It might look something like this:
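Something like the following – the module names are hypothetical; the point is that the entry file imports everything else, so Webpack can trace the dependency graph from it:

```javascript
// main.js – entry point; the module names below are illustrative
import R from 'ramda';
import { renderList } from './modules/render';
import { saveItem, removeItem } from './modules/storage';

// everything the extension does is kicked off from here
renderList();
```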

bundle.js is the output of our modules after they’ve been run through Babel and Webpack. If you have any 3rd party libraries, include them in the externals property so that they won’t be included in the bundle. Otherwise all the code for that library will get bundled up and dramatically increase the file size.

From the command line, run the following to actually create the bundle and its source map:
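Assuming webpack is installed locally rather than globally, the command would be along these lines:

```shell
./node_modules/.bin/webpack --devtool source-map
```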

Now we need to configure our npm run start command so that it does this bundling and serves up the files in one step. Add this to package.json:
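One way to sketch it (--watch rebuilds the bundle on change; the exact original script isn’t reproduced here):

```json
{
  "scripts": {
    "start": "webpack --devtool source-map --watch"
  }
}
```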

That takes care of Webpack.

Setting up the Chrome extension

Chrome extensions have a config file of their own, manifest.json. Here’s the one from my project:
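The exact manifest isn’t reproduced here; below is a representative sketch. The name, icons, and permissions are illustrative – bundle.js as a content script and "run_at" come from this post:

```json
{
  "manifest_version": 2,
  "name": "Reading List",
  "version": "1.0.0",
  "icons": { "48": "icon.png" },
  "browser_action": {
    "default_icon": "icon.png",
    "default_popup": "popup.html"
  },
  "permissions": ["storage", "activeTab"],
  "content_scripts": [
    {
      "matches": ["<all_urls>"],
      "js": ["ramda.min.js", "bundle.js"],
      "run_at": "document_start"
    }
  ]
}
```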

I won’t go into too much detail as this config can get really complex, but the main things to know are that you specify the icon, the HTML file you want to run when you click the extension icon, and the Chrome APIs you need under permissions, and then add your content scripts, which are scripts needed by the HTML file you specify. Disclaimer: you can also specify background scripts, but I did not make use of these. This setup is not tested for use with background scripts, although they may run just fine.

We take the bundle file output from Webpack and use it as our content script. An important thing to note is that you can specify when this file should run using "run_at". This is especially useful when you need DOM events such as DOMContentLoaded, as extensions seem to block this event from firing; "run_at" tells the script to execute at the point you specify – in this case, at the start of the document load.

Next we need to add the bundle file to our HTML:
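In popup.html this is just a pair of script tags – Ramda is included separately since it’s a Webpack external (see the side note that follows):

```html
<!-- popup.html – load the external library first, then the bundle -->
<script src="ramda.min.js"></script>
<script src="bundle.js"></script>
```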

A side note here: I had to add the Ramda library to the HTML even though it was specified in the Webpack config as an external library. Not sure if this is the correct way or not, but it works. YMMV.

That takes care of Chrome.

Setting up unit tests

Now we just need to set up our unit tests. If you don’t already have mocha installed, run npm install --save-dev mocha. Add this to the “scripts” property of the package.json file:
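The script entry, per the Mocha guidance quoted below:

```json
{
  "scripts": {
    "test": "mocha --require babel-register"
  }
}
```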

Most info you’ll find on setup will recommend
"test": "mocha --compilers js:babel-core/register test pattern here"
but this seems to be outdated, and the Mocha docs recommend just using --require babel-register. From the docs:
“If your ES6 modules have extension .js, you can npm install --save-dev babel-register and use mocha --require babel-register; --compilers is only necessary if you need to specify a file extension.”

Run npm run test and watch your tests run.
That takes care of unit tests.


Why is `compose` right to left?

I’ve been working my way through the excellent Mostly Adequate Guide to Functional Programming recently. Although I’ve been working with compose for a bit now, I needed to review why it operates from right-to-left (also known as “right associative”). A review of mathematical theory will help answer this question.

Function Composition

[Image: function composition diagram, g ∘ f]

The image above is read as “g ∘ f”, or “g composed with f”, or “g-compose-f”. For example, (g ∘ f)(c) = g(f(c)). If we change ‘c’ to ‘x’, this function would read as g(f(x)). As you’ll remember from high school algebra and composite functions, this means you plug in something for ‘x’, plug that value into ‘f’, then plug the result into ‘g’. This evaluation occurs from right to left, hence why compose is right associative.

To illustrate compose, consider the following:
f(x) = 2x + 2 and g(x) = -x + 3, find (g ∘ f)(1)
(g ∘ f)(1) = g(f(1))
f(1) = 2(1) + 2 = 4
g(4) = -4 + 3 = -1 (remember, 4 was the return value of f(1))
…and -1 is our answer from this function composition.

Left to Right

Developers don’t always think in right-to-left fashion, so if you want the reverse, pipe (or sequence, depending on the library) is an alternative to compose that operates from left-to-right.
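To make the direction concrete, here is a minimal sketch of both (not taken from any particular library):

```javascript
// compose applies functions right-to-left; pipe applies them left-to-right
const compose = (...fns) => x => fns.reduceRight((acc, fn) => fn(acc), x);
const pipe    = (...fns) => x => fns.reduce((acc, fn) => fn(acc), x);

const f = x => 2 * x + 2;
const g = x => -x + 3;

const composed = compose(g, f)(1); // g(f(1)) = -1
const piped    = pipe(f, g)(1);    // same computation, read left-to-right
```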


Uploading Node.js package to AWS Lambda

Quick tip: for those developing AWS Lambda applications using Node.js, if you’re uploading a zip package rather than editing inline, something you might get stuck on while trying to test your function is the below error:

Unable to import module 'index': Error at Function.Module._resolveFilename (module.js:325:15) at Function.Module._load (module.js:276:25) at Module.require (module.js:353:17) at require (internal/module.js:12:17)

First, make sure the name of your handler in the AWS console matches the name of your “main” JavaScript file (the one containing your exports.handler function).

If your file with the exports.handler function is named “index.js”, then in the AWS console, name it as “index.handler”.

Next, something that really tripped me up was not having this index.js file in the root of my .zip. This was what ultimately led to the Unable to import module 'index' error I kept getting. So make sure this file is in the root of the package.
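One way to get that right is to zip from inside the project directory, rather than zipping the directory itself:

```shell
# run from within the project folder so index.js lands at the zip root
zip -r function.zip index.js node_modules
```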


Hartford Edison Hackathon


Intel Edison Virtual Reality

This weekend I developed a project (github source here) as part of the Hartford June 25th, 2016 Hackathon. You can view projects created by other participants here. Intel and Seeed provided Intel Edison and Grove Starter kits to all participants. This project demonstrates the use of the Edison as a sensor gateway, connecting to AWS IOT service for use by a client utilizing Google Cardboard VR glasses.

The Edison takes sensor readings which are then published to a topic bound to AWS IOT. This service in turn takes all sensor readings received and, through the rule engine, publishes them onto a queue (SQS). For the web app, the ThreeJS library provides the graphics and stereoscopic view needed for the Cardboard glasses. The client is using the AWS SDK for JavaScript in the Browser to poll the queue to get sensor readings, which are used to affect how fast the “strobe” is spinning in the scene. You can view the client in a web browser on your phone, placed inside the Cardboard.

This project was an exercise to learn more about ThreeJS, Virtual Reality, and how the real, physical world can be used as inputs to a constructed, virtual world.

Some Findings

  • Initially I was using the AWS IOT rule engine to route all messages received to DynamoDB, using the ${timestamp()} ‘wildcard’ as the hash key to keep all entries unique. However, Amazon Web Services DynamoDB does not provide a way to query the last element added, so I ran into issues when trying to poll the data from the web application (which is using the data to affect the VR world). Unfortunately, DynamoDB is currently the only database that the IOT rule engine supports, otherwise I likely could have gone with RDS (Relational Database Service). I also considered using S3 (Simple Storage Service), but each message would end up in the S3 bucket as an individual JSON file, making querying and pulling the data difficult. Another alternative would have been setting up DynamoDB ‘triggers’ using the Lambda service to respond to database changes, but this still felt kind of hacky. Because my data did not need to be persisted, Simple Queue Service (SQS) provided a viable alternative, and that was what I ended up going with.
  • SQS is not time-ordered. I’m not sure if any queueing systems are time-ordered, but I found out that due to the way SQS is distributed across AWS servers, getting your message perfectly in order is not possible. For my purposes, the sequencing was close enough.
  • SQS has a purge limit of 60 seconds, and because I was reading from the queue every half second, I was not able to immediately delete the message after reading it. If I stick with SQS, an option might be to set the message retention period to match how often I’m reading the queue, although given some latency at various points in my system, it might be better to set the retention period to twice that of the read frequency.
  • Because I did not need to do anything server-side with the messages stored in SQS, I chose to poll the queue directly from the client code. You can use the ‘AWS SDK for JavaScript in the Browser’ for this. If you only have unauthenticated users accessing the application, the code to authenticate the application to AWS is quite simple.
  • AWS Identity and Access Management can be pretty confusing. In order to setup the app-level authentication, you have to go to the ‘Cognito’ service, and create a new federated identity. Then use the pool id from there. The service is nice enough to give you the code to drop in.
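The unauthenticated setup described in the list above can be sketched as below. This is browser code – it assumes the AWS SDK script is already loaded on the page, and the region, pool id, and queue URL are placeholders:

```javascript
// Unauthenticated access via a Cognito federated identity pool
AWS.config.region = 'us-east-1'; // placeholder region
AWS.config.credentials = new AWS.CognitoIdentityCredentials({
  IdentityPoolId: 'us-east-1:00000000-0000-0000-0000-000000000000' // placeholder
});

// the client can then poll SQS directly
const sqs = new AWS.SQS();
sqs.receiveMessage({
  QueueUrl: 'https://sqs.us-east-1.amazonaws.com/123456789012/sensor-queue', // placeholder
  MaxNumberOfMessages: 1
}, function (err, data) {
  // use data.Messages to drive the VR scene
});
```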

Future State

AWS is supremely powerful, but as I improve my project, I’d like to try using a different MQTT client for the publishing and subscribing functionality and potentially remove AWS from the equation altogether. Because I would be subscribing to the topic from the web app, I would have to find a MQTT client that can subscribe from a browser. Going with this approach would limit me from the functionality and services AWS provides, but it may be a cleaner approach for the use case of this project.


ES6: Mutability of ‘const’

When first hearing about const in ES6, I was excited about the possibility of having immutability in native JavaScript. For developers programming in the functional style, this would have come in handy, but it turns out const is not actually immutable. It allows mutable properties. For example, all of the below is valid:
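The original snippet isn’t shown here, but these are the kinds of mutations const allows (the object is illustrative):

```javascript
const student = { name: 'Alice', grade: 'A' };

student.grade = 'B';   // changing a property: valid
student.age = 17;      // adding a property: valid
delete student.name;   // removing a property: valid
```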

while the below is not valid:
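Reassigning the binding itself is what fails; the error is caught here only so the sketch can show it:

```javascript
const student = { name: 'Alice' };

let errorName = null;
try {
  student = { name: 'Bob' }; // TypeError: Assignment to constant variable.
} catch (e) {
  errorName = e.name;
}
```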

So the object cannot be reassigned, but the value of a property can be changed, and properties can be added and removed. It looks similar to immutability, but it’s not, and it’s an important distinction to make.

It’s probably known by now that if you need to make an object’s values immutable, you can use Object.freeze(), but be aware that this freezes the object entirely: you can’t add or remove properties after it’s been frozen. (Note, though, that the freeze is shallow – nested objects can still be mutated.)
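A quick sketch of freeze in action (failed writes are silently ignored in sloppy mode and throw a TypeError in strict mode, hence the try/catch):

```javascript
const config = Object.freeze({ retries: 3 });

try {
  config.retries = 5; // ignored, or throws in strict mode
} catch (e) { /* TypeError in strict mode */ }

// either way, the frozen object is unchanged
console.log(config.retries);          // 3
console.log(Object.isFrozen(config)); // true
```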

It’s probably a good idea to use const as much as possible as it discourages unnecessary re-assignment and forces the developer to think about how the variable will be used. If you need true immutability, you can use the Immutable library by Facebook.

This post is part of a continuing series on practical application of ES6 features. To view the first in this series, check out this link. More are on the way.


Functional Programming as the Paradigm for IOT


As the Internet of Things reaches maturation and begins to become commonplace in our lives, the technology used to support IOT must be chosen well. With potentially millions of devices connected, developing applications to support these devices and the data they produce, while converting that data into something meaningful, will require thoughtful attention to technology choices. In building any system, attention to architecture and technology stacks is important, but if IOT delivers on its promise of scale, the technology implications will be very different than what we’ve had to solution and develop for before. It will not be enough to simply “use what we’ve always used” and to keep building things the way we’ve been building them. The challenges are far too complex to not take a step back and look at other options.

What challenges does IOT present?

Scalability and concurrency are likely the two biggest challenges that IOT will bring about. Think of the scale of data that these devices will produce and the number of applications that will be developed to handle those devices and their data; there is the potential for great complexity in designing these systems. While scaling problems can sometimes be solved by adding more infrastructure, this solution won’t apply to the potentially massive amount of Internet-connected devices. And concurrency is an even bigger problem. Millions of devices and real-time communication amongst these devices and consumer-end applications means millions of concurrent connections. Thread-locking and race conditions get hairy fast. Great strides have been made in recent years with non-blocking technology such as Node.js, but this of course won’t be nor should it be the only solution used.

As systems become more complex, so does the underlying codebase, and so we can consider code readability to be just as important as the other two factors.

Functional Programming as the paradigm

Functional programming is well-suited to help solve these challenges. The properties of functional programming – preference for immutability, function composition, avoiding side-effects, less code, etc. – will help avoid many of the pitfalls of an IOT world. Immutable data helps solve the concurrency issue as locks can be avoided. Real-time communication is also better supported by FP. As an aside here, it should be noted that not all FP languages are strictly immutable (for example Haskell has mutable data structures). Furthermore, not all FP languages are created equal when it comes to concurrency management – some perform better than others. This is important to keep in mind when selecting the right language for your application use-case.

Another benefit is side-effect free functions.  While some FP languages are more liberal than others in their allowance of side-effects, FP as a whole favors side-effect free.  This is of great use when programming IOT applications as it makes scaling easier while making the code easier to reason about.  Functions without side-effects can be run in parallel much easier than functions with side-effects as functions that only take inputs and produce outputs only care about their individual inputs and outputs, not other operations like database calls.  This same reason is why side-effect free functions also have the benefit of being able to be better optimized.

Lastly, with FP there is just less code to write, which means fewer bugs, which means better programs.


What IOT-like applications are currently using FP languages?


  • RabbitMQ
  • WhatsApp
  • Chef
  • League of Legends chat
  • Facebook chat (first version, now using C++)
  • Numerous game servers (Call of Duty, Battlestar online)


  • Netflix
  • Walmart


  • Senseware
  • CargoSense


  • IMVU
  • Numerous trading/financial companies

As you can see, many of these applications above have similar challenges as those posed by IOT, namely many concurrent connections (chat, WhatsApp, games servers) and scale (all of the above). FP has proven itself in the above applications, furthering the argument that it is a prime candidate for IOT.

There’s still room for OOP

There’s still room at the table for Object-Oriented programming, although it probably shouldn’t be the dominant paradigm. It’s called Internet of Things for a reason, and OOP can still be useful in describing and reasoning about those things. However, central to IOT is data and communication, and it’s easier to reason about this with FP than OOP.

A better glue

Out of the box, Internet-connected devices will rely on the applications and systems that support them to maintain this connectedness and communication, and so these supporting systems must have a good way of doing so. As John Hughes worded it in his “Why Functional Programming Matters” paper, “… a language must provide good glue.” Functional programming is that “good glue” that will help enable technologists to solve many of the challenges brought about by IOT.


Overriding WSDL endpoint in node-soap

Recently at work I needed to use node-soap to interface with some old SOAP-based systems. It’s certainly a pain compared to REST, but node-soap is a useful npm module should you ever find yourself needing to call SOAP methods from node.js.

Something that tripped me up as we were moving packages up to higher environments was overriding the default endpoint in the WSDL. Using node-soap, when you create the soap client, you can optionally pass in an endpoint to override the SOAP service’s host specified in the .wsdl file. If you do this, you must pass it in in the format { endpoint: 'your-endpoint-here'}. This was confusing as this format is not documented in the README or the unit tests. If you have multiple silos or environments, you’ll want to use process.env to store the endpoint for each environment and reference that environment variable as the value for the endpoint property.
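A sketch of what that looks like (the WSDL path and environment variable name are placeholders):

```javascript
const soap = require('soap');

const options = {
  endpoint: process.env.SOAP_ENDPOINT // must be the key `endpoint`
};

soap.createClient('./service.wsdl', options, function (err, client) {
  if (err) { throw err; }
  // calls made through `client` now hit SOAP_ENDPOINT instead of the
  // host baked into the .wsdl file
});
```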

Hopefully this will help anyone else who might get stuck on this.


My Alexa Skills entry

Had some fun over the weekend learning how to build skills using Amazon Alexa. My project is here:


PM software should include a ROI feature

Project management software, and the project management practice in general, should include a Return on Investment feature.

The problems with modern-day implementations of Agile and its various flavors have already been detailed by many (here’s my favorite), but like it or not, the practice is here to stay.  That is, here to stay until the backlash becomes so strong that the same entities who sold its unsuspecting victims on the practice are forced to go back to the drawing board and invent (or rather, vampirically subvert) another methodology that will in turn be sold back to the same companies.

Rather than complain and suffer under the tyranny of poorly-implemented Agile practices, software development teams can help lessen the blow by requesting the return on investment for each user story or feature be committed to by business/product/UX folks.  Just as developers commit to X number of user story points during iteration planning, those driving the requirements would also commit to a quantifiable – or qualifiable depending on the context – return for the feature.  Agile project management software holds developers to high levels of accountability through its various tracking mechanisms.  Adding a ROI feature would bring the same accountability to product requests and ensure they are thought out enough to warrant putting the work in the developers’ queue.  If you’ve ever worked with bad product managers, you’ll have seen the negative consequences that random, thoughtless feature requests have on your technical team.

Implementation of this idea is not meant to be dogmatic, as there are certainly many plausible scenarios in which quantifying such would be impractical.  Some business features are pure experimentations, just as technical spike user stories are often meant to be exploratory, and we shouldn’t inhibit experimentation.  But there is a huge difference between experimentation, which requires a hypothesis, and absent-minded, I-have-no-idea-what-I’m-doing, throwing requirements and features at the wall and seeing what sticks.  Just as good developers will question and test their code, good product folks will question the necessity of their requests and have a clear vision of the benefit their request will bring to the user, to the product, and to the company in financial return.

Another point to make very clear is that this proposition is not meant to divide the business and technical teams.  Quite the opposite.  The best product managers I’ve worked with have been empathetic towards the technical team while also understanding the big picture enough to understand where a user story and its value fits within the overall product or project.  Likewise, some of the best technical people I’ve worked with have demonstrated a solid understanding of how their code ties to a particular feature and the value that code brings to the business.

For developers, this suggested change in Agile practices even presents an opportunity – an opportunity to become more than a code monkey by contributing to the product and business.  Few words have saturated the conversation around the alignment of business and technology as much as “yeah, we really need technical people that can understand the business.”  Such a statement has always seemed so vague and useless to me.  What exactly does “understand the business” mean?  And what is meant by “the business”?  Instead of thinking about business and technology alignment like something as vague as the statement above, what if you, as a developer, started thinking about how your code brings in money for the company?  I can guarantee that with such a change in mindset, you would quickly begin noticing opportunities for potential revenue-generating features, ways to improve the user experience, and process improvements that can help deliver projects faster.  What if you even began helping the business work through its ideas to come up with an identifiable return on investment?  You would start to become more than a software developer.  You would start to become a product developer.

The criticism leveled against Agile implementations is often that they deprive developers of creativity, and from what I’ve experienced this can certainly be true.  But I believe product developers can rise above this limitation.



ES6: Destructuring

This is the first post in a series I’ll be doing about new ES6 features.  The goal is not to merely explain the concepts, but to also show “real-world” – or real enough – applications of the concepts so that you can understand why and when you might use them.  Hopefully you will be able to start recognizing scenarios or areas in your codebase that could benefit from these new features.  After all, new language features should not only help us write cleaner, more expressive code, they should also help us, or even challenge us, to think about the way we solve problems.

The first feature that will be covered is destructuring.


Destructuring is a way of breaking down a data structure (de-structuring) into smaller parts.  ES6 adds this feature for use with arrays and objects.


Destructuring provides a cleaner, less verbose way of extracting values from objects and arrays.  Rather than having to write
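The verbose form might look something like this ES5-style extraction (the object is illustrative):

```javascript
var options = { repeat: true, save: false };

// one assignment per value we want out of the object
var repeat = options.repeat;
var save = options.save;
```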

or in the case of an array, explicitly specify the index for the value you’re trying to get, you can instead write
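With destructuring, the same extractions could instead be written as:

```javascript
let options = { repeat: true, save: false };
let { repeat, save } = options;         // object destructuring

let colors = ['red', 'green'];
let [firstColor, secondColor] = colors; // array: no explicit indexes
```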

There are several other selling points, but before we dive into those, let’s see some code.



Let’s start with objects.  The syntax for destructuring follows the same syntax as an object literal itself, a block statement.  Consider the code below:
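The snippet isn’t reproduced here, but based on the values used through the rest of this section, it would be something like:

```javascript
let node = {
  color: 'blue',
  name: 'foo',
  type: 'Test'
};
```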

Destructuring can be done in one of two ways:
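A sketch of the two forms – in a declaration, and as an assignment to existing variables (the parentheses keep the engine from parsing the braces as a block statement):

```javascript
let node = { color: 'blue', name: 'foo', type: 'Test' };

// 1. destructuring in a declaration
let { color, type, name } = node;

// 2. destructuring in an assignment to already-declared variables
let c, t, n;
({ color: c, type: t, name: n } = node);
```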

The result of this is three variables – ‘color’, ‘type’, and ‘name’ – all with the values of their respective properties.  It should be noted here that all three variable types – var, let, const – need an initializer (the object or array to the right of the assignment operator (=)) when destructuring.  As a side note, while var and let do not need to be initialized for non-destructured assignments, const always needs to be initialized, regardless of whether it’s a destructured value or not.

If we print out these values, the result will be as below:
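Using the node object sketched earlier:

```javascript
let node = { color: 'blue', name: 'foo', type: 'Test' };
let { color, type, name } = node;

console.log(color); // blue
console.log(type);  // Test
console.log(name);  // foo
```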

As you’re hopefully already starting to see, the same syntax that is used to construct data can now be used to extract data.

Important to note here is that we actually aren’t changing the object itself, which is why node.type still returns “Test” although we assigned the variable value to “Homework”.  Destructuring doesn’t modify the source, whether it is var, let or const.  Only the destructured variables (if they’re var or let) are modified.
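A quick sketch of that:

```javascript
let node = { type: 'Test' };
let { type } = node;

type = 'Homework';      // reassign the destructured variable...
console.log(type);      // Homework
console.log(node.type); // Test - the source object is untouched
```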

Assigning destructured variable to a different name

What if you don’t want to use the property name as the variable name? You can change it like so:
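The syntax reads property: newName:

```javascript
let node = { type: 'Test', name: 'foo' };

// read node.type, but call the variable localType
let { type: localType, name: localName } = node;

console.log(localType); // Test
console.log(localName); // foo
```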

Side note: what happens if the object name is wrong?  It will throw a ReferenceError. (A nonexistent property, on the other hand, doesn’t throw – the variable is simply undefined.):
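A sketch of both cases:

```javascript
let node = { type: 'Test' };

let { missing } = node; // nonexistent property: no error
console.log(missing);   // undefined

let type, errorName = null;
try {
  ({ type } = nodeee);  // misspelled object name
} catch (e) {
  errorName = e.name;
}
console.log(errorName); // ReferenceError
```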

Nested objects

Destructuring is also applicable to nested objects, like below:
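For example (the shape is illustrative):

```javascript
let node = {
  type: 'Test',
  loc: {
    start: { line: 1, column: 4 }
  }
};

// the nested pattern mirrors the object's shape
let { loc: { start } } = node;

console.log(start.line);   // 1
console.log(start.column); // 4
```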


Array destructuring is much like object destructuring, with the main difference being that you don’t specify the index number.
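The pattern is purely positional:

```javascript
let colors = ['red', 'green', 'blue'];

let [firstColor, secondColor] = colors;

console.log(firstColor);  // red
console.log(secondColor); // green
```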

We can skip values in the array by leaving them blank. As you can see, thrName is an arbitrary name, in this case referring to the third position in the array.
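Skipping is done with empty slots in the pattern:

```javascript
let colors = ['red', 'green', 'blue'];

// two empty slots skip 'red' and 'green'
let [ , , thrName ] = colors;

console.log(thrName); // blue
```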

Nested arrays

Just like with nested objects, so too can nested arrays be destructured:
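For example:

```javascript
let colors = ['red', ['lightgreen', 'darkgreen'], 'blue'];

// the inner pattern matches the nested array
let [firstColor, [lightShade, darkShade]] = colors;

console.log(lightShade); // lightgreen
console.log(darkShade);  // darkgreen
```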

Mixed data structures

Lastly, it is possible to apply what we’ve learned above in order to destructure mixed data structures, like below:
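For example (the shape loosely follows an AST node):

```javascript
let node = {
  type: 'Test',
  range: [0, 4],
  loc: { start: { line: 1 } }
};

// array and object patterns combined in one statement
let {
  range: [start, end],
  loc: { start: { line } }
} = node;

console.log(start, end); // 0 4
console.log(line);       // 1
```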

Side notes

Sometimes you will see the object or array literal to the right of the destructuring statement or expression:
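For instance:

```javascript
// destructuring straight from literals
let { type } = { type: 'Test', name: 'foo' };
let [first] = ['red', 'green'];

console.log(type);  // Test
console.log(first); // red
```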

With arrays, you can use the rest operator (another ES6 feature) to iterate through the values without having to explicitly call them out:
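For example:

```javascript
let colors = ['red', 'green', 'blue'];

// `rest` collects everything after the first position
let [firstColor, ...rest] = colors;

console.log(firstColor); // red
console.log(rest);       // [ 'green', 'blue' ]
```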

Default values can be assigned if the object property or array value does not yet exist:
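For example:

```javascript
let node = { type: 'Test' };

// `name` isn't on node, so the default applies
let { type, name = 'anonymous' } = node;
let [first, second = 'green'] = ['red'];

console.log(name);   // anonymous
console.log(second); // green
```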


If you’re looking to convert some of your ES5 code to ES6, or just want to be aware of use cases for this new feature as you’re developing a current or future application, the following will be patterns to keep an eye out for.

As mentioned in the beginning of this post, a big selling point for destructuring is its cleaner way of extracting data from a data structure, instead of having to write something verbose like let val = someObject.someProperty.maybeSomeNestedProperty or something repetitive like
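The repetitive form, followed by its one-line destructured equivalent (the object is illustrative):

```javascript
// repetitive ES5-style extraction...
var settings = { width: 100, height: 50, visible: true };
var width = settings.width;
var height = settings.height;
var visible = settings.visible;

// ...versus a single destructuring declaration
let { width: w, height: h, visible: v } = settings;
```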

Another great use case is swapping values. Traditionally, developers have had to make use of a temp variable in order to swap values between variables, but now we can do this:
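The swap looks like this:

```javascript
let a = 1;
let b = 2;

[a, b] = [b, a]; // no temp variable needed

console.log(a); // 2
console.log(b); // 1
```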

Destructuring can be used with arrays and objects returned from a function, too:
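For example (hypothetical functions):

```javascript
function getDimensions() {
  return { width: 100, height: 50 };
}

function getCoords() {
  return [10, 20];
}

let { width, height } = getDimensions();
let [x, y] = getCoords();

console.log(width, height); // 100 50
console.log(x, y);          // 10 20
```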

That’s it for this week’s post. It’s sometimes difficult to read code interspersed with text, so I’ll put the code on GitHub.

I have a whole backlog of topics for future posts, and I’m not sure if the next one will be on ES6 or not. If you find this post useful, would like more clarifications on the concept, or – most importantly – would like to better understand why you might use this feature, please comment down below.