Closing connections and returning results using node-oracledb

If you’re using the npm module node-oracledb to connect to an Oracle database from Node, consider using this Promise-based and cursor-based wrapper/utility to return results from your queries and close connections:

This wrapper provides the following:

  • Only one function to call – executeSQL()
    • Pass in your SQL or stored procedure and any connection parameters
  • Promise-based, so chain off executeSQL() to return your execution results or catch any errors
  • Automatically closes the database connection and the returned result set, so there’s no need to worry about memory leaks
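The wrapper itself isn’t reproduced here, so below is a minimal sketch of the idea (the function shape and names are my assumptions, not the author’s original code). The connection source is passed in so the behavior is visible without a live database; in real use you’d pass the node-oracledb module itself, whose getConnection(), execute(), and close() all return Promises. A full version would also close the ResultSet when querying with resultSet: true.

```javascript
// Minimal sketch of an executeSQL-style wrapper (assumed shape).
// `db` is anything with a Promise-returning getConnection() –
// e.g. the node-oracledb module itself.
function executeSQL(db, sql, binds) {
  let connection;
  return db.getConnection()
    .then(function (conn) {
      connection = conn;
      return conn.execute(sql, binds || []);
    })
    .then(
      function (result) {
        // Hand back just the rows (an empty set becomes an empty array),
        // closing the connection first so nothing leaks
        const rows = result.rows || [];
        return connection.close().then(function () { return rows; });
      },
      function (err) {
        // Close on failure too, then re-throw so callers can .catch()
        const closing = connection ? connection.close() : Promise.resolve();
        return closing.then(function () { throw err; });
      }
    );
}
```

Calling code then just chains: `executeSQL(oracledb, 'SELECT ...').then(rows => ...).catch(err => ...)`.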

I wrote this for a few reasons, the primary one being separation of concerns.  Instead of the calling code having to worry about reading rows from the cursor, checking for empty sets, closing the result set, and closing the database connection, all of that is wrapped up in one function that handles it for you.  Your code won’t be littered with node-oracledb-specific calls when all you want is results back from the database.  It’s also very easy (and common) to leak memory when result sets and connections aren’t closed, and this wrapper prevents that.


Setting up Chrome Extensions for use with ES6

First-time setup of Chrome extensions can be painful if you’ve never done it before. Add to that setting them up for use with ES6, and you can end up spinning your wheels longer than writing code. I recently went through this while creating Reading List, which makes heavy use of ES6 as well as Ramda for the functional work. While the Babel setup is fairly easy, module loading presented some challenges. Having originally gone with SystemJS, I faced a lot of difficulty getting the tests to run. After switching to Webpack – for all the horror stories I had heard about it – the issues I was facing were resolved within the hour.

TLDR: You can see an example of the setup here. It is somewhat barebones – intentionally so, as too many JavaScript developers waste their time on tool configuration these days. This setup is meant to get you off the ground ASAP.

We’ll step through the setup as follows:

  • Transpiling – Babel
  • ES6 module bundling & loading – Webpack
  • Setting up the Chrome extension
  • Setting up unit tests

Transpiling – Babel

This part is pretty simple. Install the Babel tools we’ll need with the command below:
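The original command wasn’t preserved, but based on the packages discussed in the next paragraph it would look something like:

```shell
npm install --save-dev babel-core babel-preset-es2015 babel-loader babel-register
```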

What does this install? Because Babel can compile several ECMAScript specs we need to install the preset for the version we want to use, in this case ES2015 (ES6). If we wanted ES7 we could install a preset for that too. We also need to install babel-loader so that we can integrate with Webpack. Lastly, babel-register is needed so that we can run our Mocha tests.

Next step is to tell Babel what presets we want to enable. Create a .babelrc config file if you haven’t already and add the following:
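For the ES2015 preset installed above, the config is simply:

```json
{
  "presets": ["es2015"]
}
```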

And of course if you want to use ES7 features you would add the ES7 preset to this config.

That takes care of Babel.

ES6 module bundling & loading – Webpack

We’ll be using the import / export statements from ES6, formatting our modules as ES6 rather than AMD or CommonJS. Webpack will bundle these modules up for loading in the browser. Install with:
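The install command would be along the lines of:

```shell
npm install --save-dev webpack
```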

Next we need to add a webpack.config.js file to the root of our project. Configure it like so:
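The original config isn’t preserved; below is a sketch consistent with the rest of the post (a bundle.js output, babel-loader for transpiling, Ramda kept out of the bundle via externals). The entry path and the global name R are assumptions:

```javascript
// webpack.config.js (webpack 1-era syntax, as used at the time)
module.exports = {
  entry: './src/main.js',
  output: {
    path: __dirname + '/build',
    filename: 'bundle.js'
  },
  devtool: 'source-map',
  module: {
    loaders: [
      // Run all project .js files through Babel
      { test: /\.js$/, exclude: /node_modules/, loader: 'babel-loader' }
    ]
  },
  // Keep 3rd-party libraries out of the bundle; 'R' is Ramda's global
  externals: {
    ramda: 'R'
  }
};
```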

The entry point for our app contains imports of the other files used in the project. It might look something like this:
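A hypothetical shape (file and module names are mine, not the project’s):

```javascript
// main.js – importing the app's modules pulls them into webpack's graph
import { renderList } from './renderList';
import { loadItems } from './storage';

loadItems().then(renderList);
```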

bundle.js is the output of our modules after they’ve been run through Babel and Webpack. If you have any 3rd party libraries, include them in the externals property so that they won’t be included in the bundle. Otherwise all the code for that library will get bundled up and dramatically increase the file size.

From the command line, run the following in order to actually create the bundle and its source map:
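One possibility, assuming webpack is installed locally rather than globally:

```shell
node_modules/.bin/webpack --devtool source-map
```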

Now we need to configure our npm run start command so that it does this bundling and serves up the files in one step. Add this to package.json:
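A sketch of the scripts entry; http-server is my stand-in for whatever static file server you prefer:

```json
"scripts": {
  "start": "webpack --devtool source-map && http-server ."
}
```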

That takes care of Webpack.

Setting up the Chrome extension

Chrome extensions have a config file of their own, manifest.json. Here’s the one from my project:
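The author’s exact manifest isn’t preserved here; below is a minimal sketch with the pieces discussed in the following paragraphs (icon, popup page, permissions, a content script with "run_at"). The name, icon path, and permission list are illustrative:

```json
{
  "manifest_version": 2,
  "name": "Reading List",
  "version": "1.0",
  "browser_action": {
    "default_icon": "icon.png",
    "default_popup": "popup.html"
  },
  "permissions": ["storage", "activeTab"],
  "content_scripts": [
    {
      "matches": ["<all_urls>"],
      "js": ["bundle.js"],
      "run_at": "document_start"
    }
  ]
}
```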

I won’t go into too much detail as this config can get really complex, but the main things to know are: you specify the icon, the HTML file to run when the extension icon is clicked, the Chrome APIs you need under permissions, and your content scripts – the scripts needed by the HTML file you specify. Disclaimer: you can also specify background scripts, but I did not make use of these. This setup is not tested with background scripts, although they may run just fine.

We take the bundle file output by Webpack and use it as our content script. An important thing to note is that you can control when this file runs using "run_at" – in this case at the start of the document load. This is especially useful when you need DOM events such as DOMContentLoaded, as extensions seem to block this event from firing.

Next we need to add the bundle file to our HTML:
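For reference, the script tags might look like this (the Ramda path is an assumption):

```html
<!-- popup.html: load the external library first, then the webpack bundle -->
<script src="lib/ramda.min.js"></script>
<script src="bundle.js"></script>
```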

A side note here: I had to add the Ramda library to the HTML even though it was specified in the Webpack config as an external library. Not sure if this is the correct way or not, but it works. YMMV.

That takes care of Chrome.

Setting up unit tests

Now we just need to set up our unit tests. If you don’t already have mocha installed, run npm install --save-dev mocha . Add this to the “scripts” property of the package.json file:
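Following the Mocha advice quoted below, the script can be as simple as:

```json
"scripts": {
  "test": "mocha --require babel-register"
}
```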

Most info you’ll find on setup will recommend
"test": "mocha --compilers js:babel-core/register test pattern here"
but this seems to be outdated, and the Mocha docs recommend just using --require babel-register. From the docs:
“If your ES6 modules have extension .js, you can npm install --save-dev babel-register and use mocha --require babel-register; --compilers is only necessary if you need to specify a file extension.”

Run npm run test and watch your tests run.
That takes care of unit tests.


ES6: Mutability of ‘const’

When first hearing about const in ES6, I was excited about the possibility of having immutability in native JavaScript. For developers programming in the functional style, this would have come in handy, but it turns out const is not actually immutable. It allows mutable properties. For example, all of the below is valid:
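For example (object and property names are mine), all of this runs without error:

```javascript
// A const binding can't be reassigned, but the object it points to
// is still fully mutable:
const student = { name: 'Sam', grade: 82 };
student.grade = 95;       // changing a property – valid
student.school = 'MIT';   // adding a property – valid
delete student.name;      // removing a property – valid
```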

while the below is not valid:
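Reassigning the binding itself is what fails, with a runtime TypeError:

```javascript
const student = { name: 'Sam' };

let error;
try {
  student = {}; // reassigning a const binding throws
} catch (e) {
  error = e;
}
console.log(error instanceof TypeError); // true
```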

So the object cannot be reassigned, but property values can be changed, and properties can be added or removed. It looks a lot like immutability, but it isn’t, and that’s an important distinction to make.

It’s probably known by now that if you need to make an object’s values immutable, you can use Object.freeze(), but be aware that this freezes the object entirely. You can’t add more properties or remove properties after it’s been frozen.
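A quick sketch of freezing in action (in non-strict code the writes are silently ignored; in strict mode they throw):

```javascript
const config = Object.freeze({ env: 'prod', retries: 3 });

try {
  config.retries = 10;   // ignored silently (TypeError in strict mode)
  config.timeout = 5000; // adding a property is blocked too
} catch (e) {
  // strict mode throws instead of ignoring
}

console.log(Object.isFrozen(config)); // true
console.log(config.retries);          // 3
console.log(config.timeout);          // undefined
```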

It’s probably a good idea to use const as much as possible, as it discourages unnecessary reassignment and forces the developer to think about how the variable will be used. If you need true immutability, you can use Facebook’s Immutable.js library.

This post is part of a continuing series on practical application of ES6 features. To view the first in this series, check out this link. More are on the way.


ES6: Destructuring

This is the first post in a series I’ll be doing about new ES6 features.  The goal is not merely to explain the concepts, but to also show “real-world” – or real enough – applications of them so that you can understand why and when you might use them.  Hopefully you will start recognizing scenarios or areas in your codebase that could benefit from these new features. After all, new language features should not only help us write cleaner, more expressive code; they should also help us, or even challenge us, to think about the way we solve problems.

The first feature covered is destructuring.


Destructuring is a way of breaking down a data structure (de-structuring) into smaller parts.  ES6 adds this feature for use with arrays and objects.


Destructuring provides a cleaner, less verbose way of extracting values from objects and arrays.  Rather than having to write

or in the case of an array, explicitly specify the index for the value you’re trying to get, you can instead write
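The contrast looks something like this (example object and names are mine):

```javascript
const node = { color: 'red', type: 'Test' };
const list = ['a', 'b'];

// ES5 style: one assignment per value
var color = node.color;
var type = node.type;
var first = list[0];

// ES6 destructuring: one assignment for all of them
const { color: c, type: t } = node;
const [f] = list;
```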

There are several other selling points, but before we dive into those, let’s see some code.



Let’s start with objects.  A destructuring pattern uses the same curly-brace syntax as an object literal (which is why, in assignment position, it can even be mistaken for a block statement).  Consider the code below:

Destructuring can be done in one of two ways:
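A sketch of both forms, using an object consistent with the later discussion (a node whose type starts out as "Test"):

```javascript
const node = { color: 'red', type: 'Test', name: 'square' };

// 1) Destructuring in the declaration
let { color, type, name } = node;
console.log(color); // 'red'
console.log(type);  // 'Test'
console.log(name);  // 'square'

// 2) Destructuring assignment to already-declared variables –
//    the parentheses keep the { } from being parsed as a block
({ color, type, name } = node);

// Reassigning a destructured variable doesn't touch the source object
type = 'Homework';
console.log(node.type); // 'Test'
```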

The result of this is three variables – ‘color’, ‘type’, and ‘name’ – each holding the value of its respective property.  It should be noted here that a destructuring declaration always needs an initializer (the object or array to the right of the = operator), whether declared with var, let, or const.  As a side note, while var and let do not need initializers for non-destructured declarations, const always needs one, destructured or not.

If we print out these values, the result will be as below:

As you’re hopefully already starting to see, the same syntax that is used to construct data can now be used to extract data.

Important to note here is that we actually aren’t changing the object itself, which is why node.type still returns “Test” although we assigned the variable value to “Homework”.  Destructuring doesn’t modify the source, whether it is var, let or const.  Only the destructured variables (if they’re var or let) are modified.

Assigning destructured variable to a different name

What if you don’t want to use the property name as the variable name? You can change it like so:
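The property name goes on the left of the colon and your new variable name on the right:

```javascript
const node = { type: 'Test' };

// Create localType instead of a variable named type
const { type: localType } = node;
console.log(localType); // 'Test'
```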

Side note: what happens if a property or object name is wrong?  Destructuring a non-existent property simply yields undefined, but destructuring from an object name that was never declared throws a ReferenceError:
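A quick illustration of both cases:

```javascript
const node = { type: 'Test' };

// Wrong property name: no error, the variable is just undefined
const { missing } = node;
console.log(missing); // undefined

// Wrong object name: the lookup itself throws
let error;
try {
  const { type } = notDeclared; // notDeclared was never defined
} catch (e) {
  error = e;
}
console.log(error instanceof ReferenceError); // true
```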

Nested objects

Destructuring is also applicable to nested objects, like below:
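A sketch with an AST-style nested object (the shape is mine):

```javascript
const node = {
  type: 'Identifier',
  loc: {
    start: { line: 1, column: 4 }
  }
};

// Reach into loc.start and grab line directly
const { loc: { start: { line } } } = node;
console.log(line); // 1
```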


Array destructuring is much like object destructuring, with the main difference being that variables correspond to positions in the array rather than to property names.
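For example (thrName matches the variable discussed below):

```javascript
const colors = ['red', 'green', 'blue'];

// Position, not name, determines what you get
const [firstColor, secondColor] = colors;

// Leave slots blank to skip values
const [ , , thrName ] = colors;

console.log(firstColor); // 'red'
console.log(thrName);    // 'blue'
```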

We can skip values in the array by leaving them blank. As you can see, thrName is an arbitrary name, in this case referring to the third position in the array.

Nested arrays

Just like with nested objects, so too can nested arrays be destructured:
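Nest the pattern to match the nesting of the array:

```javascript
const coords = [1, [2, 3], 4];
const [a, [b, c], d] = coords;
console.log(a, b, c, d); // 1 2 3 4
```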

Mixed data structures

Lastly, it is possible to apply what we’ve learned above in order to destructure mixed data structures, like below:
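A sketch combining object and array patterns (the response shape is mine):

```javascript
const response = {
  status: 200,
  body: {
    users: ['alice', 'bob']
  }
};

// Mix object and array patterns in one statement
const { status, body: { users: [firstUser] } } = response;
console.log(status);    // 200
console.log(firstUser); // 'alice'
```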

Side notes

Sometimes you will see the object or array literal to the right of the destructuring statement or expression:
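That is, the source can be an inline literal rather than a variable:

```javascript
const { width, height } = { width: 800, height: 600 };
const [min, max] = [0, 100];
```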

With arrays, you can use the rest operator (another ES6 feature) to iterate through the values without having to explicitly call them out:
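The rest pattern gathers everything after the named positions into a new array:

```javascript
const [head, ...tail] = ['a', 'b', 'c', 'd'];
console.log(head); // 'a'
console.log(tail); // ['b', 'c', 'd']
```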

Default values can be assigned if the object property or array value does not yet exist:
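Defaults only kick in when the source value is missing (undefined):

```javascript
const { width = 100 } = {};             // property missing → default used
const [first = 'none'] = [];            // element missing → default used
const { height = 50 } = { height: 75 }; // present → default ignored
```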


If you’re looking to convert some of your ES5 code to ES6, or just want to be aware of use cases for this new feature as you’re developing a current or future application, the following will be patterns to keep an eye out for.

As mentioned in the beginning of this post, a big selling point for destructuring is its cleaner way of extracting data from a data structure, instead of having to write something verbose like let val = someObject.someProperty.maybeSomeNestedProperty or something repetitive like
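The repetitive pattern to look for is several single-property assignments off the same object (names here are mine):

```javascript
const settings = { theme: 'dark', fontSize: 14, tabWidth: 2 };

// Repetitive ES5-style extraction
var theme = settings.theme;
var fontSize = settings.fontSize;
var tabWidth = settings.tabWidth;

// One destructuring statement instead
const { theme: t, fontSize: f, tabWidth: w } = settings;
```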

Another great use case is swapping values. Traditionally, developers have had to make use of a temp variable in order to swap values between variables, but now we can do this:
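The swap is a single destructuring assignment:

```javascript
let a = 1;
let b = 2;
[a, b] = [b, a]; // no temp variable needed
console.log(a, b); // 2 1
```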

Destructuring can be used with arrays and objects returned from a function, too:
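Destructure the return value directly at the call site (function names are mine):

```javascript
function getDimensions() {
  return { width: 1920, height: 1080 };
}
function getRange() {
  return [0, 255];
}

const { width, height } = getDimensions();
const [low, high] = getRange();
```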

That’s it for this week’s post. It’s sometimes difficult to read code interspersed with text, so I’ll put the code on GitHub.

I have a whole backlog of topics for future posts, and I’m not sure if the next one will be on ES6 or not. If you find this post useful, would like more clarifications on the concept, or – most importantly – would like to better understand why you might use this feature, please comment down below.


Authoring Yeoman Generators


The last couple of days I’ve been playing around with authoring a Yeoman generator for scaffolding out a Sketch app plugin.  While it’s not completely done yet, it’s in a “good enough/just ship it” state to put the source on GitHub.  I’ll be doing some posts in the future on how to create your own Sketch plugins, for which this generator will come in handy, but the purpose of this post is to go over some of the hurdles I faced and some not-easily-found documentation for those building their first Yeoman generators.  The existing documentation is pretty helpful, but as with any software project, you sometimes need to know where to look to find the information you need.

Some things to be mindful of:


The first thing you’ll likely do when creating your generator is add your package.json file.  Most generators are structured like so:
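The layout (per the Yeoman docs) is roughly:

```
my-generator/
├── package.json
└── generators/
    └── app/
        └── index.js
```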


and if you have sub-generators, your structure might look like this:
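With a sub-generator added (the sub-generator name is illustrative):

```
my-generator/
├── package.json
└── generators/
    ├── app/
    │   └── index.js
    └── subcommand/
        └── index.js
```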


Yeoman will look in ./ and generators/ for available generators.  If you’ve got sub-generators, the key is to add them to your package.json file, like so:
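The Yeoman docs do this via the files property, which must list the directories containing your generators:

```json
{
  "files": [
    "generators"
  ],
  "keywords": ["yeoman-generator"]
}
```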

Yeoman uses the Grouped-queue project to group tasks into a priority queue.  The priorities are as follows:

  1. initializing – Your initialization methods (checking current project state, getting configs, etc)
  2. prompting – Where you prompt users for options (where you’d call this.prompt())
  3. configuring – Saving configurations and configure the project (creating .editorconfig files and other metadata files)
  4. default – If the method name doesn’t match a priority, it will be pushed to this group.
  5. writing – Where you write the generator specific files (routes, controllers, etc)
  6. conflicts – Where conflicts are handled (used internally)
  7. install – Where installation are run (npm, bower)
  8. end – Called last; cleanup, say goodbye, etc.

This is something that is important to be aware of.  It’s in the official docs, but easy to skip over.

If you want to put tasks in the default task (#4 above), you can code them like so
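A sketch in the Base.extend style the generator ecosystem used at the time (the method names are mine):

```javascript
var generators = require('yeoman-generator');

module.exports = generators.Base.extend({
  // 'prompting' matches a priority name, so it runs in that slot
  prompting: function () { /* ... */ },

  // Anything that isn't a priority name falls into the default group
  myCustomStep: function () {
    this.log('runs in the default priority');
  }
});
```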

Question object

Another piece of the docs that’s easy to miss – when you’re coding the prompting function, the available prompt properties are

  • type: (String) Type of the prompt. Defaults: input – Possible values: input, confirm, list, rawlist, password
  • name: (String) The name to use when storing the answer in the answers hash.
  • message: (String|Function) The question to print. If defined as a function, the first parameter will be the current inquirer session answers.
  • default: (String|Number|Array|Function) Default value(s) to use if nothing is entered, or a function that returns the default value(s). If defined as a function, the first parameter will be the current inquirer session answers.
  • choices: (Array|Function) Choices array or a function returning a choices array. If defined as a function, the first parameter will be the current inquirer session answers. Array values can be simple strings, or objects containing a name (to display in the list), a value (to save in the answers hash), and a short (to display after selection) property. The choices array can also contain a Separator.
  • validate: (Function) Receive the user input and should return true if the value is valid, and an error message (String) otherwise. If false is returned, a default error message is provided.
  • filter: (Function) Receive the user input and return the filtered value to be used inside the program. The value returned will be added to the Answers hash.
  • when: (Function, Boolean) Receive the current user answers hash and should return true or false depending on whether or not this question should be asked. The value can also be a simple boolean.
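Putting a few of those properties together, a question object might look like this (the names and messages are mine):

```javascript
// An illustrative Inquirer.js-style question object
const question = {
  type: 'input',
  name: 'pluginName',
  message: 'What should your plugin be called?',
  default: 'my-plugin',
  // Return true for valid input, or an error message string otherwise
  validate: function (input) {
    return input.trim().length > 0 ? true : 'A name is required';
  }
};

console.log(question.validate('sketch-export')); // true
console.log(question.validate('   '));           // 'A name is required'
```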

This question object is from Inquirer.js, another project from the Grouped queue author.

Running plugin locally

In order to test out your generator, run npm link from the root of your generator project.  This symlinks your generator folder so that you can run ‘yo <your plugin name>’ without having to publish the project as an npm module or install it as one.

I would recommend installing the yeoman-generator package globally, because even though this should be a dependency in your package.json, when I ran the symlinked project it had issues finding the package.

A word on cloned git repos

If you’re developing and debugging your generator from a git clone – as I was – you might run into issues with the generator behaving oddly.  In particular, running my code from this git clone caused issues with the ‘writing’ function.  This function would get skipped over and I was not able to figure out why.  Maybe the problem is obvious to others, but if you face similar issues, I would recommend copying to a fresh folder and doing your development from there.

Why author a generator?

If you’re only vaguely familiar with this technology you might wonder what benefits it provides.  I would recommend using generators for two reasons:

  1. By quickly scaffolding an application, you’re able to save a lot of potential headaches and spend a lot more time actually building your application or tool
  2. If you work in a large, enterprise type environment, there are likely multiple teams working on similar applications and technology stacks.  Utilizing a generator can help ensure you’re following the same patterns for structuring applications across teams.

That’s it for now.  There are plenty of tutorials out there that will walk you through building a generator, but hopefully this post will help you navigate past some of the gotchas I encountered.

As a note to myself, some features I’d like to add to the Sketch generator in the future are:

  • Prompt validations
  • Rewrite in ES6




This past Wednesday was the inaugural meeting of the Hartford meetup group.  The meeting, courtesy of group owner and ambassador Paul Langdon, was in an awesome industrial loft/incubator/co-working space called reSET near downtown Hartford.


The organization behind the group, based in San Francisco, is a pretty cool concept – part hardware meetup organization, part hardware-sharing library.  So far there are about a hundred meetup groups throughout the U.S. (globe?), and they supply the individual groups with hardware like Intel Edisons, Arduinos, Amazon Echos, etc., with the goal of members taking the hardware home, building something with it, then passing it along to others in the group.

The website is similar to Instructables, but with a cleaner UI and, in my opinion, an easier way of progressing through the project tutorials.  Projects posted to the site list all steps on one page, which is much easier to navigate than having to jump between pages on Instructables’ site.  It’s also really easy to favorite projects and follow other makers.

On loan from library

The items I got to take home this month are the Freedom development board and the Arduino MKR1000.  Really stoked on the MKR1000, as it has built-in Wi-Fi and a LiPo charging circuit, so the thing charges itself when plugged into power.  No plans as of yet for what I’ll be doing with the boards, but I’ll be making a post with whatever I do come up with.

There’s an upcoming Amazon Alexa skills competition that I’ll be competing in as well and will be posting on the results.