Getting started with physical computing in JavaScript


May 15, 2015 Jean-Philippe Côté

So, you want to start leveraging your JavaScript skills to interact with the physical world? You have come to the right place. This article will paint a broad portrait of what can be achieved in the tangible world using JavaScript. It will also point you in the direction of various tools, libraries, frameworks, devices and tutorials that can help you get started. Here we go!

As you can see in the following diagram, a typical physical computing project can usually be broken down into three main parts: the inputs, the processing and the outputs. Inputs represent the information coming into the system from the environment. Outputs represent the system acting upon the environment. The processing is the logic of the system that decides what to do with the inputs and outputs.

[Diagram: The Physical Computing Ecosystem]

If we were to use a human analogy, the sensors would be our five senses, the actuators would be our muscles and our brain would be in charge of the processing. This is also the breakdown I will be using for this article. Let's start with the big red dot.

Processing

This is where most of the programming will take place. Basically, you take whatever inputs are fed into your system and turn them into the desired outputs. Here's what you typically need for that:

  • a programming language
  • a platform & runtime environment
  • a development environment

Obviously, the programming language is JavaScript. However, depending on the platform you want to deploy to, the runtime environment will change. If you want to deploy to mobile devices, you should look into Cordova (PhoneGap). If you want to deploy to smaller, perhaps embedded, devices, you should look at micro-controllers that natively support JavaScript: Tessel and Espruino. Another alternative would be a small single-board computer that can run Node.js, such as the Raspberry Pi. In this article, I will focus on projects driven by desktop/laptop computers. In such an environment, you have three main alternatives:

  1. If you only need visual output, you can simply use the good ol’ browser and make it fullscreen. However, if you are here, I suspect you want to do more.
  2. If you do not need visual output but want access to external devices, you should use Node.js. There are hundreds of node-compatible modules that will let you access devices from robots to MIDI instruments and everything in between.
  3. If you need both (which is most likely the case), I urge you to try out NW.js. NW.js (formerly node-webkit) lets you create desktop applications using HTML, CSS and JavaScript. These applications get full access to the local file system and to local resources. Even better, they can use any and all node-compatible modules. You get the best of both worlds (see the minimal manifest sketch below).
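
To make this concrete, an NW.js application is essentially a folder containing your HTML file and a package.json manifest. A minimal manifest might look like the following sketch (the name and main values are placeholders; the window options, including fullscreen, are part of the NW.js manifest format):

```json
{
  "name": "physical-computing-demo",
  "main": "index.html",
  "window": {
    "fullscreen": true
  }
}
```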

As far as I'm concerned, NW.js is one of the top assets you could have in your physical computing toolbox. Another good alternative is Electron. However, I find NW.js to be the more appealing option. You can read why in my article NW.js & Electron Compared.

To get started with NW.js, I suggest you read Creating a Native App with HTML, CSS and JavaScript. Another asset I urge you to look into is a good IDE (integrated development environment). Obviously, you could simply use a text editor. However, an IDE will offer productivity-boosting features that you should not live without: syntax highlighting, code inspection, code completion, refactoring, debugging, etc. I personally use WebStorm. It is a great IDE that has built-in NW.js support. Now that the processing side is covered, let's look at inputs.

Inputs

Physical computing implies that you will need to gather input from the environment, typically with something other than the mouse or keyboard. Let's look at different ways to sense the real world. One of my personal favorites is the webcam. NW.js lets you easily access a webcam through the getUserMedia API. Once you have a video feed coming in, you can do amazing stuff with it.
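
Grabbing the feed itself only takes a few lines. Here is a minimal sketch using the getUserMedia API in its modern, promise-based form; it assumes your page contains a <video id="monitor" autoplay></video> element (the id is a placeholder):

```javascript
// Request the webcam and pipe the live stream into a <video> element.
navigator.mediaDevices.getUserMedia({ video: true })
  .then(function(stream) {
    document.getElementById('monitor').srcObject = stream;
  })
  .catch(function(error) {
    console.error('Could not access the webcam:', error);
  });
```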

The getUserMedia API also grants you access to microphone input. Together with the WebAudio API, this can be used to detect the pitch (frequency) and the volume (amplitude) of live sound input. A good place to start is the dart-mic library.
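
To give you a taste, here is a minimal volume meter built directly on the WebAudio API; pitch detection takes more work (autocorrelation or spectral analysis), which is exactly where a dedicated library earns its keep:

```javascript
// Compute the RMS (volume) of the live microphone input.
navigator.mediaDevices.getUserMedia({ audio: true }).then(function(stream) {
  var context = new AudioContext();
  var analyser = context.createAnalyser();
  context.createMediaStreamSource(stream).connect(analyser);

  var samples = new Float32Array(analyser.fftSize);

  (function measure() {
    analyser.getFloatTimeDomainData(samples);
    var sum = 0;
    for (var i = 0; i < samples.length; i++) {
      sum += samples[i] * samples[i];
    }
    console.log('Volume (RMS):', Math.sqrt(sum / samples.length).toFixed(4));
    requestAnimationFrame(measure);
  })();
});
```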

The webcam and mic inputs are pretty obvious. What else can we use? Well, a set of devices I often use for input gathering is the Phidgets. These sensing boards will give you access to a wide range of information such as:

  • distance, proximity, motion
  • pressure, force, compression, vibration
  • temperature, air pressure
  • touch, bending
  • acceleration, GPS position, rotation
  • magnetism, light, pH
  • voltage, current
  • etc.

These sensors are super easy to use and will let you sense pretty much anything you need. They hook up to the general-purpose Phidget Interface Kit, which connects to your computer via USB. You can use them in NW.js by simply installing the phidgets module available on npmjs.org (see the sketch below). The Phidget Interface Kit also allows you to connect up to 8 binary inputs such as switches (mat switches, light switches, tilt switches, magnetic switches or any custom-made switch). Speaking of switches, if all you need is to grab input from switches, you could use a keyboard emulator to do that even more easily. For more information on this topic, read the article Switches, keyboard emulators & JavaScript.
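
As a rough sketch, reading analog sensors with the phidgets module looks something like the following. The class and event names below are assumptions based on the module's README as I know it, and the module communicates with the Phidget WebService, which must be running on the host machine; double-check the current documentation:

```javascript
// Sketch: reading analog sensor values through the phidgets module.
// Class and event names are assumptions; verify against the version
// you install. The Phidget WebService must be running on the host.
var phidgets = require('phidgets');

var pik = new phidgets.PhidgetInterfaceKit();

pik.on('sensor', function(emitter, data) {
  // data.index is the analog port, data.value the current sensor reading
  console.log('Sensor ' + data.index + ' reads ' + data.value);
});

pik.open(); // connects to the Phidget WebService (localhost by default)
```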

Outputs

The only thing missing from your project is its ability to insert itself into the real world. The easiest way is, obviously, to use the video output of your computer. Simply make your application fullscreen by modifying its package.json file (as shown earlier) and you have a clean video output to use. You can then display this output on one or more monitors (many graphics cards have dual or triple outputs nowadays) or even project it on regular flat surfaces or on objects. A nifty library that comes to mind when speaking of projections is Maptastic. This library lets you perspective-correct anything on a web page (video, canvas or any DOM element) so that it matches the surface upon which it is projected.

The second obvious output is the speakers. While the WebAudio API handles sound playback without problems, you probably should use a library to make the experience easier.
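
To see why a helper library is welcome, here is what playing a single sound file looks like with the bare WebAudio API in its promise-based form (the file name is a placeholder):

```javascript
// Fetch, decode and play one sound file with the bare Web Audio API.
var context = new AudioContext();

fetch('click.mp3') // placeholder file name
  .then(function(response) { return response.arrayBuffer(); })
  .then(function(data) { return context.decodeAudioData(data); })
  .then(function(buffer) {
    var source = context.createBufferSource();
    source.buffer = buffer;
    source.connect(context.destination);
    source.start();
  });
```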

What else? Well… how about making things move? Our good friends at Phidgets Inc. sell a line of linear actuators that you could use. Linear actuators are piston-like devices that can push or pull along their axis. You can easily control them with a servo controller. There are various ways to control a servo from JavaScript. One of them is to use the johnny-five robotics library and a device like the Arduino or the Raspberry Pi, as in the sketch below. Another would be for you to support 😉 the addition of a servo module to the phidgets Node library (I have not had time to do it yet…). You could also do the same with rotating motors (stepper motors or continuous-rotation motors). Again, you will need a motor controller that you drive from a Node-compatible module.
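
Here is the kind of sketch I mean, using johnny-five with an Arduino running the standard Firmata firmware; the pin number is just an example:

```javascript
var five = require('johnny-five');
var board = new five.Board(); // auto-detects an Arduino on a serial/USB port

board.on('ready', function() {
  var servo = new five.Servo(10); // servo signal wire on pin 10 (example)
  servo.sweep(); // sweep back and forth between 0 and 180 degrees
});
```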

Another nice thing to try out is a relay. A relay is a switch that you can control from your code, allowing you to power up or power down any electric device. You can control a relay by plugging it into a Phidget Interface Kit and using the Node-compatible phidgets module. Anything else? Well, it could be nice to control other types of devices. Depending on what you want to do, there are various protocols you could use to talk to external devices.

There are about a million other devices we could talk about here. You want to control robots? Use Cylon.js. You want to use a Kinect sensor? Use the kinect Node module. You want to… well, you get the idea. In the coming months, you will see more and more tutorials on this site pertaining to specific devices but, for now, that's it.


The nice header image was taken by Dmitry Baranovskiy and is used here under the terms of the Creative Commons Attribution 2.0 Generic license.

