This is where most of the programming will take place. Basically, you take whatever inputs are fed into your system and turn them into the outputs you want. Here’s what you typically need for that:
- a programming language
- a platform & runtime environment
- a development environment
- If you only need visual output, you can simply use the good ol’ browser and make it fullscreen. However, if you are here, I suspect you want to do more.
- If you do not need visual output but want access to external devices, you should use Node.js. There are hundreds of node-compatible modules that will let you access devices from robots to MIDI instruments and everything in between.
As far as I’m concerned, NW.js is one of the top assets you could have in your physical computing toolbox. Another good alternative is Electron. However, I find NW.js to be a more appealing option. You can read why I think so in my article titled NW.js & Electron Compared.
Physical computing implies that you will need to gather input from the environment, typically with something other than the mouse or keyboard. Let’s look at different ways to sense the real world. One of my personal favorites is the webcam. NW.js lets you easily access a webcam through the getUserMedia API. Once you have a video feed coming in, you can do amazing stuff. For example, you could:
- perform face detection or substitution with clmtrackr;
- detect objects or lines with jsfeat;
- detect features, track colors or tag people with tracking.js;
- grab pictures or detect motion.
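To give a feel for the last item, here is a sketch of frame-differencing motion detection. In NW.js you would draw two successive video frames to a canvas and read the pixels with `ctx.getImageData(...).data`; the pure comparison step shown below is where the detection actually happens. The function name and threshold are mine, not from any library:

```javascript
// Compare two RGBA pixel buffers and count pixels that changed
// noticeably between frames. A non-zero count suggests motion.
function countChangedPixels(frameA, frameB, threshold = 32) {
  let changed = 0;
  // RGBA data: 4 bytes per pixel. We compare only the green channel,
  // a cheap stand-in for luminance.
  for (let i = 1; i < frameA.length; i += 4) {
    if (Math.abs(frameA[i] - frameB[i]) > threshold) changed++;
  }
  return changed;
}

// Two 2-pixel "frames": only the second pixel's green channel moved.
const prev = new Uint8ClampedArray([0, 10, 0, 255, 0, 10, 0, 255]);
const next = new Uint8ClampedArray([0, 12, 0, 255, 0, 200, 0, 255]);
console.log(countChangedPixels(prev, next)); // → 1
```

In a real application you would call this on each animation frame, comparing the current `getImageData()` snapshot against the previous one.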
The getUserMedia API also grants you access to microphone input. Together with the WebAudio API, this can be used to detect the pitch (frequency) and the volume (amplitude) of live sound input. A good place to start is the dart-mic library. The webcam and mic inputs are pretty obvious. What else can we use? Well, one set of devices I often use for input gathering is the Phidgets. These sensing boards will give you access to a wide range of information such as:
- distance, proximity, motion
- pressure, force, compression, vibration
- temperature, air pressure
- touch, bending
- acceleration, GPS position, rotation
- magnetism, light, pH
- voltage, current
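The volume-detection idea mentioned above can be sketched with the WebAudio API: you connect the getUserMedia stream to an AnalyserNode and poll it with `getByteTimeDomainData()`. The RMS computation below is the part that turns those raw samples into a level (the browser wiring is shown in comments since it needs a live audio context):

```javascript
// getByteTimeDomainData() yields unsigned bytes centered on 128;
// convert to the -1..1 range and compute the root mean square,
// which is a good measure of perceived volume.
function rmsLevel(samples) {
  let sum = 0;
  for (const s of samples) {
    const v = (s - 128) / 128; // map 0..255 to -1..1
    sum += v * v;
  }
  return Math.sqrt(sum / samples.length);
}

// Browser-side wiring (requires a user-granted mic stream):
//   const ctx = new AudioContext();
//   const analyser = ctx.createAnalyser();
//   ctx.createMediaStreamSource(stream).connect(analyser);
//   const buf = new Uint8Array(analyser.fftSize);
//   analyser.getByteTimeDomainData(buf);
//   const level = rmsLevel(buf); // 0 = silence, ~1 = full scale

// Silence (all samples at the 128 midpoint) has a level of 0.
const silence = new Uint8Array(256).fill(128);
console.log(rmsLevel(silence)); // → 0
```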
The only thing that is missing from your project is its ability to insert itself into the real world. The easiest way is, obviously, to use the video output from your computer. Simply make your application fullscreen by modifying its package.json file and you have a clean video output to use. You can then display this output on one or more monitors (many graphics cards have dual or triple output nowadays) or even project it on regular flat surfaces or on objects. A nifty library that comes to mind when speaking of projections is Maptastic. This library lets you perspective-correct anything on a web page (video, canvas, or any DOM element) so that it matches the surface upon which it is projected.

The second obvious output is the speakers. While the WebAudio API handles sound playback without problems, you probably should use a library to make the experience easier. Some suggestions:
- TUIO: for various interaction devices (particularly multitouch devices)
- OSC (Open Sound Control): compatible with tons of hardware devices and software applications
- MIDI: for musical instruments, lighting and show control
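The fullscreen setup mentioned earlier really is just a manifest setting. A minimal NW.js package.json might look like this (the name and main file are placeholders; `"frame": false` additionally removes the window chrome):

```json
{
  "name": "my-installation",
  "main": "index.html",
  "window": {
    "fullscreen": true,
    "frame": false
  }
}
```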
There are about a million other devices we could talk about here. You want to control robots? Use Cylon.js. You want to use a Kinect sensor? Use the kinect Node module. You want to… well, you get the idea. In the coming months, you will see more and more tutorials on this site pertaining to specific devices but, for now, that’s it.
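The OSC protocol listed above is less daunting than it sounds. In practice you would reach for a Node OSC library, but the wire format itself is tiny: each field is padded with NUL bytes to a multiple of 4, and numbers are big-endian. As an illustration, here is a hand-rolled encoder for a message carrying a single float (the address `/fader/1` is just an example):

```javascript
// Pad a string with NULs to a multiple of 4 bytes, as OSC requires.
function oscPad(str) {
  const len = Buffer.byteLength(str) + 1; // +1 for the NUL terminator
  const padded = Buffer.alloc(Math.ceil(len / 4) * 4); // zero-filled
  padded.write(str, 0, "ascii");
  return padded;
}

// Build an OSC message: padded address, padded type tag string
// (",f" = one float32 argument), then the big-endian float itself.
function encodeOscFloat(address, value) {
  const arg = Buffer.alloc(4);
  arg.writeFloatBE(value, 0);
  return Buffer.concat([oscPad(address), oscPad(",f"), arg]);
}

// The resulting Buffer can be sent as a UDP datagram
// with Node's dgram.createSocket("udp4").
const msg = encodeOscFloat("/fader/1", 0.75);
console.log(msg.length); // → 20
```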