OriDomi
Fold the DOM up like paper.


The web is flat, but with OriDomi you can fold it up. OriDomi is an open-source paper-folding library with animation queueing, touch support, and zero dependencies.
Thing Translator is an open-source app that demonstrates the simplicity and charm of harnessing modern machine learning techniques – namely, computer vision and natural language translation. You can watch a video explaining how it works here.
It was built with some friends at Google Creative Lab as part of the A.I. Experiments series.
Gilded Gauge is an experiment in visualizing relative wealth in terms viewers may find more natural to grasp.
Enormous numbers become tangible via comparisons to the Fall of Rome, the distant future, and cascades of emoji commodities.
Each falling menagerie is an exact representation of the value in question, down to the dollar.
Gilded Gauge is entirely open source.
Pozaic uses WebRTC to connect friends (or strangers) in live video compositions you can turn into animated gifs. The medium captures a single second in time across geographic gaps.
EXIF is a form of metadata embedded in photo files by most cameras and phones.
This metadata includes information about the device used to capture the photo, but also often includes the GPS coordinates of where the photo was taken.
Many users unknowingly share this information with the general public and site/app owners when uploading photos online.
This has been a common vector of privacy lapses, including cases where journalists have unintentionally published photos with geotagging data intact.
Recent press has also revealed the NSA’s collection of EXIF data in its XKeyscore program.
ExifExodus is a small piece of open-source code that runs directly in your browser and strips EXIF data out of your photos before you upload them.
You can run ExifExodus whenever you’re uploading photos by using its bookmarklet (available on the site).
When ExifExodus encounters a JPG file, it will remove the EXIF data by copying the pixels to a new image file, similar to taking a screenshot of something.
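This pixel-copy approach can be sketched with the canvas API. The sketch below is a hypothetical illustration rather than ExifExodus’s actual source; `isJpeg` and `stripExif` are made-up names:

```javascript
// Illustrative sketch of EXIF stripping via pixel copy (not ExifExodus's source).
function isJpeg(file) {
  // EXIF stripping applies to JPEGs; other formats would pass through untouched.
  return file.type === 'image/jpeg';
}

function stripExif(file, callback) {
  if (!isJpeg(file)) return callback(file);
  var url = URL.createObjectURL(file);
  var img = new Image();
  img.onload = function() {
    // A canvas holds only decoded pixels, so drawing the image onto it
    // and re-encoding produces a file with no metadata segments.
    var canvas = document.createElement('canvas');
    canvas.width = img.naturalWidth;
    canvas.height = img.naturalHeight;
    canvas.getContext('2d').drawImage(img, 0, 0);
    URL.revokeObjectURL(url);
    canvas.toBlob(callback, 'image/jpeg');
  };
  img.src = url;
}
```

Note that, like a screenshot, re-encoding also discards the original compression settings, so the output bytes differ from the input even though the pixels match.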
Alternatively, you can drop your files in the dropzone at the top of the site and receive versions free of EXIF data. You can then save these new files and upload them wherever you’d like.
That’s certainly not the implication of this project. Metadata adds another dimension to photos and is valuable for preserving context. This project aims to educate and give users a choice in the matter of sharing it with specific services (and the web at large).
Yes. Although this prevents the general public from accessing your EXIF data, you should be aware that the end recipient is free to use or store the metadata before removing it.
The ExifExodus bookmarklet won’t work with any site that uses Flash (or any other proprietary plugins like Silverlight) to upload files. For such sites, use the dropzone converter, save the output files, and upload those instead.
ExifExodus only works with JPG files (the most common image format to carry EXIF metadata).
Cellf is an interactive experiment that reflects you and your surroundings as you play.
More simply put, it’s a twist on sliding tile games written in ClojureScript, React (via Om), and core.async.
Try it for yourself.
Set daily goals and visualize your progress in an intuitive and visual way. Motivate yourself to adopt new positive habits one day at a time.
Transform arrays of any length into cubes that can be rotated infinitely. I originally developed it as the time-picking interface for ChainCal, then expanded it to visualize arbitrary arrays and wrote an article detailing the process on Codrops.
Skew the shapes of elements without distorting their contents. Maskew creates a parallelogram mask over the element and supports touch/mouse manipulation of the skew amount.
Written quickly to scratch an itch; not intended to be an accurate algorithm.
Watch the demo to see what it does.
Usage:
new TuringType(domElement, 'Just some text.');
With some options:
new TuringType(domElement, 'Terrible but fast typist.', {
accuracy: 0.3,
interval: 20,
callback: allDone
});
Have fun.
Natal is a simple command-line utility that automates most of the process of setting up a React Native app running on ClojureScript.
It stands firmly on the shoulders of giants, specifically those of Mike Fikes who created Ambly and the documentation on setting up a ClojureScript React Native app.
Before getting started, make sure you have the required dependencies installed.
Then, install the CLI using npm:
$ npm install -g natal
To bootstrap a new app, run natal init with your app’s name as an argument:
$ natal init FutureApp
If your app’s name is more than a single word, be sure to type it in CamelCase. A corresponding hyphenated Clojure namespace will be created.
By default Natal will create a simple skeleton based on the current stable version of Om (aka Om Now). If you’d like to base your app upon Om Next, you can specify a React interface template during init:
$ natal init FutureApp --interface om-next
Keep in mind your app isn’t limited to the React interfaces Natal provides templates for; these are just for convenience.
If all goes well your app should compile and boot in the simulator.
From there you can begin an interactive workflow by starting the REPL.
$ cd future-app
$ rlwrap natal repl
(If you don’t have rlwrap installed, you can simply run natal repl, but using rlwrap allows the use of arrow keys.)
If there are no issues, the REPL should connect to the simulator automatically.
To manually choose which device it connects to, you can run rlwrap natal repl --choose.
At the prompt, try loading your app’s namespace:
(in-ns 'future-app.core)
Changes you make via the REPL or by changing your .cljs files should appear live in the simulator.
Try this command as an example:
(swap! app-state assoc :text "Hello Native World")
When the REPL connects to the simulator it will begin to automatically log success messages, warnings, and errors whenever you update your .cljs files.
Having rlwrap installed is optional but highly recommended since it makes the REPL a much nicer experience with arrow keys.
Don’t press ⌘-R in the simulator; code changes should be reflected automatically. See this issue in Ambly for details.
Running multiple React Native apps at once can cause problems with the React Packager so try to avoid doing so.
You can launch your app on the simulator without opening Xcode by running natal launch in your app’s root directory.
By default new Natal projects will launch on the iPhone 6 simulator. To change which device natal launch uses, you can run natal listdevices to see a list of available simulators, then select one by running natal setdevice with the index of the device on the list.
To change advanced settings run natal xcode to quickly open the Xcode project.
The Xcode-free workflow is for convenience. If you’re encountering app crashes, you should open the Xcode project and run it from there to view errors.
You can run any command with --verbose or -v to see output that may be helpful in diagnosing errors.
As Natal is the orchestration of many individual tools, there are quite a few dependencies. If you’ve previously done React Native or Clojure development, you should hopefully have most installed already. Platform dependencies are listed under their respective tools.
>=1.4
>=4.0.0
>=2.5.3
>=0.38.2
>=2.0.0
>=6.3
>=10.10
>=3.7.0
>=0.42 (optional but recommended for REPL use)
>=0.1.7 (install with npm install -g react-native-cli)
You can get the latest version of Natal by running npm install -g natal again.
Om Next app with Python server by David Mohl, demonstrated in a talk at the Tokyo iOS Meetup.
Zooborns by Jearvon Dharrie, demonstrated in a talk at Clojure/conj 2015.
Contributions are welcome.
For more ClojureScript React Native resources visit cljsrn.org.
If you’re looking for a simple ClojureScript wrapper around the React Native API, check out the companion library Natal Shell. It is included by default in projects generated by Natal.
Every Git repository is full of latent omens waiting to be divined through complex Lisp augury.
Tasseographer scours commit log hashes and cross-references words in /usr/share/dict/words.
Target any Git repository:
$ tasseographer [dir]
Natal Shell is a convenience wrapper around the React Native API, offering a simple Clojure-ready interface out of the box.
It is designed as a companion library to Natal (a command line utility for quickly bootstrapping React Native projects in ClojureScript), but can be used completely independently.
Natal Shell exposes React components as macros which you can require like so:
(ns future-app.core
(:require [om.core :as om])
(:require-macros [natal-shell.components :refer [view text switch-ios image slider-ios]]))
Use them like this:
(text {:style {:color "teal"}} "Well isn't this nice.")
You can pass children as the trailing arguments or as a collection:
(view
nil
(interleave
(map #(text nil %) ["un" "deux" "trois"])
(repeat (switch-ios {:style {:margin 20}}))))
All component names are normalized to Clojure’s kebab-case, for example:
;; Using SegmentedControlIOS
(segmented-control-ios {:values ["Emerald" "Sapphire" "Gold"]})
The same rule applies to API methods.
APIs are divided into separate Clojure namespaces like so:
(ns future-app.actions
(:require-macros [natal-shell.components :refer [text]]
[natal-shell.alert-ios :refer [alert prompt]]
[natal-shell.push-notification-ios :refer [present-local-notification]]))
(text {:onPress #(alert "Hello from CLJS")} "press me")
Natal Shell provides a simple macro called with-error-view that you can wrap around the body of your component’s render to get visible feedback when an error is thrown:
(defui HomeView
Object
(render [_]
(with-error-view
(view
nil
(throw "...")))))
A red screen with a stack trace will be shown, making it easier to realize where something’s gone awry.
Natal Shell is automatically generated from scraping the React Native docs via the script in scripts/scraper.clj.
Future areas of improvement may include optionally omitted prop arguments and automatic conversion of snake-case keys in maps.
// denotes that add() accepts two numbers and returns a third:
add = t('n,n n', function(a, b) { return a + b });
add(3, 7);
// => 10
add('3', '7');
// => Taxa: Expected number as argument 0, given string (3) instead.
Taxa is a small metaprogramming experiment that introduces a minimal grammar for type annotations to JavaScript (and by extension, CoffeeScript).
Unlike other projects of this nature, Taxa is purely a runtime type checker rather than a static analyzer. When a Taxa-wrapped function receives or returns arguments of the wrong type, an exception is thrown.
Further unlike other type declaration projects for JavaScript, Taxa’s DSL lives purely within the syntax of the language. There is no intermediary layer and no preprocessing is required.
Taxa type signatures are intended to be quick to type and to occupy few additional columns in your code.
Following this spirit of brevity, examples are also shown in CoffeeScript as it’s a natural fit to Taxa’s style.
In the following, Taxa is aliased as t (though $ or taxa feel natural as well):
t = require 'taxa'
# or in a browser without a module loader:
t = window.taxa
var t = require('taxa');
// or in a browser without a module loader:
var t = window.taxa;
A type signature is composed of two halves: the argument types and the return type, separated by a space.
pluralize = t 'String String', (word) -> word + 's'
var pluralize = t('String String', function(word) {
return word + 's';
});
The above signature indicates a function that expects a single string argument and is expected to return a string as a result. If any other types are passed to it, an informative error will be thrown:
pluralize 7
# => Taxa: Expected string as argument 0, given number (7) instead.
pluralize(7);
// => Taxa: Expected string as argument 0, given number (7) instead.
Taxa provides a shorthand for built-in types, indicated by their first letter. The following is equivalent to the previous example:
exclaim = t 's s', (word) -> word + '!'
var exclaim = t('s s', function(word) {
return word + '!';
});
Capital letter shorthand works as well:
exclaim = t 'S S', (word) -> word + '!'
var exclaim = t('S S', function(word) {
return word + '!';
});
The shorthand mapping is natural, with the exception of null:
0 => null
a => array
b => boolean
f => function
n => number
o => object
s => string
u => undefined
Multiple arguments are separated by commas:
add = t 'n,n n', (a, b) -> a + b
var add = t('n,n n', function(a, b) {
return a + b;
});
The above function is expected to take two numbers as arguments and return a third.
Occasionally you may want to ignore type checking on a particular argument. Use the _ character to mark it as ignored in the signature. For example, you may have a method that produces effects without returning a value:
Population::setCount = t 'n _', (@count) ->
Population.prototype.setCount = t('n _', function(count) {
this.count = count;
});
Or a function that computes a result without input:
t '_ n', -> Math.PI / 2
t('_ n', function() {
return Math.PI / 2;
});
Similarly you can specify arguments as optional and their type will only be checked if a value is present:
t 's,n? n', (string, radix = 10) -> parseInt string, radix
t('s,n? n', function(string, radix) {
if (radix == null) {
radix = 10;
}
return parseInt(string, radix);
});
For polymorphic functions that accept different types of arguments, you can use the | character to separate types:
combine = t 'n|s,n|s n|s', (a, b) -> a + b
var combine = t('n|s,n|s n|s', function(a, b) {
return a + b;
});
For each argument and return type in the above function, either a number or a string is accepted.
If you’d like to enforce types that are more specific than primitives, objects, and arrays, you’re free to do so:
makeDiv = t '_ HTMLDivElement', -> document.createElement 'div'
var makeDiv = t('_ HTMLDivElement', function() {
return document.createElement('div');
});
makeBuffer = t 'n Buffer', (n) -> new Buffer n
var makeBuffer = t('n Buffer', function(n) {
return new Buffer(n);
});
Since all non-primitive types are objects, specifying o in your signatures will of course match complex types as well. However, passing a plain object or an object of another type to a function that expects a specific type (e.g. WeakMap) will correctly throw an error.
Keep in mind that Taxa is strict with these signatures and will not walk up an object’s inheritance chain to match ancestral types.
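This strictness can be illustrated with a standalone sketch. The following mimics the behavior described above; it is not Taxa’s actual implementation, and `matchesType` is a made-up name:

```javascript
// Illustrative only: matching a value by its own constructor's name is
// exact, so instances of a subclass won't satisfy a parent type.
function matchesType(value, typeName) {
  return value != null && value.constructor.name === typeName;
}

function Animal() {}
function Dog() {}
Dog.prototype = Object.create(Animal.prototype);
Dog.prototype.constructor = Dog;

matchesType(new Dog(), 'Dog');    // true
matchesType(new Dog(), 'Animal'); // false: no walk up the inheritance chain
```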
Like any other function, those annotated with Taxa carry a bind method, which works as expected with the additional promise of modifying the output function’s Taxa signature.
For example:
add = t 'n,n n', (a, b) -> a + b
add2 = add.bind @, 2
add2 3
# => 5
var add = t('n,n n', function(a, b) {
return a + b;
});
var add2 = add.bind(this, 2);
add2(3);
// => 5
Under the covers, add2’s type signature was changed to n n.
You can add your own custom shorthand aliases like this:
t.addAlias 'i8', 'Int8Array'
t.addAlias('i8', 'Int8Array');
And remove them as well:
t.removeAlias 'i8'
t.removeAlias('i8');
You can disable Taxa’s type enforcement behavior globally by calling t.disable() (where t is whatever you’ve aliased Taxa as). This will cause calls to t() to perform a no-op wherein the original function is returned unmodified. This is convenient for switching between environments without modifying code. Its counterpart is naturally t.enable().
Take a look at the test cases in ./test/main.coffee for more examples of Taxa signatures.
When a function is modified by Taxa, its arity is not preserved, as most JS environments don’t allow modifying a function’s length property. Workarounds would involve using the Function constructor, which would introduce its own problems. This only has implications if you’re working with higher-order functions that dispatch by inspecting arity.
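The caveat is easy to demonstrate with a generic wrapper. This is a sketch of the mechanism, not Taxa itself:

```javascript
// A generic wrapper: the returned function is declared with no formal
// parameters, so its length property is 0 regardless of the original.
function wrap(fn) {
  return function() {
    return fn.apply(this, arguments);
  };
}

var add = function(a, b) { return a + b; };
var wrapped = wrap(add);

add.length;     // 2
wrapped.length; // 0 — libraries that dispatch on fn.length will misbehave
wrapped(1, 2);  // 3 — behavior is otherwise unchanged
```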
It should go without saying, but this library is experimental and has obvious performance implications.
Taxa is young and open to suggestions / contributors.
From the Ancient Greek τάξις (arrangement, order).
stream-snitch is a tiny Node module that allows you to match streaming data patterns with regular expressions. It’s much like ... | grep, but for Node streams, using native events and regular expression objects. It’s also a good introduction to the benefits of streams if you’re unconvinced or unintroduced.
The most obvious use case is scraping or crawling documents from an external source.
Typically you might accumulate the incoming chunks from a response into a string and then inspect the full response in the response’s end callback.
For instance, if you had a function intended to download all image URLs embedded in a document:
function scrape(url, fn, cb) {
http.get(url, function(res) {
var data = '';
res.on('data', function(chunk) { data += chunk });
res.on('end', function() {
var rx = /<img.+src=["'](.+)['"].?>/gi, src;
while (src = rx.exec(data)) fn(src[1]);
cb();
});
});
}
Of course, the response could be enormous and bloat your data buffer. What’s worse, the response chunks could arrive slowly while you’d like to perform hundreds of these download tasks concurrently and get the job done as quickly as possible. Waiting for the entire response to finish negates part of the asynchronous benefit of Node’s model and ignores the fact that the response is a stream object that represents the data in steps as they occur.
Here’s the same task with stream-snitch:
function scrape(url, fn, cb) {
http.get(url, function(res) {
var snitch = new StreamSnitch(/<img.+src=["'](.+)['"].?>/gi);
snitch.on('match', function(match) { fn(match[1]) });
res.pipe(snitch);
res.on('end', cb);
});
}
The image download tasks (represented by fn) can occur as sources are found without having to wait for a potentially huge or slow request to finish first.
Since you specify native regular expressions, the objects sent to match listeners will contain capture group matches as the above demonstrates (match[1]).
For crawling, you could match href properties and recursively pipe their responses through more stream-snitch instances.
Here’s another example (in CoffeeScript) from soundscrape that matches data from inline JSON:
scrape = (page, artist, title) ->
http.get "#{ baseUrl }#{ artist }/#{ title or 'tracks?page=' + page }", (res) ->
snitch = new StreamSnitch /bufferTracks\.push\((\{.+?\})\)/g
snitch[if title then 'once' else 'on'] 'match', (match) ->
download parse match[1]
scrape ++page, artist, title unless ++trackCount % 10
res.pipe snitch
$ npm install stream-snitch
Create a stream-snitch instance with a search pattern, set a match callback, and pipe some data in:
var fs = require('fs'),
StreamSnitch = require('stream-snitch'),
albumList = fs.createReadStream('./recently_played_(HUGE).txt'),
cosmicSnitch = new StreamSnitch(/^cosmic\sslop$/mgi);
cosmicSnitch.on('match', console.log.bind(console));
albumList.pipe(cosmicSnitch);
For the lazy, you can even specify the match event callback in the instantiation:
var words = new StreamSnitch(/\s(\w+)\s/g, function(m) { /* ... */ });
stream-snitch is simple internally and uses regular expressions for flexibility, rather than more efficient procedural parsing. The first consequence of this is that it only supports streams of text and will decode binary buffers automatically.
Since it supports arbitrary regular expressions, including capture groups and start/end operators, chunks are internally buffered and examined, and discarded only when matches are found. When given a regular expression in multiline mode (/m), the buffer is cleared at the start of every newline.
stream-snitch will periodically clear its internal buffer if it grows too large, which could occur if no matches are found over a large amount of data or you use an overly broad capture. There is the chance that legitimate match fragments could be discarded when the water mark is reached unless you specify a large enough buffer size for your needs.
The default buffer size is one megabyte, but you can pass a custom size like this if you anticipate a very large capture size:
new StreamSnitch(/.../g, { bufferCap: 1024 * 1024 * 20 });
If you want to reuse a stream-snitch instance after one stream ends, you can manually call the clearBuffer() method.
It should be obvious, but remember to use the m (multiline) flag in your patterns if you’re using the $ operator to look at endings on a line-by-line basis. If you’re legitimately looking for a pattern at the end of a document, stream-snitch offers only some advantage over buffering the entire response, in that it periodically discards chunks from memory.
ear-pipe is a duplex stream that allows you to pipe any streaming audio data to your ears (by default), handling any decoding automatically for most formats. You can also leverage this built-in decoding by specifying an output encoding and pipe the output stream somewhere else.
ear-pipe relies on the cross-platform audio utility SoX, so make sure that’s installed first.
$ npm install --save ear-pipe
var EarPipe = require('ear-pipe'),
ep = new EarPipe(/* <type>, <bitrate>, <transcode-type> */);
When arguments are omitted (e.g. ep = new EarPipe;), the type defaults to 'mp3', the bitrate defaults to 16, and the third argument is null, indicating that the pipe destination is your ears rather than a transcoded stream.
If your input encoding isn’t mp3, make sure you set it to one of the formats supported by SoX:
8svx aif aifc aiff aiffc al amb au avr cdda cdr cvs cvsd cvu dat dvms f32 f4 f64
f8 fssd gsm gsrt hcom htk ima ircam la lpc lpc10 lu maud mp2 mp3 nist prc raw s1
s16 s2 s24 s3 s32 s4 s8 sb sf sl sln smp snd sndr sndt sou sox sph sw txw u1 u16
u2 u24 u3 u32 u4 u8 ub ul uw vms voc vox wav wavpcm wve xa
Let’s pipe some number station audio to our ears right as it comes off the wire:
http.get(
'http://ia700500.us.archive.org/12/items/ird059/tcp_d1_06_the_lincolnshire_poacher_mi5_irdial.mp3',
function(res) { res.pipe(new EarPipe) });
If your connection and speakers work, you should hear it as it downloads.
Let’s send multiple audio streams to the same ear-pipe:
var ep = new EarPipe,
telstar = fs.createReadStream('./telstar.mp3'),
cream = fs.createReadStream('./cream.mp3');
http.get('http://127.0.0.1/sirens.mp3', function(res) { res.pipe(ep) });
telstar.pipe(ep);
cream.pipe(ep);
Since only one chunk passes through at a time, this DJ set should have plenty of cuts.
Since we’re decoding the audio on the fly, we can specify that we’d like to use that output for another destination besides our ears:
// null arguments mean defaults, true implies default output encoding (wav)
var ep = new EarPipe(null, null, true),
hotel = fs.createReadStream('./hotel.mp3');
hotel.pipe(ep).pipe(fs.createWriteStream('./hotel.wav'));
Or pipe to another process:
var ep = new EarPipe('wav'),
epTrans = new EarPipe(null, null, true),
audio = someStreamingNetworkData();
audio.pipe(epTrans).pipe(ep);
epTrans.pipe(anotherStreamingAudioConsumer);
Kill an ear-pipe instance by calling its kill() method. If you’re interested in the underlying SoX process, access an instance’s .process property.
As a system executable:
$ npm install -g statmap
When used as an executable, statmap returns JSON over stdout.
To map the current directory:
$ statmap > stats.json
Pass an optional argument for a different directory:
$ statmap .. > parent.json
The JSON will contain a recursive representation of the directory and all children. Each key is a file or directory name with a corresponding value containing a stats object and a children object if it is a directory. Directories are also given a sum property which reflects the size of all children recursively, unlike the typical size property of a directory’s stats object.
Here’s an excerpt of the output for the package itself:
{
"statmap": {
"stats": {
"dev": 16777220,
"mode": 16877,
"nlink": 9,
"uid": 501,
"gid": 80,
"rdev": 0,
"blksize": 4096,
"ino": 141035615,
"size": 306,
"blocks": 0,
"atime": "2013-11-25T01:02:05.000Z",
"mtime": "2013-11-25T01:02:05.000Z",
"ctime": "2013-11-25T01:02:05.000Z"
},
"sum": 165329,
"children": {
"README.md": {
"stats": {
"dev": 16777220,
"mode": 33188,
"nlink": 1,
"uid": 501,
"gid": 80,
"rdev": 0,
"blksize": 4096,
"ino": 141057002,
"size": 550,
"blocks": 8,
"atime": "2013-11-25T01:02:05.000Z",
"mtime": "2013-11-25T01:01:52.000Z",
"ctime": "2013-11-25T01:01:54.000Z"
}
},
"index.js": {
"stats": {
"dev": 16777220,
"mode": 33188,
"nlink": 1,
"uid": 501,
"gid": 80,
"rdev": 0,
"blksize": 4096,
"ino": 141035626,
"size": 1180,
"blocks": 8,
"atime": "2013-11-25T01:02:06.000Z",
"mtime": "2013-11-25T00:51:31.000Z",
"ctime": "2013-11-25T00:51:31.000Z"
}
},
"node_modules": {
"stats": {
"dev": 16777220,
"mode": 16877,
"nlink": 3,
"uid": 501,
"gid": 20,
"rdev": 0,
"blksize": 4096,
"ino": 141036545,
"size": 102,
"blocks": 0,
"atime": "2013-11-25T00:53:55.000Z",
"mtime": "2013-11-24T23:00:54.000Z",
"ctime": "2013-11-24T23:00:54.000Z"
},
"sum": 124651,
"children": {
"async": {
"stats": {
//...
Using this data, you could create something like a D3 zoomable treemap of your hard drive.
As a library:
$ npm install --save statmap
Pass a path and a callback:
var statmap = require('statmap'),
    util = require('util');
statmap('./spells', function(err, stats) {
  console.log(util.inspect(stats, { colors: true, depth: null }));
});
When used as a library, a live object is returned rather than a JSON string.
Commune.js makes it easy to run computationally heavy functions in a separate thread and retrieve the results asynchronously. By delegating heavy work to another thread, you avoid bogging down the main thread, which drives the UI. Think of it as a way to leverage the web workers API without ever having to think about the web workers API.
Using straightforward syntax, you can add web worker support to your app’s functions without the need to create separate files (as web workers typically require) and without the need to change the syntax of your functions. Best of all, everything will work without problems on browsers that do not support web workers.
Here’s an example where the first argument is the function to thread, the second argument is an array of arguments to pass to it, and the third is a callback to handle the result once it comes through:
var heavyFunction = function(a, b, c){
// do some work 100 million times
for(var i = 0; i < 1e8; i++){
a++;
b++;
c++;
}
// return arguments modified
return [a, b, c];
}
commune(heavyFunction, [1, 2, 3], function(result){
console.log(result); // [100000001, 100000002, 100000003]
});
//go ahead and continue with more work in the main thread without being held up:
console.log('I will appear before the loop finishes.');
setTimeout(function(){
console.log('I probably will too, depending on how fast your CPU is.');
}, 500);
In a browser that supports worker threads, the above will output:
I will appear before the loop finishes.
I probably will too, depending on how fast your CPU is.
[100000001, 100000002, 100000003]
In a browser without web worker support, everything still works, just in a different order:
[100000001, 100000002, 100000003]
I will appear before the loop finishes.
I probably will too, depending on how fast your CPU is.
With Commune.js, we could proceed with our work in the main thread without waiting to loop 100 million times.
Further proof:
commune(heavyFunction, [1, 2, 3], function(result){
console.log(result); // [100000001, 100000002, 100000003]
});
commune(heavyFunction, [50, 60, 70], function(result){
console.log(result); // [100000050, 100000060, 100000070]
});
commune(heavyFunction, [170, 180, 190], function(result){
console.log(result); // [100000170, 100000180, 100000190]
});
Running the above in a browser with worker support, you’ll see the results of each function call appear simultaneously, meaning that none of these large loops had to wait for the others to finish before starting. Using Commune.js with care, you can bring asynchronicity and parallelism to previously inapplicable areas.
To simplify things more, you can DRY up your syntax with the help of communify(), which transforms your vanilla function into a Commune-wrapped version:
var abcs = function(n){
var s = '';
for(var i = 0; i < n; i++){
s += 'abc';
}
return s;
}
// Communify the function for future calls:
abcs = communify(abcs);
// Or designate some partial application:
abcs = communify(abcs, [5]);
// Then call it later in a simplified manner:
abcs(function(s){
console.log('my opus:', s);
});
// Even cleaner with named functions:
abcs(alert);
// If you didn't use partial application with the original communify call:
abcs([10], alert);
When you pass a new function to Commune.js, it creates a modified version of the function using web worker syntax. Commune.js memoizes the result so additional calls using the same function don’t have to be rewritten.
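The rewriting step might look something like this. This is a hypothetical sketch; `buildWorkerSource` is a made-up name and Commune.js’s actual generated source differs:

```javascript
// Embed the function's source in a worker script that calls it with the
// posted arguments and posts its return value back to the main thread.
function buildWorkerSource(fn) {
  return 'self.onmessage = function(e) {\n' +
         '  var result = (' + fn.toString() + ').apply(null, e.data);\n' +
         '  postMessage(result);\n' +
         '};';
}

// In the browser, this source would become a Blob URL for `new Worker(...)`:
// var url = URL.createObjectURL(new Blob([buildWorkerSource(fn)]));
```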
Just write your functions as you normally would using return statements.
Commune.js automatically creates binary blobs from your functions that can be used as worker scripts.
Since web workers operate in a different context, you can’t reference any variables outside of the function’s scope (including the DOM), and you can’t use references to this since it will refer to the worker itself. For functions you want to use Commune.js with, use a functional style where they return a modified version of their input.
Also, since this is an abstraction designed for ease-of-use and flexibility, it does not work exactly as web workers do – namely you can’t have multiple return events from a single worker.
Let’s say you have a modern single page web application with client-side URL routing (e.g. Backbone).
Since views are rendered on the client, you’ll likely use RESTful Express routes that handle a single concern and return only JSON back to the client. The app’s only non-JSON endpoint is likely the index route (/).
So while /users might return a JSON array when hit via the client app’s AJAX call, you’ll want to handle that request differently if the user clicks a link from an external site or manually types it in the address bar. When hit in this context, this middleware internally redirects the request to the index route handler, so the same client-side app is loaded for every valid route. The URL for the end user remains the same and the client-side app uses its own router to show the user what’s been requested based on the route. This eliminates the tedium of performing this kind of conditional logic within individual route callbacks.
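The internal-redirect idea can be sketched as a few lines of middleware. This is illustrative only, not the package’s actual source; `spaRouter` is a made-up name:

```javascript
// Non-AJAX GET requests to known client-side routes are re-pointed at the
// index route before they reach the router; AJAX requests pass through.
function spaRouter(clientRoutes) {
  return function(req, res, next) {
    var isPageLoad = req.method === 'GET' && !req.xhr;
    if (isPageLoad && clientRoutes.indexOf(req.url) !== -1) {
      req.url = '/'; // the index handler serves the single-page shell
    }
    next();
  };
}
```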
$ npm install --save express-spa-router
In your Express app’s configuration, place this middleware high up the stack (before router and static) and pass it your app instance:
app.use(require('express-spa-router')(app));
AJAX requests will be untouched, but valid routes called without AJAX will result in the index route’s result being returned. Non-matching routes will be passed down the stack by default and will end up being handled by however your app handles 404s. This can be overridden by passing a noRoute function in the options object:
app.use(require('express-spa-router')(app,
{
noRoute: function(req, res, next) {
//handle unmatched route
}
}
));
Express’s default static paths are passed along correctly by default (as are /js and /css), but if you use different paths or have additional static files in your public directory, make sure to specify them in the options, either via a regular expression or an array of directory names:
app.use(require('express-spa-router')(app, {staticPaths: ['js', 'css', 'uploads']}));
You may also have valid client-side routes that don’t exist on the server side. Rather than having them reach the 404 handler, you can specify them in the configuration options using extraRoutes and passing either a regular expression or an array:
app.use(require('express-spa-router')(app, {extraRoutes: ['about', 'themes']}));
Finally, if you want to route non-AJAX GET requests to certain routes normally, pass paths in the ignore option:
app.use(require('express-spa-router')(app, {ignore: ['api']}));
Monocat is ideal for deploying small, static, single-page sites where you want to minimize the number of http requests. Monocat compresses and writes the contents of external assets into the html source for an easy speed optimization.
You’ll need Node.js installed. Then:
$ npm install -g monocat
Monocat works sort of like a jQuery plugin, but from the command line.
Just add the class monocat to any <script> or <link> (stylesheets only) tag you want to inline:
<link rel="stylesheet" href="css/main.css" class="monocat">
<script src="js/huge-lib.js"></script>
<script src="js/main.js" class="monocat"></script>
Notice that the second tag will be ignored since it lacks the monocat class.
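Conceptually, the inlining is a transform along these lines. This is an illustrative sketch, not Monocat’s actual source (the real tool also handles stylesheets, varying attribute order, and compresses the contents); `inlineScripts` is a made-up name:

```javascript
// Replace each script tag carrying the `monocat` class with its file
// contents wrapped in an inline script tag. The file reader is injected
// so the transform itself stays a pure string operation.
function inlineScripts(html, readFile) {
  return html.replace(
    /<script src="([^"]+)" class="monocat"><\/script>/g,
    function(tag, src) {
      return '<script>' + readFile(src) + '</script>';
    }
  );
}
```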
To create an optimized version of your html file, run this:
$ monocat index.html
By default, this will create a ready-to-deploy file called index_monocat.html in the same directory.
Pass an optional output filename as the second argument:
$ monocat src/index.html build/index.html