A Node.js port of the perceptualdiff image comparison tool (http://pdiff.sourceforge.net) with some additional features.
#Usage:
The package can be used in two different ways:
- from the command line, just like the original project
- through a class in your code
##Command-Line usage:
The command-line tool can be found in the `bin` directory. You can run the application with

```
node ./bin/perceptualdiff.js <image1>.png <image2>.png
```

where `<image1>.png` and `<image2>.png` are the images you want to compare.
Note: This port only supports PNGs!
The command-line tool exposes a couple of flags and parameters for the comparison:
```
--verbose           Turn on verbose mode
--fov deg           Field of view in degrees [0.1, 89.9] (default: 45.0)
--threshold p       Number of pixels/percent p below which differences are ignored
--threshold-image p Number of pixels/percent p below which differences are not generated (see --output)
--threshold-type p  'pixel' and 'percent' as type of threshold (default: pixel)
--pyramid-levels p  Number of levels of Laplacian pyramids (default: 3)
--gamma g           Value to convert rgb into linear space (default: 2.2)
--luminance l       White luminance (default: 100.0 cdm^-2)
--luminance-only    Only consider luminance; ignore chroma (color) in the comparison
--color-factor      How much of color to use [0.0, 1.0] (default: 1.0)
--scale             Scale images to match each other's dimensions
--sum-errors        Print a sum of the luminance and color differences
--output o          Write difference to the file o
--version           Print version
--help              This help
```
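For example, an illustrative run that ignores differences of fewer than 500 pixels and writes a difference image (the file names here are only placeholders) could look like this:

```
node ./bin/perceptualdiff.js before.png after.png --verbose --threshold 500 --output diff.png
```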
Most of the parameters are exposed just as in the original project. However, I changed a couple of parameter names to make the interface a little more consistent:
- `--luminanceonly` was renamed to `--luminance-only`
- `--colorfactor` was renamed to `--color-factor`
Since the PNG library I use does not support resampling, I had to remove that feature for now.
So, there is no `--resample`. Please resample the images through other means before using this diff tool.
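One possible way to do that, assuming ImageMagick is available, is to scale a screenshot down before running the comparison (the file names and percentage are just examples):

```
convert large-screenshot.png -resize 50% resized-screenshot.png
```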
I also added a couple of additional features, some of which are exposed in the command-line tool:
- `--threshold-image p` makes it possible to skip part of the comparison, reducing the time spent analysing the images, as node is a LOT slower than C. This feature will also skip the creation of output images if this threshold is not reached; it simply stops caring once the difference is below the threshold.
- `--threshold-type p` changes the threshold to count either absolute pixels or a percentage of all pixels. The values are `pixel` and `percent` respectively.
- `--pyramid-levels p` specifies the detail of the comparison - the higher the number, the higher the comparison resolution, but also the longer it will take. `2` is the lowest possible number. The original perceptualdiff tool internally used 8 as the default. Again, node is just too slow.
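To make these options concrete, a percentage-based comparison that only generates a difference image above 0.5% could be run like this (file names are placeholders):

```
node ./bin/perceptualdiff.js before.png after.png --threshold 1 --threshold-type percent --threshold-image 0.5 --pyramid-levels 3 --output diff.png
```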
##Class usage:
The package can also be used directly in code, without going through the command-line.
Example:
```js
var PerceptualDiff = require('perceptualdiff');

var diff = new PerceptualDiff({
    imageAPath: '...',
    imageBPath: '...',

    scale: true,
    verbose: true,

    pyramidLevels: 5,

    thresholdType: PerceptualDiff.THRESHOLD_PERCENT,
    threshold: 0.01,
    imageThreshold: 0.005,

    imageOutputPath: '...'
});

diff.run(function (passed) {
    console.log(passed ? 'Passed' : 'Failed');
});
```

All the parameters that were available in the command-line tool are also available through the class constructor - they use camelCase instead of the dash-separated flag names. The class also exposes additional parameters that are not available from the command-line tool.
- `imageAPath` Defines the path to the first image that should be compared (required)
- `imageBPath` Defines the path to the second image that should be compared (required)
- `imageOutputPath` Defines the path to the output file. If you leave this one off, this feature is turned off.
- `verbose` Verbose output (default: false)
- `luminanceOnly` Only consider luminance; ignore chroma (color) in the comparison (default: false)
- `sumErrors` Print a sum of the luminance and color differences (default: false)
- `fieldOfView` Field of view in degrees [0.1, 89.9] (default: 45.0)
- `gamma` Value to convert rgb into linear space (default: 2.2)
- `luminance` White luminance (default: 100.0 cdm^-2)
- `thresholdType` Type of threshold check. This can be `PerceptualDiff.THRESHOLD_PIXEL` or `PerceptualDiff.THRESHOLD_PERCENT` (default: THRESHOLD_PIXEL)
- `threshold` Number of pixels/percent below which differences are ignored (default: 100)
- `imageThreshold` Number of pixels/percent below which differences are not generated (default: 50)
- `colorFactor` How much of color to use [0.0, 1.0] (default: 1.0)
- `pyramidLevels` Number of levels of Laplacian pyramids (default: 3)
- `scale` Scale images to match each other's dimensions (default: false)
- `outputMaskRed` Red intensity for the difference highlighting in the output file (default: 255 - full red)
- `outputMaskGreen` Green intensity for the difference highlighting in the output file (default: 0)
- `outputMaskBlue` Blue intensity for the difference highlighting in the output file (default: 0)
- `outputMaskAlpha` Alpha intensity for the difference highlighting in the output file (default: 255)
- `outputMaskOpacity` Opacity of the pixel for the difference highlighting in the output file (default: 0.7 - slightly transparent)
- `outputBackgroundRed` Red intensity for the background in the output file (default: 0)
- `outputBackgroundGreen` Green intensity for the background in the output file (default: 0)
- `outputBackgroundBlue` Blue intensity for the background in the output file (default: 0)
- `outputBackgroundAlpha` Alpha intensity for the background in the output file (default: 0 - transparent)
- `copyImageAToOutput` Copies the first image to the output image before the comparison begins, making sure that the output image highlights the differences on the first image.
- `copyImageBToOutput` Copies the second image to the output image before the comparison begins, making sure that the output image highlights the differences on the second image.
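As a sketch of how these output options combine (the paths and values below are just examples, not defaults), the following highlights differing pixels in semi-transparent magenta drawn on top of the first image:

```js
var PerceptualDiff = require('perceptualdiff');

var diff = new PerceptualDiff({
    imageAPath: 'approved.png',   // placeholder paths
    imageBPath: 'current.png',
    imageOutputPath: 'diff.png',

    // Highlight differing pixels in semi-transparent magenta ...
    outputMaskRed: 255,
    outputMaskGreen: 0,
    outputMaskBlue: 255,
    outputMaskAlpha: 255,
    outputMaskOpacity: 0.5,

    // ... drawn on top of the first image
    copyImageAToOutput: true
});

diff.run(function (passed) {
    // Use the comparison result as an exit code, e.g. in a CI script
    process.exit(passed ? 0 : 1);
});
```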
###Logging:
By default, the logger doesn't log events anywhere, but you can change this behavior by overriding the `log` method:
```js
var diff = new PerceptualDiff({
    ...
});

diff.log = function (text) {
    // Do whatever you want to do
};

...
```

#Example:
There are some examples in the examples folder, where I used screenshots of Wikipedia to check for visual regressions.
You can find examples for:
- Missing DOM elements in `hidden_regression`
- Disrupted sorting in `sorting_regression`
- Color changes in `style_regression`
- Text capitalization in `text_regression`
All screenshots were compared to `wikipedia_approved.png`, a previously approved screenshot without a regression.
Each of the regressions includes the screenshot and the output result, highlighting the differences.
#TODO:
- Code documentation
#LICENSE
The original project was released under the GPL v2 license.