Thursday, April 28, 2016

Writing Web-Based Client-Side Unit-Tests with Jasmine and Blanket

Preface

When writing a website, or more often a single-page app, there is a need to test it, just like any other piece of code.

There are several types of tests, of course, including unit tests and integration tests.
While integration tests exercise flows of the entire application, end to end, and thus simulate user interaction (which requires a special browser-based testing package), unit tests run specific functions.
However, when the entire application is written in JavaScript, running individual pieces of code is a bit trickier.

On one hand, we are not used to writing unit tests in JavaScript and running them entirely in the browser. On the other hand, calling JavaScript code and then checking various members for expected values is much easier when done directly in JavaScript.

Luckily, the good people of the web have given us several JavaScript-based packages for writing unit tests. I'll talk about Jasmine, and add some words about Blanket, which integrates with Jasmine to perform code coverage.

Jasmine

Jasmine is a JavaScript-based library for writing and running unit tests. It consists of several parts:
  1. The Runner
  2. Test Framework
  3. Plug-ins

1. The Runner

The runner is an HTML file with base code that loads the test framework and runs the tests. It will not do anything out of the box: you have to add your own scripts to it, so consider it a template.

The base HTML looks like this:
<link rel="shortcut icon" type="image/png" href="jasmine/lib/jasmine-2.0.0/jasmine_favicon.png">
<link rel="stylesheet" type="text/css" href="jasmine/lib/jasmine-2.0.0/jasmine.css">

<script type="text/javascript" src="jasmine/lib/jasmine-2.0.0/jasmine.js"></script>
<script type="text/javascript" src="jasmine/lib/jasmine-2.0.0/jasmine-html.js"></script>
<script type="text/javascript" src="jasmine/lib/jasmine-2.0.0/boot.js"></script>

Next, you need to add your own application scripts:
<script type="text/javascript" src="src/myApp.js"></script>

And finally come your test scripts:
<script type="text/javascript" src="tests/myAppTestSpec.js"></script>

2. Test Framework

Jasmine has several files that make up the test framework. The most basic ones are the ones listed above, in the base HTML example. Let's go over them quickly:

jasmine.js

The most basic requirement. This is the actual framework.

jasmine-html.js

This one is used to generate HTML reports. It is a requirement, even if you don't want HTML reports.

boot.js

This one was added in version 2.0 of Jasmine, and it performs the entire initialization process.


Writing Tests


Structure

The unit tests in Jasmine are called "Specs", and are wrapped in "Suites". It looks like this:
describe("A suite", function() {
  it("contains spec with an expectation", function() {
    expect(true).toBe(true);
  });
});

The describe function describes a test suite, while the it function specifies a test.
Note that both take a name and a function block as parameters, and that the it block is called inside the body of the describe function block. This means you can store "global" members for each test suite. It also means that the tested code goes inside the it block, along with any assertions.
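
For example, a minimal sketch of a suite with shared members might look like this (the calculator object and its add function are made-up names used only for illustration):

describe("A calculator suite", function() {
  var calculator; // "global" member shared by all specs in this suite

  beforeEach(function() {
    // runs before every spec, so each test gets a fresh instance
    calculator = { add: function(a, b) { return a + b; } };
  });

  it("adds two numbers", function() {
    expect(calculator.add(2, 3)).toBe(5);
  });
});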

Expectations (a.k.a. Asserts in other test frameworks)

When writing a unit test you expect something to happen, and you assert if it doesn't. While in other test frameworks you usually use the term Assert to perform such an operation, in Jasmine you simply Expect something.

The syntax for expectations is straightforward:
expect(true).toBe(true);

There are many "matchers" you can user with the expect function, including but not limited to:
  • toBe - test the value to actually BE some object (using '===').
  • toEqual - test the value to EQUAL some other value.
  • toMatch - tests a string against a regular expression.
  • toBeDefined / toBeUndefined - compares the value against 'undefined'.
  • toBeTruthy / toBeFalsy - tests the value for JavaScript's truthiness or falsiness.
  • toThrow / toThrowError - if the object is a function, expects it to throw an exception.
You can also negate the expectation by adding not between the expect and the matcher.
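
As a quick illustrative sketch (the values here are made up), a few of these matchers in use might look like this:

expect(12 * 5).toEqual(60);
expect("Jasmine 2.0.0").toMatch(/\d\.\d\.\d/);
expect(window.someUndeclaredMember).toBeUndefined();
expect("").toBeFalsy();
expect(function() { throw new Error("boom"); }).toThrowError("boom");
expect(3).not.toBe(4);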

Spies

You can also use Jasmine to test whether a function has been called. In addition, you can (in fact, need to) define what happens when the spied function is called. The syntax looks like this:
spyOn(someObject, "functionName").and.callThrough();
spyOn(someObject, "functionName").and.returnValue(123);
spyOn(someObject, "functionName").and.callFake( ... alternative function implementation ... );
spyOn(someObject, "functionName").and.throwError("Error message");
spyOn(someObject, "functionName").and.stub();
Then, you can check (expect) if the function was called using:
expect(someObject.functionName).toHaveBeenCalled();
or
expect(someObject.functionName).toHaveBeenCalledWith(... comma separated list of parameters ...);
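
Putting it together, a spec that uses a spy might look like the following sketch (the logger object and its log function are hypothetical names used only for illustration):

describe("A spec using a spy", function() {
  var logger;

  beforeEach(function() {
    // a hypothetical object to spy on
    logger = { log: function(message) { /* writes the message somewhere */ } };
    spyOn(logger, "log").and.stub();
  });

  it("records calls made to the spied function", function() {
    logger.log("hello");
    expect(logger.log).toHaveBeenCalled();
    expect(logger.log).toHaveBeenCalledWith("hello");
  });
});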

More info

There are many features you can use with Jasmine. You can read all about them in the official documentation at http://jasmine.github.io/

3. Plug-ins

Well, I'll only talk about Blanket, the code coverage utility that integrates with Jasmine.

In the runner, add the following line before the test spec scripts, but after the application scripts:
<script type="text/javascript" src="lib/blanket.min.jsdata-cover-adapter="lib/jasmine-blanket-2_0.js"></script>

and that's it!

Below the test results report there will be the code coverage report.

The blanket.js package can be found at http://blanketjs.org/ and the adapter for Jasmine 2.x can be found at https://gist.github.com/grossadamm/570e032a8b144ec251c1 (unfortunately, blanket.js only comes pre-packaged with an adapter for Jasmine 1.x).




Happy Coding!

Sunday, April 17, 2016

Profiling .NET performance issues


In this post I want to talk about a frustrating problem most developers will encounter at some point in their career - performance issues.
You write some code, you test and run it locally, and it works fine - but once it is deployed, bad things start to happen.
It just refuses to give you the performance you expect...
Besides doing the obvious (which is calling the server some bad names) - what else can you do?

In the latest case we encountered, one of our software engineers was working on merging the code from a few processes into a single process. We expected the performance to stay the same or improve (no need for inter-process communication) - and in all of the local tests it did.

However, when deployed to production, things started getting weird:
At first the performance was great, but then it started deteriorating for no apparent reason.
CPU usage started to spike and the total throughput dropped to about 25% worse than the original throughput.

The software engineer who was assigned to investigate the issue started by digging into the process performance indicators, using ELK.

Now, we are talking about a deployment of multiple processes per server and multiple servers - so careful consideration should go into aggregating the data.
Here is a sample of some interesting charts:



Analyzing the results, we realized the problem happened on all of the servers intermittently.
We also realized that some inputs would cause the problem to be more serious than others.
We used the ANTS profiling tool on a single process, fed it some "problematic" inputs, and the results were, surprisingly, not very promising:

a. There were no unexpected hotspots.
b. There were no memory leaks.
c. The generation 2 heap was not huge, but it held a lot of data - more than gen1 (though less than gen0).


Well, this got us thinking: might our problem be GC related?
We now turned to the Perfmon tool.
Analyzing the % Time in GC metric revealed that some processes spent as much as 50% of their time doing garbage collection.



Now the chips started falling into place:
One of our original processes used to do some bookkeeping, holding some data in memory for a long duration. Another type of process was a typical worker: doing a lot of calculations using byte arrays and then quickly dumping them.
When the two processes were merged we ended up with a lot of data in gen2, and also with many garbage collection operations because of the byte arrays - and that resulted in a performance hit.

Well, once we knew what the problem was, we had to resolve it - but that is a different blog post altogether...


Sunday, April 3, 2016

Challenges of learning ordinary concepts

In the last four years convolutional neural networks (CNNs) have gained vast popularity in computer vision applications.

Basic systems can be created from off-the-shelf components, making it a relatively easy task to solve problems of detection ("what is the object appearing in the image?"), localization ("where in the image is a specific object?"), or a combination of both.


Above: Two images from the ILSVRC14 Challenge


Most systems capable of reaching product-level accuracy are limited to a fixed set of predetermined concepts, and are also limited by the inherent assumption that a representative database of all possible appearances of the required concepts can be collected.
These two limitations should be considered when designing such a system, as concepts and physical objects used in everyday life may not fit them easily.

Even though CNN-based systems that perform well are quite new, the fundamental questions outlined below relate to many other computer vision systems.

One consideration is that some objects may have different functionality (and hence a different name) while having the same appearance.


For example, the distinction between a teaspoon, a tablespoon, a serving spoon, and a statue of a spoon is related to their size and usage context. We should note that in such a case, the existence and definition of the correct system output depend heavily on the system's requirements.





In general, plastic artistic creations raise the philosophical question of what the shown object is (and hence what the required system output should be). For example - is there a pipe shown in the image below?


When defining a system to recognize an object, another issue is the definition of the required object. Even for a simple everyday object, different definitions will result in different sets of concepts. For example, considering a tomato, one may ask which appearances of a tomato should be identified as a tomato.
Clearly, this is a tomato:

But what about the following? When does a tomato cease to be a tomato and become a sauce? Does it always turn into a sauce?

Since this kind of machine learning system learns from examples, different systems will behave differently. One may use all examples of all states of a tomato as one concept, whereas another may split them into different concepts (that is, whole tomato, half a tomato, rotten tomato, etc.). In both cases, a tomato that has a different appearance and is not included in any of the concepts (say, a shredded tomato) will not be recognized.
Other everyday concepts have a functional meaning (i.e. defined by the question "what is it used for?") while their visual cues may be limited. For example, all of the objects below are belts. Apart from the typical context (a possible location around the human body, below the hips) and/or function (it can hold a garment), there is no typical visual shape. We may define different types of belts that interest us, but then we may need to handle cases of objects that are similar to two types yet belong distinctly to only one of them.

Other concept definition considerations that should be addressed may be:
- Are we interested in the concept as an object, a location, or both? As an object (on the left) it can be located in the image, whereas the question "where is the bus?" is less meaningful for the image on the right.


These ambiguities are not always an obstacle. In cases where the concepts have vague definitions or a smooth transition from one concept to another, most system outputs may be considered satisfactory. For example, if an emotion detection system's output on the image below is "surprise", "fear" or "sadness" - it is hard to argue that it is a wrong output, no matter what the true feelings of the person were when the image was taken.


Written by Yohai Devir