Sunday, November 23, 2014

PicScout is Hiring!


Hi all,

PicScout is looking for top-notch software engineers who would like to join an extremely innovative team.
If you think you could fit in with the team, please solve the quiz below; if your solution is well written, expect a phone call from us. You can write your solution in any programming language you like.

Don't forget to also attach your CV along with the solution.
Candidates who are eventually hired will win a 3K NIS prize.


So here it is:
We want to build a tic–tac–toe server.
The server will enable two users to log in. It will create a match between them and will let them play until the match is over.

The clients can run on any UI framework or even as console applications (we want to put our main focus on the server for this quiz). Extra credit is given for good design, test coverage, and clean code.



Answers can be sent to: omer.schliefer@picscout.com








Monday, November 17, 2014

Running UI tests in parallel mode

At Picscout, we use automated testing.
Running tests is an integral part of our Continuous Integration and Continuous Deployment workflow. 
It allows us to implement new features quickly, since we are always confident that the product still works as we expect.
The automation tool that we are using is Selenium (Selenium is an open source set of tools for the automatic running of browser-based applications).
Despite the many benefits of Selenium, there is one drawback that constitutes a bottleneck: the running time of the tests.
For example, in one of our projects, we need to run 120 UI tests, which takes 75 minutes - a long time to wait for a deployment.
To handle this situation, we developed a tool that runs the tests in parallel mode.
Using this tool, we were able to reduce the run time to 12 minutes.

How it works:
Two tests can be run in parallel mode as long as they are not affected by each other.
We avoid writing tests that use and modify the same data, since they cannot be run in parallel.
The assumption is that two tests can run in parallel if each has a different test id.


When writing a new Selenium test, we add a “Test Case” attribute to the test.
The attribute will be the id of the test.


The tool reads the Selenium project DLL, separates the tests into different queues according to their test-case ids, and runs each queue in parallel.

13:43:51          Parallel Nunit Console
13:43:52
13:43:52  - Number of parallel queues: 6, number of tests: 29
13:43:52
13:43:52  - Number of serial queues: 1, number of tests: 3


The tool runs 10-15 test threads, instead of the single thread used in serial mode (one by one).
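The core idea - grouping tests into queues by their test-case id and running each queue on its own thread - can be sketched roughly like this (all names and structure here are illustrative, not the actual tool's code):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;

class ParallelQueueSketch
{
    // Groups tests by their test-case id: tests sharing an id go into the
    // same queue (and run serially), while different queues run in parallel.
    static Dictionary<int, List<string>> BuildQueues(IEnumerable<(string Name, int Id)> tests)
    {
        return tests.GroupBy(t => t.Id)
                    .ToDictionary(g => g.Key, g => g.Select(t => t.Name).ToList());
    }

    static void Main()
    {
        // Hypothetical tests: LoginTest and LogoutTest share id 1, so they
        // must run one after the other; SearchTest can run alongside them.
        var tests = new[] { ("LoginTest", 1), ("SearchTest", 2), ("LogoutTest", 1) };
        var queues = BuildQueues(tests);

        // Each queue runs on its own task; tests inside a queue run in order.
        Task.WaitAll(queues.Values.Select(queue => Task.Run(() =>
        {
            foreach (var test in queue)
                Console.WriteLine($"running {test}");
        })).ToArray());

        Console.WriteLine($"queues: {queues.Count}");
    }
}
```

The real tool additionally discovers the tests and their attributes from the compiled DLL via reflection, but the partition-then-parallelize step is the essence.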
There are two more options in the tool. 
The first option is "update local DB." 
This creates or updates the local DB (a minimized DB, just for running UI tests). 
The second option is for the UI tool to run the tests in parallel mode. 
These two features allow us to run the tests on a developer’s station before the code is pushed, and on our build machine after the code is pushed.






That's how we run UI tests these days.

Wednesday, November 5, 2014

Code retreat about logging

At PicScout we have various ways to improve our software craftsmanship abilities. One of them is doing code retreats.
The code retreats are led by two of our software engineers, who are responsible for choosing an interesting exercise for the R&D team to perform.
We usually do 2 or 3 sessions, each time writing code in pairs. Between sessions we discuss the previous session and decide what to do in the next one. In between we enjoy pizza and beer, courtesy of PicScout :)


I would like to elaborate about our last code retreat, led by Ram and Galit from our R&D team.

Session 1

The first session was about writing some complicated code that does a lot of complicated logic with a movie repository. We were asked to implement an interface which retrieves bulks of movies from the repository and filters them according to some given rules.
After the first session we regrouped and a few brave volunteers submitted their compiled code which was invoked by a small program written by Ram and Galit.
When the program returned wrong results, the developers that wrote it were asked to look at the logs and explain what went wrong.
They found it a bit hard to do because, well… there were no logs.

Then the real (evil) purpose of the code retreat was revealed:
How to write log messages that can be used in a production environment to give useful insights into complicated production scenarios.

Session 2

In the second session we were asked to add logs to the code we wrote in the first session using our logging infrastructure (based on log4net).
After the second session we regrouped again, ran a few more examples and discussed how the log prints were implemented, and how to improve them.
For example:
  • Using logs to show the application flow.
  • What is the log level we expect the application to write in production?
  • Which message should be written in each level? (For example: No normal system behavior should be logged at a warning level.) 
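The log-level point can be made concrete with a tiny illustrative logger (a sketch only, not our actual log4net configuration): messages below the configured production threshold are simply dropped, which is exactly why normal system behavior must not be logged at warning level.

```csharp
using System;

// Illustrative only: a minimal level-based logger. The message texts below
// are hypothetical examples from the movie-repository exercise.
enum Level { Debug, Info, Warn, Error }

class LevelLogger
{
    private readonly Level _threshold;
    public LevelLogger(Level threshold) => _threshold = threshold;

    // Messages below the configured threshold are dropped - this is how a
    // production threshold of Warn keeps normal-flow chatter out of the logs.
    public void Log(Level level, string message)
    {
        if (level >= _threshold)
            Console.WriteLine($"[{level}] {message}");
    }
}

class LoggingDemo
{
    static void Main()
    {
        var log = new LevelLogger(Level.Warn);             // typical production threshold
        log.Log(Level.Info, "fetched 50 movies");          // normal flow: dropped
        log.Log(Level.Warn, "repository returned an empty bulk"); // kept
        log.Log(Level.Error, "repository unreachable");           // kept
    }
}
```
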

Pizza and Beers


Session 3

For the final session, we were asked to write a small AOP (aspect-oriented programming) example that added some repetitive log messages.
This was a nice exercise that showed us how we can add logs to our code simply by adding an attribute to a method, thus keeping the code cleaner.
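A rough sketch of the attribute idea (illustrative only: a real AOP framework would weave the logging in automatically, whereas this simplified stand-in uses reflection, and all names here are hypothetical):

```csharp
using System;
using System.Reflection;

// A [LogCalls] attribute marks methods whose entry and exit should be logged.
[AttributeUsage(AttributeTargets.Method)]
class LogCallsAttribute : Attribute { }

class MovieService
{
    [LogCalls]
    public int CountMovies() => 42;
}

class LoggingInvoker
{
    // Invokes a method and, if it carries [LogCalls], surrounds the call with
    // entry/exit log lines - the repeating log code lives here exactly once.
    public static object Invoke(object target, string methodName, params object[] args)
    {
        MethodInfo method = target.GetType().GetMethod(methodName);
        bool logged = method.GetCustomAttribute<LogCallsAttribute>() != null;
        if (logged) Console.WriteLine($"enter {methodName}");
        object result = method.Invoke(target, args);
        if (logged) Console.WriteLine($"exit {methodName} -> {result}");
        return result;
    }
}

class AopDemo
{
    static void Main()
    {
        LoggingInvoker.Invoke(new MovieService(), "CountMovies");
    }
}
```

The business method stays clean; the logging concern is declared with one attribute and implemented in one place.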

Saturday, September 27, 2014

Employment after the age of 40 (MngtTips Podcast)

A couple of weeks ago we recorded a short session hosted by the MngtTips podcast.

In this session, three PicScout employees shared their experiences of job hunting and working in the hi-tech industry after the age of 40.
Though we're sure this issue isn't unique to the Israeli hi-tech industry, the difficulty of finding a proper job after the age of 40 in Israel is well known.

You can listen to the podcast here (Hebrew only).


By Beautification Syndrome

Monday, September 22, 2014

Software development apprenticeship program at PicScout



In May 2013 we were happy to start a software development apprenticeship program. This program is in cooperation with the Atidim program, which encourages engineering studies combined with an internship in Israeli high-tech and industrial companies.

Since the program started, we have focused our efforts on training apprentices for excellence and encouraging their growth and learning in accordance with PicScout's culture and core engineering principles:

·         Simplicity - creating simple and business-oriented designs
·         Efficiency and transparency - writing clean and readable code that others can easily maintain and adjust per business requirements
·         Innovation - nurturing innovative thinking as a key to growth
·         Creativity - bringing new and better ideas and finding creative solutions for identified problems
·         One voice - embracing collective responsibility


How does it work?

a) Our apprentices receive effective and solid training that provides them with the right tools on their way to becoming great developers.
b) The apprentices receive a wide range of tasks from PicScout's management, and they experience the full lifecycle of a product.
c) Upon their arrival at PicScout, a few mentors are assigned to facilitate an easy and friendly integration - both technically and socially.
d) The initial training period includes getting to know PicScout's products and architecture, understanding the organization's culture and going through technical training.
e) The apprentices are provided with an extensive reading list and with tools and technology reviews. We don't stop at a high-level review, but actually delve into details, explaining the trends, whys and hows behind every tool and technology.
f) We review PicScout products, explaining why the business decisions were made as they were and how our architectural and design solutions meet the business requirements.
g) The apprentices come to PicScout once or twice a week. Their ongoing work includes R&D tasks, design and code reviews, and participation in educational meetings, Hackathons and SW craftsmanship events.
h) In addition, each apprentice gets a chance to lead a small project end to end - starting from the design phase and going all the way to production maintenance.


What's next?

We are very pleased with the progress of the program so far, and we can already see the fruits of both sides' efforts. We look forward to integrating the apprentices into our company once they finish their studies and to welcoming additional apprentices in the future.

Wednesday, September 3, 2014

Selenium use at PicScout

At PicScout, we believe that every step in our development workflow that can be automated should be. Testing is one of those steps where automation is needed. Automated testing allows us to implement new features quickly, as we can always prove that the product still works as we expect.

In order to automate our testing process, we chose to work with Selenium among other tools we are using. Selenium is an open source set of tools for automating browser-based applications across many platforms. It is mainly used for web applications testing purposes, but is certainly not limited to just that.

Our approach is that most of the automation should run locally, to keep the continuous integration environment as clean as possible. This is a very important step on the way to continuous deployment. Therefore, we built a mechanism that enables running the automation, including Selenium, locally. By using this mechanism, the software engineers can ensure quality up to a certain level.

Each developer must first run the automation, including Selenium, on the local machine before his/her changes are pushed to the source control repository. To achieve this, each developer works in his/her own environment and doesn't interrupt other developers. That way the Selenium testing environment stays "clean". To ease the developers' lives, a set of tools was developed that is responsible for updating the DB and running the tests.

The tests use a dedicated DB which contains the relevant data; hence, before running them, the DB needs to be restored from a backup stored in the source control repository (Git). When a new Selenium test is written that requires some data changes, the tool allows publishing the local DB to the repository.

Furthermore, the Selenium tests are written over the NUnit framework, which runs the tests in sequence, one after the other. This can be time consuming, since some tests could run in parallel as long as they are not affected by each other. Running the tests in sequence takes about an hour, while in parallel it takes only 10 minutes!

Therefore, we developed a tool that supports both serial and parallel modes for running tests. In order to run tests in parallel, we had to isolate each test from the others. To achieve this, we had to make sure that the tests don't use shared DB resources or information (including, but not limited to, stored procedures, tables, etc.). By doing that, we know for certain that at any given time no test will disturb any other test, while still pushing performance to the limit and finishing all of the tests in minimal time, as opposed to running them sequentially.

That's about it on how we use selenium at PicScout.

Tuesday, July 15, 2014

PicScout's development process at a glance

The role of software engineers


    At PicScout, each software engineer takes full responsibility for a task. This is achieved by using the following guidelines:

  • User stories (also referred to as tasks) are written and described in advance, either by the product owner or the R&D team, and inserted into a queue.
  • When a Software Engineer (SE) is available, they pull a US from the queue and start to work on it.
  • The SE should read the US and verify that they understand it completely. For that, they may turn to the person who wrote the US, other group members, the group leader, the operations team or anyone else who can help them understand the US.
  • If the US is not well enough defined for the SE to start working on it, they should raise a flag and talk to the person who wrote it. In this case, the US may return to the queue to be reviewed again.
  • A US should not take more than 5 days. If we think a US will take longer, it should be divided into smaller User Stories. It is up to the SE to decide how to divide the US.
  • Once they start working on a US, they should set a due date for it (no longer than 5 days, as discussed previously). This due date helps in planning the team schedule and setting milestones.
  • A SE should handle each US end to end. Understanding, designing, implementing, testing and deploying are part of the task.
  • When a US is finished, it should be flagged as delivered or done. If the US involved code changes, it should also be flagged as ready for 'Code Review'.


Code quality

Maintaining high code quality is one of our main goals. In order to achieve this goal, we use a variety of processes and tools:

Coding skills and analysis

We use code analysis tools such as FxCop and Sonar to give better insight into the code quality at both the developer’s and the project’s levels. At the same time, we put a big emphasis on the human factor:
  • Code reviews are done on a daily basis
  • Code reads are done once a week - a team member shows code that he/she has written to a group from R&D, and the group discusses the code (design, architecture, implementation considerations, etc.)
  • Clean code lessons and educational meetings are done every 2-3 weeks
  • Special events such as hackathons and code retreats are held every 6 weeks 

Testing

We do not have a QA team. We don't believe we need one. Why?
Because we believe our software engineers should write code that works - it's as simple as that.
That seems like a very naïve idea of how software development can work in the real world. So how can we achieve this goal in practice?

When we develop new code we also develop some layers of testing for it:
  • Unit tests (using NUnit)
  • Integration tests (using NUnit)
  • Acceptance tests (using Selenium, SpecFlow and NUnit)

We do have a small team of automation verification engineers who help us manage and thicken the acceptance testing layer. Some manual testing is sometimes needed, of course, and our software engineers do manual testing whenever required.

To support the development and maintenance of our products, we also require a very efficient CI process - this was discussed in our previous post.

Sunday, June 22, 2014

The CI process at PicScout

We've been using Jenkins as our build server for some time now, but a recent switch from TFS to Git allowed us, among other things, to implement a more sophisticated approach to the Continuous Integration process.

We already had all the CI principles covered before - a single branch everyone was working on and committing to on an (almost) daily basis, automated self-testing builds, etc. But if you look at the definition of CI on Wikipedia, it says that in CI "no errors can arise without developers noticing them and correcting them immediately". Unfortunately, this was not our case.

But first of all, why are errors resulting in a broken build such an issue? Because they impact the entire team: anyone "getting latest" is likely to encounter issues at some stage because of errors introduced by someone else, and it may take a lot of time to realize that it's not your fault after all. A check-in on top of a broken build makes things even worse. In addition, you are not guaranteed to have a stable version for deployment.

Why did this happen to us?

A little bit of background to start with - we have one job (a Jenkins build) per solution, and we have dependencies set up between jobs to reflect the dependencies between projects in separate solutions. Whenever a job runs successfully, it triggers its dependent jobs, so a check-in may actually result in multiple Jenkins jobs running in a chain, one after another. The whole process used to take more than 30 minutes, with developers waiting all this time for possible error notifications while the source remained dirty since the check-in. Moreover, any other check-in during this time frame resulted in jobs near the end of the chain aggregating additional changes from TFS. Consequently, failure notification emails were sent to a group of developers, and they were in no hurry to take responsibility for something that was not necessarily their fault.

What do we have now ?

First of all, we put a lot of effort into minimizing job execution times, and right now the longest chain of jobs completes in well under 10 minutes. But the major change after the migration to Git was our new strategy for tackling "broken builds". Each developer now works on a local branch, pulls from the central repository's master branch, but pushes back to his/her personal branch. The Git Plugin allows Jenkins to merge the master branch into this personal branch, run all the necessary jobs on it, and merge it back to master on success. In case of a failure, the broken branch is ignored and the master branch remains untouched. Any pushes made by other developers at the same time run separately on different branches and don't affect each other. Feedback on the success/failure of the build is sent only to the developer who triggered it, so no more lame excuses.

  
This concept is not new for CI servers - there is "Gated Check-in" in TFS and "Delayed Commit" in TeamCity. What makes our approach a bit different is that the process is automated: there is no need to specify which build definition you want to use for your changes to be built, tested and pushed back to master. We have incorporated logic in a Git post-receive hook that inspects the changes made by the developer, then identifies and triggers the corresponding job in Jenkins. Another advantage compared to the two methods mentioned above is that the branch with the broken build can easily be accessed by other developers for review or assistance. In fact, with this approach it can even be more productive to allow a broken build than to try to prevent it all the time.

That's about it on how we do CI these days.

Monday, April 28, 2014

How to split an array in C#?

Every now and then, we need to perform mundane operations that are very simple but don't have a built-in function in the language. So we write some ad-hoc code, maybe even copy something from StackOverflow, and are done with it. What we sometimes fail to notice, however, is the effect this has on performance.
This entry will focus on splitting an array, but this is relevant for other operations as well.
The time it takes to perform an operation is only important if the code is in a time-critical section and/or is performed a large number of times. If this is not the case, whatever implementation you choose will probably be OK.
Having said that, let's look at some of the ways to split an array. We'll use a medium-sized byte array for this purpose (256 bytes), splitting it into a short array (16 bytes) and the rest.
Assuming you are using .Net 3.5+, the most natural way to go about this is to use LINQ:

private static void SplitArrayUsingLinq(byte[] data)
{
         byte[] first = data.Take(16).ToArray();
         byte[] second = data.Skip(16).ToArray();
}

As you can see, the method is very elegant, only 1 line of code required to create each part. In addition, it seems like using Take and Skip on a small number of runs can't cause a performance issue.
However, running the above code 1,000,000 times takes around 10 seconds, which is a considerable amount of time.
Let's compare the LINQ version to a good old for loop:

private static void SplitUsingForLoop(byte[] data)
{
         byte[] first = new byte[16];
         for (int i = 0; i < first.Length; i++)
         {
                  first[i] = data[i];
         }
         byte[] second = new byte[data.Length - first.Length];
         for (int i = 0; i < second.Length; i++)
         {
                  second[i] = data[i + first.Length];
         }
}

This yields a run time of less than 2 seconds - more than a x5 improvement! The looping method is clearly much more efficient. Let's try to improve on this some more.
If we do our Googling right, we find that copying arrays is actually a library function - Array.Copy. Let's test it:

private static void SplitArrayUsingArrayCopy(byte[] data)
{
         byte[] first = new byte[16];
         Array.Copy(data, first, first.Length);
         byte[] second = new byte[data.Length - first.Length];
         Array.Copy(data, first.Length, second, 0, second.Length);
}


We get a result of 250ms - another x8 improvement, and a total of x40 compared to the LINQ version!
Digging even deeper, and only for the byte[] case, we can use another method called Buffer.BlockCopy, which performs a low-level byte copy:


private static void SplitArrayUsingBlockCopy(byte[] data)
{
         byte[] first = new byte[16];
         Buffer.BlockCopy(data, 0, first, 0, first.Length);
         byte[] second = new byte[data.Length - first.Length];
         Buffer.BlockCopy(data, first.Length, second, 0, second.Length);
}


Now the results are 180ms, which is yet another improvement, albeit not as dramatic as the previous ones.
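For reference, this kind of measurement can be sketched with a simple Stopwatch harness (an approximation only - absolute numbers depend on the machine, CLR version and build configuration, though the relative ordering should hold):

```csharp
using System;
using System.Diagnostics;
using System.Linq;

class SplitBenchmark
{
    static void SplitUsingLinq(byte[] data)
    {
        byte[] first = data.Take(16).ToArray();
        byte[] second = data.Skip(16).ToArray();
    }

    static void SplitUsingBlockCopy(byte[] data)
    {
        byte[] first = new byte[16];
        Buffer.BlockCopy(data, 0, first, 0, first.Length);
        byte[] second = new byte[data.Length - first.Length];
        Buffer.BlockCopy(data, first.Length, second, 0, second.Length);
    }

    // Times a split method over many iterations and returns elapsed ms.
    static long Time(Action<byte[]> split, byte[] data, int iterations)
    {
        var sw = Stopwatch.StartNew();
        for (int i = 0; i < iterations; i++)
            split(data);
        sw.Stop();
        return sw.ElapsedMilliseconds;
    }

    static void Main()
    {
        var data = new byte[256];
        const int iterations = 100_000; // scaled down from 1,000,000
        long linq = Time(SplitUsingLinq, data, iterations);
        long block = Time(SplitUsingBlockCopy, data, iterations);
        Console.WriteLine($"LINQ: {linq}ms, BlockCopy: {block}ms");
    }
}
```
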
In conclusion:

Method              Time for 1,000,000 splits (s)   Improvement factor
LINQ                10.2                            -
for loop            1.93                            x5
Array.Copy          0.25                            x40
Buffer.BlockCopy    0.18                            x56

Kids, don't trust LINQ blindly for what really matters :)