Thursday, August 8, 2013

GetHashCode(): Is it that important?

Introduction



The idea for this blog entry came up after a pair code-review session.
I noticed the following method (in a class that holds picture coordinates):

public override int GetHashCode()
{
    return X.GetHashCode() + Y.GetHashCode(); // Algo. 1
}

At first glance it looked like a fair implementation.

I know a few rules of thumb for a solid GetHashCode() implementation:
•  The hash algorithm must be deterministic (for a given input, the output must always be the same).
•  Equal objects must have the same hash code.
•  Objects with the same hash code are not necessarily equal.

This entry reviews several implementations of GetHashCode().
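The examples below assume a simple point class along these lines - a minimal sketch (the post refers to it both as a pixel and as XyPoint); only X, Y and the Equals override matter here:

public class XyPoint
{
    public int X { get; set; }
    public int Y { get; set; }

    public override bool Equals(object obj)
    {
        // Two points are equal when both coordinates match.
        XyPoint other = obj as XyPoint;
        return other != null && X == other.X && Y == other.Y;
    }

    public override int GetHashCode()
    {
        return X.GetHashCode() + Y.GetHashCode(); // Algo. 1 (swapped per algorithm below)
    }
}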


Preliminary testing


Why isn't it a solid implementation?
Well, let's test it; the test will reveal the answer.
The test defines an image 1024 x 1024 in dimensions, 1M pixels in total.
Our class represents a single pixel on the image
and overrides the GetHashCode() method.
Now, let's calculate the hash code for each and every pixel on the image surface.
The values are then compared and summarized according to colliding hash code values.
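Roughly, the collision counting can be done like this (a sketch assuming the XyPoint class above; the code that draws the map is omitted):

// Hash every pixel of a 1024 x 1024 image and group the pixels by hash code.
Dictionary<int, int> pointsPerHash = new Dictionary<int, int>();

for (int x = 0; x < 1024; x++)
{
    for (int y = 0; y < 1024; y++)
    {
        int hash = new XyPoint { X = x, Y = y }.GetHashCode();

        int count;
        pointsPerHash.TryGetValue(hash, out count);
        pointsPerHash[hash] = count + 1;
    }
}

// A hash code shared by N pixels paints those N pixels with a darker color.
// For Algo. 1, only 2,047 distinct hash codes cover all 1,048,576 pixels.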

The number of collisions per pixel determines the pixel color:
white for collision-free pixels, and darker colors correlated
with the "popularity" (see legend) of the hash code.

This produces the following collision map for the 1st algorithm:

The results are even worse than expected!
This is mainly because the .NET implementation of int.GetHashCode()
returns the int value itself, so:
Pixel(X=50, Y=55).GetHashCode() == 105
Pixel(X=55, Y=50).GetHashCode() == 105
Pixel(X=54, Y=51).GetHashCode() == 105
And so on and on...


Suggested alternative algorithms


Let’s try a slightly better solution:
public override int GetHashCode()
{
    return X.ToString().GetHashCode() 
              + Y.ToString().GetHashCode(); // Algo. 2
}

You might expect that using the XOR operator between X and Y
would produce considerably better results:
public override int GetHashCode()
{
    return X.ToString().GetHashCode() 
              ^ Y.ToString().GetHashCode(); // Algo. 3
}

A common solution is to use a Mersenne prime:
public override int GetHashCode()
{
    return X + Y * 31; // Algo 4.
}

Assuming that image dimensions are restricted to 64K (for both X and Y),
we can create a "perfect hash", where every (X, Y) pair maps to a unique value:
public override int GetHashCode()
{
    return (X << 16) + Y; // Algo.5
}
 

Testing output


2nd algorithm result:


3rd algorithm result (although it looks better, it's actually a bit worse):



4th algorithm result (good distribution across the field,
pretty impressive for such a simple algorithm):

There is no point in showing the 5th algorithm's result: the image is pure white,
no collisions at all!


Collision effect!


Why are collisions so important?
Mainly for one big reason: performance!

Let's measure the time it takes to
insert into / look up from a dictionary (Dictionary<XyPoint, int>) and to calculate the hash code.
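Roughly, the measurement looks like this (a sketch using System.Diagnostics.Stopwatch and the XyPoint class above; illustrative, not the exact benchmark code we ran):

List<XyPoint> points = new List<XyPoint>(1024 * 1024);
for (int x = 0; x < 1024; x++)
    for (int y = 0; y < 1024; y++)
        points.Add(new XyPoint { X = x, Y = y });

Dictionary<XyPoint, int> map = new Dictionary<XyPoint, int>();

Stopwatch insert = Stopwatch.StartNew();
foreach (XyPoint p in points)
    map[p] = 0;                      // insert: colliding keys force chain walks
insert.Stop();

Stopwatch lookup = Stopwatch.StartNew();
foreach (XyPoint p in points)
{
    int value;
    map.TryGetValue(p, out value);   // lookup: the same chains are walked again
}
lookup.Stop();

Console.WriteLine("Insert: {0} ms, Lookup: {1} ms",
                  insert.ElapsedMilliseconds, lookup.ElapsedMilliseconds);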

This table shows the timings (for 1 million POCO items)
for our algorithms (tested on an Intel E8400 machine):

Time in ms.             Algo. 1    Algo. 2    Algo. 3    Algo. 4    Algo. 5
HashCode calculation         34        280        283         30         31
Dictionary insert        19,300      2,587      2,758        547         72
Dictionary lookup        18,614      2,468      2,692        577         98
Total time               37,948      5,335      5,733      1,154        201


Conclusion


Collisions cause a huge performance impact.
Even a good hash algorithm can suffer from a bad implementation
due to incompatibility or misuse.
Although Algo. 1 had one of the fastest hash calculations,
its overall performance compared to Algo. 5 was about 190 times slower!
This is because colliding values are chained inside the dictionary,
which requires an additional search along the chain to find the value.
It's incredible that one small, unremarkable line of code can degrade (or boost) performance this much.

Next time you override GetHashCode(),
you'll probably spend a little more time finding a better solution.

I hope this entry shed some light on the subject and emphasized its importance.

Monday, May 27, 2013

Apprenticeship Program


All professionals around the world need to be trained, and software engineers are no exception.

Hence, we are announcing a unique program (surely the first of its kind in Israel) that we are proud to kick off this week: a Software Development Apprenticeship Program.

PicScout will hire and train apprentices. We will focus on (but not limit ourselves to) clean code, reading and writing code, clean architecture, BDD, TDD, simple and business-oriented designs, tools and best practices. In a nutshell, everything you need to become a highly competent software engineer who cares about and takes pride in his profession.

We are looking for a couple of candidates to begin the program!

If you feel it's you, please feel free to send us your resume at jobs@picscout.com

Good Luck!




Wednesday, May 15, 2013

Building Lightweight Products

Here is a short talk about how we build lightweight products at PicScout (in Hebrew).
Unfortunately, the camera focused on the speaker instead of on the slides, so ping us if you'd like to receive the slides.


Sunday, May 12, 2013

Consumption of Large Json objects from the IIS


In my previous post I talked about binary serialization of Large Objects. Today, I‘m going to talk about the consumption of such objects from the IIS.

In this case our recommendation is: always stream objects and don't create intermediate strings or similar buffers, because otherwise you will quickly run into "Out of memory" exceptions.

First, let’s update our client so it will ask for “gzip” stream:
public class SpecialisedWebClient : WebClient
{
      protected override WebRequest GetWebRequest(Uri address)
      {
           HttpWebRequest request = base.GetWebRequest(address) as HttpWebRequest;
           // Ask the server for a compressed stream and let HttpWebRequest
           // decompress it transparently on the way in.
           request.AutomaticDecompression = DecompressionMethods.Deflate | DecompressionMethods.GZip;
           // Large payloads can take a while - allow up to 90 minutes.
           request.Timeout = 90 * 60 * 1000;
           return request;
      }
}
Next, an ASP.NET application usually has a controller which returns some JsonResult. We introduced a new class, LargeJsonResult, which is returned instead:
public class LargeJsonResult : JsonResult
{
       public override void ExecuteResult(ControllerContext context)
       {
            HttpResponseBase response = context.HttpContext.Response;
            response.ContentType = "application/json";

            if (ReturnCompressedStream(context))
            {
                 response.AppendHeader("Content-encoding", "gzip");

                 using (GZipStream gZipStream = new GZipStream(response.OutputStream, CompressionMode.Compress))
                 {
                      SerializeResponse(gZipStream, Data);
                 }
            }
            else
            {
                 SerializeResponse(response.OutputStream, Data);
            }
       }

       private bool ReturnCompressedStream(ControllerContext context)
       {
            string acceptEncoding = context.HttpContext.Request.Headers["Accept-Encoding"];
            if (!string.IsNullOrEmpty(acceptEncoding) && acceptEncoding.ToLowerInvariant().Contains("gzip"))
            {                
               return true;
            }

            return false;
      }

      private static void SerializeResponse(Stream stream, object data)
      {
           using (StreamWriter streamWriter = new StreamWriter(stream))
           using (JsonWriter writer = new JsonTextWriter(streamWriter))
           {
                JsonSerializer serializer = new JsonSerializer();

                streamWriter.AutoFlush = true;
                serializer.Serialize(writer, data);
           }
      }
}
Here we use the Newtonsoft Json serializer. As you can see, the Json object is streamed to the client, so the last thing we need to do is update the client code that consumes this object:
using (SpecialisedWebClient client = new SpecialisedWebClient())
{
      client.Headers.Add("Content-Type: application/json");

      using (Stream stream = client.OpenRead(serviceUri))   
      using (StreamReader reader = new StreamReader(stream, System.Text.Encoding.UTF8))
      using (JsonReader jreader = new JsonTextReader(reader))
      {
           JsonSerializer js = new JsonSerializer();    

           return js.Deserialize<JObject>(jreader);
      }
}
That's it. Now you can transfer GBs of Json data over the wire.
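For reference, a controller action can then return it directly. This is a minimal sketch - the controller, action and LoadResults() names are made up for the example:

public class ResultsController : Controller
{
    public ActionResult GetAllResults()
    {
        object payload = LoadResults();   // fetch the potentially huge object graph

        // LargeJsonResult gzip-streams the payload whenever the client
        // sends Accept-Encoding: gzip (as SpecialisedWebClient does above).
        return new LargeJsonResult { Data = payload };
    }

    private object LoadResults()
    {
        // Placeholder for the real data-access code.
        return new { Items = new[] { "a", "b", "c" } };
    }
}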

Note: In my previous post, I talked about large object serialization and our custom implementation of ISerializable. Apparently, when the Json representation of such an object is streamed from IIS using the Newtonsoft serializer, the serializer calls the ISerializable methods, and binary data is emitted instead of Json. To disable this behavior we need to add the [JsonObjectAttribute] on top of the object.
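In code, that boils down to something like this (a sketch; the bodies of the ISerializable members stay exactly as shown in the previous post):

[Serializable]
[JsonObject]   // tells Json.NET to serialize the public properties instead of calling ISerializable
public class Result : ISerializable
{
    public string Uri { get; set; }
    public List<Data> AData { get; set; }

    public Result() { }

    public void GetObjectData(SerializationInfo info, StreamingContext context)
    {
        // ... unchanged: still used by the binary serializer, ignored by Json.NET ...
    }

    protected Result(SerializationInfo info, StreamingContext context)
    {
        // ... unchanged ...
    }
}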

Wednesday, May 1, 2013

BDD @ PicScout

Lead

It is a well-known fact that BDD (Behavior-Driven Development) has been gaining popularity over the last couple of years, both in theory and in practice. Dan North, the acknowledged founder of this idea and creator of its first implementation, describes its main concepts beautifully in his Introducing BDD paper.

Since so many words have already been written in favor of (and against) BDD, I will not go into the details of this methodology and its techniques, but rather try to present PicScout's perspective.


Our Case

We at PicScout came across this methodology while trying to improve our delivery cycles. On one hand, we discovered that our unit tests, though comprehensive, were not sufficiently reducing the number of bugs reaching QA. On the other hand, we noticed ever-growing disparities between what product and business owners imagined and what our developers eventually delivered. Every approach we tried in order to resolve that pitfall was doomed to failure. What eventually happened is that our QA engineers were obliged to manually bridge this chasm by spending more time and focus on acceptance and regression. Needless to say, it overwhelmed the QA pipe.

What we found interesting about BDD is that it creates a common language for product owners, QA engineers and developers - describing the feature with a "given-when-then" logical pattern. The developer can then go ahead and create (and test) the logic based on those guidelines. Correspondingly, the QA engineer has a better grasp of the cycle and the product owner gets clear visibility into features in development.

It was everything we had hoped for!


Our Practice

However, as with any XP-derived methodology, embracing BDD requires an ideological revolution. To be honest, it was not something we were willing to do without giving it a thought. The first "D" in BDD is for "Driven" - as in TDD - which means design and development are entirely directed by writing tests first. TDD critics argue that developing a real-world system from scratch with TDD is unreasonable, or at least comes with excessive overhead. We at PicScout do not take sides in this theoretical war. We practice TDD not as a must but as a privilege, mostly in the case of autonomous modules. This is why we decided to spare the "Driven" part of BDD, meaning system (or feature) development is mostly written in a code-first approach followed by must-have unit tests.

But the "B" in BDD is what intrigued us the most. Key features are translated (though not entirely) into the most valuable "given-when-then" scenarios, usually written by a QA engineer but always reviewed and edited by a developer. Scenarios are usually implemented during the development of the logic and unit tests (again, not necessarily in a TDD style). They can be implemented by the developer as a system test (for that we're using SpecFlow) or by the QA engineer as a UI test (with Selenium).
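To make the "given-when-then" part concrete, here is a minimal SpecFlow-style step-definition sketch in C# (the feature and step texts are invented for the example):

using TechTalk.SpecFlow;

[Binding]
public class ImageSearchSteps
{
    [Given(@"an image that exists in the index")]
    public void GivenAnImageThatExistsInTheIndex()
    {
        // Arrange the system under test (illustrative placeholder).
    }

    [When(@"a customer searches for that image")]
    public void WhenACustomerSearchesForThatImage()
    {
        // Drive the feature through its public API.
    }

    [Then(@"the matching result is returned")]
    public void ThenTheMatchingResultIsReturned()
    {
        // Assert on the observable behavior.
    }
}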


Our Gain

  • The developers - now fully aware of all the important aspects of a user story, which might not have been apparent from the story directly - can provide better feature and code coverage.
  • The developers have a common guideline for testing logic beyond the unit scope. As Dan North accurately describes, our developers can now answer the five common questions: where to start, what (not) to test, how much to test, what to call a test and why a test fails.
  • QA engineers have a comprehensive acceptance and regression layer, assured to cover all the main features of a system. They are no longer a bottleneck.
  • Product and business owners get a quick reference to what is or was delivered, so maintaining and expanding the production system becomes simpler.


Conclusion

BDD is an emerging development methodology which aims to ease acceptance and regression pain. As with TDD, it has earned criticism along the way, mainly concerning its ability to be fully practiced in real-world, large-scale systems.

At PicScout, we are embracing several aspects and techniques of this approach, especially everything oriented around the "Behavior" principle. What we can already determine is that the gain outweighs the extra effort, both in terms of development and business value.


Monday, March 4, 2013

Access your organizational data securely and transparently


What do I have to hide?

If you are dealing with an information system, you always have something to hide. Maybe not at first, but as your system grows, it starts to accommodate more users of various roles and positions inside or outside the organization. Perhaps the types of information you store diversify as well. At some point, you're going to need some boundaries to protect that information, or your data management will end up looking like a Harlem Shake.
For instance, say you have a software system for managing scientific experiments and results. Its purpose is to allow the public easy access to research and to allow scientists to share their knowledge. So why hide anything? It may seem at first sight that you would not need any security here, but let us consider some example use cases:
  1. A general user who wants information about children's safe attachment
  2. A biologist wants to see the latest news about stem cell research
  3. A member of a cross-disciplinary research project wants to see its progress
You can see how all of these users are after the same type of information (research results), but as the architect of the system, you would want to give them different levels of access to it: you wouldn't want to expose the details of an ongoing project to anyone but the team, and a biologist can probably see partial, unverified results that you wouldn't want to share with the general public at this point.
Let's try to describe a simple security layer to address those needs.

Building a protection layer

Unknown location

The easiest and simplest way to keep a secret: you want to hide something? Don't tell anyone where it is. This is similar to how Google allows you to share photo albums with "Anyone with the link". Given that the link is random enough, it would be pretty hard for someone to find it. However, once the link is leaked, you lose control over it. You give one colleague the link to your research results, he sends it to his journalist friend, and you don't know what happens next.

Password protecting

So maybe just put a password on anything you want to keep safe? Again, the password could be leaked, but even if it isn't, or you manage to change it in time, this is problematic: now you need to manage a whole range of passwords for all your data and remember everyone else's passwords. This is cumbersome, inefficient and doesn't provide real security.

User identity

Why don't we approach this from a different angle and try to identify the user first, and only then let him access the application? Let's assume our program has some sort of login mechanism with which it identifies the user. The user then tries to find the relevant information. This approach works better than the previous ones since there is no need for user interaction after the first identification (which can sometimes be done automatically using Single Sign-On). Also, the permissions are granular and dynamic per user, so we have lots of control.
But, as Uncle Ben once said, with great control comes great complexity. We now have the responsibility to assign (and maintain!) the different permissions for all users and data, which can amount to thousands or even millions of combinations.

A simple, scalable solution

So we come to the conclusion that while user-based permissions are secure enough, we need a way to treat users as groups with common properties rather than as individuals. If we label the data in a smart way, users will have an easier time accessing it.
How can we do this? There are several common ways:
  1. Flat labeling – marking pieces of data with some sort of label or tag is the basis of Web 2.0. We can mark each project with a unique label, if needed.
  2. User groups – we can assign the user to various groups according to his affiliation. For example, "biologists from Oxford" can be such a group if we want to allow them to share information only they will see. Another option is to separate "Biologists" and "Oxford" and use some sort of combination logic.
  3. Roles – If you wish to expose only some of the data to general users, this can be accomplished by assigning a hierarchy of roles and only allowing "scientists or above" to view it.
Accessing the data can be done via a (simplified) method such as:


[Secured(true)]
IEnumerable<ResearchData> GetResearchData()
{
   return DAL.GetData<ResearchData>();
}

Notice a couple of things about this method:

  • The DAL object is some object that allows fetching data from a DB or a service without being aware of security limitations. Don't allow direct access to it.
  • It has a "Secured" attribute with the value set to true, which means it will trigger some code using an AOP technique.
This is what the implementation of "Secured" should look like:


public class SecuredAttribute : AOPAttribute
{
    public override void OnSuccess(MethodExecutionArgs args)
    {
        Credentials creds = GetUserCredentials();
        args.ReturnValue = creds.Filter(args.ReturnValue);
    }
}


GetUserCredentials() returns a class that encapsulates the user grouping we talked about before and has the logic to filter the returned data.
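A minimal sketch of what such a class might look like - the property names and the filtering rule here are illustrative assumptions, not the actual implementation:

using System.Collections.Generic;
using System.Linq;

public class Credentials
{
    public IEnumerable<string> Groups { get; set; }   // e.g. "Biologists", "Oxford"
    public string Role { get; set; }                  // e.g. "Scientist"

    // Keeps only the items this user is allowed to see.
    public object Filter(object returnValue)
    {
        IEnumerable<ResearchData> items = returnValue as IEnumerable<ResearchData>;
        if (items == null)
            return returnValue;

        return items.Where(CanView).ToList();
    }

    private bool CanView(ResearchData item)
    {
        // Illustrative rule: public data is visible to everyone, restricted data
        // only to members of the owning group (IsRestricted and OwningGroup are assumed properties).
        return !item.IsRestricted || Groups.Contains(item.OwningGroup);
    }
}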

Summary

In this post, I gave an example of a security layer implementation. Of course, every system has its unique needs and should be considered independently. However, I feel there are general principles which hold true for all cases and they should be followed as guidelines:

  • Keep your security flexible – you never know when a security level for data or users will change.
  • Separate concerns – let your business logic do its thing, don't mix in security. Try to do as little as marking a method with an attribute, and perhaps not even that. Notice how in the example the logic of filtering by credentials is centralized in a single location, easy to understand and change. NEVER CHECK FOR USER CREDENTIALS IN DOMAIN CODE.
  • Identify your roles – This is important because once you've managed to map credential types to use cases, you've solved the main logical challenge. Don't get stuck too much on this though, since if you separate concerns properly, you will be able to change this later on.
Good luck!

Sunday, March 3, 2013

Large objects serialization with C#.


As a preface, I'm going to talk about the serialization of large objects (hundreds of MBs or even GBs in size). It's better to keep things small, but that's not always possible without large architecture changes, so we decided to take it to the limit (where we are actually limited only by the PC's physical memory).

Let’s say we have the classes:
[Serializable]
public class Result
{
    public string Uri { get; set; }
    public List<Data> AData{ get; set; }
}

[Serializable]
public class Data
{  
    public string Data1{ get; set; }  
    public string Data2{ get; set; }
}
We want to binary-serialize the Result class with, for example, 10 million Data objects inside, in order to persist it to storage. Later, it should be deserialized back.

First, we used the .NET binary serializer and got:
System.Runtime.Serialization.SerializationException: The internal array cannot expand to greater than Int32.MaxValue elements. You can find the explanation of that issue here.

The next step was to implement the ISerializable interface and handle the serialization of the AData collection explicitly. We used the Newtonsoft Json serializer:
[Serializable]
public class Result : ISerializable
{
    public string Uri { get; set; }
    public List<Data> AData{ get; set; }

    public Result()
    {
    }

    protected Result(SerializationInfo info, StreamingContext context)
    {
        Uri = info.GetString("Uri");
        AData= JsonConvert.DeserializeObject<List<Data>>(info.GetString("AData"));  
    }

    public void GetObjectData(SerializationInfo info, StreamingContext context)
    {
        info.AddValue("Uri", Uri, typeof(string));
        info.AddValue("AData", (JsonConvert.SerializeObject(AData, Formatting.None)));
    }    
}
It didn't work either: 
System.OutOfMemoryException: Exception of type 'System.OutOfMemoryException' was thrown.
   at System.Text.StringBuilder.ToString()
   at Newtonsoft.Json.JsonConvert.SerializeObject(Object value, Formatting formatting, JsonSerializerSettings settings) in JsonConvert.cs:line 755

The next one was protobuf-net. You have to add attributes to your classes:
[Serializable]
[ProtoContract]
public class Data
{
     [ProtoMember(1)]
     public string Data1{ get; set; }
     [ProtoMember(2)]
     public string Data2{ get; set; }
}
Also, in the Result class, we added support for GZipStream:
[Serializable]
public class Result : ISerializable
{

    public void GetObjectData(SerializationInfo info, StreamingContext context)
    {
        info.AddValue("Uri", Uri, typeof(string));
        PopulateFieldWithData(info, "AData", AData);   
    }

    protected Result(SerializationInfo info, StreamingContext context)
    {
        Uri = info.GetString("Uri");
        AData= GetObjectsByField<List<Data>>(info, "AData");
    }

    private static void PopulateFieldWithData<T>(SerializationInfo info, string fieldName, T obj)
    {
        using (MemoryStream compressedStream = new MemoryStream(),
               MemoryStream byteStream = new MemoryStream())
        {
              Serializer.Serialize<T>(byteStream, obj);
              byteStream.Position = 0;

              using (GZipStream zipStream = new GZipStream(compressedStream, CompressionMode.Compress))
              {
                  byteStream.CopyTo(zipStream);
              }

              info.AddValue(fieldName, compressedStream.ToArray());
       }
   }

   private static T GetObjectsByField<T>(SerializationInfo info, string dataField)
   {
         byte[] byteArray = (byte[])info.GetValue(dataField, typeof(byte[]));

         using (MemoryStream compressedStream = new MemoryStream(byteArray))
         using (MemoryStream dataStream = new MemoryStream())
         using (GZipStream uncompressedStream = new GZipStream(compressedStream, CompressionMode.Decompress))
         {
               uncompressedStream.CopyTo(dataStream);

               dataStream.Position = 0;
               return Serializer.Deserialize<T>(dataStream);
         }
   }
}
It didn't work either. Even though it didn't crash, it apparently entered an endless loop.

Here, we realized that we needed to split the AData collection during serialization/deserialization.
The main idea is to take, say, 1 million Data objects at a time, serialize them and add them to the SerializationInfo as a separate field. During deserialization these objects are read back separately and merged into one collection. I updated the Result class with a few more functions:

private const string ADataCountField= "ADataCountField";
private const int NumOfDataObjectsPerSerializedPage = 1000000;

public void GetObjectData(SerializationInfo info, StreamingContext context)
{
       info.AddValue("Uri", Uri, typeof(string));
       SerializeAData(info);   
}

private void SerializeAData(SerializationInfo info)
{
       int numOfADataFields = AData == null ? 0 :
              (int)Math.Ceiling(AData.Count / (double)NumOfDataObjectsPerSerializedPage);

       info.AddValue(ADataCountField, numOfADataFields);

       for (int i = 0; i < numOfADataFields; i++)
       {
             List<Data> page = AData.Skip(NumOfDataObjectsPerSerializedPage * i).Take(NumOfDataObjectsPerSerializedPage).ToList();
             PopulateFieldWithData(info, "AData" + i, page);
       }
}

protected Result(SerializationInfo info, StreamingContext context)
{
        Uri = info.GetString("Uri");
        DeserializeAData(info);
}

private void DeserializeAData(SerializationInfo info)
{
        AData = new List<Data>();
        int aDataFieldsCount = info.GetInt32(ADataCountField);

        for (int i = 0; i < aDataFieldsCount; i++)
        {
            List<Data> dataObjects = GetObjectsByField<List<Data>>(info, "AData" + i);
            AData.AddRange(dataObjects);
        }
}
Finally, it worked!
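For completeness, persisting and restoring the object is then just standard binary serialization. A minimal helper might look like this (the ResultStore name and the file handling are mine, not from the original code):

using System.IO;
using System.Runtime.Serialization.Formatters.Binary;

public static class ResultStore
{
    public static void Save(Result result, string path)
    {
        // GetObjectData() is called and AData is written page by page.
        using (FileStream output = File.Create(path))
        {
            new BinaryFormatter().Serialize(output, result);
        }
    }

    public static Result Load(string path)
    {
        // The protected constructor rebuilds AData from the serialized pages.
        using (FileStream input = File.OpenRead(path))
        {
            return (Result)new BinaryFormatter().Deserialize(input);
        }
    }
}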

Tuesday, February 19, 2013

Continuous Integration (CI) and Continuous Deployment (CD). Feature Bits


For more than a decade, CI (Continuous Integration) has helped us deliver better code in a quicker and safer manner. There are a lot of aspects and rules about how to do it right, and you can find plenty of information on the Internet.

Before sharing how things are done at PicScout, we thought it would be a great idea to give an overview of possible solutions for what is commonly called a feature bit.

In this post I will introduce a flavor called “Feature Toggles (Bits)”:

A few words about CI:
Why do we need it?

Generally, we want to:
  • Prevent integration problems, referred to as "integration hell", and stop spending nights trying to release a new version to production
  • Always work on the latest code
  • Detect problems early, not during integration/deployment
  • Be almost 100% sure about the current code
  • Sleep well at night…
How can we do it?
  • Commit early – small changes
  • Work on the main trunk
  • Update your code from the code repository at least once a day.
  • All commits should be RFD - "Ready for Deployment". This means all unit/integration/acceptance tests are green. Commits can be executed in the regular way or in a two-phase (pre-tested) approach.
  • Many more…

Okay, let's say we've worked hard to implement CI/CD recommendations.
That's obviously great, but imagine that now we introduce a new piece of code to our code base.
This code could be:
  • Safe (could be deployed without affecting other system components):
    • An internal algorithm/behavior was changed
    • A column or table was created
    • Some internal bug was fixed
  • Dependent:
    • Can be deployed only after making changes in other system components
Deploying such new code can be done using a "Feature Toggle (Bit)".
The main idea is: new code should be enabled/disabled using feature bits.

if (FeatureBitSettings.BitEnabled("ScanPdf"))
{
 // enable new behavior
}
else
{
 // old behavior
}
If you develop code that is part of different components, feature bits should be added everywhere.
(Note: "if" is one of the code smells, but for the sake of safe deployment it can be used.)
Additionally, new tests with/without the feature bits should be added/adjusted.

Feature bits live in configuration files, typically as simple key/value settings whose values are true or false.
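The FeatureBitSettings helper used in the snippet above isn't shown here; a straightforward way to implement it over appSettings could look like this (an assumed sketch, not our actual code):

using System.Configuration;

public static class FeatureBitSettings
{
    // A feature is considered enabled only when its key exists in the
    // configuration and its value parses as "true".
    public static bool BitEnabled(string featureName)
    {
        string value = ConfigurationManager.AppSettings[featureName];

        bool enabled;
        return bool.TryParse(value, out enabled) && enabled;
    }
}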


How should it work? The feature bit life cycle:
1. Develop new code with feature bits.
2. Test it with the feature bits On and Off.
3. Set the feature bits to Off.
4. Deploy it.
5. Everything is fine? Switch the feature bits On (by changing configuration), in the right order if there are dependencies between components.
6. In case of problems you can roll back quickly by setting the bits to Off.
7. After running for some time (days/weeks), the code is declared safe.
8. Remove the feature bits from the code. Leave only the "On" code.
9. Deploy it.
10. Remove the redundant feature bits from the configuration.

As we can see, feature bits allow us to handle different integration/deployment issues quickly and safely.

Wednesday, January 2, 2013

Web Developer: Don't be the designer's Yes-Man


In an ideal world, designers would be HTML experts who know how to make the most out of it. But in most cases, the designer and the developer are two different people who speak two different languages. In this post I will attempt to explain why the developer should not just take a design and turn it into HTML, but should take responsibility and point out why and where it is better to do things differently.

The developer's job:

While most web designers have the advantage of providing the consumer with a rich but flat product, the final product that comes out of the web developer's hands consists of a combination of HTML, JS, CSS, images and layers that have a tendency to move around. He also needs to support and take into consideration issues like speed, SEO (search engine optimization), page changes, translations, different layouts, different page versions, different browsers, different browser versions, different platforms, different resolutions, and other maladies.
This is why, when a web developer gets the design of a page, he immediately begins to analyze and break down the page and organize it according to the different elements. As part of the process, he first separates the pure HTML/CSS elements from the elements that need external components, and the native elements from the tricky ones. The second step is to assess the complexity and consequences of each of these elements.
The developer's job is not to build the page exactly according to the designer's requirements, but to take responsibility and point out the elements that should be changed so that, on one hand, the visual aspect is not significantly affected and, on the other, the result is more effective and may even offer more exciting advanced options.

Buttons as a Case Study

Image as a button
One of the major temptations facing designers is to provide an image for producing complex buttons. This method is graphically rich, rapid and mainly ensures that all users see the same thing, but still has major disadvantages:
  • The slightest change requires external intervention.
  • It's difficult to support multiple languages.
  • It is less SEO-friendly.
  • It requires loading an external element.
Image of a pure CSS button
Most buttons can be produced using pure HTML/CSS, as can be seen here. But unfortunately there are some critical drawbacks to this approach as well:
  • The lack of support for all browsers and platforms.
  • The complexity and difficulty in getting the exact look.
The ideal solution to the button example is not an off-the-shelf one, but a process of breaking the button down into its design elements and considering various alternatives for each of them, keeping in mind the consequences of using each one, even if it sometimes affects the visual aspect. The next section will discuss these considerations.

The design elements

In this section we will try to break down the design foundations that make up the various graphical page elements, such as the buttons we discussed. For each of them we will present a possible alternative and try to touch on the advantages and disadvantages.

Unavailable fonts and effects

Using images to display unavailable fonts reduces performance, is invisible to SEO and creates a mess when it comes to translations or changes. The alternatives are to download the font to the user's computer or to use one of the many live online fonts, like Google Fonts. Therefore it is sometimes better to ask the designer to change the font to one of the online fonts.
Online tool: 3D CSS Text
Online tool: Google fonts.

Texture & shades

There is no doubt that using a picture simplifies the use of texture. In many cases, however, the image can be replaced with CSS3 code that supports a wide range of texture solutions, from pure CSS textures to multiple background images, which is more flexible and efficient. It is important to raise the implications for browsers that do not support CSS3. Before giving up on a proper pure CSS3 solution or starting to write patches for old browsers, it is best to examine the possibility of bringing old browsers to a satisfactory visual look, even if not an accurate one.
Image of live text with texture
Same live text on old browser
Examples: ButtonBar: Css text shadows. Online tool: Gradients.

Borders & rounded corners

Implementing rounded corners is one of the famous challenges web developers have faced, so it is no surprise that this problem has a rich supply of different solutions. CSS3 finally provides a simple and smooth solution, but as with textures, we have to consider the impact on older browsers. Describing the alternatives deserves a post of its own.
It should be noted that rounded corners are often a nice addition but not essential. A broad perspective will prefer to abandon the problematic browsers rather than add unnecessary complexity to the code.
Image of object with CSS rounded corners
Image of the same object on old browser
Online tool: Box-properties.

Shadows

Text shadows were once another key challenge, often solved the easy way with an image, but this element too has been put to rest by CSS3. Of course, once again we need to consider if and how to provide backwards compatibility.
Example: CSS drop-shadows.
Online tool: Text shadows.
Online tool: Box shadows.

Transforms & Animation

In contrast to all the previous sections, where the key was to offer alternatives to the designer's work, here the developer can excite the designer by offering cool capabilities that come natively with CSS3: changing size, spacing, color, location and much more. Knowledge of the various animation and transformation options will allow the developer to approach the designer with ideas that are native to the web and still make a huge impression on the user.
Example: Css playground. Online tool: Western civilisation pty. ltd.

A cautious summary

Knowledge of the various alternatives is not sufficient in itself. Before the developer can offer alternatives, he needs to be familiar with the limitations of each option and present them loudly and clearly. Only then can he offer alternatives without upsetting the system, while being upfront about the limitations in different browsers. But once his suggestions are heard, his status will change from transparent technician to key figure, and his work will become much more effective, enriching and pleasurable.
Online tool: CSS3 maker.
Online tool: CSS3 Playground.