.NET

Back to the future

It’s funny how sometimes things come full circle. When I started at my current job, 6 years ago, fresh out of college, one of the very first tasks I was assigned was integrating a .NET component into a multi-tier VB6 system. The VB6 components communicated over sockets by means of binary-serialized structures, so I ended up implementing the same binary serialization/deserialization protocol in C#. We used it extensively for a while, but it never made it to production because our plans changed shortly thereafter. It was a pretty fun reverse-engineering project nevertheless.

Now I’m about to move on to a different job and while cleaning my workstation I found that little piece of code. It’s not like VB6 is the hot technology in 2012, but throwing it away was a bit of a shame. The company kindly agreed to open source it, so after a quick facelift here it is (and on NuGet), MIT-licensed and ready to be used in any kind of project. Ready to take contributions too!

If it serves as a Get Out of Jail Free card for at least one person stuck in a VB6 prison, I’d call it mission accomplished!

Simple color-based search by image in F#

Last week I was playing with a photomosaic composer toy project and needed a simple search by image engine. By search by image I mean searching a database for an image similar to a given one. In this tutorial I will show you how you can implement this functionality –with some obvious limitations– in an extremely simple way just by looking at an image’s color distribution.

If you are looking for feature similarity (shapes, patterns, etc.) you most likely need edge detection algorithms (linear filters or other similar methods), which give excellent results but are usually quite complicated. I suppose that’s the way most image search engines work. Alternatively this paper describes the sophisticated color-based approach used by Google’s skin-detection engine.

In many cases however, finding images with a perceptually similar color distribution can be enough.
If you are in this situation, you may get away with a very simple technique that still gives pretty good results with a minimal implementation effort. The technique is long known and widely used, but if you have no experience in image processing this step-by-step guide may be a fun and painless warm-up to the topic.

I’ll show the concept with the help of F# code, but the approach is so straightforward that you should understand it even without prior knowledge of the language.

TL;DR:

This is the high-level outline of the process.

Just once, to build a database “index”:

  • Create a normalized 8-bit color distribution histogram of each image in the database.

For every query:

  • Create a normalized 8-bit color distribution histogram of the query image.
  • Search the database for the histogram closest to the query using some probability distribution distance function.

If you are still interested in the details of each step, please read on.

Extracting an image’s color signature

Given that we want to compare images, we’ll have to transform them into something that can be easily compared. We could just compute the average color of all pixels in an image, but this is not very useful in practice. Instead, we will use a color histogram, i.e. we will count the number of pixels of each possible color.

A color histogram is created in four steps:

  1. Load/decode the image into an array of pixels.
  2. Downsample the pixels to 8-bit “truecolor” in order to reduce the color space to 256 distinct colors.
  3. Count the number of pixels for each given color.
  4. Normalize the histogram (to allow the comparison of images of different sizes).

1. Loading the image

This is almost trivial in most languages/frameworks. Here’s the F# code using the System.Windows.Media APIs (a minimal sketch assuming a WPF-style environment; on Silverlight you would read WriteableBitmap.Pixels instead):
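open System
open System.Windows.Media
open System.Windows.Media.Imaging

// Decode an image file and return its pixels as 32-bit BGRA integers
let loadPixels (path: string) =
    let image = BitmapImage(Uri(path, UriKind.Absolute))
    // force a known 32-bit pixel format so the bit twiddling below is predictable
    let converted = FormatConvertedBitmap(image, PixelFormats.Bgra32, null, 0.0)
    let pixels = Array.zeroCreate<int> (converted.PixelWidth * converted.PixelHeight)
    converted.CopyPixels(pixels, converted.PixelWidth * 4, 0)
    pixels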

 

2. Downsampling 32-bit color to 8-bit
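A sketch of the conversion, assuming the BGRA layout above (a pixel read as an integer is 0xAARRGGBB):

// keep the top 3 bits of red, 3 of green and 2 of blue: rrrgggbb
let toByte (pixel: int) =
    let r = (pixel >>> 16) &&& 0xE0          // red bits 7..5, already in place
    let g = ((pixel >>> 8) &&& 0xE0) >>> 3   // green bits 7..5, moved down to 4..2
    let b = (pixel &&& 0xC0) >>> 6           // blue bits 7..6, moved down to 1..0
    byte (r ||| g ||| b)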

 

[Image: a 32-bit pixel reduced to 8 bits (rrrgggbb)]

With the help of some basic bitwise operations we reduce pixels from 32 bits down to 8. We discard the alpha channel and keep 2 bits for blue (out of the original 8), 3 for red and 3 for green (we discard the least significant bits of each color component). The result is that each pixel (being a byte) can represent one of exactly 256 colors. We obviously lose some color detail because we cannot represent all the original gradients, but having a smaller color space keeps the histogram size manageable.

Note: in general 8-bit images use a palette, i.e. every pixel value is a pointer to a color in a 256-color palette. That way the palette can be optimized to only include the most frequent colors in the image. In our case the benefit would not be worth the trouble, as we would need a common palette across all the images anyways (plus the above method is faster and simpler).

3. Creating the histogram
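A minimal sketch, reusing toByte from the previous step:

// count how many pixels fall into each of the 256 possible colors
let histogram (pixels: int[]) =
    let bins = Array.zeroCreate<int> 256
    for pixel in pixels do
        let color = int (toByte pixel)
        bins.[color] <- bins.[color] + 1
    bins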

 

[Image: a color histogram]

Nothing special here: we just count the number of pixels that are of a given color. The histogram is nothing more than a 256-element array of integers (plus the image file name). You can read it like “this image has 23 ‘light green’ pixels, 10 ‘dark red’ pixels, etc.”
We then normalize the histogram by dividing each value by the total number of pixels, so that each color amount is a float value in the 0 .. 1 range, where e.g. 0.3 means that 30% of a picture’s pixels are of that given color.
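In code, the normalization step might look like this:

// divide each bin by the total pixel count; the values now sum to 1
let normalize (bins: int[]) =
    let total = float (Array.sum bins)
    bins |> Array.map (fun count -> float count / total)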

 

Comparing color histograms

Now we have a collection of histograms (the database) and a query histogram. In order to find the best matching image, we need a way to measure how similar two histograms are. In other words we need a distance function that quantifies the similarity between two histograms (and thus between two images).

You have probably noticed that a normalized histogram is in fact a discrete probability distribution: every value is between 0 and 1, and the sum of all values is 1. This means we can use statistical “goodness of fit” tests to measure the distance between two histograms; the chi-squared test is one example. We are going to use a slight variation of it, called the quadratic-form distance. It is pretty effective in our case because it reduces the importance of differences between large peaks.
The test is defined as follows (p and q are the two histograms we are comparing):

[latex]distQF(p, q) = \frac{1}{2} \sum_{i=0}^n \frac{(p_i - q_i)^2}{p_i + q_i}[/latex]

 

The implementation is straightforward; a minimal version over our normalized histograms might look like this:
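// quadratic-form distance between two normalized histograms
let distQF (p: float[]) (q: float[]) =
    let sum =
        Array.fold2 (fun acc pi qi ->
            // skip bins that are empty in both histograms (avoids 0/0)
            if pi + qi = 0.0 then acc
            else acc + (pi - qi) ** 2.0 / (pi + qi)) 0.0 p q
    sum / 2.0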

The more two histograms differ, the larger the return value of this test; it returns 0 for two identical histograms.

A more sophisticated option is the Jensen-Shannon divergence, a smoothed version of the Kullback-Leibler divergence. While more complicated, it has the interesting property that its square root is a metric, i.e. it defines a metric space (in layman’s terms, a space where the distance between two points can be measured, and where the distance A → B → C cannot be shorter than the direct distance A → B). This property is going to be useful in the next post, when we optimize our search algorithm.

The Kullback-Leibler and Jensen-Shannon divergences are defined as:

[latex]distKL(p, q) = \sum_{i=0}^n p_i \ln \frac{p_i}{q_i}[/latex]

 

[latex]distJS(p, q) = \frac{1}{2} distKL(p, \frac{1}{2}(p + q)) + \frac{1}{2} distKL(q, \frac{1}{2}(p + q))[/latex]

 

The corresponding F# code might look like the following (a sketch; note that distKL assumes q has no empty bins where p has pixels):
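// Kullback-Leibler divergence
let distKL (p: float[]) (q: float[]) =
    Array.fold2 (fun acc pi qi ->
        if pi = 0.0 then acc
        else acc + pi * log (pi / qi)) 0.0 p q

// Jensen-Shannon divergence: smoothed and symmetric
let distJS (p: float[]) (q: float[]) =
    let m = Array.map2 (fun pi qi -> (pi + qi) / 2.0) p q
    (distKL p m + distKL q m) / 2.0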

This paper includes an interesting comparison of various distance functions.

At this point our problem is almost solved. All we have to do is iterate through all the samples, measuring the distance between the query and each sample, and select the histogram with the smallest distance. A minimal sketch, assuming each database entry is a (fileName, histogram) pair:
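// order the database by distance from the query and take the best match
let search query database =
    database
    |> List.sortBy (fun (_, histogram) -> distJS query histogram)
    |> List.head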

Notice that I use head because I’m only interested in the best matching item. I could truncate the list at any given length to obtain n matching items in order of relevance.

Optimizing the search

Maybe you’ve noticed one detail: for every query we need to walk the full database, computing the distance function between our query histogram and each image. Not very smart. If the database contains billions of images, that’s not going to be a very fast search. Also, if we perform a large number of queries in a short time, we are going to be in trouble.

If you expected a quick and easy answer to this issue I’m going to disappoint you. However, the good news is that this problem is very interesting, much more so than it may look at first sight. This will be the topic of the next post, where I’ll write about VP-Trees, BK-Trees, and Locality-Sensitive Hashing.

Grab the source

The complete F# source of this tutorial is available on GitHub (a whopping 132 lines of code).

Thanks to my brother Lorenzo for the review and feedback.

Real world F#: my experience (part two)

The second project I recently completed in F# is a completely different animal. While the first one was a pet project I put together in my spare time (with no deadline at all), this one was full-time work for my company (for this reason I cannot disclose some details or share source code). Additionally, the time available was limited. Very limited. Like 2 weeks limited. That’s 10 working days plus a 2-day emergency buffer.

A load simulator tool

My company produces a high performance client-server platform that ships with our own proprietary database engine. After some important changes to the server and database codebase, we needed to test the system’s behavior under heavy load, i.e. when a large number of users are connected and firing queries.

As you can guess, hiring and coordinating hundreds of people to load the system the way you need is very impractical, if possible at all. Maybe it’s doable if you have your own Army of Clones, but we don’t have one, so we had to somehow automate the process. Keep in mind that the server interface is proprietary, i.e. it’s not http, SQL, or anything similar: we have to go through our library and API to access the server. For that reason we could not use any existing tool.

The application I was going to build was meant for internal use, but it was clear that something usable by non-über-geeks would be nice to have at some point (for example to help size hardware for large customers). Anyways, with the deadline so close, it was imperative (no pun intended) to focus on the most important stuff.

Writing your own DSL

I decided to define an external DSL to describe the simulation scenarios. The language would let you express the creation of users, connections, queries, pauses, etc. in a simple way.
The second decision was to use F#. Fortunately nobody objected (again no pun intended). I was to work on the project alone, so I could basically use whatever I liked.

Once I defined the grammar I moved to step 2, i.e. parsing. Obviously I was not going to reinvent the wheel by rolling my own lexer and parser, so the choice was between parser generators (FsLex/FsYacc, Irony for C# & co.) and combinator libraries à la FParsec. After taking some advice from the great F# community on Twitter (thanks Robert!), I opted for FParsec. I admit it looked a bit intimidating, but the idea of not introducing a tooling step in the build process was appealing; plus, I had never used a combinator library before and was curious.

Here the amazement starts. As mentioned, at first FParsec looks slightly cryptic, but once you get the main concepts and get over a few gotchas it just “clicks”. You quickly reach a point where reading the parser code is almost like reading the grammar definition. Making changes is a matter of a few minutes, with a very low risk of introducing new errors. FParsec gives you enormous flexibility, and even if the learning curve is steeper than that of parser generators, I suggest you take a look if you’ve never used a combinator library before. The official documentation is great too.

Anyways, in a few days I had a parser that lifted the input program into an abstract syntax tree. Sweet!

Note: in case you are wondering, the language I defined is not super complicated, but not trivial either. It supports regular loops as well as parallel ones (iterations are executed in parallel), nested loops, and a plethora of options on the various commands. I opted for a rich syntax that results in programs that read almost like natural language. I cannot disclose all the details, but you can get an idea by looking at the screenshots.

[Screenshot: a sample program written in the DSL]

Walking the tree

Second amazement: thanks to discriminated unions and pattern matching, walking the syntax tree is an incredibly fluid and easy process. The code is so compact and elegant that I keep opening that file just to look at it. No boilerplate, no class proliferation, no wasted characters. Just the code.
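To give an idea of the style, here is an illustrative fragment (not the actual, proprietary AST):

// a few toy commands and a recursive interpreter built on pattern matching
type Command =
    | Connect of string            // user name
    | Query of string              // query text
    | Pause of int                 // seconds
    | Loop of int * Command list   // iteration count and loop body

let rec run cmd =
    match cmd with
    | Connect user   -> printfn "connecting %s" user
    | Query text     -> printfn "executing %s" text
    | Pause secs     -> System.Threading.Thread.Sleep(secs * 1000)
    | Loop (n, body) -> for _ in 1 .. n do List.iter run body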

Unfortunately I could not leverage the powerful F# concurrency features to run the parallel loops, because the client library that interfaces with our server is not thread-safe; all I could do was start new threads, each with its own separate AppDomain. My skills with asynchronous workflows & co. are still limited, so I don’t know if there’s a better way. If there is, I’d love to hear your feedback in the comments.

GUI and extras

With parsing and interpreting done, the bulk of the job was over. I just needed to add logging and a less geeky interface than the command line. With room to spare, I created a WPF GUI that controls the execution and reads logs to display status and stats. This was nothing particularly exotic, but I was able to fit in some nice touches like a graphical timeline to represent operations executed on the different threads. I wrote the GUI in XAML/C# using MVVM-Light. The parser/interpreter runs in a separate process, so that in case of a crash (not a remote possibility when you are pushing the hardware limits) the GUI keeps running and tells you what happened.

[Screenshot: the WPF GUI]

So 10 days had passed and this is what had been done:

  • the DSL grammar definition
  • a parser and an interpreter for it. It took slightly longer than necessary because I had to learn FParsec along the way (this talk by Robert Pickering was very helpful).
  • a GUI with some bells and whistles

plus some extras (which, as you know better than me, are very time consuming):

  • an (admittedly basic) distributable package
  • the syntax highlighting definition for Notepad++ :-)
  • several code samples that show the DSL capabilities
  • the user manual and language specification (I got some help with that)
  • a tutorial

Developing the GUI and producing the extras went at normal speed, but I’m positive that writing the parser and interpreter in C# would have taken me close to the full ten days by itself. Maybe my standards are low, I don’t know, but I’m honestly blown away by what I could achieve in such a short time. Also notice that I’m much more experienced in C# than in F#.

Truth be told, I had another advantage: this project was done in the year-end period when several people are on holiday and the office is very quiet. I also put in some late evenings, but I have a family with two kids; I just cannot code 24/7 even if I wanted to.

The stars of the show

The goal of this post is not to tell the world how fast I work. It’s impossible for anyone to judge whether a project should take 2 or 100 days without knowing all the details. No, I’m writing this because I know all the details, and I know that F# gave me a huge advantage. Much more so than I imagined when I started.

These are things that I think make F# ideal for a project like this:

Higher order functions

These are what allow libraries like FParsec to exist, among other things.

Discriminated unions, tuples and pattern matching

This trio alone is worth the price of entry. They make for very terse code and bring other great advantages to the table as well.

It works the first time

I still don’t fully understand why this is. Maybe it’s the lack of nulls. Maybe it’s because (as I wrote in part one) functional programming forces you to think more and write/debug less. The net result is that when I write F# I mostly get it right the first time. And thanks to higher-order functions there are fewer corner cases that suddenly appear and crash everything.

Most of these features are available in several functional languages, but the seamless .NET integration was fundamental in my case (the libraries I had to use are .NET), and some F#-only constructs make coding fun and speedy at the same time.

Conclusion

If you’re not living under a rock (like I’m literally doing right now –but that’s another story) you’ve surely heard of F#. Maybe you’ve even seen some examples, but as I’ve heard many times from C# developers, they looked incomprehensible. Don’t let that stop you: it’s just not true. If you’re new to functional programming it looks that way because F# is (mostly) a functional language, i.e. you’re not only learning a new language, you’re learning a new paradigm. A different way of thinking about your programs. It does take some effort, for sure. Is it worth it? It’s up to you to decide. To me, getting back to functional programming with F# after several years of OOP/C# has been a real breath of fresh air.

If you decide to learn more, here are some great places to start:

Advice for getting started with F# by Richard Minerich
An overview of functional programming by Dorian Corompt (recursion, lists, more to come…)

I suggest starting with the basics: you can already accomplish a lot with just lists, sequences, tuples, unions and pattern matching. When you feel ready you can move on to the more advanced topics.
Have fun!

Again, many thanks to Steffen and Samuel for the feedback!

Real world F#: my experience (part one)

I’ve been playing with F# on and off for about a year, but only recently have I been able to complete a few “real world” projects. I was so impressed that I decided to share my experience. In this two-part series I will talk about two very different projects, to give you an idea of how wide the spectrum of applications is where F# feels right at home.

The first project

The first project is named VeloSizer. You can check it out here (I may release it as open source but I’m still undecided on what to do with it). I assume you are not a cycling geek so I’ll spare you the details, but in short this application computes the bike setup given your position and the frame geometry. If you’re interested there’s a detailed description on the application page. Surprisingly enough, I’ve never found anything that does this very thing (except for full blown and expensive CADs), so I decided to write it myself.

[Screenshot: VeloSizer]

The application is built in Silverlight: the XAML frontend is basically a glorified input form. It’s not particularly complex; some details are more complicated than they may look at first sight, but there’s nothing extraordinary. I took a rather standard approach and employed the MVVM pattern (using MVVM-Light) for a clear separation of concerns. The View Model is C#, while the Model –where the interesting stuff happens– is written in F#.

Solving this particular problem does not require very complicated mathematics, but it involves a large number of geometrical operations (trigonometry and the like). Without abstracting and hiding away all the math, the solution quickly becomes a nightmare that spirals out of control (don’t ask how I know). For this reason I’ve implemented a simple 2D CAD engine that sits at the application’s core.

How it went

Here are some things I noticed while using F# in a “real” project for the first time.

Units of measure

F#’s support for units of measure, built straight into the type system, has been very helpful in avoiding stupid errors like mixing degrees with radians with millimeters, etc. It is really a plus when dealing with physical dimensions.
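A tiny illustration (the names are made up, not VeloSizer’s actual code):

[<Measure>] type mm
[<Measure>] type deg

let seatTubeLength = 560.0<mm>
let seatTubeAngle = 73.5<deg>

// mixing the two is now a compile-time error:
// let nonsense = seatTubeLength + seatTubeAngle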

Conciseness

The language syntax is very light and unobtrusive, which makes it ideal for writing mathematically-oriented code. The main benefit to me has been that the math stands out clearly, without parentheses, type annotations or artifacts that make things harder to read. Writing the code is also a joy: you can really focus on the reasoning and almost forget that you are actually programming. In fact, translating the equations written on paper into code is almost copy & paste.

Testability

I heartily agree with Richard Minerich when he says that testing does not replace a strong, theoretically-validated model. It’s the very same reason that pushed me to build most of this application’s engine on paper before writing a line of code. However I still make (lots of) mistakes when implementing a model –regardless of how correct it is– so I feel safer with the additional support of a solid testing framework.
The nature of functional programming makes it an ideal target for unit tests. Short, side-effect-free functions are a joy to test. Result: it has been very easy to create a nice safety net in the form of an NUnit project.
I must admit I would probably have written this library in more or less the same style in C#, but in functional programming this style is the default.

Interoperability with C#/GUI

This is somewhat of a sore point. I don’t know if it is due to my lack of experience (likely) or to the nature of a GUI-driven application, but I’ve ended up with many mutable (and in general not very idiomatic) classes, for two main reasons:

  • I had to persist the business objects (using the Sterling NoSQL database), and all the serializers for Silverlight need public setters, as they cannot use private reflection.
  • With MVVM, each View is bound to its respective View Model, which is nothing more than a wrapper around its respective business object defined in the model (F#).
    Now when, for instance, the user changes a value in a TextBox, the new value is propagated to the View Model, which in turn propagates it to the Model. You can tell it’s not very practical to create a new instance of the model every time a value changes, so immutable objects do not adapt well to this situation.

This means that the business objects are very C#-like. They still benefit from the lighter syntax, type inference, etc…, but they don’t fully leverage the power of the language. Fortunately the “application brain” does not suffer much from this.

Is this due to MVVM, XAML and in general GUI patterns being oriented towards the object-oriented paradigm? I don’t know. I’ve heard of a GUI framework specifically written for F#, but I don’t know much more.
I would be very interested to hear your opinion on this subject.

Note: as Stephen points out, keeping the model immutable may not be so much of a problem. I’ll give it a try.

Guidance and community support

The F# community is still small, but it more than makes up for it in quality. The active users on Stackoverflow and other sites are extremely competent. It’s rare to get bogus answers or to get stuck on a problem for long.
What I’ve found difficult, though, is getting guidance. I often ask myself whether my code is well written or a pile of junk. I suppose the only solution is to refine my own sense by reading other people’s code.

Intellisense

Visual Studio’s Intellisense for C# is spectacular and has made us very lazy. F# support is much better than it was at the beginning, but it’s still not up to the level of C#. In the end, though, it’s only lacking a few details, like parameter names or support for the pipeline operator –the next release already includes some improvements in this area.

Debugging

Setting breakpoints and watching state change is not simple in functional code because (usually) there is no state. If you debug a lot, this may be a bit unsettling at first, but then you realize it is not much of a drawback. In fact, it is a benefit. Breakpoints are evil: building a half-working solution, running it through breakpoints and tuning it until the result matches what you expect is very close to the definition of cargo-cult programming/programming by coincidence.
It is my opinion that functional programming makes you think more and write/edit/debug less. I believe this has made me a better developer, because I now tend to stop, think about the solution “offline” and only write it down when I get it.

Productivity

I can’t give any judgment on productivity, because this application was a pet project I built alone, without any deadline, working literally 15 minutes at a time. We recently welcomed a new family member, which has made things even harder. Anyways, it took me about 7 months to complete the project, and it’s very hard for me to tell whether F# gave any productivity boost at all. More on this in part two.

Conclusion

It has been a real pleasure to write the F# part of this application. When you look at the application source, the first thing that jumps out is that the View Model (C#/OO) is way larger (in lines of code) than the Model (F#/mostly functional), yet it only does “stupid” things: it’s almost exclusively made of property definitions, RaisePropertyChanged events, brackets, etc. It is like a very large box full of bubble wrap, with only a small, precious gift in the middle.

That said, I’ve been left with the impression that I haven’t used all of the language’s power. Writing the View Model in F# would only have slightly alleviated its verbosity; what I probably need is a different pattern for GUI interaction.

In part two I’ll talk about a very different (and more interesting) project, where F# really shined. In the meantime I would be very interested to hear your opinions.

Thanks a lot to Steffen Forkmann and Samuel Bosch for proof reading and general feedback!

Silverlight unit testing with NUnit: yes you can (without hacks)!

It may be obvious to most of you, but it took a while for my caveman brain to realize this, so I figured I could post it for other cavemen. You (we) have an excuse though: the short release cycle of Silverlight means that most stackoverflow questions and blog posts on the subject are out of date and refer to older versions of Silverlight (<=3), when what I describe was not possible.

Up to Silverlight 3 you had to use a Silverlight-specific unit testing tool, like the Silverlight Toolkit framework. These tools are quite awesome and have been fundamental, but it’s nice to have the full power of NUnit (or xUnit & co.) at your disposal. In addition to the community and tooling support, it’s practical to use the same tool you already use for other .NET projects.

The game changer is called binary assembly compatibility, brought by Silverlight 4. In a few words this means that you can add a reference to a Silverlight assembly from a “full” .NET project (provided that you don’t use any Silverlight-only class).

If your application is correctly layered (for ex. with MVVM), in most cases it’s trivial to keep views and viewModels/models in separate assemblies. ViewModels and models usually don’t reference any Silverlight-only class (otherwise you may have a code smell!) and are 100% compatible with .NET, so testing them with NUnit is as easy as:

  • create a .NET class library in your solution
  • add a reference to NUnit (NuGet it!)
  • add a reference to your model and/or viewModel assemblies
  • write your unit tests
  • run them with NUnit

No tweaking or hacking required, and it works fine with F# assemblies as well.
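For example, a test fixture can be as small as this (MainViewModel is a hypothetical class from your view model assembly):

open NUnit.Framework

[<TestFixture>]
type ViewModelTests() =

    [<Test>]
    member this.``FirstName is stored correctly``() =
        let vm = MainViewModel()   // hypothetical view model under test
        vm.FirstName <- "John"
        Assert.AreEqual("John", vm.FirstName)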

It is typical for views to use Silverlight-only classes, but this is generally not a problem: it doesn’t make much sense to unit-test them anyways, as they are mostly XAML with very little code-behind.

Free tip: if you want to test internal classes and methods, you can add the InternalsVisibleTo attribute to the target assembly.
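In F# the attribute can sit at the top of any file in the target assembly (the test assembly name below is hypothetical):

module AssemblyInfo

open System.Runtime.CompilerServices

// "MyApp.Tests" is a made-up name; use your test assembly's name
[<assembly: InternalsVisibleTo("MyApp.Tests")>]
do ()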

Happy testing!


Dynamic member binding in Silverlight 4

UPDATE: Xavier Decoster already wrote a nice article on this topic some time ago. Please check it out! (Note to self: improve google skills)

First, let me say that I’ll take the long route, so if you are already familiar with dynamic typing in C# you can probably jump straight to the last section. Otherwise read on, you may learn something cool that is not used every day but can save you in some situations.

In the [not so] old days of Silverlight 3 if you wanted to dynamically create a class you had to emit intermediate language instructions, etc. Definitely not so easy. Silverlight 4 (with C# 4.0) introduced support for dynamics and simplified this a lot.

Straight from the DynamicObject documentation: this simple implementation of a dynamic dictionary uses an internal dictionary to store string/object pairs, where the key is the member name and the value is its associated value.

public class DynamicDictionary : DynamicObject
{
    Dictionary<string, object> dictionary =
        new Dictionary<string, object>();

    public override bool TryGetMember(
        GetMemberBinder binder, out object result)
    {
        return dictionary.TryGetValue(binder.Name, out result);
    }

    public override bool TrySetMember(
        SetMemberBinder binder, object value)
    {
        dictionary[binder.Name] = value;
        return true;
    }
}

In a DynamicObject you have two methods (TryGetMember and TrySetMember) that are invoked every time someone tries to access the object’s members. In this particular implementation, when this code is executed

dynamic myDynamicObject = new DynamicDictionary();
myDynamicObject.FirstName = "John";
myDynamicObject.Age = 18;

two pairs “FirstName”/”John” and “Age”/18 are stored in the internal dictionary. On the other hand when you do

string test = myDynamicObject.FirstName;

instead of calling the getter of FirstName (like any statically typed object would do), TryGetMember is invoked and the value corresponding to key “FirstName” is looked up from the internal dictionary.

The dynamic keyword tells the compiler that the member will be looked up at runtime, so you can set/get any member you want and the compiler won’t complain: it knows the members will be resolved while the program is running.

The binding problem

Now there is only a small problem with this approach (and it’s the whole point of this post): if you create a binding that targets a dynamic member you’ll get an error. It looks like the Silverlight binding engine “cannot discover” dynamic properties.

For example this does not work:

public dynamic MyDynamicDictionary { get; set; }

// ...

MyDynamicDictionary = new DynamicDictionary();
MyDynamicDictionary.Label = "Hello, I'm dynamic!";
 <Button Content="{Binding MyDynamicDictionary.Label}"/> 

[Screenshot: the binding error]

If you look at the output:

System.Windows.Data Error: BindingExpression path error: 'Label' property not found on 'SilverlightApplication22.DynamicDictionary'

Indexed binding to the rescue

Telerik’s Vladimir Enchev explains on his blog how this approach can be used to implement a DataTable-like structure that can back, for example, a datagrid. The clever bit is that he added the [] indexer to the DynamicDictionary (RaisePropertyChanged below comes from an INotifyPropertyChanged implementation that is not shown in the snippet):

public object this[string columnName]
{
    get
    {
        if (dictionary.ContainsKey(columnName))
            return dictionary[columnName];
        return null;
    }
    set
    {
        if (!dictionary.ContainsKey(columnName))
        {
            dictionary.Add(columnName, value);
            RaisePropertyChanged(columnName);
        }
        else
        {
            dictionary[columnName] = value;
            RaisePropertyChanged(columnName);
        }
    }
}

Now we have two alternatives to access the dynamically-created members:

// like before:
dynamic myDynamicObject2 = new DynamicDictionary();
myDynamicObject2.FirstName = "John";
myDynamicObject2.LastName = "Smith";

// using []:
var myDynamicObject3 = new DynamicDictionary();
myDynamicObject3["FirstName"] = "John";
myDynamicObject3["LastName"] = "Smith";

The two approaches have exactly the same effect (notice that in the second version the variable is declared with var instead of dynamic).

Using square brackets to access members has the advantage that you can actually create members from strings: let’s say you have a string/object dictionary; it’s easy to loop over the dictionary entries and “create” a member for every key, setting the entry’s value as the member value. After this you’ll have an object that mirrors the dictionary:

var source = new Dictionary<string, object>();
source.Add("FirstName", "John");
source.Add("LastName", "Smith");
source.Add("Age", 18);

var target = new DynamicDictionary();
foreach (var entry in source)
    target[entry.Key] = entry.Value;

now target is the same as you would have after doing

new something() { FirstName = "John", LastName = "Smith", Age = 18 };

except that it “adapts” to any key/value you have in the dictionary. Cool eh?!

It turns out that the indexer has another side benefit, one that solves the binding problem. Silverlight 4 also introduced indexed bindings: you can create bindings that target indexed structures (like a list or a dictionary) simply by using square brackets. The nice thing is that our dynamic class happens to have an indexer.

Let’s revisit our code: this time we declare the property as DynamicDictionary instead of dynamic (we now must set the properties using the indexer, because the compiler only allows “non-existing” properties on objects of dynamic type):

public DynamicDictionary MyDynamicDictionary { get; set; }

//...

MyDynamicDictionary = new DynamicDictionary();
MyDynamicDictionary["Label"] = "Hello, I'm dynamic!";

and change the XAML to look like this (notice the square brackets)

<Button Content="{Binding MyDynamicDictionary[Label]}"/>

the dynamic binding does work fine:

[Screenshot: the working binding]

Happy dynamic binding!

Download the code

F# runtime for Silverlight version v4.0 is not installed.

After playing with some Silverlight beta bits, going back to RTM, etc., I could not compile F# projects for Silverlight 4 anymore. Even after reinstalling everything in [what I think is] the right sequence, I was still getting this error:

F# runtime for Silverlight version v4.0 is not installed.
Please go to http://go.microsoft.com/fwlink/?LinkId=177463 to download and install matching F# runtime
C:\Program Files (x86)\Microsoft F#\v4.0\Microsoft.FSharp.Targets

I followed the link and re-installed the Microsoft Silverlight 4 Tools for Visual Studio 2010 (which I had already done), but the error was still there. Looking into c:\program files (x86)\Microsoft F#\Silverlight\Libraries\Client I noticed that the v3.0 folder was there, but not v4.0.

The solution

If you have the same issue, go to http://go.microsoft.com/fwlink/?LinkId=177463 and download the Silverlight 4 Tools installer (Silverlight4_Tools.exe).
Don’t run it; instead, extract its contents with your zip tool of choice (I used WinRar) and run FSharpRuntimeSL4.msi.
Now look again in c:\program files (x86)\Microsoft F#\Silverlight\Libraries\Client: a v4.0 folder should have appeared. If it’s there, your Silverlight 4 F# projects should now compile.

Windows Phone 7: how to reset the idle detection countdown

Windows Phone 7, like every other phone OS, turns off the screen after a period of inactivity. This is not a problem most of the time, because any user activity (namely finger interactions on the screen) resets the countdown, so if you are using an application the screen saver will not get in the way. However, there are some particular cases where it is useful to disable the idle detection, for example in games or in apps that require long reading (or watching). In those cases you can completely disable idle detection:

PhoneApplicationService.Current.UserIdleDetectionMode =
    IdleDetectionMode.Disabled;

Keep in mind that this disables the “screen saver” entirely, so be careful: you could drain the poor user’s battery if you do it without a valid reason.

There is another, more interesting case though: suppose your app uses the accelerometer as its main user input. In this case there won’t be any user activity to trigger a countdown reset, but disabling idle detection altogether doesn’t look like the best idea either (what if the user puts the phone on the table to go grab a beer?).

The best approach in this case would be to reset the countdown when movement is detected, i.e. to treat accelerometer events like screen user input. How do we do this?

The answer is extremely simple: just disable idle detection and then re-enable it. This resets the countdown. One little caveat: you cannot re-enable it immediately after having disabled it; the OS is smart enough not to be fooled and will ignore the two commands. You’ll have to wait a short while before re-enabling idle detection.

Here is an example: when I get a new reading from the accelerometer (I’m using the AccelerometerHelper) I check whether there has been a large enough movement, and in that case I disable the idle detection. Otherwise I enable it –this effectively resets the countdown every time the movement goes above a given threshold. Keep in mind that the accelerometer fires 50 times per second; that’s why I used a bool field to avoid unnecessary calls to the system setting. I’m not sure this prevents an actual performance loss, but it would be worth experimenting and measuring a little if you use this technique in your apps.

double _currentValue;
bool _screenSaverEnabled = true;

private void OnAccelerometerHelperReadingChanged(object sender, AccelerometerHelperReadingEventArgs e)
{
    Dispatcher.BeginInvoke(() =>
        {
            // you'll have something more useful in your app
            var computedValue = e.OptimallyFilteredAcceleration.X;

            var delta = Math.Abs(computedValue - _currentValue);
            if (_screenSaverEnabled)
            {
                if (delta > SOME_ARBITRARY_THRESHOLD)
                {
                    _screenSaverEnabled = false;
                    PhoneApplicationService.Current.UserIdleDetectionMode = IdleDetectionMode.Disabled;
                    Debug.WriteLine("Screen saver disabled");
                }
            }
            else
            {
                _screenSaverEnabled = true;
                PhoneApplicationService.Current.UserIdleDetectionMode = IdleDetectionMode.Enabled;
                Debug.WriteLine("Screen saver enabled");
            }
            _currentValue = computedValue;
        }
    );
}

Happy coding!

Windows Phone 7: correct pinch zoom in Silverlight

Pinch zooming is one of those things that look incredibly simple until you actually try to implement them. At that point you realize it hides quite a number of intricacies that make it hard to get it right. If you tried to implement pinch zooming in Silverlight for Windows Phone 7 you probably know what I’m talking about.

What does it mean to get it right?

Adrian Tsai already gave an excellent explanation, so I won’t repeat his words. The test is extremely simple: pick two points in the image (for example two eyes) and zoom with your fingers on them. If at the end of the zoom the two points are still under your fingers you got it right –otherwise you got it wrong.

Multitouch Behavior

Laurent Bugnion, Davide Zordan and David Kelly are the men behind Multitouch Behavior for SL and WPF. It’s an impressive open source project and you should check it out. In addition to pinch zooming it gives you rotation, inertia, debug mode and much more. It’s extremely easy to work with, as you just need a couple of lines of XAML. The only shortcoming is that, at the time of writing, there seems to be no way to read the current zoom state, making it difficult to fully support tombstoning. If you don’t need this, go grab Multitouch Behavior and stop reading: it will probably work better and you’ll save some time.

The XAML

This is the XAML we are starting with. Notice that our DIY implementation relies on the Silverlight Toolkit’s GestureListener. If you are not yet using the toolkit, please install it and add a reference to Microsoft.Phone.Controls.Toolkit in your project.

xmlns:toolkit="clr-namespace:Microsoft.Phone.Controls;assembly=Microsoft.Phone.Controls.Toolkit"
<Image x:Name="ImgZoom"
        Source="sample.jpg"
        Stretch="UniformToFill"
        RenderTransformOrigin="0.5,0.5"> 
    <toolkit:GestureService.GestureListener>
        <toolkit:GestureListener
                PinchStarted="OnPinchStarted"
                PinchDelta="OnPinchDelta"/>
    </toolkit:GestureService.GestureListener>
    <Image.RenderTransform>
        <CompositeTransform
                ScaleX="1" ScaleY="1"
                TranslateX="0" TranslateY="0"/>
    </Image.RenderTransform>
</Image>

The wrong way

I’ve seen this example several times around, I suppose you’ve seen it too somewhere on The Interwebs™:

double initialScale = 1d;

private void OnPinchStarted(object s, PinchStartedGestureEventArgs e)
{
    initialScale = ((CompositeTransform)ImgZoom.RenderTransform).ScaleX;
}

private void OnPinchDelta(object s, PinchGestureEventArgs e)
{
    var transform = (CompositeTransform)ImgZoom.RenderTransform;
    transform.ScaleX = initialScale * e.DistanceRatio;
    transform.ScaleY = transform.ScaleX;
}

Very simple and good looking. I love simple solutions and I bet you do too, but as someone once said “Things should be as simple as possible, but not simpler.” And unfortunately this is simpler than possible (is this even a sentence?). The problem is that the scaling is always centered in the middle of the image, so this solution won’t pass the poke-two-fingers-in-the-eyes test.

The better but still wrong way

The knee-jerk reaction is to move the scaling center between our fingers as we perform the scaling:

double initialScale = 1d;

private void OnPinchStarted(object s, PinchStartedGestureEventArgs e)
{
    initialScale = ((CompositeTransform)ImgZoom.RenderTransform).ScaleX;
}

private void OnPinchDelta(object s, PinchGestureEventArgs e)
{
    var finger1 = e.GetPosition(ImgZoom, 0);
    var finger2 = e.GetPosition(ImgZoom, 1);

    var center = new Point(
        (finger2.X + finger1.X) / 2 / ImgZoom.ActualWidth,
        (finger2.Y + finger1.Y) / 2 / ImgZoom.ActualHeight);

    ImgZoom.RenderTransformOrigin = center;

    var transform = (CompositeTransform)ImgZoom.RenderTransform;
    transform.ScaleX = initialScale * e.DistanceRatio;
    transform.ScaleY = transform.ScaleX;
}

This is better. The first time it actually works well, but as soon as you pinch the image a second time you realize the image moves around. The reason: the zoom state is the sum of all the zoom operations (each one having its own center), and by moving the center every time you are effectively removing information from the previous steps. To solve this problem we could replace the CompositeTransform with a TransformGroup and add a new ScaleTransform (with a new center) at every PinchStarted+PinchDelta event group. This would probably work: every scaling would keep its center and all would be well. Except your phone would probably catch fire and explode because of the number of transforms you are stacking up. My team has a name for this kind of solution, and it isn’t a nice one (fortunately there is no English translation for it).

The right way

It is clear by now that simply setting a scale factor and moving the center won’t take us far. As we are real DIYourselfers, we will do it with a combination of scaling and translation. In the already mentioned article Adrian Tsai uses this technique in XNA, and we will apply the same concept in Silverlight. If an image is worth a million words, a line of code is probably worth even more, so I’ll let the C# do the talking.

// these two fully define the zoom state:
private double TotalImageScale = 1d;
private Point ImagePosition = new Point(0, 0);

private Point _oldFinger1;
private Point _oldFinger2;
private double _oldScaleFactor;

private void OnPinchStarted(object s, PinchStartedGestureEventArgs e)
{
    _oldFinger1 = e.GetPosition(ImgZoom, 0);
    _oldFinger2 = e.GetPosition(ImgZoom, 1);
    _oldScaleFactor = 1;
}

private void OnPinchDelta(object s, PinchGestureEventArgs e)
{
    var scaleFactor = e.DistanceRatio / _oldScaleFactor;

    var currentFinger1 = e.GetPosition(ImgZoom, 0);
    var currentFinger2 = e.GetPosition(ImgZoom, 1);

    var translationDelta = GetTranslationDelta(
        currentFinger1,
        currentFinger2,
        _oldFinger1,
        _oldFinger2,
        ImagePosition,
        scaleFactor);

    _oldFinger1 = currentFinger1;
    _oldFinger2 = currentFinger2;
    _oldScaleFactor = e.DistanceRatio;

    UpdateImage(scaleFactor, translationDelta);
}

private void UpdateImage(double scaleFactor, Point delta)
{
    TotalImageScale *= scaleFactor;
    ImagePosition = new Point(ImagePosition.X + delta.X, ImagePosition.Y + delta.Y);

    var transform = (CompositeTransform)ImgZoom.RenderTransform;
    transform.ScaleX = TotalImageScale;
    transform.ScaleY = TotalImageScale;
    transform.TranslateX = ImagePosition.X;
    transform.TranslateY = ImagePosition.Y;
}

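// Given the old and new finger positions, computes how much the image must be
// translated so that the points under the fingers stay under the fingers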
private Point GetTranslationDelta(
    Point currentFinger1, Point currentFinger2,
    Point oldFinger1, Point oldFinger2,
    Point currentPosition, double scaleFactor)
{
    var newPos1 = new Point(
        currentFinger1.X + (currentPosition.X - oldFinger1.X) * scaleFactor,
        currentFinger1.Y + (currentPosition.Y - oldFinger1.Y) * scaleFactor);

    var newPos2 = new Point(
        currentFinger2.X + (currentPosition.X - oldFinger2.X) * scaleFactor,
        currentFinger2.Y + (currentPosition.Y - oldFinger2.Y) * scaleFactor);

    var newPos = new Point(
        (newPos1.X + newPos2.X) / 2,
        (newPos1.Y + newPos2.Y) / 2);

    return new Point(
        newPos.X - currentPosition.X,
        newPos.Y - currentPosition.Y);
}

Also note that in the XAML we must set the RenderTransformOrigin to 0,0 (not 0.5,0.5 as in the starting XAML above).
This finally passes the fingers-in-the-eyes test! Now we can add some bells and whistles, like handling dragging, blocking the zoom-out when the image is at full screen, and preventing the image from being dragged outside the visible area. For those extra details please see the sample solution at the end of the article.

What about MVVM?

You are using MVVM-Light for your WP7 app, aren’t you? We all agree my code is ugly and not very MVVM-friendly; I’ll make no excuses. However, it’s all strictly UI code, so it doesn’t feel so bad to have it in the code-behind. What you will probably do is wire TotalImageScale and ImagePosition to your ViewModel. Those two values fully define the state of the zoom, so if you save and reload them in your ViewModel you will be good to go.

Download

Here is the full sample project so that you can play with the code within the comfort of your Visual Studio (my daughter is in the picture, please treat her with respect :-) ).
Feel free to use the code in your project. As always, any kind of feedback is deeply appreciated!

WP7 icons quick and undirty

An unexpectedly time-consuming part of Windows Phone 7 development is icons. Developers often don’t put much care into icons, and they are wrong. Your app is listed in the marketplace with an icon, and most users just skip the crappy ones. If you make a bad icon, most users won’t even read what the application is about, let alone download and install it.

That said, as a developer with some occasional design inspiration, I found Expression Blend to be the perfect tool to generate WP7 graphics. The simple, minimalist style of WP7 icons just fits well with Blend and XAML in general. Pro designers will probably be better off with specific graphic tools, but to me it’s just easier and faster to “program” my icons in Blend. I’ve had some decent results to support this approach, but of course YMMV (the smile below is a placeholder and should be judged as such :-) ).

[Image: icons generated from the placeholder smile]

The main issue with creating the graphics in Blend is that you spend a lot of time cropping pictures to the correct sizes. That’s why I built myself a raw tool that is now decent enough to share with the world. It’s really raw, but it does the job. In fact it’s nothing more than a Blend/VS solution with correctly sized canvases and the ability to export all the images in one shot. The code is horrible and all, but it saved me a lot of time.

[Screenshot: the WP7IconBuddy solution]

Pixel-perfect

The Windows Phone 7 marketplace requires you to create several icons in different sizes. Don’t take this as an unnecessary hassle; it is in fact an opportunity: it means you can create a pixel-perfect image for every size. Do not create a single image and just resize it to each size. There are good reasons against this:

1. The tile image is not a simple icon. It will be shown on the main phone page and includes at least the application name. That’s why your image must be offset to take this into account. My solution overlays the system settings icon, so that you can check whether your logo is correctly centered. If your icons are full-width you can ignore this.

[Image: the tile canvas with the system settings icon overlaid for centering]

2. You can (and should) use a different detail level for every size. A good looking 173×173 icon may look like an undefined mass of blurry pixels when resized to 62×62. Just keep the general theme and image consistent.

3. Straight lines will become anti-aliased and look blurry when you resize them (in XAML, when you use a Viewbox). It’s simple: the width of a line, when stretched, could become a non-integer value (for ex. 3.5 pixels) and will look blurry. If you have a different image for every size you have full control and can make one-pixel adjustments to avoid this effect. Look at this example: it may not be obvious, but on a closer look you’ll see that the left picture is not as well defined as the right one. On the phone the difference is even more obvious.

[Image: resized (blurry) icon vs. pixel-perfect icon]

Download

Usage is simple: open the solution in Blend 4 or Visual Studio 2010 (it’s a WPF application), delete the placeholder smile and put your graphics in its place. Run the application and hit the export button to save the images. Tip: use resources for colors, shapes, etc., so that you can change them in one shot.
Enough said: download WP7IconBuddy and use it at your own risk. I’d love to hear some feedback.