Silverlight/WPF RGB color in C#

Sometimes you have a color in XAML and want to use it in C#. In other words, you want to translate something like:

<Border Background="#AA0FCC1B">

to:

Border.Background = …something…

So you start looking for an IValueConverter that creates a color from an RGB string, or you start translating the hex values to decimal, etc… STOP it!
All you need is:

Border.Background = new SolidColorBrush(
                            Color.FromArgb(0xaa, 0x0f, 0xcc, 0x1b));

doh

Ok, it may sound stupid, but you never can tell…
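
(That said, if the color string really is only known at runtime, you do have to parse it. Here is a minimal sketch, assuming an 8-digit #AARRGGBB string; the helper is mine, not a framework API:)

using System.Globalization;
using System.Windows.Media;

public static class ColorHelper
{
    // "#AARRGGBB" -> brush; no input validation for brevity
    public static SolidColorBrush FromHexString(string hex)
    {
        hex = hex.TrimStart('#');
        return new SolidColorBrush(Color.FromArgb(
            byte.Parse(hex.Substring(0, 2), NumberStyles.HexNumber),
            byte.Parse(hex.Substring(2, 2), NumberStyles.HexNumber),
            byte.Parse(hex.Substring(4, 2), NumberStyles.HexNumber),
            byte.Parse(hex.Substring(6, 2), NumberStyles.HexNumber)));
    }
}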

Silverlight 4 is out…

…just a word of caution for developers: if you are still developing with Silverlight 3 and VS2008 don’t install the Silverlight 4 runtime. If you do, you won’t be able to build your SL3 application anymore and you’ll spend the next hour

a) looking for a way to make your app build again

b) looking for the SL3 runtime (which you won’t find anywhere, and which wouldn’t correct the situation anyway).

This problem should not arise if you are already on VS2010 because it allows you to choose your target between SL3 and SL4, but if you are stuck with VS2008 you are out of luck.
It seems that the folks at MS think that everybody can just go ahead and migrate all their solutions to VS2010 and SL4 the day after they are released.

Epic fail! Rant over.

P.S. the correct way to make things work again is:

– uninstall the Silverlight 4 runtime (listed as “Silverlight” in the Programs & Features panel)
– restart your machine (no, you cannot skip this step!!!)
– download the Silverlight 3 Developer Runtime and install it (hurry up, because as soon as they notice they will remove it from the download server!)

And no, you won’t be able to view SL4 websites, but at least your app will build.

UPDATE: forget the rubbish above. If you install the Silverlight 4 developer runtime you will be able to run SL4 apps and build SL3 apps (provided, of course, that you have the SL3 SDK). Just don’t install the “normal” SL4 runtime.

WPF is dead. Long live WPF!

Some months ago I read in a blog post that Silverlight would eat WPF from the inside. I had a good laugh and thought it was the most foolish thing I’d read in a while. I even posted a comment that (thankfully) never got published. Having worked extensively with both WPF and Silverlight, I thought the two were not even remotely comparable. While WPF provided great power, Silverlight was full of limitations and getting any real work done was frustrating and painful.

Turns out I was wrong. Completely wrong! This week I attended TechDays (the small version of MIX that Microsoft runs in European countries) and while nobody says it explicitly, the strategy at Redmond seems pretty clear. Silverlight is progressing at an impressive pace and WPF is not getting many exciting improvements. The gap is still there (still large, truth be told), but seeing SL catch up and swallow WPF is no longer hard to imagine. I think MS is pushing in that direction with all its might.

Out-of-browser was almost a gimmick in SL3, but with SL4 they revealed their cards: they added so many features (even COM support when running on Windows) that it’s now feasible to build a desktop application entirely on SL. You can even deploy it directly to the desktop without any browser interaction.
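
To give an idea, an elevated-trust out-of-browser SL4 app can script COM servers through AutomationFactory. A minimal sketch (Excel is just the classic automation example, not something SL4 requires):

using System.Runtime.InteropServices.Automation;

// Only available in an elevated-trust out-of-browser app running on Windows.
if (AutomationFactory.IsAvailable)
{
    dynamic excel = AutomationFactory.CreateObject("Excel.Application");
    excel.Visible = true;
}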

I’m pretty sure it will only take a few years for Silverlight to become the Windows UI library, with the big bonus of true multiplatform support, a small runtime and web deployment from a single codebase. WPF won’t lose anything, as it will simply become part of Silverlight.

This is the future I think. Unless I’m completely wrong again.

Leaky Abstraction strikes again: FileStream.Lock() gotchas

First, if you don’t know the Law of Leaky Abstractions go on and read here (10 minutes well spent!).

.NET’s FileStream.Lock() is a handy method that lets you lock a section of a file (instead of locking it completely) so that other processes/threads cannot touch that part.

The usage is fairly simple: you specify where the locked section starts and how long it is. However, despite this simplicity, there are a couple of things you’d better keep in mind, or you’ll find yourself scratching your head in front of the screen.

First: contrary to what some articles say, this method locks the section not only for write access but also for read access. Maybe those articles refer to an older framework version, but a simple test confirms that a process cannot read a part of a file that another process has locked.

The second thing can be tricky.
Let’s make a simple experiment: we write 100 bytes in a new file and we lock everything except the very first byte. Then we launch another process that reads the first byte.

// first process:
using (var fs = new FileStream("myFile.txt",
                               FileMode.Create,
                               FileAccess.Write,
                               FileShare.ReadWrite))
{
    using (var bw = new BinaryWriter(fs))
    {
        bw.Write(new byte[100]);
    }
    // locks everything except the first byte
    fs.Lock(1, 99);
    Console.ReadLine();
    fs.Unlock(1, 99);
}

// second process (first process is waiting at Console.ReadLine()):
using (var fs = new FileStream("myFile.txt",
                               FileMode.Open,
                               FileAccess.Read,
                               FileShare.ReadWrite))
{
    using (var br = new BinaryReader(fs))
    {
        // read the first byte
        var b = br.ReadByte();
    }
}

What happens? The second process throws an exception: “The process cannot access the file because another process has locked a portion of the file.”
Why? We didn’t try to access the locked portion, so this should not have happened!

At first you may suspect that Lock() is buggy and locks the whole file. But that’s not true: Lock() works correctly.
The answer is in FileStream’s buffer (I hear the “aha!”). When you ask FileStream to read a single byte, it’s smart enough not to read just that byte but to fill its internal buffer (4 KB by default) to speed up subsequent reads. So it tries to read into the locked part and fails.

Now that you know why this happens, you can solve the problem more or less easily depending on your situation: you may, for example, adjust your buffer size to the length of the chunks you are reading.

In the example above it’s enough to pass 1 as the buffer size to the second process’s FileStream constructor to make it work (just to show the theory, not that this is good practice!).
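
Concretely, the fix is just the extra bufferSize argument on the second process’s stream (same code as above, one more parameter):

// a 1-byte buffer keeps the read inside the unlocked region
// (only to prove the point: a 1-byte buffer is terrible for performance)
using (var fs = new FileStream("myFile.txt",
                               FileMode.Open,
                               FileAccess.Read,
                               FileShare.ReadWrite,
                               1))
{
    using (var br = new BinaryReader(fs))
    {
        var b = br.ReadByte(); // now succeeds
    }
}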

I really think that the FileStream abstraction should handle this case and avoid the “leak”, but the .NET framework guys are smart people and I bet there is a good reason if it doesn’t.

Delivery room (humble) photography tips

While I’m not a photographer by any stretch of the imagination, I just became the father of a beautiful girl (this explains the lack of recent posts…) and I’d like to share some photography tips for the magic moment.

1 – Equipment choice

There are some things you will have to prepare in advance. Actually there are many things, but I will focus on the photography aspect alone ;-)

Camera

First, you will have to decide what equipment to use. If you have a DSLR I strongly recommend using it, but only if you are familiar with it.
If you are not 100% sure how to use it, forget it. Better to bring a small point & shoot that you can actually use than to take horrible out-of-focus pictures with an SLR.

Lens

The delivery room itself plays the most important role in deciding what lens is appropriate. If you can visit the delivery room in advance, do it. Look at the lighting, the room size, etc… (by the way, a visit is recommended for your wife too, so she knows what to expect).
In my case the lights were dim and soft so a fast lens was mandatory. I used a 50mm f/1.8 prime lens (with a Nikon D80) and it was fine. Maybe slightly long, but it did the job very well.

The delivery room is not a good place to fiddle with multiple lenses, so pick one and stick with it. Leave the other lenses at home: you will have all the time later to experiment with different equipment.

Flash

Forget the flash: it’s too obtrusive and it’s pretty much guaranteed that after a flash or two your wife will get angry at you. Leave it at home; the less stuff you have to bring along, the better.
As mentioned above, keep in mind that without a flash at your disposal you will probably need a fast lens.

2 – Get ready

In the delivery room you will basically forget everything you know, so you will have to prepare everything you can in advance:

Know your equipment

If your camera is relatively new, or you bought a new lens, make sure you are familiar with it. Take indoor shots from a distance that may reflect the available space in the delivery room.

ISO

As I mentioned before, come that moment you will forget everything, so it’s a good idea to preset your camera. If the lights are dim like in my case, you will have to shoot at high ISO. If your camera supports it, it may be a good idea to turn Auto-ISO on and practice a bit: one less thing to take care of.

RAW!

Shoot in RAW! You will need to adjust white balance because of the weird lights, and it will be a lot easier to deal with it later at home when the adrenaline is gone. You probably won’t take hundreds of pictures, but anyway make sure your memory card is large enough.

Mode

The Program mode is probably the best choice, as it does not fire the flash but still selects shutter speed/aperture for you. Avoid Auto because of the automatic flash, and Aperture/Shutter priority because, as I said, you will make every kind of mistake (unless maybe you are a professional photographer and/or on your third child).
Place the dial in the correct position well before the event. It sounds stupid, but you will feel more stupid when you notice an hour later that you took all your pictures with the wrong settings.

Keep the camera ready

In the last few weeks of pregnancy make sure your camera battery is fully charged and the memory card empty. Put the camera in your wife’s bag so you won’t forget it at home.
If you put it anywhere else, when you are in a hurry to get to the hospital you will think “to hell with the camera” and leave it at home, only to regret it later.

3 – When the time has come

Fast forward a while and you are there in the delivery room. My advice is to forget the camera, be there to support your wife and enjoy the moment. Then, when the baby is out, crying and looking around for the first time, grab the camera and do your best to keep your hands steady. Check with the hospital beforehand, but in our case they left us almost alone with the baby for an hour or two right after the birth. That was a great time to take pictures.

By the way, babies are not that pretty in their first minutes of life. But they will look beautiful to you :-)


That’s all I can say. Please take all this advice with a grain of salt and, most importantly, congrats on the imminent birth and prepare for one of the best times of your life!

[Photo: DSC_4523_edited]

Convert FILETIME to Unix timestamp

Yes I know, C/C++ is not trendy these days. I don’t care.

So if you are trying to convert a FILETIME date that comes, for example, from the FindFirstFile/FindNextFile Win32 API, you may find it’s not completely straightforward. Don’t try to accomplish this with some date-conversion function, because you will probably just waste a lot of time; I know because I did.

A UNIX timestamp contains the number of seconds from Jan 1, 1970, while the FILETIME documentation says: Contains a 64-bit value representing the number of 100-nanosecond intervals since January 1, 1601 (UTC).

Between Jan 1, 1601 and Jan 1, 1970 there are 11644473600 seconds, so we will just subtract that value:

LONGLONG FileTime_to_POSIX(FILETIME ft)
{
	// copy the two 32-bit halves into a 64-bit integer
	LARGE_INTEGER date, adjust;
	date.HighPart = ft.dwHighDateTime;
	date.LowPart = ft.dwLowDateTime;

	// 11644473600000 ms between the 1601 and 1970 epochs, in 100-ns units
	adjust.QuadPart = 11644473600000LL * 10000;

	// removes the diff between 1970 and 1601
	date.QuadPart -= adjust.QuadPart;

	// converts back from 100-nanoseconds to seconds
	return date.QuadPart / 10000000;
}

And some code to show its usage (with various checks omitted for the sake of simplicity):

#include "stdafx.h"
#include "windows.h"

LONGLONG FileTime_to_POSIX(FILETIME ft);

int _tmain(int argc, _TCHAR* argv[])
{
	char d;
	WIN32_FIND_DATA FindFileData;
	HANDLE hFind;

	hFind = FindFirstFile(TEXT("C:\\test.txt"), &FindFileData);
	LONGLONG posix = FileTime_to_POSIX(FindFileData.ftLastWriteTime);
	FindClose(hFind);

	printf("UNIX timestamp: %ld\n", posix);
	scanf("%c", &d);

	return 0;
}

The Old New Thing* – a theme for Thunderbird 3

Yesterday I downloaded Thunderbird 3, the latest incarnation of the popular email client. While I like the new features and appreciate the work they put in this release, I don’t like the icons.
They are coherent with the Windows 7 default theme, but I still think they are a bit too bright and distracting. On the other hand, the icons in the Thunderbird 2 default theme were perfect in this regard.

No sooner said than done: I packaged the old icons into the default TB3 theme. To make things clear, I haven’t created anything; I just took the TB2 icons and put them into the TB3 default theme.
You can download it from here until it gets approved at mozilla addons.

A couple of screenshots: default TB3 theme:

[Screenshot: default TB3 theme]

The Old New Thing in action:

[Screenshot: The Old New Thing theme]

*not related to Raymond Chen’s excellent weblog – I’m not very creative in choosing names, sorry.

update: now marked to work with any 3.* version of Thunderbird.

WCF/Silverlight – some “benchmarks”

I took some very simple measurements from my recent experiments with Silverlight and WCF web services. These are so simple and unscientific that I suggest you take them only as a general indication.
Please test your scenario to get an accurate picture!

That said, some differences are so large that it already gives you a general idea. These are the http bindings I tested:

1) text formatter (default), i.e. SOAP XML
2) binary formatter, i.e. binary XML
3) text formatter with http gzip compression (see my previous post)
4) binary formatter with http gzip compression
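
For reference, bindings 1) and 2) can be built in code roughly like this (a sketch, assuming a self-hosted service; needs System.ServiceModel.Channels and System.Text, and mirrors the custom-binding approach described in my GZIP post, not necessarily my exact benchmark setup):

// 1) plain SOAP XML over HTTP (what basicHttpBinding gives you)
var textBinding = new CustomBinding(
    new TextMessageEncodingBindingElement(MessageVersion.Soap11, Encoding.UTF8),
    new HttpTransportBindingElement());

// 2) the same HTTP transport with the binary XML encoder
var binaryBinding = new CustomBinding(
    new BinaryMessageEncodingBindingElement(),
    new HttpTransportBindingElement());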

Here are the results.

Response time

[Chart: response time]

You can see that the text formatter is dramatically slower than binary XML.
One interesting thing I noticed is that this extreme slowness of the text formatter happens only with Silverlight (3). That is, if you use a Windows client (console app or WPF), the text formatter is still slower than the binary formatter, but not by that much (compare the red bars).

The Silverlight runtime is probably slower than the Windows runtime, and I guess that deserializing a huge XML message is one of the things that clearly exposes this difference.

Another observation is that with gzip compression the response time is slightly higher. Keep in mind that these numbers come from a connection on a single machine. In the real world, with a large message, the size reduction will probably more than compensate for the compression overhead.

(Side note: these tests were quick and dirty, but I still did some “warmup” calls and measured over multiple runs, so these timings are pretty stable.)

Message size

[Chart: message size]

The message I used was quite large and included a pseudo-dataTable. This means that XML serialization results in a lot of string repetitions: a particularly good target for zip compression. Other cases may not benefit so greatly from gzip compression.

Conclusion

This is clearly not a conclusive test (it may not even be enough to be called a test), but one thing is clear: it is worth spending some time playing with the different binding options, as the benefits you could reap may be huge.

GZIP Compression – WCF+Silverlight

In a previous post I described a method to enable GZIP compression on a self-hosted WCF service that communicates with Silverlight. Unfortunately that method does not work. I still haven’t understood whether it stopped working at some point for some reason, or whether I was fooled into thinking it worked when it didn’t.
Whatever the case, what I’ll describe here takes slightly more work but gives the desired result.

In brief, this is what we’ll need.

On the client side: still nothing. The browser’s http layer handles it all nicely. When an http response has a Content-Encoding: gzip header, it decompresses the message body before handing it to Silverlight.
This is also true if you use the SL3 built-in stack.

On the server side: instead of using a basicHttpBinding, we’ll use a custom binding that will handle these steps:

1. Read the request’s Accept-Encoding HTTP header
2. Depending on that header, compress the response body
3. If the response was compressed, add a Content-Encoding: gzip HTTP header

Steps 1. and 3. are managed by a MessageInspector. Step 2. is managed by a MessageEncoder; we will base this on Microsoft’s Compression Encoder sample. Please download it, as I’ll only tell you what to modify there.
It may be a good idea to study it a bit before starting.

Step 1. and 3. – Managing the HTTP headers

We have to create a MessageInspector that performs the actual work and a Behavior that tells the service endpoint to use the inspector.
The inspector is what we are most interested in: look at AfterReceiveRequest() and BeforeSendReply(). In AfterReceiveRequest we’ll look for “gzip” inside the Accept-Encoding header. If we find it, then we’ll add an extension to OperationContext so that later we will know if we can compress the response (or if we have to return it uncompressed).

public class GzipInspector : IDispatchMessageInspector
{
    public object AfterReceiveRequest(ref Message request, IClientChannel channel, System.ServiceModel.InstanceContext instanceContext)
    {
        try
        {
            var prop = request.Properties[HttpRequestMessageProperty.Name] as HttpRequestMessageProperty;
            var accept = prop.Headers[HttpRequestHeader.AcceptEncoding];

            if (!string.IsNullOrEmpty(accept) && accept.Contains("gzip"))
                OperationContext.Current.Extensions.Add(new DoCompressExtension());
        }
        catch { }

        return null;
    }

    public void BeforeSendReply(ref System.ServiceModel.Channels.Message reply, object correlationState)
    {
        if (OperationContext.Current.Extensions.OfType<DoCompressExtension>().Any())
        {
            HttpResponseMessageProperty httpResponseProperty = new HttpResponseMessageProperty();
            httpResponseProperty.Headers[HttpResponseHeader.ContentEncoding] = "gzip";
            reply.Properties[HttpResponseMessageProperty.Name] = httpResponseProperty;
        }
    }
}

And this is the Extension. There is nothing inside it, it’s just a way to store in OperationContext the information “The response of this message will need to be compressed”. We will use this information later in the compression encoder.

public class DoCompressExtension : IExtension<OperationContext>
{
    public void Attach(OperationContext owner) { }
    public void Detach(OperationContext owner) { }
}

Finally, we have to provide a behavior that adds our MessageInspector to the service endpoint. Its goal is just to tell the endpoint to inspect incoming and outgoing messages with GzipInspector.

public class GZipBehavior : IEndpointBehavior
{
    public void AddBindingParameters(ServiceEndpoint endpoint, BindingParameterCollection bindingParameters)
    { }

    public void ApplyClientBehavior(ServiceEndpoint endpoint, ClientRuntime clientRuntime)
    {
        throw new Exception("Behavior not supported on the client side");
    }

    public void ApplyDispatchBehavior(ServiceEndpoint endpoint, System.ServiceModel.Dispatcher.EndpointDispatcher endpointDispatcher)
    {
        endpointDispatcher.DispatchRuntime.MessageInspectors.Add(new GzipInspector());
    }

    public void Validate(ServiceEndpoint endpoint)
    { }
}

public class GzipBehaviorExtensionElement : BehaviorExtensionElement
{
    public GzipBehaviorExtensionElement()
    { }

    public override Type BehaviorType
    {
        get { return typeof(GZipBehavior); }
    }

    protected override object CreateBehavior()
    {
        return new GZipBehavior();
    }
}

Step 2. – Compressing the response body

As mentioned, we will modify the MS Compression Channel sample (WCF/Extensibility/MessageEncoder/Compression). We need the files inside the GZipEncoder project, and we will make some minor changes to the GZipMessageEncoder class (GZipMessageEncoderFactory.cs) and GZipMessageEncodingElement (GZipMessageEncodingBindingElement.cs).

These are the changes to GZipMessageEncoder. First, the sample uses content-type to mark the message as compressed. Since we are using http headers to achieve this, we can leave content-type and media-type intact (in fact, we must). Change them to look like this:

public override string ContentType
{
    get { return innerEncoder.ContentType; }
}

public override string MediaType
{
    get { return innerEncoder.MediaType; }
}

Second, we have to skip decompression when reading the message since our requests are always uncompressed. This simplifies the two ReadMessage methods:

public override Message ReadMessage(ArraySegment<byte> buffer, BufferManager bufferManager, string contentType)
{
    return innerEncoder.ReadMessage(buffer, bufferManager, contentType);
}

public override Message ReadMessage(System.IO.Stream stream, int maxSizeOfHeaders, string contentType)
{
    return innerEncoder.ReadMessage(stream, maxSizeOfHeaders, contentType);
}

Third, we have to add a condition to WriteMessage as we have to compress the response only when our MessageInspector told us to do so.

public override ArraySegment<byte> WriteMessage(Message message, int maxMessageSize, BufferManager bufferManager, int messageOffset)
{
    if (OperationContext.Current.Extensions.OfType<DoCompressExtension>().Any())
    {
        ArraySegment<byte> buffer = innerEncoder.WriteMessage(message, maxMessageSize, bufferManager, messageOffset);
        return CompressBuffer(buffer, bufferManager, messageOffset);
    }
    else
        return innerEncoder.WriteMessage(message, maxMessageSize, bufferManager, messageOffset);
}

public override void WriteMessage(Message message, System.IO.Stream stream)
{
    if (OperationContext.Current.Extensions.OfType<DoCompressExtension>().Any())
    {
        using (GZipStream gzStream = new GZipStream(stream, CompressionMode.Compress, true))
        {
            innerEncoder.WriteMessage(message, gzStream);
        }
        stream.Flush();
    }
    else
        innerEncoder.WriteMessage(message, stream);
}

One last detail: if you want to create the binding from app.config you may want to make a small change to the GZipMessageEncodingElement class: by default it creates a TextMessageEncodingBindingElement without specifying anything. However, since we are trying to replicate a basicHttpBinding, we have to specify Soap11 as the message version and UTF-8 as the encoding:

public override void ApplyConfiguration(BindingElement bindingElement)
{
    GZipMessageEncodingBindingElement binding = (GZipMessageEncodingBindingElement)bindingElement;
    PropertyInformationCollection propertyInfo = this.ElementInformation.Properties;
    if (propertyInfo["innerMessageEncoding"].ValueOrigin != PropertyValueOrigin.Default)
    {
        switch (this.InnerMessageEncoding)
        {
            case "textMessageEncoding":
                binding.InnerMessageEncodingBindingElement = new TextMessageEncodingBindingElement()
                {
                    MessageVersion = MessageVersion.Soap11,
                    WriteEncoding = Encoding.UTF8
                };
                break;
            case "binaryMessageEncoding":
                binding.InnerMessageEncodingBindingElement = new BinaryMessageEncodingBindingElement();
                break;
        }
    }
}

Putting it all together

Now we have a Behavior/MessageInspector pair that handles the HTTP headers and a MessageEncoder that compresses the response body. We only have to tell our service to use them.

The binding: instead of a basicHttpBinding we will use a custom binding. From config:

<customBinding>
  <binding name="BufferedHttpSampleServer">
    <gzipMessageEncoding innerMessageEncoding="textMessageEncoding" />
    <httpTransport transferMode="Buffered"/>
  </binding>
</customBinding>

From code:

var encoding = new GZipMessageEncodingBindingElement(new TextMessageEncodingBindingElement(MessageVersion.Soap11, Encoding.UTF8));
var transport = new HttpTransportBindingElement();
var b = new CustomBinding(encoding, transport);

If you don’t need interoperability you can use binary XML instead of plain XML:

var encoding = new GZipMessageEncodingBindingElement(new BinaryMessageEncodingBindingElement());

It’s possible that on the Silverlight side if you add your service with “Add Service Reference,” the app.config is not created correctly. In that case just modify it to use a basicHttpBinding (or a custom binding with http transport and binaryEncoding if you are using binary XML).

Registering the behavior:

using (ServiceHost myServer = new ServiceHost(typeof(MyServer)))
{
    // the first (and only) endpoint gets the gzip behavior
    myServer.Description.Endpoints[0].Behaviors.Add(new GZipBehavior());
    myServer.Open();
    Console.ReadLine(); // keep the host alive
}
Of course this can also be specified in config.
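
For completeness, the config version would look roughly like this (MyNamespace/MyAssembly are placeholders for wherever GzipBehaviorExtensionElement lives; note that WCF can be picky and want the full assembly-qualified type name here):

<system.serviceModel>
  <extensions>
    <behaviorExtensions>
      <add name="gzipBehavior"
           type="MyNamespace.GzipBehaviorExtensionElement, MyAssembly" />
    </behaviorExtensions>
  </extensions>
  <behaviors>
    <endpointBehaviors>
      <behavior name="gzipBehavior">
        <gzipBehavior />
      </behavior>
    </endpointBehaviors>
  </behaviors>
</system.serviceModel>

Then reference it from your endpoint with behaviorConfiguration="gzipBehavior".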

Performance Considerations

With gzip compression you’ll introduce a small overhead, and on very fast networks (or on localhost) you may notice slightly slower response times. However, the benefit in the real world is well worth it, in particular if you are transferring large messages as plain XML.

For example I had a 1 MB message that went down to 65 KB with compression. This message included a pseudo-dataset (a lot of repetition in the XML) and was an ideal case for gzip compression. Still, I suppose that in most situations you will greatly improve your service response times.

A disturbing trend

Today I was reading through the list of speakers for an upcoming conference on software design, user interaction, etc. It seems that everyone there is working for some company with a cool “web 2.0” name that does something social and collaborative.
Call me stupid, but looking at these companies’ websites I really don’t understand what they do. I mean, what they produce. I mean, what they sell to pay the bills. The buzzword-meter is at the top end of the scale, but I still don’t get it.

Maybe in two years I will read this post and realize I was wrong (as usual); for now my buzzword-meter and my BS-meter are very closely related.


Edit: it seems that these people think that social networking will produce huge sales in the coming years. You can download the full report here. Yes, the PDF download costs $1749. Maybe I’m starting to understand…