.NET

Leaky Abstraction strikes again: FileStream.Lock() gotchas

First, if you don’t know the Law of Leaky Abstractions go on and read here (10 minutes well spent!).

.NET’s FileStream.Lock() is a handy method that lets you lock a section of a file (instead of locking it completely) so that other processes/threads cannot touch that part.

The usage is fairly simple: you specify the offset where the lock starts and the length of the section you want to protect. However, despite its simplicity, there are a couple of things you'd better keep in mind, or you'll be scratching your head in front of the screen.
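For instance, assuming fs is a writable FileStream, a minimal call looks like this (the offsets are arbitrary, just for illustration):

// prevent other processes from touching bytes 100-149
fs.Lock(100, 50);
// ... work with the protected region ...
fs.Unlock(100, 50);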

First: contrary to what some articles say, this method locks part of the file not only for write access but also for read access. Maybe those articles refer to an older framework version or something else, but a simple test seems to confirm that a process cannot read a part of a file that another process has locked.

The second thing can be tricky.
Let's run a simple experiment: we write 100 bytes to a new file and lock everything except the very first byte. Then we launch another process that reads the first byte.

// first process:
using (var fs = new FileStream("myFile.txt",
                               FileMode.Create,
                               FileAccess.Write,
                               FileShare.ReadWrite))
using (var bw = new BinaryWriter(fs))
{
    bw.Write(new byte[100]);
    bw.Flush(); // make sure the 100 bytes reach the file before we lock

    // locks everything except the first byte
    fs.Lock(1, 99);
    Console.ReadLine();
    fs.Unlock(1, 99);
}

// second process (first process is waiting at Console.ReadLine()):
using (var fs = new FileStream("myFile.txt",
                               FileMode.Open,
                               FileAccess.Read,
                               FileShare.ReadWrite))
using (var br = new BinaryReader(fs))
{
    // read the first byte
    var b = br.ReadByte();
}

What happens? The second process throws an exception: "The process cannot access the file because another process has locked a portion of the file."
Why? We didn't try to access the locked portion, so this should not have happened!

At first you may believe that Lock() is buggy and locks the whole file. But that is not the case: Lock() works correctly.
The answer is in the FileStream's buffer (I hear the "aha!"). When you ask a FileStream to read a single byte, it is smart enough not to read just that byte but to fill its internal buffer (4 KB by default) to speed up subsequent reads. So it tries to read into the locked part and fails.

Now that you know why this happens, you can more or less easily solve the problem depending on your situation: you may, for example, adjust the buffer size depending on the length of the chunks you are reading.

In the example above it's enough to pass a buffer size of 1 to the second process's FileStream constructor (as an extra argument after the FileShare parameter) to make it work (just to prove the point, not that this is good practice!).
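Just for clarity, here is what the second process's constructor call could look like with an explicit buffer size (a minimal sketch; the extra argument is the bufferSize overload that follows FileShare):

// second process, with a 1-byte buffer so that reading the first byte
// does not try to buffer ahead into the locked region
// (illustrative only, not a recommended general setting)
using (var fs = new FileStream("myFile.txt",
                               FileMode.Open,
                               FileAccess.Read,
                               FileShare.ReadWrite,
                               1))
using (var br = new BinaryReader(fs))
{
    var b = br.ReadByte(); // now succeeds: only the first byte is read
}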

I really think the FileStream abstraction should handle this case and avoid the "leak", but the .NET framework guys are smart people and I bet there is a good reason why it doesn't.

WCF/Silverlight – some “benchmarks”

I took some very simple measurements from my recent experiments with Silverlight and WCF web services. These are so simple and unscientific that I suggest you take them only as a general indication.
Please test your scenario to get an accurate picture!

That said, some differences are so large that they already give you a general idea. These are the HTTP bindings I tested (a sketch of how they can be built in code follows the list):

1) text formatter (default), i.e. SOAP XML
2) binary formatter, i.e. binary XML
3) text formatter with http gzip compression (see my previous post)
4) binary formatter with http gzip compression
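For reference, here is roughly how bindings 1) and 2) can be built in code (a sketch assuming a plain HTTP transport; the gzip variants additionally wrap the encoder with the compression encoder described in my GZIP posts):

// 1) text formatter over HTTP (equivalent to basicHttpBinding)
var textBinding = new CustomBinding(
    new TextMessageEncodingBindingElement(MessageVersion.Soap11, Encoding.UTF8),
    new HttpTransportBindingElement());

// 2) binary formatter (binary XML) over HTTP
var binaryBinding = new CustomBinding(
    new BinaryMessageEncodingBindingElement(),
    new HttpTransportBindingElement());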

Here are the results.

Response time

[chart: response times measured for the four bindings]

You can see that the text formatter is dramatically slower than binary XML.
One interesting thing I noticed is that this extreme slowness of the text formatter happens only with Silverlight (3). That is, if you use a Windows client (console app or WPF), the text formatter is still slower than the binary formatter, but not by that much (compare the red bars).

The Silverlight runtime is probably slower than the Windows runtime and I guess that deserializing a huge xml message is one of the things that clearly expose this difference.

Another observation is that with gzip compression the response time is slightly higher. Keep in mind that these numbers come from a connection on a single machine. In the real world, with a large message, the compressed payload will be so much smaller that it probably more than compensates for the compression overhead.

(Side note: these tests were quick and dirty, but I still did some “warmup” calls and measured over multiple runs, so these timings are pretty stable.)

Message size

[chart: message sizes for the four bindings]

The message I used was quite large and included a pseudo-dataTable. This means the serialized XML contains a lot of repeated strings: a particularly good target for gzip compression. Other cases may not benefit as much from gzip compression.

Conclusion

This is clearly not a conclusive test (it may not even be enough to be called a test), but one thing is clear: it is worth spending some time playing with the different binding options, as the benefits you could reap may be huge.

GZIP Compression – WCF+Silverlight

In a previous post I described a method to enable GZIP compression on a self-hosted WCF service that communicates with Silverlight. Unfortunately that method does not work. I still haven't understood whether it stopped working at some point for some reason, or whether I was fooled into thinking it worked when it didn't.
In any case, what I'll describe here takes slightly more work but gives the desired result.

In brief, this is what we’ll need.

On the client side: still nothing. The browser's HTTP layer handles it all nicely. When an HTTP response has a Content-Encoding: gzip header, it decompresses the message body before handing it to Silverlight.
This is also true if you use the SL3 built-in stack.

On the server side: instead of using a basicHttpBinding, we’ll use a custom binding that will handle these steps:

1. Read the request’s Accept-Encoding HTTP header
2. Depending on that header, compress the response body
3. If the response was compressed, add a Content-Encoding: gzip HTTP header

Steps 1 and 3 are managed by a MessageInspector. Step 2 is managed by a MessageEncoder, which we will base on Microsoft's Compression Encoder sample. Please download it, as I'll only describe what to modify there.
It may be a good idea to study it a bit before starting.

Steps 1 and 3 – Managing the HTTP headers

We have to create a MessageInspector that performs the actual work and a Behavior that tells the service endpoint to use the inspector.
The inspector is what we are most interested in: look at AfterReceiveRequest() and BeforeSendReply(). In AfterReceiveRequest we look for "gzip" inside the Accept-Encoding header. If we find it, we add an extension to the OperationContext so that later we know whether we can compress the response (or whether we have to return it uncompressed).

using System;
using System.Linq;
using System.Net;
using System.ServiceModel;
using System.ServiceModel.Channels;
using System.ServiceModel.Dispatcher;

public class GzipInspector : IDispatchMessageInspector
{
    public object AfterReceiveRequest(ref Message request, IClientChannel channel, System.ServiceModel.InstanceContext instanceContext)
    {
        try
        {
            var prop = request.Properties[HttpRequestMessageProperty.Name] as HttpRequestMessageProperty;
            var accept = prop.Headers[HttpRequestHeader.AcceptEncoding];

            if (!string.IsNullOrEmpty(accept) && accept.Contains("gzip"))
                OperationContext.Current.Extensions.Add(new DoCompressExtension());
        }
        catch { } // if the header is missing or unreadable, just don't compress

        return null;
    }

    public void BeforeSendReply(ref System.ServiceModel.Channels.Message reply, object correlationState)
    {
        if (OperationContext.Current.Extensions.OfType<DoCompressExtension>().Count() > 0)
        {
            // "gzip" was found in Accept-Encoding: announce the compressed body
            // (the actual compression is done by the message encoder)
            HttpResponseMessageProperty httpResponseProperty = new HttpResponseMessageProperty();
            httpResponseProperty.Headers.Add(HttpResponseHeader.ContentEncoding, "gzip");
            reply.Properties[HttpResponseMessageProperty.Name] = httpResponseProperty;
        }
    }
}

And this is the extension. There is nothing inside it; it's just a way to store in the OperationContext the information "the response to this message will need to be compressed". We will use this information later, in the compression encoder.

public class DoCompressExtension : IExtension<OperationContext>
{
    public void Attach(OperationContext owner) { }
    public void Detach(OperationContext owner) { }
}

Finally, we have to provide a behavior that adds our MessageInspector to the service endpoint. Its only job is to tell the endpoint to inspect incoming and outgoing messages with GzipInspector.

public class GZipBehavior : IEndpointBehavior
{
    public void AddBindingParameters(ServiceEndpoint endpoint, BindingParameterCollection bindingParameters)
    { }

    public void ApplyClientBehavior(ServiceEndpoint endpoint, ClientRuntime clientRuntime)
    {
        throw new NotSupportedException("Behavior not supported on the client side");
    }

    public void ApplyDispatchBehavior(ServiceEndpoint endpoint, System.ServiceModel.Dispatcher.EndpointDispatcher endpointDispatcher)
    {
        endpointDispatcher.DispatchRuntime.MessageInspectors.Add(new GzipInspector());
    }

    public void Validate(ServiceEndpoint endpoint)
    { }
}

public class GzipBehaviorExtensionElement : BehaviorExtensionElement
{
    public GzipBehaviorExtensionElement()
    { }

    public override Type BehaviorType
    {
        get { return typeof(GZipBehavior); }
    }

    protected override object CreateBehavior()
    {
        return new GZipBehavior();
    }
}

Step 2 – Compressing the response body

As mentioned, we will modify the MS Compression Channel sample (WCF/Extensibility/MessageEncoder/Compression). We need the files inside the GZipEncoder project, and we will make some minor changes to the GZipMessageEncoder class (in GZipMessageEncoderFactory.cs) and to GZipMessageEncodingElement (in GZipMessageEncodingBindingElement.cs).

These are the changes to GZipMessageEncoder. First, the sample uses the content type to mark the message as compressed. Since we are using HTTP headers for that, we can (in fact must) leave the content type and media type intact. Change the two properties to look like this:

public override string ContentType
{
    get { return innerEncoder.ContentType; }
}

public override string MediaType
{
    get { return innerEncoder.MediaType; }
}

Second, we can skip decompression when reading messages, since our requests always arrive uncompressed. This simplifies the two ReadMessage methods:

public override Message ReadMessage(ArraySegment<byte> buffer, BufferManager bufferManager, string contentType)
{
    return innerEncoder.ReadMessage(buffer, bufferManager, contentType);
}

public override Message ReadMessage(System.IO.Stream stream, int maxSizeOfHeaders, string contentType)
{
    return innerEncoder.ReadMessage(stream, maxSizeOfHeaders, contentType);
}

Third, we have to add a condition to WriteMessage, as we must compress the response only when our MessageInspector told us to do so.

public override ArraySegment<byte> WriteMessage(Message message, int maxMessageSize, BufferManager bufferManager, int messageOffset)
{
    if (OperationContext.Current.Extensions.OfType<DoCompressExtension>().Count() > 0)
    {
        ArraySegment<byte> buffer = innerEncoder.WriteMessage(message, maxMessageSize, bufferManager, messageOffset);
        return CompressBuffer(buffer, bufferManager, messageOffset);
    }
    else
        return innerEncoder.WriteMessage(message, maxMessageSize, bufferManager, messageOffset);
}

public override void WriteMessage(Message message, System.IO.Stream stream)
{
    if (OperationContext.Current.Extensions.OfType<DoCompressExtension>().Count() > 0)
    {
        using (GZipStream gzStream = new GZipStream(stream, CompressionMode.Compress, true))
        {
            innerEncoder.WriteMessage(message, gzStream);
        }
        stream.Flush();
    }
    else
        innerEncoder.WriteMessage(message, stream);
}

One last detail: if you want to create the binding from app.config, you may want to make a small change to the GZipMessageEncodingElement class. By default it creates a TextMessageEncodingBindingElement without specifying anything; however, since we are trying to replicate a basicHttpBinding, we have to specify Soap11 as the message version and UTF-8 as the write encoding:

public override void ApplyConfiguration(BindingElement bindingElement)
{
    GZipMessageEncodingBindingElement binding = (GZipMessageEncodingBindingElement)bindingElement;
    PropertyInformationCollection propertyInfo = this.ElementInformation.Properties;
    if (propertyInfo["innerMessageEncoding"].ValueOrigin != PropertyValueOrigin.Default)
    {
        switch (this.InnerMessageEncoding)
        {
            case "textMessageEncoding":
                binding.InnerMessageEncodingBindingElement = new TextMessageEncodingBindingElement()
                {
                    MessageVersion = MessageVersion.Soap11,
                    WriteEncoding = Encoding.UTF8
                };
                break;
            case "binaryMessageEncoding":
                binding.InnerMessageEncodingBindingElement = new BinaryMessageEncodingBindingElement();
                break;
        }
    }
}

Putting it all together

Now we have a Behavior/MessageInspector pair that handles the HTTP headers, and a MessageEncoder that compresses the response body. We only have to tell our service to use them.

The binding: instead of a basicHttpBinding we will use a custom binding. From config:

<customBinding>
  <binding name="BufferedHttpSampleServer">
    <gzipMessageEncoding innerMessageEncoding="textMessageEncoding" />
    <httpTransport transferMode="Buffered"/>
  </binding>
</customBinding>

From code:

var encoding = new GZipMessageEncodingBindingElement(new TextMessageEncodingBindingElement(MessageVersion.Soap11, Encoding.UTF8));
var transport = new HttpTransportBindingElement();
var b = new CustomBinding(encoding, transport);

If you don’t need interoperability you can use binary XML instead of plain XML:

var encoding = new GZipMessageEncodingBindingElement(new BinaryMessageEncodingBindingElement());

It's possible that on the Silverlight side, if you add your service with "Add Service Reference", the generated client config is not correct. In that case just modify it to use a basicHttpBinding (or a custom binding with HTTP transport and binary encoding if you are using binary XML).
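For illustration, the client-side config for the binary case could look something like this (a sketch; the binding name, address and contract below are placeholders for your own service):

<customBinding>
  <binding name="BinaryHttpBinding">
    <binaryMessageEncoding />
    <httpTransport />
  </binding>
</customBinding>
<client>
  <!-- address and contract are placeholders -->
  <endpoint address="http://localhost:8000/MyService"
            binding="customBinding"
            bindingConfiguration="BinaryHttpBinding"
            contract="MyServiceReference.IMyServer" />
</client>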

Registering the behavior:

using (var myServer = new ServiceHost(typeof(MyServer))) // pass the service implementation type, not the contract interface
{
    myServer.Description.Endpoints[0].Behaviors.Add(new GZipBehavior());
    myServer.Open();
    // ... keep the host open (e.g. Console.ReadLine()) ...
}

Of course all of this can also be specified in config.
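For reference, a sketch of what that config could look like: registering the two extensions and attaching the behavior to the endpoint. The type strings below are placeholders (WCF usually wants the fully qualified, assembly-qualified name of your own classes there):

<extensions>
  <bindingElementExtensions>
    <!-- placeholder: the GZipMessageEncodingElement from the GZipEncoder sample project -->
    <add name="gzipMessageEncoding"
         type="GZipEncoder.GZipMessageEncodingElement, GZipEncoder" />
  </bindingElementExtensions>
  <behaviorExtensions>
    <!-- placeholder: the GzipBehaviorExtensionElement shown above -->
    <add name="gzipBehavior"
         type="MyServices.GzipBehaviorExtensionElement, MyServices" />
  </behaviorExtensions>
</extensions>
<behaviors>
  <endpointBehaviors>
    <behavior name="gzip">
      <gzipBehavior />
    </behavior>
  </endpointBehaviors>
</behaviors>
<!-- then set behaviorConfiguration="gzip" on the service endpoint -->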

Performance Considerations

With gzip compression you introduce a small overhead, and on very fast networks (or on localhost) you may notice slightly slower response times. However, in the real world the benefit is well worth it, in particular if you are transferring large messages as plain XML.

For example, I had a 1 MB message that went down to 65 KB with compression. That message included a pseudo-dataset (a lot of repetition in the XML) and was an ideal case for gzip compression, but I suppose that in most situations you will still greatly improve your service response times.

Gzip compression between WCF web service and Silverlight

Update (12/1/2009): this seems not to work anymore. I'm still investigating the cause, as it was definitely working (I even performed some comparison benchmarks for our product); or maybe I had some kind of hallucination, it happens too.
If you have any idea please share in the comments below! Thanks!

Please read this post for a working solution.

The scenario is the following: you have a Silverlight 3 application that consumes a WCF web service. By default the data exchange is serialized as XML, so the network payload quickly becomes large if your web service sends significant amounts of data.

XML content is a very good candidate for zip/gzip compression, so it's logical to look in this direction.
The bad news is that I found no documentation or forum/blog entries about this particular case, so it took me a while to find the solution. I even played with the WCF compression encoder sample, only to find that it's built the way it is because it targets a different scenario (but I learned a lot along the way).
The good news is that the solution is so simple it's not even funny; that's probably why nobody explains how to do it.

HTTP Content-Encoding

Most modern web servers support compression, and so do most browsers. When you browse the web you often receive compressed content without even noticing. You can use Fiddler2 to see this for yourself.

How it works: the browser tells the server what kinds of compression it supports by setting the Accept-Encoding HTTP header. For example, with

Accept-Encoding: gzip, deflate

a browser is saying "I can read content compressed with the gzip or deflate algorithms". The server then decides what is best depending on several factors (for example, the kind of data it's sending). If it opts for compression (let's say gzip), it compresses the content and adds this HTTP header to the response:

Content-Encoding: gzip

The browser now knows that the content is compressed and decompresses it with the appropriate algorithm before handing it to the HTML parser (or whatever else consumes it).
This is exactly the mechanism we will leverage.
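To make the exchange concrete, this is roughly what a compressed web service call looks like in Fiddler2 (headers trimmed down, the URL is just an example):

Request (sent by the browser on behalf of Silverlight):

POST /MyService HTTP/1.1
Content-Type: text/xml; charset=utf-8
Accept-Encoding: gzip, deflate

Response (compressed by the server):

HTTP/1.1 200 OK
Content-Type: text/xml; charset=utf-8
Content-Encoding: gzip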

Client Side (Silverlight)

Let's see how to prepare the client side.
We have to do two things: 1) tell the server we support compression, and 2) decompress responses if they are compressed.

How? The browser does all this for you. That's simply because any web server call made by Silverlight is handled (by default) by the browser's HTTP stack. This means the browser manages all the low-level details, such as adding the proper Accept-Encoding header and decompressing content based on Content-Encoding.
The whole process is completely transparent to Silverlight: we don’t have to do anything.

Wonderful, but since compression is still not happening it means we have to do something on the server.

Server Side (WCF)

Here we have to distinguish two cases: the service can be hosted in IIS or self-hosted. If it’s hosted in IIS you’ll just have to enable compression in the IIS control panel.

In my case, however, the service is self-hosted.
The key here is the endpoint's binding. Silverlight 3 only supports basicHttpBinding, so your WCF service probably uses the same (the two must match). The problem is that basicHttpBinding is "basic" indeed, and cannot do advanced things like handling compression.

The trick is to use a more "advanced" binding that can handle HTTP compression but is still compatible with the client's basicHttpBinding. Here is a customBinding that mimics basicHttpBinding:

<customBinding>
  <binding name="customHttpBinding">
    <textMessageEncoding messageVersion="Soap11"
                         writeEncoding="utf-8" />
    <httpTransport />
  </binding>
</customBinding>

The same in C#:

var customBinding = new CustomBinding(
    new TextMessageEncodingBindingElement(
        MessageVersion.Soap11,
        Encoding.UTF8),
    new HttpTransportBindingElement());

That’s all. Yes, nothing more is needed! This binding will automatically deal with compression the same way as IIS does.
You can fire up Fiddler2 and look at your web service calls. You can play with the Accept-Encoding header and see that the server behaves accordingly.

Maybe a future version of Silverlight will support more advanced bindings; in that case you probably won't even need this.

Two Silverlight 3 Gotchas

I've recently been working quite a bit with Silverlight 3, and here are a couple of weird problems I encountered. They are easy to solve, but I hope this post will save you some head scratching.

Gotcha #1 – Vertical scrollbar in IE8

When Visual Studio 2008 generates a test page for your Silverlight application, it creates an <object> tag with width and height set to 100%.

When you look at that page in any browser other than IE8, everything works fine: the Silverlight control takes up the whole page, no scrollbars. But if you open your website in IE8 you may notice a vertical scrollbar and a small white space below your Silverlight control.

[screenshot: unwanted vertical scrollbar in IE8]

Let’s look at the html generated by Visual Studio. Near the end there are these lines:

        </object>
      <iframe id='_sl_historyFrame' style=''></iframe>
    </div>
  </body>
</html>

Nothing suspicious at first sight. However, the problem is right there: it seems that IE8 allocates some vertical space for the whitespace/tabs/newlines between the closing tags.
The solution is easy: you can either remove the spaces/newlines:

  <iframe id='_sl_historyFrame' style=''></iframe></div>
</body>
</html>

or change the embedded style and set overflow to “hidden” instead of “auto”:

<style type="text/css">
html, body {
    height: 100%;
    overflow: hidden;
}

Gotcha #2 – “GET silverlight 3” Badge in Firefox 3.5

There is a neat way to pass startup parameters to Silverlight from the host HTML page: you specify them in the initParams param. For example:

<object data="data:application/x-silverlight-2">
    <param name="initParams" value="param_1=a; param_2=b" />

However, if you don’t have any parameter, you may be tempted to just leave the value attribute empty:

<param name="initParams" value="" />

Don't do it. For some reason, Firefox 3.5 has a problem with that empty attribute and, instead of loading your application, will believe that the Silverlight runtime is not installed. Your page will show the nice "Install Silverlight" image and will not even request the xap file.

[screenshot: the "Install Microsoft Silverlight" badge]
The solution is again pretty simple: if you don't have any parameters you can remove the whole param tag, or put in some placeholder characters (assuming they won't mess with your Silverlight app). And no, a single space won't work.
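For example, a harmless dummy value (the name "dummy" is arbitrary; just make sure your application ignores it) keeps Firefox 3.5 happy:

<param name="initParams" value="dummy=1" />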

Pretty dumb, I know…