Wednesday, December 28, 2011

Ninject, Interception and out/ref arguments/parameters

While implementing an aspect using Ninject along with its Ninject.Extensions.Interception extension I found that it wouldn’t work with arguments (parameters) that are ‘out’ or ‘ref’ (‘out’ is really just a special case of ‘ref’). I found a temporary solution at CodeProject and in parallel submitted an issue on Github for the Ninject guys to have a look at. I expected this to take a few weeks or even months to be resolved. But they surprised me in a positive way: support for out/ref arguments/parameters was implemented within 5 hours. Remo, do you sleep at all?
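
To illustrate the kind of member affected, here is a minimal example of my own (not code from the Ninject sources). For interception via dynamic proxies the method must be virtual, and the proxy has to copy the ‘out’ value back to the caller after the intercepted call completes — this is what initially failed:

public class PricingService {
   // The proxy must write the 'out' value back to the caller
   // after Proceed() -- this is what didn't work at first.
   public virtual bool TryGetPrice(string sku, out decimal price) {
      price = 42.0m; // stand-in for a real lookup
      return true;
   }
}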

Remo Gloor, one of the Ninject project contributors, responded saying that the feature would be included in Ninject 3.0. I checked and he had just committed the needed changes. Thank you, Remo, for the fast turnaround! To confirm, I updated my local sources from Github, compiled them and it all worked as expected. Sweet!

According to Remo, support for out/ref parameters/arguments will only be available when you use DynamicProxy2. If you want to make use of it prior to the 3.0 release of Ninject, just pull the latest set of sources and build them yourself. Happy coding!

Saturday, December 24, 2011

Razor, MVC3, Client-Side Validation and Multiple Submit Buttons

In a details view for creating a new model object I wanted to replace the Cancel link with a Cancel button. I wanted both to be buttons as I felt that to be more consistent. I didn’t realize that I was in for a surprise when I placed two submit buttons into a single form: of course, no matter which button I clicked, I would always end up in the same controller method. It was even worse. As client-side validation was enabled, clicking the Cancel button would perform client-side validation and not even make it to the server.

I googled and found a couple of ideas for how to handle multiple submit buttons. One type of suggestion is based on implementing a class derived from ActionMethodSelectorAttribute, for example here or here. Another type of suggestion is based on implementing a class derived from ActionNameSelectorAttribute, for example here.

I couldn’t get any of these to work the way I wanted. I found that they would work only when I disabled client-side validation or JavaScript on the client altogether. Only in these cases, and by giving the submit button a name using the ‘name’ attribute, would the button value be included in the request. This was therefore just a partial solution, as I wanted to support both scenarios (JavaScript disabled, JavaScript enabled). I also tried the actionmethod attribute on the input element, but either I used it incorrectly or this new HTML 5 attribute is not yet handled correctly by the browsers or by MVC on the server. I didn’t investigate this in more detail.

Here is the solution that worked for me.

First I implemented a JavaScript function which I placed at the end of the view (.cshtml):

<script type="text/javascript">
   function OnFormButtonClick(action, validate) {
      if (!validate) {
         document.forms[0].noValidate = true;
         document.forms[0].action = '' + action;
         document.forms[0].submit();
      }
   }
</script>

The first parameter of this short function is the name of the action (controller method) to be invoked on the server side, while the second parameter determines whether or not to validate on the client side. Of course the latter only kicks in when JavaScript is enabled on the client. If you want to use multiple submit buttons in several places you may want to put this function into a shared file such as ‘_Layout.cshtml’ or similar.

With the JavaScript function in place I then added the two submit buttons to the Razor view (HTML form) as follows:

<p>
   <input type="submit" value="Create" name="button"
    onclick="OnFormButtonClick('Create', true);" />
   <input type="submit" value="Cancel" name="button"
    onclick="OnFormButtonClick('Cancel', false);" />
</p>

This would now work with JavaScript enabled on the client. How about the scenario when JavaScript was disabled? In that case submitting the form would always submit to the same action (‘Create’ in my case) no matter which button was clicked. Some server-side code was required to make the distinction. At this point I added a class derived from ActionNameSelectorAttribute:

/// <summary>
/// Attribute that helps MVC with selecting the proper method when multiple submit buttons
/// exist in a single form.
/// </summary>
/// <remarks>The implementation is partially based on the approach described at
/// http://blog.maartenballiauw.be/post/2009/11/26/Supporting-multiple-submit-buttons-on-an-ASPNET-MVC-view.aspx</remarks>
[AttributeUsage(AttributeTargets.Method, AllowMultiple = false, Inherited = true)]
public class MultiSubmitButtonAttribute : ActionNameSelectorAttribute {
   public override bool IsValidName(ControllerContext controllerContext, 
                                    string actionName, 
                                    MethodInfo methodInfo) {
      // Implementation derived from:
      // http://blog.maartenballiauw.be/post/2009/11/26/Supporting-multiple-submit-buttons-on-an-ASPNET-MVC-view.aspx
      if (controllerContext.RequestContext.HttpContext.Request["button"] != null) {
         // JavaScript disabled.
         return controllerContext.RequestContext.HttpContext.Request["button"] == methodInfo.Name;
      }

      return actionName.Equals(methodInfo.Name, StringComparison.InvariantCultureIgnoreCase) 
         || actionName.Equals("Action", StringComparison.InvariantCultureIgnoreCase);
   }
}

The first if()-statement in this implementation evaluates to true only if JavaScript is disabled on the client. In that case the request contains the information about which submit button was clicked, and this can be compared to the name of the controller method (bear with me for a little longer to see how this works). With this new attribute available I was now able to add it to the controller methods handling the two cases (‘Create’ and ‘Cancel’):

[HttpPost]
[MultiSubmitButton]
public ActionResult Create(FormCollection collection) {
   // TODO: insert your 'Create' logic here
   return RedirectToAction("Index");
}

[HttpPost]
[MultiSubmitButton]
public ActionResult Cancel() {
   return RedirectToAction("Index");
}

When MVC tries to route the action to a controller method it will now also invoke the IsValidName method in the MultiSubmitButtonAttribute class. The two scenarios now work as follows:

Case 1 - JavaScript enabled: The little JavaScript function is executed on the client side and an appropriate action will be requested. The action is simply routed to the controller method of the same name. Depending on the second parameter of the call to the JavaScript function (and subject to server side settings in web.config) client-side validation is executed or not.

Case 2 – JavaScript disabled: The client includes the name of the submit button and the MultiSubmitButtonAttribute helps MVC to select the correct controller method.

Extending HtmlHelper to Simplify View Implementation

There is a way to simplify the view implementation by extending the HtmlHelper class as follows:

public static class HtmlHelperExtensions {
   public static MvcHtmlString SubmitButton<T>(this HtmlHelper<T> helper, string value, string action, bool validate) {
      return new MvcHtmlString(String.Format("<input type=\"submit\" value=\"{0}\" name=\"button\" onclick=\"OnFormButtonClick('{1}', {2});\" />", value, action, validate ? "true" : "false"));
   }
}

With this the submit buttons can be coded like this:

@* Namespace of the HtmlHelper.SubmitButton() implementation: *@
@using Web.Helpers; 

... other code for Razor based view

<p>
   @Html.SubmitButton("Create", "Create", true)
   @Html.SubmitButton("Cancel", "Cancel", false)
</p>

Note that although it appears as if the first two parameters here are duplicated, they are not. The first parameter is the label that appears on the submit button, which you may want to localize. The second parameter is the name of the action, which you typically don’t want to change. Even if everything is in the same language (e.g. English) the two can be the same, but they don’t have to be.
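
For example, the button labels could come from a resource class (Strings is a hypothetical name here) while the action names stay fixed:

<p>
   @Html.SubmitButton(Strings.CreateButtonLabel, "Create", true)
   @Html.SubmitButton(Strings.CancelButtonLabel, "Cancel", false)
</p>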

Automatically Injecting JavaScript

So far the solution requires adding the OnFormButtonClick() function manually, either to the Razor view or to a shared file. I would like to remove this requirement by making the HtmlHelper.SubmitButton<T>() implementation a little smarter:

public static class HtmlHelperExtensions {
   public static MvcHtmlString SubmitButton<T>(this HtmlHelper<T> helper, string value, string action, bool validate) {
      var javaScript = string.Empty;
      const string functionName = "_OnFormButtonClicked";
      if (!helper.ViewData.ContainsKey(functionName)) {
         helper.ViewData.Add(functionName, true);
         const string linefeed = "\r\n";
         // Inspiration for the following JavaScript function from:
         // http://www.javascript-coder.com/html-form/html-form-action.phtml
         javaScript = "<script type=\"text/javascript\">" + linefeed
                    + "   function " + functionName + "(action, validate) {" + linefeed
                    + "      if (!validate) {" + linefeed
                    + "         document.forms[0].noValidate = true;" + linefeed
                    + "         document.forms[0].action = '' + action;" + linefeed
                    + "         document.forms[0].submit();" + linefeed
                    + "      }" + linefeed
                    + "   }" + linefeed
                    + "</script>" + linefeed;
      }
      return new MvcHtmlString(String.Format("{0}<input type=\"submit\" value=\"{1}\" name=\"button\" onclick=\"{2}('{3}', {4});\" />", javaScript, value, functionName, action, validate ? "true" : "false"));
   }
}

The basic idea is that the first time a submit button is rendered the helper emits the JavaScript function and records a marker key in the ViewData. If that key is already present the code assumes the JavaScript function has already been added. With this, the JavaScript function can now be removed from the view and all that is needed is this:

  1. MultiSubmitButtonAttribute applied to appropriate controller methods
  2. Use of HtmlHelper.SubmitButton() in view definition

The JavaScript injection is taken care of automatically. Happy coding!

Monday, November 28, 2011

Automapping with Fluent NHibernate: ShouldMap() implementation

When you use the automapping feature of Fluent NHibernate you will very quickly encounter a class called DefaultAutomappingConfiguration. This is the base class for your own automapping configuration class which you want to implement to control certain aspects of the automapping feature.

During startup of your application you have to provide NHibernate with information about how to map your domain classes to the database schema and back. Fluent NHibernate allows doing this automatically. Well, most of the time. There are cases when you want to be a little more specific with regards to those automatically created mappings.

The code you want to execute during startup looks similar to the following:

private static ISessionFactory CreateSessionFactory() {
  var rawConfig = new Configuration();
  rawConfig.SetNamingStrategy(new PostgresNamingStrategy());
  var sessionFactory = Fluently.Configure(rawConfig)
    .Database(PostgreSQLConfiguration.PostgreSQL82
      .ConnectionString(Common.DatabaseConnectionString))
    .Mappings(m => m.AutoMappings
      .Add(AutoMap
        .AssemblyOf<Customer>(new DbMappingConfiguration())
      .Conventions.Add(ForeignKey.EndsWith("Id"))));
   return sessionFactory.BuildSessionFactory();
}

The interesting piece in this case is the call to AssemblyOf<Customer>(), where you provide the assembly that contains all your domain classes that you want to map. You only need to name one such class and Fluent NHibernate will also try to automatically map all other types that it can find in that assembly. Sometimes this is exactly what you want. In other cases you may not want to map all of the classes. Enter the DefaultAutomappingConfiguration class.

You can implement your own mapping configuration by deriving from DefaultAutomappingConfiguration. You can override just the aspects that you want to change; everything else is taken care of by the default implementation.

In this post I want to show one way for controlling which classes are mapped. A simplistic approach would be implementing your custom mapping configuration as follows:

public class DbMappingConfiguration 
    : DefaultAutomappingConfiguration {
  public override bool ShouldMap(Type type) {
    return type.Equals(typeof(Customer))
      || type.Equals(typeof(Order))
      || type.Equals(typeof(Address))
      || type.Equals(typeof(Invoice));
  }
}

This works. However, as you can imagine, this is not very intuitive and you will have to maintain this code as you modify your domain. Each time you add, remove or rename any of your domain classes you will have to update this method (admittedly the renaming is not a problem if you use a refactoring tool).

So let’s think of a better solution. The one I want to present here is based on a marker attribute. The implementation of PersistentDomainClassAttribute is very simple:

[AttributeUsage(AttributeTargets.Class, 
                AllowMultiple = false, 
                Inherited = false)]
public class PersistentDomainClassAttribute 
  : Attribute {
}

The usage is fairly simple, too. Here is an example:

[PersistentDomainClass]
public class Customer {
  public virtual Guid Id { get; private set; }
  public virtual string FirstName { get; set; }
  public virtual string LastName { get; set; }
}

Having this in place on all domain classes, the custom mapping configuration can now be simplified to the following general implementation:

public class DbMappingConfiguration 
   : DefaultAutomappingConfiguration {
  public override bool ShouldMap(Type type) {
    var attr = type.GetCustomAttributes(
                  typeof(PersistentDomainClassAttribute),
                  true);
    return attr.Length > 0;
  }
}

As a result you now have a solution that lets you choose which domain classes to make persistent. At the same time you no longer have to maintain the ShouldMap() implementation of your custom mapping configuration either.

Sunday, November 27, 2011

Behavior of DirectoryInfo.Delete and Directory.Exists: Directories reappear!?

Please note that the following may be a corner case or an isolated case. If not, then something is not quite right with running NUnit based test suites from within Visual Studio 2010 using ReSharper 6. Unfortunately I don’t have proof either way, but I still wanted to share my observation in case you have encountered a similar issue.

A couple of weeks ago I wrote some test code that cleaned up folders after tests were executed. The code looked as follows:

public static void CleanUpFolders(
                    List<DirectoryInfo> directoryInfos) {
   foreach (var directoryInfo in directoryInfos) {
      foreach (var file in directoryInfo.GetFiles("*", 
                            SearchOption.AllDirectories)) {
         file.IsReadOnly = false;
      }
      directoryInfo.Delete(true);
   }

   foreach (var directoryInfo in directoryInfos) {
      Assert.IsFalse(Directory.Exists(directoryInfo.FullName));
   }

   directoryInfos.Clear();
}

For each directoryInfo in the collection the code iterates through all files and sets the read-only attribute to false for each of them. Then the directory is deleted with all its contents.

Then I executed the test suite from within Visual Studio 2010 using ReSharper 6. It turned out that in some cases the assertion in the above code would fail. Despite the directories having been deleted, the call to Directory.Exists would return true! So for some reason the directory was deleted but then it wasn’t. When I reran the tests sometimes this assertion would fail and sometimes it wouldn’t. There were days when it was fine and there were days when the assertion would fail in 90% of the cases.

In addition to this, the assertions would only fail on two particular directories but not on any others. I couldn’t identify the commonality between the directories on which the assertion would fail and the directories that were successfully deleted.

Initially I thought that maybe a different process or a different thread had a handle to the directory. In that case my test would just ask the operating system (Windows 7, 64 bit) to mark the directory as deleted, and as soon as the last handle was closed it would be removed. However, the tests generated names and then created directories with those names. No other thread or process knew about the names. I didn’t have any thread in my system under test that would scan the parent directory, thus ‘learning’ about the generated directory names.

To diagnose this issue I tried various things including restarting Visual Studio and rebooting the workstation. I also tried some of Sysinternals’ tools.

The biggest surprise was an observation I made when running the tests under the debugger. I set a breakpoint just after the loop that deleted all the directories. At that point Windows Explorer would report those directories as gone. Also, when continuing the execution, the assertions would not fail. However, once the entire suite was complete, two directories would reappear in Windows Explorer! So despite Directory.Exists stating the directory to be non-existent, it reappeared! This is repeatable.

To take elements out of the equation I got the latest revision of csUnit’s source code and upgraded it to VS2010 and .NET 4.0 (these changes are available at Sourceforge). Then I executed the same test suite without any modification using csUnit. In this case the directories were properly deleted and did not reappear.

Where does that leave me? I don’t know. All I can say is this: My test suite creates a number of directories with generated names. When I execute this suite from within VS2010 and using ReSharper 6, two folders reappear despite DirectoryInfo.Delete() being executed successfully and Directory.Exists() confirming that. However when I execute the same test suite from csUnitRunner, the code behaves as expected and the folders remain deleted. Despite searching for a long time I have not been able to find the reason for this difference in behavior.

The only reasonable conclusion at this stage seems to be that as developers we always need to be suspicious about the tools we use. While they may work in almost all cases, sometimes they may be the cause of something that doesn’t work.

Tuesday, November 22, 2011

DirectoryInfo.Delete() when files are read-only

DirectoryInfo.Delete() will fail with UnauthorizedAccessException if that directory or any of its subdirectories contains a file that is read-only.

One solution is to remove the read-only attribute from all files. You can do so while recursively deleting directories. This approach is mentioned in some blogs and it looks as follows:

public static void RecursivelyDeleteDirectory(
                   DirectoryInfo currentDirectory) {
   try {
      currentDirectory.Attributes = FileAttributes.Normal;
      foreach (var childDirectory in 
                        currentDirectory.GetDirectories()) {
         RecursivelyDeleteDirectory(childDirectory);
      }

      foreach (var file in currentDirectory.GetFiles()) {
         file.IsReadOnly = false;
      }

      currentDirectory.Delete(true);
   }
   catch (Exception ex) {
      Console.WriteLine(ex); // Better option: Use log4net
   }
}

While this works, some people do not like recursion. So here is an option for how the same can be achieved without recursion:

public static void DeleteDirectory(
                   DirectoryInfo currentDirectory) {
   try {
      foreach (var file in currentDirectory.GetFiles(
               "*", SearchOption.AllDirectories)) {
         file.IsReadOnly = false;
      }
      currentDirectory.Delete(true);
   }
   catch (Exception ex) {
      Console.WriteLine(ex); // Better option: Use log4net
   }
}

Please note that these implementations report exceptions at the console. A better option would be to use a standard logging framework like log4net.
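
For illustration, here is the non-recursive variant with log4net instead of Console.WriteLine (a sketch; the class name is mine and the logger configuration is not shown):

using System;
using System.IO;
using log4net;

public static class DirectoryCleaner {
   // Configure log4net elsewhere, e.g. via XmlConfigurator, before use.
   private static readonly ILog Log =
      LogManager.GetLogger(typeof(DirectoryCleaner));

   public static void DeleteDirectory(DirectoryInfo currentDirectory) {
      try {
         foreach (var file in currentDirectory.GetFiles(
                  "*", SearchOption.AllDirectories)) {
            file.IsReadOnly = false;
         }
         currentDirectory.Delete(true);
      }
      catch (Exception ex) {
         // Full exception details go to the log instead of the console:
         Log.Error("Failed to delete " + currentDirectory.FullName, ex);
      }
   }
}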

Tuesday, November 08, 2011

ReSharper Not Executing Your Tests?

Today I ran into a small issue as ReSharper was not willing to execute my tests. Generally it does but today it was different. Let me explain.

I have two assemblies, one that contains the unit tests and one that contains the code under test. In ReSharper’s session window the unit tests were properly listed. I was even able to “execute” them. However, it would always show them as not executed, with a gray bullet in front of them. This was the behavior for both “Run Test” and “Debug Test”. The Output window was empty and didn’t show anything that would indicate what was wrong.

I also checked my unit tests, but all attributes were properly in place, both the class and the methods were public, and the methods had the correct signature. So what was going on?

To diagnose the issue I launched the unit test tool as a stand-alone application. When I tried to load the assembly with the unit tests it was immediately clear what was wrong when I saw the word BadImageFormatException displayed in a message box.

It turned out that the assembly with the unit tests was set to build for “Any CPU” while the assembly under test was built for “x86”. Since I am using a 64 bit machine the two assemblies were compiled for those targets: the unit tests as 64 bit, the assembly under test as 32 bit. No wonder it didn’t work.

This is easy to fix, in my case by setting the target for the assembly with the unit tests to “x86” as well. It would, however, be nice if ReSharper had reported the exception in some form as it would have saved time. Well, maybe in the next version? Maybe this post helps you to save some time.

(Note: I’m using ReSharper version 6.0.2202.688 with Visual Studio 2010 SP1 on a 64 bit Windows 7 Enterprise Edition)

Friday, November 04, 2011

Fluent NHibernate and Fluent Migrator

A question at Stack Overflow about whether NHibernate and migratordotnet play nicely together caught my interest. So I started a little experiment to find out myself. Instead of migratordotnet, however, I wanted to use Fluent Migrator because one of the teams I’ve been working with recently used it as well.

The first challenge I ran into was figuring out how to invoke and use Fluent Migrator from within an assembly. When you look at the sources of Fluent Migrator you will notice that it has several runners that can be used from the command line and from NAnt and MSBuild. None of these was what I was looking for. Instead I wanted an API. I couldn’t find one, so I decided to implement my own.

One interface, two classes and about 150 lines of code later I had it running. During startup, e.g. in a static initializer in the assembly, the migrations are executed using the following statement:

new Migrator(new MigratorContext(Console.Out) {
   Database = "postgres",
   Connection = ConnectionString,
   MigrationsAssembly = typeof(Global).Assembly
}).MigrateUp();
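
Migrator is one of the classes I implemented; it serves as the façade to Fluent Migrator, with MigratorContext carrying the settings. As a rough sketch of the shape only — a reconstruction, not the actual sources — the public surface looks like this:

using System.IO;
using System.Reflection;

public class MigratorContext {
   public MigratorContext(TextWriter announcerOutput) {
      AnnouncerOutput = announcerOutput; // where progress messages are written
   }
   public TextWriter AnnouncerOutput { get; private set; }
   public string Database { get; set; }             // provider name, e.g. "postgres"
   public string Connection { get; set; }           // the connection string
   public Assembly MigrationsAssembly { get; set; } // assembly scanned for migrations
}

public class Migrator {
   private readonly MigratorContext _context;

   public Migrator(MigratorContext context) {
      _context = context;
   }

   public void MigrateUp() {
      // Wires the context into Fluent Migrator's runner and executes
      // all pending Up() migrations found in MigrationsAssembly.
   }
}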

With that in place I can now write and implement regular migrations within my assembly, e.g. like the following:

[Migration(201110181840)]
public class CreateUserTable : Migration {
   public override void Up() {
      Create.Table("Users");
   }

   public override void Down() {
      Delete.Table("Users");
   }
}

As a result I can now write additional migrations as needed. This approach helps with development as we no longer need to run a separate tool. Each time the startup sequence is executed, e.g. when running developer tests which load the assembly, the database schema is brought up to date. And since I used the automapping feature of Fluent NHibernate I didn’t have to write mappings either.

What I noticed, though, is that although both Fluent NHibernate and Fluent Migrator now play together nicely, there appears to be code that looks similar. Take the following example of a domain class.

public class InputFile {
   public virtual Guid Id { get; private set; }
   public virtual Job Job { get; set; }
   public virtual string FileName { get; set; }
}

To have a table for this I also have the following Migration in the code:

[Migration(201111040531)]
public class CreateInputFilesTable : Migration {
   public override void Up() {
      Create.Table(TableName)
         .WithColumn("Id").AsGuid().PrimaryKey()
         .WithColumn("JobId").AsGuid().Indexed()
         .WithColumn("FileName").AsString();
   }

   public override void Down() {
      Delete.Table(TableName);
   }

   private const string TableName = "InputFile";
}

As you can see there are things that should not be required. For example the domain class already specifies that the property Id is of type Guid. This is equivalent to saying that the table should have a column named “Id” of type Guid.

An additional issue can arise when you add a column/property in one place but forget to add it in the other. Of course this will be reported via an exception the next time the code runs. However, it would be nice if I had to change it in a single place only.

So there appears to be an opportunity to simplify the code. Maybe I could rewrite the migration using reflection? And maybe I could rewrite that migration in a kind of generic way so I could reuse at least parts of it for other migrations.

Next I’ll be looking into what an improved solution might look like: ideally generic, or at the very least with much less duplication. Being able to avoid changing two files when modifying one domain class would be a nice first step.
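
To make the reflection idea concrete, here is a rough, untested sketch (the version number and the type-to-column rules are assumptions of mine):

using System;
using FluentMigrator;

[Migration(201111050000)] // hypothetical version number
public class CreateInputFilesTableByReflection : Migration {
   public override void Up() {
      var table = Create.Table(TableName);
      foreach (var property in typeof(InputFile).GetProperties()) {
         if (property.Name == "Id")
            table.WithColumn("Id").AsGuid().PrimaryKey();
         else if (property.PropertyType == typeof(string))
            table.WithColumn(property.Name).AsString();
         else if (property.PropertyType.IsClass)
            // Treat references to other domain classes as foreign keys:
            table.WithColumn(property.Name + "Id").AsGuid().Indexed();
      }
   }

   public override void Down() {
      Delete.Table(TableName);
   }

   private const string TableName = "InputFile";
}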

Update 13 Nov 2011: I have created a fork of Fluent Migrator on Github. The address for the fork is https://github.com/ManfredLange/fluentmigrator. In that fork I have added a new project FluentMigrator.InProc that contains the sources mentioned in this post.

Saturday, October 22, 2011

FluentConfigurationException: Pitfalls of Auto-Completion

Working on some code I ran into a FluentConfigurationException when trying to create the session factory. With (much) more experience I probably would have been able to resolve it much faster. I’d like to share my findings with you. Maybe it’ll help you save time.

The message I got was: “An invalid or incomplete configuration was used while creating a SessionFactory. Check PotentialReasons collection, and InnerException for more detail.” I followed the advice and checked the PotentialReasons collection. It had a count of 0. In other words, it was empty.

Next I checked the inner exception; it was of the same type FluentConfigurationException and the message was the same. Again the PotentialReasons collection was empty. However, there was an exception inside of the inner exception, this time of type System.InvalidOperationException. The message was: “Unsupported mapping type 'DataAccessTest.Job'”. Searching the internet didn’t yield much of an answer.

I tried a lot of different things and even created a new project. In the end I found that I was a victim of auto-completion. Here is the offending code:

public class JobMap : ClassMap<JobMap> {
   public JobMap() {
      Id(x => x.Id());
   }
}

As I typed Job to provide the type parameter for the generic base class ClassMap<T>, the auto-completion feature turned it into JobMap. Since this was just a test, all I wanted to map was the Id. Had I mapped other members I would have noticed that the line mapping the id is incorrect as well. Have a look at the class that I actually wanted to map:

public class Job {
   public virtual Guid Id { get; private set; }
}

You will notice that the member ‘Id’ is a property and not a method. In the mapping class above, the lambda expression ‘x => x.Id()’ tries to map a method ‘Id()’. The base class ClassMap<T> has a method ‘Id()’, so it was happily inserted by auto-completion as well. Had I mapped additional Job properties in the JobMap class I would have noticed that something was wrong. This way, however, I learned that although auto-completion has made me much more productive, there are times when it pays off to be very careful about what it does.

To complete this post here is the correct code for the mapping:

public class JobMap : ClassMap<Job> {
   public JobMap() {
      Id(x => x.Id);
   }
}

Notice the correct class name (‘Job’) as the type parameter for the generic base class ClassMap<T>, and also the correct lambda expression, now mapping a property rather than a method.

Monday, October 17, 2011

Strong Name Utility Message: "x.dll does not represent a strongly named assembly"

We are in the process of migrating our code base from Visual Studio 2008 to Visual Studio 2010. To give you an indication of the size of the effort: Our product consists of multiple solutions, most of them pure C#. One of the solutions, however, contains about 70 projects, most of which are C++ with a mix of both managed and unmanaged code. Some of these projects were originally created in the 90s and upgraded multiple times up to Visual Studio 2008. We are talking about over 1 million lines of C++ code.

To prepare for the actual upgrade we have a code branch separate from trunk. In this separate branch we run test upgrades on a regular basis. For a successful upgrade we need to eliminate all build errors that are reported after the migration. In addition we also endeavor to resolve all warnings, although we are aware that we may not be able to resolve them completely.

Today I came across this warning/error message in Visual Studio 2010 SP1 (Service Pack 1). While building I ran into a bug that was reported for Visual Studio 2010 and that was supposed to be fixed in SP1: signing a C++/CLI assembly worked in VS2008 but is broken in VS2010. A workaround has been provided for both VS2010 and VS2010 SP1.

When I used this workaround it worked like a charm for one project but failed for the next. In my view this indicated that there had to be at least one other factor influencing the outcome.

I used a diff tool to compare both project files (*.vcxproj) to see the differences in the compiler and linker options, and to eliminate these differences one at a time. In my particular case I found that I was able to resolve the problem by removing ‘/clr’ from the project settings and instead applying ‘/clr’ individually to each source file (*.cpp) that requires it.

Update: In some cases switching off incremental linking seems to help.

(Disclaimer: This solution may not work in all cases.)

Sunday, July 31, 2011

C# Guideline: Use String.Empty When You Can

Whenever you are tempted to write “” as a string somewhere in your C# code, use String.Empty instead. What is the difference? Technically one is a constant while the other is a read-only static field. There is no difference in behavior in this case. However, be aware that the two differ in their implementation.

The inline constant “” is baked in at compile time. This means that once the code has been compiled the value cannot be changed any more.

With the read-only field it is a little different. At compile time only a reference to the field is added. The value of this field is then read at runtime.

This can make quite a difference. For example, if the read-only field is defined in a different assembly and you then update that assembly with the field now having a different value, that new value will be used from then on.
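
Here is an illustration, assuming a hypothetical two-assembly setup:

// In LibraryA:
public static class Messages {
   public const string Greeting = "Hello";          // copied into call sites at compile time
   public static readonly string Farewell = "Bye";  // looked up in LibraryA at runtime
}

// In AssemblyB, compiled against LibraryA version 1.0:
Console.WriteLine(Messages.Greeting); // still prints LibraryA 1.0's value, even if a newer
                                      // LibraryA.dll is deployed without recompiling B
Console.WriteLine(Messages.Farewell); // prints whatever the deployed LibraryA defines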

Why do I believe this is a good guideline for C#? When you add your own “constants” you typically want to define and maintain them in one place only. With a static field you can achieve that. The code then looks as follows:

public class Foo {
   public static string LastName = "Smith";
}

And here is how you then use it:

public void Bar() {
   var address = "Mr. " + Foo.LastName;
   // ... remainder left out
}

If you ever need to change the LastName value you only need to change it in one place. Although String.Empty is very unlikely to change, for consistency reasons you should use it rather than “”.

The same guideline applies to all other types of constants as well, e.g. Type.EmptyTypes.

There is (at least) one exception to the use of String.Empty. If you have a switch statement on a string and one of the cases is the empty string, you will find that it won’t compile with String.Empty. The ‘case’ label requires a constant (which is evaluated at compile time) and doesn’t allow a read-only field (which is evaluated at runtime). So the only option then is to write the following:

switch(aString) {
   case "": // Can't write ‘case String.Empty:’ here!
      // something important to happen here
      break;
}

Monday, June 27, 2011

Cannot Connect To IIS running On Windows 7

Today I tried to set up a web server connected to my wireless router. The objective was to be able to test a web site I’m working on. I have a Netgear N300 Wireless Dual Band ADSL2+ Modem Router, model DGND3300v2. I am being this specific to make it easier for people to find this blog post.

The first challenge was to set up the port forwarding. By default the wireless router blocks all connections that are initiated from the outside. Some people refer to this as incoming traffic, although this is strictly speaking not precise.

What you want to set up is an inbound rule for the HTTP service. There is a good description for how to do this on Netgear’s support site.

To make sure your web server doesn’t change its internal IP address you may want to assign a permanent address instead of using DHCP. Again this is easy to do. Just open the configuration tool on the router and go to “LAN Setup”, which is found under “Advanced” in the left-hand menu. Don’t forget to click the “Apply” button!

At this point you should be able to see new entries in the log file of the router. These entries should show something like “Mon, 2011-06-27 20:51:18 - TCP Packet - Source:x1.x2.x3.x4,49402 Destination:y1.y2.y3.y4,80 - [HTTP rule match]” where x is the source IP address and y is the destination IP address. (For obvious reasons I have left them out of this post.)

You may still receive an “HTTP 504” error, which indicates that the web server is taking too long to respond. In my particular scenario it meant that the server wasn’t answering at all. After quite some research I found that on my Windows 7 machine the Windows Firewall had the rule “World Wide Web Services (HTTP Traffic-In)” disabled, resulting in all inbound traffic on port 80 being rejected. Since IIS listens on port 80 by default it was no wonder it took too long …

After I enabled the rule, thus allowing inbound traffic on port 80, everything worked like a charm. I wrote this post hoping it may save other people some time. Good luck!

BTW: I’m very satisfied with the Netgear N300 router. It works like a charm and I’ve never had an issue with it. Prior to this I had a Linksys router, which needed a reset at least once a month, at times more often, because one of the computers at my place wouldn’t be able to connect unless I had rebooted the router. The Linksys router also didn’t have a reset button or a power button. To make it reboot I had to unplug it from the wall power outlet. The Netgear N300 router has a much better range, does 802.11n as well and supports both 2.4 GHz and 5 GHz. If you are looking for a reliable router I would strongly recommend taking a look at the Netgear N300. (No, I didn’t get it for free from Netgear and had to buy it like everybody else.)

Tuesday, June 21, 2011

BIN Deploying ASP.NET MVC 3 with Razor to a Windows Server without MVC installed

Scott Hanselman discussed in his blog some time ago the challenge of deploying MVC 3 applications to servers where MVC is not installed. He described several options.

I am not questioning that the different options he offers come with different advantages and disadvantages. It is essentially up to you to decide which option you want to use.

In this post I want to summarize the option that I used for a deployment where MVC 3 was not installed on the server. In a first step I created a web site project, and inside of the project folder I created a folder that I named “mvc3”. Into that folder I copied the following assemblies:

  • Microsoft.Web.Infrastructure.dll
  • System.Web.Helpers.dll
  • System.Web.Mvc.dll
  • System.Web.Razor.dll
  • System.Web.WebPages.Deployment.dll
  • System.Web.WebPages.dll
  • System.Web.WebPages.Razor.dll

Next I added all of these as references to the web site project. In all cases I set “Copy Local” to “true” to make sure they would be picked up upon publication.

Finally I built the site and then used “Publish…” to get the web site onto the server.

With these changes it just worked like a charm.

If at some point the server supports MVC 3 out of the box then you will certainly want to remove the MVC 3 related assemblies.

This solution may not be the right choice for your scenario. Check Scott’s post for other options that may work better for you.

Saturday, April 23, 2011

Fluent-NHibernate, PostgreSQL and Identifiers

In PostgreSQL identifiers for tables, columns, etc. are case sensitive. The problem is, though, that when you access PostgreSQL through the .NET data provider (e.g. Npgsql) and don’t double-quote the identifiers, they will be interpreted as lower case. As a result PostgreSQL may tell you that it doesn’t know table ‘MyTable’ as the query is sent to PostgreSQL as ‘mytable’, which is a different identifier than ‘MyTable’.

When you build your SQL queries yourself this is not a major issue. Just add the double quotes. It becomes more of a challenge when you want to use Fluent-NHibernate.
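
For example, when building the query yourself in C# (table and column names here are illustrative):

// Unquoted identifiers are folded to lower case by PostgreSQL;
// double quotes preserve the exact casing:
var sql = "SELECT \"Id\", \"Name\" FROM \"MyTable\"";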

I searched the internet but couldn’t find a solution that worked for me. For example, one answer at Stack Overflow suggested making all identifiers lower case, i.e. keeping table names, column names, etc. lower case. While this may work in some cases it doesn’t work in others. Changing the database schema was not an option in my case.

Others (e.g. here) recommend the use of FluentConfiguration.ExposeConfiguration(cfg => cfg.SetProperty("hbm2ddl.keywords", "auto-quote")), but according to several sources it doesn’t seem to work properly or at all. This solution didn’t work for me either.

Fabio Maulo describes the official programmatic way for NHibernate to enable quoting tables and columns as follows:

SchemaMetadataUpdater.QuoteTableAndColumns(configuration);

I couldn’t get this to work in combination with Fluent-NHibernate either. NHibernate.Dialect reported a System.NotSupportedException.


A first workable option is providing the identifiers via the domain mappings. Let’s look at an example for this approach:

public class User {   
   public virtual int Id { get; private set; }
   public virtual string Name { get; set; }   
   public virtual string Password { get; set; }
}

A simple mapping for this class including specifying the names using double-quotes looks like this:

public class UserMapping : ClassMap<User> {
   public UserMapping() {
      Table("\"User\"");
      Id(x => x.Id).Column("\"Id\"");
      Map(x => x.Name).Column("\"Name\"");
      Map(x => x.Password).Column("\"Password\"");
   }
}

This works but has the drawback that you have to specify the names in every single case. Although a one-off, this could still be quite some work if you have a large database schema with over a thousand tables. I wanted to have something simpler, something that would keep the logic in a single place.

And I didn’t want to modify Fluent-NHibernate or NHibernate sources either. Instead I wanted to use the official interfaces.

The solution that worked for me was implementing the INamingStrategy interface from the NHibernate.Cfg namespace. Before I show you the implementation, here is how you can use it:

public static ISessionFactory CreateSessionFactory() {
   var rawConfig = new Configuration();
   rawConfig.SetNamingStrategy(new PostgresNamingStrategy());
   var fluentConfiguration = Fluently.Configure(rawConfig)
      .Database(PostgreSQLConfiguration.PostgreSQL82
                  .ConnectionString(ConnectionString))
      .Mappings(m => m.FluentMappings.AddFromAssemblyOf<User>())
      .BuildConfiguration();
   return fluentConfiguration.BuildSessionFactory();
}

This adds only a small amount of additional code to the creation of the session factory. First we create a raw NHibernate Configuration object and set the naming strategy on it via SetNamingStrategy(). From there on I can use the fluent interface, passing the raw Configuration object as the parameter to Fluently.Configure(). Note that to come into effect the naming strategy must be set before any mappings are added to the configuration.

As a result of setting the naming strategy you can simplify the mapping to:

public class UserMapping : ClassMap<User> {
   public UserMapping() {
      Id(x => x.Id);
      Map(x => x.Name);
      Map(x => x.Password);
   }
}

The need to specify quoted column names is gone. Equally we don’t need to provide the quoted table name anymore.

And here is the implementation of the INamingStrategy interface:

internal class PostgresNamingStrategy : INamingStrategy {
   public string ClassToTableName(string className) {
      return DoubleQuote(className);
   }
   public string PropertyToColumnName(string propertyName) {
      return DoubleQuote(propertyName);
   }
   public string TableName(string tableName) {
      return DoubleQuote(tableName);
   }
   public string ColumnName(string columnName) {
      return DoubleQuote(columnName);
   }
   public string PropertyToTableName(string className, 
                                     string propertyName) {
      return DoubleQuote(propertyName);
   }
   public string LogicalColumnName(string columnName, 
                                   string propertyName) {
      return String.IsNullOrWhiteSpace(columnName) ?
          DoubleQuote(propertyName) :
          DoubleQuote(columnName);
   }
   private static string DoubleQuote(string raw) {
      // In some cases the identifier is single-quoted.
      // We simply remove the single quotes:
      raw = raw.Replace("`", "");
      return String.Format("\"{0}\"", raw);
   }
}

Note that in some cases you may have to remove single quotes first before you add double quotes, e.g. when an identifier is a reserved name. See implementation of the private method DoubleQuote().

27 May 2011: Updated this article with actually working code. Thanks for the feedback from various people.

Disclaimer: Source code is provided “as-is”. Use at your own risk. In your environment this solution may need to be adapted or may not work at all. The configuration I used for my experiments was PostgreSQL 9.0 running on 64 bit Windows 7, Npgsql 2.0.11.0, Fluent-NHibernate 1.2, and Visual Studio 2010.

Monday, March 28, 2011

PostgreSQL and (Index) Names

Creating an index in PostgreSQL can be achieved by executing the following SQL command:
CREATE UNIQUE INDEX "<indexName>" 
ON "<tableName>" 
USING btree (<columnNames>);

There is one caveat, though. If you specify an index name that is longer than 63 characters, PostgreSQL will truncate it and not tell you. By issuing the following command you can list all indexes present in the database:
SELECT * FROM pg_catalog.pg_indexes;

In general a limit of 63 characters should not be a problem. However, in my case I was working on a tool which generates index names, and it hit this limit. Of course, I’m now using a different algorithm to generate the index names and the problem is solved.
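
As an illustration of the kind of guard such a generator can apply (this sketch is my own, not the tool’s actual algorithm):

// Keep generated index names within PostgreSQL's 63 character limit
// by truncating and appending a short hash to preserve uniqueness.
private static string MakeIndexName(string tableName, string columnName) {
   var name = string.Format("ix_{0}_{1}", tableName, columnName);
   if (name.Length <= 63) {
      return name;
   }
   var hash = name.GetHashCode().ToString("x8");  // 8 hex characters
   return name.Substring(0, 63 - 9) + "_" + hash; // 54 chars + '_' + hash = 63
}
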
I also suspect that this 63 character limit applies to other names as well. PostgreSQL has defined a type ‘name’ which has a size of 64 (including the terminating character). This type is used in several places and its definition is available via the statement:
SELECT * 
FROM pg_catalog.pg_type
WHERE typname='name';

Personally I would prefer it if PostgreSQL rejected the CREATE INDEX statement and returned an error message instead.

Note: The above SQL statements may use PostgreSQL specific syntax, tables, views and functions. Other database systems may require a different syntax and may have different limitations on names. For my experiments I used PostgreSQL 9.0.2 (64 bit) running on Windows 7.

Friday, March 04, 2011

Cannot toggle breakpoint with F9 key

Embarrassing, but I still would like to share this in case it drives you nuts ... well, I even re-installed Visual Studio to resolve this. What happened?

All of a sudden, setting breakpoints using F9 stopped working. I had just finished installing Service Pack 1 for my OS, so I had a prime suspect. Or so I thought. After removing one plug-in at a time, repairing Visual Studio 2010 and eventually reinstalling it, it still wouldn't work.

I don't know what made me check this, but for some reason I found that some function keys would still do something. After some experimentation I found that my keyboard has a special key "F Lock", and after I pressed it all was back to normal. Most keyboards don't have that key. However, I have a Microsoft Natural Ergonomic Keyboard and it does. Apparently I must have pressed the "F Lock" key accidentally. Oh, well! I guess it's Friday night and time to get some sleep ...

Monday, February 21, 2011

All About Agile

I just added another entry to the “Interesting Links” section. There are quite a few sites about agile approaches, in particular for software development. Kelly Waters has put a lot of effort into the site – “All About Agile” – over the last few years, and I find the material and links to further information very valuable. Have a look and I’m sure you will find nuggets, too.