MVC4, Progressive Enhancement and MVVM

By Mike Hanson at February 15, 2013 17:04
Filed Under: .NET, MVC, Commentary

So I am “resting” again between contracts and have been getting stuck in to re-writing my Virtual Cycling community site with ASP.NET MVC4.  Apart from the obvious, the site exists so that I have a public place to practice and show off my ASP.NET and web development skills.  It started out as a standard WebForms 1.0 site, then progressed to ASP.NET Ajax, and I even added a few new features using Silverlight.  Now, with HTML5 becoming ever more prominent (even though it still isn’t a finished standard), I think it is time for a complete re-write, which provides me with an opportunity to morph it into a more general site for active people while retaining the existing functionality.


I’ve been learning MVC4 for a year or so now (I think I mentioned before that when MVC and Silverlight were released around the same time I opted to throw myself into Silverlight), which has made for a nice distraction from the primarily Silverlight and WPF work I have been doing for the last few years.


The Kindle app on my iPad is packed with books on ASP.NET MVC, HTML5, CSS3 and JavaScript that I have been using to bone up on relevant subjects, and Progressive Enhancement struck me as pretty crucial for a modern web site/application.  I also noted the rise of KnockoutJS and the use of the MVVM pattern, which I like the idea of (being very familiar with it from my Silverlight and WPF work).  I’ve tinkered with several architectures/models/patterns for the new site and have settled on a pretty stable set that works for me.  I’ve changed direction a few times and learned a lot of lessons in the process, so I thought others might benefit from seeing what I have settled on and decided to write a series of blog posts to document it.  If nothing else I will have documented it for myself and demonstrated a grasp of MVC4 and other technologies to potential employers.


So to start I am just going to list the patterns and technologies I have settled on, which will at least provide a taster for what is to come and maybe tempt you back to read the follow up articles.


Back End/Middle Tier
  • AutoFac 3 (IoC container)
  • Entity Framework 5.0 (Code First and Migrations)
  • SQL Server 2012
  • NLog

JavaScript Libraries
  • jQuery
  • jQuery UI
  • KnockoutJS
  • log4javascript
  • HistoryJS
  • js-signals

Testing
  • NUnit – TDD Unit Tests
  • FluentAssertions
  • SpecFlow – BDD Acceptance Tests
  • SpecSalad
  • Telerik Testing Framework (Free) – UI Automation



Visual Studio 2012

ReSharper 7

NCrunch – Continuous Test

.NET Demon – Continuous Build

BitBucket – Source control repository

TortoiseHG – Source control client

VisualHG – IDE source control

NuGet – Package management


I won’t necessarily be covering usage of all the tools, but I am happy to answer any questions regarding usage or choices.

NLog MVC Integration

By Mike Hanson at April 05, 2012 19:23
Filed Under: .NET, Logging, NLog, MVC, DbContext, Code First, Migrations

Silverlight 2 and ASP.NET MVC showed their heads around the same time, and despite all my experience developing with ASP.NET WebForms I jumped in the direction of Silverlight and missed the magic of MVC.  Recently I had the opportunity to work on a little MVC project and really enjoyed the experience.  Since then I have been working on re-writing a personal project; initially I started with MVC 3 but have since moved it up to 4.  I generally use NLog for logging and wanted to integrate it into my MVC project, and I wanted to use Entity Framework DbContext, Code First and Migrations.  I figured this might be something others would want to do, so I set about creating a little library to make it easy and fairly painless.  The project lives on GitHub and includes the following:


  • A custom NLog Target that writes log entries to a database using a DbContext instance
  • A custom HandleErrorAttribute that logs exceptions via the NLog Target
  • A Logger utility class that you can instantiate or inject for other components to implement logging throughout your application

I have documented NLog.Mvc within the project wiki at GitHub, but feel free to ask questions, give me feedback or even request features.  It is working as I need it to in my project, but I haven’t exhaustively tested it in other scenarios.
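As a taster of what the custom Target involves, a database-backed NLog target boils down to deriving from TargetWithLayout and overriding Write.  This is only a minimal sketch of the general pattern, not the actual NLog.Mvc code; the target name and the SaveLogEntry helper are invented for illustration:

```csharp
using System;
using NLog;
using NLog.Targets;

// Sketch of a custom NLog target; "SimpleDatabase" and SaveLogEntry
// are illustrative, not the real NLog.Mvc types.
[Target("SimpleDatabase")]
public class SimpleDatabaseTarget : TargetWithLayout
{
    protected override void Write(LogEventInfo logEvent)
    {
        // Render the message using the configured layout, then hand the
        // entry to whatever persistence mechanism you use (e.g. a DbContext).
        var message = this.Layout.Render(logEvent);
        this.SaveLogEntry(logEvent.TimeStamp, logEvent.Level.Name, logEvent.LoggerName, message);
    }

    private void SaveLogEntry(DateTime timeStamp, string level, string logger, string message)
    {
        // Persist the entry via your DbContext here
    }
}
```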

The next Killer Tool for TDD

By Mike Hanson at April 04, 2012 16:58
Filed Under: Visual Studio, Testing, Test Runners, TDD, Productivity

If you practice TDD in any fashion you are going to want to get NCrunch.  It is still in beta, and may become a commercial product at some point, but it is pretty stable and has become an essential tool for me.  Imagine being able to just write code (test or functional) without having to pause to run tests.  Imagine making a change to existing code and knowing immediately if it broke any tests, without having to run them manually.  This continuous testing ability is what NCrunch delivers.  It has made me significantly more productive in my coding, and within a couple of weeks I found it as necessary as a productivity tool like CodeRush or ReSharper.


Just spend a few minutes watching the video on the NCrunch home page and if you aren’t immediately hooked on the idea I will be very surprised.

Ease your SpecFlow pain with SpecSalad

By Mike Hanson at April 04, 2012 16:41
Filed Under: BDD, Acceptance Testing, Testing, Test Runners, SpecFlow

Just finished a contract where we used SpecFlow to define our acceptance tests, and I have been using it on my personal projects for a while.  Anyone using SpecFlow quickly learns that you have to be careful how you organise your step definitions or you can get into a real mess.  It is a great tool, but if you tend to just dive in without first finding out best practice it can be painful.


Recently I discovered SpecSalad.  Duncan Butler has ported CukeSalad over to .NET for us, and I for one am glad he did.  Basically SpecSalad takes the pain away by allowing you to define your system as a set of Roles and Tasks.  Steps are replaced by Tasks, and as each one is a separate class it is much easier to keep track of and reuse.  I won’t regurgitate the “how to” as Duncan has some great posts on his blog to get you started.


SpecSalad necessarily constrains the vocabulary you must use, but this has a cool side effect of making it easier for new adopters of BDD to learn that vocabulary.  On my last project the vocabulary was something of a hurdle to getting the business on board and defining the acceptance tests; for much of the project we developers were defining acceptance tests after the fact, never a good thing in my book.

PDF in WPF Application without Acrobat

By Mike Hanson at November 09, 2011 05:07
Filed Under: .NET, WPF

A big part of the application I am working on at the moment is viewing PDF documents.  It is a WPF application for a change, and I picked up the task of resolving some bugs around the use of Adobe Acrobat Reader.  We were following the accepted model of hosting the Acrobat ActiveX control in a WindowsFormsHost control.  The killer for us was the fact that Acrobat has a fixed limit on the number of files that can be opened in a session, and we are regenerating files as part of the app and re-opening them.  There were some other issues around the versions installed on users’ desktops, but the fixed file limit topped all of these.  So I started looking around for alternatives, and after a lot of Googling and blog reading finally found PDFView4NET from O2 Solutions.


I am working on some getting started articles that will follow soon, but I wanted to get something out there that might help others find this tool, and more importantly to communicate the great support experience I had with these guys.  The library, and in particular the PDFPageView control, is easy to use and there are plenty of samples to help you get started, but the best thing is the response from the support team.  The control works well and we had no major problems with the general output; however, the documents we generate are legal documents, so we had to be ultra fussy about what is displayed on screen, and we had a few minor issues to report.  Also, we are implementing MVVM, and some of the properties we wanted to set through binding weren’t dependency properties.  I contacted the O2 Solutions support team with these issues and they not only responded very quickly, but the very next day I had a new set of binaries with most of our issues resolved, and the properties I mentioned could now be controlled through binding.  Over the next couple of days I had other suggestions and a few more really trivial issues, which were also dealt with quickly, with new binaries delivered just as promptly.
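For context, making a control property settable through WPF binding means registering it as a dependency property.  A minimal sketch of what that looks like, using an invented PdfViewer control and Zoom property (not O2 Solutions’ actual API):

```csharp
using System.Windows;
using System.Windows.Controls;

// Hypothetical control: illustrates the standard WPF dependency property
// pattern that makes a property targetable by data binding.
public class PdfViewer : Control
{
    public static readonly DependencyProperty ZoomProperty =
        DependencyProperty.Register(
            "Zoom", typeof(double), typeof(PdfViewer),
            new FrameworkPropertyMetadata(1.0,
                FrameworkPropertyMetadataOptions.BindsTwoWayByDefault));

    // CLR wrapper so the property is also usable from code
    public double Zoom
    {
        get { return (double)this.GetValue(ZoomProperty); }
        set { this.SetValue(ZoomProperty, value); }
    }
}
```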


I can’t recommend these guys enough, and the product is excellent.  We have it integrated in our application and no longer have any dependency on Acrobat, or any of the issues that came with it.  Our support team are really happy that they no longer have to make sure hundreds of desktops have the correct version of Acrobat installed, or lock them down to prevent users installing incompatible versions.  The other developers and I are really happy we don’t have to deal with the COM Interop issues, particularly the inconsistent behaviour of Acrobat under automation in our acceptance tests.


If you are working on a WPF or WinForms application (and very soon Silverlight) and need to display or work with PDF files, then make sure you look at the PDF4NET range of products.

UI Automation Build Validation

By Mike Hanson at July 30, 2011 05:26
Filed Under: .NET, Testing, UI Automation, Visual Studio

On my last few contracts I have advocated acceptance testing through UI automation, but one of the most common wishes expressed by those coding up tests was for some way to validate that our automation wrappers actually still reflected the control they wrapped in the application under test.  Something as simple as renaming a button would break our tests.  It wasn’t until the next scheduled run of our acceptance tests that the breakage would be identified.


As mentioned in my last post I am working on a Windows Automation Toolkit (WATKit), I haven’t finished it yet but I figured an early feature should be some kind of build time validation to resolve the above issues.  As of today WATKit includes an MSBuild Task and a few attributes to do just that.




The build task basically compares types in a test assembly that are decorated with AutomationTypeMappingAttribute against elements of types in one or more source assemblies.  The test assembly and source assemblies are specified in the build configuration file (aka the VS 2010 project file).  Here is what I added to the WATKit.Tests.csproj file in the WATKit source available on GitHub:


  <UsingTask TaskName="WATKit.Build.WATKitBuildTask"
             AssemblyFile="$(TargetDir)\WATKit.dll" />
  <Target Name="AfterBuild">
    <WATKitBuildTask TestAssembly="$(TargetDir)WATKit.Tests.dll"
                     SourceAssemblies="$(SolutionDir)\WATKit.TestApp.WPF\bin\Debug\WATKit.TestApp.exe" />
    <WATKitBuildTask TestAssembly="$(TargetDir)WATKit.Tests.dll"
                     SourceAssemblies="$(SolutionDir)\WATKit.TestApp.WinForms\bin\Debug\WATKit.TestApp.exe" />
  </Target>


The first thing this does is reference the build task with a <UsingTask /> element.  With this in place the task can be used as an element within any <Target /> element.  I simply uncommented the AfterBuild target that is included in most project files, and I have the task running to validate against the two test apps in the solution.


The build task has two required properties:


TestAssembly points to the assembly containing the wrappers to be validated.  It doesn’t have to be a test assembly; if you keep your wrappers in a separate class library then it is that library you should reference.


SourceAssemblies is a comma-separated list of one or more assemblies that contain the real controls and windows.  It works with WPF and WinForms applications (I haven’t tested it on other project types; I don’t see any reason it won’t work with Silverlight 4.* assemblies, but I haven’t tested that yet).




In the test assembly only types decorated with the AutomationTypeMappingAttribute are checked.  Each Property on these types is checked unless it is decorated with an IgnoreAttribute.  Methods and Fields are implicitly ignored.


The AutomationTypeMappingAttribute has a single constructor argument that must be the fully qualified name of the type to validate against in one of the source assemblies.  For example, the MainWindow wrapper in the WATKit.Tests project is declared like this:

[AutomationTypeMapping("WATKit.TestApp.MainWindow")]
public class MainWindow : Window
{
}

This tells the build task to validate MainWindow in the test assembly against the first WATKit.TestApp.MainWindow it finds in the source assemblies (the search is carried out in the order the assemblies are listed).




By default the build task will check every Property on the wrapper type unless you tell it otherwise by decorating a Property with the IgnoreAttribute.  You can optionally specify a Reason for ignoring it, so that others will know why.  For example, the DynamicButton property of the MainWindow mentioned above looks like this:

[Ignore(Reason = "Button does not exist until run time")]
public Button DynamicButton { get { /* removed for brevity */ } }

Many properties in the base classes in WATKit are decorated with this attribute to avoid build failures.




By default the build task will look for an exact match of the wrapper Property name with the name of a Field (child controls in WPF and WinForms are exposed as fields, not properties) in the type it is validating against.  If for some reason the names do not match, you can decorate the wrapper property with the AutomationMemberMappingAttribute and specify the name to match.  For example, the ChangeMyNameButton property of the MainWindow mentioned above looks like this:

[AutomationMemberMapping("IChangeMyNameButton")]
public Button ChangeMyNameButton { get { /* removed for brevity */ } }

This tells the build task to validate the ChangeMyNameButton property against a field named IChangeMyNameButton.


That’s it; no rocket science, but it does the job and, based on my experience, should help give early feedback about breaking changes in applications tested via UI Automation.  The build task is not limited to tests that use the features of WATKit; it should work with any automation framework, as long as you create wrappers for your controls and those wrappers expose properties representing the child controls and elements exposed by the real types.
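The core of such a validation pass can be sketched with plain reflection.  This is not the actual WATKit implementation, just a hedged illustration of the comparison described above, with wrapper properties matched against fields on the real type:

```csharp
using System;
using System.Linq;
using System.Reflection;

public static class WrapperValidator
{
    // Returns the names of wrapper properties that have no matching field
    // on the source type. Child controls in WPF/WinForms are generated as
    // fields, which is why properties are compared against fields.
    public static string[] FindUnmatchedProperties(Type wrapperType, Type sourceType)
    {
        return wrapperType
            .GetProperties(BindingFlags.Public | BindingFlags.Instance)
            .Select(p => p.Name)
            .Where(name => sourceType.GetField(name,
                BindingFlags.Public | BindingFlags.NonPublic | BindingFlags.Instance) == null)
            .ToArray();
    }
}
```

A real build task would also honour the Ignore and AutomationMemberMapping attributes before reporting a property as unmatched.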


If you have any feedback or suggestions for improving this please let me know by commenting here or over at GitHub.

Fluent Automation

By Mike Hanson at July 24, 2011 18:59
Filed Under: .NET, Testing, UI Automation

For those who don’t like reading long posts: I have created a Fluent API for automated UI testing based on components in the System.Windows.Automation namespace that ship with .NET 3.0.  I have posted the early code for it on GitHub.  It will be fully documented on the GitHub Wiki, and I will post on it here.  If you want to know how I got to this point, and why I am creating another UI Automation API, read on.


After an aborted attempt to retire early I am back in a new contract in central London, working as part of an agile team on a WPF fat client application.  As is common in agile teams, developers have to code acceptance tests that automate the UI.  As earlier posts demonstrate I have some experience of this with web apps and Silverlight, but I had never had to do it with a WPF app.  Coincidentally I have started work on a desktop client for a community site I run, and I am doing so using BDD with SpecFlow.  Ranorex is the platform of choice at work, but it is a commercial product outside of my budget, so I started looking at free/open source WPF automation options in my spare time.


Since v3.0 the .NET Framework has included an automation framework in the System.Windows.Automation namespace.  This is all packaged in a set of assemblies beginning with UIAutomation in C:\Program Files (x86)\Reference Assemblies\Microsoft\Framework\v3.0.  These are not specifically for automated UI testing, but they make it possible.  They aren’t difficult to use, but the code is quite verbose, and almost immediately I wanted a more developer-friendly API.  I already knew of an open source project on CodePlex called White that I had reviewed for use with Silverlight.  I looked for others but couldn’t find anything that wasn’t commercial, so I downloaded White and started wiring it into my acceptance tests.
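To give a feel for that verbosity, finding and clicking a single button with the raw framework looks something like this (the path and automation id are made up for the example):

```csharp
using System.Diagnostics;
using System.Windows.Automation;

// Launch the application and wait for it to become idle
var process = Process.Start(@"C:\MyApp.exe");
process.WaitForInputIdle();

// Find the main window by process id, then a button by automation id
var window = AutomationElement.RootElement.FindFirst(
    TreeScope.Children,
    new PropertyCondition(AutomationElement.ProcessIdProperty, process.Id));
var button = window.FindFirst(
    TreeScope.Descendants,
    new PropertyCondition(AutomationElement.AutomationIdProperty, "MyButton"));

// Click the button via the Invoke pattern
var invoke = (InvokePattern)button.GetCurrentPattern(InvokePattern.Pattern);
invoke.Invoke();
```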


As far as launching my application and finding windows and controls went, it worked fine and looked promising, but my very first really simple attempt to create a CustomUIItem failed miserably.  If you have read any of my articles on WebAii you will know I recommend creating wrappers for the windows, views and controls in your application, and having your tests drive these wrappers rather than finding individual elements repeatedly.  The CustomUIItem base class is meant to be the way to do this with White.  I followed a simple example in the documentation to create a wrapper for a WPF ValidationSummary control I had created; it should have worked, but White could not find my control.  I posted a question on the White CodePlex site and got a couple of responses from the original author.  Google turned up a number of people with the same problem offering workarounds, but to be honest I didn’t see the point of using White just for the workarounds; you could achieve the same thing without the CustomUIItem base class.  Anyway, not being a patient person, by the time I got a response to my post on CodePlex I had already started writing my own little API to make working with the Microsoft UI Automation framework easier, mostly a set of extension methods to make the code less verbose (the route a number of the Google results indicated others had taken).


As indicated by my previous post I have become a fan of Fluent APIs like Fluent Assertions, and at work I am using Fluent NHibernate for the first time.  As I developed my API I started to see the evolution of a Fluent Automation API, and the more I did the more I liked it.  Combined with Fluent Assertions my test code was reading like a story, so I have spent a significant amount of time in the last few weeks re-factoring what I had started into a more complete Fluent Automation API, and will continue to do so until it is fully usable.  At first I doubted the sense of creating another API rather than working to figure out the kinks in White, but the fluent aspect changed this: I am fairly sure it makes the API unique (at least amongst public offerings), and I am really enjoying working on it at the moment, so I will stick with it to completion.


Enough of the blabbering, here are some tasters of what my Fluent Automation API looks like:


var aut = Fluently.Launch(@"C:\MyApp.exe")
                  .WaitUntilMainWindowIsLoaded()
                  .WithDefaultMainWindow();

This is the start point and launches the application under test returning the default wrapper for the main window of the application.


var aut = Fluently.Launch(@"C:\MyApp.exe")
                  .WaitUntilMainWindowIsLoaded()
                  .WithMainWindowAs<MyMainWindow>();

This is my preferred alternative that allows you to specify that a strongly typed wrapper is used for the main window.

var button = aut.MainWindow
                .FindControl()
                .WithId("MyButton")
                .IncludeDescendants()
                .Now()
                .As<Button>();

Having launched your application you can use the MainWindow property to start finding elements.  This example uses As<Button>() to return a strongly typed wrapper that is included in the API.  You can also use AsDefault() to return the element as a base AutomationControl.  If the button is not actually found, you get a proxy that can be used to repeat the find or to execute a Wait on.

button.Wait()
      .UntilExists()
      .TimeoutAfter(TimeSpan.FromSeconds(5), true);

This is how you would use the proxy to wait for the button to exist.  The second argument to TimeoutAfter indicates that an exception should be thrown on timeout.  The default is not to throw an exception, but you can check the state of the button to identify whether it is still a proxy or the real thing.


Well, that is enough for now.  Let me know what you think, and feel free to contribute ideas and comments here or on GitHub.


NB: After I started this post Telerik posted an update to the WebAii framework that includes WPF support.  I took a quick look at it, and whilst it is a welcome addition, the fluent aspect of my API is still unique, so I will be sticking with it.

Don’t Moq Me I’m Becoming Fluent

By Mike Hanson at April 14, 2011 17:14
Filed Under: .NET, Mocks, MS Test, Testing

Having recently finished a contract I am “resting” for a while and have got to spend some time on my personal projects.  While doing so I took a closer look at NuGet, and while browsing the gallery I came across two packages that piqued my interest: NSubstitute and Fluent Assertions.


Previously I have been a big fan of Moq and recommended it in my professional life as well as for my personal projects.  After visiting the home of NSubstitute and reading the getting started guide, I decided to see what the world was thinking, and after a bit of Googling came across a great series comparing Rhino Mocks, Moq and NSubstitute.  It was enough for me: no more Moq, I have switched to NSubstitute, and if you like concise, highly readable test code you should take a look.


I also took a look at Fluent Assertions to see what it was all about, and was impressed.  I know it is not good form to assert too many things in a test, but sometimes it just makes sense, and writing multiple Assert.* statements never looks pretty, especially when you are repeatedly passing the same object to the assertion method.  Fluent Assertions provides a mass of extension methods that IMHO significantly improve the readability of test code.  A side effect of using Fluent Assertions is that your test code becomes highly portable, allowing you to switch from one testing framework to another very easily (not that I have had to do this frequently, but it is a nice thought).
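To illustrate the difference, compare a classic multi-assert test with the fluent equivalent (the order model here is invented for the example):

```csharp
// Classic style: the subject and the assertion class are repeated each time
Assert.IsNotNull(order);
Assert.AreEqual(OrderStatus.Open, order.Status);
Assert.AreEqual(3, order.Lines.Count);

// Fluent Assertions: each chain reads like a sentence
order.Should().NotBeNull();
order.Status.Should().Be(OrderStatus.Open);
order.Lines.Should().HaveCount(3);
```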


I am not going to regurgitate all the documentation, since the links above provide all you need, but here is an extract from a test for a presentation model I am working on that I hope demonstrates how much cleaner and more readable the code becomes using NSubstitute and Fluent Assertions.


[TestInitialize]
public void Initialise()
{
    this.controller = Substitute.For<IController>();

    this.activityLookupServiceClient = Substitute.For<IActivityLookupServiceClient>();
    this.activityLookupServiceClient.GetActivityTypes().Returns(activityTypes.ToObservable());

    this.activityTemplateServiceClient = Substitute.For<IActivityTemplateServiceClient>();
    this.activityTemplateServiceClient.GetActivityTemplates().Returns(activityTemplates.ToObservable());
}

[TestMethod]
public void OnCreationActivityTypesContains2ListItemModels()
{
    var model = new NewActivityModel(this.controller, this.activityLookupServiceClient, this.activityTemplateServiceClient);
    model.ActivityTypes
        .Should()
        .NotBeEmpty()
        .And.HaveCount(2)
        .And.ContainItemsAssignableTo<IListItemModel>();
}

[TestMethod]
public void OnCreationFirstActivityTypeIsSetAsSelected()
{
    var model = new NewActivityModel(this.controller, this.activityLookupServiceClient, this.activityTemplateServiceClient);
    model.SelectedActivityType
        .ShouldHave()
        .Properties(d => d.Id, d => d.Name)
        .EqualTo(activityTypes[0]);
}

[TestMethod]
public void OnCreationSessionTypesContains1ListItemWithIdOfSessionTypeThatIsChildOfFirstActivityType()
{
    var model = new NewActivityModel(this.controller, this.activityLookupServiceClient, this.activityTemplateServiceClient);
    model.SessionTypes
        .Should()
        .NotBeEmpty()
        .And.HaveCount(1)
        .And.ContainItemsAssignableTo<IListItemModel>();
    model.SessionTypes
        .First()
        .ShouldHave()
        .Properties(d => d.Id, d => d.Name)
        .EqualTo(activityTypes[0].SessionTypes[0]);
}

MSTest vs NUnit Run Times

By Mike Hanson at February 27, 2011 20:38
Filed Under: .NET, CodeRush, Testing, NUnit, MS Test, Test Runners
Earlier this week I had a somewhat heated argument with some of my colleagues over the performance difference between MSTest and NUnit. I have used both professionally in the last five years, and I always use MSTest, as it comes with Visual Studio, in my own personal projects. I have never really thought about the performance of either; I had heard from others that NUnit was faster to execute, but I prefer the fully integrated nature of MSTest and generally stick with it. During the discussion one of my colleagues was adamant that the performance of MSTest was an order of magnitude slower than NUnit, so much so that on previous projects he had switched from MSTest to NUnit because the time it took to run 7000 unit tests severely impacted his TDD productivity.
I decided I would test my colleague’s claims and try to understand why this might be the case. I created a new VS 2010 solution and added a class with a single method that multiplied two numbers without using the multiplication operator (I frequently ask interviewees to write this method). I then added an MSTest and an NUnit test project to the solution, and to each I added 100 test classes, each with 10 test methods that called my Multiply method and asserted the return value was correct.
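For reference, one way to answer that interview question is repeated addition, with the sign handled separately.  This is my own sketch, not necessarily the exact method used in the timing runs:

```csharp
public static class Maths
{
    // Multiply two integers without using the * operator:
    // add |b| copies of a, then restore the sign of b.
    public static int Multiply(int a, int b)
    {
        int result = 0;
        int count = b < 0 ? -b : b;
        for (int i = 0; i < count; i++)
        {
            result += a;
        }
        return b < 0 ? -result : result;
    }
}
```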
I executed all the MSTest tests by pressing Ctrl + R, A, then executed all the NUnit tests using the NUnit GUI, and lo and behold it seems my colleague was right: VS 2010 took almost four times as long to run the tests.
I thought about this for a while, then asked myself whether this was a fair comparison. The NUnit GUI is a standalone process that does nothing but run the tests, while VS 2010 is not only doing its thing but also integrating the test results into several views. I also started thinking about how developers actually run their tests: it has been some time since I worked without either CodeRush or ReSharper, and I have had TestDriven.NET available more often than not. On my current project we all have ReSharper 5.* and everyone runs tests with it rather than the NUnit GUI. So, to do the job properly, I needed to find out whether it is the actual execution of the tests that accounts for the time difference, or the extra work that runners do.
On my personal projects I use CodeRush. I installed TestDriven.NET and the trial of ReSharper 5.* and did three runs of each set of tests with each runner, including VS 2010 and the NUnit GUI. The following table shows the run times in seconds as published by each runner. After the table I have included some observations, because I noticed some odd differences, particularly when it comes to rendering results.
Runner           MSTest runs 1–3       NUnit runs 1–3
VS IDE           13     14     14      N/A    N/A    N/A
NUnit GUI        N/A    N/A    N/A     2.51   2.03   1.38
TestDriven.NET   5.49   5.45   5.4     5.11   5.23   5.54
CodeRush         5.15   3.03   3.25    3.07   3.24   3.04
Resharper        1.9    1.54   1.67    1.26   1.26   1.29



VS IDE: The figures clearly suggest that running all tests in a solution through the VS IDE is much slower than with any of the other runners, but if you drill down into individual tests the execution times are very low, and adding them up does not equal the total time reported. So my conclusion is that the total time reported includes the time to report and render results as well as to execute the tests. There is a lot more going on with the VS IDE MSTest integration, so it is not surprising it takes more time; they could and should optimise this somewhat, but I am not sure they will. I once did a similar exercise with MSBuild and found it was significantly slower when run within VS than when run from the command line. It seems the deep integration with the IDE comes with a significant cost, but I personally don’t see this as a problem.
NUnit GUI: Clearly very fast, but not totally surprising since it is a lightweight process that does little more than run the tests and render some results. Personally I prefer an integrated solution, so I would never use the NUnit GUI on a day-to-day basis.
TestDriven.NET: A pretty simple add-in that runs the tests and outputs the results to the Output window of the VS IDE. This is probably the fairest comparison of all, and it produces comparable results; I guess this is down to the fact that it does not attempt to render the results in any way. NUnit appears to be marginally faster to execute, but there are only fractions of a second in it.
CodeRush: I couldn’t figure out why the first MSTest run reported a slower total time. I ran the MSTest suite a few extra times beyond the three and it consistently reported times in the 3.* second range. My conclusion on CodeRush is that it is comparable for both MSTest and NUnit, and that it is fast enough for my needs. A good thing, since I have just renewed my subscription for another year.
Resharper: I found the results for ReSharper a bit odd. The reported execution times were comparable, with NUnit coming out slightly ahead, but visually there was a huge difference. The rendering of MSTest results was significantly slower, taking almost 20 seconds to complete, even though there is no difference in what is rendered. If ReSharper is using the native XML results output by MSTest and NUnit then I could understand some of this difference, as the MSTest output is significantly more complex than that of NUnit, but not to this level. VS does a lot more with the results than ReSharper but is significantly faster. CodeRush does pretty much the same with both sets of results, and there is no discernible visual difference between MSTest and NUnit rendering performance. My only conclusion with regard to ReSharper is that I am glad I use CodeRush for my own stuff.


Overall Conclusion

My overall conclusion from this exercise is that the actual unit testing framework you choose is not as important as the test runner you choose.  Obviously there are features NUnit has that MSTest doesn’t that might lead you towards NUnit, just as the appeal of a single fully integrated tool might lead you to go for MSTest, but when it comes down to performance the test runner seems to me to be the deciding factor.

Silverlight Testing with WebAii - Part V

By Mike Hanson at November 13, 2010 22:22
Filed Under: Silverlight, Testing, WebAii

The fifth part of my series on testing Silverlight applications with the WebAii UI Automation Fx is now available here
