Tuesday, 23 October 2012

Getting all profile happy with JustTrace

Recently I have been developing a fair-sized application and so far everything seemed to be "just working" fine. However, during testing I noticed the app seemed to stall a few times. This left me quite unhappy and I knew it was time to revisit some of my "decisions". An initial glance through the code didn't really highlight anything, so I thought it was about time to profile the application and see what that could find.

In the past I have used Redgate's Ants Profiler and an early version of Telerik's JustTrace. I decided to give Telerik's JustTrace a go again as I have heard a lot about it and wanted to see how it has changed and grown.

Alas, my initial attempt to use JustTrace ended quite abruptly. I've been developing on Windows 8 and VS2012 exclusively for about 5 months now, and sadly JustTrace didn't support IIS 8 or IIS Express 8. I was gutted to find this out, but a quick chat with the Telerik dev team and Chris Eargle gave me hope: they advised me the Q3 update was right around the corner, one week to be exact, and it would bring all the new toys into the game.

HURRAH!

So tonight I got it all updated, fired it up, and started profiling my app. Profiling is immensely easy to perform: simply enable JustTrace in the Telerik menu and hit F5 :) You are asked how you want to profile; I chose to simply [].

You then simply start using your application like normal and JustTrace sits there observing and recording what it is up to. Once you have spent some time testing you can really start digging around. The first thing to notice is the Timeline view, which shows all the CPU activity while running through the application. You want to start looking at where you have peaks, and in particular extended CPU activity.
Timeline View
By clicking on an area of the timeline you get a larger view where you can highlight a time snapshot to view.
Show Snapshot

You can then use a ton of options to find your issue: the call trees across all threads, all methods, and even a straight view of "hot spots". Hot spots are each thread's most expensive methods in terms of time. In my current snapshot Assembly.Load was very expensive, but that was app startup, not what I was noticing.

Hot Spot View


By looking at the other peaks I did find where 3 seconds was going: executing queries against the database. Now this could be down to bad queries, but it's more likely to be a bad use of LINQ with Entity Framework and not compiling queries, etc. However, now I know where to look, and it only took me 5 - 10 minutes of profiling to find out. Fantastic stuff, and I've not even touched memory profiling!
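For reference, this is the kind of query compilation I mean. A minimal sketch only: the context and entity types here (MyEntities, Order, CustomerId) are made up, and CompiledQuery.Compile works against an ObjectContext-derived context rather than a plain DbContext.

using System;
using System.Data.Objects;
using System.Linq;

public static class OrderQueries
{
    // Compile once; EF caches the translated command so repeated calls skip the
    // LINQ-to-Entities translation cost.
    public static readonly Func<MyEntities, int, IQueryable<Order>> ForCustomer =
        CompiledQuery.Compile((MyEntities ctx, int customerId) =>
            ctx.Orders.Where(o => o.CustomerId == customerId));
}

// Usage: var orders = OrderQueries.ForCustomer(context, 42).ToList();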

My Hot Spot
Hopefully I can do some memory profiling before my trial runs out, but so far it looks worth investing in; here's hoping I can get some BizSpark discount ;)

Wednesday, 10 October 2012

Deploying Non Project Files with Web Deploy

Deploying websites and web applications has changed massively over the years. I have seen the likes of FTP, copy & paste over RDP (why!!), etc., but there has always been a degree of: did I copy the correct files, did I miss any, did I upload the correct client config?

Since VS2010 I have been a big fan of always publishing my applications either to a local folder and then uploading, or directly to the server, depending on the setup.

One thing I have always struggled with is publishing files that aren't part of my project, and ensuring the correct client config settings are uploaded. Now that I'm running VS2012 full time, I decided it was time to put these issues to bed and ensure I could use one click, or as near to one as possible, to deploy staging configurations of my applications as well as live.

VS2012 has tweaked publishing again; it now includes a much fuller publish model, with Web Deploy amongst the usual FTP, file system, etc. The Package / Publish process is just an extension of MSBuild, which means you can perform any process or action that you would normally do with MSBuild; this could mean performing minification / compression of CSS and JS, etc. Other people have written about this: http://sedodream.com/2011/02/25/HowToCompressCSSJavaScriptBeforePublishpackage.aspx

When you deploy, MSBuild performs two steps: first it gathers all your content into a temporary location, and then it copies it all to the final location, or builds a package if that's what you have configured.

What I needed to do was alter the process to pull some additional files into the temporary location so that they then get published to the final destination.

This is possible by hooking into the PipelineCollectFilesPhase. We can simply add a build target to our project file and then have it run during that phase via the PipelineCollectFilesPhaseDependsOn property.

Our Target:


<Target Name="CopyIOCFiles">
    <Message Text="Copy IOC Files" Importance="high"/>
    <ItemGroup>
      <_CustomFiles Include="$(ProjectDir)\IOC\**\*" />
      <FilesForPackagingFromProject  Include="%(_CustomFiles.Identity)">
        <DestinationRelativePath>bin\%(RecursiveDir)%(Filename)%(Extension)</DestinationRelativePath>
      </FilesForPackagingFromProject>
    </ItemGroup>
  </Target>

It's worth noting here that my target could just contain the ItemGroup element; however, I like to know the target has run, so by sticking in a Message element we can confirm it ran by looking at our build output.

This target is then run in the correct phase via the following:


 <PropertyGroup>
    <PipelineCollectFilesPhaseDependsOn>
      CopyIOCFiles;
      $(PipelineCollectFilesPhaseDependsOn);
    </PipelineCollectFilesPhaseDependsOn>
  </PropertyGroup>

Hurrah! That's it, and actually a lot easier than I thought. When you combine this with web.config transformations on a per-configuration basis you really can deploy one application in several configurations in one click.



Sunday, 10 June 2012

Shims Shim Shims

Tonight I started a small project and I thought I'd take the opportunity to explore the new Microsoft.QualityTools.Testing.Fakes library. In case you don't know, this library basically allows you to "easily" mock out those pesky sealed classes within the .NET Framework. I've only used it on System.IO tonight, but it does do more; I have a feeling, though, that System.IO will be where it gets used most heavily.

What the library allows you to do is generate a shim version of each sealed class within a DLL. So what is a shim? MSDN at the moment says the following:
Shim types allow detouring of hard-coded dependencies on static or non-overridable methods.
What this means to you and me is that you get to create a wrapper type, ShimDirectoryInfo for example; you then set delegates for each method and property that you are going to use on the type and let these return your expected values. When your real code then goes to use the original type, DirectoryInfo in our example, it will automatically use the shimmed methods you have defined. What's clever about this is that you can not only pass your shims into code by getting the shim's generated instance type, but you can also declare an AllInstances implementation which any code that creates your type will use. This is neat as you don't then have to modify existing code to take these types.

Here's a simple example that isn't playing with System.DateTime.Now ;) A lot of the examples on the net use DateTime.Now to demonstrate shims, but I wanted to show how I started using them for testing code that relies on System.IO.

I have a simple class, LocalStorage, with a constructor that takes a file path. All we do for now in this constructor is check for a null / empty path and whether the directory actually exists, and throw if it doesn't.

 /// <summary>
 /// Creates a new local storage class for accessing file and directory information
 /// </summary>
 /// <param name="rootPath">Base path to storage location</param>
public LocalStorage(string rootPath)
{
  if(String.IsNullOrWhiteSpace(rootPath)){
    throw new ArgumentNullException("rootPath");
  }
  if (!System.IO.Directory.Exists(rootPath))
  { 
    throw new IOException("Path does not exist: " + rootPath);
  }
  _rootPath = new DirectoryInfo(rootPath);
}
Nothing too complicated there, but how would we test the directory not existing and the exception being thrown? Previously I would have done something like pass in a path that "should" never exist; however, this isn't a particularly elegant way of doing it. So how would shims help?

Here's my new test:

[Fact]
public void NonExistantPathThrowsIOException()
{
  using (ShimsContext.Create())
  {
    // Use Fakes to say the folder doesn't exist without having to actually hit the operating system.
    System.IO.Fakes.ShimDirectory.ExistsString = (path) => false;
    Assert.Throws<System.IO.IOException>(() => new LocalStorage("m:\\test"));
  }
}

To start using shims, after creating the shim types for an assembly we new up a ShimsContext with ShimsContext.Create(). Note how this needs to be in a using statement: it's disposable, and for good reason. Shims exist for the lifetime of the AppDomain, so if you forget to dispose of your context, any shims you have created would be used for all tests until the AppDomain was shut down. Therefore, by using a using statement, and thus the Dispose method, you are creating a scope for your shims.

Now within our context we get to set a delegate against what we want faking. In this case we want to fake our call to Directory.Exists. As this is a static method we can set this globally. This is done by using the ExistsString delegate on the ShimDirectory class. We can use a lambda here to make this really short and sweet.

That's it! Now when we run our test our fake method is used whenever Directory.Exists is called; no more dependency on the real filesystem. Nice :)

Now, you can take this quite far and start mocking out further things, GetFiles for example: I want to test my business logic, which trims out specific files that can't be filtered by the default GetFiles method.

What you find with methods that have overloads is that when you go to set the delegate you have to make sure you set the correct one. GetFiles, for example, has the following delegate properties on ShimDirectoryInfo: GetFiles, GetFilesString, GetFilesStringSearchOption, one for each overload combination. It's worth checking the method signature to ensure you pick the correct delegate, otherwise you'll end up chasing a bug which doesn't exist in the real code but in your test ;)
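As a rough sketch (the folder and file names here are made up), this is how that looks for the GetFiles(string) overload; because GetFiles is an instance method you set the delegate via AllInstances, so every DirectoryInfo your code creates picks up the fake:

using (ShimsContext.Create())
{
    // Fake the GetFiles(string searchPattern) overload for every DirectoryInfo instance.
    System.IO.Fakes.ShimDirectoryInfo.AllInstances.GetFilesString =
        (dir, searchPattern) => new[]
        {
            new System.IO.FileInfo(@"c:\storage\report.xlsx"),
            new System.IO.FileInfo(@"c:\storage\notes.txt")
        };

    // ...run the business logic that filters the returned files and assert on the result...
}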

Finally, not everything is implemented via shims; there is still stuff missing. I found that ShimFileInfo doesn't provide a way to fake file write times, but it does do names, etc. It is a step in the right direction, though, when dealing with "stubborn" code.

Enjoy



Thursday, 10 May 2012

Portable Class Library Projects and your Build Server

Yesterday my build server started failing on a particular project, and after a quick look at the log I found that it simply wasn't building the solution. Peculiar, as I knew it built fine on several development machines. I dug into the logs and found the following:
  •  error MSB4019: The imported project "C:\Program Files (x86)\MSBuild\Microsoft\Portable\v4.0\Microsoft.Portable.CSharp.targets" was not found. Confirm that the path in the <Import> declaration is correct, and that the file exists on disk.
Oh dear. Now, I know the Portable Class Library tools are an add-on, so I went to the MSDN Visual Studio Gallery and got the installer so that I could chuck it onto the build server.

However, when attempting to install I got a great message saying I needed VS2010 and SP1 installed; obviously for a build server this wasn't going to happen. After a bit of digging on MSDN I found that the installer actually has a switch which, when used, just installs the targets for MSBuild.
To install the Portable Class Library Tools on a build machine without installing Visual Studio 2010, save the download file (PortableLibraryTools.exe) on your computer, and run the installation program from a Command Prompt window. Include the /buildmachine switch on the command line.

Simply run the installer with the /buildmachine switch and everything installs hunky-dory.

PortableLibraryTools.exe  /buildmachine


To my great relief the build server started building again and things went back to normal.

Thursday, 12 April 2012

Using the SQL Server Export tool to generate an Excel spreadsheet from an Access database

What a wordy title, go read it again, it's crazy but true.

This post covers generating an Excel spreadsheet from an Access Database but using a SQL Server tool. It's crazy but great and easy!

Welcome!
I am assuming you have SQL Server and the Import and Export Wizard installed; if you don't, go get them first before carrying on.

First up, open the Import and Export Data application. Note this has to be the x86 edition, so if you are running an x64 machine, which I think most devs are nowadays, ensure you choose the correct edition.
This will give you the welcome screen.

Source Screen
Click Next to get the ball rolling. This first screen is where we choose where to get the data from, in this case Access, so use the drop-down box to select Microsoft Access. Then provide the path to the database file; my file path was actually on the network and it worked fine.

Destination Screen

Next up, the destination. Again, Excel is what we want, so highlight it in the list and provide a path to save to. You can also choose the version of Excel you want the file format to be; lots of options, which is great.

Now, upon clicking Next here two things may happen: you might proceed to the data copy screen, or you might get an error. On my machine I got an error; however, on others I did not. The error was: The 'Microsoft.ACE.OLEDB.12.0' provider is not registered on the local machine. (System.Data)

Copy or Query
If you don't have Office installed, going to http://www.microsoft.com/download/en/details.aspx?displaylang=en&id=13255 and downloading the 32-bit Access Database Engine will fix this; however, I already had Office 2010, so this wasn't the issue. If you remember, earlier I mentioned the difference between x86 and x64; well, it's back to haunt us. I had installed 64-bit Office (at the time I thought, why not, I have 16GB of RAM, something needs to attempt to use it...), which meant that only a 64-bit provider was registered. I attempted to download the 32-bit Access Database Engine, but it won't install if you have 64-bit Office, which does make sense. So I rolled myself back to 32-bit Office, sad but true.

Anyhow, with all that sorted you will be at the copy-or-query screen. If you simply want all your data out of your database in the form it is stored, you could use the copy table option. However, I wanted to pull data from several tables and output it in a specific format, so I chose the query option.

What a query
You then get a nice big text box to enter your query. You could be hardcore and enter it directly and make sure it parses, etc., but I'm guessing that, like most people, you would have the query tested in SQL Management Studio and would simply paste it in, or use the browse button to point it at a saved query.

It is worth noting here that I believe there is a bug with this window: if I browsed and loaded a file, only half of my query appeared in the window; however, if I copied and pasted it worked fine.

The next window asks you to confirm the tables and views to copy; when you are using a custom query there isn't a lot more you can do on this screen, so click Next again.

This next screen, Run Package, gives you the option to run the export now or to save it as a package that can be reused or used later. However you can only do this on SQL Server Standard, Enterprise, Developer or Evaluation. If you have SQL Server Express, Web, or Workgroup you can only run it immediately.

Success!
When you choose to run the package you get a progress window showing what's happening, what's left, etc., very similar to the SQL installation progress screens. When it's done you should have all green ticks and an Excel file with your Access data in it.

Cracking stuff. If you haven't before, you should take some time to look at the SQL Import and Export tool and DTS more fully; it allows you to import and export data from a vast array of sources into just as many again. I have used it to take old Access DBs and create SQL ones, and to manage data migrations; it's a tool that should be in every developer's belt. We don't need to continually write console or Windows Forms apps for managing data; we already have a tool that will do the majority of use cases, we just have to use it and write the queries!

I hope this helps, I know it saved me a ton of time :)



Thursday, 29 March 2012

Revisiting Linq To Lucene

Last July I wrote a blog post about prototyping Entity Framework Code First support for Linq To Lucene. I hadn't realised it was quite that long ago, but alas, time flies, especially with a baby.

Anyhow, I never finished the work, made it "feature" ready, or submitted it back to the project; this was partly due to the project's apparent inactivity. However, this week the Linq To Lucene project has become quite active, and as I now need to use the implementation I spent a few hours finishing it. This included working out how to get all the tables off a DbContext, which involved a bit of reflection magic, and then writing some tests to ensure it worked at least as well as the LINQ to SQL version had.
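The reflection side boils down to something like the sketch below. This is the idea rather than the actual patch: find every DbSet<T> property on the context type and pull out the entity type behind it.

using System;
using System.Collections.Generic;
using System.Data.Entity;
using System.Linq;

public static class ContextInspector
{
    // Returns the entity types exposed as DbSet<T> properties on a DbContext type.
    public static IEnumerable<Type> GetEntityTypes(Type contextType)
    {
        return contextType.GetProperties()
            .Where(p => p.PropertyType.IsGenericType &&
                        p.PropertyType.GetGenericTypeDefinition() == typeof(DbSet<>))
            .Select(p => p.PropertyType.GetGenericArguments()[0]);
    }
}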

I have now submitted this as a patch (http://linqtolucene.codeplex.com/SourceControl/list/patches, item id 11857) and hope it's available in the main trunk soon. Enjoy!

Wednesday, 21 March 2012

Troubleshooting NuGet failing to load

Today I had the unfortunate experience of VS2010 crashing whilst I was working. Nothing too big, I thought; I've had this happen before.

I reopened VS, went to open my solution, and got the following message:

---------------------------
Microsoft Visual Studio
---------------------------
The 'NuGet.Tools.NuGetPackage, NuGet.Tools, Version=1.6.21215.9133, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a' package did not load correctly.

EEK!

NuGet is smegged :( I immediately thought to uninstall the extension and then reinstall it; the same thing happened :(

I then (looking back this was foolish, as NuGet is shared) thought, I know, I'll just load it in VS11, that will work... it also errored.

Not cool. The error message, however, directs you to start VS with logging enabled by using the /log command-line argument and then to look at the activity log. The activity log is located in your AppData roaming folder: simply stick %appdata%\Microsoft\VisualStudio\10.0\ActivityLog.xml into Run, or %appdata%\Microsoft\VisualStudio\11.0\ActivityLog.xml depending on your VS version. This will then load a load of XML with the VS loading activity in it. After a bit of scrolling I came across this:
 <entry>
    <record>505</record>
    <time>2012/03/21 09:49:33.568</time>
    <type>Error</type>
    <source>VisualStudio</source>
    <description>SetSite failed for packageSetSite failed for package</description>
    <guid>{5FCC8577-4FEB-4D04-AD72-D6C629B083CC}</guid>
    <hr>80131500</hr>
    <errorinfo>The composition produced a single composition error, with 2 root causes. The root causes are provided below. Review the CompositionException.Errors property for more detailed information.   1) '.', hexadecimal value 0x00, is an invalid character. Line 1, position 1. Resulting in: An exception occurred while trying to create an instance of type 'NuGet.VisualStudio.VsSettings'.
Resulting in: Cannot activate part 'NuGet.VisualStudio.VsSettings'.
Element: NuGet.VisualStudio.VsSettings -->  NuGet.VisualStudio.VsSettings -->  CachedAssemblyCatalog
Resulting in: Cannot get export 'NuGet.VisualStudio.VsSettings (ContractName="NuGet.ISettings")' from part 'NuGet.VisualStudio.VsSettings'.
Element: NuGet.VisualStudio.VsSettings (ContractName="NuGet.ISettings") -->  CachedAssemblyCatalog
.........… rest removed due to length .........
</errorinfo>
  </entry>
The key information here is the invalid character and NuGet.VisualStudio.VsSettings. This suggests the NuGet.Config is corrupt; the next question is where it is. It turns out this is also in AppData, at %appdata%\nuget\. Once there, I opened the file in Notepad and found it was empty, all bar the Unicode file start character.

To restore the file you just need to put in the package sources NuGet should use; the default is:

<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <packageSources>
    <add key="NuGet official package source" value="https://go.microsoft.com/fwlink/?LinkID=230477" />
  </packageSources>
  <activePackageSource>
    <add key="NuGet official package source" value="https://go.microsoft.com/fwlink/?LinkID=230477" />
  </activePackageSource>
</configuration>
Once this was saved I tried running NuGet via the command prompt and got the familiar available commands listing.

Phew

A bit of a pain, but the experience led me to learn about the VS activity log, which I'm sure will be useful to know in the future. Now, time to get back on with my project...

Tuesday, 13 March 2012

Developing with MonoTouch - Day 1 - Setup

Introduction

So recently I have started a company and been working on a rather large project. Initial plans were to use Windows 7 or Windows 8 tablets for the "mobile" solution; however, for various reasons the plan has changed somewhat to now use iPads.

As primarily a .NET developer this was a tough decision, but a necessary one. However, I had often talked about using MonoTouch for iOS development after successfully doing some work with MonoDroid when it was in beta. So this is my solution: use MonoTouch to code as much as possible in C#, and then Xcode where necessary.

I thought it would be a good idea to blog my experiences and any insights I glean during my first few weeks. I have to admit I have never owned a Mac or an iPad, so it's all very new, although I have used other people's.

Stage 1 - Order the kit!
First things first, the kit. As most of my work won't actually be on the iPad application but on Azure services, etc., I decided that the Mac Mini was the way to go. I opted for the basic model: i5 2.3 GHz with a 500GB hard disk. However, I did buy 8GB of RAM; it was only an extra £35. Due to my existing office setup I chose to reuse my monitor, keyboard and mouse, and simply switch my USB hub connection when I work on the Mac, so no additional expense there. I also preordered a "new iPad"; I won't dwell too much on that as it's standard.

Next comes software: I purchased MonoTouch from the Xamarin website and then the Apple Dev Licence, all pretty basic. It is worth noting that MonoTouch does have a trial / evaluation edition; you are just limited to only running apps in the simulator.

Total setup cost to get going: ~£868

Stage 2 - Setting up the Mac / Installing Everything



Failure to install
So the Mac arrived; it's incredibly easy to set up and the packaging is very well designed. The first thing I did was attempt to get the iOS SDK installed, which is done by installing Xcode. However, I immediately hit a roadblock :(

I was missing a ton of updates, preventing the app from installing; not cool for a new Mac :(



Hours Later
So I then had to go and get the software updates; annoyingly it's over 1.6GB ;( This would take time on my home connection, let alone my office one, which is far from speedy at 2 - 3 Mb :( Eventually this finished, time for Xcode... Oh wait, another 1.5GB ;(
Warning: this may take longer than one tea break...
Eventually, after getting all of this sorted, it was time to get MonoTouch installed. I have to say the installer for this was pretty painless; it does have to download and install Mono, MonoDevelop and MonoTouch, but it went very quickly and was super simple to finish. In fact, apart from the time it took to download everything, apps on the Mac are super simple to install now there's the App Store. One thing I have taken from this is that I can't wait to see the Windows App Store in Windows 8; it's long overdue and should hopefully make life easier for everyone.

I then took the jump and opened up MonoDevelop; I have heard good things about it but had never used it personally.

Woohoo! MonoDevelop
And there we are. Unfortunately, I was hoping to have a good play with MonoDevelop, but it took over 7 hours to get everything downloaded and installed, which ate all of my day :( There's always tomorrow...

Tuesday, 14 February 2012

DotNetOpenAuth 4 beta with Windows Azure

Recently I blogged about using DotNetOpenAuth. I got it working within my local MVC3 web application after fixing the dependency issue; however, when I came to put it into Windows Azure I was getting a random error when the compute emulator started up.

---------------------------
Microsoft Visual Studio
---------------------------
Windows Azure Tools for Microsoft Visual Studio

There was an error attaching the debugger to the IIS worker process for URL 'http://127.255.0.0:82/' for role instance 'deployment16(174).xxxxxxxxxxx_IN_0'. Unable to start debugging on the web server. See help for common configuration errors. Running the web page outside of the debugger may provide further information.

Make sure the server is operating correctly. Verify there are no syntax errors in web.config by doing a Debug.Start Without Debugging. You may also want to refer to the ASP.NET and ATL Server debugging topic in the online documentation.

Nice! How confusing: the compute process didn't even start up, yet if I ran it using the built-in web server all was fine. I knew it had to be related to DotNetOpenAuth, as this was the only thing I had changed in my project since the last successful run.

So I started looking into the issue. Now, I knew it worked on the local dev server, but I realised that this wasn't using IIS 7 or IIS 7.5 but the web server built into VS. This doesn't run anywhere near the same as IIS 7.x, which is what Azure is built upon. So I switched my MVC3 web application to use IIS Express, which is a lightweight version of IIS, and fired up the app.

I instantly saw what the error was:

HTTP Error 500.19 - Internal Server Error

The requested page cannot be accessed because the related configuration data for the page is invalid.

Config Source:

    7:   <configSections>
    8:     <section name="uri" type="System.Configuration.UriSection, System, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" /> 
    9: <sectionGroup name="dotNetOpenAuth" type="DotNetOpenAuth.Configuration.DotNetOpenAuthSection, DotNetOpenAuth">

This section element wasn't needed; .NET 4 already includes System.Configuration.UriSection within the machine.config. This entry is trying to load the version 2.0.0.0 component, which is only required for .NET 2 - .NET 3.5.

So where did it come from? Well, the answer lies with NuGet; I had installed DotNetOpenAuth using NuGet:
Install-Package DotNetOpenAuth -Pre
This, along with adding the library references, updated the web.config. The error here is that it doesn't check which .NET version the project targets before it updates the web.config with the required sections.

As this is a prerelease version of DotNetOpenAuth, my guess is it will be fixed before release; I'll be raising it as an issue anyhow.

Hope this helps.

Monday, 13 February 2012

Using DotNetOpenAuth and getting "DotNetOpenAuth.Messaging.OutgoingWebResponseActionResult"

Today I started integrating OpenID into my latest web application. I chose to use the DotNetOpenAuth library as a helper; it's top notch and makes OpenID really easy, supporting Web Forms, MVC and even classic ASP. Anyhow, there are many how-to guides, including one from my good friend Danny Tuppeny, so when it comes to setting it up I refer you to those.

However, today I found that when I was using the AsActionResult extension method my web page would come back blank, with just DotNetOpenAuth.Messaging.OutgoingWebResponseActionResult written on screen. For some reason ToString was being called on the result and returned.

Commonly it seems that this issue is caused by not having a binding redirect for older versions of the MVC assembly; however, mine was set, but what I did notice was the following:

<dependentAssembly>
        <assemblyIdentity name="DataAnnotationsExtensions" publicKeyToken="358a5681c50fd84c" culture="neutral" />
        <bindingRedirect oldVersion="0.0.0.0-1.0.1.0" newVersion="1.0.1.0" />
        <assemblyIdentity name="System.Web.Mvc" publicKeyToken="31bf3856ad364e35" />
        <bindingRedirect oldVersion="1.0.0.0" newVersion="2.0.0.0" />
      </dependentAssembly>
      <dependentAssembly>
        <assemblyIdentity name="System.Web.Mvc" publicKeyToken="31bf3856ad364e35" culture="neutral" />
        <bindingRedirect oldVersion="0.0.0.0-3.0.0.0" newVersion="3.0.0.0" />
      </dependentAssembly>
The DataAnnotationsExtensions dependentAssembly element also had an assemblyIdentity reference to MVC, and this was redirected to version 2. But I was using MVC 3, so I believe what was happening is that the assembly binding was matching the first entry and then using the wrong version.
I simply removed this redirect and let the main one be used and everything kicked back in.
A lesson learned :)

Friday, 27 January 2012

Windows Azure and type initializer for 'System.ServiceModel.Diagnostics.TraceUtility' threw an Exception

I've been working on a new product that I'm building with Windows Azure, and today, whilst debugging a WCF service within it, I stumbled upon an error that at first glance doesn't make a whole lot of sense, but with a bit of working backwards turns out to be quite simple to fix.

The scenario: I have a WCF service set up to run in a worker role as part of an Azure solution, pretty simple.

To enable me to see more clearly what was going on with my service, I decided to configure my system.diagnostics listeners and added the default AzureLocalStorage listener, which is created automatically for you, to my service model and message logging sources.
<system.diagnostics>    
    <sharedListeners>
      <add name="AzureLocalStorage" type="ServiceAuthenticationGatewayWorkerRole.AzureLocalStorageTraceListener, ServiceAuthenticationGatewayWorkerRole"/>
    </sharedListeners>
    <sources>
      <source name="System.ServiceModel" switchValue="Verbose, ActivityTracing">
        <listeners>
          <add name="AzureLocalStorage"/>
        </listeners>
      </source>
      <source name="System.ServiceModel.MessageLogging" switchValue="Verbose">
        <listeners>
          <add name="AzureLocalStorage"/>
        </listeners>
      </source>
    </sources>
   </system.diagnostics>
However, upon running the application nothing would happen for around 30 - 60 seconds, and then I was greeted with a nice TypeInitializationException.

Viewing the detail of this and the inner exceptions eventually led me to the following message:
{"Could not create xxxxxWorkerRole.AzureLocalStorageTraceListener, xxxxxxxx."}

So the AzureLocalStorageTraceListener was blowing up on type initialisation. Popping open the code, you see that the constructor does some work; specifically, it gets the log directory for the WCF log file and combines the filename and path, etc.

This looked normal. I put a breakpoint in and stepped through the GetLogDirectory method to find which line threw; my guess was that it was where it calls into the RoleEnvironment and gets the local resource. It turns out this is exactly where it threw.

I had a quick gander at MSDN and found that you pass the name of the local resource you want to load to the GetLocalResource method. This led me to think that the resource wasn't defined, so next up was the local storage configuration.
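For context, the listener's GetLogDirectory does something along these lines (a sketch; the resource name and file name below are placeholders, not the template's actual values), and it is the GetLocalResource call that throws when the resource isn't defined on the role:

using System.IO;
using Microsoft.WindowsAzure.ServiceRuntime;

public static string GetLogDirectory()
{
    // Throws RoleEnvironmentException if "WcfServiceLogs" isn't defined as local
    // storage on the role, which is exactly the failure I was hitting.
    LocalResource logStorage = RoleEnvironment.GetLocalResource("WcfServiceLogs");
    return Path.Combine(logStorage.RootPath, "web.svclog");
}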

With Azure you configure local storage per role. To change the settings, right-click the worker role within the main Azure project (it will be in a folder called Roles) and open Properties. You will then get a screen similar to what I had.
You will note, as I did, that despite my thinking I had local storage and accessing it via its key, it wasn't actually set up within the role properties :(

Simply using the Add Local Storage button and entering the correct details let the project start up and begin logging correctly.

What I found here is another case where the framework gives you a generic error message and not the exception that was really being thrown, RoleEnvironmentException, which would have made me look at the local storage settings a lot quicker.

Anyway, there it is, I hope this helps :)

Tuesday, 24 January 2012

Tidying up user data for display

Today I was tidying up a quick WPF application I had built for a client. It simply renders "today's schedule" in a way that is meaningful for them and allows them to view it on multiple screens and print as necessary.

Development was quick, as it only consumes several Google Calendar feeds, and based upon my test data it worked well.

However, today seeing it used in real life made me rethink how I rendered the data. In my test data I entered items how I would enter them (this is important): using sentence casing for paragraphs, title casing for titles, etc. However, due to the way some of the users worked, everything was being entered in CAPS :(

Functionally the app works fine; however, for me (and I'm no designer) it looked odd. So I opened the solution again and started to look at ways of improving this. My initial thought was to apply some sort of styling, think text-transform: capitalize; if you were using CSS, but alas XAML doesn't have this. I then thought I could implement a custom formatter that I could use in the XAML upon data binding. I could have done this, but I chose not to. Although it makes sense to do so, I started thinking about how I will probably end up using the data access / domain code from this project later on in another GUI, where I would no doubt have the same problem.

As a result, I wanted to "fix" this data at the domain level in C#. As soon as you hit the code you know there are many ways of achieving this: you could go down the route of regex replacing, or iterating through the string looking for full stops or whitespace if you want title case, etc., and then doing some replacing. You could even just make everything lowercase, but for me none of these fully fitted what I wanted, or the effort level I wanted to put in for an issue that only I really had.

What I really wanted was a ToTitleCase or ToSentenceCase that already exists in the framework; I didn't want to go and grab extension methods, of which I'm almost certain there are many. A quick bit of poking around led me to this gem:
TextInfo.ToTitleCase
I refer to MSDN: the ToTitleCase method "Converts the specified string to titlecase." Great, exactly what I wanted. It's easy to use too:



// Defines the string with mixed casing.
string myString = "wAr aNd pEaCe";

// Creates a TextInfo based on the British culture.
TextInfo myTI = new CultureInfo("en-GB", false).TextInfo;

// Changes a string to titlecase.
Console.WriteLine("\"{0}\" to titlecase: {1}", myString, myTI.ToTitleCase(myString));

Fantastic: build, run, enjoy...
Well, kinda... This is one method you really do need to read the remarks for on the MSDN page:
this method does not currently provide proper casing to convert a word that is entirely uppercase, such as an acronym.
and
the ToTitleCase method provides an arbitrary casing behavior which is not necessarily linguistically correct. A linguistically correct solution would require additional rules, and the current algorithm is somewhat simpler and faster. We reserve the right to make this API slower in the future.

This actually meant that in my case, when people added descriptions entirely in uppercase, the method did nothing. Bit of a shame. I made the conscious decision that for titles I would use ToTitleCase as-is, in the hope of improving titles entered all in lower case or in mixed case, but if someone enters one entirely in uppercase then I'm out of luck. For descriptions, however, I decided to lowercase the string first and then use ToTitleCase. Now, this isn't sentence casing, but it does look better than all caps. This is a compromise: I was able to improve the app without spending too much time on it.
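Roughly, the description compromise looks like this (a quick sketch with a made-up description string): knock the text down to lower case first so ToTitleCase has something it is willing to change.

TextInfo ti = new CultureInfo("en-GB", false).TextInfo;

string description = "PLEASE BRING YOUR OWN LAPTOP";

// ToTitleCase leaves all-caps words alone, so lower-case the input first.
string tidied = ti.ToTitleCase(description.ToLower());
// "Please Bring Your Own Laptop" - not sentence case, but easier on the eye than caps.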

ToTitleCase is one of those hidden gems (just like how XmlConvert.ToString with a DateTime will give you the DateTime in RFC 3339 format, which is fantastic for use with the Google Calendar APIs etc... but that's a blog post for another day) which can save you time and provide quick wins. It's also culture sensitive, which can really help you out if you have a globalised project; just be clear on what it does and what it doesn't do.
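For the curious, that XmlConvert aside is roughly a one-liner:

// Produces an ISO 8601 / RFC 3339 style string, along the lines of "2012-01-24T18:30:00.1234567Z".
string stamp = XmlConvert.ToString(DateTime.UtcNow, XmlDateTimeSerializationMode.Utc);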

Enjoy

Monday, 16 January 2012

Tricky times using the MVC3 Date Validator and JQuery UI DatePicker

Today I came across a strange problem whilst writing a seemingly simple MVC3 prototype. My prototype had a textbox for a date value, to which I attached a jQuery UI date picker. I was also using unobtrusive validation and data annotations on my models to "speed up" developing my prototype.

Although I thought I had written everything correctly, whenever I tried to submit my form in Chrome I was getting a random error: Please enter a valid date. Originally I suspected I had messed up the date/time formatting, as I was using the en-GB format, not the default en-US. Much time was wasted and the many things I tried had no effect.

I then by chance loaded the page in Internet Explorer, only to find the issue had "vanished"; so I went back to Chrome and, nope, it was back. Upon double-checking my code I pondered whether using the class name "date" on my element could cause problems, and a bit of Googling confirmed this. In short, don't apply a date picker to an element with a class name of "date", otherwise Chrome gets all confused. I believe this is due to the page being HTML5 and the way Chrome parses elements, etc., but for now I don't need to know too much about that ;)
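So in the view I ended up with something along these lines (the property and class names are just illustrative): give the input a class other than "date" and bind the jQuery UI date picker to that class instead.

@* Give the input a class that isn't "date"; wire jQuery UI up to ".datepicker" instead. *@
@Html.TextBoxFor(m => m.EventDate, new { @class = "datepicker" })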

Hurrah, everything works again, and only X amount of time wasted :(