Thursday, 1 October 2009

Why we have to be more careful about what we read and more importantly what we write

In this day and age it is very uncommon to not use the internet to research or solve problems. Our reliance on printed reference books and even reference sites has dwindled massively.

As developers, especially budding developers, we often just Google our problems; in fact, I think most of our senior devs often say to us "have you Googled it?" when asked about something.

Now Googling things has of course changed our industry; we can often solve problems or get good starting points within seconds.
This on its own is not a bad thing: we Google, we get the results and we crack on. The problem, however, is when you pick the first item or a random article and take what someone else has written as FACT.

The problem with our "Google" culture (and this applies to more than programming) is that we often don't filter what we read. We suffer from FPS: First Page Syndrome. If it's on the first page of our results, it has to be correct.

Sadly though, all too often the actual blogs or forum posts we end up reading are far from FACT. Today I was investigating an issue with a JS tab solution I had written and sadly found a ton of very poor "tutorials". These articles / blogs, although well presented and often written with the best intentions, often lead people to pick up bad habits. I won't name the article that prompted me to write this, but to say the solution was wrong is an understatement.

New developers will always trust what they read; I think it stems from how our education systems work. I believe we need to refine our "Google" culture tendencies, and in particular our FPS.

How do we change this? Firstly, we need to encourage people not to just read the first article / blog they reach from a search, but instead to open several tabs of articles on the subject matter, read each one, and then, and only then, look at the common concepts / answers they provide. We need to consider multiple sources before treating something as FACT.

I also believe blog and article writers have an obligation to research what others think or do regarding a subject before they post onto the internet. This applies to big sites like the BBC too; in fact, the bigger you are the more this applies.

It's great to share solutions to problems and to write about things we like, things we have done, things we think are cool, but we must ensure that what we write is technically sound; otherwise we continue to breed a culture and community of half-baked products and websites.

This is where I believe sites like Stack Overflow and its derivatives will help. As these sites continue to grow and questions with highly voted answers appear in our search engines, hopefully quality will begin to cut through the noise.

Our "Google" culture no doubt makes things easier and quicker, and I'm a believer in "have you Googled for it?", but I do think that both as writers and as searchers we need to be more analytical of what we read, in order to produce things of higher quality and to grow in our profession.

Wednesday, 26 August 2009

TDD Masterclass in the UK

Roy Osherove is giving a hands-on TDD Masterclass in the UK, September 21-25. Roy is the author of "The Art of Unit Testing", a leading TDD & unit testing book; he maintains a blog which, among other things, has critiqued tests written by Microsoft for MVC (check out the test reviews category) and has recently been on the Scott Hanselman podcast, where he educated Scott on best practices in unit testing techniques. For further insight into Roy's style, be sure to also check out Roy's talk at the recent Norwegian Developers Conference.

Full Details here:

bbits are holding a raffle for a free ticket for the event. To be eligible to win the ticket (worth £2395!) you MUST paste this text, including all links, into your blog and email with the url to the blog entry. The draw will be made on September 1st and the winner informed by email and on

Friday, 26 June 2009

FlickVimTube - An FCKEditor Plugin

First things first, ignore the random title for this post; it does have a meaning and it's the best I could come up with ;)

The other evening I was hitting some downtime and, rather than carry on playing FarCry 2, I decided I'd write a quick FCKEditor plugin which I've been meaning to write for a while. By default the editor comes with functionality to insert flash files into your content, which works well; however, I wanted a way to insert only online videos from Flickr, Vimeo or YouTube. To insert these I was having to manually go into source view, paste in the embed code etc. A chore and a heartache.

So enough was enough and I banged together a quick plugin that would take a YouTube embed URL and then insert the appropriate embed HTML for it. This was actually quite simple. I worked out there were four main steps:

  1. Extract the video ID from the URL

  2. Create suitable embed markup and insert it into the editor

  3. Create a preview video so you can ensure it works before clicking OK

  4. Be able to view and update the video after inserting

Extracting the video ID could, I'm sure, be done with some clever regex, but for simplicity and speed I opted to simply slice the YouTube URL at the ?v= marker and use the remainder as the ID. As the official URL is in the format http://www.youtube.com/watch?v={id}, I have decided that for the first version of this plugin I can naively split like this, though I should really parse it properly.
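That naive slice can be sketched as a small JavaScript helper. This is a rough illustration of the approach described above, not the plugin's actual code; the function name is mine:

```javascript
// Naive extraction of a YouTube video ID: take everything after "?v=".
// A sketch only; the real plugin should parse the URL properly.
function extractYouTubeId(url) {
    var marker = '?v=';
    var index = url.indexOf(marker);
    if (index === -1) {
        return null; // not a recognisable YouTube watch URL
    }
    // Slice off everything up to and including "?v=".
    var id = url.substring(index + marker.length);
    // Strip any extra query string parameters that follow the ID.
    var ampersand = id.indexOf('&');
    return ampersand === -1 ? id : id.substring(0, ampersand);
}
```

The obvious weakness, as noted above, is that any URL shape other than `watch?v={id}` falls through to `null`.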

To insert the embed object I made use of some built-in FCKEditor methods to create the object and then assign the attributes to it.

// Create the <embed> element inside the editor document
e = FCK.EditorDocument.createElement('EMBED');
SetAttribute(e, 'src', embedUrl);

SetAttribute(e, 'type', 'application/x-shockwave-flash');
SetAttribute(e, 'pluginspage', '');

// Fall back to default dimensions if none were entered in the dialog
SetAttribute(e, "width", GetE('txtWidth').value == '' ? 360 : GetE('txtWidth').value);
SetAttribute(e, "height", GetE('txtHeight').value == '' ? 150 : GetE('txtHeight').value);
SetAttribute(e, "allowscriptaccess", "always");
SetAttribute(e, "allowfullscreen", "true");

After getting the basics working I decided to add support for Vimeo and Flickr. This was a case of just working out which service it was, parsing out the IDs, and then setting the correct embed URL on the embed object.
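The "working out which service it was" step might look something like this. The hostname checks are my assumption of how you would tell the three services apart; the real plugin may do it differently:

```javascript
// Work out which video service a URL belongs to, by hostname substring.
// A sketch of the service-detection step; hostnames are assumptions.
function detectVideoService(url) {
    if (url.indexOf('youtube.com') !== -1) return 'youtube';
    if (url.indexOf('vimeo.com') !== -1) return 'vimeo';
    if (url.indexOf('flickr.com') !== -1) return 'flickr';
    return null; // unsupported service
}
```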

FCKEditor plugins are dead easy to configure: in your FCKEditor settings file simply register the plugin using FCKConfig.Plugins.Add and ensure your custom toolbar settings include the button, 'OnlineVideo':

FCKConfig.ToolbarSets["mjjames"] = [
    // ... your other toolbar buttons here ...
    ['OnlineVideo']
];

FCKConfig.Plugins.Add('OnlineVideo', 'en');

The plugin includes language settings for English; additional languages can be added by simply providing translations for the labels and changing your plugin registration to include the new file name. If you want to add French, for example, create a file called fr.js in the plugin's languages folder and the plugin registration becomes:

FCKConfig.Plugins.Add('OnlineVideo', 'fr');

The following language settings are available:

OnlineVideoTip, DlgOnlineVideoTitle, DlgNoVideo, DlgInvalidVideoUrl, DlgOnlineVideoURL, DlgOnlineVideoWidth, DlgOnlineVideoHeight, DlgOnlineVideoQuality, DlgOnlineVideoLow, DlgOnlineVideoHigh.
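As an illustration, a hypothetical fr.js covering those settings might look like the following. This assumes FCKEditor's usual convention of adding entries to the global FCKLang object, and the French strings are purely illustrative, not official translations:

```javascript
// Hypothetical fr.js for the OnlineVideo plugin.
// Assumes FCKEditor's FCKLang convention; translations are illustrative only.
var FCKLang = FCKLang || {};

FCKLang.OnlineVideoTip        = 'Insérer une vidéo en ligne';
FCKLang.DlgOnlineVideoTitle   = 'Propriétés de la vidéo';
FCKLang.DlgNoVideo            = 'Veuillez saisir une URL de vidéo';
FCKLang.DlgInvalidVideoUrl    = 'URL de vidéo invalide';
FCKLang.DlgOnlineVideoURL     = 'URL';
FCKLang.DlgOnlineVideoWidth   = 'Largeur';
FCKLang.DlgOnlineVideoHeight  = 'Hauteur';
FCKLang.DlgOnlineVideoQuality = 'Qualité';
FCKLang.DlgOnlineVideoLow     = 'Basse';
FCKLang.DlgOnlineVideoHigh    = 'Haute';
```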

The plugin can be downloaded as a zip file and is licensed under the Creative Commons Attribution-Share Alike 3.0 License.

In the future I intend to extend this further, maybe allowing users to find and search for videos using the various APIs provided by the video sources, but that will be at a later date.

Tuesday, 19 May 2009

Wasting Bandwidth One Image at a Time....

Content Management Systems are great; they allow the average Joe a great level of control over their website. Gone are the days of clients asking for static pages to be amended; we live in the database-powered age, where the power of editing and updating content is handed to the client.

Our clients get to use Rich Text Editors like FCKEditor. They look and feel just like Microsoft Word: clients can play with text, upload images, resize them simply by dragging them, and are very often really happy.

However, this often comes at a cost. Most RTEs by default "resize" images simply by setting the HTML img attributes height and width. As many people are aware, this doesn't actually resize the image; it just tells the browser to take this massive image and render it smaller. The end user still has to download the huge image (I have seen sites downloading 2000px x 1200px images and then showing them at only 250px x 120px!), which can take a while depending on their internet connection, and ultimately we the developers have to pay the bandwidth cost for those images.

Tonight I decided to come up with something I could put on my church websites to combat this. It's more to aid the overall user experience rather than bandwidth cost but it all helps ;)

The Solution

The solution is actually really simple.

  1. Take the CMS content from the database

  2. Before rendering it to the page, load it into an HTML parser

  3. Using the parser, find all the image tags

  4. Use the height, width and src attributes to generate a new URL to an image resizer app

  5. Replace the image's src attribute with the new URL

  6. Render the content to the page

Parsing HTML in ASP.Net really is easy nowadays. Previously it was a pain: regex would never work properly, or you risked parsing your page as XML. Luckily there are now tons of third-party libraries; I chose the HTML Agility Pack, as a lot of people seemed to recommend it on Stack Overflow.

For my solution I simply did the following:

HtmlDocument doc = new HtmlDocument();
doc.LoadHtml(page.Body); // the CMS content from the db model
HtmlNodeCollection imgs = doc.DocumentNode.SelectNodes("//img");
if (imgs != null)
{
    foreach (HtmlNode img in imgs)
    {
        if (img.Attributes["height"] != null && img.Attributes["width"] != null)
        {
            HtmlAttribute src = img.Attributes["src"];
            string imgurl = src.Value;
            src.Value = String.Format("/loadImage.aspx?image={0}&action={1}&width={2}&height={3}",
                imgurl, "resizecrop", img.Attributes["width"].Value, img.Attributes["height"].Value);
        }
    }
}
mainContent.Text = doc.DocumentNode.InnerHtml;

So what's going on here? Well, first I create a new HtmlDocument, provided by the HTML Agility Pack, and load my HTML from my db model, page.Body. Then I use a nice simple XPath expression to pull out any img tags within the content and stick them into an HtmlNodeCollection. Next, after ensuring I have some nodes, I spin through each of them and check whether they have both a height and a width attribute. If they do, I use these and the original image URL to write a new URL; in my case I have a page that does the resizing, though ideally this should be an HttpModule or similar to keep things tidy. Then, with the img tags updated, I render the HtmlDocument back out to the page.

That's it, a few lines of code. Now, I was concerned I might have slowed down my page, as parsing and spinning through tags could be CPU intensive; however, from quickly playing with it on my machine I noticed no real difference, and in production I'd output-cache the page for a period of time anyway. I noticed a 100KB difference on one of my sites' home pages, and they weren't massive images, so I think it's well worth it.

There you have it. At the end of the day, CMSs provide users with a lot of power and flexibility, but as developers we have to ensure we provide systems that cater for users abusing that power, be it uploading massive images or otherwise. In the case of images, simply resizing them on render keeps the frontend responsive whilst allowing the client to provide high resolution images if needed; it wasn't a lot of effort and the improvement alone was worth it.

Wednesday, 29 April 2009

Content Disposition in different browsers

Today I had to resolve an issue where a dynamically generated file download behaved very differently in different browsers, or in some cases didn't work at all.

The setup: we had an XML file with a custom extension, say .mj, which was being served up by classic ASP. The HTTP response had a Content-Disposition header and a content type set.

Response.AddHeader "Content-Disposition", "attachment; filename=""our file.mj"""
Response.ContentType = "text/xml"

This worked fine in Internet Explorer: the file was downloaded as "our file.mj". However, Firefox and Chrome acted very differently; Firefox downloaded the file as just "our", and Chrome as "our file.xml".

In Firefox it appears the issue is caused by having a space in the file name (a forum post by funkdaddu helped me on this), so by removing the space Firefox could now download the file as "ourfile.mj".

Chrome, however, did not want to play ball; it was still insisting on changing the file extension to ".xml". I guessed this was because we were serving up a text/xml MIME type under a different file extension, so I decided to change the content type to "Application/Save" just to see if it would make a difference, and amazingly it did.
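Pulling the two fixes together, the header values can be captured in one small helper. This is a JavaScript sketch of the classic ASP fix above (the function name is mine), not production code:

```javascript
// Build the header values that proved consistent across browsers:
// strip spaces from the filename (Firefox truncated at the first space)
// and serve as "Application/Save" (stops Chrome rewriting the extension).
// A sketch of the fix described in this post, not a general solution.
function buildDownloadHeaders(filename) {
    var safeName = filename.replace(/ /g, '');
    return {
        'Content-Disposition': 'attachment; filename="' + safeName + '"',
        'Content-Type': 'Application/Save'
    };
}
```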

So there we have it: removing spaces from the file name and setting the content type to "Application/Save" seems to make all browsers behave at least somewhat consistently with the Content-Disposition header. It's worth noting that Scott Hanselman has a great blog post, The Content Disposition Saga, which talks a lot about how different versions of IE handle it. GreenBytes also has a ton of test cases for the HTTP Content-Disposition header, which I certainly found helpful.

Thursday, 23 April 2009

Looking for a URL using Linq and SiteMaps

I've been off sick from work with Man Flu but this afternoon I was getting bored of staying in bed for the second day so I got out my laptop just to have a play in between blowing my nose.

I wanted a quick way of looking up the full URL path for a page within a sitemap, where the only piece of information I know is the key of the page. Now, I knew I could possibly make use of the IndexOf method: it expects a SiteMapNode matching the value you are looking for and returns the index of that node; you then need to get the node out of your collection of nodes. Example below.

SiteMapNodeCollection nodes = SiteMap.RootNode.GetAllNodes();
int nodeIndex = nodes.IndexOf(new SiteMapNode(SiteMap.Provider, pagekey));
// note: IndexOf returns -1 if the key isn't found, which would make the indexer throw
return nodes[nodeIndex].Url ?? String.Empty;

Now, that method does work; however, it felt dead clunky, and I was sure I could write a shinier .Net 3.5 one-line way of doing this using Linq. In fact it was dead easy. First I still needed to get all the nodes, SiteMap.RootNode.GetAllNodes(), but I then cast these to SiteMapNode so I could use FirstOrDefault with a nice lambda expression that matches the key property. Code below.

SiteMapNode node = SiteMap.RootNode.GetAllNodes().Cast<SiteMapNode>().FirstOrDefault(n => n.Key.Equals(pagekey));
return node != null ? node.Url : String.Empty;

Simple. My only concern is that the bigger the sitemap becomes, the more memory this uses and the slower it gets, especially if called multiple times per page. It may make sense to cache the SiteMapNodeCollection for the duration of the page, but that's beyond the scope of this brief article.

What hopefully you can see, though, is that a little bit of Linq and lambda expressions can take chunks of code that seem long-winded and turn them into nice neat one-liners, which I think is usually more readable.

Wednesday, 22 April 2009

Book Review: ASP.Net MVC 1.0 Quickly

So this month I again have the privilege of writing another book review, this time in an area I have a particular interest in. ASP.Net MVC has recently been released and there is no end of books coming onto the market, one of these being Maarten Balliauw's ASP.NET MVC 1.0 Quickly.

When the book arrived I first noted how it used the traditional orange Packt colour scheme, with an interesting picture of a pair of glasses on the beach. This I preferred over the look of the last book I reviewed, though I am yet to work out the significance of the picture, if there even is one.

The book starts by saying that it will take you through "the essential tasks" and "does not cover every single feature in detail". This, in my opinion, is not a bad thing. It is not a full reference book like ... but more a rapid guide to get developers using MVC and knowing pretty much all the basics; you can then pick up the in-depth knowledge as you go along.

The book covers the following topics, not necessarily in order:

  • What MVC Is

  • Brief comparison between ASP.NET web forms and ASP.NET MVC

  • What Controllers, Views and Models are and how you go about creating them in VS

  • What the process of a page is and how you handle interactions

  • What routing is and what you can do with it

  • Customizing the framework

  • Using Web Forms features in MVC

  • JQuery and AJAX in MVC

  • Testing and Mocking

  • Deployment

It then has three appendices which I recommend NOT skipping: a full application with source code, information on the MockHandlers available, and finally tons of links on where to get more information on topics. The links help fill in the gaps the book leaves due to its "quickly" approach and, for me at least, have been the most thumbed pages of the book.

The format of the book is very clear: lots of examples in C#, screenshots where appropriate, and well worded. In particular I like the areas where Maarten Balliauw takes the time to explain all the options for an attribute. For example, in Chapter 4 he outlines Action Method Attributes; rather than just giving a brief description of what they are, Maarten briefly outlines all the possible attributes, with a simple piece of source code where appropriate. Again this highlights how the book is just trying to make you aware of what exists, so when you come to write something you think about what you could use and then go away and find out more if needed.

I have to admit I really like the book as an intro to MVC and a way to make people aware of it; it highlights a lot and gets you thinking about design methodologies. It's one I recommend others read.


Presentation 8/10 - Overall this book feels well put together and everything is clearly laid out

Code Examples 9/10 - Quite a high score for this: the book is littered with short code snippets and examples, but what really does it for me is the example application included in the appendices. Simply working through this reinforces everything you have read so far and highlights more. Well worth looking at.

Quality of Content 8/10 - Again repeating what I said earlier, but I feel the content has been well put together and arranged in a manner that is clear and conducive to learning.

Overall 8/10 - If you are just looking to find out what ASP.NET MVC is all about and want an outline to get you started, this book is for you. It's not claiming to be a reference but a starting block, and the links in the back give you somewhere else to go afterwards. Well worth a read.

Saturday, 28 March 2009

Google SiteMap Generator + Input validation failed Error

Since Google released their Google Sitemap Generator I have been using it on my web server for the sites that I manage. Setting it up and getting it running was fine and I hadn't had a problem; that was, until this week.

This week I noticed that, as I was only letting the generator update the sitemap from actual URL hits, quite often a few of my sites aren't hit for a day at a time, which was resulting in empty sitemaps. This then causes Google Webmaster Tools to whinge at you, which isn't a good thing. So I decided to update my settings to include parsing my IIS log files, in the hope it would use the previous days' logs and not generate blank files.

This is where I hit a road block. Whenever I changed a setting and clicked save, the generator would be really helpful and tell me that "Input Validation Failed" and to basically sort myself out. I was confused to say the least; everything looked fine and no field was highlighted as erroneous, so I ended up giving up and leaving it.

Today I came back to it and tried again, but the same error occurred. So I started to poke around and decided to manually update the sitesettings XML file, usually located at C:\Program Files (x86)\Google\Google Sitemap Generator\sitesettings.xml. And this is when I noticed the issue that was affecting me.

Each site within your IIS setup has a node in the sitesettings XML file; this holds information about the site's host name, whether it's set up for sitemaps and so on. But it also contains the location of the IIS log files, regardless of whether you are parsing them or not.

Now, a few weeks ago I decided to move all my sites' log files from the default location of C:\WINDOWS\system32\LogFiles\{site} to a more convenient location, for this example let's say E:\LogFiles\{site}. This was all well and good for IIS, but the Google Sitemap Generator records these locations when the sites are created, so after I moved the log files the generator was still looking at the old location. A bit of guessing (and thinking about how I would do it) led me to believe that when you enable log file parsing the generator checks whether it can read the log files; as they have moved, it can't find them and errors.

All I did to fix this was a manual find and replace on the log file locations within sitesettings.xml, save the file, and restart the generator, to find it finally happy and working OK. Hopefully Google will remove this issue in the next release of the generator, or at least make it clearer what is wrong. Ideally, upon startup, or when you choose to enable log file parsing, it should check IIS to see whether the path to the log files matches the one it has stored, and if not update it before validating. This would save heartache for a few people at least.

So I'm now happy again with the generator. The problem wasn't that hard to fix and made a lot of sense once spotted; it just shows what a little bit of investigating can do.

Friday, 20 March 2009

Book Review: C# 2008 and 2005 Threaded Programming

So last month Packt Publishing contacted me regarding sending me a promotional copy of C# 2008 and 2005 Threaded Programming to review. This is the first time I have been asked to do a book review and decided to take them up on the offer.

Now, I have been using ASP.Net for around three years, but I've never had to, or decided to, look into writing multi-threaded apps, so the fact that this book is aimed at beginners meant I was an ideal target audience.
Packt shortly sent me the book, and upon first looking at it I thought it looked a bit ugly! I know you can't tell a book by its cover, but this cover did put me off; the green and the picture didn't do it for me, but alas I carried on anyway.

The book is organised into several chapters and is example driven. What I mean by this is that it doesn't give you bags of theory and then an example; it takes the approach of you following along with the code examples, with explanations of bits and pieces in between. More on this later.

The chapters within the book are organised so that each chapter delves further into multi-threading as you progress. First of all it explains what multi-threading is, then it looks at basic thread techniques, background workers, debugging multi-threaded apps and thread pools, all the way up to exploring the future of multi-threaded apps and the new framework extensions to help with this. On the whole the chapter organisation made a lot of sense to me and allowed you to build upon what you had learnt before. The one thing that struck me was that I expected thread pools to be covered way before chapter 9, but that's a minor thing.

One of the things I especially liked about this book is that at the end of each chapter you are given a quick pop quiz on the chapter's content. This, for me at least, provided a quick way of ensuring I had understood the chapter, and if I hadn't, to go back and re-read it, so this was good.

As I mentioned earlier, the book is based on learning through examples and less on theory. Personally I'm not a huge fan of this technique. The writer, Gastón C. Hillar, does try to provide examples that are practical; however, I find that by simply following these you don't really learn what is going on. You learn roughly how threading works and that it's there, but when you need to use it in a real-life application, or need to work out why something isn't working as expected, you are left without the knowledge to solve these issues.

I do realise that this book is for beginners and is meant to get developers to look into and start writing multi-threaded apps, not to be a complete resource, but personally I would prefer a touch more theory. In particular, locking is overlooked, as is what setting a WinForms app to [MTAThread] really means (you can't use dialogues, for example). This was probably left out to keep things simple for beginners, but not discussing locking or exceptions could mean bad practices are picked up and carried into production code.

It is worth mentioning that this book focuses solely on WinForms apps; it doesn't look into WPF or WebForms, and this is both a blessing and a curse in my eyes. With that said, WinForms is simple to learn and the examples really do cover everything you need to get them working, so if you have never used WinForms don't be put off reading this book; by the end of it you will not only know more about multi-threading but also how to write simple WinForms apps.
Also, the book says that you can use Visual Studio 2008 Standard edition to debug multi-threaded apps; I found out that sadly this isn't the case. In order to have the Threads debug window you need the Pro edition or above of Visual Studio.

Overall I found the book alright. Personally I think the presentation (colour schemes, internal typography) could do with improvement; the headings look like they are set in Impact, which is wrong on so many levels, and the examples can seem slightly far-fetched, but the book does cover a lot. As someone new to multi-threading, by the end of it I felt confident enough in what I had learnt to write a simple multi-threaded WinForms app for work to perform some tests.


Presentation - 6/10 - Although the bulk of the content is clear to read, the cover and headings let it down for me.

Code Examples - 7/10 - The code examples are clearly written and cover all the detail you need, but I feel they aren't as real-life as they could be, which hinders taking what they are meant to show you and applying it to real-life scenarios.

Quality of Content - 7/10 - Overall I felt the quality of the content was quite good; potentially a bit over the top in places about becoming a "multi-threaded guru", but overall OK. One downfall was saying that Visual Studio Standard edition can be used to debug multi-threaded apps when it can't.

Overall - 6.5 / 10
In light of everything, I'm not going to suggest this is a book that everyone should read / own, unlike other books such as The Pragmatic Programmer. However, if you are looking at learning about multi-threading and want something to ease you into it, then this is for you; it covers the basics of everything you need to know and what to expect in the future.

Thursday, 19 March 2009

MVC Snippets: must be a reference type in order to use it as parameter 'TModel' in the generic type or method 'System.Web.Mvc.ViewUserControl'

Recently I have been playing with ASP.NET MVC, in particular I have been building myself a new website. I thought it might be good to post any peculiar things / lessons I learn during this build. Tonight I stumbled across one of these lessons.

When you strongly type a view or partial view, the type must be a reference type; otherwise you get an HttpCompilation error: "your data type" must be a reference type in order to use it as parameter 'TModel' in the generic type or method 'System.Web.Mvc.ViewUserControl'. Initially I couldn't figure out what this meant, as I was passing my type through and it existed, etc. But then I realised I had declared my type as a struct, not a class.

If you are unsure of the difference between a class and a struct, I recommend looking it up. The gist is that a class is a reference type and a struct is a value type. Because a struct isn't a reference type it can save memory, as instances are stored inline rather than allocated on the heap with a reference to follow; this is great for small, short-lived structures, but not for models in ASP.Net MVC.

After changing my type to a class all was sorted and my view could compile again. A lesson learned, one of many I am sure ;)

Wednesday, 18 March 2009

Open Hack Day 2009

Well today registration for Yahoo! Open Hack Day 2009 opened, and with much excitement I signed up hoping for a place.

The previous Hack Day was really great fun, even though my "hack team" failed to finish our project. This year I'm hoping to do something a lot simpler but just as enjoyable.

Here's hoping I get a place ;)

Sunday, 15 February 2009

Parsing an XML Boolean

Today I came across a situation where I was taking a value from an XML file which is a boolean. Now, being me, I knew that someone would use either 1 or true to indicate the boolean value; I, for example, always use 1 and 0, but I know people who prefer the "proper" way of saying true or false, especially if you don't have an XSD handy.

The XML itself is fine; however, once I had loaded the XML file into my .Net application I needed to parse the value as a boolean. This is where I hit a roadblock: Boolean.Parse will only parse the string values "true" or "false", not "1" or "0".

A quick explore through various sources led me to XmlConvert.ToBoolean(), which is part of System.Xml. XmlConvert lets you convert from XML data types to .Net data types, and in the case of booleans can convert "1" to true.
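For illustration, the same lenient parse is easy to sketch in JavaScript. The XML Schema boolean lexical space allows exactly "1", "0", "true" and "false", which is what XmlConvert accepts; the function below is my own sketch of that rule:

```javascript
// Parse an XML boolean the way XmlConvert.ToBoolean does:
// the XML Schema lexical space allows "1", "0", "true" and "false".
// A sketch for illustration, not part of the original .Net solution.
function parseXmlBoolean(value) {
    var v = value.trim();
    if (v === '1' || v === 'true') return true;
    if (v === '0' || v === 'false') return false;
    throw new Error('"' + value + '" is not a valid XML boolean');
}
```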

This saved me loads of time so I thought I'd post it up here for others to find and hopefully enjoy.

Saturday, 24 January 2009

Where's all my battery gone? / Windows 7 Hardware Interrupts

So I got the Windows 7 Beta as soon as it came out and installed it on my laptop to give it proper "real world" testing, not a VM or a machine I use rarely. I chose to do this as it means I can give Microsoft proper feedback and if it all goes wrong I can simply reinstall Vista clean and restore data from the previous night

I have to say I really like Windows 7: the UI changes work for me, I like some of the new features, and on the whole I'm happy with it. But this post isn't about my last two weeks of running Windows 7; I'm already writing that post and will finish it when I'm happy I've given it real-world usage. This post is about how, since I installed Windows 7, my battery life sucks.

Now this isn't a rant, instead I want to highlight how I found out that something was wrong with my install.

So, when I started using Windows 7 I noticed that my system kept slowing down. I soon fired up Task Manager and found that AVG antivirus (I use the free edition for home users) was eating a large chunk of my CPU; it was always running at around 30% if not more. I checked the Windows 7 blog and found they had a list of compatible antivirus solutions. AVG was one, but it didn't indicate whether the free version was covered, so I decided to remove AVG and try the Kaspersky beta instead.

After doing this the system did feel more responsive, so I was happy and thought that was the end of it. However, this week, in particular the last 3 days, I noticed my battery was running out in 60 minutes or less; normally I get at least 1.5 hours, 2 if in "power saver" mode. Initially I wondered whether the power profiles were working properly. I checked these and all seemed fine; "balanced" mode whilst running on battery would set the minimum CPU usage to 5%, so all seemed well.

This morning I decided to look into the battery issue further. I started Task Manager and nothing seemed to be using the CPU, but when I looked at the performance tab I saw something very peculiar (see the image to the right): Core 0 was running at 90% at all times! This had to be the cause of the battery depleting so quickly; if a core is always active, the power saving features can't kick in, thus burning good old power cells.

Unfortunately Task Manager wasn't showing what was using the CPU; not even showing processes from all users helped. So I went and got the only application you need to investigate memory leaks or CPU usage, Process Explorer. Upon firing this up I saw the culprit straight away: Interrupts, which is part of the System Idle Process and handles all hardware interrupts, was constantly eating ~40% CPU!

Now here's the problem: this is a system process with no threads, it's core, and there's no easy way (that I'm aware of) to find what is causing the interrupts to constantly burn CPU. I tried shutting down everything that could be causing the issue; my suspects were the animated Synaptics TrackPad icon, as this responds to your input, and the On Screen Display program that indicates when you turn Caps Lock on/off, but still nothing changed.

This sadly looks like a Windows 7 issue, and one that may force me to reinstall my laptop back to Vista. Has anyone else had this issue or know what causes it? If so, leave a comment; in the meantime I'm going to report the issue to the Windows 7 guys, and I'll update this post if I hear anything back.


Well, it looks like this is now solved. Intel have released a new chipset driver: Intel Corporation driver update for Mobile Intel(R) 965 Express Chipset Family (Prerelease WDDM 1.1 Driver). This is an optional update for Windows 7, hence how I managed to miss it the first time I checked. So there it is: if you find hardware interrupts are cooking your CPU, check for an updated chipset driver. It helped me out, anyway.

Further Update - 27/01/2009

Much to my dismay, this issue is still present. The driver update seemed to make things better; however, if my laptop goes to sleep and I then resume, the issue reoccurs. Hopefully Intel will release an updated driver to fix this...

Wednesday, 21 January 2009

Visual Studio Welcome Screen

Today a colleague noticed that, whilst he was debugging his web application, random requests were being made to Microsoft. These requests were also returning an HTTP/1.1 301 Moved Permanently header and, apart from being annoying, were causing him a few issues.

From experience I soon realised that these requests were being made by the Visual Studio welcome screen. Even if you don't use the welcome screen, Visual Studio keeps the Start Page news channel RSS feed up to date, by default every 60 minutes. If you aren't using the welcome screen then it's probably worth disabling the RSS feed's automatic content download; not only will this stop the random requests (if you even notice them), but if you are working on a mobile connection and paying per KB it will save you some bandwidth.

To disable the automatic content update, simply go to Tools > Options, tick the Show All Settings checkbox, then under the Environment leaf find Startup. Once here, simply untick the "download content every" checkbox. You can also configure it to update less frequently, or alter the news channel location; for example, you may prefer to use the DotNetShoutOut RSS feed.