Saturday, 10 March 2018

WebUSB - An unexpected update...

In January at my new(ish) user group Momentum Meetups, I presented on "Reaching out beyond the Chrome" (although I was suffering from food poisoning, so I was definitely going green at points!). It covered WebUSB and WebBluetooth. I was excited by how powerful these APIs are and even had a colleague present something we had been prototyping in our workplace.

During the talk I mentioned security and how the API had been designed so it only works over HTTPS, and that there's a permission model where you have to approve a device before it can be used.

Unfortunately this week it came to light that authentication devices could be bypassed via WebUSB. These devices are a great way of proving you are who you say you are on the web, beyond basic 2FA texts or authenticator apps, so being able to bypass them via WebUSB is a big deal :(

My Original Security Slide
A few days later Google disabled WebUSB by default, effectively killing it off until such time as it's made secure.

I'm now worried about the future of WebUSB, as it's taken years to really gain adoption and it's never had cross-browser support. So my advice for now is not to consider using it, and hopefully something else will come along or it will be made more secure.

I really liked the prospect of WebUSB, and the prototype my colleague and I made for our work was super powerful and opened up a new avenue for interactions, but security has to come before convenience. The particular attack vector feels limited to certain types of phishing attack, but even so, security has to take priority.

[Update 12/03/2018] As expected, the Chromium team have a patch for this that bakes in a blacklist of affected devices which the team can update. This has been pushed out in Chrome 65. It feels like a necessary change, but the risk is still present for new devices.

Monday, 5 March 2018

A different day, a different MSBuild issue...

Recently I started working on a small tweak to an existing web project; it's a small internal dashboard sort of thing, nothing complicated about it.

However, after I started working on it I found I could no longer build the project. It came up with:
error CS1525: Invalid expression term 'throw'
error CS1002: ; expected
error CS1043: { or ; expected
error CS1513: } expected
error CS1014: A get or set accessor expected
error CS1513: } expected
When I looked at the location of the build errors I could see some perfectly valid code, albeit C# 7:

public IEnumerable<AttemptResult> Attempts
{
    get => _attempts;
    set => _attempts = (value ?? Enumerable.Empty<AttemptResult>());
}
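Expression-bodied get and set accessors only arrived in C# 7.0, which is exactly the sort of thing an older compiler will choke on. For comparison, the same property written in C# 6 friendly syntax (just a sketch, using the backing field from the snippet above) would be:

public IEnumerable<AttemptResult> Attempts
{
    get { return _attempts; }
    set { _attempts = value ?? Enumerable.Empty<AttemptResult>(); }
}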

Why would it not like the C# 7 code? I'm in VS2017, so it should all be fine. When I double checked the language setting under Advanced Build Settings it was correctly set to "C# latest major version", so it wasn't a case of the project having been pinned to an older language version.

I next turned to the build log to see what was happening and noticed something odd:
The core compile step was using:
c:\repositories\xxxx\packages\Microsoft.Net.Compilers.1.0.0\build\..\tools\csc.exe /noconfig /nowarn:1701,1702,2008 /nostdlib+ /errorreport:prompt /warn:4 /define:TRACE /errorendlocation /preferreduilang:en-US  etc...

This wasn't pointing at the Roslyn compiler that ships with Visual Studio, which normal projects always use:
 C:\Program Files (x86)\Microsoft Visual Studio\2017\Enterprise\MSBuild\15.0\Bin\Roslyn\csc.exe /noconfig /nowarn:1701,1702,2008 /nostdlib+ /errorreport:prompt

So I inspected the NuGet packages for the project and found it was using Microsoft.Net.Compilers, and its description says:
 Referencing this package will cause the project to be built using the specific version of the C# and Visual Basic compilers contained in the package, as opposed to any system installed version.
The project referenced version 1, which was released in 2015, definitely before C# 7 was a thing. I followed the link in the NuGet package to get to the Roslyn GitHub page, which details the NuGet packages and which language version each package version supports.

I was correct: C# 7 support didn't arrive until v2, so I could update the package and get things building. However, I was puzzled as to why it was even referenced... it turns out I have no idea! Nothing actually depended on it, although we used to have Application Insights in the project so it might have been left over from then.
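If you hit the same thing, updating the package (or simply removing it if nothing depends on it) from the Package Manager Console should sort it out, something along these lines:

PM> Update-Package Microsoft.Net.Compilers
PM> Uninstall-Package Microsoft.Net.Compilers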

A problem solved and a lesson learnt for the future :)

Friday, 2 March 2018

project.json doesn't have a runtimes section, add '"runtimes": { "win": { } }' to project.json within a .Net Class Library

Recently I've started experiencing weird build errors when switching branches on a product.
Error : Your project.json doesn't have a runtimes section. You should add '"runtimes": { "win": { } }' to your project.json and then re-run NuGet restore.
It's a weird-sounding error, and the fact it mentions project.json makes it sound like a hangover from when .NET Core and Standard used project.json files, before they moved back to .csproj files again.

My first thought to resolve this is always to try a clean and rebuild, particularly when swapping branches, as things can get left hanging around, but this didn't work.

I took a closer look at the affected projects and the cause is down to one of the branches of our product. It's an early development branch of a new feature where the project has become .NET Standard 2.0 but also targets .NET 4.5, whereas the support and main dev branches are still full framework class libraries. The cause is almost certainly MSBuild producing different artifacts on each branch.
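To give a rough idea of the difference (not our actual project file, just a sketch), the feature branch uses the new SDK-style project with something like this in its .csproj:

<PropertyGroup>
  <TargetFrameworks>netstandard2.0;net45</TargetFrameworks>
</PropertyGroup>

whereas the other branches still use the old full framework project format, so the restore output left in the obj folder ends up looking completely different depending on which branch built it last.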

The solution: simply delete the obj folder from the affected projects when you switch branches, so there are no leftover artifacts affecting the build.

For those that are interested, or if you don't want to be deleting your whole obj folder all the time: the error actually only happens if there is a project.assets.json within the obj folder, as even the new .csproj format generates this file. Delete that file when you switch branches and your builds will work again.
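If you want to script it, a quick bit of PowerShell run from the solution root does the job (a rough sketch, tweak to taste):

# Remove every obj folder...
Get-ChildItem -Recurse -Directory -Filter obj | Remove-Item -Recurse -Force
# ...or just the offending file
Get-ChildItem -Recurse -Filter project.assets.json | Remove-Item -Force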

[Update 16/03/2018]
This error can also be seen as:
C:\Program Files (x86)\Microsoft Visual Studio\2017\Enterprise\MSBuild\Microsoft\NuGet\15.0\Microsoft.NuGet.targets(186,5): error : Your project is not referencing the ".NETPortable,Version=v4.5,Profile=Profile7" framework. Add a reference to ".NETPortable,Version=v4.5,Profile=Profile7" in the "frameworks" section of your project.json, and then re-run NuGet restore.

Again it's the same issue: delete the project.assets.json and everything will build fine.

Wednesday, 28 February 2018

The curious case of hidden form fields changing their value....

So today I was looking into an odd issue our CEO experienced using a website. He would get a password reset email, but upon following the link and entering a new password, the change would fail with a cryptic message.

I said I'd have a quick look and see what I could see. I signed up, triggered a password reset and found no issue. I was using Chrome and assumed he had been too, but it turns out he was using Safari on his Mac, not Chrome. So I loaded up Safari on my Mac and used my link, again finding no issue.

To be thorough I asked him to send me his link: in Chrome there was no issue, but this time in Safari I hit it. My first thought was to check what was posted to the server, and sure enough in Chrome I could see an encoded access token being sent, but in Safari I saw my email address sent instead. I tried this on my CEO's machine and his machine posted his email address too. It looked like Safari was autofilling hidden form fields as well as visible ones!

This is crazy!

So I performed a quick Google... oh dear, oh dear... It seems Chrome and the others have fixed this but Safari hasn't. So, avoiding the elephant in the room that is Safari's data leak, I next wondered why it would only happen on some links (like his) and not mine.

The difference, it turns out, is that my token ended in = whereas his ended in a number (the tokens appeared to be base64 encoded). So Safari was only autofilling if the field value was purely alphanumeric and the field name looked like email, userid, etc.

Having found this issue, I wanted to work out what we could do to ensure our own applications couldn't suffer from it. Again, it only appears to affect hidden fields whose names contain specific words (email, userid, username, etc.), so its scope is limited, but affected users would be stuck with no obvious reason why.

It turns out the easiest way is to set the hidden field to be readonly; this stops Safari from autofilling it while still allowing the value to be posted back to the server.

So in HTML:
<input type="hidden" name="Model.UserID" value="1236712638768" readonly />

OR in Razor :
@Html.HiddenFor(m => m.UserID, new { @readonly="readonly"})

Note that for Razor you need to prefix readonly with an @ due to readonly being a reserved word in C#.

Now time to find out how many hidden form fields we have that need this applying..
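A bit of PowerShell makes the hunt easier, something like this (assuming Razor views; adjust the file types and pattern to suit your project):

Get-ChildItem -Recurse -Include *.cshtml,*.html | Select-String -Pattern 'HiddenFor|type="hidden"'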

A horrible issue but fortunately fixing it is fairly simple :)

Thursday, 11 January 2018

GZip Compression of JSON and IIS

Recently in work we've been monitoring the data usage of one of our main products. In today's modern age, as developers we often think of bandwidth as cheap; we usually have nice fast internet connections and don't overly worry about the *bloat* of our pages and applications.

Our application is often used on 4G connections, and whilst they are fast and the performance of the application is overall acceptable, we found that its data usage was higher than expected, causing a few concerns over the number of GB consumed per month.

We've benefited from GZip compression for many years now, and most people don't even think about whether it's enabled or running as well as it should be. Turning it on is usually as simple as installing the feature in Server Manager, then enabling Static Compression (for your static files: CSS, JavaScript, etc.) and Dynamic Compression (for the generated HTML and other dynamic responses). So it was assumed all was well.
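As an aside, the same can be scripted rather than clicked through; something along these lines should do it, assuming the static and dynamic compression modules are actually installed:

%windir%\system32\inetsrv\appcmd.exe set config -section:urlCompression /doStaticCompression:True /doDynamicCompression:True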

As I'm paranoid about these things, I started profiling the data use of our application in day-to-day use, using Google Chrome's DevTools to monitor the traffic and see the total data usage over time. It's worth noting that I enabled 'Preserve log' so that page reloads didn't wipe my history. As a failsafe I also ran Fiddler, as it's more than up to this task.

Ensuring the data isn't lost when pages change

Initially everything looked correct: the response headers on general requests indicated things were gzipped, although a few pages looked a bit too bloated and have been earmarked for some optimisation in a later release. Then I noticed that our XHR requests returning JSON looked bigger than I expected, and a quick look at the response headers showed:

Um.. where's my GZIP...
So our dynamically generated content wasn't always being compressed. I double checked the dynamic HTML pages again and they were still being compressed, so it wasn't a server load issue. I then wondered which MIME types IIS actually compresses when dynamic compression is enabled. Could it be only doing HTML or text-based types? I jumped onto our staging environment to double check the configuration setup.

It's worth noting that for IIS 7 through 8.5 compression settings aren't site scoped, they are at the server level, but in IIS 10 you can now configure this per site. To view these settings open up IIS Manager, click your server and then drill into Configuration Editor. Navigate to the system.webServer/httpCompression section, which lists all of the options available. I looked at the mimeTypes it was compressing and found that although application/javascript was listed, application/json wasn't!

O JSON, wherefore art thou JSON?
In previous years we hadn't used JSON responses that often; a lot of our pages simply returned HTML via XHR requests. But as things have been modernised JSON has become more and more prevalent, so this was quite a big deal.

I then used the UI to add this in and applied it to the server with an IIS restart. All of our JSON requests were then being gzipped, resulting in quicker pages and a significant reduction in bandwidth!

Compress that JSON!
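If you'd rather script the change than click through Configuration Editor, an appcmd command along these lines should add the type (the charset variant is worth adding too, depending on exactly what Content-Type your app returns):

%windir%\system32\inetsrv\appcmd.exe set config -section:system.webServer/httpCompression /+"dynamicTypes.[mimeType='application/json',enabled='True']" /commit:apphost
%windir%\system32\inetsrv\appcmd.exe set config -section:system.webServer/httpCompression /+"dynamicTypes.[mimeType='application/json; charset=utf-8',enabled='True']" /commit:apphost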

It is worth noting that by adding this we have added a small amount of extra CPU load to the server as it compresses this content, but it's more than worth the gain in performance and reduction in bandwidth. IIS is also clever enough to stop compressing dynamic content if the CPU load gets too high (you can configure the thresholds via the same settings), so it's a fairly safe change to make.

Hope this helps, and it's certainly worth checking your server's setup!