Removing NuGet Packages from a Git Repository

As part of my recent migration from GitHub to BitBucket, I decided to take advantage of a new feature in NuGet 1.6: restoring missing packages at build time.  While my repo isn’t that large yet, with NuGet I no longer need to commit my referenced DLLs to source control, which should keep my pushes light over time.  Here are the steps I used to keep my packages from being committed and to prune them from my commit history in Git.

1) Upgrade to NuGet 1.6

Make sure you have NuGet 1.6 installed.  You can download it from CodePlex, or install it via the Extension Manager in Visual Studio.  Then configure your solution to automatically restore missing packages at build time.
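
Enabling restore (right-click the solution in Solution Explorer and choose Enable NuGet Package Restore) adds a .nuget folder to the solution and, roughly, wires each project file up like this. This is a sketch of what NuGet 1.6 generates, not something you normally type by hand:

<!-- Added to each .csproj when package restore is enabled -->
<PropertyGroup>
  <RestorePackages>true</RestorePackages>
</PropertyGroup>
<Import Project="$(SolutionDir)\.nuget\NuGet.targets" />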

2) Remove packages from old git commits

NOTE: Don’t do this to a public repo that other people pull from unless you really hate them. You have been warned!

#Change packages/ to your packages location
git filter-branch --index-filter "git rm -rf --cached --ignore-unmatch packages/" HEAD

3) Add packages folder to .gitignore file

#NuGet
packages/

4) Clean Up Git

rm -rf .git/refs/original/
git reflog expire --expire=now --all
git gc --aggressive --prune=now

These commands clean up the temporary refs left over from filter-branch and run garbage collection so the pruned objects are actually removed.

5) Push --force

git push -f

This will push your rewritten commits up to the server, replacing the old history.

For More Information:

From GitHub to BitBucket: Changing git remote origin

Now that BitBucket is hosting Git repositories, I decided to migrate my private repositories from GitHub and save myself $7 a month. It turned out to be incredibly simple.

git remote rm origin  #Removes old origin
git remote add origin https://username@bitbucket.org/your.new.repo  #Adds new origin pointing to BitBucket
git push -u origin  #Pushes commits to new repo
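
A quick check afterwards confirms that origin points at the new repository:

git remote -v  #Lists the fetch and push URLs for each remote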

My New Amazon Kindle 3: Day 1

I bought a Kindle yesterday.  I have been toying with the idea of buying an ebook reader for a couple of years now, but it hasn’t been until the last couple of months that I have really considered it.  Over the last couple of years, I have built a small collection of ebooks, some that I bought from Amazon, others that I have downloaded from the internet.  I read a lot of sci-fi and fantasy, and tend to re-read series that I like, so I have been gradually replacing my paperback collection with ebooks and reading them on either my iPod Touch or my Windows Phone 7.

Earlier this month, I started a new job with a longer commute, and decided that I needed a way to carry multiple books with me on a device with enough battery life that I wasn’t constantly having to charge it all day long like I was with my cell phone.  Essentially, I was looking for a device that did the following things:

  • Enough battery life for multiple days of reading
  • Ability to read ebooks available from my library
  • Ability to easily add my personal library of ebooks onto it
  • Preferably able to read .epub format
  • Able to read my previously purchased Kindle books

No device right now does all of these things.  I was looking at the Kindle 3, the Barnes & Noble Nook, and the Sony PRS-350.  The last two both support epub and ebooks from my library, but only the Kindle will display the 25 books I already purchased from Amazon.  After about a week of research and waffling back and forth, I finally decided to buy a Kindle.  I have already made a significant investment in books from Amazon, and I trust them to stay in the ebook business longer than either Sony or Barnes & Noble.  Also, Amazon announced that later this year they will support Adobe Digital Editions, so I should be able to download ebooks from my local library onto the Kindle.  Plus, the Microsoft Store was offering a free cover and light with the purchase of a Kindle, so I saved $60 on that.

I have had it for 24 hours, and am happy with my purchase so far.  I bought a couple of new books from Amazon yesterday, and the reading experience has been better than on my phone, although maybe not by as much as I had expected.  The text is definitely crisper on the Kindle, so I am sure I will have less eyestrain in the long run.  I converted several ebooks from epub to .mobi format using Calibre and e-mailed them to my Kindle e-mail address, and they showed up within a couple of minutes, which is a significantly easier process than what I was doing to get ebooks onto my phone.  Hopefully, Amazon will offer .epub support in the future so I don’t have to convert every book in my collection to .mobi.

My only real complaint with the Kindle so far isn’t even about the Kindle.  Publishers have set Kindle book prices way too high.  One of the books I bought yesterday was released the same day on Kindle for $15.99 and in hardcover for $17.99.  I seriously debated buying it, and I probably would not have if I hadn’t just bought the Kindle and been looking for new books to put on it.  I could have bought the hardcover at Barnes and Noble, read it, and sold it back to my local used book store for half price, saving a lot of money.  The benefits of an ebook reader don’t outweigh prices like that, and publishers had better figure out a fair pricing model before they run into the same problems the music industry did a decade ago.

WcfTestClient Not Starting When Debugging WCF Service

I apparently don’t create enough new WCF services to remember this, but in order to get WcfTestClient to run when debugging a service (hitting F5), open up the project properties and, on the Web tab, set Start Action to Specific Page and select the .svc file you want to debug.

I created a new WCF Service project, renamed the .svc, and when I hit F5, no test client appeared.  I seem to recall running into this last year as well.
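
If you just need to poke at a service that is already running, the test client can also be launched by hand from a Visual Studio command prompt and pointed at the service address; the port and service name below are placeholders:

WcfTestClient.exe http://localhost:12345/Service1.svc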

HP Elitebook 8540p and Z600 Workstation: A Software Developer’s Review

Note: I started writing this review back in May, but for whatever reason never finished it. HP has configured them a little differently than 7 months ago, but I would select the same options now given my company’s environment.

Last year I started the process at my company of ordering new developer hardware for my team. For years, we had been running on underpowered business-class hardware (with extra RAM) and were really feeling the pain running our existing tools. We had a couple of goals for the new hardware, including:

  • Better multi-tasking. Ability to run multiple instances of Visual Studio, SQL Management Studio and other applications without watching the screen redraw or having applications crash.
  • Hardware that would handle the upgrade to Windows 7 without any noticeable loss of performance. We are still running Windows XP 32-bit throughout the company with no ETA on when Windows 7 or 64-bit would be available.
  • A good mix of portability, performance, weight, and battery life in a laptop. Our previous laptops were 12-inch Dell Latitude D420s, which were light and portable, but extremely underpowered.

Really, that was it. We were ordering custom hardware, so we had to come up with a budget and negotiate with the IT department responsible for supporting workstations. What we settled on was the HP Elitebook 8540p laptop and the HP Z600 workstation, configured as follows:

HP Elitebook 8540p

  • Intel i7 620M CPU – 2.66 GHz processor
  • 15.6 inch HD+ 1600×900 screen with 2MP camera
  • 4GB RAM DDR3 1333MHz (2 DIMMS)
  • 320GB 7200RPM Hard Drive
  • NVIDIA NVS 5100M Graphics Card
  • HP 120W Advanced Docking Station

HP Z600 Workstation

  • Dual Intel Xeon E5630 2.53GHz processors
  • 4GB RAM DDR3 1333MHz (4 DIMMS)
  • NVIDIA Quadro FX580 512MB Graphics Card
  • 250GB SATA 7200 RPM Hard Drive

We had to compromise on the laptops to fall within our budget and in line with what was being supported by the enterprise, but I’m extremely happy with them. I am able to keep 2-3 instances of Visual Studio 2010 Ultimate open with large solutions, along with SQL Management Studio, Word, Internet Explorer, Chrome, and Outlook, with SQL Server 2008 R2 and IIS running in the background, and see no noticeable lag switching between applications. The other day, I ran a large, long-running SSIS package in the background while developing in Visual Studio without any noticeable sluggishness.

The Z600s are fantastic workstations with server-class processors. Our daily work barely touches their capacity. Most of my team’s developers are using them, and our two pairing workstations are Z600s.

Once Windows 7 64-bit is an option, we will look at upgrading the RAM to at least 8GB, but we haven’t had any noticeable issues so far with the 3.2GB that Windows XP can actually see. With a slightly larger budget, I probably would have pushed for faster processors in the laptops, maybe the i7 640M, to better future-proof against a 3-4 year replacement cycle. We had some discussions about using solid state drives instead of the 7200RPM drives we chose, but felt the cost and potential for failure weren’t worth it.

For people with more freedom in budget or vendor, check out the following links:

Fixing Broken Paging Links in WordPress 3.0 Running on Windows

I posted an article earlier this year with instructions on removing the duplicate index.php in paging links produced by WordPress running on Windows. Today I finally upgraded from WordPress 2.9.3 to WordPress 3.0.3 and it seems that the clean_url() function in formatting.php has been renamed to esc_url(). The fix is still the same.

function esc_url( $url, $protocols = null, $_context = 'display' ) {
	$original_url = $url;

	if ( '' == $url )
		return $url;
		
	//Added line to Fix Broken Paging Link Problem
	$url = str_replace('index.php/Index.php','index.php',$url);
			
	...

	return apply_filters('clean_url', $url, $original_url, $_context);
}

This fix has to be applied EVERY time WordPress is upgraded. You have been warned.
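
An upgrade-safe alternative I haven’t tried would be to hook the clean_url filter that esc_url() applies (see the apply_filters() call above) from the theme’s functions.php instead of patching core; the function name below is just an example:

function fix_duplicate_index_php( $url ) {
	//Example only: same replacement as above, applied through the
	//'clean_url' filter so it survives WordPress upgrades
	return str_replace( 'index.php/Index.php', 'index.php', $url );
}
add_filter( 'clean_url', 'fix_duplicate_index_php' );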

Managing 301 Moved Permanently Redirects in ASP.NET

I came across a problem recently while rewriting a website in ASP.NET MVC. The old site was written in PHP, and the URLs it produced are not going to match the structure I want to use in the new site. I have spent a lot of time trying to optimize this site for Google and Bing indexing and didn’t want to have any broken links when I switch over.

I also want Google and Bing to update their search results to the new links as soon as possible, not keep the old ones around forever. Basically, what I am looking for is an easy way to manage a couple hundred 301 redirects until the old URLs fall out of use, and my hosting provider doesn’t provide access to the IIS7 URL Rewrite Module.

Step 1 – Create a Database Table

I want it to be easy to add and remove URLs at will. I can export a complete list of indexed URLs from Google Webmaster Tools to populate the table initially, and then add or remove URLs later if I want to move content around on the new site.

SET ANSI_NULLS ON
GO

SET QUOTED_IDENTIFIER ON
GO

CREATE TABLE [dbo].[RedirectUrls](
	[OldUrl] [nvarchar](255) NOT NULL,
	[NewUrl] [nvarchar](255) NOT NULL,
	[Active] [bit] NOT NULL,
 CONSTRAINT [PK_RedirectUrls] PRIMARY KEY CLUSTERED 
(
	[OldUrl] ASC
)WITH (PAD_INDEX  = OFF, STATISTICS_NORECOMPUTE  = OFF, 
	IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS  = ON, 
	ALLOW_PAGE_LOCKS  = ON) ON [PRIMARY]
) ON [PRIMARY]

GO
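
Populating the table is then just a matter of inserting one row per redirect; the URLs below are made-up examples:

INSERT INTO [dbo].[RedirectUrls] ([OldUrl], [NewUrl], [Active])
VALUES (N'/index.php?page=about', N'/about', 1)
GO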

Step 2 – Create an Entity Data Model

It’s only a single table, but it looks something like this.

Step 3 – Create an HttpModule

I created an HttpModule called RedirectModule.cs. I populate a Dictionary from my database table to make the lookups fast, then wire up a BeginRequest event handler so that I can grab the URL of each incoming request and redirect if I find a match in the dictionary.

using System;
using System.Collections.Generic;
using System.Linq;
using System.Web;
using RedirectFromDatabase.Models;
using System.Web.Caching;

namespace RedirectFromDatabase
{
	public class RedirectModule : IHttpModule
	{

		private static object syncronizationLock = new object();

		private RedirectsEntities context;
		private const string redirectCacheKey = "redirectUrls";

		public void Init(HttpApplication context)
		{
			context.BeginRequest += new EventHandler(Application_BeginRequest);
		}

		public Dictionary<string, string> Redirects
		{
			get
			{
				if (HttpRuntime.Cache[redirectCacheKey] == null)
				{
					lock (syncronizationLock)
					{

						context = new Models.RedirectsEntities();
						Dictionary<string, string> redirects = context.RedirectUrls
													.Where(x => x.Active)
													.AsEnumerable()
													.ToDictionary(x => x.OldUrl.ToLower(), x => x.NewUrl.ToLower());

						HttpRuntime.Cache.Add(redirectCacheKey,
												redirects,
												null,
												DateTime.Now.AddDays(1),
												Cache.NoSlidingExpiration,
												CacheItemPriority.Default,
												null);
					}
				}

				return (Dictionary<string, string>)HttpRuntime.Cache[redirectCacheKey];
			}
		}



		protected void Application_BeginRequest(object sender, EventArgs e)
		{

			string relativeUrl = HttpContext.Current.Request.Url.PathAndQuery.ToLower();

			if (Redirects.ContainsKey(relativeUrl))
			{
				string newUrl = Redirects[relativeUrl];

				HttpApplication application = sender as HttpApplication;
				HttpContext context = application.Context;
				application.CompleteRequest();
				context.Response.StatusCode = 301;
				context.Response.AddHeader("Location", newUrl);
			}
		}


		public void Dispose()
		{
			//Nothing to Dispose of
		}
	}
}

I’ve created a property to encapsulate loading and caching the Dictionary from the database. My URLs aren’t very volatile, so I set the cache to expire after one day.

Note: HttpModules are not very testable, so a better solution would be to refactor this into a separate class that I can test more easily and call into from the module, but I wanted to keep this example simple.

Step 4 – Add RedirectModule to the web.config

In the system.web section of the web.config, register the HttpModule.
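
The registration looks something like this, assuming the default RedirectFromDatabase project/assembly name from the code above (if the site runs in the IIS7 integrated pipeline, the same add element goes under system.webServer/modules instead):

<httpModules>
  <add name="RedirectModule" type="RedirectFromDatabase.RedirectModule, RedirectFromDatabase" />
</httpModules>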


	

The source code can be found here.

Preventing Team Build From Deploying Files After a Failed Build

I just finished debugging an issue with one of my team’s build scripts in TFS, where all the files and folders on our dev and qa websites were deleted after the build failed to compile. The issue was that someone had changed a project reference to a DLL reference pointing into the /bin/debug/ folder of one of our projects. Visual Studio would build the solution successfully on our development machines and our integration machine, but the TFS build script failed when it compiled on our build server.

Long story short, we should ALWAYS be using project references or referencing DLLs out of our /lib/ folder, and I shouldn’t write build scripts that deploy to our web servers when the compile fails.

I found an answer on the MSDN forums that sets a property called BuildFailed to true if compilation fails and makes that a condition on the build script’s AfterDropBuild target.
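
Adapted to my build scripts, the TFSBuild.proj customization looks roughly like this; this is a sketch rather than the verbatim forum answer, and DeployToWebServers is a placeholder for whatever targets actually copy files to the dev and qa sites:

<Target Name="AfterDropBuild">
  <!-- Ask Team Build whether compilation succeeded (skipped for desktop builds) -->
  <GetBuildProperties TeamFoundationServerUrl="$(TeamFoundationServerUrl)"
                      BuildUri="$(BuildUri)"
                      Condition=" '$(IsDesktopBuild)' != 'true' ">
    <Output TaskParameter="CompilationStatus" PropertyName="CompilationStatus" />
  </GetBuildProperties>

  <!-- Translate the status into a simple BuildFailed flag -->
  <CreateProperty Value="true" Condition=" '$(CompilationStatus)' != 'Succeeded' ">
    <Output TaskParameter="Value" PropertyName="BuildFailed" />
  </CreateProperty>

  <!-- Only deploy when the build did not fail -->
  <CallTarget Targets="DeployToWebServers" Condition=" '$(BuildFailed)' != 'true' " />
</Target>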

Now all I have to do is modify and test 15 build scripts on 7 different TFS projects to ensure this never happens again. Sigh.

Fixing Encoded HtmlHelpers in ASP.NET MVC 3

In the process of upgrading one of my projects from ASP.NET MVC 2 to MVC 3 RC, I decided to modify all my views to use the new Razor view engine.  The process has been pretty painless, but one thing I noticed was that the output of the dozen or so HtmlHelpers I had built was coming out HTML encoded in my Razor views.

It turns out that using MvcHtmlString or HtmlString instead of String as the return type will keep Razor from encoding the helper’s output again. MvcHtmlString is not automatically encoded, but String is. For example:

public static MvcHtmlString GetDiv(this HtmlHelper helper, string value)
{
     string div = "<div>{0}</div>";
     return MvcHtmlString.Create(string.Format(div, value));
}

I found a few good questions on StackOverflow with the solution to this problem.

Preventing Static Content from Being Intercepted by an ASP.NET HttpModule

I am cleaning up some code in one of my projects using NHibernate and ASP.NET MVC and decided to rewrite my NHibernate code to use an HttpModule to create a session per request. When debugging, I noticed that a session was being created for every incoming request, including requests for static content like the .js and .css files and all my images.

Apparently, when implementing an HttpModule you can choose to intercept all incoming requests, or just requests for dynamic content, by changing a setting in the web.config file.
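
If the site runs in IIS7’s integrated pipeline, the usual fix is to register the module with a managedHandler precondition under system.webServer so it only runs for managed requests; the module name and assembly below are placeholders for my NHibernate session module, and note that the precondition is ignored if runAllManagedModulesForAllRequests is turned on:

<system.webServer>
  <modules>
    <!-- Placeholder module/assembly names; preCondition="managedHandler" keeps it
         away from static files like .js, .css and image requests -->
    <add name="NHibernateSessionModule"
         type="MyApp.NHibernateSessionModule, MyApp"
         preCondition="managedHandler" />
  </modules>
</system.webServer>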


I found the solution to this at StackOverflow and IIS.Net.