Developers: We’re Hiring Wrong

Note: This entire post is an enormous, link-baiting rant, and completely full of my personal opinion, but I think there might be a kernel of value somewhere in here, so I am going to post it anyway. I am also tailoring this to .NET Web developers, but most of it applies to web developers of any language (I think).

I go on an interview at least once a year. I make a point of keeping my interview skills as sharp as possible. This is an industry where people change jobs every 2-5 years, and layoffs can happen for no logical reason, so I want to make sure I know the job market. I am also frequently involved in hiring for my teams, so I know, first-hand, exactly how little effort, preparation and thought goes into making a hugely disruptive change to a development team.

Hiring Guide For Dev Teams

1. Know what you are looking for

The absolute first thing to do, before posting the generic, HR-approved, complete bullshit job description asking for 7 years of experience in ASP.NET 4.5, HTML 5 and jQuery 2.0, is to sit down as a team and figure out what skills you value and what skills you are looking to bring into the team. Be descriptive and be realistic.

For example, if you are on a collaborative, agile team, maybe something like this:

  • Strong experience in TDD
  • Able to pair program 4-6 hours per day without burning out
  • A focus on delivery
  • Front-end development experience to round out the team’s current strength in back-end.
  • Experience with responsive and fluid designs and progressive enhancement.

Notice that none of these are framework specific. We aren’t asking for a fantastic whiteboard coder, or someone who can efficiently reverse a linked list. We don’t care if you are still working in MVC 3. We don’t care if your previous experience was in AngularJS instead of KnockoutJS. Frameworks and libraries are easy for skilled devs. .NET has a generic LinkedList in System.Collections.Generic. Yes, it can reverse itself. Instead, we focus on a very specific list of what our team does well, what it values, and what we need for future projects.

  • We consider TDD very important and set a high quality bar.
  • We are collaborative, value pairing, and understand it can be draining for some people.
  • We know our skill set is back-end heavy, and that the industry is moving in a direction where front-end techniques and mobile are more important, and a new teammate can help us ramp up.

Now, write a job description that tells people what you are looking for. Include the standard framework laundry list, but put those things last. List MVC 5 and SQL Server 2014 as nice-to-haves. Focus the job description on what matters.

2. Have the Team reach out to their networks

You are paying for referrals, right? Your best chance at quality hires who will fit in well with the existing team is people your teammates have already worked with and will vouch for. Ask your team to post on LinkedIn, Facebook, Google+, Twitter, etc. Encourage them to build a professional network if they don’t already have one. Ask them to mention the job at community events they attend, like code camps or meetups. Remind them that the company pays $XXXX for a quality referral. Ideally, you can fill the position this way, without the mind-numbing agony that is the rest of the hiring process. You will remember I said this when you are reading through resumes.

3. Involve the whole team

Make sure that everyone on the team has an opportunity (snicker) to review candidate resumes, but let one or two people weed through the sea of shitty resumes. Buy them a beer afterwards for taking that hit so you didn’t have to. Put the quality ones in a pile. Spend an hour or two skimming through them as a team and toss any that someone on the team feels negatively about. You are all going to be working with this person. Make sure everyone has input. Quality people are worth the time and effort. Repeat this to yourself after you’ve read your 50th resume. Remind the team that this is what happens when we can’t hire via referral.

4. Use the phone screen effectively

You have invested a great deal of effort finding these candidates. So let’s make sure we blow it by asking a bunch of generic, easily google-able and vaguely insulting questions.

  • What is the difference between an Interface and an Abstract class?
  • What are the three properties of Object-Oriented design?
  • Name the events in order of the ASP.NET Page Lifecycle.
  • What are the levels of garbage collection in the .NET framework and when do they occur?

Oh, wait. We made a list of the things we are looking for in a new developer. So, instead, maybe I’ll ask about them.

  • “I noticed at job x you wrote unit tests. What frameworks did you use?”
  • “So, at job x it looks like you worked with MVC 2. What challenges did you run into testing your controllers? How did you resolve them?”
  • “Did you use any DI or mocking frameworks? If so, which ones? What was your experience with them?”
  • “Have you ever unit tested multi-threaded code? (or something else potentially advanced) No? Ok, so how would you figure out how to?”

Four questions, focusing on something we value, tailored to a specific area of the candidate’s resume. No way to bullshit past this. It’s about experience, communication and depth of knowledge, not rote memorization of Scott Hanselman’s New Interview Questions for Senior Software Engineers. If you aren’t getting the answers you are looking for or expecting, politely end the call. Don’t waste an hour of their time and yours when it is clear they don’t have what you are looking for.

Also, make sure they have time to ask you questions. Even if they don’t end up having any, leave time for them. There is literally nothing worse than coming out of a phone screen and not having a good sense of whether it is worth taking a full day off to go to an onsite interview. It wastes the candidate’s time and the dev team’s as well. Favor engaged candidates.

5. Give them homework

So, now you know your candidate can write a grammatically correct resume and has some experience in the things you care about. Great! Let’s make them burn a whole vacation day, schlep into our office wearing their freshly ironed interview khakis, and we will bounce them around in 30-minute increments between an HR rep, 5 devs, the hiring manager and the Director of App Development (just for shits and giggles. I mean, honestly, what does he care?).

Instead, let’s give them a little homework first. Good developers like problems, especially fun problems. So, ask them to take an hour or two and build out a simple feature in a fun problem space. Blackjack, Monopoly, hangman. Whatever. File | New a project and create a feature to return a shuffled deck of cards. Build a data structure that will represent a Monopoly board. Ask them to send a zip file of the code back. If they don’t want to do it, fine. They aren’t that interested in the job. Or they can’t do it. Either way.

Take a look at the code. Pass this job off to someone else on the team who didn’t just spend half a lifetime reading resumes and performing phone screens. Did the code come with passing unit tests? We did mention unit tests like six times, right? Is the structure and naming informative and maintainable? Is it intention revealing? Did they try to impress with an overly complicated implementation? Did they incorrectly use spaces and not tabs? Does it work?
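For a sense of what you might get back, here is a rough sketch of a deck-of-cards submission, roughly the scale a couple of hours should produce (written here with NUnit, but any test framework works; the class and test names are purely illustrative):

using System;
using System.Collections.Generic;
using System.Linq;
using NUnit.Framework;

public enum Suit { Clubs, Diamonds, Hearts, Spades }

public class Card
{
	public Suit Suit { get; private set; }
	public int Rank { get; private set; }	// 1 = Ace ... 13 = King

	public Card(Suit suit, int rank)
	{
		Suit = suit;
		Rank = rank;
	}
}

public class Deck
{
	private static readonly Random random = new Random();

	// Returns all 52 cards in random order using a Fisher-Yates shuffle
	public IList<Card> Shuffle()
	{
		List<Card> cards = Enum.GetValues(typeof(Suit)).Cast<Suit>()
			.SelectMany(suit => Enumerable.Range(1, 13).Select(rank => new Card(suit, rank)))
			.ToList();

		for (int i = cards.Count - 1; i > 0; i--)
		{
			int j = random.Next(i + 1);
			Card temp = cards[i];
			cards[i] = cards[j];
			cards[j] = temp;
		}

		return cards;
	}
}

[TestFixture]
public class DeckTests
{
	[Test]
	public void Shuffle_Returns_All_52_Unique_Cards()
	{
		var cards = new Deck().Shuffle();

		Assert.AreEqual(52, cards.Count);
		Assert.AreEqual(52, cards.Select(c => new { c.Suit, c.Rank }).Distinct().Count());
	}
}

Ten minutes reviewing something like this tells you more about a candidate than an hour of whiteboard trivia.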

6. Build something onsite

We are in an amazing position right now. Think about it. We have whittled down dozens of resumes to 2 or 3 candidates (if we’re lucky). Now, you have access to one of them for 4-8 hours. A person you might be working with 8 hours a day for years. You know they have skills you value. You have seen their code and you only said WTF 2 or 3 times. Maybe you even learned something from it.

Start them off with what they expect. 15-30 minutes with an HR rep or the hiring manager. Let them get the nerves out of the way. Then, start the real interview.

We are hiring someone to write code and, conveniently, they have provided us some of their own before the interview. It seems rude not to make use of it. Come prepared with a list of features to add to our homework problem. Pair on the features, making sure to rotate out pairs frequently so that everyone has a chance to work with them. The focus here is coding, and coding as closely to the real work environment as possible. No whiteboard coding, no algorithm problems or language trivia. Just coding and pairing on a fun problem. Take frequent breaks to give them a chance to play on that onsite Ping-Pong table no one uses and drink the free soda. Remember that we are trying to evaluate development skills and find out how someone interacts with the team during a workday. Few developers aren’t fried after several hours of pairing. End the interview with a few more minutes with the hiring manager or HR.

7. Make your decision after the interview

The time to talk about each candidate isn’t after all the interviews are done, or the end of the week, or the next morning. Do it right after the interview is done. Get the team back together as the HR rep is walking your candidate from the building and decide if they are your hire. Do they have the skills you are looking for? Do they work well with the team? Look at the code that was produced. Was it high enough quality? Did we accomplish anything? What do they bring to the table? What will we have to do to bring them up to speed? Are they worth investing in? Could I work all day in a tightly enclosed cubicle farm with them?

For whatever reason, people like to draw this conversation out, especially if the answer is to not hire. Try to keep this conversation to 30 minutes. Give everyone a chance to say yes or no and why, but keep it brief. If the answer is no, then it’s no. Rinse and repeat. If the consensus is yes, then hire them and move on. There is a strong desire to believe that the next candidate might be better. Don’t believe it. They will be worse and you will be out another day. If you have found a candidate that meets your requirements and is well liked by the team, then your hiring process has worked exactly as it was intended. The grass isn’t greener. Seriously.

Removing NuGet Packages from a Git Repository

As part of my recent migration from GitHub to BitBucket, I decided to take advantage of a new feature in NuGet 1.6, restoring missing packages at build.  While my repo isn’t that large yet, with NuGet I no longer need to commit my referenced DLLs to source control, which should keep my pushes light over time.  Here are the steps that I used to keep my packages from being committed and to prune them from my commit history in Git.

1) Upgrade to NuGet 1.6

Make sure you have NuGet 1.6 installed.  You can download it from CodePlex, or install it via the Extension Manager in Visual Studio. Then right-click the solution and choose Enable NuGet Package Restore so that missing packages are downloaded automatically at build time.

2) Remove packages from old git commits

NOTE: Don’t do this to a public repo that other people pull from unless you really hate them. You have been warned!

#Change packages/ to your packages location
git filter-branch --index-filter "git rm -rf --cached --ignore-unmatch packages/" HEAD

3) Add Packages folder to .gitignore file

#NuGet
packages/

4) Clean Up Git

rm -rf .git/refs/original/
git reflog expire --expire=now --all
git gc --aggressive --prune=now

These commands will clean up the temporary info left over from the filter-branch and run garbage collection.

5) Push --force

git push -f

This will push your new modified commits up to the server and rewrite history.

From GitHub to BitBucket: Changing git remote origin

Now that BitBucket is hosting Git repositories, I decided to migrate my private repositories from GitHub and save myself $7 a month. It turned out to be incredibly simple.

git remote rm origin  #Removes old origin
git remote add origin https://username@bitbucket.org/your.new.repo  #Adds new origin pointing to BitBucket
git push -u origin master  #Pushes the master branch to the new repo and sets it as the upstream

My New Amazon Kindle 3: Day 1

I bought a Kindle yesterday.  I have been toying with the idea of buying an ebook reader for a couple of years now, but it hasn’t been until the last couple of months that I have really considered it.  Over the last couple of years, I have built a small collection of ebooks, some that I bought from Amazon, others that I have downloaded from the internet.  I read a lot of sci-fi and fantasy, and tend to re-read series that I like, so I have been gradually replacing my paperback collection with ebooks and reading them on either my iPod Touch or my Windows Phone 7.

Earlier this month, I started a new job with a longer commute, and decided that I needed a way to carry multiple books with me on a device with enough battery life that I wasn’t constantly having to charge it all day long like I was with my cell phone.  Essentially, I was looking for a device that did the following things:

  • Enough battery life for multiple days of reading
  • Ability to read ebooks available from my library
  • Ability to easily add my personal library of ebooks onto it
  • Preferably able to read .epub format
  • Able to read my previously purchased Kindle books

No device right now does all of these things.  I was looking at the Kindle 3, the Barnes & Noble Nook, and the Sony PRS-350.  The last two both support epub and ebooks from my library, but only the Kindle will display the 25 books I already purchased from Amazon.  After about a week of research and waffling back and forth, I finally decided to buy a Kindle.  I have already made a significant investment in books from Amazon, and I trust them to stay in the ebook business longer than either Sony or Barnes & Noble.  Also, Amazon announced that later this year they will support Adobe Digital Editions, so I should be able to download ebooks from my local library onto the Kindle.  Plus, the Microsoft Store was offering a free cover and light with the purchase of a Kindle, so I saved $60 on that.

I have had it for 24 hours, and am happy with my purchase so far.  I bought a couple of new books from Amazon yesterday and the reading experience has been better than on my phone, although maybe not as much as I had thought it would be.  The text is definitely crisper on the Kindle, so I am sure I will have less eyestrain in the long run.  I converted several ebooks from epub to .mobi format using Calibre and e-mailed them to my Kindle e-mail address, and they showed up within a couple of minutes, which is a significantly easier process than what I was doing to get ebooks onto my phone.  Hopefully, Amazon will offer .epub support in the future so I don’t have to convert every book in my collection to .mobi.

My only real complaint with the Kindle so far isn’t even about the Kindle.  Publishers have Kindle book prices way too high.  One of the books I bought yesterday was released yesterday on Kindle for $15.99 and in hardcover for $17.99.  I seriously debated buying it, and I probably would not have if I hadn’t just bought the Kindle and was looking for new books to put on it.  I could have bought the hardcover at Barnes and Noble, read it, and sold it back to my local used book store for half price and saved a lot of money.  The convenience of an ebook reader doesn’t justify paying nearly hardcover prices for an ebook, and publishers had better figure out a fair pricing model before they run into the same problems the music industry did a decade ago.

WcfTestClient Not Starting When Debugging WCF Service

I don’t create enough new WCF services to remember this, apparently, but in order to get WcfTestClient to run when debugging a service (hitting F5), open up the project properties and, under the Web tab, set Start Action to Specific Page and select the .svc file you want to debug.

I created a new WCF Service project, renamed the .svc file, and when I hit F5, no test client appeared. I seem to recall running into this last year as well.

HP EliteBook 8540p and Z600 Workstation: A Software Developer’s Review

Note: I started writing this review back in May, but for whatever reason never finished it. HP has configured them a little differently than 7 months ago, but I would select the same options now given my company’s environment.

Last year I started the process at my company of ordering new developer hardware for my team. For years, we have been running on underpowered business-class hardware (with extra RAM) and were really feeling the pain running our existing tools. We had a couple of goals for new hardware, including:

  • Better multi-tasking. Ability to run multiple instances of Visual Studio, SQL Management Studio and other applications without watching the screen redraw or having applications crash.
  • Hardware that would handle the upgrade to Windows 7 without any noticeable loss of performance. We are still running Windows XP 32-bit throughout the company with no ETA on when Windows 7 or 64-bit will be available.
  • A good mix of portability, performance, weight and battery life in a laptop. Our previous laptops were 12-inch Dell Latitude D420s, which were light and portable, but extremely underpowered.

Really, that was it. We were ordering custom hardware, so we had to come up with a budget and negotiate with the IT department responsible for supporting workstations. What we came up with was the HP EliteBook 8540p laptop and the HP Z600 Workstation, configured as follows:

HP EliteBook 8540p

  • Intel i7 620M CPU – 2.66 GHz processor
  • 15.6 inch HD+ 1600×900 screen with 2MP camera
  • 4GB RAM DDR3 1333MHz (2 DIMMS)
  • 320GB 7200RPM Hard Drive
  • NVIDIA NVS 5100M Graphics Card
  • HP 120W Advanced Docking Station

HP Z600 Workstation

  • Dual Intel Xeon E5630 2.53GHz processors
  • 4GB RAM DDR3 1333MHz (4 DIMMS)
  • NVIDIA Quadro FX580 512MB Graphics Card
  • 250GB SATA 7200 RPM Hard Drive

We had to compromise on the laptops to fall within our budget and in line with what was being supported by the enterprise, but I’m extremely happy with them. I am able to keep 2-3 instances of Visual Studio 2010 Ultimate open with large solutions, along with SQL Management Studio, Word, Internet Explorer, Chrome and Outlook, with SQL Server 2008 R2 and IIS running in the background, with no noticeable lag switching between applications. The other day, I was able to run a large, long-running SSIS package in the background while developing in Visual Studio without any noticeable sluggishness.

The Z600s are fantastic workstations with server-class processors. Our daily work barely touches their capacity. Most of my team’s developers are using them and our two pairing workstations are Z600s.

Once Windows 7 64-bit is an option, we will look at upgrading the RAM to at least 8GB, but we haven’t had any noticeable issues so far on the 3.2GB that Windows XP can actually see. With a slightly larger budget, I probably would have pushed for faster processors on the laptops, maybe the i7 640M, in order to better future-proof against a 3-4 year replacement period. We had some discussions about using Solid State Drives instead of the 7200RPM drives we chose, but felt the price and the potential for failure weren’t worth it.

Fixing Broken Paging Links in WordPress 3.0 Running on Windows

I posted an article earlier this year with instructions on removing the duplicate index.php in paging links produced by WordPress running on Windows. Today I finally upgraded from WordPress 2.9.3 to WordPress 3.0.3 and it seems that the clean_url() function in formatting.php has been renamed to esc_url(). The fix is still the same.

function esc_url( $url, $protocols = null, $_context = 'display' ) {
	$original_url = $url;

	if ( '' == $url )
		return $url;
		
	//Added line to Fix Broken Paging Link Problem
	$url = str_replace('index.php/Index.php','index.php',$url);
			
	...

	return apply_filters('clean_url', $url, $original_url, $_context);
}

This fix has to be applied EVERY time WordPress is upgraded. You have been warned.

Managing 301 Moved Permanently Redirects in ASP.NET

I came across a problem recently while rewriting a website in ASP.NET MVC. The old site was written in PHP, and the URLs it produced are not going to match the structure I want to use in the new site. I have spent a lot of time optimizing this site for Google and Bing indexing and didn’t want to have any broken links when I switched over.

I also want Google and Bing to update their search results to the new links as soon as possible, not keep the old ones around forever. Basically, what I am looking for is an easy way to manage a couple hundred 301 redirects until the old URLs fall out of use, and my hosting provider doesn’t provide access to the IIS7 URL Rewrite Module.

Step 1 – Create a Database Table

I want it to be easy to add and remove URLs at will. I can export a complete list of indexed URLs from Google Webmaster Tools to populate the table initially, and then add or remove URLs later if I want to move content around on the new site.

SET ANSI_NULLS ON
GO

SET QUOTED_IDENTIFIER ON
GO

CREATE TABLE [dbo].[RedirectUrls](
	[OldUrl] [nvarchar](255) NOT NULL,
	[NewUrl] [nvarchar](255) NOT NULL,
	[Active] [bit] NOT NULL,
 CONSTRAINT [PK_RedirectUrls] PRIMARY KEY CLUSTERED 
(
	[OldUrl] ASC
)WITH (PAD_INDEX  = OFF, STATISTICS_NORECOMPUTE  = OFF, 
	IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS  = ON, 
	ALLOW_PAGE_LOCKS  = ON) ON [PRIMARY]
) ON [PRIMARY]

GO

Step 2 – Create an Entity Data Model

It’s only a single table, mapped to a RedirectUrls entity with OldUrl, NewUrl and Active properties.

Step 3 – Create an HttpModule

I created an HttpModule called RedirectModule.cs. I populated a Dictionary from my database table to make the lookups fast, then wired up a BeginRequest event handler so that I can grab the URL of each incoming request and redirect if I find a match in the dictionary.

using System;
using System.Collections.Generic;
using System.Linq;
using System.Web;
using RedirectFromDatabase.Models;
using System.Web.Caching;

namespace RedirectFromDatabase
{
	public class RedirectModule : IHttpModule
	{

		private static readonly object synchronizationLock = new object();

		private RedirectsEntities context;
		private const string redirectCacheKey = "redirectUrls";

		public void Init(HttpApplication context)
		{
			context.BeginRequest += new EventHandler(Application_BeginRequest);
		}

		public Dictionary<string, string> Redirects
		{
			get
			{
				if (HttpRuntime.Cache[redirectCacheKey] == null)
				{
					lock (synchronizationLock)
					{
						// Check again inside the lock in case another request loaded the redirects first
						if (HttpRuntime.Cache[redirectCacheKey] == null)
						{
							context = new Models.RedirectsEntities();
							Dictionary<string, string> redirects = context.RedirectUrls.Where(x => x.Active)
												.AsEnumerable()
												.ToDictionary(x => x.OldUrl.ToLower(), x => x.NewUrl.ToLower());

							HttpRuntime.Cache.Add(redirectCacheKey,
													redirects,
													null,
													DateTime.Now.AddDays(1),
													Cache.NoSlidingExpiration,
													CacheItemPriority.Default,
													null);
						}
					}
				}

				return (Dictionary<string, string>)HttpRuntime.Cache[redirectCacheKey];
			}
		}



		protected void Application_BeginRequest(object sender, EventArgs e)
		{

			string relativeUrl = HttpContext.Current.Request.Url.PathAndQuery.ToLower();

			if (Redirects.ContainsKey(relativeUrl))
			{
				string newUrl = Redirects[relativeUrl];

				HttpApplication application = sender as HttpApplication;
				HttpContext context = application.Context;
				application.CompleteRequest();
				context.Response.StatusCode = 301;
				context.Response.AddHeader("Location", newUrl);
			}
		}


		public void Dispose()
		{
			//Nothing to Dispose of
		}
	}
}

I’ve created a property to encapsulate loading and caching of the Dictionary from the database. My URLs aren’t very volatile, so I can set the cache to expire after one day.

Note: HttpModules are not very testable, so a better solution would be to refactor this into a separate class that I can test more easily and call into that class from the module, but I wanted to keep this example simple.
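If I did want to make it more testable, a rough sketch (the names here are hypothetical, not part of this project) might pull the lookup logic behind a plain class that the module calls into:

using System.Collections.Generic;

// Hypothetical refactoring: the module would hand this class the cached dictionary
// and ask it for redirects, so the lookup logic can be unit tested without an HttpModule.
public class RedirectLookup
{
	private readonly IDictionary<string, string> redirects;

	public RedirectLookup(IDictionary<string, string> redirects)
	{
		this.redirects = redirects;
	}

	// Returns the new URL for a request path, or null when no redirect is configured
	public string FindRedirect(string relativeUrl)
	{
		string newUrl;
		return redirects.TryGetValue(relativeUrl.ToLower(), out newUrl) ? newUrl : null;
	}
}

A unit test could then construct a RedirectLookup from an in-memory dictionary and assert on FindRedirect, with no database or HTTP pipeline involved.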

Step 4 – Add RedirectModule to the web.config

In the system.web section of the web.config, register the HttpModule.

<httpModules>
	<!-- Assumes the compiled assembly is named RedirectFromDatabase, matching the namespace -->
	<add name="RedirectModule" type="RedirectFromDatabase.RedirectModule, RedirectFromDatabase" />
</httpModules>
The source code can be found here.

Preventing Team Build From Deploying Files After a Failed Build

I just finished debugging an issue with one of my team’s build scripts in TFS where all the files and folders on our dev and qa websites were deleted after the build failed to compile. The issue was that someone had changed a project reference to a DLL reference out of the /bin/debug/ folder of one of our projects. Visual Studio would build the solution successfully on our development machines and our integration machine, but the TFS build script failed when compiling on our build server.

Long story short, we should ALWAYS be using project references or referencing DLLs out of our /lib/ folder, and I shouldn’t write my build scripts to deploy to our web servers if the compile fails.

I found an answer on the MSDN forums that sets a property called BuildFailed to true if the compilation fails and makes that a condition of the build script’s AfterDropBuild target. The approach looks roughly like this:

<!-- In TFSBuild.proj. CompilationStatus is set by Team Build; the deployment tasks live in AfterDropBuild. -->
<Target Name="AfterCompile">
	<CreateProperty Value="true" Condition=" '$(CompilationStatus)' != 'Succeeded' ">
		<Output TaskParameter="Value" PropertyName="BuildFailed" />
	</CreateProperty>
</Target>

<Target Name="AfterDropBuild" Condition=" '$(BuildFailed)' != 'true' ">
	<!-- copy/deploy tasks for the dev and qa sites go here -->
</Target>
Now all I have to do is modify and test 15 build scripts on 7 different TFS projects to ensure this never happens again. Sigh.

Fixing Encoded HtmlHelpers in ASP.NET MVC 3

In the process of upgrading one of my projects from ASP.NET MVC 2 to MVC 3 RC, I decided to modify all my views to use the new Razor view engine.  The process has been pretty painless, but one thing I noticed was that the output of the dozen or so HtmlHelpers I built was being HTML encoded in my Razor views.

It turns out that using MvcHtmlString or HtmlString instead of String as a return type prevents the output from being HTML encoded. Razor treats MvcHtmlString as markup that is already safe to render, while a plain String gets encoded. For example:

public static MvcHtmlString GetDiv(this HtmlHelper helper, string value)
{
     string div = "<div>{0}</div>";
     return MvcHtmlString.Create(string.Format(div, value));
}

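With MvcHtmlString as the return type, calling the helper from a Razor view (assuming its namespace is imported into the view) renders the markup instead of the encoded text:

@Html.GetDiv("Hello")  @* renders <div>Hello</div> rather than &lt;div&gt;Hello&lt;/div&gt; *@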
I found a few good questions on StackOverflow with the solution to this problem.