Help me by Improving my JavaScript talk

Hello dear friends, it's been a while! A year. It's been a year.

Today I've come not to share from the largesse of my stores of wisdom, but to get you to proofread something for me. You see, I am scheduled to present at the Houston.js meetup in October on testing. And for the second half of the talk, instead of lecturing, I'm trying to foster a community discussion on testing.

In support of this goal, I've decided to make a sort of icebreaker handout to help guide the discussion if no one wants to talk. And this is where you come in--you have the unique and limited-time opportunity to Improve my handout.

How you can help: just think of what you'd want to tell a group full of Houston.js attendees. Now cut that thing down to the most important bits so it fits in 5 minutes or less. For the purposes of the handout I have to assume there will be a rowdy, busy discussion, because if it's crickets, I'll just cut the discussion early. Then, and here's the trick, think of a question you'd put in a handout that would prompt you to share that amazing revelation about testing in JavaScript (or testing web apps in general), and suggest I add that question to the list.

A second, entirely different way you can help: tell me which questions I have that are awful and should be removed. I can take it.

Without further ado, I present the handout (ignore formatting):


Help make us all better by sharing what you know! While I go over Jasmine basics, prepare to answer anything that interests you. My goal is to help draw out the wisdom and experience from the audience (which means you)! Guidelines:

  • Don't feel pressured to answer. Instead, feel free to use this sheet of paper to draw demeaning caricatures of me, or ball it up and throw it at me. Fun fact: both paper and laser toner are good sources of dietary fiber.
  • Focus on how you can help someone else with your knowledge. E.g. we don't care if you use Jasmine or Mocha, but we do care why.
  • Summarize! Try and fit everything into one tweet-length answer. If you can't fit your answer in a single tweet, then ultra-summarize your answer until it fits! If it's complex and can't be summarized, try to focus on the benefit you received, and share a link for us to learn more.
  • Tweet at me (@pseale) any links you want me to share with the group.
  • Remember that our goal is to share as much of the group's knowledge as possible—not necessarily by answering the questions below. If you have a unique perspective, by all means share it. If you would like, ignore the questions below and just ask for 5 minutes to talk. Or, feel free to directly ask the crowd a question.
Overall
  1. If you had one tip to give to novice testers, what would it be? If you had one tip to give to fellow grizzled test veterans, what would it be?
  2. Is there a unit testing technique or some learning resource that personally helped you learn unit testing? E.g. an open source project's test suite; a book; a conference video; a kata; a green wristband; ceremonial burning of green wristbands; etc.
Tools, Products, Services
  1. Do you use a CI server? If so, which one, and what is the main benefit? Do you utilize fun gimmicks when a build turns red?
  2. What PAID ($$$) commercial products/services do you use? Please give an experience report, good or bad. (Editor's note: I want to go out of my way to emphasize commercial products because fewer people use them, and so experience reports for commercial products are rare and valuable) E.g. Saucelabs, CircleCI, WebStorm
Browser (end-to-end) Tests
  1. Share any experience you have with browser testing, including:
    • Attempting to set up browser tests, then running into a serious issue, then giving up - if you're brave, we'd like to hear what step caused you to give up.
    • What tool do you use, and why does it work well for you?
    • If you use zombie.js or a similar browser emulator, have you experienced any gotchas, or has it been all positive?
    • What gives you the most problems attempting to set up browser tests? Or better, is there something you learned that solved a difficult problem?
Integration Tests
  1. If you test your APIs via HTTP client libraries, what are techniques you use to ensure you cover every aspect (security, bad data, junk data)? E.g. “we have shared ‘it' blocks” or “we wrote a giant library of test helpers”. What is the most painful thing about testing via raw HTTP/REST client libraries? Do you have similar tests in your browser test suite?
Unit Tests
  1. Do you write unit tests for JavaScript that runs in a browser (i.e. the front-end)? Do you isolate each piece (or module), or do you test it all integrated? If you've had success doing this, can you explain roughly what your front-end architecture looks like, and how the architecture helps (or gets in the way)? E.g., you can just say “we use Ember”, or “we use the Flux architecture from React/Facebook.” Or “our app is 5000 lines of jQuery in one method, and every night I cry myself to sleep.”
  2. Do you run server-side JavaScript on Node.js and successfully test it? Do you try to isolate every module, or only isolate the boundaries, or isolate nothing?
JavaScript xUnit Test Frameworks
  1. Do you use Jasmine or another JavaScript testing framework? Do you think it makes a difference if you use Jasmine versus Mocha, and most importantly, why? If you have converted a Jasmine test suite to another framework, how hard or easy was the conversion?
  2. Do you use a specific assert library, and if so, why does it matter?

Twitter Bootstrap: If you don’t know what it is, you need it

tl;dr Summary

ASP.NET MVC 5 includes (will include?) Twitter Bootstrap 3 in its default new project template, which means all new projects will have Bootstrap now, and that’s a good thing. If you aren’t familiar with CSS grid systems or Bootstrap (which is much more than just a grid system), then you need them.

Authority: I am a CSS grid system OG

OG means original gangster by the way.

Back in 2006, more than seven full years ago, I posted about Yahoo UI Grids, and in classic fashion on my blog, misrepresented the truth in a pithy manner. Today I’m here to give similar treatment to Twitter Bootstrap.

I’m so old.

What is Bootstrap?

It’s much, much more than just a CSS grid system.

I would go into detail explaining what Bootstrap does, but a) I’d just end up spreading misinformation, and more importantly, b) the Getting Started section of the Bootstrap project page is excellent and you will be better served just abandoning my post here and perusing their documentation.

They also have a pretty good working sample built with Bootstrap.

But Peter, will it work for the Enterprise?

No, definitely not. You’re better off asking your Websphere consultant what is best for your customized needs, which are (as you know) totally different from everyone else.

This blog post was harvested from my monopoly-dotnet sample MVC project

I've put together a sample ASP.NET MVC 5 project implementing a portion of Monopoly. You may find browsing the source illuminating. I wrote the project to explore ASP.NET MVC 5, automated web testing, singleton abuse, basic EF 6, and a few other things.

Get the source (or browse the source online): https://github.com/pseale/monopoly-dotnet

MVC 5 Impressions: NuGet has taken over

tl;dr Summary

NuGet (the .NET package manager) has taken over in ASP.NET MVC 5, and that’s a good thing. The bad old days of manually managing DLLs are gone! Like a military coup overthrowing an oppressive dictatorship, we can hope that the new, oppressive NuGet dictatorship is better.

There’s not much meat to this post, so you might as well finish reading.

NuGet in ASP.NET

Here's what I got when creating a new project with the VS 2013 Preview (note this is the Preview, not the RC or the RTM version):

Assuming your projects don't use NuGet at all, this pile of packages might appear daunting. But no worries! What's great about source control is that it allows me to commit everything to a safe, remote location, which then allows me to don a hockey mask, rev up the chainsaw, and go to work in the NuGet package manager.

I feel like maybe I should write more, but it's the year 2013 and I don't feel like writing an introduction to package managers or why they'd be useful. So: package managers are useful and have their own problems; go discover why for yourself. Q.E.D.

In summary, I like how NuGet has become the standard for new ASP.NET MVC projects; it's one more good default and one less thing to think about.

Image Links in MVC Are Magical (Mostly)

tl;dr Summary

The Razor view engine in ASP.NET MVC lets you do magical things with image links.

The new magical syntax is as follows:
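(A minimal sketch of it; the ~/Content/dog.png path is hypothetical:)

    <img src="~/Content/dog.png" alt="dog" />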

This post was mostly stolen from the StackOverflow question http://stackoverflow.com/questions/5331777.

Introducing the problem

Let’s say you need to add a dog image to your website product. Like, say:

[image: a dog]

Pretty good, right? I’d say. I’m not saying I made it or anything, but if I had, I would be proud to say I did, it’s solid artwork.

Anyway, you have this (really excellent, well-crafted) file named dog.png. So far, so good. Now you just need to link that image from your ASP.NET MVC website.

First, we dump it in the project folder structure:

Now all we have to do is reference it from our Razor view.

At this point you’re presented with a dilemma.

The dilemma: how do I reference image files in my project again?

 

We’ll explore all the common solutions below.

The F Minus solution

Works on my machine, right? Ok, I don't think people do this even by accident anymore. But I wanted to include this example for completeness, so we can compare the "bad" answer versus the "good" answer. For the record, this is the bad answer. This. Everything else is great, compared to this.
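(The original snippet isn't reproduced here, so here's a hedged reconstruction of the classic works-on-my-machine move: hard-coding a local filesystem path. The path is hypothetical.)

    <!-- F minus: a hard-coded local path that only exists on my machine -->
    <img src="file:///C:/projects/MyApp/Content/dog.png" alt="dog" />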

 

With that said, there are other, far more common, wrong answers.

The absolutely wrong answer, and the relatively wrong answer

There are two common ways to go wrong linking images: use absolute paths, or use relative paths.
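(The original snippets aren't reproduced here; for illustration, hedged sketches of both, with hypothetical paths:)

    <!-- absolute: breaks as soon as the app is deployed under a virtual directory, e.g. /myapp/ -->
    <img src="/Content/dog.png" alt="dog" />

    <!-- relative: breaks when the same view is rendered at a different URL depth -->
    <img src="../../Content/dog.png" alt="dog" />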

 

I won’t go too hard on either approach above, because ultimately if you don’t have broken image links, you’re fine. But you’ll break something eventually. Probably. I mean, most of the time you can get away with an absolute path, let’s be honest here.

With that said, there’s a low effort solution that everyone should use anyway.

The right answer: ~/ and Url.Content(“~/”)

So, not to spoil the ending, but to reliably build out image URLs in Razor, just add ~/ to the src attribute of your image tag. And for background images you link from stylesheets (also known as "cascading CSS style sheets sheets", in the grand tradition of "PIN number"), use a longer-syntax-but-equally-effective Url.Content("~/") call.
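A short sketch of both forms in a Razor (.cshtml) view, again assuming a hypothetical ~/Content/dog.png:

    <!-- the short form: Razor resolves ~/ in the src attribute for you -->
    <img src="~/Content/dog.png" alt="dog" />

    <!-- the longer-but-equally-effective form -->
    <img src="@Url.Content("~/Content/dog.png")" alt="dog" />

    <!-- and for a background image emitted from a view -->
    <div style="background-image: url('@Url.Content("~/Content/dog.png")')">...</div>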

You can do something similar with script tags

My exhaustive research has turned up a post explaining that this ~/ relative path helper can also be used on <script> tags.

Razor syntax versioning agnosticism

While attempting to figure out when this “~/” syntax was introduced, I came to the conclusion that the truth is unknowable, or at least not within 5 minutes of searching. I feel in my heart this syntax was (will be?) introduced in ASP.NET MVC 5, but again, there’s no true way to know for sure*.
* not actually true

This blog post was harvested from my monopoly-dotnet sample MVC project

I've put together a sample ASP.NET MVC 5 project implementing a portion of Monopoly. You may find browsing the source illuminating. I wrote the project to explore ASP.NET MVC 5, automated web testing, singleton abuse, basic EF 6, and a few other things.

Get the source (or browse the source online): https://github.com/pseale/monopoly-dotnet

Git on Windows in 2013: Your Best Life Now

tl;dr Summary

Given Visual Studio 2013 has improved Git support, using Git on Windows is better than ever. Crucial ingredients are:

  1. Github for Windows
  2. Visual Studio 2013
  3. Honorable mention: SourceTree if you have Bitbucket

Also, if you somehow still think Mercurial is "better for Windows", it's time to look again: Git has excellent support now.

Ingredient: having a Github and/or Bitbucket account

Yes. Bitbucket allows free private repositories; Github is Github.

Ingredient: Github for Windows

I almost recommend Github for Windows more for all the things it does for you during installation than for anything it does after. Notably, Github for Windows:

  1. Creates a new private key and registers that key with Github. Double-clicking an installer is less stressful than worrying about US export controls when downloading an SSL provider to create my private key.
  2. Installs posh-git (which the start menu understands as “Git Shell”)
  3. Enables the handy “Clone in desktop” button on the github site.
  4. Gives you a GUI over your git repos. 
  5. Sets global settings for “difftool” to point to Visual Studio 2012, which is good…if you have VS 2012 installed. Anyway if you’ve read up on how to set custom git difftools and mergetools, you’ll be thankful.
  6. On new projects, creates reasonable .gitattributes and .gitignore files for .NET projects. I know there’s a forever war being fought over git’s CRLF settings, and Github for Windows is on the wrong side of that fight(* text=auto), but for the most part it’s good stuff.

Ingredient: posh-git

posh-git, the PowerShell git shell, is installed by Github for Windows. You can obtain it separately through chocolatey or by following the instructions at the posh-git project site.

Sure, you can still use git.exe on a cmd shell, but then you don't get the fancy colors in your prompt or tab completion. You can also wear a blindfold while programming and program entirely by scent to present yourself with a challenge; I don't want to tell you how to live your life. But, if you use the cmd shell when posh-git is installed, you're doing it wrong. Not judging though.

Ingredient: Visual Studio 2013

Visual Studio 2013 now has a working git source control provider. This means:

You can see when files have changed

Note the lock icons and the checkmarks next to each filename.

You can “diff” files from within Visual Studio

While this may not sound like a big thing to you, maybe you enjoy typing in hashes at the command line, maybe that’s your thing. For the rest of us:

I need to point out that the git diff command I posted above is nowhere near correct. The real command would have been way longer. For fun I tried to run the same diff from the command line…I gave up after 10 minutes.

I also need to point out that reading my commit log is so toxic that it may give you a rare bone disease. My commits were ugly, and I feel it’s important that you know that I am ashamed of them. For example, “Not 100% working” on a commit message is not a recipe for success.

Other things I haven’t used or aren’t available in the VS 2013 Preview build

There are a bunch of VS git integration features I didn't use, like committing and undoing (checkout-ing?) changes, that may prove useful to others. Here's some screenshots I collected (note this is the Preview build):

 

The Visual Studio team blog has a detailed, professionally-screenshot-adorned post explaining VS’s git support. Just stop reading once you get to the “team build” section, before you are exposed to an unfiltered view of a XAML-based build system. It’s not pretty. Like Gandalf tells Bilbo in the Hobbit 1 trailer, “…and if you do, you will never be the same.” Some things cannot be unseen.

Final words

And with that last parting shot at the build system, I’d like to wrap up. Git on Windows is better than ever. Most of the rough edges have been smoothed out, and the weird stuff you don’t understand probably wasn’t possible in TFS or SVN anyway.

This blog post was harvested from my monopoly-dotnet sample MVC project

I've put together a sample ASP.NET MVC 5 project implementing a portion of Monopoly. You may find browsing the source illuminating. I wrote the project to explore ASP.NET MVC 5, automated web testing, singleton abuse, basic EF 6, and a few other things.

Get the source (or browse the source online): https://github.com/pseale/monopoly-dotnet

True Measure of Productivity, 2013 Edition

Looks like I’m way under my target productivity for the year. Numbers don’t lie.

Also as an extra bonus measure, in my 2011 productivity screenshot, you can see that I have a failing build on two entirely different CI systems. That has to apply some kind of bonus multiplier on all the dead web servers.

I also feel the need to point out that terminating web servers naturally removes the tray icon as expected. You only get a killing field of web server tray icons if you are doing something unnatural.

2013

Previously (2011)

Using Coypu with radio buttons: A tale of woe, suffering, and eventually, glorious triumph

tl;dr

If you’re using Coypu and need to inspect a field to find out which radio button is selected, beware of using browser.FindField(). Instead, use targeted calls to the specific radio button tag you’re looking for, then inspect the “Checked” attribute. Like so:
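(The original snippet isn't reproduced here; a rough sketch of the idea, where "Opponent1_0" is a hypothetical element id, browser is a Coypu BrowserSession, and the exact attribute value returned can vary by driver:)

    // Target the specific radio <input> directly, then inspect its "checked" attribute.
    var radio = browser.FindId("Opponent1_0");
    bool isSelected = radio["checked"] == "true";   // attribute value may vary by driver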

If you’ve already lost interest, well, at least now you’ve been warned. That was the good part.

Introducing Coypu

Coypu is a .NET Selenium wrapper that hides the ugly part of Selenium so you can get to work automating a browser. It replaces presumably horrible XPath queries with friendlier calls to methods like “FindField()” and “FillIn().With()” and “Choose()” and “ClickButton()”, and generally makes everything easier.

Curious? Go visit the Coypu project page and get more details from their README.

Introducing Radio Buttons in HTML

For those of you who have forgotten or have otherwise repressed the memories, radio buttons in HTML work a little funky. To be fair, they’re even worse (wait, I mean, funky) elsewhere.

To make a simple four-radio-button field, contrary to what your gut tells you, you don’t make a single tag surrounding the four radio buttons. Instead, you’ll do something like this:
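(A representative sketch, with a hypothetical name and hypothetical values; the point is that all four input tags share one name attribute:)

    <label><input type="radio" name="Opponent1" value="Scottie Dog" /> Scottie Dog</label>
    <label><input type="radio" name="Opponent1" value="Top Hat" /> Top Hat</label>
    <label><input type="radio" name="Opponent1" value="Thimble" /> Thimble</label>
    <label><input type="radio" name="Opponent1" value="Battleship" /> Battleship</label>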

…which renders most beautifully as:

The label tags aren’t strictly necessary, but then again, neither is showering. Hmm. Anyway, you can ignore the “label” tag along with the label text itself. The important guts of the radio buttons’ definition lie in input tag attributes.

Got it? Radio buttons are reasonably straightforward. Let’s move on.

Now for the problem: reading values from radio buttons

First, let me get this out of the way: it’s probably me. I haven’t really read up on Coypu, posted on the mailing list, contacted anyone, looked at a single other project using it, etc. I just discovered the problem, banged my head on the table repeatedly, eventually worked out the solution and am now posting this for the search engines. This is what I do.

With that said, here is my problem. I would like to write a test that submits a partially filled out form, gets redirected back to the same form, and checks to see if any of the data I filled in previously made it back to the refreshed page. It’s a simple enough setup. Here’s what the form looks like:

We’re not going to win any design awards for this form, but you get the idea.

So here’s my problem: how do you read the “Opponent 1” form value?

Act II: The Wrong Answer

Let's start with the wrong answer, also known as the "How Peter Lost A Few+ Hours" method:
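(The original screenshot isn't reproduced here; a hedged reconstruction of its shape, with hypothetical field names. Lines 1 and 2 are the smooth part, line 3 is the trap:)

    browser.FillIn("Team Name").With("The Crushinators");    // line 1: smooth sailing
    browser.Choose("Scottie Dog");                            // line 2: smooth sailing
    var opponent1 = browser.FindField("Opponent 1").Value;    // line 3: the FindField() trap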

Lulled into complacency by the smooth experience that was lines 1 and 2 above, I fell victim. Don’t be me.

What I’m saying is:

Act III: Triumph—You May Now Play the Rocky Theme

Here's a better solution (or strictly speaking, a working solution if not necessarily better): target the specific radio button input directly and inspect its checked state.

 

And as you might imagine, searching for a radio button by value can be tricky when there are two other radio buttons with the same answers, so to solve this problem, just do as the glowing green text says:
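(Again a sketch rather than the screenshot's exact code, with a hypothetical name and value; scoping by the input's name attribute keeps identically-valued radio buttons in other groups from colliding:)

    var radio = browser.FindCss("input[name='Opponent1'][value='Scottie Dog']");
    bool isSelected = radio["checked"] == "true";   // attribute value may vary by driver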

Final note

The Houston TechFest 2013 Cometh

The Houston TechFest is coming to the Reliant Center on September 28th. Which, depending on how you feel about it, will be too soon, not soon enough, or met with 100% total indifference.

Among the dozens (hundred-plus?) sessions, here are the few I’m personally interested in. I’m excluding talks I’ve seen before or for whatever reason just can’t muster enough interest to bother pasting into this hastily-assembled linkblog. The sessions that interest you will definitely vary from my list.

  • Rediscovering Modularity – the speaker claims to have techniques for organizing code that will help enforce architecture. It’s an intriguing idea, and appears to be, wait for it, wait for it, a new and/or novel idea. Or it will be all lies or some kind of trap where the solution to everything is TFS gated checkins. Anyway, color me intrigued.
  • Estimating: Updating your SWAG – and yes he definitely means swagger. Assuming the speaker speaks from experience, it will be a veritable goldmine of estimating wisdom. Or, again, lies.
  • Professional JavaScript – Chris will be talking about writing JavaScript in the context of JavaScript-heavy web apps, and as I have attended a past talk of his, I know he will be speaking from experience.
  • Rumors of my death have been greatly exaggerated – New Features in ASP.NET Web Forms 4.5 – just kidding, WebForms is in fact dead—dead like WPF and ColdFusion. Meanwhile, while we're on the subject, WPF 4.5 fixed the problem of creating objects off of the UI thread, so that's nice; WPF's not dead and it's getting new features too. Anyway, not attending, I just wanted to take some cheap shots at WebForms. Mission Accomplished.
  • Robotics and .NET – Robotics stuff from Phil Wheat. That is all you need to know.
  • Bits to Atoms – the world of 3D printers – 3D printing stuff from Phil Wheat. That is all you need to know.
  • Be thankful you don’t work in Java land – well that isn’t the name of the session but it’s what I got from reading the abstract. To be fair to this session, I could pick on most of the submitted sessions, but I enjoy hating on Java, so allow an old man his simple pleasures and let this one slide. I’m super old so I should be given extra leeway with my offensive jokes.
  • What, you don’t underscore yet? – as it says in the abstract, I’m one of the guilty people who have been meaning to learn more about underscore.js for the last ~year(s).
  • Real World Polyglot Persistence – another talk harvested from experience.
  • Interesting to me is how you can sniff out the Agile 101 talks from the Agile war-story talks: the 101 talks focus on practices, and the war stories focus on fixing dysfunction.
  • Programming Kinect – You had me at "cool demos." The joke here is that the last words of this talk's abstract are "cool demos", so when I say "you had me at cool demos", I mean "you had me only by the very end of your abstract, not a word before." See, explaining the joke ruins it.
  • Getting Started with Test Driven Development – at this point I can confidently say I will not learn a single thing in an intro TDD talk, but I'd show up to see if I learn something about teaching TDD. Well, probably not, since I think coding dojos and pairing, or maybe independent experimentation, are the only ways to move people from "not doing TDD" to "doing some TDD". Anyway I'm sure lots of people who attend will have never seen an intro to TDD talk before, and to them I say, uh, enjoy I guess.

Summary

Houston TechFest 2013 features a surprisingly large pool of interesting talks, even to a grizzled tech event veteran such as myself. I’ll see you there.

Adopting Agile practices: still a good thing in 2013

Summary

Martin Fowler has written an excellent piece on estimation. Go read it. Inspired by his post, I have a tiny extra point to make: we still need to adopt Agile (whatever that means), even in 2013.

We're in the post-Agile era

It's 2013 and Agile hasn't solved every team's problems like we were promised. We blame Scrum, we blame ourselves for not having enough faith, and sometimes we blame individual Agile practices. Today the Agile practice under fire is estimation.

The short version of the complaint is that estimation is wasteful and you shouldn't need to spend valuable time doing it. Martin Fowler wrote an excellent piece that doesn't deny the cost of estimation, and gives some good tips for identifying when estimation is valuable. Go read the article.

Incidentally if it sounds like I disagree with his article in any way, just for the record, I'm on team Martin on this one. I don't pretend to have the authority to disagree anyway.

The danger of post-agile

The backlash against Agile is useful in that each time an Agile practice is assaulted, it is then defended. In turn the defense (like Martin's article) helps onlookers like me understand why the practice exists. That's a good thing.

The problem with all these assaults against Agile practices is that the old arguments for adoption don't fly as well as they used to. "Because Agile says so" is no longer a good enough reason to adopt it. And while examining each practice for waste is great for already high-performing teams, almost every team I've met or heard about could benefit by adopting ALL of Agile. ALL of it, including the wasteful parts--Agile is a net win for most everyone. Even in 2013. Even Scrum.

War stories

  • I've met a team that does exclusive checkouts. That was 2012. In case you've never seen this, it means that when one developer checks out a file, no one else can open it for editing and must wait for him to finish before even beginning their own edits.
  • The majority of projects don't use Continuous Integration, even to compile their project.
  • I've met a team that did estimation (in hours). When asked what they do when they are over their estimate, they said they close the original task and "borrow" hours from other features assigned to the developer. I hid my horror well when I heard that.
  • I have met developers that routinely check in broken code.
  • I have met guys a few years from retirement that contributed nothing to a project.
  • I saw a job posting from last year that said roughly "We have great developers, but they're not great with SVN. Your job duties include taking zip files via email and checking them in."

The point of these little bullets is to say that there is a lot of dysfunction in the world, and adopting Agile will at the least reveal that dysfunction.

Agile: bring the pain

So I entreat you, if you are starting out fresh somewhere and are wondering about this whole Agile stuff, look. You can skip Agile and write your functional-paradigm Lean project, no sprints, no formal retrospective meetings, no stand-ups, doing code reviews instead of pairing, doing BDD with no unit tests at all, all tests written after the production code, and all this done without estimation. Go do that, it's all good, you get it.

But if you're in a situation where you're not sure "what is it, you say, you do here," go ahead and buy the books and adopt strict XP/Scrum-flavored Agile and start fixing problems. And when someone brings up an argument saying "estimation is wasteful", maybe they're right, but it's more likely that they have never seen estimation done properly, and you just need to do inefficient Scrum by rote until you understand why. And if it truly isn't working, don't give it up just yet--try and understand many of the common team anti-patterns now disguised by post-Agile. A real problem I've seen with post-Agile is that people no longer try to make Agile work, and they justify all kinds of poor behavior under post-Agilism.

The World Of Duh: A blog series

Welcome to the World of Duh, a blog series in which I talk about something new to me in an informal, unresearched, and often factually inaccurate way. My goal with this series of posts is to help those similar to me. Given I didn't do much research on the topic, take it for what it's worth: just some guy's opinion. You're welcome.

ShiftIt makes window resizing tolerable on a Mac

Summary

Download ShiftIt, which helps you move/resize windows by assigning global hotkeys. Make sure you get version 1.6.

Introduction

I'm using a MacBook Pro on a day-to-day basis. My first (and lasting) impression is that they removed the 6 most precious keys on the keyboard--Home, End, PgUp, PgDn, Del, Ins, and they did it because they hate me.

But we're not here to talk about how much pain I've endured attempting to mentally map the Mac equivalents of "skip word", "go to end of line", "go to beginning of the line", etc. That kind of useless ranting is what Twitter is for.

Moving windows on a Mac - describing the problem

Today we're here to talk about the problem of moving windows around. Macs have inherited the Windows disease of opening each new application in a tiny portion of the available space (maybe they're sharing needles, I don't know), and Macs have gone a little further in that they made their maximize button tiny, secretly gave it two modes of operation, and refused to assign a global hotkey to either maximize operation. Safari must have gotten complications of the diseased window syndrome because it will simply not maximize, I don't know, Safari hates us I guess.

Most people I watch using a Mac go through a short-to-medium length ritual of opening a program--finding their program on the dock (of course), moving their trackpad mouse cursor over to the 5px wide button, clicking the green one, then watching as the program slowly maximizes to fill part of the screen. Then they remember, move their cursor over to the green button again, look down at the keyboard, find the shift key, press and hold the shift key, and click on the button. And the program really maximizes this time. And it's like a minute later, and they're done. Maybe this is how people take mental breaks--"I need a break. I know, I'll open a program on my Mac, that will give me at least a few minutes of downtime."

I don't know you people and I don't know why you're all so bad at this.

Anyway it's driving me a little bit crazy.

Stating for the record - Windows 7 is better out of the box

Windows 7 introduced keystrokes to maximize windows and move them from screen to screen. I won't belabor the point except to say that the details are here if you need them, and to say that this is a solved problem on Windows out-of-the-box.

ShiftIt to the rescue - moving/maximizing windows for Macs

Meanwhile all is not lost.

Some kind soul named 'fikovnik' on github is maintaining a perfectly good window management program called ShiftIt. Note I am linking directly to the (now hidden) downloads page of the project. In a fit of hilarity/incompetence/extreme unnecessary competence, I compiled my own version of ShiftIt before someone told me there's a downloads page.

Oops, I haven't even mentioned what ShiftIt does yet. ShiftIt assigns global hotkeys to common window resize/move tasks such as:

  • maximize window
  • move window to left/right half of the screen (I do this a lot with the Chrome Developer Tools window)
  • move window to the other monitor, assuming I have two monitors available

Basically, it solves the "How do I move this window" problem in a way familiar to my Windows-thinking brain.

Download

Download ShiftIt here.

Download ShiftIt 1.6, not 1.5

Version 1.6 introduced the ability to move a window to another monitor. 1.5 does not have this shortcut. 1.6 is labeled 'dev', but swallow your fear and be brave, and download the dev version so you can switch monitors painlessly.

The World Of Duh: A blog series

Welcome to the World of Duh, a blog series in which I talk about something new to me in an informal, unresearched, and often factually inaccurate way. My goal with this series of posts is to help those struggling with similar issues find a solution. Given I didn't do much research on the topic, the solution I propose may not be the best solution, just "a" solution. You're welcome.

The Discourse Source Code Has Already Helped Me

It took two minutes.

While browsing the discourse source code, and more specifically while attempting to load it in Sublime Text 2, I came across their sublime-project file. I'm already aware of these project files, which by the way are great for excluding files and folders you don't want to see in the sidebar, or see included in search results. I do a lot of 'Find in Files' in my day-to-day work, and sometimes get 5007 results, most of which come from a log file. Well, I used to get those results; then I saw the light, used a sublime-project file, and everything was great.

Fast forward to three minutes ago and my discovery of the Discourse project's sublime-project file.

You can set tab settings in your project files

This is something I had no idea was possible: project-specific tab settings!

  "settings":
    {
        "tab_size": 2,
        "translate_tabs_to_spaces": true
    }

Possibly the greatest thing about this little snippet is the line "tab_size": 2 is indented 4 spots from its parent. The rest of the file is consistently indented 2 spaces.

I don't know if the "for consistency, on this project we use 2 spaces for tabs universally, and btw this line is indented 4" situation is unintentional irony, but I'd like to think that someone did it on purpose. Because that's what I would do.

Final Note

Assuming I ever post again, when I do post again, it will be less researched, quick, unformed thoughts, or maybe things that are already obvious to everyone else. Basically like my twitter feed. I'm thinking of calling this series of posts The World Of Duh. You're welcome.

Don't Sign NDAs

Sometimes when working with the Microsoft stack, you'll be offered information on an "NDA", or non-disclosure, basis. Don't do it.

I'm still not sure why Microsoft keeps so much of their product development under wraps, but as they compete with Oracle and I don't, I don't blame them. It probably has something to do with "battlecards", which are the most ridiculous/effective thing I've ever heard of.

If you haven't heard of battlecards, imagine Pokemon, but with ECM systems. The IBM guy says "your database doesn't scale!" and if you haven't memorized the appropriate response line on the battlecard (by the way, the correct answer to any scaling question is "you're a towel!"), you lose the ECM Pokemon battle and surrender the sale. Whoever wins the most ECM Pokemon battles appears as a "visionary leader" at the top right of the Forrester magic quadrant.

Also, if we're playing ECM Pokemon, if one player offers to "build an ECM from scratch", they're tarred and feathered, and declared anathema, and a heretic. I don't make the rules, I'm just telling you what the rules are. Tarring and feathering is in the ECM Pokemon rulebook, right underneath the part endorsing referee bribes.

Anyway. I've only received NDA information a few times, and have never benefited from NDA knowledge.

Instead of benefiting from my NDA, all of a sudden I had to concentrate on censoring myself at all times. I had to censor my blog posts and conversations. And worse, my tweets. My tweets!

There's really not much more to say about NDAs in the Microsoft ecosystem. Unless the NDA offers a career-changer (such as getting access to the newest SP a full year ahead of the public), don't receive anything NDA. At best you'll satisfy your curiosity, and at worst you'll get yourself in trouble (the other career-changer).

Test-Driven Development

I won’t get preachy about this, I promise.

Apparently one of the topics of discussion at Pablo’s Fiesta was whether TDD is a fad.

As a kind of response to the question “is TDD a fad”, let me focus on something everyone likes to talk about, and that is me. Me me me me me. Not you--me.

My story before Test-driven development

It’s college. I’m learning about object-oriented programming and have a pretty firm grasp. I can make classes, methods, static methods, and even make the right decision as to whether to go with a struct rather than a class*. I even know about singletons.
*"struct vs class" – in case you're wondering, the answer is "always class, unless you're

Unfortunately, my compiler project is a complete mess. I use the same data structure (let’s call it a class) for each stage of the compilation process. I sit frozen at the keyboard, sometimes with a piece of paper and a box-and-arrows-looking diagram, sometimes just sitting slack-jawed staring through the wall behind the monitor, trying to figure out where to put that behavior I need to implement.

It’s slow going. It’s a lot of rewriting. There’s dead code everywhere, some of which I know about, some of which I don’t. I try to map out everything I need to get this working, and stub out some of the methods I will need later. Sometimes I forget what I’m doing mid-step and just…blank out.

It’s a bad time.

My story from last Friday*, at work

* Last Friday…in September. Through the magic of first forgetting, then rediscovering this draft, I am able to traverse time itself.

First, I read the user story to make sure I knew what I was supposed to do. Something about making another bit of our application searchable. Check. Once I had a vague idea of what to do, the first thing I did was write a test that spans all the way from a SearchViewModel down to the database (and yes I said database. It's a simple search, we're not using Lucene or anything crazier, lay off me). Specifically, I wrote some code to a) create an entity, b) save the entity to the database from whence it will be searched, c) get me a SearchViewModel in as similar a fashion as possible to our WPF-based UI, d) run the search, and e) inspect the ViewModel for the search results I expected.
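In shape, the test looked roughly like this (a hedged sketch, not the real code; the entity, helper, and assertion names are made up to match the description above):

    // a) create an entity, b) save it where the search will look
    var widget = new Widget { Name = "findme" };
    repository.Save(widget);

    // c) build a SearchViewModel the same way the WPF UI does
    var viewModel = CreateSearchViewModelLikeTheUiDoes();

    // d) run the search, e) inspect the ViewModel for the expected results
    viewModel.SearchText = "findme";
    viewModel.Search();
    Assert.AreEqual(1, viewModel.Results.Count);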

With this large (and yes, slow) test harness supporting me, I went on to implement the search functionality.

An aside

Let me take a moment to talk about a few things I didn't test. I didn’t write a unit test for every interaction within my own internal search API. I didn’t write a test from the ViewModel that mocked out the search service (Searcher) to test interactions. In my test, I didn’t even inspect properties of the search results to ensure that I’m getting the right search results, just lazily counted how many search results come back. See, I already have tests that verify each and every property coming back from search results, so why would I bother checking every property in every test? Anyway. "What to test" is the subject for another day, and no, I don't have the final answer either.

Back to my last Friday

After implementing enough code to make the first test pass, I ran the UI on my machine and verified that searching did everything it was supposed to. No problems this time—wait, I somehow mismatched two properties—oops, need to fix that. Went back (without writing a test or adjusting a test to test for the bug I just identified), made the necessary code change, fired up the UI again and inspected.

After I verified the code was in a stable state, I went back looking for things to clean up. No methods to rename, no dead or “temporarily commented-out” code, no code sitting in the wrong class. This time. So no refactoring work needed.

Another side note

Most TDD instructions will tell you to only implement the bare minimum needed to make a test pass. This is good contextual advice, given that the vast majority of developers create "speculative" methods and functionality and need to learn how to do truly emergent design (also known as design-by-example, or, the "YAGNI-You Ain't Gonna Need It" principle of design) via TDD.

But, we are also told that it's okay to do some up-front design. Depending on who you heard it from, sometimes you hash out a class diagram that fits on a napkin (then as they famously say, throw the napkin away). Or, you can use CRC cards or Responsibility Driven Development(?), and Spiking, all of which, even if you don't know what they mean, sound like they involve doing something above the bare minimum needed to make a test pass. Anyway. Kent Beck's TDD book even tells you Triangulation is only one of the approaches to making a test pass, another approach being "just write the code you actually want in your finished product AKA the Obvious Implementation".

Okay. So here we are. We have the same people telling you to a) practice "pure" TDD by doing zero up-front design, and then b) all these other things that directly contradict a). I've personally reconciled the conflicting advice as follows:

  1. Practice YAGNI. Speculative design tends to be bad, and from what Resharper tells me, the rest of you leave a lot of completely and obviously dead code lying around, not to mention all the extra unused public methods and classes you can find with FxCop or Resharper's "Solution-wide analysis". You guys, you guys.
  2. But do spend some amount of time thinking ahead, and maybe just implement the code you want to end up with, not strictly the bare minimum needed to make a test pass. If you have an Add(a, b) method, instead of writing 50 tests and triangulating towards "return a+b", you're allowed to write a+b the first time and write enough tests to catch all scenarios (see the sketch after this list). Triangulation helps keep my mental load down so I can keep moving towards solving the problem, and I often find that, having solved the problem through Triangulation, I have followed YAGNI and an unexpected design has "emerged".
  3. If you're not sure if you should follow #1 or whether you are allowed to cheat and follow #2, well, follow #1. Don't be "pragmatic" and use the word "pragmatic" as the blanket you use to justify whatever you want to justify.
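To make the Triangulation-versus-Obvious-Implementation distinction concrete, a tiny made-up sketch (NUnit-style, all names hypothetical):

    [Test]
    public void Adds_two_numbers()
    {
        // Triangulation: the first assert alone could pass with "return 3";
        // adding a second example (2 + 5) forces the general "return a + b".
        Assert.AreEqual(3, Calculator.Add(1, 2));
        Assert.AreEqual(7, Calculator.Add(2, 5));
    }

    public static class Calculator
    {
        // Obvious Implementation: skip the dance and just write the code you want.
        public static int Add(int a, int b) { return a + b; }
    }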

If the YAGNI style of programming is new to you, the amount of code you'll rewrite while attempting it at first will be staggering, but through the pain, after rewriting everything about 5 times and deleting 2/3rds of your codebase, you'll figure out that you haven't really been applying YAGNI. And then you'll rewrite everything a few more times.

I'm probably sounding a little preachy, so let me be clear: I'm talking to myself from a few years ago. And I'll probably come back some years later and yell at myself for dismissing the relative value of unit tests versus integration tests. Yes, see you in 2020 for the follow-up post: "Peter is wrong: Again. Part 17 and counting".

Back to last Friday (again)

I run the tests (I have about twenty by the time I'm totally finished), do a quick diff on the files I have checked out, check in, and verify the build remains green. Our build compiles and runs all tests in about ~30 minutes. It’s slow. And no, I didn't check in every time the tests turned green (though I do shelve frequently). This paragraph sponsored by twitter hashtag #realtalk. #hashtagsInBlogPosts #weUseTfs

A world of difference

Being able to take a vague idea of what I need to do and steadily translate bits of requirements into working code is a world of difference from me in college. I haven't reached the summit—I probably shouldn't have to write the phrase "I haven't reached the summit", but just in case: "I haven't reached the summit. No comments please."

And, back to the original question: is TDD a fad? As for me:

Learning (and applying) Test-Driven Development has improved my programming ability more than anything else, by far.

So to answer the question: hey. Hey. Let's be pragmatic and not get too carried away. Maybe it is a fad. It's good to be pragmatic, not too far on the left, not too far on the right, but somewhere in between in the pragmatic zone (the region where pragmatism reigns). Pragmatism is delicious when spread evenly over toasted bread and served with tea. If someone is drowning, be pragmatic about the situation. If you had to choose between having a tail or the gift of flight, be pragmatic.

Today, Running a Linux Virtual Machine is Painless

tl;dr

If you're considering trying out Ruby, just run Ruby inside of Ubuntu on a VirtualBox VM instead of Ruby on native Windows, because the Ruby/Ubuntu/VirtualBox combo is completely painless. I might even be so bold as to say flawless.

Linux is painless (today)

As someone who has lost hours and hours and hours unsuccessfully troubleshooting, and as someone who has experienced a Personal Complete And Total Data Loss Incident, I want to acknowledge that running "linux" in its many flavors can be painful.

And it is with that in mind that I want to let you know that, as of January 2012, I'm having no problems.

With no special configuration required, I've set up:

  • VirtualBox (free for non-commercial use)
  • the newest stable Ubuntu 64-bit release
  • with an instantly adjustable monitor resolution via VirtualBox extensions for Ubuntu and RIGHT CTRL+F
  • with working sound *note: don't take this for granted, you punks
  • with both Chrome and Firefox
  • with internet access, even when I switch from ethernet to wireless and back (my fellow former VirtualPC/Virtual Server users are having PTSD flashbacks right now, sorry)
  • with equally snappy performance as the host Windows 7 machine
  • with easy and "Windows intuitive" text editing via Sublime Text 2 AKA "new hotness" – by the way, Sublime Text behaves identically on Windows and Ubuntu, so I'm having zero Text Editor Culture Shock. We can talk about text editors later—today, the point I want to make is that by using Sublime Text I can defer "the talk". Compare to the past where my choices were vim (:qa!), emacs (CTRL+X, CTRL+C), and pico (oh sweet, sweet menus written in English!); or the past where I couldn't figure out how to get pico installed and made do with vim. (Vim protip: press "i" and it goes into Insert Mode; then anytime it starts acting weird, hit ESC a bunch to get back into Normal Mode. And yes, I said "protip".)
  • with copy/paste between host and VM
  • with working VM pause/resume that takes a grand total of 2 seconds

So to be clear, it wasn't always this easy.

It probably took less time to install and update Ubuntu than a Windows 7 VM, and I've done both recently, so I guess that makes me a leading world authority on how long it takes to install operating systems on VMs.

It even took less time to blunder through apt-get-ting/gem-ing/bundle-ing all the dev tools on Ubuntu than to sleepwalk through the VS2010 + SP + SQL + SP installers.

So there's your anecdote. As of January of 2012, it's easy.

What does this mean

If you're considering tinkering with "the Ruby" or whatever*, just install VirtualBox and Ubuntu…or whatever works for you. I'm just here to tell you that it's very easy to get an Ubuntu VM set up and running, and it's easier than trying to get Ruby working on Windows.

And, when the Ruby On Windows Pain Factor dramatically drops (like it did with git—oh by the way—if you haven't heard already, running Git on Windows is easy now), maybe you'll hear from me again.

Don't Be Me

This may be good general advice, but today I just mean it in the context of using PowerShell's call operator (the glorious &, AKA "The Ampersand").

I could spend a lot of time building up to the good stuff, but I'll just get to the point. I'm going to run "echoargs" which most recently helped me troubleshoot calls from PowerShell to MSBuild.exe. You'll see why I need this utility soon enough:

[screenshot]
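(For context, and not the screenshot's exact invocation: EchoArgs simply prints back each argument it receives, roughly like this, so you can see exactly what PowerShell handed to the program:)

    PS> & echoargs one two "three four"
    Arg 0 is <one>
    Arg 1 is <two>
    Arg 2 is <three four>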

Okay. That was the easy part. So far, when calling commands from PowerShell using the call operator, everything is pretty much working as expected. Now let's try something…different…:

[screenshot]

I'm not exactly sure what to say here. The first example is a thinly-obfuscated real-world head-scratcher I've stumbled into over and over and over. The second line I wrote to try and make some sense out of PowerShell's parsing rules. And the output I got for the second line I can only make sense of by using parsing rules like "throw away some of the quotes, then start parsing" and "if the quote-marks are on the left side of the word, move them to the right". You won't find these parsing rules in an example in the dragon book.

So I kind of gave up.

You see, I had a longer, well-reasoned blog post planned out. In my pretend fairy land, I'd spend a few minutes doing research, master PowerShell's parsing rules, and write a helper method to encapsulate the weirdness so you and the rest of the world could live out your sheltered hobbit lives in the Shire, never understanding the service I provided for you. I'd be the Aragorn of this story, and would be pretty rad compared to you lame-os.

I even had a "reasonable explanation" for the weird behavior to link to here. And don't get me wrong, that's good information.

But nothing explains "   1 2 3 4", followed by "5", followed by "6    7 8 9" as your argument list.

Lesson learned: don't be me

There's probably a better lesson to be learned, like

a) trust PowerShell's call operator syntax about as far as you can throw it, and

b) when you throw it, watch the skies carefully, or the moment you turn away PowerShell will boomerang back at you and aim for your throat.

Okay.

Furthermore, echoargs.exe, which ships with the PowerShell Community Extensions, is built for the sole purpose of troubleshooting this kind of weirdness. It's useful, it's small, and it's safer than taking a boomerang to the throat every time you test.

Furthermore, when using the call operator (&), use the more explicit, longhand form. Even though it makes most calls unreadable to humans, for those of us who matter (the parser), it is clear as day. See screenshot + gaudy green text below:

[screenshot]

Furthermore, if you're writing a generic script that accepts input you can't control, and some of that input may or may not include quote-marks…find whoever is responsible for assigning you such a doomed task and punch THEM in the throat*. They deserve it**.
* don't do this
** they probably don't

By the way, if you know why these rules are the way they are, by all means answer the question here and I'll give you the appropriate whuffie or whatever they call it these days. And no, spell checker, 'whuffie' is not a misspelling.

Hope for the future

Just so you know, we may see a fix for this class of problem in PowerShell v3.

Highway To The Danger Zone

Or, "How to avoid crashing Visual Studio while working with XAML"

Our project may have problems. I don't know. What I do know is that, when you open a XAML file in Visual Studio, you are officially in The Danger Zone. And, after much careful thought and dozens of "unpredictable" crashes, I've identified the problem.

Well, of course I haven't identified the real problem. But I've found a suitable way to tiptoe around the problem.

I've developed a simple workflow that may help you as well.

A Simple Workflow

  1. Open the XAML file for editing. Whether or not you have design view visible in any way is immaterial. This happens in both code view and design view, and split-screen view.
  2. Make any and all changes.
  3. Save your file. (This step, while not necessary, will make it easier on you in the Visual Studio recovery process post-crash.) It's important to note that at this point you've entered The Danger Zone.
  4. Observe your task manager's real-time CPU chart max out one of your CPUs. You're still in The Danger Zone.
  5. Close all XAML files. You may leave code (C#) files open in Visual Studio if desired. This will trigger further processing.
  6. Continue to observe devenv.exe's CPU usage.
  7. When CPU usage drops to 0%, even for a short while, let out a yolp of joy! You've passed through The Danger Zone. Give yourself a pat on the back. (Yes, I mean physically give yourself a pat on the back. It's awkward, but you've earned it!)
  8. Now you can run your application without crashing Visual Studio!

Ways to know you're in The Danger Zone

1. Visual Studio crashes when attempting to "Play" or launch your WPF project from Visual Studio.

2. Visual Studio crashes when it receives focus again sometime while running your WPF app.

3. Visual Studio crashes when you terminate your WPF application.

Highway To The Danger Zone, by example

This just happened two minutes ago. I'll point out I avoided crashing Visual Studio again, thanks to my stick-to-it-iveness. Enjoy.

[screenshot]

PS—I have a quad-core machine, so this graph represents one of the four CPUs entirely pegged by devenv.exe. I don't know why, nor am I particularly interested to report a bug. I just know how to avoid the crashing and whatnot.

I Hate You, OutDir parameter

tl;dr

MSBuild’s OutDir parameter must be of the form:
     /p:OutDir=C:\folder\with\no\spaces\must\end\with\trailing\slash\
…or of the form:
     /p:OutDir="C:\folder w spaces\must end w 2 trailing slashes\PS\this makes no sense\\"

I have written a self-contained PowerShell function to handle OutDir's mini-language, which exists because…I don't know why, because they hate us? Anyway, the script is all the way at the bottom. PS: "backwards compatibility" is code for "we hate you," in case "backwards compatibility" is the reason you're given for OutDir's hostile syntax on the Connect issue you so diligently filed. That's also a trick, because you're not supposed to file Connect issues.
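(The real script is at the bottom of the post; purely for illustration, a minimal sketch of the slash rules described above: no-space paths get one trailing slash, space-containing paths get quotes and a doubled trailing slash.)

    # Minimal sketch only - not the full script from the bottom of this post.
    function Format-OutDir([string]$path) {
        $path = $path.TrimEnd('\') + '\'       # exactly one trailing slash
        if ($path.Contains(' ')) {
            return '"' + $path + '\"'          # quoted form needs the doubled trailing slash
        }
        return $path
    }

    # Format-OutDir 'C:\temp\My Project'   ->   "C:\temp\My Project\\"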

I hate you, OutDir parameter

Okay, so the post title is unhelpful. Deal with it. I'm in pain, and a suffering man should be afforded some liberties. I'm like Doc Holliday—minus tuberculosis, plus build script duties. Or the whooping cough. I didn't pay much attention during Tombstone, but he did cough a lot. Could be parasites.

Build script duties are some of the worst, alongside SSRS reporting duties, SharePoint integration duties, auditor-friendly deployment documentation duties, or any combination of those three. I don’t know what IT auditors do for fun—I simply can’t imagine. I don’t know if they can either. Think about it.

…back to build scripts. A bad build script will kill your chances of getting any kind of an automated deployment working, and if you can't do builds or deployments well, you end up editing your production web.config in production and writing Stored Procedures because deploying code is just so painful. And then no one wants to deploy because it takes about three weeks and seventeen tries before you get it right, and no one's writing any sort of automated tests around your stored procedures (except that one guy who's waaay too excited about T-SQL, but he writes try/catch blocks in T-SQL and is pushing for Service Broker, so…can't trust him), and this has all kinds of implications, and then all of a sudden exclusive checkouts sound like a good idea, and you wake up one morning and you're doing Access development. Again(!!!). Except less productive. And your customers don't trust you, and then one day you're just fired outright, and the next day you're on the street, and then finally, out of options, you reach the lowest low—you develop and release an app on the iTunes app store. Lowest of the low. Can't possibly get worse, unless you're forced to write code in Ruby, which requires you join the Communist Party, as is clearly written in the AGPL (yes, this is why Microsoft wrote their own GPL—they're fighting both terror and communism, and socialism—one license agreement at a time). This is why you read the EULA. Communism is why.

Anyway, MSBuild’s OutDir parameter isn’t making my build script duties any easier.

Regarding OutputPath

I tried researching OutputPath, but it looks like a different metaphorical universalist path up the same mountain named “appending 1 or more slashes to the end of everything for no reason”, so I gave up. When it comes to doing in-depth research on any framework, including, and today featuring MSBuild and its wonderfulness, you either find out that a) you were woefully ignorant all along and just needed that one tidbit of knowledge, with which you can SUCCEED, or b) you were unfortunately justified in distrusting your framework because your framework has FAILED you. After a few extremely painful episodes, I started giving up early and looking for a workaround, which turns out is what most people do anyway.

OutputPath smells like it has the same problems that OutDir has, so I just gave up on it and went with the workaround (below). I could be wrong about OutputPath. Blame SharePoint for my wariness.

But I’m not only here to complain

I’m here to complain, don’t get me wrong. Like a wounded Rambo provided with only fire, kerosene and his trusty serrated knife, I’m writing this post as a kind of Rambo shout before I pass out from the pain after cauterizing my wound the Rambo way. Life sucks*.
*not actually true

But I’m also here to let you know, hey, if you’re in the Cambodian jungle* with a bullet wound and you’ve got to do something, here’s what you do. Maybe you won’t bleed all over the flora and fauna** with your bullet wound in the Cambodian jungle as long as I did, maybe this post will help you along in your journey…whatever that journey is. It’s a journey of some kind. Let’s not stretch the metaphor too far. Wait, aren’t we talking about build scripts?
*I am not going to do any research, do not question or fact-check my Rambo knowledge. Just assume I got it right.
**it seemed like the right thing to say at the time

Why: A brief explanation why OutDir exists

Now, onto something resembling a technical blog post.

OutDir exists so that, when compiling a Project (e.g. “msbuild MyProject.csproj”) or Solution (e.g. “msbuild MyManyProjects.sln”), you can tell MSBuild where to put all the files. Or if you like fancy words, “compilation artifacts for your ALM as part of your SDLC”. You’re welcome. I’m SDLC certified 7-9 years experience, ALM 8.5 years, MS Word 13 years. Hire me, I’ve got an edge on the other candidate by 2.5 years SDLC and a whopping 9 years MS Word. Numbers can’t lie! Plus I’ve got 5 years OOP, 3 years OOA, 4.5 years OOD. You can’t argue with numbers.

Where were we? Ah, putting compilation artifacts in folders. Without OutDir, you don’t have that control.

Let’s take the simple example. “msbuild MyProject.csproj” will put MyProject.dll in the bin\Debug subfolder, just like compiling from Visual Studio. If you set the configuration to Release, ala “msbuild MyProject.csproj /p:Configuration=Release”, everything will be dumped into bin\Release. If you have no idea what’s going on and you make a third build configuration, e.g. “msbuild MyProject.csproj /p:Configuration=Towelie”, the files will be dumped in bin\Towelie.

You get the idea. By default, files go in bin\$Configuration, whatever $Configuration happens to be at the time.

So here comes OutDir to shake things up. Let’s try a simple example:

msbuild MyProject.csproj /p:OutDir=C:\temp\MyProject

[screenshot: MSBuild error output from the missing trailing slash]

Haha! Tricked you! This simple example doesn’t work! You forgot the trailing slash!*
*serious aside: would it have taken more effort to write and localize an error message in seven hundred languages including Bushman from Gods Must Be Crazy 2, or just accept the path without a trailing slash and fix it for us? I can’t imagine it would be harder to just scrub the input. I’m serious. I’m Batman voice serious. Seriously.

Okay, let’s try this again, but after paying the syntax tax:
msbuild MyProject.csproj /p:OutDir=C:\temp\MyProject\

You get exactly one guess what happens. Okay, who cares, I’ll just show you.

[screenshot: build output landing in C:\temp\MyProject]

So you get the idea.

A second example, this time illustrating the use of path names with spaces

Okay, first off, MSBuild’s OutDir parameter is only one of the many, many reasons that I dislike spaces in filenames, path names, even passwords. I mean passphrases. Of course I mean passphrases. Passwords are crackable. Passphrases are the way to go.

Don’t even get me started about Uñicode support.

Second, let me point out that I can work perfectly fine without setting OutDir. I know where my files go, and I know how to reliably copy files from bin\debug folders directly into production as part of my nightly build process (PS for the humorless, don’t try that). But, I need OutDir, because TFS’s default build definition uses OutDir whether you like it or not. And, in the course of setting up a working TFS 2010 build, at the time I needed to a) understand, and b) simulate TFS’s compilation process.

Anyway, some of our TFS build names have spaces in them, which means that some of the folder names have spaces in them, which means that my script that calls OutDir needs to handle folder names with spaces in them. Let’s try the vanilla latte half-chop burned-choco cream soda vento rico suave way of calling OutDir and see what happens:

[screenshot: MSBuild error output from the unquoted path with spaces]

Okay, we cheated somewhat, because we didn’t even bother to surround our long path name with quotes. Rookie! Let’s try again:

[screenshot: the “Illegal characters in path.” error]

Okay. Surrounding your long path name with quotes, along with the trailing slash, isn’t cutting it.

This “Illegal characters in path.” error message is where I’ve lost probably…let’s not estimate, my professionalism will be called into question. Anyway, let’s just say “a lot of time” was lost on this problem.

So here’s the solution:

[screenshot: the working command, with a quoted path and a double trailing slash]

I don’t know why, and at this point, I’ve lost the fighting spirit. It’s setting an output folder in MSBuild after all, I’m not exactly writing a new OS scheduler, though I have a vague idea that OS scheduling is not like Outlook scheduling, and my resume says I have 3.5 years of OS Scheduler experience, so I can speak to it.

Someone in the comments of this blog post suggested the double trailing slash solution, and what do you know, it worked, and here I am much later writing a blog post that is way too long to justify this much effort.

Wrapping up what we’ve learned today, in bullet point form

  • Doc Holliday has either TB or the whooping cough. Or parasites.
  • They hate us:
    • MSBuild’s OutDir parameter must be of the form:
      /p:OutDir=C:\folder\with\no\spaces\must\end\with\trailing\slash\
    • …or of the form:
      /p:OutDir="C:\folder w spaces\must end w 2 trailing slashes\makes no sense\\"

Wrapping up what we’ve learned today, in PowerShell function form

Enjoy. There’s almost nothing special about this. The value Compile-Project gives you is that it hides (or if we’re using the fancy words, encapsulates) the horrible rules OutDir imposes on us, freeing the caller to worry about, oh, I don’t know, writing an OS Scheduler.

Feel free to cut-and-paste. I’m not going to force you to join the Communist Party like the AGPL does.

And do note the commented-out psake-friendly line. Psake’s Exec function exists to encapsulate the weirdness with executing DOS commands from PowerShell. I figure, if you’re calling MSBuild, chances are good you’re calling it from psake, but if not, here’s a script that will bubble up a reasonable error message to the user.

Psake or not, if you’re calling this PowerShell script from TeamCity, the error message will bubble up to the top. If you’re using TFS, follow these instructions to experience the joy that is visual programming (and yes, you’ll also get good error messages bubbled up to the top).

Also, this isn’t one of those bulletproof, general-purpose functions, what with proper types and default values for each argument, logging via write-verbose, a -whatif switch, documentation, and whatever else I’m ignorant of. Of. I don’t do that day-to-day for my PowerShell scripts. I just write what I need today, and maybe generalize what I have if I use the same function twice in a script. It’s not like sharing functions between PowerShell scripts is desirable. Like sharing needles. A discussion of the merits of needle sharing is a good way to wrap up a blog post. And on that note, here’s the script:

 

 

$msbuildPath = 'C:\windows\Microsoft.NET\Framework\v4.0.30319\msbuild.exe'
function Compile-Project($project, $targets, $configuration, $outdir) {
  if (-not ($outdir.EndsWith("\"))) {
    $outdir += '\' #MSBuild requires OutDir end with a trailing slash #awesome
  }

  if ($outdir.Contains(" ")) {
    #paths with spaces need quotes AND a second trailing slash--read the comment from Johannes Rudolph here:
    #http://www.markhneedham.com/blog/2008/08/14/msbuild-use-outputpath-instead-of-outdir/
    $outdir = """$($outdir)\"""
  }

  #if you're calling this from psake, save yourself the trouble and use their "exec" command.
  #psake:
  #exec { & $msbuildPath """$project"" /t:$($targets) /p:Configuration=$configuration /p:OutDir=$outdir" }

  #Vanilla PowerShell, non-psake: show MSBuild's output live, but also capture it so we can report failures
  & $msbuildPath """$project"" /t:$($targets) /p:Configuration=$configuration /p:OutDir=$outdir" 2>&1 | Tee-Object -Variable msbuildOutput
  if ($lastExitCode -ne 0) {
    write-error "Error while running MSBuild. Details:`n$($msbuildOutput | Out-String)"
    exit 1
  }
}
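
Example usage, in case it helps (the project path and drop folder are made up):

Compile-Project -project 'C:\src\MyProject\MyProject.csproj' -targets 'Clean;Build' -configuration 'Release' -outdir 'C:\temp\drop folder with spaces'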

 

Standard Disclaimer

Every post I write should come with this standard disclaimer. If I ever re-do my blog, I’ll link to this standard disclaimer from the top of every blog post.

This Stuff Just Doesn’t Matter

This stuff, this software development stuff in and of itself, just doesn’t matter. It isn’t the end goal. There are bigger things in life.

All things being equal, it is better to be a competent software developer than an incompetent software developer. This is why I write posts about how to invest my limited time.

All things being equal, it is better to be learned rather than ignorant about software development practices. This is why from time to time I feel the urge to linkblog posts I find on twitter that I believe my blog audience (i.e. you) haven’t seen, and may benefit from. My linkblog posts are gold, I tell you, gold.

All things being equal, it is better to be intentional about your career path and career goals, especially when it comes to dealing with Microsoft’s endless framework lahar. I see a lot of time wasted on studying for exams, and attention given to half-baked frameworks that subsequently under-deliver. And I don’t know why, but I have the urge to fix this problem. For those of you who could not care less about helping others make wiser choices with their learning investments, sorry, but it’s who I am, and it bothers me enough to blog about the topic…frequently.

All things being equal, it is better to go to work and experience less unnecessary pain. This is where a lot of my “written for the search engine” and “surviving TFS” posts come from, and where I hope most people find value. I write many of my blog posts with the singular goal of reducing pain. Pain isn’t the ultimate evil. (There’s a great discussion about pain in A Canticle For Leibowitz, which by the way is the first post-apocalyptic book, but I’m too lazy to find the exact quote. PS—dork alert)

All things being equal, it’s more productive for me to blog here than to sit on the couch on a Saturday and take a nap while watching college football. Though there’s nothing wrong with any combination of naps and college football. It’s also better for me to blog than to play video games; or browse the gaming subreddits; or watch someone on twitch.tv live streaming while they play video games; or best of all watch someone on twitch.tv live streaming while they browse the gaming subreddits, which frees you from the chore of browsing the internet yourself. You should probably visit that hyperlink, because it’s just perfect. It’s like watching Inception if Inception featured laziness as its major theme. It just makes sense. Go watch Inception, and go click that link.

I’m not a super expert genius ninja samurai ZeroCool hacker

If it appears that I’m presenting myself as an authority on any topic, make sure I back it up with personal experience. If I don’t have the personal experience to back up my claims, take my argument for what it is: an unsupported opinion. I know that I’m not an expert, and when writing blog posts my self-image doesn’t change—but maybe here on the internet, where they don’t know you’re not a dog, you don’t read my posts the way I intend for you to.

I’m not an expert, but if it so happens I am, I’ll tell you why.

This is a good rule in general. Given blog posts aren’t built off of months of investigative journalism or academic research, the best blog posts are harvested from personal experience (as opposed to blog posts written by pundits with no experience). And let me draw one more point from this: a lot of .NET experts aren’t experts either on the subjects they write about. They are no more an expert, no more experienced, no more capable and have no better software development experience than you or me. They’re just people like you or me with better communication skills. With that said, some of them are true experts. The difference between a good blog post and a great blog post is, in my opinion, the great blog posts are harvested from years of painful experience. Compare this great blog post to my post on the same subject, but clearly written from a newbie’s perspective for an example of this in action.

One additional point I’d like to make is that I feel like I’ve crested the hill and I get it now. Software development is a known problem for me. I’m comfortable with the things I know, and I’m comfortable not knowing the things I’m fuzzy about and still working on (see: estimation; finding out what the customer wants), and I’m comfortable with the fact that I may never learn Haskell, or SmallTalk, or BizTalk, or Joomla. This greater sense of perspective wasn’t always how I was, and I get the idea that most of the working world is full of people who don’t get it yet. So yet another of my part-time crusades is to get everyone up to speed, at least to the point where they get it. I’ve met people who (without some help) will remain forever behind, forever…for lack of a better word: incompetent. And I don’t see my “getting-it-ness” as unique expertise but simply what all software developers should have. I look around and I don’t see that…getting-it-ness. Find me a better word. I can’t write more explanatory text right now without repeating myself.

I will make every effort not to blog work arguments, or be passive-aggressive in general

My theory is most blog posts spring forth from blog arguments or work frustrations, as I feel this urge to blog work arguments from time to time. If I won’t say it in person, I shouldn’t say it on the blog. And even if I say it in person, some work arguments should be kept in the family.

Every now and then I step out of bounds.

And finally, this stuff just doesn’t matter

Software development is not important in the grand scheme of things. Being a bad software developer in and of itself does not make you morally inferior. To pick on something specifically: software craftsmanship is not a new morality, whereby you are righteous (professional?) if you write clean code and unrighteous (unprofessional?) if you don’t. Depending on the bigger picture, and I place emphasis on the phrase bigger picture, you may be doing serious harm by e.g. overdosing radiation therapy patients via your software, or more likely, putting your company out of business because of your incompetence—but in and of itself, being a bad (worse than average?) software developer isn’t evil.

This stuff just doesn’t matter.

Every post I write, no matter how passionate I may sound, no matter if in truth I get carried away and lose perspective and start believing it, this stuff just doesn’t matter.

Metro and WinRT: Too early to call, but I’m paying close attention

I’ve posted the conclusion below in bullet point form. If you’re a dirty, filthy blog post skimmer, then head on down to the very bottom. I’ll see you there, fellow skimmer.

Microsoft has announced a great number of things at BUILD this week. First among them is the new tablet OS known as Windows 8. It happens to run on top of Windows 7 for now, but it’s clearly a tablet OS.

This is early, but it’s spinning around in my head, and I feel like I’ve got to write this somewhere. Consider this your warning.

I (and the rest of us as .NET developers) need to answer a question for ourselves, and soon:

The Big Question

As an enterprisey .NET developer with a day job doing non-WinRT-related work, is it worth my time to go out of my way to learn WinRT?

The Big Answer

I don’t know.

A Longer, More Rambling Answer

It’s complicated. On one side, Metro is slick and is clearly, obviously the better way to build apps in Windows going forward. On the other side, the 2005 version of me could have said the exact same thing about WPF, and a little before that, WinForms. Actually hey, let’s try doing a little Microsoft Framework Mad Libs and replace “WinRT” with older technologies. Here we go:

Microsoft UI Technologies mad libs

MAD LIBS 2002 edition: WinForms is slick and is clearly, obviously the better way to build apps in Windows going forward! And check out that designer!

MAD LIBS 2005 edition: WPF is slick and is clearly, obviously the better way to build apps in Windows going forward! It’s one of the unmovable, unshakeable, eternal pillars of Longhorn! And check out this cool designer called Sparkle! But don’t worry, graphic designers will do all the designing for us in Sparkle! It’s a new era! There’s also this sweet thing called “Windows Marketplace” where you can hock your apps! What’s that? WinForms? Well, it will still be supported, but you can mentally flush everything you know about WinForms down the drain. Unless you’re unlucky and stuck with a WinForms project, in which case…I guess it’s a good thing you know WinForms already.

MAD LIBS 2007-2008-2009-2010-ish edition: Silverlight is slick and is clearly, obviously the better way to build apps in Windows and the web going forward! WPF? Well, Silverlight uses XAML too! It’s like WPF, only less of it. Check out NetFlix! Oh, that isn’t really an application. Well, just trust me, it’s the future.

MAD LIBS 2011 edition: WinRT is slick and is clearly, obviously the better way to build apps in Windows going forward! Check out these sweet free tablets! There’s going to be an app store! Windows Marketplace? What? Oh, no one used that, it shipped with Vista. Don’t worry about it. This new app store is called, wait a minute, yeah. It’s still Windows Marketplace I think. Silverlight? Well, we’re not calling it that, no, but, there seems to be a lot of Silverlight here. But it’s not running on .NET, either the DLR or the CLR. We’re not sure yet*. But what’s clear is, WPF is no longer needed—remember how sluggish it was? Oh, are we not allowed to mention that yet? Ask me about performance next year. Maybe we’ll talk about performance then, if I can bend the messaging such that I am praising how good WinRT is in comparison to WPF. Designer?
*really, I’m not sure yet. WinRT most resembles Silverlight. Check out Rob (of Caliburn.Micro, and you should know what Caliburn.Micro is), he seems to be doing self-directed digging on WinRT and is on fire on the twitter.

Back on track

Ok. What I’m trying to say in the mad libs above is that you can’t trust Microsoft to stick with anything. You just can’t.

Everything sounds great right now, and yes, I do believe Metro is cool and slick and I could theoretically make sweet sweet tablet apps with it. Period. But. Comma.

But, I can’t trust them. The Longhorn demos were really, really good. I don’t remember reading anyone talking bad about WPF at the time. Sorry if I missed out, but I just don’t remember it. We all loved it. And what wasn’t to love? WPF is the future. Right?

Remember the three pillars of Longhorn?

As someone else pointed out on Twitter, remember the Office ribbon we were all going to put in all of our apps? Remember data access strategies? The Oslo hype? OSLO? OSLO!!!!!

Remember (dare I say it) app development in SharePoint? Disclaimer: I still like it as an intranet platform, a collaboration (power user) platform, and like it better than the more expensive/more enterprisey alternatives. Sorry guys.

And let’s focus on viability of the platform, not the viability of the tools

EDIT 2011-09-16: ninja edited this section to make complete sentences and generally wash away up-too-late-at-night-brain flavor.

And allow me to pre-emptively eliminate one common argument, since I’ve seen it crop up a lot in Windows Phone-land. Okay.

The Windows Phone has, by almost all accounts, a relatively good development platform. By mobile platform standards, it’s good. It’s probably* the easiest way to build simple apps for a phone.

With that out of the way, who’s buying Windows Phone apps from the app store today? And who’s paying you to develop a Windows Phone app? The vague, roughly accurate answer is no one.

So let’s not go and try to frame the entire discussion as a developer tool comparison. Tools matter, but a viable platform doesn’t necessarily have to have the best tools, and more importantly, good tools don’t guarantee a viable platform. A perfect case-in-point is WebOS.

I’ve heard good things about WebOS development. WebOS, for those of you not paying attention, is the platform that is now completely, 100% dead and represents a heavy loss of learning investment.

So to say this plainly, even if the tooling story is good, WinRT may already be circling the drain.

If you’re going to jump into WinRT “whole hog”, the time is now

Let me try and focus this long, rambling answer into a focused discussion of cost (learning investment) versus reward.

If you learn WinRT now and it indeed turns out to be the future, you can end up like Josh Smith did with his WPF knowledge. Oops, wrong link, Josh Smith and WPF. Sorry about the confusion, I thought he had refocused on iPhone development there for a second. Must have been someone else.

Anyway, if you bet heavily on a platform, you’ll end up an expert, and hopefully that kind of early and deep expertise translates into more tangible rewards somehow.

As an additional bonus, outside of developing expertise for its own sake or for the sake of raising the value of your time to employers, there may or may not be an early gold rush for WinRT tablet apps. You heard it here first: The WinRT app gold rush.

GOLD RUSH! GET WHILE THE GETTIN'S GOOD!

Now. If you wait, you are potentially missing out on your chance to make $2000 a month writing games for cats. That wasn’t in my original “gold rush” linkblog post, but I think it’s important enough to note that people are spending $2000 a month buying iPad apps for…cats. For cats! FOR CATS.

Time to wrap this up

I still don’t have the answer, but I feel better. If you tl;dr skimmed my entire post, let me summarize it as follows:

  • BUILD announced the Windows tablet developer framework called WinRT. There is a whiff of a hint (though I may be way off, someone confirm this) that WinRT may eventually be the development platform for Windows Phone. Unconfirmed.
  • I am deciding whether to go above and beyond and try and really get into this whole Windows tablet thing. At this time I don’t know.
  • The tablet has a lot of nice features and from all appearances, looks like it will be a success.
    • But so did WPF back in 2005-ish.
  • If I’m going to get into really learning tablet development as a sort of expertise, I should do it now, as there are both “gold rush” benefits and “deep expert” benefits.
    • But if it dies altogether, I will have essentially wasted any effort learning it.
  • Let’s talk about how this can end up:
    • Worst case: WinRT limps along for a few years and I am never able to a) use it on a work project, b) create a successful app with it. Hundreds (or maybe thousands) of hours are wasted learning WinRT minutia.
    • Best case: I elevate myself above commodity .NET developer. There is an almost unlimited best case. Bill Gates shows up on my doorstep to personally deliver a bag of money (though it’s certainly not all about money).
    • More reasonable best case: I have a lot of fun building tablet apps, get paid, and only enhance my .NET/Microsoft-guy skillset in the process.
    • Worst case 2: I blog about “The Decision” deciding to go “whole hog”, then get lazy and do nothing. See you next year. Currently at laziness DEFCON 4. Or laziness threat level orange. This means you’re going to have to go through the full body scanner to detect hidden laziness about your person whenever you’re at the airport now. I’m already on “the list” for known potential threats of laziness.

And let me be clear, I’m not choosing whether to read a blog post here and there, maybe watch a screencast, buy a book and not read it (most of my tech book reviews are as follows: Minty smell! Excellent binding. Looks and feels heavy.) I’m not deciding whether to dip my toe in to test the water, I’m choosing whether to jetpack cannonball jump off a cliff/dedicate most of my available “non-work dev time” to this. So it’s one of those “The Decision” moments, albeit no one cares about my decision. You get the idea.

And it’s only been a few days since the announcement. I don’t have to make the decision today. I can let the marketing funk that is BUILD (that has permeated every nook and cranny of the .NET community) wash out of my stinky, marketing-funk-permeated clothes. Maybe give them a double-wash, hang ‘em up and let ‘em flap in the breeze for a while. But maybe, I’ll discover a faint discoloration on the sleeve. Maybe I’ll discover that after the marketing funk has washed away, a metaphorical grape juice stain of opportunity remains.

As a final note, I will only say you can be thankful I wrote this so you can enjoy twitter again. I apologize for the last few days of nonstop #Win8 tweets, and you’re welcome.

Windows 7 Tool Roundup: Small But Explosive, Just Like Dynamite

Having just lost my previous Windows 7 install to what I hope is a freak accident that will never recur, and subsequently having reinstalled Windows 7 from scratch, the list of customizations and programs I install on Windows 7 is a particularly fresh memory.

This is a .NET developer-oriented build and some of the things I do may not make sense to you.

Hopefully one of these tidbits may prove useful to you.

Windows Customizations

  1. Set your keyboard repeat rate to the fastest setting. You’re not your arthritic grandmother, and you can handle the extra speed. I wrote six full paragraphs about this subject in 2007, so if you’re curious as to why you’d make this change, well, I explain keyboard repeat rates in as much detail as anyone else ever has or ever will. I even introduce a keyboard repeat-rate mascot!
  2. Make the same changes to Windows Explorer you’ve made a thousand times before, and will make a thousand times again:
    [screenshot: Windows Explorer folder options]
  3. I’m a little crazy, so I have created local accounts for my ASP.NET app pool and SQL Server service account. I know, it’s a little unhealthy.
    1. Get to Computer Management and from there, create your service accounts.
    2. Now that you’ve created these accounts, they unfortunately show up on your Windows login screen. Clutter! To hide these service accounts from the login screen, follow these instructions. No, I am not bothering putting together a PowerShell script to hide them—tag you’re it. (Okay, fine: there’s a rough sketch after this list to get you started.)
  4. Now for the dumb optional parts I do:
    1. Change Windows to the puke green I’ve demonstrated above, or if you don’t like my (delightful!) shade of puke green, feel free to choose your own shade of puke green. Your shade of puke green is clearly superior, I admit. To do this, hit the Windows key to bring up the Start Menu and type “glass” into the search bar.
    2. Change the Windows login screen. Choose something like this little piece of awesomeness for your login screen. Let the haters hate (and trust me, they will hate, often).
    3. For a little extra class, change your Windows login picture to be your avatar. Do it especially if your avatar is as awe-inspiring as mine. It won’t be, but you can try your best (and fail).
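
That rough sketch I promised, for whoever ends up being “it.” It’s untested, needs an elevated prompt, and 'sqlservice' stands in for whatever you named your service account:

$userList = 'HKLM:\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon\SpecialAccounts\UserList'
if (-not (Test-Path $userList)) { New-Item -Path $userList -Force | Out-Null }  #run elevated
New-ItemProperty -Path $userList -Name 'sqlservice' -PropertyType DWord -Value 0 -Force | Out-Null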

Windows Features to install

To bring up the “Windows Features” dialog, hit the Windows key to bring up the Start Menu and type “windows features” into the search bar.

  • Pretty much everything resembling the letters “I”, “I”, “S”. Everything IIS, just install it. Don’t install FTP. Note that even if you don’t want to install the server, all the management tools and PowerShell cmdlets are installed here too (quick sketch after this list).
  • Telnet client – Telnet is admittedly horribly insecure, and you should use something more secure. But, I need this telnet client every blue moon to test raw TCP connections to SMTP servers or SQL servers. And yes, I know, there’s PuTTY.
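
Quick sketch of what those IIS cmdlets get you, since they’re easy to miss (run from an elevated console):

Import-Module WebAdministration  #available once the IIS management tools are installed
Get-Website | Select-Object Name, State, PhysicalPath  #list the local sites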

Programs to install

Some of these will pass without explanation. E.g. it’s Firefox, you use it for browsing, no further explanation should be needed.

  1. Mozilla Firefox
  2. Google Chrome Beta – along with being an excellent browser, Google Chrome is also now my favorite PDF Reader. That’s right: no more Adobe Acrobat, no more FoxIt, no more PDF reader we all moved to when FoxIt turned into Acrobat. Just associate PDFs with Google Chrome. Now, the problem with associating PDFs with Chrome is that you can’t find that pesky Chrome install!
    1. To find the Chrome .exe file, the key is to understand that Chrome installs itself in your user profile, not in the traditional “Program Files” location. Without further ado, paste this in your Explorer address bar when prompted to browse for an EXE to associate with PDF:
      %LOCALAPPDATA%\Google\Chrome\Application
  3. Sysinternals Suite – I follow Mr. Rogers’ advice and make pretend there’s an installer for this, and manually copy this into my C:\Program Files\ folder. I don’t know what most of these do, but Process Explorer (procexp.exe) is a totally tubular Task Manager replacement. Use it as such. I keep Process Explorer running at all times in my system tray and it lets me know when my computer is slow. That sounds trite, but it’s true. It helps to know that I’m not going crazy and my computer is in fact slow.
  4. Git for Windows – Word got out early that git doesn’t work on Windows. As of 2011-08-18, that’s a lying lie from a liar, who lies, from whom lies spew forth. Lies. Git works great on Windows now, and has a painless installer. Download as instructed below:
    [screenshot: Git for Windows download page]
  5. Paint.NET – honestly, Windows 7’s paint has improved considerably, even to the point where maybe you don’t need to install Paint.NET anymore. But, I’m now a master of Paint.NET and must have it! With it I’ve created the screenshot masterpieces you see above, among other masterpieces such as this timeless masterpiece which is a master work of mastery and a masterpiece. Masterpiece.
  6. Pidgin for IM, assuming you aren’t labeled a corporate security VIOLATOR by running CATEGORY:UNAPPROVED SOFTWARE – this is the only unobtrusive IM client left. If you (like me) can’t help but look at the ads in all 3 places in MSN Messenger, and don’t like Digsby, well, I guess you’ll like Pidgin. Warning: if there’s a problem with your IM connection or with adding friends, blame Pidgin. I’ve had problems. Even with the need for random reinstalls and short jaunts to MSN Messenger to add friends, it’s still worth it to me to use Pidgin for everyday use.
  7. Nothing says “Windows developer” quite like a Ubuntu VM running inside VirtualBox. I will take this opportunity to point out VirtualBox is free for non-commercial use. So far, so good. I want to emphasize that my Ubuntu VM cold boots in 5 seconds or so, and saves or restores a running VM also in about 5 seconds. It’s really, really, really fast, and runs comfortably with 2GB of RAM allocated to it. Disclaimer: I’m running on an SSD and it’s fast. Envy me.
    1. Once you get the VM installed, you must install the VirtualBox utilities, which notably install the flexible, virtual driver that lets you resize your Ubuntu window anytime. Without them, you’ll have a horrible experience and run in a tiny porthole.
    2. Note that anytime you update your Ubuntu install, you will have to reinstall the VirtualBox utilities to again get minimally bearable display drivers. I am not sure I care why.
  8. Skype + headset: If you haven’t been paying attention to Skype recently, it’s both getting bloated and awesome. I’ll just focus on the awesome part today: with Skype, you can make a landline-quality voice call over the internet, plus screen sharing, for free. In case you didn’t get that, I said

    [signbot image]

Upgrading Our ClickOnce App From .NET 3.5 to .NET 4.0

Fixing the “The customHostSpecified attribute is not supported for Windows Forms applications.” error

This one’s for the search engines. Sorry folks, none of my recent posts are readable by humans. Too bad.

Quick summary of what I did to fix the problem:

  1. Changed our MSBuild file ToolsVersion property to 4.0. This changes the behavior of the GetFrameworkSdkPath operation, which tells us where to find the Windows SDK folder (which hosts mage.exe, which performs secret ClickOnce magic). Previously (before changing the ToolsVersion to 4.0) it pointed to the v6 SDK; now it points to the v7 SDK. Quick note to help you understand #2 below: we store this path in a variable called SdkPath.
  2. Changed the MSBuild variable containing the path to mage.exe to point to (note the added text):
    $(SdkPath)bin\NETFX 4.0 Tools\mage.exe
    We no longer just point to the bin\ folder, as bin\ still contains the .NET 3.5 version of mage.exe. The .NET 4.0 version is apparently housed in the “bin\NETFX 4.0 Tools” subfolder.

Thanks to this thread on MSDN forums for the tip. The troubleshooting exhibited in that thread is something of a comedy of errors, but eventually someone posted the correct solution, and for that I thank you.

Steal Ideas From These psake Scripts

A warning

This post in its entirety isn’t readable by humans. I’m sorry. I started by picking out a few psake scripts here and there, figuring, hey, I’ll pick one or two examples and talk about what they’re doing.

The problem with writing a blog post about build scripts is it’s pretty boring. No one idly browsing their feed reader makes it through an entire post without being knocked unconscious. Ooh, that reminds me: if you’re currently operating heavy machinery or piloting a jet plane, for your safety please stop reading this blog post. Thanks.

But. But, even though it’s well known that this kind of stuff is boring to read about, I still want to collect all the knowledge on this earth related to psake and how people are using it. And I’ve done that below (at least as of 2011-08-10).

Unfortunately for you, my dear reader, I’ve made no attempt to process my raw data collection into something readable, what with sentences, paragraphs, code samples and topical grouping. That takes way too long. I’m too lazy for that.

Instead, I’m linkblogging a clump of psake scripts and mentioning what pieces you may want to steal for your own build script.

As a bonus (and because it’s part of what I’m researching), I’ve included a bunch of links to deployment-related blog posts and deployment scripts. These things are gold, and despite their seeming tininess and insignificance, represent hours of sweat and toil.
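
One quick aside before the link dump: if you’ve never actually seen psake, here’s roughly the shape of a minimal default.ps1, so the bullet points below have something to hang on. The task names, paths, and NUnit runner location are all made up:

properties {
  $configuration = 'Release'
  $msbuild = 'C:\windows\Microsoft.NET\Framework\v4.0.30319\msbuild.exe'
}

task default -depends Compile, Test

task Compile {
  #exec turns a non-zero exit code from a command-line tool into a real PowerShell error
  exec { & $msbuild .\MySolution.sln "/p:Configuration=$configuration" }
}

task Test -depends Compile {
  exec { & .\tools\nunit\nunit-console.exe ".\src\MyProject.Tests\bin\$configuration\MyProject.Tests.dll" }
}

#to run it: Import-Module .\tools\psake\psake.psm1; Invoke-psake .\default.ps1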

Don’t Read This Blog Post – Search It

So I don’t expect anyone to, you know, read this post. But, if you’re like me, you’ll find that when it comes time to, say, add a NUnit test runner to your build script, or say, deploy to a remote IIS server, you’ll fire up your handy browser search (CTRL+F) and go looking for a script.

Well, maybe go ahead and read when I tell you to pay attention

A few places where I think a build script has done something novel, I’ll put a small note telling you to pay attention. It’s not meant to be insulting, but a way to un-zombify your brain so that you actually read that bullet point—so that it stands out from the endless sea of text and bullet points. I know, I could take the time to blog an entire post about each one of these points, and maybe I will. But for now, bet on my laziness and assume I won’t, and pay a little extra attention to how these folk put together their build scripts.

It’s like the famous quote from Passenger 57: “You ever play roulette? Always bet on Peter being lazy.” –Wesley Snipes, Passenger 57, word-for-word quote

Now that you’re mentally prepared for the hail of bullets that is to follow (bullet points, that is), have at it.

-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-

JP Boodhoo wrote the first* non-trivial publicly-available psake script, and thus you’ll notice all the other scripts have borrowed bits and pieces from his script (particularly the ruby_style_naming_convention which_is_not_camel_case like_PowerShell_should_be):
*that I remember

  • build script
    • He is the only person who doesn’t rely on Solutions/Project files to compile his project, instead relying on aspnet_compiler.exe. Note, for those of you unaware, if you set the OutDir parameter for MSBuild, it will compile web application projects with surprisingly pleasant results (tiny sketch after this list).
    • He has written his own miniature database migration tool using only PowerShell. Not bad if I do say so myself.
    • He makes clever use of “dir” to lazily find all files he needs to compile (e.g. "dir * -include *.cs -recurse")
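
Here’s that tiny OutDir-plus-web-project sketch. MyWebApp.csproj and the output folder are made up:

$msbuildPath = 'C:\windows\Microsoft.NET\Framework\v4.0.30319\msbuild.exe'
& $msbuildPath MyWebApp.csproj /p:Configuration=Release /p:OutDir=C:\temp\webdrop\
#the deployable site lands under C:\temp\webdrop\_PublishedWebsites\MyWebApp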

Ayende’s scripts:

  • Rhino ESB – default.ps1 and psake-ext.ps1
    • Compiles by running MSBuild on the .sln file
    • Packages with the NuGet.exe command-line tool
    • Zips files using the 7zip (7za.exe) command-line tool
    • Runs XUnit tests via xunit.console.clr4.exe
    • Generates AssemblyInfo.cs (which, if you’re unaware, is where you get your assembly version number from)
    • Pulls the desired version number from Git source control using the git.exe command-line tool
  • RavenDB – default.ps1 and psake-ext.ps1
    • Neat way to check for installed software (prerequisites)—this checks to ensure you have .NET 4.0 installed (see the “Verify40” task)
    • Runs a complex test scenario in the “TestSilverlight” task—it fires up a local Raven server in RAM, runs Silverlight-related unit tests, then kills the Raven server.
    • Packages files from disparate sources—RavenDB shows how it’s done. Hint: it’s not pretty.
    • Zips files using the zip.exe command-line tool (i.e., not the same tool as 7zip)
    • Builds what appears to be an intense NuGet package
    • Uploads static web content to a live environment using S3Uploader.exe
    • Note the simple build instructions found here
  • Texo (his jokingly/admittedly-NIH PowerShell Continuous Integration server)
    • builder.ps1
      • Sends email
      • Tries to get latest on a git branch via raw git.exe commands

DotLess:

  • default.ps1
    • Compiles by running MSBuild on the .csproj files
    • Runs ILMerge
    • Builds a gem (as in, RubyGems gem)
    • Builds a NuGet package

LINQToEPiServer:

  • default.ps1
    • Compiles by running MSBuild on the .sln file
    • Starts the MSDTC service (SQL Server distributed transactions) using net start
    • Does extreme funkiness with NUnit impersonating MSTest…I have no idea why.
    • Modifies all config files with a simple homebrew templating engine (think string.format’s {0} {1} etc.).

CodeCampServer:

  • psake.bat
    • A pretty good psake launcher that does everything you need to run the build script, plus highlights failed builds.
  • default.ps1
    • Compiles by running MSBuild on the .sln file
    • Includes a large number of helper functions. Pay attention to the fact that in psake, you don’t have to use tasks for everything—by all means write first-class functions that accept arguments! Arguments! They’re awesome! Use them!
    • Runs Tarantino (database migration tool)
    • Runs FXCop and something called “SourceMonitor”
    • Runs NUnit both with and without NCover code coverage metrics
    • Zips whole directories
  • nant.build
    • I know this has nothing to do with psake, but there’s a lot of stuff in there. A lot of the command-line call-outs can be converted to your needs.
  • Deployment helper functions nicely packaged into PowerShell module files (psm1)
    • Database.psm1 - Uses .NET’s SMO objects(?) to interact with SQL Server
      • Creates SQL Server user (an Integrated user, not a native SQL user) on the SQL instance and on the SQL database
      • Does something scary-looking that appears to export an entire database, but not the way you’re thinking—not the normal way of exporting a database.
    • Package.psm1 – Uses a COM object called “shell.application” to Zip a directory (rough sketch after this list)
      • Unlike my (and everyone else’s) implementation, this zip function makes use of object piping to receive the list of files. Nice.
    • ScheduledJobs.psm1 – Uses a COM object “schedule.service” to manipulate Windows Scheduled Tasks
      • Creates a new scheduled task.
    • Windows.psm1 – Uses PowerShell’s WMI support to create local (not domain) users and assigns users to groups.
      • Creates a local user on the machine
      • Adds a user to a local group
    • IIS.psm1 – uses the “WebAdministration” IIS cmdlets to manipulate IIS
      • Creates an IIS website object and actually sets the bindings successfully (yessssssssss).
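
Since that “shell.application” zip trick comes up a lot, here’s the rough idea in sketch form. The paths are made up, and it assumes package.zip already exists as an empty/seed zip file:

$shell = New-Object -ComObject shell.application
$zipFolder = $shell.NameSpace('C:\temp\package.zip')  #treat the zip file as a folder
$zipFolder.CopyHere('C:\temp\files-to-zip')
#CopyHere is asynchronous--real scripts sleep or poll until the copy finishes
Start-Sleep -Seconds 5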

Aaron Weiker’s blog series

  • sample psake script from his blog post
    • Compiles by running MSBuild on the .sln file
    • Configures app.configs with environment-specific modifications using XPath (i.e. a lot more like the NAnt/MSBuild helpers, and less hacky than doing string search & replace)
    • Runs RoboCopy
    • One neat thing I haven’t started doing, but desperately need to start doing, is to start throwing exceptions if script/function parameters are not passed in. Pay attention and see lines #1-4 of his psake script to see what I mean by this. I’ve lost hours of my life I will never get back troubleshooting PowerShell scripts over the years only to find that I passed in a parameter called “-name” when I needed to pass in a parameter called “-fullname”. So, if you don’t do this either, start doing it (tiny sketch below).
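
The tiny sketch of what I mean (function and parameter names are made up):

function Deploy-Website($fullname, $environment) {
  if (-not $fullname)    { throw 'Deploy-Website: the -fullname parameter is required' }
  if (-not $environment) { throw 'Deploy-Website: the -environment parameter is required' }
  #...the actual deployment work goes here
}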

Darrel Mozingo’s blog series

  • sample psake script from his blog post
    • Compiles by running MSBuild on the .sln file
    • Runs NCover and NCoverExplorer
    • Includes helper methods that won’t make any sense to you until you actually use PowerShell and are annoyed by the same things that caused him to write those one-line helper methods. Pay attention to the little things he does in his helper methods that you probably think are fluff. Pop quiz: why did he write a create_directory helper method? I’ve experienced the pain and know the answer. If you haven’t, take my and his word for it and at least attempt to figure out why those helper methods exist.
  • Four-part series on deployment with PowerShell (1, 2, 3, 4)
    • Part 2:
      • Modifies web.config via PowerShell’s built-in [xml] object wrapper (but only making a minor edit; rough sketch after this list)
      • Pre-compiles the ASP.NET site
      • Writes a CPS-style (CPS-style-style? I feel better now.) function that maps a network share, yields to the caller, then unmaps when done.
      • Takes a configuration backup of the live ASP.NET site
    • Part 3:
      • Remotely manages IIS via PowerShell remoting (starting & stopping IIS)
    • Part 4:
      • Rewrites the system hosts file
      • Tests current DNS settings (cool!)
      • Loads Internet Explorer to ping the website to force it to compile itself
      • Verifies emails are being sent (so hot!)
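
For the curious, here’s the general shape of that [xml]-wrapper web.config edit from Part 2 above. The file path and the 'Environment' appSetting are made up:

$webConfigPath = 'C:\inetpub\mysite\web.config'
$config = [xml](Get-Content $webConfigPath)
$setting = $config.configuration.appSettings.add | Where-Object { $_.key -eq 'Environment' }
$setting.value = 'Production'
$config.Save($webConfigPath)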

A blog series

  • psake script and run.bat (download demo.zip linked from this site if you want to see the raw psake script)
    • The run.bat sample does something novel—pay attention to how it loads PowerShell as a shell (REPL environment), not as a run-and-exit script. Smooth.
    • IIS administration via a mix of IIS (“WebAdministration”) cmdlets and WMI. Smooth. Creates a website and a new AppPool.

Señor Hanselman apparently wrote a whitepaper about deploying with PowerShell

  • Gets latest from a SVN repository via a .NET SVN library
  • Does heap big remoting work pre-PowerShell 2.0 (i.e., before PowerShell had any built-in remoting support)

Mikael Lundin (litemedia) blogged

I should mention Derick Bailey’s Albacore project for .NET – it’s a collection of Rake (Ruby) tasks that are the equivalent of a lot of what I’ve listed above. And from what I’ve seen, it has some things I haven’t covered above. Here’s a list of things it does, machine-gun-style:

  • csc.exe, docu, FluentMigrator, MSBuild, MSpec, MSTest, NAnt, NChurn, NCover, NDepend, NUnit, NuSpec, Plink, SpecFlow, Sqlcmd, zip/unzip, XBuild, XUnit.

TFS As Your Build/CI Server: Only Positive Takeaways 2 of 2

I’m unmotivated today at work, partly because I’m switching us from MSTest to NUnit. I’ll be happy again once it’s done, but not until then.

With that in mind, I’m ready to give the second half of my “using TFS as a CI server” advice, borne out of my experience on a real team project running TFS as our CI server.

This one’s going to be less positive than my Using TFS as your CI Server part one, and if you’re not in the mood to read, I’ll just summarize:

  • Don’t use MSTest as your unit testing framework, and
  • If forced to use TFS 2010 as your CI server, minimize your exposure to the XAML build script, instead delegating your entire build script to PowerShell or MSBuild or whatever else tickles your fancy. Don’t use TFS 2010 Build XAML, it isn’t worth the effort to set up a real build entirely written in Workflow Activities. It’s probably possible, but not worth the effort.

Switch to NUnit: The MSTest test runner is non-deterministic and will do great harm to your CI experience

We’ve had serious problems getting consistent results out of our MSTest test runs for our two projects. Turning off various features (such as code coverage) has helped some, but not enough. It’s worth your effort to switch to NUnit if you’re serious about doing unit testing. Sorry MSTest, I tried, but the test runner fails way too often.

For nitpickers, you don’t have to switch to NUnit. You could switch to anything.

Switch to NUnit: MSTest leaks memory and cannot support our test runs

This isn’t as important as the failing test run. It is important if it ever happens to you and you have to rework your test runs such that you don’t run out of memory any more. I’ve searched and we’re not the only people running into this problem.

I hate Windows Workflow Foundation, and Windows Workflow Foundation hates me

Ayende ruined me with his JFHCI series of blog posts (I blogged about the topic here). After being enlightened to the fact that code (or if you prefer, script) is better in every way* than XML configuration, I’m ruined on ever using Workflow Foundation for anything. Ever.
*exaggeration

With that in mind, I’m not a fan of the reworked TFS 2010 XAML build system. However, this post is only the positive takeaways, so I shouldn’t get carried away talking about the build system, and instead talk about what you should do when told to set up your build in something called “xaml”.

TFS 2010 Build XAML is a V1 Microsoft Product

Some of you are not going to like this, but: avoid the XAML.

Avoid the XAML build system. It takes a long time to test build scripts, it is painful and the designer is buggy, it has almost no built-in Workflow Activities (e.g. there is no “copy file” activity), it is harder to follow, harder to modify, painful to use with multiple branches sharing the same build XAML. PowerShell’s REPL shortens the feedback loop to something like 10 seconds, and MSBuild and NAnt can be configured such that you get feedback within a few seconds as well. TFS Build’s feedback loop is something like 10+ minutes, depending on how long your entire build takes.

To be clear, the TFS Build feedback loop is as follows:

  • Save, wait 10+ seconds for the save operation to complete.
  • Navigate to the Source Control Window, check in the XAML file in the BuildProcessTemplates folder.
  • Navigate to the Team Explorer and kick off a build manually.
  • Wait until the build completes.
  • Open the build summary for the build you completed.

Takeaway: minimize your XAML exposure

My preferred method of avoiding the XAML build system is to call out to PowerShell immediately for your entire build script. I’m serious—don’t even try to build your entire build script in the XAML designer.

This blog post explains how to call PowerShell from TFS. I’m not giving you the full solution, because working with TFS build is demotivating and I don’t want to spend any more time than is necessary here, but I’ll link to a partial solution.

Here’s a rough idea of what to do:

  • Find a XAML build script for your starting point, delete almost all of it, and add one InvokeProcess activity that calls out to PowerShell.
  • Make sure to pass in necessary arguments like SourcesDirectory, BinariesDirectory, etc. (see the sketch after this list).
  • Put all your compiling, test running, ClickOnce manifest building, packaging, deploying to Dev environment-ing, XML configuration modifications…put all these things in the PowerShell script.
  • Investigate psake if you’re serious about doing your build in PowerShell.
  • If you’re not a PowerShell fan, by all means call out to MSBuild or NAnt using the InvokeProcess activity. Whatever you do, just don’t try and wrangle with the TFS 2010 build XAML.
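
For what it’s worth, here’s the rough shape of the PowerShell script that lone InvokeProcess activity would call. The solution name and the argument list are made up; adjust to taste:

param(
  [string]$sourcesDirectory,
  [string]$binariesDirectory,
  [string]$configuration = 'Release'
)
$msbuildPath = 'C:\windows\Microsoft.NET\Framework\v4.0.30319\msbuild.exe'
#remember the OutDir trailing-slash rules from my earlier OutDir post
& $msbuildPath "$sourcesDirectory\MySolution.sln" "/p:Configuration=$configuration" "/p:OutDir=$binariesDirectory\"
if ($lastExitCode -ne 0) { write-error 'Compile failed'; exit 1 }
#...then test running, packaging, deploying to the Dev environment, config modifications, etc.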

It’s worth the extra effort to get the call-out mechanism working, even if it seems like “this is taking longer than it should.”

Use Arguments for your TFS 2010 Workflow Builds

The one thing I like about the TFS 2010 build system is the concept of workflow arguments, wherein you can change settings “at runtime,” or specifically when queueing up a build. This is particularly good for us if we want to temporarily turn off tests or run a “deploy” build from TFS with certain parameters provided only at runtime. In TeamCity there were a few free-text text boxes that allowed you to type whatever arguments you wanted, but there was no guidance per se. Nothing to tell you “Our build script is looking for precisely three things: a) the NUnit tools directory (though I’ve provided a default); b) whether or not you want to deploy to the Dev environment; c) whether to run tests.” The TFS 2010 Workflow does exactly this in an extensible way. Nice.

You can set up your “call out to PowerShell/MSBuild/NAnt/whatever” activity to pass any of these runtime-provided arguments as you need them.

My framework/platform strategy

I have a few basic strategies for using frameworks or platforms or basically anything to do with computers:

  • If it works, learn to use it well. For example, Windows 7’s new features/hotkeys/start menu search, Google Reader hotkeys, C# syntax, ReSharper, the commercial ORM we’re using. I’ll generally spend the time it takes to a) learn the product, and b) use it as intended.
  • If it doesn’t work, avoid it. For example, Windows Vista’s start menu search—I turned it off completely. The MSTest test runner falls in this category. I am also not a fan of most of the more advanced WPF language features, and don’t use them.

I also react very differently to frameworks I trust and those I don’t trust (i.e. those that “work” and those that “don’t work”):

  • If I’m experiencing a problem with a framework I trust, I’ll read up and try to find the correct solution because I’ll assume I’m at fault. Today this means, if I see NUnit’s test runner throw an OutOfMemoryException, I’ll blame us first.
  • If I’m experiencing a problem with a framework I don’t trust, I’ll write the dirtiest, quickest workaround available because I assume the framework is at fault. I learned this lesson the hard way while working on a “quick” SP Workflow project a few years ago. Today this means, if I see MSTest’s test runner throw an OutOfMemoryException, I’ll blame MSTest and switch us to NUnit.

Something I don’t think I’m saying outright is, these labels of “it works” or “it doesn’t work” affect how I deal with everything I do with software. With TFS as a source control solution, I’m dealing with it as

  1. A product that works great for SVN-style source control. Edit, merge, commit. Works great. Merge even works as of TFS 2010. Try to figure out why you’re having problems.
  2. A product that does not work offline or remotely. Don’t try offline mode, period, and avoid doing heavy TFS work (e.g. moving directories of files around) remotely. Avoid or work around the problem, in other words.
  3. A product that branches, painfully. If you experience problems with branching, work around the problem, potentially by losing source control history. I’m okay with losing file history. A lot of people are not okay with that. Branch less, because it’s less painful than dealing with the problems of having too few branches (and boy howdy do we ever need more branches).

With MSTest, I deal with it as

  1. A unit test syntax and local test runner that works great (if slow). Learn how to use it properly.
  2. An inconsistent CI test runner. Avoid it if possible.

With TFS Build, I deal with it as

  1. A bad language/environment for writing build scripts. Avoid it/escape the XAML as soon as possible.
  2. A reasonably consistent CI server that is painful to navigate. Learn to use it, and make the conscious choice to lose 5-10 minutes every day to navigating TFS menus, and to allow for confusion given the TFS tray app doesn’t work well and most of the build status UI is confusing and inconsistent. Once you’re consciously okay with losing some time navigating through the menus and closing+reloading build status windows, you stop caring about those 5-10 minutes. It works. If you can’t stop caring about everything, you’ll eventually go crazy. Right?

Did you see the pattern there? I have an internal list in my head of which features I can trust and which ones I can’t trust. This list keeps me sane.

Do others maintain their own internal “I can trust this software” list, or am I just crazy?

The Ruby Train Goes Choo-Choo

Microsoft MVPs, all aboard!

It’s like we were all reinventing wheels and barrels in .NET land in the past 5 years, when just on the other side of the island, people were beginning to wonder what is the best material to pave a highway with? … It’s like the Ruby community lives 3x faster than the .NET community, and has been for the past 5 years.


Why is it a pattern that, … people try out Rails, and they just never come back?


I’m very happy with the tooling I have at hand at this point. I can’t really say, right now, that I’m missing anything from my .NET development environment. Quite the contrary, actually; not having to cope with the lockups of VS, the non-sense behavior of TFS, the testing-hostile tools and frameworks, has been a blessing.


ASP.NET MVC is a fine framework. I just don’t feel like it is as productive as it could be.


...the above list hastily compiled off the top of my head.

There are no .NET developers

Other than that, I'd rather not spend time on [learning .NET at home]. It's not that i don't like .NET, but i just don't find it a very interesting space to be in anymore. There's very little innovation going on and the new things that the community and Microsoft are working on most often seem like either new libraries or frameworks to keep doing the same things we've been doing for years, or building things that other development communities already have for a while now. It also doesn't help that a lot of the people who used to be in the ALT.NET community seem to be spending a lot of their spare time learning new languages and platforms instead of pushing for improvement in the .NET community like they used to do.”


What if .NET developers stopped identifying themselves as .NET developers? What if they just considered themselves to be developers? I think we’d see a lot less, “how do we get Microsoft X to catch up with Y?” and a lot more “Let’s just use Y because it already does what we want.”

Seriously, the amount of energy being poured into playing catch up is saddening. Imagine if all of that effort was poured into the tool that’s already better at this.

There are no .NET Developers. There are only developers who have been brainwashed into thinking they can only write code in .NET.

Takeaways

  1. Ruby (Rails) and other non-.NET frameworks are crossing the chasm into the mainstream.
  2. Rails is a better platform. Every former .NET developer who has first tried, then written, about Ruby on Rails has reported it’s both more enjoyable and more productive. Every, single, one. EDIT 2011-07-11: ok, maybe I exaggerated. Ken has something to say as a .NET/Ruby guy who still likes .NET as much as/more than Ruby
  3. I’m sensing (and feeling) Microsoft’s .NET platform is stagnating, especially recently. Aside from multiple positive reports [1, 2] on the NHibernate rewrite, I have nothing to look forward to in .NET. And while I’m here, let me be the first to say: providing a new platform for Windows development excites me in the same way that iPhone-platform development excites me—that is to say, not at all.
  4. You don’t have to self-identify as a .NET developer. Instead, self-identify as a developer whose skillset is in .NET. Learn another platform (which is surprisingly easy) instead of investing extra effort in .NET. I happen to like the WPF project I work on, and my next project will probably be .NET (given my skillset), but there’s no reason I have to assume it will be .NET.

EDIT 2011-07-14: New Takeaways

There have been many, many comments over what I’ve written. My average blog post gets 0 comments. The median for blog comments here is also 0. The 75% quartile for blog comments: also 0. The 90 percentile mark for blog comments—you guessed it—also 0! So it was something of a shock to see people are actually reading this post, and commenting or blogging responses.

And very few of them seem all that happy with my post.

Many of them assume that I am a Ruby zealot, or that this post was about “Ruby vs. .NET”, so I must have written something poorly above. I don’t know. My new takeaways (which supersede the old list) will hopefully give you a better idea of what I meant to say originally.

It’s important to note the context as well. My blog is mostly targeted at people like me, that is to say, .NET developers, and the people who forgot to unsubscribe when I stopped posting about SharePoint. The post should not categorically offend everybody, no matter what background, but from all the feedback I’m getting: it is.

On with the takeaways:

  1. .NET developers (i.e., YOU) should check out Rails. If you are a .NET developer, and you haven’t checked out other frameworks like Ruby on Rails, you should do so. Instead of learning about Silverlight, for example, or whatever v1 Microsoft product comes out of BUILD, or wasting your time studying for MS certifications (seriously?), check out Rails. Rails is a viable way to develop web applications and is worth the time investment. Somewhere down the line, you may even be able to get paid to do Rails work, even in a city like Houston, even outside of the startup scene. And, it is surprisingly easy to learn other platforms.
    PS--these are not strawmen alternative learning investments I’m setting up. There are real people, real .NET developers, who spend their time struggling through WCF books to take the exam, or go “all in” and study up on the newest MS framework, and never quite get caught up.
  2. Drop the “.NET developer” mindset. There is a kind of assumption among .NET developers that we are .NET developers, and will use whatever the .NET framework provides to solve our problems. If we need to develop a web application, for example, we’ll consider ASP.NET WebForms or MVC, or maybe one of the alternate .NET web frameworks. Or SharePoint. We don’t look outside the walls. So, look outside the walls. .NET isn’t as fresh and shiny as it used to appear, and the alternatives are getting quite good (some would say: better, believe it or not). Again, it is surprisingly easy to learn other platforms.