Setup: VirtualBox 4.3.6, Ubuntu 13.10 CLI, Node.js, Forever.js

In my last post I covered a popular implementation of BackboneJS in Visual Studio.

I was not happy with it, primarily because mixing a JavaScript client-side MV* solution with an ASP.Net server-side MVC solution feels clunky.
So, I’ve set out to do a clean Backbone setup. Which led me to the desire for a clean Web Server, which led me to NginX, and then to Node.
So, this post is a walkthrough of setting up a Node Web Server in Virtual Box.

The Steps:

  1. Install Virtual Box
  2. Create a Virtual Machine with Ubuntu 13.10 64bit CLI iso
  3. Install Guest Additions for your VBox version
  4. Install Node
  5. Install Express
  6. Install Forever

For brevity, I will skip the Virtual Box and Guest Additions steps as I’ve already covered this before.
The ISO for Ubuntu 13.10 Server is here:
For this walkthrough, ensure you get the Server version: 64-bit PC (AMD64) server install image

In the settings for the VM, you can do what you like or use what I did:

  • RAM: 4gb
  • Select “create a virtual hard drive now”
  • choose VDI
  • Use dynamically allocated
  • Give it 20GB
  • Go to “settings”
  • Select General –> Advanced
  • Enable shared clipboard and drag ‘n’ drop
  • Uncheck the mini toolbar
  • Go to storage
  • Click the empty disk under controller: IDE
  • Click the little disk icon with a down arrow and select your Ubuntu server iso.
  • Networking:
  • Leave as NAT unless you know how to set up a bridged connection on your computer

ERROR ALERT: If you get the error: VT-x is disabled in the BIOS.

  • Reboot your machine into BIOS mode.
  • Find the virtualization setting
  • Enable it (sometimes there are two)
  • Save and Exit
  • Error should be fixed

Start the server and it will boot into the Ubuntu install screens.

ERROR ALERT: If you get the error: This kernel requires an x86-64 CPU, but only detected an i686 CPU.
FIX: Close the machine, open Settings, and change your OS type to the Ubuntu/Linux 64-bit version.

Set up your username and password and jot them down for later.

When you get to the Software Selection screen, choose LAMP or SSH.
Azure Note: When you set up Ubuntu on a VM in Azure, this choice is made for you; SSH, I believe.

Once complete, a command line interface will come back and ask you for the server login.
Enter your username and password

There are many, many ways to install Node on Unix. If you’re searching the intertubes, you will find a few different scenarios:

  • Installing from source on git. I don’t suggest this route
  • Installing through apt-get; this is a nice and easy route
  • Installing through NVM; this is also nice and easy
  • Installing straight from your Ubuntu; also nice and easy

There’s no “right way,” so I will cover installing from Git and from apt-get.
If you watch Node training videos, Joseph LeBlanc uses NVM so he can quickly switch out which version of Node he is using.

Here is how to install it from GitHub…
First, install dependencies:

  sudo apt-get install g++ curl libssl-dev apache2-utils
  sudo apt-get install git-core
  sudo apt-get install make

Then clone the Node repository and build it:

  git clone git://
  cd node
  ./configure
  make
  sudo make install

Here is how to install it from apt-get:

sudo apt-get install nodejs

Now make a directory for your first script and open a new file:

mkdir node
cd node/
sudo nano hello_world.js

var http = require('http');

http.createServer(function (req, res) {
  res.writeHead(200, {'Content-Type': 'text/plain'});
  res.end('Hello Node\n');
}).listen(8080);

console.log('Server running at http://localhost:8080');

What you’ve done here is very rudimentary, but core.
You’ve brought in a core Node module called HTTP and assigned it to a local variable.
Then you used that local variable to access the “createServer” function to start a basic HTTP server.
Then you configured the “res” (response) object of that server to write out an HTTP 200 with a content type of plain text (not HTML).
Then you told it to “listen” on port 8080 of your localhost.
And finally you write to the console where it should be running.

Save and exit.
Now use the “node” command to run your script.

sudo node hello_world.js

At this point you can open a browser to http://localhost:8080 and see your message (with the default NAT networking, you’ll need a port-forwarding rule in the VM’s settings to reach it from your host’s browser).

This brings you to the first challenge of Node: How do you make it run as a service?
There are several ways; I will cover the Forever solution.


sudo npm install forever -g

This will use the Node Package Manager to install forever (-g makes it a global installation, so it’s usable at the command line).

Once complete:

sudo forever start hello_world.js

Now your Node server is running as a service, and if it dies, Forever will restart it. You can check what’s running with “forever list” and stop it with “forever stop hello_world.js”.

Dev: Visual Studio 2013 Single Page Application with BackboneJS

As part of my exploration from the last blog post I have been digging into BackboneJS. Here I take a look at getting started with BackboneJS in a Microsoft environment. Ultimately, I don’t think this is a very clean solution, so I’ll follow up with another that’s not integrated with ASP.Net’s MVC.

There are a few requirements for this post.

Our goal with this website is to get a basic MVC website up and running using the BackboneJS framework.
You can learn more about BackBoneJS here:
So, once you’ve got Visual Studio installed and running, and the BackboneJS template installed, go ahead and create a new Visual C# ASP.NET Web Application. It should look like this:
This will give you a new window of options; choose the Single Page Application.
Okay, let that build the solution. If you want to see what it does right off, run it with F5.
This theme uses the popular Bootstrap framework (CSS3) to achieve a “responsive” look and feel. Responsive simply means the website will attempt to mold itself to whatever screen size your users browse the site with, be that a tiny screen on a smartphone or a big screen on a desktop computer. This concept can save you a lot of development time down the road when clients ask for a version of your site that works on their iPad. Responsive is better, in my opinion, than a mobile version of a website. This comic attempts to explain precisely why:

You can learn more about Bootstrap at their website:

We’re using Bootstrap with this template automatically, but I don’t want to use the default Bootstrap theme. It’s unoriginal and sort of lazy to use the default. So, I’ll go to a website that offers other themes that work with Bootstrap: and download the “Slate” theme. Save the “bootstrap.css” and “bootstrap.min.css” to your project’s “Content” folder. This will overwrite the defaults that came with the project.

Centering images in the Jumbotron

Personally, I’m going for a pretty simple page here: a centered logo at the top, followed by some page content with images. For the “header” section of a web page, Bootstrap delivers the Jumbotron. In their words, “A lightweight, flexible component that can optionally extend the entire viewport to showcase key content on your site.” You can learn more about the Jumbotron on their website:

What the Jumbotron does not do out of the box is give you a class to center your image. Developers will waste hours trying to hack the CSS, but CSS requires finesse, not muscle. Here’s the code that accomplishes a centered image without much fuss:

<div class="jumbotron">
  <div class="row well well-lg">
    <div class="col-md-6 col-md-offset-3">
      <img src="~/Content/logo_bw_trans.png" alt="header" class="img-responsive text-center" />
    </div>
  </div>
</div>

I found this, like almost all code snippets, on Stack Overflow:

The Grid System

Bootstrap uses a popular CSS technique for laying out web pages. In bygone years, this was popularized by the creators of CSS frameworks like the 960 Grid System and Blueprint. From my perspective, these CSS frameworks became popular when UI developers realized the middle-tier devs weren’t going to take the time to learn CSS and would keep using HTML tables to lay out sites. So, they made CSS frameworks to try to help those same devs. Even then it took several years for frameworks like Bootstrap to make it easier. I believe Twitter’s Bootstrap may have grown out of HTML5 Boilerplate, but I don’t know.

The default template starts me off with a 3 section layout, but I only want 2. So, here is what they give us in the template:

<div class="row">
  <div class="col-md-4"></div>
  <div class="col-md-4"></div>
  <div class="col-md-4"></div>
</div>

Without understanding the grid system, you can quickly see there’s some logic to this. The class “col-md-4” seems to have a naming convention to it. It does, and it is explained in detail here: If your guess was that they all add up to 12, then you’re right! I want 2 columns, so mine is reduced to this:

<div class="row">
  <div class="col-md-6"></div>
  <div class="col-md-6"></div>
</div>

Now, I want four rows of content with two columns, so I’ll just copy and paste that a few times and fill in the content. Once that’s done I want a section at the bottom with a big button telling my users what to do. As you are dropping content and images onto the page, you might notice that your images don’t come out the size you made them.

So if we look at this piece of code:

<img src="~/Content/dojo-path.png" alt="header" class="img-responsive text-center" />

You can see the class “img-responsive.” This is one of those magic Bootstrap CSS3 classes that make your website scale for smartphones or big screens. While you may be tempted to take this off, I advise you to leave it and let Bootstrap do what it knows best.
At the end of the page I want an email sign-up form so I can keep in touch with my prospective customers. Email sign-up forms are something that almost every website in existence uses, so there should be very little coding here. But if you search through the Bootstrap website, you won’t find one. Luckily there’s another website, and if you do a quick search on sign-up forms, you’ll see there are a few to choose from. I liked this one:

Well, that’s enough to get your basic functionality so you can wire in some email server. But I’d like to go a bit further.
I already have an account with MailChimp, a popular mailing-list service, so let’s see what it takes to wire up a signup form to a MailChimp auto-responder list. If you have a MailChimp account, you can get the basic code for a signup form, combine it with some of the Bootstrap visual enhancements, and end up with code like this:

<!-- Begin MailChimp Signup Form -->
<div id="mc_embed_signup" class="text-center">
  <form action="http:/url" method="post" id="mc-embedded-subscribe-form" name="mc-embedded-subscribe-form" class="validate" target="_blank" novalidate>
    <input type="email" value="" name="EMAIL" class="span6" id="mce-EMAIL" placeholder="email address" required>
    <!-- real people should not fill this in and expect good things - do not remove this or risk form bot signups -->
    <div style="position: absolute; left: -5000px;"><input type="text" name="b_31c7d2f366bf7abc8b70e0bf3_64a94b06cb" value=""></div>
    <button type="submit" id="mc-embedded-subscribe" class="btn btn-default btn-lg">
      <span class="glyphicon glyphicon-off btn-lg"></span> Subscribe
    </button>
  </form>
</div>
<!--End mc_embed_signup-->

This gives you a decent-looking sign-up form like this:
Which works. And when you hit submit, it opens a new window from MailChimp for the user to confirm their information… which sucks.
What I really want is to use the MailChimp API so you can handle the request from within your application. Since we’re not using WordPress or Drupal, we need to do this with ASP.Net. Unsurprisingly, someone has already done this, and their GitHub project is here:

So, let’s get to it. We’re going to install this into our project using the Package Manager Console [Tools–Library Package Manager–Package Manager Console] and type: Install-Package MailChimp.NET

That should get you a bunch of success messages. Next, I need my API key from MailChimp. That’s covered here: essentially, it’s primary dashboard–Account Settings–Extras–API Keys.

Okay, you’ve imported the MailChimp API, you have your secret API key, now it’s time to go to your Controller and write your function.
Throw these imports into the top of the Controller:

using MailChimp;
using MailChimp.Lists;
using MailChimp.Helper;

Then add a function:

public void SubscribeEmail() {
    MailChimpManager mc = new MailChimpManager("YourApiKeyHere-us2");
    // Create the email parameter
    EmailParameter email = new EmailParameter()
    {
        Email = "YourEmailAddressHere"
    };
    EmailParameter results = mc.Subscribe("YourListID", email);
}

But that will wait till next time.

More Holistic Web Architecture

A lot of architecture on the web discusses the problem from a less than holistic perspective.  With this blog I am attempting to start down a path that answers more than just the “web related” interests with its architecture.  So, it’s friendlier towards reporting, security, and operations teams.  A lot of my success comes from taking applications that were purely “developer centric” and teasing out messy bits to work more transparently for the ops teams and business leaders.

For this, the only real constraints I had were: ASP.Net, RESTful web service layer, and a three data center (global clients) web farm model.

It can be roughly described from the top-down as follows:


Use NGINX (a lightweight web server) as a reverse proxy to handle routing to three global web farms by IP address location.  Additional research has raised the potential for inserting more thorough DDoS detection at this layer.  Further research raises the potential for serving all static content from this level, potentially combining Varnish with NGINX, to reduce the number of hops for the user to get to the images and HTML for the site.


Maintain a User Interface layer using ASP.Net MVC4 combined with the BackboneJS framework along with UnderscoreJS and jQuery.  There are further questions around whether an SPA (Single Page Application, like Hulu has) is better for your content or not.  Regardless, SPA has a lot of fans these days.  The frameworks seem to boil down to BackboneJS vs. KnockoutJS.  Further research revealed some opinion-based leanings toward BackboneJS: it has a larger community of developers (unverified) and has built-in hooks for a RESTful web service layer.  There is also a question of what is the best library or popular method to sanitize requests against XSS (cross-site scripting) and SQLi (SQL injection).  I find some .Net/Java developers ignore the security layer because they feel safe within their frameworks.  However, I observe modern developers shifting toward faster and more responsive JavaScript libraries, and so I want to keep an eye on this.  The frameworks only protect you if you use their compilers.
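
On the XSS question, the core defense is escaping untrusted input before it reaches the page. Here is a minimal sketch in JavaScript, similar in spirit to Underscore’s _.escape (an illustration, not a complete XSS defense; in practice, lean on the framework’s own escaping):

```javascript
// Minimal HTML-escaping helper: the ampersand must be escaped first,
// or the entities added afterward would themselves get double-escaped.
function escapeHtml(input) {
  return String(input)
    .replace(/&/g, '&amp;')
    .replace(/</g, '&lt;')
    .replace(/>/g, '&gt;')
    .replace(/"/g, '&quot;')
    .replace(/'/g, '&#x27;');
}

console.log(escapeHtml('<script>alert("xss")</script>'));
```

Underscore templates apply exactly this kind of escaping for you when you interpolate with the escaping `<%- %>` delimiter instead of `<%= %>`.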


For the caching part I kept coming across success stories of web farms using Memcached.  Just to keep an eye on MS Azure: at this point, there is some potential interest in Windows Azure Caching (Preview).  However, there appears to be a concern, since MS Azure Caching in other forms has been cost-prohibitive.  Also, as an MS developer, I’m just as concerned when choosing newer MS technologies as open-source ones regarding long-term durability (is it maintained? is there a healthy community?).  Memcached apparently does the job well in web farm situations, so it seems to be the first choice.
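
The way Memcached usually gets used here is the cache-aside pattern: check the cache, fall back to the database on a miss, then populate the cache for next time. A minimal sketch in plain JavaScript, with an in-memory object standing in for a real Memcached client (all names are illustrative):

```javascript
// Cache-aside: the cache sits beside the database, not in front of it.
// 'cache' is an in-memory stand-in for a real Memcached client.
var cache = {};

function getUser(id, dbLookup) {
  var key = 'user:' + id;
  if (cache[key] !== undefined) return cache[key]; // cache hit
  var value = dbLookup(id);                        // cache miss: hit the DB
  cache[key] = value;                              // populate for next time
  return value;
}

var dbCalls = 0;
function fakeDb(id) { dbCalls++; return { id: id, name: 'User ' + id }; }

getUser(1, fakeDb);
getUser(1, fakeDb);
console.log(dbCalls); // 1 - the second call was served from cache
```

A real deployment would also give entries an expiry so a web farm never serves stale data forever.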


So, the service layer.  ASP.Net Web API wins over WCF as a lightweight RESTful web service that speaks JSON.  Versioning in the services would be handled through the URI model, and operations would be kept minimal to required functionality with the HTTP verbs.  Regarding speed…  I’ve been on both sides of this question: use a service layer for web-to-DB communications vs. a regular code layer.  I know theoretically the straight code would be faster in a small-app situation.  I know, despite the debating, that Web API would be faster than WCF in many situations.  I know that any extensibility with external systems would optimally be built in a services fashion.  So, to me, this is less about writing SOA or not, and more about: if I have a team that already has to code out a services layer, why confuse them with internal/external questions?  I like to simplify things as much as possible up front, because I’ve seen many complex architectures fail out of the gate because the devs don’t get it and ultimately have a pressing deadline that takes priority over the purity of the concept.
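
The URI-versioning model is framework-agnostic: the version lives in the path, so /v1/users and /v2/users can evolve independently. A tiny sketch of the idea (in JavaScript for brevity; the route shape is hypothetical, not Web API’s actual routing):

```javascript
// Split a path like '/v1/users' into its version and resource segments.
function route(pathname) {
  var parts = pathname.split('/').filter(Boolean); // e.g. ['v1', 'users']
  return { version: parts[0], resource: parts[1] };
}

console.log(route('/v1/users')); // { version: 'v1', resource: 'users' }
```

The payoff is that a breaking change ships as /v2/ while existing clients keep calling /v1/ untouched.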

This is where authentication is going to pass through, so we have OAuth 2.0 vs. HMAC.  The traditional way is to do authentication over HTTPS encryption, but that’s only encrypted over the wire and not at the endpoints, which opens the application up to man-in-the-middle attacks.  Research showed that Amazon, at some point, avoided this by not using OAuth and instead using HMAC.  Others did two-legged OAuth.  Regardless, caution needs to be taken here to choose a method that actually works before I start coding.  The thought of implementing an insecure authentication method out of ignorance is, in my mind, a pretty avoidable problem.


The data access code…  In fifteen years I’ve seen a lot of paths taken here.  Some of them were light and painless, but regarded by some architects as distinctly “un-MS.”  Personally, MS doesn’t pay me, so I have no loyalty to their lollipop data access flavors.  I have seen and used Entity Framework since its inception, and I pretty much find it a great example of an “ivory tower” concept that fails to live up to expectations in the real world.  I don’t need a DAL that knows how to talk to SQL Server, MySQL, Oracle, etc.  I never really have.  Even in huge applications where mainframes were still in production this would not have helped; someone had already built that layer.  So, at this point I’d prefer a super-simple layer with code minimized and tailored to the one database I have in production.  If down the road a merger took place and I ended up with two databases, I’d cross that bridge then, rather than gimp a solution for things that “may occur.”  So, custom ADO.Net or an ORM or both.

Using ADO.Net to build the communications to a database usually means that SQLi has been defeated at this point.  That, and ensuring that no user input is used to build any query strings dynamically.  Additionally, at this point we have to consider making the calls to the database using TLS (Transport Layer Security).  I had an additional thought I have not seen implemented but have wondered about.  My services will request data from my database, but how do I know all those requests came from the services?  What if they were spoofed?  What if some savvy blackhat put a copy of my UI website on a thumb drive using wget for the presentation layer, and that site made a seemingly legit call back to my database?  I don’t know; could be paranoid, but these days…  So, the idea is to use something (HMAC) to make sure those requests are legit, and then to route the other traffic to a honeypot database where I can monitor requests and try to track the traffic over time to find my little “helper.”

Down to the relational database layer…  Could be SQL Express, could be MariaDB (over MySQL).  Honestly, this doesn’t concern me, because I wouldn’t choose to use many of the “bells and whistles” and I would choose to treat my database like a dumb trashcan for data that may blow up at any time.  Its only value to me is that it’s cheap and fast, because if we’re successful, we’ll need more of them.  I’ve seen plenty of enterprise solutions use the most “pimped out” MS SQL servers they could buy, and they paid handsomely for it up front and down the road.  I prefer to let the programmers solve the hard problems and just use sharding to reduce the stress on a cheaper database.

Which brings me to sharding.  I know sharding scales better than siloing, but I also know that the optimal sharding method requires some pretty insightful choices and a fast code layer to help the data calls get routed and bunched properly.  The example often given is splitting users alphabetically, but I’m curious if there’s some more optimal way to choose that client sharding other than common sense.  Having studied MySpace and Amazon and others, this seems like a really painful road each company goes through, and it often takes a few tries to get just right.

So, at this point we have a basic architecture, but it’s missing, in my opinion, some very key components: a way to monitor everything, and a way to get Sales/Marketing all those reports without screwing up my database traffic.  Oh, and giving the Security/Audit teams some toys would be nice.


I’ve worked with ops guys, and I’ve learned they can be your best friends or they can really hate you because you give them nothing to work with.  I like ops.  So, I want to try out a distributed monitoring tool that has its hooks in everything without compromising anything.  From what I’m reading, and what I’ve experienced, this just isn’t one of those areas that everyone thinks about.  It’s ironic to me how most devs can debate endlessly about OOP or MVC vs. MVVM, but few have an answer to “how do you measure the better-ness of your OOP solution?”  Sometimes they say that’s another team’s responsibility…  Now that’s teamwork.  Anyway, numbers are how we measure, not religious devotion to decoupled systems and high-minded PhD white papers from MS/Oracle.

So, the weak consensus boiled down to a couple paths:

  • Ganglia (for metrics) + Nagios (for alerts)
  • Sensu + Collectd + Graphite + Logstash
  • Splunk

Now, all that really feels like heavy ops, but not enough security.  It’s good to know when servers are tanking and databases are hung, but I’d sure like to know when a “friendly” is helping me “test” my system by initiating a DDoS attack on web farm A or a port scan on one of my service layers.  So, where do we plug in Snort or some other traffic-monitoring security app?

Finally, the reporting.  I don’t know the statistics, but I’m pretty sure a high percentage of the “Data Warehouse” projects I’ve observed from the sidelines failed miserably…  They failed in different ways.  Usually, the original devs were too busy, so they just created reporting straight off the production databases.  That works long enough for them to get a new job, and a couple of years later business users start complaining about load times when they fire off a historical report against the database.  Hey, how are they supposed to know?  It was fine when Scott wrote it two years ago…  No, no one has cleaned out the history or log files or rebuilt indexes or whatever…  So eventually some BI company hears the complaints and sells them a big DW package which has more knobs than a space station.  Oh, you wanted consulting?  That’s cost-prohibitive, but we can teach your dev for 2 hours and they’ll have it…  Oh, your good devs don’t have time/interest in DW?  Just give me your worst, laziest, most checked-out dev…  Okay, long story short, but that’s what I run into when it comes to the sad, sad land of reporting.

Which is even sadder, because REPORTS are for EXECUTIVES much of the time.  This is precisely how IT departments get judged and perceived by their corporation’s executive sales and marketing leaders.  Okay, so here’s my new thought on solving this mostly unseen problem in IT.


You have a standalone SQL Enterprise Edition database just for reporting.  You set up a Quartz scheduler app to pull data every 2/4/6/24 hours from the prod databases and transform it into quantitatively friendly tables for easy reporting.  Then you spend some cash and get Telerik Reporting with the responsive design, so it works for mobile, loaded up on a server and dishing those reports out.  I’m pretty sure this would take less time, despite the costs, and satisfy more executives (who don’t want to come to the office to view a report).  And really, outside of the data transformations, you could feasibly hand a Telerik solution to a B player on your team and it would still look like “magic rocket ships” to the leadership teams.  But the data pulls…  have to be fast.  The new guy shouldn’t be handed Entity Framework with a blog post on how to write LINQ and put in a corner.  That almost always results in high load times and absolutely unforgivable LINQ-generated SQL.  I know, it’s not LINQ’s fault it’s smarter than the average dev, but that’s the world we’re in.
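
The transform step is the part worth sketching: collapse raw production rows into pre-aggregated, report-friendly rows so historical reports never touch prod. Shown in JavaScript for brevity (field names are hypothetical; the post assumes a Quartz job doing this on a schedule):

```javascript
// Flatten raw order rows into one pre-aggregated revenue row per day,
// so a historical report is a trivial scan instead of a heavy query.
function transformForReporting(orders) {
  var byDay = {};
  orders.forEach(function (o) {
    var day = o.createdAt.slice(0, 10);        // e.g. '2014-01-15'
    byDay[day] = (byDay[day] || 0) + o.total;  // accumulate revenue per day
  });
  return Object.keys(byDay).map(function (day) {
    return { day: day, revenue: byDay[day] };
  });
}

var rows = transformForReporting([
  { createdAt: '2014-01-15T10:00:00Z', total: 20 },
  { createdAt: '2014-01-15T12:00:00Z', total: 5 },
  { createdAt: '2014-01-16T09:00:00Z', total: 7 }
]);
console.log(rows);
```

The scheduler then bulk-loads `rows` into the reporting database; the report itself never joins against production tables.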


This is a really fun thought experiment for me so I’m going to continue posts that begin building out each part to expose incorrect assumptions and show metrics where I can.

Windows Azure Cloud Costs Analysis

If you’ve ever tried to figure out what Cloud Hosting would cost, you may have thought, “wow, that’s complex.”

I’m here to confirm that thought and maybe help out with the knowledge I gained from 4 days of back-and-forth emails with Microsoft’s Billing Support.  They’re very helpful, but I think their pricing model is probably a barrier for some architects.  Regardless of complexity, it turned out to be a very good deal.

Azure Hosting if you have a MSDN License:
If you currently have a Visual Studio Premium with MSDN subscription, that may include $100 monetary credit each month. Therefore, as long as your usage is within $100, you would not be billed. If your usage exceeds $100, you would be charged as per standard rates.

Standard Website
You will be metered for compute hours as you are using reserved units for your website.
How many compute hours are there in a month? 930 compute hours.
Pricing: 930 x $0.06 = $55.80.

WordPress Websites MySQL Database

Since I made a WordPress site, I discovered I got 1 Free 20MB MySQL database license.  Additional MySQL databases will be charged.

How much would I be charged for a second MySQL database?
Standard Rates:

  • 0 to 100 MB: $4.99
  • 100 MB to 1 GB: $9.99
  • 1 GB to 10 GB: $9.99 for the first GB, $3.99 for each additional GB
  • 10 GB to 50 GB: $45.96 for the first 10 GB, $1.99 for each additional GB
  • 50 GB to 150 GB: $125.88 for the first 50 GB, $0.99 for each additional GB
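
To make the tiered math concrete, here is the table above as a small function (rates exactly as quoted; a sketch of the arithmetic, not Microsoft’s actual billing logic):

```javascript
// Monthly cost for one additional MySQL database of a given size,
// using the quoted tiers: a base price for the tier floor plus a
// per-GB rate for the overage.
function mysqlDbMonthlyCost(sizeGB) {
  if (sizeGB <= 0.1) return 4.99;
  if (sizeGB <= 1)   return 9.99;
  if (sizeGB <= 10)  return 9.99 + (sizeGB - 1) * 3.99;
  if (sizeGB <= 50)  return 45.96 + (sizeGB - 10) * 1.99;
  return 125.88 + (sizeGB - 50) * 0.99; // quoted up to 150 GB
}

console.log(mysqlDbMonthlyCost(5).toFixed(2)); // 25.95
```

So a 5 GB database is $9.99 for the first GB plus 4 x $3.99 for the rest, before any proration.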

How much would a Microsoft SQL Server Database Cost (what are standard rates)?
When you set up a SQL Database in Azure you can choose:

  • 0 to 100 MB: 0.5 DU
  • 100 MB to 1 GB: 1 DU
  • 1 GB to 10 GB: 1 DU for the first GB, 0.4 DU for each additional GB
  • 10 GB to 50 GB: 4.6 DU for the first 10 GB, 0.2 DU for each additional GB
  • 50 GB to 150 GB: 12.6 DU for the first 50 GB, 0.1 DU for each additional GB

These rates are prorated.

Why does a “Windows Azure Website” show up under “Cloud Services”?
For billing purposes only, Azure Web Sites (Standard) uses the Windows Azure Compute Hours meter (also used by Cloud Services). This is why the website shows up as “Cloud Services” on the invoice/detailed usage report.

If I have 3 WordPress websites, how can I minimize the costs?
By default, if you create a Standard Website, there is a reserved unit created for that sub-region (ex: east-us).
Any additional websites that you create in the same sub-region, would be automatically linked to this reserved unit.
We will be billing only for the reserved unit and not for the number of websites.
Therefore, to save cost, you can create all three websites under the same sub-region.
You would be billed only for one reserved unit, i.e. 30 hours per day.

Well, hope that helps anyone out.

Dev Setup: Windows 7, IIS 7.5, SQL 2012, Visual Studio 2012, TFS

This entry attempts to cover, from start to finish, how one developer might setup a new development machine with Windows 7, IIS 7.5, SQL 2012, VS 2012, and Team Foundation Cloud Services.  I am publishing this because I found many snags along the way with no good answers to be found on the internet in a unified place.  SQL and .Net answers rarely seem to meet well.  Best practices often get chucked due to frustration.

Environment Assumptions:

Window 7

IIS 7.5

SQL 2012

Visual Studio 2012

Source Control: TFS Cloud Service


Install the Operating System first.

Do all the required windows updates.


Installing IIS

Control Panel -> Programs -> Programs & Features: Turn Windows Features On or Off

Expand Internet Information Services

Match this image

HRmonise3 Setup - IIS Install


Installing URL Rewrite (click the link and install)

(x86 – 32bit)


Do all the required windows updates.


Installing SQL 2012

Do not install Analysis Services or Reporting Services (unless you really work on a system that uses them…)

Choose Mixed Mode with a “sa” account password

Change the default location where the databases/logs are stored (probably to your D drive)

Change the SQL Server service startup type to “Automatic”

After install, do all the required windows updates.

Restore a backup of SuperAwesomeCompanyWebsite database



Installing the Development Environment

Install Visual Studio 2012

Install Visual Studio Team Foundation Tools

Install SQL Server Data Tools

Do all the required windows updates.


CONFIGURING SuperAwesomeCompanyWebsite: Team Foundation Server (Cloud Service assumed to be setup)

In Visual Studio 2012

Open Team Foundations Browser

Connect to\DefaultCollection

Download the project to your computer (suggest a folder like “D:\TFS\SuperAwesomeCompanyWebsite\”)


CONFIGURING SuperAwesomeCompanyWebsite: IIS

Quick Explanation of IIS 7.5 and SuperAwesomeCompanyWebsite

Hierarchy in IIS = Website -> Application (one to many) -> Virtual Directory (one to many)

SuperAwesomeCompanyWebsite only has 1 application, and 1 virtual directory


Open IIS

Go to Default Website

Edit Bindings (right column): Add HTTP/HTTPS; remove all others (unless you really need that stuff…)

Right Click on Default Website -> Add an Application

Use the built in “DefaultAppPool”

Match the images provided:

HRmonise3 Setup - IIS HRmonise3 Basic Settings


HRmonise3 Setup - IIS HRmonise3 Advanced Settings

Click on SuperAwesomeCompanyWebsite -> Authorization Rules

HRmonise3 Setup - IIS HRmonise3 Authorization Rules

If you like, you may setup the Connection Strings here as well (will modify your web.config)



Click on Application Pool: DefaultAppPool and examine the Advanced Settings

                Notice its “Identity” is ApplicationPoolIdentity (not Network Service, not Local Service, not custom thingy…)

Quick Explanation of the ApplicationPoolIdentity (new in IIS 7.5) and NetworkService (what you used to see a lot)

                                ApplicationPoolIdentity: In IIS 7.5, the default Identity for an Application Pool is ApplicationPoolIdentity. ApplicationPoolIdentity represents a Windows user account called “IIS APPPOOL\<AppPoolName>”, which is created when the Application Pool is created, where AppPoolName is the name of the Application Pool. The “IIS APPPOOL\<AppPoolName>” user is by default a member of the IIS_IUSRS group. So you need to grant write access to the IIS_IUSRS group


Browse to root folder of your TFS project

Right click on the folder -> properties -> security: Edit: Add

Match the image provided

HRmonise3 Setup - Source Code Folder Permissions


CONFIGURING SuperAwesomeCompanyWebsite: SQL SERVER

In SQL Server SSMS

1. Create SQL Login

Open Security folder (on the same level as the Databases, Server Objects, etc. folders…not the security folder within each individual database)

Right click logins and select “New Login”

In the Login name field, type ‘IISSQL_Account’

Choose SQL Server authentication, set your password

Change Default Database to the Generic database

Click User Mapping

Grant db_datareader, db_datawriter on each database you want.

Leave default schema blank

2. Create SQL User on your SuperAwesomeCompanyWebsite database

Expand your database “SuperAwesomeCompanyWebsite_generic”

Expand: Security -> Users

Right Click Users: Add New User

user name: IISSQL_Account

login name: IISSQL_Account (from step 1)

default schema: leave blank

Membership: db_datareader, db_datawriter

3. Grant permissions to the new Login on your SuperAwesomeCompanyWebsite database

Run this SQL:



CONFIGURING SuperAwesomeCompanyWebsite: ASP.Net 4 Framework

Tell ASP.Net 4 Framework to take priority with Default website

Open Command Prompt with Administrative Rights

Browse to: C:\Windows\Microsoft.NET\Framework64\v4.0.30319

Run: aspnet_regiis.exe -i


Tell ASP.Net Web Service to start automatically

1) Start–> Administrative Tools –> Services

2) right click over the ASP.NET State Service and click “start”

*Additionally, you could set the service to Automatic so that it will start after a reboot.



Final notes:

This document collects as many of the gotchas and bits of annoying crap that aren’t well organized on the web as I could find.

Obviously you’ll run into your own peculiar problems and I’d be happy if you shared them with me (and their resolution).

© Copyright Duke Hall