Dev: Visual Studio 2013 Single Page Application with BackBoneJS

As part of my exploration from the last blog post I have been digging into BackBoneJS. Here I take a look at getting started with BackBoneJS in a Microsoft environment. Ultimately, I don’t think this is a very clean solution, so I’ll follow up with another that’s not integrated with ASP.Net’s MVC.

There are a few requirements for this post.

Our goal with this website is to get a basic MVC site up and running using the BackBoneJS framework.
You can learn more about BackBoneJS here:
So, once you’ve got Visual Studio installed and running, and the BackBoneJS template installed, go ahead and create a new ASP.NET Web Application (under Visual C#–Web). It should look like this:
This will give you a new window of options; choose the Single Page Application.
Okay, let that build the solution. If you want to see what it does right off, run it with F5.
This template uses the popular Bootstrap framework (CSS3) to achieve a “responsive” look and feel. Responsive simply means the website will attempt to mold itself to whatever screen size your users browse with, be that a tiny smartphone screen or a big desktop monitor. This concept can save you a lot of development time down the road when clients ask for a version of your site that works on their iPad. Responsive is better, in my opinion, than a separate mobile version of a website. This comic attempts to explain precisely why:

You can learn more about Bootstrap at their website:

We’re using Bootstrap with this template automatically, but I don’t want to use the default Bootstrap theme. It’s unoriginal and sort of lazy to use the default theme. So, I’ll go to a website that offers other themes that work with Bootstrap: and download the “Slate” theme. Save the “bootstrap.css” and “bootstrap.min.css” to your project’s “Content” folder. This will overwrite the defaults that came with the project.

Centering images in the JumboTron

Personally, I’m going for a pretty simple page here. A centered logo at the top, followed by some page content with images. For the “header” section of a web page, Bootstrap delivers JumboTron. In their words, “A lightweight, flexible component that can optionally extend the entire viewport to showcase key content on your site.” You can learn more about the JumboTron on their website:

What JumboTron does not do out of the box is give you a class to center your image. Developers will waste hours trying to hack the CSS, but CSS requires finesse, not muscle. Here’s the code that accomplishes a centered image without much fuss:

<div class="jumbotron">
    <div class="row well well-lg">
        <div class="col-md-6 col-md-offset-3">
            <img src="~/Content/logo_bw_trans.png" alt="header" class="img-responsive text-center" />
        </div>
    </div>
</div>

I found this, like almost all code snippets, on stackoverflow:

The Grid System

Bootstrap uses a popular CSS technique for laying out web pages. In bygone years, this was popularized by the creators of CSS frameworks like the 960 Grid System and Blueprint. From my perspective, these CSS frameworks became popular when UI developers realized the middle-tier devs weren’t going to take the time to learn CSS and would keep using HTML tables to lay out sites. So, they made CSS frameworks to try to help those same devs. Even then, it took several years for frameworks like Bootstrap to make it easier. I believe Twitter’s Bootstrap may have grown up from HTML5Boilerplate, but I don’t know.

The default template starts me off with a 3 section layout, but I only want 2. So, here is what they give us in the template:

<div class="row">
    <div class="col-md-4"></div>
    <div class="col-md-4"></div>
    <div class="col-md-4"></div>
</div>

Without understanding the grid system, you can quickly see there’s some logic to this. The class “col-md-4” seems to follow a naming convention. It does, and it is explained in detail here: If your guess was that they all add up to 12, then you’re right! I want 2 columns, so mine is reduced to this:

<div class="row">
    <div class="col-md-6"></div>
    <div class="col-md-6"></div>
</div>

Now, I want four rows of content with two columns, so I’ll just copy and paste that a few times and fill in the content. Once that’s done I want a section at the bottom with a big button telling my users what to do. As you are dropping content and images onto the page, you might notice that your images don’t come out the size you made them.

So if we look at this piece of code:

<img src="~/Content/dojo-path.png" alt="header" class="img-responsive text-center" />

You can see the class “img-responsive.” This is one of those magic Bootstrap CSS3 classes that makes your images scale for smartphones as well as big desktop screens. While you may be tempted to take this off, I advise you to leave it and let Bootstrap do what it knows best.
At the end of the page I want an email sign-up form so I can keep in touch with my prospective customers. Email sign-up forms are something almost every website in existence uses, so there should be very little coding here. But search through the Bootstrap website and you won’t find one. Luckily there’s another website, and if you do a quick search there on sign-up forms, you’ll see there are a few to choose from. I liked this one:

Well, that’s enough to get your basic functionality so you can wire in some email server. But I’d like to go a bit further.
I already have an account with MailChimp, a popular mailing-list service, so let’s see what it takes to wire up a signup form to a MailChimp auto-responder list. If you have a MailChimp account, you can get the basic code for a signup form, combine it with some of the Bootstrap visual enhancements, and end up with code like this:

<!-- Begin MailChimp Signup Form -->
<div id="mc_embed_signup" class="text-center">
    <form action="http:/url" method="post" id="mc-embedded-subscribe-form" name="mc-embedded-subscribe-form" class="validate" target="_blank" novalidate>
        <input type="email" value="" name="EMAIL" class="span6" id="mce-EMAIL" placeholder="email address" required>
        <!-- real people should not fill this in and expect good things - do not remove this or risk form bot signups-->
        <div style="position: absolute; left: -5000px;"><input type="text" name="b_31c7d2f366bf7abc8b70e0bf3_64a94b06cb" value=""></div>
        <button type="submit" id="mc-embedded-subscribe" class="btn btn-default btn-lg">
            <span class="glyphicon glyphicon-off btn-lg"></span> Subscribe
        </button>
    </form>
</div>
<!--End mc_embed_signup-->

This gives you a decent-looking sign-up form like this:
Which works. And when you hit submit, it opens a new window from MailChimp for the user to confirm their information… which sucks.
What I really want is to use the MailChimp API so I can handle the request from within the application. Since we’re not using WordPress or Drupal, we need to do this with ASP.Net. Unsurprisingly, someone has already done this, and their GitHub project is here:

So, let’s get to it. We’re going to install this into our project using the Package Manager Console [Tools–Library Package Manager–Package Manager Console] and type: Install-Package MailChimp.NET

That should get you a bunch of successful messages. Next, I need my API key from MailChimp. That’s covered here: essentially, it’s: primary dashboard–Account Settings–Extras–API Keys

Okay, you’ve imported the MailChimp API, you have your secret API key, now it’s time to go to your Controller and write your function.
Throw these imports into the top of the Controller:

using MailChimp;
using MailChimp.Lists;
using MailChimp.Helper;
Then add a function:
public void SubscribeEmail()
{
    MailChimpManager mc = new MailChimpManager("YourApiKeyHere-us2");

    // Create the email parameter (placeholder address; wire in the
    // value posted from your signup form)
    EmailParameter email = new EmailParameter()
    {
        Email = "subscriber@example.com"
    };

    EmailParameter results = mc.Subscribe("YourListID", email);
}

But that will wait till next time.

More Holistic Web Architecture

A lot of architecture on the web discusses the problem from a less than holistic perspective.  With this blog I am attempting to start down a path that answers more than just the “web related” interests with its architecture.  So, it’s friendlier towards reporting, security, and operations teams.  A lot of my success comes from taking applications that were purely “developer centric” and teasing out messy bits to work more transparently for the ops teams and business leaders.

For this, the only real constraints I had were: ASP.Net, RESTful web service layer, and a three data center (global clients) web farm model.

It can be roughly described from the top-down as follows:


Use NGINX (a lightweight web server) as a reverse proxy to handle routing to three global web farms by IP address location.  Additional research has raised the potential for inserting more thorough DDoS detection at this layer.  Further research raises the potential for serving all static content from this level, potentially combining Varnish with NGINX, to reduce the number of hops for the user to get to the images and HTML for the site.
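As a sketch of that routing tier, here is roughly what the NGINX front end might look like. The hostnames, IP addresses, and region-to-farm mapping are all illustrative, and the country lookup assumes the ngx_http_geoip_module is compiled in:

```nginx
# Reverse proxy fronting three regional web farms, serving static
# content itself so those requests never cross to a farm.
http {
    geoip_country /usr/share/GeoIP/GeoIP.dat;

    # Map the client's country code to a regional farm (illustrative).
    map $geoip_country_code $backend {
        default        us_farm;
        ~^(DE|FR|GB)$  eu_farm;
        ~^(JP|CN|SG)$  ap_farm;
    }

    upstream us_farm { server 10.1.0.10; server 10.1.0.11; }
    upstream eu_farm { server 10.2.0.10; }
    upstream ap_farm { server 10.3.0.10; }

    server {
        listen 80;

        # Static assets short-circuit here instead of hitting a farm.
        location /Content/ { root /var/www/static; expires 7d; }

        location / { proxy_pass http://$backend; }
    }
}
```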


Maintain a User Interface layer using ASP.Net MVC4 combined with the BackboneJS framework along with UnderscoreJS and jQuery.  There are further questions around whether an SPA (Single Page Application, like Hulu has) is better for your content or not.  Regardless, SPA has a lot of fans these days.  The frameworks seem to boil down to BackboneJS vs. KnockoutJS.  Further research revealed some opinion-based leanings toward BackboneJS: it has a larger community of developers (unverified) and has built-in hooks for a RESTful web service layer.  There is also the question of the best library or popular method to sanitize requests against XSS (cross-site scripting) and SQLi (SQL injection).  I find some .Net/Java developers ignore the security layer because they feel safe within their frameworks.  However, I observe modern developers shifting towards faster and more responsive JavaScript libraries, and so I want to keep an eye on this.  The frameworks only protect you if you use their compilers.
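On the sanitization question, the core UI-side defense against XSS is output encoding. Here is a minimal hand-rolled escaper just to illustrate the idea; in practice UnderscoreJS already ships `_.escape`, and a vetted library is the better choice:

```javascript
// Minimal HTML output-encoder to blunt reflected XSS. Hand-rolled only
// to show the mechanism; prefer a vetted helper like _.escape.
function escapeHtml(untrusted) {
  var map = {
    '&': '&amp;',
    '<': '&lt;',
    '>': '&gt;',
    '"': '&quot;',
    "'": '&#x27;'
  };
  return String(untrusted).replace(/[&<>"']/g, function (ch) {
    return map[ch];
  });
}

// A script tag supplied by a user renders as inert text:
var safe = escapeHtml('<script>alert(1)</script>');
```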


For the caching part, I kept coming across success stories in web farms using Memcached.  Just to keep an eye on MS Azure: at this point there is some potential interest in Windows Azure Caching (Preview).  However, there appears to be a concern, since MS Azure Caching in other forms has been cost-prohibitive.  Also, as an MS developer, I’m just as concerned when choosing newer MS technologies as open source ones regarding long-term durability (is it maintained? is there a healthy community?).  Memcached apparently does the job well in web-farm situations, so it seems to be a first choice.
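To make the caching role concrete, here is a toy in-process sketch of the cache-aside pattern a web farm would use against Memcached. The real thing is a networked daemon reached through a client library; the key names and TTLs here are illustrative:

```javascript
// Toy in-process cache illustrating cache-aside; Memcached itself is a
// separate daemon, this only shows the flow the app code follows.
function TtlCache() {
  this.store = {};
}

TtlCache.prototype.set = function (key, value, ttlMs) {
  this.store[key] = { value: value, expires: Date.now() + ttlMs };
};

TtlCache.prototype.get = function (key) {
  var entry = this.store[key];
  if (!entry || Date.now() > entry.expires) {
    delete this.store[key];
    return null;  // miss: the caller falls through to the database
  }
  return entry.value;
};

// Cache-aside: try the cache, fall back to the (stubbed) database,
// then repopulate the cache for the next request.
function getUser(cache, db, id) {
  var user = cache.get('user:' + id);
  if (user === null) {
    user = db[id];                       // expensive call in real life
    cache.set('user:' + id, user, 60000);
  }
  return user;
}
```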


So, the Service layer.  ASP.Net Web API wins over WCF as a lightweight RESTful web service layer that speaks JSON.  Versioning in the services would be handled through the URI model, and operations would be kept minimal to required functionality with the HTTP verbs.  Regarding speed…  I’ve been on both sides of this question: use a service layer for Web-DB communications vs. a regular code layer.  I know, theoretically, the straight code would be faster in a small-app situation.  I know that, despite debating, Web API would be faster than WCF in many situations.  I know that any extensibility with external systems would be optimally built in a services fashion.  So, to me, this is less about writing SOA or not, and more about: if I have a team that already has to code out a services layer, why confuse them with internal/external questions?  I like to simplify things as much as possible up front, because I’ve seen many complex architectures fail out of the gate because the devs don’t get it and ultimately have a pressing deadline that takes priority over the purity of the concept.
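The URI versioning model can be sketched in a few lines: the version lives in the path, so old clients keep calling /api/v1/… while new clients move forward. The route shape here is illustrative:

```javascript
// URI-model versioning sketch: pull the version number out of the path
// so requests can be dispatched to the matching handler generation.
function parseApiRoute(path) {
  var match = /^\/api\/v(\d+)\/(.+)$/.exec(path);
  if (!match) {
    return null; // not a versioned API route
  }
  return { version: parseInt(match[1], 10), resource: match[2] };
}
```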

This is where authentication is going to pass through, so we have OAuth 2.0 vs. HMAC.  The traditional way is to do authentication over HTTPS encryption, but that’s only encrypted over the wire and not at the endpoints, which opens the application up to man-in-the-middle attacks.  Research showed that Amazon, at some point, avoided this by not using OAuth and instead used HMAC.  Others did two-legged OAuth.  Regardless, caution needs to be taken here to choose a method that actually works before I start coding.  The thought of implementing an insecure authentication method out of ignorance is, in my mind, a pretty avoidable problem.


The data access code …  In fifteen years I’ve seen a lot of paths taken here.  Some of them were light and painless, but regarded by some architects as distinctly “un-MS.”  Personally, MS doesn’t pay me, so I have no loyalty to their lollipop data-access flavors.  I have seen and used Entity Framework since its inception, and I pretty much find it a great example of an “ivory tower” concept that fails to live up to expectations in the real world.  I don’t need a DAL that knows how to talk to SQL Server, MySQL, Oracle, etc.  I never really have.  Even in huge applications where mainframes were still in production, this would not have helped; someone had already built that layer.  So, at this point I’d prefer a super simple layer with code minimized and tailored to the one database I have in production.  If down the road a merger took place and I ended up with 2 databases, I’d cross that bridge then rather than gimp a solution for things that “may occur.”  So, custom ADO.Net or an ORM or both.

Using ADO.Net to build the communications to a database usually means that SQLi has been defeated at this point, that and ensuring that no user input is used to build any query strings dynamically.  Additionally, at this point we have to consider making the calls to the database using TLS (Transport Layer Security).  I had an additional thought I have not seen implemented but have wondered about.  The idea is that my Services will request data from my database, but how do I know all those requests came from the Services?  What if they were spoofed?  What if some savvy blackhat put a copy of my UI website on a thumb drive using wget for the presentation layer, and that site made a seemingly legit call back to my database?  I don’t know; could be paranoid, but these days…  So, the idea is to use something (HMAC) to make sure those requests are legit and then route the other traffic to a honeypot database where I can monitor requests and try to track the traffic over time to find my little “helper.”

Down to the relational database layer…  Could be SQL Express, could be MariaDB (over MySQL).  Honestly, this doesn’t concern me, because I wouldn’t choose to use many of the “bells and whistles,” and I would choose to treat my database like a dumb trashcan for data that may blow up at any time.  Its only value to me is that it’s cheap and fast, because if we’re successful, we’ll need more of them.  I’ve seen plenty of enterprise solutions use the most “pimped out” MS SQL servers they could have, and they paid handsomely for it up front and down the road.  I prefer to let the programmers solve the hard problems and just use sharding to reduce the stress on a cheaper database.

Which brings me to sharding.  I know sharding scales better than siloing, but I also know that the optimal sharding method requires some pretty insightful choices and a fast code layer to help the data calls get routed and bunched properly.  The example often given is by users alphabetically, but I’m curious if there’s some more optimal way to choose that client sharding other than common sense.  Having studied MySpace and Amazon and others, this seems like a really painful road each company goes through, and it often takes a few tries to get just right.

So, at this point we have a basic architecture, but it’s missing, in my opinion, some very key components: a way to monitor everything, and a way to get Sales/Marketing all those reports without screwing up my database traffic.  Oh, and giving the Security/Audit teams some toys would be nice.


I’ve worked with Ops guys, and I’ve learned they can be your best friends or they can really hate you because you give them nothing to work with.  I like Ops.  So, I want to try out a distributed monitoring tool that has its hooks in everything without compromising anything.  From what I’m reading, and what I’ve experienced, this just isn’t one of those areas that everyone thinks about.  It’s ironic to me how most devs can debate endlessly about OOP or MVC vs. MVVM, but few have an answer to “how do you measure the better-ness of your OOP solution?”  Sometimes they say that’s another team’s responsibility…  Now that’s teamwork.  Anyway, numbers are how we measure, not religious devotion to decoupled systems and high-minded PhD white papers from MS/Oracle.

So, the weak consensus boiled down to a couple paths:

  • Ganglia (for metrics) + Nagios (for alerts)
  • Sensu + Collectd + Graphite + Logstash
  • Splunk

Now, all that really feels like heavy Ops, but not enough security.  It’s good to know when servers are tanking and databases are hung, but I’d sure like to know when a “friendly” is helping me “test” my system by initiating a DDoS attack on Web Farm A or a port scan on one of my service layers.  So, where do we plug in Snort or some other traffic-monitoring security app?

Finally, the reporting.  I don’t know the statistics, but I’m pretty sure a high percentage of the “Data Warehouse” projects I’ve observed from the sidelines failed miserably…  They failed in different ways.  Usually, the original devs were too busy, so they just created reporting straight off the production databases.  That works long enough for them to get a new job, and a couple of years later business users start complaining about load times when they fire off a historical report against the database.  Hey, how are they supposed to know?  It was fine when Scott wrote it two years ago…  No, no one has cleaned out the history or log files or rebuilt indexes or whatever…  So eventually some BI company hears the complaints and sells them a big DW package which has more knobs than a space station.  Oh, you wanted consulting?  That’s cost-prohibitive, but we can teach your dev for 2 hours and they’ll have it…  Oh, your good devs don’t have time/interest in DW?  Just give me your worst, laziest, most checked-out dev…  Okay, long story short, but that’s what I run into when it comes to the sad, sad land of reporting.

Which is even sadder, because REPORTS are for EXECUTIVES much of the time.  This is precisely how IT departments get judged and perceived by their corporation’s executive sales and marketing leaders.  Okay, so here’s my new thought on solving this mostly unseen problem in IT.


You have a standalone SQL Enterprise Edition database just for reporting.  You set up a Quartz scheduler app to pull data every 2/4/6/24 hours from the prod databases and transform it into quantitatively friendly tables for easy reporting.  Then you spend some cash and get Telerik Reporting with the responsive design, so it works for mobile, loaded up on a server and dishing those reports out.  I’m pretty sure this would take less time, despite costs, and satisfy more executives (who don’t want to come to the office to view a report).  And really, outside of the data transformations, you could feasibly hand a Telerik solution to a B player on your team and it would still look like “magic rocket ships” to the leadership teams.  But the data pulls…  have to be fast.  The new guy shouldn’t be handed Entity Framework with a blog post on how to write LINQ and put in a corner.  This almost always results in high load times and absolutely unforgivable LINQ-generated SQL.  I know, it’s not LINQ’s fault it’s smarter than the average dev, but that’s the world we’re in.
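The transform step in that scheduled pull, collapsing raw production rows into report-friendly tables, can be sketched like this; the field names are illustrative:

```javascript
// Sketch of the transform step: collapse raw order rows into a per-day
// summary table that reports can query cheaply, instead of making the
// reporting tool scan production-shaped data.
function summarizeOrders(rows) {
  var byDay = {};
  rows.forEach(function (row) {
    var day = row.orderedAt.slice(0, 10);        // 'YYYY-MM-DD'
    if (!byDay[day]) {
      byDay[day] = { day: day, orders: 0, revenue: 0 };
    }
    byDay[day].orders += 1;
    byDay[day].revenue += row.total;
  });
  return Object.keys(byDay).sort().map(function (day) {
    return byDay[day];
  });
}
```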


This is a really fun thought experiment for me, so I’m going to continue with posts that begin building out each part, to expose incorrect assumptions and show metrics where I can.

© Copyright Duke Hall - Designed by Pexeto