What Are Accessible Log Entries Worth to You?

It’s just about as bad as it can get, unless you’ve actually done something to make it better.

You know how much hassle you need to go through to find out what happened yesterday at 2am in the reporting module that runs every night to index data and produce nice reports?

Let’s take a look at how you access your logs today, and see what a better and more accessible way of logging would be worth to you.

Log Sources

First of all, it is not enough to look only within the log of this particular reporting module of yours. Your software is probably not just this single, self-contained module. It probably has a couple of other modules that together make up this distributed masterpiece of an architecture that you envisioned.

You need to correlate what happened in the reporting module, with something that happened in the download module – and you probably have lots of other relevant log sources. Do you even know how many, and are you able to get a complete overview of what happened throughout your entire system at 2am last night?

Log sources quickly become a mess because there are so many of them – and some you don’t even control.

When do you need to view log files?

You probably don’t use log files for anything other than troubleshooting – primarily because you have to go through so much hassle just to find the correct file, let alone dig through to the correct time of day.

But couldn’t you use log files to proactively make your software better, spot trends and overall use the data to make decisions when building new stuff or improving what’s already there?

On average, how big are the files your users upload? How many do they upload at a time? Oh, they can only upload a single file in one go – but how many times do they consecutively upload files then?

Quantitative data like this could be drawn from your log files, if only they were easy to access and query. And you get to play a whole different game when you don’t have to guess all the time, but can actually rely on some real insights!

How do you access logs?

You copy the files from the server, of course! How many servers do you have? How big are the log files? Is your connection to the server fast enough to download that 2 gig log file you so desperately need?

The process of using log files is just completely broken. That’s why most log files stay deserted, and wind up being deleted because they take up too much storage.

You also need to know exactly where different services and modules store their logs. Most use rolling log files, so you also need to find the exact file by comparing timestamps.

How do you read log files?

If it’s 2 gigs, can you even open and browse it in a timely manner?

Most of the time you probably scroll through a huge file to see if you can find any interesting stuff buried there.

If your log file complies with a standard, you might be able to use a log parser that allows you to query several files at a time – but not all log files do, and you still have difficulty correlating events across sources. Querying is good for spotting trends.

How do you share your findings, or get help?

Analyzing log files is rarely a combined effort. You often sit by yourself and try to find the needle in the haystack.

But in case you want help from your peers, what do you do? Send a fraction of the log in an e-mail, share it in a Gist?

Uploading the whole thing to source control, and using something like GitHub together with your team is probably one of the best ways to coherently analyze log files – much better than keeping the files to yourself.

Let’s be honest: discussing and analyzing the events in a log file is even more difficult, and you’ve probably only done it a couple of times.

Time to go home, how do you save your work?

It’s late, and you need to go home and continue your investigations. What do you do?

What if you concluded that nothing serious happened, but you found a little noise and a few smells you want to save for later, when a serious issue does occur? Do you just copy the log file to a shared drive or source control? Do you write a little essay about what you found? Maybe you create an issue in your favorite bug tracker?

The process for saving this kind of work for later doesn’t really exist. You make up a new way every time, and trust your instinct to remind you later when it’s relevant.

A new issue occurred, where did the old insight go?

Or even worse, you forgot to save the log file from the download module last time, so you can’t determine if something has changed.

You also didn’t include the complete context of your previous findings, so you can’t see whether this only happens to admin users or all customers are affected.

Conclusion

We’ve seen how much hassle you encounter when using log files. Not only do you waste an enormous amount of time, you also miss out on opportunities to use log entries for something more useful and proactive than troubleshooting.

So how much would more accessible, centralized management of log files be worth to you? Of course you can’t put a price on it, but I bet your life would improve – maybe not on a daily basis, but those sleepless nights and stressful days when customers are constantly calling support, and you get all the blame, are not exactly entertaining!

——-

Sign up for my Product Hacking series

I’m actually working on a project where I set out to solve some of the problems above, and I’m sharing everything in the process. You’ll see how to build a SaaS app from an empty solution to shipping a real product – sign up to receive my Product Hacking series with stories and examples that take you from an empty solution to a shipping product!

The Best Code Documentation ‘Tool’ Ever Made

Code Documentation is dreaded by most programmers, and people even question its value. What good is it to have a separate document that describes what the code does, when you can just look at the code?

Of course, code documentation is about outlining the design decisions and how the implementation fits the problem it tries to solve – not just a one-to-one explanation of the code.

Comments as Code Documentation

A lot of people advocate comments as code documentation, and many tools and IDEs like Visual Studio have even adopted a syntax within comments to generate code documentation pages where you can browse a class and its properties and methods.

Comments as code documentation have a couple of issues, though. First of all, do you ever read them? And what do you expect people to write? Far too often, the comments on a class are just default text like this:

/// <summary>
/// Helper for constructing Facebook Graph API queries.
/// </summary>
public static class FacebookQueryHelper
{
}

To me, the above comment is just unnecessary clutter.

Another problem with comments as code documentation is that you often forget to update them when you change the code. Say you have a method that does X, Y and Z, and you perfectly described what was going on. What if you change the code dramatically, and forget to change the documentation in the comments? The documentation is now wrong, and you stop looking at it altogether.

Opening up a code base for the first time and seeing comments that are out of line with the implementation only degrades your perception of the code base and its quality. Over time, it becomes a parody – there are even long discussions about the best stories found in comments.

The Best Code Documentation ‘Tool’

The best code documentation ‘tool’ is just about as simple as it gets. It doesn’t require any download, install or configuration.

The Best Code Documentation ‘Tool’ is: private methods. It doesn’t get any simpler than that.

All it requires is that you change the way you implement code a little. Begin to embrace private methods by splitting a method into smaller chunks as soon as it gets bigger than, say, 8 lines. I know you can’t have a fixed length for deciding when to split a method into several private methods, but you can find your own criteria and see how it works.

Of course, you shouldn’t go mad with it – aim for names that document themselves, and find your own sweet spot.

Take a look at this method:
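
Here’s a sketch of the kind of method I mean – an authorization filter that digs a Facebook signed request out of the request and sets the principal, all inline (the class and member names are only illustrative):

using System.Security.Principal;
using System.Web;
using System.Web.Mvc;

// Illustrative sketch – names and details are examples only.
public class FacebookAuthorizeAttribute : FilterAttribute, IAuthorizationFilter
{
    public void OnAuthorization(AuthorizationContext filterContext)
    {
        // Everything happens inline: reading the parameter, picking the
        // payload apart and building the principal.
        var raw = filterContext.HttpContext.Request.Params["signed_request"];
        if (string.IsNullOrEmpty(raw))
        {
            filterContext.Result = new HttpUnauthorizedResult();
            return;
        }

        var parts = raw.Split('.');
        var payload = parts.Length > 1 ? parts[1] : parts[0];

        var identity = new GenericIdentity(payload);
        var principal = new GenericPrincipal(identity, new string[0]);
        filterContext.HttpContext.User = principal;
    }
}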

You really need to examine the method to figure out what is going on. It’s all about the ‘how’, and not about the ‘what’. Then take a look at this modified version:
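
And a sketch of the same filter reworked with private methods (again, the names are illustrative):

public class FacebookAuthorizeAttribute : FilterAttribute, IAuthorizationFilter
{
    public void OnAuthorization(AuthorizationContext filterContext)
    {
        // The 'what': get the signed request, then use it to set the principal.
        var signedRequest = GetSignedRequest(filterContext.HttpContext.Request);
        if (signedRequest == null)
        {
            filterContext.Result = new HttpUnauthorizedResult();
            return;
        }

        SetPrincipal(filterContext.HttpContext, signedRequest);
    }

    private static string GetSignedRequest(HttpRequestBase request)
    {
        var raw = request.Params["signed_request"];
        if (string.IsNullOrEmpty(raw))
        {
            return null;
        }

        var parts = raw.Split('.');
        return parts.Length > 1 ? parts[1] : parts[0];
    }

    private static void SetPrincipal(HttpContextBase httpContext, string signedRequest)
    {
        var identity = new GenericIdentity(signedRequest);
        httpContext.User = new GenericPrincipal(identity, new string[0]);
    }
}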

Now it’s very clear that two steps are taken: first get the signed request, then use the signed request to set the principal.

How To Receive E-mails in ASP.NET MVC using MailGun

Mailgun has a fantastic API for doing a lot of stuff with e-mail. Not only is it a breeze to use their dedicated SMTP service to send a lot of e-mails with great deliverability – it is also very powerful for routing e-mails back to your application.

There are a lot of use cases for routing e-mails back to your app; a few include:

  • Reply to messages within your app
  • Send files to your app from any device, without providing native apps for any particular devices
  • Execute commands from e-mail to e.g. run a background task, or whatever makes your users happy

Heck, you can even transfer money using e-mail. Just write $10 in the subject line, send it to your friend and add cash@square.com to the CC recipients list and you’ve just sent money via e-mail.

Mailgun routes

To route back e-mail to your application, you need two things:

  1. Set up an MX record for a domain so that it points to mxa.mailgun.org
  2. Add a route within Mailgun that forwards to your web app

Setting up the MX record is beyond the scope of this blog post. You can take a look at the Mailgun docs and find guides for common hosting providers.

To set up a route, you need to log in to Mailgun, find the Routes tab and create a new route with match criteria that let you filter by recipient, header etc. Read more about routes in the docs.

Receiving messages via HTTP

Whenever an e-mail matches the criteria of the route, Mailgun will issue an HTTP POST request to the URL you specified as part of the forward() action.

To receive the request in a controller/action in ASP.NET MVC, you can use the following code:
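
A minimal sketch of such an action – the form field names (sender, recipient, subject, body-plain) come from the Mailgun docs, while the controller and action names are just examples:

using System.Web.Mvc;

public class MailgunController : Controller
{
    // Mailgun POSTs the parsed message as ordinary form fields, so you can
    // read them straight off Request.Form (or bind them to parameters).
    [HttpPost]
    public ActionResult Receive()
    {
        var sender = Request.Form["sender"];
        var recipient = Request.Form["recipient"];
        var subject = Request.Form["subject"];
        var body = Request.Form["body-plain"];

        // Hand the message off to your application here.

        // Returning 200 tells Mailgun the webhook was handled successfully.
        return new HttpStatusCodeResult(200);
    }
}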

Mailgun listens to the HTTP status code you return:

For Route POSTs, Mailgun listens for the following codes from your server and reacts accordingly:

If Mailgun receives a 200 (Success) code it will determine the webhook POST is successful and not retry.
If Mailgun receives a 406 (Not Acceptable) code, Mailgun will determine the POST is rejected and not retry.
For any other code, Mailgun will retry POSTing according to the schedule below for Webhooks other than the delivery notification.

And will retry for 8 hours:

If your application is unable to process the webhook request but you do not return a 406 error code, Mailgun will retry (other than for delivery notification) during 8 hours at the following intervals before stop trying: 10 minutes, 10 minutes, 15 minutes, 30 minutes, 1 hour, 2 hour and 4 hours.

How To Use the Google Search Suggestions API (AutoComplete) from C#

Google is currently receiving a lot of negative mentions for no longer providing keywords to website owners. Keywords used by visitors are such valuable information when running a website – especially if you rely on that website for income!

I figured it’s about time to dig through the various APIs available that can fill this void and give us (maybe) even more insight. One thing is which keywords users actually use to find your site – what’s even more valuable is the keywords others are using where your site is not part of the results.

Google Search Suggestions API

The Google Search Suggestions API provides related keywords – or rather complementary keywords that extend the phrase – based on the keyword or text you provide.

You’ve seen it on Google.com, in Chrome, Internet Explorer, Firefox, iOS – everywhere, and it looks like this:

So let’s say we’re selling mountain bikes, and we want to dig into keywords being used to find accessories for mountain bikes. We simply use the URL for the Google Search Suggestions API, insert ‘mountain bike’ and open it in a browser: http://www.google.com/complete/search?output=toolbar&q=mountain%20bike&hl=en

They used to provide search volume for every keyword phrase, but that has been removed. I guess they want you to use AdWords!

The implementation of the API client is a very simple class:
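
Something along these lines – a small sketch that parses the XML (toolbar) output from the URL above; the class and method names are illustrative:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Net.Http;
using System.Threading.Tasks;
using System.Xml.Linq;

public class GoogleSearchSuggestionsClient
{
    private static readonly HttpClient Client = new HttpClient();

    // Queries the suggestions endpoint and returns the suggested phrases
    // from the XML response (<CompleteSuggestion><suggestion data="..."/>).
    public async Task<IEnumerable<string>> GetSearchSuggestions(string keyword)
    {
        var url = string.Format(
            "http://www.google.com/complete/search?output=toolbar&q={0}&hl=en",
            Uri.EscapeDataString(keyword));

        var xml = await Client.GetStringAsync(url);

        return XDocument.Parse(xml)
            .Descendants("suggestion")
            .Select(s => (string)s.Attribute("data"))
            .ToList();
    }
}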

Just call the GetSearchSuggestions method, use the async magic as you like and get your results:
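
For example, with the sketch above:

// Inside an async method:
var client = new GoogleSearchSuggestionsClient();
var suggestions = await client.GetSearchSuggestions("mountain bike");

foreach (var suggestion in suggestions)
{
    Console.WriteLine(suggestion);
}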

Google Search Suggestions API results

Entity Framework Migrations “Cheat Sheet”

I’m a big fan of Entity Framework (Code First, aka Magic Unicorn Edition), and in particular I am really beginning to like Code First Migrations, which make it easy not only to migrate your database step by step, but as an extra bonus also to get your database under version control. You don’t have your database version controlled?

Anyway, I mostly do web development, and once an application gets going, the database changes only minimally from release to release. Since I’m not getting any younger and my memory is not as good as it used to be, I often forget some of the (few) commands I need to generate SQL scripts to run against the server.

So I thought I’d compile a little "cheat sheet" with the basic commands necessary to use Entity Framework Migrations.

How to set up a project with Entity Framework Migrations

The first step, of course, is to set up a project to actually use Entity Framework Migrations. I always have a dedicated DataAccess project – just a plain old Class Library created with the default template within Visual Studio.

I totally clean it, deleting the default Class1.cs file, and then install the Entity Framework package via the Package Manager Console:

PM> Install-Package EntityFramework
 Installing 'EntityFramework 5.0.0'.
 You are downloading EntityFramework from Microsoft, the license agreement to which is available at http://go.microsoft.com/fwlink/?LinkId=253898&clcid=0x409. Check the package for additional dependencies, which may come with their own license agreement(s). Your use of the package and dependencies constitutes your acceptance of their license agreements. If you do not accept the license agreement(s), then delete the relevant components from your device.
 Successfully installed 'EntityFramework 5.0.0'.
 Adding 'EntityFramework 5.0.0' to EFPlays.DataAccess.
 Successfully added 'EntityFramework 5.0.0' to EFPlays.DataAccess.

Add some models

I like to divide logical parts of my app into projects, so I also create a dedicated Model class library. To start with an initial model (class) for our first migration, create the following class:

public class User
{
  public int Id { get; set; }
  public string EMail { get; set; }
  public string FirstName { get; set; }
  public string LastName { get; set; }
}

Note: Remember to add a reference to the Model project (from the DataAccess project).

Add a DbContext class

The DbContext class is basically a code representation of your database. It defines a number of DbSet<T> properties that you can access from code (using LINQ), but more importantly it tells Entity Framework how to create the database. This is the holy grail of “Code First Development”.

The initial DbContext is very simple:

public class EFPlaysDbContext : DbContext
{
  public DbSet<User> Users { get; set; }
}

The DbContext class will be the one used elsewhere in the app when we need to query the database. But of course, we don’t want to throw this dependency all around the app, so the ultimate goal of the DataAccess project is to provide all the methods the app needs to query, insert, update and delete from the database.

Enable Migrations

Migrations are actually what this blog post is about, and now we’re getting closer to them.

First of all, we need to enable migrations for the DataAccess project. This is done via the Package Manager Console:

PM> Enable-Migrations
Could not load assembly 'EFPlays.DataAccess'. (If you are using Code First Migrations inside Visual Studio this can happen if the startUp project for your solution does not reference the project that contains your migrations. You can either change the startUp project for your solution or use the -StartUpProjectName parameter.)

This error is because I started by creating an ASP.NET MVC project, and it became the startup project. Since I’m not going to use the DataAccess project on its own, instead of changing the startup project to DataAccess I’ll add a reference from the MVC project to both the DataAccess and Model projects.

PM> Enable-Migrations
Checking if the context targets an existing database...
Code First Migrations enabled for project EFPlays.DataAccess.

This creates a default Configuration class for you. This class lets you override EF Code First settings and behavior, as well as add seed data to your database whenever a migration is run.
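
The generated class looks roughly like this – the Seed contents below are purely illustrative:

using System.Data.Entity.Migrations;

internal sealed class Configuration : DbMigrationsConfiguration<EFPlaysDbContext>
{
    public Configuration()
    {
        AutomaticMigrationsEnabled = false;
    }

    protected override void Seed(EFPlaysDbContext context)
    {
        // Runs after every Update-Database – use AddOrUpdate to keep it idempotent.
        // Example seed data only.
        context.Users.AddOrUpdate(
            u => u.EMail,
            new User { EMail = "admin@example.com", FirstName = "Admin", LastName = "User" });
    }
}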

Adding the first migration

To kick things off, we need to draw a line in the sand. What we’re looking at is migrations, and the first migration contains the entire database schema as it looked when the application was first released and used.

Let’s use the Add-Migration command to create the initial schema:

PM> Add-Migration Initial
Scaffolding migration 'Initial'.
The Designer Code for this migration file includes a snapshot of your current Code First model. This snapshot is used to calculate the changes to your model when you scaffold the next migration. If you make additional changes to your model that you want to include in this migration, then you can re-scaffold it by running 'Add-Migration 201308011947555_Initial' again.

This generates a new file, in this case called 201308011947555_Initial (Timestamp (UTC) followed by the name of the migration) – and it’s really simple:

namespace EFPlays.DataAccess.Migrations
{
    using System;
    using System.Data.Entity.Migrations;

    public partial class Initial : DbMigration
    {
        public override void Up()
        {
            CreateTable(
                "dbo.Users",
                c => new
                    {
                        Id = c.Int(nullable: false, identity: true),
                        EMail = c.String(),
                        FirstName = c.String(),
                        LastName = c.String(),
                    })
                .PrimaryKey(t => t.Id);

        }

        public override void Down()
        {
            DropTable("dbo.Users");
        }
    }
}

A method for deploying the changes (Up) and one for reverting the changes (Down). Within each method, you can also add your own data migration code. Let’s say part of the schema update requires data to be moved from one table to another. You can do this by calling the Sql method and specifying a SQL statement for migrating the data, as in the sketch below.
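
A hypothetical example – the FullName column and the UPDATE statement are made up for illustration:

public override void Up()
{
    AddColumn("dbo.Users", "FullName", c => c.String());

    // Move existing data as part of the same migration.
    Sql("UPDATE dbo.Users SET FullName = FirstName + ' ' + LastName");
}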

Deploying the database to your local SQL server

Before we deploy our little database to a server, let’s make sure it goes where we want it to go and not let Entity Framework decide for us.

First of all, make sure that the startup project is set to the Web project – not the DataAccess project!

Then, add a connection string to the Web.Config file within the Web project:

<?xml version="1.0" encoding="utf-8"?>
<!--
  For more information on how to configure your ASP.NET application, please visit

http://go.microsoft.com/fwlink/?LinkId=301880

  -->
<configuration>
  <appSettings>
    <add key="webpages:Version" value="3.0.0.0" />
    <add key="webpages:Enabled" value="false" />
    <add key="PreserveLoginUrl" value="true" />
    <add key="ClientValidationEnabled" value="true" />
    <add key="UnobtrusiveJavaScriptEnabled" value="true" />
  </appSettings>
	<connectionStrings>
		<add name="EFPlaysDbContext" connectionString="Data Source=.\SQLEXPRESS;Initial Catalog=EFPlays;Integrated Security=True;" providerName="System.Data.SqlClient" />
	</connectionStrings>
  <system.web>
    <compilation debug="true" targetFramework="4.5" />
    <httpRuntime targetFramework="4.5" />
    <pages>
      <namespaces>
        <add namespace="System.Web.Helpers" />
        <add namespace="System.Web.Mvc" />
        <add namespace="System.Web.Mvc.Ajax" />
        <add namespace="System.Web.Mvc.Html" />
        <add namespace="System.Web.Routing" />
        <add namespace="System.Web.WebPages" />
      </namespaces>
    </pages>
  </system.web>
  <system.webServer>
    <validation validateIntegratedModeConfiguration="false" />
  </system.webServer>
</configuration>

Now that we’ve told Entity Framework where the database should be created, let’s kick off the migration. Note that we’re running the commands against the DataAccess project, even though the Web project is set as the startup project. (Make sure DataAccess is the Default project in the Package Manager Console toolbar at the top):

PM> Update-Database
Specify the '-Verbose' flag to view the SQL statements being applied to the target database.
Applying code-based migrations: [201308011947555_Initial].
Applying code-based migration: 201308011947555_Initial.
Running Seed method.

If you open SQL Server Management Studio, you’ll see the EFPlays database containing the one table we defined in our DbContext earlier.

Adding the second migration

We’re now up and running, and the real benefit of Entity Framework Migrations is about to be unleashed. Go ahead and change the User class to include a PhoneNumber property of type string:

public class User
{
	public int Id { get; set; }

	public string EMail { get; set; }

	public string FirstName { get; set; }

	public string LastName { get; set; }

	public string PhoneNumber { get; set; }
}

This is just a small addition to the model, and we want it added to a new migration so let’s create that right away:

PM> Add-Migration 'Add PhoneNumber to User'
Scaffolding migration 'Add PhoneNumber to User'.
The Designer Code for this migration file includes a snapshot of your current Code First model. This snapshot is used to calculate the changes to your model when you scaffold the next migration. If you make additional changes to your model that you want to include in this migration, then you can re-scaffold it by running 'Add-Migration 201308012011085_Add PhoneNumber to User' again.

Notice that it’s perfectly valid to use real descriptions as names – just remember the quotation marks.

Deploying the change to the database

This is as easy as calling the Update-Database command:

PM> Update-Database
Specify the '-Verbose' flag to view the SQL statements being applied to the target database.
Applying code-based migrations: [201308012011085_Add PhoneNumber to User].
Applying code-based migration: 201308012011085_Add PhoneNumber to User.
Running Seed method.

Getting hold of a SQL script

When you need to deploy to production, it shouldn’t be done via the Package Manager Console and a Dev machine. Instead, we need a script we can run against the database.

To get the full script of the entire schema, run this command:

PM> Update-Database -Script -SourceMigration:$InitialDatabase
Applying code-based migrations: [201308011947555_Initial, 201308012011085_Add PhoneNumber to User].
Applying code-based migration: 201308011947555_Initial.
Applying code-based migration: 201308012011085_Add PhoneNumber to User.

You get the script within Visual Studio. Notice that it will create a new table called __MigrationHistory, which Entity Framework uses to keep track of migrations.

While it is useful to get the full script, most of the time you need a script for the changes made since the last deployment. Let’s say that our Initial migration was already created and now we just want to deploy the one where we added the phone number. This can be done by specifying a SourceMigration:

PM> Update-Database -Script -SourceMigration:Initial
Applying code-based migrations: [201308012011085_Add PhoneNumber to User].
Applying code-based migration: 201308012011085_Add PhoneNumber to User.

Entity Framework will in this case generate a script for the changes made after (excluding) the SourceMigration, including all migrations added since. If you want a script for a change in the middle of the chain, you can specify a TargetMigration as well. To try this, let’s first add a new column to our User class – this time we’ll add DateOfBirth:

public class User
{
	public int Id { get; set; }

	public string EMail { get; set; }

	public string FirstName { get; set; }

	public string LastName { get; set; }

	public string PhoneNumber { get; set; }

	public DateTime DateOfBirth { get; set; }
}

And now we want to add this change as a new migration:

PM> Add-Migration 'Add DateOfBirth to User'
Scaffolding migration 'Add DateOfBirth to User'.
The Designer Code for this migration file includes a snapshot of your current Code First model. This snapshot is used to calculate the changes to your model when you scaffold the next migration. If you make additional changes to your model that you want to include in this migration, then you can re-scaffold it by running 'Add-Migration 201308012035580_Add DateOfBirth to User' again.

Back to generating SQL scripts. If we run the previous command now, we’ll get both the ‘Add PhoneNumber to User’ and the ‘Add DateOfBirth to User’ migrations. If we want only the ‘Add PhoneNumber to User’ migration, TargetMigration is what we need:

PM> Update-Database -Script -SourceMigration:Initial -TargetMigration:'Add PhoneNumber to User'
Applying code-based migrations: [201308012011085_Add PhoneNumber to User].
Applying code-based migration: 201308012011085_Add PhoneNumber to User.

Conclusion

The Package Manager Console is your friend. Entity Framework Migrations make it really simple to evolve your database from release to release, and at the same time keep your entire database versioned. The versioning part is completely free; you don’t have to do any extra work. Just make sure you evolve your database in small, incremental chunks and you should be fine.

Twitter Streaming API + SignalR + Google Maps = Powerful Stuff!

I’ve been playing around with a variety of APIs every once in a while. It is great fun to hack around with the enormous amounts of real data that are out there, and it’s always easy to get crazy ideas with it. Whenever you build something, the lack of data often makes it look cheap, unfinished and crappy. With the abundance of APIs and rich data, this is not the case anymore.

So this time I was thinking about a way to map tweets in real time. My idea was that following live events such as the Tour de France, a concert or a football game is always in the hands of the broadcasters, the production company and the TV presenters. But they only show a fraction of what’s actually happening.

Imagine all the geo-located images posted to Twitter and Instagram during the Alpe d’Huez stage of the Tour de France – real, live images (and video). The most charming part is that it all comes from real people, broadcasting their own little view of the events.

I cracked open a browser, and read the Twitter Streaming API docs, more specifically the Public (filter) stream that lets you add keywords, locations and users to a filter.

Twitter Integration

I didn’t want to write any Twitter API integration code, since there are so many libraries out there. Unfortunately, support for the Streaming API is minimal. TweetSharp did have an implementation, but it is incomplete (you can’t supply search parameters) – so I needed to tear it apart and hack my own.

As an aside, I’m sad to see that the awesome TweetSharp library is not being actively developed by Daniel Crenna (the creator) – hopefully someone will take over development.

SignalR is awesome!

I remember the days of cometd – real time push notifications to the browser. It was OK, but a mess to get going. SignalR is awesome! Seriously, the code required to push messages directly to the browser is minimal:
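
A sketch of how little it takes – the TweetHub and addTweet names are just examples:

using Microsoft.AspNet.SignalR;

// The hub can stay empty – the browser just listens for messages.
public class TweetHub : Hub
{
}

public static class TweetBroadcaster
{
    // Called from the code consuming the Twitter Streaming API; pushes each
    // geo-tagged tweet straight out to every connected browser.
    public static void Broadcast(double latitude, double longitude, string text)
    {
        var hubContext = GlobalHost.ConnectionManager.GetHubContext<TweetHub>();
        hubContext.Clients.All.addTweet(latitude, longitude, text);
    }
}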

The result

Here’s a Vine video showing the hack in action. Notice when I press send on the iPad, how the pin drops onto the Google Map, like instantly! This is extremely powerful!

Backbone and ASP.NET MVC: Use Nullable types as Id

As a side note to my blog post about renaming the idAttribute in Backbone when using it with ASP.NET (MVC), this post continues with the Id attribute.

The Id attribute in Backbone is very important. Backbone uses it to determine whether an object is new or already exists. Collections use it to determine whether they already contain a given object.

So I ran into a problem in a situation where my model required a lot of “supporting data” before it was saved. To add this data, I decided to create the models on the server and add them to a collection (since there was a whole list of them).

No matter how many models I added to the list, only the first one was rendered. Why? Because my model on the server was using a non-nullable Integer as its Id property like this:

public class Account
{
	public int Id { get; set; }
}

So of course, all the models returned from the server had an Id of zero! And of course, Backbone thought that they were all the same instance since they shared the same Id.

Small change, huge impact

Talk about a single character making a huge impact!

By adding the silly question mark to the Id property, everything worked. That’s because a nullable integer with no value is not added to the JSON returned by the server, and Backbone then knows that this model does not exist yet!
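
For completeness, the change amounts to this:

public class Account
{
	public int? Id { get; set; }
}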

3 Reasons Why Dedicated ViewModels in ASP.NET MVC is a *MUST*

I’d argue that using dedicated view-models in ASP.NET MVC (or any other MVC framework) is one of the things that has changed the way I work for the better. And by ‘for the better’ I mean more maintainable code, better-designed code, more robust code – just better in every measurable way.

Here are three reasons why you should use truly dedicated view-models for any view in ASP.NET MVC.

1. Abstract code beyond your control

If you work on a web team that is part of a larger team, you cannot expect to always have the entire back-end ready by the time you want to start coding a feature for the front-end. But why wait? Even if you don’t have the service layer, the domain models or the database ready, you can easily create dummy instances of your view-model in a controller and return them to the view.

You can get started coding the UI, the JavaScript, maybe hook up your Backbone models or anything similar.
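
Something like this, for instance – the view-model type and its properties are hypothetical:

using System.Web.Mvc;

public class AccountDetailsViewModel
{
    public string Name { get; set; }
    public bool IsAdmin { get; set; }
}

public class AccountController : Controller
{
    // Returns hard-coded data until the real back-end is ready.
    public ActionResult Details()
    {
        var viewModel = new AccountDetailsViewModel
        {
            Name = "Dummy account",
            IsAdmin = true
        };

        return View(viewModel);
    }
}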

You are also in complete control of the design of the view-models, and it often turns out that they end up a hell of a lot different from the domain models.

2. Keep view oriented data annotations out of the domain model

You want to use validation attributes, UIHints, template information and a lot of other view-oriented stuff that should stay inside the MVC project.

The problem is that if you throw all these things on the domain model, you suddenly have a dependency on ASP.NET MVC and Razor, which means that you must add references to these assemblies from your lower tiers. Not good!

3. Defend your JavaScript from outside changes

JavaScript is playing a bigger and bigger part in modern web development. So the more JavaScript you write that manipulates your view-models, the more exposed you become to outside changes.

Having a dedicated view-model puts you in control – not the DBA or any other dude who doesn’t care about the front-end.

How to avoid repeating yourself

The worst argument against view-models is probably that you end up repeating the domain models. If you think that, you’ve missed something.

The point of a view-model is to make it as closely tied to the view as possible. This will make the view a lot simpler, which is a good thing.

For mapping a domain model to a view-model and vice versa, I use AutoMapper, which is extremely simple, yet powerful.
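
A minimal sketch using AutoMapper’s classic static API (the UserViewModel type is hypothetical):

// Configure the mapping once, e.g. at application startup.
Mapper.CreateMap<User, UserViewModel>();

// Then map the domain model to the view-model wherever you need it.
var viewModel = Mapper.Map<UserViewModel>(user);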

Backbone and ASP.NET MVC: Rename the ID attribute

In Backbone, a model’s ID attribute is vital in the way Backbone handles models and collections. In a collection, you can call get with an ID and you’ll get back the model.

The thing is, naming conventions across languages do not always agree. In .NET, properties are named in PascalCase. In JavaScript, camelCase is the standard, so naturally this leads to a conflict when your model in Backbone names the ID attribute id, and the corresponding property on the server side is named Id.

I was using an ASP.NET MVC controller to return a collection of Accounts, but when it hit the client, got changed and later saved, the ID was lost and the model was created as a new instance in the database.

The problem was that when Backbone fetched the collection and automatically turned the objects in the array into instances of Account, it expected to find every model’s ID in the id property, which did not exist. And on the server, it uses Id to know whether a model is new or already exists.

Change the idAttribute

It turns out that Backbone is designed for this.

Whenever you define your models, you need to set the value of the idAttribute property like this:

var Account = Backbone.Model.extend({
	idAttribute: "Id"
});

But that’s a bit annoying, since you have to remember this on each and every model. And you need to define your own model every time; you can’t just create a new instance of the stock Backbone.Model class.

But with JavaScript being the awesome language that it is, you can change the prototype like this:

Backbone.Model.prototype.idAttribute = "Id";

And now you can totally forget about this, and let Backbone handle IDs properly!

Entity Framework: Update single column

Yesterday I blogged about how to delete a detached entity using Entity Framework 5 by only using its Id.

A common practice in many applications today is to not actually delete the entity from the database, but instead mark it as deleted and make sure your data access layer filters out those “deleted” items when selecting.

So I wanted to implement a general “delete” method that would update the DateDeleted column of an entity without touching any other column.

Again I use a detached entity, so the trick is to only mark the DateDeleted column as modified, and not the entire entity.

public bool DeleteEntity<T>(T entity) where T : ModelBase
{
	entity.DateDeleted = DateTime.UtcNow;

	this._databaseContext.Set<T>().Attach(entity);
	this._databaseContext.Configuration.ValidateOnSaveEnabled = false;

	this._databaseContext.Entry(entity).Property(m => m.DateDeleted).IsModified = true;

	int recordsAffected = this._databaseContext.SaveChanges();

	this._databaseContext.Configuration.ValidateOnSaveEnabled = true;

	return recordsAffected == 1;
}

Notice how I disable validation, and then enable it after saving the changes. This is because I don’t want to load the entity before changing DateDeleted. I want to use only the Id, so I want to avoid having to fulfill all validation rules on an entity.

All my model classes derive from the ModelBase class, which looks like this:

public abstract class ModelBase
{
	public ModelBase()
	{
		this.DateCreated = DateTime.UtcNow;
	}

	public DateTime DateCreated { get; set; }

	public DateTime? DateModified { get; set; }

	public DateTime? DateDeleted { get; set; }
}

To “delete” an entity, I can instantiate an entity, give it an Id, and call the delete method:

Account account = new Account { Id = 123 };

DeleteEntity(account);