Tuesday, 31 December 2013

Triggering Events in Angular JS Directive Tests

The best place to perform DOM manipulations in an Angular JS application is in a directive. Sometimes we handle events on the elements wrapped inside the directive to perform a required action. The jqLite implementation in Angular JS provides enough APIs to handle these events in the body of the directive. We can use jQuery as well, if the API exposed by jqLite is not enough.

One of the primary design goals of Angular JS is testability, and directives are testable too. Since directives work directly on the DOM, testing them becomes a bit tricky. One of the trickier parts to test is events. Say we have a directive that handles the blur event on a text box:

app.directive('textBoxBlur', function($window){
  return{
    require:'ngModel',
    link: function(scope, element, attrs, ngModel){
      element.bind('blur', function(e){
        $window.alert("value entered: " + element.val());
      });
    }
  }
});

To trigger DOM events manually, jqLite provides a handy method, triggerHandler. It is similar to jQuery's triggerHandler method and can be called on the jqLite object of any element. The following statement shows this:
elem.triggerHandler('blur');

Since we have to do it repeatedly in tests, it is better to wrap it inside a reusable function as shown below:
changeInputValue = function (elem, value) {
    elem.val(value);
    elem.triggerHandler('blur');
};

Now the above function can be called from any test case to test the behaviour of the blur event.
it('Should call alert on losing focus', function(){
  changeInputValue(form.find('input'), "Ravi");
  expect(windowMock.alert).toHaveBeenCalled();
});
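
The test above also relies on a compiled form and a mocked $window prepared in a beforeEach block. A minimal sketch of such a setup (the module name 'app' is an assumption; use whichever module registers textBoxBlur) might look like this:

var form, scope, windowMock;

beforeEach(function () {
  // 'app' is an assumed module name
  module('app');

  // Replace $window with a mock so the alert call can be spied on
  windowMock = { alert: jasmine.createSpy('alert') };
  module(function ($provide) {
    $provide.value('$window', windowMock);
  });

  inject(function ($rootScope, $compile) {
    scope = $rootScope.$new();
    // Compile a form containing an input that uses the directive under test
    form = $compile('<form><input type="text" ng-model="name" text-box-blur /></form>')(scope);
    scope.$digest();
  });
  // The changeInputValue helper shown earlier is defined alongside this setup
});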

Complete sample is available on plnkr: http://plnkr.co/edit/InkyGdIhZiwfe0NC4hyu?p=preview

Happy coding!

Monday, 30 December 2013

Self-Hosting ASP.NET SignalR Using Katana

With the recent release of ASP.NET, Microsoft added a new component, Katana. It is an implementation of OWIN (Open Web Interface for .NET) that provides a lightweight way to spin up a .NET server quickly without needing any of the ASP.NET components or IIS. The server can be composed anywhere, even in a simple console application. For more information on Katana, read the excellent introductory article by Howard Dierking.

The ASP.NET project templates that come with Visual Studio 2013 also install some pieces of Katana, as we saw in my post on the Identity system. Katana is used there to make the identity system available across all flavours of ASP.NET.

Katana makes self-hosting ASP.NET Web API and SignalR much easier with its simple interface. Any Katana-based application needs a startup class to kick things off. Global configuration, such as hub route mapping in SignalR or API controller routing in Web API, is performed in this class.

Let’s build a simple console-hosted SignalR application using Katana. Create a console application using Visual Studio 2013 and install the following NuGet package on this project:


Install-Package Microsoft.AspNet.SignalR.SelfHost


This package installs all the dependencies required to write and publish a SignalR application in a non-web environment. Add a new class to the project; name it Startup and add the following code to it:

using Owin;

public class Startup
{
    public void Configuration(IAppBuilder app)
    {
        app.MapSignalR();
    }
}

The Configuration method is comparable to the Application_Start event in Global.asax; it gets called at the beginning of the application. As we see, SignalR routes are mapped in this method. We need to start the server in the Main method of the Program class. Add the following code to the Main method:
static void Main(string[] args)
{
    string uri = "http://localhost:8080/";

    using (WebApp.Start<Startup>(uri))
    {
        Console.WriteLine("SignalR server started, start sending requests");
        Console.ReadKey();
        Console.WriteLine("Server stopped!");
    }
}

As there is no web server hosting our application, we need to specify a URL with a port number on which the server will listen. The generic static method WebApp.Start is called to start the server; it invokes the Configuration method defined above. The server stops once the console application stops running.

Let's create a boring hello-world kind of hub. Following is a hub that takes the name of a person and sends back a greeting message with the current time stamp:

public class HelloWorldHub : Hub
{
    public void Greet(string name) 
    {
        Console.WriteLine("Got a request from: {0}", name);
        string message= string.Format("Hello, {0}. Greeted at: {1}",name,DateTime.Now.ToString());
        Clients.All.acceptGreeting(message);
    }
}

Run the console application; you should see the "SignalR server started" message on the console.

Let’s quickly create a console client to test if the server is able to push messages to the client. Create another console application and add the following NuGet package to it:


Install-Package Microsoft.AspNet.SignalR.Client


Add the following code to the Main method:

static void Main(string[] args)
{
    var hubConnection = new HubConnection("http://localhost:8080/");
    var hubProxy = hubConnection.CreateHubProxy("HelloWorldHub");
    hubProxy.On<string>("acceptGreeting", message => {
        Console.WriteLine("Received from server: " + message);
        Console.WriteLine("\n\nPress any key to exit.");
    });

    Console.WriteLine("Establishing connection...");
    hubConnection.Start().Wait();
    Console.WriteLine("Sending request to hub...");
    hubProxy.Invoke("Greet", "Ravi").Wait();

    Console.ReadLine();
    hubConnection.Stop();
}

Run the client application; you should see the greeting message received from the server printed on the console.

Happy coding!

Sunday, 22 December 2013

Mocking promises in Angular JS Controller Tests

In a typical Angular JS application, we wrap the calls to backend services in a custom Angular JS service and return a promise from the method in the service to the calling component. For instance, say the calling component is a controller. While testing the controller, we create a mocked service instance with methods replaced by spies. As the original method in the service returns a promise containing the result from executing the backend API, the spy should return a mocked promise with a dummy result.

A few months back, when I was learning Angular and blogging about my learning, I wrote posts on unit testing controllers using Jasmine and on unit testing controllers using QUnit. Back then, I used spyOn().andCallThrough() on all methods of the service and used $httpBackend to avoid calling the backend APIs from the service. With time, I understood that this approach is not ideal, as the controller still depends on the logic written inside the service. In this post, we will see how to return mock promises from the spies to isolate the controller from the service.

Following are the service and the controller we will be using in this post:

var app = angular.module("myApp", []);
app.factory('dataSvc', function($http, $q){
    var basePath = "api/books";
    var getAllBooks = function(){
        var deferred = $q.defer();
        $http.get(basePath).success(function(data){
            deferred.resolve(data);
        }).error(function(err){
            deferred.reject("service failed!");
        });
        return deferred.promise;
    };

    return {
        getAllBooks: getAllBooks
    };
});

app.controller('HomeController', function($scope, $window, dataSvc){
   function initialize(){
       dataSvc.getAllBooks().then(function(data){
           $scope.books = data;
       }, function(msg){
          $window.alert(msg);
       });
   }

  initialize();
});

Let's create a spy for the service. This is a bit tricky, as we need to force the promise to resolve or reject based on a condition. At the same time, it is simple, because $q provides the ready-made methods when and reject to make our job easier. Following is the spy for the getAllBooks method:
var succeedPromise;
spyOn(booksDataSvc, "getAllBooks")
    .andCallFake(function(){
        if (succeedPromise) {
            return $q.when(booksData);
        }
        else{
            return $q.reject("Something went wrong");
        }
    });

The fake implementation of getAllBooks returns a resolved promise if the variable succeedPromise is set to true; otherwise it returns a rejected one. We need to manipulate this variable in the test cases.

In the test cases, we need to call scope.$digest before checking the expectations, because promises created with $q are resolved only during a digest cycle, and the spy runs outside the Angular world where no digest is triggered automatically.

The following test case checks that the books object is populated when the promise resolves.

it('Should call getAllBooks on creating controller', function(){
    succeedPromise = true;
    createController();
    homeCtrlScope.$digest();
    expect(booksDataSvc.getAllBooks).toHaveBeenCalled();
    expect(homeCtrlScope.books.length).not.toBe(0);
  });

The promise can be forced to fail by setting succeedPromise to false in the test case. The following test case demonstrates it:
it('Should alert a message when service fails', function(){
    succeedPromise = false;
    createController();
    homeCtrlScope.$digest();
    expect(booksDataSvc.getAllBooks).toHaveBeenCalled();
    expect(windowMock.msg).not.toBe("");
  });
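
The tests above rely on a few helpers — createController, windowMock, booksData and homeCtrlScope — prepared in a beforeEach block. A minimal sketch of that setup (assuming the alert message is captured in windowMock.msg, as the failure test expects) might look like this:

var homeCtrlScope, booksDataSvc, windowMock, booksData, succeedPromise, createController, $q;

beforeEach(function () {
  module('myApp');

  booksData = [{ id: 1, title: 'Book 1' }, { id: 2, title: 'Book 2' }];
  // Captures the message passed to $window.alert so the failure test can assert on it
  windowMock = { msg: "", alert: function (message) { this.msg = message; } };

  inject(function ($rootScope, $controller, _$q_) {
    $q = _$q_;
    homeCtrlScope = $rootScope.$new();

    booksDataSvc = { getAllBooks: function () {} };
    // The fake shown earlier: resolve or reject based on succeedPromise
    spyOn(booksDataSvc, 'getAllBooks').andCallFake(function () {
      return succeedPromise ? $q.when(booksData) : $q.reject('Something went wrong');
    });

    createController = function () {
      return $controller('HomeController', {
        $scope: homeCtrlScope,
        $window: windowMock,
        dataSvc: booksDataSvc
      });
    };
  });
});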

The complete sample is available on plnkr: http://plnkr.co/edit/xD9IPb6TRduAUwRGbIIG

Happy coding!

Wednesday, 18 December 2013

Unit Testing Asynchronous Web API Action Methods Using MS Test

Since Entity Framework now has very nice support for performing all of its actions asynchronously, the methods in the repositories of our projects will soon turn asynchronous, and so will the code depending on them. Tom Fitzmacken did a nice job of putting together a tutorial on unit testing Web API 2 controllers on the official ASP.NET site. The tutorial discusses testing synchronous action methods. The same techniques can be applied to test asynchronous action methods as well. In this post, we will see how easy it is to test asynchronous Web API action methods using MS Test.

I created a simple repository interface with just one method in it. The implementation class uses Entity Framework to get a list of contacts from the database.

public interface IRepository
{
    Task<IEnumerable<Contact>> GetAllContactsAsync();
}

public class Repository : IRepository
{
    ContactsContext context = new ContactsContext();

    public async Task<IEnumerable<Contact>> GetAllContactsAsync()
    {
        return await context.Contacts.ToArrayAsync();
    }
}

Following is the ASP.NET Web API controller that uses the above repository:
public class ContactsController : ApiController
{
    IRepository repository;

    public ContactsController() : this(new Repository())
    { }

    public ContactsController(IRepository _repository)
    {
        repository = _repository;
    }

    [Route("api/contacts/plain")]
    public async Task<IEnumerable<Contact>> GetContactsListAsync()
    {
        IEnumerable<Contact> contacts;
         try
         {
            contacts = await repository.GetAllContactsAsync();
         }
         catch (Exception)
         {
             throw;
         }
           
         return contacts;
    }

    [Route("api/contacts/httpresult")]
    public async Task<IHttpActionResult> GetContactsHttpActionResultAsync()
    {
        IEnumerable<Contact> contacts;

        try
        {
            contacts = await repository.GetAllContactsAsync();
        }
        catch (Exception ex)
        {
            return InternalServerError(ex);
        }
        
        return Ok(contacts);
    }
}

As we see, the controller has two action methods performing the same task, but the way they return the results is different. Since both action methods respond to the HTTP GET verb, I used attribute routing to distinguish them. I used poor man's dependency injection to instantiate the repository; it can easily be replaced with an IoC container.

Before writing unit tests for the above action methods, we need to create a mock repository.

public class MockRepository:IRepository
{
    List<Contact> contacts;

    public bool FailGet { get; set; }

    public MockRepository()
    {
        contacts = new List<Contact>() {
            new Contact(){Id=1, Title="Title1", PhoneNumber="1992637281", CustomerId=1},
            new Contact(){Id=2, Title="Title2", PhoneNumber="9172735171", SupplierId=2},
            new Contact(){Id=3, Title="Title3", PhoneNumber="8361910353", CustomerId=2},
            new Contact(){Id=4, Title="Title4", PhoneNumber="7801274518", SupplierId=3}
        };
    }

    public async Task<IEnumerable<Contact>> GetAllContactsAsync()
    {
        if (FailGet)
        {
            throw new InvalidOperationException();
        }
        await Task.Delay(1000);
        return contacts;
    }
}

The property FailGet in the above class is used to force the mock to throw an exception. This is done just to cover more test cases.

In the test class, we need a TestInitialize method to arrange the objects needed for unit testing.

[TestClass]
public class ContactsControllerTests
{
    MockRepository repository;
    ContactsController contactsApi;

    [TestInitialize]
    public void InitializeForTests()
    {
        repository = new MockRepository();
        contactsApi = new ContactsController(repository);
    }
}

Let us test the GetContactsListAsync method first. Testing this method seems straightforward, as it either returns a plain generic collection or throws an exception. But the test method can't just return void like other tests, because the method under test is asynchronous. To test an asynchronous method, the test method should also be marked async and return a Task. The following test checks that the controller action returns a collection of length 4:
[TestMethod]
public async Task GetContacts_Should_Return_List_Of_Contacts() 
{
    var contacts = await contactsApi.GetContactsListAsync();
    Assert.AreEqual(contacts.Count(), 4);
}

If the repository encounters an exception, the exception is re-thrown from the GetContactsListAsync method as well. This case can be checked using the ExpectedException attribute.
[TestMethod]
[ExpectedException(typeof(InvalidOperationException))]
public async Task GetContacts_Should_Throw_Exception()
{
    repository.FailGet = true;
    var contacts = await contactsApi.GetContactsListAsync();
}

Now let's test the GetContactsHttpActionResultAsync method. Though this method does the same thing as the previous one, it doesn't return plain .NET objects. To test it, we need to extract the result from the IHttpActionResult object returned by the action method. The following test checks that the action result contains a collection when the repository is able to fetch results. The return type of the Ok() method used above is OkNegotiatedContentResult<T>, so the IHttpActionResult has to be cast to this type to check the result obtained:
[TestMethod]
public async Task GetContactsHttpActionResult_Should_Return_HttpResult_With_Contacts()
{
    var contactsResult = await contactsApi.GetContactsHttpActionResultAsync() as OkNegotiatedContentResult<IEnumerable<Contact>>;

    Assert.AreEqual(contactsResult.Content.Count(), 4);
}

Similarly, in case of an error, we call the InternalServerError() method to return the exception for us. We need to cast the result to the ExceptionResult type to be able to check the type of the exception thrown. It is shown below:
[TestMethod]
public async Task GetContactsHttpActionResult_Should_Return_HttpResult_With_Exception()
{
    repository.FailGet = true;
    var contactsResult = await contactsApi.GetContactsHttpActionResultAsync() as ExceptionResult;
    Assert.IsInstanceOfType(contactsResult.Exception,typeof(InvalidOperationException));
}

Happy coding!

Wednesday, 4 December 2013

Invoking Angular JS Filters from Controller

On my post on Filtering and Sorting data using Angular JS, I got a comment requesting a post on calling filters in a controller. We will see how to achieve that in this post.

There are two ways to invoke filters from controllers in Angular JS. One approach is using the $filter service and the other is directly injecting the filters into the controller. Let’s see each of these cases.

Using $filter service
The $filter service accepts the name of a filter and returns the corresponding filter function. Once the function is obtained, we can pass in the required parameters. Following is the syntax:

$filter('filterName')(params);

Let's invoke one of the most used filters, orderBy, using $filter to sort a list of items based on name. After getting the filter function, we need to pass the source array and the sort expression to it. The following statement shows this:

$scope.items = $filter('orderBy')(itemsArray, "Name")

To apply multiple filters, we can pass the result of one filter as a parameter to another filter. The following snippet applies a filter condition to the sorted list obtained above:
$scope.items = $filter('filter')($filter('orderBy')(itemsArray, "Name"), "a")

The filter condition and sort expression passed into the above filters can be any dynamic values as well. A custom filter can also be called using the same syntax. Say, rupee is a filter I wrote to display my currency in rupee format; we can invoke the filter as:
$scope.totalPrice= $filter('rupee')(totalPrice);
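
For reference, a hypothetical definition of such a rupee filter (the formatting logic here is just an illustrative assumption) might look like this:

app.filter('rupee', function () {
    return function (amount) {
        // Prefix the rupee symbol and keep two decimal places
        return '₹ ' + Number(amount).toFixed(2);
    };
});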

Getting filters injected into the controller
Filters can also be injected into the controller; the name to be specified in the controller's argument list is the filter's name with the word "Filter" appended to it.

app.controller('MyCtrl', function(orderByFilter){  //injects the orderBy filter into the controller
});

As we have a reference to the filter available in the controller, we can invoke the filter directly from here.
$scope.items= orderByFilter(itemsArray, "Name");

The result obtained can be passed into another filter to get a combined result (here filterFilter is injected in the same way as orderByFilter).
$scope.items = filterFilter(orderByFilter(itemsArray, "Name"), "a");

The same syntax can be used to inject and use a custom filter as well. I have put together a sample on jsfiddle covering the above concepts.

Happy coding!

Tuesday, 3 December 2013

Using External Login Providers with ASP.NET Identity System

In the last post, we saw how simple the new Identity system in ASP.NET is and explored the code generated by the Visual Studio template for handling local user accounts. In this post, we will see how easy it is to use an external login provider with the identity system.

Allowing users to log in to a website with an external provider has several advantages:

  1. Your application doesn’t need to store user names and passwords of the users using external login
  2. Almost all external login providers serve their login pages securely over HTTPS
  3. The user doesn't need to register on your site, and so doesn't have to remember another user name and password


The Startup.Auth.cs file contains a block of commented code for external login providers. The providers include Google, Microsoft, Twitter and Facebook.

// Uncomment the following lines to enable logging in with third party login providers
//app.UseMicrosoftAccountAuthentication(
//    clientId: "",
//    clientSecret: "");

//app.UseTwitterAuthentication(
//   consumerKey: "",
//   consumerSecret: "");

//app.UseFacebookAuthentication(
//   appId: "",
//   appSecret: "");

//app.UseGoogleAuthentication();

Of these four providers, Google is the easiest to configure and use; it is just a matter of uncommenting the last statement in the snippet above. For the other providers, we need to visit the third parties' developer websites and register the application to obtain the keys. Pranav Rastogi documented these steps for us; they are available on the Web Development Tools Blog on MSDN.

I enabled Google and Twitter in my application, and the login page shows buttons for these providers.
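
For illustration, enabling these two providers amounts to something like the following in Startup.Auth.cs; the consumer key and secret are placeholder values obtained by registering the application with Twitter:

// Google needs no keys with the default template
app.UseGoogleAuthentication();

// Twitter requires the consumer key and secret of a registered application
app.UseTwitterAuthentication(
    consumerKey: "<your-consumer-key>",
    consumerSecret: "<your-consumer-secret>");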

After logging in using one of the external services, the application asks the user to set a local user name; the user can optionally also set a local password for the account. Once the login at the external provider is successful, the user is redirected to ExternalLoginCallback. Following is the code inside this action method:
public async Task<ActionResult> ExternalLoginCallback(string returnUrl)
{         
    var loginInfo = await AuthenticationManager.GetExternalLoginInfoAsync();
    if (loginInfo == null)
    {
        return RedirectToAction("Login");
    }

    // Sign in the user with this external login provider if the user already has a login
    var user = await UserManager.FindAsync(loginInfo.Login);
    if (user != null)
    {
        await SignInAsync(user, isPersistent: false);
        return RedirectToLocal(returnUrl);
    }
    else
    {
        // If the user does not have an account, then prompt the user to create an account
        ViewBag.ReturnUrl = returnUrl;
        ViewBag.LoginProvider = loginInfo.Login.LoginProvider;
        return View("ExternalLoginConfirmation", new ExternalLoginConfirmationViewModel { UserName = loginInfo.DefaultUserName });
    }
}
It does the following:

  1. Checks if the external login is successful. If the login has failed, the user is directed to the login page
  2. Once the login is successful, it checks if the logged in user already has a local username in the application. If the user doesn’t have one, the application prompts the user to create one
  3. If the user already has an account, the application directs the user to the return URL

To identify the user when he/she comes back and logs in using the same third party account, the identity system maintains details in the following two tables:

  1. AspNetUsers: It is the same table that holds details of local users. By default, external user IDs would be created with empty passwords.
  2. AspNetUserLogins: Holds user ID, name of the external provider and a provider key

The code that creates these records can be found in the ExternalLoginConfirmation action method of the AccountController. Following is the snippet that adds them:

var info = await AuthenticationManager.GetExternalLoginInfoAsync();
if (info == null)
{
    return View("ExternalLoginFailure");
}
var user = new ApplicationUser() { UserName = model.UserName };
var result = await UserManager.CreateAsync(user);
if (result.Succeeded)
{
    result = await UserManager.AddLoginAsync(user.Id, info.Login);
    if (result.Succeeded)
    {
        await SignInAsync(user, isPersistent: false);
        return RedirectToLocal(returnUrl);
    }
}

The AuthenticationManager.GetExternalLoginInfoAsync method called above works the same way for all providers and supplies just the data needed to identify the user. The returned value contains the default user name, the name of the third-party provider and a provider key. It doesn't provide vendor-specific details about the user. To get more details from the vendor, we need to use the AuthenticateAsync method.
var vendorResult = await AuthenticationManager.AuthenticateAsync(DefaultAuthenticationTypes.ExternalCookie);

Inspecting the details obtained this way after logging in using Twitter shows the external identity and its claims. If you inspect the value returned from Google, you will find the user's e-mail ID among the details; you may use it as the default username for the application as well.
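
As a rough sketch, assuming the provider supplies a standard e-mail claim (as Google does), the e-mail could be read from the authentication result like this (SuggestedUserName is just an illustrative name):

// Requires: using System.Security.Claims;
var vendorResult = await AuthenticationManager.AuthenticateAsync(DefaultAuthenticationTypes.ExternalCookie);

// Look for an e-mail claim on the external identity; it may be null for providers that don't supply one
var emailClaim = vendorResult.Identity.FindFirst(ClaimTypes.Email);
if (emailClaim != null)
{
    ViewBag.SuggestedUserName = emailClaim.Value;
}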

Happy coding!

Saturday, 30 November 2013

A Look at the new Identity System in ASP.NET

One of the key features added to the core of ASP.NET with the release of Visual Studio 2013 is the new Identity system. If you create a new ASP.NET project, be it a Web Forms or an MVC project, you will find the default authentication type selected as "Individual User Accounts".



The individual user accounts option creates an authentication system based on the Identity system, which looks greatly simplified compared with the Membership system we had in earlier versions. The default project template includes the following references (also available as NuGet packages with the same names):

  • Microsoft.AspNet.Identity.Core: Contains the core classes and interfaces of the identity system
  • Microsoft.AspNet.Identity.EntityFramework: Contains the Entity Framework implementation of the identity system

The default templates make use of Entity Framework to persist the user’s information in a SQL Server database. The default database used is an MDF file.

Classes and Setup:
The project template includes Owin and Katana components. In the authentication configuration, we see the following settings applied to set up a cookie-based authentication system on Owin:

app.UseCookieAuthentication(new CookieAuthenticationOptions
{
    AuthenticationType = DefaultAuthenticationTypes.ApplicationCookie,
    LoginPath = new PathString("/Account/Login")
});
app.UseExternalSignInCookie(DefaultAuthenticationTypes.ExternalCookie);

This makes the authentication system shareable across Web Forms, MVC, Web API and SignalR without any additional code. The authentication configuration is called from the Owin start-up class. In the Models folder of the project, a file with the following code is added for us:
public class ApplicationUser : IdentityUser
{
}

public class ApplicationDbContext : IdentityDbContext<ApplicationUser>
{
    public ApplicationDbContext()
        : base("DefaultConnection")
    {
    }
}

As we see, this file includes two classes:

  • ApplicationUser: Inherits from Microsoft.AspNet.Identity.EntityFramework.IdentityUser. The IdentityUser class implements the interface IUser; any user class in the identity system must implement this interface. IdentityUser includes the properties needed for a user account, like UserName, PasswordHash and SecurityStamp. The ApplicationUser class allows us to add our own properties to a user's profile; for example, if your web site needs to capture the e-mail ID and phone number of the user, you can add them to this class (a sketch follows this list). IdentityUser also includes navigation properties for UserRoles, UserClaims and UserLogins. The UserLogin entity is used to store information when a user logs in using an external authentication provider, like Google, Microsoft, Twitter or Facebook.
  • ApplicationDbContext: The Entity Framework code-first DbContext class used to create the database with the necessary tables. It extends IdentityDbContext, the DbContext class defined in the Microsoft.AspNet.Identity.EntityFramework namespace, which includes DbSets for users and roles. This class allows us to add our own DbSets to the database being created.
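
A minimal sketch of such an extension (the property names here are assumptions for illustration) could look like this:

public class ApplicationUser : IdentityUser
{
    // Extra profile data persisted along with the user record
    public string Email { get; set; }
    public string PhoneNumber { get; set; }
}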

Once the application is executed, a SQL Server MDF file is created containing the Identity tables.

User management:
To manage the users and their data, the application uses the following two classes:

  • UserStore: A class in the Microsoft.AspNet.Identity.EntityFramework namespace. It is responsible for all database operations related to managing users. It needs a DbContext to work with; we generally pass in an instance of the IdentityDbContext. This class implements six interfaces: IUserStore, IUserPasswordStore, IUserLoginStore, IUserClaimStore, IUserRoleStore and IUserSecurityStampStore. Each of them has a specific purpose: IUserStore is for creating, finding, updating and deleting user information; IUserPasswordStore is for managing passwords; and so on. When writing a custom Identity system, IUserStore is the minimum interface that must be implemented. All methods declared in these interfaces are asynchronous and return Task.
  • UserManager: This class is defined in the Microsoft.AspNet.Identity.Core assembly. It needs an instance of an IUserStore type. The UserManager can be viewed as a repository that calls the appropriate methods of the UserStore to manage users for the application. As with the UserStore, the methods of UserManager are asynchronous.
The UserManager is instantiated as follows:
var UserManager = new UserManager<ApplicationUser>(new UserStore<ApplicationUser>(new ApplicationDbContext()));

Let's see how a new user registration works. In the AccountController of the MVC project, we find the following code in the Register action method (similar code can be found in Register.aspx.cs of the Web Forms project):
var user = new ApplicationUser() { UserName = model.UserName };
var result = await UserManager.CreateAsync(user, model.Password);
if (result.Succeeded)
{
    await SignInAsync(user, isPersistent: false);
    return RedirectToAction("Index", "Home");
}

The UserManager.CreateAsync() method calls UserStore.CreateAsync to store the user information in the database. Once registration succeeds, a call to SignInAsync is made to authenticate the user. Following is the code inside the SignInAsync method:
AuthenticationManager.SignOut(DefaultAuthenticationTypes.ExternalCookie);
var identity = await UserManager.CreateIdentityAsync(user, DefaultAuthenticationTypes.ApplicationCookie);
AuthenticationManager.SignIn(new AuthenticationProperties() { IsPersistent = isPersistent }, identity);

The AuthenticationManager used here is the Authentication object of the current Owin context. The first statement clears any external sign-in cookie in the current application context. It then asks the UserManager to create a claims-based identity for the current user, and the obtained identity is signed in to the current Owin context.

The Login action method differs by just one statement, shown below:

var user = await UserManager.FindAsync(model.UserName, model.Password);

The FindAsync method returns null if the user’s credentials are not valid. If credentials are valid, it returns all information about the user. Once the user is found, a claims-based identity object of the user is set to the Owin context.

LogOff is a straightforward implementation. It just calls the SignOut method of the Authentication object of the Owin context:

Context.GetOwinContext().Authentication.SignOut();

Similarly, the template also generates code for changing the password, removing a user account and handling external login accounts. We will see how the identity system manages external logins in a future post.

Happy coding!

Tuesday, 19 November 2013

“Controller as” Syntax in Angular JS 1.2

Angular JS 1.2 was released around 10 days back and 1.2.1 was released last week with a few fixes. The new release includes a number of significant features. One of them is the controller as syntax.

The classic way of creating a controller in Angular JS is by injecting $scope into it, which creates a new scope for the controller by inheriting from the parent scope. Angular 1.2 makes it possible to create a controller without injecting $scope. Objects and functions used by the view are added to the controller itself and are referred to in the view using an alias. The scope still exists; it is created and maintained behind the scenes for us, so we still get two-way data binding on all objects added to the controller.

Following is a simple controller and the page using it:

function HomeCtrl(){
    this.name="Ravi";
}

<div ng-app ng-controller="HomeCtrl as vm">
 <input type="text" ng-model="vm.name" />
 <br  />
 <span>{{vm.name}}</span>
</div>


We don’t see our good friend $scope in the controller, but the controller looks clean and independent now.

Using controller as with routing
To use controller as with routing across multiple views, we need to specify an additional property while configuring the route. The following snippet shows the syntax:

app.config(function($routeProvider){
    $routeProvider.when('/first', { templateUrl: 'first.html', controller: 'FirstCtrl', controllerAs:'vm' })
        .when('/second', { templateUrl: 'second.html' , controller: 'FirstCtrl', controllerAs:'vm' })
        .otherwise({ redirectTo: '/first' });
});

The views can use vm.<object-name> in the data binding expressions.
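
As an illustration, assuming FirstCtrl exposes a name property the way HomeCtrl does above, a hypothetical first.html could look like this:

<!-- first.html: an assumed template bound through the vm alias -->
<div>
  <input type="text" ng-model="vm.name" />
  <span>{{vm.name}}</span>
</div>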

Using controller as in directives

A directive can have a controller of its own. To use controller as in directives, we need to specify an additional property, just as in the case of routing. The alias can be used in data binding expressions in the template of the directive. It is shown below:
app.directive("helloDir", function(){
    return{
        restrict:'A',
        template:"<span>{{dir.message}}</span>",
        controller:function(){
            this.message="Good Morning!";
        },
        controllerAs:"dir"
    }
});
This directive renders the message "Good Morning!" when used in a view.

Happy coding!

Friday, 1 November 2013

Using Breeze JS to Consume ASP.NET Web API OData in an Angular JS Application

In the last post, we saw how Breeze JS eases the job of querying OData services. It is a lot of fun to use this great library with our favourite SPA framework, Angular JS. In this post, we will see how to hook up these two libraries to create data-rich applications.

As stated in the previous post, Breeze requires datajs to understand OData conventions. All functions performing CRUD operations in Breeze return a Q promise. Any changes made to properties of $scope inside the then method of a Q promise are not picked up automatically by Angular, as callbacks hooked to Q run outside the Angular world. If the same work is done using $q instead, we don't have to call $scope.$apply to make the changes visible to Angular's dirty checking. For this purpose, the Breeze team has created a module (use$q). This module can be installed in the project via the NuGet package Breeze.Angular.Q. The package adds a JavaScript file, breeze.angular.q.js, to the application. Once this file is included and the module is loaded, we don't need q.js anymore.

Following is the list of scripts to be included on the page:

<script src="Scripts/angular.js"></script>
<script src="Scripts/datajs-1.1.1.js"></script>
<script src="Scripts/breeze.min.js"></script>
<script src="Scripts/breeze.angular.q.js"></script>

We don't need jQuery anymore, as Breeze detects the presence of Angular and configures its AJAX adapter to use $http (check the release notes of Breeze 1.4.4).

As both Angular and Breeze are JavaScript libraries, they can easily be used together, but the combination works best when we follow the architectural constraints of Angular JS. The moment we include breeze.js in an HTML page, the JS object "breeze" is available in the global scope. It can be used directly in any JavaScript component; Angular controllers and services are no exceptions. However, the best way to use an object in an Angular component is through dependency injection, and any global object can be made injectable by registering it as a value.
var app = angular.module('myApp', []);
app.value('breeze', breeze);
app.service('breezeDataSvc', function($q, breeze){
    //logic in the service
});

We need to ask breeze to use $q as soon as the Angular JS application kicks off. For this, we need to register the following run block:
app.run(['$q','use$q', function ($q, use$q) {
    use$q($q);
}]);
Breeze has to be configured to work with the OData service and use Angular's AJAX API instead of jQuery. It is done by the following statement:
breeze.config.initializeAdapterInstances({ dataService: "OData" });

Now all we need to do is instantiate an EntityManager and start querying. The following is the complete implementation of the service, including a function that makes a basic OData request:
app.service('breezeDataSvc', function (breeze, $q) {
    breeze.config.initializeAdapterInstances({ dataService: "OData" });
            
    var manager = new breeze.EntityManager("/odata/");
    var entityQuery = new breeze.EntityQuery();
            
    this.basicCustomerQuery = function () {
        var deferred = $q.defer();
        manager.executeQuery(entityQuery.from("Customers").where("FirstName", "contains", "M")) .then(function (data) {
            deferred.resolve(data.results);
        }, function (error) {
            deferred.reject(error);
        });
        return deferred.promise;
    };
});

Following is a simple controller that uses the above service and sets the obtained results to a property in scope:
app.controller('SampleCtrl', function ($scope, breezeDataSvc) {
    function initialize() {
        breezeDataSvc.basicCustomerQuery().then(function (results) {
            // the service already resolves the promise with data.results
            $scope.customers = results;
        }, function (err) {
            alert(err.message);
        });
    }

    initialize();
});
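
To put the data on screen, a minimal view for this controller could bind the customers with ng-repeat; the markup and the ContactName property below are assumptions for illustration:

<div ng-app="myApp" ng-controller="SampleCtrl">
  <ul>
    <!-- ContactName is assumed to be a property on the Customer entity -->
    <li ng-repeat="customer in customers">{{customer.ContactName}}</li>
  </ul>
</div>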

Run this page in a browser and see the behaviour.

Update: This post was updated on 11th January 2014 as per the Breeze Angular Q-Promises page in the Breeze documentation.

Happy coding!

Tuesday, 29 October 2013

Querying ASP.NET Web API OData using Breeze JS

A few days back I blogged about the query options supported by ASP.NET Web API OData. Though we get a number of options to query data from clients through REST-based URLs, building the URLs at runtime is not an easy task. One of the most popular ways to call services from rich JavaScript applications is jQuery AJAX. While jQuery AJAX abstracts away the pain of dealing with the browser and configuring parameters, it expects a complete URL. Writing a set of fixed URLs is easy, but in larger applications there are many scenarios where most of an OData URL has to be built dynamically based on decisions, and we can't validate such URLs until a request is sent to them. It would be good to have an abstraction that generates the queries for us.

Breeze JS is the right library in such cases. Breeze is built around the OData query standards, but it is not limited to querying OData. Breeze can manage complex object graphs, cache data, query cached local objects, save complex objects to the server, perform validations and more. It solves most of the problems one faces while working with data in rich JavaScript applications. In this post, we will see how Breeze simplifies querying OData services; we will explore other essential features of Breeze in future posts. To learn more about Breeze, make sure to check the official documentation and interactive tutorials on the Breeze website.

Breeze uses jQuery for AJAX and Q for promises. Breeze needs data.js to talk to OData sources. Getting these scripts in Visual Studio is easy through the following NuGet packages:


Now add references to these libraries on the page.

<script src="~/Scripts/jquery-1.9.1.min.js"></script>
<script src="~/Scripts/datajs-1.1.1.min.js"></script>
<script src="~/Scripts/q.min.js"></script>
<script src="~/Scripts/breeze.min.js"></script>


Breeze makes heavy use of metadata describing the data structures it has to work with. The ASP.NET Web API OData service exposes this metadata through its endpoint, but we need to set an important property, Namespace, in the OData configuration. Add the following statement to the Web API OData endpoint configuration:
modelBuilder.Namespace = "WebAPI_EF_OData.Models";

Make sure to check Brian Noyes' step-by-step tutorial on Consuming ASP.NET Web API OData using Breeze. Brian does a nice job of explaining each step in detail.

Let’s set up Breeze on the client side. Add the following code to the page in which you want to perform breeze operations:

$(function () {
    var baseAddress = "/odata";
    breeze.config.initializeAdapterInstances({ dataService: "OData" });
    var manager = new breeze.EntityManager(baseAddress);
});

Now we can start using Breeze's querying capabilities against the OData endpoint. Breeze uses LINQ-like operators to specify conditions on the data source. The following snippet sends a basic Breeze query to the Customers entity set and captures the response once it arrives from the server:
var query = breeze.EntityQuery.from("Customers");
manager.executeQuery(query, function (data) {
    //Manipulate UI
}, function (err) {
    //Show Error Message
});
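
Since executeQuery also returns a promise, the same call can be written in promise style instead of passing callbacks:

manager.executeQuery(query).then(function (data) {
    // data.results holds the entities returned by the server
    console.log(data.results);
}, function (err) {
    console.log(err.message);
});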

The above query sends a request to the URL http://localhost:<port-no>/odata/Customers. The following query applies a simple condition to it:
var queryWithCondition = breeze.EntityQuery.from("Customers")
                                           .where("ContactTitle", "equals", "Owner");

This query corresponds to the URL http://localhost:<port-no>/odata/Customers?$filter=ContactTitle eq 'Owner'. As stated earlier, Breeze is capable of expressing any OData URL. It has operators for ordering, checking lengths of strings, substrings, dates, querying data as pages, expanding navigation properties and many others. The following listing shows a set of OData URLs and their corresponding Breeze queries.
// http://localhost:<port-no>/odata/Customers?$filter=startswith(ContactName,'Ana') eq true
var queryStartsWith = breeze.EntityQuery.from("Customers")
                                        .where("ContactName", "startsWith", "Ana");

// http://localhost:<port-no>/odata/Customers?$filter= not startswith(ContactName,'Ana') eq true
var Predicate = breeze.Predicate;
var predicate = new Predicate("ContactName", "startsWith", "Ana").not();
var queryNotStartsWith = breeze.EntityQuery.from("Customers")
                                           .where(predicate);

// http://localhost:<port-no>/odata/Customers?$filter=substringof('ill',CompanyName) eq true
var queryContainsSubstring = breeze.EntityQuery.from("Customers")
                                               .where("CompanyName", "contains", "ill");

// http://localhost:<port-no>/odata/Customers?$filter=length(ContactName) gt 10 and length(ContactName) lt 20
var queryCheckingLength = breeze.EntityQuery.from("Customers")
                                            .where("length(ContactName)", "greaterThan", "10")
                                            .where("length(ContactName)", "lessThan", "20");

// http://localhost:<port-no>/odata/Customers?$orderby=Country
var queryOrderBy = breeze.EntityQuery.from("Customers")
                                     .orderBy("Country");

// http://localhost:<port-no>/odata/Customers?$top=10
var queryTop10 = breeze.EntityQuery.from("Customers")
                                   .top(10);

// http://localhost:<port-no>/odata/Customers?$skip=40&$top=10
var queryTopAndSkip = breeze.EntityQuery.from("Customers")
                                        .top(10).skip(40);

// http://localhost:<port-no>/odata/Customers?$inlinecount=allpages
var queryInlineCount = breeze.EntityQuery.from("Customers")
                                         .inlineCount(true);

// http://localhost:<port-no>/odata/Customers?$expand=Orders/Employee
var queryExpand = breeze.EntityQuery.from("Customers")
                                    .expand("Orders/Employee");

// http://localhost:<port-no>/odata/Customers?$select=CustomerID,ContactName
var querySelect = breeze.EntityQuery.from("Customers")
                                    .select("CustomerID, ContactName");

This is not an exhaustive list; I created it to serve as a one-stop reference for me and hopefully for you as well! Most of the operators look and behave like LINQ operators. Check the API documentation on the official site for details on each of the operators used above.

Happy coding!

Thursday, 24 October 2013

Making Bootstrap UI Accordion work with Bootstrap 3

The latest templates of Angular UI Bootstrap don't play well with Bootstrap 3. A few days back, I tried using the accordion directive of Bootstrap UI and was disappointed with the outcome. This happens because of the CSS classes: there are significant differences between the class names in Bootstrap 2.x and 3.0.

In the templates file, there are two templates defined for the accordion: accordion.html and accordion-group.html. To make the accordion work with Bootstrap 3, we need to modify the CSS classes used in these templates. Following are the modified templates:

angular.module("template/accordion/accordion-group.html", []).run(["$templateCache", function ($templateCache) {
    $templateCache.put("template/accordion/accordion-group.html",
      "<div class=\"panel panel-default\">\n" +
      "  <div class=\"panel-heading\" ><a class=\"accordion-toggle\" ng-click=\"isOpen = !isOpen\" accordion-transclude=\"heading\">{{heading}}</a></div>\n" +
      "  <div class=\"panel-collapse collapse in\" collapse=\"!isOpen\">\n" +
      "    <div class=\"panel-body\" ng-transclude></div>  </div>\n" +
      "</div>");
}]);

angular.module("template/accordion/accordion.html", []).run(["$templateCache", function ($templateCache) {
    $templateCache.put("template/accordion/accordion.html",
      "<div class=\"panel-group\" ng-transclude></div>");
}]);


Make these changes and use the accordion directive just as in Bootstrap 2.
<div data-accordion="" data-close-others="true">
        <div data-accordion-group="" data-heading="Heading 1">
            Contents in first group
        </div>
        <div data-accordion-group="" data-heading="Heading 2">
            Contents in second group
        </div>
</div>


Of the other directives, some work well with Bootstrap 3 classes and some don't. Tweaking them may not be this easy, but there are ways to make them work.

Update (Jan 2014): Bootstrap UI for Bootstrap 3 is officially released. Refer to the project's source for the template.

Happy coding!

Thursday, 17 October 2013

Basics: Reasons to move from DataTables to Generic Collections

I think these days no community member writes or speaks about using DataTables and DataSets for data operations. But there are a number of real projects built using them, and many developers are still happy using them in their projects. Sometimes it is not easy to completely replace DataTables with typed generic lists, particularly in large projects. But now is the right time to move, as future developers may not even learn about DataTables :).

Generic collections have a number of advantages over DataTables. One cannot imagine a day without generic collections once one gets to know how beneficial they are. Following is a list of reasons to move from DataTables to collections that I can think of now:
  1. A DataTable stores values as boxed objects, and they have to be unboxed whenever they are read, which adds overhead at runtime. Values in generic collections are strongly typed, so no boxing is involved (see the sketch after this list).
  2. With a DataTable, unboxing, and hence type checking, happens at runtime; a mismatch between the source and target types leads to a runtime exception. With generic collections, types are checked at compile time, so such mismatches are caught during compilation.
  3. .NET languages have very nice support for creating collections, like object and collection initializers. We don't have such features for DataTables.
  4. LINQ queries can be used on both DataTables and collections, but the experience of writing queries on generic collections is better because of the IntelliSense support provided by Visual Studio.
  5. DataTables are .NET-specific; we often see issues with serializing and de-serializing them in web services. Generic collections are easier to serialize and de-serialize, so they can be used in any service and consumed from a client written in any language.
  6. ORMs are becoming increasingly popular and they use generic collections for all data operations.
  7. Mocking DataTables in unit tests is a pain, as it involves re-creating the structure of the table wherever it is needed, whereas a generic collection needs a class defined just once.
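
To make points 1 and 2 concrete, here is a small illustrative sketch (the Person type and the column names are assumptions) contrasting the two approaches:

using System;
using System.Collections.Generic;
using System.Data;

public class Person
{
    public string Name { get; set; }
    public int Age { get; set; }
}

class Demo
{
    static void Main()
    {
        // DataTable: values are stored as object, so reads need casts that fail only at runtime
        var table = new DataTable();
        table.Columns.Add("Name", typeof(string));
        table.Columns.Add("Age", typeof(int));
        table.Rows.Add("Ravi", 25);
        int ageFromTable = (int)table.Rows[0]["Age"];   // unboxing; a wrong cast throws at runtime

        // Generic collection: strongly typed, checked at compile time, no boxing
        var people = new List<Person>
        {
            new Person { Name = "Ravi", Age = 25 }      // object and collection initializers
        };
        int ageFromList = people[0].Age;

        Console.WriteLine("{0}, {1}", ageFromTable, ageFromList);
    }
}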

These are my opinions on preferring collections over DataTables. Any feedback is welcome.

Happy coding!