
ASP.NET MVC, Data Access, and Deployment Content Maps Published


We are pleased to announce the publication on the ASP.NET site of three guides to ASP.NET documentation resources:

The content maps provide links to high-quality, up-to-date tutorials, articles, blogs, videos, and forum and StackOverflow threads. The guides exist on MSDN, but we’re publishing updated versions on the ASP.NET site now in hopes that they’ll get more use and generate more feedback. With your help, we hope to update them every 4-6 weeks with the latest and best links. The top of each page includes an invitation to email us with suggestions:

If you know a great blog post, StackOverflow thread or any other link that would be useful, send us an email with the link.


Getting started with ASP.NET WebAPI OData in 3 simple steps


With the upcoming ASP.NET 2012.2 release, we’ll be adding support for OData to WebAPI. In this blog post, I’ll go over the three simple steps you’ll need to go through to get your first OData service up and running:

  1. Creating your EDM model
  2. Configuring an OData route
  3. Implementing an OData controller

Before we dive in, note that the code snippets in this post won’t work if you’re using the RC build. You can upgrade to our latest nightly build by following this helpful blog post.

1) Creating your EDM model

First, we’ll create an EDM model to represent the data model we want to expose to the world. The ODataConventionModelBuilder class makes this easy by using a set of conventions to reflect over your type and come up with a reasonable model. Let’s say we want to expose an entity set called Movies that represents a movie collection. In that case, we can create a model with a couple of lines of code:

    ODataConventionModelBuilder modelBuilder = new ODataConventionModelBuilder();
    modelBuilder.EntitySet<Movie>("Movies");
    IEdmModel model = modelBuilder.GetEdmModel();

2) Configuring an OData route

Next, we’ll configure an OData route. The only difference from a regular Web API route is that you call MapODataRoute instead of MapHttpRoute and pass in your model. The model is used to parse the request URI as an OData path and to route the request to the right entity set controller and action. The configuration looks like this:

    config.Routes.MapODataRoute(routeName: "OData", routePrefix: "odata", model: model);

The route prefix above is the prefix for this particular route, so it only matches request URIs that start with http://server/vroot/odata, where vroot is your virtual root.

3) Implementing an OData controller

Finally, we just have to implement our MovieController. Instead of deriving from ApiController, you’ll need to derive from ODataController. ODataController is a new base class that wires up the OData formatting and action selection for you. Here’s what an implementation might look like:

    public class MoviesController : ODataController
    {
        List<Movie> _movies = TestData.Movies;

        [Queryable]
        public IQueryable<Movie> Get()
        {
            return _movies.AsQueryable();
        }

        public Movie Get([FromODataUri] int key)
        {
            return _movies[key];
        }

        public Movie Patch([FromODataUri] int key, Delta<Movie> patch)
        {
            Movie movieToPatch = _movies[key];
            patch.Patch(movieToPatch);
            return movieToPatch;
        }
    }

There are a few things to point out here. Notice the [Queryable] attribute on the Get method. It enables OData query syntax on that particular action, so you can apply filtering, sorting, and other OData query options to the results of the action. Next, the [FromODataUri] attributes on the key parameters instruct Web API that the parameters come from the URI and should be parsed as OData URI parameters rather than as regular Web API parameters. Finally, Delta<T> is a new OData class that makes it easy to perform partial updates on entities.
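To make this concrete, here are a few requests such a controller could serve. The URIs and the Title property used below are illustrative only; they assume the "odata" route prefix configured earlier and a hypothetical Title property on Movie:

    GET http://server/vroot/odata/Movies?$orderby=Title     handled by Get(), with [Queryable] applying the $orderby
    GET http://server/vroot/odata/Movies(5)                 handled by Get(int key), with 5 bound via [FromODataUri]
    PATCH http://server/vroot/odata/Movies(5)               handled by Patch(int key, Delta<Movie> patch); the body carries only the changed properties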

Instead of deriving from ODataController, you can also choose to derive from EntitySetController. EntitySetController is a convenient base class for exposing entity sets that provides simple methods you can override. It also takes care of sending back the right OData response in a variety of cases, like sending a 404 Not Found if an entity with a certain key could not be found. Here’s what the same implementation as above looks like with EntitySetController:

    public class MoviesController : EntitySetController<Movie, int>
    {
        List<Movie> _movies = TestData.Movies;

        [Queryable]
        public override IQueryable<Movie> Get()
        {
            return _movies.AsQueryable();
        }

        protected override Movie GetEntityByKey(int key)
        {
            return _movies[key];
        }

        protected override Movie PatchEntity(int key, Delta<Movie> patch)
        {
            Movie movieToPatch = _movies[key];
            patch.Patch(movieToPatch);
            return movieToPatch;
        }
    }

Notice how you don’t need [FromODataUri] anymore, because EntitySetController has already added it for you on its own action parameters. That’s just one of several advantages of using EntitySetController as a base class.

Protect your Queryable API with the validation feature in ASP.NET Web API OData


The previous blog post showed how easy it is to enable OData query syntax for a particular action using Web API OData. Simply add the Queryable attribute to your action as follows, and you are done.

    [Queryable]
    public IQueryable<WorkItem> Get(int projectId)

It not only works for actions serving the OData format, but also applies to plain Web API actions using other formats, such as JSON serialized by JSON.NET. It greatly reduces the need to hand-write actions for operations like top, skip, order-by, and filter. It is so powerful that it is hard to imagine building a real Web API application without it.

Now that everything works beautifully, why validation?

As great as this may sound, adding Queryable to your action could expose your service to a denial-of-service (DoS) attack. A very complex query could take a long time to complete and burn many CPU cycles, with the unwanted side effect of blocking access to the entire service. For example, sorting a large set of records on a non-indexed column could take a long time, so it makes sense to restrict the properties that can be used in $orderby.

Also keep in mind that if your queryable action is only exposed within your organization, you are probably fine, given that all your requests come from trusted clients. However, as soon as you expose your queryable action outside your organization, you must take extra caution. One way to protect your service is to validate incoming queries and reject the bad ones up front.

In short, if your action returns a large set of data and can be accessed by untrusted clients, it is strongly recommended that you add a layer of defense to your service using the new validation feature. In this blog post, I will go through a few common scenarios and show how your queryable actions can be protected.

Basic Scenarios

Starting with the Web API OData RC release, we have added some really convenient properties on QueryableAttribute to help with validating the incoming queries. Let us go through them one by one.

Scenario 1: Only allow $top and $skip

A lot of the time, all you want to support is client-driven paging. In that case, you can limit your action to allow only queries that contain $top and $skip; none of the advanced features of $orderby or $filter matter to you. This is easily done with a simple property called AllowedQueryOptions.

    [Queryable(AllowedQueryOptions = AllowedQueryOptions.Skip | AllowedQueryOptions.Top)]
    public IQueryable<WorkItem> Get(int projectId)

Now with this, a request to http://localhost/api/workitems/?$top=2&$skip=3 works fine, but a request to http://localhost/api/workitems/?$orderby=Description or http://localhost/api/workitems/?$filter=Id eq 123 will receive a response of 400 Bad Request.

Scenario 2: Limit the value for $top to 100

In the client-driven paging scenario, the server sometimes wants to cap the number of records the client can request with $top. You can use the MaxTop property for this.

    [Queryable(MaxTop = 100)]
    public IQueryable<WorkItem> Get(int projectId)

Now a request to http://localhost/api/workitems/?$top=120 will fail since it exceeds the limit of 100.

Scenario 3: Limit the value for $skip to 200 

Similar to the previous scenario, the server may want to cap the number of records that can be skipped. You can use the MaxSkip property to achieve that.

    [Queryable(MaxSkip = 200)]
    public IQueryable<WorkItem> Get(int projectId)

Now a request to http://localhost/api/workitems/?$skip=220 will fail since it exceeds the limit of 200.

Scenario 4: Only allow ordering the results by the Id property, nothing else

A lot of the time, people only want to allow ordering by a fixed set of properties; ordering by arbitrary properties can be slow and unwanted. Now you have a simple way to enforce that using AllowedOrderByProperties.

    [Queryable(AllowedOrderByProperties = "Id")]
    public IQueryable<WorkItem> Get(int projectId)

Now a request to http://localhost/api/workitems/?$orderby=Description will fail since ordering by the Description property is not allowed.

Scenario 5: Only need eq comparison in $filter

$filter is one of the most powerful query options you can apply to the results. If you know that your trusted clients only use equality comparisons inside $filter, you should validate that as well using AllowedLogicalOperators. Here is how you can do it.

    [Queryable(AllowedLogicalOperators = AllowedLogicalOperators.Equal)]
    public IQueryable<WorkItem> Get(int projectId)

Now a request to http://localhost/api/workitems/?$filter=Id eq 100 will succeed, while a request to http://localhost/api/workitems/?$filter=Id gt 100 will fail since it does not allow gt.

Scenario 6: Do not need any of the arithmetic operations in $filter

The OData URL conventions support a number of convenient arithmetic operators. However, it is possible your scenario doesn’t need any of them. You can turn them all off by setting AllowedArithmeticOperators to None.

    [Queryable(AllowedArithmeticOperators = AllowedArithmeticOperators.None)]
    public IQueryable<WorkItem> Get(int projectId)

Now a request to http://localhost/api/workitems/?$filter=Votes gt 20 will succeed, while a request to http://localhost/api/workitems/?$filter=Votes mul 2 gt 20 will fail since this action does not allow multiplication, which is one of the arithmetic operations.

Scenario 7: Only need to use StartsWith function in $filter

One of the reasons $filter is powerful is that it supports a lot of useful functions; you can read the whole list in the official OData spec. Often you only use a handful of them, so again you can limit what is allowed using the AllowedFunctions property.

    [Queryable(AllowedFunctions = AllowedFunctions.StartsWith)]
    public IQueryable<WorkItem> Get(int projectId)

Now a request to http://localhost/api/workitems/?$filter=startswith(Name, ‘A’) will succeed, while a request to http://localhost/api/workitems/?$filter=length(Name) lt 10 will fail because this action does not allow the length function.

Advanced Scenarios

If you can’t restrict the queries with the out-of-the-box properties described in the previous section, you are NOT out of luck. You can use our extensibility points to add your own validation logic easily.

Scenario 8: How to customize default validation logic for $skip, $top, $orderby, $filter

The validation logic for those four query options can be extended by overriding their respective validator classes: TopQueryValidator, SkipQueryValidator, OrderByQueryValidator, and FilterQueryValidator.

The FilterQueryValidator in particular has many virtual methods to help you walk the abstract syntax tree (AST) that is generated from the incoming request.

Here is an example of how you can restrict the properties used inside $filter. First, derive from FilterQueryValidator and override the ValidateSingleValuePropertyAccessNode method so you can examine the SingleValuePropertyAccessNode and throw if its property name is something you don’t want to allow.

    public class RestrictiveFilterByQueryValidator : FilterQueryValidator
    {
        public override void ValidateSingleValuePropertyAccessNode(SingleValuePropertyAccessNode propertyAccessNode, ODataValidationSettings settings)
        {
            // Validate if we are accessing some sensitive property of WorkItem, such as Votes
            if (propertyAccessNode.Property.Name == "Votes")
            {
                throw new ODataException("Filter with Votes is not allowed.");
            }

            base.ValidateSingleValuePropertyAccessNode(propertyAccessNode, settings);
        }
    }

Then you can wire your custom validator up via a custom QueryableAttribute.

    public class MyQueryableAttribute : QueryableAttribute
    {
        public override void ValidateQuery(System.Net.Http.HttpRequestMessage request, ODataQueryOptions queryOptions)
        {
            if (queryOptions.Filter != null)
            {
                queryOptions.Filter.Validator = new RestrictiveFilterByQueryValidator();
            }

            base.ValidateQuery(request, queryOptions);
        }
    }

The last step is to use your custom QueryableAttribute to decorate your action.

    [MyQueryable]
    public IQueryable<WorkItem> Get(int projectId)

Now a request to http://localhost/api/workitems/?$filter=Votes eq 20 will fail because the custom validator does not allow filtering by the Votes property.

Scenario 9: Use ODataQueryOptions to validate the query only

One of the hidden jewels of Web API OData queryable support is the low-level API ODataQueryOptions. If you want fine-grained control over when the query is validated and when it is applied, you can use it instead of QueryableAttribute. Here is what an action looks like with ODataQueryOptions applied manually; this action returns the count of the queryable result.

    public int GetCount(ODataQueryOptions<WorkItem> queryOptions)
    {
        // Validate
        queryOptions.Validate(new ODataValidationSettings() { AllowedQueryOptions = AllowedQueryOptions.Top | AllowedQueryOptions.Skip });

        // you can apply your query without using queryOptions at all
        IQueryable<WorkItem> result = queryOptions.ApplyTo(originalResult) as IQueryable<WorkItem>;

        return result.Count();
    }

Notice that you can easily call Validate without calling ApplyTo, as the example above shows. You can also retrieve the raw query values as strings and implement your own ApplyTo logic.
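For instance, here is a minimal sketch of reading the raw $filter text yourself; the WorkItem model is the same as above, and QueryBackend is a hypothetical method standing in for your own translation logic:

    public IEnumerable<WorkItem> Get(ODataQueryOptions<WorkItem> queryOptions)
    {
        // RawValue is the unparsed query text, e.g. "Id gt 100" for $filter=Id gt 100.
        string rawFilter = queryOptions.Filter != null ? queryOptions.Filter.RawValue : null;

        // Translate rawFilter into your own backend query instead of calling ApplyTo.
        return QueryBackend(rawFilter);
    }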

Likewise, if you have a custom validator, you can set it on queryOptions directly. Here is how that could look.

    public int Get(ODataQueryOptions<WorkItem> queryOptions)
    {
        // Validate using custom validator
        queryOptions.Filter.Validator = new RestrictiveFilterByQueryValidator();
        queryOptions.Validate(new ODataValidationSettings());

        // you can apply your query without using queryOptions
        IQueryable<WorkItem> result = queryOptions.ApplyTo(originalResult) as IQueryable<WorkItem>;

        return result.Count();
    }

To conclude, ASP.NET Web API OData has not only made it super easy to enable OData query syntax on your actions, it also provides you with tools to help protect your queryable actions with upfront validation so that undesired and untrusted queries will be rejected. For more information on ASP.NET Web API OData security, please visit here. Happy coding!

Getting a MIME type from a file extension in ASP.NET 4.5


If you've ever implemented a file upload or download scenario in ASP.NET, you've likely come across the situation of having to provide a MIME type based on a file extension. IIS does this automatically when it serves files, but it seems like there isn't a good way to do it from ASP.NET. Searches on the web might point you to building your own dictionary of MIME types or just writing a switch statement, but this felt like a hack to me, so I turned to a resident expert, Levi Broderick.

It turns out that in ASP.NET 4.5 we shipped a little-known new type, System.Web.MimeMapping, which has an API called GetMimeMapping(string fileName). Here is the MSDN documentation. You can pass either a file name, as the parameter name implies, or just an extension.
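For example, here is a minimal sketch of using it when serving a download; the file name is made up:

    // MimeMapping lives in the System.Web assembly (ASP.NET 4.5).
    string mimeType = MimeMapping.GetMimeMapping("annual-report.pdf"); // "application/pdf"

    // e.g. inside a page or handler that writes the file to the response:
    Response.ContentType = mimeType;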

This is awesome. It turns what would have been a pain into a one line call. But how is it implemented - did the ASP.NET team just do the "hack" for you by building a table?

The answer is yes and no. If you are running IIS in classic mode, the fallback is indeed a Dictionary of over 300 mappings of file extensions to MIME types. However, if you are running in integrated pipeline mode (most people), the list of MIME types used by the API is actually the list in IIS. This means that as you upgrade Windows and more MIME types are added to IIS, the code you write doesn't need to be updated to take advantage of the changes.

Hope this hidden gem helps!

Update: As folks have mentioned in the comments, this should not be the only check you run on a file upload. It does, however, simplify the step of generating the MIME type mapping once you have verified, for security reasons, that you have the correct file type by checking its bytes.
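As a purely illustrative sketch of such a byte check (PNG is chosen as an example; this is not a complete upload validation story):

    // PNG files begin with the 8-byte signature 89 50 4E 47 0D 0A 1A 0A.
    private static readonly byte[] PngSignature = { 0x89, 0x50, 0x4E, 0x47, 0x0D, 0x0A, 0x1A, 0x0A };

    private static bool LooksLikePng(byte[] fileBytes)
    {
        if (fileBytes == null || fileBytes.Length < PngSignature.Length)
        {
            return false;
        }
        for (int i = 0; i < PngSignature.Length; i++)
        {
            if (fileBytes[i] != PngSignature[i])
            {
                return false;
            }
        }
        return true;
    }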

Pre-release of ASP.NET Scaffolding with a Web Forms scaffold generator


Today we are sharing a pre-release build of a new code generation framework known as ASP.NET Scaffolding, as well as a scaffold generator (code generator) for Web Forms. The Web Forms scaffold generator can automatically build Create-Read-Update-Delete (CRUD) views based on a model.

Introduction

Many programming tasks involve writing standard “boilerplate” code. Templates can help avoid hand-writing all of this code, but only to a degree, because they are static. MVC 3 introduced the concept of scaffolding: generating code dynamically based on existing artifacts in a project. We are now extending this concept to other frameworks, starting with Web Forms for generating CRUD views.

The scaffolding framework provides APIs, conventions and a common delivery and discovery mechanism enabling a consistent experience for all ASP.NET scaffold generators.

Getting Started

As a prerequisite, you need to be running Visual Studio 2012 (Pro, Premium or Ultimate) with the Web Tooling Extensions 2012.2 Update.

  1. First, download and install the VSIX extension. From the Visual Studio main menu select “Tools” –> “Extensions and Updates…”. From the left sidebar select “Online” and search for “Microsoft ASP.NET Scaffolding”.
  2. Create a new Visual C# ASP.NET Web Forms Application project targeting .NET 4.5. If you are not starting a new project you can convert any project to a web application project by right-clicking on the solution and choosing “Convert to Web Application”.
  3. In the project,  open the NuGet Package manager and install the “ASP.NET Web Forms Scaffold Generator” (“Microsoft.AspNet.Scaffolding.WebForms”) package.
    • Right click on your project in Solution Explorer and select “Manage NuGet Packages…”
    • Ensure that “Include Prerelease” is selected in the dropdown at the top of the NuGet window
    • Search for “ASP.NET Web Forms Scaffolding” (or using the package ID of “Microsoft.AspNet.Scaffolding.WebForms”)
    • Click “Install”

Note: If you started from a template other than the one above, or you do not have the latest Visual Studio update and have not used ASP.NET Friendly URLs in your project, you will have to enable Friendly URLs manually, as sketched below. For more information, please check the ASP.NET Friendly URLs website.
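For reference, here is a rough sketch of what enabling Friendly URLs in App_Start/RouteConfig.cs typically looks like, assuming the Microsoft.AspNet.FriendlyUrls package is installed; your generated RouteConfig may differ:

    using Microsoft.AspNet.FriendlyUrls;
    using System.Web.Routing;

    public static class RouteConfig
    {
        public static void RegisterRoutes(RouteCollection routes)
        {
            // Maps extensionless URLs (e.g. /Person/Edit) onto the corresponding .aspx pages.
            routes.EnableFriendlyUrls();
        }
    }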

Running the scaffold generator

  1. Create a Model: The Web Forms scaffold generator creates Create-Read-Update-Delete (CRUD) views when provided with a model. Currently it only supports simple POCO models without inheritance or related entities. Let's use the sample model below:
    public class Person
    {
        [ScaffoldColumn(false)]
        public int ID { get; set; }
        public string FirstName { get; set; }
        public string LastName { get; set; }
    }
  2. Build your project: from the “Build” menu select “Build Solution”.
  3. To run the scaffold generator, open Solution Explorer and right click on your project. Select "Add >” and then select “Scaffold…". 
     

  4. Click "Add" to run the selected scaffold generator. 


  5. The “Add Web Forms Pages” dialog will appear.
  6. Select the model class that you created under the "Model class" dropdown. 


  7. Select "Add new data context…" from the "Data context class" dropdown. Choose a name for the new data context and click OK.
  8. Depending on your preference choose whether to generate desktop or mobile views, or both.
  9. Click "Add" to run the scaffold generator.

Check the Results

  • All the files generated by the Web Forms scaffold generator can be found in the Views/<Model>/ folder. In addition, if you chose to create a new data context, Web.config will have been modified to add a connection string for the new DataContext class.
  • Each CRUD component should have a generated view with a corresponding code-behind.
  • For example, open Edit.aspx or Edit.mobile.aspx to examine it.
  • Inside of Edit.aspx, there should be a DynamicEntity control which in turn generates Label and DynamicControl pairs, one for each property of your model.
  • Inside of Edit.aspx.cs or Edit.mobile.aspx.cs, there should be GetItem and UpdateItem methods properly referencing the <Model> that you created for their queries.
  • To run the new scaffold, select Application Root\Views\Person\Default.aspx and hit Run. You should see the People list with links to create/edit/delete entries:


  • If you generated mobile views you can easily see them as well by activating the F12 developer tools and then changing the user agent of the browser to a mobile one (e.g. from the developer tools menu select “Tools” –> “Change user agent string” –> “IE10 for Windows Phone 8”) and refreshing the page:


Known Issues

  • If you don't have the Web Tools 2012.2 RC Fall Update installed, you need to add the following to your Global.asax.cs for FriendlyUrls to work:
    using System.Web.Routing;
    ...
    protected void Application_Start(object sender, EventArgs e)
    {
        RouteConfig.RegisterRoutes(RouteTable.Routes);
    }
  • Related model entities are not supported in this pre-release.
  • Once you install the “ASP.NET Web Forms Scaffold Generator” package the dependent package “ASP.NET Scaffolding Infrastructure” will get installed but may not get marked as installed in NuGet. This is due to a bug in NuGet that is on their list to get fixed.
  • Only solutions with a single project within them are supported.
  • To uninstall first remove the NuGet packages and then manually delete the following folders under the “packages” subfolder of your project: “Microsoft.AspNet.Scaffolding.Infrastructure.0.1.0-pre” and “Microsoft.AspNet.Scaffolding.WebForms.0.1.0-pre”.
  • jQuery Mobile 1.9.1 or later is not supported yet.
  • Visual Studio 2011, Visual Web Developer, .NET 4.0 and Visual Basic projects are not supported in this pre-release.


We'd love to hear from you

  • What do you think of the experience and flow overall?
  • What features would you like to see in the Web Forms scaffold generator itself?

A Message Flow in ASP.NET Web API OData


One of the biggest benefits of ASP.NET Web API is that it gives you 100% transparency into the source code, because it is open source. You can easily enlist in the code repository and add the symbol path based on the instructions at the CodePlex site, and off you go: you can debug the Web API code as if you were the original author.

Now, if you are like me and like to put a few breakpoints in the source code before hitting F5, you will need to understand the OData stack a little better. If everything goes as expected, you might not need those breakpoints. But if anything goes wrong, such as a 500 or 400 error, or even a closed connection, understanding the message flow and stopping at the right breakpoints will be very useful for finding the root cause of the issue.

In this blog post, I will walk you through the common breakpoints in a very simple scenario. Imagine your client has sent a POST request to your OData service, which was written using the three simple steps described in an earlier blog. Your service is self-hosted, and your request message looks like the following.

    POST http://localhost:50231/odata/Customers HTTP/1.1
    Content-Type: application/json
    Host: localhost:50231
    Content-Length: 25

    {"Id":1,"Name":"Hongmei"}

What exactly happens from the moment this message arrives at the server until the new Customer gets added in the backend? Let us take a closer look. 

Stop 1: Parse the incoming URL into OData Path Segments

The very first stop is a custom Route Constraint that parses the incoming request URL into a list of OData Path Segments. Open the file at {git}\src\System.Web.Http.OData\OData\Routing\ODataPathRouteConstraint.cs, and the breakpoint should be placed at the beginning of the Match method. Just like any routing constraint, this one gets executed every time a particular route is evaluated to see if the incoming HTTP request should match.

In this particular case, since the URL is “http://localhost:50231/odata/Customers”, and you have set up your ODataRoute as follows:

    configuration.Routes.MapODataRoute(routeName: "odataRoute", routePrefix: "odata", model: model);

Because the incoming request matches the correct base address “http://localhost:50231/” as well as the prefix “odata”, the OData route gets picked. The rest of the path, “Customers”, is then processed by the ODataPathHandler; it matches the “~/entityset” template, from which we determine that “Customers” is the entity set name. All of this information is stored in a data structure called ODataPath, which is a simple collection of OData path segments. The ODataPath is saved along with the request.

As part of the Route Constraint, we also figure out the controller name based on a few OData Routing conventions. In this case, we know the {controller} value should be “Customers”, so it is added to the route value collection.

There are many more route templates and routing conventions that are supported out of box, and a future blog will describe each one in great detail.

Stop 2: Select the Controller

After routing, we should take a stop at the Controller selector. Open the file at {git}\src\System.Web.Http\Dispatcher\DefaultHttpControllerSelector.cs, and put a breakpoint at the SelectController method. This controller selector will read the value of the {controller} token out of the route value collection.

In this case, because the controller token is “Customers”, we will search for a type called “CustomersController”. It then instantiates the controller instance, and invokes the controller.ExecuteAsync method to continue executing the rest of the pipeline.

Please note this is vanilla Web API behavior. Web API OData did not need to override the default controller selector at all.

Stop 3: Select the Action

The next stop is to select action. Open the file at {git}\src\System.Web.Http.OData\OData\Routing\ODataActionSelector.cs, and put a breakpoint at the SelectAction method. Just like what happens earlier with controller name, the default OData selector asks OData Routing Conventions to figure out the correct action name to invoke based on the ODataPath.

In this case, because the request is a “POST”, the convention first looks for an action called “PostCustomer”. Since the “PostCustomer” method is not defined on the controller in our simple case, it falls back to “Post” as the action name, which is defined as part of EntitySetController.

Stop 4: Deserialize the request message body

Once the action is selected, the next thing is to prepare the inputs for the action before we can invoke the action. That is where formatters are being used.

In this case, since it is an OData service, we will use ODataMediaTypeFormatter. You can open the file at {git}\src\System.Web.Http.OData\OData\Formatter\ODataMediaTypeFormatter.cs, and put a breakpoint at the ReadFromStreamAsync method. 

In this case, OData formatter will be able to deserialize the request body into a Customer instance, which becomes the input to the Post method. 

Stop 5: Invoke your Action in your Controller

After this long journey, we have finally arrived at the action, where your code is executed. Feel free to open the file {git}\src\System.Web.Http.OData\OData\EntitySetController.cs and put a breakpoint at the “Post” method. Your code overriding the “GetKey” and “CreateEntity” methods will be invoked from there.
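For reference, here is a minimal sketch of what such a controller might look like; the in-memory list is illustrative only, and the real service from the earlier blog may differ:

    public class CustomersController : EntitySetController<Customer, int>
    {
        private static readonly List<Customer> _customers = new List<Customer>();

        // Invoked by EntitySetController's Post action to persist the new entity.
        protected override Customer CreateEntity(Customer entity)
        {
            _customers.Add(entity);
            return entity;
        }

        // Invoked to read the entity's key, used when building the response links.
        protected override int GetKey(Customer entity)
        {
            return entity.Id;
        }
    }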

Stop 6: Serialize the response message body

The last stop, hopefully, is to write out the response. In this case, ODataMediaTypeFormatter is again being selected to actually serialize the content for the response message. Put a breakpoint at the WriteToStreamAsync method, and you will see it writes out the content to the underlying stream. The following is an example of the response.

    HTTP/1.1 200 OK
    Content-Length: 108
    Content-Type: application/json; charset=utf-8
    Server: Microsoft-HTTPAPI/2.0
    DataServiceVersion: 3.0
    Date: Thu, 21 Feb 2013 01:03:45 GMT

    {
      "odata.metadata":"http://localhost:50231/odata/$metadata#Customers/@Element","Id":1,"Name":"Hongmei"
    }

Congratulations, you have just completed the journey of a message flow through various points of a live OData service. There are many other places you can put breakpoints, such as QueryableAttribute in the querying case. If debugging is not an option, you can also turn on tracing for diagnosis. See this for more information on how tracing can be used.
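If you go the tracing route, a minimal sketch, assuming the Microsoft.AspNet.WebApi.Tracing package is installed, is a single call during configuration:

    // Routes Web API trace output to System.Diagnostics, where a trace listener can pick it up.
    config.EnableSystemDiagnosticsTracing();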

Hopefully, this kind of deep dive will help you understand your OData service better. Happy debugging!

MVC Single Page Application Template for ASP.NET and Web Tools 2012.2


With the final release of ASP.NET and Web Tools 2012.2, we refreshed the single page application page on asp.net. It describes the KnockoutJS template and introduces four community-created SPA templates that you can install as MVC templates.

Four improvements in the RTM release of the MVC SPA template over the RC release are worth noting.

Antiforgery support for Web API calls

The ValidateAntiForgeryToken filter only works for MVC calls; Web API doesn't have a default antiforgery filter yet. We implemented the “ValidateHttpAntiForgeryToken” attribute in Filters\ValidateHttpAntiForgeryTokenAttribute.cs to support antiforgery for Web API calls. See the Anti-CSRF section in the KnockoutJS Template article for more information.

The ValidateHttpAntiForgeryToken attribute is applied to the TodoController class and to three methods of the TodoListController class.
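For illustration, here is a minimal sketch of what such a Web API filter could look like. This is an approximation, not the template's actual source; it assumes the client sends the "cookieToken:formToken" pair (shown below) in a RequestVerificationToken header:

    using System.Collections.Generic;
    using System.Linq;
    using System.Net;
    using System.Net.Http;
    using System.Web.Helpers;
    using System.Web.Http.Controllers;
    using System.Web.Http.Filters;

    public class ValidateHttpAntiForgeryTokenAttribute : ActionFilterAttribute
    {
        public override void OnActionExecuting(HttpActionContext actionContext)
        {
            IEnumerable<string> headerValues;
            if (actionContext.Request.Headers.TryGetValues("RequestVerificationToken", out headerValues))
            {
                string[] tokens = headerValues.First().Split(':');
                if (tokens.Length == 2)
                {
                    // Throws if the cookie/form token pair does not validate.
                    AntiForgery.Validate(tokens[0].Trim(), tokens[1].Trim());
                    return;
                }
            }
            actionContext.Response = actionContext.Request.CreateResponse(HttpStatusCode.Forbidden);
        }
    }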

In index.cshtml, we have the following code:

@functions{
public string GetAntiForgeryToken()
    {
        string cookieToken, formToken;
        AntiForgery.GetTokens(null, out cookieToken, out formToken);
        return cookieToken + ":" + formToken;                
    }
}
<input id="antiForgeryToken" type="hidden" value="@GetAntiForgeryToken()" />

In the todo.datacontext.js file, as part of the AJAX call, we have:
var antiForgeryToken = $("#antiForgeryToken").val();
if (antiForgeryToken) {
    options.headers = {
        'RequestVerificationToken': antiForgeryToken
    }
}

Use camel case for JSON data

We listened to John Papa’s advice and changed to camel casing for all the JavaScript models. We added the following line in App_Start/WebApiConfig.cs to use camel casing for JSON serialization.

// Use camel case for JSON data.
config.Formatters.JsonFormatter.SerializerSettings.ContractResolver = 
new CamelCasePropertyNamesContractResolver();

Support the jQuery 1.9 JSON parser change

jQuery 1.9 changed its JSON parser so that it no longer accepts an empty string as valid JSON. We changed the jquery.validate.unobtrusive.js file for all the MVC templates to support jQuery 1.9. For the SPA template, we use the “text” data type instead of “json” in the todo.datacontext.js file when we are not expecting JSON output:

function saveChangedTodoList(todoList) {
        clearErrorMessage(todoList);
        return ajaxRequest("put", todoListUrl(todoList.todoListId), todoList, "text")
            .fail(function () {
                todoList.errorMessage("Error updating the todo list title. Please make sure it is non-empty.");
            });
    }

    // …
function ajaxRequest(type, url, data, dataType) { // Ajax helper
        var options = {
            dataType: dataType || "json",
            contentType: "application/json",
            cache: false,
            type: type,
            data: data ? data.toJson() : null
        };
        
// …
return $.ajax(url, options);
    }

Support KO ObservableArray IntelliSense by using JavaScript XML Documentation Comments

The editor uses the files referenced in scripts/_references.js to provide Knockout IntelliSense in data-bind attributes. Sometimes it is impossible for the JavaScript engine to figure out the correct element type of a KO observableArray, so we add XML documentation comments to help the JavaScript engine provide meaningful JavaScript IntelliSense.

todo.model.js

function todoList(data) {
        var self = this;
        data = data || {};
        // …
        self.todos = ko.observableArray(importTodoItems(data.todos));

    // …

function importTodoItems(todoItems) {
        /// <returns value="[new todoItem()]"></returns>
        return $.map(todoItems || [],
                function (todoItemData) {
                    return datacontext.createTodoItem(todoItemData);
                });
    }

todo.viewmodel.js:

window.todoApp.todoListViewModel = (function (ko, datacontext) {
    /// <field name="todoLists" value="[new datacontext.todoList()]"></field>
    var todoLists = ko.observableArray(),

So in general, if we have simple code like the one below,
function TaskListViewModel() {
var self = this;
    self.tasks = ko.observableArray([]);

we can convert it to the following to support IntelliSense for tasks in VS:
function TaskListViewModel() {
    var self = this;
    self.tasks = ko.observableArray(function () {
        /// <returns value="[new Task()]"></returns>
        return [];
    }());

Summary

There are quite a few SPA frameworks around. You can easily customize an MVC template and create a VSIX file for other people to install. If you have a great template to share, feel free to contact us to get it listed on our asp.net SPA page.

Translating OData queries to HQL


ASP.NET Web API OData makes it really simple to expose an IQueryable backend to be queried using the OData query syntax. Check out some samples here. If you are using Entity Framework as your ORM (object-relational mapper) to talk to your database, or you just store all your data in memory (LINQ to Objects), you are in good shape because you already have an IQueryable. If you are talking to your own custom data source, you might not be lucky enough to have a LINQ provider, or you might have one with an incomplete implementation. If you are in this mess and are about to undertake the herculean task of implementing an IQueryable (don’t believe me? check out the “LINQ: Building an IQueryable provider” series), STOP now and thank me later.

Implementing a LINQ provider is hard. The query space that LINQ allows is huge, and it is almost always easy to write a query that a provider cannot translate. LINQ to Objects is an exception, but then the LINQ to Objects provider does not do any query translation in the truest sense. IQueryable is a powerful interface, and if all you want to do is expose your backend for OData querying, it begs the question: is it really necessary? If you are building on top of ASP.NET Web API OData, the answer is NO. The query space covered by the OData query syntax is much smaller than LINQ’s: there are no complex projections ($select is very restricted), no nested queries to worry about, no complex joins or SelectMany’s, no GroupBy’s or aggregates, and so on. As a result, implementing the OData query space is much easier than implementing a complete LINQ provider.

One of the places where Web API OData shines over WCF Data Services for building OData services is flexibility, and querying is no exception. Slap the [Queryable] attribute on an action returning IQueryable and you have the full power of OData querying: simplicity FTW. If your backend is not IQueryable, model bind the OData query to ODataQueryOptions, then translate and “apply” the query manually to your backend: flexibility FTW.

The rest of the post shows the second option in detail using a sample with NHibernate as the backend ORM layer.

NHibernate and HQL

NHibernate is an ORM layer for the .NET platform; it is a .NET port of the Java ORM Hibernate. NHibernate uses a query language, HQL (Hibernate Query Language), that is similar in appearance to SQL. Compared with SQL, however, HQL is fully object-oriented and understands notions like inheritance, polymorphism, and association.

The sample uses Fluent NHibernate to define the object-database mappings and SQLite as the backing store. Check out Customer.cs, CustomerMap.cs, and CustomersSessionFactory.cs for the model and the mapping code.

ODataQueryOptions

ODataQueryOptions<T> is designed for when you want to take manual control over the OData query. It then becomes your responsibility to validate and execute the query and return the appropriate results in the response. The following snippet from CustomersController gives you the idea:

public class CustomersController : ApiController
{
    public IEnumerable<Customer> GetCustomers(ODataQueryOptions<Customer> queryOptions)
    {
        // validate the query.
        queryOptions.Validate(_validationSettings);


        // Apply the query.
        IQuery query = queryOptions.ApplyTo(_db);
    }
}

Translating ODataQueryOptions

The ApplyTo call in the previous snippet is where the interesting work happens. ODataQueryOptions exposes the individual $filter, $orderby, $skip, and $top options:

public class ODataQueryOptions
{
    // .. other stuff ..

    public ODataQueryContext Context { get; }

    public FilterQueryOption Filter { get; }

    public OrderByQueryOption OrderBy { get; }

    public SkipQueryOption Skip { get; }

    public TopQueryOption Top { get; }
}

The ApplyTo method takes the individual query options and translates them to HQL, the Hibernate query language.
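To give a feel for how that can be composed, here is a rough sketch; it assumes the backend handle is an NHibernate ISession, and the binder method names and the hard-coded Customer entity are illustrative stand-ins for the sample's actual NHibernate binder classes:

    public static IQuery ApplyTo(this ODataQueryOptions options, ISession session)
    {
        // Build up an HQL string from the parsed query options.
        var hql = new StringBuilder("from Customer");

        if (options.Filter != null)
        {
            hql.Append(" where ").Append(new NHibernateFilterBinder().BindFilter(options.Filter));
        }
        if (options.OrderBy != null)
        {
            hql.Append(" ").Append(new NHibernateOrderByBinder().BindOrderBy(options.OrderBy));
        }

        IQuery query = session.CreateQuery(hql.ToString());

        // $skip and $top map directly onto NHibernate's paging methods.
        if (options.Skip != null)
        {
            query.SetFirstResult(options.Skip.Value);
        }
        if (options.Top != null)
        {
            query.SetMaxResults(options.Top.Value);
        }
        return query;
    }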

$skip and $top

Translating $skip and $top is a piece of cake. The corresponding query options expose the parsed value (an integer) through the Value property. I translate $skip into a SetFirstResult call and $top into a SetMaxResults call.

query.SetMaxResults(topQuery.Value);

query.SetFirstResult(skipQuery.Value);  

$orderby

$orderby is translated to HQL’s “order by” clause in the NHibernateOrderByBinder class. OrderByQueryOption provides access to the parsed list of order-by nodes through the OrderByNodes property. For simplicity, I am only handling ordering by simple properties, but you can see that it is really easy to extend this to nested properties.

stringBuilder.Append("order by ");
foreach (var orderByNode in orderByQuery.OrderByNodes)
{
    var orderByPropertyNode = orderByNode as OrderByPropertyNode;

    if (orderByPropertyNode != null)
    {
        stringBuilder.Append(orderByPropertyNode.Property.Name);
        stringBuilder.Append(
           orderByPropertyNode.Direction == OrderByDirection.Ascending ? " asc," : " desc,");
    }
    else
    {
        throw new ODataException("Only ordering by properties is supported");
    }
}

$filter

Translating $filter is the most interesting part and involves the most code. I translate $filter to an HQL “where” clause in the NHibernateFilterBinder class. Web API OData provides the parsed AST (abstract syntax tree) for the $filter option through the FilterClause property on the FilterQueryOption class. NHibernateFilterBinder is just a tree visitor that walks the AST and builds the “where” clause.

The uber “protected string Bind(QueryNode node)” method looks at the node type and dispatches the call to the node-specific method where the real translation happens. For instance, this is how OData ‘all’ queries are translated:

private string BindAllNode(AllNode allNode)
{
    string innerQuery = "not exists ( from " + Bind(allNode.Source) + " " + allNode.RangeVariables.First().Name;
    innerQuery += " where NOT(" + Bind(allNode.Body) + ")";
    return innerQuery + ")";
}

 

We have seen how easy it is to expose a non-IQueryable data source for querying with the OData query syntax using ASP.NET Web API OData. You can find the complete sample hosted with the aspnet CodePlex samples.


Disabling Knockout Intellisense


Web Tools Extensions 1.2, which is part of the Web Platform Installer package named “Windows Azure SDK for .NET (VS 2012) 1.8.1 – February 2013”, contains a new feature that provides IntelliSense in Web Forms and Web Pages for KnockoutJS MVVM data binding. You can verify that you have Web Tools Extensions 1.2 installed by opening Visual Studio’s “About” dialog and scrolling through the installed products list to find:

                Web Developer Tools    1.2.40208.0

Unfortunately, upon installation several customers have reported slow-downs and lock-ups of Visual Studio when editing certain web pages containing Knockout syntax.  We are actively investigating this issue and hope to have a solution soon.  In the meantime, you can turn this feature OFF by doing the following:

  1. Create a file named TurnKoOff.txt and paste the following text into the file:
    Windows Registry Editor Version 5.00


    [HKEY_CURRENT_USER\Software\Microsoft\VisualStudio\11.0\HTML Editor]
    "KnockoutSupportEnabled"="0"
  2. Create another file named TurnKoOn.txt and paste the following text in:
    Windows Registry Editor Version 5.00

    [HKEY_CURRENT_USER\Software\Microsoft\VisualStudio\11.0\HTML Editor]
    "KnockoutSupportEnabled"=-
    NOTE: If you are running Visual Studio 2012 Express for Web rather than Visual Studio 2012 Professional, Premium or Ultimate, substitute “\VWDExpress\” for “\VisualStudio\”.
  3. Change the extensions of both files from “.txt” to “.reg”.
  4. Ensure that Visual Studio (or Express for Web) is not running.
  5. Double-click TurnKoOff.reg.  You will receive the warning:
    ”Adding information can unintentionally change or delete values and cause components to stop working correctly. If you do not trust the source of this information in TurnKoOff.reg, do not add it to the registry.”
  6. After exercising all appropriate caution, click “Yes”.
  7. Restart Visual Studio or Express for Web.

Knockout Intellisense will now be completely disabled.  It can be turned back on by double-clicking the TurnKoOn.reg file.

Please let us know if this solution does not work adequately for you.

We are interested in learning more about the cases in which customers experience hangs and slow-downs. Please share any information with us that you think will be helpful.

Visual Studio Web Deployment Tutorial Series Updated for Windows Azure and LocalDB


This week we published an updated version of the popular 12-part tutorial series that shows how to deploy an ASP.NET web application with SQL Server databases. Here are links to the new tutorials, with notes about what’s in them and what’s new:

  • Introduction 
    • Overview of the series, prerequisites, download and run the sample application.
    • New overview of database deployment options. The tutorials show how to use Entity Framework Code First Migrations and the dbDacFx provider (for deploying SQL Server databases without using Migrations).  The old ones showed Migrations and the dbFullSQL provider (for migrating SQL Server Compact to SQL Server).
  • Preparing for Database Deployment 
    • How to set up Code First Migrations and create data deployment scripts for use with dbDacFx deployment.
    • Uses LocalDB instead of SQL Server Compact for the development databases. (For people who are using SQL Server Compact, the old tutorial has been left on the ASP.NET site.)
  • Web.config File Transformations
    • How to set up build configuration Web.config transformations (e.g., Release and Debug).
    • Introduces profile-specific Web.config transformations (e.g., Test and Production).
    • Demonstrates the new Web.config transformation preview feature, which makes testing and debugging transforms much easier.
  • Project Properties
    • Review of deployment options specified in project properties.
  • Deploying to Test (to IIS on the development computer)
    • How to install IIS, install SQL Server Express, create SQL Express databases, and deploy the application to local IIS.
    • Now covers IIS installation for both Windows 8 and Windows 7.     
    • Deploys schema and data for the membership database by using the dbDacFx provider.
    • Deploys schema and data for the application database by using Code First Migrations.
  • Setting Folder Permissions
    • How to enable the application to write log files in an application folder when it runs in local IIS.
  • Deploying to Production   
    • How to deploy to staging and production environments in separate Windows Azure Web Sites. (Most of the deployment procedures shown apply also to third-party hosting providers, and differences are noted.)
    • Creates profile-specific transformation files for staging and production.
    • Deploys robots.txt file to staging, excludes it from production by editing the .pubxml file.
  • Deploying a Code Update
    • How to deploy an application update that doesn’t involve a database change.
    • Uses FTP to add and remove app_offline.htm, shows how to get credentials for FTP access from the Windows Azure .publishsettings file.
    • Previews file changes to be deployed by double-clicking files in the Publish Web Preview pane.
    • Deploys individual files and previews file changes by right-clicking files in Solution Explorer.
  • Deploying a Database Update
    • How to deploy a database update, using Migrations for the application database and dbDacFx for the membership database.
  • Command Line Deployment
    • New tutorial that shows how to deploy by using the command line.
    • Reviews and explains key command line options.
    • Shows how to get credentials for Windows Azure deployment from the .publishsettings file.
  • Deploying Extra Files
    • New tutorial that shows how to customize the web publish pipeline by editing the .pubxml file, in order to deploy extra files that are not included in the project.
  •   Troubleshooting
    • Common errors and how to resolve them.

Feedback is welcome; you can post comments here or on the tutorials themselves.

-- Tom Dykstra

ASP.NET Web API: Using Namespaces to Version Web APIs


In this post, I’ll show how to extend the routing logic in ASP.NET Web API, by creating a custom controller selector. Suppose that you want to version your web API by defining URIs like the following:

/api/v1/products/

/api/v2/products/

You might try to make this work by creating two different “Products” controllers, and placing them in separate namespaces:

namespace MyApp.Controllers.V1
{
    // Version 1 controller
    public class ProductsController : ApiController {  }
}

namespace MyApp.Controllers.V2
{
    // Version 2 controller
    public class ProductsController : ApiController {  }
}

The problem with this approach is that Web API finds controllers by class name, ignoring the namespace. So there’s no way to make this work using the default routing logic. Fortunately, Web API makes it easy to change the default behavior.

The interface that Web API uses to select a controller is IHttpControllerSelector. You can read about the default implementation here. The important method on this interface is SelectController, which selects a controller for a given HttpRequestMessage.

First, you need to understand a little about the Web API routing process. Routing starts with a route template. When you create a Web API project, it adds a default route template:

"api/{controller}/{id}"

The parts in curly brackets are placeholders. Here is a URI that matches this template:

http://www.example.com/api/products/1

So in this example, the placeholders have these values:

  • controller = products
  • id = 1

The default IHttpControllerSelector uses the value of “controller” to find a controller with a matching name. In this example, “products” would match a controller class named ProductsController. (By convention, you need to add the “Controller” suffix to the class name.)

To make our namespace scenario work, we’ll use a route template like this:

"api/{namespace}/{controller}/{id}"

Here is a matching URI:

http://www.example.com/api/v1/products/1

And here are the placeholder values:

  • namespace = v1
  • controller = products
  • id = 1

Now we can use these values to find a matching controller. First, call GetRouteData to get an IHttpRouteData object from the request:

public HttpControllerDescriptor SelectController(HttpRequestMessage request)
{
    IHttpRouteData routeData = request.GetRouteData();
    if (routeData == null)
    {
        throw new HttpResponseException(HttpStatusCode.NotFound);
    }

    // ...

Use IHttpRouteData to look up the values of “namespace” and “controller”. The values are stored in a dictionary as object types. Here is a helper method that returns a route value as a type T:

private static T GetRouteVariable<T>(IHttpRouteData routeData, string name)
{
    object result = null;
    if (routeData.Values.TryGetValue(name, out result))
    {
        return (T)result;
    }
    return default(T);
}

Use this helper function to get the route values as strings:

string namespaceName = GetRouteVariable<string>(routeData, "namespace");
if (namespaceName == null)
{
    throw new HttpResponseException(HttpStatusCode.NotFound);
}

string controllerName = GetRouteVariable<string>(routeData, "controller");
if (controllerName == null)
{
    throw new HttpResponseException(HttpStatusCode.NotFound);
}

Now look for a matching controller type. For example, given “namespace” = “v1” and “controller” = “products”, this would match a controller class with the fully qualified name MyApp.Controllers.V1.ProductsController.

To get the list of controller types in the application, use the IHttpControllerTypeResolver interface:

IAssembliesResolver assembliesResolver = 
    _configuration.Services.GetAssembliesResolver();
IHttpControllerTypeResolver controllersResolver = 
    _configuration.Services.GetHttpControllerTypeResolver();

ICollection<Type> controllerTypes = 
    controllersResolver.GetControllerTypes(assembliesResolver);

This code performs reflection over all of the assemblies in the app domain. To avoid doing that on every request, it’s a good idea to cache a dictionary of controller types and use the dictionary for subsequent lookups.
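For illustration, here is a rough sketch of such a cache, loosely following the sample's InitializeControllerDictionary method; details may differ from the actual sample:

    private Dictionary<string, HttpControllerDescriptor> InitializeControllerDictionary()
    {
        var dictionary = new Dictionary<string, HttpControllerDescriptor>(StringComparer.OrdinalIgnoreCase);

        IAssembliesResolver assembliesResolver = _configuration.Services.GetAssembliesResolver();
        IHttpControllerTypeResolver controllersResolver = _configuration.Services.GetHttpControllerTypeResolver();
        ICollection<Type> controllerTypes = controllersResolver.GetControllerTypes(assembliesResolver);

        foreach (Type t in controllerTypes)
        {
            // Key on "{last namespace segment}.{controller name}", e.g. "V1.Products".
            string namespaceSegment = t.Namespace.Split('.').Last();
            string controllerName = t.Name.Remove(t.Name.Length - "Controller".Length);
            dictionary[namespaceSegment + "." + controllerName] =
                new HttpControllerDescriptor(_configuration, controllerName, t);
        }

        return dictionary;
    }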

The last step is to replace the default IHttpControllerSelector with our custom implementation, in the HttpConfiguration.Services collection:

    config.Services.Replace(typeof(IHttpControllerSelector), 
        new NamespaceHttpControllerSelector(config));

You can find the complete sample hosted on aspnet.codeplex.com.

In order to keep the code as simple as possible, the sample has a few limitations:

  • It expects the route to contain a “namespace” variable. Otherwise, it returns an HTTP 404 error. You could modify the sample so that it falls back to the default IHttpControllerSelector in this case.
  • The sample always matches the value of “namespace” against the final segment of the namespace (i.e., the inner scope). So “v1” matches “MyApp.Controllers.V1” but not “MyApp.V1.Controllers”. You could change this behavior by modifying the code that constructs the dictionary of controller types. (See the InitializeControllerDictionary method.)

Also, versioning by URI is not the only way to version a web API. See Howard Dierking’s blog post for more thoughts on this topic.

Deploy a Secure ASP.NET MVC application with OAuth, Membership and SQL Database


This tutorial shows you how to build a secure ASP.NET MVC 4 web application that enables users to log in with credentials from Facebook, Yahoo, and Google. You will also deploy the application to Windows Azure.

You can open a Windows Azure account for free, and if you don't already have Visual Studio 2012, the SDK automatically installs Visual Studio 2012 for Web Express. You can start developing for Windows Azure for free.

This tutorial assumes that you have no prior experience using Windows Azure. On completing this tutorial, you'll have a secure data-driven web application up and running in the cloud and using a cloud database.

You'll learn:

  • How to enable your machine for Windows Azure development by installing the Windows Azure SDK.
  • How to create a secure ASP.NET MVC 4 project and publish it to a Windows Azure Web Site.
  • How to use OAuth and the ASP.NET membership database to secure your application.
  • How to deploy a membership database to Windows Azure.
  • How to use a SQL database to store data in Windows Azure.
  • How to use Visual Studio to update and manage the membership database on SQL Azure.

You'll build a simple contact list web application that is built on ASP.NET MVC 4 and uses the ADO.NET Entity Framework for database access. The following illustration shows the login page for the completed application:




Set up the development environment

To start, set up your development environment by installing the Windows Azure SDK for the .NET Framework.

  1. To install the Windows Azure SDK for .NET, click the link below. If you don't have Visual Studio 2012 installed yet, it will be installed by the link. This tutorial requires Visual Studio 2012.
    Get Tools and SDK for Visual Studio 2012
  2. When you are prompted to run or save vwdorvs11azurepack.exe, click Run.


When the installation is complete, you have everything necessary to start developing.

Set up the Windows Azure environment

Next, set up the Windows Azure environment by creating a Windows Azure Web Site and a SQL database.

Create a web site and a SQL database in Windows Azure

Your Windows Azure Web Site will run in a shared hosting environment, which means it runs on virtual machines (VMs) that are shared with other Windows Azure clients. A shared hosting environment is a low-cost way to get started in the cloud. Later, if your web traffic increases, the application can scale to meet the need by running on dedicated VMs. If you need a more complex architecture, you can migrate to a Windows Azure Cloud Service. Cloud services run on dedicated VMs that you can configure according to your needs.

Windows Azure SQL Database is a cloud-based relational database service that is built on SQL Server technologies. The tools and applications that work with SQL Server also work with SQL Database.

  1. In the Windows Azure Management Portal, click Web Sites in the left tab, and then click New.
  2. Click CUSTOM CREATE.
    The New Web Site - Custom Create wizard opens.
  3. In the New Web Site step of the wizard, enter a string in the URL box to use as the unique URL for your application. The complete URL will consist of what you enter here plus the suffix that you see next to the text box. The illustration shows "contactmgr2", but that URL is probably taken so you will have to choose a different one.
  4. In the Database drop-down list, choose Create a new SQL database.
  5. In the Region drop-down list, choose the same region you selected for the Web site.
    This setting specifies which data center your VM will run in. In the DB CONNECTION STRING NAME, enter connectionString1.
    rxCreateWSwithDB_2[1]
  6. Click the arrow that points to the right at the bottom of the box. The wizard advances to the Database Settings step.
  7. In the Name box, enter ContactDB.
  8. In the Server box, select New SQL Database server. Alternatively, if you previously created a SQL Server database, you can select that SQL Server from the dropdown control.
  9. Enter an administrator LOGIN NAME and PASSWORD. If you selected New SQL Database server, you aren't entering an existing name and password here; you're entering a new name and password that you're defining now to use later when you access the database. If you selected a SQL Server you created previously, you'll be prompted for the password of the SQL Server account name you created earlier. For this tutorial, we won't check the Advanced box. The Advanced box allows you to set the database size (the default is 1 GB, but you can increase it to 150 GB) and the collation.
  10. Click the check mark at the bottom of the box to indicate you're finished.

    dntutmobile-setup-azure-site-004

    The following image shows using an existing SQL Server and Login.

    rxPrevDB

    The Management Portal returns to the Web Sites page, and the Status column shows that the site is being created. After a while (typically less than a minute), the Status column shows that the site was successfully created. In the navigation bar at the left, the number of sites you have in your account appears next to the Web Sites icon, and the number of databases appears next to the SQL Databases icon.

Create an ASP.NET MVC 4 application

You have created a Windows Azure Web Site, but there is no content in it yet. Your next step is to create the Visual Studio web application project that you'll publish to Windows Azure.

Create the project
  1. Start Visual Studio 2012.
  2. From the File menu click New Project.
  3. In the New Project dialog box, expand Visual C# and select Web under Installed Templates and then select ASP.NET MVC 4 Web Application. Keep the default .NET Framework 4.5. Name the application ContactManager and click OK.

    dntutmobile-createapp-002
  4. In the New ASP.NET MVC 4 Project dialog box, select the Internet Application template. Keep the default Razor View Engine and then click OK.

    rxb2
Set the page header and footer
  1. In Solution Explorer, expand the Views\Shared folder and open the _Layout.cshtml file.

    dntutmobile-createapp-004
  2. Replace each occurrence of "My ASP.NET MVC Application" with "Contact Manager".
  3. Replace "your logo here" with "CM Demo".
Run the application locally
  1. Press CTRL+F5 to run the application. The application home page appears in the default browser.

    rxa

This is all you need to do for now to create the application that you'll deploy to Windows Azure. Later you'll add database functionality.

Deploy the application to Windows Azure

  1. In your browser, open the Windows Azure Management Portal.
  2. In the Web Sites tab, click the name of the site you created earlier.

    dntutmobile-setup-azure-site-006
  3. On the right side of the window, click Download publish profile.

    dntutmobile-deploy1-download-profile

    This step downloads a file that contains all of the settings that you need in order to deploy an application to your Web Site. You'll import this file into Visual Studio so you don't have to enter this information manually.
  4. Save the .publishsettings file in a folder that you can access from Visual Studio.

    dntutmobile-deploy1-save-profile

    Security Note: The .publishsettings file contains your credentials (unencoded) that are used to administer your Windows Azure subscriptions and services. The security best practice for this file is to store it temporarily outside your source directories (for example in the Libraries\Documents folder), and then delete it once the import has completed. A malicious user gaining access to the publishsettings file can edit, create, and delete your Windows Azure services.
  5. In Visual Studio, right-click the project in Solution Explorer and select Publish from the context menu.

    dntutmobile-deploy1-publish-001
    The Publish Web wizard opens.
  6. In the Profile tab of the Publish Web wizard, click Import.

    dntutmobile-deploy1-publish-002
  7. Select the publishsettings file you downloaded earlier, and then click Open.
  8. In the Connection tab, click Validate Connection to make sure that the settings are correct. When the connection has been validated, a green check mark is shown next to the Validate Connection button.
  9. Click Next.

    dntutmobile-deploy1-publish-005
  10. In the Settings tab, click Next.
    You can accept all of the default settings on this page. You are deploying a Release build configuration and you don't need to delete files at the destination server. The UsersContext (DefaultConnection) entry under Databases appears because the project contains a UsersContext : DbContext class that uses the DefaultConnection string.

    rxPWS
  11. In the Preview tab, click Start Preview.
    The tab displays a list of the files that will be copied to the server. Displaying the preview isn't required to publish the application but is a useful function to be aware of. In this case, you don't need to do anything with the list of files that is displayed.

    dntutmobile-deploy1-publish-007
  12. Click Publish.
    Visual Studio begins the process of copying the files to the Windows Azure server. The Output window shows what deployment actions were taken and reports successful completion of the deployment.
  13. The default browser automatically opens to the URL of the deployed site.
    The application you created is now running in the cloud. The next time you deploy the application, only the changed (or new) files will be deployed.

    newapp005

Add a database to the application

Next, you'll update the MVC application to add the ability to display and update contacts and store the data in a database. The application will use the Entity Framework to create the database and to read and update data in the database.

Add data model classes for the contacts

You begin by creating a simple data model in code.

  1. In Solution Explorer, right-click the Models folder, click Add, and then Class.

    dntutmobile-adddatabase-001

  2. In the Add New Item dialog box, name the new class file Contact.cs, and then click Add.

    dntutmobile-adddatabase-002

  3. Replace the contents of the Contact.cs file with the following code.

    using System.ComponentModel.DataAnnotations;
    using System.Globalization;
    namespace ContactManager.Models
    {
        public class Contact
        {
            public int ContactId { get; set; }
            public string Name { get; set; }
            public string Address { get; set; }
            public string City { get; set; }
            public string State { get; set; }
            public string Zip { get; set; }
            [DataType(DataType.EmailAddress)]
            public string Email { get; set; }
        }
    }
    

    The Contact class defines the data that you will store for each contact, plus a primary key, ContactId, that is needed by the database.

Create web pages that enable app users to work with the contacts

The ASP.NET MVC scaffolding feature can automatically generate code that performs create, read, update, and delete (CRUD) actions.

Add a Controller and a view for the data

  1. Build the project (Ctrl+Shift+B). (You must build the project before using the scaffolding mechanism.)
  2. In Solution Explorer, right-click the Controllers folder and click Add, and then click Controller....

    dntutmobile-controller-add-context-menu
  3. In the Add Controller dialog box, enter "HomeController" as your controller name.
  4. Set the Scaffolding options Template to MVC Controller with read/write actions and views, using Entity Framework.
  5. Select Contact as your model class and <New data context...> as your data context class.

    dntutmobile-controller-add-controller-dialog
  6. On the New Data Context dialog box, accept the default value ContactManager.Models.ContactManagerContext.
    rxNewCtx
  7. Click OK, then click Add in the Add Controller dialog box.
  8. On the Add Controller overwrite dialog, make sure all options are checked and click OK.
    rxOverwrite
    Visual Studio creates controller methods and views for CRUD database operations on Contact objects.
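The wizard also adds the new ContactManagerContext class to the Models folder. The exact generated code can vary a little by tooling version; a rough sketch of what it typically looks like:

    using System.Data.Entity;

    namespace ContactManager.Models
    {
        public class ContactManagerContext : DbContext
        {
            // The context maps to the "ContactManagerContext" connection string by
            // convention; the Contacts DbSet backs the generated HomeController.
            public ContactManagerContext()
                : base("name=ContactManagerContext")
            {
            }

            public DbSet<Contact> Contacts { get; set; }
        }
    }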

Enable Migrations, create the database, add sample data and a data initializer

The next task is to enable the Code First Migrations feature in order to create the database based on the data model you created.

  1. In the Tools menu, select Library Package Manager and then Package Manager Console.
    dntutmobile-migrations-package-manager-menu
  2. In the Package Manager Console window, enter the following command:

    enable-migrations -ContextTypeName ContactManagerContext
    

    rxE
    You must specify the context type name (ContactManagerContext) because the project contains two DbContext derived classes, the ContactManagerContext we just added and the UsersContext, which is used for the membership database. The ContactManagerContext class was added by the Visual Studio scaffolding wizard.
    The enable-migrations command creates a Migrations folder and it puts in that folder a Configuration.cs file that you can edit to configure Migrations.

  3. In the Package Manager Console window, enter the following command:

    add-migration Initial
    

    The add-migration Initial command generates a file named <date_stamp>_Initial in the Migrations folder; the code in that file creates the database. The first parameter (Initial) is arbitrary and is used to create the name of the file. You can see the new class files in Solution Explorer.
    In the Initial class, the Up method creates the Contacts table, and the Down method (used when you want to return to the previous state) drops it.

  4. Open the Migrations\Configuration.cs file.
  5. Add the following namespaces.

     using ContactManager.Models;
    
  6. Replace the Seed method with the following code:

    protected override void Seed(ContactManager.Models.ContactManagerContext context)
    {
        context.Contacts.AddOrUpdate(p => p.Name,
           new Contact
           {
               Name = "Debra Garcia",
               Address = "1234 Main St",
               City = "Redmond",
               State = "WA",
               Zip = "10999",
               Email = "debra@example.com",
           },
            new Contact
            {
                Name = "Thorsten Weinrich",
                Address = "5678 1st Ave W",
                City = "Redmond",
                State = "WA",
                Zip = "10999",
                Email = "thorsten@example.com",
            },
            new Contact
            {
                Name = "Yuhong Li",
                Address = "9012 State st",
                City = "Redmond",
                State = "WA",
                Zip = "10999",
                Email = "yuhong@example.com",
            },
            new Contact
            {
                Name = "Jon Orton",
                Address = "3456 Maple St",
                City = "Redmond",
                State = "WA",
                Zip = "10999",
                Email = "jon@example.com",
            },
            new Contact
            {
                Name = "Diliana Alexieva-Bosseva",
                Address = "7890 2nd Ave E",
                City = "Redmond",
                State = "WA",
                Zip = "10999",
                Email = "diliana@example.com",
            }
            );
    }
    

    The code above initializes the database with the contact information. For more information on seeding the database, see Seeding and Debugging Entity Framework (EF) DBs.

  7. In the Package Manager Console enter the command:

    update-database
    

    dntutmobile-migrations-package-manager-console

    The update-database command runs the first migration, which creates the database. By default, the database is created as a SQL Server Express LocalDB database, unless you have SQL Server Express installed, in which case the database is created using the SQL Server Express instance.

  8. Press CTRL+F5 to run the application.

The application shows the seed data and provides edit, details and delete links.

rx2

Add an OAuth Provider

OAuth is an open protocol that allows secure authorization in a simple and standard method from web, mobile, and desktop applications. The ASP.NET MVC internet template uses OAuth to expose Facebook, Twitter, Google, Yahoo, and Microsoft as authentication providers. Although this tutorial uses only Facebook, Google, and Yahoo as the authentication providers, you can easily modify the code to use any of the providers. The steps to implement other providers are very similar to the steps you will see in this tutorial.

In addition to authentication, the tutorial will also use roles to implement authorization. Only those users you add to the canEdit role will be able to create, edit, or delete contacts.

Registering with an external provider

To authenticate users with credentials from some external providers, you must register your web site with the provider and obtain an application ID and secret. Google and Yahoo don't require you to register or obtain keys.

This tutorial does not show all of the steps you must perform to register with these providers. The steps are typically not difficult. To successfully register your site, follow the instructions provided on the provider's developer site. The following steps walk through registration with Facebook.

Navigate to https://developers.facebook.com/apps page and log in if necessary. Click the Register as a Developer button and complete the registration process. Once you complete registration, click Create New App. Enter a name for the app. You don't need to enter an app namespace.
rxFBapp

Enter localhost for the App Domain and http://localhost/ for the Site URL. Click Enabled for Sandbox Mode, then click Save Changes.

You will need the App ID and the App Secret to implement OAuth in this application.
rxFB

Creating test users

In the left pane under Settings click Developer Roles. Click the Create link on the Test Users row (not the Testers row).
rxFBt
Click on the Modify link to get the test user's email (which you will use to log into the application). Click the See More link, then click Edit to set the test user's password.

Adding application id and secret from the provider

Open the App_Start\AuthConfig.cs file. Remove the comment characters from the RegisterFacebookClient method and add the app ID and app secret. Use the values you obtained from Facebook; the placeholder values shown below will not work. Remove the comment characters from the OAuthWebSecurity.RegisterGoogleClient call and add the OAuthWebSecurity.RegisterYahooClient call as shown below. The Google and Yahoo providers don't require you to register or obtain keys.
Warning: Keep your app ID and secret secure. A malicious user who has your app ID and secret can pretend to be your application.

 public static void RegisterAuth()
    {
        OAuthWebSecurity.RegisterFacebookClient(
            appId: "enter numeric key here",
            appSecret: "enter numeric secret here");

        OAuthWebSecurity.RegisterGoogleClient();
        OAuthWebSecurity.RegisterYahooClient();
    }
  1. Run the application and click the Log In link.
  2. Click the Facebook button.
  3. Enter your Facebook credentials or one of the test users credentials.
  4. Click Okay to allow the application to access your Facebook resources.
  5. You are redirected to the Register page. If you logged in using a test account, you can change the user name to something shorter, for example "Bill FB test". Click the Register button, which saves the user name and email alias to the membership database.
  6. Register another user. Currently a bug in the log in system prevents you from logging off and logging in as another user using the same provider (that is, you can't log off your Facebook account and log back in with a different Facebook account). To work around this, navigate to the site using a different browser and register another user. One user will be added to the canEdit role and have edit access to the application; the other user will only have access to the non-edit actions on the site. Anonymous users will only have access to the home page.
  7. Register another user using a different provider.
  8. Optional: You can also create a local account not associated with one of the providers. Later on in the tutorial we will remove the ability to create local accounts. To create a local account, click the Log out link (if you are logged in), then click the Register link. You might want to create a local account for administration purposes that is not associated with any external authentication provider.

Add Roles to the Membership Database

In this section you will add the canEdit role to the membership database. Only users in the canEdit role will be able to edit data. A best practice is to name roles by the actions they can perform, so canEdit is preferred over a role called admin. When your application evolves, you can add new roles such as canDeleteMembers rather than superAdmin.
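The steps below edit the membership tables directly from Visual Studio. If you prefer to do the same thing in code, a hedged sketch using the SimpleMembership role APIs follows; the "ricka" user name is just the example user from the steps below, and the SimpleMembership database connection must already be initialized (as the template's InitializeSimpleMembershipAttribute does):

    using System.Web.Security;

    // Create the role if it doesn't exist, then grant it to an already-registered user.
    if (!Roles.RoleExists("canEdit"))
    {
        Roles.CreateRole("canEdit");
    }
    if (!Roles.IsUserInRole("ricka", "canEdit"))   // "ricka" is just an example user name
    {
        Roles.AddUserToRole("ricka", "canEdit");
    }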

  1. In the View menu click Database Explorer if you are using Visual Studio Express for Web or Server Explorer if you are using full Visual Studio.

    rxp3

    rxp2
  2. In Server Explorer, expand DefaultConnection then expand Tables.
  3. Right click UserProfile and click Show Table Data.

    rxSTD
  4. Record the UserId for the user that will have the canEdit role. In the image below, the user ricka with UserId 2 will have the canEdit role for the site.

    rxUid
  5. Right click webpages_Roles and click Show Table Data.
  6. Enter canEdit in the RoleName cell. The RoleId will be 1 if this is the first time you've added a role. Record the RoleId. Be sure there is no trailing space character; "canEdit " in the role table will not match "canEdit" in the controller code.

    rxRoleID
  7. Right click webpages_UsersInRoles and click Show Table Data. Enter the UserId of the user you want to grant canEdit access, and the RoleId of the canEdit role.

    rxUR

    The webpages_OAuthMembership table contains the OAuth provider, the provider user ID, and the UserId for each registered OAuth user. The webpages_Membership table contains the ASP.NET membership data; you can add users to it by using the Register link. It's a good idea to add a user with the canEdit role that is not associated with Facebook or another third-party authentication provider, so that you always have canEdit access even when the third-party provider is unavailable. Later on in the tutorial we will disable ASP.NET membership registration.

Protect the Application with the Authorize Attribute

In this section we will apply the Authorize attribute to restrict access to the action methods. Anonymous users will be able to view the home page only. Registered users will be able to see contact details and the About and Contact pages. Only users in the canEdit role will be able to access action methods that change data.

  1. Add the Authorize filter and the RequireHttps filter to the application. An alternative approach is to add the Authorize attribute and the RequireHttps attribute to each controller, but it's considered a security best practice to apply them to the entire application. By adding them globally, every new controller and action method you add is automatically protected; you won't need to remember to apply the attributes. For more information see Securing your ASP.NET MVC 4 App and the new AllowAnonymous Attribute. Open the App_Start\FilterConfig.cs file and replace the RegisterGlobalFilters method with the following.

    public static void RegisterGlobalFilters(GlobalFilterCollection filters)
    {
        filters.Add(new HandleErrorAttribute());
        filters.Add(new System.Web.Mvc.AuthorizeAttribute());
        filters.Add(new RequireHttpsAttribute());
    }
    
    
  2. Add the AllowAnonymous attribute to the Index method. The AllowAnonymous attribute enables you to whitelist the methods you want to opt out of authorization.

  3. Add [Authorize(Roles = "canEdit")] to the Get and Post methods that change data (Create, Edit, Delete).
  4. Add the About and Contact methods. A portion of the completed code is shown below.

    public class HomeController : Controller
    {
        private ContactManagerContext db = new ContactManagerContext();

        [AllowAnonymous]
        public ActionResult Index()
        {
            return View(db.Contacts.ToList());
        }

        public ActionResult About()
        {
            return View();
        }

        public ActionResult Contact()
        {
            return View();
        }

        [Authorize(Roles = "canEdit")]
        public ActionResult Create()
        {
            return View();
        }

        // Methods moved and omitted for clarity.
    }

  5. Remove ASP.NET membership registration. The current ASP.NET membership registration in the project does not support password resets, and it does not verify that a human is registering (for example, with a CAPTCHA). Once a user is authenticated by one of the third-party providers, they can still register. In the AccountController, remove the [AllowAnonymous] attribute from the GET and POST Register methods. This prevents bots and anonymous users from registering.

  6. In the Views\Shared\_LoginPartial.cshtml file, remove the Register action link.
  7. Enable SSL. In Solution Explorer, click the ContactManager project, then press F4 to bring up the Properties window. Change SSL Enabled to True. Copy the SSL URL.

    rxSSL
  8. In Solution Explorer, right click the Contact Manager project and click Properties.
  9. In the left tab, click Web.
  10. Change the Project Url to use the SSL URL.
  11. Click Create Virtual Directory.

    rxS2
  12. Press CTRL+F5 to run the application. The browser will display a certificate warning. For our application you can safely click on the link Continue to this website. Verify only the users in the canEdit role can change data. Verify anonymous users can only view the home page.

    rxNOT

    rxNOT2


    Windows Azure Web sites include a valid security certificate, so you won't see this warning when you deploy to Azure.

Create a Data Deployment Script

The membership database isn't managed by Entity Framework Code First so you can't use Migrations to deploy it. We'll use the dbDacFx provider to deploy the database schema, and we'll configure the publish profile to run a script that will insert initial membership data into membership tables.

This tutorial will use SQL Server Management Studio (SSMS) to create data deployment scripts.

Install SSMS from Microsoft SQL Server 2012 Express Download Center:

(Note that this is a 600 megabyte download. It may take a long time to install and may require a reboot of your computer.)

On the first page of the SQL Server Installation Center, click New SQL Server stand-alone installation or add features to an existing installation, and follow the instructions, accepting the default choices. The following image shows the step that installs SSMS.

rxSS

Create the development database script
  1. Run SSMS.
  2. In the Connect to Server dialog box, enter (localdb)\v11.0 as the Server name, leave Authentication set to Windows Authentication, and then click Connect. If you have installed SQL Express, enter .\SQLEXPRESS.

    rxC2S
  3. In the Object Explorer window, expand Databases, right-click aspnet-ContactManager, click Tasks, and then click Generate Scripts.

    rxGenScripts
  4. In the Generate and Publish Scripts dialog box, click Set Scripting Options. You can skip the Choose Objects step because the default is Script entire database and all database objects and that is what you want.
  5. Click Advanced.

    rx11
  6. In the Advanced Scripting Options dialog box, scroll down to Types of data to script, and click the Data only option in the drop-down list. (See the image below the next step.)
  7. Change Script USE DATABASE to False. USE statements aren't valid for Windows Azure SQL Database and aren't needed for deployment to SQL Server Express in the test environment.

    rxAdv
  8. Click OK.
  9. In the Generate and Publish Scripts dialog box, the File name box specifies where the script will be created. Change the path to your solution folder (the folder that has your Contacts.sln file) and change the file name to aspnet-data-membership.sql.
  10. Click Next to go to the Summary tab, and then click Next again to create the script.

    rx1
  11. Click Finish.

Deploy the app to Windows Azure

  1. Open the application root Web.config file. Find the DefaultConnection connection string element, copy it, and paste the copy directly below the original. Rename the copied element DefaultConnectionDeploy. You will need this connection string to deploy the user data to the membership database.

    rxd
  2. Build the application.
  3. In Visual Studio, right-click the project in Solution Explorer and select Publish from the context menu.

    dntutmobile-deploy1-publish-001[1]

    The Publish Web wizard opens.
  4. Click the Settings tab. Click the V icon to select the Remote connection string for both ContactManagerContext and DefaultConnectionDeploy. The three databases listed will all use the same connection string: ContactManagerContext stores the contacts, DefaultConnectionDeploy is used only to deploy the user account data to the membership database, and UsersContext is the membership database.

    rxD2
  5. Under ContactManagerContext, check Execute Code First Migrations.

    rxSettings
  6. Under DefaultConnectionDeploy check Update database then click the Configure database updates link.
  7. Click the Add SQL Script link and navigate to the aspnet-data-membership.sql file. You only need to do this once. On the next deployment, uncheck Update database because you won't need to add the user data to the membership tables again.

    rxAddSQL2
  8. Click Publish.
  9. Navigate to the https://developers.facebook.com/apps page and change the App Domains and Site URL settings to the Windows Azure URL.
  10. Test the application. Verify only the users in the canEdit role can change data. Verify anonymous users can only view the home page. Verify authenticated users can navigate to all links that don't change data.
  11. The next time you publish the application be sure to uncheck Update database under DefaultConnectionDeploy.

    rxSettings[1]

Update the Membership Database

Once the site is deployed to Windows Azure and you have more registered users you might want to make some of them members of the canEdit role. In this section we will use Visual Studio to connect to the SQL Azure database and add users to the canEdit role.

  1. In Solution Explorer, right click the project and click Publish.


    rxP
  2. Click the Settings tab.
  3. Copy the connection string. For example, the connection string used in this sample is: Data Source=tcp:d015leqjqx.database.windows.net,1433; Initial Catalog=ContactDB2;User Id=ricka0@d015lxyze;Password=xyzxyz
  4. Close the publish dialog.
  5. In the View menu click Database Explorer if you are using Visual Studio Express for Web or Server Explorer if you are using full Visual Studio.

    rxp3[1]

    rxp2[1]
  6. Click on the Connect to Database icon.

    rxc
  7. If you are prompted for the Data Source, click Microsoft SQL Server.

    rx3
  8. Copy and paste the Server Name, which starts with tcp (see the image below).
  9. Click Use SQL Server Authentication
  10. Enter your User name and Password, which are in the connection string you copied.
  11. Enter the database name (ContactDB, or the string after "Initial Catalog=" in the connection string if you didn't name it ContactDB). If you get an error dialog, see the next section.
  12. Click Test Connection. If you get an error dialog, see the next section.

    rx4

Cannot open server login error

If you get an error dialog stating "Cannot open server" you will need to add your IP address to the allowed IPs.

rx5

  1. In the Windows Azure Portal, Select SQL Databases in the left tab.

    rx6
  2. Select the database you wish to open.
  3. Click the Set up Windows Azure firewall rules for this IP address link.
    rx7
  4. When you are prompted with "The current IP address xxx.xxx.xxx.xxx is not included in existing firewall rules. Do you want to update the firewall rules?", click Yes. Adding this single address is often not enough; you may need to add a range of IP addresses.

Adding a Range of Allowed IP Addresses

  1. In the Windows Azure Portal, Click SQL Databases.
  2. Click the Server hosting your Database.

    rx8
  3. Click Configure on the top of the page.
  4. Add a rule name and the starting and ending IP addresses.
    rx9
  5. At the bottom of the page, click Save.
  6. You can now edit the membership database using the steps previously shown.

Next Steps

To get the colorful Facebook, Google and Yahoo log on buttons, see the blog post Customizing External Login Buttons in ASP.NET MVC 4. Another way to store data in a Windows Azure application is to use Windows Azure storage, which provides non-relational data storage in the form of blobs and tables. The following links provide more information on ASP.NET MVC and Windows Azure.

This tutorial and the sample application were written by Rick Anderson (Twitter @RickAndMSFT) with assistance from Tom Dykstra, Tom FitzMacken and Barry Dorrans (Twitter @blowdart).

Please leave feedback on what you liked or what you would like to see improved, not only about the tutorial itself but also about the products that it demonstrates. Your feedback will help us prioritize improvements. We are especially interested in finding out how much interest there is in more automation for the process of configuring and deploying the membership database.

ASP.NET MVC Facebook Birthday App


Tom Dykstra and I have published a really cool tutorial on creating an MVC Facebook birthday app. You can test the app out by clicking on the FB link below:

https://apps.facebook.com/birthdayapp-mvc/

The image below shows the birthday app.

Facebook app home page

Yao Huang Lin is the principal developer for the ASP.NET MVC Facebook library and templates, and he also wrote the sample used in this tutorial (so you can be sure it uses best practices).

The tutorial shows you how to:

  1. Quickly create a simple MVC/FB app.
  2. Understand the FB template code.
  3. Create the FB birthday app.
  4. Deploy the app to Windows Azure.

As always, we appreciate feedback.

Follow me (@RickAndMSFT) on Twitter, where I have a no-spam guarantee of quality tweets.

Tutorial Series on Model Binding with ASP.NET Web Forms


I have written a tutorial series that shows how to use model binding with ASP.NET Web Forms. You might be familiar with the model binding concept from ASP.NET MVC, but with ASP.NET 4.5, model binding is now available in Web Forms applications. Model binding makes it very easy to create and maintain data-rich web pages. A lot of the manual steps of correlating elements with data properties are performed automatically. When you use model binding with dynamic data templates, you can quickly add or revise properties in your data model and those properties are correctly rendered and processed in the web forms throughout your site.

The series contains the following topics:

  1. Retrieving and displaying data
  2. Updating, deleting, and creating data
  3. Sorting, paging, and filtering data
  4. Integrating the jQuery UI Datepicker
  5. Using query string values to filter data
  6. Adding a business logic layer

On data-bound server controls, such as GridView, DetailsView, ListView or FormView, you specify the name of a data method for the properties:

  • SelectMethod
  • UpdateMethod
  • DeleteMethod
  • InsertMethod

Within those methods, you use a data access technology (like Entity Framework) to perform the required data operations. In methods that update the data, you can take advantage of the TryUpdateModel method to automatically apply the bound values from the web forms to the data model.
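As a concrete illustration (a sketch only; the Product entity, ProductContext, and method names below are illustrative and not taken from the tutorial series), a page's code-behind might expose a SelectMethod and an UpdateMethod like this:

    using System;
    using System.Linq;
    using System.Web.ModelBinding;   // QueryString attribute

    public partial class Products : System.Web.UI.Page
    {
        // Referenced from markup, for example:
        // <asp:GridView ... ItemType="Product" SelectMethod="GetProducts" UpdateMethod="UpdateProduct" ... />
        public IQueryable<Product> GetProducts([QueryString] string category)
        {
            var db = new ProductContext();
            IQueryable<Product> query = db.Products;

            // When the page is requested with ?category=..., model binding supplies the value.
            if (!String.IsNullOrEmpty(category))
            {
                query = query.Where(p => p.Category == category);
            }
            return query;   // returning IQueryable lets the control add sorting and paging
        }

        public void UpdateProduct(int productId)
        {
            var db = new ProductContext();
            Product item = db.Products.Find(productId);
            if (item == null) return;   // a real app would add a ModelState error here
            TryUpdateModel(item);       // applies the values posted from the form
            if (ModelState.IsValid)
            {
                db.SaveChanges();
            }
        }
    }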

You can also use several new attributes to specify the source of a parameter value. These attributes are in the System.Web.ModelBinding namespace, and include:

  • Control
  • Cookie
  • Form
  • Profile
  • QueryString
  • RouteData
  • Session
  • UserProfile
  • ViewState

For example, applying the QueryString attribute to a parameter means that the value of that parameter is retrieved from a query string value with the same name.

The tutorial series shows how to add data methods in a code-behind file for a web page, or in a separate class that manages the data operations for a site. Using a separate class enables you to centralize the data logic and re-use methods in multiple pages.

I hope the tutorial series is useful to you, and I welcome feedback.

Debugging ASP.NET Web API with Route Debugger


Tutorial and Tool written by Troy Dai (Twitter @troy_dai) with assistance from Rick Anderson (Twitter @RickAndMSFT)

Search for “asp.net web api routing” on Stack Overflow and you’ll find many questions: How exactly does Web API routing work? Why doesn’t my route work? Why is this action not invoked? It is often difficult to debug routes.

To address this issue I wrote a tool named “ASP.NET Web API Route Debugger” to make Web API developers’ lives a bit easier.

In this article I’ll first show how to set up the debugger. Then I’ll explain how routing works in Web API, followed by three examples of using the route debugger on real cases.

How to set up the Route Debugger

You can install Route Debugger from NuGet (http://www.nuget.org/packages/WebApiRouteDebugger/)

    PM> Install-Package WebApiRouteDebugger

The NuGet package will add a new area to your project. The image below shows the new files added to the project. (The + icon shows new files and the red check icon shows changed files.)

clip_image002

Hit F5 to compile and run, and then navigate to http://localhost:xxx/rd for the route debugger page.

clip_image004

Enter the URL you want to test and press Send. The results page is displayed.

clip_image006

I’ll explain how to read the results in the following sections.

How routing works in ASP.NET Web API

The routing mechanism of ASP.NET Web API is composed of three steps: find the matching route and parse the route data, find the matching controller, and find the matching action. If any step fails to find a match, the following steps are not executed. For example, if no controller is found, matching ends and no action is looked for.

In the first step, a route is matched. Every route is defined with a route template, defaults, constraints, data tokens, and a handler. (Routes are configured by default in App_Start\WebApiConfig.cs.) Once a route is matched, the request URL is parsed into route data based on the route template. Route data is a dictionary mapping from string to object.

Controller matching is done purely on the value of the "controller" key in the route data. If the "controller" key doesn't exist in the route data, controller selection fails.
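For example, with the default route registration below, a request for /api/products/5 produces route data with two entries, and the "products" value drives controller selection (ProductsController is just an illustrative name, not part of the examples later in this article):

    config.Routes.MapHttpRoute(
        name: "DefaultApi",
        routeTemplate: "api/{controller}/{id}",
        defaults: new { id = RouteParameter.Optional });

    // GET /api/products/5  ->  route data: { controller = "products", id = "5" }
    //                          so ProductsController is selected
    // GET /api/products    ->  route data: { controller = "products" } (optional id omitted)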

After the controller is matched, all the public methods on the controller are found through reflection. To match the action, Web API uses the following algorithm (a short illustration follows the list):

clip_image008

  1. If route data contains key “action”, then the action will be searched based on action name. Unlike ASP.NET MVC, Web API routes generally do not use action names in routing.
    1. Find all actions where the action name is “action” in the route data;
    2. Each action supports one or more HTTP Verbs (GET, POST, PUT, etc.). Eliminate those actions which don’t support the current HTTP request’s verb.
  2. If the route data doesn't contain the key "action", then the action will be selected based on the HTTP request method directly.
  3. For the actions selected in either of the above two steps, examine the parameters of each action method. Eliminate the actions that don't match all the parameters in the route data.
  4. Eliminate all actions that are marked by the NonAction attribute.
    1. If more than one action matches, an HTTP 500 error is thrown. (Internally an HttpResponseException is thrown.)
    2. If there is no matching action, an HTTP 404 error is thrown.
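To make the verb- and parameter-based steps concrete, here is a small sketch (the controller and URLs are illustrative, not from the examples below):

    using System.Collections.Generic;
    using System.Web.Http;

    public class ProductsController : ApiController
    {
        // GET /api/products -> no "action" in the route data, the verb is GET,
        // and there are no extra route values, so Get() is selected.
        public IEnumerable<string> Get()
        {
            return new[] { "product1", "product2" };
        }

        // GET /api/products/5 -> the route data contains "id", so the overload
        // whose parameters match the route data wins.
        public string Get(int id)
        {
            return "product" + id;
        }
    }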

How to use the Route Debugger

Example 1: Missing Controller Value

Source: http://stackoverflow.com/questions/13876816/web-api-routing

Issue

The controller and routes are shown below. The following URL returns a 404 error even though it appears to match the MachineApi route.

    localhost/api/machine/somecode/all

 

You can download Sample1 and install the route debugger NuGet package to follow along.

Controller

    public class MachineController : ApiController
    {
        public IEnumerable<Machine> Get()
        {
            return new List<Machine>
            {
                new Machine
                {
                    LastPlayed = DateTime.UtcNow,
                    MachineAlertCount = 1,
                    MachineId = "122",
                    MachineName = "test",
                    MachinePosition = "12",
                    MachineStatus = "test"
                }
            };
        }

        public IEnumerable<Machine> All(string code)
        {
            return new List<Machine>
            {
                new Machine
                {
                    LastPlayed = DateTime.UtcNow,
                    MachineAlertCount = 1,
                    MachineId = "122",
                    MachineName = "test",
                    MachinePosition = "12",
                    MachineStatus = "test"
                }
            };
        }
    }

Route

    config.Routes.MapHttpRoute(
        name: "MachineApi",
        routeTemplate: "api/machine/{code}/all"
    );

    config.Routes.MapHttpRoute(
        name: "DefaultApi",
        routeTemplate: "api/{controller}/{id}",
        defaults: new { id = RouteParameter.Optional }
    );
Test

Test http://localhost/api/machine/somecode/all in the route debugger:

clip_image010

Observation

  1. The HTTP status code is 404 (resource not found).
  2. The route data contains only one key/value pair: "code" maps to "somecode".
  3. The selected route is "api/machine/{code}/all" because the template fits the URL. However, there are no default values defined for this route.
  4. No controller matches (none of the rows are highlighted in the controller selection table).

Analysis

The route debugger output shows the "controller" value is not found in the route data or the route defaults. The default controller selector relies on the "controller" value to find a proper controller.

A common misunderstanding of route templates is that values are mapped based on their position. That's not true. In this case, even though machine appears right after api in the template, nothing tells the framework to use that URL segment as the controller name.

Solution

Add a default value specifying the machine controller to the first route:

    config.Routes.MapHttpRoute(
        name: "MachineApi",
        routeTemplate: "api/machine/{code}/all",
        defaults: new { controller = "Machine" });

After this change, you get an HTTP 200 response; the Machine controller is matched and the action is matched. The matching route, controller, and action are highlighted in green in the route debugger, as shown below.

clip_image012

Similar issue:

http://stackoverflow.com/questions/13869541/why-is-my-message-handler-running-even-when-it-is-not-defined-in-webapi

Example 2: Ambiguous default

Source: http://stackoverflow.com/questions/14058228/asp-net-web-api-no-action-was-found-on-the-controller

Controller

    public class ValuesController : ApiController
    {
        // GET api/values
        public IEnumerable<string> Get()
        {
            return new string[] { "value1", "value2" };
        }

        // GET api/values/5
        public string Get(int id)
        {
            return "value";
        }

        // POST api/values
        public void Post([FromBody] string value)
        {
        }

        // PUT api/values/5
        public void Put(int id, [FromBody] string value)
        {
        }

        // DELETE api/values/5
        public void Delete(int id)
        {
        }

        [HttpGet]
        public void Machines()
        {
        }

        public void Machines(int id)
        {
        }
    }

Route definition

    config.Routes.MapHttpRoute(
        name: "DefaultApi",
        routeTemplate: "api/{controller}/{action}/{id}",
        defaults: new { action = "get", id = RouteParameter.Optional });

Test

Try the following three URLs, which work correctly:

  • /api/Values
  • /api/Values/Machines
  • /api/Values/Machines/100

However, the URL /api/Values/1 returns a 404 error.

clip_image014

Observation

  1. In the route data section you can see that "action" is mapped to "1", the third segment in the URL, because the selected route is api/{controller}/{action}/{id}.
  2. Note that although the default value for "action" is "get", the URL value "1" is assigned to the action, not the default "get".
  3. The Values controller is selected.
  4. The action selection table has no match. The "By Action Name" column is filled with "False", which means all actions are rejected because their names do not match the "action" value in the route data.

Analysis

There are two key points here.

  1. The route data always prefers a value found in the URL over the default value. For the failing URL, /api/Values/1, the third segment supplies the "action" value, so the route data contains action = "1" rather than the default "get".
  2. Because an "action" value exists in the route data, the action selector picks the action by name from the route data.

The route debugger tool shows that with the URL http://localhost:xxx/api/values/1, "1" is the action name and no such action exists.

Solution

Use one action-matching strategy, either by action name or by HTTP verb. Don't mix both in one controller with one route.
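For instance, one way to apply this advice (a sketch, not from the original post) is to remove the Machines overloads from ValuesController and drop {action} from the route, so that the verb-based convention applies everywhere:

    // With only verb-based actions (Get, Post, Put, Delete) left on the controller,
    // a plain verb-based route is enough:
    config.Routes.MapHttpRoute(
        name: "DefaultApi",
        routeTemplate: "api/{controller}/{id}",
        defaults: new { id = RouteParameter.Optional });

    // GET /api/Values    -> Get()
    // GET /api/Values/1  -> Get(1)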

Example 3: Ambiguous Action

Source: Why don’t my routes find the appropriate action? http://stackoverflow.com/questions/14614516/my-web-api-route-map-is-returning-multiple-actions

Issue

The URL that causes the 500 error is http://localhost/api/access/blob

Controller

    public class AccessController : ApiController
    {
        // GET api/access/blob
        [HttpGet]
        public string Blob()
        {
            return "blob shared access signature";
        }

        // GET api/access/queue
        [HttpGet]
        public string Queue()
        {
            return "queue shared access signature";
        }
    }

Route definition

    config.Routes.MapHttpRoute(
        name: "DefaultApi",
        routeTemplate: "api/{controller}/{id}",
        defaults: new { id = RouteParameter.Optional }
    );

    config.Routes.MapHttpRoute(
        name: "AccessApi",
        routeTemplate: "api/{controller}/{action}"
    );

clip_image016

Observation

  1. There are two routes and the URL matches both. The first one is selected because Web API routing picks the first route that matches (greedy matching).
  2. The first route template doesn't contain {action}, so there is no "action" value in the route data; therefore the action is selected based on the HTTP verb.
  3. Controller selection successfully matches the Access controller.
  4. Two actions are selected using the only available matching criterion, the HTTP verb GET.

Analysis

The root problem is that both routes match and the first one is picked, while the developer was expecting the second route to match. The "action" name is ignored, and the action selector ends up matching actions based on the verb alone.

Solution

Two solutions:

  1. Move the default route to the end (see the sketch below).
  2. Delete the default route.
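A sketch of the first solution, with the more specific action-based route registered before the default route:

    // Register the more specific action-based route first...
    config.Routes.MapHttpRoute(
        name: "AccessApi",
        routeTemplate: "api/{controller}/{action}"
    );

    // ...and the default verb-based route last.
    config.Routes.MapHttpRoute(
        name: "DefaultApi",
        routeTemplate: "api/{controller}/{id}",
        defaults: new { id = RouteParameter.Optional }
    );

    // /api/access/blob now matches AccessApi, the route data contains action = "blob",
    // and the Blob() action is selected unambiguously.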

Greedy route selection can lead to difficult-to-diagnose errors, especially when a different route is selected than the one you assumed. The route debugger is especially useful for this problem, as it shows you which route template was selected.

Conclusion

Web API routing problems can get tricky and be difficult to diagnose. The Route Debugger tool can help you find routing problems and understand how routing works. We plan to address routing problems in the future (at least partially) with Attribute routing in Web API.

Source Code

The tool’s source code is available at http://aspnet.codeplex.com. Click the Source Code tab and expand Tools\WebApi\RouteDebugger.

clip_image017

Resources

Acknowledgments

  1. Rick Anderson (Twitter @RickAndMSFT)
  2. Mike Wasson

XDT (XML Document Transform) released on codeplex.com


In Visual Studio 2010 we introduced a simple and straightforward method of transforming web.config during publishing/packaging. This support is called XML Document Transform, aka XDT. It allows you to transform any XML file, not just web.config. To learn more about XDT, check out the docs.

Since we released XDT, there has been interest in re-using the transformation engine in other scenarios. To enable some of those scenarios we released XDT on NuGet. After that we started working on integrating XDT into NuGet and asked for feedback from the community.

In order to cover all the scenarios for NuGet users we decided to release the source of XDT on codeplex.com using an Apache 2.0 license. You can now redistribute XDT with your own applications.
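If you want to drive the engine from your own code, the published package exposes the transformation types directly. Here is a minimal sketch, assuming the Microsoft.Web.XmlTransform namespace from the NuGet package and illustrative file names:

    using Microsoft.Web.XmlTransform;

    public static class ConfigTransformer
    {
        // Applies web.Release.config to web.config and writes the result to a new file.
        public static void Transform()
        {
            using (var document = new XmlTransformableDocument())
            {
                document.PreserveWhitespace = true;
                document.Load("web.config");

                using (var transform = new XmlTransformation("web.Release.config"))
                {
                    if (transform.Apply(document))
                    {
                        document.Save("web.transformed.config");
                    }
                }
            }
        }
    }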

There are a few community projects which are already using XDT. Below you can see a few examples.

 

SlowCheetah

SlowCheetah is an open source Visual Studio 2010/2012 extension which enables you to transform app.config, as well as any other XML file, during build based on the build configuration.

SlowCheetah.Xdt

SlowCheetah.Xdt (project page) contains an MSBuild task, TransformXml, which allows you to invoke XDT transforms during a build.

This library also contains a custom XDT transform, AttributeRegexReplace. If you are interested in creating your own custom transforms read Sayed's blog post - Extending XML (web.config) Transformations.

Web Config Transform Runner

Web Config Transform Runner is a command line utility for XDT. It is available through NuGet. You can read more about Web Config Transform Runner at Eric Hexter's blog.

Config Transformation Tool

Config Transformation Tool is a command line tool to invoke XDT. Using this tool you can execute transforms from any command prompt, and also automate transforms in scripts.

If you create a project using XDT we'd love to hear from you in the comments section below.

Sayed Ibrahim Hashimi | http://msbuildbook.com | @SayedIHashimi

Seeking Feedback on Alternative Formats for ASP.NET MVC and Deployment Content Maps


The ASP.NET content maps are lists of resources that we have reviewed and recommend. The content maps have been popular in their present form, but we’re looking at ways to improve them, such as by publishing more lists but with a narrower focus to each one, by providing more information for each link, and by formatting them differently. Here are links to the existing content maps and three pages that show new approaches we’re considering: 

We’re calling the new lists “curated views.” Curated in the sense that, although anyone can suggest or submit links to add to the list, someone at Microsoft will be responsible for creating and maintaining them. Views in the sense that they present information about resources that can help with a particular question or topic.

Please tell us what you think of the new curated views compared to the existing content maps, or offer suggestions for other information or formats you’d like to see. You can leave comments on this post, or on the new web pages themselves, or send us an email.

We are interested in any feedback you have, but here are some specific questions to think about:

  1. Each curated view/content map includes a list of suggested content links.  Below is a list of additional information that could be provided with each link.  Which of these are most important?
    1. Date that the content was posted.
    2. Type of content (video, article, code sample, etc.).
    3. Author name.
    4. Short description.
    5. Level of difficulty of the content.
    6. Version of software/framework or SDK the content refers to.
    7. Website the content appears on.
    8. Number of likes or positive reviews.
    9. Rating assigned to the content by the community.
  2. If you opened a page similar to one of these curated views/content maps from Google or Bing search results, would you be likely to try the links on this page or just return to search results?
  3. If Microsoft and community experts published a large set of content views similar to these on a website, would you visit that site first when you had technical questions, or would you do an Internet search on Google/Bing first?
  4. Do the questions addressed by each curated view seem too narrow or too broad in scope to be helpful?  If so, which ones?  
  5. Do any of the curated views/content maps provide too much or too little detail for each link in the list? If so, which ones? 
  6. Do you find it helpful to see the profile of the person who created the curated view/content map?
  7. If we provided an easy way for you to publish your own curated views (with attribution) to a common site together with the Microsoft-created curated views, would you be interested in doing so?  Why or why not?
  8. If we provided an easy way for you to suggest new content links to add to content views/content maps that have already been published, would you be interested in doing so?  Why or why not?
  9. What would make these content views/content maps more helpful? 

Please update to the latest version of Web Essentials 2012 after installing VS2012 Update 2


After releasing ASP.NET and Web Tools 2012.2, which is also included in Visual Studio 2012 Update 2, we've received feedback from a few customers that Visual Studio shows an error dialog saying:

An exception has been encountered. This may be caused by an extension.

You can get more information by examining the file 'C:\Users\<username>\AppData\Roaming\Microsoft\VisualStudio\12.0\ActivityLog.xml'.

When looking at ActivityLog.xml, the following error message may show up:

System.ComponentModel.Composition.ImportCardinalityMismatchException: Duplicate EditorFormatDefinition exports with identical name attributes exist. Duplicate name is LESSCSSVARIABLEDECLARATIONCLASSIFICATIONFORMAT

To solve this problem, just update the “Web Essentials 2012” extension to the latest version.

What happened is that Mads (the extension's author, and also a PM on the Web editor team) graduated the LESS and CoffeeScript features from Web Essentials 2012 and put these features officially in ASP.NET and Web Tools 2012.2 (see the change log for version details). So an old version of Web Essentials conflicts with the newest version of VS2012 Update 2.

Thanks for the support!  And keep enjoying Web Essentials!

ASP.NET hosts six community created SPA templates now


Since the announcement of four community-created Single Page Application templates with the release of ASP.NET and Web Tools 2012.2 in February 2013, the community has updated the templates and released two more: the Backbone template and the Breeze/Angular template. You can view the details and download the templates at http://www.asp.net/single-page-application/overview/introduction/other-libraries. To use these templates, please make sure you have installed Visual Studio 2012 Update 2, which includes ASP.NET and Web Tools 2012.2.

Each template's website (if it has one) and source location are listed here:

Backbone Template

https://github.com/kazimanzurrashid/AspNetMvcBackboneJsSpa

Breeze/Angular template

http://www.breezejs.com/samples/breezeangular-template

https://github.com/IdeaBlade/BreezeAngularSpaTemplateVsix

Breeze/Knockout template

http://www.breezejs.com/samples/knockout-spa

https://github.com/IdeaBlade/BreezeKnockoutSpaTemplateVsix

DurandalJS template

https://github.com/BlueSpire/Durandal

EmberJS template

https://github.com/xqiu/MVCSPAWithEmberjs

Hot Towel template

www.johnpapa.net/hottowel

https://github.com/johnpapa/HotTowel

 

Hope you enjoy studying them. Feel free to log issues and questions on the corresponding template's source website. And if you have an SPA template you want to share with the community, don't hesitate to contact us so we can link to your work! Thanks for the community support!

DataTable using SignalR+AngularJS+EntityFramework


SignalR brought developers an easier way to build real-time, very responsive web applications. But how does it play with other available technologies? I took a couple of days to implement a very common scenario needed in every enterprise application: a DataTable that performs CRUD operations and persists changes to a database.

My initial thought was that this would be a trivial task; I have built DataTables a few times in the past. Then I realized the real-time model with many concurrent users introduces a few challenges.

Before going through details, here is a screenshot of what I accomplished: A DataTable where multiple users are collaborating on the add/edit/delete of its contents! When user 1 edits David Fowler, the Delete and Edit buttons disappear for David in the browsers of users 2, 3, and 4. When user 2 edits Gustavo Armenta, the Delete and Edit buttons disappear for Gustavo in the browsers of users 1, 3, and 4. When user 1 finishes editing David Fowler, Delete and Edit buttons reappear on all browsers.

multiple_edits

 

HTML Content needs to update every time I receive a SignalR message

My SignalR client receives a JSON message indicating there is a new friend. Now I need to update my HTML table with the new content. AngularJS plays beautifully in this scenario, as I only have to add a row to my existing array (array.push(item)) and refresh the content ($scope.$apply()).

// SignalR Events
hubProxy.client.add = function (friend) {
  console.log("client.add(" + JSON.stringify(friend) + ")");
  $scope.friends.push(friend);
  $scope.$apply();
}

 

Broadcast SignalR messages when user modifies data

The previous code works well when the server pushes data to client. Now, let’s handle the case when the user is pushing changes.

// AngularJS Events
$scope.add = function () {
  hubProxy.server.add({ Name: $scope.add_friend.Name });
  $scope.add_friend.Name = "";
}
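The matching server-side hub method isn't shown in this excerpt; a minimal sketch (assuming the same _db Entity Framework context and Friend entity used in the hub code later in this post) might look like:

    public void Add(Friend value)
    {
        // Persist the new friend, then broadcast it so every connected client's table updates.
        _db.Friends.Add(value);
        _db.SaveChanges();
        Clients.All.add(value);
    }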

    Concurrent users competing to edit the same row

    What to do when two or more users want to edit the same row? You could allow them to edit simultaneously and the last one to save changes win. You could track versions and let the first user save changes and fail for other users as they need to refresh to the latest version before submitting changes. You could lock the row and allow only a single user to make changes, once he finishes other users can take the lock.

    In this sample, I have followed the lock approach with a few more considerations.

    1. User can only lock one row at a time
    2. Release lock in row when user completes the update or user disconnects temporarily or permanently
    3. Handle race condition when multiple users try to edit same row

    Most of the magic happen on the SignalR Hub. OnConnected() returns both a list of items and a list of locks. OnReconnected() simply refreshes the state as if connecting for the first time. OnDisconnected releases the lock I had taken so other active users can edit the row.

    private static ConcurrentDictionary<string, int> _locks =
        new ConcurrentDictionary<string, int>();

    public override async Task OnConnected()
    {
        var query = from f in _db.Friends
                    orderby f.Name
                    select f;
        await Clients.Caller.all(query);
        await Clients.Caller.allLocks(_locks.Values);
    }

    public override async Task OnReconnected()
    {
        // Refresh as other users could update data while we were offline
        await OnConnected();
    }

    public override async Task OnDisconnected()
    {
        int removed;
        _locks.TryRemove(Context.ConnectionId, out removed);
        await Clients.All.allLocks(_locks.Values);
    }

    public void TakeLock(Friend value)
    {
        _locks.AddOrUpdate(Context.ConnectionId, value.Id, (key, oldValue) => value.Id);
        Clients.All.allLocks(_locks.Values);
    }

    public void Update(Friend value)
    {
        var updated = _db.Friends.First<Friend>(f => f.Id == value.Id);
        updated.Name = value.Name;
        _db.SaveChanges();
        Clients.All.update(updated);

        int removed;
        _locks.TryRemove(Context.ConnectionId, out removed);
        Clients.All.allLocks(_locks.Values);
    }

      Then, AngularJS provides a very clear mapping between HTML content and conditional logic using "ng-repeat" and "ng-show". Look how easy it is to make UI decisions based on the current state of the JSON objects bound to $scope.

      <tr ng-repeat="friend in friends | orderBy:predicate">
          <td>
              <button name="deleteButton" ng-click="delete(friend)"
                      ng-show="!friend.IsLocked">Delete</button>
          </td>
          <td>
              <button name="editButton" ng-click="edit(friend)"
                      ng-show="!friend.IsLocked">Edit</button>
              <button name="updateButton" ng-click="update(friend)"
                      ng-show="isEdit(friend)">Update</button>
          </td>
          <td>{{friend.Id}}</td>
          <td ng-show="!isEdit(friend)">{{friend.Name}}</td>
          <td ng-show="isEdit(friend)"><input type="text" ng-model="edit_friend.Name" /></td>
      </tr>

       

      Things to improve in this sample

      1. Improve UI Look & Feel
      2. Play well with other datatypes like number, currency, date
      3. Display validation and error messages
      4. Support SignalR scaleout

      Source Code

      https://github.com/gustavo-armenta/SignalR-JS-HTML

      Feedback

      Have you seen other HTML/JS/SignalR controls out there? Please share the links!
      I have only seen this one:
      http://mvc.syncfusion.com/demos/ui/grid/productshowcase/signalr#
