
Deploying your first Facebook App on Azure using ASP.NET MVC Facebook Template


Today we announced the preview for ASP.NET Fall 2012 Update and one of the cool features we added to the update is the Facebook Application Template. This template includes a new library that makes it easier to develop Facebook applications using ASP.NET MVC.

Here is the tutorial to learn more about this template. In this post, I won’t be going too deep into the implementation of the app, instead I’ll focus on the deployment of the app on Azure.

Disclaimer: This template is still in a preview state. The APIs could change significantly before the final release.

Prerequisites

Visual Studio 2012 + Azure SDK

ASP.NET Fall 2012 Update (preview)

A Windows Azure account

A Facebook account

Getting started

Here we are going to deploy the project created from the Facebook Application template to Azure. Once the project is deployed, the app should display a page showing the authenticated user and some of his/her friends.


Step 1: Create a new MVC Facebook template

Go to New Project –> ASP.NET MVC 4 Web Application, and then select the Facebook Application template.


Once the project is created, you can open HomeController.cs to get an idea of what it does. First, note that the Index action has a [FacebookAuthorize] attribute which will ensure that the request is signed (coming from Facebook). On the [FacebookAuthorize] attribute you can also specify additional permissions that you would like the user to grant to your application. Second, the parameters (user, userFriends) on the action are populated (by custom model binders) when the action is invoked so you can just display them on the view.
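For orientation, here is a rough, hypothetical sketch of what such an action can look like. The FacebookUser type name, the collection type, and the "email" permission string are illustrative assumptions rather than the exact preview API, which may differ:

```csharp
// Hypothetical sketch only: parameter types and attribute usage in the preview
// library may differ from what is shown here.
[FacebookAuthorize("email")]
public ActionResult Index(FacebookUser user, IEnumerable<FacebookUser> userFriends)
{
    // The custom model binders populate 'user' and 'userFriends' from the Facebook
    // Graph API, so the action only needs to hand them to the view.
    ViewBag.Friends = userFriends;
    return View(user);
}
```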


Step 2: Create a new App on Facebook

Log in to https://developers.facebook.com/apps to create a new application. The “App Name” needs to be unique, and you can provide the optional “App Namespace” if needed.


Once the app is created, it will be assigned an App ID and an App Secret.


You could paste these values into the Web.config of your new Facebook project created in Step 1. However, it’s recommended that you specify them on Azure (Step 4) instead of hard coding them in the source.
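If you did want to hard-code them locally, the Azure app settings you will add in Step 4 work by overriding appSettings entries of the same name, so the equivalent Web.config entries would look roughly like the following (values intentionally left blank here):

```xml
<appSettings>
  <!-- Prefer setting these in the Azure portal (Step 4) instead of committing real values. -->
  <add key="Facebook.AppId" value="" />
  <add key="Facebook.AppSecret" value="" />
</appSettings>
```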


Step 3: Create a new Web Site on Windows Azure

Next, we need to create a new “Web Site” for the application.


Step 4: Configure your application on Azure

Once the new “Web Site” is created, click on it and select “CONFIGURE” to provide the app settings. Here we’ll add the following settings. Make sure you SAVE the settings.

Key                   Value
Facebook.AppId        “App ID” from the Facebook site
Facebook.AppSecret    “App Secret” from the Facebook site

 


Step 5: Configure your application on Facebook

Now we need to provide the Canvas URL. Here you can paste your Azure site URL, e.g. http://myfirstapplication.azuresites.net/.

Since October 1st, Facebook expects your canvas app to be hosted at a secure address (HTTPS). Thus you need to specify the Secure Canvas URL. If you don’t have a secure address yet, just put something like https://myfirstapplication.azuresites.net/ and enable Sandbox mode. Make sure you save the changes.


Step 6: Import the “publish profile” and publish your Facebook App to Azure

Now everything should be ready to go. For this last step, you need to download the “publish profile” of your Web Site in Azure.


Right click on the project and select Publish…


Import the “publish profile” you’ve downloaded previously and hit Publish.


The result

You can now browse to the Canvas Page to see the app. The URL of the Canvas Page is listed on your Facebook portal.


The first time you access the app, a dialog will appear asking for permission to access your basic info and email address.


Click “Go to App” to grant the permissions, and you should see the home page showing your basic info and some of your friends.


Congratulations, you just published your first Facebook application!

Thanks,

Yao


Enabling ASP.NET Web API Help Pages for ASP.NET Web Forms Applications


ASP.NET Web API Help Pages is a new preview feature that automatically generates help page style content for your Web API endpoints. You can read more about it at Introducing the ASP.NET Web API Help Page and in further posts on Yao's blog. I won't revisit the basics of the feature here. Instead, what I want to focus on is bringing this new feature to an existing ASP.NET Web Forms application.

Why is this worth a blog post? Well, if you look at the design of the Help Pages feature, you'll see it uses MVC.

<Brakes squealing…>

Uh oh. You've got a Web Forms app and this feature depends on MVC. Time to build a hybrid app! Don't fear, Web Forms and MVC work just fine together.

Assuming you're starting with a Web Forms application which already has ASP.NET Web API added, here's all you need to do:

Step 1:

Use NuGet to install the Microsoft.AspNet.WebApi.HelpPage package. Since it's still prerelease, make sure you set "Include Prerelease" in the NuGet package manager, otherwise you won't find it. Installing the package will also bring in MVC and Razor, so there's no need to install those separately.


Step 2:

The Help Pages feature is configured as an MVC area, and you'll see it under the special "Areas" folder of your app.


Since we didn't start from an MVC application, though, we're missing the registration code that wires up MVC areas.

In "global.asax.cs", you'll need to add:

AreaRegistration.RegisterAllAreas();

You'll also likely need to add the using statement for the "System.Web.Mvc" namespace.
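As a minimal sketch, assuming your Web API routes are registered through the usual WebApiConfig class (the class and method names here may differ in your project), the updated Application_Start might look like this:

```csharp
using System;
using System.Web.Http;
using System.Web.Mvc; // brings AreaRegistration into scope

public class Global : System.Web.HttpApplication
{
    protected void Application_Start(object sender, EventArgs e)
    {
        // Wire up MVC areas so the HelpPage area gets registered.
        AreaRegistration.RegisterAllAreas();

        // Your existing Web API configuration (exact call depends on your project).
        WebApiConfig.Register(GlobalConfiguration.Configuration);
    }
}
```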

Step 3:

At this point, we're almost done. A stylesheet reference will be missing because we didn't start from an MVC application, but you can add that manually if you like.

In "Areas/HelpPage/Views/Share/_Layout.cshtml", add:

<link href="~/Content/Site.css" rel="stylesheet" />


Step 4:

That's it. You're done!

Requesting the "Help" controller, we now see:


 

So there you have it. The MVC based Web API Help Pages feature is now running in your Web Forms application.

Hope this helps,

Mark

Interacting with Databases for Web Developers in VS 2012


VS2012 is out, and with it comes a host of developer improvements for dealing with data. This post highlights a few key improvements that will affect you the most, and hopefully it will help you get started with the improvements in the Visual Studio tooling for interacting with databases. While the Data Tooling itself has undergone a lot of changes, this post is geared towards Web Application developers.

Changes in VS2012
  • VS2012 ships with the SQL 2012 engine.
  • VS2012 ships with enhanced SQL tooling (SSDT), which brings functionality from SSMS (SQL Server Management Studio) into VS.
    • As part of this integration, you will see a new window called “SQL Server Object Explorer”. In this window you can do advanced database management tasks, such as running sprocs, managing initial catalogs etc., which were not possible in “Database Explorer”.
    • Some videos to learn more about SSDT.
  • "SQL Server Express" is no longer installed with VS. Instead, VS ships with "SQL Server Express LocalDB".
    • LocalDB development is supported in VS2012 for .NET v4.0/v4.5 on Windows 8/Windows 7 – client and server SKUs.
    • LocalDB development is supported in VS2012 for .NET v3.5 on Windows 8 only (client and server SKUs).
  • Enhanced user experience when upgrading projects from VS2010 to VS2012.
    • This experience includes guidance to help you upgrade databases that were using SQL Express to LocalDB.
  • Auto loading of connection strings in Server Explorer.
    • If we detect that your connection string points to a database which exists, we load the connection string into Server Explorer so it's there when you want to use it. This is an improvement over Dev10, where you had to explicitly add the connection string.
Why LocalDB?

The quick pitch for LocalDB is the following: “It is very easy to install and it requires no management.” LocalDB runs as your account and not as a system-wide service (which is how SQL Express runs). This post from the SQL team goes into detail about the benefits.

With all the changes of moving away from SQL Express to LocalDB, nothing should change in terms of your application development. You should be able to use almost the same connection strings as you had with SQL Express; just change the Data Source from “.\SQLExpress” to “(LocalDb)\v11.0” and remove the User Instance flag, since LocalDB always runs as your account.
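As a rough illustration (the database name MyAppDb and the file name are placeholders), a SQL Express connection string and its LocalDB equivalent might look like this:

```xml
<!-- Before: SQL Server Express -->
<add name="DefaultConnection"
     connectionString="Data Source=.\SQLExpress;Initial Catalog=MyAppDb;Integrated Security=SSPI;AttachDBFilename=|DataDirectory|\MyAppDb.mdf;User Instance=true"
     providerName="System.Data.SqlClient" />

<!-- After: LocalDB (changed Data Source, User Instance flag removed) -->
<add name="DefaultConnection"
     connectionString="Data Source=(LocalDb)\v11.0;Initial Catalog=MyAppDb;Integrated Security=SSPI;AttachDBFilename=|DataDirectory|\MyAppDb.mdf"
     providerName="System.Data.SqlClient" />
```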

Initial Catalog vs AttachDbFileName

If you look at the connection string that the ASP.NET Web Forms and MVC templates use in VS 2012, you will notice that it looks like the following. This connection string is somewhat different from what you might have seen with web projects in VS2010, where we were only using AttachDbFileName. This change has nothing to do with LocalDB; rather, it exposes some of the common patterns that have existed when working with SQL Server.

<add name="DefaultConnection"
     connectionString="Data Source=(LocalDb)\v11.0;Initial Catalog=aspnet-MvcApplication18-20121022222325;Integrated Security=SSPI;AttachDBFilename=|DataDirectory|\aspnet-MvcApplication18-20121022222325.mdf"
     providerName="System.Data.SqlClient" />

Here is how these two values are used:

Initial Catalog: This entry is a key in the SQL master database which holds all of the databases created on an instance. In this case the instance is “(LocalDb)\v11.0”.

AttachDbFileName: This entry tells SQL the location of the .mdf file which holds the database information.

The SQL engine uses Initial Catalog to look up the database entry in its master list and loads the database file from the path specified by AttachDbFileName.

One small caveat

By surfacing this information, we also expose the developer to the following situation. Let’s say that you ran your application and created a database with the above connection string. If you then delete the database file (.mdf) from disk but do not change the connection string, SQL will look up the database by the Initial Catalog entry (which still exists) and try to load the database file from the path in AttachDbFileName; since the file no longer exists, you will get the following error.

“Cannot attach the file 'c:\users\foo\documents\visual studio 2012\Projects\MvcApplication18\MvcApplication18\App_Data\aspnet-MvcApplication18-20121022222325.mdf' as database 'aspnet-MvcApplication18-20121022222325'.”

To work around this error, do either of the following:

  • Change the Initial Catalog and AttachDbFileName values to something unique
  • Using SSDT, connect to the “(LocalDb)\v11.0” instance and delete the database entry specified by Initial Catalog

The AttachDbFileName entry exists so that the database file gets dropped in the App_Data folder of your application.

How-to videos for basic operations in VS tooling

 

Working with Entity Framework Code First
Cheat sheet of things to remember/do for Data in VS2012

This cheat sheet is useful when you are trying to figure out which LocalDB instance to use:

  • The default instance of LocalDB in VS2012 is "(LocalDb)\v11.0"
    • This means that whenever you connect to a LocalDB instance, you type in the above and connect using Windows Authentication to operate on your database
  • The default local database instance configured under Tools –> Options is "(LocalDb)\v11.0"
    • This means that any local database you create will be created on the "(LocalDb)\v11.0" instance
    • If you have SQL Express installed and you want to use SQL Express as the default local database server, you should change this option in VS. This is a global setting for VS
  • LocalDB does not run with the UserInstance=true setting
    • You will get an error when you run your application if your connection string still includes it
Helpful links/scripts 

I hope this helps you understand the changes in the Data Tooling experience in VS2012.

ASP.NET WebForms Model Binding – Overview


This is the first post in a series about Model Binding. In this post I will introduce the Model Binding integration in ASP.NET Web Forms, focusing on the history, benefits, principles and integration points of Model Binding. In the upcoming posts, I will go into the details of the features supported by this system.

What is Model Binding

Model Binding is a system which binds a value submitted from the client to a model on the server so that the model is available for inspection for validation rules and processing on the server. In essence the system has two fundamental functions – Binding & Validation.

In a Web Forms application, data binding (from client to server and vice versa) typically happens via data bound and data source controls. While this approach has the advantage of getting your application up and running easily, it does have some drawbacks for developers who want to tap into more modern ways of developing applications, such as doing validation using Data Annotations, the PRG pattern, the Repository pattern, cleaner page code-behind, etc.

History Behind Model Binding

Model Binding as a system has existed in many frameworks. Rails implements the core functions of binding and validation in its own way. ASP.NET MVC has had Model Binding support since its early stages. When ASP.NET MVC 2 was released, the team released a prototype of an “Extensible Model Binding system”. This prototype made it possible for a framework developer to write their own Model Binding system, should they choose to do so. (Of course, anyone in their right mind would not attempt it, but the ASP.NET team decided to do it anyway for Web Forms.)

In fact, this extensible Model Binding prototype is where the team drew much of its inspiration while implementing a Model Binding system that combines the flexibility of Model Binding with the power of controls. ASP.NET Web API also drew some inspiration from this extensible model when it added support for binding using different formatters.

Benefits of Model Binding

With the introduction of Model Binding, it is now much easier to build modern Web Forms applications. The following are the areas where this integration adds value:

  • Rich controls support
    • All the existing data bound controls such as GridView, ListView etc. have been updated to work with Model Binding, so you should be able to reuse the rich functionality of these controls
  • Validation (see the short sketch after this list)
    • Modern ways of validation using Data Annotation attributes
    • Easier to propagate validation errors from the business layer back to the UI on the page
    • Easier to integrate model-level validation errors (from an ORM such as EF or anything of your choice) when saving the record to the database
  • Dynamic Data support
    • Easier to add validation via Data Annotations and client side validation
    • Easier to customize the UI for columns using Data Annotations and Field Templates
      • Add rich UI such as a jQuery-based date/time picker for DateTime fields
    • Easier to customize the entities using Data Annotations and Entity Templates
  • Easier to embrace the following patterns
    • Unit Testing the application layer
    • PRG pattern
    • Repository pattern
    • Cleaner page code-behind methods. These server-side code files do not contain application/business logic and focus only on UI interactions
  • Extensible Model Binding system
    • Support for advanced binding scenarios such as complex types and ad hoc Model Binding
    • Customize the existing implementation to make commonly used scenarios, such as a master-details view, easier
    • Add more binders and value providers
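To make the validation bullet above concrete, here is a minimal sketch: the Product type, the ProductRepository data layer and the page name are placeholder assumptions, but the Data Annotations attributes, TryUpdateModel and ModelState shown are the pieces Model Binding uses.

```csharp
using System.ComponentModel.DataAnnotations;
using System.Web.UI;

public class Product
{
    public int Id { get; set; }

    [Required]
    [StringLength(100)]
    public string Name { get; set; }

    [Range(0, 10000)]
    public decimal Price { get; set; }
}

public partial class ProductsPage : Page
{
    private readonly ProductRepository _repository = new ProductRepository(); // placeholder data layer

    // Referenced from a data bound control via its UpdateMethod property.
    public void Products_UpdateItem(int id)
    {
        Product item = _repository.Find(id);
        TryUpdateModel(item);          // binds posted values and runs the Data Annotation rules
        if (ModelState.IsValid)
        {
            _repository.SaveChanges(); // only persist when validation succeeded
        }
    }
}
```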

I hope this helps you understand the bigger picture of Web Forms Model Binding and the value it brings to an application developer. As Scott Hanselman noted, this integration also brings ASP.NET MVC and Web Forms application development much closer together.

In the next post we will look at some high-level fundamentals and features of Model Binding in Web Forms.

Capturing Unhandled Exceptions in ASP.NET Web APIs with ELMAH


I'm a fan of using ELMAH to track unhandled exceptions in my ASP.NET applications. If you haven't tried ELMAH, you should definitely check it out. There's even an ELMAH NuGet package so it's trivial to install.

Now that I'm getting my feet wet with Web API, I'd like to have any Web API unhandled exceptions be directed to ELMAH as well so that I have a one stop shop for viewing all unhandled exceptions in my application. Without a bit of work though, this won't happen.

Here's why:

ELMAH records unhandled exceptions by tapping into the Application_Error event of ASP.NET applications, but when a Web API endpoint throws an unhandled exception, this event isn't triggered because the exception is intentionally swallowed higher in the Web API call stack. This enables Web API to send a response to the client like "500 - Internal Server Error". If the exception went unhandled, Web API wouldn't send any response to the client.

If you have a Web API endpoint that throws an unhandled exception, ELMAH shows nothing for it.


So how do we catch these unhandled Web API exceptions and route them to ELMAH?

One good option is to use an Exception Filter. If you're familiar with exception filters in MVC, it's the same idea, but instead of implementing System.Web.Mvc.IExceptionFilter, we need to implement System.Web.Http.Filters.IExceptionFilter (note the namespace difference), and creating a derived class from System.Web.Http.Filters.ExceptionFilterAttribute is an easy approach.

In our derived class, we'll simply override the OnException method and make an explicit log call to ELMAH.
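Here is a minimal sketch of such a filter (the class name is arbitrary); it assumes the ELMAH package is installed so that Elmah.ErrorSignal is available:

```csharp
using System.Web.Http.Filters;
using Elmah;

public class ElmahExceptionLoggerFilter : ExceptionFilterAttribute
{
    public override void OnException(HttpActionExecutedContext actionExecutedContext)
    {
        base.OnException(actionExecutedContext);

        // Signal the unhandled exception to ELMAH using the current HTTP context.
        ErrorSignal.FromCurrentContext().Raise(actionExecutedContext.Exception);
    }
}
```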

Then, we wire-up our custom exception filter globally by adding our filter:
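A sketch of the global registration, assuming the filter class from the previous snippet and the standard WebApiConfig class generated by the project template (your configuration class may be named differently):

```csharp
using System.Web.Http;

public static class WebApiConfig
{
    public static void Register(HttpConfiguration config)
    {
        // ...existing route configuration...

        // Register the ELMAH exception filter for all Web API controllers.
        config.Filters.Add(new ElmahExceptionLoggerFilter());
    }
}
```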

Now when one of our Web APIs throws an unhandled exception, ELMAH picks it up.


Hope this helps,
Mark

All about <httpRuntime targetFramework>


Background – on quirks and compatibility

The .NET Framework (including ASP.NET) strives to maintain near-100% compatibility when an existing framework is updated on a machine. We try to ensure as much as possible that if an application was developed and deployed against .NET Framework 4, it will just continue to work on 4.5. This normally means keeping quirky, buggy, or undesirable behaviors in the product between versions, as fixing them may negatively affect applications which were relying on those behaviors.

Consider creating a new Console Application project targeting .NET Framework 4 and putting the following in Program.cs:

class Program
{
    static void Main(string[] args)
    {
        List<int> list = new List<int>() { 1, 2, 3 };
        list.ForEach(i =>
        {
            Console.WriteLine(i);
            if (i < 3) { list.Add(i + 1); }
        });
    }
}

When run, the output of this application is 1, 2, 3, 2, 3, 3. Now change the project to target .NET Framework 4.5, recompile, and run again. The output is 1, followed by an exception: System.InvalidOperationException: Collection was modified; enumeration operation may not execute. What happened?

The contract of IEnumerator.MoveNext() specifies that the method shall throw an InvalidOperationException if the underlying collection changes while an enumeration is currently taking place. Indeed, if you use the foreach keyword instead of the List<T>.ForEach() method in the above sample, you will see the exception thrown regardless of target framework. This discrepancy between the foreach keyword and the ForEach() method is one example of a quirk. The feature team wanted to bring the ForEach() method behavior in line with the foreach keyword behavior, but doing so for all applications would have been a breaking change. So they require the application to opt in to the new 4.5 behavior by switching the target framework.

Given that the .NET Framework 4.5 is an in-place update to .NET Framework 4, how is this pulled off? If you look closely at the console application project, you'll see that it also contains one additional file App.config:

<configuration>
    <startup> 
        <supportedRuntime version="v4.0" sku=".NETFramework,Version=v4.0"/>
    </startup>
</configuration>

When the application is targeted to 4.5, Visual Studio modifies the configuration file. There are two related concepts in the <supportedRuntime> element: runtime version and target SKU. The .NET Framework 4.5 SKU rides on top of version 4.0 of the CLR, much like how the .NET Framework 3.0 and 3.5 SKUs ride on top of version 2.0 of the CLR.

<configuration>
    <startup> 
        <supportedRuntime version="v4.0" sku=".NETFramework,Version=v4.5"/>
    </startup>
</configuration>

This information is captured into an attribute and compiled into the executable:

[assembly: TargetFramework(".NETFramework,Version=v4.5", FrameworkDisplayName = ".NET Framework 4.5")]

When a component in the .NET Framework needs to decide whether to apply quirks behavior to a particular code path or whether it should use new 4.5-standards logic, it consults the [TargetFramework] attribute to see what framework version the application targets, hence what logic it expects to take place. (If the attribute is not present, the runtime assumes 4.0 quirks behavior. There is additional fallback and resolution logic, but it's not important to the topic at hand.)

<httpRuntime targetFramework>

But what if ASP.NET applications want to opt in to the new 4.5 behaviors? ASP.NET developers can't use the <supportedRuntime> element since there's no .exe the [TargetFramework] attribute can be compiled into. However, we do control the <httpRuntime> element, and we can use its values to make decisions on how we should configure the CLR before loading your application into memory.

To this effect, we introduced the targetFramework attribute on the <httpRuntime> element. When you create a new ASP.NET application using the 4.5 project templates, the following line will be present in Web.config:

<httpRuntime targetFramework="4.5" />

The effect of this attribute is twofold. First, it controls the CLR's quirks mode behavior, just like the <supportedRuntime> element does in a console application. Consider a handler ~/MyHandler.ashx whose contents are similar to the console application above.

<%@ WebHandler Language="C#" Class="MyHandler" %>

using System.Collections.Generic;
using System.Web;
public class MyHandler : IHttpHandler
{

    public void ProcessRequest(HttpContext context)
    {
        context.Response.ContentType = "text/plain";
        List<int> list = new List<int>() { 1, 2, 3 };
        list.ForEach(i =>
        {
            context.Response.Write(i + " ");
            if (i < 3) { list.Add(i + 1); }
        });
    }

    public bool IsReusable
    {
        get { return false; }
    }
}

If the targetFramework attribute reads "4.0", the output will be 1 2 3 2 3 3. If the targetFramework attribute reads "4.5", an InvalidOperationException will be thrown due to the collection having been modified during the ForEach() operation.

Second, <httpRuntime targetFramework="4.5" /> is a shortcut that allows the ASP.NET runtime to infer a wide array of configuration settings. If the runtime sees this setting, it will expand it out just as if you had written the following:

<configuration>
  <appSettings>
    <add key="aspnet:UseTaskFriendlySynchronizationContext" value="true" />
    <add key="ValidationSettings:UnobtrusiveValidationMode" value="WebForms" />
  </appSettings>
    <system.web>
      <compilation targetFramework="4.5" />
      <machineKey compatibilityMode="Framework45" />
      <pages controlRenderingCompatibilityVersion="4.5" />
    </system.web>
</configuration>

By inferring all of these settings from a single line, we help shrink the size of Web.config. Thus the file contains less runtime configuration and more application-specific configuration, such as connection strings.

I'll go over each of these in turn, explaining what they do.

<add key="aspnet:UseTaskFriendlySynchronizationContext" value="true" />

Enables the new await-friendly asynchronous pipeline that was introduced in 4.5. Many of our synchronization primitives in earlier versions of ASP.NET had bad behaviors, such as taking locks on public objects or violating API contracts. In fact, ASP.NET 4's implementation of SynchronizationContext.Post is a blocking synchronous call! The new asynchronous pipeline strives to be more efficient while also following the expected contracts for its APIs. The new pipeline also performs a small amount of error checking on behalf of the developer, such as detecting unanticipated calls to async void methods.

Certain features like WebSockets require that this switch be set. Importantly, the behavior of async / await is undefined in ASP.NET unless this switch has been set. (Remember: setting <httpRuntime targetFramework="4.5" /> is also sufficient.)
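As an illustration of the kind of code this switch enables (the page name, the ResultLabel control and the URL below are placeholders), a Web Form marked Async="true" in its @Page directive can register await-based work:

```csharp
using System;
using System.Net.Http;
using System.Web.UI;

public partial class Reports : Page
{
    protected void Page_Load(object sender, EventArgs e)
    {
        // PageAsyncTask accepts a Func<Task> in 4.5, so async/await flows naturally.
        RegisterAsyncTask(new PageAsyncTask(async () =>
        {
            using (var client = new HttpClient())
            {
                string payload = await client.GetStringAsync("http://example.org/data");
                ResultLabel.Text = payload.Length.ToString(); // ResultLabel is a Label defined in markup
            }
        }));
    }
}
```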

<add key="ValidationSettings:UnobtrusiveValidationMode" value="WebForms" />

Causes server controls to render data-val attributes in markup rather than send a snippet of JavaScript for each input element which requires validation. This provides a better extensibility story for customizing how client-side validation is performed. Depending on the complexity of the page, it can also significantly reduce the size of the generated markup.

<compilation targetFramework="4.5" />

Selects which version of the .NET Framework's reference assemblies are used when performing compilation. (Note: Visual Studio requires that this element be present in Web.config, even though we auto-infer it.)

<machineKey compatibilityMode="Framework45" />

Causes ASP.NET to use an enhanced pipeline when performing cryptographic operations. More information on the new pipeline can be found here (links to the first part of a three-part blog series).

<pages controlRenderingCompatibilityVersion="4.5" />

Similar to quirks mode, but instead of affecting core runtime behavior it affects how controls render markup. For example, consider the control <img runat="server" src="..." alt="" /> (note the empty alt tag). In earlier versions of ASP.NET, this stripped the empty alt tag and output <img src="..." />, which fails HTML validation. When the rendering compatibility version is set to "4.5", the empty alt tag is preserved, and the output is <img src="..." alt="" />.

<pages controlRenderingCompatibilityVersion> defines the default value of the Control.RenderingCompatibility property. The value can still be overridden on a per-control basis if desired.

Miscellaneous questions

Can I detect the target framework programmatically?

Yes! Take a look at the HttpRuntime.TargetFramework static property. Keep in mind that this property should only be queried from within an ASP.NET application. Any other access (such as via a console application) could result in an undefined return value.
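A quick sketch of querying it from code that runs as part of a request (the helper class here is just for illustration):

```csharp
using System;
using System.Web;

public static class TargetFrameworkHelper
{
    public static bool IsRunningAs45OrLater()
    {
        // HttpRuntime.TargetFramework reflects the <httpRuntime targetFramework> setting,
        // e.g. 4.5 when Web.config contains <httpRuntime targetFramework="4.5" />.
        return HttpRuntime.TargetFramework >= new Version(4, 5);
    }
}
```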

But what if I don't want the runtime to infer all of those configuration settings?

No problem! You can always explicitly change any setting. For example, you could put this in Web.config:

<system.web>
  <httpRuntime targetFramework="4.5" />
  <pages controlRenderingCompatibilityVersion="4.0" />
</system.web>

Even though <httpRuntime targetFramework="4.5" /> would normally imply <pages controlRenderingCompatibilityVersion="4.5" />, the runtime will notice that you have already explicitly set controlRenderingCompatibilityVersion and will respect your setting.

My hoster is displaying "Unrecognized attribute 'targetFramework'" errors!

This error means that you have created an ASP.NET 4.5 application but are trying to deploy it to a server that only has the .NET Framework 4 (not 4.5) installed. Many hosters offer ASP.NET 4.5 as an option. They may require you to contact them first so that they can enable 4.5 for your account.

How does this affect existing ASP.NET 4 applications?

There was no <httpRuntime targetFramework> attribute in .NET 4, hence existing ASP.NET 4 applications won't have this setting in Web.config. If the targetFramework attribute is not present, we assume a default value of "4.0" and run the application in quirks mode. Web developers can manually set <httpRuntime targetFramework="4.5" /> to opt-in to the new behaviors.


Wrapping Up

I hope I have conveyed the intent and the utility of the <httpRuntime targetFramework> setting. In the future, we envision that this can be a useful mechanism for moving applications forward, as application developers can simply set targetFramework to "5.0" or "6.0" to opt-in wholesale to whatever new behaviors those framework versions may bring.

As always, please feel free to leave feedback in the comments!

Web Publish Updates with Windows Azure SDK 1.8


We recently released new updates to the publishing experience in Visual Studio 2012 and Visual Studio 2010 as part of the Windows Azure SDK for .NET 1.8 release.  The main new features include web site project (WSP) publishing using the same new publishing workflow as web application projects (WAPs), precompile options for both WAP and WSP, per publish profile web.config transforms, and a preview of the selective publishing feature we'll be releasing in the next publish update.  You can get the new publish updates by installing the Windows Azure SDK for .NET 1.8 package from the Web Platform Installer.

 The first big feature of this publish update is full support for publishing your web site projects.  Previously, when you wanted to publish a WSP, you either needed to copy all of the files in your project to your hosting server every time you deployed or pick out just the changed files and manually update those.  If you wanted to use Web Deploy to incrementally deploy your web site project changes, you needed to set up a separate Web Deployment project in your solution.  All of that has now been replaced with the one unified publish dialog that you saw for WAPs in the Visual Studio 2012 release:

Since both WAPs and WSPs use this new publish dialog, there is now just one simple experience when you want to publish your project, regardless of its type.  This means that WSPs get all of the great publish features that were previously available only for WAPs, including:

  • Choose your desired publish method among Web Deploy, Web Deploy Package, FTP, File System, FPSE
  • Create multiple publish profiles (*.pubxml files)
  • Command line publishing with Web Deploy that can be used from a CI server
  • Web.config transforms
  • Incremental publishing of your database schema
  • Publishing your EF code first migrations
  • File and database preview

If you would like more details on these features, see http://blogs.msdn.com/b/webdev/archive/2012/06/15/visual-studio-2010-web-publish-updates.aspx.

There are some small differences, however, since WSPs are a completely different project type.  First, since WSPs do not have a project file and MSBuild needs one to perform the web publishing tasks, a new file called website.publishproj will be created in your web site.  Since it is a design-time file, it will not be published with your web site content.  The website.publishproj file is basically a stripped down MSBuild project file that contains only the properties needed for web site publishing.  You can see a sample of it here:

Another difference with WSPs is that they do not have a Properties page (or folder), so the PublishProfiles folder with any .pubxml files you create is instead stored in App_Data:

Lastly, you may notice when publishing a WSP that there is only a Debug configuration.  This is because WSPs usually don't have any dlls in the bin folder, and thus no pdbs either, so there is no difference between a Debug and a Release configuration.  However, if you reference a class library, then a Release configuration will become available, since the referenced class library does produce a dll.

 The next new feature in this release is the ability to specify precompile options for your WSP and WAP from the new publish dialog.  These are the same precompile options that were available in the old publish web site UI and WAP properties page, but now they can be conveniently set in the new publish workflow.  Clicking the Configure link next to Precompile during publishing brings up the Advanced Precompile Settings dialog:

 

Notice also the new File Publish option to Exclude files from the App_Data folder.  That was previously also part of the Properties page and has been moved to the new publish dialog for better accessibility.

Another new feature in this release is the ability to add web.config transforms on a per publish profile basis.  You can do this by first creating your publish profile.  Then right click on it in Solution Explorer and select Add Config Transform.

 

You will then see a Web.<publish profile name>.config file created just for that publish profile.  When you right click on that web.config transform file, you can also preview the changes that the transform would make, with a side-by-side diff of the original web.config and the transformed one:

 

Finally, this publish update includes a preview of a new feature that will let you selectively publish your files.  This is a feature we are still working on and will officially release in the future, but if you would like to try it out, here's how.  First, add the following registry key: HKCU\Software\Microsoft\VisualStudio\11.0\WebProjects\EnableSelectivePublish=1, or if you are using the Express SKU, HKCU\Software\Microsoft\VWDExpress\11.0\WebProjects\EnableSelectivePublish=1.  Restart VS if you had it open, and create a publish profile if you don't already have one.  Now when you right click on a file, you will see additional options to publish, compare or update the selected file:

 

When you click Publish <filename>, it will publish the file(s) you have selected using your currently active publish profile.  There are no prompts or dialogs.  It will just send that file to the remote server specified in your publish profile.  The Compare command will do a diff of the local version of your selected file with the version on your remote server.  And lastly the Update local file command will pull down the remote version of the file and replace your local version.  These commands are meant as a convenient way to deploy small changes that you make to one or a few files without having to go through the publish dialog.  We hope you find it useful and are eager to hear any feedback you have on the feature.

John Wen

ASP.NET Web API and HTTP Byte Range Support


Range requests are the ability in HTTP to request a part of a document based on one or more ranges. This can be used in scenarios where the client wants to recover from interrupted data transfers as a result of canceled requests or dropped connections. It can also be used in scenarios where a client requests only a subset of a larger representation, such as a single segment of a very large document for special processing. Ranges specify a start and an end given a unit. The unit can be anything, but by far the most common is “bytes”. An example of a range request asking for the first 10 bytes is as follows:

  GET /api/range HTTP/1.1
  Host: localhost:50231
  Range: bytes=0-9

An example asking for the first and last byte contains two ranges separated by comma as follows:

  GET /api/range HTTP/1.1
  Host: localhost:50231
  Range: bytes=0-0, -1

In this example the resource which we are doing range requests over contains the 26 letters of the English alphabet:

  HTTP/1.1 200 OK
  Content-Length: 26
  Content-Type: text/plain

  abcdefghijklmnopqrstuvwxyz

The response to a byte range request is a 206 (Partial Content) response. If only one range was requested then the response looks similar to a 200 (OK) response with the exception that it has a Content-Range header field indicating the range and the total length of the document:

  HTTP/1.1 206 Partial Content
  Content-Length: 10
  Content-Type: text/plain
  Content-Range: bytes 0-9/26

  abcdefghij

Note that the Content-Length header indicates the bytes actually in the response and not the total size of the document requested.

If more than one range was requested, then the response has the media type “multipart/byteranges” with a body part for each range:

  HTTP/1.1 206 Partial Content
  Content-Length: 244
  Content-Type: multipart/byteranges; boundary="57c2656a-9716-4ea0-9d3b-2f76cbac4885"

  --57c2656a-9716-4ea0-9d3b-2f76cbac4885
  Content-Type: text/plain
  Content-Range: bytes 0-0/26

  a
  --57c2656a-9716-4ea0-9d3b-2f76cbac4885
  Content-Type: text/plain
  Content-Range: bytes 25-25/26

  z
  --57c2656a-9716-4ea0-9d3b-2f76cbac4885--

Range requests that don’t overlap with the extent of the resource result in a 416 (Requested Range Not Satisfiable) with a Content-Range header indicating the current extent of the resource.

  HTTP/1.1 416 Requested Range Not Satisfiable
  Content-Range: bytes */26

In addition to using ranges as described above, range requests can be made conditional using an If-Range header field meaning “send me the following range but only if the ETag matches; otherwise send me the whole response.”
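For illustration, a conditional range request might look like the following (the ETag value here is just a placeholder):

  GET /api/range HTTP/1.1
  Host: localhost:50231
  Range: bytes=0-9
  If-Range: "abcdef123456"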

With the addition of the ByteRangeStreamContent class to ASP.NET Web API (available in latest nightly build, not RTM), it is now simpler to support byte range requests. The ByteRangeStreamContent class can also be used in scenarios supporting conditional If-Range requests although we don’t show this scenario in this blog.

The ByteRangeStreamContent is very similar to the already existing StreamContent in that it provides a view over a stream. ByteRangeStreamContent requires the stream to be seekable in order to provide one or more ranges over it. Common examples of seekable streams are FileStreams and MemoryStreams. In this blog we show an example using a MemoryStream but a FileStream or other seekable stream would work just as well.

The Range Controller

Below is the sample controller. It is part of the ASP.NET Web API samples where the entire sample project is available in our git repository.

public class RangeController : ApiController
{
    // Sample content used to demonstrate range requests
    private static readonly byte[] _content = Encoding.UTF8.GetBytes("abcdefghijklmnopqrstuvwxyz");

    // Content type for our body
    private static readonly MediaTypeHeaderValue _mediaType = MediaTypeHeaderValue.Parse("text/plain");

    public HttpResponseMessage Get()
    {
        // A MemoryStream is seekable allowing us to do ranges over it. Same goes for FileStream.
        MemoryStream memStream = new MemoryStream(_content);

        // Check to see if this is a range request (i.e. contains a Range header field).
        // Range requests can also be made conditional using the If-Range header field. This can be
        // used to generate a request which says: send me the range if the content hasn't changed;
        // otherwise send me the whole thing.
        if (Request.Headers.Range != null)
        {
            try
            {
                HttpResponseMessage partialResponse = Request.CreateResponse(HttpStatusCode.PartialContent);
                partialResponse.Content = new ByteRangeStreamContent(memStream, Request.Headers.Range, _mediaType);
                return partialResponse;
            }
            catch (InvalidByteRangeException invalidByteRangeException)
            {
                return Request.CreateErrorResponse(invalidByteRangeException);
            }
        }
        else
        {
            // If it is not a range request we just send the whole thing as normal
            HttpResponseMessage fullResponse = Request.CreateResponse(HttpStatusCode.OK);
            fullResponse.Content = new StreamContent(memStream);
            fullResponse.Content.Headers.ContentType = _mediaType;
            return fullResponse;
        }
    }
}

 

The first thing we check is whether the incoming request is a range request. If it is, we create a ByteRangeStreamContent and return that; otherwise we create a StreamContent and return that. The ByteRangeStreamContent constructor throws an InvalidByteRangeException if no overlapping ranges are found, so we catch that and create a 416 (Requested Range Not Satisfiable) response.

Trying It Out

Running the sample creates a set of range requests. We write the corresponding responses to the console as follows:

  Full Content without ranges: 'abcdefghijklmnopqrstuvwxyz'

  Range 'bytes=0-0' requesting the first byte: 'a'

  Range 'bytes=-1' requesting the last byte: 'z'

  Range 'bytes=4-' requesting byte 4 and up: 'efghijklmnopqrstuvwxyz'

  Range 'bytes=0-0, -1' requesting first and last byte:
  --04214a40-a998-4b9e-a564-c21955bd36db
  Content-Type: text/plain
  Content-Range: bytes 0-0/26

  a
  --04214a40-a998-4b9e-a564-c21955bd36db
  Content-Type: text/plain
  Content-Range: bytes 25-25/26

  z
  --04214a40-a998-4b9e-a564-c21955bd36db--

  Range 'bytes=0-0, 12-15, -1' requesting first, mid four, and last byte:
  --b1d1d766-c424-49cb-9843-dd741be35f4c
  Content-Type: text/plain
  Content-Range: bytes 0-0/26

  a
  --b1d1d766-c424-49cb-9843-dd741be35f4c
  Content-Type: text/plain
  Content-Range: bytes 12-15/26

  mnop
  --b1d1d766-c424-49cb-9843-dd741be35f4c
  Content-Type: text/plain
  Content-Range: bytes 25-25/26

  z
  --b1d1d766-c424-49cb-9843-dd741be35f4c--

  Range 'bytes=100-' resulted in status code 'RequestedRangeNotSatisfiable' with
  Content-Range header 'bytes */26'

Have fun!

Henrik


Model Binding Fundamentals


In the previous post we looked at an overview of what Model Binding is and the benefits it brings for Web Forms developers. In this post we will look at the basic fundamentals and how the Model Binding system works with controls.

Data Binding with Model Binding

Up until ASP.NET 4.0, data binding in controls happened through data bound controls, such as GridView, bound to data source controls such as ObjectDataSource. We will use this as a reference to understand the flow in the Model Binding system. A lot of the implementation derives inspiration from ObjectDataSource.

The following snippet shows how a GridView calls the Model Binding system to get a list of products.

<asp:GridView ID="getProduct" runat="server" ItemType="Products"
    UpdateMethod="getProduct_UpdateItem"
    SelectMethod="getProduct_GetData" />

public IQueryable<Products> getProduct_GetData()
{
    return _context.Products;
}

// The id parameter name should match the DataKeyNames value set on the control
public void getProduct_UpdateItem(int id)
{
    Products item = null;

    TryUpdateModel(item);
    if (ModelState.IsValid)
    {
        // Save changes here, e.g. MyDataLayer.SaveChanges();
    }
}

 

At the heart of the Model Binding system is a data source called ModelDataSource and its ModelDataSourceView. As a developer, you never explicitly bind the GridView to the ModelDataSource. The data source is an implementation detail which takes values from the control, passes them to the Model Binding system to perform the binding and validation against the model, and gives the result back to the control/page.

Page Execution Flow

Now that we know there is a data source involved at the implementation level of the Model Binding system, let’s look at the flow from the control to the Model Binding system and back. At a very high level this is what happens:

  • GridView calls the SelectMethod to get the values
  • This initializes the ModelDataSource, which calls the ExecuteSelect method on ModelDataSourceView
  • The ModelDataSourceView calls the DefaultModelBinder
  • DefaultModelBinder looks through all the ModelBinderProviders registered in the system and tries to get the ModelBinder that can provide the value for a given type
  • The selected ModelBinder gets called to perform the binding of values to the model
  • The ModelBinder calls the ValueProvider to get the value
    • This is useful if you want to specify that the value is coming from a control, the query string or another collection

I wanted to put this information out now so that you can get an idea about the pluggability points that exist in the Model Binding system. In later posts we will look into more of the details.

Model Binders and Model Binder Providers

The initialization of the Model Binding system registers the following model binder providers. These partition the responsibilities of the original DefaultModelBinder, making it easy to consume them from your own custom binder or to replace their individual responsibilities. For example, changing how dictionaries are bound application-wide now involves replacing a single provider rather than rewriting the DefaultModelBinder. The binders included in-box are, in order:

  • TypeMatchModelBinderProvider – If the incoming value is already typed correctly for the target model (e.g. incoming value is string, property to bind is string), short-circuits the entire process and just returns the string.
  • BinaryDataModelBinderProvider – Handles binding base-64 encoded input to byte[] and System.Data.Linq.Binary models.
  • KeyValuePairModelBinderProvider – Handles KeyValuePair<TKey, TValue>. Consumed by the dictionary binder.
  • ComplexModelDtoModelBinderProvider – Handles complex models. Consumed by the mutable object binder. More info on this type in the tutorial at the end of this document.
  • ArrayModelBinderProvider – Handles T[].
  • DictionaryModelBinderProvider – Handles IDictionary<TKey, TValue>.
  • CollectionModelBinderProvider – Handles IEnumerable<T>.
  • TypeConverterModelBinderProvider – Handles simple type conversions, e.g. incoming value is a string and property to bind is an integer.

Value Providers

The following are the value providers you can use to specify where the Model Binding system should look when trying to get a value; a short example follows the table.

Value Provider    Description

Form              The value is retrieved from the Form collection
Control           The value is retrieved from the specified control
QueryString       The value is retrieved from the QueryString collection
Cookie            The value is retrieved from the Cookie collection
Profile           The value is retrieved from the Profile collection
RouteData         The value is retrieved from the Route collection
Session           The value is retrieved from the Session collection
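As a small sketch of how a value provider is selected in code (the Product type, the _context field and the page name are placeholder assumptions), the value provider attributes in System.Web.ModelBinding are applied to select method parameters:

```csharp
using System.Linq;
using System.Web.ModelBinding;

public partial class ProductsPage : System.Web.UI.Page
{
    private readonly MyDataContext _context = new MyDataContext(); // placeholder data context

    // [QueryString] tells Model Binding to read "category" from the query string,
    // e.g. ~/Products.aspx?category=books. [Control], [Cookie], [RouteData], etc. work the same way.
    public IQueryable<Product> GetProducts([QueryString("category")] string category)
    {
        IQueryable<Product> products = _context.Products;
        return category == null ? products : products.Where(p => p.Category == category);
    }
}
```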

In the next few posts we will look at how to handle basic create/update/delete/filtering cases, and then look at extending the Model Binding system using these concepts.

Hope you will find this useful!!!

The new Facebook application template and library for ASP.NET MVC


If you’re looking to build a Facebook App using ASP.NET MVC, then you’re in the right place. With the release of ASP.NET and Web Tools 2012.2, we’ve added lots of updates to our Facebook Application template and library since we first introduced them in the Preview.

The library, Microsoft.AspNet.Mvc.Facebook, can be downloaded as an independent NuGet package and it contains everything you need to create a Facebook Canvas application using ASP.NET MVC. It’s already included by default in the Facebook Application template to give you a quick jump start on building a Canvas Page.

Please refer to the tutorial for instructions on how to set up the Facebook Application template. In this post I’ll mainly focus on the design/API changes that we made from the Preview to RC. This will give you a pretty good idea of how the new APIs work in the updated Facebook Application template.


Introducing FacebookContext

Before RC, we used to model bind the User and other Facebook objects as parameters. The problem with model binding these parameters is that we needed to call the Facebook Graph API to get the values, and the model binders in MVC don’t support async. That’s why in RC we’re replacing them with a FacebookContext parameter from which you can access the user ID, access token and the parsed signed request. More importantly, the FacebookContext has an instance of FacebookClient which can be used to make asynchronous calls to Facebook to get the values for the User and other Facebook objects.

image

The instance of FacebookClient is populated with the required information such as AppId, AppSecret and AccessToken, so it’s ready to use.

public async Task<ActionResult> Index(FacebookContext context)
{
    if (ModelState.IsValid)
    {
        MyAppUser user = await context.Client.GetCurrentUserAsync<MyAppUser>();
        return View(user);
    }

    return View("Error");
}

FacebookClient Extensions

We’ve added a number of extension methods to FacebookClient to make the common tasks, such as getting the current user and his/her friends, simpler.


In fact, most of the extension methods above are based on GetFacebookObjectAsync<TFacebookObject>(string objectPath). If you’re familiar with the Facebook C# SDK, you might be wondering how it is different from GetTaskAsync<TResult>(string path). The main difference between them is that GetFacebookObjectAsync<TFacebookObject>(string objectPath) will automatically generate the “fields” query parameter when making the request, so you get back only the properties you need. It reduces the payload size, and sometimes this can be quite significant. For example, suppose you have the following type named “MyUser”.

public class MyUser
{
    public string Id { get; set; }
    public string Name { get; set; }
    public string Birthday { get; set; }
}

GetFacebookObjectAsync<MyUser>(“me”) will get back something like the following.

{
"id": "1234567",
"name": "sample name",
"birthday": "01/01/2000"
}

but GetTaskAsync<MyUser>(“me”) will get back the full payload. The difference can be even bigger when you’re retrieving a list of friends.

{  
"id": "1234567",
"name": "sample name",
"first_name": "sample name",
"middle_name": "sample name",
"last_name": "sample name",
"link": "https://www.facebook.com/sample",
"username": "sample",
"birthday": "01/01/2000",
"location": 
    {
"id": "123456789",
"name": "Redmond, Washington"
    },
"quotes": "Very long quotes...................",
"gender": "male",
"email": "sample@sample.com",
"timezone": -4,
"locale": "en_US",
"verified": true,
"updated_time": "2012-10-11T00:25:53+0000"
}

Being able to automatically generate the “fields” query parameter also makes it easier to retrieve connections (Picture, Friends, Likes, etc.) as fields. For instance, you can get back the user information and what the user likes (a connection) in a single request by defining the types below. GetFacebookObjectAsync<MyUser> will automatically take care of generating the appropriate “fields” query to include the Likes connection (assuming you have the user_likes permission).

public class MyUser
{
    public string Id { get; set; }
    public string Name { get; set; }
    public string Birthday { get; set; }
    public FacebookGroupConnection<Like> Likes { get; set; }
}

public class Like
{
    public string Name { get; set; }
    public string Category { get; set; }
}

Modeling Facebook Objects

Unlike before, where your Facebook user/object types were required to derive from FacebookUser/FacebookObject, now you can use any POCO type as long as it contains public properties with matching names. We’ll deserialize the Facebook response using Json.NET. For example, the following Facebook response for a user will match the “MyUser” type below.

{
    "id": "1234567",
    "name": "sample name",
    "link": "https://www.facebook.com/sample"
}

Note that the property deserialization is case-insensitive.

public class MyUser
{
    public string Id { get; set; }
    public string Name { get; set; }
    public string Link { get; set; }
}

Defining Connections

As noted previously, now you can include connections (Picture, Friends, Likes, etc) as properties using FacebookConnection<T> and FacebookGroupConnection<T>.


public class MyAppUser
{
    public string Name { get; set; }
    public string Email { get; set; }

    public FacebookConnection<FacebookPicture> Picture { get; set; }
    public FacebookGroupConnection<MyAppUserFriend> Friends { get; set; }
}

public class FacebookPicture
{
    public string Url { get; set; }
}

public class MyAppUserFriend
{
    public string Name { get; set; }
    public string Link { get; set; }

    public FacebookConnection<FacebookPicture> Picture { get; set; }
}

These types will match exactly how Facebook returns the connection data in JSON.

{
"name": "sample name",
"email": "sample@sample.com",
"id": "1234567",
"picture": {
"data": {...}
    },
"friends": {    
"data": [      
            {...},
            {...},
            {...}
        ]
    }
}

Property Attributes

Since we use Json.NET to do the deserialization, you can use JsonProperty/JsonIgnore to rename/exclude a property.

public class MyAppUser
{
    [JsonIgnore] // This ignores the property
    public string MyProperty { get; set; }

    [JsonProperty("picture")] // This renames the property to picture.
    public FacebookConnection<FacebookPicture> ProfilePicture { get; set; }
}

In addition, you can apply modifiers to the connections using FacebookFieldModifier. For example, you can get the URL to a larger profile picture by adding the modifier “type(large)” to Picture. Go to the Facebook Graph API documentation to see the modifiers that are available. When GetFacebookObjectAsync<TFacebookObject> makes the request, these modifiers will be added to the “fields” query parameter.

public class MyAppUser
{
    [FacebookFieldModifier("type(large)")] // This sets the picture size to large.
    public FacebookConnection<FacebookPicture> Picture { get; set; }
}

public class FacebookPicture
{
    public string Url { get; set; }
}

Facebook Authorization

FacebookAuthorizeAttribute can be used just like before to ensure that the signed_request parameter (sent by Facebook) is valid before invoking an action. Additionally, you can require a set of permissions to be granted before reaching the action by passing them into the attribute. In case the authorization fails, either because of invalid signed request or missing permissions, the users will be redirected to a Facebook OAuth dialog asking them to login or grant the required permissions.

Note that FacebookAuthorizeAttribute is not an authorization filter anymore; the actual authorization is done by FacebookAuthorizeFilter. That’s why it’s important to register the FacebookAuthorizeFilter globally to enable this functionality.

public static class FacebookConfig
{
    public static void Register(FacebookConfiguration configuration)
    {
        // Other settings removed for clarity...

        // Adding the authorization filter to check for Facebook signed requests and permissions
        GlobalFilters.Filters.Add(new FacebookAuthorizeFilter(configuration));
    }
}

Having a global authorization filter allows us to combine the permissions declared on both the controller and the action, which means the user will see the OAuth dialog once instead of twice for the “Profile” action below when the permissions are missing.

[FacebookAuthorize("email")]
public class UserController : Controller
{
    [FacebookAuthorize("user_photos")]
    public async Task<ActionResult> Profile(FacebookContext context)
    {
        // Implementation removed for clarity
    }
}


AuthorizationRedirectPath

When permissions are missing, you now have the option of showing an “info” page before redirecting the user to the OAuth dialog. On that page you can explain why your app requires certain permissions, so that users are more likely to grant them. To do that, add the following to your web.config, and the user will be redirected to Home/Permissions when authorization fails due to missing permissions.

<appSettings>
  <add key="Facebook:AuthorizationRedirectPath" value="Home/Permissions" />
</appSettings>

On the action that is receiving the redirect, you can use FacebookRedirectContext parameter to access information like the required permissions and the RedirectUrl to the Facebook OAuth dialog.

public class HomeController : Controller
{
    // Other actions removed for clarity...

    // This action will handle the redirects from FacebookAuthorizeFilter when
    // the app doesn't have all the required permissions specified in the FacebookAuthorizeAttribute.
    // The path to this action is defined under appSettings (in Web.config) with the key 'Facebook:AuthorizationRedirectPath'.
    public ActionResult Permissions(FacebookRedirectContext context)
    {
        if (ModelState.IsValid)
        {
            return View(context);
        }

        return View("Error");
    }
}

@using Microsoft.AspNet.Mvc.Facebook
@model FacebookRedirectContext
 
@if (Model.RequiredPermissions.Length > 0)
{
    <h3>You need to grant the following permission(s) on Facebook to view this page:</h3>
    <ul>
        @foreach (var permission in Model.RequiredPermissions)
        {
            <li>@permission</li>
        }
    </ul>
    <a class="buttonLink" href="@Html.Raw(Model.RedirectUrl)" target="_top">Authorize this application</a>
}

Facebook Realtime Update Support

Out of the box, we provide you with an abstract FacebookRealtimeUpdateController, which is a Web API controller that will handle all the HTTP requests sent by Facebook Realtime Update service. It will take care of verifying the subscription and validating the integrity of the payload by checking the X-Hub-Signature HTTP header. All you need to provide is a verify token and your business logic to handle the update. Here is a sample implementation of FacebookRealtimeUpdateController.

public class UserRealtimeUpdateController : FacebookRealtimeUpdateController
{
    private readonly static string UserVerifyToken = ConfigurationManager.AppSettings["Facebook:VerifyToken:User"];
 
    public override string VerifyToken
    {
        get
        {
            return UserVerifyToken;
        }
    }
 
    public override Task HandleUpdateAsync(ChangeNotification notification)
    {
        if (notification.Object == "user")
        {
            foreach (var entry in notification.Entry)
            {
                // Your logic to handle the update here
            }
        }

        // Return a completed task since there is no asynchronous work left to do.
        return Task.FromResult<object>(null);
    }
}

Note that you can have multiple custom FacebookRealtimeUpdateControllers to handle different subscriptions (Users, Permissions, etc.) with different verify tokens.

FacebookConfiguration

The static FacebookSettings class has been replaced by FacebookConfiguration. Notice that many components, such as FacebookAuthorizeFilter, take an instance of FacebookConfiguration in the constructor, which makes unit testing easier. For global access to FacebookConfiguration within the application, you can use GlobalFacebookConfiguration.Configuration, which is a singleton instance.

image

LoadFromAppSettings

This method loads the following properties from appSettings.

  • AppId – this property is required.
  • AppSecret – this property is required.
  • AppNamespace – this property is required if you provided a namespace to the app you created on https://developers.facebook.com/apps.
  • AppUrl – this property is optional. By default it’s set to https://apps.facebook.com/{AppId} or https://apps.facebook.com/{AppNamespace} when the AppNamespace is set. The only time you might want to set this is if Facebook changes their app URL structure.
  • AuthorizationRedirectPath – this property is optional. See the AuthorizationRedirectPath section above for more information.
<appSettings>
  <add key="Facebook:AppId" value=""/>
  <add key="Facebook:AppSecret" value=""/>
  <add key="Facebook:AppNamespace" value=""/>
  <add key="Facebook:AppUrl" value=""/>
  <add key="Facebook:AuthorizationRedirectPath" value=""/>
</appSettings>
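
For reference, here is a minimal sketch of how these settings might be wired up at startup. It assumes LoadFromAppSettings is called on the same FacebookConfiguration instance that the authorization filter uses; the exact call site in the template may differ.

using System.Web.Mvc;
using Microsoft.AspNet.Mvc.Facebook;

public static class FacebookConfig
{
    public static void Register(FacebookConfiguration configuration)
    {
        // Load AppId, AppSecret, AppNamespace, AppUrl and AuthorizationRedirectPath
        // from the <appSettings> keys shown above.
        configuration.LoadFromAppSettings();

        // Register the global authorization filter with the same configuration instance.
        GlobalFilters.Filters.Add(new FacebookAuthorizeFilter(configuration));
    }
}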

ClientProvider property

You can set this property to customize how the FacebookClient is created. GlobalFacebookConfiguration.Configuration uses an instance of DefaultFacebookClientProvider which creates clients that use Json.NET serializers.

PermissionService property

You can set this property to change how current user permissions are retrieved. GlobalFacebookConfiguration.Configuration uses an instance of DefaultFacebookPermissionService which calls into Facebook Graph API to retrieve the current user permissions.

 

Well, that concludes our overview of the new APIs in the Facebook Application library. I would encourage you to install the ASP.NET and Web Tools 2012.2 and start playing with the Facebook Application template. Please do send us your feedback and feel free to use our CodePlex site to report any issues you might find.

Have fun building Facebook Apps with ASP.NET MVC!

Yao

Knockout Intellisense in ASP.NET and Web Tools 2012.2 RC


WTE (Web Tools Extension) 1.2 RC is part of the ASP.NET and Web Tools 2012.2 RC and it’s available for download from http://www.microsoft.com/download/details.aspx?id=36053.

Knockout Intellisense is an exciting new feature in WTE 1.2 RC.  Knockout is a declarative JavaScript MVVM system for client-side data binding.  See http://knockoutjs.com/ for complete information and http://learn.knockoutjs.com/ for tutorials. 
Knockout Intellisense allows you to code Knockout quickly and accurately, and custom bindings are fully supported!

With WTE 1.2 RC installed, you can utilize the Knockout Intellisense feature on any web page by:

  • Loading Knockout-n.n.n.js or Knockout-n.n.n.debug.js
  • Defining a view model in JavaScript as an object or a function
  • Calling ko.applyBindings(viewModel) after the page is loaded

Loading Knockout

Add Knockout to an existing project by selecting “Manage Nuget Packages…”:

Search online for “Knockout” and select “Knockoutjs”:

Press “Install”

Drag the Knockout JavaScript file from Scripts to the head of your web page:

Note: to use Knockout with Bundling and Minification, see the “Working with Bundling and Minification” topic below.

Knockout does not require jQuery, but the two work great together.

Defining a View Model

A view model can be defined in one of two ways; see http://knockoutjs.com/ for complete details.

Declare the data structure you want to use as a JavaScript object:

    <script>
      var viewModel = {
        fieldOne: "string",
        fieldTwo: 12345,
        fieldThree: null,
        fieldFour: ko.observable("two way data-binding"),
        fieldFive: ko.observableArray([1,2,3])
      };
    </script>

 Or, declare a function with members describing your data:

    <script>
      function viewModel() {
        this.detailsEnabled = ko.observable(false);
          this.enableDetails = function () {
            this.detailsEnabled(true);
          };
          this.disableDetails = function () {
            this.detailsEnabled(false);
          };
      };
    </script>

Applying Bindings

If you’ve described your view model as an object, bind it using the syntax:

    ko.applyBindings(viewModel);

If you’ve declared your view model as a function, use the syntax:

    ko.applyBindings(new viewModel());

Note: In either case, it’s critical to bind the view model after the page data has been loaded.  You can do this by either declaring a script block at the end of the page:

    …
    <script>
      …
      ko.applyBindings(viewModel);
    </script>
    </body>
    </html>

Or by applying the binding in a jQuery document.ready function or similar method:

    <script>
      $(function () {
        function viewModel() {
          …
        }
        ko.applyBindings(new viewModel());
      });
    </script>

Using Knockout Intellisense

Once you’ve defined and applied your view model, you are ready to work with Knockout Intellisense. 
Add a data binding to an HTML element by using the new “db” snippet.

Place your cursor in an opening HTML tag, type <space>db<tab><tab> and get:

   

Next, begin typing the name of a binding (custom bindings are fully supported) and a completion list will appear:

   

Complete your binding with a colon. Press Control-J or Control-Space if you’d like to see a full list of available values; applicable values will appear automatically once you start typing:

   

Note that Knockout bindings are highlighted to make them stand out on a page:

   

Working with Bundling and Minification

To add Knockout to an existing project utilizing bundling and minification, such as an application created from the ASP.NET Web Forms Application template:

  • Add the Knockoutjs package to the project as described above
  • Open the file Scripts\_references.js and drag Knockout-n.n.n.js onto the document window.  This will add a script reference definition:
    /// <reference path="knockout-2.2.0.js" />
  • While you have _references.js open, ensure that your jQuery and other file names exactly match the installed file names.  If you have updated packages, these may be out of sync and you will not benefit from accurate Intellisense.
  • Open App_Start\BundleConfig.cs and add the following to the function “RegisterBundles”:
       bundles.Add(new ScriptBundle("~/bundles/knockout").Include("~/Scripts/knockout-*"));
  • In an asp:PlaceHolder in the head of the master page, add:
        <%: Scripts.Render("~/bundles/knockout") %>
  • Remember to declare your view model and call ko.applyBindings after your page is loaded.

Caveats

Bindings for “foreach”, “with” and “template”

Three special bindings, “foreach”, “with” and “template”, are not correctly implemented in WTE 1.2 RC.

The “foreach” and “with” bindings should each establish a binding context, such that values within the context should refer to nested bindings. In the RC, the top-level context is used within the binding context rather than the nested context, so values presented in the value completion list are invalid.
This is a known RC issue and will be fixed in RTM.

Variables such as $parent and $data will not point to the correct contextual object for Intellisense.  Of course, these features work normally at runtime.

The “template” binding should not show a list of values from the view model at all; rather, it uses an object which describes a named template and its associated data.  In the RC, “template” displays an ordinary top-level context value list, which is invalid.
This is a known RC issue and will be fixed in RTM.

Containerless Control Flow Syntax

The “if”, “ifnot”, “foreach” and “with” bindings can be expressed in containerless flow control comments, like:

    <!-- ko foreach: myItems -->
    <li>Item <span data-bind="text: $data"></span></li>
    <!-- /ko -->

WTE 1.2 RC does not evaluate these comments and they are ignored.  Again, “foreach” and “with” do not establish binding contexts in this case and so the values in the value completion list will be invalid.
This is a known RC issue and will be fixed in RTM.

SPA Application Template

WTE 1.2 RC comes with a new MVC4 application template, the Single Page Application:

   

This template makes extensive use of Knockout in Views/Home/Index.cshtml.  However, most of the binding expressions are nested within a binding context established by a “foreach” binding, so the value Intellisense provided is invalid.  (See the earlier caveat.) 
This is a known RC issue and will be fixed in RTM.

Troubleshooting

If you get no colorization of Knockout binding expressions, then the Knockout-n.n.n.js file was not found among the scripts or references.  Double check the “Loading Knockout” instructions above.

If you get colorization and binding name Intellisense, but not value Intellisense, then the ko.applyBindings call, which establishes the view model to use, was not found.  Double check the “Defining a View Model” instructions above.

If you’re having trouble with Bundling and Minification, more information is available at http://go.microsoft.com/fwlink/?LinkId=254726.

If you see the following values in the value completion list rather than the view model values that you expect:

   

…then you have declared your view model as a function, but called
    ko.applyBindings(viewModel)
instead of
    ko.applyBindings(new viewModel())
You are not the first.  :-)

 

 

CSS Auto-Sync and JavaScript Selection Mapping in Page Inspector


With the release of ASP.NET and Web Tools 2012.2 RC (details here), we have added a couple of new features to Page Inspector in Visual Studio 2012, namely CSS Auto-Sync and JavaScript Selection Mapping.

I explain these features in the context of a MVC 4 Single Page Application (SPA app) below, which is a new template available in this release. But the Page Inspector improvements are available for all the other kinds of web projects as well, such as other types of MVC projects and web forms projects.

 

CSS Auto-Sync

Let’s get started by creating a new MVC 4 SPA application using VS 2012.

With VS 2012, once the application is created, you can right-click the MVC project and select ‘View in Page Inspector’, and the page will load in the Page Inspector browser. You can click ‘Inspect’, and as you inspect various elements on the page (see figure 1), Page Inspector provides you with a visual representation of the element in the page’s DOM tree, lists the CSS properties for the element in the Styles pane, and performs Source Code Selection Mapping, i.e. range selection in the source file that generated the HTML element (for non-dynamically generated elements).

 

Figure 1 : Inspecting the paragraph containing the ‘Sign Up’ link

PI-Inspect

 

Similar to other browser tools, you can change values in the Page Inspector DOM Explorer, and the browser automatically reflects the changes. You can click on a selector in the Styles pane and that will take you directly to the rule in the CSS file (See figure 2).

 

Figure 2 : Click on a selector to take you directly to the rule in the CSS file

PI-SelectStyles

 

Editing styles in the Styles pane of the DOM Explorer was useful to try out different styles, but there were limitations. If you made a bunch of changes in the Styles pane and wanted to save those changes, you had to manually port those changes over to the CSS file. You could only edit existing rules, but could not add new rules.

New CSS Auto-Sync feature –

With this release, Page Inspector now has a new CSS Auto-Sync feature that allows you to edit the CSS file directly and as you type, the changes will get automatically sync-ed back to the Page Inspector browser. For example, I can add a new property font-style: italic in the CSS file and you can see that the Page Inspector browser immediately reflects the change to the ‘Sign Up’ link (see figure 3). I can also change the color of the link and by typing color: #, the CSS color picker gets invoked and as I try out different colors in the color picker, those get immediately pushed to the browser. (Note: With the RC, to notice the change in color, you might need to click once on the browser to clear selection. This will get fixed for the RTM release and we will automatically clear selection.)

Note that there’s a new icon with two blue arrows at the top right corner of the Page Inspector tool window. This new icon serves as a visual indicator that CSS Auto-Sync is working correctly and that the page you are viewing is in-sync with your CSS file.

 

Figure 3 – CSS Auto-Sync causes edits in your CSS file to get automatically sync-ed to the Page Inspector browser

PI-ColorPicker

 

Thus, you can now leverage the full power of the CSS editor to make changes in your CSS file and be able to immediately view changes in Page Inspector. If you have used Page Inspector in the past, you might be familiar with the Ctrl-Alt-Enter shortcut, and you can still use that at any time when editing the CSS file. Hitting Ctrl-Alt-Enter will cause any unsaved changes to get saved and refresh the page in the Page Inspector browser.

The CSS Auto-Sync feature is supported only when editing CSS files at the moment and is not supported when editing style blocks and style attributes (inline styles).

 

JavaScript Selection Mapping

With this release, Page Inspector now has the capability to map items that were dynamically added to the page back to the corresponding JS code. Continuing with our example SPA application above, let’s go ahead and sign up as a new user. Once you sign up, the application logs you in and creates a to-do list with a couple of sample items.

If you now switch to ‘Inspect’ mode in the Page Inspector browser and inspect the elements inside the to-do list, you will notice that the feedback in the Page Inspector browser is now in orange with “JS” next to the element name (see figure 4). The new ‘Call Stack’ pane lights up with an orange underline to indicate that a call stack is available for this element, and the source code selection mapping takes you to the line of JS user code that created the element. This happens because Page Inspector detected the item was dynamically added to the DOM using JavaScript.

 

Figure 4 – JavaScript Selection Mapping for dynamically created elements

PI-JSMapping

 

JS Call Stack -

You can click on the item in the Page Inspector browser, and switch to the Call Stack pane and Page Inspector will show you the call stack for how that element was created, containing JS calls from both user code and library code (grayed-out). You can use arrow keys to navigate up and down the call stack pane and see the corresponding line of JS source show up in the editor.

While the source code selection mapping takes you to the topmost call in the call stack coming from user code, when you first switch to the Call Stack pane, you might notice that the corresponding item is not selected in the Call Stack pane. This should get fixed for the RTM release. For the RTM release, we are also considering other improvements to the Call Stack pane, such as adding the ability to hide and/or collapse the calls made through library code.

 

Figure 5 – Call Stack pane

PI-CallStackPane

 

 

- Bala Chirtsabesan.

‘Paste JSON As Classes’ in ASP.NET and Web Tools 2012.2 RC


‘Paste JSON As Classes’ is a cool feature in ASP.NET and Web Tools 2012.2 RC. This feature will help you generate strongly typed classes in C# or VB.NET from valid JSON text.

With ASP.NET and Web Tools 2012.2 RC installed, you will see a new menu option like the one below, for C# and VB.NET Website and Web Application projects only. This new menu option will be enabled for .cs and .vb file extensions inside these projects:

clip_image001

 

JSON to C#/VB.NET class conversion

To use this feature, just copy sample JSON text and select “Paste JSON As Classes” inside a .vb or .cs file. This feature uses the Newtonsoft JSON parser to parse JSON text from the clipboard. Once the Newtonsoft JSON parser validates the clipboard data as valid JSON, it will be converted into a C# or VB.NET class depending on the selected file type.

JSON to Classes Conversion rules

  1. The outermost class name is always Rootobject.
  2. All classes and properties are public.
  3. If a property name is a keyword in C#/VB.NET, it will be prepended with _ (underscore).
  4. If a property is a number, it will be converted to the int/float/double/Single type.
  5. A JSON string is converted to string in C# and String in VB.NET.
  6. Boolean data types are converted to bool in C# and Boolean in VB.NET.
  7. If the JSON text in the clipboard is an array of multiple JSON objects and, for a given property, the value is null in one object and a supported type in another, the property will be marked as a nullable type.
  8. With a single JSON object, nullable data types will not be emitted.

Let’s look at some examples:

{
    "link": "http://www.microsoft.com/Surface/en-US", /*Awesome*/
    "virtual": "Virtual Keyboard",
    "partial": " The magnesium panels are finished with partial vapor deposition",
    "Price": 499.99,
    "title": "Microsoft Surface",
    "शीर्षक": "माइक्रोसॉफ्ट सरफेस",
    "Like": true,
    "cdDrive":null,
}

C# class for above JSON object will look like below.

public class Rootobject
{
    public string link { get; set; }
    public string _virtual { get; set; }
    public string partial { get; set; }
    public float Price { get; set; }
    public string title { get; set; }
    public string शीर्षक { get; set; }
    public bool Like { get; set; }
    public object cdDrive { get; set; }
}

VB.NET class will be like below

Public Class Rootobject
    Public link As String
    Public virtual As String
    Public _partial As String
    Public Price As Single
    Public title As String
    Public शीर्षक As String
    Public _Like As Boolean
    Public cdDrive As Object
End Class
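
Once the class is generated, you can deserialize the original JSON into it. Here is a minimal sketch using Json.NET (assuming the Newtonsoft.Json package is referenced; the file name is purely illustrative):

using System;
using System.IO;
using Newtonsoft.Json;

class Program
{
    static void Main()
    {
        // "surface.json" holds the Surface JSON shown above (illustrative file name).
        string json = File.ReadAllText("surface.json");
        Rootobject surface = JsonConvert.DeserializeObject<Rootobject>(json);
        Console.WriteLine("{0} costs {1}", surface.title, surface.Price);
    }
}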

DateTime Formats

JSON parser recognizes some DateTime formats as valid DateTime data types and some are not.

Below JSON object has some valid DateTime data and some invalid DateTime data types

[
 {
    "DateTimeValid": "2012-05-03T00:06:00.638Z",
    "NullableDateTime": null,
    "DateTimeToString": "2009-07-31T00:00Z",
    "DateTimeToObject": "2012-05-03T00:06:00.638Z",
 },
 {
    "DateTimeValid": "2012-05-03T00:06:00Z",
    "NullableDateTime": "2012-05-03T00:06:00.638Z",
    "DateTimeToString": "2009-07-31T00:00Z",
    "DateTimeToObject": "2012-05-03T00:06",
 }
]

C# class for above JSON object will look like below.

public class Rootobject
{
    public Class1[] Property1 { get; set; }
}

public class Class1
{
    public DateTime DateTimeValid { get; set; }
    public DateTime? NullableDateTime { get; set; }
    public string DateTimeToString { get; set; }
    public object DateTimeToObject { get; set; }
}

The first two properties of Class1 are valid DateTime objects. The 3rd property is deemed a string, as its DateTime format is not valid/recognized. The last property is converted to object because one object in the array contains a valid DateTime value and the other contains an invalid one, so the property type can be neither DateTime nor string; the common base type for string and DateTime is object.

Single Dimensional Arrays

JSON arrays are represented with [ ] notation in C# and with ( ) in VB. An example is below.

[
 {
    "Name": "Kamal, Employee",
    "Manager": 
        {
            "Name": "Barry, Manager"
        }
 },
 {
    "Name": "Barry, Manager"
 }
]

C# class for above JSON object will look like below.

public class Rootobject
{
    public Class1[] Property1 { get; set; }
}

public class Class1
{
    public string Name { get; set; }
    public Manager Manager { get; set; }
}

public class Manager
{
    public string Name { get; set; }
}

VB.NET class as below

Public Class Rootobject
    Public Property1() As Class1
End Class

Public Class Class1
    Public Name As String
    Public Manager As Manager
End Class

Public Class Manager
    Public Name As String
End Class

 

Multi-Dimensional Arrays

You can convert multi-dimension JSON arrays to classes. JSON Example for two dimensional array:

[ [ 1, 2, 3 ] ]

C# class for above JSON object will look like below:

public class Rootobject
{
    public int[][] Property1 { get; set; }
}

VB.NET class:

Public Class Rootobject
    Public Property1()() As Integer
End Class

You can also convert JSON where objects contain arrays nested inside arrays, as shown in the example below.

{
    "results": 
    [{
        "address_Office": 
        [{
            "street_number": "One",
            "city_name": "Bellevue",
            "state_name": "Washington",
            "street_name": [ "Microsoft Way" ]
        }],
    }]
}

C# class for above JSON object will look like below:

public class Rootobject
{
    public Result[] results { get; set; }
}

public class Result
{
    public Address_Office[] address_Office { get; set; }
}

public class Address_Office
{
    public string street_number { get; set; }
    public string city_name { get; set; }
    public string state_name { get; set; }
    public string[] street_name { get; set; }
}

VB.NET class:

Public Class Rootobject
    Public results() As Result
End Class

Public Class Result
    Public address_Office() As Address_Office
End Class

Public Class Address_Office
    Public street_number As String
    Public city_name As String
    Public state_name As String
    Public street_name() As String
End Class

Troubleshooting

If the JSON object in the clipboard is invalid, an error message will be shown with the reason.

Example: The JSON object below contains the max value for the decimal data type. The Newtonsoft JSON parser does not recognize values above 1.7976931348623157E+308 (which is the max value for the double data type).

{
    "DecimalMax": 1.79769313486232E+308,
}

This will result in an error like the one below when “Paste JSON As Classes” is invoked.

clip_image002

Another example of an invalid JSON object, where the “” characters are not valid:

{"login":””;}

This will result in an error like the one below when “Paste JSON As Classes” is invoked.

clip_image003

Thank you for your time reading this blog post. We look forward to hearing your feedback on this feature!

SignalR: Building real time web applications


Note: This sample targets 1.0 RC1; I will update the post after RTM is released.

SignalR offers a simple and clean API to write real time web applications where the server needs to continuously push data to clients. Common applications are chat, news feed, notifications, multiplayer games.

In this sample, I demonstrate powerful features like:

  • A server implementation hosted in IISExpress
  • Client implementations running on IISExpress, a Console Application, and a Windows Store App
  • Doing request/response operations sync and async
  • Server pushing broadcast messages to ALL clients
  • Server pushing group messages to specific devices like a web browser, a desktop, or a tablet.

Server

The server derives from the Hub class, and it handles incoming requests in two ways:

  1. Method "y Request(x)" transparently sends a response to client
  2. Method "void RequestAsync(x)" uses Clients.Caller.<DynamicMethod> to send a response to client

The server also pushes data to clients in two ways:

  1. hubContext.Clients.All.<DynamicMethod>
  2. hubContext.Clients.Group(groupName).<DynamicMethod>

As you can see, dynamic methods define the events that need to be handled on the client side. Think of them as if they were interface methods to execute remote calls.

 

namespace SignalR.Sample
{
    public class SampleHub : Hub
    {
        public static void Start()
        {
            ThreadPool.QueueUserWorkItem(_ =>
            {
                IHubContext hubContext = GlobalHost.ConnectionManager.GetHubContext<SampleHub>();

                hubContext.Clients.All.Broadcast(data);
                hubContext.Clients.Group("WebApp").Group(string.Format("WebApp {0}", DateTime.Now));
            });
        }

        public FromServerToClientData Request(FromClientToServerData request)
        {
            FromServerToClientData response = new FromServerToClientData();
            response.Text = "Responding to: " + request.Text;
            return response;
        }

        public void RequestAsync(FromClientToServerData request)
        {
            FromServerToClientData response = new FromServerToClientData();
            response.Text = "Responding to: " + request.Text;
            Thread.Sleep(TimeSpan.FromSeconds(5));
            Clients.Caller.ResponseAsync(response);
        }

        public void JoinGroup(string groupName)
        {
            Groups.Add(Context.ConnectionId, groupName);
        }
    }
}

Client

To write a client, you must define how to handle Server <DynamicMethods>: "client.Broadcast", "client.Group", and "client.ResponseAsync".

Then, the client is started, and after the connection succeeds, the client can make requests to the server by invoking "server.requestAsync" and "server.joinGroup".

You will see similar patterns in the ConsoleApp and WindowsStoreApp clients.

$(function () {
    $.connection.sampleHub.client.Broadcast = function (value) {
        $('#BroadcastTextArea').html('<pre>Broadcast: ' + value.Now + ' ' + value.Integer +
            ' ' + value.Text + '</pre>');
    }

    $.connection.sampleHub.client.Group = function (value) {
        $('#GroupTextArea').html('<pre>Group: ' + value + '</pre>');
    }

    $.connection.sampleHub.client.ResponseAsync = function (value) {
        $('#ResponseAsyncTextArea').html('<pre>ResponseAsync: ' + value.Text + '</pre>');
    }

    $.connection.hub.logging = true;

    $.connection.hub.start().done(function () {
        var request = new Object();
        request.Text = 'This is a request from a WebApp!'

        $.connection.sampleHub.server.request(request).done(function (response) {
            $('#ResponseTextArea').html('<pre>Response: ' + response.Text + '</pre>');
        });

        $.connection.sampleHub.server.requestAsync(request);
        $.connection.sampleHub.server.joinGroup('WebApp');
    });
});
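
The ConsoleApp and WindowsStoreApp clients follow the same pattern with the .NET SignalR client. Here is a rough sketch of what the console client might look like; the hub URL, the group name, and the shape of the data classes are assumptions based on the server code above, not the exact sample code.

using System;
using Microsoft.AspNet.SignalR.Client;
using Microsoft.AspNet.SignalR.Client.Hubs;

// Mirrors the server's payload shape; the sample's actual class may have more members.
public class FromServerToClientData
{
    public string Text { get; set; }
}

class ConsoleClient
{
    static void Main()
    {
        // The URL of the IISExpress-hosted server is illustrative.
        var connection = new HubConnection("http://localhost:12345/");
        IHubProxy hub = connection.CreateHubProxy("SampleHub");

        // Handle the server's dynamic methods, mirroring the JavaScript client above.
        hub.On<FromServerToClientData>("Broadcast", data => Console.WriteLine("Broadcast: " + data.Text));
        hub.On<string>("Group", value => Console.WriteLine("Group: " + value));
        hub.On<FromServerToClientData>("ResponseAsync", data => Console.WriteLine("ResponseAsync: " + data.Text));

        connection.Start().Wait();

        // Make requests to the server, just like the JavaScript client does.
        hub.Invoke("JoinGroup", "ConsoleApp").Wait();
        hub.Invoke("RequestAsync", new { Text = "This is a request from a console app!" }).Wait();

        Console.ReadLine();
    }
}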

There is more information about how I wrote the code in the sample itself. To run the sample, simply follow these instructions:

  1. Download and extract SignalR.Sample.zip
  2. On Visual Studio 2012, open SignalR.Sample.sln
  3. Right-click the solution and select "Set Startup Projects…"
  4. Select "Multiple startup projects", and set the projects in this order:
     • SignalR.Sample
     • SignalR.Sample.Client
     • SignalR.Sample.WinRTClient
  5. Set the "Action" column in all projects to "Start". Click OK.
  6. Ctrl-F5 to run the projects; it will start a browser, a console app, and a Windows Store app.

There are more advanced samples in the SignalR source code repository; follow these steps to get them:

    1. Install the latest Git for Windows
    2. Open a Command Prompt, create a folder, and download the source code:

    md C:\SignalR\release
    cd C:\SignalR\release
    git clone -b release https://github.com/SignalR/SignalR.git .

    3. On Visual Studio 2012, open C:\SignalR\release\Microsoft.AspNet.SignalR.sln

    SignalR has recently shipped as RC, more details here. For the RTM version, I want to post a more complex sample; my idea is to demonstrate how to publish the server implementation on Azure and use scale-out with Service Bus. Please post a comment if you are interested in seeing something else.

    MVC Single Page Application Template Update for ASP.NET and Web Tools 2012.2 RC


    We added a C# MVC Single Page Application (SPA) template in the ASP.NET Fall 2012 Update BUILD Preview. John Papa had written a detailed blog about the preview version. With the help of John Papa’s detailed feedback and sample code, we’ve rewritten and reorganized the JavaScript code to make it more structured in the RC. We really appreciate the help from John Papa. He just wrote another blog about the SPA template in the RC version.

    The major improvements for the MVC Single Page Application template in the RC release include the following:

    1. Added a VB MVC SPA template for .NET Framework 4.5 and 4.0.

    2. Added Web API HelpPage support

    3. Moved all the application-specific JavaScript files to the Scripts/app folder

    4. Redid the datacontext, model, and viewmodel files to make a cleaner separation of the data access, model, and viewmodel layers

    5. Created a namespace for the app called "todoApp"

    6. Updated the Knockout NuGet packages

     

    Let’s discuss the important components for MVC SPA template using C#.

    Introduction

    The MVC SPA template is actually a sample application that demonstrates how to build a SPA application with MVC Web API. The application is able to create, read, update and delete (CRUD) todo-lists, with todo-items in each list, for authenticated users.

    On the server side, we use MVC Web API to create the scaffolding of our todo-list and todo-item models.

    On the client side, we use the Knockout JS framework and its binding syntax to demonstrate the Model-View-View Model pattern for SPA. We use jQuery $.ajax to send and receive JSON requests between the client data access layer and the server’s Web API layer. If you have not used KnockoutJS, you can use the links from the reference section to learn it.

    To use the template, just choose the “Single Page Application” template when you create a new ASP.NET MVC 4 Project.

    clip_image002 

    Layers

    Server models

    We have two basic Entity Framework models: TodoItem and TodoList.

    clip_image003

     

    TodoList entity contains a list of TodoItems.

    clip_image004

     

    The TodoItem entity has a foreign key TodoListId and a virtual TodoList property to represent the relationship with the TodoList entity.

    clip_image005

    The default JSON/XML serialization of these two models, however, will run into circular reference problems. There are a few ways to solve them. If the project only uses JSON, it can be solved quite simply by adding a “[JsonObject(IsReference = true)]” attribute to the TodoList class.
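
    For illustration, a rough sketch of what that looks like on the entity; the property names here are based on the descriptions above and may not match the template exactly:

    using System.Collections.Generic;
    using Newtonsoft.Json;

    [JsonObject(IsReference = true)]
    public class TodoList
    {
        public int TodoListId { get; set; }
        public string UserId { get; set; }
        public string Title { get; set; }

        // Navigation property to the items; this is the circular reference that
        // [JsonObject(IsReference = true)] lets Json.NET serialize without throwing.
        public virtual ICollection<TodoItem> Todos { get; set; }
    }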

    The default XML serialization could also fail when you have the EF proxy and lazy loading enabled. The issue arises when you try to serialize a proxy of TodoList which really isn’t a TodoList but a dynamically generated type. Thus the following exception is thrown:

    Type 'System.Data.Entity.DynamicProxies.TodoList_4ECA216B8773C8B6552AD787C3BA8087E9C365199E3C2F3C1E28B89CD6556441' with data contract name TodoList_4ECA216B8773C8B6552AD787C3BA8087E9C365199E3C2F3C1E28B89CD6556441:http://schemas.datacontract.org/2004/07/System.Data.Entity.DynamicProxies' is not expected. Consider using a DataContractResolver or add any types not known statically to the list of known types - for example, by using the KnownTypeAttribute attribute or by adding them to the list of known types passed to DataContractSerializer

    In our template, we demonstrated how to use Data Transfer Objects (DTOs) to solve both problems so that it works for both XML and JSON serialization. DTOs are a preferred design pattern for solutions with complex data models. You can see the DTO classes in the TodoItemDto.cs and TodoListDto.cs files.

    Server Web API DBContext

    The TodoItemContext.cs file provides the database context that Entity Framework will use. This file would normally be generated and modified via data model Web API scaffolding.

    clip_image006
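
    As a rough sketch, a scaffolded context along these lines typically looks like the following; the connection string name and the DbSet names are assumptions, not necessarily the template’s exact code:

    using System.Data.Entity;

    public class TodoItemContext : DbContext
    {
        // The connection string name comes from Web.config; "DefaultConnection" is an assumption here.
        public TodoItemContext() : base("name=DefaultConnection") { }

        public DbSet<TodoItem> TodoItems { get; set; }
        public DbSet<TodoList> TodoLists { get; set; }
    }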

    Server Web API Controllers

    Controllers\TodoController.cs and TodoListController.cs contain the Web API controller code. When we built the SPA application for the template, we generated the files through Web API scaffolding, and then modified them to work with DTOs and authorization. The controller class attribute “[Authorize]” ensures only authorized users can access these Web API controller actions. Checking against User.Identity.Name inside most functions ensures that the database records can only be accessed by their corresponding owners.

    The TodoListController.GetTodoLists function has an Include statement in its LINQ call because our default connection string does not have MultipleActiveResultSets=”true” (MARS) set. You can modify the SQL connection string locally if you want MARS behavior. For a Windows Azure deployment, you need to manually modify the SQL Azure connection string to include MARS if you choose to do so.
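
    A rough sketch of what such an action can look like, assuming the navigation property is named Todos and that TodoListDto has a constructor taking a TodoList; the template’s exact code may differ:

    using System.Collections.Generic;
    using System.Data.Entity;
    using System.Linq;
    using System.Web.Http;

    [Authorize]
    public class TodoListController : ApiController
    {
        private TodoItemContext db = new TodoItemContext();

        // GET api/TodoList - return only the lists owned by the authenticated user.
        public IEnumerable<TodoListDto> GetTodoLists()
        {
            return db.TodoLists
                .Include("Todos")                                  // eager-load items so MARS is not needed
                .Where(list => list.UserId == User.Identity.Name)  // per-owner access check
                .AsEnumerable()
                .Select(list => new TodoListDto(list));
        }
    }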

    Server View and Client View with Knockout JS

    In Views\Home\Index.cshtml, we use the Knockout data-bind attributes as the view. You can see that all the Knockout data-bind attributes are colorized and have limited IntelliSense available in the RC. In order to get IntelliSense for the Knockout binding handlers and user-defined handlers, we need to reference the knockout and app/todo*.js files in the Scripts/_references.js file.

    clip_image008

    Client JavaScript Data Access Layer

    The Scripts/app/todo.datacontext.js file defines the todoApp namespace and the data access layer that communicates with the server Web API controllers. One thing to note is the usage of $.ajax with “cache: false” in its options parameter, because in certain browsers JSON requests can be cached and we would not get updated Web API results. One thing we might improve in the future for this file is to remove the Knockout dependency so that any other SPA framework can work with this file without changing it.

    Client JavaScript Models

    The Scripts/app/todo.model.js file defines the Knockout data models. Besides the general Knockout observable definitions, it also shows how to subscribe a function to an observable so that the function gets called whenever the observed property changes.

    Client JavaScript ViewModels

    Scripts/app/todo.viewmodel.js defines the view model for the page. It defines the page-level view model and initiates the Knockout bindings for the page.

    Knockout Bindings JavaScript File

    Scripts/app/todo.bindings.js defines customized Knockout binding handlers, which demonstrate how to extend the Knockout JavaScript framework. We also extended the Knockout binding handlers to work with the jQuery Validation plugin.

    AjaxLogin.js file

    Scripts/app/ajaxlogin.js shows how we use AJAX to do form-based login with JSON. We also have the code to handle the login/register link actions.

    Summary

    In the MVC SPA template, we showed an SPA design pattern with the KnockoutJS framework working with server-side MVC Web API. Our purpose is to provide a simple starting point for people to study a KnockoutJS SPA front end with MVC Web API, rather than to fully cover all SPA aspects, which would also include routing and much more.

    Knockout can also easily be replaced with another SPA JavaScript framework that you feel comfortable with. Don’t be shy about studying the differences among all the SPA frameworks out there.

    References

    John Papa’s blog about our ASP.NET and Web Tools 2012.2 RC SPA template

    John Papa’s blog about our ASP.NET Fall Update 2012 Build Preview SPA template

    Knockout JS

    The excellent Knockout tutorial


    New Tutorial Series and Sample Application for ASP.NET MVC 4 with Windows Azure Tables, Blobs, and Queues


    On December 19 we published a new tutorial series and accompanying sample application that shows how to work with Windows Azure Storage tables, queues, and blobs in a multi-tier application that uses ASP.NET MVC 4 and ASP.NET Web API.

    The sample application is an email service that runs in a Windows Azure Cloud Service. The front-end is a web role that manages mailing lists, subscribers, and messages. The back-end is a pair of worker roles that handle scheduling and sending emails. 

    There are five tutorials: one provides an overview of the application, one shows how to download and run the completed application, and three show how to build the application from scratch in Visual Studio.

    Here are links to the tutorials, with a sampling of what you can find in them:

    • .NET Multi-Tier Application Using Storage Tables, Queues, and Blobs - 1 of 5
      • Front-end overview including screen shots of web pages.
      • Back-end overview including diagrams of application architecture.     
      • Schemas and sample contents for Windows Azure tables used by the application.
      • Explanations of how the application uses queues and blobs.
      • A data diagram for the tables and queues.
      • A discussion of relative merits of running the front-end as a web role in a Cloud Service vs. in a Windows Azure Web Site.
      • A discussion of operating costs and options for minimizing costs.
    • Configuring and Deploying the Windows Azure Email Service application - 2 of 5
      • How to download, configure, and run the application.
      • How to publish the application to the staging environment in your own Windows Azure account, and how to promote to production.
      • How to limit access to the application in Windows Azure by specifying IP restrictions.
      • How to use Azure Storage Explorer and Visual Studio to view data in Windows Azure development storage and Windows Azure Storage accounts.
      • How to use either the automated or manual method to add Windows Azure Storage account credentials to the Visual Studio project.
      • How to configure the application for tracing and how to view tracing data in Windows Azure Storage.
      • How to scale the application by adding web or worker role instances.
      • How to decrease project startup time by disabling development storage when you are using a Windows Azure Storage account.
    • Building the web role for the Windows Azure Email Service application - 3 of 5
      • How to create a solution that contains a Cloud Service project with a web role and a worker role.
      • How to work with Windows Azure tables, blobs, and queues in MVC 4 controllers and views.
        • How to handle basic CRUD operations.
        • How to upload files and store them in blobs.
        • How to handle changes to table data that involve changing the row key or the partition key of an entity.
        • How to handle concurrency conflicts.
        • How to set the retry policy to avoid subjecting the user to long wait times.
      • How to use the new Storage Client Library (SCL) 2.0 API (project templates still give you the 1.7 API by default).
      • How to reference an SCL 1.7 assembly in order to get diagnostic functionality that hasn’t been added to SCL 2.0 yet.
      • How to handle web role instance shutdown gracefully by overriding the OnStop method.
      • How to create tables, queues, and blobs in code so that you don’t have to create them manually.
      • How to limit Windows Azure Storage transaction costs, increase efficiency, and implement atomic transactions by performing table operations in batches of up to 100 operations.
      • How to run the web front-end in a Windows Azure Web Site instead of a Cloud Service
    • Building worker role A (email scheduler) for the Windows Azure Email Service application - 4 of 5
      • How to create, query, and update Windows Azure Storage tables in a worker role.
      • How to add work items to a queue for processing by another worker role.
      • How to set the appropriate connection limit and configure diagnostics in the OnStart method.
      • How to handle worker role instance shutdown gracefully by overriding the OnStop method.
      • How to make sure that no emails are missed and no duplicate emails are sent if the worker role instance shuts down unexpectedly.
      • How to test a worker role that uses Windows Azure Storage tables and queues.
    • Building worker role B (email sender) for the Windows Azure Email Service application - 5 of 5
      • How to add a worker role to a Cloud Service project.
      • How to poll a queue and process work items from the queue.
        • How to make sure that only one worker role instance gets any given queue work item for processing.
        • How to increase efficiency and decrease transaction costs by getting up to 32 work items at a time.
        • How to handle “poison messages” that cause exceptions when the worker role tries to process them.
      • How to download text from blobs.
      • How to send emails by using SendGrid.
      • How to make sure that no emails are missed and no duplicate emails are sent if the worker role instance shuts down unexpectedly.

    Feedback is welcome; you can post comments here or on the tutorials themselves.  One thing we know still needs work is the formatting of code blocks: the tutorials were written in Markdown, and we haven’t found a way to copy and paste code from Visual Studio into Markdown that preserves line spacing and indentation when the Markdown is rendered into HTML.  Suggestions for dealing with that issue are welcome also.

    -- Tom Dykstra

    Dual Mode Sockets - Never create an IPv4 Socket again


    The Problem:

    If you’ve worked with network programming in recent years you’ve probably had to grapple with the IPv4 vs. IPv6 issue at some point.  In short, IPv6 is a great new technology for the future of networking, but most applications will still have to support IPv4 for several years during the transition. 

    Working at the socket layer, developers most commonly address this by creating both an IPv4 and an IPv6 socket and then juggling them back and forth as needed. Some existing System.Net APIs do this for you, such as:

    • ServicePoint [WebClient, HttpWebRequest, FtpWebRequest, SmtpClient, etc.]
    • TcpClient(hostname, port)
    • TcpClient.Connect(hostname, port)
    • UdpClient.Connect(hostname, port)
    • Socket.ConnectAsync(args) with DnsEndPoint.AddressFamily = Unspecified

    These APIs were a helpful mitigation but they only covered a subset of the scenarios.

    The Early Solution:

    The native Windows Winsock APIs introduced a new feature in Vista called Dual Mode sockets (http://msdn.microsoft.com/en-us/library/windows/desktop/bb513665(v=vs.85).aspx).  This allows you to create one socket and use it for either IPv4 or IPv6 communication. To do this you create an IPv6 socket, set the IPV6_V6ONLY socket option to false (0), and then append the IPv6 prefix “0:0:0:0:0:FFFF:” to your IPv4 addresses such as “0:0:0:0:0:FFFF:127.0.0.1”.  It takes some getting used to reading your IPv4 addresses like that, but it is easier than trying to manage two different sockets.

    Since System.Net.Sockets.Socket is a thin wrapper over the native Winsock APIs, the Dual Mode functionality was immediately available at the .NET layer.  Unfortunately it was difficult to configure and use, and a few scenarios didn’t work (UDP ReceiveMessageFrom, etc.).  Take the following example of a Dual Mode client and server written using .NET 3.5:

    // Server
    TcpListener listener = new TcpListener(IPAddress.IPv6Any, port);
    listener.Server.SetSocketOption(SocketOptionLevel.IPv6, (SocketOptionName)27, false);
    listener.Start();

    // Client
    Socket socket = new Socket(AddressFamily.InterNetworkV6, SocketType.Stream, ProtocolType.Tcp);
    socket.SetSocketOption(SocketOptionLevel.IPv6, (SocketOptionName)27, false);
    IPAddress address = IPAddress.Parse("::FFFF:" + IPAddress.Loopback.ToString());
    socket.Connect(address, port);

    This approach gets increasingly difficult for APIs such as Socket.Connect(IPAddress[] list, int port), and is impossible for APIs like Socket.Connect(string hostname, int port).

    In .NET 4.0 the SocketOptionName.IPv6Only (27) enum value was added, but otherwise this scenario was unchanged.

    The .NET 4.5 Polished Solution:

    With .NET 4.5 the entire Socket API has been updated so that Dual Mode is easy to configure, use, and it just works!  The Socket class now has a boolean DualMode property to enable the socket option, a new constructor that does this by default, IPv4 addresses will be internally converted to IPv6 addresses for you, and every API has been validated to make sure it works when Dual Mode is enabled. There is also a new static TcpListener.Create method for creating a simple Dual Mode server.  See what the above code sample looks like now:

    // Server
    TcpListener listener = TcpListener.Create(port);
    listener.Start();

    // Client
    Socket socket = new Socket(SocketType.Stream, ProtocolType.Tcp);
    socket.Connect(IPAddress.Loopback, port);

    Something similar can be done for TcpClient:

    TcpClient client = new TcpClient(AddressFamily.InterNetworkV6);
    client.Client.DualMode = true;
    client.Connect(IPAddress.Loopback, port);

    Note that the APIs listed at the top of this article that internally emulate Dual Mode have not been changed in order to maintain compatibility with existing applications.  Otherwise all Connect, Bind, SendTo, ReceiveFrom, etc. Socket APIs now work with Dual Mode.  There are also some new helper methods on the IPAddress class if you find yourself needing to convert between IPv4 and IPv6 address representations (e.g. 127.0.0.1 to or from ::FFFF:127.0.0.1), see IPAddress.MapToIPv4() and MapToIPv6().
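
    For example, a quick sketch of the new mapping helpers in action (the loopback address is just for illustration):

    IPAddress v4 = IPAddress.Loopback;          // 127.0.0.1
    IPAddress mapped = v4.MapToIPv6();          // ::ffff:127.0.0.1
    IPAddress backToV4 = mapped.MapToIPv4();    // 127.0.0.1
    Console.WriteLine("{0} -> {1} -> {2}", v4, mapped, backToV4);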

    Enjoy!

    ~Chris Ross

    Real Scenario: folder deployment scenarios with MSDeploy


    Hi everyone Sayed here. I recently had a customer, Johan, contact me to help with some challenges regarding deployment automation. He had some very specific requirements, but he was able to automate the entire process. He has been kind enough to agree to write up his experience to share with everyone. His story is below. If you have any comments please let us know. I will pass them to Johan.

    I’d like to thank Johan for his willingness to write this up and share it. FYI if you’d like me to help you in your projects I will certainly do my best, but if you are willing to share your story like Johan it will motivate me more :) – Sayed

     

    Folder deployment scenarios with the MsDeploy command line utility

    We have an Umbraco CMS web site where deployment is specific to certain folders only and not the complete website. Below is a snapshot of a typical Umbraco website.

    clip_image001

    I have omitted root files such as web.config as they never get deployed. Now as we develop new features, only those purple folders get modified, so there is no need to redeploy the bulky yellow folders, which mostly contain the Umbraco CMS admin system. I will refer to these purple folders as release files.

    The green media folder is a special case. Our ecommerce team modifies the content on a daily basis in a CMS environment. They may publish their changes to production at will. Content-related files from this folder automatically synchronise to the production environment through a file watcher utility. Thus the CMS and PROD environments always contain the latest versions and we can never overwrite this.

    This results in a special case for deployment. These are the steps we follow:

    1. When preparing the QA environment, we first update the media folder from CMS

    2. Then we transfer all the release files from DEV to QA

    3. After QA sign-off, we deploy those same release files to PROD

    4. Next we update the CMS release files

    5. And then finally the media is transferred to DEV

    This flow is illustrated below:

    clip_image003

    Previously this was done by zipping folders up, transferring them to the different environments, expanding them and then overwriting the target destination. This was tedious and obviously quite risky. With assistance from Sayed Hashimi at Microsoft, I have managed to greatly simplify this task through MsDeploy.

    MsDeploy can be downloaded here and the command line utility normally installs at:

    c:\Program Files\IIS\Microsoft Web Deploy V3\msdeploy.exe

    The syntax I will concentrate on is as follows (more details can be found here):

    msdeploy.exe -verb:sync

                 -source:<package|manifest>=<pathToProviderObject>

                 -dest:<package|manifest>=<pathToProviderObject>

                 -useCheckSum -disableRule:BackupRule

    The ‘useCheckSum’ setting will ensure that only file changes are considered and ‘disableRule:BackupRule’ setting will stop warnings from being thrown. Automatic backups can be enabled with these instructions, but to keep things simple, we will create our own backups.

    For the purposes of illustration, we’ll assume that our websites are all installed at the following location in each environment:

    C:\MsDeployTest\[Environment]\Website

    Step 1: Update the QA media folder

    clip_image004

    First we need to package up the media folder to a zip file, transfer it over to the QA environment and then deploy that package. We’ll create a .bat file and an .xml file that we can reuse in the future. On the CMS box at a location of your choice, create a Package.bat file with the following contents:

    @echo off

    "c:\Program Files\IIS\Microsoft Web Deploy V3\msdeploy.exe" ^

    -verb:sync ^

    -source:manifest="Package.xml" ^

    -dest:package="Media @ %date:~6,4%-%date:~3,2%-%date:~0,2% %time:~0,2%-%time:~3,2%.zip" ^

    -useCheckSum ^

    -disableRule:BackupRule

    pause

    Also create a Package.xml file at the same location with the following contents:

    <?xml version="1.0" encoding="utf-8"?>

    <sitemanifest>

    <contentPath path="C:\MsDeployTest\CMS\Website\media"/>

    </sitemanifest>

    When you execute the Package.bat file, it will create a zip file (i.e. Media @ 2013-01-05 10-00.zip) containing the folder and subfolders you listed in Package.xml. Now we transfer the zip file to the QA box into a folder of our choice. In that same folder, create a Deploy.bat file with the following contents:

    @echo off

    set /p package=Enter the package name (tab to cycle):

    if '%package%' == '' goto error

    set /p manifest=Enter the manifest name (tab to cycle):

    if '%manifest%' == '' goto error

    "c:\Program Files\IIS\Microsoft Web Deploy V3\msdeploy.exe" ^

    -verb:sync ^

    -source:package=%package% ^

    -dest:manifest=%manifest% ^

    -useCheckSum ^

    -disableRule:BackupRule ^

    -whatif

    echo.

    set /p deploy=Trial run complete - proceed to deployment (Y/N)?

    if "%deploy%"=="Y" goto deploy

    if "%deploy%"=="y" goto deploy

    goto end

    :deploy

    "c:\Program Files\IIS\Microsoft Web Deploy V3\msdeploy.exe" ^

    -verb:sync ^

    -source:package=%package% ^

    -dest:manifest=%manifest% ^

    -useCheckSum ^

    -disableRule:BackupRule

    md Archive

    move %package% Archive

    goto end

    :error

    echo You did not enter a package name

    :end

    echo.

    pause

    Also create a Media.xml file at the same location with the same contents as Package.xml listed above. Update any paths if required.

    When you now execute the batch file, supply the package filename when prompted (you can use the tab key to cycle through files in the same folder). Note that we used the ‘-dest:manifest=”Media.xml”’ parameter with MsDeploy. This does not modify Media.xml but rather uses it as reference for deployment instructions. Also note that MsDeploy executes twice, but the first time it uses the ‘-whatif’ parameter. This gives you the chance to test the deployment first without making any modifications. It will spew out the same output as always but only simulate the syncing of files. Then you will be prompted to commit the deployment and if you proceed, the QA website’s media folder should now be synced up with the CMS environment.

    Step 2: Release to QA

    clip_image005

    On the DEV box in a folder of your choice, create a Package.bat file with the following contents:

    @echo off

    "c:\Program Files\IIS\Microsoft Web Deploy V3\msdeploy.exe" ^

    -verb:sync ^

    -source:manifest="Package.xml" ^

    -dest:package="Release @ %date:~6,4%-%date:~3,2%-%date:~0,2% %time:~0,2%-%time:~3,2%.zip" ^

    -useCheckSum ^

    -disableRule:BackupRule

    pause

    Also create a Package.xml file with the following contents:

    <?xml version="1.0" encoding="utf-8"?>

    <sitemanifest>

    <contentPath path="C:\MsDeployTest\DEV\Website\bin"/>

    <contentPath path="C:\MsDeployTest\DEV\Website\Content"/>

    <contentPath path="C:\MsDeployTest\DEV\Website\css"/>

    <contentPath path="C:\MsDeployTest\DEV\Website\macroScripts"/>

    <contentPath path="C:\MsDeployTest\DEV\Website\masterpages"/>

    <contentPath path="C:\MsDeployTest\DEV\Website\Scripts"/>

    <contentPath path="C:\MsDeployTest\DEV\Website\usercontrols"/>

    <contentPath path="C:\MsDeployTest\DEV\Website\Views"/>

    <contentPath path="C:\MsDeployTest\DEV\Website\xslt"/>

    </sitemanifest>

    Executing the batch file will give you a zip file such as Release @ 2013-01-05 10-00.zip containing the release folders. Transfer the zip file to the QA box into the same folder as before. Then create a Release.xml file with the same contents as Package.xml listed above and fix the path names. When you now execute the Deploy.bat batch file and supply the release package filename, the QA website’s release folders should now be synced up with the DEV environment.

    Step 3: Release to PROD

    clip_image006

    This follows the same deployment as in step 2, but here we include a backup step. So after choosing a suitable folder and transferring the release package file there, create a Deploy.bat file with the following contents:

    @echo off

    set /p package=Enter the package name (tab to cycle):

    if '%package%' == '' goto error

    "c:\Program Files\IIS\Microsoft Web Deploy V3\msdeploy.exe" ^

    -verb:sync ^

    -source:package=%package% ^

    -dest:manifest="Deploy.xml" ^

    -useCheckSum ^

    -disableRule:BackupRule ^

    -whatif

    echo.

    set /p deploy=Trial run complete - proceed to deployment (Y/N)?

    if "%deploy%"=="Y" goto deploy

    if "%deploy%"=="y" goto deploy

    goto end

    :deploy

    md Archive

    "c:\Program Files\IIS\Microsoft Web Deploy V3\msdeploy.exe" ^

    -verb:sync ^

    -source:manifest="Deploy.xml" ^

    -dest:package="Archive\Backup @ %date:~6,4%-%date:~3,2%-%date:~0,2% %time:~0,2%-%time:~3,2%.zip" ^

    -useCheckSum ^

    -disableRule:BackupRule

    echo Backup complete

    echo.

    "c:\Program Files\IIS\Microsoft Web Deploy V3\msdeploy.exe" ^

    -verb:sync ^

    -source:package=%package% ^

    -dest:manifest="Deploy.xml" ^

    -useCheckSum ^

    -disableRule:BackupRule

    echo Deploy complete

    echo.

    move %package% Archive

    goto end

    :error

    echo You did not enter a package name

    :end

    echo.

    pause

    Now create a Deploy.xml file with the same contents as Package.xml from step 2 and fix the paths. Once all is in place and the trial run succeeds, you can proceed and deploy your changes to the PROD website.

    In addition to releasing your changes, it will first create a backup in the Archive folder such as Backup @ 2013-01-05 10-00.zip containing the original folders. If you need to do a restore, you can just execute Deploy.bat with this same backup file.

    Step 4: Release to CMS

    clip_image007

    Now everything becomes a formality and you can just repeat the release part of “Step 2: Release to QA” to update the CMS box. After executing Deploy.bat, all environments are now running the same code.

    Step 5: Update the DEV media folder


    For this final step just repeat “Step 1: Update the QA media folder”. Now all environments should have the same content.

    I hope this will be helpful to someone with the same problem or something similar where only subsets of folders require updates. There is a lot of room for enhancement, e.g. becoming more creative with the zip file naming convention by including labels. MsDeploy is a great tool and can be used to deploy not just websites, but other kinds of projects as well. Please feel free to ask me questions. If I cannot help, I’m sure Sayed or someone at Microsoft will come to the rescue.

     

    You can download the SampleFiles to help work through the process as well.

     

    Johan van der Merwe

    johan.vdmerwe@btinternet.com

    XDT (web.config) transform engine released on NuGet


    We have just released the XDT transform engine on NuGet, with a license which allows you to redistribute the assembly with your own product. If you are not familiar with XDT, it is an XML transformation engine which drives our web.config transforms in Visual Studio. You can read more about XDT here. You can now take a dependency on XDT and ship it with your own product.
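    To give a feel for the API, here is a minimal sketch of applying a transform from your own code. It assumes the Microsoft.Web.XmlTransform namespace from the XDT assembly, and the file names are placeholders; treat it as an illustration rather than the definitive usage pattern.

    using Microsoft.Web.XmlTransform;

    class TransformRunner
    {
        static void Main()
        {
            // Placeholder file names; substitute your own source, transform, and output files.
            var document = new XmlTransformableDocument();
            document.PreserveWhitespace = true;
            document.Load("Web.config");

            var transformation = new XmlTransformation("Web.Release.config");

            // Apply returns false if the transform could not be applied successfully.
            if (transformation.Apply(document))
            {
                // Saving to a path here; see the whitespace note under Known Bugs below.
                document.Save("Web.transformed.config");
            }
        }
    }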

    For specifics on the terms of use please read the license.

    Why are we doing this?

    One of the top 10 most voted features for NuGet is Support Visual Studio (XDT) web.config transforms. To summarize the request, NuGet package authors would like to be able to leverage XDT transforms when a package is installed.

    When we initially set out to do this work, we knew that we needed to release XDT under a license which allows it to be redistributed with other third-party products. When we solicited feedback on our plans from the NuGet community, we heard something pretty clear: if XDT wouldn’t work for all NuGet scenarios, it had no place in NuGet core. We have heard that feedback and are reacting.

    The easiest option, and best in my opinion, is to release the XDT code under an open source license. There are no guarantees that we will be able to do this, but things are looking good at this time. Once we make more progress on this I will let you all know.

     

    Since we are hoping that XDT will have a permanent home soon, we have not set up a formal way to communicate issues. For now, the best thing to do is to use the Contact Owners page.

     

    Known Bugs

    XDT is pretty solid, but every product has some bugs. Most of them are pretty minor, but you are likely to run into the bug described below.

    Whitespace is not preserved when calling XmlTransformableDocument.Save(FileStream)

    If you reference Microsoft.Web.Xdt.dll and call XmlTransformableDocument.Save(FileStream), whitespace is not preserved in the transformed file.

     

     

    Thanks,

    Sayed Ibrahim Hashimi | http://sedodream.com/

    Introducing SvcPerf - An End-to-End trace Analysis tool for WCF


    Standardizing on ETW

    In 4.5 we introduced ETW tracing for WCF. Over the past couple of months we have been trying to establish some common tools that can be used to debug and analyze ETW traces from WCF. There is a large number of tools that allow ETW analysis, and one capability we particularly wanted was activity correlation, which WCF uses extensively. Folks who have already used ETW to its full capacity know about ETW activities and how they are propagated. For those of you who haven't, this FAQ: Common Questions for ETW and Windows Event Log is a good starting point. If you just want to jump in, hop over to the tool page here: http://svcperf.codeplex.com/

    Pros of ETW

    1. Ability to control it dynamically, compared to System.Diagnostics
    2. One of the fastest tracing mechanisms in the operating system
    3. Well-understood formats for producing and consuming traces
    4. Inherent concepts of activities and correlation
    5. Real-time analysis capabilities, even in production environments

    Some of the not-so-great aspects are:

    1. Many tools are available, which causes confusion about which tool to use
    2. Cannot easily be read with text analysis tools, due to the binary format of the file
    3. The file format could be better optimized
    4. Collection is machine-wide and can impact the system if there is a large number of WCF services on the box

    Despite these issues, ETW offers one of the most robust tracing infrastructures on the platform. With that in mind we built our end-to-end (E2E) tracing for WCF. We needed multiple layers to understand, and be consistent with, our E2E model. We also needed to be able to correlate a large number of scenarios, from our transports all the way to our dispatchers. We have a large number of pumps in multiple layers, and being extremely asynchronous means that you lose the ability to depend on things like TLS, ExecutionContext, and so on. With this in mind we chose a model where we explicitly flow our correlation across layers and use it when stamping our events. This also means that we need to be able to stitch together causality chains for request analysis.

    • Note: This does not deprecate Diagnostics tracing, nor is it guaranteed to provide feature parity. Instead, we consider this a new capability that we can use in production analysis cases where we would rather avoid process restarts or configuration changes to running applications. Also, for 4.5 we do not have message logging over ETW; message logging still requires configuration and can be done only through Diagnostics tracing.

    Why do we need another tool?

    The other more popular ETW tools like Xperf and WPA have a good amount of analysis capability already built into them, but they are not tuned for statistical analysis of the data. For example, given a trace, how do you find your slow requests, or how do you obtain a distribution of your request processing times? These kinds of questions are better answered with SQL-like queries. That said, there are log parsers with this capability, but we wanted to work on raw ETL files, and this is where SvcPerf comes into play. Besides being a quick ETL viewer, it is also a query engine built on top of Tx (LINQ over traces).
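    To make this concrete, here is a rough sketch of the kind of query this enables. The event shape below (OperationEvent with an activity ID, operation name, and duration) is made up purely for illustration; it is not the actual SvcPerf or Tx type, but the LINQ shape of the questions is the same.

    using System;
    using System.Collections.Generic;
    using System.Linq;

    // Hypothetical, simplified shape of a decoded WCF operation event.
    class OperationEvent
    {
        public Guid ActivityId { get; set; }
        public string OperationName { get; set; }
        public TimeSpan Duration { get; set; }
    }

    static class RequestAnalysis
    {
        // Given decoded events (SvcPerf/Tx would surface these from the ETL file),
        // find the slow requests and bucket the durations into a rough distribution.
        public static void Analyze(IEnumerable<OperationEvent> events)
        {
            var slowRequests = events
                .Where(e => e.Duration > TimeSpan.FromMilliseconds(500))
                .OrderByDescending(e => e.Duration);

            var distribution = events
                .GroupBy(e => (int)(e.Duration.TotalMilliseconds / 100) * 100) // 100 ms buckets
                .OrderBy(g => g.Key);

            foreach (var e in slowRequests)
                Console.WriteLine("{0} {1} {2:F0} ms", e.ActivityId, e.OperationName, e.Duration.TotalMilliseconds);

            foreach (var bucket in distribution)
                Console.WriteLine(">= {0} ms: {1}", bucket.Key, bucket.Count());
        }
    }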

    [Screenshot: the SvcPerf query view]

    In the above screen shot you can see that activities are correlated for you, and the ETW fields like ActivityID and RelatedActivityID can be directly queried.

    If you want to just give the tool a try, you can get it from here: SvcPerf.exe (it is self-signed and will give you a warning; if you prefer, you can compile the source yourself, provided you remove the strong name).

    To collect the trace, use logman with the WCF provider; you also need the manifest so that events can be decoded. See http://svcperf.codeplex.com/wikipage?title=How%20do%20I%20collect%20an%20ETL%20trace%20for%20WF%2fWCF%3f&referringTitle=FAQs
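    As a rough example, a collection session could look something like the commands below. The session name and output file are arbitrary, and the provider name shown is the 4.5 WCF/WF ETW provider; double-check it against the FAQ linked above before relying on it.

    logman start WcfTrace -p "Microsoft-Windows-Application Server-Applications" -o wcf.etl -ets

    REM ... exercise your service or workflow scenario ...

    logman stop WcfTrace -ets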

    In the coming posts I will take up some simple scenarios where this tool might help with analysis:

    1. Request Duration analysis and Drill Downs
    2. Custom Queries and Graphs
    3. Synthetic Performance Counters
    4. Template Queries