
Notes from the ASP.NET Community Standup – July 19, 2016


This is the next in a series of blog posts that will cover the topics discussed in the ASP.NET Community Standup. The community standup is a short video-based discussion with some of the leaders of the ASP.NET development teams covering the accomplishments of the team on the new ASP.NET Core framework over the previous week. Within 30 minutes, Scott Hanselman, Damian Edwards, Jon Galloway, and an occasional guest or two discuss new features and ask for feedback on important decisions being made by the ASP.NET development teams.

Each week the standup is hosted live on Google Hangouts and the team publishes the recorded video of their discussion to YouTube for later reference. The guys answer your questions LIVE and unfiltered. This is your chance to ask about the why and what of ASP.NET! Join them each Tuesday on live.asp.net where the meeting’s schedule is posted and hosted.

This week’s meeting is below:

Jon Galloway is out and about, Hunter is off in Boston… but we do have a list of cool links for the week.

Community Links

Laurent shares his experience using Microsoft Azure with Docker Cloud

Filip announced that WebApiContrib is being migrated to ASP.NET Core

Steve Desmond recorded a speedrun of his upgrade from RC2 to ASP.NET Core RTM

Muhammed shared some ideas about fluent interfaces in ASP.NET Core

Julie Lerman’s presentation from .NET Fringe about using ASP.NET Core and Entity Framework Core on a Mac is now available

ASP.NET Monsters talk about ViewComponents on their latest episode

Dominic wrote about the progress with IdentityServer

Jonathan published an article about doing all of his .NET work on a Mac with VSCode

Jerrie Pelser wrote about using roles with JWT Middleware … and also covered using parameters with OpenID

Luca wrote about using TypeScript in Visual Studio

Unobtrusive client-side validation in Angular is demonstrated in a project on GitHub

A GitHub project from SuperLloyd was published that supports serialization in .NET Core

The ASP.NET Monsters have launched a contest called ‘The Summer of Config’

The Roadmap

Scott and Damian spent some time reviewing the .NET Core roadmap that was posted to the .NET Blog.  Damian made it clear that not all components are going to be updated on each release, because the framework is composed of many components and not all of them are affected.

Scott asked whether the changes to improve the performance of the ASP.NET Core build and publish process are coming in the next release.  Damian pointed out that the investigation is underway and that a number of the components that will help this process are coming in the very near future.

After some review of the features described on the roadmap, Damian highlighted that SignalR planning has started.  There is an open issue on the SignalR repository for planning that discusses the current features under consideration.  The other ASP.NET facet that is being explored is bringing ASP.NET Web Pages to ASP.NET Core, tentatively called ‘MVC View Pages’ due to their View-only architecture.  The issue tracking planning for that featureset is in the MVC issues list.  A prototype is under construction to support the goals outlined for View Pages.

The Raspberry Pi demo previously shared on the standup is a scenario that the team wants to enable with the next release.  This led the team to discuss the use of the term ‘support’; in this case it’s being used to refer to “a possible configuration that you are not prevented from using” and NOT “call Microsoft paid support services for instructions”.  Similarly, the team is working on enabling Alpine Linux to be usable with .NET Core.

The rest of the details on the roadmap are on the .NET Blog.


Notes from the ASP.NET Community Standup – July 26, 2016


This is the next in a series of blog posts that will cover the topics discussed in the ASP.NET Community Standup. The community standup is a short video-based discussion with some of the leaders of the ASP.NET development teams covering the accomplishments of the team on the new ASP.NET Core framework over the previous week. Within 30 minutes, Scott Hanselman, Damian Edwards, Jon Galloway, and an occasional guest or two discuss new features and ask for feedback on important decisions being made by the ASP.NET development teams.

Each week the standup is hosted live on Google Hangouts and the team publishes the recorded video of their discussion to YouTube for later reference. The guys answer your questions LIVE and unfiltered. This is your chance to ask about the why and what of ASP.NET! Join them each Tuesday on live.asp.net where the meeting’s schedule is posted and hosted.

This week’s meeting is below:

This week, Jon Galloway actually IS on vacation, and Damian is wrapped up in important meetings.  Scott was joined by Maria Naggaga from New York City to talk about training and learning about ASP.NET Core from code schools.  While Maria’s Twitter account is @LadyNaggaga, she needed to clarify for some of the live viewers that she is not, in fact, the pop singer Lady Gaga.

Maria pointed out that many code schools are attended by teachers who want to bring programming and technology into their classrooms to augment the K-12 curriculum.  Other teachers attend code schools to change careers.

In years past, it was common for developers to read a book or take a class and then take a certification test to be designated a “Certified Developer” in a programming language or discipline.  With code schools or bootcamps, it’s typical to attend class 40 hours a week over 2-4 months (perhaps even a year), finish with a portfolio of applications you have built, and have interviews set up to connect you with a future employer.

Scott and Maria reviewed a few online schools and talked about them:

  • CodeSchool is a service from Pluralsight that offers interactive in-browser coding, quizzes, and easy learning that can be completed over a few days.  CodeSchool now has a .NET course available for free to the public.
  • Learn How to Program.com from Epicodus is a series of online tutorials on different topics, and they have C# as well as ASP.NET courses available.
  • Maria is also working with CodingDojo to help build a course that is scheduled to be released in September.
  • Scott shared his ASP.NET Core Workshop source on GitHub that you can download and walk through.  We plan to assemble the various workshops Microsoft is creating so that there is one cohesive learning experience that can be completed on Visual Studio Code or Visual Studio 2015.

Maria also shared EDX, which is an online marketplace of courses on many topics and we plan to get ASP.NET Core content into their catalog.

Questions

Question: Is Microsoft planning to do more Virtual Academy courses?

— Yes, we are planning several of these for ASP.NET Core in September

Question:  Is there a writeup on getting ASP.NET Core running with Mono on Raspberry Pi 2?

— We did have it running in earlier builds, but we are still working on getting ASP.NET Core running on a Pi.

Question:  Will ASP.NET Core work on Apache some day?

— Configure your Apache server to reverse proxy requests to Kestrel.  We’ll write up a blog post on this topic soon.

Question:  Continuous deployment from GitHub to Azure, does this work?

— Yes, we are working on this now.  Damian has some updates to the deployment script that are being integrated and it should be faster.

Notes from the ASP.NET Community Standup – August 9, 2016


This is the next in a series of blog posts that will cover the topics discussed in the ASP.NET Community Standup. The community standup is a short video-based discussion with some of the leaders of the ASP.NET development teams covering the accomplishments of the team on the new ASP.NET Core framework over the previous week. Within 30 minutes, Scott Hanselman, Damian Edwards, Jon Galloway, and an occasional guest or two discuss new features and ask for feedback on important decisions being made by the ASP.NET development teams.

Each week the standup is hosted live on Google Hangouts and the team publishes the recorded video of their discussion to YouTube for later reference. The guys answer your questions LIVE and unfiltered. This is your chance to ask about the why and what of ASP.NET! Join them each Tuesday on live.asp.net where the meeting’s schedule is posted and hosted.

This week’s meeting is below:

The team is back home after a week in Sydney presenting and teaching workshops about ASP.NET.  They would like to know: what do you think this show is missing?  We hear from many of you who watch this show, but we want to know what it needs to improve.  Give us your feedback in the discussion area below.

Community Links

We’re answering your questions that you post on Gitter

Steve Smith wrote an article in MSDN Magazine discussing when to use Middleware and when to use Filters

Khalid Abuhakmeh published an article about using Semantic UI with ASP.NET Core instead of using Bootstrap

He also shared an article about Strongly Typed Configuration Settings in ASP.NET Core

John Callaway shared some insight about Generic Repositories and Dependency Injection

Tore published some code on GitHub called Netling that helps with stress testing web applications

Steve Gordon wrote about updating the AllReady project to Entity Framework 1.0 RTM

Chris Myers wrote about Debugging Dockerized .NET Core Apps with Visual Studio Code

Eric Fisher has two articles on the CodeSchool blog about getting started developing with ASP.NET

Bobby Johnson has a post about learning C# on Windows, OS X, or Linux with .NET Core koans.

Radu Matei wrote an article with an Introduction to the ASP.NET Core MVC API

Swamininathan Vetri published a how-to article about Running your first ASP.NET Core Web API on a Mac

Accomplishments / Planning

The ASP.NET team is working on the next release of the framework, as detailed in the last roadmap blog post.  Some of the things in progress include:

  • MVC View Pages now has a functional prototype that Damian is reviewing
  • There is a functional prototype for Razor precompilation
  • The Dependency Injection system is being improved with the help of feedback from some of the container authors
  • Docker and containers are being worked on to improve the experience and make the development process really enjoyable
  • The release and versioning strategy is being addressed so that there is a nice predictable cadence.  There was a blog post on the .NET blog discussing the support strategy.
  • Response Caching middleware work has started, as one of the team members is assigned to this task
  • The URL Rewriting Middleware is functional now
  • View Components as TagHelpers is still ongoing

Questions and Answers

Question:  Is there an update on the next version of the ASP.NET Core tools?

— No announcements on that during the video; they will be made on the blog first

Question:  What’s the status of the request validation built in to IIS?

— No status, this is the first we’ve heard of a request on this.  We’re interested in this type of thing, and it could get onto our roadmap.

Question:  Can you write up how to do port-forwarding and configuration with Apache?

— We do have nginx instructions available, and we have some documents started on HAProxy.

Question:  Any news on the JavaScript Services from a few weeks ago?

— We’re still collecting feedback on the GitHub repo.  Keep an eye open there for more details as that project progresses.

Question:  Is there a way to load XML configuration files of the older “app.config” format in ASP.NET Core?

— With a standard .NET “app.config”, if you compile against the full framework, you should be able to continue to use the System.Configuration.ConfigurationManager API.
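
For example, a minimal sketch of reading a setting with that API on the full framework (the key name used here is illustrative):

using System;
using System.Configuration; // requires a reference to the System.Configuration assembly

class Program
{
    static void Main()
    {
        // Reads <appSettings><add key="MySetting" value="..." /></appSettings> from app.config.
        string mySetting = ConfigurationManager.AppSettings["MySetting"];
        Console.WriteLine(mySetting ?? "(not set)");
    }
}
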

Question:  What are the latest benchmark numbers for Kestrel internally?

— On the big-iron servers, we’re well over 5 million requests per second.  We’re starting to look at the other benchmarks like the database enabled tests, and we’ll be working through those efforts. The numbers on the readme of the benchmarks repository are current based on the machines in the development team’s lab, which is smaller than the big-iron performance lab.

Question:  How are we doing with the conversion of NuGet packages to support Core?

— It’s ongoing; it’s more of a community process, and the teams are supporting those library owners who need help getting unblocked to complete their conversions.

Question:  How is the Azure deployment update going?

— It’s ongoing.  The Kudu deployments occur on the durable, but much slower, drive.  There are some tunings being applied to NuGet 3.5, the .NET CLI, and the Kudu deployment strategy that are resulting in a significantly faster deployment time.

Question:  Is there any news about the project.json migration?

— No news to report on this; as we have updates ready, they will be published on the blog.

Question:  How should I build on my continuous integration server when I have a mix of xproj and csproj files?

— Use MSBuild

Question:  Is there new documentation for token authorization?

— In the short term, use IdentityServer.  Longer term, we’re working on a solution that’s in the box.

The team will be back on Tuesday the 16th to discuss the latest updates on ASP.NET.

 

Notes from the ASP.NET Community Standup – August 16, 2016


This is the next in a series of blog posts that will cover the topics discussed in the ASP.NET Community Standup. The community standup is a short video-based discussion with some of the leaders of the ASP.NET development teams covering the accomplishments of the team on the new ASP.NET Core framework over the previous week. Within 30 minutes, Scott Hanselman, Damian Edwards, Jon Galloway, and an occasional guest or two discuss new features and ask for feedback on important decisions being made by the ASP.NET development teams.

Each week the standup is hosted live on Google Hangouts and the team publishes the recorded video of their discussion to YouTube for later reference. The guys answer your questions LIVE and unfiltered. This is your chance to ask about the why and what of ASP.NET! Join them each Tuesday on live.asp.net where the meeting’s schedule is posted and hosted.

This week’s meeting is below:

This week the team went over community blogs, the search for the ultimate laptop, and the Razor Pages / MVC View Pages prototype.

Razor Pages Prototype

This week Damian shares the Razor Pages (also called MVC View Pages) prototype that the team has been working on. To learn more about the project’s goals and features, check out the MVC View Pages issue #494 on the aspnet/mvc repo.
If you are eager to start playing around with Razor Pages, you can get started at the aspnet/RazorPages repo. We (the ASP.NET team) are looking forward to your feedback and to seeing what you have built.

Community Links

Marek Böttcher published an article on Localization with ASP.NET Core (post in English)

Talking Dotnet published an article on how to add swagger to ASP.NET Core Web API.

Khalid Abuhakmeh published an article on ASP.NET Core’s IApplicationLifetime.

Khalid Abuhakmeh published an article on running Jekyll on Kestrel and ASP.NET Core.

Jerrie Pelser published an article on Authenticating a user with LinkedIn in ASP.NET Core.

Shayne Boyer wrote a tutorial on the docs.asp.net page on ASP.NET Web API help pages using swagger.

Andrew Lock published an article on exploring the cookie authentication middleware in ASP.NET Core.

Hisham published an article on Razor Pages for ASP.NET Core. Please note that Razor Pages are still a prototype and in the early stages.

Hisham published an article on creating configurable error pages in ASP.NET Core.

Lohith from Telerik published an article on using Kendo UI Core in ASP.NET MVC Core.

Matthew Abbott published an article on building .NET Core apps using Bamboo and Cake.

Marius Schulz published an article on simulating Latency in ASP.NET Core.

Simon Timms from the ASP.NET Monsters published an article on getting Nginx up and running on an Ubuntu box with SSL and HTTP/2.

Filip W published an article on Request.IsLocal in ASP.NET Core.

Announcing the ongoing Bug Bounty for .NET Core and ASP.NET Core


It’s with a great deal of pleasure that I can announce an ongoing bug bounty for .NET Core and ASP.NET Core, our cross-platform runtime and web stack.

During the RC1 and RC2 bounty periods, we received quite a few interesting, intriguing, and even puzzling bugs which we’ve addressed. The RC1 bounty included one report which prompted an entire rewrite of a feature to make it easier for developers to use successfully.

Nothing makes me happier than being able to reward and recognize security researchers for their hard work in discovering and reporting these bugs, and I look forward to continuing to work with and compensate researchers for their efforts. The entire team recognizes the value of bug bounties, and we view them as having two great values: it’s both the right thing to do for our customers and the right thing to do for the security researcher community.

The bounty includes both the Windows and Linux versions of .NET Core and ASP.NET Core, and includes Kestrel, our new web server. It encompasses the current release version and the latest supported beta or release candidate of any future versions.

https://dot.net/core has instructions on how to install .NET Core on Windows, Linux and OS X. Windows researchers can use Visual Studio 2015, including the free Visual Studio 2015 Community Edition. The source for .NET Core can be found on GitHub at https://github.com/dotnet/corefx. The source for ASP.NET Core can be found on GitHub at https://github.com/aspnet.

We encourage you to read the MSRC announcement which has a link to the program terms and FAQs before beginning your research or reporting a vulnerability. We would also like to applaud and issue a hearty and grateful thanks to everyone in the community who has reported issues in .NET and ASP.NET in the past. We look forward to rewarding you in the future as we take .NET and ASP.NET cross platform.

Further information on all Microsoft Bug Bounty programs can be found at https://aka.ms/BugBounty and in the associated terms and FAQs.

Notes from the ASP.NET Community Standup – August 30, 2016


This is the next in a series of blog posts that will cover the topics discussed in the ASP.NET Community Standup. The community standup is a short video-based discussion with some of the leaders of the ASP.NET development teams covering the accomplishments of the team on the new ASP.NET Core framework over the previous week. Within 30 minutes, Scott Hanselman, Damian Edwards, Jon Galloway, and an occasional guest or two discuss new features and ask for feedback on important decisions being made by the ASP.NET development teams.

Each week the standup is hosted live on Google Hangouts and the team publishes the recorded video of their discussion to YouTube for later reference. The guys answer your questions LIVE and unfiltered. This is your chance to ask about the why and what of ASP.NET! Join them each Tuesday on live.asp.net where the meeting’s schedule is posted and hosted.

This week’s meeting is below:

Today Crystal Qian and Justin Kotalik, interns with the ASP.NET Core team for the past 12 weeks, joined the standup to talk about their experience and to demonstrate some of the cool stuff they’ve been working on with the team.  But first…

Community Links

Radu Matei shows how to register a list of services from a JSON file – handy for testing or runtime DI configuration.

Jon Hilton continues a series on building a .NET Core application completely from the command line with a look at publishing.

Andrew Lock shows how to set the hosting environment for ASP.NET Core from Visual Studio, Visual Studio Code, operating system settings, and command line.

Dmitry Sikorsky introduces Platformus CMS, a new CMS written on ASP.NET Core and ExtCore.

Nice, thorough walkthrough by Bipin Joshi explaining how to configure ASP.NET Core Identity.

Damien Bowden shows how to log to Elasticsearch from an ASP.NET Core application using NLog.

Muhammad Rehan Saeed explains NGINX principles and configuration in the context of ASP.NET Core deployment.

Taiseer Joudeh begins a series explaining how he used Azure Active Directory B2C in a large web / service / mobile application, including ASP.NET Web API 2 and ASP.NET MVC.

Benjamin Fistein demonstrates how the Peachpie compiler can be used to compile PHP applications and run them in an ASP.NET Core application.

Anuraj Parameswaran shows how to configure NancyFx in an empty ASP.NET Core application.

Andrew Lock takes a look at how JWT Bearer Auth Middleware is implemented in ASP.NET Core as a means to understanding authentication in the framework in general.

Bill Boga points out an important gotcha if you’re writing middleware that modifies the response content.

Damien Bowden shows how to use MySQL with ASP.NET Core and EF Core.

Nicolas Bello Camilletti explains the components included in JavaScriptServices.

Mads Kristensen announces a new project template pack for ASP.NET Core applications.

Donovan Brown points out an important environment variable to speed up .NET Core installation on build servers.

Bill Boga shows off a configuration provider for GPS location tracking. Fun post, and great way to learn about configuration providers.

Damien Bowden shows how to implement undo, redo functionality in an ASP.NET Core application using EF and SQL Server.

Maher Jendoubi explores upcoming changes to DI integration in ASP.NET Core 1.1, showing how some popular third-party containers will be registered.

Scott Allen shows how he’s been creating extension methods to move code out of his Startup.cs.

Hisham Bin Ateya shows how to use custom middleware to prevent external sites from displaying your site’s images.

Norm Johanson announces ASP.NET Core support on AWS Elastic Beanstalk and the AWS Toolkit for Visual Studio.

Chris Sells describes the existing support for ASP.NET in Google Cloud Platform and lays out the plans for future ASP.NET Core support.

Here’s a useful library from Federico Daniel Colombo that makes it easy to add auditing to your ASP.NET applications.

Intern Accomplishments

Justin led off by showing his work on writing a URL rewrite middleware module for ASP.NET Core.  He found that there are three common approaches used to rewrite URLs:

  • Syntax from Apache mod-rewrite
  • Syntax from the IIS rewrite module
  • Regular expressions used to identify and replace content in the URL

You can find his work on GitHub at https://github.com/aspnet/BasicMiddleware/tree/dev/src/Microsoft.AspNetCore.Rewrite
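
As a rough sketch of how this kind of URL-rewriting middleware gets wired up in Startup.Configure (the API names below reflect the Microsoft.AspNetCore.Rewrite package as it later shipped, and the rules and file names are illustrative rather than taken from the prototype):

using System.IO;
using Microsoft.AspNetCore.Builder;
using Microsoft.AspNetCore.Rewrite;

public class Startup
{
    public void Configure(IApplicationBuilder app)
    {
        var options = new RewriteOptions()
            // Regex-based rule: rewrite /old/123 to /new?id=123 without a client round trip.
            .AddRewrite(@"^old/(\d+)", "new?id=$1", skipRemainingRules: true)
            // Import rules written for the IIS URL Rewrite module (file name is illustrative).
            .AddIISUrlRewrite(File.OpenText("IISUrlRewrite.xml"))
            // Import rules written for Apache mod_rewrite (file name is illustrative).
            .AddApacheModRewrite(File.OpenText("ApacheModRewrite.txt"));

        app.UseRewriter(options);
    }
}
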

Crystal then showed her work on turning ViewComponents into TagHelpers.  She demonstrated an initial ViewComponent that turned a photo of our program manager Dan Roth into an ANSI image.  Along the way, she showed the shortcomings of using a ViewComponent directly, such as the lack of IntelliSense and parameter hints.  With Crystal’s enhancement, you can reference your ViewComponents using a tag with a “vc” namespace prefix, with the component name translated to lower-kebab case.
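
For illustration, invoking a hypothetical PhotoGallery ViewComponent through this Tag Helper syntax looks roughly like the following; the component name, the max-images attribute, and the MyWebApp assembly name are made up, and attributes map to the component’s InvokeAsync parameters:

@addTagHelper *, MyWebApp
<vc:photo-gallery max-images="5"></vc:photo-gallery>
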

You can track Crystal’s work as part of addressing MVC issue 1051.

The work featured in this video will be shipping with the next release version of ASP.NET Core.

Notes from the ASP.NET Community Standup – September 6, 2016


This is the next in a series of blog posts that will cover the topics discussed in the ASP.NET Community Standup. The community standup is a short video-based discussion with some of the leaders of the ASP.NET development teams covering the accomplishments of the team on the new ASP.NET Core framework over the previous week. Within 30 minutes, Scott Hanselman, Damian Edwards, Jon Galloway, and an occasional guest or two discuss new features and ask for feedback on important decisions being made by the ASP.NET development teams.

Each week the standup is hosted live on Google Hangouts and the team publishes the recorded video of their discussion to YouTube for later reference. The guys answer your questions LIVE and unfiltered. This is your chance to ask about the why and what of ASP.NET! Join them each Tuesday on live.asp.net where the meeting’s schedule is posted and hosted.

This week’s meeting is below:

Glenn Condron joined us this week, and showed us some improvements the team has made to the .NET Docker containers.

Community Links

Talking .NET published a post about integrating HangFire with Web API.

Andrew Lock had two posts for us:  the first is about OpenID Connect in ASP.NET Core and the second is about the POST-REDIRECT-GET web application technique.

Shawn Wildermuth has a series about “What I learned while building with ASP.NET Core”, and this week he’s covering routing.

Eric Anderson wrote a post about troubleshooting an error starting dotnet.exe after copying some code from one machine to another.

Hisham shared two posts: one about routing and localization and a second about chart controls with a tag helper.

Scott interjected and pointed out that the tag helpers + clean JavaScript library experience in ASP.NET Core is really nice and that more web developers should check out these tools.

Stefan Prodan published a post about doing continuous deployment with Docker Hub.

Amjad published a cool article showing nine different docker .NET application templates.

Steve Smith wrote an article for MSDN Magazine discussing feature slices for ASP.NET Core MVC.  You can read it online or in the print edition.

Barry Dorrans announced that the ASP.NET Core bug bounty has been extended.

Jon Hilton wrote about using MediatR to extend your app with notifications

Some guy named Scott wrote about using ASP.NET Core on low price Linux hosting providers.

Brock Allen announced IdentityServer4 RC1 availability with compatibility for ASP.NET Core

Christos Sakells has been updating a sample application on GitHub that uses ASP.NET Core, TypeScript and Angular 2

Jerrie Pelser has launched a course that demonstrates how to build a contact list application with ASP.NET Core

James South published an article showing how to use .NET Core for image processing

Donovan Brown wrote a post about using the outputName property in the buildOptions element in project.json

The latest updates for log4net show that they are working on support for .NET Core

Simon Timms wrote an article about how to properly use HttpClient in .NET Core.

Accomplishments – Docker Discussion

Glenn announced that the team intends to publish a new ASP.NET Core specific container image.  Currently the image is .NET-specific rather than ASP.NET-specific, and ASP.NET Core runs very well on the dotnet image.  We think that there are some optimizations that can be made for an ASP.NET Core specific deployment that the community will find very valuable.

The new documentation about building .NET Docker images was just published and Glenn shared the link to the online version of this at:  https://docs.microsoft.com/en-us/dotnet/articles/core/docker/building-net-docker-images

From the Microsoft/dotnet images page on the Docker Hub, Glenn showed us the differences between the various images available.

Current .NET Docker Images

While it may look scary with a bunch of versions listed, they each have their own proper place.  The Development images are for various versions of the .NET Core framework and provide full compile capabilities from within the image.  The Runtime images are images that have the minimum software required to run your application after it has been compiled.

When you refer to the microsoft/dotnet image, you are presented with the most wide-ranging image that covers most scenarios.  You can further optimize by using the microsoft/dotnet:<VERSION>-core image that contains the runtime, the OS dependencies, the dotnet executable, and enough to run your application.  You can view the commands that were used to build the docker image by executing the docker history command, and you will see results similar to the following:

.NET Container History

You can then clearly see, starting from the bottom of the list and working up, the commands that were added to build up the image.  There is a --no-trunc switch that you can add to this command in order to prevent the ‘CREATED BY’ column from being truncated so that you can read the entire command executed.
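
For example, assuming the microsoft/dotnet image has already been pulled locally, the full untruncated history can be shown with a command along these lines:

docker history --no-trunc microsoft/dotnet
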

Glenn then showed us that building a simple Hello World console app and running it in the dotnet:core image, without the compiler and SDK, results in a 253MB image.  The same application built with the dotnet image that does have the SDK embedded creates a 540MB image, almost twice as large.  It should be noted that your application will be much larger than a ‘Hello World’ console application, and these sizes are the minimum that your application will start at.

By comparison, we measured the size of some common language docker images and built the default ASP.NET Core web template into an image and compared their image sizes:

Size of some popular Docker images

In reality, the sizes of these images are not a significant problem once the image is deployed to a server running docker.  When a container is running, every instance shares the cost of the image and only needs disk space to manage changes to the image.  The image size is only important when delivering the image and the network bandwidth needed to deliver that content.

A suggestion came in from the community that the dotnet image could be reduced in size by switching from a Debian base to the latest Ubuntu image, which is 60MB smaller.  The team hasn’t researched this yet, but thinks it could be a viable change.

Glenn then showed us Troy Dai’s docker hub account and the prototype docker images he is working on.

Prototype Docker Images

The development image contains:

  • All of the ASP.NET Core NuGet packages pre-restored for the user
  • A pre-crossgen’d cache of ASP.NET Core and all of its dependent assemblies, so that you don’t pay that compilation cost at startup
  • Environment variables already set to listen on the public port 5000

The production image includes a cache of runtime assemblies that have been pre-just-in-time compiled to help improve the startup performance when running in production.

Try some of these images and let us know what you think.  The team is working to improve them, and we will report more details on these changes as they are published.

Announcing the ASP.NET Core September 2016 Patch Release


Today we are making available a patch release to the ASP.NET Core 1.0 release.  This patch contains some updates to MVC, Routing, AntiForgery, Entity Framework Core, and the Kestrel server.  Release notes and links to the issues that are addressed for these packages are available on GitHub.  There are updated ASP.NET Core templates available as part of the “Microsoft .NET Core 1.0.1 – VS 2015 Tooling Preview 2” release in the Tools section on the .NET Downloads page.

There are also several updates to the .NET Core framework, SDK, and Windows Server Hosting components that you will need to install in order to use this version of the ASP.NET Core framework.  We recommend installing this update on your development and production machines to also address a security advisory that was issued.  More details about that advisory can be found on TechNet.

Support

This is a long-term support release that we are issuing, and paid support is available for the next three years.  For more information about the .NET Core / ASP.NET Core release and support cycle, check our support page.

Updating My Existing Application

To update an application that you are currently working on, you simply need to update the package references to the latest 1.0.1 versions of the packages in your project.json file.

Project.JSON Dependencies

Note: the IISIntegration package is shown for comparison purposes only.  It was not updated in this release.

You can “lock in” to a specific version of a package by referencing the full version number of the package, as shown for the IISIntegration and Kestrel packages on lines 1 and 2 of the screenshot above.  The following packages were updated in this release; bump them to 1.0.1 if you have them referenced in your project.json:

  • EntityFrameworkCore
  • AspNetCore.Server.Kestrel
  • AspNetCore.Mvc
  • AspNetCore.Antiforgery
  • AspNetCore.Routing

The last two, Antiforgery and Routing, are included by the MVC package and do not need any extra work if you are not directly referencing them in your project.json.
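
As a rough illustration of the pattern only (the full package names and 1.0.1 version numbers here follow the description above rather than an authoritative manifest, and IISIntegration is left at 1.0.0 as noted), the dependencies section might look like this:

"dependencies": {
  "Microsoft.AspNetCore.Server.IISIntegration": "1.0.0",
  "Microsoft.AspNetCore.Server.Kestrel": "1.0.1",
  "Microsoft.AspNetCore.Mvc": "1.0.1",
  "Microsoft.EntityFrameworkCore": "1.0.1",
  "Microsoft.AspNetCore.Antiforgery": "1.0.1",
  "Microsoft.AspNetCore.Routing": "1.0.1"
}
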

Summary

We are happy with this release because the fixes that we are deploying help improve the stability of the framework.  Most of the bugs we are addressing were identified by customers and don’t have easy workarounds.  Thank you to the participants on our GitHub issues lists and those who are submitting pull-requests.  We look forward to working with you on the next release of ASP.NET Core.


Introducing IdentityServer4 for authentication and access control in ASP.NET Core


This is a guest post by Brock Allen and Dominick Baier. They are security consultants, speakers, and the authors of many popular open source security projects, including IdentityServer.

Modern applications need modern identity. The protocols used for implementing features like authentication, single sign-on, API access control and federation are OpenID Connect and OAuth 2.0. IdentityServer is a popular open source framework for implementing authentication, single sign-on and API access control using ASP.NET.

While IdentityServer3 has been around for quite a while, it was based on ASP.NET 4.x and Katana. For the last several months we’ve been working on porting IdentityServer to .NET Core and ASP.NET Core. We are happy to announce that this work is now almost done and IdentityServer4 RC1 was published to NuGet on September 6th.

IdentityServer4 allows building the following features into your applications:

Authentication as a Service
Centralized login logic and workflow for all of your applications (web, native, mobile, services and SPAs).

Single Sign-on / Sign-out
Single sign-on (and out) over multiple application types.

Access Control for APIs
Issue access tokens for APIs for various types of clients, e.g. server to server, web applications, SPAs and native/mobile apps.

Federation Gateway
Support for external identity providers like Azure Active Directory, Google, Facebook etc. This shields your applications from the details of how to connect to these external providers.

Focus on Customization
The most important part – many aspects of IdentityServer can be customized to fit your needs. Since IdentityServer is a framework and not a boxed product or a SaaS, you can write code to adapt the system the way it makes sense for your scenarios.

You can learn more about IdentityServer4 by heading to https://identityserver.io. Also you can visit the github repo, the documentation, and see our support options.

There are also quick-start tutorials and samples that walk you through common scenarios for protecting APIs and implementing token-based authentication.

Give it a try. We appreciate feedback, suggestions, and bug reports on our issue tracker.

Announcing the DotNetCompilerPlatform 1.0.2 release


Today I’m pleased to announce that the Microsoft.CodeDom.Providers.DotNetCompilerPlatform 1.0.2 package has been released on NuGet. It enables ASP.NET to support the new language features and improves compilation performance. To install this NuGet package, open the NuGet Package Manager in Visual Studio, search for Microsoft.CodeDom.Providers.DotNetCompilerPlatform, and click the Install/Update button.

What’s new in this release

  1. Update the dependency package Microsoft.Net.Compilers to the latest RTM version of the Roslyn compiler. This new version addresses a performance issue with csc.exe on single-core machines.
  2. Disable profiling when launching the Roslyn compiler. If profiling is enabled on the w3wp.exe process, the CLR will not load NGEN’d assemblies, which can slow down cold startup performance. The Roslyn compiler references several large assemblies, and it takes several seconds to start the compiler process if those reference assemblies are not loaded from NGEN images. This new version disables profiling when starting the compiler. If you do want to profile the Roslyn compiler, you can add the following appSettings entry in web.config.
    <appSettings><add key="aspnet:DisableProfilingDuringCompilation" value="false"/></appSettings>
  3. Address an issue on GitHub which causes a compilation error. The root cause of the issue is that the Roslyn compiler assemblies are included as project items which are part of the “candidate assembly files”.  In the error case, the ResolveAssemblyReferences task resolves a reference assembly, namely System.Collections.Immutable.dll, to the copy under the Roslyn compiler folder, which may be an older version than the one the project actually references. This new release fixes the issue.

Tips

The Roslyn compiler references several large assemblies, and it takes several seconds to load them all because the CLR needs to JIT them. To get better performance (for a simple website, cold startup is about 50% faster), it’s highly recommended to NGEN those assemblies. Here is an example command:

Ngen.exe install Microsoft.CodeAnalysis.CSharp.dll

Ngen.exe is in the %SystemRoot%\Microsoft.Net\Framework\v4.0.30319 folder, and you may want to NGEN all of the managed assemblies in the Roslyn compiler package.

Known issue

When you update the Microsoft.CodeDom.Providers.DotNetCompilerPlatform NuGet package, you may see an error while the Microsoft.Net.Compilers NuGet package is being updated. This happens because the Roslyn compiler process is still active and NuGet fails to uninstall the old version. To work around this issue, please restart Visual Studio and reinstall the package.


Feedback

If you find any issue or want to ask a question regarding what’s discussed in this article, please leave a comment below, or use the Contact Owners form for the NuGet package.

 

Secure ASP.NET ViewState


During an appearance on the .NET Rocks podcast last week, a question was raised about securely sending information through ASP.NET ViewState.  I responded to the question by indicating that the typical security concern for web content is not to trust any content submitted from the web, including ViewState.  After that podcast was published, several of my colleagues corrected me: in ASP.NET 4.5 the encryption of ViewState received a significant rewrite that addressed this issue and effectively makes ViewState very secure.

Encrypted and MAC’d

In older versions of ASP.NET, there was an “EnableViewStateMac” option that let you configure whether ViewState was protected against tampering with a Message Authentication Code (MAC) when set to true.  As a secondary configuration option, ViewState was encrypted if the “ViewStateEncryptionMode” setting was enabled.  Beginning with ASP.NET 4.5.2, this configuration is ignored and all requests are both encrypted and protected with a Message Authentication Code.  Security advisory KB2905247, which was sent to all Windows machines on a patch Tuesday in September 2014, set ASP.NET to ignore the EnableViewStateMac setting and use the ASP.NET 4.5.2 encryption settings in all versions of ASP.NET going back to ASP.NET 1.1.  Troy Hunt has a magnificent blog post describing how ViewState MAC works if you are interested in the details.

Improved Encryption Pipeline

With the ASP.NET 4 release, you could replace the symmetric encryption and message authentication algorithms used by the cryptographic pipeline within ASP.NET request processing.  You could change the algorithm by setting a decryption and validation attribute on the machineKey element in the machine.config file.  More details on this configuration can be found in the MSDN documentation.

You can force your Windows web server to use the updated ASP.NET 4.5 encryption capabilities by applying a compatibilityMode attribute to the machineKey element in machine.config like this:

<machineKey compatibilityMode="Framework45" />

Alternatively, you can apply a targetFramework attribute to the httpRuntime element in web.config, as the updated ASP.NET project templates do:

<httpRuntime targetFramework="4.5" />

More information about the updates to ASP.NET Encryption and ViewState are available online.

Introducing the ASP.NET Async SessionState Module


SessionStateModule is ASP.NET’s default session-state handler which retrieves session data and writes it to the session-state store. It already operates asynchronously when acquiring the request state, but it doesn’t support async read/write to the session-state store. In the .NET Framework 4.6.2 release, we introduced a new interface named ISessionStateModule to enable this scenario.

Benefits of the asynchronous SessionState module

It’s all about scalability. As the world moves to the cloud, it becomes really easy to scale out computing resources to serve large spikes in requests to an application. It’s very important to design a scalable system so that an application benefits from the cloud computing architecture. When you consider scalability in terms of session state, you should not use the in-memory session-state provider, because it makes it impossible to share session data across multiple web servers. You need to store session data in another medium such as Microsoft Azure SQL Database, a NoSQL store, or Azure Redis Cache. In this case, the new async SessionState module enables you to plug in an async session-state provider to access those stores asynchronously. Async I/O releases the thread more quickly than synchronous I/O, so ASP.NET can handle other requests. If you are interested in knowing more details about async programming, you can read Stephen Cleary’s article Async Programming : Introduction to Async/Await on ASP.NET.

How to use the Async SessionState module

  1. Open the NuGet package manager, search for Microsoft.AspNet.SessionState.SessionStateModule, and install it. Since the ISessionStateModule interface was introduced in .NET Framework 4.6.2, you need to target your application to .NET Framework 4.6.2. Download the .NET Framework 4.6.2 Developer Pack if you do not already have it installed.
  2. The NuGet package will copy Microsoft.AspNet.SessionState.SessionStateModule.dll to the bin folder and register the module in the web.config file. If you don’t provide an async session-state provider, the module will use a default in-memory provider. With that default provider you won’t get the async benefits, since it just stores the session data in memory.
  3. If you want to get all of the async benefits mentioned above, you need to install a real async session-state provider. With the release of the Microsoft.AspNet.SessionState.SessionStateModule NuGet package, we are also releasing an async SQL session-state provider NuGet package, which leverages Entity Framework to do the async database operations. To install this package, open the NuGet package manager, search for the Microsoft.AspNet.SessionState.SqlSessionStateProviderAsync package, and install it.
  4. The installation will add Microsoft.AspNet.SessionState.SqlSessionStateProviderAsync.dll to the bin folder and insert the provider configuration into the web.config file. The only additional configuration you need to do is to define a connection string within the connectionStrings element with the same name as the value of the connectionStringName attribute. In the sample configuration below, the connection string is named “DefaultConnection”.

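A simplified sketch of the shape of that configuration is below. The connection string value is illustrative, and the exact module and provider type attributes (including version numbers and public key tokens) are written into web.config by the NuGet packages themselves, so treat this only as an outline:

<connectionStrings>
  <add name="DefaultConnection" connectionString="Data Source=.;Initial Catalog=SessionStateDb;Integrated Security=True" />
</connectionStrings>
<system.web>
  <sessionState mode="Custom" customProvider="SqlSessionStateProviderAsync">
    <providers>
      <!-- The fully qualified type attribute is added by the NuGet package and omitted here. -->
      <add name="SqlSessionStateProviderAsync" connectionStringName="DefaultConnection" />
    </providers>
  </sessionState>
</system.web>
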

How to implement an async provider

In most cases, you can leverage the Microsoft.AspNet.SessionState.SessionStateModule NuGet package and just implement a provider if you want to store the session data somewhere else. To implement your own async session-state provider that works with Microsoft.AspNet.SessionState.SessionStateModule, all you need to do is implement a concrete class derived from SessionStateStoreProviderAsyncBase, which is included in the Microsoft.AspNet.SessionState.SessionStateModule NuGet package.  Here is the general shape of that class:
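
The outline below is an approximation: the member names follow the package’s documented surface, but parameter lists and modifiers may not match the shipped assembly exactly, so treat the package itself as the authoritative reference (GetItemResult is a type defined in the same package).

using System.Threading;
using System.Threading.Tasks;
using System.Web;
using System.Web.SessionState;
using System.Configuration.Provider;
using Microsoft.AspNet.SessionState; // provides GetItemResult

public abstract class SessionStateStoreProviderAsyncBase : ProviderBase
{
    // Creating and reading session items asynchronously.
    public abstract SessionStateStoreData CreateNewStoreData(HttpContextBase context, int timeout);
    public abstract Task CreateUninitializedItemAsync(HttpContextBase context, string id, int timeout, CancellationToken cancellationToken);
    public abstract Task<GetItemResult> GetItemAsync(HttpContextBase context, string id, CancellationToken cancellationToken);
    public abstract Task<GetItemResult> GetItemExclusiveAsync(HttpContextBase context, string id, CancellationToken cancellationToken);

    // Writing, releasing, and removing session items asynchronously.
    public abstract Task SetAndReleaseItemExclusiveAsync(HttpContextBase context, string id, SessionStateStoreData item, object lockId, bool newItem, CancellationToken cancellationToken);
    public abstract Task ReleaseItemExclusiveAsync(HttpContextBase context, string id, object lockId, CancellationToken cancellationToken);
    public abstract Task RemoveItemAsync(HttpContextBase context, string id, object lockId, SessionStateStoreData item, CancellationToken cancellationToken);
    public abstract Task ResetItemTimeoutAsync(HttpContextBase context, string id, CancellationToken cancellationToken);

    // Per-request lifecycle hooks.
    public abstract void InitializeRequest(HttpContextBase context);
    public abstract Task EndRequestAsync(HttpContextBase context);
    public abstract bool SetItemExpireCallback(SessionStateItemExpireCallback expireCallback);
}
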

The new async session-state provider base class is almost the same as the sync version, except that the majority of its methods return a Task. You can reference the sample code for the sync session-state provider and use the async System.Data.Odbc APIs to do the database operations instead.
We think that this will assist in some performance tuning scenarios with ASP.NET, and we encourage you to try this provider if you are currently using SQL Server as your session-state provider. Let us know if you are building an async provider for another storage medium. We encourage you to share any new session-state providers you write on NuGet.org. Good luck and happy coding!

Notes from the ASP.NET Community Standup – October 11, 2016


This is the next in a series of blog posts that will cover the topics discussed in the ASP.NET Community Standup. The community standup is a short video-based discussion with some of the leaders of the ASP.NET development teams covering the accomplishments of the team on the new ASP.NET Core framework over the previous week. Within 30 minutes, Scott Hanselman, Damian Edwards, Jon Galloway, and an occasional guest or two discuss new features and ask for feedback on important decisions being made by the ASP.NET development teams.

Each week the standup is hosted live on Google Hangouts and the team publishes the recorded video of their discussion to YouTube for later reference. The guys answer your questions LIVE and unfiltered. This is your chance to ask about the why and what of ASP.NET! Join them each Tuesday on live.asp.net where the meeting’s schedule is posted and hosted.

This week’s meeting is below:

Community Links

We now have a list of free courses available to teach you more about ASP.NET and ASP.NET Core.

The latest article we’re spotlighting in the ASP.NET documentation is Getting started with ASP.NET Core MVC and Entity Framework Core using Visual Studio by Tom Dykstra

OneTrueError – Automated exception handling by Jonas Gauffin

Building REST services with ASP.NET Core Web API and Azure SQL Database by Jovan Popovic

Check out the Bitwarden project: a free, open source password manager built on ASP.NET Core by Kyle Spearrin

Custom authorisation policies and requirements in ASP.NET Core by Andrew Lock

Real-time applications using ASP.NET Core, SignalR & Angular by Christos Sakells

ReactJS.NET 3.0 – .NET Core and lots of small tweaks to support ReactJS by Daniel Lo Nigro

IdentityModel v2 released by Dominick Baier

Running your ASP.NET Core application on Azure Container Service by Rene van Osnabrugge

Request Filtering for ASP.NET Core applications: Part 3 – Integrating with ASP.NET Pipeline by Hisham Bin Ateya

Check out the EasyLOB project: a Data-Driven Design Archetype for developing Web based .NET LOB Applications

How to: ASP NET Core – third party middleware index by Milan Stanaćev

Accomplishments

Damian reported that the team is very close to delivering ASP.NET Core 1.1.  A full list of features is on the roadmap on GitHub, and a preview release is scheduled for the very near future.  The features on the roadmap for 1.1 include:

  • URL Rewriting middleware
  • Response caching middleware
  • DI improvements for 3rd party containers
  • WebListener server (Windows only)
  • Middleware as MVC filters
  • ViewComponents as Tag Helpers
  • View precompilation (tooling preview)
  • Cookie-based TempData provider
  • Improved Azure integration
    • App Service startup time improvements
    • App Service logging provider
    • Azure Key Vault provider

We then discussed the version of the SignalR server that is being built on top of ASP.NET Core.  The framework components of SignalR are being reviewed at this time, and in particular we looked at the “ASP.NET Sockets” implementation that uses a socket-like programming model to interact abstractly with real-time requests from a client across some transport mechanism provided by SignalR.  Damian started off by showing us a sample that uses the ASP.NET Core Sockets EndPoint construct:

https://github.com/davidfowl/Sockets/blob/master/samples/SocketsSample/EndPoints/ChatEndPoint.cs

The code for interacting with the socket uses the typical socket workflow: a while loop that reads content from the socket when data has been received.  The Sockets implementation is built on top of a feature called Channels.

Further experiments on ASP.NET Core features are being shared in the ASPLabs repository.

Also, the team has completed an initial set of prototype designs for Razor Pages and has started working to turn them into a real product.  It is currently scheduled to be delivered at roughly the same time as the SignalR framework.

Questions:

Q:  Are we going to see compilation speed improvements in 1.1?

— No, 1.1 is not a tooling release but rather a set of updates to runtime packages.  You will see a performance improvement with the move from project.json to MSBuild.  More news on that will be coming very soon.

Q:  Any updates on System.Drawing for .NET Core?

— No.  Take a look at ImageProcessor.org and ImageResizer.Net to help with that.  We will address this gap in the future.

Q:  Will Azure App Service WebApp for Linux Preview support ASP.NET Core?

— We have Node and PHP support currently.

At this point, Damian chimed in with a response to a question that he had received outside of the chat room about super-simple HTTP service configuration.  He wrote up his idea for a simple configuration and shared it as an issue in the Routing repository.  The API Damian is proposing could look similar to the following:
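
Purely as a hypothetical illustration (this is not the actual code from the issue), the kind of super-simple endpoint registration being discussed might look like this sketch, which assumes the MapGet extension from Microsoft.AspNetCore.Routing and the response extensions from Microsoft.AspNetCore.Http:

// Hypothetical sketch only; the route template and handler are illustrative.
app.UseRouter(routes =>
{
    routes.MapGet("hello/{name}", (request, response, routeData) =>
        response.WriteAsync($"Hello, {routeData.Values["name"]}!"));
});
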

Q:  Do you have an update on MSBuild / CSProj?

— There is a blog post coming along shortly… after that is published we will discuss further.

Q:  What is the recommended way to access microservices from an ASP.NET Core app?

— Use whatever makes you happy – you can connect and use a microservice with a number of different techniques, and there is no one preferred way.

Q:  Is there are a way to use a publish / subscribe pattern with a servicebus in the SignalR code that was shared?

— We demonstrated the lowest layer in this video, you can layer anything you want on top of it.  We have some samples in the repo that show how to start layering on top.  The team is still discussing plans to ship a firm pub / sub abstraction.

Q:  What’s the latest news about using Razor templates without needing the entire MVC framework?

— The team is working to refactor Razor to run outside of MVC.  As part of the refactoring, it will be easier for other Razor host processes to be enabled in Visual Studio.  More will be coming later.

Q:  Is there an update on JavaScript services?

— We’re now building Visual Studio templates to support these services and get more feedback.  The beginnings of these templates are in the GitHub repository.

At this point, the team discussed how JavaScript has changed the way that they develop applications and some of their background that has shaped the way that they approach the ASP.NET tools.  Damian referred to a JavaScript article on Medium that humorously discusses how to learn JavaScript in 2016, and that they see JavaScript currently in a state where innovation is happening extremely quickly.

 

 

Notes from the ASP.NET Community Standup – October 18, 2016


This is the next in a series of blog posts that will cover the topics discussed in the ASP.NET Community Standup. The community standup is a short video-based discussion with some of the leaders of the ASP.NET development teams covering the accomplishments of the team on the new ASP.NET Core framework over the previous week. Within 30 minutes, Scott Hanselman, Damian Edwards, Jon Galloway, and an occasional guest or two discuss new features and ask for feedback on important decisions being made by the ASP.NET development teams.

Each week the standup is hosted live on Google Hangouts and the team publishes the recorded video of their discussion to YouTube for later reference. The guys answer your questions LIVE and unfiltered. This is your chance to ask about the why and what of ASP.NET! Join them each Tuesday on live.asp.net where the meeting’s schedule is posted and hosted.

This week’s meeting is below:

Community Links

Request Filtering for ASP.NET Core applications: Part 4 – Extending the Request Filtering Rules

Localization Resource Generator & Translator via “dotnet” CLI

ASP.NET Core: Globalization and Localization

Building Apps with Polymer and ASP.NET Core

Don’t Share Your Secrets! (.NET CORE Secret Manager Tool)

Debugging into ASP.NET Core Source

Modifying the UI based on user authorisation in ASP.NET Core

Error Handling and ExceptionFilter Dependency Injection for ASP.NET Core APIs

Angular2 autocomplete with ASP.NET Core and Elasticsearch

ASP.NET Core with Angular2 – tutorial

Managing containerized ASP.NET Core apps with Kubernetes

Working with Environments and Launch Settings in ASP.NET Core

A question was asked on Twitter that Jon highlighted, asking if development had completely stopped on ASP.NET 4.6 and if there was a road map.

— We can tell you that development has not stopped, and we have released versions 4.6.1 and 4.6.2 of the .NET Framework with enhancements for ASP.NET.  There are minor tweaks and adjustments being applied to the full framework as most of our attention is taken up by delivering the new .NET Core and ASP.NET Core frameworks.  We expect to deliver features to the .NET Framework versions that help bring some of the innovation in .NET Core back to the existing .NET Framework so that applications built for Windows on the full framework get those benefits as well.

Accomplishments / Demo

Damian went on to present a demo and discussion about the logging and performance profiling features of ASP.NET Core with Visual Studio 2015.  In particular, he spent time looking at using ETW (Event Tracing for Windows) and AppInsights integrations with Visual Studio.  On Linux, there is a replacement for ETW that uses the same EventSource API but logs to a different location on disk.

When starting a new project, there is an option in the window to add Application Insights to your project.  You don’t need to sign up for an Azure account, and you can choose to “Install the SDK Only”.

Application Insights - Install SDK Only

With AppInsights data collected locally, you can start your application with the debugger running, and Visual Studio analyzes some very interesting data for you.  The Diagnostic Tools window shows the current memory and CPU usage and the events triggered by the application.  You can click the AppInsights button on the toolbar to open the Application Insights Search window and see more detailed data about the interactions with the application.

View Application Insights inside of Visual Studio

These profilers and diagnostic tools are what really make Visual Studio shine for developers that need to do more than just ‘write code’.

The Application Insights team is working to add more features to their data collection and processing facilities inside of Visual Studio.  They have also integrated with CodeLens for Visual Studio Ultimate users that shows the number of exceptions logged for each method you are viewing in the text editor.  If your application has Application Insights running in production, you can connect your Visual Studio instance to that and see the exceptions logged in your production environment.

Damian showed us where the Application Insights telemetry is injected into the ASP.NET Core middleware, and then demonstrated some improvements that he prototyped in an ASP.NET Core logger provider that logs more information to Application Insights.

In an effort to make it even easier to get this type of logging and analytics for Visual Studio developers, Damian showed the ETW logger provider that is already written but not yet shipped.  You can grab it from the nightly MyGet feed under the Microsoft.Extensions.Logging.EventSource package name.  With ASP.NET Core logging to ETW, you can then use the ETW PerfView tool to track and analyze the activities and events in your application.

Is there a performance impact while debugging and using ETW?  When you get to about 60,000 events per second you will start to see performance impacts, and that won’t typically be hit in debugging sessions.  ETW is designed to be turned on by default and it only impacts performance when there is a listener interacting with the Windows event stream.

Scott spent some time learning more about Application Insights and Visual Studio, and wrote up his experience on his blog.

Our next standup will be in two weeks, on November 1.  Thanks for watching, and happy coding!

Modern ASP.NET Web Forms Development – Dependency Injection

We’ve all read various ‘best practices’ posts about this framework and that framework from expert developers in the community.  They’ll cover topics regarding how to make your application more maintainable and how to drive down the risk of maintenance in your applications.  A common design recommendation is to structure your applications so that you compose the objects you are working on.  The MVC framework is a good example that demonstrates this technique.  You compose controller objects in MVC using constructor dependency injection so that the other facilities our controllers need, like loggers and database access, are managed in other classes.  What do we do if we’re working on an ASP.NET Web Forms application?  The Page object doesn’t allow you to use constructor injection, so how do we work around this limitation?

In this series of posts, we’re going to look at how to modernize the development of your ASP.NET Web Forms application using existing features in innovative ways.  Along the way, we will also learn how to reduce risk while maintaining our long-standing applications.

Dependency Injection Primer

For those unfamiliar with the concept, dependency injection is a class design strategy that relocates logic from one class to another. The second class is then injected into the first through either a property or a constructor argument.  As an example, we can move all database management logic to a repository class or all log management tasks to a logger class.  Our application code can then be structured so that it focuses on the application logic and only pulls in these concerns when needed.

The following code demonstrates how you can inject a repository object that is defined by an interface, ICustomerRepository, into a Controller class for use in an MVC action method:
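
A minimal sketch of such a controller might look like the following (the GetAll method on the repository is illustrative):

// using System.Web.Mvc; (or Microsoft.AspNetCore.Mvc in an ASP.NET Core project)
public class CustomerController : Controller
{
    private readonly ICustomerRepository _customerRepository;

    // The repository is supplied by the IoC container when the controller is created
    public CustomerController(ICustomerRepository customerRepository)
    {
        _customerRepository = customerRepository;
    }

    public ActionResult Index()
    {
        // Use the injected repository instead of constructing data access code here
        var customers = _customerRepository.GetAll();
        return View(customers);
    }
}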

An inversion of control container can be configured to automatically create the CustomerController and pass in an appropriate class that implements the ICustomerRepository interface.

Inversion of Control Containers

An Inversion of Control (IoC) container is an object in your code that manages the construction and lifetime of the objects that can be automatically injected into your classes for use.  The code you are writing in the client classes, like the CustomerController above, calls into the classes that do most of the work for you.  This inversion of control is facilitated by a container object.  There are many containers available on NuGet.org for you to choose from and add to your projects, including Autofac, StructureMap, Ninject, Unity, and Castle Windsor.

For this article, we’re going to work with Autofac and add its capabilities to a Web Forms application.

Web Forms and Dependencies

ASP.NET Web Forms has a long history of managing code in a collection of code-behind files named something like “EditCustomer.aspx.cs” that contain a partial class.  This is the editable part of the class, as there is also an “EditCustomer.aspx.designer.cs” file that contains some generated code for the framework to use in managing the user interface.  If we would like to add dependencies to a page, say a reference to a CustomerRepository, we would typically create the CustomerRepository in the constructor of the EditCustomer class and then use it appropriately later in the class.

This leads to some code management problems as we are then creating all of the dependent objects that we need to build and service our pages in the constructor of every page.

If those objects change, or if we want to use a different implementation of our interface, we’re going to have a lot of code to update and test to ensure that it still functions the same way.  We can start to simplify this problem by introducing an abstract BasePage concept to move some of these common concerns out into a reusable class that other pages can inherit from as well:
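
A rough sketch of the idea, with ILogger and FileLogger standing in as illustrative common concerns, might be:

public abstract class BasePage : System.Web.UI.Page
{
    // Common concerns shared by every page live in the base class
    protected ILogger Logger { get; private set; }

    protected BasePage()
    {
        // Each page no longer has to construct its own logger
        Logger = new FileLogger();
    }
}

public partial class EditCustomer : BasePage
{
    // Page-specific dependency still created by hand... for now
    private readonly ICustomerRepository _customerRepository = new CustomerRepository();
}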

We still have a page level concern for the CustomerRepository, but perhaps that’s just a one-off problem for this page.  Or maybe it isn’t…

Using a Container with Web Forms

Out of the box, you can’t add parameters to any of your code-behind file’s constructors.  Not only that, but you have no way to plug in a container to set property values either.  There is a way around this, and it involves using an HttpModule.  An HttpModule is a class that can intercept events in the ASP.NET pipeline and interact with the input, output, and other processes managing the pipeline.  In this case, we’re going to write a simple HttpModule to inspect a Page object and inject required properties or constructor arguments as needed.

In this sample, we’re going to use Autofac to power our module.  After installing the Autofac NuGet package, you can create a new class that implements the IHttpModule interface.  The IHttpModule defines two methods that you must implement: Init and Dispose.  We can use the Init method to listen for the PreRequestHandlerExecute event, the event that is triggered after a Page is constructed and before it begins any Page-level events processing.  Let’s also add a static constructor to configure the Autofac container:
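
A sketch of the shape this module might take is below; the CustomerRepository registration is illustrative, and a real application would register all of its services here:

using System;
using System.Web;
using System.Web.UI;
using Autofac;

public class AutofacModule : IHttpModule
{
    private static readonly IContainer Container;

    // Configure the Autofac container once for the lifetime of the application
    static AutofacModule()
    {
        var builder = new ContainerBuilder();
        builder.RegisterType<CustomerRepository>().As<ICustomerRepository>();
        Container = builder.Build();
    }

    public void Init(HttpApplication context)
    {
        // Fires after the Page has been constructed but before any page events run
        context.PreRequestHandlerExecute += OnPreRequestHandlerExecute;
    }

    private void OnPreRequestHandlerExecute(object sender, EventArgs e)
    {
        var page = HttpContext.Current.Handler as Page;
        if (page == null)
        {
            return;
        }

        // Property and constructor injection happens here (see the next snippet)
        InjectDependencies(page);
    }

    public void Dispose()
    {
    }
}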

With some additional reflection work inside of our event handler method, we can inject arguments for the constructor method and execute that constructor.  This is a little strange, as we’re used to executing constructor methods when calling “new FooClass()”.  In this case, the class is already instantiated and we are triggering the constructor method after the fact.
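
The InjectDependencies method referenced in the module above could be sketched roughly like this; it assumes Autofac’s Resolve and InjectUnsetProperties extension methods are available from the container configured earlier:

// using System.Linq;
private void InjectDependencies(Page page)
{
    // The actual handler is the ASPX-generated subclass, so the code-behind
    // class we wrote is its base type
    var pageType = page.GetType().BaseType;

    // Find a constructor that declares dependencies
    var ctor = pageType.GetConstructors()
                       .FirstOrDefault(c => c.GetParameters().Length > 0);

    if (ctor != null)
    {
        // Resolve each argument from the container...
        var args = ctor.GetParameters()
                       .Select(p => Container.Resolve(p.ParameterType))
                       .ToArray();

        // ...and run the constructor against the already-created Page instance
        ctor.Invoke(page, args);
    }

    // Public settable properties can also be populated by Autofac
    Container.InjectUnsetProperties(page);
}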

Structuring a Page

To set up a page to be injected, we can add public properties of types that Autofac knows how to handle or we can create a public constructor method that accepts arguments.  If we use the constructor injection technique, we need to ensure that a protected argument-less constructor method is available for the framework to use when it creates the Page object.
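
A page structured for injection could look something like this minimal sketch, showing both options side by side:

public partial class EditCustomer : BasePage
{
    // Option 1: property injection - a public settable property the container can resolve
    public ICustomerRepository CustomerRepository { get; set; }

    // A parameterless constructor must remain available for the framework's generated page class
    protected EditCustomer()
    {
    }

    // Option 2: constructor injection - invoked by our module after the page is created
    public EditCustomer(ICustomerRepository customerRepository)
    {
        CustomerRepository = customerRepository;
    }
}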

Final Configuration

The last piece of configuration needed to ensure that our AutofacModule executes in the ASP.NET event pipeline is to add an entry appropriately to web.config:

Summary

We’ve taught an old dog a new trick.  This is sample code only, and may not be optimized for the best performance for your application in production.  Some of the containers, like Autofac, have their own modules available for you to use if you want to employ this technique with your application.  Is this something that you find valuable?  Should we make investments in this type of “modernization” of the web forms templates and architecture?  Let us know in the comments below.

Thanks to MVP and Regional Director Miguel Castro for helping with the content of this article.


Announcing ASP.NET Core 1.1 Preview 1

Today we are happy to announce the release of ASP.NET Core 1.1 Preview 1. This release includes a bunch of great new features along with many bug fixes and general enhancements. We invite you to try out the new features and to provide feedback.

To update an existing project to ASP.NET Core 1.1 Preview 1 you will need to do the following:

  1. Download and install the updated .NET Core 1.1 Preview 1 SDK
  2. Follow the instructions on the .NET Core 1.1 Preview 1 announcement to update your project to use .NET Core 1.1 Preview 1
  3. Update your ASP.NET Core packages dependencies to use the new 1.1.0-preview1 versions

Note: To update your packages to 1.1 Preview 1 with the NuGet Package Manager in Visual Studio you will need to download and install NuGet Package Manager for Visual Studio 2015 3.5 RC1 or later from nuget.org.

You should now be ready to try out 1.1!

What’s new?

The following new features are available for preview in this release:

  • URL Rewriting middleware
  • Response caching middleware
  • Response compression middleware
  • WebListener server
  • View Components as Tag Helpers
  • Middleware as MVC filters
  • Cookie-based TempData provider
  • View compilation
  • Azure App Service logging provider
  • Azure Key Vault configuration provider
  • Redis and Azure Storage Data Protection Key Repositories

For additional details on the changes included in this release please check out the release notes.

Let’s look at some of these features that are ready for you to try out in this preview:

URL Rewriting Middleware

We are bringing URL rewriting functionality to ASP.NET Core through a middleware component that can be configured using IIS standard XML formatted rules, Apache Mod_Rewrite syntax, or some simple C# methods coded into your application. This allows mapping a public URL space, designed for consumption of your clients, to whatever representation the downstream components of your middleware pipeline require as well as redirecting clients to different URLs based on a pattern.

For example, you could ensure a canonical hostname by rewriting any requests to http://example.com to instead be http://www.example.com for everything after the re-write rules have run. Another example is to redirect all requests to http://example.com to https://example.com. You can even configure URL rewrite such that both rules are applied and all requests to example.com are always redirected to SSL and rewritten to www.

We can get started with this middleware by adding a reference to our web application for the Microsoft.AspNetCore.Rewrite package.  This allows us to add a call to configure RewriteOptions in our Startup.Configure method for our rewriter:
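
A minimal sketch of that configuration might look like the following; the regular expressions here are purely illustrative:

// using Microsoft.AspNetCore.Rewrite;
public void Configure(IApplicationBuilder app)
{
    var options = new RewriteOptions()
        // Redirect: the client receives a 301 and requests the new URL itself
        .AddRedirect("(.*)/$", "$1", statusCode: 301)
        // Rewrite: the rest of the pipeline sees the new URL, the client does not
        .AddRewrite(@"^blog/(\d+)$", "article?id=$1", skipRemainingRules: true);

    app.UseRewriter(options);

    // ... the rest of the middleware pipeline (static files, MVC, etc.)
}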

As you can see, we can both force a rewrite and redirect with different rules.

  • Url Redirect sends an HTTP 301 Moved Permanently status code to the client with the new address
  • Url Rewrite gives a different URL to the next steps in the HTTP pipeline, tricking it into thinking a different address was requested.

Response Caching Middleware

Response Caching similar to the OutputCache capabilities of previous ASP.NET releases can now be activated in your application by adding the Microsoft.AspNetCore.ResponseCaching and the Microsoft.Extensions.Caching.Memory packages to your application.  You can add this middleware to your application in the Startup.ConfigureServices method and configure the response caching from the Startup.Configure method.  For a sample implementation, check out the demo in the ResponseCaching repository.
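
Roughly, the registration looks like this minimal sketch:

// using Microsoft.Extensions.DependencyInjection;
public void ConfigureServices(IServiceCollection services)
{
    services.AddMemoryCache();
    services.AddResponseCaching();
    services.AddMvc();
}

public void Configure(IApplicationBuilder app)
{
    // Eligible responses are cached and served based on their cache headers
    app.UseResponseCaching();
    app.UseMvc();
}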

Response Compression Middleware

You can now add GZipCompression to the ASP.NET HTTP Pipeline if you would like ASP.NET to do your compression instead of a front-end web server.  This middleware is available in the Microsoft.AspNetCore.ResponseCompression package.  You can add simple GZipCompression using the fastest compression level with the following syntax in your Startup.cs class:
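
A rough sketch of that setup, assuming the default gzip provider, is:

// using System.IO.Compression;
// using Microsoft.AspNetCore.ResponseCompression;
public void ConfigureServices(IServiceCollection services)
{
    // Gzip is the default provider; ask it for the fastest (lowest CPU cost) level
    services.Configure<GzipCompressionProviderOptions>(options =>
        options.Level = CompressionLevel.Fastest);
    services.AddResponseCompression();
}

public void Configure(IApplicationBuilder app)
{
    // Register compression early so responses from later middleware are compressed
    app.UseResponseCompression();
    app.UseMvc();
}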

There are other options available for configuring compression, including the ability to specify custom compression providers.

WebListener Server for Windows

WebListener is a server that runs directly on top of the Windows Http Server API. WebListener gives you the option to take advantage of Windows-specific features, like support for Windows authentication, port sharing, HTTPS with SNI, HTTP/2 over TLS (Windows 10), direct file transmission, response caching, and WebSockets (Windows 8). On Windows you can use this server instead of Kestrel by referencing the Microsoft.AspNetCore.Server.WebListener package instead of the Kestrel package and configuring your WebHostBuilder to use WebListener instead of Kestrel:
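
A minimal Program.cs sketch, swapping UseWebListener in where UseKestrel would normally appear, might be:

// using System.IO;
// using Microsoft.AspNetCore.Hosting;
public static void Main(string[] args)
{
    var host = new WebHostBuilder()
        .UseWebListener()                 // in place of UseKestrel()
        .UseContentRoot(Directory.GetCurrentDirectory())
        .UseStartup<Startup>()
        .Build();

    host.Run();
}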

You can find other samples demonstrating the use of WebListener in its GitHub repository.

Unlike the other packages that are part of this release, WebListener is being shipped as both 1.0.0 and 1.1.0-preview. The 1.0.0 version of the package can be used in production LTS (1.0.1) ASP.NET Core applications. The 1.1.0-preview version of the package is a pre-release of the next version of WebListener as part of the 1.1.0 release.

View Components as Tag Helpers

You can now invoke View Components from your views using Tag Helper syntax and get all the benefits of IntelliSense and Tag Helper tooling in Visual Studio. Previously, to invoke a View Component from a view you would use the Component.InvokeAsync method and pass in any View Component arguments using an anonymous object:

@await Component.InvokeAsync("Copyright", new { website = "example.com", year = 2016 })

Instead, you can now invoke a View Component like you would any Tag Helper while getting Intellisense for the View Component parameters:

TagHelper in Visual Studio

To enable invoking your View Components as Tag Helpers, simply register them using the @addTagHelper directive:

@addTagHelper "*, WebApplication1"

Middleware as MVC filters

Middleware typically sits in the global request handling pipeline. But what if you want to apply middleware to only a specific controller or action? You can now apply middleware as an MVC resource filter using the new MiddlewareFilterAttribute.  For example, you could apply response compression or caching to a specific action, or you might use a route value based request culture provider to establish the current culture for the request using the localization middleware.

To use middleware as a filter you first create a type with a Configure method that specifies the middleware pipeline that you want to use:
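
A minimal sketch of such a pipeline type follows; the name is illustrative, and it assumes the response compression services have been registered in ConfigureServices:

public class ResponseCompressionPipeline
{
    public void Configure(IApplicationBuilder applicationBuilder)
    {
        // Only requests that hit the filtered controller or action pass through this middleware
        applicationBuilder.UseResponseCompression();
    }
}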

You then apply that middleware pipeline to a controller, an action or globally using the MiddlewareFilterAttribute:
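
For example, using the illustrative pipeline type above, the attribute can be applied like this:

// Apply the pipeline to a single action...
public class ReportsController : Controller
{
    [MiddlewareFilter(typeof(ResponseCompressionPipeline))]
    public IActionResult LargeReport() => View();
}

// ...or to an entire controller
[MiddlewareFilter(typeof(ResponseCompressionPipeline))]
public class DownloadsController : Controller
{
}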

Cookie-based TempData provider

As an alternative to using Session state for storing TempData you can now use the new cookie-based TempData provider. The cookie-based TempData provider will persist all TempData in a cookie and remove the need to manage any server-side session state.

To use the cookie-based TempData provider you register the CookieTempDataProvider service in your ConfigureServices method after adding the MVC services as follows:

services.AddMvc();
services.AddSingleton<ITempDataProvider, CookieTempDataProvider>();

View compilation

While Razor syntax for views provides a flexible development experience that doesn’t require a separate compilation step, there are some scenarios where you do not want the Razor syntax interpreted at runtime.  You can now precompile the Razor views that your application references and deploy them with your application.  You can add the view compiler to your application in the “tools” section of project.json with the package reference “Microsoft.AspNetCore.Mvc.Razor.Precompilation.Tools”.  After running a package restore, you can then execute “dotnet razor-precompile” to precompile the Razor views in your application.

Azure App Service logging provider

The Microsoft.AspNetCore.AzureAppServicesIntegration package allows your application to take advantage of App Service specific logging and diagnostics. Any log messages that are written using the ILogger/ILoggerFactory abstractions will go to the locations configured in the Diagnostics Logs section of your App Service configuration in the portal (see screenshot).

Usage:

Add a reference to the Microsoft.AspNetCore.AzureAppServicesIntegration package and call the UseAzureAppServices method in your Program.cs.
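
A minimal Program.cs for this, assuming the default Kestrel setup, looks roughly like the following:

public static void Main(string[] args)
{
    var host = new WebHostBuilder()
        .UseKestrel()
        .UseAzureAppServices()   // wires up App Service diagnostics logging
        .UseStartup<Startup>()
        .Build();

    host.Run();
}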

NOTE: UseIISIntegration is not in the example above because UseAzureAppServices includes it for you. It shouldn’t hurt your application if you have both calls, but explicitly calling UseIISIntegration is not required.

Once you have added the UseAzureAppServices method then your application will honor the settings in the Diagnostics Logs section of the Azure App Service settings as shown below. If you change these settings, switching from file system to blob storage logs for example, your application will automatically switch to logging to the new location without you redeploying.

Azure Key Vault configuration provider

The Microsoft.Extensions.Configuration.AzureKeyVault package provides a configuration provider for Azure Key Vault. This allows you to retrieve configuration from Key Vault secrets on application start and hold it in memory, using the normal ASP.NET Core configuration abstractions to access the configuration data.

Basic usage of the provider is done like this:
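
A rough sketch of wiring the provider into a Startup constructor follows; the vault name and Azure AD client credentials are read from local configuration here, and the KeyVault:* keys are illustrative:

// using Microsoft.Extensions.Configuration;
// using Microsoft.Extensions.Configuration.AzureKeyVault;
public Startup(IHostingEnvironment env)
{
    var builder = new ConfigurationBuilder()
        .SetBasePath(env.ContentRootPath)
        .AddJsonFile("appsettings.json");

    // Build once to read the vault details from local configuration
    var builtConfig = builder.Build();

    builder.AddAzureKeyVault(
        $"https://{builtConfig["KeyVault:Vault"]}.vault.azure.net/",
        builtConfig["KeyVault:ClientId"],
        builtConfig["KeyVault:ClientSecret"]);

    Configuration = builder.Build();
}

public IConfigurationRoot Configuration { get; }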

For an example on how to add the Key Vault configuration provider see the sample here: https://github.com/aspnet/Configuration/tree/dev/samples/KeyVaultSample

Redis and Azure Storage Data Protection Key Repositories

The Microsoft.AspNetCore.DataProtection.AzureStorage and Microsoft.AspNetCore.DataProtection.Redis packages allow storing your Data Protection keys in Azure Storage or Redis, respectively. This allows keys to be shared across several instances of a website so that you can, for example, share an authentication cookie or CSRF protection across many load-balanced servers running your ASP.NET Core application. Because data protection is used behind the scenes for a few things in MVC, it’s extremely probable that once you start scaling out you will need to share the keyring. Before these two packages, your options for sharing keys were limited to using a network share with a file-based key repository.

Examples:

Azure:

services.AddDataProtection()
  .AddAzureStorage("<blob URI including SAS token>");

Redis:

// Connect
var redis = ConnectionMultiplexer.Connect("localhost:6379");

// Configure
services.AddDataProtection()
  .PersistKeysToRedis(redis, "DataProtection-Keys");

NOTE: When using a non-persistent Redis instance, anything that is encrypted using Data Protection will not be able to be decrypted once the instance resets. For the default authentication flows this would usually just mean that users are redirected to log in again. However, for anything manually encrypted with Data Protection’s Protect method, you will not be able to decrypt the data at all. For this reason, you should not use a Redis instance that isn’t persistent when manually using the Protect method of Data Protection. Data Protection is optimized for ephemeral data.

Summary

Thank you for trying out ASP.NET Core 1.1 Preview 1! If you have any problems, we will be monitoring the GitHub issues for the ASP.NET Core repositories. We hope you enjoy these new features and improvements!

Bearer Token Authentication in ASP.NET Core

This is a guest post from Mike Rousos

Introduction

ASP.NET Core Identity automatically supports cookie authentication. It is also straightforward to support authentication by external providers using the Google, Facebook, or Twitter ASP.NET Core authentication packages. One authentication scenario that requires a little bit more work, though, is to authenticate via bearer tokens. I recently worked with a customer who was interested in using JWT bearer tokens for authentication in mobile apps that worked with an ASP.NET Core back-end. Because some of their customers don’t have reliable internet connections, they also wanted to be able to validate the tokens without having to communicate with the issuing server.

In this article, I offer a quick look at how to issue JWT bearer tokens in ASP.NET Core. In subsequent posts, I’ll show how those same tokens can be used for authentication and authorization (even without access to the authentication server or the identity data store).

Offline Token Validation Considerations

First, here’s a quick diagram of the desired architecture.

The customer has a local server with business information which will need to be accessed and updated periodically by client devices. Rather than store user names and hashed passwords locally, the customer prefers to use a common authentication micro-service which is hosted in Azure and used in many scenarios beyond just this specific one. This particular scenario is interesting, though, because the connection between the customer’s location (where the server and clients reside) and the internet is not reliable. Therefore, they would like a user to be able to authenticate at some point in the morning when the connection is up and have a token that will be valid throughout that user’s work shift. The local server, therefore, needs to be able to validate the token without access to the Azure authentication service.

This local validation is easily accomplished with JWT tokens. A JWT token typically contains a body with information about the authenticated user (subject identifier, claims, etc.), the issuer of the token, the audience (recipient) the token is intended for, and an expiration time (after which the token is invalid). The token also contains a cryptographic signature as detailed in RFC 7518. This signature is generated by a private key known only to the authentication server, but can be validated by anyone in possession of the corresponding public key. One JWT validation work flow (used by AD and some identity providers) involves requesting the public key from the issuing server and using it to validate the token’s signature. In our offline scenario, though, the local server can be prepared with the necessary public key ahead of time. The challenge with this architecture is that the local server will need to be given an updated public key anytime the private key used by the cloud service changes, but this inconvenience means that no internet connection is needed at the time the JWT tokens are validated.

Issuing Authentication Tokens

As mentioned previously, Microsoft.AspNetCore.* libraries don’t have support for issuing JWT tokens. There are, however, several other good options available.

First, Azure Active Directory Authentication provides identity and authentication as a service. Using Azure AD is a quick way to get identity in an ASP.NET Core app without having to write authentication server code.

Alternatively, if a developer wishes to write the authentication service themselves, there are a couple third-party libraries available to handle this scenario. IdentityServer4 is a flexible OpenID Connect framework for ASP.NET Core. Another good option is OpenIddict. Like IdentityServer4, OpenIddict offers OpenID Connect server functionality for ASP.NET Core. Both OpenIddict and IdentityServer4 work well with ASP.NET Identity 3.

For this demo, I will use OpenIddict. There is excellent documentation on accomplishing the same tasks with IdentityServer4 available in the IdentityServer4 documentation, which I would encourage you to take a look at, as well.

A Disclaimer

Please note that both IdentityServer4 and OpenIddict are pre-release packages currently. OpenIddict is currently released as a beta and IdentityServer4 as an RC, so both are still in development and subject to change!

Setup the User Store

In this scenario, we will use a common ASP.NET Identity 3-based user store, accessed via Entity Framework Core. Because this is a common scenario, setting it up is as easy as creating a new ASP.NET Core web app from new project templates and selecting ‘individual user accounts’ for the authentication mode.

This template will provide a default ApplicationUser type and Entity Framework Core connections to manage users. The connection string in appsettings.json can be modified to point at the database where you want this data stored.

Because JWT tokens can encapsulate claims, it’s interesting to include some claims for users other than just the defaults of user name or email address. For demo purposes, let’s include two different types of claims.

Adding Roles

ASP.NET Identity 3 includes the concept of roles. To take advantage of this, we need to create some roles which users can be assigned to. In a real application, this would likely be done by managing roles through a web interface. For this short sample, though, I just seeded the database with sample roles by adding this code to startup.cs:

// Initialize some test roles. In the real world, these would be setup explicitly by a role manager
private string[] roles = new[] { "User", "Manager", "Administrator" };
private async Task InitializeRoles(RoleManager<IdentityRole> roleManager)
{
    foreach (var role in roles)
    {
        if (!await roleManager.RoleExistsAsync(role))
        {
            var newRole = new IdentityRole(role);
            await roleManager.CreateAsync(newRole);
            // In the real world, there might be claims associated with roles
            // _roleManager.AddClaimAsync(newRole, new )
        }
    }
}

I then call InitializeRoles from my app’s Startup.Configure method. The RoleManager needed as a parameter to InitializeRoles can be retrieved by IoC (just add a RoleManager parameter to your Startup.Configure method).

Because roles are already part of ASP.NET Identity, there’s no need to modify models or our database schema.

Adding Custom Claims to the Data Model

It’s also possible to encode completely custom claims in JWT tokens. To demonstrate that, I added an extra property to my ApplicationUser type. For sample purposes, I added an integer called OfficeNumber:

public virtual int OfficeNumber { get; set; }

This is not something that would likely be a useful claim in the real world, but I added it in my sample specifically because it’s not the sort of claim that’s already handled by any of the frameworks we’re using.

I also updated the view models and controllers associated with creating a new user to allow specifying role and office number when creating new users.

I added the following properties to the RegisterViewModel type:

[Display(Name = "Is administrator")]
public bool IsAdministrator { get; set; }

[Display(Name = "Is manager")]
public bool IsManager { get; set; }

[Required]
[Display(Name = "Office Number")]
public int OfficeNumber { get; set; }

I also added cshtml for gathering this information to the registration view:

<div class="form-group">
    <label asp-for="OfficeNumber" class="col-md-2 control-label"></label>
    <div class="col-md-10">
        <input asp-for="OfficeNumber" class="form-control" />
        <span asp-validation-for="OfficeNumber" class="text-danger"></span>
    </div>
</div>
<div class="form-group">
    <label asp-for="IsManager" class="col-md-2 control-label"></label>
    <div class="col-md-10">
        <input asp-for="IsManager" class="form-control" />
        <span asp-validation-for="IsManager" class="text-danger"></span>
    </div>
</div>
<div class="form-group">
    <label asp-for="IsAdministrator" class="col-md-2 control-label"></label>
    <div class="col-md-10">
        <input asp-for="IsAdministrator" class="form-control" />
        <span asp-validation-for="IsAdministrator" class="text-danger"></span>
    </div>
</div>

Finally, I updated the AccountController.Register action to set role and office number information when creating users in the database. Notice that we add a custom claim for the office number. This takes advantage of ASP.NET Identity’s custom claim tracking. Be aware that ASP.NET Identity doesn’t store claim value types, so even in cases where the claim is always an integer (as in this example), it will be stored and returned as a string. Later in this post, I explain how non-string claims can be included in JWT tokens.

var user = new ApplicationUser { UserName = model.Email, Email = model.Email, OfficeNumber = model.OfficeNumber };
var result = await _userManager.CreateAsync(user, model.Password);
if (result.Succeeded)
{
    if (model.IsAdministrator)
    {
        await _userManager.AddToRoleAsync(user, "Administrator");
    }
    else if (model.IsManager)
    {
        await _userManager.AddToRoleAsync(user, "Manager");
    }

    var officeClaim = new Claim("office", user.OfficeNumber.ToString(), ClaimValueTypes.Integer);
    await _userManager.AddClaimAsync(user, officeClaim);
    ...

Updating the Database Schema

After making these changes, we can use Entity Framework’s migration tooling to easily update the database to match (the only change to the database should be to add an OfficeNumber column to the users table). To migrate, simply run dotnet ef migrations add OfficeNumberMigration and dotnet ef database update from the command line.

At this point, the authentication server should allow registering new users. If you’re following along in code, go ahead and add some sample users at this point.

Issuing Tokens with OpenIddict

The OpenIddict package is still pre-release, so it’s not yet available on NuGet.org. Instead, the package is available on the aspnet-contrib MyGet feed.

To restore it, we need to add that feed to our solution’s NuGet.config. If you don’t yet have a NuGet.config file in your solution, you can add one that looks like this:

<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <packageSources>
    <add key="nuget.org" value="https://api.nuget.org/v3/index.json" />
    <add key="aspnet-contrib" value="https://www.myget.org/F/aspnet-contrib/api/v3/index.json" />
  </packageSources>
</configuration>

Once that’s done, add a reference to "OpenIddict": "1.0.0-beta1-*" and "OpenIddict.Mvc": "1.0.0-beta1-*" in your project.json file’s dependencies section. OpenIddict.Mvc contains some helpful extensions that allow OpenIddict to automatically bind OpenID Connect requests to MVC action parameters.

There are only a few steps needed to enable OpenIddict endpoints.

Use OpenIddict Model Types

The first change is to update your ApplicationDbContext model type to inherit from OpenIddictDbContext instead of IdentityDbContext.

After making this change, migrate the database to update it, as well (dotnet ef migrations add OpenIddictMigration and dotnet ef database update).

Configure OpenIddict

Next, it’s necessary to register OpenIddict types in our ConfigureServices method in our Startup type. This can be done with a call like this:

services.AddOpenIddict<ApplicationDbContext>()
    .AddMvcBinders()
    .EnableTokenEndpoint("/connect/token")
    .UseJsonWebTokens()
    .AllowPasswordFlow()
    .AddSigningCertificate(jwtSigningCert);

The specific methods called on the OpenIddictBuilder here are important to understand.

  • AddMvcBinders. This method registers custom model binders that will populate OpenIdConnectRequest parameters in MVC actions with OpenID Connect requests read from incoming HTTP request’s context. This isn’t required, since the OpenID Connect requests can be read manually, but it’s a helpful convenience.
  • EnableTokenEndpoint. This method allows you to specify the endpoint which will be serving authentication tokens. The endpoint shown above (/connect/token) is a pretty common default endpoint for token issuance. OpenIddict needs to know the location of this endpoint so that it can be included in responses when a client queries for information about how to connect (using the .well-known/openid-configuration endpoint, which OpenIddict automatically provides). OpenIddict will also validate requests to this endpoint to be sure they are valid OpenID Connect requests. If a request is invalid (if it’s missing mandatory parameters like grant_type, for example), then OpenIddict will reject the request before it even reaches the app’s controllers.
  • UseJsonWebTokens. This instructs OpenIddict to use JWT as the format for bearer tokens it produces.
  • AllowPasswordFlow. This enables the password grant type when logging on a user. The different OpenID Connect authorization flows are documented in RFC and OpenID Connect specs. The password flow means that client authorization is performed based on user credentials (name and password) which are provided from the client. This is the flow that best matches our sample scenario.
  • AddSigningCertificate. This API specifies the certificate which should be used to sign JWT tokens. In my sample code, I produce the jwtSigningCert argument from a pfx file on disk (var jwtSigningCert = new X509Certificate2(certLocation, certPassword);). In a real-world scenario, the certificate would more likely be loaded from the authentication server’s certificate store, in which case a different overload of AddSigningCertificate would be used (one which takes the cert’s thumbprint and store name/location).
    • If you need a self-signed certificate for testing purposes, one can be produced with the makecert and pvk2pfx command line tools (which should be on the path in a Visual Studio Developer Command prompt).
      • makecert -n "CN=AuthSample" -a sha256 -sv AuthSample.pvk -r AuthSample.cer This will create a new self-signed test certificate with its public key in AuthSample.cer and its private key in AuthSample.pvk.
      • pvk2pfx -pvk AuthSample.pvk -spc AuthSample.cer -pfx AuthSample.pfx -pi [A password] This will combine the pvk and cer files into a single pfx file containing both the public and private keys for the certificate (protected by a password).
      • This pfx file is what needs to be loaded by OpenIddict (since the private key is necessary to sign tokens). Note that this private key (and any files containing it) must be kept secure.
  • DisableHttpsRequirement. The code snippet above doesn’t include a call to DisableHttpsRequirement(), but such a call may be useful during testing to disable the requirement that authentication calls be made over HTTPS. Of course, this should never be used outside of testing as it would allow authentication tokens to be observed in transit and, therefore, enable malicious parties to impersonate legitimate users.

Enable OpenIddict Endpoints

Once AddOpenIddict has been used to configure OpenIddict services, a call to app.UseOpenIddict(); (which should come after the existing call to UseIdentity) should be added to Startup.Configure to actually enable OpenIddict in the app’s HTTP request processing pipeline.

Implementing the Connect/Token Endpoint

The final step necessary to enable the authentication server is to implement the connect/token endpoint. The EnableTokenEndpoint call made during OpenIddict configuration indicates where the token-issuing endpoint will be (and allows OpenIddict to validate incoming OIDC requests), but the endpoint still needs to be implemented.

OpenIddict’s owner, Kévin Chalet, gives a good example of how to implement a token endpoint supporting a password flow in this sample. I’ve restated the gist of how to create a simple token endpoint here.

First, create a new controller called ConnectController and give it a Token post action. Of course, the specific names are not important, but it is important that the route matches the one given to EnableTokenEndpoint.

Give the action method an OpenIdConnectRequest parameter. Because we are using the OpenIddict MVC binder, this parameter will be supplied by OpenIddict. Alternatively (without using the OpenIddict model binder), the GetOpenIdConnectRequest extension method could be used to retrieve the OpenID Connect request.

Based on the contents of the request, you should validate that the request is valid.

  1. Confirm that the grant type is as expected (‘Password’ for this authentication server).
  2. Confirm that the requested user exists (using the ASP.NET Identity UserManager).
  3. Confirm that the requested user is able to sign in (since ASP.NET Identity allows for accounts that are locked or not yet confirmed).
  4. Confirm that the password provided is correct (again, using a UserManager).

If everything in the request checks out, then a ClaimsPrincipal can be created using SignInManager.CreateUserPrincipalAsync.

Roles and custom claims known to ASP.NET identity will automatically be present in the ClaimsPrincipal. If any changes are needed to the claims, those can be made now.

One set of claims updates that will be important is to attach destinations to claims. A claim is only included in a token if that claim includes a destination for that token type. So, even though the ClaimsPrincipal will contain all ASP.NET Identity claims, they will only be included in tokens if they have appropriate destinations. This allows some claims to be kept private and others to be included only in particular token types (access or identity tokens) or if particular scopes are requested. For the purposes of this simple demo, I am including all claims for all token types.

This is also an opportunity to add additional custom claims to the ClaimsPrincipal. Typically, tracking the claims with ASP.NET Identity is sufficient but, as mentioned earlier, ASP.NET Identity does not remember claim value types. So, if it was important that the office claim be an integer (rather than a string), we could instead add it here based on data in the ApplicationUser object returned from the UserManager. Claims cannot be added to a ClaimsPrincipal directly, but the underlying identity can be retrieved and modified. For example, if the office claim was created here (instead of at user registration), it could be added like this:

var identity = (ClaimsIdentity)principal.Identity;
var officeClaim = new Claim("office", user.OfficeNumber.ToString(), ClaimValueTypes.Integer);
officeClaim.SetDestinations(OpenIdConnectConstants.Destinations.AccessToken, OpenIdConnectConstants.Destinations.IdentityToken);
identity.AddClaim(officeClaim);

Finally, an AuthenticationTicket can be created from the claims principal and used to sign in the user. The ticket object allows us to use helpful OpenID Connect extension methods to specify scopes and resources to be granted access. In my sample, I pass the requested scopes filtered by those the server is able to provide. For resources, I provide a hard-coded string indicating the resource this token should be used to access. In more complex scenarios, the requested resources (request.GetResources()) might be considered when determining which resource claims to include in the ticket. Note that resources (which map to the audience element of a JWT) are not mandatory according to the JWT specification, though many JWT consumers expect them.

Put all together, here’s a simple implementation of a connect/token endpoint:

[HttpPost]
public async Task<IActionResult> Token(OpenIdConnectRequest request)
{
    if (!request.IsPasswordGrantType())
    {
        // Return bad request if the request is not for password grant type
        return BadRequest(new OpenIdConnectResponse
        {
            Error = OpenIdConnectConstants.Errors.UnsupportedGrantType,
            ErrorDescription = "The specified grant type is not supported."
        });
    }

    var user = await _userManager.FindByNameAsync(request.Username);
    if (user == null)
    {
        // Return bad request if the user doesn't exist
        return BadRequest(new OpenIdConnectResponse
        {
            Error = OpenIdConnectConstants.Errors.InvalidGrant,
            ErrorDescription = "Invalid username or password"
        });
    }

    // Check that the user can sign in and is not locked out.
    // If two-factor authentication is supported, it would also be appropriate to check that 2FA is enabled for the user
    if (!await _signInManager.CanSignInAsync(user) || (_userManager.SupportsUserLockout && await _userManager.IsLockedOutAsync(user)))
    {
        // Return bad request if the user can't sign in
        return BadRequest(new OpenIdConnectResponse
        {
            Error = OpenIdConnectConstants.Errors.InvalidGrant,
            ErrorDescription = "The specified user cannot sign in."
        });
    }

    if (!await _userManager.CheckPasswordAsync(user, request.Password))
    {
        // Return bad request if the password is invalid
        return BadRequest(new OpenIdConnectResponse
        {
            Error = OpenIdConnectConstants.Errors.InvalidGrant,
            ErrorDescription = "Invalid username or password"
        });
    }

    // The user is now validated, so reset lockout counts, if necessary
    if (_userManager.SupportsUserLockout)
    {
        await _userManager.ResetAccessFailedCountAsync(user);
    }

    // Create the principal
    var principal = await _signInManager.CreateUserPrincipalAsync(user);

    // Claims will not be associated with specific destinations by default, so we must indicate whether they should
    // be included or not in access and identity tokens.
    foreach (var claim in principal.Claims)
    {
        // For this sample, just include all claims in all token types.
        // In reality, claims' destinations would probably differ by token type and depending on the scopes requested.
        claim.SetDestinations(OpenIdConnectConstants.Destinations.AccessToken, OpenIdConnectConstants.Destinations.IdentityToken);
    }

    // Create a new authentication ticket for the user's principal
    var ticket = new AuthenticationTicket(
        principal,
        new AuthenticationProperties(),
        OpenIdConnectServerDefaults.AuthenticationScheme);

    // Include resources and scopes, as appropriate
    var scope = new[]
    {
        OpenIdConnectConstants.Scopes.OpenId,
        OpenIdConnectConstants.Scopes.Email,
        OpenIdConnectConstants.Scopes.Profile,
        OpenIdConnectConstants.Scopes.OfflineAccess,
        OpenIddictConstants.Scopes.Roles
    }.Intersect(request.GetScopes());

    ticket.SetResources("http://localhost:5000/");
    ticket.SetScopes(scope);

    // Sign in the user
    return SignIn(ticket.Principal, ticket.Properties, ticket.AuthenticationScheme);
}

Testing the Authentication Server

At this point, our simple authentication server is done and should work to issue JWT bearer tokens for the users in our database.

OpenIddict implements OpenID Connect, so our sample should support a standard /.well-known/openid-configuration endpoint with information about how to authenticate with the server.

If you’ve followed along building the sample, launch the app and navigate to that endpoint. You should get a json response similar to this:

{
  "issuer": "http://localhost:5000/",
  "jwks_uri": "http://localhost:5000/.well-known/jwks",
  "token_endpoint": "http://localhost:5000/connect/token",
  "code_challenge_methods_supported": [ "S256" ],
  "grant_types_supported": [ "password" ],
  "subject_types_supported": [ "public" ],
  "scopes_supported": [ "openid", "profile", "email", "phone", "roles" ],
  "id_token_signing_alg_values_supported": [ "RS256" ]
}

This gives clients information about our authentication server. Some of the interesting values include:

  • The jwks_uri property is the endpoint that clients can use to retrieve public keys for validating token signatures from the issuer.
  • token_endpoint gives the endpoint that should be used for authentication requests.
  • The grant_types_supported property is a list of the grant types supported by the server. In the case of this sample, that is only password.
  • scopes_supported is a list of the scopes that a client can request access to.

If you’d like to check that the correct certificate is being used, you can navigate to the jwks_uri endpoint to see the public keys used by the server. The x5t property of the response should be the certificate thumbprint. You can check this against the thumbprint of the certificate you expect to be using to confirm that they’re the same.

Finally, we can test the authentication server by attempting to login! This is done via a POST to the token_endpoint. You can use a tool like Postman to put together a test request. The address for the post should be the token_endpoint URI and the body of the post should be x-www-form-urlencoded and include the following items:

  • grant_type must be ‘password’ for this scenario.
  • username should be the username to login.
  • password should be the user’s password.
  • scope should be the scopes that access is desired for.
  • resource is an optional parameter which can specify the resource the token is meant to access. Using this can help to make sure that a token issued to access one resource isn’t reused to access a different one.

Here are the complete request and response from me testing the connect/token API:

Request

POST /connect/token HTTP/1.1
Host: localhost:5000
Cache-Control: no-cache
Postman-Token: f1bb8681-a963-2282-bc94-03fdaea5da78
Content-Type: application/x-www-form-urlencoded

grant_type=password&username=Mike%40Fabrikam.com&password=MikePassword1!&scope=openid+email+name+profile+roles

Response

{
  "token_type": "Bearer",
  "access_token": "eyJhbGciOiJSUzI1NiIsImtpZCI6IkU1N0RBRTRBMzU5NDhGODhBQTg2NThFQkExMUZFOUIxMkI5Qzk5NjIiLCJ0eXAiOiJKV1QifQ.eyJ1bmlxdWVfbmFtZSI6Ik1pa2VAQ29udG9zby5jb20iLCJBc3BOZXQuSWRlbnRpdHkuU2VjdXJpdHlTdGFtcCI6ImMzM2U4NzQ5LTEyODAtNGQ5OS05OTMxLTI1Mzk1MzY3NDEzMiIsInJvbGUiOiJBZG1pbmlzdHJhdG9yIiwib2ZmaWNlIjoiMzAwIiwianRpIjoiY2UwOWVlMGUtNWQxMi00NmUyLWJhZGUtMjUyYTZhMGY3YTBlIiwidXNhZ2UiOiJhY2Nlc3NfdG9rZW4iLCJzY29wZSI6WyJlbWFpbCIsInByb2ZpbGUiLCJyb2xlcyJdLCJzdWIiOiJjMDM1YmU5OS0yMjQ3LTQ3NjktOWRjZC01NGJkYWRlZWY2MDEiLCJhdWQiOiJodHRwOi8vbG9jYWxob3N0OjUwMDEvIiwibmJmIjoxNDc2OTk3MDI5LCJleHAiOjE0NzY5OTg4MjksImlhdCI6MTQ3Njk5NzAyOSwiaXNzIjoiaHR0cDovL2xvY2FsaG9zdDo1MDAwLyJ9.q-c6Ld1b7c77td8B-0LcppUbL4a8JvObiru4FDQWrJ_DZ4_zKn6_0ud7BSijj4CV3d3tseEM-3WHgxjjz0e8aa4Axm55y4Utf6kkjGjuYyen7bl9TpeObnG81ied9NFJTy5HGYW4ysq4DkB2IEOFu4031rcQsUonM1chtz14mr3wWHohCi7NJY0APVPnCoc6ae4bivqxcYxbXlTN4p6bfBQhr71kZzP0AU_BlGHJ1N8k4GpijHVz2lT-2ahYaVSvaWtqjlqLfM_8uphNH3V7T7smaMpomQvA6u-CTZNJOZKalx99GNL4JwGk13MlikdaMFXhcPiamhnKtfQEsoNauA",
  "expires_in": 1800
}

The access_token is the JWT and is nothing more than a base64-encoded string in three parts ([header].[body].[signature]). A number of websites offer JWT decoding functionality.

The access token above has these contents:

{
  "alg": "RS256",
  "kid": "E57DAE4A35948F88AA8658EBA11FE9B12B9C9962",
  "typ": "JWT"
}.
{
  "unique_name": "Mike@Contoso.com",
  "AspNet.Identity.SecurityStamp": "c33e8749-1280-4d99-9931-253953674132",
  "role": "Administrator",
  "office": "300",
  "jti": "ce09ee0e-5d12-46e2-bade-252a6a0f7a0e",
  "usage": "access_token",
  "scope": [
    "email",
    "profile",
    "roles"
  ],
  "sub": "c035be99-2247-4769-9dcd-54bdadeef601",
  "aud": "http://localhost:5001/",
  "nbf": 1476997029,
  "exp": 1476998829,
  "iat": 1476997029,
  "iss": "http://localhost:5000/"
}.
[signature]

Important fields in the token include:

  • kid is the key ID that can be used to look up the key needed to validate the token’s signature.
    • x5t, similarly, is the signing certificate’s thumbprint.
  • role and office capture our custom claims.
  • exp is a timestamp for when the token should expire and no longer be considered valid.
  • iss is the issuing server’s address.

These fields can be used to validate the token.

Conclusion and Next Steps

Hopefully this article has provided a useful overview of how ASP.NET Core apps can issue JWT bearer tokens. The in-box abilities to authenticate with cookies or third-party social providers are sufficient for many scenarios, but in other cases (especially when supporting mobile clients), bearer authentication is more convenient.

Look for a follow-up to this post coming soon covering how to validate the token in ASP.NET Core so that it can be used to authenticate and sign on a user automatically. And in keeping with the original scenario I ran into with a customer, we’ll make sure the validation can all be done without access to the authentication server or identity database.

Resources

Announcing the Fastest ASP.NET Yet, ASP.NET Core 1.1 RTM

We are happy to announce that ASP.NET Core 1.1 is now available as a stable release on nuget.org! This release includes a bunch of great new features along with many bug fixes and general enhancements. We invite you to try out the new features and to provide feedback.

To update an existing project to ASP.NET Core 1.1 you will need to do the following:

  1. Download and install the .NET Core 1.1 SDK
  2. If your application is referencing the .NET Core framework, you should update the references in your project.json file for netcoreapp1.0 or Microsoft.NetCore.App version 1.0 to version 1.1. In the default project.json file for an ASP.NET Core project running on the .NET Core framework, these two updates are located as follows:

    Two places to update project.json to .NET Core 1.1

  3. Update your ASP.NET Core packages dependencies to use the new 1.1.0 versions. You can do this by navigating to the NuGet package manager window and inspecting the “Updates” tab for the list of packages that you can update.

    Updating Packages using the NuGet package manager UI with the last pre-release build of ASP.NET Core 1.1

Performance

We are very pleased to announce the participation of ASP.NET Core with the Kestrel webserver in the round 13 TechEmpower benchmarks.  The TechEmpower standard benchmarks are known for their thorough testing of the many web frameworks that are available.  In the latest results from TechEmpower, ASP.NET Core 1.1 with Kestrel was ranked as the fastest mainstream fullstack web framework in the plaintext test.

TechEmpower also reports that the performance of ASP.NET Core running on Linux is approximately 760 times faster than it was one year ago.  Since TechEmpower started measuring benchmarks in March 2013, they have never seen such a performance improvement as they have observed in ASP.NET Core over the last year.

You can read more about the TechEmpower benchmarks and their latest results on the TechEmpower website.

New Web Development Features in Visual Studio 2017

Visual Studio 2017 includes a number of new web development features, including:

  • The new JavaScript editor
  • Embedded ESLint capabilities that help check for common mistakes in your script
  • JavaScript debugging support for browser
  • Updated BrowserLink features for two-way communication between your browsers and Visual Studio while debugging

Support for ASP.NET Core in Visual Studio for Mac

We’re pleased to announce the first preview of ASP.NET Core tooling in Visual Studio for Mac.  For those familiar with Visual Studio, you’ll find many of the same capabilities you’d expect from a Visual Studio development environment.  IntelliSense and refactoring capabilities are built on top of Roslyn, and it shares much of Visual Studio’s .NET Core debugger.

This first preview focuses on developing Web API applications.  You’ll have a great experience creating a new project and working with C# files, along with TextMate bundle support for web file types (e.g. HTML, JavaScript, JSON, .cshtml). In a future update, we’ll be adding the same first-class support for these file types that we have in Visual Studio, which will bring IntelliSense for all of them as well.

What’s new in ASP.NET Core 1.1?

This release was designed around the following feature themes in order to help developers:

  • Improved and cross-platform compatible site hosting capabilities when using a host other than Windows Internet Information Services (IIS).
  • Support for developing with native Windows capabilities
  • Compatibility, portability and performance of middleware and other MVC features throughout the UI framework
  • Improved deployment and management experience of ASP.NET Core applications on Microsoft Azure. We think these improvements help make ASP.NET Core the best choice for developing an application for the cloud.

For additional details on the changes included in this release please check out the release notes.

URL Rewriting Middleware

We are bringing URL rewriting functionality to ASP.NET Core through a middleware component that can be configured using IIS standard XML formatted rules, Apache Mod_Rewrite syntax, or some simple C# methods coded into your application.  When you want to run your ASP.NET Core application outside of IIS, we want to enable those same rich URL rewriting capabilities regardless of the web host you are using.  If you are using containers, Apache, or nginx you will be able to have ASP.NET Core manage this capability for you with a uniform syntax that you are familiar with.

URL Rewriting allows mapping a public URL space, designed for consumption of your clients, to whatever representation the downstream components of your middleware pipeline require as well as redirecting clients to different URLs based on a pattern.

For example, you could ensure a canonical hostname by rewriting any requests to http://example.com to instead be http://www.example.com for everything after the re-write rules have run. Another example is to redirect all requests to http://example.com to https://example.com. You can even configure URL rewrite such that both rules are applied and all requests to example.com are always redirected to SSL and rewritten to www.

We can get started with this middleware by adding a reference to our web application for the Microsoft.AspNetCore.Rewrite package.  This allows us to add a call to configure RewriteOptions in our Startup.Configure method for our rewriter:
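
A minimal sketch of that configuration might look like the following; the regular expressions are purely illustrative:

// using Microsoft.AspNetCore.Rewrite;
public void Configure(IApplicationBuilder app)
{
    var options = new RewriteOptions()
        // Redirect: the client receives a 301 and requests the new URL itself
        .AddRedirect("(.*)/$", "$1", statusCode: 301)
        // Rewrite: the rest of the pipeline sees the new URL, the client does not
        .AddRewrite(@"^blog/(\d+)$", "article?id=$1", skipRemainingRules: true);

    app.UseRewriter(options);

    // ... the rest of the middleware pipeline (static files, MVC, etc.)
}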

As you can see, we can both force a rewrite and redirect with different rules.

  • Url Redirect sends an HTTP 301 Moved Permanently status code to the client with the new address
  • Url Rewrite gives a different URL to the next steps in the HTTP pipeline, tricking it into thinking a different address was requested.

Response Caching Middleware

Response Caching similar to the OutputCache capabilities of previous ASP.NET releases can now be activated in your application by adding the Microsoft.AspNetCore.ResponseCaching and the Microsoft.Extensions.Caching.Memory packages to your application.  You can add this middleware to your application in the Startup.ConfigureServices method and configure the response caching from the Startup.Configure method.  For a sample implementation, check out the demo in the ResponseCaching repository.
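
Roughly, the registration looks like this minimal sketch:

// using Microsoft.Extensions.DependencyInjection;
public void ConfigureServices(IServiceCollection services)
{
    services.AddMemoryCache();
    services.AddResponseCaching();
    services.AddMvc();
}

public void Configure(IApplicationBuilder app)
{
    // Eligible responses are cached and served based on their cache headers
    app.UseResponseCaching();
    app.UseMvc();
}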

You can now add GZipCompression to the ASP.NET HTTP Pipeline if you would like ASP.NET to do your compression instead of a front-end web server.  IIS would have normally handled this for you, but in environments where your host does not provide compression capabilities, ASP.NET Core can do this for you.  We think this is a great practice that everyone should use in their server-side applications to deliver smaller data that transmits faster over the network.

This middleware is available in the Microsoft.AspNetCore.ResponseCompression package.  You can add simple GZipCompression using the fastest compression level with the following syntax in your Startup.cs class:
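
A rough sketch of that setup, assuming the default gzip provider, is:

// using System.IO.Compression;
// using Microsoft.AspNetCore.ResponseCompression;
public void ConfigureServices(IServiceCollection services)
{
    // Gzip is the default provider; ask it for the fastest (lowest CPU cost) level
    services.Configure<GzipCompressionProviderOptions>(options =>
        options.Level = CompressionLevel.Fastest);
    services.AddResponseCompression();
}

public void Configure(IApplicationBuilder app)
{
    // Register compression early so responses from later middleware are compressed
    app.UseResponseCompression();
    app.UseMvc();
}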

There are other options available for configuring compression, including the ability to specify custom compression providers.

WebListener Server for Windows

WebListener is a server that runs directly on top of the Windows Http Server API. WebListener gives you the option to take advantage of Windows-specific features, like support for Windows authentication, port sharing, HTTPS with SNI, HTTP/2 over TLS (Windows 10), direct file transmission, response caching, and WebSockets (Windows 8).  This may be advantageous if you want to bundle an ASP.NET Core microservice in a Windows container that takes advantage of these Windows features.

On Windows you can use this server instead of Kestrel by referencing the Microsoft.AspNetCore.Server.WebListener package instead of the Kestrel package and configuring your WebHostBuilder to use WebListener instead of Kestrel:
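
A minimal Program.cs sketch, swapping UseWebListener in where UseKestrel would normally appear, might be:

// using System.IO;
// using Microsoft.AspNetCore.Hosting;
public static void Main(string[] args)
{
    var host = new WebHostBuilder()
        .UseWebListener()                 // in place of UseKestrel()
        .UseContentRoot(Directory.GetCurrentDirectory())
        .UseStartup<Startup>()
        .Build();

    host.Run();
}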

You can find other samples demonstrating the use of WebListener in its GitHub repository.

Unlike the other packages that are part of this release, WebListener is being shipped as both 1.0.0 and 1.1.0-preview. The 1.0.0 version of the package can be used in production LTS (1.0.1) ASP.NET Core applications. The 1.1.0-preview version of the package is a pre-release of the next version of WebListener as part of the 1.1.0 release.

View Components as Tag Helpers

View Components are an ASP.NET Core display concept that pairs a Razor view with a server-side class that inherits from the ViewComponent base class.  You can now invoke a View Component from your views using Tag Helper syntax and get all the benefits of IntelliSense and Tag Helper tooling in Visual Studio. Previously, to invoke a View Component from a view you would use the Component.InvokeAsync method and pass in any View Component arguments using an anonymous object:

 @await Component.InvokeAsync("Copyright", new { website = "example.com", year = 2016 })

Instead, you can now invoke a View Component like you would any Tag Helper while getting Intellisense for the View Component parameters:

TagHelper in Visual Studio

This gives us the same rich intellisense and editor support in the razor template editor that we have for TagHelpers.  With the Component.Invoke syntax, there is no obvious way to add CSS classes or get tooltips to assist in configuring the component like we have with the TagHelper feature.  Finally, this keeps us in “HTML Editing” mode and allows a developer to avoid shifting into C# in order to reference a ViewComponent they want to add to a page.

To enable invoking your View Components as Tag Helpers, register them using the @addTagHelper directive:

@addTagHelper "*, WebApplication1"

Middleware as MVC filters

Middleware typically sits in the global request handling pipeline. But what if you want to apply middleware to only a specific controller or action? You can now apply middleware as an MVC resource filter using the new MiddlewareFilterAttribute.  For example, you could apply response compression or caching to a specific action, or you might use a route value based request culture provider to establish the current culture for the request using the localization middleware.

To use middleware as a filter you first create a type with a Configure method that specifies the middleware pipeline that you want to use:
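
As a minimal sketch, a pipeline type that applies the response compression middleware from earlier might look like this (the class name is arbitrary; the only requirement is a Configure method that accepts an IApplicationBuilder):

public class ResponseCompressionPipeline
{
    public void Configure(IApplicationBuilder applicationBuilder)
    {
        // Any middleware registered here runs only for the controllers or
        // actions that the filter is applied to.
        applicationBuilder.UseResponseCompression();
    }
}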

You then apply that middleware pipeline to a controller, an action or globally using the MiddlewareFilterAttribute:
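
For example, applying the sketch above to a single controller might look like this:

[MiddlewareFilter(typeof(ResponseCompressionPipeline))]
public class HomeController : Controller
{
    // Responses from this controller's actions pass through the
    // pipeline defined in ResponseCompressionPipeline.
    public IActionResult Index() => View();
}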

Cookie-based TempData provider

The cookie-based TempData provider stores TempData in a cookie rather than in server-side session state. To use it, register the CookieTempDataProvider service in your ConfigureServices method after adding the MVC services, as follows:

services.AddMvc();
services.AddSingleton<ITempDataProvider, CookieTempDataProvider>();

View compilation

The Razor syntax for views provides a flexible development experience where compilation of the views happens automatically at runtime when the view is executed. However, there are some scenarios where you do not want the Razor syntax compiled at runtime. You can now precompile the Razor views in your application and deploy the compiled views with it.  To enable view compilation as part of publishing your application:

  1. Add a reference to “Microsoft.AspNetCore.Mvc.Razor.ViewCompilation.Design” under the “dependencies” section of project.json.
  2. Add a reference to “Microsoft.AspNetCore.Mvc.Razor.ViewCompilation.Tools” under the “tools” section.
  3. Add a postpublish script to invoke the view compiler:
"scripts": {

   "postpublish": "dotnet razor-precompile --configuration %publish:Configuration% --framework %publish:TargetFramework% --output-path %publish:OutputPath% %publish:ProjectPath%"

}

Azure App Service logging provider

The Microsoft.AspNetCore.AzureAppServicesIntegration package allows your application to take advantage of App Service-specific logging and diagnostics. Any log messages that are written using the ILogger/ILoggerFactory abstractions will go to the locations configured in the Diagnostics Logs section of your App Service configuration in the portal.  We highly recommend using this logging provider when deploying an application to Azure App Service.  Prior to this feature, it was very difficult to capture log files without a third-party provider or hosted service.

Usage:

Add a reference to the Microsoft.AspNetCore.AzureAppServicesIntegration package and add a single call to UseAzureAppServices when configuring the WebHostBuilder in your Program.cs:

 
  var host = new WebHostBuilder()
      .UseKestrel()
      .UseAzureAppServices()
      .UseStartup<Startup>()
      .Build();

NOTE: UseIISIntegration is not in the above example because UseAzureAppServices includes it for you.  Having both calls shouldn’t hurt your application, but explicitly calling UseIISIntegration is not required.
Once you have added the UseAzureAppServices call, your application will honor the settings in the Diagnostics Logs section of the Azure App Service settings. If you change these settings, switching from file system to blob storage logs for example, your application will automatically switch to logging to the new location without you redeploying.

Azure Key Vault configuration provider

Azure Key Vault is a service that can be used to store cryptographic keys and other secrets in a security-hardened container on Azure.  You can set up your own Key Vault by following the Getting Started docs. The Microsoft.Extensions.Configuration.AzureKeyVault package then provides a configuration provider for your Azure Key Vault. This package allows you to retrieve configuration from Key Vault secrets on application start and hold it in memory, using the normal ASP.NET Core configuration abstractions to access the configuration data.

Basic usage of the provider is done like this:
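
A rough sketch of what this can look like in the Startup constructor (env is the injected IHostingEnvironment, and the "Vault", "ClientId", and "ClientSecret" keys are illustrative names for wherever the vault name and Azure AD client credentials come from):

var builder = new ConfigurationBuilder()
    .SetBasePath(env.ContentRootPath)
    .AddJsonFile("appsettings.json")
    .AddEnvironmentVariables();

// Build once to read the vault name and client credentials supplied
// via JSON or environment variables.
var bootstrapConfig = builder.Build();

builder.AddAzureKeyVault(
    $"https://{bootstrapConfig["Vault"]}.vault.azure.net/",
    bootstrapConfig["ClientId"],
    bootstrapConfig["ClientSecret"]);

// Key Vault secrets are now available through the normal IConfiguration
// abstractions alongside the other configuration sources.
Configuration = builder.Build();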

For an example of how to add the Key Vault configuration provider, see the sample at https://github.com/aspnet/Configuration/tree/dev/samples/KeyVaultSample

Redis and Azure Storage Data Protection Key Repositories

The Microsoft.AspNetCore.DataProtection.AzureStorage and Microsoft.AspNetCore.DataProtection.Redis packages allow storing your Data Protection keys in Azure Storage or Redis respectively. This allows keys to be shared across several instances of a web application, so that you can share an authentication cookie or CSRF protection across many load-balanced servers running your ASP.NET Core application. As data protection is used behind the scenes for a few things in MVC, it is very likely that you will need to share the keyring once you start scaling out. Before these two packages, your options for sharing keys were limited to using a network share with a file-based key repository.

Examples:

Azure:

services.AddDataProtection()
  .AddAzureStorage("<blob URI including SAS token>");

Redis:

// Connect
var redis = ConnectionMultiplexer.Connect("localhost:6379"); 
// Configure
services.AddDataProtection() 
  .PersistKeysToRedis(redis, "DataProtection-Keys");

NOTE: When using a non-persistent Redis instance, anything that is encrypted using Data Protection cannot be decrypted once the instance resets. For the default authentication flows this would usually just mean that users are redirected to log in again. However, anything manually encrypted with Data Protection’s Protect method will not be decryptable at all. For this reason, you should not use a non-persistent Redis instance when manually using the Protect method. Data Protection is optimized for ephemeral data.
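
For context, “manually using the Protect method” refers to code along these lines, where dataProtectionProvider is an injected IDataProtectionProvider (a sketch only; the purpose string is arbitrary):

// Payloads protected like this can only be unprotected while the same
// keyring is available, so the store backing the keys must persist.
var protector = dataProtectionProvider.CreateProtector("MyApp.SomePurpose");
string protectedPayload = protector.Protect("sensitive value");
string original = protector.Unprotect(protectedPayload);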

In our initial release of dependency injection capabilities with ASP.NET Core, we heard feedback that there was some friction in enabling third-party containers.  With this release, we are acting on that feedback and introducing a new IServiceProviderFactory interface that enables those third-party containers to be configured easily in ASP.NET Core applications.  This interface allows the construction of the container to move to the WebHostBuilder, and allows further customization of the container's mappings in a new ConfigureContainer method in the Startup class.

Developers of containers can find samples demonstrating how to connect their favorite provider, including samples using Autofac and StructureMap, on GitHub.

This additional configuration should allow developers to use their favorite containers by adding a line to the Main method of their application that is as simple as “UseStructureMap()”.
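
As a rough sketch of the Startup side of this pattern with a container such as Autofac (the EmailSender types are placeholders, and the Program.cs wire-up comes from the container's own integration package rather than anything shown here):

public class Startup
{
    public void ConfigureServices(IServiceCollection services)
    {
        services.AddMvc();
    }

    // Called when the container's IServiceProviderFactory is in use;
    // ContainerBuilder here is Autofac's builder type.
    public void ConfigureContainer(ContainerBuilder builder)
    {
        builder.RegisterType<EmailSender>().As<IEmailSender>();
    }
}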

Summary

The ASP.NET Core 1.1 release improves significantly on the previous release of ASP.NET Core.  With improved tooling in the Visual Studio 2017 RC and new tooling in Visual Studio for Mac, we think you’ll find web development to be a delightful experience.  This is a fully supported release and we encourage you to use the new features to better support your applications.  You will find a list of known issues with workarounds on GitHub, should you run into trouble. We will continue to improve the fastest mainstream full-stack framework available on the web, and want you to join us.  Download Visual Studio 2017 RC and Visual Studio for Mac from https://visualstudio.com and get the latest .NET Core SDK from https://dot.net

Put a .NET Core App in a Container with the new Docker Tools for Visual Studio


By now, hopefully you’ve heard the good news that we’ve added first-class support for building and running .NET applications inside of Docker containers in Visual Studio 2017 RC.  Visual Studio 2017 and Docker support building and running .NET applications in Windows containers (on Windows 10/Server 2016 only) and .NET Core applications in Linux containers, including the ability to publish and run Linux containers on Microsoft’s Azure App Service.

Docker containers package an application with everything it needs to run: code, runtime, system tools, system libraries – anything you would install on a server.  Put simply, a container is an isolated place where an application can run without affecting the rest of the system, and without the system affecting the application. This makes them an ideal way to package and run applications in production environments, where historically constraints imposed by the production environment (e.g. which version of the .NET runtime the server is running) have dictated development decisions.  Additionally, Docker containers are very lightweight, which enables scaling applications quickly by spinning up new instances.

In this post, I’ll focus on creating a .NET Core application, publishing it to the Azure App Service Linux Preview and setting up continuous build integration and delivery to the Azure Container Service.

Getting Started

To get started in Visual Studio 2017, you need to install the “.NET Core and Docker (Preview)” workload in the new Visual Studio 2017 installer.


Once it finishes installing, you’ll need to install Docker for Windows (if you want to use Windows containers on Windows 10 or Server 2016, you’ll need the Beta channel and the Windows 10 Anniversary Update; if you want Linux containers, you can choose either the Stable or Beta channel installers).

After you’ve finished installing Docker, you’ll need to share with it the drive where your images will be built and run from.  To do this:

  • Right click on the Docker system tray icon and choose settings
  • Choose the “Shared Drives” tab
  • Share the drive your images will run from (this is the same drive the Visual Studio project will live on)

[Screenshot: the Shared Drives settings in Docker for Windows]

Creating an application with Docker support

Now that Visual Studio and Docker are installed and configured properly, let’s create a .NET Core application that we’ll run in a Linux container.

On the ASP.NET application dialog, there is a checkbox that allows us to add Docker support to the application as part of project creation.  For now, we’ll skip this, so we can see how to add Docker support to existing applications.

[Screenshot: the new web application project]

Now that we have our basic web application, let’s add a quick code snippet to the “About” page that will show which operating system the application is running on.
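
The snippet itself isn’t important; something along these lines (a sketch, added to the About action in HomeController) is enough to surface the operating system in the view:

public IActionResult About()
{
    // RuntimeInformation reports the OS the process is actually running on,
    // which makes the difference between IIS Express and the container obvious.
    ViewData["Message"] = System.Runtime.InteropServices.RuntimeInformation.OSDescription;
    return View();
}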

[Screenshot: the updated About page code]

Next, we’ll hit Ctrl+F5 to run it inside IIS Express, and we can see we’re running on Windows, as we would expect.

[Screenshot: the app running natively on Windows]

Now, to add Docker support to the application, right click on the project in Solution Explorer, choose Add, and then “Docker Project Support” (use “Docker Solution Support” to create containers for multiple projects).

[Screenshot: adding Docker support]

You’ll see that the “Start” button has changed to say “Docker” and several Docker files have been added to the project.

[Screenshot: the additional Docker files in Solution Explorer]

Let’s hit Ctrl+F5 again and we can see that the app is now running inside a Linux container locally.

[Screenshot: the app running in a Linux container]

Running the application in Azure

Now let’s publish the app to Microsoft Azure App Service, which now offers the ability to run Linux Docker containers in a preview form.

To do this, I’ll right click on the app and choose “Publish”.  This will open our brand new publish page.  Click the “Profiles” dropdown and select “New Profile”, and then choose “Azure App Service Linux (Preview)” and click “OK”.

[Screenshot: choosing the Azure App Service Linux (Preview) publish target]

Before proceeding it’s important to understand the anatomy of how a Docker application works in a production environment:

  • A container registry is created that the Docker image is published to
  • The App Service site is created that downloads the image from the container registry and runs it
  • At any time, you can push a new image to the container registry which will then result in the copy running in App Service being updated.

With that understanding, let’s proceed to publishing our application to Azure.  The next thing we’ll see is the Azure provisioning dialog.  There are a couple of things to note about using this dialog in the RC preview:

  • If you are using an existing Resource Group, it must be in the same region as the App Service Plan you are creating
  • If you are creating a new Resource Group, you must set the Container Registry and the App Service plan to be in the same region (e.g. both must be in “West US”)
  • The VM size of the App Service Plan must be “S1” or larger

[Screenshot: the Azure provisioning dialog]

When we click “OK” it will take about a minute, and then we’ll return to the “Publish” page, where we’ll see a summary of the publish profile we just created.

[Screenshot: the publish profile summary]

Now we click “Publish” and it will take about another minute, during which time you’ll see a Docker command prompt pop up.

[Screenshot: the Docker command prompt during publish]

When the application is ready, your browser will open to the site, and we can see that we’re running on Linux in Azure!

[Screenshot: the site running on Linux in Azure]

Setting up continuous build integration and delivery to the Azure Container Service

Now let’s set up continuous build and delivery to the Microsoft Azure Container Service. To do this, I’ll right click on the project and choose “Configure Continuous Delivery…”.  This will bring up a continuous delivery configuration dialog.

[Screenshot: the Configure Continuous Delivery dialog]

On the Configure Continuous Delivery dialog, select a user account with a valid Azure subscription, as well as an Azure subscription with a valid container registry and an Azure Container Service that uses the DC/OS orchestrator.

[Screenshot: the completed Configure Continuous Delivery dialog]

When done, click OK to start the setup process. A dialog will pop up to explain that the setup process has started.

[Screenshot: the configuration started notification]

As the continuous delivery setup can take several minutes to complete, you can consult the ‘Continuous Delivery Tools’ output window later to inspect the progress.

Upon successful completion of the setup, the output window will display the configuration details used to create the build and release definitions on VSTS to enable continuous build delivery for the project to the Azure Container Service.

[Screenshot: the setup complete output]

Conclusion

Please download Visual Studio 2017 today, and give our .NET Core and Docker experience a try.  It’s worth noting that this is a preview of the experience, so please help us make it great by providing feedback in the comments below.

Client-side debugging of ASP.NET projects in Google Chrome


Visual Studio 2017 RC now supports client-side debugging of both JavaScript and TypeScript in Google Chrome.

For years, it has been possible to debug both the backend .NET code and the client-side JavaScript code running in Internet Explorer at the same time. Unfortunately, the capability was limited solely to Internet Explorer.

In Visual Studio 2017 RC that changes. You can now debug both JavaScript and TypeScript directly in Visual Studio when using Google Chrome as your browser of choice. All you need to do is select Chrome as your browser in Visual Studio and hit F5 to debug.

If you’re interested in giving us feedback on future features and ideas before we ship them, join our community.

[Screenshot: the browser selector in Visual Studio]

The first thing you’ll notice when launching Chrome by hitting F5 in Visual Studio is a page that says, “Please wait while we attach…”.

[Screenshot: the “Please wait while we attach…” page]

What happens is that Visual Studio attaches to Chrome using the remote debugging protocol and then redirects to the ASP.NET project URL (something like http://localhost:12345) once the attach is complete. While the ASP.NET site starts up, the “Please wait while we attach…” message remains visible where normally you’d see a blank browser.

Once the debugger is attached, script debugging is enabled for all JavaScript files in the project, as well as for all TypeScript files if source map information is available. Here’s a screenshot of a breakpoint being hit in a TypeScript file.

[Screenshot: a breakpoint being hit in a TypeScript file]

For TypeScript debugging you need to instruct the compiler to produce a .map file. You can do that by placing a tsconfig.json file in the root of your project and specifying a few properties, like so:

{
  "compileOnSave": true,
  "compilerOptions": {
    "sourceMap": true
  }
}

Some developers prefer to use Chrome’s or IE’s own dev tools to do client-side debugging, and that is great. There will be a setting in Visual Studio that allows you to disable client-side debugging in both IE and Chrome, but unfortunately it didn’t make it into the release candidate.

We hope you’ll enjoy this feature and we would love to hear your feedback in the comments section below, or via Twitter.

Download Visual Studio 2017 RC
