
Announcing ASP.NET Core 2.0.0-Preview1 and Updates for .NET Web Developers


The ASP.NET team is pleased to share the first preview version of the ASP.NET Core 2.0 framework.  In this post, we’ll look at the new features and changes to the web framework that were announced at the Build 2017 keynote and sessions.  We will also look at some other updates that were published for ASP.NET 4.7 and WCF.

ASP.NET Core 2.0.0-preview1

The next full version of ASP.NET Core is on its way, and developers who have been following along on the ASP.NET GitHub repositories have been very vocal about their interest in the new features in this version.  Some of these new features include:

  • A new ASP.NET Core meta-package that includes all features that you need to build an application. No longer do you need to pick and choose individual ASP.NET Core features in separate packages as all features are now included in a Microsoft.AspNetCore.All package in the default templates. If there are features you don’t need in your application, our new package trimming features will exclude those binaries in your published application output by default.
  • A new default Web Host configuration, codifying the typical defaults of the web host with the WebHost.CreateDefaultBuilder() API. This adds Kestrel, IIS configuration, default configuration sources, logging providers, and the content root.
  • Updated configuration and simplified logging. We have enhanced the LoggerFactory object to easily support a Dictionary<string, LogLevel> that defines log filters instead of a FilterLoggerSettings object, making it simpler to control the source and level of logs that get propagated from your application to your configured log providers.
  • Create pages without controllers in ASP.NET Core with the new RazorPages capabilities.  Just create a Pages folder and drop in a cshtml file with the new @page directive to get started.
  • Debugging your application in the cloud is easier than ever with integrated Azure Application Insights and diagnostics when debugging in Visual Studio and after deploying to Azure App Service.
  • A newly revamped authentication model that makes it easy to configure authentication for your application using DI.
  • New templates for configuring authentication for your web apps and web APIs using Azure AD B2C
  • New support in ASP.NET Core Identity for providing identity as a service. Updated templates decouple your apps from their identity concerns using standard protocols (OpenID Connect, OAuth 2.0). Easily migrate apps using ASP.NET Core Identity for authentication to use Azure AD B2C or any other OpenID Connect compliant identity provider.
  • Build secure web APIs using ASP.NET Core Identity. Acquire access tokens for accessing your web APIs using the Microsoft Authentication Library (MSAL).
  • ASP.NET Core has always helped HTML-encode your content by default, but with the new version we’re taking an extra step to help prevent cross-site request forgery (XSRF) attacks: ASP.NET Core will now emit anti-forgery tokens by default and validate them on form POST actions and pages without extra configuration.

For a full list of changes see the release notes.

ASP.NET Core 2 – More Performance Improvements

ASP.NET Core 1 was ranked in the top 10 of the TechEmpower plaintext benchmarks in November 2016.  We’ve continued to work on our performance, and we’re already seeing improvements thanks to enhancements in the Kestrel server, thread pool, and JITter, to name just a few.

We are also providing a runtime store with pre-JITted versions of all of the ASP.NET Core packages we ship.  This reduces a lot of the work needed at startup time for your application.  Our initial tests reflect a significant improvement in startup time and we’re continuing to work on optimizing the store for subsequent releases.

ASP.NET Core projects now default to pre-compiling Razor views and pages during publish, which removes one of the most significant portions of application startup time after deployment. This change, along with the aforementioned publish trimming and runtime store, also contributes to the drastic reduction in size of published applications, reducing deployment times and disk usage on servers.

ASP.NET 4.7 – SQL Server, Session Provider, and OutputCache Provider

We’re also announcing an update to the SQL Server session provider and Session State module this week for ASP.NET 4.7.  This provider contains updates that will help you with modern web applications.  We see customers with JavaScript that makes multiple concurrent requests to the server for data.  By default, these requests lock the session object and block concurrent requests.  The new Session State provider eliminates that concurrency locking check, and also allows you to configure throttling for concurrent requests for the same session ID.  The updated session state module ships with an updated in-memory provider, the default provider for ASP.NET.

The SQL Server provider has been optimized for SQL Server versions 2008 and later, as well as for Azure SQL Database.  All requests to the database are asynchronous, which should help with concurrent request performance in your web applications.  It also no longer requires you to allocate the tables and schema for session management; instead, it creates and manages them for you.  The database provider also works with the new concurrent requests mode.  Finally, the provider has been optimized to use in-memory OLTP on SQL Server 2016 and Azure SQL Database.

Activate the updated Session Provider’s concurrent requests per session feature with the following AppSettings:

  <add key="aspnet:AllowConcurrentRequestsPerSession" value="true" />
  <add key="aspnet:RequestQueueLimitPerSession" value="100" />

The SQL Server OutputCache provider has been updated and now utilizes asynchronous IO as well.  You can reference the updated provider with this markup in your web.config:

<caching>
      <outputCache defaultProvider="SQLAsyncOutputCacheProvider">
        <providers>
          <add name="SQLAsyncOutputCacheProvider" connectionStringName="DefaultConnection1" type="Microsoft.AspNet.OutputCache.SQLAsyncOutputCacheProvider.SQLAsyncOutputCacheProvider, Microsoft.AspNet.OutputCache.SQLAsyncOutputCacheProvider" />
        </providers>
      </outputCache>
</caching>

<modules>
      <remove name="OutputCache" />
      <add name="OutputCache" type="Microsoft.AspNet.OutputCache.OutputCacheModuleAsync, Microsoft.AspNet.OutputCache.OutputCacheModuleAsync" preCondition="integratedMode" />
</modules>

We expect to release these packages next week.  You will be able to install the providers for ASP.NET 4.7 from NuGet:

  • Install-Package Microsoft.AspNet.OutputCache.OutputCacheModuleAsync
  • Install-Package Microsoft.AspNet.OutputCache.SQLAsyncOutputCacheProvider
  • Install-Package Microsoft.AspNet.SessionState.SessionStateModule
  • Install-Package Microsoft.AspNet.SessionState.SqlSessionStateProviderAsync

We’ll post an update when they are available.

WCF Connected Services and Containers

The WCF team has issued an update to the WCF Connected Service extension for Visual Studio 2017 and .NET Core.  This extension provides the same “Add Connected Service” feature that you’re familiar with in .NET Framework.  The tool makes it much simpler to configure .NET Core 1.x projects that need to connect to WCF endpoints.  A future update will enable connectivity from .NET Core 2.0 projects. Install the extension for Visual Studio 2017 from the Visual Studio Marketplace.

Connected WCF Service Extension

Last week we quietly published a WCF Docker image to assist in shifting HTTP services to Docker Windows containers.  This initial container supports HTTP services running on .NET 4.6.1 in self-hosted or IIS-hosted models and does not have tooling available to support it just yet.  You can migrate your service to a container by adding a Dockerfile to a project with a self-hosted or IIS-hosted service that contains the following configuration:

FROM microsoft/wcf

RUN mkdir C:\WcfService

RUN powershell -NoProfile -Command \
    Import-module IISAdministration; \
    New-IISSite -Name "WcfService" -PhysicalPath C:\WcfService -BindingInformation "*:83:"

EXPOSE 83

ADD content/ /WcfService

More details and samples with this image can be found on the WCF docker image repository.  Details are also available if you would like to run the WCF client for .NET Core in a container.

Are you interested in working with other endpoints or security with WCF services in containers?  Let us know what features you want to see in our next set of updates for WCF container images in the comments area below.

Preview 1 Issues

This preview version of ASP.NET Core 2.0 ships with support for the .NET Core 2.0 SDK only.  Our goal is to ship ASP.NET Core 2.0 on .NET Standard 2.0 so applications can run on .NET Core, Mono and .NET Framework.  As the team was working through the last of their issues before Build, it was uncovered that the preview of ASP.NET Core 2.0 utilized APIs that were outside of .NET Standard 2.0, preventing it from running on .NET Framework. Because of this, we limited Preview 1 to .NET Core only so that it would not break developers upgrading an ASP.NET Core 1.x application running on .NET Framework to the ASP.NET Core 2 preview.

Summary

The .NET teams have been very busy and brought a bunch of updates to Build for all of our web frameworks.  After the conference completes, we’ll publish another post with links to videos from our sessions about all of these features and samples.  You can watch some of our sessions online at Channel 9, and you can download the ASP.NET Core 2 Preview 1 release from http://dot.net


Introducing ASP.NET Core 2.0 Preview 2


At Build 2017, we released an initial preview version of ASP.NET Core 2.0.  Over the last two months we have incorporated your feedback and added a number of new features.  We now have a Preview 2 version of the ASP.NET Core 2.0 framework and Visual Studio tools for you to try.  In this post, we will review some of the new features in this preview release.

Update Visual Studio and install ASP.NET Core 2.0 Preview 2

If you don’t already have the latest Visual Studio 2017 Preview version installed on your Windows system, download the latest from https://www.visualstudio.com/vs/preview/

You can update an existing Visual Studio 2017 Preview installation using the Microsoft Visual Studio Installer application available on your start menu.  Choose to update Visual Studio 2017 Preview and the latest Visual Studio 2017 Preview 3 patch (15.3 Preview 3) will be downloaded and applied to your installation. For Mac users, you should install Visual Studio for Mac and update to the latest preview from the Beta channel.

Next, download the latest .NET Core 2.0 SDK and install it.  This will give you an updated version of the .NET Core command-line tools and runtime.  You can verify that you have the correct version installed by executing the following at the command-line:

dotnet --version

You should see the version “2.0.0-preview2-006497” reported, the current version of the 2.0 Preview 2 SDK.

SPA Templates for Everyone!

When you start the updated Visual Studio 2017 Preview and create a new ASP.NET Core website with .NET Core, you will notice that the ASP.NET template chooser for ASP.NET Core 2.0 shows some extra templates:

New ASP.NET Core Templates

The ASP.NET Core SPA templates for Angular and React are now available from Visual Studio.  They’re also available on the command-line as part of the standard installation of the .NET Core SDK.  Of particular note, the Angular template has been updated to Angular 4.  For more information about how to get started using the contents of the ASP.NET Core SPA templates, check the article from Steve Sanderson when the templates were initially made available.

 

ASP.NET Core 2 and .NET Framework

You can now choose to build your ASP.NET Core 2.0 applications with the .NET Framework by choosing the ASP.NET Core with .NET Framework template option in Visual Studio 2017.

Start a New ASP.NET Core Project with .NET Framework

Kestrel Improvements

We’ve added a number of server constraint configuration options to the Kestrel server through the KestrelServerOptions class’s new Limits property.  You can now add limits for the following:

  • Maximum Client Connections
  • Maximum Request Body Size
  • Minimum Request Body Data Rate

Maximum client connections

The maximum number of concurrent open HTTP/S connections can be set for the entire application with the following code:

.UseKestrel(options =>
{
    options.Limits.MaxConcurrentConnections = 100;
    options.Limits.MaxConcurrentUpgradedConnections = 100;
})

Note how there are two limits. Once a connection is upgraded from HTTP to another protocol (e.g. on a WebSockets request), it’s not counted against the limit anymore since upgraded connections have their own limit.

Maximum request body size

To configure the default constraint for the entire application:

.UseKestrel(options =>
{
    options.Limits.MaxRequestBodySize = 10 * 1024;
})

This will affect every request, unless it’s overridden on a specific request:

app.Run(async context =>
{
    context.Features.Get<IHttpMaxRequestBodySizeFeature>().MaxRequestBodySize = 10 * 1024;
});

 

You can only configure the limit on a request if the application hasn’t started reading yet, otherwise an exception is thrown. There’s an IsReadOnly property in the feature that tells you if the request body is in read-only state, meaning it’s too late to configure the limit.

Minimum request body data rate

To configure a default minimum request rate:

.UseKestrel(options =>
{
    options.Limits.RequestBodyMinimumDataRate =
        new MinimumDataRate(rate: 100, gracePeriod: TimeSpan.FromSeconds(10));
})

To configure per-request:

app.Run(async context =>
{
    context.Features.Get<IHttpRequestBodyMinimumDataRateFeature>().MinimumDataRate =
        new MinimumDataRate(rate: 100, gracePeriod: TimeSpan.FromSeconds(10));
});

The way the rate works is as follows: Kestrel checks every second whether data is coming in at the specified rate in bytes/second. If the rate drops below the minimum, the connection is timed out. The grace period is the amount of time that Kestrel gives the client to get its send rate up to the minimum; the rate is not checked during that time. This avoids dropping connections that are initially sending data at a slow rate due to TCP slow start.

Razor Support for C# 7.1

The Razor engine has been updated to work with the new Roslyn compiler, and that includes support for C# 7.1 features like Default Expressions, Inferred Tuple Names, and Pattern-Matching with Generics.  To use C# 7.1 features in your project, add the following property to your project file and then reload the solution:

<LangVersion>latest</LangVersion>

C# 7.1 is itself in a preview state, and you can review the language specification for these features on their GitHub repository.

Enhanced HTTP Header Support for Range, ETags, and Last-Modified

When using MVC to transmit a FileStreamResult or a FileContentResult, you now have the option to set an ETag or a LastModified date on the content you wish to transmit.  You can set these values on the returned content with code similar to the following:

var data = Encoding.UTF8.GetBytes("This is a sample text from a binary array");
var entityTag = new EntityTagHeaderValue("\"MyCalculatedEtagValue\"");
return File(data, "text/plain", "downloadName.txt", lastModified: DateTime.UtcNow.AddSeconds(-5), entityTag: entityTag);

 

The file returned to your visitors will be decorated with the appropriate HTTP headers for the ETag and Last Modified values.

If an application visitor requests content with a Range request header, ASP.NET will recognize and handle that header. If the requested content can be partially delivered, ASP.NET will appropriately skip and return just the requested set of bytes.  You do not need to write any special handlers into your methods to adapt or handle this feature; it is handled by the framework for you.

New Page Filters for Razor Pages

Page filters (IPageFilter, IAsyncPageFilter) allow you to run code before and after a page handler is executed, much in the same way that action filters let you run code before and after an action method is executed. Page filters can also influence which page handler gets executed or run initialization code before model binding occurs. In Preview 2 you can add a page filter globally or using an app model convention. For Preview 2 you can’t apply filters using attributes, but we expect this support to come later.
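As a rough sketch of the global-registration approach (the filter class name is illustrative, not from the post; the members follow the IPageFilter contract):

using Microsoft.AspNetCore.Mvc.Filters;

// Runs before and after every Razor Page handler (name is illustrative).
public class LoggingPageFilter : IPageFilter
{
    public void OnPageHandlerSelected(PageHandlerSelectedContext context)
    {
        // A handler method has been chosen; model binding has not run yet.
    }

    public void OnPageHandlerExecuting(PageHandlerExecutingContext context)
    {
        // Runs after model binding, just before the handler executes.
    }

    public void OnPageHandlerExecuted(PageHandlerExecutedContext context)
    {
        // Runs after the handler has executed.
    }
}

// Global registration in Startup.ConfigureServices:
// services.AddMvc(options => options.Filters.Add(new LoggingPageFilter()));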

Azure App Service Support

The Preview 2 version of ASP.NET Core 2.0 can be deployed to Azure App Service with no changes needed.  Support is being rolled out across Azure data centers today, with completion scheduled for June 30th. You can track progress of the rollout on this Azure App Service Announcements issue.

Postponed features

Some features available in Preview 1 have been pulled out of the ASP.NET Core 2.0 release to give them more time to bake. We still plan to do these features, but for now they have been removed from Preview 2. These features are:

  • ASP.NET Core Identity as a Service, including the support for issuing identity and access tokens using OpenID Connect and OAuth 2.0
  • Default configuration schema for configuring HTTPS, certificates and authentication (you can still configure logging by default).

Summary

This preview release delivers some of the promised features of the ASP.NET Core 2.0 framework, and we hope that you try them out.  You can find a complete set of release notes in the Announcements repository on GitHub with links to the feature issues and pull-requests that completed those features.  What do you think of the update to ASP.NET Core 2.0?  Let us know in the comments below.


Hash Passwords with ASP.NET Membership Providers


Are you using the legacy ASP.NET membership providers with your application?  When you look in web.config, is there a membership configuration within the system.web element?  The membership provider has been available since ASP.NET 2.0, and has been superseded by the Identity provider, which offers a more secure authentication and authorization facility for your application.

Best practices in security today dictate that you should not be storing passwords in cleartext or in an encrypted format.  These values can be read or decrypted, and you will feel shame if your password list is published somewhere by a nefarious party.

Starting with ASP.NET 4.6.2, we have updated the MembershipProvider base class to check the PasswordFormat property when it is read.  If your application is configured with a setting that is not Hashed, we now write a warning entry to the Windows Event Log that encourages you to choose the more secure Hashed setting for your Membership configuration.

Event Log entry recommending hashing passwords


A hashed configuration will use the hash algorithm defined in the machineKey validation attribute.  By default, this value is set to “HMACSHA256”.  This attribute can be configured to hash with a number of different algorithms, and we no longer recommend using MD5 or SHA1 hashing.
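For reference, a minimal sketch of a hashed membership configuration in web.config might look like the following (the provider name and connection string name are placeholders):

<system.web>
  <machineKey validation="HMACSHA256" />
  <membership defaultProvider="SqlProvider">
    <providers>
      <clear />
      <add name="SqlProvider"
           type="System.Web.Security.SqlMembershipProvider"
           connectionStringName="MembershipDb"
           passwordFormat="Hashed" />
    </providers>
  </membership>
</system.web>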

Recommended Solutions

If you want to update an existing application to use Hashed passwords with Membership, we recommend that you force every user to reset their password at the same time you change the passwordFormat setting in web.config.  To force this reset, follow these steps:

  1. Ensure that all users have an email address configured in your membership repository.
  2. Create a password-change page if you don’t have one already and link it to your user login page
  3. Notify all the application’s users that they will be forced to reset their password on a scheduled date
  4. On the scheduled date of your password reset, change the passwordFormat setting in web.config and update your membership repository to clear out all passwords stored.

In a default SQL membership repository, you could execute the following statement to clear all passwords:

UPDATE AspNetUsers SET PasswordHash='';

Ideally, we recommend that you update your application to use the improved ASP.NET Identity provider.  The newer provider enables several scenarios for integration with third-party authentication providers, two-factor authentication, and external notification systems like text messaging and email.  You can learn more about the Identity provider on our Identity announcement blog post.

Summary

We continue to support the membership providers for ASP.NET that were introduced in ASP.NET 2.0.  It is in your best interest to ensure that you are using them in the most secure configuration available.  Please take a few minutes and review your ASP.NET application’s configuration and determine if you should apply any updates.

WCF Web Service Reference – Metadata Exchange Endpoint Authentication


With the recent update to the WCF Service Reference tool in the VS Extensions Gallery, support has been added for downloading metadata for a web service where the metadata exchange (MEX) endpoint has been secured with HTTP authentication.

The purpose of MEX endpoints is to allow clients to discover the service capabilities, including security aspects of the service, and usually this endpoint can be accessed by an anonymous request. This is not a problem in general as the actual service resources can be exposed on secured endpoints. Still, there are cases in which the metadata might be considered sensitive information and so the MEX endpoint must also be secured, allowing only authorized clients to discover the service capabilities.

We need to differentiate between these two levels of authentication; the MEX authentication is usually processed by the server’s pipeline (IIS) while other service requests are authenticated by the web service host’s pipeline (WCF). If authentication is enabled at both levels, the same type of authentication must be used.

In this article, we will demonstrate the new HTTP authentication feature in the WCF Web Service Reference tool and how it is related to the web service authentication feature in WCF.

To illustrate, let’s step through an example: I created a simple WCF Service Application (DemoWebService) using Visual Studio 2017.  I then deployed the service to my local IIS and configured it with Digest authentication using the IIS management console, as illustrated in the picture below.

In order for my service to work with this server configuration, it must be configured with transport security, so I edited the service configuration as follows:

Notice in particular the configuration settings for the binding in lines 20-26 in the web.config file above. By default, VS configures the service to expose metadata over HTTP, line 32.

Now I will add a service reference to a .NET Core console application (NetCoreDemoServiceClient) using the WCF Web Service Reference tool.

The user credentials dialog box is presented. After I provide my credentials I can successfully add a reference to my client application.

The generated code configures the endpoint binding security and the address using the SSL URL scheme. A snippet of the generated code is presented below, observe the binding security configuration at lines 97-98 and the default https address at line 108.

However, in order to access the service endpoint operations, we still need to supply the user credentials. That’s where the partial method at line 78 above comes in handy.
The partial configuration method is implemented by the user; the generated code should never be edited, as it may be overwritten later. The interesting code is shown in lines 26-27 below, where the client credentials are provided. The mechanism for obtaining the user credentials is out of the scope of this article.

Given that the binding has been configured with transport credentials, SSL authentication will occur for the server as well; in the code above, server authentication validation is provided at line 30.
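The original listings are shown as images in the post; as a rough sketch of the ideas described above (the client class name and credential values are placeholders, and the exact generated code may differ), the partial configuration method could look like this:

using System.Net;
using System.ServiceModel.Description;
using System.ServiceModel.Security;

public partial class DemoWebServiceClient
{
    // Invoked by the generated client; supplies Digest credentials for the
    // transport-secured endpoint and configures server certificate validation.
    static partial void ConfigureEndpoint(ServiceEndpoint serviceEndpoint, ClientCredentials clientCredentials)
    {
        // How the credentials are obtained is out of scope here.
        clientCredentials.HttpDigest.ClientCredential =
            new NetworkCredential("myUser", "myPassword", "myDomain");

        // Server (SSL) authentication: validate the service certificate chain.
        clientCredentials.ServiceCertificate.SslCertificateAuthentication =
            new X509ServiceCertificateAuthentication
            {
                CertificateValidationMode = X509CertificateValidationMode.ChainTrust
            };
    }
}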


Now, executing the code in lines 11-12 above will successfully get the service resource requested by the client.
Install the WCF Service Reference update today and let us know what you think of the new MEX Endpoint Authentication feature and any other functionality. Instructions can be found in the feedback and questions section of the download page.
Enjoy!

Development time IIS support for ASP.NET Core Applications


With a recent update to Visual Studio 2017, we have added support for debugging ASP.NET Core applications against IIS. This blog post will walk you through enabling this feature and setting up your project to use this feature.

Getting Started

To get started:

  • You need to install Visual Studio 2017 (version 15.3) Preview (it will not work with any earlier version of Visual Studio)
  • You must have the ASP.NET and web development workload OR the .NET Core cross-platform development workload installed

Enable IIS

Before you can enable Development time IIS support in Visual Studio, you will need to enable IIS. You can do this by selecting the Internet Information Services checkbox in the Turn Windows features on or off dialog.

If your IIS installation requires a reboot, make sure to complete it before proceeding to the next step.

Development time IIS support

Once you’ve installed IIS, you can launch the Visual Studio installer to modify your existing Visual Studio installation. In the installer select the Development time IIS support component which is listed as optional component under the ASP.NET and web development workload. This will install the ASP.NET Core Module which is a native IIS module required to run ASP.NET Core applications on IIS.

Adding support to an existing project

You can now create a new launch profile to add Development time IIS support. Make sure to select IIS from the Launch dropdown in the Debug tab of the project properties of your existing ASP.NET Core application.

Alternatively, you can manually add a launch profile to your launchSettings.json file.

{
    "iisSettings": {
        "windowsAuthentication": false,
        "anonymousAuthentication": true,
        "iis": {
            "applicationUrl": "http://localhost/WebApplication2",
            "sslPort": 0
        }
    },
    "profiles": {
        "IIS": {
            "commandName": "IIS",
            "launchBrowser": "true",
            "launchUrl": "http://localhost/WebApplication2",
            "environmentVariables": {
                "ASPNETCORE_ENVIRONMENT": "Development"
            }
        }
    }
}

Congratulations! At this point, your project is all set up for development time IIS support. You may be prompted to restart Visual Studio if you weren’t running it as an administrator.

Conclusion

We look forward to you trying out this feature; let us know how it works for you. You can do that in the comments below, or via Twitter @AndrewBrianHall and @sshirhatti.

Announcing ASP.NET Core 2.0


The ASP.NET team is proud to announce general availability of ASP.NET Core 2.0.  This release features compatibility with .NET Core 2.0, tooling support in Visual Studio 2017 version 15.3, and the new Razor Pages user-interface design paradigm.  For a full list of updates, you can read the release notes.  The latest SDK and tools can be downloaded from https://dot.net/core. Read the .NET Core 2.0 release announcement for more information and watch the launch video:

 

With the ASP.NET Core 2.0 release we’ve added many new features to make building and monitoring web apps easier and we’ve worked hard to improve performance even more.

Updating a Project to ASP.NET Core 2.0

ASP.NET Core 2.0 runs on both .NET Framework 4.6.1 and .NET Core 2.0, so you will need to update your target framework in your project to netcoreapp2.0 if you were previously targeting a 1.x version of .NET Core.


Figure 1 – Setting Target Framework in Visual Studio 2017

Next, we recommend you reference the new Microsoft.AspNetCore.All metapackage instead of the collection of individual Microsoft.AspNetCore.* packages that you previously used.  This new metapackage contains references to all of the AspNetCore packages and maintains a complete line-up of compatible packages.  You can still include explicit references to specific Microsoft.AspNetCore.* package versions if you need one that is outside of the lineup, but our goal is to make this as simple a reference as possible.

What happens at publication time?  We know that you don’t want to publish the entire AspNetCore framework to your target environments, so the publish task now distributes only those libraries that you reference in your code.  This tree-pruning step should make your publish process much smoother and make your web applications easier to distribute.

More information about the features and changes you will need to address when migrating from ASP.NET Core 1.x to 2.0 can be found in our documentation.

Introducing Razor Pages

With this release of ASP.NET Core, we are introducing a new coding paradigm that makes writing page-focused scenarios easier and simpler than our current Model-View-Controller architecture.  Razor Pages are a page-first structure that allow you to focus on the user-interface and simplify the server-side experience by writing PageModel objects.

If you are familiar with how to configure your ASP.NET Core Startup class for MVC, then you already have the following lines in your Startup class:
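A minimal sketch of that Startup code (standard MVC registration; nothing beyond the default template is assumed):

using Microsoft.AspNetCore.Builder;
using Microsoft.Extensions.DependencyInjection;

public class Startup
{
    public void ConfigureServices(IServiceCollection services)
    {
        // Registers the MVC services, which also enables Razor Pages.
        services.AddMvc();
    }

    public void Configure(IApplicationBuilder app)
    {
        // Adds MVC (and Razor Pages) to the request pipeline.
        app.UseMvc();
    }
}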

Surprise!  The AddMvc and UseMvc configuration calls in your Startup class also activate the Razor Pages feature.  You can start writing a Razor Page by placing a new cshtml file called Now.cshtml in the Pages/ top-level folder of your application.  Let’s look at a simple page that shows the current time:
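A minimal Pages/Now.cshtml along those lines (a sketch; the markup is illustrative rather than the post’s exact listing):

@page

<html>
<body>
    <h1>Hello from a Razor Page</h1>
    <h2>The time on the server is @DateTime.Now</h2>
</body>
</html>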

This looks like a standard MVC view written in Razor, but it also has the @page directive at the top to indicate that this is a stand-alone Razor Page built with that paradigm.  HtmlHelpers, TagHelpers, and other .NET code are available to us throughout the page.  We can add methods just as we could in Razor views, by adding a block-level element called @functions and writing methods into that element:
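For example, a sketch of the page with a helper method defined in an @functions block (the method name is illustrative):

@page

@functions {
    public string GetFormattedTime()
    {
        return DateTime.Now.ToString("HH:mm:ss");
    }
}

<h2>The time on the server is @GetFormattedTime()</h2>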

We can build more complex structures by taking advantage of the new PageModel object.  The PageModel is an MVVM architectural concept that allows you to execute methods and bind properties to the Page content that is being rendered.  We can enhance our sample by creating a Now.cshtml.cs C# class file in the Pages folder with this content:
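A sketch of that page model (the exact original listing may differ; here the page’s last write time is read through the hosting environment):

using System;
using System.IO;
using Microsoft.AspNetCore.Hosting;
using Microsoft.AspNetCore.Mvc.RazorPages;

public class NowModel : PageModel
{
    private readonly IHostingEnvironment _env;

    public NowModel(IHostingEnvironment env)
    {
        _env = env;
    }

    public DateTime LastModified { get; set; }

    // Handles HTTP GET requests for the Now page.
    public void OnGet()
    {
        LastModified = File.GetLastWriteTime(
            Path.Combine(_env.ContentRootPath, "Pages", "Now.cshtml"));
    }
}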

With this class that inherits from PageModel, we can now do more complex interactions and build out a class that can be unit tested.  In this case, we are simply loading the last modified date of the Now page and assigning it to the LastModified property.  Also note the OnGet method, which indicates that this PageModel handles the HTTP GET verb.  We can update our Razor Page with the following syntax to start using the PageModel and output the last modified date:
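And a sketch of the updated page that binds to the model:

@page
@model NowModel

<h2>The time on the server is @DateTime.Now</h2>
<p>This page was last modified on @Model.LastModified</p>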

For more information, check out the ASP.NET Core documentation on getting started with Razor Pages.

Updated Templates and SPA Templates

The templates that ship with ASP.NET Core have been enhanced to include not only a web application that is built with the MVC pattern, but also a Razor Pages web application template and a series of templates that enable you to build single-page applications (SPA) for the browser.  These SPA templates use the JavaScript Services functionality to embed NodeJS within ASP.NET Core on the server, and compile the JavaScript applications server-side as part of the .NET build process.


Figure 2 – New ASP.NET Core Templates in Visual Studio 2017

These same templates are also available out of the box at the command-line when you type dotnet new:

Figure 3 – Templates available with the dotnet new command

DbContext Pooling with Entity Framework Core 2.0

Many ASP.NET Core applications can now obtain a performance boost by configuring the service registration of their DbContext types to use a pool of pre-created instances, avoiding the cost of creating a new instance for every request.  Try adding the following code to your Startup/ConfigureServices to enable DbContext pooling:
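A sketch of that registration (BloggingContext and the connection string name are placeholders; Configuration is the IConfiguration property from the default template, and UseSqlServer comes from the EF Core SQL Server provider):

public void ConfigureServices(IServiceCollection services)
{
    // Register the DbContext against a pool of reusable instances
    // instead of creating a new instance for every request.
    services.AddDbContextPool<BloggingContext>(options =>
        options.UseSqlServer(Configuration.GetConnectionString("DefaultConnection")));

    services.AddMvc();
}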

You can read more information about the updates included in Entity Framework Core 2.0 online in their announcement post.

Monitor and Profile with No Code Changes and Application Insights

ASP.NET Core 2.0 runs with no modifications necessary on Azure App Service and comes with integrations that provide performance profiling, error reporting, and diagnostics from Azure Application Insights. In Visual Studio 2017, right-click on your project and choose “Add – Application Insights Telemetry” to start collecting data from your application. You can then review the performance of your application including all log messages directly within Visual Studio 2017.


Figure 4 – Telemetry Reported in Visual Studio 2017

That’s nice when you’re developing your application, but what if your application is already in Azure?  We’ve got support in the Azure portal to start profiling and debugging, and it starts when you first publish your application and navigate to the cloud portal for your new app service.  Azure will prompt you with a new purple banner indicating that Application Insights for monitoring and profiling is available


Figure 5 – Banner in Azure Portal offering to assist in configuring Application Insights

When you click through that banner, you will create an Application Insights service for your application and attach those features without recompiling or redeploying.  Shortly afterwards, your new Application Insights service will start reporting data about the activity captured.


Figure 6 – Initial Application Insights Overview on Azure Portal

It even shows the number of failed requests and errors in the application.  If you click through that area, you’ll see details about the failed requests:


Figure 7 – Failed Requests Report on Azure Portal

There is a System.Exception thrown and identified at the bottom of the screen.  If we click through that reported exception, we can see more about each time that exception was thrown.  When you click through a single instance of those exceptions, you get some neat information about the exception, including the call stack:


Figure 8 – Exception Analysis in the Azure Portal

Snapshot debugging in Application Insights now supports ASP.NET Core 2.0.  If you configure snapshot debugging in your application, then the “Open Debug Snapshot” link at the top will appear and show the complete call stack and you can click through method calls in the stack to review the local variables:


Figure 9 – Stack Trace in Azure Portal


Figure 10 – Local values reported in Azure Portal

 

Nice!  We can go one step further and click that “Download Snapshot” button in the top corner to start a debug session in Visual Studio right at the point this exception was thrown.

What about the performance of these pages?  From the Application Insights blade, you can choose the Performance option on the left and dig deeper into the performance of each request to your application.


Figure 11 – Application Profiling in Azure Portal

There are more details available in our docs about performance profiling using Application Insights.

If you want the raw logs about your application, you can enable the Diagnostic Logs in App Service and set the diagnostic level to Warning or Error to see this exception get thrown.


Figure 12 – Configure Logging within the Azure Portal

Finally, choose the Log Stream on the left and you can watch the same console that you would have on your developer workstation.  The errors and log messages of the selected severity level or greater will appear as they are triggered in Azure.


Figure 13 – Live Console Logging inside the Azure Portal

All of the Application Insights features can be activated in ASP.NET Core without rebuilding and deploying.  Snapshot debugging requires an extra step and some code to be added, and the configuration is as simple as an extra NuGet package and a line in your Startup class.

You can learn more about Application Insights Telemetry in our online documentation.

Razor Support for C# 7.1

The Razor engine has been updated to work with the new Roslyn compiler, and that includes support for C# 7.1 features like Default Expressions, Inferred Tuple Names, and Pattern-Matching with Generics.  To use C# 7.1 features in your project, add the following property to your project file and then reload the solution:

<LangVersion>latest</LangVersion>

C# 7.1 is itself in a preview state, and you can review the language specification for these features on their GitHub repository.

Simplified Application Host Configuration

Host configuration has been dramatically simplified, with a new WebHost.CreateDefaultBuilder included in the default ASP.NET Core templates that automatically allocates a Kestrel server, integrates with IIS if it is available, and configures the standard console logging providers.  Your Program.cs file is simplified to only this content:
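That content is essentially the code generated by the 2.0 templates:

using Microsoft.AspNetCore;
using Microsoft.AspNetCore.Hosting;

public class Program
{
    public static void Main(string[] args)
    {
        BuildWebHost(args).Run();
    }

    // Sets up Kestrel, IIS integration, default configuration sources,
    // and the console/debug logging providers.
    public static IWebHost BuildWebHost(string[] args) =>
        WebHost.CreateDefaultBuilder(args)
            .UseStartup<Startup>()
            .Build();
}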

That reduces the possibility of accidentally breaking some of the standard configuration that most developers were not altering in their ASP.NET Core applications.  Why make you write the same boilerplate code over and over, when it could be simplified to 3 lines of code?

Summary

These updates in ASP.NET Core 2.0 provide new ways to write your applications and simplify some of the operational process of managing a production application.  A word of thanks to our .NET Community for their feedback, issues, and contributed source code on GitHub.  They’ve really been a huge help in delivering this new version of ASP.NET Core. We encourage you to download the latest .NET Core SDK from https://dot.net/core and start working with this new version of ASP.NET Core.  You can watch the launch video for .NET Core 2.0 and ASP.NET Core 2.0 at: https://aka.ms/dotnetcore2launchvideo

ASP.NET Core 2.0 Features


Last week we announced the release of ASP.NET Core 2.0 and described some top new features, including Razor Pages, new and updated templates, and Application Insights integration. In this blog post we are going to dig into more details of features in 2.0. This list is not exhaustive or in any particular order, but highlights a number of interesting and important features.

ASP.NET Core Metapackage/Runtime Store

ASP.NET Core has had a goal of allowing developers to only depend on what they need. While people generally appreciate the increased modularity, it came with an increased cost for everyone in the form of finding and depending on a relatively large set of smaller packages. In 2.0 we have improved this story with the introduction of the Microsoft.AspNetCore.All package.

Microsoft.AspNetCore.All is a metapackage, meaning it only has references to other packages, and it references:

  1. All ASP.NET Core packages.
  2. All Entity Framework Core packages.
  3. All dependencies used by ASP.NET Core and Entity Framework Core

The version number of the Microsoft.AspNetCore.All metapackage represents the ASP.NET Core version and Entity Framework Core version (aligned with the .NET Core version). This makes it easy to depend on a single package and version number that gives you all available APIs and lines up the entire stack.

Applications that use the Microsoft.AspNetCore.All metapackage automatically take advantage of the .NET Core Runtime Package Store. The runtime store contains all the runtime assets needed to run ASP.NET Core 2.x applications on .NET Core. When you use the Microsoft.AspNetCore.All metapackage, no assets from the referenced ASP.NET Core packages are deployed with the application — the .NET Core Runtime Store already contains these assets. The assets in the runtime store are precompiled to improve application startup time. This means that by default, if you are using the Microsoft.AspNetCore.All metapackage, your published app size will be smaller and your apps will start up faster.

You can read more about the .NET Core Runtime Package Store here: https://docs.microsoft.com/en-us/dotnet/core/deploying/runtime-store

WebHost builder APIs

The WebHost builder APIs are static methods in the Microsoft.AspNetCore package that provide several ways of creating and starting a WebHost with a default configuration. These methods reduce the common code that the majority of ASP.NET Core applications are going to need. For example, you can use the default web host builder like this:
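That default builder usage, as generated by the 2.0 MVC template, is essentially:

using Microsoft.AspNetCore;
using Microsoft.AspNetCore.Hosting;

public class Program
{
    public static void Main(string[] args)
    {
        BuildWebHost(args).Run();
    }

    public static IWebHost BuildWebHost(string[] args) =>
        WebHost.CreateDefaultBuilder(args)
            .UseStartup<Startup>()
            .Build();
}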

The above code is taken from the MVC template in Visual Studio, and shows using the CreateDefaultBuilder method to construct a WebHost. You can see the configuration that CreateDefaultBuilder uses here.

The Program.BuildWebHost method is provided by convention so that tools, like EF migrations, can inspect the WebHost for the app without starting it. You should not do anything in the BuildWebHost method other than building the WebHost. We used an expression-bodied method in the templates to help indicate that this method shouldn’t be used for anything other than creating an IWebHost.

In addition to CreateDefaultBuilder there are a number of Start methods that can be used to create and start a WebHost:
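For example, one of the Start overloads can stand up a server and answer requests with a single delegate (a sketch based on the 2.0 WebHost APIs as I understand them):

using Microsoft.AspNetCore;
using Microsoft.AspNetCore.Hosting;
using Microsoft.AspNetCore.Http;

public class Program
{
    public static void Main(string[] args)
    {
        // Starts Kestrel on the default URL and answers every request inline.
        using (var host = WebHost.Start(async context =>
            await context.Response.WriteAsync("Hello, World!")))
        {
            host.WaitForShutdown();
        }
    }
}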

These methods provide a way to run an application in a single line of code, but more importantly they provide a way to quickly get a web server responding to requests and executing your code without any ceremony or extra configuration.

Configuration as a Core service

As we watched developers build applications with 1.x, and listened to the feedback and feature requests being made, it became obvious that there are several services that should always be available in ASP.NET Core: namely IConfiguration, ILogger (and ILoggerFactory), and IHostingEnvironment. To that end, in 2.0 an IConfiguration object is now always added to the IoC container, which means that you can accept IConfiguration in your controller or other types activated with DI, just like you can with ILogger and IHostingEnvironment.

We also added a WebHostBuilderContext to WebHostBuilder. WebHostBuilderContext allows these services to be configured earlier, and be available in more places:
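For example, the ConfigureAppConfiguration overload that receives the builder context can vary configuration by environment (a sketch; the extra JSON file name is a placeholder):

public static IWebHost BuildWebHost(string[] args) =>
    WebHost.CreateDefaultBuilder(args)
        .ConfigureAppConfiguration((context, config) =>
        {
            // The WebHostBuilderContext exposes the hosting environment
            // while the host is still being built.
            if (context.HostingEnvironment.IsDevelopment())
            {
                config.AddJsonFile("appsettings.Local.json", optional: true);
            }
        })
        .UseStartup<Startup>()
        .Build();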

The WebHostBuilderContext is available while you are building the WebHost and gives access to an IHostingEnvironment and IConfiguration object.

Logging Changes

There are three main differences in the way that Logging can be used in 2.0:

  1. Providers can be registered and picked up from DI instead of being registered with ILoggerFactory, allowing them to consume other services easily.
  2. It is now idiomatic to configure Logging in your Program.cs. This is partly for the same reason that configuration is now a core service: you need Logging to be available everywhere, which means it should be configured early.
  3. The log filtering feature that was previously implemented by a wrapping LoggerFactory is now a feature of the default LoggerFactory, and is wired up to the registered configuration object. This means that all log messages can be run through filters, and they can all be configured via configuration.

In practice what these changes mean is that instead of accepting an ILoggerFactory in your Configure method in Startup.cs you will write code like the following:
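A sketch of that style of configuration on the host builder (the “Logging” section name matches the default template’s appsettings.json):

public static IWebHost BuildWebHost(string[] args) =>
    WebHost.CreateDefaultBuilder(args)
        .ConfigureLogging((context, logging) =>
        {
            // Filters come from configuration; providers are registered here.
            logging.AddConfiguration(context.Configuration.GetSection("Logging"));
            logging.AddConsole();
            logging.AddDebug();
        })
        .UseStartup<Startup>()
        .Build();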

Kestrel Hardening

The Kestrel web server has new features that make it more suitable as an Internet-facing server. We’ve added a number of server constraint configuration options in the KestrelServerOptions class’s new Limits property. You can now add limits for the following:

  • Maximum client connections
  • Maximum request body size
  • Minimum request body data rate

WebListener Rename

The packages Microsoft.AspNetCore.Server.WebListener and Microsoft.Net.Http.Server have been merged into a new package Microsoft.AspNetCore.Server.HttpSys. The namespaces have been updated to match.

For more information, see Introduction to Http.sys.

Automatic Page and View compilation on publish

Razor page and view compilation is enabled during publish by default, reducing the publish output size and application startup time. This means that your Razor pages and views will get published with your app as a compiled assembly instead of being published as .cshtml source files that get compiled at runtime. If you want to disable view pre-compilation, you can set a property in your csproj like this:
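The property that controls this is MvcRazorCompileOnPublish; setting it to false inside a PropertyGroup disables precompilation:

<PropertyGroup>
  <MvcRazorCompileOnPublish>false</MvcRazorCompileOnPublish>
</PropertyGroup>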

If you do this you will have .cshtml files in your published output again, as well as all the reference assemblies that might be required to compile those files at runtime.

Tag Helper components

Tag helper components are responsible for generating or modifying a specific piece of HTML. They are registered as services and optionally executed by TagHelpers. For example, the new HeadTagHelper and BodyTagHelper will run all the registered tag helper components, so they can modify the head or body of the page being rendered.
This makes a tag helper component useful for tasks such as dynamically adding a script to all your pages. This is how we enable the Application Insights support in ASP.NET Core: the UseApplicationInsights method registers a tag helper component that is executed by the HeadTagHelper to inject the Application Insights JavaScript, and because it is done via DI we can be sure that it is only registered once, avoiding duplicate JavaScript being added.
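As a rough sketch of the pattern (the component class and script path are hypothetical; the base class and registration follow the 2.0 tag helper component API):

using System;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Razor.TagHelpers;

// Appends a script tag to the <head> element of every rendered page.
public class AnalyticsScriptTagHelperComponent : TagHelperComponent
{
    public override int Order => 1;

    public override Task ProcessAsync(TagHelperContext context, TagHelperOutput output)
    {
        if (string.Equals(context.TagName, "head", StringComparison.OrdinalIgnoreCase))
        {
            output.PostContent.AppendHtml(
                "<script src=\"/js/site-analytics.js\"></script>");
        }
        return Task.CompletedTask;
    }
}

// Registration in Startup.ConfigureServices (executed by the HeadTagHelper):
// services.AddSingleton<ITagHelperComponent, AnalyticsScriptTagHelperComponent>();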

IHostedServices

If you register an IHostedService then ASP.NET Core will call the Start() and Stop() methods of your type during application start and stop respectively. Specifically, start is called after the server has started and IApplicationLifetime.ApplicationStarted is triggered.

Today we only use hosted services in SignalR, but we have discussed using them for things like:

  • An implementation of QueueBackgroundWorkItem that allows a task to be executed on a background thread
  • Processing messages from a message queue in the background of a web app while sharing common services such as ILogger

IHostingStartup

If you read the ASP.NET Core 2.0 announcement blog post then you will have seen a lot of talk about automatic light-up of Application Insights. When you publish to an Azure App Service and enable Application Insights you get log messages and other telemetry “for free”, meaning that you don’t have to add any code to your application to make it work. This automatic light-up feature is possible because of the IHostingStartup interface and the associated logic in the ASP.NET Core hosting layer.
The IHostingStartup interface defines a single method: void Configure(IWebHostBuilder builder);. This method is called while the WebHost is being built in the Program.cs of your ASP.NET Core application and allows code to set up anything that can be configured on a WebHostBuilder, including default services and loggers, which is how Application Insights works. ASP.NET Core will execute any IHostingStartup implementations in the application’s assembly as well as any that are listed in an environment variable called ASPNETCORE_HOSTINGSTARTUPASSEMBLIES.
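A sketch of that pattern (all names other than the interface and the HostingStartup attribute are placeholders):

using Microsoft.AspNetCore.Hosting;
using Microsoft.Extensions.DependencyInjection;

[assembly: HostingStartup(typeof(MyLightup.MyHostingStartup))]

namespace MyLightup
{
    public interface IStartupMarker { }
    public class StartupMarker : IStartupMarker { }

    // Runs while the WebHost is being built, before the app's own Startup code.
    public class MyHostingStartup : IHostingStartup
    {
        public void Configure(IWebHostBuilder builder)
        {
            builder.ConfigureServices(services =>
                services.AddSingleton<IStartupMarker, StartupMarker>());
        }
    }
}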

Improved TempData support

We’ve made a couple of improvements to the TempData feature in this release:

  • The cookie TempData provider is now the default TempData provider. This means you no longer need to set up session support to make use of the TempData features.
  • You can now attribute properties on your controllers and page models with the TempDataAttribute to indicate that the property should be backed by TempData. Set the property to add a value to TempData, or read from the property to read from TempData (see the sketch below).
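A sketch of the attributed-property usage (controller, action, and property names are illustrative):

using Microsoft.AspNetCore.Mvc;

public class SettingsController : Controller
{
    // Backed by TempData: the value survives exactly one subsequent request.
    [TempData]
    public string StatusMessage { get; set; }

    public IActionResult Save()
    {
        StatusMessage = "Settings saved.";   // writes to TempData
        return RedirectToAction(nameof(Index));
    }

    public IActionResult Index()
    {
        // StatusMessage is read back from TempData here and in the view.
        return View();
    }
}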

Media type suffixes

ASP.NET Core MVC now supports media type suffixes (e.g. application/foo+json). The JSON and XML formatters have been updated to support the json and xml suffixes respectively. For example, the JSON formatters now support parsing requests of the form Content-Type: application/*+json (with any parameters), and support formatting responses when your ObjectResult’s ContentTypes or Response.ContentType value are of the form application/*+json. We’ve extended the MediaType API to add SubtypeSuffix and SubtypeWithoutSuffix properties, and you can use wildcard patterns to indicate support for media type suffixes on your own formatters.

Summary

As you can see there is a lot of new stuff in ASP.NET Core 2.0. We hope you enjoy trying out these new features. Download .NET Core 2.0 today and let us know what you think!


Getting Started with Windows Containers


Containers provide a way of running an application in a controlled environment, isolated from other applications running on the machine, and from the underlying infrastructure. They are a cost-effective way of abstracting away the machine, ensuring that the application runs in the same conditions, from development, to test, to production.

Containers started on Linux as an OS-level virtualization method that creates the perception of a fully isolated and independent OS without requiring a full virtual machine. People have already been using Linux containers for a while. Docker greatly simplified containerization on Linux by offering a set of tools that make it easy to create, deploy, and run applications by using containers.

Windows Server implements the container technology, and the Docker APIs and toolset have been extended to support Windows containers, offering developers who use Docker on Linux the same experience on Windows Server.

There are two kinds of container images available: Windows Server Core and Nano Server. Nano Server is lightweight and only for x64 apps. The Windows Server Core image is larger and has more capabilities; it allows running “full” .NET Framework apps, such as an ASP.NET application, in containers. The higher compatibility makes it more suitable as a first step in transitioning to containers. ASP.NET Core apps on .NET Core can run on both Nano Server and Server Core, but are better suited to Nano Server because of its smaller size.

The following steps show how to get started on running ASP.NET Core and ASP.NET applications on Windows containers.

Prerequisites:

Install Docker

Install Docker for Windows – Stable channel

After installing Docker, you are required to log out of Windows and log back in; Docker may prompt you to do so. After logging in again, Docker starts automatically.

Switch Docker to use Windows Containers

By default, Docker is set to use Linux containers. Right-click on the docker tray icon and select “Switch to Windows Containers”.

Switch to Windows Containers


Running docker version will show Server OS/arch changed to Windows after docker switched to Windows containers.

Docker version before switching to Windows containers


Docker version after switching to Windows Containers


Set up an ASP.NET or ASP.NET Core application to run in containers

ASP.NET as well as ASP.NET Core applications can be run in containers. As mentioned above, there are two kinds of container images available for Windows: Nano Server and Server Core containers. ASP.NET Core apps are lightweight enough that they can run in Nano Server containers. ASP.NET apps need more capabilities and require Server Core containers.

The following walkthrough shows the steps needed to run an ASP.NET Core and an ASP.NET application in a Windows Container. To start, create an ASP.NET or ASP.NET Core Web application, or use an existing one.

Note: ASP.NET Core applications developed in Visual Studio can have Docker support automatically added using Visual Studio Tools for Docker. Until recently, Visual Studio Tools for Docker only supported Linux Docker scenarios, but in Visual Studio 2017 version 15.3, support has been added for containerizing ASP.NET Core apps as Windows Nano images. Docker support with Windows Nano Server can be added at project creation time by checking the “Enable Docker Support” checkbox and selecting Windows in the OS dropdown, or it can be added later on by right-clicking on the project in Solution Explorer, then Add -> Docker Support.

This tutorial assumes that “Docker Support” was not checked when the project was created in Visual Studio, so that the whole process of adding Docker support manually can be explained.

Publish the App

The first step is to put together in one folder all the application artifacts needed for the application to run in the container. This can be done with the publish command. For ASP.NET Core, run the following command from the project directory, which will publish the app for Release config in a folder; here it is named PublishOutput.

dotnet publish -c Release -o PublishOutput

dotnet Publish Output
Or use the Visual Studio UI to publish to a folder (for ASP.NET or ASP.NET Core)

Publish with Visual Studio


Create the Dockerfile

To build a container image, Docker requires a file with the name “Dockerfile” which contains all the commands, in order, to build a given image. Docker Hub contains base images for ASP.NET and ASP.NET Core.

Create a Dockerfile with the content shown below and place it in the project folder.

Dockerfile for ASP.NET Core application (use microsoft/aspnetcore base image)

FROM microsoft/aspnetcore:1.1
COPY ./PublishOutput/ ./
ENTRYPOINT ["dotnet", "myaspnetcorewebapp.dll"]

The instruction FROM microsoft/aspnetcore:1.1 gets the microsoft/aspnetcore image with tag 1.1 from Docker Hub. The tag is multi-arch, meaning that Docker figures out whether to use the Linux or Nano Server container image depending on what container mode is set. You can also use the specific tag of the image: FROM microsoft/aspnetcore:1.1.2-nanoserver
The next instruction copies the content of the PublishOutput folder into the destination container, and the last one uses the ENTRYPOINT instruction to configure the container to run an executable: the first argument to ENTRYPOINT is the executable name, and the second one is the argument passed to the executable.

If you publish to a different location, you need to edit the Dockerfile. To avoid this, you can copy the content of the current folder into the destination container, as in the Dockerfile below, and add the Dockerfile to your published output by specifying it in the publishOptions section of the project file:

FROM microsoft/aspnetcore:1.1
COPY . .
ENTRYPOINT ["dotnet", "myaspnetcorewebapp.dll"]

"publishOptions": {
   "include": [
     "dockerfile",

Dockerfile for ASP.NET application (use microsoft/aspnet base image)

FROM microsoft/aspnet
COPY ./PublishOutput/ /inetpub/wwwroot

An entry point does not need to be specified in the ASP.NET dockerfile, because the entry point is IIS, and this is configured in the microsoft/aspnet base image.

Build the image

Run docker build command in the project directory to create the container image for the ASP.NET Core app.

docker build -t myaspnetcoreapp .

Build Your Application Image


The argument -t is for tagging the image with a name. Running the docker build command pulls the ASP.NET Core base image from Docker Hub. Docker images consist of multiple layers; in the example above, there are ten layers that make up the ASP.NET Core image.

The docker build command for ASP.NET will take significantly longer compared with ASP.NET Core, because the images that need to be downloaded are larger. If the image was previously downloaded, docker will use the cached image.

After the container image is created, you can run docker images to display the list and size of the container images that exist on the machine. The following is the image for the ASP.NET (Full Framework):

ASP.NET Full Framework Image

And this is the image for the ASP.NET Core:

ASP.NET Core Image

Note in the images above the differences in size for the ASP.NET vs ASP.NET Core containers: the image size for the ASP.NET container is 11.6GB, and the image size for the ASP.NET Core container is about ten times smaller.

Run the container

The command docker run will run the application in the container:

docker run -d -p 80:80 myaspnetcoreapp

Docker Run Results

The -d argument tells Docker to start the image in detached mode (disconnected from the current shell).

The -p argument maps the container port to the host port.

The ASP.NET app does not need the -p argument when running because the microsoft/aspnet image has already configured the container to listen on port 80 and expose it.

The docker ps command shows the running containers:

Docker ps Results

To give the running container a name and avoid getting an automatically assigned one, use the --name argument with the run command:

docker run -d --name myapp myaspnetcoreapp

This name can be used instead of the container ID in most docker commands.
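For example, assuming the container was started with --name myapp as shown above, everyday commands can refer to it by name:

docker logs myapp
docker stop myapp
docker start myapp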

View the web page running in a browser

Due to a bug that affects the way Windows talks to containers via NAT (https://github.com/Microsoft/Virtualization-Documentation/issues/181#issuecomment-252671828) you cannot access the app by browsing to http://localhost. To work around this issue, the internal IP address of the container must be used.

The address of the running Windows container can be obtained with:

docker inspect -f "{{ .NetworkSettings.Networks.nat.IPAddress }}" <first chars of HASH>

Docker Inspect Results

Where HASH is the container ID; the name of the running container can be used instead.

Then type the URL returned into your browser: http://172.25.199.213:80 and you will see the site running.

Note that the limitation mentioned above only applies when accessing the container from localhost. Users from other machines, or other VM’s or containers running on the host, can access the container using the host’s IP and port.

Wrap up

The steps above show a simple approach for adding docker support for ASP.NET Core and ASP.NET applications.

For ASP.NET Core, in addition to the base images that help build the Docker container which runs the application, there are Docker images available that help compile/publish the application inside the container, so the compile/publish steps can be moved inside the Dockerfile. The Dockerfile can use several base images, each used in a different stage of execution. This is known as a “multi-stage” build. A multi-stage build for ASP.NET Core uses the base image microsoft/aspnetcore-build, as in this GitHub sample: https://github.com/dotnet/dotnet-docker-samples/blob/master/aspnetapp/Dockerfile
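As a rough sketch (not the linked sample itself; the image tags and file names here are assumptions), a multi-stage Dockerfile for an ASP.NET Core 1.1 app could look like this:

# Build stage: restore, build and publish inside the build container
FROM microsoft/aspnetcore-build:1.1 AS build
WORKDIR /app
COPY . .
RUN dotnet restore
RUN dotnet publish -c Release -o /app/publish

# Runtime stage: copy only the published output into the smaller runtime image
FROM microsoft/aspnetcore:1.1
WORKDIR /app
COPY --from=build /app/publish .
ENTRYPOINT ["dotnet", "myaspnetcorewebapp.dll"]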

Resources to help you get started with Windows containers:

Welcome to the New Blog Template for ASP.NET Developers


By Juliet Daniel, Lucas Isaza, and Uma Lakshminarayan

Have you always wanted to build a blog or other web application but haven’t had the time or educational resources to learn? With our blog template, available in our GitHub repo, you can create your web application fast and effortlessly, and even learn to master the new Razor Pages architecture along the way.

This blog post will explore how to use Razor Pages features and best practices and walk through the blog template code that we wrote.

This summer we had the awesome opportunity to be part of Microsoft’s Explore Program, a 12-week internship for rising college sophomores and juniors to learn more about software development and program management. As interns on the Visual Studio Web Tools team, our task was to create a web application template as a pilot for a set of templates showcasing new features and best practices in Razor Pages, the latest ASP.NET Core coding paradigm. We decided to build a blog template because of our familiarity with writing and reading blogs and because we believe that many developers would want a shortcut to build a personal or professional blog.

In our first week, the three of us all acted as Program Managers (PM) to prioritize features. Along with researching topics in web development, we had fun playing with different blog engines to help us brainstorm features for our project. After that, every three weeks we rotated between the PM and developer roles, with one of us acting as PM and the other two as developers. Working together, we’ve built a tool that we believe will inspire developers to build more web applications with Microsoft’s technologies and to contribute to the ASP.NET open source movement.

Introduction

This blog template is a tool to help developers quickly build a blog or similar web application. This blog template also serves as an example that shows how to build a web app from ASP.NET Core using the new Razor Pages architecture. Razor Pages effectively streamlines building a web application by associating HTML pages with C# code, rather than compartmentalizing a project into the Model-View-Controller pattern.

We believe that a blog template appeals to a broad audience of developers while also showcasing a variety of unique and handy features. The basic structure of the template is useful for developers interested in building an application beyond blogs, such as an ecommerce site, photo gallery, or personal website. All three alternatives are simply variations of a blog with authentication.

You can find our more detailed talk on the ASP.NET Community Standup about writing the blog template with code reviews and demos here. You can also access our live demo at https://venusblog.azurewebsites.net/ (Username: webinterns@microsoft.com, Password: Password.1).

Background

This template was designed to help Visual Studio users create new web applications fast and effortlessly. The various features built in the template make it a useful tool for developers:

  • Data is currently stored using XML files. This was an early design decision made to allow users on other blogs to move their data to this template smoothly.

    The usage of LINQ (Language Integrated Query) enables the developer to query items from the blog from a variety of sources such as databases, XML documents (currently in use), and in-memory objects without having to redesign or learn how elements are queried from a specific source.
  • The blog is built on Razor Pages from ASP.NET Core. The image below showcases the organization of the file structure that Razor Pages uses. Each view contains a corresponding Model in a C# file. Adding another Razor Page to your project is as simple as adding a new item to the Pages folder and choosing the Razor Page with model type.
  • The template includes a user authentication feature, implemented using the new ASP.NET Identity Library. This tool allows the owner of the blog to be the single user registered and in control of the blog. Identity also provided us with a tested and secure way to create and protect user profiles.
    We were able to use this library to implement login, registration, password recovery, and other user management features. To enable identity, we simply included it in the startup file and added the corresponding pages (with their models).
  • Customizing the theme is fast and flexible with the use of Bootstrap. Simply download a Bootstrap theme.min.css file and add it to the CSS folder in your project (wwwroot > css). You can find free or paid Bootstrap themes at websites such as bootswatch.com. You can delete our default theme file, journal-bootstrap.min.css, to remove the default theming. Run your project, and you’ll see that the style of your blog has changed instantly.
  • Entity Framework provides an environment that makes it easy to work with relational data. In our scenario, that data comes in the form of blog posts and comments for each post.

Using the Template

Creating an Instance of the Template (Your Own Blog)

There are two options for instantiating a template. You can use dotnet new, included with the dotnet CLI; however, the current version contains minor bugs that will be fixed soon. Alternatively, you can get the newest templating code with the following steps. On the templating repo’s GitHub page, click the green “Clone or download” button and copy the link in the dropdown that appears. Then open a command prompt and change directories to where you want to install the templating repo.

In the desired directory, enter the command:

git clone <link you copied earlier>

This will pull all the dotnet.templating code and put it in a directory named “templating”. Now change to the templating directory and switch branches to “rel/2.0.0-servicing” by running:

git checkout rel/2.0.0-servicing

Then run the command “setup”.

  • Note: If you get errors about not being able to run scripts, close your command window. Then open a powershell window as administrator and run the command “Set-ExecutionPolicy Unrestricted”. Close the powershell window, then open a new command prompt and go back to the templating directory and run setup again.

Once the setup runs correctly, you should be able to run the command “dotnet new3”. If you are just using the dotnet CLI, you can replace “dotnet new3” with “dotnet new” for the rest of the steps. Install your blog template with the command:

dotnet new3 -i [path to blog template source]

This path will be the root directory of your blog template repository.
Now you can create an instance of the template by running:

dotnet new3 blog -o [directory you want to create the instance in] -n [name for the instance]

For example:

dotnet new3 blog -o c:\temp\TestBlog\ -n “My Blog”

Reflection

We hope that our project encourages developers to build more web applications with Microsoft’s technologies and have fun doing so. Personally, we’ve learned a lot about web development and Razor Pages through developing this project. We’ve also developed useful skills to move forward in our careers. For example, we really enjoyed learning to brainstorm and prioritize features, which turned out to be a much more complicated process than any of us had expected. Sprint planning and time estimation also proved to be a tricky task. Sometimes it was hard to predict how much time it would take to implement certain features, but as we became more familiar with our project and our team’s engineering processes this became much easier.

Reaching out to the right people turned out to be a key ingredient to accelerating our development process and making sure we were building in the right direction. Once we began meeting with people outside of our assigned team, we realized almost immediately that it was a great way to get feedback on our project. We also began to look for the right people to ask our questions so the development of our project progressed even faster. Most importantly, we really appreciate how helpful and communicative our manager, Barry, and our mentors, Jimmy and Mads, were throughout the internship. They took time out of their busy schedules to help us and give us insightful career advice.

Juliet Daniel is a junior at Stanford studying Management Science & Engineering. In her free time, she enjoys biking, running, hiking, foodspotting, and playing music. She keeps a travel blog at juliets-journey.weebly.com.

Lucas Isaza is a junior at Stanford studying Economics and Applied Statistics. He enjoys playing basketball and lacrosse, exploring new restaurants in the area, and hanging out with friends.

Uma Lakshminarayan is a junior at UCLA studying Computer Science. She enjoys cooking and eating vegetarian foods, taking walks with friends, and discovering new music. You will usually find her singing or listening to music.

Announcing SignalR for ASP.NET Core 2.0


Today we are glad to announce an alpha release of SignalR for ASP.NET Core 2.0. This is the first official release of a new SignalR that is compatible with ASP.NET Core. It consists of a server component, a .NET client targeting .NET Standard 2.0 and a JavaScript/TypeScript client.

What’s New?

SignalR for ASP.NET Core is a rewrite of the original SignalR. We looked at common SignalR usage patterns and issues that users face today and decided that rewriting SignalR is the right choice. The new SignalR is simpler, more reliable, and easier to use. Despite these underlying changes, we’ve worked to ensure that the user-facing APIs are very similar to previous versions.

JavaScript/TypeScript Client

SignalR for ASP.NET Core has a brand-new JavaScript client. The new client is written in TypeScript and no longer depends on jQuery. The client can also be used from Node.js with a few additional dependencies.

The client is distributed as an npm module that contains the Node.js version of the client (usable via require), as well as a version for use in the browser which can be included using a <script> tag. TypeScript declarations for the client included in the module make it easy to consume the client from TypeScript applications.

The JavaScript client runs on the latest Chrome, Firefox, Edge, Safari and Opera browsers, as well as Internet Explorer versions 9, 10 and 11 (not all transports are compatible with every browser). Internet Explorer 8 and earlier are not supported.

Support for Binary Protocols

SignalR for ASP.NET Core offers two built-in hub protocols – a text protocol based on JSON and a binary protocol based on MessagePack. Messages using the MessagePack protocol are typically smaller than messages using the JSON protocol. For example, a hub method returning the integer value 1 produces a 43-byte message with the JSON-based protocol but only 16 bytes with MessagePack. (Note that the difference in size may vary depending on the message type, the contents of the message and the transport used – binary messages sent over the Server Sent Events transport are base64 encoded since Server Sent Events is a text transport.)

Support for Custom Protocols

The SignalR hub protocol is documented on GitHub and now has extension points that make it possible to plug in custom implementations.

Streaming

It is now possible to stream data from the server to the client. Unlike a regular Hub method invocation, streaming means the server is able to send results to the client before the invocation completes.

Using SignalR with Bare Websockets

The process of connecting to SignalR has been simplified to the point where, when using websockets, it is now possible to connect to the server with a single request, without using any SignalR client library.

Simplified Scale-Out Model

Unfortunately, when it comes to scaling out applications there is no “one size fits all” model – each application is different and has different requirements that need to be considered when scaling out. We have worked to improve and simplify the scale-out model and are providing a Redis-based scale-out component in this alpha. Support for other providers, such as Azure Service Bus, is being evaluated for the final release.

What’s Changed?

We added a number of new features to SignalR for ASP.NET Core but we also decided to remove support for some of the existing features or change how they work. One of the consequences of this is that SignalR for ASP.NET Core is not compatible with previous versions of SignalR. This means that you cannot use the old server with the new clients or the old clients with the new server. Below are the features which have been removed or changed in the new version of SignalR.

Simplified Connection Model

In the existing version of SignalR the client would try starting a connection to the server, and if it failed it would try using a different transport. The client would fail to start the connection only when it could not connect to the server with any of the available transports. This transport fallback behavior is no longer supported in the new SignalR.

Another piece of functionality that is no longer supported is automatic reconnects. Previously, SignalR would try to reconnect to the server if the connection was dropped. Now, if the client is disconnected the user must explicitly start a new connection if they want to reconnect. Note that this was effectively required before as well – the client would stop its reconnect attempts if it could not reconnect successfully within the reconnect timeout. One more reason to remove automatic reconnects was the very high cost of storing messages sent to clients. The server would by default remember the last 1000 messages sent to a client so that it could replay the messages the client missed while it was offline. Since each connection had its own buffer, the memory footprint of storing these messages was very high.

Sticky Sessions Are Now Required

Because of how scale-out worked in the previous versions of SignalR, clients could reconnect and/or send messages to any server in the farm. Due to changes to the scale-out model, as well as not supporting reconnects, this is no longer supported. Now, once the client connects to the server it needs to interact with this server for the duration of the connection.

Single Hub per Connection

The new version of SignalR does not support having more than one Hub per connection. This results in a simplified client API, and makes it easier to apply Authentication policies and other Middleware to Hub connections. In addition subscribing to hub methods before the connection starts is no longer required.

Other Changes

The ability to pass arbitrary state between clients and the Hub (a.k.a. HubState) has been removed, as has support for progress messages. We also do not currently provide a counterpart to the generated hub proxies.

Getting Started

Setting up SignalR is relatively easy. After you create an ASP.NET Core application you need to add a reference to the Microsoft.AspNetCore.SignalR package like this
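For example, from the command line (the alpha version string here is an assumption):

dotnet add package Microsoft.AspNetCore.SignalR --version 1.0.0-alpha1-final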

and a hub class:
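A minimal hub along these lines (the class and method names are illustrative, not the exact sample from the post) might look like:

using System.Threading.Tasks;
using Microsoft.AspNetCore.SignalR;

public class Chat : Hub
{
    // Broadcasts a "Send" invocation to every connected client
    public Task Send(string message)
    {
        return Clients.All.InvokeAsync("Send", message);
    }
}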

This hub contains a method which once invoked will invoke the Send method on each connected client.

After adding a Hub class you need to configure the server to pass requests sent to the chat endpoint to SignalR:
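A sketch of the server configuration, using the alpha-era AddSignalR/UseSignalR APIs (the hub and route names match the Chat hub sketch above):

using Microsoft.AspNetCore.Builder;
using Microsoft.Extensions.DependencyInjection;

public class Startup
{
    public void ConfigureServices(IServiceCollection services)
    {
        services.AddSignalR();
    }

    public void Configure(IApplicationBuilder app)
    {
        // Map requests sent to /chat to the Chat hub
        app.UseSignalR(routes =>
        {
            routes.MapHub<Chat>("chat");
        });
    }
}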

Once you set up the server you can invoke hub methods from the client and receive invocations from the server. To use the JavaScript client in a browser you need to install the signalr-client npm module first using the following command:
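For example (assuming the package id used at the time, @aspnet/signalr-client):

npm install @aspnet/signalr-client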

then copy the signalr-client.js file to your script folder and include it on your page using the <script> tag:
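For example, assuming the file was copied to a scripts folder:

<script src="scripts/signalr-client.js"></script>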

After you include the script you can start the connection and interact with the server like this:
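A rough sketch of the browser code, based on the alpha JavaScript client API (exact member names may differ slightly):

let connection = new signalR.HubConnection('/chat');

// Handle "Send" invocations coming from the server
connection.on('Send', message => {
    console.log('Received: ' + message);
});

// Start the connection, then invoke the hub's Send method
connection.start()
    .then(() => connection.invoke('Send', 'Hello from the browser'));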

To use the SignalR managed client you need to add a reference to the Microsoft.AspNetCore.SignalR.Client package:
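For example (again, the alpha version string is an assumption):

dotnet add package Microsoft.AspNetCore.SignalR.Client --version 1.0.0-alpha1-final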

Then you can invoke hub methods and receive invocations like this:
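A sketch of a small console client using the alpha HubConnectionBuilder API (the URL and names are illustrative; C# 7.1 async Main is used for brevity):

using System;
using System.Threading.Tasks;
using Microsoft.AspNetCore.SignalR.Client;

class Program
{
    static async Task Main()
    {
        var connection = new HubConnectionBuilder()
            .WithUrl("http://localhost:5000/chat")
            .Build();

        // Handle "Send" invocations from the server
        connection.On<string>("Send", message => Console.WriteLine($"Received: {message}"));

        await connection.StartAsync();
        await connection.InvokeAsync("Send", "Hello from the .NET client");

        Console.ReadLine();
        await connection.DisposeAsync();
    }
}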

If you want to take advantage of streaming you need to create a hub method that returns either a ReadableChannel<T> or an IObservable<T>. Here is an example of a hub method streaming stock prices to the client from the StockTicker sample we ported from the old SignalR:
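This is not the actual StockTicker sample, but a simplified illustration of an IObservable<T>-returning hub method (it assumes the System.Reactive package for Observable.Interval):

using System;
using System.Reactive.Linq;
using Microsoft.AspNetCore.SignalR;

public class StockTickerHub : Hub
{
    // Returning IObservable<T> (or ReadableChannel<T>) marks the method as streaming;
    // this sketch pushes a simulated price to the caller once per second.
    public IObservable<Stock> StreamStocks()
    {
        var random = new Random();
        return Observable
            .Interval(TimeSpan.FromSeconds(1))
            .Select(_ => new Stock { Symbol = "MSFT", Price = 80m + (decimal)random.NextDouble() });
    }
}

public class Stock
{
    public string Symbol { get; set; }
    public decimal Price { get; set; }
}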

The JavaScript code that invokes this hub method looks like this:
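A sketch of the client-side subscription, assuming the StreamStocks method name from the hub sketch above:

connection.stream('StreamStocks').subscribe({
    next: stock => displayStock(stock),          // called for each streamed item
    error: err => console.error(err),
    complete: () => console.log('stream completed')
});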

Each time the server sends a stream item the displayStock client function will be invoked.

Invoking a streaming hub method from a C# client and reading the items could look as follows:
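Only as a rough sketch – the alpha C# client exposed streaming through a channel-style API (renamed to StreamAsync in alpha2), so the exact method name and shape may differ:

var channel = connection.Stream<Stock>("StreamStocks");
while (await channel.WaitToReadAsync())
{
    while (channel.TryRead(out var stock))
    {
        Console.WriteLine($"{stock.Symbol}: {stock.Price}");
    }
}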

Migrating from existing SignalR

We will be releasing a migrating from existing SignalR guide in the coming weeks.

Known issues

This is an alpha release and there are a few issues we know about:

  • Connections using the Server Sent Event transport may be disconnected after two minutes of inactivity if the server is running behind IIS
  • The WebSockets transport will not work if the server hosting SignalR is running behind IIS on Windows 7 or Windows Server 2008 R2, due to limitations in IIS
  • ServerSentEvents transport in the C# client can hang if the client is being closed while the data from the server is still being received
  • Streaming invocations cannot be canceled by the client
  • Generating a production build of an application using TypeScript client in angular-cli fails due to UglifyJS not supporting ES6. This issue can be worked around as described in this comment.

Summary

The long awaited version of SignalR for ASP.NET Core just shipped. Try it out and let us know what you think! You can provide feedback or let us know about bugs/issues here.

Announcing SignalR for ASP.NET Core Alpha 2


A few weeks ago we released the alpha1 version of SignalR for ASP.NET Core 2.0. Today we are pleased to announce a release of the alpha2 version of SignalR for ASP.NET Core 2.0. This release contains a number of changes (including API changes) and improvements.

Notable Changes

  • The JSON hub protocol now uses camel casing by default when serializing and deserializing objects on the server and by the C# client
  • IObservable subscriptions for streaming methods are now automatically unsubscribed when the connection is closed
  • It is now possible to invoke client methods in a type safe manner when using HubContext (a community contribution from FTWinston – Thanks!)
  • A new HubConnectionContext.Abort() method allows terminating connections from the server side
  • Users can now control how their objects are serialized when using MessagePack hub protocol
  • Length prefixes used in binary protocols are now encoded using Varints which reduces the size of the message by up to 7 bytes

Release notes can be found on github.

API Changes

TypeScript/JavaScript client:

  • Event names were changed and now use lower case:
    • onDataReceived on IConnection was renamed to onreceive
    • onClosed on HubConnection and IConnection was renamed to onclose
  • It is now possible to register multiple handlers for the HubConnection onclose event by passing the handler as a parameter. The code used to subscribe to the closed event when using the alpha1 version of the client:
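(As a sketch of the alpha1 style – assigning a single handler to a property:)

connection.onClosed = e => console.log('connection closed');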

needs to be changed to:
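(In alpha2, the handler is registered by passing it as a parameter – again a sketch:)

connection.onclose(e => console.log('connection closed'));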

  • The HubConnection on() method now allows registering multiple callbacks for a client method invocation
  • A new off() method was added to HubConnection to enable removing callbacks registered with the on method

C# Client

  • The HubConnection.Stream() method was changed to be async and renamed to StreamAsync()
  • New overloads of WithJsonHubProtocol() and WithMessagePackProtocol() on HubConnectionBuilder that take protocol-specific settings were added

Server

The params keyword was removed from the IClientProxy.InvokeAsync() method and replaced by a set of extension methods

A word of thanks to everyone who has tried the new SignalR and provided feedback. Please keep it up! You can provide feedback or let us know about bugs/issues here.

For examples on using this, and future, versions you can look at the SignalR Samples repository on GitHub.

User accounts made easy with Azure


One of the most common requirements for a web application is to have users create accounts, for the purpose of access control and personalization. While ASP.NET templates have always made it easy to create an application that uses a database you control to register and track user accounts, that introduces other complications over the long term. As laws around user information get stricter and security becomes more important, maintaining a database of users and passwords comes with an increasing set of maintenance and regulatory challenges.

A few weeks ago I tried out the new Azure Active Directory B2C service, and was really impressed with how easy it was to use. It added user identity and access control to my app, while moving all the responsibility for signing users up, authenticating them, and maintaining the account database to Azure (and it’s free to develop with).

In this post I’ll briefly walk through how to get up and running with Azure B2C in a new ASP.NET Core app. It’s worth noting it works just as well with ASP.NET apps on the .NET Framework with slightly different steps (see walkthrough). I’ll then include some resources that will help you with more complex scenarios including authenticating against a backend Web API.

Step 1: Create the B2C Tenant in Azure

  • To get started, you’ll need an Azure account. If you don’t have one yet, create your free account now
  • Create an Azure AD B2C Directory
  • Create your policies (this is where you indicate what you need to know about the user)
    • Create a sign-up or sign-in policy
      • Choose all of the information you want to know about the user under “Sign-up attributes”
      • Select all the information you want passed to your application under “Application Claims” (note: the default template uses the “Display Name” attribute to address the user in the navigation bar when they are signed in, so you will want to include that)
    • Create a profile editing policy
    • Create a password reset policy
    • Note: After you create each policy, you’ll be taken back to the tab for that policy type which will show you the full name of the policy you just created, which will be of the form “B2C_1_<name_you_entered>”.  You’ll need these names below when you’re creating your project.
  • Register your application (follow the instructions for a Web App)
    • Note: You’ll get the “Reply URL” in the next step when you create the new project.

Step 2: Create the Project in Visual Studio

  • File -> New Project -> Visual C# -> ASP.NET Core Web Application
  • On the New ASP.NET dialog, click the “Change Authentication” button on the right side of the dialog
    • Choose “Individual User Accounts”
    • Change the dropdown in the top right to “Connect to an existing user store in the cloud”
    • Fill in the required information from the B2C Tenant you created in the Azure portal previously
    • Copy the “Reply URI” from the “Change Authentication” dialog and enter it into the application properties for the app you previously created in your B2C tenant in the Azure portal.
    • Click OK

Step 3: Try it out

Now run your application (ctrl+F5), and click “Sign in” in the top right:


You’ll be navigated to Azure’s B2C sign-in/sign-up page:


The first time, click “Sign up now” at the bottom to create your account. Once your account is created, you’ll be redirected back to your app and you’re now signed in. It’s as easy as that.


Additional Resources

The above walkthrough shows a quick overview of how to get started with Azure B2C and ASP.NET Core. If you are interested in exploring further or using Azure B2C in a different context, here are a few resources that you may find useful:

  • Create an ASP.NET (.NET Framework) app with B2C
  • ASP.NET Core GitHub sample: This sample demonstrates how to use a web front end to authenticate, and then obtain a token to authenticate against a backend Web API.
  • If you are looking to add support to an existing app, you may find it easiest to create a new project in Visual Studio and copy and paste the relevant code into your existing application. You can of course use code from the GitHub samples mentioned above as well

Conclusion

Hopefully you found this short overview of Azure B2C interesting. Authentication is often much more complex than the simple scenario we covered here, and there is no single “one size fits all”, so it should be pointed out that there are many alternative options, including third-party and open source options. As always, feel free to let me know what you think in the comments section below, or via twitter.

Sharing Configuration in ASP.NET Core SPA Scenarios


This is a guest post from Mike Rousos

ASP.NET Core 2.0 recently released and, with it, came some new templates, including new project templates for single-page applications (SPA) served from an ASP.NET Core backend. These templates make it easy to set up a web application with a rich JavaScript frontend and powerful ASP.NET Core backend. Even better, the templates enable server-side prerendering so the JavaScript front-end is already rendered and ready to display when users first arrive at your web app.

One challenge of the SPA scenario, though, is that there are two separate projects to manage, each with their own dependencies, configuration, etc. This post takes a look at how ASP.NET Core’s configuration system can be used to store configuration settings for both the backend ASP.NET Core app and a front-end JavaScript application together.

Getting Started

To get started, you’ll want to create a new ASP.NET Core Angular project – either by creating a new ASP.NET Core project in Visual Studio and selecting the Angular template, or using the .NET CLI command dotnet new angular.

New ASP.NET Core Angular Project

At this point, you should be able to restore client packages (npm install) and launch the application.

In this project template, the ASP.NET Core app’s configuration is loaded from default sources thanks to the WebHost.CreateDefaultBuilder call in Program.cs. The default configuration providers include:

  • appsettings.json
  • appsettings.{Environment}.json
  • User secrets (if in a development environment)
  • Environment variables
  • Command line arguments

You can see that appsettings.json already has some initial config values related to logging.

For the client-side application, there aren’t any configuration values setup initially. If we were using the Angular CLI to create and manage this application, it would provide environment-specific TypeScript files (environment.ts, environment.prod.ts, etc.) to provide settings specific to different environments. The Angular CLI would pick the right config file to use when building or serving the application, based on the environment specified. In our case, though, we’re not using the Angular CLI to build the client (we’re just using WebPack directly).

Instead of using client-side TypeScript files for configuration, it would be convenient to share portions of our server app’s configuration with the client app. That would enable us to use ASP.NET Core’s rich configuration system which can load from environment-specific config files, as well as from many other sources (environment variables, Azure Key Vault, etc.). We just need to make those config settings available to our client app.

Embedding Client Configuration

Since our goal is to store client and server settings together in the ASP.NET Core app, it’s helpful to define the shape of the client config settings by creating a class modeling the configuration data. This isn’t required (you could just send settings as raw json), but if the structure of your configuration isn’t frequently changing, it’s a little nicer to work with strongly typed objects in C# and TypeScript.

Here’s a simple class for storing sample client configuration data:
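For example (a sketch using the two settings referenced later in this post):

public class ClientConfiguration
{
    // Settings shared with the Angular client
    public string UserMessage { get; set; }
    public int MessageCount { get; set; }
}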

Next, we can use configuration Options to easily extract a ClientConfiguration object from the server application’s larger configuration.

Here are the calls to add to Startup.ConfigureServices to make a ClientConfiguration options object available in the web app’s dependency injection container:
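A sketch of the relevant ConfigureServices calls (assuming the ClientConfiguration class above):

public void ConfigureServices(IServiceCollection services)
{
    services.AddMvc();

    // Bind the ClientConfiguration options object to the "ClientConfiguration"
    // section of the app's configuration
    services.AddOptions();
    services.Configure<ClientConfiguration>(Configuration.GetSection("ClientConfiguration"));
}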

Notice that we’ve specified that the ClientConfiguration object comes from the “ClientConfiguration” section of the app’s configuration, so that’s where we need to add config values in appsettings.json (or via environment variables, etc.):
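For example, in appsettings.json (the values here are illustrative):

{
  "ClientConfiguration": {
    "UserMessage": "Hello from ASP.NET Core configuration!",
    "MessageCount": 3
  }
}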

If you want to set these sorts of hierarchical settings using environment variables, the variable name should include all levels of the setting’s hierarchy delimited by colons or double underscores. So, for example, the ClientConfiguration section’s UserMessage setting could be set from an environment variable by setting ClientConfiguration__UserMessage (or ClientConfiguration:UserMessage) equal to some value.

Creating a Client Configuration Endpoint

There are a number of ways that we can make configuration settings from our server application available to the client. One easy option is to create a web API that returns configuration settings.

To do that, let’s create a ClientConfiguration controller (which receives the ClientConfiguration options object as a constructor parameter):
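A sketch of the controller, receiving IOptions<ClientConfiguration> from dependency injection:

using Microsoft.AspNetCore.Mvc;
using Microsoft.Extensions.Options;

public class ClientConfigurationController : Controller
{
    private readonly ClientConfiguration _config;

    public ClientConfigurationController(IOptions<ClientConfiguration> config)
    {
        _config = config.Value;
    }
}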

Next, give the controller a single index action which, as you may have guessed, just returns the client configuration object:
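Added to the controller above, the action could be as simple as:

// GET /ClientConfiguration
public IActionResult Index() => Ok(_config);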

At this point, you can launch the application and confirm that navigating to /ClientConfiguration returns configuration settings extracted from those configured for the web app. Now we just have to set up the client app to use those settings.

Creating a Client-Side Model and Configuration Service

Since our client configuration is strongly typed, we can start implementing our client-side config retrieval by making a configuration model that matches the one we made on the server. Create a configuration.ts file like this:
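A sketch of configuration.ts mirroring the server-side class (the camelCase property names match ASP.NET Core’s default JSON serialization):

export class Configuration {
    userMessage: string;
    messageCount: number;
}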

Next, we’ll want to handle app config settings in a service. The service will use Angular’s built-in Http service to request the configuration object from our web API. Both the Http service and our application’s ‘BASE_URL’ (the web app’s root address, which we’ll call back to in order to reach the web API) can be injected into the configuration service’s constructor.

Then, we just create a loadConfiguration function to make the necessary GET request, deserialize into a Configuration object, and store the object in a local field. We convert the http request into a Promise (instead of leaving it as an Observable) so that it works with Angular’s APP_INITIALIZER function (more on this later!).

The finished configuration service should look something like this:
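A sketch of the service, assuming the template’s ‘BASE_URL’ provider and the Angular Http service (file locations and names are illustrative):

import { Injectable, Inject } from '@angular/core';
import { Http } from '@angular/http';
import 'rxjs/add/operator/toPromise';
import { Configuration } from './configuration';

@Injectable()
export class ConfigurationService {
    config: Configuration;

    constructor(private http: Http, @Inject('BASE_URL') private baseUrl: string) { }

    // Returns a Promise so that APP_INITIALIZER waits for the config to load
    loadConfiguration(): Promise<Configuration> {
        return this.http.get(this.baseUrl + 'ClientConfiguration')
            .toPromise()
            .then(response => {
                this.config = response.json() as Configuration;
                return this.config;
            });
    }
}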

Now that we have a configuration service, we need to register it in app.module.shared.ts to make it available to other components. The ASP.NET Core Angular template puts most module setup for our client app in app.module.shared.ts (instead of app.module.ts) since there are separate modules for server-side rendering and client-side rendering. App.module.shared.ts contains the module pieces common to both scenarios.

To register our service, we need to import it and then add it to a providers array passed to the @NgModule decorator:
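For example (the module name and import path are assumptions based on the template layout):

import { NgModule } from '@angular/core';
import { ConfigurationService } from './components/services/configuration.service';

@NgModule({
    // ...declarations, imports, etc. unchanged...
    providers: [ ConfigurationService ]
})
export class AppModuleShared { }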

There’s one other important change to make before we leave app.module.shared.ts. We need to make sure that config values are loaded from the server before any components are rendered. To do that, we add ConfigurationService.loadConfiguration to our app’s APP_INITIALIZER function (which is called at app-initialization time and waits for returned promises to finish prior to any components being rendered).

Import APP_INITIALIZER from @angular/core and then update your providers argument to include a registration for APP_INITIALIZER:
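A sketch of the updated providers registration (note the factory returning a function that returns the promise):

import { NgModule, APP_INITIALIZER } from '@angular/core';
import { ConfigurationService } from './components/services/configuration.service';

@NgModule({
    providers: [
        ConfigurationService,
        {
            provide: APP_INITIALIZER,
            // useFactory must return a function that returns a Promise,
            // hence the double fat-arrow
            useFactory: (config: ConfigurationService) => () => config.loadConfiguration(),
            deps: [ConfigurationService],
            multi: true
        }
    ]
})
export class AppModuleShared { }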

Note that useFactory is a function that must return a function (which, in turn, returns a promise), so we have the double fat-arrow syntax seen above. Also, don’t forget to specify multi: true since there may be multiple APP_INITIALIZER functions registered.

Now the configuration service is registered with DI and will automatically load configuration from the server when the app starts up.

To make use of it, let’s update the app’s home component. Import ConfigurationService into the home component and update the component’s constructor to take an instance of the service as a parameter. Make sure to make the parameter public so that it can be used from the home component’s HTML template. Since we will want to loop over the ‘messageCount’ config setting, it’s also useful to create a small helper function to return an array with a length of messageCount for use with *ngFor in the HTML template later.

Here’s my simple home component:
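A sketch of the component (the import path is an assumption; note the public constructor parameter and the *ngFor helper):

import { Component } from '@angular/core';
import { ConfigurationService } from '../services/configuration.service';

@Component({
    selector: 'home',
    templateUrl: './home.component.html'
})
export class HomeComponent {
    constructor(public configurationService: ConfigurationService) { }

    // Returns an array of length messageCount for use with *ngFor
    getMessageIterations(): number[] {
        return Array(this.configurationService.config.messageCount).fill(0);
    }
}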

Finally, get rid of everything currently in home.component.html and replace it with an HTML template that takes advantage of the configuration values:
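For example, a minimal home.component.html using the values loaded from the server (names match the component sketch above):

<h1>Configuration values from the server</h1>
<ul>
    <li *ngFor="let i of getMessageIterations()">
        {{ configurationService.config.userMessage }}
    </li>
</ul>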

Trying it Out

You should now be able to run the web app and see the server-side configuration values reflected in the client application!

Here’s a screenshot of my sample app running with ASPNETCORE_ENVIRONMENT set to Development (I set MessageCount to 2 in appsettings.Development.json):

Development Environment Results

And here’s one with ASPNETCORE_ENVIRONMENT set to Production (where MessageCount is three and the message is appropriately updated):

Production Environment Results

Wrap Up

By exposing (portions of) app configuration from our ASP.NET Core app and making use of Angular’s APP_INITIALIZER function, we can share configuration values between server and client apps. This allows our client apps to take advantage of ASP.NET Core’s rich configuration system. In this sample, the client configuration settings were only used by our Angular app, but if your scenario includes some settings that are needed by both the client and server applications, this sort of solution allows you to set the config values in one place and have them be available to both applications.

Future improvements of this sample could include adding a time-to-live on the client’s cached configuration object to allow automatically reloaded config values to propagate to the client, or perhaps using different configuration providers to show Angular app configuration coming from Azure Key Vault or some other less common source.

Further Reading

 

Recent updates for publishing


We have recently added a few interesting features for ASP.NET publishing. The updates include:

  • Container Registry Publish Updates
  • Create publish profile without publishing

In this post, we will briefly describe these updates. We will get started with the container related news.

Container Registry Publish Updates

Container development (e.g. Docker) has grown in popularity recently, including in .NET development. We’ve started adding support for containerized applications in Visual Studio as well. When developing a containerized app, there are two components that are needed to run your application.

  • App image
  • Host to run the container

The app image includes the application itself and info about configuring the machine hosting the application.

The host machine will load the app image and run it. There are a variety of options for the host machine that can be used. In previous releases we supported publishing a containerized ASP.NET Core project to Azure Container Registry (ACR) and creating a new Azure App Service to host the application. If you were running your application using a different host, Visual Studio wouldn’t have helped. Now in Visual Studio we have the following container publish related features:

  • A: Publish an ASP.NET Core containerized app to ACR and a new Azure App Service (Visual Studio 2017 15.0)
  • B: Publish an ASP.NET (Core or full .NET Framework) containerized project to a container registry (including, but not limited to, ACR) (Visual Studio 2017 15.5 Preview 2)

 

Feature A enables users to run a containerized ASP.NET Core app on a new Azure App Service host. This feature was included in the initial release of Visual Studio 2017; we are including it here for completeness. To publish one of these apps to App Service you’ll use the Microsoft Azure App Service Linux option on the publish page. See the next image.

After selecting this option you’ll be prompted to configure the new App Service instance and the container registry settings.

For feature B, we have added a new Container Registry publish option on the Publish page. You can see an image of that below.

The radio buttons below the Container Registry button list the different options. Let’s take a closer look at those below.

 

  • Create New Azure Container Registry – Select this option when you want to publish your app image to a new Azure Container Registry. You can publish several different app images to the same Container Registry.
  • Select Existing Azure Container Registry – Select this option when you’ve already created the Azure Container Registry and you want to publish a new app image to it.
  • Docker Hub – Select this option if you want to publish to Docker Hub (hub.docker.com).
  • Custom – Select this option to explicitly set publish options.

 

After selecting the appropriate option and clicking the Publish button, you’ll be prompted to complete the configuration and continue to publish. The Container Registry publish feature is enabled for both ASP.NET Core and ASP.NET full .NET Framework projects.

To try out the Azure related features you’ll need an Azure subscription. If you don’t already have one you can get started for free.

We’ve only briefly covered the Container Registry features here. We will be blogging more soon about how to use this in end-to-end scenarios. Until then you can take a look at the docs. Now let’s move on to the next update.

Create Publish Profile Without Publishing

In Visual Studio publishing to a new destination includes two steps:

  • Create Publish Profile
  • Start publishing

In Visual Studio 2017 15.5 Preview 2 we have added a new gear option next to the Publish button. In previous releases of Visual Studio 2017, when you created a publish profile the publish process started automatically immediately afterwards. This prevented you from changing publish settings before the initial publish. We’ve heard feedback from users that in some cases the publish options need to be customized before the initial publish. Some reasons you may choose to delay the publish process include: you need to configure databases, you need to change the build configuration used, or you want to validate publish credentials before publishing. In the image below you can see the new gear option highlighted.

To create a publish profile without publishing, select the publish destination (by clicking one of the big buttons) and then click the gear; you’ll get a context menu with two options. Select Create Profile.


After you select Create Profile here, you’ll continue to create the profile, and any new Azure resources if applicable. You can then publish your app at a later time with the Publish button. The following image shows this button.

Now that we’ve covered the delayed publish feature, let’s wrap up.

Conclusion

These were some updates that we wanted to share with you. We’ll be blogging more soon about how to use the container features in full scenarios. If you have any questions, please comment below or email me at sayedha AT microsoft.com or on Twitter @SayedIHashimi. You can also use the built in send feedback feature in Visual Studio 2017.

Thanks,
Sayed Ibrahim Hashimi


Publishing a Web App to an Azure VM from Visual Studio


We know virtual machines (VMs) are one of the most popular places to run apps in Azure, but publishing to a VM from Visual Studio has been a tricky experience for some. So, we’re pleased to announce that in Visual Studio 15.5 (get the preview now) we’ve added some improvements to the experience. In this post, we’ll discuss the requirements for a VM that’s ready to run an ASP.NET web application, and then walk through how to publish to it from Visual Studio 15.5 Preview 2. Also, if you have a minute to tell us about how you work with VMs, we’d appreciate it.

Contents

    – Prepare your VM for publishing
    – Walk-through: Publishing from Visual Studio
    – Modifying publish settings [Optional]

Prepare your VM for publishing

Before you can publish a web application to an Azure Virtual Machine from Visual Studio, the VM must be properly configured. The minimum requirements are listed below.

    Server Components:
        • IIS
        • ASP.NET 4.6
        • Web Management Service
        • Web Deploy
    Open firewall ports:
        • Port 80 (http)
        • Port 8172 (Web Deploy)
    DNS:
        • A DNS name assigned to the VM

Walk-through: Publishing a web app to an Azure Virtual Machine from Visual Studio 2017

  1. Open your web application in Visual Studio 2017 (v15.5 Preview 2)
  2. Right-click the project and choose “Publish…”
  3. Press the arrow on the right side of the page to scroll through the publishing options until you see “Microsoft Azure Virtual Machine”.
  4. Select the “Microsoft Azure Virtual Machine” icon, then click “Browse…” to open the Azure Virtual Machine selector.
    The Azure VM selector dialog will open.
  5. Choose the appropriate account (with Azure subscription connected to your virtual machine).
    • If you’re signed in to Visual Studio, the account list will be pre-populated with all your authenticated accounts.
    • If you are not signed in, or if the account you need is not listed, choose “Add an account…” and follow the prompts to log in.

    Wait for the list of Existing Virtual Machines to populate. (Note: This can take some time).

  6. From the Existing Virtual Machines list, select the VM that you intend to publish your web application to, then press “OK”.

    Focus returns to the Publish page with the Azure Virtual Machine populated and the “Publish” button enabled.

  7. Press the “Publish” button to create the publish profile and begin publishing to your Azure VM.
    Note: You can delay publishing so you can configure additional settings prior to your first publish as covered later in the post.
  8. When prompted for User name and Password, enter the credentials of a user who is authorized for publishing web applications on the VM, then press “OK”.
    Note: For new VMs, this is usually the administrator account. To enable non-administrator user accounts with permission to publish via WebDeploy, go into the authentication settings in IIS -> Management Service on the VM.
  9. If prompted, accept the security certificate.
  10. Publishing proceeds.
    You can watch the progress in the Output window.
    When publishing completes, a web browser will launch and open at the destination URL of the web site hosted on the Azure VM.
    Note: If you don’t want the web browser launching after each publish, remove the “Destination URL” from the Publish Profile settings.

Success!

At this point, you have finished publishing your web application to the VM.
The Publish page refreshes with the new profile selected and the details shown in the Summary section.

You can return to this screen any time to publish again, rename or delete the profile, launch the web site in a browser, or modify the publish settings.
Read on to learn about some interesting settings.

Modify Publish Settings [Optional]

After the Publish Profile has been created, you can edit the settings to tweak your publishing experience.
To modify the settings of the publish profile, click the “Settings…” link on the Publish page.

This will open the Publish Profile Settings dialog.

Save user credentials to the profile

To avoid having to provide user name and password each time you publish, you can store the user credentials in the publish profile.

  1. In the “User name” and “Password” fields, enter the credentials of an authorized user on the target VM.
  2. Press “Validate Connection” to confirm that the details are correct.
  3. Choose “Save password” if you don’t want to be prompted to enter the password each time you publish.
  4. Click “Next” to progress to the “Settings” tab, or click “Save” to accept the changes and close the dialog.

Ensure a clean publish each time

To ensure that your web application is uploaded to a clean web site each time you publish, you can configure the publish profile to delete all files on the target web server before publishing.

  1. Go into the “Settings” page of the Publish dialog.
  2. Expand the File Publish Options.
  3. Choose “Remove additional files at destination”.
    Warning! Deleting files on the target VM may have undesired effects, including removing files that were uploaded by other team members, or files generated by the application. Please be sure you know the state of the machine before publishing with this option enabled.

Conclusion

We’d love for you to download the 15.5 Preview and let us know what you think of the new experience. Also, if you could take two minutes to tell us about how you use VMs in the cloud, we’d appreciate it. As always please let us know what you think in the comments section below, by using the send feedback tool in Visual Studio, or via Twitter.

Creating a Minimal ASP.NET Core Windows Container


This is a guest post by Mike Rousos

One of the benefits of containers is their small size, which allows them to be more quickly deployed and more efficiently packed onto a host than virtual machines could be. This post highlights some recent advances in Windows container technology and .NET Core technology that allow ASP.NET Core Windows Docker images to be reduced in size dramatically.

Before we dive in, it’s worth reflecting on whether Docker image size even matters. Remember, Docker images are built by a series of read-only layers. When using multiple images on a single host, common layers will be shared, so multiple images/containers using a base image (a particular Nano Server or Server Core image, for example) will only need that base image present once on the machine. Even when containers are instantiated, they use the shared image layers and only take up additional disk space with their writable top layer. These efficiencies in Docker mean that image size doesn’t matter as much as someone just learning about containerization might guess.

All that said, Docker image size does make some difference. Every time a VM is added to your Docker host cluster in a scale-out operation, the images need to be populated. Smaller images will get the new host node up and serving requests faster. Also, despite image layer sharing, it’s not unusual for Docker hosts to have dozens of different images (or more). Even if some of those share common layers, there will be differences between them and the disk space needed can begin to add up.

If you’re new to using Docker with ASP.NET Core and want to read-up on the basics, you can learn all about containerizing ASP.NET Core applications in the documentation.

A Starting Point

You can create an ASP.NET Core Docker image for Windows containers by checking the ‘Enable Docker Support’ box while creating a new ASP.NET Core project in Visual Studio 2017 (or by right-clicking an existing .NET Core project and choosing ‘Add -> Docker Support’).

Adding Docker Support

To build the app’s Docker image from Visual Studio, follow these steps:

  1. Make sure the docker-compose project is selected as the solution’s startup project.
  2. Change the project’s Configuration to ‘Release’ instead of ‘Debug’.
    1. It’s important to use Release configuration because, in Debug configuration, Visual Studio doesn’t copy your application’s binaries into the Docker image. Instead, it creates a volume mount that allows the application binaries to be used from outside the container (so that they can be easily updated without rebuilding the image). This is great for a code-debug-fix cycle, but will give us incorrect data for what the Docker image size will be in production.
  3. Push F5 to build (and start) the Docker image.

Visual Studio Docker Launch Settings

Alternatively, the same image can be created from a command prompt by publishing the application (dotnet publish -c Release) and building the Docker image (docker build -t samplewebapi --build-arg source=bin/Release/netcoreapp2.0/publish .).

The resulting Docker image has a size of 1.24 GB (you can see this with the docker images or docker history commands). That’s a lot smaller than a Windows VM and even considerably smaller than Windows Server Core containers or VMs, but it’s still large for a Docker image. Let’s see how we can make it smaller.

Initial Template Image

Windows Nano Server, Version 1709

The first (and by far the greatest) improvement we can make to this image size has to do with the base OS image we’re using. If you look at the docker images output above, you will see that although the total image is 1.24 GB, the majority of that (more than 1 GB) comes from the underlying Windows Nano Server image.

The Windows team recently released Windows Server, version 1709. One of the great features in 1709 is an optimized Nano Server base image that is nearly 80% smaller than previous Nano Server images. The Nano Server, version 1709 image is only about 235 MB on disk (~93 MB compressed).

The first thing we should do to optimize our ASP.NET Core application’s image is to use that new Nano Server base. You can do that by navigating to the app’s Dockerfile and replacing FROM microsoft/aspnetcore:2.0 with FROM microsoft/aspnetcore:2.0-nanoserver-1709.

Be aware that in order to use Nano Server, version 1709 Docker images, the Docker host must be running either Windows Server, version 1709 or Windows 10 with the Fall Creators Update, which is rolling out worldwide right now. If your computer hasn’t received the Fall Creators Update yet, don’t worry. It is possible to create Windows Server, version 1709 virtual machines in Azure to try out these new features.

After switching to use the Nano Server, version 1709 base image, we can re-build our Docker image and see that its size is now 357 MB. That’s a big improvement over the original image!

If you’re building your Docker image by launching the docker-compose project from within Visual Studio, make sure Visual Studio is up-to-date (15.4 or later) since recent updates are needed to launch Docker containers based on Nano Server, version 1709 from Visual Studio.

AspNet Core v1709 Docker Image

That Might be Small Enough

Before we make the image any smaller, I want to pause to point out that for most scenarios, using the Nano Server, version 1709 base image is enough of an optimization and further “improvements” might actually make things worse. To understand why, take a look at the sizes of the component layers of the Docker image created in the last step.

AspNet Core v1709 Layers

The largest layers are still the OS (the bottom two layers) and, at the moment, that’s as small as Windows images get. Our app, on the other hand, is the 373 kB layer towards the top of the layer history. That’s already quite small.

The only good places left to optimize are the .NET Core layer (64.9 MB) or the ASP.NET Core layer (53.6 MB). We can (and will) optimize those, but in many cases it’s counter-productive to do so because Docker shares layers between images (and containers) with common bases. In other words, the ASP.NET Core and .NET Core layers shown in this image will be shared with all other containers on the host that use microsoft/aspnetcore:2.0-nanoserver-1709 as their base image. The only additional space that other images consume will be the ~500 kB that our application added on top of the layers common to all ASP.NET Core images. Once we start making changes to those base layers, they won’t be sharable anymore (since we’ll be pulling out things that our app doesn’t need but that others might). So, we might reduce this application’s image size but cause others on the host to increase!

So, bottom line: if your Docker host will be hosting containers based on several different ASP.NET Core application images, then it’s probably best to just have them all derive from microsoft/aspnetcore:2.0-nanoserver-1709 and let the magic of Docker layer sharing save you space. If, on the other hand, your ASP.NET Core image is likely to be used alongside other non-.NET Core images which are unlikely to be able to share much with it anyhow, then read on to see how we can further optimize our image size.

Reducing ASP.NET Core Dependencies

The majority of the ~54 MB contributed by the ASP.NET Core layer of our image is a centralized store of ASP.NET Core components that’s installed by the aspnetcore Dockerfile. As mentioned above, this is useful because it allows ASP.NET Core dependencies to be shared between different ASP.NET Core application Docker images. If you have a small ASP.NET Core app (and don’t need the sharing), it’s possible to just bundle the parts of the ASP.NET Core web stack you need with your application and skip the rest.

To remove unused portions of the ASP.NET Core stack, we can take the following steps:

  1. Update the Dockerfile to use microsoft/dotnet:2.0.0-runtime-nanoserver-1709 as its base image instead of microsoft/aspnetcore:2.0-nanoserver-1709.
  2. Add the line ENV ASPNETCORE_URLS http://+:80 to the Dockerfile after the FROM statement (this was previously done in the aspnetcore base image for us).
  3. In the app’s project file, replace the Microsoft.AspNetCore.All metapackage dependency with dependencies on just the ASP.NET Core components the app requires. In this case, my app is a trivial ‘Hello World’ web API, so I only need the following (larger apps would, of course, need more ASP.NET Core packages):
    1. Microsoft.AspNetCore
    2. Microsoft.AspNetCore.Mvc.Core
    3. Microsoft.AspNetCore.Mvc.Formatters.Json
  4. Update the app’s Startup.cs file to call services.AddMvcCore().AddJsonFormatters() instead of services.AddMvc() (since the AddMvc extension method isn’t in the packages we’ve referenced).
    1. This works because our sample project is a Web API project. An MVC project would need more MVC services.
  5. Update the app’s controllers to derive from ControllerBase instead of Controller
    1. Again, since this is a Web API controller instead of an MVC controller, it doesn’t use the additional functionality Controller adds.
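
Here’s a minimal sketch of what those changes might look like. It assumes the Visual Studio-generated Dockerfile layout (with the source build argument used by the docker build commands later in this post) and a project named SampleWebApi; package versions are illustrative.

# Base on the .NET Core runtime image instead of microsoft/aspnetcore
FROM microsoft/dotnet:2.0.0-runtime-nanoserver-1709
# Previously set for us by the aspnetcore base image
ENV ASPNETCORE_URLS http://+:80
ARG source
WORKDIR /app
COPY ${source:-obj/Docker/publish} .
ENTRYPOINT ["dotnet", "SampleWebApi.dll"]

And in the project file, the Microsoft.AspNetCore.All reference is replaced with just the packages listed above:

<ItemGroup>
  <PackageReference Include="Microsoft.AspNetCore" Version="2.0.0" />
  <PackageReference Include="Microsoft.AspNetCore.Mvc.Core" Version="2.0.0" />
  <PackageReference Include="Microsoft.AspNetCore.Mvc.Formatters.Json" Version="2.0.0" />
</ItemGroup>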

Now when we build the Docker image, we can see we’ve shaved a little more than 40 MB by only including the ASP.NET Core dependencies we need. The total size is now 315 MB.

NanoServer No AspNet All

Bear in mind that this is a trivial sample app and a real-world application would not be able to cut as much of the ASP.NET Core framework.

Also, notice that while we eliminated the 54 MB intermediate ASP.NET Core layer (which could have been shared), we’ve increased the size of our application layer (which cannot be shared) by about 11 MB.

Trimming Unused Assemblies

The next place to consider saving space is the .NET Core/CoreFX layer (which is consuming ~65 MB at the moment). Like the ASP.NET Core optimizations, this is only useful if that layer wasn’t going to be shared with other images. It’s also a little trickier to improve because, unlike ASP.NET Core, .NET Core’s framework is delivered as a single package (Microsoft.NetCore.App).

To reduce the size of .NET Core/CoreFX files in our image, we need to take two steps:

  1. Include the .NET Core files as part of our application (instead of in a base layer).
  2. Use a preview tool to trim unused assemblies from our application.

The result of those steps will be the removal of any .NET Framework (or remaining ASP.NET Core framework) assemblies that aren’t actually used by our application.

First, we need to make our application self-contained. To do that, add a <RuntimeIdentifiers>win10-x64</RuntimeIdentifiers> property to the project’s csproj file.

We also need to update our Dockerfile to use microsoft/nanoserver:1709 as its base image (so that we don’t end up with two copies of .NET Core) and use SampleWebApi.exe as our image’s entrypoint instead of dotnet SampleWebApi.dll.
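
As a rough sketch (again assuming the SampleWebApi project name and the template-generated Dockerfile layout), the project file gains a runtime identifier and the Dockerfile changes to:

<PropertyGroup>
  <TargetFramework>netcoreapp2.0</TargetFramework>
  <RuntimeIdentifiers>win10-x64</RuntimeIdentifiers>
</PropertyGroup>

# Plain Nano Server base; .NET Core now ships app-local with the publish output
FROM microsoft/nanoserver:1709
ENV ASPNETCORE_URLS http://+:80
ARG source
WORKDIR /app
COPY ${source:-obj/Docker/publish} .
# Self-contained apps are launched via their own executable
ENTRYPOINT ["SampleWebApi.exe"]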

Up until now, it was possible to build the Docker image either from Visual Studio or the command line. But Visual Studio doesn’t currently support building Docker images for self-contained .NET Core applications (which are not typically used for development-time debugging). So, to build our Docker image from here on out, we will use the following command-line commands (notice that they’re a little different from those shown previously, since we are now publishing a runtime-specific build of the application). Also, you may need to delete (or update) the .dockerignore file generated by the project template, because we’re now copying binaries into the Docker image from a different publish location.

dotnet publish -c Release -r win10-x64
docker build -t samplewebapi --build-arg
   source=bin/Release/netcoreapp2.0/win10-x64/publish .

These changes will cause the .NET Core assemblies to be deployed with our application instead of in a shared location, but the included files will be about the same. To remove unneeded assemblies, we can use Microsoft.Packaging.Tools.Trimming, a preview package that removes unused assemblies from a project’s output and publish folders. To do that, add a package reference to Microsoft.Packaging.Tools.Trimming and a <TrimUnusedDependencies>true</TrimUnusedDependencies> property to the application’s project file.
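
In the project file, those additions might look roughly like this (the version number is illustrative; use the latest preview of the package):

<ItemGroup>
  <!-- Preview package; version shown is illustrative -->
  <PackageReference Include="Microsoft.Packaging.Tools.Trimming" Version="1.1.0-preview1-25818-01" />
</ItemGroup>
<PropertyGroup>
  <TrimUnusedDependencies>true</TrimUnusedDependencies>
</PropertyGroup>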

After making those additions, re-publishing, and re-building the Docker image (using the CLI commands shown above), the total image size is down to 288 MB.

NanoServer SelfContained Trim Dependencies

As before, this reduction in total image size does come at the expense of a larger top layer (up to 53 MB).

One More Round of Trimming

We’re nearly done now, but there’s one more optimization we can make. Microsoft.Packaging.Tools.Trimming removed some unused assemblies, but others still remain because it isn’t clear whether dependencies on those assemblies are actually exercised or not. And that’s not to mention all the IL in an assembly that may be unused if our application calls just one or two methods from it.

There’s another preview trimming tool, the .NET IL Linker, which is based on the Mono linker and can remove unused IL from assemblies.

This tool is still experimental, so to reference it we need to add a NuGet.config to our solution and include https://dotnet.myget.org/F/dotnet-core/api/v3/index.json as a package source. Then, we add a dependency on the latest preview version of ILLink.Tasks (currently, the latest version is 0.1.4-preview-981901).

ILLink.Tasks will trim IL automatically, but we can get a report on what it has done by passing /p:ShowLinkerSizeComparison=true to our dotnet publish command.
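
Putting the linker pieces together, the additions might look like this (a sketch only; the feed URL and package version are the ones mentioned above). First, the NuGet.config with the extra package source:

<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <packageSources>
    <add key="dotnet-core" value="https://dotnet.myget.org/F/dotnet-core/api/v3/index.json" />
  </packageSources>
</configuration>

Then the package reference in the project file, and the publish command with the size-comparison report enabled:

<ItemGroup>
  <PackageReference Include="ILLink.Tasks" Version="0.1.4-preview-981901" />
</ItemGroup>

dotnet publish -c Release -r win10-x64 /p:ShowLinkerSizeComparison=true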

After one more publish and Docker image build, the final size for our Windows ASP.NET Core Web API container image comes to 271 MB!

NanoServer Double Trim

Even though trimming ASP.NET Core and .NET Core Framework assemblies isn’t common for most containerized projects, the preview trimming tools shown here can be very useful for reducing the size of large applications since they can remove application-local assemblies (pulled in from NuGet, for example) and IL code paths that aren’t used.

Wrap-Up

This post has shown a series of optimizations that can help to reduce ASP.NET Core Docker image size. In most cases, all that’s needed is to be sure to use new Nano Server, version 1709 base images and, if your app is large, to consider some preview dependency trimming options like Microsoft.Packaging.Tools.Trimming or the .NET IL Linker.

Less commonly, you might also consider using app-local versions of the ASP.NET Core or .NET Core Frameworks (as opposed to shared ones) so that you can trim unused dependencies from them. Just be careful to keep common base image layers unchanged if they’re likely to be shared between multiple images. Although this article presented the different trimming and minimizing options as a series of steps, you should feel free to pick and choose the techniques that make sense for your particular scenario.

In the end, a simple ASP.NET Core web API sample can be packaged into a < 360 MB Windows Docker image without sacrificing any ability to share ASP.NET Core and .NET Core base layers with other Docker images on the host and, potentially, into an even smaller image (271 MB) if that sharing is not required.

Improvements to Azure Functions in Visual Studio

We’re excited to announce several improvements to the Azure Functions experience in Visual Studio as part of the latest update to the Azure Functions tools on top of Visual Studio 2017 v15.5. (Get the preview now.)

New Function project dialog

To make it easier to get up and running with Azure Functions, we’ve introduced a new Functions project dialog. Now, when creating a Functions project, you can choose one that starts with one of the most popular trigger types (HTTP, Queue, or Timer). If you’re looking for something different, choose the Empty project and add the item you need after project creation.

Additionally, most Function apps require a valid storage account to be specified in AzureWebJobsStorage. Typically this has meant adding a connection string to the local.settings.json after the function is created. To make it easier to find and configure the connection strings for your Function’s storage account, we’ve introduced a Storage Account picker in the new project dialog.

Storage account picker in new Functions project dialog

The default option is the Storage Emulator. The Storage Emulator is a local service, installed as part of the Azure workload, that offers much of the functionality of a real Azure storage account. If it’s not already running, you can start it by pressing the Windows Start key and typing “Microsoft Azure Storage Emulator”. This is a great option if you’re looking to get up and running quickly – especially if you’re playing around, as it doesn’t require any resources to be provisioned in Azure.

However, the best way to guarantee that all supported features are available to your Functions project is to configure it to use an Azure storage account. To help with this, we’ve added a Browse… option in the Storage Account picker that launches the Azure Storage Account selection dialog. This lets you choose from existing storage accounts that you have access to through your Azure subscriptions.

When the project is created, the connection string for the selected storage account will be added to the local.settings.json file and you’ll be able to run your Functions project straight away!
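
For example, a local.settings.json wired up to the Storage Emulator typically looks something like this (UseDevelopmentStorage=true is the emulator’s well-known connection string; picking a real storage account puts its full connection string here instead):

{
  "IsEncrypted": false,
  "Values": {
    "AzureWebJobsStorage": "UseDevelopmentStorage=true"
  }
}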

.NET Core support

You can now create Azure Functions projects inside Visual Studio that target .NET Core. When creating a Functions project, you can choose a target from the selector at the top of the new project dialog. If you choose the Azure Functions v2 (.NET Standard) target, your project will run against .NET Core or .NET Framework.

Choose Azure Functions runtime

Manage Application Settings

An important part of deploying Functions to Azure is adding appropriate application settings. Azure Functions projects store local settings in the local.settings.json file, but this file does not get published to Azure (by design). So, the settings that control the application running in Azure need to be manually configured. As part of our new tooling improvements, we’ve added the ability for you to view and edit your Function’s app settings in the cloud from within Visual Studio. On the Publish page of the Connected Services dialog, you’ll find an option to Manage Application Settings….

Manage App Settings link in Publish dialog

This launches the Application Settings dialog, which allows you to view, update, add and remove app settings just like you would on the Azure portal. When you’re satisfied with the changes, you can press Apply to push the changes to the server.

Application Settings editor

Detect mismatching Functions runtime versions

To prevent the issue of developing locally against an out-of-date version of the runtime, after you publish a Functions app, Visual Studio now compares your local runtime version against the portal’s version. If they are different, Visual Studio will offer to change the app settings in the cloud to match the version you are using locally.

Update mismatching Functions extension version

Try out the new features

Download the latest version of Visual Studio 2017 (v15.5) and start enjoying the improved Functions experience today.

Ensure you have the Azure workload installed and the latest version of the Azure Web Jobs and Functions Tools.
Note: If you have a fresh installation, you may need to manually apply the update to Azure Functions and Web Jobs Tools. Look for the new notifications flag in the Visual Studio title bar. Clicking the link in the Notifications window opens the Extensions and Updates dialog. From there you can click Update to upgrade to the latest version.

Update notifications

If you have any questions or comments, please let us know by posting in the comments section below.

Announcing .NET 4.7.1 Tools for the Cloud

Packages and Containers

Today we are releasing a set of providers for ASP.NET 4.7.1 that make it easier than ever to deploy your applications to cloud services and take advantage of cloud-scale features.  This release includes a new CosmosDb provider for session state and a collection of configuration builders.

A Package-First Approach

With previous versions of the .NET Framework, new features were provided “in the box” and shipped with Windows and new versions of the entire framework.  This meant that you could be assured your providers and capabilities were available on every current version of Windows.  It also meant that you had to wait for a new version of Windows to get new .NET Framework features.  Starting with .NET Framework 4.7, we have adopted a strategy of delivering more abstract features with the framework and deploying providers through the NuGet package manager service.  There are no concrete ConfigurationBuilder classes in the .NET Framework 4.7.1, and we are now making several available for your use from the NuGet.org repository.  In this way, we can update and deploy new ConfigurationBuilders without requiring a fresh install of Windows or the .NET Framework.

ConfigurationBuilders Simplify Application Management

In .NET Framework 4.7.1 we introduced the concept of ConfigurationBuilders.  ConfigurationBuilders are objects that allow you to inject application configuration into your .NET Framework 4.7.1 application and continue to use the familiar ConfigurationManager interface to read those values.  Sure, you could always write your configuration files to read other config files from disk, but what if you wanted to apply configuration from environment variables?  What if you wanted to read configuration from a service, like Azure Key Vault?  To work with those configuration sources, you would need to rewrite some non-trivial amount of your application to consume these services.

With ConfigurationBuilders, no code changes are necessary in your application.  You simply add references from your web.config or app.config file to the ConfigurationBuilders you want to use, and your application will start consuming those sources without updating your configuration files on disk.  One form of ConfigurationBuilder is the KeyValueConfigBuilder, which matches a key to a value from an external source and adds that pair to your configuration.  All of the ConfigurationBuilders we are releasing today support this key-value approach to configuration.  Let’s take a look at using one of these new ConfigurationBuilders, the EnvironmentConfigBuilder.

When you install any of our new ConfigurationBuilders into your application, we automatically allocate the appropriate new configSections in your app.config or web.config file as shown below:
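
The generated configuration looks roughly like the following (the <section> declaration that the package adds to configSections is omitted here, and type strings and version numbers come from the installed package, so treat this as a sketch):

<configBuilders>
  <builders>
    <add name="Environment"
         type="Microsoft.Configuration.ConfigurationBuilders.EnvironmentConfigBuilder, Microsoft.Configuration.ConfigurationBuilders.Environment, Version=1.0.0.0, Culture=neutral" />
  </builders>
</configBuilders>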

The new “builders” section contains information about the ConfigurationBuilders you wish to use in your application.  You can declare any number of ConfigurationBuilders, and apply the settings they retrieve to any section of your configuration.  Let’s look at applying our environment variables to the appSettings of this configuration.  You specify which ConfigurationBuilders to apply to a section by adding the configBuilders attribute to that section and indicating the name of the defined ConfigurationBuilder to apply, in this case “Environment”:

<appSettings configBuilders="Environment">
  <add key="COMPUTERNAME" value="VisualStudio" />
</appSettings>

The COMPUTERNAME is a common environment variable set by the Windows operating system that we can use to replace the VisualStudio setting defined here.  With the below ASPX page in our project, we can run our application and see the following results.
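
Any page that dumps ConfigurationManager.AppSettings will do for this test; a minimal sketch (the page name and markup are hypothetical) might be:

<%@ Page Language="C#" %>
<%@ Import Namespace="System.Configuration" %>
<html>
<body>
  <h2>AppSettings</h2>
  <ul>
    <%-- Enumerate every app setting the ConfigurationBuilders produced --%>
    <% foreach (string key in ConfigurationManager.AppSettings.AllKeys) { %>
      <li><%: key %> = <%: ConfigurationManager.AppSettings[key] %></li>
    <% } %>
  </ul>
</body>
</html>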

AppSettings Reported in the Browser

The COMPUTERNAME setting is overwritten by the environment variable.  That’s a nice start, but what if I want to read ALL the environment variables and add them as application settings?  You can specify Greedy Mode for the ConfigurationBuilder and it will read all environment variables and add them to your appSettings:

<add name="Environment" mode="Greedy"
  type="Microsoft.Configuration.ConfigurationBuilders.EnvironmentConfigBuilder, Microsoft.Configuration.ConfigurationBuilders.Environment, Version=1.0.0.0, Culture=neutral" />

There are several Modes that you can apply to each of the ConfigurationBuilders we are releasing today:

  • Greedy – Read all settings and add them to the section the ConfigurationBuilder is applied to
  • Strict – (default) Update only those settings where the key matches the configuration source’s key
  • Expand – Operate on the raw XML of the configuration section and do a string replace where the configuration source’s key is found.

The Greedy and Strict options only apply when operating on AppSettings or ConnectionStrings sections.  Expand can perform its string replacement on any section of your config file.

You can also specify a prefix for the settings to be handled by adding the prefix attribute.  This allows you to read only settings that start with a known prefix.  Perhaps you only want to add environment variables that start with “APPSETTING_”; in that case, you can update your config file like this:

<add name="Environment"
     mode="Greedy" prefix="APPSETTING_"
     type="Microsoft.Configuration.ConfigurationBuilders.EnvironmentConfigBuilder, Microsoft.Configuration.ConfigurationBuilders.Environment, Version=1.0.0.0, Culture=neutral" />

Finally, even though the APPSETTING_ prefix is a handy way to read only those settings, you may not want the setting to actually be called “APPSETTING_Setting” in code.  You can use the stripPrefix attribute (default value is false) to omit the prefix when the value is added to your configuration:
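
That add element might look like this (same type string as before):

<add name="Environment"
     mode="Greedy" prefix="APPSETTING_" stripPrefix="true"
     type="Microsoft.Configuration.ConfigurationBuilders.EnvironmentConfigBuilder, Microsoft.Configuration.ConfigurationBuilders.Environment, Version=1.0.0.0, Culture=neutral" />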

Greedy AppSettings with Prefixes Stripped

Notice that the COMPUTERNAME was not replaced in this mode.  You can add a second EnvironmentConfigBuilder to read and apply settings by adding another add statement to the configBuilders section and adding an entry to the configBuilders attribute on the appSettings:
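
For example (the “Environment2” name is arbitrary, and the surrounding configuration is unchanged):

<builders>
  <add name="Environment"
       mode="Greedy" prefix="APPSETTING_" stripPrefix="true"
       type="Microsoft.Configuration.ConfigurationBuilders.EnvironmentConfigBuilder, Microsoft.Configuration.ConfigurationBuilders.Environment, Version=1.0.0.0, Culture=neutral" />
  <add name="Environment2"
       type="Microsoft.Configuration.ConfigurationBuilders.EnvironmentConfigBuilder, Microsoft.Configuration.ConfigurationBuilders.Environment, Version=1.0.0.0, Culture=neutral" />
</builders>

<appSettings configBuilders="Environment,Environment2">
  <add key="COMPUTERNAME" value="VisualStudio" />
</appSettings>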

Try using the EnvironmentConfigBuilder from inside a Docker container to inject configuration specific to your running instances of your application.  We’ve found that this significantly improves the ability to deploy existing applications in containers without having to rewrite your code to read from alternate configuration sources.

Secure Configuration with Azure Key Vault

We are happy to include a ConfigurationBuilder for Azure Key Vault in this initial collection of providers.  This ConfigurationBuilder allows you to secure your application using the Azure Key Vault service, without any required login information to access the vault.  Add this ConfigurationBuilder to your config file and build an add statement like the following:

<add name="AzureKeyVault"
     mode="Strict"
     vaultName="MyVaultName"
     type="Microsoft.Configuration.ConfigurationBuilders.AzureKeyVaultConfigBuilder, Microsoft.Configuration.ConfigurationBuilders.Azure" />

If your application is running on an Azure service that has Managed Service Identity (MSI) enabled, this is all you need to read configuration from the vault and add it to your application.  Conversely, if you are not running on a service with MSI, you can still use the vault by adding the following attributes:

  • clientId – the Azure Active Directory application key that has access to your key vault
  • clientSecret – the Azure Active Directory application secret that corresponds to the clientId

The same mode, prefix, and stripPrefix features described previously are available for use with the AzureKeyVaultConfigBuilder.  You can now configure your application to grab that secret database connection string from the keyvault “conn_mydb” setting with a config file that looks like this:
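
A sketch of that configuration (the vault and key names are just the examples used in this post, and the placeholder connection string is overwritten by the vault value at run time):

<builders>
  <add name="AzureKeyVault"
       mode="Strict"
       vaultName="MyVaultName"
       type="Microsoft.Configuration.ConfigurationBuilders.AzureKeyVaultConfigBuilder, Microsoft.Configuration.ConfigurationBuilders.Azure" />
</builders>

<connectionStrings configBuilders="AzureKeyVault">
  <add name="conn_mydb" connectionString="placeholder - replaced from the vault" />
</connectionStrings>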

You can use other vaults by using the uri attribute instead of the vaultName attribute, and providing the URI of the vault you wish to connect to.  More information about getting started configuring key vault is available online.

Other Configuration Builders Available

Today we are introducing five configuration builders as a preview for you to use and extend:

This new collection of ConfigurationBuilders should help make it easier than ever to secure your applications with Azure Key Vault, or orchestrate your applications when you add them to a container by no longer embedding configuration or writing extra code to handle deployment settings.

We plan to fully release the source code and make these providers open source prior to removing the preview tag from them.

Store SessionState in CosmosDb

Today we are also releasing a session state provider for Azure Cosmos DB.  The globally distributed Cosmos DB service means that you can geographically load-balance your ASP.NET application and your users will always maintain the same session state no matter which server they are connected to.  This async provider is available as a NuGet package and can be added to your project by installing that package and updating the session state provider in your web.config as follows:

<connectionStrings>
  <add name="myCosmosConnString"
       connectionString="- YOUR CONNECTION STRING -"/>
</connectionStrings>
<sessionState mode="Custom" customProvider="cosmos">
  <providers>
    <add name="cosmos"
         type="Microsoft.AspNet.SessionState.CosmosDBSessionStateProviderAsync, Microsoft.AspNet.SessionState.CosmosDBSessionStateProviderAsync"
         connectionStringName="myCosmosConnString"/>
  </providers>
</sessionState>

Summary

We’re continuing to innovate and update the .NET Framework and ASP.NET.  These new providers should make it easier to deploy your applications to Azure or make use of containers without having to rewrite your application.  Update your applications to .NET 4.7.1 and start using these new providers to make your configuration more secure, and to start using CosmosDb for your session state.

Orchard Core Beta 1 released

This is a guest post by Sebastien Ros on behalf of the Orchard community

Two years ago, the Orchard community started developing Orchard on .NET Core. After 1,500 commits, 297,000 lines of code, 127 projects, we think it’s time to release a public version, namely Orchard Core Beta 1.

What is Orchard Core?

If you know what Orchard and .NET Core are, then it might seem obvious: Orchard Core is a redevelopment of Orchard on ASP.NET Core.

Orchard Core consists of two different targets:

  • Orchard Core Framework: An application framework for building modular, multi-tenant applications on ASP.NET Core.
  • Orchard Core CMS: A Web Content Management System (CMS) built on top of the Orchard Core Framework.

It’s important to note the differences between the framework and the CMS. Some developers who want to develop SaaS applications will only be interested in the modular framework. Others who want to build administrable websites will focus on the CMS and build modules to enhance their sites or the whole ecosystem.

Beta

Quoting Jeff Atwood on https://blog.codinghorror.com/alpha-beta-and-sometimes-gamma/:

“The software is complete enough for external testing — that is, by groups outside the organization or community that developed the software. Beta software is usually feature complete, but may have known limitations or bugs. Betas are either closed (private) and limited to a specific set of users, or they can be open to the general public.”

It means we feel confident that developers can start building applications and websites using the current state of development. There are bugs, limitations and there will be breaking changes, but the feedback has been strong enough that we think it’s time to show you what we have accomplished so far.

Building Software as a Service (SaaS) solutions with the Orchard Core Framework

It’s very important to understand the Orchard Core Framework is distributed independently from the CMS on nuget.org. We’ve made some sample applications on https://github.com/OrchardCMS/OrchardCore.Samples that will guide you on how to build modular and multi-tenant applications using just Orchard Core Framework without any of the CMS specific features.

One of our goals is to enable community-based ecosystems of hosted applications which can be extended with modules, like e-commerce systems, blog engines and more. The Orchard Core Framework enables a modular environment that allows different teams to work on separate parts of an application and make components reusable across projects.

What’s new in Orchard Core CMS

Orchard Core CMS is a complete rewrite of Orchard CMS on ASP.NET Core. It’s not just a port, as we wanted to improve performance drastically and align as closely as possible with the development models of ASP.NET Core.

  • Performance. This might be the most obvious change when you start using Orchard Core CMS. It’s extremely fast for a CMS, so fast that we haven’t even needed to work on an output cache module yet. To give you an idea, without caching Orchard Core CMS is around 20 times faster than the previous version.
  • Portable. You can now develop and deploy Orchard Core CMS on Windows, Linux and macOS. We also have Docker images ready for use.
  • Document database abstraction. Orchard Core CMS still requires a relational database, and is compatible with SQL Server, MySQL, PostgreSQL and SQLite, but it’s now using a document abstraction (YesSql) that provides a document database API to store and query documents. This is a much better approach for CMS systems and helps performance significantly.
  • NuGet Packages. Modules and themes are now shared as NuGet packages. Creating a new website with Orchard Core CMS is actually as simple as referencing a single meta package from the NuGet gallery. It also means that updating to a newer version only involves updating the version number of this package.
  • Live preview. When editing a content item, you can now see live how it will look on your site, even before saving your content. It also works for templates: you can browse any page to inspect the impact of a template change as you type it.
  • Liquid templates support. Editors can safely change the HTML templates with the Liquid template language. It was chosen as it’s both very well documented (Jekyll, Shopify, …) and secure.
  • Custom queries. We wanted to provide a way for developers to access all their data as simply as possible. We created a module that lets you create custom ad hoc SQL and Lucene queries that can be reused to display custom content or exposed as API endpoints. You can use it to create efficient queries, or to expose your data to SPA applications.
  • Recipes. Recipes are scripts that can contain content and metadata to build a website. You can now include binary files, and even use them to deploy your sites remotely from a staging to a production environment for instance. They can also be part of NuGet Packages, allowing you to ship predefined websites.
  • Scalability. Because Orchard Core is a multi-tenant system, you can host as many websites as you want with a single deployment. A typical cloud machine can then host thousands of sites in parallel, with database, content, theme and user isolation.

Resources

Development plan

The Orchard Core source code is available on GitHub.

There are still many important pieces to add and you might want to check our roadmap, but it’s also the best time to jump into the project and start contributing new modules, themes, improvements, or just ideas.

Feel free to drop by our dedicated Gitter chat and ask questions.
