
Improve website performance by optimizing images

We all want our web applications to load as fast as possible to give the best possible experience to the users. One of the steps to achieve that is to make sure the images we use are as optimized as possible.

If we can reduce the file size of the images then we can significantly reduce the weight of the website. This is important for various reasons, including:

  • Less bandwidth needed == cheaper hosting
  • The website loads faster
  • Faster websites have higher conversion rates
  • Less data needed to load your page on mobile devices (mobile data can be expensive)

Optimizing images is always better for the user, and therefore for you too, but it’s easy to forget and a bit cumbersome to do. So, let’s look at a couple of options that are simple to use.

All these options use great optimization algorithms that are capable of reducing the file size of images by up to 75% without any noticeable quality loss.

Gulp

If you are already using Gulp, then the gulp-imagemin package is a good option. Once configured, it will automatically optimize the images as part of your build.
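
To give a sense of the setup involved, here is a minimal gulpfile.js sketch; the folder names (images, wwwroot/images) are assumptions, and gulp plus gulp-imagemin are installed from npm:

    const gulp = require('gulp');
    const imagemin = require('gulp-imagemin');

    // Optimize every image in images/ and write the results to wwwroot/images.
    gulp.task('optimize-images', () =>
        gulp.src('images/*')
            .pipe(imagemin())
            .pipe(gulp.dest('wwwroot/images')));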

Pros:

  • Can be automated as part of a build
  • Uses industry standard optimization algorithms
  • Supports both lossy and lossless optimization
  • It is open source

Cons:

  • Requires some configuration
  • Increases the build time, sometimes by a lot
  • Doesn’t optimize dynamically added images

Visual Studio Image Optimizer

The Image Optimizer extension for Visual Studio is one of the most popular extensions due to its simplicity of use and strong optimization algorithms.

Pros:

  • Remarkably simple to use – no configuration
  • Uses industry standard optimization algorithms
  • Supports both lossy and lossless optimization
  • It is open source

Cons:

  • No build time support
  • Doesn’t optimize dynamically added images

Azure Image Optimizer

Installing the Azure.ImageOptimizer NuGet package into any ASP.NET application will automatically optimize images once the app is deployed to Azure App Service, with zero code changes to the web application. It uses the same algorithms as the Image Optimizer extension for Visual Studio.

To try out the Azure Image Optimizer you’ll need an Azure subscription. If you don’t already have one you can get started for free.

This is the only solution that optimizes images added dynamically at runtime, such as user-uploaded profile pictures.

Pros:

  • Remarkably simple to use
  • Uses industry standard optimization algorithms
  • Supports both lossy and lossless optimization
  • Optimizes dynamically added images
  • Set it and forget it
  • It is open source

Cons:

  • Only works on Azure App Service

To understand how the Azure Image Optimizer works, check out the documentation on GitHub. Spoiler alert – it is an Azure WebJob running next to your web application.

Final thoughts

There are many more options for image optimization that I didn’t cover, but it doesn’t really matter how you choose to optimize the images. The important part is that you optimize them.

My personal preference is to use the Image Optimizer extension for Visual Studio to optimize the known images and combine that with the Azure.ImageOptimizer NuGet package to handle any dynamically added images at runtime.

For more information about image optimization techniques check out Addy Osmani’s very comprehensive eBook Essential Image Optimization.


Configuring HTTPS in ASP.NET Core across different platforms

As the web moves to be more secure by default, it’s more important than ever to make sure your websites have HTTPS enabled. And if you’re going to use HTTPS in production, it’s a good idea to develop with HTTPS enabled so that your development environment is as close to your production environment as possible. In this blog post we’re going to go through how to set up an ASP.NET Core app with HTTPS for local development on Windows, Mac, and Linux.

This post is primarily focused on enabling HTTPS in ASP.NET Core during development using Kestrel. When using Visual Studio you can alternatively enable HTTPS in the Debug tab of your app to easily have IIS Express enable HTTPS without it going all the way to Kestrel. This closely mimics what you would have if you’re handling HTTPS connections in production using IIS. However, when running from the command-line or in a non-Windows environment you must instead enable HTTPS directly using Kestrel.

The basic steps we will use for each OS are:

  1. Create a self-signed certificate that Kestrel can use
  2. Optionally trust the certificate so that your browser will not warn you about using a self-signed certificate
  3. Configure Kestrel to use that certificate

You can also reference the complete Kestrel HTTPS sample app.

Create a certificate

Windows

Use the New-SelfSignedCertificate PowerShell cmdlet to generate a suitable certificate for development:

New-SelfSignedCertificate -NotBefore (Get-Date) -NotAfter (Get-Date).AddYears(1) -Subject "localhost" -KeyAlgorithm "RSA" -KeyLength 2048 -HashAlgorithm "SHA256" -CertStoreLocation "Cert:\CurrentUser\My" -KeyUsage KeyEncipherment -FriendlyName "HTTPS development certificate" -TextExtension @("2.5.29.19={critical}{text}","2.5.29.37={critical}{text}1.3.6.1.5.5.7.3.1","2.5.29.17={critical}{text}DNS=localhost")

Linux & Mac

For Linux and Mac we will use OpenSSL. Create a file https.config with the following data:
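
A configuration along these lines works; it writes the private key to key.pem and defines the v3_req extensions that the commands below reference (the exact values are illustrative):

    [ req ]
    default_bits       = 2048
    default_md         = sha256
    default_keyfile    = key.pem
    prompt             = no
    encrypt_key        = no
    distinguished_name = req_distinguished_name
    req_extensions     = v3_req

    [ req_distinguished_name ]
    commonName = localhost

    [ v3_req ]
    subjectAltName   = DNS:localhost
    keyUsage         = critical, digitalSignature, keyEncipherment
    extendedKeyUsage = serverAuth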

Run the following command to generate a private key and a certificate signing request:

openssl req -config https.config -new -out csr.pem

Run the following command to create a self-signed certificate:

openssl x509 -req -days 365 -extfile https.config -extensions v3_req -in csr.pem -signkey key.pem -out https.crt

Run the following command to generate a pfx file containing the certificate and the private key that you can use with Kestrel:

openssl pkcs12 -export -out https.pfx -inkey key.pem -in https.crt -password pass:<password>

Trust the certificate

This step is optional, but without it the browser will warn you about your site being potentially unsafe. You will see a security warning if your browser doesn’t trust your certificate.

Windows

To trust the generated certificate on Windows you need to add it to the current user’s trusted root store:

  1. Run certmgr.msc
  2. Find the certificate under Personal/Certificates. The “Issued To” field should be localhost and the “Friendly Name” should be HTTPS development certificate
  3. Copy the certificate and paste it under Trusted Root Certification Authorities/Certificates
  4. When Windows presents a security warning dialog to confirm you want to trust the certificate, click on “Yes”.

Linux

There is no centralized way of trusting a certificate on Linux, so you can do one of the following:

  1. Add the URL you are using to your browser’s exclusion list
  2. Trust all self-signed certificates on localhost
  3. Add the https.crt to the list of trusted certificates in your browser.

How exactly to achieve this depends on your browser/distro.

Mac

Option 1: Command line

Run the following command:

sudo security add-trusted-cert -d -r trustRoot -k /Library/Keychains/System.keychain https.crt

Some browsers, such as Chrome, require you to restart them before this trust will take effect.

Option 2: Keychain UI

If you open the “Keychain Access” app you can drag your https.crt into the Login keychain.

Configure Kestrel to use the certificate we generated

To configure Kestrel to use the generated certificate, add the following code and configuration to your application.

Application code

This code will read a set of HTTP server endpoint configurations from a custom section in your app configuration settings and then apply them to Kestrel. The endpoint configurations include settings for configuring HTTPS, like which certificate to use. Add the code for the ConfigureEndpoints extension method to your application and then call it when setting up Kestrel for your host in Program.cs:
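
A condensed sketch of such an extension method follows; the configuration keys it reads (Port, Scheme, Host, FilePath, Password, StoreName, StoreLocation) are illustrative and match the sample settings below, and the full version is in the sample app linked above:

    using System;
    using System.Net;
    using System.Security.Cryptography.X509Certificates;
    using Microsoft.AspNetCore.Hosting;
    using Microsoft.Extensions.Configuration;

    public static class KestrelHostBuilderExtensions
    {
        public static IWebHostBuilder ConfigureEndpoints(this IWebHostBuilder hostBuilder)
        {
            return hostBuilder.UseKestrel(options =>
            {
                // Resolve the app configuration that CreateDefaultBuilder set up.
                var configuration = (IConfiguration)options.ApplicationServices
                    .GetService(typeof(IConfiguration));

                foreach (var endpoint in configuration
                    .GetSection("HttpServer:Endpoints").GetChildren())
                {
                    var port = endpoint.GetValue<int>("Port");

                    options.Listen(IPAddress.Loopback, port, listenOptions =>
                    {
                        if (endpoint.GetValue<string>("Scheme") == "https")
                        {
                            listenOptions.UseHttps(LoadCertificate(endpoint));
                        }
                    });
                }
            });
        }

        private static X509Certificate2 LoadCertificate(IConfigurationSection endpoint)
        {
            var filePath = endpoint.GetValue<string>("FilePath");
            if (!string.IsNullOrEmpty(filePath))
            {
                // Linux/Mac: certificate in a .pfx file on disk.
                return new X509Certificate2(filePath, endpoint.GetValue<string>("Password"));
            }

            // Windows: certificate in a certificate store, found by subject name.
            using (var store = new X509Store(
                endpoint.GetValue<string>("StoreName"),
                endpoint.GetValue<StoreLocation>("StoreLocation")))
            {
                store.Open(OpenFlags.ReadOnly);
                var certificates = store.Certificates.Find(
                    X509FindType.FindBySubjectName,
                    endpoint.GetValue<string>("Host"),
                    validOnly: false);
                return certificates[0];
            }
        }
    }

And the call in Program.cs:

    public class Program
    {
        public static void Main(string[] args) => BuildWebHost(args).Run();

        public static IWebHost BuildWebHost(string[] args) =>
            WebHost.CreateDefaultBuilder(args)
                .UseStartup<Startup>()
                .ConfigureEndpoints()
                .Build();
    }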

Windows sample configuration

To configure your endpoints and HTTPS settings on Windows you could then put the following into your appsettings.Development.json, which configures an HTTPS endpoint for your application using a certificate in a certificate store:
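
Something along these lines, matching the keys read by the ConfigureEndpoints sketch above (ports and store values are illustrative):

    {
      "HttpServer": {
        "Endpoints": {
          "Http": {
            "Host": "localhost",
            "Port": 8080,
            "Scheme": "http"
          },
          "Https": {
            "Host": "localhost",
            "Port": 8443,
            "Scheme": "https",
            "StoreName": "My",
            "StoreLocation": "CurrentUser"
          }
        }
      }
    }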

Linux and Mac sample configuration

On Linux or Mac your appsettings.Development.json would look something like this, where your certificate is specified using a file path:
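
Again, the values are illustrative; the file path points at the https.pfx generated earlier:

    {
      "HttpServer": {
        "Endpoints": {
          "Http": {
            "Host": "localhost",
            "Port": 8080,
            "Scheme": "http"
          },
          "Https": {
            "Host": "localhost",
            "Port": 8443,
            "Scheme": "https",
            "FilePath": "https.pfx",
            "Password": "<password>"
          }
        }
      }
    }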

You can then use the user secrets tool, environment variables, or some secure store such as Azure KeyVault to store the password of your certificate using the HttpServer:Endpoints:Https:Password configuration key instead of storing the password in a file that goes into source control.

For example, to store the certificate password as a user secret during development, run the following command from your project:

dotnet user-secrets set HttpServer:Endpoints:Https:Password <password>

To override the certificate password using an environment variable, create an environment variable named HttpServer:Endpoints:Https:Password (or HttpServer__Endpoints__Https__Password if your system does not allow :) with the value of the certificate password.

Run your application

When running from Visual Studio you can change the default launch URL for your application to use the HTTPS address by modifying the launchSettings.json file:
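
Something like this, where the profile name is whatever your app uses and the port matches the HTTPS endpoint configured above:

    {
      "profiles": {
        "MyApp": {
          "commandName": "Project",
          "launchBrowser": true,
          "launchUrl": "https://localhost:8443/",
          "environmentVariables": {
            "ASPNETCORE_ENVIRONMENT": "Development"
          }
        }
      }
    }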

Redirect from HTTP to HTTPS

When you set up your site to use HTTPS by default, you typically want to allow HTTP requests, but have them redirected to the corresponding HTTPS address. In ASP.NET Core this can be accomplished using the URL rewrite middleware. Place the following code in the Configure method of your Startup class:
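
A sketch using the rewrite middleware from Microsoft.AspNetCore.Rewrite; the status code and port are illustrative and should match your HTTPS endpoint:

    public void Configure(IApplicationBuilder app, IHostingEnvironment env)
    {
        // Redirect all HTTP requests to the HTTPS endpoint with a 301.
        app.UseRewriter(new RewriteOptions()
            .AddRedirectToHttps(301, 8443));

        app.UseMvc();
    }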

Conclusion

With a little bit of work you can set up your ASP.NET Core 2.0 site to always use HTTPS. For a future release we are working to simplify setting up HTTPS for ASP.NET Core apps and we plan to enable HTTPS in the project templates by default. We will share more details on these improvements as they become publicly available.

Testing ASP.NET Core MVC web apps in-memory

This post was written and submitted by Javier Calvarro Nelson, a developer on the ASP.NET Core MVC team

Testing is an important part of the development process of any app. In this blog post we’re going to explore how we can test an ASP.NET Core MVC app using an in-memory server. This approach has several advantages:

  • It’s very fast because it does not start a real server
  • It’s reliable because there is no need to reserve ports or clean up resources after it runs
  • It’s easier than other ways of testing your application, such as using an external test driver
  • It allows testing of traits in your application that are hard to unit test, like ensuring your authorization rules are correct

The main shortcoming of this approach is that it’s not well suited to test applications that heavily rely on JavaScript. That said, if you’re writing a traditional web app or an API then all the benefits mentioned above apply.

For testing an MVC app we’re going to use TestServer. TestServer is an in-memory implementation of a server for ASP.NET Core apps, akin to Kestrel or HTTP.sys.

Creating and setting up the projects

Start by creating an MVC app using the following command:

dotnet new mvc -au Individual -uld --use-launch-settings -o .\TestingMVC\src\TestingMVC

Create a test project with the following command:

dotnet new xunit -o .\TestingMVC\test\TestingMVC.Tests

Next create a solution, add the projects to the solution and add a reference to the app project from the test project:

dotnet new sln
dotnet sln add .\src\TestingMVC\TestingMVC.csproj
dotnet sln add .\test\TestingMVC.Tests\TestingMVC.Tests.csproj
dotnet add .\test\TestingMVC.Tests\TestingMVC.Tests.csproj reference .\src\TestingMVC\TestingMVC.csproj

Add references to the components we’re going to use for testing by adding the following item group to the test project file:
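
An item group along these lines works; the versions shown are those current around ASP.NET Core 2.0 and are illustrative:

    <ItemGroup>
      <PackageReference Include="Microsoft.AspNetCore.All" Version="2.0.3" />
      <PackageReference Include="Microsoft.AspNetCore.TestHost" Version="2.0.3" />
      <PackageReference Include="Microsoft.NET.Test.Sdk" Version="15.5.0" />
      <PackageReference Include="xunit" Version="2.3.1" />
      <PackageReference Include="xunit.runner.visualstudio" Version="2.3.1" />
    </ItemGroup>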

Now, we can run dotnet restore on the project or the solution and we can move on to writing tests.

Writing a test to retrieve the page at ‘/’

Now that we have our projects set up, let’s write a test that will serve as an example of how other tests will look.

We’re going to start by changing Program.cs in our app project to look like this:
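
A sketch of the reshaped Program.cs, matching the description below:

    public class Program
    {
        public static void Main(string[] args)
        {
            BuildWebHost(args).Run();
        }

        public static IWebHost BuildWebHost(string[] args) =>
            CreateWebHostBuilder(args).Build();

        // Exposed so tests can create the same builder the app uses and chain
        // extra configuration (content root, test services) before building.
        public static IWebHostBuilder CreateWebHostBuilder(string[] args) =>
            WebHost.CreateDefaultBuilder(args)
                .UseStartup<Startup>();
    }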

In the snippet above, we’ve changed the method IWebHost BuildWebHost(string[] args) to call a new method IWebHostBuilder CreateWebHostBuilder(string[] args) within it. The reason for this is that we want to allow our tests to configure the IWebHostBuilder in the same way the app does and to allow making changes required by tests. (By chaining calls on the WebHostBuilder.)

One example of this will be setting the content root of the app when we’re running the server in a test. The content root needs to be based on the application’s root, not the test’s root.

Now, we can create a test like the one below to get the contents of our home page. This test will fail because we’re missing a couple of things that we describe below.
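
A sketch of such a test; the relative path walks from the test bin folder up to the solution root and into the app project, per the layout created above:

    using System;
    using System.IO;
    using System.Net;
    using System.Threading.Tasks;
    using Microsoft.AspNetCore.Hosting;
    using Microsoft.AspNetCore.TestHost;
    using Xunit;

    public class HomePageTests
    {
        [Fact]
        public async Task CanGetHomePage()
        {
            // Build the host the same way the app does, but point the content
            // root at the app's project folder rather than the test bin folder.
            var contentRoot = Path.GetFullPath(Path.Combine(
                AppContext.BaseDirectory, "..", "..", "..", "..", "..", "src", "TestingMVC"));

            var builder = Program.CreateWebHostBuilder(Array.Empty<string>())
                .UseContentRoot(contentRoot);

            using (var server = new TestServer(builder))
            using (var client = server.CreateClient())
            {
                // The request is processed entirely in memory, no network involved.
                var response = await client.GetAsync("/");
                Assert.Equal(HttpStatusCode.OK, response.StatusCode);
            }
        }
    }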

The test above can be decomposed into the following actions:

  • Create an IWebHostBuilder in the same way that my app creates it
  • Override the content root of the app to point to the app’s project root instead of the bin folder of the test app. (.\src\TestingMVC instead of .\test\TestingMvc.Tests\bin\Debug\netcoreapp2.0)
  • Create a test server from the WebHost builder
  • Create an HttpClient that can be used to communicate with our app. (This uses an internal mechanism that sends the requests in-memory – no network involved.)
  • Send an HTTP request to the server using the client
  • Ensure the status code of the response is correct

Requirements for Razor views to run in a test context

If we try to run the test above, we will probably get an HTTP 500 error instead of an HTTP 200 success. The reason for this is that the dependency context of the app is not correctly set up in our tests. In order to fix this, there are a few actions we need to take:

  • Copy the .deps.json file from our app to the bin folder of the testing project
  • Disable shadow copying assemblies

For the first bullet point, we can create a target file like the one below and include it in our testing project file as follows:
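
A target along these lines copies the .deps.json of every referenced project to the test output folder; the file name CopyDepsFiles.targets is arbitrary:

    <!-- CopyDepsFiles.targets -->
    <Project>
      <Target Name="CopyDepsFiles" AfterTargets="Build" Condition="'$(TargetFramework)'!=''">
        <ItemGroup>
          <DepsFilePaths Include="$([System.IO.Path]::ChangeExtension('%(_ResolvedProjectReferencePaths.FullPath)', '.deps.json'))" />
        </ItemGroup>
        <Copy SourceFiles="%(DepsFilePaths.FullPath)" DestinationFolder="$(OutputPath)" Condition="Exists('%(DepsFilePaths.FullPath)')" />
      </Target>
    </Project>

    <!-- In TestingMVC.Tests.csproj -->
    <Import Project="CopyDepsFiles.targets" />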

For the second bullet point, the implementation is dependent on what testing framework we use. For xUnit, add an xunit.runner.json file in the root of the test project (set it to Copy Always) like the one below:
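
    {
      "shadowCopy": false
    }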

This step is subject to change at any point; for more information look at the xUnit docs at http://xunit.github.io/#documentation.

Now if you re-run the sample test, it will pass.

Summary

  • We’ve seen how to create in-memory tests for an MVC app
  • We’ve discussed the requirements for setting up the app to find static files and find and compile Razor views in the context of a test:
    • Set up the content root in the tests to the app’s root folder
    • Ensure the test project references all the assemblies in the app
    • Copy the app’s deps file to the bin folder of the test project
    • Disable shadow copying in your testing framework of choice
  • We’ve shown how to write a functional test in-memory using TestServer and the same configuration your app uses when running on a real server in Production

The source code of the completed project is available here: https://github.com/aspnet/samples/tree/master/samples/aspnetcore/mvc/testing/TestingMVC

Happy testing!

Take a Break with Azure Functions

So, it’s Christmas time. The office is empty, the boss is away, and you’ve got a bit of free time on your hands. How about learning a new skill and having some fun?

Azure Functions is a serverless technology that executes code based on various triggers (e.g. a URL is called, an item is placed on a queue, a file is added to blob storage, a timer goes off). There are all sorts of things you can do with Azure Functions, like running high CPU-bound calculations, calling various web services and reporting results, sending messages to groups – and nearly anything you can imagine. But unlike traditional applications and services, there’s no need to set up an application host or server that’s constantly running, waiting to respond to requests or triggers. Azure Functions are deployed as and when needed, to as many servers as needed, to meet the demands of incoming requests. There’s no need to set up and maintain hosting infrastructure, you get automatic scaling, and – best of all – you only pay for the cycles used while your functions are being executed.

Want to have a go and try your hand at the latest in web technologies? Follow along to get started with your own Azure Functions.

In this post I’ll show you how to create an Azure Function that triggers every 30 minutes and writes a note into your slack channel to tell you to take a break. We’ll create a new Function app, generate the access token for Slack, then run the function locally.

Prerequisites:

  • Visual Studio 2017 with the Azure development workload installed (this includes the Azure Functions tools and the Azure Storage Emulator)
  • A Slack workspace where you have permission to install apps

Create a Function App (Timer Trigger)

We all know how important it is to take regular breaks if you spend all day sitting at a desk, right? So, in this tutorial, we’ll use a Timer Trigger function to post a message to a Slack channel at regular intervals to remind you (and your whole team) to take a break. A Timer Trigger is a type of Azure Function that is triggered to run on regular time intervals.

Just run it

If you want to skip ahead and run the function locally, fetch the source from this repo, insert the appropriate Slack channel(s) and OAuth token in the local.settings.json file, start the Azure Storage Emulator, then Run (or Debug) the Functions app in Visual Studio.

Step-by-step guide
  1. Open Visual Studio 2017 and select File->New Project.
  2. Select Azure Functions under the Visual C# category.
  3. Provide a name (e.g. TakeABreakFunctionApp) and press OK.
    The New Function Project dialog will open.
  4. Select Azure Functions v1 (.NET Framework), choose Timer trigger and press OK.
    Note: This will also work with Azure Functions v2, but for this tutorial I’ve chosen v1, since v2 is still in preview.

    New Timer Trigger

    A new solution is created with a Functions App project and single class called Function1 that contains a basic Timer trigger.

  5. Edit Function1.cs.
    • Add helper methods:
      • Env (for fetching environment variables)
      • SendHttpRequest (for sending authenticated http requests)
      • SendMessageToSlack (for generating and sending the appropriate Slack request – based on environment variables)
    • Update method: Run
      • Change the return type to async Task.
      • Add an asynchronous call to the SendMessageToSlack method.
      • Update the Cron settings for the TimerTrigger attribute.
    • Add appropriate Using statements.

  6. The completed code should look like this:

    using System;
    using System.Net.Http;
    using System.Net.Http.Headers;
    using System.Threading.Tasks;
    using Microsoft.Azure.WebJobs;
    using Microsoft.Azure.WebJobs.Host;
    
    namespace TakeABreakFunctionsApp
    {
        public static class Function1
        {
            [FunctionName("Function1")]
            public static async Task Run([TimerTrigger("0 */30 * * * *")]TimerInfo myTimer, TraceWriter log)
            {
                log.Info($"C# Timer trigger function executed at: {DateTime.Now}");
                await SendMessageToSlack("You're working too hard. How about you take a break?", log);
            }
    
            private static async Task SendMessageToSlack(string message, TraceWriter log)
            {
                // Fetch environment variables (from local.settings.json when run locally)
                string channel = Env("ChannelToNotify");
                string slackbotUrl = Env("SlackbotUrl");
                string bearerToken = Env("SlackOAuthToken");
    
                // Prepare request and send via Http
                log.Info($"Sending to {channel}: {message}");
                string requestUrl = $"{slackbotUrl}?channel={Uri.EscapeDataString(channel)}&text={Uri.EscapeDataString(message)}";
                await SendHttpRequest(requestUrl, bearerToken);
            }
    
            private static async Task SendHttpRequest(string requestUrl, string bearerToken)
            {
                HttpClient httpClient = new HttpClient();
                httpClient.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue("Bearer", bearerToken);
                HttpResponseMessage response = await httpClient.GetAsync(requestUrl);
            }
    
            private static string Env(string name) => Environment.GetEnvironmentVariable(name, EnvironmentVariableTarget.Process);
        }
    }
  7. Edit local.settings.json.
    Add the following environment variables.
    • SlackbotUrl – The URL for the Slack API to post chat messages
    • SlackOAuthToken – An OAuth token that grants permission for your app to send messages to a Slack workspace.
      – See below for help generating a Slack OAuth token.
    • ChannelToNotify – The Slack channel to send messages to
  8. Your local.settings.json should look something like this:
    (Your SlackOAuthToken and ChannelToNotify variables will be specific to your Slack workspace.)

    {
      "IsEncrypted": false,
      "Values": {
        "AzureWebJobsStorage": "UseDevelopmentStorage=true",
        "AzureWebJobsDashboard": "UseDevelopmentStorage=true",
        "SlackbotUrl": "https://slack.com/api/chat.postMessage",
        "SlackOAuthToken": "[insert your generated token]",
        "ChannelToNotify": "[your channel id]"
      }
    }

Your Functions app is now ready to run! You just need to grab an authorization token for your Slack workspace.

Generate an OAuth token for your app to send messages to your Slack workspace

Before you can post a message to a Slack workspace, you must first tell Slack about the app and assign specific permissions for the app to send messages as a bot. Once you’ve installed the app to the Slack workspace, you will be issued an OAuth token that you can send with your http requests. For full details, you can follow the instructions here. Otherwise, follow the steps below.

  • Click here to register your new Functions app with your Slack workspace.
  • Provide a name (e.g. “Take a Break”) and select the appropriate Slack workspace, then press Create App.
  • Create A Slack App

    When the app is registered with Slack, the Slack API management page opens for the new app.

  • Select OAuth & Permissions from the navigation menu on the left.
  • In the OAuth & Permissions page, scroll down to Scopes, select the permission chat:write:bot, then select Save Changes.
  • Select Permission Scopes

  • After the scope permissions have been created and the page has refreshed, scroll to the top of the OAuth & Permissions page and select Install App to Workspace.
  • Slack Install App to Workspace

  • A confirm page opens. Review the details, then click Authorize.
  • Your OAuth Access Token is generated and presented at the top of the page.
  • OAuth Access Token

  • Copy this token and add it to your local.settings.json as the value for SlackOAuthToken.

    Note: The OAuth access token is a secret and should not be made public. If you check this token into a public source control system like GitHub, Slack will find it and permanently disable it!

Run your Functions App on your local machine

Now that you’ve registered your app with Slack and have provided a valid OAuth token in your local.settings.json, you can run the Function locally.

Start the local Storage Emulator

You can configure your function to use a storage account on Azure. But if your app is configured to use development storage (which is the default for new Functions), then it will run against the local Azure Storage Emulator. Therefore, you’ll need to make sure the Storage Emulator is started before running your Functions app.

  • Open the Windows Start Menu and search for “Storage Emulator”.

Microsoft Azure Storage Emulator will launch. You can manage it via the icon in the Windows System Tray.

Start the Function app from Visual Studio
  • Press Ctrl+F5 to build and run the Functions app.
  • If prompted, update to the latest Functions tools.
  • A new command window launches and displays the log output from the Functions app.

Function App Running

After a certain period of time, the Timer trigger will fire and send a message to your Slack workspace.

Function Timer Executes

You should see the message appear in the appropriate Slack channel.

Message Appears In Slack

Feel free to play around with the Timer Cron options in the Run method’s attributes to configure the function to execute at the intervals you’d like. Here are some example Cron settings.
        Trigger Cron format: (second minute hour day month day-of-week)
        (“0 */15 6-20 * * *”) = Every 15 minutes, between 06:00 AM and 08:59 PM
        (“0 0 0-5,21-23 * * *”) = On the hour from 12:00 AM to 05:00 AM and from 09:00 PM to 11:00 PM

Congratulations! You’ve written a working Azure Functions App with a Timer trigger function.

What’s next?


Publish your Functions App to the cloud
So that your Functions app is always available, and can be accessed globally (e.g. for HTTP trigger types), you can publish your app to the cloud. This article describes the process of publishing a Functions app to Azure.

Experiment with other Functions types
There’s an excellent collection of open-source samples available here. Poke around and see what takes your interest.

Tell us about your experience with Azure Functions
We’d love to hear about your experience with Azure Functions. If you’ve got a minute, please complete this short survey.
As always, feel free to leave comments and questions in the space below.

Happy holidays!

Justin Clareburt
Senior Program Manager
Visual Studio and .NET

Announcing Preview 1 of ASP.NET MVC 5.2.4, Web API 5.2.4, and Web Pages 3.2.4

Today we are releasing Preview 1 of ASP.NET MVC 5.2.4, Web API 5.2.4, and Web Pages 3.2.4 on NuGet. This release contains some minor bug fixes and a couple of new features specifically targeted at enabling .NET Standard support for the ASP.NET Web API Client.

You can find the full list of features and bug fixes for this release in the release notes.

To update an existing project to use this preview release run the following commands from the NuGet Package Manager Console for each of the packages you wish to update:

Install-Package Microsoft.AspNet.Mvc -Version 5.2.4-preview1
Install-Package Microsoft.AspNet.WebApi -Version 5.2.4-preview1
Install-Package Microsoft.AspNet.WebPages -Version 3.2.4-preview1

ASP.NET Web API Client support for .NET Standard

The ASP.NET Web API Client package provides strongly typed extension methods for accessing Web APIs using a variety of formats (JSON, XML, form data, custom formatter). This saves you from having to manually serialize or deserialize the request or response data. It also enables using .NET types to share type information about the request or response with the server and client.

This release adds support for .NET Standard 2.0 to the ASP.NET Web API Client. .NET Standard is a standardized set of APIs that when implemented by .NET platforms enables library sharing across .NET implementations. This means that the Web API client can now be used by any .NET platform that supports .NET Standard 2.0, including cross-platform ASP.NET Core apps that run on Windows, macOS, or Linux. The .NET Standard version of the Web API client is also fully featured (unlike the PCL version) and has the same API surface area as the full .NET Framework implementation.

For example, let’s use the new .NET Standard support in the ASP.NET Web API Client to call a Web API from an ASP.NET Core app running on .NET Core. The code below shows an implementation of a ProductsClient that uses the Web API client helper methods (ReadAsAsync<T>(), Post/PutAsJsonAsync<T>()) to get, create, update, and delete products by making calls to a products Web API:
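
A sketch of such a client; the Product type and the API routes here are illustrative:

    using System;
    using System.Net.Http;
    using System.Threading.Tasks;

    public class Product
    {
        public int Id { get; set; }
        public string Name { get; set; }
        public decimal Price { get; set; }
    }

    public class ProductsClient
    {
        private readonly HttpClient _client;

        public ProductsClient(string baseAddress)
        {
            _client = new HttpClient { BaseAddress = new Uri(baseAddress) };
        }

        public async Task<Product> GetProductAsync(int id)
        {
            var response = await _client.GetAsync($"/api/products/{id}");
            response.EnsureSuccessStatusCode();
            // ReadAsAsync<T> picks a formatter based on the response content type.
            return await response.Content.ReadAsAsync<Product>();
        }

        public async Task<Product> CreateProductAsync(Product product)
        {
            var response = await _client.PostAsJsonAsync("/api/products", product);
            response.EnsureSuccessStatusCode();
            return await response.Content.ReadAsAsync<Product>();
        }

        public async Task UpdateProductAsync(Product product)
        {
            var response = await _client.PutAsJsonAsync($"/api/products/{product.Id}", product);
            response.EnsureSuccessStatusCode();
        }

        public async Task DeleteProductAsync(int id)
        {
            var response = await _client.DeleteAsync($"/api/products/{id}");
            response.EnsureSuccessStatusCode();
        }
    }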

Note that all the serialization and deserialization is handled for you. The ReadAsAsync<T>() methods will also handle selecting an appropriate formatter for reading the response based on its content type (JSON, XML, etc.).

This ProductsClient can then be used to call the Products Web API from your Razor Pages in an ASP.NET Core 2.0 app running on .NET Core (or from any .NET platform that supports .NET Standard 2.0). For example, here’s how you can use the ProductsClient from the page model for a page that lets you edit the details for a product:
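
A sketch of such a page model; the base address is illustrative, and in a real app you would inject a configured client rather than creating one inline:

    using System.Threading.Tasks;
    using Microsoft.AspNetCore.Mvc;
    using Microsoft.AspNetCore.Mvc.RazorPages;

    public class EditModel : PageModel
    {
        private readonly ProductsClient _client = new ProductsClient("http://localhost:5000");

        [BindProperty]
        public Product Product { get; set; }

        public async Task<IActionResult> OnGetAsync(int id)
        {
            Product = await _client.GetProductAsync(id);
            return Page();
        }

        public async Task<IActionResult> OnPostAsync()
        {
            if (!ModelState.IsValid)
            {
                return Page();
            }

            await _client.UpdateProductAsync(Product);
            return RedirectToPage("./Index");
        }
    }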

For more details on using the ASP.NET Web API Client see Call a Web API From a .NET Client (C#).

Please try out Preview 1 of ASP.NET MVC 5.2.4, Web API 5.2.4, and Web Pages 3.2.4 and let us know what you think! Any feedback can be submitted as issues on GitHub. Assuming everything with this preview goes smoothly, we expect to ship a stable release of these packages by the end of the month.

Enjoy!

64 bit ASP.NET Core on Azure App Service

When you create an Azure App Service, .NET Core is already pre-installed. However, only the 32-bit .NET Core runtime is installed. In this post we will look at a few ways to get a 64-bit runtime on Azure App Service.

During the 2.1 timeframe we are hoping to have both 32 and 64 bit runtimes installed as well as enabling the portal experience to switch between the two.

1. Deploy a self-contained application

Self-contained deployments don’t require .NET Core to be installed on a machine, because they carry the runtime they need with them. Because of this you can deploy a 64-bit self-contained deployment to Azure App Service. For information about self-contained deployments you can look here:

Information: https://docs.microsoft.com/en-us/dotnet/core/deploying/

CLI instructions: https://docs.microsoft.com/en-us/dotnet/core/deploying/deploy-with-cli

Visual Studio instructions: https://docs.microsoft.com/en-us/dotnet/core/deploying/deploy-with-vs

2. Deploy your own runtime

The pre-installed runtime is installed on a local SSD, but you can copy your own runtime onto your server and modify your application to use that instead. To do this you would:

  1. Download a zip of the x64 runtime that you want to use
  2. Go to the Kudu console (under advanced tools, debug console)
  3. Drag the zip of the runtime onto the file explorer section of the Kudu console. Kudu has a feature that will copy up the zip and extract it on the server. The UI should change as you drag the zip showing you a location to drop the zip in order for this feature to work.
  4. Modify your application’s web.config to use the dotnet.exe that was just extracted on the server

A web.config file is generated for your ASP.NET Core application when you don’t have one in your App. But if your application already contains one, then it will be used instead. Your web.config would look like this:
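
A minimal sketch of such a web.config; the app assembly name MyApp.dll is illustrative:

    <configuration>
      <system.webServer>
        <handlers>
          <add name="aspNetCore" path="*" verb="*" modules="AspNetCoreModule" resourceType="Unspecified" />
        </handlers>
        <!-- processPath points at the dotnet.exe you extracted on the server. -->
        <aspNetCore processPath="[PATH_TO_EXE]" arguments=".\MyApp.dll" stdoutLogEnabled="false" stdoutLogFile=".\logs\stdout" />
      </system.webServer>
    </configuration>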

[PATH_TO_EXE] will point to the location you extracted the dotnet.exe, for example D:\home\dotnet\dotnet.exe. Your application will now use the copy of dotnet.exe that you copied to the server, meaning that it is now using a 64-bit runtime.

NOTE: There are two main caveats with this approach. First, you must service your own runtime: if a new patch of .NET Core comes out, you will need to deploy it yourself to get any improvements. Second, the cold start time of your application will likely be a bit slower, as the runtime is loading from a slower drive.

3. Use Linux Azure App Service

There is no official 32-bit runtime for .NET Core available on Linux. Because of that, if you use Linux Azure App Service then you will have a 64-bit runtime with a normal deployment.

4. Use Web Apps for Containers

Because you deploy your own container with whichever runtime you choose, when using containers you will always have the runtime you want available. You can find more information about Web Apps for Containers here: https://azure.microsoft.com/en-us/services/app-service/containers/

Conclusion

We hope to add 64-bit as a pre-installed option for Azure App Service, but in the meantime you can use the options listed here if you need a 64-bit runtime.

Azure Storage for Serverless .NET Apps in Minutes

Azure Storage is a quick and effortless way to store data for applications; it is highly available, secure, scalable, and redundant. This blog post walks through a simple application that creates a short code for a long URL to easily reference it. It uses Table Storage to map codes to URLs and a Queue to process redirect counts. Everything is handled by serverless Azure Functions. The only prerequisite to build and run locally is Visual Studio 2017 15.5 or later, including the Azure Developer workload. That will automatically install the Azure Storage Emulator you can use to program against tables, queues, blobs, and files on your local machine. You do not have to have an Azure account to run this on your machine.

Build and Test Locally with Function App Host and Azure Storage Emulator

You can download the source code for this project here.

Open Visual Studio 2017 and create a new “Azure Functions” project (the template will be under the “Cloud” category). Pick a name like ShortLink.

Add new Azure Functions Project

In the next dialog, choose “Azure Functions v1”, select “Http Trigger”, pick “Storage Emulator” for the Storage Account, and set Access rights to “Anonymous.”

Choosing the function template

Right-click the name Function1.cs in the Solution Explorer and rename it to LinkShortener.cs. Change the function name to “Set” and update the code to use “href” instead of “name” as follows:
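
The result looks like the standard v1 HTTP trigger template with name replaced by href; a sketch:

    using System.Linq;
    using System.Net;
    using System.Net.Http;
    using System.Threading.Tasks;
    using Microsoft.Azure.WebJobs;
    using Microsoft.Azure.WebJobs.Extensions.Http;
    using Microsoft.Azure.WebJobs.Host;

    public static class LinkShortener
    {
        [FunctionName("Set")]
        public static async Task<HttpResponseMessage> Run(
            [HttpTrigger(AuthorizationLevel.Anonymous, "get", "post", Route = null)]HttpRequestMessage req,
            TraceWriter log)
        {
            log.Info("C# HTTP trigger function processed a request.");

            // Parse the href query parameter.
            string href = req.GetQueryNameValuePairs()
                .FirstOrDefault(q => string.Compare(q.Key, "href", true) == 0)
                .Value;

            if (href == null)
            {
                // Fall back to the request body.
                dynamic data = await req.Content.ReadAsAsync<object>();
                href = data?.href;
            }

            return href == null
                ? req.CreateResponse(HttpStatusCode.BadRequest, "Please pass an href on the query string or in the request body")
                : req.CreateResponse(HttpStatusCode.OK, href);
        }
    }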

Hit F5 to run the function locally. You should see the function console launch and provide you with a list of URLs to access your function.

Endpoint from function app

Access the endpoint from your web browser by copying and pasting the URL for the “Set” operation. You should receive an error message asking you to pass an href. Append the following to the end of the URL:

?href=https://developer.microsoft.com/advocates

You should see the URL echoed back to you. Stop debugging (SHIFT+F5).

Out of the box, the Functions template creates a function app. The function app hosts multiple functions, which are snippets of code that can be triggered by various events. In this example, the code is triggered by an HTTP/HTTPS request. Visual Studio uses attributes to declare the function name and specify the bindings. The log is automatically passed into the method for you to write logging information.

It’s time to add storage!

Table Storage uses a partition (to segment the data) and a row key (to identify a unique data item). The app will use a special partition of “1” to store a key that indicates the next code to use. The short code is generated by a simple algorithm that translates an integer to a string of alphanumeric characters. To store a short code, the partition will be set to the first character of the code, the row key will be the short code, and a target field will contain the full URL. Create a new class file and name it UrlKey.cs. Add this using statement:

using Microsoft.WindowsAzure.Storage.Table;

Then add the class:
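
A sketch; the property name is illustrative:

    public class UrlKey : TableEntity
    {
        // The next numeric id to encode as a short code. Stored in the
        // special partition "1" so there is exactly one key entity.
        public int Id { get; set; }
    }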

Next, add a class named UrlData.cs, include the same “using” statement and define the class like this:
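
Again a sketch with illustrative property names; the partition key will be the first character of the short code and the row key the code itself:

    public class UrlData : TableEntity
    {
        // The full target URL the short code redirects to.
        public string Url { get; set; }

        // How many times the short link has been followed.
        public int Count { get; set; }
    }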

Add the same using statement to the top of the LinkShortener.cs file. Azure Functions provides special bindings that take care of connecting to various resources. Modify the Run method to include a binding for the key and another binding that will be used to write out the URL information.

The Table attributes represent bindings to Table Storage. Different parameters allow behaviors such as passing in existing entries or collections of entries, as well as a CloudTable instance you can think of as the context you use to interact with a specific table. The binding logic will automatically create the table if it doesn’t exist. The key entry is automatically passed in if it exists. This is because the partition and key are included in the binding. If it doesn’t exist, it will be passed as null and you can initialize it and store it as a new entry:
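
A sketch of the modified Run method; the table name "urls" and the key's row key "KEY" are illustrative, and the starting id is arbitrary:

    [FunctionName("Set")]
    public static async Task<HttpResponseMessage> Run(
        [HttpTrigger(AuthorizationLevel.Anonymous, "get", "post", Route = null)]HttpRequestMessage req,
        [Table("urls", "1", "KEY")]UrlKey keyEntry,
        [Table("urls")]CloudTable urlTable,
        TraceWriter log)
    {
        // ... parse the href query parameter as before ...

        if (keyEntry == null)
        {
            // First run: seed the key entity. Larger starting values simply
            // produce longer initial codes.
            keyEntry = new UrlKey { PartitionKey = "1", RowKey = "KEY", Id = 1024 };
            await urlTable.ExecuteAsync(TableOperation.Insert(keyEntry));
        }

        // ... encode the id and save the mapping (shown below) ...
    }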

Next, add the code to turn the numeric key value into an alphanumeric code, then create a new instance of the UrlData class.
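
A sketch of a simple encoding helper plus the lines inside Run that use it; the alphabet is an arbitrary choice:

    private const string Alphabet =
        "abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789";

    private static string Encode(int id)
    {
        // Repeatedly take the remainder to build a base-N string from the id.
        var code = string.Empty;
        while (id > 0)
        {
            code = Alphabet[id % Alphabet.Length] + code;
            id /= Alphabet.Length;
        }
        return code;
    }

    // Inside Run, after the key has been initialized:
    var code = Encode(keyEntry.Id);
    var urlData = new UrlData
    {
        PartitionKey = code[0].ToString(),
        RowKey = code,
        Url = href,
        Count = 0
    };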

The final steps for the redirect loop involve saving the data and updating the key. The response returns the code.
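
The closing lines of Run, continuing the sketch above:

    // Save the new mapping, advance the key, and return the short code.
    await urlTable.ExecuteAsync(TableOperation.Insert(urlData));

    keyEntry.Id++;
    await urlTable.ExecuteAsync(TableOperation.InsertOrReplace(keyEntry));

    return req.CreateResponse(HttpStatusCode.OK, code);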

Now you can test the functionality. Make sure the storage emulator is running by searching for “Storage Emulator” in your applications and clicking on it. It will send a notification when it is ready. Press F5 and paste the same URL used earlier with the query string set. If all goes well, the response should contain the initial value “BNK”. Next, open “Cloud Explorer” (View -> Cloud Explorer) and navigate to local developer storage. Expand table storage and view the two entries. Note the id for the key has been incremented:

Cloud Explorer with local Table Storage

With an entry in storage, the next step is a function that takes the short code and redirects to the full URL. The strategy is simple: check for an existing entry for the code that is passed. If it exists, redirect to the URL, otherwise redirect to a “fallback” (in this case I used my personal blog). The redirect should happen quickly, so the short code is placed on a queue for a separate function to process statistics. Simply declaring the queue with the Queue binding is all it takes for the storage driver to create the queue and add the entry. You are passed an asynchronous collection so you may add multiple queue entries. Anything you add is automatically inserted into the queue. It’s that simple!
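
A sketch of the redirect function; the fallback URL is a hypothetical placeholder:

    [FunctionName("Go")]
    public static async Task<HttpResponseMessage> Go(
        [HttpTrigger(AuthorizationLevel.Anonymous, "get", Route = "Go/{code}")]HttpRequestMessage req,
        string code,
        [Table("urls")]CloudTable urlTable,
        [Queue("counts")]IAsyncCollector<string> queue,
        TraceWriter log)
    {
        var redirectUrl = "https://example.com/"; // hypothetical fallback

        // Look up the short code: partition = first character, row key = code.
        var result = await urlTable.ExecuteAsync(
            TableOperation.Retrieve<UrlData>(code[0].ToString(), code));

        if (result.Result is UrlData data)
        {
            redirectUrl = data.Url;
            // Hand the code off to a queue so statistics don't slow the redirect.
            await queue.AddAsync(code);
        }

        var response = req.CreateResponse(HttpStatusCode.Redirect);
        response.Headers.Location = new System.Uri(redirectUrl);
        return response;
    }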

Run the project again, and navigate to the new “Go” endpoint and pass the “BNK” parameter. Your URL will look something like: http://localhost:7071/api/Go/BNK. You should see it redirect to the page you originally passed in. Refresh your Cloud Explorer and expand the “Queues” section. There should be a new queue named “counts” with a single entry (or more if you tried the redirect multiple times).

Cloud Explorer with local Queue

Processing the queue ties together elements of the previous function. The function uses a queue trigger and will be called with each entry in the queue. The implemented logic simply looks for a matching entry in the table, increments the count, then saves it.
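
A sketch of that queue-triggered function:

    [FunctionName("ProcessQueue")]
    public static async Task ProcessQueue(
        [QueueTrigger("counts")]string code,
        [Table("urls")]CloudTable urlTable,
        TraceWriter log)
    {
        var result = await urlTable.ExecuteAsync(
            TableOperation.Retrieve<UrlData>(code[0].ToString(), code));

        if (result.Result is UrlData data)
        {
            data.Count++;
            await urlTable.ExecuteAsync(TableOperation.Replace(data));
            log.Info($"Updated count for {code} to {data.Count}");
        }
    }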

Run the project, and if your Storage Emulator is running, you should see a call to the queue processing function in the function app console. After it completes, refresh your Cloud Explorer. You should see the queue is now empty and the count has been updated on the URL in Table Storage.

Publish to Azure

It’s great to be able to run and debug locally, but to be useful the app should be hosted in the cloud. This step requires an Azure Account (you can get one for free). Right-click on the ShortLink project and choose “Publish…”. Make sure “Azure Function App” and “Create New” are selected, then click the “Publish” button.

Publish to Azure

In the dialog, give the app a unique name (it must be globally unique so you may have to try a few variations). Choose “New” for the resource group and give it a logical name, then choose “New” for plan. Give the plan a name (I like to use the app name followed by “Link”), choose a region close to you and pick the “Consumption Plan” then press “OK.”

Choose a service plan

Click “Create” to create the necessary assets in Azure. Visual Studio will create the resources for you, build your application, then publish it to Azure. When everything is ready, you will see the message “Publish completed.” in the Output dialog for Build.

Test adding a link (replace “myshortlink” with your own function app name):
http://myshortlink.azurewebsites.net/api/Set?href=https://docs.microsoft.com/azure/storage/


Then test the redirect:
http://myshortlink.azurewebsites.net/api/Go/BNK

You can use the Storage Explorer to attach to Azure and verify the count.

But wait – isn’t Azure Storage supposed to be secure? How did this just work without me entering credentials?

If you don’t specify a connection string, all storage references default to an AzureWebJobsStorage connection key. This is the storage account created automatically to support your function app. In your local project, the local.settings.json file points to development storage (the emulator). When the Azure Function App was created, a connection string was automatically generated for the storage account. The application settings override your local settings, so the application was able to run against the storage account without modification! If you want to connect to a different storage account (for example, if you choose to use CosmosDB for premium table storage) you can simply add a new connection string and specify it as a parameter on the bindings and triggers.

When you publish from Visual Studio, the publish dialog has a link to “Manage Application Settings…”. There, you can add your own settings including any custom connection strings you need, and it will deploy the settings securely to Azure as part of the publish process.

Custom application settings

That’s all there is to it!

Conclusion

There is a lot more you could do with the application. For example, the application “as is” does not have any authentication, meaning anyone could access your link shortener and create short links. You will want to change the access to “Function level” for the “Set” function and secure the website with an SSL certificate to prevent anonymous access. For a more complete version of the application that includes logging, monitoring, and a web front end to paste links, read Build a Serverless Link Shortener Faster than you can Finish your Latte.

The intent of this post was to illustrate how easy and effective the experience of integrating Azure Storage with your application can be. There are SDKs available to perform the same functions from desktop and mobile applications as well. Perhaps the biggest benefit of leveraging storage is the low cost. I run a production link shortener that processes several hundred hits per day, and my monthly cost for both the serverless function and the storage is less than one dollar. Azure Storage is both accessible and cost effective.

Here is the full project.

Enjoy!

ASP.NET Core 2.1 roadmap

Five months ago, we shipped ASP.NET Core 2.0 as a foundational release for our high performance, cross-platform web framework for .NET and .NET Core. Since then we have been hard at work to deliver the next wave of features in ASP.NET Core 2.1. Below is an outline of the features and improvements that are planned for this release, which is targeted for mid-year 2018.

MVC

Razor Pages improvements

In ASP.NET Core 2.0 we introduced Razor Pages as a new page-based model for building Web UI. In 2.1 we are making a variety of improvements to Razor Pages to make it even more productive.

Razor Pages in an area

Areas provide a way to partition a large MVC app into smaller functional groupings each with their own controllers and views. In 2.1 we will add support for areas to Razor Pages so that areas can have their own pages directory.

Support for /Pages/Shared

In 2.1 Razor Pages will fall back to finding Razor assets such as layouts and partials in /[pages root]/Shared before falling back to /Views/Shared. In addition to this, pages themselves can now be in the /[pages root]/Shared path and they will be routable as if they existed directly at /[pages root]/, unless a page actually exists at that location, in which case it will be served instead.

Bind all properties on a page or controller

Starting in 2.0 you could use the BindPropertyAttribute to specify that a property on a page model or controller should be bound to data from the request. If you have lots of properties that you want to bind, then this can get tedious and verbose. In 2.1 we will add support for specifying that all properties on a page or controller should be bound by putting the BindPropertyAttribute on the class.

Implement IPageFilter on page models

We will implement IPageFilter on page models, so that you can run logic before or after page handlers run for a given request, much the same way that you can implement IActionFilter on a controller.

Functional testing infrastructure

Writing functional tests for an MVC app allows you to test handling of a request end-to-end including running routing, filters, controllers, actions, views and pages. While writing in-memory functional tests for MVC apps is possible with ASP.NET Core 2.0 it requires significant setup.

For 2.1 we will provide a test fixture implementation that handles the typical pitfalls when trying to test MVC applications using TestServer:

  • Copy the .deps file from your project into the test assembly bin folder
  • Specify the content root of the application’s project root so that static files and views can be found
  • Streamline setting up your app on TestServer

A sample test that uses the new test fixture with xUnit looks like this:
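
A sketch of the shape this takes; the fixture type name (WebApplicationTestFixture<TStartup>, from the preview Microsoft.AspNetCore.Mvc.Testing package described in the linked announcement) was still settling at the time and may change:

    using System.Net;
    using System.Net.Http;
    using System.Threading.Tasks;
    using Microsoft.AspNetCore.Mvc.Testing;
    using Xunit;

    public class HomePageTests : IClassFixture<WebApplicationTestFixture<Startup>>
    {
        private readonly HttpClient _client;

        public HomePageTests(WebApplicationTestFixture<Startup> fixture)
        {
            // The fixture handles content root, deps file, and TestServer setup.
            _client = fixture.CreateClient();
        }

        [Fact]
        public async Task CanGetHomePage()
        {
            var response = await _client.GetAsync("/");
            Assert.Equal(HttpStatusCode.OK, response.StatusCode);
        }
    }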

See https://github.com/aspnet/announcements/issues/275 for additional details.

Web API improvements

ASP.NET Core gives you a single unified framework for building both Web UI and Web APIs. In 2.1 we are making various improvements to the framework for building Web APIs.

Better Input Processing

We want the experience around invalid input to be more automatic and more consistent. More concretely we’re going to:

  • Create a programming model where your action code isn’t called when a request has validation errors (see “Enhanced Web API controller conventions” below)
  • Improve the fidelity of error responses when the request body fails to deserialize or the JSON is invalid
  • Enable placing validation attributes directly on action parameters

Support for Problem Details

We are adding support for RFC 7807 – Problem Details for HTTP APIs as a standardized format for returning machine-readable error responses from HTTP APIs. You can return a Problem Details response from your API action using the ValidationProblem() helper method.
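
A sketch of what that looks like in an action (the Product type is illustrative):

    [HttpPost]
    public IActionResult Create(Product product)
    {
        if (!ModelState.IsValid)
        {
            // Produces a 400 response whose body is a machine-readable
            // application/problem+json payload describing the model errors.
            return ValidationProblem();
        }

        // ... save the product ...
        return Ok(product);
    }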

Improved OpenAPI specification support

We want to embrace the OpenAPI specification (previously called “Swagger”) and make Web APIs built with ASP.NET Core more descriptive. Today you need a lot of “attribute soup” to get a reasonable OpenAPI spec from ASP.NET Core. We plan to introduce an opinionated layer that infers the possible responses based on what you’re likely to have done with your actions (attributes still win when you want to be explicit).

For example, actions that return IActionResult need to be attributed to indicate the return type so that the schema of the response body can be determined. Actions that return the response type directly don’t need to be attributed, but then you lose the flexibility to return any action result.

We will introduce a new ActionResult<T> type that allows you to return either the response type or any action result, while still indicating the response type.

Enhanced Web API controller conventions and ActionResult<T>

We are adding the [ApiController] attribute as the way to opt-in to Web API specific conventions and behaviors. These behaviors include:

  • Automatically responding with a 400 when validation errors occur
  • Infer smarter defaults for action parameters: [FromBody] for complex types, [FromRoute] when possible, otherwise [FromQuery]
  • Requires attribute routing – actions are not accessible by convention-based routes

Here’s an example Web API controller that uses these new enhancements:
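
A sketch; the Product type and ProductsRepository are illustrative:

    [ApiController]
    [Route("api/[controller]")]
    public class ProductsController : ControllerBase
    {
        private readonly ProductsRepository _repository; // hypothetical data access type

        public ProductsController(ProductsRepository repository)
        {
            _repository = repository;
        }

        [HttpGet("{id}")]
        public ActionResult<Product> GetById(int id)
        {
            var product = _repository.Find(id);
            if (product == null)
            {
                return NotFound(); // any action result...
            }
            return product; // ...or the response type directly
        }

        [HttpPost]
        public ActionResult<Product> Create(Product product) // [FromBody] inferred; 400 sent automatically on validation errors
        {
            _repository.Add(product);
            return CreatedAtAction(nameof(GetById), new { id = product.Id }, product);
        }
    }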

Here’s what the Web API would look like if you were to implement it with 2.0:
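
A sketch of the 2.0 equivalent, where the attributes and validation checks are manual:

    [Route("api/[controller]")]
    public class ProductsController : ControllerBase
    {
        private readonly ProductsRepository _repository;

        public ProductsController(ProductsRepository repository)
        {
            _repository = repository;
        }

        [HttpGet("{id}")]
        [ProducesResponseType(typeof(Product), 200)]
        [ProducesResponseType(404)]
        public IActionResult GetById(int id)
        {
            var product = _repository.Find(id);
            if (product == null)
            {
                return NotFound();
            }
            return Ok(product);
        }

        [HttpPost]
        [ProducesResponseType(typeof(Product), 201)]
        [ProducesResponseType(400)]
        public IActionResult Create([FromBody] Product product) // attribute required in 2.0
        {
            if (!ModelState.IsValid)
            {
                return BadRequest(ModelState); // manual validation check
            }
            _repository.Add(product);
            return CreatedAtAction(nameof(GetById), new { id = product.Id }, product);
        }
    }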

JSON Patch improvements

For JSON Patch we will add support for the test operator and for patching dictionaries with non-string keys.

Partial Tag Helper

Razor partial views are a convenient way to include some Razor content into a view or page. Today there are four different methods for rendering a partial on a page that have different trade-offs and limitations (Html.Partial vs Html.RenderPartial, sync vs async). Rendering partials also suffers from a limitation: the prefix generated for rendered form elements, which is based on the given model, must be handled manually for each partial rendering.

The new partial Tag Helper makes rendering a partial straightforward and elegant. You can specify the model using model expression syntax and the partial Tag Helper will handle setting up the correct HTML field prefix for you:
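
A sketch, assuming a partial view named _ProductPartial and a Product property on the current model; the for attribute takes a model expression and sets up the HTML field prefix accordingly:

    <partial name="_ProductPartial" for="Product" />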

Razor UI in a class library

ASP.NET Core 2.1 will make it easier to build and include Razor based UI in a library and share it across multiple projects. A new Razor SDK will enable building Razor files into a class library project that can then be packaged into a NuGet package. Views and pages in libraries will automatically be discovered and can be overridden by the application. By integrating Razor compilation into the build, the app startup time is also significantly faster, while still allowing for fast updates to your Razor views and pages at runtime as part of an iterative development workflow.

SignalR

For ASP.NET Core 2.1 we are porting ASP.NET SignalR to ASP.NET Core to support real-time web scenarios. As previously announced, ASP.NET Core SignalR will also include a number of improvements, including a simplified scale-out model, a new JavaScript client with no jQuery dependency, a new compact binary protocol based on MessagePack, support for custom protocols, a new streaming response model, and support for clients based on bare WebSockets. You can start trying out ASP.NET Core SignalR today by checking out the samples.

WebHooks

WebHooks are a lightweight HTTP pattern for event notification across the web. WebHooks enable services to send event notifications over HTTP to registered subscribers. For 2.1 we are porting a subset of the ASP.NET WebHooks receivers to ASP.NET Core in a way that integrates with the ASP.NET Core idioms.

For 2.1 we plan to port the following receivers:

  • Microsoft Azure alerts
  • Microsoft Azure Kudu notifications
  • Microsoft Dynamics CRM
  • Bitbucket
  • Dropbox
  • GitHub
  • MailChimp
  • Pusher
  • Salesforce
  • Slack
  • Stripe
  • Trello
  • WordPress

To use a WebHook receiver in ASP.NET Core WebHooks you attribute a controller action that you want to handle the notification. For example, here’s how you can handle an Azure alert:
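
A sketch of the shape this takes; the AzureAlertWebHook attribute and AzureAlertNotification type are assumed to come from the Azure alerts receiver package:

    public class AzureAlertController : ControllerBase
    {
        [AzureAlertWebHook]
        public IActionResult AzureAlert(string id, AzureAlertNotification data)
        {
            if (!ModelState.IsValid)
            {
                return BadRequest(ModelState);
            }

            // Handle the alert notification here...
            return Ok();
        }
    }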

Improvements for GDPR

The ASP.NET Core 2.1 project templates will include some extension points to help you meet some of your EU General Data Protection Regulation (GDPR) requirements.

A new cookie consent feature will allow you to ask for (and track) consent from your users for storing personal information. This can be combined with a new cookie feature where cookies can be marked as essential or non-essential. If a user has not consented to data collection, non-essential cookies will not be sent to the browser. You will still need to create the wording on the UI prompt and a suitable privacy policy which matches the GDPR analysis you or your company have performed, along with implementing the logic for determining under what conditions a given user should be asked for consent before writing non-essential cookies (the templates simply default to asking all users).

Additionally, the ASP.NET Core Identity templates for individual authentication now have a UI to allow users to download their personal data, along with the ability to delete their account entirely. By default, these UI areas only return personal information from ASP.NET Core identity, and perform a delete on the identity tables. As you add your own information into your database you should extend these features to also include that data according to your GDPR analysis.

Finally, we are considering extension points to allow you to apply your own encryption of ASP.NET Core identity data. We recommend that you examine the encryption features of your database to see if they match your GDPR requirements before attempting to layer on your own encryption mechanisms. Both Microsoft SQL and SQL Azure, as well as Azure table storage offer transparent encryption of data at rest, which does not require any changes to your application and is managed for you.

Security

HTTPS

With the increased focus on security and privacy, enabling HTTPS for web apps is more important than ever before. HTTPS enforcement is becoming increasingly strict on the web, and sites that don’t use it are considered, and increasingly labeled as, not secure. GDPR requires the use of HTTPS to protect user privacy. While using HTTPS in production is critical, using HTTPS during development can also help prevent related issues before deployment, like insecure links.

On by default

To facilitate secure website development, we are enabling HTTPS in ASP.NET Core 2.1 by default. Starting in 2.1, in addition to listening on http://localhost:5000, Kestrel will listen on https://localhost:5001 when a local development certificate is present. A suitable certificate will be created when the .NET Core SDK is installed or can be manually set up using the new ‘dev-certs’ tool. We will also update our project templates to run on HTTPS by default and include HTTPS redirection and HSTS support.

HTTPS redirection and enforcement

Web apps typically need to listen on both HTTP and HTTPS, but then redirect all HTTP traffic to HTTPS. ASP.NET Core 2.0 has URL rewrite middleware that can be used for this purpose, but it could be tricky to configure correctly. In 2.1 we are introducing specialized HTTPS redirection middleware that intelligently redirects based on the presence of configuration or bound server ports.

Use of HTTPS can be further enforced using HTTP Strict Transport Security (HSTS), which instructs browsers to always access the site via HTTPS. ASP.NET Core 2.1 adds HSTS middleware that supports options for max age, subdomains, and the HSTS preload list.

Configuration for production

In production, HTTPS must be explicitly configured. In 2.1 we are introducing default configuration schema for configuring HTTPS for Kestrel that is simple and straightforward. You can configure multiple endpoints including the URLs and the certificate to use for HTTPS either from a file on disk or from a certificate store:
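
The schema was still being finalized at the time, but it looks along these lines (paths, ports, and store values are illustrative):

    {
      "Kestrel": {
        "Endpoints": {
          "Http": {
            "Url": "http://localhost:5000"
          },
          "HttpsFromFile": {
            "Url": "https://localhost:5001",
            "Certificate": {
              "Path": "testCert.pfx",
              "Password": "<password>"
            }
          },
          "HttpsFromStore": {
            "Url": "https://localhost:5002",
            "Certificate": {
              "Subject": "localhost",
              "Store": "My",
              "Location": "CurrentUser"
            }
          }
        }
      }
    }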

Virtual authentication schemes

We’re adding something tentatively called “Virtual Schemes” to address two main scenarios:

  1. Making it easier to mix authentication schemes, like bearer tokens and cookie authentication in the same app (sample). Virtual schemes allow you to configure a dynamic authentication scheme that will use bearer authentication only for requests starting with /api, and cookie authentication otherwise
  2. Compose (mix/match) different authentication verbs (Challenge/SignIn/SignOut/Authenticate) across different handlers. For example, combining OAuth + Cookies, where you would have Challenge = OAuth, and everything else handled by cookies.

Identity

Identity as a library

ASP.NET Core Identity gives you a framework for setting up authentication and identity concerns for your site, including user registration, managing passwords, two-factor authentication, social logins and much more. However, setting up a site to use ASP.NET Core Identity requires quite a bit of code. While project templates help with generating this code, they don’t help with adding identity to an existing application and the code can’t easily be updated.

For 2.1 we will provide a default identity UI implementation as a library. You can add the default identity UI to your application by installing a NuGet package and then enable it in your Startup class:
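
A sketch using the 2.1 preview shape; the IdentityUser, IdentityRole, and ApplicationDbContext types are from the standard Identity setup, and the AddDefaultUI method name may change before release:

    // In Startup.ConfigureServices:
    services.AddIdentity<IdentityUser, IdentityRole>()
        .AddEntityFrameworkStores<ApplicationDbContext>()
        .AddDefaultUI()
        .AddDefaultTokenProviders();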

Identity scaffolder

If you want all the identity code to be in your application so that you can change it however you want, you can use the new identity scaffolder to add the identity code to your application. All the scaffolded identity code is generated in an identity specific area folder so that it remains nicely separated from your application code.

Options improvements

To configure options with the help of configured services, you can today implement IConfigureOptions<T>. In 2.1 we’re adding convenience overloads to the Configure method that allow you to configure options using services without having to implement a separate class:
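
A sketch of the idea using the OptionsBuilder-style API (MyOptions and its property are hypothetical; exact method shapes may differ in the final release):

    // Configure MyOptions using a registered service (here IHostingEnvironment)
    // without writing a separate IConfigureOptions<T> class.
    services.AddOptions<MyOptions>()
        .Configure<IHostingEnvironment>((options, env) =>
        {
            options.ContentRoot = env.ContentRootPath; // hypothetical option
        });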

Also, the new ConfigureOptions<TSetup> method lets you register a single class that configures multiple options (by implementing IConfigureOptions<T> multiple times):
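
A sketch; OptionsA and OptionsB are hypothetical options types:

    // One class configures several options types...
    public class MyOptionsSetup : IConfigureOptions<OptionsA>, IConfigureOptions<OptionsB>
    {
        public void Configure(OptionsA options)
        {
            options.Setting = "A";
        }

        public void Configure(OptionsB options)
        {
            options.Setting = "B";
        }
    }

    // ...and is registered once in Startup.ConfigureServices:
    services.ConfigureOptions<MyOptionsSetup>();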

HttpClientFactory

The new HttpClientFactory type can be registered and used to configure and consume instances of HttpClient in your application. It provides several benefits:

  1. Provide a central location for naming and configuring logical instances of HttpClient. For example, you may configure a “github” client that is pre-configured to access GitHub and a default client for other purposes.
  2. Codify the concept of outgoing middleware via delegating handlers in HttpClient and implement Polly-based middleware to take advantage of it.
  3. Manage the lifetime of HttpClientMessageHandlers to avoid common problems that can be hit when managing HttpClient lifetimes yourself.

HttpClient already has the concept of delegating handlers that can be linked together for outgoing HTTP requests. The factory will make registering these per named client more intuitive, as well as implement a Polly handler that allows Polly policies to be used for retries, circuit breakers, etc. Other “middleware” could also be implemented in the future, but we don’t yet know when that will be.

In this first example we will configure two logical HttpClient configurations, a default one with no name and a named “github” client.

Registration in Startup.cs:
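
A sketch of that registration (the GitHub header values are illustrative):

services.AddHttpClient(); // The default, unnamed client.

services.AddHttpClient("github", c =>
{
    // Pre-configure everything a GitHub call needs.
    c.BaseAddress = new Uri("https://api.github.com/");
    c.DefaultRequestHeaders.Add("Accept", "application/vnd.github.v3+json");
    c.DefaultRequestHeaders.Add("User-Agent", "HttpClientFactory-Sample");
});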

Consumption in a controller:
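
using System.Net.Http;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Mvc;

// A sketch: the factory is injected and used to create named clients.
public class ValuesController : Controller
{
    private readonly IHttpClientFactory _httpClientFactory;

    public ValuesController(IHttpClientFactory httpClientFactory)
    {
        _httpClientFactory = httpClientFactory;
    }

    public async Task<IActionResult> Index()
    {
        // Resolve the pre-configured "github" client and use it like any HttpClient.
        var client = _httpClientFactory.CreateClient("github");
        var result = await client.GetStringAsync("repos/aspnet/Home/issues");
        return Ok(result);
    }
}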

In addition to using strings to differentiate configurations of HttpClient, you can also leverage the DI system using what we are calling a typed client:

A class called GitHubService:
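
using System;
using System.Net.Http;

// A sketch: the factory supplies a pre-built HttpClient to the constructor.
public class GitHubService
{
    public GitHubService(HttpClient client)
    {
        client.BaseAddress = new Uri("https://api.github.com/");
        client.DefaultRequestHeaders.Add("Accept", "application/vnd.github.v3+json");
        client.DefaultRequestHeaders.Add("User-Agent", "HttpClientFactory-Sample");
        Client = client;
    }

    public HttpClient Client { get; }
}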

This type can have behavior and completely encapsulate HttpClient access if you wish, or just be used as a strongly typed way of naming an HttpClient as shown here.

Registration in Startup.cs:
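
services.AddHttpClient<GitHubService>();

// Polly section (pseudocode, see the note below). The intent is to attach
// resilience policies, such as retries, to the client's handler pipeline:
// services.AddHttpClient<GitHubService>()
//     .AddPolicyHandler(RetryPolicy(...));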

NOTE: The Polly section of this code sample should be considered pseudocode at best. We haven’t built this yet and as such are not sure of the final shape of the API.

Consumption in a Razor Page:
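
using System.Threading.Tasks;
using Microsoft.AspNetCore.Mvc.RazorPages;

// A sketch of a page model consuming the typed client (the GitHub URL is illustrative).
public class IndexModel : PageModel
{
    private readonly GitHubService _gitHubService;

    public IndexModel(GitHubService gitHubService)
    {
        _gitHubService = gitHubService;
    }

    public string Issues { get; private set; }

    public async Task OnGetAsync()
    {
        // The typed client arrives from DI already configured for GitHub.
        Issues = await _gitHubService.Client.GetStringAsync("repos/aspnet/Home/issues");
    }
}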

Kestrel

Transport Extensibility

The current implementation of the underlying libuv connection semantics has been decoupled from the rest of Kestrel and abstracted away into a new Transport abstraction. While we continue to ship with libuv as the default transport, we are also adding support for a new transport based on the socket types included in .NET.

Socket Transport

We are continuing to invest in a new socket transport for Kestrel as we believe it has the potential to be more performant than the existing libuv transport. While we aren’t quite there yet, you can still easily switch to the new socket transport and try it out today.

Default configuration

We are adding support to Kestrel for configuring endpoints and HTTPS settings (see HTTPS: Configuration for production).

ASP.NET Core Module

The ASP.NET Core Module (ANCM) is a global module for IIS that acts as a reverse proxy from IIS to your Kestrel backend.

Version agility

Since ANCM is a global singleton, it can’t version or ship with the same agility as the rest of ASP.NET Core. In 2.1, we’ve refactored ANCM into two pieces: the shim and the request handler. The shim will continue to be installed as a global singleton, but the request handler will ship as part of the new Microsoft.AspNetCore.Server.IIS package, which can be referenced directly by your application. This will allow you to use different versions of ANCM with different app deployments.

In-process hosting

In 2.1, we’re adding a new in-process mode to ANCM for .NET Core based apps where the runtime and your app are both loaded inside the IIS worker process (w3wp.exe). This removes the performance penalty of proxying requests over the loopback adapter. Our preliminary tests show performance improvements of around 4.4x compared to running out-of-process. Configuring your app to use the in-process model can be done using `web.config`, and it will eventually be the default for new applications targeting 2.1:
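
A sketch of that setting (the hostingModel attribute name reflects the preview bits and may change):

<system.webServer>
  <aspNetCore processPath="dotnet" arguments=".\MyApp.dll" hostingModel="inprocess" />
</system.webServer>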

Alternatively, you can set a project property in your project file:
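
For example (the property name reflects the preview bits and may change):

<PropertyGroup>
  <AspNetCoreHostingModel>inprocess</AspNetCoreHostingModel>
</PropertyGroup>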

New Microsoft.AspNetCore.App package

ASP.NET Core 2.1 will introduce a new meta-package for use by applications: Microsoft.AspNetCore.App. The new meta-package differs from the existing meta-package in that it reduces the number of dependencies of packages not owned or supported by the ASP.NET or .NET teams to just those deemed necessary to ensure the major framework features function. We will update project templates to use the new meta-package. The existing Microsoft.AspNetCore.All meta-package will continue to be made available throughout the 2.x lifecycle. For additional details see https://github.com/aspnet/Announcements/issues/287.

In conclusion

We hope you are as excited about these features and improvements as we are! Of course, it is still early in the release and these plans are subject to change, but you can follow along with the latest status of these features by tracking the action on GitHub. Major updates and changes will be posted on the Announcements repo. You can also get live updates and participate in the conversation by watching the weekly ASP.NET Community Standup at https://live.asp.net. You can also read about the roadmaps for .NET Core 2.1 and EF Core 2.1 on the .NET team blog. Your feedback is welcome and appreciated!


Learn how to do Image Recognition with Cognitive Services and ASP.NET


With all the talk about artificial intelligence (AI) and machine learning (ML) doing crazy things, it’s easy to be left wondering, “what are practical ways I can use this today?” It turns out there are some extremely easy ways to get started.

In this post, I’ll walk through how to detect faces, gender, ages, and hair color in photos, by adding only a few lines of code to an ASP.NET app. Images will be uploaded and shown in an image gallery built with ASP.NET, images will be hosted in Azure Storage, and Azure Cognitive Services will be used to analyze the images. The full application is available on GitHub. To begin, clone the repository on your machine.

What we’ll build

Here’s what the recognized photos can look like when displayed in a web browser. Note how the metadata generated by Azure Cognitive Services is displayed alongside the image.

A sample image of the application running, showing a woman whose age and gender have been estimated by Cognitive Services.

Set up prerequisites with Visual Studio and Azure

To begin, make sure you’ve installed Visual Studio 2017 with the ASP.NET and web development workload. This will provide everything you need to build and run the app yourself.

Next, set up the Azure prerequisites.

First, ensure you have an Azure account. If not, you can sign up for an Azure free account, which will give you a $200 credit towards anything.

Next, create a Storage account through the Azure Portal:

You’ll need to create the Storage resource:

An image of the Azure Portal, showing how to create a Storage resource.

After creating the resource, you’ll need to create the storage account for your resource with the default settings:

An image of the Create Storage Account page in the Azure Portal with default settings selected.

Finally, create a Cognitive Services resource through the Azure portal:

An image showing how to create a Cognitive Services resource.

Once you’ve set that up, you’re ready to start hacking away at the sample app!

Explore the codebase

Open the project in Visual Studio 2017 if you haven’t already. The application is an ASP.NET MVC app. It does three major things:

The first major operation is uploading an image to Azure Blob storage, analyzing the image using Azure Cognitive Services, and uploading image metadata generated from Cognitive Services back to Blob Storage.

The second major operation is to snag images and their associated metadata from Blob Storage.

The UI simply wires up these images to a page with an upload button.

Add your API keys

Modify the Web.config file to include your Cognitive Services URL and Cognitive Services API key. Look for this file:
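
The entries look something like the following sketch; the key names here are illustrative, so match them to the ones the sample’s Web.config actually defines:

<appSettings>
  <!-- Illustrative key names; check the sample's Web.config for the exact ones. -->
  <add key="CognitiveServicesEndpoint" value="https://<region>.api.cognitive.microsoft.com/face/v1.0" />
  <add key="CognitiveServicesApiKey" value="<your-api-key>" />
  <add key="StorageConnectionString" value="<your-storage-connection-string>" />
</appSettings>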

Your Cognitive Services URL and API keys can be found in the dashboard for your Cognitive Services resource in the Azure Portal here:

The connection strings for your Azure Storage resource can be found in the Azure Portal under Access Keys:

An image showing how to access the Azure Storage Access Keys in the Azure Portal.

Once you have entered your information in your Web.config file, you’ll be good to go!

To learn more about how best to work with keys and other sensitive data during development, see Best Practices for Deploying Passwords and other Sensitive Data to ASP.NET and Azure.

Run the application and add some images

Now that everything is set up and configured locally, you can run the application on your machine!

Press F5 to debug and see how everything works. I recommend that you set a breakpoint in the Upload controller action (HomeController.cs, line 32), so that you can step through each operation as you upload a new image. In the opened browser, upload an image to see what happens!

If you want to see images show up in Azure blobs when running the app, you can do so with Cloud Explorer (View -> Cloud Explorer). You may need to log in first, but after that, you can navigate to your created Storage Account and see all of your Blobs under Blob Containers:

An image showing Visual Studio Cloud Explorer and browsing live Blobs in Azure.

In this example, I’ve uploaded three images to my container called “images”. The web app also uploaded a JSON file with image metadata for each image.

Publish to Azure and impress your friends with your use of AI

You can publish the entire application to Azure App Service. Right-click on your project and select “Publish”. Next, select App Service and continue. You can create one right in the Visual Studio UI:

An image showing how you can publish to Azure from Visual Studio 2017.

Finally, click “Create” and it will create all the Azure resources you need and publish your app! After that process completes (it should take a minute or two), your browser will open with your application running entirely in Azure.

Next steps

And that’s it! Try exploring other interesting things you can do with Cognitive Services. Some fun things to try, without needing to add support for any services or read other tutorials:

  • Modify the web app to replace someone’s face with an emoji that matches their measured emotion (try the System.Drawing API!)
  • Group faces by similarity, age, or if they have makeup on
  • Try it out on pictures of animals instead of humans

Additionally, check out these tutorials to learn more about what you can do with .NET and Cognitive Services:

Cheers, and happy coding!

A new experiment: Browser-based web apps with .NET and Blazor


Today I’m excited to announce a new experimental project from the ASP.NET team called Blazor. Blazor is an experimental web UI framework based on C#, Razor, and HTML that runs in the browser via WebAssembly. Blazor promises to greatly simplify the task of building fast and beautiful single-page applications that run in any browser. It does this by enabling developers to write .NET-based web apps that run client-side in web browsers using open web standards.

If you already use .NET, this completes the picture: you’ll be able to use your skills for browser-based development in addition to existing scenarios for server and cloud-based services, native mobile/desktop apps, and games. If you don’t yet use .NET, our hope is that the productivity and simplicity benefits of Blazor will be compelling enough that you will try it.

Why use .NET for browser apps?

Web development has improved in many ways over the years, but building modern web applications still poses challenges. Using .NET in the browser offers many advantages that can help make web development easier and more productive:

  • Stable and consistent: .NET offers standard APIs, tools, and build infrastructure across all .NET platforms that are stable, feature rich, and easy to use.
  • Modern innovative languages: .NET languages like C# and F# make programming a joy and keep getting better with innovative new language features.
  • Industry leading tools: The Visual Studio product family provides a great .NET development experience on Windows, Linux, and macOS.
  • Fast and scalable: .NET has a long history of performance, reliability, and security for web development on the server. Using .NET as a full-stack solution makes it easier to build fast, reliable and secure applications.

Browser + Razor = Blazor!

Blazor is based on existing web technologies like HTML and CSS, but you use C# and Razor syntax instead of JavaScript to build composable web UI. Note that it is not a way of deploying existing UWP or Xamarin mobile apps in the browser. To see what this looks like in action, check out Steve Sanderson’s prototype demo at NDC Oslo last year. You can also try out a simple Blazor app running in Azure.

Blazor will have all the features of a modern web framework including:

  • A component model for building composable UI
  • Routing
  • Layouts
  • Forms and validation
  • Dependency injection
  • JavaScript interop
  • Live reloading in the browser during development
  • Server-side rendering
  • Full .NET debugging both in browsers and in the IDE
  • Rich IntelliSense and tooling
  • Ability to run on older (non-WebAssembly) browsers via asm.js
  • Publishing and app size trimming

WebAssembly changes the Web

Running .NET in the browser is made possible by WebAssembly, a new web standard for a “portable, size- and load-time-efficient format suitable for compilation to the web.” WebAssembly enables fundamentally new ways to write web apps. Code compiled to WebAssembly can run in any browser at native speeds. This is the foundational piece needed to build a .NET runtime that can run in the browser. No plugins or transpilation needed. You run normal .NET assemblies in the browser using a WebAssembly based .NET runtime.

Last August, our friends on Microsoft’s Xamarin team announced their plans to bring a .NET runtime (Mono) to the web using WebAssembly and have been making steady progress. The Blazor project builds on their work to create a rich client-side single page application framework written in .NET.

A new experiment

While we are excited about the promise Blazor holds, it’s an experimental project, not a committed product. During this experimental phase, we expect to engage deeply with early Blazor adopters to hear your feedback and suggestions. This time allows us to resolve technical issues associated with running .NET in the browser and to ensure we can build something that developers love and can be productive with.

Where it’s happening

The Blazor repo is now public and is where you can find all the action. It’s a fully open source project: you can see all the development work and issue tracking in the public repo.

Please note that we are very early in this project. There aren’t any installers or project templates yet and many planned features aren’t yet implemented. Even the parts that are already implemented aren’t yet optimized for minimal payload size. If you’re keen, you can clone the repo, build it, and run the tests, but only the most intrepid pioneers would attempt to write app code with it today. If you are that intrepid pioneer, please do dig into the sources. Feedback and suggestions can be provided through the Blazor repo issue tracker. In the months ahead, we hope to publish pre-alpha project templates and tooling that will let a wider audience try it out.

Please also check out the Blazor FAQ to learn more about the project.

Thanks!

Diagnosing Errors on your Cloud Apps


One of the most frustrating experiences is when your app works on your local machine, but when you publish it, it inexplicably fails. Fortunately, Visual Studio provides handy features for working with apps running in Azure. In this blog I’ll show you how to leverage the capabilities of Cloud Explorer to diagnose issues in Azure.

If you’re interested in developing apps in the cloud, we’d love to hear from you. Please take a minute to complete our one question survey.

Prerequisites

– If you want to follow along, you’ll need Visual Studio 2017 with the Azure development workload installed.
– This blog assumes you have an Azure subscription and have an App running in Azure App Services. If you don’t have an Azure subscription, click here to sign up for free credits.
– For the purposes of this blog, we’ve developed a simple one-page web app. The source is available here.

Open the solution

If you have your app running on Azure, open the solution in Visual Studio.
Alternatively, clone the source for the sample app and open it in Visual Studio.
Publish the app to Microsoft Azure App Services.

Connect to your Azure subscription with Cloud Explorer

Cloud Explorer is a powerful tool that ships with the Azure development workload in Visual Studio 2017. We can use Cloud Explorer to view and interact with the resources in our Azure subscription.

To view your Azure resources in Cloud Explorer, enable the subscription in the Account Manager tab.
– Open Cloud Explorer (View -> Cloud Explorer)
– Press the Account Management button in the Cloud Explorer toolbar.
– Choose the Azure subscription that you are working with, then press Apply.

Cloud Explorer - Account Manager

Your Azure subscription now appears in the Cloud Explorer. You can toggle the grouping of elements by Resource Groups or Resource Types using the drop-down selector at the top of the window.

View Streaming Logs

When I ran my app after publishing, there was an error. The error message shown on the web page was not very descriptive. So what can I do? How can I get more information about what’s going wrong?

One easy way to diagnose issues on the server is to inspect the application logs. Using Cloud Explorer, you can access the streaming logs of any App Service in your subscription. The streaming logs output a concatenation of all the application logs saved on the App Service. The default log level is “Error”.

To view streaming logs for your application running on Azure App Services:
– Expand the subscription node and select your App Service.
– Click View Streaming Logs in the Actions panel.

Cloud Explorer - View Streaming Logs

The Output window opens with a new log stream from the App Service running on the cloud.

– If you’re using the sample app, refresh the page in the web browser and wait for the page to complete rendering.
This might take ten seconds or more, as the server waits for the fetch operation to time out before returning the result.

You can read the log messages to see what’s happening on the server.

Streaming Logs - Showing Errors

If you switch to Verbose output logging, you see a lot more.

Streaming Logs - Verbose view

Notice the [Error] that appears in the streaming logs: “Exception occurred while attempting to list files on server.”
It doesn’t tell us much, but at least now we can start looking in the ListBlobFiles.StorageHelper for clues.

We know it works locally, so we’ll need to debug the version running on the cloud to see why it’s failing.
For that, we need remote debugging. Once again, Cloud Explorer to the rescue!

Remote Debugging App Service running on Azure

Using Cloud Explorer, you can attach a remote debugger to applications running on Azure. This lets you control the flow of execution by breaking and stepping through the code. It also provides an opportunity to view the value of variables and method returns by utilizing Visual Studio’s debugger tooltips, autos, watches, call stack and other diagnostic tools.

Publish a Debug version of the Web App

Before you can attach a debugger to an application on Azure, there must be a debug version of the code running on the App Service. So, we’ll re-publish the app with Debug release configuration. Then we’ll attach a remote debugger to the app running in the cloud, set breakpoints and step through the code.

• Open the publish summary page (Right-click project, choose “Publish…”)
• Select (or create) the publish profile for your web app
• Click Settings
• When the settings dialog opens, go to the Settings tab.
• In the Configurations drop-down, select “Debug”.
• Save the publish profile settings.
• Press Publish to republish the web app

Publish Debug Configuration

Attach Remote Debugger

You can attach a remote debugger to allow you to step through the code that’s running on your Azure App Service. This lets you see the values of variables and watch the flow of control in your app.

To attach a remote debugger:
• In the Cloud Explorer, select the web app.
• Click Attach Debugger in the Actions panel.

Visual Studio will switch over to Debug mode. Now you can set breakpoints in the code and watch the execution as the program runs.

Set breakpoint and execute the code

If you’re following along from the sample, try this:

• Set breakpoints in the GetBlobFileListAsync() method of the StorageHelper.cs
• Refresh the page in the web browser
• Execution will stop at your first breakpoint.
• Hover your mouse cursor over the _storageConnectionString variable and inspect its value.
• Notice that the connection string is “UseDevelopmentStorage=true”.

Remote Debugging in Visual Studio

Problem found! We’re referencing our local Storage (“UseDevelopmentStorage=true”), which won’t work in the cloud.
To fix it, we’ll need to provide a connection string to the app running in the cloud that points to our Blob storage container.

Complete the debugging session.
– Press F5 to allow the request to complete.
– Then press Shift+F5 to stop the remote debugging session.

Next steps

Re-publish with Release configuration
Once you’ve finished debugging and your app is working as expected, you can republish a Release version of the app for better performance.
Go to the Publish page, find the Publish Profile, select “Settings…” and change the configuration to “Release”.

Related Links

Get started with Azure Blob storage using .NET
https://docs.microsoft.com/en-us/azure/storage/blobs/storage-dotnet-how-to-use-blobs

Use the Azure Storage Emulator for development and testing
https://docs.microsoft.com/en-us/azure/storage/common/storage-use-emulator

Introduction to Razor Pages in ASP.NET Core
https://docs.microsoft.com/en-us/aspnet/core/mvc/razor-pages/?tabs=visual-studio

ASP.NET Core – Simpler ASP.NET MVC Apps with Razor Pages
MSDN Magazine article by Steve Smith
https://msdn.microsoft.com/en-us/magazine/mt842512.aspx

Upload image data in the cloud with Azure Storage
https://docs.microsoft.com/en-us/azure/storage/blobs/storage-upload-process-images

Azure Blob Storage Samples for .NET
https://github.com/Azure-Samples/storage-blob-dotnet-getting-started

File nesting in Solution Explorer


We are excited to share with you a new capability in Visual Studio that was a clear ask from you, the community. Visual Studio has been nesting related files in Solution Explorer for a long time now, but not everybody agrees with the rules it uses. That’s not a problem anymore, because Visual Studio now gives you complete control over file nesting in Solution Explorer! We hope your continued feedback helps us evolve this capability into a fan favorite!

Out of the box you get to pick between the presets Off, Default and Web, but you can always customize it exactly to your liking. You can even create solution-specific and project-specific settings, but more on all of that later. First let’s go over what you get out of the box.

What you get out of the box

Off: This option gives you a flat list of files without any file nesting whatsoever.

Default: This option gives you the default file nesting behavior in Solution Explorer that Visual Studio has had since before you were able to control it.

Web: This option applies the “Web” file nesting behavior to all the projects in the current solution. It has a lot of rules and we encourage you to check it out and tell us what you think. The very first picture in this post is highlighting just a few good examples of the file nesting that you get with this option.

Customizing file nesting to your exact liking

If you don’t like what you get out of the box, you can always create your own custom file nesting settings that make Solution Explorer nest files to your exact liking. You can add as many custom file nesting settings as you like and you can switch between them as you see fit. Every time you want to create a new one, you start by choosing either to start with an empty file or to use the Web settings as your starting point:

We recommend you use Web settings as your starting point because it’s easier to tweak something that already works. If you do that you’ll be starting off with something that looks like the following (instead of being empty):

Let’s focus on the node dependentFileProviders and more specifically the children being added to it. Each child node is a type of rule that Visual Studio can use to nest files. For example, “having the same filename, but a different extension” is one such type of rule. Let’s go over each type of rule available to you (a sample settings file follows the list):

  • extensionToExtension: Use this type of rule to make file.js nest under file.ts
  • fileSuffixToExtension: Use this type of rule to make file-vsdoc.js nest under file.js
  • addedExtension: Use this type of rule to make file.html.css nest under file.html
  • pathSegment: Use this type of rule to make jquery.min.js nest under jquery.js
  • allExtensions: Use this type of rule to make file.* nest under file.js
  • fileToFile: Use this type of rule to make bower.json nest under .bowerrc
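
For reference, here is a minimal sketch of a custom settings file that nests file.js under file.ts (the node and rule spellings follow the settings files Visual Studio generates):

{
  "root": true,
  "dependentFileProviders": {
    "add": {
      "extensionToExtension": {
        "add": {
          ".js": [ ".ts" ]
        }
      }
    }
  }
}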

Ordering is very important in every part of your custom settings file. You can change the order in which rules are executed by moving them up or down inside of the dependentFileProviders node. For example, if you have one rule that makes file.js the parent of file.ts and another rule that makes file.coffee the parent of file.ts, the order in which they appear in the file decides what happens when all three files are present at the same time: file.js, file.ts and file.coffee. Since file.ts can only have one parent, whichever rule executes first wins.

You can manage all settings, including your own custom settings through the same button in Solution Explorer:

 

Creating solution-specific and project-specific settings

You can create solution-specific and project-specific settings through the context menu of each solution and project:

 

Solution-specific and project-specific settings will be combined with whatever Visual Studio settings are already active. Don’t be surprised for example if you have a blank project-specific settings file, yet Solution Explorer is still nesting files. The nesting is either coming from the solution-specific settings or the Visual Studio settings. The process of merging file nesting settings goes: Project > Solution > Visual Studio.

You can tell Visual Studio to ignore solution-specific and project-specific settings, even if the files exist on disk, by enabling the option Ignore solution and project settings under Tools | Options | ASP.NET Core | File Nesting.

You can do the opposite and tell Visual Studio to only use the solution-specific or the project-specific settings. Remember that “root” node we saw earlier in our custom settings? If not, go back and take a look at the picture. If you set that node to true it tells Visual Studio to stop merging files at that level and not combine it with files higher up the hierarchy.

The great thing about solution-specific and project-specific settings is that they can be checked into source control and the entire team that works on the repo can share them.

Next steps

Download Visual Studio 2017 15.6 Preview 4 and try file nesting in Solution Explorer. The feature is currently only supported by ASP.NET Core projects, but tell us that you want it for other projects as well and we will try to make it happen.

Please ask us questions and give us your feedback any way you find most convenient. You can leave a comment on this blog post, you can submit your suggestions on UserVoice or you can drop us an email on Anton.Piskunov<at>microsoft.com (Principal Engineer) and Angelos.Petropoulos<at>microsoft.com (Product Manager).

Two Lesser Known Tools for Local Azure Development


If you’re developing applications that target Azure services (e.g. Web Apps, Functions, Storage), you’ll want to know about two powerful tools that come with Visual Studio 2017 and the Azure development workload:

  • Cloud Explorer is a tool window inside Visual Studio that lets you browse your Azure resources and perform specific tasks – like stopping and starting an App Service, viewing streaming logs, and creating storage items.
  • Storage Emulator is an application separate from Visual Studio that provides a local simulation of the Azure storage services. It’s really handy for testing Functions that trigger from queues, blobs or tables.

In this blog I’ll show you how you can develop Azure applications entirely locally – including the ability to interact with Azure storage – without ever needing an Azure subscription.

Prerequisites

Note: You will NOT need an Azure subscription to follow this blog. In fact, that’s the whole point of this blog. 😉

Cloud Explorer

The Cloud Explorer is your window into Azure from within Visual Studio. You can browse the common resources in your Azure subscriptions in one convenient tool window. Each of the various Azure services has different properties and actions.

Cloud Explorer - Expanded

In the picture above, you can see it has listed a variety of resources from my Azure subscription including my App Services, SQL Databases and Virtual Machines, as well as my App Service Plans, Storage Accounts and other network infrastructure assets. I have published the sample app to an App Service called ListBlobFilesSample. You can see it listed under the App Services node.

Each resource has a collection of properties and actions. You can trigger actions by right-clicking on the item of interest. For instance, I can View Streaming Logs to see a running output of my application in the cloud, or I can Attach Debugger to step through the code to diagnose errors. (Note: For more information about diagnosing errors, see Diagnosing Errors on your Cloud Apps.)

In this blog, we’ll be using Cloud Explorer to interact with our Storage Accounts – specifically, with the local (Development) storage account using the Microsoft Azure Storage Emulator.

Sample Code

For this post, we’ll be working with a sample Web App with a single Razor Page file that displays a list of items in a Blob storage container (i.e. list of files in a folder in a storage account).

Clone the source from here and open the ListBlobFiles solution in Visual Studio.
The web app consists of:
      – a single Razor Page file (Index.cshtml),
      – its code behind file (Index.cshtml.cs),
      – a utility class for reading items from storage (StorageHelper.cs),
      – the application’s settings file (appsettings.json),
      – standard web app startup files (Program.cs and Startup.cs)

Here’s a snippet of the most interesting part – the helper class that returns a list of files stored in a blob storage container.
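
A sketch of that helper, assuming the WindowsAzure.Storage client library and the member names referenced in the debugging walkthrough earlier in this series:

using System.Collections.Generic;
using System.Threading.Tasks;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Blob;

public class StorageHelper
{
    private readonly string _storageConnectionString;

    public StorageHelper(string storageConnectionString)
    {
        _storageConnectionString = storageConnectionString;
    }

    public async Task<List<string>> GetBlobFileListAsync(string containerName)
    {
        var account = CloudStorageAccount.Parse(_storageConnectionString);
        var client = account.CreateCloudBlobClient();
        var container = client.GetContainerReference(containerName);

        var urls = new List<string>();
        BlobContinuationToken token = null;
        do
        {
            // List the blobs in the container one segment at a time.
            var segment = await container.ListBlobsSegmentedAsync(token);
            token = segment.ContinuationToken;
            foreach (var item in segment.Results)
            {
                urls.Add(item.Uri.ToString());
            }
        } while (token != null);

        return urls;
    }
}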

Using Storage Completely Offline with Storage Emulator

Using the Storage Emulator, you can develop, run, debug and test your applications that use Azure Storage locally without an Azure subscription. The other great thing is, the Storage Emulator is part of the Azure development workload in Visual Studio, so there is no extra installation required.

Start the Storage Emulator

  • Press the Windows key and type “Storage Emulator”, then select Microsoft Azure Storage Emulator.
  • When the Storage Emulator is running, an icon will appear in the Windows system tray.

    Storage Emulator icon in task bar

Launch the web app from Visual Studio

  • Press Ctrl+F5 to build and run the web app locally.
  • A web browser will launch and open the Index page of the app.
    The page renders and shows there are no files in the Blob container.
    Web page displays errors

Let’s add some files to a local storage container and see if they show up when we refresh the page.

Create local Blob Storage (using Storage Emulator and Cloud Explorer)

  • Open Cloud Explorer
  • Expand to Blob Containers under (Local)->Storage Accounts->(Development)
  • Click Create Blob Container in the Actions panel
  • Cloud Explorer - Create Blob Container

  • Enter a name for the local blob storage container (e.g. “myfiles”) – Note: the name must contain only lowercase letters, numbers, and hyphens

Add files to your Blob container

  • Right-click the new container (myfiles) and select Open.
  • In the toolbar, click the Upload button.
  • Browse for a file, then press OK.
  • Do this repeatedly to add several files to your blob container (storage folder).

Cloud Explorer - Add files to blob container

You’ll see the files appear in the container window, along with the URL for each item.
The Microsoft Azure Activity Log window shows the status of the uploads.

Files appear in container view

Return to the web browser that is running our local web app and refresh the page.
Notice that the page now outputs the URLs of all the files in the container.

Web page renders correctly, showing files in blob container

Success! You’re now doing local development of an app that uses Azure storage – without needing any resources on Azure.

Next Steps

Try it on the cloud! When you’re ready, publish your app to Azure App Services and configure it to run with Azure Storage on the cloud.

You can continue to use Cloud Explorer within Visual Studio to interact with your storage account on Azure in just the same way you did with local development.

Related Links

Get started with Azure Blob storage using .NET
https://docs.microsoft.com/en-us/azure/storage/blobs/storage-dotnet-how-to-use-blobs

Use the Azure Storage Emulator for development and testing
https://docs.microsoft.com/en-us/azure/storage/common/storage-use-emulator

Introduction to Razor Pages in ASP.NET Core
https://docs.microsoft.com/en-us/aspnet/core/mvc/razor-pages/?tabs=visual-studio

ASP.NET Core – Simpler ASP.NET MVC Apps with Razor Pages
https://msdn.microsoft.com/en-us/magazine/mt842512.aspx

Azure Article: Azure Blob Storage Photo Gallery Web Application
https://azure.microsoft.com/en-us/resources/samples/storage-blobs-dotnet-webapp/
Related sample on GitHub: Image Resizer Web App
https://github.com/Azure-Samples/storage-blob-upload-from-webapp

Upload image data in the cloud with Azure Storage
https://docs.microsoft.com/en-us/azure/storage/blobs/storage-upload-process-images

Azure Blob Storage Samples for .NET
https://github.com/Azure-Samples/storage-blob-dotnet-getting-started

Announcing ASP.NET MVC 5.2.4, Web API 5.2.4, and Web Pages 3.2.4


Today we released stable packages for ASP.NET MVC 5.2.4, Web API 5.2.4, and Web Pages 3.2.4 on NuGet. This release contains some minor bug fixes and a couple of new features specifically targeted at enabling .NET Standard support for the ASP.NET Web API Client. You can read about the .NET Standard support for the ASP.NET Web API Client in the earlier preview announcement.

For the full list of features and bug fixes for this release please see the release notes.

To update an existing project to use this release you can run the following commands from the NuGet Package Manager Console for each of the packages you wish to update:

Install-Package Microsoft.AspNet.Mvc -Version 5.2.4
Install-Package Microsoft.AspNet.WebApi -Version 5.2.4
Install-Package Microsoft.AspNet.WebPages -Version 3.2.4

If you have any questions or feedback on this release please let us know on GitHub.

Thanks!

ASP.NET Core 2.1.0-preview1 now available


Today we’re very happy to announce that the first preview of the next minor release of ASP.NET Core and .NET Core is now available for you to try out. We’ve been working hard on this release over the past months, along with many folks from the community, and it’s now ready for a wider audience to try it out and provide the feedback that will continue to shape the release.

You can read about .NET Core 2.1.0-preview1 over on their blog.

You can also read about Entity Framework Core 2.1.0-preview1 on their blog.

How do I get it?

You can download the new .NET Core SDK for 2.1.0-preview1 (which includes ASP.NET Core 2.1.0-preview1) from https://www.microsoft.com/net/download/dotnet-core/sdk-2.1.300-preview1

Visual Studio 2017 version requirements

Customers using Visual Studio 2017 should also install and use the Preview channel of Visual Studio (15.6 Preview 6 at the time of writing), in addition to the SDK above, when working with .NET Core and ASP.NET Core 2.1 projects. .NET Core 2.1 projects require Visual Studio 2017 15.6 or greater.

Impact to machines

Please note that given this is a preview release there are likely to be known issues and as-yet-to-be-discovered bugs. While .NET Core SDK and runtime installs are side-by-side on your machine, your default SDK will become the latest version, which in this case will be the preview. If you run into issues working on existing projects using earlier versions of .NET Core after installing the preview SDK, you can force specific projects to use an earlier installed version of the SDK using a global.json file as documented here. Please log an issue if you run into such cases as SDK releases are intended to be backwards compatible.

Already published applications running on earlier versions of .NET Core and ASP.NET Core shouldn’t be impacted by installing the preview. That said, we don’t recommend installing previews on machines running critical workloads.

New features

You can see a summary of the new features in 2.1 in the roadmap post we published previously.

Furthermore, we’re publishing a series of posts here that go over the new feature areas in detail. We’ll update this post with links to these posts as they go live over the coming days:

  • Using ASP.NET Core previews in Azure App Service
  • Introducing HttpClientFactory
  • Improvements for using HTTPS
  • Improvements for building Web APIs
  • Introducing compatibility version in MVC
  • Getting started with SignalR
  • Introducing global tools
  • Using Razor UI in class libraries
  • Improvements for GDPR
  • Improvements to the Kestrel HTTP server
  • Improvements to IIS hosting
  • Functional testing of MVC applications
  • Introducing Identity UI as a library
  • Hosting non-server apps with GenericHostBuilder

Announcements and release notes

You can see all the announcements published pertaining to this release at https://github.com/aspnet/Announcements/issues?q=is%3Aopen+is%3Aissue+milestone%3A2.1.0

Release notes will be available shortly at https://github.com/aspnet/Home/releases/tag/2.1.0-preview1

Giving feedback

The main purpose of providing previews like this is to solicit feedback from customers such that we can refine and improve the changes in time for the final release. We intend to release a second preview within the next couple of months, followed by a single RC release (with “go-live” license and support) before the final RTW release.

Please provide feedback by logging issues in the appropriate repository at https://github.com/aspnet or https://github.com/dotnet. The posts on specific topics above will provide direct links to the most appropriate place to log issues for the features detailed.

Migrating an ASP.NET Core 2.0.x project to 2.1.0-preview1

Follow these steps to migrate an existing ASP.NET Core 2.0.x project to 2.1.0-preview1:

  1. Open the project’s CSPROJ file and change the value of the <TargetFramework> element to netcoreapp2.1
    • Projects targeting .NET Framework rather than .NET Core, e.g. net471, don’t need to do this
  2. In the same file, update the versions of the various <PackageReference> elements for any Microsoft.AspNetCore, Microsoft.Extensions, and Microsoft.EntityFrameworkCore packages to 2.1.0-preview1-final
  3. In the same file, update the versions of the various <DotNetCliToolReference> elements for any Microsoft.VisualStudio and Microsoft.EntityFrameworkCore packages to 2.1.0-preview1-final
  4. In the same file, remove the <DotNetCliToolReference> elements for any Microsoft.AspNetCore packages. These have been replaced by global tools.

That should be enough to get the project building and running against 2.1.0-preview1. The following steps will change your project to use the new code-based idioms that are recommended in 2.1:

  1. Open the Program.cs file
  2. Rename the BuildWebHost method to CreateWebHostBuilder, change its return type to IWebHostBuilder, and remove the call to .Build() in its body
  3. Update the call in Main to call the renamed CreateWebHostBuilder method like so: CreateWebHostBuilder(args).Build().Run(); (a sketch of the result appears after this list)
  4. Open the Startup.cs file
  5. In the ConfigureServices method, change the call to add MVC services to set the compatibility version to 2.1 like so: services.AddMvc().SetCompatibilityVersion(CompatibilityVersion.Version_2_1);
  6. In the Configure method, add a call to add the HSTS middleware after the exception handler middleware: app.UseHsts();
  7. Staying in the Configure method, add a call to add the HTTPS redirection middleware before the static files middleware: app.UseHttpsRedirection();
  8. Open the project property pages (right-mouse click on project in Visual Studio Solution Explorer and select “Properties”)
  9. Open the “Debug” tab and in the IIS Express profile, check the “Enable SSL” checkbox and save the changes
  10. Open the Properties/launchSettings.json file
  11. In the "iisSettings"/"iisExpress" section, note the new property added to define HTTPS port for IIS Express to use, e.g. "sslPort": 44374
  12. In the "profiles/IIS Express/environmentVariables" section, add a new property to flow the configured HTTPS port through to the application like so: "ASPNETCORE_HTTPS_PORT": "44374"
    • This configuration value will be read by the HTTPS redirect middleware you added above to ensure non-HTTPS requests are redirected to the correct port. Make sure it matches the value configured for IIS Express.
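
After steps 1–3, Program.cs should look roughly like this:

using Microsoft.AspNetCore;
using Microsoft.AspNetCore.Hosting;

public class Program
{
    public static void Main(string[] args)
    {
        CreateWebHostBuilder(args).Build().Run();
    }

    public static IWebHostBuilder CreateWebHostBuilder(string[] args) =>
        WebHost.CreateDefaultBuilder(args)
            .UseStartup<Startup>();
}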

Note that some projects might require more steps depending on the options selected when the project was created, or packages added since. You might like to try creating a new project targeting 2.1.0-preview1 (in Visual Studio or using dotnet new at the cmd line) with the same options to see what other things have changed.


ASP.NET Core 2.1: Improvements for using HTTPS


Securing web apps with HTTPS is more important than ever before. Browser enforcement of HTTPS is becoming increasingly strict. Sites that don’t use HTTPS are increasingly labeled as insecure. Browsers are also starting to enforce that new and existing web features must only be used from a secure context (Chromium, Mozilla). New privacy requirements like the General Data Protection Regulation (GDPR) require the use of HTTPS to protect user data. Using HTTPS during development also helps prevent HTTPS-related issues before deployment, like insecure links.

ASP.NET Core 2.1 makes it easy to both develop your app with HTTPS enabled and to configure HTTPS once your app is deployed. The ASP.NET Core 2.1 project templates have been updated to enable HTTPS by default. To enable HTTPS in production simply configure the correct server certificate. ASP.NET Core 2.1 also adds support for HTTP Strict Transport Security (HSTS) to enforce HTTPS usage in production and adds improved support for redirecting HTTP traffic to HTTPS endpoints.

HTTPS in development

To get started with ASP.NET Core 2.1 and HTTPS install the .NET Core 2.1 SDK. The SDK will create an HTTPS development certificate for you as part of the first-run experience. For example, when you run dotnet new razor for the first time you should see the following console output:

ASP.NET Core
------------
Successfully installed the ASP.NET Core HTTPS Development Certificate.
To trust the certificate (Windows and macOS only) first install the dev-certs tool by running 'dotnet install tool dotnet-dev-certs -g --version 2.1.0-preview1-final' and then run 'dotnet-dev-certs https --trust'.
For more information on configuring HTTPS see https://go.microsoft.com/fwlink/?linkid=848054.

The ASP.NET Core HTTPS Development Certificate has now been installed into the local user certificate store, but it still needs to be trusted. To trust the certificate you need to perform a one-time step to install and run the new dotnet dev-certs tool as instructed:

C:\WebApplication1>dotnet install tool dotnet-dev-certs -g --version 2.1.0-preview1-final

The installation succeeded. If there are no further instructions, you can type the following command in shell directly to invoke: dotnet-dev-certs

C:\WebApplication1>dotnet dev-certs https --trust
Trusting the HTTPS development certificate was requested. A confirmation prompt will be displayed if the certificate was not previously trusted. Click yes on the prompt to trust the certificate.
A valid HTTPS certificate is already present.

To run the dev-certs tool, both dotnet-dev-certs and dotnet dev-certs (without the extra hyphen) will work. Note: If you get an error that the tool was not found, you may need to open a new command prompt if the current command prompt was open when the SDK was installed.

Trust certificate dialog

Click Yes to trust the certificate.

On macOS the certificate will get added to your keychain as a trusted certificate.

On Linux there isn't a standard way across distros to trust the certificate, so you'll need to follow the distro-specific guidance for trusting the development certificate.

Run the app by running dotnet run. The ASP.NET Core 2.1 runtime will detect that the development certificate is installed and use the certificate to listen on both http://localhost:5000 and https://localhost:5001:

C:\WebApplication1>dotnet run
Using launch settings from C:\WebApplication1\Properties\launchSettings.json...
Hosting environment: Development
Content root path: C:\WebApplication1
Now listening on: https://localhost:5001
Now listening on: http://localhost:5000
Application started. Press Ctrl+C to shut down.

Close any open browsers and then in a new browser window browse to https://localhost:5001 to access the app via HTTPS.

Razor Pages with HTTPS

If you didn't trust the ASP.NET Core development certificate then the browser will display a security warning:

Untrusted certificate warning

You can still click on "Details" to ignore the warning and browse to the site, but you're better off running dotnet dev-certs https --trust to trust the certificate. Just run the tool once and you should be all set.

HTTPS redirection

If you browse to the app via http://localhost:5000 you get redirected to the HTTPS endpoint:

HTTPS redirect

This is thanks to the new HTTPS redirection middleware that redirects all HTTP traffic to HTTPS. The middleware will detect available HTTPS server addresses at runtime and redirect accordingly. Otherwise, it redirects to port 443 by default.

The HTTPS redirection middleware is added in app's Configure method:

app.UseHttpsRedirection();

You can configure the HTTPS port explicitly in your ConfigureServices method:

services.AddHttpsRedirection(options => options.HttpsPort = 5002);

Alternatively you can specify the HTTPS port to redirect to using configuration or the ASPNETCORE_HTTPS_PORT environment variable. This is useful for when HTTPS is being handled externally from the app, like when the app is hosted behind IIS. For example, the project template adds the ASPNETCORE_HTTPS_PORT environment variable to the IIS Express launch profile so that it matches the HTTPS port setup for IIS Express:

{
  "iisSettings": {
    "windowsAuthentication": false,
    "anonymousAuthentication": true,
    "iisExpress": {
      "applicationUrl": "http://localhost:51667",
      "sslPort": 44370
    }
  },
  "profiles": {
    "IIS Express": {
      "commandName": "IISExpress",
      "launchBrowser": true,
      "environmentVariables": {
        "ASPNETCORE_ENVIRONMENT": "Development",
        "ASPNETCORE_HTTPS_PORT": "44370"
      }
    }
  }
}

HTTP Strict Transport Security (HSTS)

HSTS is a protocol that instructs browsers to access the site via HTTPS. The protocol has allowances for specifying how long the policy should be enforced (max age) and whether the policy applies to subdomains or not. You can also enable support for your domain to be added to the HSTS preload list.

The ASP.NET Core 2.1 project templates enable support for HSTS by adding the new HSTS middleware in the app's Configure method:

if (env.IsDevelopment())
{
    app.UseDeveloperExceptionPage();
}
else
{
    app.UseExceptionHandler("/Error");
    app.UseHsts();
}

Note that HSTS is only enabled when running in a non-development environment. This is to prevent setting an HSTS policy for localhost when in development.

You can configure your HSTS policy (max age, include subdomains, exclude specific domains, support preload) in your ConfigureServices method:

services.AddHsts(options =>
{
    options.MaxAge = TimeSpan.FromDays(100);
    options.IncludeSubDomains = true;
    options.Preload = true;
});

Configuring HTTPS in production

The ASP.NET Core HTTPS development certificate is only for development purposes. In production you need to configure your app for HTTPS including the production certificate that you want to use. Often this is handled externally from the app using a reverse proxy like IIS or NGINX. ASP.NET Core 2.1 adds support to Kestrel for configuring endpoints and HTTPS certificates.

You can still configure server URLs (including HTTPS URLs) using the ASPNETCORE_URLS environment variable. To configure the HTTPS certificate for any HTTPS server URLs, you configure a default HTTPS certificate.

The default HTTPS certificate can be loaded from a certificate store:

{
  "Certificates": {
    "Default": {
      "Subject": "mysite",
      "Store": "User",
      "Location": "Local",
      "AllowInvalid": "false" // Set to "true" to allow invalid certificates (e.g. self-signed)
    }
  }
}

Or from a password protected PFX file:

{
  "Certificates": {
    "Default": {
      "Path": "cert.pfx",
      "Password": "<password>"
    }
  }
}

You can also configure named endpoints for Kestrel that include both the URL for the endpoint and the HTTPS certificate:

{
  "Kestrel": {
    "EndPoints": {
      "Http": {
        "Url": "http://localhost:5005"
      },

      "HttpsInlineCertFile": {
        "Url": "https://localhost:5006",
        "Certificate": {
          "Path": "cert.pfx",
          "Password": "<cert password>"
        }
      },

      "HttpsInlineCertStore": {
        "Url": "https://localhost:5007",
        "Certificate": {
          "Subject": "mysite",
          "Store": "My",
          "Location": "CurrentUser",
          "AllowInvalid": "false" // Set to true to allow invalid certificates (e.g. self-signed)
        }
      }
    }
  }
}

Summary

We hope these new features will make it much easier to use HTTPS during development and in production. Please give the new HTTPS support a try and let us know what you think!

ASP.NET Core 2.1.0-preview1: Using ASP.NET Core Previews on Azure App Service


There are 3 options to get ASP.NET Core 2.1 Preview applications running on Azure App Service:

  1. Installing the Preview1 site extension
  2. Deploying your app self-contained
  3. Using Web Apps for Containers

Installing the site extension

Starting with 2.1-preview1 we are producing an Azure App Service site extension that contains everything you need to build and run your ASP.NET Core 2.1-preview1 app. You can install this site extension by:

  1. Go to the Extensions blade
    Azure App Service Site Extension UI

    Site Extension UI

  2. Click ‘Add’ at the top of the screen and choose the ‘ASP.NET Core Runtime Extension’ from the list of available extensions.
    ASP.NET Core Runtime Extensions

    Choose the ASP.NET Core Runtime Extensions

  3. Then agree to the license terms by clicking ‘OK’ on the ‘Accept Legal Terms’ screen, and finally click ‘OK’ at the bottom of the Add Extension screen.
    Accept Agreement

    Accept Agreement

Once the add operation has completed you will have .NET Core 2.1 Preview 1 installed. You can verify this by going to the Console and running ‘dotnet --info’. It should look like this:

dotnet Info output

dotnet Info output

You can see the path to the site extension where Preview1 has been installed, showing that you are running from the site extension instead of from the default ProgramFiles location. If you see ProgramFiles instead then try restarting your site and running the info command again.

Using an ARM template

If you are using an ARM template to create and deploy applications you can use the ‘siteextensions’ resource type to add the site extension to a Web App. For example:
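
The following sketch shows the general shape; the apiVersion and extension name are assumptions, so substitute the values that match your deployment:

{
  "apiVersion": "2015-08-01",
  "type": "siteextensions",
  "name": "AspNetCoreRuntime",
  "dependsOn": [
    "[resourceId('Microsoft.Web/Sites', parameters('siteName'))]"
  ],
  "properties": { }
}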

You could add, and edit, this snippet in your own ARM template to add the site extension to your web app, making sure that this resource definition is in the resources collection of your site resource.

Deploy a self-contained app

You can deploy a self-contained app that carries the preview1 runtime with it when being deployed. This option means that you don’t need to prepare your site, but it does require you to publish your application differently than you would when deploying an app to a server where the runtime is already installed.

Self-contained apps are an option for all .NET Core applications, and some of you may be deploying your applications this way already.

Use Docker

We have 2.1 preview1 Docker images available on Docker Hub for use. You can use them as your base image and deploy to Web Apps for Containers as you normally would.

Conclusion

This is the first time that we are using site extensions instead of pre-installing previews globally on Azure App Service. If you have any problems getting it to work then log an issue on GitHub.

ASP.NET Core 2.1.0-preview1: Getting started with SignalR


Since 2013, ASP.NET developers have been using SignalR to build real-time web applications. Now, with ASP.NET Core 2.1 Preview 1, we’re bringing SignalR over to ASP.NET Core so you can build real-time web applications with all the benefits of ASP.NET Core. We released an alpha version of this new SignalR back in October that worked with ASP.NET Core 2.0, but now it’s ready for a broader preview and built in to ASP.NET Core 2.1 (no additional NuGet packages required!). This new version of SignalR gave us a chance to significantly redesign some elements and learn from the lessons of the past, but the core APIs you work with should be very similar. The new design gives us a much more flexible platform on which to build the future of real-time .NET server applications. For now, though, let’s walk through a simple Chat demo to see how it works in ASP.NET Core SignalR.

Prerequisites

In order to complete this tutorial you need the following tools:

  1. .NET Core SDK version 2.1.300-preview1 or higher.
  2. Node.js (needed only for NPM, to download the SignalR JavaScript library; we strongly recommend using at least version 8.9.4 of Node).
  3. Your IDE/Editor of choice.

Building the UI

Let’s start by building a simple UI for a simple chat app. First, create a new Razor pages application using dotnet new:
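
For example (the project name SignalRTutorial and the use of individual authentication are assumptions based on the rest of this walkthrough):

dotnet new razor --auth Individual --output SignalRTutorial
cd SignalRTutorial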

Add a new page for the chat UI:
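
For example, using the Razor Page item template (the namespace value assumes the project name above):

dotnet new page --name Chat --output Pages --namespace SignalRTutorial.Pages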

You should now have Pages/Chat.cshtml and Pages/Chat.cshtml.cs files in your project. First, open Pages/Chat.cshtml.cs, change the namespace name to match your other page models and add the Authorize attribute to ensure only authenticated users can access the Chat page.
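
A sketch of the updated page model:

using Microsoft.AspNetCore.Authorization;
using Microsoft.AspNetCore.Mvc.RazorPages;

namespace SignalRTutorial.Pages
{
    [Authorize]
    public class ChatModel : PageModel
    {
        public void OnGet()
        {
        }
    }
}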

Next, open Pages/Chat.cshtml and add some UI:
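
A minimal sketch of the markup (the element IDs are assumptions that the script later in this post relies on; the inputs start disabled until the connection is open):

@page
@model SignalRTutorial.Pages.ChatModel

<h1>Chat</h1>

<form id="send-form">
    <input type="text" id="message-textbox" disabled placeholder="Type a message" />
    <button type="submit" id="send-button" disabled>Send</button>
</form>

<ul id="messages-list"></ul>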

The UI we’ve added is fairly simple. We’re going to use ASP.NET Core Identity for authentication, which means the user will be authenticated and will have a username when we get here. To try it out, use dotnet run to launch the site and register as a new user. Then navigate to the /Chat endpoint; you should see the following UI:


The Chat UI

Writing the server code

In SignalR, you put server-side code in a “Hub”. Hubs contain methods that the SignalR Client allows you to invoke from the browser, much like how an MVC controller has actions that are invoked by issuing HTTP requests. However, unlike an MVC Controller Action, SignalR allows the server to invoke methods on the client as well, allowing you to develop real-time applications that notify users of new content. So, first, we need to build a hub. Back in the root of the project, create a Hubs directory and add a new file to that directory called ChatHub.cs:
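
A sketch of the hub, matching the walkthrough below:

using System;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Authorization;
using Microsoft.AspNetCore.SignalR;

namespace SignalRTutorial.Hubs
{
    [Authorize]
    public class ChatHub : Hub
    {
        public override async Task OnConnectedAsync()
        {
            // Tell every client that a new user joined the chat.
            await Clients.All.SendAsync("SendAction", Context.User.Identity.Name, "joined");
        }

        public override async Task OnDisconnectedAsync(Exception exception)
        {
            await Clients.All.SendAsync("SendAction", Context.User.Identity.Name, "left");
        }

        public async Task Send(string message)
        {
            // Broadcast to every connected client, including the sender.
            await Clients.All.SendAsync("SendMessage", Context.User.Identity.Name, message);
        }
    }
}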

Let’s go back over that code a little bit and look at what it does.

First, we have a class inheriting from Hub, which is the base class required for all SignalR Hubs. We apply the [Authorize] attribute to it which restricts access to the Hub to registered users and ensures that Context.User is available for us in the Hub methods. Inside Hub methods, you can use the Clients property to access the clients connected to the hub. We use the .All property, which gives us an object that can be used to send messages to every client connected to the Hub.

When a new client connects, the OnConnectedAsync method will be invoked. We override that method to send the SendAction message to every client, providing two arguments: the name of the user, and the action that occurred (in this case, that they “joined” the chat session). We do the same for OnDisconnectedAsync, which is invoked when a client disconnects.

When a client invokes the Send method, we send the SendMessage message to every client, again providing two arguments: The name of the user sending the message and the message itself. Every client will receive this message, including the sending client itself.

To finish off the server-side, we need to add SignalR to our application. We do that in the Startup.cs file. First, add the following to the end of the ConfigureServices method to register the necessary SignalR services into the DI container:
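
For example:

services.AddSignalR();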

Then, we need to put SignalR into the middleware pipeline, and give our ChatHub hub a URL that the client can reference. We do that by adding these lines to the end of the Configure method:
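
Something like the following (treat the exact MapHub overload as a sketch; it shifted between previews):

app.UseSignalR(routes =>
{
    // Expose ChatHub at the URL /hubs/chat.
    routes.MapHub<ChatHub>("hubs/chat");
});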

This configures the hub so that it is available at the URL /hubs/chat. You can use any URL you want, but it can’t match an existing MVC action or Razor Page.

NOTE: You’ll need to add a using directive for SignalRTutorial.Hubs in order to use ChatHub in your MapHub call.

Building the client-side

Now that we have the server hub up and running, we need to add code to the Chat.cshtml page to use the client. First, however, we need to get the SignalR JavaScript client and add it to our application. There are many ways you can do this, such as using a bundling tool like Webpack, but here we’re going to go with a fairly simple approach of copying and pasting. First, install the SignalR client using NPM:
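
The client ships in the @aspnet/signalr package:

npm install @aspnet/signalr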

You can find the version of the client designed for use in browsers in node_modules/@aspnet/signalr/dist/browser. There are minified files there as well. For now, let’s just copy the signalr.js file out of that directory and into wwwroot/lib/signalr in the project:


SignalR JS file in the wwwroot/lib/signalr folder

Now, we can add JavaScript to our Chat.cshtml page to wire everything up. At the end of the file (after the closing </ul> tag), add the following:

We put our scripts in the Scripts Razor section, in order to ensure they end up at the very bottom of the Layout page. First, we load the signalr.js library we just copied in:
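
Assuming your layout defines the usual Scripts section, the skeleton looks like this:

@section Scripts {
    <script src="~/lib/signalr/signalr.js"></script>
    <script>
        // Our wiring code goes here; the pieces are shown below.
    </script>
}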

Then, we add a script block for our own code. In that code, we first get references to some DOM elements, and define a helper function to add a new item to the messages-list list. Then, we create a new connection, connecting to the URL we specified back in the Configure method.
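
A sketch of that setup code, assuming the element ids from the markup sketch above and the preview1 client API (signalR.HubConnection):

const sendForm = document.getElementById('send-form');
const sendButton = document.getElementById('send-button');
const messageBox = document.getElementById('message-box');
const messagesList = document.getElementById('messages-list');

// Helper to append a new item to the messages-list list.
function addMessage(message) {
    const li = document.createElement('li');
    li.textContent = message;
    messagesList.appendChild(li);
}

// Connect to the URL we mapped in the Configure method.
const connection = new signalR.HubConnection('/hubs/chat');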

At this point, the connection has not yet been opened. We need to call connection.start() to open the connection. However, before we do that we have some set-up to do. First, let’s wire up the “submit” handler for the <form>. When the “Send” button is pressed, this handler will be fired and we want to grab the content of the message text box and send the Send message to the server, passing the message as an argument (we also clear the text box so that the user can enter a new message):
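
For example:

sendForm.addEventListener('submit', event => {
    // Stay on the page instead of posting the form.
    event.preventDefault();

    const message = messageBox.value;
    messageBox.value = '';

    // Invoke the hub's Send method, passing the message text.
    connection.invoke('Send', message);
});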

Then, we wire up handlers for the SendMessage and SendAction messages (remember back in the Hub we use the SendAsync method to send those messages, so we need a handler on the client for them):
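
For example:

connection.on('SendMessage', (name, message) => {
    addMessage(`${name}: ${message}`);
});

connection.on('SendAction', (name, action) => {
    addMessage(`${name} ${action}`);
});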

Finally, we start the connection. The .start method returns a JavaScript Promise object that completes when the connection has been established. Once it’s established, we want to enable the text box and button:
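
For example:

connection.start().then(() => {
    // The connection is up; let the user type and send messages.
    messageBox.disabled = false;
    sendButton.disabled = false;
});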

Testing it out

With all that code in place, it should be ready to go. Use dotnet run to launch the app and give it a try! Then, use a Private Browsing window and log in as a different user. You should be able to chat back and forth between the browser windows.

Conclusion

This has been a brief overview of how to get started with SignalR in ASP.NET Core 2.1 Preview 1. Check out the full code for this tutorial if you’d like to see more details. If you need help, post questions on StackOverflow using the signalr-core tag. Finally, if you think you’ve found a bug, file it on our GitHub repository.

ASP.NET Core 2.1.0-preview1: Introducing compatibility version in MVC

This post was written by Ryan Nowak

In 2.1 we’re adding a feature to address a long-standing problem for maintaining MVC – how do we make improvements to framework code without making it too hard for developers to upgrade to the latest version? This is not an easy concern to solve – and with 7 major releases of MVC (dating back to 2009) there are a few things we’d like to leave in the past.

Unlike most other parts of ASP.NET Core, MVC is a framework – our code calls your code in lots of idiosyncratic ways. If we change which methods we call, in what order, or how we handle exceptions, it’s very easy for working code to become non-working code. In our experience, it’s also not good enough for the team to simply expect developers to rely on the documented behavior and penalize those who don’t.

This last bit is summed up with Hyrum’s Law, or if you prefer, the XKCD version. We make decisions with the assumption that some developers have built working applications that rely on our bugs.

Despite these challenges, we think it’s worthwhile to keep moving forward. We’re disappointed too when we get a good piece of feedback that we can’t act upon because it’s incompatible with our legacy behavior.

What we’re doing

Our plan is to continue to make improvements to framework behaviors – where we think we’ve made a mistake – or where we can update a feature to be unequivocally better. However, we’re going to make these changes opt-in, and make it easy to opt in. New applications created from our templates will opt in to the current release’s behaviors by default.

When we reach the next major release (3.0 – not any time soon) – we will remove the old behaviors.

Opt-in means that updating your package references doesn’t give you different behavior. You have to choose both the new version and the new behavior.

Right now this looks like:
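
In ConfigureServices, opting in to everything at once uses the compatibility version:

services.AddMvc()
    .SetCompatibilityVersion(CompatibilityVersion.Version_2_1);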

OR
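
Or, staying on the old defaults and opting in to individual behaviors one at a time (a sketch using one of the 2.1 option switches, AllowCombiningAuthorizeFilters):

services.AddMvc()
    .SetCompatibilityVersion(CompatibilityVersion.Version_2_0);

services.Configure<MvcOptions>(options =>
{
    // Opt in to a single 2.1 behavior while keeping the 2.0 defaults elsewhere.
    options.AllowCombiningAuthorizeFilters = true;
});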

What this means

I think this does a few things that are valuable. Consider all of the below as goals or success criteria. We still have to do a good job of understanding your feedback and communicating in order for these things to happen.

For you and for us: We can continue to invest in new ideas and adapt to a changing web landscape.

For you: It’s easy to adopt new versions in small steps.

For us: Streamlines things that require a lot of effort to support, document, and respond to feedback.

For us: Simplifies the decision process of how to make and communicate a change.

What we’re not doing

While we’re giving you fine-grained control over which new behaviors you get, we don’t intend to keep old behaviors forever. This is not a license to live in the past. As stated above, our plan is to update things that are broken and keep moving forward by removing old behaviors over time.

We’re also not treating this new capability as open season on breaking changes. Making any change that impacts developers on our platform has to be justified by the value it provides, and it needs to be comprehensible and actionable by those who are impacted – because we expect all developers to deal with it eventually.

A good candidate change is one that:

  • adds a feature, but with a small break risk for a minority of users (areas for Razor Pages)
  • fixes a big problem, but with a comprehensible impact (exception handling for input formatters)
  • never worked the way we thought (bug), and streamlines something complicated (combining authorization filters)

Note that in all of the cases above, the new behaviors are easier for us to explain and document. We recommend that everyone choose the new behaviors; it’s not a matter of preference.

Give us feedback about this. If you think this plan leaves you out in the cold, let us know how and why.

What’s happening now?

Most of the known work for us has already happened. We’ve made about five design changes to features inside MVC during the 2.1 milestone that deserved a compatibility switch.

You can find a summary of these changes below. My hope is that the documentation added to the specific options and types explains what changes when you opt in to each setting and why we feel it’s important.

General MVC

Combine Authorization Filters

Smarter exception handling for formatters

Smarter validation for enums

Allow non-string types with HeaderModelBinder (2.1.0-preview-2)

JSON Formatter

Better error messages

Razor Pages

Areas for Pages

Appendix A: an auspicious example

I think exception handling for input formatters is probably the best illustrative example of how this philosophy works.

The best starting place is probably to look at the docs that I added in this PR. We have a problem in the 1.X and 2.0 family of MVC releases where any exception thrown by an IInputFormatter will be swallowed by the infrastructure and turned into a model state error. This includes TypeLoadException, NullReferenceException, ThreadAbortException and all other kinds of esoterica.

This is the case because we didn’t have an exception type that says “I failed to process the input, report an error to the client”. We added this in 2.1 and we’ve updated our formatters to use it in the appropriate cases (the XML serializers throw exceptions). However, this can’t help formatters we didn’t write.

This leads to the need for a switch. If you need to use a formatter written against 1.0 that throws an exception and expects MVC to handle it, that will still work until you opt in to the new behavior. We do plan on removing the old way in 3.0, but this eases the pressure – instead of this problem blocking you from adopting 2.1, you have time to figure out a solution before 3.0 (a long time away).

——

I hope this example provides a little insight into what our process is like. See the relevant links for the in-code documentation about the other changes. We are looking forward to feedback on this, either on GitHub or as comments on this post.

ASP.NET Core 2.1.0-preview1: Improvements for building Web APIs

ASP.NET Core 2.1 adds a number of features that make it easier and more convenient to build Web APIs. These features include Web API controller-specific conventions, more robust input processing and error handling, and JSON Patch improvements.

Please note that some of these features require opting in to the 2.1 MVC compatibility version, so be sure to check out the post on MVC compatibility versions as well.

[ApiController] and ActionResult<T>

ASP.NET Core 2.1 introduces new Web API controller-specific conventions that make Web API development more convenient. These conventions can be applied to a controller using the new [ApiController] attribute:

  • Automatically respond with a 400 when validation errors occur – no need to check the model state in your action method
  • Infer smarter defaults for action parameters: [FromBody] for complex types, [FromRoute] when possible, otherwise [FromQuery]
  • Require attribute routing – actions are not accessible by convention-based routes

You can also now return ActionResult<T> from your Web API actions, which allows you to return arbitrary action results or a specific return type (thanks to some clever use of implicit cast operators). Most Web API action methods have a specific return type, but also need to be able to return multiple different action results.

Here’s an example Web API controller that uses these new enhancements:

[Route("api/[controller]")]
[ApiController]
public class ProductsController : ControllerBase
{
    private readonly ProductsRepository _repository;

    public ProductsController(ProductsRepository repository)
    {
        _repository = repository;
    }

    [HttpGet]
    public IEnumerable<Product> Get()
    {
        return _repository.GetProducts();
    }

    [HttpGet("{id}")]
    public ActionResult<Product> Get(int id)
    {
        if (!_repository.TryGetProduct(id, out var product))
        {
            return NotFound();
        }
        return product;
    }

    [HttpPost]
    [ProducesResponseType(201)]
    public ActionResult<Product> Post(Product product)
    {
        _repository.AddProduct(product);
        return CreatedAtAction(nameof(Get), new { id = product.Id }, product);
    }
}

Because these conventions are more descriptive, tools like Swashbuckle or NSwag can do a better job generating an OpenAPI specification for this Web API that includes information like return types, parameter sources, and possible error responses, all without needing additional attributes.

Better input processing

ASP.NET Core 2.1 does a much better job of providing appropriate error information when the request body fails to deserialize or the JSON is invalid.

For example, in ASP.NET Core 2.0, if your Web API received a request with a JSON property that had the wrong type (like a string instead of an int), you got a generic error message like this:

{
  "count": [
    "The input was not valid."
  ]
}

In 2.1 we provide more detailed error information about what was wrong with the request, including path and line number information:

{
  "count": [
    "Could not convert string to integer: abc. Path 'count', line 1, position 16."
  ]
}

Similarly, if the request is syntactically invalid (e.g., a missing curly brace), then 2.1 will let you know:

{
  "": [
    "Unexpected end when reading JSON. Path '', line 1, position 1."
  ]
}

You can also now add validation attributes to top-level parameters of your action method. For example, you can mark a query string parameter as required like this:

[HttpGet("test/{testId}")]
public ActionResult<TestResult> Get(string testId, [Required]string name)

Problem Details

In this release we added support for RFC 7807 – Problem Details for HTTP APIs – as a standardized format for returning machine-readable error responses from HTTP APIs.

To update your Web API controllers to return Problem Details responses for invalid requests you can add the following code to your ConfigureServices method:

services.Configure<ApiBehaviorOptions>(options =>
{
    options.InvalidModelStateResponseFactory = context =>
    {
        var problemDetails = new ValidationProblemDetails(context.ModelState)
        {
            Instance = context.HttpContext.Request.Path,
            Status = StatusCodes.Status400BadRequest,
            Type = "https://asp.net/core",
            Detail = "Please refer to the errors property for additional details."
        };
        return new BadRequestObjectResult(problemDetails)
        {
            ContentTypes = { "application/problem+json", "application/problem+xml" }
        };
    };
});

You can also return a Problem Details response from your API action for an invalid request using the ValidationProblem() helper method.
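
A sketch of what that can look like (the Widget type and its validation rule are made up for illustration):

[HttpPost]
public ActionResult<Widget> Post(Widget widget)
{
    if (widget.Name != null && widget.Name.StartsWith("!"))
    {
        ModelState.AddModelError(nameof(widget.Name), "Names cannot start with '!'.");

        // Produces a 400 response with a ValidationProblemDetails body built from ModelState.
        return ValidationProblem();
    }

    return widget;
}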

An example Problem Details response for an invalid request looks like this (where the content type is application/problem+json):

{
  "errors": {
    "Text": [
      "The Text field is required."
    ]
  },
  "type": "https://asp.net/core",
  "title": "One or more validation errors occurred.",
  "status": 400,
  "detail": "Please refer to the errors property for additional details.",
  "instance": "/api/values"
}

JSON Patch improvements

JSON Patch defines a JSON document structure for implementing HTTP PATCH semantics. A JSON Patch document defines a sequence of operations (add, remove, replace, copy, etc.) that can be applied to a JSON resource.

ASP.NET Core has supported JSON Patch since it first shipped, but in 2.1 we've added support for the test operation. The test operation lets you check for specific values before applying the patch; if any test operation fails, the whole patch fails.

A Web API controller action that supports JSON Patch looks like this:

[HttpPatch("{id}")]
public ActionResult<Value> Patch(int id, JsonPatchDocument<Value> patch)
{
    var value = new Value { ID = id, Text = "Do" };

    patch.ApplyTo(value, ModelState);

    if (!ModelState.IsValid)
    {
        return BadRequest(ModelState);
    }

    return value;
}

Where the Value type is defined as follows:

public class Value
{
    public int ID { get; set; }

    public string Text { get; set; }

    public IDictionary<int, string> Status { get; } = new Dictionary<int, string>();
}

The following JSON Patch request successfully adds a value to the Status dictionary (note that we've also added support for non-string dictionary keys, like int, Guid, etc.):

Successful request

[
  { "op": "test", "path": "/text", "value": "Do" },
  { "op": "add", "path": "/status/1", "value": "Done!" }
]

Successful response

{
  "id": 123,
  "text": "Do",
  "status": {
    "1": "Done!"
  }
}

Conversely, the following JSON Patch request fails because the value of the text property doesn't match:

Failed request

[
  { "op": "test", "path": "/text", "value": "Do not" },
  { "op": "add", "path": "/status/1", "value": "Done!" }
]

Failed response

{
  "Value": [
    "The current value 'Do' at path 'text' is not equal to the test value 'Do not'."
  ]
}

Summary

We hope you enjoy these Web API improvements. Please give them a try and let us know what you think. If you hit any issues or have feedback please file issues on GitHub.
