
Notes from the ASP.NET Community Standup – November 1, 2016


This is the next in a series of blog posts that will cover the topics discussed in the ASP.NET Community Standup. The community standup is a short video-based discussion with some of the leaders of the ASP.NET development teams covering the accomplishments of the team on the new ASP.NET Core framework over the previous week. Join Scott Hanselman, Damian Edwards, Jon Galloway, and an occasional guest or two as they discuss new features and ask for feedback on important decisions being made by the ASP.NET development teams.

Each week the standup is hosted live on Google Hangouts and the team publishes the recorded video of their discussion to YouTube for later reference. The guys answer your questions LIVE and unfiltered. This is your chance to ask about the why and what of ASP.NET! Join them each Tuesday on live.asp.net where the meeting’s schedule is posted and hosted.


ASP.NET Community Standup 11/01/2016

Community Links

Puma Scan is a software security Visual Studio analyzer extension that is built on top of Roslyn.

Plug ASP.NET Core Middleware in MVC Filters Pipeline 

Building An API with NancyFX 2.0 + Dapper

.NET Standard based Windows Service support for .NET 

Accessing the HTTP Context on ASP.NET Core

Accessing services when configuring MvcOptions in ASP.NET Core 

Adding Cache-Control headers to Static Files in ASP.NET Core 

Building .Net Core On Travis CI

Umbraco CLI running on ASP.NET Core

Testing SSL in ASP.NET Core

ASP.NET API Versioning 

Creating a new .NET Core web application, what are your options?

Using MongoDB .NET Driver with .NET Core WebAPI

ASP.NET Core project targeting .NET 4.5.1 running on Raspberry Pi

Free ASP.NET Core 1.0 Training on Microsoft Virtual Academy

Using dotnet watch test for continuous testing with .NET Core and XUnit.net 

Azure Log Analytics ASP.NET Core Logging extension

Bearer Token Authentication in ASP.NET Core

ASP.NET Core Module
Removal of dnvm scripts for the aspnet/home repo

Demos

ASP.NET Core 1.1 Preview 1 added several new features around Azure integration, performance, and more. In this community standup, Damian walks us through how he upgraded the live.asp.net site to ASP.NET Core 1.1, how to add view compilation, and how to use the Azure App Service logging provider.

Upgrading Existing Projects

Before you start using any of the ASP.NET Core 1.1 Preview 1 features, make sure to do the following:

  • Install .NET Core 1.1 Preview 1 SDK
  • Upgrade your existing project from .NET Core 1.0 to .NET Core 1.1 Preview 1. Make sure to also update your ASP.NET Core packages to their latest 1.1.0-preview1 versions.
  • Update the netcoreapp1.0 target framework to netcoreapp1.1.

View compilation

Damian went over how he added view compilation to live.asp.net. Typically, your Razor views get compiled the first time someone visits the site. The advantage of view compilation is that you can now precompile the Razor views that your application references and deploy them. This feature allows for faster startup times in your application since your views are ready to go.

To start using precompiled views in your application, follow these steps.

  • Add  View compilation package
            "Microsoft.AspNetCore.Mvc.Razor.ViewCompilation.Design": {
                "version": "1.1.0-preview4-final",
                "type": "build"
             }
  • Add View compilation tool
             "Microsoft.AspNetCore.Mvc.Razor.ViewCompilation.Tools": {
                 "version": "1.1.0-preview4-final"
               }
  • Include the post-publish script to invoke precompilation

Now that live.asp.net is configured to use view compilation, it will precompile the Razor views. Once you’ve published your application, you will notice that your PublishOutput folder no longer contains a Views folder. Instead, you will see appname.PrecompileViews.dll.

Azure App Service logging Provider

Damian also configured live.asp.net to use the Azure App Service logging provider. By adding the Microsoft.AspNetCore.AzureAppServicesIntegration package and calling the UseAzureAppServices method in Program.cs, diagnostic logs are now turned on in Azure.
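A minimal Program.cs sketch of where that call fits (everything except UseAzureAppServices is the standard ASP.NET Core 1.1 host setup; UseAzureAppServices comes from the new package):

using System.IO;
using Microsoft.AspNetCore.Hosting;

public class Program
{
    public static void Main(string[] args)
    {
        var host = new WebHostBuilder()
            .UseKestrel()
            .UseContentRoot(Directory.GetCurrentDirectory())
            // Turns on the Azure App Service diagnostics and logging integration
            // (from the Microsoft.AspNetCore.AzureAppServicesIntegration package).
            .UseAzureAppServices()
            .UseStartup<Startup>()
            .Build();

        host.Run();
    }
}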

With application logging turned on, you can choose the log level you want and view the logs in the Kudu console or in Visual Studio.


Application Logs in Kudu

This week Damian went over how to use some of the new features in ASP.NET Core 1.1  Preview 1. For more details on ASP.NET Core 1.1 please check out the announcement from last month. Thanks for watching.


MVP Hackathon 2016: Cool Projects from Microsoft MVPs


Last week was the annual MVP Summit on Microsoft’s Redmond campus.  We laughed, we cried, we shared stories around the campfire, and we even made s’mores.  Ok, I’m stretching it a bit about the last part, but we had a good time introducing the MVPs to some of the cool technologies you saw at Connect() yesterday, and some that are still in the works for 2017.  As part of the MVP Summit event, we hosted a hackathon to explore some of the new features and allow attendees to write code along with Microsoft engineers and publish that content as an open source project.

We shared the details of some of these projects with the supervising program managers covering Visual Studio, ASP.NET, and the .NET framework.  Those folks were impressed with the work that was accomplished, and now we want to share these accomplishments with you.  This is what a quick day’s worth of work can accomplish when working with your friends.

MVP Hackers at the end of the Hackathon

  • Shaun Luttin wrote a console application in F# that plays a card trick.  Source code at:  https://github.com/shaunluttin/magical-mathematics
  • Rainer Stropek created a docker image to fully automate the deployment and running of a Minecraft server with bindings to allow interactions with the server using .NET Core.  Rainer summarized his experience and the docker image on his blog
  • Tanaka Takayoshi wrote an extension command called “add” for the dotnet command-line interface.  The Add command helps format new classes properly with namespace and initial class declaration code when you are working outside of Visual Studio. Tanaka’s project is on GitHub.
  • Tomáš Herceg wrote an extension for Visual Studio 2017 that supports development with the DotVVM framework for ASP.NET.  DotVVM is a front-end framework that dramatically simplifies the amount of code you need to write in order to create useful web UI experiences.  His project can be found on GitHub at: https://github.com/riganti/dotvvm   See the animated gif below for a sample of how DotVVM can be coded in Visual Studio 2017:

    DotVVM Intellisense in action

  • The ASP.NET Monsters wrote Pugzor, a drop-in replacement for the Razor view engine using the “Pug” JavaScript library as the parser and renderer. It can be added side-by-side with Razor in your project and enabled with one line of code. If you have Pug templates (previously called Jade), these now work as-is inside ASP.NET Core MVC. The ASP.NET Monsters are: Simon Timms, David Paquette and James Chambers

    Pugzor

  • Alex Sorkoletov wrote an add-in for Xamarin Studio that helps clean up unused using statements and sort them alphabetically on every save.  The project can be found at: https://github.com/alexsorokoletov/XamarinStudio.SortRemoveUsings
  • Remo Jansen put together an extension for Visual Studio Code to display class diagrams for TypeScript.  The extension is in alpha, but looks very promising on his GitHub project page.

    Visual Studio Code – TypeScript UML Generator

  • Giancarlo Lelli put together an extension to help deploy front-end customizations for Dynamics 365 directly from Visual Studio.  It uses the TFS Client API to detect any changes in your workspace and check in everything on your behalf. It is able to handle conflicts, preventing you from overwriting the work of other colleagues. The extension keeps the same folder structure you have in your solution explorer inside the CRM. It also supports automatically adding new web resources to a specific CRM solution. This extension uses the VS output window to provide feedback during the whole publish process.  The project can be found on its GitHub page.

    Publish to Dynamics

  • Simone Chiaretta wrote an extension for the dotnet command-line tool to manage the properties in .NET Core projects based on MSBuild. It allows setting and removing the version number, the supported runtimes and the target framework (and more properties are being added soon). And it also lists all the properties in the project file.  You can extend your .NET CLI with his NuGet package or grab the source code from GitHub.  He’s written a blog post with more details as well.

    The dotnet prop command

  • Nico Vermeir wrote an amazing little extension that enables the Surface Dial to help run the Visual Studio debugger.  He wrote a blog post about it and published his source code on GitHub.
  • David Gardiner wrote a Roslyn Analyzer that provides tips and best practice recommendations when authoring extensions for Visual Studio.  Source code is on GitHub.

    VSIX Analyzers

  • Cecilia Wirén wrote an extension for Visual Studio that allows you to add a folder on disk as a solution folder, preserving all files in the folder.  Cecilia’s code can be found on GitHub

    Add Folder as Solution Folder

  • Terje Sandstrom updated the NUnit 3 adapter to support Visual Studio 2017.

    NUnit Results in Visual Studio 2017

     

  • Ben Adams made the Kestrel web server for ASP.NET Core 8% faster while sitting in with some of the ASP.NET Core folks.

Summary

We had an amazing time working together, pushing each other to develop and build more cool things that could be used with Visual Studio 2015, 2017, Code, and Xamarin Studio.  Stepping away from the event and reading about these cool projects inspires me to write more code, and I hope it does the same for you.  Would you be interested in participating in a hackathon with MVPs or Microsoft staff?  Let us know in the comments below.

 

Notes from the ASP.NET Community Standup – November 22, 2016


This is the next in a series of blog posts that will cover the topics discussed in the ASP.NET Community Standup. The community standup is a short video-based discussion with some of the leaders of the ASP.NET development teams covering the accomplishments of the team on the new ASP.NET Core framework over the previous week. Join Scott Hanselman, Damian Edwards, Jon Galloway, and an occasional guest or two as they discuss new features and ask for feedback on important decisions being made by the ASP.NET development teams.

This week the team hosted the standup on Aerial Spaces.  Every week’s episode is published on YouTube for later reference. The team answers your questions LIVE and unfiltered. This is your chance to ask about the why and what of ASP.NET! Join them each Tuesday on live.asp.net where the meeting’s schedule is posted and hosted.

ASP.NET Community Standup 11/22/2016

Community Links

Announcing the Fastest ASP.NET Yet, ASP.NET Core 1.1 RTM

Announcing .NET Core 1.1

App Service on Linux now supports Containers and ASP.NET Core

ASP.NET Core Framework Benchmarks Round 13

MVP Hackathon 2016: Cool Projects from Microsoft MVPs

Damian Edwards live coding live.asp.net

EDI.Net Serializer/Deserializer

ASP.NET Core’s URL Rewrite Middleware behind a load balancer

ASP.NET Core  Workshops and Code Labs

Unexpected Behavior in LanguageViewLocationExpander

Project.json to CSproj

OrchardCMS Roadmap

ASP.NET Core and the Enterprise Part 3: Middleware

Using .NET Core Configuration with legacy projects

High-Performance Data Pipelines

.NET Core versioning

Not your granddad’s .NET – Pipes Part 1

Accomplishments

TechEmpower Benchmarks

TechEmpower Benchmarks Round 13 came out, and ASP.NET Core placed in the top 10, serving 1,822,366 requests per second in Round 13. Read more


Question and Answers

Question:  Will there be MVC 4 project support in Visual Studio 2017?

— Removed in RC but should be coming back in the next release.

Question: What should I grab ASP.NET Core 1.1 runtime or SDK? / What’s the difference between the .NET Core SDK and runtime?

— In short, if you are a developer, you want to install the .NET Core SDK. If you are a server administrator, you may only want to install the runtime.

Question: Will .csproj tooling be finalized with Visual Studio 2017 RTM?

— Yes, that is the current plan in place. There are a couple of known issues for ASP.NET Core support in Visual Studio 2017; we have listed the workarounds on our GitHub repo.

Question: How far along is the basic pipeline API?

— Currently, it is being tested by some folks at Stack Overflow.  If you would like to get involved, tweet David Fowler.

Question: When will URL based cultural localization be available?

— It’s available now with ASP.NET Core 1.1, using middleware as MVC filters.

In the example from the ASP.NET Core 1.1 announcement, a route-value-based request culture provider is used to establish the current culture for the request through the localization middleware running as an MVC filter.
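A rough sketch of that pattern (the route template, culture list, and class names are illustrative rather than copied from the announcement): the localization middleware is wrapped in a pipeline class, applied to a controller with the MiddlewareFilter attribute introduced in 1.1, and a RouteDataRequestCultureProvider reads the culture from the {culture} route value.

using System.Globalization;
using Microsoft.AspNetCore.Builder;
using Microsoft.AspNetCore.Localization;
using Microsoft.AspNetCore.Localization.Routing;
using Microsoft.AspNetCore.Mvc;

public class LocalizationPipeline
{
    public void Configure(IApplicationBuilder app)
    {
        var supportedCultures = new[] { new CultureInfo("en-US"), new CultureInfo("fr-FR") };

        var options = new RequestLocalizationOptions
        {
            DefaultRequestCulture = new RequestCulture("en-US"),
            SupportedCultures = supportedCultures,
            SupportedUICultures = supportedCultures
        };

        // Read the culture from the {culture} route value before the default providers run.
        options.RequestCultureProviders.Insert(0,
            new RouteDataRequestCultureProvider { Options = options });

        app.UseRequestLocalization(options);
    }
}

[Route("{culture}/[controller]")]
[MiddlewareFilter(typeof(LocalizationPipeline))]
public class HomeController : Controller
{
    public IActionResult Index() => Content($"Current UI culture: {CultureInfo.CurrentUICulture.Name}");
}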

The team will be back on Tuesday the 29th of November to discuss the latest updates on ASP.NET Core.  See you then!

Visual Studio Tools for Azure Functions


Today we are pleased to announce a preview of tools for building Azure Functions for Visual Studio 2015. Azure Functions provide event-based serverless computing that make it easy to develop and scale your application, paying only for the resources your code consumes during execution. This preview offers the ability to create a function project in Visual Studio, add functions using any supported language, run them locally, and publish them to Azure. Additionally, C# functions support both local and remote debugging.

In this post, I’ll walk you through using the tools by creating a C# function, covering some important concepts along the way. Then, once we’ve seen the tools in action I’ll cover some known limitations we currently have.

Also, please take a minute and let us know who you are so we can follow up and see how the tools are working.

Getting Started

Before we dive in, there are a few things to note:

For our sample function, we’ll create a C# function that is triggered when a message is published into a storage Queue, reverses it, and stores both the original and reversed strings in Table storage.

  • To create a function, go to File -> New Project, then select the “Cloud” node under the “Visual C#” section and choose the “Azure Functions (Preview)” project type
  • This will give us an empty function project. There are a few things to note about the structure of the project: most notably, it contains a host.json file for configuring the function host and an appsettings.json file for app settings such as connection strings
  • For the purposes of this blog post, we’ll add an entry that speeds up the queue polling interval from the default of once a minute to once a second by setting the “maxPollingInterval” in the host.json (value is in ms)
  • Next, we’ll add a function to the project, by right clicking on the project in Solution Explorer, choose “Add” and then “New Azure Function”
  • This will bring up the New Azure Function dialog which enables us to create a function using any language supported by Azure Functions
  • For the purposes of this post we’ll create a “QueueTrigger – C#” function, fill in the “Queue name” field, “Storage account connection” (this is the name of the key for the setting we’ll store in “appsettings.json”), and the “Name” of our function
  • This will create a new folder in the project with the name of our function, containing the following key files: function.json, which holds the trigger and binding definitions, and run.csx, which holds the function code
  • The last thing we need to do in order to hook up the function to our storage Queue is provide the connection string in the appsettings.json file (in this case by setting the value of “AzureWebJobsStorage”)
  • Next, we’ll edit the function.json file to add two bindings: one that gives us the ability to read from the table we’ll be pushing to, and another that gives us the ability to write entries to the table
  • Finally, we’ll write our function logic in the run.csx file (a rough sketch of what this might look like appears just after this list)
  • Running the function locally works like any other project in Visual Studio: Ctrl + F5 starts it without debugging, and F5 (or the Start/Play button on the toolbar) launches it with debugging. Note: Debugging currently only works for C# functions. Let’s hit F5 to debug the function.
  • The first time we run the function, we’ll be prompted to install the Azure Functions CLI (command line) tools. Click “Yes” and wait for them to install; our function app is now running locally. We’ll see a command prompt pop up with some messages from the Azure Functions CLI. If there were any compilation problems, this is where the messages would appear, since functions are dynamically compiled by the CLI tools at runtime.
  • We now need to manually trigger our function by pushing a message into the queue with Azure Storage Explorer. This will cause the function to execute and hit our breakpoint in Visual Studio.
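As referenced above, here is a rough sketch of what the run.csx logic might look like (only the queue trigger and the table output binding are shown, and the parameter names are placeholders that must match the binding names defined in function.json):

// run.csx
using System;

public class ReversedEntry
{
    public string PartitionKey { get; set; }
    public string RowKey { get; set; }
    public string Original { get; set; }
    public string Reversed { get; set; }
}

public static void Run(string myQueueItem, ICollector<ReversedEntry> outputTable, TraceWriter log)
{
    // Reverse the incoming queue message.
    var chars = myQueueItem.ToCharArray();
    Array.Reverse(chars);
    var reversed = new string(chars);

    // Write both the original and reversed strings to Table storage.
    outputTable.Add(new ReversedEntry
    {
        PartitionKey = "strings",
        RowKey = Guid.NewGuid().ToString(),
        Original = myQueueItem,
        Reversed = reversed
    });

    log.Info($"Reversed '{myQueueItem}' to '{reversed}'");
}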

Publishing to Azure

  • Now that we’ve tested the function locally, we’re ready to publish our function to Azure. To do this, right-click on the project and choose “Publish…”, then choose “Microsoft Azure App Service” as the publish target
  • Next, you can either pick an existing app, or create a new one. We’ll create a new one by clicking the “New…” button on the right side of the dialog
  • This will pop up the provisioning dialog that lets us choose or setup the Azure environment (we can customize the names or choose existing assets). These are:
    • Function App Name: the name of the function app, this must be unique
    • Subscription: the Azure subscription to use
    • Resource Group: the resource group to add the Function App to
    • App Service Plan: What app service plan you want to run the function on. For complete information read about hosting plans, but it’s important to note that if you choose an existing App Service plan you will need to set the plan to “always on” or your functions won’t always trigger (Visual Studio automatically sets this if you create the plan from Visual Studio)
  • Now we’re ready to provision (create) all of the assets in Azure. Note that the “Validate Connection” button does not work in this preview for Azure Functions
  • Once provisioning is complete, click “Publish” to publish the Function to Azure. We now have a publish profile which means all future publishes will skip the provisioning steps
    Note: If you publish to a Consumption plan, there is currently a bug where new triggers that you define (other than HTTP) will not be registered in Azure, which can cause your functions not to trigger correctly. To work around this, open your Function App in the Azure portal and click the “Refresh” button on the lower left to fix the trigger registration. This bug with publish will be fixed on the Azure side soon.
  • To verify our function is working correctly in Azure, we’ll click the “Logs” button on the function’s page, and then push a message into the Queue using Storage Explorer again. We should see a message that the function successfully processed the message
  • The last thing to note, is that it is possible to remote debug a C# function running in Azure from Visual Studio. To do this:
    • Open Cloud Explorer
    • Browse to the Function App
    • Right click and choose “Attach Debugger”

Known Limitations

As previously mentioned, this is the first preview of these tools, and we have several known limitations with them. They are as follows:

  • IntelliSense: IntelliSense support is limited, and available only for C#, and JavaScript by default. F#, Python, and PowerShell support is available if you have installed those optional components. It is also important to note that C# and F# IntelliSense is limited at this point to classes and methods defined in the same .csx/.fsx file and a few system namespaces.
  • Cannot add new files using “Add New Item”: Adding new files to your function (e.g. .csx or .json files) is not available through “Add New Item”. The workaround is to add them using file explorer, the Add New File extension, or another tool such as Visual Studio Code.
  • Functions published from Visual Studio are not properly registered in Azure: This is caused by a bug in the Azure service for Functions running on a Consumption plan. The workaround is to open the Function App’s page in the Azure portal and click the “Refresh” button in the bottom left. This will register the functions with Azure.
  • Function bindings generate incorrectly when creating a C# Image Resize function: The settings for the binding “Azure Storage Blob out (imageSmall)” are overridden by the settings for the binding “Azure Storage Blob out (imageMedium)” in the generated function.json. The workaround is to go to the generated function.json and manually edit the “imageSmall” binding.

Conclusion

Please download and try out this preview of Visual Studio Tools for Azure Functions and let us know who you are so we can follow up and see how they are working. Additionally, please report any issues you encounter on our GitHub repo (include “Visual Studio” in the issue title) and provide any comments or questions you have below, or via Twitter.

Introducing the ASP.NET Async OutputCache Module


OutputCacheModule is ASP.NET’s default handler for storing the generated output of pages, controls, and HTTP responses.  This content can then be reused when appropriate to improve performance. Prior to the .NET Framework 4.6.2, the OutputCache Module did not support async read/write to the storage.

Starting with the .NET Framework 4.6.2 release, we introduced a new OutputCache async provider abstract class named OutputCacheProviderAsync, which defines interfaces for an async OutputCache provider to enable asynchronous access to a shared OutputCache. The Async OutputCache Module that supports those interfaces is released as a NuGet package, which you can install to any 4.6.2+ web applications.

Benefits of the Async OutputCache Module

It’s all about scalability. The cloud makes it really easy to scale out computing resources to serve large spikes in service requests to an application. When you consider the scalability of an OutputCache, you cannot use an in-memory provider because the in-memory provider does not allow you to share data across multiple web servers.

You will need to store OutputCache data in another storage medium such as Microsoft Azure SQL Database, NoSQL, or Redis Cache.  Currently, the OutputCache interaction with these storage mediums is restricted to run synchronously. With this update, the new async OutputCache module enables you to read and write data from these storage providers asynchronously. Async I/O operations help release threads quicker than synchronous I/O operations, which allows ASP.NET to handle other requests. If you are interested in more details about programming asynchronously and the use of the async and await keywords, you can read Stephen Cleary’s excellent article on Async Programming : Introduction to Async/Await on ASP.NET.

How to use the Async OutputCache Module

  1. Target your application to 4.6.2+.

The OutputCacheProviderAsync abstract class was introduced in the .NET Framework 4.6.2; therefore, you need to target your application to .NET Framework 4.6.2 or above in order to use the Async OutputCache Module. Download the .NET Framework 4.6.2 Developer Pack if you do not have it installed yet and update your application’s web.config targetFramework attributes as demonstrated below:

<system.web>
  <compilation debug="true" targetFramework="4.6.2"/>
  <httpRuntime targetFramework="4.6.2"/>
</system.web>
  2. Add the Microsoft.AspNet.OutputCache.OutputCacheModuleAsync NuGet package.

Use the NuGet package manager to install the Microsoft.AspNet.OutputCache.OutputCacheModuleAsync package.  This will add a reference to the Microsoft.AspNet.OutputCache.OutputCacheModuleAsync.dll and add the following configuration into the web.config file.

<system.webServer>
  <modules>
    <remove name="OutputCache"/>
    <add name="OutputCache" type="Microsoft.AspNet.OutputCache.OutputCacheModuleAsync, Microsoft.AspNet.OutputCache.OutputCacheModuleAsync" preCondition="integratedMode"/>
  </modules>
</system.webServer>

Now your application will start using the Async OutputCache Module. If no OutputCache provider is specified in web.config, the module will use the default synchronous in-memory provider, in which case you won’t get the async benefits. We have not yet released an async OutputCache provider, but plan to in the near future. Let’s take a look at how you can implement an async OutputCache provider of your own.

How to implement an async OutputCache Provider

An async OutputCache provider just needs to derive from the OutputCacheProviderAsync abstract class.

More specifically, the async provider should implement the following 8 APIs.

Add(String, Object, DateTime): Inserts the specified entry into the output cache. (Inherited from OutputCacheProvider.)
AddAsync(String, Object, DateTime): Asynchronously inserts the specified entry into the output cache.
Get(String): Returns a reference to the specified entry in the output cache. (Inherited from OutputCacheProvider.)
GetAsync(String): Asynchronously returns a reference to the specified entry in the output cache.
Remove(String): Removes the specified entry from the output cache. (Inherited from OutputCacheProvider.)
RemoveAsync(String): Asynchronously removes the specified entry from the output cache.
Set(String, Object, DateTime): Inserts the specified entry into the output cache, overwriting the entry if it is already cached. (Inherited from OutputCacheProvider.)
SetAsync(String, Object, DateTime): Asynchronously inserts the specified entry into the output cache, overwriting the entry if it is already cached.
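For illustration, here is a skeletal in-memory provider (a sketch only: expiration handling is omitted, the async methods simply wrap their synchronous counterparts, and a real provider would talk to a shared store such as SQL Database or Redis instead of a local dictionary):

using System;
using System.Collections.Concurrent;
using System.Threading.Tasks;
using System.Web.Caching;

public class CustomOutputCacheProvider : OutputCacheProviderAsync
{
    private readonly ConcurrentDictionary<string, object> _entries =
        new ConcurrentDictionary<string, object>();

    public override object Add(string key, object entry, DateTime utcExpiry)
    {
        // Returns the existing entry if one is already cached; otherwise adds the new one.
        return _entries.GetOrAdd(key, entry);
    }

    public override Task<object> AddAsync(string key, object entry, DateTime utcExpiry)
    {
        return Task.FromResult(Add(key, entry, utcExpiry));
    }

    public override object Get(string key)
    {
        object entry;
        return _entries.TryGetValue(key, out entry) ? entry : null;
    }

    public override Task<object> GetAsync(string key)
    {
        return Task.FromResult(Get(key));
    }

    public override void Set(string key, object entry, DateTime utcExpiry)
    {
        _entries[key] = entry;
    }

    public override Task SetAsync(string key, object entry, DateTime utcExpiry)
    {
        Set(key, entry, utcExpiry);
        return Task.CompletedTask;
    }

    public override void Remove(string key)
    {
        object removed;
        _entries.TryRemove(key, out removed);
    }

    public override Task RemoveAsync(string key)
    {
        Remove(key);
        return Task.CompletedTask;
    }
}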

If you want your provider to support Cache Dependency and callback functionality, you will need to implement the interface ICacheDependencyHandler, which is defined within the Microsoft.AspNet.OutputCache.OutputCacheModuleAsync.dll.  You can add this reference by installing the same NuGet package referenced in our web project.

The current version of the Async OutputCache Module does not support registry key or SQL dependencies. Depending on the feedback we hear, we may consider adding them in the future.

Once you have finished implementing your provider class, you can use it in a web application by adding a reference to your library and adding the following configurations into the web.config file:

<system.web>
  <caching>
    <outputCache defaultProvider="CustomOutputCacheProvider">
      <providers>
        <add name="CustomOutputCacheProvider" type="CustomOutputCacheProvider.CustomOutputCacheProvider, CustomOutputCacheProvider" />
      </providers>
    </outputCache>
  </caching>
</system.web>

That should work! If you need some help to get started, here is an example of an in-memory Async OutputCache Provider as a proof of concept. You can see that it has implemented all the APIs needed and is ready to plug in and use.

 

Summary

To wrap up: we have released an async version of the OutputCache Module, which allows ASP.NET to take advantage of modern async techniques to help scale your OutputCache. With this new interface, you can now write your own async OutputCache providers easily. We encourage you to try this module and extend your current OutputCache provider to any storage medium that supports async interactions. We also encourage you to share the providers you write on NuGet.org and let us know about them in the comments area below. Good luck and happy coding! If you have any questions or suggestions, please feel free to reach out to us by leaving your comments here.

Notes from the ASP.NET Community Standup – November 29, 2016


This is the next in a series of blog posts that will cover the topics discussed in the ASP.NET Community Standup. The community standup is a short video-based discussion with some of the leaders of the ASP.NET development teams covering the accomplishments of the team on the new ASP.NET Core framework over the previous week. Join Scott Hanselman, Damian Edwards, and Jon Galloway (Jon’s in Russia this week), plus an occasional guest or two, as they discuss new features and ask for feedback on important decisions being made by the ASP.NET development teams.

Each week the standup is hosted live on Google Hangouts and the team publishes the recorded video of their discussion to YouTube for later reference. The guys answer your questions LIVE and unfiltered. This is your chance to ask about the why and what of ASP.NET! Join them each Tuesday on live.asp.net where the meeting’s schedule is posted and hosted.

ASP.NET Community Standup 11/29/2016

Quick Note: Jon’s in Russia this week, so we don’t have any community links this week.

Question and Answers

This week, Damian and Scott jumped right into questions. Damian had a question on Hanselman’s post “Publishing ASP.NET Core 1.1 applications to Azure using git deploy“.

Damian’s Question: “How did you create a project without a global.json? …. In Visual Studio today the project always includes a global.json… did you create it on a Mac?”

— Scott: “dotnet new in the command line.”

Damian went on to explain the difference between a new application created using the dotnet cli and one created in Visual Studio. When a .NET Core project is created using the dotnet new templates, it does not come with solution level files like global.json.

dotnet new template files

Visual Studio 2015 template with global.json

Today, global.json is how you set the version of the .NET Core SDK needed for your application. Remember that unless you specify the SDK version, .NET Core will use the latest one on your machine and your app may not work. If you find yourself in a similar scenario to the one mentioned, this is how you fix it.

Find out what version of the SDK you have installed locally by running dotnet --version.

Add global.json to your project and include the appropriate version of the SDK.

Check out Hanselman’s post “Publishing ASP.NET Core 1.1 applications to Azure using git deploy” for more information on the above.

Question: What are we doing to simplify the Docker versioning numbers?

— Now that we have released 1.0 and 1.1, we can make a fair assessment of how well the versioning strategy is working. Based on those experiences, we are going to make some adjustments.

Question: Why isn’t ASP.NET Core 1.1 backward compatible? I have a lot of 1.0 libraries.

— The intent is that minor releases, like going from 1.0 to 1.1 of any package or component, shouldn’t break things. However, the support matrix for .NET Core says you can’t mix Current components with LTS components. For example, you can’t use ASP.NET Core hosting 1.0 with MVC for ASP.NET Core 1.1.

See you at our next community standup!

 

New Updates to Web Tools in Visual Studio 2017 RC


Update 12/12: There is a bug in the Visual Studio 2017 installer: if you upgrade an existing RC installation to the 12/12 RC update, it uninstalls IIS Express and Web Deploy.  The fix is to manually re-install IIS Express and Web Deploy.  For details see our known issues page.

Today we announced an update to Visual Studio 2017 RC that includes a variety of improvements for both ASP.NET and ASP.NET Core projects. If you’ve already installed Visual Studio 2017 RC then these updates will be pushed to you automatically. Otherwise, simply install Visual Studio 2017 RC and you will get the latest updates. Below is a summary of the improvements to the Web tools in this release:

  • The ability to turn off script debugging for Chrome and Internet Explorer if you prefer to use the in-browser tools. To do this, go to Debug -> Options, and uncheck “Enable JavaScript debugging for ASP.NET (Chrome and IE)”.
  • Bower packages now restore correctly without any manual workarounds required.
  • General stability improvements for ASP.NET Core applications, including:
    • Usability and stability improvements for creating ASP.NET Core apps with Docker containers. Most notably, we’ve fixed an issue so that when provisioning an app in Azure App Service, new resource groups no longer need to be created in the same region as the App Service plan.
    • Entity Framework Core commands such as Add-Migration and Update-Database can be invoked from the NuGet Package Manager Console.
    • ASP.NET Core applications now work with Windows Authentication.
    • Lots of improvements to the .NET Core tooling. For complete details see the .NET team blog post.

Thanks for trying out this latest update of Visual Studio 2017! For an up to date list of known issues see our GitHub page, and keep the feedback coming by reporting any issues using the built-in feedback tools.

Announcing Microsoft ASP.NET WebHooks V1


We are very happy to announce ASP.NET WebHooks V1 making it easy to both send and receive WebHooks with ASP.NET.

WebHooks provide a simple pub/sub model for wiring together Web APIs and services with your code. A WebHook can be used to get notified when a file has changed in Dropbox, a code change has been committed to GitHub, a payment has been initiated in PayPal, a card has been created in Trello, and much more — the possibilities are endless! When subscribing, you provide a callback URI where you want to be notified. When an event occurs, an HTTP POST request is sent to your callback URI with information about what happened so that your Web app can act accordingly. WebHooks happen without polling and with no need to hold open a network connection while waiting for notifications.

Because of their simplicity, WebHooks are already exposed by most popular services and Web APIs. To help manage WebHooks, Microsoft ASP.NET WebHooks makes it easier to both send and receive WebHooks as part of your ASP.NET application:

The two parts can be used together or apart depending on your scenario. If you only need to receive WebHooks from other services, then you can use just the receiver part; if you only want to expose WebHooks for others to consume, then you can do just that.

In addition to hosting your own WebHook server, ASP.NET WebHooks are part of Azure Functions where you can process WebHooks without hosting or managing your own server! You can even go further and host an Azure Bot Service using Microsoft Bot Framework for writing cool bots talking to your customers!

The WebHook code targets ASP.NET Web API 2 and ASP.NET MVC 5, and is available as open source on GitHub and as NuGet packages.

A port to ASP.NET Core is being planned, so please stay tuned!

Receiving WebHooks

Dealing with WebHooks depends on who the sender is. Sometimes there are additional steps when registering a WebHook, such as verifying that the subscriber is really listening. Often the security model varies quite a bit. Some WebHooks provide a push-to-pull model where the HTTP POST request only contains a reference to the event information, which is then retrieved independently.

The purpose of Microsoft ASP.NET WebHooks is to make it both simpler and more consistent to wire up your API without spending a lot of time figuring out how to handle any WebHook variant:

WebHookReceivers
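To enable a particular receiver, you install its NuGet package and initialize it in the Web API configuration. A minimal sketch, using the GitHub receiver as an example (each service has its own Initialize* extension from the corresponding Microsoft.AspNet.WebHooks.Receivers.* package):

using System.Web.Http;

public static class WebApiConfig
{
    public static void Register(HttpConfiguration config)
    {
        config.MapHttpAttributeRoutes();

        // Registers the GitHub WebHook receiver; the shared secret is read from
        // an application setting (for example MS_WebHookReceiverSecret_GitHub).
        config.InitializeReceiveGitHubWebHooks();
    }
}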

A WebHook handler is where you process the incoming WebHook. Here is a sample handler illustrating the basic model. No registration is necessary – it will automatically get picked up and called:

public class MyHandler : WebHookHandler
{
    // The ExecuteAsync method is where to process the WebHook data regardless of receiver
    public override Task ExecuteAsync(string receiver, WebHookHandlerContext context)
    {
        // Get the event type
        string action = context.Actions.First();

        // Extract the WebHook data as JSON or any other type as you wish
        JObject data = context.GetDataOrDefault<JObject>();

        return Task.FromResult(true);
    }
}

Finally, we want to ensure that we only receive HTTP requests from the intended party. Most WebHook providers use a shared secret which is created as part of subscribing for events. The receiver uses this shared secret to validate that the request comes from the intended party. It can be provided by setting an application setting in the Web.config file, or better yet, configured through the Azure portal or even retrieved from Azure Key Vault.

For more information about receiving WebHooks and lots of samples, please see these resources:

Sending WebHooks

Sending WebHooks is slightly more involved in that there are more things to keep track of. To support other APIs registering for WebHooks from your ASP.NET application, we need to provide support for:

  • Exposing which events subscribers can subscribe to, for example Item Created and Item Deleted;
  • Managing subscribers and their registered WebHooks which includes persisting them so that they don’t disappear;
  • Handling per-user events in the system and determining which WebHooks should get fired, so that WebHooks go to the correct receivers. For example, if user A caused an Item Created event to fire, then determine which WebHooks registered by user A should be sent. We don’t want events for user A to be sent to user B;
  • Sending WebHooks to receivers with matching WebHook registrations.

As described in the blog Sending WebHooks with ASP.NET WebHooks Preview, the basic model for sending WebHooks works as illustrated in this diagram:

WebHooksSender

Here we have a regular Web site (for example deployed in Azure) with support for registering WebHooks. WebHooks are typically triggered as a result of incoming HTTP requests through an MVC controller or a WebAPI controller. The orange blocks are the core abstractions provided by ASP.NET WebHooks:

  1. IWebHookStore: An abstraction for storing WebHook registrations persistently. Out of the box we provide support for Azure Table Storage and SQL but the list is open-ended.
  2. IWebHookManager: An abstraction for determining which WebHooks should be sent as a result of an event notification being generated. The manager can match event notifications with registered WebHooks as well as applying filters.
  3. IWebHookSender: An abstraction for sending WebHooks determining the retry policy and error handling as well as the actual shape of the WebHook HTTP requests. Out of the box we provide support for immediate transmission of WebHooks as well as a queuing model which can be used for scaling up and out, see the blog New Year Updates to ASP.NET WebHooks Preview for details.
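For example, once an event occurs in the application, a Web API controller can ask the manager to fire the matching WebHooks via the NotifyAsync extension method. A rough sketch (the controller, event name, and payload are illustrative):

using System.Threading.Tasks;
using System.Web.Http;
using Microsoft.AspNet.WebHooks;

public class ItemsController : ApiController
{
    [HttpPost]
    public async Task<IHttpActionResult> Post(string item)
    {
        // ... create the item here ...

        // Fires the WebHooks registered by the current user that match the
        // "item.created" event, passing the payload as the WebHook body.
        await this.NotifyAsync("item.created", new { Item = item });

        return Ok();
    }
}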

The registration process can happen through any number of mechanisms as well. Out of the box we support registering WebHooks through a REST API but you can also build registration support as an MVC controller or anything else you like.

It’s also possible to generate WebHooks from inside a WebJob. This enables you to send WebHooks not just as a result of incoming HTTP requests but also as a result of messages being sent on a queue, a blob being created, or anything else that can trigger a WebJob:

WebHooksWebJobsSender

The following resources provide details about building support for sending WebHooks as well as samples:

Thanks for all the feedback and comments throughout the development process; it is very much appreciated!

Have fun!

Henrik


Notes from the ASP.NET Community Standup – December 13, 2016


This is the next in a series of blog posts that will cover the topics discussed in the ASP.NET Community Standup. The community standup is a short video-based discussion with some of the leaders of the ASP.NET development teams covering the accomplishments of the team on the new ASP.NET Core framework over the previous week. Join Scott Hanselman, Damian Edwards, Jon Galloway, and an occasional guest or two as they discuss new features and ask for feedback on important decisions being made by the ASP.NET development teams.

Each week the standup is hosted live on Google Hangouts and the team publishes the recorded video of their discussion to YouTube for later reference. The guys answer your questions LIVE and unfiltered. This is your chance to ask about the why and what of ASP.NET! Join them each Tuesday on live.asp.net where the meeting’s schedule is posted and hosted.


ASP.NET Community Standup 12/14/2016

Community Links

Rethinking email confirmation

Managing Cookie Lifetime with ASP.NET Core OAuth 2.0 providers

Build a REST API for your Mobile Apps with ASP.NET Core

IdentityServer4 and ASP.NET Core 1.1

Url culture provider using middleware as filters in ASP.NET Core 1.1.0

Making Application Insight Fast and Secure

Simple SQL Localization NuGet package

Building Application Insights Logging Provider for ASP.NET Core

Generic Repository Pattern In ASP.NET Core

Integration Testing with Entity Framework Core and SQL Server

Bare metal APIs with ASP.NET Core MVC

Accessing HttpContext outside of framework components in ASP.NET Core

Optimize expression compilation memory usage

ASP.NET Core Response Optimization

Angular 2 and ASP.NET Core MVC

Dockerizing a Real World ASP.NET Core Application

Convert ASP.NET Web Servers to Docker with Image2Docker

Sharing code across .NET platforms with .NET Standard

Multiple Versions of .NET Core Runtimes and SDK Tools SxS Survive Guide

Migration to ASP.NET Core: Considerations and Strategies

Updating Visual Studio 2017 RC – .NET Core Tooling improvement

Accomplishments

On December 12th, we announced updates to the .NET Core tooling for Visual Studio 2017 RC. This update came with enhancements and bug fixes to the earlier release of VS 2017 .NET Core tooling. Some areas addressed include:

csproj file simplification: .NET Core project files now use an even more simplified syntax, making them easier to read.

CLI commands added: New commands were added for adding and removing project-to-project references.

Overall quality improved: Bug fixes in xproj to csproj migration, project to project references, NuGet, MSBuild and ASP.NET Core with Docker.

For more details on the .NET Core Tooling improvements please read the announcement here.

Questions and Answers

Question: With the new VS 2017 .NET Core tooling updates, I get a csproj file when I create a new application. Will I ever want to use project.json again?

— The logic is when you go into a new folder and type dotnet, it needs to find an SDK. It will by default use the latest version of the dotnet SDK available. However, by using global.json you can specify previous versions of the SDK you would like to use.

Question: Is global.json still the router for the dotnet SDK? (Check out this post for reference.)

— In the current build it is, but it will be replaced. The intent is that we will still support side-by-side SDKs with the ability to switch between them.

See you at our next community standup!

 

 

Notes from the ASP.NET Community Standup – January 3, 2017


This is the next in a series of blog posts that will cover the topics discussed in the ASP.NET Community Standup. The community standup is a short video-based discussion with some of the leaders of the ASP.NET development teams covering the accomplishments of the team on the new ASP.NET Core framework over the previous week. Join Scott Hanselman, Damian Edwards, Jon Galloway, and an occasional guest or two as they discuss new features and ask for feedback on important decisions being made by the ASP.NET development teams.

Each week the standup is hosted live on Google Hangouts and the team publishes the recorded video of their discussion to YouTube for later reference. The guys answer your questions LIVE and unfiltered. This is your chance to ask about the why and what of ASP.NET! Join them each Tuesday on live.asp.net where the meeting’s schedule is posted and hosted.


ASP.NET Community Standup 1/3/2017

Community Links

Custom Tag Helper: Toggling Visibility On Existing HTML elements

Introducing a new Markdown View Engine for ASP.NET Core

Building production ready Angular apps with Visual Studio and ASP.NET Core

Cooking with ASP.NET Core and Angular 2

Angular2 CLI with ASP.NET Core application – tutorial

A Year of Open Source (2016)

Bootstrap Flexbox Navbar

React App +  ASP.NET Core

Azure AD as a Identity Provider in ASP.NET Core application

IdentityServer4.1.0.0

Use the mssql extension for Visual Studio Code

How to enable gZip compression in ASP.NET Core

Adding static file caching to live.asp.net

Automating Deployment Of ASP.NET Core To Azure App Service From Linux

Orchestrating multi service asp.net core application using docker-compose

Creating a WebSockets middleware for ASP .NET Core

Fluent URL builder and testable HTTP for .NET

GitHub Issue: app_offline.htm is case sensitive             

Questions and Answers

Question: When will the list of .NET Standard 2.0 APIs be baked/ready?

— This is all available on the .NET Standard GitHub repo. For more information, please check out the .NET blog and the .NET Standard GitHub repo.

Question: Is Visual Studio 2017 still in RC?

— Yes, it is. The team continues to push updates to Visual Studio 2017 RC, and you will notice a change in the build number, but this is still all RC.

Question: How do I run an ASP.NET Core 1.1 application on update 3?

— Currently, you have to update the project manually. Visual Studio does not contain templates for ASP.NET Core 1.1. For more information on how to update your application to 1.1 please read ASP.NET Core 1.1 RTM announcement from November 2016.

Note: Visual Studio 2017 will have templates for both ASP.NET Core 1.0 and 1.1.

See you at our next community standup!

 

ASP.NET Core Authentication with IdentityServer4


This is a guest post by Mike Rousos

In my post on bearer token authentication in ASP.NET Core, I mentioned that there are a couple good third-party libraries for issuing JWT bearer tokens in .NET Core. In that post, I used OpenIddict to demonstrate how end-to-end token issuance can work in an ASP.NET Core application.

Since that post was published, I’ve had some requests to also show how a similar result can be achieved with the other third-party authentication library available for .NET Core: IdentityServer4. So, in this post, I’m revisiting the question of how to issue tokens in ASP.NET Core apps and, this time, I’ll use IdentityServer4 in the sample code.

As before, I think it’s worth mentioning that there are a lot of good options available for authentication in ASP.NET Core. Azure Active Directory Authentication is an easy way to get authentication as a service. If you would prefer to own the authentication process yourself, I’ve used and had success with both OpenIddict and IdentityServer4.

Bear in mind that both IdentityServer4 and OpenIddict are third-party libraries, so they are maintained and supported by community members – not by Microsoft.

The Scenario

As you may remember from last time, the goal of this scenario is to set up an authentication server that allows users to sign in (via ASP.NET Core Identity) and provides a JWT bearer token that can be used to access protected resources from a SPA or mobile app.

In this scenario, all the components are owned by the same developer and trusted, so an OAuth 2.0 resource owner password flow is acceptable (and is used here because it’s simple to use in a demonstration). Be aware that this model exposes user credentials and access tokens (both of which are sensitive and could be used to impersonate a user) to the client. In more complex scenarios (especially if clients shouldn’t be trusted with user credentials or access tokens), OpenID Connect flows such as implicit or hybrid flows are preferable. IdentityServer4 and OpenIddict both support those scenarios. One of IdentityServer4’s maintainers (Dominick Baier) has a good blog post on when different flows should be used and IdentityServer4 quickstarts include a sample of using the implicit flow.

As we walk through this scenario, I’d also encourage you to check out IdentityServer4 documentation, as it gives more detail than I can fit into this (relatively) short post.

Getting Started

As before, my first step is to create a new ASP.NET Core web app from the ‘web application’ template, making sure to select “Individual User Accounts” authentication. This will create an app that uses ASP.NET Core Identity to manage users. An Entity Framework Core context will be auto-generated to manage identity storage. The connection string in appsettings.json points to the database where this data will be stored.

Because it’s interesting to understand how IdentityServer4 includes role and claim information in its tokens, I also seed the database with a couple roles and add a custom property (OfficeNumber) to my ApplicationUser type which can be used as a custom claim later.

These initial steps of setting up an ASP.NET Core application with Identity are identical to what I did previously with OpenIddict, so I won’t go into great detail here. If you would like this setup explained further, please see my previous post.

Adding IdentityServer4

Now that our base ASP.NET Core application is up and running (with Identity services), we’re ready to add IdentityServer4 support.

  1. Add "IdentityServer4": "1.0.2" as a dependency in the app’s project.json file.
  2. Add IdentityServer4 to the HTTP request processing pipeline with a call to app.UseIdentityServer() in the app’s Startup.Configure method.
    1. It’s important that the UseIdentityServer() call come after registering ASP.NET Core Identity (app.UseIdentity()).
  3. Register IdentityServer4 services in the Startup.ConfigureServices method by calling services.AddIdentityServer().

We’ll also want to specify how IdentityServer4 should sign tokens. During development, an auto-generated certificate can be used to sign tokens by calling AddTemporarySigningCredential after the call to AddIdentityServer in Startup.ConfigureServices. Eventually, we’ll want to use a real cert for signing, though. We can sign with an x509 certificate by calling AddSigningCredential:

services.AddIdentityServer()
  // .AddTemporarySigningCredential() // Can be used for testing until a real cert is available
  .AddSigningCredential(new X509Certificate2(Path.Combine(".", "certs", "IdentityServer4Auth.pfx")));

Note that you should not load the certificate from the app path in production; there are other AddSigningCredential overloads that can be used to load the certificate from the machine’s certificate store.
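As an illustration, here is a rough sketch of loading the certificate from the machine store by thumbprint and passing it to the same AddSigningCredential overload (the helper class and thumbprint placeholder are mine, not part of IdentityServer4):

using System.Security.Cryptography.X509Certificates;

public static class SigningCertificateLoader
{
    public static X509Certificate2 LoadFromStore(string thumbprint)
    {
        var store = new X509Store(StoreName.My, StoreLocation.LocalMachine);
        store.Open(OpenFlags.ReadOnly);

        var matches = store.Certificates.Find(
            X509FindType.FindByThumbprint, thumbprint, validOnly: false);

        return matches.Count > 0 ? matches[0] : null;
    }
}

// In Startup.ConfigureServices:
// services.AddIdentityServer()
//     .AddSigningCredential(SigningCertificateLoader.LoadFromStore("<certificate thumbprint>"));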

As mentioned in my previous post, it’s possible to create self-signed certificates for testing this out with the makecert and pvk2pfx command line tools (which should be on the path in a Visual Studio Developer Command prompt).

  • makecert -n "CN=AuthSample" -a sha256 -sv IdentityServer4Auth.pvk -r IdentityServer4Auth.cer
    • This will create a new self-signed test certificate with its public key in IdentityServer4Auth.cer and its private key in IdentityServer4Auth.pvk.
  • pvk2pfx -pvk IdentityServer4Auth.pvk -spc IdentityServer4Auth.cer -pfx IdentityServer4Auth.pfx
    • This will combine the pvk and cer files into a single pfx file containing both the public and private keys for the certificate. Our app will use the private key from the pfx to sign tokens. Make sure to protect this file. The .cer file can be shared with other services for the purpose of signature validation.

Token issuance from IdentityServer4 won’t yet be functional, but this is the skeleton of how IdentityServer4 is connected to our ASP.NET Core app.

Configuring IdentityServer4

Before IdentityServer4 will function, it must be configured. This configuration (which is done in ConfigureServices) allows us to specify how users are managed, what clients will be connecting, and what resources/scopes IdentityServer4 is protecting.

Specify protected resources

IdentityServer4 must know what scopes can be requested by users. These are defined as resources. IdentityServer4 has two kinds of resources:

  • API resources represent some protected data or functionality which a user might gain access to with an access token. An example of an API resource would be a web API (or set of APIs) that require authorization to call.
  • Identity resources represent information (claims) which are given to a client to identify a user. This could include their name, email address, or other claims. Identity information is returned in an ID token by OpenID Connect flows. In our simple sample, we’re using an OAuth 2.0 flow (the password resource flow), so we won’t be using identity resources.

The simplest way to specify resources is to use the AddInMemoryApiResources and AddInMemoryIdentityResources extension methods to pass a list of resources. In our sample, we do that by updating our services.AddIdentityServer() call to read as follows:

services.AddIdentityServer()
  // .AddTemporarySigningCredential() // Can be used for testing until a real cert is available
  .AddSigningCredential(new X509Certificate2(Path.Combine(".", "certs", "IdentityServer4Auth.pfx")))
  .AddInMemoryApiResources(MyApiResourceProvider.GetAllResources()); // <- THE NEW LINE 

The MyApiResourceProvider.GetAllResources() method just returns an IEnumerable of ApiResources.

return new[]
{
    // Add a resource for some set of APIs that we may be protecting
    // Note that the constructor will automatically create an allowed scope with
    // name and claims equal to the resource's name and claims. If the resource
    // has different scopes/levels of access, the scopes property can be set to
    // list specific scopes included in this resource, instead.
    new ApiResource(
        "myAPIs",                                       // Api resource name
        "My API Set #1",                                // Display name
        new[] { JwtClaimTypes.Name, JwtClaimTypes.Role, "office" }) // Claims to be included in access token
};

If we also needed identity resources, they could be added with a similar call to AddInMemoryIdentityResources.

If more flexibility is needed in specifying resources, this can be accomplished by registering a custom IResourceStore with ASP.NET Core’s dependency injection. An IResourceStore allows for finer control over how resources are created, allowing a developer to read resource information from an external data source, for example. An IResourceStore which works with EntityFramework.Core (IdentityServer4.EntityFramework.Stores.ResourceStore) is available in the IdentityServer4.EntityFramework package.

Specify Clients

In addition to specifying protected resources, IdentityServer4 must be configured with a list of clients that will be requesting tokens. Like configuring resources, client configuration can be done with an extension method: AddInMemoryClients. Also like configuring resources, it’s possible to have more control over the client configuration by implementing our own IClientStore. In this sample, a simple call to AddInMemoryClients would suffice to configure clients, but I opted to use an IClientStore to demonstrate how easy it is to extend IdentityServer4 in this way. This would be a useful approach if, for example, client information was read from an external database. And, as with IResourceStore, you can find a ready-made IClientStore implementation for working with EntityFramework.Core in the IdentityServer4.EntityFramework package.

The IClientStore interface only has a single method (FindClientByIdAsync) which is used to look up clients given a client ID. The returned object (of type Client) contains, among other things, information about the client’s name, allowed grant types and scopes, token lifetimes, and the client secret (if it has one).

In my sample, I added the following IClientStore implementation which will yield a single client configured to use the resource owner password flow and our custom ‘myAPIs’ resource:

public class CustomClientStore : IClientStore
{
    public static IEnumerable<Client> AllClients { get; } = new[]
    {
        new Client
        {
            ClientId = "myClient",
            ClientName = "My Custom Client",
            AccessTokenLifetime = 60 * 60 * 24,
            AllowedGrantTypes = GrantTypes.ResourceOwnerPassword,
            RequireClientSecret = false,
            AllowedScopes =
            {
                "myAPIs"
            }
        }
    };

    public Task<Client> FindClientByIdAsync(string clientId)
    {
        return Task.FromResult(AllClients.FirstOrDefault(c => c.ClientId == clientId));
    }
}

I then registered the store with ASP.NET Core dependency injection (services.AddSingleton<IClientStore, CustomClientStore>() in Startup.ConfigureServices).

Connecting IdentityServer4 and ASP.NET Core Identity

To use ASP.NET Core Identity, we’ll be using the IdentityServer4.AspNetIdentity package. After adding this package to our project.json, the previous services.AddIdentityServer() call in Startup.ConfigureServices can be updated to look like this:

services.AddIdentityServer()
  // .AddTemporarySigningCredential() // Can be used for testing until a real cert is available
  .AddSigningCredential(new X509Certificate2(Path.Combine(".", "certs", "IdentityServer4Auth.pfx")))
  .AddInMemoryApiResources(MyApiResourceProvider.GetAllResources())
  .AddAspNetIdentity<ApplicationUser>(); // <- THE NEW LINE

This will cause IdentityServer4 to get user profile information from our ASP.NET Core Identity context, and will automatically set up the necessary IResourceOwnerPasswordValidator for validating credentials. It will also configure IdentityServer4 to correctly extract JWT subject, user name, and role claims from ASP.NET Core Identity entities.

Putting it Together

With configuration done, IdentityServer4 should now be able to serve tokens for the client we defined. Putting it all together, the registration of IdentityServer4 services in Startup.ConfigureServices ends up looking like this:

// Add IdentityServer services
services.AddSingleton<IClientStore, CustomClientStore>();

services.AddIdentityServer()
    // .AddTemporarySigningCredential() // Can be used for testing until a real cert is available
    .AddSigningCredential(new X509Certificate2(Path.Combine(".", "certs", "IdentityServer4Auth.pfx")))
    .AddInMemoryApiResources(MyApiResourceProvider.GetAllResources())
    .AddAspNetIdentity<ApplicationUser>();

As before, a tool like Postman can be used to test out the app. The scope we specify in the request should be our custom API resource scope (‘myAPIs’).

Here is a sample token request:

POST /connect/token HTTP/1.1
Host: localhost:5000
Cache-Control: no-cache
Postman-Token: 958df72b-663c-5638-052a-aed41ba0dbd1
Content-Type: application/x-www-form-urlencoded

grant_type=password&username=Mike%40Contoso.com&password=MikesPassword1!&client_id=myClient&scope=myAPIs

The returned access token in our app’s response (which can be decoded using online utilities) looks like this:

{
 alg: "RS256",
 kid: "671A47CE65E10A98BB86EDCD5F9684E9D048FAE9",
 typ: "JWT",
 x5t: "ZxpHzmXhCpi7hu3NX5aE6dBI-uk"
}.
{
 nbf: 1481054282,
 exp: 1481140682,
 iss: "http://localhost:5000",
 aud: [
  "http://localhost:5000/resources",
  "myAPIs"
 ],
 client_id: "myClient",
 sub: "f6435683-f81c-4bd4-9c14-c7c09b236f4e",
 auth_time: 1481054281,
 idp: "local",
 name: "Mike@Contoso.com",
 role: "Administrator",
 office: "300",
 scope: [
  "myAPIs"
 ],
 amr: [
  "pwd"
 ]
}.
[signature]

You can read more details about how to understand the JWT fields in my previous post. Note that there are a few small differences between the tokens generated with OpenIddict and those generated with IdentityServer4.

  • IdentityServer4 includes the amr (authentication method references) field which lists authentication methods used.
  • IdentityServer4 always requires a client be specified in token requests, so it will always have a client_id in the response whereas OpenIddict treats the client as optional for some OAuth 2.0 flows.
  • IdentityServer4 does not include the optional iat field indicating when the access token was issued, but does include the auth_time field (defined by OpenID Connect as an optional field for OAuth 2.0 flows) which will have the same value.

In both cases, it’s possible to customize claims that are returned for given resources/scopes, so developers can make sure claims important to their scenarios are included.

Conclusion

Hopefully this walkthrough of a simple IdentityServer4 scenario is useful for understanding how that package can be used to enable authentication token issuance in ASP.NET Core. Please be sure to check out the IdentityServer4 docs for more complete documentation. As IdentityServer4 is not a Microsoft-owned library, support questions or issue reports should be directed to the IdentityServer community or the IdentityServer4 GitHub repository.

The scenario implemented here is no different from what was covered previously, but serves as an example of how different community-driven libraries can work to solve a given problem. One of the most exciting aspects of .NET Core is the tremendous community involvement we’ve seen in producing high-quality libraries to extend what can be done with .NET Core and ASP.NET Core. I think token-based authentication is a great example of that.

I have checked in sample code that shows the end product of the walk-through in this blog. Reviewing that repository may be helpful in clarifying any remaining questions.

Resources

Notes from the ASP.NET Community Standup – January 10, 2017


This is the next in a series of blog posts that will cover the topics discussed in the ASP.NET Community Standup. The community standup is a short video-based discussion with some of the leaders of the ASP.NET development teams covering the accomplishments of the team on the new ASP.NET Core framework over the previous week. Join Scott Hanselman, Damian Edwards, Jon Galloway and an occasional guest or two as they discuss new features and ask for feedback on important decisions being made by the ASP.NET development teams.

Each week the standup is hosted live on Google Hangouts and the team publishes the recorded video of their discussion to YouTube for later reference. The guys answer your questions LIVE and unfiltered. This is your chance to ask about the why and what of ASP.NET! Join them each Tuesday on live.asp.net where the meeting’s schedule is posted and hosted.

(Sorry for the delay on this one.)


ASP.NET Community Standup 1/10/2017

Community Links

Your First Angular 2, ASP.NET Core Project in Visual Studio Code – Part 6

ASP.NET Core Template Pack

Introducing downr: A simple blogging engine in ASP.NET Core with support for Markdown

mDocs: Building a project documentation using Markdown View Engine

An introduction to ViewComponents – a login status view component 

Configuring .NET Core Applications using Consul

Visual Studio 2017 and Visual Studio 2015 with .NET Core

Smarter build scripts with MSBuild and .NET Core

Demo: Azure Application Insights & ASP.NET Core 1.1

Application Insights is an application performance monitoring service used to monitor live web applications, detect anomalies, and perform analytics. In this community standup, Damian went over some Azure Application Insights features he added to live.asp.net.
Damian shared how he is using App Insights to log application lifetime events on the live.asp.net site. He did this by creating an ASP.NET Core startup filter, which is a piece of code that runs during application start without having to be added to Startup.cs.
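
A minimal sketch of such a startup filter (the class and event names here are illustrative, not the exact code used on live.asp.net; it assumes Application Insights is registered so a TelemetryClient can be resolved from dependency injection) could look like this:

using System;
using Microsoft.ApplicationInsights;
using Microsoft.AspNetCore.Builder;
using Microsoft.AspNetCore.Hosting;

// Hypothetical startup filter that tracks application lifetime events in Application Insights.
public class LifetimeEventsStartupFilter : IStartupFilter
{
    private readonly IApplicationLifetime _lifetime;
    private readonly TelemetryClient _telemetry;

    public LifetimeEventsStartupFilter(IApplicationLifetime lifetime, TelemetryClient telemetry)
    {
        _lifetime = lifetime;
        _telemetry = telemetry;
    }

    public Action<IApplicationBuilder> Configure(Action<IApplicationBuilder> next)
    {
        // Send custom events when the app starts and stops; they appear in the App Insights portal.
        _lifetime.ApplicationStarted.Register(() => _telemetry.TrackEvent("ApplicationStarted"));
        _lifetime.ApplicationStopping.Register(() => _telemetry.TrackEvent("ApplicationStopping"));

        // Leave the rest of the pipeline untouched.
        return next;
    }
}

// Registered in Startup.ConfigureServices:
// services.AddTransient<IStartupFilter, LifetimeEventsStartupFilter>();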

By adding code along these lines you can see an event logged in Azure every time your application starts or stops. App Insights also supports collecting data locally; to learn more about this, please see the community standup from 9/18/2016 or Hanselman’s post on this topic.


This week the team spent time going over some interesting features in Azure App Insights and how you can use them to gather detailed logs about an application. You can learn more about the Azure App Insights APIs here.

Happy coding!

 

 

 

Updates to Web Tools in Visual Studio 2017 RC


Today we announced a new update to Visual Studio 2017 RC that includes a variety of improvements for both ASP.NET and ASP.NET Core projects. If you’ve already installed Visual Studio 2017 RC, you will be notified of the available update automatically. Otherwise, simply install Visual Studio 2017 RC to get the latest updates. Also, if you’re willing to help us improve our tools by giving feedback on features and ideas before we ship them, let us know who you are.

Below is a summary of improvements this update brings:

  • Workload Updates
    • .NET Core graduated from being a Preview workload to a standard part of the “ASP.NET and web development” workload
    • If you only want to build .NET Core apps, there is a dedicated “.NET Core cross-platform development” workload available in the “Other Toolsets” category of the Visual Studio Installer.
    • The capability to open existing MVC4 projects is now available as an optional component for the “ASP.NET and web development” workload.  If you do not choose it during installation, you will be prompted to install it when you open an MVC4 project.
  • You can now remote debug .NET Core apps running on Linux over SSH from Visual Studio using the Attach to Process dialog. See the debugger team’s blog post for details on how to enable it.
  • When debugging JavaScript running in Chrome, Chrome now launches with all of your extensions and customizations enabled once you log into the Chrome instance launched by Visual Studio.

Thanks for trying out this latest update of Visual Studio 2017! For an up to date list of known issues see our GitHub page, and keep the feedback coming by reporting any issues using the built-in feedback tools or via Twitter.

Notes from the ASP.NET Community Standup – January 24, 2017


This is the next in a series of blog posts that will cover the topics discussed in the ASP.NET Community Standup. The community standup is a short video-based discussion with some of the leaders of the ASP.NET development teams covering the accomplishments of the team on the new ASP.NET Core framework over the previous week. Join Scott Hanselman, Damian Edwards, Jon Galloway and an occasional guest or two as they discuss new features and ask for feedback on important decisions being made by the ASP.NET development teams.

Each week the standup is hosted live on Google Hangouts and the team publishes the recorded video of their discussion to YouTube for later reference. The guys answer your questions LIVE and unfiltered. This is your chance to ask about the why and what of ASP.NET! Join them each Tuesday on live.asp.net where the meeting’s schedule is posted and hosted.

Quick Note: Scott Hanselman will be doing a blog post on the three-hour Docker ASP.NET Community Standup from 1/17/17. We will add a link to his post as soon as it is available.

ASP.NET Community Standup 1/24/2017

Community Links

Dockerizing Nerd Dinner: Part 2, Connecting ASP.NET to SQL Server

Building microservices with ASP.NET Core (without MVC)

An In Depth Guide Into a Ridiculously Simple API Using .NET Core

File Upload API with Nancy, .NET Core in a Shockingly Small Amount of Code

Power BI Tag Helper: Part 1 – Power BI Publish to Web

Power BI Tag Helper: Part 2 – Power BI Embedded

New Year, New Blog

Custom Project Templates Using dotnet new

ASP.NET Boilerplate: Quartz Integration

Working with Multiple .NET Core SDKs – both project.json and msbuild/csproj

Working with a Distributed Cache in ASP.NET Core

Reloading strongly typed options in ASP.NET Core 1.1.0

Project.json to MSBuild conversion guide

ASP.NET Core Workshop

Questions and Answers

Question:  What is the release status for Kestrel?

— Kestrel has been out for a while: we released 1.0 in June 2016 and 1.1 in November 2016, and we are working on the 1.0.x (LTS) and 1.1.1 (Current) servicing releases, due in February.

Question:  Can you share success stories of customers using ASP.NET Core for large traffic websites?

— We do know of customers who have deployed it successfully.  However, we don’t have any that we can share publicly.

Question: Is it safe to use Kestrel in production?

— Yes, it is safe to use Kestrel in production. However, we don’t recommend using Kestrel as an edge server (that is, don’t expose it directly to the internet).

Question:  Is ASP.NET Core supported in Visual Studio 2013?

— No, ASP.NET Core is not supported in Visual Studio 2013.  Today, ASP.NET Core  is supported in Visual Studio 2015 using  project.json tooling and Visual Studio 2017 using csproj tooling.

Question: What is the recommendation for doing ASP.NET Core authentication?

— In a post, Mike Rousos goes over in detail how to get started with ASP.NET Core authentication using IdentityServer4.

Our next standup will be on February 7.  Thanks for watching, and happy coding!

Announcing Continuous Delivery Tools for Visual Studio 2017


Posting on behalf of Ahmed Metwally

Visual Studio Team Services enables developers to create build and release definitions for continuous integration and deployment of their projects. With continuous integration and deployment configured, unit tests run automatically on every build after every code push and, assuming the build succeeds and the tests pass, the changes are automatically deployed.

To better support this workflow in Visual Studio, we released an experimental DevLabs extension, Continuous Delivery Tools for Visual Studio with the Visual Studio 2017 RC.3 update. The extension makes it simple to automate and stay up to date on your DevOps pipeline for ASP.NET and .NET Core projects targeting Azure App Services and Azure Container Services. You will instantly be notified in Visual Studio if a build fails and will be able to access more information on build quality through the VSTS dashboard.

The Configure Continuous Delivery dialog lets you pick a branch from the repository to deploy to a target App service. When you click OK, the extension creates build and release definitions on Team Services (this can take a couple of minutes), and then kicks off the first build and deployment automatically. From this point onward, Team Services will trigger a new build and deployment whenever you push changes up to the repository.


For more details on how the extension works please check the Visual Studio blog. To download and install the extension, please go to the Visual Studio Gallery.

Microsoft DevLabs Extensions

This extension is a Microsoft DevLabs extension which is an outlet for experiments from Microsoft that represent some of the latest ideas around developer tools. They are designed for broad use, feedback, and quick iteration but it’s important to note DevLabs extensions are not supported and there is no commitment they’ll ever make it big and ship in the product.

It’s all about feedback…

We think there’s a lot we can do in the IDE to help teams collaborate and ship high quality code faster. In the spirit of being agile we want to ship fast, try out new ideas, improve the ones that work and pivot on the ones that don’t. Over the next few days, weeks and months we’ll update the extension with new fixes and features. Your feedback is essential to this process. If you are interested in sharing your feedback join our slack channel or ping us at vsDevOps@microsoft.com.

Ahmed Metwally, Senior Program Manager, Visual Studio
@cd4vs
Ahmed is a Program Manager on the Visual Studio Platform team focused on improving team collaboration and application lifecycle management integration.


ASP.NET Documentation Now on docs.microsoft.com


This post was written by Wade Pickett

We are happy to announce ASP.NET documentation and guidance has been migrated to docs.microsoft.com!

Better Together and Great Features

This brings the ASP.NET documentation together with ASP.NET Core, C#, Entity Framework Core, Azure, Visual Studio, C++, and SQL on Linux.

docs.microsoft.com will allow for a consistent experience across documentation sets, in addition to supporting:

  • Community contributions
  • Social sharing
  • Global localization supporting 45 languages
  • Responsive mobile design
  • Side-notes and comments
  • Real-time table of contents filter

ASP.NET documentation is open for contributions in GitHub!

ASP.NET Core docs and now ASP.NET docs are open and available in GitHub where you can contribute to the docs and code samples directly or notify the community (which includes our teams here at Microsoft) of a doc issue.

You can also see what doc projects are under way and get involved to discuss approach, scope, and priority.

Take a look at these current projects and issues lists to discover community group efforts you might be interested in:

ASP.NET Core:

.NET Core:

There will be a big focus on the docs.microsoft.com site to continue to update both the content and site features.  See up to date announcements here: https://docs.microsoft.com/teamblog/.

Building Single Page Applications on ASP.NET Core with JavaScriptServices


This is a guest post by Steve Sanderson

These days, many developers are building Single-Page Applications (SPAs) using frameworks such as Angular or React. These are powerful frameworks that produce a great end-user experience, but we often hear that building these applications is complicated. It can be a challenge to integrate server-side and client-side code well, or even just to choose a productive project setup in the first place.

Our goal is to make ASP.NET Core the best server-side platform for these kinds of projects. So, we recently shipped three new NuGet packages that are intended to simplify SPA development and add powerful, useful features:

  • Microsoft.AspNetCore.SpaTemplates plugs into dotnet new, providing project templates for Angular 2, Aurelia, Knockout, React, and React+Redux applications.
  • Microsoft.AspNetCore.SpaServices is how SpaTemplates-produced projects work internally. It provides useful features for SPA applications, such as server-side rendering for Angular and React applications, plus integration with Webpack build middleware.
  • Microsoft.AspNetCore.NodeServices is how SpaServices works internally. It’s a low-level library that gives a fast, robust way for ASP.NET Core applications to run arbitrary JavaScript code on the server.

Collectively, these features go by the name JavaScriptServices. You’ll find the source code, issue tracker, and documentation on the JavaScriptServices GitHub repository.

Prerequisites

To work with these technologies, first make sure you’ve installed the following:

  • .NET Core SDK 1.0 RC4 (or later) for Windows, Mac, or Linux
    • Or, if you’re on Windows, you can install the latest Visual Studio 2017 RC, which includes it. Be sure you have VS2017 build 26206 or later – older versions won’t work.
  • Node.js, version 6 or later

Getting started

The easiest way to get started is by using one of the project templates we’ve made available. These plug into the standard dotnet new command, and work on Windows, Mac, and Linux.

To install the Single Page Application (SPA) templates, run the following command:

dotnet new --install Microsoft.AspNetCore.SpaTemplates::*

Once this is installed, you’ll see that dotnet new can now produce projects based on angular, aurelia, knockout, react, and reactredux:

Template list for the “dotnet new” command

To actually generate a new project, first create an empty directory for it to go into, cd to that directory, and then use dotnet new to create your project. For example:

dotnet new angular

There are two ways to run your new project: via the command line, or via Visual Studio (Windows only).

Option 1: Running via the Command Line

To run your project on the command line, you must first restore both the .NET and NPM dependencies. Execute the following commands:

dotnet restore
npm install

Then, set an environment variable to tell ASP.NET to run in development mode:

  • If you’re using PowerShell in Windows, execute $Env:ASPNETCORE_ENVIRONMENT = "Development"
  • If you’re using cmd.exe in Windows, execute setx ASPNETCORE_ENVIRONMENT "Development", and then restart your command prompt to make the change take effect
  • If you’re using Mac/Linux, execute export ASPNETCORE_ENVIRONMENT=Development

Finally, you can start your new app by running dotnet run. It will listen on port 5000, so point your browser to http://localhost:5000 to see it.

Option 2: Running in Visual Studio 2017 RC

If you’re on Windows and want to use Visual Studio 2017 RC, you can simply open your newly-generated .csproj file in Visual Studio. It will take care of restoring the .NET and NPM dependencies for you (though it can take a few minutes).

When your dependencies are restored, just press Ctrl+F5 to launch the application in a browser as usual.

Alternative: Creating a SPA project via Yeoman

If for some reason you’re stuck on older (pre-RC4) versions of .NET Core tooling, or if you need to use Visual Studio 2015, then instead of using the dotnet new command, you can use Yeoman to create your new project. You’ll need .NET Core SDK 1.1 and Node.js version 6 or later.

To install Yeoman and these templates, open a command prompt, and then run the following:

npm install -g yo generator-aspnetcore-spa

Then, cd to an empty directory where you want your project to go, and then run Yeoman as follows:

yo aspnetcore-spa

Once your new project is created, Yeoman will automatically fetch its .NET and NPM dependencies. You can then set the ASPNETCORE_ENVIRONMENT variable as described above, then run your project by executing dotnet run.

Your new Single-Page Application project

Whichever way you choose to create and run your project, here’s how it will initially look:

Generated Angular application homepage

Let’s now look at some of the features in the templates.

Server-side prerendering

If you’re using the Angular or React+Redux template, then you will have server-side prerendering. This makes your application’s initial UI appear much more quickly, because users don’t have to wait for their browser to download, evaluate, and execute large JavaScript libraries before the application appears. This works by executing Angular/React/etc components on the server as well as on the client. To a limited extent, it means your SPA can even work with JavaScript disabled in the browser, which is great for ensuring your app is accessible to all search engine crawlers.

As a quick (but artificial) demo to prove this is working, just try disabling JavaScript in your browser. You’ll still be able to load the page and navigate around. Bear in mind that only navigation works without JavaScript, not any other user actions that are meant to execute JavaScript.

The primary use case for server-side prerendering is to make your page appear extremely quickly, even if the user has a slow network connection or a slow mobile device, and even if your SPA codebase is very large. The client-side code will download in the background, and then takes over execution as soon as it’s loaded. This feature solves what is otherwise a significant drawback of large SPA frameworks.

Webpack dev middleware

These projects all use Webpack as a front-end build system, because it’s the dominant system used by Angular/React/etc developers. One of its powerful features is the ability, during development, to keep running in the background and incrementally recompile any modified code extremely quickly.

Webpack dev middleware is integrated into these projects via SpaServices. As long as your application is running in the Development environment, you can modify your client-side code (e.g., the TypeScript, or in Angular, the html files that are compiled into your components), and the updated version is almost immediately available to the browser. You don’t have to run any build commands manually.
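
A sketch of how that wiring typically appears in Startup.Configure follows (the option shown also switches on the Hot Module Replacement behavior described in the next section; the rest of the pipeline is abbreviated):

using Microsoft.AspNetCore.Builder;
using Microsoft.AspNetCore.Hosting;
using Microsoft.AspNetCore.SpaServices.Webpack;

public void Configure(IApplicationBuilder app, IHostingEnvironment env)
{
    if (env.IsDevelopment())
    {
        // Serve client-side assets through Webpack dev middleware so edits are rebuilt incrementally.
        app.UseWebpackDevMiddleware(new WebpackDevMiddlewareOptions
        {
            HotModuleReplacement = true
        });
    }

    // ... the rest of the pipeline (static files, MVC routes, SPA fallback) is unchanged ...
}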

Hot Module Replacement

Hot Module Replacement (HMR) takes the dev middleware feature a step further. It sets up a live link between the Webpack dev middleware service and your application running in your local browser. Whenever your source files change and the dev middleware provides an incremental update, HMR pushes it to your local browser immediately. It does this without causing a full page reload (because that might wipe out useful state, such as any debugging session you have in progress). It directly updates the affected modules in your running application.

The purpose of this is to give you a faster, more productive development experience. To see it working, just edit one of the TypeScript or HTML files in your /ClientApp directory. You’ll see the corresponding update appear in your browser right away.

If you’re using React or React+Redux, your application state will be preserved through the update. If you cause a compiler error, its details will appear as an overlay in your browser. Once you fix the compiler error, your application will resume, still preserving its previous in-memory state.

Note: HMR is currently available in all the templates except for the Aurelia one. We aim to add it to the Aurelia template soon.

Efficient production builds

These project templates are set up to build your client-side assets (TypeScript, bundled HTML, CSS, etc.) in two different modes:

  • Development, which includes source maps for easy debugging
  • Production, which tightly minifies your code and does not include source maps

Since this is achieved using Webpack 2, you can easily edit the Webpack configuration files to set up whatever combination of build options you require, or to enable support for compiling LESS/SASS or other popular front-end file types and languages. See the webpack.config.js file at the root of the project, and Webpack 2 documentation for details of the available options. There’s a guide to enabling LESS support in the SpaServices documentation.

Invoking Webpack manually

The dev middleware feature means you don’t normally need to invoke Webpack manually. But if you do want to run Webpack manually on the command line, you can run the following:

webpack --config webpack.config.vendor.js
webpack

The first line repackages all of your vendor dependencies, i.e., third party libraries such as Angular or React and all their dependencies. You only need to run this if you modify your third-party dependencies, such as if you update to a newer version of your chosen SPA framework.

The second line (running webpack with no parameters) rebuilds your own application code. Separating your own application code from your vendor dependencies makes your builds much faster.

These commands will produce development-mode builds. If you want to produce production-mode builds, then also pass the flag --env.prod when invoking Webpack.

Publishing for deployment

To deploy your application to production, you can use the publish feature which is built into dotnet command line tooling and Visual Studio. For example, on the command line, run:

dotnet publish -c Release

This will produce a ready-to-deploy production build of your application. It includes .NET code compiled in Release mode, and invokes Webpack with the --env.prod flag to produce a production build of front-end assets. Equivalently, you can use the Publish option from Visual Studio’s Build menu.

Learn more

Learn more about SpaServices and see other usage examples at the SpaServices online documentation.

Using NodeServices directly

Even if you’re not building a single-page application, it can be extremely useful to be able to run JavaScript on the server in certain cases. Why would you want to do this? Primarily, because a huge number of useful, high-quality Web-related open source packages are in the form of Node Package Manager (NPM) modules. NPM is the largest repository of open-source software packages in the world, and the Microsoft.AspNetCore.NodeServices package means that you can use any of them in your ASP.NET Core application.

Of course, this is how SpaServices works behind the scenes. For example, to prerender Angular or React components on the server, it needs to execute your JavaScript on the server. It does this via NodeServices, which starts up a hidden Node.js instance and provides a fast and robust way of making calls into it from .NET.

Walkthrough: Using NodeServices

For this walkthrough, first create a new application with ASP.NET Core 1.1.0 or later.

Next, add a reference to Microsoft.AspNetCore.NodeServices using one of these techniques:

  • If you use Visual Studio, use its NuGet Package Manager dialog
  • Or, if you have .NET Core RC 4 (or later) tools, you can execute dotnet add package Microsoft.AspNetCore.NodeServices
  • Or, if you have a project.json-based project, you can edit your project.json file to add a reference to Microsoft.AspNetCore.NodeServices and then run dotnet restore

Next, configure ASP.NET’s dependency injection (DI) system to make it aware of NodeServices. In your Startup.cs file, in the ConfigureServices method, add the following line:

services.AddNodeServices();

You’re now ready to receive instances of INodeServices in your application. INodeServices is the API through which .NET code can make calls into JavaScript that runs in a Node environment. Let’s start just by getting back a string from JavaScript.

In HomeController.cs, at the top, add the line using Microsoft.AspNetCore.NodeServices;. Now amend its About method as follows, so that it makes a call into Node.js and awaits the result:
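
A minimal sketch of such an About action (illustrative only; it assumes the INodeServices instance is injected with [FromServices] and invokes the module created in the next step) looks like this:

// Requires: using System.Threading.Tasks; using Microsoft.AspNetCore.Mvc; using Microsoft.AspNetCore.NodeServices;
public async Task<IActionResult> About([FromServices] INodeServices nodeServices)
{
    // Invoke the default export of myNodeModule.js in a background Node.js process
    // and await the string it passes back to .NET.
    ViewData["ResultFromNode"] = await nodeServices.InvokeAsync<string>("./myNodeModule");

    return View();
}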

If you run your application now and try to browse to the about page, you’ll get an error saying Cannot find module ‘myNodeModule.js’. This is because NodeServices tried to invoke your JavaScript code, but no such code was found. Of course, you need to create a file with that name.

At the root directory of your project, add a file called myNodeModule.js, containing the following:

Finally, display the result by editing your Views/Home/About.cshtml view so it emits the ResultFromNode value:

Now you’ll see your JavaScript code was executed on the server, the result went back to your .NET code, and is displayed to the visitor:

Screenshot displaying value returned from Node.js

Since you can now run arbitrary server-side JavaScript in your application, you have access to the entire NPM ecosystem. This provides new ways of solving many problems, including dynamic transpilation of TypeScript/SASS/ES2017, image/audio manipulation, or running on the server many libraries that would otherwise run in the browser.

Example: Rendering charts/graphs on the server

Imagine you want to display some charts and graphs in your web pages. These need to be constructed dynamically using data from some backend database or web service.

There are various ways to approach this. One popular choice today is to have the server send raw data values to the browser, and then use a client-side library to actually draw the charts. For example, you could use the extremely popular chartist.js library.

This works well, but there may be cases where you don’t want to depend on executing client-side code for this (you might want to avoid that overhead for the browser, or can’t rely on JavaScript being enabled and not blocked on all your customers’ devices).

NodeServices lets you continue using Chartist, but run it on the server, so that browsers don’t need to run any JavaScript code to get the graphs. To do this, following on from the previous example, add the Chartist library to your project by running the following in a command line:

npm install --save node-chartist

Next, you can amend your About method to pass some data and options from .NET code into your Node.js module:
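
A sketch of the amended action, using made-up options and data in the general shape node-chartist expects (a small chart with labels and a series), might look like this:

public async Task<IActionResult> About([FromServices] INodeServices nodeServices)
{
    // Illustrative options and data; once JSON-serialized they are handed to node-chartist unchanged.
    var options = new { width = "400px", height = "200px" };
    var data = new
    {
        labels = new[] { "Mon", "Tue", "Wed", "Thu", "Fri" },
        series = new[] { new[] { 5, 8, 2, 9, 4 } }
    };

    // The Node module returns a string of SVG markup for the rendered chart.
    ViewData["ChartMarkup"] = await nodeServices.InvokeAsync<string>("./myNodeModule", options, data);

    return View();
}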

The options and data objects, when JSON-serialized, will be in exactly the correct format for Chartist. You can learn more about the available chart types and options in Chartist’s documentation.

The next job is to update myNodeModule.js so that it receives the options and data parameters, and passes them through to Chartist. When it gets back the rendered chart, it can return it to the .NET code:

If you know JavaScript, you’ll recognize that Chartist’s generate function returns a Promise object. This code uses the then method to pass the result – which in this case is a string of SVG markup – back to .NET via the supplied callback parameter. In effect, the Node.js module you’re writing here is a simple adapter between .NET code and (in this case) the Chartist library.

There are two final things you need to do to see the graph in your page:

  1. In your About.cshtml, replace @ViewData["ResultFromNode"] with @Html.Raw(ViewData["ChartMarkup"]). You need to use Html.Raw because Chartist will return SVG markup.
  2. In your Views/Shared/_Layout.cshtml file, in the <head> element, add a reference to a suitable stylesheet for the Chartist charts. This is how you control the color scheme and visual style. For example, use the default Chartist CSS file available via CDN: <link rel="stylesheet" href="http://cdn.jsdelivr.net/chartist.js/latest/chartist.min.css">

Now when you run your application and visit the About page, you’ll see your chart:

Screenshot displaying server-rendered chart

This looks identical to how Chartist would render the same data on the client, except now it involves no client-side code for the browser to execute. Chartist is relatively simple. If it turns out that your requirements are too sophisticated for it, you could switch to the incredibly powerful D3.js, because that also supports running in Node and therefore works with NodeServices.

Summary

Of course, rendering charts is just one example. NodeServices allows ASP.NET Core developers to make use of the entire NPM ecosystem, which gives rise to a huge range of possibilities.

You can read more about the APIs in NodeServices (for invoking Node.js code from .NET) at the NodeServices project on GitHub. Similarly, you can read more about the APIs in SpaServices (a package of SPA-specific helpers, e.g., for server-side prerendering) at the SpaServices project on GitHub.

Try it out

We hope these new features will make it easier for you to build sophisticated, modern web applications that combine .NET and JavaScript code.

Please let us know if you have feedback on the features, or encounter any problems when using them, by posting to the issues list on the JavaScriptServices GitHub repo.

 

 

Notes from the ASP.NET Community Standup – February 14, 2017


This is the next in a series of blog posts that will cover the topics discussed in the ASP.NET Community Standup. The community standup is a short video-based discussion with some of the leaders of the ASP.NET development teams covering the accomplishments of the team on the new ASP.NET Core framework over the previous week. Join Scott Hanselman, Damian Edwards, Jon Galloway and an occasional guest or two as they discuss new features and ask for feedback on important decisions being made by the ASP.NET development teams.

Each week the standup is hosted live on Google Hangouts and the team publishes the recorded video of their discussion to YouTube for later reference. The guys answer your questions LIVE and unfiltered. This is your chance to ask about the why and what of ASP.NET! Join them each Tuesday on live.asp.net where the meeting’s schedule is posted and hosted.

ASP.NET Community Standup 2/04/2017

Community Links

Overriding ASP.NET Core Framework-Provided Services

What happened to my Thread.CurrentPrincipal

Simplify UseRequestLocalization Configuration

Log Colorization using ColorizedConsoleLoggerProvider

Logging using DiagnosticSource in ASP.NET Core

Middleware | My View

A Reminder to Take Care when Registering Dependencies

Migrating from .NET Framework to .NET Core

Lessons learnt going into production with ASP.NET Core

Enabling SSO for Discourse with IdentityServer3

Offering Solutions Software GmbH – Articles about Angular & ASP.NET

dotNetify

AspNetCore IdentityServer4 Angular2 Docker

 .NET Core on ARM

Functional ASP.NET Core

Running Serverless ASP.NET Core Web APIs with Amazon Lambda

.Net Core Web API on AWS Lambda Performance

Updates to Web Tools in Visual Studio 2017 RC

ASP.NET Documentation Now on docs.microsoft.com

Building Single Page Applications on ASP.NET Core with JavaScript Services

.NET Core Migration

Training with the ASP.NET Monsters

Orchard Harvest 2017 – Orchard Harvest

Accomplishments

Getting ready for Visual Studio 2017 RTM — The team has been working hard to finish the ASP.NET Core MSBuild experience in time for the VS 2017 release on March 7th. Currently, the only out-of-the-box project templates in VS 2017 that use the new MSBuild format are the .NET Core and .NET Standard templates.

.NET Core 2.0 work in progress — The team is continuing to work on .NET Standard 2.0, .NET Core 2.0, and ASP.NET Core 2.0. Some of the things we are working on for ASP.NET Core 2.0 include Razor Pages, beginning work on SignalR, and continued investment in Kestrel.

Happy coding and see you next week!

Announcing Microsoft ASP.NET WebHooks V1 RTM


We are very happy to announce ASP.NET WebHooks V1 RTM making it easy to both send and receive WebHooks with ASP.NET.

WebHooks provide a simple pub/sub model for wiring together Web APIs and services with your code. A WebHook can be used to get notified when a file has changed in Dropbox, a code change has been committed to GitHub, a payment has been initiated in PayPal, a card has been created in Trello, and much more — the possibilities are endless! When subscribing, you provide a callback URI where you want to be notified. When an event occurs, an HTTP POST request is sent to your callback URI with information about what happened so that your Web app can act accordingly. WebHooks happen without polling and with no need to hold open a network connection while waiting for notifications.

Because of their simplicity, WebHooks are already exposed by most popular services and Web APIs. To help manage WebHooks, Microsoft ASP.NET WebHooks makes it easier to both send and receive WebHooks as part of your ASP.NET application.

The two parts can be used together or apart depending on your scenario. If you only need to receive WebHooks from other services, then you can use just the receiver part; if you only want to expose WebHooks for others to consume, then you can do just that.

In addition to hosting your own WebHook server, ASP.NET WebHooks are part of Azure Functions where you can process WebHooks without hosting or managing your own server! You can even go further and host an Azure Bot Service using Microsoft Bot Framework for writing cool bots talking to your customers!

The WebHook code targets ASP.NET Web API 2 and ASP.NET MVC 5, and is available as open source on GitHub and as NuGet packages.

A port to ASP.NET Core is being planned, so please stay tuned!

Receiving WebHooks

Dealing with WebHooks depends on who the sender is. Sometimes there are additional steps when registering a WebHook to verify that the subscriber is really listening. Often the security model varies quite a bit. Some WebHooks provide a push-to-pull model where the HTTP POST request only contains a reference to the event information, which is then retrieved independently.

The purpose of Microsoft ASP.NET WebHooks is to make it both simpler and more consistent to wire up your API without spending a lot of time figuring out how to handle any WebHook variant:

Diagram: WebHook receivers
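
Receivers are switched on with an initializer call in your Web API configuration. As a sketch for the GitHub receiver (assuming the corresponding receiver package is installed; each receiver exposes a similarly named initializer and reads its shared secret from an application setting such as MS_WebHookReceiverSecret_GitHub):

using System.Web.Http;
using Microsoft.AspNet.WebHooks;

public static class WebApiConfig
{
    public static void Register(HttpConfiguration config)
    {
        config.MapHttpAttributeRoutes();

        // Enable the GitHub receiver; it validates incoming requests using the shared secret
        // stored in the 'MS_WebHookReceiverSecret_GitHub' application setting.
        config.InitializeReceiveGitHubWebHooks();
    }
}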

A WebHook handler is where you process the incoming WebHook. Here is a sample handler illustrating the basic model. No registration is necessary – it will automatically get picked up and called:

public class MyHandler : WebHookHandler
{
    // The ExecuteAsync method is where to process the WebHook data regardless of receiver
    public override Task ExecuteAsync(string receiver, WebHookHandlerContext context)
    {
        // Get the event type
        string action = context.Actions.First();

        // Extract the WebHook data as JSON or any other type as you wish
        JObject data = context.GetDataOrDefault<JObject>();

        return Task.FromResult(true);
    }
}

Finally, we want to ensure that we only receive HTTP requests from the intended party. Most WebHook providers use a shared secret which is created as part of subscribing for events. The receiver uses this shared secret to validate that the request comes from the intended party. It can be provided by setting an application setting in the Web.config file, or better yet, configured through the Azure portal or even retrieved from Azure Key Vault.

For more information about receiving WebHooks and lots of samples, please see these resources:

Sending WebHooks

Sending WebHooks is slightly more involved in that there are more things to keep track of. To support other APIs registering for WebHooks from your ASP.NET application, we need to provide support for:

  • Exposing which events subscribers can subscribe to, for example Item Created and Item Deleted;
  • Managing subscribers and their registered WebHooks which includes persisting them so that they don’t disappear;
  • Handling per-user events in the system and determining which WebHooks should get fired so that WebHooks go to the correct receivers. For example, if user A caused an Item Created event to fire, then determine which WebHooks registered by user A should be sent. We don’t want events for user A to be sent to user B;
  • Sending WebHooks to receivers with matching WebHook registrations.

As described in the blog Sending WebHooks with ASP.NET WebHooks Preview, the basic model for sending WebHooks works as illustrated in this diagram:

Diagram: sending WebHooks

Here we have a regular Web site (for example deployed in Azure) with support for registering WebHooks. WebHooks are typically triggered as a result of incoming HTTP requests through an MVC controller or a WebAPI controller. The orange blocks are the core abstractions provided by ASP.NET WebHooks:

  1. IWebHookStore: An abstraction for storing WebHook registrations persistently. Out of the box we provide support for Azure Table Storage and SQL but the list is open-ended.
  2. IWebHookManager: An abstraction for determining which WebHooks should be sent as a result of an event notification being generated. The manager can match event notifications with registered WebHooks as well as applying filters.
  3. IWebHookSender: An abstraction for sending WebHooks determining the retry policy and error handling as well as the actual shape of the WebHook HTTP requests. Out of the box we provide support for immediate transmission of WebHooks as well as a queuing model which can be used for scaling up and out, see the blog New Year Updates to ASP.NET WebHooks Preview for details.
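
On the trigger side, an event notification is typically raised from a controller action. A sketch using the NotifyAsync extension method from the custom-sender packages (the event name and payload below are illustrative, and WebHooks support is assumed to be initialized):

using System.Threading.Tasks;
using System.Web.Http;
using Microsoft.AspNet.WebHooks;

public class ItemsController : ApiController
{
    public async Task<IHttpActionResult> Post(string name)
    {
        // ... create the item in the backing store ...

        // Raise an event notification. IWebHookManager matches it against the current
        // user's registered WebHooks (and their filters), and IWebHookSender delivers it.
        await this.NotifyAsync("item_created", new { ItemName = name });

        return Ok();
    }
}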

The registration process can happen through any number of mechanisms as well. Out of the box we support registering WebHooks through a REST API but you can also build registration support as an MVC controller or anything else you like.

It’s also possible to generate WebHooks from inside a WebJob. This enables you to send WebHooks not just as a result of incoming HTTP requests but also as a result of messages being sent on a queue, a blob being created, or anything else that can trigger a WebJob:

Diagram: sending WebHooks from a WebJob

The following resources provide details about building support for sending WebHooks as well as samples:

Thanks to all the feedback and comments throughout the development process, it is very much appreciated!

Have fun!

Henrik

