

The book is now available, well done co-authors, reviewers and Packt Publishing team.

Connected Pawns

I am pleased to announce that “SOA Patterns with BizTalk Server 2013 and Microsoft Azure – Second Edition” by Mark Brimble, Johann Cooper, Coen Dijgraaf and Mahindra Morar can now be purchased on Amazon or from Packt. This is based on the first edition written by Richard Seroter. Johann has given a nice preview of the book here.

This is the first book I have co-authored and I can now tick that off on my bucket list. It was an experience to work with such talented co-authors and I was continually amazed how they rose to the occasion. I agree with Johann’s statement about being privileged to write the second edition of what was the first BizTalk book I ever owned.

As I updated chapters I was continually struck by how little had changed. We added new chapters, Chapter 4, Chapter 5, and Chapter 6…


For those who extend their BizTalk solutions using WCF behaviors, a common challenge is working out how to flow details from your BizTalk message into your WCF behavior extension so that you can access them in the extension.

Luckily for you, BizTalk and WCF are like a match made in heaven and work together really seamlessly.  BizTalk context properties flow through to WCF behavior extensions and are easily accessible, and it is quite possible to flow details back from a WCF behavior to the response message’s BizTalk context as well.

For the rest of this blog post I will assume that you have already setup a WCF Message Inspector extension and are making use of it on a WCF send port in BizTalk (a lot of what I demonstrate should be applicable for other types of behavior extensions and for receive locations as well).  I will demonstrate how you can access BizTalk context from within your Message Inspector, and also how you can set context property values on the response message as well.

For your request messages this is a no-brainer.  Every context property that was attached to the message in BizTalk automatically flows through to your WCF Message Inspector in the form of a property bag attached to the request message. Context property values can be fetched out of the property bag using the context property's namespace#name, as in the following example which fetches the BTS.Operation context property value (you would of course normally want to check that the context property actually exists in the property bag before converting the value to a string, to avoid an exception in case it doesn't).

        public object BeforeSendRequest(ref System.ServiceModel.Channels.Message request, System.ServiceModel.IClientChannel channel)
        {
            // Fetch the BTS.Operation context property value out of the property bag
            string btsOperation = request.Properties["http://schemas.microsoft.com/BizTalk/2003/system-properties#Operation"].ToString();

            // Do some stuff with the BTS.Operation value

            return null; // no correlation state is required
        }

Leveraging your context properties in WCF behaviors opens the door to much more cohesive solutions in which you use the right type of component for the right kind of problem. A good example is a scenario in which you need to calculate an OAuth signature, which is probably best done in a WCF behavior, but which you might otherwise have been tempted to do in a pipeline component simply because you needed access to BizTalk context properties.  Knowing that your context properties flow through to your behavior extensions keeps your options open.
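As a rough illustration of what that might look like, the following sketch reads a hypothetical custom context property holding a signing secret (the property name, namespace and signing scheme are all invented for the example) and uses it to add an Authorization header inside BeforeSendRequest; it is not a complete OAuth implementation.

        public object BeforeSendRequest(ref System.ServiceModel.Channels.Message request, System.ServiceModel.IClientChannel channel)
        {
            object secret;
            // Hypothetical custom context property assumed to have been written to the message in BizTalk
            if (request.Properties.TryGetValue("https://mycompany.props#SigningSecret", out secret) && secret != null)
            {
                using (var hmac = new System.Security.Cryptography.HMACSHA256(System.Text.Encoding.UTF8.GetBytes(secret.ToString())))
                {
                    // Illustrative signature only: HMAC-SHA256 over the WS-Addressing Action, Base64 encoded
                    string signature = System.Convert.ToBase64String(
                        hmac.ComputeHash(System.Text.Encoding.UTF8.GetBytes(request.Headers.Action ?? string.Empty)));

                    object httpRequestObject;
                    if (request.Properties.TryGetValue(System.ServiceModel.Channels.HttpRequestMessageProperty.Name, out httpRequestObject))
                    {
                        var httpRequest = (System.ServiceModel.Channels.HttpRequestMessageProperty)httpRequestObject;
                        httpRequest.Headers["Authorization"] = "HMAC " + signature;
                    }
                }
            }

            return null;
        }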

How about the flip side, when you want to flow some data back from your WCF behavior to BizTalk context properties? Here your options diverge slightly depending on whether you are dealing with a SOAP service or a RESTful service based on the WebHttpBinding, and you'll also find that every option involves a bit more work.  You can choose to pass the context back in the form of inbound SOAP headers or inbound HTTP headers, as demonstrated in the following code snippet.

        public void AfterReceiveReply(ref System.ServiceModel.Channels.Message reply, object correlationState)
        {
            //Writing to WCF.InboundHttpHeaders context property
            System.ServiceModel.Channels.HttpResponseMessageProperty httpHeaders = (System.ServiceModel.Channels.HttpResponseMessageProperty)reply.Properties["httpResponse"];
            httpHeaders.Headers.Add("ContextHTTPHeader", "Content property value");

            //Writing to WCF.InboundHeaders as well as to http://contextproperty#ContextSOAPHeader context property
            var header = new MessageHeader<string>("Content property value");
            var untypedMessageHeader = header.GetUntypedHeader("ContextSOAPHeader", "http://contextproperty");
            reply.Headers.Add(untypedMessageHeader);
        }

So if I actually use this behavior on a SOAP-based WCF send port, what will the result look like?

Context

We can see that populating an inbound HTTP header in the WCF behavior has resulted in an HTTP header called ContextHTTPHeader being appended to the end of the WCF.InboundHttpHeaders context property. You can use the BRE Pipeline Framework to extract individual HTTP headers easily. The value of the context property looks like this.

X-Content-Type-Options: nosniff
X-XSS-Protection: 1; mode=block
Connection: Keep-Alive
Content-Length: 535
Cache-Control: private, max-age=0
Content-Type: application/soap+xml; charset=utf-8
Date: Wed, 01 Jul 2015 08:59:10 GMT
Server: WWW Server/1.1
ContextHTTPHeader: Content property value
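If you would rather pull a single header out of this blob yourself, for example in a custom pipeline component, rather than via BRE Pipeline Framework rules, a minimal sketch along these lines would do the trick (pInMsg is assumed to be the IBaseMessage passed to the component's Execute method).

        // Read the WCF.InboundHttpHeaders context property and pick out one header by name
        string inboundHttpHeaders = (string)pInMsg.Context.Read(
            "InboundHttpHeaders", "http://schemas.microsoft.com/BizTalk/2006/01/Adapters/WCF-properties");

        string contextHeaderValue = null;
        if (inboundHttpHeaders != null)
        {
            foreach (string line in inboundHttpHeaders.Split(new[] { "\r\n", "\n" }, System.StringSplitOptions.RemoveEmptyEntries))
            {
                if (line.StartsWith("ContextHTTPHeader:", System.StringComparison.OrdinalIgnoreCase))
                {
                    contextHeaderValue = line.Substring("ContextHTTPHeader:".Length).Trim();
                    break;
                }
            }
        }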

Populating the inbound SOAP header in the WCF behavior actually results in it showing up in two context properties. The first is the WCF.InboundHeaders context property, whose value looks like this.

<headers><ContextSOAPHeader xmlns="http://contextproperty">Content property value</ContextSOAPHeader></headers>

The second is a custom context property with the same name/namespace as the SOAP header we created – http://contextproperty#ContextSOAPHeader. Its value looks like this.

<ContextSOAPHeader xmlns="http://contextproperty">Content property value</ContextSOAPHeader>

In both of the above scenarios you'll notice that it isn't entirely straightforward to extract the value that you set for your inbound SOAP header. Once again, the BRE Pipeline Framework can help you extract the value using a regular expression.
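As an illustration of that extraction (a sketch inside a pipeline component, not the framework's own rules), a regular expression can strip the wrapping element away from the custom context property:

        // Read the custom context property written by the behavior and strip the wrapping element
        string rawHeader = (string)pInMsg.Context.Read("ContextSOAPHeader", "http://contextproperty");

        string soapHeaderValue = null;
        if (rawHeader != null)
        {
            var match = System.Text.RegularExpressions.Regex.Match(
                rawHeader,
                @"<ContextSOAPHeader[^>]*>(.*?)</ContextSOAPHeader>",
                System.Text.RegularExpressions.RegexOptions.Singleline);

            if (match.Success)
            {
                soapHeaderValue = match.Groups[1].Value;
            }
        }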

Note that if you try to set an inbound SOAP header in a WCF behavior on a WCF-WebHttp send port you will be met with the following exception.

System.InvalidOperationException: Envelope Version 'EnvelopeNone (http://schemas.microsoft.com/ws/2005/05/envelope/none)' does not support adding Message Headers.

One of the core tenets of RESTful architecture is that APIs should support content negotiation.  That is to say that the service consumer should be able to tell the service, through the Accept HTTP header, what format the response message should be returned in.  On the flip side, it is commonly accepted that RESTful APIs should accept a range of request formats, typically XML and JSON at the very least, with the Content-Type HTTP header used to convey the format.

BizTalk Server 2013 R2 delivered the new JSON Decoder and Encoder pipeline components that at last treat JSON as a first-class format in BizTalk Server.  Internally, BizTalk maps will still only work with XML, as will a range of other BizTalk functions, but at the very least JSON is now accepted at the edges.  It is possible to get JSON support in older versions of BizTalk, however you will have to create custom pipeline components to cater for this, since Microsoft only added out-of-the-box functionality in BizTalk Server 2013 R2.

Unfortunately, the traditional means of employing these pipeline components comes with some rigidity.  If you want to expose a RESTful API using a WCF-WebHttp receive location then you need to choose up front whether or not to include a JSON Decoder pipeline component in your receive pipeline.  On the flip side, you need to choose whether to use a JSON Encoder on your send pipeline.  There is no way to dynamically decide whether to employ the JSON Decoder and Encoder based on the Content-Type and Accept headers on the request message without employing an orchestration.  And it gets even worse…what if you wanted to build an API that also supported an HTML response type?

I’m not against the use of orchestrations when they are used to provide some form of workflow or process orchestration, however I do not like being forced to use orchestration due to technical constraints.

I have addressed this problem for the most part using the BRE Pipeline Framework.  Because JSON decoding and encoding are only supported by BizTalk Server 2013 R2, I have implemented the solution in a custom MetaInstruction, or an extension to the framework, rather than making a change to the core framework.  Note that this extension requires that you have BRE Pipeline Framework v1.6.0 or above installed on your machine.  This new functionality is catered for via the new BREPipelineFramework.JSONInstructions vocabulary.

Vocab

To get started download the installer for the extension and install it on your BizTalk 2013 R2 machine.  Install the vocabulary using the Business Rules Engine Deployment Wizard, the vocabulary file being found in the program files folder which by default is “C:\Program Files (x86)\BREPipelineFramework.JSONInstructions”.

Here’s a quick rundown on what these vocabulary definitions do.

  • DecodeJSON – This vocabulary definition executes the JSON Decoder against the message.  You will need to specify a root node name and namespace with which the decoded XML will be wrapped.
  • EncodeJSON – This vocabulary definition executes the JSON Encoder against the message.  You will need to specify whether the root node will be stripped off prior to encoding or not.
  • ExecuteXSLTTransform – This vocabulary definition will execute an XSLT transformation based on an XSLT file which you must provide a file location for.  This is handy if you want to transform to/from HTML.
  • AssessContentType – This vocabulary definition lets you assess what the content type of your request message is so you can decide how to process it.  It will first attempt to derive the content type by studying the Content-Type HTTP header on the request.  If no Content-Type header is available then it will probe the first character of the message to determine whether it is XML or JSON (a sketch of this style of logic follows this list).
  • CacheAcceptHeader – Since we need to assess the Accept header value when choosing how to encode the response message, we must cache the Accept header value while processing the request.  If we did not do this then we would not have access to the value later.
  • AssessCachedAcceptHeader – This vocabulary definition lets you assess what the desired content type is based on the cached Accept header.  This can be executed on the send pipeline that is used to encode the response message.
  • GetCachedAcceptHeader – This vocabulary definition will return the raw value of the cached accept header.  This can be executed on the send pipeline that is used to encode the response message.  Only use this if you aren’t happy with the logic used by the AssessCachedAcceptHeader vocabulary definition.
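To make the AssessContentType behaviour referred to above a little more concrete, here is a rough sketch of that style of derivation logic (an illustration only, not the framework's actual implementation): trust the Content-Type header when it is present, otherwise probe the first non-whitespace character of the body.

        // Illustrative content type derivation: check the header first, then probe the body
        public static string DeriveContentType(string contentTypeHeader, string messageBody)
        {
            if (!string.IsNullOrEmpty(contentTypeHeader))
            {
                if (contentTypeHeader.IndexOf("json", System.StringComparison.OrdinalIgnoreCase) >= 0) return "JSON";
                if (contentTypeHeader.IndexOf("xml", System.StringComparison.OrdinalIgnoreCase) >= 0) return "XML";
                return "Other";
            }

            string trimmedBody = (messageBody ?? string.Empty).TrimStart();
            if (trimmedBody.StartsWith("<")) return "XML";
            if (trimmedBody.StartsWith("{") || trimmedBody.StartsWith("[")) return "JSON";
            return "Other";
        }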

The following screenshot demonstrates how we can dynamically decode incoming request messages based on the derived content type of the request message.  Note that if the message was XML instead of JSON then this rule would not fire.  Note as well that we are caching the Accept header value for later use…theoretically this should be done in a separate rule which executes regardless of the request content type, but I have collapsed the two actions into one rule for brevity.

Rcv JSON

The following screenshot demonstrates the assessment of the cached Accept header value to dynamically encode the response message to JSON.

Snd JSON

And the next screenshot similarly dynamically transforms the response to HTML by executing custom XSLT.

Snd HTML

So how do you make use of this new vocabulary?  The easiest way is to put the fully qualified class/assembly name of the custom MetaInstruction in the CustomMetaInstructionSeparatedList parameter (which takes a semicolon-delimited list) of the BREPipelineFrameworkComponent pipeline component, as below.  Once you’ve done this, the vocabulary will be available for use in the ExecutionPolicy.

PC Config

The fully qualified class/assembly name value for the JSON extension is as below.

BREPipelineFramework.JSON.JSONMetaInstruction, BREPipelineFramework.JSON, Version=1.0.0.0, Culture=neutral, PublicKeyToken=83eab0b166470ebc

So what are the shortcomings of the current solution?

  • The response content type derivation based on the cached Accept header doesn’t currently go as far as RESTful conventions normally expect.  The Accept header can list multiple content types, each with a priority weighting (a q value).  The solution I’ve put in place is just a v1, and it will always prioritize as follows regardless of explicit priorities – JSON > XML > URLEncodedForm > CSV > HTML > Text > Other.  If you don’t like this logic then you can always make your own choice by assessing the raw Accept header value with the GetCachedAcceptHeader vocabulary definition instead of the AssessCachedAcceptHeader vocabulary definition (a sketch of q-value handling follows this list).
  • I have not yet found a context driven way of overriding the Content-Type HTTP header on the response message.  This will always be set to Application/XML, unless you have set an alternate value on the Messages tab of your receive location’s configuration in which case it will always be that alternative value.  For now the only way I can think of overriding this is via a WCF behavior.
  • It only works with BizTalk Server 2013 R2, but you could easily get it to work with BizTalk 2013 by creating your own custom MetaInstruction and vocabulary, and for the most part you can just reuse my own code except in your MetaInstruction you reference your custom JSON encoding/decoding pipeline components.  You can find the source code for the extension at “$/BREPipelineFramework/Main/BREPipelineFramework.BizTalk2013.R2/BREPipelineFramework.JSON” in the BRE Pipeline Framework source code repository.
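For anyone who does want to honour those weightings after fetching the raw header with GetCachedAcceptHeader, a minimal sketch of that parsing (an illustration only, not part of the framework) might look like this:

        // Illustrative q-value handling: pick the highest-weighted media type out of an
        // Accept header such as "text/html;q=0.8, application/json;q=0.9, application/xml"
        public static string PreferredMediaType(string acceptHeader)
        {
            string best = null;
            double bestWeight = -1.0;

            foreach (string part in (acceptHeader ?? string.Empty).Split(','))
            {
                string[] segments = part.Split(';');
                string mediaType = segments[0].Trim();
                if (mediaType.Length == 0) continue;

                double weight = 1.0; // default weighting when no q parameter is present
                for (int i = 1; i < segments.Length; i++)
                {
                    string segment = segments[i].Trim();
                    double parsed;
                    if (segment.StartsWith("q=", System.StringComparison.OrdinalIgnoreCase) &&
                        double.TryParse(segment.Substring(2), System.Globalization.NumberStyles.Float,
                            System.Globalization.CultureInfo.InvariantCulture, out parsed))
                    {
                        weight = parsed;
                    }
                }

                if (weight > bestWeight)
                {
                    bestWeight = weight;
                    best = mediaType;
                }
            }

            return best;
        }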

One of the key areas to focus on when using Service Bus Relays to expose on premise BizTalk hosted WCF services externally without making any firewall changes is availability. Service Bus Relay endpoints in Azure will only be enlisted upon initialization of your on premise service. NOT when your application pool starts, NOT when you create your Service Bus Namespace, but only when a request is made to the local endpoint of the service. Browsing (effectively an HTTP GET) to the local .svc file on the BizTalk VM serves just as well to enlist the Azure Service Bus Relay endpoint.
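For reference, that activation request is nothing more than an HTTP GET against the local .svc address, along the lines of the following sketch (the URL is a placeholder for your own service):

        // Minimal sketch of the activation request: an HTTP GET against the local .svc file,
        // which spins up the service host and with it enlists the relay endpoint in Azure
        var keepAliveRequest = (System.Net.HttpWebRequest)System.Net.WebRequest.Create(
            "http://localhost/MyRelayService/Service.svc"); // placeholder URL
        keepAliveRequest.Method = "GET";

        using (var response = (System.Net.HttpWebResponse)keepAliveRequest.GetResponse())
        {
            System.Diagnostics.Trace.WriteLine("Keepalive returned HTTP " + (int)response.StatusCode);
        }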

A common solution to this problem is to use the IIS 8.0 Application Initialization module, which has been really well documented here. This effectively means that your .svc file is activated any time IIS resets, your application pool recycles, or your app domain reloads, which in turn results in your relay endpoint being enlisted in Azure. The Application Initialization module is also available for IIS 7.5 and can be downloaded here.

Application Initialization is an absolutely core part of any Service Bus Relay service, regardless of whether the backend service is a vanilla WCF service or based on a BizTalk receive location. No WCF Service Bus Relay solution should be built without this in mind.

However, I found that when dealing with BizTalk there is an additional consideration to keep in mind. What I observed is that if your BizTalk environment encounters outages due to an inability to connect to the BizTalk Management database (due to network or database issues) the services (not the application pool, not IIS, the services themselves!) will get shut down by BizTalk. When BizTalk recovers from the outage the services will not get spun up again until someone calls on the local service endpoint or browses to the local .svc file. Because the IIS application pool has not restarted, Application Initialization will not kick in and thus your endpoint will not be enlisted in Azure.

The solution I have put in place is to generate keepalive requests to the local endpoint every minute. This makes me feel a bit dirty, but I haven’t found a better solution yet, so I will detail it for you. If you can think of a better option, please do share it and I will update this post.

What I have done is set up a receive location based on the Scheduled Task adapter that generates a keepalive message every minute (I’ve chosen quite a frequent interval because availability is really key to my services; choose your interval appropriately). The keepalive message gets routed to a solicit-response WCF-WebHttp send port (actually, in my case, four send ports, one corresponding to each service exposed by Service Bus Relays that I want to keep alive). The endpoint address on this send port points at the .svc file for the service I want to keep alive, and the “HTTP Method and URL Mapping” section of the send port’s configuration is set to “GET”, since we want to perform the equivalent of browsing to the .svc file.

webhttp

One more thing of note is that since this is an HTTP GET, the target endpoint is not expecting a request message body, so we must use the WCF-WebHttp adapter’s functionality to suppress it, as per the following screenshot.

Suppress

Because I’m not really interested in the response messages being returned to the send port, I route these messages to a file send port which uses the BRE Pipeline Framework to discard the message (utilizing the NullifyMessage vocabulary definition in the BREPipelineInstructions.SampleInstructions.HelperInstructions vocabulary).  The address on the send port can be set to any folder since no file will actually get written out by the adapter.
Discard

Using the WCF-WebHttp adapter and pointing it directly to the .svc file helps minimize the requirement for additional development since you aren’t forced down the path of exposing additional operations in your service to cater for the keepalives.

In a recent post I mentioned how you could dynamically set HTTP headers on a WCF-WebHttp send port.  This post will take this a bit further and discuss how you can dynamically set the target endpoint address as well as the HTTP method.

First things first, just as I mentioned in the post about HTTP headers, you must ensure that the BTS.IsDynamicSend context property is set to true to enable the methods I am about to describe.

Overriding the endpoint address is a piece of cake.  Just set the BTS.OutboundTransportLocation context property to the target URL.

    public Microsoft.BizTalk.Message.Interop.IBaseMessage Execute(IPipelineContext pContext, Microsoft.BizTalk.Message.Interop.IBaseMessage pInMsg)
    {
        // Some non important logic here

        pInMsg.Context.Write("OutboundTransportLocation", "http://schemas.microsoft.com/BizTalk/2003/system-properties", "http://target.service.com/parameters/go/here");
        pInMsg.Context.Write("IsDynamicSend", "http://schemas.microsoft.com/BizTalk/2003/system-properties", true);
        return pInMsg;
    }

Overriding the HTTP method took a bit more trial and error.  In order to achieve this you must set the WCF.HttpMethodAndUrl context property to the desired HTTP method.

    public Microsoft.BizTalk.Message.Interop.IBaseMessage Execute(IPipelineContext pContext, Microsoft.BizTalk.Message.Interop.IBaseMessage pInMsg)
    {
        // Some non important logic here

        pInMsg.Context.Write("HttpMethodAndUrl", "http://schemas.microsoft.com/BizTalk/2006/01/Adapters/WCF-properties", "POST");
        pInMsg.Context.Write("IsDynamicSend", "http://schemas.microsoft.com/BizTalk/2003/system-properties", true);
        return pInMsg;
    }

So what exactly does this context property represent and why is it named Method and URL?  This context property represents the value that would normally be contained in the HTTP method and URL mapping section of a WCF-WebHttp send port’s configuration.
MethodAndUrlMapping

If you inspect the help text underneath the textbox in the above screenshot you’ll notice that there are two ways to set the HTTP method and URL mapping in this configuration.

  • By hardcoding the HTTP method in the text box such that every message being processed by the send port will have the same HTTP method.
  • By dynamically setting it (and the URL parameters as well) via a BtsHttpUrlMapping XML configuration, which maps HTTP methods and URLs to the value of the BTS.Operation context property on the message being processed.

Whatever value you have in the HTTP method and URL configuration of your send port will be put into the WCF.HttpMethodAndUrl context property on your message before it gets handled by the WCF adapter stack, which will apply the correct HTTP method and outbound URL.

So back to overriding the method: by setting a value for the WCF.HttpMethodAndUrl context property and ensuring that the HTTP method and URL configuration on the send port is blanked out, we are able to dynamically set the HTTP method on the outbound message.

I would be negligent if I didn’t mention that setting these two context properties (and BTS.IsDynamicSend) is a piece of cake when using the BRE Pipeline Framework which has out of the box capabilities to set these context properties. Being rules based, you can inspect the messages that are being processed and dynamically decide what the outbound URL and HTTP method should be based on message content and/or context.
BRE

Theoretically we could even dynamically set the WCF.HttpMethodAndUrl context property value to contain a BtsHttpUrlMapping XML instance, so that the HTTP method and URL are selected based on the BTS.Operation context property value.  Given that it is a lot easier to just set the BTS.OutboundTransportLocation and WCF.HttpMethodAndUrl context properties directly, this approach is too complicated to justify in most scenarios.
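For reference, a BtsHttpUrlMapping instance of the kind described above looks roughly like the following (the operation names and URLs are placeholders):

    <BtsHttpUrlMapping>
      <Operation Name="GetCustomers" Method="GET" Url="/customers" />
      <Operation Name="CreateCustomer" Method="POST" Url="/customers" />
    </BtsHttpUrlMapping>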
