Archive for June, 2015


One of the core tenets of RESTful architecture is that APIs should support content negotiation.  That is to say, the service consumer should be able to tell the service, through the Accept HTTP header, what format the response message should be returned in.  On the flip side, it is commonly accepted that RESTful APIs should support a range of request formats (typically XML and JSON at the very least), with the Content-Type HTTP header used to convey the request format.
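As a quick illustration (a hypothetical exchange with made-up resource names), a client submitting XML but asking for a JSON response would exchange headers like these:

    POST /api/orders HTTP/1.1
    Content-Type: application/xml
    Accept: application/json

    <Order>...</Order>

    HTTP/1.1 200 OK
    Content-Type: application/json

    { "orderId": 1 }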

BizTalk Server 2013 R2 delivered the new JSON Decoder and Encoder pipeline components that at last treat JSON as a first-class format in BizTalk Server.  Internally, BizTalk maps will still only work with XML, as will a range of other BizTalk functions; however, at the very least JSON is now accepted on the outskirts.  It is possible to get JSON support in older versions of BizTalk, but you will have to create custom pipeline components to cater for this, since Microsoft has only added the out-of-the-box functionality in BizTalk Server 2013 R2.

Unfortunately, the traditional means of employing these pipeline components comes with some rigidity.  If you want to expose a RESTful API using a WCF-WebHttp receive location then you need to choose whether or not to include the JSON Decoder pipeline component in your receive pipeline.  On the flip side, you need to choose whether to use the JSON Encoder on your send pipeline.  There is no way to dynamically decide whether to employ the JSON Decoder and Encoder based on the Content-Type and Accept headers on the request message without employing an orchestration.  And it gets even worse… what if you wanted to build an API that also supported an HTML response type?

I’m not against the use of orchestrations when they provide some form of workflow or process orchestration, but I do not like being forced into orchestration by technical constraints.

I have addressed this problem for the most part using the BRE Pipeline Framework.  Because JSON decoding and encoding are only supported by BizTalk Server 2013 R2, I have addressed this in a custom MetaInstruction (an extension to the framework) rather than by changing the core framework.  Note that this extension requires BRE Pipeline Framework v1.6.0 or above to be installed on your machine.  The new functionality is catered for via the new BREPipelineFramework.JSONInstructions vocabulary.

[Screenshot: the BREPipelineFramework.JSONInstructions vocabulary definitions]

To get started, download the installer for the extension and install it on your BizTalk 2013 R2 machine.  Then install the vocabulary using the Business Rules Engine Deployment Wizard; the vocabulary file can be found in the program files folder, which by default is “C:\Program Files (x86)\BREPipelineFramework.JSONInstructions”.

Here’s a quick rundown on what these vocabulary definitions do.

  • DecodeJSON – This vocabulary definition executes the JSON Decoder against the message.  You will need to specify the root node name and namespace in which the decoded XML will be wrapped (see the example after this list).
  • EncodeJSON – This vocabulary definition executes the JSON Encoder against the message.  You will need to specify whether the root node will be stripped off prior to encoding or not.
  • ExecuteXSLTTransform – This vocabulary definition will execute an XSLT transformation based on an XSLT file which you must provide a file location for.  This is handy if you want to transform to/from HTML.
  • AssessContentType – This vocabulary definition lets you assess what the content type of your request message is so you can decide how to process it.  It will first attempt to derive the content type by studying the Content-Type HTTP header on the request.  If no Content-Type header is available then it will probe the first character of the message to determine if it is XML or JSON.
  • CacheAcceptHeader – Since we need to assess the Accept header value when choosing how to encode the response message, we must cache the Accept header value while processing the request.  If we did not do this then we would not have access to the value later.
  • AssessCachedAcceptHeader – This vocabulary definition lets you assess what the desired content type is based on the cached Accept header.  This can be executed on the send pipeline that is used to encode the response message.
  • GetCachedAcceptHeader – This vocabulary definition will return the raw value of the cached accept header.  This can be executed on the send pipeline that is used to encode the response message.  Only use this if you aren’t happy with the logic used by the AssessCachedAcceptHeader vocabulary definition.
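To make the decoding concrete, here is a rough before-and-after (the root node name and namespace are hypothetical values; EncodeJSON is essentially the reverse, with the option of stripping the root node first):

    JSON request:  { "Name": "Jane", "Age": 30 }

    Decoded XML:   <ns0:Person xmlns:ns0="http://example.com/person">
                     <Name>Jane</Name>
                     <Age>30</Age>
                   </ns0:Person>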

The following screenshot demonstrates how we can dynamically decode incoming request messages based on the derived content type of the request message.  Note that if the message were XML instead of JSON then this rule would not fire.  Note as well that we are caching the Accept header value for later use… theoretically this should be done in a separate rule that executes regardless of the request content type, but I have collapsed the two actions into one rule for brevity.

[Screenshot: receive-side rule decoding a JSON request and caching the Accept header]

The following screenshot demonstrates the assessment of the cached Accept header value to dynamically encode the response message to JSON.

[Screenshot: send-side rule encoding the response as JSON based on the cached Accept header]

And the next screenshot similarly dynamically transforms the response to HTML by executing custom XSLT.

[Screenshot: send-side rule transforming the response to HTML via custom XSLT]

So how do you make use of this new vocabulary?  The easiest way is to put the fully qualified class/assembly name of the custom MetaInstruction in the CustomMetaInstructionSeparatedList parameter (which takes a semicolon-delimited list) of the BREPipelineFrameworkComponent pipeline component, as below.  Once you’ve done this, the vocabulary will be available for use in the ExecutionPolicy.

[Screenshot: BREPipelineFrameworkComponent pipeline component configuration]

The fully qualified class/assembly name value for the JSON extension is as below.

BREPipelineFramework.JSON.JSONMetaInstruction, BREPipelineFramework.JSON, Version=1.0.0.0, Culture=neutral, PublicKeyToken=83eab0b166470ebc

So what are the shortcomings of the current solution?

  • The response content type derivation based on the cached Accept header doesn’t currently go as far as RESTful standards normally expect.  Those standards dictate that multiple content types can be specified, each with a priority weighting (for example, Accept: application/json;q=0.8, text/html asks for HTML in preference to JSON).  The solution I’ve put in place is just a v1, and it will always prioritize as follows regardless of explicit priorities – JSON > XML > URLEncodedForm > CSV > HTML > Text > Other.  If you don’t like this logic then you can always make your own choice by assessing the raw Accept header value with the GetCachedAcceptHeader vocabulary definition instead of the AssessCachedAcceptHeader vocabulary definition.
  • I have not yet found a context-driven way of overriding the Content-Type HTTP header on the response message.  This will always be set to application/xml, unless you have set an alternate value on the Messages tab of your receive location’s configuration, in which case it will always be that alternative value.  For now, the only way I can think of overriding this is via a WCF behavior (see the sketch after this list).
  • It only works with BizTalk Server 2013 R2, but you could easily get it to work with BizTalk 2013 by creating your own custom MetaInstruction and vocabulary; for the most part you can reuse my code, except that your MetaInstruction would reference your own custom JSON encoding/decoding pipeline components.  You can find the source code for the extension at “$/BREPipelineFramework/Main/BREPipelineFramework.BizTalk2013.R2/BREPipelineFramework.JSON” in the BRE Pipeline Framework source code repository.
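On the WCF behavior idea, the following is a minimal, untested sketch of the shape such a behavior might take (the class name and hard-coded content type are mine, purely for illustration; it would still need an IEndpointBehavior/BehaviorExtensionElement wrapper and registration on the receive location to be usable):

    using System.ServiceModel;
    using System.ServiceModel.Channels;
    using System.ServiceModel.Dispatcher;

    // Hypothetical sketch: rewrite the Content-Type header on the outgoing reply.
    public class ContentTypeInspector : IDispatchMessageInspector
    {
        public object AfterReceiveRequest(ref Message request,
            IClientChannel channel, InstanceContext instanceContext)
        {
            return null; // nothing to correlate between request and reply
        }

        public void BeforeSendReply(ref Message reply, object correlationState)
        {
            HttpResponseMessageProperty httpResponse;
            if (reply.Properties.ContainsKey(HttpResponseMessageProperty.Name))
            {
                httpResponse = (HttpResponseMessageProperty)reply.Properties[HttpResponseMessageProperty.Name];
            }
            else
            {
                httpResponse = new HttpResponseMessageProperty();
                reply.Properties.Add(HttpResponseMessageProperty.Name, httpResponse);
            }

            // Hard-coded for illustration; real logic would pick a value
            // based on the cached Accept header.
            httpResponse.Headers["Content-Type"] = "application/json";
        }
    }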

Disclaimer – Shameless (but hopefully informational) plug follows 🙂

Close to a year ago, my colleagues Mark, Mahindra, and Colin and I were approached by Packt Publishing to update Richard Seroter’s “SOA Patterns with BizTalk Server 2009” book for BizTalk Server 2013.  We considered this a great honour, seeing as the base material in the first edition was so good, and decided to take on the project.

Nine months of working through the authoring process later, I am pleased to announce that “SOA Patterns with BizTalk Server 2013 and Microsoft Azure – Second Edition” is due to be published imminently and is available for pre-order.

[Image: “SOA Patterns with BizTalk Server 2013 and Microsoft Azure – Second Edition” book cover]

So what will you find in this edition of the book?

For me personally, the first edition of this book made me realize that there is more to BizTalk than tightly bound orchestrations, and it opened my eyes to how the platform could be used to maximum effectiveness.  It is my personal favourite BizTalk book, and it is one of only two BizTalk books that I personally own and have read cover to cover many times (the other is Professional BizTalk Server 2006).  I’ve of course read loads more BizTalk books, but these are the ones I want in my personal library at home.

Given this high degree of respect for the first edition, you’ll find that much of it has been preserved in its original form.  Some of the original material has been tweaked to make it more relevant, and it has been interspersed with many more warnings, tips, and tricks.

Beyond the original source material, we now have major portions of the book dedicated to Azure platforms such as Service Bus and MABS (Microsoft Azure BizTalk Services).  We also explore the usage of Azure App Service API Apps and Logic Apps, as well as Azure API Management; however, this is more of a peek under the covers to whet your appetite rather than an extreme deep dive.

We also dive deep into the use of REST and how to implement RESTful APIs using BizTalk Server.  Expect to get into the theory behind REST as well as read about some practical examples.  Given how relevant RESTful APIs are in the integration space these days, this is something to which all BizTalkers should be exposed.

We have an entire chapter dedicated to introducing you to tools that we believe you should be aware of and should be carefully assessing for usage in your own environment.  These include the ESB Toolkit, the BRE Pipeline Framework, Sentinet, BizTalk 360, AIMS for BizTalk, and the BizTalk Documenter.  While there are many more frameworks we’d have loved to discuss, space was limited, so we had to prioritize.  We have provided quick summaries of those that we feel deserve awareness but weren’t in our top picks (due to relevance to the book’s subject matter rather than how functional and generally useful they are).

As BizTalk Server 2013 R2 was released after we started working on the book, and the number of new features applicable to this book is relatively limited, we decided to truck on and focus on BizTalk Server 2013.  However, we would be negligent not to mention those BizTalk Server 2013 R2 features that are relevant to this book, so you will find a section on BizTalk Server 2013 R2 as well.

Finally, you will find that we have updated the existing accompanying source code for BizTalk Server 2013 and expanded the existing samples with further examples (for example we demonstrate in brief how the async features in .NET 4.5 can be used in WCF clients, and we have also provided samples for untyped orchestrations).  And of course we have added brand new source code to accompany new chapters as well.

It has been a great honour writing this book, and it will be an even greater honour if you decide to read it as well.  I sincerely hope that we have done Richard’s original book justice, and that this new edition helps to add to your knowledge, expand your horizons, and further your careers.

I have just released v1.6.0 of the BRE Pipeline Framework to the CodePlex project page.  The framework is now mature enough that its core has not changed much this time around, but there are still tons of new features and optimizations to be found in this version.  For a recap on what the framework is all about, you can read the primer or the provided documentation.

Before I dive into the new features, here are some steps to get started.  If you’re installing the framework for the first time or upgrading from a version above v1.4.0, then all you need to do is download the installer, uninstall the previous version if applicable, run the new installer to completion, and then import the BREPipelineFramework.LatestVocabs.xml vocabulary definition file from the vocabulary subfolder in the program files folder (the default location is C:\Program Files (x86)\BRE Pipeline Framework\Vocabularies).  If you are upgrading from a version older than v1.4.0, then you might be forced to stop your BizTalk host instances, since older versions of the pipeline component were not signed with a strong name key and installed in the GAC.

Onto the new features.

HTTP Header Manipulation
The BREPipelineFramework.SampleInstructions.ContextInstructions vocabulary already has vocabulary definitions that allow you to read the WCF.InboundHttpHeaders context property to get at inbound headers, or to set the WCF.HttpHeaders context property to set outbound headers. However, reading or setting HTTP headers via these context properties is not exactly easy, as each context property actually contains a delimited list of all the inbound or outbound HTTP headers, as per the following example. Getting to an individual HTTP header can be tricky if you try to use these vocabulary definitions.

Important_User: True
Content-Length: 328
Content-Type: application/x-www-form-urlencoded
Expect: 100-continue
Host: servicebusrelaytest.servicebus.windows.net
Authorized_Token: 32d6f253-de53-4cf3-b559-1ec34ac1ad28
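For illustration, manually extracting one header from that delimited value in a custom pipeline component might look roughly like this (a hypothetical sketch; inmsg is the IBaseMessage handed to a pipeline component’s Execute method, and the property name and namespace are those shown above):

    using System;
    using System.Linq;
    using Microsoft.BizTalk.Message.Interop;

    public static class HttpHeaderHelper
    {
        // Hypothetical helper: split the delimited header list and find one header.
        public static string ReadInboundHttpHeader(IBaseMessage inmsg, string headerName)
        {
            var headers = inmsg.Context.Read(
                "InboundHttpHeaders",
                "http://schemas.microsoft.com/BizTalk/2006/01/Adapters/WCF-properties") as string;

            if (headers == null) return null;

            return headers
                .Split(new[] { "\r\n", "\n" }, StringSplitOptions.RemoveEmptyEntries)
                .Select(line => line.Split(new[] { ':' }, 2))
                .Where(parts => parts.Length == 2
                    && parts[0].Trim().Equals(headerName, StringComparison.OrdinalIgnoreCase))
                .Select(parts => parts[1].Trim())
                .FirstOrDefault();
        }
    }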

So there is now a new vocabulary called BREPipelineFramework.SampleInstructions.HttpHeaderInstructions, which gives you the below vocabulary definitions that allow you easier access to get or set inbound and outbound HTTP headers.
[Screenshot: the HttpHeaderInstructions vocabulary definitions]

The following screenshot demonstrates how these vocabulary definitions can be used. As you’ll see, you can read in inbound HTTP headers, explicitly set outbound HTTP headers, or copy inbound HTTP headers to outbound HTTP headers. You can set as many outbound HTTP headers as you want, and the framework will roll them all up into the single context property for you.
[Screenshot: example rules reading, setting, and copying HTTP headers]

Dynamic Pipeline Component Execution
This version of the framework allows you to dynamically execute some of the out-of-the-box Microsoft pipeline components. This feature was provided as a direct result of an issue posted on the framework’s page asking for the ability to have context properties promoted on the target message when performing dynamic transformation using the framework. That is only really possible by executing the XML Disassembler pipeline component, so the framework now has the ability to execute this component and some others.  You can execute these pipeline components from within the new vocabulary BREPipelineFramework.SampleInstructions.PipelineInstructions.

[Screenshot: the PipelineInstructions vocabulary definitions]

As you’ve seen in the previous screenshot, you can now execute the XML Assembler, XML Disassembler, Flat File Assembler, Flat File Disassembler, and XML Validator pipeline components dynamically.  This means that you can assess conditions that allow you to decide whether (and which) pipeline components to execute, and moreover you can dynamically configure parameters for these pipeline components as well.  The following screenshot demonstrates how you can execute the Flat File Disassembler while specifying the trailer schema parameter value.

[Screenshot: executing the Flat File Disassembler with a trailer schema parameter]

One thing to note is that the BRE Pipeline Framework pipeline component is not a disassembler, so if you choose to execute a disassembler pipeline component and it ends up debatching an envelope message, the pipeline component will only pass through the first disassembled message.  In the case of the XML Disassembler, if you don’t want the message to be debatched you can make use of the DisassembleXMLMessagePropertyPromotionOnly vocabulary definition, which will result in context properties being promoted against the message without any debatching taking place.

Easier assertion of custom MetaInstructions

In the past, if you wanted to use your own custom MetaInstructions and corresponding vocabularies (see the project documentation page for more info on creating these), you would have to use an Instruction Loader Policy to assert the MetaInstruction fact so that it could be used in your Execution Policy (the BRE Policy in which you assess and manipulate your message’s context and content). In the following screenshot you’ll see how a rule in an Instruction Loader Policy is used to assert a custom MetaInstruction fact into an Execution Policy by specifying the fully qualified class name and fully qualified assembly name of the MetaInstruction class.
[Screenshot: Instruction Loader Policy rule asserting a custom MetaInstruction fact]
With v1.6.0 of the framework there is now a new pipeline component parameter called CustomMetaInstructionSeparatedList within which you can provide a semicolon separated list of fully qualified class/assembly names in the below format.

BREPipelineFramework.TestSampleInstructions.MetaInstruction, BREPipelineFramework.TestSampleInstructions, Version=1.0.0.0, Culture=neutral, PublicKeyToken=83eab0b166470ebc;BREPipelineFramework.TestSampleInstructions.MetaInstructionCopy, BREPipelineFramework.TestSampleInstructions, Version=1.0.0.0, Culture=neutral, PublicKeyToken=83eab0b166470ebc

This makes usage of custom MetaInstructions a whole lot easier to manage. You might still want to use an Instruction Loader Policy if any of the below scenarios applies to you.

  • You only want to conditionally assert the custom MetaInstruction in certain scenarios
  • You want to assert XML or SQL based facts
  • You want to conditionally override the ExecutionPolicy or the version

Assorted changes
There are a couple more interesting new features across a variety of vocabularies.
In the BREPipelineFramework.SampleInstructions.CachingInstructions vocabulary, you’ll now find a vocabulary definition called GetCachedValueFromSSOConfigStore. The first time this vocabulary definition is executed it will read in a value from an SSO configuration store and it will also cache it. On further use of the vocabulary definition it will read the cached value rather than the SSO configuration store, keeping the number of hits to the SSO database to a minimum. By default the values will be cached for 30 minutes, but you can override this through the use of the UpdateCacheExpiryTime vocabulary definition in the same vocabulary.
[Screenshot: GetCachedValueFromSSOConfigStore usage]
There is also a new vocabulary definition GetValueFromSSOConfigStore in the BREPipelineInstructions.SampleInstructions.HelperInstructions vocabulary which will read in a value from an SSO configuration store without caching the result. This is probably useful in scenarios where you expect that you will be updating the configuration values regularly and always want to be reading in the latest value.
There are two new vocabulary definitions in the BREPipelineFramework.SampleInstructions.XMLTranslatorInstructions vocabulary that allow you to remove an element and all of its child elements from the XML message being processed, in a fully streaming manner. These are the RemoveElementAndChildrenByName and RemoveElementAndChildrenByNameAndNamespace vocabulary definitions. An example of their usage is below. No exception will be thrown if an element matching the criteria you set is not found in the message, so these vocabulary definitions can be used liberally.
[Screenshot: RemoveElementAndChildren vocabulary definitions in use]
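To make the behavior concrete, here is a hypothetical before-and-after for RemoveElementAndChildrenByName targeting an element named CreditCard:

    Input:   <Order><Id>1</Id><CreditCard><Number>4111111111111111</Number></CreditCard></Order>
    Output:  <Order><Id>1</Id></Order>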
There are also two new vocabulary definitions, ReturnFirstRegexMatchInString and ReturnRegexMatchInStringByIndex, that allow you to run regular expressions against strings (these can be any strings; you could potentially chain vocabulary definitions together to run the regular expression against a context property, for example).
V1.6.0 of the framework also has more tracing all over the place, which will make it easier to understand what is going on within your BRE rules.

Optimizations

Finally, there have been some optimizations made to the framework. All reflected types for custom MetaInstructions and maps being executed dynamically will now be cached rather than having their types reflected every single time. This results in improved runtime performance of the framework.

One of the key areas to focus on when using Service Bus Relays to expose on-premises BizTalk-hosted WCF services externally without making any firewall changes is availability. Service Bus Relay endpoints in Azure will only be enlisted upon initialization of your on-premises service: NOT when your application pool starts, NOT when you create your Service Bus namespace, but only when a request is made to the local endpoint of the service. Browsing (effectively an HTTP GET) to the local .svc file on the BizTalk VM serves just as well to enlist the Azure Service Bus Relay endpoint.

A common solution to this problem is to use the IIS 8.0 Application Initialization module, which has been really well documented here. What this effectively means is that your .svc file is activated any time IIS resets, your application pool recycles, or your app domain reloads, which in turn results in your relay endpoint being enlisted in Azure. The Application Initialization module is also available for IIS 7.5 and can be downloaded here.
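As a rough sketch of the moving parts (the site path and pool names here are hypothetical; see the linked documentation for the authoritative walkthrough), the relevant IIS configuration looks something like this:

    <!-- applicationHost.config: start the application pool eagerly -->
    <applicationPools>
      <add name="BizTalkIsolatedPool" startMode="AlwaysRunning" />
    </applicationPools>

    <!-- applicationHost.config: preload the application on pool startup -->
    <application path="/MyService" preloadEnabled="true"
                 applicationPool="BizTalkIsolatedPool" />

    <!-- web.config: issue a warm-up request to the .svc on initialization -->
    <system.webServer>
      <applicationInitialization>
        <add initializationPage="/MyService.svc" />
      </applicationInitialization>
    </system.webServer>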

Application Initialization is an absolutely core part of any Service Bus Relay service, regardless of whether the backend service is a vanilla WCF service or based on a BizTalk receive location. No WCF Service Bus Relay solution should be built without this in mind.

However, I found that when dealing with BizTalk there is an additional consideration to keep in mind. What I observed is that if your BizTalk environment encounters outages due to an inability to connect to the BizTalk Management database (because of network or database issues), the services (not the application pool, not IIS, the services themselves!) will get shut down by BizTalk. When BizTalk recovers from the outage, the services will not get spun up again until someone calls the local service endpoint or browses to the local .svc file. Because the IIS application pool has not restarted, Application Initialization will not kick in, and thus your endpoint will not be enlisted in Azure.

The solution I have put in place is to generate keepalive requests to the local endpoint every minute. This makes me feel a bit dirty, but I haven’t found a better solution yet, so I will detail it for you. If you can think of a better option, please do share it and I will update this post.

What I have done is set up a receive location based on the Scheduled Task adapter that generates a keepalive message every minute (I’ve chosen quite a regular interval because availability is really key to my services; choose your interval appropriately). The keepalive message gets routed to a solicit-response WCF-WebHttp send port (actually, in my case, four send ports, one corresponding to each service exposed by Service Bus Relays that I want to keep alive). The endpoint address on this send port points to the .svc file for the service I want to keep alive, and the “HTTP Method and URL Mapping” section of the send port’s configuration is set to “GET”, since we want to perform the equivalent of browsing to the .svc file.

[Screenshot: WCF-WebHttp send port configuration with HTTP Method and URL Mapping set to GET]
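Conceptually, each keepalive performs nothing more than the following GET (a minimal sketch with a hypothetical local URL; in my solution the send port does this rather than custom code):

    using System;
    using System.Net.Http;

    class KeepAlive
    {
        static void Main()
        {
            using (var client = new HttpClient())
            {
                // The response body is irrelevant; the GET alone spins up the
                // service and (re)enlists the relay endpoint in Azure.
                var response = client.GetAsync("http://localhost/MyService/MyService.svc").Result;
                Console.WriteLine(response.StatusCode);
            }
        }
    }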

One more thing of note: since this is an HTTP GET, the target endpoint is not expecting a request message body, so we must use the WCF-WebHttp adapter’s functionality to suppress it, as per the following screenshot.

[Screenshot: suppressing the request message body for the GET verb on the WCF-WebHttp send port]

Because I’m not really interested in the response messages being returned to the send port, I route these messages to a FILE send port that uses the BRE Pipeline Framework to discard the message (utilizing the NullifyMessage vocabulary definition in the BREPipelineInstructions.SampleInstructions.HelperInstructions vocabulary).  The address on the send port can be set to any folder, since no file will actually get written out by the adapter.
[Screenshot: FILE send port discarding the keepalive response via NullifyMessage]

Using the WCF-WebHttp adapter and pointing it directly to the .svc file helps minimize the requirement for additional development since you aren’t forced down the path of exposing additional operations in your service to cater for the keepalives.

While working on a project using the WCF webHttpRelayBinding binding with SAS based authentication over transport security, I found that my services were taking a very long time to spin up (30-60 seconds) and that my runtime performance was a bit less than optimal in terms of latency (and I had proven that the latency was not a result of my backend service).  To give you an idea what I was working with, my web.config file had contents similar to the below in the system.serviceModel element.

    <behaviors>
      <endpointBehaviors>
        <behavior name="sharedSecretClientCredentials">
          <transportClientEndpointBehavior>
            <tokenProvider type="SharedAccessSignature">
              <sharedAccessSignature keyName="keyname" key="key" />
            </tokenProvider>
          </transportClientEndpointBehavior>
        </behavior>
      </endpointBehaviors>
    </behaviors>
    <services>
      <service name="Microsoft.BizTalk.Adapter.Wcf.Runtime.BizTalkServiceInstance" behaviorConfiguration="ServiceBehaviorConfiguration">
        <endpoint name="RelayEndpoint" address="https://sbnamespace.servicebus.windows.net/service" binding="webHttpRelayBinding" bindingNamespace="http://tempuri.org/" bindingConfiguration="RelayEndpointConfig" behaviorConfiguration="sharedSecretClientCredentials" contract="Microsoft.BizTalk.Adapter.Wcf.Runtime.ITwoWayAsync" />
      </service>
    </services>
    <bindings>
      <webHttpRelayBinding>
        <binding name="RelayEndpointConfig">
          <security relayClientAuthenticationType="RelayAccessToken" mode="Transport" />
        </binding>
      </webHttpRelayBinding>     
    </bindings>

I didn’t observe such problems on my development VM (which was running with pretty much no firewalls in front of it), but I did observe them in my client’s UAT environment. This was in spite of following Microsoft’s guidelines, which suggest that you should have outbound TCP ports 9350-9354 open on your firewall to enable Service Bus connectivity.  I went through an exercise using the PortQuiz website to prove that these ports were indeed accessible from the UAT server, so the performance issues were puzzling.
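If you prefer to test that sort of thing from code rather than a website, a quick outbound probe might look like this (a rough sketch; the namespace host name is hypothetical):

    using System;
    using System.Net.Sockets;

    class PortProbe
    {
        static void Main()
        {
            // Hypothetical Service Bus namespace; ports per Microsoft's guidance.
            const string host = "sbnamespace.servicebus.windows.net";
            foreach (var port in new[] { 9350, 9351, 9352, 9353, 9354 })
            {
                try
                {
                    using (var client = new TcpClient())
                    {
                        client.Connect(host, port);
                        Console.WriteLine("Port {0}: reachable", port);
                    }
                }
                catch (SocketException)
                {
                    Console.WriteLine("Port {0}: blocked", port);
                }
            }
        }
    }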

Next up, I spun up a Wireshark capture.  To start with, I applied the below display filter to get rid of the extra noise.

tcp.port eq 9350 || tcp.port eq 9351 || tcp.port eq 9352 || tcp.port eq 9353 || tcp.port eq 9354

I then initialized some of my services (I shut them down forcibly and then spun them up again) to observe which ports were in use.  I saw that the conversation with Service Bus was being initiated on port 9350 as expected; however, that appeared to be the end of the story.  I wasn’t seeing any comms on ports 9351-9354.  I then right-clicked one of the displayed records in Wireshark and chose “Conversation Filter -> IP”, which updates the filter so that it displays anything with a source or destination IP address matching those on the selected record.

This suddenly resulted in a whole lot more records being displayed and helped me get to the root of the issue.  What I observed was that after Service Bus made the initial connection on port 9350, it attempted to continue the conversation on port 5671 (AMQPS, i.e. AMQP over SSL), which hadn’t been opened on the firewall.  This connection attempt was obviously failing, and the observed behavior was that some retries were attempted, with fairly large gaps in between, until Service Bus finally decided to fall back to port 443 (HTTPS) instead.  Pay particular attention to the lines in the following screenshot with the numbers 1681, 2105, and 2905 in the first column.

[Screenshot: Wireshark capture showing the failed port 5671 attempts and fallback to port 443 (Microsoft.ServiceBus 2.6.0.0)]

This explained why my service was taking a long time to start up (Service Bus was failing to connect via AMQPS and going through retry cycles before falling back to HTTPS), and it also explained why my runtime performance was lower than expected (HTTPS is slower than a raw TCP connection).  However, this didn’t explain why Service Bus was attempting to use port 5671 rather than 9351-9354 as per the documentation.

Repeating the same test on my own VM showed that Service Bus was continuing the connection on ports 9351-9354 as expected… so why the difference? At the suggestion of my colleague Mahindra, I compared the versions of the Microsoft.ServiceBus assembly across the two machines. You can do this by running “gacutil -l Microsoft.ServiceBus” in a Visual Studio command prompt, or by manually checking the GAC directory, which is typically “C:\Windows\Microsoft.NET\assembly\GAC_MSIL” for .NET 4+ assemblies.
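A third option, if you’d rather check from code, is to load the assembly and print its version (a trivial sketch):

    using System;
    using System.Reflection;

    class VersionCheck
    {
        static void Main()
        {
            // Prints whichever Microsoft.ServiceBus version this process resolves.
            var asm = Assembly.Load("Microsoft.ServiceBus");
            Console.WriteLine(asm.GetName().Version);
        }
    }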

Voila. I found that I was running version 2.1.0.0 on the machine that was behaving correctly, and version 2.6.0.0 on the machine that was misbehaving. It appears that the protocol selection behavior for Service Bus relays changed somewhere between these two assembly versions. I have not pinpointed exactly which version the change occurred in, and I don’t yet know whether it was by design or accidental. Either way, Microsoft has not yet updated its documentation, which means that others will be as confused as I was.

So what are your choices?

  1. You can downgrade to an older version of the assembly.  2.1.0.0 will definitely work for you, and you might be able to get away with a slightly higher version below 2.6.0.0, but it will be up to you to establish which versions are acceptable, since I haven’t managed to do this.  You’ll need to update the webHttpRelayBinding binding registration in your machine.config files (or wherever you’ve chosen to register it, if you’ve gone for a more granular approach) to point to the older assembly as well.
  2. You can choose to stick with the latest version of the assembly and open up outbound TCP connections on port 5671 on your firewall.

I chose to stick with option #1 because I’m not sure at this stage whether this change in behavior is intentional or incidental, and also because my impression is that raw TCP over ports 9351-9354 will be faster than AMQPS.  You will find that option #2 is also functional.

With the older version of the assembly in play, I could now see traffic on ports 9351-9354 as expected, my services were spinning up in less than a second, and latency was much more in line with my expectations.
[Screenshot: Wireshark capture showing traffic on ports 9351-9354 (Microsoft.ServiceBus 2.1.0.0)]
