Tag Archive: BizTalk Server


I have just released v1.5.5 of the BRE Pipeline Framework to the CodePlex project page.  This is a relatively minor update in that it only adds some missing WCF context properties to the enumerations used to get and set WCF context properties.

What makes this minor update important is that most of these context properties are REST-related, and they will enable you to build flexible REST-based integration solutions in which you can dynamically set HTTP methods, URLs, or headers (a rough sketch of setting these properties in code follows the list).  The list of context properties added to the enumeration is below.

  • HttpHeaders
  • HttpMethodAndUrl
  • InboundHttpHeaders
  • InboundHttpMethod
  • InboundHttpStatusCode
  • InboundHttpStatusDescription
  • IssuerName
  • IssuerSecret
  • OutboundHttpStatusCode
  • OutboundHttpStatusDescription
  • StsUri
  • SuppressMessageBodyForHttpVerbs
  • VariablePropertyMapping
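
As an illustration of what these enable, below is a minimal sketch of setting a few of these properties from a custom pipeline component, which is what the framework’s vocabulary definitions do for you through BRE rules rather than code.  It assumes the standard WCF adapter context property namespace; treat it as a sketch, not framework code.

using Microsoft.BizTalk.Message.Interop;

public static class WcfRestContextSketch
{
    // Standard namespace under which the WCF adapter context properties live.
    private const string WcfNamespace =
        "http://schemas.microsoft.com/BizTalk/2006/01/Adapters/WCF-properties";

    public static void SetRestProperties(IBaseMessage message)
    {
        // Dynamically choose the HTTP method for an outbound WCF-WebHttp call.
        message.Context.Write("HttpMethodAndUrl", WcfNamespace, "GET");

        // Add or override outbound HTTP headers.
        message.Context.Write("HttpHeaders", WcfNamespace, "Content-Type: application/json");

        // Suppress the message body for verbs that shouldn't carry one.
        message.Context.Write("SuppressMessageBodyForHttpVerbs", WcfNamespace, "GET,HEAD");
    }
}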

This new version is fully backwards compatible, and installing it will only require uninstalling the previous version from the add/remove programs control panel and then running the new installer.  Please post any issues/feedback/comments to the CodePlex project discussions page.

I’m pleased to announce that the White Paper titled “The A-Y of running BizTalk Server in Microsoft Azure”, which I have been working on for the last two months, is now available to download from the BizTalk360 White Paper collection.

Writing this White Paper has been a mammoth task, especially given that Microsoft Azure is an ever-changing entity (I’m pretty certain my synopsis of D-series VMs is already a bit dated, since Microsoft have recently released new information stating that, in addition to having SSDs, they sport faster CPUs, which I was previously unaware of), and I owe all the reviewers a great deal of thanks for their help.

This endeavour started out as a blog post (I have since deleted the draft), and it quickly became apparent that the topic was far larger than anything I could cover within a single post and required a lot more attention to detail.

I hope the paper proves to be interesting and valuable to you, and as always welcome any feedback.

It will be nice to get back to blogging again 🙂

A couple of months ago I released the BRE Pipeline Framework v1.5 (since superseded by v1.5.1) to CodePlex.  One of the new features in this version of the framework is support for dynamic transformation.  In this blog post I’ll explain some scenarios in which this feature might be useful to you and show you how you can use the BRE Pipeline Framework to execute your maps dynamically.

 

Why you’d want to use the BRE Pipeline Framework for Dynamic Transformation

The first reason to take advantage of the BRE Pipeline Framework for Dynamic Transformation is to build messaging-only applications that support hot deployments for maps.  A hot deployment, from a BizTalk perspective, is one in which a BizTalk application doesn’t need to be stopped during a redeployment of your map assemblies; the only requirement is that the relevant host instances are restarted after the deployment is complete.  In a messaging application with maps on your receive/send ports you might find that BizTalk complains if you try to import an MSI containing maps that are used on those ports while the application is in a running state; in fact you might even need to delete the ports altogether, or at the very least remove the maps from them, before your redeployment takes effect.  A hot deployment requires no more than a momentary outage of your application while host instances restart, which fits in well with an application that is not tolerant of outages.

The second reason to take advantage of the BRE Pipeline Framework for Dynamic Transformation is transformation selectivity.  When you apply inbound/outbound maps on your receive/send ports, BizTalk chooses the first map whose source schema matches the message type of the message in question.  If you apply two maps with the same source schema on a receive/send port there is no way for you to specify any additional conditions that determine which map executes; the second map will always be ignored.  With the BRE Pipeline Framework you can apply complex conditions which combine checking message body content through XPath statements or regexes, checking values in the message context, checking values from SSO configuration stores or EDI trading partner agreements, or even the current time of day…and the best part is that if the selectivity functionality already exists in the BRE Pipeline Framework (which it does in all the mentioned scenarios and many more) then you can achieve all of this with zero lines of code, and if the functionality doesn’t exist then the framework provides extensibility points to cater for it.  Another great advantage of using the BRE to choose which maps get executed is that you can change the selectivity rules at runtime if required, without any code changes, which offers a lot of flexibility for applications that are not tolerant of outages.
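
To make the selectivity idea concrete, a content-based condition might read roughly like the below once assembled from the framework’s vocabularies (the XPath and element names here are hypothetical, not taken from a real policy):

GetXPathResult("/*[local-name()='PurchaseOrder']/*[local-name()='OrderType']") == "EDI"

If the condition evaluates to true, the rule’s actions (for example a TransformMessage instruction, shown later in this post) fire for that message.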

The third reason to take advantage of the BRE Pipeline Framework for Dynamic Transformation is so that you can chain maps.  As mentioned above, on a BizTalk receive/send port only the first matching map will execute.  It is not possible to execute multiple maps sequentially within a port in a single direction (you can of course specify one inbound and one outbound map, thus two, if your port is a two-way port).  In a messaging-only solution, if you receive a message and send it out on a send port, the maximum number of maps you can execute is two: one on the receive port in the inbound direction, and one on the send port in the outbound direction.  For the most part this is adequate, but there might be some scenarios where it isn’t.  With the BRE Pipeline Framework you can specify as many maps as you like to execute sequentially, within a single rule or across multiple rules (make sure you set priorities across the different rules to guarantee the required map execution order), or you can execute a map in your BRE Policy and then apply one on your port as well (keep in mind that for an inbound message the pipeline executes before the port maps, and the reverse is true for outbound messages).  Beyond just chaining maps, you can also chain other instructions contained within the BRE Pipeline Framework together with your maps, so you could for example execute a map dynamically and then perform a string find/replace against the message body.

Another possible benefit of dynamic transformation is avoiding inter-application references, which are possibly one of the most painful aspects of BizTalk Server.  When you want to pass a message from one application to another you often have no choice but to have one application reference the other (the one being referenced containing the schema).  Inter-application references make deployments a lot more difficult since you can’t update the referenced application without deleting all the referencing applications first (forget about hot deployments altogether; this is the other extreme).  To get around this problem you could potentially have separate schemas in each application, create a map in an assembly that only gets deployed to the GAC rather than to the BizTalk Management Database, and then use the BRE Pipeline Framework to transform the message in a pipeline (in either application; it shouldn’t matter).  This could allow you to create more decoupled applications with much easier deployment models.
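
Under this approach the map assembly never appears as a BizTalk resource; a GAC-only deployment from a Visual Studio Command Prompt might look like the below (assembly name is hypothetical):

gacutil -i SharedMaps.dll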

The aforementioned benefits can also be achieved through the use of the ESB Toolkit.  The main difference between the BRE Pipeline Framework and the ESB Toolkit is that the former is intended to provide utility whereas the latter is more about implementing the routing slip pattern, with added utility as well.  The ESB Toolkit comes with a fair learning curve, as there are a whole lot of new concepts to wrap your head around such as itineraries, resolvers, messaging extenders, orchestration extenders etc… and you’ll find that at least the out-of-the-box utility can be quite limited (there are of course community projects that have improved on these shortcomings).  I definitely see valid scenarios in which either framework should be used, and wouldn’t consider them to be competing frameworks…they could even be used in tandem.

One final reason that I can think of off the top of my head is traceability.  Given that the BRE Pipeline Framework caters for tracing based on the CAT Instrumentation Framework and also provides rules execution logs (see this post for more info), you can always tell why a map was chosen for a given message.  This can be especially handy when you are debugging a BizTalk application.

Just like with the ESB Toolkit, executing a map dynamically within your pipeline goes against one of the best-practice principles of building streaming pipeline components, so please carefully evaluate the pros and cons of using this feature before implementing it.

 

Implementing Dynamic Transformation with the BRE Pipeline Framework

To illustrate how to use the BRE Pipeline Framework to execute maps dynamically, I will provide you with, and walk you through, a simple example solution.  The solution contains three schemas: one called PurchaseOrder, one called PurchaseOrderEDI, and one called PurchaseOrderXML.  All the schemas contain an element (with varying names) which holds the purchase order number.  The PurchaseOrder schema also contains an additional node called OrderType, which is linked to a promoted property of the same name.  The rule we want to put in place for PurchaseOrder messages is that if the value of the OrderType context property is XML, then we want to execute a map that converts the message to a PurchaseOrderXML.  If the result of an XPath query against the OrderType node is EDI, then we want to execute a map that converts the message to a PurchaseOrderEDI.

On to the implementation…  You will of course need to download and install at least v1.5.1 of the framework from the CodePlex site and import the required vocabularies from the program files folder.  Once done, create a receive pipeline (receive and send pipelines are both supported) and drag the BREPipelineFrameworkComponent from the toolbox to the Validate pipeline stage (you can choose any stage except Disassemble/Assemble; if the component isn’t already in your toolbox, right-click within the toolbox, choose “Choose Items…”, and select the component from the Pipeline Components tab).  The only parameter you have to set on the pipeline component is ExecutionPolicy, which specifies the BRE Policy that will be called to resolve the maps to execute (you could optionally specify the ApplicationContext parameter if you plan on calling the BRE Policy from multiple pipelines and you want some rules to only apply for certain pipelines).  For the purpose of this example we will use an XML Disassembler component prior to the BREPipelineFrameworkComponent and ensure the StreamsToReadBeforeExecution parameter on the BREPipelineFrameworkComponent is left at its default value of Microsoft.BizTalk.Component.XmlDasmStreamWrapper, so that we will be able to inspect context property values promoted by the XML Disassembler (see this post for more info).

[Screenshot: Pipeline]

Once all the components are deployed to BizTalk we’ll create a receive location that picks up a file and makes use of the aforementioned receive pipeline, as well as a file send port that subscribes to messages from this receive port.  Finally we’ll create the BRE Policy which contains two rules.

The first rule is used to transform messages to the PurchaseOrderXML message format, as below.  The rule consists of a single condition which uses the GetCustomContextProperty vocabulary definition from the BREPipelineFramework.SampleInstructions.ContextInstructions vocabulary to evaluate the value of a custom context property.

[Screenshot: MapToXMLRule]

The second rule is used to transform messages to the PurchaseOrderEDI message format, as below.  The rule consists of a single condition which uses the GetXPathResult vocabulary definition from the BREPipelineInstructions.SampleInstructions.HelperInstructions vocabulary to evaluate the value of a node within the message body with the use of an XPath statement.

[Screenshot: MapToEDIRule]

Both of the aforementioned rules make use of the TransformMessage vocabulary definition from the BREPipelineInstructions.SampleInstructions.HelperInstructions vocabulary in their actions to apply a map against the message.  The input format of the vocabulary definition is as follows – Execute the map {0} in fully qualified assembly {1} against the current message – {2}.  The first parameter in this vocabulary definition is the fully qualified map name (.NET namespace + .NET type), and the second parameter is the fully qualified assembly name including the assembly version and the PublicKeyToken (you can execute gacutil -l with the assembly name from a Visual Studio Command Prompt to get the fully qualified assembly name).  The third parameter specifies what sort of validation is performed against the input message before the map executes, and is an enumeration with the below values.

  • ValidateSourceSchema – This option will validate the current message’s BTS.MessageType context property against the source schema of the specified map.  If they don’t match, an exception is thrown rather than the map being executed.  If a message type is not available then an exception will be thrown.
  • ValidateSourceSchemaIfKnown – This option will validate the current message’s BTS.MessageType context property against the source schema of the specified map.  If they don’t match, an exception is thrown rather than the map being executed.  If a message type is not available then no exception will be thrown and the map will execute.
  • DoNotValidateSourceSchema – This option will not perform any validation of the BTS.MessageType context property on the current message; the map will execute regardless, possibly resulting in a runtime error during execution of your map or an empty output message.  If a message type is not available then no exception will be thrown and the map will execute.  I haven’t experimented with this myself but this might allow you to create generic maps which apply generic XSLT against varying input messages to create a given output message format.  If anyone decides to experiment with this then please do let me know your results.
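
As a concrete example, the action for the “Map To XML Format” rule in this post’s solution ends up reading as below (the map and assembly names match the trace output further down; the validation option shown is an assumption on my part):

Execute the map BREMaps.PurchaseOrder_To_PurchaseOrderXML in fully qualified assembly BREMaps, Version=1.0.0.0, Culture=neutral, PublicKeyToken=21bb7669ee013ee3 against the current message – ValidateSourceSchemaIfKnown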

That’s all that is required to stitch together a solution making use of the BRE Pipeline Framework which involves dynamic transformation.  If you push through a PurchaseOrder message with an OrderType of XML then it will get converted to a PurchaseOrderXML message, if the OrderType is EDI then it will get converted to a PurchaseOrderEDI message, and if the OrderType is anything else then the message will remain a PurchaseOrder as expected.

As previously mentioned, the BRE Pipeline Framework comes with a lot of traceability features (also documented here).  If you set the CAT Instrumentation Framework Controller to capture pipeline component trace output you will get information such as the below, which tells you which map is getting executed, what the source message type is, and what the destination message type is.

[Screenshot: EventTrace]
[3]1FF4.2690::08/16/2014-22:13:01.245 [Event]:TRACEIN: BREPipelineFramework.PipelineComponents.BREPipelineFrameworkComponent.TraceIn() => [102f63bb-2c86-4213-a892-2a5175569469]
[3]1FF4.2690::08/16/2014-22:13:01.245 [Event]:START -> 102f63bb-2c86-4213-a892-2a5175569469
[3]1FF4.2690::08/16/2014-22:13:01.245 [Event]:102f63bb-2c86-4213-a892-2a5175569469 - BRE Pipeline Framework pipeline component has started executing with an application context of , an Instruction Execution Order of RulesExecution and an XML Facts Application Stage of BeforeInstructionExecution.
[3]1FF4.2690::08/16/2014-22:13:01.245 [Event]:102f63bb-2c86-4213-a892-2a5175569469 - BRE Pipeline Framework pipeline component has an optional Execution policy paramater value set to BREMaps.
[3]1FF4.2690::08/16/2014-22:13:01.245 [Event]:102f63bb-2c86-4213-a892-2a5175569469 - BRE Pipeline Framework pipeline component has an optional tracking folder paramater value set to c:\temp.
[3]1FF4.2690::08/16/2014-22:13:01.245 [Event]:102f63bb-2c86-4213-a892-2a5175569469 - Inbound message body had a stream type of Microsoft.BizTalk.Component.XmlDasmStreamWrapper
[3]1FF4.2690::08/16/2014-22:13:01.245 [Event]:102f63bb-2c86-4213-a892-2a5175569469 - Inbound message body stream was not seekable so wrapping it with a ReadOnlySeekableStream
[3]1FF4.2690::08/16/2014-22:13:01.246 [Event]:102f63bb-2c86-4213-a892-2a5175569469 - Reading stream to ensure it's read logic get's executed prior to pipeline component execution
[1]1FF4.2690::08/16/2014-22:13:01.255 [Event]:102f63bb-2c86-4213-a892-2a5175569469 - Adding MetaInstruction BREPipelineFramework.SampleInstructions.MetaInstructions.CachingMetaInstructions to Execution Policy facts.
[1]1FF4.2690::08/16/2014-22:13:01.255 [Event]:102f63bb-2c86-4213-a892-2a5175569469 - Adding MetaInstruction BREPipelineFramework.SampleInstructions.MetaInstructions.ContextMetaInstructions to Execution Policy facts.
[1]1FF4.2690::08/16/2014-22:13:01.255 [Event]:102f63bb-2c86-4213-a892-2a5175569469 - Adding MetaInstruction BREPipelineFramework.SampleInstructions.MetaInstructions.HelperMetaInstructions to Execution Policy facts.
[1]1FF4.2690::08/16/2014-22:13:01.255 [Event]:102f63bb-2c86-4213-a892-2a5175569469 - Adding MetaInstruction BREPipelineFramework.SampleInstructions.MetaInstructions.MessagePartMetaInstructions to Execution Policy facts.
[1]1FF4.2690::08/16/2014-22:13:01.255 [Event]:102f63bb-2c86-4213-a892-2a5175569469 - Adding MetaInstruction BREPipelineFramework.SampleInstructions.MetaInstructions.XMLTranslatorMetaInstructions to Execution Policy facts.
[1]1FF4.2690::08/16/2014-22:13:01.265 [Event]:102f63bb-2c86-4213-a892-2a5175569469 - Executing Policy BREMaps 1.0
[0]1FF4.2690::08/16/2014-22:13:01.277 [Event]:102f63bb-2c86-4213-a892-2a5175569469 - Adding Instruction BREPipelineFramework.SampleInstructions.Instructions.TransformationInstruction to the Instruction collection with a key of 0.
[0]1FF4.2690::08/16/2014-22:13:01.277 [Event]:102f63bb-2c86-4213-a892-2a5175569469 - Starting to execute all MetaInstructions.
[0]1FF4.2690::08/16/2014-22:13:01.277 [Event]:102f63bb-2c86-4213-a892-2a5175569469 - Executing instruction BREPipelineFramework.SampleInstructions.Instructions.TransformationInstruction.
[0]1FF4.2690::08/16/2014-22:13:01.277 [Event]:102f63bb-2c86-4213-a892-2a5175569469 - Applying transformation BREMaps.PurchaseOrder_To_PurchaseOrderXML,   BREMaps, Version=1.0.0.0, Culture=neutral, PublicKeyToken=21bb7669ee013ee3 to the message
[0]1FF4.2690::08/16/2014-22:13:01.277 [Event]:102f63bb-2c86-4213-a892-2a5175569469 - Message is being transformed from message type http://BREMaps.PurchaseOrder#PurchaseOrder to message type http://BREMaps.PurchaseOrderXML#PurchaseOrderXML
[0]1FF4.2690::08/16/2014-22:13:01.278 [Event]:END <- 102f63bb-2c86-4213-a892-2a5175569469: 32ms
[0]1FF4.2690::08/16/2014-22:13:01.278 [Event]:TRACEOUT: BREPipelineFramework.PipelineComponents.BREPipelineFrameworkComponent.Execute(...) = "102f63bb-2c86-4213-a892-2a5175569469"

If you set the TrackingFolder parameter on the BREPipelineFrameworkComponent pipeline component to a valid folder then you will get output like the below (note this is just an excerpt), which provides valuable information telling you which BRE rules fired and why.

CONDITION EVALUATION TEST (MATCH) 16/08/2014 10:13:01 p.m.
Rule Engine Instance Identifier: f2c966cf-b248-4e94-a96a-99d110d59a9b
Ruleset Name: BREMaps
Test Expression: BREPipelineFramework.SampleInstructions.MetaInstructions.ContextMetaInstructions.GetContextProperty == XML
Left Operand Value: XML
Right Operand Value: XML
Test Result: True

CONDITION EVALUATION TEST (MATCH) 16/08/2014 10:13:01 p.m.
Rule Engine Instance Identifier: f2c966cf-b248-4e94-a96a-99d110d59a9b
Ruleset Name: BREMaps
Test Expression: BREPipelineFramework.SampleInstructions.MetaInstructions.HelperMetaInstructions.GetXPathResult == EDI
Left Operand Value: XML
Right Operand Value: EDI
Test Result: False

RULE FIRED 16/08/2014 10:13:01 p.m.
Rule Engine Instance Identifier: f2c966cf-b248-4e94-a96a-99d110d59a9b
Ruleset Name: BREMaps
Rule Name: Map To XML Format
Conflict Resolution Criteria: 0

One more thing worth mentioning is that once the BRE Pipeline Framework executes a map, it will promote the output message type to the BTS.MessageType context property just as a map on a port would.  This means that you can reliably create routing filters based on the BTS.MessageType context property if you make use of the dynamic transformation feature in the BRE Pipeline Framework.
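
For example, given the transformation in this walkthrough, a send port filter like the below would reliably subscribe to the transformed messages (message type taken from the trace output above):

BTS.MessageType == http://BREMaps.PurchaseOrderXML#PurchaseOrderXML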

The aforementioned solution is available for download here.  I’ve included the Visual Studio solution with the source code, an export of the BRE Policy, and an MSI installer which will create the example application for you (you might need to reconfigure the folders on the file receive location and send port based on where you unzip the solution, and you might have to grant full control permissions on these folders to your host instance user).  I’ve also included some example XML messages for your convenience.

Happy transforming.

I have just released v1.5 of the BRE Pipeline Framework to the CodePlex project page.  This is a feature-rich and heavily optimized version of the framework, with much richer traceability as well.  A breakdown of new features and improvements by category is below.

  • Qualitative features
    • Now supports BizTalk Server 2010, 2013, and 2013 R2 (note: for 2013 and 2013 R2, if you want to make use of SSO features you will need to add an assembly binding redirect for Microsoft.BizTalk.Interop.SSOClient from v5.0.1.0 to v7.0.2300.0 or v9.0.1000.0 respectively in your BizTalk config files; see the config sketch after this list).
    • This version of the framework has been heavily unit tested with 204 unit tests at the time of v1.5 being released which provide 95.16% code coverage.  The framework is now more reliable than ever.
    • The order of instructions specified in the BRE is now respected.  Previously they were only respected within a given MetaInstruction/Vocabulary, but now they will also be respected across MetaInstructions/Vocabularies.
    • Pipeline component supports streaming behaviour and doesn’t read the stream unless it absolutely has to.  Even when it has to (for example the regex find/replace instruction in the BREPipelineInstructions.SampleInstructions.HelperInstructions vocabulary) the framework has now been optimized to read in smaller chunks of the stream at a time rather than the entire stream in order to minimize memory usage.
    • Pipeline component now provides ETW tracing which can be captured using the CAT Instrumentation Framework Controller.  This provides a lot of detail as to what the pipeline component is actually doing, and can give you access to stack traces in case you want to deep dive into error details.  Combine this with rules tracing information that was catered for in v1.4 and you have all the instrumentation you need.
    • Exception handling in the framework has been vastly improved.  If an uncaught exception occurs during BRE policy execution you will no longer get the vague “An error has occurred while executing a BRE Policy” type error but now will get the actual exception details.
    • You can specify a version of an InstructionLoaderPolicy or ExecutionPolicy that will fire if you want to be explicit (if not then the highest deployed version will fire) and you can also override these versions in the InstructionLoaderPolicy.
    • You can now choose when you want the XML based facts to apply; before instructions execute (new default behaviour), after they execute (previous default behaviour), or at a specific point during instruction execution.  This means you can now string XML based facts together with other instructions giving you more power to inspect and manipulate messages.
    • In previous versions of the framework, each of the individual vocabulary versions was exported to an XML file saved in “C:\Program Files (x86)\BRE Pipeline Framework\Vocabularies” when you ran the installer.  Due to the number of vocabulary versions, it was starting to get very painful importing all of these into the rules engine database.  To make life easier I have created two new export files: the first, called BREPipelineFramework.AllVocabs.xml, contains all the vocabulary versions in one file so you only have to import it once; the second, called BREPipelineFramework.LatestVocabs.xml, only contains the latest version of each vocabulary, in case you haven’t used previous versions of the framework and thus don’t need to install old vocabulary versions for backwards compatibility.
  • Major new functionality
    • Ability to transform messages using the TransformMessage vocabulary definition in the BREPipelineInstructions.SampleInstructions.HelperInstructions vocabulary.  You can even chain multiple maps one after the other in your pipeline, and in the case of a receive pipeline you can still execute an inbound map configured on the port after the pipeline has completed execution.
    • The BREPipelineInstructions.SampleInstructions.HelperInstructions vocabulary now contains a TraceInfo definition that allows you to write out ETW trace statements that can be captured using the CAT Instrumentation Framework Controller.
    • A new vocabulary called BREPipelineFramework.SampleInstructions.CachingInstructions that allows you to cache custom strings, or context properties associated with a message, and to either fetch them back later or, in the case of context properties, reapply them against the message.  The cache supports expiry times against each cached item so you don’t have to worry about using up all your memory, and you can also remove items from the cache after you have consumed them.  More on this in future blog posts; this makes for some fantastic messaging-only use cases which might previously have required orchestration.
    • The BREPipelineFramework.SampleInstructions.ContextInstructions vocabulary now contains definitions that allow you to get or set ESB context properties by enumeration (all other out of the box context properties were already catered for), and also allow you to promote the BTS.MessageType context property against the message if it doesn’t already exist.
    • A new vocabulary called BREPipelineFramework.SampleInstructions.PartInstructions that allows you to get or set message part details such as part names by index, content type, character sets, message part context properties such as MIME.FileName etc…
    • A new vocabulary called BREPipelineFramework.SampleInstructions.XMLInstructions that allows you to manipulate your XML based fact messages in ways that the BRE doesn’t support out of the box.  These functions include adding elements and optionally populating values within them, adding attributes with optional values etc…
    • A new vocabulary called BREPipelineFramework.SampleInstructions.XMLTranslatorInstructions that allows you to manipulate your XML message in a pure streaming fashion (i.e. the changes will not be applied to the message until the stream is actually read for the first time, which will usually be when the message reaches the BizTalk MessageBox), which is very efficient and easy on memory.  Manipulation functions include adding, removing, or replacing namespaces and prefixes for elements and attributes, updating element or attribute names, and updating element or attribute values.  All of these manipulations can be conditional; for example you can create an instruction that will only update the value of an element if the name and namespace of the element match certain criteria and if the old value in the element matches certain criteria.
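
Regarding the SSO note in the first bullet above, a minimal sketch of the binding redirect as it might appear in the BizTalk service config files (e.g. BTSNTSvc.exe.config/BTSNTSvc64.exe.config) is below; the public key token shown is the usual BizTalk one, so verify it against your own installation:

<dependentAssembly>
  <assemblyIdentity name="Microsoft.BizTalk.Interop.SSOClient" publicKeyToken="31bf3856ad364e35" culture="neutral" />
  <bindingRedirect oldVersion="5.0.1.0" newVersion="7.0.2300.0" />
</dependentAssembly>

(Use newVersion="9.0.1000.0" for BizTalk Server 2013 R2.)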

This only scratches the surface of the improvements made to the framework; look out for more blog posts (they have been rare in the last few months, which I have dedicated to updating the BRE Pipeline Framework, but should become more regular now) which explore the new features in further depth.

Just two weeks ago I came across a requirement whereby I had to deploy a BizTalk application to an integrated test environment, with the application making use of a WCF-SQL receive location with ambient transactions configured.  The ambient transaction option ensures that the BizTalk adapter flows a transaction through to SQL Server, and thus the SQL transaction will only commit when the message received by BizTalk is successfully written to the BizTalk MessageBox database.  This is of course crucial in a guaranteed-delivery solution where you can’t afford to lose any messages.
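
For reference, this behaviour is governed by the UseAmbientTransaction binding property on the WCF-SQL adapter; in a binding configuration it surfaces roughly as below (a fragment, not a complete binding):

<sqlBinding>
  <binding name="SqlAdapterBinding" useAmbientTransaction="true" />
</sqlBinding>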

[Screenshot: Adapter]

The agreement made with the database developers was that Windows Integrated security would be used, so the BizTalk host instance account would need permissions over the relevant SQL resources.  What I didn’t realize until we performed the deployment was that in the test environment the SQL database server was not on the domain but was instead configured to be in a workgroup.  The problem of Windows Integrated security being unavailable was easy to overcome, as we decided that in the test environment we would use SQL logins instead; however, I did not want to compromise on transactional behavior, as doing so would mean we would observe different behaviors in the test environment and other environments.  I also found this blog post (no disrespect intended to the writer, who took the time to share his learnings) that suggested this was not possible, however I was not convinced.

I found that all worked well when ambient transactions were turned off; however, when they were turned on, the receive location appeared to just hang, holding a lock on SQL resources (I tried to do a select on the table in question using SQL Server Management Studio and it couldn’t return any values due to the locks in place) which wouldn’t be released until the host instance was restarted.  This was after ensuring that MSDTC was set up on both servers as per the below configuration (MSDTC settings can be found by running dcomcnfg, expanding Component Services\Computers\My Computer\Distributed Transaction Coordinator\Local DTC, right-clicking on it, choosing Properties, and then browsing to the Security tab).

[Screenshot: DTC Properties]

The very first thing I did was ensure that any firewalls (both OS-level and hardware-level) were configured to allow DTC connections between the two servers.  This wasn’t a problem in my case as both servers had their Windows firewalls turned off, and seeing as they were on the same network there were no hardware firewalls between them.  If firewall configuration is applicable to you, take a look at this article.
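
If you do need to open the Windows firewall, enabling the predefined DTC rule group on both machines is usually sufficient; something like the below from an elevated command prompt (verify the group name on your OS version):

netsh advfirewall firewall set rule group="Distributed Transaction Coordinator" new enable=Yes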

The next thing I did was download the DTCPing tool, which you will need to extract onto both the BizTalk server and the SQL server, and have running on both.  You then type the SQL server’s NetBIOS name into the tool and run it, repeating the same test from the SQL server to the BizTalk server.  There are a variety of error messages that the tool might return (see this article for more information on troubleshooting MSDTC using DTCPing); in my scenario there was no problem when running the test from my BizTalk server to the SQL server, however when I ran the test from the SQL server to the BizTalk server I got an error saying “gethostbyname failure – Can not resolve xxx Invalid remote host”, where xxx was the NetBIOS name of my BizTalk server.  The reason for this is that the BizTalk server’s NetBIOS name was not known to the SQL server (the SQL server’s NetBIOS name was known to my BizTalk server in my case, but if it wasn’t then the following fix would be needed on both servers), so I had to add an entry relating my BizTalk server’s NetBIOS name to its IP address in the HOSTS file on the SQL server (the HOSTS file can typically be found in “C:\Windows\System32\Drivers\etc”).  Once this was done I was able to successfully connect in both directions using DTCPing.  A quick and easy way to find out whether a NetBIOS name is known to a machine is to open a command prompt and type “ping {NetBIOS name}”.  If that doesn’t work but “ping {IP address}” does, then chances are this step is necessary for you.
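
The HOSTS entry itself is just a name-to-address mapping, for example (hypothetical address and server name):

10.1.2.34    BIZTALKSVR01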

[Screenshot: dtcping]

Even though DTCPing was now returning a successful result, I found that my BizTalk receive location was still exhibiting the same behavior with ambient transactions turned on.  My next troubleshooting step was to download the DTCTester tool.  DTCTester is a command-line utility which, unlike DTCPing, only needs to be run on the server that is enlisting the transaction (the BizTalk server in my case); however, it does require you to set up an ODBC data source corresponding to your target, in my case the SQL Server database (see this article for more information on creating a SQL Server-targeted ODBC data source).  You can execute the DTCTester tool by opening a command prompt window, browsing to the directory where you extracted the executable, and typing in “dtctester.exe {ODBC data source name} {username used to connect to data source} {password used to connect to data source}”.  In my case I got a very generic error message, as below, which didn’t really help me solve my problem but did prove to me (not that I needed much convincing) that the problem remained in the DTC layer.
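
A filled-in invocation might look like the below (data source name and credentials are hypothetical):

dtctester.exe BizTalkTestDSN testuser P@ssw0rd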

[Screenshot: MSDTCTesterError]

I will admit that the very last change I had to make to fix the problem was arrived at through a process of trial and error.  I had to drop the minimum authentication level required for DTC from the previous value of “Incoming Caller Authentication Required”, as was set on both servers, to “No Authentication Required”, as below.

[Screenshot: NoAuthDTC]

I did discuss this removal of authentication requirements with the security architect at my client site before making the change, to ensure it wouldn’t compromise their security.  After reviewing a few articles, such as this one describing the potential risks of unsecured DTC and this one describing DTC security, we decided that, seeing as the externally facing firewall blocked any external DTC connection attempts to both the BizTalk server and the SQL server, this setting was acceptable in the test environment only.  It would have been more desirable to bring the SQL server onto the domain, however project timelines did not allow for that level of disruption, so this alternative was discounted even though it was preferable from a security and architecture perspective.  It was also deemed that the differing MSDTC authentication levels between the test environment and other environments were unlikely to introduce any functional differences to the BizTalk applications deployed there.

The changes I made might or might not be enough to fix your specific problem, but at the very least I hope this blog post shows what tools are available to troubleshoot such problems, and what thought process can be employed to get MSDTC working for the WCF-SQL adapter (and in general) when one or both of the machines in question are not on a domain.