
A couple of months ago I released the BRE Pipeline Framework v1.5 (since superseded by v1.5.1) to CodePlex.  One of the new features in this version of the framework is support for dynamic transformation.  In this blog post I’ll explain some scenarios in which this feature might be useful to you and show you how you can use the BRE Pipeline Framework to execute your maps dynamically.

 

Why you’d want to use the BRE Pipeline Framework for Dynamic Transformation

The first reason to take advantage of the BRE Pipeline Framework for dynamic transformation is to build messaging-only applications that support hot deployments for maps.  From a BizTalk perspective, a hot deployment is one in which the application doesn't need to be stopped while you redeploy your map assemblies; the only requirement is that the relevant host instances are restarted once the deployment is complete.  In a messaging application with maps on your receive/send ports, BizTalk will complain if you try to import an MSI containing maps that are used on those ports while the application is running; you might even need to delete the ports altogether, or at the very least remove the maps from them, before your redeployment takes effect.  A hot deployment requires no more than a momentary outage while host instances restart, which fits well with applications that are not tolerant to outages.

The second reason to take advantage of the BRE Pipeline Framework for dynamic transformation is transformation selectivity.  When you apply inbound/outbound maps on your receive/send ports, BizTalk chooses the first map whose source schema matches the message type of the message in question.  If you apply two maps with the same source schema to a port, there is no way to specify additional conditions that determine which map executes; the second map will always be ignored.  With the BRE Pipeline Framework you can apply complex conditions that combine checking message body content through XPath statements or regexes, checking values in the message context, checking values from SSO configuration stores or EDI trading partner agreements, or even evaluating the current time of day.  The best part is that if the selectivity functionality already exists in the BRE Pipeline Framework (which it does in all the mentioned scenarios and many more) you can achieve all of this with zero lines of code, and if it doesn't, the framework provides extensibility points to cater for it.  Another great advantage of using the BRE to choose which maps get executed is that you can change the selectivity rules at runtime without any code changes, which offers a lot of flexibility for applications that are not tolerant to outages.

The third reason to take advantage of the BRE Pipeline Framework for dynamic transformation is so that you can chain maps.  As mentioned above, on a BizTalk receive/send port only the first matching map will execute.  It is not possible to execute multiple maps sequentially within a port in a single direction (you can of course specify one inbound and one outbound map, thus two in total, if your port is two-way).  In a messaging-only solution in which you receive a message and send it out on a send port, the maximum number of maps you can execute is two: one on the receive port in the inbound direction, and one on the send port in the outbound direction.  For the most part this is adequate, but there might be scenarios where it isn't.  With the BRE Pipeline Framework you can specify as many maps as you like to execute sequentially, within a single rule or across multiple rules (make sure you set priorities across the different rules to guarantee the required map execution order), or you can execute a map in your BRE Policy and then apply one on your port as well (keep in mind that for an inbound message the pipeline executes before the port maps, and the reverse is true for outbound messages).  Beyond chaining maps, you can also chain other Instructions contained within the BRE Pipeline Framework together with your maps; you could, for example, execute a map dynamically and then perform a string find/replace against the message body.

Another possible benefit of dynamic transformation is avoiding inter-application references, which are possibly one of the most painful aspects of BizTalk Server.  When you want to pass a message from one application to another you often have no choice but to have one application reference the other (the one being referenced containing the schema).  Inter-application references make deployments a lot more difficult, since you can't update the referenced application without deleting all the referencing applications first (forget about hot deployments altogether; this is the other extreme).  To get around this problem you could potentially have separate schemas in each application, create a map in an assembly that only gets deployed to the GAC rather than to the BizTalk Management Database, and then use the BRE Pipeline Framework to transform the message in a pipeline (in either application; it shouldn't matter).  This allows you to create more decoupled applications with much easier deployment models.

The aforementioned benefits can also be achieved through the ESB Toolkit.  The main difference between the BRE Pipeline Framework and the ESB Toolkit is that the former is intended to provide utility, whereas the latter is more about implementing the routing slip pattern with added utility as well.  The ESB Toolkit comes with a fair learning curve, as there are a whole lot of new concepts to wrap your head around (itineraries, resolvers, messaging extenders, orchestration extenders, etc.), and you'll find that the out-of-the-box utility can be quite limited (there are of course community projects that have improved on these shortcomings).  I definitely see valid scenarios in which either framework should be used, and wouldn't consider them to be competing frameworks; they could even be used in tandem.

One final reason that I can think of off the top of my head is traceability.  Given that the BRE Pipeline Framework caters for tracing based on the CAT Instrumentation Framework and also provides rules execution logs (see this post for more info), you can always tell why a map was chosen for a given message.  This can be especially handy when you are debugging a BizTalk application.

Just like with the ESB Toolkit, executing a map dynamically within your pipeline goes against one of the best-practice principles of building streaming pipeline components, so please carefully evaluate the pros and cons of using this feature before implementing it.

 

Implementing Dynamic Transformation with the BRE Pipeline Framework

To illustrate how to use the BRE Pipeline Framework to execute maps dynamically, I will walk you through a simple example solution.  The solution contains three schemas: PurchaseOrder, PurchaseOrderEDI, and PurchaseOrderXML.  All the schemas contain an element (with varying names) that holds the purchase order number.  The PurchaseOrder schema also contains an additional node called OrderType, which is linked to a promoted property of the same name.  The rule we want to put in place for PurchaseOrder messages is that if the value of the OrderType context property is XML, we execute a map that converts the message to a PurchaseOrderXML.  If the result of an XPath query against the OrderType node is EDI, we execute a map that converts the message to a PurchaseOrderEDI.
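
By way of illustration, a PurchaseOrder instance might look something like the below (the element holding the order number is assumed for illustration; the target namespace matches the trace output shown later in this post):

<ns0:PurchaseOrder xmlns:ns0="http://BREMaps.PurchaseOrder">
  <PurchaseOrderNumber>PO-1001</PurchaseOrderNumber>
  <OrderType>XML</OrderType>
</ns0:PurchaseOrder>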

On to the implementation.  You will need to download and install at least v1.5.1 of the framework from the CodePlex site and import the required vocabularies from the program files folder.  Once done, create a receive pipeline (receive and send pipelines are both supported) and drag the BREPipelineFrameworkComponent from the toolbox to the Validate stage (you can choose any stage except Disassemble/Assemble).  If the component isn't already in your toolbox, add it by right-clicking within the toolbox, choosing "Choose Items", and selecting the component from the Pipeline Components tab.  The only parameter you have to set on the pipeline component is ExecutionPolicy, which specifies the BRE Policy that will be called to resolve the maps to execute (you could optionally set the ApplicationContext parameter if you plan on calling the BRE Policy from multiple pipelines and want some rules to apply only to certain pipelines).  For the purpose of this example we will use an XML Disassembler component prior to the BREPipelineFrameworkComponent and leave the StreamsToReadBeforeExecution parameter on the BREPipelineFrameworkComponent at its default value of Microsoft.BizTalk.Component.XmlDasmStreamWrapper, so that we can inspect context property values promoted by the XML Disassembler (see this post for more info).

[Screenshot: the receive pipeline with the BREPipelineFrameworkComponent in the Validate stage]

Once all the components are deployed to BizTalk we’ll create a receive location that picks up a file and makes use of the aforementioned receive pipeline, as well as a file send port that subscribes to messages from this receive port.  Finally we’ll create the BRE Policy which contains two rules.

The first rule is used to transform messages to the PurchaseOrderXML message format, as below.  The rule is made up of a single condition, which uses the GetCustomContextProperty vocabulary definition from the BREPipelineFramework.SampleInstructions.ContextInstructions vocabulary to evaluate the value of a custom context property.

[Screenshot: the Map To XML Format rule]

The second rule is used to transform messages to the PurchaseOrderEDI message format, as below.  The rule is made up of a single condition, which uses the GetXPathResult vocabulary definition from the BREPipelineFramework.SampleInstructions.HelperInstructions vocabulary to evaluate the value of a node within the message body with the use of an XPath statement.

[Screenshot: the rule that maps to the EDI format]

Both of the aforementioned rules make use of the TransformMessage vocabulary definition from the BREPipelineFramework.SampleInstructions.HelperInstructions vocabulary in their actions to apply a map against the message.  The input format of the vocabulary definition is as follows – Execute the map {0} in fully qualified assembly {1} against the current message – {2}.  The first parameter in this vocabulary definition is the fully qualified map name (.NET namespace + .NET type), and the second parameter is the fully qualified assembly name, including the assembly version and the PublicKeyToken (you can execute gacutil -l with the assembly name from a Visual Studio Command Prompt to get the fully qualified assembly name; see the example after the list below).  The third parameter specifies what sort of validation is performed against the input message before executing the map, and is an enumeration with the below values.

  • ValidateSourceSchema – This option validates the current message's BTS.MessageType context property against the source schema of the specified map.  If they don't match, an exception is thrown rather than the map being executed.  If a message type is not available then an exception will be thrown.
  • ValidateSourceSchemaIfKnown – This option validates the current message's BTS.MessageType context property against the source schema of the specified map.  If they don't match, an exception is thrown rather than the map being executed.  If a message type is not available then no exception is thrown and the map executes.
  • DoNotValidateSourceSchema – This option performs no validation of the BTS.MessageType context property on the current message; the map is executed regardless, possibly resulting in a runtime error during execution of your map or an empty output message.  If a message type is not available then no exception is thrown and the map executes.  I haven't experimented with this myself, but it might allow you to create generic maps which apply generic XSLT against varying input messages to create a given output message format.  If anyone decides to experiment with this then please do let me know your results.

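As an example of retrieving the second parameter, running gacutil -l against the sample map assembly from this post produces output along these lines (output shape approximate; the token matches the trace output further below):

gacutil -l BREMaps

The Global Assembly Cache contains the following assemblies:
  BREMaps, Version=1.0.0.0, Culture=neutral, PublicKeyToken=21bb7669ee013ee3
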
That's all that is required to stitch together a solution making use of the BRE Pipeline Framework's dynamic transformation.  If you push through a PurchaseOrder message with an OrderType of XML it will be converted to a PurchaseOrderXML message; if the OrderType is EDI it will be converted to a PurchaseOrderEDI message; and if the OrderType is anything else the message will remain a PurchaseOrder, as expected.

As previously mentioned, the BRE Pipeline Framework comes with a lot of traceability features (also documented here).  If you set the CAT Instrumentation Framework Controller to capture pipeline component trace output you will get information such as the below, which tells you which map is getting executed, what the source message type is, and what the destination message type is.

[3]1FF4.2690::08/16/2014-22:13:01.245 [Event]:TRACEIN: BREPipelineFramework.PipelineComponents.BREPipelineFrameworkComponent.TraceIn() => [102f63bb-2c86-4213-a892-2a5175569469]
[3]1FF4.2690::08/16/2014-22:13:01.245 [Event]:START -> 102f63bb-2c86-4213-a892-2a5175569469
[3]1FF4.2690::08/16/2014-22:13:01.245 [Event]:102f63bb-2c86-4213-a892-2a5175569469 - BRE Pipeline Framework pipeline component has started executing with an application context of , an Instruction Execution Order of RulesExecution and an XML Facts Application Stage of BeforeInstructionExecution.
[3]1FF4.2690::08/16/2014-22:13:01.245 [Event]:102f63bb-2c86-4213-a892-2a5175569469 - BRE Pipeline Framework pipeline component has an optional Execution policy paramater value set to BREMaps.
[3]1FF4.2690::08/16/2014-22:13:01.245 [Event]:102f63bb-2c86-4213-a892-2a5175569469 - BRE Pipeline Framework pipeline component has an optional tracking folder paramater value set to c:\temp.
[3]1FF4.2690::08/16/2014-22:13:01.245 [Event]:102f63bb-2c86-4213-a892-2a5175569469 - Inbound message body had a stream type of Microsoft.BizTalk.Component.XmlDasmStreamWrapper
[3]1FF4.2690::08/16/2014-22:13:01.245 [Event]:102f63bb-2c86-4213-a892-2a5175569469 - Inbound message body stream was not seekable so wrapping it with a ReadOnlySeekableStream
[3]1FF4.2690::08/16/2014-22:13:01.246 [Event]:102f63bb-2c86-4213-a892-2a5175569469 - Reading stream to ensure it's read logic get's executed prior to pipeline component execution
[1]1FF4.2690::08/16/2014-22:13:01.255 [Event]:102f63bb-2c86-4213-a892-2a5175569469 - Adding MetaInstruction BREPipelineFramework.SampleInstructions.MetaInstructions.CachingMetaInstructions to Execution Policy facts.
[1]1FF4.2690::08/16/2014-22:13:01.255 [Event]:102f63bb-2c86-4213-a892-2a5175569469 - Adding MetaInstruction BREPipelineFramework.SampleInstructions.MetaInstructions.ContextMetaInstructions to Execution Policy facts.
[1]1FF4.2690::08/16/2014-22:13:01.255 [Event]:102f63bb-2c86-4213-a892-2a5175569469 - Adding MetaInstruction BREPipelineFramework.SampleInstructions.MetaInstructions.HelperMetaInstructions to Execution Policy facts.
[1]1FF4.2690::08/16/2014-22:13:01.255 [Event]:102f63bb-2c86-4213-a892-2a5175569469 - Adding MetaInstruction BREPipelineFramework.SampleInstructions.MetaInstructions.MessagePartMetaInstructions to Execution Policy facts.
[1]1FF4.2690::08/16/2014-22:13:01.255 [Event]:102f63bb-2c86-4213-a892-2a5175569469 - Adding MetaInstruction BREPipelineFramework.SampleInstructions.MetaInstructions.XMLTranslatorMetaInstructions to Execution Policy facts.
[1]1FF4.2690::08/16/2014-22:13:01.265 [Event]:102f63bb-2c86-4213-a892-2a5175569469 - Executing Policy BREMaps 1.0
[0]1FF4.2690::08/16/2014-22:13:01.277 [Event]:102f63bb-2c86-4213-a892-2a5175569469 - Adding Instruction BREPipelineFramework.SampleInstructions.Instructions.TransformationInstruction to the Instruction collection with a key of 0.
[0]1FF4.2690::08/16/2014-22:13:01.277 [Event]:102f63bb-2c86-4213-a892-2a5175569469 - Starting to execute all MetaInstructions.
[0]1FF4.2690::08/16/2014-22:13:01.277 [Event]:102f63bb-2c86-4213-a892-2a5175569469 - Executing instruction BREPipelineFramework.SampleInstructions.Instructions.TransformationInstruction.
[0]1FF4.2690::08/16/2014-22:13:01.277 [Event]:102f63bb-2c86-4213-a892-2a5175569469 - Applying transformation BREMaps.PurchaseOrder_To_PurchaseOrderXML,   BREMaps, Version=1.0.0.0, Culture=neutral, PublicKeyToken=21bb7669ee013ee3 to the message
[0]1FF4.2690::08/16/2014-22:13:01.277 [Event]:102f63bb-2c86-4213-a892-2a5175569469 - Message is being transformed from message type http://BREMaps.PurchaseOrder#PurchaseOrder to message type http://BREMaps.PurchaseOrderXML#PurchaseOrderXML
[0]1FF4.2690::08/16/2014-22:13:01.278 [Event]:END <- 102f63bb-2c86-4213-a892-2a5175569469: 32ms
[0]1FF4.2690::08/16/2014-22:13:01.278 [Event]:TRACEOUT: BREPipelineFramework.PipelineComponents.BREPipelineFrameworkComponent.Execute(...) = "102f63bb-2c86-4213-a892-2a5175569469"

If you set the TrackingFolder parameter on the BREPipelineFrameworkComponent pipeline component to a valid folder then you will get output like the below (note this is just an excerpt), which provides valuable information telling you which BRE rules fired and why.

CONDITION EVALUATION TEST (MATCH) 16/08/2014 10:13:01 p.m.
Rule Engine Instance Identifier: f2c966cf-b248-4e94-a96a-99d110d59a9b
Ruleset Name: BREMaps
Test Expression: BREPipelineFramework.SampleInstructions.MetaInstructions.ContextMetaInstructions.GetContextProperty == XML
Left Operand Value: XML
Right Operand Value: XML
Test Result: True

CONDITION EVALUATION TEST (MATCH) 16/08/2014 10:13:01 p.m.
Rule Engine Instance Identifier: f2c966cf-b248-4e94-a96a-99d110d59a9b
Ruleset Name: BREMaps
Test Expression: BREPipelineFramework.SampleInstructions.MetaInstructions.HelperMetaInstructions.GetXPathResult == EDI
Left Operand Value: XML
Right Operand Value: EDI
Test Result: False

RULE FIRED 16/08/2014 10:13:01 p.m.
Rule Engine Instance Identifier: f2c966cf-b248-4e94-a96a-99d110d59a9b
Ruleset Name: BREMaps
Rule Name: Map To XML Format
Conflict Resolution Criteria: 0

One more thing worth mentioning: once the BRE Pipeline Framework executes a map, it promotes the output message type to the BTS.MessageType context property, just as a map on a port would.  This means that you can reliably create routing filters based on the BTS.MessageType context property when you make use of the dynamic transformation feature in the BRE Pipeline Framework.
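
For example, a send port meant to receive only the transformed XML purchase orders could use a subscription filter like the below (the message type is taken from the trace output above):

BTS.MessageType == http://BREMaps.PurchaseOrderXML#PurchaseOrderXML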

The aforementioned solution is available for download here.  I've included the Visual Studio solution with the source code, an export of the BRE Policy, and an MSI installer which will create the example application for you (you might need to reconfigure the folders on the file receive location and send port based on where you unzip the solution, and you might have to grant full control permissions on these folders to your host instance user).  I've also included some example XML messages for your convenience.

Happy transforming.

This article has been jointly written by Connected Pawns' Mark Brimble and Adventures inside the Message Box's Johann Cooper, and is a response to MuleSoft's recent blog article "10 reasons to walk from BizTalk", analysing whether the article has any merit.

A quick disclaimer first. Our trade is primarily, but not limited to, BizTalk. Our current focus is on Microsoft products; however, as it should be, our loyalty is to our customers (whoever they might be at the time), and we consider ourselves open to any and all technologies. We consider MuleSoft products, and other integration offerings such as Boomi, to be opportunities rather than threats to our careers. We haven't used MuleSoft products yet, but we do follow them out of interest, as they are clearly an industry leader in the integration space, which is quite an achievement considering the strengths of the more established incumbents.

Recently one of our customers chose MuleSoft because it is open source and that better suits the culture of the organisation. This was a "greenfields" on-premises project and we helped them evaluate some of the products available. In our opinion it was not possible to make a choice solely on technological grounds; the decision had to be made on organisational factors, e.g. buy versus build, open source versus vendor-specific, or which support model best suits.

On to the article… Our belief is that the target audience of the article is CIOs and CFOs rather than integration specialists, and it is likely that it was pushed through by marketing teams rather than technical teams, unfortunately with a degree of smoke and mirrors and misinformation. An integration specialist, especially one trained up in BizTalk Server, will see through most of the points raised very quickly.

First of all, the article doesn't make it clear whether it is comparing Anypoint to BizTalk Server or BizTalk Services. There is no question that BizTalk Services is not yet a fully matured product; it is still in its infancy, but there is also no question that Microsoft is committed to it. BizTalk Server, on the other hand, is a robust and mature product. BizTalk Services doesn't seek to replace BizTalk Server, but rather complements it; you could swap it out quite easily for any other cloud integration offering that could work alongside BizTalk Server. If a competitor wants to make point-by-point comparisons then it should do so against a specific product rather than leave things vague.

Points 1, 3, 4 and 7 in MuleSoft's post are heavy on marketing speak and don't offer any real substance in terms of comparison. All mature ESB-type products are very feature-heavy and it is not worth comparing feature for feature (unless there is a huge, obvious gap).  What is important is to see which product fits your organisation better. If one product came out with a killer feature you can rest assured that the same or a similar feature would quickly appear in all the competitors' products.

The way we see it, the most difficult part of integration projects is getting the requirements correct and sorting out dependencies, rather than problems with tooling. These problems will be faced regardless of which engine you use, so this isn't a huge differentiating factor. Chances are the cost to deliver a project on an already established platform will be equivalent, all other things being equal (minus the whole amortisation aspect, which doesn't apply to BizTalk Server but will of course apply to BizTalk Services).

Regarding the time required to provision an integration environment, there is no question that setting up BizTalk Server is a project in itself; however this isn't the case with BizTalk Services, which adopts a more lightweight approach. Anypoint definitely holds the upper hand in this respect compared to BizTalk Server, which is a traditional heavyweight product, but Microsoft doesn't intend for BizTalk Server to fit in the lightweight category, which is instead taken care of by BizTalk Services. Whether you want a heavyweight or lightweight offering should be a consideration when choosing a new integration engine, but it doesn't really come into play if you already have an established BizTalk Server environment.

With point 2 there are some major differentiating factors when it comes to cloud readiness which the article doesn't explain in depth, but we'll try to provide our understanding here. MuleSoft definitely offers more connectivity options from the cloud than BizTalk Services currently does, though this gap should lessen with time. BizTalk Server doesn't yet support high availability when running in IaaS mode in the cloud (the weakest link being a lack of support for SQL clustering), whereas our understanding is that this isn't a problem with MuleSoft products. There is no doubt that Microsoft has some catching up to do, and this is reflected in Gartner/Forrester survey results. This is something MuleSoft should be capitalising on to capture the attention of those adopting a cloud strategy in a hurry.

Point 5 will appeal to open source people, but who really cares if you can see the source code for the product? Surely you aren't going to change the source code yourself and redeploy a customised version of the product. Will Microsoft really chase us down if we use Reflector to check out what's under BizTalk Server's hood? If that were the case we'd surely have an entire army of lawyers knocking on our doors :)

Point 6 is a good one. It isn't a reason to walk away from BizTalk, but it is a reason to choose Anypoint over BizTalk Server/Services if you are choosing an integration engine and have a Java preference. We don't think multi-language support is really that appealing for the enterprise, since most companies are going to choose one platform and stick with it. It does hold a certain appeal for ISVs, however, who might find that their target market for selling integration solutions is wider with a language-agnostic solution.

Point 8, regarding costs, is once again an unfair comparison. Is the comparison with BizTalk Server or Services? BizTalk Server definitely fits into the traditional heavyweight integration engine category, unlike Anypoint; a comparison with BizTalk Services is more apt. That said, we can't perform a fair comparison here because MuleSoft does not publish all its prices. A little more transparency would be nice, especially for ISVs who really need all the facts before approaching potential customers.

Point 9 might well be true, but one would have to talk to people in the know to get a real answer. The fact is SLAs are just numbers; what matters is the ability to deliver on them. We don't have enough experience (thankfully) with either company on P1-type issues, so we can't comment on this too much.

Point 10 is and isn't true. The release cadence for BizTalk Server and Services was committed to at the BizTalk Summit 2013, and it is reasonably aggressive. Regarding Gartner-type reports, it is true that MuleSoft is consistently on top; in fact Microsoft wasn't even on the radar for a while due to a cloud offering not being available, however that has changed now and Microsoft is also in the Leaders quadrant, albeit still lagging behind MuleSoft.  This should certainly come into play when evaluating vendors, but once again, chances are that you aren't going to walk away from an established integration platform solely to move to a vendor who might have a lead but is considered to be in the same category.

In summary, we just don't think there are many reasons for an organisation to ditch BizTalk Server and jump ship to MuleSoft; however, there are many compelling reasons to assess MuleSoft products if you are either choosing a new integration engine for your company or you have had a major change in strategy that requires you to move to the cloud in a hurry and aren't keen to wait for Microsoft to fix the gaps in BizTalk Services / BizTalk Server on IaaS.

A lesson I learnt (the hard way) while working on the BRE Pipeline Framework is that if you use one of the out-of-the-box disassembler pipeline components, such as the XML/EDI/Flat File disassembler, and you rely on them to promote context properties from the body of your message, those context properties are not promoted until the message stream has been read at least once.

What this actually means is that if you build a custom pipeline component that you intend to use in your receive pipeline after a disassembler, and you try to read a context property that you expected the disassembler to promote, you will find that it has a null value if you don't read the stream first.

However, if you perform a read of the inbound stream using a StreamReader, you will find that all your context properties are promoted.  Remember to add the reader as a resource to the pipeline context like below, so that it won't get disposed until the pipeline completes processing, to ensure that your stream is seekable, and to rewind the stream after reading it.

// Reading the stream triggers the disassembler's deferred property promotion.
StreamReader reader = new StreamReader(inmsg.BodyPart.Data);
reader.Read();
// Keep the reader alive until the pipeline finishes, then rewind the stream.
pc.ResourceTracker.AddResource(reader);
inmsg.BodyPart.Data.Position = 0;

One other interesting fact is that even if you call the StreamReader.Read() method, which should only read one character from the stream, you will find that all your context properties are promoted, and in fact the current position in the stream (before you reset it, of course) is the end of the stream!
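
Putting the pieces together, below is a minimal sketch of what the relevant part of a custom pipeline component's Execute method might look like.  This is illustrative code, not the framework's source; it assumes the usual Microsoft.BizTalk.Message.Interop, Microsoft.BizTalk.Component.Interop and Microsoft.BizTalk.Streaming references.

// Minimal sketch: force the disassembler's deferred property promotion.
public IBaseMessage Execute(IPipelineContext pc, IBaseMessage inmsg)
{
    Stream stream = inmsg.BodyPart.GetOriginalDataStream();

    // Wrap non-seekable streams so that we can rewind after reading.
    if (!stream.CanSeek)
    {
        stream = new ReadOnlySeekableStream(stream);
        inmsg.BodyPart.Data = stream;
    }

    // Reading the stream executes the disassembler wrapper's read logic,
    // which is what actually promotes the context properties.
    StreamReader reader = new StreamReader(stream);
    reader.Read();
    pc.ResourceTracker.AddResource(reader);

    // Rewind so that downstream components see the full message body.
    stream.Position = 0;

    // Context properties promoted by the disassembler can now be evaluated.
    return inmsg;
}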

So why does this behaviour happen? It turns out that when the out-of-the-box disassemblers execute, they don't actually promote the context properties themselves. Instead, they wrap the inbound stream with a wrapper stream such as Microsoft.BizTalk.Component.XmlDasmStreamWrapper, which is responsible for promoting properties amongst other tasks. The property promotion does not happen until the stream has been read. This also holds true for other context-processing functions performed by disassemblers, as detailed so well by Charles Young in this blog post about BizTalk Server 2004 (from which I've borrowed part of this blog's title).

This behaviour is in line with best-practice pipeline component development guidance, which suggests that pipeline components should be built in an efficient, stateless manner in which content processing is executed only once, through the use of wrapper streams, when the stream is read as the message is committed to the BizTalk MessageBox, rather than within the pipeline components themselves. In other words, all content processing should be carried out either after a pipeline completes execution or in orchestrations.

This guidance works well for solutions that include orchestrations providing workflow functionality, but it obviously doesn't hold water in messaging-only solutions in which you might want to evaluate message content (potentially via context properties) in a pipeline (this could of course spur a debate as to whether a solution that mandates content processing must contain orchestrations, but that is not a topic for today). In this case you might have no choice but to read the stream prior to attempting your own processing, if you depend on having the outputs of a preceding disassembler available.

You will also find, if you are creating a new message in your pipeline and copying over the content and context from the original message, that if you don't read the stream and you perform a shallow copy of the context from the original message to the target message, your target message will lose any context properties that the disassembler was meant to promote.  You should instead copy the context over by reference, as in the sketch below.
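
A minimal sketch of the by-reference approach follows (the message and part creation details are illustrative):

// Create the outbound message (names and part setup are illustrative).
IBaseMessage outMsg = pc.GetMessageFactory().CreateMessage();
outMsg.AddPart("Body", pc.GetMessageFactory().CreateMessagePart(), true);
outMsg.BodyPart.Data = inmsg.BodyPart.Data;

// Copy the context by reference rather than property by property, so that
// properties the disassembler promotes lazily (on stream read) are not lost.
outMsg.Context = inmsg.Context;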

So how does this affect the BRE Pipeline Framework?  Prior to v1.5 I was naively making a copy of the original stream from the source message to the target output message, which of course means that the stream was being read.  This is certainly not the most efficient way to create the output message, especially if I don't intend to manipulate the message body in any way, since passing the original stream by reference should be good enough.  I decided that this was the path I would go down with the BRE Pipeline Framework v1.5.  However, without reading the stream, the context properties would no longer be available for evaluation in rules executed by the BRE Pipeline Framework, as they were in prior versions of the framework.

To get around this issue, the pipeline component included in the BRE Pipeline Framework has a new parameter called StreamsToReadBeforeExecution, a comma-separated list of stream types that should be read prior to calling any BRE Policies; it is pre-populated with the value Microsoft.BizTalk.Component.XmlDasmStreamWrapper.  If you are building a solution based on the BRE Pipeline Framework and do not need access to context properties that are promoted by disassemblers, I would urge you to remove the value from this parameter so that your pipeline component behaves in a streaming manner.  Rest assured that regardless of whether the parameter is populated or not, the promoted context properties will be on the output message once the pipeline has completed processing.

If you run a trace using the CAT Instrumentation Framework Controller, specifically for pipeline component tracing, the stream type intercepted by the BRE Pipeline Framework component will be displayed, and if the stream is being read then that will be displayed as well, as below.

[1]135C.09E0::07/10/2014-22:10:29.877 [Event]:9d59b3fb-5a39-43a3-8b90-7d33a5b2ec17 - Inbound message body had a stream type of Microsoft.BizTalk.Component.XmlDasmStreamWrapper
[1]135C.09E0::07/10/2014-22:10:29.877 [Event]:9d59b3fb-5a39-43a3-8b90-7d33a5b2ec17 - Inbound message body stream was not seekable so wrapping it with a ReadOnlySeekableStream
[2]135C.09E0::07/10/2014-22:10:29.890 [Event]:9d59b3fb-5a39-43a3-8b90-7d33a5b2ec17 - Reading stream to ensure it's read logic get's executed prior to pipeline component execution

The takeaway from this blog post is that you must not assume that context properties will be available for evaluation in pipeline components following a disassembler unless you read the stream first (which might or might not be acceptable, depending on your specific requirements).

I have just uploaded the BRE Pipeline Framework v1.5.1 installer to the CodePlex project page. If you have previously downloaded v1.5 then please uninstall it, download v1.5.1, and install that, as it fixes a pretty major bug.

The bug (issue #1767) results in context properties promoted by XML/FF/EDI disassemblers prior to BRE Pipeline Framework components not being available for evaluation in execution policies.

I had actually found this bug during development on v1.5, fixed it, created unit tests, and then broke the code. Unfortunately, due to a specific combination of rules in the test policy I was using, I was getting a false positive in my tests. Rest assured that I have updated my unit tests so that this bug is now specifically tested for.

The cause of this bug warrants an entire new blog post, which I will write up in the next few days; it will highlight the difficulties in accessing context properties promoted by out-of-the-box disassembler components in later stages of a pipeline.

While implementing dynamic transformation in the BRE Pipeline Framework I ran into an interesting problem.  In BizTalk 2013 Microsoft changed the way transformations are executed to be based on XslCompiledTransform rather than the long-deprecated XslTransform, which delivers performance benefits in the mapping engine.  This however is a breaking change for anyone who implemented dynamic transformation via custom .NET code in prior versions of BizTalk.  My specific problem was that I wanted to implement dynamic transformation in the BRE Pipeline Framework without forking the code to provide separate BizTalk 2010 and 2013+ support.

The code for BizTalk 2010 dynamic transformations in the BRE Pipeline Framework looks like the below (note that it has been truncated to make it easier to view; visit the CodePlex page if you'd like to see the full source code).

TransformMetaData transformMetaData = TransformMetaData.For(mapType);
SchemaMetadata sourceSchemaMetadata = transformMetaData.SourceSchemas[0];
string schemaName = sourceSchemaMetadata.SchemaName;
SchemaMetadata targetSchemaMetadata = transformMetaData.TargetSchemas[0];

XPathDocument input = new XPathDocument(inmsg.BodyPart.GetOriginalDataStream());
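// On BizTalk 2010, TransformMetaData.Transform returns an XslTransform.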
XslTransform transform = transformMetaData.Transform;
Stream output = new VirtualStream();
transform.Transform(input, transformMetaData.ArgumentList, output, new XmlUrlResolver());
output.Position = 0;
inmsg.BodyPart.Data = output;

The above wouldn't build on a BizTalk 2013 development machine, since an ITransform object is returned instead of an XslTransform object.  The working BizTalk 2013 code looks like the below.

TransformMetaData transformMetaData = TransformMetaData.For(mapType);
SchemaMetadata sourceSchemaMetadata = transformMetaData.SourceSchemas[0];
string schemaName = sourceSchemaMetadata.SchemaName;
SchemaMetadata targetSchemaMetadata = transformMetaData.TargetSchemas[0];

XPathDocument input = new XPathDocument(inmsg.BodyPart.GetOriginalDataStream());
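// On BizTalk 2013, TransformMetaData.Transform returns an ITransform instead.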
ITransform transform = transformMetaData.Transform;
Stream output = new VirtualStream();
transform.Transform(input, transformMetaData.ArgumentList, output, new XmlUrlResolver());
output.Position = 0;
inmsg.BodyPart.Data = output;

Note that the major point of difference in the above two code snippets is the type of the transform variable.  In order to cater for both scenarios I decided to take advantage of .Net 4’s dynamic type feature whereby instead of specifying a class name (XSLTransform or ITransform) I use the dynamic keyword instead as below.

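// Using dynamic defers member resolution to runtime, so the same code
// compiles against both the BizTalk 2010 and 2013+ APIs.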
dynamic transformMetaData = TransformMetaData.For(mapType);
SchemaMetadata sourceSchemaMetadata = transformMetaData.SourceSchemas[0];
string schemaName = sourceSchemaMetadata.SchemaName;
SchemaMetadata targetSchemaMetadata = transformMetaData.TargetSchemas[0];

XPathDocument input = new XPathDocument(inmsg.BodyPart.GetOriginalDataStream());
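// Resolved at runtime: XslTransform on BizTalk 2010, ITransform on 2013.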
dynamic transform = transformMetaData.Transform;
Stream output = new VirtualStream();
transform.Transform(input, transformMetaData.ArgumentList, output, new XmlUrlResolver());
output.Position = 0;
inmsg.BodyPart.Data = output;

Note that in the above I also had to use the dynamic keyword in place of the TransformMetaData type since this class appears to belong to a different namespace in BizTalk 2013 compared to prior versions.

The dynamic keyword instructs the compiler not to perform any validation on methods/properties called on that object (so no IntelliSense) and to instead assume that the developer knows what they are doing.  The object type is resolved at runtime, and if any of the called methods/properties don't exist then a runtime error results.
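
As a trivial illustration of this trade-off (unrelated to BizTalk):

dynamic value = "hello";
int length = value.Length;  // compiles, resolved at runtime to string.Length
// value.NoSuchMethod();    // would also compile, but throws a
//                          // RuntimeBinderException at runtime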

This is of course only a valid solution if you are targeting .NET 4.0 at a minimum, since the dynamic keyword didn't exist in previous versions; that makes it suitable for solutions targeting BizTalk 2010 and above.  I would also encourage any BizTalk 2010 shops that are dabbling in dynamic transformation to future-proof their solutions by using the dynamic keyword.

This of course only scratches the surface of dynamic types; if you want to read more, check out this MSDN article.  I would definitely encourage thorough unit testing (as was the case for the BRE Pipeline Framework) to make up for the loss of compile-time validation.
