Tag Archive: Pipeline Components

I have seen a lot of developers struggle with debugging pipeline components and other .NET classes that are executed by BizTalk host instances. I have also often heard claims that these sorts of artifacts are hard or even impossible to debug in the BizTalk runtime. I would like to clear up this misconception and describe how to debug such artifacts.

The first thing you need to do is ensure that your component has been installed to the pipeline components folder (for pipeline components) or to the GAC (for other .NET assemblies) as a debug build, since this is a prerequisite for debugging your code. Next, you need to get the process ID (PID) of the BizTalk host instance that will be executing your code. This is especially important if you have multiple BizTalk host instances running in your environment, because you need to know exactly which one to attach to, and Visual Studio lists processes by PID only, not by host name. To find the PID, open Task Manager (Ctrl + Shift + Esc) and browse to the Services tab. If you sort by name you should see a list of all your BizTalk host instances, with their PIDs in the next column as below. Take note of the PID you want to attach to.
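If you prefer the command line, a quick alternative (assuming the default BizTalk host service executable name, BTSNTSvc) is to run the following in a Windows command prompt. Because BizTalk host services are registered as BTSSvc$&lt;HostName&gt;, the Services column in the output also maps each PID back to its host name:

```
tasklist /svc | findstr /i "BTSNTSvc"
```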


Next up, open your Visual Studio project and apply breakpoints wherever you want in your code by navigating to the line you want to break on and pressing F9.


Next, open the Debug menu and choose Attach to Process. Highlight the BizTalk service with the PID matching the one you took note of from Task Manager and choose to attach to it. Note that if the host instance is running under another user’s account then you will need to tick the “Show processes from all users” checkbox in order to see it in the list.

Attach to process

Your instance of Visual Studio is now attached to the BizTalk host instance in question, and any time the host instance executes code on which you have placed a breakpoint, Visual Studio will enter debugging mode and you will be able to step through the code, inspect objects and properties, and so on. For a nice guide on the basics of debugging check out the article “Mastering Debugging in Visual Studio 2010 – A Beginner’s Guide”; while the article is focused on Visual Studio 2010, the majority of the practices described apply to earlier and later versions of Visual Studio as well.

I would like to announce that the BRE Pipeline Framework project has just been made publicly available on CodePlex.

For the better part of a year I have been working on a context-heavy BizTalk Server based integration project (one which was not making use of ESB Toolkit based itineraries), and after the first few months on the project, and many pipeline components later, I decided there had to be a better way to manage the logic that I wanted my pipelines to implement.

I investigated more flexible frameworks, and the one I found most attractive was described in this blog post by Guo Ming Li.  It allowed you to execute as many context instructions as you wanted.  I saw this as a great starting point, but I really wanted to take it further, not limiting the requirements to simply adding hardcoded context properties.  I also wanted to be able to read context properties, set context properties based on the result of an XPath statement, or set context properties based on values read from the SSO database.  I also wanted to be able to implement helper methods that I could use in my rule conditions so I could selectively apply actions.

Seeing that my wish list was just growing larger all the time, I decided that one of the design goals for the BRE Pipeline Framework should be extensibility: providing a base pipeline component and a framework, and allowing developers to implement their own logic as long as they implement the interfaces contained within the framework.  The full list of my design goals for this project is as below.

-Reduce the amount of time required to introduce new logic into a BizTalk pipeline.  Instead concentrate on capturing logic in reusable class libraries.
-Reduce the complexities surrounding the deployment of pipeline components.  Since logic is held in business rules and class libraries which the pipeline component doesn’t have any direct references to, the pipeline component will not need to be redeployed unless there are changes to the pipeline component itself (plumbing rather than logic).
-Promote the reuse of logic used within BizTalk pipelines rather than writing new pipeline components every time a slight variation of logic is required.
-Provide a simple design time experience (BRE) which encourages developers to use pipelines appropriately and makes it easier for analysts to understand the purpose of a pipeline.
-Provide an extensible framework that allows developers to implement their own requirements where these are not catered for out of the box.

Take a look at the CodePlex project page if this piques your interest as it contains a lot more detail, and let me know if you are interested in contributing as there is much work to be done.  I’ll leave you with a screenshot of an example BRE rule to give you an idea of what the framework is trying to achieve.


Some of my colleagues asked me to help them out with a really hairy problem today.  Their project involved the receipt of EDIFACT files by email.  The emails were being saved on the file system as .eml files, which were then picked up and processed by BizTalk.  Each email contained the message body, which was of content type text/plain, and the EDI message, which was an attachment of content type application/octet-stream.

Everything worked fine in development when they were working with eml files created by the developers, but when they tried to test the solution against a real eml file sent by one of the trading partners, the receive pipeline failed with the following error – “A body part or a part with the same name has already been added to this message. A message can have only one body part and part names must be unique.”

Inspecting their pipeline showed that they were making use of a MIME/SMIME decoder component (selecting the body part by content type application/octet-stream rather than by index), an EDI Disassembler component, an EDI Party Resolution component, and a whole lot of custom components.

My initial gut feeling was that the problem must lie in the disassemble stage, so I got an example message off them and whipped up a receive pipeline containing the MIME/SMIME decoder component and the EDI Disassembler component, both with exactly the same settings as theirs.  As soon as I ran the file through my pipeline I saw the same error.  To isolate the issue I removed the EDI Disassembler from the pipeline, and the error was no longer encountered.

I opened the eml file in Notepad and noticed that the content-description for the body of the message had a value of body (see below for an excerpt).  I changed the value to body1, played the message through my pipeline again, and this time it worked!
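I can’t share the trading partner’s actual file, so the excerpt below is a reconstructed, hypothetical equivalent (boundary, filenames, and EDIFACT content are made up); the key line is the Content-Description header on the text/plain body part:

```
MIME-Version: 1.0
Content-Type: multipart/mixed; boundary="----=_Part_boundary"

------=_Part_boundary
Content-Type: text/plain; charset=us-ascii
Content-Description: body

Please find the EDIFACT interchange attached.

------=_Part_boundary
Content-Type: application/octet-stream; name="interchange.edi"
Content-Description: interchange.edi
Content-Disposition: attachment; filename="interchange.edi"

UNA:+.? 'UNB+UNOA:2+SENDER+RECEIVER+240101:1200+1'
------=_Part_boundary--
```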

I didn’t have a copy of Reflector handy so I couldn’t inspect what the MIME/SMIME decoder or the EDI Disassembler component were doing, but my educated guess was that the MIME/SMIME decoder component names each message part according to the content-description in the eml file, and that when the EDI Disassembler disassembled the raw EDIFACT into XML and tried to add it to the message, it set the part name to body, which resulted in a clash because one of the non-body parts was already named body.

My colleagues asked me to whip up a quick workaround for them which would ensure that if any of the non-body parts had a name of body it would be renamed to something else.  Due to time constraints I largely based my code on the ArbitraryXPathPropertyHandler pipeline component, a Microsoft sample that can be found in C:\Program Files (x86)\Microsoft BizTalk Server 2010\SDK\Samples\Pipelines\ArbitraryXPathPropertyHandler on any BizTalk 2010 dev PC.

The Execute method of my pipeline component looks like the below.  Note that it creates a new instance of a message and copies the context over (it does so by reference here; you could always do a manual copy of the context properties, but since we are not manipulating the message whatsoever it isn’t strictly necessary).  It then creates a new message part that points at a stream derived from the original message’s body part (again, you could clone the stream rather than use the original one), and calls the CopyMessageParts method, passing in the original message as the source, the copied message as the target, and the copied body part (the extraction of the body part in the Execute method is somewhat unnecessary, but since this was a quick fix I didn’t want to vary from the Microsoft sample too much).  Finally, it returns the copied message to the remaining components in the pipeline.
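The original code isn’t reproduced here, but a sketch of an Execute method along those lines (modelled on the ArbitraryXPathPropertyHandler sample; CopyMessageParts is the helper method described next) looks something like this:

```csharp
public IBaseMessage Execute(IPipelineContext pContext, IBaseMessage pInMsg)
{
    // Create a new message and copy the original context across by reference
    IBaseMessage outMsg = pContext.GetMessageFactory().CreateMessage();
    outMsg.Context = pInMsg.Context;

    // Create a new body part that points at the original body part's stream
    IBaseMessagePart bodyPart = pContext.GetMessageFactory().CreateMessagePart();
    bodyPart.Data = pInMsg.BodyPart.GetOriginalDataStream();

    // Copy all parts from source to target, renaming any non-body part
    // called "body" along the way
    CopyMessageParts(pInMsg, outMsg, bodyPart);

    return outMsg;
}
```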

The CopyMessageParts method looks like the below.  It iterates through each of the message parts in the source message and copies them over to the target (again, you could clone the message part data instead of copying it over by reference if you wanted to).  If, however, the name of a non-body part is body, it replaces the part name with oldBody plus a GUID.
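Again as a hedged sketch rather than the exact code, a CopyMessageParts method along those lines (the oldBody-plus-GUID rename being the only deviation from the Microsoft sample it is based on) looks roughly like this:

```csharp
private static void CopyMessageParts(
    IBaseMessage sourceMessage, IBaseMessage destinationMessage, IBaseMessagePart newBodyPart)
{
    string bodyPartName = sourceMessage.BodyPartName;
    for (int i = 0; i < sourceMessage.PartCount; i++)
    {
        string partName = null;
        IBaseMessagePart part = sourceMessage.GetPartByIndex(i, out partName);

        if (partName != bodyPartName)
        {
            // Rename any non-body part called "body" so it can't clash with
            // the part name the EDI Disassembler tries to add later
            if (partName == "body")
            {
                partName = "oldBody" + Guid.NewGuid().ToString();
            }
            destinationMessage.AddPart(partName, part, false);
        }
        else
        {
            // Add the new body part under the original body part's name
            destinationMessage.AddPart(bodyPartName, newBodyPart, true);
        }
    }
}
```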

Placing this pipeline component in the decode stage after the MIME/SMIME decoder component now allows the EDI Disassembler to process the message successfully and works around the problem.  This pipeline component might not be 100% polished, as this was a quick and dirty exercise and my colleagues will most likely clean it up, but I hope this helps others who bump into the same issue to identify their problem and come up with an appropriate workaround as well.

I’ve been working on a BizTalk solution that includes a scenario in which I receive envelope messages that are debatched in a receive pipeline using an XML Disassembler pipeline component. The debatched messages, which are of different message types, are correlated to relevant orchestrations, where they are once again batched up into an envelope message using an XML Assembler based send pipeline before being sent out.

This sounds like BizTalk bread and butter, but the catch was that the messages being assembled were of different message types and had previously been debatched by an XML Disassembler. When a message is debatched by the XML Disassembler, a property called DocumentSpecName MIGHT be written to the context of the message (I’ve highlighted the word might because I have not yet figured out the exact conditions that cause this property to be set; it did not happen when I tried to replicate the problem in a simpler solution, but it definitely still happens in my original solution). If the first message to be batched by the XML Assembler based send pipeline contains a DocumentSpecName context property, the pipeline will choke if any of the subsequent messages are of a different message type, and you will get the following error (which is rather misleading, as it leads you to think that the pipeline has been misconfigured or that the schema has not been deployed properly) – There was a failure executing pipeline “PipelineName”. Error details: “The document type “xxxx#yyyyy” does not match any of the given schemas.

To get around this issue, you should remove the DocumentSpecName context property from any messages before they are batched up in the send pipeline. You can do this in a pipeline component in the receive pipeline that gets run in any stage after the disassemble stage. To remove the property you will need to add the below line to the execute method in your pipeline component (I’d encourage you to write pipeline components to be dynamic rather than hardcoding which properties to promote/remove so please only treat this as an example).
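The line in question writes a null value against the property, which removes it from the message context (DocumentSpecName lives in the BizTalk system-properties namespace; this sketch assumes your Execute method’s message parameter is named pInMsg):

```csharp
// Writing null against a context property removes it from the context
pInMsg.Context.Write(
    "DocumentSpecName",
    "http://schemas.microsoft.com/BizTalk/2003/system-properties",
    null);
```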

Of course if you are not debatching messages on the way in, or if all the debatched messages are of the same type then you will not encounter this problem.

Once again, I will reiterate that I don’t yet know what the exact conditions are for the XML Disassembler to set the DocumentSpecName property onto a message, but if you run into the same error message and you are sure that the schema mentioned in the error message is properly deployed, then do check to see if the DocumentSpecName property exists on your message, and if so remove it.  If at any point I do find the exact conditions I will update this blog post.
