Tag Archive: Microsoft BizTalk Server

A common problem you might come across in orchestrations is the need to merge records from two different messages based on key data.  You might even want to take it further: where there is no match for the key data, you might want to produce a record containing only the data from the one source message that held that key.

In this blog post, I will take you through the steps required to set this up, show you why you can't achieve the desired result using the BizTalk mapper with the inbuilt functoids, and then show you how you can achieve it through some creative XSLT in your map.

First, the setup.  Create the two source schemas as in the example below.  The EmployeeDetail and BankAccountDetail records should have a max occurs of unbounded.

Next, create the destination schema as in the example below.  The MergedEmployeeDetail record should have a max occurs of unbounded, and the Address and BankAccountNumber elements should have a min occurs of 0.
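In XSD terms, the destination cardinality described above can be sketched as follows (element names beyond those mentioned in this post, and the namespace, are illustrative rather than the exact schema):

```xml
<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema"
           targetNamespace="http://example/MergedEmployeeDetails"
           xmlns="http://example/MergedEmployeeDetails"
           elementFormDefault="qualified">
  <xs:element name="MergedEmployeeDetails">
    <xs:complexType>
      <xs:sequence>
        <!-- One merged record per key, matched or unmatched -->
        <xs:element name="MergedEmployeeDetail" maxOccurs="unbounded">
          <xs:complexType>
            <xs:sequence>
              <xs:element name="EmployeeID" type="xs:string"/>
              <xs:element name="Name" type="xs:string"/>
              <!-- Optional: absent when the key appeared only in the bank message -->
              <xs:element name="Address" type="xs:string" minOccurs="0"/>
              <!-- Optional: absent when there was no matching bank record -->
              <xs:element name="BankAccountNumber" type="xs:string" minOccurs="0"/>
            </xs:sequence>
          </xs:complexType>
        </xs:element>
      </xs:sequence>
    </xs:complexType>
  </xs:element>
</xs:schema>
```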

Then let's create a property schema with an element called TransactionID, and promote the TransactionID element in the two source schemas against this property.

Now let's create an orchestration that is activated on receipt of the EmployeeDetails message, receives a BankAccountDetails message with a matching TransactionID (you'll need to set up a correlation set based on the TransactionID), transforms these two messages into the MergedEmployeeDetails message, and sends it out to the MessageBox.

In case you didn't know, when you create a BizTalk map in Visual Studio you are limited to one source and one destination message; however, there is a way around this restriction.  If you add a Transform shape to an orchestration and choose to create a new map rather than use an existing one, the restriction is dropped.  This is because the only place a multi source/destination map can be executed is within an orchestration.  I guess Microsoft wanted to ensure developers don't try to use these maps elsewhere, which is why they only let you create them from an orchestration… I do personally wish they wouldn't try to handhold BizTalk developers quite so much.

Anyway, we are going to make use of this feature.  After receiving the EmployeeDetails and BankAccountDetails messages, drag in a Transform shape and choose to create a new map with the two received messages as sources and a MergedEmployeeDetails message as the output.

Now you might think that some Equals and Value Mapping functoids would satisfy this mapping requirement.  For argument's sake, set up your map as below.

Now let's run the below input message through the map.


Success, right?  See the output below.

But wait, there's more.  What if we now add multiple BankAccountDetail records to the second message?

Let's see what the output looks like.

Suddenly things aren't looking quite so good anymore.  If you validate the map and view the XSLT, you'll notice that the map loops through all the EmployeeDetail records in the first message; however, when it checks for matching BankAccountDetail records, there is no loop.


You will find that there is no way to force the map to your will, so it's custom XSLT to the rescue.  Below is the XSLT I wrote to solve this issue.
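As a sketch of the approach (the EmployeeID merge key and the child element names are assumptions, since the schemas above aren't reproduced in full; BizTalk wraps a multi-input map's sources in a Root element with InputMessagePart_N children), a template along these lines does the job:

```xml
<!-- Sketch only, not the post's exact XSLT. -->
<xsl:template name="MergeByEmployeeID">
  <!-- One output record per employee, with the matching bank account (if any) -->
  <xsl:for-each select="/*[local-name()='Root']/*[local-name()='InputMessagePart_0']//*[local-name()='EmployeeDetail']">
    <xsl:variable name="id" select="*[local-name()='EmployeeID']/text()"/>
    <MergedEmployeeDetail>
      <EmployeeID><xsl:value-of select="$id"/></EmployeeID>
      <Name><xsl:value-of select="*[local-name()='Name']/text()"/></Name>
      <Address><xsl:value-of select="*[local-name()='Address']/text()"/></Address>
      <xsl:for-each select="/*[local-name()='Root']/*[local-name()='InputMessagePart_1']//*[local-name()='BankAccountDetail'][*[local-name()='EmployeeID']/text() = $id]">
        <BankAccountNumber><xsl:value-of select="*[local-name()='BankAccountNumber']/text()"/></BankAccountNumber>
      </xsl:for-each>
    </MergedEmployeeDetail>
  </xsl:for-each>
  <!-- Bank account records whose key has no matching employee still produce a record -->
  <xsl:for-each select="/*[local-name()='Root']/*[local-name()='InputMessagePart_1']//*[local-name()='BankAccountDetail']">
    <xsl:variable name="id" select="*[local-name()='EmployeeID']/text()"/>
    <xsl:if test="not(/*[local-name()='Root']/*[local-name()='InputMessagePart_0']//*[local-name()='EmployeeDetail'][*[local-name()='EmployeeID']/text() = $id])">
      <MergedEmployeeDetail>
        <EmployeeID><xsl:value-of select="$id"/></EmployeeID>
        <BankAccountNumber><xsl:value-of select="*[local-name()='BankAccountNumber']/text()"/></BankAccountNumber>
      </MergedEmployeeDetail>
    </xsl:if>
  </xsl:for-each>
</xsl:template>
```

The key difference from the functoid-generated XSLT is the inner xsl:for-each with a predicate on the key, which gives you a proper nested loop over the second message for every record in the first.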


Next I created a copy of the map and removed all links and functoids from it.  I then added a Scripting functoid connected to the MergedEmployeeDetail record in the output message, set the script type to Inline XSLT Call Template, and pasted the XSLT in.

You'll now get the results below.  Voila!


You'll find the BizTalk 2010 source code for the above example here (click File and then Download to download the zip file).

While working on a BizTalk project that required exchanging EDI messages with external partners, I came across a situation where the partner's EDIFACT messages contained data elements that exceeded the maximum lengths dictated by their Message Implementation Guide (MIG), which also happened to be the default maximum lengths in the out of the box BizTalk EDI schemas.  And even though they exceeded the maximum lengths, our trading partner still insisted that they were valid values!

This specific EDI message was of the COPARN (Container Announcement Message) variety, and the element in question was the temperature element as defined in the below out of the box Coparn D95B schema.

You'll notice that the minimum and maximum lengths are both set to 3 (as specified by the trading partner's MIG), so we would expect all temperature setting values to be 3 characters long.  These are just out of the box XSD schema properties, pretty simple.  Regardless of whether the data type is string or int, the exact same length validation should occur.

The catch is that it was possible (and extremely likely) for refrigerated containers to have negative temperatures, and the trading partner in question pointed us to documentation advising that the minus character in negative numbers is a special character which must not be counted towards the length of a data element.

The article states: "Numeric data element values are to be sent as positive. Although conceptually a deduction is negative, it is represented by a positive value: e.g. in a credit note all values are sent as positive amounts, and the application software will take note of the message name code (DE 1001) and process the values accordingly. In addition some data element and code combinations will lead to implied negative values, e.g. DE 5463 with code value 'A' (allowance) in an ALC segment in an invoice. Again, the values are sent as positive amounts.

If, however, a value has to be explicitly represented as negative, it must be sent immediately preceded by a minus sign, e.g. -112. The minus sign is not counted as a character when computing the maximum field length of the data element."

Since refrigerated container temperatures could be above or below 0, it is necessary to explicitly precede negative values with the minus sign, and thus the sign is not counted.  So a temperature of -112 is 4 characters on the wire but only 3 characters for length-validation purposes.

I haven't seen any documentation from Microsoft claiming that they have catered for this requirement in any fashion, nor was I able to find any blogs at the time suggesting any ideas.

In this specific scenario we took the easy way out and just raised the Maximum Length on the temperature element to 4, as we had no time left in the project to cater for what was essentially a big surprise to us, given the lack of documentation (that I could find, at least) around this.
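The workaround itself is a one-character schema change.  Assuming the usual xs:restriction facet style the out of the box EDI schemas use for element lengths (the element name here is illustrative), it looks like:

```xml
<!-- Temperature element with maxLength widened from 3 to 4, so that a
     3-digit value preceded by a minus sign still validates. -->
<xs:element name="TemperatureSetting">
  <xs:simpleType>
    <xs:restriction base="xs:string">
      <xs:minLength value="3"/>
      <xs:maxLength value="4"/>  <!-- was 3 -->
    </xs:restriction>
  </xs:simpleType>
</xs:element>
```

Note that this also lets through invalid 4-digit positive values, which is part of why it feels like the easy way out rather than a proper fix.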

Does anyone have a good way to deal with this, or is this the only (or just the easiest) option?  I would be extremely wary of this in the future given how big EDI schemas tend to be, and will always discuss this specific scenario with trading partners, rather than trusting their MIGs, before settling on the out of the box schemas.

When attempting to resubmit an XML message to a receive location using the HTTP transport channel (make sure you first change the required stored procedures to set the correct content type for XML messages; see http://midheach.wordpress.com/2012/03/23/esb-management-portal-customization/), you will get an error message saying "A potentially dangerous Request.Form value was detected from the client".

This is because there are XML tags in the request message which would typically be considered unsafe.  Assuming your ESB Portal web site is only targeted at BizTalk administrator types, you can relax these restrictions by adding the below highlighted line to the web.config in your ESB Portal web directory.
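The standard ASP.NET way to relax request validation looks like the following (on .NET 4 and later you need the requestValidationMode attribute as well, because validation otherwise runs before the pages setting is consulted):

```xml
<system.web>
  <!-- Allow markup in posted form values; only do this on a trusted, admin-only site -->
  <pages validateRequest="false" />
  <!-- On .NET 4+, revert to 2.0-style validation so the pages setting takes effect -->
  <httpRuntime requestValidationMode="2.0" />
</system.web>
```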

Because these restrictions are effectively relaxed across the entire website, this might not be ideal for everyone.  If someone finds a better way around this problem, please let me know.

A small oversight (at least in my mind) in the ESB Portal quickly proves to be a major irritant after using it for a while.  The default sort order on the faults page is by severity which, if you're like me, is almost never the logical choice when viewing the page.  I would much rather see it sorted by the time the fault occurred; if I want to view the data any other way, I can always change the sort order or apply filters.

The ESB Portal uses an ASP.NET grid, and its default implementation doesn't specify a sort order, so on page load it will always sort by its first column, which is severity.  Thus you've got two choices: either change the order of the columns, or add some new behaviour which defines the default sort.  I've explored the latter path.

In order to make the changes, you’ll want to open the ESB.Portal solution, expand the ESB.Portal project, and within that open the FaultList.ascx.cs file which is in the Lists directory (you’ll have to expand FaultList.ascx to see the .cs file).  You’re interested in the very first method – Page_Load.  You’ll want to add in the highlighted code from the below screenshot.
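The change itself is only a few lines.  As a sketch (Fault.Sort and the SortingOrder enumeration are from the portal source; the surrounding variable names and the exact Sort signature are assumptions):

```csharp
// Sketch of the Page_Load addition; faultList is an assumed variable name.
protected void Page_Load(object sender, EventArgs e)
{
    if (!IsPostBack)
    {
        // Default the grid to newest-fault-first instead of the implicit
        // first-column (Severity) ordering.
        faultList = Fault.Sort(faultList, SortingOrder.DateTime);
    }

    // ... existing data-binding logic follows ...
}
```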

If you haven't previously installed the portal then you can perform a build and use the MSI installer package to deploy it.  If you have previously installed the portal and got it working, then chances are you don't want to start from scratch and only want to apply the updated DLLs.  These would be the Microsoft.Practices.ESB.Portal.dll and Microsoft.Practices.ESB.Portal.XmlSerializers.dll files in the bin folder of your ESB.Portal project, and you'll want to copy them to the bin folder of the ESB Portal directory that hosts your IIS virtual application.

You can always take this a bit further and read the default sort order from the web.config file if you want to make this more configurable, and you can of course replace DateTime in the Fault.Sort() method call with any SortingOrder you want.

Mapping became a whole lot more of a pleasure than a pain in BizTalk 2010.  It finally became much easier to manage complex maps, and the immediate temptation to just write up some custom XSLT has been put on the back burner (for me at least) for all but those specific scenarios where the mapping IDE will not play ball.

An interesting scenario I was faced with was mapping a flat message structure into a message which had a repeating record with elements in it.

An example output of the source structure looks like the below (note that I have skipped the records suffixed 7-18, but assume they're there).  The schema for this message is a delimited flat file schema which will always contain 40 elements (in my scenario, setting these up as a repeating record in the flat file schema was not possible).

An example destination message looks like the below (there could actually be up to 20 records in this message, but I've kept it short in this screenshot).  One of the rules I was given was that if any of the HazCLx elements contained an empty string in the source message, then no record should be created for that HazCLx element and its corresponding HazUNx element.
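To make the rule concrete, here is a cut-down illustration with three slots instead of 40 (the values are invented; the hazardous/hazardousClass/hazardousUN names are the destination elements used later in this post).  HazCL2 is empty, so slot 2 produces no output record:

```xml
<!-- Source (flattened) -->
<CheckIn>
  <HazCL1>3</HazCL1><HazUN1>1090</HazUN1>
  <HazCL2></HazCL2><HazUN2></HazUN2>
  <HazCL3>8</HazCL3><HazUN3>1830</HazUN3>
</CheckIn>

<!-- Destination: one hazardous record per non-empty HazCLx -->
<hazardous>
  <hazardousClass>3</hazardousClass>
  <hazardousUN>1090</hazardousUN>
</hazardous>
<hazardous>
  <hazardousClass>8</hazardousClass>
  <hazardousUN>1830</hazardousUN>
</hazardous>
```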

Ok, so the first thing we need to do is drag a Table Looping functoid onto the mapping area.

If you inspect the functoid description, you'll see that the first parameter is a scoping parameter, so you need to connect it to the root of the record which contains these elements; this could be your root node or, as in my case, the CheckIn record.

The second parameter is meant to be the size of the table you are trying to create.  Just by looking at the schema you would guess it should be 2, but there is a bit more to it.  If you move to the Table Looping Grid tab (I believe in BizTalk 2009 and earlier you have to access this from the properties explorer area while focused on the functoid), you'll notice an option that allows you to treat the first column in the table as a logical gate for the row.  If you select this option, the first column will contain a boolean value used to evaluate whether the current row in the table is valid.  In our case this is important because we only want to output a record if the HazCLx element in the source message contains a non-empty string.  So let's tick this option, and let's set the second parameter of the looping functoid to 3, since there are going to be 3 columns: the boolean gate column, the HazCL column, and the HazUN column.  Your functoid parameters should look like the below.

Ok, now for the next step you need to supply all the values that will make up this table.  To provide the value for the gate column, you can use Not Equal functoids, with the first parameter being the HazCLx element and the second parameter left blank, so each functoid returns true when its HazCLx value is non-empty.  You'll need 20 of these, since there are always 20 HazCLx elements in the source message.  The output of each Not Equal functoid should go to the Table Looping functoid.  You should also link the HazCLx and HazUNx elements in the source message to the Table Looping functoid.  Note that it does not matter in what order you supply the parameters to the Table Looping functoid (except for the first two, which should be as in the screenshot just above this paragraph).

Your map should now look somewhat like the below.

As you can see, things are getting quite messy, what with the 2 initializing parameters plus 60 value parameters being fed to the Table Looping functoid.  The next step has the potential to be a nightmare, but I have a trick to make it a lot easier for you.  You now need to open the Table Looping Grid tab and assign the values of the Table Looping functoid to the relevant rows and columns.  You do this by clicking on the drop-down arrow in each cell and choosing which parameter's value to use for that cell, and you'll need to do this for each of the 20 rows.  While there isn't much of a problem for column 2 (HazCLx) and column 3 (HazUNx), how on earth are you meant to choose which parameter to use for the gate column when all the parameters supplied by the 20 Not Equal functoids have the same name, "Not Equal"?  See the below screenshot to see what I mean.

So here's a nifty trick for you BizTalkers.  If you click on an actual link (the lines between elements and functoids/other elements) on a map, you'll see a property in the properties explorer called Label.  Setting a value for the label effectively gives the link a friendly name.  In this situation I am going to name each of the links from my Not Equal functoids to my Table Looping functoid Nx, where x is the row number in the table.  So the label for the link coming out of the Not Equal functoid driven off HazCL1 is N1, the one for HazCL2 is N2, and so on.

If you now go back to the Table Looping Grid tab in the Table Looping functoid, you'll have a nice surprise waiting for you.  Rather than the generic name "Not Equal" in the drop-down selections, you now see N1, N2, N3 and so on, and thus can choose appropriately.  This tab should look like the below once you are done setting it up (it should keep going until the 20th row).

Ok, so we now have a table structure populated with all the values from the source message.  Now all we need is for each row in the table to make up a record in the destination message.  The very first thing you want to do is define the output scope of the table, so the output of the Table Looping functoid should be connected to the repeating hazardous record in the destination schema.

The next thing we want to do is drag two Table Extractor functoids onto the mapping area.  The first parameter for these functoids is the table they need to extract from, so drag a link from the Table Looping functoid to each of them.  The second parameter is the column number in the table they should extract values from, so this should be 2 for the first Table Extractor functoid and 3 for the second (remember, column 1 is the gate, so its value doesn't need to be extracted).  Lastly, set the output of the first Table Extractor functoid to the hazardousClass element in the destination schema and the output of the second to the hazardousUN element.  Your map should now look like the below.  Please let me know if you need any help with this.
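For comparison, if the mapper hadn't cooperated here, the same gate-and-pair behaviour can be hand-written in XSLT in a few lines.  This is a sketch only: it assumes the HazCLx/HazUNx naming convention, that each pair shares a numeric suffix, and that the current context node is the CheckIn record:

```xml
<!-- One hazardous record per non-empty HazCLx, paired with the HazUNx
     element that shares its numeric suffix. -->
<xsl:for-each select="*[starts-with(local-name(), 'HazCL')][text() != '']">
  <xsl:variable name="i" select="substring-after(local-name(), 'HazCL')"/>
  <hazardous>
    <hazardousClass><xsl:value-of select="text()"/></hazardousClass>
    <hazardousUN><xsl:value-of select="../*[local-name() = concat('HazUN', $i)]/text()"/></hazardousUN>
  </hazardous>
</xsl:for-each>
```

The predicate on text() plays the same role as the gate column: empty HazCLx slots simply never enter the loop.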
