

Back in June I wrote a blog post in which I explored how the BizTalk XML Validator pipeline component could be used to prevent duplicate values in repeating records, the duplicate check being scoped to a single element/attribute value or a combination of them (do have a read of that post for an overview of how this can be done in schemas with an elementFormDefault of unqualified or with no namespaces).  However, at the time I hit a major problem: I could not figure out the syntax to get this to work with schemas with an elementFormDefault or attributeFormDefault of qualified.  Even though the BizTalk project containing the schemas built successfully, executing the XML Validator pipeline component constantly produced the error “The prefix ‘x’ in XPath cannot be resolved” (x being the relevant namespace prefix I was trying to use in the unique constraint), and I was unable to work around the problem at the time.

[Screenshot: the “The prefix ‘x’ in XPath cannot be resolved” error]

While looking at implementing a workaround on a solution I was working on, whereby I was going to reverse the elementFormDefault on the contained schemas from qualified back to unqualified, my colleague Shikhar and I worked out how to solve the namespace prefix problem.  Put very simply, it looks like unique constraints in a schema will not respect namespace prefixes declared at the schema level; the namespace prefixes must instead be defined at the constraint level.

For the purpose of this exercise I’ve created a new type schema which contains a complexType definition called ItemMetadata, which in turn contains an attribute called DateOfAvailability and an element called Price.  The elementFormDefault and attributeFormDefault attributes are both set to qualified on this schema.  I’ve then referenced it via an import statement from the Purchases schema I used in my previous example and added ItemMetadata as a child node under the repeating Purchase node, so that the schema structure now looks like the below.

[Screenshot: schema structure]

The unique constraint that I had working in the previous blog post specified that if there is more than one Purchase node containing the same combination of values for the PurchaseOrder, SupplierName, ItemBarCode, ItemDescription, and SpecialDeals elements as well as the PurchaseDate attribute, then the XML Validator should throw an exception.  None of the aforementioned elements or attributes belonged to a namespace, since the elementFormDefault and attributeFormDefault on the schema were set to unqualified.  I now want to add the DateOfAvailability attribute and the Price element, which both belong to the namespace http://BizTalk_Server_Project6.Type, to the constraint.  The namespace must be associated with a prefix, and as we have previously seen we can’t do this at the schema level, so the prefix should instead be declared at the constraint level, where it can then be used in the individual field XPaths as below.

[Screenshot: the unique constraint with the namespace prefix declared at the constraint level]
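As a rough sketch of what this looks like in the schema (the field XPaths here are illustrative and assume the ItemMetadata element itself is declared unqualified in the Purchases schema while its children and attribute are qualified), note that the xmlns declaration for the prefix sits directly on the xs:unique element rather than on xs:schema:

```xml
<!-- Sketch only: the ns0 prefix is declared on the xs:unique element itself,
     not at the schema level, so the XPaths in xs:field can resolve it. -->
<xs:unique name="SupplierPurchaseOrderConstraint"
           xmlns:ns0="http://BizTalk_Server_Project6.Type">
  <xs:selector xpath="Purchase" />
  <xs:field xpath="PurchaseOrder" />
  <xs:field xpath="SupplierName" />
  <xs:field xpath="ItemMetadata/ns0:Price" />
  <xs:field xpath="ItemMetadata/@ns0:DateOfAvailability" />
</xs:unique>
```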

Using the namespace prefixes as above the below XML now generates the error “There is a duplicate key sequence ‘123213 Al Pacino PurchaseDate_0 42129841 Anti-dandruff shampoo 10% off 10/11/2013 $13.00’ for the http://uniquepurchases:SupplierPurchaseOrderConstriant key or unique identity constraint.” as expected.  If you wanted to make use of the qualified elements or attributes in multiple nodes in multiple unique constraints in the same schema then you would need to declare the namespace prefix on each constraint, potentially reusing the same prefix value since it is scoped to that specific constraint and thus there won’t be a clash.

[Screenshot: duplicate key sequence error]

I’ve uploaded the source code (as a Visual Studio 2012/BizTalk 2013 solution) to my Google drive account if you are interested in giving it a try.

In conclusion, duplicate value checks in a BizTalk solution are very well served by schema validation using unique constraints (even though the schema designer does not support implementing unique constraints and they have to be added manually); alternative methods of enforcing such validation would most likely turn out to be much harder to implement and support, and wouldn’t be anywhere near as efficient.


My colleague Ian Hui figured out a problem that has had me scratching my head for the last two months and he has made me a very happy man.

While porting unit tests using BizUnit for the BRE Pipeline Framework from BizTalk 2010 to BizTalk 2013 I encountered a System.BadImageFormatException exception with the following error message – “Could not load file or assembly ‘file:///C:\Program Files (x86)\Microsoft BizTalk Server 2013\Pipeline Components\BREPipelineFrameworkComponent.dll’ or one of its dependencies. This assembly is built by a runtime newer than the currently loaded runtime and cannot be loaded”.

[Screenshot: failed test]

When I removed my custom pipeline component and left behind only the out-of-the-box pipeline components like the XML Disassembler, I noticed that the problem disappeared.  This, along with some further digging, led me to believe that the problem is encountered with all pipeline components built in .NET 4.0 and above.  I tried a whole bunch of workarounds, such as rebuilding the BizUnit test steps, the Winterdom pipeline testing framework, and even the PipelineObjects.dll assembly using .NET 4.5, thinking that this might help work around the problem, but I just kept hitting brick walls.  What made the problem even more mind-boggling was that the pipeline components that caused problems in unit tests ran just fine in the BizTalk runtime.

In comes Ian, who persevered with the problem long after I had resigned this to being something we would have to wait for Microsoft to fix.  He found that you need to adjust the vstest.executionengine.exe.config file, typically found in the “C:\Program Files (x86)\Microsoft Visual Studio 11.0\Common7\IDE\CommonExtensions\Microsoft\TestWindow” folder, setting the useLegacyV2RuntimeActivationPolicy attribute on the startup element to true as in the below (you can download a copy of my config file from here if you want to copy/paste from it).

[Screenshot: vstest.executionengine.exe.config]
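For reference, the relevant part of the config file ends up looking something like the below sketch (your copy of the file will contain additional elements, which should be left untouched):

```xml
<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <!-- useLegacyV2RuntimeActivationPolicy="true" allows mixed-mode assemblies
       built against older runtimes to load into the .NET 4.x CLR. -->
  <startup useLegacyV2RuntimeActivationPolicy="true">
    <supportedRuntime version="v4.0" />
  </startup>
  <!-- remaining elements of the original config file unchanged -->
</configuration>
```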

You can read more about this setting and what it does here.  The discrepancy in behaviour between the test execution and BizTalk runtime is easily explained by examining the BTSNTSvc64.exe.config file in the BizTalk program files folder as you’ll notice that the aforementioned attribute is set to true in this config file by default, which is why the runtime never had a problem with these pipelines.

[Screenshot: BTSNTSvc64.exe.config]

Funnily enough, after Ian figured out the answer to this problem he found that Shashidharan Krishnan had encountered the same problem in the past (on BizTalk 2010) and fixed it in the exact same way (he has documented this here).  However, he encountered completely different error messages, and I suspect that he was running his unit tests through a console application rather than Visual Studio unit tests.  Either way, as the error messages he encountered are totally different from the ones we did, chances are that if you have the same error as us you might not find his post, which is why we decided to document this anyway.

Thanks again Ian (and Shashidharan).  You guys have just made unit testing BizTalk 2013 components more robust, thus ensuring the success of integration solutions developed for BizTalk 2013.

I was recently confronted with an interesting problem where I was asked to design a BizTalk schema containing a complex type that must include at least one of a set of child elements, in any combination. Upon questioning the business requirement it turned out that I didn’t actually have to enforce this in the schema, as this level of business logic was better served by a downstream system. However, I did decide that this was an interesting enough scenario (albeit not necessarily a common or practical one) to dive into a bit deeper and see how it could be implemented in an XSD schema. Note that while I have used the BizTalk 2009 schema editor in Visual Studio 2008 to create my schemas, aside from the wording of error messages nothing in this blog post is really BizTalk specific.

Let’s assume that we are dealing with the XML structure below and we are advised that the record structure must always contain either an Element1 or an Element2 element or both (the rest of the elements are all optional).

[Screenshot: basic XML structure]

One way we can achieve the above goal is by making use of choice structures as below.

[Screenshot: choice structure with two elements, untyped]

Note that in the above screenshot the choice consists of a sequence that contains both Element1 and an optional Element2, and a second sequence that contains only Element2. This caters for scenarios where both elements exist, or only one exists. See the below screenshots, which illustrate that if either Element1 or Element2 is missing then that is not a problem, but if both are missing then the XML message is invalid.
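In XSD terms, the choice shown in the screenshot looks something like the below sketch (the string types are illustrative):

```xml
<xs:choice>
  <!-- First branch: Element1 present, Element2 optional -->
  <xs:sequence>
    <xs:element name="Element1" type="xs:string" />
    <xs:element name="Element2" type="xs:string" minOccurs="0" />
  </xs:sequence>
  <!-- Second branch: only Element2 present -->
  <xs:sequence>
    <xs:element name="Element2" type="xs:string" />
  </xs:sequence>
</xs:choice>
```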

[Screenshots: valid with no Element2, valid with no Element1, invalid with neither Element1 nor Element2]

When I originally approached the problem I thought of creating a choice structure with three sequences: one with Element1, the second with Element2, and the third with both Element1 and Element2. However, this is not allowed, and trying to implement it results in an error stating “Multiple definitions of element xxx causes the content model to become ambiguous…” as below.

[Screenshot: ambiguous content model error]

Now what if we want to change the data type of Element2 (which exists in both of the sequences in the choice)? If we try to change the data type from string to int on the Element2 element in one of the sequences using the BizTalk/Visual Studio schema editor we will get an error stating “Elements with the same name and in the same scope must have the same type” as below.

[Screenshot: “same type” error]

The reason for this is that elements with the same name that exist in multiple sequences within a choice structure must have the same data type, and changing them one at a time is not an option. One way to get around this is to open the schema in an XML editor and change the data type of both Element2 elements at the same time. An even cleaner solution is to extract the definition of Element2 into a simple type, and to have both Element2 elements use that simple type as their type, so that if the data type for Element2 ever needs to change it only has to be changed on the simple type. In the below screenshot this has been done for Element1, Element2, Element3, Element4, and Element5, and we can now change data types with ease.
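As a sketch (the type name is illustrative), the shared simple type and its use look like this:

```xml
<!-- A named simple type that both Element2 declarations reference; changing
     the base type here changes it in both places at once. -->
<xs:simpleType name="Element2Type">
  <xs:restriction base="xs:string" />
</xs:simpleType>

<xs:choice>
  <xs:sequence>
    <xs:element name="Element1" type="xs:string" />
    <xs:element name="Element2" type="Element2Type" minOccurs="0" />
  </xs:sequence>
  <xs:sequence>
    <xs:element name="Element2" type="Element2Type" />
  </xs:sequence>
</xs:choice>
```

Note that if the schema declares a targetNamespace, the references to Element2Type would need to be prefixed accordingly.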

[Screenshot: choice structure with two elements, typed]

Now what if we wanted to add Element3 to the choice as well, the new rule stating that at least one of Element1, Element2, or Element3 must be provided, or any combination of two or all three of them? This can be achieved quite easily with the below structure. Note that the pattern is to have all the elements in the first sequence where only Element1 is mandatory, then to remove Element1 in the second sequence where Element2 is now mandatory, with the third sequence containing only Element3, which is now mandatory. Adding more elements to the mix would require you to extend this pattern.

[Screenshot: choice structure with three elements]
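The three-element pattern can be sketched in XSD as follows (types are illustrative):

```xml
<xs:choice>
  <!-- Element1 mandatory; Element2 and Element3 optional -->
  <xs:sequence>
    <xs:element name="Element1" type="xs:string" />
    <xs:element name="Element2" type="xs:string" minOccurs="0" />
    <xs:element name="Element3" type="xs:string" minOccurs="0" />
  </xs:sequence>
  <!-- Element1 absent; Element2 mandatory, Element3 optional -->
  <xs:sequence>
    <xs:element name="Element2" type="xs:string" />
    <xs:element name="Element3" type="xs:string" minOccurs="0" />
  </xs:sequence>
  <!-- Only Element3 -->
  <xs:sequence>
    <xs:element name="Element3" type="xs:string" />
  </xs:sequence>
</xs:choice>
```

Between them, the three branches cover every non-empty combination of the three elements exactly once, which is what keeps the content model unambiguous.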

How about if Element2 is always mandatory, and you additionally need to provide either Element1 or Element3 or both? You can achieve this with the below structure.

[Screenshot: Element2 mandatory]
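One way to sketch this in XSD (illustrative types; note that the sequence fixes the element order, which may differ from your original schema) is:

```xml
<xs:sequence>
  <!-- Element2 is always required -->
  <xs:element name="Element2" type="xs:string" />
  <xs:choice>
    <!-- Element1 present, Element3 optional -->
    <xs:sequence>
      <xs:element name="Element1" type="xs:string" />
      <xs:element name="Element3" type="xs:string" minOccurs="0" />
    </xs:sequence>
    <!-- Only Element3 -->
    <xs:sequence>
      <xs:element name="Element3" type="xs:string" />
    </xs:sequence>
  </xs:choice>
</xs:sequence>
```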

I hope that this blog post has illustrated to you how you can create schemas that specify multiple combinations of optional/mandatory rules for a given complex type through the use of choice structures with child sequence groups.

A customer asked me recently if the BizTalk business rules engine was a good place to search for repeating elements containing duplicate values in an XML message that is received on a one way BizTalk hosted WCF service and to throw an exception back to a service consumer if a duplicate is found. My initial gut feeling was that the BRE wasn’t quite the best place to do this and I decided to search for alternatives before exploring this option further.

I was surprised to find that the W3C XML Schema standard caters for unique constraints in your XML messages, defining keys scoped to a complex type that specify which element/attribute, or combination of elements and attributes, is not allowed to repeat. This is not a feature of XSD schemas that I, or anyone I spoke to about it, had heard of before, and a common reaction was that no one had ever encountered such a requirement. I decided to find out whether BizTalk supports these unique constraints and whether the XML Validator pipeline component could be used to enforce them.

The first thing I discovered is that there is no way in the schema designer to define unique constraints, at least not that I could find. I decided I would follow the W3C guidelines and handcraft my constraint. Take a look at the below schema and note how the constraint I’ve defined prevents the same SupplierName value from being used across two Purchase records.

[Screenshot: SupplierName unique constraint]
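A handcrafted constraint along these lines looks something like the below sketch (element names follow the examples in this post, with the Purchase record definition omitted; per the W3C rules, the xs:unique element must sit inside the declaration of the element it is scoped to):

```xml
<xs:element name="Purchases">
  <xs:complexType>
    <xs:sequence>
      <!-- Purchase record definition (SupplierName etc.) omitted for brevity -->
      <xs:element name="Purchase" maxOccurs="unbounded" />
    </xs:sequence>
  </xs:complexType>
  <!-- No two Purchase records may share the same SupplierName value -->
  <xs:unique name="SupplierNameConstraint">
    <xs:selector xpath="Purchase" />
    <xs:field xpath="SupplierName" />
  </xs:unique>
</xs:element>
```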

Validating the below instance file against the schema returns an error as expected, since a duplicate SupplierName of Redmond has been used in two of the Purchase records. The error is reasonably detailed, advising the name of the unique constraint that was not respected as well as which duplicate value broke the constraint. Providing unique values for the SupplierName elements results in the instance file validating successfully. Note that the XML editor in Visual Studio also highlights the fact that a constraint has been violated, as in the below screenshot (look at the close tag of the second Purchase record).

[Screenshot: SupplierName constraint violated]

Now what if we wanted to add a second constraint, on the PurchaseOrder element as well? Not a problem, just add a second unique constraint. Both constraints must now be respected in order for an instance file to validate successfully.

[Screenshot: two constraints]

Another interesting scenario is to extend the duplicate checking to combinations of multiple elements. Let’s throw attributes into the mix as well. The below unique constraint now allows for duplicate SupplierName or PurchaseOrder elements or duplicate PurchaseDate attributes, but not duplicates of all three in combination.

[Screenshot: combination constraint including an attribute]
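Sketched as a constraint (names illustrative), the combination check simply lists multiple fields, with the attribute addressed via the @ prefix in its XPath:

```xml
<!-- Duplicate individual values are fine; only the combination of all three
     values repeating across Purchase records violates the constraint. -->
<xs:unique name="CombinationConstraint">
  <xs:selector xpath="Purchase" />
  <xs:field xpath="SupplierName" />
  <xs:field xpath="PurchaseOrder" />
  <xs:field xpath="@PurchaseDate" />
</xs:unique>
```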

We can also extend the duplicate check to elements in contained records. In the below example the ItemBarCode element in the child ItemDetails record has now been added to the unique constraint.

[Screenshot: constraint including a child record element]
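Fields in contained records are addressed with a relative XPath from the node matched by the selector, as in this sketch (constraint name illustrative):

```xml
<xs:unique name="ChildElementConstraint">
  <xs:selector xpath="Purchase" />
  <!-- Relative path down into the contained ItemDetails record -->
  <xs:field xpath="ItemDetails/ItemBarCode" />
</xs:unique>
```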

Running a duplicate message through an actual XML Validator pipeline component also throws the error as expected.

[Screenshot: pipeline failure message]

Now what if one of the elements in your duplicate check is optional? In the below screenshot the optional ItemDescription element has been added to the unique constraint, and even though neither of the Purchase records contains an ItemDescription element and all the other elements in the constraint are duplicated, the constraint is not deemed to be violated. The constraint is only violated if the ItemDescription element is specified with a duplicate value; if it is missing, even in two records in which every other element contains duplicate values, the constraint won’t be violated.

[Screenshot: optional elements don’t count]

Another scenario… what if the element in your duplicate check is a repeating element? I extended the duplicate check to the SpecialDeals element as well, and what I found was that as soon as I had more than one SpecialDeals element in a given Purchase/ItemDetails record I would get an error stating that only one element was expected. Adding an element to a unique constraint disallows it from being a repeating element.

[Screenshot: repeating element error]

(UPDATE 16/12/2013 – This post used to contain a further section that detailed problems I encountered back in June 2013 when dealing with elements or attributes which belong to a namespace in unique constraints.  At the time I could not figure out how to get them to work and I never managed to find any supporting information to help me, however I have now found out the correct syntax for this issue and have added a blog post detailing this here so I have now deleted the remaining sections of the post so as not to mislead anyone into thinking that including qualified elements/attributes in unique constraints are not supported.)

Most developers who have worked with BizTalk for a while will realize the benefits of using direct binding on orchestration ports to improve flexibility and loose coupling. What is not often realized is that using direct rather than specify-later port binding comes with a bit of a trade-off: the increased flexibility results in less hand-holding for administrators, and it is altogether possible for them to stop or unenlist an orchestration or send port, resulting in routing failures and unprocessed messages which might be difficult to replay. Seeing as guaranteed delivery is one of the biggest selling points on most projects involving BizTalk Server, this is too big a hole to overlook. In this blog post I will detail how hand-holding is relaxed for direct binding and introduce error handling patterns that can be used in orchestrations to overcome routing failures.

One of the things most BizTalk administrators might notice when orchestrations use direct binding is that starting an orchestration no longer requires starting all the send ports that the orchestration publishes messages to. At run time this could mean that the loose coupling offered by direct binding has removed the guarantee that subscribers to your published messages will be active. If an orchestration publishes a message for which there are no subscribers then a non-resumable routing failure instance will be raised and a PersistenceException will be thrown in the orchestration.

I have had some pretty concerned colleagues ask me about the dreaded PersistenceException. This is nothing more than a curiously named exception representing a routing failure. The way I tend to deal with such exceptions in guaranteed delivery scenarios is to create a scope around my send shape (possibly around other closely associated shapes as well) and to catch the PersistenceException (this is in the Microsoft.XLANGs.BaseTypes namespace and its assembly is referenced by default in BizTalk Server projects). If the exception is encountered then I raise an alert to administrators (via the ESB portal or the relevant exception handling framework) advising of the routing failure, and suspend the orchestration instance. Once the administrators have fixed the problem they can resume the orchestration, which will loop back to before the send shape and resend the message.

[Screenshot: single-message exception handling pattern]

Now of course this pattern comes with a bit more effort in terms of development but one has to ask whether their system can afford to lose a message or at the very least have to go through painful and manual message replay processes.

Another fun scenario I’ve encountered that can go very wrong is taking advantage of direct binding in orchestrations in tandem with a publish subscribe pattern with multiple subscribers to the same message. Say you have an orchestration that sends out a message which is subscribed to by a logging/auditing port and another port which actually performs an important action such as updating a LOB system. If the LOB send port was in an unenlisted state then the message would be directed to your auditing send port, no PersistenceException would be raised in your orchestration and the message would never update the LOB system. So much for guaranteed delivery…

What I would do in this case is create two copies of the same message in the orchestration, each with some sort of instructional context property used to direct the message to the relevant send port (the values of this context property being set to abstract values such as “Audit” or “UpdateLOB” rather than the name of the send port, so as not to steer too far away from loose coupling). I would then wrap the send shapes for the two messages in an atomic scope so that only one persistence point is encountered when the messages are sent out (this is a whole other subject, but it is important to keep your persistence points, and thus the I/O against your SQL Server, to a minimum), and wrap the atomic scope in a long-running scope which catches a PersistenceException. Finally, I implement the same exception handling pattern I mentioned earlier in this post.

[Screenshot: multiple-message exception handling pattern]

Once again this comes at a greater effort in development and introduces more overhead on the BizTalk runtime, muddies up your orchestration with logic which is arguably plumbing rather than business logic, and somewhat takes away from how loosely coupled your orchestration is.  That said it does ensure guaranteed delivery.

The exception handling patterns that I’ve discussed here stem from my own cautious nature but I have seen them pay off in multiple instances and have also seen the cost when such thought isn’t applied when required. I wouldn’t say they are required in every orchestration but I would at least encourage developers to think of these scenarios when they decide to what extent they are going to handle exceptions.  At the very least if such patterns aren’t implemented do think about putting a section or at least a blurb in your support documentation to discuss how to recover from such failures.
