

The book is now available, well done co-authors, reviewers and Packt Publishing team.

Reblogged from Connected Pawns:

I am pleased to announce that “SOA Patterns with BizTalk Server 2013 and Microsoft Azure – Second Edition” by Mark Brimble, Johann Cooper, Coen Dijgraaf and Mahindra Morar can now be purchased on Amazon or from Packt. This is based on the first edition written by Richard Seroter. Johann has given a nice preview of the book here.

This is the first book I have co-authored and I can now tick that off my bucket list. It was an experience to work with such talented co-authors, and I was continually amazed at how they rose to the occasion. I agree with Johann’s statement about being privileged to write the second edition of what was the first BizTalk book I ever owned.

As I updated chapters I was continually struck by how little had changed. We added new chapters, Chapter 4, Chapter 5, and Chapter 6…


A couple of months ago I released the BRE Pipeline Framework v1.5 (since superseded by v1.5.1) to CodePlex.  One of the new features in this version of the framework is support for dynamic transformation.  In this blog post I’ll explain some scenarios in which this feature might be useful to you and show you how you can use the BRE Pipeline Framework to execute your maps dynamically.

 

Why you’d want to use the BRE Pipeline Framework for Dynamic Transformation

The first reason to take advantage of the BRE Pipeline Framework for Dynamic Transformation is to build messaging-only applications that support hot deployments for maps.  From a BizTalk perspective, a hot deployment is one in which a BizTalk application doesn’t need to be stopped while you redeploy your map assemblies; the only requirement is that the relevant host instances are restarted after the deployment is complete.  In a messaging application with maps on your receive/send ports you might find that BizTalk complains if you try to import an MSI containing maps that are used on receive/send ports while the application is in a running state; in fact you might even need to delete the receive/send ports altogether, or at the very least remove the maps from the ports, before your redeployment takes effect.  A hot deployment requires no more than a momentary outage of your application while host instances restart, which fits in well with an application that is not tolerant to outages.

The second reason to take advantage of the BRE Pipeline Framework for Dynamic Transformation is transformation selectivity.  When you apply inbound/outbound maps on your receive/send ports, BizTalk chooses the first map whose source schema matches the message type of the message in question.  If you apply two maps with the same source schema on a receive/send port there is no way to specify any additional conditions that determine which map executes; the second map will always be ignored.  With the BRE Pipeline Framework you can apply complex conditions that combine checking message body content through XPath statements or regexes, checking values in the message context, checking values from SSO configuration stores or EDI trading partner agreements, or even checking the current time of day.  The best part is that if the selectivity functionality already exists in the BRE Pipeline Framework (which it does in all the mentioned scenarios and many more) you can achieve all of this with zero lines of code, and if the functionality doesn’t exist then the framework provides extensibility points to cater for it.  Another great advantage of using the BRE to choose which maps get executed is that you can change the selectivity rules at runtime if required without any code changes, which offers a lot of flexibility for applications that are not tolerant to outages.

The third reason to take advantage of the BRE Pipeline Framework for Dynamic Transformation is so that you can chain maps.  As mentioned above, on a BizTalk receive/send port only the first matching map will execute; it is not possible to execute multiple maps sequentially within a port in a single direction (you can of course specify one inbound and one outbound map, thus two if your port is a two-way port).  In a messaging-only solution in which you receive a message and send it out on a send port, the maximum number of maps you can execute is two: one on the receive port in the inbound direction, and one on the send port in the outbound direction.  For the most part this is adequate, but there might be some scenarios where it isn’t.  With the BRE Pipeline Framework you can specify as many maps as you like to execute sequentially, within a single rule or across multiple rules (make sure you set priorities across the different rules to guarantee the required map execution order), or you can execute a map in your BRE Policy and then apply one on your port as well (keep in mind that for an inbound message the pipeline executes before the port maps, and the reverse is true for outbound messages).  Beyond just chaining maps, you can also chain other Instructions contained within the BRE Pipeline Framework together with your maps; you could, for example, execute a map dynamically and then perform a string find/replace against the message body.

Another possible benefit of dynamic transformation is avoiding inter-application references, which are possibly one of the most painful aspects of BizTalk Server.  When you want to pass a message from one application to another you have almost no choice but to have one application reference the other (the one being referenced containing the schema).  Inter-application references make deployments a lot more difficult since you can’t update the referenced application without deleting all the referencing applications first (forget about hot deployments altogether; this is the other extreme).  To get around this problem you could potentially have separate schemas in each application, create a map in an assembly that only gets deployed to the GAC rather than to the BizTalk Management Database, and then use the BRE Pipeline Framework to transform the message in a pipeline (in either application; it shouldn’t matter).  This could allow you to create more decoupled applications with much easier deployment models.

The aforementioned benefits can also be achieved through the use of the ESB Toolkit.  The main difference between the BRE Pipeline Framework and the ESB Toolkit is that the former is intended to provide utility whereas the latter is more about implementing the routing slip pattern, with added utility as well.  The ESB Toolkit comes with a fair learning curve as there are a whole lot of new concepts to wrap your head around, such as itineraries, resolvers, messaging extenders, orchestration extenders, etc., and you’ll find that at least the out-of-the-box utility can be quite limited (there are of course community projects that have improved on these shortcomings).  I definitely see valid scenarios in which either framework should be used, and wouldn’t consider them to be competing frameworks; they could even be used in tandem.

One final reason that I can think of off the top of my head is traceability.  Given that the BRE Pipeline Framework caters for tracing based on the CAT Instrumentation Framework and also provides rules execution logs (see this post for more info), you can always tell why a map was chosen for a given message.  This can be especially handy when you are debugging a BizTalk application.

Just like with the ESB Toolkit, executing a map dynamically within your pipeline goes against one of the best-practice principles of building streaming pipeline components, so please carefully evaluate the pros and cons of using this feature before implementing it.

 

Implementing Dynamic Transformation with the BRE Pipeline Framework

To illustrate how to use the BRE Pipeline Framework to execute maps dynamically, I will provide you with, and walk you through, a simple example solution.  The solution contains three schemas: one called PurchaseOrder, one called PurchaseOrderEDI, and one called PurchaseOrderXML.  All the schemas contain an element (with varying names) which holds the purchase order number.  The PurchaseOrder schema also contains an additional node called OrderType, which is linked to a promoted property of the same name.  The rule we want to put in place for PurchaseOrder messages is as follows: if the value of the OrderType context property is XML then we want to execute a map that converts the message to a PurchaseOrderXML, and if the result of an XPath query against the OrderType node is EDI then we want to execute a map that converts the message to a PurchaseOrderEDI.
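For illustration, an inbound PurchaseOrder instance might look something like the below.  The http://BREMaps.PurchaseOrder namespace and root node name come from the message type shown in the trace output later in this post; PONumber is a hypothetical stand-in for the purchase order number element, and child elements are assumed to be unqualified.

<ns0:PurchaseOrder xmlns:ns0="http://BREMaps.PurchaseOrder">
  <PONumber>PO-12345</PONumber>  <!-- hypothetical element name; each schema uses its own name for this element -->
  <OrderType>XML</OrderType>     <!-- linked to the promoted OrderType property -->
</ns0:PurchaseOrder>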

On to the implementation.  You will of course need to download and install at least v1.5.1 of the framework from the CodePlex site and import the required vocabularies from the program files folder.  Once that’s done, create a receive pipeline (receive and send pipelines are both supported) and drag the BREPipelineFrameworkComponent from the toolbox to the Validate pipeline stage (you can choose any stage except Disassemble/Assemble); if the component isn’t already in your toolbox, add it by right-clicking within the toolbox, choosing “Choose Items”, and selecting the component from the Pipeline Components tab.  The only parameter you have to set on the pipeline component is ExecutionPolicy, which specifies which BRE Policy will be called to resolve the maps to execute (you could optionally specify the ApplicationContext parameter if you plan on calling the BRE Policy from multiple pipelines and want some rules to only apply for certain pipelines).  For the purpose of this example we will use an XML Disassembler component prior to the BREPipelineFrameworkComponent, and ensure the StreamsToReadBeforeExecution parameter on the BREPipelineFrameworkComponent is left at its default value of Microsoft.BizTalk.Component.XmlDasmStreamWrapper so that we are able to inspect context property values promoted by the XML Disassembler (see this post for more info).

[Image: Pipeline - the receive pipeline with an XML Disassembler followed by the BREPipelineFrameworkComponent in the Validate stage]
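For this walkthrough, the parameter values on the BREPipelineFrameworkComponent end up as below; the ExecutionPolicy and TrackingFolder values can be seen echoed back in the trace output later in this post, and the rest are left at their defaults.

ExecutionPolicy = BREMaps
TrackingFolder = c:\temp
StreamsToReadBeforeExecution = Microsoft.BizTalk.Component.XmlDasmStreamWrapper (the default)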

Once all the components are deployed to BizTalk we’ll create a receive location that picks up a file and makes use of the aforementioned receive pipeline, as well as a file send port that subscribes to messages from this receive port.  Finally we’ll create the BRE Policy which contains two rules.

The first rule is used to transform messages to the PurchaseOrderXML message format, as below.  The rule is made up of a single condition which uses the GetCustomContextProperty vocabulary definition from the BREPipelineFramework.SampleInstructions.ContextInstructions vocabulary to evaluate the value of a custom context property.

[Image: MapToXMLRule - the BRE rule that transforms PurchaseOrder messages to the PurchaseOrderXML format]

The second rule is used to transform messages to the PurchaseOrderEDI message format, as below.  The rule is made up of a single condition which uses the GetXPathResult vocabulary definition from the BREPipelineFramework.SampleInstructions.HelperInstructions vocabulary to evaluate the value of a node within the message body with the use of an XPath statement.

[Image: MapToEDIRule - the BRE rule that transforms PurchaseOrder messages to the PurchaseOrderEDI format]
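Stated in plain text, the two rules boil down to something like the below.  This is a paraphrase of the screenshots rather than the exact vocabulary wording (the second rule’s name and the EDI map name are assumed; only the first rule’s name and map appear in the logs later in this post), and the exact action syntax is described in the next paragraph.

Rule “Map To XML Format”:
    IF GetCustomContextProperty(“OrderType”, …) == “XML”
    THEN TransformMessage(BREMaps.PurchaseOrder_To_PurchaseOrderXML, …)

Rule “Map To EDI Format”:
    IF GetXPathResult(<XPath to the OrderType node>) == “EDI”
    THEN TransformMessage(BREMaps.PurchaseOrder_To_PurchaseOrderEDI, …)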

Both of the aforementioned rules make use of the TransformMessage vocabulary definition from the BREPipelineFramework.SampleInstructions.HelperInstructions vocabulary in their actions to apply a map against the message.  The input format of the vocabulary definition is as follows – Execute the map {0} in fully qualified assembly {1} against the current message – {2}.  The first parameter in this vocabulary definition is the fully qualified map name (.NET namespace + .NET type), and the second parameter is the fully qualified assembly name including the assembly version and the PublicKeyToken (you can run gacutil -l with the assembly name from a Visual Studio Command Prompt to get the fully qualified assembly name).  The third parameter is used to specify what sort of validation is performed against the input message before executing the map, and is an enumeration with the below values.

  • ValidateSourceSchema – This option validates the current message’s BTS.MessageType context property against the source schema of the specified map.  If they don’t match then an exception is thrown rather than the map being executed.  If a message type is not available then an exception will be thrown.
  • ValidateSourceSchemaIfKnown – This option validates the current message’s BTS.MessageType context property against the source schema of the specified map.  If they don’t match then an exception is thrown rather than the map being executed.  If a message type is not available then no exception will be thrown and the map will execute.
  • DoNotValidateSourceSchema – This option performs no validation of the BTS.MessageType context property on the current message and results in the map being executed regardless, possibly resulting in a runtime error during execution of your map or an empty output message.  If a message type is not available then no exception will be thrown and the map will execute.  I haven’t experimented with this myself, but it might allow you to create generic maps which apply generic XSLT against varying input messages to create a given output message format.  If anyone decides to experiment with this then please do let me know your results.
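As a concrete example, the action of the first rule, filled in with the names used in this walkthrough, would read as below.  The map and assembly names can be seen in the trace output later in this post; the choice of ValidateSourceSchema as the third parameter is just for illustration.

Execute the map BREMaps.PurchaseOrder_To_PurchaseOrderXML in fully qualified assembly BREMaps, Version=1.0.0.0, Culture=neutral, PublicKeyToken=21bb7669ee013ee3 against the current message – ValidateSourceSchema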

That’s all that is required to stitch together a solution that uses the BRE Pipeline Framework for dynamic transformation.  If you push through a PurchaseOrder message with an OrderType of XML then it will get converted to a PurchaseOrderXML message; if the OrderType is EDI then it will get converted to a PurchaseOrderEDI message; and if the OrderType is anything else then the message will remain a PurchaseOrder, as expected.

As previously mentioned, the BRE Pipeline Framework comes with a lot of traceability features (also documented here).  If you set the CAT Instrumentation Framework Controller to capture pipeline component trace output you will get information such as the below, which tells you which map is getting executed, what the source message type is, and what the destination message type is.

[Image: EventTrace - the CAT Instrumentation trace output, transcribed below]
[3]1FF4.2690::08/16/2014-22:13:01.245 [Event]:TRACEIN: BREPipelineFramework.PipelineComponents.BREPipelineFrameworkComponent.TraceIn() => [102f63bb-2c86-4213-a892-2a5175569469]
[3]1FF4.2690::08/16/2014-22:13:01.245 [Event]:START -> 102f63bb-2c86-4213-a892-2a5175569469
[3]1FF4.2690::08/16/2014-22:13:01.245 [Event]:102f63bb-2c86-4213-a892-2a5175569469 - BRE Pipeline Framework pipeline component has started executing with an application context of , an Instruction Execution Order of RulesExecution and an XML Facts Application Stage of BeforeInstructionExecution.
[3]1FF4.2690::08/16/2014-22:13:01.245 [Event]:102f63bb-2c86-4213-a892-2a5175569469 - BRE Pipeline Framework pipeline component has an optional Execution policy paramater value set to BREMaps.
[3]1FF4.2690::08/16/2014-22:13:01.245 [Event]:102f63bb-2c86-4213-a892-2a5175569469 - BRE Pipeline Framework pipeline component has an optional tracking folder paramater value set to c:\temp.
[3]1FF4.2690::08/16/2014-22:13:01.245 [Event]:102f63bb-2c86-4213-a892-2a5175569469 - Inbound message body had a stream type of Microsoft.BizTalk.Component.XmlDasmStreamWrapper
[3]1FF4.2690::08/16/2014-22:13:01.245 [Event]:102f63bb-2c86-4213-a892-2a5175569469 - Inbound message body stream was not seekable so wrapping it with a ReadOnlySeekableStream
[3]1FF4.2690::08/16/2014-22:13:01.246 [Event]:102f63bb-2c86-4213-a892-2a5175569469 - Reading stream to ensure it's read logic get's executed prior to pipeline component execution
[1]1FF4.2690::08/16/2014-22:13:01.255 [Event]:102f63bb-2c86-4213-a892-2a5175569469 - Adding MetaInstruction BREPipelineFramework.SampleInstructions.MetaInstructions.CachingMetaInstructions to Execution Policy facts.
[1]1FF4.2690::08/16/2014-22:13:01.255 [Event]:102f63bb-2c86-4213-a892-2a5175569469 - Adding MetaInstruction BREPipelineFramework.SampleInstructions.MetaInstructions.ContextMetaInstructions to Execution Policy facts.
[1]1FF4.2690::08/16/2014-22:13:01.255 [Event]:102f63bb-2c86-4213-a892-2a5175569469 - Adding MetaInstruction BREPipelineFramework.SampleInstructions.MetaInstructions.HelperMetaInstructions to Execution Policy facts.
[1]1FF4.2690::08/16/2014-22:13:01.255 [Event]:102f63bb-2c86-4213-a892-2a5175569469 - Adding MetaInstruction BREPipelineFramework.SampleInstructions.MetaInstructions.MessagePartMetaInstructions to Execution Policy facts.
[1]1FF4.2690::08/16/2014-22:13:01.255 [Event]:102f63bb-2c86-4213-a892-2a5175569469 - Adding MetaInstruction BREPipelineFramework.SampleInstructions.MetaInstructions.XMLTranslatorMetaInstructions to Execution Policy facts.
[1]1FF4.2690::08/16/2014-22:13:01.265 [Event]:102f63bb-2c86-4213-a892-2a5175569469 - Executing Policy BREMaps 1.0
[0]1FF4.2690::08/16/2014-22:13:01.277 [Event]:102f63bb-2c86-4213-a892-2a5175569469 - Adding Instruction BREPipelineFramework.SampleInstructions.Instructions.TransformationInstruction to the Instruction collection with a key of 0.
[0]1FF4.2690::08/16/2014-22:13:01.277 [Event]:102f63bb-2c86-4213-a892-2a5175569469 - Starting to execute all MetaInstructions.
[0]1FF4.2690::08/16/2014-22:13:01.277 [Event]:102f63bb-2c86-4213-a892-2a5175569469 - Executing instruction BREPipelineFramework.SampleInstructions.Instructions.TransformationInstruction.
[0]1FF4.2690::08/16/2014-22:13:01.277 [Event]:102f63bb-2c86-4213-a892-2a5175569469 - Applying transformation BREMaps.PurchaseOrder_To_PurchaseOrderXML,   BREMaps, Version=1.0.0.0, Culture=neutral, PublicKeyToken=21bb7669ee013ee3 to the message
[0]1FF4.2690::08/16/2014-22:13:01.277 [Event]:102f63bb-2c86-4213-a892-2a5175569469 - Message is being transformed from message type http://BREMaps.PurchaseOrder#PurchaseOrder to message type http://BREMaps.PurchaseOrderXML#PurchaseOrderXML
[0]1FF4.2690::08/16/2014-22:13:01.278 [Event]:END <- 102f63bb-2c86-4213-a892-2a5175569469: 32ms
[0]1FF4.2690::08/16/2014-22:13:01.278 [Event]:TRACEOUT: BREPipelineFramework.PipelineComponents.BREPipelineFrameworkComponent.Execute(...) = "102f63bb-2c86-4213-a892-2a5175569469"

If you set the TrackingFolder parameter on the BREPipelineFrameworkComponent pipeline component to a valid folder then you will get output like the below (note this is just an excerpt), which provides valuable information telling you which BRE rules fired and why.

CONDITION EVALUATION TEST (MATCH) 16/08/2014 10:13:01 p.m.
Rule Engine Instance Identifier: f2c966cf-b248-4e94-a96a-99d110d59a9b
Ruleset Name: BREMaps
Test Expression: BREPipelineFramework.SampleInstructions.MetaInstructions.ContextMetaInstructions.GetContextProperty == XML
Left Operand Value: XML
Right Operand Value: XML
Test Result: True

CONDITION EVALUATION TEST (MATCH) 16/08/2014 10:13:01 p.m.
Rule Engine Instance Identifier: f2c966cf-b248-4e94-a96a-99d110d59a9b
Ruleset Name: BREMaps
Test Expression: BREPipelineFramework.SampleInstructions.MetaInstructions.HelperMetaInstructions.GetXPathResult == EDI
Left Operand Value: XML
Right Operand Value: EDI
Test Result: False

RULE FIRED 16/08/2014 10:13:01 p.m.
Rule Engine Instance Identifier: f2c966cf-b248-4e94-a96a-99d110d59a9b
Ruleset Name: BREMaps
Rule Name: Map To XML Format
Conflict Resolution Criteria: 0

One more thing worth mentioning is that once the BRE Pipeline Framework executes a map, it promotes the output message type to the BTS.MessageType context property, just as a map on a port would.  This means that you can reliably create routing filters based on the BTS.MessageType context property when you make use of the dynamic transformation feature in the BRE Pipeline Framework.
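For example, a send port that should only pick up the messages transformed to the XML format in this walkthrough could use a filter like the below (the message type is taken from the trace output above):

BTS.MessageType == http://BREMaps.PurchaseOrderXML#PurchaseOrderXML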

The aforementioned solution is available for download here.  I’ve included the Visual Studio solution with the source code, an export of the BRE Policy, and an MSI installer which will create the example application for you (you might need to reconfigure the folders on the file receive location and send port based on where you unzip the solution, and you might have to grant full control permissions on these folders to your host instance user).  I’ve also included some example XML messages for your convenience.

Happy transforming.

While implementing dynamic transformation in the BRE Pipeline Framework I ran into an interesting problem.  In BizTalk 2013 Microsoft changed the way transformations are executed to be based on XslCompiledTransform rather than the long-deprecated XslTransform, which delivers performance benefits in the mapping engine.  This, however, is a breaking change for all those who chose to implement dynamic transformation via custom .NET code in prior versions of BizTalk.  My specific problem was that I wanted to implement dynamic transformation in the BRE Pipeline Framework without forking the code to provide separate BizTalk 2010 and 2013+ support.

The code for BizTalk 2010 dynamic transformations in the BRE Pipeline Framework looks like the below (note that it has been truncated to make it easier to view; visit the CodePlex page if you’d like to see the full source code).

// Fetch the map's metadata (source/target schemas, XSLT and extension objects) from its .NET type
TransformMetaData transformMetaData = TransformMetaData.For(mapType);
SchemaMetadata sourceSchemaMetadata = transformMetaData.SourceSchemas[0];
string schemaName = sourceSchemaMetadata.SchemaName;   // used for source schema validation (truncated here)
SchemaMetadata targetSchemaMetadata = transformMetaData.TargetSchemas[0];

// Load the original message body and run the map's XSLT over it,
// writing the result to a VirtualStream which then replaces the message body
XPathDocument input = new XPathDocument(inmsg.BodyPart.GetOriginalDataStream());
XslTransform transform = transformMetaData.Transform;
Stream output = new VirtualStream();
transform.Transform(input, transformMetaData.ArgumentList, output, new XmlUrlResolver());
output.Position = 0;
inmsg.BodyPart.Data = output;

The above wouldn’t build on a BizTalk 2013 development machine, since an ITransform object was expected instead of an XslTransform object.  The working code looks like the below.

TransformMetaData transformMetaData = TransformMetaData.For(mapType);
SchemaMetadata sourceSchemaMetadata = transformMetaData.SourceSchemas[0];
string schemaName = sourceSchemaMetadata.SchemaName;
SchemaMetadata targetSchemaMetadata = transformMetaData.TargetSchemas[0];

XPathDocument input = new XPathDocument(inmsg.BodyPart.GetOriginalDataStream());
ITransform transform = transformMetaData.Transform;   // BizTalk 2013: the Transform property now returns an ITransform
Stream output = new VirtualStream();
transform.Transform(input, transformMetaData.ArgumentList, output, new XmlUrlResolver());
output.Position = 0;
inmsg.BodyPart.Data = output;

Note that the major point of difference between the two code snippets above is the type of the transform variable.  In order to cater for both scenarios I decided to take advantage of .NET 4’s dynamic type feature: instead of specifying a class name (XslTransform or ITransform) I use the dynamic keyword, as below.

dynamic transformMetaData = TransformMetaData.For(mapType);   // dynamic: this class lives in a different namespace in BizTalk 2013
SchemaMetadata sourceSchemaMetadata = transformMetaData.SourceSchemas[0];
string schemaName = sourceSchemaMetadata.SchemaName;
SchemaMetadata targetSchemaMetadata = transformMetaData.TargetSchemas[0];

XPathDocument input = new XPathDocument(inmsg.BodyPart.GetOriginalDataStream());
dynamic transform = transformMetaData.Transform;   // dynamic: XslTransform in BizTalk 2010, ITransform in BizTalk 2013
Stream output = new VirtualStream();
transform.Transform(input, transformMetaData.ArgumentList, output, new XmlUrlResolver());
output.Position = 0;
inmsg.BodyPart.Data = output;

Note that in the above I also had to use the dynamic keyword in place of the TransformMetaData type since this class appears to belong to a different namespace in BizTalk 2013 compared to prior versions.

The dynamic keyword instructs the compiler not to perform any validation on methods/properties called on that object (so no IntelliSense) and to instead assume that the developer knows what he is doing.  The object type is resolved at runtime, and if any of the called methods/properties don’t exist then that results in a runtime error.
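As a minimal standalone illustration (not taken from the framework source), the below shows both sides of that trade-off: the misspelled member compiles fine but blows up at runtime.

using System;

class DynamicDemo
{
    static void Main()
    {
        dynamic message = "hello";          // the actual type (string) is only bound at runtime
        Console.WriteLine(message.Length);  // resolved at runtime, prints 5
        Console.WriteLine(message.Lenght);  // compiles, but throws Microsoft.CSharp.RuntimeBinder.RuntimeBinderException at runtime
    }
}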

This of course is only a valid solution if you are targeting .NET 4.0 at a minimum, since the dynamic feature didn’t exist in previous versions; that covers solutions targeting BizTalk 2010 and above.  I would also encourage any BizTalk 2010 shops that are dabbling in dynamic transformation to future-proof their solutions by using the dynamic keyword.

This of course only scratches the surface of dynamic types; if you want to read more, check out this MSDN article.  I would definitely encourage thorough unit testing (as was the case for the BRE Pipeline Framework) to make up for the loss of compile-time validation.

The thing I like best about working on group projects is that you get to pick up some really good tricks from others; tricks they might take for granted, but that you had no idea existed. On my current project my young and bright colleague Shikhar Bhagat showed me a trick with the BizTalk 2010 map designer (not sure if this applies to previous versions) that just floored me and is sure to improve my productivity when developing maps in a huge way. I thought it a good idea to share this trick so that others who might have no idea that this is possible can benefit from it.

Oftentimes with maps we end up doing a round of development and testing only to find that we have linked the wrong elements from the source to the destination nodes, or that we’ve used the wrong functoids (I remember that some previous versions of the mapping designer allowed you to overwrite a functoid by dragging and dropping another functoid over an existing one, but this doesn’t appear to be possible from BizTalk 2010 onwards). When dealing with complex schemas and maps it can be especially hard to make corrections, especially if you have to delete the current link and then draw in the new link, possibly forgetting how things link up.

Luckily Shikhar showed me that this is not at all necessary (at least in BizTalk Server 2010/Visual Studio 2010). When you want to move the source or the destination of a link, all you have to do is select the link by clicking on it, then drag and drop the little blue square at the beginning or end of the link (if the source/destination of your link is a schema element then you will see the little blue square at the border of your schema pane) to the desired source/destination. I hope the below video helps to illustrate my point.

 

In the previous installment of this series on refactoring BizTalk maps I discussed how you could take the easy road to ensure your maps keep working when the target namespace of a schema changes, or when the schema is moved to a different project or has its .NET type name changed. In this new entry in the series we will explore further variations on scenarios that could force you down the refactoring path.

A question was posed to me on the previous installment about how best to identify when a map needs to be refactored. In most cases, when a schema changes and you open a map you will see a warning message advising that some links might have been lost because the referenced nodes do not exist in the schema (make sure you don’t save the map after seeing this dialog box, or you will lose your chance to refactor). However, if your map makes use of an external XSLT file then you will not get such an obvious indication that refactoring is necessary. I would suggest that regression unit testing is the way to go to identify that a map needs to be refactored, and I suggest you read my previous post discussing the BizTalk Map Testing Framework (keep in mind that there are alternative means to test your maps, but this is my favorite) if you want a primer on this.

Disclaimer: when manually editing any BizTalk artifacts (only recommended for schemas and maps) you should always make sure that you have checked a prior working version of your source code into a source control repository, or made a copy of the components in a safe location. Proceed with caution.

 

Refactoring for changes to root node names in one of your schemas

This is one of the most common refactoring scenarios faced when dealing with BizTalk maps, especially in the early stages of development while specifications are still more fluid than a developer might like them to be. Let’s assume that to start with the map looks like the below (note that while the root nodes on the source and destination messages are both called source, they are based on different schemas and have different target namespaces).

[Image: RootNodesSame - the starting map, with both root nodes named source]

Now what if the root node name on the destination schema is changed from source to destination? If you try to open the map using the default map designer you will see that all the links to the destination message have broken. As in the previous examples, you will want to open the map in XML view (using your favorite XML editor, or by right-clicking on the map in Visual Studio and choosing Open With -> XML (Text) Editor), paying particular attention to the sections I have highlighted in the below screenshot.

[Image: RootNodesSameSource - the map XML with the TrgTree node and LinkTo XPaths highlighted]

The first section you will want to pay attention to is the TrgTree node (it would have been SrcTree if the change was to our source schema); change the value of its RootNode_Name attribute to the new root node name. The next thing you will want to do is a find/replace-all over the XPath statements to the root node contained within the LinkTo attributes (LinkFrom if the change was to our source schema). In this case we would be performing a find/replace as below.

[Image: RootNodesSameSourceFindReplace - the find/replace updating the root node XPaths]
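For reference, the edit boils down to something like the below within the .btm file (an illustrative fragment only: attribute order and escaping may differ in your map, and Element1 is just a placeholder for whatever node a given link points at).

Before:
<TrgTree RootNode_Name="source" ... />
<Link ... LinkTo="/*[local-name()='&lt;Schema&gt;']/*[local-name()='source']/*[local-name()='Element1']" />

After:
<TrgTree RootNode_Name="destination" ... />
<Link ... LinkTo="/*[local-name()='&lt;Schema&gt;']/*[local-name()='destination']/*[local-name()='Element1']" />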

Once the above has been done you should once again be able to open the map using the default map designer.

 

Refactoring for changes to record/element/attribute names in one of your schemas

Refactoring for changes to record/element/attribute names in schemas is definitely the most common refactoring scenario you will encounter. While a change to the name of a single element that is only used once in a simple map might not cause many problems, and the developer should simply opt to fix up the map using the designer, oftentimes maps can be very complex and there might be many links to or from the same element which the developer does not want to revisit. Of course, if a record name were to change then that would cause all the links to/from the underlying elements to break as well, and most developers will want to avoid this scenario at all costs.

Let’s start by exploring changes to a record name using the below starting point (note that AnAttribute in the source schema is an aptly named attribute).

[Image: NewStartingPoint - the starting map for the record/element/attribute examples]

If we change the name of the Dates record in the source schema to RelevantDates then we once again have to open up the map in an XML viewer.

[Image: ChangingRecordNameSource - the map XML showing the LinkFrom XPaths through the Dates record]

In this case we can be pretty confident that there is only one node named Dates, so we could theoretically just do a find/replace which replaces all instances of the word Dates with RelevantDates, and that would get us going. It would be a lot safer, however, to run the below find/replace (replacing the XPath contained within the LinkFrom attribute to the changed record), after which we can once again open our map in the map designer with all our links preserved.

[Image: ChangingRecordNameSourceReplace - the find/replace updating the Dates record XPaths]
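Spelled out, the find/replace from this example would look something like the below (illustrative, assuming the source root node is named source as in the earlier example; copy the exact XPath out of your own map file rather than typing it by hand):

Find:    /*[local-name()='&lt;Schema&gt;']/*[local-name()='source']/*[local-name()='Dates']
Replace: /*[local-name()='&lt;Schema&gt;']/*[local-name()='source']/*[local-name()='RelevantDates']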

To demonstrate refactoring to cater for a changed element name, let’s assume that the name of the destination element called Element2 has changed to Age. Once again, let’s open the map in an XML view.

[Image: ChangedElementNameSource - the map XML showing the LinkTo XPaths to Element2]

Once again we could simply search for Element2 and replace it with Age, but the safer approach is to do a find/replace on the XPath statement in the LinkTo attribute, right from the root element down to the changed node, as below.

[Image: ChangedElementNameSourceReplace - the find/replace updating the Element2 XPaths]

If we wanted to change the attribute called AnAttribute in the source schema to DateOfMarriage, we would find that the process is not much different from refactoring for changed element names. The below find/replace would do the trick.

[Image: Attribute - the find/replace updating the AnAttribute XPaths]

 

As would be evident to anyone who reads the blog posts in this series, maps are heavily based on XPath, and the better you understand XPath syntax the more power you will have when dealing with BizTalk maps and with XML in general. I would definitely recommend the DanSharp XmlViewer tool, which is probably the #1 tool I could not live without while doing BizTalk development.
