

The book is now available. Well done, co-authors, reviewers, and the Packt Publishing team.

Connected Pawns

I am pleased to announce that “SOA Patterns with BizTalk Server 2013 and Microsoft Azure – Second Edition” by Mark Brimble, Johann Cooper, Coen Dijgraaf and Mahindra Morar can now be purchased on Amazon or from Packt. It is based on the first edition written by Richard Seroter. Johann has given a nice preview of the book here.

This is the first book I have co-authored and I can now tick that off on my bucket list. It was an experience to work with such talented co-authors and I was continually amazed how they rose to the occasion. I agree with Johann’s statement about being privileged to write the second edition of what was the first BizTalk book I ever owned.

As I updated chapters, I was continually struck by how little had changed. We added new chapters: Chapter 4, Chapter 5, and Chapter 6…



A quick blog post to document a tough lesson I learnt, in the hope that others can avoid making the same mistake. Changing a BizTalk WCF send port’s send handler (i.e. the host it is associated with) will remove all passwords that are currently configured on the send port. This includes the credentials used to access the target service as well as the proxy server password, if there is one. This appears to be the case with WCF-SQL receive locations as well; however, a quick test with the File adapter showed that this behavior might not be consistent across non-WCF adapters, so it’s best to try your change in a development environment first.

How can you tell if your password has been wiped? Normally, when you open the send port, a previously entered password is displayed as a row of black circles indicating that a masked password is present. If the password has been wiped, the field will be completely blank, as below.

PasswordBlanked

As a general practice I would encourage others to avoid hardcoding passwords on send ports and to use the SSO’s credential mapping facilities instead (a blog post on how to achieve this using the BRE Pipeline Framework is coming soon). This isn’t applicable to proxy server credentials, but those can typically be set at the send handler level, so you don’t have to worry about associating them with your send ports.

Moral of the story: if you’re changing the adapter handler on a port and there are passwords associated with that port, re-enter them afterwards.

I thought I would share a trick which might seem obvious to those who already know it, but should feel like a eureka moment to those who don’t.

How many of you have tried to use the Performance Monitor tool to track resource usage of a BizTalk application and have been stumped as to which Host Instances you are actually tracking?  The problem with Performance Monitor is that for many of the counters (especially the generic, non-BizTalk-specific counters such as Process or Memory) the Host Instance names are not listed; instead, all you see are the service executable names (BTSNTSVC or BTSNTSVC64) with a numeric suffix based on how many instances of that specific service are currently running, as in the below screenshot.

Random counters

You could be creative (you won’t believe the number of stories I’ve heard of how people work around this) and start your Host Instances one at a time, trying to figure out which one relates to which Performance Monitor instance, but there is a better way.  The key is to add the performance counter called “ID Process” under the Process category for each of the different BizTalk service instances, as in the below screenshot.

ID Process List

Now that you’ve added the ID Process performance counters, you can click on any of the specific instances in Performance Monitor and take a look at the values in the Last/Average/Minimum/Maximum columns (they’ll all have the same value); that value is the PID (Process ID) of a specific host instance, which you can find under the Services tab of the Windows Task Manager, as per the below screenshot.  You have now built an association between the Performance Monitor instance and the BizTalk Host Instance, and that association carries over to the other Performance Monitor counters.

Link
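If you’d rather not click through each instance by hand, the same mapping can be scripted. Below is a minimal C# sketch (not part of the original post) that uses the standard System.Diagnostics performance counter API to read the “ID Process” counter for every Process instance belonging to the BizTalk service executables and print the instance-name-to-PID mapping; the BTSNTSVC prefix check is the only BizTalk-specific assumption.

```csharp
using System;
using System.Diagnostics;

class HostInstanceMapper
{
    static void Main()
    {
        // Enumerate all instances of the "Process" performance counter category.
        var category = new PerformanceCounterCategory("Process");

        foreach (var instanceName in category.GetInstanceNames())
        {
            // Only look at the BizTalk host instance executables (BTSNTSVC / BTSNTSVC64),
            // which show up in Performance Monitor as BTSNTSVC, BTSNTSVC#1, BTSNTSVC#2, etc.
            if (!instanceName.StartsWith("BTSNTSVC", StringComparison.OrdinalIgnoreCase))
                continue;

            using (var counter = new PerformanceCounter("Process", "ID Process", instanceName, readOnly: true))
            {
                // The raw value of "ID Process" is the PID of the process backing this
                // Performance Monitor instance, so it can be matched against Task Manager.
                Console.WriteLine("{0} -> PID {1}", instanceName, counter.RawValue);
            }
        }
    }
}
```

Note that reading performance counters may require membership in the Performance Monitor Users group (or administrative rights) on the BizTalk server.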

Something to keep in mind is that when you restart Host Instances they will get new PIDs, and if you restart all the Host Instances at the same time there is a possibility that the order of instances in Performance Monitor will get swapped around.  My recommendation is that if you are profiling your BizTalk environment using Performance Monitor and you want consistent results, you should either write your Data Collector Sets to separate output files each time you restart your Host Instances, or restart your Host Instances one at a time to maximize (I’m not sure this is guaranteed) the chance that the order will stay the same.  Regardless of which path you choose, it is important to take note of which Host Instance relates to which Performance Monitor instance, especially if you plan on studying the results later, at which point it might no longer be possible to rebuild the associations.

A few years ago I successfully implemented a BizTalk solution making use of a WCF-NetMsmq receive location to receive XML messages into BizTalk.  The solution worked really well for a good couple of years, until a few months ago when the environment suffered a major, unrelated outage during which we ran into a pretty nasty side effect: poisoned messages.  The outcome of this experience is that from now on I will only use the WCF-Custom adapter with the netMsmqBinding, rather than the WCF-NetMsmq adapter, which has no capability to handle poisoned messages.

The outage was caused by the file server on which the BizTalk database transaction logs were stored running out of disk space.  As a result, all transactions that BizTalk attempted to make failed and were rolled back, including the transactions trying to commit received messages from the transactional MSMQ queue to the BizTalk message box.  The problem with this is that once retries are exhausted (the default MSMQ retry settings dictate that there will be 5 retries with 30 minute intervals between them) the message is considered a poisoned message, and the receive location will be stuck in a faulted state until the message has been removed from the queue.  The WCF-NetMsmq adapter doesn’t allow the poison message handling settings to be overridden.

Thankfully, if you use the WCF-Custom adapter with the netMsmqBinding binding instead, you will find that you have full control over the poison message handling settings (these settings are detailed in this MSDN article).  You’ll of course be able to override the number of retries and the retry interval, but the setting which is most important to us is the one called ReceiveErrorHandling.  When using the WCF-NetMsmq adapter this setting is fixed to “Fault”, which means that poisoned messages remain in the queue and no further messages can be consumed until the message has been removed.  We can instead set this to “Drop” if we want to get rid of the message automatically, “Reject” if we want to drop the message and send a negative acknowledgement back to the sending queue, or “Move” if we want to move the message to a queue (actually a sub-queue of the main queue) called poison.  Note that the aforementioned MSDN article states that the “Reject” and “Move” options are only available on Windows Vista; however, I have successfully tested them on Windows Server 2008 R2 and Windows Server 2012, and would be extremely surprised if they didn’t also work on Windows 7 and Windows 8/8.1.

PoisonMessageSettings
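For illustration, here is roughly what those settings translate to in code. This sketch (not from the original post) uses the standard WCF NetMsmqBinding class rather than the BizTalk adapter property grid, and the retry values are purely illustrative, but the property names (ReceiveErrorHandling, ReceiveRetryCount, MaxRetryCycles, RetryCycleDelay) are the same ones exposed on the WCF-Custom adapter’s binding configuration.

```csharp
using System;
using System.ServiceModel;

class PoisonHandlingExample
{
    static NetMsmqBinding CreateBinding()
    {
        var binding = new NetMsmqBinding();

        // Move poisoned messages to the ";poison" sub-queue instead of faulting
        // the listener (the WCF-NetMsmq adapter is effectively locked to Fault).
        binding.ReceiveErrorHandling = ReceiveErrorHandling.Move;

        // Illustrative retry values only - tune these for your own environment.
        binding.ReceiveRetryCount = 3;                       // immediate retries per cycle
        binding.MaxRetryCycles = 2;                          // number of retry cycles
        binding.RetryCycleDelay = TimeSpan.FromMinutes(10);  // delay between cycles

        return binding;
    }

    static void Main()
    {
        var binding = CreateBinding();
        Console.WriteLine("ReceiveErrorHandling: {0}", binding.ReceiveErrorHandling);
    }
}
```

In BizTalk you would typically set these same values through the WCF-Custom receive location’s binding configuration rather than in code, but the effect is the same.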

We chose to go down the “Move” path ourselves because it keeps the main queue processing and allows us to deal with the poisoned message in our own time.  You’d still need to find a smart way to deal with the poisoned messages, building some sort of notification process to ensure that messages don’t get left in the poison queue indefinitely.  One option is to use a vanilla MSMQ receive location (rather than a WCF-based one, in case the message is poisoned due to a problem with the SOAP envelope or malformed XML) to receive messages off the poison queue (the URL for the poison queue has the following format – net.msmq://<machine-name>/applicationQueue;poison), thus kicking off your notification process.

One takeaway from this post is to avoid the WCF-NetMsmq adapter and to use its more flexible cousin, the WCF-Custom adapter with the netMsmqBinding binding, instead.  I would extend this advice to the majority of the WCF adapters, since the WCF-Custom adapter generally affords you a lot more flexibility by letting you make use of WCF behaviors and binding settings that the pre-configured WCF adapters don’t always expose.  The one definite exception (at least that I can think of) to this advice is the webHttpBinding (i.e. REST), in which case it is best to use the WCF-WebHttp adapter, since it gives you access to URL variable mapping, which you will not find on the WCF-Custom adapter.  Luckily Microsoft allows you to use WCF behaviors on the WCF-WebHttp adapter, so you don’t lose anything by avoiding the WCF-Custom adapter in this case.

Another takeaway is that you should always consider how you will deal with poisoned messages when consuming messages off an MSMQ queue with BizTalk.  Even if you have full control over the WCF clients that send messages to the queue, you might still encounter poisoned messages through no fault of the message sender, as in the outage scenario I described above.  It is best to familiarize yourself with the concept of poisoned messages and plan how you will handle them, rather than find yourself having to figure it out during a production outage (as was the case for me).

I’m pleased to announce that the White Paper titled “The A-Y of running BizTalk Server in Microsoft Azure”, which I have been working on for the last two months, is now available to download from the BizTalk 360 White Paper collection.

Writing this White Paper has been a momentous task, especially given that Microsoft Azure is an ever-changing entity (I’m pretty certain my synopsis of D-series VMs is already a bit dated, since Microsoft have recently released new information stating that, in addition to having SSDs, they sport faster CPUs, which I was previously unaware of), and I owe all the reviewers a great deal of thanks for their help.

This endeavour started out as a blog post (I have just deleted the draft), but it quickly became apparent that the topic was far larger than anything I could cover within a single post and required a lot more attention to detail.

I hope the paper proves to be interesting and valuable to you, and, as always, I welcome any feedback.

It will be nice to get back to blogging again 🙂
