Tag Archive: BizUnit


I recently worked on a POC which had me spinning up an EDIFACT/AS2 exchange on both BizTalk Server and MABS (Microsoft Azure BizTalk Services) and comparing the experiences.  Seeing as this was my first time dealing with AS2 (EDIFACT was already an old friend), the very first thing I worked on was finding a means of sending messages to the AS2 onramps.

I found a few open source programs that would let me submit AS2 messages, but I decided I wanted to do this programmatically via BizUnit 4.0 instead, so that I had repeatable scenarios which I could easily re-test across both BizTalk Server and MABS.  Matt Frear has written a good post on how to send messages via AS2 using .NET, and I decided I was going to convert this into a BizUnit test step (with his blessing).

You can find the source code for the AS2SendTestStep here, or if you just want to download the assembly then you can do so here.

The test step supports the following features.

  • Supports BizTalk Server and MABS (and theoretically other AS2 server products, but I haven’t tested any others).
  • Supports BizUnit data loaders, which means that if you have many tests requiring only slight variations of the input message you don’t need to keep multiple copies of the input files; you can have a single base file and use a relevant DataLoader class to manipulate the stream as it is being read.
  • Supports signing and encrypting outbound messages.
  • Supports optional synchronous MDNs and allows you to validate them.
  • Logs decrypted MDN messages to the test execution report.

Usage tips are below.

  • You will need to reference the BizUnit.AS2TestSteps.dll assembly and the BizUnit.dll assembly provided by the BizUnit 4.0 framework.
  • Select an input file directly by supplying its path in the InputFileLocation property, or supply a BizUnit DataLoader (which allows you to manipulate the file as it is being read in) in the InputFileLoader property (see the sketch after this list).
  • To encrypt the outbound message you will need a copy of the public key certificate, and you will need to supply its file path in the EncryptionCertificateFileLocation property.
  • To sign the outbound message you will need a copy of the private key certificate and its password.  Supply the path to the certificate in the SigningCertificateFileLocation property and the password in the SigningCertificatePassword property.
  • Supports setting AS2From and AS2To headers via the As2From and As2To properties.
  • Supports the use of a proxy server via the Ps property, which allows you to supply the proxy server URL and, if required, credentials as well.
  • Allows you to set the subject HTTP header by setting the As2FileName property.
  • Allows you to set the URL to post the request to by setting the Url property.
  • Allows you to override the default timeout of 20 seconds by setting the TimeoutMilliseconds property.
  • Allows you to run BizUnit substeps against the decrypted response message, in case you want to validate it, by supplying substeps in the SubSteps property.
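As a quick illustration of the data loader option, the below sketch shows a custom data loader that rewrites the interchange reference in a base EDIFACT file as it is read in, so that one base file can serve many tests.  This is only a sketch: it assumes BizUnit 4.0’s DataLoaderBase exposes Load and Validate methods taking a Context, and the file path, reference values and class name are all illustrative.  (For XML payloads the XmlDataLoader that ships with BizUnit 4.0 gives you this sort of manipulation out of the box.)

// using System.IO; using System.Text; using BizUnit;
public class EdifactReferenceDataLoader : DataLoaderBase
{
    public string FilePath { get; set; }
    public string NewReference { get; set; }

    public override Stream Load(Context context)
    {
        // Read the base file and swap in a per-test interchange reference
        // ("REF0000001" is an illustrative placeholder in the base file)
        string content = File.ReadAllText(FilePath).Replace("REF0000001", NewReference);
        return new MemoryStream(Encoding.UTF8.GetBytes(content));
    }

    public override void Validate(Context context)
    {
        if (!File.Exists(FilePath))
            throw new FileNotFoundException("Base input file not found", FilePath);
    }
}

// Wired up on the test step in place of InputFileLocation:
as2TestStep.InputFileLoader = new EdifactReferenceDataLoader { FilePath = @"c:\temp\EFACT_D95B_CODECO_base.txt", NewReference = "REF0000042" };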

An example test which sends an EDIFACT message to BizTalk Server (using a proxy server) and runs some regular expressions against the synchronous MDN response is below.  Note that the RegexValidationStep in use here is not part of the BizUnit framework and is proprietary, so unfortunately I can’t share it.


var testCase = new BizUnit.Xaml.TestCase();

// Set up the AS2 send step, encrypting with the partner's public key
// and signing with our private key
var as2TestStep = new AS2SendTestStep();
as2TestStep.As2From = "FABRIKAM";
as2TestStep.As2To = "CONTOSO";
as2TestStep.EncryptionCertificateFileLocation = @"c:\host.cer";
as2TestStep.As2FileName = "EFACT_D95B_CODECO_output.txt";
as2TestStep.InputFileLocation = @"c:\temp\EFACT_D95B_CODECO_output.txt";
as2TestStep.SigningCertificateFileLocation = @"c:\fab.pfx";
as2TestStep.SigningCertificatePassword = "test";
as2TestStep.TimeoutMilliseconds = 20000;
as2TestStep.Url = "http://localhost/AS2Receive/BTSHTTPReceive.dll";

// Route the request through a proxy server
WebTestPlugins.AS2Helpers.ProxySettings ps = new WebTestPlugins.AS2Helpers.ProxySettings();
ps.Name = "http://proxyserver.test.co.nz";
as2TestStep.Ps = ps;

// Validate the synchronous MDN response with regular expressions
var regexValidationStep = new RegexValidationStep();
regexValidationStep._RegexDefintion.Add(new RegexDefinition("Content-Type: message/disposition-notification"));
regexValidationStep._RegexDefintion.Add(new RegexDefinition("Disposition: automatic-action/MDN-sent-automatically; processed"));
regexValidationStep._RegexDefintion.Add(new RegexDefinition("Final-Recipient: rfc822; CONTOSO"));
as2TestStep.SubSteps.Add(regexValidationStep);

testCase.ExecutionSteps.Add(as2TestStep);
var bizUnitTest = new BizUnit.BizUnit(testCase);
bizUnitTest.RunTest();

And below is a screenshot of the test execution results.  Note that the MDN text is logged here, and you can see the regular expressions being evaluated against the MDN as well.

Test execution results

You can of course take your own test steps much further by validating that BizTalk/MABS has consumed the file and written the message to its target destination, be it a SQL database or a file system folder etc…, using the out of the box BizUnit test steps or your own custom ones.

The test step doesn’t currently support compression.  If anyone can point me towards any good documentation on compressing AS2 messages in C# then I might try to include that.

Any other suggestions are welcome.

While doing some integration testing for BizTalk hosted WCF services I met a few challenges that required some creative thinking if I wanted to do things the right way.  As a quick intro, I was making use of BizUnit 4.0 to execute my tests and of the WCF test step to call my WCF service, executing my tests from Visual Studio 2012 against a local BizTalk 2013 environment.  This was a one-way WCF service using the WSHttp binding with transport security and basic authentication, also making use of a WCF service behaviour to perform authorization, ensuring that the user in question belongs to a specified Active Directory group.  At this stage I was only going to be executing the tests against my local VM; however, at some point in the future I would want to run these tests on actual test environments.

Challenge number one was that since my WCF service made use of transport security I couldn’t have my test project’s app.config reference an endpoint URL like https://localhost/…, since this wouldn’t match up with the self-signed security certificate that I had bound to the HTTPS port in IIS.  I would instead need to put the fully qualified name of my VM, which matches the certificate bound to the HTTPS port, in my app.config.  This isn’t all that acceptable, given that it isn’t necessarily going to be my PC on which the integration tests are next executed; it could be any other developer working on them in the future.  I would ideally also like to be able to run my integration tests against staging or UAT environments in the future, and to repoint my tests to these environments without making any changes to code in source control, including my app.config, and without adding new environment-specific tests.  I decided that I wanted the URL to be overridden by replacing a placeholder value in the URL in my app.config with one read in from an SSO application.

Challenge number two was that since my service made use of basic authentication and Active Directory group based authorization, I had to be able to dynamically choose which credentials I was going to use when I called the service, given that the credentials required to perform the same test on each environment would be different.  Being a very thorough (some might have ruder words for me but I will keep this blog post PG) tester, I also wanted to be able to perform negative testing whereby I had credentials that did not authenticate, or credentials that did authenticate but weren’t authorized to call the service.  I didn’t want any of these credentials hardcoded in my app.config or code, nor did I want the resolution of the credentials to result in me having to modify the WCF test step.  Once again I decided that I was going to make use of an SSO application to look up the credentials that I wanted to use when I called on the service.

In order to do this I decided to implement a WCF endpoint behaviour which combines what the out of the box ClientVia behaviour does (allowing an endpoint URL to be overridden) with a ClientCredentials behaviour to set the relevant credentials.  Now, I could have made use of the ClientVia or ClientCredentials behaviours directly in my app.config; however, that breaks my principle that I don’t want to update the source-controlled app.config.  I could also have made a custom implementation of the BizUnit WCF test step which dynamically looks up the URL and credentials I want to use and then programmatically applies the ClientVia and ClientCredentials behaviours with the resolved values; however, that went against my principle of not updating the WCF test step, which performs a very generic function very well.

I created a new WCF endpoint behaviour and implemented the below ApplyClientBehavior method to make use of the out of the box behaviours with values resolved from an SSO config store (this could of course be split into two discrete WCF behaviours, but for the purpose of this blog post I decided to treat them as one).

ApplyClientBehavior
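In rough text form the behaviour looks something like the below.  Note that this is a sketch rather than the exact code: the SSOConfigHelper.Read(application, key) helper is a hypothetical stand-in for whatever SSO lookup class you use, the key names are illustrative, and rather than composing the out of the box ClientViaBehavior it simply sets ClientRuntime.Via directly (which is what ClientViaBehavior does under the covers).  Behaviour ordering caveats apply when modifying ClientCredentials like this.

// using System; using System.ServiceModel.Channels; using System.ServiceModel.Description; using System.ServiceModel.Dispatcher;
public class SsoClientEndpointBehavior : IEndpointBehavior
{
    private readonly string _SSOApplication;
    private readonly string _keyPrefix;

    public SsoClientEndpointBehavior(string ssoApplication, string keyPrefix)
    {
        _SSOApplication = ssoApplication;
        _keyPrefix = keyPrefix;
    }

    public void ApplyClientBehavior(ServiceEndpoint endpoint, ClientRuntime clientRuntime)
    {
        // Replace the placeholder in the endpoint URL with the host name resolved from SSO
        // (SSOConfigHelper is a hypothetical SSO lookup helper; key names are illustrative)
        string host = SSOConfigHelper.Read(_SSOApplication, "HostName");
        clientRuntime.Via = new Uri(endpoint.Address.Uri.AbsoluteUri.Replace("placeholder", host));

        // Resolve the credentials for this keyPrefix and apply them to the ClientCredentials behaviour
        var credentials = endpoint.Behaviors.Find<ClientCredentials>();
        credentials.UserName.UserName = SSOConfigHelper.Read(_SSOApplication, _keyPrefix + "UserName");
        credentials.UserName.Password = SSOConfigHelper.Read(_SSOApplication, _keyPrefix + "Password");
    }

    public void AddBindingParameters(ServiceEndpoint endpoint, BindingParameterCollection bindingParameters) { }
    public void ApplyDispatchBehavior(ServiceEndpoint endpoint, EndpointDispatcher endpointDispatcher) { }
    public void Validate(ServiceEndpoint endpoint) { }
}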

Note in the above that the _SSOApplication variable is actually a parameter of the behaviour’s constructor, so it can be set within the app.config, allowing the behaviour to be reused across different projects.  The same applies to the keyPrefix variable, which is used to specify which set of credentials is being looked up, allowing the SSO store to contain multiple sets of credentials.  The SSO application would look like the below.

SSO Application

In order to make use of these behaviours the test project’s app.config has to be adjusted so that it looks like the below.

App.config

Note the highlighted sections in the above screenshot and my notes below explaining what they do.

  • The first highlighted section shows the registration of the behaviour extension, linking it with a local friendly name.  Note that the type includes the fully qualified behaviour extension element class name and the fully qualified assembly name (you can get this using the “gacutil -l” command).  This allows us to use the behaviour in our app.config.
  • The second section is where we register behaviour configurations with friendly names and specify which behaviours they implement and with what configuration.  In the above screenshot I registered two behaviour configurations, one specifying a keyPrefix value of Johann and one specifying AITM, both specifying an SSO application name of Testing.
  • The third section adjusts the client endpoint registrations so that they implement one of our registered behaviour configurations.  We also adjust the endpoint names so that it is obvious which behaviour configuration they implement.  Also note that the URLs have the word placeholder in them where we want to override the URL with the value fetched from the SSO config store.  (A rough sketch of the resulting config follows this list.)
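Put together, the relevant app.config fragment looks roughly like the below; the extension type string, binding and contract names here are illustrative only.

<system.serviceModel>
  <extensions>
    <behaviorExtensions>
      <!-- Fully qualified extension element class and assembly name (use "gacutil -l" to get the latter) -->
      <add name="ssoClientBehavior" type="Testing.Behaviors.SsoClientBehaviorExtensionElement, Testing.Behaviors, Version=1.0.0.0, Culture=neutral, PublicKeyToken=0123456789abcdef" />
    </behaviorExtensions>
  </extensions>
  <behaviors>
    <endpointBehaviors>
      <behavior name="JohannCredentials">
        <ssoClientBehavior ssoApplication="Testing" keyPrefix="Johann" />
      </behavior>
      <behavior name="AITMCredentials">
        <ssoClientBehavior ssoApplication="Testing" keyPrefix="AITM" />
      </behavior>
    </endpointBehaviors>
  </behaviors>
  <client>
    <!-- The word "placeholder" in the address gets swapped out at runtime by the behaviour -->
    <endpoint name="MathJohannCredentials" address="https://placeholder/MathsService/Maths.svc"
              binding="wsHttpBinding" behaviorConfiguration="JohannCredentials" contract="MathsService.IMaths" />
    <endpoint name="MathAITMCredentials" address="https://placeholder/MathsService/Maths.svc"
              binding="wsHttpBinding" behaviorConfiguration="AITMCredentials" contract="MathsService.IMaths" />
  </client>
</system.serviceModel>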

Now when we actually create our BizUnit test step we need to ensure that we specify the endpoint name which matches the client endpoint in our app.config that implements the behaviour we want to apply for our test.  In the below screenshot the WCF test step specifies an endpoint name of MathJohannCredentials, which means that the Johann username and password from the SSO config store will be used as credentials when calling the service, and that the https://placeholder/MathsService/Maths.svc URL will be overridden with https://jc-bt2013vm.aitm.co.nz/MathsService/Maths.svc.  Making use of the AITM credentials would be as simple as implementing a similar test that specifies an endpoint name of MathAITMCredentials.

BizUnitTest

These are of course just examples of how you can use WCF behaviors to enable you to write flexible integration tests.  You could very easily use this concept to lookup values from other config stores such as a database, the business rules engine, or a configuration file, or perhaps to override other WCF behaviors.  If anyone has any other good ideas on how WCF behaviors could be used to make testing more flexible then please do add a comment to this post.

My colleague Ian Hui figured out a problem that has had me scratching my head for the last two months and he has made me a very happy man.

While porting unit tests using BizUnit for the BRE Pipeline Framework from BizTalk 2010 to BizTalk 2013 I encountered a System.BadImageFormatException exception with the following error message – “Could not load file or assembly ‘file:///C:\Program Files (x86)\Microsoft BizTalk Server 2013\Pipeline Components\BREPipelineFrameworkComponent.dll’ or one of its dependencies. This assembly is built by a runtime newer than the currently loaded runtime and cannot be loaded”.

Failed test

When I removed my custom pipeline component and left behind only the out of the box pipeline components like the XML Disassembler, I noticed that the problem disappeared.  This, and some further digging, led me to believe that the problem is encountered with all pipeline components built in .NET 4.0 and above.  I tried a whole bunch of workarounds, such as rebuilding the BizUnit test steps, the Winterdom pipeline testing framework and even the PipelineObjects.dll assembly using .NET 4.5, thinking that this might help work around the problem, but I just kept hitting brick walls.  What made the problem even more mind-boggling was that the pipeline components that caused problems in unit tests ran just fine in the BizTalk runtime.

In comes Ian, who persevered with the problem long after I had resigned it to being something we would have to wait for Microsoft to fix.  He found that you need to adjust the vstest.executionengine.exe.config file, typically found in the “C:\Program Files (x86)\Microsoft Visual Studio 11.0\Common7\IDE\CommonExtensions\Microsoft\TestWindow” folder, setting the useLegacyV2RuntimeActivationPolicy attribute on the startup element to true as in the below (you can download a copy of my config file from here if you want to copy/paste from it).

Config
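In text form the change amounts to the following fragment (the exact supportedRuntime entry in your copy of the file may differ):

<configuration>
  <startup useLegacyV2RuntimeActivationPolicy="true">
    <supportedRuntime version="v4.0.30319" />
  </startup>
  <!-- rest of the file unchanged -->
</configuration>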

You can read more about this setting and what it does here.  The discrepancy in behaviour between the test execution and BizTalk runtime is easily explained by examining the BTSNTSvc64.exe.config file in the BizTalk program files folder as you’ll notice that the aforementioned attribute is set to true in this config file by default, which is why the runtime never had a problem with these pipelines.

BT config

Funnily enough, after Ian figured out the answer to this problem he found that Shashidharan Krishnan had encountered the same problem in the past (on BizTalk 2010) and fixed it in the exact same way (he has documented this here); however, he encountered completely different error messages, and I suspect that he was running his unit tests through a console application rather than Visual Studio unit tests.  Either way, as the error messages he encountered are totally different from the ones we did, chances are that if you have the same error as us you might not find his post, which is why we decided we would document this anyway.

Thanks again Ian (and Shashidharan).  You guys have just made unit testing BizTalk 2013 components more robust, thus ensuring the success of integration solutions developed for BizTalk 2013.

Whipping up a BizUnit test step using the BizUnit 4.0 libraries is a piece of cake, but chances are that any test step you write is going to be used multiple times throughout the life of a project and in ways you can’t predict, and you will want to spend the extra effort when developing your test step to cater for this.

BizUnit promotes the reuse of test steps by providing interfaces that allow you to write them in really flexible ways, with the ability to make use of plugins rather than directly implementing logic such as the loading or validation of data.  This allows for late binding of the actual data loader / validator implementations, as it is not up to the test step implementor to choose which of these facilities to make use of; that is rather the responsibility of the test case implementor (i.e. the person making use of a test step to implement a test).

For example, when writing a test step that calls upon a WCF service, rather than having the test step load its data from a file we can leave the choice wide open so that the test case implementor can choose to load his data from a database, or perhaps read it in from an XML file but run some XPath statements against the message first to adjust its content (making use of the out of the box XmlDataLoader data loader shipped with the BizUnit 4.0 framework).

In this blog post I will list some of the steps I undertook to port Michael Stephenson’s WCF test step for BizUnit 2.0 (many thanks to Michael for writing this fantastically useful step and giving me permission to discuss it) into a BizUnit 4.0 test step while leveraging off the aforementioned features.

Both the old and the new BizUnit test step require some field values to be set such as the WCF input message type name, and the method name etc… The old test step did this by reading the values from the test context as below.

Old Properties setting

BizUnit 4.0 allows you to programmatically instantiate and set up your test steps in a test case, so rather than reading these field values in from the context (one could still do this if required) we can just expose them as public properties, or write a public constructor for the test step class that allows you to set these field values.

New Properties setting

To make our test step more flexible it’s a good idea to support as many substeps as are required to be executed against the response message (if we are dealing with a request-response service; obviously this is not applicable to one-way services).  In order to do this we need to instantiate the SubSteps property (defined on the TestStepBase base class) in the test step constructor to ensure that we can add substeps to the collection when we implement our test cases.

Constructor
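In text form the shape of the ported step so far is roughly as below; this is a sketch against BizUnit 4.0’s TestStepBase and DataLoaderBase base classes rather than the exact code, with property names mirroring those discussed in this post.

// using System.Collections.ObjectModel; using BizUnit;
public class WCFTestStep : TestStepBase
{
    // Field values are now public properties set directly by the test case implementor
    public string EndPointName { get; set; }
    public string InterfaceTypeName { get; set; }
    public string InputMessageTypeName { get; set; }
    public string MethodName { get; set; }
    public DataLoaderBase InputMessageLoader { get; set; }

    public WCFTestStep()
    {
        // Instantiate SubSteps (defined on TestStepBase) so that test case
        // implementors can add their validation substeps to the collection
        SubSteps = new Collection<SubStepBase>();
    }
}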

We will also want to implement a Validate method in our test step to check that it has been instantiated with all the required and valid field values.  This method is run after the entire test case has been instantiated and is being prepared for execution (i.e. after all the individual test steps have been set up and put into their relevant stages in the test case).

Test step Validation
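Roughly like so (again a sketch, hedging on exactly which fields the real step treats as mandatory):

public override void Validate(Context context)
{
    // Fail fast if the test case didn't supply the mandatory field values
    if (string.IsNullOrEmpty(EndPointName))
        throw new ArgumentNullException("EndPointName");
    if (string.IsNullOrEmpty(InterfaceTypeName))
        throw new ArgumentNullException("InterfaceTypeName");

    // Give the data loader and each substep a chance to validate their own setup too
    if (InputMessageLoader != null)
        InputMessageLoader.Validate(context);
    foreach (var subStep in SubSteps)
        subStep.Validate(context);
}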

On to the meat of the test step.  Whereas the previous version of the test step had the loading of the file built into the test step itself (loading it from the file system, which would be good enough in many cases), we will take it one step further by making use of a generic data loader.  Note that the dataLoader is a field which can be set through its corresponding public property or via a constructor (I haven’t allowed for it in my example, but there is no reason why you couldn’t set up all your properties in the constructor).

Data loader
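The loading itself then collapses to a single call against whichever DataLoaderBase implementation the test case supplied, along these lines:

// Inside Execute(Context context): load the request via the supplied data loader,
// be it a FileDataLoader, an XmlDataLoader or something custom
System.IO.Stream request = InputMessageLoader.Load(context);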

The code to call on the WCF service and optionally consume its response remains largely unchanged.  What has been added is the ability to run the optional response through any number of substeps, giving the test case implementor the ability to validate the response in whatever fashion they choose.

Output Validation
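Something along these lines (a sketch; SubStepBase.Execute takes the stream and hands a stream back, so substeps can be chained):

// Inside Execute, after the optional response stream has been captured:
// pipe the response through every configured substep
if (response != null)
{
    foreach (var subStep in SubSteps)
    {
        response.Seek(0, System.IO.SeekOrigin.Begin);   // rewind between substeps
        response = subStep.Execute(response, context);
    }
}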

To illustrate how the test step can be used I have created a WCF Service application with a method called GetDataUsingDataContract (yes, I was too lazy to create my WCF service from scratch) which takes in an object of type CompositeType and returns an object of type CompositeType. CompositeType is defined as below, and the method will add up the FirstValue and SecondValue elements and set the value in the Result element before returning the object.

CompositeTypeSchema

After publishing the WCF Service I created a test project and added a service reference to the published service. I then added references to the BizUnit assemblies as well as my custom test step project. The next step was to create a test method, instantiate the TestCase and the WCFTestStep as below.

SetupWCFTestStep

Note that the EndPointName is from the name attribute in the endpoint node in the test project’s app.config. If there were multiple endpoints defined in the config then we can choose any of them.

Config

The InputMessageTypeName and the InterfaceTypeName must be fully qualified, the namespaces being based on the service reference, and the application name being the test project itself (or whatever project your service reference is contained in).

The next step was to load the request message from a file, but what I wanted was to be able to replace values in the input file using XPath expressions, so that I didn’t have to maintain a copy of the input file for each test method.  I achieved this using the XmlDataLoader, replacing the FirstValue element with the value 4 and the SecondValue element with the value 7 in the below example.  Note that the format of the input message is based on the type definition in the service reference, which might differ slightly from the message definitions you might have used if you were creating your service in BizTalk (e.g. the message you want to submit might have to be wrapped in a parent node); you might want to run the xsd.exe tool against the test project dll to find out how you must format your message.  Using a tool such as DanSharp XmlViewer can aid you in creating your XPath statements.

DataLoader in action

Next up it’s time to implement validation. In the below example I am making use of the XmlValidationStep to validate the response message against its schema (I generated this using the xsd.exe tool), and also to ensure that the value in the Result element is what I was expecting.

Validation

Lastly it’s time to add the WCF test step to the test case, to add the test case to a newly instantiated BizUnit object and to execute the test.

RunTest

If you run the test and view the test result details you get some rather detailed output.

TestResults

The beauty of the WCF test step is that it should work for any WCF Service regardless of the bindings as long as a service reference has been added which should automatically create the binding and endpoint details in your config file.

If you are interested in downloading the test step you can do so from my Google Drive.  The solution also contains my example WCF Service (which should automatically ask for permission to deploy into IIS when you open the solution), as well as my test project.  Let me know if you can think of any improvements to the test step, and once again kudos to Michael Stephenson, as most of the implementation logic is based on his pioneering work.

I have recently been working on an agile BizTalk project which has required me to flesh out testing frameworks for all the different aspects of BizTalk.  This has been somewhat uncharted territory for me in the past, though not for lack of desire to explore this area, and it was a great exercise.  In this post I will aim to discuss some of the tools I have used, some of the reasoning behind why I chose those tools, and some notes about BizTalk testing in general.  In later posts I will aim to do more of a deep dive into the tools mentioned.

The very first thing to mention is that in order to test your schemas, maps, and pipelines you will need to make a change to your project file.  You will need to open the project properties and navigate to the Deployment tab.  Here, you will want to set the “Enable Unit Testing” option to True.

What this does is change the code-behind for all your schemas, maps, and pipeline classes.  By default all schemas inherit from the base class SchemaBase (if you want to prove this, just click the expand button on the left of any of your schema files in a BizTalk project in the Visual Studio solution explorer and view the .cs file), however when you change the enable unit testing property for the project, all the contained schemas will instead derive from the base class TestableSchemaBase.  Before you start panicking about the fact that your schema has just been unnecessarily changed, the TestableSchemaBase class also derives from the SchemaBase class, it just implements a few extra properties/methods.  Similar behavior can be observed for the code-behind for maps and pipeline artifacts as well.  Using the Testable base class for your artifacts enables you to unit test them.

However, doing this also presents a problem.  You’ll notice that once you enable unit testing on a project, a reference to BizTalk.TestTools is added to the project as well.  On a typical non-dev BizTalk environment where you don’t have Visual Studio installed, this DLL would not be in the GAC.  This means that if you deploy your project with unit testing enabled to such a server, you would encounter massive failures at runtime.  One option is of course to make the missing DLLs available on those servers (you might find that you have good reasons to do this), but another, somewhat cleaner approach is to only enable the unit testing property on your debug builds, which will only be deployed to your development box, and ensure that it is turned off on the release builds, which can then be deployed to your non-dev environments.  Purists would argue that you are now releasing assemblies which are actually different from those that you unit tested, but the way I see it there has to be some degree of trust that we allocate to Microsoft to have gotten this right 🙂

On to the nitty gritty.  The first type of test I tried to implement was unit testing for schemas.  The idea of unit testing schemas is to validate instances of actual messages against a given schema, similar to right clicking on a schema file in the solution explorer and choosing validate instance.  I tried to follow the textbook examples for the out of the box unit testing for schemas (see http://msdn.microsoft.com/en-us/library/dd224279(v=bts.10) for an example) however I very quickly ran into some major roadblocks.

  • When tests fail they don’t report back to you the reason for the failure.  I consider this to be quite a hindrance, as it then requires manual steps to be taken to find out the reason for failure.
  • If your schema imports types from another schema file then the test will always fail.

Clearly this is not quite acceptable, so I started searching the internet for alternatives, and ended up implementing a static helper class that looks quite similar to those described here – http://stackoverflow.com/questions/751511/validating-an-xml-against-referenced-xsd-in-c-sharp.  This allows me to very quickly whip up a test with just a few lines of code to assert whether a given XML file validates against a specified schema.  The helper also lets you specify whether the instance file is in native (flat file) or XML format.
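The helper isn’t reproduced in full here, but the XML half of it looks roughly like the below sketch (the native/flat file variation is omitted, and the Assert call assumes MSTest).  Supplying every imported schema file alongside the main one is what gets around the imported-types roadblock, and returning the collected error messages gets around the missing-failure-reason one.

// using System.Collections.Generic; using System.Xml; using System.Xml.Schema;
public static class SchemaValidationHelper
{
    // Validates an XML instance against a schema (plus any imported schema files),
    // returning every validation error rather than just a pass/fail flag
    public static IList<string> Validate(string instancePath, params string[] schemaPaths)
    {
        var errors = new List<string>();
        var settings = new XmlReaderSettings { ValidationType = ValidationType.Schema };
        foreach (string schemaPath in schemaPaths)
            settings.Schemas.Add(null, schemaPath);   // add the schema and any schemas it imports
        settings.ValidationEventHandler += (sender, e) => errors.Add(e.Message);
        using (var reader = XmlReader.Create(instancePath, settings))
        {
            while (reader.Read()) { }   // stream through the document to trigger validation
        }
        return errors;
    }
}

// In a test:
// Assert.AreEqual(0, SchemaValidationHelper.Validate(@"c:\temp\instance.xml", @"c:\schemas\Order.xsd").Count);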

Next up was to write unit tests for maps.  In this case I found that the out of the box map testing worked quite well (see here for some documentation – http://msdn.microsoft.com/en-us/library/dd224279(v=bts.10).aspx).  However once again I found a few things lacking with it.

  • For every variation in my input file that I wanted to test, I had to create and maintain an extra XML file.  You’ll find your repository of XML files building up very quickly.
  • The out of the box implementation of map unit tests helps you to execute the map, allowing for validation of the input and the output files, but does very little in the way of allowing you to actually check the data in the output file.  That is totally left up to you.

Once again, I decided to find out what those who came before me had decided to do, and it appeared that the best of breed option that suited my requirements was the BizTalk Map Testing Framework – http://mtf.codeplex.com/.  I must say that I am a huge fan of this framework because of the great flexibility it allows.  The way this framework works is that for each map you want to test, you supply a template input file and an output file.  You then supply xpath statements for all the nodes in the input file you want to vary, and all the nodes in the output file that would be changed as a result of said variations.  You then create test scenarios, each one supplying values for the input nodes and expected values for the output nodes.  This allows you to very quickly create a lot of tests for complex maps, and if you find that you are dealing with a very simple map which only requires a simple test then you can just skip the xpaths and make use of the auto generated base test, which ensures that when the map is executed against the template input file the output file matches the template output file.  The framework of course allows for actions to be taken before and after the map has actually been executed in order to cater for dynamic scenarios, such as the output file containing the current time etc…, and even provides some handy helper methods to read from and manipulate the output xml files before the test scenarios are executed.  I will definitely post in more detail about this in the future.

I had a glance at the out of the box unit testing for pipelines, and it didn’t look too bad at all.  However, based on suggestions found around the internet I decided to make use of BizUnit 4.0 and the Winterdom pipeline testing framework, which is very well documented here – http://blogdoc.biztalk247.com/article.aspx?page=a45b5fcd-a1fe-4219-844b-5e7e5660a4ec.  Once more, I can’t express how much of a convert I am to this framework and how easy it is to generate tests with it.  When setting up your test you can specify an input context file which contains all the context properties that should be applied to the message before the pipeline is executed (obviously this only applies to send pipelines), the input file (or possibly files in the case of send pipelines, which can be used for batching), and an instance config file which allows you to override the default parameters for the pipeline.  Executing the test will result in the generation of the resulting output message and optionally a resulting context file which contains all the context properties against the message after the execution of the pipeline.  Thanks to the robustness of the BizUnit 4.0 framework, it is very easy to implement validators against both of these file types.

Next up came integration testing.  By its very nature BizTalk is very much a black box and most people think of it as a system which messages go into, and messages eventually come out of.  In a large way this is what the focus of integration testing is, to evaluate the outcomes of a given scenario in a black box fashion.  Once again I decided to use BizUnit 4.0 as it is a very robust and flexible framework, with a very full featured set of test steps to carry out common functions required in integration testing.  I will have to write a whole other post regarding BizUnit 4.0 but I encourage all those interested in integration testing BizTalk or other types of integration projects to download and play around with the framework – http://bizunit.codeplex.com/

While we managed to create some pretty complex integration test scenarios, due to BizTalk’s black box nature we really had little idea whether we had actually managed to exercise all the different paths of our orchestration.  In comes the BizTalk Orchestration Profiler tool (http://biztalkorcprofiler.codeplex.com/) which allows you to view which parts of your orchestrations have been exercised, and which areas you need to focus more testing on.  A few tweaks are required to get the tool to work with BizTalk Server 2010, Colin Meade has made life easy for us by documenting the required steps on his blog – http://midheach.wordpress.com/2011/10/29/biztalk-orchestration-profiler-2010/.

Next on the list was performance testing.  The options really boiled down to using the load testing framework that is part of Visual Studio Ultimate or using LoadGen 2007 (earlier versions were not an option for me as I was testing against WCF transports, which are only implemented from the 2007 version onwards).  In my case I had to go with LoadGen as the client did not have access to Visual Studio Ultimate, so that was an easy choice.  Since I had already used BizUnit 4.0 to implement my integration tests, I decided that it would be nice to use BizUnit 4.0 as a harness for LoadGen as well.  This also meant that I could leverage off BizUnit test steps to actually validate the outputs of my performance tests, ensuring that all the records that I was expecting BizTalk to process were indeed being processed.

I hit a showstopper very quickly here.  LoadGen 2007 was unfortunately released in a state whereby its DLLs are not signed with a strong name key and thus can’t be added to the GAC as required by BizUnit 4.0.  This wasn’t an issue with earlier versions of LoadGen; however, as mentioned, these weren’t an option for me as I needed to test against WCF transports.  Back to Google once again, and I was guided to the following article – http://geekswithblogs.net/jorgennilsson/archive/2009/01/03/loadgen-2007-with-bizunit.aspx.  What this article describes is the process of disassembling the DLLs back to MSIL code, associating the IL files with a strong name key, and then updating references in the DLLs to also take note of the new strong name key.  Doing this and adding the resultant DLLs to the GAC means that BizUnit 4.0 and LoadGen 2007 now play very nicely with each other, and allows for complex performance testing scenarios to be catered for.

While LoadGen 2007 allows for the generation of load on a server and BizUnit 4.0 test steps can be used to validate the outputs, an important part of performance testing is to study the impact that the load has on the BizTalk and SQL environments.  In order to do this I made use of the Performance Analysis tools in Windows Server 2008 and PAL (Performance Analysis of Logs – http://pal.codeplex.com/).  The idea is to generate data collection sets from Performance Monitor while the server is processing the load and to then have PAL process the resulting files.  It will then generate an HTML report for you which provides warnings when thresholds have been passed.  While this helps you analyze and search for problems, there are many important counters which need to be spot checked manually rather than solely relying on the PAL output, which is merely a guide and a starting point (albeit a fantastic one).

A further requirement which was explored, however since dropped, was to look at using SpecFlow (which is a BDD or Behaviour-Driven Development testing framework – http://www.specflow.org/specflownew/) as part of our BizTalk integration testing.  SpecFlow allows test analysts to specify test scenarios using verbose statements which are based on the verbs Given (specifying prerequisites), When (specifying the actions to be taken in the test scenario), and Then (specifying the expected outcome of the test scenario).  Each sentence in the test scenario ends up being bound to a specific method which then implements the actual test logic.  Each sentence in the test scenario can also specify variable inputs, thus making the test methods quite reusable.  I chose to use BizUnit 4.0 to actually implement my test methods for all the reasons I mentioned previously.  SpecFlow also allows for your resulting trx test results file (assuming you are using MSTest to execute the tests) to be converted to a nice html format.  There are of course many free tools on the internet that allow you to do similar things (and one of my colleagues has even written a BizTalk project to do exactly this, now why didn’t I think of that first :)), but it is a nice little extra feature that comes with the toolset.

While there are many benefits to such a testing approach, especially for agile projects, it should be noted that there are some up front overheads, especially with the first set of test scenarios being catered for.  My personal take on this is that the overhead mostly comes into play because each sentence on its own represents a test case, rather than the entire test scenario representing a test case.  Thus you have to structure your test cases quite carefully, in a reusable fashion (reusability which might not necessarily be realized in some cases), and you have to think about how you are going to flow test context between your different sentences.  The framework obviously caters for such things and is in no way limiting of what you are able to achieve; however, it is up to each project to decide whether SpecFlow is a good fit for it or not.  There is some fantastic documentation about SpecFlow and BizTalk here – http://social.technet.microsoft.com/wiki/contents/articles/12322.behaviour-driven-development-with-biztalk.aspx (and do be sure to check out the very thorough videos by Michael Stephenson).

One final thing which I never managed to play around with myself, however did have on my wish list to investigate, was to have my integration tests snoop on ETW tracing.  I have started using the CAT Instrumentation Framework in a large way (see http://blogs.msdn.com/b/appfabriccat/archive/2010/05/11/best-practices-for-instrumenting-high-performance-biztalk-solutions.aspx), especially in orchestrations, to try to break down the black box barriers and get a peek into the internal state of the orchestration’s flow.  The “Testing inside BizTalk by using ETW Tracing” project (http://btsloggingeventsinbi.codeplex.com/) leverages off the ETW tracing enabled by the CAT Instrumentation Framework, allowing your BizUnit tests to be aware of what the orchestrations are currently doing.  This allows you to take testing in a whole new direction and tear down the black box walls.

Please let me know if you want to discuss any of these topics in further detail, I will definitely aim to post more on testing tools in the near future.
