
One of the key areas to focus on when using Service Bus Relays to expose on-premises, BizTalk-hosted WCF services externally without making any firewall changes is availability. Service Bus Relay endpoints in Azure will only be enlisted upon initialization of your on-premises service: NOT when your application pool starts, NOT when you create your Service Bus namespace, but only when a request is made to the local endpoint of the service. Browsing (effectively an HTTP GET) to the local .svc file on the BizTalk VM serves just as well to enlist the Azure Service Bus Relay endpoint.

A common solution to this problem is to use the IIS 8.0 Application Initialization module, which has been really well documented here. This ensures that your .svc file is activated any time IIS resets, your application pool recycles, or your app domain reloads, which in turn results in your relay endpoint being enlisted in Azure. The Application Initialization module is also available for IIS 7.5 and can be downloaded here.

Application Initialization is an absolutely core part of any Service Bus Relay service, regardless of whether the backend service is a vanilla WCF service or one based on a BizTalk receive location. No WCF Service Bus Relay solution should be built without this in mind.

However, I found that when dealing with BizTalk there is an additional consideration to keep in mind. What I observed is that if your BizTalk environment encounters outages due to an inability to connect to the BizTalk Management database (because of network or database issues), the services (not the application pool, not IIS, the services themselves!) will get shut down by BizTalk. When BizTalk recovers from the outage, the services will not get spun up again until someone calls the local service endpoint or browses to the local .svc file. Because the IIS application pool has not restarted, Application Initialization will not kick in, and thus your endpoint will not be enlisted in Azure.

The solution I have put in place is to generate keepalive requests to the local endpoint every minute. This makes me feel a bit dirty, but I haven’t found a better solution yet, so I will detail it for you. If you can think of a better option, please do share it and I will update this post.

What I have done is set up a receive location based on the Scheduled Task adapter that generates a keepalive message every minute (I’ve chosen quite a frequent interval because availability is really key to my services; choose your interval appropriately). This message gets routed to a solicit-response WCF-WebHttp send port (actually, in my case, four send ports, one corresponding to each service exposed by Service Bus Relays that I want to keep alive). The endpoint address on this send port is pointed at the .svc file for the service I want to keep alive, and the “HTTP Method and URL Mapping” section of the send port’s configuration is set to “GET”, since we want to perform the equivalent of browsing to the .svc file.

[Screenshot: WCF-WebHttp send port configuration with the HTTP Method and URL Mapping set to GET]

One more thing of note: since this is an HTTP GET, the target endpoint is not expecting a request message body, so we must use the WCF-WebHttp adapter’s functionality to suppress it, as per the following screenshot.

[Screenshot: WCF-WebHttp send port configuration suppressing the request message body]

Because I’m not really interested in the response messages being returned to the send port, I route these messages to a file send port which uses the BRE Pipeline Framework to discard the message (utilizing the NullifyMessage vocabulary definition in the BREPipelineInstructions.SampleInstructions.HelperInstructions vocabulary).  The address on the send port can be set to any folder since no file will actually get written out by the adapter.
[Screenshot: send port configuration using the BRE Pipeline Framework to discard the response message]

Using the WCF-WebHttp adapter and pointing it directly to the .svc file helps minimize the requirement for additional development since you aren’t forced down the path of exposing additional operations in your service to cater for the keepalives.
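
For clarity, the keepalive described above boils down to nothing more than a periodic HTTP GET against the local .svc URL. The following minimal C# sketch is not part of my solution, just an illustration of the equivalent request being made outside of BizTalk, with a placeholder URL you would need to substitute.

    using System;
    using System.Net;
    using System.Threading;

    class KeepAlive
    {
        static void Main()
        {
            // Placeholder: replace with the local address of your own .svc file.
            const string localSvcUrl = "http://localhost/MyService/Service.svc";

            while (true)
            {
                try
                {
                    // An HTTP GET against the .svc file is enough to activate the service,
                    // which in turn enlists the Service Bus Relay endpoint in Azure.
                    var request = (HttpWebRequest)WebRequest.Create(localSvcUrl);
                    request.Method = "GET";
                    using (var response = (HttpWebResponse)request.GetResponse())
                    {
                        Console.WriteLine("{0:T} keepalive returned {1}", DateTime.Now, response.StatusCode);
                    }
                }
                catch (WebException ex)
                {
                    Console.WriteLine("{0:T} keepalive failed: {1}", DateTime.Now, ex.Message);
                }

                Thread.Sleep(TimeSpan.FromMinutes(1));
            }
        }
    }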


While working on a project using the WCF webHttpRelayBinding binding with SAS-based authentication over transport security, I found that my services were taking a very long time to spin up (30-60 seconds) and that my runtime performance was a bit less than optimal in terms of latency (and I had proven that the latency was not a result of my backend service). To give you an idea of what I was working with, my web.config file had contents similar to the below in the system.serviceModel element.

    <behaviors>
      <endpointBehaviors>
        <behavior name="sharedSecretClientCredentials">
          <transportClientEndpointBehavior>
            <tokenProvider type="SharedAccessSignature">
              <sharedAccessSignature keyName="keyname" key="key" />
            </tokenProvider>
          </transportClientEndpointBehavior>
        </behavior>
      </endpointBehaviors>
    </behaviors>
    <services>
      <service name="Microsoft.BizTalk.Adapter.Wcf.Runtime.BizTalkServiceInstance" behaviorConfiguration="ServiceBehaviorConfiguration">
        <endpoint name="RelayEndpoint" address="https://sbnamespace.servicebus.windows.net/service" binding="webHttpRelayBinding" bindingNamespace="http://tempuri.org/" bindingConfiguration="RelayEndpointConfig" behaviorConfiguration="sharedSecretClientCredentials" contract="Microsoft.BizTalk.Adapter.Wcf.Runtime.ITwoWayAsync" />
      </service>
    </services>
    <bindings>
      <webHttpRelayBinding>
        <binding name="RelayEndpointConfig">
          <security relayClientAuthenticationType="RelayAccessToken" mode="Transport" />
        </binding>
      </webHttpRelayBinding>     
    </bindings>

I didn’t observe such problems on my development VM (which I was running with pretty much no firewalls behind it), but I did observe this on my client’s UAT environment. This was in spite of following Microsoft’s guidelines, which suggest that you should have outbound TCP ports 9350-9354 open on your firewall to enable Service Bus connectivity. I went through an exercise using the PortQuiz website to prove that these ports were indeed accessible from the UAT server, so the performance issues were puzzling.

Next up, I spun up a Wireshark capture. To start with, I applied the below display filter in Wireshark to get rid of the extra noise.

tcp.port eq 9350 || tcp.port eq 9351 || tcp.port eq 9352 || tcp.port eq 9353 || tcp.port eq 9354

I then initialized some of my services (I shut them down forcibly and then spun them up again) to observe which ports were in use. I saw that the conversation with Service Bus was being initialized on port 9350 as expected, however that appeared to be the end of the story; I wasn’t seeing any comms on ports 9351-9354. I then right clicked one of the displayed records in Wireshark and chose “Conversation Filter -> IP”, which updates the filter such that it displays anything with a source or destination IP address matching those on the selected record.

This suddenly resulted in a whole lot more records being displayed and helped me get to the root of the issue.  What I was observing was that after Service Bus made the initial connection on port 9350, it attempted to continue the conversation on port 5671 (AMQPS or AMQP over SSL) which hadn’t been opened on the firewall.  This connection attempt was obviously failing, and the observed behavior was that some retries were attempted with fairly large gaps in between until Service Bus finally decided to fall back to port 443 (HTTPS) instead.  Pay particular attention to the lines in the following screenshot with the numbers 1681, 2105, and 2905 in the first column.

[Screenshot: Wireshark capture (Microsoft.ServiceBus 2.6.0.0) showing failed connection attempts on port 5671 and the eventual fallback to port 443; note the frames numbered 1681, 2105, and 2905]

This explained why my service was taking a long time to start up (because Service Bus was failing to connect via AMQPS and was going through retry cycles before falling back to HTTPS), and also explained why my runtime performance was lower than my expectation (because HTTPS is known to be slower than a raw TCP connection). However, this didn’t explain why Service Bus was attempting to use port 5671 rather than 9351-9354 as per the documentation.

Repeating the same test on my own VM showed that Service Bus was continuing the conversation on ports 9351-9354 as expected… So why the difference? On the suggestion of my colleague Mahindra, I compared the versions of the Microsoft.ServiceBus assembly across the two machines. You can do this by running “gacutil -l Microsoft.ServiceBus” in a Visual Studio command prompt, or by manually checking the GAC directory, which is typically “C:\Windows\Microsoft.NET\assembly\GAC_MSIL” for .NET 4+ assemblies.
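
If you’d rather not rely on gacutil, a quick alternative is to enumerate the version directories under the .NET 4+ GAC location yourself. This is just a diagnostic sketch, assuming the default Windows path mentioned above.

    using System;
    using System.IO;

    class ServiceBusGacCheck
    {
        static void Main()
        {
            // The .NET 4+ GAC location mentioned above; adjust the path if your
            // Windows directory is non-standard.
            string gacPath = @"C:\Windows\Microsoft.NET\assembly\GAC_MSIL\Microsoft.ServiceBus";

            if (!Directory.Exists(gacPath))
            {
                Console.WriteLine("Microsoft.ServiceBus is not in the .NET 4+ GAC on this machine.");
                return;
            }

            // Each subdirectory name embeds the version of an installed copy of the
            // assembly, e.g. something like v4.0_2.1.0.0__31bf3856ad364e35.
            foreach (string versionDir in Directory.GetDirectories(gacPath))
            {
                Console.WriteLine(Path.GetFileName(versionDir));
            }
        }
    }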

Voila. I found that I was running version 2.1.0.0 on the machine that was behaving correctly, and version 2.6.0.0 on the machine that was misbehaving. It appears that the protocol selection behavior for Service Bus Relays has changed somewhere between these two assembly versions. I have not pinpointed exactly which version this change occurred in, and I don’t yet know whether this change was by design or accidental. Either way, Microsoft have not yet updated their documentation, which means that others will be as confused as I was.

So what are your choices?

  1. You can downgrade to an older version of the assembly. 2.1.0.0 will definitely work for you, and you might be able to get away with a slightly higher version that is still below 2.6.0.0, but it will be up to you to establish which versions are acceptable since I haven’t managed to do this myself. You’ll also need to update the webHttpRelayBinding binding registration in your machine.config files (or wherever you’ve chosen to register it if you’ve gone for a more granular approach) to point to the older assembly.
  2. You can choose to stick with the latest version of the assembly and open up outbound TCP connections on port 5671 on your firewall.

I chose to stick with option #1 because I’m not sure at this stage whether this change in behavior is intentional or incidental, and also because my impression is that raw TCP over ports 9351-9354 would be faster than the AMQPS protocol.  You will find that option #2 is also functional.

With the older version of the assembly in play I could now see traffic on ports 9351-9354 as expected, my services were spinning up in less than a second, and latency was much more in line with my expectations.
[Screenshot: Wireshark capture (Microsoft.ServiceBus 2.1.0.0) showing the conversation continuing on ports 9351-9354]

In a recent post I mentioned how you could dynamically set HTTP headers on a WCF-WebHttp send port.  This post will take this a bit further and discuss how you can dynamically set the target endpoint address as well as the HTTP method.

First things first, just as I mentioned in the post about HTTP headers, you must ensure that the BTS.IsDynamicSend context property is set to true to enable the methods I am about to describe.

Overriding the endpoint address is a piece of cake.  Just set the BTS.OutboundTransportLocation context property to the target URL.

    public Microsoft.BizTalk.Message.Interop.IBaseMessage Execute(IPipelineContext pContext, Microsoft.BizTalk.Message.Interop.IBaseMessage pInMsg)
    {
        // Some non important logic here

        pInMsg.Context.Write("OutboundTransportLocation", "http://schemas.microsoft.com/BizTalk/2003/system-properties", "http://target.service.com/parameters/go/here");
        pInMsg.Context.Write("IsDynamicSend", "http://schemas.microsoft.com/BizTalk/2003/system-properties", true);
        return pInMsg;
    }

Overriding the HTTP method took a bit more trial and error.  In order to achieve this you must set the WCF.HttpMethodAndUrl context property to the desired HTTP method.

    public Microsoft.BizTalk.Message.Interop.IBaseMessage Execute(IPipelineContext pContext, Microsoft.BizTalk.Message.Interop.IBaseMessage pInMsg)
    {
        // Some non important logic here

        pInMsg.Context.Write("HttpMethodAndUrl", "http://schemas.microsoft.com/BizTalk/2006/01/Adapters/WCF-properties", "POST");
        pInMsg.Context.Write("IsDynamicSend", "http://schemas.microsoft.com/BizTalk/2003/system-properties", true);
        return pInMsg;
    }

So what exactly does this context property represent and why is it named Method and URL?  This context property represents the value that would normally be contained in the HTTP method and URL mapping section of a WCF-WebHttp send port’s configuration.
[Screenshot: HTTP Method and URL Mapping section of the WCF-WebHttp send port configuration]

If you inspect the help text underneath the text box in the above screenshot you’ll notice that there are two ways to set the HTTP method and URL mapping in this configuration.

  • By hardcoding the HTTP method in the text box such that every message being processed by the send port will have the same HTTP method.
  • By dynamically setting it (and the URL parameters as well) via an XML BtsHttpUrlMapping configuration, with the entry selected based on the value of the BTS.Operation context property on the message being processed.

Whatever value you have in the HTTP method and URL configuration of your send port will be put into the WCF.HttpMethodAndUrl context property on your message before it gets handled by the WCF adapter stack, which will apply the correct HTTP method and outbound URL.

So back to overriding the method: by setting a value on the WCF.HttpMethodAndUrl context property and ensuring that the HTTP method and URL configuration on the send port is blanked out, we are able to dynamically set the HTTP method on the outbound message.

I would be negligent if I didn’t mention that setting these two context properties (and BTS.IsDynamicSend) is a piece of cake when using the BRE Pipeline Framework, which has out-of-the-box capabilities to set these context properties. Being rules based, you can inspect the messages that are being processed and dynamically decide what the outbound URL and HTTP method should be based on message content and/or context.
[Screenshot: BRE Pipeline Framework rule setting the outbound URL and HTTP method context properties]

Theoretically we could even dynamically set the WCF.HttpMethodAndUrl context property value to contain a BtsHttpUrlMapping XML instance so that we could dynamically set the HTTP method and URL based on the BTS.Operation context property value.  Given that it is a lot easier to just set the BTS.OutboundTransportLocation and WCF.HttpMethodAndUrl context properties to set the HTTP method and URL dynamically, this method is just too complicated to have its usage justified in most scenarios.
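
For completeness, here’s a rough sketch of what that could look like in a pipeline component. The operation names and URLs below are purely hypothetical, and the mapping XML follows the same BtsHttpUrlMapping format used in the send port configuration dialog.

    public Microsoft.BizTalk.Message.Interop.IBaseMessage Execute(IPipelineContext pContext, Microsoft.BizTalk.Message.Interop.IBaseMessage pInMsg)
    {
        // Hypothetical operation names and URLs, for illustration only.
        string urlMapping =
            "<BtsHttpUrlMapping>" +
            "<Operation Name='GetCustomers' Method='GET' Url='/customers'/>" +
            "<Operation Name='AddCustomer' Method='POST' Url='/customers'/>" +
            "</BtsHttpUrlMapping>";

        // The adapter applies the Operation whose Name matches the BTS.Operation
        // context property on the message being processed.
        pInMsg.Context.Write("HttpMethodAndUrl", "http://schemas.microsoft.com/BizTalk/2006/01/Adapters/WCF-properties", urlMapping);
        pInMsg.Context.Write("IsDynamicSend", "http://schemas.microsoft.com/BizTalk/2003/system-properties", true);
        return pInMsg;
    }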

A big thanks to Mark and the rest of the crew for making this happen. It’s going to be a really exciting day and I encourage everyone in the Auckland area in the integration space to sign up.

Connected Pawns

The Auckland Connected Systems User Group (ACSG) will be holding a one-day mini-symposium on Saturday July 18th in Datacom’s cafe. Please see the ACSG site for further details and how to register. This is a free event and will be restricted to the first 70 people who register.

This will be a jam-packed day aimed at integration developers.

The keynote speaker is Bill Chesnut from Mexia, who will start the meeting with a presentation on the latest trends in integration. He will be followed by local integration experts who will present on new integration patterns in the cloud, integration with REST, SOA, ESB, CI with BizTalk, and integration tips. For a full list of speakers see this link. A list of the talks can be found below:

1. What’s new on integration – Bill Chesnut  9.00- 9.45

2. Azure App Services – connecting the…


I have just got through a rather difficult project that involved BizTalk calling a mixture of SOAP and REST services, and also exposing multiple RESTful services. The project had a very short time frame and we had many technical hurdles to cross. One of the toughest challenges was communicating appropriate details when we were having problems calling one of our target services from BizTalk, or getting appropriate details from the consumers of services exposed by BizTalk when they were having trouble calling said services.

With relatively simple services this is typically no big deal… You just ask the service consumer to share the message body with you so you can see whether it is properly formed or not.  If the service in question is being hosted by BizTalk and the message has got to the receive pipeline (i.e. it hasn’t failed on the adapter layer) then you could even take advantage of tracked message events and view the message body and associated context (which would also include custom SOAP/HTTP header values found in the tracked context properties).  You could always look at using tools like Fiddler to inspect HTTP requests as well.

On our project we decided to use RequestBin because the REST services we were exposing from BizTalk required a fair few custom HTTP headers, and were in fact exposed using Azure Service Bus Relays, thus requiring an Authorization header containing a SAS signature as well (note that Service Bus strips off this header before the request gets to BizTalk, so you will never see it in tracked message events). Moreover, we were calling the TripIt API, which is a RESTful web service that makes use of OAuth for authentication/authorization. Once again the outbound HTTP headers had to be just right or we would encounter errors. For HTTP POSTs, the XML request also had to be URL encoded and made into a request body parameter, further complicating matters. In some tricky scenarios when we encountered problems we needed to contact TripIt for help and provide them with the appropriate details.

RequestBin helped solve a lot of problems for us. At its core RequestBin is a service that tracks your entire HTTP request, including the HTTP headers and body. RequestBin will always reply to your requests with an HTTP 200 and a message body saying “ok”.

You start by browsing to the RequestBin website and choosing to create a RequestBin.
[Screenshot: creating a new RequestBin on the RequestBin website]

You will then be presented with a bin URL.
[Screenshot: the generated bin URL]
This is the URL the service consumer should direct their requests to instead of the actual target service when they want their requests to be tracked. If you append ?inspect to the end of the URL and browse to this amalgamated URL in your web browser, you can view a history of all requests made to the RequestBin.
[Screenshot: the request history displayed when browsing to the bin URL with ?inspect appended]
Having all parties that were consuming services get accustomed to providing RequestBin captures to the service providers made communication between the different parties much more streamlined. This was especially important given that some of the parties were in different countries and time zones, and every time a query resulted in more questions rather than an answer, another day was wasted for the project. I don’t think I will ever work on a project involving external web service providers or consumers without using RequestBin again.
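
If you want to generate a quick test capture yourself outside of BizTalk, something as simple as the following will do. The bin URL below is a made-up placeholder and the header names are only examples; substitute your own bin URL and whatever headers you care about.

    using System;
    using System.Net;

    class RequestBinTest
    {
        static void Main()
        {
            // Placeholder bin URL; substitute the one RequestBin generated for you.
            const string binUrl = "http://requestbin.example/1abc2d3";

            using (var client = new WebClient())
            {
                // Example custom headers; RequestBin records these along with the body.
                client.Headers[HttpRequestHeader.ContentType] = "application/xml";
                client.Headers.Add("x-example-header", "example-value");

                // UploadString performs an HTTP POST by default.
                string response = client.UploadString(binUrl, "<Ping>test</Ping>");

                // RequestBin always replies with HTTP 200 and the body "ok".
                Console.WriteLine(response);
            }
        }
    }

Browsing to the bin URL with ?inspect appended, as described above, will then show the headers and body exactly as they arrived.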

Of note is that your RequestBin will only be valid for 48 hours and will only keep track of your last 20 requests. If you want a more durable service then you might want to look into the Runscope Traffic Inspector, however note that this is a paid product (albeit with a free trial). This product goes well beyond the scope of RequestBin and appears to be comparable (at least in some sense) to Azure API Management or Sentinet. I would think that if your requirements are similar to mine then RequestBin is the ideal solution given how lightweight and easy it is to use, and chances are the limits are not going to bother you.

A final suggestion I’ll leave you with is to use RequestBin to inspect HTTP requests for SOAP services based on different bindings (obviously this is limited to the HTTP-based bindings). Doing so will give you a much deeper appreciation of how the request formats differ based on bindings. A very simple example of this is how the SOAP action is contained in HTTP headers for the basicHttpBinding but in SOAP headers for the wsHttpBinding. You can of course use WCF tracing for this as well, but RequestBin is so easy to use that it’s a pretty good alternative.
