BIZTALK TIPS: How to Cluster Message Queuing / How to Cluster MSDTC

Cluster support for the BizTalk Server MSMQ adapter is provided by running the MSMQ adapter handlers in a clustered instance of a BizTalk Host. If the MSMQ adapter handlers run in a clustered BizTalk Host instance, a clustered Message Queuing (MSMQ) resource should also be configured in the same cluster group as the clustered BizTalk Host when using the send or receive adapter for BizTalk Server 2006 R2 and earlier. This should be done for the following reasons:

  • MSMQ adapter receive handler – The MSMQ adapter receive handler for BizTalk Server 2006 R2 and earlier does not support remote transactional reads; only local transactional reads are supported. To complete local transactional reads, the receive handler must therefore run in a host instance that is local to the clustered MSMQ service.
  • MSMQ adapter send handler – To ensure the consistency of transactional sends made by the MSMQ adapter, the outgoing queue used by the send handler should be highly available, so that if the MSMQ service for the outgoing queue fails, it can be resumed. Configuring a clustered MSMQ resource and the MSMQ adapter send handlers in the same cluster group ensures that the outgoing queue is highly available and mitigates the possibility of message loss if the MSMQ service fails.

Many BizTalk Server operations are performed within the scope of a Microsoft Distributed Transaction Coordinator (MSDTC) transaction.

A clustered MSDTC resource must be available on the Windows Server cluster to provide transaction services for any clustered BizTalk Server components or dependencies. BizTalk Server components or dependencies that can be configured as Windows Server cluster resources include the following:

  • BizTalk Host
  • Enterprise Single Sign-On (SSO) service
  • SQL Server instance
  • Message Queuing (MSMQ) service
  • Windows File system

Windows Server 2003 only supports running MSDTC on cluster nodes as a clustered resource.

Windows Server 2008 supports running a local DTC on any server node in the failover cluster, even if a default clustered DTC resource is configured.
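If you want to check from a command prompt which DTC or MSMQ resources a cluster already has, the Cluster.exe tool that ships with both versions can list the cluster's groups and resources. A minimal sketch, run on a cluster node:

    rem List all cluster groups and their current state
    cluster group
    rem List all cluster resources, including any clustered DTC or MSMQ resources
    cluster res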


To configure Message Queuing for high availability (Windows Server 2008)

  1. To start the Failover Cluster Management program, click Start, Programs, Administrative Tools, and then click Failover Cluster Management.
  2. In the left pane, right-click Failover Cluster Management, and then click Manage a Cluster.
  3. In the Select a cluster to manage dialog box, enter the cluster to be managed, and then click OK.
  4. To start the High Availability Wizard, in the left pane, click to expand the cluster, right-click Services and Applications, and then click Configure a Service or Application.
  5. If the Before You Begin page of the High Availability Wizard is displayed, click Next.
  6. On the Select Service or Application page, click Message Queuing, and then click Next.
  7. On the Client Access Point page, enter a value for Name, enter an available IP address under Address, and then click Next.
  8. On the Select Storage page, click a disk resource, and then click Next.
  9. On the Confirmation page, click Next.
  10. On the Summary page, click Finish.
  11. To create a clustered MSDTC resource on the cluster so that there is transaction support for the clustered MSMQ resource, follow the steps in the next procedure.

 

To configure the Distributed Transaction Coordinator (DTC) for high availability (Windows Server 2008)


  1. To start the Failover Cluster Management program, click Start, Programs, Administrative Tools, and then click Failover Cluster Management.
  2. In the left pane, right-click Failover Cluster Management, and then click Manage a Cluster.
  3. In the Select a cluster to manage dialog box, enter the cluster to be managed, and then click OK.
  4. To start the High Availability Wizard, in the left pane, click to expand the cluster, right-click Services and Applications, and then click Configure a Service or Application.
  5. If the Before You Begin page of the High Availability Wizard is displayed, click Next.
  6. On the Select Service or Application page, click Distributed Transaction Coordinator, and then click Next.
  7. On the Client Access Point page, enter a value for Name, enter an available IP address under Address, and then click Next.
  8. On the Select Storage page, click to select a disk resource, and then click Next.
  9. On the Confirmation page, click Next.
  10. On the Summary page, click Finish.

 

To configure the MSDTC transaction mode as Incoming Caller Authentication Required (Windows Server 2008)


  1. To open the Component Services management console, click Start, Programs, Administrative Tools, and then click Component Services.
  2. Click to expand Component Services, click to expand Computers, click to expand My Computer, click to expand Distributed Transaction Coordinator, click to expand Clustered DTCs, right-click the clustered DTC resource, and then click Properties.
  3. Click the Security tab.
  4. If network DTC access is not already enabled, click to enable the Network DTC Access option. Network DTC access must be enabled to accommodate transactional support for BizTalk Server.
  5. Under Transaction Manager Communication, enable the following options:
    • Allow Inbound
    • Allow Outbound
    • Incoming Caller Authentication Required
  6. After changing security settings for the clustered distributed transaction coordinator resource, the resource will be restarted. Click Yes and OK when prompted.
  7. Close the Component Services management console.

 

To add a Message Queuing resource to an existing cluster group (Windows Server 2003)

  1. To start the Cluster Administrator program, click Start, point to Programs, point to Administrative Tools, and then click Cluster Administrator.
  2. Click to select a cluster group other than the quorum group that contains a Network Name and a Physical Disk resource.
  3. On the File menu, point to New, and then click Resource.
  4. Enter a value for the Name field of the New Resource dialog box, for example, MSMQ.
  5. In the Resource type drop-down list, click Message Queuing, and then click Next.
  6. In the Possible Owners dialog box, include each cluster node as a possible owner of the message queuing resource, and then click Next.
  7. In the Dependencies dialog box, add a dependency to a network name resource and the disk resource associated with this group, and then click Finish.
  8. Click OK in the dialog box that indicates that the resource was created successfully.
  9. To create a clustered MSDTC resource on the cluster so that there is transaction support for the clustered MSMQ resource, follow the steps in the next procedure.
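As a command-line alternative to the Cluster Administrator steps above, the same Message Queuing resource can be created with Cluster.exe. This is a sketch; the group name, resource name, dependency names, and node name are examples and should be replaced with your own:

    rem Create a Message Queuing resource in an existing cluster group
    cluster res "MSMQ" /create /group:"BizTalk Group" /type:"Message Queuing"
    rem Add each cluster node as a possible owner (repeat for every node)
    cluster res "MSMQ" /addowner:NODE1
    rem Add dependencies on the group's network name and disk resources
    cluster res "MSMQ" /adddep:"Network Name"
    cluster res "MSMQ" /adddep:"Disk Q:"
    rem Bring the new resource online
    cluster res "MSMQ" /online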

 

To add an MSDTC resource to an existing cluster group (Windows Server 2003)


  1. To start the Cluster Administrator program, click Start, Programs, Administrative Tools, and then click Cluster Administrator.
  2. Click to select a cluster group other than the quorum group that contains a Physical Disk, IP Address, and Network Name resource. If such a group does not already exist, create one before continuing.
  3. On the File menu, point to New, and then click Resource.
  4. Enter a value for the Name field of the New Resource dialog box, for example, MSDTC.
  5. In the Resource type drop-down list, click Distributed Transaction Coordinator, and then click Next.
  6. In the Possible Owners dialog box, include each cluster node as a possible owner of the distributed transaction coordinator resource, and then click Next.
  7. In the Dependencies dialog box, add a dependency to a network name resource and the disk resource associated with this group, and then click Finish.
  8. In the dialog box that indicates that the resource was created successfully, click OK.
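The equivalent Cluster.exe commands for the MSDTC resource look much the same; again, the group, resource, and dependency names below are examples:

    rem Create a Distributed Transaction Coordinator resource in the same group
    cluster res "MSDTC" /create /group:"BizTalk Group" /type:"Distributed Transaction Coordinator"
    rem Add dependencies on the group's network name and disk resources
    cluster res "MSDTC" /adddep:"Network Name"
    cluster res "MSDTC" /adddep:"Disk Q:"
    rem Bring the new resource online
    cluster res "MSDTC" /online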

 

To configure the MSDTC transaction mode as Incoming Caller Authentication Required (Windows Server 2003)


  1. To open the Component Services management console, click Start, Programs, Administrative Tools, and then click Component Services.
  2. Click to expand Component Services, and then click to expand Computers.
  3. Right-click My Computer, and then select the Properties menu item to display the My Computer Properties dialog box.
  4. Click the MSDTC tab.
  5. To display the Security Configuration dialog box, click Security Configuration.
  6. If network DTC access is not already enabled, click to enable the Network DTC Access option. Network DTC access must be enabled to accommodate transactional support for BizTalk Server.
  7. Under Transaction Manager Communication, enable the following options:
    • Allow Inbound
    • Allow Outbound
    • Incoming Caller Authentication Required
  8. Stop and restart the Distributed Transaction Coordinator service.
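The same settings can be scripted. The sketch below assumes the standard MSDTC registry locations and value names (the NetworkDtcAccess* values under the Security subkey, and the RPC security values directly under the MSDTC key), with Incoming Caller Authentication Required corresponding to AllowOnlySecureRpcCalls=0, FallbackToUnsecureRPCIfNecessary=1, and TurnOffRpcSecurity=0:

    rem Enable network DTC access, inbound and outbound
    reg add "HKLM\SOFTWARE\Microsoft\MSDTC\Security" /v NetworkDtcAccess /t REG_DWORD /d 1 /f
    reg add "HKLM\SOFTWARE\Microsoft\MSDTC\Security" /v NetworkDtcAccessInbound /t REG_DWORD /d 1 /f
    reg add "HKLM\SOFTWARE\Microsoft\MSDTC\Security" /v NetworkDtcAccessOutbound /t REG_DWORD /d 1 /f
    rem Incoming Caller Authentication Required (assumed value mapping)
    reg add "HKLM\SOFTWARE\Microsoft\MSDTC" /v AllowOnlySecureRpcCalls /t REG_DWORD /d 0 /f
    reg add "HKLM\SOFTWARE\Microsoft\MSDTC" /v FallbackToUnsecureRPCIfNecessary /t REG_DWORD /d 1 /f
    reg add "HKLM\SOFTWARE\Microsoft\MSDTC" /v TurnOffRpcSecurity /t REG_DWORD /d 0 /f
    rem Restart the DTC service so the changes take effect
    net stop msdtc
    net start msdtc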


Windows Server 2008 : Configuring Server Clusters

Server Cluster Fundamentals

In Windows Server 2008, you can configure three types of server groups for load balancing, scalability, and high availability. First, a round-robin distribution group is a set of computers that uses DNS to provide basic load balancing with minimal configuration requirements. Next, a Network Load Balancing (NLB) cluster (also called an NLB farm) is a group of servers used not only to provide load balancing but also to increase scalability. Finally, a failover cluster can be used to increase the availability of an application or service in the event of a server failure.

Note: What is load balancing?

Load balancing is a means of distributing incoming connection requests to two or more servers in a manner that is transparent to users. Load balancing can be implemented with hardware, software, or a combination of both.

Round-Robin Distribution

Round-robin DNS is a simple method for distributing a workload among multiple servers. In round-robin, a DNS server is configured with multiple resource records that resolve the same server name to different IP addresses. When clients query the DNS server to resolve the name (find the address), the DNS server responds by cycling through the records one at a time, pointing each successive client to a different address and a different machine.

For example, suppose that a DNS server authoritative for the DNS domain contoso.com is configured with two separate resource records, each resolving the name web.contoso.com by pointing to a different server, as shown in Figure 1. When the first client (Client1) queries the DNS server to resolve the web.contoso.com name, the DNS server answers by pointing the client to the server named websrv1 located at the 192.168.3.11 address. This is the information associated with the first DNS record matching “web.” When the next client, Client2, queries the DNS server to resolve the same name (web.contoso.com), the DNS server answers the query with the information provided in the second record matching “web.” This second record points to a server named websrv2, which is located at the 192.168.3.12 address. If a third client then queries the DNS server for the same name, the server will respond with information in the first record again.

Figure 1. Round-robin uses DNS to distribute the client load between two or more servers

The purpose of DNS round-robin is to load balance client requests among servers. Its main advantage is that it is very easy to configure. Round-robin DNS is enabled by default in most DNS servers, so to configure this simple sort of load balancing, you only need to create the appropriate DNS records on the DNS server.
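For example, the two records from Figure 1 could be created with the Dnscmd tool; dns01 below is a placeholder for your DNS server's name:

    rem Two A records for the same name, pointing at different servers
    dnscmd dns01 /RecordAdd contoso.com web A 192.168.3.11
    dnscmd dns01 /RecordAdd contoso.com web A 192.168.3.12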

However, there are serious limitations to round-robin as a load balancing mechanism. The biggest drawback is that if one of the target servers goes down, the DNS server does not respond to this event, and it will keep directing clients to the inactive server until a network administrator removes the DNS record from the DNS server. Another drawback is that every record is given equal weight, regardless of whether one target server is more powerful than another or a given server is already busy. A final drawback is that round-robin does not always function as expected. Because DNS clients cache query responses from servers, a DNS client by default will keep connecting to the same target server as long as the cached response stays active.
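You can see this caching effect from a client: tools that use the Windows resolver (ping, for example) keep returning the same cached address until the cache is flushed, while a fresh query gives the DNS server a chance to hand out the next record. A quick check, using the example names above:

    rem Resolve the name through the Windows resolver (answer is cached locally)
    ping -n 1 web.contoso.com
    rem Flush the resolver cache, then resolve again to receive the next record
    ipconfig /flushdns
    ping -n 1 web.contoso.com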

Network Load Balancing

An installable feature of Windows Server 2008, NLB transparently distributes client requests among servers in an NLB cluster by using virtual IP addresses and a shared name. From the perspective of the client, the NLB cluster appears to be a single server. NLB is a fully distributed solution in that it does not use a centralized dispatcher.

In a common scenario, NLB is used to create a Web farm—a group of computers working to support a Web site or set of Web sites. However, NLB can also be used to create a terminal server farm, a VPN server farm, or an ISA Server firewall cluster. Figure 2 shows a basic configuration of an NLB Web farm located behind an NLB firewall cluster.

Figure 2. Basic diagram for two connected NLB clusters

As a load balancing mechanism, NLB provides significant advantages over round-robin DNS. First of all, in contrast to round-robin DNS, NLB automatically detects servers that have been disconnected from the NLB cluster and then redistributes client requests to the remaining live hosts. This feature prevents clients from sending requests to the failed servers. Another difference between NLB and round-robin DNS is that in NLB, you have the option to specify a load percentage that each host will handle. Clients are then statistically distributed among hosts so that each server receives its percentage of incoming requests.

Beyond load balancing, NLB also supports scalability. As the demand for a network service such as a Web site grows, more servers can be added to the farm with only a minimal increase in administrative overhead.

Failover Clustering

A failover cluster is a group of two or more computers used to prevent downtime for selected applications and services. The clustered servers (called nodes) are connected by physical cables to each other and to shared disk storage. If one of the cluster nodes fails, another node begins to take over service for the lost node in a process known as failover. As a result of failover, users connecting to the server experience minimal disruption in service.

Servers in a failover cluster can function in a variety of roles, including the roles of file server, print server, mail server, or database server, and they can provide high availability for a variety of other services and applications.

In most cases, the failover cluster includes a shared storage unit that is physically connected to all the servers in the cluster, although any given volume in the storage is accessed by only one server at a time.

Figure 3 illustrates the process of failover in a basic, two-node failover cluster.

Figure 3. In a failover cluster, when one server fails, another takes over, using the same storage

In a failover cluster, storage volumes or LUNs that are exposed to the nodes in a cluster must not be exposed to other servers, including servers in another cluster. Figure 4 illustrates this concept by showing two two-node failover clusters dividing up storage on a SAN.

Figure 4. Each failover cluster must isolate storage from other servers

Creating a Failover Cluster

Creating a failover cluster is a multistep process. The first step is to configure the physical hardware for the cluster. Then, you need to install the Failover Clustering feature and run the Failover Cluster Validation Tool, which ensures that the hardware and software prerequisites for the cluster are met. Next, once the configuration has been validated by the tool, create the cluster by running the Create Cluster Wizard. Finally, to configure the behavior of the cluster and to define the availability of selected services, you need to run the High Availability Wizard.

Preparing Failover Cluster Hardware

Failover clusters have fairly elaborate hardware requirements. To configure the hardware, review the following list of requirements for the servers, network adapters, cabling, controllers, and storage:

  • Servers: Use a set of matching computers that consist of the same or similar components (recommended).
  • Network adapters and cabling: The network hardware, like other components in the failover cluster solution, must be compatible with Windows Server 2008. If you use iSCSI, each network adapter must be dedicated to either network communication or iSCSI, not both. In the network infrastructure that connects your cluster nodes, avoid having single points of failure. There are multiple ways of accomplishing this. You can connect your cluster nodes by multiple, distinct networks. Alternatively, you can connect your cluster nodes with one network constructed with teamed network adapters, redundant switches, redundant routers, or similar hardware that removes single points of failure.
  • Device controllers or appropriate adapters for the storage: If you are using Serial Attached SCSI or Fibre Channel (FC) in all clustered servers, the mass-storage device controllers that are dedicated to the cluster storage should be identical. They should also use the same firmware version. If you are using iSCSI, each clustered server must have one or more network adapters or HBAs that are dedicated to the cluster storage. The network you use for iSCSI cannot be used for network communication. In all clustered servers, the network adapters you use to connect to the iSCSI storage target should be identical. It is also recommended that you use Gigabit Ethernet or higher. (Note also that for iSCSI, you cannot use teamed network adapters.)
  • Shared storage compatible with Windows Server 2008: For a two-node failover cluster, the storage should contain at least two separate volumes (LUNs), configured at the hardware level. The first volume will function as the witness disk, a volume that holds a copy of the cluster configuration database. Witness disks, known as quorum disks in Microsoft Windows Server 2003, are used in many but not all cluster configurations.

    The second volume will contain the files that are being shared to users. Storage requirements include the following:

    • To use the native disk support included in failover clustering, use basic disks, not dynamic disks.
    • It is recommended that you format the storage partitions with NTFS. (For the witness disk, the partition must be NTFS.)

    When deploying a storage area network (SAN) with a failover cluster, be sure to confirm with manufacturers and vendors that the storage, including all drivers, firmware, and software used for the storage, is compatible with failover clusters in Windows Server 2008.

After you have met the hardware requirements and connected the cluster servers to storage, you can install the Failover Clustering feature.

Note: What is the quorum configuration?

The quorum configuration in a failover cluster determines the number of failures that the cluster can sustain before the cluster stops running. In Windows Server 2008, you can choose from among four quorum configurations:

  • Node Majority, which is recommended for clusters with an odd number of nodes. The failover cluster runs as long as a majority of the nodes are running.
  • Node and Disk Majority, which is recommended for clusters with an even number of nodes. The failover cluster uses a witness disk as a tiebreaker, and the cluster runs as long as a majority of the nodes and the witness disk are online and available.
  • Node and File Share Majority, which is recommended for clusters that have an even number of nodes and that lack access to a witness disk. A witness file share is used as a tiebreaker, and the cluster runs as long as a majority of the nodes and the file share are online and available.
  • No Majority: Disk Only, which is generally not recommended. The failover cluster remains running as long as a single node and its storage remain online.
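To see which quorum configuration an existing cluster is using, Cluster.exe can display the current quorum resource. A quick sketch:

    rem Display the cluster's current quorum resource and configuration
    cluster /quorum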

Installing the Failover Clustering Feature

Before creating a failover cluster, you have to install the Failover Clustering feature on all nodes in the cluster.

To install the Failover Clustering feature, begin by clicking Add Features in Server Manager. In the Add Features Wizard, select the Failover Clustering check box. Click Next, and then follow the prompts to install the feature.
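The feature can also be installed from a command prompt with ServerManagerCmd; this sketch assumes Failover-Clustering is the feature ID:

    rem Install the Failover Clustering feature on this node
    servermanagercmd -install Failover-Clustering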

Once the feature is installed on all nodes, you are ready to validate the hardware and software configuration.

Validating the Cluster Configuration

Before you create a new cluster, use the Validate A Configuration Wizard to ensure that your nodes meet the hardware and software prerequisites for a failover cluster.

To run the Validate A Configuration Wizard, first open Failover Cluster Management from the Administrative Tools program group. In Failover Cluster Management, click Validate A Configuration in the Management area or the Actions pane, as shown in Figure 5.

Figure 5. Validating failover server prerequisites

After the wizard completes, make any configuration changes if necessary, and then rerun the test until the configuration is successfully validated. After the cluster prerequisites have been validated, you can use the Create Cluster Wizard to create the cluster.

Running the Create Cluster Wizard

The next step in creating a cluster is to run the Create Cluster Wizard. The Create Cluster Wizard installs the software foundation for the cluster, converts the attached storage into cluster disks, and creates a computer account in Active Directory for the cluster. To launch this tool, in Failover Cluster Management, click Create A Cluster in the Management area or Actions pane.

In the Create Cluster Wizard, simply enter the names of the cluster nodes when prompted. The wizard then enables you to name and assign an IP address for the cluster, after which the cluster is created.
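A command-line equivalent exists as well. The sketch below is hedged: the cluster name, node names, and address are examples, and the exact Cluster.exe switch names should be confirmed with cluster /create /? on your build:

    rem Create a two-node cluster with a name and a static IP address
    cluster /cluster:CLUSTER1 /create /nodes:"NODE1 NODE2" /ipaddr:192.168.3.20/255.255.255.0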

After the wizard completes, you need to configure the services or applications for which you wish to provide failover. To perform this aspect of the configuration, run the High Availability Wizard.

Running the High Availability Wizard

The High Availability Wizard configures failover service for a particular service or application. To launch the High Availability Wizard, in Failover Cluster Management, click Configure A Service Or Application in the Action pane or Configure area.

To complete the High Availability Wizard, perform the following steps:

  1. On the Before You Begin page, review the text, and then click Next.
  2. On the Select Service Or Application page, select the service or application for which you want to provide failover service (high availability), and then click Next.
  3. Follow the instructions in the wizard to specify required details about the chosen service. For example, for the File Server service, you would need to specify the following:
    • A name for the clustered file server
    • Any IP address information that is not automatically supplied by your DHCP settings (for example, a static IPv4 address for this clustered file server)
    • The storage volume or volumes that the clustered file server should use
  4. After the wizard runs and the Summary page appears, to view a report of the tasks the wizard performed, click View Report.
  5. To close the wizard, click Finish.

Testing the Failover Cluster

After you complete the wizard, test the failover cluster in Failover Cluster Management. In the console tree, make sure Services and Applications is expanded, and then select the service you have just added with the High Availability Wizard. Right-click the clustered service, click Move This Service Or Application To Another Node, and then click the available choice of node. You can observe the status changes in the center pane of the snap-in as the clustered service instance is moved. If the service moves successfully, the failover is functional.
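The same failover test can be driven from a command prompt; the group name and node name below are examples:

    rem Move a clustered service (resource group) to another node
    cluster group "FileServer1" /moveto:NODE2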

Configuring an NLB Cluster

Creating an NLB cluster is a relatively simple process. To begin, install Windows Server 2008 on two servers and then, on both servers, configure the service or application (such as IIS) that you want to provide to clients. Be sure to create identical configurations because you want the client experience to be identical regardless of which server users are connected to.

The next step in configuring an NLB cluster is to install the Network Load Balancing feature on all servers that you want to join the NLB cluster. For this step, simply open Server Manager, and then click Add Features. In the Add Features Wizard, select Network Load Balancing, click Next, and then follow the prompts to install.
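As with failover clustering, the feature can be added from a command prompt; this sketch assumes NLB is the feature ID used by ServerManagerCmd:

    rem Install the Network Load Balancing feature on this server
    servermanagercmd -install NLB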

The final step in creating an NLB cluster is to use Network Load Balancing Manager to configure the cluster. This procedure is outlined in the following section.

▸ To create an NLB cluster

  1. Launch Network Load Balancing Manager from Administrative Tools. (You can also open Network Load Balancing Manager by typing Nlbmgr.exe at a command prompt.)
  2. In the Network Load Balancing Manager console tree, right-click Network Load Balancing Clusters, and then click New Cluster.
  3. Connect to the host that is to be a part of the new cluster. In Host, enter the name of the host, and then click Connect.
  4. Select the interface you want to use with the cluster, and then click Next. (The interface hosts the virtual IP address and receives the client traffic to load balance.)
  5. On the Host Parameters page, select a value in the Priority (Unique host identifier) drop-down list. This parameter specifies a unique ID for each host. The host with the lowest numerical priority among the current members of the cluster handles all the cluster's network traffic not covered by a port rule. You can override these priorities or provide load balancing for specific ranges of ports by specifying rules on the Port Rules tab of the Network Load Balancing Properties dialog box.
  6. On the Host Parameters page, verify that the dedicated IP address from the chosen interface is visible in the list. If not, use the Add button to add the address, and then click Next to continue.
  7. On the Cluster IP Addresses page, click Add to enter the cluster IP address shared by every host in the cluster. NLB adds this IP address to the TCP/IP stack on the selected interface of all hosts chosen to be part of the cluster. Click Next to continue.

Note: Use only static addresses

NLB doesn’t support Dynamic Host Configuration Protocol (DHCP). NLB disables DHCP on each interface it configures, so the IP addresses must be static.

  8. On the Cluster Parameters page, in the Cluster IP Configuration area, verify appropriate values for IP address and subnet mask, and then type a full Internet name (fully qualified domain name) for the cluster. Note that for IPv6 addresses, a subnet mask is not needed. Note also that a full Internet name is not needed when using NLB with Terminal Services.
  9. On the Cluster Parameters page, in the Cluster Operation Mode area, click Unicast to specify that a unicast media access control (MAC) address should be used for cluster operations. In unicast mode, the MAC address of the cluster is assigned to the network adapter of the computer, and the built-in MAC address of the network adapter is not used. It is recommended that you accept the unicast default settings. Click Next to continue.
  10. On the Port Rules page, click Edit to modify the default port rules. Configure the rules as follows:
    • In the Port Range area, specify a range corresponding to the service you want to provide in the NLB cluster. For example, for Web services, type 80 to 80 so that the new rule applies only to HTTP traffic. For Terminal Services, type 3389 to 3389 so that the new rule applies only to RDP traffic.
    • In the Protocols area, select TCP or UDP, as needed, as the specific TCP/IP protocol the port rule should cover. Only the network traffic for the specified protocol is affected by the rule. Traffic not affected by the port rule is handled by the default host.
    • In the Filtering mode area, select Multiple Host if you want multiple hosts in the cluster to handle network traffic for the port rule. Choose Single Host if you want a single host to handle the network traffic for the port rule.
    • In Affinity (which applies only to the Multiple Host filtering mode), select None if you want multiple connections from the same client IP address to be handled by different cluster hosts (no client affinity). Leave Single selected if you want NLB to direct multiple requests from the same client IP address to the same cluster host. Select Network if you want NLB to direct multiple requests from the local subnet to the same cluster host.
  11. After you add the port rule, click Finish to create the cluster.
To add more hosts to the cluster, right-click the new cluster, and then click Add Host To Cluster. Configure the host parameters (including host priority and dedicated IP addresses) for the additional hosts by following the same instructions that you used to configure the initial host. Because you are adding hosts to an already configured cluster, all the cluster-wide parameters remain the same.
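Once the cluster is up, you can confirm its state from any host with the Nlb.exe command-line tool. A quick check:

    rem Display the current state of the NLB cluster and its hosts
    nlb query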
