
Step-by-Step: Configuring Windows Server 8 Beta iSCSI Target Software for Use in a Cluster

If you have just downloaded the bits for Windows Server 8 Beta, you are probably anxious to try out all the great new features, including Windows Storage Spaces, Continuously Available File Servers and Hyper-V availability. Many of those new features require that you become familiar with Windows Server Failover Clustering. In addition, things like Storage Spaces require access to additional storage to simulate JBODs. The Windows iSCSI Target Software is a great way to provide storage for Failover Clustering and Storage Spaces in a lab environment so you can play around with these new features.

This Step-by-Step Article assumes you have three Windows Server 8 servers running in a domain environment. My lab environment consists of the following:

Hardware
My three servers are all virtual machines running on VMware Workstation 8 on top of my Windows 7 laptop with 16 GB of RAM. See my article on how to install Windows Server 8 on VMware Workstation 8.

Server Names and Roles
PRIMARY.win8.local – my cluster node 1
SECONDARY.win8.local – my cluster node 2
WIN-EHVIK0RFBIU.win8.local – my domain controller (guess who forgot to rename his server before promoting it to a domain controller)

Network
192.168.37.X/24 – my public network, also used to carry iSCSI traffic
10.X.X.X/8 – a private network defined just between PRIMARY and SECONDARY for cluster communication

This article walks you through, step by step, how to do the following:

The article consists mostly of screenshots, but I have also added notes where needed.

Install the iSCSI Target Role on your Domain Controller

Click on Add roles and features to install the iSCSI target role.

You will find the iSCSI Target Server role service under File and Storage Services/File Services. Just select iSCSI Target Server and click Next to begin the installation.
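If you prefer the command line, the same role can be added from PowerShell. This is a hedged sketch: the feature name below (FS-iSCSITarget-Server) is the one used by later Windows Server 2012 builds, so verify it on the beta first.

    Import-Module ServerManager
    Get-WindowsFeature *iSCSI*                  # confirm the exact feature name on your build
    Add-WindowsFeature FS-iSCSITarget-Server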

Configure the iSCSI Target

The iSCSI target software is managed under File and Storage Services on the Server Manager dashboard; click on that to continue.

The first step in creating an iSCSI target is to create an iSCSI Virtual Disk. Click on Launch the New Virtual Disk wizard to create a virtual disk.
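The wizard steps can also be scripted. Here is a hedged sketch using the iSCSI Target cmdlets that ship with the role (cmdlet and parameter names are as documented for Windows Server 2012, so treat them as assumptions on the beta; the path, size, target name and node IQNs are examples based on my lab machine names):

    Import-Module IscsiTarget

    # Back the LUN with a fixed-size VHD (path and size are examples).
    New-IscsiVirtualDisk -Path "C:\iSCSIVirtualDisks\Quorum.vhd" -Size 1GB

    # Create a target that both cluster nodes are allowed to connect to.
    New-IscsiServerTarget -TargetName "ClusterTarget" -InitiatorIds @(
        "IQN:iqn.1991-05.com.microsoft:primary.win8.local",
        "IQN:iqn.1991-05.com.microsoft:secondary.win8.local")

    # Map the virtual disk to the target.
    Add-IscsiVirtualDiskTargetMapping -TargetName "ClusterTarget" -Path "C:\iSCSIVirtualDisks\Quorum.vhd"

Repeat the New-IscsiVirtualDisk and mapping steps for each additional LUN you want to expose to the cluster.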

Connect to the iSCSI Target using the iSCSI Initiator
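On each cluster node you can make the connection either from the iSCSI Initiator control panel or from PowerShell. This is a hedged sketch using the initiator cmdlets included with Windows Server 2012 (treat the names as assumptions on the beta); the portal address is my lab domain controller, which hosts the target.

    Start-Service MSiSCSI
    Set-Service MSiSCSI -StartupType Automatic
    New-IscsiTargetPortal -TargetPortalAddress "WIN-EHVIK0RFBIU.win8.local"
    # Log on to every target offered to this initiator and make the login persistent.
    Get-IscsiTarget | ForEach-Object { Connect-IscsiTarget -NodeAddress $_.NodeAddress -IsPersistent $true }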

Format the iSCSI Target
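Once the initiator is connected, the new disk appears offline and uninitialized. Here is a hedged sketch of bringing it into service with the Windows Server 2012 storage cmdlets (the disk number and label are examples; check Get-Disk for the disk that actually arrived over iSCSI):

    Get-Disk | Where-Object { $_.PartitionStyle -eq 'RAW' }   # find the new, uninitialized disk
    Initialize-Disk -Number 2
    New-Partition -DiskNumber 2 -UseMaximumSize -AssignDriveLetter |
        Format-Volume -FileSystem NTFS -NewFileSystemLabel "ClusterDisk"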

Connect to the shared iSCSI Target from the SECONDARY Server

Configure Windows Server 8 Failover Clustering



Step-by-Step: Configuring a 2-node multi-site cluster on Windows Server 2008 R2 – Part 1

Creating your cluster and configuring the quorum: Node and File Share Majority

Introduction

Welcome to Part 1 of my series "Step-by-Step: Configuring a 2-node multi-site cluster on Windows Server 2008 R2". Before we jump right in to the details, let's take a moment to discuss what exactly a multi-site cluster is and why you would want to implement one. Microsoft has a great webpage and white paper that you will want to download to get all of the details, so I won't repeat everything here. Basically, a multi-site cluster is a disaster recovery solution and a high availability solution rolled into one. A multi-site cluster gives you the best recovery point objective (RPO) and recovery time objective (RTO) available for your critical applications. With the introduction of Windows Server 2008 failover clustering, a multi-site cluster has become much more feasible thanks to cross-subnet failover and support for high-latency network communications.

I mentioned “cross-subnet failover” as a great new feature of Windows Server 2008 Failover Clustering, and it is a great new feature. However, SQL Server has not yet embraced this functionality, which means you will still be required to span your subnet across sites in a SQL Server multi-site cluster. As of Tech-Ed 2009, the SQL Server team reported that they plan on supporting this feature, but they say it will come sometime after SQL Server 2008 R2 is released. For the foreseeable future you will be stuck with spanning your subnet across sites in a SQL Server multi-site cluster. There are a few other network related issues that you need to consider as well, such as redundant communication paths, bandwidth and file share witness placement.

Network Considerations

All Microsoft failover clusters must have redundant network communication paths. This ensures that a failure of any one communication path will not result in a false failover and ensures that your cluster remains highly available. A multi-site cluster has this requirement as well, so you will want to plan your network with that in mind. There are generally two things that will have to travel between nodes: replication traffic and cluster heartbeats. In addition to that, you will also need to consider client connectivity and cluster management activity. You will want to be sure that whatever networks you have in place, you are not overwhelming the network or you will have unreliable behavior. Your replication traffic will most likely require the greatest amount of bandwidth; you will need to work with your replication vendor to determine how much bandwidth is required.

With your redundant communication paths in place, the last thing you need to consider is your quorum model. For a 2-node multi-site cluster configuration, the Microsoft recommended configuration is a Node and File Share Majority quorum. For a detailed description of the quorum types, have a look at this article.

The most common cause of confusion with the Node and File Share Majority quorum is the placement of the File Share Witness. Where should I put the server that is hosting the file share? Let’s look at the options.

Option 1 – place the file share in the primary site.

This is certainly a valid option for disaster recovery, but not so much for high availability. If the entire site fails (including the primary node and the file share witness), the secondary node in the secondary site will not come into service automatically; you will need to force the quorum online manually. This is because it will be the only remaining vote in the cluster, and one out of three does not make a majority! If you can live with a manual step being involved for recovery in the event of a disaster, then this configuration may be OK for you.

Option 2 – place the file share in the secondary site.

This is not such a good idea. Although it solves the problem of automatic recovery in the event of a complete site loss, it exposes you to the risk of a false failover. Consider this: what happens if your secondary site goes down? In this case, your primary server (Node1) will also go offline, as it is now only a single node in the primary site and no longer has a node majority. I can see no good reason to implement this configuration, as there is too much risk involved.

Option 3 – place the file share witness in a 3rd geographic location

This is the preferred configuration, as it allows for automatic failover in the event of a complete site loss and eliminates the possibility of a failure of the secondary site causing the primary node to go offline. By having a 3rd site host the file share witness you have eliminated any one site as a single point of failure, so the cluster will act as you expect and automatic failover in the event of a site loss is possible. Identifying a 3rd geographic location can be challenging for some companies, but with the advent of cloud-based utility computing it is well within the reach of all companies to put a file share witness in the cloud and have the resiliency required for effective multi-site clusters. In fact, you may consider the cloud itself as your secondary data center and just fail over to the cloud in the event of a disaster. I think the possibilities of cloud-based computing and disaster recovery configurations are extremely enticing, and I plan on doing a whole blog post on just that in the near future.

Configure the Cluster

Now that we have the basics in place, let's get started with the actual configuration of the cluster. You will want to add the Failover Clustering feature to both nodes of your cluster. For simplicity's sake, I've called my nodes PRIMARY and SECONDARY. This is accomplished very easily through the Add Features Wizard as shown below.

Figure 1 – Add the Failover Clustering feature
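If you prefer scripting, the same feature can be added from an elevated PowerShell prompt on Windows Server 2008 R2 (a minimal sketch):

    Import-Module ServerManager
    Add-WindowsFeature Failover-Clustering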

Next you will want to have a look at your network connections. It is best if you rename the connections on each of your servers to reflect the network that they represent. This will make things easier to remember later.

Figure 2- Change the names of your network connections

You will also want to go into the Advanced Settings of your Network Connections on each server (press Alt to reveal the Advanced menu) and make sure the Public network is first in the list.

Figure 3- Make sure your public network is first

Your private network should only have an IP address and subnet mask defined; no default gateway or DNS servers. Your nodes need to be able to communicate across this network, so verify that the servers can reach each other on it and add static routes if necessary.

Figure 4 – Private network settings

Once you have your network configured, you are ready to build your cluster. The first step is to “Validate a Configuration”. Open up the Failover Cluster Manager and click on Validate a Configuration.

Figure 5 – Validate a Configuration

The Validation Wizard launches and presents you the first screen as shown below. Add the two servers in your cluster and click Next to continue.

Figure 6 – Add the cluster nodes

A multi-site cluster does not need to pass the storage validation (see the Microsoft article). To skip the storage validation process, click on "Run only the tests I select" and click Continue.

Figure 7 – Select “Run only tests I select”

In the test selection screen, unselect Storage and click Next.

Figure 8 – Unselect the Storage test

You will be presented with the following confirmation screen. Click Next to continue.

Figure 9 – Confirm your selection

If you have done everything right, you should see a summary page that looks like the following. Notice that the yellow exclamation point indicates that not all of the tests were run. This is to be expected in a multi-site cluster because the storage tests are skipped. As long as everything else checks out OK, you can proceed. If the report indicates any other errors, fix the problem, re-run the tests, and continue.

Figure 10 – View the validation report
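As an aside, the same validation pass (with storage skipped) can be run from PowerShell on Windows Server 2008 R2. This is a hedged sketch; the -Ignore category name is an assumption worth checking with Get-Help Test-Cluster on your build:

    Import-Module FailoverClusters
    Test-Cluster -Node PRIMARY, SECONDARY -Ignore Storage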

You are now ready to create your cluster. In the Failover Cluster Manager, click on Create a Cluster.

Figure 11 – Create your cluster

The next step asks whether or not you want to validate your cluster. Since you have already done this, you can skip this step. Note that this will pose a bit of a problem later on if you are installing SQL Server, as it requires that the cluster has passed validation before proceeding. When we get to that point I will show you how to bypass this check via a command-line option in the SQL Server setup. For now, choose No and click Next.

Figure 12 – Skip the validation test

Next, you must provide a name and IP address for administering this cluster. This is the name you will use to administer the cluster, not the name of the SQL cluster resource, which you will create later. Enter a unique name and IP address and click Next.

Note: This is also the computer name that will need permission to the File Share Witness as described later in this document.

Figure 13 – Choose a unique name and IP address
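For reference, here is a hedged PowerShell equivalent of this creation step using the 2008 R2 FailoverClusters module; MYCLUSTER is the name I use later in this article, and the IP address below is just a placeholder for whatever unique address you chose:

    Import-Module FailoverClusters
    New-Cluster -Name MYCLUSTER -Node PRIMARY, SECONDARY -StaticAddress 10.0.0.50 -NoStorage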

Confirm your choices and click Next.

Figure 14 – Confirm your choices

Congratulations! If you have done everything right you will see the following Summary page. Notice the yellow exclamation point; obviously something is not perfect. Click on View Report to find out what the problem may be.

Figure 15 – View the report to find out what the warning is all about

If you view the report, you should see a few lines that look like this.

Figure 16 – Error report

Don't fret; this is to be expected in a multi-site cluster. Remember, we said earlier that we would be implementing a Node and File Share Majority quorum. We will change the quorum type from the current Node Majority (not a good idea in a two-node cluster) to Node and File Share Majority.

Implementing a Node and File Share Majority quorum

First, we need to identify the server that will hold our File Share Witness. Remember, as we discussed earlier, this File Share Witness should be located in a 3rd location, accessible by both nodes of the cluster. Once you have identified the server, share a folder as you normally would. In my case, I created a share called MYCLUSTER on a server named DEMODC.

The key thing to remember about this share is that you must give the cluster computer name read/write permissions at both the share level and the NTFS level. If you recall back at Figure 13, I created my cluster and gave it the name "MYCLUSTER". You will need to make sure you give the cluster computer account read/write permissions, as shown in the following screenshots.

Figure 17 – Make sure you search for Computers

Figure 18 – Give the cluster computer account NTFS permissions

Figure 19 – Give the cluster computer account share level permissions
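For completeness, here is a hedged command-line sketch of granting the same permissions from an elevated prompt on the witness server. YOURDOMAIN is a placeholder for your own domain, and MYCLUSTER$ is the cluster computer account created in Figure 13 (the trailing $ is my assumption about how the account is referenced, so verify it in Active Directory):

    md C:\MYCLUSTER
    rem Share-level read/write for the cluster computer account
    net share MYCLUSTER=C:\MYCLUSTER /GRANT:YOURDOMAIN\MYCLUSTER$,FULL
    rem NTFS Modify rights, inherited by files and subfolders
    icacls C:\MYCLUSTER /grant YOURDOMAIN\MYCLUSTER$:(OI)(CI)M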

Now with the shared folder in place and the appropriate permissions assigned, you are ready to change your quorum type. From Failover Cluster Manager, right-click on your cluster, choose More Actions and Configure Cluster Quorum Settings.

Figure 20 – Change your quorum type

On the next screen choose Node and File Share Majority and click Next.

Figure 21 – Choose Node and File Share Majority

In this screen, enter the path to the file share you previously created and click Next.

Figure 22 – Choose your file share witness

Confirm that the information is correct and click Next.

Figure 23 – Click Next to confirm your quorum change to Node and File Share Majority

Assuming you did everything right, you should see the following Summary page.

Figure 24 – A successful quorum change

Now when you view your cluster, the Quorum Configuration should say “Node and File Share Majority” as shown below.

Figure 25 – You now have a Node and File Share Majority quorum
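The same quorum change can also be scripted from PowerShell on Windows Server 2008 R2, using the share created earlier on DEMODC (a minimal sketch):

    Import-Module FailoverClusters
    Set-ClusterQuorum -NodeAndFileShareMajority "\\DEMODC\MYCLUSTER"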

The steps I have outlined up until this point apply to any multi-site cluster, whether it is a SQL, Exchange, File Server or other type of failover cluster. The next step in creating a multi-site cluster involves integrating your storage and replication solution into the failover cluster. This step will vary depending upon your replication solution, so you really need to be in close contact with your replication vendor to get it right.

Other parts of this series will describe in detail how to install SQL, File Servers and Hyper-V in multi-site clusters. I will also have a post on considerations for multi-node clusters of three or more nodes.


Windows Server 2008 and 2008R2 Failover Cluster Startup Switches

I am here today to discuss the troubleshooting switches used to start a Windows 2008 or 2008 R2 Failover Cluster. From time to time, the Failover Cluster Service will not start on its own, and you may need to start it with a diagnostic switch for troubleshooting purposes and/or to get it back into production.

In Windows 2003 Server Cluster, we had the following switches:

[screenshot: table of Windows Server 2003 Server Cluster startup switches]

More detailed information on the above switches can be found in KB258078. However, the above switches have changed for Windows 2008 and 2008 R2 Failover Clusters. The only switch available for a Windows Server 2008 Failover Cluster is the FORCEQUORUM (or FQ for short) switch. Its behavior differs from the FORCEQUORUM switch that was used previously in Windows Server 2003.

So for our example, let's say we have a 2-node Failover Cluster that is set for Node and Disk Majority. That means we have a total of three votes. To achieve "quorum", the cluster needs a majority of votes (two) to fully bring all resources online and make them available to users.

In Windows 2008 Failover Cluster, when you tell the Cluster Service to start, it just immediately starts. The next thing it does is send out notifications to all the nodes that it wants to join a Cluster. It is also going to calculate the number of votes needed to achieve “quorum”. As long as there is another node running or it can bring the Witness Disk online, it will join and merrily go on its way. If there is not another node up and it cannot bring the Witness Disk online, the Cluster Service will start; however, it will be in a “joining” type mode. This means it will be sitting idle waiting for another node to join and achieve “quorum”. If this is the case, you would see something like this:

[screenshot: Cluster Service started but waiting to achieve quorum]

As discussed, we need at least two votes to achieve "quorum". We currently have one node up, so we have one vote. The other node is down and the Witness Disk is unavailable, which accounts for the other two votes. But you can see that the Cluster Service itself is started. The reason it stays started is that it is sitting there just listening for another node to join and give it a majority. Once that happens, the Cluster resources will be made available for everyone to use. If you were to run the command to get the state of the nodes, you would see this:

[screenshot: output of the node state command]
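Two hedged equivalents of "the command" for checking node state: from a command prompt on either version,

    cluster node

or from PowerShell on Windows Server 2008 R2,

    Import-Module FailoverClusters
    Get-ClusterNode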

This is where the FORCEQUORUM switch comes into play. When using this, it will force the Cluster to become available even though there is no “quorum”. There are multiple ways of forcing the Cluster Service to start. However, please keep in mind that there are some implications when running this. The implications are explained in this article.

1.  Go into Service Control Manager and start the Cluster Service with /FORCEQUORUM (or /FQ).
2.  Go to an Administrative Command Prompt and use:
          a.  net start clussvc /forcequorum
          b.  net start clussvc /fq
3.  In Failover Cluster Management, highlight the name of the Cluster in the left pane; in the Actions column on the far right pane, there is a FORCE CLUSTER START option that you can select, shown below.

[screenshot: Force Cluster Start action in Failover Cluster Management]

This switch differs from Windows 2003. When you use it on Windows 2003 Server Clusters, you must also specify all other nodes that will be joining while in this state. If I were to just use the commands above and not specify the additional nodes, the other nodes would not be allowed to join the Cluster. I would need to basically fix the problem of the other nodes not being up, then stop the Cluster Service and start it again without the switch. This causes downtime, and no one wants that. In a Windows 2008 Failover Cluster, the switch will remain in effect until "quorum" is achieved. All you would need to do is start the Cluster Service on the other node and it will join. Once "quorum" is achieved, the mode of the Cluster dynamically changes.

In Windows Server 2008 R2 Failover Cluster, there is the same FORCEQUORUM (or FQ) switch as well as a new switch.

This new switch is /IPS, or /IgnorePersistentState. It is a little different in what it does: it starts the Cluster Service and forms the cluster, but all groups and resources will be left in an offline state.

Under normal circumstances, when the Cluster Service starts, the default behavior is to bring all the resources online. What this switch does is ignore the current PersistentState value of the resources and leave everything offline. When you go into Failover Cluster Management and look at the groups, you will see all resources offline.

[screenshot: all groups and resources offline after starting with /IPS]

I do need to bring up a couple of important notes about this switch.

1. The Cluster Group will still be brought online. This switch will only affect the Services and Applications groups that you have in the Cluster.

2. You must still be able to achieve "quorum." In the case of a Node and Disk Majority, the Witness Disk must still be able to come online.

This switch is not one that would be used that often, but when you need it, it is a blessing. Here are a couple of scenarios where the /IPS switch would come in handy.

SCENARIO 1

Say I have a Failover Cluster that holds the limit of 1,000 Hyper-V virtual machines. If you are trying to troubleshoot an issue, you can use the switch and then manually bring only a couple of them online. Do whatever troubleshooting you need to accomplish without the stress that all of these machines coming online would put on the node. Once your troubleshooting is complete, you can start the other nodes, bring the other virtual machines online, and go about your business.

SCENARIO 2

I am the administrator of a Failover Cluster and get a call that the Cluster node holding John's Cluster Application resource is in a pseudo-hung state. Both Explorer and Failover Cluster Management hang, while the rest of the machine is very slow. If I try to move this group over to another node, that node experiences the same problems and errors. So I reboot the nodes, and when the Cluster Service starts, the machine goes into this pseudo-hung state again. Looking through the event logs, I see that the Cluster Service starts fine, but John's Cluster Application is throwing errors in the event log, and those were the last things listed. I do some research on the errors and see that they are caused by a corrupt log file this application uses. All I have to do is delete this file and the application will dynamically recreate it, start fine, and no longer hang the machine. That seems simple enough. But wait: I do not have access to the Clustered Drive that this application is on, because Explorer hangs and I also cannot get to it from a command prompt.

In the days before Windows 2008 R2 Failover Cluster, I would have to:

  • Power off all other nodes.
  • Set the Cluster Service to MANUAL or DISABLED
  • Disable the Cluster Disk Driver
  • Reboot this machine
  • Delete the file
  • Re-enable the Cluster Disk Driver
  • Set the Cluster Service to AUTOMATIC and start it
  • Power up all other nodes

The above was the only way I was going to be able to get access to the drives. Something like this can be painful and time consuming. If the nodes take about 15 minutes to boot because of the devices and the memory, it just adds to the frustrations.

This is where the /IPS Switch comes in. Your steps would now be:

  • Stop the Cluster Service on all other nodes
  • Reboot this one node since it is hung
  • While that node is rebooting, on the other node, start the Cluster Service with the IPS Switch:

net start clussvc /ips

  • Go to the group that has the disk
  • Bring the disk online
  • Delete the file
  • Bring the rest of the group online (a PowerShell sketch of these last steps follows)
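On Windows Server 2008 R2, those last disk and group steps can also be done with the cluster PowerShell cmdlets. This is a hedged sketch; the resource and group names below are placeholders for your own:

    Import-Module FailoverClusters
    Start-ClusterResource "Cluster Disk 1"            # bring just the disk online
    # ...delete the corrupt log file from the now-accessible volume...
    Start-ClusterGroup "John's Cluster Application"   # then bring the rest of the group online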

For those who like to see stuff on MSDN, you can get a little more information on the /IPS switch here.

So as a recap, these are the only switches available for Windows Server 2008 and 2008 R2 Failover Clusters.

[screenshot: table of Windows Server 2008 and 2008 R2 Failover Cluster startup switches]

These switches can make things easier and less frustrating, and cause less downtime. That keeps lost production and dollars to a minimum, and that makes everyone happy.


Configuring the Microsoft iSCSI Software Target

Introduction

This post describes how to configure the Microsoft iSCSI Software Target offered with Windows Storage Server.

One of the goals here is to describe the terminology used, such as iSCSI Target, iSCSI Initiator and iSCSI Virtual Disk. It also includes the steps to configure the iSCSI Software Target and the iSCSI Initiator.

Initial State

We’ll start with a simple scenario with three servers: one Storage Server and two Application Servers.

iSCSI-01

In my example, the Storage Server runs WSS 2008 and the two Application Servers run Windows Server 2008.

The Application Servers could be running any edition of Windows Server 2003 (using the downloadable iSCSI Initiator) or Windows Server 2008 / Windows Server 2008 R2 (which come with an iSCSI Initiator built-in).

The iSCSI Initiator configuration applet can be found in the Application Server’s Control Panel. In the “General” tab of that applet you will find the iQN (iSCSI Qualified Name) for the iSCSI Initiator, which you may need later while configuring the Storage Server.

The Microsoft iSCSI Software Target Management Console can be found on the Administration Tools menu in the Storage Server.

Add iSCSI Targets

The first thing to do is add two iSCSI Targets to the Storage Server. To do this, right-click the iSCSI Targets node in the Microsoft iSCSI Software Target MMC and select the “Create iSCSI Target” option. You will then specify a name, an optional description and the identifier for the iSCSI Initiator associated with that iSCSI Target.

There are four methods to identify the iSCSI Initiators: iQN (iSCSI Qualified Name), DNS name, IP address and MAC address. However, you only need to use one of the methods. The default is the iQN (which can be obtained from the iSCSI Initiator’s control panel applet). If you don’t have access to the iSCSI Initiator to check the iQN, you can use its DNS name. If you’re using the Microsoft iSCSI Initiator on your application server, that iQN is actually constructed with a prefix (“iqn.1991-05.com.microsoft:”) combined with the DNS name of the computer.

For instance, if the Application Server runs the Microsoft iSCSI Initiator, is named "s2" and is a member of the "contoso.com" domain, its iQN would be "iqn.1991-05.com.microsoft:s2.contoso.com" and its DNS name would be "s2.contoso.com". You could also use its IP address (something like "10.1.1.120") or its MAC address (which would look like "12-34-56-78-90-12").

Typically, you assign just one iSCSI Initiator to each iSCSI Target. If you assign multiple iSCSI Initiators to the same iSCSI Target, there is a potential for conflict between Application Servers. However, there are cases where this can make sense, like when you are using clusters.

In my example, we created two iSCSI Targets named T1 (assigned to the iSCSI Initiator in S2) and T2  (assigned to the iSCSI Initiator in S3). It did not fit in the diagram, but assume we used the complete DNS names of the Application Servers to identify their iSCSI Initiators.

iSCSI-02

Add Virtual Disks

Next, you need to create the Virtual Disks on the Storage Server. This is the equivalent of creating an LUN in a regular SAN device. The Microsoft iSCSI Software Target stores those Virtual Disks as files with the VHD extension in the Storage Server.

This is very similar to the Virtual Disks in Virtual PC and Virtual Server. However, you can only use the fixed size format for the VHDs (not the dynamically expanding or differencing formats). You can extend those fixed-size VHDs later if needed.

Right-click the “Devices” node in the Microsoft iSCSI Software Target MMC and select the “Create Virtual Disk” option. For each Virtual Disk you will specify a filename (complete with drive, folder and extension), a size (between 8MB and 16TB) and an optional description. You can also assign the iSCSI Targets at this point, but we’ll skip that and do it as a separate step.

In my example, I have created three virtual disks: D:\VHD1.vhd, E:\VHD2.vhd and E:\VHD3.vhd.

iSCSI-03

You can create multiple VHD files on the same disk. However, keep in mind that there are performance implications in doing so, since these VHDs will be sharing the same spindles (not unlike any scenario where two applications store data in the same physical disks).

The VHD files created by the Microsoft iSCSI Software Target cannot be used by Virtual PC or Virtual Server, since the format was adapted to support larger sizes (up to 16 TB instead of the usual 2 TB limit in Virtual PC and Virtual Server).

Assign Virtual Disks to iSCSI Targets

Once you have created the iSCSI Targets and the Virtual Disks, it's time to associate each virtual disk with its iSCSI Target. Since the iSCSI Initiators were already assigned to the iSCSI Targets, this is the equivalent of unmasking a LUN in a regular SAN device.

Right-click the “Devices” node in the Microsoft iSCSI Software Target MMC and select the “Assign/Remove Target” option. This will take you directly to the “Target Access” tab in the properties of the virtual disk. Click the “Add” button to pick a target. You will typically assign a virtual disk to only one iSCSI Target. As with multiple iSCSI Initiators per iSCSI Target, if you assign the same disk to multiple iSCSI Targets, there is a potential for conflict if two Application Servers try to access the virtual disk at the same time.

You can assign multiple disks to a single iSCSI Target. This is very common when you are exposing several disks to the same Application Server. However, you can also expose multiple virtual disks to the same Application Server using multiple iSCSI Targets, with a single virtual disk per iSCSI Target. This will improve performance if your server runs a very demanding application in terms of storage, since each target will have its own request queue. Having too many iSCSI Targets will also tax the system, so you need to strike a balance if you have dozens of Virtual Disks, each associated with very demanding Application Servers.

In my example, I have assigned VHD1 and VHD2 to T1, then assigned VHD3 to T2.

iSCSI-04

Add Target Portal

Now that we finished the configuration on the Storage Server side, let’s focus on the Application Servers.

Using the iSCSI Initiator control panel applet, click on the “Discovery” tab and add your Storage Server DNS name or IP address to the list of Target Portals. Keep the default port (3260).

Next, select the “Targets” tab and click on the “Refresh” button. You should see the iQNs of iSCSI Targets that were assigned to this specific iSCSI Initiator.

In my example, the iSCSI Initiators in Application Server S2 and S3 were configured to use Storage Server S1 as target portal.

iSCSI-05

The iQN of the iSCSI Target (which you will see in the iSCSI Initiator) is constructed by the Storage Server using a prefix ("iqn.1991-05.com.microsoft:") combined with the Storage Server computer name, the name of the iSCSI Target and a suffix ("-target"). In our example, when checking the list of Targets on the iSCSI Initiator in S3, we found "iqn.1991-05.com.microsoft:s1-t2-target".

Logon to iSCSI Targets

Now you need to select the iSCSI Target and click on the “Log on” button to connect to the target, making sure to select the “Automatically restore this connection when the system boots” option.

Once the iSCSI Initiators have successfully logged on to the targets,  the virtual disks will get exposed to the Application Servers.

In our example, S2’s iSCSI Initiator was configured to logon to the T1 target and S3’s iSCSI Initiator was configured to logon to the T2 target.

iSCSI-06
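For reference, here is a hedged command-line equivalent of the discovery and logon steps above, using iscsicli.exe, which ships with the built-in initiator. The server and target names are the examples from this post (verify the exact IQN on the Targets tab), and note that the GUI option "Automatically restore this connection when the system boots" corresponds to a persistent login (iscsicli PersistentLoginTarget, which takes additional parameters):

    iscsicli QAddTargetPortal s1.contoso.com
    iscsicli ListTargets
    iscsicli QLoginTarget iqn.1991-05.com.microsoft:s1-t2-target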

Format, Mount and Bind Volumes

At this point, the virtual disks look just like locally attached disks, showing up in the Disk Management MMC as uninitialized disks. Now you need to format and mount the volumes.

To finish the configuration, open the Computer Management MMC (Start, Administrative Tools, Computer Management or right-click Computer and click Manage). Expand the “Storage” node on the MMC tree to find the “Disk Management” option. When you click on the Disk Management option, you should immediately see the “Initialize and Convert Disk Wizard”. Follow the wizard to initialize the disk, making sure to keep it as a basic disk (as opposed to dynamic).

You should then use the Disk management tool to create a partition, format it and mount it (as a drive letter or a path), as you would for any local disk. For larger volumes, you should convert the disk to a GPT disk (right click the disk, select “Convert to GPT Disk”). Do not convert to GPT if you intend to boot from that disk.

After the partition is created and the volumes are formatted and mounted, you can go to the “Bound Volumes/Devices” tab in the iSCSI Initiator applet, make sure all volumes mounted are listed there and then use the “Bind All” option. This will ensure that the volumes will be available to services and applications as they are started by Windows.

In my example, I have created a single partition for each disk, formatted them as NTFS and mounted each one in an available drive letter. In Application Server S2, we ended up with disks F: (for VHD1) and G: (for VHD2). On S3, we used F: (for VHD3).

iSCSI-07

Create Snapshot

Next, we’ll create a snapshot of a volume. This is basically a point-in-time copy of the data, which can be used as a backup or an archive. You can restore the disk to any previous snapshot in case your data is damaged in any way. You can also look at the data as it was at that time without restoring it. If you have enough disk space, you can keep many snapshots of your virtual disks, going back days, months or years.

To create a snapshot in the Storage Server, right-click the Devices node in the Microsoft iSCSI Software Target MMC and select the “Create Snapshot” option.  No additional information is required and a snapshot will be created.

You can also schedule the automatic creation of snapshots. For example, you could do it once a day at 1AM. This is done using the “Schedules” option under the “Snapshots” node in the Microsoft iSCSI Software Target MMC.

In my example, I have created a snapshot of the VHD3 virtual disk at 1AM.

iSCSI-08

Microsoft also offers a VSS Provider for the Microsoft iSCSI Software Target, which you can use on the Application Server to create a VSS-based snapshot.

Export Snapshot to iSCSI Target

Snapshots are usually not exposed to targets at all. You can use them to “go back in time” by rolling back to a previous snapshot, which requires no reconfiguration of the iSCSI Initiators. In some situations, however, it might be useful to expose a snapshot so you can check what’s in it before you roll back.

You might also just grab one or two files from the exported snapshot and never really roll back the entire virtual disk. Keep in mind that snapshots are read-only.

To make a snapshot visible to an Application Server, right-click the snapshot in the Microsoft iSCSI Software Target MMC and select the “Export Snapshot” option. You will only need to pick the target you want to use.

Unlike regular virtual disks, you can choose to export snapshots to multiple iSCSI Targets or to an iSCSI Target with multiple iSCSI Initiators assigned. This is because you cannot write to them and therefore there is no potential for conflicts.

In our example, we exported the VHD3 at 1AM snapshot to target T2, which caused it to show up on Application Server S3.

iSCSI-09

Mount Snapshot Volume

The last step to expose the snapshot is to mount it as a path or drive on your Application Server. Note that you do not need to initialize the disk, create a partition or format the volume, since these things were already performed with the original virtual disk. You would not be able to perform any of those operations on a snapshot anyway, since you cannot write to it.

Again, open the Computer Management MMC, expand the “Storage” node and find the “Disk Management” option. If you already have it open, simply refresh the view to find the additional disk. Then use the properties of the volume to mount it.

In my example, I have mounted the snapshot of VHD3 at 1AM as the G: drive on Application Server S3.

iSCSI-10

Now you might be able to find a file you deleted on that F: drive after 1AM by looking at drive G:. You can then decide to copy files from the G: drive to F: drive at the Application Server side. You can also decide to roll back to that snapshot on the Storage Server side, keeping in mind that you will lose any changes to F: after 1AM.

Advanced Scenario

Now that you have the basics, you can start designing more advanced scenarios. As an example, see the diagram showing two Storage Servers and two Application Servers.

iSCSI-11

There are a few interesting points about that diagram that are worth mentioning. First, the iSCSI Initiators in the Application Servers (S3 and S4) point to two Target Portals (S1 and S2).

Second, you can see that VHD1 and VHD2 are exposed to Application Server S3 using two separate iSCSI Targets (T1 and T2). A single iSCSI Target could be used, but this was done to improve performance.

You can also see that the snapshot of VHD5 at 3AM is being exported simultaneously to Application Servers S3 and S4. This is fine, since snapshots are write-protected.

Clustering Example

This last scenario shows how to configure the Microsoft iSCSI Software Target for a cluster environment. The main difference here is the fact that we are assigning the same iSCSI Target to multiple iSCSI Initiators at the same time. This is usually not a good idea for regular environments, but it is common for a cluster.

iSCSI-12

This example shows an active-active cluster, where the node 1 (running on Application Server S2) has the Quorum disk and the Data1 disk, while node 2 (running on Application Server S3) has the Data2 disk. When running in a cluster environment, the servers know how to keep the disks they’re not using offline, bringing them online just one node at time, as required.

In case of a failure of node 1, node 2 will first verify that it should take over the services and then it will mount the disk resources and start providing the services that used to run on the failed node. Also note that we avoid conflicting drive letters on cluster nodes, since that could create a problem when you move resources between them. As you can see, the nodes need a lot of coordination to access the shared storage and that’s one of the key abilities of the cluster software.

Again, we could have used a single iSCSI Target for all virtual disks, but two were used because of performance requirements of the application associated with the Data2 virtual disk.

Conclusion

I hope this explanation helped you understand some of the details on how to configure the Microsoft iSCSI Software Target included in Windows Storage Server.

Links and References

For general information on iSCSI on the Windows platform, including a link to download the iSCSI Initiator for Windows Server 2003, check http://www.microsoft.com/iscsi

For step-by-step instructions on how to configure the Microsoft iSCSI Software Target, with screenshots, check this post: http://blogs.technet.com/josebda/archive/2009/02/02/step-by-step-using-the-microsoft-iscsi-software-target-with-hyper-v-standalone-full-vhd.aspx

For details on how VSS works, check this post: http://blogs.technet.com/josebda/archive/2007/10/10/the-basics-of-the-volume-shadow-copy-service-vss.aspx

For details on how iSCSI names are constructed using the iQN format, check IETF’s RFC 3721 at http://www.ietf.org/rfc/rfc3721


BIZTALK TIPS: How to Cluster Message Queuing / How to Cluster MSDTC

Cluster support for the BizTalk Server MSMQ adapter is provided by running the MSMQ adapter handlers in a clustered instance of a BizTalk Host. If the MSMQ adapter handlers are run in a clustered BizTalk Host, a clustered Message Queuing (MSMQ) resource should also be configured to run in the same cluster group as the clustered BizTalk Host when using the Send adapter or the Receive adapter for BizTalk Server 2006 R2 and earlier. This should be done for the following reasons:

  • MSMQ adapter receive handler – The MSMQ adapter receive handler for BizTalk Server 2006 R2 and earlier does not support remote transactional reads; only local transactional reads are supported. The MSMQ adapter receive handler on BizTalk Server 2006 R2 and earlier must run in a host instance that is local to the clustered MSMQ service in order to complete local transactional reads with the MSMQ adapter.
  • MSMQ adapter send handler – To ensure the consistency of transactional sends made by the MSMQ adapter, the outgoing queue used by the MSMQ adapter send handler should be highly available, so that if the MSMQ service for the outgoing queue fails, it can be resumed. Configuring a clustered MSMQ resource and the MSMQ adapter send handlers in the same cluster group will ensure that the outgoing queue used by the MSMQ adapter send handler will be highly available. This will mitigate the possibility of message loss in the event that the MSMQ service fails.

Many BizTalk Server operations are performed within the scope of a Microsoft Distributed Transaction Coordinator (MSDTC) transaction.

A clustered MSDTC resource must be available on the Windows Server cluster to provide transaction services for any clustered BizTalk Server components or dependencies. BizTalk Server components or dependencies that can be configured as Windows Server cluster resources include the following:

  • BizTalk Host
  • Enterprise Single Sign-On (SSO) service
  • SQL Server instance
  • Message Queuing (MSMQ) service
  • Windows File system

Windows Server 2003 only supports running MSDTC on cluster nodes as a clustered resource.

Windows Server 2008 supports running a local DTC on any server node in the failover cluster, even if a default clustered DTC resource is configured.


To configure Message Queuing for high availability (Windows Server 2008)

  1. To start the Failover Cluster Management program, click Start, Programs, Administrative Tools, and then click Failover Cluster Management.
  2. In the left pane, right-click Failover Cluster Management, and then click Manage a Cluster.
  3. In the Select a cluster to manage dialog box, enter the cluster to be managed, and then click OK.
  4. To start the High Availability Wizard, in the left pane, click to expand the cluster, right-click Services and Applications, and then click Configure a Service or Application.
  5. If the Before You Begin page of the High Availability Wizard is displayed, click Next.
  6. On the Select Service or Application page, click Message Queuing, and then click Next.
  7. On the Client Access Point page, enter a value for Name, enter an available IP address under Address, and then click Next.
  8. On the Select Storage page, click a disk resource, and then click Next.
  9. On the Confirmation page, click Next.
  10. On the Summary page, click Finish.
  11. To create a clustered MSDTC resource on the cluster so that there is transaction support for the clustered MSMQ resource, follow the steps in the next section.

 

To configure the Distributed Transaction Coordinator (DTC) for high availability (Windows Server 2008)


  1. To start the Failover Cluster Management program, click Start, Programs, Administrative Tools, and then click Failover Cluster Management.
  2. In the left hand pane, right-click Failover Cluster Management, and then click Manage a Cluster.
  3. In the Select a cluster to manage dialog box, enter the cluster to be managed, and then click OK.
  4. To start the High Availability Wizard, in the left pane click to expand the cluster, right-click Services and Applications, and then click Configure a Service or Application.
  5. If the Before You Begin page of the High Availability Wizard is displayed, click Next.
  6. On the Select Service or Application page, click Distributed Transaction Coordinator, and then click Next.
  7. On the Client Access Point page, enter a value for Name, enter an available IP address under Address, and then click Next.
  8. On the Select Storage page, click to select a disk resource and then click Next.
  9. On the Confirmation page, click Next.
  10. On the Summary page, click Finish.

 

To configure the MSDTC transaction mode as Incoming Caller Authentication Required (Windows Server 2008)


  1. To open the Component Services management console, click Start, Programs, Administrative Tools, and then click Component Services.
  2. Click to expand Component Services, click to expand Computers, click to expand My Computer, click to expand Distributed Transaction Coordinator, click to expand Clustered DTCs, right-click the clustered DTC resource, and then click Properties.
  3. Click the Security tab.
  4. If network DTC access is not already enabled, click to enable the Network DTC Access option. Network DTC access must be enabled to accommodate transactional support for BizTalk Server.
  5. Under Transaction Manager Communication, enable the following options:
    • Allow Inbound
    • Allow Outbound
    • Incoming Caller Authentication Required
  6. After changing security settings for the clustered distributed transaction coordinator resource, the resource will be restarted. Click Yes and OK when prompted.
  7. Close the Component Services management console.

 

To add a Message Queuing resource to an existing cluster group (Windows Server 2003)

  1.  To start the Cluster Administrator program, click Start, point to Programs, point to Administrative Tools, and then click Cluster Administrator.
  2. Click to select a cluster group other than the quorum group that contains a Name and Disk resource.
  3. On the File menu, point to New, and then click Resource.
  4. Enter a value for the Name field of the New Resource dialog box, for example, MSMQ.
  5. In the Resource type drop-down list, click Message Queuing, and then click Next.
  6. In the Possible Owners dialog box, include each cluster node as a possible owner of the message queuing resource, and then click Next.
  7. In the Dependencies dialog box, add a dependency to a network name resource and the disk resource associated with this group, and then click Finish.
  8. Click OK in the dialog box that indicates that the resource was created successfully.
  9.  To create a clustered MSDTC resource on the cluster so that there is transaction support for the clustered MSMQ resource, follow the steps in the next section.

 

To add an MSDTC resource to an existing cluster group (Windows Server 2003)


  1. To start the Cluster Administrator program, click Start, Programs, Administrative Tools, and then click Cluster Administrator.
  2.  Click to select a cluster group other than the quorum group that contains a Physical Disk, IP Address, and Network Name resource. If such a group does not already exist, create one with a Physical Disk, IP Address, and Network Name resource first.
  3. On the File menu, point to New, and then click Resource.
  4. Enter a value for the Name field of the New Resource dialog box, for example, MSDTC.
  5. In the Resource type drop-down list, click Distributed Transaction Coordinator, and then click Next.
  6. In the Possible Owners dialog box, include each cluster node as a possible owner of the distributed transaction coordinator resource, and then click Next.
  7. In the Dependencies dialog box, add a dependency to a network name resource and the disk resource associated with this group, and then click Finish.
  8.  In the dialog box that indicates that the resource was created successfully, click OK. (A scripted equivalent of this procedure using cluster.exe is sketched below.)
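Here is a hedged sketch of the same MSDTC resource created with cluster.exe on Windows Server 2003; the resource, group, network name and disk names below are placeholders for the ones in your own cluster group:

    cluster resource "MSDTC" /create /group:"BizTalk Group" /type:"Distributed Transaction Coordinator"
    cluster resource "MSDTC" /adddep:"Network Name"
    cluster resource "MSDTC" /adddep:"Disk E:"
    cluster resource "MSDTC" /online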

 

To configure the MSDTC transaction mode as Incoming Caller Authentication Required (Windows Server 2003)


  1.  To open the Component Services management console, click Start, Programs, Administrative Tools, and then click Component Services.
  2.  Click to expand Component Services, and then click to expand Computers.
  3. Right-click My Computer, and then select the Properties menu item to display the My Computer Properties dialog box.
  4. Click the MSDTC tab.
  5.  To display the Security Configuration dialog box, click Security Configuration.
  6. If network DTC access is not already enabled, click to enable the Network DTC Access option. Network DTC access must be enabled to accommodate transactional support for BizTalk Server.
  7. Under Transaction Manager Communication, enable the following options:
    • Allow Inbound
    • Allow Outbound
    • Incoming Caller Authentication Required
  8. Stop and restart the Distributed Transaction Coordinator service.


Installation of SSO on SQL Failover Cluster

In this post I will tell the story of my experience with installing SSO on a SQL cluster. Each BizTalk Server has an Enterprise Single Sign-On service (EntSSO.exe). Enterprise Single Sign-On is also referred to as SSO or EntSSO. SSO serves two purposes. One is data encryption, that is, encrypting port URI data. The other is, as the name indicates, Single Sign-On, which is about credential mapping. BizTalk Server SSO currently supports only Windows-initiated Single Sign-On, which means you can only map Windows user accounts to external application (affiliate application) user accounts. On the inbound side the sender is authenticated with Windows; on the outbound side, BizTalk Server automatically authenticates with the affiliate applications using the preconfigured credential mapping. Single Sign-On is a useful feature in business-to-business (B2B) scenarios.

Note: The encryption function, however, is mandatory for a BizTalk system. Single Sign-On for credential mapping can be handled by other tools such as Oracle Wallet.

In addition to the SSO services running on each of the BizTalk Servers, there is a master secret server. The master secret server is a server with the SSO service running on it. The master secret server can be one of the SSO services running on one of the BizTalk Servers, or a dedicated master secret server.

It is the same executable, EntSSO.exe, but with an additional subcomponent responsible for maintaining and supplying the master secret key to the SSO services on the other BizTalk Servers. The other SSO services running on the BizTalk Servers check every 30 seconds to see whether the master secret has changed. If it has changed, they read it securely; otherwise, they continue to use the master secret they already have cached in memory.

Considering that there is only one master secret server in your entire environment and that BizTalk Server depends on it, it is recommended that you use an active-passive cluster configuration for the master secret server. Because the master secret server doesn't consume a lot of resources, it is very common to use the SQL Server failover cluster for clustering the master secret server.

On the first server cluster node where you run BizTalk Server Configuration, you choose to create a new SSO system. That makes that cluster node the master secret server. The host name of the master secret server is the host name of the physical cluster node, and the master secret key is automatically generated on that node. On the other cluster node, you choose to join the existing SSO system. To cluster the master secret server, you need to change the master secret server from the first cluster node's host name to the virtual SQL Server failover cluster network name (NameSQL1), and create an SSO generic service cluster resource. At the end, you restore the master secret key to the other cluster nodes, so that when the cluster fails over to another node, that node has the master secret. These steps can be done using the domain admin account (usually a senior network administrator will perform these steps with this account; as an example I will name the account InstallBizTalk).

Clustering the master secret server service is a complicated process. You might find it confusing when and where you need to perform a step, and the order of the steps. Here are some general rules:

· You must install and configure SSO on each of the cluster nodes. When you create the new SSO system on the first cluster node, this node can be either an active cluster node or a passive cluster node.

· After you have successfully installed and configured SSO on all of the cluster nodes, you must update the master secret server host name from the physical cluster node host name to the virtual cluster network name, and you must perform this rename from an active node (a command-line sketch follows this list).

· After the master secret server host name is changed, you must restart the SSO service on the active node to refresh the cache by taking the SSO cluster resource offline and then online.

· You must create an SSO cluster resource before restoring the master secret key on the other cluster nodes.

· Before you restore the master secret key on a cluster node, you must make it the active node first.
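As a hedged sketch of the two command-line pieces referred to above (run from the Enterprise Single Sign-On installation folder): the rename is done with ssomanage and an update XML file whose documented layout for moving the master secret server is <sso><globalInfo><secretServer>NameSQL1</secretServer></globalInfo></sso> (the file name SSOUpdate.xml below is an example, and NameSQL1 is the virtual SQL network name used in this post; verify the XML layout against your BizTalk documentation). The restore uses ssoconfig with the backup file created during configuration.

    ssomanage -updatedb SSOUpdate.xml
    ssoconfig -restoresecret "C:\Program Files\Common Files\Enterprise Single Sign-On\SSOSecret.bak"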

The steps involved in successfully installing and configuring SSO on the cluster are outlined below.

There are several SSO user groups. Two of them are required when configuring the master secret server. SSO Administrators have the highest level user rights in the SSO system; and SSO Affiliate Administrators defines the affiliate applications that the SSO system contains.

To create a domain group account for the SQL Server service groups

1. If you haven’t already logged on or if you are logged on with a different credential, log on to Cluster Node A using domain admin account.

2. Click Start, and then click Run.

3. In the Run dialog box, enter dsa.msc, and then click OK.

4. From Active Directory Users and Computers, if the YourDomain domain is not already expanded, click the plus sign (+) to expand the YourDomain.com domain.

5. In the left pane, right-click Users, point to New, and then click Group.

6. From New Object – Group, enter the following values, and then click OK.

Group name: SSO Administrators
Group scope: Global
Group type: Security

7. Repeat steps 5 to 6 to create one more group:

Group name: SSO Affiliate Administrators
Group scope: Global
Group type: Security

To create a domain user account for the SSO Service

1. (continue from the previous procedure)

2. In the left pane, right-click Users, point to New, and then click User.

3. From New Object – User, enter the following values, and then click Next.

First name: SSO
Last name: Service
User logon name: SSOService

4. Enter or select the following values, and then click Next.

Password: TBD
Confirm password: TBD
User must change password at next logon: (clear)
User cannot change password: (select)
Password never expires: (select)
Account is disabled: (clear)

5. Click Finish.

Both YourDomain\SSOService and the domain admin account need to be members of the YourDomain\SSO Administrators group. This group is designated for installing and configuring the BizTalk Server system.

To make YourDomain\SSOService and domain admin account members of SSO Administrators

1. (continue from the previous procedure)

2. In the left pane, highlight Users.

3. In the right pane, right-click SSO Service, point to All Tasks, and then click Add to a group.

4. From Select Group, enter or select the following values, and then click OK.

Select this object type: Group or Built-in security principal
From this location: YourDomain
Enter the object name to select: SSO Administrators

5. To acknowledge that the account was created, click OK.

6. Repeat steps 3 to 5 to add domain admin account into the same group.

Granting YourDomain\SSO Administrators Full Control on Cluster Node A

You need to grant YourDomain\SSOService or YourDomain\SSO Administrators the Full Control privilege on the cluster, through Cluster Administrator.

To grant YourDomain\SSO Administrators full control on the cluster

1. If you haven't already logged on or if you are logged on with a different credential, log on to Cluster Node A as YourDomain\InstallBizTalk.

2. Click Start, point to All Programs, point to Administrative Tools, and then click Cluster Administrator.

3. From Cluster Administrator, in the left pane, right-click CLUSTER NODE A, and then click Properties.

4. From CLUSTER NODE A Properties, click the Security tab, and then click Add.

5. From Select Users, Computers, or Groups, enter the following values, and then click OK.

Select this object type: Group or Built-in security principal
From this location: YourDomain.com
Enter the object name to select: SSO Administrators

6. Verify the Allow box is selected, and then click OK.

Installing the SSO Components on Cluster Node A

With the accounts and permissions configured in the last step, you can now install the master secret server. BizTalk Server is not cluster-aware in the way SQL Server is, so you will need to install SSO on each of the cluster nodes and create the SSO cluster resource manually.

The BizTalk Server installation process has two parts: in this step you will install the components, and in the next step you will configure the master secret server.

To install the SSO components on Cluster Node A and Cluster Node B

1. If you haven’t logged on, log on to Cluster Node A as YourDomain\InstallBizTalk.

2. Run setup.exe to install BizTalk Server 2006 R2.

3. On the Start page, click Install Microsoft BizTalk Server 2006 R2 on this computer.

4. On the Customer Information page, enter information in the User name box, the Organization box, and the Product key box, and then click Next.

5. On the License Agreement page, read the license agreement, select Yes, I accept the terms of the license agreement, and then click Next.

6. On the Component Installation page, clear all the check boxes, select Enterprise Single Sign-On Administration Module and Enterprise Single Sign-On Master Secret Server from the Additional Software group, and then click Next.


7. On the Summary page, click Install.

8. On the Installation Completed page, clear the Launch BizTalk Server Configuration check box, and then click Finish.

Installing the SSO Components on Cluster Node B

Repeat the same steps to install the SSO components on Cluster Node B.

Configuring the Master Secret Server on Cluster Node A

Configuring the master secret server has three parts: creating the SSO database, assigning the SSO service account, and backing up the master secret. Notice there are two options: create a new SSO system, or join an existing SSO system. On the first cluster node, you must choose to create a new SSO system. When you create a new SSO system, you must specify the database server name and the database name, but you don’t need to specify the master secret server host name; the current host name becomes the default master secret server. Later, you must change the master secret server from the physical cluster node host name to the virtual cluster host name, YourVirtualClusterName.

It doesn’t matter whether Cluster Node A is the active node or a passive node when you go through this procedure.

To configure the master secret server on Cluster Node A

1. If you haven’t logged on, log on to Cluster Node A as YourDomain\InstallBizTalk.

2. Click Start, point to All Programs, point to Microsoft BizTalk Server 2006, and then click BizTalk Server Configuration.

3. On the Microsoft BizTalk Server 2006 Configuration page, choose Custom Configuration, enter the following values, and then click Configure.

Name Value
Database server name Database Name Cluster
User name YourDomain\SSOService
Password TBD

4. In the left pane, click Enterprise SSO.

5. In the right pane, enter or select the following values:

Name Value
Enable Enterprise Single Sign-On on this computer (checked)
Create a new SSO system (selected)
SSO Database: Server Name Database Name Cluster
SSO Database: Database Name SSODB
Enterprise Single Sign-On Service: Account YourDomain\SSOService
SSO Administrator(s): Windows Group YourDomain\SSO Administrators
SSO Affiliate Administrator(s): Windows Group YourDomain\SSO Affiliate Administrators


6. In the left pane, click Enterprise SSO Secret Backup. The Enterprise SSO master secret is critical, so you must back it up to a file. It is good practice to burn the backup to a CD and store the CD in a safe place.

7. In the right pane, enter the following values:

Name Value
Secret backup password TBD
Confirm password TBD
Password reminder TBD
Backup file location C:\Program Files\Common Files\Enterprise Single Sign-On\SSOSecret.bak

8. Click Apply Configuration.

9. On the Summary page, to apply the configuration, click Next.

10. Verify that the Configuration Result is Success, and then click Finish.

11. Close Microsoft BizTalk Server 2006 Configuration.
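The master secret can also be backed up again later from a command prompt on the master secret server. This is just a sketch of the equivalent command: run it from the Enterprise Single Sign-On folder, and you will be prompted for the backup file password and a reminder.

cd /d "C:\Program Files\Common Files\Enterprise Single Sign-On"
ssoconfig -backupsecret SSOSecret.bak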

Configuring SSO on Cluster Node B

On the second node, you choose to join the existing SSO system; the node then shares the SSO database of the existing SSO system.

To configure the master secret server on Cluster Node B

1. If you haven’t logged on, log on to Cluster Node B as YourDomain\InstallBizTalk.

2. Click Start, point to All Programs, point to Microsoft BizTalk Server 2006, and then click BizTalk Server Configuration.

3. On the Microsoft BizTalk Server 2006 Configuration page, choose Custom Configuration, enter the following values, and then click Configure.

Name Value
Database server name ServerName Database Cluster
User name YourDomain\SSOService
Password TBD

4. In the left pane, click Enterprise SSO.

5. In the right pane, enter or select the following values:

Name Value
Enable Enterprise Single Sign-On on this computer (checked)
Join an existing SSO system (selected)
Server Name ServerName Database Cluster
Database Name SSODB
Account YourDomain\SSOService

6. Click Apply Configuration.

7. On the Summary page, to apply the configuration, click Next.

8. Verify that the Configuration Result is Success, and then click Finish.

9. Close Microsoft BizTalk Server 2006 Configuration.

Updating the Master Secret Server Host Name

When SSO was configured on the first cluster node, it created a new SSO system and used the host name of the physical cluster node, Cluster Node A, as the master secret server host name. You must change it to the server cluster virtual name, which is YourVirtualClusterName. This procedure must be carried out from the active cluster node. All it does is update the master secret server field in the SSO database.

To update the master secret server host name

1. If you haven’t already logged on, log on to Cluster Node A as YourDomain\InstallBizTalk.

2. Open Notepad, create a file with the following content, and then save it as "C:\Program Files\Common Files\Enterprise Single Sign-On\SSOCluster.xml" (with the double quotes). The content is case sensitive.

<sso>
  <globalInfo>
    <secretServer>ServerName Database Cluster</secretServer>
  </globalInfo>
</sso>

3. Open a command prompt, and then change directory to the C:\Program Files\Common Files\Enterprise Single Sign-On\ folder.

4. From the command prompt, execute the following command:

ssomanage -updatedb SSOCluster.xml

5. Verify that the master secret server name has been changed, as shown below:

C:\Program Files\Common Files\Enterprise Single Sign-On>ssomanage -updatedb ssocluster.xml
Using SSO server on this computer
 
Updated SSO global information with the following values -
 
SSO secret server name                  : ServerName Database Cluster
SSO Admin account name                  : NOT CHANGED
SSO Affiliate Admin account name        : NOT CHANGED
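If you want to double-check afterwards which SSO database this node is pointing at, ssomanage can display it; a quick sketch, run from the same folder:

ssomanage -showdb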


Creating SSO Cluster Resource

BizTalk Server is not cluster aware, so you must manually create the master secret server cluster resource. You can either create a dedicated virtual server (cluster group) for the SSO cluster resource or use an existing cluster group. The instructions provided use the SQL Server Cluster Group. If you create a dedicated cluster group, you also need to create a network name cluster resource on which the SSO cluster resource depends.

To Create SSO cluster resource

1. If you haven’t already logged on, log on to Cluster Node A as YourDomain\InstallBizTalk.

2. Click Start, point to All Programs, point to Administrative Tools, and then click Cluster Administrator.

3. In the left pane, expand CLUSTER NODE A, expand Groups, and then expand SQL Server Cluster Group. If you are prompted before Cluster Administrator opens, choose Open Existing Cluster and point it to CLUSTER NODE A.

4. Right-click SQL Server Cluster Group, click New, and then click Resource.

5. From New Resource, enter or select the following values, and then click Next.

[Screenshot: New Resource – Name: ENTSSO, Resource type: Generic Service, Group: SQL Server Cluster Group]

6. From Possible Owners, verify that CLUSTER NODE A and CLUSTER NODE B are in the Possible owners list, and then click Next.

7. From Dependencies, select SQL Network Name (Cluster Node A), click Add, and then click Next.

8. From Generic Service Parameters, type or select the following values, and then click Next:

[Screenshot: Generic Service Parameters – Service name: ENTSSO]

9. From Registry Replication, click Finish.

Note
Do not configure any registry keys for replication in the Registry Replication dialog box. Replication of registry keys is not a requirement when creating a SSO cluster resource and, in fact, may cause problems when failover of this cluster resource is attempted.

10. In the details pane, right-click ENTSSO, and click Bring Online. Verify that the state is changed to Online.
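If you would rather script the resource creation than click through Cluster Administrator, cluster.exe can do roughly the same thing. This is only a sketch under the same assumptions as the steps above (the group is named SQL Server Cluster Group, the network name resource is named SQL Network Name (Cluster Node A), and ENTSSO is the Enterprise Single Sign-On service name); verify the exact resource and group names in your cluster before running it.

cluster res "ENTSSO" /create /group:"SQL Server Cluster Group" /type:"Generic Service"
cluster res "ENTSSO" /priv ServiceName=ENTSSO
cluster res "ENTSSO" /adddep:"SQL Network Name (Cluster Node A)"
cluster res "ENTSSO" /on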

Restoring the Master Secret on Cluster Node B

Before restoring the master secret on Cluster Node B, you must make Cluster Node B the active cluster node and restart the cluster resource by taking it offline and then bringing it back online.

To make Cluster Node B the active cluster node

1. Log on to Cluster Node B as YourDomain\InstallBizTalk.

2. Click Start, point to All Programs, point to Administrative Tools, and then click Cluster Administrator.

3. From Cluster Administrator, in the left pane, expand CLUSTER NODE A, expand Groups, and then expand SQL Server Cluster Group. In the details pane, the Owner column of the cluster resources shows the active cluster node.

4. If Cluster Node B is not the active cluster node, in the left pane right-click SQL Server Cluster Group, and then click Move Group. Wait until all the cluster resources are online.

5. In the details pane, right-click ENTSSO, and then click Take Offline. Wait until all the cluster resources are offline.

6. In the details pane, right-click ENTSSO, and then click Bring Online. Wait until all the cluster resources are online.

To restore the master secret on the second cluster node

1. Copy the master secret backup file, C:\Program Files\Common Files\Enterprise Single Sign-On\SSOSecret.bak, from Cluster Node A to the same folder on Cluster Node B. SSOSecret.bak is the file name you specified when you configured the master secret server on Cluster Node A.

2. Open a command prompt, and then change the directory to C:\Program Files\Common Files\Enterprise Single Sign-On.

3. Type and execute the following command in the command prompt:

ssoconfig -restoresecret SSOSecret.bak


By following this procedure, you will successfully deploy and configure SSO on a SQL Server cluster. In my experience, working through this procedure together with a senior administrator inside the organization works best. This procedure was done with BizTalk Server 2006 R2, but it is also suitable for BizTalk Server 2009.

02/28/2012 Posted by | Active Directory, Biztalk, Cluster Configuration, Sql Server, SSO, Windows Server | , , , | 1 Comment

SSIS and clustering: What you should do instead

Lots of customers ask about configuring SQL Server Integration Services in a failover cluster. I recommend that you DON’T configure SSIS as a cluster resource. There are almost no benefits to doing so, and you can gain many of the benefits that you want by simple configuration changes. By editing the configuration file for the SSIS service on each node of a cluster, you can manage the packages on any node from any other node. For more information, please see the Books Online topic, Configuring Integration Services in a Clustered Environment.

Microsoft Senior Premier Field Engineer Steve Howard provided these additional details about the recommendations that he makes to customers who ask about clustering. Thanks, Steve, for permission to share:


I agree that restarting a running package automatically would be neat, but this would be different from other cluster-aware technologies (and other failover technologies) that we have. For example, if a failover happens with SQL Server, the queries do not restart, and other jobs running in Agent do not restart. I suppose it would be possible to write a job to check for jobs that did not complete successfully and restart those jobs, and then schedule that job to run at startup. That sounds feasible, but I have never done it.

What I’m describing is supported out of the box. It is really the same process that you must go through to manage packages on a standalone machine with multiple instances (or even just one named instance). I find this to be the most common question that customers have when I go onsite. Customers usually just do not understand the function of the SSIS service. When I explain it to them, and we work through it together, they are satisfied. I’ll explain here what I go through with customers and with students in the SSIS workshop.

In the installation folder, they will find the configuration file. For SQL 2005, by default, this path is: C:\Program Files\Microsoft SQL Server\90\DTS\Binn and for SQL 2008, this is C:\Program Files\Microsoft SQL Server\100\DTS\Binn. In either case, the name of the file is MsDtsSrvr.ini.xml. When first installed, this file will look like this:

<?xml version="1.0" encoding="utf-8"?>
<DtsServiceConfiguration xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
  <StopExecutingPackagesOnShutdown>true</StopExecutingPackagesOnShutdown>
  <TopLevelFolders>
    <Folder xsi:type="SqlServerFolder">
      <Name>MSDB</Name>
      <ServerName>.</ServerName>
    </Folder>
    <Folder xsi:type="FileSystemFolder">
      <Name>File System</Name>
      <StorePath>..\Packages</StorePath>
    </Folder>
  </TopLevelFolders>
</DtsServiceConfiguration>

(I just use notepad for this task, but some people prefer to use XML notepad or some XML editor like that.)

In the top-level folder, the server name is ".", which means it will only connect to the default instance on the local machine (local to where the service is running). So when I connect to that SSIS service, I can only see the default instance on the machine where the SSIS service is running. Everything here is relative to where the service is running. (I tell students that it is the center of management for SSIS.) I can connect to this machine with Management Studio on any machine, but with this configuration, I will only see the default instance running on the machine where the SSIS service I connected to is running.

If I have multiple instances on this machine, I need to add top-level folders so I can manage all the instances installed on this machine from this SSIS service. (I’ll get to clusters in a moment). Let’s say that I have both a SQL 2005 instance and a SQL 2008 instance on this machine. Then in the SQL 2008 SSIS MsDtsSrvr.ini.xml, I need to set it up to manage these instances. (I cannot manage SQL 2008 instances from SQL 2005 SSIS, so I must configure the SQL 2008 SSIS to be able to manage both from one service.) In that case, I would add the top-level folders with names that let me distinguish among the servers where I am managing packages:

<?xml version="1.0" encoding="utf-8"?>
<DtsServiceConfiguration xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
  <StopExecutingPackagesOnShutdown>true</StopExecutingPackagesOnShutdown>
  <TopLevelFolders>
    <Folder xsi:type="SqlServerFolder">
      <Name>SQL 2008 MSDB</Name>
      <ServerName>.\SQL2K8</ServerName>
    </Folder>
    <Folder xsi:type="SqlServerFolder">
      <Name>SQL 2005 MSDB</Name>
      <ServerName>.</ServerName>
    </Folder>
    <Folder xsi:type="FileSystemFolder">
      <Name>File System</Name>
      <StorePath>..\Packages</StorePath>
    </Folder>
  </TopLevelFolders>
</DtsServiceConfiguration>

So, I have added one folder named "SQL 2008 MSDB" that points to the named instance SQL2K8 on the local machine, and another folder named "SQL 2005 MSDB" that points to the default instance on the local machine. When I make this edit, restart the SSIS service so that it reads the modified configuration file, and then connect to this SSIS instance, I can now see both servers and manage packages on both:

[Screenshot: Management Studio connected to the SSIS service, showing both the SQL 2008 MSDB and SQL 2005 MSDB folders]

So now, I can see running packages on either server, and I can import, export, or manually start packages. But none of this is really necessary to be able to design, install, or run those packages. The service is just a convenience for managing them.
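Restarting the service after editing the file can also be done from a command prompt. A minimal sketch, assuming the default service names (MsDtsServer for the SQL 2005 SSIS service, MsDtsServer100 for SQL 2008); check the actual name in services.msc if yours differs:

net stop MsDtsServer100
net start MsDtsServer100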

So now, let’s take this concept to a cluster. For our cluster, let’s have 4 nodes named Node1, Node2, Node3, and Node4. On this cluster, let’s install 4 instances of SQL Server in 4 separate resource groups. Let’s use the network names net1, net2, net3, and net4, and let’s install instances InstanceA, InstanceB, InstanceC, and InstanceD on those network names respectively, so that the full names of our instances will be net1\InstanceA, net2\InstanceB, net3\InstanceC, and net4\InstanceD. Any of the 4 nodes can host any of the instances in our setup.

To be able to manage packages on any of those instances, you have to modify your config file. To manage all 4 instances from any one machine, we make modifications like those above, so that the config file now looks like this:

<?xml version="1.0" encoding="utf-8"?>
<DtsServiceConfiguration xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
  <StopExecutingPackagesOnShutdown>true</StopExecutingPackagesOnShutdown>
  <TopLevelFolders>
    <Folder xsi:type="SqlServerFolder">
      <Name>InstanceA MSDB</Name>
      <ServerName>net1\InstanceA</ServerName>
    </Folder>
    <Folder xsi:type="SqlServerFolder">
      <Name>InstanceB MSDB</Name>
      <ServerName>net2\InstanceB</ServerName>
    </Folder>
    <Folder xsi:type="SqlServerFolder">
      <Name>InstanceC MSDB</Name>
      <ServerName>net3\InstanceC</ServerName>
    </Folder>
    <Folder xsi:type="SqlServerFolder">
      <Name>InstanceD MSDB</Name>
      <ServerName>net4\InstanceD</ServerName>
    </Folder>
    <Folder xsi:type="FileSystemFolder">
      <Name>File System</Name>
      <StorePath>..\Packages</StorePath>
    </Folder>
  </TopLevelFolders>
</DtsServiceConfiguration>

So now, whatever machine I put that config file onto can see and manage packages on those 4 instances, just as in the screenshot above I can see and manage the packages on those two instances. If I put this on node1, then when I connect to node1, I can manage all of them from that machine. But just having it on one node will be a bit of a pain. So, once I have this configured and have tested that it can see all the instances where I want to manage packages, I just copy the MsDtsSrvr.ini.xml file into place on node2, node3, and node4 (if I have installed the SSIS service on those nodes). Now, I can connect to SSIS on any of those nodes.
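Pushing the file out is just a file copy. A sketch assuming the default SQL 2008 path from above and that the administrative C$ shares on the other nodes are reachable (run it at an interactive command prompt; in a batch file, double the % signs). Restart the SSIS service on each node afterwards so it picks up the new file.

for %n in (node2 node3 node4) do copy /y "C:\Program Files\Microsoft SQL Server\100\DTS\Binn\MsDtsSrvr.ini.xml" "\\%n\C$\Program Files\Microsoft SQL Server\100\DTS\Binn"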

Most DBAs don’t care what the node names are, but they know the network names of their SQL Server instances very well. In that cluster configuration we described, these network names resolve to IP addresses that move with the SQL Server instance when it fails over. So from Management Studio on the DBA’s workstation, he can connect to the SSIS service on net1 and see all 4 instances on his cluster. If it fails over, and he still wants to connect to SSIS to manage packages on any of the 4 nodes on that cluster, he could connect to net1, and it would connect to the SSIS service running on the node where Net1\InstanceA is now hosted, and he will still see the same thing – he doesn’t know or care that he is now connected to the SSIS service on a different node. If he wanted to, he could even specify the cluster name (instead of any of the SQL network names) and still connect to an SSIS service and still see the same set of folders.

In some environments, the DBA has one server that is his/hers where they set up their management tools. The SSIS configuration that we have allows the DBA to be able to configure the XML file on that one machine to see and manage packages on all instances and machines that they manage by connecting to a single SSIS service. He/she just needs to configure the XML file on that one machine.

Where I see confusion/frustration from customers is that they think of Management Studio as the center of their management tools. With SSIS, it is the SSIS service that is the center of the management tools. Customers, before education, think of the SSIS service as running the packages, but this is not the case. The SSIS service is for management. Management Studio gives them a graphical interface into the service, but the center of management for SSIS is the SSIS service.

If I have one complaint about this, it is that we do not really have a front end for customers so that they don’t have to manually edit the XML files. But really, that XML file is so simple that it is not difficult to edit with tools like Notepad or XML Notepad.

And in that situation, what have we gained if we cluster the SSIS service?


The preceding information is presented here for exactly what it is: the educated opinion of an experienced Microsoft field engineer.

What many corporate customers are really looking for, presumably, is high availability for ETL processes, especially long-running processes. Except for its support for transactions, and its ability to restart from checkpoints after failure, SSIS out of the box doesn’t currently have a complete answer for HA concerns.

01/31/2012 Posted by | Cluster Configuration, Sql Server, SSIS | , , , , , , | Leave a comment

Windows Server 2008 : Configuring Server Clusters

Server Cluster Fundamentals

In Windows Server 2008, you can configure three types of server groups for load balancing, scalability, and high availability. First, a round-robin distribution group is a set of computers that uses DNS to provide basic load balancing with minimal configuration requirements. Next, a Network Load Balancing (NLB) cluster (also called an NLB farm) is a group of servers used not only to provide load balancing but also to increase scalability. Finally, a failover cluster can be used to increase the availability of an application or service in the event of a server failure.

Note: What is load balancing?

Load balancing is a means of distributing incoming connection requests to two or more servers in a manner that is transparent to users. Load balancing can be implemented with hardware, software, or a combination of both.

Round-Robin Distribution

Round-robin DNS is a simple method for distributing a workload among multiple servers. In round-robin, a DNS server is configured with more than one record to resolve another server’s name to an IP address. When clients query the DNS server to resolve the name (find the address) of the other server, the DNS server responds by cycling through the records one at a time and by pointing each successive client to a different address and different machine.

For example, suppose that a DNS server authoritative for the DNS domain contoso.com is configured with two separate resource records, each resolving the name web.contoso.com by pointing to a different server, as shown in Figure 1. When the first client (Client1) queries the DNS server to resolve the web.contoso.com name, the DNS server answers by pointing the client to the server named websrv1 located at the 192.168.3.11 address. This is the information associated with the first DNS record matching “web.” When the next client, Client2, queries the DNS server to resolve the same name (web.contoso.com), the DNS server answers the query with the information provided in the second record matching “web.” This second record points to a server named websrv2, which is located at the 192.168.3.12 address. If a third client then queries the DNS server for the same name, the server will respond with information in the first record again.

Figure 1. Round-robin uses DNS to distribute the client load between two or more servers

The purpose of DNS round-robin is to load balance client requests among servers. Its main advantage is that it is very easy to configure. Round-robin DNS is enabled by default in most DNS servers, so to configure this simple sort of load balancing, you only need to create the appropriate DNS records on the DNS server.
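For example, the two records shown in Figure 1 could be created on the DNS server from a command prompt with dnscmd; a sketch using the sample zone, host name, and addresses from the figure:

dnscmd . /recordadd contoso.com web A 192.168.3.11
dnscmd . /recordadd contoso.com web A 192.168.3.12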

However, there are serious limitations to round-robin as a load balancing mechanism. The biggest drawback is that if one of the target servers goes down, the DNS server does not respond to this event, and it will keep directing clients to the inactive server until a network administrator removes the DNS record from the DNS server. Another drawback is that every record is given equal weight, regardless of whether one target server is more powerful than another or a given server is already busy. A final drawback is that round-robin does not always function as expected. Because DNS clients cache query responses from servers, a DNS client by default will keep connecting to the same target server as long as the cached response stays active.

Network Load Balancing

An installable feature of Windows Server 2008, NLB transparently distributes client requests among servers in an NLB cluster by using virtual IP addresses and a shared name. From the perspective of the client, the NLB cluster appears to be a single server. NLB is a fully distributed solution in that it does not use a centralized dispatcher.

In a common scenario, NLB is used to create a Web farm—a group of computers working to support a Web site or set of Web sites. However, NLB can also be used to create a terminal server farm, a VPN server farm, or an ISA Server firewall cluster. Figure 2 shows a basic configuration of an NLB Web farm located behind an NLB firewall cluster.

Figure 2. Basic diagram for two connected NLB clusters

As a load balancing mechanism, NLB provides significant advantages over round-robin DNS. First of all, in contrast to round-robin DNS, NLB automatically detects servers that have been disconnected from the NLB cluster and then redistributes client requests to the remaining live hosts. This feature prevents clients from sending requests to the failed servers. Another difference between NLB and round-robin DNS is that in NLB, you have the option to specify a load percentage that each host will handle. Clients are then statistically distributed among hosts so that each server receives its percentage of incoming requests.

Beyond load balancing, NLB also supports scalability. As the demand for a network service such as a Web site grows, more servers can be added to the farm with only a minimal increase in administrative overhead.

Failover Clustering

A failover cluster is a group of two or more computers used to prevent downtime for selected applications and services. The clustered servers (called nodes) are connected by physical cables to each other and to shared disk storage. If one of the cluster nodes fails, another node begins to take over service for the lost node in a process known as failover. As a result of failover, users connecting to the server experience minimal disruption in service.

Servers in a failover cluster can function in a variety of roles, including the roles of file server, print server, mail server, or database server, and they can provide high availability for a variety of other services and applications.

In most cases, the failover cluster includes a shared storage unit that is physically connected to all the servers in the cluster, although any given volume in the storage is accessed by only one server at a time.

Figure 3 illustrates the process of failover in a basic, two-node failover cluster.

Figure 3. In a failover cluster, when one server fails, another takes over, using the same storage

In a failover cluster, storage volumes or LUNs that are exposed to the nodes in a cluster must not be exposed to other servers, including servers in another cluster. Figure 4 illustrates this concept by showing two two-node failover clusters dividing up storage on a SAN.

Figure 4. Each failover cluster must isolate storage from other servers

Creating a Failover Cluster

Creating a failover cluster is a multistep process. The first step is to configure the physical hardware for the cluster. Then, you need to install the Failover Clustering feature and run the Failover Cluster Validation Tool, which ensures that the hardware and software prerequisites for the cluster are met. Next, once the configuration has been validated by the tool, create the cluster by running the Create Cluster Wizard. Finally, to configure the behavior of the cluster and to define the availability of selected services, you need to run the High Availability Wizard.

Preparing Failover Cluster Hardware

Failover clusters have fairly elaborate hardware requirements. To configure the hardware, review the following list of requirements for the servers, network adapters, cabling, controllers, and storage:

  • Servers Use a set of matching computers that consist of the same or similar components (recommended).
  • Network adapters and cabling The network hardware, like other components in the failover cluster solution, must be compatible with Windows Server 2008. If you use iSCSI, each network adapter must be dedicated to either network communication or iSCSI, not both. In the network infrastructure that connects your cluster nodes, avoid having single points of failure. There are multiple ways of accomplishing this. You can connect your cluster nodes by multiple, distinct networks. Alternatively, you can connect your cluster nodes with one network constructed with teamed network adapters, redundant switches, redundant routers, or similar hardware that removes single points of failure.
  • Device controllers or appropriate adapters for the storage If you are using serial attached SCSI or FC in all clustered servers, the mass-storage device controllers that are dedicated to the cluster storage should be identical. They should also use the same firmware version. If you are using iSCSI, each clustered server must have one or more network adapters or HBAs that are dedicated to the cluster storage. The network you use for iSCSI cannot be used for network communication. In all clustered servers, the network adapters you use to connect to the iSCSI storage target should be identical. It is also recommended that you use Gigabit Ethernet or higher. (Note also that for iSCSI, you cannot use teamed network adapters.)
  • Shared storage compatible with Windows Server 2008 For a two-node failover cluster, the storage should contain at least two separate volumes (LUNs), configured at the hardware level. The first volume will function as the witness disk, a volume that holds a copy of the cluster configuration database. Witness disks, known as quorum disks in Microsoft Windows Server 2003, are used in many but not all cluster configurations.

    The second volume will contain the files that are being shared to users. Storage requirements include the following:

    • To use the native disk support included in failover clustering, use basic disks, not dynamic disks.
    • It is recommended that you format the storage partitions with NTFS. (For the witness disk, the partition must be NTFS.) When deploying a storage area network (SAN) with a failover cluster, be sure to confirm with manufacturers and vendors that the storage, including all drivers, firmware, and software used for the storage, are compatible with failover clusters in Windows Server 2008.

After you have met the hardware requirements and connected the cluster servers to storage, you can then install the Failover Cluster feature.

Note: What is the quorum configuration?

The quorum configuration in a failover cluster determines the number of failures that the cluster can sustain before the cluster stops running. In Windows Server 2008, you can choose from among four quorum configurations. The first option is the Node Majority quorum configuration, which is recommended for clusters with an odd number of nodes. In node majority, the failover cluster runs as long as a majority of the nodes are running. The second option is the Node and Disk Majority quorum configuration, which is recommended for clusters with an even number of nodes. In node and disk majority, the failover cluster uses a witness disk as a tiebreaker node, and the failover cluster then runs as long as a majority of these nodes are online and available. The third option is the Node And File Share Majority quorum configuration. In node and file share majority, which is recommended for clusters that have an even number of nodes and that lack access to a witness disk, a witness file share is used as a tiebreaker node, and the failover cluster then runs as long as a majority of these nodes are online and available. The fourth and final option is the No Majority: Disk Only quorum configuration. In this configuration, which is generally not recommended, the failover cluster keeps running as long as a single node and its storage remain online.

Installing the Failover Clustering Feature

Before creating a failover cluster, you have to install the Failover Clustering feature on all nodes in the cluster.

To install the Failover Clustering feature, begin by clicking Add Features in Server Manager. In the Add Features Wizard, select the Failover Clustering check box. Click Next, and then follow the prompts to install the feature.
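If you prefer the command line, the Failover Clustering feature can also be installed with ServerManagerCmd; a sketch to run in an elevated command prompt on each node (the feature ID can be confirmed with ServerManagerCmd -query):

ServerManagerCmd -install Failover-Clustering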

Once the feature is installed on all nodes, you are ready to validate the hardware and software configuration.

Validating the Cluster Configuration

Before you create a new cluster, use the Validate A Configuration Wizard to ensure that your nodes meet the hardware and software prerequisites for a failover cluster.

To run the Validate A Configuration Wizard, first open Failover Cluster Management from the Administrative Tools program group. In Failover Cluster Management, click Validate A Configuration in the Management area or the Actions pane, as shown in Figure 5.

Figure 5. Validating failover server prerequisites

After the wizard completes, make any configuration changes if necessary, and then rerun the test until the configuration is successfully validated. After the cluster prerequisites have been validated, you can use the Create Cluster Wizard to create the cluster.

Running the Create Cluster Wizard

The next step in creating a cluster is to run the Create Cluster Wizard. The Create Cluster Wizard installs the software foundation for the cluster, converts the attached storage into cluster disks, and creates a computer account in Active Directory for the cluster. To launch this tool, in Failover Cluster Management, click Create A Cluster in the Management area or Actions pane.

In the Create Cluster Wizard, simply enter the names of the cluster nodes when prompted. The wizard then enables you to name and assign an IP address for the cluster, after which the cluster is created.

After the wizard completes, you need to configure the services or applications for which you wish to provide failover. To perform this aspect of the configuration, run the High Availability Wizard.

Running the High Availability Wizard

The High Availability Wizard configures failover service for a particular service or application. To launch the High Availability Wizard, in Failover Cluster Management, click Configure A Service Or Application in the Action pane or Configure area.

To complete the High Availability Wizard, perform the following steps:

1.
On the Before You Begin page, review the text, and then click Next.
2.
On the Select Service Or Application page, select the service or application for which you want to provide failover service (high availability), and then click Next.
3.
Follow the instructions in the wizard to specify required details about the chosen service. For example, for the File Server service, you would need to specify the following:

  • A name for the clustered file server
  • Any IP address information that is not automatically supplied by your DHCP settings—for example, a static IPv4 address for this clustered file server
  • The storage volume or volumes that the clustered file server should use
4.
After the wizard runs and the Summary page appears, to view a report of the tasks the wizard performed, click View Report.
5.
To close the wizard, click Finish.

Testing the Failover Cluster

After you complete the wizard, test the failover cluster in Failover Cluster Management. In the console tree, make sure Services and Applications is expanded, and then select the service you have just added with the High Availability Wizard. Right-click the clustered service, click Move This Service Or Application To Another Node, and then click the available choice of node. You can observe the status changes in the center pane of the snap-in as the clustered service instance is moved. If the service moves successfully, the failover is functional.
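The same test can be driven from a command prompt with cluster.exe. This is only a sketch: FileServer1 is a placeholder for whatever name you gave the clustered service in the High Availability Wizard, and NODE2 is a placeholder for the other node.

cluster group "FileServer1" /move:NODE2
cluster group "FileServer1" /status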

 Configuring an NLB Cluster

Creating an NLB cluster is a relatively simple process. To begin, install Windows Server 2008 on two servers and then, on both servers, configure the service or application (such as IIS) that you want to provide to clients. Be sure to create identical configurations because you want the client experience to be identical regardless of which server users are connected to.

The next step in configuring an NLB cluster is to install the Network Load Balancing feature on all servers that you want to join the NLB cluster. For this step, simply open Server Manager, and then click Add Features. In the Add Features Wizard, select Network Load Balancing, click Next, and then follow the prompts to install.
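As with the Failover Clustering feature, this can also be scripted with ServerManagerCmd; a sketch, assuming NLB is the feature ID on your build (use the -query output to confirm it):

ServerManagerCmd -query | findstr /i /c:"Load Balancing"
ServerManagerCmd -install NLB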

The final step in creating an NLB cluster is to use Network Load Balancing Manager to configure the cluster. This procedure is outlined in the following section.

▸ To create an NLB cluster

1.
Launch Network Load Balancing Manager from Administrative Tools. (You can also open Network Load Balancing Manager by typing Nlbmgr.exe from a command prompt.)
2.
In the Network Load Balancing Manager console tree, right-click Network Load Balancing Clusters, and then click New Cluster.
3.
Connect to the host that is to be a part of the new cluster. In Host, enter the name of the host, and then click Connect.
4.
Select the interface you want to use with the cluster, and then click Next. (The interface hosts the virtual IP address and receives the client traffic to load balance.)
5.
On the Host Parameters page, select a value in the Priority (Unique host identifier) drop-down list. This parameter specifies a unique ID for each host. The host with the lowest numerical priority among the current members of the cluster handles all the cluster’s network traffic not covered by a port rule. You can override these priorities or provide load balancing for specific ranges of ports by specifying rules on the Port rules tab of the Network Load Balancing Properties dialog box.
6.
On the Host Parameters page, verify that the dedicated IP address from the chosen interface is visible in the list. If not, use the Add button to add the address, and then click Next to continue.
7.
On the Cluster IP Addresses page, click Add to enter the cluster IP address shared by every host in the cluster. NLB adds this IP address to the TCP/IP stack on the selected interface of all hosts chosen to be part of the cluster. Click Next to continue.

Note: Use only static addresses

NLB doesn’t support Dynamic Host Configuration Protocol (DHCP). NLB disables DHCP on each interface it configures, so the IP addresses must be static.

8.
On the Cluster Parameters page, in the Cluster IP Configuration area, verify appropriate values for IP address and subnet mask, and then type a full Internet name (Fully Qualified Domain Name) for the cluster.
Note that for IPv6 addresses, a subnet mask is not needed. Note also that a full Internet name is not needed when using NLB with Terminal Services.
9.
On the Cluster Parameters page, in the Cluster Operation Mode area, click Unicast to specify that a unicast media access control (MAC) address should be used for cluster operations. In unicast mode, the MAC address of the cluster is assigned to the network adapter of the computer, and the built-in MAC address of the network adapter is not used. It is recommended that you accept the unicast default settings. Click Next to continue.
10.
On the Port Rules page, click Edit to modify the default port rules. Configure the rules as follows:

  • In the Port Range area, specify a range corresponding to the service you want to provide in the NLB cluster. For example, for Web services, type 80 to 80 so that the new rule applies only to HTTP traffic. For Terminal Services, type 3389 to 3389 so that the new rule applies only to RDP traffic.
  • In the Protocols area, select TCP or UDP, as needed, as the specific TCP/IP protocol the port rule should cover. Only the network traffic for the specified protocol is affected by the rule. Traffic not affected by the port rule is handled by the default host.
  • In the Filtering mode area, select Multiple Host if you want multiple hosts in the cluster to handle network traffic for the port rule. Choose Single Host if you want a single host to handle the network traffic for the port rule.
  • In Affinity (which applies only for the Multiple host filtering mode), select None if you want multiple connections from the same client IP address to be handled by different cluster hosts (no client affinity). Leave the Single option if you want NLB to direct multiple requests from the same client IP address to the same cluster host. Select Network if you want NLB to direct multiple requests from the local subnet to the same cluster host.
11.
After you add the port rule, click Finish to create the cluster.
To add more hosts to the cluster, right-click the new cluster, and then click Add Host To Cluster. Configure the host parameters (including host priority and dedicated IP addresses) for the additional hosts by following the same instructions that you used to configure the initial host. Because you are adding hosts to an already configured cluster, all the cluster-wide parameters remain the same.

01/19/2012 Posted by | Cluster Configuration, Windows Server | , , , | Leave a comment

   
