GAPTHEGURU

Geek with special skills

Step-by-Step: Configuring a 2-node multi-site cluster on Windows Server 2008 R2 – Part 1

Creating your cluster and configuring the quorum: Node and File Share Majority

Introduction

Welcome to Part 1 of my series “Step-by-Step: Configuring a 2-node multi-site cluster on Windows Server 2008 R2”. Before we jump right in to the details, let’s take a moment to discuss what exactly a multi-site cluster is and why I would want to implement one. Microsoft has a great webpage and white paper that you will want to download to get all of the details, so I won’t repeat everything here. Basically, a multi-site cluster is a disaster recovery solution and a high availability solution all rolled into one. A multi-site cluster gives you the best recovery point objective (RPO) and recovery time objective (RTO) available for your critical applications. With Windows Server 2008 failover clustering, a multi-site cluster has become much more feasible thanks to cross-subnet failover and support for high-latency network communications.

I mentioned “cross-subnet failover” as a great new feature of Windows Server 2008 Failover Clustering, and it is. However, SQL Server has not yet embraced this functionality, which means you will still be required to span your subnet across sites in a SQL Server multi-site cluster. As of Tech-Ed 2009, the SQL Server team reported that they plan on supporting this feature, but that it will come sometime after SQL Server 2008 R2 is released. Until then, you will be stuck spanning your subnet across sites in a SQL Server multi-site cluster. There are a few other network-related issues that you need to consider as well, such as redundant communication paths, bandwidth and file share witness placement.

Network Considerations

All Microsoft failover clusters must have redundant network communication paths. This ensures that a failure of any one communication path will not result in a false failover and ensures that your cluster remains highly available. A multi-site cluster has this requirement as well, so you will want to plan your network with that in mind. There are generally two things that will have to travel between nodes: replication traffic and cluster heartbeats. In addition, you will also need to consider client connectivity and cluster management activity. Whatever networks you have in place, be sure you are not overwhelming them, or you will see unreliable behavior. Your replication traffic will most likely require the greatest amount of bandwidth; you will need to work with your replication vendor to determine how much bandwidth is required.

With your redundant communication paths in place, the last thing you need to consider is your quorum model. For a 2-node multi-site cluster configuration, the Microsoft recommended configuration is a Node and File Share Majority quorum. For a detailed description of the quorum types, have a look at this article.

The most common cause of confusion with the Node and File Share Majority quorum is the placement of the File Share Witness. Where should I put the server that is hosting the file share? Let’s look at the options.

Option 1 – place the file share in the primary site.

This is certainly a valid option for disaster recovery, but not so much for high availability. If the entire site fails (including the Primary node and the file share witness), the Secondary node in the secondary site will not come into service automatically; you will need to force the quorum online manually. This is because it will be the only remaining vote in the cluster. One out of three does not make a majority! If you can live with a manual step being involved for recovery in the event of a disaster, then this configuration may be OK for you.

Option 2 – place the file share in the secondary site.

This is not such a good idea. Although it solves the problem of automatic recovery in the event of a complete site loss, it exposes you to the risk of a false failover. Consider this: what happens if your secondary site goes down? In this case, your primary server (Node1) will also go offline, as it is now only a single node in the primary site and will no longer have a node majority. I can see no good reason to implement this configuration as there is too much risk involved.

Option 3 – place the file share witness in a 3rd geographic location

This is the preferred configuration, as it allows for automatic failover in the event of a complete site loss and eliminates the possibility that a failure of the secondary site will cause the primary node to go offline. By having a 3rd site host the file share witness you have eliminated any one site as a single point of failure, so the cluster will act as you expect and automatic failover in the event of a site loss is possible. Identifying a 3rd geographic location can be challenging for some companies, but with the advent of cloud-based utility computing it is well within the reach of all companies to put a file share witness in the cloud and have the resiliency required for effective multi-site clusters. In fact, you may consider the cloud itself as your secondary data center and just fail over to the cloud in the event of a disaster. I think the possibilities of cloud-based computing and disaster recovery configurations are extremely enticing, and I plan on doing a whole blog post on just that in the near future.

Configure the Cluster

Now that we have the basics in place, let’s get started with the actual configuration of the cluster. You will want to add the Failover Clustering feature to both nodes of your cluster. For simplicity’s sake, I’ve called my nodes PRIMARY and SECONDARY. This is accomplished very easily through the Add Features Wizard as shown below.

Figure 1 – Add the Failover Clustering Feature

Next you will want to have a look at your network connections. It is best if you rename the connections on each of your servers to reflect the network that they represent. This will make things easier to remember later.

Figure 2- Change the names of your network connections

You will also want to go into the Advanced Settings of the Network Connections on each server (press Alt to reveal the menu bar, then choose Advanced > Advanced Settings) and make sure the Public network is first in the binding order.

Figure 3- Make sure your public network is first

Your private network should only contain an IP address and subnet mask. No default gateway or DNS servers should be defined. Your nodes need to be able to communicate across this network, so verify connectivity between the servers and add persistent static routes if necessary.

Figure 4 – Private network settings
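If the private networks in the two sites are on different subnets with a router between them, a persistent static route on each node does the job. A minimal sketch from an elevated PowerShell prompt — the subnet and router addresses here are made up for illustration, so substitute your own:

# Reach the remote site's private subnet (192.168.2.0/24, assumed) via the local
# private-network router (192.168.1.1, assumed); -p makes the route persistent.
route -p add 192.168.2.0 mask 255.255.255.0 192.168.1.1

Repeat on the other node with the subnets reversed, then confirm with a ping across the private addresses.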

Once you have your network configured, you are ready to build your cluster. The first step is to “Validate a Configuration”. Open up the Failover Cluster Manager and click on Validate a Configuration.

Figure 5 – Validate a Configuration

The Validation Wizard launches and presents the first screen, as shown below. Add the two servers in your cluster and click Next to continue.

Figure 6 – Add the cluster nodes

A multi-site cluster does not need to pass the storage validation (see Microsoft article). To skip the storage validation process, click on “Run only tests I select” and click Next.

Figure 7 – Select “Run only tests I select”

In the test selection screen, unselect Storage and click Next.

Figure 8 – Unselect the Storage test

You will be presented with the following confirmation screen. Click Next to continue.

Figure 9 – Confirm your selection

If you have done everything right, you should see a summary page that looks like the following. Notice that the yellow exclamation point indicates that not all of the tests were run. This is to be expected in a multi-site cluster because the storage tests are skipped. As long as everything else checks out OK, you can proceed. If the report indicates any other errors, fix the problem, re-run the tests, and continue.

Figure 10 – View the validation report
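Incidentally, the same validation can be run from PowerShell using the FailoverClusters module that ships with Windows Server 2008 R2. A hedged sketch — the node names are from my example, and I’m assuming “Storage” is accepted as a category name by the -Ignore parameter, so check Get-Help Test-Cluster on your build:

Import-Module FailoverClusters
# Validate both nodes but skip the storage tests, which a multi-site cluster
# with replicated storage does not need to pass.
Test-Cluster -Node PRIMARY, SECONDARY -Ignore Storage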

You are now ready to create your cluster. In the Failover Cluster Manager, click on Create a Cluster.

Figure 11 – Create your cluster

The next step asks whether or not you want to validate your cluster. Since you have already done this, you can skip this step. Note that this will pose a bit of a problem later on when installing SQL Server, as SQL Server setup requires that the cluster has passed validation before proceeding. When we get to that point I will show you how to bypass this check via a command-line option in the SQL Server setup. For now, choose No and click Next.

Figure 12 – Skip the validation test
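For the curious, the bypass I am referring to is a setup switch. A hedged sketch, run from an elevated PowerShell prompt in the SQL Server installation media folder — the rule name is my recollection of the SQL Server 2008 setup documentation, so verify it before relying on it:

# Skip the cluster validation rule during a SQL Server 2008 failover cluster install (rule name assumed).
.\Setup.exe /SkipRules=Cluster_VerifyForErrors /Action=InstallFailoverCluster

More on this when we reach the SQL Server installation in a later part.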

The next step is to provide a name and IP address for administering this cluster. This will be the name that you use to administer the cluster, not the name of the SQL Server cluster resource, which you will create later. Enter a unique name and IP address and click Next.

Note: This is also the computer name that will need permission to the File Share Witness as described later in this document.

Figure 13 – Choose a unique name and IP address
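As an aside, the whole cluster creation can also be done in one line of PowerShell. A minimal sketch assuming my node names and a made-up administrative IP address (-NoStorage, if your build of the module supports it, keeps setup from trying to claim shared disks, which a replicated multi-site cluster does not have):

Import-Module FailoverClusters
# Create the cluster with its administrative name and a static management IP.
New-Cluster -Name MYCLUSTER -Node PRIMARY, SECONDARY -StaticAddress 10.0.1.100 -NoStorage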

Confirm your choices and click Next.

Figure 14 – Confirm your choices

Congratulations! If you have done everything right you will see the following Summary page. Notice the yellow exclamation point; obviously something is not perfect. Click on View Report to find out what the problem may be.

Figure 15 – View the report to find out what the warning is all about

If you view the report, you should see a few lines that look like this.

Figure 16 – Error report

Don’t fret; this is to be expected in a multi-site cluster. Remember we said earlier that we would be implementing a Node and File Share Majority quorum. We will change the quorum type from the current Node Majority quorum (not a good idea in a two-node cluster) to a Node and File Share Majority quorum.

Implementing a Node and File Share Majority quorum

First, we need to identify the server that will hold our File Share Witness. Remember, as we discussed earlier, this file share witness should be located in a 3rd location, accessible by both nodes of the cluster. Once you have identified the server, share a folder as you normally would. In my case, I created a share called MYCLUSTER on a server named DEMODC.

The key thing to remember about this share is that you must give the cluster computer account read/write permissions to the share at both the share level and the NTFS level. If you recall back at Figure 13, I created my cluster and gave it the name “MYCLUSTER”. You will need to make sure you give the cluster computer account read/write permissions as shown in the following screen shots.

Figure 17 – Make sure you search for Computers

Figure 18 – Give the cluster computer account NTFS permissions

Figure 19 – Give the cluster computer account share level permissions
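For reference, the share and its permissions can also be set up from an elevated PowerShell prompt on the witness server. A minimal sketch — the D:\MYCLUSTER path and the CONTOSO domain are made up for illustration, and MYCLUSTER$ is the cluster computer account from my example:

# Create the share and give the cluster computer account Change rights at the share level.
net share 'MYCLUSTER=D:\MYCLUSTER' '/GRANT:CONTOSO\MYCLUSTER$,CHANGE'
# Give the cluster computer account Modify rights at the NTFS level.
icacls D:\MYCLUSTER /grant 'CONTOSO\MYCLUSTER$:(OI)(CI)M'

The single quotes simply keep PowerShell from interpreting the $ and parentheses before the arguments reach net.exe and icacls.exe.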

Now with the shared folder in place and the appropriate permissions assigned, you are ready to change your quorum type. From Failover Cluster Manager, right-click on your cluster, choose More Actions and Configure Cluster Quorum Settings.

Figure 20 – Change your quorum type

On the next screen choose Node and File Share Majority and click Next.

Figure 21 – Choose Node and File Share Majority

In this screen, enter the path to the file share you previously created and click Next.

Figure 22 – Choose your file share witness

Confirm that the information is correct and click Next.

Figure 23 – Click Next to confirm your quorum change to Node and File Share Majority

Assuming you did everything right, you should see the following Summary page.

Figure 24 – A successful quorum change

Now when you view your cluster, the Quorum Configuration should say “Node and File Share Majority” as shown below.

Figure 25 – You now have a Node and File Share Majority quorum
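As an aside, the same quorum change can be made with a single line of PowerShell from either node, using the cluster and share names from my example:

Import-Module FailoverClusters
# Switch the quorum model to Node and File Share Majority using the witness share.
Set-ClusterQuorum -Cluster MYCLUSTER -NodeAndFileShareMajority \\DEMODC\MYCLUSTER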

The steps I have outlined up to this point apply to any multi-site cluster, whether it is a SQL, Exchange, File Server or other type of failover cluster. The next step in creating a multi-site cluster involves integrating your storage and replication solution into the failover cluster. This step will vary depending upon your replication solution, so you really need to be in close contact with your replication vendor to get it right.

Other parts of this series will describe in detail how to install SQL, File Servers and Hyper-V in multi-site clusters. I will also have a post on considerations for multi-node clusters of three or more nodes.

05/10/2012 | Cluster Configuration, Clustering

Configuring iSCSI MPIO on Windows Server 2008 R2

I have recently gone through the process of wiping out one of my lab environments and rebuilding it from scratch on Windows Server 2008 R2 Enterprise. During this process, I recorded the steps I used to configure MPIO with the iSCSI initiator in R2. Just to make life more complex, my servers only have 2 NICs, so I am balancing the host traffic, virtual machine traffic, and MPIO across those two NIC devices. Is this supported? I seriously doubt it. 🙂 In the real world you would separate out iSCSI traffic on dedicated NICs, cables, and separate switch paths. The following step-by-step process should be relatively the same though.

Foundation

The workflow I am following assumes that when starting out, one NIC is configured for host traffic and the other for a VM network. On the Windows Storage Server (WSS), the secondary NIC was already configured not to register in DNS. Also, since I am using WSS and the built-in iSCSI Target, I don’t have to configure a DSM for the storage device. If your configuration is different from that, you may have to ignore or add to a few parts of the instructions below. Sorry about that; I can only document what I have available for testing.

First I just want to show a screenshot of the iSCSI target on our Windows Storage Server, to indicate that it does have two IPs.  Once again, I am cheating the system here.  These are not dedicated TOE adapters for iSCSI on a separate network.  This is a poor man’s environment with 1 VLAN and minimal network hardware.  My highly available environment is anything but!  To view this information on your own WSS, right-click on the words “Microsoft iSCSI Software Target” and click Properties.

image

Enable the MPIO Feature on the initiating servers

Next I needed to enable MPIO on the servers making the iSCSI connections.  MPIO is a Feature in Server 2008 R2 listed as Multipath I/O.  Adding the Feature did not require a reboot on any of my servers.

image

Configuring MPIO to work with iSCSI was simple.  Click Start and type “MPIO”, launch the control panel applet, and you should see the window below.  Click on the Discover Multi-Paths tab, check the box for “Add support for iSCSI devices”, and click Add.  You should immediately be prompted to reboot.  This was consistent across 4 servers where I followed this process.

image

image
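If you are working on Server Core, or simply prefer the command line, the same iSCSI support can be added with mpclaim from an elevated PowerShell prompt. A hedged sketch — the device string below is the well-known identifier for iSCSI-attached devices as I understand it, so double-check it against the MPIO documentation:

# Add MPIO support for iSCSI devices; -r reboots the server automatically when done.
mpclaim -r -i -d "MSFT2005iSCSIBusType_0x9"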

After rebooting, if you open the MPIO Control Panel applet again, you should see the iSCSI bus listed as a device.  Note on my servers, the Discover Multi-Paths page becomes grayed out.

image image

Check the IP of the existing connection path

Now click Start and type “iSCSI”.  Launch the iSCSI Initiator applet.  Add your iSNS server or Target portal.  There is plenty of documentation on how to do this on TechNet if you need assistance. I want to stay focused on the MPIO configuration.

Once you are connected to the target, click the button labeled “Devices…”.  You should see each of the volumes you have connected listed in the top pane.  Select a Disk and click the MPIO button.  In the Device Details pane you should see information on the current path and session.  If you click the Details button, you can verify the local and remote IPs the current connection is using.  It should be the IPs that resolve from the hostnames of each server.  See my remedial diagram below.

I recommend taking note of this IP, to make life easier later on!

image

So everything is set up for MPIO, but you are only using a single path, and that’s not really going to accomplish much now, is it? Since I only have 2 NICs in my test server, I need my host to share the second NIC with the VM network. This is not ideal, but again I am using what I have and this is only a test box.

Setting a second IP on my hosts

In R2 the host does not communicate by default on a NIC where a virtual network is assigned.  To change this, open the Hyper-V console and click “Virtual Network Manager…”.  Check the box “Allow management operating system to share this network adapter”.

image

This will create a third device in the network console (to get there, click Start, type “ncpa.cpl”, and launch the applet). You should see that the name of the new device matches your Virtual Network name. In my case, Local Area Connection 4 has a device name of “External1”. Right-click on the connection and then click Properties. Select “Internet Protocol Version 4 (TCP/IPv4)” and click the Properties button. Configure your address and subnet, but not the gateway, as it should already be assigned on the first adapter. You also shouldn’t need to set the DNS addresses in the new adapter. You will, however, want to click the “Advanced…” button, followed by the DNS tab, and uncheck the box next to “Register this connection’s address in DNS”. This really should be the job of your primary adapter; there is no need to have multiple addresses for the same hostname registering and causing confusion unless you have a unique demand for it.

image
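The same two settings can be applied from an elevated PowerShell prompt with netsh. A hedged sketch — the connection name comes from my example, the address is made up, and the exact dnsservers syntax may differ slightly on your build:

# Add a second IP to the new connection; no gateway on this adapter.
netsh interface ipv4 add address "Local Area Connection 4" 192.168.1.21 255.255.255.0
# Prevent this connection from registering its address in DNS.
netsh interface ipv4 set dnsservers "Local Area Connection 4" source=static address=none register=none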

Add a second path

Back in the iSCSI Initiator Applet, click the Connect button.  I know you already have a connection.  In this step we are adding an additional connection to the Target to provide a second path.

In the subsequent dialogue, make sure you check the box next to “Enable multi-path” and then click the Advanced… button. In the Advanced Settings dialogue you will need to choose the IP for your second path. In the drop-down menu next to “Local adapter:” select “Microsoft iSCSI Initiator”. In the drop-down next to “Initiator IP:” select the IP on your local server you would like the Initiator to use when making a connection for the secondary path. In the third drop-down, next to “Target portal IP:” select the IP of the iSCSI Target server you would like to connect to. This should be the opposite IP of the session we observed a few steps back, when I mentioned you should take note of the IP.

image

Check your work

Just one more step. Let’s verify that you now have 2 connections available for each disk, that they are using separate paths, and that you have the opportunity to choose the type of load balancing. Once you have hit OK out of each of the open dialogues from the step above, click on the Devices… button again and check out the top pane. On each of my servers I see each disk listed twice, once for Target 0 and once for Target 1, as seen below. If you follow my remedial diagrams one more time and select a disk, then the MPIO button, you should now see two paths. Select the first path and click the Details button. It should be using the local and remote IPs we took note of earlier. Click OK. Now select the second path and then the Details… button. You should see it using the other adapter’s IP on BOTH the local and remote hosts.

image
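You can also sanity-check the paths from an elevated PowerShell prompt with mpclaim; a minimal sketch (the disk number is whatever MPIO reports on your system):

# List the MPIO-claimed disks and their load-balance policies.
mpclaim -s -d
# Show the individual paths, and their states, for a specific MPIO disk (disk 0 here).
mpclaim -s -d 0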

05/10/2012 | Clustering, iSCSI, MPIO, Windows Server

Configuring the Microsoft iSCSI Software Target

Introduction

This post describes how to configure the Microsoft iSCSI Software Target offered with Windows Storage Server.

One of the goals here is to describe the terminology used like iSCSI Target, iSCSI Initiator, iSCSI Virtual Disk, etc. It also includes the steps to configure the iSCSI Software Target and the iSCSI Initiator.

Initial State

We’ll start with a simple scenario with three servers: one Storage Server and two Application Servers.

iSCSI-01

In my example, the Storage Server runs WSS 2008 and the two Application Servers run Windows Server 2008.

The Application Servers could be running any edition of Windows Server 2003 (using the downloadable iSCSI Initiator) or Windows Server 2008 / Windows Server 2008 R2 (which come with an iSCSI Initiator built-in).

The iSCSI Initiator configuration applet can be found in the Application Server’s Control Panel. In the “General” tab of that applet you will find the iQN (iSCSI Qualified Name) for the iSCSI Initiator, which you may need later while configuring the Storage Server.

The Microsoft iSCSI Software Target Management Console can be found on the Administration Tools menu in the Storage Server.

Add iSCSI Targets

The first thing to do is add two iSCSI Targets to the Storage Server. To do this, right-click the iSCSI Targets node in the Microsoft iSCSI Software Target MMC and select the “Create iSCSI Target” option. You will then specify a name, an optional description and the identifier for the iSCSI Initiator associated with that iSCSI Target.

There are four methods to identify the iSCSI Initiators: iQN (iSCSI Qualified Name), DNS name, IP address and MAC address. However, you only need to use one of the methods. The default is the iQN (which can be obtained from the iSCSI Initiator’s control panel applet). If you don’t have access to the iSCSI Initiator to check the iQN, you can use its DNS name. If you’re using the Microsoft iSCSI Initiator on your application server, that iQN is actually constructed with a prefix (“iqn.1991-05.com.microsoft:”) combined with the DNS name of the computer.

For instance, if the Application Server runs the Microsoft iSCSI Initiator, is named “s2” and is a member of the “contoso.com” domain, its iQN would be “iqn.1991-05.com.microsoft:s2.contoso.com” and its DNS name would be “s2.contoso.com”. You could also use its IP address (something like “10.1.1.120”) or its MAC address (which would look like “12-34-56-78-90-12”).

Typically, you assign just one iSCSI Initiator to each iSCSI Target. If you assign multiple iSCSI Initiators to the same iSCSI Target, there is a potential for conflict between Application Servers. However, there are cases where this can make sense, such as when you are using clusters.

In my example, we created two iSCSI Targets named T1 (assigned to the iSCSI Initiator in S2) and T2  (assigned to the iSCSI Initiator in S3). It did not fit in the diagram, but assume we used the complete DNS names of the Application Servers to identify their iSCSI Initiators.

iSCSI-02

Add Virtual Disks

Next, you need to create the Virtual Disks on the Storage Server. This is the equivalent of creating a LUN in a regular SAN device. The Microsoft iSCSI Software Target stores those Virtual Disks as files with the VHD extension on the Storage Server.

This is very similar to the Virtual Disks in Virtual PC and Virtual Server. However, you can only use the fixed size format for the VHDs (not the dynamically expanding or differencing formats). You can extend those fixed-size VHDs later if needed.

Right-click the “Devices” node in the Microsoft iSCSI Software Target MMC and select the “Create Virtual Disk” option. For each Virtual Disk you will specify a filename (complete with drive, folder and extension), a size (between 8MB and 16TB) and an optional description. You can also assign the iSCSI Targets at this point, but we’ll skip that and do it as a separate step.

In my example, I have created three virtual disks: D:\VHD1.vhd, E:\VHD2.vhd and E:\VHD3.vhd.

iSCSI-03

You can create multiple VHD files on the same disk. However, keep in mind that there are performance implications in doing so, since these VHDs will be sharing the same spindles (not unlike any scenario where two applications store data in the same physical disks).

The VHD files created by the Microsoft iSCSI Software Target cannot be used by Virtual PC or Virtual Server, since the format was adapted to support larger sizes (up to 16 TB instead of the usual 2 TB limit in Virtual PC and Virtual Server).

Assign Virtual Disks to iSCSI Targets

Once you have created the iSCSI Targets and the Virtual Disks, it’s time to associate each virtual disk with its iSCSI Target. Since the iSCSI Initiators were already assigned to the iSCSI Targets, this is the equivalent of unmasking a LUN in a regular SAN device.

Right-click the “Devices” node in the Microsoft iSCSI Software Target MMC and select the “Assign/Remove Target” option. This will take you directly to the “Target Access” tab in the properties of the virtual disk. Click the “Add” button to pick a target. You will typically assign a virtual disk to only one iSCSI Target. As with multiple iSCSI Initiators per iSCSI Target, if you assign the same disk to multiple iSCSI Targets, there is a potential for conflict if two Application Servers try to access the virtual disk at the same time.

You can assign multiple disks to a single iSCSI Target. This is very common when you are exposing several disks to the same Application Server. However, you can also expose multiple virtual disks to the same Application Server using multiple iSCSI Targets, with a single virtual disk per iSCSI Target. This will improve performance if your server runs a very demanding application in terms of storage, since each target will have its own request queue. Having too many iSCSI Targets will also tax the system, so you need to strike a balance if you have dozens of Virtual Disks, each associated with very demanding Application Servers.

In my example, I have assigned VHD1 and VHD2 to T1, then assigned VHD3 to T2.

iSCSI-04

Add Target Portal

Now that we finished the configuration on the Storage Server side, let’s focus on the Application Servers.

Using the iSCSI Initiator control panel applet, click on the “Discovery” tab and add your Storage Server DNS name or IP address to the list of Target Portals. Keep the default port (3260).

Next, select the “Targets” tab and click on the “Refresh” button. You should see the iQNs of iSCSI Targets that were assigned to this specific iSCSI Initiator.

In my example, the iSCSI Initiators in Application Server S2 and S3 were configured to use Storage Server S1 as target portal.

iSCSI-05

The iQN of the iSCSI Target (which you will see in the iSCSI Initiator) is constructed by the Storage Server using a prefix (“iqn.1991-05.com.microsoft:”) combined with the Storage Server computer name, the name of the iSCSI Target and a suffix (“-target”). In our example, when checking the list of Targets on the iSCSI Initiator in S3, we found “iqn.1991-05.com.microsoft:s1-t2-target”.

Logon to iSCSI Targets

Now you need to select the iSCSI Target and click on the “Log on” button to connect to the target, making sure to select the “Automatically restore this connection when the system boots” option.

Once the iSCSI Initiators have successfully logged on to the targets,  the virtual disks will get exposed to the Application Servers.

In our example, S2’s iSCSI Initiator was configured to logon to the T1 target and S3’s iSCSI Initiator was configured to logon to the T2 target.

iSCSI-06

Format, Mount and Bind Volumes

At this point, the virtual disks look just like locally-attached disks, showing up in the Disk Management MMC as uninitialized disks. Now you need to format and mount the volumes.

To finish the configuration, open the Computer Management MMC (Start, Administrative Tools, Computer Management or right-click Computer and click Manage). Expand the “Storage” node on the MMC tree to find the “Disk Management” option. When you click on the Disk Management option, you should immediately see the “Initialize and Convert Disk Wizard”. Follow the wizard to initialize the disk, making sure to keep it as a basic disk (as opposed to dynamic).

You should then use the Disk Management tool to create a partition, format it and mount it (as a drive letter or a path), as you would for any local disk. For volumes larger than 2 TB, you must convert the disk to a GPT disk (right-click the disk, select “Convert to GPT Disk”). Do not convert to GPT if you intend to boot from that disk.

After the partition is created and the volumes are formatted and mounted, you can go to the “Bound Volumes/Devices” tab in the iSCSI Initiator applet, make sure all volumes mounted are listed there and then use the “Bind All” option. This will ensure that the volumes will be available to services and applications as they are started by Windows.
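If you prefer to script this step, the bind can also be done with iscsicli from an elevated PowerShell prompt. A minimal sketch — these are the command names as I recall them, and iscsicli /? will list the full set if yours differ:

# Bind all currently mounted iSCSI volumes so they are available before services start.
iscsicli BindPersistentVolumes
# List what is now bound.
iscsicli ReportPersistentDevices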

In my example, I have created a single partition for each disk, formatted them as NTFS and mounted each one in an available drive letter. In Application Server S2, we ended up with disks F: (for VHD1) and G: (for VHD2). On S3, we used F: (for VHD3).

iSCSI-07

Create Snapshot

Next, we’ll create a snapshot of a volume. This is basically a point-in-time copy of the data, which can be used as a backup or an archive. You can restore the disk to any previous snapshot in case your data is damaged in any way. You can also look at the data as it was at that time without restoring it. If you have enough disk space, you can keep many snapshots of your virtual disks, going back days, months or years.

To create a snapshot in the Storage Server, right-click the Devices node in the Microsoft iSCSI Software Target MMC and select the “Create Snapshot” option.  No additional information is required and a snapshot will be created.

You can also schedule the automatic creation of snapshots. For example, you could do it once a day at 1AM. This is done using the “Schedules” option under the “Snapshots” node in the Microsoft iSCSI Software Target MMC.

In my example, I have created a snapshot of the VHD3 virtual disk at 1AM.

iSCSI-08

Microsoft also offers a VSS Provider for the Microsoft iSCSI Software Target, which you can use on the Application Server to create a VSS-based snapshot.

Export Snapshot to iSCSI Target

Snapshots are usually not exposed to targets at all. You can use them to “go back in time” by rolling back to a previous snapshot, which requires no reconfiguration of the iSCSI Initiators. In some situations, however, it might be useful to expose a snapshot so you can check what’s in it before you roll back.

You might also just grab one or two files from the exported snapshot and never really roll back the entire virtual disk. Keep in mind that snapshots are read-only.

To make a snapshot visible to an Application Server, right-click the snapshot in the Microsoft iSCSI Software Target MMC and select the “Export Snapshot” option. You will only need to pick the target you want to use.

Unlike regular virtual disks, you can choose to export snapshots to multiple iSCSI Targets or to an iSCSI Target with multiple iSCSI Initiators assigned. This is because you cannot write to them and therefore there is no potential for conflicts.

In our example, we exported the VHD3 at 1AM snapshot to target T2, which caused it to show up on Application Server S3.

iSCSI-09

Mount Snapshot Volume

The last step to expose the snapshot is to mount it as a path or drive on your Application Server. Note that you do not need to initialize the disk, create a partition or format the volume, since these things were already performed with the original virtual disk. You would not be able to perform any of those operations on a snapshot anyway, since you cannot write to it.

Again, open the Computer Management MMC, expand the “Storage” node and find the “Disk Management” option. If you already have it open, simply refresh the view to find the additional disk. Then use the properties of the volume to mount it.

In my example, I have mounted the snapshot of VHD3 at 1AM as the G: drive on Application Server S3.

iSCSI-10

Now you might be able to find a file you deleted on that F: drive after 1AM by looking at drive G:. You can then decide to copy files from the G: drive to F: drive at the Application Server side. You can also decide to roll back to that snapshot on the Storage Server side, keeping in mind that you will lose any changes to F: after 1AM.

Advanced Scenario

Now that you have the basics, you can start designing more advanced scenarios. As an example, see the diagram showing two Storage Servers and two Application Servers.

iSCSI-11

There are a few interesting points about that diagram that are worth mentioning. First, the iSCSI Initiators in the Application Servers (S3 and S4) point to two Target Portals (S1 and S2).

Second, you can see that VHD1 and VHD2 are exposed to Application Server S3 using two separate iSCSI Targets (T1 and T2). A single iSCSI Target could be used, but this was done to improve performance.

You can also see that the snapshot of VHD5 at 3AM is being exported simultaneously to Application Servers S3 and S4. This is fine, since snapshots are write-protected.

Clustering Example

This last scenario shows how to configure the Microsoft iSCSI Software Target for a cluster environment. The main difference here is the fact that we are assigning the same iSCSI Target to multiple iSCSI Initiators at the same time. This is usually not a good idea for regular environments, but it is common for a cluster.

iSCSI-12

This example shows an active-active cluster, where node 1 (running on Application Server S2) has the Quorum disk and the Data1 disk, while node 2 (running on Application Server S3) has the Data2 disk. When running in a cluster environment, the servers know how to keep the disks they’re not using offline, bringing them online on just one node at a time, as required.

In case of a failure of node 1, node 2 will first verify that it should take over the services and then it will mount the disk resources and start providing the services that used to run on the failed node. Also note that we avoid conflicting drive letters on cluster nodes, since that could create a problem when you move resources between them. As you can see, the nodes need a lot of coordination to access the shared storage and that’s one of the key abilities of the cluster software.

Again, we could have used a single iSCSI Target for all virtual disks, but two were used because of performance requirements of the application associated with the Data2 virtual disk.

Conclusion

I hope this explanation helped you understand some of the details on how to configure the Microsoft iSCSI Software Target included in Windows Storage Server.

Links and References

For general information on iSCSI on the Windows platform, including a link to download the iSCSI Initiator for Windows Server 2003, check http://www.microsoft.com/iscsi

For step-by-step instructions on how to configure the Microsoft iSCSI Software Target, with screenshots, check this post: http://blogs.technet.com/josebda/archive/2009/02/02/step-by-step-using-the-microsoft-iscsi-software-target-with-hyper-v-standalone-full-vhd.aspx

For details on how VSS works, check this post: http://blogs.technet.com/josebda/archive/2007/10/10/the-basics-of-the-volume-shadow-copy-service-vss.aspx

For details on how iSCSI names are constructed using the iQN format, check IETF’s RFC 3721 at http://www.ietf.org/rfc/rfc3721

05/10/2012 | Cluster Configuration, Windows Server

BIZTALK TIPS: How to Cluster Message Queuing / How to Cluster MSDTC

Cluster support is provided for the BizTalk Server MSMQ adapter by running the MSMQ adapter handlers in a clustered instance of a BizTalk Host. If the MSMQ adapter handlers are run in a clustered BizTalk Host instance, a clustered Message Queuing (MSMQ) resource should also be configured to run in the same cluster group as the clustered BizTalk Host when using the send adapter or the receive adapter for BizTalk Server 2006 R2 and earlier. This should be done for the following reasons:

  • MSMQ adapter receive handler – The MSMQ adapter receive handler for BizTalk Server 2006 R2 and earlier does not support remote transactional reads; only local transactional reads are supported. The MSMQ adapter receive handler on BizTalk Server 2006 R2 and earlier must run in a host instance that is local to the clustered MSMQ service in order to complete local transactional reads with the MSMQ adapter.
  • MSMQ adapter send handler – To ensure the consistency of transactional sends made by the MSMQ adapter, the outgoing queue used by the MSMQ adapter send handler should be highly available, so that if the MSMQ service for the outgoing queue fails, it can be resumed. Configuring a clustered MSMQ resource and the MSMQ adapter send handlers in the same cluster group will ensure that the outgoing queue used by the MSMQ adapter send handler will be highly available. This will mitigate the possibility of message loss in the event that the MSMQ service fails.

Many BizTalk Server operations are performed within the scope of a Microsoft Distributed Transaction Coordinator (MSDTC) transaction.

A clustered MSDTC resource must be available on the Windows Server cluster to provide transaction services for any clustered BizTalk Server components or dependencies. BizTalk Server components or dependencies that can be configured as Windows Server cluster resources include the following:

  • BizTalk Host
  • Enterprise Single Sign-On (SSO) service
  • SQL Server instance
  • Message Queuing (MSMQ) service
  • Windows File system

Windows Server 2003 only supports running MSDTC on cluster nodes as a clustered resource.

Windows Server 2008 supports running a local DTC on any server node in the failover cluster, even if a default clustered DTC resource is configured.

To cluster Message Queuing (Windows Server 2008)

  1. To start the Failover Cluster Management program, click Start, Programs, Administrative Tools, and then click Failover Cluster Management.
  2. In the left pane, right-click Failover Cluster Management, and then click Manage a Cluster.
  3. In the Select a cluster to manage dialog box, enter the cluster to be managed, and then click OK.
  4. To start the High Availability Wizard, in the left pane, click to expand the cluster, right-click Services and Applications, and then click Configure a Service or Application.
  5. If the Before You Begin page of the High Availability Wizard is displayed, click Next.
  6. On the Select Service or Application page, click Message Queuing, and then click Next.
  7. On the Client Access Point page, enter a value for Name, enter an available IP address under Address, and then click Next.
  8. On the Select Storage page, click a disk resource, and then click Next.
  9. On the Confirmation page, click Next.
  10. On the Summary page, click Finish.
  11. To create a clustered MSDTC resource on the cluster so that there is transaction support for the clustered MSMQ resource, follow these steps:

 

To configure the Distributed Transaction Coordinator (DTC) for high availability (Windows Server 2008)


  1. To start the Failover Cluster Management program, click Start, Programs, Administrative Tools, and then click Failover Cluster Management.
  2. In the left hand pane, right-click Failover Cluster Management, and then click Manage a Cluster.
  3. In the Select a cluster to manage dialog box, enter the cluster to be managed, and then click OK.
  4. To start the High Availability Wizard, in the left pane click to expand the cluster, right-click Services and Applications, and then click Configure a Service or Application.
  5. If the Before You Begin page of the High Availability Wizard is displayed, click Next.
  6. On the Select Service or Application page, click Distributed Transaction Coordinator, and then click Next.
  7. On the Client Access Point page, enter a value for Name, enter an available IP address under Address, and then click Next.
  8. On the Select Storage page, click to select a disk resource and then click Next.
  9. On the Confirmation page, click Next.
  10. On the Summary page, click Finish.

 

To configure the MSDTC transaction mode as Incoming Caller Authentication Required (Windows Server 2008)


  1. To open the Component Services management console, click Start, Programs, Administrative Tools, and then click Component Services.
  2. Click to expand Component Services, click to expand Computers, click to expand My Computer, click to expand Distributed Transaction Coordinator, click to expand Clustered DTCs, right-click the clustered DTC resource, and then click Properties.
  3. Click the Security tab.
  4. If network DTC access is not already enabled, click to enable the Network DTC Access option. Network DTC access must be enabled to accommodate transactional support for BizTalk Server.
  5. Under Transaction Manager Communication, enable the following options:
    • Allow Inbound
    • Allow Outbound
    • Incoming Caller Authentication Required
  6. After changing security settings for the clustered distributed transaction coordinator resource, the resource will be restarted. Click Yes and OK when prompted.
  7. Close the Component Services management console.

To cluster Message Queuing (Windows Server 2003)

  1. To start the Cluster Administrator program, click Start, point to Programs, point to Administrative Tools, and then click Cluster Administrator.
  2. Click to select a cluster group other than the quorum group that contains a Network Name and Physical Disk resource.
  3. On the File menu, point to New, and then click Resource.
  4. Enter a value for the Name field of the New Resource dialog box, for example, MSMQ.
  5. In the Resource type drop-down list, click Message Queuing, and then click Next.
  6. In the Possible Owners dialog box, include each cluster node as a possible owner of the message queuing resource, and then click Next.
  7. In the Dependencies dialog box, add a dependency to a network name resource and the disk resource associated with this group, and then click Finish.
  8. Click OK in the dialog box that indicates that the resource was created successfully.
  9. To create a clustered MSDTC resource on the cluster so that there is transaction support for the clustered MSMQ resource, follow these steps:

 

To add an MSDTC resource to an existing cluster group (Windows Server 2003)


  1. To start the Cluster Administrator program, click Start, Programs, Administrative Tools, and then click Cluster Administrator.
  2. Click to select a cluster group other than the quorum group that contains a Physical Disk, IP Address, and Network Name resource. Create a group with a Physical Disk, IP Address, and Network Name resource first if one does not already exist.
  3. On the File menu, point to New, and then click Resource.
  4. Enter a value for the Name field of the New Resource dialog box, for example, MSDTC.
  5. In the Resource type drop-down list, click Distributed Transaction Coordinator, and then click Next.
  6. In the Possible Owners dialog box, include each cluster node as a possible owner of the distributed transaction coordinator resource, and then click Next.
  7. In the Dependencies dialog box, add a dependency to a network name resource and the disk resource associated with this group, and then click Finish.
  8. In the dialog box that indicates that the resource was created successfully, click OK.
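On Windows Server 2003, the same MSDTC resource can be created with cluster.exe instead of the GUI. A hedged sketch — the resource, group, network name and disk names below are placeholders for whatever already exists in your cluster group:

cluster res "MSDTC" /create /group:"BizTalk Group" /type:"Distributed Transaction Coordinator"
cluster res "MSDTC" /adddep:"BizTalk Network Name"
cluster res "MSDTC" /adddep:"Disk X:"
cluster res "MSDTC" /online

The first line creates the resource in the existing group, the two /adddep lines reproduce the dependencies from step 7, and /online brings the resource up.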

 

To configure the MSDTC transaction mode as Incoming Caller Authentication Required (Windows Server 2003)


  1. To open the Component Services management console, click Start, Programs, Administrative Tools, and then Component Services.
  2. Click to expand Component Services, and then click to expand Computers.
  3. Right-click My Computer, and then select the Properties menu item to display the My Computer Properties dialog box.
  4. Click the MSDTC tab.
  5. To display the Security Configuration dialog box, click Security Configuration.
  6. If network DTC access is not already enabled, click to enable the Network DTC Access option. Network DTC access must be enabled to accommodate transactional support for BizTalk Server.
  7. Under Transaction Manager Communication, enable the following options:
    • Allow Inbound
    • Allow Outbound
    • Incoming Caller Authentication Required
  8. Stop and restart the Distributed Transaction Coordinator service.

03/08/2012 | Biztalk, Cluster Configuration

SSIS and clustering: What you should do instead

Lots of customers ask about configuring SQL Server Integration Services in a failover cluster. I recommend that you DON’T configure SSIS as a cluster resource. There are almost no benefits to doing so, and you can gain many of the benefits that you want by simple configuration changes. By editing the configuration file for the SSIS service on each node of a cluster, you can manage the packages on any node from any other node. For more information, please see the Books Online topic, Configuring Integration Services in a Clustered Environment.

Microsoft Senior Premier Field Engineer Steve Howard provided these additional details about the recommendations that he makes to customers who ask about clustering. Thanks, Steve, for permission to share:


I agree that restarting a running package automatically would be neat, but this would be different from other cluster-aware technologies (and other failover technologies) that we have. For example, if a failover happens with SQL Server, the queries do not restart, and other jobs running in Agent do not restart. I suppose it would be possible to write a job to check for jobs that did not complete successfully and restart those jobs, then schedule that job to run at startup. That sounds feasible, but I have never done that.

What I’m describing is supported out of the box. It is really the same process that you must go through to manage packages on a standalone machine with multiple instances (or even just one named instance). I find this to be the most common question that customers have when I go onsite. Customers usually just do not understand the function of the SSIS service. When I explain it to them, and we work through it together, they are satisfied. I’ll explain here what I go through with customers and students in the SSIS workshop.

In the installation folder, they will find the configuration file. For SQL 2005, by default, this path is: C:\Program Files\Microsoft SQL Server\90\DTS\Binn and for SQL 2008, this is C:\Program Files\Microsoft SQL Server\100\DTS\Binn. In either case, the name of the file is MsDtsSrvr.ini.xml. When first installed, this file will look like this:

<?xml version="1.0" encoding="utf-8"?>
<DtsServiceConfiguration xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
  <StopExecutingPackagesOnShutdown>true</StopExecutingPackagesOnShutdown>
  <TopLevelFolders>
    <Folder xsi:type="SqlServerFolder">
      <Name>MSDB</Name>
      <ServerName>.</ServerName>
    </Folder>
    <Folder xsi:type="FileSystemFolder">
      <Name>File System</Name>
      <StorePath>..\Packages</StorePath>
    </Folder>
  </TopLevelFolders>
</DtsServiceConfiguration>

(I just use notepad for this task, but some people prefer to use XML notepad or some XML editor like that.)

In the top level folder, the servername is “.”, which means it will only connect to the default instance on the local machine (local to where the service is running). So when I connect to that SSIS service, I can only see the default instance on the machine where the SSIS service is running. Everything here is relative to where the service is running. (I tell students that it is the center of management for SSIS). I can connect to this machine with Management Studio on any machine, but with this configuration, I will only see the default instance running on the machine where the SSIS service I connected to is running.

If I have multiple instances on this machine, I need to add top-level folders so I can manage all the instances installed on this machine from this SSIS service. (I’ll get to clusters in a moment). Let’s say that I have both a SQL 2005 instance and a SQL 2008 instance on this machine. Then in the SQL 2008 SSIS MsDtsSrvr.ini.xml, I need to set it up to manage these instances. (I cannot manage SQL 2008 instances from SQL 2005 SSIS, so I must configure the SQL 2008 SSIS to be able to manage both from one service.) In that case, I would add the top-level folders with names that let me distinguish among the servers where I am managing packages:

<?xml version="1.0" encoding="utf-8"?>
<DtsServiceConfiguration xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
  <StopExecutingPackagesOnShutdown>true</StopExecutingPackagesOnShutdown>
  <TopLevelFolders>
    <Folder xsi:type="SqlServerFolder">
      <Name>SQL 2008 MSDB</Name>
      <ServerName>.\SQL2K8</ServerName>
    </Folder>
    <Folder xsi:type="SqlServerFolder">
      <Name>SQL 2005 MSDB</Name>
      <ServerName>.</ServerName>
    </Folder>
    <Folder xsi:type="FileSystemFolder">
      <Name>File System</Name>
      <StorePath>..\Packages</StorePath>
    </Folder>
  </TopLevelFolders>
</DtsServiceConfiguration>

So, I have added one folder that is named “SQL 2008 MSDB” and that points to the named instance SQL2K8 on the local machine. The other folder is named “SQL 2005 MSDB” and that points to the default instance on the local machine. After I make this edit, restart the SSIS service so that it reads the modified configuration file, and then connect to this SSIS instance, I can see both servers and manage packages on both:

 

So now, I can see running packages on either server, and I can import, export, or manually start packages. But none of this is really necessary to be able to design, install, or run those packages. The service is just a convenience for managing them.
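Restarting the SSIS service after editing the file is also easy to script. A minimal PowerShell sketch — the service names below are the defaults (MsDtsServer for SQL 2005, MsDtsServer100 for SQL 2008), so check the Services console if yours differ:

# Restart the SQL 2008 Integration Services service so it re-reads MsDtsSrvr.ini.xml.
Restart-Service MsDtsServer100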

So now, let’s take this concept to a cluster. For our cluster, let’s have 4 nodes named Node1, Node2, Node3, and Node4. On this, let’s install 4 instances of SQL in 4 separate resource groups. Let’s use the network names net1, net2, net3, and net4, and let’s install instances InstanceA, InstanceB, InstanceC, and InstanceD on those network names respectively, so that the full names of our instances will be net1\InstanceA, net2\InstanceB, net3\InstanceC, and net4\InstanceD. Any of the 4 nodes can host any of the instances in our setup.

To be able to manage packages on any of those instances, you are going to have to modify your config file. To be able to manage packages on all 4 instances from any one machine, we would make modifications like I did above so that the config file will now look like this:

<?xml version="1.0" encoding="utf-8"?>
<DtsServiceConfiguration xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
  <StopExecutingPackagesOnShutdown>true</StopExecutingPackagesOnShutdown>
  <TopLevelFolders>
    <Folder xsi:type="SqlServerFolder">
      <Name>InstanceA MSDB</Name>
      <ServerName>net1\InstanceA</ServerName>
    </Folder>
    <Folder xsi:type="SqlServerFolder">
      <Name>InstanceB MSDB</Name>
      <ServerName>net2\InstanceB</ServerName>
    </Folder>
    <Folder xsi:type="SqlServerFolder">
      <Name>InstanceC MSDB</Name>
      <ServerName>net3\InstanceC</ServerName>
    </Folder>
    <Folder xsi:type="SqlServerFolder">
      <Name>InstanceD MSDB</Name>
      <ServerName>net4\InstanceD</ServerName>
    </Folder>
    <Folder xsi:type="FileSystemFolder">
      <Name>File System</Name>
      <StorePath>..\Packages</StorePath>
    </Folder>
  </TopLevelFolders>
</DtsServiceConfiguration>

So now, whatever machine I put that config file onto will see and be able to manage packages on those 4 machines, just as in the screenshot above, I can see the packages and manage them on those two instances. If I put this on node1, then if I connect to node1, I can manage all of them from that machine. But just having it on one node will be a bit of a pain. So, once I have this configured, and I have tested to make sure it will see all the instances where I want to manage packages, I just copy the MsDtsSrvr.ini.xml file into place on node2, node3, and node4 (if I have installed the SSIS service on those nodes). Now, I can connect to SSIS on any of those nodes.

Most DBAs don’t care what the node names are, but they know the network names of their SQL Server instances very well. In that cluster configuration we described, these network names resolve to IP addresses that move with the SQL Server instance when it fails over. So from Management Studio on the DBA’s workstation, he can connect to the SSIS service on net1 and see all 4 instances on his cluster. If it fails over, and he still wants to connect to SSIS to manage packages on any of the 4 nodes on that cluster, he could connect to net1, and it would connect to the SSIS service running on the node where Net1\InstanceA is now hosted, and he will still see the same thing – he doesn’t know or care that he is now connected to the SSIS service on a different node. If he wanted to, he could even specify the cluster name (instead of any of the SQL network names) and still connect to an SSIS service and still see the same set of folders.

In some environments, the DBA has one server that is his/hers where they set up their management tools. The SSIS configuration that we have allows the DBA to be able to configure the XML file on that one machine to see and manage packages on all instances and machines that they manage by connecting to a single SSIS service. He/she just needs to configure the XML file on that one machine.

Where I see confusion/frustration from customers is that they think of Management Studio as the center of their management tools. With SSIS, it is the SSIS service that is the center of the management tools. Customers, before education, think of the SSIS service as running the packages, but this is not the case. The SSIS service is for management. Management Studio gives them a graphical interface into the service, but the center of management for SSIS is the SSIS service.

If I have one complaint about this, it is that we do not really have a front end for customers so that they don’t have to manually edit the XML files. But really, that XML file is so simple that it is not difficult to edit with tools like Notepad or XML Notepad.

And in that situation, what have we gained if we cluster the SSIS service?


The preceding information is presented here for exactly what it is: the educated opinion of an experienced Microsoft field engineer.

What many corporate customers are really looking for, presumably, is high availability for ETL processes, especially long-running processes. Except for its support for transactions, and its ability to restart from checkpoints after failure, SSIS out of the box doesn’t currently have a complete answer for HA concerns.

01/31/2012 | Cluster Configuration, Sql Server, SSIS

   
