GAPTHEGURU

Geek with special skills

The Perfect Combination: SQL Server 2012, Windows Server 2012 and System Center 2012

Information from the insider news blog about SQL Server and Microsoft's Information Platform: http://blogs.technet.com/b/dataplatforminsider/archive/2012/12/06/the-perfect-combination-sql-server-2012-windows-server-2012-and-system-center-2012.aspx

Delivering a Complete Data Platform for the Modern Datacenter with Cloud OS

Today's organizations need the ability to seamlessly build, deploy and manage applications and services across on-premises and cloud computing environments. The Cloud OS platform, comprising Windows Server® 2012, Windows Azure, Microsoft® SQL Server® 2012, Microsoft System Center 2012 and Visual Studio 2012, provides a consistent platform from on-premises to cloud computing environments. For database applications, we have identified three important scenarios where customers will benefit from the Cloud OS platform:

  1. Tackling mission critical OLTP workload SLAs and performance requirements
  2. Revolutionizing enterprise data warehousing
  3. Migrating large mission critical SQL Server workloads into Microsoft private cloud

For non-virtualized environments in an on-premises data center, Windows Server 2012 and SQL Server 2012 provide the best platform for mission-critical workloads in these areas:

    • Performance & Scalability:  SQL Server 2012 can consume the operating system maximum for both processors and memory. Windows Server 2012 supports 640 logical processors (cores) across a maximum of 64 sockets and up to 4 TB of RAM, allowing SQL Server applications to scale to meet the demands of most mission-critical applications. The new NIC Teaming feature in Windows Server 2012 allows two or more network adapters to behave as a single virtual device. This improves the reliability of the networking subsystem (if one NIC dies, the other continues to function) and allows the bandwidth of each adapter to be pooled for greater total network throughput for SQL Server data. With the SMB improvements in Windows Server 2012, SQL Server can store database files on remote (SMB) file shares, giving customers many more deployment options for their database server storage. The new data deduplication feature in Windows Server 2012 delivers 30-90% storage savings for FILESTREAM BLOBs and other external files in SQL Server applications.
    • Availability:  SQL Server 2012 support for Windows Server Core is expected to eliminate the need for 50-60% of OS-level patches. With Windows Server 2012, the server administrator can configure SQL Server with full support for graphical interfaces and then switch the server to run in Server Core mode. Cluster-Aware Updating automates SQL Server cluster node maintenance, making the process easier, faster, more consistent and more reliable, with significantly less downtime. With dynamic quorum management, the cluster adjusts the number of quorum votes required to stay up as nodes drop out, so a SQL Server AlwaysOn cluster can keep running down to the last surviving node, while simplifying setup by as much as 80%.

Organizations are also seeking a cloud-optimized IT infrastructure that can span from a private cloud behind your firewall to a public cloud behind a service provider's firewall. One key element in achieving this is a common virtualization platform across private and public clouds. This increases efficiency and performance across infrastructures, which is essential for database applications. Windows Server 2012 offers the best virtualization platform for SQL Server 2012. Working together, SQL Server 2012, Windows Server 2012, and System Center 2012 offer a seamlessly integrated, on-premises and cloud-ready information platform to meet the demands of today's enterprise. We have just published a white paper on the detailed benefits of this integration. Key benefits include:

    • Better Scalability: Higher capacity vCPUs (up to 64), memory (up to 1 TB), and VM density (up to 8,000 per cluster)
    • Better Performance: Hyper-V support for NUMA and Fibre Channel
    • Better Availability: Faster & simultaneous live migration and dynamic quorum support in SQL Server AlwaysOn cluster
    • Better Manageability: Same management tool (System Center) for SQL Server virtual machines in both private and public cloud

We have also published the latest performance report for SQL Server 2012 running on Windows Server 2012 Hyper-V. Key points from the performance report include:

    • With Windows Server 2012 Hyper-V’s new support for up to 64 vCPUs, ESG Lab took an existing SQL Server 2012 OLTP workload that was previously vCPU limited and increased the performance by six times, while the average transaction response times improved by five times.
    • Manageably-low Hyper-V overhead of 6.3% was recorded when comparing SQL Server 2012 OLTP workload performance of a physical server to a virtual machine configured with the same number of virtual CPU cores and the same amount of RAM.

When compared to VMware vSphere 5.1, Windows Server 2012 Hyper-V offers a number of advantages for SQL Server workloads:

    • Performance & Scalability: Windows Server 2012 Hyper-V is better equipped to deploy mission-critical SQL Server workloads in virtualized environments, allowing up to 64 virtual processors per VM with no SKU-specific restrictions. By contrast, the free vSphere Hypervisor, along with the vSphere 5.1 Essentials, Essentials Plus and Standard editions, supports only 8 vCPUs per VM; vSphere 5.1 Enterprise supports 32 vCPUs, and only the most expensive edition, vSphere 5.1 Enterprise Plus, supports up to 64 vCPUs. Hyper-V offers superior performance for SQL Server virtualization, supporting 320 logical processors per host, while vSphere 5.1 supports just half that number, restricting scalability and density. Hyper-V also supports up to 4 TB of host physical memory, with an individual VM able to use up to 1 TB of memory; by comparison, the free vSphere Hypervisor caps host physical memory at 32 GB and vSphere 5.1 Enterprise Plus at 2 TB.
    • Storage & High Availability: For the mission-critical SQL Server AlwaysOn scenario that makes use of Windows Server Failover Clustering (WSFC), customers retain full Hyper-V functionality, whereas when virtualizing Windows Server based clusters, VMware recommends turning off key features such as vMotion for VM mobility, DRS for dynamic resource allocation, Memory Overcommit (sacrificing density) and vSphere Fault Tolerance (FT). Also, when using Fibre Channel for guest clusters, VMware restricts scale to just 5 nodes. No such restriction applies with Hyper-V, which offers unmatched scale for failover clustering, with support for up to 64 nodes and 8,000 VMs per cluster. Hyper-V Live Migration also offers unlimited simultaneous live migrations and shared-nothing live migration for seamlessly moving VMs between hosts and clusters. Additionally, Hyper-V fully supports guest clustering with Live Migration and Dynamic Memory, unlike VMware. On storage, Hyper-V is optimized to take advantage of the increased capacity of single virtual disks, storing huge databases, file repositories or document archives of up to 64 TB in size, while vSphere is restricted to 2 TB per virtual disk. Hyper-V also supports the latest hardware innovations such as 4K Advanced Format disks, which bring higher capacities, better alignment and resiliency, and ultimately higher performance; vSphere does not support this hardware innovation.
    • Deployment & Management: Hyper-V, combined with System Center, supports VM migration and management from the private cloud (behind your firewall) to the public cloud (behind a service provider's firewall) through a single pane of glass. This provides organizations with unparalleled flexibility. Additionally, System Center supports not only Hyper-V but also VMware vSphere and Citrix XenServer based infrastructures. Hyper-V combined with System Center also provides complete infrastructure monitoring (hardware, hypervisor, operating system, and applications), which is especially useful for deploying, optimizing and monitoring the ongoing performance of workloads such as SQL Server. With VMware, customers are required to purchase expensive additional products to deliver any form of monitoring beyond the standard virtual machine metrics.
    • Lower costs: Hyper-V provides a significantly lower total cost of ownership (TCO) than VMware vSphere for initial licensing and ongoing operations. More details on the cost comparison can be obtained through this web site where the analysis shows that a VMware private cloud solution can cost 5.5 times more than a Microsoft based private cloud solution.

Hyper-V proves to be the best solution for virtualizing SQL Server databases, with superior capabilities in many areas and a significantly better TCO than VMware. Many customers understand the benefits outlined above and have chosen to run their SQL Server workloads on Hyper-V, or have moved existing SQL Server deployments from VMware to Hyper-V. See these case studies for more details.

Microsoft's Cloud OS platform, consisting of SQL Server 2012, Windows Server 2012, System Center 2012, Windows Azure, and Visual Studio 2012, offers a unique and consistent platform from on-premises to cloud computing environments, helping organizations modernize their datacenters by taking advantage of the CAPEX and OPEX efficiencies that cloud computing provides. Customers can evaluate the platform by trying SQL Server 2012, Windows Server 2012, System Center 2012, Windows Azure, and Visual Studio 2012.

 


12/06/2012 Posted by | Sql Server, Windows Server | Leave a comment

How to setup Active Directory Federation Services

Part 1

Active Directory Federation Services (AD FS) is a Windows component that enables authentication of users on sites beyond their own administrative domain. A typical example is when users from one site have to access resources on an external site, such as resources in a partner network (e.g. partner web sites). When a resource on a remote site requires authentication but "local" credentials should be used, that is where AD FS comes into play.

AD FS lets your Active Directory (AD) service authenticate its users when they access resources that belong to other domains and are hosted at remote locations. To enable this type of authentication, an Active Directory federation must be established between the two remote sites, with Active Directory Federation servers placed at both locations.

User authentication at the site where the resources reside, and which the user is trying to access, is based on a token issued by the federation server at the user's location. The next picture shows the AD FS architecture:

On the user's side, alongside the Account Federation Server, is the AD domain controller that authenticates users. At the remote location, in the resource site, is the Resource Federation Server, which participates in authenticating the user on the remote site.

The following scenario happens when a user tries to access a resource on the remote side:

1. The user sends a request to access the resource

2. The application server (SharePoint in the picture) contacts the Resource Federation Server to authenticate the user

3. The Resource Federation Server requests the user's identity claims from the Account Federation Server

4. The Account Federation Server passes the user's identity to Active Directory, which authenticates the user

5. The Account Federation Server creates a token for the user and sends it to the Resource Federation Server

6. On receiving the user's token, the Resource Federation Server creates a service token and forwards it to the resource server (SharePoint), and the authentication process is complete.

In this way, a single sign-on (SSO) mechanism is provided for users accessing resources at locations outside their administrative boundary. Trust between the two organizations is established through Active Directory Federation Services.

To establish federation between two sites, a few steps must be performed:

– Install AD Federation Services on the Account and Resource Federation servers

– Configure the resource server (the web server or other application server whose resources clients access)

– Configure the federation servers (both account and resource) to establish the trust relationship

– Configure the clients

Installing AD Federation Services on the Account and Resource Federation servers

The AD FS service can be installed as a role on Windows Server 2008. To begin installation, go to Start->Administrative Tools->Server Manager, then right-click Roles and select Add Roles. The Add Roles wizard opens:

Click Active Directory Federation Services and then Next. The next window is Role Services. Select the check box for the component you want to install (in this case, Federation Service):

You may be prompted to install additional services, such as IIS or the IIS components that AD FS requires. Confirm the installation of the additional services (click Add Required Role Services). When this step finishes, click Next. A window for choosing the SSL certificate for the AD FS server appears. This certificate will be used to secure communication between clients and the federation server.

For the SSL certificate you can choose an existing certificate issued by your enterprise CA or create a self-signed certificate on the federation server. In this example a self-signed certificate will be used. Click Create a self-signed certificate for SSL encryption and click Next. The token-signing certificate window opens:

The token-signing certificate is used to sign the tokens issued for client authentication at the remote site where the resources the client wants to access reside. When the client requests access to a resource, the application server (SharePoint in the picture) asks the Resource Federation Server to identify the client. The Resource Federation Server then contacts the Account Federation Server on the client's side. When the authentication request reaches the Account Federation Server, it contacts the AD domain controller, which authenticates the client. Once AD has authenticated the client, the Account Federation Server generates a token signed with the token-signing certificate and sends it to the Resource Federation Server, which then generates a service token and sends it to the resource server. When this process is completed, the client can access the resources.

After the token-signing certificate is generated, click Next. The trust policy window appears:

The trust policy defines the rules applied when a request to authenticate a user arrives from a partner federation server. It defines when a request should be accepted or denied and what types of information should be included in the token issued to the partner organization's federation server. A trust policy can be created for this purpose, or an existing one can be used. In this example we will create a new policy. Click Create a new policy and then Next. A window appears with the list of services that will be installed:

In this case, the AD Federation Service and the IIS server will be installed.

After verifying which services will be installed, click Install to begin the installation process. When it finishes, a window showing the installation results is displayed:

With this last step the AD FS server role is installed. The role is installed in the same way on both the Resource and Account Federation servers. After installing the AD FS role, the servers must be configured to establish trust and communication, and the resource server must be configured to be aware of the federation server.
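
If you prefer the command line, the same role can be added without the wizard. This is only a hedged sketch: on Windows Server 2008 R2 the Server Manager PowerShell module is available, and the role-service ID shown below ("ADFS-Federation") is an assumption, so list the available IDs first and adjust as needed (on Windows Server 2008 itself, ServerManagerCmd.exe -query and -install provide the equivalent).

# Hedged sketch (2008 R2): install the Federation Service role service from PowerShell.
Import-Module ServerManager
Get-WindowsFeature ADFS*              # confirm the exact AD FS role-service IDs on your build
Add-WindowsFeature ADFS-Federation    # assumed ID for the Federation Service; pulls in the required IIS pieces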

Part 2

When you have Federation Services installed as a server role on both sides of the federation (account and resource), you have to configure the servers to establish trust between them. Configuration includes configuring the trust policy on both servers, creating and configuring a group claim and an AD account store, and establishing trust by importing the policy from one federation server to the other, on the partner side. In this article I will describe the AD FS server configuration process. The configuration of both federation servers (account and resource) will be described.

Configuring Federation Services on federation servers

 Trust policy configuration

The first thing to configure is the trust policy. To do that, go through the following steps:

1. Open the AD FS configuration console: go to Start->Administrative Tools and click Active Directory Federation Services. The following window opens:

2. In the console tree, double-click Federation Services, then right-click Trust Policy and select Properties:

3. On the General tab, in Federation Service URI, type the URI of the AD Federation Service. This URI is used to identify the federation service on the federation server. If AD FS is installed on a server farm, this URI should be the same for all servers in the farm. The same URI must also appear in the trust policy imported by the partner organization.

4. The Federation Service endpoint URL text box shows the URL of the federation service. A default URL is provided; you can change it.

5. On the Display Name tab, type the trust policy name and click OK.

The trust policy should be configured in the same way on both federation servers, with differences in the policy names, URLs and AD FS URIs.

Create a group claim for the claims-aware application

1. For authentication requests from the partner side to be handled, a group claim must be created. To create a group claim, go to Start->Administrative Tools and click Active Directory Federation Services

2. In the console tree, double-click Federation Services, double-click Trust Policy, double-click My Organization, right-click Organization Claims, point to New and click Organization Claim

3. In the Create a New Organization Claim dialog box, in Claim name, type the name of the new organization claim. This is the claim used with the AD FS service on the other (partner) side; let's call it Partner Claim.

4. Ensure that Group claim is selected and click OK

The claim should be configured on both sides, on the Account Federation Server and the Resource Federation Server.

Add AD account store

Once the group claim is created and configured, an account store should be added. This is the store that holds the user identities that are authenticated during resource access. In this case, Active Directory will be used as the account store. AD is the most efficient and most commonly used store for users when AD FS is deployed.

To add AD as the account store, follow these steps:

1. Go to Start->Administrative Tools and click Active Directory Federation Services

2. In the console tree, double-click Federation Services, double-click Trust Policy, double-click My Organization, right-click Account Stores, point to New and click Account Store:

3. On the Welcome to the Add Account Store Wizard page, click Next.

4. On the Account Store page, Active Directory Domain Services should be selected. Click Next.

5. The Enable this Account Store page appears. The Enable this account store check box should be selected. Click Next.

6. On the Completing the Add Account Store Wizard page, click Finish.

Adding the account store should be performed on both federation servers (Account and Resource).

Map a global group to the group claim for the claims-aware application

1. Go to Start->Administrative Tools and click Active Directory Federation Services

2. In the console tree, double-click Federation Services, double-click Trust Policy, double-click My Organization, double-click Account Stores, right-click Active Directory, point to New and click Group Claim Extraction.

3. In the Create a New Group Claim Extraction dialog box, click Add, type the name of your mapping, for example partnerclaimusers, and then click OK.

4. The Map to this Organization Claim menu should display the group claim for the partner organization, in this case the Partner Claim we configured earlier. Click OK.

Mapping a global group to the group claim should be configured only on the account federation server side, because the users accessing resources are on that side, in the same domain as the account AD FS server. On the resource federation server side, the claims-aware application should be added and configured instead.

Adding and configuring a claims-aware application on the resource federation server

When configuring the account federation server, the group claim must be mapped to a group of users in AD. On the other side, on the resource federation server, a claims-aware application must be added to connect the application that clients access with the federation service. To add a claims-aware application, perform the following steps on the Resource Federation Server:

1. Go to Start->Administrative Tools and click Active Directory Federation Services

2. In the console tree, double-click Federation Services, double-click Trust Policy, double-click My Organization, right-click Applications, point to New and click Application.

3. On the Welcome to the Add Application Wizard page, click Next

4. On the Application Type page, click Claims-aware application, and then click Next.

5. On the Application Details page, in Application Display Name, type Claims-aware Application

6. In Application URL, type the URL of the web application that clients access (e.g. http://web.domain.com/application).

7. On the Accepted Identity Claims page, click User principal name (UPN), and then click Next.

8. On the Enable this application page check Enable this application and click Next

9. Click Finish on the Completing the Add Application Wizard page

After adding the claims-aware application, the group claim should be enabled for the application. To do this, go to the Applications folder, click the claims-aware application, right-click your group claim and click Enable.

When the settings described above are in place, the federation servers are configured for federation, but to establish full trust between them the trust policies must be exported from one server and imported on the other. Once the policies are exchanged, trust between the servers is established and the AD FS service is configured between the organizations.

05/23/2012 Posted by | Active Directory, ADFS, Federation, Security, SSO, Windows Server | , , | 1 Comment

How to verify MPIO setup on the iSCSI Initiator

To verify that all the disks have two paths, I opened the iSCSI Initiator control panel applet and checked the device paths:


As you can see, each disk listed in the Devices pane had two paths associated with it, as well as an MPIO policy. You can change the policy by clicking the drop-down box on the Device details page.
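
You can cross-check the same information from an elevated command prompt with the inbox mpclaim.exe and iscsicli.exe tools. A quick hedged sketch (the MPIO disk number is just an example from this lab):

mpclaim -s -d           # list MPIO-managed disks and their current load-balance policies
mpclaim -s -d 0         # show the individual paths behind MPIO disk 0 (disk number is an example)
iscsicli SessionList    # confirm that two iSCSI sessions exist to the target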

There is also a report you can generate by:

  • Open Microsoft iSCSI Initiator, and then click the Configuration tab.
  • Click Report.
  • Enter the file name, and then click Save.

My report file looks like:

iSCSI Initiator Report
=======================
List of Discovered Targets, Sessions and devices
==================================================
Target #0
========
Target name = iqn.1991-05.com.microsoft:svr1-target
Session Details
===============
Session #1 <= first session to the target
===========
Number of Connections = 1
Connection #1
==============
Target Address = 10.10.0.51
Target Port = 3260
#0. Disk 2
========
Address:Port 3: Bus 0: Target 0: LUN 0
#1. Disk 4
========
Address:Port 3: Bus 0: Target 0: LUN 1
#2. Disk 5
========
Address:Port 3: Bus 0: Target 0: LUN 2
Session #2 <= second session to the target
===========
Number of Connections = 1
Connection #1
==============
Target Address = 10.10.0.51
Target Port = 3260
#0. Disk 2
========
Address:Port 3: Bus 0: Target 1: LUN 0
#1. Disk 4
========
Address:Port 3: Bus 0: Target 1: LUN 1
#2. Disk 5
========
Address:Port 3: Bus 0: Target 1: LUN 2

How to verify MPIO setup on the iSCSI Target

To view the session/connection information on the target server, you need to use WMI. The easiest way to execute WMI queries is with WMIC.exe in a command-line window.

C:\>wmic /namespace:\\root\wmi Path WT_HOST where (hostname = "T2") get /format:list

Where T2 is my target object name.
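
The same query can be run from PowerShell with Get-WmiObject; this is just a sketch against the WT_Host class shown above, using the "T2" target name from this lab:

# Sketch: query the iSCSI Target WMI provider from PowerShell instead of WMIC.
Get-WmiObject -Namespace root\wmi -Class WT_Host -Filter "HostName='T2'" |
    Select-Object -ExpandProperty Sessions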

A sample output is listed below with minor formatting changes. Comments have been added to help explain the output and are prefixed with "<=":

instance of WT_Host
{
    CHAPSecret = "";
    CHAPUserName = "";
    Description = "";
    Enable = TRUE;
    EnableCHAP = FALSE;
    EnableReverseCHAP = FALSE;
    EnforceIdleTimeoutDetection = TRUE;
    HostName = "T2";
    LastLogIn = "20110502094448.082000-420";
    NumRecvBuffers = 10;
    ResourceGroup = "";
    ResourceName = "";
    ResourceState = -1;
    ReverseCHAPSecret = "";
    ReverseCHAPUserName = "";
    Sessions = {
instance of WT_Session    <= First session information from initiator 10.10.2.77
{
    Connections = {
instance of WT_Connection    <= First connection information from initiator 10.10.2.77, since the iSCSI Target supports only one connection per session, you will see each session contains one connection.
{
    CID = 1;
    DataDigestEnabled = FALSE;
    HeaderDigestEnabled = FALSE;
    InitiatorIPAddress = "10.10.2.77";
    InitiatorPort = 63042;
    TargetIPAddress = "10.10.2.73";
    TargetPort = 3260;
    TSIH = 5;
}};
    HostName = "T2";
    InitiatorIQN = "iqn.1991-05.com.microsoft:svr.contoso.com";
    ISID = "1100434440256";
    SessionType = 1;
    TSIH = 5;
}, 
instance of WT_Session    <=  Second session information from initiator 10.10.2.77 (multiple sessions from the same initiator as above)
{
    Connections = {
instance of WT_Connection
{
    CID = 1;
    DataDigestEnabled = FALSE;
    HeaderDigestEnabled = FALSE;
    InitiatorIPAddress = "10.10.2.77";
    InitiatorPort = 63043;
    TargetIPAddress = "10.10.2.73";
    TargetPort = 3260;
    TSIH = 6;
}};
    HostName = "T2";
    InitiatorIQN = "iqn.1991-05.com.microsoft:svr.contoso.com";
    ISID = "3299457695808";
    SessionType = 1;
    TSIH = 6;
}, 
instance of WT_Session    <= First session information from initiator 10.10.2.69
{
    Connections = {
instance of WT_Connection
{
    CID = 1;
    DataDigestEnabled = FALSE;
    HeaderDigestEnabled = FALSE;
    InitiatorIPAddress = "10.10.2.69";
    InitiatorPort = 60063;
    TargetIPAddress = "10.10.2.73";
    TargetPort = 3260;
    TSIH = 10;
}};
    HostName = "T2";
    InitiatorIQN = "iqn.1991-05.com.microsoft:svr2.contoso.com";
    ISID = "2199946068032";
    SessionType = 1;
    TSIH = 10;
}, 
instance of WT_Session    <= Second session information from initiator 10.10.2.69

{
    Connections = {
instance of WT_Connection
{
    CID = 1;
    DataDigestEnabled = FALSE;
    HeaderDigestEnabled = FALSE;
    InitiatorIPAddress = "10.10.2.69";
    InitiatorPort = 60062;
    TargetIPAddress = "10.10.2.73";
    TargetPort = 3260;
    TSIH = 11;
}};
    HostName = "T2";
    InitiatorIQN = "iqn.1991-05.com.microsoft:svr2.contoso.com";
    ISID = "922812480";
    SessionType = 1;
    TSIH = 11;
}};
    Status = 1;
    TargetFirstBurstLength = 65536;
    TargetIQN = "iqn.1991-05.com.microsoft:cluster-yan03-t2-target";
    TargetMaxBurstLength = 262144;
    TargetMaxRecvDataSegmentLength = 65536;
};

As you can see in the above session information, each node (as an iSCSI initiator) has connected to the target with two sessions. You may also have noticed that both sessions are using the same network path. This is because, when you configure the iSCSI initiator, it picks the connection path for you by default. If one path fails, another path will be used to reconnect the session. This configuration is easy to set up, and you don't need to worry about IP address assignment. It is a good fit for a failover MPIO policy.


If you want to use specific network paths, or want to use both network paths, you will need to specify the settings when you connect the initiators. You can do this by going to the “Advanced” setting page.


This configuration allows you to use specific IPs, and can utilize multiple paths at the same time with different MPIO load balancing policies.

A word of caution on using specific IPs for the initiator and target: if you are using DHCP in the environment and the IP address changes after a reboot, the initiator may not be able to reconnect. From the initiator UI, you will see the initiator trying to "Reconnect" to the target after the reboot. You will need to reconfigure the connection to get it out of this state (a command-line sketch follows the list):

  1. Remove the iSCSI Target Portal
  2. Add the iSCSI Target Portal back
  3. Connect to the discovered iSCSI Targets
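
If you prefer to script the cleanup, the inbox iscsicli.exe tool can perform the same three steps. This is a hedged sketch only; the portal address and IQN below are the example values from the report earlier in this post:

iscsicli RemoveTargetPortal 10.10.0.51 3260                    # 1. remove the stale target portal
iscsicli QAddTargetPortal 10.10.0.51                           # 2. add the target portal back
iscsicli ListTargets                                           #    list the rediscovered targets
iscsicli QLoginTarget iqn.1991-05.com.microsoft:svr1-target    # 3. reconnect to the discovered target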

05/10/2012 Posted by | Clustering, iSCSI, MPIO | Leave a comment

Step-by-Step: Configuring Windows Server 8 Beta iSCSI Target Software for Use in a Cluster

If you have just downloaded the bits for Windows Server 8 Beta, you are probably anxious to try out all the great new features, including Windows Storage Spaces, Continuously Available File Servers and Hyper-V availability. Many of those new features require you to become familiar with Windows Server Failover Clustering. In addition, things like Storage Spaces require access to additional storage to simulate JBODs. The Windows iSCSI Target Software is a great way to provide storage for Failover Clustering and Storage Spaces in a lab environment so you can play around with these new features.

This Step-by-Step Article assumes you have three Windows Server 8 servers running in a domain environment. My lab environment consists of the following:

Hardware
My three servers are all virtual machines running on VMware Workstation 8 on top of my Windows 7 laptop with 16 GB of RAM. See my article on how to install Windows Server 8 on VMware Workstation 8.

Server Names and Roles
PRIMARY.win8.local – my cluster node 1
SECONDARY.win8.local – my cluster node 2
WIN-EHVIK0RFBIU.win8.local – my domain controller (guess who forgot to rename his DC before I promoted it to be a Domain Controller 🙂)

Network
192.168.37.X/24 – my public network also used to carry iSCSI traffic
10.X.X.X/8 – a private network defined just between PRIMARY and SECONDARY for cluster communication

This article is going to walk you through step-by-step on how to do the following:

The article consists mostly of screen shots, but I also add notes where needed.

Install the iSCSI Target Role on your Domain Controller

Click on Add roles and features to install the iSCSI target role.

You will find that the iSCSI Target Server role service is located under File And Storage Services/File Services. Just select iSCSI Target Server and click Next to begin the installation of the iSCSI Target Server role.

Configure the iSCSI Target

The iSCSI target software is managed under File and Storage Services on the Server Manager Dashboard, click on that to continue

The first step in creating an iSCSI target is to create an iSCSI Virtual Disk. Click on Launch the New Virtual Disk wizard to create a virtual disk.
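
For reference, the whole target can also be built without the GUI using the iSCSI Target PowerShell cmdlets that ship with the role. The sketch below uses the cmdlet and feature names from Windows Server 2012 RTM, which may differ slightly in the beta, and the paths, sizes, target name and initiator IQNs are placeholders for this lab:

# Hedged sketch: create and publish an iSCSI virtual disk from PowerShell.
Add-WindowsFeature FS-iSCSITarget-Server                                   # feature name as of Server 2012 RTM
New-IscsiVirtualDisk -Path C:\iSCSIVirtualDisks\Quorum.vhd -Size 1GB       # -Size per Server 2012; later builds use -SizeBytes
New-IscsiServerTarget -TargetName ClusterTarget -InitiatorIds "IQN:iqn.1991-05.com.microsoft:primary.win8.local","IQN:iqn.1991-05.com.microsoft:secondary.win8.local"
Add-IscsiVirtualDiskTargetMapping -TargetName ClusterTarget -Path C:\iSCSIVirtualDisks\Quorum.vhd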

Connect to the iSCSI Target using the iSCSI Initiator

Format the iSCSI Target

Connect to the shared iSCSI Target from the SECONDARY Server

Configure Windows Server 8 Failover Clustering

05/10/2012 Posted by | Cluster Configuration, Clustering, iSCSI, Windows Server | , , | 2 Comments

Step-by-Step: Configuring a 2-node multi-site cluster on Windows Server 2008 R2 – Part 1

Creating your cluster and configuring the quorum: Node and File Share Majority

Introduction

Welcome to Part 1 of my series "Step-by-Step: Configuring a 2-node multi-site cluster on Windows Server 2008 R2". Before we jump right in to the details, let's take a moment to discuss what exactly a multi-site cluster is and why I would want to implement one. Microsoft has a great webpage and white paper that you will want to download to get all of the details, so I won't repeat everything here. Basically, a multi-site cluster is a disaster recovery solution and a high availability solution rolled into one. A multi-site cluster gives you the best recovery point objective (RPO) and recovery time objective (RTO) available for your critical applications. With Windows Server 2008 failover clustering, a multi-site cluster has become much more feasible thanks to the introduction of cross-subnet failover and support for high-latency network communications.

I mentioned “cross-subnet failover” as a great new feature of Windows Server 2008 Failover Clustering, and it is a great new feature. However, SQL Server has not yet embraced this functionality, which means you will still be required to span your subnet across sites in a SQL Server multi-site cluster. As of Tech-Ed 2009, the SQL Server team reported that they plan on supporting this feature, but they say it will come sometime after SQL Server 2008 R2 is released. For the foreseeable future you will be stuck with spanning your subnet across sites in a SQL Server multi-site cluster. There are a few other network related issues that you need to consider as well, such as redundant communication paths, bandwidth and file share witness placement.

Network Considerations

All Microsoft failover clusters must have redundant network communication paths. This ensures that a failure of any one communication path will not result in a false failover and ensures that your cluster remains highly available. A multi-site cluster has this requirement as well, so you will want to plan your network with that in mind. There are generally two things that will have to travel between nodes: replication traffic and cluster heartbeats. In addition to that, you will also need to consider client connectivity and cluster management activity. You will want to be sure that whatever networks you have in place, you are not overwhelming the network or you will have unreliable behavior. Your replication traffic will most likely require the greatest amount of bandwidth; you will need to work with your replication vendor to determine how much bandwidth is required.

With your redundant communication paths in place, the last thing you need to consider is your quorum model. For a 2-node multi-site cluster configuration, the Microsoft recommended configuration is a Node and File Share Majority quorum. For a detailed description of the quorum types, have a look at this article.

The most common cause of confusion with the Node and File Share Majority quorum is the placement of the File Share Witness. Where should I put the server that is hosting the file share? Let’s look at the options.

Option 1 – place the file share in the primary site.

This is certainly a valid option for disaster recovery, but not so much for high availability. If the entire site fails (including the primary node and the file share witness), the secondary node in the secondary site will not come into service automatically; you will need to force the quorum online manually. This is because it will be the only remaining vote in the cluster. One out of three does not make a majority! If you can live with a manual step being involved for recovery in the event of a disaster, then this configuration may be OK for you.

Option 2 – place the file share in the secondary site.

This is not such a good idea. Although it solves the problem of automatic recovery in the event of a complete site loss, it exposes you to the risk of a false failover. Consider this: what happens if your secondary site goes down? In this case, your primary server (Node1) will also go offline, as it is now only a single node in the primary site and will no longer have a node majority. I can see no good reason to implement this configuration, as there is too much risk involved.

Option 3 – place the file share witness in a 3rd geographic location

This is the preferred configuration, as it allows for automatic failover in the event of a complete site loss and eliminates the possibility of a failure of the secondary site causing the primary node to go offline. By having a 3rd site host the file share witness you have eliminated any one site as a single point of failure, so now the cluster will act as you expect and automatic failover in the event of a site loss is possible. Identifying a 3rd geographic location can be challenging for some companies, but with the advent of cloud-based utility computing it is well within the reach of all companies to put a file share witness in the cloud and have the resiliency required for effective multi-site clusters. In fact, you may consider the cloud itself as your secondary data center and just fail over to the cloud in the event of a disaster. I think the possibilities of cloud-based computing and disaster recovery configurations are extremely enticing, and I plan on doing a whole blog post on just that in the near future.

Configure the Cluster

Now that we have the basics in place, let's get started with the actual configuration of the cluster. You will want to add the Failover Clustering feature to both nodes of your cluster. For simplicity's sake, I've called my nodes PRIMARY and SECONDARY. This is accomplished very easily through the Add Features Wizard as shown below.

Figure 1 – Add the Failover Clustering Role
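
If you prefer PowerShell over the Add Features Wizard, the feature can be added on each node with the Server Manager module; a minimal sketch for Windows Server 2008 R2:

Import-Module ServerManager
Add-WindowsFeature Failover-Clustering    # run on both PRIMARY and SECONDARY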

Next you will want to have a look at your network connections. It is best if you rename the connections on each of your servers to reflect the network that they represent. This will make things easier to remember later.

Figure 2- Change the names of your network connections

You will also want to go into the Advanced Settings of your Network Connections (hit Alt to see Advanced Settings menu) of each server and make sure the Public network is first in the list.

Figure 3- Make sure your public network is first

Your private network should only contain an IP address and Subnet mask. No Default Gateway or DNS servers should be defined. Your nodes need to be able to communicate across this network, so make sure the servers can communicate across this network; add static routes if necessary.

Figure 4 – Private network settings

Once you have your network configured, you are ready to build your cluster. The first step is to “Validate a Configuration”. Open up the Failover Cluster Manager and click on Validate a Configuration.

Figure 5 – Validate a Configuration

The Validation Wizard launches and presents you the first screen as shown below. Add the two servers in your cluster and click Next to continue.

Figure 6 – Add the cluster nodes

A multi-site cluster does not need to pass the storage validation (see the Microsoft article). To skip the storage validation process, click on "Run only the tests I select" and click Continue.

Figure 7 – Select “Run only tests I select”

In the test selection screen, unselect Storage and click Next

Figure 8 – Unselect the Storage test

You will be presented with the following confirmation screen. Click Next to continue.

Figure 9 – Confirm your selection

If you have done everything right, you should see a summary page that looks like the following. Notice that the yellow exclamation point indicates that not all of the tests were run. This is to be expected in a multi-site cluster because the storage tests are skipped. As long as everything else checks out OK, you can proceed. If the report indicates any other errors, fix the problem, re-run the tests, and continue.

Figure 10 – View the validation report
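
The validation run can also be scripted with the FailoverClusters PowerShell module; a hedged sketch that skips the storage tests, matching the choice made in the wizard above:

Import-Module FailoverClusters
Test-Cluster -Node PRIMARY, SECONDARY -Ignore Storage    # skip the storage test category, as in the wizard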

You are now ready to create your cluster. In the Failover Cluster Manager, click on Create a Cluster.

Figure 11 – Create your cluster

The next step asks whether or not you want to validate your cluster. Since you have already done this, you can skip this step. Note that this will pose a bit of a problem later on when installing SQL Server, as it requires that the cluster has passed validation before proceeding. When we get to that point I will show you how to bypass this check via a command-line option in the SQL Server setup. For now, choose No and Next.

Figure 12 – Skip the validation test

Next, you must create a name and IP address for administering this cluster. This will be the name that you use to administer the cluster, not the name of the SQL cluster resource, which you will create later. Enter a unique name and IP address and click Next.

Note: This is also the computer name that will need permission to the File Share Witness as described later in this document.

Figure 13 – Choose a unique name and IP address
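
The equivalent PowerShell sketch is below; the cluster name matches the one used later in this post, and the IP address is only a placeholder on my 192.168.37.x public network:

New-Cluster -Name MYCLUSTER -Node PRIMARY, SECONDARY -StaticAddress 192.168.37.50 -NoStorage    # -NoStorage: a multi-site cluster has no shared disk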

Confirm your choices and click Next.

Figure 14 – Confirm your choices

Congratulations! If you have done everything right you will see the following Summary page. Notice the yellow exclamation point; obviously something is not perfect. Click on View Report to find out what the problem may be.

Figure 15 – View the report to find out what the warning is all about

If you view the report, you should see a few lines that look like this.

Figure 16 – Error report

Don’t fret; this is to be expected in a multi-site cluster. Remember we said earlier that we will be implementing a Node and File Share Majority quorum. We will change the quorum type from the current Node Majority Cluster (not a good idea in a two node cluster) to a Node and File Share Majority quorum.

Implementing a Node and File Share Majority quorum

First, we need to identify the server that will hold our File Share witness. Remember, as we discussed earlier, this File Share witness should be located in a 3rd location, accessible by both nodes of the cluster. Once you have identified the server, share a folder as you normally would share a folder. In my case, I create a share called MYCLUSTER on a server named DEMODC.

The key thing to remember about this share is that you must give the cluster computer name read/write permissions to the share at both the Share level and NTFS level permissions. If you recall back at Figure 13, I created my cluster and gave it the name “MYCLUSTER”. You will need to make sure you give the cluster computer account read/write permissions as shown in the following screen shots.

Figure 17 – Make sure you search for Computers

Figure 18 – Give the cluster computer account NTFS permissions

Figure 19 – Give the cluster computer account share level permissions

Now with the shared folder in place and the appropriate permissions assigned, you are ready to change your quorum type. From Failover Cluster Manager, right-click on your cluster, choose More Actions and Configure Cluster Quorum Settings.

Figure 20 – Change your quorum type

On the next screen choose Node and File Share Majority and click Next.

Figure 21 – Choose Node and File Share Majority

In this screen, enter the path to the file share you previously created and click Next.

Figure 22 – Choose your file share witness

Confirm that the information is correct and click Next.

Figure 23 – Click Next to confirm your quorum change to Node and File Share Majority

Assuming you did everything right, you should see the following Summary page.

Figure 24 – A successful quorum change

Now when you view your cluster, the Quorum Configuration should say “Node and File Share Majority” as shown below.

Figure 25 – You now have a Node and File Share Majority quorum
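
For completeness, the quorum change can also be made and verified from PowerShell; a sketch using the share created earlier on DEMODC:

Set-ClusterQuorum -NodeAndFileShareMajority \\DEMODC\MYCLUSTER
Get-ClusterQuorum    # should now report a Node and File Share Majority configuration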

The steps I have outlined up until this point apply to any multi-site cluster, whether it is a SQL, Exchange, File Server or other type of failover cluster. The next step in creating a multi-site cluster involves integrating your storage and replication solution into the failover cluster. This step will vary depending upon your replication solution, so you really need to be in close contact with your replication vendor to get it right.

Other parts of this series will describe in detail how to install SQL, File Servers and Hyper-V in multi-site clusters. I will also have a post on considerations for multi-node clusters of three or more nodes.

05/10/2012 Posted by | Cluster Configuration, Clustering | , , | Leave a comment

Windows Server 2008 and 2008 R2 Failover Cluster Startup Switches

I am here today to discuss the troubleshooting switches used to start a Windows 2008 and 2008 R2 Failover Cluster. From time to time, the Failover Cluster Service will not start on its own. You need to start it with a diagnostic switch for troubleshooting purposes and/or to get it back to production.

In Windows 2003 Server Cluster, we had several startup switches available for the cluster service.


More detailed information on those switches can be found in KB258078. However, the switches have changed for Windows 2008 and 2008 R2 Failover Clusters. The only switch available for a Windows Server 2008 Failover Cluster is the FORCEQUORUM (or FQ for short) switch. Its behavior differs from the FORCEQUORUM switch that was used previously in Windows Server 2003.

So for our example, let's say we have a 2-node Failover Cluster that is set for Node and Disk Majority. That means we have a total of three votes. To achieve "quorum", the cluster needs a majority of votes (two) to fully bring all resources online and make them available to users.

In Windows 2008 Failover Cluster, when you tell the Cluster Service to start, it just immediately starts. The next thing it does is send out notifications to all the nodes that it wants to join a Cluster. It is also going to calculate the number of votes needed to achieve “quorum”. As long as there is another node running or it can bring the Witness Disk online, it will join and merrily go on its way. If there is not another node up and it cannot bring the Witness Disk online, the Cluster Service will start; however, it will be in a “joining” type mode. This means it will be sitting idle waiting for another node to join and achieve “quorum”. If this is the case, you would see something like this:


As discussed, we need at least two votes to achieve "quorum". We currently have one node up, so we have one vote. The other node is down and the Witness Disk is unavailable, which accounts for the other two votes. But you can see that the Cluster Service itself is started. The reason it stays started is that it is sitting there just listening for another node to join and give it a majority. Once one does, the Cluster resources will be made available for everyone to use. If you were to run the command to get the state of the nodes, you would see the node listed while the cluster waits:

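As a hedged sketch, the node state can be checked from the command line while the service sits waiting for quorum:

cluster.exe node     # lists the nodes and their current state (2008 and 2008 R2)
Get-ClusterNode      # 2008 R2 only, after Import-Module FailoverClusters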

This is where the FORCEQUORUM switch comes into play. When using this, it will force the Cluster to become available even though there is no “quorum”. There are multiple ways of forcing the Cluster Service to start. However, please keep in mind that there are some implications when running this. The implications are explained in this article.

1.  Go into Service Control Manager and start the Cluster Service with /FORCEQUORUM (or /FQ)
2.  Go to an Administrative Command Prompt and use:
          a.  net start clussvc /forcequorum
          b.  net start clussvc /fq
3.  In Failover Cluster Management, highlight the name of the Cluster in the left pane, and on the far right pane, in the Actions column, select the FORCE CLUSTER START option.


This switch differs from its Windows 2003 counterpart. When you use it on a Windows 2003 Server Cluster, you must also specify all the other nodes that will be joining while in this state. If I were to just use the commands above and not specify the additional nodes, the other nodes would not be allowed to join the Cluster. I would basically need to fix the problem of the other nodes not being up, then stop the Cluster Service and start it again without the switch. This causes downtime, and no one wants that. In a Windows 2008 Failover Cluster, the switch remains in effect until "quorum" is achieved. All you need to do is start the Cluster Service on the other node and it will join. Once "quorum" is achieved, the mode of the Cluster changes dynamically.

In Windows Server 2008 R2 Failover Cluster, there is the same FORCEQUORUM (or FQ) switch as well as a new switch.

This new switch is /IPS, or /IgnorePersistentState. It is a little different in what it does: it starts the Cluster Service and lets the cluster form, but all groups and resources are left in an offline state.

Under normal circumstances, when the Cluster Service starts, the default behavior is to bring all the resources online. What this switch does is ignore the current PersistentState value of the resources and leave everything offline. When you go into Failover Cluster Management and look at the groups, you will see all resources offline.


I do need to bring up a couple of important notes about this switch.

1. The Cluster Group will still be brought online. This switch will only affect the Services and Applications groups that you have in the Cluster.

2. You must still be able to achieve "quorum." In the case of a Node and Disk Majority, the Witness Disk must still be able to come online.

This switch is not one that would be used that often, but when you need it, it is a blessing. Here are a couple of scenarios where the /IPS switch would come in handy.

SCENARIO 1

I have a Failover Cluster that holds the limit of 1,000 Hyper-V virtual machines. If you are trying to troubleshoot an issue, you can use the switch and then manually bring only a couple of them online. Do whatever troubleshooting you need to accomplish without the stress that all those machines coming online would put on the node. Once your troubleshooting is complete, you can start the other nodes, bring the other virtual machines online, and go about your business.

SCENARIO 2

I am the administrator of the Failover Cluster and get a call that the Cluster node holding John's Cluster Application resource is in a pseudo-hung state. Both Explorer and Failover Cluster Management hang, while the rest of the machine is really slow. If I try to move this group over to another node, that node experiences the same problems and errors. So I reboot them, and when the Cluster Service starts, the machine goes into this pseudo-hung state again. Looking through the event logs, I see that the Cluster Service starts fine. But I do see that John's Cluster Application is throwing errors in the event log, and those were the last things listed. I do some research on the errors and see that they are caused by a corrupt log file that this application uses. All I have to do is delete this file and the application will dynamically recreate it, start fine, and no longer hang the machine. That seems simple enough. But wait: I do not have access to the Clustered Drive that this application is on, as Explorer hangs and I also cannot get to it from a command prompt.

In the days before Windows 2008 R2 Failover Cluster, I would have to:

  • Power off all other nodes.
  • Set the Cluster Service to MANUAL or DISABLED
  • Disable the Cluster Disk Driver
  • Reboot this machine
  • Delete the file
  • Re-enable the Cluster Disk Driver
  • Set the Cluster Service to AUTOMATIC and start it
  • Power up all other nodes

The above was the only way I was going to be able to get access to the drives. Something like this can be painful and time consuming. If the nodes take about 15 minutes to boot because of the devices and the memory, it just adds to the frustrations.

This is where the /IPS switch comes in. Your steps would now be (a PowerShell sketch for the disk and group steps follows the list):

  • Stop the Cluster Service on all other nodes
  • Reboot this one node since it is hung
  • While that node is rebooting, on the other node, start the Cluster Service with the IPS Switch:

Net start clussvc /ips

  • Go to the group that has the disk
  • Bring the disk online
  • Delete the file
  • Bring the rest of the group online
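
On Windows Server 2008 R2 the disk and group steps can also be done from PowerShell once the service is up with /IPS. This is only a sketch; the resource name, file path and group name below are placeholders for John's application:

Import-Module FailoverClusters
Start-ClusterResource "Cluster Disk 2"              # bring just the disk online (resource name is a placeholder)
Remove-Item "X:\JohnsApp\Corrupt.log"               # delete the offending log file (path is hypothetical)
Start-ClusterGroup "John's Cluster Application"     # bring the rest of the group online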

For those who like to see stuff on MSDN, you can get a little more information on the /IPS switch here.

So as a recap, these are the only switches available for Windows Server 2008 and 2008 R2 Failover Clusters.

Windows Server 2008: /ForceQuorum (/FQ)
Windows Server 2008 R2: /ForceQuorum (/FQ) and /IgnorePersistentState (/IPS)

The switches can make things easier and less frustrating, and cause less downtime. That keeps lost production time and dollars to a minimum, and that makes everyone happy.

05/10/2012 Posted by | Cluster Configuration, Clustering, Windows Server | | Leave a comment

Configuring iSCSI MPIO on Windows Server 2008 R2

I have recently gone through the process of wiping out one of my lab environments and rebuilding it from scratch on Windows Server 2008 R2 Enterprise.  During this process, I recorded the steps I used to configure MPIO with the iSCSI initiator in R2.  Just to make life more complex, my servers only have 2 NICs, so I am balancing the host traffic, virtual machine traffic, and MPIO across those two NIC devices.  Is this supported?  I seriously doubt it.  🙂  In the real world you would separate out iSCSI traffic on dedicated NICs, cables, and separate switch paths.  The following step-by-step process should be relatively the same though.

Foundation

The workflow I am following assumes that when starting out one NIC is configured for host traffic and the other for a VM network.  On the WSS the secondary NIC was already configured not to register in DNS.  Also, since I am using WSS and the built-in iSCSI Target I don’t have to configure a DSM for the storage device.  If your configuration is different than that, you may have to ignore or add to a few parts of the below instructions.  Sorry about that.  I can only document what I have available for testing…

First I just want to show a screenshot of the iSCSI target on our Windows Storage Server, to indicate that it does have two IPs.  Once again, I am cheating the system here.  These are not dedicated TOE adapters for iSCSI on a separate network.  This is a poor man’s environment with 1 VLAN and minimal network hardware.  My highly available environment is anything but!  To view this information on your own WSS, right-click on the words “Microsoft iSCSI Software Target” and click Properties.


Enable the MPIO Feature on the initiating servers

Next I needed to enable MPIO on the servers making the iSCSI connections.  MPIO is a Feature in Server 2008 R2 listed as Multipath I/O.  Adding the Feature did not require a reboot on any of my servers.


Configuring MPIO to work with iSCSI was simple.  Click Start and type “MPIO”, launch the control panel applet, and you should see the window below.  Click on the Discover Multi-Paths tab, check the box for “Add support for iSCSI devices”, and click Add.  You should immediately be prompted to reboot.  This was consistent across 4 servers where I followed this process.
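
Both steps, adding the feature and claiming iSCSI devices for MPIO, can be scripted as well. A hedged sketch for Server 2008 R2 (the device string is the standard Microsoft iSCSI bus identifier, but double-check it on your build):

Import-Module ServerManager
Add-WindowsFeature Multipath-IO                 # same as adding the Multipath I/O feature in the GUI
mpclaim -r -i -d "MSFT2005iSCSIBusType_0x9"     # add MPIO support for iSCSI devices; -r reboots immediately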



After rebooting, if you open the MPIO Control Panel applet again, you should see the iSCSI bus listed as a device.  Note on my servers, the Discover Multi-Paths page becomes grayed out.


Check the IP of the existing connection path

Now click Start and type “iSCSI”.  Launch the iSCSI Initiator applet.  Add your iSNS server or Target portal.  There is plenty of documentation on how to do this on TechNet if you need assistance. I want to stay focused on the MPIO configuration.

Once you are connected to the target, click the button labeled “Devices…”.  You should see each of the volumes you have connected listed in the top pane.  Select a Disk and click the MPIO button.  In the Device Details pane you should see information on the current path and session.  If you click the Details button, you can verify the local and remote IPs the current connection is using.  It should be the IPs that resolve from the hostnames of each server.  See my remedial diagram below.

I recommend taking note of this IP, to make life easier later on!


So everything is set up for MPIO, but you are only using a single path, and that's not really going to accomplish much, now is it?  Since I only have 2 NICs in my test server, I need my host to share the second NIC with the VM network.  This is not ideal, but again I am using what I have and this is only a test box.

Setting a second IP on my hosts

In R2 the host does not communicate by default on a NIC where a virtual network is assigned.  To change this, open the Hyper-V console and click “Virtual Network Manager…”.  Check the box “Allow management operating system to share this network adapter”.


This will create a third device in the network console (to get there click Start, type "ncpa.cpl", and launch the applet).  You should see that the name of the new device matches your Virtual Network name.  In my case Local Area Connection 4 has a device name "External1".  Right-click on the connection and then click Properties.  Select "Internet Protocol Version 4 (TCP/IPv4)" and click the Properties button.  Configure your address and subnet but not the gateway, as it should already be assigned on the first adapter.  You also shouldn't need to set the DNS addresses in the new adapter.  You will, however, want to click the "Advanced…" button followed by the DNS tab and uncheck the box next to "Register this connection's address in DNS".  This really should be the job of your primary adapter; there is no need to have multiple addresses for the same hostname registering and causing confusion unless you have a unique demand for it.
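
The same second-IP settings can also be applied from the command line with netsh. A sketch only, using this lab's connection name and an assumed address; verify the exact dnsservers syntax with netsh interface ipv4 set dnsservers /? before relying on it:

netsh interface ipv4 add address name="Local Area Connection 4" address=192.168.37.12 mask=255.255.255.0
netsh interface ipv4 set dnsservers name="Local Area Connection 4" source=static address=none register=none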

image

Add a second path

Back in the iSCSI Initiator Applet, click the Connect button.  I know you already have a connection.  In this step we are adding an additional connection to the Target to provide a second path.

In the subsequent dialog, make sure you check the box next to “Enable multi-path” and then click the Advanced… button. In the Advanced Settings dialog you will need to choose the IP for your second path. In the drop-down menu next to “Local adapter:” select “Microsoft iSCSI Initiator”. In the drop-down next to “Initiator IP:” select the IP on your local server that you would like the Initiator to use when making the connection for the secondary path. In the third drop-down, next to “Target portal IP:”, select the IP of the iSCSI Target server you would like to connect to. This should be the other IP, not the one used by the session we observed a few steps back when I mentioned you should take note of the IP.

image

Check your work

Just one more step. Let’s verify that you now have 2 connections available for each disk, that they are using separate paths, and that you can choose from the available load-balancing policies. Once you have clicked OK out of each of the open dialogs from the step above, click the Devices… button again and check out the top pane. On each of my servers I see each disk listed twice, once for Target 0 and once for Target 1, as seen below. If you follow my remedial diagrams one more time and select a disk, then the MPIO button, you should now see two paths. Select the first path and click the Details button. It should be using the local and remote IPs we took note of earlier. Click OK. Now select the second path and then the Details… button. You should see it using the other adapter’s IP on BOTH the local and remote hosts.
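
You can also confirm the result from a command prompt with mpclaim. This is a hedged sketch: the disk number in the second command is an example and should be one of the MPIO disk numbers reported by the first command.

  # List the disks MPIO has claimed and their load balance policies
  mpclaim -s -d
  # Show the individual paths (and their states) for MPIO Disk 0 - substitute your disk number
  mpclaim -s -d 0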

image

05/10/2012 Posted by | Clustering, iSCSI, MPIO, Windows Server

Configuring the Microsoft iSCSI Software Target

Introduction

This post describes how to configure the Microsoft iSCSI Software Target offered with Windows Storage Server.

One of the goals here is to explain the terminology used, such as iSCSI Target, iSCSI Initiator and iSCSI Virtual Disk. It also includes the steps to configure the iSCSI Software Target and the iSCSI Initiator.

Initial State

We’ll start with a simple scenario with three servers: one Storage Server and two Application Servers.

iSCSI-01

In my example, the Storage Server runs WSS 2008 and the two Application Servers run Windows Server 2008.

The Application Servers could be running any edition of Windows Server 2003 (using the downloadable iSCSI Initiator) or Windows Server 2008 / Windows Server 2008 R2 (which come with an iSCSI Initiator built-in).

The iSCSI Initiator configuration applet can be found in the Application Server’s Control Panel. In the “General” tab of that applet you will find the iQN (iSCSI Qualified Name) for the iSCSI Initiator, which you may need later while configuring the Storage Server.

The Microsoft iSCSI Software Target Management Console can be found on the Administration Tools menu in the Storage Server.

Add iSCSI Targets

The first thing to do is add two iSCSI Targets to the Storage Server. To do this, right-click the iSCSI Targets node in the Microsoft iSCSI Software Target MMC and select the “Create iSCSI Target” option. You will then specify a name, an optional description and the identifier for the iSCSI Initiator associated with that iSCSI Target.

There are four methods to identify the iSCSI Initiators: iQN (iSCSI Qualified Name), DNS name, IP address and MAC address. However, you only need to use one of the methods. The default is the iQN (which can be obtained from the iSCSI Initiator’s control panel applet). If you don’t have access to the iSCSI Initiator to check the iQN, you can use its DNS name. If you’re using the Microsoft iSCSI Initiator on your application server, that iQN is actually constructed with a prefix (“iqn.1991-05.com.microsoft:”) combined with the DNS name of the computer.

For instance, if the Application Server runs the Microsoft iSCSI Initiator, is named “s2” and is a member of the “contoso.com” domain, its iQN would be “iqn.1991-05.com.microsoft:s2.contoso.com” and its DNS name would be “s2.contoso.com”. You could also use its IP address (something like “10.1.1.120”) or its MAC address (which would look like “12-34-56-78-90-12”).

Typically, you assign just one iSCSI Initiator to each iSCSI Target. If you assign multiple iSCSI Initiators to the same iSCSI Target, there is a potential for conflict between Application Servers. However, there are cases where this makes sense, like when you are using clusters.

In my example, we created two iSCSI Targets named T1 (assigned to the iSCSI Initiator in S2) and T2  (assigned to the iSCSI Initiator in S3). It did not fit in the diagram, but assume we used the complete DNS names of the Application Servers to identify their iSCSI Initiators.

iSCSI-02

Add Virtual Disks

Next, you need to create the Virtual Disks on the Storage Server. This is the equivalent of creating a LUN in a regular SAN device. The Microsoft iSCSI Software Target stores those Virtual Disks as files with the VHD extension on the Storage Server.

This is very similar to the Virtual Disks in Virtual PC and Virtual Server. However, you can only use the fixed size format for the VHDs (not the dynamically expanding or differencing formats). You can extend those fixed-size VHDs later if needed.

Right-click the “Devices” node in the Microsoft iSCSI Software Target MMC and select the “Create Virtual Disk” option. For each Virtual Disk you will specify a filename (complete with drive, folder and extension), a size (between 8MB and 16TB) and an optional description. You can also assign the iSCSI Targets at this point, but we’ll skip that and do it as a separate step.

In my example, I have created three virtual disks: D:\VHD1.vhd, E:\VHD2.vhd and E:\VHD3.vhd.

iSCSI-03

You can create multiple VHD files on the same disk. However, keep in mind that there are performance implications in doing so, since these VHDs will be sharing the same spindles (not unlike any scenario where two applications store data in the same physical disks).

The VHD files created by the Microsoft iSCSI Software Target cannot be used by Virtual PC or Virtual Server, since the format was adapted to support larger sizes (up to 16 TB instead of the usual 2 TB limit in Virtual PC and Virtual Server).

Assign Virtual Disks to iSCSI Targets

Once you have created the iSCSI Targets and the Virtual Disks, it’s time to associate each Virtual Disk with its iSCSI Target. Since the iSCSI Initiators were already assigned to the iSCSI Targets, this is the equivalent of unmasking a LUN in a regular SAN device.

Right-click the “Devices” node in the Microsoft iSCSI Software Target MMC and select the “Assign/Remove Target” option. This will take you directly to the “Target Access” tab in the properties of the virtual disk. Click the “Add” button to pick a target. You will typically assign a virtual disk to only one iSCSI Target. As with multiple iSCSI Initiators per iSCSI Target, if you assign the same disk to multiple iSCSI Targets, there is a potential for conflict if two Application Servers try to access the virtual disk at the same time.

You can assign multiple disks to a single iSCSI Target. This is very common when you are exposing several disks to the same Application Server. However, you can also expose multiple virtual disks to the same Application Server using multiple iSCSI Targets, with a single virtual disk per iSCSI Target. This will improve performance if your server runs a very demanding application in terms of storage, since each target will have its own request queue. Having too many iSCSI Targets will also tax the system, so you need to strike a balance if you have dozens of Virtual Disks, each associated with very demanding Application Servers.

In my example, I have assigned VHD1 and VHD2 to T1, then assigned VHD3 to T2.

iSCSI-04

Add Target Portal

Now that we finished the configuration on the Storage Server side, let’s focus on the Application Servers.

Using the iSCSI Initiator control panel applet, click on the “Discovery” tab and add your Storage Server DNS name or IP address to the list of Target Portals. Keep the default port (3260).

Next, select the “Targets” tab and click on the “Refresh” button. You should see the iQNs of iSCSI Targets that were assigned to this specific iSCSI Initiator.

In my example, the iSCSI Initiators in Application Server S2 and S3 were configured to use Storage Server S1 as target portal.
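
If you prefer the command line on the Application Servers, the built-in iscsicli tool can perform the same discovery. A minimal sketch, assuming the Storage Server answers on the default port 3260; the address below is just a placeholder for S1, so use your Storage Server’s IP or DNS name.

  # Add the Storage Server as a target portal (placeholder address for S1)
  iscsicli AddTargetPortal 10.1.1.110 3260
  # List the targets this initiator is allowed to see
  iscsicli ListTargets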

iSCSI-05

The iQN of the iSCSI Target (which you will see in the iSCSI Initiator) is constructed by the Storage Server using a prefix (“iqn.1991-05.com.microsoft:”) combined with the Storage Server computer name, the name of the iSCSI Target and a suffix (“-target”). In our example, when checking the list of Targets on the iSCSI Initiator in S3, we found “iqn.1991-05.com.microsoft:s1-t2-target”.

Logon to iSCSI Targets

Now you need to select the iSCSI Target and click on the “Log on” button to connect to the target, making sure to select the “Automatically restore this connection when the system boots” option.

Once the iSCSI Initiators have successfully logged on to the targets, the virtual disks will be exposed to the Application Servers.

In our example, S2’s iSCSI Initiator was configured to logon to the T1 target and S3’s iSCSI Initiator was configured to logon to the T2 target.
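
The logon can also be done with iscsicli. A minimal sketch using the example target iQN as seen from S3; note that QLoginTarget creates a non-persistent session, so for the “restore this connection when the system boots” behavior you would either use the GUI check box described above or the iscsicli PersistentLoginTarget command.

  # Log on to the target (example iQN for T2 as seen from S3)
  iscsicli QLoginTarget iqn.1991-05.com.microsoft:s1-t2-target
  # Verify the session and list any persistent logons that have been configured
  iscsicli SessionList
  iscsicli ListPersistentTargets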

iSCSI-06

Format, Mount and Bind Volumes

At this point, the virtual disks look just like locally attached disks, showing up in the Disk Management MMC as uninitialized disks. Now you need to format and mount the volumes.

To finish the configuration, open the Computer Management MMC (Start, Administrative Tools, Computer Management or right-click Computer and click Manage). Expand the “Storage” node on the MMC tree to find the “Disk Management” option. When you click on the Disk Management option, you should immediately see the “Initialize and Convert Disk Wizard”. Follow the wizard to initialize the disk, making sure to keep it as a basic disk (as opposed to dynamic).

You should then use the Disk management tool to create a partition, format it and mount it (as a drive letter or a path), as you would for any local disk. For larger volumes, you should convert the disk to a GPT disk (right click the disk, select “Convert to GPT Disk”). Do not convert to GPT if you intend to boot from that disk.

After the partition is created and the volumes are formatted and mounted, you can go to the “Bound Volumes/Devices” tab in the iSCSI Initiator applet, make sure all volumes mounted are listed there and then use the “Bind All” option. This will ensure that the volumes will be available to services and applications as they are started by Windows.

In my example, I have created a single partition for each disk, formatted them as NTFS and mounted each one in an available drive letter. In Application Server S2, we ended up with disks F: (for VHD1) and G: (for VHD2). On S3, we used F: (for VHD3).
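
For reference, the partition, format, mount and bind steps can also be scripted. The disk number, label and drive letter below are examples matching VHD1 on S2, so double-check the number with “list disk” before running anything; as far as I can tell, iscsicli BindPersistentVolumes is the command-line counterpart of the Bind All button.

  # diskpart script (saved as C:\prep-vhd1.txt in this example) - example disk number and letter
  select disk 2
  online disk noerr
  attributes disk clear readonly noerr
  convert mbr noerr
  create partition primary
  format fs=ntfs label="VHD1" quick
  assign letter=F

  # Run the script, then bind the volumes so they are available to services at startup
  diskpart /s C:\prep-vhd1.txt
  iscsicli BindPersistentVolumes
  iscsicli ReportPersistentDevices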

iSCSI-07

Create Snapshot

Next, we’ll create a snapshot of a volume. This is basically a point-in-time copy of the data, which can be used as a backup or an archive. You can restore the disk to any previous snapshot in case your data is damaged in any way. You can also look at the data as it was at that time without restoring it. If you have enough disk space, you can keep many snapshots of your virtual disks, going back days, months or years.

To create a snapshot in the Storage Server, right-click the Devices node in the Microsoft iSCSI Software Target MMC and select the “Create Snapshot” option.  No additional information is required and a snapshot will be created.

You can also schedule the automatic creation of snapshots. For example, you could do it once a day at 1AM. This is done using the “Schedules” option under the “Snapshots” node in the Microsoft iSCSI Software Target MMC.

In my example, I have created a snapshot of the VHD3 virtual disk at 1AM.

iSCSI-08

Microsoft also offers a VSS Provider for the Microsoft iSCSI Software Target, which you can use on the Application Server to create a VSS-based snapshot.

Export Snapshot to iSCSI Target

Snapshots are usually not exposed to targets at all. You can use them to “go back in time” by rolling back to a previous snapshot, which requires no reconfiguration of the iSCSI Initiators. In some situations, however, it might be useful to expose a snapshot so you can check what’s in it before you roll back.

You might also just grab one or two files from the exported snapshot and never really roll back the entire virtual disk. Keep in mind that snapshots are read-only.

To make a snapshot visible to an Application Server, right-click the snapshot in the Microsoft iSCSI Software Target MMC and select the “Export Snapshot” option. You will only need to pick the target you want to use.

Unlike regular virtual disks, you can choose to export snapshots to multiple iSCSI Targets or to an iSCSI Target with multiple iSCSI Initiators assigned. This is because you cannot write to them and therefore there is no potential for conflicts.

In our example, we exported the VHD3 at 1AM snapshot to target T2, which caused it to show up on Application Server S3.

iSCSI-09

Mount Snapshot Volume

The last step to expose the snapshot is to mount it as a path or drive on your Application Server. Note that you do not need to initialize the disk, create a partition or format the volume, since these things were already performed with the original virtual disk. You would not be able to perform any of those operations on a snapshot anyway, since you cannot write to it.

Again, open the Computer Management MMC, expand the “Storage” node and find the “Disk Management” option. If you already have it open, simply refresh the view to find the additional disk. Then use the properties of the volume to mount it.

In my example, I have mounted the snapshot of VHD3 at 1AM as the G: drive on Application Server S3.

iSCSI-10

Now you might be able to find a file you deleted on that F: drive after 1AM by looking at drive G:. You can then decide to copy files from the G: drive to F: drive at the Application Server side. You can also decide to roll back to that snapshot on the Storage Server side, keeping in mind that you will lose any changes to F: after 1AM.

Advanced Scenario

Now that you have the basics, you can start designing more advanced scenarios. As an example, see the diagram showing two Storage Servers and two Application Servers.

iSCSI-11

There are a few interesting points about that diagram that are worth mentioning. First, the iSCSI Initiators in the Application Servers (S3 and S4) point to two Target Portals (S1 and S2).

Second, you can see that VHD1 and VHD2 are exposed to Application Server S3 using two separate iSCSI Targets (T1 and T2). A single iSCSI Target could be used, but this was done to improve performance.

You can also see that the snapshot of VHD5 at 3AM is being exported simultaneously to Application Servers S3 and S4. This is fine, since snapshots are write-protected.

Clustering Example

This last scenario shows how to configure the Microsoft iSCSI Software Target for a cluster environment. The main difference here is the fact that we are assigning the same iSCSI Target to multiple iSCSI Initiators at the same time. This is usually not a good idea for regular environments, but it is common for a cluster.

iSCSI-12

This example shows an active-active cluster, where node 1 (running on Application Server S2) has the Quorum disk and the Data1 disk, while node 2 (running on Application Server S3) has the Data2 disk. When running in a cluster environment, the servers know how to keep the disks they’re not using offline, bringing them online on just one node at a time, as required.

In case of a failure of node 1, node 2 will first verify that it should take over the services and then it will mount the disk resources and start providing the services that used to run on the failed node. Also note that we avoid conflicting drive letters on cluster nodes, since that could create a problem when you move resources between them. As you can see, the nodes need a lot of coordination to access the shared storage and that’s one of the key abilities of the cluster software.

Again, we could have used a single iSCSI Target for all virtual disks, but two were used because of performance requirements of the application associated with the Data2 virtual disk.

Conclusion

I hope this explanation helped you understand some of the details on how to configure the Microsoft iSCSI Software Target included in Windows Storage Server.

Links and References

For general information on iSCSI on the Windows platform, including a link to download the iSCSI Initiator for Windows Server 2003, check http://www.microsoft.com/iscsi

For step-by-step instructions on how to configure the Microsoft iSCSI Software Target, with screenshots, check this post: http://blogs.technet.com/josebda/archive/2009/02/02/step-by-step-using-the-microsoft-iscsi-software-target-with-hyper-v-standalone-full-vhd.aspx

For details on how VSS works, check this post: http://blogs.technet.com/josebda/archive/2007/10/10/the-basics-of-the-volume-shadow-copy-service-vss.aspx

For details on how iSCSI names are constructed using the iQN format, check IETF’s RFC 3721 at http://www.ietf.org/rfc/rfc3721

05/10/2012 Posted by | Cluster Configuration, Windows Server

Active Directory Windows 2008 and 2008 R2 Documentation

Here are some documents that may help you with some specific Active Directory tasks
(I’ll try to keep this list updated).

Global:
Changes in Functionality from Windows Server 2003 with SP1 to Windows Server 2008
Changes in Functionality in Windows Server 2008 R2 and from TechNet
Active Directory Design Guide
AD DS Deployment Guide
AD DS Installation and Removal Step-by-Step Guide
Active Directory Domain Services – Technet & Operations Guide.doc
Running Domain Controllers in Hyper-V

Licensing
Windows Server 2008 R2 Licensing Overview
Licensing Microsoft Server Products in Virtual Environments white paper

DNS:
DNS Step-by-Step Guide
DNSSEC Deployment Guide

Migration/Upgrade:
Upgrading Active Directory Domains to Windows Server 2008 and Windows Server 2008 R2 AD DS Domains & from TechNet
Migrate Server Roles to Windows Server 2008 R2
ADMT Guide: Migrating and Restructuring Active Directory Domains
Windows Server 2008 R2 Migration Utilities x64 Edition

Read Only Domain Controllers:
Read-Only Domain Controllers (RODC) Branch Office Guide and from TechNet
Read-Only Domain Controllers (RODC) in the Perimeter Network
Read-only Domain Controllers Step-by-Step Guide from TechNet
Read-only Domain Controllers Known Issues for Deploying RODCs
Read-Only Domain Controllers Planning and Deployment Guide

Firewall:
Active Directory and Active Directory Domain Services Port Requirements
Service overview and network port requirements for the Windows Server system
How to configure a firewall for domains and trusts
How to restrict FRS replication traffic to a specific static port
Restricting Active Directory replication traffic and client RPC traffic to a specific port
Active Directory Domain Services in the Perimeter Network (Windows Server 2008)
Active Directory in Networks Segmented by Firewalls (Windows Server 2003)

Security/Delegation:
How to Delegate Basic Server Administration To Junior Administrators
Best Practice Guide for Securing Active Directory Installations.doc
Active Directory Domain Services in the Perimeter Network (Windows Server 2008)
Windows 2000 Security Event Descriptions (Part 1 of 2)
Windows 2000 Security Event Descriptions (Part 2 of 2)
Description of security events in Windows Vista and in Windows Server 2008
Description of security events in Windows 7 and in Windows Server 2008 R2
How to use Group Policy to configure detailed security auditing settings KB 921469
Security Auditing Windows Server 2008, Windows Server 2008 R2 TechNet

Backup/Recover:
Recovering Your Active Directory Forest
How to restore deleted user accounts and their group memberships in Active Directory KB840001
How to restore deleted user accounts and their group memberships in Active Directory
Best practices around Active Directory Authoritative Restores in Windows Server 2003 and 2008
The importance of following ALL the authoritative restore steps

05/03/2012 Posted by | Active Directory, Windows Server

BIZTALK TIPS: How to Cluster Message Queuing / How to Cluster MSDTC

Cluster support is provided for the BizTalk Server MSMQ adapter by running the MSMQ adapter handlers in a clustered instance of a BizTalk Host. If the BizTalk Server MSMQ adapter handlers are run in a clustered instance of a BizTalk Host, a clustered Message Queuing (MSMQ) resource should also be configured to run in the same cluster group as the clustered BizTalk Host when using the Send adapter or the Receive adapter for BizTalk Server 2006 R2 and earlier. This should be done for the following reasons:

  • MSMQ adapter receive handler – The MSMQ adapter receive handler for BizTalk Server 2006 R2 and earlier does not support remote transactional reads; only local transactional reads are supported. The MSMQ adapter receive handler on BizTalk Server 2006 R2 and earlier must run in a host instance that is local to the clustered MSMQ service in order to complete local transactional reads with the MSMQ adapter.
  • MSMQ adapter send handler – To ensure the consistency of transactional sends made by the MSMQ adapter, the outgoing queue used by the MSMQ adapter send handler should be highly available, so that if the MSMQ service for the outgoing queue fails, it can be resumed. Configuring a clustered MSMQ resource and the MSMQ adapter send handlers in the same cluster group will ensure that the outgoing queue used by the MSMQ adapter send handler will be highly available. This will mitigate the possibility of message loss in the event that the MSMQ service fails.

Many BizTalk Server operations are performed within the scope of a Microsoft Distributed Transaction Coordinator (MSDTC) transaction.

A clustered MSDTC resource must be available on the Windows Server cluster to provide transaction services for any clustered BizTalk Server components or dependencies. BizTalk Server components or dependencies that can be configured as Windows Server cluster resources include the following:

  • BizTalk Host
  • Enterprise Single Sign-On (SSO) service
  • SQL Server instance
  • Message Queuing (MSMQ) service
  • Windows File system

Windows Server 2003 only supports running MSDTC on cluster nodes as a clustered resource.

Windows Server 2008 supports running a local DTC on any server node in the failover cluster, even if a default clustered DTC resource is configured.

To create a clustered Message Queuing resource (Windows Server 2008)

  1. To start the Failover Cluster Management program, click Start, Programs, Administrative Tools, and then click Failover Cluster Management.
  2. In the left pane, right-click Failover Cluster Management, and then click Manage a Cluster.
  3. In the Select a cluster to manage dialog box, enter the cluster to be managed, and then click OK.
  4. To start the High Availability Wizard, in the left pane, click to expand the cluster, right-click Services and Applications, and then click Configure a Service or Application.
  5. If the Before You Begin page of the High Availability Wizard is displayed, click Next.
  6. On the Select Service or Application page, click Message Queuing, and then click Next.
  7. On the Client Access Point page, enter a value for Name, enter an available IP address under Address, and then click Next.
  8. On the Select Storage page, click a disk resource, and then click Next.
  9. On the Confirmation page, click Next.
  10. On the Summary page, click Finish.
  11. To create a clustered MSDTC resource on the cluster so that there is transaction support for the clustered MSMQ resource, follow the steps in the next section.

 

To configure the Distributed Transaction Coordinator (DTC) for high availability (Windows Server 2008)


  1. To start the Failover Cluster Management program, click Start, Programs, Administrative Tools, and then click Failover Cluster Management.
  2. In the left hand pane, right-click Failover Cluster Management, and then click Manage a Cluster.
  3. In the Select a cluster to manage dialog box, enter the cluster to be managed, and then click OK.
  4. To start the High Availability Wizard, in the left pane click to expand the cluster, right-click Services and Applications, and then click Configure a Service or Application.
  5. If the Before You Begin page of the High Availability Wizard is displayed, click Next.
  6. On the Select Service or Application page, click Distributed Transaction Coordinator, and then click Next.
  7. On the Client Access Point page, enter a value for Name, enter an available IP address under Address, and then click Next.
  8. On the Select Storage page, click to select a disk resource and then click Next.
  9. On the Confirmation page, click Next.
  10. On the Summary page, click Finish.
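
If you are on Windows Server 2008 R2, the FailoverClusters PowerShell module offers a scripted alternative to the wizard for adding the MSMQ and DTC resources to groups that already have a client access point and disk. This is only a sketch: the group, network name and disk names below are hypothetical, and you should confirm the exact resource type names on your cluster with Get-ClusterResourceType.

  # Run from an elevated PowerShell prompt on a cluster node (Windows Server 2008 R2)
  Import-Module FailoverClusters
  Get-ClusterResourceType | Select-Object Name   # confirm the MSMQ and DTC resource type names

  # Message Queuing resource in an existing group (hypothetical names)
  Add-ClusterResource -Name "Clustered MSMQ" -ResourceType "MSMQ" -Group "Clustered MSMQ Group"
  Add-ClusterResourceDependency -Resource "Clustered MSMQ" -Provider "MSMQ Network Name"
  Add-ClusterResourceDependency -Resource "Clustered MSMQ" -Provider "Cluster Disk 2"
  Start-ClusterResource "Clustered MSMQ"

  # Distributed Transaction Coordinator resource in its own group (hypothetical names)
  Add-ClusterResource -Name "Clustered MSDTC" -ResourceType "Distributed Transaction Coordinator" -Group "Clustered DTC Group"
  Add-ClusterResourceDependency -Resource "Clustered MSDTC" -Provider "DTC Network Name"
  Add-ClusterResourceDependency -Resource "Clustered MSDTC" -Provider "Cluster Disk 3"
  Start-ClusterResource "Clustered MSDTC"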

 

To configure the MSDTC transaction mode as Incoming Caller Authentication Required (Windows Server 2008)


  1. To open the Component Services management console, click Start, Programs, Administrative Tools, and then click Component Services.
  2. Click to expand Component Services, click to expand Computers, click to expand My Computer, click to expand Distributed Transaction Coordinator, click to expand Clustered DTCs, right-click the clustered DTC resource, and then click Properties.
  3. Click the Security tab.
  4. If network DTC access is not already enabled, click to enable the Network DTC Access option. Network DTC access must be enabled to accommodate transactional support for BizTalk Server.
  5. Under Transaction Manager Communication, enable the following options:
    • Allow Inbound
    • Allow Outbound
    • Incoming Caller Authentication Required
  6. After changing security settings for the clustered distributed transaction coordinator resource, the resource will be restarted. Click Yes and OK when prompted.
  7. Close the Component Services management console.

To add a Message Queuing resource to an existing cluster group (Windows Server 2003)

  1. To start the Cluster Administrator program, click Start, point to Programs, point to Administrative Tools, and then click Cluster Administrator.
  2. Click to select a cluster group other than the quorum group that contains a Network Name and a Physical Disk resource.
  3. On the File menu, point to New, and then click Resource.
  4. Enter a value for the Name field of the New Resource dialog box, for example, MSMQ.
  5. In the Resource type drop-down list, click Message Queuing, and then click Next.
  6. In the Possible Owners dialog box, include each cluster node as a possible owner of the message queuing resource, and then click Next.
  7. In the Dependencies dialog box, add a dependency to a network name resource and the disk resource associated with this group, and then click Finish.
  8. Click OK in the dialog box that indicates that the resource was created successfully.
  9. To create a clustered MSDTC resource on the cluster so that there is transaction support for the clustered MSMQ resource, follow the steps in the next section.

 

To add an MSDTC resource to an existing cluster group (Windows Server 2003)


  1. To start the Cluster Administrator program, click Start, Programs, Administrative Tools, and then click Cluster Administrator.
  2. Click to select a cluster group other than the quorum group that contains a Physical Disk, IP Address, and Network Name resource. If such a group does not already exist, create one with those resources first.
  3. On the File menu, point to New, and then click Resource.
  4. Enter a value for the Name field of the New Resource dialog box, for example, MSDTC.
  5. In the Resource type drop-down list, click Distributed Transaction Coordinator, and then click Next.
  6. In the Possible Owners dialog box, include each cluster node as a possible owner of the distributed transaction coordinator resource, and then click Next.
  7. In the Dependencies dialog box, add a dependency to a network name resource and the disk resource associated with this group, and then click Finish.
  8. In the dialog box that indicates that the resource was created successfully, click OK.
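
On Windows Server 2003, the same MSMQ and MSDTC resources can also be created with cluster.exe instead of Cluster Administrator. Again a sketch with hypothetical group, network name and disk resource names; by default every node is a possible owner, which matches the wizard steps above.

  REM Message Queuing resource (hypothetical group and dependency names)
  cluster res "MSMQ" /create /group:"BizTalk Group" /type:"Message Queuing"
  cluster res "MSMQ" /adddep:"BizTalk Network Name"
  cluster res "MSMQ" /adddep:"Disk R:"
  cluster res "MSMQ" /online

  REM Distributed Transaction Coordinator resource in the same group
  cluster res "MSDTC" /create /group:"BizTalk Group" /type:"Distributed Transaction Coordinator"
  cluster res "MSDTC" /adddep:"BizTalk Network Name"
  cluster res "MSDTC" /adddep:"Disk R:"
  cluster res "MSDTC" /online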

 

To configure the MSDTC transaction mode as Incoming Caller Authentication Required (Windows Server 2003)


  1. To open the Component Services management console, click Start, Programs, Administrative Tools, and then click Component Services.
  2. Click to expand Component Services, and then click to expand Computers.
  3. Right-click My Computer, and then select the Properties menu item to display the My Computer Properties dialog box.
  4. Click the MSDTC tab.
  5. To display the Security Configuration dialog box, click Security Configuration.
  6. If network DTC access is not already enabled, click to enable the Network DTC Access option. Network DTC access must be enabled to accommodate transactional support for BizTalk Server.
  7. Under Transaction Manager Communication, enable the following options:
    • Allow Inbound
    • Allow Outbound
    • Incoming Caller Authentication Required
  8. Stop and restart the Distributed Transaction Coordinator service.
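
The settings above ultimately land in the registry. For reference only, and assuming the commonly documented value names for a non-clustered DTC on Windows Server 2003 SP1 and later (verify the result on the MSDTC tab afterwards), the equivalent changes look roughly like this:

  REM Enable network DTC access with inbound and outbound transactions (assumed value names)
  reg add HKLM\SOFTWARE\Microsoft\MSDTC\Security /v NetworkDtcAccess /t REG_DWORD /d 1 /f
  reg add HKLM\SOFTWARE\Microsoft\MSDTC\Security /v NetworkDtcAccessTransactions /t REG_DWORD /d 1 /f
  reg add HKLM\SOFTWARE\Microsoft\MSDTC\Security /v NetworkDtcAccessInbound /t REG_DWORD /d 1 /f
  reg add HKLM\SOFTWARE\Microsoft\MSDTC\Security /v NetworkDtcAccessOutbound /t REG_DWORD /d 1 /f
  REM "Incoming Caller Authentication Required" is commonly documented as this combination
  reg add HKLM\SOFTWARE\Microsoft\MSDTC /v AllowOnlySecureRpcCalls /t REG_DWORD /d 0 /f
  reg add HKLM\SOFTWARE\Microsoft\MSDTC /v FallbackToUnsecureRPCIfNecessary /t REG_DWORD /d 1 /f
  reg add HKLM\SOFTWARE\Microsoft\MSDTC /v TurnOffRpcSecurity /t REG_DWORD /d 0 /f
  REM Restart the DTC service so the changes take effect
  net stop msdtc
  net start msdtc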

03/08/2012 Posted by | Biztalk, Cluster Configuration
