GAPTHEGURU

Geek with special skills

SQL SERVER – Backup Timeline and Understanding of Database Restore Process in Full Recovery Model

I assume you all know that there are three types of database backups, so I will not dwell on that commonly known topic today. Instead, I will just talk about how to restore a database that is in the full recovery model.

In general, backups of a database in the full recovery model are taken in three different forms.

  1. Full Database Backup
  2. Differential Database Backup
  3. Log Backup

What really perplexes most people is the differential database backup.

Let me narrate a real story here. A DBA at one of my customers once called me up and laughingly said that he had just found something not so smart about the SQL Server Product Team in terms of database backup. I just could not believe this negative remark against the SQL Server Product Team, so I asked him to explain. He told me that, in his opinion, the differential database backup is an extra step. I asked him how he restores his database. He replied that he starts with the first full database backup and then sequentially restores all log backups and differential database backups. He continued his explanation and said that he had figured out something interesting: if he does not restore any of the differential database backups and only restores the log backups, it works just fine. According to him, this is an unnecessary step.

Well, I am extremely happy to say he is wrong. He has totally failed to understand the concept of the differential database backup. I called up another friend in his company and told him this story, and he found it funny too! He suggested that he would explain to my friend that he needs to restore all the differential backups first and then all the log backups taken after the last differential backup. I was again amazed and didn't know what to say. He was wrong too!

After interacting with many DBAs, I have realized that it is quite confusing to most people how a differential database backup is useful, and many are not aware of the correct method to restore a database in the full recovery model. Before I start explaining, please study the following diagram, where I have drawn the timeline on which each backup was taken.

[Diagram: backup timeline — a full database backup, followed by log backups 0_1 onward, Differential Database Backup 1, log backups 1_1 onward, Differential Database Backup 2, and log backups 2_1 onward.]

Let us remember the golden rule for restore first.

‘After restoring the full database backup, restore the latest differential database backup and all the transaction log backups taken after it to bring the database to the current state.’

From the above-listed rule, it is very clear that there is no need to restore all the differential database backups when restoring a database; you only need to restore the latest one. A differential database backup contains all the changes made in the database since the last full backup: it is cumulative. That is the reason each differential database backup is larger than the previous one; every differential backup contains all the data of the previous differential backups. You just have to restore the latest differential database backup and, right after that, apply all the transaction log backups taken after it to bring the database to the current state.

If you do not take differential database backups and keep only the transaction log backups, you will have to restore every transaction log backup, which is very time consuming and not recommended in a disaster, when getting the server back online is the priority. In this way, differential database backups are very useful to save time as well as very convenient for restoring a database, compared with restoring a long chain of transaction log backups, which must be done very carefully without missing a single backup in the sequence.
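If you are not sure which backups exist for a database, msdb records every backup taken. A quick way to list them, assuming a hypothetical database named MyDB, is shown below; the type column distinguishes the three kinds of backups discussed above:

-- 'D' = full, 'I' = differential, 'L' = transaction log
SELECT type, backup_start_date, backup_finish_date, backup_set_id
FROM msdb.dbo.backupset
WHERE database_name = 'MyDB'
ORDER BY backup_finish_date DESC;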

In our example, there are multiple paths to bring the server to the current state.

Path 1 (SLOWEST): Full Database Restore >> Log Backup 0_1 >> Log Backup 1_1 through all remaining logs.

Path 2: Full Database Restore >> Differential Database Backup 1 >> Log Backup 1_1 through all remaining logs.

Path 3 (FASTEST): Full Database Restore >> Differential Database Backup 2 >> Log Backup 2_1 through all remaining logs.
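To make Path 3 concrete, here is a minimal T-SQL sketch. The database name (MyDB) and the backup file paths are hypothetical placeholders; substitute your own, and keep every restore except the last in NORECOVERY mode:

-- Restore the full backup, leaving the database ready for further restores
RESTORE DATABASE MyDB FROM DISK = 'D:\Backup\MyDB_Full.bak' WITH NORECOVERY;

-- Only the LATEST differential is needed; earlier differentials are skipped
RESTORE DATABASE MyDB FROM DISK = 'D:\Backup\MyDB_Diff2.bak' WITH NORECOVERY;

-- Apply, in order, every log backup taken after Differential 2
RESTORE LOG MyDB FROM DISK = 'D:\Backup\MyDB_Log2_1.trn' WITH NORECOVERY;
RESTORE LOG MyDB FROM DISK = 'D:\Backup\MyDB_Log2_2.trn' WITH NORECOVERY;

-- Recover the database and bring it online
RESTORE DATABASE MyDB WITH RECOVERY;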

Let me know if my explanation is clear to you all. If there is any confusion regarding how a full recovery model restore works, do inform me.



SQL SERVER – Rollback TRUNCATE Command in Transaction

If you use a TRANSACTION in your code, TRUNCATE can be rolled back. If no transaction is used and the TRUNCATE operation is committed, the data cannot be recovered from the log file. TRUNCATE is often described as a non-logged DDL operation, but that is not accurate:

TRUNCATE *is* a logged operation; it just doesn't log the removal of individual records — it logs the page deallocations.

The code to demonstrate this is below.

USE tempdb
GO

-- Create Test Table
CREATE TABLE TruncateTest (ID INT)
INSERT INTO TruncateTest (ID)
SELECT 1
UNION ALL
SELECT 2
UNION ALL
SELECT 3
GO

-- Check the data before truncate
SELECT * FROM TruncateTest
GO

-- Begin Transaction
BEGIN TRAN
-- Truncate Table
TRUNCATE TABLE TruncateTest
GO

-- Check the data after truncate
SELECT * FROM TruncateTest
GO

-- Rollback Transaction
ROLLBACK TRAN
GO

-- Check the data after Rollback
SELECT * FROM TruncateTest
GO

-- Clean up
DROP TABLE TruncateTest
GO

This example, adapted from SQL Authority, demonstrates how a TRUNCATE executed inside a transaction can be rolled back.
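When you run it, the SELECT before the truncate returns three rows, the SELECT inside the open transaction returns none, and the SELECT after the ROLLBACK returns all three rows again, confirming that the truncate was undone.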



SQL SERVER – Get All the Information of Database using sys.databases
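The query below pulls together, for every database on the instance, its status, data and log file counts and sizes (from sys.master_files), recovery model, compatibility level, last backup details (from msdb..backupset), and the most common database option flags: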

SELECT database_id,
    CONVERT(VARCHAR(25), DB.name) AS dbName,
    CONVERT(VARCHAR(10), DATABASEPROPERTYEX(name, 'status')) AS [Status],
    state_desc,
    (SELECT COUNT(1) FROM sys.master_files
     WHERE DB_NAME(database_id) = DB.name AND type_desc = 'rows') AS DataFiles,
    (SELECT SUM((size * 8) / 1024) FROM sys.master_files
     WHERE DB_NAME(database_id) = DB.name AND type_desc = 'rows') AS [Data MB],
    (SELECT COUNT(1) FROM sys.master_files
     WHERE DB_NAME(database_id) = DB.name AND type_desc = 'log') AS LogFiles,
    (SELECT SUM((size * 8) / 1024) FROM sys.master_files
     WHERE DB_NAME(database_id) = DB.name AND type_desc = 'log') AS [Log MB],
    user_access_desc AS [User access],
    recovery_model_desc AS [Recovery model],
    CASE compatibility_level
        WHEN 60 THEN '60 (SQL Server 6.0)'
        WHEN 65 THEN '65 (SQL Server 6.5)'
        WHEN 70 THEN '70 (SQL Server 7.0)'
        WHEN 80 THEN '80 (SQL Server 2000)'
        WHEN 90 THEN '90 (SQL Server 2005)'
        WHEN 100 THEN '100 (SQL Server 2008)'
    END AS [compatibility level],
    CONVERT(VARCHAR(20), create_date, 103) + ' ' +
    CONVERT(VARCHAR(20), create_date, 108) AS [Creation date],
    -- last backup
    ISNULL((SELECT TOP 1
            CASE TYPE WHEN 'D' THEN 'Full' WHEN 'I' THEN 'Differential' WHEN 'L' THEN 'Transaction log' END
            + ' - ' + LTRIM(ISNULL(STR(ABS(DATEDIFF(DAY, GETDATE(), backup_finish_date))) + ' days ago', 'NEVER'))
            + ' - ' + CONVERT(VARCHAR(20), backup_start_date, 103) + ' ' + CONVERT(VARCHAR(20), backup_start_date, 108)
            + ' - ' + CONVERT(VARCHAR(20), backup_finish_date, 103) + ' ' + CONVERT(VARCHAR(20), backup_finish_date, 108)
            + ' (' + CAST(DATEDIFF(SECOND, BK.backup_start_date, BK.backup_finish_date) AS VARCHAR(4)) + ' seconds)'
        FROM msdb..backupset BK
        WHERE BK.database_name = DB.name
        ORDER BY backup_set_id DESC), '-') AS [Last backup],
    CASE WHEN is_fulltext_enabled = 1 THEN 'Fulltext enabled' ELSE '' END AS [fulltext],
    CASE WHEN is_auto_close_on = 1 THEN 'autoclose' ELSE '' END AS [autoclose],
    page_verify_option_desc AS [page verify option],
    CASE WHEN is_read_only = 1 THEN 'read only' ELSE '' END AS [read only],
    CASE WHEN is_auto_shrink_on = 1 THEN 'autoshrink' ELSE '' END AS [autoshrink],
    CASE WHEN is_auto_create_stats_on = 1 THEN 'auto create statistics' ELSE '' END AS [auto create statistics],
    CASE WHEN is_auto_update_stats_on = 1 THEN 'auto update statistics' ELSE '' END AS [auto update statistics],
    CASE WHEN is_in_standby = 1 THEN 'standby' ELSE '' END AS [standby],
    CASE WHEN is_cleanly_shutdown = 1 THEN 'cleanly shutdown' ELSE '' END AS [cleanly shutdown]
FROM sys.databases DB
ORDER BY dbName, [Last backup] DESC, name


SSIS and clustering: What you should do instead

Lots of customers ask about configuring SQL Server Integration Services in a failover cluster. I recommend that you DON’T configure SSIS as a cluster resource. There are almost no benefits to doing so, and you can gain many of the benefits that you want by simple configuration changes. By editing the configuration file for the SSIS service on each node of a cluster, you can manage the packages on any node from any other node. For more information, please see the Books Online topic, Configuring Integration Services in a Clustered Environment.

Microsoft Senior Premier Field Engineer Steve Howard provided these additional details about the recommendations that he makes to customers who ask about clustering. Thanks, Steve, for permission to share:


I agree that restarting a running package automatically would be neat, but this would be different from the other cluster-aware technologies (and other failover technologies) that we have. For example, if a failover happens with SQL Server, queries do not restart, and jobs running in Agent do not restart. I suppose it would be possible to write a job to check for jobs that did not complete successfully and restart those jobs, then schedule that job to run at startup. That sounds feasible, but I have never done it.

What I'm describing is supported out of the box. It is really the same process that you must go through to manage packages on a standalone machine with multiple instances (or even just one named instance). I find this to be the most common question that customers have when I go onsite. Customers usually just do not understand the function of the SSIS service. When I explain it to them, and we work through it together, they are satisfied. I'll explain here what I go through with customers, and with students in the SSIS workshop.

In the installation folder, they will find the configuration file. For SQL 2005, by default, this path is C:\Program Files\Microsoft SQL Server\90\DTS\Binn, and for SQL 2008, it is C:\Program Files\Microsoft SQL Server\100\DTS\Binn. In either case, the name of the file is MsDtsSrvr.ini.xml. When first installed, this file will look like this:

<?xml version="1.0" encoding="utf-8"?>
<DtsServiceConfiguration xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
  <StopExecutingPackagesOnShutdown>true</StopExecutingPackagesOnShutdown>
  <TopLevelFolders>
    <Folder xsi:type="SqlServerFolder">
      <Name>MSDB</Name>
      <ServerName>.</ServerName>
    </Folder>
    <Folder xsi:type="FileSystemFolder">
      <Name>File System</Name>
      <StorePath>..\Packages</StorePath>
    </Folder>
  </TopLevelFolders>
</DtsServiceConfiguration>

(I just use Notepad for this task, but some people prefer XML Notepad or a similar XML editor.)

In the top-level folder, the ServerName is ".", which means the service will connect only to the default instance on the local machine (local to where the service is running). So when I connect to that SSIS service, I can only see the default instance on the machine where the SSIS service is running. Everything here is relative to where the service is running. (I tell students that it is the center of management for SSIS.) I can connect to this machine with Management Studio from any machine, but with this configuration, I will only see the default instance running on the machine where the SSIS service I connected to is running.

If I have multiple instances on this machine, I need to add top-level folders so I can manage all the instances installed on this machine from this SSIS service. (I’ll get to clusters in a moment). Let’s say that I have both a SQL 2005 instance and a SQL 2008 instance on this machine. Then in the SQL 2008 SSIS MsDtsSrvr.ini.xml, I need to set it up to manage these instances. (I cannot manage SQL 2008 instances from SQL 2005 SSIS, so I must configure the SQL 2008 SSIS to be able to manage both from one service.) In that case, I would add the top-level folders with names that let me distinguish among the servers where I am managing packages:

<?xml version="1.0" encoding="utf-8"?>
<DtsServiceConfiguration xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
  <StopExecutingPackagesOnShutdown>true</StopExecutingPackagesOnShutdown>
  <TopLevelFolders>
    <Folder xsi:type="SqlServerFolder">
      <Name>SQL 2008 MSDB</Name>
      <ServerName>.\SQL2K8</ServerName>
    </Folder>
    <Folder xsi:type="SqlServerFolder">
      <Name>SQL 2005 MSDB</Name>
      <ServerName>.</ServerName>
    </Folder>
    <Folder xsi:type="FileSystemFolder">
      <Name>File System</Name>
      <StorePath>..\Packages</StorePath>
    </Folder>
  </TopLevelFolders>
</DtsServiceConfiguration>

So, I have added one folder named "SQL 2008 MSDB" that points to the named instance SQL2K8 on the local machine, and another folder named "SQL 2005 MSDB" that points to the default instance on the local machine. When I make this edit, restart the SSIS service so it reads the modified configuration file, and then connect to this SSIS instance, I can now see both servers and manage packages on both:

[Screenshot: Management Studio connected to the SSIS service, showing both the SQL 2008 MSDB and SQL 2005 MSDB folders.]

So now, I can see running packages on either server, and I can import, export, or manually start packages. But none of this is really necessary to design, install, or run those packages. The service is just a convenience for managing them.

So now, let's take this concept to a cluster. For our cluster, let's have 4 nodes named Node1, Node2, Node3, and Node4. On this cluster, let's install 4 instances of SQL Server in 4 separate resource groups. Let's use the network names net1, net2, net3, and net4, and let's install the instances InstanceA, InstanceB, InstanceC, and InstanceD on those network names respectively, so that the full names of our instances will be net1\InstanceA, net2\InstanceB, net3\InstanceC, and net4\InstanceD. Any of the 4 nodes can host any of the instances in our setup.

To be able to manage packages on any of those instances, you will have to modify your config file. To manage packages on all 4 instances from any one machine, we make modifications like those above, so that the config file now looks like this:

<?xml version="1.0" encoding="utf-8"?>
<DtsServiceConfiguration xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
  <StopExecutingPackagesOnShutdown>true</StopExecutingPackagesOnShutdown>
  <TopLevelFolders>
    <Folder xsi:type="SqlServerFolder">
      <Name>InstanceA MSDB</Name>
      <ServerName>net1\InstanceA</ServerName>
    </Folder>
    <Folder xsi:type="SqlServerFolder">
      <Name>InstanceB MSDB</Name>
      <ServerName>net2\InstanceB</ServerName>
    </Folder>
    <Folder xsi:type="SqlServerFolder">
      <Name>InstanceC MSDB</Name>
      <ServerName>net3\InstanceC</ServerName>
    </Folder>
    <Folder xsi:type="SqlServerFolder">
      <Name>InstanceD MSDB</Name>
      <ServerName>net4\InstanceD</ServerName>
    </Folder>
    <Folder xsi:type="FileSystemFolder">
      <Name>File System</Name>
      <StorePath>..\Packages</StorePath>
    </Folder>
  </TopLevelFolders>
</DtsServiceConfiguration>

So now, whatever machine I put that config file onto will be able to see and manage packages on those 4 instances, just as I could see and manage the packages on the two instances in the screenshot above. If I put this on node1, then when I connect to node1, I can manage all of them from that machine. But just having it on one node will be a bit of a pain. So, once I have this configured, and I have tested to make sure it sees all the instances where I want to manage packages, I just copy the MsDtsSrvr.ini.xml file into place on node2, node3, and node4 (if I have installed the SSIS service on those nodes). Now I can connect to SSIS on any of those nodes.

Most DBAs don’t care what the node names are, but they know the network names of their SQL Server instances very well. In that cluster configuration we described, these network names resolve to IP addresses that move with the SQL Server instance when it fails over. So from Management Studio on the DBA’s workstation, he can connect to the SSIS service on net1 and see all 4 instances on his cluster. If it fails over, and he still wants to connect to SSIS to manage packages on any of the 4 nodes on that cluster, he could connect to net1, and it would connect to the SSIS service running on the node where Net1\InstanceA is now hosted, and he will still see the same thing – he doesn’t know or care that he is now connected to the SSIS service on a different node. If he wanted to, he could even specify the cluster name (instead of any of the SQL network names) and still connect to an SSIS service and still see the same set of folders.

In some environments, the DBA has one server of his or her own where the management tools are set up. The SSIS configuration we have described allows the DBA to configure the XML file on that one machine to see and manage packages on all the instances and machines they manage by connecting to a single SSIS service. He or she just needs to configure the XML file on that one machine.

Where I see confusion/frustration from customers is that they think of Management Studio as the center of their management tools. With SSIS, it is the SSIS service that is the center of the management tools. Customers, before education, think of the SSIS service as running the packages, but this is not the case. The SSIS service is for management. Management Studio gives them a graphical interface into the service, but the center of management for SSIS is the SSIS service.

If I have one complaint about this, it is that we do not really have a front end for customers so that they don’t have to manually edit the XML files. But really, that XML file is so simple that it is not difficult to edit with tools like Notepad or XML Notepad.

And in that situation, what have we gained if we cluster the SSIS service?


The preceding information is presented here for exactly what it is: the educated opinion of an experienced Microsoft field engineer.

What many corporate customers are really looking for, presumably, is high availability for ETL processes, especially long-running processes. Except for its support for transactions, and its ability to restart from checkpoints after failure, SSIS out of the box doesn’t currently have a complete answer for HA concerns.


SQL Network Interfaces, error: 26 – Error Locating Server/Instance Specified

Users often see this error message when connecting to SQL Server and don't know where to start solving the problem. In most forums, people say it is because remote connections are not enabled on the server. This is not exactly correct. Actually, this error message gives customers very specific information, and the solution is quite simple.

First of all, you get this error message only if you are trying to connect to a SQL Server named instance. For a default instance, you never see it.

Why?

Because even if we fail at this stage (i.e., error locating the server/instance specified), we continue trying to connect using default values, e.g., the default TCP port 1433 and the default pipe name for Named Pipes. You may see a different error message due to a later failure, but not this one.

Every time a client makes a connection to a SQL Server named instance, it sends an SSRP UDP packet to the server machine's UDP port 1434. This step is needed to obtain the configuration of that SQL instance, e.g., the protocols enabled, the TCP port, the pipe name, and so on. Without this information, the client does not know how to connect to the server, and it fails with this error message.
In short, the reason we get this error message is that the client stack could not receive an SSRP response UDP packet from SQL Browser. It's easy to isolate the issue.

Here are the steps:

1) Make sure your server name is correct, e.g., no typo in the name.

2) Make sure your instance name is correct and that such an instance actually exists on your target machine. Some applications convert \\ to \. If you are not sure about your application, try both Server\Instance and Server\\Instance in your connection string.

3) Make sure the server machine is reachable, e.g., the DNS name resolves correctly and you are able to ping the server (not always true).

4) Make sure the SQL Browser service is running on the server.

5) If a firewall is enabled on the server, put sqlbrowser.exe and/or UDP port 1434 into the exception list.

Once you have completed these steps, you should not see this error message anymore. You may still fail to connect to your SQL Server, but the error message will be different: you have a different issue now.

If it still fails, you may replace server\instance with tcp:server\instance and/or np:server\instance and see if it succeeds with either the TCP or NP protocol. That way, you can isolate the issue a little further.

There is one corner case where you may still fail after you have checked steps 1) through 4).

It happens when:

a) your server is a named instance on a cluster or on a multi-homed machine, and

b) your client is a machine with a firewall on.

I found a good tool online that can be very helpful for isolating issues related to this error message. You can download PortQry from http://support.microsoft.com/kb/832919 and run "portqry.exe -n yourservername -p UDP -e 1434". If this command returns information and it contains your target instance, then you can rule out possibilities 4) and 5) above, meaning you do have a SQL Browser running and your firewall does not block SQL Browser UDP packets. In this case, you can check for other issues, e.g., a wrong connection string.


How to Move the OperationsManager Database in Operations Manager 2007

After the initial deployment of Microsoft System Center Operations Manager 2007, you might need to move the Operations Manager database from one Microsoft SQL Server-based computer to another.

SQL Server 2005 and SQL Server 2008 support changing the location of the data files and of the log files between SQL Server-based computers, between instances on the same SQL Server-based computer, and between different volumes on the same SQL Server-based computer. For more information about using this function in SQL Server, see the SQL Server documentation (http://go.microsoft.com/fwlink/?LinkId=93787).

The high-level steps of moving the OperationsManager database are as follows:

  1. Back up the OperationsManager database.
  2. Uninstall the OperationsManager database component.
  3. Delete the OperationsManager database.
  4. Restore the OperationsManager database on the new server.
  5. Update the management servers with the new database server name.
  6. Update the OperationsManager database with the new database server name.
  7. Update the OperationsManager database logins on the new database server. Ensure that, for the root management server, the SDK Account and the Action Account are included in the logins and have appropriate permissions. If reporting is installed, ensure that the Data Warehouse Action Account has appropriate permissions.
  8. Set ENABLE_BROKER if needed.
  9. Verify that the move was successful by ensuring that the console displays valid data.

OperationsManager Database Relocation

Use the procedure below to move the OperationsManager database to a new server.

To move the OperationsManager database

  1. Install and configure a new SQL Server-based computer. Ensure that you have system administrator permissions on both the original SQL Server-based computer and the new SQL Server-based computers.
  2. Back up the following:
    • Back up all databases. On the current server that hosts the Operations Manager database, use SQL Server Management Studio to back up the Operations Manager (default name) database.
    • Back up the encryption key on the root management server by using the SecureStorageBackup.exe utility.
  3. Stop the Operations Manager services (System Center Management, System Center Data Access, and System Center Management Configuration for root management servers, and System Center Management for management servers) on the management servers in the management group. In a clustered root management server environment, use Cluster Administrator (Windows Server 2003) or Failover Cluster Management (Windows Server 2008) to configure each of the three services listed above with the Take Offline option.
  4. On the current server that hosts the OperationsManager database, uninstall the database component as follows (these steps do not physically remove the OperationsManager database from SQL Server):
    1. Click Start, click Control Panel, and then click Add or Remove Programs for Windows Server 2003 or Programs and Features for Windows Server 2008.
    2. In the Add or Remove Programs dialog box for Windows Server 2003 or Programs and Features dialog box for Windows Server 2008, select System Center Operations Manager 2007 R2, and then select Remove for Windows Server 2003 or select Uninstall for Windows Server 2008.
    3. Complete the wizard.
  5.  On the current server that hosts the OperationsManager database, delete the OperationsManager database as follows:
    1. In Microsoft SQL Server Management Studio, navigate to Databases.
    2. Right-click OperationsManager, and then click Delete.
    3. In the Delete Object dialog box, ensure that the Delete backup and restore history information for databases and Close existing connections options are both selected.
    4. Click OK to complete the operation.
  6. On the new server, use Microsoft SQL Server Management Studio to restore the OperationsManager database that you previously backed up. To access the database backup file, copy the backup file to a local drive or map a local drive to the folder that contains the backup file.
  7. Update the registry on each management server in the management group to reference the new SQL Server-based computer. Complete this step also on the root management server. If the root management server is clustered, you must complete this step on all the nodes in the cluster.
    1. Log on to the management server with Administrator permissions.
    2. Click Start, select Run, type regedit in the Open box, and then click OK to start Registry Editor.
    3. Under HKEY_LOCAL_MACHINE\Software\Microsoft\Microsoft Operations Manager\3.0\Setup, double-click the value DatabaseServerName, and then change the value to the hostname of the SQL Server-based computer now hosting the OperationsManager database. If you are using a named instance of SQL Server, be sure to use the ServerName\Instance name format.
    4. Click OK.
    5. Close the Registry Editor.
    6. After you have completed this step on all management servers in the management group, restart the System Center Management, System Center Data Access, and System Center Management Configuration services on the root management server, and then restart only the System Center Management service on the remaining management servers.
  8. Update the OperationsManager database with the new database server name, and ensure that the account you are logged on with has sufficient privileges on the SQL Server instance.
    1. Open SQL Server Management Studio.
    2. Expand Databases, OperationsManager, and Tables.
    3. Right-click dbo.MT_ManagementGroup, and then click Open Table if you are using SQL Server 2005 or click Edit Top 200 Rows if you are using SQL Server 2008.
    4. Change the value in the SQLServerName_6B1D1BE8_EBB4_B425_08DC_2385C5930B04 column to reflect the name of the new SQL Server-based computer (or run the equivalent UPDATE sketched after this list).
    5. Save your change.
  9. On the new server hosting the OperationsManager database, add the correct permission for the login of the root management server on which the SDK Account is running, as follows:
    1. Open Microsoft SQL Server Management Studio, and in the Object Explorer pane, navigate to Security and then expand Logins.
    2. Locate the SDK Account, and add the account if it is not listed.
    3. Right-click the SDK Account, and select Properties.
    4. In the Login Properties dialog box, in the Select a page pane, select User Mapping.
    5. In the Users mapped to this login list, in the Map column, select the box that corresponds to OperationsManager (default name).
    6. In the Database role membership for: OperationsManager list, ensure that the following items are selected: configsvc_users, db_datareader, db_datawriter, db_ddladmin, and sdk_users.
    7. Click OK to save your changes and to close the Login Properties dialog box.
  10. On the new server hosting the Operations Manager database, add the correct permission for the login of the root management server on which the Action Account is running, as follows:
    1. Open Microsoft SQL Server Management Studio, and in the Object Explorer pane, navigate to Security and then expand Logins.
    2. Locate the Action Account, and add the account if it is not listed. If the Action Account is running as LocalSystem, use the format <domain\computername$> in SQL Logins, where <computername> is the name of the root management server.
    3. Right-click the Action Account, and select Properties.
    4. In the Login Properties dialog box, in the Select a page pane, select User Mapping.
    5. In the Users mapped to this login list, in the Map column, select the box that corresponds to OperationsManager (default name).
    6. In the Database role membership for: OperationsManager list, ensure that the following items are selected: db_datareader, db_datawriter, db_ddladmin, and dbmodule_users.
    7. Click OK to save your changes and to close the Login Properties dialog box.
  11. On the new server hosting the Operations Manager database, add the correct permission for the login of the Data Warehouse server on which the Data Warehouse Action Account is running, as follows:
    1. Open Microsoft SQL Server Management Studio, and in the Object Explorer pane, navigate to Security and then expand Logins.
    2. Locate the Data Warehouse Action Account, and add the account if it is not listed.
    3. Right-click the Data Warehouse Action Account, and select Properties.
    4. In the Login Properties dialog box, in the Select a page pane, select User Mapping.
    5. In the Users mapped to this login list, in the Map column, select the box that corresponds to OperationsManager (default name).
    6. In the Database role membership for: OperationsManager list, ensure that the following items are selected: db_datareader and dwsynch_users.
    7. Click OK to save your changes and to close the Login Properties dialog box.            
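If you prefer to script step 8 rather than edit the table by hand, this is a minimal T-SQL sketch of the same change; the new server name NEWSQL\INSTANCE01 is a hypothetical placeholder for your own server and instance:

USE OperationsManager
GO
-- Same change as editing dbo.MT_ManagementGroup in the table view:
-- point the management group at the new database server
UPDATE dbo.MT_ManagementGroup
SET SQLServerName_6B1D1BE8_EBB4_B425_08DC_2385C5930B04 = 'NEWSQL\INSTANCE01';
GO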

Set ENABLE_BROKER

Before you can run tasks and use the Discovery Wizard to install agents, you need to set the ENABLE_BROKER value.

After moving the Operations Manager database, the status of the Sql Broker Availability Monitor might be set to Critical, or the Sql Broker might be reported as disabled. You can check the state of the Sql Broker Availability Monitor by running the following SQL query:

SELECT is_broker_enabled FROM sys.databases WHERE name = 'OperationsManager'

Here 'OperationsManager' is the default database name; replace this name as appropriate.

If the query returns '0', the Sql Broker is disabled and you must re-enable it by using the following procedure.

To set ENABLE_BROKER

  1. Open SQL Server Management Studio.
  2. In the Connect to Server dialog box, select the appropriate values in the Server type list, Server name list, and Authentication list, and then click Connect.
  3. Click New Query.
  4. In the query window, enter the following query: ALTER DATABASE OperationsManager SET SINGLE_USER WITH ROLLBACK IMMEDIATE
  5. Click Execute.
  6. Enter the following query: ALTER DATABASE OperationsManager SET ENABLE_BROKER
  7. Click Execute.
  8. Close SQL Server Management Studio.
  9. Open SQL Server Management Studio.
  10. In the Connect to Server dialog box, select the appropriate values in the Server type list, Server name list, and Authentication list, and then click Connect.
  11. Click New Query.
  12. In the query window, enter the following query: ALTER DATABASE OperationsManager SET MULTI_USER
  13. Click Execute.

You can verify that ENABLE_BROKER is set to 1 by using this SQL query: SELECT is_broker_enabled FROM sys.databases WHERE name = 'OperationsManager'.
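If you prefer to run the whole sequence as a script, this is a minimal sketch of the same steps, assuming the default database name OperationsManager:

-- Take exclusive access, enable the broker, then return to multi-user
ALTER DATABASE OperationsManager SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
ALTER DATABASE OperationsManager SET ENABLE_BROKER;
ALTER DATABASE OperationsManager SET MULTI_USER;

-- Confirm: should return 1
SELECT is_broker_enabled FROM sys.databases WHERE name = 'OperationsManager';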


Windows Server 2008 : Configuring Server Clusters

Server Cluster Fundamentals

In Windows Server 2008, you can configure three types of server groups for load balancing, scalability, and high availability. First, a round-robin distribution group is a set of computers that uses DNS to provide basic load balancing with minimal configuration requirements. Next, a Network Load Balancing (NLB) cluster (also called an NLB farm) is a group of servers used not only to provide load balancing but also to increase scalability. Finally, a failover cluster can be used to increase the availability of an application or service in the event of a server failure.

Note: What is load balancing?

Load balancing is a means of distributing incoming connection requests to two or more servers in a manner that is transparent to users. Load balancing can be implemented with hardware, software, or a combination of both.

Round-Robin Distribution

Round-robin DNS is a simple method for distributing a workload among multiple servers. In round-robin, a DNS server is configured with more than one record to resolve another server’s name to an IP address. When clients query the DNS server to resolve the name (find the address) of the other server, the DNS server responds by cycling through the records one at a time and by pointing each successive client to a different address and different machine.

For example, suppose that a DNS server authoritative for the DNS domain contoso.com is configured with two separate resource records, each resolving the name web.contoso.com by pointing to a different server, as shown in Figure 1. When the first client (Client1) queries the DNS server to resolve the web.contoso.com name, the DNS server answers by pointing the client to the server named websrv1 located at the 192.168.3.11 address. This is the information associated with the first DNS record matching “web.” When the next client, Client2, queries the DNS server to resolve the same name (web.contoso.com), the DNS server answers the query with the information provided in the second record matching “web.” This second record points to a server named websrv2, which is located at the 192.168.3.12 address. If a third client then queries the DNS server for the same name, the server will respond with information from the first record again.

Figure 1. Round-robin uses DNS to distribute the client load between two or more servers

The purpose of DNS round-robin is to load balance client requests among servers. Its main advantage is that it is very easy to configure. Round-robin DNS is enabled by default in most DNS servers, so to configure this simple sort of load balancing, you only need to create the appropriate DNS records on the DNS server.

However, there are serious limitations to round-robin as a load balancing mechanism. The biggest drawback is that if one of the target servers goes down, the DNS server does not respond to this event, and it will keep directing clients to the inactive server until a network administrator removes the DNS record from the DNS server. Another drawback is that every record is given equal weight, regardless of whether one target server is more powerful than another or a given server is already busy. A final drawback is that round-robin does not always function as expected. Because DNS clients cache query responses from servers, a DNS client by default will keep connecting to the same target server as long as the cached response stays active.

Network Load Balancing

An installable feature of Windows Server 2008, NLB transparently distributes client requests among servers in an NLB cluster by using virtual IP addresses and a shared name. From the perspective of the client, the NLB cluster appears to be a single server. NLB is a fully distributed solution in that it does not use a centralized dispatcher.

In a common scenario, NLB is used to create a Web farm—a group of computers working to support a Web site or set of Web sites. However, NLB can also be used to create a terminal server farm, a VPN server farm, or an ISA Server firewall cluster. Figure 2 shows a basic configuration of an NLB Web farm located behind an NLB firewall cluster.

Figure 2. Basic diagram for two connected NLB clusters

As a load balancing mechanism, NLB provides significant advantages over round-robin DNS. First of all, in contrast to round-robin DNS, NLB automatically detects servers that have been disconnected from the NLB cluster and then redistributes client requests to the remaining live hosts. This feature prevents clients from sending requests to the failed servers. Another difference between NLB and round-robin DNS is that in NLB, you have the option to specify a load percentage that each host will handle. Clients are then statistically distributed among hosts so that each server receives its percentage of incoming requests.

Beyond load balancing, NLB also supports scalability. As the demand for a network service such as a Web site grows, more servers can be added to the farm with only a minimal increase in administrative overhead.

Failover Clustering

A failover cluster is a group of two or more computers used to prevent downtime for selected applications and services. The clustered servers (called nodes) are connected by physical cables to each other and to shared disk storage. If one of the cluster nodes fails, another node begins to take over service for the lost node in a process known as failover. As a result of failover, users connecting to the server experience minimal disruption in service.

Servers in a failover cluster can function in a variety of roles, including the roles of file server, print server, mail server, or database server, and they can provide high availability for a variety of other services and applications.

In most cases, the failover cluster includes a shared storage unit that is physically connected to all the servers in the cluster, although any given volume in the storage is accessed by only one server at a time.

Figure 3 illustrates the process of failover in a basic, two-node failover cluster.

Figure 3. In a failover cluster, when one server fails, another takes over, using the same storage

In a failover cluster, storage volumes or LUNs that are exposed to the nodes in a cluster must not be exposed to other servers, including servers in another cluster. Figure 4 illustrates this concept by showing two two-node failover clusters dividing up storage on a SAN.

Figure 4. Each failover cluster must isolate storage from other servers

Creating a Failover Cluster

Creating a failover cluster is a multistep process. The first step is to configure the physical hardware for the cluster. Then, you need to install the Failover Clustering feature and run the Failover Cluster Validation Tool, which ensures that the hardware and software prerequisites for the cluster are met. Next, once the configuration has been validated by the tool, create the cluster by running the Create Cluster Wizard. Finally, to configure the behavior of the cluster and to define the availability of selected services, you need to run the High Availability Wizard.

Preparing Failover Cluster Hardware

Failover clusters have fairly elaborate hardware requirements. To configure the hardware, review the following list of requirements for the servers, network adapters, cabling, controllers, and storage:

  • Servers: Use a set of matching computers that consist of the same or similar components (recommended).
  • Network adapters and cabling: The network hardware, like other components in the failover cluster solution, must be compatible with Windows Server 2008. If you use iSCSI, each network adapter must be dedicated to either network communication or iSCSI, not both. In the network infrastructure that connects your cluster nodes, avoid having single points of failure. There are multiple ways of accomplishing this: you can connect your cluster nodes by multiple, distinct networks, or you can connect them with one network constructed with teamed network adapters, redundant switches, redundant routers, or similar hardware that removes single points of failure.
  • Device controllers or appropriate adapters for the storage: If you are using serial attached SCSI or FC in all clustered servers, the mass-storage device controllers that are dedicated to the cluster storage should be identical. They should also use the same firmware version. If you are using iSCSI, each clustered server must have one or more network adapters or HBAs that are dedicated to the cluster storage. The network you use for iSCSI cannot be used for network communication. In all clustered servers, the network adapters you use to connect to the iSCSI storage target should be identical. It is also recommended that you use Gigabit Ethernet or higher. (Note also that for iSCSI, you cannot use teamed network adapters.)
  • Shared storage compatible with Windows Server 2008: For a two-node failover cluster, the storage should contain at least two separate volumes (LUNs), configured at the hardware level. The first volume will function as the witness disk, a volume that holds a copy of the cluster configuration database. Witness disks, known as quorum disks in Microsoft Windows Server 2003, are used in many but not all cluster configurations.

    The second volume will contain the files that are being shared to users. Storage requirements include the following:

    • To use the native disk support included in failover clustering, use basic disks, not dynamic disks.
    • It is recommended that you format the storage partitions with NTFS. (For the witness disk, the partition must be NTFS.)

    When deploying a storage area network (SAN) with a failover cluster, be sure to confirm with manufacturers and vendors that the storage, including all drivers, firmware, and software used for the storage, is compatible with failover clusters in Windows Server 2008.

After you have met the hardware requirements and connected the cluster servers to storage, you can then install the Failover Cluster feature.

Note: What is the quorum configuration?

The quorum configuration in a failover cluster determines the number of failures that the cluster can sustain before the cluster stops running. In Windows Server 2008, you can choose from among four quorum configurations. The first option is the Node Majority quorum configuration, which is recommended for clusters with an odd number of nodes. In node majority, the failover cluster runs as long as a majority of the nodes are running. The second option is the Node And Disk Majority quorum configuration, which is recommended for clusters with an even number of nodes. In node and disk majority, the failover cluster uses a witness disk as a tiebreaker node, and the failover cluster then runs as long as a majority of these nodes are online and available. The third option is the Node And File Share Majority quorum configuration. In node and file share majority, which is recommended for clusters that have an even number of nodes and that lack access to a witness disk, a witness file share is used as a tiebreaker node, and the failover cluster then runs as long as a majority of these nodes are online and available. The fourth and final option is the No Majority: Disk Only quorum configuration. In this configuration, which is generally not recommended, the failover cluster remains running as long as a single node and its storage remain online.

Installing the Failover Clustering Feature

Before creating a failover cluster, you have to install the Failover Clustering feature on all nodes in the cluster.

To install the Failover Clustering feature, begin by clicking Add Features in Server Manager. In the Add Features Wizard, select the Failover Clustering check box. Click Next, and then follow the prompts to install the feature.

Once the feature is installed on all nodes, you are ready to validate the hardware and software configuration.

Validating the Cluster Configuration

Before you create a new cluster, use the Validate A Configuration Wizard to ensure that your nodes meet the hardware and software prerequisites for a failover cluster.

To run the Validate A Configuration Wizard, first open Failover Cluster Management from the Administrative Tools program group. In Failover Cluster Management, click Validate A Configuration in the Management area or the Actions pane, as shown in Figure 5.

Figure 5. Validating failover server prerequisites

After the wizard completes, make any necessary configuration changes, and then rerun the test until the configuration is successfully validated. After the cluster prerequisites have been validated, you can use the Create Cluster Wizard to create the cluster.

Running the Create Cluster Wizard

The next step in creating a cluster is to run the Create Cluster Wizard. The Create Cluster Wizard installs the software foundation for the cluster, converts the attached storage into cluster disks, and creates a computer account in Active Directory for the cluster. To launch this tool, in Failover Cluster Management, click Create A Cluster in the Management area or Actions pane.

In the Create Cluster Wizard, simply enter the names of the cluster nodes when prompted. The wizard then enables you to name and assign an IP address for the cluster, after which the cluster is created.

After the wizard completes, you need to configure the services or applications for which you wish to provide failover. To perform this aspect of the configuration, run the High Availability Wizard.

Running the High Availability Wizard

The High Availability Wizard configures failover service for a particular service or application. To launch the High Availability Wizard, in Failover Cluster Management, click Configure A Service Or Application in the Action pane or Configure area.

To complete the High Availability Wizard, perform the following steps:

1. On the Before You Begin page, review the text, and then click Next.

2. On the Select Service Or Application page, select the service or application for which you want to provide failover service (high availability), and then click Next.

3. Follow the instructions in the wizard to specify required details about the chosen service. For example, for the File Server service, you would need to specify the following:

  • A name for the clustered file server
  • Any IP address information that is not automatically supplied by your DHCP settings—for example, a static IPv4 address for this clustered file server
  • The storage volume or volumes that the clustered file server should use

4. After the wizard runs and the Summary page appears, to view a report of the tasks the wizard performed, click View Report.

5. To close the wizard, click Finish.

Testing the Failover Cluster

After you complete the wizard, test the failover cluster in Failover Cluster Management. In the console tree, make sure Services and Applications is expanded, and then select the service you have just added with the High Availability Wizard. Right-click the clustered service, click Move This Service Or Application To Another Node, and then click the available choice of node. You can observe the status changes in the center pane of the snap-in as the clustered service instance is moved. If the service moves successfully, the failover is functional.

Configuring an NLB Cluster

Creating an NLB cluster is a relatively simple process. To begin, install Windows Server 2008 on two servers and then, on both servers, configure the service or application (such as IIS) that you want to provide to clients. Be sure to create identical configurations because you want the client experience to be identical regardless of which server users are connected to.

The next step in configuring an NLB cluster is to install the Network Load Balancing feature on all servers that you want to join the NLB cluster. For this step, simply open Server Manager, and then click Add Features. In the Add Features Wizard, select Network Load Balancing, click Next, and then follow the prompts to install.

The final step in creating an NLB cluster is to use Network Load Balancing Manager to configure the cluster. This procedure is outlined in the following section.

▸ To create an NLB cluster

1. Launch Network Load Balancing Manager from Administrative Tools. (You can also open Network Load Balancing Manager by typing Nlbmgr.exe at a command prompt.)

2. In the Network Load Balancing Manager console tree, right-click Network Load Balancing Clusters, and then click New Cluster.

3. Connect to the host that is to be a part of the new cluster. In Host, enter the name of the host, and then click Connect.

4. Select the interface you want to use with the cluster, and then click Next. (The interface hosts the virtual IP address and receives the client traffic to load balance.)

5. On the Host Parameters page, select a value in the Priority (Unique host identifier) drop-down list. This parameter specifies a unique ID for each host. The host with the lowest numerical priority among the current members of the cluster handles all of the cluster's network traffic not covered by a port rule. You can override these priorities or provide load balancing for specific ranges of ports by specifying rules on the Port Rules tab of the Network Load Balancing Properties dialog box.

6. On the Host Parameters page, verify that the dedicated IP address from the chosen interface is visible in the list. If not, use the Add button to add the address, and then click Next to continue.

7. On the Cluster IP Addresses page, click Add to enter the cluster IP address shared by every host in the cluster. NLB adds this IP address to the TCP/IP stack on the selected interface of all hosts chosen to be part of the cluster. Click Next to continue.

Note: Use only static addresses

NLB doesn’t support Dynamic Host Configuration Protocol (DHCP). NLB disables DHCP on each interface it configures, so the IP addresses must be static.

8. On the Cluster Parameters page, in the Cluster IP Configuration area, verify appropriate values for the IP address and subnet mask, and then type a full Internet name (fully qualified domain name) for the cluster. Note that for IPv6 addresses, a subnet mask is not needed. Note also that a full Internet name is not needed when using NLB with Terminal Services.

9. On the Cluster Parameters page, in the Cluster Operation Mode area, click Unicast to specify that a unicast media access control (MAC) address should be used for cluster operations. In unicast mode, the MAC address of the cluster is assigned to the network adapter of the computer, and the built-in MAC address of the network adapter is not used. It is recommended that you accept the unicast default settings. Click Next to continue.

10. On the Port Rules page, click Edit to modify the default port rules. Configure the rules as follows:

  • In the Port Range area, specify a range corresponding to the service you want to provide in the NLB cluster. For example, for Web services, type 80 to 80 so that the new rule applies only to HTTP traffic. For Terminal Services, type 3389 to 3389 so that the new rule applies only to RDP traffic.
  • In the Protocols area, select TCP or UDP, as needed, as the specific TCP/IP protocol the port rule should cover. Only the network traffic for the specified protocol is affected by the rule. Traffic not affected by the port rule is handled by the default host.
  • In the Filtering mode area, select Multiple Host if you want multiple hosts in the cluster to handle network traffic for the port rule. Choose Single Host if you want a single host to handle the network traffic for the port rule.
  • In Affinity (which applies only for the Multiple Host filtering mode), select None if you want multiple connections from the same client IP address to be handled by different cluster hosts (no client affinity). Leave the Single option if you want NLB to direct multiple requests from the same client IP address to the same cluster host. Select Network if you want NLB to direct multiple requests from the local subnet to the same cluster host.

11. After you add the port rule, click Finish to create the cluster.

To add more hosts to the cluster, right-click the new cluster, and then click Add Host To Cluster. Configure the host parameters (including host priority and dedicated IP addresses) for the additional hosts by following the same instructions that you used to configure the initial host. Because you are adding hosts to an already configured cluster, all the cluster-wide parameters remain the same.


Understanding FSMO Roles in Server 2008 Active Directory

Overview

FSMO stands for Flexible Single Master Operations, and FSMO roles (also known as operations master roles) help you prevent conflicts in your Active Directory.

In this article I will examine the difference between the single and multi-master models in Windows Server 2000, 2003 and 2008 and I will go through what you need to know about the different FSMO roles. I will also take a look at FSMO reliability and availability and what’s new with FSMO in Windows Server 2008.

Windows 2000/2003/2008 Multi-Master Model

  • For most Active Directory objects, the task of updating can be performed by any Domain Controller except those Domain Controllers that are read-only. Updates such as computer object properties, renamed organizational units, and user account password resets can be handled by any writable domain controller.

    After an object is changed on one domain controller, those changes are propagated to the other domain controllers through replication. During replication all of the Domain Controllers share their updates. So a user that has their password reset in one part of the domain may have to wait until those changes are replicated to the Domain Controller that they are signing in from.

    This model works very well for most objects. In the case of any conflicts, such as a user’s password being reset by both the central helpdesk as well as an administrator working at the user’s site, then conflicts are resolved by whichever made the last change. However, there are some changes that are too important, and are not well suited to this model.

    Windows 2000/2003/2008 Single-Master Model

    There are 5 specific types of updates to Active Directory that are very specific, and conflicts should be avoided. To help alleviate any potential conflicts, those updates are all performed on a single Domain Controller. And though each type of update must be performed on a single Domain Controller, they do not all have to be handled by the same Domain Controller.

    These types of updates are handled by Domain Controllers Flexible Single Master Operations roles, or FSMO roles. Each of the five roles is assigned to only one domain controller.

    There are five of these FSMO roles in every forest. They are:

    • Schema Master
    • Domain Naming Master
    • Infrastructure Master
    • Relative ID (RID) Master
    • Primary Domain Controller (PDC) Emulator

    Additionally, three of those FSMO roles are needed once in every domain in the forest:

    • Infrastructure Master
    • Relative ID (RID) Master
    • Primary Domain Controller (PDC) Emulator

    Here is what you need to know about the different FSMO roles.

    • All Schema Changes and Updates to Active Directory are Processed by the DC with the Schema Master Role

    Whenever the schema is modified at all, those updates are always completed by the domain controller with the schema master role. Schema is updated during the normal replication, and the schema updates are replicated throughout all the domains in the forest. Since the schema master role is only needed once in the forest, it is kept in the forest root domain. It’s advisable to place the schema master role on the same domain controller (DC) as the primary domain controller (PDC) emulator.

• Changes to Which Domains are Part of the Forest are Processed by the DC with the Domain Naming Master Role

As domains join or leave the forest, it is the domain naming master that records those updates in Active Directory; only this DC actually commits those changes to the directory. The domain naming master also commits changes to application partitions. Like the schema master role, this is a forest-level FSMO role, needed only once across all domains in a forest. Also like the schema master, it is suggested that this role be handled by the same domain controller: the PDC emulator in the forest root.

• Each Domain in a Forest Translates Names for Other Domains Through Its Infrastructure Master

The infrastructure master is a translator between globally unique identifiers (GUIDs), security identifiers (SIDs), and distinguished names (DNs) for foreign domain objects. If you have ever looked at the membership of a domain local group that has members from other domains, you may have seen those users and groups listed only by their SID. The infrastructure master of the domain those accounts belong to is responsible for translating that SID into a name.

Each domain has its own infrastructure master, including the forest root and every child domain. As a rule, you do not put the infrastructure master role on a domain controller that holds the global catalog. However, in a single-domain forest the infrastructure master has no work to do, since there are no foreign principals to translate; in that case it is acceptable to place the infrastructure master on any domain controller, even one that holds the global catalog. In a forest with multiple domains, if there is even one domain controller that does not hold the global catalog, then the infrastructure master role must be placed on a domain controller that is not a global catalog server.
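You can check this placement rule with dsquery by comparing the infrastructure master against the list of global catalog servers:

rem Which DC holds the infrastructure master role?
dsquery server -hasfsmo infr

rem Which DCs are global catalog servers?
dsquery server -isgc

If the role holder shows up in the second list while other DCs do not, the infrastructure master is placed incorrectly.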

• The Unique Part of a Security Identifier is Assigned from the Relative ID (RID) Master

One of the first things to understand about a security identifier (SID) is that it must be unique. A SID has two parts: the domain identifier (domain ID) and the relative ID (RID). The domain identifier part of the SID is the same for every security principal in the domain; when looking at a list of SIDs in a domain, it is easy to spot, because it is identical in each one. The relative ID part, in contrast, is the unique part. The two parts together make up what we commonly identify as a SID.

It is conceivable, then, that if two or more domain controllers were each responsible for generating the relative IDs for SIDs, two of them could come up with the same relative ID for two different objects before they had replicated with each other.

That cannot happen when only one DC in a domain is responsible for creating relative IDs. The relative ID master, or RID master, hands out batches of relative IDs to the individual domain controllers, and each domain controller uses its allotment to create new users, groups, and computers. When a domain controller runs low on relative IDs in reserve, it requests a new batch from the domain controller holding the RID master FSMO role.

Every domain in a forest must have a domain controller with the RID master FSMO role assigned to it. It is recommended that the RID master role be assigned to whichever domain controller holds the PDC emulator role.
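To confirm that a domain controller can reach the RID master, and to see its current RID allocation, dcdiag has a dedicated test; the verbose switch prints the DC's RID pool values:

dcdiag /test:ridmanager /v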

• The Domain Controller (DC) That is the Primary Domain Controller (PDC) Emulator is the Authoritative DC in a Domain

The domain controller that holds the PDC emulator FSMO role has many duties and responsibilities in the domain. For example, it receives preferential replication of password changes for users and computers. When a user attempts to log in and enters a bad password, it is the DC with the PDC emulator role that is consulted to determine whether the password was changed without the replica DC’s knowledge. The PDC emulator is also the default domain controller for many administrative tools, and likewise the default DC used when Group Policies are updated.

Additionally, it is the PDC emulator that maintains the authoritative time by which the domain is regulated. The time on the PDC emulator determines the last write time recorded for an object (used to resolve conflicts, for example). In a forest with multiple domains, the forest root PDC emulator is the authoritative time source for all domains in the forest.

Each domain in the forest needs its own PDC emulator.
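This time hierarchy is also why the forest root PDC emulator is the one DC that should be pointed at an external time source. A sketch with the built-in w32tm tool, assuming pool.ntp.org as the external NTP server:

rem Check where this DC currently gets its time
w32tm /query /source

rem On the forest root PDC emulator only: sync from an external NTP server
w32tm /config /manualpeerlist:"pool.ntp.org" /syncfromflags:manual /reliable:yes /update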


FSMO Reliability and Availability

Due to the importance of the FSMO roles, the domain controllers holding them need to be online at the times their services are needed. For some of the roles, such as the schema master, that is not very often: it only needs to be online when the schema is updated. Others, such as the PDC emulator, need to be online and accessible all the time.

Ideally, you put the PDC emulator on the domain controller with the best hardware available, and ensure that it is in a reliable hub site with other domain controllers from the same Active Directory domain and site to replicate with. Then, to reduce administration and complexity, you assign at least some of the other FSMO roles to the same DC: the RID master to the PDC emulator of each domain, and the schema master and domain naming master to the PDC emulator of the forest root.

In the event that a DC holding one of the FSMO roles becomes unavailable, especially the PDC emulator, it is critical for the domain to get that role back. If, for example, you know that the PDC emulator will be shut down for scheduled maintenance, you should transfer the FSMO role to a different domain controller beforehand. In the unfortunate event that the PDC emulator crashes in an unplanned outage, you will see domain errors until it is brought back online. If you cannot get it back online, you may have to seize the FSMO role on another domain controller. It is always better to transfer ahead of time than to have to seize the role after a crash; seizing a role should be done only as a last resort, because after a seizure you can never bring the DC that previously held the role back online.
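Both transfers and seizures are performed with ntdsutil. As a sketch, moving the PDC emulator role to a DC named DC2 (the name is a placeholder) looks like this at the ntdsutil prompts:

ntdsutil
roles
connections
connect to server DC2
quit
transfer PDC

To seize instead, replace transfer PDC with seize PDC; ntdsutil first attempts a graceful transfer and only seizes the role when the current holder cannot be contacted.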

New with Windows Server 2008 Active Directory is the ability to designate ahead of time a standby operations master. This domain controller is connected directly to the primary operations master role holder through replication, so that its copy of the directory stays as current as possible and the role can be transferred, or if necessary seized, to it quickly.

Summary

When updating a part of Active Directory is too critical an operation to risk a conflict, Active Directory domains use a single-server model for those updates. The right to perform certain updates and duties in Active Directory is granted to domain controllers through the assignment of one of the Flexible Single Master Operations roles, or FSMO roles.

There are five FSMO roles. Two of them, the schema master and the domain naming master, are assigned only once in the forest, in the domain at the forest root. The other three, the RID master, the PDC emulator, and the infrastructure master, are assigned in each domain, typically all to the same domain controller.

The availability requirements of a domain controller holding an FSMO role depend on the role. For example, the schema master may be offline without causing any concern until an update to the schema is attempted. FSMO roles can be transferred to another domain controller to improve performance or to allow continued access during a scheduled outage. In the event of an unscheduled outage, FSMO roles may be seized as a last resort.

01/18/2012 Posted by | Active Directory, FSMO, SID, Windows Server

SQL SERVER – 2005 – Find Database Status Using sys.databases or DATABASEPROPERTYEX

While looking into a flood of SCOM error messages at a customer site, where several databases were reporting the wrong database status, I came across sys.databases and DATABASEPROPERTYEX. They offer a very easy way to check the database status; the queries below show how.

The main database states are the following (Reference: BOL Database States):

ONLINE: Database is available for access.
OFFLINE: Database is unavailable.
RESTORING: One or more files of the primary filegroup are being restored, or one or more secondary files are being restored offline.
RECOVERING: Database is being recovered.
RECOVERY PENDING: SQL Server has encountered a resource-related error during recovery.
SUSPECT: At least the primary filegroup is suspect and may be damaged.
EMERGENCY: User has changed the database and set the status to EMERGENCY.

Let us see how we can find out the database status using sys.databases and DATABASEPROPERTYEX.

SELECT DATABASEPROPERTYEX('Databasename', 'Status') DatabaseStatus_DATABASEPROPERTYEX
GO

SELECT state_desc DatabaseStatus_sysDatabase
FROM sys.databases
WHERE name = 'Databasename'
GO

ResultSet:

DatabaseStatus_DATABASEPROPERTYEX
------------------------------------------------------
ONLINE

DatabaseStatus_sysDatabase
------------------------------------------------------
ONLINE
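Since the SCOM alerts in this case involved several databases, it is convenient to check the state of every database in a single pass; a small variation on the query above (the column aliases are simply my own naming):

-- List the current state of every database on the instance
SELECT name AS DatabaseName, state_desc AS DatabaseStatus
FROM sys.databases
ORDER BY name
GO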

01/06/2012 Posted by | Sql Server