GAPTHEGURU

Geek with special skills

SQL SERVER – Backup Timeline and Understanding of Database Restore Process in Full Recovery Model

I assume you all know that there are three recovery models (Simple, Full, and Bulk-logged), so I will not discuss that commonly known topic today. Instead, I will talk about how to restore a database that is in the full recovery model.

In general, backups of a database in the full recovery model are taken as three different kinds of backup files:

  1. Full Database Backup
  2. Differential Database Backup
  3. Log Backup

What really perplexes most people is the differential database backup.

Let me narrate a real story here. A DBA at one of my customers once called me up and laughingly said that he had just found something not so smart about the SQL Server product team when it comes to database backups. I could not believe this negative remark against the SQL Server product team, so I asked him to explain. He told me that, in his opinion, the differential database backup is an unnecessary extra step. I asked him how he restores his database. He replied that he starts with the first full database backup and then sequentially applies all the log backups and differential database backups. He continued that he had figured out something interesting: if he does not restore the differential database backups and restores only the log backups, everything still works fine. According to him, this proves the differential backup is an unnecessary step.

Well, I am happy to say he is wrong; he has totally failed to understand the concept of the differential database backup. I called up another friend in his company and told him this story, and he found it funny too! He suggested that he would explain to my friend that he needs to restore all the differential backups first and then all the log backups taken after the last differential backup. I was amazed again and did not know what to say. He was wrong too!

After interacting with many DBAs, I have realized that most people find it confusing how the differential database backup is useful, and many are not aware of the correct method to restore a database in the full recovery model. Before I start explaining, please look at the following diagram, which shows the timeline of when each backup was taken.

[Diagram: backup timeline showing a full database backup, followed by log backups (Log Backup 0_1, ...), Differential Database Backup 1, more log backups (Log Backup 1_1, ...), Differential Database Backup 2, and more log backups (Log Backup 2_1, ...) up to the current point in time.]
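For reference, the backups on such a timeline could be produced with statements along these lines. This is only a minimal sketch: the database name MyDB and the backup file paths are placeholders, not part of the original example.

-- Full database backup (the starting point of the timeline)
BACKUP DATABASE MyDB TO DISK = 'D:\Backup\MyDB_Full.bak'

-- Transaction log backups taken periodically after the full backup
BACKUP LOG MyDB TO DISK = 'D:\Backup\MyDB_Log_0_1.trn'

-- Differential backup: contains everything changed since the last FULL backup
BACKUP DATABASE MyDB TO DISK = 'D:\Backup\MyDB_Diff_1.bak' WITH DIFFERENTIAL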

Let us first remember the golden rule of restores.

‘After restoring the full database backup, restore the latest differential database backup and all the transaction log backups taken after it to bring the database to the current state.’

From the rule above, it is clear that there is no need to restore all the differential database backups; you only need to restore the latest one. A differential database backup contains all the changes made in the database since the last full backup; it is cumulative. That is why each differential backup is larger than the previous one: every differential backup contains all the data of the previous differential backups. You simply restore the latest differential backup and, right after that, apply all the transaction log backups taken after it to bring the database to the current state.

If you do not take differential database backups and rely only on transaction log backups, you will have to restore every transaction log backup, which is very time consuming and not what you want in a disaster, when getting the server back online is the priority. Differential database backups therefore save time and make the restore much more convenient: instead of restoring a long chain of transaction log backups, which must be applied carefully without missing a single backup in the sequence, you restore one differential backup and only the log backups taken after it.

In our example, there are multiple paths to bring the server to the current state.

Path 1 (SLOWEST) : Full Database Restore >> Log Backup 0_1 >> Log Backup 1_1 to all remaining logs.

Path 2 : Full Database Restore >> Differential Database Backup 1 >> Log Backup 1_1 to all remaining logs.

Path 3 (FASTEST) : Full Database Restore >> Differential Database Backup 2 >> Log Backup 2_1 to all remaining logs.
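To make Path 3 concrete, a restore sequence along the following lines brings the database to the current state. This is a minimal sketch: the database name MyDB and the file names are placeholders that mirror the timeline above, and every backup is restored WITH NORECOVERY until the final step brings the database online.

-- 1. Restore the full database backup, leaving the database ready for further restores
RESTORE DATABASE MyDB FROM DISK = 'D:\Backup\MyDB_Full.bak' WITH NORECOVERY

-- 2. Restore only the LATEST differential backup
RESTORE DATABASE MyDB FROM DISK = 'D:\Backup\MyDB_Diff_2.bak' WITH NORECOVERY

-- 3. Restore every log backup taken after that differential, in sequence
RESTORE LOG MyDB FROM DISK = 'D:\Backup\MyDB_Log_2_1.trn' WITH NORECOVERY
-- ...repeat for each remaining log backup...

-- 4. Recover the database and bring it online
RESTORE DATABASE MyDB WITH RECOVERY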

Let me know if my explanation is clear to you all. If there is any confusion regarding how restoring a database in the full recovery model works, do inform me.


01/31/2012 | Sql Server

SQL SERVER – Rollback TRUNCATE Command in Transaction

If you use a TRANSACTION in your code, TRUNCATE can be rolled back. If no transaction is used and the TRUNCATE operation is committed, the data cannot simply be retrieved from the log file. Note that although TRUNCATE is a DDL operation, it *is* a logged operation: it just does not log the removal of the individual records; it logs the page deallocations.

The code to demonstrate this behavior is below.

USE tempdb
GO
-- Create Test Table
CREATE TABLE TruncateTest (ID INT)
INSERT INTO TruncateTest (ID)
SELECT 1
UNION ALL
SELECT 2
UNION ALL
SELECT 3
GO
-- Check the data before truncate
SELECT * FROM TruncateTest
GO
-- Begin Transaction
BEGIN TRAN
-- Truncate Table
TRUNCATE TABLE TruncateTest
GO
-- Check the data after truncate
SELECT * FROM TruncateTest
GO
-- Rollback Transaction
ROLLBACK TRAN
GO
-- Check the data after Rollback
SELECT * FROM TruncateTest
GO
-- Clean up
DROP TABLE TruncateTest
GO

This example (adapted from SQLAuthority) demonstrates how, within a transaction, TRUNCATE can be rolled back: the first SELECT returns three rows, the SELECT after TRUNCATE returns none, and the SELECT after ROLLBACK returns the three rows again.

01/31/2012 | SQL Scripting, Sql Server, T-SQL

SQL SERVER – Get All the Information of Database using sys.databases

SELECT database_id,
       CONVERT(VARCHAR(25), DB.name) AS dbName,
       CONVERT(VARCHAR(10), DATABASEPROPERTYEX(name, 'status')) AS [Status],
       state_desc,
       (SELECT COUNT(1) FROM sys.master_files
        WHERE DB_NAME(database_id) = DB.name AND type_desc = 'rows') AS DataFiles,
       (SELECT SUM((size*8)/1024) FROM sys.master_files
        WHERE DB_NAME(database_id) = DB.name AND type_desc = 'rows') AS [Data MB],
       (SELECT COUNT(1) FROM sys.master_files
        WHERE DB_NAME(database_id) = DB.name AND type_desc = 'log') AS LogFiles,
       (SELECT SUM((size*8)/1024) FROM sys.master_files
        WHERE DB_NAME(database_id) = DB.name AND type_desc = 'log') AS [Log MB],
       user_access_desc AS [User access],
       recovery_model_desc AS [Recovery model],
       CASE compatibility_level
            WHEN 60 THEN '60 (SQL Server 6.0)'
            WHEN 65 THEN '65 (SQL Server 6.5)'
            WHEN 70 THEN '70 (SQL Server 7.0)'
            WHEN 80 THEN '80 (SQL Server 2000)'
            WHEN 90 THEN '90 (SQL Server 2005)'
            WHEN 100 THEN '100 (SQL Server 2008)'
       END AS [compatibility level],
       CONVERT(VARCHAR(20), create_date, 103) + ' ' +
       CONVERT(VARCHAR(20), create_date, 108) AS [Creation date],
       -- last backup
       ISNULL((SELECT TOP 1
                      CASE TYPE WHEN 'D' THEN 'Full' WHEN 'I' THEN 'Differential' WHEN 'L' THEN 'Transaction log' END + ' - ' +
                      LTRIM(ISNULL(STR(ABS(DATEDIFF(DAY, GETDATE(), Backup_finish_date))) + ' days ago', 'NEVER')) + ' - ' +
                      CONVERT(VARCHAR(20), backup_start_date, 103) + ' ' +
                      CONVERT(VARCHAR(20), backup_start_date, 108) + ' - ' +
                      CONVERT(VARCHAR(20), backup_finish_date, 103) + ' ' +
                      CONVERT(VARCHAR(20), backup_finish_date, 108) +
                      ' (' + CAST(DATEDIFF(second, BK.backup_start_date, BK.backup_finish_date) AS VARCHAR(4)) + ' seconds)'
               FROM msdb..backupset BK
               WHERE BK.database_name = DB.name
               ORDER BY backup_set_id DESC), '-') AS [Last backup],
       CASE WHEN is_fulltext_enabled = 1 THEN 'Fulltext enabled' ELSE '' END AS [fulltext],
       CASE WHEN is_auto_close_on = 1 THEN 'autoclose' ELSE '' END AS [autoclose],
       page_verify_option_desc AS [page verify option],
       CASE WHEN is_read_only = 1 THEN 'read only' ELSE '' END AS [read only],
       CASE WHEN is_auto_shrink_on = 1 THEN 'autoshrink' ELSE '' END AS [autoshrink],
       CASE WHEN is_auto_create_stats_on = 1 THEN 'auto create statistics' ELSE '' END AS [auto create statistics],
       CASE WHEN is_auto_update_stats_on = 1 THEN 'auto update statistics' ELSE '' END AS [auto update statistics],
       CASE WHEN is_in_standby = 1 THEN 'standby' ELSE '' END AS [standby],
       CASE WHEN is_cleanly_shutdown = 1 THEN 'cleanly shutdown' ELSE '' END AS [cleanly shutdown]
FROM sys.databases DB
ORDER BY dbName, [Last backup] DESC, NAME

01/31/2012 | SQL Scripting, Sql Server, T-SQL

SSIS and clustering: What you should do instead

Lots of customers ask about configuring SQL Server Integration Services in a failover cluster. I recommend that you DON’T configure SSIS as a cluster resource. There are almost no benefits to doing so, and you can gain many of the benefits that you want by simple configuration changes. By editing the configuration file for the SSIS service on each node of a cluster, you can manage the packages on any node from any other node. For more information, please see the Books Online topic, Configuring Integration Services in a Clustered Environment.

Microsoft Senior Premier Field Engineer Steve Howard provided these additional details about the recommendations that he makes to customers who ask about clustering. Thanks, Steve, for permission to share:


I agree that restarting a running package automatically would be neat, but this would be different from other cluster-aware technologies (and other failover technologies) that we have. For example, if a failover happens with SQL Server, the queries do not restart, and other jobs running in Agent do not restart. I suppose it would be possible to write a job to check for jobs that did not complete successfully and restart those, then schedule that job to run at startup. That sounds feasible, but I have never done it.

What I’m describing is supported out of the box. It is really the same process that you must go through to manage packages on a standalone machine with multiple instances (or even just one named instance). I find this to be the most common question that customers have when I go onsite. Customers usually just do not understand the function of the SSIS service. When I explain it to them, and we work through it together, they are satisfied. I’ll explain here what I go through with customers and with students in the SSIS workshop.

In the installation folder, they will find the configuration file. For SQL 2005, by default, this path is: C:\Program Files\Microsoft SQL Server\90\DTS\Binn and for SQL 2008, this is C:\Program Files\Microsoft SQL Server\100\DTS\Binn. In either case, the name of the file is MsDtsSrvr.ini.xml. When first installed, this file will look like this:

<?xml version="1.0" encoding="utf-8"?>
<DtsServiceConfiguration xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
  <StopExecutingPackagesOnShutdown>true</StopExecutingPackagesOnShutdown>
  <TopLevelFolders>
    <Folder xsi:type="SqlServerFolder">
      <Name>MSDB</Name>
      <ServerName>.</ServerName>
    </Folder>
    <Folder xsi:type="FileSystemFolder">
      <Name>File System</Name>
      <StorePath>..\Packages</StorePath>
    </Folder>
  </TopLevelFolders>
</DtsServiceConfiguration>

(I just use notepad for this task, but some people prefer to use XML notepad or some XML editor like that.)

In the top level folder, the servername is “.”, which means it will only connect to the default instance on the local machine (local to where the service is running). So when I connect to that SSIS service, I can only see the default instance on the machine where the SSIS service is running. Everything here is relative to where the service is running. (I tell students that it is the center of management for SSIS). I can connect to this machine with Management Studio on any machine, but with this configuration, I will only see the default instance running on the machine where the SSIS service I connected to is running.

If I have multiple instances on this machine, I need to add top-level folders so I can manage all the instances installed on this machine from this SSIS service. (I’ll get to clusters in a moment). Let’s say that I have both a SQL 2005 instance and a SQL 2008 instance on this machine. Then in the SQL 2008 SSIS MsDtsSrvr.ini.xml, I need to set it up to manage these instances. (I cannot manage SQL 2008 instances from SQL 2005 SSIS, so I must configure the SQL 2008 SSIS to be able to manage both from one service.) In that case, I would add the top-level folders with names that let me distinguish among the servers where I am managing packages:

<?xml version="1.0" encoding="utf-8"?>
<DtsServiceConfiguration xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
  <StopExecutingPackagesOnShutdown>true</StopExecutingPackagesOnShutdown>
  <TopLevelFolders>
    <Folder xsi:type="SqlServerFolder">
      <Name>SQL 2008 MSDB</Name>
      <ServerName>.\SQL2K8</ServerName>
    </Folder>
    <Folder xsi:type="SqlServerFolder">
      <Name>SQL 2005 MSDB</Name>
      <ServerName>.</ServerName>
    </Folder>
    <Folder xsi:type="FileSystemFolder">
      <Name>File System</Name>
      <StorePath>..\Packages</StorePath>
    </Folder>
  </TopLevelFolders>
</DtsServiceConfiguration>

So, I have added one folder that is named “SQL 2008 MSDB” and that points to the named instance SQL2k8 on the local machine. The other folder is named “SQL 2005 MSDB” and that points to the default instance on the local machine. When I make this edit, restart the SSIS service so it will read the modified configuration file, then connect to this SSIS instance, I can now see both servers and manage packages on both:

[Screenshot: Object Explorer in Management Studio showing the "SQL 2008 MSDB" and "SQL 2005 MSDB" folders under the connected SSIS service.]

So now, I can see running packages on either server, I can import, export, or manually start the packages. But none of this is really necessary to be able to design, install, or run those packages. The Service is just for convenience for managing this.

So now, let’s take this concept to a cluster. For our cluster, let’s have 4 nodes named Node1, Node2, Node3, and Node4. On this cluster, let’s install 4 instances of SQL Server in 4 separate resource groups. Let’s use the network names net1, net2, net3, and net4, and let’s install instances InstanceA, InstanceB, InstanceC, and InstanceD on those network names respectively, so that the full names of our instances will be net1\InstanceA, net2\InstanceB, net3\InstanceC, and net4\InstanceD. Any of the 4 nodes can host any of the instances in our setup.

To be able to manage packages on any of those instances, you are going to have to modify your config file. To be able to manage packages on all 4 instances from any one machine, we would make modifications like I did above so that the config file will now look like this:

<?xml version="1.0" encoding="utf-8"?>
<DtsServiceConfiguration xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
  <StopExecutingPackagesOnShutdown>true</StopExecutingPackagesOnShutdown>
  <TopLevelFolders>
    <Folder xsi:type="SqlServerFolder">
      <Name>InstanceA MSDB</Name>
      <ServerName>net1\InstanceA</ServerName>
    </Folder>
    <Folder xsi:type="SqlServerFolder">
      <Name>InstanceB MSDB</Name>
      <ServerName>net2\InstanceB</ServerName>
    </Folder>
    <Folder xsi:type="SqlServerFolder">
      <Name>InstanceC MSDB</Name>
      <ServerName>net3\InstanceC</ServerName>
    </Folder>
    <Folder xsi:type="SqlServerFolder">
      <Name>InstanceD MSDB</Name>
      <ServerName>net4\InstanceD</ServerName>
    </Folder>
    <Folder xsi:type="FileSystemFolder">
      <Name>File System</Name>
      <StorePath>..\Packages</StorePath>
    </Folder>
  </TopLevelFolders>
</DtsServiceConfiguration>

So now, whatever machine I put that config file onto will see and be able to manage packages on those 4 machines, just as in the screenshot above, I can see the packages and manage them on those two instances. If I put this on node1, then if I connect to node1, I can manage all of them from that machine. But just having it on one node will be a bit of a pain. So, once I have this configured, and I have tested to make sure it will see all the instances where I want to manage packages, I just copy the MsDtsSrvr.ini.xml file into place on node2, node3, and node4 (if I have installed the SSIS service on those nodes). Now, I can connect to SSIS on any of those nodes.

Most DBAs don’t care what the node names are, but they know the network names of their SQL Server instances very well. In that cluster configuration we described, these network names resolve to IP addresses that move with the SQL Server instance when it fails over. So from Management Studio on the DBA’s workstation, he can connect to the SSIS service on net1 and see all 4 instances on his cluster. If it fails over, and he still wants to connect to SSIS to manage packages on any of the 4 nodes on that cluster, he could connect to net1, and it would connect to the SSIS service running on the node where Net1\InstanceA is now hosted, and he will still see the same thing – he doesn’t know or care that he is now connected to the SSIS service on a different node. If he wanted to, he could even specify the cluster name (instead of any of the SQL network names) and still connect to an SSIS service and still see the same set of folders.

In some environments, the DBA has one server that is his/hers where they set up their management tools. The SSIS configuration that we have allows the DBA to be able to configure the XML file on that one machine to see and manage packages on all instances and machines that they manage by connecting to a single SSIS service. He/she just needs to configure the XML file on that one machine.

Where I see confusion/frustration from customers is that they think of Management Studio as the center of their management tools. With SSIS, it is the SSIS service that is the center of the management tools. Customers, before education, think of the SSIS service as running the packages, but this is not the case. The SSIS service is for management. Management Studio gives them a graphical interface into the service, but the center of management for SSIS is the SSIS service.

If I have one complaint about this, it is that we do not really have a front end for customers so that they don’t have to manually edit the XML files. But really, that XML file is so simple that it is not difficult to edit with tools like Notepad or XML Notepad.

And in that situation, what have we gained if we cluster the SSIS service?


The preceding information is presented here for exactly what it is: the educated opinion of an experienced Microsoft field engineer.

What many corporate customers are really looking for, presumably, is high availability for ETL processes, especially long-running processes. Except for its support for transactions, and its ability to restart from checkpoints after failure, SSIS out of the box doesn’t currently have a complete answer for HA concerns.

01/31/2012 | Cluster Configuration, Sql Server, SSIS

SQL Network Interfaces, error: 26 – Error Locating Server/Instance Specified

Users often see this error message when connecting to a SQL Server instance and don’t know where to start solving the problem. In most forums, people say this is because remote connections are not enabled on the server. This is not exactly correct. Actually, this error message gives very specific information, and the solution is quite simple.

First of all, you get this error message only if you are trying to connect to a SQL Server named instance. For a default instance, you never see this error.

Why?

Because even if we fail at this stage (i.e., locating the server/instance specified), we continue trying to connect using default values, e.g., the default TCP port 1433 or the default pipe name for Named Pipes. You may see other error messages due to a later failure, but not this one.

Every time a client connects to a SQL Server named instance, it sends an SSRP UDP packet to UDP port 1434 on the server machine. This step retrieves the configuration of the SQL instance, e.g., the enabled protocols, TCP port, and pipe name. Without this information, the client does not know how to connect to the server and fails with this specific error message.

In short, we get this error message because the client stack could not receive the SSRP response UDP packet from SQL Browser. It is easy to isolate the issue.

Here are the steps:

1) Make sure your server name is correct, e.g., no typo on the name.

2) Make sure your instance name is correct and there is actually such an instance on your target machine. Some applications convert \\ to \. If you are not sure about your application, please try both Server\Instance and Server\\Instance in your connection string.

3) Make sure the server machine is reachable, e.g., the server name can be resolved by DNS and you are able to ping the server (this is not always possible).

4) Make sure the SQL Browser service is running on the server.

5) If a firewall is enabled on the server, you need to put sqlbrowser.exe and/or UDP port 1434 into the exception list.

Once you have completed these steps, you should not see this error message anymore. You may still fail to connect to your SQL Server, but the error message should be different and you now have a different issue to troubleshoot.

If it still fails, you may replace server\instance with tcp:server\instance and/or np:server\instance in your connection string and see whether it succeeds with either the TCP or the Named Pipes protocol. That way, you can isolate the issue a little further.

There is one corner case where you may still fail after you checked step 1)-4).

It happens when:

a) your server is a named instance on a cluster or on a multi-homed machine, and

b) your client is a machine with the firewall turned on.

I found a good tool online which can be very helpful for isolating issues related to this error message. You can download PortQry from http://support.microsoft.com/kb/832919 and run "portqry.exe -n yourservername -p UDP -e 1434". If this command returns information and it contains your target instance, then you can rule out possibilities 4) and 5) above, meaning you do have SQL Browser running and your firewall does not block the SQL Browser UDP packets. In this case, you can check for other issues, e.g., a wrong connection string.

01/31/2012 | Sql Server

How to Move the OperationsManager Database in Operations Manager 2007

After the initial deployment of Microsoft System Center Operations Manager 2007, you might need to move the Operations Manager database from one Microsoft SQL Server-based computer to another.

SQL Server 2005 and SQL Server 2008 support changing the location of the data files and log files between SQL Server-based computers, between instances on the same SQL Server-based computer, and between different volumes on the same SQL Server-based computer. For more information about using this function in SQL Server, see the SQL Server documentation (http://go.microsoft.com/fwlink/?LinkId=93787).

The high-level steps of moving the OperationsManager database are as follows:

  1. Back up the OperationsManager database.
  2. Uninstall the OperationsManager database.
  3. Delete the Operations Manager database.
  4. Restore the OperationsManager database.
  5. Update management servers with the new database server name.
  6. Update the Operations Manager database with the new database server name.
  7. Update the Operations Manager database logins on the new database server. Ensure that for the root management server, the SDK Account and the Action Account are included in the logins and that they have appropriate permissions. If reporting is installed, ensure that the Data Warehouse Action Account has appropriate permissions.
  8. Set ENABLE_BROKER if needed.
  9. Verify that the move is successful by ensuring that the console is displaying valid data.

OperationsManager Database Relocation

Use the procedure below to move the OperationsManager database to a new server.

To move the OperationsManager database

  1. Install and configure a new SQL Server-based computer. Ensure that you have system administrator permissions on both the original SQL Server-based computer and the new SQL Server-based computers.
  2. Back up the following:
    • Back up all databases. On the current server that hosts the Operations Manager database, use SQL Server Management Studio to back up the Operations Manager (default name) database.
    • Back up the encryption key on the root management server by using the SecureStorageBackup.exe utility.
  3. Stop the Operations Manager services (System Center Management, System Center Data Access, and System Center Management Configuration for root management servers, and System Center Management for management servers) on the management servers in the management group. In a clustered root management server environment, use Cluster Administrator (Windows Server 2003) or Failover Cluster Management (Windows Server 2008) to configure each of the three services listed above with the Take Offline option.
  4. On the current server that hosts the OperationsManager database, uninstall the database component as follows (these steps do not physically remove the OperationsManager database from SQL Server):
    1. Click Start, click Control Panel, and then click Add or Remove Programs for Windows Server 2003 or Programs and Features for Windows Server 2008.
    2. In the Add or Remove Programs dialog box for Windows Server 2003 or Programs and Features dialog box for Windows Server 2008, select System Center Operations Manager 2007 R2, and then select Remove for Windows Server 2003 or select Uninstall for Windows Server 2008.
    3. Complete the wizard
  5.  On the current server that hosts the OperationsManager database, delete the OperationsManager database as follows:
    1. In Microsoft SQL Server Management Studio, navigate to Databases.
    2. Right-click OperationsManager, and then click Delete.
    3. In the Delete Object dialog box, ensure that the Delete backup and restore history information for databases and Close existing connections options are both selected.
    4. Click OK to complete the operation.
  6. On the new server, use Microsoft SQL Server Management Studio to restore the OperationsManager database that you previously backed up. To access the database backup file, copy the backup file to a local drive or map a local drive to the folder that contains the backup file.
  7. Update the registry on each management server in the management group to reference the new SQL Server-based computer. Complete this step also on the root management server. If the root management server is clustered, you must complete this step on all the nodes in the cluster.
    1. Log on to the management server with Administrator permissions.
    2. Click Start, select Run, type regedit in the Open box, and then click OK to start Registry Editor.
    3. Under HKEY_LOCAL_MACHINE\Software\Microsoft\Microsoft Operations Manager\3.0\Setup, double-click the value DatabaseServerName, and then change the value to the hostname of the SQL Server-based computer now hosting the OperationsManager database. If you are using a named instance of SQL Server, be sure to use the ServerName\Instance name format.
    4. Click OK.
    5. Close the Registry Editor.
    6. After you have completed this step on all management servers in the management group, restart the System Center Management, System Center Data Access, and System Center Management Configuration services on the root management server, and then restart only the System Center Management service on the remaining management servers.
  8. Update the OperationsManager database with the new database server name, and ensure that the account that you are logged on with has sufficient privileges on the SQL Server instance. (A scripted T-SQL sketch of steps 8 and 9 appears after this procedure.)
    1. Open SQL Server Management Studio.
    2. Expand Databases, OperationsManager, and Tables.
    3. Right-click dbo.MT_ManagementGroup, and then click Open Table if you are using SQL Server 2005 or click Edit Top 200 Rows if you are using SQL Server 2008.
    4. Change the value in the SQLServerName_6B1D1BE8_EBB4_B425_08DC_2385C5930B04 column to reflect the name of the new SQL Server-based computer.
    5. Save your change.
  9. On the new server hosting the OperationsManager database, add the correct permission for the login of the root management server on which the SDK Account is running, as follows:
    1. Open Microsoft SQL Server Management Studio, and in the Object Explorer pane, navigate to Security and then expand Logins.
    2. Locate the SDK Account, and add the account if it is not listed.
    3. Right-click the SDK Account, and select Properties.
    4. In the Login Properties dialog box, in the Select a page pane, select User Mapping.
    5. In the Users mapped to this login list, in the Map column, select the box that corresponds to OperationsManager (default name).
    6. In the Database role membership for: OperationsManager list, ensure that the following items are selected: configsvc_users, db_datareader, db_datawriter, db_ddladmin, and sdk_users.
    7. Click OK to save your changes and to close the Login Properties dialog box.
  10. On the new server hosting the Operations Manager database, add the correct permission for the login of the root management server on which the Action Account is running, as follows:
    1. Open Microsoft SQL Server Management Studio, and in the Object Explorer pane, navigate to Security and then expand Logins.
    2. Locate the Action Account, and add the account if it is not listed. If the Action Account is running as LocalSystem, use the format <domain\computername$> in SQL Logins, where <computername> is the name of the root management server.
    3. Right-click the Action Account, and select Properties.
    4. In the Login Properties dialog box, in the Select a page pane, select User Mapping.
    5. In the Users mapped to this login list, in the Map column, select the box that corresponds to OperationsManager (default name).
    6. In the Database role membership for: OperationsManager list, ensure that the following items are selected: db_datareader, db_datawriter, db_ddladmin, and dbmodule_users.
    7. Click OK to save your changes and to close the Login Properties dialog box.
  11. On the new server hosting the Operations Manager database, add the correct permission for the login of the Data Warehouse server on which the Data Warehouse Action Account is running, as follows:
    1. Open Microsoft SQL Server Management Studio, and in the Object Explorer pane, navigate to Security and then expand Logins.
    2. Locate the Data Warehouse Action Account, and add the account if it is not listed.
    3. Right-click the Data Warehouse Action Account, and select Properties.
    4. In the Login Properties dialog box, in the Select a page pane, select User Mapping.
    5. In the Users mapped to this login list, in the Map column, select the box that corresponds to OperationsManager (default name).
    6. In the Database role membership for: OperationsManager list, ensure that the following items are selected: db_datareader and dwsynch_users.
    7. Click OK to save your changes and to close the Login Properties dialog box.            
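For administrators who prefer to script these changes, here is a minimal T-SQL sketch of steps 8 and 9 above. The new server name and the SDK account name (NewSqlServer\Instance and DOMAIN\OpsMgrSDK) are placeholders for illustration only; substitute your own values, and skip the CREATE statements if the login or user already exists in the restored database.

USE OperationsManager
GO
-- Step 8: point the management group at the new database server
UPDATE dbo.MT_ManagementGroup
SET SQLServerName_6B1D1BE8_EBB4_B425_08DC_2385C5930B04 = 'NewSqlServer\Instance'
GO
-- Step 9: map the SDK Account into the database and grant the required roles
CREATE LOGIN [DOMAIN\OpsMgrSDK] FROM WINDOWS   -- skip if the login already exists
CREATE USER [DOMAIN\OpsMgrSDK] FOR LOGIN [DOMAIN\OpsMgrSDK]
EXEC sp_addrolemember 'configsvc_users', 'DOMAIN\OpsMgrSDK'
EXEC sp_addrolemember 'db_datareader', 'DOMAIN\OpsMgrSDK'
EXEC sp_addrolemember 'db_datawriter', 'DOMAIN\OpsMgrSDK'
EXEC sp_addrolemember 'db_ddladmin', 'DOMAIN\OpsMgrSDK'
EXEC sp_addrolemember 'sdk_users', 'DOMAIN\OpsMgrSDK'
GO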

Set ENABLE_BROKER

Before you can run tasks and use the Discovery Wizard to install agents, you need to set the ENABLE_BROKER value.

After moving the Operations Manager database, the status of the Sql Broker Availability Monitor might be set to Critical or Sql Broker is disabled. You can check the state of the Sql Broker Availability Monitor by running the following SQL query:

SELECT is_broker_enabled FROM sys.databases WHERE name='OperationsManager'

Here 'OperationsManager' is the default database name; replace this name as appropriate.

If the query result is ‘0’, the Sql Broker is disabled and you must re-enable it using the following procedure.

To set ENABLE_BROKER

  1. Open SQL Server Management Studio.
  2. In the Connect to Server dialog box, select the appropriate values in the Server type list, Server name list, and Authentication list, and then click Connect.
  3. Click New Query.
  4. In the query window, enter the following query: ALTER DATABASE OperationsManager SET SINGLE_USER WITH ROLLBACK IMMEDIATE
  5. Click Execute.
  6. Enter the following query: ALTER DATABASE OperationsManager SET ENABLE_BROKER
  7. Click Execute.
  8. Close SQL Server Management Studio.
  9. Open SQL Server Management Studio.
  10. In the Connect to Server dialog box, select the appropriate values in the Server type list, Server name list, and Authentication list, and then click Connect.
  11. Click New Query.
  12. In the query window, enter the following query: ALTER DATABASE OperationsManager SET MULTI_USER
  13. Click Execute.

You can verify that ENABLE_BROKER is set to 1 by using this SQL query: SELECT is_broker_enabled FROM sys.databases WHERE name='OperationsManager'.
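If you prefer to run the whole procedure as a single script, the same statements can be combined as follows (again assuming the default database name OperationsManager):

-- Take the database to single-user mode, enable the broker, then return to multi-user mode
ALTER DATABASE OperationsManager SET SINGLE_USER WITH ROLLBACK IMMEDIATE
ALTER DATABASE OperationsManager SET ENABLE_BROKER
ALTER DATABASE OperationsManager SET MULTI_USER
-- Verify the setting (should return 1)
SELECT is_broker_enabled FROM sys.databases WHERE name = 'OperationsManager'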

01/31/2012 | SCOM, Sql Server
