GAPTHEGURU

Geek with special skills

SSMS Timeout Expired Error When Making Changes To a Table

Problem

I was required to add a new integer column with an identity property to an existing table through the SSMS designer. The operation failed and generated a timeout message. The table was large and I expected the task to take some time, but I didn't anticipate a timeout issue. In this tip we look at how to set this timeout value, as well as other timeout values that can be set for SQL Server.

Solution

To simulate the timeout error, let’s create a table with 500,000 rows. We will use this for the examples.

-- Create the table for the demo
IF EXISTS (SELECT * FROM sys.objects WHERE name = 'TestingTimeOut')
    DROP TABLE TestingTimeOut
GO
CREATE TABLE TestingTimeOut (EmpName varchar(75), Designation varchar(50), Department varchar(50))
GO

-- Populate the table with 500,000 rows
-- Repeating a batch by putting a number after GO is an SSMS shortcut, not ANSI standard,
-- and should not be used in a production environment
INSERT INTO TestingTimeOut VALUES ('Atif Shehzad', 'DBA', 'Human Resource')
GO 500000
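
If you would rather not run half a million separate batches, a set-based alternative loads the table in a single statement. This is just a sketch: the cross join of sys.all_columns is only a convenient way to generate at least 500,000 rows.

-- Set-based alternative: one INSERT instead of 500,000 batch executions
INSERT INTO TestingTimeOut (EmpName, Designation, Department)
SELECT TOP (500000) 'Atif Shehzad', 'DBA', 'Human Resource'
FROM sys.all_columns AS a
CROSS JOIN sys.all_columns AS b
GO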

The Designer transaction timeout applies only when SQL Server object modifications are performed through the SSMS Designer. It is not a server-level parameter: it can be viewed or modified on the 'Designers' page of the SSMS Options dialog, its value can differ for each SSMS client, and it is not stored as a server-level configuration.

To check or change this setting, within SSMS go to Tools > Options and click on the Designers link in the left panel as shown below.

[Screenshot: Time-out setting for SSMS designers]

The default value is 30 seconds, but for the timeout simulation let’s change it to 1 second and save the changes.

In SSMS, find the table that we just created, right-click and select Design. Add an integer column EmpID as a primary key with an identity property and try to save the modifications. Since we changed the timeout to 1 second, this should cause a timeout error as shown below. If the error is not generated, you may need to increase the number of rows in the demo table and try again. (Note: if you get the error "Saving changes is not permitted", then check out this tip.)

[Screenshot: SSMS Designer "timeout expired" error for the save operation]

This is the T-SQL that SSMS generates and is trying to execute.  We can see here that it is just doing an ALTER TABLE with the ADD option.

/* To prevent any potential data loss issues, you should review this script in detail before running it outside the context of the database designer.*/
BEGIN TRANSACTION
SET QUOTED_IDENTIFIER ON
SET ARITHABORT ON
SET NUMERIC_ROUNDABORT OFF
SET CONCAT_NULL_YIELDS_NULL ON
SET ANSI_NULLS ON
SET ANSI_PADDING ON
SET ANSI_WARNINGS ON
COMMIT
BEGIN TRANSACTION
GO
ALTER TABLE dbo.TestingTimeOut ADD
 EmpID int NOT NULL IDENTITY (1, 1)
GO
ALTER TABLE dbo.TestingTimeOut ADD CONSTRAINT
 PK_TestingTimeOut PRIMARY KEY CLUSTERED 
 (
 EmpID
 ) WITH( STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
GO
ALTER TABLE dbo.TestingTimeOut SET (LOCK_ESCALATION = TABLE)
GO
COMMIT

If we try to add EmpID as the first column, SSMS generates a pre-save warning that the operation will take some time, as shown below.

This is the T-SQL that SSMS generates and we can see that this is a much more intensive script that drops and recreates the table.

/* To prevent any potential data loss issues, you should review this script in detail before running it outside the context of the database designer.*/
BEGIN TRANSACTION
SET QUOTED_IDENTIFIER ON
SET ARITHABORT ON
SET NUMERIC_ROUNDABORT OFF
SET CONCAT_NULL_YIELDS_NULL ON
SET ANSI_NULLS ON
SET ANSI_PADDING ON
SET ANSI_WARNINGS ON
COMMIT
BEGIN TRANSACTION
GO
CREATE TABLE dbo.Tmp_TestingTimeOut
 (
 EmpID int NOT NULL IDENTITY (1, 1),
 EmpName varchar(75) NULL,
 Designation varchar(50) NULL,
 Department varchar(50) NULL
 )  ON [PRIMARY]
GO
ALTER TABLE dbo.Tmp_TestingTimeOut SET (LOCK_ESCALATION = TABLE)
GO
SET IDENTITY_INSERT dbo.Tmp_TestingTimeOut OFF
GO
IF EXISTS(SELECT * FROM dbo.TestingTimeOut)
  EXEC('INSERT INTO dbo.Tmp_TestingTimeOut (EmpName, Designation, Department)
  SELECT EmpName, Designation, Department FROM dbo.TestingTimeOut WITH (HOLDLOCK TABLOCKX)')
GO
DROP TABLE dbo.TestingTimeOut
GO
EXECUTE sp_rename N'dbo.Tmp_TestingTimeOut', N'TestingTimeOut', 'OBJECT' 
GO
ALTER TABLE dbo.TestingTimeOut ADD CONSTRAINT
 PK_TestingTimeOut PRIMARY KEY CLUSTERED 
 (
 EmpID
 ) WITH( STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
GO
COMMIT

Summary

So if you face these timeout issues when using SSMS to make changes, you now know the exact parameter to adjust: increase the timeout limit for the Designer. The new limit applies to any server you connect to from that SSMS client.
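
Another option, rather than raising the Designer timeout, is to skip the Designer entirely and run the generated T-SQL yourself in a query window, which by default has no execution timeout. For the simple case above, that is just the statements SSMS produced:

-- Adding the column from a query window avoids the Designer timeout altogether
ALTER TABLE dbo.TestingTimeOut ADD
    EmpID int NOT NULL IDENTITY (1, 1)
GO
ALTER TABLE dbo.TestingTimeOut ADD CONSTRAINT
    PK_TestingTimeOut PRIMARY KEY CLUSTERED (EmpID)
GO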


Posted 05/23/2012 in SQL Scripting, SQL Server, T-SQL

Upgrading SQL Server databases and changing compatibility levels


Problem
When upgrading databases from an older version of SQL Server using either the backup-and-restore method or the detach-and-attach method, the compatibility level does not automatically change, and therefore your databases still act as though they are running on an earlier version of SQL Server.  From an overall standpoint this is not a major problem, but there are certain features that you will not be able to take advantage of unless your database compatibility level is changed.  This tip will show you how to check the current compatibility level, how to change the compatibility level, and also some of the differences between earlier versions and SQL Server 2005.

Solution
The first thing that you need to do is check the compatibility level that your database is running under.  As mentioned above, any database upgraded using the backup-and-restore or detach-and-attach method will not change the compatibility level automatically, so you will need to check each database and make the change.

Although the product naming convention has changed to SQL Server 2000, 2005 and the soon-to-be-released 2008, the internal version numbers still remain.  Here is a list of the compatibility levels (versions) that you will see:

  • 60 = SQL Server 6.0
  • 65 = SQL Server 6.5
  • 70 = SQL Server 7.0
  • 80 = SQL Server 2000
  • 90 = SQL Server 2005

Identifying the Compatibility Level

To check the compatibility level of your databases you can use one of these methods:

Using SQL Server Management Studio, right click on the database, select “Properties” and look at the “Options” page for each database as the following image shows:

Another option is to use sp_helpdb so you can get the information for all databases at once:

EXEC sp_helpdb

Or select directly from the sys.databases catalog to get the information for all databases.

SELECT * FROM sys.databases
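
Since SELECT * returns far more columns than you need, a narrower query against the compatibility_level column (available in sys.databases on SQL Server 2005 and later) is easier to read:

-- Compatibility level for every database on the instance
SELECT name, compatibility_level
FROM sys.databases
ORDER BY name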


Compatibility Level for New Databases

When issuing a CREATE DATABASE statement there is not a way to select which compatibility level you want to use.  The compatibility level that is used is the compatibility level of your model database.

Here is a sample CREATE DATABASE command, but there is not an option to change the compatibility level.

CREATE DATABASE [test] ON PRIMARY
( NAME = N'test', FILENAME = N'Z:\SQLData\test.mdf' , SIZE = 2048KB , FILEGROWTH = 1024KB )
LOG ON
( NAME = N'test_log', FILENAME = N'Y:\SQLData\test3_log.ldf' , SIZE = 3072KB , FILEGROWTH = 10%)
GO

When creating a database using SQL Server Management Studio you have the ability to change the compatibility level on the “Options” tab such as follows:

If we use the “Script” option we can see that SQL Server issues the CREATE DATABASE statement and then issues “sp_dbcmptlevel” to set the database compatibility level to 80 as shown below.

CREATE DATABASE [test] ON PRIMARY
( NAME = N'test', FILENAME = N'Z:\SQLData\test.mdf' , SIZE = 2048KB , FILEGROWTH = 1024KB )
LOG ON
( NAME = N'test_log', FILENAME = N'Y:\SQLData\test3_log.ldf' , SIZE = 3072KB , FILEGROWTH = 10%)
GO
EXEC dbo.sp_dbcmptlevel @dbname = N'test', @new_cmptlevel = 80
GO

Changing Compatibility Level

So once you have identified the compatibility level of your database and know what you want to change it to you can use the sp_dbcmptlevel system stored procedure to make the change.  The command has the following syntax:

sp_dbcmptlevel [ [ @dbname = ] name ]
    [ , [ @new_cmptlevel = ] version ]

-- to change to level 80
EXEC dbo.sp_dbcmptlevel @dbname = N'test', @new_cmptlevel = 80

-- to change to level 90
EXEC dbo.sp_dbcmptlevel @dbname = N'test', @new_cmptlevel = 90

-- or
EXEC sp_dbcmptlevel 'test', 80

EXEC sp_dbcmptlevel 'test', 90
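
As a side note, sp_dbcmptlevel is deprecated beginning with SQL Server 2008; from that version on, the supported way to make the same change is ALTER DATABASE:

-- SQL Server 2008 and later
ALTER DATABASE [test] SET COMPATIBILITY_LEVEL = 90
GO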


Differences

There are several differences in how compatibility levels affect your database operations.  SQL Server Books Online has a list of these differences, and the following shows a few of the items:

  • Locking hints: at level 80 or earlier, the WITH keyword in the FROM clause is always optional; at level 90, with some exceptions, table hints are supported in the FROM clause only when the hints are specified with the WITH keyword (for more information, see FROM (Transact-SQL)). Possibility of impact: High
  • Outer joins: at level 80 or earlier, the *= and =* operators are supported with a warning message; at level 90, these operators are not supported and the OUTER JOIN keyword should be used. Possibility of impact: High
  • Triggers: at level 80 or earlier, SET XACT_ABORT OFF is allowed inside a trigger; at level 90, it is not. Possibility of impact: Medium

(Source: SQL Server 2005 Books Online)  For a complete list of these items look here:

In addition, each new compatibility level offers a new list of reserved keywords.  Here is a list of the new keywords for SQL Server 2005.

  • Level 90: PIVOT, UNPIVOT, REVERT, TABLESAMPLE
  • Level 80: COLLATE, FUNCTION, OPENXML
  • Level 70: BACKUP, CONTAINS, CONTAINSTABLE, DENY, FREETEXT, FREETEXTTABLE, PERCENT, RESTORE, ROWGUIDCOL, TOP
  • Level 65: AUTHORIZATION, CASCADE, CROSS, DISTRIBUTED, ESCAPE, FULL, INNER, JOIN, LEFT, OUTER, PRIVILEGES, RESTRICT, RIGHT, SCHEMA, WORK

(Source: SQL Server 2005 Books Online)

If one of these keywords is in use and your database is set to the corresponding compatibility level, those commands will fail.  To get around this you can put the keyword in square brackets ([ ]) or in quotation marks (" "), such as [PIVOT] or "PIVOT".
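
For example, suppose an old database has a column named PIVOT (the table and column here are hypothetical, for illustration). After moving to compatibility level 90, queries keep working as long as the identifier is delimited:

-- PIVOT is reserved at level 90; delimit the identifier to keep old code working
SELECT [PIVOT] FROM dbo.LegacyTable    -- square brackets
SELECT "PIVOT" FROM dbo.LegacyTable    -- quotation marks (requires SET QUOTED_IDENTIFIER ON)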


Summary

The compatibility level setting is used by SQL Server to determine how certain new features should be handled.  It was set up so you could migrate your databases to a later release of SQL Server without having to worry about the application breaking.  The setting can be changed forward and backward as needed, so if you change your compatibility level and find problems, you can set the value back again until you resolve all of the issues you face during the upgrade.

In addition, certain features only work if the database is set to the latest compatibility level, so to get all of the benefits of the version of SQL Server you are running, you need to make sure you are using the latest compatibility level.

Posted 04/10/2012 in SQL Scripting, SQL Server, T-SQL

A guide to recover a database from SUSPECT mode

One of the worst situations I can imagine for a database professional is getting a call reporting that a production database is in a "Suspect" state and the business cannot continue. This is a "code red" situation where the DBA needs to bring the database online as soon as possible. In this article, I will lay out a couple of steps which may be used as a high-level process for handling this situation.

The first step is always to inform your customer about the outage before they come to you. I find that most of the time this proactive step becomes a lifesaver: you will not be questioned about the time you spend bringing the database online.

Then refer to the SQL Server error log to find the root cause of the issue; it will give you a clear reason why the database is in Suspect mode. From SQL Server 2005 onwards, I recommend applying a filter while viewing the log, using the database name as the filter criteria, so that you only see the entries related to that particular database and can refer to the latest ones for the root cause. Up through SQL Server 2000, the only option is to read the log backwards from the latest entry. Assuming you have found why the database is in Suspect mode, you can now take the appropriate step to fix the issue.
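
As a quick first check (on SQL Server 2005 and later), you can also confirm which databases are in trouble straight from the catalog:

-- List any database that is not ONLINE (SUSPECT, EMERGENCY, RECOVERY_PENDING, ...)
SELECT name, state_desc
FROM sys.databases
WHERE state_desc <> 'ONLINE'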

Here I will discuss some of the possible issues which can put a database in Suspect mode and recommend appropriate resolution.

Possibility 1: Someone deleted or misplaced a data/log file while SQL Server was offline, and the database cannot start because of the missing file. The solution is to put the missing data/log file back in the proper location; the SQL Server error log message will give you the exact name and path of the missing file. Once you place the file, execute the command below (the database name is a placeholder) to bring your database online with no data loss.

RESTORE DATABASE <database_name> WITH RECOVERY

Possibility 2: SQL Server could not access, or place an exclusive lock on, the data or log file while coming online. I typically see this when SQL Server shares the machine with some other tool (such as antivirus software) that puts an exclusive lock on the data/log file. To resolve it, use Process Explorer to kill the file handle that holds the lock on the file; you may want to involve your system admins for this step. Then execute the command below and you will have your database online with no data loss:

RESTORE DATABASE <database_name> WITH RECOVERY

Possibility 3: This is the worst-case scenario: the database is in Suspect mode because of a corrupted transaction. This is bad news, as you may have to lose data at this point unless you have a good backup! It is also the most common cause I have seen for an OLTP database being put in Suspect mode.

This situation actually stems from a guarantee SQL Server makes to ensure transactional consistency under the fundamental ACID properties of an RDBMS. Most likely, SQL Server went down or was restarted abruptly in the middle of a transaction, and while coming back up it could not complete (commit or roll back) that transaction.

At this point, I recommend you make a decision. If you have a good backup and can restore the database in an acceptable time, up to an acceptable point, then go ahead and restore; a minimal restore sequence is sketched below. If restore is not an option, you will have to execute the emergency-repair steps that follow.
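
This is only a sketch; the database name and backup file paths are placeholders for your own:

-- Restore the last full backup without recovery, apply the last log backup, then recover
RESTORE DATABASE YourDB FROM DISK = N'D:\Backups\YourDB_full.bak' WITH NORECOVERY
RESTORE LOG YourDB FROM DISK = N'D:\Backups\YourDB_log.trn' WITH RECOVERY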

Caution! The steps below will cause you to lose data and hence are extremely dangerous to execute. I recommend trying all other possible options, including calling Microsoft Support, before executing them.

1. Switch on Emergency mode for the database using the command below (the database name is a placeholder):

ALTER DATABASE <database_name> SET EMERGENCY;

2. Then execute the command below:

DBCC CHECKDB ('<database_name>', REPAIR_ALLOW_DATA_LOSS)

Please note that, as stated above, this is an extremely dangerous command to execute. It is a one-way command (you cannot roll back the execution once you have started it) which can cause loss of data or database integrity. Technically, by executing this command you are authorizing SQL Server to force transactional recovery to run and skip the errors. If it encounters errors, this operation scavenges as much out of the transaction log as it can and then rebuilds the transaction log from scratch. Taking this step really is the last resort, to be tried only after every other attempt fails.
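
One practical note: DBCC CHECKDB will not run repairs unless the database is in single-user mode, so in practice the full sequence looks something like the sketch below (again, the database name is a placeholder):

-- Full emergency-repair sequence; irreversible and may lose data
ALTER DATABASE YourDB SET EMERGENCY
ALTER DATABASE YourDB SET SINGLE_USER WITH ROLLBACK IMMEDIATE
DBCC CHECKDB ('YourDB', REPAIR_ALLOW_DATA_LOSS)
ALTER DATABASE YourDB SET MULTI_USER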

After this operation is complete you will have your database back online. However, you will most likely have lost a bunch of data, broken constraints and inherent business logic in the database, but at least you haven't lost everything. You may want to ask your customers to run a sanity check on the data quality at this point.

Possibility 4: If you find that your data file itself is corrupted, then most likely you have an OS or hardware level failure. For this type of failure, or anything really weird that you cannot fix easily within an acceptable amount of time, your best bet is to restore from backup without wasting time; remember, this is a "Code Red" situation.

So in this article, I covered some possible reasons a database can be put in "Suspect" mode and the options for getting back in business in the shortest possible time. To conclude, I would like to add the basic homework tips below so that you can be in good shape when handling this situation:

1. Always ensure that you have a good backup/DR strategy and that your customer agrees on the risk of possible data/time loss in case of disaster.
2. Never attempt to "detach" a database which is in Suspect mode. This will accomplish nothing, and you will probably lose your ability to repair it.
3. Read and understand the error log before taking any action, and do not panic. Nothing in computer science is "unexplainable", so if you keep calm and apply intelligence, you will come out of any situation with honor.

Posted 04/10/2012 in SQL Scripting, SQL Server, T-SQL

SSIS and clustering: What you should do instead

Lots of customers ask about configuring SQL Server Integration Services in a failover cluster. I recommend that you DON’T configure SSIS as a cluster resource. There are almost no benefits to doing so, and you can gain many of the benefits that you want by simple configuration changes. By editing the configuration file for the SSIS service on each node of a cluster, you can manage the packages on any node from any other node. For more information, please see the Books Online topic, Configuring Integration Services in a Clustered Environment.

Microsoft Senior Premier Field Engineer Steve Howard provided these additional details about the recommendations that he makes to customers who ask about clustering. Thanks, Steve, for permission to share:


I agree that restarting a running package automatically would be neat, but this would be different from the other cluster-aware technologies (and other failover technologies) that we have. For example, if a failover happens with SQL Server, queries do not restart, and other jobs running in Agent do not restart. I suppose it would be possible to write a job that checks for jobs that did not complete successfully and restarts them, then schedule that job to run at startup. That sounds feasible, but I have never done it.

What I'm describing is supported out of the box. It is really the same process that you must go through to manage packages on a standalone machine with multiple instances (or even just one named instance). I find this to be the most common question customers have when I go onsite. Customers usually just do not understand the function of the SSIS service; when I explain it to them and we work through it together, they are satisfied. I'll explain here what I go through with customers, and with students in the SSIS workshop.

In the installation folder, they will find the configuration file. For SQL 2005 the default path is C:\Program Files\Microsoft SQL Server\90\DTS\Binn, and for SQL 2008 it is C:\Program Files\Microsoft SQL Server\100\DTS\Binn. In either case, the name of the file is MsDtsSrvr.ini.xml. When first installed, this file will look like this:

<?xml version="1.0" encoding="utf-8"?>
<DtsServiceConfiguration xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
  <StopExecutingPackagesOnShutdown>true</StopExecutingPackagesOnShutdown>
  <TopLevelFolders>
    <Folder xsi:type="SqlServerFolder">
      <Name>MSDB</Name>
      <ServerName>.</ServerName>
    </Folder>
    <Folder xsi:type="FileSystemFolder">
      <Name>File System</Name>
      <StorePath>..\Packages</StorePath>
    </Folder>
  </TopLevelFolders>
</DtsServiceConfiguration>

(I just use Notepad for this task, but some people prefer XML Notepad or another XML editor.)

In the top-level folder, the server name is ".", which means it will only connect to the default instance on the local machine (local to where the service is running). So when I connect to that SSIS service, I can only see the default instance on the machine where the SSIS service is running. Everything here is relative to where the service is running (I tell students that it is the center of management for SSIS). I can connect to this machine with Management Studio from any machine, but with this configuration I will only see the default instance running on the machine where the SSIS service I connected to is running.

If I have multiple instances on this machine, I need to add top-level folders so I can manage all the instances installed on this machine from this SSIS service. (I'll get to clusters in a moment.) Let's say that I have both a SQL 2005 instance and a SQL 2008 instance on this machine. Then in the SQL 2008 SSIS MsDtsSrvr.ini.xml, I need to set it up to manage both instances. (I cannot manage SQL 2008 instances from SQL 2005 SSIS, so I must configure the SQL 2008 SSIS to be able to manage both from one service.) In that case, I would add top-level folders with names that let me distinguish among the servers where I am managing packages:

<?xml version="1.0" encoding="utf-8"?>
<DtsServiceConfiguration xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
  <StopExecutingPackagesOnShutdown>true</StopExecutingPackagesOnShutdown>
  <TopLevelFolders>
    <Folder xsi:type="SqlServerFolder">
      <Name>SQL 2008 MSDB</Name>
      <ServerName>.\SQL2K8</ServerName>
    </Folder>
    <Folder xsi:type="SqlServerFolder">
      <Name>SQL 2005 MSDB</Name>
      <ServerName>.</ServerName>
    </Folder>
    <Folder xsi:type="FileSystemFolder">
      <Name>File System</Name>
      <StorePath>..\Packages</StorePath>
    </Folder>
  </TopLevelFolders>
</DtsServiceConfiguration>

So I have added one folder named "SQL 2008 MSDB" that points to the named instance SQL2K8 on the local machine, and another folder named "SQL 2005 MSDB" that points to the default instance on the local machine. When I make this edit, restart the SSIS service so it reads the modified configuration file, and then connect to this SSIS instance, I can see both servers and manage packages on both:

[Screenshot: both MSDB folders visible under the connected SSIS service in Management Studio]

So now I can see running packages on either server, and I can import, export, or manually start packages. But none of this is really necessary to design, install, or run those packages; the service is just a convenience for managing them.

So now, let's take this concept to a cluster. For our cluster, let's have 4 nodes named Node1, Node2, Node3, and Node4. On this cluster, let's install 4 instances of SQL Server in 4 separate resource groups. Let's use the network names Net1, Net2, Net3, and Net4, and install instances InstanceA, InstanceB, InstanceC, and InstanceD on those network names respectively, so that the full names of our instances are net1\InstanceA, net2\InstanceB, net3\InstanceC, and net4\InstanceD. Any of the 4 nodes can host any of the instances in our setup.

To be able to manage packages on any of those instances, you are going to have to modify your config file. To manage packages on all 4 instances from any one machine, we make modifications like those above, so that the config file now looks like this:

<?xml version="1.0" encoding="utf-8"?>
<DtsServiceConfiguration xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
  <StopExecutingPackagesOnShutdown>true</StopExecutingPackagesOnShutdown>
  <TopLevelFolders>
    <Folder xsi:type="SqlServerFolder">
      <Name>InstanceA MSDB</Name>
      <ServerName>net1\InstanceA</ServerName>
    </Folder>
    <Folder xsi:type="SqlServerFolder">
      <Name>InstanceB MSDB</Name>
      <ServerName>net2\InstanceB</ServerName>
    </Folder>
    <Folder xsi:type="SqlServerFolder">
      <Name>InstanceC MSDB</Name>
      <ServerName>net3\InstanceC</ServerName>
    </Folder>
    <Folder xsi:type="SqlServerFolder">
      <Name>InstanceD MSDB</Name>
      <ServerName>net4\InstanceD</ServerName>
    </Folder>
    <Folder xsi:type="FileSystemFolder">
      <Name>File System</Name>
      <StorePath>..\Packages</StorePath>
    </Folder>
  </TopLevelFolders>
</DtsServiceConfiguration>

So now, whatever machine I put that config file onto will see and be able to manage packages on those 4 instances, just as in the screenshot above I could see and manage the packages on two instances. If I put this on Node1, then when I connect to Node1 I can manage all of them from that machine. But having it on just one node will be a bit of a pain. So once I have this configured, and have tested that it sees all the instances where I want to manage packages, I just copy the MsDtsSrvr.ini.xml file into place on Node2, Node3, and Node4 (if I have installed the SSIS service on those nodes). Now I can connect to SSIS on any of those nodes.

Most DBAs don't care what the node names are, but they know the network names of their SQL Server instances very well. In the cluster configuration we described, these network names resolve to IP addresses that move with the SQL Server instance when it fails over. So from Management Studio on his workstation, the DBA can connect to the SSIS service on Net1 and see all 4 instances on the cluster. After a failover, if he still wants to connect to SSIS to manage packages on any of the 4 instances on that cluster, he can still connect to Net1; that connects to the SSIS service running on the node where Net1\InstanceA is now hosted, and he will still see the same thing. He doesn't know or care that he is now connected to the SSIS service on a different node. If he wanted to, he could even specify the cluster name (instead of any of the SQL network names) and still connect to an SSIS service and still see the same set of folders.

In some environments, the DBA has one server of his or her own where the management tools are set up. The SSIS configuration we have described allows the DBA to configure the XML file on that one machine to see and manage packages on all the instances and machines they manage, by connecting to a single SSIS service.

Where I see confusion and frustration from customers is that they think of Management Studio as the center of their management tools. With SSIS, it is the SSIS service that is the center of the management tools. Customers, before education, think of the SSIS service as running the packages, but this is not the case. The SSIS service is for management; Management Studio gives them a graphical interface into the service, but the center of management for SSIS is the SSIS service.

If I have one complaint about this, it is that we do not really have a front end for customers so that they don’t have to manually edit the XML files. But really, that XML file is so simple that it is not difficult to edit with tools like Notepad or XML Notepad.

And in that situation, what have we gained if we cluster the SSIS service?


The preceding information is presented here for exactly what it is: the educated opinion of an experienced Microsoft field engineer.

What many corporate customers are really looking for, presumably, is high availability for ETL processes, especially long-running processes. Except for its support for transactions, and its ability to restart from checkpoints after failure, SSIS out of the box doesn’t currently have a complete answer for HA concerns.

Posted 01/31/2012 in Cluster Configuration, SQL Server, SSIS
