GAPTHEGURU

Geek with special skills

SQL Server 2014: Unlocking Real-Time Insights

http://blogs.technet.com/b/dataplatforminsider/archive/2013/06/03/sql-server-2014-unlocking-real-time-insights.aspx

Today at TechEd North America we announced a wave of products and services that will help customers embrace the “enterprise cloud era.” The next version of our data platform – SQL Server 2014 – is a key part of the day’s news. Designed and developed with our cloud-first principles in mind, it delivers built-in in-memory capabilities, enables new hybrid cloud scenarios, and provides even faster data insights.

The journey to SQL Server 2014

It’s been a mere 14 months since we shipped SQL Server 2012 and in that time we’ve advanced our data platform portfolio with a preview of Windows Azure HDInsight, customer successes on Windows Azure SQL Database and releases of PowerPivot and Power View for Office 2013, project codename “Data Explorer” in Excel, Parallel Data Warehouse 2.0 and so much more. It’s been an incredibly busy time building a modern data platform that empowers businesses to unlock real-time insights from big data.

The Evolution of SQL Server

06/05/2013 Posted by | Sql Server

Some of my favorite SQL scripts

-- SQL Server 2008/2008R2/(2012) Diagnostic Information Queries -- Geir Atle Paulsen

-- SQL and OS Version information for current instance

SELECT @@VERSION AS [SQL Version Info];

_________________________________________________________________________________________________

-- Hardware information from SQL Server 2008
-- (Cannot distinguish between HT and multi-core)

SELECT cpu_count AS [Logical CPU Count],
       hyperthread_ratio AS [Hyperthread Ratio],
       cpu_count/hyperthread_ratio AS [Physical CPU Count],
       physical_memory_in_bytes/1048576 AS [Physical Memory (MB)],
       sqlserver_start_time
FROM sys.dm_os_sys_info;

_________________________________________________________________________________________________

-- Get sp_configure values for instance

EXEC sp_configure 'Show Advanced Options', 1;
GO
RECONFIGURE;
GO
EXEC sp_configure;

-- Focus on
-- backup compression default
-- clr enabled
-- lightweight pooling (should be zero)
-- max degree of parallelism
-- max server memory (MB)
-- optimize for ad hoc workloads
-- priority boost (should be zero)
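If one of these settings needs adjusting, the pattern below is a minimal sketch; 'max degree of parallelism' and the value 4 are only illustrative, so substitute the option and value that fit your hardware and workload.

-- Illustrative only: review the current value before changing anything
EXEC sp_configure 'max degree of parallelism';
EXEC sp_configure 'max degree of parallelism', 4; -- 4 is a placeholder value
RECONFIGURE;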

_________________________________________________________________________________________________

-- File Names and Paths for all databases in instance

SELECT [dbid], fileid, [filename]
FROM sys.sysaltfiles;

-- Things to look at
-- Are data files and log files on different drives?
-- Is everything on C: drive?
-- Is TempDB on dedicated drives?
-- Are there multiple data files?

_________________________________________________________________________________________________

-- Recovery model, log reuse wait description, and compatibility level for all databases on instance

SELECT [name], recovery_model_desc, log_reuse_wait_desc, [compatibility_level]
FROM sys.databases;

-- Things to look at
-- How many databases are on the instance?
-- What recovery models are they using?
-- What is the log reuse wait description?
-- What compatibility level are they on?

_________________________________________________________________________________________________

-- Clear Wait Stats
DBCC SQLPERF('sys.dm_os_wait_stats', CLEAR);

 

_________________________________________________________________________________________________

-- Isolate top waits for server instance since last restart or statistics clear

WITH Waits AS
(SELECT wait_type, wait_time_ms / 1000. AS wait_time_s,
        100. * wait_time_ms / SUM(wait_time_ms) OVER() AS pct,
        ROW_NUMBER() OVER(ORDER BY wait_time_ms DESC) AS rn
 FROM sys.dm_os_wait_stats
 WHERE wait_type NOT IN ('SLEEP_TASK', 'BROKER_TASK_STOP',
       'SQLTRACE_BUFFER_FLUSH', 'CLR_AUTO_EVENT', 'CLR_MANUAL_EVENT',
       'LAZYWRITER_SLEEP')) -- filter out additional irrelevant waits
SELECT W1.wait_type,
       CAST(W1.wait_time_s AS DECIMAL(12, 2)) AS wait_time_s,
       CAST(W1.pct AS DECIMAL(12, 2)) AS pct,
       CAST(SUM(W2.pct) AS DECIMAL(12, 2)) AS running_pct
FROM Waits AS W1
INNER JOIN Waits AS W2 ON W2.rn <= W1.rn
GROUP BY W1.rn, W1.wait_type, W1.wait_time_s, W1.pct
HAVING SUM(W2.pct) - W1.pct < 95; -- percentage threshold

_________________________________________________________________________________________________

INFORMATION: Common Significant Wait types with BOL explanations

*** Network Related Waits ***
ASYNC_NETWORK_IO     Occurs on network writes when the task is blocked behind the network

*** Locking Waits ***
LCK_M_IX             Occurs when a task is waiting to acquire an Intent Exclusive (IX) lock
LCK_M_IU             Occurs when a task is waiting to acquire an Intent Update (IU) lock
LCK_M_S              Occurs when a task is waiting to acquire a Shared lock

*** I/O Related Waits ***
ASYNC_IO_COMPLETION  Occurs when a task is waiting for I/Os to finish
IO_COMPLETION        Occurs while waiting for I/O operations to complete. This wait type generally represents non-data page I/Os. Data page I/O completion waits appear as PAGEIOLATCH_* waits
PAGEIOLATCH_SH       Occurs when a task is waiting on a latch for a buffer that is in an I/O request. The latch request is in Shared mode. Long waits may indicate problems with the disk subsystem.
PAGEIOLATCH_EX       Occurs when a task is waiting on a latch for a buffer that is in an I/O request. The latch request is in Exclusive mode. Long waits may indicate problems with the disk subsystem.
WRITELOG             Occurs while waiting for a log flush to complete. Common operations that cause log flushes are checkpoints and transaction commits.
PAGELATCH_EX         Occurs when a task is waiting on a latch for a buffer that is not in an I/O request. The latch request is in Exclusive mode.
BACKUPIO             Occurs when a backup task is waiting for data, or is waiting for a buffer in which to store data

*** CPU Related Waits ***
SOS_SCHEDULER_YIELD  Occurs when a task voluntarily yields the scheduler for other tasks to execute. During this wait the task is waiting for its quantum to be renewed.
THREADPOOL           Occurs when a task is waiting for a worker to run on. This can indicate that the maximum worker setting is too low, or that batch executions are taking unusually long, thus reducing the number of workers available to satisfy other batches.
CXPACKET             Occurs when trying to synchronize the query processor exchange iterator. You may consider lowering the degree of parallelism if contention on this wait type becomes a problem.

_________________________________________________________________________________________________

-- Signal Waits for instance

SELECT CAST(100.0 * SUM(signal_wait_time_ms) / SUM(wait_time_ms) AS NUMERIC(20,2)) AS [%signal (cpu) waits],
       CAST(100.0 * SUM(wait_time_ms - signal_wait_time_ms) / SUM(wait_time_ms) AS NUMERIC(20,2)) AS [%resource waits]
FROM sys.dm_os_wait_stats;

-- Signal Waits above 10-15% is usually a sign of CPU pressure

_________________________________________________________________________________________________

-- Page Life Expectancy (PLE) value for default instance

SELECT cntr_value AS [Page Life Expectancy]
FROM sys.dm_os_performance_counters
WHERE object_name = 'SQLServer:Buffer Manager' -- Modify this if you have named instances
AND counter_name = 'Page life expectancy';

INFORMATION:
-- PLE is a good measurement of memory pressure
-- Higher PLE is better. Below 300 is generally bad.
-- Watch the trend, not the absolute value
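For a named instance the counter object is prefixed with MSSQL$<InstanceName> rather than SQLServer:, so a variant that should work in either case is to match on the object name pattern; a minimal sketch:

-- Works for default and named instances (object_name is a fixed-width column, so LIKE is convenient)
SELECT cntr_value AS [Page Life Expectancy]
FROM sys.dm_os_performance_counters
WHERE [object_name] LIKE '%Buffer Manager%'
AND counter_name = 'Page life expectancy';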

_________________________________________________________________________________________________

-- Buffer Pool Usage for instance

SELECT TOP(20) [type], SUM(single_pages_kb) AS [SPA Mem, Kb]
FROM sys.dm_os_memory_clerks
GROUP BY [type]
ORDER BY SUM(single_pages_kb) DESC;

INFORMATION:
CACHESTORE_SQLCP  SQL Plans - These are cached SQL statements or batches that aren't in stored procedures, functions and triggers
CACHESTORE_OBJCP  Object Plans - These are compiled plans for stored procedures, functions and triggers
CACHESTORE_PHDR   Algebrizer Trees - An algebrizer tree is the parsed SQL text that resolves the table and column names

_________________________________________________________________________________________________

-- Individual File Sizes and space available for current database

SELECT name AS [File Name], physical_name AS [Physical Name],
       size/128 AS [Total Size in MB],
       size/128.0 - CAST(FILEPROPERTY(name, 'SpaceUsed') AS int)/128.0 AS [Available Space In MB]
FROM sys.database_files;

INFORMATION:
-- Look at how large and how full the files are and where they are located
-- Make sure transaction log is not full!!
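To see quickly how full every transaction log on the instance is, a standard command that reports log size and the percentage of log space used per database:

-- Log size (MB) and log space used (%) for all databases on the instance
DBCC SQLPERF(LOGSPACE);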

_________________________________________________________________________________________________

-- Top Cached SPs By Execution Count (SQL 2008)

SELECT TOP(50) p.name AS [SP Name], qs.execution_count,
       ISNULL(qs.execution_count/DATEDIFF(Second, qs.cached_time, GetDate()), 0) AS [Calls/Second],
       qs.total_worker_time/qs.execution_count AS [AvgWorkerTime],
       qs.total_worker_time AS [TotalWorkerTime],
       qs.total_elapsed_time,
       qs.total_elapsed_time/qs.execution_count AS [avg_elapsed_time],
       qs.last_elapsed_time, qs.cached_time
FROM sys.procedures AS p
INNER JOIN sys.dm_exec_procedure_stats AS qs ON p.object_id = qs.object_id
ORDER BY qs.execution_count DESC;

_________________________________________________________________________________________________

-- Top Cached SPs By Total Worker time (SQL 2008). Worker time relates to CPU cost

SELECT TOP(25) p.name AS [SP Name],
       qs.total_worker_time AS [TotalWorkerTime],
       qs.total_worker_time/qs.execution_count AS [AvgWorkerTime],
       qs.execution_count,
       ISNULL(qs.execution_count/DATEDIFF(Second, qs.cached_time, GetDate()), 0) AS [Calls/Second],
       qs.total_elapsed_time,
       qs.total_elapsed_time/qs.execution_count AS [avg_elapsed_time],
       qs.last_elapsed_time, qs.cached_time
FROM sys.procedures AS p
INNER JOIN sys.dm_exec_procedure_stats AS qs ON p.object_id = qs.object_id
ORDER BY qs.total_worker_time DESC;

 

_________________________________________________________________________________________________

-- Top Cached SPs By Total Logical Reads (SQL 2008). Logical reads relate to memory pressure

SELECT TOP(25) p.name AS [SP Name],
       qs.total_logical_reads AS [TotalLogicalReads],
       qs.total_logical_reads/qs.execution_count AS [AvgLogicalReads],
       ISNULL(qs.execution_count/DATEDIFF(Second, qs.cached_time, GetDate()), 0) AS [Calls/Second],
       qs.total_worker_time AS [TotalWorkerTime],
       qs.total_worker_time/qs.execution_count AS [AvgWorkerTime],
       qs.execution_count,
       qs.total_elapsed_time,
       qs.total_elapsed_time/qs.execution_count AS [avg_elapsed_time],
       qs.last_elapsed_time, qs.cached_time
FROM sys.procedures AS p
INNER JOIN sys.dm_exec_procedure_stats AS qs ON p.object_id = qs.object_id
ORDER BY qs.total_logical_reads DESC;

_________________________________________________________________________________________________

-- Top Cached SPs By Total Physical Reads (SQL 2008). Physical reads relate to disk I/O pressure

SELECT TOP(25) p.name AS [SP Name],
       qs.total_physical_reads AS [TotalPhysicalReads],
       qs.total_physical_reads/qs.execution_count AS [AvgPhysicalReads],
       ISNULL(qs.execution_count/DATEDIFF(Second, qs.cached_time, GetDate()), 0) AS [Calls/Second],
       qs.total_logical_reads AS [TotalLogicalReads],
       qs.total_logical_reads/qs.execution_count AS [AvgLogicalReads],
       qs.total_worker_time AS [TotalWorkerTime],
       qs.total_worker_time/qs.execution_count AS [AvgWorkerTime],
       qs.execution_count,
       qs.total_elapsed_time,
       qs.total_elapsed_time/qs.execution_count AS [avg_elapsed_time],
       qs.last_elapsed_time, qs.cached_time
FROM sys.procedures AS p
INNER JOIN sys.dm_exec_procedure_stats AS qs ON p.object_id = qs.object_id
ORDER BY qs.total_physical_reads DESC;

 

_________________________________________________________________________________________________

-- Top Cached SPs By Total Logical Writes (SQL 2008). Logical writes relate to both memory and disk I/O pressure

SELECT TOP(25) p.name AS [SP Name],
       qs.total_logical_writes AS [TotalLogicalWrites],
       qs.total_logical_writes/qs.execution_count AS [AvgLogicalWrites],
       ISNULL(qs.execution_count/DATEDIFF(Second, qs.cached_time, GetDate()), 0) AS [Calls/Second],
       qs.total_logical_reads AS [TotalLogicalReads],
       qs.total_logical_reads/qs.execution_count AS [AvgLogicalReads],
       qs.total_worker_time AS [TotalWorkerTime],
       qs.total_worker_time/qs.execution_count AS [AvgWorkerTime],
       qs.execution_count,
       qs.total_elapsed_time,
       qs.total_elapsed_time/qs.execution_count AS [avg_elapsed_time],
       qs.last_elapsed_time, qs.cached_time
FROM sys.procedures AS p
INNER JOIN sys.dm_exec_procedure_stats AS qs ON p.object_id = qs.object_id
ORDER BY qs.total_logical_writes DESC;

_________________________________________________________________________________________________

-- Possible Bad Indexes (writes > reads)

SELECT object_name(s.object_id) AS [Table Name], i.name AS [Index Name], i.index_id,
       user_updates AS [Total Writes],
       user_seeks + user_scans + user_lookups AS [Total Reads],
       user_updates - (user_seeks + user_scans + user_lookups) AS [Difference]
FROM sys.dm_db_index_usage_stats AS s WITH (NOLOCK)
INNER JOIN sys.indexes AS i WITH (NOLOCK) ON s.object_id = i.object_id AND i.index_id = s.index_id
WHERE objectproperty(s.object_id, 'IsUserTable') = 1
AND s.database_id = db_id()
AND user_updates > (user_seeks + user_scans + user_lookups)
AND i.index_id > 1
ORDER BY [Difference] DESC, [Total Writes] DESC, [Total Reads] ASC;
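If one of these indexes really is pure write overhead, the cleanup is a plain DROP INDEX. The names below are hypothetical, and because these usage counters reset at every restart, confirm the index is unused over a full business cycle before dropping it.

-- Hypothetical names: verify usage over a representative period first
DROP INDEX IX_Orders_LastModified ON dbo.Orders;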

_________________________________________________________________________________________________

-- Missing Indexes for entire instance by Index Advantage

SELECT user_seeks * avg_total_user_cost * (avg_user_impact * 0.01) AS index_advantage,
       migs.last_user_seek, mid.statement AS [Database.Schema.Table],
       mid.equality_columns, mid.inequality_columns, mid.included_columns,
       migs.unique_compiles, migs.user_seeks, migs.avg_total_user_cost, migs.avg_user_impact
FROM sys.dm_db_missing_index_group_stats AS migs WITH (NOLOCK)
INNER JOIN sys.dm_db_missing_index_groups AS mig WITH (NOLOCK) ON migs.group_handle = mig.index_group_handle
INNER JOIN sys.dm_db_missing_index_details AS mid WITH (NOLOCK) ON mig.index_handle = mid.index_handle
ORDER BY index_advantage DESC;

INFORMATION:
-- Look at last user seek time, number of user seeks to help determine source and importance
-- SQL Server is overly eager to add included columns, so beware
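If a row from this output looks worth acting on, a hypothetical index assembled from its columns might look like the sketch below; the table, column and index names are placeholders, so trim the included columns and test before creating anything.

-- Hypothetical example built from equality_columns, inequality_columns and included_columns
CREATE NONCLUSTERED INDEX IX_Orders_CustomerID_OrderDate
ON dbo.Orders (CustomerID, OrderDate) -- equality columns first, then inequality columns
INCLUDE (TotalDue); -- included columns (be conservative here)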

_________________________________________________________________________________________________

-- Breaks down buffers used by current database by object (table, index) in the buffer cache

SELECT OBJECT_NAME(p.object_id) AS [ObjectName], p.object_id, p.index_id,
       COUNT(*)/128 AS [buffer size(MB)],
       COUNT(*) AS [buffer_count]
FROM sys.allocation_units AS a
INNER JOIN sys.dm_os_buffer_descriptors AS b ON a.allocation_unit_id = b.allocation_unit_id
INNER JOIN sys.partitions AS p ON a.container_id = p.hobt_id
WHERE b.database_id = db_id()
AND p.object_id > 100
GROUP BY p.object_id, p.index_id
ORDER BY buffer_count DESC;

INFORMATION:
-- Tells you what tables and indexes are using the most memory in the buffer cache

_________________________________________________________________________________________________

-- Detect blocking (run multiple times)

SELECT t1.resource_type AS 'lock type',
       db_name(resource_database_id) AS 'database',
       t1.resource_associated_entity_id AS 'blk object',
       t1.request_mode AS 'lock req',                                  -- lock requested
       t1.request_session_id AS 'waiter sid',                          -- spid of waiter
       t2.wait_duration_ms AS 'wait time',
       (SELECT [text] FROM sys.dm_exec_requests AS r                   -- get sql for waiter
        CROSS APPLY sys.dm_exec_sql_text(r.sql_handle)
        WHERE r.session_id = t1.request_session_id) AS 'waiter_batch',
       (SELECT substring(qt.text, r.statement_start_offset/2,
           (CASE WHEN r.statement_end_offset = -1
            THEN LEN(CONVERT(nvarchar(max), qt.text)) * 2
            ELSE r.statement_end_offset END - r.statement_start_offset)/2)
        FROM sys.dm_exec_requests AS r
        CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS qt
        WHERE r.session_id = t1.request_session_id) AS 'waiter_stmt',  -- statement blocked
       t2.blocking_session_id AS 'blocker sid',                        -- spid of blocker
       (SELECT [text] FROM sys.sysprocesses AS p                       -- get sql for blocker
        CROSS APPLY sys.dm_exec_sql_text(p.sql_handle)
        WHERE p.spid = t2.blocking_session_id) AS 'blocker_stmt'
FROM sys.dm_tran_locks AS t1
INNER JOIN sys.dm_os_waiting_tasks AS t2 ON t1.lock_owner_address = t2.resource_address;

_________________________________________________________________________________________________

 

12/06/2012 Posted by | SQL Scripting, Sql Server, T-SQL

SQL Server performance – first step

1) System Configuration

SELECT *
FROM sys.configurations
ORDER BY name OPTION (RECOMPILE);

 

2) Filename and Paths of Database

SELECT DB_NAME([database_id]) AS [DBName],
       name, physical_name, type_desc, state_desc,
       CONVERT(bigint, size/128.0) AS [SizeinMB]
FROM sys.master_files
ORDER BY DB_NAME([database_id])

 

3) Capturing Wait Types and Wait Stats Information

-- Create Table
CREATE TABLE [MyWaitStatTable](
    [wait_type] [nvarchar](60) NOT NULL,
    [waiting_tasks_count] [bigint] NOT NULL,
    [wait_time_ms] [bigint] NOT NULL,
    [max_wait_time_ms] [bigint] NOT NULL,
    [signal_wait_time_ms] [bigint] NOT NULL,
    [CurrentDateTime] DATETIME NOT NULL,
    [Flag] INT
)
GO

-- Populate Table at Time 1
INSERT INTO MyWaitStatTable
([wait_type],[waiting_tasks_count],[wait_time_ms],[max_wait_time_ms],[signal_wait_time_ms],
[CurrentDateTime],[Flag])
SELECT [wait_type],[waiting_tasks_count],[wait_time_ms],[max_wait_time_ms],[signal_wait_time_ms],
GETDATE(), 1
FROM sys.dm_os_wait_stats
GO

-- Desired Delay (for one hour)
WAITFOR DELAY '01:00:00'
GO

-- Populate Table at Time 2
INSERT INTO MyWaitStatTable
([wait_type],[waiting_tasks_count],[wait_time_ms],[max_wait_time_ms],[signal_wait_time_ms],
[CurrentDateTime],[Flag])
SELECT [wait_type],[waiting_tasks_count],[wait_time_ms],[max_wait_time_ms],[signal_wait_time_ms],
GETDATE(), 2
FROM sys.dm_os_wait_stats
GO

-- Check the difference between Time 1 and Time 2
SELECT T1.wait_type, T1.wait_time_ms Original_WaitTime,
       T2.wait_time_ms LaterWaitTime,
       (T2.wait_time_ms - T1.wait_time_ms) DifferenceWaitTime
FROM MyWaitStatTable T1
INNER JOIN MyWaitStatTable T2 ON T1.wait_type = T2.wait_type
WHERE T2.wait_time_ms > T1.wait_time_ms
AND T1.Flag = 1 AND T2.Flag = 2
ORDER BY DifferenceWaitTime DESC
GO

-- Clean up
DROP TABLE MyWaitStatTable
GO

12/06/2012 Posted by | SQL Scripting, Sql Server, T-SQL

The Perfect Combination: SQL Server 2012, Windows Server 2012 and System Center 2012

Information from insider news about SQL Server and Microsoft’s Information Platform http://blogs.technet.com/b/dataplatforminsider/archive/2012/12/06/the-perfect-combination-sql-server-2012-windows-server-2012-and-system-center-2012.aspx

Delivering a Complete Data Platform for the Modern Datacenter with Cloud OS

Today’s organizations need the ability to seamlessly build, deploy and manage applications and services across on-premises and cloud computing environments. The components of the Cloud OS platform – Windows Server® 2012, Windows Azure, Microsoft® SQL Server® 2012, Microsoft System Center 2012 and Visual Studio 2012 – work together to provide a consistent platform from on-premises to cloud computing environments. For database applications, we have identified three important scenarios where customers will benefit from the Cloud OS platform:

  1. Tackling mission critical OLTP workload SLAs and performance requirements
  2. Revolutionizing enterprise data warehousing
  3. Migrating large mission critical SQL Server workloads into Microsoft private cloud

For non-virtualized environments in an on-premises data center, Windows Server 2012 and SQL Server 2012 provide the best platform for mission-critical workloads in these areas:

    • Performance & Scalability:  SQL Server 2012 can consume the operating system maximum for both processors and memory.  Windows Server 2012 supports 640 logical processors (cores) across a maximum of 64 sockets and up to 4 TB of RAM, allowing SQL Server applications to scale to meet the demand of most mission critical applications. The new NIC Teaming feature in Windows Server 2012 allows 2 or more network adapters to behave as a single, virtual device.  This improves the reliability of the networking subsystem (if one NIC dies, the other continues to function) and allows the bandwidth available to each to be pooled for greater total network throughput for SQL Server data. With SMB improvements in Windows Server 2012, SQL Server can store database files on remote (SMB) file shares, providing customers with many more deployment options for their database server storage. The new data de-duplication feature in Windows Server 2012 provides compression on steroids and delivers 30-90% storage savings for FILESTREAM BLOBs and other external files in SQL Server applications.
    • Availability:  SQL Server 2012 support for Windows Server Core is expected to eliminate the need for 50-60% of OS-level patches.  With Windows Server 2012, the server admin can configure SQL Server to run with full support for graphical interfaces and then switch to running in Server Core mode. Cluster Aware Updating automates SQL Server cluster node maintenance, making the process easier, faster, more consistent and more reliable, with significantly less downtime. With dynamic quorum management, the cluster can dynamically reconfigure itself to keep running down to the last surviving node, allowing a SQL Server AlwaysOn cluster to adjust the number of quorum votes required to keep running while simplifying setup by as much as 80%.

Organizations are also seeking a cloud-optimized IT infrastructure that can span from a private cloud behind your firewall to a public cloud behind a service provider’s firewall.  One key element to achieving this is having a common virtualization platform across private and public clouds.  This increases efficiency and performance across infrastructures, which is essential for database applications. Windows Server 2012 offers the best virtualization platform for SQL Server 2012. By working together, SQL Server 2012, Windows Server 2012, and System Center 2012 offer a seamlessly integrated, on-premises and cloud-ready information platform to meet the demands of today’s enterprise.  We have just published a white paper on the detailed benefits of this integration. Key benefits include:

    • Better Scalability: Higher capacity vCPUs (up to 64), memory (up to 1 TB), and VM density (up to 8,000 per cluster)
    • Better Performance: Hyper-V support on NUMA and fiber channel
    • Better Availability: Faster & simultaneous live migration and dynamic quorum support in SQL Server AlwaysOn cluster
    • Better Manageability: Same management tool (System Center) for SQL Server virtual machines in both private and public cloud

We have also published the latest performance report for SQL Server 2012 running on Windows Server 2012 Hyper-V. Key points from the performance report include:

    • With Windows Server 2012 Hyper-V’s new support for up to 64 vCPUs, ESG Lab took an existing SQL Server 2012 OLTP workload that was previously vCPU limited and increased the performance by six times, while the average transaction response times improved by five times.
    • Manageably-low Hyper-V overhead of 6.3% was recorded when comparing SQL Server 2012 OLTP workload performance of a physical server to a virtual machine configured with the same number of virtual CPU cores and the same amount of RAM.

When compared to VMware vSphere 5.1, Windows Server 2012 Hyper-V offers a number of advantages for SQL Server workloads:

    • Performance & Scalability: Windows Server 2012 Hyper-V is better equipped to deploy mission critical SQL Server workloads in a virtualized environment, allowing up to 64 virtual processors per VM with no SKU-specific restrictions. By contrast, the free vSphere Hypervisor, along with the vSphere 5.1 Essentials, Essentials Plus and Standard editions, supports only 8 vCPUs per VM; vSphere 5.1 Enterprise supports 32 vCPUs, and only the most expensive edition, vSphere 5.1 Enterprise Plus, supports up to 64 vCPUs. No such SKU-specific restrictions are in place with Hyper-V. Hyper-V offers superior performance for SQL Server virtualization, supporting 320 logical processors per host, whilst vSphere 5.1 supports just half that number, restricting scalability and density. Hyper-V also supports up to 4TB of host physical memory, with an individual VM able to utilize up to 1TB of memory. By comparison, VMware caps host physical memory at 32GB for the free vSphere Hypervisor and at 2TB for vSphere 5.1 Enterprise Plus.
    • Storage & High Availability: For the mission critical SQL Server AlwaysOn scenario that makes use of Windows Server Failover Clustering (WSFC), customers retain full Hyper-V functionality, whereas when virtualizing Windows Server based clusters, VMware recommends turning off key features such as vMotion for VM mobility, DRS for dynamic resource allocation, Memory Overcommit (meaning sacrificed density) and vSphere Fault Tolerance (FT). Also, when using Fiber Channel for guest clusters, VMware restricts scale to just 5 nodes. No such restriction applies with Hyper-V, which offers unmatched scale for failover clustering, with support for up to 64 nodes and 8,000 VMs per cluster. Hyper-V Live Migration also offers unlimited simultaneous live migrations and Shared-Nothing Live Migration for seamlessly moving VMs between hosts and clusters. Additionally, Hyper-V fully supports guest clustering with Live Migration and Dynamic Memory, unlike VMware. On storage, Hyper-V is optimized to take advantage of the increased capacity of single virtual disks to store huge databases, file repositories or document archives of up to 64TB in size, while vSphere is restricted to only 2TB per virtual disk. Hyper-V also supports the latest hardware innovations such as 4K Advanced Format Disks, which come with higher capacities, better alignment and resiliency, and ultimately higher performance. vSphere, unfortunately, doesn’t support this hardware innovation.
    • Deployment & Management: Hyper-V, combined with System Center, supports VM migration and management from private (behind your firewall) to public cloud (behind service provider’s firewall) through a single pane of glass. This provides organizations with unparalleled levels of flexibility. Additionally, System Center not only supports Hyper-V, but also VMware vSphere and Citrix XenServer based infrastructures. Hyper-V, combined with System Center also provides complete infrastructure monitoring (hardware, hypervisor, operating system, and applications) which is especially useful for deploying, optimizing and monitoring the ongoing performance of workloads such as SQL Server. With VMware, customers are required to purchase expensive additional products to deliver any form of monitoring beyond the standard virtual machine metrics.
    • Lower costs: Hyper-V provides a significantly lower total cost of ownership (TCO) than VMware vSphere for initial licensing and ongoing operations. More details on the cost comparison can be obtained through this web site where the analysis shows that a VMware private cloud solution can cost 5.5 times more than a Microsoft based private cloud solution.

Hyper-V proves to be the best solution for virtualizing SQL Server databases, with superior capabilities in many areas, whilst offering significantly better TCO than VMware. Many customers understand the benefits outlined in the summary and they have chosen to run their SQL Servers using Hyper-V or have switched their existing SQL Server to Hyper-V from VMware. See these case studies for more details.

Microsoft’s Cloud OS platform, consisting of SQL Server 2012, Windows Server 2012, System Center 2012, Windows Azure, and Visual Studio 2012, offers a unique and consistent platform from on-premises to cloud computing environments, helping organizations modernize their datacenters by leveraging the CAPEX and OPEX efficiencies that cloud computing environments provide. Customers should consider this platform by trying SQL Server 2012, Windows Server 2012, System Center 2012, Windows Azure, and Visual Studio 2012.

 

12/06/2012 Posted by | Sql Server, Windows Server

MS SQL 2012 SP1 released

http://www.microsoft.com/en-us/download/details.aspx?id=35575

If you have questions about  SQL Server 2012, please visit the SQL Server  2012 forums on MSDN.   Your feedback is important. Help improve SQL Server 2012 by submitting bugs to Microsoft Connect.

11/08/2012 Posted by | Sql Server

How to: Apply a service pack or hotfix SQL Server 2008 to a failover cluster instance

Installing a service pack on a SQL Server 2008 failover cluster is very different from doing so on a SQL Server 2005 failover cluster.

With SQL Server 2005, the service pack (or hotfix) installation must be launched on the active node (the node that hosts the instance). During installation, Setup simultaneously launches a silent remote setup on all passive nodes, so all nodes in the cluster that contain the SQL Server instance are updated at the same time.

With SQL Server 2008, to reduce downtime, the deployment method has been revised: if you want to apply a service pack (or hotfix), you must install it first on the passive nodes. The passive nodes are updated before the active node.

Therefore, for your SQL Server 2008 failover cluster instance, you must follow the scenario below when applying a Service Pack, Cumulative Update or Hotfix:

1.  Apply the hotfix on passive node N2
2.  Reboot the passive node N2
3.  Fail over the SQL Server resource: the passive node becomes the active node
4.  Apply the hotfix on the passive node N1
5.  Reboot the passive node N1

08/10/2012 Posted by | Sql Server

Rebuild MSDB Database

If you accidentally delete the transaction log file of the msdb database on a newly installed SQL Server 2008 R2 instance, follow the steps below.

I searched Books Online and found this article about Rebuilding System Databases, which helps in rebuilding the msdb database.

Steps to Follow

  1. Stop all the SQL Server services, start the command prompt with elevated administrative privileges, and execute the following command:
    NET START MSSQLSERVER /T3608
  2. Once you start the SQL Server with trace flag 3608, you will be able to detach the msdb database. To do that, execute the following command in SQLCMD mode:
    SQLCMD -E -S DBS03 -dmaster -Q"EXEC sp_detach_db msdb"
  3. Rename the msdb data file, and execute the instmsdb.sql file from the install folder, as shown below:
    SQLCMD -E -S DBS03 -i"E:\SQL Server\MSSQL10_50.MSSQLSERVER\MSSQL\Install\instmsdb.sql" -o"E:\instmsdb.out"
  4. Review the instmsdb.out for any errors and re-apply the service packs.
  5. Stop the SQL Server.
  6. Start the SQL Server normally


Since I was able to connect to the instance without any error, I stopped the SQL Server instance and copied all the system database files. Later I restarted SQL Server Agent and the instance was back online.

Hope this helps someone. Happy learning!

06/19/2012 Posted by | SQL Scripting, Sql Server, T-SQL

Tracking Login Password Changes in SQL Server

Problem

By default, SQL Server does not keep track of login password changes. When the question initially came up with a user, I thought that perhaps it might be in the default trace or in the system_health extended event session. No such luck. So I was in search of an alternate way to keep track of these events, if not retroactively, at least going forward.

Solution

In a short time you can be up and running with collecting password change information using three different methods: server-side trace, event notifications, and SQL Server audit. Below I will provide an example using each technology. Note that all three examples are able to track password changes using ALTER LOGIN, the system procedure sp_password (deprecated since SQL Server 2005), or the Management Studio Login properties dialog.


Server-Side Trace

Trace includes an event called “Audit Login Change Password Event” – which is much more reliable than capturing all batches and filtering on ‘%sp_password%’ and ‘%ALTER%LOGIN%PASSWORD%’. The EventID is 107, so you can set up a very simple trace with the following code (make sure to set a proper path to the desired trace file):

DECLARE @TraceID INT, @MaxFileSize BIGINT;
SET @MaxFileSize = 5;

EXEC sp_trace_create 
    @TraceID OUTPUT, 
    2, 
    N'C:\Traces\PasswordChangeTrace', -- make sure to change this!
    @MaxFileSize,
    10; 

EXEC sp_trace_setevent @TraceID,107, 1,  1;
EXEC sp_trace_setevent @TraceID,107, 11, 1;
EXEC sp_trace_setevent @TraceID,107, 8,  1;
EXEC sp_trace_setevent @TraceID,107, 12, 1;
EXEC sp_trace_setevent @TraceID,107, 14, 1;
EXEC sp_trace_setevent @TraceID,107, 40, 1;
EXEC sp_trace_setevent @TraceID,107, 42, 1;

EXEC sp_trace_setstatus @TraceID, 1;

SELECT @TraceID;

Make note of the TraceID in the output. Once this has been running, you can use that TraceID to review the events that have been captured using the following query:

DECLARE @path NVARCHAR(255);

SELECT @path = [path]
FROM  sys.traces
WHERE id = <traceID from above>;

SELECT 
  LoginName  = TargetLoginName, 
  EventCount = COUNT(*), 
  FirstEvent = MIN(StartTime), 
  LastEvent  = MAX(StartTime)
FROM sys.fn_trace_gettable(@path, DEFAULT)
WHERE EventClass = 107 -- in case you've added other events
GROUP BY TargetLoginName;

Since the above trace definition specifies a max of 10 x 5MB files, eventually an event that happens today will no longer be available through the above query. So as an added exercise you may consider periodically taking a snapshot of this data into a permanent table, and running your queries from there.
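A minimal sketch of that snapshot idea is shown below; dbo.PasswordChangeHistory is a hypothetical table (LoginName SYSNAME, EventTime DATETIME), and the trace id lookup mirrors the query above.

DECLARE @path NVARCHAR(255);

SELECT @path = [path]
FROM sys.traces
WHERE id = <traceID from above>;

-- Append only events newer than what has already been captured
INSERT INTO dbo.PasswordChangeHistory (LoginName, EventTime)
SELECT TargetLoginName, StartTime
FROM sys.fn_trace_gettable(@path, DEFAULT)
WHERE EventClass = 107
  AND StartTime > (SELECT ISNULL(MAX(EventTime), '19000101') FROM dbo.PasswordChangeHistory);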


Event Notifications

An alternative to trace is to set up a targeted Event Notification. These are lightweight, asynchronous messages sent via Service Broker that can be used to perform various actions in response to a specific event. One such event is AUDIT_LOGIN_CHANGE_PASSWORD_EVENT. In a lot of cases people use these to send an e-mail or start a job, but in this case we’re just going to log to a table. We can create the following table in msdb:

USE [msdb];
GO

CREATE TABLE dbo.PasswordChangeLog
(
    LoginName  SYSNAME,
    EventTime  DATETIME NOT NULL DEFAULT CURRENT_TIMESTAMP
);

We will then need to set up a queue and a notification to handle our events:

CREATE QUEUE PasswordChangeQueue;
GO

CREATE SERVICE PasswordChangeService ON QUEUE PasswordChangeQueue
  ([http://schemas.microsoft.com/SQL/Notifications/PostEventNotification]);
GO

CREATE EVENT NOTIFICATION PasswordChangeNotification
    ON SERVER WITH FAN_IN
    FOR AUDIT_LOGIN_CHANGE_PASSWORD_EVENT
    TO SERVICE 'PasswordChangeService', 'current database';
GO

And then the following procedure can be used to log events to our table:

CREATE PROCEDURE dbo.LogPasswordChange
WITH EXECUTE AS OWNER
AS
BEGIN
    SET NOCOUNT ON;

    DECLARE @message_body XML;

    WHILE (1 = 1)
    BEGIN
       WAITFOR 
       ( 
         RECEIVE TOP(1) @message_body = message_body
         FROM dbo.PasswordChangeQueue
       ), TIMEOUT 1000;

       IF (@@ROWCOUNT = 1)
       BEGIN
        INSERT dbo.PasswordChangeLog(LoginName) 
          SELECT @message_body.value('(/EVENT_INSTANCE/LoginName)[1]', 'sysname');
       END
    END
END
GO

Finally, we can change the queue to call this stored procedure in response to the event:

ALTER QUEUE PasswordChangeQueue
WITH ACTIVATION
(
   STATUS = ON,
   PROCEDURE_NAME = dbo.LogPasswordChange,
   MAX_QUEUE_READERS = 1,
   EXECUTE AS OWNER
);
GO

Now change the password for a few logins, and you should see results from the following query:

SELECT 
  LoginName, 
  EventCount = COUNT(*), 
  FirstEvent = MIN(EventTime), 
  LastEvent  = MAX(EventTime)
FROM dbo.PasswordChangeLog
GROUP BY LoginName;
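If you want a throwaway login to generate a test event, a hypothetical example (the login name and passwords are placeholders) looks like this:

-- Hypothetical test login; remove it when you are done
CREATE LOGIN PasswordAuditTest WITH PASSWORD = 'Initial#Passw0rd!';
ALTER LOGIN PasswordAuditTest WITH PASSWORD = 'Changed#Passw0rd!';
DROP LOGIN PasswordAuditTest;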

Server Audit

The final option I’ll present here is creating a Server Audit Specification. You may already be using Server Audit, and if so, handling password change auditing using this technology might make more sense than using either of the above two methods. (However note that Server Audit requires Enterprise Edition of SQL Server 2008 or SQL Server 2008 R2 – in SQL Server 2012, this feature has been made available in all editions.)

One of the options for a Server Audit Specification is LOGIN_CHANGE_PASSWORD_GROUP. We can set up a file-based audit to capture these events with the following code (note that this needs to be performed in master and you should update the file path appropriately – you probably don’t want to rely on C:\ for this):

USE [master];
GO

CREATE SERVER AUDIT ChangePasswordAudit
  TO FILE (FILEPATH = 'C:\Audits\', MAXSIZE = 5MB, MAX_ROLLOVER_FILES = 10)
  WITH (ON_FAILURE = CONTINUE); -- important unless you want your server to halt on failure

ALTER SERVER AUDIT ChangePasswordAudit
  WITH (STATE = ON);

CREATE SERVER AUDIT SPECIFICATION ChangePasswordAuditSpecification
  FOR SERVER AUDIT ChangePasswordAudit
  ADD (LOGIN_CHANGE_PASSWORD_GROUP)
  WITH (STATE = ON);
GO

Once this is running, you can change a few passwords and then retrieve data from the audit using the following query:

DECLARE @folder VARCHAR(255);

SELECT @folder = log_file_path + '*' 
  FROM sys.server_file_audits 
  WHERE name = 'ChangePasswordAudit';

SELECT 
  LoginName  = target_server_principal_name, 
  EventCount = COUNT(*),
  FirstEvent = MIN(event_time), 
  LastEvent  = MAX(event_time)
FROM sys.fn_get_audit_file(@folder, DEFAULT, DEFAULT)
WHERE action_id IN ('PWR', 'PWC') -- PWR = ALTER LOGIN / SSMS, PWC = sp_password
GROUP BY target_server_principal_name;

As with the trace above, this file-based audit is limited to 10 x 5MB files. So you may want to change those options to have the audit data hang around longer, or you may consider occasionally storing the result of this query in a permanent table.

One important thing to note about Server Audit is that it records the event time in UTC, so you might notice that the timestamps are off depending on your time zone. Therefore you may need to look into adding a helper function that will convert any UTC date to your time zone. Since this can get complicated with Daylight Saving Time, I’ve often found it easier to just set up all of our servers to do everything in UTC. 🙂
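As a rough sketch (reusing the @folder variable from the query above), the expression below shifts the UTC event time by the server's current offset from UTC; it inherits the Daylight Saving Time caveat just mentioned, so treat it as an approximation.

-- Shift UTC audit times by the server's current offset from UTC
SELECT target_server_principal_name,
       event_time AS event_time_utc,
       DATEADD(MINUTE, DATEDIFF(MINUTE, GETUTCDATE(), GETDATE()), event_time) AS event_time_local
FROM sys.fn_get_audit_file(@folder, DEFAULT, DEFAULT)
WHERE action_id IN ('PWR', 'PWC');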


Conclusion

As you can see, there are a variety of ways to set up tracking for password changes, and each method is relatively straightforward to implement. While it is still impossible to obtain this information from the past, once you have implemented one of the above solutions, you will be able to look back on this information over time.

 

06/13/2012 Posted by | SQL Scripting, Sql Server, T-SQL

Use PowerShell to gather SQL Server database physical file sizes

Problem

I was hired as a DBA by a customer, and one of their SQL Servers has a database that grows very rapidly (1 GB per week). I needed an easy way to monitor the growth of this database. I also needed an easy way to monitor all my servers for growth trending.

Solution

I know there are tools that will monitor database file size, but I wanted something easy and something I could use to gather and retain historical data. It also had to be free. After a little research I decided to try using PowerShell. I did some searching on the internet and found some sample scripts for connecting to servers, getting data from those servers and then writing the data to an Excel spreadsheet.

Prepare the environment

Before we start let me say that I am not a PowerShell guru, so some of the terminology I use may not be exact. The first thing you will need to do is start PowerShell. In Windows 7, click the Start button and type PowerShell in the Search field. When I do this on my system I am presented with a couple of choices. The first two items in the list will start PowerShell: Windows PowerShell ISE is a GUI interface, and Windows PowerShell is the command-line interface. I like to use Windows PowerShell ISE.


The next thing you will need to do to use PowerShell on your system is to change the execution policy. By default the execution policy is set to Restricted, which means that no scripts will run at all. To check what the current execution policy is set to, open PowerShell and enter the command Get-ExecutionPolicy. I have set my execution policy to RemoteSigned, which allows scripts I have written to run and allows scripts from the internet to run only if they have been signed by a trusted publisher. To set the execution policy, enter the command Set-ExecutionPolicy RemoteSigned. Note: there are other execution policy options; check the documentation for the appropriate policy. Now that I have the execution policy set I am ready to go.

What I wanted to accomplish was to look at all my SQL servers and get the actual data file and log file size in megabytes. The script shown below reads a .txt file that lists all my SQL servers and gets the file name, file path, file size, and used size for both data file (.mdf) and log file (.ldf) for each database on each server in the .txt file. The data is then written to a spreadsheet on my local machine.

#Create a new Excel object using COM 
$Excel = New-Object -ComObject Excel.Application
$Excel.visible = $True 
$Excel = $Excel.Workbooks.Add()
$Sheet = $Excel.Worksheets.Item(1)
$intRow = 1
 ForEach ($instance in Get-Content "C:\Users\dkelly\Documents\PowershellScripts\sqlservers.txt")
    {
    [System.Reflection.Assembly]::LoadWithPartialName('Microsoft.SqlServer.SMO') | out-null
    $s = New-Object ('Microsoft.SqlServer.Management.Smo.Server') $instance
    $dbs=$s.Databases

     $intRow++
     #Create column headers
     $Sheet.Cells.Item($intRow,1) = "Server: $s"
     $Sheet.Cells.Item($intRow,1).Font.Size = 12
     $Sheet.Cells.Item($intRow,1).Font.Bold = $True

     $intRow++
     $Sheet.Cells.Item($intRow,2) = "Database"
     $Sheet.Cells.Item($intRow,2).Font.Bold = $True
     $Sheet.Cells.Item($intRow,3) = "Data Name"
     $Sheet.Cells.Item($intRow,3).Font.Bold = $True
     $Sheet.Cells.Item($intRow,4) = "Data File"
     $Sheet.Cells.Item($intRow,4).Font.Bold = $True
     $sheet.Cells.Item($intRow,5) = "Data Size (MB)"
     $Sheet.Cells.Item($intRow,5).Font.Bold = $True
     $Sheet.Cells.Item($intRow,6) = "Data Used Space (MB)"
     $Sheet.Cells.Item($intRow,6).Font.Bold = $True
     $Sheet.Cells.Item($intRow,7) = "Log Name"
     $Sheet.Cells.Item($intRow,7).Font.Bold = $True
     $Sheet.Cells.Item($intRow,8) = "Log Size (MB)"
     $Sheet.Cells.Item($intRow,8).Font.Bold = $True
     $Sheet.Cells.Item($intRow,9) = "Log Used Space (MB)"
     $Sheet.Cells.Item($intRow,9).Font.Bold = $True
     $Sheet.Cells.Item($intRow,10) = "Log File"
     $Sheet.Cells.Item($intRow,10).Font.Bold = $True

    foreach ($db in $dbs) 
        {

          $dbname = $db.Name
          $fileGroups = $db.FileGroups

          ForEach ($fg in $fileGroups)
            {
       #   write-host $fg.files | select name
            If ($fg) 
                {

                    $intRow++

                    $mdfInfo = $fg.Files | Select Name, FileName, size, UsedSpace
                    $Sheet.Cells.Item($intRow,2) = $dbname
                    $Sheet.Cells.Item($intRow,3) = $mdfInfo.Name
                    $Sheet.Cells.Item($intRow,4) = $mdfInfo.FileName
                    $Sheet.Cells.Item($intRow,5) = ($mdfInfo.size / 1000)
                    $Sheet.Cells.Item($intRow,6) = ($mdfInfo.UsedSpace / 1000)

                    $logInfo = $db.LogFiles | Select Name, FileName, Size, UsedSpace
                    $Sheet.Cells.Item($intRow,7) = $logInfo.Name
                    $Sheet.Cells.Item($intRow,8) = ($logInfo.Size / 1000)
                    $Sheet.Cells.Item($intRow,9) = ($logInfo.UsedSpace / 1000)
                    $Sheet.Cells.Item($intRow,10) = $logInfo.FileName
                }
            }
        }
        $intRow++
    }
    $Sheet.UsedRange.EntireColumn.AutoFit()

The sqlservers.txt file is just a list of server names, one name per line with no punctuation, as shown here:

testserver1
testserver2
testserver3

The first four lines in the script create the Excel spreadsheet. Then, for each server in the sqlservers.txt file, we use the Microsoft.SqlServer.Management.Smo.Server class to create a SQL Server server object. From the server object we can get the databases on each server ($dbs = $s.Databases). The next several lines in the script set the values in the columns of the header row in the spreadsheet. The section of the script starting with foreach ($db in $dbs) is the part of the script that actually gets the database file data and writes it to the spreadsheet.

Shown below is the resulting spreadsheet. I broke the spreadsheet image into 3 images for readability.


In my next tip I will describe the process for getting the data into a database using SSIS and then reporting on that data. Please stay tuned.

05/23/2012 Posted by | Powershell, Sql Server

SQL Server 2008 R2 Build List

A list of all the builds that I can find and install on my Build VM. If you find a build not listed here, please let me know.

All builds are listed in reverse order, so the newest are at the top and the earliest at the bottom. You can find your build number with:

select @@Version

in a query window. This gives you the current version you are running and it should match up with one of the builds listed below.
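If you prefer something easier to parse, SERVERPROPERTY exposes the same build information:

SELECT SERVERPROPERTY('ProductVersion') AS [Build],  -- e.g. 10.50.2500.0
       SERVERPROPERTY('ProductLevel')   AS [Level],  -- RTM, SP1, ...
       SERVERPROPERTY('Edition')        AS [Edition];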

Major Builds

Build | KB Article | Title | Description
10.50.2500.0 | Service Pack 1 | Service Pack 1 for SQL Server 2008 R2 | The first roll-up of patches into a service pack for SQL Server 2008 R2. Note this IS NOT for SQL Server 2008, only the R2 version.
10.50.1600.1 | RTM | Release to Manufacturing of SQL Server 2008 R2 | The first public, supported version of SQL Server 2008 R2

Note that Service Packs include all builds below that Service Pack level. So any builds listed below Service Pack 1 are included in Service Pack 1. Those below Service Pack 2 (including Service Pack 1), are included in SP2, etc. Hotfixes may or may not include updates below them because there are two trees of code. The cumulative updates do include all hotfixes before them.

You can also download a CSV of the builds attached to this article.

Build | KB Article | URL | Description
10.50.2811 | Cumulative update package 6 for SQL Server 2008 R2 Service Pack 1 | http://support.microsoft.com/kb/2679367/ | A total of 20 fixes for this CU that are listed on the support page.
10.50.2806.0 | Cumulative update package 5 for SQL Server 2008 R2 Service Pack 1 | http://support.microsoft.com/kb/2659694/ | Fixes for identity issues, CPU spikes, SSRS, replication and more.
10.50.2796 | Cumulative update package 4 for SQL Server 2008 R2 Service Pack 1 | http://support.microsoft.com/kb/2633146 | The fourth update for SP1 includes fixes for replication, deadlocks, and MDX query issues.
10.50.2789 | CU3 for SQL Server 2008 R2 SP1 | http://support.microsoft.com/kb/2591748/en-us | This article describes cumulative update package 3 for SQL Server 2008 R2 Service Pack 1. This update contains hotfixes for issues that were fixed after the release of Microsoft SQL Server 2008 R2 Service Pack 1 (SP1).
10.50.2772.0 | CU2 for SQL Server 2008 R2 SP1 | http://support.microsoft.com/kb/2567714 | CU 2 for SQL Server 2008 R2 SP1 includes fixes for T-SQL query results, trace errors with TVPs, and data collector network port issues, among others.
10.50.2769.0 | CU 1 for SQL Server 2008 R2 SP1 | http://support.microsoft.com/kb/2544793/en-us | CU 1 for SQL Server 2008 R2 SP1 includes fixes for TDS, timeouts, CDC, and BIDS among others.
10.50.2500.0 | Service Pack 1 for SQL Server 2008 R2 | http://support.microsoft.com/kb/2528583 | SP 1 for SQL Server 2008 R2 RTM includes fixes for PowerShell, spatial data types, locking, and multi-CPU parallel queries among others.
10.50.1810 | Cumulative update package 12 for SQL Server 2008 R2 | http://support.microsoft.com/kb/2659692/ | Three fixes listed in the KB article.
10.50.1809 | Cumulative update package 11 for SQL Server 2008 R2 | http://support.microsoft.com/kb/981356 | The eleventh update for RTM includes fixes for replication, deadlocks, and MDX query issues.
10.50.1807 | CU10 for SQL Server 2008 R2 | http://support.microsoft.com/kb/2591746/en-us | CU10 includes fixes for deadlocks, various query issues, CDC problems, and XML data used in replication.
10.50.1804 | CU9 for SQL Server 2008 R2 | http://support.microsoft.com/kb/2567713 | CU9 includes fixes for SSIS, MDX, XML, and SSRS
10.50.1797.0 | Cumulative Update package 8 for SQL Server 2008 R2 | http://support.microsoft.com/kb/2534352 | CU 8 for SQL Server 2008 R2 RTM includes fixes for BIDS, replication issues with large stored procedures, timeouts with snapshot isolation, and SSRS dynamic images among others.
10.50.1777.0 | Cumulative Update package 7 for SQL Server 2008 R2 | http://support.microsoft.com/kb/2507770 | CU 7 for SQL Server 2008 R2 RTM includes fixes for better performance for some bcp operations, merge replication issues, SQL Agent job scheduling, and SSRS performance among others.
10.50.1765.0 | Cumulative Update package 6 for SQL Server 2008 R2 | http://support.microsoft.com/kb/2489376/en-us | CU 6 for SQL Server 2008 R2 RTM includes fixes for better performance for some self joins, Filestream with third party filters, and a number of SSAS filters.
10.50.1753.0 | Cumulative Update package 5 for SQL Server 2008 R2 | http://support.microsoft.com/kb/2438347 | CU 5 for SQL Server 2008 R2 RTM includes fixes for Reporting Services, replication with XML columns, and a number of engine fixes for various items
10.50.1746.0 | Cumulative Update package 4 for SQL Server 2008 R2 | http://support.microsoft.com/kb/2345451 | CU 4 for SQL Server 2008 R2 RTM includes fixes for TDE and full-text search with compression enabled.
10.50.1734.0 | Cumulative Update package 3 for SQL Server 2008 R2 | http://support.microsoft.com/kb/2261464/ | CU 3 for SQL Server 2008 R2 RTM
10.50.1720.0 | Cumulative Update package 2 for SQL Server 2008 R2 | http://support.microsoft.com/kb/2072493 | CU 2 for SQL Server 2008 R2 RTM
10.50.1702.0 | Cumulative Update package 1 for SQL Server 2008 R2 | http://support.microsoft.com/kb/981355/en-us | CU 1 for SQL Server 2008 R2 RTM
10.50.1600.1 | N/A | RTM
10.50.1450.3 | N/A | RC 0

Again, if you find something not listed, let me know.

 

05/23/2012 Posted by | Sql Server
