How to move Tables/objects between Filegroups

In this post, I will explain how to move tables and other objects between different filegroups.

  • Create a sample database
USE master
GO
CREATE DATABASE MoveFG
GO
  • Create two filegroups. In this example, I have created two filegroups: MoveFG_DATA_1 and MoveFG_DATA_2
ALTER DATABASE MoveFG ADD FILEGROUP MoveFG_DATA_1
GO
ALTER DATABASE MoveFG ADD FILEGROUP MoveFG_DATA_2
GO
  • Create data files in the different filegroups. In this example, I have created four data files, two for each filegroup
ALTER DATABASE MoveFG
ADD FILE
( NAME = MoveFG10,
FILENAME = 'C:\Ram\MoveFGDB\MoveFG_10.ndf',
SIZE = 1MB,
MAXSIZE = 10MB,
FILEGROWTH = 1MB)
TO FILEGROUP MoveFG_DATA_1
GO

ALTER DATABASE MoveFG
ADD FILE
( NAME = MoveFG11,
FILENAME = 'C:\Ram\MoveFGDB\MoveFG_11.ndf',
SIZE = 1MB,
MAXSIZE = 10MB,
FILEGROWTH = 1MB)
TO FILEGROUP MoveFG_DATA_1
GO

ALTER DATABASE MoveFG
ADD FILE
( NAME = MoveFG20,
FILENAME = 'C:\Ram\MoveFGDB\MoveFG_20.ndf',
SIZE = 1MB,
MAXSIZE = 10MB,
FILEGROWTH = 1MB)
TO FILEGROUP MoveFG_DATA_2
GO

ALTER DATABASE MoveFG
ADD FILE
( NAME = MoveFG21,
FILENAME = 'C:\Ram\MoveFGDB\MoveFG_21.ndf',
SIZE = 1MB,
MAXSIZE = 10MB,
FILEGROWTH = 1MB)
TO FILEGROUP MoveFG_DATA_2
GO
  • Now, create a table on filegroup MoveFG_DATA_1; later we will move it to the other filegroup, MoveFG_DATA_2
USE MoveFG
GO

CREATE TABLE TAB1
(
TAB1_ID INT IDENTITY(1,1),
TAB1_NAME VARCHAR(100),
CONSTRAINT PK_TAB1 PRIMARY KEY(TAB1_ID)
) ON MoveFG_DATA_1 -- Filegroup we created.
GO
  • Insert a few records into the table TAB1 (GO 10000 runs the batch below 10,000 times)
SET NOCOUNT OFF
INSERT INTO TAB1(TAB1_NAME)
SELECT TABLE_NAME
FROM INFORMATION_SCHEMA.TABLES
WHERE TABLE_TYPE = 'BASE TABLE'
GO 10000
  • Verify which filegroup the table's data resides on. You can use sp_help to get the details, as shown below.
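For example, running sp_help against the table returns, among other details, the filegroup the data is located on:

EXEC sp_help 'TAB1';
GO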

[Screenshot: sp_help output showing TAB1 located on filegroup MoveFG_DATA_1]

  • As you can see in the screenshot above, the table is on the first filegroup, MoveFG_DATA_1. Let us move table TAB1 to the other filegroup, MoveFG_DATA_2. To accomplish this, we drop the primary key constraint with the MOVE TO option and then recreate it, as shown below
ALTER TABLE TAB1 DROP CONSTRAINT PK_TAB1 WITH (MOVE TO MoveFG_DATA_2)
GO
ALTER TABLE TAB1 ADD CONSTRAINT PK_TAB1 PRIMARY KEY(TAB1_ID)
GO
  • Now, check which filegroup the table TAB1 resides on

[Screenshot: sp_help output showing TAB1 located on filegroup MoveFG_DATA_2]

  • There you go, the table has been moved to the second filegroup, MoveFG_DATA_2. You can also use the query below to list tables/objects and the filegroups they live on. I have taken this from Pinal’s blog
SELECT obj.[name], obj.[type], i.[name], i.[index_id], fg.[name]
FROM sys.indexes i
INNER JOIN sys.filegroups fg
ON i.data_space_id = fg.data_space_id
INNER JOIN sys.all_objects obj
ON i.[object_id] = obj.[object_id]
WHERE obj.type = 'U' -- User Created Tables
GO

[Screenshot: query results listing user tables, their indexes, and filegroups]

  • Be careful running this code on a production system: dropping and recreating the primary key moves all of the table's data, so consider doing it during a maintenance window so that users are not impacted.

Hope you like the post. Stay tuned for the next tip.

Ramasankar Molleti


New SQL Server 2016 Live Query Statistics

SQL Server 2016 introduces a new feature, Live Query Statistics, which gives us real-time insight into a running query without waiting for the query to complete.

The live query plan displays the overall query progress and operator-level run-time execution statistics such as the number of rows produced, elapsed time, operator progress, etc.

Let’s see how it works.

I opened SSMS and wrote a sample query against the AdventureWorks database. The query below is deliberately expensive (every employee row paired with every employee row) so it runs long enough to watch.

select a.* from HumanResources.Employee a cross apply HumanResources.Employee b

[Screenshot: sample query in SSMS]

To view the live query execution plan, click the Include Live Query Statistics icon on the toolbar, then execute the query. The live query plan displays the execution in progress, as below

[Screenshot: live query plan with operator-level progress]

You can also highlight the query, right-click, and select “Include Live Query Statistics”.

[Screenshot: right-click context menu with Include Live Query Statistics]

Another way of opening the live query plan is to use Activity Monitor.

[Screenshot: live query plan opened from Activity Monitor]

When you select Include Live Query Statistics, it automatically enables the statistics profile infrastructure for the current query session.

There are two other ways to enable the statistics infrastructure which can be used to view the live query statistics from other sessions (such as from Activity Monitor).

  • Execute SET STATISTICS XML ON; or SET STATISTICS PROFILE ON; in the target session.
  • Enable the query_post_execution_showplan extended event. This is a server-wide setting that enables live query statistics on all sessions; a minimal session definition is sketched below.
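A minimal sketch of such an Extended Events session (the session name is illustrative; be aware this event adds overhead for every query on the server):

CREATE EVENT SESSION [LiveQueryStats] ON SERVER
ADD EVENT sqlserver.query_post_execution_showplan;
GO
ALTER EVENT SESSION [LiveQueryStats] ON SERVER STATE = START;
GO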

 

Note: This feature is available beginning with SQL Server 2016 Management Studio; however, it also works when connected to SQL Server 2014.

That’s all about Live Query Statistics.

Cheers

Hope you like the post!

Ramasankar Molleti


 

New SQL Server 2016 SCOPED CONFIGURATION

Microsoft SQL Server 2016 introduced a new way of moving several instance-level settings down to the database level, giving you the ability to easily make database-level configuration changes such as:

  • Setting MAXDOP for an individual database
  • Set the query optimizer cardinality estimation model independent of the database compatibility level.
  • Enable or disable parameter sniffing for an individual database
  • Enable or disable query optimization hotfixes for an individual database
  • Clear the plan cache for an individual database without using a DBCC command

Let’s explore each option:

MAXDOP: 

Set the MAXDOP parameter to a value (0, 1, 2, …) to control the maximum degree of parallelism for queries in the database. It is recommended to use the database-scoped configuration to set MAXDOP instead of sp_configure at the server level, especially for Azure SQL Database, where sp_configure is not available. You can also set different MAXDOP values for the primary and secondary replicas. For example, you can set MAXDOP to 1 on the primary while a secondary used for reporting is set to 4, as below.

-- Set MAXDOP for Primary database
 ALTER DATABASE SCOPED CONFIGURATION 
 SET MAXDOP = 1;
 GO

 -- Set MAXDOP for Secondary database(s)
 ALTER DATABASE SCOPED CONFIGURATION FOR SECONDARY 
 SET MAXDOP = 4;
 GO

 

Legacy Cardinality Estimation:

The “LEGACY_CARDINALITY_ESTIMATION” option enables you to set the query optimizer cardinality estimation model to that of SQL Server 2012 and earlier, independent of the compatibility level of the database. This is equivalent to Trace Flag 9481. To set this at the instance level, see Trace Flags (Transact-SQL). To accomplish this at the query level, add the QUERYTRACEON query hint.

You can use the below sql statements to enable the legacy cardinality estimation for primary and secondary based on your requirement.

-- Enable legacy Cardinality Estimation for Primary database
 ALTER DATABASE SCOPED CONFIGURATION 
 SET LEGACY_CARDINALITY_ESTIMATION = ON;
 GO
 -- Enable legacy Cardinality Estimation for Secondary database
 ALTER DATABASE SCOPED CONFIGURATION FOR SECONDARY 
 SET LEGACY_CARDINALITY_ESTIMATION = ON;
 GO

 -- Set legacy Cardinality Estimation for the Secondary database(s) 
--to the same value as the Primary database
 ALTER DATABASE SCOPED CONFIGURATION FOR SECONDARY 
 SET LEGACY_CARDINALITY_ESTIMATION = PRIMARY;
 GO

 

Enable/Disable Parameter Sniffing:

Enables or disables parameter sniffing. This is equivalent to Trace Flag 4136. To set this at the instance level, see Trace Flags (Transact-SQL). To set this at the query level, see the OPTIMIZE FOR UNKNOWN query hint.

You can use the SQL statements below to disable parameter sniffing.

 -- Disable parameter sniffing for Primary database
 ALTER DATABASE SCOPED CONFIGURATION SET PARAMETER_SNIFFING = OFF;
 GO

 -- Disable parameter sniffing for Secondary database
 ALTER DATABASE SCOPED CONFIGURATION FOR SECONDARY SET PARAMETER_SNIFFING = OFF;
 GO

 -- Set parameter sniffing for the Secondary database(s) to 
 ---the same value as the Primary database

 ALTER DATABASE SCOPED CONFIGURATION FOR SECONDARY 
 SET PARAMETER_SNIFFING = PRIMARY;
 GO

 

Query Optimizer Hotfixes:

Enable or disable “QUERY_OPTIMIZER_HOTFIXES” at the database level to take advantage of the latest query optimizer hotfixes, regardless of the compatibility level of the database. This is equivalent to Trace Flag 4199.

Sample T-SQL to enable query optimizer hotfixes

-- Enable query optimizer fixes for Primary database
ALTER DATABASE SCOPED CONFIGURATION
SET QUERY_OPTIMIZER_HOTFIXES = ON;
GO

-- Enable query optimizer fixes for Secondary database
ALTER DATABASE SCOPED CONFIGURATION FOR SECONDARY
SET QUERY_OPTIMIZER_HOTFIXES = ON;
GO

 

Clear Procedure Cache:

The “Clear Procedure Cache” option allows you to clear the procedure cache at the database level without impacting other databases.

Sample T-SQL script to clear the procedure cache:

 -- Clear plan cache for current database (only possible for Primary database)
 ALTER DATABASE SCOPED CONFIGURATION CLEAR PROCEDURE_CACHE;
 GO
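You can verify the current database-scoped settings at any time by querying the sys.database_scoped_configurations catalog view:

SELECT configuration_id, name, value, value_for_secondary
FROM sys.database_scoped_configurations;
GO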
That’s all about the new SQL Server 2016 database scoped configuration options.

Cheers

Hope you like the post!

Ramasankar Molleti


 

“If you could give a DBA just one piece of advice, what would it be?”


For those of you hungry to learn and expand your skills, you may read the book DBA JumpStart.

This book contains advice from some of the most renowned and successful database professionals.

Cheers

Hope you like the post!

Ramasankar Molleti


Export and Import Database Objects Using a BACPAC File

In this post I will explain how to export database objects to a BACPAC file and how to import from a BACPAC file.

  1. Connect to SQL Server, right-click the database, and go to Tasks –> Export Data-tier Application

[Screenshot: Tasks –> Export Data-tier Application menu]

 

2. The Introduction screen will appear as shown below; click Next to continue.

[Screenshot: Introduction screen]

3. On the Export Settings screen, select where you want to store the BACPAC file: either in Windows Azure or on the local machine. I have selected local in this example.

[Screenshot: Export Settings screen]

4. Provide the location where the BACPAC file should be saved.

[Screenshot: save location for the BACPAC file]

5. On the same Export Settings screen, click the Advanced tab to select the objects you would like to export. Once you have selected them, click Next to proceed.

[Screenshot: Advanced tab with object selection]

6. The progress screen will appear and show the progress of the export, as shown in the figure below.

[Screenshot: export progress screen]

7. The Results screen will appear after the export completes, as shown in the figure below.

[Screenshot: export results screen]

8. You have successfully exported the database objects as a BACPAC file. Let us create a new database by importing the BACPAC file on another server. Right-click Databases and select Import Data-tier Application as shown in the screen below.


[Screenshot: Import Data-tier Application menu]

9. The Introduction screen will appear as shown in the figure below. Click Next to continue.

[Screenshot: Introduction screen of the import wizard]

10. The Import Settings screen will appear; specify the BACPAC file created earlier. If you stored the BACPAC file in Windows Azure blob storage, select the second option instead. In this example, I saved the BACPAC file on the local machine. Click Next to continue.

[Screenshot: Import Settings screen]

11. The Database Settings screen will appear as below; specify the name of the database and the data and log file paths. In this case I specified the database name “Hello_Test”, and the paths are on the same drive; best practice is to keep the data and log files on separate drives. Click Next to continue.

[Screenshot: Database Settings screen]

12. You will see the Summary screen, as below, after the above step. Click Next to continue.

[Screenshot: Summary screen]

13. The progress screen will appear and display the progress of the database import, as shown in the figure below.

[Screenshot: import progress screen]

14. The Results screen shows the outcome of the import. As you can see in the screenshot below, the import operation completed. Let us look at the new database “Hello_Test” and its objects.

[Screenshot: import results screen]

15. There you go. The new database “Hello_Test” has been created.

 

[Screenshot: Hello_Test database and its objects in Object Explorer]

So far you have learned how to export database objects as a BACPAC and import the BACPAC to create a database. You may think the same can be achieved by doing a backup and restore. Yes, it can (a comparison sketch follows the list). But a BACPAC has advantages:

  1. With backup and restore you do not have the option to choose just a few database objects/schemas to move rather than the complete database, unless you maintain separate filegroups for different objects.
  2. You can import a BACPAC file to create a new Azure SQL database.
  3. You can restore an Azure SQL database to an on-premises SQL Server.
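For comparison, here is a minimal whole-database copy using native backup and restore (a sketch only; the database name, paths, and logical file names are illustrative and must match your environment):

BACKUP DATABASE AdventureWorks TO DISK = N'C:\Backups\AdventureWorks.bak';
GO
RESTORE DATABASE Hello_Test
FROM DISK = N'C:\Backups\AdventureWorks.bak'
WITH MOVE 'AdventureWorks_Data' TO N'C:\Data\Hello_Test.mdf',
MOVE 'AdventureWorks_Log' TO N'C:\Data\Hello_Test_log.ldf';
GO

Note this always copies the entire database; object-level selection is only possible with the BACPAC route.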

 

Cheers

Hope you like the post!

Ramasankar Molleti


 

Different ways to Monitor Deadlocks

In every database environment, deadlocks are a common problem, and it is important for DBAs to monitor them and prevent them from recurring.

There are multiple ways to monitor deadlocks:

Using Trace Flags 

We can enable two trace flags, 1204 and 1222, to capture deadlock information in the error log. The code below enables them for the current session.

DBCC TRACEON (1204,1222);
GO

To enable the deadlock trace flags globally, use the following. Note that 1204 and 1222 are documented as global-only flags, so the -1 form is the one that reliably takes effect.

DBCC TRACEON (1204,1222,-1);
GO
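You can confirm the flags are active with DBCC TRACESTATUS:

DBCC TRACESTATUS (1204, 1222, -1);
GO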

[Screenshot: DBCC TRACEON output]

Let’s force a deadlock by creating two tables. Run these queries in the order shown, alternating between two sessions.

--Session 1:
CREATE TABLE tbl_dlck1 (colum1 INT);
GO
INSERT INTO tbl_dlck1 VALUES (1);
GO
BEGIN TRAN
UPDATE tbl_dlck1 SET colum1 = 2;
GO
--Session 2:
CREATE TABLE tbl_dlck2 (colum1 INT);
GO
INSERT INTO tbl_dlck2 VALUES (1);
GO
BEGIN TRAN
UPDATE tbl_dlck2 SET colum1 = 2;
GO
--Session 1:
UPDATE tbl_dlck2 SET colum1 = 2;
--Session 2:
UPDATE tbl_dlck1 SET colum1 = 2;

[Screenshot: one session chosen as deadlock victim with error 1205]

Let’s check the error log. Here’s the deadlock information captured there.

[Screenshot: deadlock details in the SQL Server error log]

Using Profiler: 

I have selected the “Deadlock graph”, “Lock:Deadlock”, and “Lock:Deadlock Chain” events in Profiler. Here is the result:

[Screenshot: deadlock graph captured by Profiler]

Using Extended Events:

There are two ways we can capture deadlock information: with the system_health session, or by creating our own Extended Events session. system_health is a default Extended Events session that SQL Server runs continuously.

Using the system_health Extended Events session, run the query below to get the information.

select CAST(target_data as xml) as TargetData
from sys.dm_xe_session_targets st
join sys.dm_xe_sessions s on s.address = st.event_session_address
where name = 'system_health' 

Using XPath we can then get just the deadlock graphs from the XML by using a derived table and a CROSS APPLY to get the individual Event Nodes as follows:

select XEventData.XEvent.value('(data/value)[1]', 'varchar(max)') as DeadlockGraph
FROM
(select CAST(target_data as xml) as TargetData
from sys.dm_xe_session_targets st
join sys.dm_xe_sessions s on s.address = st.event_session_address
where name = 'system_health') AS Data
CROSS APPLY TargetData.nodes ('//RingBufferTarget/event') AS XEventData (XEvent)
where XEventData.XEvent.value('@name', 'varchar(4000)') = 'xml_deadlock_report'

This query can take a long time to execute against the ring buffer; it does not impact overall server performance, but expect to wait for the results.

[Screenshot: deadlock graph XML extracted from system_health]

You can also create your own Extended Events session to capture deadlocks. Select the xml_deadlock_report event as shown in the screenshots below; an equivalent T-SQL definition follows them.

[Screenshot: New Session wizard for Extended Events]

[Screenshot: xml_deadlock_report event selected in the session]
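A minimal sketch of the same session in T-SQL (the session and target file names are illustrative):

CREATE EVENT SESSION [Capture_Deadlocks] ON SERVER
ADD EVENT sqlserver.xml_deadlock_report
ADD TARGET package0.event_file (SET filename = N'Capture_Deadlocks.xel');
GO
ALTER EVENT SESSION [Capture_Deadlocks] ON SERVER STATE = START;
GO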

That’s all about deadlocks.

References:

http://www.sqlservercentral.com/articles/deadlocks/65658/

https://technet.microsoft.com/en-us/magazine/2009.01.sql2008.aspx

Cheers

Hope you like the post!

Ramasankar Molleti


 

 

SQL Server 2008 SP4 and SQL Server 2008 R2 SP3 now available in Microsoft Update Catalog

SQL Server 2008 Service Pack 4 and SQL Server 2008 R2 Service Pack 3 are the final service packs for their respective versions, and both are now available in the Microsoft Update Catalog.


References:

SQL Server 2008 R2 Service Pack 3

https://blogs.msdn.microsoft.com/sqlreleaseservices/sql-server-2008-r2-service-pack-3-has-released/

SQL Server 2008 Service Pack 4

https://blogs.msdn.microsoft.com/sqlreleaseservices/sql-server-2008-service-pack-4-has-released/

 

Cheers

Hope you like the post!

Ramasankar Molleti


Find Missing Backups and send email notification

This is another useful script: it finds databases with missing backups and sends an email notification to the DBA. It assumes an operator named 'DBA' and a Database Mail profile named 'DBAProfile' already exist.

IF OBJECT_ID(N'tempdb..#Missing_Backups', N'U') IS NOT NULL
DROP TABLE #Missing_Backups;
GO
create table #Missing_Backups (
[DB Name] [varchar](100) NOT NULL,
[Type] [varchar] (5) NOT NULL,
[Last Backup] [varchar](100) NULL)

insert into #Missing_Backups
SELECT d.name AS "Database",'Full' as "Type",
ISNULL(CONVERT(VARCHAR,b.backupdate,120),'NEVER') AS "Last Full Backup"
FROM sys.databases d
LEFT JOIN (SELECT database_name,type,MAX(backup_finish_date) backupdate FROM msdb.dbo.backupset
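-- backupset.type: D = full database backup, L = transaction log backup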
WHERE type LIKE 'D'
GROUP BY database_name,type) b on d.name=b.database_name
WHERE (backupdate IS NULL OR backupdate < getdate()-1)
AND d.name <> 'tempdb'
AND d.state_desc = 'ONLINE'
UNION
SELECT d.name AS "Database",'Trn' as "Type",
ISNULL(CONVERT(VARCHAR,b.backupdate,120),'NEVER') AS "Last Log Backup"
FROM sys.databases d
LEFT JOIN (SELECT database_name,type,MAX(backup_finish_date) backupdate FROM msdb.dbo.backupset
WHERE type LIKE 'L'
GROUP BY database_name,type) b on d.name=b.database_name
WHERE recovery_model = 1
AND (backupdate IS NULL OR backupdate < getdate()-1)
AND d.name <> 'tempdb'
AND d.state_desc = 'ONLINE'
declare @cnt int
select @cnt=COUNT(1) from #Missing_Backups
if (@cnt > 0)
begin

declare @strsubject varchar(100)
declare @oper_email nvarchar(100)

set @oper_email = (select email_address from msdb.dbo.sysoperators where name = 'DBA')
select @strsubject='Check for missing backups on ' + @@SERVERNAME

declare @tableHTML nvarchar(max);
set @tableHTML =
N'<H3>Databases Missing Backups Listing - ' + @@SERVERNAME +'</H3>' +
N'
<table border="1">' +
N'
<tr>
<th>DB Name</th>
<th>Type</th>
' +
N'
<th>Last Backup</th>
</tr>
' +
CAST ( ( SELECT td = [DB Name], '',
td = [Type], '',
td = [Last Backup]
FROM #Missing_Backups
FOR XML PATH('tr'), TYPE
) AS NVARCHAR(MAX) ) +
N'</table>
' ;

EXEC msdb.dbo.sp_send_dbmail
@from_address='',
@recipients= @oper_email,
@subject = @strsubject,
@body = @tableHTML,
@body_format = 'HTML',
@profile_name='DBAProfile'

end
IF OBJECT_ID(N'tempdb..#Missing_Backups', N'U') IS NOT NULL
DROP TABLE #Missing_Backups;
go
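To run this check automatically every morning, you could wrap the script in a SQL Server Agent job. A minimal sketch (the job name, schedule, and placeholder command are illustrative):

USE msdb;
GO
EXEC dbo.sp_add_job @job_name = N'DBA - Missing Backup Check';
EXEC dbo.sp_add_jobstep @job_name = N'DBA - Missing Backup Check',
@step_name = N'Run check', @subsystem = N'TSQL',
@command = N'/* paste the missing-backup script here */';
-- freq_type 4 = daily, freq_interval 1 = every day; 70000 = 07:00:00
EXEC dbo.sp_add_jobschedule @job_name = N'DBA - Missing Backup Check',
@name = N'Daily 7am', @freq_type = 4, @freq_interval = 1,
@active_start_time = 70000;
EXEC dbo.sp_add_jobserver @job_name = N'DBA - Missing Backup Check';
GO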

Cheers

Hope you like the post!

Ramasankar Molleti


Free SQL Server 2016 Developer Edition

Earlier this month, Microsoft announced the general availability of SQL Server 2016 for Windows and Windows Server.  You can get started today with SQL Server 2016 Developer Edition, which is free and has all the features and capabilities of Enterprise Edition (Dev/Test only; not for production environments or use with production data).

Cheers

Hope you like the post!

Ramasankar Molleti


Script to find failed sql agent jobs and notify via email

This is a useful script that checks for failed SQL Server Agent jobs each day and sends a notification via email.

USE MSDB
GO

create table #Failed_Jobs (
[Status] [varchar](6) NOT NULL,
[Job Name] [varchar](100) NULL,
[Step ID] [varchar](5) NULL,
[Step Name] [varchar](30) NULL,
[Start Date Time] [varchar](30) NULL,
[Message] [nvarchar](4000) NULL)

insert into #Failed_Jobs
select 'FAILED' as Status, cast(sj.name as varchar(100)) as "Job Name",
cast(sjs.step_id as varchar(5)) as "Step ID",
cast(sjs.step_name as varchar(30)) as "Step Name",
cast(REPLACE(CONVERT(varchar,convert(datetime,convert(varchar,sjh.run_date)),102),'.','-')+' '+SUBSTRING(RIGHT('000000'+CONVERT(varchar,sjh.run_time),6),1,2)+':'+SUBSTRING(RIGHT('000000'+CONVERT(varchar,sjh.run_time),6),3,2)+':'+SUBSTRING(RIGHT('000000'+CONVERT(varchar,sjh.run_time),6),5,2) as varchar(30)) 'Start Date Time',
sjh.message as "Message"
from sysjobs sj
join sysjobsteps sjs
on sj.job_id = sjs.job_id
join sysjobhistory sjh
on sj.job_id = sjh.job_id and sjs.step_id = sjh.step_id
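-- run_status 1 = succeeded; anything else (0 failed, 2 retry, 3 canceled) gets reported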
where sjh.run_status <> 1
and cast(sjh.run_date as float)*1000000+sjh.run_time >
cast(convert(varchar(8), getdate()-1, 112) as float)*1000000+70000 --yesterday at 7am
union
select 'FAILED',cast(sj.name as varchar(100)) as "Job Name",
'MAIN' as "Step ID",
'MAIN' as "Step Name",
cast(REPLACE(CONVERT(varchar,convert(datetime,convert(varchar,sjh.run_date)),102),'.','-')+' '+SUBSTRING(RIGHT('000000'+CONVERT(varchar,sjh.run_time),6),1,2)+':'+SUBSTRING(RIGHT('000000'+CONVERT(varchar,sjh.run_time),6),3,2)+':'+SUBSTRING(RIGHT('000000'+CONVERT(varchar,sjh.run_time),6),5,2) as varchar(30)) 'Start Date Time',
sjh.message as "Message"
from sysjobs sj
join sysjobhistory sjh
on sj.job_id = sjh.job_id
where sjh.run_status <> 1 and sjh.step_id=0
and cast(sjh.run_date as float)*1000000+sjh.run_time >
cast(convert(varchar(8), getdate()-1, 112) as float)*1000000+70000 --yesterday at 7am

declare @cnt int
select @cnt=COUNT(1) from #Failed_Jobs
if (@cnt > 0)
begin

declare @strsubject varchar(100)
declare @oper_email nvarchar(100)

set @oper_email = (select email_address from msdb.dbo.sysoperators where name = 'DBA')
select @strsubject='Check the following failed jobs on ' + @@SERVERNAME

declare @tableHTML nvarchar(max);
set @tableHTML =
N'<H1>Failed Jobs Listing - ' + @@SERVERNAME +'</H1>' +
N'
<table border="1">' +
N'
<tr>
<th>Status</th>
<th>Job Name</th>
' +
N'
<th>Step ID</th>
<th>Step Name</th>
<th>Start Date</th>
' +
N'
<th>Message</th>
</tr>
' +
CAST ( ( SELECT td = [Status], '',
td = [Job Name], '',
td = [Step ID], '',
td = [Step Name], '',
td = [Start Date Time], '',
td = [Message]
FROM #Failed_Jobs
FOR XML PATH('tr'), TYPE
) AS NVARCHAR(MAX) ) +
N'</table>
' ;

EXEC msdb.dbo.sp_send_dbmail
@from_address='',
@recipients= @oper_email,
@subject = @strsubject,
@body = @tableHTML,
@body_format = 'HTML',
@profile_name='DBAProfile'
end

drop table #Failed_Jobs

GO

 

Cheers

Hope you like the post!

Ramasankar Molleti
