MERGE Statement to Perform Insert, Update and Delete in One Statement

This example demonstrates how to perform an insert, update and delete in a single statement using MERGE. If you are a database developer or BI expert, you often need to periodically refresh a target table to match a source table. Prior to SQL Server 2008, you had to write separate T-SQL logic for the insert, update and delete. Starting with SQL Server 2008, you can perform all three operations (insert, update and delete) in one statement using the MERGE statement.

What is the use of the MERGE statement in SQL Server?

The MERGE statement, introduced in SQL Server 2008, allows us to perform inserts, updates and deletes in one statement, so we no longer have to write and coordinate separate statements for each operation.

Basic Merge syntax:

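A simplified sketch of the syntax follows (this is not the complete grammar; see the official MERGE documentation for all clauses and options):

MERGE <target_table> AS T
USING <source_table> AS S
ON <join_condition>
WHEN MATCHED THEN
UPDATE SET ...                 -- or DELETE
WHEN NOT MATCHED BY TARGET THEN
INSERT (...) VALUES (...)
WHEN NOT MATCHED BY SOURCE THEN
DELETE;                        -- or UPDATE SET ...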

To try the example, you'll need to first run the following script to create and populate the tables used below:

 -- CREATE A SOURCE TABLE
CREATE TABLE [DBO].[STUDENT_SOURCE](
[ID] [INT] NOT NULL,
[STUDENTNAME] [NCHAR](50) NOT NULL,
CONSTRAINT [PK_STUDENT_SOURCE] PRIMARY KEY CLUSTERED
(
[ID] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY] 
 -- CREATE A TARGET TABLE
CREATE TABLE [DBO].[STUDENT_TARGET](
[ID] [INT] NOT NULL,
[STUDENTNAME] [NCHAR](50) NOT NULL,
CONSTRAINT [PK_STUDENT_TARGET] PRIMARY KEY CLUSTERED
(
[ID] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY] 
 --- INSERT RECORDS INTO SOURCE TABLE
INSERT INTO [STUDENT_SOURCE] VALUES (1,'RAMA')
INSERT INTO [STUDENT_SOURCE] VALUES (2,'SANKAR')
--- INSERT RECORDS INTO TARGET TABLE
INSERT INTO [STUDENT_TARGET] VALUES (1, 'RAMASANKAR')
INSERT INTO [STUDENT_TARGET] VALUES (3, 'MURTHY') 

After you run the above script, you have two tables (STUDENT_SOURCE and STUDENT_TARGET) populated with sample data, as shown below.

STUDENT_SOURCE contains (1, RAMA) and (2, SANKAR), while STUDENT_TARGET contains (1, RAMASANKAR) and (3, MURTHY).

Now, I will use a MERGE statement to synchronize the target table with the source.

 MERGE [STUDENT_TARGET] AS T
USING [STUDENT_SOURCE] AS S
ON T.ID = S.ID
WHEN MATCHED THEN
UPDATE SET T.STUDENTNAME = S.STUDENTNAME
WHEN NOT MATCHED BY TARGET THEN
INSERT (ID, STUDENTNAME) VALUES (S.ID,S.STUDENTNAME)
WHEN NOT MATCHED BY SOURCE THEN
DELETE;
(3 row(s) affected) 

There you go: three rows are affected, which means one update, one insert and one delete were performed in a single statement.
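If you want to see exactly which action MERGE applied to each row, you can add an OUTPUT clause. Here is a sketch of the same synchronization with OUTPUT added (the column aliases are purely illustrative); running this variant lists each affected row alongside the action taken:

MERGE [STUDENT_TARGET] AS T
USING [STUDENT_SOURCE] AS S
ON T.ID = S.ID
WHEN MATCHED THEN
UPDATE SET T.STUDENTNAME = S.STUDENTNAME
WHEN NOT MATCHED BY TARGET THEN
INSERT (ID, STUDENTNAME) VALUES (S.ID, S.STUDENTNAME)
WHEN NOT MATCHED BY SOURCE THEN
DELETE
OUTPUT $action AS MergeAction,
       inserted.ID AS NewID, inserted.STUDENTNAME AS NewName,
       deleted.ID AS OldID, deleted.STUDENTNAME AS OldName;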

After executing the MERGE statement, we can see that the source and target are identical.

 SELECT * FROM STUDENT_SOURCE;
SELECT * FROM STUDENT_TARGET; 

Both STUDENT_SOURCE and STUDENT_TARGET now return (1, RAMA) and (2, SANKAR).

The MERGE statement is a very useful improvement for keeping tables in sync when the logic is complex, and it can deliver better performance and scalability than running separate INSERT, UPDATE and DELETE statements.

Hope you enjoyed the post!

Cheers

Ramasankar Molleti


Tempdb Enhancements in SQL Server 2016

TempDB trace flags 1117 and 1118 are no longer needed in SQL Server 2016. The behavior of trace flag 1117 is now controlled by the AUTOGROW_SINGLE_FILE and AUTOGROW_ALL_FILES options of ALTER DATABASE ... MODIFY FILEGROUP, and the behavior of trace flag 1118 is controlled by the SET MIXED_PAGE_ALLOCATION option of ALTER DATABASE.
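As a minimal sketch of those options for a user database (the database name [MyDB] and filegroup [PRIMARY] are assumptions for illustration; TempDB itself gets this behavior by default in SQL Server 2016):

-- Grow all files in the PRIMARY filegroup together (the old TF 1117 behavior)
ALTER DATABASE [MyDB] MODIFY FILEGROUP [PRIMARY] AUTOGROW_ALL_FILES;

-- Revert to growing one file at a time if required
-- ALTER DATABASE [MyDB] MODIFY FILEGROUP [PRIMARY] AUTOGROW_SINGLE_FILE;

-- Allocate uniform extents only (the old TF 1118 behavior)
ALTER DATABASE [MyDB] SET MIXED_PAGE_ALLOCATION OFF;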

What are these trace flags?

TRACE FLAG 1117 – GROW ALL FILES IN A FILE GROUP EQUALLY

Trace flag (TF) 1117 is related strictly to filegroups and how data files grow within them. A filegroup is a logical container for one or more data files within a database. TF 1117 forces all data files in the same filegroup to grow at the same rate, which prevents one file from growing more than the others and becoming an allocation hotspot. Enabling this trace flag in earlier versions of SQL Server comes with a minor tradeoff: because it is instance-wide, it affects multiple data files in user databases as well as TempDB's data files. Depending on your scenario, that could be problematic, for example if you had a filegroup that you did not want to grow as a single unit. Starting with SQL Server 2016, growing all data files at the same rate is the default behavior for TempDB, which means you no longer need this trace flag.

TRACE FLAG 1118 – FULL EXTENTS ONLY

Administrators use trace flag 1118 to change how pages are allocated to new objects. When you enable TF 1118, SQL Server allocates eight pages, or one extent, at a time to create a dedicated (or uniform) extent, in contrast to the default behavior of allocating a single page from a mixed extent. Unlike TF 1117, there was no real downside to enabling TF 1118, and it was generally recommended for all SQL Server implementations in earlier releases. Starting with SQL Server 2016, all allocations of TempDB pages use uniform extent allocation, thus eliminating the need for TF 1118.

In earlier versions of SQL Server, the default configuration uses one data file for TempDB. This limitation sometimes results in page-latch contention. The way to mitigate this behavior is to add more data files; in turn, SQL Server has more of the three special allocation pages (GAM, SGAM and PFS), which gives TempDB more throughput. Importantly, these files should all be the same size. How many data files to create depends on factors such as the number of cores, the number of CPU sockets and hyper-threading. Microsoft's guidelines for reducing allocation contention are documented in the article KB2154845.
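As an illustration of adding an equally sized data file to TempDB (the logical name, file path and sizes below are placeholders; adjust them to your environment):

-- Add one more TempDB data file, sized to match the existing files
ALTER DATABASE [tempdb]
ADD FILE (NAME = N'tempdev2', FILENAME = N'T:\TempDB\tempdev2.ndf', SIZE = 1024MB, FILEGROWTH = 256MB);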

The recommendation in KB2154845 is now built into product setup in SQL Server 2016. When you install SQL Server 2016, the default configuration of TempDB adjusts to your environment: the setup wizard's TempDB configuration page defaults to one data file per logical processor, up to a maximum of eight.
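To verify what setup created, here is a quick check of the TempDB file layout (a verification query only, not part of the setup itself):

-- List TempDB files with their current size (in MB) and growth settings
SELECT name, physical_name,
size * 8 / 1024 AS size_mb,
growth, is_percent_growth
FROM tempdb.sys.database_files;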

Hope you enjoyed the post!

Cheers

Ramasankar Molleti


 

Powershell Script to Backup the file system folders/files to Azure blob storage

One of the requirements from a customer was to back up important application files to Azure blob storage and delete backups older than 7 days. I have written the below script to automate the task.

#--------------------------------------------------------------------------------
#Script: Backup the files to azure blob storage
#Author : Ramasankar Molleti
#Relaunch PowerShell as administrator if needed (required for the AzCopy command)
if (!([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")) { Start-Process powershell.exe "-NoProfile -ExecutionPolicy Bypass -File `"$PSCommandPath`"" -Verb RunAs; exit }

Write-Host "started"
#Source folder to back up (placeholder; set your own path)
$source = "Path of the files"

#Initializing the destination file path (placeholder; set your own temporary path)
$destination = "Temporary destination path"
#Adding current date to the file
$date = Get-Date
$date = $date.ToString("yyyy-MM-dd")
$destination = $destination+$date
$destination =$destination+'.zip'
write-host $destination
Write-Host "Compressing is in progress" -ForegroundColor Yellow
#This assembly is used to compress the folder
Add-Type -Assembly 'System.IO.Compression.FileSystem'
[System.IO.Compression.ZipFile]::CreateFromDirectory($source, $destination,'Optimal',$false)
Write-Host "Compression is done" -ForegroundColor Green
# Backup the compressed application folders/files to blob storage
Write-Host "Begin backup to azure blob" -ForegroundColor Yellow
AzCopy /Source:D:\temp /Dest:https://yourstorageaccount.blob.core.windows.net/Provideyourazurecontainer /DestKey:yourazurblobstorageprimarykeydetails
Write-Host "Complete Backup to AZure Blob" -ForegroundColor Green
#Define the source storage account and context.
$SourceStorageAccountName = "Provide your storage account"
$SourceStorageAccountKey = "Provide your storage account primary/master key"
$SrcContainerName = "Your AzureContainername"
$SourceContext = New-AzureStorageContext -StorageAccountName $SourceStorageAccountName -StorageAccountKey $SourceStorageAccountKey
#$blobs = Get-AzureStorageBlob -Container $SrcContainerName -Context $SourceContext
#Set the threshold value
$isOldDate = [DateTime]::UtcNow.AddDays(-7)
#Get a reference to blobs in the source container.
$blobs = Get-AzureStorageBlob -Container $SrcContainerName -Context $SourceContext | Where-Object { $_.LastModified.UtcDateTime -lt $isOldDate }
$blobs| Remove-AzureStorageBlob
Write-Host "end" 

 

Hope you enjoyed the post!

Cheers

Ramasankar Molleti


Fix – Availability Group Listener Failed with the Message "The WSFC Cluster Could Not Bring the Network Name Resource Online"

One of the common issues my team and I have encountered several times when setting up an availability group listener in SQL Server is the error below:

Msg 19471, Level 16, State 0, Line 2
The WSFC cluster could not bring the Network Name resource with DNS name ‘<DNS name>’ online. The DNS name may have been taken or have a conflict with existing name services, or the WSFC cluster service may not be running or may be inaccessible. Use a different DNS name to resolve name conflicts, or check the WSFC cluster log for more information.

Msg 19476, Level 16, State 4, Line 2
The attempt to create the network name and IP address for the listener failed. The WSFC service may not be running or may be inaccessible in its current state, or the values provided for the network name and IP address may be incorrect. Check the state of the WSFC cluster and validate the network name and IP address with the network administrator.
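These errors are typically raised by the listener-creation statement itself. A sketch of such a statement follows (the availability group name, listener name, IP address and subnet mask are hypothetical):

-- Hypothetical listener creation that can fail with errors 19471/19476
ALTER AVAILABILITY GROUP [MyAG]
ADD LISTENER N'MyAGListener' (
WITH IP ( (N'10.0.0.25', N'255.255.255.0') ),
PORT = 1433
);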

As per the documentation, the cluster name object should have "Create Computer objects" and "Read all properties" permissions within Active Directory.

Reference: https://msdn.microsoft.com/en-us/library/hh213080.aspx#WinPermissions

We requested that the Active Directory team grant "Create Computer objects" permission for the Windows cluster and the listener within Active Directory.

After the permission was granted, we were able to create the availability group listener from SQL Server.

Hope you enjoyed the post!

Cheers

Ramasankar Molleti


How To Change SQL Server Instance Collation After Installation

Last week, one of the support team members contacted me for help with changing the collation without re-installing SQL Server. The team had installed and configured SQL Server but forgot to set the collation required by the customer's application. The good thing about this: it was a brand new install.

Here is how we can change the collation of a SQL Server instance without re-installing:

  1. Back up any user databases and jobs (in this case, only one database was configured and there were no jobs yet)
  2. Detach the user databases
  3. Determine the required SQL Server instance collation (in this case Latin1_General_100_CI_AS)
  4. Rebuild the system databases by using the command below

Setup /QUIET /ACTION=REBUILDDATABASE /INSTANCENAME=MSSQLSERVER
/SQLSYSADMINACCOUNTS=accountname [ /SAPWD=accountpwd ]
/SQLCOLLATION=Latin1_General_100_CI_AS

 

Reference: https://msdn.microsoft.com/en-us/library/ms179254.aspx

5.  Once the rebuild operation is complete, check the collation to verify that the change was successful, as shown in the query below. There you go!
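One way to verify the instance collation after the rebuild is a quick SERVERPROPERTY check:

-- Returns the server-level collation after the rebuild
SELECT SERVERPROPERTY('Collation') AS ServerCollation;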

 

 

6. Attach any user databases that were detached in step 2

7. Change the collation of any user databases if needed (ALTER DATABASE DBName COLLATE SQL_Latin1_General_CP1_CI_AI), as sketched below
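A minimal sketch of step 7 (DBName is a placeholder; switching to SINGLE_USER first is one way to get the exclusive access the collation change typically requires):

-- Get exclusive access, change the database collation, then allow connections again
ALTER DATABASE [DBName] SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
ALTER DATABASE [DBName] COLLATE SQL_Latin1_General_CP1_CI_AI;
ALTER DATABASE [DBName] SET MULTI_USER;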

 

Now we have the instance with the new collation setting.

Hope you enjoyed the post!

Cheers

Ramasankar Molleti
