Forgot SQL Server “SA” password? Don’t Worry

Hi Rama,

Need your help. I forgot the “SA” password and I am unable to access SQL Server via Windows or SQL authentication. Is there any way we can recover the “SA” account?

Hello Vamshi,

Yes, we can, as long as you are able to RDP to the server and you are a member of the built-in Administrators group on the Windows box.

Thanks Rama. I am an administrator on the Windows box and I am able to RDP to the server. Could you guide me through recovering the “SA” account?

Sure, Vamshi. Follow the steps below to reset the “SA” account password.

  • Log on to the Windows box and make sure the built-in administrator account is enabled


  • Go to SQL Server Configuration Manager and add the -m startup parameter so that SQL Server starts in single-user mode


  • Restart the SQL Server service


  • Now log on to SQL Server with the built-in administrator account


  • Here you go, you are able to connect to SQL Server. Now change the “SA” account password


  • After that, remove the -m parameter in SQL Server Configuration Manager and restart the SQL Server service


  • Connect to SQL Server with the new “SA” password. There you go!

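If you prefer to script the reset instead of using the SSMS UI, here is a minimal T-SQL sketch you can run once you are connected as the Windows administrator while the instance is still in single-user mode. The password value is only a placeholder; replace it with your own strong password.

 -- Run while connected as a local administrator (instance started with -m)
ALTER LOGIN [sa] WITH PASSWORD = 'UseYourOwnStrongPassword1!';  -- placeholder password
ALTER LOGIN [sa] ENABLE;  -- only needed if the SA login was disabled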

 

Hope you enjoyed the post!

Cheers

Ramasankar Molleti

LinkedIn: LinkedIn Profile

Twitter: Twitter

 

 

 

PowerShell Script to Add a New Database to Existing Availability Groups

One of the common tasks for a DBA is adding a new database to an existing availability group. This can be done fairly easily via SSMS. The requirement here was to deploy a new database from Team Foundation Server (TFS) to a SQL Server AlwaysOn AG listener and automatically add that database to the availability group. Here is the script I wrote for this purpose. I used PowerShell, as it is a very convenient tool to automate from TFS.

 Write-host "Adding Database to existing Availability Groups"

$BackupLocation = "Provide the shared location where both nodes can access"
$dbName = "DatabaseName"
$ListenerName = "AGListener"

$ListReplicaNames = Invoke-Sqlcmd -Query "SELECT name, replica_server_name, role_desc FROM master.sys.availability_replicas as AR
INNER JOIN master.sys.dm_hadr_availability_replica_states as ARS ON AR.replica_id = ARS.replica_id
INNER JOIN master.sys.availability_groups as AG ON AR.group_id = AG.group_id" -ServerInstance $ListenerName
$replica1 = $ListReplicaNames | Where-Object {$_.role_desc -eq "PRIMARY"}
$replica2 = $ListReplicaNames | Where-Object {$_.role_desc -eq "SECONDARY"}
$server1 = $replica1.replica_server_name
$server2 = $replica2.replica_server_name
$AGName = $replica1.name
$dateStamp = Get-Date -Format "yyyyMMdd_HHmmss"  # formatted so the stamp contains no characters that are invalid in file names

try
{

#Create Backup of DB node 1
$bakPath = $BackupLocation + $dbName + "_" + $dateStamp + ".bak"
$trnPath = $BackupLocation + $dbName + "_" + $dateStamp + ".trn"

set-location "SQLSERVER:\SQL\$server1\Databases"
Write-Host "Backing up $server1 $dbName to $bakPath"
Backup-SqlDatabase -database $dbName -backupFile $bakPath
Write-Host " ... backup complete"

Write-Host "Backing up $server1 $dbName to $trnPath"
Backup-SqlDatabase -Database $dbName -BackupFile $trnPath -BackupAction Log
Write-Host " ... backup complete"

#Restore backup of DB on node 2
set-location "SQLSERVER:\SQL\$server2\Databases"
Write-Host "Restoring $server2 $dbName from $bakPath"
Restore-SqlDatabase -Database $dbName -BackupFile $bakPath -ServerInstance $server2 -NoRecovery
Write-Host " ... restoring complete"

Write-Host "Restoring $server2 $dbName from $trnPath"
Restore-SqlDatabase -Database $dbName -BackupFile $trnPath -ServerInstance $server2 -RestoreAction Log -NoRecovery
Write-Host " ... restoring complete"
#Add Database in AvailabilityGroups
Write-Host "Add database: $dbName to availability group: $AGName"
Add-SqlAvailabilityDatabase -Path SQLSERVER:\Sql\$server1\AvailabilityGroups\$AGName\AvailabilityDatabases -Database $dbName
Add-SqlAvailabilityDatabase -Path SQLSERVER:\Sql\$server2\AvailabilityGroups\$AGName\AvailabilityDatabases -Database $dbName
Write-Host " Database $dbname has been added"
}
catch
{
$ErrorMessage = $_.Exception.Message
Write-Host $ErrorMessage
}
finally
{
# This will write the execution time logged in the AddDBToAG.log file
$Time=Get-Date
"This script executed at $Time" | out-file D:\logs\AddDBToAG.log -append
}
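Once the script completes, a quick way to confirm that the database has joined the availability group and is synchronizing is to query the AlwaysOn DMVs on the primary replica (or through the listener); a minimal T-SQL sketch:

 -- Run against the primary replica or the AG listener
SELECT ag.name AS availability_group,
       DB_NAME(drs.database_id) AS database_name,
       drs.synchronization_state_desc
FROM sys.dm_hadr_database_replica_states AS drs
JOIN sys.availability_groups AS ag ON drs.group_id = ag.group_id
WHERE drs.is_local = 1;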

 

 

Hope you enjoyed the post!

Cheers

Ramasankar Molleti

LinkedIn: LinkedIn Profile

Twitter: Twitter

 

 

SSIS Error – The Binary code for the script is not found

The application team reported that one of the SSIS packages was failing after recent changes, with the message “The binary code for the script is not found”.

I was asked to troubleshoot the issue. I opened the SSIS package in SQL Server Data Tools (SSDT) and tried to run it from there. Here is the error I received:


The package used a Script Task, so I opened the script to look for coding or syntax errors. I found a missing semicolon and code that had never been compiled. I corrected the logic in the Script Task and saved the package.

After that I ran the package again and it completed successfully.

 

Conclusion:

The error “The binary code for the script is not found” usually occurs when a Script Task in SSIS is saved with compilation errors. Edit the Script Task, correct the issues, rebuild the script, and then execute the package.

Hope you enjoyed the post!

Cheers

Ramasankar Molleti

LinkedIn: LinkedIn Profile

Twitter: Twitter

Query Store – Key feature in SQL Server 2016

As a DBA, how many times have you heard a user report that a query is suddenly running more slowly than it did in the past?

“The SQL Agent job usually takes 3 hours, but it has been running for more than 12 hours. Could you please investigate?”

What would you do in this case?

There are a number of common questions to work through for slow-running queries:

  • Is the performance problem related to a component other than queries?
  • Are there any other components that might be causing or contributing to performance degradation?
  • If the performance issue is related to queries, which query or set of queries is involved?
  • How do I analyze the performance of a slow-running query?
  • Was the query optimized with useful statistics?
  • Are suitable indexes available?
  • Are there any data or index hot spots?
  • Is the query optimizer provided with the best opportunity to optimize a complex query?
  • If you have a large volume of data, do you need to partition it?

We can use free tools like Plan Explorer, sp_WhoIsActive, and sp_BlitzCache to determine the root cause. But whichever method you use, you need to collect data from various DMVs and run traces to analyze the performance. In most cases the root cause is plan regression, where the execution plan of a specific query has changed.

How does the new Query Store feature in SQL Server 2016 help?

The new Query Store feature in SQL Server 2016 simplifies the identification of performance issues and the troubleshooting of queries whose performance has degraded because of execution plan changes (plan regression).

The Query Store captures query runtime statistics and provides dashboards to sort queries by resource consumption. This collection of data is very useful for a DBA troubleshooting performance issues.

How to Enable Query Store?

Connect to the SQL Server Database Engine –> Go to Object Explorer –> Right-click the user database –> Go to Properties –> Click on Query Store –> set the Operation Mode

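If you prefer T-SQL over the SSMS dialog, the same setting can be turned on with ALTER DATABASE; a minimal sketch, assuming a database named YourDatabase:

 -- Enable the Query Store and allow it to capture data
ALTER DATABASE [YourDatabase]
SET QUERY_STORE = ON (OPERATION_MODE = READ_WRITE);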

Once you enable the Query Store, you can see a Query Store folder created under the database where you enabled it. Expand the folder and check the views “Regressed Queries”, “Overall Resource Consumption”, “Top Resource Consuming Queries”, and “Tracked Queries”.


What is “Regressed Queries”?

This is used to review queries that might have regressed because of execution plan changes. The dashboard allows you to view the queries and their plans as well as to select queries based on statistics (total, average, minimum, maximum, and standard deviation) by query metric (duration, CPU time, memory consumption, logical reads, logical writes, and physical reads) for the top 25 regressed queries over the last hour.


What is Overall Resource Consumption?

This is used to visualize overall resource consumption during the last month in four charts: duration, execution count, CPU time, and logical reads. You have the option to toggle between a chart view and a grid view of the query store data.


What is Top Resource Consuming Queries?

This is used to review queries in the set of top 25 resource consumers during the last hour. You can filter the queries by using the same criteria available in the Regressed Queries dashboard.

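The same information is available through the Query Store catalog views; here is a minimal T-SQL sketch that lists queries by average duration (the TOP value and the ordering are just an example, not the exact logic behind the dashboard):

 -- Top queries by average duration from the Query Store catalog views
SELECT TOP (25)
       q.query_id,
       qt.query_sql_text,
       p.plan_id,
       rs.avg_duration,
       rs.count_executions
FROM sys.query_store_query AS q
JOIN sys.query_store_query_text AS qt ON q.query_text_id = qt.query_text_id
JOIN sys.query_store_plan AS p ON p.query_id = q.query_id
JOIN sys.query_store_runtime_stats AS rs ON rs.plan_id = p.plan_id
ORDER BY rs.avg_duration DESC;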

What is Tracked Queries?

This is used  to monitor a specific query.


That’s really Cool!

References: 

https://msdn.microsoft.com/en-us/library/dn817826.aspx

https://blogs.msdn.microsoft.com/microsoft_press/2015/12/22/free-ebook-introducing-microsoft-sql-server-2016-mission-critical-applications-deeper-insights-hyperscale-cloud-preview-edition/

Hope you enjoyed the post!

Cheers

Ramasankar Molleti

LinkedIn: LinkedIn Profile

Twitter: Twitter

 

 

The report server cannot open a connection to the report server database

A student asked me about an error he received while connecting to the report server URL: “The report server cannot open a connection to the report server database”.


The error states that the report server database is not accessible to the report server.

Let us look at the report server configuration settings.
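Before opening the configuration tool, a quick check from SSMS can confirm whether the report server catalog databases exist on the instance at all; a minimal sketch (ReportServer and ReportServerTempDB are the default names, so adjust the filter if your instance uses different ones):

 SELECT name, state_desc
FROM sys.databases
WHERE name LIKE 'ReportServer%';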

Go to SQL Server Reporting Services Configuration Manager –> Click on Database, as shown below

2.PNG

Figure 2: 

As you can see in Figure 2, no report server database is configured for the instance. Let’s configure the report server database.

Click on Change Database and follow the screenshots below to complete the configuration.

Figure 3:

Figure 4:

Figure 5:

Figure 6:

Figure 7:

Figure 8:

Figure 9:

Now connect to the report manager URL. Here you go!


Figure 10:

Note: Sometimes you may face a permission issue while connecting to the report manager URL, as shown in Figure 11.


Figure 11:

When you get a permission error like the one above:

  • Open the browser as administrator
  • Go to the report manager URL –> click on Site Settings –> click on Security


Figure 12:

  • Click on New Role Assignment


Figure 13:

  • Add the user to the System Administrator role


  • Close and reopen the browser. There you go! You are now able to connect to the report server.


Hope you enjoyed the post!

Cheers

Ramasankar Molleti

LinkedIn: LinkedIn Profile

Twitter: Twitter

 

Backup and Restore a Database Without Non-Clustered Indexes

In this post, I’m going to demonstrate how to back up a database without including its non-clustered indexes.

Before describing how to eliminate index data from backups, it is important to point out that a non-clustered index contains a copy of data from its associated table (the index key columns and any included columns), sorted differently to allow efficient searching during query processing. Eliminating this redundant index data from routine database backups does not limit the recoverability of the database from its backup; the indexes can always be re-created later if necessary.

Before we create any tables and indexes, we will first create a database called “TestBackup”:

 -- Create a testbackup to demonstrate the example
CREATE DATABASE TestBackup;

This creates a database called “TestBackup” with a PRIMARY filegroup. By default, SQL Server stores database objects in the PRIMARY filegroup unless you specify otherwise.

Let’s see which filegroups the database contains:

 USE TestBackup;
SELECT name, data_space_id, is_default FROM sys.filegroups; 


Figure 1

By default, unless additional filegroups are explicitly created, all tables created within the database are stored in the PRIMARY filegroup.

Let’s continue by creating a simple table

 CREATE TABLE dbo.Employee (
    Employee_No INT NOT NULL PRIMARY KEY,
    Ename NVARCHAR(50) NOT NULL,
    Sal MONEY NOT NULL
);

To verify that the preceding table was created within the PRIMARY filegroup, execute the following command:

 SELECT d.*
FROM sys.data_spaces d, sys.indexes i
WHERE i.object_id = OBJECT_ID('dbo.Employee')
AND d.data_space_id = i.data_space_id
AND i.index_id < 2; 


Figure 2

Now we will create a non-clustered index on the Employee table:

 CREATE NONCLUSTERED INDEX ncix_Employee ON dbo.Employee (Employee_No); 

Verify which filegroup the non-clustered index was created on:

 SELECT I.NAME, I.INDEX_ID, I.TYPE_DESC, I.DATA_SPACE_ID
FROM SYS.DATA_SPACES D, SYS.INDEXES I
WHERE I.OBJECT_ID = OBJECT_ID('DBO.EMPLOYEE')
AND D.DATA_SPACE_ID = I.DATA_SPACE_ID
AND I.INDEX_ID > 1; 


Figure 3

You can see that both the table and the index are on the default PRIMARY filegroup; see Figures 2 and 3, column data_space_id (= 1). Next, we are going to add a new filegroup and a secondary data file for non-clustered indexes, and then move the non-clustered index to the newly added filegroup.

 ALTER DATABASE TestBackup
ADD FILEGROUP NCIX_FG;

ALTER DATABASE TestBackup
ADD FILE (
    NAME = N'TestBackup_NCIX_FG1',
    FILENAME = N'C:\RAM\TestBackup_NCIX_FG1.ndf')
TO FILEGROUP NCIX_FG;

To move the non-clustered index from the PRIMARY filegroup to the dedicated new filegroup, we re-create the non-clustered index with the DROP_EXISTING option:

 CREATE NONCLUSTERED INDEX ncix_Employee ON dbo.Employee (Employee_No)
WITH DROP_EXISTING
ON NCIX_FG; 

Note: The DROP_EXISTING option causes the new index to be created as a replacement for the existing index of the same name, without needing to explicitly drop the existing index first.

With this, the non-clustered index with the same name is now created on the new filegroup.

Next, I will back up only the PRIMARY filegroup and then restore it.

 BACKUP DATABASE TestBackup
FILEGROUP = 'PRIMARY'
TO DISK = 'C:\Ram\TestBackup_Primary.bak' 

Restore the PRIMARY filegroup backup:

 RESTORE DATABASE TestBackup_New
FILEGROUP = 'PRIMARY'
FROM DISK = 'C:\Ram\TestBackup_Primary.bak'
WITH PARTIAL, RECOVERY,
MOVE N'TestBackup' TO N'C:\Ram\TestBackup.mdf',
MOVE N'TestBackup_log' TO N'C:\Ram\TestBackup_log.ldf',
NOUNLOAD, STATS = 5

Note: I restored on the same instance under a different name (TestBackup_New) and moved the files to a different folder. This brings the database online and available for querying.


Figure 4


Figure 5

You cannot perform INSERT, UPDATE, or DELETE commands, because we restored only the PRIMARY filegroup and the filegroup holding the non-clustered indexes associated with the tables is offline. If you attempt to, you get the errors below.


Figure 6


Figure 7
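A quick way to see which files actually came back with this partial restore is to check the file states in the restored copy; a minimal sketch, assuming the restored database is named TestBackup_New:

 USE TestBackup_New;
SELECT name, physical_name, state_desc
FROM sys.database_files;
-- The file on the NCIX_FG filegroup should show a non-ONLINE state
-- (for example RECOVERY_PENDING), while the PRIMARY data file and the log are ONLINE.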

The main intention of this demonstration is to extract the data from the database without restoring the non-clustered indexes. This is useful if you want the data for testing or analysis and do not want to restore the non-clustered indexes because of low disk space on the destination server.

Hope you enjoyed the post!

Happy Weekend!

Cheers

Ramasankar Molleti

LinkedIn: LinkedIn Profile

Twitter: Twitter

 

 

How to deploy an SSIS package to the SQL Server 2012 SSISDB Catalog

In this post, I’m going to show how to deploy an SSIS package to the SQL Server 2012 Integration Services catalog.

  • Create SSIS catalog


  • After step 1 you can see that the SSIS catalog, named “SSISDB”, has been created. Right-click it and create a folder called “TestSSISPackages” (a T-SQL alternative is shown below).

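The folder can also be created with T-SQL using the built-in catalog.create_folder procedure in SSISDB; a minimal sketch:

 DECLARE @folder_id BIGINT;
EXEC [SSISDB].[catalog].[create_folder]
     @folder_name = N'TestSSISPackages',
     @folder_id = @folder_id OUTPUT;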

  • Now create a sample SSIS project in Visual Studio. For the purpose of this demo I created a sample package called “TestSSIS”.


  • After step 3, right-click the project name and click Deploy.


  • The deployment wizard will appear; follow the screens to complete the package deployment.


  • After step 5, you can see that the package “TestSSIS” has been deployed to the SSIS catalog. There you go!


Hope you enjoyed the post!

Cheers

Ramasankar Molleti

LinkedIn: LinkedIn Profile

Twitter: Twitter

 

Database size report for multiple databases on multiple servers

As a DBA, you are responsible for monitoring database growth on multiple servers in different environments. There are many ways to achieve this; my requirement was to use linked servers to get the database sizes from all of them and email the report to the DBA group.

Here is the script I wrote to get the database sizes from all the linked servers.

 

 CREATE PROCEDURE [dbo].[SumofDBsize]
AS
BEGIN
--This procedure sums the database sizes on all the linked servers.
--A global temp table is used so that the query run by sp_send_dbmail
--(which executes in a separate session) can read it.
IF OBJECT_ID('tempdb.dbo.##sum_volum') IS NOT NULL
BEGIN
DROP TABLE ##sum_volum;
END

CREATE TABLE ##sum_volum (
Server_name varchar(50),
DATABASE_NAME varchar(500),
DATABASE_SIZE bigint
)
DECLARE @tmp_key varchar(50)
DECLARE @db CURSOR
DECLARE @sql nvarchar(max)
SET @db = CURSOR FOR
SELECT
name 'server_name'
FROM sys.servers
ORDER BY server_id

OPEN @db
FETCH NEXT FROM @db INTO @tmp_key

WHILE (@@FETCH_STATUS = 0)
BEGIN
--select @tmp_key
SET @sql = N'insert into ##sum_volum(DATABASE_NAME,DATABASE_SIZE)
select
db.name ,
DATABASE_SIZE = convert(bigint, convert(bigint, sum(s_mf.size))*8)
--REMARKS = convert(varchar(500),null)
from ' + QUOTENAME(@tmp_key) + '.master.sys.master_files s_mf right outer join ' + QUOTENAME(@tmp_key) + '.master.sys.databases db on s_mf.database_id = db.database_id
where
s_mf.state = 0
group by db.name ';

EXEC sp_sqlexec @sql
--SELECT @tmp_key  -- debug output, left commented out
UPDATE ##sum_volum
SET Server_name = @tmp_key
WHERE Server_name IS NULL
OR Server_name = ''
FETCH NEXT FROM @db INTO @tmp_key

END;

CLOSE @db
DEALLOCATE @db
EXEC msdb.dbo.sp_send_dbmail @profile_name = 'SUM_DB_Volumes',
@recipients = 'youremail@gmail.com',
@subject = 'SQL servers Database Volume Reports',
@body = 'Please Find the attachment for the Volume reports',
@execute_query_database = 'YourDBName',
@query = '
select server_name,DATABASE_NAME,sum(DATABASE_SIZE) ''DATABASE_SIZE''
from ##sum_volum
group by server_name,DATABASE_NAME,DATABASE_SIZE
union all
select '''',''Total Size In KB'',sum(DATABASE_SIZE)
from ##sum_volum
union all
select '''',''Total Size In GB'',((sum( DATABASE_SIZE )/1024)/1024)
from ##sum_volum
union all
select '''',''Total Size In TB'',(((sum( DATABASE_SIZE )/1024)/1024)/1024)
from ##sum_volum',

@attach_query_result_as_file = 1,
@query_result_separator = ' ',
@exclude_query_output = 1,
@query_result_no_padding = 1,
@query_result_header = 1,
@query_attachment_filename = 'SQL server volume report.csv'
END

GO 
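A minimal usage sketch, assuming Database Mail is configured with the SUM_DB_Volumes profile and the procedure has been created in your DBA utility database (you can also schedule this call as a SQL Agent job step):

 EXEC [dbo].[SumofDBsize];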

 

Hope you enjoyed the post!

Cheers

Ramasankar Molleti

LinkedIn: LinkedIn Profile

Twitter: Twitter

T-SQL Script to find Free space By Database

This is a handy script to find the free space in each database. It is useful when you troubleshoot database space issues.

 DECLARE @DatabaseInfo TABLE
( ServerName VARCHAR(100),
DatabaseName VARCHAR(100),
FileSizeMB INT,
LogicalFileName sysname,
PhysicalFileName NVARCHAR(520),
Status sysname,
Updateability sysname,
RecoveryMode sysname,
FreeSpaceMB INT,
FreeSpacePct VARCHAR(7),
FreeSpacePages INT,
PollDate datetime)
DECLARE @command VARCHAR(5000)

SELECT @command = 'Use [' + '?' + '] SELECT
@@servername as ServerName,
' + '''' + '?' + '''' + ' AS DatabaseName,
CAST(sysfiles.size/128.0 AS int) AS FileSize,
sysfiles.name AS LogicalFileName, sysfiles.filename AS PhysicalFileName,
CONVERT(sysname,DatabasePropertyEx(''?'',''Status'')) AS Status,
CONVERT(sysname,DatabasePropertyEx(''?'',''Updateability'')) AS Updateability,
CONVERT(sysname,DatabasePropertyEx(''?'',''Recovery'')) AS RecoveryMode,
CAST(sysfiles.size/128.0 - CAST(FILEPROPERTY(sysfiles.name, ' + '''' +
'SpaceUsed' + '''' + ' ) AS int)/128.0 AS int) AS FreeSpaceMB,
CAST(100 * (CAST (((sysfiles.size/128.0 -CAST(FILEPROPERTY(sysfiles.name,
' + '''' + 'SpaceUsed' + '''' + ' ) AS int)/128.0)/(sysfiles.size/128.0))
AS decimal(4,2))) AS varchar(8)) + ' + '''' + '%' + '''' + ' AS FreeSpacePct,
GETDATE() as PollDate FROM dbo.sysfiles'
INSERT INTO @DatabaseInfo
(ServerName,
DatabaseName,
FileSizeMB,
LogicalFileName,
PhysicalFileName,
Status,
Updateability,
RecoveryMode,
FreeSpaceMB,
FreeSpacePct,
PollDate)
EXEC sp_MSForEachDB @command

SELECT
ServerName,
DatabaseName,
FileSizeMB,
LogicalFileName,
PhysicalFileName,
Status,
Updateability,
RecoveryMode,
FreeSpaceMB,
FreeSpacePct,
PollDate
FROM @DatabaseInfo
ORDER BY FreeSpacePct DESC,
ServerName,
DatabaseName 

Hope you enjoyed the post!

Cheers

Ramasankar Molleti

LinkedIn: LinkedIn Profile

Twitter: Twitter

 

 

 

Enabling Instant File Initialization is easy in SQL Server 2016

Starting with SQL Server 2005, Microsoft introduced a feature called Instant File Initialization. By default, when you create a database, add a new data file, or increase the size of an existing file, SQL Server first initializes the new file space by filling it with zeros.

This zero-initialization causes these operations to take longer. With instant file initialization, the file space is claimed without the operating system filling it with zeros first, so these operations complete much faster. To take advantage of this, we need to enable “Instant File Initialization” for SQL Server. Prior to SQL Server 2016, you had to edit the Local Security Policy to grant the account that runs the SQL Server service the “Perform volume maintenance tasks” right, as shown in Figure 1:

  1. Run secpol.msc on the server.
  2. Expand the Local Policies Folder
  3. Click on User Rights Assignment
  4. Go down to the “Perform Volume Maintenance Tasks” option and double click it
  5. Add your SQL Server Service account, and click OK out of the dialog.


Figure 1

Microsoft made this easy in SQL Server 2016: when you install SQL Server you can enable the feature on the Server Configuration page, as shown in Figure 2. You do not have to configure anything after installation; once you check the option, setup automatically grants the required permission to the service account under which SQL Server runs.


Figure 2
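If you want to confirm whether instant file initialization is actually in effect on a running instance, newer builds (SQL Server 2012 SP4, and SQL Server 2016 SP1 onwards) expose it in sys.dm_server_services; a minimal sketch:

 -- 'Y' means the database engine service account can use instant file initialization
SELECT servicename, instant_file_initialization_enabled
FROM sys.dm_server_services
WHERE servicename LIKE 'SQL Server (%';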

Hope you enjoyed the post!

Cheers

Ramasankar Molleti

LinkedIn: LinkedIn Profile

Twitter: Twitter