Tuesday, October 4, 2016

SQL Server - Top CPU Consuming Queries

The query below lists the top 10 resource (CPU) consuming queries. It can be a good tool for a SQL Server database professional to see how the database is doing and which queries are taking the most CPU on the server.

Also, you can tell whether a query is getting recompiled by looking at the “execution_count” and “plan_generation_num” columns (a quick filter for this is at the end of this post).
I use this query to find the database queries which require performance optimization, and I hope it helps others as well.

SELECT TOP 10
       Substring(qt.text, ( qs.statement_start_offset / 2 ) + 1,
                 ( ( CASE qs.statement_end_offset
                       WHEN -1 THEN Datalength(qt.text)
                       ELSE qs.statement_end_offset
                     END - qs.statement_start_offset ) / 2 ) + 1) AS sql_text,
       qs.execution_count,
       qs.plan_generation_num,
       qs.total_logical_reads,
       qs.last_logical_reads,
       qs.total_logical_writes,
       qs.last_logical_writes,
       qs.total_worker_time,
       qs.last_worker_time,
       qs.total_elapsed_time / 1000000 AS total_elapsed_time_in_S,
       qs.last_elapsed_time / 1000000  AS last_elapsed_time_in_S,
       qs.last_execution_time,
       qp.query_plan
FROM   sys.dm_exec_query_stats qs
       CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) qt
       CROSS APPLY sys.dm_exec_query_plan(qs.plan_handle) qp
ORDER  BY qs.total_worker_time DESC


Query Result


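As mentioned above, “plan_generation_num” can reveal recompiles. Here is a minimal sketch against the same DMVs (not part of the original query) that surfaces statements whose plans have been regenerated:

-- Statements whose plan has been regenerated at least once;
-- a high plan_generation_num suggests frequent recompilation.
SELECT TOP 10
       qs.plan_generation_num,
       qs.execution_count,
       qt.text AS sql_text
FROM   sys.dm_exec_query_stats qs
       CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) qt
WHERE  qs.plan_generation_num > 1
ORDER  BY qs.plan_generation_num DESC
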
Friday, September 30, 2016

SQL Server - Log Shipping Overview

In a distributed database application environment, it is often necessary to synchronize different database servers: back up, copy transaction logs, and so on. SQL Server 2005 provides an advanced feature called Log Shipping. Log shipping is an automated process for backing up, copying, and restoring transaction logs and synchronizing the databases of distributed database server applications, which can improve the performance and availability of the database.


What is Log Shipping?
Log shipping is used to synchronize distributed database servers. It synchronizes the databases by backing up, copying, and restoring the transaction log. SQL Server uses SQL Server Agent jobs to automate these processes. Log shipping does not provide automatic failover: it only keeps the databases synchronized, so if the primary server fails, it will not redirect your application to the secondary server. That has to be done manually.
The main functions of log shipping are:
  • Backing up the transaction log of the primary database
  • Copying the transaction log backup to the secondary servers
  • Restoring the transaction log backup on the secondary servers

Log Shipping Prerequisites
  • At least two database servers or two SQL Server 2005 Database Engine instances
  • The configuring user should have admin privileges on the servers
  • SQL Server Agent service configured properly
  • The primary database “Recovery Model” must be FULL (see the check below this list)
  • A shared folder (accessible to both primary and secondary servers) for copying the transaction logs
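You can verify the recovery model before configuring log shipping. A minimal sketch, where YourPrimaryDB is a placeholder name:

-- Check the current recovery model of the primary database.
SELECT name, recovery_model_desc
FROM   sys.databases
WHERE  name = 'YourPrimaryDB';

-- Switch to FULL if required.
ALTER DATABASE YourPrimaryDB SET RECOVERY FULL;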

Components of Log Shipping
For implementing log shipping, we need the following components:
  • Primary Database Server: The primary server is the main SQL Server database instance, which is accessed by the application. The primary server contains the primary database.
  • Secondary Database Server: The secondary server is a separate SQL Server database instance that holds a copy of the primary database. You can have multiple secondary servers if needed.
  • Monitoring Server: The monitoring server is a SQL Server instance that tracks the log shipping process.


Log Shipping Database Server Configuration
You can start log shipping by right-clicking the primary database > Properties > Transaction Log Shipping > selecting the check box, and then configuring your settings.
Figure 1: Enable Log Shipping

Further configurations:
Figure 2: Transaction Log Backup Setting for Primary Server

Configure Secondary Server:
Figure 3: Secondary Server Configuration

Post Log Shipping Activity
After you have configured log shipping, three main jobs are created: a backup job, a copy job, and a restore job. You can monitor the status of all the log shipping servers. The history and status of log shipping operations are always saved locally by the log shipping jobs: the history and status of the backup operation are stored at the primary server, and the history and status of the copy and restore operations are stored at the secondary server. If a monitor server has been implemented, this information is also stored on the monitor server.
The following catalog tables, stored in msdb, can explain almost everything about the configured and running log shipping process (a sample query against them follows the list):
  • log_shipping_monitor_alert: Stores the alert job ID.
  • log_shipping_monitor_error_detail: Stores error details for log shipping jobs.
  • log_shipping_monitor_history_detail: Contains history details for the log shipping agents.
  • log_shipping_monitor_primary: Stores one monitor record per primary database in each log shipping configuration, including information about the last backup file, which is useful for monitoring.
  • log_shipping_monitor_secondary: Stores one monitor record per secondary database, including information about the last copied and last restored files.
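A minimal sketch for querying these monitor tables (they live in msdb):

-- Last backup per primary database.
SELECT primary_server, primary_database,
       last_backup_file, last_backup_date
FROM   msdb.dbo.log_shipping_monitor_primary;

-- Last copied/restored files per secondary database.
SELECT secondary_server, secondary_database,
       last_copied_file, last_restored_file, last_restored_date
FROM   msdb.dbo.log_shipping_monitor_secondary;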


Thursday, September 29, 2016

SQL Server - Disk I/O Baseline Capture


A DBA’s primary job is to monitor the SQL Server instance continuously throughout the day to troubleshoot any performance issue. At the server level, you need to keep monitoring how the utilization of the following goes:

  • RAM
  • CPU
  • Disk


I already explained in my post SQL Server - RAM Used By Database how to identify memory usage by database (and by table). That can be a good tool for understanding memory usage and taking the right steps to resolve any memory bottleneck.

Let’s talk about disk I/O by database. Using the query below, you can capture multiple I/O snapshots (at a specific time interval) for a baseline report. You can then compare an old disk I/O report with the latest one to find the difference; if it is significant, you can plan actions to take care of any future problem (a sketch for persisting and comparing snapshots appears after the query result below).

-- Capturing baseline disk I/O statistics from sys.dm_io_virtual_file_stats.
SELECT Db_name(mf.database_id) AS databaseName,
       mf.physical_name,
       divfs.num_of_reads,
       divfs.num_of_bytes_read,
       divfs.io_stall_read_ms,
       divfs.num_of_writes,
       divfs.num_of_bytes_written,
       divfs.io_stall_write_ms,
       divfs.io_stall,
       divfs.size_on_disk_bytes,
       Getdate()               AS baselineDate
FROM   sys.dm_io_virtual_file_stats(NULL, NULL) AS divfs
       JOIN sys.master_files AS mf
         ON mf.database_id = divfs.database_id
            AND mf.file_id = divfs.file_id

Query Result

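To build the baseline mentioned above, you can persist each snapshot and diff two capture dates. A minimal sketch; dbo.DiskIOBaseline is a table name assumed for illustration:

-- First run: create the baseline table from a snapshot.
-- (On later runs, use INSERT INTO dbo.DiskIOBaseline SELECT ... instead.)
SELECT Db_name(mf.database_id) AS databaseName,
       mf.physical_name,
       divfs.num_of_reads,
       divfs.num_of_writes,
       divfs.io_stall,
       Getdate()               AS baselineDate
INTO   dbo.DiskIOBaseline
FROM   sys.dm_io_virtual_file_stats(NULL, NULL) AS divfs
       JOIN sys.master_files AS mf
         ON mf.database_id = divfs.database_id
            AND mf.file_id = divfs.file_id;

-- Difference between an older and a newer snapshot, per file.
SELECT curr.physical_name,
       curr.num_of_reads  - prev.num_of_reads  AS reads_delta,
       curr.num_of_writes - prev.num_of_writes AS writes_delta,
       curr.io_stall      - prev.io_stall      AS io_stall_delta_ms
FROM   dbo.DiskIOBaseline AS curr
       JOIN dbo.DiskIOBaseline AS prev
         ON prev.physical_name = curr.physical_name
WHERE  curr.baselineDate > prev.baselineDate;
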
Tuesday, September 27, 2016

SQL Server - Best Practices for Using Service Owner Account

Securing SQL Server is one of the top priorities of a SQL Server DBA. It is very important for DBAs and system admins to make sure that SQL Server is HIGHLY SECURED and not exposed to users who don’t need access to it. Only valid users MUST be granted the permissions they require.

Apart from configuring the right security/permissions for SQL Server users/logins, it is also very important to select the right account to run SQL Server services.


When to use Domain User Account?

When your SQL Server interacts with other servers, services, or resources on the network (e.g., file shares) or uses linked servers to connect to other SQL Server instances on the network, a DBA should use a low-privileged domain user account for running the SQL Server service. A domain user account is the recommended account for a SQL Server service that interacts with other servers on the network. The advantage of a domain user account is that it is controlled by Windows Active Directory; therefore, domain-level account policies apply to the SQL Server account as well.


When to use Network Service Account?

A DBA should never use the Network Service account for running SQL Server services. The Network Service account is shared with other services running on the local computer. It is a built-in account that has more access to server resources and objects than accounts in the local Users group.

Any SQL Server service that runs under the Network Service account can access network resources by using the credentials of the computer account. This account shows up as (NT AUTHORITY\NETWORK SERVICE) when configuring the SQL Server service.


When to use Local User Account?

When SQL Server does not interact with other servers or resources on the network (e.g., file shares or linked servers), a DBA can use a low-privileged local user account for running the SQL Server service.

Recommendation: a DBA should not use a local user account with administrative privileges to run the SQL Server service.


When to use Local System Account?

A DBA should never, ever use the Local System account for running the SQL Server service. The Local System account has more permissions than a DBA would think. It is a very high-privileged built-in account created by the Windows OS.

The Local System account has extensive privileges on the entire local system and acts as the computer on your company’s network. This account shows up as (NT AUTHORITY\SYSTEM) when configuring the SQL Server service.
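Whichever account type you choose, it helps to verify what each SQL Server service actually runs under. A minimal sketch; sys.dm_server_services is available in SQL Server 2008 R2 SP1 and later:

-- Show the account, status, and startup type of each SQL Server service.
SELECT servicename, service_account, status_desc, startup_type_desc
FROM   sys.dm_server_services;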


SQL Server - FileStream

FILESTREAM

FILESTREAM allows us to store and manage unstructured data in SQL Server more easily. The FILESTREAM feature is available in SQL Server 2008 and later releases.

Steps to configure and run statements using SQL Server FileStream:

Enable Filestream (Step – 1)

-- 2 = FILESTREAM access enabled for Transact-SQL and Win32 streaming
EXEC sp_configure filestream_access_level, 2
RECONFIGURE
GO
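Note that FILESTREAM must also be enabled at the Windows service level (SQL Server Configuration Manager > service properties > FILESTREAM tab) before this instance-level setting takes effect. A quick check of both levels:

-- Configured vs. effective FILESTREAM access level.
SELECT SERVERPROPERTY('FilestreamConfiguredLevel') AS configured_level,
       SERVERPROPERTY('FilestreamEffectiveLevel')  AS effective_level;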



Create a FILESTREAM-enabled database (Step – 2)

CREATE DATABASE Sample_FS
ON PRIMARY
    (
        NAME = SamFS1,
        FILENAME = 'E:\Data\samfs1.mdf'
    ),
FILEGROUP FSGroup1 CONTAINS FILESTREAM
    (
        NAME = SamFS2,
        FILENAME = 'E:\Data\Filestream1'
    )
LOG ON
    (
        NAME = SamFSlog1,
        FILENAME = 'E:\Data\samfslog1.ldf'
    )
GO
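After creation, you can confirm the FILESTREAM filegroup exists with a quick look at the catalog views:

-- FSGroup1 should be listed with type_desc = FILESTREAM_DATA_FILEGROUP.
SELECT name, type_desc
FROM   Sample_FS.sys.filegroups;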


Create a Table for Storing FILESTREAM Data (Step – 3)

CREATE TABLE Sample_FS.dbo.Example
(
    [ExampleID] [UNIQUEIDENTIFIER] ROWGUIDCOL NOT NULL UNIQUE,
    [SNo]       INT UNIQUE,
    [FS_Column] VARBINARY(MAX) FILESTREAM NULL
)
GO


Inserting a Row That Contains FILESTREAM Data (Step – 4)

-- Inserting NULL
INSERT INTO Sample_FS.dbo.Example VALUES (newid(), 1001, NULL);

-- Inserting a Zero-Length Record
INSERT INTO Sample_FS.dbo.Example VALUES (newid(), 2001, CAST('' AS VARBINARY(MAX)));

-- Creating a Data File
INSERT INTO Sample_FS.dbo.Example VALUES (newid(), 3001, CAST('This will be viewed using NOTEPAD from the FileStream location.' AS VARBINARY(MAX)));

-- Creating a Data File using existing .jpg file
DECLARE @p_img AS VARBINARY(MAX);
SELECT @p_img = CAST(bulkcolumn AS VARBINARY(MAX))
FROM OPENROWSET(BULK 'D:\error.jpg', SINGLE_BLOB ) AS x;
INSERT INTO Sample_FS.dbo.Example ([ExampleID], [SNo], [FS_Column])
SELECT NEWID(), 4001, @p_img;


Update FILESTREAM Data (Step – 5) 

UPDATE Sample_FS.dbo.Example
SET    [FS_Column] = Cast('This is the update' AS VARBINARY(MAX))
WHERE  [SNo] = 2001;


Deleting FILESTREAM Data (Step – 6) 

DELETE FROM Sample_FS.dbo.Example
WHERE  [SNo] = 1001;


Useful Instructions

  • Make sure the E drive has a folder named “Data”. (See Step 2; the FILESTREAM container folder “Filestream1” must not already exist, as SQL Server creates it.)
  • Make sure the D drive has a file named “error.jpg”. (See Step 4.)
  • You can use a SELECT statement on the “Example” table, but the data in the FILESTREAM column will be shown in hexadecimal (see the read-back query after this list).
  • Once a file is stored using FILESTREAM, the original source file is no longer needed; e.g., after the Step 4 insert succeeds, “error.jpg” can be removed from the D drive.
  • You can still get/see the same “error.jpg” content under the container location “E:\Data\Filestream1\_____”.
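
To read the stored values back in a readable form, and to see the file-system path SQL Server manages for each row, here is a minimal sketch (PathName() is the built-in method available on FILESTREAM columns):

-- Show the stored bytes as text plus the managed FILESTREAM path.
SELECT [SNo],
       CAST([FS_Column] AS VARCHAR(MAX)) AS fs_text,
       [FS_Column].PathName()            AS fs_path
FROM   Sample_FS.dbo.Example
WHERE  [FS_Column] IS NOT NULL;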