A closer look at everything related to SQL Server

Buffer pages and checkpoints are among the most frequently written and talked about topics in SQL Server. Here I will share my understanding, thoughts, and research on these two topics.

The buffer cache is the RAM allocated to store data pages requested by the system. It avoids trips to the disk subsystem and saves disk IO. Performance Monitor, a Windows tool for monitoring server performance, has a counter called "Buffer Cache Hit Ratio" (BCHR) that shows how well SQL Server utilizes the buffer cache. It gives the ratio of data pages found and read from the SQL Server buffer cache to all data page requests. Ideally the value should be above 95 for OLTP systems. Pages that are not found in the buffer cache are read from disk, which is significantly slower and affects performance. But according to Jonathan Kehayias's article https://www.simple-talk.com/sql/database-administration/great-sql-server-debates-buffer-cache-hit-ratio/, BCHR is not a reliable counter. In various tests, even when SQL Server is under memory pressure, the counter remains above the threshold, so relying solely on this counter could be misleading. A DBA should never base a decision about poor performance on a single Performance Monitor counter, but should look at other counters such as "Page life expectancy" (PLE) and "Read-ahead pages" to get the full picture.
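
For reference, here is a minimal sketch of how these counters can also be read from T-SQL through sys.dm_os_performance_counters instead of Performance Monitor. The object_name prefix shown matches a default instance; a named instance uses a different prefix, so the LIKE filter is kept loose.

-- Buffer Cache Hit Ratio must be computed from the counter and its base counter
SELECT
    (a.cntr_value * 1.0 / b.cntr_value) * 100.0 AS [Buffer Cache Hit Ratio %]
FROM sys.dm_os_performance_counters AS a
JOIN sys.dm_os_performance_counters AS b
      ON  b.object_name  = a.object_name
      AND b.counter_name = 'Buffer cache hit ratio base'
WHERE a.object_name  LIKE '%Buffer Manager%'
  AND a.counter_name = 'Buffer cache hit ratio';

-- Page life expectancy: how many seconds a page is expected to stay in the buffer pool
SELECT cntr_value AS [Page life expectancy (sec)]
FROM sys.dm_os_performance_counters
WHERE object_name LIKE '%Buffer Manager%'
  AND counter_name = 'Page life expectancy';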

Whenever data in a SQL Server database changes (through inserts, updates, or deletes), the change is written to the transaction log. As soon as these log records are written out, the transaction can commit. This implies that the speed of SQL Server transactions depends on the speed of transaction log writing. Pages that were changed in memory are written to the storage subsystem by the "checkpoint" process. By default this process runs about every minute and requests all dirty pages from the buffer manager. A dirty page is a page that was changed in memory but hasn't yet been written to storage. As soon as a dirty page is written out to storage, it is marked as a clean page.
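
To see this in action, here is a rough sketch that counts clean versus dirty pages currently in the buffer pool per database using sys.dm_os_buffer_descriptors (one row per buffered page; is_modified = 1 marks a dirty page the next checkpoint still has to flush), followed by a manual checkpoint of the current database.

SELECT
    DB_NAME(database_id)                              AS database_name,
    SUM(CASE WHEN is_modified = 1 THEN 1 ELSE 0 END)  AS dirty_pages,
    SUM(CASE WHEN is_modified = 0 THEN 1 ELSE 0 END)  AS clean_pages
FROM sys.dm_os_buffer_descriptors
GROUP BY DB_NAME(database_id)
ORDER BY dirty_pages DESC;

-- Force a checkpoint in the current database; its dirty pages are flushed to disk
-- and will show up as clean pages if you rerun the query above.
CHECKPOINT;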

(Applies to SQL Server 2012 Enterprise Edition only)

Problem:
A feature or bug in SQL Server 2012 results in unexpected behavior for identity columns. Technically, the SQL specification does not dictate that identity columns be numerically contiguous, only that they be steadily increasing. In previous versions of SQL Server, Microsoft implemented identity columns to be contiguous, but in 2012 Microsoft changed the behavior so that identity values are cached differently and may no longer be contiguous.

In practice, what tends to happen is that when the SQL Server service is restarted, a failover occurs, a log checkpoint is hit, or possibly under other circumstances, you may see a jump in identity column numbers, for example:

1, 2, 3, 4, (restart occurs), 1001, 1002, 1003, 1004, 1005, (failover occurs), 2000, 2001, 2002. . .

If an application expects contiguous numbering, this may result in unexpected issues and can cause serious data integrity problems.

Solution:
Trace flag 272 can be enabled to force the contiguous numbering behavior of previous versions of SQL Server. The downside is slightly slower identity value generation on insert, which should merely put identity column performance back to where it was in previous versions. You can read the comments SQL Server community leaders have sent to Microsoft on this issue:

http://connect.microsoft.com/SQLServer/feedback/details/739013/failover-or-restart-results-in-reseed-of-identity
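
A quick way to confirm whether the trace flag is active on an instance (a sketch; the usual guidance is to add -T272 as a startup parameter in SQL Server Configuration Manager and restart the service, rather than relying on DBCC TRACEON at runtime):

-- Returns a row with Global = 1 if trace flag 272 is enabled globally
DBCC TRACESTATUS (272, -1);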

Test:
I reproduced the behavior using SQL Server 2012 on my laptop. Anyone can run the script below on a test server to see the issue.

-- Example is using the AdventureWorks2012 Production.WorkOrder table
-- Check if the table has an identity column using sp_help [Production.WorkOrder]

-- First check the latest value of the identity column
select max(WorkOrderID) from Production.WorkOrder -- 72591

-- Now insert a row
INSERT INTO [Production].[WorkOrder]
([ProductID]
,[OrderQty]
,[ScrappedQty]
,[StartDate]
,[EndDate]
,[DueDate]
,[ScrapReasonID]
,[ModifiedDate])
VALUES
(1
,1
,1
,getdate()
,Sysdatetime()
,getdate() +1
,1
,Sysdatetime())
GO

-- Check the value of the identity column
select max(WorkOrderID) from Production.WorkOrder -- 72592

-------- *************** Restart SQL Server service *************** --------
-- Insert another row
INSERT INTO [Production].[WorkOrder]
([ProductID]
,[OrderQty]
,[ScrappedQty]
,[StartDate]
,[EndDate]
,[DueDate]
,[ScrapReasonID]
,[ModifiedDate])
VALUES
(2
,2
,2
,getdate()
,Sysdatetime()
,getdate() +1
,2
,Sysdatetime())
GO

-- Finally, check the identity column value again. This time it jumped to 73592, an increase of 1000.
select max(WorkOrderID) from Production.WorkOrder -- 73592

Result:

After enabling trace flag -T272, I repeated the previous steps and the identity column increased by 1. I repeated the test on SQL Server 2012 Developer Edition and got an increase of 2 even with the trace flag enabled, so at this point the trace flag appears to help only on Enterprise Edition.

WorkOrderID  ProductID  OrderQty  StockedQty  ScrappedQty  StartDate
72591        531        26        26          0            7/3/08 0:00
72592        1          1         0           1            6/16/14 14:36
73592        2          2         0           2            6/16/14 14:37
73593        3          3         0           3            6/16/14 14:37
74592        4          4         0           4            6/16/14 14:38
75593        4          5         0           5            6/16/14 14:42
75594        4          6         0           6            6/16/14 14:43
75595        4          7         0           7            6/16/14 14:44
75596        4          8         0           8            6/16/14 14:46

A SQL Server 2012 bug has been found (http://sqlperformance.com/2014/06/sql-indexes/hotfix-sql-2012-rebuilds) when an index rebuild is done with the ONLINE option.

A related KB article (http://support.microsoft.com/kb/2969896) states that:
The online index rebuild can cause index corruption or data loss when it is used together with concurrent queries that modify many rows. For example, replication updates many rows on a table while the online index build is running on the same table concurrently.

This issue occurs when the following conditions are true:

  1. The online index build query runs in parallel mode.
  2. A deadlock error and a fatal error such as "lock timeout" occur in a specific order.

Note: To work around this issue, run the online index build in serial mode (DOP = 1).

According to the article linked above, SQL Server 2012 SP2 is out now, but the fix for this bug is not included because the bug was discovered recently. The article therefore suggests that you DO NOT apply SP2 until a CU containing the fix is available.
In the meantime, we have these options:
1. Add MAXDOP = 1 to the index rebuild statement when using ONLINE (see the sketch below), OR
2. Run the defrag with ONLINE = OFF, OR
3. Apply the hotfix (http://support.microsoft.com/kb/2969896) available for SQL Server 2012 SP1. As with any hotfix, it has not been tested as thoroughly and can introduce new bugs, OR
4. Wait for SP2 CU2, targeted for release by the end of July, and apply both of them together to SQL Server 2012 SP1.
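
Here is a sketch of options 1 and 2, reusing the AdventureWorks2012 table from the identity test above; any table you actually rebuild would of course be your own.

-- Option 1: keep the rebuild online but force serial execution so the
-- parallel online-rebuild bug is avoided
ALTER INDEX ALL ON Production.WorkOrder
REBUILD WITH (ONLINE = ON, MAXDOP = 1);

-- Option 2: rebuild offline (the table is locked for the duration of the rebuild)
ALTER INDEX ALL ON Production.WorkOrder
REBUILD WITH (ONLINE = OFF);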

Implicit Conversion:

Implicit data type conversion is done by SQL Server automatically, without notifying the user.
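
A small illustration of what that means in practice; the values here are my own, not from any particular workload.

-- The varchar literal '5' is implicitly converted to int (int has higher
-- data type precedence), so the result is 6 rather than the string '15'.
SELECT 1 + '5' AS implicit_result;

-- Comparing different types also triggers implicit conversion: the varchar
-- value is converted to int before the comparison, so the branch is taken.
DECLARE @s varchar(10) = '42';
IF @s = 42
    PRINT 'Implicitly converted and equal';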


Explicit Conversion:
Explicit conversion is done by the user by calling the CONVERT or CAST functions. An example of explicit conversion is converting date and time data types to the format you want.

SELECT GETDATE(),
       CAST(GETDATE() AS time(7)) AS 'time',
       CAST(GETDATE() AS date) AS 'date',
       CAST(GETDATE() AS smalldatetime) AS 'smalldatetime',
       CAST(GETDATE() AS datetime) AS 'datetime',
       CAST(GETDATE() AS datetime2(7)) AS 'datetime2',
       CAST(GETDATE() AS datetimeoffset(7)) AS 'datetimeoffset';

For more information, see http://msdn.microsoft.com/en-us/library/ms187752.aspx


There is so much we miss out on in this world and in our lives just by not being present in the present. We equate not doing anything with wasting time. In reality it is quite the opposite. I have experienced it myself, and the feeling is so freeing and exhilarating that you will wonder why you have not experienced it before. I think the old saying "an idle mind is the Devil's workshop" has misled us.

We are in this world for a short period of time, and yet 50% of the time we are thinking about something that is not present. Whether we are listening to enticing music, watching a favorite TV show, or reading an interesting book, we would not use those adjectives if we were not focused on what we are doing. Enticing music becomes irritating if some nagging thought keeps bothering you. A favorite TV show becomes boring if you constantly think about what you have to do next. An interesting read becomes a chore if you are feeling anxious and stressed.

Actually, I love all three activities I have used as examples here, so I have no problem focusing on them. But there are other areas, such as work and social life, where I want to learn to apply this thinking. The benefit would be more satisfying business interactions, which in turn will lead to doing it again and again. It is like a happiness cycle: once you start it, it gains a momentum of its own.


Who absolutely looks forward to annual performance reviews at work? I do not count myself in that group. I hate the logistics of it: tons of rows to fill in, and sometimes the questions are ambiguous, repetitive, or just plain senseless. Long story short, I got my annual performance review, and let's say it was above average, but my manager suggested one development goal for me: get training on advanced T-SQL queries.

Do not get me wrong. I have been a DBA for more than 10 years, I can write T-SQL, and I keep myself updated on the additions and changes to T-SQL syntax in new versions of SQL Server. I do a lot of reading and research in my areas of expertise (high availability, disaster recovery, and performance tuning) almost daily. But I do not consider myself a developer DBA; I have never had to write complex stored procedures, views, triggers, and the like. According to my manager, this is where I can grow. Let's leave the discussion of whether I agree with him for another time. I plan to take this as a challenge: I have grown in so many areas, why not excel in this one too? The real challenge is to work on something outside of your comfort zone and make yourself do it.

So I decided to blog my journey of learning advanced T-SQL. I searched the net but have not found any classroom training that meets my needs. There is, however, a lot of material available (blogs, YouTube videos, books, white papers, BOL, etc.). That is one of the reasons I never concentrated on learning development: whenever I needed something, I could always find a T-SQL or PowerShell script online written by MVPs and other very intelligent people of the SQL world (wait, that sounds like the name of my blog!). With a little tweaking I was able to use it, and it definitely served the purpose.

Anyway, this is how I plan to learn and blog about advanced T-SQL. I will select one topic, such as advanced error handling, CTEs, or DMVs, and work on it for a week, writing one blog post per week for four weeks. Then I will summarize the four weeks of learning in the fifth week's post with examples, or maybe a little project or something fun. After that, I will start the learning cycle again with the next T-SQL concept.

For now, my intention is that this blog is for my own eyes only. It will also serve as a reminder/repository/evidence of what I have learned and achieved. Down the road, when the going gets tough, it will remind me how persistent I was, and from that I will draw new motivation to move forward.

Cheers to Learning and Blogging!

 

T-SQL 2012 IIF Function

In a nutshell, the IIF function is shorthand for a CASE expression. Here is an example to illustrate its use and syntax.

Create database myDB;
Go
Use myDB;
Go
Create table Employees (id int, name nvarchar(30), city nvarchar(30), region nvarchar(30), country nvarchar(30));
Go

Insert some records to play with.
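
Since the rest of the post is cut off, here is a minimal sketch of how the example could continue, using the Employees table created above. The sample rows and the country check are my own illustration, not the original post's data.

-- Insert a few sample rows (values are hypothetical)
INSERT INTO Employees (id, name, city, region, country)
VALUES (1, N'Asha',  N'Seattle', N'WA', N'USA'),
       (2, N'Brian', N'Toronto', N'ON', N'Canada'),
       (3, N'Chen',  N'Chicago', N'IL', N'USA');
GO

-- IIF(condition, true_value, false_value) is shorthand for a two-branch CASE expression
SELECT name,
       IIF(country = N'USA', N'Domestic', N'International') AS employee_type,
       -- The equivalent CASE expression that IIF expands to:
       CASE WHEN country = N'USA' THEN N'Domestic' ELSE N'International' END AS case_equivalent
FROM Employees;
GO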
