Thursday, December 11, 2014

New and Enhanced tSQL Functions in SQL Server 2012


1. TRY_CONVERT and PARSE
/*Try_Convert and Try_Parse Functions to handle errors by returning NULL values*/

SELECT TRY_CONVERT(INT, 100) AS CorrectConvert, TRY_CONVERT(INT, 'abc') AS ErrorConvert;

Result:
CorrectConvert ErrorConvert
100            NULL


SELECT PARSE('7/17/2011' AS DATE USING 'en-US') AS USDate, PARSE('2011/7/17' AS DATE USING 'ja-JP') AS JapanDate;
Result:
USDate JapanDate
7/17/2011 7/17/2011


2. TRY_PARSE
/*Parsing an invalid value raises an error. For example, the ja-JP culture expects a
four-digit year ('2011/7/17'), so passing '7/17/11' fails*/
SELECT PARSE('7/17/11' AS DATE USING 'ja-JP') AS dt;

Error:
--Msg 9819, Level 16, State 1, Line 1
-- Error converting string value '7/17/11' into data type date using culture 'ja-JP'.

SELECT TRY_PARSE('7/17/11' AS DATE USING 'ja-JP') AS dt;

Result:
dt
NULL



3. Date and Time Functions
SELECT
DATEFROMPARTS(2014, 07, 22)    AS DATE_FROMPARTS,
DATETIME2FROMPARTS(2014, 07, 22, 14, 31, 7, 2, 3) AS DATETIME2_FROMPARTS,
DATETIMEFROMPARTS(2014, 07, 22, 14, 31, 7, 997) AS DATETIME_FROMPARTS,
DATETIMEOFFSETFROMPARTS(2014, 07, 22, 14, 31, 5, 1, -5, 0, 7) AS DATETIMEOFFSET_FROMPARTS,
SMALLDATETIMEFROMPARTS(2014, 07, 22, 14, 31) AS SMALLDATETIME_FROMPARTS,
TIMEFROMPARTS(15, 33, 6, 1, 7) AS TIME_FROMPARTS;



4. Logical/Conditional Functions
SELECT CHOOSE(3, 43466, 'Delhi', 'Agra') AS '3rdPosition', CHOOSE(5, 'MUM', 'pune', 'Bangalore', 'allahabad', 'KNP') AS '5thPosition', CHOOSE(3, 'a', 'b') AS 'NoPosition';

SELECT IIF(4 = 5, 'Ram', 'Shyam') AS iif_result;
SELECT IIF(4 = 4, 'Ram', 'Shyam') AS iif_result;




5. CONCAT and FORMAT
DECLARE   @Passenger TABLE
  (ID INT,
  PassengerID VARCHAR(10),
       CityName   VARCHAR(10),
       StartDate  DATE
    )

INSERT INTO @Passenger VALUES (1,'P00001', 'Delhi', '1-Sep-2014')
INSERT INTO @Passenger VALUES (2,'P00001', 'Mathura', '1-Oct-2014')
INSERT INTO @Passenger VALUES (3,'P00001', 'Etah', '10-Oct-2014')

Query1.
SELECT *, CONCAT(PassengerID, ', ' + 'PsgrID&CtyName', ', ' + CityName) AS ConcatenatedInfo
FROM  @Passenger
Result:
ID PassengerID CityName StartDate  ConcatenatedInfo
1  P00001      Delhi    9/1/2014   P00001, PsgrID&CtyName, Delhi
2  P00001      Mathura  10/1/2014  P00001, PsgrID&CtyName, Mathura
3  P00001      Etah     10/10/2014 P00001, PsgrID&CtyName, Etah

Query2.
SELECT FORMAT(GETDATE(), 'd', 'en-US') AS USDateFormat, FORMAT(ID, '00000000') AS FormatWithZero
FROM  @Passenger

Result:
USDateFormat FormatWithZero
12/11/2014   00000001
12/11/2014   00000002
12/11/2014   00000003
Query3.
SELECT FORMAT(GETDATE(), 'd', 'en-US') AS USCulture, FORMAT(GETDATE(), 'd', 'ja-JP') AS JapaneseCulture, FORMAT(GETDATE(), 'd', 'de-DE') AS GermanCulture

Result:
USCulture  JapaneseCulture GermanCulture
12/11/2014 2014/12/11      11.12.2014




6. LOG
/*LOG now accepts an optional second argument specifying the base (new in SQL Server 2012)*/
Query4.
SELECT LOG(256, 2);
Result.
LOG
8



7. EOMONTH

Query5.
SELECT EOMONTH('12/15/2014') AS EndOfMonth, EOMONTH(SYSDATETIME()) AS EndOfMonth1, EOMONTH(GETDATE()) AS EndOfMonth2;

Result:
EndOfMonth EndOfMonth1 EndOfMonth2
12/31/2014 12/31/2014  12/31/2014
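EOMONTH also takes an optional second argument: a number of months to add before computing the month-end date. A quick sketch:

-- The second argument shifts the input date by that many months
-- before the end-of-month date is returned.
SELECT EOMONTH('12/15/2014', 1)  AS NextMonthEnd,   -- 2015-01-31
       EOMONTH('12/15/2014', -1) AS PriorMonthEnd;  -- 2014-11-30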

Wednesday, December 10, 2014

Pagination in SQL Server 2012


/*Queries to Generate Data for pagination*/

DECLARE   @Passenger TABLE
  (
   ID INT,
       PassengerID VARCHAR(10),
       CityName   VARCHAR(10),
       StartDate  DATE
    )

INSERT INTO @Passenger VALUES (1,'P00001', 'Delhi', '1-Sep-2014')
INSERT INTO @Passenger VALUES (2,'P00001', 'Mathura', '1-Oct-2014')
INSERT INTO @Passenger VALUES (3,'P00001', 'Etah', '10-Oct-2014')
INSERT INTO @Passenger VALUES (4,'P00001', 'Kanpur', '25-Oct-2014')
INSERT INTO @Passenger VALUES (5,'P00001', 'Lucknow', '27-Oct-2014')
INSERT INTO @Passenger VALUES (6,'P00002', 'Allahabad', '1-Oct-2014')
INSERT INTO @Passenger VALUES (7,'P00002', 'Banaras', '1-Nov-2014')
INSERT INTO @Passenger VALUES (8,'P00002', 'Mumbai', '10-Nov-2014')
INSERT INTO @Passenger VALUES (9,'P00002', 'Banaras', '15-Nov-2014')
INSERT INTO @Passenger VALUES (10,'P00002', 'Pune', '25-Nov-2014')
INSERT INTO @Passenger VALUES (11,'P00002', 'Hyd', '28-Nov-2014')
INSERT INTO @Passenger VALUES (12,'P00002', 'Chennai', '30-Nov-2014')
INSERT INTO @Passenger VALUES (13,'P00003', 'Bangalore', '1-Dec-2014')


Query 1:
SELECT   * 
FROM     @Passenger 
ORDER BY Id
OFFSET 3 ROWS
FETCH NEXT 6 ROWS ONLY
Result:
ID PassengerID CityName  StartDate
4  P00001      Kanpur    25-Oct-2014
5  P00001      Lucknow   27-Oct-2014
6  P00002      Allahabad 1-Oct-2014
7  P00002      Banaras   1-Nov-2014
8  P00002      Mumbai    10-Nov-2014
9  P00002      Banaras   15-Nov-2014
Query 2:
SELECT   * 
FROM     @Passenger 
ORDER BY  Id
OFFSET 9 ROWS
FETCH NEXT 3 ROWS ONLY


Result:
ID PassengerID CityName StartDate
10 P00002      Pune     25-Nov-2014
11 P00002      Hyd      28-Nov-2014
12 P00002      Chennai  30-Nov-2014

Note: The OFFSET and FETCH NEXT row counts can never be negative.
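In practice the OFFSET/FETCH pattern is parameterized by page number and page size; a minimal sketch against the same @Passenger table, using hypothetical @PageNumber and @PageSize variables:

DECLARE @PageNumber INT = 2, @PageSize INT = 5   -- hypothetical paging parameters

SELECT   *
FROM     @Passenger
ORDER BY Id
OFFSET (@PageNumber - 1) * @PageSize ROWS
FETCH NEXT @PageSize ROWS ONLY

With @PageNumber = 2 and @PageSize = 5 this skips the first 5 rows and returns rows 6 through 10.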

SQL Server Logical/Physical Reads


Summary Info:

Logical Reads : Reading Data pages from Cache
Physical Reads : Reading Data pages from Hard Disk
Buffer Cache Hit Ratio:  (logical reads – physical reads)/logical read * 100%



Detailed Info:

Logical Reads:
A logical read is a data page read from the buffer cache (memory) while processing a query. The same page may be read many times, so the logical read count can exceed the actual number of pages in the table. The best ways to reduce logical reads are usually to add the correct index or to rewrite the query.

Physical Reads:
A physical read is a data page read from disk. When none of the required pages are in the buffer cache, physical reads equal logical reads; this typically happens on the first execution of a query. Subsequent executions of the same query show far fewer physical reads because the pages are already cached.

Buffer Cache Hit Ratio:
The buffer cache hit ratio is calculated from these two kinds of read with the formula (logical reads – physical reads) / logical reads * 100%. A high ratio (ideally near 100%) indicates good database performance at the SQL Server level. Use physical reads and the buffer cache hit ratio to measure performance at the server level, and logical reads to measure individual queries.
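The server-wide ratio can also be read live from a DMV; a minimal sketch using the counter names exposed by sys.dm_os_performance_counters (the ratio counter must be divided by its "base" counter):

SELECT CAST(a.cntr_value AS FLOAT) / b.cntr_value * 100 AS BufferCacheHitRatio
FROM   sys.dm_os_performance_counters a
JOIN   sys.dm_os_performance_counters b
       ON  b.object_name  = a.object_name
       AND b.counter_name = 'Buffer cache hit ratio base'
WHERE  a.object_name  LIKE '%Buffer Manager%'
  AND  a.counter_name = 'Buffer cache hit ratio'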


Excessive logical reads lead to high memory usage. There are some ways to reduce logical reads:
1. Improper/useless/insufficient indexes: Indexes should be built based on how the data is accessed. An index built on columns a query never uses leads to high logical reads and degrades both read and write performance.

2. Poor fill factor/page density: Pages should not be left mostly empty; otherwise a large number of pages holds a small amount of data, which also drives up logical reads.

3. Wide indexes: Indexing a large number of columns leads to high logical reads.

4. Index scans: A query that scans an entire index on the table will show high logical reads.


Logical read counts can be obtained in the following ways:
1. SET STATISTICS IO ON

2. sys.dm_exec_query_stats
Querying this DMV gives detailed read/write information per cached plan:
SELECT * FROM sys.dm_exec_query_stats

3. SQL Profiler: running a Profiler trace against the database shows the logical reads per statement.

There are some other SQL DMVs/DMFs that also expose logical reads.
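For example, the first two approaches above can be sketched as follows (dbo.Orders is a hypothetical table; the Messages-tab line shown is abbreviated):

SET STATISTICS IO ON;
SELECT * FROM dbo.Orders;   -- hypothetical table
SET STATISTICS IO OFF;
-- Messages tab then shows a line like:
-- Table 'Orders'. Scan count 1, logical reads ..., physical reads ..., read-ahead reads ...

-- Aggregated logical reads per cached plan, with the statement text:
SELECT TOP 10 qs.total_logical_reads, qs.execution_count, st.text
FROM   sys.dm_exec_query_stats qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) st
ORDER BY qs.total_logical_reads DESC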

also visit:
http://www.sqlservercentral.com/Forums/Topic1250154-391-1.aspx

LEAD And LAG Analytic Functions in SQL Server 2012


LEAD and LAG provide access to a row at a given physical offset relative to the current row.
Use these analytic functions in a SELECT statement to compare values in the current row with values in a previous or following row at the given offset.
Note: The offset cannot be negative.

Let us take a passengers example:

Suppose some passengers are travelling from one city to another, passing through many cities along the way. If we want to find the previous city and the next city for each stop, we can achieve this with a common table expression and ranking functions as follows:

-- Let us code this in SQL Server
-- Declare @Passenger Table as below.
DECLARE   @Passenger TABLE
  (
       PassengerID VARCHAR(10),
       CityName   VARCHAR(10),
       StartDate  DATE
    )
/* Insert record in Passenger table, suppose passengers are passing different cities 
on different dates as below*/

INSERT INTO @Passenger VALUES ('P00001', 'Delhi', '1-Sep-2014')
INSERT INTO @Passenger VALUES ('P00001', 'Mathura', '1-Oct-2014')
INSERT INTO @Passenger VALUES ('P00001', 'Etah', '10-Oct-2014')
INSERT INTO @Passenger VALUES ('P00001', 'Kanpur', '25-Oct-2014')
INSERT INTO @Passenger VALUES ('P00001', 'Lucknow', '27-Oct-2014')
INSERT INTO @Passenger VALUES ('P00002', 'Allahabad', '1-Oct-2014')
INSERT INTO @Passenger VALUES ('P00002', 'Banaras', '1-Nov-2014')
INSERT INTO @Passenger VALUES ('P00002', 'Mumbai', '10-Nov-2014')
INSERT INTO @Passenger VALUES ('P00002', 'Banaras', '15-Nov-2014')
INSERT INTO @Passenger VALUES ('P00002', 'Pune', '25-Nov-2014')
INSERT INTO @Passenger VALUES ('P00002', 'Hyd', '28-Nov-2014')
INSERT INTO @Passenger VALUES ('P00002', 'Chennai', '30-Nov-2014')
INSERT INTO @Passenger VALUES ('P00003', 'Bangalore', '1-Dec-2014')

/*Now if we want to find out where passengers were on the next day and the previous day, we need to write the queries below*/
--- Query 1
;
WITH   CTE as
(
        SELECT RN = ROW_NUMBER() OVER (PARTITION BY   PassengerID ORDER BY   StartDate ASC),   *
        FROM @Passenger
)
SELECT
        [Current Row].*,
        ISNULL(DATEADD(DAY, -1, [Next Row].StartDate), '31-Dec-2099') AS EndDate

,ISNULL([Previous Row].CityName,'NoPrv_Cty') PrvCity
,ISNULL([Next Row].CityName,'NoNxt_Cty') NextCity

FROM   CTE AS  [Current Row]
        LEFT JOIN CTE AS [Next Row] ON [Current Row].PassengerID   = [Next Row].PassengerID   AND  [Next Row].RN   = [Current Row].RN   + 1
LEFT JOIN CTE AS [Previous Row] ON [Current Row].PassengerID   = [Previous Row].PassengerID   AND  [Previous Row].RN = [Current Row].RN-1
ORDER BY [Current Row].PassengerID, [Current Row].RN;

/*Using the LEAD and LAG functions we can achieve the same with the query below*/
--- Query 2
SELECT *,
DATEADD(DAY, -1, LEAD(StartDate, 1,'01-Jan-2100')
        OVER (PARTITION BY   PassengerID ORDER BY   StartDate ASC))   AS EndDate,
LAG(Cityname, 1,'NoPrv_Cty')
OVER (PARTITION BY   PassengerID ORDER BY   StartDate ASC)   AS PreviousCity,
LEAD(Cityname, 1,'NoNxt_Cty')
OVER (PARTITION BY   PassengerID ORDER BY   StartDate ASC)   AS NextCity
FROM @Passenger

-- Both queries return a result like the one below
PassengerID CityName StartDate EndDate PreviousCity NextCity
P00001 Delhi 1-Sep-2014 30-Sep-2014 NoPrv_Cty Mathura
P00001 Mathura 1-Oct-2014 9-Oct-2014 Delhi Etah
P00001 Etah 10-Oct-2014 24-Oct-2014 Mathura Kanpur
P00001 Kanpur 25-Oct-2014 26-Oct-2014 Etah Lucknow
P00001 Lucknow 27-Oct-2014 31-Dec-2099 Kanpur NoNxt_Cty
P00002 Allahabad 1-Oct-2014 31-Oct-2014 NoPrv_Cty Banaras
P00002 Banaras 1-Nov-2014 9-Nov-2014 Allahabad Mumbai
P00002 Mumbai 10-Nov-2014 14-Nov-2014 Banaras Banaras
P00002 Banaras 15-Nov-2014 24-Nov-2014 Mumbai Pune
P00002 Pune 25-Nov-2014 27-Nov-2014 Banaras Hyd
P00002 Hyd 28-Nov-2014 29-Nov-2014 Pune Chennai
P00002 Chennai 30-Nov-2014 31-Dec-2099 Hyd NoNxt_Cty
P00003 Bangalore 1-Dec-2014 31-Dec-2099 NoPrv_Cty NoNxt_Cty

/*Now if we want to find the previous-to-previous city and the next-to-next city,
the CTE query would need modification, but with LEAD and LAG we just change the offset value
as below*/

SELECT *, 
DATEADD(DAY, -1, LEAD(StartDate, 1,'01-Jan-2100') 
        OVER (PARTITION BY   PassengerID ORDER BY   StartDate ASC))   AS EndDate,
LAG(Cityname, 2,'NoPrv_Cty') 
OVER (PARTITION BY   PassengerID ORDER BY   StartDate ASC)   AS PreviousCity,
LEAD(Cityname, 2,'NoNxt_Cty') 
OVER (PARTITION BY   PassengerID ORDER BY   StartDate ASC)   AS NextCity
FROM @Passenger

PassengerID CityName StartDate EndDate PreviousCity NextCity
P00001 Delhi 1-Sep-2014 30-Sep-2014 NoPrv_Cty Etah
P00001 Mathura 1-Oct-2014 9-Oct-2014 NoPrv_Cty Kanpur
P00001 Etah 10-Oct-2014 24-Oct-2014 Delhi Lucknow
P00001 Kanpur 25-Oct-2014 26-Oct-2014 Mathura NoNxt_Cty
P00001 Lucknow 27-Oct-2014 31-Dec-2099 Etah NoNxt_Cty
P00002 Allahabad 1-Oct-2014 31-Oct-2014 NoPrv_Cty Mumbai
P00002 Banaras 1-Nov-2014 9-Nov-2014 NoPrv_Cty Banaras
P00002 Mumbai 10-Nov-2014 14-Nov-2014 Allahabad Pune
P00002 Banaras 15-Nov-2014 24-Nov-2014 Banaras Hyd
P00002 Pune 25-Nov-2014 27-Nov-2014 Mumbai Chennai
P00002 Hyd 28-Nov-2014 29-Nov-2014 Banaras NoNxt_Cty
P00002 Chennai 30-Nov-2014 31-Dec-2099 Pune NoNxt_Cty
P00003 Bangalore 1-Dec-2014 31-Dec-2099 NoPrv_Cty NoNxt_Cty



















Wednesday, August 6, 2014

Memory optimized Tables and Natively Compiled Stored Procedures




Memory-optimized Tables:
SQL Server 2014 supports fully durable transactions by default, and optionally delayed durability.
Memory-optimized tables are fully durable like traditional disk-based tables; with the default (fully durable) setting they support the ACID properties.
The primary store for memory-optimized tables is main memory: rows are read from and written to memory, and the entire table resides there. A second copy of the table data is maintained on disk, but only for durability purposes.
For delayed durability see link.
Besides the default durable memory-optimized tables, SQL Server also supports non-durable memory-optimized tables, which are not logged and whose data is not persisted on disk. Transactions on these tables require no disk IO, but the data will not be recovered after a server failure.
Rows in memory-optimized tables are versioned: each row can have multiple versions, which allows concurrent reads and writes. Row versions are addressed using 8-byte memory pointers.
 Data in memory-optimized tables is accessed in two ways:
  • Through natively compiled SP’s (only support memory optimized tables).
  • Through traditional/interpreted SP’s and Ad-hoc queries.
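A non-durable memory-optimized table is declared with DURABILITY = SCHEMA_ONLY; a minimal sketch (the dbo.SessionCache table name and columns are hypothetical):

-- Schema survives a restart; rows do not (no logging, no disk IO).
CREATE TABLE dbo.SessionCache
(
   SessionId INT NOT NULL PRIMARY KEY NONCLUSTERED,
   Payload   VARCHAR(100)
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_ONLY)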
Natively Compiled Stored Procedures:
Natively compiled stored procedures are compiled to native code and can access memory-optimized tables and table types only. They allow fast execution of the queries and business logic inside them. They are compiled when they are created, and they cannot be altered afterwards.
Because they are compiled at creation, errors such as divide by zero, type conversion, and arithmetic overflow surface before execution.

Example:
Let us create memory optimized tables and natively compiled SP's

-- DROP DATABASE MyTestingMOT
CREATE DATABASE mytestingmot 

GO

--------------------------------------
-- create database with a memory-optimized filegroup and a container.
-- to add the file, the folder C:\DB must already exist

ALTER DATABASE mytestingmot ADD filegroup mytestingmot_filegrp CONTAINS
memory_optimized_data

ALTER DATABASE mytestingmot ADD FILE (name='MyTestingMOT_File1', filename=
'c:\DB\MyTestingMOT_File1') TO filegroup mytestingmot_filegrp

ALTER DATABASE mytestingmot
SET memory_optimized_elevate_to_snapshot=ON
go 

USE MyTestingMOT
go 

-- create a durable (data will be persisted) memory-optimized table
-- two of the columns are indexed  

  CREATE TABLE dbo.moemployee
  (
     moemployeeid   INT NOT NULL PRIMARY KEY NONCLUSTERED,
     moemployeename VARCHAR(100),
     deptid         INT NOT NULL
INDEX IX_DeptId NONCLUSTERED HASH WITH (BUCKET_COUNT=10000),
     hiredate       DATETIME2 NOT NULL,
     salary         MONEY
  )
WITH (MEMORY_OPTIMIZED=ON)


INSERT dbo.moemployee
VALUES (1,'Allen', 8798, Getdate(), NULL)

INSERT dbo.moemployee
VALUES (2,'Derren', 23, Getdate(), 4445.4)

INSERT dbo.moemployee
VALUES (3,'Sean', 80, Getdate(), NULL)

INSERT dbo.moemployee
VALUES (4,'Alex', 342, Getdate(), 6665.4)
GO

-- verify table contents  
   SELECT * FROM   dbo.moemployee 
  GO
 
--  update statistics on memory-optimized tables  
UPDATE STATISTICS dbo.moemployee WITH fullscan, norecompute 
  GO

-- natively compiled stored procedure to update an employee's salary based on EmpId
-- DROP PROC uspMOEmployee

CREATE PROCEDURE dbo.uspMOEmployee @MOEmployeeId INT,@Salary MONEY
 WITH NATIVE_COMPILATION, SCHEMABINDING, EXECUTE AS OWNER
 AS
 BEGIN ATOMIC
 WITH (TRANSACTION ISOLATION LEVEL = SNAPSHOT, LANGUAGE = N'us_english')
   
  IF @MOEmployeeId IS NULL
  THROW 51000, N'The MOEmployeeId does not exist.', 1
 
  UPDATE dbo.MOEmployee SET Salary=@Salary WHERE MOEmployeeId=@MOEmployeeId
 END
GO  

/* Unit test Case
Pass parameters @MOEmployeeId and @Salary
EXEC uspMOEmployee @MOEmployeeId= 2, @Salary= 9900  

*/

Execution Output:
MOEmployeeId MOEmployeeName DeptId HireDate Salary
1 Allen 8798 11:51.4 NULL
2 Derren 23 11:51.4 4445.4
3 Sean 80 11:51.4 NULL
4 Alex 342 11:51.4 6665.4


Note:
We cannot alter natively compiled stored procedures; they must be dropped and re-created. Attempting to alter one raises:
Msg 10794, Level 16, State 25, Procedure uspMOEmployee, Line 3
The operation 'ALTER PROCEDURE' is not supported with natively compiled stored procedures.





Monday, August 4, 2014

Delayed Durability In SQL Server 2014

Delayed durability returns control to the client/application before the committed data is hardened to disk from the in-memory log buffer.


Let us understand fully durable vs. delayed durability:

Fully Durable Transaction (SQL Server): data is written to the log --> the log is hardened to disk --> a successful commit is reported to the client/application
    1. No data loss
    2. I/O latency
    3. Synchronous

Delayed Durable Transaction (SQL Server 2014): data is written to the in-memory log buffer --> a successful commit is reported to the client/application --> the log is hardened to disk later
    1. Possible data loss within the deferral interval
    2. Less I/O latency and contention (log records are flushed in batches)
    3. Asynchronous

A delayed durable transaction's log records are flushed to disk (and thus made durable) when:
   1. A fully durable transaction in the same database commits its changes to disk.
   2. The system stored procedure sys.sp_flush_log is executed manually to write the log info to disk.
   3. The in-memory transaction log buffer fills up and is flushed to disk.

Delayed durability can be controlled at the database level, commit level, and atomic block level:
ALTER DATABASE DBNameHere
SET DELAYED_DURABILITY = { DISABLED | ALLOWED | FORCED }
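With the database option set to ALLOWED, durability can then be chosen per transaction at commit time; a minimal sketch:

BEGIN TRANSACTION
   -- hypothetical DML here
COMMIT TRANSACTION WITH (DELAYED_DURABILITY = ON)

-- force any pending delayed-durable log records to disk
EXEC sys.sp_flush_log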




Delayed Durability vs. SQL Features

S#  SQL Feature                With Delayed Durability
1   AlwaysOn / Mirroring       No guarantee (some data may be lost)
2   Failover Clustering        No guarantee (some data may be lost)
3   Transactional Replication  No guarantee (only durable data is shipped)
4   Log Shipping               No guarantee (only durable data is shipped)
5   Log Backup                 No guarantee (only durable data is backed up)
6   Cross Database and DTC     No guarantee (only durable data is backed up)
7   Crash Recovery             Data is consistent only for durable data
8   CT & CDC                   No guarantee (only durable data is backed up)