Index and Execution Plan in SQL Server 2012 and 2014


Hi guys, many DBAs don't know enough about indexes and execution plans, or how to read an execution plan to build the best indexing model and improve query performance in terms of execution time, IO, and CPU. For this reason I have collected the most important posts from our community about indexes and execution plans into one post, to serve as a repository.

 


Secrets of SQL Server Execution plan (4/5)

Hi guys, thanks for traveling with me along this series of posts exploring the secrets of the SQL Server execution plan. For those who missed the basics, no worries: the earlier posts covered the limitations of execution plans, some alarming signs, and complex execution plans.

We are going to see some really interesting things in this post. When we examine the execution plan of a query that joins two or more tables, we usually see one of the join operators: Nested Loops, Hash Match, or Merge Join. We rarely stop to ask why the optimizer chose, say, a Hash Match. That is the prime subject of today's post, and we will also see how to resolve warnings in our execution plans.

Okay, before stepping into the details, let us review some basics about these join operators, so the image below no longer looks puzzling.

Nested Loops: To understand the nested loop join in the simplest way, imagine you have two streams of data rows. The optimizer uses one join input as the outer input table (shown as the top input in the graphical execution plan) and one as the inner (bottom) input table. The outer loop consumes the outer input table row by row; the inner loop, executed once for each outer row, searches for matching rows in the inner input table.

Hash Match: A hash match has two phases: the build phase and the probe phase. It is easier to explain with an example. Suppose you are retrieving rows from tables A and B. During the first phase, the build phase, the values needed for the join are extracted from one input and collected into hash buckets. During the second phase, the probe phase, rows from the other input are hashed and matched against those buckets.

Merge Join: A merge join is used by the execution plan when there are two sets of ordered results from tables A and B, ordered on the columns defined by the equality (ON) clauses of the join predicate. The query optimizer typically scans an index, if one exists on the proper set of columns, or it places a sort operator below the merge join.
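If you want to see all three operators for yourself, T-SQL join hints can force each physical join type for the same logical query. A sketch against the AdventureWorks2012 sample used below; hints are for exploration here, not a production recommendation:

```sql
-- Force each physical join type for the same logical join,
-- then compare the three resulting plans side by side.
SELECT P1.FirstName, P2.PhoneNumber
FROM Person.Person P1
INNER LOOP JOIN Person.PersonPhone P2   -- Nested Loops
    ON P1.BusinessEntityID = P2.BusinessEntityID;

SELECT P1.FirstName, P2.PhoneNumber
FROM Person.Person P1
INNER HASH JOIN Person.PersonPhone P2   -- Hash Match
    ON P1.BusinessEntityID = P2.BusinessEntityID;

SELECT P1.FirstName, P2.PhoneNumber
FROM Person.Person P1
INNER MERGE JOIN Person.PersonPhone P2  -- Merge Join
    ON P1.BusinessEntityID = P2.BusinessEntityID;
```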

Let us move on to some real demos of the three join types, one by one. I use the AdventureWorks2012 database for the demos.

SELECT
P1.FirstName,
P1.LastName,
P2.PhoneNumber
FROM	Person.Person P1
INNER JOIN Person.PersonPhone P2 	ON P1.BusinessEntityID = P2.BusinessEntityID

The query generates an execution plan like the one below. Check the index used on the PersonPhone table: it has PhoneNumber in the key columns. As I explained before, the build phase builds the hash buckets using this index, and the probe phase then probes for matching records using the other index, on the Person table. You can hover the mouse over the Hash Match operator to see more details.


Now we will see when we get a warning symbol and how to resolve it. The hash buckets used by the Hash Match reside in memory. If statistics are out of date, the optimizer may grant, say, 1 MB of memory instead of the 8 MB actually needed. Hover over the SELECT icon at the beginning of the execution plan and you will see the memory grant, as displayed in the image here. When the granted memory is not enough, because statistics were not updated, SQL Server has to spill the hash table into tempdb, which adds extra IO to the query and degrades its performance. To resolve this kind of warning symbol over the hash match join, we need to update the statistics of the participating tables, which enables the optimizer to grant enough memory to keep the hash table in memory.
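A minimal sketch of that fix for the demo tables above (FULLSCAN is optional, but gives the most accurate statistics at the cost of a longer scan):

```sql
-- Refresh statistics on both join inputs so the optimizer can
-- estimate row counts, and hence the memory grant, correctly.
UPDATE STATISTICS Person.Person WITH FULLSCAN;
UPDATE STATISTICS Person.PersonPhone WITH FULLSCAN;
```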

Thanks for your time, and the game is not over yet. In the next post we will see how to make the optimizer use a different execution plan: the same query demonstrated above will use nested loop and merge joins. By the end of the next post, you will be able to change the execution plan by supplying an index, and the join operator chosen by the optimizer will differ accordingly.

I would like to see your comments and feedback as well.

Post source: the author's blog.


Where is index location on Database?


Hi guys, today I will show you something easy but quite important to know: where does an index live? The query below returns every index together with its table name and the filegroup that hosts it. You can also use it to identify heap tables, clustered indexes, and non-clustered indexes. SOURCE POST

Index Location:


WITH C AS
 (

SELECT ps.data_space_id
, f.name
, d.physical_name
FROM sys.filegroups f
JOIN sys.database_files d ON d.data_space_id = f.data_space_id
JOIN sys.destination_data_spaces dds ON dds.data_space_id = f.data_space_id
JOIN sys.partition_schemes ps ON ps.data_space_id = dds.partition_scheme_id

UNION

SELECT f.data_space_id
, f.name
, d.physical_name
FROM sys.filegroups f
JOIN sys.database_files d ON d.data_space_id = f.data_space_id
)
--SELECT * FROM c
SELECT [ObjectName] = OBJECT_NAME(i.[object_id])
, [IndexID] = i.[index_id]
, [IndexName] = i.[name]
, [IndexType] = i.[type_desc]
, [Partitioned] = CASE WHEN ps.data_space_id IS NULL THEN 'No'
ELSE 'Yes'
END
, [StorageName] = ISNULL(ps.name, f.name)
, [FileGroupPaths] = CAST(( SELECT name AS "FileGroup"
, physical_name AS "DatabaseFile"
FROM C

WHERE i.data_space_id = c.data_space_id
FOR
XML PATH('')
) AS XML)
FROM [sys].[indexes] i
LEFT JOIN sys.partition_schemes ps ON ps.data_space_id = i.data_space_id
LEFT JOIN sys.filegroups f ON f.data_space_id = i.data_space_id
WHERE OBJECTPROPERTY(i.[object_id], 'IsUserTable') = 1
ORDER BY [ObjectName], [IndexName]

 

Follow me: in the next post I will explain everything about indexes, how to build your indexing model, how to improve your expensive queries with indexes, more index secrets, etc.


Create Column Store index on all Supported tables in SQL SERVER 2014

Hi guys, in the last script, How to Create Clustered ColumnStore Index VI, I explained the idea behind the script. After further work and more workshops I have made updates to make the script easier and more comprehensive. Version two will do 90% of the work of creating a clustered columnstore index on your databases in one click. Let us first review what a columnstore index is before going to the DEMO PART:

Our script covers the most important and sensitive parts of creating a clustered columnstore index:

  1. List the supported and the non-supported tables.
  2. Drop the foreign keys on supported tables, with a rollback script.
  3. Drop the clustered and non-clustered indexes on supported tables, with a rollback script.
  4. Create a clustered columnstore index on every supported table (this step contains the previous three scripts, which are sufficient for the purpose).

NOTE: The script does not create anything directly on your database; it only prints the statements, so you can review them first and then execute them.

Introduction about Clustered ColumnStore Index:

This new index type, released in SQL Server 2014, creates an in-memory clustered columnstore index on a SQL Server table. Use a clustered columnstore index to improve data compression and query performance for data warehousing workloads that primarily perform bulk loads and read-only queries. Since the clustered columnstore index is updateable, the workload can also perform some insert, update, and delete operations.
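The basic syntax is a single statement (dbo.FactSales is a hypothetical fact table; the table must carry no other indexes when the CCI is created):

```sql
-- Create the clustered columnstore index; it becomes the table's
-- primary storage, so no column list is needed, only an index name.
CREATE CLUSTERED COLUMNSTORE INDEX CCI_FactSales
ON dbo.FactSales;
```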

Improvements to the columnstore index in SQL Server 2014:

  • With SQL Server 2014 you can create a columnstore index without losing write access to the table, unlike the nonclustered columnstore index in SQL Server 2012, which made the table read-only. This means you can issue INSERT, UPDATE, and DELETE statements against a table with a clustered columnstore index; the tedious workarounds required for writing data to a columnstore table in the previous release are no longer needed.
  • For columnstore indexes, ALTER INDEX … REBUILD has a new COLUMNSTORE_ARCHIVE data compression option that further compresses the specified partitions of a columnstore index, resulting in even less disk space being used. You can use this option for archival, or for other situations that require a smaller data storage size and can afford more time for storage and retrieval.
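A sketch of that rebuild option (index and table names here are hypothetical; PARTITION = ALL applies it across the whole index):

```sql
-- Re-compress an entire columnstore index with archival compression:
-- smaller on disk, but slower to decompress on read.
ALTER INDEX CCI_FactSales ON dbo.FactSales
REBUILD PARTITION = ALL
WITH (DATA_COMPRESSION = COLUMNSTORE_ARCHIVE);
```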

SQL Server Administration skills jump start (15/100)


In the last blog I explained a rich and important stack of CMD prompts and T-SQL used for rebuilding and restoring the master database in catastrophic cases. In this blog I am moving on to trickier cases related to database consolidation and migration, which can largely be considered an essential data engineering service. It is requested by many customers, especially those who have never run SQL Server according to Microsoft practices and would like to re-create their database servers from scratch, or who would like to upgrade their database servers side by side. Below are ten configuration outlines that you should look after carefully in any database consolidation or migration from one server to another. They represent unique potential for your consolidation/migration service, can distinctly raise your service quality, and will present you very well in front of your customers:

1- Back up and restore the databases with ZERO downtime and ZERO data loss at the same time. Awesome!

2- Clone all special database features that are not covered by backups and restores, such as:

  • RCSI (Read committed snapshot isolation level using row versioning)
  • Service brokers
  • Encryption keys and certificates
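For example, the RCSI setting from the list above does not travel with a backup; it has to be checked and re-applied on the target server (a sketch, with a placeholder database name):

```sql
-- Check whether RCSI is enabled on the source database ...
SELECT name, is_read_committed_snapshot_on
FROM sys.databases
WHERE name = N'MyDatabase';

-- ... and re-enable it on the migrated copy if needed.
ALTER DATABASE [MyDatabase]
SET READ_COMMITTED_SNAPSHOT ON WITH ROLLBACK IMMEDIATE;
```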

Table Partitioning Implementation with advanced scenarios (Part 5)


ONE HUNDRED Technical BLOG POST IN SQL SERVER


Thank God, by the end of 2014 I have completed one hundred blog posts published on the biggest SQL Server community in the Middle East, SQL Server Performance Tuning, and 174 blog posts published on my own community, SQL Database Administration (that blog was viewed about 25,000 times in 2014; check the annual report: https://mostafaelmasry.wordpress.com/2014/annual-report/). The 174 technical posts cover the most important subjects in SQL Server, such as:

  • SQL Server 2014 technology
  • SQL Server 2012 new feature
  • Reporting Services
  • SSIS
  • Database administration
  • SQL Server Performance Tuning
  • T-SQL DMV
  • Database Configuration
  • Database Implementation
  • Disaster recovery cases

ONE HUNDRED Blog Post list:


SQL Server Administration skills jump start (14/100)


What are DMVs and why we use DMVs?
The DMVs were introduced in SQL 2005 and with each new release, Microsoft has been adding additional DMVs to help troubleshoot issues. DMVs actually come in two flavors DMVs (dynamic management views) and DMFs (dynamic management functions) and are sometimes classified as DMOs (dynamic management objects).  The DMVs act just like any other view where you can select data from them and the DMFs require values to be passed to the function just like any other function.

Most developers and DBAs already know about DMVs and their usage for troubleshooting and monitoring SQL Server. In this blog we will look at the DMVs used most often in day-to-day activities. You can use the script below to list the DMVs available in SQL Server.

SELECT name, type, type_desc
FROM sys.system_objects
WHERE name LIKE 'dm[_]%'
ORDER BY name;
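As a quick taste of one of the most-used DMVs from that list, here is a sketch that shows the currently executing requests (the column choice is illustrative):

```sql
-- What is running right now, and how long has it been running?
SELECT r.session_id,
       r.status,
       r.command,
       r.wait_type,
       r.total_elapsed_time / 1000.0 AS elapsed_seconds
FROM sys.dm_exec_requests r
WHERE r.session_id > 50;   -- skip most system sessions
```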


Fixing the collation on Instance,Database and column levels


Hi dears, many of us come across collation issues at several levels. We will see how to resolve the issue at each level. This post covers the following collation issues:

  • Collation issues on the Instance level.
  • Collation issues on the database level.
  • Collation issues on the table and column levels.

Collation issues on the instance level:-

The instance-level collation is the collation we set during installation of the instance; due to a lack of knowledge about what the instance collation should be, we may choose the wrong one. Fixing it is not so straightforward, but of course in the field of information technology we are supposed to provide solutions. Before we proceed, here is what the instance-level collation affects: all the system databases are created with the server collation, and it is the default collation for new databases if you do not specify a collation explicitly. You may think that is no big issue, but think about creating a temp table inside a stored procedure: that table, created in tempdb, takes its collation from the instance collation. To resolve the issue we need to rebuild the master database; the details can be found at this link: Rebuild master database

Collation issues on the database level:-

Fixing the collation at the database level is not a big deal; you can do it happily with the script below. Note, however, that changing the collation of the database will not change the collation of columns in tables already created in the database.

USE [master]
GO

ALTER DATABASE [Database_name] SET SINGLE_USER WITH NO_WAIT
GO
ALTER DATABASE [Database_name] COLLATE SQL_Latin1_General_CP1256_CI_AS
GO
ALTER DATABASE [Database_name] SET MULTI_USER WITH NO_WAIT
GO
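At the table and column level, each affected column has to be altered individually. A sketch with placeholder table/column names; any constraints or indexes that depend on the column must be dropped and re-created around it:

```sql
-- Change one column's collation; repeat per affected column.
ALTER TABLE dbo.MyTable
ALTER COLUMN MyTextColumn NVARCHAR(100)
    COLLATE SQL_Latin1_General_CP1256_CI_AS;
```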


SQL Server Administration skills jump start (13/100)


In the last blog I explained some DMV stacks related to backups; now I will widen my scope a bit to include not only DMV stacks but also CMD and PowerShell (cmdlet) prompts that are commonly used for various DB administration tasks. In this blog I will explain one of the trickiest scenarios: one that, if it happens, is a catastrophic case, and indeed most DBAs will think about the destructive action of destroying the current DB cluster or server and rebuilding a new cluster or standalone server. This scenario is master and msdb database corruption, which can eventually end in a complete standstill for your production databases, and thus for the other interrelated systems and applications. Terrible!

First, a quick question: why does corruption happen? Why might the master database reach this catastrophic state and drill down to this culprit point? What indicators can lead a DBA to make early decisions and take actions to secure his environment against such awkward situations?

Indeed, roughly 95% of corruption cases come from hardware reasons, related either to the servers or to the storage, and are less likely to be related to SQL Server itself. Below are some examples of hardware failures that cause such data corruption:

  1. Unplanned electricity outage.
  2. Unplanned network outage.
  3. Unplanned SAN shutdown or movement.
  4. Unplanned Servers shutdown
  5. Unplanned VM shutdown due to network disconnect or VM host machine power failure

TEMPDB Recommendations


What is Tempdb?

It is a temporary database that is re-created every time the SQL Server service starts. At a high level it can be considered the page file of SQL Server: with every restart, tempdb is recreated as a copy of the model database.

Temporary tables and stored procedures are dropped automatically on disconnect, and no connections are active when the system is shut down. Therefore, there is never anything in tempdb to be saved from one session of SQL Server to another. Backup and restore operations are not allowed on tempdb.

Recommendations for tempdb file count and size:

Set the log file to be approximately double the size of a single data file.

The recommended formula for calculating the size of the tempdb files, given a drive with a specific size, is:

  • Data file = (Size of drive * 90%) / (Number of data files + 2) [the log file will be double the data file]
  • Log file = Data file * 2

For example, if I have a 100 GB drive to use for tempdb, and I am creating one data file per CPU for an 8-CPU SQL Server, I calculate the file sizes to be:

Data file = (100 GB * 90%) / (8 + 2) = 90 GB / 10 = 9 GB

Log file = 9 GB * 2 = 18 GB

The final layout is 8 data files of 9 GB each and 1 log file of 18 GB, leaving 10 GB free for possible log file growth.
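Those sizes can be applied with ALTER DATABASE. A sketch for the first data file and the log file; tempdev/templog are the default logical file names, but they may differ on your instance:

```sql
-- Resize the first tempdb data file and the log file
-- according to the calculation above (9 GB data, 18 GB log).
ALTER DATABASE tempdb
MODIFY FILE (NAME = tempdev, SIZE = 9216MB);

ALTER DATABASE tempdb
MODIFY FILE (NAME = templog, SIZE = 18432MB);
```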


MVA (Microsoft virtual Academy) courses on SQL Server 2012 , 2014


Actually, guys, while I was watching MVA (Microsoft Virtual Academy) training courses, I really enjoyed them. They are valuable and helpful for many interesting topics and trickier subjects in SQL Server such as BI, cloud, big data, SQL Server 2014 performance dreams, database fundamentals, and SQL Server essentials for the Oracle DBA, etc. That is why the SQL Server Performance Tuning community believes in them, and why we started to share them over the community, to spread the word and broadcast those wonderful new SQL Server 2014 and cloud features. You can easily and happily watch them all from a single page, as below.

 



Table Partitioning Implementation with advanced scenarios (Part 4)


Check the previous and next parts: Part 1, Part 2, Part 3, Part 4

Hi guys, today I will continue my series on table partitioning implementation. In the last post I explained some important points, and today I will cover another important subject.

Points covered in the last post:

  • How to implement Table partitioning Step by step (RANGE LEFT)
  • Determining the Partition in Which a Particular Row is Stored (T-SQL Script)

Points to cover in this post:

  • Special Conditions for Partitions with advanced Cases
    • Merging two partitions of a partitioned table into one partition
    • Splitting a partition of a partitioned table or index into two partitions
    • NEXT USED (Alter Partition Schema)

Merging two partitions or splitting a partition both mean using ALTER PARTITION FUNCTION.

Merging two partitions of a partitioned table into one partition:

If you want to remove a boundary instead: say you have a boundary in use that is completely empty, and you want to stop using that partition, to free it up for some reason; then you use the MERGE operator. The syntax is just like SPLIT, except that the boundary point you pass will be removed from your partition function.

Note: MERGE does not remove the physical filegroup from the database structure; it just merges two different sets into one.

So let us now MERGE the boundary '2009-01-01'. I will work on the RANGE RIGHT example; please check Part 2 to know more about it.
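The MERGE statement itself looks like this (the partition function name here is a placeholder; use the one created in Part 2):

```sql
-- Remove the '2009-01-01' boundary point; its two
-- neighbouring partitions are merged into one.
ALTER PARTITION FUNCTION PF_DBForPartitioning()
MERGE RANGE ('2009-01-01');
```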


Table Partitioning Implementation with advanced scenarios (Part 3)


Check the previous and next parts: Part 1, Part 2, Part 3, Part 4

Hi guys, today I will continue my series on table partitioning implementation. In the last post I explained some important points, and today I will cover another important subject.

Points covered in the last post:

  • Table partitioning dependence
  • How to implement Table partitioning Step by step (RANGE RIGHT)
  • How to monitor table partitioning by T-SQL Stored Procedure

Points to cover in this post:

  • How to implement Table partitioning Step by step (RANGE LEFT)
  • Determining the Partition in Which a Particular Row is Stored (T-SQL Script)

How to implement table partitioning step by step (RANGE LEFT):

We will work on the same database I used in Part 2, the [DBForPartitioning] database.

Range Direction LEFT and Datetime Boundary Values

  • Create partitioning function

In this case I will create a partition function with DATETIME boundary values and a LEFT range direction:

Use DBForPartitioning
GO
CREATE PARTITION FUNCTION PF_DBForPartitioning_RangeLeft (datetime)
AS RANGE LEFT FOR VALUES ('2008-01-01', '2009-01-01', '2010-01-01');
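The post breaks off after the function. The natural next steps, sketched with hypothetical filegroup names (FG1..FG4 must already exist in the database):

```sql
-- A RANGE LEFT function with 3 boundaries yields 4 partitions,
-- so the scheme must name 4 filegroups (or use ALL TO one).
CREATE PARTITION SCHEME PS_DBForPartitioning_RangeLeft
AS PARTITION PF_DBForPartitioning_RangeLeft
TO (FG1, FG2, FG3, FG4);

-- Create a table partitioned on its datetime column.
CREATE TABLE dbo.SalesByDate
(
    SaleID   INT      NOT NULL,
    SaleDate DATETIME NOT NULL
)
ON PS_DBForPartitioning_RangeLeft (SaleDate);
```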

Table Partitioning Implementation with advanced scenarios (Part 2)


Check the previous and next parts: Part 1, Part 2, Part 3, Part 4

Download the scripts used in this part: 1 - Basic Configuration

Hi guys, today I will continue my series on table partitioning implementation. In the last post I explained the concept of table partitioning and covered the points below; today let us see how to implement table partitioning with T-SQL, step by step, with multiple scenarios.

Points covered in the last post:

  • What is table partitioning?
  • Why we need to use table partitioning?
  • PROS and CONS of table partitioning?
  • Partitioned Table Rules and Gotchas
  • What are the different table partitioning types in SQL Server?

Before going to the technical demo, we should know what table partitioning depends on.

PARTITION FUNCTION:

This maps the rows of a table or index into partitions based on boundary values. We can build the partition function in two ways, based on our requirements and business needs:

  • RANGE RIGHT
  • RANGE LEFT

Another thing: the boundary values can use one of three data types:

  • INT COLUMN
  • DATETIME COLUMN
  • CHAR COLUMN

PARTITION SCHEME:

This maps the partitions of a partitioned table or index to filegroups. A partition scheme must be created on a partition function, and a single partition scheme can be used by one or more partitioned tables, indexes, and indexed views.

PARTITION TABLE/INDEX:

A partitioned table or index is tied to a particular partition scheme when it is created; the table has only an indirect relationship, through the partition scheme, to the partition function. The relationship between a partition function and partition schemes is one-to-many, as is the relationship between a partition scheme and partitioned tables and indexes. Moving data in and out of a partitioned table usually requires modifying the partition function.

Let us go now to the TECHNICAL DEMO, to learn how to implement table partitioning step by step from scratch.
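The demo follows the three building blocks just described. As a condensed preview of its shape (object names here are illustrative, not the ones from the downloadable scripts):

```sql
-- 1) Function: RANGE RIGHT puts each boundary value
--    into the partition on its right.
CREATE PARTITION FUNCTION PF_Demo (datetime)
AS RANGE RIGHT FOR VALUES ('2008-01-01', '2009-01-01', '2010-01-01');

-- 2) Scheme: map the 4 resulting partitions to filegroups.
CREATE PARTITION SCHEME PS_Demo
AS PARTITION PF_Demo ALL TO ([PRIMARY]);

-- 3) Table: created directly on the scheme.
CREATE TABLE dbo.Orders
(
    OrderID   INT      NOT NULL,
    OrderDate DATETIME NOT NULL
)
ON PS_Demo (OrderDate);
```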


Table Partitioning Implementation with advanced scenarios (Part 1)


Check the next parts: Part 2, Part 3, Part 4

Hi! Today I will write on a wonderful topic, SQL Server partitioning, and my post will serve as a repository for beginners and experts alike, because I will cover the main points and the most common cases that can happen with partitioning. Partitioning is a big story with a big history, and no one can cover everything about it, but I will do my best to cover the most important points of table partitioning.

Agenda and overview 

  • What is table partitioning?
  • Why we need to use table partitioning?
  • PROS and CONS of table partitioning?
  • Partitioned Table Rules and Gotchas
  • What are the different table partitioning types in SQL Server?
    • Vertical Partitioning
    • Horizontal Partitioning
  • How to Implement Partitioned Tables Step by Step With Multiple Scenario
    • Create File Group
    • Create partitioned Function (Range Right / Range Left)
    • Create partitioned Schema
    • Table Partitioned for new Table
    • Partitioned Existing table
    • Index Partitioned
  • Check partitioned Function and Schema using T-SQL
    • List all Partitions
    • get partition id with Partition Number
    • partitioned Columns
    • Partition table Detail
    • FileGroup Detail
  • Special Conditions for Partitions with advanced Cases
    • MERGE
    • Split
    • SWITCH OUT / SWITCH IN
  • Manage multiple partitions in multiple filegroups
  • Table partitioning  improvement in SQL Server 2014
  • Performance Benefits with Partitioning
  • References

What is table partitioning?

The concept of data partitioning

Table partitioning is a technique adopted by some database management systems to deal with large databases. Instead of a single table storage location, the table is split into several files for quicker queries. If you have a table that will store really large amounts of data (millions of records or more), table partitioning is a good option: queries that access only a fraction of the data can run faster because there is less data to scan.


Secrets of SQL Server Execution plan (3/5)

So far we have seen some limitations of execution plans and some alarming signs, such as those that tell us about the need to update statistics, in my previous blogs. Now we will see how to analyse some more complex execution plans. As usual, we will start with a tip: it is always better to set IO statistics on while optimizing a query, as this gives a clearer picture of the query. While working with a very big execution plan, it can be a little hard to check whether our indexes are used or not. In such a situation, just right-click anywhere on the plan and click "Show Execution Plan XML"; then you can use Ctrl+F to find any specific index. Generating an XML plan has many more benefits as well, such as showing memory fractions, the parameter list used during optimization, etc.

We should also keep in mind that the execution plan distributes the cost across the operators. You may see a high cost on a Key Lookup, which calls for a covering index, or a warning symbol over a Hash Match join, etc., but these are all just indicators. There is no one rule that fits all in query optimization, which is why we do not have a query-tuning robot yet.


Now we will move further, to analyse a query deeply.

Using AdventureWorks as an example, the Production.ProductListPriceHistory table maintains a running list of changes to product prices. To start with, we want to see what the execution plan looks like without using a derived table, so instead of a derived table we will use a subselect within the ON clause of the join to limit the data to only the latest versions of the ListPrice.


SELECT p.Name,
 p.ProductNumber,
 ph.ListPrice
FROM Production.Product p
 INNER JOIN Production.ProductListPriceHistory ph
 ON p.ProductID = ph.ProductID
 AND ph.StartDate = (SELECT TOP (1)
 ph2.StartDate
 FROM Production.ProductListPriceHistory ph2
 WHERE ph2.ProductID = p.ProductID
 ORDER BY ph2.StartDate DESC
 ) ;


What appears to be a somewhat complicated query turns out to have a straightforward execution plan. Reading it in physical operator order, there are two Clustered Index Scans against Production.Product and Production.ProductListPriceHistory. These two data streams are combined using the Merge Join operator.

The Merge Join requires that both data inputs are ordered on the join key, in this case, ProductId. The data resulting from a clustered index is always retrieved in an ordered fashion, so no additional sort operation is required here.

Merge Joins create a worktable to complete the operation. The creation of a worktable adds a great deal of cost to the operation because it will mean writing data out to tempdb. However, the Merge Join operation will generally still be less costly than the use of a Hash Match join, which is the other choice the query optimizer can make to solve the same type of join.

The two data streams are then joined through a Nested Loops operator to produce the final result. In my next blog we will travel to the next level and explore other operators in the execution plan. To read my previous posts, click the links below. Thanks for your time on my blog.



Walk inside DMV Stacks (12/100)


In the last blog I explained DMV stacks that are often used to configure a mail profile and use it for mail notifications for different purposes, such as SQL Server alerts and job-failure alerts. Now I will turn to a trickier subject: backup and restore. Of course I am not going to write about backup policies or how to configure scheduled backup jobs; that is not our subject of discussion here, because I have talked about it before. My words here are about monitoring during and after backup/restore executions, as follows:

1- How to monitor backup/restore executions?

You can simply find out whether any backup or restore is running on your DB server, and moreover determine its start time, session ID (from which you can get more info about the user, machine, backup tool, etc.), elapsed time, estimated remaining time, percentage of completion, and estimated end time, using a script like the one below:
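The script itself did not survive the repost, but sys.dm_exec_requests exposes all the columns the paragraph lists; a reconstruction along those lines:

```sql
-- Live progress of any running backup or restore.
SELECT r.session_id,
       r.command,
       r.start_time,
       r.percent_complete,
       r.total_elapsed_time / 1000.0        AS elapsed_seconds,
       r.estimated_completion_time / 1000.0 AS est_remaining_seconds,
       DATEADD(MILLISECOND, r.estimated_completion_time, GETDATE())
                                            AS est_end_time
FROM sys.dm_exec_requests r
WHERE r.command IN (N'BACKUP DATABASE', N'BACKUP LOG',
                    N'RESTORE DATABASE', N'RESTORE LOG');
```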

 

Create Clustered ColumnStore index for All tables(VI)


Problem :

We are working on a big SQL Server 2014 upgrade and performance project. Our role in this project is to test the new SQL Server 2014 features from the performance side, so we decided to deploy the new SQL Server 2014 enhancement, the clustered columnstore index (CCI). I took a backup of one database from my work environment and restored it again with the prefix _2014.

To create this new type of index on a table, the table must have no other indexes on it, so I dropped all the indexes from all the tables. The problem now is: how can I create the clustered columnstore index on all tables (those that support a CCI) in one click?

Solution :

I wrote one script to create a clustered columnstore index on all supported tables; here is a description of the script.

1- To create a CCI on a table there are some limitations; one of them is that some column data types are not supported by the CCI.

2- I inserted the data types not supported by the CCI into a temp table:

<pre>
CREATE TABLE #CCI_DataType_Limitation ( DataType NVARCHAR(MAX) );

INSERT INTO #CCI_DataType_Limitation
VALUES ( 'text' ),
       ( 'timestamp' ),
       ( 'hierarchyid' ),
       ( 'sql_variant' ),
       ( 'xml' ),
       ( 'geography' ),
       ( 'geometry' );

SELECT * FROM #CCI_DataType_Limitation;
</pre>
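3- The generation step itself is not shown above; a minimal sketch of it (not the author's exact script) builds a CREATE CLUSTERED COLUMNSTORE INDEX statement for every user table that has no column of an unsupported type, assuming all existing indexes have already been dropped as described:

```sql
-- Sketch: generate one CCI statement per eligible user table.
-- Assumes #CCI_DataType_Limitation (above) is populated and all
-- existing indexes were already dropped from the tables.
DECLARE @sql NVARCHAR(MAX) = N'';

SELECT @sql = @sql
    + N'CREATE CLUSTERED COLUMNSTORE INDEX ' + QUOTENAME(N'CCI_' + t.name)
    + N' ON ' + QUOTENAME(SCHEMA_NAME(t.schema_id)) + N'.' + QUOTENAME(t.name)
    + N';' + CHAR(13)
FROM sys.tables AS t
WHERE t.is_ms_shipped = 0
  AND NOT EXISTS ( SELECT 1
                   FROM sys.columns AS c
                   JOIN sys.types   AS ty ON ty.user_type_id = c.user_type_id
                   JOIN #CCI_DataType_Limitation AS l ON l.DataType = ty.name
                   WHERE c.object_id = t.object_id );

PRINT @sql;                -- review the generated statements first
-- EXEC sp_executesql @sql; -- then execute them in one go
```

Printing first and executing second mirrors the cautious pattern used elsewhere on this blog: you get to inspect every generated statement before anything touches the tables.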

New Permission in SQL Server 2014
Hi dears, as we know, SQL Server 2014 comes with many new features and performance improvements. Today we will talk about a new SQL Server 2014 feature, in particular in SQL Server security. SQL Server 2014 comes with three new, really amazing permissions that will help any DBA grant simple permissions across all databases on the server in minimal time; the operation will not take him one second. Let's go through the scenario and the demo.

SQL Server 2012 exposes 31 server-level permissions, whereas SQL Server 2014 has 34. These permissions are listed under Server Properties > Permissions tab.

New permissions in SQL Server 2014:

  1. CONNECT ANY DATABASE permission.
  2. SELECT ALL USER SECURABLES permission.
  3. IMPERSONATE ANY LOGIN permission.

New permissions
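As a quick illustration, granting all three permissions takes one short batch (the login name [Readeruser] is just an example):

```sql
USE [master];
GO
-- The three new server-level permissions introduced in SQL Server 2014,
-- granted to an example login in a single batch.
GRANT CONNECT ANY DATABASE       TO [Readeruser];
GRANT SELECT ALL USER SECURABLES TO [Readeruser];
GRANT IMPERSONATE ANY LOGIN      TO [Readeruser];
GO
```

One statement per permission, issued once at the server level, replaces what previously required touching every database individually.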

 


Breakthrough in SQL Server 2012 Security fixed by SQL Server 2014
Hi dears, we have a big bug in SQL Server 2012 security; in my personal view it is a real breach of SQL Server 2012 security. In SQL Server 2012 a user can impersonate another user, take his privileges, and do whatever he needs. What?! Yes, what I am saying is a fact in SQL Server 2012, but it is fixed in SQL Server 2014. Let's go through the demo:

Open your SQL Server 2014 instance and open two sessions.

First session: create a sysadmin user and a read user.

1- Sysadmin user


USE [master]
GO
CREATE LOGIN [adminuser] WITH PASSWORD=N'admin', DEFAULT_DATABASE=[master], CHECK_EXPIRATION=OFF, CHECK_POLICY=OFF
GO
ALTER SERVER ROLE [sysadmin] ADD MEMBER [adminuser]
GO

2- Read user


USE [master]
GO
CREATE LOGIN [Readeruser] WITH PASSWORD=N'P@$$w0rd'
GO
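The second session of the demo would then attempt something like the following (a reconstruction based on the setup above, not the author's exact script). On SQL Server 2014 this fails with a permission error unless IMPERSONATE has been explicitly granted, which is exactly the fix being demonstrated:

```sql
-- Second session: connect as [Readeruser], then try to impersonate
-- the sysadmin login created in the first session.
EXECUTE AS LOGIN = 'adminuser';
SELECT SUSER_NAME() AS current_login;  -- shows 'adminuser' only if impersonation succeeded
-- Any statement here would run with sysadmin privileges.
REVERT;
```

On SQL Server 2014 the EXECUTE AS statement is rejected for [Readeruser], so the low-privileged login can no longer escalate itself to sysadmin this way.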


How to Grant Show Plan Privilege
The SHOWPLAN privilege is granted to anyone who needs to see the execution plan of a SQL Server query, in order to check index performance or do index analysis.

Grant SHOWPLAN to one user in one database:


GRANT SHOWPLAN TO [DomainName\username]

Grant SHOWPLAN to one user in all databases on one SQL instance:


EXEC sp_MSforeachdb N'IF EXISTS
(
SELECT 1 FROM sys.databases WHERE name = ''?''
AND is_read_only <> 1
)
BEGIN
PRINT ''USE [?]; GRANT SHOWPLAN TO [DomainName\username];''
END';

After the execution, take the printed statements and run them in another session.



How to check the last restart for the Server
Hi dears, I received a request from the developers: "the Staging DB was down yesterday evening, between 6:00 PM and 9:00 PM, can you please advise?" At this point you should do a scientific investigation and troubleshooting, which means checking two points:

  1. The last SQL Server instance restart.
  2. The last DB server (Windows Server) restart.

Last SQL Server restart:

We can check it easily, in the easiest way, with a SQL query:

SELECT sqlserver_start_time FROM sys.dm_os_sys_info;
SELECT 'Statistics since: ' + CAST(sqlserver_start_time AS VARCHAR) FROM sys.dm_os_sys_info;
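Last DB server (Windows) restart: for the second point, the OS boot time can also be derived from SQL Server itself, since `ms_ticks` in `sys.dm_os_sys_info` holds the number of milliseconds since Windows started (a sketch):

```sql
-- Derive the Windows boot time from the OS uptime counter.
-- ms_ticks is the uptime in milliseconds; subtract it from "now".
SELECT DATEADD(SECOND, -CAST(ms_ticks / 1000 AS INT), GETDATE()) AS os_boot_time
FROM sys.dm_os_sys_info;
```

Comparing `os_boot_time` with `sqlserver_start_time` tells you whether the whole server was rebooted or only the SQL Server service was restarted during the reported outage window.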

SQL Gulf#1 Memories and Excerpts (1/3)
Two weeks have passed since SQL Gulf #1, and we are still basking in its cheerful memories; they really stick in the mind, and we can never forget a single beautiful moment, either with the speakers or the audience. Indeed, they were the energy of SQL Gulf #1 and kept the event alive from its first moment to its last. Therefore, I am going to share with you sequential portfolios of SQL Gulf #1 memories over the upcoming weeks to keep them memorialized. You can also download all presentations and demos from all speakers below:

Hugo Sessions

Satya Sessions

Kamal Session

Omar Session

Shehap Sessions


A Wonderful Weekend in Riyadh, for SQLGulf #1, at Alfaisal University
Thanks to the perseverance of SQL MVP Shehap El-Nagar, who invited me last year, this brave man from the Saudi Ministry of Higher Education managed to organise a great inaugural event. After the longest visa request process of my life (twice, since his original goal was to host the event in December 2013), it finally happened: the 1st SQLGulf event was held in Riyadh, K.S.A. this past Saturday, at the beautiful new campus of AlFaisal University, Riyadh.