Hello everybody. In the previous post we went into detail about how to monitor and manage your SQL Server jobs. What did we cover in the previous post? We covered 10 points:
- Check SQL Server jobs: a first overview
- Check job status: is it enabled or disabled?
- When was a SQL Agent job created, and when was it last modified?
- Identify newly created SQL Server Agent jobs
- Check enabled jobs without a notification setup
- Retrieve enabled operators in SQL Server Agent
- Update SQL Server Agent jobs that have no notification with a new notification
- SQL Server Agent job setup and configuration information
- Check jobs with an enabled schedule
- Check SQL Server Agent job schedule information
Today I am going to complete this subject by covering another set of impressive points:
- Configuring SQL Agent jobs to write to the Windows event log
- Generate a SQL Agent job schedule report
- SQL Server Agent job setup and configuration information
- SQL Server Agent job steps execution information
- Jobs report by on-success and on-failure actions
- Check or change a job owner
- List the jobs currently running on your DB server
- Jobs with long execution times
- Select failed jobs
- How to search your jobs' text
Let's drill down and go through these 10 new points one by one.
In the last blog it became apparent that the breakdown of all DR site configurations related to the DR failover scenario from the HQ site to the DR site, and the failback, comes down to one thing: how to sync the file structure and security directory (logins, users and permissions) from the HQ site to the DR site, then apply them once you decide to fail over from HQ to DR. Simple as that. I am going to do almost the same thing in this blog, but at the HQ site itself. Yes, that is right: DR configurations are not limited to the DR site; they apply to the HQ site as well, because changes can happen on the DR site and need to be reflected back to the HQ site once the failback scenario takes place.
To oversimplify things, we have two options for failback from the DR site to the HQ site:
- The first is easier and safer, but it should only be used for short failover periods, where file structure or security changes on the DR site are unlikely. It is based on turning off the HQ DB clusters completely so that SAN replication can run without any possibility of disk corruption; all you have to do when you decide to fail back is turn the HQ clusters back on. Pay close attention, though, that SAN replication is 100% complete before you bring your DB clusters online, so that you do not corrupt your DBs.
- The other is based on exactly the same approach we used in the last blog, in which case you should follow me for the rest of this blog. (It is your choice which direction to go.)
In the last blog I illustrated the conceptual model of the active-passive DR model and tried to zoom in on the abstraction level of this model so you could understand the model as a whole. Now I am turning to the details of each step in terms of schema design scripts, stored procedures and jobs, and I am also going to wrap all the steps into a single package so they are easier to download.
Today I am going to break down all DR site configurations related to the DR failover scenario from the HQ site to the DR site and the failback. These configurations are nothing more than syncing the file structure and security directory (logins, users and permissions) from the HQ site to the DR site, then applying them once you decide to fail over from HQ to DR, as shown below. Simple as that. Keep in mind that these configurations represent the majority of the active-passive DR solution, so I urge you to practice them thoroughly on the DR site.
In the previous post I finished the part on how to diagnose a SQL Server instance or database quickly. To be honest with you, that is a very big topic that can never really be finished, because it needs a lot of T-SQL plus professional third-party tools like Dynatrace, a really wonderful tool that doesn't miss a thing.
So today I will start a new part in our series: how to administrate and monitor SQL Server jobs with T-SQL.
So let's start.
WHAT WE COVER HERE ABOUT SQL SERVER AGENT JOBS
The first thing any DBA monitoring SQL Server Agent needs is a first overview of his or her SQL Server jobs.
Check SQL Server jobs: a first overview
SELECT DISTINCT
    SUBSTRING(a.name, 1, 100) AS [Job Name],
    'Enabled' = CASE WHEN a.enabled = 0 THEN 'No'
                     WHEN a.enabled = 1 THEN 'Yes' END,
    SUBSTRING(b.name, 1, 30) AS [Name of the schedule],
    'Frequency of the schedule execution' =
        CASE WHEN b.freq_type = 1  THEN 'Once'
             WHEN b.freq_type = 4  THEN 'Daily'
             WHEN b.freq_type = 8  THEN 'Weekly'
             WHEN b.freq_type = 16 THEN 'Monthly'
             WHEN b.freq_type = 32 THEN 'Monthly relative'
             WHEN b.freq_type = 64 THEN 'Execute when SQL Server Agent starts' END,
    'Units for the freq_subday_interval' =
        CASE WHEN b.freq_subday_type = 1 THEN 'At the specified time'
             WHEN b.freq_subday_type = 2 THEN 'Seconds'
             WHEN b.freq_subday_type = 4 THEN 'Minutes'
             WHEN b.freq_subday_type = 8 THEN 'Hours' END,
    CAST(CAST(b.active_start_date AS varchar(15)) AS datetime) AS active_start_date,
    CAST(CAST(b.active_end_date AS varchar(15)) AS datetime) AS active_end_date,
    STUFF(STUFF(RIGHT('000000' + CAST(c.next_run_time AS varchar), 6), 3, 0, ':'), 6, 0, ':') AS Run_Time,
    CONVERT(varchar(24), b.date_created) AS Created_Date
FROM msdb.dbo.sysjobs AS a
INNER JOIN msdb.dbo.sysjobschedules AS c ON a.job_id = c.job_id
INNER JOIN msdb.dbo.sysschedules AS b ON b.schedule_id = c.schedule_id;
GO
This script will return:
In the previous post we talked about the idea behind this series of posts, and I started my first post with how to diagnose a SQL Server instance or database quickly. Today I will continue this part of how to diagnose your database.
Database information: this script covers the points below:
- How many databases are on the instance?
- What recovery models are they using?
- What is the log reuse wait description?
- How full are the transaction logs?
- What compatibility level are they on?
- What is the Page Verify Option?
- Make sure auto_shrink and auto_close are not enabled!
SELECT db.[name] AS [Database Name],
       db.recovery_model_desc AS [Recovery Model],
       db.log_reuse_wait_desc AS [Log Reuse Wait Description],
       ls.cntr_value AS [Log Size (KB)],
       lu.cntr_value AS [Log Used (KB)],
       CAST(CAST(lu.cntr_value AS FLOAT) / CAST(ls.cntr_value AS FLOAT) AS DECIMAL(18,2)) * 100 AS [Log Used %],
       db.[compatibility_level] AS [DB Compatibility Level],
       db.page_verify_option_desc AS [Page Verify Option],
       db.is_auto_create_stats_on,
       db.is_auto_update_stats_on,
       db.is_auto_update_stats_async_on,
       db.is_parameterization_forced,
       db.snapshot_isolation_state_desc,
       db.is_read_committed_snapshot_on,
       db.is_auto_close_on,
       db.is_auto_shrink_on,
       db.target_recovery_time_in_seconds
FROM sys.databases AS db
INNER JOIN sys.dm_os_performance_counters AS lu ON db.name = lu.instance_name
INNER JOIN sys.dm_os_performance_counters AS ls ON db.name = ls.instance_name
WHERE lu.counter_name LIKE N'Log File(s) Used Size (KB)%'
  AND ls.counter_name LIKE N'Log File(s) Size (KB)%'
  AND ls.cntr_value > 0
OPTION (RECOMPILE);
A small script, but very useful:
DECLARE @AuthenticationMode INT;
EXEC master.dbo.xp_instance_regread N'HKEY_LOCAL_MACHINE',
     N'Software\Microsoft\MSSQLServer\MSSQLServer',
     N'LoginMode', @AuthenticationMode OUTPUT;
SELECT CASE @AuthenticationMode
         WHEN 1 THEN 'Windows Authentication'
         WHEN 2 THEN 'Windows and SQL Server Authentication'
         ELSE 'Unknown'
       END AS [Authentication Mode];
My friend asked me this question today:
My friend: Did you deploy this job [XXX] on the staging server?
Me: No.
My friend: Can you check all jobs on the staging server to find out which job executes this SP [XX]?
Me: Yes, I will try to do that for you.
At first I thought about checking the jobs manually, but I quickly saw how hard that would be; it needs a lot of effort. So I dropped that idea and turned my thinking to a T-SQL script that does the job automatically. With this T-SQL query you can pass any SP name, or any text, and find out which job executes that SP or contains that text. It is really a wonderful and amazing technique.
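A minimal sketch of that idea, searching the command text of every job step in msdb for a given string (the SP name below is a placeholder; substitute whatever you are looking for):

```sql
-- Search all SQL Agent job steps for a given piece of text.
-- N'usp_MyProc' is a placeholder: replace it with the SP name or text you need.
DECLARE @SearchText nvarchar(200) = N'usp_MyProc';

SELECT j.name      AS [Job Name],
       s.step_id   AS [Step],
       s.step_name AS [Step Name],
       s.command   AS [Command Text]
FROM msdb.dbo.sysjobs AS j
INNER JOIN msdb.dbo.sysjobsteps AS s ON s.job_id = j.job_id
WHERE s.command LIKE N'%' + @SearchText + N'%';
```

The same pattern works for any fragment of T-SQL, a table name, a linked server name, and so on, since sysjobsteps.command holds the full step text.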
In the last blog I introduced three kinds of DR solutions, and I am going to go through each of them in the upcoming blogs. I prefer to start with the one most suitable for medium to enterprise environments (SAN replication); later I will consider environments ranging from small to medium that have neither enough room for SAN replication technology nor, in some cases, any SAN storage at all!
Ideally, each DR solution has its own definition, architectural design, caveats, recommendations and build steps. I know you are eager to get to the design steps, but I would like you to follow me in the same sequence, so that you understand the design steps and their purposes concretely.
1- Definition :
The SAN replication solution is based on replicating the content of the SAN disks used by the DB clusters from the prime site to the DR site and mounting them on the DR DB clusters. This works best for environments with clustering solutions, but it can also apply to standalone DB servers, provided all the DBs are stored on SAN storage rather than locally. I am going to assume we are using a DB cluster, because ideally no one pays for SAN storage and forgoes a clustering solution.
2- Architectural design
Ideally, our DB clusters should consist of multiple disks: one for the master DBs, one for tempdb, one for log files, one or more for user DBs, and another for backup files. That kind of cluster layout is much more granular for the SAN replication solution, besides its other performance benefits, as you know. If that is not the case, we just lose some of the rich powers of the SAN replication solution, but it is still feasible to go with SAN replication (I am trying to be as optimistic as possible!).
Any DBA or DB analyst needs to know two important points:
How to administrate DBs in a single click?
How to monitor your SQL Server instances?
In my opinion, for the first point, how to administrate DBs in a single click, we covered the most important subjects in this series posted by SHEHAP EL-NAGAR:
But this is not the end, because we will start a very wonderful and impressive series to complete our plan to help any DBA or DB analyst in his or her job. Now you can manage and administrate your database, but how do you monitor your SQL Server instances?
1- How to diagnose a MSSQL Server instance or MSSQL database quickly?
2- How to administrate and monitor SQL Server Agent with T-SQL?
3- How to administrate and monitor SQL Server backup and restore with T-SQL?
4- How to audit your SQL Server instance and database?
5- How to monitor your instance and database using DMVs, alerts, UCP monitoring, third-party tools, etc.?
Most of these questions and topics I will cover in this series, so that it becomes a repository for any DBA to administrate and monitor his or her SQL Server instances and databases; it will be a DBA portfolio. Let's start.
In the last blog we explained the 4th job of any database administrator once he or she gets into the ground, OIR (Online Index Rebuild). Now I am turning to the 5th, which is related to index maintenance.
5th job: Update statistics
It is extremely important to keep your index and column statistics up to date, to help the query optimizer determine the right execution plans, especially for queries with many joins and unions between multiple data entities. The types of physical joins (nested loop, merge join and hash join) used to implement logical joins (inner join, full outer join, left outer join, right outer join, cross apply, outer apply, etc.) are chosen based on the entity data volumes calculated from statistics. In addition, those statistics help the query optimizer pick the right indexes, so if an index's statistics are outdated, it might not be chosen in the query execution plan even though its design suits the T-SQL query!
Hence, the database properties allow (by default) statistics to be updated and created synchronously to improve query execution plans and guide the query optimizer the right way, because the query optimizer checks for out-of-date statistics before compiling a query and before executing a cached query plan. Before compiling a query, the query optimizer uses the columns, tables, and indexed views in the query predicate to determine which statistics might be out of date. Before executing a cached query plan, the Database Engine verifies that the query plan references up-to-date statistics.
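When the automatic updates are not enough, statistics can also be refreshed by hand; a minimal sketch (the table name dbo.Orders is a placeholder):

```sql
-- Check the automatic statistics settings of the current database.
SELECT name, is_auto_create_stats_on, is_auto_update_stats_on
FROM sys.databases
WHERE name = DB_NAME();

-- Refresh statistics for one table with a full scan
-- (dbo.Orders is a placeholder table name).
UPDATE STATISTICS dbo.Orders WITH FULLSCAN;

-- Or refresh every out-of-date statistic in the whole database.
EXEC sp_updatestats;
```

A scheduled Agent job running one of these after large data loads is a common way to keep plans healthy between full index rebuilds.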
It is extremely important to leverage your database availability so it can withstand all partial and full disaster cases, not just normal cases like a server restart/shutdown or a network disconnect, because it can be terrible if you face a partial or full disaster such as a virus, total cluster node failure, an attack or a total network outage. It really scares me, as a DB administrator, to fall into one of those cases. Therefore I am starting this new series of blogs with you to open your mind to different DR architectures using different technologies and techniques (SAN replication, HAG (high availability AlwaysOn group), transactional replication, etc.), defining the architectural design, prerequisites, steps, caveats and recommendations for each one.
However, before diving deeply into those approaches, we should know the major principles and objectives of any DR solution, so that we can build our DR architecture proportionally and go the right way.
In the last blog we explained the 3rd job of any database administrator once he or she gets into the ground, and now I am turning to the 4th, which is related to index rebuilds.
4th job: Index rebuild
It is extremely important to rebuild your indexes from time to time at a suitable frequency, daily, weekly or monthly, according to their fragmentation. But first of all, let us go through some definitions of index fragmentation, to understand it clearly and be able to handle it proportionally.
We have two types of fragmentation: one is logical and the other is physical, and the logical one breaks down into internal and external, as follows:
1- Physical fragmentation:
It results from fragmentation of the I/O subsystem itself, which causes SQL Server to go back and forth to read from the DB file because the OS is unable to find contiguous space for it on disk. This can happen for many reasons, such as:
- Repeated DB growth in small chunks, which is why you should set a good initial DB size as well as a good DB growth rate in MB, to avoid such repeated growth.
- Placing DB files alongside other OS or application files. For example, while installing a standalone DB server, allocating DB files on the C:\ drive is risky from a fragmentation perspective, as well as for performance and availability, because its cluster size is 4 KB by default, not the 64 KB recommended by I/O disk performance best practices.
This cannot be resolved with an index rebuild at all, and SQL Server cannot even recognize this kind of fragmentation. You can only run OS defragmentation while your SQL Server service is stopped, so that it does not cause any corruption, or, much better, move the DBs to another, defragmented disk.
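The logical kind of fragmentation, by contrast, is something SQL Server can both measure and fix. A minimal sketch (the thresholds are a common rule of thumb, and the table/index names are placeholders):

```sql
-- Measure logical fragmentation per index in the current database.
SELECT OBJECT_NAME(ips.object_id) AS [Table],
       i.name                     AS [Index],
       ips.avg_fragmentation_in_percent
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
INNER JOIN sys.indexes AS i
        ON i.object_id = ips.object_id
       AND i.index_id  = ips.index_id
WHERE ips.index_id > 0
ORDER BY ips.avg_fragmentation_in_percent DESC;

-- Rule of thumb: REORGANIZE between roughly 5% and 30%, REBUILD above 30%.
-- dbo.Orders / IX_Orders_OrderDate are placeholder names.
ALTER INDEX IX_Orders_OrderDate ON dbo.Orders REORGANIZE;
ALTER INDEX IX_Orders_OrderDate ON dbo.Orders REBUILD;
```

Running the measurement query first keeps the maintenance job from rebuilding indexes that do not need it.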
How can we do a fast assessment of a new database? This subject is really very big, and we would need a lot of posts to cover everything related to it, but what I will write today on my blog is the fast way through the most important points in the circle of database assessment.
Database Assessment Phases :
- Database schema design.
- Stress tests for expensive queries and ad hoc queries.
- The SPs that most hit the application home page.
- Index analysis.
Phase 1 (Database schema design):
In this phase we must cover some important points. I will explain them one by one, with the scripts, so this can be a very good repository for any DBA or DB analyst doing an assessment of any database schema design.
In the last blog we explained the 2nd job of any database administrator once he or she gets into the ground. Now I am turning to the 3rd part of database administration, which is shrinking the transaction log file by an adequate size so as not to increase the VLF count and thereby kill DB performance on a database with heavy OLTP traffic.
First of all, shrinking the log file is not inherently good; it depends on the shrink size as well as the frequency of the shrink jobs, because it can affect the number of VLFs (virtual log files). Each time you perform a shrink, inactive VLFs are phased out, and new VLFs are added as the log grows back (you can determine the VLF count using DBCC LOGINFO). This in turn can affect the recovery time of any database if it exceeds 1000 VLFs, and it can also cause slowness. Therefore I recommend you select an appropriate initial log file size, such as 1 GB or multiples of 1 GB for mission-critical DBs, and less, say 500 MB, for less transactional DBs like SharePoint DBs, and then use that same value when shrinking the log file.
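A minimal sketch of checking the VLF count and log usage, then shrinking the log back to a chosen target (the logical file name 'MyDB_log' and the 1024 MB target are placeholder assumptions):

```sql
-- Count the VLFs of the current database:
-- DBCC LOGINFO returns one row per VLF, so the row count is the VLF count.
DBCC LOGINFO;

-- On SQL Server 2012 and later, check how full the log actually is.
SELECT total_log_size_in_bytes / 1048576 AS [Log Size (MB)],
       used_log_space_in_percent        AS [Log Used %]
FROM sys.dm_db_log_space_usage;

-- Shrink the log file back to a fixed target size in MB
-- ('MyDB_log' is a placeholder logical file name).
DBCC SHRINKFILE (N'MyDB_log', 1024);
```

Shrinking to the same fixed size each time, rather than to the minimum possible, is what keeps the VLF count from ballooning through repeated shrink-and-grow cycles.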
In the last blog we started explaining the 1st job that any database administrator should do at the beginning, "Basic Master Configurations", and now I am going to turn to the 2nd part of database administration, which is backup operations. First let me illustrate some basic fundamentals about backup policies before drilling into the T-SQL scripts, to make them more meaningful for you.
Normally, any backup policy for production DBs should consist of three types of backups, regardless of how they are used, how they are scheduled and what their retention period is, as follows:
Full backup: a backup of all data files (.mdf, .ndf and FILESTREAM files) and the transaction log file (.ldf). It has two main types: a regular backup, which is the necessary base for further differential or transaction log backups, and a copy-only backup, which is used a single time and cannot serve as the base for any further differential or transaction log backup.
- Restore path: a single full backup file is restored.
Differential backup: a backup of the data files only, containing just the incremental changes since the last full backup.
- Restore path: two backup files (the last full backup (a regular one, not a copy-only one, as explained above) + the targeted differential backup file).
Transaction log backup: a backup of the transaction log changes since the last transaction log backup taken; thus all transaction log backups are chained in a row back to the last differential backup or the last full backup.
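The three backup types above can be sketched in T-SQL as follows (the database name MyDB and the backup paths are placeholders):

```sql
-- Full backup: the base for later differential and log backups.
BACKUP DATABASE MyDB
TO DISK = N'R:\Backup\MyDB_full.bak'
WITH INIT, COMPRESSION;

-- Copy-only full backup: taken out of band, does not reset the differential base.
BACKUP DATABASE MyDB
TO DISK = N'R:\Backup\MyDB_copyonly.bak'
WITH COPY_ONLY, COMPRESSION;

-- Differential backup: only the changes since the last regular full backup.
BACKUP DATABASE MyDB
TO DISK = N'R:\Backup\MyDB_diff.bak'
WITH DIFFERENTIAL, COMPRESSION;

-- Transaction log backup: requires the FULL (or BULK_LOGGED) recovery model.
BACKUP LOG MyDB
TO DISK = N'R:\Backup\MyDB_log.trn'
WITH COMPRESSION;
```

The restore paths described above follow directly: a full backup restores alone, a differential needs its base full backup first, and log backups replay in sequence on top of either.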
I really find this case a big problem for any developer or DBA working with big data or large volumes of data: how do you optimize update queries for large data volumes, with no impact at the server level or the database level, and without any lock on your database? I really find this post terrific for any dev or DBA who wants to learn how to write a healthy UPDATE statement for big data.
Explaining the problem: if I update a table that has millions of records in one shot, can you imagine what will happen on your server and your database? On the server you will see more CXPACKET waits, an indicator that your server is not healthy, and this query will consume a lot of CPU and I/O, which in the end gradually hurts your server's performance. On the database side, your database may be locked.
The best solution to the problem :
We need to make a promise to ourselves: DO NOT UPDATE LARGE DATA IN ONE SHOT; UPDATE IT IN CHUNKS.
So these are the steps to write the best UPDATE statement for large data:
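The chunked approach can be sketched like this (the table, columns and batch size are placeholder assumptions; tune the batch size to your own workload):

```sql
-- Update in batches of 10,000 rows until no qualifying rows remain.
-- dbo.Orders and the Status values are placeholders for your table and predicate.
DECLARE @BatchSize int = 10000;
DECLARE @Rows int = 1;

WHILE @Rows > 0
BEGIN
    UPDATE TOP (@BatchSize) dbo.Orders
    SET    Status = 'Archived'
    WHERE  Status = 'Closed';

    SET @Rows = @@ROWCOUNT;   -- becomes 0 when nothing is left to update

    -- Optional: pause briefly between batches to let other workloads breathe.
    IF @Rows > 0 WAITFOR DELAY '00:00:01';
END
```

Each small batch commits quickly, so locks are held only briefly, the log is reused between batches, and concurrent readers are not blocked for the whole run.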
Today I will talk about an important subject. If we want to make our server healthier and improve performance by empowering index performance, you need to learn how to check for:
- Duplicated Index
- Missing Index
- Unused Index
First, I will do my demo on the AdventureWork2012 database.
Create the table structure:
USE AdventureWork2012;
GO
SELECT *
INTO [Sales].[MainSalesOrderDetail]
FROM [Sales].[SalesOrderDetail]
WHERE 1 = 2;
GO
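As one example of the three checks listed above, unused indexes can be spotted from the index usage DMV; a minimal sketch (interpret it with care, since the counters reset on every instance restart):

```sql
-- Nonclustered indexes that are maintained (written to) but never read
-- since the last instance restart.
SELECT OBJECT_NAME(i.object_id) AS [Table],
       i.name                   AS [Index],
       ius.user_updates         AS [Writes],
       ius.user_seeks + ius.user_scans + ius.user_lookups AS [Reads]
FROM sys.indexes AS i
INNER JOIN sys.dm_db_index_usage_stats AS ius
        ON ius.object_id   = i.object_id
       AND ius.index_id    = i.index_id
       AND ius.database_id = DB_ID()
WHERE i.type_desc = 'NONCLUSTERED'
  AND ius.user_seeks + ius.user_scans + ius.user_lookups = 0
ORDER BY ius.user_updates DESC;
```

Indexes at the top of this list cost writes on every modification while returning nothing on reads, making them the first candidates to review for dropping.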
"How to count all objects in your database": that is the title of today's post. I really find this question very important for any DBA or DB architect who needs to know the count of all objects, or of a specific object type, in his database the fast way. With this small script we can grab what we need on this subject. I really admire it.
SELECT 'Count' = COUNT(*),
       'Type' = CASE type
                  WHEN 'C'  THEN 'CHECK constraints'
                  WHEN 'D'  THEN 'Default or DEFAULT constraints'
                  WHEN 'F'  THEN 'FOREIGN KEY constraints'
                  WHEN 'FN' THEN 'Scalar functions'
                  WHEN 'IF' THEN 'Inlined table-functions'
                  WHEN 'K'  THEN 'PRIMARY KEY or UNIQUE constraints'
                  WHEN 'L'  THEN 'Logs'
                  WHEN 'P'  THEN 'Stored procedures'
                  WHEN 'R'  THEN 'Rules'
                  WHEN 'RF' THEN 'Replication filter stored procedures'
                  WHEN 'S'  THEN 'System tables'
                  WHEN 'TF' THEN 'Table functions'
                  WHEN 'TR' THEN 'Triggers'
                  WHEN 'U'  THEN 'User tables'
                  WHEN 'V'  THEN 'Views'
                  WHEN 'X'  THEN 'Extended stored procedures'
                END,
       GETDATE()
FROM sysobjects
GROUP BY type
ORDER BY type;
GO
Many database options are available, but the most important are these:
- AUTOMATIC OPTIONS
- MISCELLANEOUS OPTIONS
- RECOVERY OPTIONS
- STATE OPTIONS
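These option groups are all set with ALTER DATABASE; a minimal sketch covering one or two options from each group (MyDB is a placeholder database name):

```sql
-- Automatic options: disable the risky auto-shrink and auto-close behaviors.
ALTER DATABASE MyDB SET AUTO_SHRINK OFF;
ALTER DATABASE MyDB SET AUTO_CLOSE OFF;

-- Recovery options: choose the recovery model and page verification method.
ALTER DATABASE MyDB SET RECOVERY FULL;
ALTER DATABASE MyDB SET PAGE_VERIFY CHECKSUM;

-- State options: take the database read-only, then bring it back.
ALTER DATABASE MyDB SET READ_ONLY WITH ROLLBACK IMMEDIATE;
ALTER DATABASE MyDB SET READ_WRITE;
```

The current value of each option can always be checked in the corresponding columns of sys.databases.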
In the previous post, "Lock VS Block", I explained what locks, blocks and deadlocks are. Today we will focus on how to monitor locks and deadlocks in SQL Server.
I have several ways to monitor this issue:
1- Tools (SQL Deadlock Detector), very helpful for this issue.
2- SQL Server stored procedures.
3- Activity Monitor.
4- SQL Server reports.
First, let's create our workshop.
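Alongside the tools listed above, current locking and blocking can also be read straight from the DMVs; a minimal sketch:

```sql
-- Who is blocking whom right now?
SELECT r.session_id,
       r.blocking_session_id,
       r.wait_type,
       r.wait_time,
       t.text AS [Running Batch]
FROM sys.dm_exec_requests AS r
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
WHERE r.blocking_session_id <> 0;

-- What locks are currently held or requested in this database?
SELECT request_session_id,
       resource_type,
       request_mode,
       request_status
FROM sys.dm_tran_locks
WHERE resource_database_id = DB_ID();
```

The first query surfaces live blocking chains; the second shows the lock modes (S, X, IX, etc.) behind them, which is usually enough to identify the offending session before reaching for a heavier tool.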
When I developed my organization's phone list site, based on SQL Server as the DBMS and ASP.NET, I saw that I needed to show each employee's image, with his related contact info, which was already stored in an isolated Oracle ERP BLOB field.
So in this section I will demonstrate how to retrieve a BLOB field from an Oracle DB and show it in an ASP.NET web site.
First we will add an img tag as in the following code:
<img id="EmpImg" src="ShowImage.ashx?Account=" alt="" width="100px" height="100px" style="float:right"/>
DECLARE @Result int;
DECLARE @File_Location AS nvarchar(300) = 'F:\fff.txt';
DECLARE @FSO_Token int;
EXEC @Result = sp_OACreate 'Scripting.FileSystemObject', @FSO_Token OUTPUT;
EXEC @Result = sp_OAMethod @FSO_Token, 'DeleteFile', NULL, @File_Location;
EXEC @Result = sp_OADestroy @FSO_Token;
In the previous post we saw some limitations of the execution plan, along with some tweaks. In this post we will go through some of the backstage things happening in the SQL engine.
First, I would like to share some truths regarding the query optimizer. There is effectively a disclaimer on the query optimizer's output: the indexes it suggests should be reviewed by a human brain. Some people will apply the index suggested by the optimizer without much analysis, and in that case the index may not be used by the query at all.
For example, you are optimizing some code and the optimizer suggests a missing index. You create the suggested index, go back to the query execution plan, and it still suggests the same missing index you have just created. Once this happens, you can be sure that what you created is not appropriate for that query. The best way to resolve it is to look deeper into the query. The query may need the same index with some small changes, such as reordering the key columns or the included columns according to the need, or adding an index filter to match your WHERE condition. That way you get the full benefit of the query optimizer.
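As an illustration of that kind of adjustment (every table, column and index name here is hypothetical):

```sql
-- Suppose the optimizer suggested something like:
--   CREATE INDEX IX_Orders_Status ON dbo.Orders (Status) INCLUDE (OrderDate);
-- but the query actually filters on Status = 'Open' and orders by OrderDate.
-- A filtered index with the key reordered to match the sort may fit better:
CREATE NONCLUSTERED INDEX IX_Orders_OrderDate_Open
ON dbo.Orders (OrderDate)          -- key matches the ORDER BY
INCLUDE (CustomerID)               -- covers the remaining selected column
WHERE Status = 'Open';             -- filter matches the WHERE condition
```

After creating an adjusted index like this, re-check the actual execution plan: the missing-index warning should disappear and the seek should target the new index.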
Dear colleagues around the world!
Peace and blessings be upon you.
I present to you a comprehensive introductory course on SQL Server 2012. The course is delivered in Arabic and aims to introduce the core concepts to database professionals in general, and SQL Server professionals in particular, in my humble attempt to spread this knowledge among my Arabic-speaking colleagues.
The course is intended to satisfy the curiosity and spark the interest of SQL Server specialists, DBAs and developers in particular. It is explained simply, especially in the first lessons, so that beginners in SQL Server can follow it, bearing in mind that there is very little material in this field in Arabic. So I decided, God willing, to make this knowledge accessible to you, in the hope that it benefits my brothers and sisters anywhere in the world. I ask God Almighty that these lessons please everyone, especially those interested in SQL Server, and I ask all brothers and sisters to remember me in their prayers.
Do not hesitate to contact me with your opinions and constructive suggestions to improve this charitable work, God willing.
December 2013, Washington, United States of America
Dear Colleagues of the Database World!
I present to you a comprehensive introductory course in SQL Server 2012. This course is presented in the Arabic language, and is intended to introduce core concepts to database professionals trying to acquire knowledge in SQL Server. The course is geared towards those who aspire to become DBAs or developers, or those just interested in learning the basics of SQL Server. Since there is very little material in Arabic, I decided to try to use my skills to bridge the knowledge gap for my SQL family that communicates in Arabic. I hope you enjoy the classes, and please feel free to share and leave constructive feedback.
Thank you and good luck future SQL Server Professionals world wide!
Special thanks to my friends Mohamed Elsharkawy, Jihad Abouhatab and my brother Islam El-Ghazali for their help and support with this production.
I just finished my two presentations yesterday at the SQLSaturday event in Ancona, Italy: "Data Warehousing Guidelines for BI and BAM" and "T-SQL Performance Guidelines for Better DB Stress Powers". Indeed, it was a very successful event. Thanks to the PASS SQLSaturday team for their kind invitation to speak there; it was a momentous event, full of wonderful sessions, speakers and, of course, audience.
I just finished my two presentations yesterday at the SQLSaturday event in South America: "Performance Dreams Wait for You at SQL Server 2014" and "T-SQL Performance Guidelines for Better DB Stress Powers". Indeed, it was a very successful event. Thanks to the PASS SQLSaturday team for their kind invitation to speak there. Now I will start my trip back to Saudi Arabia. It is a really long trip: through London, then Florida, then South America, and now back through Florida, then Paris, to Saudi Arabia, almost 36 hours! See you next Friday at SQLSaturday Italy, where I will speak about DWH solutions and T-SQL performance guidelines.
Alright, I’ve got your attention. You’re probably thinking “who is this crazy guy that is using the word Love with Windows 8 in the same sentence?” Honestly though, I have not had many problems with Windows 8 and I’ve been happy running it for a long time. I’m actually disappointed that I don’t have Windows 8 at my job, so I can only use it at home. It has been extremely beneficial for me for my SQL Server related work. Why? And why am I stalling to answer your question? And why am I going to change the subject to Hyper-V?
If you’re still reading, then great, you’ve got the patience for what is coming next. Windows 8 Pro includes a hypervisor (Hyper-V to be exact). I’m not talking about software to connect to virtual environments, I’m talking about an actual hypervisor that allows you to create and manage virtual machines. Once you enable the option you’re ready to go after a restart, it’s that simple.
So what does this have to do with me as a SQL Server Professional? Well, how else are you going to build your own lab environment at home or at work where you can play around, build and break things, and be your own domain admin (warning: power may go to your head on that last one). It’s all part of your life long learning journey and your drive to become a better SQL Server Professional and get the latest and greatest version on your machine to test out without ruining your OS. It so happens that I’m a nice guy, and I’ll show you how to get your environment setup and how to create “templates” for future VMs with this Hyper-V Tricks for the SQL Professional Video I put together.
Are you ready to push SQL Server to its limits, test things you’ve never tested before, and build and break environments without losing your job? If the answer is yes then this video is for you! When you’re done with this video make sure you check out my Virtual SQL Server Lab with Clustering post to enhance your own virtual playground.
Hyper-V Tips and Tricks for SQL Server Virtual Lab
(My apologies, I could not embed the video into this post so please use the link above)
PowerShell Scripts mentioned in video (Remember to rename your VM if you like and to change the paths to match your setup):
new-vm -Name "SQLPS" -MemoryStartupBytes 2048MB -Path R:\HyperV\
new-vhd -ParentPath "R:\HyperV\SQL 2012 Template\SQL2012 Template Drive.vhdx" -Path R:\HyperV\SQLPS\SQLPS.vhdx -Differencing
Add-VMHardDiskDrive -VMName SQLPS -Path “R:\HyperV\SQLPS\SQLPS.vhdx”
We are counting the days until we launch registration for the 1st SQL Server 2014 event in the Middle East, organized by our community http://sqlserver-performance-tuning.net/ in Riyadh, where:
- World-renowned MVP speakers from the US, Canada, the UK, Egypt and Saudi Arabia will come to speak to you
- All registered participants will get attendance certificates, community membership cards and bags
- Prize draws worth more than $12,000 from global companies
- An open buffet and coffee breaks
- 5 job opportunities at governmental organizations
- Peer-to-peer workshops with the speakers in special lab rooms
- TV media coverage
- You will also be able to meet VIPs
In this blog we are going to see some basics of how to interpret an execution plan for a query. The first step in tuning a query for optimal performance is to understand the query execution on a cost basis: which part of the query is costly and which part is not. To do so, SQL Server's query execution plan is the best choice, by which we can point out which part of our query is the most costly.
Learning to read a query execution plan is like learning a new language, an iconic one. Explaining each and every icon of this language would be tedious for you; more about execution plans can be grasped by practicing with them.
Now we have the great honor of being the 1st community in the entire Middle East to publish its Arabic series of SQL Server blogs on MSDN Arabia. In addition, our blogs are the majority there, so we have become the leader on MSDN Arabia. Please find us there at: