MCITP

Monday, June 20, 2016

Install SQL Server using a configuration file


SQL Server allows you to generate a configuration file, and using this file we can deploy SQL instances with the same configuration on different servers. We can also standardize manual installations by using batch files that launch SQL setup with the configuration file.

Setup supports the use of the configuration file only through the command prompt. The configuration file overrides the defaults in a package, and command-line values override the values in the configuration file.

The configuration file can be used to track the parameters and values for each installation. This makes the configuration file useful for verifying and auditing the installations.

The ConfigurationFile.ini file is a text file with parameters (name/value pairs) and descriptive comments.
To generate the configuration file, start setup from the media (setup.exe) and proceed to the Ready to Install page.
The path to the configuration file is shown on the Ready to Install page, in the configuration file path section.
Then cancel the setup.

Normally the file location would be: %programfiles%\Microsoft SQL Server\110\Setup Bootstrap\Log\<timestamped folder>\ConfigurationFile.ini.
To see the progress of the installation when running setup with ConfigurationFile.ini, we can manually edit the file and set the parameters INDICATEPROGRESS and QUIETSIMPLE to TRUE.
NB: The sa password is not included in ConfigurationFile.ini even if you entered it while creating the file. To include it, add the parameter SAPWD="Password" below SECURITYMODE=SQL in the file, or set the sa password after the installation is done.
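For illustration, a trimmed ConfigurationFile.ini might look like the sketch below. The instance name, feature list and accounts are placeholder values, not ones taken from a generated file:

;SQL Server Configuration File (sketch with placeholder values)
[OPTIONS]
ACTION="Install"
FEATURES=SQLENGINE
INSTANCENAME="MSSQLSERVER"
SQLSYSADMINACCOUNTS="DOMAIN\DBAGroup"
SECURITYMODE="SQL"
SAPWD="StrongPassword1!"
INDICATEPROGRESS="True"
QUIETSIMPLE="True"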

To install SQL Server:
Copy ConfigurationFile.ini to the setup folder where setup.exe is located, then run the following syntax from a command prompt in that folder:
Setup.exe /ConfigurationFile=ConfigurationFile.ini
To specify passwords at the command prompt instead of in the configuration file:

Setup.exe /SQLSVCPASSWORD="*****" /AGTSVCPASSWORD="*****" /ASSVCPASSWORD="*****" /ISSVCPASSWORD="*****" /RSSVCPASSWORD="*****" /SECURITYMODE=SQL /SAPWD="*****" /ConfigurationFile=MyConfigurationFile.INI


Tuesday, June 14, 2016

Page File for SQL Server

The page file is space allocated as a file on the system drive. The operating system works with physical memory and the paging file together: when there is not enough physical memory left, the operating system can move data from physical memory to the page file on disk. When it needs that data again, it has to read it back from disk, and disk is a lot slower than physical memory. If SQL Server's cache ends up stored on disk instead of in physical memory, you'll have a huge performance problem.

In general, the page file is a physical extension of RAM; when not enough memory is left during processing, the page file saves the server from crashing or hanging.

The maximum memory the system can commit (the Windows commit limit) is: RAM size + page file size.

Normally a DBA sets the page file to 2x the RAM size, which is not always correct; the right size depends on the server's workload. On a 32-bit system, the maximum size of a page file is 4 GB.

But for a modern server, the more RAM we have, the smaller the page file can be; we can use a performance counter to work out the right page file size.

Create a log for the counter Memory\Committed Bytes and let it run for one week; once we have the data, the calculation is:

Maximum value of committed bytes over the period (say 20 GB) + 30% of committed bytes as a buffer - RAM size.
For 16 GB of RAM that would be 20 + 6 - 16 = 10 GB of page file.
We also need to consider the kernel or memory dump size, because a dump is written primarily to the page file before being moved to the configured dump folder.
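One possible way to capture this counter from the command line is typeperf, which can log it to a CSV file (a sketch; the sample interval, sample count and output path are arbitrary - 60-second samples for 7 days is 10080 samples):

typeperf "\Memory\Committed Bytes" -si 60 -sc 10080 -f CSV -o C:\PerfLogs\CommittedBytes.csv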

From Windows Server 2008 onward we can put the page file on any partition, not only the system partition as before. If page files are created on multiple drives, Windows will generally favour the page file on the faster drive.

To configure the page file settings, go to:
System -> Advanced system settings -> Advanced -> Settings (under Performance) -> Advanced -> Virtual memory -> Change

Friday, May 20, 2016

Database Backup Myths

1.       Backup operations cause blocking.
No. Backup operations do not take locks on user objects. Backups do cause a really heavy read load on the I/O subsystem so it might *look* like the workload is being blocked, but it isn't really. It's just being slowed down. There's a special case where a backup that has to pick up bulk-logged extents will take a file lock which could block a checkpoint operation – but DML is never blocked.

2.       Switching from the FULL recovery model to the BULK_LOGGED recovery model and back again breaks the log backup chain.
No. It just doesn't. Switching from either FULL or BULK_LOGGED to SIMPLE *does* break the log backup chain, however.
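For example, switching to BULK_LOGGED for a bulk load and back keeps the chain intact (database name and path are placeholders):

ALTER DATABASE AdventureWorks SET RECOVERY BULK_LOGGED;
-- run the bulk / minimally-logged operation here
ALTER DATABASE AdventureWorks SET RECOVERY FULL;
-- take a log backup as soon as possible afterwards
BACKUP LOG AdventureWorks TO DISK = 'D:\AdventureWorks_log.trn';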

3.       Breaking the log backup chain requires a full backup to restart it.
No. You can restart the log backup chain with either a full or differential backup – anything that bridges the LSN gap from the point at which the log backup chain was broken.
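A sketch of restarting the chain with a differential instead of a full backup (names and paths are placeholders):

-- the log backup chain was broken, e.g. by switching to SIMPLE and back
BACKUP DATABASE AdventureWorks TO DISK = 'D:\AdventureWorks_diff.bak' WITH DIFFERENTIAL;
-- log backups now have an LSN bridge and work again
BACKUP LOG AdventureWorks TO DISK = 'D:\AdventureWorks_log.trn';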

4.        Concurrent log backups are not possible while a full or differential backup is in progress.
No. In SQL Server 2000, a concurrent log backup with either a full or diff backup (I'll just say 'full' from now on but take it to mean 'full or diff') was not permitted. The reason is that a log backup would clear the inactive portion of the log once it had been backed up, but a full backup might still need some of that log, so it couldn't be cleared.
In SQL Server 2005, the restriction was lifted, but there’s a subtle twist. You can do concurrent log backups with full BUT the log is not cleared when the log backup ends. The clearing of the inactive portion of the log is delayed until the full backup completes. This could cause you to have disk space problems if your log generation rate is huge and you’re relying on very frequent backups to manage the log size.

5.       A full or differential backup clears the log.
No. A log backup includes all the log since the last log backup – nothing can change that – no matter whether that log was also backed up by a full or differential backup. In the FULL or BULK_LOGGED recovery models, the only thing that clears the log is a log backup.

6.       Using the BULK_LOGGED recovery model for minimally-logged operations reduces the size of the next transaction log backup
No. A minimally-logged operation is so-named because only the page allocations are logged. A log backup needs all the information necessary to reconstitute the transaction, so a log backup following a minimally-logged operation must backup the log plus all extents changed by the minimally-logged operation. This will result in the log backup being roughly the same size as if the operation was fully logged.

7.       Full and differential backups only contain the log generated while the backup was running
No. A full or differential backup contains enough log to be able to recover the database to a transactionally consistent view of the database at the time the data-reading portion of the backup finished (or as far back as the oldest log record that transactional replication has not yet processed – to ensure that replication works properly after a restore).

8.       Backups always test existing page checksums.
No. Page checksums are only tested when you use the WITH CHECKSUM option – which you should.
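For example (database name and path are placeholders):

BACKUP DATABASE AdventureWorks TO DISK = 'D:\AdventureWorks.bak' WITH CHECKSUM;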

9.       Backups read data through the buffer pool.
No. The backup subsystem opens its own channels to the database files to avoid the performance hit of having to read everything into SQL Server's memory and back out to the backup device (and also effectively flushing the buffer pool in the process). If you ask for the page-checksum checking, it uses its own small portion of memory.

10.   Backups perform consistency checks (a la DBCC CHECKDB)
No.

11.   If the backup works, the restore will too
No. You must regularly validate your backups to give yourself a high level of confidence that they will work if a disaster occurs. 
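At minimum verify the backup media, and better still restore it to a test server. A sketch (the paths and the logical file names in MOVE are assumptions):

RESTORE VERIFYONLY FROM DISK = 'D:\AdventureWorks.bak' WITH CHECKSUM;
-- a real test restore gives far higher confidence than VERIFYONLY alone
RESTORE DATABASE AdventureWorks_Test FROM DISK = 'D:\AdventureWorks.bak'
WITH MOVE 'AdventureWorks_Data' TO 'D:\Test\AdventureWorks_Test.mdf',
     MOVE 'AdventureWorks_Log' TO 'D:\Test\AdventureWorks_Test.ldf';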

12.   A mirrored backup will succeed if the mirror location becomes unavailable.
No. If any one of the paths to a mirrored backup fails, the entire mirrored backup operation fails.

13.   A tail-of-the-log backup is always possible.
No. A tail-of-the-log backup is one that backs up all the log generated since the last log backup, in an exceptional situation. If the data files are damaged, you can still do a tail-of-the-log backup EXCEPT if the un-backed-up log contains a minimally-logged operation. That would require reading data extents – which cannot be done if the data files are damaged. For this reason, the BULK_LOGGED recovery model should not be used on databases that have 24×7 user transactions.
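A sketch of a tail-of-the-log backup when the data files are damaged (name and path are placeholders):

BACKUP LOG AdventureWorks TO DISK = 'D:\AdventureWorks_tail.trn' WITH NO_TRUNCATE;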

14.   Backing up a database snapshot is possible.
No. It's not implemented.

15.   Log backups will be the size of the log.
No. The log has to accommodate the space necessary to roll back active transactions, so the amount of space reported by DBCC SQLPERF (LOGSPACE) on a busy system doesn't accurately reflect the amount of log records in the log. A log backup is just all the log generated since the last log backup – not, usually, the whole log file.

16.   Backing up a corrupt database is not possible.
No. In most cases you can use the WITH CONTINUE_AFTER_ERROR option to back up the corrupt database.  If that fails (maybe because of a damaged boot page or file-header page), there are no other options apart from OS-level file backups.     
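For example (name and path are placeholders):

BACKUP DATABASE AdventureWorks TO DISK = 'D:\AdventureWorks_damaged.bak' WITH CONTINUE_AFTER_ERROR;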
       
17.   Log backups always clear the log.
No. If there's no concurrent data backup running, a log backup will always *try* to clear the log, and only succeed in clearing the inactive portion of the log – the log that's only considered 'required' by SQL Server because it hasn't yet been backed up. If anything else is holding the log 'required', it cannot be cleared, even though it has been backed up. Subsequent log backups will check again and again until the time comes when that portion of the log can be cleared.
Also, if there is a concurrent data backup running, the log clearing will be delayed until the data backup finishes. See myth 4 above for more details.

18.  You can back up a mirror database.
No. A mirror database is not accessible except through a database snapshot, and you can't back up that either. This has changed with the AlwaysOn feature, however, where backups are permitted on a secondary database.
19.  Backing up a single table is possible.
No. You can effectively back up a single table if it happens to be wholly contained in a single filegroup, but there's no way to say BACKUP TABLE (apart from export/import methods).
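A sketch of the filegroup workaround (database, filegroup and path names are placeholders):

BACKUP DATABASE AdventureWorks FILEGROUP = 'ArchiveFG' TO DISK = 'D:\AdventureWorks_ArchiveFG.bak';
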
20.   My transaction is guaranteed to be contained in the backup if it committed before the backup operation completed.
No. The commit log record for the transaction has to have been written out before the data-reading portion of the backup finished.

21.    You should shrink the database before a backup to reduce the backup size.
No. Shrink just moves pages around, so it won't make any difference.

22.   You don't need to back up master, msdb, model.
No. You should always back up the system databases. Master contains all the security info and the list of databases that exist; msdb contains all the SSIS packages, Agent jobs and backup history; model contains the configuration for new databases. Don't fall into the trap of only backing up user databases, otherwise you'll be in a world of hurt if you have to do a bare-metal install.

23.   Transaction log and differential backups are possible for the master database in full recovery model.
No. We cannot take transaction log or differential backups of the master database. This is because master cannot be put into a restoring state, as this database is essential for SQL Server to be online. Thus we can only take full and copy-only backups of master, even though we can change its recovery model from simple to bulk-logged or full. Log and differential backups are possible for the model and msdb databases, however.

24.   We can take backup of tempdb database and we can change the recovery model of the database.
No. We cannot take a backup of the tempdb database, nor can we change its recovery model. The recovery model of tempdb will always be simple.

25.    The native SQL Server 2005 backups are in clear text.
True prior to SQL Server 2008. SQL Server 2008 introduced natively encrypted database backups (through Transparent Data Encryption); before that, a third-party product was necessary to encrypt database backups.

26.    We can stop SQL Server from writing successful backup messages to the SQL Server error log.

Trace flag 3226 allows you to turn off the logging of all successful backups in your SQL Server error logs. DBCC TRACEON (3226, -1) will turn it on globally, as shown below.
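For example (to persist the flag across restarts, add -T3226 as a SQL Server startup parameter instead):

DBCC TRACEON (3226, -1);     -- suppress successful-backup messages globally
DBCC TRACESTATUS (3226, -1); -- confirm the flag is active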

Most common Transaction log myths

1.       A TRUNCATE statement can't be rolled back

TRUNCATE can't be rolled back using log files after the transaction's session is closed.
While the database is in the full recovery model, any changes done by DELETE can be rolled back using the log files. TRUNCATE cannot be rolled back using the log files in the full recovery model once the deallocated data pages have been overwritten.
DELETE and TRUNCATE can both be rolled back when surrounded by a TRANSACTION, if the current session is not closed. If TRUNCATE is written in the Query Editor surrounded by a TRANSACTION and the session is closed, it cannot be rolled back, but a DELETE can.
In the case of DELETE, SQL Server removes all the rows from the table and records each of them in the log file in case a rollback is needed in the future. For that reason it is slow.
In the case of TRUNCATE, the exact deleted values are not logged in the online SQL Server transaction log; only the IDs of the pages that held the truncated records are logged. These pages are marked for overwriting in the database data file, and the truncated data is gone for good once new transactions are written to those pages. However, within T-SQL code, TRUNCATE can be rolled back for that particular session.
A point-in-time restore using log files cannot roll back a TRUNCATE if the deallocated data pages have been overwritten. That is the difference: TRUNCATE may or may not always be recoverable from log files.
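A quick demonstration that TRUNCATE rolls back inside an open transaction (the table name is a placeholder):

BEGIN TRANSACTION;
TRUNCATE TABLE dbo.TestTable;
SELECT COUNT(*) FROM dbo.TestTable;  -- returns 0 inside the transaction
ROLLBACK TRANSACTION;
SELECT COUNT(*) FROM dbo.TestTable;  -- the rows are back
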
2.       The truncation process reduces the size of a physical log file

During the truncation process, only the active portion of the online SQL Server transaction log file is scanned. Some parts of the scanned portion are marked as inactive and will be reused as free space to write down new transactions. There is no change in the online transaction log size, because the inactive parts remain intact; nothing is deleted or removed.

Every SQL Server transaction log is made of Virtual Log Files (VLFs). During the truncation process, only the logical log is scanned. The logical log is made of the active VLFs. A Log Sequence Number (LSN) is used to uniquely identify every transaction in the online transaction log. The MinLSN is the starting point of the oldest active transaction in the online transaction log.

The online SQL Server transaction log file is circular by internal organization. When logging reaches the end of the transaction log, it starts again from the beginning by overwriting the parts marked as inactive.

3.       Having multiple online SQL Server transaction log files will increase performance
This myth is based on the belief that having multiple online transaction log files will result in parallel writing of transactions into the files and therefore in a performance gain. SQL Server can't operate with more than one online transaction log file at a time, so no kind of parallel I/O is possible.
Having multiple transaction log files is needed only in situations where the initial SQL Server transaction log can't record more transactions due to a lack of free space on the disk.

4.       The SQL Server transaction log won’t grow if the database is in the Simple recovery model.

In the Simple recovery model, the online transaction log is cleared automatically. SQL Server automatically reclaims log space to keep space requirements small – but that doesn't mean the log won't grow. The online transaction log must provide enough information for a database rollback, therefore it must provide enough space for all necessary information. As all transactions must be written into the online transaction log, in the case of a large number of changes in one transaction there might not be enough space in the log, so it must be expanded.

However, this happens just in some specific situations – when there is a long-running transaction or a transaction that creates many changes.

5.       A SQL Server transaction log backup will be the same size as the online transaction log itself.

The online transaction log must have enough information to roll back active transactions, so some space is reserved for eventual rollbacks. If a rollback occurs, SQL Server doesn't want to expand the online transaction log, because if the expanding fails, the SQL Server database can become inconsistent or go into Suspect mode. That's why the online transaction log has some reserved space and is usually bigger than the SQL Server transaction log backup.
Moreover, a transaction log backup contains only the transactions made after the last transaction log backup. If the online transaction log contains transactions that have already been backed up, they will not be present in the new SQL Server transaction log backup, so the transaction log backup will be smaller by that amount of space.

6.       A full or differential database backup clears the online transaction log.

The inactive parts of the online SQL Server transaction log are marked for clearing only when a transaction log backup is created.
Full and differential database backups don't contain much SQL Server transaction log information, only the transactions necessary to recover the database into a consistent state. These transactions are not a backup of the online transaction log, therefore they are not marked for overwriting in the online transaction log.

7.       The TRUNCATE TABLE and DROP TABLE commands are not logged into the online transaction log.

The exact deleted values are not logged in the online SQL Server transaction log; only the IDs of the pages that held the truncated records are logged. These pages are marked for overwriting in the database data file, and the truncated data will be gone for good when new transactions are written to these pages.
This myth is also based on the fact that these commands take little time to execute; they are almost instantaneous.

8.       My SQL Server is too busy, I don’t want to make SQL Server transaction log backups.

One of the most performance-intensive operations in SQL Server is an auto-grow event of the online transaction log file. By not making transaction log backups often enough, the online transaction log becomes full and has to grow. The default growth size is 10%. The busier the database is, the quicker the online transaction log will grow if transaction log backups are not created.
Creating a SQL Server transaction log backup doesn't block the online transaction log, but an auto-growth event does. It can block all activity in the online transaction log.
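Besides regular log backups, a fixed growth increment avoids the default 10% percentage growth (the database and logical file names are placeholders):

ALTER DATABASE AdventureWorks MODIFY FILE (NAME = 'AdventureWorks_Log', FILEGROWTH = 512MB);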

9.       A SQL Server transaction log backup isn’t needed for a point in time restore. A full database backup is enough.

This myth comes from using the RESTORE command with the STOPAT clause to restore from a full database backup. The STOPAT clause specifies a point in time for the RESTORE LOG command, and it works well when it's used with a transaction log backup. The fact that it can be used with a full database backup makes you believe that transaction log backups are not needed to recover to a specific point in time.

RESTORE DATABASE AdventureWorks FROM DISK = 'D:\AdventureWorks.bak' WITH NORECOVERY
GO
RESTORE LOG AdventureWorks FROM DISK = 'D:\AdventureWorks_log.trn' WITH RECOVERY, STOPAT = 'Dec 31, 2013 10:59:00 PM'
GO

Although the SQL Server database cannot be restored to a point in time this way, SQL Server doesn't clearly identify the problem, and it allows you to use the STOPAT clause without a transaction log backup specified.

10.   SQL Server transaction log backups are not needed for successful disaster recovery if the full database backup is taken daily.

It depends on how much data you can lose. If you can afford to lose up to 24 hours of data, then you don't need transaction log backups and you should use the Simple recovery model.
If the data you can afford to lose is measured in minutes or hours, regular transaction log backups are necessary, as the maximum you will lose is the time between two transaction log backups.
11.   Shrinking the SQL Server transaction log will free space in the online transaction log, so I don't need to create transaction log backups.

The shrink operation is not a good maintenance practice because it doesn't solve the transaction log size issue permanently. After the initial shrinking, the transaction log will grow again. As the auto-growth event is one of the most intensive SQL Server operations, it should be avoided. The recommended method to keep the size of the online transaction log under control is to create transaction log backups regularly – or to switch to the Simple recovery model, if you can tolerate data loss.

12.   Log records are written to disk immediately.


FALSE. There is a 60KB log cache in RAM that log records are written to first, not the disk. That cache is flushed to disk, to the transaction log file, as soon as the whole 60KB is filled, when any session of that database issues a commit (a real commit, not a nested one that just decreases @@TRANCOUNT), and on some other events. All log records flushed together to disk will have the same block offset (an LSN consists of the VLF number, the block offset, and the log record number). Therefore, a log block cannot be larger than 60KB (the cache's size).

Thursday, May 12, 2016

Query Store new feature in SQL Server 2016


The Query Store is a new feature that keeps a history of query plans for performance tuning; it helps find performance degradation by analysing the differences between query plans over time.
 
How the Query Store helps:

Collects all query texts along with relevant properties.
Stores all plan choices and performance metrics.
Allows you to force plans from history (one can force the database to use a particular plan instead of letting the SQL engine choose it for you).
Identifies queries that have gotten slower recently.
Works across server restarts, upgrades and query recompiles.

All compiled or executed query plans are stored in the user database, together with their statistics and historical data.

One can choose how much data the Query Store will keep and over what time window the data is aggregated for analysis.
 
Use an 'ALTER DATABASE' statement (or the Query Store page of the database properties in the GUI) to enable the Query Store.

e.g.: ALTER DATABASE QueryDB SET QUERY_STORE = ON;  (not possible for the master and tempdb databases)
 

Execution plans for a query evolve or get costlier over time for multiple reasons, such as statistics changes, schema changes, index modifications etc. The procedure cache (where cached query plans are stored) only stores the latest execution plan. Plans also get evicted from the plan cache due to memory pressure. As a result, query performance regressions caused by execution plan changes can be non-trivial and time consuming to resolve.

Since the Query Store retains multiple execution plans per query, one can force the query processor to execute a query with a particular plan.

There are 4 different options available for query plan troubleshooting under the QUERY STORE node in the GUI:

Regressed Queries
Pinpoint queries for which execution metrics have recently regressed (i.e. changed for the worse). Use this view to correlate observed performance problems in your application with the actual queries that need to be fixed or improved.
Top Resource Consuming Queries
Choose an execution metric of interest and identify queries that had the most extreme values for a provided time interval. Use this view to focus your attention on the most relevant queries, which have the biggest impact on database resource consumption.
Tracked Queries
Track the execution of the most important queries in real-time. Typically, you use this view when you have queries with forced plans and you want to make sure that query performance is stable.
Overall Resource Consumption
Analyse the total resource consumption for the database for any of the execution metrics. Use this view to identify resource patterns (daily vs. nightly workloads) and optimize overall consumption for your database.

By using the compare option, one can compare multiple plans for a single query and find the problem.

To force a plan, select a query and plan, and then click Force Plan. You can only force plans that were saved by the Query Store and are still retained there.

Configuration option available for Query Store:
OPERATION_MODE: Can be READ_WRITE or READ_ONLY.

CLEANUP_POLICY:  Configure the STALE_QUERY_THRESHOLD_DAYS argument to specify the number of days to retain data in the query store.
DATA_FLUSH_INTERVAL_SECONDS: Determines the frequency at which data written to the query store is persisted to disk. To optimize for performance, data collected by the query store is asynchronously written to the disk. The frequency at which this asynchronous transfer occurs is configured via DATA_FLUSH_INTERVAL_SECONDS.

MAX_STORAGE_SIZE_MB: Configures the maximum size of the query store. If the data in the query store hits the MAX_STORAGE_SIZE_MB limit, the query store automatically changes the state from read-write to read-only and stops collecting new data.
ALTER DATABASE QueryDB SET QUERY_STORE (MAX_STORAGE_SIZE_MB = 500);

INTERVAL_LENGTH_MINUTES: Determines the time interval at which runtime execution statistics data is aggregated into the query store. To optimize for space usage, the runtime execution statistics in the Runtime Stats Store are aggregated over a fixed time window. This fixed time window is configured via INTERVAL_LENGTH_MINUTES. 
ALTER DATABASE QueryDB SET QUERY_STORE (INTERVAL_LENGTH_MINUTES = 15);

SIZE_BASED_CLEANUP_MODE: Controls whether the clean-up process will be automatically activated when the total amount of data gets close to the maximum size.
QUERY_CAPTURE_MODE: Designates if the Query Store captures all queries, or relevant queries based on execution count and resource consumption, or stops adding new queries and just tracks current queries.

MAX_PLANS_PER_QUERY: An integer representing the maximum number of plans maintained for each query.

To set all the options in a single statement:

ALTER DATABASE QueryDB
SET QUERY_STORE (
    OPERATION_MODE = READ_WRITE,
    CLEANUP_POLICY =
    (STALE_QUERY_THRESHOLD_DAYS = 30),
    DATA_FLUSH_INTERVAL_SECONDS = 3000,
    MAX_STORAGE_SIZE_MB = 500,
    INTERVAL_LENGTH_MINUTES = 15,
    SIZE_BASED_CLEANUP_MODE = AUTO,
    QUERY_CAPTURE_MODE = AUTO,
    MAX_PLANS_PER_QUERY = 1000
);
 

The following catalog views and stored procedures support the Query Store (a usage example follows the list):

·         sys.database_query_store_options (Transact-SQL)

·         sys.query_context_settings (Transact-SQL)
·         sys.query_store_plan (Transact-SQL)
·         sys.query_store_query (Transact-SQL)
·         sys.query_store_query_text (Transact-SQL)
·         sys.query_store_runtime_stats (Transact-SQL)
·         sys.query_store_runtime_stats_interval (Transact-SQL)

·         sp_query_store_flush_db (Transact-SQL)
·         sp_query_store_reset_exec_stats (Transact-SQL) (to clear runtime statistics for a given plan)
·         sp_query_store_force_plan (Transact-SQL) (To force a query with specific plan id)
      EXEC sp_query_store_force_plan @query_id = 12, @plan_id = 32;
 
·         sp_query_store_unforce_plan (Transact-SQL) (To unforce a query with specific plan id)
   EXEC sp_query_store_unforce_plan @query_id = 12, @plan_id = 32;
 
·         sp_query_store_remove_plan (Transact-SQL)  (use plan id)
·         sp_query_store_remove_query (Transact-SQL) (use query id)
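For example, a sketch that lists the slowest queries by average duration using these views (the column choice is illustrative):

SELECT TOP 10
    q.query_id,
    qt.query_sql_text,
    p.plan_id,
    rs.avg_duration,
    rs.count_executions
FROM sys.query_store_query q
JOIN sys.query_store_query_text qt ON qt.query_text_id = q.query_text_id
JOIN sys.query_store_plan p ON p.query_id = q.query_id
JOIN sys.query_store_runtime_stats rs ON rs.plan_id = p.plan_id
ORDER BY rs.avg_duration DESC;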

Query Store internal tables are created in the PRIMARY filegroup during database creation, and that configuration cannot be changed later. If you are running out of space, you might want to clear older Query Store data by using the following statement:

ALTER DATABASE QueryDB SET QUERY_STORE CLEAR;

Time-based clean-up of data can be set like this:

ALTER DATABASE QueryDB SET QUERY_STORE (CLEANUP_POLICY = (STALE_QUERY_THRESHOLD_DAYS = 14));