Wednesday, October 28, 2015
Contained Databases
Contained Databases are a new feature introduced in SQL Server 2012. They provide a means of effectively decoupling SQL Server databases (and their users, collations, and other assets) from the underlying SQL Server instance itself. As a result, contained databases are much more "portable" in the sense that they can be easily moved from one server to another, without worries about orphaned users and other issues that have typically made moving databases problematic. Thus, not only are contained databases a big win in corporate environments where DBAs need to shunt databases around for load-balancing purposes, they're also a huge win for ISVs (and developers) who want to more easily copy, move, or deploy databases with their software. Similarly, for web developers who rely on SQL Server as a back-end database, contained databases make developers less dependent on hosting platforms as their databases become more portable.
N.B.: The containment setting of a database can be NONE, PARTIAL, or FULL, but only NONE and PARTIAL are supported on SQL Server 2012.
To create a contained database we must first enable the SQL Server instance for containment, either through the GUI (Server Properties -> Advanced -> Enable Contained Databases) or through T-SQL:

EXEC sp_configure 'contained database authentication', 1;
RECONFIGURE;
Then we must set the containment type of the database to PARTIAL, either in the Containment Type option while creating the new database, or using T-SQL:

CREATE DATABASE Con_Test CONTAINMENT = PARTIAL;
-- or, for an existing database:
ALTER DATABASE Con_Test SET CONTAINMENT = PARTIAL;
Once the database is ready, we can create a user on the database with a password; this user won't be dependent on a server login. This can be done through the GUI or T-SQL:

CREATE USER Con_User WITH PASSWORD = '********', DEFAULT_SCHEMA = [dbo];
N.B.: There is one very important note: while connecting to the server using a contained database user, we must set the default database to the contained database (from the Options window of the GUI connection dialog). The same applies when an application connects using a connection string.
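A minimal sketch of such a connection string, assuming an instance .\ServerName and the database and user created above:

Server=.\ServerName;Database=Con_DB;User Id=Con_User;Password=********;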
Using SQLCMD we must use the -d switch, or else we will get a login error:

SQLCMD -S .\ServerName -U Con_User -P ********* -d Con_DB
When changing an uncontained database to a contained one, we might need to convert existing users to contained users.
First, find the uncontained entities and the existing database principals using:

SELECT * FROM sys.dm_db_uncontained_entities;
SELECT * FROM sys.database_principals;
To migrate an uncontained (login-mapped) user to a contained user:

EXEC sp_migrate_user_to_contained
    @username = N'TestUser',
    @rename = N'keep_name',
    @disablelogin = N'do_not_disable_login';  -- or N'disable_login'
Caution: Make sure not to disable the login while migrating unless you really want to, because the login might have user mappings in other databases, and disabling it will break them severely.
The migrated user can now log in to the database using the existing password. Backup/restore is the same as for any other database, and we don't need to worry about orphaned users anymore.
There are also a few disadvantages to using it:
1. The connection string must specify the database name.
2. Cross-database queries with the same username/password are not possible, because the users are distinct in each contained database, unless the user is also created at the server level.
3. Collation no longer relies on tempdb (temporary objects use the contained database's collation), so we might need to tweak code as well.
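To check which databases on an instance are contained, the containment description in sys.databases can be queried; a quick sketch:

SELECT name, containment_desc   -- NONE for regular databases, PARTIAL for contained ones
FROM sys.databases;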
Wednesday, October 21, 2015
Locks in SQL Server:
SQL Server allows locking different types of resources to maintain accurate reads and writes. Locking smaller resources, such as rows or keys, maintains high concurrency but causes overhead on the engine, since lots of locks have to be maintained. Locking bigger resources, such as a table or the database, causes low concurrency but little overhead, since fewer locks are needed.
SQL Server can place locks on different types of resources:
· RID: Row identifier. Used to lock a single row within a table.
· KEY: Row lock within an index. Used to protect key ranges in serializable transactions.
· PAGE: 8-kilobyte data page or index page.
· EXTENT: Contiguous group of eight data pages or index pages.
· TABLE: Entire table, including all data and indexes.
· DB: Database.
SQL Server places locks on resources using multiple lock modes:
1. Shared (S): Shared (S) locks allow
concurrent transactions to read (SELECT) a resource. No other transactions can
modify the data while shared (S) locks exist on the resource. Shared (S) locks
on a resource are released as soon as the data has been read, unless the
transaction isolation level is set to repeatable read or higher, or a locking
hint is used to retain the shared (S) locks for the duration of the transaction.
2. Update (U): Update
(U) locks prevent a common form of deadlock. A typical update pattern consists
of a transaction reading a record, acquiring a shared (S) lock on the resource
(page or row), and then modifying the row, which requires lock conversion to an
exclusive (X) lock. If two transactions acquire shared-mode locks on a resource
and then attempt to update data concurrently, one transaction attempts the lock
conversion to an exclusive (X) lock. The shared-mode-to-exclusive lock
conversion must wait because the exclusive lock for one transaction is not
compatible with the shared-mode lock of the other transaction; a lock wait
occurs. The second transaction attempts to acquire an exclusive (X) lock for
its update. Because both transactions are converting to exclusive (X) locks,
and they are each waiting for the other transaction to release its shared-mode
lock, a deadlock occurs. To
avoid this potential deadlock problem, update (U) locks are used. Only one
transaction can obtain an update (U) lock to a resource at a time. If a
transaction modifies a resource, the update (U) lock is converted to an
exclusive (X) lock. Otherwise, the lock is converted to a shared-mode lock.
3. Exclusive (X): Exclusive (X) locks are used to lock data being modified by one transaction, thus preventing modifications by other concurrent transactions. You can read data held by an exclusive lock only by specifying a NOLOCK hint or using the read uncommitted isolation level. Because DML statements first need to read the data they want to modify, you'll always find exclusive locks accompanied by shared locks on that same data.
4. Intent (I): An intent lock (I)
indicates that SQL Server wants to acquire a shared (S) lock or exclusive (X)
lock on some of the resources lower down in the hierarchy. For example, a
shared intent lock placed at the table level means that a transaction intends
on placing shared (S) locks on pages or rows within that table. Setting an
intent lock at the table level prevents another transaction from subsequently
acquiring an exclusive (X) lock on the table containing that page. Intent locks
improve performance because SQL Server examines intent locks only at the table
level to determine if a transaction can safely acquire a lock on that table.
This removes the requirement to examine every row or page lock on the table to
determine if a transaction can lock the entire table. Intent locks include
intent shared (IS), intent exclusive (IX), and shared with intent exclusive
(SIX).
a. Intent shared (IS): Indicates the
intention of a transaction to read some (but not all) resources lower in the
hierarchy by placing S locks on those individual resources.
b. Intent exclusive (IX): Indicates
the intention of a transaction to modify some (but not all) resources lower in
the hierarchy by placing X locks on those individual resources. IX is a
superset of IS.
c. Shared with intent exclusive (SIX):
Indicates the intention of the transaction to read all of the resources lower
in the hierarchy and modify some (but not all) resources lower in the hierarchy
by placing IX locks on those individual resources.
5. Schema Lock (Sch-): Schema
modification (Sch-M) locks are used when a table data definition language (DDL)
operation (such as adding a column or dropping a table) is being performed. Schema
stability (Sch-S) locks are used when compiling queries. Schema stability
(Sch-S) locks do not block any transactional locks, including exclusive (X)
locks. Therefore, other transactions can continue to run while a query is being
compiled, including transactions with exclusive (X) locks on a table. However,
DDL operations cannot be performed on the table.
6. Bulk Update (BU): Bulk update (BU) locks are used when bulk copying data into a
table and either the TABLOCK hint
is specified or the table lock on bulk load table
option is set using sp_tableoption. Bulk update (BU) locks allow processes to bulk copy data
concurrently into the same table while preventing other processes that are not
bulk copying data from accessing the table.
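A quick way to observe these lock modes (a sketch; dbo.Table1 is a hypothetical table):

BEGIN TRANSACTION;
UPDATE dbo.Table1 SET Id = 5 WHERE Name = 'Adi';
-- list the locks held by this session: expect IX on the object and page, X on the key or RID
SELECT resource_type, request_mode, request_status
FROM sys.dm_tran_locks
WHERE request_session_id = @@SPID;
ROLLBACK TRANSACTION;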
Key-Range Locks
Key-range locks protect a range of rows implicitly included in a record set being read by a Transact-SQL statement while using the serializable transaction isolation level. Key-range locking prevents phantom reads. By protecting the ranges of keys between rows, it also prevents phantom insertions or deletions into a record set accessed by a transaction. Two example key-range lock modes are:
- RangeX-X: exclusive lock on the interval between the keys and exclusive lock on the last key in the range
- RangeS-U: shared lock on the interval between the keys and update lock on the last key in the range
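A minimal sketch of provoking key-range locks, assuming a hypothetical table dbo.Orders with an index on OrderID:

SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
BEGIN TRANSACTION;
-- a range scan under SERIALIZABLE takes key-range locks (e.g. RangeS-S) on the index keys
SELECT * FROM dbo.Orders WHERE OrderID BETWEEN 10 AND 20;
SELECT resource_type, request_mode FROM sys.dm_tran_locks WHERE request_session_id = @@SPID;
ROLLBACK TRANSACTION;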
Locking hints can be specified with the SELECT, INSERT, UPDATE, and DELETE statements to direct SQL Server to the type of locks to be used. Table-level locking hints can be used when finer control of the types of locks acquired on an object is required. These locking hints override the current transaction isolation level for the session.
· HOLDLOCK: Hold a shared lock until completion of the transaction instead of releasing the lock as soon as the required table, row, or data page is no longer required. HOLDLOCK is equivalent to SERIALIZABLE.
· NOLOCK: Do not issue shared locks and do not honour exclusive locks. When this option is in effect, it is possible to read an uncommitted transaction or a set of pages that are rolled back in the middle of a read. Dirty reads are possible. Only applies to the SELECT statement.
· READCOMMITTED: Perform a scan with the same locking semantics as a transaction running at the READ COMMITTED isolation level. By default, SQL Server operates at this isolation level.
· ROWLOCK: Use row-level locks instead of the coarser-grained page- and table-level locks.
· TABLOCK: Use a table lock instead of the finer-grained row- or page-level locks. SQL Server holds this lock until the end of the statement. However, if you also specify HOLDLOCK, the lock is held until the end of the transaction.
· UPDLOCK: Use update locks instead of shared locks while reading a table, and hold locks until the end of the statement or transaction. UPDLOCK has the advantage of allowing you to read data (without blocking other readers) and update it later with the assurance that the data has not changed since you last read it.
· XLOCK: Use an exclusive lock that will be held until the end of the transaction on all data processed by the statement. This lock can be specified with either PAGLOCK or TABLOCK, in which case the exclusive lock applies to the appropriate level of granularity.
E.g.: WITH (NOLOCK)
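Where a hint genuinely helps is the read-then-update pattern; a minimal sketch, assuming a hypothetical dbo.Accounts table:

BEGIN TRANSACTION;
-- UPDLOCK on the read stops two sessions from both taking S locks and then deadlocking on the X conversion
SELECT Balance FROM dbo.Accounts WITH (UPDLOCK) WHERE AccountID = 1;
UPDATE dbo.Accounts SET Balance = Balance - 100 WHERE AccountID = 1;
COMMIT TRANSACTION;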
Lock Escalation
Lock escalation is an optimization technique used by SQL Server to control the number of locks held within its Lock Manager.
The lock hierarchy, in order, is Database -> Table -> Page -> Row.
The lock hierarchy starts at the database level and goes down to the row level. You always have a Shared Lock (S) at the database level itself: when your query is connected to a database (e.g. USE MyDB), that Shared Lock prevents the database from being dropped or from having backups restored over it. Underneath the database level, you have locks on the table, on the pages, and on the records when you are performing an operation.
In a SELECT statement, you have an Intent Shared Lock (IS) at the table and page level, and a Shared Lock (S) on the record itself. In a modification statement (INSERT, UPDATE, DELETE), you have an Intent Exclusive or Intent Update Lock (IX or IU) at the table and page level, and an Exclusive or Update Lock (X or U) on the changed records. SQL Server always acquires locks from top to bottom to prevent so-called race conditions, where multiple threads try to acquire locks concurrently within the locking hierarchy. Imagine how the lock hierarchy would look when you perform a DELETE operation on a table against 20,000 rows. Let's assume a row is 400 bytes long, which means 20 records fit onto one 8 KB page.
We now have 1 S lock on the database, 1 IX lock on the table, 1,000 IX locks on the pages, and 20,000 X locks on the rows. Every lock needs 96 bytes of memory, so we are looking at about 1.9 MB of locks for just 1 simple query. This will not scale when you run multiple queries in parallel. For that reason, SQL Server implements lock escalation.
As soon as you have more than about 5,000 locks at one level of the locking hierarchy, SQL Server escalates these many fine-granularity locks into a single coarse-granularity lock. By default, SQL Server always escalates to the table level, skipping the page level.
The delete operation above will then hold just 1 lock on the table and 1 on the database. But this may impact the concurrency of the data, as an X lock on the table denies any other read/write operation on that table.
Also, in the REPEATABLE READ isolation level, a SELECT statement keeps its locks until the transaction completes, so as soon as the SELECT reads more than 5,000 rows it will place a shared lock on the table, stopping any further modification.
We can manage how lock escalation works, or disable it, in several ways.
Using trace flags we can disable lock escalation completely on the instance:
· 1211 – Disables lock escalation completely; lock memory can grow to 60% of allocated memory, and if more locking is needed after that you will get an out-of-memory error.
· 1224 – Disables lock escalation until the memory threshold of 40% of allocated memory is reached; after that, lock escalation is enabled.
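A sketch of turning one of these on globally (the -1 applies the trace flag instance-wide):

DBCC TRACEON (1224, -1);   -- disable lock escalation until 40% of lock memory is used
DBCC TRACESTATUS (1224);   -- verify the flag is active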
But in most cases that is not a good choice and it caused a lot of performance problems. In SQL Server 2008 and above there is a table option (on ALTER TABLE) that can be used to change the default lock escalation behavior:

ALTER TABLE <table_name> SET (LOCK_ESCALATION = AUTO | TABLE | DISABLE);

AUTO means that if the table is partitioned, locks are escalated to the partition level, and if the table is not partitioned, locks are escalated to the table level. TABLE means locks are always escalated to the table level. DISABLE means lock escalation is deactivated for the table.
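For example, on a hypothetical partitioned table dbo.Orders:

ALTER TABLE dbo.Orders SET (LOCK_ESCALATION = AUTO);  -- escalate to the partition rather than the whole table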
Lock escalation needs to be handled carefully: any modification touching more than 5,000 rows of a table can run into lock escalation. You can disable lock escalation temporarily, but you have to be very careful here. Another option is to run your DML statements in a loop as different, separate transactions, each touching fewer than 5,000 rows, so that you prevent lock escalations (see the sketch below). But this also has its drawback, as it may cause frequent log growth.
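A minimal batching sketch, assuming a hypothetical dbo.Staging table with a Processed flag:

DECLARE @rows INT = 1;
WHILE @rows > 0
BEGIN
    -- each DELETE is its own transaction and stays under the escalation threshold
    DELETE TOP (4000) FROM dbo.Staging WHERE Processed = 1;
    SET @rows = @@ROWCOUNT;
END;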
Deadlock
A deadlock occurs when two or more tasks permanently block each
other by each task having a lock on a resource which the other tasks are trying
to lock. For example:
- Transaction A
acquires a share lock on row 1.
- Transaction B
acquires a share lock on row 2.
- Transaction A
now requests an exclusive lock on row 2, and is blocked until transaction
B finishes and releases the share lock it has on row 2.
- Transaction B
now requests an exclusive lock on row 1, and is blocked until transaction
A finishes and releases the share lock it has on row 1.
Both transactions in a deadlock will wait
forever unless the deadlock is broken by an external process. The
Microsoft SQL Server Database Engine deadlock monitor periodically checks
for tasks that are in a deadlock. If the monitor detects a cyclic dependency,
it chooses one of the tasks as a victim and terminates its transaction with an
error.
Deadlocking is often confused with normal
blocking. When a transaction requests a lock on a resource locked by another
transaction, the requesting transaction waits until the lock is released. By
default, SQL Server transactions do not time out, unless LOCK_TIMEOUT is set.
The requesting transaction is blocked, not deadlocked.
Deadlocks can also occur when a table is partitioned and the LOCK_ESCALATION setting of ALTER TABLE is set to AUTO. When LOCK_ESCALATION is set to AUTO, concurrency increases because the Database Engine can lock table partitions at the partition level instead of at the table level. However, when separate transactions hold partition locks in a table and each wants a lock somewhere on the other transaction's partition, a deadlock occurs. This type of deadlock can be avoided by setting LOCK_ESCALATION to TABLE, although that setting will reduce concurrency by forcing large updates to a partition to wait for a table lock.
Deadlock demo:
T1:
BEGIN TRANSACTION;
UPDATE Table1 SET Id = 5 WHERE Name = 'Adi';
WAITFOR DELAY '00:00:05';
UPDATE Table2 SET Name = 'Aditya' WHERE Surname = 'Jha';
COMMIT TRANSACTION;

T2:
BEGIN TRANSACTION;
UPDATE Table2 SET Name = 'Jon' WHERE Surname = 'Cena';
WAITFOR DELAY '00:00:05';
UPDATE Table1 SET Id = 10 WHERE Name = 'Jon';
COMMIT TRANSACTION;
If both transactions are executed at the same time, T1 locks and updates Table1 while T2 locks and updates Table2. After a delay of 5 seconds, T1 requests the lock on Table2, which is already held by T2, and T2 requests the lock on Table1, which is held by T1. Neither transaction can proceed; a deadlock occurs, and SQL Server returns error message 1205 for the aborted transaction.
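Since the victim's transaction is rolled back with error 1205, the usual pattern is to catch the error and retry; a minimal sketch using the tables above:

DECLARE @retries INT = 3;
WHILE @retries > 0
BEGIN
    BEGIN TRY
        BEGIN TRANSACTION;
        UPDATE Table1 SET Id = 5 WHERE Name = 'Adi';
        UPDATE Table2 SET Name = 'Aditya' WHERE Surname = 'Jha';
        COMMIT TRANSACTION;
        BREAK;  -- success
    END TRY
    BEGIN CATCH
        IF XACT_STATE() <> 0 ROLLBACK TRANSACTION;
        IF ERROR_NUMBER() = 1205
            SET @retries = @retries - 1;  -- deadlock victim: retry
        ELSE
            THROW;  -- anything else: re-raise
    END CATCH
END;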
Trace Flag 1204 and Trace Flag 1222:
When deadlocks occur, trace flag 1204 and trace flag 1222 return
information that is captured in the SQL Server error log. Trace flag 1204
reports deadlock information formatted by each node involved in the deadlock.
Trace flag 1222 formats deadlock information, first by processes
and then by resources. It is possible to enable both trace flags to obtain two
representations of the same deadlock event.
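A sketch of enabling both flags globally so that the deadlock details land in the error log:

DBCC TRACEON (1204, -1);
DBCC TRACEON (1222, -1);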
Thursday, October 15, 2015
SQL Server Patching
Best Practice for Patching SQL Server
1. Where can I find out what the latest patch level is for SQL Server?
In addition to knowing what patches are out there, you should also be aware of what the current supported patch level is. The support lifecycle for SQL Server is documented on the Microsoft support website: http://support.microsoft.com/lifecycle/?c2=1044. This site will show you the mainstream and extended support start and end dates for the different versions of SQL Server. Note that this site shows the minimum required patch level for Microsoft support, not the current latest patch level.
A basic overview of the RTM build numbers as of 15th Oct 2015:

SQL Server 2014     RTM: 12.0.2000.8
SQL Server 2012     RTM: 11.0.2100.60
SQL Server 2008 R2  RTM: 10.50.1600.1
SQL Server 2008     RTM: 10.0.1600.22
SQL Server 2005     RTM: 9.0.1399.06
SQL Server 2000     RTM: 8.0.194
SQL Server 7.0      RTM: 7.0.623
N.B.: All SQL Server service packs are cumulative, meaning that each new service pack contains all the fixes that are included with previous service packs, plus any new fixes.
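To check the patch level of a running instance, SERVERPROPERTY can be queried; a quick sketch:

SELECT SERVERPROPERTY('ProductVersion') AS Build,       -- e.g. 11.0.2100.60
       SERVERPROPERTY('ProductLevel')   AS PatchLevel,  -- RTM, SP1, SP2, ...
       SERVERPROPERTY('Edition')        AS Edition;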
2. What types of patches are out there for SQL Server?
- Service Packs
- Security Patches
- Cumulative Updates
SQL Server patches are cumulative, so we can apply the latest patch even if older patches were never applied.
Service Packs should be considered major patch levels; generally speaking, it is the product at a certain Service Pack level that forms the minimum supported build for Microsoft support. Once a Service Pack has been released, you have one year to apply it in order to remain at a supported level.
3. How to apply a patch on SQL Server?
On a stand-alone server: run the patch, select the SQL instance, and reboot the server. Or run it unattended from the command line, e.g.:

SQLServer2008-KB956717-x64.exe /QUIET /ALLINSTANCES
On a cluster (2008 onwards): run the patch on the passive node, fail over the active node, then apply the patch on the remaining (now passive) node.
4. What is slipstream patching during setup?
As of SQL Server 2008 SP2, setup has the ability to integrate Service Packs and CUs into the installation media of SQL Server. This has the great advantage of speeding up the patching process for new installations, as well as allowing us to address setup-related issues. Slipstreaming places the patch files into the installation media so that they are installed as the SQL instance is being installed; you run setup once and you have a fully patched, ready-to-go instance. It is a great time saver, and I use it all the time for Service Packs and CUs on my demo systems.
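On SQL Server 2012 and later the same effect is available through the Product Update setup parameters; a sketch, assuming the patch files have been downloaded to C:\SQLUpdates:

setup.exe /ACTION=Install /UpdateEnabled=TRUE /UpdateSource="C:\SQLUpdates"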