Tuesday, September 23, 2014

Execution plan explanation

Execution plan:
There are two types of execution plan:
1. Estimated execution plan:
a. Created without actually running the query.
b. Uses statistics on tables and indexes to estimate the details.
c. Good for tuning long-running queries.

2. Actual execution plan:
a. Created when the query actually runs.
b. Displayed only after the query completes execution, so expect a wait for queries over large data sets.
c. Gives the exact plan of the query, calculated against the actual data and indexes.
d. If statistics are out of date, the estimated and actual plans can differ.

Assuming you are not a sysadmin, dbcreator or db_owner, you will need to be granted the SHOWPLAN permission within the database being tested.
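A minimal sketch of granting that permission (the database and user names here are hypothetical):

```sql
USE SalesDB;  -- the database being tested (hypothetical name)
GRANT SHOWPLAN TO [Domain\TuningUser];  -- hypothetical login/user
```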

Execution plans map the logical joins (inner, left, right and full outer) written in a query onto physical join operators.
These physical joins are: hash join, merge join and nested loops join.

The optimizer picks among these physical joins based on the size of the tables, the indexes available on the joined tables, and the sort order of the join columns. The query optimizer is cost-based: it estimates a cost for each candidate plan and chooses the join strategy with the lowest estimated cost.

HASH JOIN: Uses each row from the top (build) input to build a hash table, then probes the hash table with each row from the bottom (probe) input, outputting all matching rows. The hash table is built in available memory and spills to tempdb if memory runs out.

MERGE JOIN: Matches rows from two suitably sorted inputs, exploiting the sort order. A merge join is most efficient (one-to-many) when a unique index guarantees at most one matching row per key; without that guarantee it requires a many-to-many merge using a worktable.

NESTED LOOPS JOIN: For each row in the top (outer) input, scans the bottom (inner) input and outputs the matching rows.
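To see each physical join in action, you can force it with a join hint. This is a diagnostic sketch with hypothetical table names, not a tuning recommendation, since hints override the cost-based choice described above:

```sql
-- Force a hash join between two hypothetical tables
SELECT o.OrderID, c.CustomerName
FROM dbo.Orders AS o
INNER HASH JOIN dbo.Customers AS c
    ON o.CustomerID = c.CustomerID;

-- Or hint the whole query; LOOP JOIN and HASH JOIN work the same way
SELECT o.OrderID, c.CustomerName
FROM dbo.Orders AS o
INNER JOIN dbo.Customers AS c
    ON o.CustomerID = c.CustomerID
OPTION (MERGE JOIN);
```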

For the following scenarios with a unique clustered index:
Table1 (big) and Table2 (big)
1. If either or both tables have no index, the optimizer will use a hash join.
2. If both big tables have a unique clustered index, the optimizer will use a merge join.
Table1 (medium) and Table2 (medium)
1. If neither table has an index, the optimizer will use a hash join.
2. If either or both tables have a unique index, the optimizer will use a merge join.
Table1 (small) and Table2 (small)
1. If neither table has an index, the optimizer will use a hash join.
2. If either or both tables have a unique index, the optimizer will use a nested loops join.
Table1 (big) and Table2 (small/medium)
1. If neither table has an index, the optimizer will use a hash join.
2. If either or both tables have a unique index, the optimizer will use a nested loops join.
Please note that if a non-unique clustered index is used instead of a unique clustered index, all of these joins tend to fall back to hash joins, except nested loops for small tables. If the index is created without the UNIQUE keyword, SQL Server cannot assume the data is unique, so it adds a hidden 4-byte integer uniquifier to the key to distinguish duplicates.
If no other strategy suits, the optimizer falls back to a hash match, using hashing to retrieve the output.
The execution plan also displays the type of access used for each table.
These are: table scan, index scan and index seek.

Index seeks are generally preferred for highly selective queries, that is, queries requesting only a small number of rows, typically under 10 percent (some documents say 15 percent) of the rows in the table.

A table scan iterates over all the rows in the table.

An index scan iterates over all index entries; when an entry meets the search condition, the table row is retrieved through the index.

Table scan / index scan / index seek possibilities:

· A table with no index will do a table scan.
· A table with only a clustered index will do an index scan when no search criteria are used.
· A table with only a clustered index will do an index seek when the clustered index column is used in the search criteria.
· A table with only a clustered index will do an index scan when a non-indexed column is used in the search criteria.
· A table with only a nonclustered index will do a table scan when a non-indexed column is used in the search criteria.
· A table with only a nonclustered index will do an index scan or index seek, depending on the volume of the table, when the indexed column is used in the search criteria.
· A table with both clustered and nonclustered indexes will do an index scan or seek depending on the search column in the WHERE clause and the volume of the table.
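A quick way to observe the first few cases above (hypothetical table, assuming a clustered index on EmployeeID only; view the plans using the techniques described later in this post):

```sql
-- Clustered index seek: the clustered key is in the search criteria
SELECT * FROM dbo.Employees WHERE EmployeeID = 42;

-- Clustered index scan: the filter column is not indexed
SELECT * FROM dbo.Employees WHERE LastName = N'Shaw';
```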

Execution plan cache:
It is expensive for the server to generate an execution plan for every query, so SQL Server keeps and reuses plans wherever possible. As they are created, plans are stored in a section of memory called the plan cache and can be inspected using the DMV sys.dm_exec_cached_plans.
–Each unique query gets an execution plan, produced by the query optimizer.
–Identical queries get the same execution plan, which is problematic when used with dynamic T-SQL.
–Plan creation takes some time, so plans are stored in the plan cache for reuse.
When a query is submitted to the server, an estimated execution plan is created by the optimizer. Once that plan is created, and before it gets passed to the storage engine, the optimizer compares this estimated plan to actual execution plans that already exist in the plan cache. If a matching plan is found, the optimizer reuses it, since it has already been used before by the query engine. This reuse avoids the overhead of creating execution plans for large and complex queries, or even for simple plans of small queries called thousands of times a minute.
Each plan is stored once, unless the cost of the plan lets the optimizer know that a parallel execution might result in better performance. If the optimizer sees parallelism as an option, then a second plan is created and stored with a different set of operations to support parallelism. In this instance, one query gets two plans.
Execution plans are not kept in memory forever. A plan with a cost of 10 that has been referenced 5 times has an "age" value of 50. The lazy writer process, an internal process that works to free all types of cache (including plan cache ), periodically scans the objects in the cache and decreases this value by one each time.
If the following criteria are met, the plan is removed from memory:
More memory is required by the system
The "age" of the plan has reached zero
The plan isn't currently being referenced by an existing connection
The plan is also invalidated or recompiled (a potentially expensive operation) in the following events:
· A change in the schema or structure of a table used in the query.
· Changing an index or updating statistics used by the query.
· Running the sp_recompile system stored procedure.
To clear the cache, use the command below.
To see the objects referenced and the query text used by the optimizer for each plan, we can use the query below.
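The standard command to clear the plan cache is DBCC FREEPROCCACHE; use it with caution on production servers, since every subsequent query must then recompile:

```sql
DBCC FREEPROCCACHE;  -- clears the entire plan cache for the instance
```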

SELECT [cp].[refcounts]
, [cp].[usecounts]
, [cp].[objtype]
, [st].[dbid]
, [st].[objectid]
, [st].[text]
, [qp].[query_plan]
FROM sys.dm_exec_cached_plans cp
CROSS APPLY sys.dm_exec_sql_text ( cp.plan_handle ) st
CROSS APPLY sys.dm_exec_query_plan ( cp.plan_handle ) qp ;

The columns in the query above are:
· refcounts: number of cache objects referencing this plan
· usecounts: number of times this plan has been used
· objtype: type of object (Proc, Ad hoc, View)
· plan_handle: identifier for this plan in memory (you can read the query_plan XML directly or open it as a graphical execution plan)
Different ways of getting an execution plan:
To get the graphical execution plan, use the SSMS toolbar buttons for the estimated and actual execution plans.
To get the text execution plan, issue the appropriate SET command at the start of the batch; there are separate commands to activate and deactivate the estimated and the actual text plans.
To get the XML format execution plan, there are likewise separate SET commands for the estimated and the actual XML plans.
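The SET options in question are the standard T-SQL ones; a sketch (each is switched back off with OFF):

```sql
SET SHOWPLAN_TEXT ON;       -- estimated plan, text; the query is compiled but not executed
GO
SET SHOWPLAN_TEXT OFF;
GO
SET STATISTICS PROFILE ON;  -- actual plan, text; the query is executed
GO
SET STATISTICS PROFILE OFF;
GO
SET SHOWPLAN_XML ON;        -- estimated plan, XML; the query is not executed
GO
SET SHOWPLAN_XML OFF;
GO
SET STATISTICS XML ON;      -- actual plan, XML; the query is executed
GO
SET STATISTICS XML OFF;
GO
```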
Automating Plan Capture Using SQL Server Profiler

A production system may be subject to tens or hundreds of sessions executing tens or hundreds of queries, each with varying parameter sets and varying plans. In this situation we need a way to automate plan capture so that we can collect a large number of plans simultaneously. In SQL Server 2005 you can use Profiler to capture XML execution plans as the queries execute. You can then examine the collected plans, looking for the queries with the highest costs, or simply searching the plans to find, for example, Table Scan operations that you'd like to eliminate.
Showplan All, Showplan Statistics Profile, Showplan XML and Showplan XML Statistics Profile are some of the Profiler events for capturing execution plans.

Capturing all of the execution plans, using Showplan XML or Showplan XML Statistics Profile, inherently places a sizeable load on the server. These are not lightweight event capture scenarios. Even the use of the less frequent Showplan XML for Query Compile will cause a small performance hit. Use due diligence when running traces of this type against any production machine.

Working with Index and keys


Index Scan: Since a scan touches every row in the table whether or not it qualifies, the cost is proportional to the total number of rows in the table. Thus, a scan is an efficient strategy if the table is small or if most of the rows qualify for the predicate.

Index Seek: Since a seek only touches rows that qualify and pages that contain these qualifying rows, the cost is proportional to the number of qualifying rows and pages rather than to the total number of rows in the table.

A table with no index is forced to do a table scan. Note that for some queries SQL Server will use a table scan even when a useful index is present; this is usually because the table is so small that scanning it is cheaper than traversing the index.


Usually an index scan is less expensive than a table scan because an index is narrower than the table.
Clustered indexes: With a clustered index the table data is stored in the leaf nodes of that index instead of in a separate heap structure. This means there will never be any extra row lookups after finding rows using that index, no matter which columns are needed (unless you have off-page data such as TEXT columns or VARCHAR(MAX) columns containing long data). You can only have one clustered index for this reason, so if you use one, choose where you put it carefully in order to get maximum gain.

CREATE [ UNIQUE ] [ CLUSTERED | NONCLUSTERED ] INDEX index_name
    ON <object> ( column [ ASC | DESC ] [ ,...n ] )
    [ INCLUDE ( column_name [ ,...n ] ) ]
    [ WHERE <filter_predicate> ]
    [ WITH ( <relational_index_option> [ ,...n ] ) ]
    [ ON { partition_scheme_name ( column_name )
         | filegroup_name
         | default } ]
    [ FILESTREAM_ON { filestream_filegroup_name | partition_scheme_name | "NULL" } ]
[ ; ]

<relational_index_option> ::=
{ PAD_INDEX = { ON | OFF }
| FILLFACTOR = fillfactor
| DROP_EXISTING = { ON | OFF }
| ONLINE = { ON | OFF }
| MAXDOP = max_degree_of_parallelism
} [ ,...n ]
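A minimal example of this syntax (hypothetical table and column names):

```sql
CREATE NONCLUSTERED INDEX IX_Orders_CustomerID
ON dbo.Orders (CustomerID ASC)
INCLUDE (OrderDate, TotalDue)         -- covering columns stored at the leaf level
WITH (FILLFACTOR = 90, ONLINE = OFF);
```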

By default, a nonclustered index is created if neither CLUSTERED nor NONCLUSTERED is specified.

UNIQUE :
Creates a unique index on a table or view. A unique index is one in which no two rows are permitted to have the same index key value. A clustered index on a view must be unique.

The Database Engine does not allow creating a unique index on columns that already include duplicate values, whether or not IGNORE_DUP_KEY is set to ON. If this is tried, the Database Engine displays an error message. Duplicate values must be removed before a unique index can be created on the column or columns. Columns that are used in a unique index should be set to NOT NULL, because multiple null values are considered duplicates when a unique index is created.

CLUSTERED :
Creates an index in which the logical order of the key values determines the physical order of the corresponding rows in a table. The bottom, or leaf, level of the clustered index contains the actual data rows of the table. A table or view is allowed one clustered index at a time.

A view with a unique clustered index is called an indexed view. Creating a unique clustered index on a view physically materializes the view. A unique clustered index must be created on a view before any other indexes can be defined on the same view.

Create the clustered index before creating any nonclustered indexes. Existing nonclustered indexes on tables are rebuilt when a clustered index is created.

NONCLUSTERED :
Creates an index that specifies the logical ordering of a table. With a nonclustered index, the physical order of the data rows is independent of their indexed order.
The bottom, or leaf, level of the nonclustered index contains pointers to the actual data rows of the table.

WHERE <filter_predicate> :
Creates a filtered index by specifying which rows to include in the index. The filtered index must be a nonclustered index on a table. It also creates filtered statistics for the data rows in the filtered index.

PAD_INDEX = { ON | OFF } :
ON : The percentage of free space that is specified by fillfactor is applied to the intermediate-level pages of the index.
OFF or fillfactor is not specified : The intermediate-level pages are filled to near capacity, leaving sufficient space for at least one row of the maximum size the index can have.

FILLFACTOR =fillfactor :
Specifies a percentage that indicates how full the Database Engine should make the leaf level of each index page during index creation or rebuild. fillfactor must be an integer value from 1 to 100. If fillfactor is 100, the Database Engine creates indexes with leaf pages filled to capacity.

DROP_EXISTING = { ON | OFF } :
Specifies that the named, preexisting clustered or nonclustered index is dropped and rebuilt. The default is OFF.

ONLINE = { ON | OFF } :

Specifies whether underlying tables and associated indexes are available for queries and data modification during the index operation. The default is OFF.


Super Key :
A super key is a set of one or more keys that can be used to identify a record uniquely in a table. Example: the primary key, unique keys and alternate keys are subsets of the super keys.

Candidate Key :
A Candidate Key is a set of one or more fields/columns that can identify a record uniquely in a table. There can be multiple Candidate Keys in one table. Each Candidate Key can work as Primary Key.

Primary Key :
A primary key is a set of one or more fields/columns of a table that uniquely identifies a record in a database table. It cannot accept null or duplicate values. Only one candidate key can be the primary key.

Alternate key :
An alternate key is a key that can work as a primary key. Basically, it is a candidate key that is not currently the primary key.

Composite/Compound Key :
A composite key is a combination of more than one field/column of a table. It can be a candidate key or a primary key.

Unique Key :
A unique key is a set of one or more fields/columns of a table that uniquely identifies a record in a database table. It is like a primary key, but it can accept a single null value and it cannot have duplicate values.

Foreign Key :
A foreign key is a field in a database table that is the primary key in another table. It can accept multiple null and duplicate values. For more help, refer to the article Difference between primary key and foreign key.

Natural key :
A natural key is a single column or set of columns that uniquely identifies a single record in a table, where the key columns are made up of real data. "Real data" means data that has meaning and occurs naturally in the world of data. A natural key is a column value that has a relationship with the rest of the column values in a given data record. Here are some examples of natural key values: Social Security Number, ISBN, and TaxId.

Surrogate key :
A surrogate key, like a natural key, is a column that uniquely identifies a single record in a table. But this is where the similarity stops. Surrogate keys are similar to surrogate mothers: they are keys that don't have a natural relationship with the rest of the columns in a table. The surrogate key is just a value that is generated and then stored with the rest of the columns in a record. The key value is typically generated at run time, right before the record is inserted into a table. It is sometimes also referred to as a dumb key, because there is no meaning associated with the value. Surrogate keys are commonly numeric.

Identity Key :
The IDENTITY columns are auto incrementing columns provided by SQL Server.
There can only be one IDENTITY column per table.

SQL Server will take care of incrementing this column automatically.  
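A minimal sketch of an IDENTITY column (hypothetical table):

```sql
CREATE TABLE dbo.Customers
(
    CustomerID   INT IDENTITY(1, 1) PRIMARY KEY,  -- seed 1, increment 1
    CustomerName NVARCHAR(100) NOT NULL
);

-- SQL Server assigns 1, 2, 3, ... automatically; do not supply the column
INSERT INTO dbo.Customers (CustomerName) VALUES (N'Contoso');
```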

Statistics good for performance

You can add statistics on columns that don’t have statistics in order to boost query performance and also in order to create more optimal execution plans.

Statistics can be created in different ways:
- Statistics are automatically created for each index key you create.
- If the database setting auto create statistics is on, then SQL Server will automatically create statistics for non-indexed columns that are used in queries.

To get statistics detail:
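One way to view statistics detail is DBCC SHOW_STATISTICS (the AdventureWorks index name below is an assumption):

```sql
-- Returns the header, density vector and histogram of the statistics object
DBCC SHOW_STATISTICS ('Production.Product', 'AK_Product_ProductNumber');
```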

- Auto Update Statistics basically means that if there is an incoming query but statistics are stale, SQL Server will update the statistics first, before it generates an execution plan.
- Auto Update Statistics Asynchronously, on the other hand, means that if there is an incoming query but statistics are stale, SQL Server uses the stale statistics to generate the execution plan, then updates the statistics afterwards.

How do we know statistics are being used?

One good check you can do when you generate execution plans for your queries: compare the "Actual Number of Rows" and "Estimated Number of Rows". If these numbers are consistently fairly close, then most likely your statistics are up to date and being used by the optimizer for the query. If not, it is time to re-check how often you create and update statistics.

---There may be cases when you may want to disable statistics update temporarily while you’re doing massive updates on a table, and you don’t want it to be slowed down by the autoupdate.

--- to manually update statistics, you can use either
-- sp_updatestats or  UPDATE STATISTICS  

One way to get better statistics is to manually update the statistics for the table using the UPDATE STATISTICS command with the FULLSCAN or SAMPLE option.
e.g.:

USE AdventureWorks;
GO
CREATE STATISTICS Products ON Production.Product ([Name], ProductNumber);
GO
UPDATE STATISTICS Production.Product (Products) WITH SAMPLE 50 PERCENT;

Monday, April 14, 2014

Memory management in SQL Server


/3GB switch
We can place the /3GB switch in the boot.ini file; it is useful only on 32-bit operating systems.

Windows 32-bit operating systems implement a virtual memory system based on a flat 32-bit address space. This address space translates into 4GB of virtual memory, no more and no less. By default, the address space is divided into two equal chunks: 2GB for the kernel and 2GB for user-mode processes.
Windows provides the /3GB boot-time option to allow the user-mode address space to grow to 3GB. The /3GB option was intended as a short-term solution to allow applications such as database servers to maintain more data in memory than a 2GB address space allowed. However, using the /3GB method to increase the user-mode memory space comes at a cost.
Using /3GB has an adverse impact on the OS, as the kernel now has only 1GB available for kernel-mode and other OS components.

PAE Switch (Physical Address Extension)

The PAE switch enables a 32-bit Windows server to support RAM up to 64 GB (Windows Server 2003 editions) or 128 GB (Windows Server 2003 Datacenter edition).
NB: by default, a 32-bit OS can address only 4 GB of RAM.

To boot the system and utilize PAE memory, the /PAE switch must be added to the corresponding entry in the Boot.ini file. If a problem should arise, Safe Mode may be used, which causes the system to boot using the normal kernel (support for only 4 GB of RAM) even if the /PAE switch is part of the Boot.ini file.
This reduces the need to swap memory to the page file and results in increased performance. A program is not itself aware of the larger physical memory, so it needs its own memory manager to use the increased space (e.g. AWE for SQL Server). Without PAE, AWE cannot reserve memory in excess of 4 GB.

AWE does not require PAE  but is often used together with PAE to allocate more than 4 GB of physical memory from a single 32-bit process
(AWE is a set of APIs that allows a process to allocate nonpaged physical memory and then dynamically map portions of this memory into the virtual address space of the process.)

A special case arises when the /3GB switch is used in conjunction with the /PAE switch. In this case, the operating system does not use any memory in excess of 16 GB. This behavior is caused by kernel virtual memory space considerations. Thus, if the system restarts with the /3GB entry in the Boot.ini file and the system has more than 16 GB of physical memory, the additional physical RAM is not used by the operating system. Restarting the computer without the /3GB switch enables the use of all the physical memory.

AWE (Address Windowing Extensions)
AWE is used by SQL Server when it has to support very large amounts of physical memory. Support for AWE is available only in the SQL Server Enterprise, Standard, and Developer editions, and only applies to 32-bit versions of SQL Server. Analysis Services cannot take advantage of AWE-mapped memory. If the available physical memory is less than the user-mode virtual address space, AWE cannot be enabled.
AWE lets applications acquire physical memory, and then dynamically map views of the nonpaged memory to the 32-bit address space. Although the 32-bit address space is limited to 4 GB, the nonpaged memory can be much larger. This enables memory-intensive applications, such as large database systems, to address more memory than can be supported in a 32-bit address space.

·         To support more than 4 GB of physical memory on 32-bit operating systems, you must add the /pae parameter to the Boot.ini file and reboot the computer

·         If there is more than 16 GB of physical memory available on a computer, the operating system requires 2 GB of virtual address space for system purposes and therefore can support only a 2 GB user mode virtual address space. For the operating system to use the memory range above 16 GB, be sure that the /3gb parameter is not in the Boot.ini file. If it is, the operating system cannot use any physical memory above 16 GB.

·         The "Locked Pages in Memory" privilege must be enabled before AWE can be enabled

 Lock Pages in Memory :
Lock Pages in Memory is a Windows security policy setting that determines which accounts can use a process to keep data in physical memory, preventing the system from paging the data to virtual memory on disk. This policy must be enabled to configure Address Windowing Extensions (AWE), and it is disabled by default.

The operating system works with memory and a paging file. The paging file is located on disk, and when there is not enough physical memory left, the operating system can use the paging file and move data from physical memory to it. When it needs that data again, it has to read it from disk. The problem is that disk is a lot slower than physical memory. If SQL Server's cache is stored on disk instead of in physical memory, you'll have a huge performance problem. When you grant SQL Server's service account the right to lock pages in memory, the operating system won't be able to move memory pages that are used by SQL Server to the paging file, and you are guaranteed that SQL Server will not use the paging file.

Disadvantage: The operating system will starve for memory when there is system-wide memory pressure. The OS has to rely entirely on SQL Server to respond to the low-memory notification and scale down its memory usage, and SQL Server may not respond fast enough to a low-memory condition at the system level because the OS is already starving for memory.

SQL Server does not normally release memory once acquired, but the OS can reclaim memory from SQL Server in an emergency when the Lock Pages in Memory option is not used.
It is advisable to set max server memory when using Lock Pages in Memory, leaving sufficient memory for the OS and other programs.

To enable the lock pages in memory option

  • On the Start menu, click Run. In the Open box, type gpedit.msc.
  • On the Local Group Policy Editor console, expand Computer Configuration, and then expand Windows Settings.
  • Expand Security Settings, and then expand Local Policies.
  • Select the User Rights Assignment folder.
  • In the pane, double-click Lock pages in memory.
  • In the Local Security Setting – Lock pages in memory dialog box, click Add User or Group.
  • In the Select Users, Service Accounts, or Groups dialog box, add an account with privileges to run sqlservr.exe.
  • Log out and then log back in for this change to take effect.

Max and Min server memory:

Min server memory controls the minimum amount of physical memory that SQL Server will try to keep committed. When the SQL Server service starts, it does not acquire all the memory configured in min server memory but instead starts with only the minimum required, growing as necessary. Once memory usage has increased beyond the min server memory setting, SQL Server won't release any memory below that amount.

Use max server memory to control memory usage. Establish maximum settings for each instance, being careful that the total allowance is not more than the total physical memory on your machine. You might want to give each instance memory proportional to its expected workload or database size. This approach has the advantage that when new processes or instances start up, free memory will be available to them immediately. The drawback is that if you are not running all of the instances, none of the running instances will be able to utilize the remaining free memory.
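These settings are configured with sp_configure; a sketch with example values (choose numbers appropriate for your host):

```sql
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'min server memory (MB)', 1024;  -- example value
EXEC sp_configure 'max server memory (MB)', 8192;  -- example value
RECONFIGURE;
```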

Friday, April 11, 2014

Log shipping breaks what to do now

Log shipping breaks what to do now :

1.       Either we can reconfigure log shipping using a fresh full backup and log backups.

2.       Or, if we are lucky, we can take one differential backup and keep log shipping active, but there is a catch:
We must make sure no full backup is performed on the primary database after log shipping breaks and before the differential backup is taken.

Reason :

A differential backup can be restored on a database:
a.       After the last full backup has been restored on the secondary server with NORECOVERY, or
b.      After any previous (older) full backup has been restored together with all subsequent log backups, including at least one log backup taken after the last full backup and before the final differential backup, provided no newer full backup has been restored.

Consider the following scenario for restoring a differential backup (DF3) taken today (T) at 7 PM.

Primary (DB1)                          Secondary (DB2)
Backup taken (base LSN)                Restore state (base LSN)
------------------------               ------------------------
Full backup (..031) FB1
Log backup  (..031) LB1
Log backup  (..031) LB2
Diff backup (..031) DF1
Full backup (..045) FB2                Not restored (..031)
Log backup  (..045) LB3
Log backup  (..045) LB4
Full backup (..056) FB3                Not restored (..045)

Assume no other backups were taken apart from those above and the differential backup taken today.
We can restore the diff backup (DF3) if:

1.       FB3 backup not taken (FB1 + LB1+LB2+LB3+LB4) + DF3

2.       FB3 backup not taken (FB1 + LB1+LB2+LB3) + DF3

3.       FB3 backup not taken (FB2+ LB3+LB4) + DF3

4.       FB3 backup not taken (FB2+ LB3) + DF3

5.       FB3 backup not taken (FB2) + DF3

6.       If FB3 was taken but not restored, DF3 won't work with any combination and will be invalid until we restore FB3.

What this means is that, as long as log backups are restored to the secondary, the differential base LSN remains in step, allowing you to bridge LSN gaps between the primary and secondary databases using a differential backup.
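Scenario 5 above could be sketched as follows (hypothetical database and file names); the differential bridges the LSN gap, and log shipping can then continue:

```sql
-- Restore the most recent full backup (FB2 in the table above)
RESTORE DATABASE LSDemo FROM DISK = N'C:\Backup\FB2.bak' WITH NORECOVERY;

-- Bridge the gap with the differential taken after log shipping broke
RESTORE DATABASE LSDemo FROM DISK = N'C:\Backup\DF3.bak' WITH NORECOVERY;

-- Subsequent log backups can now be applied by the log shipping restore job
```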

SQL Browser service explanation

SQL Server browser service:

The SQL Server Browser program runs as a Windows service. SQL Server Browser listens for incoming requests for Microsoft SQL Server resources and provides information about SQL Server instances installed on the computer. SQL Server Browser contributes to the following actions:
·         Browsing a list of available servers
·         Connecting to the correct server instance
·         Connecting to dedicated administrator connection (DAC) endpoints

TCP port 1433 and pipe \sql\query are assigned to the default instance, but those can be changed later by the server administrator using SQL Server Configuration Manager. Because only one instance of SQL Server can use a port or pipe, different port numbers and pipe names are assigned for named instances, including SQL Server Express.
If a specific port is assigned, a client can connect by specifying the port number along with the instance name, but with a dynamic port the client will not know the instance's port number. This is where the SQL Browser comes into the picture: it resolves the names and port numbers of the instances installed on the server.

If the SQL Server Browser service is not running, you are still able to connect to SQL Server if you provide the correct port number or named pipe. For instance, you can connect to the default instance of SQL Server with TCP/IP if it is running on port 1433.
However, if the SQL Server Browser service is not running, the following connections do not work:

·         Any component that tries to connect to a named instance without fully specifying all the parameters (such as the TCP/IP port or named pipe).
·       Any component that generates or passes server\instance information that could later be used by other components to reconnect.
·         Connecting to a named instance without providing the port number or pipe.

·         DAC to a named instance or the default instance if not using TCP/IP port 1433.

If you are using SQL Server in a client-server scenario (for example, when your application is accessing SQL Server across a network), if you stop or disable the SQL Server Browser service, you must assign a specific port number to each instance and write your client application code to always use that port number. This approach has the following problems:

·         You must update and maintain client application code to ensure it is connecting to the proper port.
·         The port you choose for each instance may be used by another service or application on the server, causing the instance of SQL Server to be unavailable.


Network protocol in SQL Server

Network Protocol :
To connect to the SQL Server Database Engine you must have a network protocol enabled. There are four types of protocol available with SQL Server:
1.       Shared memory
2.       TCP/IP
3.       Named pipes
4.       VIA
Any change to protocol properties requires a SQL Server restart.
Shared memory :

Shared memory is the simplest protocol to use and has no configurable settings. Because clients using the shared memory protocol can only connect to a SQL Server instance running on the same computer, it is not useful for most database activity.
·         You cannot create an alias using the shared memory protocol.
·         If shared memory is enabled, connecting to the Database Engine by computer name creates a shared memory connection.
·         The connection name (other than an IP address) looks the same for TCP/IP (remote and local clients) and shared memory (local clients only). If both protocols are enabled, a local client always uses shared memory.
·         Specifying an IP Address in the Server box will result in a TCP/IP connection.

The following names will connect to the local computer with the shared memory protocol if it is enabled:

·         \
·         (local)
·         localhost

The following query will return the protocol used for the current connection.
SELECT net_transport FROM sys.dm_exec_connections WHERE session_id = @@SPID;
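As a rough sketch, the name-to-protocol rules above can be modeled as a small helper. This is an illustration only, not SQL Server's actual name-resolution logic; the local-name list follows the one given above:

```python
# Sketch of the name-to-protocol rules above: local names use shared
# memory (when enabled), while an explicit IP address forces TCP/IP.
import ipaddress

LOCAL_NAMES = {"(local)", "localhost", "."}  # names treated as the local computer

def likely_protocol(server, shared_memory_enabled=True):
    host = server.split("\\")[0]  # drop any \instance suffix
    try:
        ipaddress.ip_address(host)
        return "tcp"              # an IP address results in a TCP/IP connection
    except ValueError:
        pass
    if shared_memory_enabled and host.lower() in LOCAL_NAMES:
        return "shared memory"
    return "tcp"

print(likely_protocol("(local)"))                                 # shared memory
print(likely_protocol("127.0.0.1"))                               # tcp
print(likely_protocol("localhost", shared_memory_enabled=False))  # tcp
```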
TCP/IP (Transmission Control Protocol/Internet Protocol):

TCP/IP is a common protocol widely used over the Internet. For TCP/IP Sockets, data transmissions are more streamlined and have less overhead. Data transmissions can also take advantage of TCP/IP Sockets performance enhancement mechanisms such as windowing, delayed acknowledgements, and so on. This can be very helpful in a slow network. Depending on the type of applications, such performance differences can be significant.

At the time of connection, the SQL Server Native Client component reads the server, protocol, and port values from the registry for the specified alias name, and creates a connection string in the format tcp:<servername>[\<instancename>] or tcp:<servername>[\<instancename>],<port number>.

To create a valid connection string using TCP/IP, you must:
  • Specify an Alias Name.
  • For the Server, enter either a server name to which you can connect using the PING utility, or an IP address to which you can connect using the PING utility. For a named instance append the instance name.
  • Specify TCP/IP for the Protocol.
  • Optionally, enter a port number for the Port No. The default is 1433, which is the port number of the default instance of the Database Engine on a server. To connect to a named instance or a default instance that is not listening on port 1433, you must provide the port number, or start the SQL Server Browser service.

TCP/IP parameters:
·         Enabled  : Possible values are Yes and No.
·         Keep Alive : Specify the interval (milliseconds) in which keep-alive packets are transmitted to verify that the computer at the remote end of a connection is still available.
·         Listen All : Specify whether SQL Server will listen on all the IP addresses that are bound to network cards on the computer. If set to No, configure each IP address separately using the properties dialog box for each IP address. If set to Yes, the settings of the IPAll properties box will apply to all IP addresses. Default value is Yes.
·         No Delay : SQL Server does not implement changes to this property.
·         TCP Dynamic Ports : Blank, if dynamic ports are not enabled. To use dynamic ports, set to 0. For IPAll, displays the port number of the dynamic port used.

The default instance of SQL Server listens for incoming connections on port 1433. The port can be changed for security reasons or because of a client application requirement. By default, named instances (including SQL Server Express) are configured to listen on dynamic ports. To configure a static port, leave the TCP Dynamic Ports box blank and provide an available port number in the TCP Port box.

The SQL Server Database Engine can listen on multiple ports on the same IP address; list the ports separated by commas, in the format 1433,1500,1501. This field is limited to 2047 characters.

To configure a single IP address to listen on multiple ports, the Listen All parameter must also be set to No on the Protocols tab of the TCP/IP Properties dialog box.

Since the dynamic port number can change each time SQL Server starts, SQL Server provides the SQL Server Browser Service, to monitor the ports, and direct incoming connections to the current port for that instance. Using dynamic ports complicates connecting SQL Server through a firewall because the port number may change when SQL Server is restarted, requiring changes to the firewall settings. To avoid connection problems through a firewall, configure SQL Server to use a static port.

Named Pipes:

Named Pipes is a protocol developed for local area networks. A part of memory is used by one process to pass information to another process, so that the output of one is the input of the other. The second process can be local (on the same computer as the first) or remote (on a networked computer).

For named pipes, network communications are typically more interactive. A peer does not send data until another peer asks for it using a read command. A network read typically involves a series of peek named pipes messages before it starts to read the data. These can be very costly in a slow network and cause excessive network traffic, which in turn affects other network clients.

Unless changed by the user, when the default instance of Microsoft SQL Server listens on the named pipes protocol, it uses \\.\pipe\sql\query as the pipe name. The period indicates that the computer is the local computer, pipe indicates that the connection is a named pipe, and sql\query is the name of the pipe. To connect to the default pipe, the alias must have \\<computer_name>\pipe\sql\query as the pipe name. If SQL Server has been configured to listen on a different pipe, the pipe name must use that pipe. For instance, if SQL Server is using \\.\pipe\unit\app as the pipe, the alias must use \\<computer_name>\pipe\unit\app as the pipe name.

To create a valid pipe name, you must:

  • Specify an Alias Name.
  • Select Named Pipes as the Protocol.
  • Enter the Pipe Name. Alternatively, you can leave Pipe Name blank and SQL Server Configuration Manager will complete the appropriate pipe name after you specify the Protocol and Server
  • Specify a Server. For a named instance you can provide a server name and instance name.

At the time of connection, the SQL Server Native Client component reads the server, protocol, and pipe name values from the registry for the specified alias name, and creates a pipe name in the format np:\\<computer_name>\pipe\<pipename>.

For a named instance, the default pipe name is \\<computer_name>\pipe\MSSQL$<instancename>\sql\query.
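The default pipe names above can be sketched as a small builder, where the computer name and instance name are placeholders and "." denotes the local computer:

```python
# Sketch: default pipe names as described above. <computer_name> and the
# instance name are placeholders; "." denotes the local computer.

def default_pipe_name(computer=".", instance=None):
    if instance is None:
        return rf"\\{computer}\pipe\sql\query"               # default instance
    return rf"\\{computer}\pipe\MSSQL${instance}\sql\query"  # named instance

print(default_pipe_name())                     # \\.\pipe\sql\query
print(default_pipe_name("DBHOST", "SQL2014"))  # \\DBHOST\pipe\MSSQL$SQL2014\sql\query
```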

Generally, TCP/IP is preferred in a slow LAN, WAN, or dial-up network, whereas named pipes can be a better choice when network speed is not the issue, as it offers more functionality, ease of use, and configuration options.


VIA (Virtual Interface Adapter):

The VIA protocol works with VIA hardware. For information about how to use VIA, contact your hardware vendor.

The VIA protocol is deprecated. This feature will be removed in a future version of Microsoft SQL Server. Avoid using this feature in new development work, and plan to modify applications that currently use this feature.