SQL Server transaction log cannot shrink

This morning, an email alert reported that one of our database servers was running out of disk space. After checking, I found the cause: one database's transaction log had grown to three times its regular size. It needed to shrink, but no matter how I tried, the transaction log (.ldf) file kept its size.
You can refer to my previous post for an analysis of the shrink-log issue, but here is a quick solution.

Solution:
1. Check the status of the database
Run the script below:
select name, log_reuse_wait_desc from sys.databases where name='mydatabase'

The value returned is "LOG_BACKUP",
which means this database is in the full recovery model and is waiting for a transaction log backup.

2. Truncate the transaction log
Run the script below:
BACKUP LOG mydatabase WITH truncate_only
I got the error: "'truncate_only' is not a recognized BACKUP option."
Uh-oh, it seems this command was removed in SQL Server 2008 and later versions.

Then run the following:
BACKUP LOG mydatabase TO DISK='NUL:'

It works. I then shrank the database .ldf file, and the shrink succeeded!

By the way, there is a trick: the process above may need to be repeated a few times to get the best result. On the first round it may shrink only a little, but after repeating the steps in a second round, the transaction log can shrink down to 1 MB.
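For reference, here is the whole sequence as one script — a minimal sketch, assuming a database named mydatabase with a logical log file name of mydatabase_log (both hypothetical; check sys.database_files for the real name). Note that backing the log up to the NUL device throws the log records away and breaks the log backup chain, so take a full backup afterwards:

-- 1. Check why the log cannot be truncated (expecting LOG_BACKUP here)
select name, log_reuse_wait_desc from sys.databases where name = 'mydatabase';

-- 2. Back the log up to the NUL device (the backup itself is discarded)
BACKUP LOG mydatabase TO DISK = 'NUL:';

-- 3. Shrink the log file; repeat steps 2-3 a few times if it only shrinks a little
USE mydatabase;
DBCC SHRINKFILE (mydatabase_log);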

Snapshot replication trick

Regarding how to select a proper replication type in SQL Server, many articles give many opinions, and tons of "best practices" may help as well. The official guidance can be found on Microsoft MSDN: Selecting the appropriate type of replication

Recently we found an issue on our website: web pages would sometimes break. It occurred rarely, but it did happen now and then, which was annoying. On further investigation, I found the issue was due to replication: whenever replication started, web pages that referenced the tables involved in the replication would break. Our replication is snapshot replication, which I had thought should be changed to transactional replication anyway, except that some applications need to trigger the replication by clicking a button.

However, while troubleshooting the snapshot configuration, I found that under Articles -> "Destination object" -> "Action if name is in use", the default setting is "drop existing object and create a new one". That is the root cause of the problem. No surprise the web pages broke: whenever replication was in progress, those tables were dropped, so queries could not find the tables and threw some weird errors. Also, since this is not a heavy-duty replication, clients rarely hit the issue unless they loaded a page exactly during the replication window (5-10 seconds).

The solution is easy: just change the default setting to "Truncate all data in the existing object".
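The same change can also be scripted with sp_changearticle — a hedged sketch, where MyPub and MyTable are hypothetical publication and article names; the pre_creation_cmd property corresponds to the "Action if name is in use" setting in SSMS:

-- Change the article's pre-creation command from 'drop' to 'truncate'
EXEC sp_changearticle
    @publication = 'MyPub',      -- hypothetical publication name
    @article = 'MyTable',        -- hypothetical article name
    @property = 'pre_creation_cmd',
    @value = 'truncate',
    @force_invalidate_snapshot = 1;  -- the change invalidates the existing snapshot

This runs at the Publisher, in the publication database.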

SQL Server Transaction Logs Shrink

As a DBA, I watch our database transaction log files; sometimes a log grows like crazy and occupies the entire hard drive. How do we resolve this issue?

1. Check if there is an effective transaction log backup.

We all know the two most commonly used recovery models: full and simple. Under the full model, the transaction log won't truncate until a transaction log backup happens. But there is a trick: if you just switch into the full recovery model but never take an initial full backup, SQL Server will not honor your request to be in the full recovery model. Your transaction log will continue to operate as it did in simple until you switch to the full recovery model AND take your first full backup.
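You can check both pieces of state from sys.databases — a minimal sketch, with mydatabase as a hypothetical database name:

-- recovery_model_desc: FULL / BULK_LOGGED / SIMPLE
-- log_reuse_wait_desc: what is preventing log truncation (e.g. LOG_BACKUP, ACTIVE_TRANSACTION)
SELECT name, recovery_model_desc, log_reuse_wait_desc
FROM sys.databases
WHERE name = 'mydatabase';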

2. Other issues that may cause the transaction log to keep growing

  • Uncommitted Transactions
  • Extremely Large Transactions
  • Operations: DBCC DBREINDEX and CREATE INDEX
  • While Restoring from Transaction Log Backups
  • Client Applications Do Not Process All Results
  • Queries Time Out Before a Transaction Log Completes the Expansion and You Receive False ‘Log Full’ Error Messages
  • Un-replicated Transactions

3. If it still doesn’t work

There are two solutions you can consider.

a. set a limit on the transaction log size

b. force the log file to shrink, either manually or via a script/scheduled job

Back up your database! 

Execute the following, substituting <log_file_name_Log> with the appropriate logical name of the database log file, no quotes needed:

  1. DBCC SHRINKFILE(<log_file_name_Log>)
  2. BACKUP LOG <database> WITH TRUNCATE_ONLY (on SQL Server 2008 and later, where TRUNCATE_ONLY was removed, use BACKUP LOG <database> TO DISK='NUL:' instead, as in the first section above)
  3. DBCC SHRINKFILE(<log_file_name_Log>)

Afterwards, perform a full backup of the database.

The file should shrink to a ridiculously small shadow of its former self, most likely around 1 MB.
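If you are unsure of the logical log file name, it can be read from sys.database_files — a small sketch, run in the context of the database in question:

-- The logical name (the 'name' column) is what DBCC SHRINKFILE expects
SELECT name, physical_name, size * 8 / 1024 AS size_mb   -- size is in 8 KB pages
FROM sys.database_files
WHERE type_desc = 'LOG';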

SQL Server 2008 or Higher Version Database Backup

Recently our group considered reorganizing the backup solution for all production and development databases. This gave us an opportunity to reconsider our SQL Server backup activity.

First of all, recovery models.
We have to fully understand the three recovery models of SQL Server (full, bulk-logged, and simple). Except for system databases and development environments, the full model should normally be selected. For details, see: Recovery Model Overview
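Switching the recovery model is a single ALTER DATABASE statement — a minimal sketch, with mydatabase as a hypothetical name; under the full model, remember to take a full backup afterwards so the model actually takes effect:

ALTER DATABASE mydatabase SET RECOVERY FULL;
-- or, for system databases and development environments:
-- ALTER DATABASE mydatabase SET RECOVERY SIMPLE;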

Second, database files and filegroups
Under those recovery models, SQL Server can perform database backups (full and differential), file backups (full and differential), filegroup backups, and transaction log backups. So how SQL Server organizes database files and filegroups is a basic thing to know; you can refer to Files and Filegroups Architecture.

Third, understand the transaction log backup
For an overview of the transaction log, see Transaction Log Overview; for its physical structure, Transaction Log Physical Architecture; and for its logical structure, Transaction Log Logical Architecture. For how to manage and work with the transaction log, see Working with Transaction Log. The transaction log backup is a hard requirement of the full recovery model, but sometimes, even when the transaction log is backed up often, the log file still doesn't shrink; the factors that can delay log truncation are covered here: Factors That Can Delay Log Truncation

Fourth, backup and restore strategy.
For an overview of the strategy for backing up and restoring the database, see Backing up and Restoring; for the backup concept and considerations, see Backup Overview; and for the restore concept and overview, see Restoring Overview.

Fifth, the following are some special scenarios and features for backing up or restoring a SQL Server 2008 database:
0. Restore or rebuild system databases

1. Using marked transactions

2. Perform an online restore

3. Recovering a database without recovering the data

4. Compressed backup (available in SQL Server 2008 Enterprise edition, SQL Server 2008 R2 Standard edition, and all later versions); see Backup Compression. A small sketch follows below.
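A compressed backup is just a WITH COMPRESSION clause — a minimal sketch, where the database name and path are hypothetical:

BACKUP DATABASE mydatabase
TO DISK = 'D:\backup\mydatabase_full.bak'
WITH COMPRESSION, CHECKSUM, STATS = 10;  -- STATS prints progress every 10 percent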

Last, some best practices you may need to consider.
The following items are just best practices for the regular case; since requirements and resources differ, some may not fit special cases and should certainly be reconsidered.

1. Set databases to the full recovery model, except for system databases and development environments.

2. Perform a full database backup each night during off-peak hours, a differential backup at noon, and transaction log backups every hour or more frequently, as sketched below.
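A minimal sketch of the three commands such a schedule runs (names and paths are hypothetical; in practice each would be a SQL Server Agent job):

-- nightly full backup (INIT overwrites last night's file)
BACKUP DATABASE mydatabase TO DISK = 'D:\backup\mydatabase_full.bak' WITH CHECKSUM, INIT;

-- noon differential backup (everything changed since the last full)
BACKUP DATABASE mydatabase TO DISK = 'D:\backup\mydatabase_diff.bak' WITH DIFFERENTIAL, CHECKSUM, INIT;

-- hourly transaction log backup (appends; use unique file names per hour in practice)
BACKUP LOG mydatabase TO DISK = 'D:\backup\mydatabase_log.trn' WITH CHECKSUM;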

3. Separate system database backups from user database backups; the separation isolates the two backup groups so that errors in one do not impact the other.

4. Perform index rebuild maintenance weekly when fragmentation is greater than 40%, and stop transaction log backups before the index rebuild process and resume them after (there is no point backing up the log generated by index rebuilds; also, fragmentation between 10% and 40% calls for an index reorganize, not a rebuild). A fragmentation check is sketched below.
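Fragmentation can be measured with sys.dm_db_index_physical_stats — a minimal sketch against the current database:

-- avg_fragmentation_in_percent drives the decision above:
-- 10-40% -> reorganize, over 40% -> rebuild
SELECT OBJECT_NAME(ips.object_id) AS table_name,
       i.name AS index_name,
       ips.avg_fragmentation_in_percent
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
JOIN sys.indexes AS i
    ON i.object_id = ips.object_id AND i.index_id = ips.index_id
WHERE ips.avg_fragmentation_in_percent > 10;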

5. If compressed backup is selected, try to use Resource Governor to allocate appropriate CPU resources to the backup process.

6. Either use the CHECKSUM option with the backup, or verify the backup file with the RESTORE VERIFYONLY option after the backup completes, as sketched below. Build a standalone test environment to test the backups occasionally.
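A minimal verification sketch (the backup path is hypothetical; WITH CHECKSUM only works if the backup itself was taken with CHECKSUM):

-- checks that the backup set is complete and readable, without restoring it
RESTORE VERIFYONLY FROM DISK = 'D:\backup\mydatabase_full.bak' WITH CHECKSUM;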

7. When an accident occurs, before restoring the available backups (full, differential, transaction log), take a tail-log backup; you can also leverage the point-in-time restore or marked transaction features, as sketched below. Please refer to A Quick Script for Point in Time Recovery – Back Up and Restore
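A hedged point-in-time recovery sketch; the database name, file paths, and STOPAT timestamp are all hypothetical:

-- 1. Tail-log backup: capture the log written since the last log backup
--    (add NO_TRUNCATE if the database files are damaged)
BACKUP LOG mydatabase TO DISK = 'D:\backup\mydatabase_tail.trn' WITH NORECOVERY;

-- 2. Restore the last full backup, leaving the database restoring
RESTORE DATABASE mydatabase FROM DISK = 'D:\backup\mydatabase_full.bak' WITH NORECOVERY;

-- 3. Roll the log forward, stopping just before the accident
RESTORE LOG mydatabase FROM DISK = 'D:\backup\mydatabase_log.trn' WITH NORECOVERY;
RESTORE LOG mydatabase FROM DISK = 'D:\backup\mydatabase_tail.trn'
    WITH STOPAT = '2011-06-01 10:30:00', RECOVERY;  -- hypothetical point in time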

8. Back up the database to a file share, then back the share up to tape.
This mainly addresses the restore time frame. Tape drives these days are fast enough that the vendors like to say DBAs should go straight to tape, and they're technically right: tape backup and restore speed is not the bottleneck. However, there is a limited number of drives available. When a client needs a restore right away, the tape drives aren't necessarily sitting idle; there can be a lag of hours if the tape device is busy. Disk backups, on the other hand, are always available. That's why we normally keep 2-3 days of database backups on disk, rotating out and deleting the oldest backup each day.

Clarifying the Concepts of SQL Server 2008 Schema and Fixed Roles

You must have noticed in SSMS that every database has some default schemas (dbo, db_datawriter, etc.) and default roles (db_owner, db_datawriter, etc.). They seem to match each other, and if you assign a user to db_datawriter, the user seems to automatically get write permission on all tables, no matter which schema a table resides in. Is that the intended design? Do those default schemas have to be there? Should we put tables into the default schemas, such as the db_datareader schema? Actually the answer is no; if you want, you can even change the db_datareader role to have write permission. Confused? Let's talk through the basic concepts.

Schema:

Beginning with SQL Server 2005, Microsoft introduced the concept of database schemas. A schema is now an independent entity: a container of objects distinct from the user who created those objects. Previously, the terms 'user' and 'database object owner' meant one and the same thing, but now the two are separate.

Perhaps an example may better illustrate the concept: In SQL Server 2000, a schema was owned by, and was inextricably linked to, only one database principal (a principal is any entity or object that has access to SQL Server resources, for example a user, role or a group). This meant that if, say, a user created a table in the database, that user could not be deleted without deleting the table or first transferring it to another user. But in SQL Server 2005/2008 one can now simply create the table first and attach it to a schema, even without having created the user. This can be accomplished via the Transact-SQL statement below:

CREATE TABLE MySchema.MyTable (col1 int, col2 int)

Note that here ‘MySchema’ refers to the schema that owns ‘MyTable’, as contrasted to SQL Server 2000 in which for the same statement, ‘MySchema’ would have referred to the user who owns the table. This separation means objects and schemas can be created before users are added to the database. It also means a user can be dropped without specifically dropping the objects owned by that user. A schema can only be owned by one user at a time, but a single user can simultaneously own many schemas.

Default Schema

Because objects are no longer tied to the user creating them, users can now be defined with a default schema. The default schema is the first schema that is searched when resolving unqualified object names. The default schema for a user can be defined by using the DEFAULT_SCHEMA option of the CREATE USER or ALTER USER commands. If no default schema is defined for a user account, SQL Server will assume dbo is the default schema.
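A minimal sketch of setting the default schema (the login, user, and schema names are hypothetical):

-- create a user with an explicit default schema
CREATE USER SomeUser FOR LOGIN SomeLogin WITH DEFAULT_SCHEMA = Sales;

-- or change it for an existing user
ALTER USER SomeUser WITH DEFAULT_SCHEMA = Sales;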

Apart from the obvious benefit that objects can now be manipulated independently of users, usage of schemas also offers the following advantages:

Managing logical entities in one physical database: we can think of schemas as containers to organize objects. If you take a look at the AdventureWorks sample database, you will see that the tables are organized by department or function, such as “HumanResources” or “Production”. This looks similar to the old owner concept, but has many advantages. First of all, since the objects are not tied to any user accounts, you do not have to worry about changing the owner of objects when an account is to be removed. Another advantage is that the schemas can be used to simplify managing permissions on tables and other objects. The schema has an owner, but the owner is not tied to the name. So, if an account owns a schema and the account must be removed from the database, the owner of the schema can be changed without breaking any code. If you do not wish to organize your database objects into schemas, the dbo schema is available.

Object protection: Through schemas, a DBA can control access to crucial objects that would otherwise be open to potentially destructive changes by the users.

Protecting ISV database access: in custom ISV (Independent Software Vendor) applications, the database schemas are usually complex, and in the case of applications such as Siebel and SAP, they are tuned for specific application access paths by using many customized indexes. Ad hoc access or alteration to the underlying base tables of these applications can severely impact performance of queries and the application itself. Using schemas, a developer can logically group objects and even create custom objects without running the risk of messing up the underlying ISV database.

It is important to note that if the user is authenticated by SQL Server via the Windows operating system, no default schema will be associated with the user. Therefore, if the user creates an object, a new schema will be created with the same name as the user, and the object will be associated with that user schema, though not directly with the user.

SQL Server fixed database roles

I assume everybody understands the basic concepts of database roles and users, so I'll skip that part and just discuss the confusing fixed database roles that exist in every database. The first thing to mention is that those database roles are independent of schemas: when a user belongs to a role, the user has the role's permissions on all schemas. Second, when you grant permissions, you will see "Grant" and "With Grant": the former means the user gets the permission but cannot grant it to others, while the latter means that after the user gets the permission, he can grant the same permission to others.
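In T-SQL the difference is the WITH GRANT OPTION clause — a minimal sketch with hypothetical schema and user names:

-- the user gets SELECT on the whole schema, but cannot pass it on
GRANT SELECT ON SCHEMA::Sales TO SomeUser;

-- the user gets SELECT and may grant it to other principals
GRANT SELECT ON SCHEMA::Sales TO SomeOtherUser WITH GRANT OPTION;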

The fixed database roles are:

db_owner
db_securityadmin
db_accessadmin
db_backupoperator
db_ddladmin
db_datareader
db_datawriter
db_denydatareader
db_denydatawriter

db_owner: 

A member of the db_owner role can do anything inside the database. Now there is a difference between a member of the db_owner role and the dbo user. That difference is that if someone maps into the database as the dbo user, that person bypasses all security checks. An example of this is anyone who is a member of the sysadmin fixed server role. They map in as dbo. And as a result, they don't receive security checks. If a user is not dbo but is a member of the db_owner role, he or she does receive a security check. Of course, unless you've explicitly used DENY to block access, that user can do what he or she wants. The DENY stops them cold (it does not stop dbo). However, a member of the db_owner role could remove the DENY, so effectively that person can do anything, even if you put roadblocks in place.

  • The db_owner role allows a user to do anything within the database.
  • DBAs who are already members of the sysadmin fixed server role come in as dbo and don’t need this role explicitly granted to them.
  • Normal users should not be a member of this role.
  • Applications might require their user account to be a member of this role.

db_securityadmin

Like the securityadmin fixed server role, the db_securityadmin fixed database role manages security.

  • The db_securityadmin role can manage role membership and permissions on securables.
  • Since DBAs usually manage security and are usually coming in as dbo, this role is little used.
  • Normal users should not be a member of this role.
  • Applications should tend not to need this role.
  • Since it’s little used, you should audit its membership for exceptions.

db_accessadmin

The db_accessadmin role also manages security, but handles access to the database. It grants, denies, or revokes permission for logins to enter the database. Combine it with db_securityadmin, and you can completely manage security into and throughout the database.

  • The db_accessadmin role can allow access into or block access to the database for logins.
  • Since DBAs usually manage security and have an appropriate server-level role, this role is little used.
  • Normal users should not be a member of this role.
  • Applications should tend not to need this role.
  • This is another role you should audit for membership exceptions.

db_backupoperator

The db_backupoperator role allows a member to take backups of the database. However, it only allows native backups, i.e. standard backups through SQL Server itself. If you're using a third-party product, chances are it uses methods that allow for high-speed backups, and unfortunately those methods require the login executing them to be a member of the sysadmin fixed server role. As a result, this role tends to be of limited usefulness. Add to that the fact that you're backing up to a local drive, and it's rare to see a non-DBA having this level of access, even on a development system.

  • The db_backupoperator role allows a user to take backups of the database.
  • Most 3rd party backup utilities utilize methods that require sysadmin rights, which this doesn’t give.
  • Another role that is little used because this functionality is usually handled by DBAs or a service account.
  • Normal users should not be a member of this role.
  • Applications should tend not to need this role, though I have seen exceptions.

db_ddladmin

The db_ddladmin is another powerful role because it allows a user to create, drop, or modify any objects within a database, regardless of who owns them. So a user could alter a stored procedure owned by dbo, for instance. This role is sometimes given to developers on non-production systems as they build custom applications. However, there is typically no reason anyone should be a member of this role on a production database. One thing the db_ddladmin does not do is allow the user to alter permissions on the objects. So a member of this role can create or modify an object, such as a stored procedure, but not alter the permissions on it unless he or she is the owner. So, for instance, a member of this role could create a stored procedure in a schema owned by dbo, but couldn't grant the ability to execute it.

  • The db_ddladmin role can create, drop, and alter objects within the database, regardless of who the owner is.
  • The db_ddladmin role cannot alter security.
  • It is not unusual to grant this role to developers in a non-production environment.
  • Normal users should not be a member of this role.
  • Applications should not need this role.
  • No one should normally be a member of this role on a production database.

db_datareader

The db_datareader role allows a user to be able to issue a SELECT statement against all tables and views in the database. DENY for a user (or a role the user is a member of) will still block the SELECT, however. But if there are no permissions set, whatsoever, the user will have the ability to SELECT against the table or view. The catch with this role is that the permission is implicit. That means if you query sys.database_permissions, you will not see any permission granted, either to the db_datareader role or directly to the user. Therefore, if you need to audit for everyone who has SELECT access to particular tables in a database, you’ll have to query the membership of this group via the use of sp_helprolemember:

EXEC sp_helprolemember 'db_datareader';

It is not unusual to see the db_datareader role used in databases. It’s an easy way to grant SELECT permissions to everything without having to worry about it. However, due to the fact that it uses implicit permissions, I prefer to create a user-defined database role and explicitly grant permissions. With that said, here are things to remember:

  • The db_datareader role gives implicit access to SELECT against all tables and views in a database.
  • In SQL Server 2005 and up, an explicit DENY will block access to objects.
  • It is not unusual to see this role used in production for developers.
  • It is not unusual to see this role used in production for normal users.
  • Applications will occasionally need this role.
  • Creating a user-defined database role and explicitly defining permissions is still preferred over the use of this role.

db_datawriter

The db_datawriter role is like the db_datareader role in that it gives implicit access to tables and views within a database. It also can be blocked by an explicit DENY for the user or for a role the user is a member of. Unlike db_datareader, however, db_datawriter gives INSERT, UPDATE, and DELETE permissions. Again, since the permission is implicit, you will not see these rights show up in sys.database_permissions. And as with db_datareader, you'll have to check the membership of this role to determine actual permissions in the event of an audit.

  • The db_datawriter role gives implicit access to INSERT, UPDATE, and DELETE against all tables and views in a database.
  • In SQL Server 2005 and up, an explicit DENY will block access to objects.
  • Typically developers are not members of this role in production unless all users are.
  • While less common than with db_datareader, it is not all that unusual to see this role used in production for normal users.
  • Applications will occasionally need this role.
  • Creating a user-defined database role and explicitly defining permissions is still preferred over the use of this role.

db_denydatareader

Unlike the previous two roles, db_denydatareader denies access. In this case, the db_denydatareader is the same as having a DENY for SELECT on all tables and views in the database. Because DENY trumps everything else, this is not a role I’ve seen used frequently. If there are no permissions for a given user on an object, such as the user has no SELECT permissions on a table, then SQL Server blocks access. Therefore, if a user doesn’t have SELECT permission on TableA, then the user cannot successfully issue a SELECT query against TableA. An explicit DENY is not needed. And since this affects all tables and views, that adds to the reason this database role is typically not used. And like db_datareader and db_datawriter, the DENY is implicit, meaning you’ll have to query for membership in this role to determine who is affected.

  • The db_denydatareader role is denied access to SELECT against any table or view in the database.
  • Typically this role is not used.
  • The DENY is implicit.
  • Creating a user-defined database role and explicitly defining permissions is still preferred over the use of this role.

db_denydatawriter

Wrapping up our list of roles is db_denydatawriter. The db_denydatawriter has an implicit DENY on INSERT, UPDATE, and DELETE for all tables and views in the database. Again, this is not a role that sees much use, for the same reasons as db_denydatareader.

  • The db_denydatawriter role is denied access to INSERT, UPDATE, or DELETE against all tables and views in the database.
  • Typically this role is not used.
  • The DENY is implicit.
  • Creating a user-defined database role and explicitly defining permissions is still preferred over the use of this role.

Discussion of the relationship

Ownership of schemas and schema-owned objects is transferable. This is accomplished using the ALTER AUTHORIZATION command. A single schema can contain objects owned by multiple database users or roles. Multiple database users can share a single default schema. Permissions on schemas and schema-contained objects can be managed with greater precision than in earlier releases; this is accomplished using GRANT permissions at both the schema and the object level. A schema can be owned by any database principal, including roles and application roles. A database user can be dropped without dropping the objects in the corresponding schema.
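A minimal sketch of the ALTER AUTHORIZATION command mentioned above (the schema, object, and user names are hypothetical):

-- transfer ownership of a schema to another database principal
ALTER AUTHORIZATION ON SCHEMA::MySchema TO SomeUser;

-- transfer ownership of a single schema-owned object
ALTER AUTHORIZATION ON OBJECT::MySchema.MyTable TO SomeOtherUser;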

Those default schemas such as db_datareader, db_datawriter, etc. exist for backward compatibility reasons. In SQL Server 2000, a schema was the same as a user, and a schema had to be created by the corresponding user. So for a database migrated from 2000, they must match each other; otherwise there would be an error. But the schemas and fixed roles do not have to stay that way. The fixed database roles such as "db_datareader" are owned by dbo, while the matching schemas are owned by the corresponding roles. Both of the above are reconfigurable.

"db_datareader" and "db_datawriter" can perform SELECT/INSERT/DELETE/UPDATE operations on any table or view in any schema, due to the "implicit permissions" of fixed roles. But in SSMS, a user with only the db_datareader role can see only the tables/views in the db_datareader or dbo schemas, or in other schemas owned by db_datareader, dbo, or the user itself.


SQL Server partitioning (IV) — An ideal example of partitioning

USE [master]
GO
if exists (select * from sys.databases where name = 'Test_1')
drop database Test_1
GO
--create a new db with two file groups "Test_A" and "Test_B"

CREATE DATABASE [Test_1] ON  PRIMARY 
( NAME = N'test_1', FILENAME = N'D:\sqldata\test_1.mdf' , SIZE = 10240KB , MAXSIZE = UNLIMITED, FILEGROWTH = 1024KB ),
 FILEGROUP [test_A] 
( NAME = N'Test_A', FILENAME = N'D:\sqldata\test_A.ndf' , SIZE = 1024KB , MAXSIZE = UNLIMITED, FILEGROWTH = 1024KB ),
FILEGROUP [test_B] 
( NAME = N'Test_B', FILENAME = N'D:\sqldata\test_B.ndf' , SIZE = 1024KB , MAXSIZE = UNLIMITED, FILEGROWTH = 1024KB )
 LOG ON 
( NAME = N'Test_log', FILENAME = N'D:\sqldata\Test_log.ldf' , SIZE = 7616KB , MAXSIZE = 2048GB , FILEGROWTH = 10%)
 COLLATE Chinese_PRC_CI_AS
GO

USE [Test_1]
GO

--if the partition function already exists, drop it first
IF  EXISTS (SELECT * FROM sys.partition_functions WHERE name = N'test_partition')
DROP PARTITION FUNCTION [test_partition]
GO

--create a partition function for the scheme;
--a partition function defines the ranges that determine which partition
--a row lands in, based on the value of the partitioning column.
--this simple function uses 1000 as the range boundary, splitting rows into two areas

create partition function test_partition(int)
AS
RANGE LEFT FOR VALUES (1000) 
go

--create the partition scheme; a scheme binds a partition function
--to the file groups the partitions are distributed across
IF  EXISTS (SELECT * FROM sys.partition_schemes WHERE name = N'test_scheme')
DROP PARTITION SCHEME test_scheme
GO

Create partition scheme test_scheme 
AS 
PARTITION [test_partition] TO (test_A,test_B)
GO

--create table with partition
if object_id('student','U') is not null
drop table student;
go

create table student
(
    id int identity(1,1) not null,
    name varchar(10) not null,
    class int not null,
    grade int
) on test_scheme(class) --bind the table to the scheme; rows are partitioned by the class column
go

--insert some test data

-- this row should reside on filegroup A
insert into student values ('AQU',10,100);

--this boundary row (边界 means "boundary") should also reside on filegroup A, because the range option is "LEFT"
insert into student values ('AQU_边界',1000,89); 

--this row should reside on filegroup B
insert into student values ('BQU',1001,90); 

go

--Check the result: $partition.<partition function>(<partition column>)
--returns the partition number of each row

select *, $partition.test_partition(class) as PartitionNumber from student
GO
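
--You can also verify the distribution via the catalog views (a minimal sketch);
--partition 1 (test_A) should hold two rows and partition 2 (test_B) one row
select partition_number, rows
from sys.partitions
where object_id = object_id('student')
order by partition_number;
GO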

SQL Server partitioning (III) — T-SQL to setup table partitioning (more complicated example)

To optimize performance of your warehouse, you can create multiple file groups within the database, perhaps one file group for each month. You could do so with statements similar to the following:

ALTER DATABASE Transaction_Warehouse ADD FILEGROUP FG2005January
ALTER DATABASE Transaction_Warehouse ADD FILEGROUP FG2005February

Next add files to each file group with statements similar to this:

ALTER DATABASE Transaction_Warehouse
ADD FILE (
    NAME = N'2005January',
    FILENAME = N'E:\MSSQL\Data\2005January.ndf',
    SIZE = 500MB,
    MAXSIZE = 50000MB,
    FILEGROWTH = 100MB)
TO FILEGROUP [FG2005January]

Define a partition function to assign each partition a range of date and time values.

-- each boundary is the last moment of a month; with RANGE LEFT, the boundary value
-- itself belongs to the partition on its left
CREATE PARTITION FUNCTION Transaction_Warehouse_PFN(datetime)
AS RANGE LEFT FOR VALUES (
    '20050131 23:59:59.997',
    '20050228 23:59:59.997',
    '20050331 23:59:59.997',
    '20050430 23:59:59.997',
    '20050531 23:59:59.997',
    '20050630 23:59:59.997',
    '20050731 23:59:59.997',
    '20050831 23:59:59.997',
    '20050930 23:59:59.997',
    '20051031 23:59:59.997',
    '20051130 23:59:59.997',
    '20051231 23:59:59.997')

The partition function will be used by the partition scheme to assign the range of dates to each file group:

CREATE PARTITION SCHEME Transaction_Warehouse_Partition_Scheme
AS PARTITION Transaction_Warehouse_PFN
TO ([FG2005January], [FG2005February], [FG2005March], [FG2005April],
    [FG2005May], [FG2005June], [FG2005July], [FG2005August],
    [FG2005September], [FG2005October], [FG2005November], [FG2005December],
    [PRIMARY])  -- the 13th filegroup catches rows beyond the last boundary

Next create the table against the partition scheme as opposed to placing it on a single file group, as follows:

CREATE TABLE [dbo].[fact_warehouse_transaction](
    [transaction_ident] [int] NOT NULL IDENTITY(1, 1),
    [natural_key] [decimal](18, 0) NULL,
    [dimension1_ident] [int] NULL,
    [dimension2_ident] [int] NULL,
    --… (many other columns)
    [transaction_date_and_time] [datetime] NOT NULL
) ON [Transaction_Warehouse_Partition_Scheme] (transaction_date_and_time)

Using this scheme, SQL Server will stripe fact_warehouse_transaction table records across twelve files (each associated with a single filegroup) based on the value of the transaction_date_and_time column. This configuration is completely transparent to the SELECT / INSERT / UPDATE / DELETE statements affecting the table; statements don't have to specify which partition is to be affected.

You can partition indexes much the same way as you partition tables. For example, you could create the following partitioned index on transaction_date_and_time column:

CREATE INDEX ix_transaction_warehouse_partitioned
ON [dbo].[fact_warehouse_transaction] (transaction_date_and_time)
ON Transaction_Warehouse_Partition_Scheme (transaction_date_and_time)

The advantage of table partitioning is that instead of scanning a gigantic table (or index), each query will affect a considerably smaller subset of the table. Furthermore, SQL Server can actually retrieve data from multiple partitions in parallel if the query needs to return data from more than one partition.
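For example, a query like the following — a minimal sketch — touches only the December 2005 partition instead of scanning the entire table:

SELECT COUNT(*)
FROM [dbo].[fact_warehouse_transaction]
WHERE transaction_date_and_time >= '20051201'
  AND transaction_date_and_time <= '20051231 23:59:59.997'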

You can exploit partition swapping functionality in order to significantly reduce the downtime associated with loading data into the fact tables. If two tables are built using identical partition schemes, you can swap partitions so that the data in table A will be transferred into table B. In reality there is no data movement, only the metadata is being updated. Because of this, swapping partitions typically takes less than a second.

Let's suppose you need to load data for December 2005 into fact_warehouse_transaction. So far the table contains data for all other months of 2005. You can create another table using the same partition scheme, with identical column names, using a statement similar to the following:

CREATE TABLE [dbo].[fact_warehouse_transaction_clone](
    [transaction_ident] [int] NOT NULL IDENTITY(1, 1),
    [natural_key] [decimal](18, 0) NULL,
    [dimension1_ident] [int] NULL,
    [dimension2_ident] [int] NULL,
    --… (many other columns)
    [transaction_date_and_time] [datetime] NOT NULL
) ON [Transaction_Warehouse_Partition_Scheme] (transaction_date_and_time)

Next you can load the December 2005 data into fact_warehouse_transaction_clone. Since this table is simply a temporary working area, the locking mechanism SQL Server employs to populate it really doesn't matter; we're only concerned with the availability of fact_warehouse_transaction. Once the clone table is populated, you can issue a statement like the following:

ALTER TABLE fact_warehouse_transaction_clone
SWITCH PARTITION 12 TO fact_warehouse_transaction PARTITION 12

This statement will take just a few seconds at most and will result in fact_warehouse_transaction being populated with all the December 2005 data. At that point you can drop the fact_warehouse_transaction_clone table.