We have an SSRS report containing a dataset with an ID and a NAME field. A parameter was created with its Value set to the dataset's ID field and its Label set to the NAME field. The label displays correctly in the parameter drop-down when previewing the report, but the selected parameter's label also needs to be shown on the report body. If you just use Parameters!ParameterName.Value, it shows the underlying value (the ID) directly. The correct expressions are as below.
Case when the parameter is a Single Value:
Case when the parameter is Multi Values:
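Assuming a parameter named MyParam (the name here is a placeholder), the expressions to display the label instead of the value are:

```vb
' Single-value parameter: Label returns the NAME text shown in the drop-down
=Parameters!MyParam.Label

' Multi-value parameter: Label is an array of the selected labels,
' so join them into one display string
=Join(Parameters!MyParam.Label, ", ")
```

Either expression can be placed in a textbox on the report body.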
This morning, an email alert reported that disk space on one of our database servers had exceeded the limit. After checking, I found it was because one database's transaction log had grown to three times its regular size. So it needed to be shrunk; however, no matter how I shrank it, the transaction log (ldf file) kept its size.
You can reference my previous post for an analysis of the shrink-log issue, but here is a quick solution.
1. Check the status of the database
Run the script below:
select name, log_reuse_wait_desc from sys.databases where name='mydatabase'
The returned value is "LOG_BACKUP",
which means this database is in full recovery mode and is waiting for a transaction log backup.
2. Truncate the transaction log
Run the script below:
BACKUP LOG mydatabase WITH truncate_only
It returned the error: "'truncate_only' is not a recognized BACKUP option."
Uh oh, it seems this command was deprecated in SQL Server 2008 R2 and later versions,
so run the following instead:
BACKUP LOG laddar TO DISK='NUL:'
It works; after that, shrink the database ldf file, and it shrinks successfully!
By the way, there is a trick: the process above may need to be repeated a few times to get the best result. The first round may only shrink the file a little, but after repeating the steps in a second round, the transaction log can shrink down to 1 MB.
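Putting the steps together as one script (the database name and logical log file name below are placeholders; find the real logical name with sp_helpfile):

```sql
-- 1. Check why the log space cannot be reused
SELECT name, log_reuse_wait_desc FROM sys.databases WHERE name = 'mydatabase';

-- 2. Back up the log to the NUL device to mark the log space reusable
--    (the replacement for WITH TRUNCATE_ONLY on SQL Server 2008 R2 and later)
BACKUP LOG mydatabase TO DISK = 'NUL:';

-- 3. Shrink the log file; repeat steps 2-3 if the first pass barely shrinks it
USE mydatabase;
DBCC SHRINKFILE (mydatabase_log, 1);
```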
Regarding how to select a proper replication type in SQL Server, many articles offer many opinions, and tons of "best practices" may help as well. For the official guidance, refer to Microsoft MSDN: Selecting the Appropriate Type of Replication.
Recently we found an issue on our website: web pages would sometimes break. It occurred rarely, but it did happen now and then, which was annoying. On further investigation, I found the issue was due to replication: whenever replication started, web pages that referenced the tables involved in the replication would break. Our replication is snapshot replication, which I thought should be changed to transactional replication anyway, except that an application needs to trigger the replication by clicking a button.
However, while troubleshooting the snapshot configuration, I found that under Articles --> "Destination object" --> "Action if name is in use", the default setting is "Drop existing object and create a new one". That is the root cause of the problem. No surprise the web pages broke: whenever replication was in progress, those tables were dropped, so queries could not reference them and threw some weird errors. Also, since this is not a heavy-duty replication, clients rarely hit the issue unless they load a page right at replication time (5-10 seconds).
The solution is easy: just change the default setting to "Truncate all data in the existing object".
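The same setting can also be changed with T-SQL via sp_changearticle (the publication and article names below are placeholders); the GUI option maps to the article's pre_creation_cmd property:

```sql
-- Run at the Publisher, in the publication database.
-- 'truncate' corresponds to "Truncate all data in the existing object";
-- the default 'drop' corresponds to "Drop existing object and create a new one".
EXEC sp_changearticle
    @publication = N'MyPublication',
    @article = N'MyArticle',
    @property = N'pre_creation_cmd',
    @value = N'truncate',
    @force_invalidate_snapshot = 1;
```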
Recently we had a requirement to use a Word macro to print some recordset values from a database table. The database-related part of the VBA code in the Word macro is basically as below.
Dim strConn As String
Dim conn As ADODB.Connection
Dim cmd As ADODB.Command
Dim rs As ADODB.Recordset
Dim totalcount As Integer
' membersFor() As memberinfo and RemoveQCInLastName are defined elsewhere in the macro

strConn = "Provider=SQLOLEDB.1;Data Source=myserver\myinstance;Initial Catalog=mydb;Integrated Security=SSPI;"
Set conn = New ADODB.Connection
conn.Open strConn                ' the connection must be opened before use

Set cmd = New ADODB.Command
cmd.ActiveConnection = conn
cmd.CommandText = "SpGetSomethingFromSomeDbTable"
cmd.CommandType = adCmdStoredProc
cmd.Parameters.Refresh           ' populate the parameters collection from the server
cmd.Parameters(1).Value = CInt(divisionid)
cmd.Parameters(2).Value = 1

Set rs = cmd.Execute()
totalcount = 0
Do While Not rs.EOF
    ReDim Preserve membersFor(totalcount) As memberinfo
    membersFor(totalcount).field1 = RemoveQCInLastName(rs("fieldname1"))
    membersFor(totalcount).field2 = RemoveQCInLastName(rs("fieldname2"))
    membersFor(totalcount).field3 = RemoveQCInLastName(rs("fieldname3"))
    totalcount = totalcount + 1
    rs.MoveNext                  ' advance the cursor, otherwise this loops forever
Loop
And part of the stored procedure is as below
if object_id('tempdb..#mytemptable') IS NOT NULL
drop table #mytemptable
;with table1_CTE as -- the CTE name was omitted in the original excerpt; it must match the FROM clause below
(SELECT * FROM
(SELECT *, ROW_NUMBER() OVER(PARTITION BY field1, field2 ORDER BY FieldDateTime DESC) AS rn
FROM AnotherDatabase.dbo.table1 WHERE fieldDateTime < @specificdate) as T
WHERE rn = 1 AND IsFlag = 1)
SELECT m.field3, a.field4, a.field5
FROM table1_CTE m JOIN someothertable a ON m.samefield = a.samefield
WHERE m.field1 = @currentvalue
When I ran the macro, the error "Operation is not allowed when the object is closed" popped up, and execution stopped at this line:
Set rs = cmd.Execute()
Actually this issue is not caused by the temp table or the CTE, but by the INSERT/DELETE/UPDATE (DML) statements in the stored procedure. The root cause is that the count of rows affected by those DML statements is returned as part of the result set, so the first recordset the macro receives is a closed, empty one. The solution is easy and simple: just add the following after "AS" and before "BEGIN" in the stored procedure.
SET NOCOUNT ON
SET ANSI_WARNINGS OFF
It took me 2 hours to figure this out; hope it is helpful.
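For illustration, with a hypothetical procedure header the placement looks like this:

```sql
ALTER PROCEDURE SpGetSomethingFromSomeDbTable
    @divisionid int,
    @flag int
AS
    -- Suppress "N rows affected" messages so DML statements inside the
    -- procedure do not inject extra (closed) result sets into ADO.
    SET NOCOUNT ON
    SET ANSI_WARNINGS OFF
BEGIN
    -- ... body of the procedure ...
END
```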
As a DBA, I watch our database transaction log files; sometimes a log grows really crazily and occupies even the whole hard drive. How do we resolve this issue?
1. Check if there is an effective transaction log backup.
We all know there are two common database recovery models: full and simple. Under the full model, the transaction log won't truncate until a transaction log backup happens. But there is a trick: if you just switch into the full recovery model but never take an initial full backup, SQL Server will not honor your request to be in the full recovery model. Your transaction log will continue to operate as it does in simple until you switch to the full recovery model AND take your first full backup.
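A quick way to check the current recovery model and what the log is waiting for (the database name is a placeholder):

```sql
SELECT name, recovery_model_desc, log_reuse_wait_desc
FROM sys.databases
WHERE name = 'mydatabase';
-- log_reuse_wait_desc = 'LOG_BACKUP' means a log backup is needed
-- before the log space can be reused.
```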
2. Other factors that may cause the transaction log to keep growing
- Uncommitted Transactions
- Extremely Large Transactions
- Operations: DBCC DBREINDEX and CREATE INDEX
- While Restoring from Transaction Log Backups
- Client Applications Do Not Process All Results
- Queries Time Out Before a Transaction Log Completes the Expansion and You Receive False ‘Log Full’ Error Messages
- Un-replicated Transactions
3. If it still doesn't work
There are two solutions you can consider.
a. Set a limit on the transaction log size
b. Force the log file to shrink, either manually or by a script/scheduled job
Back up your database!
Execute the following, substituting <log_file_name_Log> with the appropriate logical name of the database log file, no quotes needed:
- DBCC SHRINKFILE(<log_file_name_Log>)
- BACKUP LOG <database> WITH TRUNCATE_ONLY (on SQL Server 2008 R2 and later, where this option was removed, use BACKUP LOG <database> TO DISK='NUL:' instead)
- DBCC SHRINKFILE(<log_file_name_Log>)
Afterwards, perform a full backup of the database.
The file should shrink to a ridiculously small shadow of its former self, most likely around 1 MB.
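For option (a) above, a size cap can be set on the log file with ALTER DATABASE (the database and logical file names are placeholders):

```sql
-- Cap the transaction log at 2 GB; once the cap is reached, transactions
-- that need more log space will fail rather than fill the disk.
ALTER DATABASE mydatabase
MODIFY FILE (NAME = mydatabase_log, MAXSIZE = 2048MB);
```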
Just saw an article that explains T-SQL ranking functions very clearly; it is written by the famous Pinal Dave, the author of http://blog.sqlauthority.com
SQL SERVER – 2005 – Sample Example of RANKING Functions – ROW_NUMBER, RANK, DENSE_RANK, NTILE
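As a quick refresher, the four ranking functions differ mainly in how they number ties (a minimal self-contained sketch):

```sql
SELECT score,
       ROW_NUMBER() OVER (ORDER BY score DESC) AS row_num,   -- 1,2,3,4: always unique, ties broken arbitrarily
       RANK()       OVER (ORDER BY score DESC) AS rnk,       -- 1,1,3,4: ties share a rank, gaps follow
       DENSE_RANK() OVER (ORDER BY score DESC) AS dense_rnk, -- 1,1,2,3: ties share a rank, no gaps
       NTILE(2)     OVER (ORDER BY score DESC) AS half       -- splits the rows into 2 buckets
FROM (VALUES (100), (100), (90), (80)) AS t(score);
```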
Another article also explains one of the new features introduced in SQL Server 2008: GROUPING SETS.
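For reference, GROUPING SETS lets a single query return several levels of aggregation at once (a minimal sketch; the sales table and its columns are hypothetical):

```sql
-- Returns subtotals per (region, product), per region, and a grand total
-- in one pass, instead of UNIONing three separate GROUP BY queries.
SELECT region, product, SUM(amount) AS total
FROM sales
GROUP BY GROUPING SETS ((region, product), (region), ());
```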