Showing posts with label running. Show all posts

Friday, March 30, 2012

Long wait times for simple SQL on 2005

I have an existing application deployed on many SQL 2000 servers running without issue. We installed SQL 2005 (8-way dual-core) and the most simple, trivial SQL statements are taking an extreme amount of time (e.g. an identical query that took 10 ms on 2000 takes a couple of minutes on 2005). The data in the tables being queried is very small (low row count, < 400; and low row size, < 1k).

When profiling 2005 we see the connection going into a status of 'suspended' with a wait type of 'ASYNC_NETWORK_IO'. There just simply is not enough data being retrieved for a lag to occur returning data to the client. Has anyone seen anything like this? Any thoughts?
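For reference, the wait can be confirmed server-side with the standard SQL 2005 DMVs (a sketch; the session_id > 50 filter just skips system sessions):

-- which user requests are waiting, and on what
SELECT session_id, status, command, wait_type, wait_time, last_wait_type
FROM sys.dm_exec_requests
WHERE session_id > 50;

-- cumulative waits since the instance started
SELECT TOP 10 wait_type, waiting_tasks_count, wait_time_ms
FROM sys.dm_os_wait_stats
ORDER BY wait_time_ms DESC;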

|||

I haven't seen anything relating to NETWORK_IO; one gotcha is not doing index rebuilds after the upgrade. http://sqlblogcasts.com/blogs/simons/archive/2006/10/20/SQL-2005-UpgradeRebuild-your-indexes-.aspx
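If you want to rule that out, the usual post-upgrade maintenance looks roughly like this (a sketch; DBCC DBREINDEX is the SQL 2000-era syntax and still works on 2005):

-- rebuild all indexes on the table and refresh its statistics
DBCC DBREINDEX ('REPORT_DEF');
UPDATE STATISTICS REPORT_DEF WITH FULLSCAN;

-- or, for the whole database:
EXEC sp_updatestats;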

Can you include more details on the queries?

If you run the queries in SSMS do they still take a long time?

How many users are using the server? Is the server doing anything else?

|||

This is a new client of ours (installing our web app, which has been in production for a few years; we have no issues in-house on our 2005 or 2000 SQL boxes), so there are no current active users (save the one that I was using to test with) and the current activity on the server is virtually nil. This is one of our default databases (again, virtually no data except for some default info in the one table I am querying) - all indexes are up to date.

The table has simple varchar columns and two text columns (the data in the text columns varies from 100 bytes to 10k at the high end). Here is the definition of the table:

Table [REPORT_DEF]
[REPORT_GU] [uniqueidentifier] NOT NULL,
[NAME] [varchar](50) COLLATE SQL_Latin1_General_CP1_CI_AS NOT NULL,
[NAMESPACE] [varchar](100) COLLATE SQL_Latin1_General_CP1_CI_AS NOT NULL,
[NUMBER] [varchar](20) COLLATE SQL_Latin1_General_CP1_CI_AS NULL,
[PAGE_ORIENTATION] [varchar](1) COLLATE SQL_Latin1_General_CP1_CI_AS NULL,
[OWNER_TYPE] [varchar](1) COLLATE SQL_Latin1_General_CP1_CI_AS NULL,
[ORGANIZATION_GU] [uniqueidentifier] NULL,
[XML_DEF] [text] COLLATE SQL_Latin1_General_CP1_CI_AS NULL,
[CLASS_NAME] [varchar](100) COLLATE SQL_Latin1_General_CP1_CI_AS NULL,
[ASSEMBLY] [varchar](100) COLLATE SQL_Latin1_General_CP1_CI_AS NULL,
[COMPANY_GU] [uniqueidentifier] NULL,
[LOCATION] [nvarchar](100) COLLATE SQL_Latin1_General_CP1_CI_AS NULL,
[ICON] [nvarchar](100) COLLATE SQL_Latin1_General_CP1_CI_AS NULL,
[TITLE] [nvarchar](60) COLLATE SQL_Latin1_General_CP1_CI_AS NULL,
[SHORT_TITLE] [nvarchar](50) COLLATE SQL_Latin1_General_CP1_CI_AS NULL,
[LOCALE] [nvarchar](100) COLLATE SQL_Latin1_General_CP1_CI_AS NULL,
[REPORT_VIEW_TYPE] [nvarchar](20) COLLATE SQL_Latin1_General_CP1_CI_AS NULL,
[MODULE] [nvarchar](100) COLLATE SQL_Latin1_General_CP1_CI_AS NULL,
[DEPLOY_VERSION] [numeric](7, 0) NOT NULL DEFAULT ((0)),
[MAIN_REPORT_RPX] [text] COLLATE SQL_Latin1_General_CP1_CI_AS NULL,
[ASSOCIATED_REPORT_GU] [uniqueidentifier] NULL,
CONSTRAINT PRIMARY KEY CLUSTERED (REPORT_GU)
CONSTRAINT UNIQUE NONCLUSTERED (NAMESPACE, NAME)

The query that is being run is as follows:

SELECT R0.[COMPANY_GU],R0.[NUMBER],R0.[NAME],R0.[ASSOCIATED_REPORT_GU],R0.[MAIN_REPORT_RPX],
R0.[REPORT_GU],R0.[PAGE_ORIENTATION],R0.[SHORT_TITLE],R0.[ASSEMBLY],R0.[MODULE],R0.[TITLE],
R0.[DEPLOY_VERSION],R0.[NAMESPACE],R0.[LOCATION],R0.[ORGANIZATION_GU],R0.[ICON],R0.[OWNER_TYPE],
R0.[XML_DEF],R0.[LOCALE],R0.[REPORT_VIEW_TYPE],R0.[CLASS_NAME]
FROM [REPORT_DEF] R0
ORDER BY R0.[NAMESPACE] ASC,R0.[NAME] ASC,R0.[REPORT_GU] ASC

Again, when I execute this query on the DB server (via Mgmt Studio) it executes in about 80 ms (give or take a couple ms). When run on a machine on the same network (gigabit backbone, the only two machines on this network) the execution time jumps to 75 seconds (give or take 5 seconds).

Thoughts?

|||

Can you run Profiler and capture the XML plans, and then compare them, to check they are getting the same plan?

Also be aware that by default SSMS uses the in memory data provider which will give performance improvements compared with running on the client.

How much data is being returned?

Are you using SSMS on the client as well?

|||

Simon,

Thanks for your thoughts on this one. I broke down and called MS and after about 6 hours we found that there was a network issue. Both the SQL and web server machines have gigabit NICs, but the switch that the client had the two machines plugged into is a 10/100 switch; with the NICs set to autosensing, basic network traffic was extremely delayed (tested via Explorer: copying a 70MB file to the SQL server took 8 seconds, while copying from the SQL server to the web server took about 35 minutes). Anyway, chalk this one up to experience and something to look for next time.

Thanks for your help!

long running transactions w/ other users needing to read data

I have a very long transaction that runs on the same database that
other users need to use for existing data. I don't care if they see
data from the transaction before it is done and am only using the
transaction because I need a way to roll it back if any errors happen
during the transaction. Unfortunately all tables affected in the long
running transaction are completely locked and nobody else can access
any of the affected tables while it is running. I am using the
transaction isolation level of read uncommitted, which from my limited
understanding of isolation levels is the least strict. What can I do to
prevent this from happening?

Below is the output from sp_who2 and sp_lock while the process is
running and another process is being blocked by it.

SPID Status     Login                    HostName   BlkBy DBName Command          CPUTime DiskIO LastBatch      ProgramName                  SPID
---- ---------- ------------------------ ---------- ----- ------ ---------------- ------- ------ -------------- ---------------------------- ----
1    BACKGROUND sa                       .          .     NULL   LAZY WRITER      0       0      06/09 15:42:52                              1
2    sleeping   sa                       .          .     NULL   LOG WRITER       10      0      06/09 15:42:52                              2
3    BACKGROUND sa                       .          .     master SIGNAL HANDLER   0       0      06/09 15:42:52                              3
4    BACKGROUND sa                       .          .     NULL   LOCK MONITOR     0       0      06/09 15:42:52                              4
5    BACKGROUND sa                       .          .     master TASK MANAGER     0       5      06/09 15:42:52                              5
6    BACKGROUND sa                       .          .     master TASK MANAGER     0       0      06/09 15:42:52                              6
7    sleeping   sa                       .          .     NULL   CHECKPOINT SLEEP 0       12     06/09 15:42:52                              7
8    BACKGROUND sa                       .          .     master TASK MANAGER     0       2      06/09 15:42:52                              8
9    BACKGROUND sa                       .          .     master TASK MANAGER     0       0      06/09 15:42:52                              9
10   BACKGROUND sa                       .          .     master TASK MANAGER     0       0      06/09 15:42:52                              10
11   BACKGROUND sa                       .          .     master TASK MANAGER     0       1      06/09 15:42:52                              11
12   BACKGROUND sa                       .          .     master TASK MANAGER     0       0      06/09 15:42:52                              12
51   sleeping   SUPERPABLO\Administrator SUPERPABLO .     PM     AWAITING COMMAND 1813    307    06/09 16:10:34 .Net SqlClient Data Provider 51
52   sleeping   SUPERPABLO\Administrator SUPERPABLO 54    PM     SELECT           30      5      06/09 16:10:16 .Net SqlClient Data Provider 52
53   RUNNABLE   SUPERPABLO\Administrator SUPERPABLO .     master SELECT           0       3      06/09 16:09:44 SQL Profiler                 53
54   RUNNABLE   SUPERPABLO\Administrator SUPERPABLO .     PM     UPDATE           10095   206    06/09 16:10:02 .Net SqlClient Data Provider 54
56   RUNNABLE   SUPERPABLO\Administrator SUPERPABLO .     PM     SELECT INTO      151     27     06/09 16:10:33 SQL Query Analyzer           56

(17 row(s) affected)

spid dbid ObjId IndId Type Resource Mode Status
-- -- ---- -- -- ------ --- --
51 5 0 0 DB S GRANT
52 5 0 0 DB S GRANT
52 5 1117963059 4 PAG 1:7401 IS GRANT
52 5 1117963059 4 KEY (5301214e6d62) S WAIT
52 5 1117963059 0 TAB IS GRANT
54 5 1117963059 0 TAB IX GRANT
54 5 1852025829 0 TAB IX GRANT
54 5 1181963287 3 PAG 1:9017 IX GRANT
54 5 1117963059 4 KEY (5301934930a4) X GRANT
54 5 1117963059 3 KEY (530187fc93f3) X GRANT
54 5 1117963059 4 KEY (530154df71eb) X GRANT
54 5 0 0 DB [BULK-OP-LOG] NULL GRANT
54 5 0 0 FIL 2:0:d U GRANT
54 5 1117963059 2 KEY (1d0096c50a7d) X GRANT
54 5 1117963059 2 KEY (1b004a9a6158) X GRANT
54 5 1117963059 2 KEY (1800a435d44a) X GRANT
54 5 1181963287 6 PAG 1:8745 IX GRANT
54 5 1181963287 4 PAG 1:8923 IX GRANT
54 5 1181963287 2 PAG 1:8937 IX GRANT
54 5 1117963059 4 KEY (5301112b0696) X GRANT
54 5 0 0 PAG 1:10889 IX GRANT
54 5 1181963287 5 PAG 1:8859 IX GRANT
54 5 1181963287 6 PAG 1:10888 IX GRANT
54 5 0 0 PAG 1:10891 IX GRANT
54 5 0 0 PAG 1:10893 IX GRANT
54 5 0 0 PAG 1:10892 IX GRANT
54 5 0 0 PAG 1:10894 IX GRANT
54 5 0 0 PAG 1:10882 IX GRANT
54 5 1117963059 3 KEY (530135fbce35) X GRANT
54 5 1117963059 0 RID 1:7387:57 X GRANT
54 5 1117963059 0 RID 1:7387:59 X GRANT
54 5 1117963059 0 RID 1:7387:61 X GRANT
54 5 1117963059 3 KEY (5301406ad2bc) X GRANT
54 5 1117963059 4 PAG 1:7401 IX GRANT
54 5 0 0 PAG 1:7387 IX GRANT
54 5 1117963059 2 PAG 1:7389 IX GRANT
54 5 1117963059 3 PAG 1:7391 IX GRANT
54 5 1117963059 0 RID 1:7387:10 X GRANT
54 5 1117963059 0 RID 1:7387:56 X GRANT
54 5 1117963059 0 RID 1:7387:58 X GRANT
54 5 1117963059 0 RID 1:7387:60 X GRANT
54 5 1117963059 3 KEY (530144afbed8) X GRANT
54 5 1117963059 4 KEY (530115ee6af2) X GRANT
54 5 1117963059 3 KEY (5301c6cd88ea) X GRANT
54 5 1149963173 0 TAB IX GRANT
54 5 1181963287 0 TAB X GRANT
54 5 1117963059 4 KEY (5301d2782bbd) X GRANT
54 5 1117963059 3 KEY (5301015bc9a5) X GRANT
54 5 0 0 DB S GRANT
54 5 0 0 DB [BULK-OP-DB] NULL GRANT
54 5 1117963059 4 KEY (5301501a1d8f) X GRANT
54 5 1117963059 2 KEY (1c00f3a2b6c5) X GRANT
54 5 1117963059 2 KEY (1a002ffddde0) X GRANT
54 5 0 0 PAG 1:7411 IX GRANT
54 5 1117963059 2 KEY (1900c15268f2) X GRANT
54 5 0 0 PAG 1:10840 IX GRANT
54 5 1181963287 4 PAG 1:10841 IX GRANT
54 5 0 0 PAG 1:10842 IX GRANT
54 5 1117963059 3 KEY (5301059ea5c1) X GRANT
54 5 0 0 PAG 1:10820 IX GRANT
54 5 1181963287 4 PAG 1:10821 IX GRANT
54 5 1181963287 5 PAG 1:10874 IX GRANT
54 5 1181963287 5 PAG 1:10876 IX GRANT
54 5 0 0 PAG 1:10877 IX GRANT
54 5 1181963287 5 PAG 1:10878 IX GRANT
54 5 0 0 PAG 1:10849 IX GRANT
54 5 0 0 PAG 1:10850 IX GRANT
54 5 1117963059 2 KEY (1700f225b712) X GRANT
54 5 1117963059 4 KEY (5301214e6d62) X GRANT
56 5 0 0 DB S GRANT
56 1 85575343 0 TAB IS GRANT

|||

According to the sp_who2 and sp_lock output, spid 54 has the long running
transaction. It has an X lock on Key (5301214e6d62). Spid 52 is trying to
get an S lock on the same key and got blocked. The read uncommitted isolation
level doesn't prevent spid 54 from getting an X lock on the key, as the X lock
is likely obtained as a result of a modification (insert/delete/update), for
which the transaction can't skip locking. The only way I see to prevent
spid 52 from blocking is to apply the read uncommitted isolation level to spid
52 (rather than spid 54), so spid 52 can read dirty uncommitted data modified
by spid 54. Not sure whether your app's semantics allow it though.
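In other words, a minimal sketch of the suggestion, run on the readers' connections rather than the writer's (the table name is made up):

-- on each reader connection; permits dirty reads of spid 54's changes
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;

SELECT COUNT(*) FROM dbo.SomeTable;  -- hypothetical reader query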

--
Gang He
Software Design Engineer
Microsoft SQL Server Storage Engine

This posting is provided "AS IS" with no warranties, and confers no rights.

|||

The thing is though, the long running transaction is the only
transaction running. All the other processes are just running non
transactional queries.

|||

I've cheated with large INSERT statements by inserting blocks of them
in a batch inside a transaction, with a WAITFOR DELAY of a few seconds.
This makes the individual transactions shorter, and allows the other
statements to sneak in and get some work done while doing it. I've
found that inserting 5 batches of increasing percentage sizes (20,
25, 30, 50, 100) usually ends up inserting near-equivalent batch sizes.

The pseudo-code for this method would be something like the following:

BEGIN TRANSACTION

-- stage the primary keys of the rows to copy
INSERT INTO HoldingTable
SELECT PrimaryKey
FROM SourceTable

-- first 2000 records of 10000 records
INSERT INTO DestinationTable
SELECT TOP 20 PERCENT s.Cols
FROM SourceTable s JOIN HoldingTable h ON s.PrimaryKey = h.PrimaryKey
ORDER BY h.PrimaryKey

DELETE
FROM HoldingTable
WHERE PrimaryKey IN (SELECT TOP 20 PERCENT PrimaryKey
                     FROM HoldingTable ORDER BY PrimaryKey)

-- pause for 10 seconds
WAITFOR DELAY '00:00:10'

-- first 2000 records of remaining 8000 records
INSERT INTO DestinationTable
SELECT TOP 25 PERCENT s.Cols
FROM SourceTable s JOIN HoldingTable h ON s.PrimaryKey = h.PrimaryKey
ORDER BY h.PrimaryKey

DELETE
FROM HoldingTable
WHERE PrimaryKey IN (SELECT TOP 25 PERCENT PrimaryKey
                     FROM HoldingTable ORDER BY PrimaryKey)

WAITFOR DELAY '00:00:10'

....

-- remaining records
INSERT INTO DestinationTable
SELECT TOP 100 PERCENT s.Cols
FROM SourceTable s JOIN HoldingTable h ON s.PrimaryKey = h.PrimaryKey
ORDER BY h.PrimaryKey

DROP TABLE HoldingTable -- assumes it's a temp table or table variable

COMMIT TRANSACTION

A similar concept should work for UPDATES.

There may be other solutions out there; this works for me.

|||

pb648174 wrote:
> The thing is though, the long running transaction is the only
> transaction running. All the other processes are just running non
> transactional queries.

If you read the BOL section "SET IMPLICIT_TRANSACTIONS", You should be
able to see that *every* statement is part of a transaction - there are
no "non transactional" queries.

What actually happens when you execute a query when there is no active
transaction is that a new transaction is started. When the query
completes, the behaviour is affected by the "IMPLICIT_TRANSACTIONS"
settings. When it is OFF (the default), the transaction is
automatically committed, provided the query caused no error. When it is
ON, the transaction is kept open, and must be manually committed at a
later stage.
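A small sketch of the difference (the table name is made up):

SET IMPLICIT_TRANSACTIONS ON;
INSERT INTO dbo.SomeTable (Col1) VALUES (1);  -- implicitly starts a transaction
SELECT @@TRANCOUNT;                           -- 1: still open
COMMIT;                                       -- must be ended manually

SET IMPLICIT_TRANSACTIONS OFF;                -- the default (autocommit)
INSERT INTO dbo.SomeTable (Col1) VALUES (2);  -- committed automatically on success
SELECT @@TRANCOUNT;                           -- 0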

So, on to your problem. You *may* be able to fix it by issuing the
"SET TRANSACTION ISOLATION LEVEL" statement on each of the other
processes' connections. By setting it to READ UNCOMMITTED, this will
affect every transaction which is started by this connection (including
these automatic transactions which your statements are
opening/committing).

This will mean that each of these processes may see data in any
imaginable (read: illegal) state. If you are sure that it is safe for
these processes to see (and process) such data, then this may be the
way to go.

HTH,

Damien

|||

This large copy is happening from application code and involves 30 or
40 separate stored procedures being called. It is being done
asynchronously via a c# web app and there is a pause of 2 seconds
before every copy command. Unfortunately this two second pause doesn't
help because of the locking situation, it just makes everything block
for longer.

Isn't this a somewhat common scenario? What is the standard way around
this situation? Can somebody tell me why the entire table is being
locked instead of just the new records? I have no problem with new
records being locked, but I do not want it to lock the records it is
reading or the entire table.

Would disaster ensue if I submitted the following command?
SP_INDEXOPTION 'table_name', 'AllowTableLocks', FALSE
GO

|||

pb648174 (google@webpaul.net) writes:
> This large copy is happening from application code and involves 30 or
> 40 separate stored procedures being called. It is being done
> asynchronously via a c# web app and there is a pause of 2 seconds
> before every copy command. Unfortunately this two second pause doesn't
> help because of the locking situation, it just makes everything block
> for longer.
> Isn't this a somewhat common scenario? What is the standard way around
> this situation? Can somebody tell me why the entire table is being
> locked instead of just the new records? I have no problem with new
> records being locked, but I do not want it to lock the records it is
> reading or the entire table.
> Would disaster ensue if I submitted the following command?
> SP_INDEXOPTION 'table_name', 'AllowTableLocks', FALSE

It would not have any effect.

The table you are inserting into is *not* completely locked. You posted
an output from sp_lock. According to this output, spid 54 holds an
exclusive table lock on table 1181963287, but that does not seem to be
the main theatre for your problem. (You can use
"SELECT object_name(1181963287)" to see which table this is.)

Instead, the main part of the show appears to be table 1117963059, and
spid 54 does not hold any table locks on this table. But it does hold locks
on all newly inserted rows, as well as all newly inserted index nodes.
Process 52 is blocked by spid 54, and this is why:

52 5 1117963059 4 KEY (5301214e6d62) S WAIT

Spid 52 is trying to get a shared lock on an index key, but is blocked.

Assume that the query spid 52 has submitted is "show how many items of
widget X we sold last week", and the optimizer decides to use the
non-clustered index over widget_id to access the sales numbers. And
among the new rows you insert, there are rows with widget_id in question.
When spid 52 tries to access those index nodes, it will be blocked.

So while you don't get a table locked, it is not as simple that other
processes can just read the existing data, and don't bother about the
new data.

The best way would be to look into how to shorten the transaction length.
It sounds as if rows are being inserted one-by-one, in which case there
are lots of possibilities for improvements.

Another possibility is to insert the data into a staging table, and then
insert with one big INSERT statement at the end. But if that is many
rows, that could still block for considerable time.

A further development is to use partitioned views. Here, too, you copy
the data into an empty table that no other process sees. The processes
that read data access the view, not the table. Once the table has been
loaded, you change the view definition to include the new table.
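A rough sketch of that swap (all names hypothetical):

-- readers always query the view
CREATE VIEW dbo.ProjectData
AS SELECT * FROM dbo.ProjectData_Batch1
GO

-- load dbo.ProjectData_Batch2 here; no reader sees it yet,
-- then publish it by redefining the view:
ALTER VIEW dbo.ProjectData
AS SELECT * FROM dbo.ProjectData_Batch1
   UNION ALL
   SELECT * FROM dbo.ProjectData_Batch2
GO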

Then you can, of course, use READ UNCOMMITTED for the readers, but they
may then get some funky results that are not consistent. (READ UNCOMMITTED
on the writer has no effect.) A better alternative may be READPAST,
in which case locked rows are simply skipped. But this is only a locking
hint, and is not settable as a transaction level.

In SQL 2005, you would probably use the new SNAPSHOT isolation level,
in which case the readers would see the data as it was before the long
transaction started.
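Sketches of both (the table name is made up; PM is the database name from the sp_who2 output above):

-- READPAST: simply skip rows the writer has locked
SELECT COUNT(*) FROM dbo.ProjectData WITH (READPAST)

-- SQL 2005 only: versioned readers
ALTER DATABASE PM SET ALLOW_SNAPSHOT_ISOLATION ON

SET TRANSACTION ISOLATION LEVEL SNAPSHOT
SELECT COUNT(*) FROM dbo.ProjectData  -- sees data as of transaction start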

--
Erland Sommarskog, SQL Server MVP, esquel@sommarskog.se

Books Online for SQL Server SP3 at
http://www.microsoft.com/sql/techin.../2000/books.asp

|||

Wow, that helped clear things up quite a bit, thanks. Can I perhaps do
some work on the indexes in order to fix it? Would changing to a
clustered index make it less likely to block?

The rows aren't being inserted one row at a time, but they are being
done one "project" at a time(maybe a few thousand records), with pretty
standard Insert Into..Select statements. I have my application pausing
for a few seconds between projects to try and reduce stress on the
server and give other processes a chance for freedom, but from what you
are saying, perhaps I need to remove the wait times in order to get the
transaction over as quickly as possible.

|||

On 9 Jun 2005 14:18:10 -0700, "pb648174" <google@webpaul.net> wrote:

>I have a very long transaction that runs on the same database that
>other users need to use for existing data. I don't care if they see
>data from the transaction before it is done and am only using the
>transaction because I need a way to roll it back if any errors happen
>during the transaction. Unfortunately all tables affected in the long
>running transaction are completely locked and nobody else can access
>any of the affected tables while it is running. I am using the
>transaction isolation level of read uncommitted, which from my limited
>understanding of isolation levels is the least strict. What can I do to
>prevent this from happening?

Personally, I don't abide long-running database transactions in my
applications. If the issue comes up, I re-engineer until the need for the
long-running transaction goes away.

Here's one approach I've used...
1. For each table that will be affected, add 2 links to batch records, one for
initial, and one for final.
2. Add a table of batches with a batch ID, and a status that may be pending or
completed.
3. For each batch process, create a new batch record with a status of pending.
4. When writing to the database, point the initial batch of each new record to
the batch record, and point the final batch of each deleted record to the
batch record. Do not modify existing records - instead, add a new modified
copy, and finalize the old copy.
5. Once all batch updates are completed, change the batch record status from
pending to completed.

When querying the data, to see only data that is current, simply join to the
batch table, and exclude any records that have final batch links to a
completed batch record. Every once in a while, purge these outdated records,
so they don't pile up, and slow down the system.

If a transaction fails, you can roll back by deleting the records with initial
batch references to your batch record, and set any final batch references to
your batch record back to Null, then delete your batch record.

This approach only works if there can only be one batch processor at a time
affecting a particular group of tables, but it has the benefit of not
requiring server transactions to be maintained for long periods of time. A
batch could take several days and have no negative impact on anything. You
could even halt the batch, and continue it on a different machine if
necessary.
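A minimal sketch of the idea (all table and column names made up):

CREATE TABLE dbo.Batch (
    BatchID int IDENTITY PRIMARY KEY,
    Status  varchar(10) NOT NULL  -- 'pending' or 'completed'
)

CREATE TABLE dbo.Widget (
    WidgetID       int IDENTITY PRIMARY KEY,
    Name           varchar(50) NOT NULL,
    InitialBatchID int NOT NULL REFERENCES dbo.Batch (BatchID),
    FinalBatchID   int NULL REFERENCES dbo.Batch (BatchID)
)

-- "current" rows: anything not finalized by a completed batch
SELECT w.WidgetID, w.Name
FROM dbo.Widget w
WHERE NOT EXISTS (SELECT * FROM dbo.Batch b
                  WHERE b.BatchID = w.FinalBatchID
                    AND b.Status = 'completed')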
|||

pb648174 (google@webpaul.net) writes:
> Wow, that helped clear things up quite a bit, thanks. Can I perhaps do
> some work on the indexes in order to fix it? Would changing to a
> clustered index make it less likely to block?

I sort of assumed that all new rows were inserted at the end of the
clustered index. But if the rows you insert are not aligned with the
clustered index, then the problem becomes a lot worse. Any process
that finds itself in need of a table scan would be blocked.

> The rows aren't being inserted one row at a time, but they are being
> done one "project" at a time(maybe a few thousand records), with pretty
> standard Insert Into..Select statements. I have my application pausing
> for a few seconds between projects to try and reduce stress on the
> server and give other processes a chance for freedom, but from what you
> are saying, perhaps I need to remove the wait times in order to get the
> transaction over as quickly as possible.

Had you committed after each batch, the pause could make some sense. But
if you don't commit until the end, then you might as well do away with those
pauses.

--
Erland Sommarskog, SQL Server MVP, esquel@sommarskog.se

Books Online for SQL Server SP3 at
http://www.microsoft.com/sql/techin.../2000/books.asp

|||

The only thing about taking out the pauses is that it hammers on the
database so hard, it pegs the CPU.. With the pauses other processes
have a chance to go forward, but with this locking it doesn't seem to
matter.

|||

Isn't this "batch" approach sort of ridiculous, given that databases
are supposed to be able to run transactions?? We have hundreds of
tables and the above approach would be extremely onerous. I don't think
it would work anyway since there are identity columns in most of the
tables which need to match up between parent and child relationships.
If somebody inserted a single record while the transaction was running,
that would hose the whole process.

|||

This is interesting - adding a clustered index on the identity column
fixed the previous problem, so I picked one module (which has about 5
or 6 tables) to see if I could free up that module while the copy
process was running. I went and put clustered indexes on all the
identity columns of those tables, but now another table has a locking
problem, although this time, not on the index. The output from that
line from sp_lock is below:

54 8 2101582525 1 PAG 1:26568 S WAIT

So now it is a page lock, which from what I've read is a group of
records. That seems reasonable given that I am copying large amounts of
data, but why is it stopping my other process?

|||

pb648174 (google@webpaul.net) writes:
> The only thing about taking out the pauses is that it hammers on the
> database so hard, it pegs the CPU..

CPU:s are not humans. Just keep them working!

(But, OK, if you see a puff of smoke, it's probably time for a pause.)

--
Erland Sommarskog, SQL Server MVP, esquel@sommarskog.se

Books Online for SQL Server SP3 at
http://www.microsoft.com/sql/techin.../2000/books.asp

|||

pb648174 (google@webpaul.net) writes:
> The only thing about taking out the pauses is that it hammers on the
> database so hard, it pegs the CPU.. With the pauses other processes
> have a chance to go forward, but with this locking it doesn't seem to
> matter.

On a little more serious note... if the machine has more than one
CPU, you could consider reducing the degree of parallelism, to leave
some CPUs to the rest of the pack. You do this by adding

OPTION (MAXDOP n)

at the end of the query. In fact you can even try 1, to abort parallelism
entirely. SQL Server appears to be over-optimistic about the benefits of
parallelism, and non-parallel plans may be better.

If your CPU(s) are hyper-threaded, you should set the server-configuration
parameter "Max degree of parallelism" to the number of physical CPUs.
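A sketch of that setting (the value 8 is only an example):

EXEC sp_configure 'show advanced options', 1
RECONFIGURE
EXEC sp_configure 'max degree of parallelism', 8  -- number of physical CPUs
RECONFIGURE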

--
Erland Sommarskog, SQL Server MVP, esquel@.sommarskog.se

Books Online for SQL Server SP3 at
http://www.microsoft.com/sql/techin.../2000/books.asp

|||

pb648174 (google@webpaul.net) writes:
> This is interesting - adding a clustered index on the identity column
> fixed the previous problem, so I picked one module (which has about 5
> or 6 tables) to see if I could free up that module while the copy
> process was running. I went and put clustered indexes on all the
> identity columns of those tables, but now another table has a locking
> problem, although this time, not on the index. The output from that
> line from sp_lock is below:
> 54 8 2101582525 1 PAG 1:26568 S WAIT
> So now it is a page lock, which from what I've read is a group of
> records. That seems reasonable given that I am copying large amounts of
> data, but why is it stopping my other process?

Lacking telepathic abilities, I can't say why.

What you could try is to use aba_lockinfo, which you find on my web
site, http://www.sommarskog.se/sqlutil/aba_lockinfo.html. This procedure
gives you an overview of active processes, which objects they lock, and also
current statements. This gives a little better idea of what is going
on.

--
Erland Sommarskog, SQL Server MVP, esquel@sommarskog.se

Books Online for SQL Server SP3 at
http://www.microsoft.com/sql/techin.../2000/books.asp

|||

I have no idea if this will help or not...
A vendor gave me a giant sql select that has 200 fields and 25 joins.
The sample of the join:
LEFT OUTER JOIN view_TPSEnum_10145 WITH(NOLOCK) ON
view_TPSEnum_10145.nIndex = [tblObjectType5001_1].[FldNumeric21251]
I'm told the WITH(NOLOCK) won't lock the source table *shrug*. I read in the
docs that it allows dirty reads and is used only for selects.

Long running transaction problem

I have a situation occasionally where I get an open transaction from a day
or so back. I find this using the:
DBCC OPENTRAN
This shows that the transaction is a day or more old. I then do a:
sp_who xx
Where xx is the SPID obtained from the DBCC command. This shows a few more
items but still does not tell me the process that opened the transaction or created the
locks. When I run sp_lock the above transaction has tons of locks, like
1,500,000 locks. Without knowing what else to do I do a:
kill xx
This causes the server to go into recovery, and due to the size it recovers
for a relatively long time.
How can I find out the service, application or process that is responsible
for the hung transaction? Just from the size I have an idea it is a process
which is updating 290k records.
TIA, Mark

|||

If the process is still running, you could try grabbing the sql stmt that is
running, either via DBCC INPUTBUFFER or use the ::fn_get_sql function if you
have SP3 installed, and try to narrow down your search from there.
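Something like this sketch (spid 54 stands in for whatever spid DBCC OPENTRAN reports):

DBCC INPUTBUFFER (54)  -- last batch the spid sent

-- SQL 2000 SP3 and later: the statement text behind the spid
DECLARE @handle binary(20)
SELECT @handle = sql_handle FROM master..sysprocesses WHERE spid = 54
SELECT [text] FROM ::fn_get_sql(@handle)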
--
Regards
Ray Mond
"Mark Butler" <mredhat_nospam@yahoo.com> wrote in message
news:%233cbtgk2DHA.2604@TK2MSFTNGP09.phx.gbl...
|||

I know it's kind of obvious, but how about looking at the HostName and
ProgramName columns when running sp_who2, to trace the application?
Depending on your security model, you could ask the user directly too, since
you can see the UserName.
--
Regards
Ray Mond
"Mark Butler" <mredhat_nospam@yahoo.com> wrote in message
news:%233cbtgk2DHA.2604@TK2MSFTNGP09.phx.gbl...
|||

Thanx for the response Ray. You are right that I should see the user name
but the user name in this case is the administrator, which means it is one of
the automated services that are run. The suspected application is an EDI
package that automatically retrieves information via FTP and runs a script
that updates the SQL database. This process runs on the server itself.
--
Mark Butler
"Ray Mond" <yeohray@hotmail.com> wrote in message
news:ulcMjCn2DHA.2408@tk2msftngp13.phx.gbl...


long running transaction blocking queue

Hello,

I seem to be misunderstanding the way transactions work with service broker queues. We have developed and deployed a service broker application that has 5 queues and a windows service for each queue on multiple servers (3 currently). Due to a last minute issue, we had to not use transactions when the services executed a receive; I am now updating the code base to use transactions and am running into blocking issues. One of the services runs for 90 seconds (spooling to the printer) and all of the servers block on the receive operation for this queue. I thought that if I was receiving messages from a single conversation, other receives against this queue would not block.

Thanks,

Jim Stallings

|||

RECEIVEs indeed should not block each other. One RECEIVE should either return messages for an unlocked conversation or return an empty resultset. Can you share how you do the RECEIVEs? E.g., are you using WAITFOR, are you using a WHERE clause, are you using TOP, etc. Also, how many conversations and how many messages per conversation typically are in the queue?

Can you look what is the lock contention on? sys.processes will show what is the type and name of the resource a thread is blocked on.
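A quick sketch of that check:

-- who is blocked, by whom, and on what resource
SELECT spid, blocked, lastwaittype, waitresource
FROM master..sysprocesses
WHERE blocked <> 0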

Thanks,
~ Remus

|||

Hello Remus,
Thanks for your response. I took a long weekend so I didn't see your response Friday. At the moment I'm just doing a receive top (1) (count passed in) as this is a very simple implementation where each message has its own queue and I receive one message per access until an end conversation is encountered. I (and others) were under the impression, from how SSB is presented, that long running transactions could be left open on one conversation without blocking others, but your answer indicates that is not the case? My problem is that I need to have a transaction around the processing of the message so that I don't have to build recovery logic if the windows service fails to complete.

From the PDC demo I saw, they showed that you could be processing from one conversation group and other processes could still access another conversation group. That led me to the conclusion that locking is by conversation group. Is that not true? Am I supposed to receive by conversation group? Is receiving by conversation handle supposed to prevent blocking of subsequent receives?
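A rough sketch of the receive-by-group pattern I am asking about (the queue name [target] and the timeout are placeholders):

[code]

-- Lock the next available conversation group, then receive only its messages
DECLARE @cg UNIQUEIDENTIFIER;

BEGIN TRANSACTION;

-- @cg stays NULL if the timeout expires and no group is available
WAITFOR (GET CONVERSATION GROUP @cg FROM [target]), TIMEOUT 5000;

RECEIVE conversation_handle, message_type_name, message_body
FROM [target] WHERE conversation_group_id = @cg;

-- ... process the messages, then ...
COMMIT;

[/code]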

I worked with our DBA to track down the blocking and it is occurring in my stored procedure that does the receive. The sequence is that windows service A (server A) does a receive in a transaction. Windows service B (server B, etc.) does another receive and blocks on a select by conversation handle. This select, by the way, is a holdover from the original coding I converted from the PDC code and is not used anywhere. We went from code complete to production so fast I didn't have time to go back and trim this kind of unused feature from the conversion. I can easily modify the receive to select by conversation group if this would help with the blocking, but since when we did the analysis we saw that we were blocking on the clustered index of the internal table, I had doubts if this would make a difference, so I decided to post and ask. My receive proc is below and in the case that I'm asking about, I'm passing a count of 1, no timeout so no waitfor is generated, and no conversation handle.

Thanks,
Jim

ALTER PROCEDURE [DDPT].[ssb_Receive]
/*
****************************************************************************************************************
* File Name : DDPT.ssb_Receive.sql
* Description: Execute RECEIVE against the specified queue. Wait time of 0 will return immediately
* if no messages in queue, non-zero wait time will perform a WAITFOR of the specified wait time.
* Receive will be for any conversation unless a conversation is specified. Message count can
* be specified to limit the number of messages received.
*
* Author : Jim Stallings
*
* Example : EXEC DDPT.ssb_Receive
@queueName = 'LabelRequestQueue'
, @serverName = 'WSHB8257'
*
* History : Date Initials Description
* - -- --
* 03/13/2006 JAS Initial Development
*
****************************************************************************************************************
*/
@queueName NVARCHAR(255)
, @conversationId UNIQUEIDENTIFIER = NULL
, @waitTime INT = NULL
, @messageCount INT = 0
, @serverName NVARCHAR(128) = NULL
AS
BEGIN

SET NOCOUNT ON;

DECLARE @strSQL NVARCHAR(2000), @errNo INT
DECLARE @conversation_handle uniqueidentifier
, @message_type_name nvarchar(128)
, @message xml
, @service_name nvarchar(128)
, @taskid bigint

-- Peek at the queue for the requested conversation (this SELECT-then-RECEIVE
-- lookup is the unused holdover discussed below)
SELECT @strSQL = 'SELECT TOP 1 @conversation_handle = conversation_handle,
@service_name = service_name, @message_type_name = message_type_name, @message = message_body
FROM ' + CAST(@queueName AS VARCHAR(255))

-- The GUID must be quoted; a plain CAST to VARCHAR (default length 30) would truncate it
SELECT @strSQL = @strSQL + ' WITH (NOLOCK) WHERE conversation_handle = ''' + CAST(@conversationId AS VARCHAR(36)) + ''''
WHERE @conversationId IS NOT NULL

EXEC sp_executesql @strSQL
, N'@conversation_handle UNIQUEIDENTIFIER OUTPUT, @service_name nvarchar(128) OUTPUT, @message_type_name NVARCHAR(128) OUTPUT, @message xml OUTPUT'
, @conversation_handle OUTPUT, @service_name OUTPUT, @message_type_name OUTPUT, @message OUTPUT

-- Build the RECEIVE, optionally wrapped in WAITFOR and/or limited by TOP
SET @strSQL = ''
IF @waitTime IS NOT NULL
SET @strSQL = 'WAITFOR('

SELECT @strSQL = @strSQL + 'RECEIVE '
IF @messageCount != 0
SELECT @strSQL = @strSQL + 'TOP(' + CAST(@messageCount AS VARCHAR) + ') '

SELECT @strSQL = @strSQL
+ 'conversation_group_id, conversation_handle
, message_sequence_number, service_name
, service_contract_name, message_type_name
, validation, message_body
FROM ' + CAST(@queueName AS VARCHAR(255))

SELECT @strSQL = @strSQL + ' WHERE conversation_handle = ''' + CAST(@conversationId AS VARCHAR(36)) + ''''
WHERE @conversationId IS NOT NULL

IF @waitTime IS NOT NULL
SELECT @strSQL = @strSQL + '), TIMEOUT ' + CAST(@waitTime AS VARCHAR)

EXEC(@strSQL)
SELECT @errNo = @@Error

RETURN @errNo

END
|||

Hi Remus,

I wanted to correct my last post. When I said the demo showed that you could receive from one group while other processes could receive from another, I misspoke. I meant that one of the PDC presentations I listened to gave me the impression that this was a capability provided by conversation group locking, in that once you started to receive against a conversation group, another process receiving against the same queue would return messages from the next group. I'm assuming that for this to function, a transaction must be kept open against the group until all messages are processed.

Also, I forgot to add that when we looked at the blocking when doing the receive, we determined that we were blocking on the clustered index on the internal table.

Jim

|||

I'm not sure I understand the logic of this procedure. But one thing I can tell is that looking up a conversation by means of SELECT and building a RECEIVE WHERE based on the SELECT return is not going to work. Concurrent procedure calls will return the same conversation and your RECEIVEs will block each other, because they'll try to lock the same conversation (as you already found out).

Also, mixing RECEIVE with and without a WHERE clause on the same queue is not going to work. The RECEIVEs without a WHERE clause will 'steal' messages you expect to be received by the RECEIVEs with a WHERE clause. The typical pattern is to use WAITFOR(RECEIVE ...) without a WHERE clause and then process the messages. This pattern allows the RECEIVE to get the next available conversation.
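A minimal sketch of that pattern (the queue name and timeout are placeholders):

[code]

BEGIN TRANSACTION;

-- Receive from the next available conversation, waiting up to 5 seconds
WAITFOR (
    RECEIVE conversation_handle, message_type_name, message_body
    FROM [LabelRequestQueue]
), TIMEOUT 5000;

-- ... process the returned messages, then ...
COMMIT;

[/code]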

My recommendation is to remove the optional @conversationId and simplify the procedure to use a simple RECEIVE without a WHERE clause. This way you can benefit from conversation group locking and the built-in capability of RECEIVE to find the next available conversation group.

BTW, you should really consider whether using dynamic T-SQL in a stored procedure is justified in your case. Doing so negates much of the performance benefit of stored procs.

HTH,
~ Remus

|||

Hi Remus,

First, many DBAs have been pounding into me that there are no performance benefits to stored procs since they are compiled on the fly now. Is this not true for SQL 2005?

The logic behind the proc was to provide a general purpose receive stored proc that could be used by multiple services with and without timeouts and/or conversation handles. While this could result in the mixed access you mentioned, the respective services (and therefore queues) would consistently either use or not use conversation handles (and timeouts with waitfor). I don't know how else to do a receive proc that works for multiple queues and I don't want to build a stored proc for every queue.

The convoluted logic you mention is logic I said in the post was initially coded, not used, and should have been (and is going to be) tossed. In any case, it was just to do a lookup in the queue, and if the conversation handle was not found, the result of executing the select would have nulled the conversation id parameter so that it would not be built into the subsequent receive command. BTW, I thought I read a post from someone in the service broker group that optimizer hints work against the internal tables for service broker, and if that is incorrect that would be nice to know. Doing the receive as you suggest is not an issue; that is what I had decided to do anyway. All messages are independent and followed by an end conversation message.

Anyway, down to the real issue (and I apologize for my poor description of the problem): it is the windows service that executes for 90 seconds processing a specific message. I have read and heard that service broker could manage a conversation that takes seconds, minutes or even weeks to complete. If a queue is locked for the duration of a transaction, how would this be implemented without single threading on that queue? That is exactly what we are looking for, but locking the queue for the 90 second duration means that I can't have another server start working on another message. Taking the processing out of the transaction means that I have to do additional coding to track failures and resend messages.

Jim

|||

Jim,

Let's do a simple experiment using Management Studio.

1. First, create two queues and two services in your database of choice:

[code]

create queue [initiator];

create queue [target];

create service [from] on queue [initiator];

create service [to] on queue [target] ([DEFAULT]);

go

[/code]

2. Now let's send two messages, on two distinct dialogs, by running the following snippet twice:

[code]

declare @dh uniqueidentifier;

begin dialog conversation @dh

from service [from]

to service N'to', N'current database'

with encryption = off;

send on conversation @dh;

go

[/code]

3. On a new Query, issue a RECEIVE and don't commit (to simulate an arbitrarily long transaction). This will grab the first enqueued message:

[code]

begin transaction;

receive * from [target];

-- commit;

[/code]

4. On a new Query, issue a second RECEIVE and don't commit (to simulate an arbitrarily long transaction). This will grab the second enqueued message:

[code]

begin transaction;

receive * from [target];

-- commit;

[/code]

5. On a new Query, issue a third RECEIVE, with a WAITFOR and don't commit. This will block waiting for a message:

[code]

begin transaction;

waitfor (receive * from [target])

-- commit;

[/code]

6. Run the snippet from step 2 again to send a new message.

7. Switch back to the query window opened at step 5. The WAITFOR has now completed, returning the newly enqueued message.

8. Looking at the transaction locks will show how each RECEIVE has concurrently locked a different conversation group:

[code]

select * from sys.dm_tran_locks

where resource_subtype='CONVERSATION_GROUP'

[/code]

This example shows that one can issue concurrent RECEIVEs and they will not block each other. If in your case you are seeing RECEIVEs being blocked by each other, that would normally indicate that those RECEIVEs have a WHERE clause and are competing for the same conversation.

HTH,
~ Remus

|||

Hi Remus,

This is what I was asking.

Thanks,

Jim

|||

Jim Stallings wrote:

First, many DBAs have been pounding into me that there are no performance benefits to stored procs since they are compiled on the fly now. Is this not true for SQL 2005?

A stored proc that contains a statement (e.g. a RECEIVE statement) will be compiled once, cached, and then at execution time the said statement's execution will be invoked. If instead, at the same position, the proc contains an EXEC @sqlBatch, or an sp_executesql @sqlBatch, then at execution time the T-SQL batch that is contained in @sqlBatch has to be parsed, compiled and executed. Parsing and compilation are steps that take significant time, comparable with execution time.

If the @sqlBatch is identical each time (character by character), then the system might already have a cached ad-hoc compiled plan for it from the last time it was executed. But if the batch is but a single character different, it would have to be parsed and compiled from scratch each time. If the @sqlBatch is constructed with an embedded GUID value, then you're guaranteed to have a different @sqlBatch each time, causing a parse and compilation from scratch. If dynamic SQL is a must, a good practice is to construct the @sqlBatch with an embedded parameter and use sp_executesql instead of EXEC (since sp_executesql can pass in parameters).
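A minimal sketch of the difference (the table and parameter names are illustrative):

[code]

-- Embedding the GUID makes the batch text unique on every call, so it is
-- parsed and compiled from scratch each time:
-- SET @sqlBatch = N'SELECT * FROM dbo.MyTable WHERE id = ''' + CAST(@guid AS NVARCHAR(36)) + '''';
-- EXEC(@sqlBatch);

-- With an embedded parameter the batch text is identical on every call,
-- so the cached ad-hoc plan can be reused:
DECLARE @sqlBatch NVARCHAR(200);
DECLARE @guid UNIQUEIDENTIFIER;
SET @guid = NEWID();
SET @sqlBatch = N'SELECT * FROM dbo.MyTable WHERE id = @id';
EXEC sp_executesql @sqlBatch, N'@id UNIQUEIDENTIFIER', @id = @guid;

[/code]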

A good read on the subject is: http://www.microsoft.com/technet/prodtechnol/sql/2005/recomp.mspx. Have those DBAs read it too ;-)

HTH,
~ Remus

Long running SQLTransaction

Hi all,
we have a strange problem in SQL2000 SP3a with SQLTransactions.
When running SQL Profiler the duration for some SQLTransactions (Commit) is
over six seconds, but the individual SQL statements (sp's) are executed in no
time at all.
I've used sp_lock but there are no locks.
Here's an extract from SQL Profiler:
SQLTransaction Commit 6796 5 2005-03-04 15:30:27.887 2005-03-04
15:30:34.683 DTCXact 16335154 sa
Any Ideas ?
Regards Tommy S.
|||Check out master..sysprocesses and find out what the waittypes are
to determine what is causing the extended commit operation.
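For example (a sketch; replace the SPID with the one performing the slow commit):

[code]

-- waittype/lastwaittype show what the session is waiting on during the commit
SELECT spid, status, waittype, lastwaittype, waittime, blocked
FROM master..sysprocesses
WHERE spid = 51  -- placeholder SPID

[/code]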
Is the Tlog file on the same disk as the data file? How busy is the server?
Is this constantly reproducible? Or is it intermittent?
HTH
"Tommy Söderkvist" wrote:
> Hi all,
> we have a strange problem in SQL2000 SP3a with SQLTransactions.
> When running SQL Profiler the duration for some SQLTransactions (Commit) is
> over six seconds, but the individual SQL statements (sp's) are executed in no
> time at all.
> I've used sp_lock but there are no locks.
> Here's an extract from SQL Profiler:
> SQLTransaction Commit 6796 5 2005-03-04 15:30:27.887 2005-03-04
> 15:30:34.683 DTCXact 16335154 sa
> Any Ideas ?
> Regards Tommy S.
>
|||Thanks,
I will try to check out the waittypes the next time I see a long running
transaction.
Yes, the log and the datafile are located on the same disk.
The CPUs (4 of them) are almost idle.
The problem is intermittent and it doesn't seem to be dependent on the
number of users; it can occur in the early evening when the number of users
decreases significantly.
/Tommy
"Olu Adedeji" wrote:
> Check out master..sysprocesses and find out what the waittypes are
> to determine what is causing the extended commit operation.
> Is the Tlog file on the same disk as the data file? How busy is the server?
> Is this constantly reproducible? Or is it intermittent?
> HTH
> "Tommy Söderkvist" wrote:
> > Hi all,
> > we have a strange problem in SQL2000 SP3a with SQLTransactions.
> > When running SQL Profiler the duration for some SQLTransactions (Commit) is
> > over six seconds, but the individual SQL statements (sp's) are executed in no
> > time at all.
> > I've used sp_lock but there are no locks.
> > Here's an extract from SQL Profiler:
> > SQLTransaction Commit 6796 5 2005-03-04 15:30:27.887 2005-03-04
> > 15:30:34.683 DTCXact 16335154 sa
> > Any Ideas ?
> > Regards Tommy S.
> >|||"Tommy Söderkvist" <TommySderkvist@.discussions.microsoft.com> schrieb im
Newsbeitrag news:72BF587D-E3CF-41FE-9EC8-5681C62E472B@.microsoft.com...
> Thanks,
> I will try to check out the waittypes the next time I see a long running
> transaction.
> Yes, the log and the datafile are located on the same disk.
> The CPUs (4 of them) are almost idle.
> The problem is intermittent and it doesn't seem to be dependent on the
> number of users; it can occur in the early evening when the number of users
> decreases significantly.
My guess would be that you have an IO bottleneck which makes tx log
writing slow. You could use perfmon to verify this theory.
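One way to sanity-check this from T-SQL on SQL 2000 is fn_virtualfilestats (the
dbid and fileid below are placeholders for your database and its log file):

[code]

-- IoStallMS is the cumulative time (ms) spent waiting on IO for this file
SELECT DbId, FileId, IoStallMS, NumberWrites, BytesWritten
FROM ::fn_virtualfilestats(5, 2)  -- placeholder dbid, fileid

[/code]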
robert
> /Tommy
> "Olu Adedeji" wrote:
> > Check out master..sysprocesses and find out what the waittypes are
> > to determine what is causing the extended commit operation.
> >
> > Is the Tlog file on the same disk as the data file? How busy is the
> > server?
> > Is this constantly reproducible? Or is it intermittent?
> >
> > HTH
> >
> > "Tommy Söderkvist" wrote:
> >
> > > Hi all,
> > > we have a strange problem in SQL2000 SP3a with SQLTransactions.
> > > When running SQL Profiler the duration for some SQLTransactions
> > > (Commit) is
> > > over six seconds, but the individual SQL statements (sp's) are
> > > executed in no
> > > time at all.
> > > I've used sp_lock but there are no locks.
> > > Here's an extract from SQL Profiler:
> > > SQLTransaction Commit 6796 5 2005-03-04 15:30:27.887 2005-03-04
> > > 15:30:34.683 DTCXact 16335154 sa
> > > Any Ideas ?
> > > Regards Tommy S.
> > >

Long running SQL query in ASP.Net

Greetings,
I have to run a parameterized sproc on SQL that can take 5-10 minutes to
complete. The query returns a few megs of data that I want to let the
clients download as a CSV. Any ideas on a quick and dirty way to do this
without invoking SSIS?
Thanks
On Jun 12, 2:44 am, "Shkedy" <shk...@.newsgroups.nospam> wrote:
> Greetings,
> I have to run a parameterized sproc on SQL that can take 5-10 minutes to
> complete. The query returns a few megs of data that I want to let the
> clients download as a CSV. Any ideas on a quick and dirty way to do this
> without invoking SSIS?
> Thanks
You can use bcp, but this requires xp_cmdshell to be enabled if you
call it from SQL Server.
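A rough sketch of the bcp route (the server name, database, sproc parameters and
output path are all placeholders; note the security implications of enabling
xp_cmdshell):

[code]

-- Enable xp_cmdshell (SQL 2005; requires sysadmin)
EXEC sp_configure 'show advanced options', 1; RECONFIGURE;
EXEC sp_configure 'xp_cmdshell', 1; RECONFIGURE;

-- Export the sproc's result set to a comma-separated file
EXEC master..xp_cmdshell
    'bcp "EXEC MyDb.dbo.sp_mysproc @param1=1, @param2=2" queryout "C:\reports\report.csv" -c -t, -T -S MYSERVER';

[/code]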
|||Hi Shkedy,
I recommend that you try using bcp command and Import/Export Data Wizard.
Either of the two methods is very convenient, but Import/Export Data Wizard
may be more convenient for you.
For bcp utility, you can refer to:
bcp Utility
http://msdn2.microsoft.com/en-us/library/ms162802(SQL.90).aspx
For Import/Export Data Wizard, please refer to:
How to: Start the SQL Server Import and Export Wizard
http://technet.microsoft.com/en-us/library/ms140052(SQL.90).aspx
Hope this helps. If you have any other questions or concerns, please feel
free to let me know.
Best regards,
Charles Wang
Microsoft Online Community Support
================================================== ===
Get notification to my posts through email? Please refer to:
http://msdn.microsoft.com/subscriptions/managednewsgroups/default.aspx#notif
ications
If you are using Outlook Express, please make sure you clear the check box
"Tools/Options/Read: Get 300 headers at a time" to see your reply promptly.
Note: The MSDN Managed Newsgroup support offering is for non-urgent issues
where an initial response from the community or a Microsoft Support
Engineer within 1 business day is acceptable. Please note that each follow
up response may take approximately 2 business days as the support
professional working with you may need further investigation to reach the
most efficient resolution. The offering is not appropriate for situations
that require urgent, real-time or phone-based interactions or complex
project analysis and dump analysis issues. Issues of this nature are best
handled working with a dedicated Microsoft Support Engineer by contacting
Microsoft Customer Support Services (CSS) at
http://msdn.microsoft.com/subscriptions/support/default.aspx.
================================================== ====
When responding to posts, please "Reply to Group" via
your newsreader so that others may learn and benefit
from this issue.
================================================== ====
This posting is provided "AS IS" with no warranties, and confers no rights.
================================================== ====
|||Thanks for the prompt and informative response.
I ended up using C# to implement this by launching a new thread that
traverses a reader and writes a text file. I added some more bells and
whistles that I omitted from here but this is the gist of the code:
private void btnSubmit_Click(object sender, System.EventArgs e)
{
    if (Page.IsValid)
    {
        // Remove any previous copy of the report, then build it on a background thread
        File.Delete(Server.MapPath("~") + "\\reports\\" + CreateFileName());
        System.Threading.ThreadStart launcher = new System.Threading.ThreadStart(CreateCSVReport);
        System.Threading.Thread thread = new System.Threading.Thread(launcher);
        thread.Start();
        // Pop up a status page that waits for the finished file
        Page.RegisterStartupScript("popup", String.Format("<script language='javascript'>popitup('ReportStatus.aspx?f={0}')</script>", CreateFileName()));
    }
}
private void CreateCSVReport()
{
    SqlDataReader rdr;
    SqlConnection cnn = new SqlConnection(System.Configuration.ConfigurationSettings.AppSettings.Get("ConnectionString"));
    SqlCommand cmd = new SqlCommand("sp_mysproc");
    cmd.Connection = cnn;
    cmd.CommandType = CommandType.StoredProcedure;
    cmd.Parameters.Add("@param1", SomeValue);
    cmd.Parameters.Add("@param2", SomeValue);
    cmd.Connection.Open();
    cmd.CommandTimeout = 900; // allow up to 15 minutes for the long-running sproc
    // Write to a temp file first, then move it into place so the status page
    // only ever sees a complete file
    string tempFileName = Server.MapPath("~") + "\\reports\\temp\\" + Guid.NewGuid().ToString() + ".csv";
    System.IO.StreamWriter wrtr = File.CreateText(tempFileName);
    rdr = cmd.ExecuteReader(CommandBehavior.CloseConnection);
    // Header row
    int rowCount = 0;
    for (int i = 0; i < rdr.FieldCount; i++)
    {
        wrtr.Write("{0}", rdr.GetName(i));
        if (i < rdr.FieldCount - 1)
            wrtr.Write(",");
    }
    wrtr.WriteLine();
    // Data rows, flushing every 1000 rows
    while (rdr.Read())
    {
        for (int i = 0; i < rdr.FieldCount; i++)
        {
            wrtr.Write("{0}", rdr[i]);
            if (i < rdr.FieldCount - 1)
                wrtr.Write(",");
        }
        rowCount++;
        wrtr.WriteLine();
        if (rowCount % 1000 == 0)
            wrtr.Flush();
    }
    wrtr.Flush();
    rdr.Close();
    wrtr.Close();
    File.Move(tempFileName, Server.MapPath("~") + "\\reports\\" + CreateFileName());
}
"Charles Wang[MSFT]" <changliw@.online.microsoft.com> wrote in message
news:XKigCINrHHA.1032@.TK2MSFTNGHUB02.phx.gbl...
> Hi Shkedy,
> I recommend that you try using bcp command and Import/Export Data Wizard.
> Either of the two methods is very convenient, but Import/Export Data
> Wizard
> may be more convenient for you.
> For bcp utility, you can refer to:
> bcp Utility
> http://msdn2.microsoft.com/en-us/library/ms162802(SQL.90).aspx
> For Import/Export Data Wizard, please refer to:
> How to: Start the SQL Server Import and Export Wizard
> http://technet.microsoft.com/en-us/library/ms140052(SQL.90).aspx
> Hope this helps. If you have any other questions or concerns, please feel
> free to let me know.
> Best regards,
> Charles Wang
> Microsoft Online Community Support
> ================================================== ===
> Get notification to my posts through email? Please refer to:
> http://msdn.microsoft.com/subscriptions/managednewsgroups/default.aspx#notif
> ications
> If you are using Outlook Express, please make sure you clear the check box
> "Tools/Options/Read: Get 300 headers at a time" to see your reply
> promptly.
>
> Note: The MSDN Managed Newsgroup support offering is for non-urgent issues
> where an initial response from the community or a Microsoft Support
> Engineer within 1 business day is acceptable. Please note that each follow
> up response may take approximately 2 business days as the support
> professional working with you may need further investigation to reach the
> most efficient resolution. The offering is not appropriate for situations
> that require urgent, real-time or phone-based interactions or complex
> project analysis and dump analysis issues. Issues of this nature are best
> handled working with a dedicated Microsoft Support Engineer by contacting
> Microsoft Customer Support Services (CSS) at
> http://msdn.microsoft.com/subscriptions/support/default.aspx.
> ================================================== ====
> When responding to posts, please "Reply to Group" via
> your newsreader so that others may learn and benefit
> from this issue.
> ================================================== ====
> This posting is provided "AS IS" with no warranties, and confers no
> rights.
> ================================================== ====
>
>
|||Hi Shkedy,
Thank you for your reply and the detailed additional feedback on how you
were successful in resolving this issue. This information has been added to
Microsoft's database. Your solution will benefit many other users, and we
really value having you as a Microsoft customer.
If you have any other questions or concerns, please do not hesitate to
contact us. It is always our pleasure to be of assistance.
Have a nice day!
Best regards,
Charles Wang
Microsoft Online Community Support
================================================== ===
Get notification to my posts through email? Please refer to:
http://msdn.microsoft.com/subscriptions/managednewsgroups/default.aspx#notif
ications
If you are using Outlook Express, please make sure you clear the check box
"Tools/Options/Read: Get 300 headers at a time" to see your reply promptly.
Note: The MSDN Managed Newsgroup support offering is for non-urgent issues
where an initial response from the community or a Microsoft Support
Engineer within 1 business day is acceptable. Please note that each follow
up response may take approximately 2 business days as the support
professional working with you may need further investigation to reach the
most efficient resolution. The offering is not appropriate for situations
that require urgent, real-time or phone-based interactions or complex
project analysis and dump analysis issues. Issues of this nature are best
handled working with a dedicated Microsoft Support Engineer by contacting
Microsoft Customer Support Services (CSS) at
http://msdn.microsoft.com/subscriptions/support/default.aspx.
================================================== ====
When responding to posts, please "Reply to Group" via
your newsreader so that others may learn and benefit
from this issue.
================================================== ====
This posting is provided "AS IS" with no warranties, and confers no rights.
================================================== ====
