
Friday, March 30, 2012

Long wait times for simple SQL on 2005

I have an existing application deployed on many SQL 2000 servers running without issue. We installed SQL 2005 on an 8-way dual-core server, and the simplest, most trivial SQL statements are taking an extreme amount of time (e.g., an identical query that took 10 ms on 2000 takes a couple of minutes on 2005). The tables being queried are very small (row count < 400; row size < 1k).

When profiling 2005 we see the connection go into a status of 'suspended' with a wait type of 'ASYNC_NETWORK_IO'. There simply is not enough data being retrieved for a lag to occur while returning data to the client. Has anyone seen anything like this? Any thoughts?
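For anyone chasing the same thing, the wait can be watched from a second connection with the 2005 DMVs; a minimal sketch (the session_id filter is just an example):

-- Watch the suspended session and its wait type from another connection
SELECT session_id, status, wait_type, wait_time, last_wait_type
FROM sys.dm_exec_requests
WHERE session_id > 50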

I haven't seen anything relating to ASYNC_NETWORK_IO; one gotcha after an upgrade is not doing index rebuilds. http://sqlblogcasts.com/blogs/simons/archive/2006/10/20/SQL-2005-UpgradeRebuild-your-indexes-.aspx
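A minimal sketch of that post-upgrade maintenance, using the REPORT_DEF table discussed later in this thread as an example:

-- Rebuild all indexes and refresh statistics after the 2005 upgrade
ALTER INDEX ALL ON dbo.REPORT_DEF REBUILD
UPDATE STATISTICS dbo.REPORT_DEF WITH FULLSCAN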

Can you include more details on the queries?

If you run the queries in SSMS, do they still take a long time?

How many users are using the server? Is the server doing anything else?

|||

This is a new client of ours (installing our web app, which has been in production for a few years with no issues in-house on our 2005 or 2000 SQL boxes), so there are no active users other than the one I was using to test with; current activity on the server is virtually nil. This is one of our default databases (again, virtually no data except for some default info in the one table I am querying), and all indexes are up to date.

The table has simple varchar columns and two text columns (the data in the columns vary from 100 bytes to 10k at the high end). Here is the definition of the table:

CREATE TABLE [REPORT_DEF] (
[REPORT_GU] [uniqueidentifier] NOT NULL,
[NAME] [varchar](50) COLLATE SQL_Latin1_General_CP1_CI_AS NOT NULL,
[NAMESPACE] [varchar](100) COLLATE SQL_Latin1_General_CP1_CI_AS NOT NULL,
[NUMBER] [varchar](20) COLLATE SQL_Latin1_General_CP1_CI_AS NULL,
[PAGE_ORIENTATION] [varchar](1) COLLATE SQL_Latin1_General_CP1_CI_AS NULL,
[OWNER_TYPE] [varchar](1) COLLATE SQL_Latin1_General_CP1_CI_AS NULL,
[ORGANIZATION_GU] [uniqueidentifier] NULL,
[XML_DEF] [text] COLLATE SQL_Latin1_General_CP1_CI_AS NULL,
[CLASS_NAME] [varchar](100) COLLATE SQL_Latin1_General_CP1_CI_AS NULL,
[ASSEMBLY] [varchar](100) COLLATE SQL_Latin1_General_CP1_CI_AS NULL,
[COMPANY_GU] [uniqueidentifier] NULL,
[LOCATION] [nvarchar](100) COLLATE SQL_Latin1_General_CP1_CI_AS NULL,
[ICON] [nvarchar](100) COLLATE SQL_Latin1_General_CP1_CI_AS NULL,
[TITLE] [nvarchar](60) COLLATE SQL_Latin1_General_CP1_CI_AS NULL,
[SHORT_TITLE] [nvarchar](50) COLLATE SQL_Latin1_General_CP1_CI_AS NULL,
[LOCALE] [nvarchar](100) COLLATE SQL_Latin1_General_CP1_CI_AS NULL,
[REPORT_VIEW_TYPE] [nvarchar](20) COLLATE SQL_Latin1_General_CP1_CI_AS NULL,
[MODULE] [nvarchar](100) COLLATE SQL_Latin1_General_CP1_CI_AS NULL,
[DEPLOY_VERSION] [numeric](7, 0) NOT NULL DEFAULT ((0)),
[MAIN_REPORT_RPX] [text] COLLATE SQL_Latin1_General_CP1_CI_AS NULL,
[ASSOCIATED_REPORT_GU] [uniqueidentifier] NULL,
CONSTRAINT [PK_REPORT_DEF] PRIMARY KEY CLUSTERED ([REPORT_GU]), -- constraint names here are illustrative
CONSTRAINT [UQ_REPORT_DEF] UNIQUE NONCLUSTERED ([NAMESPACE], [NAME])
)

The query that is being run is as follows:

SELECT R0.[COMPANY_GU],R0.[NUMBER],R0.[NAME],R0.[ASSOCIATED_REPORT_GU],R0.[MAIN_REPORT_RPX],
R0.[REPORT_GU],R0.[PAGE_ORIENTATION],R0.[SHORT_TITLE],R0.[ASSEMBLY],R0.[MODULE],R0.[TITLE],
R0.[DEPLOY_VERSION],R0.[NAMESPACE],R0.[LOCATION],R0.[ORGANIZATION_GU],R0.[ICON],R0.[OWNER_TYPE],
R0.[XML_DEF],R0.[LOCALE],R0.[REPORT_VIEW_TYPE],R0.[CLASS_NAME]
FROM [REPORT_DEF] R0
ORDER BY R0.[NAMESPACE] ASC, R0.[NAME] ASC, R0.[REPORT_GU] ASC

Again, when I execute this query on the DB server (via Mgmt Studio) it executes in about 80 ms (give or take a couple of ms). When run on a machine on the same network (gigabit backbone, the only two machines on this network), the execution time jumps to 75 seconds (give or take 5 seconds).

Thoughts?

|||

Can you run Profiler and capture the XML plans, then compare them to check they are getting the same plan?

Also be aware that by default SSMS uses the in-memory data provider, which will give performance improvements compared with running on the client.

How much data is being returned?

Are you using SSMS on the client as well?

|||

Simon,

Thanks for your thoughts on this one. I broke down and called MS, and after about 6 hours we found that there was a network issue. Both the SQL and the web server machines have gigabit NICs, but the switch that the client had the two machines plugged into is a 10/100 switch...with the NICs set to autosensing, basic network traffic was extremely delayed (tested via Explorer: copying a 70 MB file to the SQL server took 8 seconds; copying from the SQL server to the web server was taking about 35 minutes). Anyway, chalk this one up to experience and something to look for next time.

Thanks for your help!

Long-standing debate...please tell me your thoughts

Ok, here's the situation. Our MIS department recently relocated to a new
office space about 5 miles away from our database servers. We used to be in
the same building.
We have about 10 different servers over there. For this example...let's say
two of those are called DevData and ProdData.
Since we've moved over here, whenever we have to copy data between the two
servers...it takes much, much longer. I would guesstimate that it takes 2 or
3x as long to transfer data. Remember, both servers are located over there
(right next to each other).
When we questioned our Network Services God, he informed us that any time we
transfer data from Dev to Prod that it has to first travel over here before
going to the destination server.
Can someone shed some light on this?
Thanks so much.
Jen

|||It first has to travel back to your office because they have
it configured that way. His team screwed up. You wouldn't
want your network database traffic going outside the
local area network of your database "if" you can help it
for performance and security reasons.
2004 and 2005 Microsoft MVP C#
Robbe Morris
http://www.masterado.net
"Jennifer" <Jennifer@.discussions.microsoft.com> wrote in message
news:CED31D68-2751-44E9-A48B-2BE9315A52B9@.microsoft.com...
> Ok, here's the situation. Our MIS department recently relocated to a new
> office space about 5 miles away from our database servers. We used to be
> in
> the same building.
> We have about 10 different servers over there. For this example...let's
> say
> two of those are called DevData and ProdData.
> Since we've moved over here, whenever we have to copy data between the two
> servers...it takes much, much longer. I would guestimate that it takes 2
> or
> 3x as long to transfer data. Remember, both servers are located over
> there
> (right next to each other).
> When we questioned our Network Services God, he informed us that any time
> we
> transfer data from Dev to Prod that it has to first travel over here
> before
> going to the destination server.
> Can someone shed some light on this?
> Thanks so much.
> Jen
|||Well...I totally agree with you and that has been my argument all along.
So the big question is...is this something that can be configured in SQL
Server or is this more a general network issue?
I would LOVE to be able to tell this 'God' (oh..he's *that* bad)..that this
is their issue and explain to him what he can do to fix it. Oh that would
make my day.
BTW - I'm an ex-Excel MVP (used to participate a lot a few years ago).
Thanks for answering so quickly and any additional information would be
great. If I ever make it back to the Summit, I'll buy you a beer.

|||How are you copying data between the servers? If you are using a DTS
package, data are pumped through the machine running the package.
Consequently, if you run the DTS package on your PC using EM, all data
passes through your PC. You can invoke the package from a job on either the
source or target server to eliminate the extra hop.
Hope this helps.
Dan Guzman
SQL Server MVP
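For illustration, a hedged sketch of wrapping the package in a SQL Agent job so the pump runs on the server; the job, server, and package names are all examples:

-- Create a job on the source server whose step runs the DTS package locally,
-- so the data never routes through a client PC
EXEC msdb.dbo.sp_add_job @job_name = N'Copy DevData to ProdData'
EXEC msdb.dbo.sp_add_jobstep
    @job_name = N'Copy DevData to ProdData',
    @step_name = N'Run DTS package',
    @subsystem = N'CmdExec',
    @command = N'DTSRUN /S DevData /E /N "DevToProdCopy"'
EXEC msdb.dbo.sp_add_jobserver @job_name = N'Copy DevData to ProdData'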
"Jennifer" <Jennifer@.discussions.microsoft.com> wrote in message
news:CED31D68-2751-44E9-A48B-2BE9315A52B9@.microsoft.com...
> Ok, here's the situation. Our MIS department recently relocated to a new
> office space about 5 miles away from our database servers. We used to be
> in
> the same building.
> We have about 10 different servers over there. For this example...let's
> say
> two of those are called DevData and ProdData.
> Since we've moved over here, whenever we have to copy data between the two
> servers...it takes much, much longer. I would guestimate that it takes 2
> or
> 3x as long to transfer data. Remember, both servers are located over
> there
> (right next to each other).
> When we questioned our Network Services God, he informed us that any time
> we
> transfer data from Dev to Prod that it has to first travel over here
> before
> going to the destination server.
> Can someone shed some light on this?
> Thanks so much.
> Jen
|||Dan...usually it's just via Import/Export via EM. However, we do have
packages also. The packages are actually stored on the Dev or Prod database.
We don't do anything 'local'.
So...in that case, you are saying that unless we use Terminal Services or are
at the physical location of the source/destination, data does go through your
local machine?
"Dan Guzman" wrote:

> How are you copying data between the servers? If you are using a DTS
> package, data are pumped through the machine running the package.
> Consequently, if you run the DTS package on your PC using EM, all data
> passes through your PC. You can invoke the package from a job on either t
he
> source or target server to eliminate the extra hop.
> --
> Hope this helps.
> Dan Guzman
> SQL Server MVP
> "Jennifer" <Jennifer@.discussions.microsoft.com> wrote in message
> news:CED31D68-2751-44E9-A48B-2BE9315A52B9@.microsoft.com...
>
|||
Yes, it will go through the local machine because SQL EM is on the local
machine. If you set up a job to do this using the SQL Server Agent, it
will stay on the server. This, of course, assumes that both servers can
talk to one another through a local switch and not have to travel back
to the main building.
This reminds me of the beloved Netware copy utility I used to use in the
past called NCOPY. If you issued NCOPY from the local PC to copy a file
from the Netware file server to another location, NCOPY actually kicked
off a remote server copy to avoid having to bring the data to client
first.
Copying is not like that with Windows. If you have a file on the file
server and open a local command window or Explorer session and
copy/paste the file to the same folder, it all comes to the client first
because the client is performing the copy.
In your situation, it sounds like you need to remote into the server or
kick off a server process that starts the copy (the data copied needs to
be on the server as well).
David Gugick
Quest Software
www.imceda.com
www.quest.com

|||> So...in that case, you are saying that unless we use Terminal Services or
> are at the physical location of the source/destination, data does go through
> your local machine?
This is true when you run the package remotely via EM. The source location
of the package doesn't matter because it is loaded on the machine running
the package before execution. As David and I mentioned, another alternative
is to run the package as a SQL Agent job so that it executes on the source
or target server.
I also work with remote servers and find that it pays to be mindful of
network bandwidth with voluminous data-transfer tasks. Remote Desktop is
your friend :-)
Hope this helps.
Dan Guzman
SQL Server MVP
"Jennifer" <Jennifer@.discussions.microsoft.com> wrote in message
news:B757F600-B389-4EF6-9C2B-4DAB81F05B8E@.microsoft.com...[vbcol=seagreen]
> Dan...usually it's just via Import/Export via EM. However, we do have
> packages also. The packages are actually stored on the Dev or Prod
> database.
> We don't do anything 'local'.
> So..in that case, you are saying that unless we use Terminal Services or
> are
> at the physical location of the source/destination, data does go through
> your
> local machine?
> "Dan Guzman" wrote:
|||Dan, would the same be true if calling a remote DTS package through the local
command prompt? Don't know why this would make a difference but I just
thought I would ask.
CR
"Dan Guzman" <guzmanda@.nospam-online.sbcglobal.net> wrote in message
news:edqu%23WjfFHA.2156@.TK2MSFTNGP14.phx.gbl...
> This is true when you run the package remotely via EM. The source
> location of the package doesn't matter because it is loaded on the machine
> running the package before execution. As David and I mentioned, another
> alternative is to run the package as a SQL Agent job so that it executes
> on the source or target server.
> I also work with remote servers and find that it pays to be mindful of
> network bandwidth with voluminous data-transfer tasks. Remote Desktop is
> your friend :-)
> --
> Hope this helps.
> Dan Guzman
> SQL Server MVP
> "Jennifer" <Jennifer@.discussions.microsoft.com> wrote in message
> news:B757F600-B389-4EF6-9C2B-4DAB81F05B8E@.microsoft.com...
|||The same applies when the package is run with the DTSRUN command-prompt
utility on your PC. However, if you invoke DTSRUN using xp_cmdshell, it
will run on the server.
Hope this helps.
Dan Guzman
SQL Server MVP
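A one-line sketch of that xp_cmdshell variant, which makes DTSRUN execute on the server itself (server and package names are examples):

-- DTSRUN runs on the SQL Server box, not on the client issuing the call
EXEC master..xp_cmdshell 'DTSRUN /S ProdData /E /N "DevToProdCopy"'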
"ChrisR" <noemail@.bla.com> wrote in message
news:%23IOYFnlfFHA.3252@.TK2MSFTNGP10.phx.gbl...
> Dan would the same be true if calling a remote DTS Package through the
> local command prompt. Dont know why this would make a difference but I
> just thought I would ask.
> CR
>
> "Dan Guzman" <guzmanda@.nospam-online.sbcglobal.net> wrote in message
> news:edqu%23WjfFHA.2156@.TK2MSFTNGP14.phx.gbl...
>sql


Friday, March 23, 2012

Log shipping setup with two standby db servers

Hi,
Currently we have log shipping set up with one production db server
sending the logs to one standby database server.

We would like to have a setup where there will be two standby db
servers. The second server will be located in a different country.
Kindly let me know if we can have a setup where
there is one primary server and two standby db servers.

When I tried to configure this, I was not able to complete the
maintenance plan.
It gives an error that the log shipping monitor already exists.
Please help me. Thanks in advance.
Regards
Kamal

karora@.melstar.com (Kamal) wrote in message news:<a4d6a1fb.0408230107.abdaf08@.posting.google.com>...

Yes, this is possible:

http://support.microsoft.com/defaul...inal.asp#modify
http://www.microsoft.com/resources/...art4/c1361.mspx

You might also want to check this article, which describes one
situation where you might get that error:

http://support.microsoft.com/defaul...q298743&sd=tech

Also check out "Modifying Log Shipping" in Books Online.

Simon
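For reference, the manual equivalent on the extra standby is simply restoring each shipped log with STANDBY; a hedged sketch (paths and names are examples):

-- Apply a shipped transaction log on the second standby, leaving it readable
RESTORE LOG UserDB
FROM DISK = N'\\standby2\logship\UserDB_tlog.trn'
WITH STANDBY = N'D:\MSSQL\undo_UserDB.dat'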

Log-shipping failure

Hi All,
sql server 2000 enterprise edition, 8.00.760
I'm trying to set up log shipping between two databases on two different servers on the same domain. I'm using the same SQL Server Agent account for both servers.
After log shipping is configured, I am able to see the transaction logs being backed up on the primary server, then copied on to the secondary server folder. But when it tries to restore the transaction logs on the secondary server, I get the following error:
sqlmaint.exe failed. [SQLSTATE 42000] (Error 22029).
FYI, we are using compatibility level 65 on the primary server, as this is how we want it. But I can change the compatibility level on the secondary server to 80 (if it's a must). I am wondering if this is one of the reasons for the failure.
More information:
I have configured it to restore in STANDBY mode and terminate any user connections. And the secondary server is used for monitoring log shipping activities.
Has anybody faced a similar problem? Please advise.
Thanks much
GYK
|||Check that the SQL Server Agent account has enough permissions on the
secondary server.
Run sqlmaint with the same parameters from the command line and see what
errors are returned.
Hope this helps,
Eric Cárdenas
Senior support professional
This posting is provided "AS IS" with no warranties, and confers no rights.
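One quick check related to the compatibility-level suspicion above; a sketch assuming a database named UserDB:

-- Report the current level, then raise the secondary to 80 if needed
EXEC sp_dbcmptlevel N'UserDB'
EXEC sp_dbcmptlevel N'UserDB', 80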


Wednesday, March 21, 2012

LogShipping

Hi all, I have created a log shipping task on my SQL servers.
This is the server order:
1st Server (Win2003, SQL 2000 SP3) is the data server, let's say
2nd Server (Win2003, SQL 2000 SP3) is the monitor server
3rd Server (Win2000, SQL 2000 SP3) is the server that accepts the data
(the 1st & 2nd servers are a cluster system)
And the line connecting the 1st & 2nd servers with the 3rd server is 1 Mb.
The problem now is that the SQL Help is awful about log shipping. It explains
nothing!!
I have created the log shipping, but I get an error that the databases are not
in sync.
And the file loaded and file copied are stuck on First_File...trn
And also my databases seem to be in sync.
What is wrong?
What should I do?
Please HELP!!!
Thanks in advance
Dimitris

|||Hi,
I have to correct you on something...there is plenty of
documentation for log shipping!
Try looking at BOL, and this link will help. You should
read it before implementing.
http://support.microsoft.com/?id=323135
hth
DeeJay
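One thing worth verifying, as a hedged sketch: the "not in sync" complaint usually means the destination was never seeded from a full backup in STANDBY (or NORECOVERY) mode before the first log was applied (paths and names are examples):

-- Seed the destination on the 3rd server so later log restores line up
RESTORE DATABASE UserDB
FROM DISK = N'\\server3\share\UserDB_full.bak'
WITH STANDBY = N'D:\MSSQL\undo_UserDB.dat', REPLACE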

Monday, March 12, 2012

Logins permissions for non-sa RESTORE?

I hope this is a nice fat ball that someone can knock out of the park...

We've recently started to upgrade our development servers from Win2k to Windows Server 2003. Naturally, the SQL Server boxes receive MS SQL Server 2000 SP3a. Our database users now cannot restore to their own databases.

We usually grant each login the 'dbcreator' server role (and hope the developers are too busy to realize everything else it allows). Each user's login is dbo in his/her database. Typically one developer will produce a reference dump file, and all the other developers load it as they need it.

If a user executes a restore, it trundles along happily until almost the end, spitting out an error. Here's a sample:

100 percent restored.
Processed 376 pages for database 'bobdb', file 'Data' on file 1.
Server: Msg 916, Level 14, State 1, Line 68
Server user 'bob' is not a valid user in database 'bobdb'.
Server: Msg 3013, Level 16, State 1, Line 68
RESTORE DATABASE is terminating abnormally.

Oh, and this worked with SQL Server 2000 (any patch) on Windows 2000 Server.

Developers reload their databases so often (and DBA resources are so scarce) that routing all restores through a DBA will bring development to its knees. So, each developer must be able to restore dumps to their own database.

Help? Which server roles/permissions/GRANTs do we need to provide our developers with the power they need (and not the powers they don't need!)?

|||If the database being restored does not exist, the user must have CREATE DATABASE permissions to be able to execute RESTORE. If the database exists, RESTORE permissions default to members of the sysadmin and dbcreator fixed server roles and the owner (dbo) of the database.

[Books Online] RESTORE

|||Our DBAs initially set up databases for each developer, so they should be appropriately configured at that point. Each developer login has the dbcreator role. Each developer's login owns the corresponding database.

GRANT CREATE DATABASE only applies to database users, not SQL Server logins. That is to say, the first sentence from Books Online doesn't exactly make sense. (What user? In which database?)

Essentially, at some point during the restore, SQL Server forgets that the user doing the restore used to own the database in question, and in so doing it gets a bit confused.

If I create my own dump file, I can restore it to my account. Previously, I could restore a dump created by anyone else.

This is the only other clue I have. I executed this immediately after restoring my database from my dump file.

use bobdb
go
exec sp_changedbowner 'bob'
go

Server: Msg 15247, Level 16, State 1, Procedure sp_changedbowner, Line 15
User does not have permission to perform this action.

Previously I could execute this on my own databases (it's a no-op, but it doesn't fail). Was there a security hole in sp_changedbowner until SP3a?

Does that line about "RESTORE permissions default to members of the sysadmin and dbcreator fixed server roles" mean each user must have both server roles, as opposed to either?

|||I don't believe that you have to have dual membership,
but it should be relevant to try and see what pops up.
I am more concerned about what has changed between SP2 and SP3 to cause this to happen.

Except for dbcreator, what are their admin privileges?
Do they own the dbs that they work on?

|||I might be wrong about this, but it sounds like your developers are restoring on other servers, perhaps their own PCs? Anyway, you probably just have a SID issue here. What you need to do is have them drop their permissions, then run sp_help_revlogin to sync up the SIDs on the user accounts. That should solve the restore and the sp_changedbowner problem.

Take these two scripts and put them in your master database on the source server.

SET QUOTED_IDENTIFIER ON
GO
SET ANSI_NULLS ON
GO

/****** Object: Stored Procedure dbo.sp_hexadecimal Script Date: 5/17/2004 9:46:25 PM ******/
ALTER PROCEDURE sp_hexadecimal
@.binvalue varbinary(256),
@.hexvalue varchar(256) OUTPUT
AS
DECLARE @.charvalue varchar(256)
DECLARE @.i int
DECLARE @.length int
DECLARE @.hexstring char(16)
SELECT @.charvalue = '0x'
SELECT @.i = 1
SELECT @.length = DATALENGTH (@.binvalue)
SELECT @.hexstring = '0123456789ABCDEF'
WHILE (@.i <= @.length)
BEGIN
DECLARE @.tempint int
DECLARE @.firstint int
DECLARE @.secondint int
SELECT @.tempint = CONVERT(int, SUBSTRING(@.binvalue,@.i,1))
SELECT @.firstint = FLOOR(@.tempint/16)
SELECT @.secondint = @.tempint - (@.firstint*16)
SELECT @.charvalue = @.charvalue +
SUBSTRING(@.hexstring, @.firstint+1, 1) +
SUBSTRING(@.hexstring, @.secondint+1, 1)
SELECT @.i = @.i + 1
END
SELECT @.hexvalue = @.charvalue

GO

SET QUOTED_IDENTIFIER OFF
GO
SET ANSI_NULLS ON
GO

SET QUOTED_IDENTIFIER ON
GO
SET ANSI_NULLS ON
GO

/****** Object: Stored Procedure dbo.sp_help_revlogin Script Date: 5/17/2004 9:46:39 PM ******/
-- Generates a script that recreates the server's logins (with their original
-- SIDs, password hashes, and default databases) on another server.
CREATE PROCEDURE sp_help_revlogin @login_name sysname = NULL
AS
--Declare needed variables.
DECLARE
    @name sysname,
    @xstatus int,
    @binpwd varbinary(256),
    @txtpwd sysname,
    @tmpstr varchar(256),
    @SID_varbinary varbinary(85),
    @SID_string varchar(256),
    @dbname varchar(255)
--Determine whether to process one login or all. Set up the cursor accordingly.
IF (@login_name IS NULL)
BEGIN
    DECLARE login_curs CURSOR FOR
    SELECT
        sxl.sid,
        sxl.name,
        sxl.xstatus,
        sxl.password,
        sd.name AS dbname
    FROM
        master..sysxlogins sxl
        INNER JOIN master..sysdatabases sd ON sxl.dbid = sd.dbid
    WHERE
        sxl.srvid IS NULL
        AND sxl.name <> 'sa'
END
ELSE
BEGIN
    DECLARE login_curs CURSOR FOR
    SELECT
        sxl.sid,
        sxl.name,
        sxl.xstatus,
        sxl.password,
        sd.name AS dbname
    FROM
        master..sysxlogins sxl
        INNER JOIN master..sysdatabases sd ON sxl.dbid = sd.dbid
    WHERE
        sxl.srvid IS NULL
        AND sxl.name = @login_name -- only the requested login
END
OPEN login_curs
FETCH NEXT FROM login_curs
INTO
    @SID_varbinary,
    @name,
    @xstatus,
    @binpwd,
    @dbname
--If no logins found, exit the procedure.
IF (@@fetch_status = -1)
BEGIN
    PRINT 'No login(s) found.'
    CLOSE login_curs
    DEALLOCATE login_curs
    RETURN -1
END
SELECT @tmpstr = '/* sp_help_revlogin script '
PRINT @tmpstr
SELECT @tmpstr =
    '** Generated ' + CONVERT (varchar, GETDATE()) + ' on ' + @@SERVERNAME + ' */'
PRINT @tmpstr
PRINT ''
PRINT 'DECLARE @pwd sysname'
WHILE (@@fetch_status <> -1)
BEGIN
    IF (@@fetch_status <> -2)
    BEGIN
        PRINT ''
        SET @tmpstr = '-- Login: ' + @name
        PRINT @tmpstr
        IF (@xstatus & 4) = 4
        BEGIN -- NT authenticated account/group
            IF (@xstatus & 1) = 1
            BEGIN -- NT login is denied access
                SET @tmpstr = 'EXEC master..sp_denylogin ''' + @name + ''''
                PRINT @tmpstr
            END
            ELSE
            BEGIN -- NT login has access
                SET @tmpstr = 'EXEC master..sp_grantlogin ''' + @name + ''''
                PRINT @tmpstr
            END
        END
        ELSE
        BEGIN -- SQL Server authentication
            IF (@binpwd IS NOT NULL)
            BEGIN -- Non-null password
                EXEC sp_hexadecimal @binpwd, @txtpwd OUT
                IF (@xstatus & 2048) = 2048
                BEGIN
                    SET @tmpstr = 'SET @pwd = CONVERT (varchar(256), ' + @txtpwd + ')'
                END
                ELSE
                BEGIN
                    SET @tmpstr = 'SET @pwd = CONVERT (varbinary(256), ' + @txtpwd + ')'
                END
                PRINT @tmpstr
                EXEC sp_hexadecimal @SID_varbinary, @SID_string OUT
                SET @tmpstr =
                    'EXEC master..sp_addlogin ''' + @name + ''', @pwd, @sid = ' + @SID_string + ', @encryptopt = '
            END
            ELSE
            BEGIN -- Null password
                EXEC sp_hexadecimal @SID_varbinary, @SID_string OUT
                SET @tmpstr =
                    'EXEC master..sp_addlogin ''' + @name + ''', NULL, @sid = ' + @SID_string + ', @encryptopt = '
            END
            IF (@xstatus & 2048) = 2048
            BEGIN
                -- Login upgraded from 6.5
                SET @tmpstr = @tmpstr + '''skip_encryption_old'''
                PRINT @tmpstr
            END
            ELSE
            BEGIN
                SET @tmpstr = @tmpstr + '''skip_encryption'''
                PRINT @tmpstr
            END
        END
        --Add the default database.
        SET @tmpstr = 'EXEC master..sp_defaultdb ''' + @name + ''',''' + @dbname + ''''
        PRINT @tmpstr
    END
    FETCH NEXT FROM login_curs
    INTO
        @SID_varbinary,
        @name,
        @xstatus,
        @binpwd,
        @dbname
END
CLOSE login_curs
DEALLOCATE login_curs
RETURN 0

GO

SET QUOTED_IDENTIFIER OFF
GO
SET ANSI_NULLS ON
GO
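Once both procedures are installed, a run looks like this (the login name and generated values below are illustrative):

EXEC master..sp_help_revlogin 'devlogin'
-- The output is itself a script, to be run on the target server, along these lines:
-- DECLARE @pwd sysname
-- -- Login: devlogin
-- SET @pwd = CONVERT (varbinary(256), 0x0100...)
-- EXEC master..sp_addlogin 'devlogin', @pwd, @sid = 0x91B1..., @encryptopt = 'skip_encryption'
-- EXEC master..sp_defaultdb 'devlogin','devdb'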

Run sp_help_revlogin from the master database. Copy the results for the user they need, and have them run that script on their personal servers. They should then be able to do a restore.|||Thanks for the procedures -- I think we'll make good use of them.

I'm working with two servers at the moment -- A is Win2k-sp3/SQL2k-RTM, while B (the problem child) is Win2k3-RTM+hotfixes/SQL2k-sp3a. The dumps are generally from A, and we're trying to migrate developers to B. However, both will be needed for the foreseeable future, as both platform combinations are supported. So, one of them will need to take dumps of the other.

To avoid constantly maintaining this, would it be sufficient to simply create a 'dump master' account on each machine? This account would have the same SID on each machine. We can also try to keep each developer's SID consistent across boxes (if/when they use multiple servers).

I have another server box C, which is Win2k-sp4/SQL2k-sp3a+818. It also chokes on this dump from server A, and on a dump from server B. One odd part is that I can't even restore a dump created from one login's database on server B to a different login's database on server B, without getting these permissions issues.

We've fortunately kept the budget supplying us with enough hardware for good, central database servers. Meanwhile, the DBAs have put enough Fear Of God in the developers (and kept enough control of the install CDs) that developers don't have local database servers. This could be worse, and isn't.|||Other than dbcreator role, they are standard logins. They each own their own database, so within that db they're dbo.|||I would try the single 'dump master' account. This is a lot of authority to give someone though. Have you thought about just setting up a directory and table structure to support this. You can put new dumps into a directory and make a table entry. You have a job that sweeps through every 15 minutes or so and does restores.
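If you go the shared-account route, a minimal sketch of creating it (the login name, password, and SID value are illustrative; the point is to pass the same explicit SID on every server):

EXEC master..sp_addlogin 'dumpmaster', 'StrongPwd1',
    @sid = 0x91B17963FC1E0E4C8F33A0AF87D4B135
-- Let it create/restore databases
EXEC master..sp_addsrvrolemember 'dumpmaster', 'dbcreator'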

My biggest concern with the whole operation would be the fact that you don't have SP3a across the board. You are running a server at RTM?? That's insane.|||I'm as nervous as you are about that RTM server... but someone didn't write license or support contracts that exclude out-of-date service levels, so we have to keep an RTM box around. (It's a QA regressions box.)|||It's a freaking free service pack update. Tell your clients to use their brains. That one server can endanger your whole enterprise, unless it's just a standalone box sitting on a shelf with a keyboard and monitor attached to it.|||Many of our customers are ... cautious ... about service packs, and reluctant to apply something that large. Either the bureaucracy is too thick for those who must approve the work to understand its value, or they want uptime on the app (sales guys hate not seeing their commission amounts) and don't want to endanger or delay a payroll source because of an unknown service pack.

Others have been hit with performance issues after applying service packs. (SQL Server 2000, SP1 to SP2)

They also have a habit of going against our wishes by deploying other apps against the same SQL Server instance. This then means they have two (or more!) apps to re-validate after the service pack! More than once the other app vendor "hasn't certified" the latest service pack, so the customer is (or believes it is) stuck.

Besides, if they actually had brains which they could use, they would have deployed our $VERY_LARGE app against Oracle (or even DB2) :^) Sales people seem to believe "one sale is as good as another" ... ughh|||The fun part of the automated restore daemon approach is groking the structure of the dump -- file groups, files, etc. Once the files are known, appropriate MOVE options can be crafted, and a restore can start. (See new thread...)

Friday, March 9, 2012

Logins across Linked Servers

Hi.

I wish to be able to call a proc on a linked server, but I am having difficulty with logins. I have a SQL 2000 server linked to a SQL 2005 server. To be able to call the proc on 2005, a login for the user must exist on both 2000 and 2005, and the login on 2005 must have access to the db and 'grant execute' on the proc.

Is there a way to allow access to the 2005 db without having to perform maintenance of the logins to ensure they match?

I know I could just have one login on 2005 that all users on 2000 are mapped to, but this means new users would also have to be mapped in the future.

Is it possible to call the stored proc from 2000 by impersonating a login that exists on both servers? Something similar to EXECUTE AS LOGIN = 'linklogin' in 2005? This would mean all users on 2000 could impersonate this one login, and after this one login is set up, no further maintenance is required.

Thanks

Ewen

Yes, you can set up a more or less "permanent" login (read more at the link below) but that's normally a very bad idea. What it forces you to do is take a less granular approach to managing your security. The temptation is that you will grant far more permissions than necessary to too many people - which violates the principle of "least rights", where you only grant what is necessary at the time.

It is a bit of a pain to manage two accounts. But if you'll use Windows security, you won't have to track passwords. Once you've set up your object security, you're essentially done - and safe!

http://www.microsoft.com/technet/prodtechnol/sql/2000/books/c08ppcsq.mspx
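If you do accept the tradeoff and map everyone to a single remote login, a minimal sketch on the 2000 side (the linked server name, login, and password are illustrative):

EXEC sp_addlinkedsrvlogin
    @rmtsrvname = 'SQL2005SRV',
    @useself = 'false',
    @locallogin = NULL,       -- NULL = mapping applies to all local logins
    @rmtuser = 'linklogin',
    @rmtpassword = 'StrongPwd1'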

Friday, February 24, 2012

Login Related Question

When setting up two servers to replicate (in the case of a push subscription), at
what point will one computer log on to the other?
I am having some login-related problems and I would like to understand the
technicalities of that aspect.
Thank you,
Samuel
Samuel,
in standard transactional replication, when the distribution agent runs,
it'll have to read commands from the distribution database and execute procs
at the subscriber.
For merge, it happens at synchronization, when the merge agent runs.
For transactional with queued updating subscribers, the queue reader's
execution will initiate a logon.
What error are you seeing, and at what stage?
Cheers,
Paul Ibison SQL Server MVP, www.replicationanswers.com .
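For reference, in a push subscription the subscriber-side credentials are the ones supplied when the push subscription agent is created; a sketch under assumed names (publication, subscriber, login, and password are all illustrative):

EXEC sp_addpushsubscription_agent
    @publication = 'MyPublication',
    @subscriber = 'SUBSERVER',
    @subscriber_db = 'SubscriberDB',
    @subscriber_security_mode = 0,    -- 0 = SQL Server authentication
    @subscriber_login = 'repl_login',
    @subscriber_password = 'StrongPwd1'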

Login question

I have a login that I need to use across multiple servers in a thin
client application. It's for internal use so one login will suffice. I
initially thought I could just make a login on our domain and then add
this user to each server as a db owner of the respective databases.
However, I can't use the login in a connection string (as far as I
know) as I can't pass down the username/password as connection strings
can't take domain logins.
Is there a way around this? Or should I just create the user on each of
the multiple SQL servers, and then if the password changes sync it
across manually?> However, I can't use the login in a connection string (as far as I
> know) as I can't pass down the username/password as connection strings
> can't take domain logins.
When you use Windows logins, the login is validated by Windows; you don't pass credentials in the
connection string - all you do is ask for a trusted (Windows) connection.
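For example, an ODBC connection string that requests a trusted connection might look like this (server and database names are illustrative):

Driver={SQL Server};Server=MYSERVER;Database=MyDb;Trusted_Connection=Yes;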
But I'm sure you don't want all end-users to share the same account in Windows...? So, create a SQL
Server login instead.
Or you might want to read about "application roles", which allow each user to have its own login,
but still have one user (the application role) in the database. This way, you don't lose
traceability regarding who did what.
--
Tibor Karaszi, SQL Server MVP
http://www.karaszi.com/sqlserver/default.asp
http://www.solidqualitylearning.com/
<sloppycode@.gmail.com> wrote in message
news:1145437475.402577.130300@.i40g2000cwc.googlegroups.com...
>I have a login that I need to use across multiple servers in a thin
> client application. It's for internal use so one login will suffice. I
> initially thought I could just make a login on our domain and then add
> this user to each server as a db owner of the respective databases.
> However, I can't use the login in a connection string (as far as I
> know) as I can't pass down the username/password as connection strings
> can't take domain logins.
> Is there a way around this? Or should I just create the user on each of
> the multiple SQL servers, and then if the password changes sync it
> accross manually?
>
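A minimal sketch of the application-role approach mentioned above (the role name, password, and table are illustrative):

-- One-time setup in the database
EXEC sp_addapprole 'MyAppRole', 'AppRolePwd'
GRANT SELECT, INSERT ON dbo.Orders TO MyAppRole
-- At run time, after the user connects with their own login:
EXEC sp_setapprole 'MyAppRole', 'AppRolePwd'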

Monday, February 20, 2012

Login problem

Hello,
I have two servers where the SQL Server DB provides the data for a Web site.
One server is the live Web server, the other one will serve as a backup.
I use an ODBC connection to connect to SQL Server, and there are a number of
asp pages that connect the SQL Server using SQL Server authentication.
Recently I restored the website DB on the backup server with a full backup
file from the live server and now the asp pages can not connect any more.
When I try to configure the ODBC connection I also get a message that
connection failed.
If I switch to Windows authentication in the ODBC manager I can connect and
the DB tests out successfully.
I tried editing the password but when I try to save it I get the message
that the userid already exists.
I also tried deleting the userid and adding it back in, but I get the same
message, that userid already exists.
I have just transferred another backup from the live server and it includes
a backup of the master, model and msdb DB's. I am hoping that restoring all
from this backup will correct the problem, but I am not an experienced DBA,
so I am not sure.
Any suggestions as to how I could fix this problem would be appreciated, our
live server is acting up so we may need to switch.
Ragnar|||Sounds like you have switched the login mode for this server, or the password
of the user you are connecting with.
Try connecting to the server with the SQL login; if that won't work, use
integrated authentication and check whether the login mode is Windows or Mixed
Mode. If it's Mixed Mode, you have to reset your password because it doesn't
match the one you have. If you have already tried that, there can be a
mismatch between the accounts that you imported: a login is created on the
server side, while access to objects is granted at the database level.
If you restored a backup on the backup server, these accounts won't fit
together, so you have to delete the user at the db level and recreate it with
the appropriate security script.
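A minimal sketch of that delete-and-recreate step (database, user, and login names are illustrative; object permissions must be re-granted afterwards):

USE WebSiteDB
EXEC sp_revokedbaccess 'webuser'            -- drop the orphaned database user
EXEC sp_grantdbaccess 'webuser', 'webuser'  -- re-add it, tied to the server login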
If you don't know how, just raise a hand (or write a mail) and I'll help
you.
HTH, Jens Suessmeyer.
http://www.sqlserver2005.de
--
"Ragnar Midtskogen" <ragnar_ng@.newsgroups.com> schrieb im Newsbeitrag
news:O5d0%23g3SFHA.2096@.TK2MSFTNGP14.phx.gbl...
> Hello,
> I have two servers where the SQL Server DB provides the data for a Web
> site.
> One server is the live Web server, the other one will serve as a backup.
> I use an ODBC connection to conect to SQL Server, and there are a number
> of asp pages that connect the SQL Server using SQL Server authentication.
> Recently I restored the website DB on the backup server with a full backup
> file from the live server and now the asp pages can not connect any more.
> When I try to configure the ODBC connection I also get a message that
> connection failed.
> If I switch to Windows authentication in the ODBC manager I can connect
> and the DB tests out successfully.
> I tried editing the password but when I try to save it I get the message
> that the userid already exists.
> I also tried deleting the userid and adding it back in, but I get the same
> message, that userid already exists.
> I have just transferred another backup from the live server and it
> includes a backup of the master, model and msdb DB's. I am hoping that
> restoring all from this backup will correct the problem, but I am not an
> experienced DBA, so I am not sure
> Any suggestions as to how I could fix this problem would be appreciated,
> our live server is acting up so we may need to switch.
> Ragnar
>|||Thank you Jens,
I fixed the problem by adding a new userid with the appropriate privileges,
but I just restored an up-to-date backup and now the new userid doesn't
work.
This clearly has something to do with the restore, but from my understanding
of SQL Server it should not happen, since the userids and the security data
are stored in the master database.
This time I was able to delete and add back the userid, so I am back in
business, but I need to find out why this is happening so I can prevent it.
We will be doing regular restores in order to keep the backup server ready
to take over if the live machine fails.
I guess I could run a script to delete and add back the user after every
restore, but I would rather avoid that.
Ragnar|||You are stuck with orphaned users; you should look here:
http://support.microsoft.com/kb/274188/en-us
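The fix in that article boils down to sp_change_users_login; a minimal sketch (the user name is illustrative):

-- Run in the restored database
EXEC sp_change_users_login 'Report'                            -- list orphaned users
EXEC sp_change_users_login 'Update_One', 'webuser', 'webuser'  -- relink user to login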
HTH, Jens Suessmeyer.
http://www.sqlserver2005.de
--
"Ragnar Midtskogen" <ragnar_ng@.newsgroups.com> schrieb im Newsbeitrag
news:Ow0VLwPTFHA.2768@.tk2msftngp13.phx.gbl...
> Thank you Jens,
> I fixed the problem by adding a new userid with the appropriate
> privileges, but I just restored an up to date backup and now the new
> userid doesn't work.
> This clearly has something to do with the restore, but from my
> understanding of SQL Server it should not happen, since the userids and
> the security data is stored in the Master database.
> This time I was able to delete and add back the userid, so I am back in
> business, but I need to find out why this is happening so I can prevent
> it.
> We will be doing regular restores in order to keep the backup server ready
> to take over if the live machine fails.
> I guess I could run a script to delete and add back the user after every
> restore, but I would rather avoid that.
> Ragnar
>|||Thank you Jens,
You hit the nail on the head!
I did not notice it, but the database username I was having problems with
does not have a login name, but there is a login with that name, so I
thought everything was OK.
And, sure enough, when I run the sp_change_users_login, the user can log in
to the DB again.
Of course, the acid test of the fix will come when I do the next restore.
I am puzzled how this happened though, I did not set up the DB on the backup
server, but I know it was created initially by restoring a backup from the
live server.
And the user could log in until I did a restore with recent data.
Thank you, made my day!
Ragnar|||Hello Jens,
I had a similar problem, and sp_change_users_login fixed most logins,
but for some of the logins the sp_change_users_login procedure returns
the error:
"The login already has an account under a different user name."
Isn't this exactly the situation that sp_change_users_login is supposed
to fix? What are your thoughts?
-Steve Reich
Ragnar Midtskogen wrote:
> *Thank you Jens,
> You hit the nail on the head!
> I did not notice it, but the database username I was having problems
> with
> does not have a login name, but there is a login with that name, so
> I
> thought everything was OK.
> And, sure enough, when I run the sp_change_users_login, the user can
> log in
> to the DB again.
> Of course, the acid test of the fix will come when I do the next
> restore.
> I am puzzled how this happened though, I did not set up the DB on the
> backup
> server, but I know it was created initially by restoring a backup
> from the
> live server.
> And the user could log in until I did a restore with recent data..
> Thank you, made my day!
> Ragnar *