
Friday, March 30, 2012

Long text with blank lines

Hi;
I'm trying to create a sort of contract in RS. I'll get some information from
an employee table, but most of the report is plain fixed text. I'm not sure,
but I think the best way to do it is using only a detail band (I'll need to
print one contract per employee) and use textboxes for the text...
but... I noticed I can't insert blank lines in a textbox (nothing happens
when I press Return) and I'm not sure about my approach...
Any advice?
TIA

On May 30, 7:41 pm, "Willo" <willobe...@.yahoo.com.mx> wrote:
If I understand you correctly, you should concatenate the employee
table info w/the plain-fixed text as part of the stored procedure/
query that sources your report. Also, you could try using a table
control to create spaces if necessary. Hope this helps.
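Enrique's suggestion of building the contract text in the sourcing query could be sketched roughly like this. The table and column names (Employee, FirstName, LastName) are assumptions for illustration; CHAR(13) + CHAR(10) supplies the blank lines that the textbox won't accept directly:

```sql
-- Hypothetical query stitching fixed contract text around employee fields.
-- CHAR(13) + CHAR(10) is a carriage-return/line-feed pair; two in a row
-- produce a blank line inside the textbox.
SELECT
    'EMPLOYMENT CONTRACT'
    + CHAR(13) + CHAR(10) + CHAR(13) + CHAR(10)   -- blank line
    + 'This agreement is made with '
    + FirstName + ' ' + LastName + '.'
    + CHAR(13) + CHAR(10) + CHAR(13) + CHAR(10)
    + 'The remaining fixed clauses of the contract go here...'
    AS ContractText
FROM Employee
```

The detail region then needs only a single textbox bound to the ContractText field.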
Regards,
Enrique Martinez
Sr. Software Consultant

Long text starts on next page

I have a table where a cell gets populated with a lot of data. When the data is
longer than the page, it page-breaks into the next one while leaving a huge
white space on the previous page. How can I make it page-break in the middle
of the text when it needs to, instead of the whole cell getting moved to the
next page?

A textbox has an implicit keep-together flag set on it, which dictates the
behavior you are seeing. It is not currently possible to modify this
behavior.
Workarounds include breaking the text into multiple textboxes.
--
This posting is provided "AS IS" with no warranties, and confers no rights.
"Marvin" <Marvin@.discussions.microsoft.com> wrote in message
news:22A08267-61CB-4EBF-A9A4-C3351FB61F4A@.microsoft.com...
|||I have used the workaround you suggested, but many of the reports we use pull data
dynamically from a database, and it is often 60-100 lines long. We get a lot of
page breaks between table and group header rows, which not only makes the report
messy but wastes a lot of paper. We also tried using a list with several
textboxes in it and set the KeepTogether property of the list to true, but now
the top textbox gets separated from the one below it if the one below it
needs to span pages. Any other ideas? Thanks.
"Donovan Smith [MSFT]" wrote:

Long text column not accepting

Hi DBAs,
I am very new to SQL Server. I created a table where one column is varchar(8000), but when I try to insert a value from Enterprise Manager this column can't accept a long text value. I counted that its capacity is 1012 characters. I have tried a lot but don't know how to solve this. I really need help from you. Please help.

Thanks in advance

Rajat Raychaudhuri
|||I am under pressure, please advise me soon.
|||Use Query Analyzer.
|||Have you tried using an INSERT statement from QA? What's the total length of your record?
|||Thanx friends. From Query Analyzer it worked.
Rajat
|||If you are new to this, one quick note: SQL Server's row size is limited to right around 8k bytes for physical data stored in the database. So, if the table has other columns in it, you might be in for trouble. Switching a very large column to TEXT may save you some future hassle. Of course, it could cause you a little hassle right now. :)
|||I respect people that appreciate the value of TEXT/IMAGE datatypes ;)
|||A little hassle now? That's not near as much hassle as it could cause him down the road.

RANT OFF

Go ahead... your turn now.
|||TEXT/NTEXT/IMAGE columns definitely have their place. They can do things that are otherwise impossible in SQL Server. I don't see them as substitutes for VARCHAR or NVARCHAR any more than I see DATETIME as a substitute... Under certain rigorous conditions any of them might work, but as a generic substitute they are poor choices.

-PatP
|||Switching a very large column to Text


-----------
http://www.3-ibm.com
|||Is this a question? And how large is "very large"?
|||I think he means it's veeeeeeeeeeeeeeeery large.
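A minimal sketch of the fix that worked here: running the INSERT from Query Analyzer instead of typing into the Enterprise Manager grid. The table and column names are hypothetical; REPLICATE builds a test value well past the ~1012 characters the grid accepted:

```sql
-- Hypothetical table with a wide varchar column
CREATE TABLE dbo.LongTextTest (
    Id       int           NOT NULL PRIMARY KEY,
    LongText varchar(8000) NOT NULL
)

-- Insert a 5000-character value; this works from Query Analyzer
-- even where the Enterprise Manager grid truncates long input.
INSERT INTO dbo.LongTextTest (Id, LongText)
VALUES (1, REPLICATE('x', 5000))

-- Confirm the stored length
SELECT Id, DATALENGTH(LongText) AS Bytes FROM dbo.LongTextTest
```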

Long table locks

Hi
There is an application that runs on SQL Server.
The application selects/updates a few tables frequently.
Once there is even a select on one of these tables, it blocks other users,
sometimes for a very long time.
Is there anything that can be done to reduce this?
The table has 18,000 rows and does not seem to have an index.
I thought indexing might help, but 18,000 rows without an index is
no reason for 30 minutes of lock time.
I will appreciate your help as usual.
Vince

Vincento Harris (wumutek@.yahoo.com) writes:
> There is an application that runs on sql server.
> The application selects/updates some few tables frequently
> Once there is even a select on this table .It blocks other users
> sometimes for very long.
> Is there anything that can be done to reduce this?
> The table has 18000 rows and does not seem to have an index
> I thought indexing might help but 18000 rows without an index is
> no reason for 30 minutes of lock time.

Yes, something can probably be done, but the information you've provided
is not sufficient to say what should be done.

The key to nail down performance problems is information. Exactly
what is taking long time? And are there any transactions that are
open for all this time? You might get some useful information by
tracing this process with the Profiler.

Is this an in-house app, or something you've bought from a vendor?

--
Erland Sommarskog, SQL Server MVP, sommar@.algonet.se

Books Online for SQL Server SP3 at
http://www.microsoft.com/sql/techin.../2000/books.asp
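As a starting point for the tracing Erland suggests, here is a rough sketch of how to look for the blocking session using the SQL 2000-era system tables and procedures (the spid 51 is a placeholder):

```sql
-- Sessions that are currently blocked, and which spid is blocking them
SELECT spid, blocked, waittime, lastwaittype, cmd
FROM master.dbo.sysprocesses
WHERE blocked <> 0

-- Locks held by a suspect session (replace 51 with the blocking spid)
EXEC sp_lock 51

-- The last statement that session submitted
DBCC INPUTBUFFER (51)
```

If one spid shows up repeatedly in the blocked column's chain, its input buffer usually points at the transaction that is being held open.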

Wednesday, March 28, 2012

Long Running Query - Only from .NET

I have a pretty complex query that aggregates lots of data and inserts multiple rows of that data into a reporting table. When I call this SPROC from SQL Server Management Studio, it executes in under 3 seconds. When I try to execute the same SPROC using .NET's SqlCommand object the query runs indefinitely until the CommandTimeout is reached.

Why would this SPROC behave differently with the same inputs, but being called from .NET?


Thanks for your help!

Turn on debugging and make sure the query isn't causing an exception.

Eric

|||

I am debugging in .NET; I don't get an exception until the timeout is reached.

|||

I fixed it! It was the return method. I was using 'RETURN @@ROWCOUNT' rather than 'SELECT @@ROWCOUNT'.

Thanks for your help
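The difference the poster hit can be sketched as follows: RETURN only sets the procedure's integer return code, which a SqlCommand sees solely through a ReturnValue parameter, while SELECT produces a result set that ExecuteReader or ExecuteScalar can consume. The procedures below are hypothetical illustrations, not the poster's actual code:

```sql
-- Hands back the row count only as a return code; a caller reading a
-- result set (e.g. ExecuteScalar) gets nothing.
CREATE PROCEDURE dbo.LoadReport_ReturnCode
AS
    INSERT INTO dbo.ReportTable (Val) SELECT Val FROM dbo.SourceTable
    RETURN @@ROWCOUNT
GO

-- Hands back the row count as a one-row result set, which .NET reads directly.
CREATE PROCEDURE dbo.LoadReport_ResultSet
AS
    INSERT INTO dbo.ReportTable (Val) SELECT Val FROM dbo.SourceTable
    SELECT @@ROWCOUNT AS RowsInserted
GO
```

Note that @@ROWCOUNT must be read in the statement immediately following the INSERT, as both versions do.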

Long running Query

Hi,

I'm trying to optimize a long-running (several hours) query. This query is a cross join on two tables. Table 1 has 3 fields: ROWID, Lat and Long. Table 2 has Name, Addr1, Addr2, City, State, Zip, Lat, Long.

Both tables have LatRad (Lat in radians) and LonRad (Lon in radians). Sin and Cos values of Lat and Lon are calculated and stored, to be used in the distance formula.

What I'm trying to do here is find the nearest dealer (Table 2) for each of Table 1's rows. The Select statement takes a long time to execute, as there are about 19 million records in table 1 and 1250 rows in table 2. I ran into log issues (filling the transaction log), so I'm currently using table variables and split the process up into 100,000 records at a time. I cross join and calculate the distance (@DistValues), then find the minimum distance (@MinDistance) for each rowid, and then the result is inserted into another table (ResultTable).

My Code looks like this:

Declare @DistValues table (DataSeqno varchar(10), Lat2 numeric(20,6), Lon2 numeric(20,6), StoreNo varchar(60), Lat1 numeric(20,6), Long1 numeric(20,6), Distance numeric(20,15))

Declare @MinDistance table (DataSeqno varchar(10) primary key, distance numeric(20,15))

Insert into @DistValues
Select DataSeqno, T.Lat Lat2, T.Lon Lon2, S.StoreNo, S.Lat Lat1, S.Long Long1,
distance = 3963.1 * Case when cast(S.SinLat * T.SinLat + S.CosLat * T.CosLat * cos(T.LonRad - S.LonRad) as numeric(20,15)) not between -1.0 and 1.0 then 0.0 else acos(cast(S.SinLat * T.SinLat + S.CosLat * T.CosLat * cos(T.LonRad - S.LonRad) as numeric(20,15))) end
from dbo.TopNForProcess T, dbo.Table2 S
where IsNull(T.Lat, 0) <> 0 and IsNull(T.Lon, 0) <> 0

Insert into @MinDistance
Select DataSeqno, Min(distance) From @DistValues Group by DataSeqno

Insert into ResultTable (DataSeqno, Lat2, Lon2, StoreNo, Lat1, Long1, distance)
Select D.DataSeqno, D.Lat2, D.Lon2, D.StoreNo, D.Lat1, D.Long1, M.distance from @DistValues D Inner Join @MinDistance M on D.DataSeqno = M.DataSeqno and D.Distance = M.Distance

I created a view called TopNForProcess, which looks like this. This cut down the processing time compared to when I had it as a subquery.

SELECT TOP (100000) DataSeqno, Lat, Lon, LatRad, LonRad, SinLat, CosLat, SinLon, CosLon FROM Table1 WHERE (DataSeqno NOT IN (SELECT DataSeqno FROM dbo.ResultTable)) AND (ISNULL(Lat, 0) <> 0) AND (ISNULL(Lon, 0) <> 0)

I have indexes on Table1 (one on ROWID and another on Lat and Lon) and on Table2 (Lat and Long).

Is there any way this can be optimized/improved? This is already in a stored procedure.

Thanks in advance.

Hello

I just wrote a longer post, but I deleted it by pressing the wrong mouse key :(

Anyway... tips to optimize, short edition now...

1. Avoid a FULL cross join... If someone is 180° from your store, then I BET he won't be a best match. Limit the stores to the max range you expect, plus some room for error (depending on how well you can guess). Example:

where Store.Long between cust.Long - 10 and cust.Long + 10

This is simplified, since I don't take the rollovers at 180° into account (longitude goes from -180 to +180, right? I always mix them up... even in my native language).

If you really want to make it optimal, you could analyse the store table first and determine the "closest" long and lat for each store... so you don't use the "worst case" range for all stores... I am sure in NYC there will be a shorter range than in "Nowhere", TX.

2. Avoid a function on the LEFT side of the where clause! So NO whatever(store.long) = Bla Bla... This will prohibit the use of an index...

3. Do you need ALL data NOW, or only some data on request (like for 1 customer)? If it's only some on request, consider using a computed column... but this column can't be indexed, since the function would point to an external resource. Another thing is how often stores open/close. If it's often, then the function becomes more attractive...
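Point 1 above, pre-filtering with a latitude/longitude window before computing exact distances, might look roughly like this against the poster's tables; the @maxDeg parameter and the plain degree window are illustrative assumptions, and a real version still has to handle the ±180° longitude wrap:

```sql
-- Only pair each row with stores inside a crude degree window,
-- then compute the exact great-circle distance for the survivors.
Declare @maxDeg numeric(20,6)
Set @maxDeg = 1.0  -- widest plausible search radius, in degrees

Select T.DataSeqno, S.StoreNo
from dbo.TopNForProcess T, dbo.Table2 S
where S.Lat  between T.Lat - @maxDeg and T.Lat + @maxDeg
  and S.Long between T.Lon - @maxDeg and T.Lon + @maxDeg
  and IsNull(T.Lat, 0) <> 0 and IsNull(T.Lon, 0) <> 0
```

With an index on Table2 (Lat, Long), this window turns the 19M x 1250 cross join into a handful of candidate stores per row.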

|||Use the SQL Server 2005 CLR integration|||

Thanks. I do have a parameter that would limit the range; I used only 1 degree, and even then it took hours. All the customers had to be calculated at one time, and we're planning to schedule this as a job.

|||

joeydj wrote:

Use the Sql server 2005 CLR integration

We have yet to try this. We will surely try this one. Thanks.

|||

Hello

Try to precalculate a "cover area" for each store. Define this area by 4 numbers: max and min long/lat. Store this information in each store record.

This cover area needs only the other stores to calculate. Then define 2 new indexes on those fields, each consisting of the max and min of the cover area (for long and lat respectively).

Now you will be able to reduce the number of hits to a few per customer. From those you can calculate the exact distance and extract the closest one. That's still a lot of processing of geometry functions, but it should be a lot faster. You might also think about simplifying the formula you use to a^2 + b^2 = c^2 (there is also no need to calculate the root of c in order to get the closest value of c).

Yes... it's kind of cheating, but the error you would get here is ~0... It's at least so small that a customer in the same region won't be able to tell the difference ;)

|||Thanks. We decided to follow this route, as we just found out that our current legacy program was doing this. Thanks for your input.

Long query rollback - can I rename the table?

I have a long query which I have set off and would like to stop, and
rename one of the tables used.

My query is due to my lack of understanding of the underlying
structure of MSSQL-Server...

So say an update updates TABLE_A and I stop it, whilst this
transaction is rolling back I attempt to rename TABLE_A to TABLE_A_OLD
and rename a different table to become TABLE_A. I am assuming that
the rollback actions will use the object reference of TABLE_A_OLD and
continue to rollback the effects on the correct table and not corrupt
'new' TABLE_A... or will it not allow me to rename TABLE_A until the
rollback is complete?

Thanks for any help!

Steve
I would expect that the rename will be blocked until the rollback completes.
The rename requires a schema modification lock, which is incompatible with
the locks held on TABLE_A by the open transaction.
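Dan's point can be sketched as two connections; the column name is a hypothetical stand-in:

```sql
-- Connection 1: a long update on TABLE_A, then the batch is cancelled,
-- so SQL Server begins rolling the transaction back while still holding
-- the exclusive locks it acquired.
BEGIN TRAN
UPDATE TABLE_A SET SomeCol = SomeCol  -- hypothetical column
-- (cancel here; the rollback runs, and the locks stay until it finishes)

-- Connection 2: the rename needs a schema modification lock on TABLE_A,
-- which conflicts with those locks, so it simply waits until the
-- rollback completes rather than corrupting anything.
EXEC sp_rename 'TABLE_A', 'TABLE_A_OLD'
```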

--
Hope this helps.

Dan Guzman
SQL Server MVP

"cheesey_toastie" <bletchley_scum@.yahoo.co.ukwrote in message
news:1173354816.121044.292670@.h3g2000cwc.googlegro ups.com...


Long query block insert?

Hi,
I have a question. I have a select query that I use to bcp out one of my MS
SQL 2000 DB tables (about 30 million records), and it takes more than 3 hours
to finish (about 13 million records). It happens that while this bcp process is
running, no record can be inserted into this table.
Shouldn't the select release its locks after it reads past the data?
Thanks!
KH Tan

Hi,
Generally for BCP OUT the table will not be locked and user can insert data.
Can you try using BCP OUT with -b (Batch size) option and see what happens.
Thanks
Hari
SQL Server MVP
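If trading blocking for dirty reads is acceptable, another option (my own suggestion, not something Hari proposed) is to export through a view that reads with the NOLOCK hint, so the long scan takes no shared locks that could block inserts:

```sql
-- Hypothetical export source: reads without taking shared locks.
-- Caveat: NOLOCK permits dirty reads, so rows being modified during
-- the export may appear, disappear, or be inconsistent.
CREATE VIEW dbo.BigTableExport
AS
SELECT Col1, Col2, Col3          -- hypothetical columns
FROM dbo.BigTable WITH (NOLOCK)
GO
-- bcp can then export "SELECT * FROM dbo.BigTableExport" via its queryout option.
```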
"KH TAN" <KH TAN@.discussions.microsoft.com> wrote in message
news:A4F01220-E1A3-4329-B059-6807D824A7B4@.microsoft.com...
|||Thanks!
I notice that not only the bcp out has the problem; some other long-running
queries (select statements) do the same. I did a DBCC DBREINDEX on
the table and reran the query, and everything seems to be coming back to normal... weird!
Regards,
KH Tan.
"Hari Prasad" wrote:

> Hi,
> Generally for BCP OUT the table will not be locked and user can insert data.
> Can you try using BCP OUT with -b (Batch size) option and see what happens.
> Thanks
> Hari
> SQL Server MVP

|||Have you used sp_lock or select from sysprocesses to see where the blocking
might be?
--
Wei Xiao [MSFT]
SQL Server Storage Engine Development
http://blogs.msdn.com/weix
This posting is provided "AS IS" with no warranties, and confers no rights.
"KH TAN" <KHTAN@.discussions.microsoft.com> wrote in message
news:78B09111-07E0-4380-A8A8-45FA56FD93CC@.microsoft.com...


Monday, March 26, 2012

Long processing time

Hi all,
I have a data warehouse which contains a historical table holding close to 100
million records... this table is growing exponentially. Every time I process
my cube, data has to be read from this table, and it takes very long
before data is returned (between 2 and 3 hours).
Is there any way for me to speed this up?
Troubled,
Nestor

If you have the Enterprise edition of SQL Server you should look into
partitioning. With partitions you can process things in parallel and you
can also possibly only re-process recent data.
Depending on your situation you may also be able to look into
incremental processing, but this only works if your dimensions do not
change (you can add new members, but moving existing members under new
parents will break incremental processing)
You should also make sure your schema is optimized so that you are
minimizing the joins that have to be done against the relational data
source.
Regards
Darren Gosbell [MCSD]
Blog: http://www.geekswithblogs.net/darrengosbell
In article <edXVSA44FHA.1416@.TK2MSFTNGP09.phx.gbl>, n3570r@.yahoo.com
says...
> Hi all,
> I have a datawarehouse which contain a historical table holding close to 100
> million records... this table is growing exponentially. Everytime I process
> my cube, data has to be read from this table and it is taking very long
> before data is returned (between 2-3 hours)
> Is there anyway for me to speed this up?
> Troubled,
> Nestor
>
>
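On SQL Server 2000 Enterprise, Darren's partitioning suggestion could take the shape of a local partitioned view over per-year history tables; the table names, the SaleYear column, and the CHECK constraints below are illustrative assumptions:

```sql
-- Hypothetical yearly slices of the 100M-row history table.
-- The CHECK constraints let the optimizer touch only the relevant slice,
-- and each Analysis Services partition can source from one slice so the
-- slices process in parallel.
CREATE TABLE dbo.History2004 (
    FactId   int   NOT NULL,
    SaleYear int   NOT NULL CHECK (SaleYear = 2004),
    Amount   money NOT NULL,
    PRIMARY KEY (SaleYear, FactId)
)
CREATE TABLE dbo.History2005 (
    FactId   int   NOT NULL,
    SaleYear int   NOT NULL CHECK (SaleYear = 2005),
    Amount   money NOT NULL,
    PRIMARY KEY (SaleYear, FactId)
)
GO
CREATE VIEW dbo.HistoryAll
AS
SELECT FactId, SaleYear, Amount FROM dbo.History2004
UNION ALL
SELECT FactId, SaleYear, Amount FROM dbo.History2005
```

Reprocessing then touches only the current year's slice, which is also what makes incremental cube processing practical.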


Long numbers in SQL Server

Hi,

I'm designing a table in SQL Server and I'm trying to create a column for a number that is 10 characters long. The only datatype that comes near the 10 characters is a BIGINT, which is 8 characters long. But this is still too short.
Is it possible to store a number of 10 characters, or is the only way to store this number to store it as a char?

Joachim

|||Use the money data type, especially if you are going to make calculations with it. It's the only numeric datatype that really works, and supports very large numbers.

IONUT
|||THNX
|||That's not really true... the datatype "float" definitely has the largest number support and supports numbers with 309 (!!!) digits. According to BOL, float is "...a floating-point number from -1.79E+308 to 1.79E+308" ...which is rather large...
|||...oh... and decimal would also do the trick...
|||What do you mean by 10 characters? The BIGINT datatype is 8 bytes long, which should give it a range from -9,223,372,036,854,775,808 to 9,223,372,036,854,775,807, which is 19 digits. A more efficient use would be Decimal(10,0), which I think uses about 5 bytes.

I don't have access to BOL at this site to verify this, but look in there for datatypes. Numerics use fewer bytes than strings to hold numbers.

Richard
|||I don't want to be rude, but Frettmaestro, I didn't say that money accepts the largest number values; I only said that if you want to make calculations with that field value, then the only solution is the money datatype, not real and not float. Try this one in Query Analyzer:

declare @val float
set @val=920
select @val,(@val/100) as result

SURPRISE: result = 9.19999999999999993. It really works, doesn't it?

Ha Ha Ha

IONUT

PS
You can be a member, not a junior like me or JRECKERS, but please, when you post something here, try it first to be damn sure about it.
|||JReckers, when we talk about numbers and the best datatype to store those numbers, we talk in terms of integer and real values, not number of characters.

If you have integer values then we further talk about range, when you have real values we talk about range but also precision and scale.

If your numbers range from -9,999,999,999 to 9,999,999,999, your only choice is to store that number in a bigint data type, whose range is -2^63 (-9,223,372,036,854,775,808) through 2^63-1 (9,223,372,036,854,775,807). The Integer data type only covers -2^31 (-2,147,483,648) through 2^31 - 1 (2,147,483,647). Of course I am assuming that you WANT to store the data as a number rather than a string of numbers (varchar).

As for real numbers, you are looking at a decimal or a float data type. What is the difference? A decimal is fixed precision and scale, whereas a float is an approximation; not all values in the data type range can be precisely represented, BUT, as the name implies, the decimal point can float.
|||ionut calin, you may want to heed your own words! A money data type isn't the only data type that will yield a non-approximation!
|||declare @val money
set @val=920
select @val,(@val/1000000*1000000) as result
GO
declare @val float
set @val=920
select @val,(@val/1000000*1000000) as result

Free advice is seldom cheap.

Originally posted by ionut calin
I don't want to be rude, but Frettmaestro ...
declare @val float
set @val=920
select @val,(@val/100) as result

SURPRISE result=9.19999999999999993. It really works doesn't it?

Ha Ha Ha

IONUT
|||Wow ispaleny, that last one was really cool. Did you try it, ionut? You said that "It's the only numeric datatype that really works" and I wasn't trying to mock you or anything, just telling you that this wasn't the case. No need to be all cocky and "all that" even if I didn't provide the right answer... I'm just trying to help people here, and if you don't appreciate that then that's your problem. And I _really_ couldn't care less what your or mine or anybody else's member status is.
|||So, if I am correct, you are saying that when I create a table with a column of datatype INT, the length of this datatype is the number of bytes used to store the number, and not the number of characters.

So, when I want to store a number of 10 digits (min. 0 and max. 9,999,999,999), I have to use a BIGINT, because an INT goes up to 2,147,483,647, which is not enough.

Am I correct?

Joachim

|||DataType    | Bytes | Digits
------------+-------+-------
int         |   4   | 9 (10)
numeric(9)  |   5   | 9
bigint      |   8   | 18 (19)
numeric(10) |   9   | 10
char(9)     |   9   | 9
char(10)    |  10   | 10
nchar(9)    |  18   | 9
nchar(10)   |  20   | 10

If you really need 10 digits, with bigint you get +13% performance in comparison with numeric(10)
and at least +25% in comparison with char(10). Compared with 9 digits, the performance is -50% !!!!!!!

Good luck!
|||For the original question: if you only want to store the number (integer) and never make calculations based on that number, then bigint is a good solution.

Now, because I see there are very upset persons around here:

I had tried it, and I've also tried this one:

declare @val as money
set @val=920
select @val,(@val*1000000/1000000) as result

It works, doesn't it?

The float datatype sucks (or real for that matter), because for:

declare @val as float
set @val = 920
select @val, (@val/100) as result

you really have no workaround to get a correct result.

The money datatype is designed to work with four decimal places, and in this range it works correctly. The float datatype may be bigger, but it doesn't work correctly even with small numbers.
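The difference is easy to see outside SQL Server too. The sketch below is in Python, whose float is the same IEEE-754 double as SQL Server's float; Decimal stands in for a fixed-point type like money (an illustration only, not SQL Server itself):

```python
from decimal import Decimal

# float: 920/100 is stored as the nearest binary fraction, not exactly 9.2
f = 920.0 / 100
print(Decimal(f))   # 9.19999999999999928945726423... (the value actually stored)

# fixed point: exact decimal arithmetic, four decimal places like money
m = (Decimal("920") / Decimal("100")).quantize(Decimal("0.0001"))
print(m)            # 9.2000
```

The binary representation is why "correct-looking" float results depend on whether the rounding errors happen to cancel.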

The last example with @Val/1000000*1000000 works with the float datatype only because the errors happen to cancel out. So the example is good only as a "joke", nothing more. I advise everyone to try, with @val declared as float:

@val/1000
@val/10000
...
@val*0.001
@val*0.0001

and now try:

@val/10000000 -> Surprise, it works. Why? No one knows...

All the numbers above are within the four-decimal range of the money datatype (more than enough for banks, for example).

So, Frettmaestro, don't be so happy, because for the so-called "errors" with the money datatype there is a logical explanation; but for float?

For the float datatype I can't find one. Maybe you will find one and share it with us, will you?

IONUT

PS!
Now I really wanted to be rude.|||Why do you _want_ to be rude? As I said way up there, I didn't try to mock you with my initial post and I'm still not trying (or wanting) to be rude. You have to admit that I wasn't wrong when I said that "It's the only numeric datatype that really works" isn't really true, but recommending float probably wasn't the best solution in this case. I recommended using decimal in my second post, which you haven't taken into account at all. I'm positive that money will do the trick in this case, as will bigint, decimal, numeric and actually float as well (the real issue had nothing to do with dividing any numbers, just storing them).|||Who wants to use this thread to talk more about approximations and good practices for preventing them?
I am interested.|||Originally posted by ispaleny
Who wants to use this thread to talk more about approximations and good practices for preventing them?
I am interested.

No problem.

Joachim|||I would be happy to contribute my .0200 cents worth but I am more interested in knowing if JReckers got his question answered.

Maybe a new thread would be better?|||Originally posted by Paul Young
I would be happy to contribute my .0200 cents worth but I am more interested in knowing if JReckers got his question answered.

Maybe a new thread would be better?

I have my question answered for the most part, Paul Young. I have one question left which I would like to have answered.

The question is:
I thought that the 'length' option in SQL Server was to set the number of digits (and, with a char datatype, the number of characters) that can be used to store the data, but it seems to be the number of bytes the data is stored in. Please correct me if I'm wrong.

Now my question is: why did Microsoft do this? I've worked with other databases before (Sybase and MySQL), and in those databases it is possible to set the length of the field (the maximum number of characters the data can use). At least I always thought it was like this.

Joachim|||Microsoft & Sybase are exactly the same on this, there is no difference.

When you specify length you are stating the number of digits (and with a char datatype the number of characters) that can be stored, and also the number of bytes used to store the data; they are directly related.

I think in terms of a range when I specify length: for char datatypes I can store NULL or 0 to n characters; for an integer, the range is fixed; for a decimal I can adjust the range to suit my needs and not take up more space than needed to store my data.

Are we just talking past each other?|||Originally posted by Paul Young
Microsoft & Sybase are exactly the same on this, there is no difference.

> When you specify length you are stating the number of digits (and with a char datatype the number of characters) that can be stored, and also the number of bytes used to store the data; they are directly related.

> I think in terms of a range when I specify length: for char datatypes I can store NULL or 0 to n characters; for an integer, the range is fixed; for a decimal I can adjust the range to suit my needs and not take up more space than needed to store my data.

Are we just talking past each other?

;) Yes I think so.

Thanx anyway. You have been a great help. I think I got it now.

Joachim|||Maybe my first reply on this subject was a little too ... unpolite. If it was I'm sorry. Then I got angry, but now I think it's time to stop all this fight, 'cause it's going nowhere. I'm sorry.

I want to explain why I've got so angry about money datatype:

One year ago I was working for the Academy of Economic Studies in Bucharest, where I was DBA and programmer for a database which dealt with the activity of students (grades, collecting taxes and so on). One day a huge problem arose: a secretary told me that the report in which the grade averages of students were listed was wrong, because one average was listed as 9.19 and not 9.20 (and from here, a lot of problems). I spent two days in front of my computer to figure out what the problem was. As you already suspect, in the table where the grades were stored, the field Nota (it means grade in Romanian) was declared real.

Because of that real, the result was 9.19999999..., which truncated after the second decimal (that is the regulation in Romania for grade averages) gives 9.19 instead of 9.20.

The fun part was that if the average was, for instance, 9.30, then the result was good, because the long number format was 9.3000000000001... so, I hate datatypes.

So, I hope you understand why I don't want to hear about the real or float datatypes.
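The grade bug described above is straightforward to reproduce with IEEE-754 doubles, which is what SQL Server's float/real and Python's float both are. A hedged sketch with made-up grades, not the original system:

```python
import math
from decimal import Decimal, ROUND_DOWN

grades = [9, 10, 9, 9, 9]   # hypothetical grades whose average is exactly 9.2

# real/float path: 9.2 cannot be stored exactly in binary floating point,
# so truncating after the second decimal yields 9.19 instead of 9.20
avg_f = sum(grades) / len(grades)
truncated_f = math.floor(avg_f * 100) / 100
print(truncated_f)          # 9.19

# exact decimal path: truncation gives the expected 9.20
avg_d = Decimal(sum(grades)) / Decimal(len(grades))
truncated_d = avg_d.quantize(Decimal("0.01"), rounding=ROUND_DOWN)
print(truncated_d)          # 9.20
```

The 9.30 case mentioned above came out right only because the stored binary value happened to be slightly above 9.30 rather than slightly below.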

IONUT

PS

This problem seems to arise in MS Access as well. The datatype names there are Double for float, Single for real, and Currency for money.

long insert time

I have a table with about half a million records.
Select queries run fine.
This test query I ran in QA took over 1 minute to run.
insert into tradeData values (0, '5/23/04', '20:00:00', 0, 0.00)
What are some possible causes?
Two possible causes are blocking and insert trigger(s) on the table.
--
Tibor Karaszi, SQL Server MVP
http://www.karaszi.com/sqlserver/default.asp
http://www.solidqualitylearning.com/
"Won Lee" <noemail@.nospam.com> wrote in message news:OpY%23$dYQEHA.2876@.TK2MSFTNGP09.phx.gbl...
> I have a table with about half a million records.
> Select queries run fine.
> This test query I ran in QA took over 1 minute to run.
> insert into tradeData values (0, '5/23/04', '20:00:00', 0, 0.00)
> What are some possible causes?
>|||Tibor Karaszi wrote:
> Two possible causes are blocking and insert trigger(s) on the table.
>
I don't have any triggers. Funny thing, I re-ran the query about 5
times. The first 4 times it took over a minute to perform. The 5th
time, it took 2 seconds. I performed it again and it only took 2 seconds.
Any idea for the change in performance?|||Blocking is still a possible reason.
--
Tibor Karaszi, SQL Server MVP
http://www.karaszi.com/sqlserver/default.asp
http://www.solidqualitylearning.com/
"Won Lee" <noemail@.nospam.com> wrote in message news:%237B4JrYQEHA.1348@.TK2MSFTNGP12.phx.gbl...
> Tibor Karaszi wrote:
> > Two possible causes are blocking and insert trigger(s) on the table.
> >
> I don't have any triggers. Funny thing, I re-ran the query about 5
> times. The first 4 times it took over a minute to perform. The 5th
> time, it took 2 seconds. I performed it again and it only took 2 seconds.
> Any idea for the change in performance?
>|||Another reason could be an autogrow kicking in...
--
Tibor Karaszi, SQL Server MVP
http://www.karaszi.com/sqlserver/default.asp
http://www.solidqualitylearning.com/
"Tibor Karaszi" <tibor_please.no.email_karaszi@.hotmail.nomail.com> wrote in message
news:u5gzVvYQEHA.3596@.tk2msftngp13.phx.gbl...
> Blocking is still a possible reason.
> --
> Tibor Karaszi, SQL Server MVP
> http://www.karaszi.com/sqlserver/default.asp
> http://www.solidqualitylearning.com/
>
> "Won Lee" <noemail@.nospam.com> wrote in message news:%237B4JrYQEHA.1348@.TK2MSFTNGP12.phx.gbl...
> > Tibor Karaszi wrote:
> >
> > > Two possible causes are blocking and insert trigger(s) on the table.
> > >
> >
> > I don't have any triggers. Funny thing, I re-ran the query about 5
> > times. The first 4 times it took over a minute to perform. The 5th
> > time, it took 2 seconds. I performed it again and it only took 2 seconds.
> >
> > Any idea for the change in performance?
> >
>|||Hi,
As Tibor pointed out, this difference in execution time while inserting a record
will be due to blocking. To check for blocking:
While you execute the insert statement, if it is delayed, open a new Query
Analyzer window and execute the SP_WHO command.
In the result, look for the column BLK. In the normal case all the values
will be "0". If there is a block, a SPID will be displayed in the BLK field.
If blocked, find out what the blocking user (the SPID shown in BLK) is doing by
using the command below:
DBCC INPUTBUFFER(SPID) -- Replace SPID with the blocking SPID.
Thanks
Hari
MCDBA
"Won Lee" <noemail@.nospam.com> wrote in message
news:OpY#$dYQEHA.2876@.TK2MSFTNGP09.phx.gbl...
> I have a table with about half a million records.
> Select queries run fine.
> This test query I ran in QA took over 1 minute to run.
> insert into tradeData values (0, '5/23/04', '20:00:00', 0, 0.00)
> What are some possible causes?
>|||Tibor Karaszi wrote:
> Another reason could be an autogrow kicking in...
>
OK. Looking up Blocking in Books Online.
Next time I have this issue, I will write down the DB size to see if
the autogrow is part of the problem. Thanks.


long delete

Hi
I have a person table that is taking about 90 seconds to delete a person. The
person table is related to about 20 other tables on the personid, with the
options enforce relationship for replication and enforce relationship for
inserts and updates. I checked the indexes and they're all in place. Does
anyone know what could cause such a long delete?
thanks
p
Do you have indexes on all the foreign key columns (the referencing tables)? Any blocking going on?
Tibor Karaszi, SQL Server MVP
http://www.karaszi.com/sqlserver/default.asp
http://www.solidqualitylearning.com/
"alfred" <alfred@.discussions.microsoft.com> wrote in message
news:00C44F73-7BFA-4CBA-9720-A7BAA5188998@.microsoft.com...
> Hi
> I have person table that is taking about 90 seconds to delete a person. The
> person table is related to about 20 other tables on the personid with the
> option exforce relationship for replication and enforce relationship for
> inserts and updates. I checked the indexes and they're all indexed. Does
> anyone know what could cause such a long delete
> thanks
> p
|||Hi
When it is running, check for any blocking by another process.
Affecting 20 tables does take some time, especially if there are multiple
rows on child tables for each parent row.
Have a look at SQL Profiler whilst it is running and check the IO statistics.
Regards
Mike
"alfred" wrote:

> Hi
> I have person table that is taking about 90 seconds to delete a person. The
> person table is related to about 20 other tables on the personid with the
> option exforce relationship for replication and enforce relationship for
> inserts and updates. I checked the indexes and they're all indexed. Does
> anyone know what could cause such a long delete
> thanks
> p
|||Hi
there are not a lot of child rows in the relationships; judging from the number of rows
affected, it's usually around 2 or 3 rows. I checked the SQL Profiler
trace file. What is taking up the time are these CacheRemove events that
execute SQL statements that are not actually called by the delperson
procedure. How could they be called?
thanks
"Mike Epprecht (SQL MVP)" wrote:
[vbcol=seagreen]
> Hi
> Whne it is running, check for any blocking by another process.
> Affecting 20 tables does take some time, especially if there are multiple
> rows on child tables for each parent row.
> Have a look at SQL Profile whilst it is running and check the IO statistics.
> Regards
> Mike
>
> "alfred" wrote:
|||hi,
I also ran the delete person where personid = 123 statement through the Index Tuning Wizard,
but I keep getting this error:
"there is insufficient memory to perform index analysis."
What could cause that?
thanks
P
"Tibor Karaszi" wrote:

> Do you have indexes on all the foreign key columns (the referencing tables)? Any blocking going on?
> --
> Tibor Karaszi, SQL Server MVP
> http://www.karaszi.com/sqlserver/default.asp
> http://www.solidqualitylearning.com/
>
> "alfred" <alfred@.discussions.microsoft.com> wrote in message
> news:00C44F73-7BFA-4CBA-9720-A7BAA5188998@.microsoft.com...
>
>
I don't know; the only reference for that error I found is:
http://groups.google.com/groups?hl=s...eta%3Dgroup%25
3Dmicrosoft.public.sqlserver.*
Did you verify that the referencing table has indexes on the foreign key columns?
Tibor Karaszi, SQL Server MVP
http://www.karaszi.com/sqlserver/default.asp
http://www.solidqualitylearning.com/
"alfred" <alfred@.discussions.microsoft.com> wrote in message
news:286C0462-19C4-4C5B-95D7-F16F3112890B@.microsoft.com...[vbcol=seagreen]
> hi,
> I also run the delete person where personid = 123 on index tuning wizard,
> but I keep getting this error
> "there is insufficent memory to perform index analysis."
> what could cause that
> thanks
> P
> "Tibor Karaszi" wrote:

Monday, March 19, 2012

Logon Problem - I am an idiot "ID 10 T"

Tried using the WebMatrix code builder to simply bind a web grid to a SQL table.
It works great in WebMatrix, but when I move the code to Visual Studio (VB.NET),
I get a logon error when the code is run.

Any Help is Greatly Appreciated... I am stuck....

Here is the code:

Private Sub Page_Load(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles MyBase.Load

DataGrid1.DataSource = GetCustomers()
DataGrid1.DataBind()

End Sub

Private Function GetCustomers() As System.Data.SqlClient.SqlDataReader

Dim connectionString As String = "server='localhost'; trusted_connection=true; Database='ASPExamples'"
Dim sqlConnection As System.Data.SqlClient.SqlConnection = New System.Data.SqlClient.SqlConnection(connectionString)

Dim queryString As String = "SELECT [Customers].* FROM [Customers]"
Dim sqlCommand As System.Data.SqlClient.SqlCommand = New System.Data.SqlClient.SqlCommand(queryString, sqlConnection)

sqlConnection.Open()
Dim dataReader As System.Data.SqlClient.SqlDataReader = sqlCommand.ExecuteReader(System.Data.CommandBehavior.CloseConnection)

Return dataReader

End Function

End Class|||Problem is solved,

It was SQL Server security: I had to add a user <Machine Name\ASPNET>
and explicitly give permissions for SELECT, UPDATE, etc.

Funny, but the exact same code in WebMatrix did not require this to be done.

If you have any insight on this, please respond.

Monday, March 12, 2012

Logon auditing through trigger on sysprocesses?

I need to do auditing of logons, and selectively (depending on who does
them) of updates to a single table as well.
Furthermore, to make remote access to the audit log easier, I would prefer
to log all this in a table on the same server.
Full c2 auditing would give me far too much information (and I haven't even
checked if it is supported in MSDE). It would probably reduce performance
as well.
Full auditing of all updates to just that one table where I need it, would
give too much information too: one certain application that's running on
the local machine should be left out.
Now I think I could get everything I want done through triggers, but
there's one problem.
The 'sysprocesses' table seems like a good place to detect logons, if I
could create an INSERT trigger that copies username and hostname to a
logging table for each new record.
The problem: the db engine won't let me create a trigger on the
sysprocesses table, it keeps saying "access denied" no matter how I try to
override and change permissions.
I realize that there could be some danger too, if executing a trigger would
create a new process. But will it? I would expect that a trigger is
executed in the context of the process that initiated the change.
hi,
Lucvdv wrote:
>...
> The 'sysprocesses' table seems like a good place to detect logons, if
> I could create an INSERT trigger that copies username and hostname to
> a logging table for each new record.
> The problem: the db engine won't let me create a trigger on the
> sysprocesses table, it keeps saying "access denied" no matter how I
> try to override and change permissions.
>
sysprocesses is a "fake" table, in the sense that it is materialized on demand,
and you can't perform INSERT, UPDATE, or DELETE operations on this kind of
table no matter how much privilege you have on the system; and of course you
cannot write a trigger on it.
as regard auditing, try perhaps having a look at
http://www.windowsitpro.com/Article/...434/26434.html ...
Andrea Montanari (Microsoft MVP - SQL Server)
http://www.asql.biz/DbaMgr.shtmhttp://italy.mvps.org
DbaMgr2k ver 0.15.0 - DbaMgr ver 0.60.0
(my vb6+sql-dmo little try to provide MS MSDE 1.0 and MSDE 2000 a visual
interface)
-- remove DMO to reply
|||Andrea Montanari wrote:
> hi,
> Lucvdv wrote:
> sysprocesses is a "fake" table, in the sense it is materialized on demand,
> and you can't perform INSERT, UPDATE, or DELETE operations on this kind of
> table no matter how much privilege you have on the system, and of course you
> can not write trigger on it..
> as regard auditing, try perhaps having a look at
> http://www.windowsitpro.com/Article/...434/26434.html ...
Thanks.
It wasn't of much help, though.
The article is about finding out whodunnit in a situation where
everyone has the sa password, and suggests giving each user his own
database to connect to.
In my case each user has his own logon credentials, all changes except
those made by a certain account should be logged, and changes made by
any user should be visible to all.
That part shouldn't be too hard to do, the difficulty is knowing when
someone logs on and from where.
|||On Tue, 30 Aug 2005 19:31:30 +0200, Lucvdv <name@.null.net> wrote:

> In my case each user has his own logon credentials, all changes except
> those made by a certain account should be logged, and changes made by
> any user should be visible to all.
> That part shouldn't be too hard to do, the difficulty is knowing when
> someone logs on and from where.
The change logging part is working, with a simple trigger.
There is a performance hit of course, but it doesn't seem to be too much
(I've tested it in a real live database for about an hour, and didn't see
any noticeable system slowdown or abnormally high CPU use).
In case someone wants to use it as a model, this is the 'simple' version
with a test table I started from:
The 'Test' table just has 2 columns Col1 and Col2, both integer, with Col1
as primary key.
The 'TestLog' table must exist before the trigger is added, with columns
(in this case) DT DateTime, spid smallint, Col1 int, oldCol2 int, newCol2
int, user varchar(128), host varchar(128), prog varchar(128).
In the real test I included an identity column, so sequence numbers will be
missing if someone deletes log lines.
CREATE TRIGGER [trgTest] ON dbo.Test
FOR UPDATE
AS
DECLARE @user varchar(128), @host varchar(128), @prog varchar(128)
BEGIN
    SELECT @user = RTRIM([nt_username]), @host = RTRIM([hostname]),
           @prog = RTRIM([program_name])
    FROM [master].[dbo].[sysprocesses] WHERE spid = @@SPID
    IF @prog <> 'My application'
        INSERT INTO [TestLog]
        SELECT GetDate(), @@SPID, [Inserted].[Col1], [Deleted].[Col2],
               [Inserted].[Col2], @user, @host, @prog
        FROM [Inserted] INNER JOIN [Deleted]
            ON [Inserted].[Col1] = [Deleted].[Col1]
        WHERE [Inserted].[Col2] <> [Deleted].[Col2]
END
|||have a look at
http://www.sqlservercentral.com/colu...qlprofiler.asp
Andrea Montanari (Microsoft MVP - SQL Server)
http://www.asql.biz/DbaMgr.shtmhttp://italy.mvps.org
DbaMgr2k ver 0.15.0 - DbaMgr ver 0.60.0
(my vb6+sql-dmo little try to provide MS MSDE 1.0 and MSDE 2000 a visual
interface)
-- remove DMO to reply

Logins Successful or Failed Auditing

Hello,
I have a task to do.
I need to get all the logins (successful or failed) into a table along with
the timestamp which I can keep truncating as and when I want so that it
doesn't grow too big.
I have tried using server side traces to do this. But this has 2
disadvantages which make me NOT want to use it.
1. The output file (.trc file) can be viewed only when the rollover size is
met or when SQL Server is stopped <-- bad mojo.
2. There is no solution for when the output files keep on growing and
rolling over. I have 75+ servers to monitor. I want something that will keep
records for, say, the last week, that's all. Output in a table would be so much better.
Apparently, using a server-side trace you can't have the output go to a table.
Does anyone have any suggestions? I figured I could do something like putting a
trigger on sysprocesses, but this won't give me the failed logins.
Also, server load would be incredible with a trigger shooting off each time.
Any help is appreciated. Oh, and the C2 audit option is out of the question.
That's just too detailed.
Thanks.
Regards,
Kunal
Hello,
The only thing I can suggest is that you can import the data from the .trc
file into a table using the function fn_trace_gettable (see BOL for exact
syntax).
Hope this helps.
"kunalap" wrote:

> Hello,
> I have a task to do.
> I need to get all the logins (successful or failed) into a table along with
> the timestamp which I can keep truncating as and when I want so that it
> doesn't grow too big.
> I have tried using server side traces to do this. But this has 2
> disadvantages which make me NOT want to use it.
> 1. The output file (.trc file) can be viewed only when the rollover size is
> meet or when the SQL Server is stopped <--bad mojo.
> 2. There is no solution for when the output files keep on growing and
> rolling over. I have 75+ servers to monitor. I want something that will keep
> record for say, last 1 week thats all. Output in a table can be so so better.
> Apparently, using server side trace you cant have output in table.
> Does anyone have any suggestions ? I figured I could do something like put a
> trigger on sysprocesses but this won't give me the failed logins.
> Also, server load would be incredible with a trigger shooting off each time.
> Any help is appreciated. Oh, and C2 Audit option is out of the question.
> Thats just too detailed.
> Thanks.
> Regards,
> Kunal
How about writing a small VB.NET app that is scheduled frequently, reads from the event log,
and imports the entries into a table?
Tibor Karaszi, SQL Server MVP
http://www.karaszi.com/sqlserver/default.asp
http://www.solidqualitylearning.com/
Blog: http://solidqualitylearning.com/blogs/tibor/
"kunalap" <kunalap@.discussions.microsoft.com> wrote in message
news:DE02C02D-0C1D-447B-B98C-BE4F63F5A207@.microsoft.com...
> Hello,
> I have a task to do.
> I need to get all the logins (successful or failed) into a table along with
> the timestamp which I can keep truncating as and when I want so that it
> doesn't grow too big.
> I have tried using server side traces to do this. But this has 2
> disadvantages which make me NOT want to use it.
> 1. The output file (.trc file) can be viewed only when the rollover size is
> meet or when the SQL Server is stopped <--bad mojo.
> 2. There is no solution for when the output files keep on growing and
> rolling over. I have 75+ servers to monitor. I want something that will keep
> record for say, last 1 week thats all. Output in a table can be so so better.
> Apparently, using server side trace you cant have output in table.
> Does anyone have any suggestions ? I figured I could do something like put a
> trigger on sysprocesses but this won't give me the failed logins.
> Also, server load would be incredible with a trigger shooting off each time.
> Any help is appreciated. Oh, and C2 Audit option is out of the question.
> Thats just too detailed.
> Thanks.
> Regards,
> Kunal
|||Thanks for the reply Anoop.
I was already aware of the function, but it is only for viewing the data in
Query Analyzer.
I guess I will have to set up another job to delete the older .trc files.
Thanks.
-Kunal.
"Anoop" wrote:
[vbcol=seagreen]
> Hello,
> The only thing I can suggest is that you can import the data from the .trc
> file into a table using the function fn_trace_gettable (see BOL for exact
> syntax).
> Hope this helps.
>
> "kunalap" wrote: