
Monday, March 26, 2012

Number to Date

Hi,
I have a legacy system that uses a strange date format, which I'd like to
convert to datetime in a new table.
The dates look like this: 104001 (2004-01-01), 104002 (2004-01-02),
104271 (2004-09-27).
I think I'll have to break the problem into two parts:
the year - first three digits + 1900;
the day - last three digits, the (dy) day of the year.
I can generate the day number using "datepart(dy, date)", but how can I go
the other way? (to transform 271 into 09-27)
Thanks for your help.
SQL Server can do this with simple CAST operations, for instance, do:
SELECT CAST( CAST( 1900 + LEFT( c, 3 ) AS CHAR( 4 ) ) + '0101' AS DATETIME )
+ CAST( RIGHT( c, 3 ) AS INT ) - 1
FROM tbl ;
-- where c is the column from the table tbl.
Anith
|||Miguel,
Anith's solution will work for dates in or after the year 2000, but it
could give the wrong answer for earlier dates, if they are stored as
integers or as strings but without a leading zero. Here's an alternate
solution that should work:
select dateadd(year, @c/1000, -1) + @c%1000 from yourTable
Steve Kass
Drew University
Miguel Salles wrote:

>Hi,
>I have a legacy system that uses a strange date format, wich I'd like to
>convert for datetime on a new table.
>The date is in this format: 104001(2004-01-01), 104002(2004-01-02),
>104271(2004-09-27)
>I think that I'll have to break the problem in two parts:
>The year - three first digits + 1900
>the day - three last digits (dy) day of the year.
>I can generate the days by using "datepart(dy,date)", but how can i make the
>way back? (to transform 271 in 09-27)
>Thanks for your help.
>
>
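Both T-SQL answers above rest on the same arithmetic: the leading digits are years since 1900 and the trailing three digits are the day of the year. Here is a quick cross-check of that arithmetic (in Python, simply because it can run standalone here; the function name is made up):

```python
from datetime import date, timedelta

def legacy_to_date(n):
    """Decode the legacy value: leading digits are years since 1900,
    trailing three digits are the day of the year."""
    years_since_1900 = n // 1000   # 104001 -> 104
    day_of_year = n % 1000         # 104001 -> 1
    # Mirrors Steve's dateadd(year, n/1000, -1) + n%1000: start from the
    # last day of the previous year, then add the day-of-year as days.
    base = date(1900 + years_since_1900 - 1, 12, 31)
    return base + timedelta(days=day_of_year)

print(legacy_to_date(104001))  # 2004-01-01
print(legacy_to_date(104271))  # 2004-09-27
print(legacy_to_date(99001))   # 1999-01-01 (pre-2000 values work too)
```

Note that integer division sidesteps the leading-zero problem Steve mentions: 99001 splits into 99 and 1 regardless of how the value was stored.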

Number of transactions

Dear all,
What is the fastest and most reliable way to obtain how many transactions have
been committed in a day across all the SQL Server databases? Is there any system stored
procedure which gets that information?
Thanks for your advice/comments/thoughts,
Regards,
Enric

I don't believe you can track committed transactions.
There's no perfmon counter or SQL profiler event I can think of that
would do the trick. There's also no proc I know of to do so. As far as
I am aware, the only transactions that are of any interest, in terms of
tracking, are open (or active) transactions, which you can see with
"SELECT @@TRANCOUNT", but that won't tell you how many transactions have
been committed over a certain period of time, just how many are
currently open.
*mike hodgson*
blog: http://sqlnerd.blogspot.com
Enric wrote:

>Dear all,
>Which is the faster and reliable way to obtain how many transactions has
>been commited in a day in all the Sql Server db? Is there any system stored
>procedure which get that information?
>Thanks for your advices/comments/thoughts,
>Regards,
>Enric
>
>|||Partial solution:
you can see BOL for global variables like
@@connections, @@trancount, @@packets_sent, but all these give you the number
since the last start. You may use an average of it per day.
r.d
"Enric" wrote:

> Dear all,
> Which is the faster and reliable way to obtain how many transactions has
> been commited in a day in all the Sql Server db? Is there any system stored
> procedure which get that information?
> Thanks for your advices/comments/thoughts,
> Regards,
> Enric
>|||@@total_read and @@total_write since the last start will also help you
"Mike Hodgson" wrote:

> I don't believe you can track committed transactions.
> There's no perfmon counter or SQL profiler event I can think of that
> would do the trick. There's also no proc I know of to do so. As far as
> I am aware, the only transactions that are of any interest, in terms of
> tracking, are open (or active) transactions, which you can see with
> "SELECT @@TRANCOUNT", but that won't tell you how many transactions have
> been committed over a certain period of time, just how many are
> currently open.
> --
> *mike hodgson*
> blog: http://sqlnerd.blogspot.com
>
> Enric wrote:
>
>|||@@connections has got nothing to do with transactions; it's the number
of connections (successful logins) since the last restart.
@@trancount is only the number of currently open transactions.
@@packets_sent has got nothing to do with transactions; it's the number
of packets sent out the server's NIC since the last restart.
There are some very informative global vars you can access (along with
the wealth of information available through perfmon counters & SQL
profiler traces) but none of that will tell you the number of
transactions (committed or otherwise) for a given time period.
The only place I can think of that this info is recorded is in the
transaction log. If you were to read the transaction log with
::fn_dblog or some 3rd party app (like Lumigent Log Explorer) you could
probably tell. Something like this:
-- Number of BEGIN TRAN & COMMIT TRAN since last log truncation
select Operation, count(*) from ::fn_dblog(null,null)
where Operation in ('LOP_BEGIN_XACT', 'LOP_COMMIT_XACT')
group by Operation

-- Number of BEGIN TRAN & COMMIT TRAN today
-- (assuming the log hasn't been truncated today)
select Operation, count(*) from ::fn_dblog(null,null)
where (Operation = 'LOP_BEGIN_XACT'
    and convert(datetime,[Begin Time]) > convert(datetime,convert(varchar(8),getdate(),112)))
   or (Operation = 'LOP_COMMIT_XACT'
    and convert(datetime,[End Time]) > convert(datetime,convert(varchar(8),getdate(),112)))
group by Operation
*mike hodgson*
blog: http://sqlnerd.blogspot.com
R.D wrote:
>Partial solution
>you can see BOL for global vaiables like
> @@connections, @@trancount, @@packets_sent but all these give you no of
>transactions since last start. you may use average of it per day.
>r.d
>"Enric" wrote:
>
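The tallying step of the fn_dblog query above can be mimicked offline once the log records are exported. This is an illustrative sketch only -- count_ops and the sample rows are invented, though the operation names match what fn_dblog reports:

```python
from collections import Counter

def count_ops(log_rows):
    """Tally transaction-marker records, as the fn_dblog query's
    GROUP BY Operation does. log_rows: (operation, timestamp) pairs."""
    return Counter(op for op, _ in log_rows
                   if op in ('LOP_BEGIN_XACT', 'LOP_COMMIT_XACT'))

rows = [('LOP_BEGIN_XACT', '10:00'),
        ('LOP_COMMIT_XACT', '10:01'),
        ('LOP_BEGIN_XACT', '10:02'),
        ('LOP_MODIFY_ROW', '10:02'),   # not a transaction marker; ignored
        ('LOP_COMMIT_XACT', '10:03')]
print(count_ops(rows))  # Counter({'LOP_BEGIN_XACT': 2, 'LOP_COMMIT_XACT': 2})
```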
>|||No they won't. Those global vars have nothing to do with transactions.
@@TOTAL_READ
(http://msdn.microsoft.com/library/d...ls_484k.asp)
@@TOTAL_WRITE
(http://msdn.microsoft.com/library/d...ls_26p1.asp)
*mike hodgson*
blog: http://sqlnerd.blogspot.com
R.D wrote:
>@@total_read and @@total_write since last start also will help you
>"Mike Hodgson" wrote:
>
>|||Thanks so much for that,
"Mike Hodgson" wrote:

> No they won't. Those global vars have nothing to do with transactions.
> @@TOTAL_READ
> (http://msdn.microsoft.com/library/d...bals_484k.asp)
> @@TOTAL_WRITE
> (http://msdn.microsoft.com/library/d...bals_26p1.asp)
> --
> *mike hodgson*
> blog: http://sqlnerd.blogspot.com
>
> R.D wrote:
>
>|||In any case, it would be great to have available a job or sp able to
show info such as this:
user1 select 12 2 10
user1 delete 3 1 2
user2 update 1
I've found this:
DBCC INPUTBUFFER (spid)
Language Event 0 select * from VconexionesNAO where referencia='ACTUA' and
optapl='0'
Doing a loop with that info and then sorting it,
returning the statement.
"Mike Hodgson" wrote:

> No they won't. Those global vars have nothing to do with transactions.
> @@TOTAL_READ
> (http://msdn.microsoft.com/library/d...bals_484k.asp)
> @@TOTAL_WRITE
> (http://msdn.microsoft.com/library/d...bals_26p1.asp)
> --
> *mike hodgson*
> blog: http://sqlnerd.blogspot.com
>
> R.D wrote:
>
>|||sp_monitor reveals several server metrics, but not total transactions.
Also, if you can identify a suitable SQL Profiler event (perhaps
RPC:Completed, SQL:BatchCompleted), then you can have the trace output to
a table for periodic querying.
http://vyaskn.tripod.com/analyzing_profiler_output.htm
http://www.informit.com/guides/cont...&seqNum=41&rl=1
http://www.microsoft.com/technet/pr...ps/sqlprof.mspx
"Enric" <Enric@.discussions.microsoft.com> wrote in message
news:58B2337B-7A9E-4945-89BB-B95F1431584B@.microsoft.com...
> Dear all,
> Which is the faster and reliable way to obtain how many transactions has
> been commited in a day in all the Sql Server db? Is there any system
> stored
> procedure which get that information?
> Thanks for your advices/comments/thoughts,
> Regards,
> Enric
>

Friday, March 23, 2012

number of rows in catalog

Hi
Is there a system query that will tell me the number of rows in a given
catalog?
Cheers
James
> Is there a system query that will tell me the number of rows in a given
> catalog?
Here is an easy way:
EXEC sp_MSforeachtable 'sp_spaceused ''?'', ''true'''
Dejan Sarka, SQL Server MVP
Associate Mentor
Solid Quality Learning
More than just Training
www.SolidQualityLearning.com
|||This will return a good estimate of the number of rows in each table:
SELECT A.name, B.rows
FROM sysobjects A
JOIN sysindexes B ON A.ID = B.ID
WHERE A.type = 'U'
AND B.INDID < 2
ORDER BY A.Name
It is important to know that this is an estimate. The numbers are usually
pretty close. If you find that they are off you may have to issue several
DBCC statements (such as updateusage).
Keith
"James Brett" <james.brett@.unified.co.uk> wrote in message
news:%23mC3OeQsEHA.3460@.TK2MSFTNGP15.phx.gbl...
> Hi
> Is there a system query that will tell me the number of rows in a given
> catalog?
> Cheers
> James
>
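The idea behind the sp_MSforeachtable one-liner -- loop over every user table and ask each for its size -- can be sketched standalone. SQLite stands in for SQL Server purely so the snippet is runnable; rows_per_table and the table names are hypothetical:

```python
import sqlite3

# Build a throwaway database with two user tables.
conn = sqlite3.connect(':memory:')
conn.execute('CREATE TABLE t1 (x INTEGER)')
conn.execute('CREATE TABLE t2 (x INTEGER)')
conn.executemany('INSERT INTO t1 VALUES (?)', [(i,) for i in range(3)])
conn.executemany('INSERT INTO t2 VALUES (?)', [(i,) for i in range(5)])

def rows_per_table(conn):
    """Loop over every user table and count its rows -- the same idea as
    EXEC sp_MSforeachtable 'sp_spaceused ''?'', ''true''' on SQL Server."""
    tables = [r[0] for r in conn.execute(
        "SELECT name FROM sqlite_master WHERE type = 'table'")]
    return {t: conn.execute(f'SELECT COUNT(*) FROM "{t}"').fetchone()[0]
            for t in tables}

print(rows_per_table(conn))  # {'t1': 3, 't2': 5}
```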
|||I was actually after the number of rows in the full text index catalog.
Any clues?
Cheers
James
"Dejan Sarka" <dejan_please_reply_to_newsgroups.sarka@.avtenta.si > wrote in
message news:%23hU7YjRsEHA.1248@.TK2MSFTNGP10.phx.gbl...
> Here is an easy way:
> EXEC sp_MSforeachtable 'sp_spaceused ''?'', ''true'''
> --
> Dejan Sarka, SQL Server MVP
> Associate Mentor
> Solid Quality Learning
> More than just Training
> www.SolidQualityLearning.com
>
>
|||To get an accurate number for each table you would use
select count(*) from table
For an estimate you might use
select name, rowcnt from sysindexes where id > 100 and indid in (0,1)
Wayne Snyder, MCDBA, SQL Server MVP
Mariner, Charlotte, NC
www.mariner-usa.com
(Please respond only to the newsgroups.)
I support the Professional Association of SQL Server (PASS) and its
community of SQL Server professionals.
www.sqlpass.org
"James Brett" <james.brett@.unified.co.uk> wrote in message
news:%23mC3OeQsEHA.3460@.TK2MSFTNGP15.phx.gbl...
> Hi
> Is there a system query that will tell me the number of rows in a given
> catalog?
> Cheers
> James
>
|||Catalogs don't contain words per se, but rather unique words. Use this
query to get an idea of the number of unique words:
select FulltextCatalogProperty('CatalogName', 'UniqueKeyCount')
Hilary Cotter
Looking for a SQL Server replication book?
http://www.nwsu.com/0974973602.html
"James Brett" <james.brett@.unified.co.uk> wrote in message
news:eqFnnxRsEHA.1988@.TK2MSFTNGP11.phx.gbl...
> I was actually after the number of rows in the full text index catalog.
> Any clues?
> Cheers
> James
> "Dejan Sarka" <dejan_please_reply_to_newsgroups.sarka@.avtenta.si > wrote in
> message news:%23hU7YjRsEHA.1248@.TK2MSFTNGP10.phx.gbl...
>
|||James,
You can use the following metadata Full Text Search (FTS) queries to obtain
info from the FT Catalog:
select FulltextCatalogProperty('<FT_Catalog>', 'UniqueKeyCount') -- Number
of unique words
select FullTextCatalogProperty('<FT_Catalog>', 'itemcount') -- row count + 1
select FullTextCatalogProperty('<FT_Catalog>', 'indexsize') -- Size of the
full-text index
See BOL title FULLTEXTCATALOGPROPERTY for more info.
Thanks,
John
"James Brett" <james.brett@.unified.co.uk> wrote in message
news:#mC3OeQsEHA.3460@.TK2MSFTNGP15.phx.gbl...
> Hi
> Is there a system query that will tell me the number of rows in a given
> catalog?
> Cheers
> James
>
|||Hi James,
I think this will get you what you want ...
select FulltextCatalogProperty('CatalogName', 'ItemCount')
"James Brett" wrote:

> Hi
> Is there a system query that will tell me the number of rows in a given
> catalog?
> Cheers
> James
>
>

Wednesday, March 21, 2012

Number of IOs Per Second?

I need to size a storage system. I know that I will have two tables, each
with eight million records. I will have 10 users, each of whom will be
doing complex reports simultaneously, each of which does joins against those
eight million record tables.
What are some guidelines, best and worst case, for the number of
IOs per second that this system will generate? I need to make sure my
RAID arrays have enough physical disks to satisfy this number of IOs.
If the database is 2 TB in size, what is the desired amount of cache memory?
Will
westes AT earthbroadcast.com
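One rough way to frame the sizing question (not from this thread -- the per-spindle throughput and RAID write-penalty figures below are illustrative assumptions, not measurements): estimate the disk-level IOs per second, then divide by what one spindle can sustain.

```python
import math

def spindles_needed(read_iops, write_iops, per_disk_iops=120, write_penalty=2):
    """Back-of-the-envelope RAID sizing: total disk-level IOs per second
    divided by what one spindle can deliver. write_penalty is 2 for RAID 10
    (each logical write hits two disks); RAID 5 is closer to 4."""
    disk_iops = read_iops + write_iops * write_penalty
    return math.ceil(disk_iops / per_disk_iops)

# Hypothetical workload: 1500 reads/s and 500 writes/s on RAID 10,
# assuming ~120 IOPS per 10K-rpm spindle.
print(spindles_needed(1500, 500))  # 21
```

A worked case: 1,500 reads/s plus 500 writes/s on RAID 10 at ~120 IOPS per spindle needs (1500 + 500*2) / 120, about 21 disks; RAID 5's higher write penalty pushes the same workload to 30.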
I also found a more recent article:
http://www.microsoft.com/technet/prodtechnol/sql/2000/maintain/rdbmspft.mspx
Wayne Snyder, MCDBA, SQL Server MVP
Mariner, Charlotte, NC
www.mariner-usa.com
(Please respond only to the newsgroups.)
I support the Professional Association of SQL Server (PASS) and its
community of SQL Server professionals.
www.sqlpass.org
"CHANGE USERNAME TO westes" <DELETE_westes@.earthbroadcast.com> wrote in
message news:u0fS9ExfEHA.1656@.TK2MSFTNGP10.phx.gbl...
> I need to size a storage system. I know that I will have two tables,
each
> with eight million records. I will have 10 users, each of whom will be
> doing complex reports simultaneously, each of which does joins against
those
> eight million record tables.
> What are some guidelines for best and and worst case, about the number of
> IOs per second that this system will generate? I need to make sure
my
> RAID arrays have enough physical disks to satisfy this number of IOs.
> If the database is 2 TB in size, what is the desired amount of cache
memory?
> --
> Will
> westes AT earthbroadcast.com
>
|||There is a great article by Henry Lau about this. It is targeted at SQL 7,
but the concepts still apply:
http://msdn.microsoft.com/library/default.asp?url=/library/en-us/dnsql7/html/msdn_sql7perftune.asp
Wayne Snyder, MCDBA, SQL Server MVP
Mariner, Charlotte, NC
www.mariner-usa.com
(Please respond only to the newsgroups.)
I support the Professional Association of SQL Server (PASS) and its
community of SQL Server professionals.
www.sqlpass.org
"CHANGE USERNAME TO westes" <DELETE_westes@.earthbroadcast.com> wrote in
message news:u0fS9ExfEHA.1656@.TK2MSFTNGP10.phx.gbl...
> I need to size a storage system. I know that I will have two tables,
each
> with eight million records. I will have 10 users, each of whom will be
> doing complex reports simultaneously, each of which does joins against
those
> eight million record tables.
> What are some guidelines for best and and worst case, about the number of
> IOs per second that this system will generate? I need to make sure
my
> RAID arrays have enough physical disks to satisfy this number of IOs.
> If the database is 2 TB in size, what is the desired amount of cache
memory?
> --
> Will
> westes AT earthbroadcast.com
>

Number of Databases

I have started administering a system that has 44 databases running on the
same server. That is 44 user databases.
This is running under NT4 SP 6a and SQL 7 SP3
I am sure that SQL was not meant to support this number of databases on the
same server, but I have been unable to find any documentation to support this.
Does any one have any experience of working with this number of databases or
point me in the right direction to documentation on the number of supported
databases.
Any help would be appreciated|||SQL 7.0 and 2000 support up to 32,767 databases
(http://msdn.microsoft.com/library/default.asp?url=/library/en-us/architec/8
_ar_ts_8dbn.asp). But personally I prefer to limit the number per instance;
however, it really depends on application(s). If everything is working and
users are not complaining, why worry? If you have performance problems and
you have applications using different databases, then it really might be the
simplest solution to install another SQL Server on another box and transfer
half of the databases.
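For reference, a quick query along these lines shows what an instance is actually carrying (sysdatabases is the SQL 7/2000 system table; the excluded names below are just the standard system databases):

```sql
-- List the user databases on a SQL 7/2000 instance
SELECT name, crdate
FROM master.dbo.sysdatabases
WHERE name NOT IN ('master', 'model', 'msdb', 'tempdb')
ORDER BY name
```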
--
Dejan Sarka, SQL Server MVP
FAQ from Neil & others at: http://www.sqlserverfaq.com
Please reply only to the newsgroups.
PASS - the definitive, global community
for SQL Server professionals - http://www.sqlpass.org
"Mark Jones" <mark.jones@.nospam.kimdata.plus.com> wrote in message
news:FR%ab.8119$vX3.1275275@.wards.force9.net...
> I have started administrating a system that has 44 databases running on the
> same server. That is 44 user databases.
> This is running under NT4 SP 6a and SQL 7 SP3
> I am sure that SQL was not meant to support this number of databases on the
> same server but unable to find any documentation to support this.
> Does anyone have any experience of working with this number of databases or
> point me in the right direction to documentation on the number of supported
> databases.
> Any help would be appreciated
>|||How many databases your application can or should support involves far too
many variables for someone to answer "44 is too many" or "44 is fine."
> point me in the right direction to documentation on the number of
supported
> databases.
This shows some of the fixed limits in the product, however there is really
no way for there to be an "official" declaration of how many databases
*your* unique application can and should support.
http://www.aspfaq.com/2345
Follow-ups set only to server.|||We regularly host over 50 databases on a server. You can have many more if
the databases are quiet. But then, you may have to settle for far fewer on a
server if the databases are heavy CPU/Disk/Memory or Network consumers.
In general, I wouldn't consider 44 databases as too many. The number of
databases is often not the most important factor. Ultimately, it's a
resource issue (CPU, memory, disk, network, etc.)
--
Linchi Shea
linchi_shea@.NOSPAMml.com
"Mark Jones" <mark.jones@.nospam.kimdata.plus.com> wrote in message
news:FR%ab.8119$vX3.1275275@.wards.force9.net...
> I have started administrating a system that has 44 databases running on the
> same server. That is 44 user databases.
> This is running under NT4 SP 6a and SQL 7 SP3
> I am sure that SQL was not meant to support this number of databases on the
> same server but unable to find any documentation to support this.
> Does anyone have any experience of working with this number of databases or
> point me in the right direction to documentation on the number of supported
> databases.
> Any help would be appreciated
>|||I'm not sure about SQL Server 7.0 at this point, the comments below really
apply to SQL Server 2000.
There was a white paper at www.microsoft.com that discussed a scenario in
which a single instance supported several hundred databases. It was the
architecture that Microsoft used for bCentral's hosted small business
financials offering (before they acquired Great Plains and decided to drop
the hosted offering). Unfortunately last time I looked I was unable to find
the white paper.
There are really two limits on the PRACTICAL number of databases that a
single instance can support. The first is a limit on the number of file
handles that Windows allows a process to have open at any one time. In the
hundreds of databases example it is clear that only a small subset are
actually in use at any given time and so setting the databases to autoclose
when not in use solves the handle problem. The second problem is that SQL
Enterprise Manager has some algorithms that don't scale to hundreds of
databases. Basically, it loads metadata for all of the databases whenever
you connect to the instance. With hundreds of databases this can take a
long time (I've heard someone mention an example where it took over an hour
to start Enterprise Manager). The solution here is to do all management
from the command line and not use Enterprise Manager to manage hundreds of
databases. Neither of these problems apply when you are only talking about
44-50 databases. That should work just fine without any special handling.
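As a sketch of the autoclose workaround described above (the database name here is hypothetical; sp_dboption is the SQL 7/2000-era interface, and on SQL 2000 ALTER DATABASE ... SET AUTO_CLOSE ON does the same thing):

```sql
-- Let a rarely-used database close and release its file handles when idle
EXEC sp_dboption 'SomeHostedDB', 'autoclose', 'true'
```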
--
Hal Berenson, SQL Server MVP
True Mountain Group LLC
"Mark Jones" <mark.jones@.nospam.kimdata.plus.com> wrote in message
news:FR%ab.8119$vX3.1275275@.wards.force9.net...
> I have started administrating a system that has 44 databases running on the
> same server. That is 44 user databases.
> This is running under NT4 SP 6a and SQL 7 SP3
> I am sure that SQL was not meant to support this number of databases on the
> same server but unable to find any documentation to support this.
> Does anyone have any experience of working with this number of databases or
> point me in the right direction to documentation on the number of supported
> databases.
> Any help would be appreciated
>|||Greg;
My guess is that you'd probably run into other PRACTICAL issues before you
run into the file handle issue. I'd generally advise against using
auto-close for any serious production database. The option has the potential
to cause too many maintenance problems, and has the performance issue.
--
Linchi Shea
linchi_shea@.NOSPAMml.com
"Hal Berenson" <haroldb@.truemountainconsulting.com> wrote in message
news:uPr1iW9fDHA.2152@.tk2msftngp13.phx.gbl...
> I'm not sure about SQL Server 7.0 at this point, the comments below really
> apply to SQL Server 2000.
> There was a white paper at www.microsoft.com that discussed a scenario in
> which a single instance supported several hundred databases. It was the
> architecture that Microsoft used for bCentral's hosted small business
> financials offering (before they acquired Great Plains and decided to drop
> the hosted offering). Unfortunately last time I looked I was unable to find
> the white paper.
> There are really two limits on the PRACTICAL number of databases that a
> single instance can support. The first is a limit on the number of file
> handles that Windows allows a process to have open at any one time. In the
> hundreds of databases example it is clear that only a small subset are
> actually in use at any given time and so setting the databases to autoclose
> when not in use solves the handle problem. The second problem is that SQL
> Enterprise Manager has some algorithms that don't scale to hundreds of
> databases. Basically, it loads metadata for all of the databases whenever
> you connect to the instance. With hundreds of databases this can take a
> long time (I've heard someone mention an example where it took over an hour
> to start Enterprise Manager). The solution here is to do all management
> from the command line and not use Enterprise Manager to manage hundreds of
> databases. Neither of these problems apply when you are only talking about
> 44-50 databases. That should work just fine without any special handling.
> --
> Hal Berenson, SQL Server MVP
> True Mountain Group LLC
>
> "Mark Jones" <mark.jones@.nospam.kimdata.plus.com> wrote in message
> news:FR%ab.8119$vX3.1275275@.wards.force9.net...
> > I have started administrating a system that has 44 databases running on the
> > same server. That is 44 user databases.
> >
> > This is running under NT4 SP 6a and SQL 7 SP3
> >
> > I am sure that SQL was not meant to support this number of databases on the
> > same server but unable to find any documentation to support this.
> >
> > Does anyone have any experience of working with this number of databases or
> > point me in the right direction to documentation on the number of supported
> > databases.
> >
> > Any help would be appreciated
> >
> >
>|||Sorry, I meant, Hal (not Greg).
--
Linchi Shea
linchi_shea@.NOSPAMml.com
"Linchi Shea" <linchi_shea@.NOSPAMml.com> wrote in message
news:e%2398WpAgDHA.3200@.tk2msftngp13.phx.gbl...
> Greg;
> My guess is that you'd probably run into other PRACTICAL issues before you
> run into the file handle issue. I'd generally advise against using
> auto-close for any serious production database. The option has the potential
> to cause too many maintenance problems, and has the performance issue.
> --
> Linchi Shea
> linchi_shea@.NOSPAMml.com
>
> "Hal Berenson" <haroldb@.truemountainconsulting.com> wrote in message
> news:uPr1iW9fDHA.2152@.tk2msftngp13.phx.gbl...
> > I'm not sure about SQL Server 7.0 at this point, the comments below
really
> > apply to SQL Server 2000.
> >
> > There was a white paper at www.microsoft.com that discussed a scenario
in
> > which a single instance supported several hundred databases. It was the
> > architecture that Microsoft used for a bCentral's hosted small business
> > financials offering (before they acquired Great Plains and decided to
drop
> > the hosted offering). Unfortunately last time I looked I was unable to
> find
> > the white paper.
> >
> > There are really two limits on the PRACTICAL number of databases that a
> > single instance can support. The first is a limit on the number of file
> > handles that Windows allows a process to have open at any one time. In
> the
> > hundreds of databases example it is clear that only a small subset are
> > actually in use at any given time and so setting the databases to
> autoclose
> > when not in use solves the handle problem. The second problem is that
SQL
> > Enterprise Manager has some algorithms that don't scale to hundreds of
> > databases. Basically, it loads metadata for all of the databases
whenever
> > you connect to the instance. With hundreds of databases this can take a
> > long time (I've heard someone mention an example where it took over an
> hour
> > to start Enterprise Manager). The solution here is to do all management
> > from the command line and not use Enterprise Manager to manage hundreds
of
> > databases. Neither of these problems apply when you are only talking
> about
> > 44-50 databases. That should work just fine without any special
handling.
> >
> > --
> > Hal Berenson, SQL Server MVP
> > True Mountain Group LLC
> >
> >
> > "Mark Jones" <mark.jones@.nospam.kimdata.plus.com> wrote in message
> > news:FR%ab.8119$vX3.1275275@.wards.force9.net...
> > > I have started administrating a system that has 44 databases running
on
> > the
> > > same server. That is 44 user databases.
> > >
> > > This is running under NT4 SP 6a and SQL 7 SP3
> > >
> > > I am sure that SQL was not meant to support this number of databases
on
> > the
> > > same server but unable to find any documentation to support this.
> > >
> > > Does any one have any experience of working with this number of
> databases
> > or
> > > point me in the right direction to documentation on the number of
> > supported
> > > databases.
> > >
> > > Any help would be appreciated
> > >
> > >
> >
> >
>

Number of database files

I am looking after an SAP database on SQL 2000 and its current size is
approx 190GB. Due to the way the original people installed the system years ago,
the database is stored in one file. My question is, will this cause me
problems as the database size increases? I know it will be fine size wise
I'm thinking more of performance issues. The file is on a S.A.N so disk
speed wise it is pretty good.
thanks
Gav
|||Gav
In my experience with SQL 2000, I have not come across a database that
needed more than one data file. For large databases they tend to be spread
across a number of physical disks, so I/O is not usually an issue (given
modern disk capability). In version 7 and before you got a benefit when
performing backups and accessing large tables where the database was split
over multiple data files. (Though you should still have no more data files
than physical disks.)
You might want to use multiple data files in conjunction with multiple
filegroups to control physical placement of your tables. Again, though, in practice
SQL 2000 does a good job of striping the data, making this rarely needed.
I used to work for a manager who kept database devices (back in the 6.5
days) limited to 2GB, as you could edit files up to 2GB. You could do this
with data files, if you saw any need to edit the files, but at 190GB it's a
bit too big to be useful.
If you are not experiencing performance problems that seem to relate to the
data file I would not worry about it.
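For what it's worth, if you ever did want a second file in its own filegroup, the general shape is below (the database, file, and filegroup names are made up, and the sizes are illustrative only, not a recommendation for the SAP database in question):

```sql
-- Add a new filegroup, then a data file on a separate drive
ALTER DATABASE SAPDB ADD FILEGROUP SecondaryFG
ALTER DATABASE SAPDB
    ADD FILE (NAME = 'SAPDB_Data2',
              FILENAME = 'E:\MSSQL\Data\SAPDB_Data2.ndf',
              SIZE = 10GB,
              FILEGROWTH = 1GB)
    TO FILEGROUP SecondaryFG
```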
Hope this helps
John
"Gav" wrote:

> I am looking after an SAP database on SQL 2000 and its current size is
> approx 190Gb. Due to the original people who installed the system years ago
> the database is stored in one file. My question is, will this cause me
> problems as the database size increases? I know it will be fine size wise
> I'm thinking more of performance issues. The file is on a S.A.N so disk
> speed wise it is pretty good.
> thanks
> Gav
>
>
|||"Peter The Spate" <anonymous@.discussions.microsoft.com> wrote in message
news:aa3601c4d6c9$62a38270$a601280a@.phx.gbl...
> Having one big file has a bit of risk attached to it, and
> is slightly slower than an alternative.
> Firstly if that one file goes down then you will need to
> restore the whole lot, which can take a bit of a time to
> do, for safety its better to split the file onto different
> disk drives, that way if one goes you still have your data
> safely on another disk (or if you use raiding different
> set of disks).
>
Won't you have to restore everything anyway? How do you
keep data consistency between the 2 or more files? If you only
restore one file surely the restored file will be out of date compared to
the still existing file or files. Unless I'm missing something.

> The second is performance. If you put some files on disk
> drive C, and disk drive D you will have two different
> disks read write heads working for you that will improve
> your performance.
> You could increase performance even more by proper raiding
> and splitting the files. For instance if you have a group
> of tables that are constently been written to you can set
> the raid to be say 10 for those tables, but on a different
> set of disks configure it to be raid 5.
At the moment the entire database is raid 10. If I was to split
the database into more than one file how would I go about it?
It contains approx 38,000 tables, would I have to assign half to
a different file group?
> One point though is that usually having one file will do
> however if you can get away with it, have more.
> Peter
> "The best minds are not in government. If any were,
> business would steal them away."
> Ronald Reagan
>
|||>Won't you have to restore everything anyway? How do you
>keep data consistency between the 2 or more files? If you only
>restore one file surely the restored file will be out of date compared to
>the still existing file or files. Unless I'm missing something.
No, you can back up a filegroup, so as long as the filegroup
is on a different disk all you need to restore is that
filegroup.
It's more of a shortcut; you will need downtime of course,
but restoring, say, 90GB is going to take less time than
190GB.

>At the moment the entire database is raid 10. If I was to split
>the database into more than one file how would I go about it?
>It contains approx 38,000 tables, would I have to assign half to
>a different file group?
This will not just be a software change but a hardware
change as well. You will need to create drives, i.e. C:, D:,
E:, F:, etc.
What you would do then is create different filegroups on
different disks. Unfortunately it's not as easy as just giving
them a different filegroup, as you will need to load the
data into the tables with the new filegroups; you can't
just change the table through an ALTER TABLE command.
I'm going to stick my neck out here and say if you
don't 'need' to, then don't. You are already on the best
kind of RAID (if you can afford it); otherwise it's going to
be a shed load of work to get it all working.
Peter
"They say that you only learn by mistake, does that mean I
am the world's most learned man?"
Peter The Spate

|||Oke doke. I was not planning on doing it, just wondering out of interest. I
have a test restore box with the full database on it so I can play about
with these sort of things.
Gav :o)
"Peter The Spate" <anonymous@.discussions.microsoft.com> wrote in message
news:aaad01c4d6d8$90280b80$a601280a@.phx.gbl...
> No, you can back up a filegroup, so as long as the filegroup
> is on a different disk all you need to restore is that
> filegroup.
> It's more of a shortcut; you will need downtime of course,
> but restoring, say, 90GB is going to take less time than
> 190GB.
> This will not just be a software change but a hardware
> change as well. You will need to create drives, i.e. C:, D:,
> E:, F:, etc.
> What you would do then is create different filegroups on
> different disks. Unfortunately it's not as easy as just giving
> them a different filegroup, as you will need to load the
> data into the tables with the new filegroups; you can't
> just change the table through an ALTER TABLE command.
> I'm going to stick my neck out here and say if you
> don't 'need' to, then don't. You are already on the best
> kind of RAID (if you can afford it); otherwise it's going to
> be a shed load of work to get it all working.
> Peter
> "They say that you only learn by mistake, does that mean I
> am the world's most learned man?"
> Peter The Spate
|||I just wanted to clarify one comment that Peter made.
While it is true you can do file or filegroup backups and restores, there is
a little more to it than just restoring the backup for that file or
filegroup. You must then restore all the log backups taken from the point at which
you backed up the file or filegroup you are restoring. This is what keeps
all of the database in sync with the newly restored backup.
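To make the sequence concrete, a filegroup backup and restore looks roughly like this (the database, filegroup, and backup file names are invented for illustration):

```sql
-- Back up one filegroup
BACKUP DATABASE Sales FILEGROUP = 'FG_Archive'
    TO DISK = 'E:\Backup\Sales_FG_Archive.bak'
-- Later: restore just that filegroup, then roll forward with every
-- log backup taken since, finishing with RECOVERY
RESTORE DATABASE Sales FILEGROUP = 'FG_Archive'
    FROM DISK = 'E:\Backup\Sales_FG_Archive.bak' WITH NORECOVERY
RESTORE LOG Sales FROM DISK = 'E:\Backup\Sales_Log1.trn' WITH NORECOVERY
RESTORE LOG Sales FROM DISK = 'E:\Backup\Sales_Log2.trn' WITH RECOVERY
```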
Andrew J. Kelly SQL MVP
"Gav" <gavin.metcalfe@.nospam.portakabin.com> wrote in message
news:cohue6$lt2$1@.sparta.btinternet.com...
> Oke doke. I was not planning on doing it, just wondering out of interest.
> I
> have a test restore box with the full database on it so I can play about
> with these sort of things.
> Gav :o)
> "Peter The Spate" <anonymous@.discussions.microsoft.com> wrote in message
> news:aaad01c4d6d8$90280b80$a601280a@.phx.gbl...
>
|||For what it is worth:
There is really no reason to have multiple files in your configuration,
unless you have data that you can clearly identify
as read-only or that will not change (i.e. sales_archive_1994). If this
is the case, and there is a lot of this data, then
you can greatly improve your backup strategy by moving these tables to
a different file (dump time, dump frequency, restore time, etc.).
If you are going to move data to a new file, the most effective method
is to create a clustered index for that table while specifying the new
filegroup. Since the leaf level of the clustered index is the data
itself, it will physically move the object to the new filegroup. If you
do not need the clustered index, you can drop it when you are done. You
will have to rebuild all the non-clustered indexes on that table as
well (specifying the new filegroup), otherwise they will remain on the
primary filegroup.
There are other ways to move objects, but they are all extremely log
intensive. The above way is extremely fast.
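A sketch of that move (the table, column, index, and filegroup names are hypothetical):

```sql
-- Creating a clustered index ON the target filegroup physically moves the
-- table's data there, since the leaf level of the index is the data itself
CREATE CLUSTERED INDEX ix_move
    ON dbo.sales_archive_1994 (sale_id)
    ON ArchiveFG
-- If the table does not actually need a clustered index, drop it afterwards
DROP INDEX sales_archive_1994.ix_move
```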
hth,
hans
|||I forgot: if you are going to move data around, then once you are done,
make sure that you execute a DBCC SHRINKFILE() on the primary
data file, otherwise you will get no benefit from the movement.
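For example (the logical file name and the target size in MB are made up):

```sql
-- Reclaim the space freed in the primary data file after the move;
-- 500 is the target size in MB
DBCC SHRINKFILE ('MyDB_Data', 500)
```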
hans
> >> >I am looking after an SAP database on SQL 2000 and its
> >> current size is
> >> >approx 190Gb. Due to the original people who installed
> >> the system years ago
> >> >the database is stored in one file. My question is,
> will
> >> this cause me
> >> >problems as the database size increases? I know it will
> >> be fine size wise
> >> >I'm thinking more of performance issues. The file is
> on a
> >> S.A.N so disk
> >> >speed wise it is pretty good.
> >> >
> >> >thanks
> >> >
> >> >Gav
> >> >
> >> >
> >> >.
> >> >
> >
> >
> >.
> >|||I just wanted to clarify one comment that Peter made.
>> No, you can back up filegroup, so as long as the filegroup
>> is on a different disk all you need to restore is that filegroup.
While it is true you can do file or filegroup backups and restores there is
a little more to it than just restoring the backup for that file or
filegroup. You must then restore all the log files from the point in which
you backed up the file or filegroup you are restoring. This is what keeps
all of the database in sync with the newly restored backup.
--
Andrew J. Kelly SQL MVP
"Gav" <gavin.metcalfe@.nospam.portakabin.com> wrote in message
news:cohue6$lt2$1@.sparta.btinternet.com...
> Oke doke. I was not planning on doing it, just wondering out of interest.
> I
> have a test restore box with the full database on it so I can play about
> with these sort of things.
> Gav :o)
> "Peter The Spate" <anonymous@.discussions.microsoft.com> wrote in message
> news:aaad01c4d6d8$90280b80$a601280a@.phx.gbl...
>> >Won't you have to restore everything anyway? How do you
>> >keep data consistancy between the 2 or more files? If you
>> only
>> >restore one file surely the restored file will be out of
>> date compared to
>> >the still existing file or files. Unless I'm missing
>> something.
>> No, you can back up filegroup, so as long as the filegroup
>> is on a different disk all you need to restore is that
>> filegroup.
>> Its more of a shortcut, you will need downtime of course,
>> but restore saying 90gb is going to take less time than
>> 190gb.
>> >At the moment the entire database is raid 10. If I was to
>> split
>> >the database into more than one file how would I go about
>> it?
>> >It contains approx 38,000 tables, would I have to assign
>> half to
>> >a different file group?
>> This will not just be a software change but a hardware
>> change as well. You will need to create drives i.e C: D:
>> E: F ect.
>> What you would do then is create different groups on
>> different disks. Unfortunatly its not as easy as giving
>> them a different file group as you will need to load the
>> data into the tables with the new filegroups, you can't
>> just change the table though an alter table command.
>> I'm going to stick my neck out here and say if you
>> don't 'need' to then don't. You are already on the best
>> kind of raid (if you can afford it) otherwise its going to
>> be a shed load of work to get it all working.
>> Peter
>> "They say that you only learn by mistake, does that mean I
>> am the worlds most learned man?"
>> Peter The Spate
>>
>>
>>
>> >--Original Message--
>> >
>> >"Peter The Spate" <anonymous@.discussions.microsoft.com>
>> wrote in message
>> >news:aa3601c4d6c9$62a38270$a601280a@.phx.gbl...
>> >> Having one big file has a bit of risk attached to it,
>> and
>> >> is slightly slower than an alternative.
>> >>
>> >> Firstly if that one file goes down then you will need to
>> >> restore the whole lot, which can take a bit of a time to
>> >> do, for safety its better to split the file onto
>> different
>> >> disk drives, that way if one goes you still have your
>> data
>> >> safely on another disk (or if you use raiding different
>> >> set of disks).
>> >>
>> >
>> >Won't you have to restore everything anyway? How do you
>> >keep data consistancy between the 2 or more files? If you
>> only
>> >restore one file surely the restored file will be out of
>> date compared to
>> >the still existing file or files. Unless I'm missing
>> something.
>> >
>> >> The second is performance. If you put some files on disk
>> >> drive C, and disk drive D you will have two different
>> >> disks read write heads working for you that will improve
>> >> your performance.
>> >>
>> >> You could increase performance even more by proper
>> raiding
>> >> and splitting the files. For instance if you have a
>> group
>> >> of tables that are constently been written to you can
>> set
>> >> the raid to be say 10 for those tables, but on a
>> different
>> >> set of disks configure it to be raid 5.
>> >
>> >At the moment the entire database is raid 10. If I was to
>> split
>> >the database into more than one file how would I go about
>> it?
>> >It contains approx 38,000 tables, would I have to assign
>> half to
>> >a different file group?
>> >
>> >> One point though is that usually having one file will do
>> >> however if you can get away with it, have more.
>> >>
>> >> Peter
>> >>
>> >> "The best minds are not in government. If any were,
>> >> business would steal them away."
>> >> Ronald Reagan
>> >>
>> >>
>> >>
>> >>
>> >> >--Original Message--
>> >> >I am looking after an SAP database on SQL 2000 and its
>> >> current size is
>> >> >approx 190Gb. Due to the original people who installed
>> >> the system years ago
>> >> >the database is stored in one file. My question is,
>> will
>> >> this cause me
>> >> >problems as the database size increases? I know it will
>> >> be fine size wise
>> >> >I'm thinking more of performance issues. The file is
>> on a
>> >> S.A.N so disk
>> >> >speed wise it is pretty good.
>> >> >
>> >> >thanks
>> >> >
>> >> >Gav
>> >> >
>> >> >
>> >> >.
>> >> >
>> >
>> >
>> >.
>> >
>|||For what it is worth:
There is really no reason to have multiple files in your configuration,
unless it is the case that you have data that you can clearly identify
as read-only or that will not change (i.e. sales_archive_1994). if this
is the case, and it is the case that there is a lot of this data, then
you can greatly improve your backup strategy by moving these tables to
a different file (dump time, dump frequency, restore time, etc.).
if you are going to move data to a new file, the most effective method
is to create a clustered index for that table while specifying the new
filegroup. since the leaf level of the clustered index is the data
itself, it will physically move the object to the new filegroup. if you
do not need the clustered index, you can drop it when you are done. you
will have to rebuild all the non-clustered indexes on that table as
well (specifying the new filegroup), otherwise they will remain on the
primary.
there are other ways to move objects, but they are all extremely log
intensive. the above way is extremely fast.
hth,
hans|||i forgot. if you are going move data around, then once you are done,
make sure that you execute a dbcc shrinkfile() on the primary
filegroup, otherwise you will get no benefit from the movement.
hans

Number of database files

I am looking after an SAP database on SQL 2000 and its current size is
approx 190Gb. Due to the original people who installed the system years ago
the database is stored in one file. My question is, will this cause me
problems as the database size increases? I know it will be fine size wise;
I'm thinking more of performance issues. The file is on a S.A.N so disk
speed wise it is pretty good.
thanks
Gav|||Gav,
In my experience with SQL 2000, I have not come across a database that
needed more than one data file. For large databases they tend to be spread
across a number of physical disks, so I/O is not usually an issue (given
modern disk capability). In version 7 and before you got a benefit when
performing backups and accessing large tables where the database was split
over multiple data files. (Though you should still have no more data files
than physical disks.)
You might want to use multiple data files in conjunction with multiple
filegroups to control physical placement of your tables. Again though, in practice,
SQL 2000 does a good job of striping the data, making this rarely needed.
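As an illustrative sketch of the filegroup placement John mentions (the database, path, and table names here are invented for the example, not taken from the thread):

```sql
-- Hypothetical example: add a filegroup on a second disk and place a table on it
ALTER DATABASE SAPDB ADD FILEGROUP FG_Data2;

ALTER DATABASE SAPDB ADD FILE (
    NAME = 'SAPDB_Data2',
    FILENAME = 'E:\SQLData\SAPDB_Data2.ndf',  -- a drive on a separate spindle set
    SIZE = 10240MB
) TO FILEGROUP FG_Data2;

-- New tables can then be placed explicitly on the new filegroup
CREATE TABLE dbo.SalesArchive (
    SaleID INT NOT NULL,
    SaleDate DATETIME NOT NULL
) ON FG_Data2;
```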
I used to work for a manager who kept database devices (back in the 6.5
days) limited to 2gb as you could edit files up to 2gb. You could do this
with data files, if you saw any need to edit the files, but at 190gb it's a
bit too big to be useful.
If you are not experiencing performance problems that seem to relate to the
data file I would not worry about it.
Hope this helps
John
"Gav" wrote:

> I am looking after an SAP database on SQL 2000 and its current size is
> approx 190Gb. Due to the original people who installed the system years ago
> the database is stored in one file. My question is, will this cause me
> problems as the database size increases? I know it will be fine size wise
> I'm thinking more of performance issues. The file is on a S.A.N so disk
> speed wise it is pretty good.
> thanks
> Gav
>
>|||"Peter The Spate" <anonymous@.discussions.microsoft.com> wrote in message
news:aa3601c4d6c9$62a38270$a601280a@.phx.gbl...
> Having one big file has a bit of risk attached to it, and
> is slightly slower than an alternative.
> Firstly if that one file goes down then you will need to
> restore the whole lot, which can take a bit of a time to
> do, for safety its better to split the file onto different
> disk drives, that way if one goes you still have your data
> safely on another disk (or if you use raiding different
> set of disks).
>
Won't you have to restore everything anyway? How do you
keep data consistency between the 2 or more files? If you only
restore one file, surely the restored file will be out of date compared to
the still existing file or files. Unless I'm missing something.

> The second is performance. If you put some files on disk
> drive C, and disk drive D you will have two different
> disks read write heads working for you that will improve
> your performance.
> You could increase performance even more by proper raiding
> and splitting the files. For instance if you have a group
> of tables that are constently been written to you can set
> the raid to be say 10 for those tables, but on a different
> set of disks configure it to be raid 5.
At the moment the entire database is raid 10. If I was to split
the database into more than one file how would I go about it?
It contains approx 38,000 tables, would I have to assign half to
a different file group?
> One point though is that usually having one file will do
> however if you can get away with it, have more.
> Peter
> "The best minds are not in government. If any were,
> business would steal them away."
> Ronald Reagan
>
>
|||>Won't you have to restore everything anyway? How do you
>keep data consistancy between the 2 or more files? If you only
>restore one file surely the restored file will be out of date compared to
>the still existing file or files. Unless I'm missing something.
No, you can back up a filegroup, so as long as the filegroup
is on a different disk all you need to restore is that
filegroup.
It's more of a shortcut; you will need downtime of course,
but restoring, say, 90GB is going to take less time than
190GB.

>At the moment the entire database is raid 10. If I was to split
>the database into more than one file how would I go about it?
>It contains approx 38,000 tables, would I have to assign half to
>a different file group?
This will not just be a software change but a hardware
change as well. You will need to create drives, i.e. C:, D:,
E:, F:, etc.
What you would do then is create different filegroups on
different disks. Unfortunately it's not as easy as giving
them a different filegroup, as you will need to load the
data into the tables with the new filegroups; you can't
just change the table through an ALTER TABLE command.
I'm going to stick my neck out here and say if you
don't 'need' to then don't. You are already on the best
kind of RAID (if you can afford it), otherwise it's going to
be a shed load of work to get it all working.
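As a rough sketch of spreading one database over files on different drives at creation time (all names and paths below are hypothetical, invented for illustration):

```sql
-- Hypothetical sketch: one database spread across files on separate drives
CREATE DATABASE DemoDB
ON PRIMARY
    (NAME = 'DemoDB_Data1', FILENAME = 'C:\SQLData\DemoDB_1.mdf', SIZE = 500MB),
FILEGROUP FG_Heavy
    (NAME = 'DemoDB_Data2', FILENAME = 'D:\SQLData\DemoDB_2.ndf', SIZE = 500MB)
LOG ON
    (NAME = 'DemoDB_Log', FILENAME = 'E:\SQLLogs\DemoDB_log.ldf', SIZE = 200MB);
```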
Peter
"They say that you only learn by mistake, does that mean I
am the worlds most learned man?"
Peter The Spate

>--Original Message--
>"Peter The Spate" <anonymous@.discussions.microsoft.com>
wrote in message
>news:aa3601c4d6c9$62a38270$a601280a@.phx.gbl...
>Won't you have to restore everything anyway? How do you
>keep data consistancy between the 2 or more files? If you
only
>restore one file surely the restored file will be out of
date compared to
>the still existing file or files. Unless I'm missing
something.
>
>At the moment the entire database is raid 10. If I was to
split
>the database into more than one file how would I go about
it?
>It contains approx 38,000 tables, would I have to assign
half to
>a different file group?
>
>
>.
>|||Oke doke. I was not planning on doing it, just wondering out of interest. I
have a test restore box with the full database on it so I can play about
with these sort of things.
Gav :o)
"Peter The Spate" <anonymous@.discussions.microsoft.com> wrote in message
news:aaad01c4d6d8$90280b80$a601280a@.phx.gbl...
|||I just wanted to clarify one comment that Peter made.

While it is true you can do file or filegroup backups and restores there is
a little more to it than just restoring the backup for that file or
filegroup. You must then restore all the log files from the point at which
you backed up the file or filegroup you are restoring. This is what keeps
all of the database in sync with the newly restored backup.
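A sketch of the sequence Andrew describes (the backup device paths and filegroup name are invented for the example):

```sql
-- Hypothetical sketch: back up one filegroup, then roll it forward with log backups
BACKUP DATABASE SAPDB FILEGROUP = 'FG_Data2'
    TO DISK = 'G:\Backups\SAPDB_FG2.bak';
BACKUP LOG SAPDB TO DISK = 'G:\Backups\SAPDB_log1.trn';

-- To restore just that filegroup later:
RESTORE DATABASE SAPDB FILEGROUP = 'FG_Data2'
    FROM DISK = 'G:\Backups\SAPDB_FG2.bak' WITH NORECOVERY;
-- ...then every log backup taken since, to bring it back in sync:
RESTORE LOG SAPDB FROM DISK = 'G:\Backups\SAPDB_log1.trn' WITH RECOVERY;
```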
Andrew J. Kelly SQL MVP
"Gav" <gavin.metcalfe@.nospam.portakabin.com> wrote in message
news:cohue6$lt2$1@.sparta.btinternet.com...
> Oke doke. I was not planning on doing it, just wondering out of interest.
> I
> have a test restore box with the full database on it so I can play about
> with these sort of things.
> Gav :o)
> "Peter The Spate" <anonymous@.discussions.microsoft.com> wrote in message
> news:aaad01c4d6d8$90280b80$a601280a@.phx.gbl...
>|||For what it is worth:
There is really no reason to have multiple files in your configuration,
unless it is the case that you have data that you can clearly identify
as read-only or that will not change (i.e. sales_archive_1994). if this
is the case, and it is the case that there is a lot of this data, then
you can greatly improve your backup strategy by moving these tables to
a different file (dump time, dump frequency, restore time, etc.).
if you are going to move data to a new file, the most effective method
is to create a clustered index for that table while specifying the new
filegroup. since the leaf level of the clustered index is the data
itself, it will physically move the object to the new filegroup. if you
do not need the clustered index, you can drop it when you are done. you
will have to rebuild all the non-clustered indexes on that table as
well (specifying the new filegroup), otherwise they will remain on the
primary.
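a sketch of the move hans describes (the table, index, and filegroup names are made up, and the non-clustered index is assumed to already exist):

```sql
-- Hypothetical example: creating a clustered index on a new filegroup
-- physically moves the table there, since its leaf level is the data itself
CREATE CLUSTERED INDEX IX_SalesArchive_SaleID
    ON dbo.SalesArchive (SaleID)
    ON FG_Archive;

-- drop it afterwards if you don't actually want a clustered index;
-- the underlying data stays on the new filegroup
DROP INDEX dbo.SalesArchive.IX_SalesArchive_SaleID;

-- non-clustered indexes remain on the old filegroup unless rebuilt as well
CREATE INDEX IX_SalesArchive_SaleDate
    ON dbo.SalesArchive (SaleDate)
    WITH DROP_EXISTING
    ON FG_Archive;
```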
there are other ways to move objects, but they are all extremely log
intensive. the above way is extremely fast.
hth,
hans|||i forgot. if you are going to move data around, then once you are done,
make sure that you execute a dbcc shrinkfile() on the primary
filegroup, otherwise you will get no benefit from the movement.
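for example (the logical file name and target size here are invented):

```sql
-- Hypothetical: shrink the primary data file to reclaim the freed space
-- (the target size argument is in MB)
DBCC SHRINKFILE ('SAPDB_Data1', 100000);
```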
hans

Tuesday, March 20, 2012

Number of allowed connections for SQL Server 2005 Standard on XP Pro

I assume that the number of connections that SQL Server 2005 Standard can
concurrently have will be limited by the operating system it is running on.
If that OS is XP Pro, how many concurrent connections can SQL Server 2005
Standard have?
Michael
I believe that is a wrong assumption. There is no limit based on the OS that
I know of other than the fact that 64 bit can support more memory for
connections. But if you are worried about lots of connections then why would
you want to use XP Pro? The # of connections will ultimately be limited by
the amount of memory and the hardware.
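You can check the instance-level limits yourself with standard commands (the values returned will vary by instance):

```sql
-- Hard maximum of simultaneous user connections SQL Server allows (32,767)
SELECT @@MAX_CONNECTIONS AS max_connections;

-- The currently configured limit (0 means "up to the maximum above")
EXEC sp_configure 'user connections';
```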
Andrew J. Kelly SQL MVP
"michael" <michael@.discussions.microsoft.com> wrote in message
news:0EA3B57E-6583-4A59-B38B-FB4C89AB5D74@.microsoft.com...
>I assume that the number of connections that SQL Server 2005 Standard can
> concurrently have will be limited by the operating system it is running
> on.
> If that OS is XP Pro, how many concurrent connections can SQL Server 2005
> Standard have?
> --
> Michael
|||Yup... and the amount of CPU time you can dedicate to the threads SQL Server
will generate to handle the operations.
Let's hear more about your connection strategy and application architecture.
____________________________________
William (Bill) Vaughn
Author, Mentor, Consultant
Microsoft MVP
INETA Speaker
www.betav.com/blog/billva
www.betav.com
Please reply only to the newsgroup so that others can benefit.
This posting is provided "AS IS" with no warranties, and confers no rights.
__________________________________
Visit www.hitchhikerguides.net to get more information on my latest book:
Hitchhiker's Guide to Visual Studio and SQL Server (7th Edition)
and Hitchhiker's Guide to SQL Server 2005 Compact Edition (EBook)
------
"Andrew J. Kelly" <sqlmvpnooospam@.shadhawk.com> wrote in message
news:OSgPZMRiHHA.4596@.TK2MSFTNGP05.phx.gbl...
>I believe that is a wrong assumption. There is no limit based on the OS
>that I know of other than the fact that 64 bit can support more memory for
>connections. But if you are worried about lots of connections then why
>would you want to use XP Pro? The # of connections will ultimately be
>limited by the amount of memory and the hardware.
> --
> Andrew J. Kelly SQL MVP
> "michael" <michael@.discussions.microsoft.com> wrote in message
> news:0EA3B57E-6583-4A59-B38B-FB4C89AB5D74@.microsoft.com...
>
|||I did think about this for a bit, so, my answer will probably expose some of
my ignorance on the subject. My apologies in advance.
The reason I want to use XP Pro is that I already own a copy running on a
spare machine. The machine belongs to a domain managed on another box.
Nothing else running on the XP box. Nothing about the operation of the
database is really mission critical. I may have 10-30 concurrent users,
but that's about it. I can put in as much memory and storage space as I want.
What would I really get by running SQL Server 2005 on an MS server platform
in such a situation?
Respectfully,
Michael
"William (Bill) Vaughn" wrote:

> Yup... and the amount of CPU time you can dedicate to the threads SQL Server
> will generate to handle the operations.
> Let's hear more about your connection strategy and application architecture.
> --
> ____________________________________
> William (Bill) Vaughn
> Author, Mentor, Consultant
> Microsoft MVP
> INETA Speaker
> www.betav.com/blog/billva
> www.betav.com
> Please reply only to the newsgroup so that others can benefit.
> This posting is provided "AS IS" with no warranties, and confers no rights.
> __________________________________
> Visit www.hitchhikerguides.net to get more information on my latest book:
> Hitchhiker's Guide to Visual Studio and SQL Server (7th Edition)
> and Hitchhiker's Guide to SQL Server 2005 Compact Edition (EBook)
> ------
> "Andrew J. Kelly" <sqlmvpnooospam@.shadhawk.com> wrote in message
> news:OSgPZMRiHHA.4596@.TK2MSFTNGP05.phx.gbl...
>
>
|||The other thing to keep in mind is that regardless of any SQL Server
limitations, Windows XP only allows 10 concurrent inbound connections, so supporting 30
concurrent users might not be possible. XP is also tuned to give preference to
foreground UI applications rather than IO- and memory-intensive applications like
SQL Server, so performance often isn't what you expect.
This posting is provided "AS IS" with no warranties, and confers no rights.
Use of included script samples are subject to the terms specified at
http://www.microsoft.com/info/cpyright.htm
"michael" <michael@.discussions.microsoft.com> wrote in message
news:212C8757-2AE9-4D70-B57C-986F384BB85C@.microsoft.com...
>I did think about this for a bit, so, my answer will probably expose some
>of
> my ignorance on the subject. My apologies in advance.
> The reason I want to use XP Pro is that I already own a copy running on a
> spare machine. The machine belongs to a domain managed on another box.
> Nothing else running on the XP box. Nothing about the operation of the
> database is really mission critical. I may have 10 - 30 concurrent users,
> but, that's about it. I can put as much memory and storage space as I
> want.
> What would I really get by running SQL Server 2005 on an MS server
> platform
> in such a situation?
> Respectfully,
>
> --
> Michael
>
> "William (Bill) Vaughn" wrote:
|||Roger brings up several very good points that you should take into
consideration. XP just isn't geared for the types of work that SQL Server
may demand. If you have 30 concurrent connections I would not call that
trivial and if you want to do it right I would look at a server OS.
Andrew J. Kelly SQL MVP
"michael" <michael@.discussions.microsoft.com> wrote in message
news:212C8757-2AE9-4D70-B57C-986F384BB85C@.microsoft.com...
>I did think about this for a bit, so, my answer will probably expose some
>of
> my ignorance on the subject. My apologies in advance.
> The reason I want to use XP Pro is that I already own a copy running on a
> spare machine. The machine belongs to a domain managed on another box.
> Nothing else running on the XP box. Nothing about the operation of the
> database is really mission critical. I may have 10 - 30 concurrent users,
> but, that's about it. I can put as much memory and storage space as I
> want.
> What would I really get by running SQL Server 2005 on an MS server
> platform
> in such a situation?
> Respectfully,
>
> --
> Michael
>
> "William (Bill) Vaughn" wrote:
|||Roger brings up a good point. However, it's possible (and legal) to use
the metadata editor to bump the connection limit to a maximum of 40.
____________________________________
William (Bill) Vaughn
"Roger Wolter[MSFT]" <rwolter@.online.microsoft.com> wrote in message
news:9C5831CD-59BC-4948-95BA-D2513BCDF3A8@.microsoft.com...