Showing posts with label advice. Show all posts

Sunday, March 11, 2012

Aggregation advice


Hi, I am having some trouble with changing the default aggregation behavior of my cube. I need to be able to change the aggregation at different levels of a dimension.


I am currently exploring the possibility of using custom member formulas, placing MDX script into a column within my dimension, meaning that I could define a custom roll-up at each level in my hierarchy. Is this the best way?


I have two Dimensions as part of a pivot table:
DIMTime = [Year].[Quarter].[Hour]
DIMSpatial = [Group_1].[Group_0].[Route].[Link]
And a calculated measure = SUM(x)/SUM(N)


When I ‘roll up’ (total), this is what I am currently getting:

           DIMTime
Group_0    2006
1          0.9398
2          0.9471
Total      0.9424


I want to change the roll up to be 0.9398 + 0.9471 = 1.8869


Could anyone give me some guidance on how the MDX script should look? I have been trying lots of combinations but keep getting a #Value! error. I haven’t yet been able to implement a custom aggregation, so if I am going down the wrong avenue, or if you think there is a better way of doing this, please let me know…

Thanks in advance for any advice you can give me.

Simon

If you want the roll-up to occur from fact-table granularity, you could define a Named Calculation like 'x/N', then define a "sum" measure on it (assuming that the two measures are in the same fact table/measure group). If they are in different fact tables/measure groups, you might be able to use a measure expression.
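For concreteness, a Named Calculation in the Data Source View is just a SQL expression evaluated per fact row, so the suggestion above could equally be sketched as a view over a hypothetical fact table (all table and column names here are illustrative, not from the thread):

```sql
-- Illustrative sketch only: compute the x/N ratio per fact row so the
-- cube can aggregate it with a plain SUM. Table/column names are assumed.
CREATE VIEW dbo.vFactMeasure AS
SELECT
    f.TimeKey,
    f.SpatialKey,
    f.x,
    f.N,
    -- per-row ratio; CAST forces decimal division, NULLIF avoids divide-by-zero
    CAST(f.x AS float) / NULLIF(f.N, 0) AS xOverN
FROM dbo.FactMeasure AS f;
```

The cube then aggregates xOverN additively by default, with no MDX script needed.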

Thanks for your post Deepak and sorry for the slow response.

You are right in assuming that the two measures are in the same fact table. In trying to explain the problem I have oversimplified my example in the first post. I have tried to put together this illustration to explain further:

[On the left is what I am trying to achieve and on the right is what I am getting right now]

Click Here

I see from your description above that the default ‘SUM’ aggregation will work. But if I now wanted to divide by n at each level in the hierarchy, how would I go about changing this behaviour?

edit: Adding link to diagram instead


Thursday, February 9, 2012

Advice?

hellooo

I have an Accounting system (VB.NET 2003, SQL Server 2000). Every new year the data is cleared, but I may need some data from previous years (such as liabilities).

What's the best way to do that?

- Shall I programmatically create a new clone DB every year (a new DB with the same structure as the previous year)?
OR
- Shall I add a "year" field to the tables in the DB?

knowing that data will keep growing every year?

What's the best solution, knowing that I don't want the end user of my application to do anything manually, such as creating a DB?

Thank you

Hi,

I would add the year field and provide a tool to delete entries older than a given number of years. Most liability rules require keeping data for a fixed amount of time (e.g. 1, 3, 10 years). That helps to limit the growth of the DB.

--
SvenC

Hi,

As you did not mention which edition of SQL Server you are using, you should take a look at partitioning. Horizontal partitioning based on the date the data was entered (or should be archived) lets SQL Server automatically move that data to another partition (a SAN storage area or another disk). You will not need to worry about picking the data up from some other database, or about changing your logic, because the relational engine and the storage engine are separate: when the relational engine gets a command to retrieve all data from the year 2000 (which is already "outdated"), it queries the storage engine, which knows where the data is stored (which partition).

HTH, Jens K. Suessmeyer.

http://www.sqlserver2005.de

Many thanks, but can you paraphrase?
Or give me a link to an article that might help me.

I need a brief A-Z solution.

Thanks

Oh sorry, I forgot to tell you: SQL Server 2000 Enterprise Edition.

The answer is you NEVER delete anything in an accounting system; the accountants will not be happy if you delete things at year end.

The accounting system logic should always be date-range specific. As soon as you hard-code Jan-Dec, someone will want to change to July-June.

I don't want to delete anything. I want, at the beginning of each year, to create a new DB (same structure as the previous year) and maybe use some data from the previous DB (such as the previous year's liabilities).

That's all I need. Can you point me in the right direction?

Thank you


That would make querying the data of several years a nightmare.

--
SvenC

So what's the solution?

hi,

Have a new database per year. The advantages:

1. You can put the entire database (when everything has been balanced) into read-only mode, and it stays read-only no matter who logs into the system.

2. You are not going to delete anything.

3. In this way you are scaling out your enterprise database.

In the front end, give the user a dropdown to choose which database to connect to; behind the scenes he/she is just choosing a connection string. Store the connection string in a class and let every SQL connection object use the connection string stored in that class.

regards,

joey


Thanks. But how do I clone my DB2006 to DB2007 and still access DB2006?


hi,

Use the Copy Database Wizard:

Open Enterprise Manager, drill down to the database you want to copy, then click Tools > Wizards > Management > Copy Database Wizard.

You'll be fine from there.

regards,

joey

Thanks for these replies.

I need to do this cloning programmatically.

Isn't there a way to do this programmatically?

Have a look at SQL-DMO.

You will find the SQLServer object, which has DetachDB and AttachDB methods you can use to detach a DB, copy the files, and attach the copy on another server (or on the same server as a different DB). You might need the Database object first to find the database files.

MSDN has information about the class library and I guess BOL also.

--
SvenC
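For reference, the T-SQL counterparts of SQL-DMO's DetachDB/AttachDB are the system procedures sp_detach_db and sp_attach_db, so the yearly clone could be sketched like this on SQL Server 2000 (database names and file paths are examples only; the file copy has to happen outside the engine, here via xp_cmdshell):

```sql
-- Rough sketch: detach DB2006, copy its files, re-attach both copies.
-- Paths and names are illustrative; run from the master database.
USE master
GO
EXEC sp_detach_db @dbname = N'DB2006'
GO
-- Copy the physical files (requires xp_cmdshell permissions).
EXEC xp_cmdshell 'copy C:\Data\DB2006.mdf C:\Data\DB2007.mdf'
EXEC xp_cmdshell 'copy C:\Data\DB2006_log.ldf C:\Data\DB2007_log.ldf'
GO
-- Re-attach the original under its old name...
EXEC sp_attach_db @dbname = N'DB2006',
    @filename1 = N'C:\Data\DB2006.mdf',
    @filename2 = N'C:\Data\DB2006_log.ldf'
-- ...and attach the copy as the new year's database.
EXEC sp_attach_db @dbname = N'DB2007',
    @filename1 = N'C:\Data\DB2007.mdf',
    @filename2 = N'C:\Data\DB2007_log.ldf'
GO
```

Note that the attached copy keeps its internal logical file names; that doesn't prevent use, but it is worth knowing.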


Advice sought on adding sequential id to batch

My batches need to have some sequential ids added to them. These ids need to be system-wide.

Here is example input data. I am given the first two cols and need to generate the third:

Code Snippet

Row  RowType  ID
A    LN       LN1001
B    LN       LN1002
C    TR       TR2001
D    LN       LN1003
E    TR       TR2002
F    TR       TR2003

There is no relation between Row and ID except for the sequence of the records.

I have a table that looks like this:

Code Snippet

RowType  MaxID
LN       1003
TR       2003

and that is where I go to get/update my latest id. (SELECT ? = MaxID = MaxID + 1 Where RowType = 'TR')
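For what it's worth, the get-and-update can be done in one atomic statement using T-SQL's chained assignment inside UPDATE; the table and variable names below are assumptions for illustration:

```sql
-- Sketch: atomically reserve the next ID for a row type.
-- The chained assignment writes MaxID + 1 to both the row and the
-- variable in a single statement, avoiding a read-then-write race.
DECLARE @newID int

UPDATE dbo.RowTypeMaxID          -- assumed counter table
SET @newID = MaxID = MaxID + 1
WHERE RowType = 'TR'

SELECT @newID                    -- the freshly reserved ID
```

To reserve a whole batch at once, add the batch size instead of 1 and number the rows upward from the previous value plus one.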

My question is this:

I will probably use a script step to do the work of obtaining and applying a new row id to each row. How do I run the requisite SQL from within a script step? Or is there a better way of doing it?

You would need to cache the current max ID by row type in preparation for the main script execution, so the script PreExecute event would seem like a good place. You can use an SSIS connection of type ADO.NET, use that to get the System.Data.SqlClient.SqlConnection object, and write some standard .NET data access code. See http://msdn2.microsoft.com/en-us/library/ms136045.aspx for a start.

You do not want to be making that SQL call per row, so track them in code, and write back the latest values in the PostExecute event.

Alternatively you could just use the Exec SQL Task to query this data and store it in a variable.

You would use a second Exec SQL Task to write back the latest values used after the Data Flow.


Cool.

That is just the sort of info I was after.

I think I will go with the query-before and update-after model. You've given me some excellent pointers. I guess the main thing I was concerned about was the performance impact of doing multiple updates (running SQL for each row of data).

Thanks,

Dylan.

Advice required: Matrix report with a static column

First off, I apologize if this is an elementary question. Thanks for your
patience.

I have a requirement to develop a report that shows by calendar day how much
effort a resource has expended on what tasks. So, I have a stored procedure
that returns data that looks like this:

Resource     calendarDay  description   etcHours  effortHours
-----------  -----------  ------------  --------  -----------
Smith, John  2006-08-01   Requirements  20.00     8.00
Smith, John  2006-08-02   Requirements  20.00     8.00
Smith, John  2006-08-03   Design        80.00     8.00
Doe, Jane    2006-08-01   Requirements   5.00     4.00
Doe, Jane    2006-08-01   Design        30.00     4.00

And I've developed a matrix report that looks like this:

                      AUGUST
                      1     2     3     4     5
Name 1  Requirements  8.00  8.00
        Design                    8.00
Name 2  Requirements  4.00
        Design        4.00

What I need to do, however, is add the "ETC" value at the END of the matrix
row for each Task for each Resource, so that it looks like this:

                      AUGUST
                      1     2     3     4     5     ETC
Name 1  Requirements  8.00  8.00                    20.00
        Design                    8.00              80.00
Name 2  Requirements  4.00                           5.00
        Design        4.00                          30.00

I can't seem to add a column to the matrix to represent the ETC column. What
would be the best way to accomplish this?

Thanks in advance for any replies,
Dr Jazz

Dr.

You can right-click on Column 5 and select "Add Column"; this will add a static ETC column to the matrix.

Hammer


Hammer2 wrote:

You can right click on Column 5 and select "Add Column" this will create ETC, this will add a static column to the matrix.

Thanks for the reply, Hammer. Unfortunately, "Column 5" doesn't exist in the report designer because this is a matrix control. The only items available are "Column Groups" and I have two defined: one for the month, then another for the actual day (since the date range is a parameter specified by the user). I can right-click and add more Column Groups, but I cannot add a single static column, thus my dilemma.

Cheers,
Dr. Jazz


Humm,

Puzzling because I have multiple Matrix reports and a number of them have static column, you should be able to "Add a column" this is a static column - you can also google matrix reports and static columns, this should bring up some examples.

Hope this helps

Hammer


Hi Hammer,

Thanks for your continued patience and assistance. Check out this web page that will hopefully illustrate my problem better:

http://www.duotronic.com/matrix/matrix.htm

As you can see, I am able to get an "Add column" menu item, but I don't want it to be part of the group because this value will then be displayed for every day in the group. I just want it displayed at the end of the row. Is there a way to do this?

Cheers,
Dr Jazz

You aren't using your query correctly, as you should be using aggregates on that column as well. SQL cannot tell whether you just want a column there, since your report is a grouping report.

Can you post your SQL query?

http://jhermiz.googlepages.com

Hi jhermiz. Thanks for your reply. I have no doubt that the structure of my query is not exactly compatible with how Reporting Services requires it. My query is listed below:

SELECT
  emp.LastName + ', ' + emp.FirstName AS Resource,
  cal.calendarDay,
  task.taskID,
  task.taskCode,
  task.description,
  tass.etcHours,
  -- Get billable hours for this user on this day
  SUM(tae.effortHours) AS effortHours

FROM
  dbo.udf_getDateRangeTable(@startDate, @endDate) AS cal
  CROSS JOIN HR.dbo.Employee AS emp
  LEFT JOIN dbo.TaskAssignment AS tass ON tass.resourceID = emp.employeeID
  LEFT JOIN dbo.TaskActualEffort AS tae ON tae.assignmentID = tass.assignmentID
    AND tae.startDate = cal.calendarDay
  LEFT JOIN dbo.Task AS task ON tass.taskID = task.taskID
  LEFT JOIN dbo.TrackingItem AS ti ON task.trackingItemID = ti.trackingItemID

WHERE
  ti.masterProjectID = @masterProjectID
  AND emp.employeeID IN (SELECT employeeID FROM HR.dbo.ProjectEmployeeAssignment WHERE projectID = @masterProjectID)
  -- Only return resources who have actually billed time in the reporting period
  AND EXISTS
  (
    SELECT 1
    FROM
      dbo.TaskAssignment AS tass1
      INNER JOIN dbo.Task AS task1 ON tass1.taskID = task1.taskID
      INNER JOIN dbo.TaskActualEffort AS tae1 ON tass1.assignmentID = tae1.assignmentID
      INNER JOIN dbo.TrackingItem AS ti1 ON ti1.trackingItemID = task1.trackingItemID
    WHERE
      tass1.resourceID = emp.employeeID
      AND tae1.startDate BETWEEN @startDate AND @endDate
      AND ti1.masterProjectID = @masterProjectID
  )

GROUP BY
  emp.LastName + ', ' + emp.FirstName,
  cal.calendarDay,
  emp.employeeID,
  task.taskID,
  task.taskCode,
  task.description,
  tass.etcHours

ORDER BY
  task.taskID,
  Resource,
  cal.calendarDay

Cheers,
Andre


Hey Dr.

Nice pages, by the way. Okay, I understand the nature of your problem: when you "Add the column", you can use the "InScope" expression to tell it which group to display the value for.

Does that make sense?

Hammer

Advice Requested on Primary Key: Is char(20) better than binary(20)

I am not posting the DDL because it is not relevant to my question.
So far I have been unable to find a decent "natural key" for a table I am
designing. The true "natural key" is varbinary(MAX), which is unusable, and
so I have to consider a surrogate key, which uses the SHA1 algorithm to
generate a 20-byte value from the natural key.
My Question:
Which would be the best choice for storing
20 bytes from SHA1 algorithm?
char(20)
binary(20)
I would like to choose the best datatype for Joins/Performance, etc. I am
using SQL Server 2005.
Thanks
Russell Mangel
Las Vegas, NV

BINARY. The HashBytes function returns a VARBINARY value.
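A minimal sketch of that suggestion (table and column names are assumed, not from the thread): store the SHA-1 digest of the oversized natural key as a BINARY(20) primary key.

```sql
-- Sketch only, SQL Server 2005+. Names are illustrative.
-- Note: in SQL Server 2005, HASHBYTES input is limited to 8000 bytes.
CREATE TABLE dbo.Messages
(
    MessageHash binary(20)     NOT NULL PRIMARY KEY, -- SHA-1 digest of MessageID
    MessageID   varbinary(MAX) NOT NULL,             -- the oversized natural key
    Body        varbinary(MAX) NULL
)

DECLARE @id varbinary(MAX)
SET @id = 0x0102030405  -- example value

INSERT INTO dbo.Messages (MessageHash, MessageID)
VALUES (HASHBYTES('SHA1', @id), @id)
```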
"Russell Mangel" <russell@tymer.net> wrote in message news:eUnwdJAkGHA.2004@TK2MSFTNGP04.phx.gbl...
[quoted text snipped]
Russell Mangel wrote:
[quoted text snipped]
Your hash wouldn't usually be called a surrogate key because it's
derived from real, meaningful data. I'd prefer to call it a logical key
or business key but I think it's fair enough to call it a natural key
if you like.
Even without the hash, your problem wasn't that you couldn't find the
key but that you had no easy way to enforce it. It's an annoyance that
SQL Server won't permit unique constraints on "large" keys. I suspect
the reason is due to the fact that only B-tree indexes are supported
and that makes large keys impractical in Microsoft's way of thinking.

> Which would be the best choice for storing
> 20 bytes from SHA1 algorithm?
Use BINARY to avoid the overhead and complications of collations.
David Portas, SQL Server MVP
Whenever possible please post enough code to reproduce your problem.
Including CREATE TABLE and INSERT statements usually helps.
State what version of SQL Server you are using and specify the content
of any error messages.
SQL Server Books Online:
http://msdn2.microsoft.com/library/ms130214(en-US,SQL.90).aspx
>> My Question: Which would be the best choice for storing 20 bytes from the SHA1 algorithm? char(20) or binary(20)? <<

If you use CHAR(20), then you can display it more easily, but that is the only advantage I can think of. It would be nice to have a Teradata-style hashing routine that would give you a true surrogate key, hidden from the users and with hash clashes managed by the system. Sigh!

"--CELKO--" <jcelko212@earthlink.net> wrote in message
news:1150329186.339767.74520@g10g2000cwb.googlegroups.com...
[quoted text snipped]
Unless you convert the SHA1 hash to some character format, like Base64,
you'll run into problems displaying some hash codes that contain a 0x00 and
other special control characters in them. Base64, ASCII hexadecimal
representation, etc., of a 20-byte hash will require more than a CHAR(20) to
store.
-- Demonstration of CHAR(10) with an embedded 0x00 char
DECLARE @x CHAR(10)
SELECT @x = 'ABC' + CHAR(0) + 'DEF'
SELECT @x
PRINT @x

Advice Requested : Trying to write portable SQL

I am running on MS SQL2005.
I have a table called Companies that has a column (Active), which would be
either 0 or 1 for either "Active or Not-Active".
I realize that there is a BIT column type in SQL 2005, but I don't want to
use it. Instead I have chosen to use an 'int' column data type, and then I
applied a check constraint to ensure that only 0 or 1 is inserted.
I have included some DDL; please let me know if there is a better way to
accomplish my goal.
Thanks
Russell Mangel
Las Vegas, NV
CREATE TABLE [dbo].[Companies]
(
[CompanyCode] [varchar](5) NOT NULL,
[CompanyName] [varchar](35) NOT NULL,
[Active] [int] NOT NULL CONSTRAINT [DF_Companies_Active] DEFAULT ((1)),
CONSTRAINT [PK_Companies] PRIMARY KEY CLUSTERED
(
[CompanyCode] ASC
) WITH (PAD_INDEX = OFF, IGNORE_DUP_KEY = OFF) ON [PRIMARY]
) ON [PRIMARY]
ALTER TABLE [dbo].[Companies] WITH CHECK ADD CONSTRAINT [CK_Companies]
CHECK (([Active]=(0) OR [Active]=(1)))
GO
ALTER TABLE [dbo].[Companies] CHECK CONSTRAINT [CK_Companies]

Russell Mangel wrote:
[quoted text snipped]
It's very hard to write totally portable DDL because virtually all SQL
DBMSs (SQL Server, Sybase, Oracle, MySQL for example) have extended the
SQL DDL to define the physical implementation of tables as well as the
logical model. Crazy but true!
In your case, I'd use CHAR(1) and a more friendly, readable code
instead of a potentially mysterious 0/1.
You can use the Mimer SQL Validator to check your code against the
ANSI/ISO standard. Standard not necessarily = Portable of course. :-(
http://developer.mimer.com/validator/
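A sketch of that CHAR(1) variant, applied to the Companies table from the question ('Y'/'N' codes are an assumed convention, not from the thread):

```sql
-- Sketch: portable flag column as CHAR(1) with a check constraint.
CREATE TABLE Companies
(
    CompanyCode varchar(5)  NOT NULL PRIMARY KEY,
    CompanyName varchar(35) NOT NULL,
    Active      char(1)     NOT NULL DEFAULT 'Y'
        CONSTRAINT CK_Companies_Active CHECK (Active IN ('Y', 'N'))
)
```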
David Portas, SQL Server MVP
David Portas (REMOVE_BEFORE_REPLYING_dportas@acm.org) writes:
> In your case, I'd use CHAR(1) and use a more friendly, readable code
> instead of a potenitally mysterious 0/1.
My bets are on 0 and 1. That's clearcut. char(1) comes with localisation
issues.
Erland Sommarskog, SQL Server MVP, esquel@sommarskog.se
Books Online for SQL Server 2005 at
http://www.microsoft.com/technet/pr...oads/books.mspx
Books Online for SQL Server 2000 at
http://www.microsoft.com/sql/prodin...ions/books.mspx

Erland Sommarskog wrote:
> David Portas (REMOVE_BEFORE_REPLYING_dportas@.acm.org) writes:
> My bets are on 0 and 1. That's clearcut. char(1) comes with localisation
> issues.
>
Perfectly true that it will be localised, but developing a database
schema without localisation sounds like a pretty tough call to me. What
do you do for table and column names? If you need to support developers
and DBAs with multiple languages then you can put translations in a
data dictionary. On the other hand if you rely on a good data
dictionary then I suppose that negates my argument about the potential
ambiguity of 1/0 anyway...
David Portas, SQL Server MVP
Whenever possible please post enough code to reproduce your problem.
Including CREATE TABLE and INSERT statements usually helps.
State what version of SQL Server you are using and specify the content
of any error messages.
SQL Server Books Online:
http://msdn2.microsoft.com/library/ms130214(en-US,SQL.90).aspx
"David Portas" <REMOVE_BEFORE_REPLYING_dportas@acm.org> wrote in message news:1149628866.388604.195270@y43g2000cwc.googlegroups.com...
[quoted text snipped]
I think the fault lies in the design of SQL for not cleanly accommodating the
physical design of databases. It's one thing to assume that the DBMS will
provide a suitable runtime implementation of a SELECT, INSERT, UPDATE or
DELETE. Quite another to assume that an implementation won't need
additional details for CREATE TABLE, etc.

In fact, CREATE TABLE is just broken, as it badly conflates the type, the
instance and the implementation. A TABLE ought to be declared as an
instance of a relation "type", with a prescribed mechanism for specifying
the physical implementation. SQL shouldn't try to model the physical design,
but at least should provide a standard way for it to be specified.

David

David Portas (REMOVE_BEFORE_REPLYING_dportas@acm.org) writes:
[quoted text snipped]
The problem with char(1) is that there are no obvious values for the
constant. If a programmer writes

WHERE binarycol = 'J'

when the possible values are Y and N, they have introduced a bug. That is,
there are no universal values for a char(1) column. There are for bit, on the
other hand.

And if you think

WHERE bitcol = 1

is cumbersome for some reason, you can in SQL 2005 write:

WHERE bitcol = 'true'
Erland Sommarskog, SQL Server MVP, esquel@sommarskog.se

David Browne wrote:
[quoted text snipped]
Agree absolutely. A Data Implementation sub-language ought to be
included as a hook for translating types and views into their
product-specific structures. It's remarkable to think that if the SQL
standard had specified even a rudimentary "DIL" and if vendors had used
it we could have been spared truckloads or denormalization agony and
the worst of the Kimball dimensional muddle. That simple omission has
perhaps cost $billions for database customers.
David Portas, SQL Server MVP
Thanks for the reply...
Did I do the check constraint right? I can't think of anything else to do
here, besides choosing an int or char(1).
Russell Mangel
Las Vegas, NV

Advice requested : Choosing the best Key for a table

I would appreciate good advice about the following Schema for a Table called
Messages. The table will hold archived messages from Microsoft Outlook. The
SQL 2005 schema that I have posted, is simply what the data looks like in
it's *raw* form. I intend to modify the schema in the most efficient way
possible. This table was created using MS SQL server 2005 and uses two if
the new (MAX) data types.
You can help me the most by focusing on the best Key for the Messages table.
The "natural key" for the table (in raw form), would be MessageID, so lets
discuss it's problems. The first problem is that varbinary(MAX) (new in
SQL2005), does not allow Primary Key constraints, and so it should be!
Obviously we will need to use a smaller DataType than varbinary(MAX), once
you pick something reasonable in size you are allowed to create PrimaryKey
constraint. MessageID's do not have a documented or published limit, they
can be *any* length. From my observations MessageID will typically be 70-340
bytes. So I suppose we could set varbinary(500). This would allow us to
handle most situations. But this is still way too long for a Primary Key in
my opinion. The other columns that are set as xxx(MAX) have similar
problems, ignore these columns for now until we find a decent Key for the
table.
In my opinion this table does not have a "natural key" we can use; it's
simply too long, and this table will have hundreds of millions of rows, so
I am worried about JOIN performance. Once a message is archived (then
deleted) from its original location and placed in SQL Server, the
MessageID has no value and is never displayed to the user. There are more
complications because this table is replicated using merge replication.
The database is a distributed database, so using auto identity columns
complicates replication issues, not to mention irritating --Celko--. So I
really want to find a suitable surrogate key.
So what to do?
1. Use RSA to reduce MessageID to 20 bytes, and deal with the collisions?
2. Use a rowguid (Ouch!)
At this point I like option #1 the best because it works nicely with
replication, as the original MessageID is a globally unique identifier, so
no two MessageIDs (PR_ENTRY_ID long term) will ever exist. But I think
there are issues with generating these shorter keys using RSA; collisions
will happen. Additionally, it is quite simple to encode the MessageID using
RSA. The MessageID would now be 20 bytes, or [varbinary](20).
Does anyone have any good suggestions?
Thanks for your time.
Russell Mangel
Las Vegas, NV
-- Note: I commented out the Constraint, as varbinary(MAX) is not allowed.
-- This schema is just what the data looks like in the real world.
CREATE TABLE
[dbo].[Messages]
(
[MessageID] [varbinary](MAX) NOT NULL,
[ParentID] [varbinary](MAX) NOT NULL, -- FK for Folders Table
[SenderID] [int] NOT NULL, -- FK for Sender of message
[MessageType] [int] NOT NULL, -- FK MessageType
[Length] [bigint] NOT NULL,
[To] [nvarchar](MAX) NOT NULL,
[CC] [nvarchar](MAX) NOT NULL,
[Subject] [nvarchar](MAX) NOT NULL,
[Body] [ntext] NOT NULL,
[HasAttachment] [bit] NOT NULL,
-- CONSTRAINT [PK_Messages] PRIMARY KEY CLUSTERED
-- (
-- [MessageID]
-- )
)ON [PRIMARY]
-- End Schema
Hi, Russell
Well, why do you want the MessageId column to be a varbinary datatype? Any
reason?
Sure, I don't know your business requirements, so my guess is (if you do not
want to use an IDENTITY):
BTW, an IDENTITY is in most cases a good candidate for a surrogate key.
CREATE TABLE Messages
(
MessageId INT NOT NULL PRIMARY KEY
)
Then you will need to generate the next MessageId yourself, like:
DECLARE @MaxId INT
BEGIN TRAN
SELECT @MaxId = COALESCE(MAX(MessageId), 0) + 1 FROM Messages WITH
(UPDLOCK, HOLDLOCK)
INSERT INTO Messages SELECT @MaxId, ....... FROM Table
COMMIT TRAN
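For completeness, the IDENTITY route Uri mentions would look roughly like this. This is an untested sketch with invented column names, and it does not address the OP's caveat that IDENTITY ranges need managing under merge replication:

```sql
-- Sketch of the IDENTITY alternative to the MAX+1 pattern above:
-- the server assigns the surrogate key on insert.
CREATE TABLE Messages
(
    MessageId INT IDENTITY(1,1) NOT NULL PRIMARY KEY,
    OutlookMessageId VARBINARY(500) NOT NULL -- original MAPI id, kept for lookups
)

INSERT INTO Messages (OutlookMessageId) VALUES (0x0102AB)
SELECT SCOPE_IDENTITY() -- the key just assigned to this row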
"Russell Mangel" <russell@.tymer.net> wrote in message
news:%23JD8adcaGHA.3304@.TK2MSFTNGP04.phx.gbl...
> [snip]
>|||Can MessageID ever change for a given message? If not, then use it as the
natural key (as silly as this name sounds in this case). Put a unique
constraint on it, but create a surrogate key for joins.
Although everybody else will say that the GUID is a poor choice for a
primary key, I'd say use it, but NEVER as a clustered key. Perhaps you'd have
more use for a clustered index on a datetime column (although I can't see one
in your proposed schema). Maybe you could also use an IDENTITY column - it
makes a great candidate for a clustered index.
Of course, a better option still would be to do more research and find a
better natural key (maybe even composite) candidate, then use a single column
surrogate key for joins and references.
ML
http://milambda.blogspot.com/|||There is one other column in the Messages table, "Created"; it's a datetime
that might help us find a better key.
Not sure what to call this key as it has two columns - must be a
composite/natural key?
Is it possible for a person (SenderID) to send a message at the exact same
time?
Hmmmmmmm. Got to think about this for a bit, and maybe I can find another
column that I can use for a composite key.
-- Here is the new Schema,
CREATE TABLE
[dbo].[Messages]
(
[MessageID] [varbinary](max) NOT NULL,
[ParentID] [varbinary](max) NOT NULL,
[SenderID] [int] NOT NULL,
[MessageType] [int] NOT NULL,
[Length] [bigint] NOT NULL,
[To] [nvarchar](max) COLLATE SQL_Latin1_General_CP1_CI_AS NOT NULL,
[CC] [nvarchar](max) COLLATE SQL_Latin1_General_CP1_CI_AS NOT NULL,
[Subject] [nvarchar](max) COLLATE SQL_Latin1_General_CP1_CI_AS NOT NULL,
[Body] [ntext] COLLATE SQL_Latin1_General_CP1_CI_AS NOT NULL,
[HasAttachment] [bit] NOT NULL,
[Created] [datetime] NOT NULL, -- New Column
CONSTRAINT [PK_Messages] PRIMARY KEY CLUSTERED
(
[SenderID] ASC,
[Created] ASC
)
)ON [PRIMARY]|||Russell Mangel wrote:
> [snip]
You can add a constraint on MessageID like this:
CREATE FUNCTION dbo.ufn_sha_hash
(@msg VARBINARY(8000))
RETURNS VARBINARY(20)
AS
BEGIN
RETURN
ISNULL(HASHBYTES('SHA1', @msg), 0x7C7CDB427449446E8EA812C5981C5D6A);
END
GO
CREATE TABLE dbo.Messages
(
MessageID varbinary(8000) NOT NULL,
HashID varbinary(20) NOT NULL PRIMARY KEY,
CHECK (HashID = dbo.ufn_sha_hash(MessageID))
);
GO
Whether you should also use a surrogate key is a different question. It
would probably make sense to do so. Hope this helps.
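A usage sketch for the table defined above (the message bytes here are a made-up stand-in; the point is that the hash is computed once at insert time so the CHECK constraint is satisfied):

```sql
-- Hypothetical insert into the hash-keyed Messages table defined above.
DECLARE @msg VARBINARY(8000)
SET @msg = 0x0102030405 -- stand-in for a real MAPI MessageID
INSERT INTO dbo.Messages (MessageID, HashID)
VALUES (@msg, dbo.ufn_sha_hash(@msg))
-- A duplicate message now fails with a PK violation on HashID.
```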
David Portas, SQL Server MVP
--|||> [snip]
> Whether you should also use a surrogate key is a different question. It
> would probably make sense to do so. Hope this helps.
> --
> David Portas, SQL Server MVP
I didn't know that SQL could do SHA1 hashes, thanks. This table is a
replicated table, so once you add surrogate keys you have to deal with that.
Personally, I would rather have a binary(20) column which replaces the
MessageID column completely than deal with the surrogate key problems with
replicated (merge) databases.
The Messages table also has an FK (FolderID) for the Folders table, so this
column could also be removed and replaced by a binary(20) column. The Folders
table PK (FolderID) would be replaced with a SHA1 binary(20) as well.
Which brings up a couple of questions:
1. If we are going to store a SHA1 hash, wouldn't it be best to use
binary(20) instead of varbinary(20)?
2. Will I have any collisions (PK violations) with the SHA1 hash (more on
this in the next sentence)?
The MessageID is a variable-length binary byte array, and is a globally
unique identifier (it's just an older form of GUID, before the GUID was
born); this MessageID is from MAPI (Exchange Server). So if you had 50
million varbinary(MAX) MessageIDs, and you ran a SHA1 hash against all of
them, would you find duplicated SHA1 hash values?
Thanks
Russell Mangel|||Russell Mangel wrote:
> [snip]
1. True, although VARBINARY is functionally equivalent because the
return value is always 20 bytes.
2. Not unless the MessageID exceeds 8000 bytes, which is the maximum
supported input length for HASHBYTES. Although there is a theoretical risk of
a hash collision, you need about 2^80 hashed messages on average before you
hit a duplicate. You are hashing GUIDs though. I don't know the GUIDs
used by Exchange, but if they have similar qualities to the ones
generated by Windows then theoretically you are much, much more likely
to hit a duplicate GUID than a duplicate hash.
David Portas, SQL Server MVP

Advice requested - How do I convert a Base36 number to SQL int

Hi,
Does anyone know if it is possible to create a Stored Procedure in
SQL7.0/2000 to do the following:
I need to convert an SQL int value to a Base36 value, and the other
direction Base36 to SQL int.
Example:
Take the (Base10) number 22, and convert to "M".
Take the (Base36) number "M" and convert to 22.
The Base36 digits are:
"0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ";
Thanks
Russell Mangel
Las Vegas, NV
Hi Russell,
This is a solution for hex which could be extended:
http://www.umachandar.com/technical...ipts/Main89.htm
http://www.umachandar.com/technical...ipts/Main29.htm
For hex there is also an undocumented procedure xp_varbintohexstr
http://www.umachandar.com/technical...ipts/Main11.htm
John
"Russell Mangel" <russell@.tymer.net> wrote in message
news:ezmZ0qSEFHA.1396@.tk2msftngp13.phx.gbl...
> Hi,
> Does anyone know if it is possible to create a Stored Procedure in
> SQL7.0/2000 to do the following:
> I need to convert an SQL int value to a Base36 value, and the other
> direction Base36 to SQL int.
> Example:
> Take the (Base10) number 22, and convert to "M".
> Take the (Base36) number "M" and convert to 22.
> The Base36 digits are:
> "0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ";
> Thanks
> Russell Mangel
> Las Vegas, NV
>
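For the archives, the conversion itself is a short loop either way. Here is an untested sketch with invented function names; scalar UDFs require SQL 2000, so on SQL 7.0 the same loops would go inside stored procedures with OUTPUT parameters. It assumes a non-negative input:

```sql
-- Hypothetical pair of UDFs: int <-> base36, using the digit string
-- '0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ' from the question.
CREATE FUNCTION dbo.ufn_IntToBase36 (@value INT)
RETURNS VARCHAR(13)
AS
BEGIN
    DECLARE @digits CHAR(36), @result VARCHAR(13)
    SELECT @digits = '0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ', @result = ''
    IF @value = 0 RETURN '0'
    WHILE @value > 0
    BEGIN
        -- take the least significant base-36 digit and prepend it
        SELECT @result = SUBSTRING(@digits, (@value % 36) + 1, 1) + @result,
               @value  = @value / 36
    END
    RETURN @result
END
GO
CREATE FUNCTION dbo.ufn_Base36ToInt (@s VARCHAR(13))
RETURNS INT
AS
BEGIN
    DECLARE @i INT, @result INT
    SELECT @i = 1, @result = 0
    WHILE @i <= LEN(@s)
    BEGIN
        -- CHARINDEX is 1-based, so subtract 1 to get the digit value
        SET @result = @result * 36 +
            CHARINDEX(SUBSTRING(UPPER(@s), @i, 1),
                      '0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ') - 1
        SET @i = @i + 1
    END
    RETURN @result
END
GO
-- SELECT dbo.ufn_IntToBase36(22)   -- 'M'
-- SELECT dbo.ufn_Base36ToInt('M')  -- 22
```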

Advice please - x64 Std Ed best value for money?

Hi
We need to part ways with our 32 Bit SQL 2000 Enterprise Ed server this
year and finally upgrade to 2005.
Purely from a MS / Licensing viewpoint, it would appear that the best
value for money seems to be x64 Std Ed on the per processor license
option on a W2003 x64 Std Ed? (~US$ 5000 / processor)
Rationale
=> It doesn't seem to make sense to go back to 32 bit [PAE / 3GB AWE
and the like]
=> It seems hard to justify the 4 fold price differential between x64
SE and EE - in SQL 2000 we haven't used any of the EE only features
like Indexed Views, and the major reason for EE was for the > 2GB
memory usage.
The server will host about 5 mostly OLTP / LOB Databases Total size of
around 250GB, with around 200 concurrent users. Our current EE 32 bit
setup is 2 x 3Ghz Xeon MP with 6GB RAM and usually runs at about 40%
utilisation.
I have been through
http://www.microsoft.com/sql/prodinfo/features/compare-features.mspx
Am I missing something obvious? (SE x64 can use >> 4GB RAM, has no DB
Size Limitations, and unlimited concurrent connection limitations,
right?)
Thanks in advance
Stuart
Hi Stuart
I am not sure if this will be posted twice!
I think your reasoning is fine if memory was the only issue that you need
Enterprise Edition. If you are definitely not going to be using any of the
features available only in EE such as partitioning, online restores, online
indexing etc then SE will be sufficient.
John
"NonNB" wrote:

> [snip]
|||Thanks John - SE x64 will definitely be on the shortlist.
Regards
Stuart
John Bell wrote:
> Hi Stuart
> I am not sure if this will be posted twice!
> I think your reasoning is fine if memory was the only issue that you need
> Enterprise Edition. If you are definately not going to be using any of the
> features available only in EE such as partitioning, online restores, online
> indexing etc then SE will be sufficient.
> John
> "NonNB" wrote:

Advice Please

Hello

Which is better and faster, and why?
Writing Select Statement with joins in Stored procedure,
or creating view and calling it from stored procedure (select * from view)..

If the view has the same join in it as the select statement, then the speed will be the same.

|||

I heard that using a view in such a case will slow down my performance?

|||

I've never heard that. Do you have a reference? You could test it to verify.

|||

It is likely faster to just use the stored procedure, though in your simple example, the benefit would be minimal. If you are applying where clauses or order by clauses (can't sort in a view) in the stored procedure, the benefit might be more significant. Regardless, the SQL engine has a query optimizer and caches the query plan in most cases, so your best bet is to write an ad-hoc query to call each case, and view the plan... see if it is different, and decide which would be more efficient. My guess is that in a lot of cases, the plan will be the same.

That said, an advantage of the view is reusability, and sometimes this is traded for a very slight performance hit (select some columns that aren't always used). Also, you can use indexed views, which will actually greatly increase performance for join intensive queries with some storage costs as well as a performance hit on insert / update / delete operations. These are probably most efficient when joining to lookup type tables that don't change much.

One other thing... I read a couple of times (don't remember where) that using SELECT * is inefficient for some reason (don't remember why either...). It could be better to specify the columns explicitly.
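Since indexed views came up, here is roughly what one looks like. This is a sketch with invented table and column names: the view must be created WITH SCHEMABINDING, aggregate views need COUNT_BIG(*), and it is only materialized once a unique clustered index is built on it:

```sql
-- Hypothetical indexed view: the join/aggregate result is materialized
-- and maintained by the engine, trading insert/update cost for read speed.
CREATE VIEW dbo.vOrderTotals
WITH SCHEMABINDING
AS
SELECT o.CustomerID,
       COUNT_BIG(*)  AS OrderCount,
       SUM(o.Amount) AS TotalAmount
FROM dbo.Orders o
GROUP BY o.CustomerID
GO
-- The unique clustered index is what actually persists the view's rows.
CREATE UNIQUE CLUSTERED INDEX IX_vOrderTotals
    ON dbo.vOrderTotals (CustomerID)
GO
```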

Advice on working across databases?

I have two databases, one for DATA and another for GIS data. Each customer
has a table which stores a 3 letter code. This code is different for all
customers. For each customer the GIS db is labeled like so (<3 letter
code>_GIS). I would like to create views that are stored in the DATA db, but
actually retrieve data from a table in the GIS db. The issue lies with
accessing the code for each individual customer.
Sample statement:
create view vw_Space
as
select ID, Vector1, Vector2 from <code>_GIS.dbo.tablename
But I need to access the customer code from a table and concatenate that
into my select statement in my view. This way the views are dynamic, so to
speak. How can I accomplish this? If you need a better explanation please
contact me. Thanks
Tim,
Add in fully qualified names for your databases i.e.,
yourdatabase.yourowner.yourtable and try:
CREATE TABLE DATA
(CODE CHAR(3) NOT NULL)
CREATE TABLE GIS
(CODE CHAR(7) NOT NULL,
VECTOR1 INT NOT NULL,
VECTOR2 INT NOT NULL)
INSERT DATA
VALUES ('ABC')
INSERT DATA
VALUES ('DEF')
INSERT GIS
VALUES ('ABC_GIS', 1,2)
INSERT GIS
VALUES ('DEF_GIS', 1,2)
INSERT GIS
VALUES ('DEF_GIS', 3,4)
SELECT D.CODE, VECTOR1, VECTOR2
FROM DATA D JOIN GIS G
ON D.CODE = LEFT(G.CODE,3)
HTH
Jerry
"Tim Gains" <kixman@.excite.com> wrote in message
news:u$iOxWQyFHA.788@.tk2msftngp13.phx.gbl...
> [snip]
>|||Thanks for the reply, but what I need to do is extract the code from the
users table and concatenate that into my view's select statement; again, this
would make the view dynamic, if this can be done. I'm not able to add any
tables to any of the DBs.
The table is called tblInstalation; this table houses the code field, with
only a single row, and code is a field in the table. The following is wrong,
but will get my point across better perhaps:
create view vMyView as
select first *
from (select code from tblInstalation) + _GIS.dbo.tblGISData
Could this be accomplished in such a manner, and how?
Thanks.
"Jerry Spivey" <jspivey@.vestas-awt.com> wrote in message
news:et3QjiQyFHA.2848@.TK2MSFTNGP15.phx.gbl...
> [snip]
>|||OK... I'm a little confused. The sample before used CREATE TABLE just so
you would have a working example for the query enclosed. "select first *" -
first record? By what criteria? Also, the FROM is trying to "build" a
database name: code + _GIS?
Please advise.
HTH
Jerry
"Tim Gains" <papow@.yahoo.com> wrote in message
news:%23u03e1QyFHA.720@.TK2MSFTNGP15.phx.gbl...
> thanks for the reply, but what I need to do is extract the code from the
> users table, conctenate that in my views select statement, again this
> would make the view dynamic if this can be done. I'm not able to ad any
> tables to any of the DB's.
> The table is called tblInstalation, this table houses the code field w/
> only a single row and code is a field in the table. The following is
> wrong, but will get my point across better perhaps.
> create view vMyView as
> select first *
> from (select code from tblInstalation) +_GIS .dbo.tblGISData
> How and could this be accomplished in such a manner?
> Thanks.
>
>
> "Jerry Spivey" <jspivey@.vestas-awt.com> wrote in message
> news:et3QjiQyFHA.2848@.TK2MSFTNGP15.phx.gbl...
>|||Jerry,
Bottom line I need to create a view in DB1 that will return data from a
table in another database (GIS). Here is an example:
create view vSubscriber
as
select first
, last
, startdate
, enddate
, subscriberid
, email
from ABC_GIS.dbo.subscriber
My question is how do I go about concatenating in my view, a field from a
DB1 table in my view. This field would be concatenated in the fully
qualified name portion of the select statement in the view. Again, each
customer has a 3 letter code that resides in his database, this code would
make up a portion of the GIS database name, say for instance ABC_GIS. This
code (ABC) would need to be extracted and concatenated into the views
statement.
create view vSubscriber
as
select first
, last
, startdate
, enddate
, subscriberid
, email
from <DB1.Field>_GIS.dbo.subscriber
ultimately would yield:
create view vSubscriber
as
select first
, last
, startdate
, enddate
, subscriberid
, email
from DB2.dbo.subscriber
Does that make more sense? I'm not sure if it can be done; perhaps with an inline
function returning a table. Your thoughts please.
"Jerry Spivey" <jspivey@.vestas-awt.com> wrote in message
news:%23c0lT6QyFHA.428@.TK2MSFTNGP10.phx.gbl...
> Ok...I'm confused. The sample before used CREATE TABLE just so you would
> have a working example for the query enclosed. >first * (first record?
> by what criteria?) Also, the FROM is trying to "build" a database name >
> code + _GIS?
> Please advise.
> HTH
> Jerry
> "Tim Gains" <papow@.yahoo.com> wrote in message
> news:%23u03e1QyFHA.720@.TK2MSFTNGP15.phx.gbl...
>|||Tim,
I put it in a proc so you wouldn't have to create so many views.
Try:
CREATE PROC USP_SUBSCRIBER
@CODE CHAR(3)
AS
DECLARE @SQL VARCHAR(500)
SET @SQL = 'SELECT first,last,startdate,enddate,subscriberid,email from ' +
@CODE + '_GIS.dbo.subscriber'
EXEC(@SQL)
EXEC USP_SUBSCRIBER 'ABC'
HTH
Jerry
"Tim Gains" <papow@.yahoo.com> wrote in message
news:O%23sPLIRyFHA.1256@.TK2MSFTNGP09.phx.gbl...
> Jerry,
> Bottom line I need to create a view in DB1 that will return data from a
> table in another database (GIS). Here is an example:
> create view vSubscriber
> as
> select first
> , last
> , startdate
> , enddate
> , subscriberid
> , email
> from ABC_GIS.dbo.subscriber
> My question is how do I go about concatenating in my view, a field from a
> DB1 table in my view. This field would be concatenated in the fully
> qualified name portion of the select statement in the view. Again, each
> customer has a 3 letter code that resides in his database, this code would
> make up a portion of the GIS database name, say for instance ABC_GIS. This
> code (ABC) would need to be extracted and concatenated into the views
> statement.
>
> create view vSubscriber
> as
> select first
> , last
> , startdate
> , enddate
> , subscriberid
> , email
> from <DB1.Field>_GIS.dbo.subscriber
> ultimately would yield:
> create view vSubscriber
> as
> select first
> , last
> , startdate
> , enddate
> , subscriberid
> , email
> from DB2.dbo.subscriber
>
> Does that make more sense, not sure if it can be done,perhaps with an
> inline function returning a table, your thought please.
>
> "Jerry Spivey" <jspivey@.vestas-awt.com> wrote in message
> news:%23c0lT6QyFHA.428@.TK2MSFTNGP10.phx.gbl...
>|||Thanks, but I need these to be views, can this be done?
"Jerry Spivey" <jspivey@.vestas-awt.com> wrote in message
news:uaCbVRRyFHA.3000@.TK2MSFTNGP12.phx.gbl...
> Tim,
> I put it in a proc so you wouldn't have to create so many views.
> Try:
> CREATE PROC USP_SUBSCRIBER
> @CODE CHAR(3)
> AS
> DECLARE @SQL VARCHAR(500)
> SET @SQL = 'SELECT first,last,startdate,enddate,subscriberid,email from '
> + @CODE + '_GIS.dbo.subscriber'
> EXEC(@SQL)
> EXEC USP_SUBSCRIBER 'ABC'
> HTH
> Jerry
> "Tim Gains" <papow@.yahoo.com> wrote in message
> news:O%23sPLIRyFHA.1256@.TK2MSFTNGP09.phx.gbl...
Basically I want these to act as tables.
"Jerry Spivey" <jspivey@.vestas-awt.com> wrote in message
news:uaCbVRRyFHA.3000@.TK2MSFTNGP12.phx.gbl...
> Tim,
> I put it in a proc so you wouldn't have to create so many views.
> Try:
> CREATE PROC USP_SUBSCRIBER
> @CODE CHAR(3)
> AS
> DECLARE @SQL VARCHAR(500)
> SET @SQL = 'SELECT first,last,startdate,enddate,subscriberid,email from '
> + @CODE + '_GIS.dbo.subscriber'
> EXEC(@SQL)
> EXEC USP_SUBSCRIBER 'ABC'
> HTH
> Jerry
> "Tim Gains" <papow@.yahoo.com> wrote in message
> news:O%23sPLIRyFHA.1256@.TK2MSFTNGP09.phx.gbl...
>|||and so that it could be parameterized.
"Jerry Spivey" <jspivey@.vestas-awt.com> wrote in message
news:uaCbVRRyFHA.3000@.TK2MSFTNGP12.phx.gbl...
> Tim,
> I put it in a proc so you wouldn't have to create so many views.
> Try:
> CREATE PROC USP_SUBSCRIBER
> @CODE CHAR(3)
> AS
> DECLARE @SQL VARCHAR(500)
> SET @SQL = 'SELECT first,last,startdate,enddate,subscriberid,email from '
> + @CODE + '_GIS.dbo.subscriber'
> EXEC(@SQL)
> EXEC USP_SUBSCRIBER 'ABC'
> HTH
> Jerry
> "Tim Gains" <papow@.yahoo.com> wrote in message
> news:O%23sPLIRyFHA.1256@.TK2MSFTNGP09.phx.gbl...
>|||How do I know what value (3 letter code) you want used? Are you trying to
supply it when you query the view? Or are you just trying to create a bunch
of views using the codes from the other table as a one time op?
"Tim Gains" <papow@.yahoo.com> wrote in message
news:%23LVC$TRyFHA.3772@.TK2MSFTNGP11.phx.gbl...
> Basically I want these to act as tables.
>
> "Jerry Spivey" <jspivey@.vestas-awt.com> wrote in message
> news:uaCbVRRyFHA.3000@.TK2MSFTNGP12.phx.gbl...
>
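Since a view definition can't reference an object name built at runtime, one workaround in the spirit of this thread is to generate the view with dynamic SQL: read the code from tblInstalation once and recreate the view whenever the code changes. A rough, untested sketch using the names from the posts above, and assuming the <code>_GIS database already exists:

```sql
-- Read the single installation code and rebuild vSubscriber so it points at
-- the matching <code>_GIS database. Run this once per installation (or from
-- a setup script), not per query.
DECLARE @code CHAR(3)
DECLARE @sql VARCHAR(1000)

SELECT @code = code FROM tblInstalation

-- Drop the old view if it exists (SQL 2000-era check via sysobjects)
IF EXISTS (SELECT * FROM sysobjects WHERE name = 'vSubscriber' AND type = 'V')
    DROP VIEW vSubscriber

SET @sql = 'CREATE VIEW vSubscriber AS
    SELECT first, last, startdate, enddate, subscriberid, email
    FROM ' + @code + '_GIS.dbo.subscriber'

EXEC (@sql)
```

Once generated, vSubscriber behaves like a plain table from the application's point of view, which seems to be what Tim is after; Jerry's stored procedure remains the better fit if the code has to vary per call.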

Advice on what is possible in a stored procedure

Hi
I am about to write my first stored procedure and would appreciate some
advice.
Can anyone tell me if the following is feasible.
I have a record in a database that a user updates over a web interface. The
updated record is sent back to the MSDE database.
I would like to use a trigger event to prevent the original record from
being changed and instead create a new record with the updated information.
The 'inuse' flag of the original record would be set to false, and set to
true for the new record.
What I would like to do is in the Trigger call a stored procedure which
creates the new record and then cancels the original update. Can this be
done?
Also can I pass the contents of the virtual table 'inserted' to a stored
procedure? If I can, can I pass them 'as one' into a custom type in my
stored procedure or will I have to pass each of the fields in inserted
individually to the stored procedure.
Thanks
June
Consider that a trigger is simply a stored procedure. The on update trigger
can do whatever you want it to--including adding or changing rows in the
same or other tables.
____________________________________
William (Bill) Vaughn
Author, Mentor, Consultant
Microsoft MVP
www.betav.com
Please reply only to the newsgroup so that others can benefit.
This posting is provided "AS IS" with no warranties, and confers no rights.
__________________________________
"June Macleod" <junework@.hotmail.com> wrote in message
news:%23dgXdylaEHA.4092@.TK2MSFTNGP10.phx.gbl...
> Hi
> I am about to write my first stored procedure and would appreciate some
> advice.
> Can anyone tell me if the following is feasible.
> I have a record in a database that a user updates over a web interface.
The
> updated record is sent back to the MSDE database.
> I would like to use a trigger event to prevent the original record from
> being changed and instead create a new record with the updated
information.
> The 'inuse' flag of the original record would be set to false, and set to
> true for the new record.
> What I would like to do is in the Trigger call a stored procedure which
> creates the new record and then cancels the original update. Can this be
> done?
> Also can I pass the contents of the virtual table 'inserted' to a stored
> procedure? If I can, can I pass them 'as one' into a custom type in my
> stored procedure or will I have to pass each of the fields in inserted
> individually to the stored procedure.
> Thanks
> June
>
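To make Bill's point concrete: an INSTEAD OF UPDATE trigger can do exactly what June describes without a separate stored procedure, because the `inserted` table is visible inside the trigger body. A hedged sketch with a made-up table shape (id, somevalue, inuse; id assumed to be an IDENTITY column):

```sql
-- Suppress the original update: retire the old rows and insert the new
-- values as fresh rows flagged as in use.
CREATE TRIGGER trg_Record_Version ON tblRecord
INSTEAD OF UPDATE
AS
BEGIN
    -- Flag the original rows as no longer in use, instead of changing them
    UPDATE t
    SET inuse = 0
    FROM tblRecord t
    JOIN inserted i ON t.id = i.id

    -- Re-insert the updated column values as new, active rows
    INSERT tblRecord (somevalue, inuse)
    SELECT i.somevalue, 1
    FROM inserted i
END
```

On the second question: SQL 2000/MSDE has no way to pass `inserted` to a stored procedure as a single value (no table-valued parameters); you would have to pass each column individually or copy `inserted` into a temp table, which is another reason to keep the logic in the trigger itself.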

Advice on Web Server & SQL 2k Server Setup?

First, let me apologise, I tried to search for the answer to this, but the search wouldn't work for me, and other searches on the web have come up empty.

I'm just about to move my asp.net/SQL 2k website to dedicated hosting, and as our budget isn't so big, we can't afford two servers, thus both the web-server and SQL server need to be on the same machine.

My question is, what would people recommend for the software setup? The server is specced as follows:

- Pentium 4, 3GHz
- 2GB RAM
- 15GB primary partition
- 100GB secondary partition
- 250GB SATA secondary drive
- Windows Server 2003 SE

The storage arrangement is dictated by the host, as they supply and setup the server. I have no control over this.

I was thinking about putting SQL Server 2k Ent on the secondary partition, and the website on the secondary drive. I'm not sure what to set SQL Server's memory configuration to; I've always run both the webserver and SQL server on the same machine with dynamic memory allocation for SQL Server in my development environments, with no problem, but also with no load.

What would be the best compromise considering the requirements here? I appreciate that having a machine dedicated to SQL Server is best, but I can't do that here, yet. Any help humbly appreciated!|||Ugh. I don't have a lot of experience with low-budget web/database implementations and even less experience with hosted systems.

From a db perspective, exclusively, I would think that your most critical issue would be to separate the data files from the log files and then ensuring that you have another place to put your db backups (I saw only two physical drives, so I think you may be one short already).

There's not enough info to guesstimate sizing or performance requirements, so I'll pass there.

You know already that mixing SQL and IIS is a bad idea; if it can't be helped, then it can't be helped, but it is the first thing I would address down the road if more $$$ become available.

You're a business, you have a budget. You have to do what you have to do in order to make the business take off.

Know your hosting company. Know what their SLAs are. Know what defenses they are providing for you (firewalls, virus scanning, intrusion detection, system redundancy, site redundancy, disaster recovery, etc). Know what their backup policy is. Know what their backup retention policy is. Be prepared to test it/validate it.

Arm yourself with information and be prepared to go back to the business and advocate for more $$$. Be sure to always couch your requests for more $$ in terms of a) more business, b) cost savings and/or c) security/compliance.

Best of luck,

hmscott|||Hey hmscott, thanks for the advice, there are some useful tips in there, especially concerning the splitting of data and log files! Performance-wise, it's a 50MB database with the site serving about 200 people concurrently on average, with peaks of up to 600 sometimes.

Two physical drives yes. I'll be handling daily backups of the machine by the pipe, as the hosts backup options are too costly for us. This has worked for us in the past and has already proven itself a viable approach.

I need to find some DBA guide books now, I think, and grow from a developer's understanding of SQL to the management side.
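One concrete knob worth setting on a shared web/SQL box like the one above is `max server memory`, so SQL Server's buffer pool can't squeeze out IIS/ASP.NET. The figure below is purely illustrative for a 2GB machine, not a recommendation:

```sql
-- Cap SQL Server's buffer pool at roughly half the box (value is in MB).
-- 'max server memory' is an advanced option, so expose it first.
EXEC sp_configure 'show advanced options', 1
RECONFIGURE
EXEC sp_configure 'max server memory', 1024
RECONFIGURE
```

With a fixed cap in place, the remaining RAM stays available to the web server even under database load; the right split depends on measuring both workloads.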

Advice on upgrading 2000 to 2005 needed

Hi,
I've read through the threads related to 2000-2005 upgrade I can find on
this newsgroup. From what I've gathered, seems there are the following three
ways to upgrade.
1. in place
2. install a new instance that runs 2005 on the same database server
3. Set up a different server and then install 2005 on it.
We're currently running SQL 2000 SP3 on windows 2003. Is it true that
installing SP4 on SQL 2000 is required before it can be brought up to 2005
for in place upgrade?
For the other two, I'm not very clear which one is better.
If we do option 3, we need to make DNS changes for server IP/name move which
always doesn't happen right away in my environment. That would most likely
extend upgrade time. But the obvious benefit is in case something wrong
happens with the upgrade, I can have the original 2000 server to safely go
back to.
For option 2, would a new 2005 instance have any negative impact on the sql
2000 instance? Are they totally independent of each other? I need to know
for sure that if the 2005 instance doesn't work, the 2000 still works fine.
I'd appreciate any insight or real world experience (better) regarding 2005
upgrade. Things don't always go the way as they are instructed in the doc.
Thanks,
Bing
Upgrades supported by SQL Server 2005 include SQL Server 7.0 SP4 and SQL 2000
SP3.
About a side-by-side installation of SQL Server 2005: I'm currently running
on my laptop (2GB RAM...) two SQL Server 2000 instances and two SQL Server
2005 instances; since the installation I've never had a problem.
The only important consideration in this case is: do I have sufficient RAM
for both instances? Surely you'll need to configure Min memory
and Max memory appropriately for each instance.
More: currently I'm administering my SQL Server 2000 instances from SQL
Server Management Studio.
Gilberto Zampatti
"bing" wrote:

> Hi,
> I've read through the threads related to 2000-2005 upgrade I can find on
> this newsgroup. From what I've gathered, seems there are the following three
> ways to upgrade.
> 1. in place
> 2. install a new instance that runs 2005 on the same database server
> 3. Set up a different server and then install 2005 on it.
> We're currently running SQL 2000 SP3 on windows 2003. Is it true that
> installing SP4 on SQL 2000 is required before it can be brought up to 2005
> for in place upgrade?
> For the rest two, I'm not very clear which one is better.
> If we do option 3, we need to make DNS changes for server IP/name move which
> always doesn't happen right away in my environment. That would most likely
> extend upgrade time. But the obvious benefit is in case something wrong
> happens with the upgrade, I can have the original 2000 server to safely go
> back to.
> For option 2, would a new 2005 instance have any negative impact on the sql
> 2000 instance? Are them totally independent of each other? I need to know
> for sure if the 2005 instance doesn't work, the 2000 still works fine.
> I'd appreciate any insight or real world experience (better) regarding 2005
> upgrade. Things don't always go the way as they are instructed in the doc.
> Thanks,
> Bing
|||Thanks for the response. RAM allocation is a very good point. Our SQL
Server 2000 server (Standard), which is running only one instance, currently
has 2GB RAM. If I install 2005 (Standard) on the same server, that will
compete with 2000 for RAM.
Bing
"Gilberto Zampatti" wrote:
> Upgrade supported by SQL Server 2005 include SQL Server 7.0 SP$ and SQL 2000
> SP3.
> About a side-by-side installation of SQL Server 2005, i'm currently running
> on my laptop (2Gb RAM...) two SQL Server 2000 instances and two SQL Server
> 2005 instances; since the installation i've never had a problem.
> The only important consideration in this case is: do i have sufficient RAM
> for both instances? Surely you'll need to configure appropriately Min memory
> and Max memory for each instance.
> More: currently i'm administering my SQL Server 2000 instances from SQL
> Server Management Studio.
> Gilberto Zampatti
> "bing" wrote:
|||On May 18, 9:49 am, bing <b...@.discussions.microsoft.com> wrote:
> Thanks for the response. RAM allocation is a very good point. Our SQL
> server 2000 server (Standard) which is running only one instance currently
> has 2G RAM. If I install 2005 (Standard) on the save server, that will
> compete with 2000 for RAM.
> Bing
>
> "Gilberto Zampatti" wrote:
Here is my experience in a nutshell - as much as I remember anyway. I
just did a 2000 to 2005 upgrade for 8 production databases varying
from a few hundred megs to 50 gigs. This approach with a new server
allowed us to test and to hold off cutting over until we were 100% sure
everything was working. There are many ways to do this, but this is how I
did it....
We built and configured a new 2005 server first. Here is the overview
of the check list:
1. Build new server with network engineers.
2. Discuss best place to keep logfiles, databases, backups etc. Do
appropriate sizing etc.
3. Decide what new services to use and get them configured and
running. EX: We are not using analysis service.
4. Configure database mail.
5. Configure alerts and get backups going for system databases etc.
6. Do a backup and restore from 2000 to 2005 and get the db backup and
log maintenance jobs going. I did weekly stats updates and alter index
reorganize.
7. Created all Windows and SQL logins on the new box - we are mixed mode.
8. This allowed us to test the apps on the new server, permissions,
etc. The schemas can be troublesome. I dropped all users after
restoring and then reapplied the permissions.
9. Here was my actual cutover checklist of things the SSIS package did:
a. take down apps/web server during cutover
b. backup 2000 databases to UNC path
c. restore databases to new 2005 box from UNC path
d. drop permissions (logins, schemas, roles, users)
e. re-assign permissions as required
f. rebuild indexes
g. set database compatibility level (90) for 2005
h. re-point all applications to new SQL instance
i. run backup and maintenance jobs to make sure all working
j. test and run other SSIS jobs
k. detach old 2005 databases but leave on the server for a while in case
of issues.
Kristina
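Steps b, c and g of Kristina's checklist map onto a handful of T-SQL statements. A sketch with made-up database names and file paths (the WITH MOVE clauses depend entirely on where the new box keeps its data and log files):

```sql
-- On the 2000 box: back up to a UNC path reachable from both servers
BACKUP DATABASE MyDb TO DISK = '\\fileserver\share\MyDb.bak'

-- On the 2005 box: restore, relocating the data and log files to the
-- locations chosen in steps 1-2 of the checklist
RESTORE DATABASE MyDb FROM DISK = '\\fileserver\share\MyDb.bak'
WITH MOVE 'MyDb_Data' TO 'D:\Data\MyDb.mdf',
     MOVE 'MyDb_Log'  TO 'E:\Logs\MyDb.ldf'

-- Step g: raise the compatibility level to 90 (SQL Server 2005)
EXEC sp_dbcmptlevel 'MyDb', 90
```

The logical file names ('MyDb_Data', 'MyDb_Log') must match the ones reported by RESTORE FILELISTONLY against the backup; they rarely match the physical file names exactly.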
|||"Kristina" wrote:

> On May 18, 9:49 am, bing <b...@.discussions.microsoft.com> wrote:
> Here is my experience in a nutshell - as much as I remember anyway. I
> just did a 2000 to 2005 upgrade for 8 production databases varying
> from a few hundred megs to 50 gigs.This approach with a new server
> allowed us to test and to hold cutting over until we were 100% sure
> everything was working. There are may ways to do this, but this how I
> did it....
> We built and configured a new 2005 server first. Here is the overview
> of the check list:
> 1. Build new server with network engineers.
> 2. Discuss best place to keep logfiles, databases, backups etc. Do
> appropriate sizing etc.
> 3. Decide what new services to use and get them configured and
> running. EX: We are not using analysis service.
> 4. Configure database mail.
> 5. Configure alerts and get backups going for system databases etc.
> 6. do a backup and restore from 2000 to 2005 and get the db backup and
> log maintenence jobs going. I did weekly stats update and alter index
> reorganize.
> 7. Created all windows, sql logins on the new box - we are mixed mode.
> 8. This allowed us to test the apps on the new server and permissions
> etc. The schema's can be troublesome. I dropped all users after
> restoring and then reapplied the permissions.
> 9.Here was my actual cutover checklist of things the SSIS package did
> a. take down apps/or web server during cutover
> b.backup 2000 databases to unc path
> c.resotore databases to new 2005 box from unc path.
> d.drop permissions (logins, schemas, roles, users)
> e. re-assign permissions as required.
> f.rebuild indexes
> g.set database compatibility level (90) for 2005
> h.re-point all applications to new SQL instance
> i.run backup and maintenence jobs to make sure all working.
> j.test and run other SSIS jobs
> k.detach old 2005 databases but leave on the server for awhile in case
> of issues.
> Kristina
>
Excellent! Thanks much. We're in a similar situation. For the last step
k, I think you meant 'detach old 2000 databases', right?
Bing
|||On May 18, 11:11 am, bing <b...@.discussions.microsoft.com> wrote:
> "Kristina" wrote:
> Excellent! Thanks much. We're in a similar situation. For the last step
> k, I think you meant 'detach old 2000 databases', right?
> Bing
yes, I am a poor writer.....it wasn't really that bad to do the
upgrade. I got the Wrox Press SQL 2005 Administration book and it
helped tons...
Good LUCK!
|||"Kristina" wrote:

> On May 18, 11:11 am, bing <b...@.discussions.microsoft.com> wrote:
> yes, I am a poor writer.....it wasn't really that bad to do the
> upgrade. I got the wrox press SQL 2005 Administration book and it
> helped tons...
> Good LUCK!
>
Glad to hear it isn't that bad. Thanks again, Kristina.
Bing

Advice on upgrading 2000 to 2005 needed

Hi,
I've read through the threads related to 2000-2005 upgrade I can find on
this newsgroup. From what I've gathered, seems there are the following three
ways to upgrade.
1. in place
2. install a new instance that runs 2005 on the same database server
3. Set up a different server and then install 2005 on it.
We're currently running SQL 2000 SP3 on windows 2003. Is it true that
installing SP4 on SQL 2000 is required before it can be brought up to 2005
for in place upgrade?
For the rest two, I'm not very clear which one is better.
If we do option 3, we need to make DNS changes for server IP/name move which
always doesn't happen right away in my environment. That would most likely
extend upgrade time. But the obvious benefit is in case something wrong
happens with the upgrade, I can have the original 2000 server to safely go
back to.
For option 2, would a new 2005 instance have any negative impact on the sql
2000 instance? Are them totally independent of each other? I need to know
for sure if the 2005 instance doesn't work, the 2000 still works fine.
I'd appreciate any insight or real world experience (better) regarding 2005
upgrade. Things don't always go the way as they are instructed in the doc.
Thanks,
BingUpgrade supported by SQL Server 2005 include SQL Server 7.0 SP$ and SQL 2000
SP3.
About a side-by-side installation of SQL Server 2005, i'm currently running
on my laptop (2Gb RAM...) two SQL Server 2000 instances and two SQL Server
2005 instances; since the installation i've never had a problem.
The only important consideration in this case is: do i have sufficient RAM
for both instances? Surely you'll need to configure appropriately Min memory
and Max memory for each instance.
More: currently i'm administering my SQL Server 2000 instances from SQL
Server Management Studio.
Gilberto Zampatti
"bing" wrote:
> Hi,
> I've read through the threads related to 2000-2005 upgrade I can find on
> this newsgroup. From what I've gathered, seems there are the following three
> ways to upgrade.
> 1. in place
> 2. install a new instance that runs 2005 on the same database server
> 3. Set up a different server and then install 2005 on it.
> We're currently running SQL 2000 SP3 on windows 2003. Is it true that
> installing SP4 on SQL 2000 is required before it can be brought up to 2005
> for in place upgrade?
> For the rest two, I'm not very clear which one is better.
> If we do option 3, we need to make DNS changes for server IP/name move which
> always doesn't happen right away in my environment. That would most likely
> extend upgrade time. But the obvious benefit is in case something wrong
> happens with the upgrade, I can have the original 2000 server to safely go
> back to.
> For option 2, would a new 2005 instance have any negative impact on the sql
> 2000 instance? Are them totally independent of each other? I need to know
> for sure if the 2005 instance doesn't work, the 2000 still works fine.
> I'd appreciate any insight or real world experience (better) regarding 2005
> upgrade. Things don't always go the way as they are instructed in the doc.
> Thanks,
> Bing|||Thanks for the response. RAM allocation is a very good point. Our SQL
server 2000 server (Standard) which is running only one instance currently
has 2G RAM. If I install 2005 (Standard) on the save server, that will
compete with 2000 for RAM.
Bing
"Gilberto Zampatti" wrote:
> Upgrade supported by SQL Server 2005 include SQL Server 7.0 SP$ and SQL 2000
> SP3.
> About a side-by-side installation of SQL Server 2005, i'm currently running
> on my laptop (2Gb RAM...) two SQL Server 2000 instances and two SQL Server
> 2005 instances; since the installation i've never had a problem.
> The only important consideration in this case is: do i have sufficient RAM
> for both instances? Surely you'll need to configure appropriately Min memory
> and Max memory for each instance.
> More: currently i'm administering my SQL Server 2000 instances from SQL
> Server Management Studio.
> Gilberto Zampatti
> "bing" wrote:
> > Hi,
> >
> > I've read through the threads related to 2000-2005 upgrade I can find on
> > this newsgroup. From what I've gathered, seems there are the following three
> > ways to upgrade.
> >
> > 1. in place
> > 2. install a new instance that runs 2005 on the same database server
> > 3. Set up a different server and then install 2005 on it.
> >
> > We're currently running SQL 2000 SP3 on windows 2003. Is it true that
> > installing SP4 on SQL 2000 is required before it can be brought up to 2005
> > for in place upgrade?
> >
> > For the rest two, I'm not very clear which one is better.
> >
> > If we do option 3, we need to make DNS changes for server IP/name move which
> > always doesn't happen right away in my environment. That would most likely
> > extend upgrade time. But the obvious benefit is in case something wrong
> > happens with the upgrade, I can have the original 2000 server to safely go
> > back to.
> >
> > For option 2, would a new 2005 instance have any negative impact on the sql
> > 2000 instance? Are them totally independent of each other? I need to know
> > for sure if the 2005 instance doesn't work, the 2000 still works fine.
> >
> > I'd appreciate any insight or real world experience (better) regarding 2005
> > upgrade. Things don't always go the way as they are instructed in the doc.
> >
> > Thanks,
> >
> > Bing|||On May 18, 9:49 am, bing <b...@.discussions.microsoft.com> wrote:
> Thanks for the response. RAM allocation is a very good point. Our SQL
> server 2000 server (Standard) which is running only one instance currently
> has 2G RAM. If I install 2005 (Standard) on the save server, that will
> compete with 2000 for RAM.
> Bing
>
> "Gilberto Zampatti" wrote:
> > Upgrade supported by SQL Server 2005 include SQL Server 7.0 SP$ and SQL 2000
> > SP3.
> > About a side-by-side installation of SQL Server 2005, i'm currently running
> > on my laptop (2Gb RAM...) two SQL Server 2000 instances and two SQL Server
> > 2005 instances; since the installation i've never had a problem.
> > The only important consideration in this case is: do i have sufficient RAM
> > for both instances? Surely you'll need to configure appropriately Min memory
> > and Max memory for each instance.
> > More: currently i'm administering my SQL Server 2000 instances from SQL
> > Server Management Studio.
> > Gilberto Zampatti
> > "bing" wrote:
> > > Hi,
> > > I've read through the threads related to 2000-2005 upgrade I can find on
> > > this newsgroup. From what I've gathered, seems there are the following three
> > > ways to upgrade.
> > > 1. in place
> > > 2. install a new instance that runs 2005 on the same database server
> > > 3. Set up a different server and then install 2005 on it.
> > > We're currently running SQL 2000 SP3 on windows 2003. Is it true that
> > > installing SP4 on SQL 2000 is required before it can be brought up to 2005
> > > for in place upgrade?
> > > For the rest two, I'm not very clear which one is better.
> > > If we do option 3, we need to make DNS changes for server IP/name move which
> > > always doesn't happen right away in my environment. That would most likely
> > > extend upgrade time. But the obvious benefit is in case something wrong
> > > happens with the upgrade, I can have the original 2000 server to safely go
> > > back to.
> > > For option 2, would a new 2005 instance have any negative impact on the sql
> > > 2000 instance? Are them totally independent of each other? I need to know
> > > for sure if the 2005 instance doesn't work, the 2000 still works fine.
> > > I'd appreciate any insight or real world experience (better) regarding 2005
> > > upgrade. Things don't always go the way as they are instructed in the doc.
> > > Thanks,
> > > Bing- Hide quoted text -
> - Show quoted text -
Here is my experience in a nutshell - as much as I remember anyway. I
just did a 2000 to 2005 upgrade for 8 production databases varying
from a few hundred megs to 50 gigs.This approach with a new server
allowed us to test and to hold cutting over until we were 100% sure
everything was working. There are may ways to do this, but this how I
did it....
We built and configured a new 2005 server first. Here is the overview
of the check list:
1. Build new server with network engineers.
2. Discuss best place to keep logfiles, databases, backups etc. Do
appropriate sizing etc.
3. Decide what new services to use and get them configured and
running. EX: We are not using analysis service.
4. Configure database mail.
5. Configure alerts and get backups going for system databases etc.
6. do a backup and restore from 2000 to 2005 and get the db backup and
log maintenence jobs going. I did weekly stats update and alter index
reorganize.
7. Created all windows, sql logins on the new box - we are mixed mode.
8. This allowed us to test the apps on the new server and permissions
etc. The schema's can be troublesome. I dropped all users after
restoring and then reapplied the permissions.
9.Here was my actual cutover checklist of things the SSIS package did
a. take down apps/or web server during cutover
b.backup 2000 databases to unc path
c.resotore databases to new 2005 box from unc path.
d.drop permissions (logins, schemas, roles, users)
e. re-assign permissions as required.
f.rebuild indexes
g.set database compatibility level (90) for 2005
h.re-point all applications to new SQL instance
i.run backup and maintenence jobs to make sure all working.
j.test and run other SSIS jobs
k.detach old 2005 databases but leave on the server for awhile in case
of issues.
Kristina|||"Kristina" wrote:
> k.detach old 2005 databases but leave on the server for awhile in case
> of issues.
> Kristina
>
Excellent! Thanks much. We're in a similar situation. For the last step
k, I think you meant 'detach old 2000 databases', right?
Bing|||On May 18, 11:11 am, bing <b...@.discussions.microsoft.com> wrote:
> Excellent! Thanks much. We're in a similar situation. For the last step
> k, I think you meant 'detach old 2000 databases', right?
> Bing
Yes, I am a poor writer... It wasn't really that bad to do the
upgrade. I got the Wrox Press SQL 2005 Administration book and it
helped tons.
Good luck!|||"Kristina" wrote:
> yes, I am a poor writer.....it wasn't really that bad to do the
> upgrade. I got the wrox press SQL 2005 Administration book and it
> helped tons...
> Good LUCK!
>
Glad to hear it isn't that bad. Thanks again, Kristina.
Bing