Friday, March 30, 2012
Numerous Numeric Fields to 1 Numeric Field in New Table
Are there any routines out there that will automatically convert a table (A)
with numerous numeric fields to a new table (B) with just one numeric field?
Thus the number of records in the table (B) would be the number of records
in A multiplied by the number of numeric fields.
Thanks in advance
Please include DDL with your questions so that we don't have to guess at what
your tables might look like.
Here's an example. Suppose you have a denormalized structure like this:
CREATE TABLE monthly_accounts (account_no INTEGER PRIMARY KEY, jan INTEGER
NULL, feb INTEGER NULL, mar INTEGER NULL, ...)
You can convert this to a more usable form as follows:
CREATE TABLE accounts (account_no INTEGER NOT NULL, dt DATETIME NOT NULL
CHECK (DAY(dt)=1), amount INTEGER NOT NULL, PRIMARY KEY (account_no, dt))
INSERT INTO accounts (account_no, dt, amount)
SELECT account_no, '20040101', jan
FROM monthly_accounts
WHERE jan IS NOT NULL
UNION ALL
SELECT account_no, '20040201', feb
FROM monthly_accounts
WHERE feb IS NOT NULL
UNION ALL
SELECT account_no, '20040301', mar
FROM monthly_accounts
WHERE mar IS NOT NULL
...
Notice that you will usually add at least one column to the key when you do
this.
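On SQL Server 2005 and later, the UNPIVOT operator expresses the same transform more compactly. A sketch against the monthly_accounts table above (UNPIVOT skips NULL values automatically, matching the WHERE filters in the UNION ALL version):

```sql
-- Equivalent rewrite using UNPIVOT (SQL Server 2005+).
-- Rows where the month column is NULL are omitted automatically.
INSERT INTO accounts (account_no, dt, amount)
SELECT account_no,
       CASE mon WHEN 'jan' THEN '20040101'
                WHEN 'feb' THEN '20040201'
                WHEN 'mar' THEN '20040301' END,
       amount
FROM monthly_accounts
UNPIVOT (amount FOR mon IN (jan, feb, mar)) AS u
```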
David Portas
SQL Server MVP
|||Thanks David
"David Portas" <REMOVE_BEFORE_REPLYING_dportas@.acm.org> wrote in message
news:E5E28C97-6FF6-4EBA-8152-33063A6C813C@.microsoft.com...
> Please include DDL with your questions so that we don't have to guess at what
> your tables might look like.
> Here's an example. Suppose you have a denormalized structure like this:
> CREATE TABLE monthly_accounts (account_no INTEGER PRIMARY KEY, jan INTEGER
> NULL, feb INTEGER NULL, mar INTEGER NULL, ...)
> You can convert this to a more usable form as follows:
> CREATE TABLE accounts (account_no INTEGER NOT NULL, dt DATETIME NOT NULL
> CHECK (DAY(dt)=1), amount INTEGER NOT NULL, PRIMARY KEY (account_no, dt))
> INSERT INTO accounts (account_no, dt, amount)
> SELECT account_no, '20040101', jan
> FROM monthly_accounts
> WHERE jan IS NOT NULL
> UNION ALL
> SELECT account_no, '20040201', feb
> FROM monthly_accounts
> WHERE feb IS NOT NULL
> UNION ALL
> SELECT account_no, '20040301', mar
> FROM monthly_accounts
> WHERE mar IS NOT NULL
> ...
> Notice that you will usually add at least one column to the key when you do
> this.
> --
> David Portas
> SQL Server MVP
> --
|||Hi
I occasionally concatenate field names from sysColumn rows (by object) in
order to save myself the typing (in sprocs, etc.). Don't see any reason why
you couldn't add the values in your case (if this is what you're trying to
do).
example:
CREATE PROCEDURE dbo.sp_tablecolumns
@object_id varchar(100)
AS
DECLARE @FldCat1 VARCHAR(8000)
SET @FldCat1=''
SELECT @FldCat1=@FldCat1+(sysColumns.name + char(44))
FROM sysColumns with (NOLOCK)
WHERE id = object_id(@object_id)
ORDER BY sysColumns.colorder
PRINT @FldCat1
GO
usage: dbo.sp_tablecolumns 'MyTableNamethatIwantfieldsfor'
rob
"Joe" wrote:
> Are there any routines out there that will automatically convert a table (A)
> with numerous numeric fields to a new table (B) with just one numeric field.
> Thus the number of records in the table (B) would be the number of records
> in A multiplied by the number of numeric fields.
> Thanks in advance
>
>
|||Sorry, I get what you're saying now.
You can do the same kind of thing I mentioned in my first post. You'd have to
find the numeric columns from table A first, loop through A (by row and then
by column) and do the inserts (into B) inside the loops.
You could write a generic routine starting with the code I posted.
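A set-based alternative to row-by-row loops: build the whole INSERT ... UNION ALL statement dynamically from syscolumns and execute it once. A sketch against the SQL 2000 system tables; TableA, TableB and key_col are placeholder names for whatever tables and key column you actually have:

```sql
-- Sketch: generate one SELECT per numeric column of TableA and
-- run them all as a single INSERT ... UNION ALL into TableB.
DECLARE @sql VARCHAR(8000)
SET @sql = ''
SELECT @sql = @sql
    + CASE WHEN @sql = '' THEN '' ELSE ' UNION ALL ' END
    + 'SELECT key_col, ''' + c.name + ''', ' + c.name
    + ' FROM TableA'
FROM syscolumns c
JOIN systypes t ON c.xtype = t.xtype
WHERE c.id = object_id('TableA')
  AND t.name IN ('int', 'numeric', 'decimal', 'money', 'float')
SET @sql = 'INSERT INTO TableB (key_col, col_name, amount) ' + @sql
EXEC (@sql)
```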
rob
"RobKaratzas" wrote:
> Hi
> I occasionally concatenate field names from sysColumn rows (by object) in
> order to save myself the typing (in sprocs, etc.). Don't see any reason why
> you couldn't add the values in your case (if this is what you're trying to
> do).
> example:
> CREATE PROCEDURE dbo.sp_tablecolumns
> @object_id varchar(100)
> AS
> DECLARE @FldCat1 VARCHAR(8000)
> SET @FldCat1=''
> SELECT @FldCat1=@FldCat1+(sysColumns.name + char(44))
> FROM sysColumns with (NOLOCK)
> WHERE id = object_id(@object_id)
> ORDER BY sysColumns.colorder
> PRINT @FldCat1
> GO
> usage: dbo.sp_tablecolumns 'MyTableNamethatIwantfieldsfor'
> rob
> "Joe" wrote:
|||> I occasionally concatenate field names from sysColumn rows (by object) in
> order to save myself the typing (in sprocs, etc.).
In SQL 2000, Query Analyzer can do that automatically for you. Just drag
the Columns node from the Object Browser into the editing window. In
7.0 and earlier you don't have Object Browser so the method you
described may be useful. Not sure what it has to do with Joe's question
though :-)
David Portas
SQL Server MVP
|||What I usually do to copy the column names is
to set the Results to Text option and then run
SELECT * FROM table WHERE 1 = 0
Roji. P. Thomas
Net Asset Management
https://www.netassetmanagement.com
"David Portas" <REMOVE_BEFORE_REPLYING_dportas@.acm.org> wrote in message
news:1103200824.858097.15460@.z14g2000cwz.googlegroups.com...
> object) in
> In SQL2000 Query Analyzer can do that automatically for you. Just drag
> the Columns node from the Object Browser into the editing window. In
> 7.0 and earlier you don't have Object Browser so the method you
> described may be useful. Not sure what it has to do with Joe's question
> though :-)
> --
> David Portas
> SQL Server MVP
> --
>
|||Thanks Roji.
I did misread the original post.
But in order to handle this problem with a generic solution, you're going to
need some means to programmatically gather what these numerous columns are
for whatever table A has.
Rob
"Roji. P. Thomas" wrote:
> What I usually do to copy the column names is
> Setting the Result to Text option and then do a
> SELECT * FROM table WHERE 1 = 0
>
> --
> Roji. P. Thomas
> Net Asset Management
> https://www.netassetmanagement.com
>
> "David Portas" <REMOVE_BEFORE_REPLYING_dportas@.acm.org> wrote in message
> news:1103200824.858097.15460@.z14g2000cwz.googlegroups.com...
>
>
Numeric[DT_NUMERIC] - comma or dot
Hi,
I have this problem:
In one SSIS project that I have, I convert (by using Data Conversion) my numeric column into Numeric[DT_NUMERIC] and get:
1.000000
Then, in another project I convert the same column again into Numeric[DT_NUMERIC] and get:
2,000000
Does anybody know how I can control whether I'm using a dot or a comma?
Thank you.
|||Cannot say I have seen this. Are the packages run on the same machine? If
not, are the regional settings the same?
|||Yes, they are running on the same machine.
|||Argh! My fault: the regional settings of the Flat File connection were
different.
Thanks for the help!!
Monday, March 26, 2012
Number to words
I want to know how to convert a number to words, or words to a number.
For example, if I give the input 100, I should get the output "one hundred".
Is there any built-in function available? I need a solution immediately.
regards
nlakka
|||Run the attached script in the PUBS database and see if that's what you
want. I didn't have enough time to debug it, so if you find anything fishy,
just let me know.
|||OK, it has bugs, so hang on...
|||Alright, now it's good to go (I think). A little rough, but good enough
for check generation and similar stuff.
|||I have tried running the script, but it should also translate the cents
rather than render them as e.g. 50/100.
Good luck
|||Hey, why don't you try this link
(http://sqlkit.com/blogs/rudra/archive/2006/04/10/104.aspx)? Just split the
value into two parts according to the position of the '.' decimal point, and
add regional customization as you choose.
I will add that functionality very soon...
And use this logic to do it:
set @s = '2344.34'
set @part1 = substring(@s, 1, (len(@s) - 3))
set @part2 = substring(@s, (len(@s) - 1), 2)
and pass each part to the function. Hope you will do that easily ;)
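There is no built-in T-SQL function for this. A minimal sketch of the core spelling logic for 0 through 999 (the function name is made up; extend it with 'thousand'/'million' groupings and the cents handling discussed above for full coverage):

```sql
-- Sketch: spell out an integer from 0 to 999 in English words.
-- Recursion handles the hundreds digit; SQL Server allows
-- scalar UDFs to call themselves up to 32 levels deep.
CREATE FUNCTION dbo.fn_NumberToWords (@n INT)
RETURNS VARCHAR(100)
AS
BEGIN
    IF @n < 0 OR @n > 999 RETURN NULL
    DECLARE @w VARCHAR(100)
    SET @w = ''
    IF @n >= 100
    BEGIN
        SET @w = dbo.fn_NumberToWords(@n / 100) + ' hundred'
        SET @n = @n % 100
        IF @n = 0 RETURN @w
        SET @w = @w + ' '
    END
    IF @n >= 20
    BEGIN
        SET @w = @w + CASE @n / 10
            WHEN 2 THEN 'twenty' WHEN 3 THEN 'thirty' WHEN 4 THEN 'forty'
            WHEN 5 THEN 'fifty'  WHEN 6 THEN 'sixty'  WHEN 7 THEN 'seventy'
            WHEN 8 THEN 'eighty' WHEN 9 THEN 'ninety' END
        SET @n = @n % 10
        IF @n = 0 RETURN @w
        SET @w = @w + '-'
    END
    SET @w = @w + CASE @n
        WHEN 0 THEN 'zero'   WHEN 1 THEN 'one'     WHEN 2 THEN 'two'
        WHEN 3 THEN 'three'  WHEN 4 THEN 'four'    WHEN 5 THEN 'five'
        WHEN 6 THEN 'six'    WHEN 7 THEN 'seven'   WHEN 8 THEN 'eight'
        WHEN 9 THEN 'nine'   WHEN 10 THEN 'ten'    WHEN 11 THEN 'eleven'
        WHEN 12 THEN 'twelve' WHEN 13 THEN 'thirteen' WHEN 14 THEN 'fourteen'
        WHEN 15 THEN 'fifteen' WHEN 16 THEN 'sixteen' WHEN 17 THEN 'seventeen'
        WHEN 18 THEN 'eighteen' WHEN 19 THEN 'nineteen' END
    RETURN @w
END
```

For example, SELECT dbo.fn_NumberToWords(342) would produce 'three hundred forty-two'.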
Monday, March 19, 2012
Number formatting in a SQL select statement
Hi
I'm trying to convert and format integer values in a SQL Server select statement to a string
representation of the number formatted with commas (1000000 becomes 1,000,000, for example).
I've been looking at CAST and CONVERT and think the answers there somewhere. I just don't
seem to be able to work it out.
Anyone out there able to help me please?
Thanks,
Keith.
|||In any case, I think you will need to CAST your column as a money data type, and then CONVERT it using style 1, like this:
SELECT
CONVERT(varchar(20), CAST(myColumn AS money), 1)
This will unfortunately also return the 2 digits after the decimal point. So my next step would be to strip them out.
This seems very messy, though. Hopefully someone else will have a better idea.
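Stripping the trailing '.00' can be done inline. A sketch using the same myColumn expression (myTable is a placeholder name):

```sql
-- Sketch: format an integer with thousands separators via the money
-- style-1 conversion, then drop the trailing '.00' it appends.
SELECT LEFT(CONVERT(varchar(20), CAST(myColumn AS money), 1),
            LEN(CONVERT(varchar(20), CAST(myColumn AS money), 1)) - 3)
FROM myTable
-- 1000000 becomes '1,000,000'
```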
|||Yeah, that is messy.
<soapbox>
The first question I have is: why isn't this being done in your presentation layer? SQL's strong suit is selecting data, not formatting it. Your ASP.NET environment already has tools that make this much easier than anything we can come up with in SQL.
</soapbox>
Even messier would be this code sample, which is a function and/or stored procedure that accomplishes what you are looking for. I've never used it, but it looks right to me.
Jason
Update: Forgot to link - http://www.issociate.de/board/post/176502/How_do_I_format_an_integer.html
|||
Definitely a front-end issue. I've had to use SQL to format results when using SQLMail and it's a nightmare. Possible using combinations of cast, convert, charindex, substring, etc., but a nightmare. Use the front-end.
Try this URL; it uses the strings-and-formatting support in the Framework Class Library to do custom formatting. Hope this helps.
http://msdn.microsoft.com/library/default.asp?url=/library/en-us/cpguide/html/cpconcustomnumericformatstringsoutputexample.asp
Kind regards,
Gift Peddie
Monday, March 12, 2012
NULLS in math
I want to add 2 fields together, but sometimes 1 is NULL and I want to treat
it as zero. Can I use COALESCE or do I need to use CONVERT? Thanks.
Example: Balance = dbo.Table.Amount - dbo.Table.Paid
David
|||Hi,
use ISNULL function
select ISNULL(field1,0)+ISNULL(field2,0) from Table_name
Thanks
Hari
SQL Server MVP
"David" <dlchase@.lifetimeinc.com> wrote in message
news:OwDq0XNSFHA.1348@.TK2MSFTNGP15.phx.gbl...
>I want to add 2 fields together, but sometimes 1 is NULL and I want to
>treat it as zero. Can I use COALESCE or do I need to use CONVERT? Thanks.
> Example: Balance = dbo.Table.Amount - dbo.Table.Paid
> David
|||If there is only one value to check, you can also use
ISNULL(Column, [SubstitutionValueIfNull]).
COALESCE takes a list of expressions and returns the first
non-NULL value.
SELECT ISNULL(NULL, 0) --> returns 0
SELECT COALESCE(NULL, NULL, 0) --> returns 0
HTH, Jens Suessmeyer.
http://www.sqlserver2005.de
--
"David" <dlchase@.lifetimeinc.com> schrieb im Newsbeitrag
news:OwDq0XNSFHA.1348@.TK2MSFTNGP15.phx.gbl...
>I want to add 2 fields together, but sometimes 1 is NULL and I want to
>treat it as zero. Can I use COALESCE or do I need to use CONVERT? Thanks.
> Example: Balance = dbo.Table.Amount - dbo.Table.Paid
> David
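Applied to the original expression, either function works. A sketch using the table and column names from the question (the table name needs brackets because TABLE is a reserved word):

```sql
-- Treat a NULL Paid as zero when computing the balance.
SELECT Amount - COALESCE(Paid, 0) AS Balance
FROM dbo.[Table]
-- ISNULL(Paid, 0) is the equivalent proprietary form.
```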
>
|||>> I want to add 2 fields [sic] together, but sometimes 1 is NULL and I
want to treat it as zero. Can I use COALESCE or do I need to use CONVERT? <<
Columns are not fields; among the MANY important differences is that
columns can be NULL and fields cannot. Use the right mental model and
SQL will be much easier. COALESCE(x, 0.00) is what you are looking
for. CONVERT() is proprietary; you want to use CAST() instead when you
do explicit type casting.
But a better question is how the NULL was allowed in the first place.
People who do not yet know the differences between fields and columns
forget that they need to solve the problems in the DDL with DEFAULT and
CHECK() constraints.
|||<snip>...Columns are not fields; among the MANY important differences is that
columns can be NULL and fields cannot... </snip>
As in everything else, semantics plays a great deal in this. You now
have me

so my understanding as to what these terms mean is a mixture of the limited
research I have done, and the practical experience I have in the 'field'
(NPI), with other folks like myself.
So, up to now, I have thought that in posts like the above, you were
'just' arguing against the less-than-accurate use of words, and the
ambiguity that results from that. But I'm now wondering if you are talking
about something else.
Exactly what are you talking about when you use the word "field"? I
didn't think there was such a concept in relational database theory (except
as a misused alias for 'column', or, sometimes, for 'attribute'). Are you
referring to the OOP concept of a field (an element of data associated with an
object, or instance of a class)? Because that concept of 'field' can
definitely be null...
Or are you referring to some archaic concept of 'field' as understood in
pre-relational databases?
I also wonder, how do you know which 'other' concept of field someone is
confusing this with? Frankly, I believe most of these folks are talking
about columns, as they exist in modern RDBMSs, and are just ignorant of the
academically correct terminology. They are not confusing your concept of
field with the RDBMS concept of a column; they are just using the wrong
word.
|||>> have thought that in posts like the above, you were 'just' arguing
against the less than accurate use of words, and the ambiiguity that
results from that. But I'm now wondering if you are talking about
something else. <<
Like most new ideas, the hard part of understanding what the relational
model is comes in un-learning what you know about file systems. As
Artemus Ward (William Graham Sumner, 1840-1910) put it, "It ain't so
much the things we don't know that get us into trouble. It's the things
we know that just ain't so."
If you already have a background in data processing with traditional
file systems, the first things to un-learn are:
(0) Databases are not file sets.
(1) Tables are not files.
(2) Rows are not records.
(3) Columns are not fields.
Modern data processing began with punch cards, or Hollerith cards used
by the Bureau of the Census. Their original size was that of a United
States Dollar bill. This was set by their inventor, Herman Hollerith,
because he could get furniture to store the cards from the United
States Treasury Department, just across the street. Likewise, physical
constraints limited each card to 80 columns of holes in which to record
a symbol.
The influence of the punch card lingered on long after the invention of
magnetic tapes and disk for data storage. This is why early video
display terminals were 80 columns across. Even today, files which
were migrated from cards to magnetic tape files or disk storage still
use 80 column records.
But the influence was not just on the physical side of data processing.
The methods for handling data from the prior media were imitated in
the new media. The programmer kept using sequential, physically
contiguous mental models, too.
Data processing first consisted of sorting and merging decks of punch
cards (later, sequential magnetic tape files) in a series of distinct
steps. The result of each step feed into the next step in the process.
Relational databases do not work that way. Each user connects to the
entire database all at once, not to one file at a time in a sequence of
steps. The users might not all have the same database access rights
once they are connected, however. Magnetic tapes could not be shared
among users at the same time, but shared data is the point of a
database.
Tables versus Files
A file is closely related to its physical storage media. A table may
or may not be a physical file. DB2 from IBM uses one file per table,
while Sybase puts several entire databases inside one file. A table is
a <i>set<i> of rows of the same kind of thing. A set has no ordering
and it makes no sense to ask for the first or last row.
A deck of punch cards is sequential, and so are magnetic tape files.
Therefore, a <i>physical</i> file of ordered sequential records also
became the <i>mental</i> model for data processing and it is still hard
to shake. Anytime you look at data, it is in some physical ordering.
The various access methods for disk storage system came later, but even
these access methods could not shake the mental model.
Another conceptual difference is that a file is usually data that deals
with a whole business process. A file has to have enough data in
itself to support applications for that business process. Files tend
to be "mixed" data which can be described by the name of the business
process, such as "The Payroll file" or something like that.
Tables can be either entities or relationships within a business
process. This means that the data which was held in one file is often
put into several tables. Tables tend to be "pure" data which can be
described by single words. The payroll would now have separate tables
for timecards, employees, projects and so forth.
Tables as Entities
An entity is physical or conceptual "thing" which has meaning be
itself. A person, a sale or a product would be an example. In a
relational database, an entity is defined by its attributes, which are
shown as values in columns in rows in a table.
To remind users that tables are sets of entities, I like to use plural
or collective nouns that describe the function of the entities within
the system for the names of tables. Thus "Employee" is a bad name
because it is singular; "Employees" is a better name because it is
plural; "Personnel" is best because it is collective and does not
summon up a mental picture of individual persons.
If you have tables with exactly the same structure, then they are sets
of the same kind of elements. But you should have only one set for
each kind of data element! Files, on the other hand, were PHYSICALLY
separate units of storage which coudl be alike -- each tape or disk
file represents a step in the PROCEDURE, such as moving from raw data,
to edited data, and finally to archived data. In SQL, this should be a
status flag in a table.
Tables as Relationships
A relationship is shown in a table by columns which reference one or
more entity tables. Without the entities, the relationship has no
meaning, but the relationship can have attributes of its own. For
example, a show business contract might have an agent, an employer and
a talent. The method of payment is an attribute of the contract
itself, and not of any of the three parties.
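As a sketch, such a relationship table might look like this in SQL. All table and column names here are invented for illustration; they are not from the thread:

```sql
-- Hypothetical entity tables: the three parties to a contract.
CREATE TABLE Agents    (agent_id    INTEGER NOT NULL PRIMARY KEY, agent_name    VARCHAR(35) NOT NULL);
CREATE TABLE Employers (employer_id INTEGER NOT NULL PRIMARY KEY, employer_name VARCHAR(35) NOT NULL);
CREATE TABLE Talents   (talent_id   INTEGER NOT NULL PRIMARY KEY, talent_name   VARCHAR(35) NOT NULL);

-- The relationship table: meaningless without the entities it references,
-- but carrying an attribute (payment_method) of its own.
CREATE TABLE Contracts
(contract_nbr   INTEGER NOT NULL PRIMARY KEY,
 agent_id       INTEGER NOT NULL REFERENCES Agents (agent_id),
 employer_id    INTEGER NOT NULL REFERENCES Employers (employer_id),
 talent_id      INTEGER NOT NULL REFERENCES Talents (talent_id),
 payment_method CHAR(10) NOT NULL
   CHECK (payment_method IN ('cash', 'check', 'points')));
```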
Rows versus Records
Rows are not records. A record is defined in the application program
which reads it; a row is defined in the database schema and not by a
program at all. A record is named in the READ or INPUT statements of
the application; a row is named in the database schema.
All empty files look alike; they are a directory entry in the operating
system with a name and a length of zero bytes of storage. Empty tables
still have columns, constraints, security privileges and other
structures, even though they have no rows.
This is in keeping with the set theoretical model, in which the empty
set is a perfectly good set. The difference between SQL's set model
and standard mathematical set theory is that set theory has only one
empty set, but in SQL each empty table keeps its own structure, so an
empty table cannot be used anywhere that a non-empty table of the same
structure could not be used.
Another characteristic of rows in a table is that they are all alike in
structure and they are all the "same kind of thing" in the model. In a
file system, records can vary in size, datatypes and structure by
having flags in the data stream that tell the program reading the data
how to interpret it. The most common examples are Pascal's variant
record, C's struct syntax and Cobol's OCCURS clause.
The OCCURS keyword in Cobol and the variant records in Pascal have a
number which tells the program how many times a record structure is to
be repeated in the current record.
Unions in 'C' are not variant records, but variant mappings for the
same physical memory. For example:
union x {int ival; char j[4];} myStuff;
defines myStuff to be either an integer (which is 4 bytes on most
modern C compilers, though this is non-portable) or an array of 4
bytes, depending on whether you say myStuff.ival or myStuff.j[0].
But even more than that, files often contained records which were
summaries of subsets of the other records -- so called control break
reports. There is no requirement that the records in a file be related
in any way -- they are literally a stream of binary data whose meaning
is assigned by the program reading them.
Columns versus Fields
A field within a record is defined by the application program that
reads it. A column in a row in a table is defined by the database
schema. The datatypes in a column are always scalar.
The order of the application program variables in the READ or INPUT
statements is important because the values are read into the program
variables in that order. In SQL, columns are referenced only by their
names. Yes, there are shorthands like the SELECT * clause and INSERT
INTO <table name> statements which expand into a list of column names
in the physical order in which the column names appear within their
table declaration, but these are shorthands which resolve to named
lists.
The use of NULLs in SQL is also unique to the language. Fields do not
support a missing data marker as part of the field, record or file
itself. Nor do fields have constraints which can be added to them in
the record, like the DEFAULT and CHECK() clauses in SQL.
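For instance, a column declaration can carry its own default, its own validation rule, and an explicit missing-value marker, all in the schema rather than in any program. A sketch with invented names:

```sql
CREATE TABLE Personnel
(emp_nbr   INTEGER NOT NULL PRIMARY KEY,
 emp_name  VARCHAR(35) NOT NULL,
 dept_code CHAR(3) NOT NULL
   DEFAULT 'HQ'                          -- supplied when an INSERT omits it
   CHECK (dept_code IN ('HQ', 'MFG', 'SLS')),
 bonus     DECIMAL(8,2) NULL);           -- NULL = bonus not yet determined
```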
Relationships among tables within a database
Files are pretty passive creatures and will take whatever an
application program throws at them without much objection. Files are
also independent of each other simply because they are connected to one
application program at a time and therefore have no idea what other
files look like.
A database actively seeks to maintain the correctness of all its data.
The methods used are triggers, constraints and declarative referential
integrity.
Declarative referential integrity (DRI) says, in effect, that data in
one table has a particular relationship with data in a second (possibly
the same) table. It is also possible to have the database change
itself via referential actions associated with the DRI.
For example, a business rule might be that we do not sell products
which are not in inventory. This rule would be enforced by a REFERENCES
clause on the Orders table which references the Inventory table, with a
referential action of ON DELETE CASCADE.
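The thread gives no DDL for this example, so the following is only a sketch of how such a rule might be declared (table and column names invented):

```sql
CREATE TABLE Inventory
(product_nbr INTEGER NOT NULL PRIMARY KEY,
 qty_on_hand INTEGER NOT NULL CHECK (qty_on_hand >= 0));

CREATE TABLE Orders
(order_nbr   INTEGER NOT NULL PRIMARY KEY,
 product_nbr INTEGER NOT NULL
   REFERENCES Inventory (product_nbr)   -- cannot order an unknown product
   ON DELETE CASCADE);                  -- dropping a product drops its orders
```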
Triggers are a more general way of doing much the same thing as DRI. A
trigger is a block of procedural code which is executed before, after
or instead of an INSERT INTO or UPDATE statement. You can do anything
with a trigger that you can do with DRI and more.
However, there are problems with TRIGGERs. While there is a standard
syntax for them in the SQL-92 standard, most vendors have not
implemented it. What they have is very proprietary syntax instead.
Secondly, a trigger cannot pass information to the optimizer the way DRI can.
In the example in this section, I know that for every product number in
the Orders table, I have that same product number in the Inventory
table. The optimizer can use that information in setting up EXISTS()
predicates and JOINs in the queries. There is no reasonable way to
parse procedural trigger code to determine this relationship.
The CREATE ASSERTION statement in SQL-92 will allow the database to
enforce conditions on the entire database as a whole. An ASSERTION is
much like a CHECK() clause, but the difference is subtle. A CHECK()
clause is executed when there are rows in the table to which it is
attached. If the table is empty then all CHECK() clauses are
effectively TRUE. Thus, if we wanted to be sure that the Inventory
table is never empty, and we wrote:
CREATE TABLE Inventory
( ...
CONSTRAINT inventory_not_empty
CHECK ((SELECT COUNT(*) FROM Inventory) > 0), ... );
it would not work. However, we could write:
CREATE ASSERTION Inventory_not_empty
CHECK ((SELECT COUNT(*) FROM Inventory) > 0);
and we would get the desired results. The assertion is checked at the
schema level and not at the table level.|||Well, thanks for investing so much effort in replying. I appreciate it.
So, the short answer to my question, then, is that you assume that people
who use the word field, instead of column, or record instead of row, are
talking about data processing systems as designed in the 50s and 60s.
I don't even remember those systems, Joe, and I would wager that the
great majority of the people who read your comments here don't remember them
either (or even know they existed). I'd bet that most of the folks who use
RDBMSs today were introduced to database systems using an RDBMS. A majority
of the remainder probably started with X-Base systems in the late 1970s or
early 80s.
If I could guess the motivation behind your comments, I'd guess that it
has something to do with the general lack of knowledge and understanding of
relational database theory exhibited by people who use this technology, and
in this I totally agree with you. As I mentioned in my last post, I was not
a CS major, and I have learned about this material only through extensive
self-study and research. Unfortunately, it is not an area where one's lack
of knowledge is immediately obvious and/or fatal. It is all too easy to
think you know what you are doing when you really don't.
So, after learning my lesson in numerous ways, I sympathize with your
concern about the general lack of understanding of these concepts that seems
to exist within our industry. But I can tell you, at least from my
experience, that your logical approach, of using someone's terminology as a
hook or segue to present the ways in which relational database objects (like
tables, columns, rows, etc.) are different from non-relational file-based
data processing objects (like files, fields, and records), often falls on
deaf ears, or is just simply confusing, because most people are simply not
educated about those other file-based systems in the first place. They are
not thinking "old-style" as opposed to a new paradigm; they are simply
ignorant of any school or approach. To the degree that they know anything
about database theory, what they do know are probably bits and pieces of
relational theory, not pre-relational theory.
So what I am saying is that you probably shouldn't focus so much on the
words people use. (And I don't mean that the words are not important; they
are, and people should be corrected when they misuse them.) But the words
don't betray as much about what people do or do not understand or
misunderstand; they just convey ignorance of the proper terms.
The place to get a deeper understanding about what some poster does or
does not understand about relational database theory is in the structure of
the databases they post and the questions they ask us to solve, not in the
words they use...
"A rose, by any other name, ..." if you follow me...
And I thought the quote ...
<snip>"It ain't so much the things we don't know that get us into trouble.
It's the things we know that just ain't so."</snip>
was from Will Rogers!
"--CELKO--" wrote:
> against the less than accurate use of words, and the ambiguity that
> results from that. But I'm now wondering if you are talking about
> something else. <<
> Like most new ideas, the hard part of understanding what the relational
> model is comes in un-learning what you know about file systems. As
> Artemus Ward (William Graham Sumner, 1840-1910) put it, "It ain't so
> much the things we don't know that get us into trouble. It's the things
> we know that just ain't so."
> If you already have a background in data processing with traditional
> file systems, the first things to un-learn are:
> (0) Databases are not file sets.
> (1) Tables are not files.
> (2) Rows are not records.
> (3) Columns are not fields.
> Modern data processing began with punch cards, or Hollerith cards used
> by the Bureau of the Census. Their original size was that of a United
> States Dollar bill. This was set by their inventor, Herman Hollerith,
> because he could get furniture to store the cards from the United
> States Treasury Department, just across the street. Likewise, physical
> constraints limited each card to 80 columns of holes in which to record
> a symbol.
> The influence of the punch card lingered on long after the invention of
> magnetic tapes and disk for data storage. This is why early video
> display terminals were 80 columns across. Even today, files which
> were migrated from cards to magnetic tape files or disk storage still
> use 80 column records.
> But the influence was not just on the physical side of data processing.
> The methods for handling data from the prior media were imitated in
> the new media. The programmer kept using sequential, physically
> contiguous mental models, too.
> Data processing first consisted of sorting and merging decks of punch
> cards (later, sequential magnetic tape files) in a series of distinct
> steps. The result of each step fed into the next step in the process.
>
> Relational databases do not work that way. Each user connects to the
> entire database all at once, not to one file at time in a sequence of
> steps. The users might not all have the same database access rights
> once they are connected, however. Magnetic tapes could not be shared
> among users at the same time, but shared data is the point of a
> database.
> Tables versus Files
> A file is closely related to its physical storage media. A table may
> or may not be a physical file. DB2 from IBM uses one file per table,
> while Sybase puts several entire databases inside one file. A table is
> a <i>set</i> of rows of the same kind of thing. A set has no ordering
> and it makes no sense to ask for the first or last row.
> A deck of punch cards is sequential, and so are magnetic tape files.
> Therefore, a <i>physical</i> file of ordered sequential records also
> became the <i>mental</i> model for data processing and it is still hard
> to shake. Anytime you look at data, it is in some physical ordering.
> The various access methods for disk storage system came later, but even
> these access methods could not shake the mental model.
> <snip - the rest of the quoted post repeats the text above verbatim>
Friday, March 9, 2012
nulled out - defaults instead?
I am converting a dBase-based enterprise-wide system of almost 100
tables into a VB .NET system using MS SQL Server 2000 (I am
posting this both to ADO .NET newsgroups and SQL Server newsgroups). I am
working with a prototype where many of the columns in many of the tables
allow nulls. But often I can't call ... is null (in tsql) or isdbnull(...)
in vb .net on the same line as, say, 'or len(trim((dddd)) < 1' because this
throws an error if the col is null, since you can't measure anything when a
col is null.
Now null might have a place in the universe - like black holes - but not
being Stephen Hawking I just don't know what that place is. But in vb .net
especially and to some extent in tsql also, it's just a pain in the ...
My question - is there any reason I shouldn't convert into tables where,
when the data is converted if it's empty or 0 (int) or # / / # (date), I
use defaults instead (eg, "", 0, 01/01/1900 respectively)? Do I lose
anything by doing this?
Tx for any help.
Bernie Yaeger|||Bernie,
Although I belong to the 'avoid nulls at all costs' camp,
you do lose something with defaults.
For example, 1/1/1900 is a valid date in many systems, so
using that date as the default will cause logical
problems. Likewise, a zero may (or may not) cause
problems when used instead of a NULL.
Think of the difference with summing a zero into an
aggregate vs taking an average. In the sum, you can
ignore the zero defaults since they are harmless, but
with the average you have to decide whether to include
them or not.
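To make the SUM versus AVG point concrete, here is a sketch (invented table names, SQL Server 2000-era syntax). With a zero default, the "no figure yet" row is counted in the average; with a NULL it is ignored:

```sql
CREATE TABLE SalesZero (rep CHAR(1) NOT NULL, amount INTEGER NOT NULL DEFAULT 0);
INSERT INTO SalesZero VALUES ('a', 100);
INSERT INTO SalesZero VALUES ('b', 200);
INSERT INTO SalesZero (rep) VALUES ('c');   -- no figure yet: default 0

CREATE TABLE SalesNull (rep CHAR(1) NOT NULL, amount INTEGER NULL);
INSERT INTO SalesNull VALUES ('a', 100);
INSERT INTO SalesNull VALUES ('b', 200);
INSERT INTO SalesNull VALUES ('c', NULL);   -- no figure yet: NULL

SELECT SUM(amount), AVG(amount) FROM SalesZero;  -- 300, 100
SELECT SUM(amount), AVG(amount) FROM SalesNull;  -- 300, 150 (NULL row ignored)
```

Both sums are 300, but the averages differ, which is exactly the decision the application has to make when defaults stand in for missing data.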
The key is how your application is coded. If you code
for defaults from the start, there are few difficulties
in making it all work smoothly since you will choose your
defaults and define how they are to be interpreted.
However, from the examples above, any code that includes
the default code in the domain of legal values can cause
you grief.
Russell Fields
>--Original Message--
> <snip - original question quoted above>
|||Allowing NULLS is a design decision that only you can make. IMHO, tri-state
logic is a pain because you often have to exclude data that make no logical
sense. I strongly prefer NOT NULL and default values in columns since it
simplifies the logical conditions. Just make sure your default values are
meaningful in your system and your application knows what to do with them.
NOT NULL and defaults help later when you maintain your application since
you can add columns without having to change existing code. Whatever choice
you make, apply it consistently and document the few times when you
absolutely must deviate from the standard.
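A sketch of that maintenance benefit (table and column names invented): adding a NOT NULL column with a default populates the existing rows and leaves old INSERT statements working unchanged:

```sql
ALTER TABLE Customers
  ADD region CHAR(2) NOT NULL DEFAULT 'NA';
-- Existing rows get 'NA'; INSERTs that do not mention region still succeed.
```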
--
Geoff N. Hiten
SQL Server MVP
Senior Database Administrator
Careerbuilder.com
"Bernie Yaeger" <berniey@.cherwellinc.com> wrote in message
news:JIPZa.40432$_R5.12322590@.news4.srv.hcvlny.cv.net...
> <snip - original question quoted above>
|||Hi Russell,
Tx for your advice; I have considered some of the issues you raise and know
the consequences and how to deal with them, I believe.
Bernie
"Russell Fields" <rlfields@.sprynet.com> wrote in message
news:008101c36030$3efd7300$a501280a@.phx.gbl...
> <snip - Russell's reply and the original question quoted above>
|||With due respect to my colleagues, I fall into the 'Use Nulls when
necessary' camp... if you do not know the value, you do not know... Tri
state logic is a little more difficult, but I prefer the data to represent
reality...
And this is something where 'reasonable people differ' in their opinions...
"Bernie Yaeger" <berniey@.cherwellinc.com> wrote in message
news:JIPZa.40432$_R5.12322590@.news4.srv.hcvlny.cv.net...
> <snip - original question quoted above>
|||Just to clarify, I am not religiously opposed to using Nulls. I just think
the places where they apply are very few. If you have properly represented
Entity Relationships in your database, either you know about an entity or
that entity doesn't exist. Thus, you have a complete row in an appropriate
table or no row in that table.
Since most of us work in the real world where we have to live with inherited
databases, Nulls are a fact of life. I prefer to choose where I use nulls
and treat them as the exception rather than the rule. Again, this is just a
personal design preference and should not be taken as holy writ.
--
Geoff N. Hiten
SQL Server MVP
Senior Database Administrator
Careerbuilder.com
"Wayne Snyder" <wsnyder@.computeredservices.com> wrote in message
news:#63VdaLYDHA.384@.TK2MSFTNGP12.phx.gbl...
> <snip - Wayne's reply and the original question quoted above>