My friends prefer to do math using a string data type, because they say it is much easier and more efficient. I argued with them: "How big does your data need to be? We have the FPU, which holds 80-bit values, and SSE, which handles 128-bit and 256-bit." I wonder if anyone has ever timed string math against binary math.
Quote from: Farabi on October 15, 2010, 01:44:33 AM
My friends prefer to do math using a string data type, because they say it is much easier and more efficient. I argued with them: "How big does your data need to be? We have the FPU, which holds 80-bit values, and SSE, which handles 128-bit and 256-bit." I wonder if anyone has ever timed string math against binary math.
Well, accounting and calculator-type applications very often use BCD numeric representations. Floating point can represent a wide range of numbers, but often does so at the expense of precision.
yes... for adding and subtracting, bcd can be pretty fast - and conversion to ascii is simple
I don't know how C++ can do calculations on a string data type; I guess it must convert the string to binary and then convert the result back. If so, I'd guess that is slower than using the FPU and converting the result to ASCII. Has anyone timed this?
Hi,
In the thread "Re: Fibonacci Numbers Using Arrays"
dedndave and I discuss BCD arithmetic, and there is
some discussion of how it compares to binary arithmetic.
In reply #38 I posted some code that actually does some
arithmetic using a string to increment a label. Though that
was never intended to be a real world example. The code
in that thread followed an inefficient model due to the
teacher's requirements. Search the forum with "Fibonacci"
and "BCD" to get more efficient code if you want to do
some timing of BCD versus binary.
Using BCD makes converting from BCD to ASCII very
fast. The math is slower, especially for anything other
than adding or subtracting. Using binary, the math is faster,
but converting to ASCII is more involved and a bit slower,
because extracting the digits from a binary number is
basically converting it to an unpacked BCD number.
Regards,
Steve N.
I think your friends probably mean to do the math directly on the string data, essentially applying the "pencil and paper" algorithms that you learned as a child. Even an efficient implementation of these algorithms will be much slower than normal binary math, but still more than fast enough for most financial calculations and similar. And since these algorithms place no limits on the number of digits, the calculations can be done to any desired level of precision. A crude addition example:
;====================================================================
include \masm32\include\masm32rt.inc
;====================================================================
.data
str1 db "123456789",0
str2 db "987654321",0
sum db 20 dup(0)
.code
;====================================================================
start:
;====================================================================
xor ecx, ecx ; clear carry
mov ebx, 9
.WHILE ebx
dec ebx
mov al, str1[ebx]
add al, str2[ebx] ; add digits
sub al, 96 ; remove the two ASCII '0' biases (48+48)
add al, cl ; add in any carry from previous
xor ecx, ecx ; clear carry
.IF al > 9
mov cl, 1 ; set carry
sub al, 10 ; reduce digit value
.ENDIF
add al, 48 ; convert binary back to ascii
mov sum[ebx+1], al ; store in sum
.ENDW
.IF cl ; handle carry from last addition
mov sum[ebx], "1"
.ELSE
mov sum[ebx], "0"
.ENDIF
print ADDR str1," + "
print ADDR str2," = "
print ADDR sum,13,10,13,10
inkey "Press any key to exit..."
exit
;====================================================================
end start
EDIT: Added code to handle carry from last addition operation.
that method might compare well if you are only adding 2 values together
because there is no conversion to/from string form
i would think converting to BCD would be better if you had a long list of values, though
BCD has the advantage that dollar/cent values are essentially treated as integers
as Clive mentioned, there is an issue with precision - well - it's actually a "granularity" issue with floating point
$120.65 may well become something like 120.64999
as i said - it's great when adding and subtracting
but, the moment you introduce amortization or anything else that requires multiplication or division, FP is probably best
Quote from: Farabi
We had FPU which contain 80-bit of data, and also SSE which is contain 128-bit and 256-bit
However, if you wanted to compute something such as the square root of a number to a precision of 1000 digits (even though quite useless), I don't believe the FPU or SSEXXX would be adequate. That is where string math (i.e. BCD) can come in handy and is probably the preferred route. It really has no limit on precision.
There is an example of such computation (square root with even more useless precision up to 9999 decimal digits) in the BCDtut package at:
http://www.ray.masmcode.com/BCDtut.html
The source code is included in the package.
Quote from: dedndave on October 15, 2010, 05:16:26 PM
BCD has the advantage that dollar/cent values are essentially treated as integers
as Clive mentioned, there is an issue with precision - well - it's actually a "granularity" issue with floating point
$120.65 may well become something like 120.64999
Sounds fraudulent to me :bg