Re: TI-M: Re: TI-Math Digest V1 #22


In a message dated 6/9/00 5:01:45 PM Mountain Daylight Time, 
noveck@pluto.njcc.com writes:

>  BCD type for both ints and floats.  TI-GCC, OTOH, has different int and
>  float types, which may be what's confusing you.

Not being directly involved with the 89/92+ or m68k assembly, I don't know 
what TI-GCC is ;)  Insight?

>  But if TI's CAS supports both integer and floats of an arbitrary length
>  (limited only by memory), what good would a separate int type be?  Besides,
>  all the decimal number (float) is, as far as the CAS goes, is an int divide
>  by another int, with that denominator being a power of 10.  It's quite
>  simple for both to be represented by BCD numbers, with the only difference
>  being the exponent. . .

True; I was under the impression that the float vars were a set length, 
while it was the integer vars that were of varying length.  Again, I'm not 
sure where I read that, or even if I read it...maybe someone mentioned it 
to me once and didn't know what they were talking about...
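
For what it's worth, here's roughly how I picture a variable-length BCD 
value in C.  This is purely my own sketch (the field names aren't TI's 
actual AMS structures); the point is just that an "integer" and a "float" 
can share the same digit storage and differ only in the exponent:

/* Hypothetical layout, not TI's real format: an arbitrary-length BCD
 * number stored as packed decimal digits plus a sign and a power-of-ten
 * exponent.  An "integer" is simply the case where the exponent is 0.   */
#include <stdint.h>

typedef struct {
    int8_t   sign;       /* +1 or -1                                     */
    int16_t  exponent;   /* value = digit string * 10^exponent           */
    uint16_t ndigits;    /* how many decimal digits are stored           */
    uint8_t  digits[];   /* two BCD digits packed per byte               */
} bcd_num;

/* Example: 3.14159 would be the digit string 314159 with exponent -5,
 * while the integer 314159 is the same digit string with exponent 0.    */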

>  I thought that BCD was suposed to be far superior for arbitrarily large
>  numbers and float multiplication, division, etc.?

I can't be sure, but I would think binary would be faster for calculations, 
even at arbitrary lengths.  Binary is more closely "native" to the 
processor, so it should be more natural for the processor to work with.  I 
mean, don't the IEEE specs apply to binary floats, not BCD?
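
Just to illustrate the extra work, here's a rough sketch (mine, not 
anything from TI) of adding a single pair of packed-BCD bytes in C.  On top 
of the plain binary add, every byte needs a decimal-adjust correction, 
which is roughly what a DAA-style instruction does in hardware:

/* Sketch only: adding two 2-digit packed-BCD bytes (values 0x00..0x99).
 * The raw binary add is one operation; BCD needs nibble checks and +6
 * corrections on top of it.                                              */
#include <stdio.h>

static unsigned bcd_add_byte(unsigned a, unsigned b)
{
    unsigned sum = a + b;                /* the plain binary add           */
    if ((a & 0x0F) + (b & 0x0F) > 9)     /* low decimal digit overflowed?  */
        sum += 0x06;                     /* decimal-adjust the low nibble  */
    if ((sum >> 4) > 9)                  /* high decimal digit overflowed? */
        sum += 0x60;                     /* decimal-adjust the high nibble */
    return sum;                          /* bit 8 is the decimal carry     */
}

int main(void)
{
    printf("%03X\n", bcd_add_byte(0x47, 0x38));  /* 085: BCD 47+38 = 85   */
    printf("%03X\n", bcd_add_byte(0x99, 0x99));  /* 198: BCD 99+99 = 198  */
    return 0;
}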

>  Wouldn't it be more efficient to keep a list of primes passed so far and
>  trial divide by those - testing just the _known_ primes up to that point,
>  and not all probable ones?

I'm not sure what you mean exactly; if you want to trial-divide by only 
primes, you'd either have to have the list handy (which could get pretty 
bloated) or create it on the fly with something like a sieve of 
Eratosthenes (which would take quite a while if you have to sieve all the 
way up to sqrt(N) for a large N).  Wheel factorization has the advantage of 
being a compact way to prove or disprove primality, since you don't need 
any large and elaborate lists, just a relatively small congruence list.
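
To make the "congruence list" idea concrete, here's a rough C sketch of 
wheel-based trial division.  It's my own illustration, not PrimeASM's 
actual routine, and it uses the small mod-30 wheel, so the only list you 
need is the eight residues coprime to 2*3*5:

/* Trial division driven by a mod-30 wheel: after stripping 2, 3 and 5,
 * only numbers congruent to one of these eight residues can be prime,
 * so they're the only divisors worth trying.                             */
#include <stdio.h>

static const unsigned wheel[8] = {1, 7, 11, 13, 17, 19, 23, 29};

/* Returns the smallest factor > 1 of n (n >= 2; n itself when prime). */
static unsigned long long smallest_factor(unsigned long long n)
{
    if (n % 2 == 0) return 2;
    if (n % 3 == 0) return 3;
    if (n % 5 == 0) return 5;
    for (unsigned long long base = 0; ; base += 30) {
        for (int i = 0; i < 8; i++) {
            unsigned long long d = base + wheel[i];
            if (d < 2) continue;          /* skip the trivial divisor 1   */
            if (d * d > n) return n;      /* past sqrt(n): n is prime     */
            if (n % d == 0) return d;     /* found a factor               */
        }
    }
}

int main(void)
{
    printf("%llu\n", smallest_factor(9876511ULL));    /* prime, prints itself */
    printf("%llu\n", smallest_factor(369758771ULL));  /* prints 6661          */
    return 0;
}

A bigger wheel (mod 2310 = 2*3*5*7*11, with 480 residues) trades a longer 
congruence list for fewer wasted divides; same idea either way.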

>  For that matter, it seems that you're better off using some of the ROM to
>  store a prime number list, rather than test them dynamically, at least until
>  a certain number is passed (how large would a list of the primes up to 2310
>  be?  Then let Wheeler kick in if we need larger primes?).  I wonder what my
>  89 does if I feed it exteremely large prime numbers. . . and I'm feeling
>  sadistic.

Well, there would be (I'm guessing) about 400 primes up through 2310; I 
know the probable-prime list for the mod-2310 wheel is 480 entries long, 
and the actual primes are a subset of those.  So a prime list up to that 
point isn't too bad, and it would cover primality-proving every number up 
through 2310^2 = 5 336 100.

>  Let's see, head on over to google and search for large mersenne prime
>  numbers. . . I think that 2^6972593-1 might be a bit too evil to feed to the
>  calc.  Let's try the 1000th prime number, 7919 - the calc knows it's prime
>  _instantly_.  Same with 10,479, the 10,000th prime.  1,299,827 takes only a
>  second, and that's the 100,008th prime number - it takes 822k to list all of
>  those as ASCII characters, so I have to doubt that TI listed them all; but
>  wheeler would be way too slow for that.  Your thoughts?

These results, to say the least, aren't surprising; you have to remember 
that you only have to trial-divide up to sqrt(N).  To primality-prove 7919, 
a program needs only to trial-divide primes or probable-primes up to 
int(sqrt(7919)) = 88.  Similarly, to primality-prove 10 479, the function 
would only have to trial-divide up to 102.  I do find it hard to believe, 
though, that 7919 is the 1000th prime and 10479, less than 3000 more than 
7919, is the 10 000th prime (that would mean 9000 primes in a span of fewer 
than 3000 numbers)...did you drop a digit in there?  The 10 000th prime is 
actually 104 729.
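
Just to put numbers on it: there are only 23 primes up to 88, so proving 
7919 prime is at most 23 trial divides.  Even 1,299,827 only needs divisors 
up to int(sqrt(1299827)) = 1140, which is a few hundred wheel candidates at 
most, so a near-instant answer on the 89 isn't shocking.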

Anyway, the times you mentioned are well within the range of wheel 
factorization, provided it's written in assembly and not BASIC (even C++ 
might be fast enough).  Here are the times for PrimeASM, a Z80 program for 
the 85/86 that uses wheel factorization to prove or disprove primality of 
(and provide a factor for) input numbers, compared to the TI-BASIC 
equivalents by STL:

Number                        Prime6         Prime6B        PrimeASM
----------------------------  -------------  -------------  ------------
9876511 (prime)               17 sec         14 sec         3 sec
369758771 (=6661*55511)       36 sec         31 sec         8 sec
1000000007 (prime)            2 min 51 sec   2 min 21 sec   40 sec
17439280249 (=55511*314159)   5 min 5 sec    4 min 13 sec   1 min 28 sec

As a side note (and as an accuracy test), PrimeASM took 6 hr 57 min to find 
the first factor of 9 876 511 069 135 577 ( = 9 876 511 * 1 000 000 007), 
which means it trial-divided every probable-prime up to almost 10 000 000 in 
about 7 hours.  I figure that works out to about 100 trial-divides per 
second...not bad for a Z80 ;)
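
Quick sanity check on that estimate (assuming PrimeASM uses something like 
a mod-30 wheel, which I don't know for sure): the candidates coprime to 30 
below 9 876 511 number roughly 9 876 511 * 8/30, or about 2.6 million, and 
2.6 million divides in roughly 25 000 seconds comes to about 105 per 
second, so the ~100 trial-divides-per-second figure checks out.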

>  Here's something interesting - it takes about 10 seconds for my calc to
>  realize that factor(2^2000) is 2^2000 =)

Hmmm...2000 divides in 10 seconds?  Not unlikely for such a large number, I 
s'pose...

>  Attempting factor(2^2000-1) is interesting - after about 2 minutes and 50
>  seconds, it gave me a warning that it may not be fully simplified
>  (indicating that it's given up =), and 5 seconds later it displayed an
>  answer.  The largest prime it picked up was 601; it appears to have gotten
>  stuck on a 582-digit number.

Strange that it took nearly 3 min to pick up the factor 601...

>  Anyways, I think my inquiry has been taken out way beyond what I expected.
>  If you need larger primes, don't use a calc to find them; I imagine all the
>  primes worth listing can fit in a simple table - say, the first 10,000 of
>  them.

True, using just a graphing calculator to find large primes is stupid, but 
that doesn't take the fun out of it ;)

>  I see, since you have to do the division.  Do you just subtract and check
>  for zero/carry, or have another algorithm for the binary division?

Just do a binary division and check the remainder to see whether the 
divisor is a factor of the dividend.  Repeated subtraction would take far 
too long...
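
In case it helps, here's a minimal C sketch of the shift-and-subtract idea 
(not the actual Z80 routine): one conditional subtract per bit of the 
dividend instead of one subtract per multiple of the divisor, and all trial 
division ever looks at is the remainder:

/* Restoring (shift-and-subtract) division, keeping only the remainder.
 * A real bignum version does the same thing across multiple words.      */
#include <stdint.h>
#include <stdio.h>

static uint64_t shift_sub_remainder(uint64_t n, uint64_t d)
{
    uint64_t rem = 0;
    for (int i = 63; i >= 0; i--) {
        rem = (rem << 1) | ((n >> i) & 1);  /* bring down the next bit   */
        if (rem >= d)
            rem -= d;                       /* one conditional subtract  */
    }
    return rem;
}

int main(void)
{
    uint64_t n = 369758771ULL;              /* = 6661 * 55511            */
    printf("divisible by 6661? %s\n",
           shift_sub_remainder(n, 6661) == 0 ? "yes" : "no");  /* yes */
    printf("divisible by 6659? %s\n",
           shift_sub_remainder(n, 6659) == 0 ? "yes" : "no");  /* no  */
    return 0;
}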

Anyway, I'd be interested in how TI factors; I doubt the 89/92+ uses wheel 
factorization...  I think I read this in STL's documentation for Prime6, 
but doesn't the factor function only catch factors up to 65535?

JayEll