[A86] Re: What about LISP?


>From: David Reiss <davidr42@optonline.net>
>Reply-To: assembly-86@lists.ticalc.org
>To: assembly-86@lists.ticalc.org
>Subject: [A86] Re: What about LISP?
>Date: Thu, 20 Dec 2001 01:15:46 -0500
>
>
>On Wed, Dec 19, 2001 at 05:43:35AM +0000, David West wrote:
> > Well, yeah, sure. I realize that I used Scheme-ish keywords in that
> > example. But once again, the idea that I'm trying to bring up is
> > independent of any particular LISP dialect. I'm suggesting the
> > creation of a LISP-like language that might serve as some sort of
> > high-level glue and maybe provide other useful qualities.
>
>Glue for what?

Yeah, that term is a bit vague.  What I meant it to refer to is the ability
of a high-level language to easily integrate low-level programming in some
form or another.  That is, the ideal language I'm thinking about would be
semantically compact and at the same time flexible enough that, when needed,
it would allow easy integration with assembly-level programming.  I used the
word "glue" here because such a language would most likely be used to glue
together (through some built-in, consistent encapsulation mechanism) smaller
pieces of assembly language routines.

One of the main things I'm looking for in this language is an aesthetically
pleasing and logical way to do this.
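
For instance (and this is just a made-up sketch in C, since the language
I'm describing doesn't exist yet; every name in it is invented), the kind
of encapsulation I'm picturing amounts to a table of small routines -- on
the calc these would be hand-written assembly -- all sitting behind one
consistent interface that the high-level layer calls through:

  #include <stdio.h>
  #include <string.h>

  /* Each low-level routine (hand-written assembly on the calc; plain C
     here so the sketch runs anywhere) is wrapped in the same signature. */
  typedef int (*primitive_fn)(int a, int b);

  static int prim_add(int a, int b) { return a + b; }
  static int prim_mul(int a, int b) { return a * b; }

  /* The "glue": one table that the high-level layer indexes by name. */
  struct primitive { const char *name; primitive_fn fn; };

  static const struct primitive primitives[] = {
      { "add", prim_add },
      { "mul", prim_mul },
  };

  static int call_primitive(const char *name, int a, int b)
  {
      size_t i;
      for (i = 0; i < sizeof primitives / sizeof primitives[0]; i++)
          if (strcmp(primitives[i].name, name) == 0)
              return primitives[i].fn(a, b);
      return 0;  /* unknown primitive; a real system would signal an error */
  }

  int main(void)
  {
      printf("%d\n", call_primitive("mul", 6, 7));  /* prints 42 */
      return 0;
  }

The C itself isn't the point; the shape is.  The high-level side never sees
anything but the uniform call, and the low-level side can be swapped out
routine by routine.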

>
> > Well, all programs have "garbage collection" in some form or another.
> > That is, any interesting program contains dynamically allocated data,
> > and as a result needs to allocate and manage this data. In languages
> > that don't have garbage collection, this functionality is built into
> > the program; in languages like LISP, it is usually built into the
> > interpreter in some easily recognizable form. The point of my argument
> > is that formalized garbage collection isn't necessarily a bad thing,
> > because every interesting program does it either explicitly or
> > implicitly anyway. What do you think?
>
>It's simply not true that any interesting program contains dynamically
>allocated data. I'd bet that 90% of programs written for the TI-86 use
>only static and automatic (stack) storage. On the TI-89, that number
>will certainly be lower, because the OS provides support for basic
>dynamic memory management.
>

Those programs that make use of static memory should also be fine in a
garbage-collected environment, because statically declared memory elements
don't have to be garbage collected.  So if most current programs for the
TI-86 are static-only, then an implementation of those programs in an
interpreted language should not suffer a performance loss due to garbage
collection.  (Assuming the interpreter allows for declaring static data,
which seems possible even if not commonly done.)
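
Just to make the static-versus-dynamic point concrete, here is a toy C
sketch (nothing TI-specific, names invented): data with static storage
duration never creates work for a collector, while per-call heap allocation
is exactly the kind of thing a collector -- or a manual free() -- has to
track.

  #include <stdlib.h>

  /* Static storage: fixed for the life of the program, so a collector
     never has to trace, move, or free it. */
  static unsigned char framebuffer[1024];

  void draw_static(void)
  {
      framebuffer[0] = 0xFF;           /* no allocation, no GC work */
  }

  void draw_dynamic(size_t n)
  {
      unsigned char *buf = malloc(n);  /* this allocation is what a
                                          collector (or free()) manages */
      if (buf) {
          buf[0] = 0xFF;
          free(buf);
      }
  }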

>It's also not the case that all programs have 'garbage collection.' At
>least the way the term is currently used in the computer science
>community, manual memory management doesn't count as 'garbage
>collection.' Of course, it's entirely possible to write a garbage
>collector for Scheme on the 86. I was just pointing out that it has
>nontrivial costs, both in terms of efficiency and complexity of the
>project.
>

Your point here is well taken.  I am often guilty of causing semantic
confusion when I don't pay careful enough attention to the way phrases are
commonly used.  I was trying to generalize the phrase "garbage collection"
to also refer to whatever a program does to deal with dynamically allocated
memory.  I think your distinction between "manual" and "automatic" memory
management is helpful, though.  I was attempting to argue that automatic
memory management (garbage collection) isn't necessarily bad for
performance, because programs that perform manual memory management still
have to incur the overhead of memory management themselves.  I agree that
it is hard to generalize things without losing efficiency, but I would also
say that it is hard to say just how much efficiency you end up losing, if
any at all.  (Also, part of my reason for making this argument is to
convince myself, with the help of people's feedback, whether or not it is
wrong.  I don't necessarily buy into it yet.  I'm just searching for the
"right" reason not to buy into it.)

> > I agree with the first part. As for the "an interpreted environment
> > ... too slow for anything useful": once again, this statement somehow
> > assumes that interpretation adds unmanageable overhead.
>
>Perhaps I was too harsh in my last message: the overhead of an on-calc
>interpreter is not unmanageable and doesn't disqualify the project by any
>means. But the origin of this thread, IIRC, was about a new language
>that could produce code just about as fast as hand-written assembly. My
>point was that no Scheme system (interpreter or compiled) would do that.
>That's not to say it wouldn't be useful for things other than games,
>like text-based programs, symbolic manipulators (as someone else pointed
>out), etc.
>
Intuitively I tend to agree with you, but I'm still searching for the
deductive reason this has to be the case.  That is, I suppose I want to know
why I can't have my cake and eat it too...  (Why can't we have a powerful
high-level interpreted language and at the same time do all the things we
can do in assembly?)  The answer everyone seems to give (myself included) is
that with assembly you have access to the processor directly.  But why can't
we come up with a language that has really fast, assembly-like primitives,
yet at the same time maintains the protection and expressive capabilities of
a high-level (interpreted or not) language?
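
Here is roughly where I picture the cost coming from, sketched as a tiny
stack-machine loop in C (the opcodes and names are all made up): the
primitive itself could run at full native speed, but every use of it pays
the fetch/decode/dispatch toll around it, which straight assembly for the
same algorithm wouldn't.

  #include <stdio.h>

  enum { OP_PUSH, OP_ADD, OP_HALT };

  /* A tiny stack-machine interpreter.  OP_ADD could be a fast assembly
     routine, but the loop around it -- fetch the opcode, decode it,
     branch to the handler -- is overhead that hand-written assembly for
     the same computation would not pay. */
  static int run(const int *code)
  {
      int stack[16];
      int sp = 0;

      for (;;) {
          switch (*code++) {              /* fetch + decode + dispatch */
          case OP_PUSH: stack[sp++] = *code++;              break;
          case OP_ADD:  sp--; stack[sp - 1] += stack[sp];   break;
          case OP_HALT: return stack[sp - 1];
          }
      }
  }

  int main(void)
  {
      const int program[] = { OP_PUSH, 2, OP_PUSH, 3, OP_ADD, OP_HALT };
      printf("%d\n", run(program));       /* prints 5 */
      return 0;
  }

I gather real interpreters shrink that toll with tricks like threaded
dispatch, but I don't see how it ever goes to zero.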

> > The main goal that I hope to accomplish with the above arguments is to
> > persuade my fellow colleagues that a LISP system could be made that
> > doesn't require us to sacrifice the computational ability of the
> > calcs. At least I hope this is the case. (I'm still trying to convince
> > myself that this is possible.)
>
>I'm not sure what you mean by 'computational ability.' Certainly
>programs written in Scheme will be more expressive than those written in
>assembly, and they will be able to do anything that assembly programs
>can do (except crash the calculator :) ). They'll just do it some amount
>slower.
>
Good point.  Well, I'm not sure how to define it, but I'm talking about that
thing most people seem to agree you lose by going to an interpreted
language.  That is, if the "fastest" possible instance of algorithm X in
assembler takes some time T to execute, then it is usually assumed that the
"fastest" possible instance of the same algorithm in some high-level or
interpreted language will take more time than T.  I can see the "intuition"
for this like anyone else, but I'm trying to figure out exactly why it is
so.  Or, if we go ahead and assume it to be true, can we come up with a
"really cool" compromise language that allows us to somehow have the best of
both worlds?  It isn't obvious what this compromise should be.  Guess I'm
just not creative enough to see it.

later,

David E. West
