Discussion:
A declarative model of Smalltalk (was Re: [DOCS] SUnit tests)
Hannes Hirzel
2012-01-28 11:20:50 UTC
Permalink
[...]
YES YES YES. This is perhaps the most hotly misunderstood
aspect of the ANSI standard. It is a non-reflective declaration
powerful enough to describe a fully reflective runtime program.
Most users of VSE never realized that they were revisioning
non-reflective declarations of very reflective programs.
Could you please elaborate on this a bit, or send a pointer
to a paper which gives more details?
My first introduction to this was at the 1988 OOPSLA in San Diego,
where Allen Wirfs-Brock and Brian Wilkerson presented a paper
with a title very much like "Modular Smalltalk".
Allen has expounded on this topic and there used to be a set
of slides at www.instantiations.com. Try looking for publications at
that site. Also look for Allen's postings on this list.
I'll try to find time this weekend to put together some of my thoughts
on the topic. Lots of interesting stuff was going on at Digitalk just
prior to the merger with ParcPlace.
The distinction between the declaration of a program and its
runtime incarnation as a wildly reflective system is really very subtle.
Cheers,
:-}> John
Thank you for the pointer. I found this
A declarative model for defining Smalltalk programs (summary)
http://www.smalltalksystems.com/publications/_awss97/SSDCL1.HTM
and
Full presentation
http://www.smalltalksystems.com/publications/_awss97/INDEX.HTM
Is this what you mean?
-- Hannes
On slide
http://www.smalltalksystems.com/publications/_awss97/SLD016.HTM
he says that "normal languages" like Fortran, C++ and Java are
declarative while Smalltalk is imperative.
(Slide http://www.smalltalksystems.com/publications/_awss97/SLD008.HTM)

In my opinion this is an unusual way of using "declarative" vs. "imperative".
I think of Prolog, SQL and XSLT, for example, as being declarative,
while most of the other commonly used languages are imperative.

Ah, now I think I can guess what he means. Instead of writing a "proper"
program, we fill in all these little bits and pieces of code into various
panes of arcane-looking tools called browsers and inspectors.


Another interesting slide - Unnecessary implementation assumptions
http://www.smalltalksystems.com/publications/_awss97/SLD025.HTM

Slide http://www.smalltalksystems.com/publications/_awss97/SLD026.HTM,
"Reflection vs. the Declarative Model",
gets right to the point.


A good question is on slide
http://www.smalltalksystems.com/publications/_awss97/SLD028.HTM

- Traditional Smalltalk reflection is inherently implementation
independent
- Why not objectify the abstract declarative description of a Smalltalk
program?


A new architecture for Smalltalk Development
http://www.smalltalksystems.com/publications/_awss97/SLD035.HTM


And finally it comes - watch out all you tiny image specialists
http://www.smalltalksystems.com/publications/_awss97/SLD042.HTM

The 3+4 image < 10kB


To summarize: I consider these things to be interesting.
But what I am most interested in is learning more about what John wrote:
This declarative technique is the razor's edge by which a "timeless"
configuration of a program may be archived independently
of the very dynamic image in which it was generated.
-- Hannes Hirzel



P.S. I cc this to "Alejandro F. Reimondo" <***@smalltalking.net>
because I would like to hear his opinion. Alejandro: You have to check
out the previous emails in this thread as well.
Allen Wirfs-Brock
2012-01-28 11:20:50 UTC
An HTML attachment was scrubbed...
URL: http://lists.squeakfoundation.org/pipermail/squeak-dev/attachments/20030221/f522618d/attachment.htm
Jeff Read
2012-01-28 11:20:50 UTC
When the compiler and parser are tightly integrated with the runtime of the language, instructing the language to add program parts to its currently running program often becomes the norm, rather than crafting everything declaratively in an editor and then running it through a compiler.

Smalltalk appears to borrow a lot from LISP in this regard, at least in terms of philosophy if not implementation. In Scheme, for instance, the keyword define means "Add this symbol to your currently running global environment, and bind it to the following value..." LISPers tend to see this as an advantage.

The difference, if I surmise correctly, is that every Scheme implementation knows what define means; the base semantics for the keyword are standardized -- whereas each Smalltalk has a different protocol for creation of classes and adding methods to them.

The Smalltalk situation is rather unfortunate, but I don't see what having a separate declarative syntax for Smalltalk affords us; with a standardized imperative protocol for creation of classes, methods, and variables, the declarative syntax comes for free.
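The `define` semantics Jeff quotes can be sketched in a few lines of Python (a toy model with invented names, not any real Scheme or Smalltalk API): a single standardized imperative operation that binds a name in the running environment.

```python
# Toy model of Scheme-style `define` (all names here are invented).
class Environment:
    def __init__(self):
        self.bindings = {}

    def define(self, name, value):
        # "Add this symbol to your currently running global environment,
        # and bind it to the following value..."
        self.bindings[name] = value

env = Environment()
env.define("square", lambda x: x * x)
print(env.bindings["square"](5))  # 25
```

A standardized protocol of this shape is exactly what Jeff argues Smalltalk lacks: every Scheme agrees on what `define` does, while each Smalltalk dialect has its own class-creation messages.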
--
Jeffrey T. Read
"I fight not for me but the blind babe Justice!" --Galford
Jeff Read
2012-01-28 11:20:51 UTC
I have heard sillier things, however: in the recent DDJ I read that all the widely used programming languages in the future will be XML variants that programmers manipulate using GUI WYSIWYG XML editors. The author of the article really didn't have much to support the use of XML as a program format except that programmers are the only people left who use flat text editors and they really need to get with the times. :)
--
Jeffrey T. Read
"I fight not for me but the blind babe Justice!" --Galford
Cees de Groot
2012-01-28 11:20:51 UTC
Post by Jeff Read
I have heard sillier things, however: in the recent DDJ I read that
all the widely used programming languages in the future will be XML
variants that programmers manipulate using GUI WYSIWYG XML editors.
Well, the Flare programming language makes a good argument. The crux is
the 'X' in XML - as usual, one of the most powerful bits and therefore
the least understood and used :-).

Flare is a language that aims to bring programming up to the level
required for developing AI (http://www.singinst.org,
http://www.flare.org). The top-level representation is a GUI, with XML
representation being the 'innards'. The funny bit is, of course, that
you can extend the XML representation with structured comments, pictures
of the authors, notes, full discussion threads, etcetera; to all tools
you are going to develop (repositories, ...), it's all just XML so the
whole bunch is automatically carried over to wherever the source goes.

In a way, it's the 21st century approach to literate programming. I
think it is an interesting direction.

Allen Wirfs-Brock
2012-01-28 11:20:52 UTC
Post by Jeff Read
When the compiler and parser are tightly integrated with the runtime of
the language, instructing the language to add program parts to its
currently running program often becomes the norm, rather than crafting
everything declaratively in an editor and then running it through a compiler.
I think you half get and half miss the point. Using a declarative approach
to Smalltalk in no way implies using an editor to create source files. As I
previously mentioned, Smalltalk browsers generally present a declarative
model of Smalltalk programs to the programmer who creates and views class
and method definitions/declarations (a class definition is (partially)
presented as a message expression but to most users that's
just syntax which is treated as a declaration). The issue is more
related to what happens when the user clicks "accept". In a classic
Smalltalk-80 system the browser uses the declaration to imperatively
side-effect the running system and then essentially forgets the
declaration. In a declarative Smalltalk environment the browser records
the declaration as a primary archival artifact and then (perhaps
optionally) side-effects the running system.
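Allen's distinction about what happens on "accept" can be sketched as a toy model (Python; `System`, the browser classes, and the declaration format are all invented for illustration):

```python
class System:
    """Toy stand-in for a running Smalltalk image (invented)."""
    def __init__(self):
        self.classes = {}

    def apply(self, declaration):
        # Imperatively side-effect the running "image".
        self.classes[declaration["name"]] = declaration

class ClassicBrowser:
    """Smalltalk-80 style: apply the declaration, then forget it."""
    def __init__(self, system):
        self.system = system

    def accept(self, declaration):
        self.system.apply(declaration)
        # The declaration itself is discarded; only its effects remain.

class DeclarativeBrowser:
    """Declarative style: the declaration is the primary artifact."""
    def __init__(self, system, archive):
        self.system = system
        self.archive = archive

    def accept(self, declaration):
        self.archive.append(declaration)   # record first...
        self.system.apply(declaration)     # ...then (optionally) side-effect

classic = ClassicBrowser(System())
classic.accept({"name": "Point", "superclass": "Object"})

archive = []
browser = DeclarativeBrowser(System(), archive)
browser.accept({"name": "Point", "superclass": "Object"})
assert archive == [{"name": "Point", "superclass": "Object"}]
```

Both browsers present the same interactive experience to the programmer; only what survives the "accept" differs.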

The two most widely used "team" Smalltalk development environments during
the "golden age" of commercial Smalltalk were Digitalk's VSE(Team/V) and
OTI's Envy. While OTI was less overt about it, both systems
were essentially declarative in nature. Yet both presented a complete,
browser-based, interactive, incremental, reflective, "Smalltalk-style"
development experience.

The issue is really all about reproducibility of programs. If I create a
program I need to be able to hand it to you with the expectation that you
will receive the exact same program. I need to be able to take that
program and run it with a future version of the runtime system and know
that it will still get the exact same results. I need to combine
independently developed "modules" into a common program and know that
each module remains as originally defined. I need to retrieve an old,
archived version of a program and reconstruct it in its exact original
form. To reliably do these things you need a declarative definition of
the program or a module rather than a sequence of state sensitive
imperative operations. BTW, it is the initial state sensitivity that is
the real killer. That's why "file-outs" often don't work when they are
"filed-in" and that's why you have to worry about the load order of change
sets.
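The load-order problem can be illustrated with a toy model (Python; the change-set and declaration formats are invented): imperative change sets are state-sensitive mutations, while declarative descriptions of disjoint program parts merge the same way in any order.

```python
# Imperative: each change set is a state-mutating script, so the result
# depends on the order in which the sets are loaded.
def change_set_a(system):
    system["Point"] = {"superclass": "Object", "vars": ["x", "y"]}

def change_set_b(system):
    # Silently assumes change_set_a has already run.
    system["Point"]["vars"].append("z")

image = {}
change_set_a(image)
change_set_b(image)   # works only in this order; b-then-a raises KeyError
assert image["Point"]["vars"] == ["x", "y", "z"]

# Declarative: the program is a set of assertions about its final form,
# and merging disjoint declarations is order-independent.
decl_a = {"Point": {"superclass": "Object", "vars": ["x", "y", "z"]}}
decl_b = {"Rect": {"superclass": "Object", "vars": ["origin", "corner"]}}
assert {**decl_a, **decl_b} == {**decl_b, **decl_a}
```

The hidden assumption inside `change_set_b` is precisely the "initial state sensitivity" that makes file-ins fragile.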

(I also take issue with the implied assertion that the Smalltalk/Squeak
compiler is tightly integrated with the runtime system. Removing the
compiler is quite simple. Most commercial Smalltalk products have included
the capability (or even the requirement) to deploy applications without
including the compiler. Similarly, it is quite possible to build a
runtime compiler for Java that is capable of taking a source code class
declaration and dynamically loading it into a running application.)
Post by Jeff Read
Smalltalk appears to borrow a lot from LISP in this regard, at least in
terms of philosophy if not implementation. In Scheme, for instance, the
keyword define means "Add this symbol to your currently running global
environment, and bind it to the following value..." LISPers tend to see
this as an advantage.
Not when they want to create a maintainable, reproducible, deployable
application. In that situation they create archival source code definitions
that can reproducibly create the runnable application.
Post by Jeff Read
The difference, if I surmise correctly, is that every Scheme knows what
define means; the base semantics for the keyword are standardized --
whereas each Smalltalk has a different protocol for creation of classes
and adding methods to the
The Smalltalk situation is rather unfortunate, but I see not what having a
separate declarative syntax for Smalltalk affords us; with a standardized
imperative protocol for creation of classes, methods, and variables, the
declarative syntax comes for free.
It's not just an issue of protocol standardization. It's more an issue of
initial state dependencies and polymorphism. Syntactically you can use
Smalltalk message syntax if you want. However, if you want
reproducibility, their semantic interpretation can't be dependent upon the
happenstance state of the running system. As you say, it isn't the
syntax that is important. However, a standard semantics is essential. ANSI
Smalltalk doesn't even bother to define a concrete syntax for class
declarations. It just says that no matter what you use for a
concrete syntax, here is what it means.

Allen Wirfs-Brock
Post by Jeff Read
--
Jeffrey T. Read
"I fight not for me but the blind babe Justice!" --Galford
Nevin Pratt
2012-01-28 11:20:52 UTC
Over the years I have read Allen's papers (and read his posts)
concerning a declarative Smalltalk, and I've tried to weigh in and form
my own opinion. Over all of those years, Allen has not convinced me of
the value of the declarative approach, but then again, I have not been
convinced he is wrong either. I am simply undecided.

However, I *do* take issue with the implied assertion that the current
imperative approach used by the major Smalltalks necessitates any less
(or even any more) reproducible "programs" than the declarative
approach. But let me briefly introduce some definitions of the
components of a declarative "program" before I attempt to defend that belief:

1. A "word" is a sequence of non-white space tokens within the
declaration of the "program" (and "white space" has the classic
definition we are all used to).

2. A "vocabulary" is a unique set of words.

3. A "language" is a vocabulary coupled with a unique meaning of each
word of that vocabulary.

4. Any program declaration is necessarily written in a "language", as
defined above.

Since above I said a vocabulary is a *unique* set of words, this means
that adding or removing a word to or from the vocabulary creates a new
language, per the above definitions.

Since above I said a language is a vocabulary coupled with a *unique*
meaning of each word, this means that changing the meaning of a given
word creates a new language, per the above definitions.
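Nevin's definitions can be rendered almost literally as a small data model (a Python sketch; the representation and names are invented): because the language value is immutable, adding a word or changing a meaning necessarily produces a new language.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Language:
    vocabulary: frozenset   # 2. a unique set of words
    meanings: tuple         # 3. (word, meaning) pairs, one per word

    def with_word(self, word, meaning):
        # Adding (or redefining) a word yields a *new* language,
        # per definitions 2 and 3 above.
        return Language(self.vocabulary | {word},
                        self.meanings + ((word, meaning),))

base = Language(frozenset({"+"}), (("+", "addition"),))
extended = base.with_word("square", "multiply the receiver by itself")

# Programming == extending the base language into a new one.
assert extended != base
assert extended.vocabulary == {"+", "square"}
```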

Given the above definitions, then, one of the first things we discover
is that the act of programming is the same thing as the act of "language
designing". You are extending an existing language in a domain-specific
direction (i.e., introducing new words to the vocabulary, whether they
are functions, procedures, methods, or whatever), and in so doing, you
are creating a new language as a side-effect, for the simple reason that
your "new" language has a greater (and sometimes lesser) set of words
than what you started with, and you may have even altered the "meanings"
of some of the existing words in the process. Either of these acts
result in a new "language" (using the definition of "language" from
above), even though you may be starting from a given base language when
you do it. Your existing "base" language might be the native
instruction set understood by the CPU you are programming on, or on some
Virtual Machine, or any other form of a "language", but you will always
be building on a base language of some sort.

Now, as we know, many languages differentiate intrinsic words within the
vocabulary of the language from the programmer-defined words. But many
languages (such as Smalltalk) do not differentiate them.

Given the above definitions, a Smalltalk "program" (or any other program
in any language) consists of *every* word of the vocabulary of the
"program", coupled with the meaning of every word. And because
Smalltalk does not differentiate *who* introduced the word into the
vocabulary (intrinsic or programmer-defined), it is irrelevant whether
"Programmer A" introduced the word to the vocabulary, or "Programmer B".
Likewise, it is irrelevant whether "Company A" introduced it, or
"Company B". Thus, a complete "program" necessarily has to either (1)
include *every* word of the vocabulary of the program, and the
definitions of those words, or (2) assume that such a program is merely
an extension of an existing language (or program). Actually, two
paragraphs above I argued that you will always be building on a base
language, which would mean that #1 isn't even possible, and that your
only option is to assume that such a program is merely an extension of
an existing language (or program). But I won't actually make that
assertion (yet).

Now, if such a "program" includes every word of the vocabulary of the
program (assuming that it is even possible to do this, and I suggested
above that this option isn't even possible), together with the
definitions of those words, I personally don't see any advantages (or
disadvantages) of comparing this with an image-based approach, since
everything has to be included anyway.

And, if it is just an extension of an existing language, then it
necessarily has the same "initial state sensitivity" that the imperative
approach has, because it is expecting to extend a known, existing (i.e.,
"static" and unchanging) base language. If the base language that it is
extending is different from that which it was designed to extend, you
have exactly the same state-related problems that you see with the
imperative approach. And, in the imperative approach, if you start from
a known base "language", then a known sequence of expressions applied to
that base *will* reliably reproduce the "program", just as the
declarative approach will.

When merely "extending" an existing base language, both approaches
require that you know where you are starting from, consequently neither
approach results in any theoretical advantage towards program
reproducibility. That's my opinion.

I could say more, but suffice it to say that I remain unconvinced that
any declarative approach has any theoretical advantage (or
disadvantage) over the classic Smalltalk imperative approach.

Nevin
Colin Putney
2012-01-28 11:20:53 UTC
Post by Nevin Pratt
Over the years I have read Allen's papers (and read his posts)
concerning a declarative Smalltalk, and I've tried to weigh in and
form my own opinion. Over all of those years, Allen has not convinced
me of the value of the declarative approach, but then again, I have
not been convinced he is wrong either. I am simply undecided.
Oddly enough, I have only just recently been introduced to Allen's
work, but I'm quite excited about the possibilities it presents. The
work Avi and I did on Monticello convinced me that a declarative
representation of Smalltalk programs is essential if we are to reliably
and automatically construct images with known properties in a modular
fashion.
Post by Nevin Pratt
However, I *do* take issue to the implied assertion that the current
imperative approach used by the major Smalltalk's necessitates any
less (or even any more) reproducable "programs" than the declarative
approach. But let me briefly introduce some definitions of the
components of a declarative "program" before I attempt to defend the
1. A "word" is a sequence of non-white space tokens within the
declaration of the "program" (and "white space" has the classic
definition we are all used to).
2. A "vocabulary" is a unique set of words.
3. A "language" is a vocabulary coupled with a unique meaning of each
word of that vocabulary.
4. Any program declaration is necessarily written in a "language", as
defined above.
Since above I said a vocabulary is a *unique* set of words, this means
that adding or removing a word to or from the vocabulary creates a new
language, per the above definitions.
Since above I said a language is a vocabulary coupled with a *unique*
meaning of each word, this means that changing the meaning of a given
word creates a new language, per the above definitions.
I find this definition of language unsatisfactory, for two reasons. For
one, it ignores the notion of grammar, which is a key part of language.
Also, the requirement for a unique meaning for each word directly
contradicts one of the central notions of Smalltalk, which is that
different objects respond to the same message differently.

The point (snipped for brevity) that any program necessarily extends
the capabilities of the base system is well taken, however. In fact,
it's the main reason that a declarative model for Smalltalk programs is
so useful.

The difference between declarative and imperative representations of
Smalltalk boils down to one of meaning. An imperatively defined program
consists of a series of messages sent to named objects. It is
understood that if those messages are sent in an appropriate context,
they will have the side effect of adding the functionality provided by
the program to the system.

The advantage of the imperative representation is that it provides a
level of abstraction that can be useful. Since there are only message
sends, you can take advantage of polymorphism in the environment where
the program is constructed. A radically different language
implementation, say one that did not use classes to create objects,
could provide an environment where that sequence of message sends would
still have appropriate side effects.

The problem with the imperative approach is that it's so abstract.
There's no *meaning* attached to the message sends in a fileOut.
Because of that, it's impossible to reason meaningfully about the
program. An imperative representation is essentially a program that
constructs a program. The only way to find out what it does is to run
it and look for changes in the state of the system.

A declarative representation, OTOH, does have a meaning. As Allen
mentioned, it's the meaning that's important, not the actual syntax of
the representation. This means you can reason about the program without
loading it into the system. This is important precisely because the
program is an extension of the base system.
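The contrast can be sketched with a toy model (Python; the manifest format below is invented, not Monticello's actual format): the imperative representation must be executed to discover what it defines, while a declarative one can be queried as plain data.

```python
# An imperative file-out is an opaque program that builds the program;
# to learn what it defines, you must evaluate it in a live image and
# diff the system state before and after.
imperative_fileout = "Object subclass: #Point ..."

# A declarative manifest is plain data and can be reasoned about
# without loading it into any image.
declarative_manifest = [
    {"kind": "class", "name": "Point", "superclass": "Object"},
    {"kind": "method", "class": "Point", "selector": "x"},
]

defined_classes = [d["name"] for d in declarative_manifest
                   if d["kind"] == "class"]
assert defined_classes == ["Point"]
```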

As an example consider a program that adds a class to the system. This
class is a subclass of Object, a class that's already part of the
system. The imperative model represents this by sending #subclass: to
the object named 'Object'. If Object doesn't exist, you get an MNU
error. A well-designed system will have facilities to catch these
errors and react appropriately. Nevertheless, if the error occurs in
the middle of the execution, the side effects of the execution up to
that point will persist. This leads to an inconsistent system.

The declarative model will assert the existence of a class which
extends Object. It's up to the system to create whatever structures are
necessary to bring that about. If Object doesn't exist, the system does
not provide the functionality the program depends on, and so it can't
be loaded. It's possible to detect this *before* attempting to load the
program, handle the error appropriately, and leave the system unchanged.
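One possible sketch of such a declarative loader (Python; the API and declaration format are invented): validate every dependency before touching the system, so a failed load has no side effects at all.

```python
def load(system, declarations):
    """Apply declarations atomically: all-or-nothing (invented API)."""
    provided = {d["name"] for d in declarations}
    missing = [d["superclass"] for d in declarations
               if d["superclass"] not in system
               and d["superclass"] not in provided]
    if missing:
        # Rejected *before* any mutation: the system is unchanged.
        raise ValueError(f"unmet dependencies: {missing}")
    for d in declarations:
        system[d["name"]] = d

system = {"Object": {}}
load(system, [{"name": "Point", "superclass": "Object"}])

try:
    load(system, [{"name": "Broken", "superclass": "Missing"}])
except ValueError:
    pass
assert "Broken" not in system   # the failed load had no side effects
```

No transactions or rollback are needed: because the dependencies are explicit in the declarations, the check happens before any mutation.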

This case of defining a subclass of a non-existent class is just one
example of the larger problem of program dependencies. Because each
program extends the system, it must rely on certain functionality and
semantics being present in the base system. A declarative
representation of the program makes those dependencies explicit,
while an imperative one leaves them implicit.

Cheers,

Colin



Colin Putney
Whistler.com
Nevin Pratt
2012-01-28 11:20:53 UTC
Post by Colin Putney
I find this definition of language unsatisfactory, for two reasons.
For one, it ignores the notion of grammar, which is a key part of
language. Also, the requirement for a unique meaning for each word
directly one of the central notions of Smalltalk, which is that
different objects respond to different messages differently.
I was brief. Brevity often causes lack of precision. My fault.

Consider a classical "Object Reference Manual". In such a manual, each
method of each class has an English-language description of the meaning
of that word in that context (i.e., for that receiver), and it targets a
human as the consumer of the definition. The implementation of each
such method is the description of the meaning of that word in that
context (i.e., for that receiver), and it targets the computer as the
consumer of that definition.

I realized my lack of precision during (and even after) posting, but
didn't worry about it, because...
Post by Colin Putney
The point (snipped for brevity) that any program necessarily extends
the capabilities of the base system is well taken, however. In fact,
it's the main reason that a declarative model for Smalltalk programs
is so useful.
...it didn't change the message I was trying to deliver with it, as your
comment above also illustrates.

Also, even my "addendum" above lacks precision. But I think most folks
will understand the message I am trying to deliver with it, and that's
good enough for me (for now).
Post by Colin Putney
<..snip..>
As an example consider a program that adds a class to the system. This
class is a subclass of Object, a class that's already part of the
system. The imperative model represents this by sending #subclass: to
the object named 'Object'. If Object doesn't exist, you get a MNU
error. An well-designed system will have facilities to catch these
errors and react appropriately. Nevertheless, if the error occurs in
the middle of the execution, the side effects of the execution up to
that point will persist. This leads to an inconsistent system.
Not with GemStone Smalltalk. You simply "roll back" to your point of
origin. So, this proves it can be elegantly dealt with in an imperative
model.

Don't blame the weaknesses of a specific implementation on the
imperative approach in general. The same warning likewise applies to
the declarative approach, which typically creates such artifacts
as "DLL hell", "JRE version proliferation", "glibc version mismatches",
etc. All of those real-world artifacts (and I'm sure quite a few more)
have, as their underlying root cause, the same "initial state
sensitivity" issues that the imperative approach has. It is the
specific way that those language designers chose to meet the "initial
state sensitivity" challenge that produced those artifacts. In fact, if
anything, I'd have to give the nod to the imperative approach, because
I'm personally not aware of *any* approach that deals with the issue in
the declarative model (without artifacts like the above) as elegantly as
GemStone does with the imperative model. If anybody *does* have an
elegant solution, I think it would be David Simmons with S#, but I
haven't looked deeply enough to judge.

But in either camp, the "initial state sensitivity" issue has to be
dealt with. Thus, I don't see any real difference, and so I remain
unconvinced.

BTW, I quite like your description that "an imperative representation is
essentially a program that constructs a program". Of course, I think
you can see that it too lacks precision, as your definition is
recursive, and a fileout isn't recursive. But I still like your phrase,
and your inclusion of the word "essentially" gives you the needed wiggle
room, IMO.

Nevin
Colin Putney
2012-01-28 11:20:53 UTC
Post by Nevin Pratt
Post by Colin Putney
As an example consider a program that adds a class to the system.
This class is a subclass of Object, a class that's already part of
the system. The imperative model represents this by sending
#subclass: to the object named 'Object'. If Object doesn't exist, you
get a MNU error. An well-designed system will have facilities to
catch these errors and react appropriately. Nevertheless, if the
error occurs in the middle of the execution, the side effects of the
execution up to that point will persist. This leads to an
inconsistent system.
Not with GemStone Smalltalk. You simply "roll back" to your point of
origin. So, this proves it can be elegantly dealt with in an
imperitive model.
Using GemStone's transactional capabilities to keep file-ins from
hosing your image is neither elegant nor simple. There's a reason
GemStone is so expensive. Implementing transactions correctly and
efficiently is *hard*. It's complete overkill for something as
fundamental as loading a program, something which should be quite
simple.
Post by Nevin Pratt
Don't blame weaknesses of some specific implementation onto the
imperitive approach in general.
The weakness of the imperative approach is that it's hard to implement
well. Ok, so GemStone doesn't suck. How many Smalltalks have there been
over the years? How many have had full-blown transactions? More to the
point, what is the easiest and best way to bring, say, decent revision
control capabilities to Squeak? Stephen Pair's Chango project provides
some hope for a transaction engine, but I'm pretty sure a declarative
representation of Smalltalk code will be an easier way to achieve it.
Post by Nevin Pratt
The same warning likewise applies for the declarative approach as
well, which typically creates such artifacts as "DLL-hell", "JRE
version proliferations", "glibc version mismatch", etc. All of those
real-world artifacts (and I'm sure quite a few more) have, as their
underlying root cause, the same "initial state sensitivity" issues
that the imperative approach has. It is the specific way that those
language designers chose to meet the "initial state sensitivity"
challenge that produced those artifacts.
These problems have nothing to do with declarative representation vs
imperative representation. They are the result of an attempt to link
incompatible chunks of executable object code into a single process
space. If you compile some code against one version of a library and
try to link it against another version, it's no surprise if it doesn't
work.

The point is that we're (or a least I am) concerned with representing
and manipulating source code, not byte code or object code.
Post by Nevin Pratt
In fact, if anything, I'd have to give the nod to the imperative
approach, because I'm personally not aware of *any* approach that as
elegantly deals with the issue in the declarative model (without
artifacts like the above) as GemStone does with the imperative model.
If anybody *does* have an elegant solution, I think it would be David
Simmons with S#, but I haven't looked deeply enough to judge.
Compilation of declarative code fails if its dependencies are not met
by the system at the time it is compiled. It's that simple. Sure, if
you tried to run the resulting bytecode in a different environment than
it was compiled in, it might not work. But that's always true, no
matter how you represent the source code.
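That up-front failure mode can be made concrete with a small sketch (Python; the `ClassDefinition` record and function names are invented for illustration): a declarative unit names its dependencies, so a loader can reject the whole unit before any side effect occurs.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ClassDefinition:
    name: str
    superclass: str

def unmet_dependencies(definitions, system_classes):
    """Superclasses that neither the system nor the unit itself defines.
    Compilation of the unit proceeds only if this comes back empty."""
    defined = set(system_classes) | {d.name for d in definitions}
    return sorted({d.superclass for d in definitions} - defined)
```

A unit defining A (subclass of Object) and B (subclass of A) checks out against a system that has Object, and is refused in its entirety against a system that doesn't; there is no half-applied state to clean up.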

As for elegant approaches that use a declarative model, how about
Ginsu, Envy, and VSE?
Post by Nevin Pratt
But in either camp, the "initial state sensitivity" issue has to be
dealt with. Thus, I don't see any real difference, and so I remain
unconvinced.
Suit yourself...
Post by Nevin Pratt
BTW, I quite like your description that "an imperative representation
is essentially a program that constructs a program". Of course, I
think you can see that it too lacks precision, as your definition is
recursive, and a fileout isn't recursive. But I still like your
phrase, and your inclusion of the word "essentially" gives you the
needed wiggle room, IMO.
Yes, this is subtle stuff. I said "essentially" because I'm not sure
that a series of DoIts constitutes a program. In fact, a definition of
the word "program" would help this discussion immensely. I don't think
my description is recursive, though, as I'm not defining the word. The
program constructs another program; it doesn't construct itself.

Cheers,

Colin
Nevin Pratt
2012-01-28 11:20:54 UTC
Permalink
Post by Colin Putney
Post by Nevin Pratt
The same warning likewise applies for the declarative approach as
well, which typically creates such artifacts as "DLL-hell", "JRE
version proliferations", "glibc version mismatch", etc. All of those
real-world artifacts (and I'm sure quite a few more) have, as their
underlying root cause, the same "initial state sensitivity" issues
that the imperative approach has. It is the specific way that those
language designers chose to meet the "initial state sensitivity"
challenge that produced those artifacts.
These problems have nothing to do with declarative representation vs
imperative representation.
Not directly, no. They instead are artifacts of the way the various
language designers chose to meet the challenge of "initial state
sensitivity". They also are common artifacts of the declarative design
choices commonly made these days. They don't necessarily need to be,
though. I gave a reference to David Simmons' work as a potential counter
example.
Post by Colin Putney
They are the result of an attempt to link incompatible chunks of
executable object code into a single process space. If you compile
some code against one version of a library and try to link it against
another version, it's no surprise if it doesn't work.
They are the result of creating "consumer" code that is dependent on a
particular initial state for the "supplier" code (i.e., a particular
version of the supplier code library).

Nevin
John W. Sarkela
2012-01-28 11:20:54 UTC
Permalink
Post by Nevin Pratt
Post by Colin Putney
Post by Nevin Pratt
The same warning likewise applies for the declarative approach as
well, which typically creates such artifacts as "DLL-hell", "JRE
version proliferations", "glibc version mismatch", etc. All of
those real-world artifacts (and I'm sure quite a few more) have, as
their underlying root cause, the same "initial state sensitivity"
issues that the imperative approach has. It is the specific way
that those language designers chose to meet the "initial state
sensitivity" challenge that produced those artifacts.
These problems have nothing to do with declarative representation vs
imperative representation.
Not directly, no. They instead are artifacts of the way the various
language designers chose to meet the challenge of "initial state
sensitivity". They also are common artifacts of the declarative
design choices commonly made these days. They don't necessarily need
to be, though. I gave a reference to David Simmons' work as a
potential counter example.
David's work depends absolutely upon a declarative model. His whole
system is possible precisely
because the semantics of a program may be described independently of
the runtime implementation,
or the concrete language syntax.

This allows his system to host multiple VMs (i.e. .NET and AOS) on the
bottom surface of his runtime architecture and multiple programming
languages on the upper surface of the architecture.

In this conversation it is really, really important that we have clear
and consistent distinctions between
1. a module that is a semantic declaration with a concrete syntax
(TeamV, ginsu, etc)
2. a module that is a set of injunctions to be performed in a runtime
context (change set, file in, etc)
3. a module that is a runtime component (.dll, .lib, java bean, squeak
environment, squeak project, etc)

Without seeing these three things as being absolutely distinct and
separable, it is impossible to understand the point of a declarative
approach, and the participants never properly respond to each other's
messages.

Of course, each of these approaches to modularity may serve a common
user intent, thus we
tend to confuse them as being "the same thing", merely because we can
use them to
achieve "the same effect". The difference is that each of these
approaches has a different
path to the fulfillment of a given user intent. Our job is to be
discriminating and choose the
approach that best fits a particular intent.

[...]

Cheers,
:-}> John
RT Happe
2012-01-28 11:20:54 UTC
Permalink
Post by Allen Wirfs-Brock
Post by Jeff Read
terms of philosophy if not implementation. In Scheme, for instance, the
keyword define means "Add this symbol to your currently running global
environment, and bind it to the following value..." LISPers tend to see
For the record: This is false or at least terminologically misleading.
Scheme variables are language level entities, Scheme symbols are object
level entities (that a meta-program may use to represent variables.
The dynamic evaluation and loading procedures tend to blur that
distinction somewhat, though.) Scheme definitions bind variables, not
symbols. Furthermore, the language report --in my reading-- doesn't
specify top-level definitions fully enough to support a lispy imperative
interpretation well. (But there are lisp-like Scheme dialects/systems.)
Post by Allen Wirfs-Brock
Post by Jeff Read
environment, and bind it to the following value..." LISPers tend to see
this as an advantage.
Not when they want to create a maintainable, reproducible, deployable
application. In that situation they create archival source code definitions
that can reproducibly create the runable application.
In my half-educated opinion, therein lies precisely the most important
difference separating Scheme from Lisp: Scheme has moved away from
the lispy view of programs as modifications to the evaluator toward
the more static and traditional view of programs as text equipped with
meaning by the language spec.

rthappe
Richard A. O'Keefe
2012-01-28 11:20:55 UTC
Permalink
"Jeff Read" <***@snet.net> wrote:
Smalltalk appears to borrow a lot from LISP in this regard, at
least in terms of philosophy if not implementation. In Scheme, for
instance, the keyword define means "Add this symbol to your currently
running global environment, and bind it to the following value..."

Actually, it doesn't. Any Scheme block (Algol sense) may contain
"defines", and they add to the LOCAL scope, not the GLOBAL scope.
The Scheme standards (both ISO Scheme and RnRS Scheme) explain how
    (<stuff> (define x1 e1) ... (define xn en) <more stuff>)
is equivalent to
    (<stuff>
      (letrec ((x1 e1)
               ...
               (xn en))
        <more stuff>))

For example,
(define (sort Xs)
(define (merge Xs Ys) ...)
...
)
does NOT introduce a global definition for merge.

The Scheme standards define entire Scheme programs in which ALL functions
are explicitly present when compilation starts (of course Scheme has closures
constructed at run time, but the lambda-expressions they are constructed from
are explicitly present at compile time).

The difference, if I surmise correctly, is that every Scheme
knows what define means; the base semantics for the keyword are
standardized -- whereas each Smalltalk has a different protocol for
creation of classes and adding methods to them.

ANSI Smalltalk defines a "declarative" representation for classes and
global variables. It is intended as an interchange format between
different Smalltalk implementations.
Stephen Pair
2012-01-28 11:20:56 UTC
Permalink
I've read this discussion with great interest...on Friday I was just
updating the declarative representation of my Swiki.net re-write...what
a pain in the rump!

I agree with Allen's points regarding the need for declarative program
representation. My feeling is that you essentially need to construct
programs imperatively and deliver them declaratively.

It's interesting to note that in the computing world outside of
Smalltalk, there is a sort of symbiotic relationship that exists between
imperative and declarative systems. The most commonly used operating
systems are imperative, and the most commonly used languages are
declarative. If you don't believe me, try reconstructing the disk image
bit for bit of a Windows system after a year of normal usage without
mirroring the drive. ;)

I can only imagine the reaction that normal users would have to a
declarative operating system. Yikes!

So, what do we really want Squeak's future to look like? Is it an
operating system, or is it a programming language? I guess I've always
felt that Squeak should evolve into something that incorporates the best
aspects of operating systems, languages and databases into a unified
approach.

Thus, I think the only logical conclusion is that Squeak needs to
accommodate both forms of usage. And, it should allow both forms of
usage without sacrificing the benefits of either.

- Stephen
Alan Kay
2012-01-28 11:20:56 UTC
Permalink
I hesitate to comment here -- because I don't have the energy to
handle lots of replies -- but ...

We first have to ask ourselves about "meaning" and "interpretation",
and whether they are really separate concepts. For example, what
meaning does "a declarative spec" actually have without some
interpreter (perhaps *us*) being brought to bear? In practical terms,
how does anyone know whether a declarative spec is consistent and
means what it is purported to mean? IOW, *any* kinds of
representations can be at odds with their purpose, and this is why
they have to be debugged. This is just as true with "proofs" in
mathematics. For example, Euler was an incredible guesser of theorems
but an indifferent prover (not by his choice), so generations of PhD
theses in math have been generated by grad students taking an Euler
theorem, finding what was wrong with Euler's proof, and then finding
a better proof!

I think the real issues have to do with temporal scope of changes and
"fences" for metalevels. All languages' meanings can be changed by
changing their interpretation systems, the question is when can this
be done, and how easy is it to do? The whole point of classes in
early Smalltalk was to have a more flexible type system precisely to
extend the range of meanings that could be counted on by the
programmer. This implies that there should be fences around such
metastructures and it should not be easy to willfully change these
meanings willy-nilly at runtime. Some languages make this easy for
programmers by not allowing such changes to be intermingled with
ordinary programs. Smalltalk is reflective and so it needs more
perspective and wisdom on the programmer's part to deal with the
increased power of expression. I also think that the system should
have many more fences that warn about metaeffects.

However, I don't see anything new here. It was pretty clear in the
60s that a Church-Rosser language was very safe with regard to
meaning. If we think of variables as being simple functions, then it
is manifest that putting assignment into a language is tantamount to
allowing a kind of function definition and redefinition willy-nilly at
runtime. IOW, assignment is "meta". All of a sudden there are real
problems with understanding meanings and effects. Some of the early
work I did with OOP was to try to confine and tame assignment so it
could be used more safely. Ed Ashcroft's work on LUCID (growing from
Strachey's and Landin's explication of "what LISP means") provided a
very nice way to do extremely safe and understandable
assignment-style programming. You have something very pretty when you
combine these two approaches.

If you want to write a debugger, etc., in the very language you are
programming in *and* want things to be safe, then you have to deal
with fences for metalevels, etc. But, if you are also a real
designer, then you will want to think of these areas as having
different privileges and different constraints.

The bottom line, to me at least, is that you want to be able to look
at a program and have some sense of its meaning -- via what the
program can tell you directly and indirectly. This is a kind of
"algebraic" problem. However, one should not be misled into thinking
a paper spec that uses lots of Greek letters is necessarily any more
consistent or has any more meaning than a one page interpreter that
can be run as a program.

Cheers,

Alan


--
Stephen Pair
2012-01-28 11:20:57 UTC
Permalink
Alan,

Thank you for taking the time to write that reply. Your perspective on
this topic is really valuable (in advancing my own thinking, and I'm
sure to many others on this list). The rest of this email is simply an
attempt to regurgitate what you've just stated in the hopes that someone
might point out any flaws in my understanding.
Post by Alan Kay
I think the real issues have to do with temporal scope of changes and
"fences" for metalevels.
Definitely. There are and have been a lot of solutions that attempt to
bring some measure of control over the temporal evolution of a system.
A "declarative spec" is but one of those solutions. Transactional
systems, Croquet's T-Time, and even the distinction between "code" and
"data" are others.
Post by Alan Kay
We first have to ask ourselves about "meaning" and "interpretation",
and whether they are really separate concepts. For example, what
meaning does "a declarative spec" actually have without some
interpreter (perhaps *us*) being brought to bear?
In my recent efforts at updating my "declarative spec" of Swiki.net, my
"interpreter" is the base squeak image (version 3.4) and the Squeak VM.
My declarative spec is the script that brings all of the code packages
into that base image.

But, it's also interesting to note that the scope of objects that are
covered by this declarative spec do not include instances of my domain
model. And that is true even of the "declarative" languages where you
typically have a database and program that exist independently of one
another. This is painfully clear whenever you have to deliver an
upgrade to a program that has to migrate data in an existing database to
a new schema. A system that could address this problem in the context
of both "code" and "data" would certainly be useful.

In my Chango VM and DB, I find that I have to separate objects that
"live on disk" and objects that "live in memory" because it's easiest to
manage the evolution of objects that live in memory (which are typically
metamodel objects) using a declarative program specification model while
managing the evolution of objects on disk (typically domain objects) is
easiest using a transactional model. I briefly thought of managing even
metamodel objects using the transactional system, but quickly vanquished
the idea out of fear of the gut-wrenching changes to Squeak that would
be required and the realization that having a transaction open while I
write code would result in ridiculously long running transactions and
impose a severe burden on the transaction system. ;)

In a database system, the declarative spec might be the full transaction
logs that could be used to recreate the data. If the logs were
sufficiently general (i.e. they had no direct schema knowledge)...then,
you could conceivably upgrade a database by simply creating a new db
with the new schema and then applying the logs.

A declarative spec is nothing more (or less) than a sequence of
operations that transition a system from one state to another (not
unlike an atomic update in the database world). And, when given some
beginning state, applying those operations will always yield the same
final state.
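That determinism claim is easy to model with a toy sketch (Python; the put-style log format is invented for illustration): replaying the same operation log against the same beginning state always produces the same final state, just as with a database transaction log.

```python
def replay(initial_state, log):
    """Apply a log of (key, value) 'put' operations to a copy of the
    beginning state; the same log and start always yield the same end."""
    state = dict(initial_state)
    for key, value in log:
        state[key] = value
    return state
```

Note that the final state depends on both the log and the beginning state, which is exactly the "initial state sensitivity" issue raised earlier in the thread.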

Regarding "fences" for metalevels, I take this to mean that you want
some measure of control over temporal evolution at the metalevel
boundaries where these fences reside. That control might take a form
that is similar to a declarative model, a transactional model, or
otherwise.

Given the limitations of the systems of today, I do see a need for
supporting a declarative model of program specification...however,
looking forward, I can also envision systems that provide much more
general and powerful means for managing the temporal evolution of a
system.

- Stephen
Colin Putney
2012-01-28 11:21:05 UTC
Permalink
Post by Stephen Pair
In a database system, the declarative spec might be the full
transaction
logs that could be used to recreate the data. If the logs were
sufficiently general (i.e. they had no direct schema knowledge)...then,
you could conceivably upgrade a database by simply creating a new db
with the new schema and then applying the logs.
A declarative spec is nothing more (or less) than a sequence of
operations that transition a system from one state to another (not
unlike an atomic update in the database world). And, when given some
beginning state, applying those operations will always yield the same
final state.
This is tricky stuff, so I may not be grasping the subtleties of your
explanation. As I understand it, however, "a sequence of operations
that transition a system from one state to another" would be an
imperative specification.

A declarative specification would be a description of the final state
of (part of) the system, with the actual sequence of operations
required to bring that state about left as an exercise for the reader.
That reader might be a programmer such as myself, a simple program
loader like SqueakMap, or a version control system like Envy.
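A minimal sketch of that reading (Python; the add/remove operation vocabulary is invented for illustration): the spec is only the desired final set of classes, and the loader derives whatever operations get the system there.

```python
def plan(current, desired):
    """Derive the operations that transform `current` into `desired`;
    the spec itself says nothing about ordering or mechanism."""
    ops = [("remove", name) for name in sorted(set(current) - set(desired))]
    ops += [("add", name) for name in sorted(set(desired) - set(current))]
    return ops

def apply_plan(current, ops):
    """One possible 'reader': execute the derived operations."""
    state = set(current)
    for action, name in ops:
        state.add(name) if action == "add" else state.discard(name)
    return state
```

The key contrast with the imperative form is that `plan` is computed from two state descriptions; nothing in the spec itself is a sequence of instructions.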

For me, the hard part of this is that we're really talking about
"meaning" which is a very difficult thing to pin down. In my own work I
try to be very clear about what the abstractions that I'm dealing with
actually mean. Usually this involves making sure that my OO model
corresponds to the real world in some meaningful way. A step up the scale
of reflection though, it also requires some concept of what's going on
inside the machine, not at a bit level, but at an abstract level that's
both very clear and difficult to describe. Maybe I need more math.

As Alan mentioned, a program, whether specified imperatively or
declaratively, is only meaningful in relation to some interpreter. The
tricky thing about Smalltalk is that that interpreter, the VM, is as
small and simple as possible. The semantics of interpretation are
largely moved into the program itself; the program is the whole image.

Now, in practice we don't deal with the whole image during development.
It's too big to comprehend, and most of it doesn't change much anyway.
So the Smalltalk programs that we develop and distribute are only the
set of classes and methods that implement the additional functionality
needed for the program we're interested in.

From a tool-making perspective, however, these programs are difficult
to deal with in a safe way, because their meaning relies on an
interpreter with variable semantics. They have the ability to modify
the way they are interpreted. If the interpreter is a human reader,
that's not a problem, because we're (generally) smart enough to notice
this and take it into account as we manipulate the program.

If we want to have tools to help us manipulate the program, though, we
need to make some distinction between regular operations that follow
the "conventional" interpretation, and meta-level operations which may
alter the semantics of interpretation. Better fences, as Alan called
them, would be very helpful for several projects in the Squeak
community - Monticello, Islands/Squeak-E, SqueakMap etc.

It seems to me that a declarative representation for Smalltalk programs
is a step in the right direction, though it's not a complete solution.
The program object model that Allen's paper presents would also be
quite useful for manipulating programs. I haven't been able to digest
the ideas behind Squeak-E yet, but it seems like the E folks are a long
way down the road to solving these problems. You can't have security
without safety, so it will be interesting to see what sort of fences
get put up as a result.

Colin
Lex Spoon
2012-01-28 11:21:11 UTC
Permalink
Post by Colin Putney
This is tricky stuff, so I may not be grasping the subtleties of your
explanation. As I understand it, however, "a sequence of operations
that transition a system from one state to another" would be an
imperative specification.
Many interpreters are going to do the following: "parse a series of
changes from this file, and then apply the changes in sequence". So
it's not necessarily so different.

Everyone is missing Alan's point. A "declarative" syntax still requires
that you have an interpreter to make sense of it. Thus the difference
is pretty subtle, at least in theory. It's the difference between
interpreter+language, versus interpreter1+interpreter2+language.


Let me get more pragmatic for a moment. In *practice*, imperative versus
declarative is about the same. First, you can perfectly well read a
Squeak fileout, in practice, as if it were declarative. There are only
5-10 forms that you must support. Second, you can go to the
trouble, if you like, of making up a Smalltalk environment where code
changes don't directly modify the system, but instead log themselves
into the currently active changeset.
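Lex's first point can be sketched roughly as follows (Python; the two regular expressions only approximate real fileout chunk syntax and are illustrative, not the complete list of forms): a tool can pattern-match the handful of forms and read a fileout declaratively, without executing any of it.

```python
import re

# Two of the handful of fileout forms, approximated as patterns.
CLASS_DEF = re.compile(r"(\w+) subclass: #(\w+)")
METHOD_CAT = re.compile(r"!(\w+) methodsFor: '([^']*)'!")

def scan_fileout(text):
    """Read a fileout declaratively: collect what it defines
    without running any of its code."""
    decls = [("class", m.group(2), m.group(1))
             for m in CLASS_DEF.finditer(text)]
    decls += [("methods", m.group(1), m.group(2))
              for m in METHOD_CAT.finditer(text)]
    return decls
```

The same text handed to a chunk-format evaluator would mutate the image; handed to `scan_fileout` it is just data, which is the practical sense in which the two readings coexist.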



-Lex
Stephen Pair
2012-01-28 11:21:07 UTC
Permalink
Post by Stephen Pair
Post by Stephen Pair
In a database system, the declarative spec might be the full
transaction
logs that could be used to recreate the data. If the logs were
sufficiently general (i.e. they had no direct schema
knowledge)...then,
Post by Stephen Pair
you could conceivably upgrade a database by simply creating a new db
with the new schema and then applying the logs.
A declarative spec is nothing more (or less) than a sequence of
operations that transition a system from one state to another (not
unlike an atomic update in the database world). And, when
given some
Post by Stephen Pair
beginning state, applying those operations will always
yield the same
Post by Stephen Pair
final state.
This is tricky stuff, so I may not be grasping the subtleties of your
explanation. As I understand it, however, "a sequence of operations
that transition a system from one state to another" would be an
imperative specification.
Yes, I guess the point here is that the real difference between
imperative and declarative program construction is that the imperative
approach is not capturing the operations that evolve the system from a
beginning to a final state. It would be sort of like applying a
declarative spec, and then throwing away that spec. And, even if the
imperative approach did capture the operations, there would be lots of
intermediate steps captured which wouldn't be appropriate in a
"declarative spec."
Post by Stephen Pair
A declarative specification would be a description of the final state
of (part of) the system, with the actual sequence of operations
required to bring that state about left as an exercise for
the reader.
But I think that's the problem. Many people think of a declarative spec
in that way, but that is in fact an illusion. A declarative spec cannot
possibly describe final state in the absence of the "interpreter" of
that declarative spec. I think that was Alan's point.

You could think of a declarative spec as a single operation rather than
a sequence. It makes no difference...the declarative spec is an
instruction given to the interpreter to make it transform the state of
the system.

The benefit of having a declarative spec is that it makes the state
transition repeatable given some interpreter of that spec. There are
other ways to accomplish similar results and we should be mindful that
there may be a common approach that accommodates this and other use
cases.
Post by Stephen Pair
The tricky thing about Smalltalk is that that interpreter, the VM, is
as
Post by Stephen Pair
small and simple as possible. The semantics of interpretation are
largely moved into the program its self; the program is the
whole image.
Yes, "interpreter" does not mean the VM...it means the VM plus the
current state of the image. For example, a VM alone cannot comprehend a
Smalltalk class definition.

- Stephen
Avi Bryant
2012-01-28 11:21:12 UTC
Permalink
Post by Lex Spoon
Let me get more pragmatic for a moment. In *practice*, imperative versus
declarative is about the same. First, you can perfectly well read a
Squeak fileout, in practice, as if it were declarative. There are only
5-10 forms that you must support.
Which is exactly what DVS does, for example.
Stephane Ducasse
2012-01-28 11:21:13 UTC
Permalink
Hi lex and alan

I think that it would be good to have a list of the changes that should
be made in Squeak to have a declarative syntax.

The obvious one is to have a
- GlobalVariable declaration, PoolDictionary,
instead of Smalltalk at: #MyVar put: 0
We should have GlobalVariable named: #MyVar initialized: '0'

- InstanceVariable declaration would be good too.

- ClassVariable

- I have doubts about classDefinition because, even though this is
currently done via a message, we can interpret it as a declaration.
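The GlobalVariable example above can be sketched like this (Python as a stand-in; the record and `install` function are invented for illustration): the declaration is inert data that tools can inspect, and only a separate interpreter step gives it effect, unlike `Smalltalk at: #MyVar put: 0`, which executes immediately.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class GlobalVariable:
    name: str
    initializer: str  # source text, not yet evaluated

def install(declarations, image):
    """The 'interpreter' step: evaluate each initializer and bind the
    result in the image (a plain dict standing in for Smalltalk)."""
    for decl in declarations:
        image[decl.name] = eval(decl.initializer)
    return image
```

Before `install` runs, a version-control or analysis tool can diff, reorder, or validate the declarations without touching the image at all.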

Alan (I know that you are busy), but what would your items be for
such a list?


I think that having such a list is important for people such as Avi who
could use a much more declarative syntax for DVS.

Stef
Post by Lex Spoon
Post by Colin Putney
This is tricky stuff, so I may not be grasping the subtleties of your
explanation. As I understand it, however, "a sequence of operations
that transition a system from one state to another" would be an
imperative specification.
Many interpreters are going to do the following: "parse a series of
changes from this file, and then apply the changes in sequence". So
it's not necessarily so different.
Everyone is missing Alan's point. A "declarative" syntax still
requires
that you have an interpreter to make sense of it. Thus the difference
is pretty subtle, at least in theory. It's the difference between
interpreter+language, versus interpreter1+interpreter2+language.
Let me get more pragmatic for a moment. In *practice*, imperative
versus
declarative is about the same. First, you can perfectly well read a
Squeak fileout, in practice, as if it were declarative. There are only
5-10 forms that you must support. Second, you can go to the
trouble, if you like, of making up a Smalltalk environment where code
changes don't directly modify the system, but instead log themselves
into the currently active changeset.
-Lex
Prof. Dr. Stéphane DUCASSE (***@iam.unibe.ch)
http://www.iam.unibe.ch/~ducasse/
"if you knew today was your last day on earth, what would you do
different? ... especially if, by doing something different, today
might not be your last day on earth" Calvin&Hobbes
Alan Kay
2012-01-28 11:21:13 UTC
Permalink
Hi Stef --

Just an historical note ...

My first attempt at dynamic OOP was called Flex, and I tried to do
two main things: extension in all areas and the abstraction of
assignment in all areas -- both of these worked out pretty well.

Later, after seeing Carl Hewitt's Planner and some other work at MIT
(such as Pat Winston's concept learning system) and motivated by
Papert's LOGO work, I got very interested in doing away with
variables in the storage sense as much as possible. The never
implemented Smalltalk-71 embodied these ideas. Smalltalk-72 allowed
some of these ideas to get implemented and tested, but in a
rudimentary way because its control metastructures were not present
in a strong enough way.

The bottom line in these early stages was to try to have messages be
requests for larger goals to be carried out, and *not* to simply
simulate data-structures. Now, we realized that some programs would
look more like data-structure and procedural programming than we
wanted because we didn't yet know how to do a real object oriented
version of them. We thought of a "good" program as being one that had
no visible getter and setter methods to the outside world and that
the interior would change behavior appropriately as the result of
satisfying goals. (This later merged with some ideas about
event-driven objects that would pretty much *not* send and receive
messages, but would simply respond to general changes around them
(could be thought of as weak message receipt, etc.) -- this never got
implemented in the 70s).

I still have many of these prejudices. So I wince every time I set a
g/setter that is not absolutely necessary -- every time I see a
collection used nakedly, etc. I still think that Ed Ashcroft's way of
looking at process in his language Lucid (itself very influenced by
the POVs of Strachey and Landin) is a really nifty way to reconcile
the two worlds -- and I still like "information algebras" (like APL)
that allow projections to be created. Though they all have
interpreters, all of these systems lean very much to trying to allow
something more like mathematical mappings to be used as a programming
paradigm.

As you know, Croquet is very interesting as a testbed for these
ideas. The underlying "temporal algebra" by Dave Reed is a way to
allow very powerful methods of description (if a surface form can be
found that is usable and understandable by programmers) -- so quite a
lot of the scripting design that Andreas Raab has been doing with the
modeling ideas of Dave A Smith is to find a dynamic model that is
both programmable and incorporates the higher level integrities of
the larger sim worlds.

Cheers,

Alan

-------
Post by Stephane Ducasse
Hi lex and alan
I think that it would be good to have a list of the changes that should
be made in Squeak to have a declarative syntax.
The obvious ones are:
- GlobalVariable declaration, PoolDictionary,
instead of Smalltalk at: #MyVar put: 0
We should have GlobalVariable named: #MyVar initialized: '0'
- InstanceVariable declaration would be good too.
- ClassVariable
- I have doubts about classDefinition, because even though it is
currently done via a message we can interpret it as a declaration.
Alan (I know that you are busy), but what items would you put on
such a list?
I think that having such a list is important for people such as Avi
who could use a much more declarative syntax for DVS.
Stef
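
A minimal Smalltalk sketch of the contrast Stef describes; the
GlobalVariable selector is the hypothetical declarative form proposed
above, not an existing API:

```smalltalk
"Imperative: an expression evaluated against the running image;
the binding comes into existence as a side effect of execution."
Smalltalk at: #MyVar put: 0.

"Declarative (hypothetical): a description of the program that a
tool can parse, version, or diff without executing anything."
GlobalVariable named: #MyVar initialized: '0'.
```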
Post by Lex Spoon
Post by Colin Putney
This is tricky stuff, so I may not be grasping the subtleties of your
explanation. As I understand it, however, "a sequence of operations
that transition a system from one state to another" would be an
imperative specification.
Many interpreters are going to do the following: "parse a series of
changes from this file, and then apply the changes in sequence". So
it's not necessarily so different.
Everyone is missing Alan's point. A "declarative" syntax still requires
that you have an interpreter to make sense of it. Thus the difference
is pretty subtle, at least in theory. It's the difference between
interpreter+language, versus interpreter1+interpreter2+language.
Let me get more pragmatic for a moment. In *practice*, imperative versus
declarative is about the same. First, you can perfectly well read a
Squeak fileout, in practice, as if it were declarative. There are only
5-10 forms that you must support. Second, you can go to the
trouble, if you like, of making up a Smalltalk environment where code
changes don't directly modify the system, but instead log themselves
into the currently active changeset.
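
Lex's "5-10 forms" can be made concrete: a Squeak fileout is a sequence
of chunks, and nearly all of them take one of a few stylized shapes
that a reader is free to record as declarations rather than evaluate.
For example (standard fileout idioms of the period; the class shown is
only illustrative):

```smalltalk
"A class-definition chunk: evaluating it modifies the image, but a
declarative reader can simply record it as 'class Point with
instance variables x and y'."
Object subclass: #Point
	instanceVariableNames: 'x y'
	classVariableNames: ''
	poolDictionaries: ''
	category: 'Graphics-Primitives'!

"A method chunk: likewise readable as 'Point has a method #x in
category accessing' without compiling anything."
!Point methodsFor: 'accessing'!
x
	^ x! !
```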
-Lex
http://www.iam.unibe.ch/~ducasse/
"if you knew today was your last day on earth, what would you do
different? ... especially if, by doing something different, today
might not be your last day on earth" Calvin&Hobbes
--
John W. Sarkela
2012-01-28 11:21:14 UTC
Permalink
Howdy,

Actually, the BlueBook introduces pools as a shared scope.
Surprisingly, nowhere on pp 46-47 is the expression
'Pool Dictionary' to be found. Perhaps the declaration
should be for a Pool and the variables with their initializers
that it contains.

It's a small thing, but the point is to exorcise implementation
details from the language of declaration.
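
Following John's suggestion, the declaration might name the Pool and
the initialized variables it contains, saying nothing about
dictionaries. A hypothetical surface form (the Pool named:variables:
selector does not exist; TextConstants is a real Squeak pool, used here
only as an example):

```smalltalk
"Hypothetical declarative form: a Pool is a shared scope containing
variables with initializers; how it is stored is left entirely to
the implementation."
Pool named: #TextConstants variables: {
	#Tab -> 9.
	#CR -> 13 }
```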

John
Post by Stephane Ducasse
Hi lex and alan
I think that it would be good to have a list of the changes that should
be made in Squeak to have a declarative syntax.
The obvious ones are:
- GlobalVariable declaration, PoolDictionary,
instead of Smalltalk at: #MyVar put: 0
We should have GlobalVariable named: #MyVar initialized: '0'
- InstanceVariable declaration would be good too.
- ClassVariable
- I have doubts about classDefinition, because even though it is
currently done via a message we can interpret it as a declaration.
Alan (I know that you are busy), but what items would you put on
such a list?
I think that having such a list is important for people such as Avi
who could use a much more declarative syntax for DVS.
Stef
Post by Lex Spoon
Post by Colin Putney
This is tricky stuff, so I may not be grasping the subtleties of your
explanation. As I understand it, however, "a sequence of operations
that transition a system from one state to another" would be an
imperative specification.
Many interpreters are going to do the following: "parse a series of
changes from this file, and then apply the changes in sequence". So
it's not necessarily so different.
Everyone is missing Alan's point. A "declarative" syntax still
requires
that you have an interpreter to make sense of it. Thus the difference
is pretty subtle, at least in theory. It's the difference between
interpreter+language, versus interpreter1+interpreter2+language.
Let me get more pragmatic for a moment. In *practice*, imperative
versus
declarative is about the same. First, you can perfectly well read a
Squeak fileout, in practice, as if it were declarative. There are
only
5-10 forms that you must support. Second, you can go to the
trouble, if you like, of making up a Smalltalk environment where code
changes don't directly modify the system, but instead log themselves
into the currently active changeset.
-Lex
http://www.iam.unibe.ch/~ducasse/
"if you knew today was your last day on earth, what would you do
different? ... especially if, by doing something different, today
might not be your last day on earth" Calvin&Hobbes
Allen Wirfs-Brock
2012-01-28 11:21:15 UTC
Permalink
An HTML attachment was scrubbed...
URL: http://lists.squeakfoundation.org/pipermail/squeak-dev/attachments/20030303/61f1ba94/attachment.htm