Lisp Machine Manual (1984) (hanshuebner.github.io)
142 points by sillysaurus3 on Sept 6, 2017 | hide | past | favorite | 152 comments


The cover of the Lisp Machine Manual had the title printed in all caps diagonally wrapped around the spine, so on the front you could only read "LISP CHINE NUAL". So the title was phonetically pronounced: "Lisp Sheen Nual".

My friend Nick made a run of custom silkscreened orange LISP CHINE NUAL t-shirts (most places won't print around the side like that).

https://www.facebook.com/photo.php?fbid=74206161754&l=54ec4e...

I was wearing mine in Amsterdam at Dappermarkt on Queen's Day (when everyone's supposed to wear orange, so I didn't stand out), and a random hacker (who turned out to be a university grad student) came up to me and said he recognized my t-shirt!

http://www.textfiles.com/hacking/hakdic.txt

CHINE NUAL (sheen'yu-:l) noun.

The reference manual for the Lisp Machine, a computer designed at MIT especially for running the LISP language. It is called this because the title, LISP MACHINE MANUAL, appears in big block letters -- wrapped around the cover in such a way that you have to open the cover out flat to see the whole thing. If you look at just the front cover, you see only part of the title, and it reads "LISP CHINE NUAL"


Link to an image of the manual, for the lazy: https://c1.staticflickr.com/1/101/264672507_307376d26c_z.jpg


Thanks for this! Looks elegant.


The classic joke in here is that if you look up "fascism" in the index, it goes to the section about logging in. User authentication wasn't part of the dream, so having been forced to build it, they embedded a dig on it right in the manual.

https://hanshuebner.github.io/lmman/files.xml#fascism

I think there are a few other jokes in the index but I don't recall what they are.


Not surprising, since many of the early Lisp people grew up on ITS, which had a built-in 'crash' command, available to all users, that crashed the system.

https://en.wikipedia.org/wiki/Incompatible_Timesharing_Syste...


It goes on to say

> The Lisp Machine uses your username (or the part that follows the last period) as a first guess for your password (this happens to take no extra time).

Looks like they won in the long run and got a perhaps semi-surreptitious way to at least opt in to an open environment.


"Once a password is recorded for one host, the system uses that password as the guess if you connect to a file server on another host."


That was probably Stallman's doing. He wrote most of the LISP machine code.


> He wrote most of the LISP machine code.

That's nonsense. The Lisp Machine code was written by several people, a number of whom made significant contributions.


Indeed. Before they left to found Symbolics, David Moon, Howard Cannon, Dan Weinreb, and others (I'm sure I'm forgetting at least one name) wrote the vast bulk of the system. I think Richard Greenblatt wrote most of the microcode (he also did most of the hardware design and construction).

After Symbolics formed, RMS did an amazing job at reimplementing many of the new features Moon and company were adding, so as to keep the MIT version of the system at rough parity with the Symbolics version. So his name belongs in the story, but not first.


The MIT version was commercialized, too. Lisp Machine Inc. employed Stallman and Greenblatt, and was backed primarily by Texas Instruments.


Both Lisp Machines Inc. and Symbolics were selling the MIT version before redesigning it into their own brand new products, the LMI-LAMBDA and the Symbolics 3600.


The other side of the argument.

https://danluu.com/symbolics-lisp-machines/

Also interesting bits about the fall of Symbolics, the AI winter, and why Lisp adoption waned as a result.


If that's the case, then why is Emacs such a poor facsimile of the LispM environments? ;-)


A not-so-close approximation to using a Lisp Machine these days is to use the GuixSD distro. It features a package manager, init system, and initial RAM disk all written in Scheme. Combine that with Emacs, the guix-emacs extension, and, if you are so inclined, a Lispy WM like StumpWM and you have a system where a large number of critical components are implemented in Lisp.


There seem to be plenty of people who think Lisp is awesome.

If so, then why hasn't it become "mainstream"?


"Lisp is a minimal fixed point amongst programming languages. It's not an invention, but a discovery. That's why it won't just go away." - Brian Beckman [1]

That said, LISP machines are quite interesting on their own, because they follow a different architecture compared to (mainly) register-based x86 and similar.

[1] https://twitter.com/lorentzframe/status/821001730938654724


Asbestos suit on.

There are many positive things you can say about Lisp.

However, many day-to-day developers, and their managers, believe that Lisp's syntax creates nigh-unreadable programs. For example, there's no built-in infix syntax, as opposed to practically all other programming languages, and many programmers today expect infix as minimum table stakes for a language. Lisp syntax is extremely simple, but that doesn't mean the resulting code is easily read. Very smart people will disagree with that opinion -- especially a number of people on Hacker News! However, just look up phrases like "lots of irritating silly parentheses" and you'll see that this opinion is extremely widespread. Many people here will disagree with the sentiment, and that's fine, but it's a fact that many people believe programs in Lisp syntax are hard to read.

A lot of people will reply that Lisp's syntax is simple, that real programs have been written in Lisp, that it has a kind of simplicity/elegance, and that Lisp syntax enables powerful facilities like Lisp macro processing. All of that is absolutely true. Lisp is quite powerful. But trying to convince people that Lisp is "advanced", even though it doesn't support basic capabilities like infix out of the box, is a losing battle. Forth is one of the few other languages that has an odd syntax and lacks infix support, and it too is rarely used today.

Feel free to reply, but thoughtful responses preferred :-).


Your perception confirms my previous post that mediocre developers have a hard time thinking in different ways, even coding in non-infix notation. Linux suffers from a similar problem. Many people are so used to Windows desktops that they have a hard time thinking in different ways and appreciating the Linux approach, which provides not one common desktop but many, which can be freely chosen.

It reminds me of the saying: "If all you have is a hammer, every problem looks like a nail."

Also, the typical claim that LISP just means "lots of silly parentheses" suggests that those critics have never tried to develop in the Lisp way. Ironically, as time goes on and complexity rises, more and more developers appreciate new features that have been available in Lisp for more than 50 years -- lambda expressions, maps, and macros, for instance. Lisp was so far ahead of its time that people today still have trouble realizing what is so fascinating about Lisp, and why Lisp machines are so awesome. They should take a look at the graphical shell [1] of McCLIM [2] (a revival of the famous CLIM GUI) to see that powerful desktop computing can be very different.

[1] https://common-lisp.net/project/mcclim/static/media/screensh...

[2] https://common-lisp.net/project/mcclim/excite.html


I think the best cure for the "lots of silly parentheses" argument is to ask someone to count the number of (, ), [, ], {, }, < and > characters in their code. Java, for example, is easily on par with Lisp here.


My approach is always to start with an example in a conventional language and then start moving the parentheses to the left.


The odd thing about the fully parenthesized syntax is that it's offputting at first, but once you've been doing it for a little while, in an editor that does parenthesis matching and auto-indentation, it gets to be just as easy and pleasant to work with as more common languages, or even more so. (Packages like Emacs 'paredit' make it even better. (I haven't tried 'parinfer'.))

I understand this is hard to believe if you haven't tried it, and it certainly does take some getting used to. Lisp is, in short, an acquired taste. Think of it like very spicy food: many people avoid it, but those of us who like it, like it a lot.


Completely agree. Lisp code I wrote 10 years ago is still very readable to me. It makes perfect sense once you're "over the hump" but you gotta make that investment.

Contrast this with Forth, which I was at one time equally expert at, and where I could rarely read my own code 10 days later.


The biggest reasons for this are local variables vs. stack ops and parenthesized expressions vs. having to know the arities of the words called. You can add local variables to Forth, but afaik nobody did during its heyday; and "phrasing" your code with whitespace can help some on the latter issue, but it's admittedly not as explicit as parens.

(From another one-time Forther and more recent Lisper.)


Mach 2 Forth for the Mac in the late 80s had local variables and when I wrote a Forth compiler for the TI 34010 I included local variables. They helped a lot, but they still don't make Forth as readable as Lisp.


> Packages like Emacs 'paredit' make it even better.

Not just better -- it's a leap from editing code as a plain sequence of lines of characters to editing code structurally. And Lisp syntax enables complete and uniform structural editing across all contexts, from simple arithmetic expressions to complex top-level definitions.


I don't even disagree on the infix math point, really. I do find it awkward, though, that infix math makes up such a small fraction of what you actually do in most programs. It seems odd that such a minor point is held in such high regard.

Sadly, I assert that this being the main irritation with LISP is more a matter of fashion than of actual applicability. Sad, because I take enough pride in our communities to want them to be above fashion. Most people who "hate" LISP have never written a single line of it, and never will.


On lisp hate: back in the mid-/late-90s on Slashdot it was en vogue to make jokes about lisp. I'm fairly certain it originated in the spirit of the Unix haters handbook, in that it was in-group therapy for the initiated. But then I think it just became a thing that won karma points, and thus was repeated as a truism.

I remember working the university computer helldesk in the late nineties and repeating such a joke to the smartest white-bearded Unix wizard I've ever met. He rattled off five or six more jokes about lisp, and then proceeded to explain the AI systems he used to work on, and why it was his favorite language.

Having never really coded in lisp myself, I was immediately ashamed of the fact that I had been thoughtlessly making light of a thing that others valued. And years later, I'm atoning by trying to use Racket (sometimes Clojure) whenever I can.


I should be clear that I think I fell victim to that. Specifically, I recall I used to at least laugh at those jokes. Can't remember making them, but I also don't remember really understanding them.

So, I could just be projecting. Likely, I am. I would love to be wrong, at least.


There was no mid-90's Slashdot. It started in late 1997. I should know, I'm user #1484. :)


Mid-late-: the average of 1995 and 1999? :-p

And #5648, but I was a lurker for some time before a friend convinced me to register.

EDIT: but yes, lisp hate meme was probably firmly late 90s


If we randomly sample no-longer-used languages, I suspect we will see infix support more often than not. Infix won't save a language.

> fact many people believe programs in Lisp syntax are hard to read.

It's not a fact; it's just a couple of trolls, plus people just repeating what they have heard. People who reply to "lisp" with "lots of irritating silly parentheses" haven't necessarily even tried to read any examples of Lisp; they are just repeating a joke they heard.

Just like the people who say "fix it again, Tony" when they hear "Fiat": they have never driven a Fiat and have no idea about the statistics of Fiat reliability relative to other makes of car.


It might just boil down to the fact that programs of some complexity in almost any language are hard to read, even more so when you're a beginner in said language and might not even know its coding/formatting conventions...

PS: The part about Fiat is interesting, because we have a similar saying in Germany: "Fehler in allen Teilen", which translates to "faults in all parts".

I really have no idea about the statistics of Fiat reliability, though, so don't take that as a statement from me^^


Some things stay "true" even long after they are not. Lisp is also unbearably slow... as compared to C on 1979 microcomputers.


Lisp is pretty slow in comparison to C, and that's acceptable.

As a quick self-test, stand up a C HTTP server using GNU libmicrohttpd and hit it with ab or wrk.

Next, stand up an HTTP server in SBCL (e.g. using Wookie) and run the same ab or wrk load test.

If you are expecting Common Lisp to come close to C, or even the JVM, you will be quite disappointed.

I know I was.

I love Lisp, but it's unwise to compare it with C on performance.


What.

I fully expect SBCL to come within a factor of 2 of C (if you instrument your Lisp code with proper type declarations), and certainly to match JVM performance. And even unoptimized CL code I expect to run faster than Ruby, Python, or JS.

Modern CL implementations are good, and the language itself gives you plenty of ways to help the program compile down to efficient native code.
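To make the factor-of-2 claim concrete, here is a minimal sketch (the function name and the specific declarations are illustrative, not from the thread) of the kind of type-annotated code SBCL compiles down to tight native code:

```lisp
;; Illustrative sketch: with type and optimization declarations, SBCL
;; can compile this loop to unboxed double-float arithmetic, close to
;; what a C compiler would emit for the equivalent function.
(defun sum-of-squares (v)
  (declare (type (simple-array double-float (*)) v)
           (optimize (speed 3) (safety 0)))
  (let ((acc 0d0))
    (declare (type double-float acc))
    (dotimes (i (length v) acc)
      (incf acc (* (aref v i) (aref v i))))))
```

Calling (disassemble #'sum-of-squares) afterwards shows the generated machine code, which is a quick way to check whether the declarations actually eliminated boxing and generic dispatch.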


> Lisp is pretty slow in comparison to C, that's acceptable.

It is not. (Common) Lisp can be made as fast as C if you really need it, by using tricks...

By the way, for a while the fastest static-file HTTP server in the world was one written in Lisp: "teepeedee2" (2009).


Lacking infix syntax out of the box is a problem, though it's not hard to add, as a sibling mentioned; for those curious, I like linking to this gem dating back to '93, since it's part of Quicklisp now: https://github.com/rigetticomputing/cmu-infix The rest of the syntax being somewhat alien is also a problem, but it goes away with practice. (https://www.thejach.com/imgs/lisp_parens.png)

But I don't think syntax alone is enough to explain Lisp's apparent languishing (though only apparent since it seems to be on the rise again), because we have Clojure. Clojure has the same syntax issues as Lisp (ok a bit fewer because of no reader macros and there's an argument that bracket variety is more pleasant to read) yet it's very popular all things considered. And because Clojure has a BDFL, if lacking infix OOTB was really an issue (I don't think it's come up in any of the annual surveys), RH could just include one of the libraries for it out of the box. SBCL could do that for Lisp, but it would be non-standard.

To me there are several stronger drivers for language adoption than anything the language itself offers. Here are three: having a BDFL (Python, Perl, PHP, Clojure... not Lisp), having big corporate backing (Swift, Objective C, C#, Go, arguably Java but driven by more than 1 megacorp... not Lisp), being the lingua franca of a very widespread platform (Javascript (web), PHP (shared web hosts), Bash, C/C++ (unix, windows), Java (android)... not Lisp, maybe if the Lisp Machine had won but it didn't).


The Lisp Machine manual documents their infix reader macro:

https://hanshuebner.github.io/lmman/rdprt.xml#reader

  #⋄x:y+car(a1[i,j])⋄
which gets read into something like

  (setq x (+ y (car (aref a1 i j))))


No infix operators is an advantage. I don't like memorizing precedence tables; most of the time people use parens just to be on the safe side anyway. It's even considered good practice in most cases.

I can see how lots of parentheses can look irritating (particularly at the end of expressions). But once you are familiar with it (it's not that hard), you can just read between the parens and you don't see them anymore (most Lisp code is well indented). There is also a thing called sweet-exp that is supported by some popular Lisps.
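A tiny illustration of the precedence point (standard Common Lisp, nothing project-specific): prefix notation makes explicit the grouping that infix leaves to a precedence table:

```lisp
;; Infix "2 + 3 * 4" requires knowing that * binds tighter than +;
;; in prefix form the tree is spelled out, so there is nothing to memorize.
(+ 2 (* 3 4))  ; => 14
(* (+ 2 3) 4)  ; => 20

;; Operators are also variadic, so "a + b + c + d" needs no repetition:
(+ 1 2 3 4)    ; => 10
```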


Undoubtedly many people think like this, but I've never been able to understand it.

I'm forced to use other languages and rarely have the time to do something in Lisp for fun, but when I do, it is the most readable and least eye-straining language that I know.

Python is far more eye-straining in its monotony.

I understand other arguments: I'd like batteries included outside Lispworks/Allegro, larger repositories of examples that reflect best practices, perhaps even an improved Common Lisp.

But readability? I just don't get it.


The fact that a huge number of people couldn't get past the syntax is why I liked efforts such as Dylan:

https://en.m.wikipedia.org/wiki/Dylan_(programming_language)

A bit better. The approach closest to mine, mixing 3GL syntax with LISP, is the Julia language. It's femtolisp underneath, with a syntax more like a traditional language. You get the power without the ugliness. I haven't seen Dylan in a long time, but Julia is taking off. Look into it if you haven't.

https://julialang.org


I understand where you're coming from and agree to a large extent, but I think your hypothetical person's characterization of infix as a "basic capability" is a bit off. It's antithetical to Lisp's design to make use of infix notation. As far as Forth goes, I would say the same thing.


Agreed. Straightforward arithmetic is the capability, in Lisps represented in such a way as to fit with the philosophy of the language.

It's not like the majority* of languages see infix (or prefix, postfix, postcircumfix, etc.) notation as a basic enough concept to allow defined functions to make use of it. Arithmetic always seems to be the special case. No operator overloading, but arithmetic is pre-overloaded. Parentheses of function calls are all you need to keep track of subexpressions, but arithmetic will have an arbitrary and large list of precedences.

*correct me if I'm wrong. I know a fair few languages but mostly deep dives into particular niches. That could bias me.


Here is the thing about infix: the lack of infix probably does hurt, but not the lack of arithmetic infix. There is various syntactic sugar in programming languages that is front-and-centre, expressing program and data structure, used everywhere.

I believe that most programmers won't have that much of a problem with (+ (* x x) (* y y)), but even those who don't will balk at cruft like (aref a 3) and (slot-value obj 'foo).

In the TXR Lisp language, I provide a set of well-targeted syntactic sugars for things like this; sugars which do not disturb the surrounding Lisp expressions and don't introduce ambiguities (precedence and associativity).


That's funny because all the popular languages are mostly prefix too. (foo bar baz) is prefix, but so is foo(bar baz).


The thing is most people who say things like "Lots of irritating silly parentheses" or think it is hard to read are generally people who have never used it, or at best had a couple of weeks of Lisp/Scheme in a programming language survey course. Lisp doesn't look like just another ALGOL-like language and so it looks "weird" if you don't know it. It's like somebody who only knows closely related Romance languages like Spanish and Italian thinking English is weird because it isn't just a variation of something they know.


I will agree that Lisp syntax is hard to read and write, not because it's hard to figure out in what order the parentheses should be set (or because it uses prefix notation -- that's the easiest thing to get used to), but because it's nigh impossible to keep track of the parentheses mentally and visually. Writing Lisp involves too much parenthesis-checking to be anywhere near fun. It's not hard to understand, but when it comes to writing complex functions it just spills out of proportion -- or maybe I'm just bad at breaking functions down into small enough parts. Either way, the general control flow of some large function in C is much simpler to grasp. Love the idea, hate actually using it.


I disagree. Keeping track of parentheses is learnable very quickly, and you do it pretty much subconsciously.

A trick that did it for me: I started solving SICP exercises using pen and paper. It took only a few lines of Lisp written with the correct indentation style in mind, and my brain internalized the concept, no longer consciously counting the parens.


If you have to cope with parens and indentation by hand, then sure. A proper editor deals with those for you, and provides facilities for semantic traversal and editing besides.


You just haven't tried.


>However, many day-to-day developers (...) believe that Lisp's syntax creates nigh-unreadable programs (...) Very smart people will disagree with that opinion (...) it's a fact many people believe (...) trying to convince people that Lisp is "advanced", even though it doesn't support basic capabilities like infix out of the box, is a losing battle.

But note the use of the word "believe". As you point out, it's all beliefs. There are also programmers who know Javascript but believe that programming in Python must be hard (true story). Beliefs.

Then there are people, like myself, who once they use Lisp enough feel that almost any other language would be torture compared to using Lisp. And I love the parentheses.

The problem, and this is a problem in all aspects of programming, is that people like to "believe" things about something they don't know enough to have a reliable opinion of.

So the parent question was something like "if Lisp is so good, why isn't it popular?". Are good things always popular? Are popular things always good?

If really really good music must be popular as well, then you would turn on the radio (or MTV, or VH1 on the TV) and all music would be Bach, Beethoven, Debussy, Stravinsky... you get the idea. Or if you are a rock or jazz fan, include in that list the very best of the genre (i.e. John Coltrane, etc.)

If popular music must be good as well then perhaps Justin Bieber and Shakira are masters of music and we should revere them.

In my view, the reason Lisp isn't popular today starts with Unix becoming popular and mainstream. Unix ran fast on cheap hardware (minicomputers, PDP-x). Little by little, C became accepted as a standard for systems programming.

So by the early 1980s C becomes mainstream; operating system APIs are then mostly written in C, even more so in the 90s (i.e. the Windows API), and thus the next popular languages are modeled on C: C++, and then Java, which keeps much of the same syntax and is quite influenced by it. Afterwards cue all the other languages made to overcome some of Java's drawbacks, like C# and others.

Meanwhile, Lisp was simply left behind in terms of popularity. But during the 90s and 00s it was still used for important, complex stuff: air travel reservations, aerodynamics simulation, credit card transaction checking, auto-piloting the Deep Space 1 spacecraft for days...

It wasn't forgotten. Lisp never stopped evolving -- it has kept doing so since 1958 -- and the enthusiasts are still creating good tooling and libraries, even really nice things like Portacle, which makes installing a complete Lisp programming environment a one-click operation.

So it all depends on the beliefs of people versus actual usage. Some of the beliefs and myths held by people, besides the parentheses thing, are for example (my examples are biased toward Common Lisp):

    1. Lisp is slow
-> Lisp is generally at the same speed level as Java on the current Oracle JVM, and can, with tricks, be made to perform at the same speed as C or Fortran. In any case it is still 10x to 100x faster than most other popular languages out there, like Ruby or Python.

    2. There's no good IDE available
-> You will find that many Lispers love using SLIME on Emacs for hacking on Lisp. It provides high productivity.

    3. Since there are few users, there is little 
    support (forum topics covering your issues) or manuals or guides.
-> There are a ton of books on Lisp and most of them are very good, some of them outstandingly good. The reference documentation (CLHS) is really well written. Since this is a language that has endured for decades, the number of articles and posts written about doing X in Lisp is enormous.

    4. There aren't enough libraries
-> Not many libraries compared with, say, Java libs; however, so far I've found everything I need for my purposes. And while I wouldn't bother to dive into a Java or C++ library to modify it or understand how it works, Lisp libraries (actually "systems" made of "packages") are usually easy to understand and modify at will. It also helps that Lisp usually requires few lines of code to achieve something, compared to C++ or Java.

    5. Lisp doesn't have static typing.
It does have it, but it is optional and depends on the implementation. The most popular implementation, SBCL, has very good support for it. On the other hand, many Lispers use it for performance reasons rather than safety reasons. Lisp is a very strongly typed language.

    6. Lisp doesn't support <insert your favorite programming paradigm>
In theory, Lisp can support any programming paradigm, and in practice it supports most programming paradigms out there.

    7. Lisp produces unreadable, garbled code.
Some of the cleanest code I've seen in my whole 20+ years of programming was in Lisp libraries.

    8. Lisp can't work for complex or large codebases
    since it's so hard to read
As pointed out, very complex, millions-of-LOC Lisp systems have been created successfully, some of them used today (like aero simulation software).

-----

Now, something that is true is that Lisp is hard to learn: If you use Lisp, a very powerful language, then you ought to use the full power of Lisp. And for this, you would need to be familiar and handy with all of the most popular programming paradigms (procedural, imperative, functional, OOP, declarative), AND you would also need to understand and know how to take advantage of metaprogramming (macro programming). For many programmers, this represents a challenge. Even if they already know OOP, they don't really know OOP the Lisp way, since the object system (CLOS) is different and more powerful than the typical OOP facilities provided by Java or C++.
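As a hedged sketch of the "OOP the Lisp way" point (the classes below are the classic textbook example, not from the thread): CLOS generic functions dispatch on the classes of all required arguments, not just a single receiver as in Java or C++:

```lisp
;; Multiple dispatch: the method chosen depends on the classes of
;; *both* arguments, something single-receiver OOP cannot express directly.
(defclass ship () ())
(defclass asteroid () ())

(defgeneric collide (a b)
  (:documentation "Resolve a collision between two objects."))

(defmethod collide ((a ship) (b asteroid))
  "ship crashes into asteroid")

(defmethod collide ((a asteroid) (b asteroid))
  "asteroids bounce off each other")

;; (collide (make-instance 'ship) (make-instance 'asteroid))
;; => "ship crashes into asteroid"
```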

But those "pre-requisites" are in fact the "pre-requisites" for any programmer to become a really good programmer, regardless of the language....

Once these "pre-requisites" are met, learning Lisp itself is rather easy, since the syntax is easy and most features are orthogonal to the language.


> For example, lisp has no built-in infix syntax

I wrote one: https://github.com/shawwn/lumen/blob/6aa017cfd90fba381b6ea1d...

https://github.com/shawwn/lumen/blob/6aa017cfd90fba381b6ea1d...

It's an experimental feature, but it seems to work fine with no ambiguities. It also has "x = 42" for implicit local variables, rather than "(let x 42 ...)"

What do you think of it? Is it much more readable? I'm still debating whether to keep it.


As a day-to-day developer, but one who wants to learn as much as I can outside of my domain, you are spot on.

I generally work in an OO language (C# specifically) and have long wanted to get into functional programming. I am finally making headway, but after false starts in Lisp (Racket, Clojure) I am ultimately using F#. Partially, admittedly, due to the .NET aspect (it's even shipped as part of default dotnet core), but also because of the lack of parentheses.

One of the reasons to go functional was to get away from the excessive syntax and formality of a C-based language, and replacing every curly bracket with 20 parentheses (minor hyperbole) doesn't achieve that for me.


"replacing every curly bracket with 20 parenthesis (minor hyperbole)"

It isn't even minor hyperbole, it's just plain false.

It has been my experience that, generally speaking, the combination of curly brackets { }, square brackets [ ], and parens ( ) in languages with C-like syntax is at worst roughly equivalent to the number of parens in most lisp dialects.

Some people freak out because they see a function in a lisp that ends with:

  some-final-expression)))))
But they think that it is totally fine when a C-like language ends with:

          some_statement_or_expression
        }
      }
    }
  }
}

If the stacks of parens at the end bother you so much, there is nothing in most Lisps preventing you from putting them on separate, indented lines too:

  some-final-expression
        )
      )
    )
  )
)

I wouldn't advise it, but you can do it.

I think more than anything, people are just unwilling to accept that lisps look different than the X number of curly-bracket-style languages they are used to. It looks weird to them at first, because of past experience, so to some degree they are looking for a reason to write them off from the beginning.

When I was in college, I had this initial reaction, and didn't even consider anything Lisp-like for many years. It was only my frustration with the limitations of C-like languages that led me to reconsider Lisps years later, and man am I ever glad I did.

[EDIT for additional comment]:

and btw, I'm a "day-to-day developer" also.


> I wouldn't advise it, but you can do it.

That's basically how I write functions in Lisp, with indentation levels and every closing parenthesis on a separate line.

The thing is, when you're done writing a (leaf) function in Lisp, you're done: it works. Since you won't need to revisit it (unless the functional design changes), you clean it up for compactness by grouping all the closing parentheses together.

I could be the only one, but I've always found it normal and practical to develop with indentation, and regroup when I'm done.


You might be right in aggregate counts, but the difference is that in a C-style language you have different types of fluff: curly braces, parentheses, semicolons, etc. So the count of any one type is not overwhelming (unless you let your cyclomatic complexity get away from you). In Lisp it's almost all parentheses, so yes, it is more.

Besides, my point is more: why have any of this at all? Only the insane write code without tab or space indentation, and if you are doing that anyway, then an OCaml-esque language like F#, where indentation serves as your grouping syntax, is the saner option.


"Only the insane write code without tab or space indentation"

^ um, and who exactly suggested that? Certainly not me.

And if you think Lisp-style languages are generally written without tab or space indentation, then you clearly haven't read much Lisp code.

I thought you said you tried Clojure? Clojure's data literals use curly braces (maps, sets), square brackets (vectors) and parens (lists). I think that's a good thing -- they function like parens but are just distinct enough to make the underlying type more obvious.

Even so, the parens don't get in the way of Scheme/Lisp programmers. You really do get used to them (and usually even like them) after a while (oh, and many Schemes use square brackets for function arguments, etc.).

"why have any of this at all?"

^ because the parens serve a very important purpose. They group code and data in a very compact, clear, and easily parseable way -- a way that allows for extremely powerful manipulation of both. The fact that you don't understand this is evidence that you haven't put in enough effort to appreciate the power of Lisp-like languages. I'm not suggesting that you have to like Lisp or its syntax, but to suggest that the way it is designed is "insane" is just ridiculous.
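One concrete way to see the "manipulation of both" claim (the macro below is an illustrative toy, not from the thread): because source code is the same parenthesized list structure as data, a macro can take an expression apart with ordinary list operations:

```lisp
;; A toy macro that rewrites (f a b) into (f b a) at compile time.
;; The form arrives as a plain list, so destructuring and rebuilding it
;; needs no parser or AST library -- the reader already built the tree.
(defmacro swap-args (form)
  (destructuring-bind (op a b) form
    (list op b a)))

;; (swap-args (- 10 3)) expands to (- 3 10), which evaluates to -7.
```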

Using indentation as grouping is by no means "saner" than using parens. Whitespace is too easy to get wrong or to misinterpret. Heaven help you if you try to implement macros that use whitespace for grouping.


Criticising lisp was a mistake, and I should have ripcorded my way out of this discussion thread already, but you make some good points, so I'll respond at my own peril:

I didn't suggest that lisp is written without indentation. Yes, I have coded with lisp, and I know it uses other syntax than just parens.

Parens serve an important purpose in lisps, just like the semicolon does in C languages. As a C# programmer I can, and have, written long statements on one line; in particular, LINQ statements lend themselves to this if you use the method syntax, which I use exclusively over its query-syntax form. To make it readable you break it over lines but terminate with a semicolon to finish: it is still run as a single statement. This is powerful.

In F# you can still do this, but it requires that the entire statement start at the appropriate indentation in order for the compiler to infer the closure.

Indentation can cause misinterpretations, but I'd argue, at least for me, that in the happy case (i.e. 99% of a given codebase) using indentation instead of parens, curly braces or similar is a less cluttered way of writing code. And, since it's something that is generally done anyway, it makes curly braces/parens redundant. Note that in F# you can also use parens to group code if you need to enforce a closure, which is a good design decision: make them optional. C# would be much better that way, and I would argue (again at my peril) that if parens were optional in a lisp-style language as well, that would be great.


> Yes I have coded with lisp, and I know they use other syntax than just parens... I would argue (again at my peril) that if it was optional in a lisp-style language as well, that would be great.

The reason you are getting downvoted is not just that you are criticizing Lisp; it is also that you are posting a bunch of BS without understanding the difference between a context-free and a context-sensitive language. Please learn some basic parsing theory and then you will understand why your suggestion of a whitespace-sensitive syntax for a homoiconic programming language is nonsense.


Well, there was SRFI 49, which wasn't very popular but was supported by Matthias Felleisen.


"parens ... make them optional ... and I would argue (again at my peril) that if it was optional in a lisp-style language as well, that would be great."

^ making parens optional in a lisp would IMHO make everything more complicated at best.

Someone else in this thread mentioned SRFI 49. Personally I don't find its syntax...

  define
   fac x
   if
    = x 0
    1
    * x
      fac
       - x 1
... to be any easier to read or understand than the usual scheme syntax:

  (define (fac x)
    (if (= x 0)
        1
        (* x (fac (- x 1)))))
In fact I much prefer the latter. I see absolutely no advantage to making the parens optional, and I see lots of disadvantages to removing them. Especially when it comes to implementing something like lisp macros, I think removing parens would greatly complicate that and open the door for tons of ambiguity, as well as making it far too easy to misinterpret what the code was doing.

I agree with sedachv. Your "mistake" is not that you are criticizing lisp. It's just that those of us here that know lisp don't see your criticism as being valid, and we do not see your suggestions as being something that would make lisp better, but on the contrary would make it much worse.


SRFI 49 went too far the other way. In F# the expression would be:

  let rec fac x =
    if x = 0 then
      1
    else
      x * fac (x - 1)
Or with pattern matching:

  let rec fac x =
    match x with
    | 0 -> 1
    | _ -> x * fac (x - 1)


I suspect that SRFI 49 went as far as it did in an attempt to reduce ambiguity (but I haven't spent much time looking at it, so take that with a grain of salt).

I understand why people prefer the combination of infix expressions and a possible reduction in the number of bracket-like symbols in their programming language (indeed, as I stated before, I originally found lisp syntax to be weird, and originally I didn't like it). When things are different than what you are used to, it is common to not like them.

However, having spent a considerable amount of time programming in lisp-like languages, I find that the benefits of prefix notation and the use of parens as boundaries in the language far outweigh the costs of having them there. Any discomfort with the aesthetics of the code seems to fade with time for most people (if they are willing to spend enough time using the language to get used to them).


> Only the insane write code without tab or space indentation, and if you are doing that anyway then an OCaml-esque language like F# where that serves as your grouping syntax is the saner option.

Instead of making disparaging remarks about people's mental health, learn some basic parsing theory:

https://en.wikipedia.org/wiki/Context-sensitive_grammar


Because Brendan Eich's management lied to him about allowing him to do Scheme in the browser: https://brendaneich.com/2008/04/popularity/

When I think of what web programming could have been like, it makes me want to weep.


I also dream of what it would be like if lisp in the browser had become mainstream, but it's possible that if he had implemented it like he wanted to, it wouldn't have received such widespread adoption as javascript did.


> it's possible that if he had implemented it like he wanted to, it wouldn't have received such wide spread adoption like javascript did.

I don't care for Scheme, but I'd be very, very surprised if it'd have failed in such a situation. If a terrible train wreck of a language like JavaScript can succeed, pretty much anything better than INTERCAL can. People wanted to extend browsers; had Scheme been the only way to do that, Scheme is what they'd have used.


True. My optimistic/pessimistic daydream is that, as the only game in town for some period of time, it would have introduced enough developers to the language to really give it traction. We'd now be seeing people asking "Why can't I run the same language on the server as in the browser?" It would be the second renaissance!


If I had more free time on my hands, or were king of the world and could make it so, a fun exercise would be to retrofit Firefox / Chromium to support Scheme as replacements for HTML/CSS/Javascript and get a merry band of geeks to build a shadow web.


It all comes back to the "worse is better" thing. Frequently enough the technologies in mainstream usage are not the best, technically speaking. Mainstream programming languages are still playing catch-up on the features that the Lisp family of languages has had for decades. UNIX succeeded not because it was the best OS of the time (it wasn't; MIT made much better systems) but because it had a killer feature: it ran on commodity hardware, the PDP-11.


> but it had a killer feature: it ran on commodity hardware, the PDP-11

That is not the essence of worse is better. UNIX won over the others because it was simple and regular, and was built from components that could be reassembled into more complex things. Pipes were a big part of that.


UNIX had a weak design. It won not because it was good, but because it ran on more computers. There are many other OSes of that time period that had superior designs.

The Lisp Machine had way better composition primitives than UNIX's pipes passing plain text around.


Part of the problem was that they were hard to implement efficiently back then.


What does mainstream have to do with quality? There's a lot of crap that's mainstream in most industries. I don't see why ours would be any different (especially as fad-driven as programming tends to be).


A more powerful language is not enough of an advantage to defeat entrenched alternatives with strong network effects. There isn't a reason why language X is not entrenched, but there are historical reasons why C, C++ and Java are entrenched. Lisp just isn't enough more powerful than them to beat their home-court advantage. I guess I'd say, Lisp might be 10x better, but the network effects are orders of magnitude more meaningful than that intrinsic benefit.

It's not a very meaningful answer to the question, but it's a bit like asking why Todd isn't rich when we all know Todd is a smart guy. It's because getting rich takes more than raw intelligence.

Suppose you have a wildly evolving field where nobody can really predict what technologies are going to be successful in the market beyond the next five years or so. Suppose you have no internet. Suppose there are thousands of programmers with the ability to invent a language. Wouldn't you expect that some of them will, by happenstance, wind up on successful products and become successful while others stay relatively unknown? That's the birth of the mainstream programming languages and their antecedents in the 70s and early 80s.


> Lisp just isn't enough more powerful than them to beat their home-court advantage.

Note, however, that Lisp had a 10 year head start on C. Lisp had the home-court advantage, and C ate its lunch anyway.


Lisp had a different purpose. It wasn't developed as a systems programming language and wasn't used much as such. Though it was used for some very specialized machines from the mid 70s to early 90s; see the Lisp Machine Manual. But if you read the manual, you'll see that it was initially developed for a very narrow group of people: high-end personal workstation users in research & development. This required very deep pockets. When this manual was published (1984), the cost was in the range of $70k upwards. Most of the machines at that time were bought with government money (aka DARPA).

C spread, because it was a low-level systems programming language and various operating systems were written in C.

The Lisp Machine software OTOH was written in a high-level language with special requirements for the hardware: tagged memory, a stack-based instruction set CPU, a large memory space, garbage collection support in hardware, ...

Thus the software was not stripped down to a minimum and not very portable -- which Unix and C were.


The hardware that ran the software described in this manual didn't have tagged memory or garbage collection support in hardware and the real instruction set was similar to any RISC CPU.

Somebody could have produced a workstation on conventional hardware that just ran Lisp, but they didn't. Maybe Tektronix would have been a good candidate, as their 4400 series wasn't sold to run UNIX; they ran either Smalltalk or Franz Lisp on top of a minimal OS.


> The hardware that ran the software described in this manual didn't have tagged memory or garbage collection support in hardware and the real instruction set was similar to any RISC CPU.

Then you should check the architecture of those machines some time: MIT CONS, MIT CADR, Symbolics LM-2 (a repackaged CADR), Symbolics 3600, LMI Lambda, ...

http://www.bitsavers.org/pdf/mit/cons/TheLispMachine_Nov74.p...

Page 5: 1 GC bit, 1 User bit, 2 cdr code bits, 5 bits data type, 23 bit pointer.

Looks to me like a tagged CPU architecture...

The MIT CADR Lisp Machine was a stack architecture with 24-bit data and 8-bit tags: six bits for data type encoding and 2 bits for compact lists. The CPU does type checks on operations, ...

It was nothing like a RISC machine; RISC designs for Lisp (Symbolics, Xerox, SPUR, SPARC, ...) were researched in the mid/late 80s, a full decade after the architecture of the MIT CONS Lisp Machine.


I am well aware of the architecture of the MIT CADR, LMI Lambda and TI Explorer, Symbolics lispms less so but they are not the subject of this thread. I have written CADR microcode recently.

None of the features you list are constrained by the architecture of the hardware; they are just conventions of the software VM running on it. Would you suggest that the X86 is a tagged CPU architecture just because SBCL or a JVM uses tags?

>Page 5: 1 GC bit, 1 User bit, 2 cdr code bits, 5 bits data type, 23 bit pointer.

This is not true for the software that matches this version of the manual. System 99 used 25 bit pointers, there wasn't a GC or user bit. The change from the earlier word format was possible because this was not fixed in hardware.

The CADR microinstruction set is load/store with regular opcode fields, it is very much like an early RISC.


If the X86 provided SBCL with such instructions and data, it would be a tagged architecture, but it doesn't. The SBCL compiler outputs conventional X86 instructions.

The Lisp Machine compiler OTOH generates instructions for a mostly stack machine, which runs on the CPU in microcode.


The Lisp compiler described by this manual can output microcode directly.

Please don't assume that all Lisp Machines were the same as Symbolics ones.


Please don't assume that the Lisp compiler on some Symbolics could not output micro code. It could, IIRC.

But it was not what a Lisp developer normally would do; he/she would use the compiler in such a way that it outputs the usual machine code, not microcode.


Computer Architecture is not defined by what a Lisp developer normally would do.

What hardware features of the CADR do you feel provide support for GC and tagged words ?

I have built the CADR microcode from the same source to use both 24 and 25 bit pointers, it is just software.


Whether microcode is hardware or software is blurred. Remember, when microcode was introduced in the 1960's, it was used for implementing the same thing in software that other versions of the same computer family did in hardware. With microcode, a vendor could offer different machines at different price/performance points. A sequential circuit can implement an algorithm; microcode can implement an algorithm.


The CADR is an example of a computer that implements a virtual machine in software, what makes it special ?


Computer architecture on the user level is defined by the data format and instruction set the CPU offers. How it is implemented is another level. I don't know how some Intel i7 is implemented, but it probably has writable microcode and some very different architecture inside.

That Intel hides the microcode and the CADR didn't is just another detail.


You still haven't identified what hardware features you think can be emulated in software on a CADR but couldn't be emulated in software on a 68020.


Lisp had a 10-year head start on expensive academic machines that saw little industry use. C had a head start on high-end mainframes for actual work. Then the main contenders on the nascent PC were assembly, C and Pascal. P-code based compilers arrived quickly and were faster to develop with, but the resulting code was inefficient, and early PCs were sufficiently resource-constrained that inefficient code was very noticeable. So C obtained the territory that mattered: high-end business machines and the low-end machines that would spark the computing revolution, and just had to grow the two together.

Lisp had a head start, but wasn't on the track that mattered.


> C had a head start on high-end mainframes for actual work.

No. C started on low-end minicomputers, not high-end mainframes.

And why didn't Lisp take over the low-end minicomputers? Because Lisp was too resource-hungry, and those low-end minicomputers didn't have the muscle to run Lisp well. They ran C just fine, though.


So how did Lisp have the home court advantage then? :)


Because it existed, and C didn't. Because it had a decade of code already written, and C didn't. Because it ran on many machines, and C initially ran on one.


It ran on several machines, but not the ones that mattered. The existing code was irrelevant and not portable.

I'm not denigrating Lisp. I'm saying the network effects favored C because the PC happened to take off.


> It ran on several machines, but not the ones that mattered.

And why wasn't it ported to the ones that mattered? I keep hearing about how easy it is to build a (simple) Lisp compiler. Why wasn't it ported, or a fresh one built?

> The existing code was irrelevant and not portable.

Why was the existing code irrelevant? In ten years, nobody wrote anything in Lisp that mattered to anyone?

In fact, there were Lisp compilers for the PDP-10, which (I presume) could run on the PDP-11, where C began to take over the world. Why didn't those Lisps take over the world? You can't explain it just by C existing on the right machines at the right time. (There wouldn't have been C on the PC if it hadn't taken off - at least to some degree - on the PDP-11 a decade earlier.)


> Why wasn't it ported, or a fresh one built?... In fact, there were Lisp compilers for the PDP-10, which (I presume) could run on the PDP-11

The PDP-11 was in no way compatible with the PDP-10, and there was no way a PDP-10 Lisp implementation like Maclisp or Interlisp would fit on a PDP-11. All you have to do is Google "PDP-11 Lisp" to see that there were many different Lisp interpreters and at least one compiler implemented for the PDP-11 throughout the 1970s. Likewise Intel processors - the first implementation of Lisp on the 8080 was done by Takashi Chikayama (http://www.logos.ic.i.u-tokyo.ac.jp/~chik/) in 1976 (http://www.softwarepreservation.org/projects/LISP/utilisp/ut...)


I don't have to explain why Lisp failed to take off. I just have to explain why C did, because network effects are more significant than language advantages. This is my central thesis.

I don't think awareness of Lisp was evenly distributed among people developing on PCs in the time period when the network advantage was obtained. Even if it was, if there were already a decent C compiler, I don't think most practical engineers would immediately embark on a compiler development project. During this era, those who did know about Lisp certainly would have regarded it as being too resource-intensive for work in a resource-constrained environment like the PC.

I think it's specious to say it's easy to develop a simple Lisp compiler, so why didn't anything get ported. The code you may have wanted to port would not have been written in some minimalist dialect of Lisp, it would have been written in ZetaLisp. So there would be assumptions about resources and the underlying system that you would have to fake. And again, we're talking about porting from a resource-rich environment to a resource-scarce environment.

It was probably a reach for me to say there wasn't anything to port. But what would someone have wanted to port? A graphical application that ran on a Lisp machine would have made assumptions about the display that aren't true about a PC. An academic AI program would have been too short on resources to be much use.

What wound up selling the PCs in the early days was business applications. Was there an assortment of useful business applications on Lisp machines that there was commercial demand for? WordStar and Visicalc invented that territory and were first released in the late 70s; does anyone remember programs for the Lisp machine that businesses were clamoring for? And really, software churn has always meant there are more new programs in use than old ones.

What I don't have an explanation for is why the C compilers produced better code than the Pascal compilers. Partly, I know p-code made it faster to build a Pascal compiler but I don't know why it inhibited development of more performant ones. Turbo Pascal's speed of compilation was a result of not having an AST, which probably inhibited performance optimizations, but that may have been too late in the story to explain anything. And I don't have an explanation for why nobody developed a Lisp-like language that had explicit allocation. My best guess would be simply that their attention was elsewhere, probably working on AI, which wouldn't bottom out until after the winners were already entrenched.

By the way, in case it isn't clear, I am really enjoying this conversation. I'm very interested in this window of time in computing and if I have a wrong idea about something, I do want to know. The only way to preserve this history is to talk about it and keep the memory alive.


> What I don't have an explanation for is why the C compilers produced better code than the Pascal compilers.

I was not aware of this, but I can think of one reason: Pascal does bounds checks on array references.

Having lived through the Pascal-to-C transition in academia, I was under the impression that the main reason for C's success was that it came with Unix, and Unix was free (at least to schools at the time), while other languages and OSes cost money.


That was part of the reason, yes.

On MS-DOS, Turbo Pascal and Turbo C had equal code quality, and for the use cases where bounds checking or integer overflow checking could be a performance issue, it was possible to disable it.

In both cases you would end up using Assembly anyway, because in those days compilers were pretty bad at generating code.


That erodes part of my argument. Why did C overtake Pascal? If it wasn't performance or availability, was it just preferences? Or portability from other existing codebases?


Short answer, UNIX and politics.

Long answer is a bit more complex.

UNIX was the only OS where C was relevant, and AT&T was forbidden to sell it, so they made the source code available to universities and companies for a nominal price.

That was already a big difference: instead of paying several thousand for a closed-source OS tied to a specific mainframe, companies and universities could get something with source code for about $70 (if I remember correctly).

Thus UNIX eventually became the foundation of 80's startups like Sun and SGI.

Home computer firmware was basically Assembly, where C (actually K&R C dialects) was yet another language among all the available ones.

When the 16-bit revolution happened, there were several 16-bit computer systems done in a mix of Pascal and Assembly. The most notable ones were the Apple Lisa, Mac OS, MicroEngine and Corvus Concept.

http://basalgangster.macgui.com/RetroMacComputing/The_Long_V...

http://basalgangster.macgui.com/RetroMacComputing/The_Long_V...

https://en.wikipedia.org/wiki/Pascal_MicroEngine

https://en.wikipedia.org/wiki/Corvus_Systems

As a notable example, Photoshop started as an Object Pascal application, which was later on ported to C++.

http://basalgangster.macgui.com/RetroMacComputing/The_Long_V...

So you had UNIX variants gaining a foothold on university campuses and in companies, with people wanting to write applications on their tiny (cheaper) computers at home. Like JavaScript and the browser nowadays, C eventually escaped UNIX.

Apple eventually gave in to market pressure and rewrote their Object Pascal based SDK into C++ and C. And even tried their first stint at UNIX with A/UX.

http://basalgangster.macgui.com/RetroMacComputing/The_Long_V...

Corvus Systems tried to pivot their computers into Concept Unix workstation.

The Amiga OS was written in a mix of Assembly and BCPL, and eventually got its UNIX port.

https://en.wikipedia.org/wiki/Amiga_Unix

C, by virtue of being UNIX's systems language, meant that its tooling was free with the OS, while any other programming language had to be bought separately, and still you would need to write an FFI to C at some level.

As a side note, C compilers eventually stopped being available for free when Sun started the trend of selling the SDK separately, which made the largely ignored gcc start getting contributions.

While this was happening, Brian Kernighan decided to push for C, with articles like

"Why Pascal is Not My Favorite Programming Language"

http://www.lysator.liu.se/c/bwk-on-pascal.html

Which ignored the fact that the majority of Pascal dialects had features that covered his complaints, especially the second ISO revision, Extended Pascal. And that outside of UNIX most C compilers were actually dialects too, which is one of the causes of UB.

There were other bashing articles flying around, like "Real Programmers Don't Use PASCAL".

http://www.ee.ryerson.ca/~elf/hack/realmen.html

In the PC world, OS/2 and Windows were being written in C, but the majority of application developers were actually already jumping to C++, with C Set++ for OS/2, and OWL, VCL and MFC on Windows.

Another nascent PC OS, BeOS, was also being written in C++.

It was the rise of FOSS UNIX software, with C as its systems language, that shifted the balance towards C. To stress it again, just like JavaScript on the browser.

If you look at the history of systems programming languages, the ones that lasted longer were always the ones that were the way to write software for a specific OS.

Regarding Pascal, the majority of vendors started to consider Turbo Pascal the pseudo-official standard, so when Borland decided to ignore small developers and focus on the enterprise, it just kind of killed the Pascal market. Especially since Turbo Pascal was only focused on the PC.

There is probably a bit more to rant about, but I guess you get the idea.


This was hugely useful to me, and I appreciate the copious links. But I am still stuck on something. In the early days, you programmed the home computers with assembler (it was the truth) or you used co-equal non-truth languages like C or Pascal. Later on, the OSes were all written in C, so the application code was being written in C (or mixed C + assembler) and you had FFI problems if you weren't using C. But what happened in between these that caused the OSes to be written in C rather than Pascal? Was Pascal just not perceived to be useful as a systems language?

Politics and Unix would explain it if, people were borrowing and/or aping Unix in their OSes, or if people just generally had the impression that C was more powerful. Is that kind of the direction you're pointing?


In the 8-bit days, home computers were programmed in Assembly, Basic and Forth.

Anything else required being loaded from tape or floppy, including running CP/M instead of the ROM firmware.

Remarkable systems of those days: Spectrum, C64, Tandy, Atari, MSX, Apple.

You can find a few programming books here:

http://atariarchives.org/

http://www.worldofspectrum.org/documentation.html

Then we moved into 16 bit, the major platforms being MS-DOS, Lisa and MacOS, Atari and Amiga.

MS-DOS was coded in Assembly, Lisa and MacOS in Object Pascal, Atari and Amiga used a mix of Assembly and BCPL.

MS-DOS and AmigaOS had quite a few UNIX influences on them.

Regarding my experience in Portugal, MS-DOS applications were being mostly programmed in Assembly, Pascal, C, C++, Basic and Clipper. Then there were some outliers using Prolog, Modula-2, Forth.

Microsoft and Borland were the two biggest vendors on the market and supported almost the same set of languages, with Borland ones being more programmer friendly.

Amigas were usually programmed in Assembly, Basic, AMOS, Modula-2 and C.

Mac OS was using Object Pascal until, by the System 7 release, they decided to switch to C++.

UNIX started to be quite relevant by the mid-90's, and many wanted to continue on their home computers the work they were doing at work or university on the expensive computer centres running some form of UNIX.

For example, I wrote K&R C code on MS-DOS for OS classes where the teacher would bring a PC tower into the class running Xenix, where each group would get a turn trying to execute their UNIX assignments.

Think of it a bit like how the Web influenced the desire to have JavaScript everywhere with Node; it was a similar experience.

UNIX was being adopted by companies, the other contender being VMS (using BLISS as systems language).

With C, companies had an ANSI/ISO standard ratified in 1989, while with Pascal there was a de facto standard, Turbo Pascal/Object Pascal, with ISO being largely ignored; but those were confined to MS-DOS/Mac OS.


Thanks! I really appreciate your sharing this with me.


One nit: "Why Pascal is Not My Favorite Programming Language" explicitly states that it is not a comparison of Pascal and C. "Real Programmers Don't Use PASCAL" was not a "bash" article; it was satire. It didn't advocate use of C, even in satire - it advocated Fortran and assembler.

I think blaming either of those articles is mistaken.


BBS and Usenet discussions back in the day had another point of view, even if that wasn't part of the respective author's intentions.


There were native Pascal compilers available for the PC at the same time as the first C compilers. The Pascal ones didn't generate p-code; people just preferred writing in C.


I thought that the prevalence of Pascal compilers was partly due to p-code as a compiler implementation technique, not that they were generating it.

As I mentioned above, this raises questions about why C came to dominate Pascal. They both had a comparable early foothold on PCs.


> I don't think awareness of Lisp was evenly distributed among people developing on PCs in the time period when the network advantage was obtained.

My assertion is that you're looking at the wrong decade for C's network effects. You should be looking at the PDP-11 decade, not the PC decade. And in the PDP-11 decade, I believe there was much wider awareness of Lisp. (I was a kid then, so that's just my impression. I don't actually know.)

> It was probably a reach for me to say there wasn't anything to port. But what would someone have wanted to port? A graphical application that ran on a Lisp machine would have made assumptions about the display that aren't true about a PC. An academic AI program would have been too short on resources to be much use.

Fair point.

> And I don't have an explanation for why nobody developed a Lisp-like language that had explicit allocation.

Lisp allocates all the time. You have to work hard if you want it not to. So if you want explicit allocation, you'd have to invoke it all the time, and it would really get in the way - far more than it does in C. (Far more things are allocated in Lisp than in C.)

But you could mean "why didn't someone make a Lisp where you could explicitly allocate a block of memory if you chose to"? You wouldn't have to always use it in that case, so my objection in the previous paragraph doesn't apply. But if you did that, first, the garbage collector would have to know how to clean up those objects (or else you'd have to explicitly free them), and second, you'd have to be able to do something with it. Lisp, to my (very limited) knowledge, doesn't have a way to treat an allocated block of memory as anything other than a cons pair. You'd have to add that to the language. You could do it, but it would be something tacked on to the side of the language, not something you could use the same as the rest of the language.


> And in the PDP-11 decade, I believe there was much wider awareness of Lisp. (I was a kid then, so that's just my impression. I don't actually know.)

I was not around but from the literature it seems Lisp trolls existed even back then - for example Ted Nelson's Computer Lib / Dream Machines contains some very negative and very mistaken statements about Lisp, and it was published in 1974.

> Lisp allocates all the time. You have to work hard if you want it not to. So if you want explicit allocation, you'd have to invoke it all the time, and it would really get in the way - far more than it does in C. (Far more things are allocated in Lisp than in C... Lisp, to my (very limited) knowledge, doesn't have a way to treat an allocated block of memory as anything other than a cons pair.

Between this comment and your ignorance of the difference between a PDP-10 and a PDP-11 I get the impression that you do not know anything about computer history or about Lisp or about what constitutes memory allocation. Please stop trolling.


> Between this comment and your ignorance of the difference between a PDP-10 and a PDP-11 I get the impression that you do not know anything about computer history or about Lisp or about what constitutes memory allocation. Please stop trolling.

Thank you for your charitable assumptions. You might be even more charitable and state what's wrong with what I said.

My understanding is, 1, that every newly-created cons pair requires an allocation, 2, that every new list entry is a new cons pair, and 3, that you can avoid creating them but you have to keep a very careful eye on your code in order to do so - the normal way of writing code will cons frequently.
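To make 1 and 2 concrete, here's a toy Python model of naive cons-based lists (the `Cons` class and `lisp_list` helper are my own illustration, not any real implementation):

```python
class Cons:
    """A cons cell: one heap allocation per pair."""
    __slots__ = ("car", "cdr")

    def __init__(self, car, cdr):
        self.car = car
        self.cdr = cdr

def lisp_list(*items):
    """Build (1 2 3) the naive way: one fresh Cons per element."""
    result = None  # nil
    for item in reversed(items):
        result = Cons(item, result)
    return result

xs = lisp_list(1, 2, 3)  # three allocations for a three-element list
```

A non-consing style would mutate existing cells in place instead of building fresh ones, which is the "careful eye" part of claim 3.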

Which, of those statements, do you consider to be wrong, and why?


> Which, of those statements, do you consider to be wrong, and why?

Every one of them. You do not understand the difference between what the program code does and what the programming language implementation does, and your underlying assumption is that there is a difference between how a Lisp runtime without garbage collection would manage memory allocation and how the C standard library manages it.

I am going to assume you need a stdlib.h refresher:

    void *malloc(size_t size);

    void free(void *ptr);
How do you think free works? How would you write a linked list implementation in C?

I just happened to have done some software archaeology on a Lisp implementation without a garbage collector a while ago (https://github.com/vsedach/Thinlisp-1.1), and the memory allocation and lifetime management is exactly the same there as in C++ - manual management and dynamic extent regions (RAII).

Now I really want to get pedantic with each of your points, because individually each is wrong in interesting ways:

> 1, that every newly-created cons pair requires an allocation

This statement is a redundant tautology ("every new allocation requires an allocation"), but even the correctly phrased assertion that "CONS always allocates" is not always true:

https://en.wikipedia.org/wiki/Hash_consing

http://3e8.org/pub/scheme/doc/lisp-pointers/v1i3/p17-white.p...

> 2, that every new list entry is a new cons pair

Again, redundant, and again, not always true, because of hash consing and CDR coding:

https://en.wikipedia.org/wiki/CDR_coding

> 3, that you can avoid creating them but you have to keep a very careful eye on your code in order to do so - the normal way of writing code will cons frequently.

If you do not need to allocate lists, do not allocate lists. There is no "normal" way of writing code that exists outside of what your performance requirements are, what the language semantics are, and what the language implementation provides in terms of those semantics. The only situation where you cannot avoid frequent allocations is when all the data structures are immutable.


Hash consing will break if conses have to be mutated, or if their identity has to be attributed with data specific to the circumstances of their creation (such as attributing expressions read from files with file and line number information).

AnimalMuppet is basically right. Normal, everyday Lisp code does quite casually allocate memory: more or less of it depending on circumstances.

Implicit dynamic memory allocation occurs in places where in the analogous circumstances in C it wouldn't occur.

The abstract semantics of Lisp (such as ANSI CL) call for dynamic allocation of all lexical variable bindings, because in the abstract semantics, these have indefinite extent.

Each time a lexical scope with variables is entered, dynamic allocation takes place to create the fresh bindings, which enables continued access to these bindings after that scope's evaluation terminates.

Such allocation is only avoided as a matter of optimization: the Lisp compiler analyzes the scope to determine that no escaping closures are created in it and then the bindings can be allocated stackwise (dynamic extent) similarly to C's automatic storage.

Nothing in ANSI CL guarantees to you that (let (x)) will not allocate a piece of memory for x which endures after the form terminates, and has to be reclaimed.
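The same indefinite-extent point can be seen in Python, whose closures also keep captured bindings alive past scope exit (a sketch, not Lisp semantics verbatim):

```python
def make_counter():
    count = 0  # a fresh binding on every entry to make_counter

    def bump():
        nonlocal count
        count += 1
        return count

    return bump  # the closure escapes, so `count` cannot live on the stack

c1 = make_counter()
c2 = make_counter()  # a second, independent binding of `count`
c1(); c1(); c1()
```

A compiler is free to stack-allocate `count` only when it can prove no closure over it escapes - exactly the escape-analysis optimization just described.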


Hold on there, cowboy. You're still assuming the worst about me, with no evidence, and that gets annoying by the second or third time.

> You do not understand the difference between what the program code does and what the programming language implementation does

False.

> I am going to assume you need a stdlib.h refresher

False, and even if I did, that wasn't much of one.

> How do you think free works?

I've seen an implementation.

> How would you write a linked list implementation in C?

I've had the pleasure (?) of doing so.

On to the meat of your reply: I was not aware of any non-garbage-collected Lisps. Still, isn't it fair to say that such Lisps are by far the exception?

I was under the impression that immutable was the normal way to write Lisp. ("Normal" here not meaning "the one right way" or "the way the language tries to make you write" - Lisp doesn't do that - but more "the way the majority of people write it".)

And even if you're going to not write it that way, if you really want to avoid allocations, don't you have to also be careful what library functions you call, since many of them cons?


> Hold on there, cowboy. You're still assuming the worst about me, with no evidence, and that gets annoying by the second or third time.

The evidence is that you insist on posting a bunch of "I was under the impression" bullshit instead of spending a few minutes using a search engine. That is what is really annoying. Why do you think that your basic thesis that programs written under the assumption that there is garbage collection will not work well when there is no garbage collection is a novel or insightful contribution to the discussion about the original post?


> Lisp trolls existed even back then

Recently, while reading Appel's Compiling with Continuations, I came across some mistaken claims about Lisp and scope. One doesn't have to be a troll (which requires intent) to make ignorant claims about Lisp, parsing, or any subject. Although the amount of ridiculous arguments from people who clearly haven't used it (or at most used it very briefly) does make one's blood boil at times.


I agree, but just because I'm curious, what did Appel say? Different dialects of Lisp have had different scoping rules, but beyond that I don't know what is true or false. I believe earlier ones were dynamically scoped, because I vaguely recollect that lexical scoping was an important change in Common Lisp and Scheme. Of course Common Lisp has a special way to ask for dynamic scope.


Can't find the quote I was remembering, but it said that Lisp didn't have lexical scope. The book was published in '92, so Lisp already had lexical scope by then. (Which was introduced by Scheme, IIRC.)

Here is another one I found looking for the quote I was remembering.

> "It has higher-order functions, meaning that a function can be passed as an argument and return as the result of another function – as in Scheme, C and Haskell, but not Pascal and Lisp.


Wow. Well, at one time, maybe Lisp did not have that? It certainly did by the time this book was written though.


"Lisp" still hasn't; it is a family which comprises various dialects, many of which have lexical scoping, but some of which don't, like Emacs Lisp (which only had dynamic binding until recently and now supports lexical scoping, but not by default).


My comment was directed at the quote about higher-order functions, which is a more legitimately surprising thing than something about scope. And of course we're talking about whatever would have been considered "Lisp" by Appel in 1992, even though you and I might know better and be more precise today.


> My assertion is that you're looking at the wrong decade for C's network effects. You should be looking at the PDP-11 decade, not the PC decade. And in the PDP-11 decade, I believe there was much wider awareness of Lisp.

This is the crux of it I think, and unfortunately my flashlight isn't bright enough to get much further. I was born in 1981, so this is all reconstruction for me. Unfortunately, the people who were there are getting scarce. Anyway, thanks for talking it through with me!


Excellent languages like Lisp, Haskell, Ada, etc., maybe also Rust and Nim, will likely never be mainstream since they offer a lot of features which require a lot of discipline by the programmer.

Companies want mediocre developers for mediocre languages (Java, C++, C#, etc.) because mediocre developers can be replaced easily, and because they get lower salary due to great competition.

Mediocre languages are just good enough to implement the things which need to be done. All their features are comprehensible by mediocre developers. That's another reason why only mediocre languages become mainstream.


If an excellent language doomed to obscurity is one that offers "a lot of features which require a lot of discipline by the programmer," and for the mediocre-only mainstream languages "all their features are comprehensible by mediocre developers," then I think C++ is pretty clearly a counter-example to that, not a supporting example.


Lisp workstations were quite expensive with proprietary source code, while UNIX workstation were built on source code available almost for free (AT&T could not sell it so they licensed it for a symbolic price), which made them cheaper to sell.

Also, the companies involved in selling Lisp machines made quite a few management errors.


Ask anyone who owned a Symbolics 3600, a single-user machine the size of a refrigerator. Overpriced hardware. Poor maintenance service and low hardware reliability. A level of company arrogance seldom seen in a small company. (They thought they were going to rule AI. Didn't happen.) And, originally, 45-minute garbage collections. (Virtual memory plus slow disk plus naive garbage collection.)

What really killed them was decent LISP compilers for mainstream CPUs. LISP doesn't really need special hardware with tag bits.


When the machine appeared there was nothing like it. Later machines were more robust and also smaller, using microprocessors, down to Lisp Machines on NuBus cards for Macs.

> a small company

At most they had 1000 employees.

> And, originally, 45-minute garbage collections

Then they invented and commercialized better GCs.


I'm beginning to find that small companies have more arrogance than larger ones. It's just that confirmation bias leads people to remember the ones with the skills to back it up. Most companies fail without you ever hearing of them.


Yes, that is what I meant with management errors.


A big part, especially back in 1984, was that LISP tended to be slow on traditional PC hardware. It was also somewhat memory hungry, which was a definite problem back in the 80s when memory was expensive.


That's correct. The microprocessor CPUs were not that good at running Lisp. A boost came with the 68020+MMU and the 68030 with integrated MMU. Those had large enough address spaces, were fast enough and had good performing GCs.

Memory was very expensive. The 1984 Mac started with 128 KB of RAM... A 1988 Mac IIx with 8 MB was usable.


My memory is that in 1988 8MB was a luxurious amount of RAM, that machine would cost 2 to 4 thousand dollars. Most people were still using machines with memory measured in kilobytes. Even serious business users were usually limited to 640KB.


It was luxurious, no doubt.

The 68030 Mac IIx started slightly below $8000 with 4 MB RAM and no disk. I can't remember the prices, but I would expect that another 4MB would have cost up to $2k.


I don't see why language quality should have anything to do at all with mainstream usage.

PHP can be considered a mainstream language today. Even COBOL in some parts of the industry. Are they any better because of that?


According to TIOBE, in 1987 only C was more popular than Lisp. Thirty years later, C is #2 and Lisp is #31. Lisp was mainstream and for some reason other languages overtook it. I think it's a good question.


Well... for a language to succeed long-term, it needs to be either significantly better than the alternatives (at some particular kind of use), or else it needs to be entrenched in existing code.

When C began to spread, it was significantly better than alternatives for writing operating systems and associated tools on commodity hardware. Part of that "better" was cost - cost of the machines it ran on, cost of buying a compiler, and cost of implementing a compiler. (The mistake the anti-C people make in saying "but other languages were so much better!" is that they forget that cost was part of "better".)

C became entrenched in Unix and, later, Windows. But it didn't start entrenched. It started by being better.

If Lisp were significantly better, with everything considered, it should have won by now. I mean, yes, there are a lot of people chasing the current hotness. But the programming profession is not made up entirely of fools and sheep. Better languages get noticed, get used, acquire momentum. (Note well, however, that "better" again is "better with all things considered" - it doesn't equate to "better" as defined by theorists who go on about how much better their language is, even though it's a little-used language.)


You are commenting on a site that runs on Lisp


Are you saying that makes Lisp mainstream?


It's more mainstream than the $NEW-AWESOME-JAVASCRIPT-FRAMEWORK. Not being mainstream doesn't mean that something isn't awesome. There are lots of awesome discoveries trapped in research/academia because the Overton window of the current technology community is shifted too far toward "what I am used to", or because nobody cares to look.

Eventually some of these ideas will get dislodged and fall down to us, but you can't rely on that. If you read about Lisp's history it's very obvious why it is rejected by people at the moment.


>>> It's more mainstream than the $NEW-AWESOME-JAVASCRIPT-FRAMEWORK.

This sentence doesn't make any sense.

"Mainstream" means used alot - widespread acceptance.

Lisp is #33 on TIOBE, after COBOL, FoxPro, Fortran and Ada. How you can compare that to the uptake of the latest JS frameworks I don't really understand.

https://www.tiobe.com/tiobe-index/

21 SAS 1.372%
22 Dart 1.306%
23 D 1.103%
24 Transact-SQL 1.075%
25 ABAP 1.065%
26 COBOL 1.055%
27 (Visual) FoxPro 0.932%
28 Scala 0.923%
29 Fortran 0.879%
30 Ada 0.787%
31 Crystal 0.756%
32 Erlang 0.733%
33 Lisp 0.690%
34 Awk 0.662%
35 Lua 0.648%


I was just looking at that same list. I'm surprised Lua is that low considering how many applications embed it.


https://www.tiobe.com/tiobe-index/programming-languages-defi...

The reason those rankings are (mostly) useless is that it's based on search engine hits. An active, effective community that for whatever reason doesn't generate a million and one blog entries a month will not perform as well on the index. This makes it very useful for indicating fads in programming, but not reality.


There's also http://redmonk.com/sogrady/2017/03/17/language-rankings-1-17... which uses # pull requests on github with # of tags on Stack Overflow. Another biased methodology for sure, but it paints a different picture.


Most of Lisp's novel features have been incorporated into the mainstream high-level languages of today: Python, Ruby, JavaScript, C#, Objective-C. The one that's still relatively unique to Lisp is homoiconicity-based macros, but some non-s-expression languages like Nim and Elixir are able to accomplish them. And then there's languages like Clojure and Hy which are gaining mainstream status slowly through over-zealous adoption by disgruntled Rubyists and Pythonistas.


Lisp genes are all over the place. Wait 20 years and you'll see even more of them.


There's not much money in getting people to use it—it's rarely required and can be expensive to train for (as opposed to Java, say).


"Don't worry about people stealing an idea. If it's original, you will have to ram it down their throats."


Clojure isn't mainstream?


The future is long. Computers will be around for thousands of years. They've only been useful for the last ~60 years.

A programming language needs at least two things to succeed: a niche, and corporate backing. Almost every successful language has had both.

Lisp's benefit is that it is discoverable. Programmers hundreds of years from now will continue to stumble across the roots of lisp: http://www.paulgraham.com/rootsoflisp.html

http://ep.yimg.com/ty/cdn/paulgraham/jmc.lisp

The axioms are too simple not to discover by accident.

What are some promising niches for Lisp?

I think gamedev is a likely candidate. The whole gamedev industry still uses C++ / C#. But Lisp can be just as fast -- you can even make a statically-typed Lisp, which avoids any possibility that Lisp's performance will bite you.

The benefits for gamedev in particular are immense: http://all-things-andy-gavin.com/2011/03/12/making-crash-ban...

The reason this hasn't happened is because it takes a certain personality type to write languages. In 99% of cases, your language will fail. Meaning, if you spend 3 years on it, that's 3 years you didn't spend learning React or Vue or %salary-du-jour. You have to love it.

That love is quite the feeling. When you design your own language from the ground up, and break down all barriers of complexity, it stops mattering whether anyone else ever uses your language. Making it was reward enough.

In concrete terms, I think self-hosted Lisps are a promising way forward. It's simple to bootstrap a Lisp in almost any language: JS, Lua, Python, Ruby. Write a reader (you can use JSON to start), then write a compiler that steps through the tree of expressions and spits out the native language constructs. E.g.

  (%while true
    (print "she sells C shells by the C store"))
becomes C#:

  while (true) {
    Console.WriteLine("she sells C shells by the C store");
  }
or Python:

  while True:
    print("she sells C shells by the C store")
etc.

Then you can save the output code to disk and run it.
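As a rough sketch of what that tree-walking step can look like in Python (all names here, `Sym` and `compile_expr` included, are my own for illustration):

```python
class Sym(str):
    """A symbol, as distinct from a string literal."""

def compile_expr(x):
    """Compile one expression of the toy Lisp into Python source."""
    if isinstance(x, Sym):
        # symbols map straight through, with a couple of special names
        return {"true": "True", "false": "False"}.get(str(x), str(x))
    if isinstance(x, list):
        if x[0] == "%while":
            return ("while " + compile_expr(x[1]) + ":\n"
                    "    " + compile_expr(x[2]))
        # ordinary call: (f a b) -> f(a, b)
        return (compile_expr(x[0]) + "("
                + ", ".join(compile_expr(a) for a in x[1:]) + ")")
    return repr(x)  # numbers and string literals

prog = [Sym("%while"), Sym("false"),
        [Sym("print"), "she sells C shells by the C store"]]
code = compile_expr(prog)
exec(code)  # valid Python; this particular loop runs zero times
```

The reader's job is just to turn the source text into those nested lists; everything after that is string pasting.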

At this point you've written your compiler in Python, Ruby, or whatever. Now the trick:

Create a file "compiler.l" that sits alongside your "compiler.py" file. For each function in compiler.py, translate it into your Lisp language and save it into compiler.l.

E.g. if your compiler.py file has:

  def compile(x):
    if atom(x): return compile_atom(x)
    if special(x): return compile_special(x)
    return compile_call(x)
then in your compiler.l file, you should write:

  (define compile (x)
    (if (atom x) (return (compile_atom x)))
    (if (special x) (return (compile_special x)))
    (return (compile_call x)))
And hey presto -- your compiler.py file is now capable of reading in compiler.l and generating the exact same code. Meaning you no longer have to program in Python! (Or whatever language you're targeting.) At that point your language is fully self-hosted, and you can extend it however you like.

It's so easy to set up a self-hosted Lisp that it almost seems like a toy. But it's incredibly powerful. E.g. I'm currently building one on top of Racket so that I can write traditional unhygienic macros, completely sidestepping Racket's syntax transformer system.

You could imagine doing something similar for React, letting you write programs to generate React components -- a step toward the ultimate templating system.

Once you've done this, you'll notice that the entire language is extremely small. It's a thin wrapper around the host language. But that's the value -- that's why it's useful. E.g. in many Lisps, it's common to use the symbol 't for True and an empty list for False. In those systems, there is no such thing as a "boolean" type. Everything is either an empty list or not-an-empty-list. You could do that, which demonstrates just how powerful this technique is. Just modify the way you compile IF statements:

  (define compile_if (cond a b)
    (+ "if (" (compile cond) ") { "
           (compile a)
       " } else { "
           (compile b)
       " }\n"))
to

  (define compile_if (cond a b)
    (+ "if (yes(" (compile cond) ")) { "
           (compile a)
       " } else { "
           (compile b)
       " }\n"))
(i.e. wrap COND in a call to a YES function.)

Then you can define YES as:

  (define yes (x)
    (return (not (or (== x false) (empty_list x)))))

Now your compiler spits out "if (yes(x)) { a } else { b }" everywhere, and your YES function is your definition of truthiness.
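Rendered in the Python target, that YES function would come out as something like (my rendering of it):

```python
def yes(x):
    """Truthiness under the Lisp-ish convention: false and the empty
    list are the only falsy values; 0, "" and so on count as true."""
    return not (x is False or x == [])
```

So `if (yes(x)) ...` gives every compiled IF the Lisp notion of truth regardless of what the host language considers falsy.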

So, you can do that, and it works, but at that point you'll discover that you start wrestling with the underlying language. If you change how IF behaves, then you'll need to change how AND and OR behave, too. Stuff like that.

I've found the best strategy is to make your Lisp as close to the target language's semantics as possible. When you do that, you end up with a tiny runtime. Clojure's runtime is gargantuan because it has to define in Java all of Clojure's semantics. But here, you're using the host language's semantics directly.

Now it might seem that this "isn't really a Lisp". But it turns out that it's just as powerful as any other Lisp. All you have to do is change your compiler from:

  (define compile-file (src)
    (compile (read-file src)))
to

  (define compile-file (src)
    (compile (expand (read-file src))))
Then define an EXPAND function that performs macroexpansion on the expressions you got from the reader.

At that point it's straightforward to introduce:

  (define-macro when (cond . body)
    (return `(if ,cond (do ,@body))))
which compiles to:

  def when_macro(cond, *body):
    return ["if", cond, ["do", *body]]

  environment["when"] = {"macro": when_macro}
Your EXPAND function becomes dead simple:

  def expand(form):
    if atom(form): return form
    mac = environment.get(form[0], {}).get("macro")
    if mac: return expand(mac(*form[1:]))
    return [expand(f) for f in form]
Finish it off by defining DEFINE-MACRO:

  def define_macro_macro(name, args, *body):
    f = eval(compile(["fn", args, *body]))
    environment[name] = {"macro": f}

  environment["define-macro"] = {"macro": define_macro_macro}
Presto, now you can write

  (define-macro add1 (x) (return (list "+" x 1)))

  (print (add1 41))
and it compiles to print(41+1).
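The expand-and-register loop is small enough to try standalone; here's a self-contained Python approximation of it (using a plain `macros` dict of my own rather than the `environment` table):

```python
macros = {}  # macro name -> function from argument trees to a new tree

def expand(form):
    """Recursively macroexpand a nested-list expression tree."""
    if not isinstance(form, list) or not form:
        return form  # atoms pass through unchanged
    head = form[0]
    mac = macros.get(head) if isinstance(head, str) else None
    if mac:
        return expand(mac(*form[1:]))  # re-expand the macro's output
    return [expand(f) for f in form]

# Registering add1 by hand, standing in for DEFINE-MACRO:
macros["add1"] = lambda x: ["+", x, 1]

expanded = expand(["print", ["add1", 41]])
assert expanded == ["print", ["+", 41, 1]]
```

Feeding `expanded` to the compiler then yields `print(41 + 1)` or the equivalent in whatever target you emit.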

Scott Bell and Daniel Gackle pioneered the above technique, and I've spent the last couple of years implementing it across different languages, including Python, elisp, ruby, and racket.

You can try it out here:

https://github.com/sctb/lumen

That's Lumen, a Lisp for JS and Lua. One of the coolest aspects is that it compiles to both JS and Lua simultaneously, and runs in both.

If you want one for Python, you can try out my branch:

https://github.com/shawwn/lumen/tree/features/python

It compiles to JS, Lua, and Python simultaneously. So the same codebase runs on node, lua, luajit, torch, python2.7, python3, and pypy.

LuaJIT has a fantastic FFI library -- and since our lisp runs in Lua, that means we automatically get a fantastic FFI: https://github.com/sctb/motor/blob/master/pq.l

Imagine how much work it would be to write your own FFI! I think this is one of the best FFIs of any Lisp implementation.

So that's a rough outline of Lisp's unique power. It's so versatile that it feels like just a matter of time until someone decides to embed it in some popular system, like a game engine or an online store builder. Clojure might get some serious competition within the next decade.


If you have NoScript and Firefox, you have to enable JS on the domain for the stylesheet to render the documents.


"Those not on the Arpanet may send U.S. mail to.."

Ah.. nostalgia.


"I believe the commercialization of software has harmed the spirit that enabled such systems to be developed."



