Wow, wasn't expecting to see my post on here! Eventually, I want to write a follow-up, but I'm still a beginner.
Here's what I've liked about Common Lisp so far:
* The condition system is neat and I've never used anything like it -- you can easily control code from afar with restarts
* REPL-driven programming is handy in situations where you don't quite know what will happen and don't want to lose context -- for example parsing data from a source you're unfamiliar with, you can just update your code and continue on instead of having to save, possibly compile, and restart from the very beginning
* Common Lisp has a lot of implementations and there's a good deal of interoperability -- I was able to swap out implementations to trade speed (SBCL) for memory usage (CLISP) in one case (having multiple compatible implementations is one of the reasons I've been leaning towards CL instead of Scheme for learning a Lisp)
* Even as an Emacs noob, the integration with Common Lisp is excellent, and it works great even on my super slow netbook where I've been developing -- this isn't as big of an advantage these days with fast computers, VS Code, and language servers, but it's definitely retrofuturistic
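To make the restarts point concrete, here's a tiny sketch (hypothetical names -- `parse-entry` and `skip-entry` aren't from any library): the low-level code offers a restart, and the caller picks the recovery policy from afar, without the parser's stack unwinding.

```lisp
;; parse-entry signals an error on bad input but offers a SKIP-ENTRY
;; restart; the caller decides what "recover" means.
(defun parse-entry (line)
  (or (parse-integer line :junk-allowed t)
      (restart-case (error "Bad entry: ~S" line)
        (skip-entry () nil))))          ; caller may invoke this restart

(defun parse-all (lines)
  ;; Policy lives here: on any error, skip the entry and keep going.
  (handler-bind ((error (lambda (c)
                          (declare (ignore c))
                          (invoke-restart 'skip-entry))))
    (remove nil (mapcar #'parse-entry lines))))

;; (parse-all '("1" "oops" "3")) => (1 3)
```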
There are also a few things I don't like:
* The most popular package manager (QuickLisp) is nice, but not nearly as featureful as I've become accustomed to with newer languages/ecosystems
* Since the language itself is frozen in time, you need lots of interoperability libraries for threads, synchronization, command line arguments, and tons of other things
* I really, really wish SBCL could support fully static builds, to enable distributing binaries to non-glibc Linux distributions
I'm sure there are more pros/cons, but that's what came to mind just now.
Last time I checked, QuickLisp didn't support fetching packages over anything except plain HTTP, with no encryption and no verification mechanism in place to detect files that may have been tampered with in transit.
I think not supporting encryption or authentication for something as important as fetching source code makes QL a non-starter for me and hopefully for anyone else who cares about security.
Another issue I have run into is that SBCL is hosted on SourceForge, which has in the past injected malware into projects' downloadable archives! I consider this to be a security issue as well, and SourceForge in general is not pleasant to work with. I don't see any valid reason to keep using SourceForge today, so it confuses me that such an important project still does.
I don't see these issues mentioned by anyone else, which is bizarre to me.
I really like Lisps, and Common Lisp specifically, but things like this have driven me away from using it, and it doesn't appear that anyone cares about fixing them.
These issues do get mentioned a lot; you just haven't noticed, I guess. SourceForge is also an issue with some C libraries, I'm guessing because the hosting was set up a long time ago? Not sure.
I use ECL because it has really good C interop. It lets you inline C and access C macros directly, making it a great glue language for C libraries -- that's what I'm using it for now. I think you might even be able to avoid the GC entirely and use it to script C programs together performantly: use the C FFI to allocate and manage the memory, including the ECL types, instead of the GC. That's actually doable because of how good the Lisp inspector/debugger is. You can even inline assembly. I'm working on a bunch of CL stuff around this sort of thing, and I plan to do a writeup and share it once I've developed it more.
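For anyone curious what the inlining looks like, here's a minimal sketch using ECL's `ffi:c-inline` (ECL-specific, not portable CL; `add-in-c` is a made-up name):

```lisp
;; ECL only: #0 and #1 refer to the Lisp arguments x and y, coerced to
;; the declared :int types; :one-liner t marks the string as a single
;; C expression rather than a statement block.
(defun add-in-c (x y)
  (ffi:c-inline (x y) (:int :int) :int
                "(#0) + (#1)" :one-liner t))
```

Note that this needs to go through ECL's C backend (i.e. `compile-file`); the bytecode interpreter can't execute inline C.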
Lisp has its downsides, but the C FFI/embeddability, along with the excellent low-level debugger/inspector, interactivity, and conditions and restarts, makes it worth my time to invest in it. And the stability of the language. My main gripe is the reader, but it's easy-ish to avoid the problems with named-readtables, or a simple Lisp parser in place of `read`, or whatever. I like Clojure, but it's missing some key stuff from the old Lisp world that I'd love to see. Shadow-cljs is awesome.
Nix now has really convenient CL library packaging upstream, which verifies everything with sha256. It's quite complete because it's seeded from Quicklisp and has had more packages added on top (as well as their native library dependencies).
Nix isn't to everyone's taste, but it demonstrates that you can treat security/reproducibility/etc. as orthogonal to Quicklisp and SourceForge (and to Lisp-native tooling in general).
The reason for this is quite simple: portability. Quicklisp also uses plain TAR files to distribute dists. Why? Because quicklisp has a built-in TAR extractor written in 100% standard/portable CL. This allows Quicklisp to run on just about everything, from your computer to real LispMs and operating systems like Mezzano.
TLS comes up every time someone discusses Quicklisp, but nobody bothers to actually implement it portably (and even if they did, have fun with performance and side-channel attacks, both of which require breaking portability to handle well on every platform you want to target).
If you would like a more stereotypical package manager, consider using CLPM. Though the big reason to use CLPM is not encryption IMO, but versioning. ASDF supports locking versions of dependencies, but Quicklisp doesn't ever use this and instead constantly pushes the latest of everything from git repositories. IMO this sucks a lot more than using plain HTTP, given that it actually breaks code, whereas a MITM of a plain-HTTP connection to Quicklisp would require so much coordination (and specificity of target) that it's just not in my threat model at all.
This does keep coming up, and it's a few years old now. I think Quicklisp could easily support HTTPS while still supporting the older packages that are tar+HTTP, which could easily be mirrored in a git repo. Quicklisp has unfortunately taken over the entire ecosystem, making it hard to use anything else, and you often need to depend on it to use a lot of tools in the ecosystem. It sort of reminds me of Systemd in that way.
I agree the version pinning is the worse situation, along with not having something like node_modules for Lisp. I haven't tried CLPM in a while; it was kind of hard to set up back then.
I have a little package manager thing, cl-micropm, that just uses Quicklisp to fetch everything via Docker (it should probably support Podman too), plus an .envrc file to tell ASDF to look in the project directory (a project-local node_modules-like folder called "lisp-systems") for systems. That way I can pin my deps manually by picking the commits + git submodules in lisp-systems/, and it's isolated to my local project. I looked into using the Docker container to rewrite the requests to use HTTPS, bypassing whatever Quicklisp is doing, but I never got around to it.
I'm looking to switch it to something even simpler/more explicit though, cl-pm, which will optionally use Quicklisp via Podman _only_ to figure out what the dependencies are, and then just have a function that uses wget/curl/git pull to explicitly pull them in on request. That way you can decide to add a git mirror for an old HTTP-only library, pin a specific version, etc. It's slightly more manual than Quicklisp or CLPM -- not a big deal -- but very easy for anyone with just a little bit of Lisp knowledge to understand the whole thing in under an hour.
> Quicklisp has unfortunately taken over the entire ecosystem, making it hard to use anything else, and you often need to depend on it to use a lot of tools in the ecosystem. It sort of reminds me of Systemd in that way.
This is a strange statement.
What requires QL to work? In the "bad old days" you had to manually download the sources and drop them somewhere ASDF could find them[1]. This still works. You can blithely live as if QL does not exist and get that same experience.
1: Yes, there was asdf-install, but I think I managed to get that to work once with about half-a-dozen tries?
Ultralisp, Roswell, Qlot, Quickdocs, etc. Virtually every modern project's build/install instructions reference Quicklisp. You have no idea which dependencies you need to pull, or from where, which can be a real PITA for a large project. A lot of project code I've looked at also references Quicklisp in the actual code for whatever reason -- usually for testing or building or whatever else -- so to run those you need Quicklisp. It's really hard not to say it's taken over the ecosystem, or that there isn't lock-in; I don't know what you mean, to be honest. Quicklisp is also a curated list gatekept by one person, so whatever is on there isn't really representative of everything being worked on. You can publish on Ultralisp if you don't want to wait or if your project wasn't accepted, but then you're still using Quicklisp under the hood. And it's hard to discover things on GitHub/GitLab/etc. because there are a lot of stub repos just trying things out, with little to no stars.
I'd love to see ecosystem support for other package managers. CLPM is still in beta and has been for a good while now; Quicklisp too. Quicklisp famously doesn't support HTTPS, version pinning, project-local dependencies, etc., which has really throttled progress in the Common Lisp ecosystem. It's not like ASDF at all, which has become a standard that's built into a lot of the Lisp implementations.
Before QL, you looked at the .asd and searched cliki for each name. You can still do that if you like (and can use Google as well). Sometimes the readme had better instructions, but often they didn't work.
In fact, each project in the QL repository includes a link to upstream, so you can use that to find your sources if you like.
I literally once rewrote the 20% of a library that I personally needed because it was faster than tracking down all the dependencies.
Your original comment reads like there used to be all these awesome package managers, and QL came around and squashed them, but there was rather a giant vacuum that QL quickly filled.
> Before QL, you looked at the .asd and searched cliki for each name.
Only very few libraries are on Cliki, and the "upstream" links almost always point to a repo whose readme just says to use Quicklisp for installation. Quicklisp has a quicklisp-projects repo where the project sources are all in one place, but it's not very helpful for what I've been talking about.
> Your original comment reads like there used to be all these awesome package managers
Sorry, I don't want to engage with flamebait... I commented on this thread to raise awareness of issues and interesting things that I think people here might find useful, because there's still a lot of interest in Lisp.
If you really think I'm wrong, do a writeup and share it on HN with everyone. You can go to the awesome-cl repo to find the most popular libraries in the ecosystem, and show how easy it is to avoid using Quicklisp to install/build/find the deps/run tests for all those repos. It would really help and I think it would save a lot of people time. For something like the Nodejs ecosystem, for example, such a writeup would probably only take like an hour tops because of the maturity of the npm package manager.
> If you really think I'm wrong, do a writeup and share it on HN with everyone. You can go to the awesome-cl repo to find the most popular libraries in the ecosystem, and show how easy it is to avoid using Quicklisp to install/build/find the deps/run tests for all those repos. It would really help and I think it would save a lot of people time. For something like the Nodejs ecosystem, for example, such a writeup would probably only take like an hour tops because of the maturity of the npm package manager.
I picked dexador because Fukamachi likes lots of small projects, so it's going to have lots of deps; it took me about 50 minutes while watching baseball and chatting with family:
I should also note that, should you want to avoid reading the .asd file, you can skip steps 2-4 and just download dependencies as-needed.
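Reading the .asd for dependencies can itself be done in a few lines of CL -- a deliberately naive sketch (it ignores packages, reader conditionals, and `:defsystem-depends-on`, so it only handles plain `defsystem` forms; `direct-dependencies` is a made-up name):

```lisp
;; Collect the direct :depends-on lists from the defsystem forms on a
;; stream. Naive: assumes the forms are readable without ASDF loaded.
(defun direct-dependencies (stream)
  (loop for form = (read stream nil nil)
        while form
        when (and (consp form)
                  (symbolp (first form))
                  (string-equal (first form) "defsystem"))
          append (getf (cddr form) :depends-on)))
```

Each name it returns can then be looked up and fetched by hand (e.g. via the upstream links in the quicklisp-projects repo).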
This is literally what my workflow was for using 3rd party Lisp projects the day before QL came out. Prior to my discovery of Google it was even more of a pain.
I've never successfully gotten a Node.js project working without npm, but npm vies with PyPI for my second least favorite packaging ecosystem (Haskell's cabal "wins" this contest).
> Quicklisp doesnt ever use this and instead constantly pushes latest of everything from git repositories
Yeah, I didn't recall off hand, but this was one of my main complaints with Quicklisp vs. other package managers I've used (for other ecosystems--not CL).
> whereas some MITM from plain HTTP connection to Quicklisp would require so much coordination (and specificity of target) that it's just not in my threat model at all
I hope you're right, but it still seems like an unnecessary risk. Even if I can't imagine a scenario where someone is able to MITM me (or, more likely, a server I'm deploying code to), there's still the lingering feeling that it's possible. I certainly wouldn't download an executable over HTTP and run it, and downloading library code is fairly similar (although easier to inspect, at least).
Are you proposing authentication over an insecure connection? If so, then the credentials could be compromised by a middle man. The same would be true for the signatures.
There is no technical reason why Quicklisp couldn't use the system's libcurl and OpenSSL when they're available and fall back to fetching with its portable HTTP implementation when they aren't.
Every other language's package manager has managed to solve this issue!
If the issue is that nobody has actually had time to work on it, that's fair, but I don't believe that optionally supporting libcurl would cause QL to be less portable.
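The fallback could be as simple as this sketch (hedged: it shells out to the `curl` binary rather than linking libcurl, uses `uiop`, which ships with ASDF, and `plain-http-fetch` is a hypothetical stand-in for Quicklisp's portable HTTP client):

```lisp
;; Prefer the system's curl (which gives us HTTPS) when present;
;; otherwise fall back to the portable plain-HTTP implementation.
(defun curl-available-p ()
  (ignore-errors
    (uiop:run-program '("curl" "--version"))
    t))

(defun fetch (url output)
  (if (curl-available-p)
      (uiop:run-program (list "curl" "-fsSL" "-o" output url))
      (plain-http-fetch url output)))  ; hypothetical portable fallback
```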
Try ocicl instead of quicklisp. System tarballs are hosted in an OCI registry, and are downloaded via TLS connections (obeying proxies). Tarballs are signed and signatures are stored in the sigstore rekor transparency log for later inspection. https://github.com/ocicl/ocicl
> Last time I checked on it, QuickLisp doesn't support fetching packages over anything except for plain http, with no encryption and no verification mechanism in place to detect files that may have been tampered with during transmission.
I know it's not an excuse, but it was fun as heck booting up "capital M" MacOS (9.2.1) and loading Quicklisp into MCL without any trouble. I'm not even sure that's a supported platform by Quicklisp. https://code.google.com/archive/p/mcl/
When I started using CL 20 years ago, libraries were stored on cliki and any malicious user could put malware there. Any source you asdf-installed was generally GPG signed and the installer automatically checked signatures against your personal trust-chain.
Learning CL back then was my first introduction to GPG (and Emacs, and Linux)
> When I started using CL 20 years ago, libraries were stored on cliki and any malicious user could put malware there. Any source you asdf-installed was generally GPG signed and the installer automatically checked signatures against your personal trust-chain.
Which, in practice, involved downloading GPG public keys from cliki because I didn't know every single CL developer.
For static builds, if you're willing to run a slightly older version of SBCL, daewok's work on building and linking SBCL in a musl environment might be the solution you're looking for. I've tried to port his patches to more recent versions, but there are segfaults due to changes upstream.
I’ll take a look, thanks! My biggest concern with Scheme is that each implementation seems to have its own ecosystem due to subtle incompatibilities.
From an outsider’s perspective it seems a lot more fragmented than CL. Not necessarily a big deal if you have the libraries you want, but it gives me pause.
R7RS, which Gambit (mostly?) supports, helps mitigate this by making library code more portable across implementations. Gambit, in particular, can also very easily take advantage of the wide variety of C libraries; it has one of the easiest, most integrated FFIs of all Scheme implementations.
That's true. For cases where you want to start with a good set of libraries (JSON, CSV, databases, HTTP client, CLI args, language extensions…), I am putting this collection together: https://github.com/ciel-lang/CIEL/ It can be used as a normal Quicklisp library, as a core image (it then starts up instantly), or as a binary.
It can run scripts nearly instantly too (so it isn't unlike Babashka). We are ironing out the details; it's not at v1.0 yet.
> handling a runtime error by just fixing the broken code--in-place, without any restarts [from the blog]
We run a long and intensive computation and, bad luck, get an error in the last step. Instead of re-running everything from zero, we get the interactive debugger, go to the erroneous line, compile the fixed function, come back to the debugger, choose a frame on the stack to resume execution from (the last step), and see our program pass. Hope this illustrates the feature well!
I like your pragmatic approach of using Lisp where it makes sense and not being afraid to shell out to something else where appropriate (among many other nuggets of wisdom).
I cdr car less about your cons. Seriously though, mad props for being diligent enough to spend your attention on this. There is a lot to learn from people who came before us and build on that.
Won't you also be more likely to write code based on data that you happen to have in the current situation, but not for data that covers every situation?
E.g. code that accesses an optional property as if it were always present, because it happens to be present when you're writing the code, etc.
That seems like a possible pitfall when relying on a REPL heavily, but I haven't used such a language myself, so can't speak from experience.
And with TDD, aren't you more likely to write code based on the current tests you have, but not code that covers every situation?
Any time writing code, you (should) aim for the general situation and then test it with whatever edge-cases you think of at the time. The REPL lets you live-test. I know many people who dump their REPL history to a file and turn them into tests.
My attitude with tests is not to write individual tests when I can write property-based tests. The payoff from the latter is considerable. Let the computer do the work of generating and running tests; its time is worth a whole lot less than mine.
For individual tests, say for coverage, these should also be generated automatically if possible, say by looking for inputs that kill mutants. I've backburned a Common Lisp system for doing this, generating mutants from Common Lisp source forms and automatically searching for and minimizing inputs that kill new mutants. Maybe one day I'll finish this and put it out there for general use.
My point was that having an actual example of the data in front of you, instead of only the definition of the structure/schema/interface/type of the data, could push people towards relying on things specific to that example. Especially in dynamically typed languages, but also for things like taking the first element of a list that might be empty (in languages where that doesn't return an `Option`), etc.
And I wonder whether someone observed that in practice.
I just create new/changed functions next to the others and eval the selected region, then clean up. When I think I'm done, I'll restart the REPL and check whether it all still works or whether I depended on something in the state. That doesn't often happen anymore. I use the REPL to try out things I've just written in files. I can't say I remember a moment when state was a/the problem.