
As a NodeJS developer it's still kind of shocking to me that Python still hasn't resolved this mess. Node isn't perfect, and dealing with different versions of Node is annoying, but at least there's none of this "worry about modifying global environment" stuff.


Caveat: I'm a node outsider, only forced to interact with it

But there are a shocking number of install instructions that offer $(npm i -g) and if one is using Homebrew or nvm or a similar "user writable" node distribution, it won't prompt for sudo password and will cheerfully mangle the "origin" node_modules

So, it's the same story as with python: yes, but only if the user is disciplined

Now ruby drives me fucking bananas because it doesn't seem to have either concept: virtualenvs nor ./ruby_modules


It's worth noting that Node allows two packages to have the same dependency at different versions, which means that `npm i -g` is typically a lot safer than a global `pip install`: each package essentially gets its own dependency tree, isolated from other packages. In practice, NPM has a deduplication process that complicates this, so you can still run into problems (although I believe other package managers handle this better), but I rarely do.

That said, I agree that `npm i -g` is a poor system package manager, and you should typically be using Homebrew or whatever package manager makes the most sense on your system. And `npx` is a good alternative if you just want to run a command quickly to try it out or something like that.


>It's worth noting that Node allows two packages to have the same dependency at different versions

Yes. It does this because JavaScript enables it - the default import syntax uses a file path.

Python's default import syntax uses symbolic names. That allows you to do fun things like split a package across the filesystem, import from a zip file, and write custom importers, but doesn't offer a clean way to specify the version you want to import. So Pip doesn't try to install multiple versions either (which saves it the hassle of trying to namespace them). You could set up a system to make it work, but it'd be difficult and incredibly ugly.

Some other language ecosystems don't have this problem because the import is resolved at compile time instead.


This is incorrect on several points.

Firstly, Node's `require` syntax predates the modern JS `import` syntax. It is related to some attempts at the time to create tools that could act like a module system for the browser (in particular RequireJS), but it is distinct in that a lot of the rules about how modules would be resolved were specifically designed to make sense with NodeJS and the `node_modules` system.

Secondly, although path imports are used for local imports, Node's `require` and `import` syntax both use symbolic names to refer to third-party packages (as well as built-in packages). If you have a module called `lodash`, you would write something like `import "lodash"`, and Node will resolve the name "lodash" to the correct location. This behaviour is in principle the same as Python's — the only change is the resolution logic, which is Node-specific, and not set by the Javascript language at all.

The part of NodeJS that _does_ enable this behaviour is the scoped module installation. Conceptually, NPM installs modules in this structure:

    * index.js (your code goes here)
    * node_modules (top level third-party modules go here)
        * react
            * index.js (react source code)
            * node_modules (react's dependencies go here)
                * lodash/index.js
        * vuejs
            * index.js (vuejs source code)
            * node_modules (vue's dependencies go here)
                * lodash/index.js
        * lodash
            * index.js
Importantly, you can see that each dependency gets its own set of dependencies. When `node_modules/react/index.js` imports lodash, that resolves to the copy of lodash installed within the react folder in the dependency tree. That way, React, VueJS, and the top-level package can all have their own, different versions of lodash.
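That nested lookup can be sketched in a few lines of Python (a toy model of the resolution walk only; it ignores package.json `main`/`exports` fields and the file extensions Node actually tries):

```python
import os
import tempfile

def resolve(importer_dir, name):
    """Walk up from the importing file's directory, checking each
    node_modules folder for the named package (a sketch of Node's
    nested resolution, nothing more)."""
    d = os.path.abspath(importer_dir)
    while True:
        candidate = os.path.join(d, "node_modules", name)
        if os.path.isdir(candidate):
            return candidate
        parent = os.path.dirname(d)
        if parent == d:          # reached the filesystem root
            return None
        d = parent

# Build the tree from the comment above in a temp dir.
root = tempfile.mkdtemp()
os.makedirs(os.path.join(root, "node_modules", "react", "node_modules", "lodash"))
os.makedirs(os.path.join(root, "node_modules", "lodash"))

# react's import of lodash resolves to its own nested copy...
inside_react = resolve(os.path.join(root, "node_modules", "react"), "lodash")
# ...while top-level code gets the top-level copy.
top_level = resolve(root, "lodash")
print(inside_react != top_level)  # two lodash copies coexist
```

Because the walk stops at the first match, the nested copy always wins for the package that vendored it, which is exactly what isolates versions.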

Of course in practice, you usually don't want lots of versions of the same library floating around, so NPM also has a deduping system that attempts to combine compatible modules (e.g. if the version ranges for React, Vue, and Lodash overlap, a version will be chosen that works for all modules). In addition, there are various ways of removing excess copies of modules — by default, NPM flattens the tree in a way that allows multiple modules to "see" the same imported module, and tools like PNPM use symlinks to remove duplicated modules. And you mention custom importers in Python, but I believe Yarn uses them already in NodeJS to do the fun things you talk about like importing from zip files that are vendored in a repository.

All of the above would be possible in Python as well, without changing the syntax or the semantics of the `import` statement at all. However, it would probably break a lot of the ecosystem assumptions about where modules live and how they're packaged. And it's not necessarily required to fix the core Python packaging issues, so I don't think people in Python-land are investing much time in exploring this issue.

Basically, no, there is no fundamental reason why Python and Node have to have fundamentally different import systems. The two languages process imports in a very similar way, with the main difference being whether packages are imported in a nested way (Node) or as a flattened directory (Python).


Because you don’t need virtualenvs or ruby_modules. You can have however many versions of the same gem installed; each is simply referenced by a Gemfile, so for Ruby version X you are guaranteed one copy of gem version Y and no duplicates.

This whole installing the same dependencies a million times across different projects in Python and Node land is completely insane to me. Ruby has had the only sane package manager for years. Cargo too, but only because they copied Ruby.

Node has littered my computer with useless files. Python’s venv eat up a lot of space unnecessarily too.


In principle, venvs could hard-link the files from a common source, as long as the filesystem supports that. I'm planning to experiment with this for Paper. It's also possible to use .pth files (https://docs.python.org/3/library/site.html) to add additional folders to the current environment at startup. (I've heard some whispers that this causes a performance hit, but I haven't noticed. Python module imports are cached anyway.) Symlinks should work, too. (But I'm pretty sure Windows shortcuts would not. No idea about junctions.)
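As a rough sketch of the hard-link idea (assuming everything lives on one filesystem that supports hard links, and ignoring permissions, atomicity, and .pyc files), content-hash deduplication could look like:

```python
import hashlib
import os
import tempfile

def hardlink_duplicates(store, target):
    """Replace files under `target` with hard links into `store` when an
    identical file (keyed by content hash) already exists there.
    A sketch of how venvs could share package files on disk."""
    for dirpath, _, filenames in os.walk(target):
        for fn in filenames:
            path = os.path.join(dirpath, fn)
            with open(path, "rb") as f:
                digest = hashlib.sha256(f.read()).hexdigest()
            cached = os.path.join(store, digest)
            if not os.path.exists(cached):
                os.link(path, cached)   # first sighting: seed the store
            else:
                os.remove(path)         # duplicate: relink to the store
                os.link(cached, path)

# Two "venvs" containing an identical module cost one copy on disk.
base = tempfile.mkdtemp()
store = os.path.join(base, "store")
os.makedirs(store)
for env in ("venv1", "venv2"):
    os.makedirs(os.path.join(base, env))
    with open(os.path.join(base, env, "mod.py"), "w") as f:
        f.write("VALUE = 42\n")
    hardlink_duplicates(store, os.path.join(base, env))

a = os.stat(os.path.join(base, "venv1", "mod.py"))
b = os.stat(os.path.join(base, "venv2", "mod.py"))
print(a.st_ino == b.st_ino)  # same inode: one physical copy
```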


I wish the ecosystem made heavier use of .zip packages, which would greatly help with the logistics of your hardlink plan, in addition to slimming down 300MB worth of source code. The bad news is that (AIUI) the code needs to be prepared for use from a package, and thus retroactively just zipping them up will break things at runtime.

Take for example:

  $ du -hs $HOMEBREW_PREFIX/Cellar/ansible/11.1.0_1/libexec/lib/python3.13/site-packages/* | gsort --human
  103M /usr/local/Cellar/ansible/11.1.0_1/libexec/lib/python3.13/site-packages/botocore
  226M /usr/local/Cellar/ansible/11.1.0_1/libexec/lib/python3.13/site-packages/ansible_collections


Wheels are zip files with a different extension. And the Python runtime can import Python code from them - that's crucial to the Pip bootstrapping process, in fact.

But packages that stay zipped aren't a thing any more - because it's the installer that gets to choose how to install the package, and Pip just doesn't do that. Pip unpacks the wheel mainly because many packages are written with the assumption that the code will be unpacked. For example, you can't `open` a relative path to a data file that you include with your code, if the code remains in a zip file. Back in the day when eggs were still a common distribution format, it used to be common to set metadata to say that it's safe to leave the package zipped up. But it turned out to be rather difficult to actually be sure that was true.
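The zip-import half of this is easy to demonstrate; it's the `open`-a-relative-path pattern that breaks. A small sketch (the module name here is made up):

```python
import os
import sys
import tempfile
import zipfile

# Write a tiny module into a zip and import it without ever unpacking.
# Zip archives are valid sys.path entries; this is the same machinery
# that lets Python import code straight from a wheel.
tmp = tempfile.mkdtemp()
archive = os.path.join(tmp, "bundle.zip")
with zipfile.ZipFile(archive, "w") as zf:
    zf.writestr("zipped_mod.py", "GREETING = 'hello from the zip'\n")

sys.path.insert(0, archive)
import zipped_mod

print(zipped_mod.GREETING)   # works: the import system reads the zip
print(zipped_mod.__file__)   # a path *inside* the archive, which a
                             # plain open() cannot read -- the breakage
                             # described above
```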

The wheel is also allowed to contain a folder with files that are supposed to go into other specified directories (specific ones described by the `sysconfig` standard library https://docs.python.org/3/library/sysconfig.html ), and Pip may also read some "entry point" metadata and create executable wrappers, and leave additional metadata to say things like "this package was installed with Pip". The full installation process for wheels as designed is described at https://packaging.python.org/en/latest/specifications/binary... . (Especially see the #is-it-possible-to-import-python-code-directly-from-a-wheel-file section at the end.)

The unzipped package doesn't just take up extra space due to unzipping, but also because of the .pyc cache files. A recent Pip wheel is 1.8 MiB packed and 15.2 MiB unpacked on my system; that's about 5.8 MiB (apparent size) of .py, 6.5 MiB of .pyc, 2.1 MiB of wasted space in 4 KiB disk blocks, and [s]0.8 MiB from a couple hundred folders[/s]. Sorry, most of that last bit is actually stub executables that Pip uses on Windows to make its wrappers "real" executables, because Windows treats .exe files specially.


uv helps prevent some of that using caching https://docs.astral.sh/uv/concepts/cache/#cache-directory


"They copied ruby" is a little unfair. From memory it was some of the same individuals.


You're right about this: apparently the same guy who wrote Bundler wrote Cargo.


Ruby has a number of solutions for this - rvm (the oldest, but less popular these days), rbenv (probably the most popular), chruby/gem_home (lightweight) or asdf (my personal choice as I can use the same tool for lots of languages). All of those tools install to locations that shouldn't need root.


Yes, I am aware of all of those, although I couldn't offhand tell anyone the differences in tradeoffs between them. But I consider having to install a fresh copy of the whole distribution a grave antipattern. I'm aware that nvm and pyenv default to it, and I don't like that.

I did notice how Homebrew sets env GEM_HOME=<Cellar>/libexec GEM_PATH=<Cellar>/libexec (e.g. <https://github.com/Homebrew/homebrew-core/blob/9f056db169d5f...>) but, similar to my node experience, since I am a ruby outsider I don't totally grok what isolation that provides


The mainline ruby doesn't, but tools to support virtualenvs are around. They're pretty trivial to write: https://github.com/regularfry/gemsh/blob/master/bin/gemsh

As long as you're in the ruby-install/chruby ecosystem and managed to avoid the RVM mess then the tooling is so simple that it doesn't really get any attention. I've worked exclusively with virtualenvs in ruby for years.


FWIW, you can usually just drop the `-g` and it'll install into `node_modules/.bin` instead, so it stays local to your project. You can run it straight out of there (by typing the path) or do `npm run <pkg>` which I think temporarily modifies $PATH to make it work.


The `npx` command (which comes bundled with any nodejs install) is the way to do that these days.


`npx` doesn't update package.json/package-lock.json though, right? So you might get a different version of the package once in a while. If it's an executable you depend on for your project, it makes sense to version it IMO.


You can do a (local) install using `npm install` and then execute the binary using `npx`. npx will also try to fetch the binary over the network if you don't have it installed, which is questionable behaviour in my opinion, but you can just cancel this if it starts doing it.


Python has been cleaning up a number of really lethal problems like:

(i) wrongly configured character encodings (suppose you incorporated somebody else's library that does a "print" and the input data contains some invalid characters that wind up getting printed; that "print" could crash a model-trainer script that runs for three days if error handling is set wrong, and you couldn't change it while the script was running; at most you could make the script start another Python with different command-line arguments)

(ii) site-packages; all your data scientist has to do is

   pip install --user
the wrong package and they've trashed all of their virtualenvs, all of their condas, etc. Over time the defaults have changed so pythons aren't looking into the user site-packages directory, but I wasted a long time figuring out why a team of data scientists couldn't get anything to work reliably
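You can inspect the mechanism directly; the `site` module reports the per-user directory that `pip install --user` targets and whether the running interpreter will search it:

```python
import site
import sys

# The directory `pip install --user` writes to for this interpreter...
user_site = site.getusersitepackages()
print(user_site)

# ...and whether this interpreter actually searches it. When user-site
# lookup is enabled, packages installed there can shadow what a venv or
# conda environment provides (depending on how the env was created).
print(site.ENABLE_USER_SITE)
print(user_site in sys.path)
```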

(iii) "python" built into Linux by default. People expected Python to "just work", but it doesn't "just work" once people start installing stuff with pip, because you might be working on one thing that needs one package and another thing that needs another package, and you could trash everything you're doing with Python in the process of trying to fix it.

Unfortunately python has attracted a lot of sloppy programmers who think virtualenv is too much work and that it's totally normal for everything to be broken all the time. The average data scientist doesn't get excited when it crumbles and breaks, but you can't just call up some flakes to fix it. [1]

[1] https://www.youtube.com/watch?v=tiQPkfS2Q84


> Python has been cleaning up a number of really lethal problems like

I wish they would stick to semantic versioning tho.

I have used two projects that got stuck in incompatible changes in the 3.x Python.

That is a fatal problem for Python. If a change in a minor version makes things stop working, it is very hard to recommend the system. A lot of work has gone down the drain, by this Python user, trying to work around that


I assume these breaking changes are in the stdlib and not in the python interpreter (the language), right?

There was previous discussions about uncoupling the stdlib (python libraries) from the release and have them being released independently, but I can’t remember why that died off


> I assume these breaking changes are in the stdlib and not in the python interpreter (the language), right?

That's the usual case, but it can definitely also happen because of the language - see https://stackoverflow.com/questions/51337939 .

> There was previous discussions about uncoupling the stdlib (python libraries) from the release and have them being released independently, but I can’t remember why that died off

This sort of thing is mainly a social problem.


> I assume these breaking changes are in the stdlib

Well yes, but sometimes the features provided by stdlib can feel pretty 'core'. I remember that how dataclasses work changed, for example, and broke a lot of our code when we upgraded from 3.8 to 3.10.

There have also been breaking changes in the C API. We have one project at work that is stuck at 3.11 since a third party dependency won't build on 3.12 without a rewrite.


I don’t exactly remember the situation, but a user created a Python module named error.py.

Then in their main code they imported said error.py, but unfortunately the numpy library also has an error.py. So the user was getting very funky behavior.


Yep, happens all the time with the standard library. Nowadays, third-party libraries have this issue much less because they can use relative imports for everything except their dependencies. But the Python standard library, for historical reasons, isn't organized into a package, so it can't do that.

Here's a fun one (this was improved in 3.11, but some other names like `traceback.py` can still reproduce a similar problem):

  /tmp$ touch token.py
  /tmp$ py3.10
  Python 3.10.14 (main, Jun 24 2024, 03:37:47) [GCC 11.4.0] on linux
  Type "help", "copyright", "credits" or "license" for more information.
  >>> help()
  Traceback (most recent call last):
    File "<stdin>", line 1, in <module>
    File "/opt/python/standard/lib/python3.10/_sitebuiltins.py", line 102, in __call__
      import pydoc
    File "/opt/python/standard/lib/python3.10/pydoc.py", line 62, in <module>
      import inspect
    File "/opt/python/standard/lib/python3.10/inspect.py", line 43, in <module>
      import linecache
    File "/opt/python/standard/lib/python3.10/linecache.py", line 11, in <module>
      import tokenize
    File "/opt/python/standard/lib/python3.10/tokenize.py", line 36, in <module>
      from token import EXACT_TOKEN_TYPES
  ImportError: cannot import name 'EXACT_TOKEN_TYPES' from 'token' (/tmp/token.py)
Related Stack Overflow Q&A (featuring an answer from me): https://stackoverflow.com/questions/36250353
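A related diagnostic trick, in case anyone hits this: `importlib.util.find_spec` reports where a name *would* be imported from before any import actually runs (the decoy file below is purely illustrative):

```python
import importlib.util
import os
import sys
import tempfile

# Ask where "token" would come from right now: normally the stdlib.
stdlib_origin = importlib.util.find_spec("token").origin
print(stdlib_origin)

# Simulate running a script next to a decoy token.py (the script's own
# directory goes first on sys.path, which is how shadowing happens):
decoy_dir = tempfile.mkdtemp()
open(os.path.join(decoy_dir, "token.py"), "w").close()
sys.path.insert(0, decoy_dir)
sys.modules.pop("token", None)   # forget any cached import
importlib.invalidate_caches()    # new file on disk: refresh finder caches

shadowed_origin = importlib.util.find_spec("token").origin
print(shadowed_origin)           # now the decoy wins the lookup
```

If `find_spec(...).origin` points at your own file instead of the standard library, you've found your shadowing bug without triggering the confusing ImportError.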


... it's tricky. In Java there's a cultural expectation that you name a package like

  package organization.dns.name.this.and.that;
but real scalability in a module system requires that somebody else packages things up as

  package this.and.that;
and you can make the system look at a particular wheel/jar/whatever and make it visible with a prefix you specify like

  package their.this.and.that;
Programmers seem to hate rigorous namespace systems though. In my first year programming Java (before JDK 1.0), the web site that properly documented how to use Java packages was at NASA, and you still had people writing Java classes that were in the default package.


But let's all be real here: the ability of __init__.py to do FUCKING ANYTHING IT WANTS is insanity made manifest

I am kind of iffy on golang's import (. "some/package/for/side-effects") but at least it cannot suddenly mutate GOPATH[0]="/home/jimmy/lol/u/fucked" as one seems to be able to do on the regular with python

I am acutely aware that is (programmer|package|organization|culture)-dependent but the very idea that one can do that drives us rigorous people stark-raving


It took me a while to realize that you mean __init__.py running when a package is imported.

But, you know, top-level code (which can do anything) runs when you import any Python module (the first time), and Python code doesn't have to be in a package to get imported. (The standard library depends on this.)
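A minimal illustration (hypothetical module name, written to a temp dir):

```python
import os
import sys
import tempfile

# A module's top-level code is just executed on first import; no
# __init__.py or package required. Repeat imports hit sys.modules.
mod_dir = tempfile.mkdtemp()
with open(os.path.join(mod_dir, "noisy.py"), "w") as f:
    f.write("print('noisy: top-level code running')\n"
            "COUNTER = []\n"
            "COUNTER.append(1)\n")

sys.path.insert(0, mod_dir)
import noisy          # executes the module body (prints once)
import noisy          # cached: the body does not run again
print(noisy.COUNTER)  # [1], not [1, 1]
```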


> Programmers seem to hate rigorous namespace systems though

Pretty much a nothing burger in Rust, so I disagree that devs necessarily hate the concept. Maybe others haven’t done a good job with the UX?


That. Forcing people to do it right from day one helps a lot. Also Rust attracts a programmer who is willing to accept some pain up front to save pain later, if your motto is "give me convenience or give me death" you might stick with C or Python.

If you give people an "easy way out" it is very hard to compel them to do it more rigorously. Look at the history of C++ namespaces as well as the non-acceptance of various "Modula" languages and Ada back in the 1980s.


Yeah, that's not exactly the problematic situation... but the good news is I improved the Python's error message for this in 3.13. See https://docs.python.org/3/whatsnew/3.13.html#improved-error-...


Half the time something breaks in a javascript repo or project, every single damn javascript expert in the team/company tells me to troubleshoot using the below sequence, as if throwing spaghetti on a wall with no idea what's wrong.

Run npm install

Delete node_modules and wait 30 minutes because it takes forever to delete 500MB worth of 2 million files.

Do an npm install again (or yarn install or that third one that popped up recently?)

Uninstall/Upgrade npm (or is it Node? No wait, npx I think. Oh well, used to be node + npm, now it's something different.)

Then do steps 1 to 3 again, just in case.

Hmm, maybe it's the lockfile? Delete it, one of the juniors pushed their version to the repo without compiling maybe. (Someone forgot to add it to the gitignore file?)

Okay, do steps 1 to 3 again, that might have fixed it.

If you've gotten here, you are royally screwed and should try the next javascript expert, he might have seen your error before.

So no, I'm a bit snarky here, but the JS ecosystem is a clustermess of chaos and should rather fix its own stuff first. I have none of the above issues with python, a proper IDE and out-of-the-box pip.


So you’re not experiencing exactly this with pip/etc? I hit this “just rebuild this 10GB venv” scenario like twice a day while learning ML. Maybe it’s just ML, but then regular node projects don’t have complex build-step / version-clash deps either.


I think it's something unique to python's ML ecosystem, to be honest. There is a lot of up-in-the-air about how to handle models, binaries and all of that in a contained package, and that results in quite a few hand-rolled solutions, some of which encroach on the package manager's territory, plus of course drivers and windows.

I've worked on/with some fairly large/complex python projects, and they almost never have any packaging issues that aren't just obvious errors by users. Yes, every once in a while we have to be explicit about a dependency because some dependent project isn't very strict with their versioning policy and their API layers.


I've not used python professionally for years - but I have had to do this maybe once in many years of usage. Seen it like once more in my team(s). A rounding error.

I've seen someone having to do this in node like once every month, no matter which year, no matter which project or company.


The pain is real. Most of the issues are navigable, but often take careful thought versus some canned recipe. npm or yarn in large projects can be a nightmare; starting with pnpm makes it a dream. Sometimes migrating to pnpm can be rough, because projects that work may rely on incorrect, transitive, undeclared deps actually resolving. Anyway, starting from pnpm generally avoids this sort of chaos.

Most package managers are developed.

Pnpm is engineered.

It’s one of the few projects I donate to on GitHub


What kind of amateurs are you working with? I’m not a Node.js dev and even I know about npm ci command.


Sounds like a tale from a decade ago, people now use things like pnpm and tsx.


Environment and dependency management in JS-land is even worse.

Similar problems with runtime version management (need to use nvm for sanity, using built-in OS package managers seems to consistently result in tears).

More package managers and interactions (corepack, npm, pnpm, yarn, bun).

Bad package interop (ESM vs CJS vs UMD).

More runtimes (Node, Deno, Bun, Edge).

Then compound this all with the fact that JS doesn't have a comprehensive stdlib so your average project has literally 1000s of dependencies.


Valid criticisms, but the "standard" choices all work well. Nvm is the de facto standard for node version management, npm is a totally satisfactory package manager, node is the standard runtime that those other runtimes try to be compatible with, etc.

Will also note that in my years of js experience I've hardly ever run into module incompatibilities. It's definitely gnarly when it happens, but wouldn't consider this to be the same category of problem as the confusion of setting up python.

Hopefully uv can convince me that python's environment/dependency management can be easier than JavaScript's. Currently they both feel bad in their own way, and I likely prefer js out of familiarity.


> I've hardly ever run into module incompatibilities

I'm not totally sure what you're referring to, but I've definitely had a number of issues along the lines of:

- I have to use import, not require, because of some constraint of the project I'm working in

- the module I'm importing absolutely needs to be required, not imported

I really don't have any kind of understanding of what the fundamental issues are, just a very painful transition point from the pre-ESM world to post.


I was referring to cjs vs esm (require vs import, respectively). I suspect I was late enough to the game (and focused on browser js) such that I've mostly only used esm.

I will note that node has embraced esm, and nowadays it's frowned upon to only publish cjs packages so this problem is shrinking every day. Also cool is some of the newer runtimes support esm/cjs interop.


>Similar problems with runtime version management (need to use nvm for sanity, using built-in OS package managers seems to consistently result in tears).

In practice I find this a nuisance but a small one. I wish there had been a convention that lets the correct version of Node run without me manually having to switch between them.

> More package managers and interactions (corepack, npm, pnpm, yarn, bun).

But they all work on the same package.json and node_modules/ principle, afaik. In funky situations, incompatibilities might emerge, but they are interchangeable for the average user. (Well, I don't know about corepack.)

> Bad package interop (ESM vs CJS vs UMD).

That is a whole separate disaster, which doesn't really impact consuming packages. But it does make packaging them pretty nasty.

> More runtimes (Node, Deno, Bun, Edge).

I don't know what Edge is. Deno is different enough to not really be in the same game. I find it hard to see the existence of Bun as problematic: it has been a bit of a godsend for me, with an amazing ability to "just work" and punch through TypeScript configuration issues that trip up other tools. And it's fast.

> Then compound this all with the fact that JS doesn't have a comprehensive stdlib so your average project has literally 1000s of dependencies.

I guess I don't have a lot of reference points for this one. The 1000s of dependencies is certainly true though.


> I wish there had been a convention that lets the correct version of Node run without me manually having to switch between them.

For what it's worth, I think .tool-versions is slowly starting to creep into this space.

Mise (https://mise.jdx.dev/dev-tools/) and ASDF (https://asdf-vm.com/) both support it.

Big reason I prefer ASDF to nvm/rvm/etc right now is that it just automatically adjusts my versions when I cd into a project directory.


I've only recently started with uv, but this is one thing it seems to solve nicely. I've tried to get into the mindset of only using uv for python stuff - and hence I haven't installed python using homebrew, only uv.

You basically need to just remember to never call python directly. Instead use uv run and uv pip install. That ensures you're always using the uv installed python and/or a venv.

Python based tools where you may want a global install (say ruff) can be installed using uv tool


> Python based tools where you may want a global install (say ruff) can be installed using uv tool

uv itself is the only Python tool I install globally now, and it's a self-contained binary that doesn't rely on Python. ruff is also self-contained, but I install tools like ruff (and Python itself) into each project's virtual environment using uv. This has nice benefits. For example, automated tests that include linting with ruff do not suddenly fail because the system-wide ruff was updated to a version that changes rules (or different versions are on different machines). Version pinning gets applied to tooling just as it does to packages. I can then upgrade tools when I know it's a good time to deal with potentially breaking changes. And one project doesn't hold back the rest. Once things are working, they work on all machines that use the same project repo.

If I want to use Python based tools outside of projects, I now do little shell scripts. For example, my /usr/local/bin/wormhole looks like this:

  #!/bin/sh
  uvx \
      --quiet \
      --prerelease disallow \
      --python-preference only-managed \
      --from magic-wormhole \
      wormhole "$@"


>You basically need to just remember to never call python directly. Instead use uv run and uv pip install.

I don't understand why people would rather do this part specfically, rather than activate a venv.


Because node.js isn't a dependency of the Operating system.

Also we don't have a left pad scale dependency ecosystem that makes version conflicts such a pressing issue.


Oh, tell us the OS can’t venv itself a separate Python root and keep itself away from whatever users invent to manage deps. This is a non-explanation appealing to authority while it’s clearly just a mess lacking any thought. It just works like this.

> we don't have a left pad scale dependency ecosystem that makes version conflicts such a pressing issue

TensorFlow.


We have virtual envs and package isolation; it's usually bloated and third-party, and doesn't make for a good, robust OS base. It's more an app layer. See flatpak, snapcraft.

"Compares left pad with ml library backing the hottest AI companies of the cycle"


"Bloated" is an emotion, not a technical term. The related, most commonly known technical term is "DLL hell". And I absolutely love working with flatpak, because not once at my job have I had to think about it, apart from looking up which folder to back up.

Left pad was ten years ago. Today is 2025-01-13 and we are still discussing how good yet another python PM presumably is. Even a hottest ML library can only do so much in this situation.


>Oh, tell us OS can’t venv itself a separate python root and keep itself away from what user invents to manage deps.

I've had this thought too. But it's not without its downsides. Users would have to know about and activate that venv in order to, say, play with system-provided GTK bindings. And it doesn't solve the problem that the user may manage dependencies for more than one project, that don't play nice with each other. If everything has its own venv, then what is the "real" environment even for?


This. IME, JS devs rarely have much experience with an OS, let alone Linux, and forget that Python literally runs parts of the OS. You can’t just break it, because people might have critical scripts that depend on the current behavior.


I think it makes sense given that people using python to write applications are a minority of python users. It's mostly students, scientists, people with the word "analyst" in their title, etc. Perhaps this goes poorly in practice, but these users ostensibly have somebody else to lean on re: setting up their environments, and those people aren't developers either.

I have to imagine that the python maintainers listen for what the community needs and hear a thousand voices asking for a hundred different packaging strategies, and a million voices asking for the same language features. I can forgive them for prioritizing things the way they have.


I'm not sure I understand your point. Managing dependencies is easy in node. It seems to be harder in Python. What priority is being supported here?


Hot take: pnpm is the best dx, of all p/l dep toolchains, for devs who are operating regularly in many projects.

Get me the deps this project needs, get them fast, get them correctly, all with minimum hoops.

Cargo and deno toolchains are pretty good too.

Opam, gleam, mvn/gradle, stack, npm/yarn, nix even, pip/poetry/whatever-python-malarkey, go, composer, …what other stuff have I used in the past 12 months… C/C++ doesn’t really have a first-class standard other than global sys deps (so I’ll refer back to nix or OS package managers).

Getting the stuff you need where you need it is always doable. Some toolchains are just above and beyond, batteries included, ready for productivity.


Have you used bun? It's also great. Super fast


pnpm is the best for monorepos. I've tried yarn workspaces and npm's idea of it and nothing comes close to the DX of pnpm


What actually, as an end user, about pnpm is better than Yarn? I've never found an advantage with pnpm in all the times I've tried it. They seem very 1:1 to me, but Yarn edges it out thanks to it having a plugin system and its ability to automatically pull `@types/` packages when needed.


> automatically pull `@types/` packages when needed

Wait, what? Since when?


It's a plugin that's included by default in Yarn 4: https://yarnpkg.com/api/plugin-typescript

In Yarn 2/3, it needs to be added manually.


I swear I'm not trolling: what do you not like about modern golang's dep management (e.g. go.mod and go.sum)?

I agree that the old days of "there are 15 dep managers, good luck" was high chaos. And those who do cutesy shit like using "replace" in their go.mod[1] is sus but as far as dx $(go get) that caches by default in $XDG_CACHE_DIR and uses $GOPROXY I think is great

1: https://github.com/opentofu/opentofu/blob/v1.9.0/go.mod#L271


To be fair your specific example is due to… well forking terraform due to hashicorp licensing changes.


hcl is still MPLv2 https://github.com/hashicorp/hcl/blob/v2.23.0/LICENSE and my complaint is that the .go file has one import path but the compiler is going to secretly use a fork, versus updating the import path like a sane person. The only way anyone would know to check for why the compiled code behaves differently is to know that trickery was possible

And that's not even getting into this horseshit: https://github.com/opentofu/hcl/blob/v2.20.1/go.mod#L1 which apparently allows one to declare a repos _import_ path to be different from the url used to fetch it

I have a similar complaint about how in the world anyone would know how "gopkg.in/yaml.v3" secretly resolved to view-source:https://gopkg.in/yaml.v3?go-get=1


When was the last time you saw a NodeJS package that expects to be able to compile Fortran code at installation time?

If you want Numpy (one of the most popular Python packages) on a system that doesn't have a pre-built wheel, you'll need to do that. Which is why there are, by my count, 54 different pre-built wheels for Numpy 2.2.1.

And that's just the actual installation process. Package management isn't solved because people don't even agree on what that entails.

The only way you avoid "worry about modifying the global environment" is to have non-global environments. But the Python world is full of people who refuse to understand that concept. People would rather type `pip install suspicious-package --break-system-packages` than learn what a venv is. (And they'll sometimes do it with `sudo`, too, because of a cargo-cult belief that this somehow magically fixes things - spoilers: it's typically because the root user has different environment variables.)
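For anyone in that camp: a venv is nothing exotic. It's a directory with its own interpreter shim and site-packages, and the standard library can create one (this sketch skips pip with `with_pip=False` so it stays fast and offline):

```python
import os
import subprocess
import tempfile
import venv

# Create a bare venv in a temp directory.
env_dir = os.path.join(tempfile.mkdtemp(), "demo-env")
venv.create(env_dir, with_pip=False)

# The venv's interpreter reports its own prefix, which is why packages
# installed into it never touch the global environment.
bindir = "Scripts" if os.name == "nt" else "bin"
py = os.path.join(env_dir, bindir,
                  "python" + (".exe" if os.name == "nt" else ""))
out = subprocess.run([py, "-c", "import sys; print(sys.prefix)"],
                     capture_output=True, text=True)
print(out.stdout.strip())  # the venv directory, not the system prefix
```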

Which is why this thread happened on the Python forums https://discuss.python.org/t/the-most-popular-advice-on-the-... , and part of why the corresponding Stack Overflow question https://stackoverflow.com/questions/75608323 has 1.4 million views. Even though it's about an error message that was carefully crafted by the Debian team to tell you what to do instead.


It is kind of solved, but not default.

This makes a big difference. There is also the social problem that the Python community's opinions are too loud for a good, robust default solution to emerge.

But same has now happened for Node with npm, yarn and pnpm.


I wouldn't really say it's that black and white. It was only recently that many large libraries and tools recommended starting with "npm i -g ...". Of course you could avoid it if you knew better, but the same is true for Python.


How has NodeJS solved it? There are tons of version managers for Node.


Node hasn't solved this mess because it doesn't have the same mess.

It has a much more limited compiled-extensions ecosystem and plugin ecosystem, and it is not used as a system language on macOS and Linux.

And of course node is much more recent and the community less diverse.

tldr: node is playing in easy mode.



