
Yeah, I'm super confused by this. Getting a few thousand per second seems relatively trivial, even on an Arduino. Maybe there's something I'm missing here, or is this just the abstraction level that software lives at these days?


This limits simultaneous writes to the maximum number of open file handles supported by the OS. I don’t know what that limit is, but I don’t see how it can compare to multiple multiplexed TCP/IP sockets.

When you’re writing billions of messages per day, I don’t see how a file system scales.


Close the file handle if you aren’t actively processing the file.
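A minimal sketch of that pattern in Python (the function and argument names are illustrative, not from the thread): open the file only for the duration of the write, so no handle is held between messages.

```python
def append_message(path, msg):
    """Append one message, then release the handle immediately.

    Opening per write trades syscall overhead for a bounded
    number of simultaneously open file descriptors.
    """
    with open(path, "a") as f:
        f.write(msg + "\n")
```

The trade-off is extra open/close syscalls per message, which is exactly what the throughput discussion below is about.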


That still allows for perhaps 3000-4000 simultaneous writes, depending on how many file handles are in use by other processes.


On Linux, /proc/sys/fs/file-max has the maximum number of simultaneously open file handles supported by the kernel. On my laptop this is about 1.6 million.
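You can compare that kernel-wide cap with the per-process descriptor limit, which is usually far lower and is the one a single writer process actually hits first. A Linux-only sketch in Python:

```python
import resource

# Kernel-wide cap on open file handles (Linux-specific path;
# this file does not exist on other platforms).
with open("/proc/sys/fs/file-max") as f:
    system_max = int(f.read().strip())

# This process's own soft/hard descriptor limits (ulimit -n).
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)

print(f"kernel file-max: {system_max}")
print(f"per-process soft/hard limit: {soft}/{hard}")
```

The soft limit is often in the low thousands by default, which matches the 3000-4000 figure mentioned above.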


And mine is about twice that.

But also keep in mind that every process has, at minimum, its own executable open as a file. Picking a random Python process I currently have running, lsof reports 41 open *.so files.
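For a rough self-check without lsof, a process can count its own descriptors by listing /proc/self/fd (Linux-only; the exact count is a snapshot and includes the descriptor opened by the listing itself):

```python
import os

# Roughly what `lsof -p <pid>` tallies: one entry per open
# file descriptor belonging to this process.
open_fds = os.listdir("/proc/self/fd")
print(f"pid {os.getpid()}: {len(open_fds)} open fds")
```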


Yes, but it's also highly unlikely that, if you're trying to push transactions per second into that kind of range, you'd be doing it with an individual process per transaction. You'd also likely hit I/O limits long before the number of file descriptors becomes the issue.



