
It has been many years since I last created partitions on any SSD or HDD, because I believe they serve no useful purpose and just waste a part of the SSD/HDD.

I format the raw, unpartitioned SSD/HDD directly with a file system that uses 100% of the capacity, with no wasted sectors. At least on Linux and FreeBSD, there is no need for partitions.
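Concretely, that is as simple as something like this (the device name and the choice of XFS are just examples; it of course erases anything on the device):

    # create the file system directly on the whole, unpartitioned device
    mkfs.xfs /dev/sdb
    # it mounts like any other block device
    mount /dev/sdb /srv/data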

For booting the computers, I either boot them over Ethernet or from a small USB memory stick that uses a FAT file system for storing the OS kernel, either in the format required by UEFI booting or, when booting Linux in legacy BIOS mode, together with syslinux, which loads the kernel.



So your solution to not use partitions is to use multiple disks? You do understand that people invented partitions precisely because they wanted to use a single disk, right?

I am glad this setup works for you, but many people will not want to depend on a USB drive to boot their desktop, laptop, tablet, phone, et cetera.


For a desktop, this can be transparent for the user, because it can be booted via Ethernet or from a USB drive that is attached to the computer all the time, possibly to one of the internal USB type A connectors that exist on many motherboards precisely for this purpose.

Using a single disk with multiple partitions is less convenient than using a separate boot drive, because the separation makes it easier to reuse both the boot drives and the root drives in other computers, or to copy them onto drives of different sizes when migrating or cloning operating systems.

For a laptop that contains encrypted SSDs/HDDs, booting from a USB drive that is not normally kept with the laptop can improve security, because only in this case do no secret keys need to be stored on the encrypted drive. Even if the secret keys are themselves encrypted, they must be encrypted with a key derived from a password, which can make their decryption much easier than the decryption of a drive encrypted with a random key.


> For booting the computers, I either boot them over Ethernet or from a small USB memory stick that uses a FAT file system for storing the OS kernel, either in the format required by UEFI booting or, when booting Linux in legacy BIOS mode, together with syslinux, which loads the kernel.

That certainly works, but I'm pretty sure that moving booting off of your main disk is the only reason you can go without partitions, and I'm also pretty sure that most people don't want to deal with that.


You're talking about saving _at most_ 200MBish. That's a lot of work to maintain for little gain...


There is less work, not more work.


This sounds like work to me

> For booting the computers, I either boot them over Ethernet or from a small USB memory stick that uses a FAT file system for storing the OS kernel, either in the format required by UEFI booting or, when booting Linux in legacy BIOS mode, together with syslinux, which loads the kernel.

Creating boot USB drives (which I think need partitions, don't they?) or setting up a PXE boot server would take me a lot more effort than an extra minute with gdisk to create partitions before formatting the disk.


If the USB drives were bought formatted as FAT, which is almost always true for those smaller than 32 GB, they already have the required partition.

For booting with UEFI, you just need to create the directories with the names expected by the firmware. For legacy booting, you just need to install syslinux, which takes a second.
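For example, roughly like this (the device name and kernel path are placeholders; for the UEFI case the kernel must be built with EFI stub support and a built-in command line, since the firmware starts it with no arguments):

    # UEFI: the firmware's fallback loader path on removable media is EFI/BOOT/BOOTX64.EFI
    mount /dev/sdX1 /mnt/usb
    mkdir -p /mnt/usb/EFI/BOOT
    cp /boot/vmlinuz /mnt/usb/EFI/BOOT/BOOTX64.EFI
    umount /mnt/usb
    # legacy BIOS: install syslinux into the FAT file system instead
    syslinux --install /dev/sdX1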

Then the USB drive can be used to boot any computer, without any other work, for many years.

When you change the kernel, you just mount the USB drive (which is not mounted otherwise), copy the new kernel to it (possibly together with an initrd file), renaming it during the copy, and unmount it. That is all.
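For example (paths and the version number are only illustrative):

    mount /dev/sdX1 /mnt/usb
    # the copy renames the kernel to the fixed name the firmware looks for
    cp /boot/vmlinuz-6.8.1 /mnt/usb/EFI/BOOT/BOOTX64.EFI
    # the initrd name must match what the kernel command line references,
    # e.g. initrd=\EFI\BOOT\initrd.img
    cp /boot/initramfs-6.8.1.img /mnt/usb/EFI/BOOT/initrd.img
    umount /mnt/usb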

You can keep around a few USB drives with different kernel versions, and if an update does not go well, you just replace the USB drive with one having an older version.

Configuring a DHCP/TFTP server for Ethernet booting is done only once.
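For example, with dnsmasq providing both DHCP and TFTP, the one-time configuration can be as small as this (interface, address range and loader file names are placeholders):

    # /etc/dnsmasq.conf -- DHCP + TFTP for network booting
    interface=eth0
    dhcp-range=192.168.1.100,192.168.1.200,12h
    enable-tftp
    tftp-root=/srv/tftp
    dhcp-boot=pxelinux.0                         # BIOS clients load PXELINUX
    dhcp-match=set:efi64,option:client-arch,7    # detect x86-64 UEFI clients
    dhcp-boot=tag:efi64,bootx64.efi              # they get an EFI loader instead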

Adding extra computers may require copying a directory on the TFTP server, and even that only when the new computers have different hardware that requires different OS kernels.

Updating a kernel requires just copying a file into the TFTP server's directory, replacing the old kernel.

None of these operations requires more work than when using a boot partition on the root device.

There is less work because you make the boot USB drives or the DHCP/TFTP server only once, for many years or even decades, while you would need to partition an SSD/HDD every time you buy a new one to be used as the root device.


Yeah, I don't see it. I don't really have any investment in how you boot your machines, but I can't see this being anything but significantly more work than just using the tooling that's already there for you. When I buy a computer I set it up once and then it lasts 3-6 years. Even setting up the system you've described once would likely take me more time than I've spent adding partitions to disks in the last 20 years. Heck, that's probably true even if you include all the servers I've administered in that time as well as my personal machines, especially since those all ran Ubuntu or RedHat, where the installer just does it for me, vs my personal Arch machines.

I partition a new computer once every few years. I upgrade the kernel a few times a month. With the normal way that's a simple `pacman -Syu` or `apt-get dist-upgrade` and it's handled, no mounting thumb drives or sftp needed.


It's amazing what lengths people go to to justify their convictions without realizing the silliness. You've just described a convoluted setup with many drives and computers on a network, and you claim there is no more work.

That is more work for everybody, except for those who want a) a completely encrypted main disk booted from a portable, guarded USB drive, or b) a set of computers in a school or internet cafe with the boot process managed and updated efficiently over the network.

This system does provide new special capabilities, but it is not free. Meanwhile, most users are happy with defaults like a 1 GB partition holding the bootloader and kernel, which allows easy updates without worrying about having the right USB drive, it being in the right port, it being mounted at the right path, or losing it.


When I install a new OS I typically just use the guided installer, which creates the partitions automatically. This is usually the default too. I would actually have to go out of my way to set up the drive as a single unpartitioned file system, and then on top of that create a USB drive I'd always need to have on hand, which sounds like a tremendous PITA with a laptop.

If it works for you, great, but that is a LOT of extra work to regain less than 1% of the storage on a drive.


That is risky, since without a partition table, some operating systems and disk management tools will treat the disk as empty, making it easy to accidentally overwrite data.

> 100% of the capacity, with no wasted sectors.

You will never have that. SSDs have a large amount of reserved space, and even on HDDs, there are some reserved tracks for defect management.


By "some operating systems and disk management tools" you mean MS Windows and Windows tools.

Obviously, I do not use unpartitioned SSDs/HDDs with Windows. On the other hand, with Linux and *BSD systems they work perfectly fine, regardless of whether they are internal or removable.

For interchange with Windows, I use only USB drives or SSDs that are partitioned and formatted as exFAT. On the unpartitioned SSDs/HDDs I use file systems like XFS, UFS or ZFS, which could not be used with Windows anyway.

Any SSD/HDD that uses non-Windows file systems should never be inserted into a Windows computer, even when it is partitioned. When an SSD/HDD is partitioned, one may hope that Windows will not alter a partition marked as type 0x83 (Linux), but Windows might still destroy the partition table and the boot sector of a FAT partition. It happens frequently that a bootable Linux USB drive is damaged when it is inserted into a Windows computer, so that the boot loader must be reinstalled. So partitioning a USB drive or SSD does not protect it from Windows.

>> 100% of the capacity, with no wasted sectors.

> You will never have that.

I thought it was obvious that I meant 100% of the capacity available to users. There is no way to access the extra storage used by the drive controller, and also no reason to want to access it, because it has a different purpose than storing user data, so your "correction" is pointless.


Also, some dumb firmware may write to such disks; ASRock boards were reported in the past to do that.

EFI + boot partitions usually take less than 2 GB of space, and can be made as small as about 200 MB total, while mainstream disk capacity is hundreds of GB nowadays.

This "loss of useful space" is immaterial in most cases. Maybe if you have something like a 2GB drive from 1990s that you want to use (why?) then it makes sense to shave off 1G off that. But it is more work, as you have to buy, prepare and manage the USB drive.


There are more reasons to set it up more or less like that.

Think of an expensive, super fast but fairly small SSD, with some cheap, big mass storage (maybe even spinning rust) alongside it.

You'll likely try to use the expensive SSD as efficiently as possible. Every GB counts if you have "only", say, 0.5 TB.

A boot partition on such expensive and small (but fast) media is pure wastage.

Also, this kind of setup seems not so uncommon; I can say that I've done something similar. :-)

There are even more reasons. It makes things even simpler and less error prone:

The argument that you can swap disks more easily was already mentioned. But that's not all one gains.

SSDs are very prone to wearing out much quicker and losing at least half of their performance when you mess up the data alignment on them. With FDE on top of partitions (maybe even on top of LVM), the alignment issue isn't trivial. It's quite easy to mess up the alignment by mistake. You can read a lot of docs, try to find out details about the chips on your SSD, do calculations, yada yada, or you can just encrypt the raw device and use the whole disk without partitions. That's considerably simpler; nothing can go wrong.
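A minimal sketch of that simpler path (the device name, the choice of XFS, and the 4 KiB sector size are my assumptions, and this of course erases the device):

    # LUKS2 directly on the raw device -- no partition table, no LVM
    cryptsetup luksFormat --type luks2 --sector-size 4096 /dev/nvme0n1
    cryptsetup open /dev/nvme0n1 cryptroot
    # the file system starts at the LUKS data offset, which cryptsetup aligns itself
    mkfs.xfs /dev/mapper/cryptroot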


> Every GB counts if you have "only", say, 0.5 TB.

It doesn't. You can make the overhead partition take 200 MB. That's an immaterial fraction of 0.5 TB; you ain't gonna see the impact of this loss. Additionally, by partitioning the drive, you protect it from dumb programs that like to create partition tables.

Yes, there are reasons for not partitioning your OS disk, like full disk encryption. But it is more work.

> when you mess up the data alignment on them. With FDE on top of partitions (maybe even on top of LVM), the alignment issue isn't trivial.

This sounds interesting. What are these alignment issues? Why do you think they are present on a disk with partitions (I never had those issues), and why do you think they are not present on a disk without partitions (maybe they are, due to compression/encryption)?


If you would use only one partition anyway (because boot is elsewhere), having no partitions at all is not more but less work.

Alignment issues are only really relevant for SSDs. The FS blocks need to align with the "physical" blocks of the chips used. (Actually these are also only logical blocks, presented to you by the SSD controller, but at least that part is fully transparent.) If the alignment is messed up, the SSD needs to touch at least two "physical" blocks (as presented by the controller to the OS) when accessing a single FS block. This doubles the wear and halves the performance. (At least; in really unhappy scenarios it can even triple the access effort.)

Where exactly an FS block starts and ends in relation to the underlying "physical" block(s) depends on all the "headers" that sit in front of the FS blocks (logically sometimes a layer up, but physically that also just means "in front"). Partition tables are headers. LUKS headers are obviously also headers that need to be taken into account. LVM headers (and its blocks, groups and volumes) are one more layer to consider.

To make things more fun, as said, the "physical" blocks are only an abstraction presented by the controller. In some cases their size is configurable through the SSD controller firmware. (But this shouldn't be done without looking at the chips themselves.) The more interesting part is: the "physical" blocks can have "funny" sizes… (something with a factor of 3, for example). Documentation on this is frankly sparse…

The usual tools just assume some values that "work most of the time". But this whole problem is actually quite new. Older versions of all the related tools didn't know anything about SSD block alignment. (Like I said, they still don't know anything for sure; there is no way to know without looking at the docs and specs of the concrete device, but now at least they try to guess some "safe values", with a large margin.)

If you use partitions you'll end up with those "funny" offsets of a few MiB, which you have surely seen. (If you don't use such offsets, it's very likely that the alignment is wrong.)

Without partitions the other storage layers are much easier to align. You don't need to waste a few MiB around your partitions, and you especially don't need to remember (and maybe even recalculate) this stuff when changing something.

Not many people know about this whole dance, as misalignment isn't a fatal problem. It will just kill your SSD much quicker and halve the performance (at least). But SSDs are so fast that most people would not notice without doing benchmarks… (Benchmarking the different storage layers is actually the only way to test whether you got the alignment right.)
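Besides benchmarks, you can at least compare the partition starts against what the device itself reports (a sketch; the device name is a placeholder, and as said above, the reported values may not match the real chip geometry):

    cat /sys/block/nvme0n1/queue/physical_block_size   # what the controller claims
    cat /sys/block/nvme0n1/queue/optimal_io_size       # preferred I/O granularity, 0 if unreported
    parted /dev/nvme0n1 align-check optimal 1          # is partition 1 aligned to that?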

If you don't look into this yourself, you can only pray that all the tools used were aware of these issues and guessed values that happen to work properly with your hardware. But if you created partitions without the "safe" offsets (usually by setting values yourself instead of letting the tool choose its "best guess"), the alignment is quite likely wrong.

I came across this issue because I was wondering why Windows' fdisk always added seemingly "random" offsets around partitions it created. It turns out it's a safety measure. Newer Unix tools will do the same when using the proposed defaults.

TL;DR: If you don't create a partition table on an NVM device, you can just start your block layer directly at block zero and don't have to care about much, as long as you also set the logical block size of that layer to exactly the same value as the (probably firmware-configurable) "physical" block size of the device. If you have a (GPT) partition table in front (which, by the way, is of varying size, to make things even more fun), you need to add "safety offsets" to your partitions. Otherwise you're torturing your NVM device, resulting in severely crippled performance and lifetime.

I hope further details are now easy to google, in case anybody would like to know more about this issue.

---

> Additionally, by partitioning the drive, you protect it from dumb programs that like to create partition tables.

The better protection would be to keep drives far away from operating systems and their tools that are known to randomly shred data… ;-)


Thanks for the effort, but this is not very convincing. Is there any documented case where physical blocks have a size in bytes that is not some power of 2? I suspect if that exists, it is quite a rare device. Blocks of size 512 B, 4K, or 8K are the most common case, and correct alignment is completely taken care of by the 1 MiB offset which is standard and default in fdisk and similar tools on Linux. You mention "random" offsets with newer Unix tools - I have never encountered this. Any examples?


> Thanks for the effort, but this is not very convincing.

I've written this to shed light on the alignment issue, as I was under the impression that it would be something completely new to you. ("This sounds interesting. What are these alignment issues?")

> Is there any documented case where physical blocks have a size in bytes that is not some power of 2?

Yes, there are examples online. I did not make this up!

It was in fact a major WTF when I came across it…

> I suspect if that exists, it is quite a rare device.

Yep, that's for sure.

Also, the documentation on this is very sparse, as already mentioned.

I think it was the early triple-level-cell chips that had such crazy layouts. (I did not look it up again; maybe it was only a temporary quirk, but maybe it still exists, no clue.)

> Blocks of size 512 B, 4K, or 8K are the most common case, and correct alignment is completely taken care of by the 1 MiB offset which is standard and default in fdisk and similar tools on Linux.

Well, it depends.

Those thingies I've read about, with a factor of 3 in their block size, would need at least a 1.5 MiB offset… (The default 1 MiB offset would torture them to a quicker death, but most people would likely never find out.)

There are devices with much bigger (optimal) block sizes, I think in the MiB ballpark (I don't remember the details off the top of my head; I would need to look it up again myself). In such cases the 1 MiB offset would also not suffice.

Those devices usually ship in some compatibility mode in the factory settings, with much smaller blocks than optimal for maximal performance and least wear. You need to tell the firmware explicitly to switch the block size to get the best results (which is of course not possible after the fact, as it obviously shreds all data on the device).
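On NVMe devices this switch is typically done by reformatting the namespace to another LBA format with nvme-cli (a sketch; the format index is device-specific, and the command erases everything):

    nvme id-ns /dev/nvme0n1 | grep lbaf   # list the LBA formats the drive supports
    nvme format /dev/nvme0n1 --lbaf=1     # e.g. switch from 512 B to 4096 B sectors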

Also, it's not only the offsets around the partitions. You need to take the block sizes into account for the block layers "inside" the partitions too. Which was actually my point: this makes things more complicated than strictly needed.

> You mention "random" offsets with newer Unix tools - I have never encountered this. Any examples?

By "random" I've meant that the offsets appear seemingly random when you don't know the underlying issue. It's not only the one offset after the partition table. Depending how large the partitions are there may be or may not be additional offsets around the partitions themself.

Of course all this is not rocket science. We're talking about simple calculations. But that's just one more thing to be aware of.

My conclusion from that journey back then was: just screw it, and don't add partitions to the mix if you don't strictly need them. One thing less to care about!

For example, the laptop I'm writing this on has two NVM devices. The bigger and faster one is used as the (encrypted) root FS, without any partitions on it; the other, smaller and slower one carries the EFI partition and an (encrypted) data partition. Having partitions on the root disk would not give me any advantages, just additional stuff to think about. So why should I do that? OTOH, I need an EFI partition to boot, so I created one on the other disk. I think this is a very pragmatic solution. Just don't add anything that you don't need. <insert Saint-Exupéry quote about perfection here>


Alright that makes sense.


I have taken this approach for secondary drives where I want to use the entire drive as a big filesystem for data.

For the system disk, though, I have always partitioned it. I generally create at least /, /var, /home, and /usr. That way it's less likely that a runaway process can fill up the entire disk; at worst it might fill up /home or /var.

And unless I'm really space-constrained, I'll leave some unpartitioned space as well, for later flexibility.


That is an excellent application of partitions, but it's better done with LVM, so you can change the volumes' sizes easily. You should be able to put LVM on the whole disk.
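For example (disk name and sizes are invented, and this wipes the disk):

    pvcreate /dev/sdb                  # LVM directly on the whole disk, no partition table
    vgcreate vg0 /dev/sdb
    lvcreate -L 30G -n root vg0
    lvcreate -L 20G -n var vg0
    lvcreate -L 200G -n home vg0       # leave the rest of the VG free for later
    mkfs.xfs /dev/vg0/root
    # grow a volume and its file system later in one step:
    lvextend -r -L +50G /dev/vg0/home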


You can still do BIOS boot on disks without partitions. One huge advantage of "legacy" boot is that it can work filesystem-agnostic, avoiding any secondary FS implementations in the firmware or in the bootloader.

And if you go that far, you can throw out any FS kmod from the initrd except for what you need for your root partition. Including vfat.



