2

Having grown up with Linux and FreeBSD distributions on CD, I learned to give each directory like /usr, /var, /opt, and so on its own partition. One of the holy rules was that / must be protected from running full.

Nowadays I very often see systems where even /tmp or /var is on the root partition, and systems failing because one application has filled all the available space, e.g. below /opt or /tmp.

So, why do a lot of administrators use only one partition instead of a more sophisticated approach to partitioning? Did I miss something in the last decade?

Oliver F.
  • 145
  • I guess one of the main factors is that people stick with whatever configuration their (cloud) VM was initially set up with, and those usually come with only one partition. – Henrik Pingel Aug 15 '20 at 13:44
  • Because we don't have comparatively tiny disks anymore. The whole idea of partitioning like this came about when hard drives were measured in megabytes -- or even less! – Michael Hampton Aug 15 '20 at 15:07

4 Answers

2

I've also noticed the tendency you mentioned, yet the answers will be opinionated.

In the past we had physical machines with physical disks: It was difficult to make changes, and it required downtime. Nowadays, thanks to Logical Volume Managers and Virtual Machines, it's very easy to expand a disk:

  • All file systems commonly used on server systems support online expansion.
  • RAID configurations are handled on the storage side and are transparent to the connected system.
  • Virtual machines have virtual disks that can be easily expanded.
  • Physical servers connected to storage systems are, in essence, also presented with a virtual disk.

Therefore, many admins go for a simpler (or simplistic!) setup, with a single partition (or two, with only the data separated), and rely on monitoring tools to warn them about low free space, which can then easily be expanded.
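
For example, a live expansion of an LVM-backed root file system could look roughly like this (the device and volume names /dev/sda2 and vg0/root are just placeholders):

    # after growing the virtual disk on the hypervisor or storage side:
    pvresize /dev/sda2                   # let LVM see the enlarged physical volume
    lvextend -r -L +20G /dev/vg0/root    # -r also grows the file system, online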

Krackout
  • 1,575
2

Simple partitioning is the default in many environments: it is what popular cloud OS images ship with, it is the default scheme in most installers, and it keeps automation simple because there is only one disk to find. It works fine; disks can often be extended online to enormous sizes, and cloud-init will extend the file system for you.
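
For illustration, assuming a typical cloud image with a single ext4 root partition on /dev/sda1 (names are just an example), growing it after a disk resize is only two steps, which cloud-init normally performs on its own:

    growpart /dev/sda 1     # extend the partition into the newly added disk space
    resize2fs /dev/sda1     # grow the ext4 file system while it stays mounted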

Until it gets messy. Occasionally people bring problems to Server Fault about instances failing because / is completely full. After the usual things like log file purges, they are left wondering how to reduce the size of / and prevent it from happening again. That is tricky: re-partitioning can't really be done online, shrinking a file system requires unmounting it and thus booting a rescue environment, and XFS on Linux can't be shrunk at all.
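
A quick, generic way to see what is actually eating the root file system before deciding anything (nothing distribution-specific here):

    df -h /                              # how full is / right now
    du -xh --max-depth=1 / | sort -h     # largest top-level directories, same file system only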

My ideal Linux storage setup is a small disk for boot and OS, and separate disks for application storage. All LVM, and leaving some free space on the VG for future needs. For example, on a database server, boot from /dev/sda1, but the data at /var/lib/pgsql/ is stored on a different VG on PV /dev/sdb. A scheme like this allows data and OS to be restored separately, and neat tricks like creating a new VM instance but moving over the same data volume. Probably too complicated for a simple application instance without a lot of state, and thus simple storage requirements. But still possible.
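
A rough sketch of that layout, assuming a second disk /dev/sdb and a hypothetical volume group named vg_data (sizes and names are only illustrative):

    pvcreate /dev/sdb
    vgcreate vg_data /dev/sdb
    lvcreate -n pgsql -L 100G vg_data        # leave free space in the VG for later growth
    mkfs.xfs /dev/vg_data/pgsql
    mount /dev/vg_data/pgsql /var/lib/pgsql  # plus a matching /etc/fstab entry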

John Mahowald
  • 33,256
  • 2
  • 21
  • 39
-1

One of the reasons might be that nowadays you can easily upgrade OS versions without reformatting the partitions.

I still create a dedicated partition for /home, but there's probably no real reason for it any more.

kofemann
  • 4,866
-1

"Keep it simple" is a pretty common design principle. Today's VMs are easy to expand, unlike physical servers years ago.