Simple partitioning is the default in many environments: popular cloud OS images ship with it, installers default to a single-partition scheme, and it simplifies automation because there is only one disk to find. It works fine, too; disks can often be extended to enormous sizes online, and cloud-init will grow the partition and file system for you.
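For reference, this is roughly what that online grow looks like if you do it by hand rather than letting cloud-init handle it. A minimal sketch, assuming the root file system lives on /dev/sda1; adjust device names for your instance.

```shell
# Grow partition 1 of /dev/sda to fill the newly enlarged disk.
# growpart comes from the cloud-utils package; cloud-init's growpart
# module runs the same operation automatically on boot.
growpart /dev/sda 1

# Then grow the file system online; use the command for your FS type.
resize2fs /dev/sda1        # ext4: takes the block device
xfs_growfs /               # XFS: takes the mount point instead
```

Both file system commands work on a mounted file system, which is why growing needs no downtime.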
Until it gets messy. People occasionally bring instances to Server Fault that are failing because / is completely full. After the usual remedies, like purging log files, they are left wondering how to shrink / and prevent it from happening again. That is tricky: re-partitioning can't really be done online, shrinking a file system requires unmounting it and thus booting a rescue environment, and XFS on Linux can't be shrunk at all.
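The asymmetry is worth spelling out. Growing is an online, one-liner operation; shrinking is not. A sketch, with example device names and a hypothetical target size:

```shell
# First check what file system / actually is.
lsblk -f

# ext4 can be shrunk, but only offline: boot a rescue environment,
# check the file system, then resize it down.
e2fsck -f /dev/sda1
resize2fs /dev/sda1 20G    # 20G is an example target size

# XFS has no shrink operation; xfs_growfs only grows. The only way
# down is backup, re-create smaller, restore, e.g. with
# xfsdump / mkfs.xfs / xfsrestore.
```

After shrinking the file system you would still need to shrink the partition to match, which is its own error-prone step.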
My ideal Linux storage setup is a small disk for boot and OS, plus separate disks for application storage. Everything on LVM, with some free space left in the VG for future needs. On a database server, for example, boot from /dev/sda1, but store the data at /var/lib/pgsql/ on a different VG whose PV is /dev/sdb. A scheme like this allows the OS and the data to be restored separately, and enables neat tricks like creating a new VM instance and moving the same data volume over to it. It is probably too complicated for a simple application instance without a lot of state, and thus with simple storage requirements. But still possible.
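The database-server layout above can be sketched as follows. The VG and LV names are examples I chose for illustration, not anything mandated; sizes are placeholders.

```shell
# Put the second disk under LVM and create a dedicated data VG.
pvcreate /dev/sdb
vgcreate vg_data /dev/sdb

# Carve out a logical volume, deliberately leaving free space in the
# VG for future growth or snapshots.
lvcreate -L 80G -n lv_pgsql vg_data

# File system and mount for the PostgreSQL data directory.
mkfs.xfs /dev/vg_data/lv_pgsql
mkdir -p /var/lib/pgsql
mount /dev/vg_data/lv_pgsql /var/lib/pgsql
# Add a matching /etc/fstab entry to make the mount persistent.

# Later, growing is a single online operation, no downtime:
lvextend -L +20G /dev/vg_data/lv_pgsql
xfs_growfs /var/lib/pgsql
```

Keeping free extents in the VG is what makes lvextend painless later; allocating 100% up front gives away that flexibility for nothing.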