```
NAME          FSTYPE      FSVER    LABEL UUID                                   FSAVAIL FSUSE% MOUNTPOINT
└─lukslvm     LVM2_member LVM2 001       wLR7wJ-BpXT-21JR-yG7Z-hpsZ-4yuv-TXWbNn
└─sdi5        crypto_LUKS 2              17bc4279-1c49-4f41-b0e7-3e3a152ca389
```

sda + sdb are my system disks, booting from GRUB (part1). Part2 is reserved for an ESP but never used, and hard to switch to anyway, because proxmox-boot-tool wants the ESP to be 512MB and I only partitioned it with 300MB or so. Part3 is my boot partition on mdadm raid1 (could also be bigger). Part4 is mdadm raid1 -> LUKS encryption -> LVM -> LVs for swap and root.

Sdc, sdd, sde, sdf and sdg form an encrypted raidz1 pool for my guests (not great for IOPS, but fast enough for my workloads, with more capacity and less SSD wear compared to a striped mirror).

Sdh is LUKS encryption -> LVM -> LVM-Thin for my guests that do heavy sync writes on just unimportant data (Zabbix with MySQL for monitoring, Graylog with MongoDB + Elasticsearch for log collection, blockchain DBs using SQLite). Those would wear the SSDs too hard if I ran them on ZFS; running them on LVM-Thin instead saves about 400GB of writes per day.

Sdi is a LUKS-encrypted Debian USB stick that I boot to back up my whole sda and sdb on block level to my PBS.

Not the best setup, but it's what I did back then. Today I would use an encrypted ZFS mirror with systemd-boot for the system disks, and 4x 400GB SSDs in a ZFS striped mirror or 2x 800GB SSDs in a ZFS mirror to store my guests, instead of the 5x 200GB in a raidz1 I'm using now.

Yup, I did 155 benchmarks in this thread. And the SSD wear of ZFS is really bad. But I would still keep that LVM-Thin for unimportant guests (though maybe mirror it with mdadm raid1), because the write amplification of ZFS (an average of factor 20 here) hits the SSDs too hard. The forced volblocksize of 32K with raidz1 is just too annoying.

Just compare the "Write Amplification Total" between ZFS and LVM-Thin: ZFS got a factor of 35 to 62, LVM-Thin just a factor of 8. That means writing 1TB of 4K sync writes wears the SSD by 35 to 62 TB on ZFS, but only by 8TB on LVM-Thin. So SSDs used with ZFS will die 3 to 6 times faster than with LVM-Thin - at least when doing small random sync writes. With big sequential async writes it is still a lot, but way less write amplification.

And that is all with enterprise SSDs made for write-intense workloads, with MLC NAND. I don't want to know how badly consumer QLC SSDs without power-loss protection would wear when running these tests. That's why I moved my unimportant guests from ZFS to LVM-Thin.
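If you want to get a rough write-amplification number on your own hardware, here is a minimal sketch of the idea behind these benchmarks (not the exact script from the 155 runs): compare what the host writes against the SSD's NAND-write counter. The device paths and the SMART attribute name are assumptions; check `smartctl -A` for what your drive actually reports.

```bash
#!/bin/bash
# Sketch: compare host writes with what the SSD actually writes to NAND.
# /dev/sdc and the "NAND_Writes" attribute name are assumptions - the
# attribute is vendor-specific (e.g. Total_LBAs_Written on other models).
DEV=/dev/sdc
TARGET=/dev/zvol/tank/wa-test   # zvol (ZFS) or thin LV (LVM-Thin) to test

nand_writes() { smartctl -A "$DEV" | awk '/NAND_Writes/ {print $NF}'; }

before=$(nand_writes)

# 1 GiB of 4K random sync writes - the worst case from the numbers above
fio --name=wa-test --filename="$TARGET" --rw=randwrite --bs=4k \
    --ioengine=psync --direct=1 --sync=1 --size=1G --iodepth=1

after=$(nand_writes)

# If the attribute counts GiB, this difference is the write amplification
# factor for 1 GiB of host writes (35-62 on the raidz1 here, ~8 on LVM-Thin)
echo "NAND GiB written for 1 GiB of host writes: $(( after - before ))"
```

Running the same fio job once against a zvol and once against a thin LV on identical disks is basically what the comparison above boils down to.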
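For reference, the sdh stack (LUKS -> LVM -> LVM-Thin) could be built roughly like this. This is a sketch: the mapper name matches the lsblk output above, but the VG, pool and storage names are made up.

```bash
# One-time setup of a LUKS -> LVM -> LVM-Thin stack on a single SSD
cryptsetup luksFormat /dev/sdh
cryptsetup open /dev/sdh lukslvm            # -> /dev/mapper/lukslvm

pvcreate /dev/mapper/lukslvm
vgcreate vgthin /dev/mapper/lukslvm

# Thin pool over most of the VG (leave some room for metadata growth)
lvcreate --type thin-pool -l 90%FREE -n thinpool vgthin

# Register it as a Proxmox storage for the unimportant guests
pvesm add lvmthin unimportant --vgname vgthin --thinpool thinpool
```

To mirror it as suggested, you would put an mdadm raid1 under the LUKS layer, the same way part4 of the system disks is built.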
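The "what I would do today" pool could look like the sketch below, assuming four SSDs and made-up pool/storage names. The blocksize option is where the volblocksize pain of raidz1 goes away: on a mirror the zvols can keep a small volblocksize, while raidz1 needs a larger one (32K here) to avoid padding overhead.

```bash
# Encrypted ZFS striped mirror of 4 SSDs (use stable /dev/disk/by-id paths)
zpool create -o ashift=12 tank \
    mirror /dev/disk/by-id/ssd1 /dev/disk/by-id/ssd2 \
    mirror /dev/disk/by-id/ssd3 /dev/disk/by-id/ssd4

# Native ZFS encryption for the guest datasets
zfs create -o encryption=on -o keyformat=passphrase tank/guests

# Proxmox sets the volblocksize for new zvols per storage
pvesm add zfspool guests --pool tank/guests --blocksize 8k --sparse
```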
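Finally, the block-level backup from the sdi USB stick could look like this; a sketch with a made-up repository and backup ID, relying on proxmox-backup-client's ability to read `.img` archives straight from block devices.

```bash
# Booted from the Debian USB stick: back up both system disks, block-level,
# to PBS. Repository, user and datastore names are placeholders.
export PBS_REPOSITORY='backup@pbs@pbs.example.com:datastore1'

proxmox-backup-client backup \
    sda.img:/dev/sda \
    sdb.img:/dev/sdb \
    --backup-type host --backup-id pve-systemdisks
```

Since the disks are read below the LUKS layer, the LUKS partitions land in the backup as ciphertext, so the data stays encrypted at rest without PBS ever seeing the keys.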