The appendix includes additional details and FAQ-style answers that did not fit into the VLDB version.
user01815-2 2 hours ago [-]
This seems to miss a reference to Zoned XFS, which is the Linux file system work that actually looked into this kind of data placement at the file system layer. The paper includes numbers using RocksDB: https://dl.acm.org/doi/10.1145/3725783.3764399
The paper shows a thorough analysis of write amplification and slowdown/wear with large databases (800 GB) on a single machine. The databases are MySQL and Postgres.
As already commented, this can lead to an optimized storage table format for greater performance. Nice!
I would expect that a similar analysis could be done for SQLite, maybe with a different dataset and a single write thread.
Dwedit 9 hours ago [-]
SMR hard drives have very different rules about how you should access them vs conventional hard drives or SSDs. I wonder how much optimizing for SMR drives (big sequential writes) would also optimize for other drive types.
jauntywundrkind 8 hours ago [-]
The zoned-storage people (of whom the shingled folks were a subset) seemed pretty OK with the FDP (Flexible Data Placement, TP4146b) scheme that finally finally finally got hammered out for NVMe 2.1 (August 2024). It was designed to satisfy the open-channel flash people as well.
It's a fairly simple concept that gives you some write affinity: when writing, you can declare that a write should be associated with other writes carrying the same FDP placement identifier, a form of tagging.
I'm not fully convinced this is as good as what the open-channel flash people wanted. But drive manufacturers were never voluntarily going to give up their really complex Flash Translation Layers. They all want to be value-add, with their expensive fancy controllers keeping the market from commoditizing into just using NAND directly. But FDP does show some very real promise and can have huge read/write-affinity bonuses!
I note that the SSDFS filesystem is still out there being improved and maintained, as a file system that tries to take advantage of all this. I'm not sure if it's made the jump to using FDP or still targets the older, much more ornery, never-quite-loved ZNS specification. I'd love to give it a try, but FDP and ZNS drives are not easy to get ahold of: they require asking very nicely, and when I last checked they meant buying very expensive enterprise SSDs that cost a ton but had pretty so-so performance figures. That was a couple years ago now.
https://www.phoronix.com/news/Linux-SSDFS-NVMe-ZNS-SSDs
https://news.ycombinator.com/item?id=34939248
The paper here is wonderful and beautiful. FDP should make this kind of thing so much easier and should remove many of the downsides of drive usage mentioned here. If only it were available. I'd really love it if drive reviewers would look at and comment on the feature matrix drives have, and comment on FDP, but generally it feels like there's no ask, little pull and thus no push, for an obvious and basically free-to-implement improvement that makes everything vastly better. Alas. Can't wait. Hopefully drive prices are better by 2031 and FDP is finally available. Fingers crossed.
itsthecourier 9 hours ago [-]
This is the kind of research that creates new DB types, or a super-optimized Postgres; I'm not sure yet.
jandrewrogers 7 hours ago [-]
This paper gives a really nice end-to-end treatment of an entire problem domain that is usually taken piecemeal. Almost all of the techniques mentioned are already used in databases in some form. It won't lead to new database types but it provides a framework for thinking about the write amplification problem.
Not every database architecture will be able to easily take advantage of all these techniques. Some designs are much more easily optimizable than others.
melhindi 1 hour ago [-]
To add to that: some of the techniques are well known to storage experts, but not yet widespread among database engineers.
The paper does a great job of explaining the effects on database systems. Great work!
vetrom 8 hours ago [-]
It can be both; Postgres has pluggable storage engines. See any of the numerous columnar or sharding extensions for Postgres for examples of prior art.
dofi4ka 10 hours ago [-]
I felt fooled after clicking the link and seeing this PDF downloading (or just literally writing to my SSD), until I realized that this is the point.
The extended version is available on arXiv if you’d like more details: https://arxiv.org/pdf/2603.09927