i was smart once
I’m currently on a tangent within fat.handler. I started to rip out the caching, but because I hoped to add it back in later in some form, I started making everything that needed disk data call a single function that gets the block from disk. Once I had that, though, it really just seemed easier to implement a basic cache right now, one that could be expanded into something suitable for use by all filesystems later.
My basic requirement was that at all times the cache knows exactly what the contents of a block are and whether they need to be written out or not. For this reason, I’ve decided to go with a model where there is only ever a single copy of a block in memory, each carrying a reference count. That way, if one part of the code modifies a block, other parts of the code will see those changes rather than working from an out-of-date copy, and at all times the cache knows what’s going on.
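In rough terms, the model looks something like this (a sketch only; the names are mine, not anything from the actual handler code):

```c
#include <stdbool.h>
#include <stdint.h>

/* One cached_block exists per on-disk block; every part of the code
 * that wants the block shares this single copy. */
struct cached_block {
    uint32_t  block_num;  /* which disk block this is */
    void     *data;       /* the block's contents, block_size bytes */
    int32_t   ref_count;  /* how many callers currently hold the block */
    bool      dirty;      /* modified in memory, needs writing out */
};
```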
The cache will maintain a list of dirty blocks and write them out at regular intervals based on some (configurable) criteria. Basically, it’ll do all the hard stuff. The filesystem should just be able to say at any time “I need block X” and the cache will sort it out.
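From the filesystem’s point of view, the whole interface could then be as small as this (hypothetical names, just to show the shape of the “I need block X” idea):

```c
#include <stdint.h>

void *get_block(uint32_t block_num);        /* return the block, reading it from
                                             * disk if needed; bumps the ref count */
void  put_block(uint32_t block_num);        /* caller is done; drops the ref count */
void  mark_block_dirty(uint32_t block_num); /* block was modified; the cache will
                                             * write it out later */
void  flush_blocks(void);                   /* write all dirty blocks out; the cache
                                             * calls this itself at intervals */
```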
To do this I need an efficient data structure to store the blocks. My first thought was a kind of hashtable without the hashing bit: just modulus/bitmask the block number to pick a slot. We threw it around the office over lunch and did the maths, and it turned out that the overhead would be huge (to keep collisions down, a table like that has to be much bigger than the number of blocks actually cached, so most of it sits empty). B-trees (specifically, B+trees) looked to be the way forward, so I spent quite a bit of time trying to implement one.
I used to be a CS major, but for some reason I just can’t work with proper algorithms and data structures, only wacky hacks. I still haven’t been able to make my B-tree work, but thinking about it further, I realised that a flat array and a binary search will actually do just as good a job in this case. B-trees really shine when the nodes are stored somewhere that is slow to get at (on disk); when it’s all in memory, their advantages are much reduced.
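The lookup itself is just the textbook thing. Something like this, under my own naming (not the handler’s code):

```c
#include <stddef.h>
#include <stdint.h>

struct cache_entry {
    uint32_t  block_num;  /* the key */
    void     *block;      /* pointer to the cached block */
};

/* Binary search over an array of entries sorted by block_num.
 * Returns the matching entry, or NULL if the block isn't cached. */
static struct cache_entry *
find_entry(struct cache_entry *entries, size_t count, uint32_t block_num)
{
    size_t lo = 0, hi = count;
    while (lo < hi) {
        size_t mid = lo + (hi - lo) / 2;
        if (entries[mid].block_num == block_num)
            return &entries[mid];
        if (entries[mid].block_num < block_num)
            lo = mid + 1;
        else
            hi = mid;
    }
    return NULL;
}
```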
Again, my brain conspires against me. It took me about three hours to implement a basic binary search over an array. I’m sorely disappointed in myself; this stuff is supposed to be child’s play. At least it works. The basic stuff is in, with reference counting and all the rest. The array is currently eight bytes per entry: four for the key int, four for the data pointer. That may go up to twelve if I end up needing a 64-bit type for the key, but the overhead is still minimal. Entries get allocated in chunks (the chunk size will probably end up configurable), and the array grows (and will probably shrink) as necessary.
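The insertion side looks roughly like this, keeping the array sorted and growing it a chunk at a time (again a sketch; the chunk size and the realloc strategy are my assumptions, not settled decisions):

```c
#include <stdlib.h>
#include <string.h>
#include <stdint.h>

#define CHUNK_SIZE 128  /* entries per allocation chunk; assumed value */

struct cache_entry { uint32_t block_num; void *block; };  /* as above */

/* Insert a new entry, keeping the array sorted by block_num.
 * Grows the backing array by a whole chunk when it fills up. */
static int
insert_entry(struct cache_entry **entries, size_t *count, size_t *capacity,
             uint32_t block_num, void *block)
{
    if (*count == *capacity) {
        size_t new_capacity = *capacity + CHUNK_SIZE;
        struct cache_entry *grown =
            realloc(*entries, new_capacity * sizeof(struct cache_entry));
        if (grown == NULL)
            return -1;
        *entries = grown;
        *capacity = new_capacity;
    }

    /* Find the insertion point: the first entry with a larger key.
     * (A binary search would do here too; linear keeps the sketch short.) */
    size_t pos = 0;
    while (pos < *count && (*entries)[pos].block_num < block_num)
        pos++;

    /* Shift the tail up one slot and drop the new entry in. */
    memmove(&(*entries)[pos + 1], &(*entries)[pos],
            (*count - pos) * sizeof(struct cache_entry));
    (*entries)[pos].block_num = block_num;
    (*entries)[pos].block = block;
    (*count)++;
    return 0;
}
```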
Tomorrow I’ll start on actually loading the blocks from disk. After that, I should be able to start refactoring the handler code properly.