Outdated egg!
This is an egg for CHICKEN 4, the unsupported old release. You're almost certainly looking for the CHICKEN 5 version of this egg, if it exists.
If it does not exist, there may be equivalent functionality provided by another egg; have a look at the egg index. Otherwise, please consider porting this egg to the current version of CHICKEN.
level
Description
Provides a high-level API to leveldb implementations. Use in combination with an implementation egg (eg, leveldb).
Interface API
This module exposes an interface, which other eggs provide implementations of. The API described below is what the interface provides.
Basic read and write
- db-get db key (procedure)
Returns the value of key in db as a string. Raises an exception if the key does not exist.
- db-get/default db key default (procedure)
Same as db-get but returns default on missing keys instead of raising an exception.
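For example (a minimal sketch, assuming db is an open database handle obtained from an implementation egg such as leveldb, and the key names are hypothetical):

```scheme
(db-put db "greeting" "hello")
(db-get db "greeting")                  ;; => "hello"
(db-get/default db "no-such-key" "n/a") ;; => "n/a"
```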
- db-put db key value #!key (sync #f) (procedure)
Stores value under key in database db. The sync option can be set to #t to make the write operation not return until the data being written has been pushed all the way to persistent storage. See the Synchronous Writes section for more information.
- db-delete db key #!key (sync #f) (procedure)
Removes the value associated with key from db. The sync option can be set to #t to make the write operation not return until the data being written has been pushed all the way to persistent storage. See the Synchronous Writes section for more information.
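For example, a durable write followed by a durable delete (a sketch, assuming db is an open database handle and the key name is hypothetical):

```scheme
;; neither call returns until the data has reached persistent storage
(db-put db "session" "active" sync: #t)
(db-delete db "session" sync: #t)
```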
Atomic updates (batches)
- db-batch db ops #!key (sync #f) (procedure)
When making multiple changes that rely on each other you can apply a batch of operations atomically using db-batch. The ops argument is a list of operations which will be applied in order (meaning you can create then later delete a value in the same batch, for example).
```scheme
(define myops '((put "abc" "123")
                (put "def" "456")
                (delete "abc")))

;; apply all operations in myops
(db-batch db myops)
```
The first item in an operation should be the symbol put or delete; any other value will give an error. The next item is the key, and in the case of put the third item is the value.
Apart from its atomicity benefits, db-batch may also be used to speed up bulk updates by placing lots of individual mutations into the same batch.
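As a sketch of such a bulk update (the alist->batch helper is hypothetical, not part of this egg):

```scheme
;; turn an association list into a list of put operations
(define (alist->batch alist)
  (map (lambda (pair) (list 'put (car pair) (cdr pair))) alist))

;; apply all the puts in a single atomic batch
(db-batch db (alist->batch '(("a" . "1") ("b" . "2") ("c" . "3"))))
```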
Range queries (streams)
- db-keys db #!key start end limit reverse fillcache (procedure)
Allows forward and backward iteration over the keys in alphabetical order. Returns a lazy sequence of all keys from start to end (up to limit). This uses the lazy-seq egg.
- start - the key to start from (need not actually exist), if omitted starts from the first key in the database
- end - the key to end on (need not actually exist), if omitted ends on the last key in the database
- limit - stops after limit results have been returned
- reverse - iterates backwards through the keys (reverse iteration may be somewhat slower than forward iteration)
- fillcache - whether to fill leveldb's read cache when reading (turned off by default so the bulk read does not replace most of the cached contents)
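For example (a sketch, assuming db is an open database handle with some keys in it):

```scheme
;; the last 10 keys in the database, iterated backwards
(lazy-seq->list (db-keys db reverse: #t limit: 10))
```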
- db-values db #!key start end limit reverse fillcache (procedure)
Allows forward and backward iteration over the keys in alphabetical order. Returns a lazy sequence of all values from start to end (up to limit). This uses the lazy-seq egg.
- start - the key to start from (need not actually exist), if omitted starts from the first key in the database
- end - the key to end on (need not actually exist), if omitted ends on the last key in the database
- limit - stops after limit results have been returned
- reverse - iterates backwards through the keys (reverse iteration may be somewhat slower than forward iteration)
- fillcache - whether to fill leveldb's read cache when reading (turned off by default so the bulk read does not replace most of the cached contents)
- db-pairs db #!key start end limit reverse fillcache (procedure)
Allows forward and backward iteration over the keys in alphabetical order. Returns a lazy sequence of all key/value pairs from start to end (up to limit). This uses the lazy-seq egg.
- start - the key to start from (need not actually exist), if omitted starts from the first key in the database
- end - the key to end on (need not actually exist), if omitted ends on the last key in the database
- limit - stops after limit results have been returned
- reverse - iterates backwards through the keys (reverse iteration may be somewhat slower than forward iteration)
- fillcache - whether to fill leveldb's read cache when reading (turned off by default so the bulk read does not replace most of the cached contents)
Stream Examples
```scheme
(lazy-map display (db-pairs db start: "foo:" end: "foo::" limit: 10))
```
You can turn the lazy-seq into a list using lazy-seq->list, just be warned that it will evaluate the entire key range and should be avoided unless you know the number of values is small (eg, when using limit).
```scheme
(db-batch db '((put "foo" "1") (put "bar" "2") (put "baz" "3")))

(lazy-seq->list (db-pairs db limit: 2)) ;; => (("bar" . "2") ("baz" . "3"))
(lazy-seq->list (db-values db))         ;; => ("2" "3" "1")
(lazy-seq->list (db-keys db))           ;; => ("bar" "baz" "foo")
```

Note that iteration is in alphabetical key order ("bar", "baz", "foo"), not insertion order, and db-pairs returns key/value combinations as pairs (see the changelog).
Synchronous Writes
Note: this information is mostly copied from the LevelDB docs
By default, each write to leveldb is asynchronous: it returns after pushing the write from the process into the operating system. The transfer from operating system memory to the underlying persistent storage happens asynchronously. The sync flag can be turned on for a particular write to make the write operation not return until the data being written has been pushed all the way to persistent storage. (On Posix systems, this is implemented by calling either fsync(...) or fdatasync(...) or msync(..., MS_SYNC) before the write operation returns.)
Asynchronous writes are often more than a thousand times as fast as synchronous writes. The downside of asynchronous writes is that a crash of the machine may cause the last few updates to be lost. Note that a crash of just the writing process (i.e., not a reboot) will not cause any loss since even when sync is false, an update is pushed from the process memory into the operating system before it is considered done.
db-batch provides an alternative to asynchronous writes. Multiple updates may be placed in the same batch and applied together with sync: #t. The extra cost of the synchronous write will be amortized across all of the writes in the batch.
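For example, the three puts below cost roughly one synchronous write in total (a sketch, assuming db is an open database handle):

```scheme
(db-batch db
          '((put "a" "1")
            (put "b" "2")
            (put "c" "3"))
          sync: #t)
```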
Creating an interface
If you want to provide your own storage implementation, import this egg and implement the interface as follows:
```scheme
(use level)

(define myleveldb
  (implementation level-api
    (define (level-get db key) ...)
    (define (level-get/default db key default) ...)
    (define (level-put db key value #!key (sync #f)) ...)
    (define (level-delete db key #!key (sync #f)) ...)
    (define (level-batch db ops #!key (sync #f)) ...)
    (define (level-stream db #!key start end limit reverse
                          (key #t) (value #t) fillcache) ...)))
```
Implementations
- leveldb - provides the level API to libleveldb
- sublevel - provides namespaced API access to another implementation
- level-sexp - automatically read/write s-expressions to a level implementation
Source code / issues
https://github.com/caolan/chicken-level
Changelog
3.0.0
- add db-keys, db-values and db-pairs: previously these were available by customizing db-stream via keyword parameters
- remove db-stream, use db-keys, db-values or db-pairs instead
2.0.0
- make-level now expects three arguments (implementation name, interface implementation, resource) and returns a level record
- all write operations should now return #<unspecified> instead of #t
- added db-get/default procedure (and level-get/default method to interface)
- interface method names now use a "level-" prefix, eg level-get instead of get
- db-stream should now return key+value combinations as pairs instead of lists eg, (("key" . "value")) instead of (("key" "value"))