406 Commits (ee04d89b0f25a60842f0402d27ea746de354da2a)

Author SHA1 Message Date
Sayan Nandan 0f1264d312 Decompose linearity tests and utils into modules
Also fixed license headers
3 years ago
Sayan Nandan a87478dcba Optimize dependencies 3 years ago
Sayan Nandan 76acde2f4f Fix missing action argument in setkeys macro 3 years ago
Sayan Nandan 26775924ac Add tests for pop 3 years ago
Sayan Nandan 57c957d4e7 Add `pop` action 3 years ago
Sayan cdae667cb0
Fix pid file creation (#170)
* Remove the pid file if runtime errors occur

* Clean up error handling and fix pid file creation

The pid file was being created before the args were evaluated, so if
incorrect args or --help were passed, the pid file was left behind. This
was also fixed, along with some refactoring.
3 years ago
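A minimal sketch of the fix described in the commit above, assuming hypothetical `parse_args` and `run_server` helpers (skyd's actual code is structured differently): the pid file is created only after the arguments have been validated, and it is removed even when the runtime returns an error.

```rust
use std::{fs, process};

// Hypothetical stand-ins for skyd's real argument parsing and server loop.
fn parse_args() -> Result<(), String> {
    Ok(())
}
fn run_server(_cfg: ()) -> Result<(), String> {
    Ok(())
}

fn main() -> Result<(), String> {
    // Validate the CLI args (and handle --help) *before* touching the pid
    // file, so a bad invocation never leaves a stale pid file behind.
    let cfg = parse_args()?;

    // Only now create the pid file.
    fs::write("skyd.pid", process::id().to_string()).map_err(|e| e.to_string())?;

    // Run the server; whether it succeeds or hits a runtime error, remove
    // the pid file before returning.
    let result = run_server(cfg);
    let _ = fs::remove_file("skyd.pid");
    result
}
```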
Sayan ca9e482f47
Deter other processes from using the same data dir (#169)
* Deter other processes from using the same data dir

For more information, see #167

* Don't lock `pid_file`

Windows has mandatory locking, so a second instance wouldn't be able to
read the other process's PID. We'll just keep the file descriptor/handle
open.
3 years ago
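A sketch of the general technique, assuming the `fs2` crate for brevity (the project ships its own locking code): take an exclusive lock on a dedicated lock file in the data directory and keep the handle open for the lifetime of the process, so a second instance bails out instead of sharing the data dir.

```rust
use std::fs::OpenOptions;

use fs2::FileExt; // assumed dependency: fs2 = "0.4"

fn main() -> std::io::Result<()> {
    // Open (or create) a dedicated lock file inside the data directory;
    // notably *not* the pid file, so other processes can still read the PID
    // on Windows despite its mandatory locking.
    let lockfile = OpenOptions::new()
        .create(true)
        .read(true)
        .write(true)
        .open("data/.dblock")?;

    // Try to take an exclusive lock; a second instance fails here and bails.
    if lockfile.try_lock_exclusive().is_err() {
        eprintln!("another process is using this data directory");
        std::process::exit(1);
    }

    // Keep `lockfile` (and hence the lock) alive for the process lifetime,
    // then run the server as usual.
    Ok(())
}
```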
Sayan 7349e461e6
Try to auto recover the save operation on termination (#166)
This is very useful because it removes the need for user intervention if
the save on termination fails. Say the save operation fails because 'some
bad daemon' changed the directory's perms. skyd then reports this error
while trying to save upon termination. Our sysadmin now fixes the perms
issue. The previous design would force the sysadmin to _somehow_
foreground skyd and hit enter. That is silly. The new design simply
retries the save operation every 10 seconds, so once the issue is fixed,
the save recovers on its own.

Why not exponential backoff?
Because the issue might only be fixed a long time later, by which point
the backoff could have grown so large that a save which could now succeed
would still have to wait a long while before doing anything meaningful.

This also fixes a bug that caused BGSAVE errors to be reported as info
class log entries.
3 years ago
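A hedged sketch of the fixed-interval retry described above, using tokio and a hypothetical `try_save` stand-in for the flush routine:

```rust
use std::time::Duration;

// Hypothetical stand-in for the flush-on-termination routine.
fn try_save() -> std::io::Result<()> {
    Ok(())
}

#[tokio::main]
async fn main() {
    // Fixed 10-second retry instead of exponential backoff: if the operator
    // fixes the perms at any point, the very next attempt succeeds instead
    // of waiting out a large backoff window.
    loop {
        match try_save() {
            Ok(()) => break,
            Err(e) => {
                eprintln!("save failed: {e}; retrying in 10s");
                tokio::time::sleep(Duration::from_secs(10)).await;
            }
        }
    }
}
```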
Sayan 8df9901740
Upgrade deps and actiondoc (#165) 3 years ago
Sayan e553c5172b
Release v0.6.1 (#164)
* Explicitly fsync and relax CPU on snap busy-loop

This commit also switches to using global `VERSION` and `URL` statics
rather than defining them per-crate.

* Add changelog entry and bump up version

* Optimize `dbtest` macro and rm redundant allocs

* Upgrade deps
3 years ago
Sayan Nandan 0c33395a09 Add some general optimizations 3 years ago
Sayan 1bde8b197d
Fix file-locking on solaris (#162) 3 years ago
Sayan Nandan 72d871ed3f Upgrade deps 3 years ago
Sayan Nandan 1bec90baac Optimize `sset` and `sdel` implementations 3 years ago
Sayan Nandan 2eedb041bb Make `gen_constants_and_matches!` macro logical
This commit improves the overall 'look' of the macro and makes it appear
more logical.
3 years ago
Sayan Nandan 58830edc80 Use iterators for actions 3 years ago
Sayan Nandan a839137643 Move actions into an `actions` module
These are actions and don't belong in the `kvengine` module.
3 years ago
Sayan Nandan f8ea4c33de ucase for ASCII only
This is a silly optimization but can be significantly faster for larger
action names. Also, since there are (and should be) no non-ASCII
characters in the first argument, this is perfectly fine and sensible.
3 years ago
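The ASCII-only uppercasing likely boils down to something like `make_ascii_uppercase` (or its allocating sibling `to_ascii_uppercase`), which works in place on the bytes and skips the Unicode machinery; a tiny sketch:

```rust
fn main() {
    // Uppercase the action name in place, byte by byte. This avoids the
    // allocation and Unicode handling of `to_uppercase`, which is safe here
    // because action names are expected to contain only ASCII.
    let mut action = String::from("set");
    action.make_ascii_uppercase();
    assert_eq!(action, "SET");
}
```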
Sayan Nandan a68c42f720 Document missing instances of `unsafe` 3 years ago
Sayan 952c5caa86
Poison the database by default if snapshotting fails (#160)
* Stop accepting writes if snapshotting fails

This is an important consideration: if a BGSAVE failure poisons the
database, a snapshot failure can and should do so too. But this is
debatable in some cases. For example, users may configure snapshots to be
stored on a network file system (perhaps symlinked), and writing there can
fail.
Now in some cases, this failure 'may be acceptable'. This commit adds a
way to customize this behavior through the `failsafe` key in the
snapshots section of the cfg file and through the --stop-write-on-fail
option passed to `skyd` on startup. However, BGSAVE remains unchanged:
it will always poison the database if it fails. If the user doesn't want
this, they can simply disable BGSAVE.

* Add changelog
3 years ago
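A minimal sketch of the decision logic described above, with illustrative names rather than skyd's actual config structs: BGSAVE always poisons on failure, while snapshot failures poison only when the failsafe/--stop-write-on-fail behavior is enabled.

```rust
use std::sync::atomic::{AtomicBool, Ordering};

// Illustrative poison flag: when set, the server refuses writes.
static POISONED: AtomicBool = AtomicBool::new(false);

fn on_bgsave_result(ok: bool) {
    // BGSAVE always poisons the database on failure.
    if !ok {
        POISONED.store(true, Ordering::Release);
    }
}

fn on_snapshot_result(ok: bool, stop_write_on_fail: bool) {
    // Snapshot failures poison only if the user opted in via the `failsafe`
    // key in the snapshots section or `--stop-write-on-fail` on startup.
    if !ok && stop_write_on_fail {
        POISONED.store(true, Ordering::Release);
    }
}

fn main() {
    on_snapshot_result(false, false); // an 'acceptable' failure: no poison
    assert!(!POISONED.load(Ordering::Acquire));
    on_bgsave_result(false); // a BGSAVE failure always poisons
    assert!(POISONED.load(Ordering::Acquire));
}
```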
Sayan Nandan ac336ff821 Add missing changes in changelog
Use `pat` token instead of `path` in query engine
3 years ago
Sayan Nandan eea0f86c97 Upgrade skytable client driver version 3 years ago
Sayan 6c9a36d397
Improve compat to use storage formats for upgrades (#158) 3 years ago
Sayan Nandan 260b336bf6 Add tests for races and synchronized unlocks 3 years ago
Sayan a1320da52b
Migrate to using `Coremap` (#156)
* Upgrade all interfaces to use new HTable

* Document `HTable`
3 years ago
Sayan Nandan 966c2594bf Fix `lskeys` tests 3 years ago
Sayan Nandan b77b783064 Add tests for lskeys 3 years ago
Sayan Nandan ea7891fba7 Implement lskeys 3 years ago
Sayan Nandan 449da56308 Upgrade all interfaces to use new in-memory table 3 years ago
Sayan Nandan 975e953426 Add compat module for upgrading old files 3 years ago
Sayan Nandan c2a20d4476 Implement serialize/deserialize for `HTable` 3 years ago
Sayan Nandan a7f5d84ef4 Bump dependencies
Also use `tokio::join` in-place of `futures::join`
3 years ago
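For context, `tokio::join!` is a drop-in replacement for `futures::join!` when tokio is already a dependency; a trivial sketch:

```rust
#[tokio::main]
async fn main() {
    // Run two futures concurrently on the same task and wait for both,
    // without pulling in the `futures` crate just for its `join!` macro.
    let (a, b) = tokio::join!(async { 1 }, async { 2 });
    assert_eq!((a, b), (1, 2));
}
```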
Sayan 790558d2c7
Improve reliability, simplicity and recoverability of BGSAVE (#153)
* Create a new file on writing to flock-ed file

This fix is a very important one in two ways. Say we have a user A.
They go ahead and launch skyd, and skyd creates a data.bin file. Now A
just deletes the data.bin file for fun. Funnily enough, this never causes
flock to error!
Why? Because the descriptor/handle is still valid and the file was merely
unlinked from the current directory. But this is dangerous: the user exits
with a 'successfully saved' notice only to find that the file no longer
exists and all of their data was lost. That's bad.
There's a hidden problem in our current approach too, apart from this.
Our writing process begins by truncating the old file and then writing to
it with the cursor at 0. Fine, but what if this operation crashes midway?
Then we lose both the current data AND the old data. Not good.

This commit does a better thing: it creates a new temporary file, locks
it before writing and then flushes the current data to the temporary
file. Once that succeeds, it replaces the old data.bin file with the
newly created file.

This solves both of the problems mentioned above:
1. No more of the silly error
2. If BGSAVE crashes in between, we can be sure that at least the last
data.bin file is intact and not half-truncated.

This commit further moves the background services into their
own module(s) for easy management.

* Fix CI scripts

Fixes:
1. Our custom runner (drone/.ci.yml) was modified to kill the skyd
process once done since this pipeline is not ephemeral.
2. GHA for some reason ignores any error in the test step and proceeds
to kill the skyd process without erroring. Since GHA runners are
ephemeral, we don't need to do this manually.
3 years ago
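A sketch of the write-to-temp-then-rename pattern described in the commit above (the file names and the `save` helper are illustrative): the fresh data is written and fsynced to a temporary file first, and only then moved over data.bin, so a crash mid-write leaves the previous save intact.

```rust
use std::fs::{self, File};
use std::io::Write;

fn save(serialized: &[u8]) -> std::io::Result<()> {
    // Write the fresh snapshot to a temporary file first...
    let mut tmp = File::create("data.bin.tmp")?;
    tmp.write_all(serialized)?;
    tmp.sync_all()?; // make sure the bytes actually hit the disk

    // ...and only then atomically replace the old file. If we crash before
    // this point, the previous data.bin is still intact.
    fs::rename("data.bin.tmp", "data.bin")
}

fn main() -> std::io::Result<()> {
    save(b"example payload")
}
```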
Sayan Nandan 85616544ef Ensure that the entire file is locked
Although this is barely documented, setting nNumberOfBytesToLockLow and
nNumberOfBytesToLockHigh to MAXDWORD apparently locks the entire file.
3 years ago
Sayan Nandan 42ad5680ff Fix missing imports on Windows
I'm not on a Windows machine, so I don't get these errors reported!
3 years ago
Sayan Nandan 03e241902f Ensure that duplicated handle has same permissions
This is particularly relevant for Windows
3 years ago
Sayan Nandan ba53e5160b Use unlocks to ensure that file is readable 3 years ago
Sayan Nandan 76d184663a Add test for BGSAVE 3 years ago
Sayan Nandan d7cd1bfb70 Use different byte count for test
This test simply makes sure that the 0s written while truncating don't
reappear (they never should).
3 years ago
Sayan Nandan 93ef949bac Manually unlock file after complete termination
The cloned flock might attempt the unlock, but it only holds a cloned
descriptor!
3 years ago
Sayan Nandan 893cf1d741 Fix `FileLock::write` impl and make snaps blocking
This commit implements a tokio blocking task for mksnap and also fixes
FileLock's write method and adds a test for the same
3 years ago
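A sketch of the blocking-task part, assuming a hypothetical `create_snapshot` routine; tokio's `spawn_blocking` is the standard way to keep such filesystem work off the async worker threads:

```rust
// Hypothetical stand-in for the real snapshot routine, which does blocking
// filesystem work and must not run on the async worker threads.
fn create_snapshot() -> std::io::Result<()> {
    Ok(())
}

#[tokio::main]
async fn main() -> std::io::Result<()> {
    // Run the blocking I/O on tokio's dedicated blocking thread pool so the
    // reactor threads stay free to keep serving queries.
    tokio::task::spawn_blocking(create_snapshot)
        .await
        .expect("snapshot task panicked")
}
```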
Sayan Nandan b5865e500b Use Terminator for termination of all bg services
What we did in the old implementation was pure over-engineering.
We relied on CoreDB's `Drop` impl to terminate the background services.
Now this is absolutely unreliable due to the nature of async functions.
We also relied on the bgsave scheduler to release the lock upon exit
which is also unreliable because we left the service to the mercy of the
runtime. We spawned the task and didn't hold so much as a `JoinHandle`
to it. That's bad because the runtime can simply abort these tasks, which
may result in the lock never being released. Even though it is designed
to release the lock on Drop, the destructor may never be called at all.

This commit fixes all those issues by simplifying the entire impl to
use Terminator. Now the background save and snapshot services run
independently, in their own tasks. Whenever the user sends a SIGINT,
we tell everyone to quit. The listeners understand that this is the
last query they'll process, and the background save tasks exit almost
immediately. But what if some data was modified by this last query...?

No worries, that is completely handled by main(). The lock that BGSAVE
holds is released and returned to main almost immediately, and main then
attempts to flush the data right away. That's how we maintain reliability.
3 years ago
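A sketch of the Terminator idea using a tokio broadcast channel (skyd's actual type differs in its details): each background service holds a receiver and leaves its loop once shutdown is broadcast, while main keeps the `JoinHandle`s and performs the final flush itself.

```rust
use std::time::Duration;

use tokio::sync::broadcast;

#[tokio::main]
async fn main() {
    // One sender held by main; every background service gets a receiver.
    let (tx, _) = broadcast::channel::<()>(1);

    let mut rx = tx.subscribe();
    let bgsave = tokio::spawn(async move {
        // Background save service: loops until the shutdown signal arrives.
        loop {
            tokio::select! {
                _ = tokio::time::sleep(Duration::from_secs(120)) => {
                    // the periodic BGSAVE tick would run here
                }
                _ = rx.recv() => break,
            }
        }
    });

    // On SIGINT: tell every service to quit, then *await* the handles so the
    // runtime can't silently abort a task while it still holds the lock.
    let _ = tx.send(());
    bgsave.await.expect("bgsave task panicked");

    // main() then performs the final flush for anything the last query wrote.
}
```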
Sayan Nandan 78e9441564 Spawn blocking I/O tasks on a dedicated thread 3 years ago
Sayan Nandan 7d1b44a57f The snapshot service had similar bugs that were fixed 3 years ago
Sayan Nandan 5a7f17db14 Fix strong count calculation logic
See the added comment for more context
3 years ago
Sayan Nandan 5d4650712f Fix BGSAVE running right on service start
This fixes another flaw with the previous implementation: BGSAVE ran
right when the service was started, causing unnecessary disk I/O.
3 years ago
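One plausible shape of this fix, assuming the service uses `tokio::time::interval` (whose first tick completes immediately): consume that first tick before entering the loop so BGSAVE doesn't run at startup.

```rust
use std::time::Duration;

#[tokio::main]
async fn main() {
    let mut ticker = tokio::time::interval(Duration::from_secs(120));

    // `interval`'s first tick completes immediately; consume it up front so
    // the first real BGSAVE only happens one full period after startup
    // instead of causing disk I/O the moment the service starts.
    ticker.tick().await;

    loop {
        ticker.tick().await;
        // run BGSAVE here; in the real service this loop also watches for
        // the shutdown signal
        break; // keep the sketch finite
    }
}
```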
Sayan Nandan 77f4b6e7be Make BGSAVE optimistic and fix BGSAVE bugs
This commit makes BGSAVE optimistic about what it is doing: if BGSAVE
fails once, it immediately poisons the table. Now let's say some amazing
sysadmin managed to SSH into the server and fix the storage issue; BGSAVE
should then be able to succeed.
The previous implementation was flawed: firstly, it prevented that, and
secondly, even if BGSAVE did succeed, the server would still refuse to
accept writes. This commit fixes that behavior.
3 years ago
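A sketch of the 'optimistic' behavior with an illustrative poison flag: a failed BGSAVE poisons the table, and a later successful run clears the poison so writes are accepted again.

```rust
use std::sync::atomic::{AtomicBool, Ordering};

// Illustrative poison flag: when set, the server refuses writes.
static POISONED: AtomicBool = AtomicBool::new(false);

fn record_bgsave(ok: bool) {
    // Optimistic: every run updates the flag, so a single successful BGSAVE
    // after the storage issue is fixed lifts the write ban again.
    POISONED.store(!ok, Ordering::Release);
}

fn main() {
    record_bgsave(false); // the disk broke: writes are refused
    assert!(POISONED.load(Ordering::Acquire));
    record_bgsave(true); // the sysadmin fixed the perms: writes accepted again
    assert!(!POISONED.load(Ordering::Acquire));
}
```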
Sayan Nandan a61ab02cd9
Fix disk storage on termination (#151)
See #150 for more information
3 years ago
Sayan Nandan 3616793554 Update versioning and support information [skip ci] 3 years ago
Sayan Nandan 6b47279b1b Fix CI script and improve terminal artwork 3 years ago