Right now Redis makes a great cache, lossy message bus, and scratchpad, but you have to plan on data loss. Redis-Raft should hopefully change that by offering strict serializability, and from our testing, it looks like they're on track. Watch for GA next year!
Redis-Raft is really cool, because all of the existing Redis replication strategies (Sentinel, Cluster, Enterprise, CRDT) can lose updates during partitions.
There are a ton of neat bugs here, including infinite loops, total data loss on failover, servers sending responses to the wrong clients, and all kinds of crashes. None should have affected production users; Redis-Raft wasn't public until May, and GA isn't until 2021.
New Jepsen analysis: we worked with Redis Labs to evaluate Redis-Raft, a new, still-under-development approach to Redis replication, and found 21 issues, 20 of which have been fixed in recent builds. https://jepsen.io/analyses/redis-raft-1b3fbf6
I'm gonna be giving a Zoom talk on Elle for CMU's database seminar, on July 27th. I think anyone can join, if you want to listen in. :) https://db.cs.cmu.edu/events/db-seminar-spring-2020-db-group-black-box-isolation-checking-with-elle/
@jepsen_io Completely agreed! I'm aware of a few other instances of reported non-serializable behavior under PostgreSQL SSI: https://www.postgresql.org/message-id/20141021071458.2678.9080%40wrigleys.postgresql.org Repros: https://github.com/gfredericks/pg-serializability-bug
Anyway, consistency models are a mess; news at 11. 😂
So, while Berenson et al. say that snapshot isolation isn't stronger than repeatable read, PostgreSQL appears to have implicitly adopted the strict interpretation instead, and says that SI is stronger than RR. In fact, SI prohibits *every* anomaly in the strict interpretation of the ANSI SQL standard, including their (narrow) definition of phantoms!
This could still be consistent with the ANSI SQL standard's definition of repeatable read! As Berenson et al. pointed out twenty-five years ago (!), the standard is ambiguous. The paper argues that there are two interpretations of the ANSI phenomena: strict and broad. It says the broad interpretation is what ANSI *meant* to define, and goes on to define and analyze snapshot isolation in those terms. https://www.microsoft.com/en-us/research/wp-content/uploads/2016/02/tr-95-51.pdf
Something odd came up during this review, which Martin Kleppmann previously reported in https://github.com/ept/hermitage/blob/master/postgres.md: PostgreSQL repeatable read isn't repeatable read: it's snapshot isolation, which allows a specific class of G2-item anomalies (those whose cycles have adjacent rw edges) prohibited under formalizations of repeatable read.
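To make that concrete, here's the classic write-skew shape: a two-transaction G2-item cycle that PostgreSQL accepts at REPEATABLE READ but rejects at SERIALIZABLE. A hypothetical sketch (table, rows, and invariant are mine, just for illustration):

```sql
-- Two sessions interleaved at REPEATABLE READ.
-- Invariant we'd like to preserve: x + y >= 1.
CREATE TABLE t (id text PRIMARY KEY, val int);
INSERT INTO t VALUES ('x', 1), ('y', 1);

-- Session A:
BEGIN ISOLATION LEVEL REPEATABLE READ;
SELECT sum(val) FROM t;               -- A sees 2
-- Session B:
BEGIN ISOLATION LEVEL REPEATABLE READ;
SELECT sum(val) FROM t;               -- B sees 2
-- Session A:
UPDATE t SET val = 0 WHERE id = 'x';
COMMIT;
-- Session B:
UPDATE t SET val = 0 WHERE id = 'y';
COMMIT;                               -- also commits; x + y is now 0
```

Neither transaction saw the other's write, both commit, and the invariant is broken. Run the same interleaving at SERIALIZABLE and PostgreSQL's SSI aborts one of the two with a 40001 serialization_failure instead.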
A new Jepsen report! It turns out that PostgreSQL's "serializable" isolation was not, in fact, serializable: it exhibited G2-item. A patch is coming in the next minor release. :) http://jepsen.io/analyses/postgresql-12.3
Manish Jain, from Dgraph Labs, wrote a bit about his experience debugging Jepsen test failures, and how he used OpenCensus traces to get a handle on some tricky bugs: https://dgraph.io/blog/post/solving-jepsen-with-opencensus/
So here's a neat thing postgres 12.3 might do? Maybe I'm doing it wrong, not sure yet.
All these transactions are executed with SERIALIZABLE isolation over lists implemented as comma-separated TEXT fields. `r x [1, 2]` means we read the current value of row x and found it to be [1, 2]. `a x 3` means "append 3 to x", like so:
insert into txn1 as t (id, val) values ($1, $2) on conflict (id) do update set val = concat(t.val, ',', $3) where t.id = $4
rw is an anti-dep, ww and wr are deps.
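As a sketch, in the notation above (this history is hypothetical, just to show the shape), a G2-item cycle is two transactions whose reads each miss the other's append:

```
T1: r x [1]   a y 2   ; T1 -rw-> T2: T1 read x before T2's append landed
T2: r y []    a x 2   ; T2 -rw-> T1: T2 read y before T1's append landed
; both commit => cycle T1 -rw-> T2 -rw-> T1, so no serial order exists
```

Elle finds these by building the dependency graph from reads and appends, then looking for cycles.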
"Note: Because of the way synchronous replication is implemented in PostgreSQL it is still possible to lose transactions even when using synchronous_mode_strict"
oh, okay, cool
is there a postgres replication/HA setup that *doesn't* lose data?
I called this out in the report as well, but the write concern documentation still doesn't say anything about rollbacks/write loss: https://docs.mongodb.com/manual/reference/write-concern/
If users are really aware of, and OK with, write loss by default (presumably because the probability of failure is small or the impact is low), then it should be fine to talk about it. If users *aren't* aware of this behavior, but most are subject to it by accepting defaults, then of *course* you should educate people about it!
Or, you know, choose safer defaults. That's an option!