I called this out in the report as well, but the write concern documentation still doesn't say anything about rollbacks/write loss: https://docs.mongodb.com/manual/reference/write-concern/
If users are really aware of, and OK with, write loss by default (presumably because the probability of failure is small or the impact is low), then it should be fine to talk about it. If users *aren't* aware of this behavior, but most are subject to it by accepting defaults, then of *course* you should educate people about it!
Or, you know, choose safer defaults. That's an option!
MongoDB found a bug in the retry mechanism which they think is responsible for the issues we found in 4.2.6--a fix is scheduled for 4.2.8!
I mean like... 91 points and never even *touched* front page? I know, I know, never read Hacker News, but this is... weird behavior. I know Jepsen's gotten accidentally nuked by the voting ring detector in the past; maybe that's happening again.
Did HN's antispam measures get a lot more aggressive recently? The last handful of Jepsen reports have really struggled to make it to frontpage, despite significantly higher vote-to-age ratios than comparable posts. Once they're on FP, they reliably hit top 10, but Dgraph's actually had to get rescued by a mod, and yesterday's Mongo post never made it past mid-second-page.
To be clear: I don't encourage anyone else to submit or upvote, I don't pass around HN page links, I submit exactly once, etc.
Also the `snapshot` read concern doesn't actually give you snapshot reads unless you commit with write concern `majority`, and apparently this is... by design? Even for read-only transactions? I have questions!
tl;dr: MongoDB 4.2.6's transactions aren't full ACID, or even snapshot isolated. We found read skew, cyclic information flow, and internal inconsistencies, including transactions which could read their own writes from the future. Ooooh, spooooky!
Also, by default, transactions are allowed to lose data & read uncommitted (possibly impossible!) states, because why would you *not* want that behavior from something called a transaction. This was already documented, but I found it surprising!
By popular demand, here's a quick take on MongoDB 4.2.6's transaction system. There are CHARTS, there are GRAPHS, okay it's mostly CHARTS OF GRAPHS but they're really cool anomalies and I hope you enjoy them.
Guys, gals, & non-binary pals: it's here! Jepsen 0.1.19 is cut and on Clojars, and offers what I think is the final version of jepsen.generator.pure, the namespace which will replace jepsen.generator in 0.2.x.
Hey folks! I'm gonna be talking about some new research at RedisConf 2020, which will be streaming free online tomorrow and Wednesday. Check it out! https://redisconf.com/register
I'm not totally sold on this API yet, but I've been working on this design for over a year, and it's finally runnable: you can write tests with pure generators, and Jepsen will run them like you'd expect. We'll probably have a compatibility/deprecation release, followed by breaking changes in 0.2.0.
- No more random deadlocks
- Time limits that actually work right
- Generators can react to ok/fail/info events
- Sequences are intrinsically generators
- Better composition rules
Hi folks. I'm gearing up for the biggest API change in Jepsen in ~5 years: I'm replacing the generator system. If you'd like to start trying out the new API (and offer comments!), see https://github.com/jepsen-io/jepsen/blob/master/jepsen/src/jepsen/generator/pure.clj.
Jepsen 0.1.18 is now available, including support for cycle-detection tests with Elle, dumping packet captures with tcpdump, and quality-of-life improvements.
Elle looks at histories of transactions from real databases, and infers constraints on version orders and the universe of all possible Adya-style dependency graphs consistent with that history. We employ cycle detection to automatically find and explain minimal anomalies.
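The cycle-detection step can be sketched in plain Python. This is a toy illustration only: it assumes a history already reduced to a dependency graph, the transaction names and edges are made up, and it is not Elle's actual implementation or data model.

```python
# Toy sketch of the cycle-detection idea: given inferred Adya-style
# dependency edges between transactions (ww, wr, rw), a cycle in the
# graph witnesses a serializability violation. Not Elle's real code.

def find_cycle(edges):
    """edges: dict mapping txn id -> set of txn ids it must precede.
    Returns a list of txn ids forming a cycle, or None."""
    visited, stack, on_stack = set(), [], set()

    def dfs(t):
        visited.add(t)
        stack.append(t)
        on_stack.add(t)
        for nxt in edges.get(t, ()):
            if nxt in on_stack:
                # Found a back edge: slice out the cycle.
                return stack[stack.index(nxt):] + [nxt]
            if nxt not in visited:
                cycle = dfs(nxt)
                if cycle:
                    return cycle
        stack.pop()
        on_stack.remove(t)
        return None

    for t in edges:
        if t not in visited:
            cycle = dfs(t)
            if cycle:
                return cycle
    return None

# Hypothetical history: T1 -> T2 (wr), T2 -> T3 (ww), T3 -> T1 (rw).
deps = {"T1": {"T2"}, "T2": {"T3"}, "T3": {"T1"}}
print(find_cycle(deps))  # ['T1', 'T2', 'T3', 'T1']
```

Depth-first search with an on-stack set is linear in the size of the graph, which matches the linear-time property, though the real checker also has to infer the version orders and dependency edges in the first place.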
I'm pleased to announce the latest @jepsen project: Elle, a black-box, linear-time checker for transactional (or single-key!) consistency models. I've been working on this with Peter Alvaro for over a year now, and I'm delighted to finally have it out the door.
Elle has been the secret behind most of the recent Jepsen analyses, and we believe its techniques represent a novel and useful contribution to the field.
Again, this is just design review--I can't speak to anything about Replicache's implementation--but these properties are theoretically achievable! There's precedent in both Bayou and Eventually-Serializable Data Services, both from 1996! I'm kinda curious what kind of cross-pollination was going on there.