Comment 8 for bug 1450251

teo1978 (teo8976) wrote:

> Software isn't flawless,

And here's a flaw of this software, which is why I reported the bug.
I was under the impression that you were trying to tell me that this wasn't a flaw at all, that it failed because this software is "not designed for" a given task, and that you were claiming this was a sensible design decision, which it isn't.
But perhaps I misinterpreted you, as I see you haven't closed the bug. Perhaps you were just explaining where the root of the bug is, which is a design flaw, in which case we agree.

> and there are anomalous edge cases for almost all software that aren't handled well.

Here you are treating as an "anomalous edge case" something that should be considered an obvious normal case: perhaps not frequent, but one that should definitely be handled well.

> In this case, a cosmetic operation

Pardon me, "cosmetic"??

> is unable to be performed on roughly 0.00005% of bug reports,

Do you have any data that backs up that statistical estimation?

> and there is a workaround,

Yeah, a ridiculously painful workaround that takes tens of man-hours: moving every duplicate of the source bug, one by one.

> so it's not a very high priority to fix directly.

Oh well, priorities are subjective, I'll give you that.

> Someone was repeatedly trying to mark A as a duplicate of B many
> thousands of times an hour, causing database locks to be held on A, B,
> and all of A's duplicates, preventing other duplicate operations from
> completing on those bugs.
> Now that they've stopped doing silly things like that

No, that was me yesterday (I had launched an infinite while loop from a terminal, in the hope that the spike in errors would attract some attention to the bug).
But that was YESTERDAY, and I stopped. TODAY, I started (again) manually re-marking dozens of dupes of A as dupes of B (NOT marking A as a dupe of B), which is nothing silly: it is the "workaround" you suggest yourself. It worked fine for the first few dozen bugs; then it started to systematically time out.

Indeed, after a few minutes, it did start working again.
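For what it's worth, since the timeouts seem to clear after a few minutes, the one-by-one workaround can at least be scripted with retries and exponential backoff instead of babysitting each click. A minimal sketch: `move_dupe` here is a hypothetical stand-in for whatever API call or UI action actually re-parents one duplicate (it is NOT a real Launchpad function), and the timeout is simulated.

```python
import time

class ServerTimeout(Exception):
    """Stand-in for the server-side timeout error the tracker returns."""

def retry_with_backoff(op, *args, attempts=5, base_delay=0.01):
    """Call op(*args), sleeping progressively longer after each timeout."""
    for attempt in range(attempts):
        try:
            return op(*args)
        except ServerTimeout:
            if attempt == attempts - 1:
                raise  # give up after the last attempt
            time.sleep(base_delay * 2 ** attempt)  # 0.01s, 0.02s, 0.04s, ...

# Simulated re-parenting call: times out twice, then succeeds.
calls = {"n": 0}
def move_dupe(dupe_id, new_master):
    calls["n"] += 1
    if calls["n"] < 3:
        raise ServerTimeout()
    return (dupe_id, new_master)

print(retry_with_backoff(move_dupe, 1450251, 9999))  # -> (1450251, 9999)
```

In a real script the backoff base would be minutes rather than milliseconds, matching the observed recovery time.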

So, if SUCCESSFULLY moving a bunch of dupes from one bug to another triggers some "database lock", then that's another wrong design choice.
And if the reason the operation fails is that it has been locked for security or whatever reason, then "timeout error" is an absurdly wrong error message.

> If it doesn't work, say so in a pleasant manner and we can work out why it's broken again and how to fix it.

1) Whether I speak in a pleasant or unpleasant manner should be of no relevance in taking action to fix something that is broken.
2) We are talking about another issue: the workaround shouldn't be needed at all. Marking as a duplicate a bug that has its own duplicates is handled in a ridiculously inefficient manner, and it must be optimized.
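To illustrate what "optimized" could look like: re-parenting all of A's duplicates to B does not require touching rows one at a time; a set-based update can do it in one statement per table. A sketch against a toy schema (the `bug` table and `duplicate_of` column are assumptions for illustration, not Launchpad's actual schema):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE bug (id INTEGER PRIMARY KEY, duplicate_of INTEGER)")

# Bug 1 (A) has three duplicates; bug 2 (B) is the intended new master.
con.executemany("INSERT INTO bug VALUES (?, ?)",
                [(1, None), (2, None), (10, 1), (11, 1), (12, 1)])

# Re-parent every duplicate of A to B in ONE statement, then mark A itself
# as a duplicate of B -- instead of one round-trip (and lock) per duplicate.
with con:
    con.execute("UPDATE bug SET duplicate_of = 2 WHERE duplicate_of = 1")
    con.execute("UPDATE bug SET duplicate_of = 2 WHERE id = 1")

print(con.execute(
    "SELECT COUNT(*) FROM bug WHERE duplicate_of = 2").fetchone()[0])  # -> 4
```

The point is that the work grows with one short transaction rather than with hundreds of separate operations, each of which can individually time out.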

> In this case, the largest master bug in history is being marked as a duplicate of another large master bug

And the fact that it doesn't work demonstrates that the estimate of the size of bugs the system should be capable of handling was pathetically wrong.
That's as if a video going viral made Facebook crash and they said "this is the most viral video in history, it's a pathological case".

> only necessary because someone
> filed a new bug and decided that *it* should be the new master, rather
> than using the existing bug

No, "only" necessary because the new bug was the one being handled by non-idiots: the one assigned to somebody actually working on it who knows what they are doing, and the one not constantly screwed up by monkeys who wrongly changed the status to "Fix Released", which cannot be reverted (which is another idiotic design decision, btw).

> a very uncommon occurrence

Not at all. I see that all the time: an original crappy bug report and a much better later one.

> for an established bug with hundreds of duplicates

It's "established" only in that it has hundreds of duplicates; and once it has them, it attracts more, and when somebody wants to fix that mess with what should be a trivial one-click operation, they hit the bug we are talking about.

> This particular 0.00005% of the dataset is by far the biggest piece of work that this code has ever
> seen, and the code does not handle it well, so it is clearly a uniquely pathological case.

The code does not handle it well: that's what is pathological. It's a pathology in the software.