How do you quantify "more data than usual" and configure your alerts to avoid alert fatigue? If we push 500 Gig a night with 10% variance, Russia can push 50 Gig of "other" data before raising an alert. We use calculated rolling baselines for a lot of our performance alerting, and found those systems get "gamed" accidentally far too often. A few dropped packets, not enough to raise an alert, but enough to shift the baseline... then a few more dropped, still not enough to raise an alert (against the new baseline), but enough to shift it again. All of a sudden, users are complaining, but everything is green across the board.
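The failure mode is easy to reproduce. Here's a minimal sketch (hypothetical numbers, simple EWMA baseline, 10% alert band) showing how a metric can degrade a few percent per interval, stay inside the band relative to the *current* baseline at every step, drag the baseline down with it, and never trip a single alert:

```python
ALERT_THRESHOLD = 0.10   # alert if metric deviates >10% from the baseline
ALPHA = 0.5              # EWMA smoothing factor for the rolling baseline

def check(metric: float, baseline: float) -> tuple[bool, float]:
    """Return (alerted, new_baseline) for one measurement."""
    deviation = abs(metric - baseline) / baseline
    alerted = deviation > ALERT_THRESHOLD
    # The baseline absorbs the new value whether or not we alerted --
    # this is what lets slow drift "game" the system.
    new_baseline = (1 - ALPHA) * baseline + ALPHA * metric
    return alerted, new_baseline

baseline = 100.0   # e.g. throughput, starting healthy
metric = 100.0
alerts = 0
for step in range(30):
    metric *= 0.97   # 3% worse each interval -- always under the 10% band
    alerted, baseline = check(metric, baseline)
    alerts += alerted

# Metric has fallen roughly 60%, yet zero alerts fired: the baseline
# followed the degradation down, so every step looked "normal".
print(f"metric: {metric:.1f}, baseline: {baseline:.1f}, alerts: {alerts}")
```

One partial mitigation is to compare against a fixed or slow-moving reference (e.g. same hour last week) in addition to the rolling baseline, so gradual drift eventually shows up against something that doesn't drift with it.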