[Defender Pro] Better decisions with centralized metrics

Defender Filescan commonly reports eval() functions in plugins. It would be helpful if it could suppress false positives based on Ignore decisions made by other Defender users. For example, plugin Foo gets flagged for an eval() at line 1030, column 4, in version 2.5.1. I choose to ignore that, and so does every other user of this plugin. So the next time someone runs a Filescan and it detects this exact same condition, why should it warn the admin and ask them to make a decision when everyone else has already been through the investigation process and concluded that the flag is invalid?
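To make that concrete, here's a minimal sketch of the matching idea. It assumes a finding can be reduced to a plugin/version/line/column fingerprint; the function and variable names are hypothetical, not Defender's actual internals.

```php
<?php
// Hypothetical sketch only, not Defender's actual internals.
// The idea: reduce every finding to a stable fingerprint
// (plugin slug / version / line / column) so identical findings on
// different sites can be matched against earlier Ignore decisions.

function defender_finding_fingerprint( $plugin, $version, $line, $column ) {
	return sprintf( '%s/%s/%d/%d', $plugin, $version, $line, $column );
}

// The eval() in plugin Foo 2.5.1 at line 1030, column 4.
$fingerprint = defender_finding_fingerprint( 'Foo', '2.5.1', 1030, 4 ); // "Foo/2.5.1/1030/4"

// Fingerprints already investigated and marked Ignore (locally or via a shared feed).
$ignored = array( 'Foo/2.5.1/1030/4' => true );

$show_warning = ! isset( $ignored[ $fingerprint ] ); // false: no need to warn this admin again
```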

Of course we can't blindly trust the crowd – what if far too many people simply hit that Ignore button, and it turns out the warning was valid? The problem we need to avoid is too many unqualified people hitting that Ignore button. I may or may not be qualified to make a decision about the Foo plugin – can you tell? This is where some help from our friends at WPMU DEV comes in:

When more than X sites get exactly the same warning, log it. Look into it. And somewhere in the Hub, set a disable flag for "Foo/2.5.1/1030/4". Now we know someone has actually looked at it, and from that point forward no other site admin needs to see that warning. This could be part of the Pro service offering, beyond the freeware offering.
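As a rough sketch of what that Hub-side logic might look like, assuming a simple count of sites per fingerprint and an arbitrary review threshold (none of these names reflect an existing WPMU DEV API):

```php
<?php
// Hypothetical sketch of the Hub-side logic. The threshold, storage and
// logging here are my own assumptions, not an existing WPMU DEV API.

$review_threshold = 50; // "X": how many sites must report the same warning

// How many distinct sites have reported each fingerprint.
$reports = array( 'Foo/2.5.1/1030/4' => 73 );

// Fingerprints a human reviewer has already marked as non-issues.
$disabled = array();

foreach ( $reports as $fingerprint => $site_count ) {
	if ( $site_count > $review_threshold && ! isset( $disabled[ $fingerprint ] ) ) {
		// Queue it for someone at WPMU DEV to look into.
		error_log( "Review requested: {$fingerprint} reported by {$site_count} sites" );
	}
}

// Once reviewed, setting the flag means no other site admin sees that warning.
$disabled['Foo/2.5.1/1030/4'] = true;
```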

Maybe that service doesn't need to come directly from WPMU DEV. Maybe we can make better decisions based on a trust factor for other users. In other words, if I'm on the development team for plugin Foo and I know for a fact that line 1030 is good, then my trust rating for that specific plugin, maybe 79 on a scale of 1 to 100, should be much higher than that of Joe Bloggs, who can't read PHP and rates a 2. All we need is a rating displayed on the File Scan page, something like: "This is considered a non-issue by 82% of known users of this plugin with a credibility rating greater than 74".
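Here's one way that percentage could be computed, as a sketch only. The 1-to-100 credibility scale and the cut-off of 74 come straight from the example above; the data shape and names are my own assumptions.

```php
<?php
// Hypothetical sketch of how the displayed percentage could be computed.
// The 1-100 credibility scale and the cut-off of 74 come from the example
// above; the data shape and names are assumptions.

$min_credibility = 74;

// Each entry: a known user of the plugin, their credibility rating,
// and whether they marked this specific finding as Ignore.
$decisions = array(
	array( 'credibility' => 79, 'ignored' => true ),  // e.g. someone on the Foo dev team
	array( 'credibility' => 88, 'ignored' => true ),
	array( 'credibility' => 91, 'ignored' => false ),
	array( 'credibility' => 2,  'ignored' => true ),  // e.g. Joe Bloggs, filtered out below
);

// Only count users above the credibility cut-off.
$qualified = array_filter( $decisions, function ( $d ) use ( $min_credibility ) {
	return $d['credibility'] > $min_credibility;
} );

$ignored_count = count( array_filter( $qualified, function ( $d ) {
	return $d['ignored'];
} ) );

$percent = count( $qualified ) ? round( 100 * $ignored_count / count( $qualified ) ) : 0;

echo "This is considered a non-issue by {$percent}% of known users of this plugin "
	. "with a credibility rating greater than {$min_credibility}.";
```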

How do we assign a trust factor to people? Well, let's stop here and consider that we're now trying to fine-tune a process that doesn't exist yet. A lot of questions could be asked about how to implement this. If the premise is valid, we can create solutions for every challenge; if we aren't going to do this at all, there's no need to go down the rabbit hole solving every possible problem. My point is that we should start by considering a centralized process rather than the current paradigm, where everyone is forced to make their own decisions for no good reason, given the technology now available.

Consider also: when a site admin chooses to delete a PHP file, or an entire plugin, because it contains an eval(), that's a flag to the rest of us that perhaps something is up. Does that person know something the rest of us should know? Wouldn't it be nice if Defender logged this significant event, or offered a new Warning button the admin could click, so that someone at WPMU DEV could check whether the rest of us should delete that specific Foo v2.5.1 plugin as well? Wouldn't it be nice if those of us running Foo v2.5.0 could be warned that v2.5.1 has a defect that should preclude updating to that release – and if Automate then refused to install that specific version without a manual override?
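A sketch of how Automate might consult such a blocklist before updating, again with hypothetical names and data, since no such feed exists today:

```php
<?php
// Hypothetical sketch of an Automate-side check. The blocklist feed and
// names are assumptions; no such feature exists today.

// Versions flagged because enough admins deleted them after a scan warning.
$blocked_versions = array(
	'Foo' => array( '2.5.1' ),
);

function version_is_flagged( $plugin, $version, $blocked_versions ) {
	return isset( $blocked_versions[ $plugin ] )
		&& in_array( $version, $blocked_versions[ $plugin ], true );
}

$plugin   = 'Foo';
$target   = '2.5.1';
$override = false; // the manual override an admin would have to set explicitly

if ( version_is_flagged( $plugin, $target, $blocked_versions ) && ! $override ) {
	// Refuse the automatic update and tell the admin why.
	echo "Automate skipped {$plugin} {$target}: flagged by other Defender users.";
} else {
	// Proceed with the normal update path.
	echo "Updating {$plugin} to {$target}.";
}
```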

It's time to make use of the data that's already available from these important plugins, which have a connection back to the WPMU DEV servers. There are metrics there about how often sites trigger IP lockouts, about the class C IP blocks sourcing the most recent bot attacks, about how often unusual issues arise with specific plugins, about how often specific files keep re-appearing in folders (like favicon.ico) … and lots of other things that are common to most of us.

Thanks for your consideration.