• Resolved ozguy

    (@ozguy)


    Hi Caleb,

    What settings would keep the nasties such as hackers, script kiddies, scrapers etc out, but keep the false positive risk low?

  • Plugin Author Maikuolan

    (@maikuolan)

    Hi,

    Any measures strong enough to keep out the majority of nasties are going to carry at least some small degree of false positive risk. The actual severity of that risk, as well as exactly what counts as “sufficiently” strong, will depend somewhat on the nature of the website in question and the scope of its normal, intended traffic (e.g., whether that traffic is inherently internationalised versus more localised, targeting a specific country or region, and therefore how vigilant we’re able to be about blocking anything outside the targeted countries/regions; whether the software being used on that website behaves in ways that might trigger any of CIDRAM’s signatures; commercial versus non-commercial websites, in the sense of how that plays into tolerances; etc). Which kinds of block events actually constitute a false positive is also somewhat a matter of opinion, given that a false positive is defined on the basis of whether something should or shouldn’t have been blocked in the first place, and that itself is a matter of opinion. Because of that, what would generally be considered the best, most optimally balanced settings is likely to vary somewhat from one website to another.

    That all said, there are some general recommendations that should hold true for just about everyone:
    – Giving users a way, in limited circumstances, to regain access to the website when wrongly blocked, e.g., by enabling CIDRAM’s reCAPTCHA or hCaptcha features (or even, when possible, both), is going to be a good idea (see the first sketch after this list). While it won’t eliminate the risk of false positives, it’ll take the sting out of them in most cases.
    – For the most part, the more modules, signature files and such that are active at an installation, the stronger that installation’s protection will be, but also the higher the risk of false positives. Of the available modules and signature files, using what you need while leaving out what you don’t is, in general, going to help in finding that sweet spot between best protection and least false positive risk. Knowing exactly what you do and don’t need often comes down to a combination of experience and knowing what each module and signature file provides. The latter, at least, can be learned by reading the descriptions of the available modules and signature files shown at the updates page.
    – Log as much as you can (see the logging sketch after this list). If anything goes wrong (i.e., when false positives occur), logs will help in diagnosing exactly what has gone wrong and in finding the right solution to the problem. Without logs, in most cases, we’re left in the dark about such problems.
    – If you’re not sure what something does, ask.
    – Checking how “signatures➡shorthand” has been configured and adjusting it according to your needs (i.e., based on the reason something was blocked, and whether you would agree or disagree that it should be blocked in the first place) is always a good idea (see the shorthand sketch after this list).
    – Wherever possible, the signature files and modules I would recommend for most installations are already enabled by default, but some aren’t, because they require API keys to be entered into the configuration in order to work properly (e.g., the AbuseIPDB module, the Stop Forum Spam module, the Project Honeypot module, etc; see the last sketch after this list). Enabling those can definitely help with keeping the nasties out, though there are some inherent limitations, too: regular API lookups can slow a website’s response times down a little wherever they’re performed, and most such APIs impose lookup quotas, so it’s generally best not to use all of them at the same time, and to perform those lookups only where essential, such as at a website’s login, registration, and contact pages, in order to not affect performance too much and to avoid surpassing any imposed quotas. Of course, that also means any pages where said lookups aren’t performed could still be accessible to anything which would normally be blocked by them. All of said APIs have their own inherent false positive risk, too.
    – The more time you have available to monitor and optimise everything over time, the stricter you can generally afford to be with your settings (because you’ll be more likely to notice if/when something goes wrong, and have the time to adjust accordingly). The less time you have available, the less strict you’ll likely want to be, due to the increased risk of not noticing when something goes wrong and/or not having the time to fix it when it does. In a less strict scenario, you’ll likely block fewer nasties than would be ideal, but also likely incur fewer false positives than would otherwise be the case. In a more strict scenario, you’ll likely do better at blocking the nasties, but may incur more false positives than would be ideal (though you’ll also hopefully have the time, log data, and anything else necessary to rectify the problem when it occurs).
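
    To illustrate the first point, here’s a minimal sketch of what the relevant configuration might look like in CIDRAM’s INI-style config.ini. The directive names here (the recaptcha and hcaptcha categories, with usemode, sitekey and secret) are from memory and may differ between CIDRAM versions, so treat them as assumptions and verify them against your own configuration page; the keys themselves come from the reCAPTCHA and hCaptcha dashboards respectively.

        [recaptcha]
        ; Assumed directive names; verify against your own configuration page.
        ; usemode=1 is assumed here to mean "offer the CAPTCHA when blocked".
        usemode=1
        sitekey='your-recaptcha-site-key-here'
        secret='your-recaptcha-secret-here'

        [hcaptcha]
        usemode=1
        sitekey='your-hcaptcha-site-key-here'
        secret='your-hcaptcha-secret-here'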
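
    For the logging point, here’s a sketch of the kind of directives involved (again, directive names and placeholder syntax as I remember them, so treat them as assumptions and verify locally). CIDRAM can write human-readable, Apache-style, and serialised logs, and the date placeholders split the logs into one file per day:

        [general]
        ; Assumed directive names and placeholder syntax.
        logfile='{yyyy}-{mm}-{dd}.log'
        logfile_apache='{yyyy}-{mm}-{dd}.apache.log'
        logfile_serialized='{yyyy}-{mm}-{dd}.serialized.log'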
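
    For the shorthand point, the idea is that each block event carries a shorthand category describing why it was blocked (attacks, proxies, cloud services, spam, and so on), and “signatures➡shorthand” controls how each category is treated. The category names and value format below are purely hypothetical, for illustration only (the real format varies between CIDRAM versions), so copy the existing value from your own configuration page and toggle the parts you disagree with rather than pasting this verbatim:

        [signatures]
        ; Hypothetical category names and value format, for illustration only.
        ; E.g., block attack and spam sources, but not cloud services, because
        ; your own monitoring tools happen to run from a cloud provider:
        shorthand='Attacks:Block,Spam:Block,Proxy:Block,Cloud:DontBlock'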
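
    And for the API-based modules, once installed via the updates page, each one expects its key to be entered somewhere in the configuration. The section and directive names below are hypothetical placeholders (the real names appear on the configuration page once the relevant module is installed), and the keys come from the respective services’ websites:

        ; Hypothetical section/directive names, for illustration only.
        [abuseipdb]
        api_key='your-abuseipdb-key-here'

        [sfs]
        api_key='your-stop-forum-spam-key-here'

        [projecthoneypot]
        api_key='your-project-honeypot-access-key-here'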

    If I think of anything else, I’ll write it in a separate, subsequent reply, so that this reply doesn’t reach TL;DR length and to avoid the risk of accidentally repeating any points. Anyhow, for now, I hope that helps. 🙂


The topic ‘Basic Security’ is closed to new replies.