I know the "risk" calculation, but I don't understand what the variables in the calculation mean
The risk calculation is (asset * priority * reliability) / 25.
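For concreteness, here is the arithmetic as I read it, with values I made up purely to illustrate (the meaning of each variable is exactly what I'm asking about):

```python
# Made-up example values -- how to actually choose them is my question.
asset = 4        # how valuable the affected asset is
priority = 3     # how urgent this type of event is
reliability = 6  # how "trustworthy" the event is

risk = (asset * priority * reliability) / 25
print(risk)  # 2.88 with these made-up values
```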
I don't quite understand what the individual variables in this equation are supposed to be, though, and they don't appear to be documented or explained in any detail.
For example, what is "reliability" supposed to denote? Is there any article or documentation describing the parts of this calculation and what they actually mean? Saying "this event is very reliable" doesn't help if I have no idea whether a particular event reliably indicates a security incident. What metric or rubric should I use to decide that one event is more "reliable" than another?
And "asset": I suppose some assets are clearly more important than others, but how can I decide how much more important? For example, is there a rule of thumb on setting asset value?
And finally, priority seems pretty arbitrary as well. Are there any guidelines or examples on setting this value for any given event?
I want to turn up the sensitivity of some events, but I feel like I'm randomly mashing buttons without understanding what the intent is behind the components of this risk equation.
Priority: how urgently the event should be investigated.
Reliability: how likely the event is to be a true positive rather than a false positive.
See section 3.2 of https://www.alienvault.com/doc-repo/usm/security-intelligence/AlienVault_Life_cycle_of_a_log.pdf for more context.
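To make those two knobs concrete, here is a small sketch of how they feed the risk formula. The event names and values are made up, and the scales (asset 0-5, priority 0-5, reliability 0-10) are my reading of the linked document, so double-check them against your version:

```python
def risk(asset, priority, reliability):
    """OSSIM-style risk score: (asset * priority * reliability) / 25."""
    return (asset * priority * reliability) / 25

# Two hypothetical events hitting the same asset (asset value 4).
# A noisy signature (low reliability) scores far lower than a
# high-confidence one, even at the same priority.
noisy_event   = risk(asset=4, priority=3, reliability=2)   # 0.96
trusted_event = risk(asset=4, priority=3, reliability=8)   # 3.84

print(noisy_event, trusted_event)
```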
I've never seen an official rule of thumb for setting asset values, but I tend to use something like the following (sketched in code after the list):
5: Any server/device that can receive a packet from the Internet, or has unencrypted access to valuable data (PCI, bank, PHI, SSN, etc.). Also any Domain Controller, LDAP server, or other authentication service, and any VPN device.
4: Any database server that does not fall into the above, app servers, and source code repositories.
3: Any other production server or device.
2: Any non-production device.
1: Any server or device you really don't care about.
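Here is that rule of thumb as a rough sketch; the attribute names (internet_facing, sensitive_data, and so on) are hypothetical and only illustrate how you might apply the tiers consistently, not any AlienVault/OSSIM API:

```python
# Hypothetical encoding of the asset-value rule of thumb above.
def asset_value(internet_facing=False, sensitive_data=False,
                auth_service=False, database=False, app_or_repo=False,
                production=False, cared_about=True):
    if internet_facing or sensitive_data or auth_service:
        return 5   # Internet-reachable, holds PCI/PHI/SSN, or auth/VPN device
    if database or app_or_repo:
        return 4   # databases, app servers, source repos not covered above
    if production:
        return 3   # any other production server or device
    if cared_about:
        return 2   # non-production but still worth something
    return 1       # devices you really don't care about

print(asset_value(internet_facing=True))   # 5
print(asset_value(database=True))          # 4
print(asset_value(production=True))        # 3
print(asset_value())                       # 2
print(asset_value(cared_about=False))      # 1
```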