As pioneers in the development of simulated phishing attacks, we naturally recommend using these assessment tools as foundational components of security awareness training programs. But just as we believe there is more to successful security education than phishing tests, we feel there is more to measuring program success than tracking end-user click rates. And that’s for one simple reason: these metrics don’t provide a full view into your organization’s susceptibility.
There are a number of reasons why this is true. We’ve spoken in the past about the fact that one phishing email example is exactly that — one example. As such, each simulated attack essentially presents a snapshot of susceptibility at a moment in time, and some of that susceptibility is rooted in the way the email was structured, its level of sophistication, and why it resonated with the users who engaged with it. All of those elements vary with every phishing test and every active attack from the wild.
In addition, as our second annual Beyond the Phish™ Report again showed, click rates don’t necessarily reflect knowledge levels. Part of that follows from what we just noted in the prior paragraph: just because users happen to make the right decision on one phishing test doesn’t mean they did so because they spotted the threat and practiced active avoidance. Maybe they were too busy to notice the email. Maybe someone else told them not to click. Maybe the message didn’t resonate with them. There are too many factors at play to definitively count non-clicking users as knowledgeable users.
But even if we put those two things aside, there remains one key reason that click rates must be regarded as potentially unreliable metrics, and that’s because they can be manipulated.
Your immediate reaction might be to disagree with me; you might think, I don’t make my users click, they decide on their own. A click is a click, no matter how small! And that is true — other than the occasional oddity or back-end trickery that sometimes happens with sandboxes and such, individual clicks are on those individuals. But administrators absolutely can influence the likelihood of a click happening (or not happening) based on factors like those mentioned above: how a message is constructed, how difficult the red flags are to spot, how applicable a topic might be to users in a particular organization, when the message is sent, etc.
Administrators can consciously — and even unconsciously — make things easier or more difficult for end users, and manipulate mock attacks in order to influence a data trend. That’s why these simulated phishing statistics should not be the sole source of truth when it comes to your security awareness and training program.
Go Beyond the Phish – in More Ways Than One
Just as we advocate for thinking beyond the phish for cybersecurity assessments and training, we recommend extending beyond phishing tests to evaluate vulnerabilities and gauge progress.
In addition to utilizing question-based knowledge assessments and education modules, you can look to the security events that you already are (or should be) tracking, including metrics like the following:
- Numbers of active malware infections
- Rates of successful external phishing attacks
- Downtime hours for end users following a malware infection, successful phishing attack, or misplaced/stolen device
- Hours and resources tied to remediation of devices following end-user mistakes
- The quantity and quality of calls fielded by your IT helpdesk
- Numbers of suspicious emails reported by your employees
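To make the idea concrete, here is a minimal sketch (with entirely hypothetical data, field names, and thresholds) of how you might track that last metric alongside simulated-phish clicks, watching whether the ratio of employee reports to clicks trends upward over time:

```python
# Hypothetical monthly program metrics: (month, simulated-phish clicks,
# suspicious emails reported by employees). Real numbers would come
# from your own reporting tools and helpdesk records.
monthly_metrics = [
    ("2024-01", 48, 12),
    ("2024-02", 41, 19),
    ("2024-03", 33, 27),
    ("2024-04", 29, 36),
]

def report_to_click_ratio(clicks: int, reports: int) -> float:
    """Reports per click; a rising ratio suggests users are actively
    evaluating their mail rather than merely not clicking one test."""
    return reports / clicks if clicks else float("inf")

ratios = [report_to_click_ratio(c, r) for _, c, r in monthly_metrics]
improving = all(a < b for a, b in zip(ratios, ratios[1:]))
print(f"ratios: {[round(x, 2) for x in ratios]}, improving: {improving}")
```

The ratio is just one illustrative way to combine two of the measurements above; the point is to trend several signals together rather than lean on click rate alone.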
This last metric is a particularly good indicator of whether users are becoming more active in checking and evaluating the emails they receive on a day-to-day basis. You should see improvement across all of these measurements as you progress through a well-rounded, effective program. Those improvements not only indicate advancing knowledge but also give you an opportunity to gauge the ROI of your education efforts.