The Human Risk Factor


By Rick Kam, President and Co-founder, ID Experts
rick.kam@idexpertscorp.com

Employee negligence was identified as a top threat to information security, according to a recent report from the Ponemon Institute. An article earlier this year in Federal Times noted that “Every survey of IT professionals and assessment of cybersecurity posture shows at least 50 percent of breaches and leaks are directly attributable to user error or failure to practice proper cyber hygiene.” To anyone who has been paying attention for the last decade or so, it will come as no surprise that people make mistakes that cause data breaches. To err is human, and that is not going to change.

What has changed is the scope of damage resulting from these errors. A decade ago, a lost laptop or improperly discarded paper records might expose hundreds or even thousands of people to a potential data breach. Today, with the massive digitization of medical information, mobile data usage, and large-scale system integration, everyday human errors can cause breaches that expose millions of people to potential harm. To cite just one example, InfoWorld and CSO reported that the 80 million-record Anthem data breach was probably caused when thieves infiltrated Anthem’s system using a database administrator password captured through a phishing scheme.

Attack Vectors Point from People to Technology

A recent blog by Napier University professor William Buchanan aptly lists the top three threats in computer security as “people, people, and people.” Buchanan’s article mentions leaving devices unattended, sharing passwords, or accidentally emailing information to the wrong people as typical security errors, but many of the breaches from cyber-attacks are also traceable back to users unwittingly giving bad actors access to networks. Whether thieves get users to share personal information via phishing schemes, enter their credentials on a spoofed web site, or download apps with embedded malware, tricking people is the easiest route to cyber-theft. Yes, hackers can exploit system vulnerabilities once they’re inside a network, but user mistakes give them the foothold. Kevin Mitnick, a notorious hacker of the 1980s and early 1990s, famously told a BBC interviewer, “The lethal combination is when you exploit both people and technology. What I found personally to be true was that it’s easier to manipulate people rather than technology. Most of the time, organizations overlook that human element.”

Plugging the People Gap

Healthcare organizations face challenges in plugging the human security gap. The biggest risk is a lack of awareness on the part of users. Even if your organization has good security processes and training, and even if people faithfully follow security procedures at work, they are typically unaware that actions in their private lives can put their employer at risk. A chance comment on Facebook, reuse of the same password on personal and work accounts, or an entertaining but secretly malicious app downloaded to a personal device that is also used at work can vault criminals right past an organization’s network security. If employees are bringing their own devices to work, even their failure to do an OS update with important security patches can put your networks at risk.

The second biggest challenge is visibility: you don’t know and can’t control what websites your employees, customers, and business partners visit, what links they click on in popup windows, or who they chat with online. And you can assume that every user is exposed to multiple risks every day. According to a new report from Palo Alto Networks, over 40 percent of all email attachments and nearly 50 percent of portable executables examined by Palo Alto’s WildFire software were found to be malicious. The report also found that the average time to “weaponize” world events—to create phishing or other schemes to capture passwords or deliver malware—is six hours. Just think: within a few hours of an earthquake in Chile or a tsunami in Japan, your well-meaning employees trying to donate to a relief fund can be spoofed into providing information that leads to a data breach.

Improving Your Odds

Humans can’t be error-proofed any more than technology, but there are things you can do to help your workforce, customers, and partners keep your organization and their information secure. A recent blog by Jeff Peters of SurfWatch Labs recommends fighting social engineering with user awareness programs and using technology to limit exposure. Email coming into your networks can be scanned for malicious attachments and links. Periodic security training is great, but ongoing education is also needed: How about a short, fun weekly or monthly newsletter with news of scams and tips on how to avoid them? How about a bulletin board where users can post suspected scams and get recognition for warning others?
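The attachment-scanning idea above can be illustrated with a toy filter. Everything in this sketch is hypothetical: the blocked extensions and the hash list are placeholders, not real threat data, and a production mail gateway would rely on maintained threat-intelligence feeds, sandbox detonation, and link analysis rather than a static lookup.

```python
import hashlib

# Hypothetical policy data for illustration only.
BLOCKED_EXTENSIONS = {".exe", ".js", ".scr", ".vbs"}
KNOWN_BAD_SHA256 = {
    # Placeholder digest; a real system would pull these from a threat feed.
    "0000000000000000000000000000000000000000000000000000000000000000",
}

def is_suspicious(filename: str, content: bytes) -> bool:
    """Flag an attachment if its extension is blocked or its hash is known-bad."""
    ext = "." + filename.rsplit(".", 1)[-1].lower() if "." in filename else ""
    if ext in BLOCKED_EXTENSIONS:
        return True
    digest = hashlib.sha256(content).hexdigest()
    return digest in KNOWN_BAD_SHA256
```

Even a crude filter like this stops the lowest-effort attacks; the harder cases (malicious macros in office documents, credential-phishing links) need the behavioral analysis and user education the article recommends.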

Despite your best efforts at promoting security, people will make mistakes. Among other things, scammers will capture or even guess passwords. Vast numbers of people still use birthdates, the names of pets or children, or other personal information for passwords. A new study covered in the Financial Times found that even nuclear plants around the world are still using factory-set passwords such as “1234” for some equipment. For this reason, some security experts are beginning to advocate doing away with passwords altogether for critical systems and moving to multi-factor authentication. TechTarget reported that at the IAPP Privacy. Security. Risk. 2015 conference, keynote speaker Brian Krebs advocated stronger authentication schemes, saying “From my perspective, an over reliance on static identifiers to authenticate people is probably the single biggest threat to consumer privacy and security.” In the Federal Times article mentioned above, Jeremy Grant, a senior executive at NIST, advocates doing away with passwords. He uses two-factor authentication on his phone—biometric identification (a thumbprint) plus derived credentials from a CAC or PIV card—so that there is nothing to remember and nothing that can be stolen.
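The multi-factor schemes Krebs and Grant describe come in many forms. One widely deployed building block, shown below as an illustration, is the time-based one-time password (TOTP) of RFC 6238 used by authenticator apps; note this is a substitute example, not the biometric/PIV scheme described in the article.

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, for_time=None, digits=6, step=30):
    """Compute an RFC 6238 time-based one-time password (HMAC-SHA1 variant)."""
    key = base64.b32decode(secret_b32, casefold=True)
    # The moving factor is the number of 30-second steps since the Unix epoch.
    counter = int(for_time if for_time is not None else time.time()) // step
    msg = struct.pack(">Q", counter)
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    # Dynamic truncation per RFC 4226: the low nibble of the last byte
    # selects a 4-byte window, masked to 31 bits.
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)
```

Because the code is derived from a shared secret and the current time, a password captured by a phisher expires within seconds, which is exactly the property static passwords lack.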

No Foolproof Solutions

Speaking at the Privacy. Security. Risk. 2015 conference, retired RSA chairman Art Coviello said that, with cloud computing and other new technologies, “The attack surface has expanded so dramatically that it’s becoming unfathomable…The United States is living in the biggest and most vulnerable digital glass house on the planet.” With medical data scattered from the cloud to multiple points of care and to the personal devices of millions of healthcare workers, security failures are going to happen. You may not be able to fool all of the people all of the time, as Abraham Lincoln said, but cyber-criminals can fool enough of the people enough of the time to eventually overcome virtually any defense. Unless you envision a perfectly consistent robotic healthcare workforce with no personal lives (oh, wait, robots could be hacked), you can’t count on your staff, users, or your business associates to be 100 percent secure, 100 percent of the time. Ultimately, the best you can do is to educate people, monitor consistently and comprehensively for security incidents—based on thorough and up-to-date risk analysis—and have plans and teams ready to respond when human error leads to human and business peril.


Comments

  1. Good Morning Rick,

    Thanks for posting. Agreed – the biggest risk that organizations face is the comfort level of their employees.

  2. Kam’s post highlights a key shortcoming that many organizations experience when trying to increase the reliability, or “error proofing” (Kam’s term), of employees and their actions.

    In this particular post, lack of awareness is pointed out, as well as negligence. Where I think organizations miss a key opportunity is in digging deeper, even with some type of rudimentary analysis (root cause or otherwise), to identify mitigating actions that decrease the likelihood of risks related to the human factor. My experience has been, and continues to be, that when this analysis is skipped (again, much of it is very easy and doesn’t have to be delegated to a slow death in committee), the simple steps that would reduce the contributing factors behind an error are never identified, and it should be no surprise that the same errors keep recurring for the same reasons.
