The Threat of Information Thieves

You know what’s cheap in the 21st century? Compute time. You know what’s expensive? Human judgment. And they’re not interchangeable. Humans are good at understanding things, computers are good at counting things, but humans suck at counting and computers suck at understanding. – Cory Doctorow, Pester Power

This blog is increasingly becoming an asylum for victims of social engineering. The rant on the Huurwoningen scam already has over 100 comments, and the Facebook trickery post also generates a fair amount of traffic. What bothers me most is that in the Huurwoningen case, Song Chine and others keep on doing what they are good at: manipulating and deceiving people in order to scam them. They send the same nice, plausible and “innocent” e-mails to unsuspecting people in an attempt to persuade them to comply with their requests. But of course, the statements they make are completely false. And the sad thing is that there seems to be no law enforcement against these malevolent acts. There is no evidence that the charlatans are punished, and they don’t seem to face the felony charges they deserve. What also strikes me is that there is no online haven or portal where victims can report such incidents of online deception, trickery and bogus calls. Another big issue with the phenomenon of social engineering is that very little is known about the exact length and breadth of its problem space. In a world built on digital information and global connectivity, that is dangerous. There is also no mechanism that triggers the Federal Trade Commission (FTC) to step in. I think there’s still a long way to go. The law is currently not transparent, and it is not clear what is covered and what is not. Which institution should take ownership of what? The cybercops? Should service providers be made accountable for informing the cybercops of social engineering felonies? I don’t have an answer to that, but what I do know is that social engineering techniques are increasingly being applied for unauthorized system access, information gathering and fraud. First and foremost, I believe people should be educated about social engineering techniques and their consequences.
So in an effort towards creating more awareness, I am writing this piece.

Let me step back and start with the security of Information Systems. My alma mater colleague Peter De Bruyn wrote an excellent article on social engineering in Informatie, and to a certain extent I will base myself upon his writings. He notes that when we talk about the security of Information Systems, usually only the technical side is looked at. Firewalls, encryption and RAID are good examples of that. However, these techniques cannot protect Information Systems under all circumstances. There are also human-related risks to Information Systems, and that’s where social engineering comes in. Indeed, attacks that exploit a person’s gullibility can also create a tremendous amount of damage, and such social engineering attacks are very hard to detect, because things aren’t always what they seem. A social engineer aims to catch us unaware, and leverages psychological power to move the victim into an unauthorized act.

It turns out there is no single, unanimous definition of social engineering. The Wikipedia definition seems to be a good one though: social engineering is a collection of techniques used to manipulate people into performing actions or divulging confidential information. The thing is that almost everyone is vulnerable to social engineering attacks, and might perhaps even commit such acts. I am going to make a bold statement here: humans are stupid. Social engineers abuse fundamental human psychological properties and decision-making attributes (cognitive biases): our willingness to help others, our greed, our naivety, our willingness to perform, our fears, our use of “mental shortcuts” and our initial goodwill and trust towards others. Secondly, people might commit such crimes because humans lie to each other. Under certain circumstances, people will do almost anything (e.g., cyber-stalking, intimidating others) to obtain information that is important to them. On a side note, I remember a mathematics teacher in high school once told me that any human being is capable of killing someone – just think of what happened during the Holocaust. There is a vast number of social engineering methods; for your information, these are the most popular ones: pretexting, diversion theft, phishing, tailgating, baiting and quid pro quo. It’s not my intention to explain them in detail here though.

So what does a typical social engineering attack look like? Traditional computer attacks focus on technical weaknesses in hardware and software. Social engineering attacks, however, focus on human weaknesses. Some attacks can have a mixed character and will also exploit technical weaknesses, as Peter points out. Good examples are trojan horses and (spear) phishing: you are offered a “useful” application, or all kinds of “vital” information is requested to better “protect” a target. Ultrasurf, for instance, is used to circumvent Internet censorship, but it is still a black box: you cannot know what the code does unless you reverse engineer it. Luckily, antivirus programs report that it contains spyware and several trojan horses (which may actually enable government surveillance). Peter identifies four typical phases in a social engineering attack; it is a recurring pattern. In the first phase, the social engineer collects personal information about the victim: names, e-mail addresses, telephone numbers and so on. Subsequently, the social engineer tries to build a trust relationship with the victim. He uses the information gathered earlier to take away any suspicion by indicating his familiarity with the potential victim’s environment (e.g., name-dropping). The third phase consists of gathering more specific information, like IT infrastructure and architectures, server and application names, usernames and passwords. To get at this detailed, valuable information, psychological and emotional pressure will be applied. And finally, the attacker exploits this information to get unauthorized access to the systems, to change, delete or copy data, and basically use it to his or her own benefit.

The question that then arises is: how can we combat the social engineering phenomenon? First and foremost, there will always be a risk of a successful attack based on social engineering. Humans expose inherent weaknesses. Computers do too, because they cannot interpret data carefully enough (but that’s another interesting and philosophical discussion). I think that recognizing the pattern described above could provide the impetus to protect oneself against these attacks; however, organizations and the public services should also take measures to combat this evilness. Peter distinguishes three types of measures that can be taken. The first is to formulate clear and concrete policies about physical access to systems, listing the types of activities and actions that employees are (dis)allowed to perform when providing certain information. One can classify this information by separating sensitive, private, internal and public information. Every policy should also contain clear instructions about identity verification (e.g., how to use passwords correctly). It is also imperative that personnel are encouraged to report suspicious acts to a central repository so that they can be centrally monitored. This will help security officers step in and arrange for a forensic examination. A second measure is raising security awareness through training and education. People need to be aware in their day-to-day work of what to do to assure the security of the information they handle. Employees should know why they need to respect the policies and what happens if they don’t. In corporate speak: policies should be executed as if they were a basic hygiene factor. Social engineering penetration tests can also help raise security awareness by exposing vulnerabilities in the policies. Finally, some authors suggest taking technical measures too, to protect oneself from social engineering attacks.
Think of changing passwords frequently, strong authentication, time-based tokens or biometric identity checks. By the way, a big research topic nowadays deals with how to model and analyze the socio-technical aspects of modern security systems, and how to protect such systems from socio-technical threats and attacks. This requires different communities of researchers (experts in computer security and in cognitive, social and behavioral sciences) to sit together, in order to identify weaknesses potentially emerging from poor usability designs and policies, from social engineering, and from deficiencies hidden in flawed interfaces and implementations.
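To make the first measure a bit more tangible: an information classification policy can even be encoded so that systems enforce it. Here is a minimal, purely illustrative sketch in Python; the class names follow the four categories mentioned above, but the disclosure rules are my own made-up assumptions, not Peter’s:

```python
from enum import Enum

class Classification(Enum):
    SENSITIVE = 1   # e.g. passwords, keys: never disclosed on request
    PRIVATE = 2     # e.g. personal data: verified identity and employee status required
    INTERNAL = 3    # e.g. org charts, server names: employees only
    PUBLIC = 4      # e.g. press releases: freely shareable

def may_disclose(level: Classification, identity_verified: bool, is_employee: bool) -> bool:
    """Illustrative rule: the stricter the class, the stronger the check required."""
    if level is Classification.PUBLIC:
        return True
    if level is Classification.INTERNAL:
        return is_employee
    if level is Classification.PRIVATE:
        return is_employee and identity_verified
    return False  # SENSITIVE: never handed out over the phone or by e-mail

# A caller who merely claims to be a colleague gets internal info only
# after their employee status is established:
print(may_disclose(Classification.INTERNAL, identity_verified=False, is_employee=True))  # prints True
```

The point of such a sketch is not the code itself, but that the policy forces the question “which class is this information, and which check has the caller passed?” before anything is handed out.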
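As an illustration of one of those technical measures, here is a minimal sketch of how a time-based token works, in the spirit of RFC 6238 (TOTP): both parties share a secret, and a code derived from the current time window is only valid for a few seconds, so a code tricked out of a victim expires almost immediately. The secret below is a made-up demo value; a real deployment should use a vetted library rather than this sketch:

```python
import hmac
import hashlib
import struct
import time

def totp(secret: bytes, timestamp: int, step: int = 30, digits: int = 6) -> str:
    """Derive a time-based one-time code from a shared secret (RFC 6238 style)."""
    counter = timestamp // step                       # which 30-second window we are in
    msg = struct.pack(">Q", counter)                  # 8-byte big-endian counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                        # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# Both sides compute the same code independently; it changes every 30 seconds.
secret = b"example-shared-secret"                     # made-up demo value
print(totp(secret, int(time.time())))
```

Note how this resists the third attack phase described above: even if a social engineer pressures a victim into reading a code aloud, it is useless once the time window has passed.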

As long as the exact size of the social engineering problem is unknown, it is also hard to mine the phenomenon. So I am eagerly waiting for an online mass initiative to sprout that will allow us to fight this never-ending conflict. There is definitely room for improvement in the collaboration between citizen and government, employee and organization, and in the enforcement of law and order. What we do know is that everyone can become a victim of social engineering, and that such techniques are applied by insiders and outsiders alike. And I am going to end with another bold and sardonic statement: no matter how much we care, no matter how well aware we are, we will not and cannot stop social engineering crimes. Social engineers are shrewd, and maybe sometimes we can outsmart them. But that’s also where it ends. Securing ourselves against such attacks is a very complex and never-ending process.

Update 1: Here are some pointers. In Belgium, users can report Internet crimes via eCops. You needn’t worry about which service is qualified for what: eCops makes sure that your report is investigated by the appropriate service. In the Netherlands, the NCSC recently started operating. The NCSC works to enhance the resilience of Dutch society in the digital domain. Its goal is to realize a safe, open and stable information society by sharing knowledge, offering insight and offering a proper perspective for action.
