Cybersecurity is often framed as a technical discipline. Firewalls, detection systems, and infrastructure resilience remain central to how organisations approach risk. Increasingly, however, attacks exploit structure, scale, and behaviour as much as code.
Ahead of Tech Show Frankfurt on 6-7 May at Messe Frankfurt, Mark T. Hofmann, criminal psychologist and cybercrime expert, examines how cybercrime has evolved from isolated actors into commercially structured ecosystems, and what that evolution means for risk assessment, workforce training and organisational response.
– – – – – –
You’ve spent time studying and interviewing cybercriminals directly. What do organisations still misunderstand about who these actors are today?
Many organisations still underestimate how professional and structured cybercriminals are. The cliché of a lone hacker in a hoodie is outdated. What we actually see is a real economy with clear roles and responsibilities. There is IT support, customer service, affiliate systems, logos, branding, and even quality management. Some ransomware groups operate more professionally than many legitimate companies, to be honest.
At the same time, organisations misunderstand motivation. It is not just about money. Money is often the entry point, but then comes the challenge, status, power, recognition, and the thrill. Many are proud of what they do. They see it as a skill set. If you misunderstand who your opponent is, you can’t defend against them.
There’s still a perception of hackers as isolated individuals. How accurate is that picture now, and what does the reality look like?
That picture is largely inaccurate today. Of course, there are still individuals, but most serious attacks are not carried out by isolated actors. They are organised ransomware groups, often globally distributed, working like companies.
We see ransomware-as-a-service models, affiliate programs, and even internal hierarchies. Some groups publish press releases or run recruitment processes on darknet forums. They collaborate, they outsource, they specialise. One group develops malware, another handles negotiations, and another focuses on initial access, acting as a broker.
Cybercrime in 2026 is not a lone wolf problem. It is organised, scalable, and highly efficient.
You’ve described cybercrime as an economy in its own right. How does that change the way organisations should think about risk?
If you understand cybercrime as an economy, then you understand that it follows economic logic. Supply and demand, specialisation, scalability. That changes everything.
You are not dealing with random attacks. You are part of a market. Your company is a potential target, and attackers evaluate you based on profitability, vulnerability, and expected return. High-profile targets are, of course, more profitable, but in hospitals the urgency and pressure to pay may be higher. Build as many barriers as you can and make it as hard as possible for attackers to get in. Be an unattractive target.
AI and deepfakes are becoming more accessible. How are attackers actually using these tools in practice today?
We already see AI being used to scale and improve social engineering. Phishing emails are now written in perfect language, without errors, fast and automated. That removes one of the classic indicators people were trained to look for, like typos or awkward phrasing. It also means that barriers to entry are lower than ever. You do not need to be a highly skilled hacker anymore. You can buy tools, services, or even complete attacks. That increases the number of potential attackers significantly. With AI agents, this might get worse fast.
Deepfakes take this to another level. Voice cloning and video manipulation allow attackers to impersonate real people very convincingly. A few seconds of audio can be enough to clone a voice.
There have already been cases where voice cloning was used to authorise large financial transactions. This is the future of CEO fraud.
AI lowers the entry barrier and increases the quality at the same time. That is a dangerous combination.
Many attacks rely less on technology and more on human behaviour. Which psychological principles are most commonly exploited?
Cybercrime is primarily a psychological problem. The main levers are always the same: emotion, urgency, and authority.
Attackers create pressure, trigger emotions, and push people into fast decisions. Humans react reflexively before they think. That is exactly what attackers exploit.
Trust is another key factor. If something sounds familiar, if a voice is known, if the request appears legitimate and urgent, people tend to comply.
And then there is routine. People are busy, distracted, and under pressure. In that state, even trained individuals make mistakes. Everyone can be hacked; it’s not about intelligence.
You’ve spoken about tactics like false identity, urgency, and authority. Why do these techniques remain so effective, even in security-aware organisations?
Because awareness alone is not enough. Knowing something is one thing; acting correctly under pressure is something completely different.
If someone calls you, claims to be IT support, creates urgency, and tells you there is a problem, your brain switches into reaction mode, especially in stressful environments.
Authority is extremely powerful. If the “CEO” or “police” calls, people tend not to question. Urgency removes time to think, and false identity creates trust.
Even highly trained professionals can be deceived. That is why routines and verification processes are so important. Not just knowledge, but habits. A good example is locking your laptop whenever you leave your desk. I even do this when I am all alone in my home office, not because I am paranoid, but because it’s a habit.
What do organisations tend to overestimate about their security and underestimate about their exposure?
They overestimate their technical security and underestimate the human factor.
You can have firewalls, network security, and all technical measures in place. But if an employee reveals a password on the phone or clicks a malicious link, which firewall can prevent that kind of behaviour? You need both: technical security and the human firewall.
At the same time, many organisations believe they are not interesting enough to be targeted. That is one of the biggest myths. Small size does not protect you. Even kindergartens are attacked. If you earn more than a kindergarten teacher, you are a potential target.
Most attacks start with human error. But the same humans can also be the strongest line of defence if they are trained, motivated, and aware.
That is the idea of the human firewall. Secure your systems, but more importantly, train your people. As a speaker, my motto is "Make Cybersecurity Great again". I think cybersecurity can, and has to, be entertaining if it is to truly reach people and inspire them to change their behaviour.
