My published articles

📖 Publisher: ComputerWeekly Think Tank

📆 Published: 15 Dec 2025

🔗 Link: https://www.computerweekly.com/opinion/The-three-cyber-trends-that-will-define-2026

🔗 Full article text in My blog post

Synopsis: As we prepare to close out 2025, the Computer Weekly Security Think Tank panel looks back at the past year, and ahead to 2026.


"In Cyber security, basics matter!" even in 2025

📖 Publisher: ComputerWeekly Think Tank

📆 Published: 15 Dec 2025

🔗 Link: https://www.computerweekly.com/opinion/In-cyber-security-basics-matter-even-in-2025

🔗 Full article text in My blog post

Synopsis: As we prepare to close out 2025, the Computer Weekly Security Think Tank panel looks back at the past year, and ahead to 2026. What a year 2025 has been: rich in cyber events and innovations alike. On the latter, not a week has passed without a mention of innovation in Artificial Intelligence (AI). I am excited about the innovative ways AI is going to be used to benefit our society; perhaps this is the Fourth Industrial Revolution arriving. The level of useful innovation in cyber security, despite some questionable claims by certain vendors, will increase in 2026 with new products and services.


The UK’s secret iCloud backdoor request: A dangerous step toward Orwellian mass surveillance

📖 Publisher: Help Net Security

📆 Published: 13 Feb 2025

🔗 Link: https://www.helpnetsecurity.com/2025/02/13/uk-government-icloud-backdoor-request/

The LinkedIn post

Synopsis: "The United Kingdom government has secretly requested that Apple build a backdoor into its iCloud service, granting the government unrestricted access to users’ private data. This revelation deeply concerns me – it is a blatant overreach that threatens privacy, security and civil liberties.

This raises an urgent question: should technology companies be forced to bow to government pressure and bring in George Orwell’s 1984 nightmare, or should they remain steadfast in protecting our privacy rights?"


A humble proposal: The InfoSec CIA triad should be expanded

📖 Publisher: Help Net Security

📆 Published: 16 Jan 2025

🔗 Link: https://www.helpnetsecurity.com/2025/01/16/infosec-cia-triad/

The LinkedIn post

Synopsis: "The inconsistent and incomplete definitions of essential properties in information security create confusion within the InfoSec community, gaps in security controls, and may elevate the costs of incidents.

In this article, I will analyse the CIA triad, point out its deficiencies, and propose to standardize the terminology involved and expand it by introducing two additional elements."


In the cloud, effective IAM should align to zero-trust principles

📖 Publisher: ComputerWeekly Think Tank

📆 Published: 27 Nov 2024

🔗 Link: https://www.computerweekly.com/opinion/In-the-cloud-effective-IAM-should-align-to-zero-trust-principles

In today’s digital landscape, the traditional security perimeter has dissolved, making identity the new frontline of defence. As organisations increasingly adopt cloud services and remote work models, managing and securing identities has become paramount. Effective Identity and Access Management (IAM) practices are essential for IT departments to safeguard against cyber-attacks, phishing attempts, and ransomware threats. By implementing robust IAM strategies, organisations can ensure that only authorised individuals have access to critical resources, thereby mitigating potential security risks. Let’s dive into the most important things to focus on, all of which are aligned to core zero-trust principles.

Verify explicitly

One of the main drivers fuelling the ongoing adoption of cloud technology is the unparalleled ease of access to resources from anywhere, from any device, at any time of day. In practical terms, though, it would be short-sighted to allow this level of unchallenged access without verifying that the access requests are being made by the correct person. After all, we still live in an age where usernames and passwords are often written down near the devices they’re used on. IT security teams should have sturdy mechanisms in place to explicitly verify these access requests so that access can be granted with confidence, especially from unrecognised network locations.

Some examples of how this could look in practice would be using strong multi-factor authentication (MFA) methods to secure requests. Strong methods include approving an access request via a notification in your chosen authenticator app on a smart device (itself unlocked using biometrics) or using a number matching prompt, so that the requestor must manually enter the correct answer in their app before access is granted. These methods help counter some of the growing techniques attackers are using to get around MFA prompts: namely, SIM-swapping and MFA fatigue. The emergence of these MFA-focused attack techniques demonstrates that attackers will always try to stay one step ahead of emerging security features.
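
The number-matching flow described above can be sketched in a few lines of Python. This is an illustrative, simplified sketch (not any vendor's actual protocol): the sign-in screen shows a random number, and the push notification is approved only if the user re-types that same number in their authenticator app.

```python
import secrets

def issue_number_challenge():
    """Generate the number shown on the sign-in screen (a 2-digit code here)."""
    return secrets.randbelow(90) + 10  # 10..99

def approve_push(challenge, entered_in_app):
    """Approve the push only when the user re-types the on-screen number.
    A blind 'approve' tap under MFA fatigue cannot supply the right number."""
    return challenge == entered_in_app

challenge = issue_number_challenge()
# The legitimate user can see the screen, so they enter the matching number:
print(approve_push(challenge, challenge))
```

An attacker spamming approval requests never sees the on-screen number, so repeated blind approvals fail, which is exactly why number matching blunts MFA-fatigue attacks.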

MFA isn’t the be-all-and-end-all when it comes to identity security though. It’s merely the first hurdle that security teams must place between an attacker and their goal of compromising an environment. The more hurdles that are in place, the more likely an attacker will give up and move to an “easier” target. MFA will deter most attackers, but not all.

User and Entity Behavioural Analytics (UEBA) is another modern technique that can provide an additional layer of security. Regardless of whether an attacker has managed to get past the MFA hurdle, UEBA continuously monitors the different metrics that are generated when a user interacts with the cloud platform. Any deviations from what’s considered normal for that user are assigned a risk score, and if enough anomalies are caught, it can force the user into a password reset experience, or even lock the account altogether until the security team is satisfied that the account hasn’t been compromised.
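
As an illustrative sketch only (real UEBA products use far richer, ML-driven baselines than these hand-picked weights), the accumulate-then-act pattern might look like this: each deviation from the user's baseline adds to a risk score, and crossing thresholds forces a password reset or a lock.

```python
# Hypothetical per-user baseline and anomaly weights -- invented for illustration.
BASELINE = {"country": "GB", "device_id": "laptop-01", "typical_hours": range(8, 19)}
WEIGHTS = {"new_country": 40, "new_device": 30, "odd_hour": 15}

def risk_score(event):
    """Sum a weight for every attribute that deviates from the baseline."""
    score = 0
    if event["country"] != BASELINE["country"]:
        score += WEIGHTS["new_country"]
    if event["device_id"] != BASELINE["device_id"]:
        score += WEIGHTS["new_device"]
    if event["hour"] not in BASELINE["typical_hours"]:
        score += WEIGHTS["odd_hour"]
    return score

def action_for(score):
    """Map accumulated risk to an account action."""
    if score >= 70:
        return "lock_account"          # hold until the security team clears it
    if score >= 40:
        return "force_password_reset"
    return "allow"

print(action_for(risk_score({"country": "GB", "device_id": "laptop-01", "hour": 10})))  # allow
print(action_for(risk_score({"country": "RU", "device_id": "unknown", "hour": 3})))     # lock_account
```

The key design point survives the simplification: no single signal decides, but enough small anomalies together trigger an escalating response.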

These techniques demonstrate a small piece of what can be done to bolster the IAM platform to be more resilient to identity-focused attacks. Where this will inevitably move in the future is protecting against the use of AI-generated deepfakes. AI technology is becoming more accessible to everyone – and this includes bad actors too. Using features in Microsoft Entra like Verified ID, including having to perform real-time biometric scans to prove authenticity, will be commonplace soon, ensuring that when someone gets that call from the CFO at the end of a Friday afternoon to approve huge invoices for payment, they can have confidence they’re speaking with their CFO, and not an AI-generated video call.

Use least-privilege access

As organisations grow and evolve, so do the permissions and privileges that are provisioned to make the technology work. Over time, identities can accumulate huge numbers of different à la carte permissions to perform very specific tasks. If these permissions aren’t right-sized regularly, some identities can come to carry huge amounts of power over the IT environment. Let’s cover some concepts that help mitigate this risk.

Role-Based Access Control (RBAC) is a way to consistently provision pre-mapped permissions and privileges to suit a specific role or task. These pre-defined roles make it easy to provision the correct amount of rights for the task at hand. Cloud platforms such as Microsoft 365 and Azure come with many roles out of the box, but also allow for custom roles to suit the needs of any organisation. It’s recommended to use RBAC roles as much as possible – doubly so when implementing the next technique.

Just-in-time (JIT) access takes RBAC a step further. Instead of having identities stacked with elevated permissions and privileges 24 hours a day, JIT access grants elevated rights on a temporary basis. Microsoft Privileged Identity Management is an example of a JIT tool: it allows appropriate identities to temporarily upgrade their permissions to a predetermined RBAC role, and can include additional checks and balances such as approvals, forced MFA prompts, email notifications and customisation of how long individuals can hold certain permissions. Ultimately, this means that even if accounts with access to higher privileges are compromised, the bad actor won’t necessarily be able to exploit those permissions.
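
Stripped to its core, JIT access means attaching an expiry to every elevation. Here is a minimal sketch of that idea, loosely modelled on PIM-style activation; the store, function names and role strings are invented for illustration, and a real tool would gate `activate` behind approval and MFA.

```python
from datetime import datetime, timedelta, timezone

# Hypothetical in-memory store of JIT elevations: identity -> (role, expiry).
_active_elevations = {}

def activate(identity, role, hours=4):
    """Grant a pre-defined RBAC role for a limited window.
    (A real JIT tool would require approval and an MFA prompt first.)"""
    expiry = datetime.now(timezone.utc) + timedelta(hours=hours)
    _active_elevations[identity] = (role, expiry)

def effective_role(identity):
    """Return the elevated role only while the window is open."""
    entry = _active_elevations.get(identity)
    if entry and datetime.now(timezone.utc) < entry[1]:
        return entry[0]
    _active_elevations.pop(identity, None)  # sweep away expired elevations
    return None

activate("alice", "Exchange Administrator", hours=2)
print(effective_role("alice"))  # elevated inside the 2-hour window
print(effective_role("bob"))    # None: no standing privilege
```

Because privilege evaporates on its own, a stolen account outside its activation window carries no elevated rights at all, which is the point of the technique.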

In addition to using modern IAM techniques and technologies to keep rights and permissions right-sized, it’s also important to ensure that there are processes in place for good identity hygiene. This can come in many forms, but focusing on Microsoft Entra solutions, we can highlight two specific tools that make these processes run more smoothly than a manual effort. Firstly, access reviews can be used to periodically check identities in an environment and indicate whether they have been using their elevated rights. This empowers service owners to make decisions about who should remain in permission groups. It’s also a fantastic way of auditing external collaborators who have been invited into your tenant via Entra B2B.

Access Packages are another way of keeping permission enablement standardised. Applications, groups, cloud services and more can be grouped into a single package; for example, an “Entry-level Accounting” package might grant access to payroll software, viewer access to multiple SharePoint sites and a specific Microsoft Team. When that person moves departments or gets promoted, removing them from this single Access Package removes all associated access to the bundle of services. This means that stagnant permissions are less likely to accumulate on a given identity.

Assume breach

Even with all the best security tools available, organisations are never 100% immune from attacks. Facing this reality is a key part of a successful security strategy. It’s important to always assume a breach is possible and to increase your resilience so that responding to attacks isn’t a daunting experience. A couple of concepts can be introduced to help out here.

Firstly, the idea of continuous authentication is important to embrace. Rather than adopting the mindset of “user X has successfully performed an MFA request, therefore I’ll grant all the access they’ve asked for”, continuous authentication complements some of the concepts already covered in this article: as highlighted earlier, attackers will always try to get one step ahead of security tooling, so it’s vital that limits are put on access even when the user seems to be doing everything correctly. Nothing does this better than altering the sign-in frequency that users are subjected to, especially when they access content from outside the organisation’s network boundary. Note, though, that there is an important balance to be struck between enforcing sound security practices and impacting the user experience to the point of frustration.

Adaptive Access Controls can also be utilised to galvanise decision-making on access requests. For example, if User X is logging on from their registered device, within the organisational network boundary, to a SaaS platform they use every day – that poses minimal risk, and access should be granted in most instances. However, take User B, who is logging on from an external IP address belonging to a recognised anonymous VPN platform, on an unregistered device, looking to download mass amounts of information from SharePoint. This could be a legitimate request, but it could also be a sign of identity compromise, and real-time adaptive controls such as the sign-in risk and user risk policies in Entra ID Protection can help to keep resources better protected in these scenarios.
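
The two contrasting requests above can be captured in an illustrative policy function. The signals, weights and three-way outcome are simplified stand-ins for what a real conditional-access engine evaluates, not any product's actual logic.

```python
def access_decision(registered_device, corp_network, anonymizer_ip, bulk_download):
    """Hypothetical adaptive access policy: combine request signals
    into one of allow / require_mfa / block."""
    risk = 0
    risk += 0 if registered_device else 2   # unknown device raises risk
    risk += 0 if corp_network else 1        # off-network access raises risk
    risk += 3 if anonymizer_ip else 0       # anonymous VPN exit is a strong signal
    risk += 2 if bulk_download else 0       # mass export is a strong signal
    if risk == 0:
        return "allow"         # User X: everyday sign-in, minimal friction
    if risk <= 3:
        return "require_mfa"   # step-up authentication before granting access
    return "block"             # User B: hold the request for investigation

print(access_decision(True, True, False, False))   # allow
print(access_decision(False, False, True, True))   # block
```

The graded outcome matters: low-risk users keep a frictionless experience, while only genuinely suspicious combinations are stopped outright.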

In summary, implementing a zero-trust security model with a focus on IAM is essential for combating cyber attacks, phishing, and ransomware. By adopting principles such as verify explicitly, least privilege and assume breach, organisations can significantly reduce the risk of unauthorised access and lateral movement within their networks. Technologies like MFA, JIT access and UEBA play a crucial role in enforcing these principles. Additionally, continuous monitoring, identity analytics, and deception technologies help detect and respond to potential breaches swiftly, ensuring a robust and resilient security posture.


Win back lost trust by working smarter

📖 Publisher: ComputerWeekly Think Tank

📆 Published: 23 Sep 2024

🔗 Link: https://www.computerweekly.com/opinion/Security-Think-Tank-Win-back-lost-trust-by-working-smarter

In a typical enterprise, a division of responsibilities is codified: an IT team runs IT systems and a security team operates security systems. There may be little risk of security systems affecting IT systems – until the security tools are running on end-user devices and servers, and as active elements in the network (firewall admins will agree with me: they get lots of unwarranted grief from IT teams claiming “the firewall is slowing things down”).

Among the security tools with potential impacts on IT-managed systems are anti-malware kernel-hooked drivers. As cyber threat actors improve their attacks, so too do the capabilities of anti-malware tools. To perform their function efficiently, these are allowed privileged access to the deeper levels of operating systems and applications. That is where the technical, responsibility and incident management issues arise. To resolve these, IT and security teams must work together, not against each other.

Take a security tool that requires a piece of software (agent/service/kernel driver) to run on IT-managed systems, be they end-user computers or servers. The security team cannot and should not demand that the IT team install the said software on their systems, blindly trusting that “this software is safe”.

Instead, the IT team should insist on proper justification and performance impact testing. An assessment should be made of how these tools, managed by a security team, affect the Recovery Time Objectives (RTOs) and Recovery Point Objectives (RPOs) agreed between the IT team and the rest of the business.

Unfortunately, based on my experience, and on the analysis of the biggest IT incident caused by a security company to date, many enterprises, even in regulated industries, failed to do just that.

You might recall those businesses that, even days after CrowdStrike distributed a faulty channel update and released a fix a few hours later, were unable to resume normal operations. Take Delta Airlines as an example. While all other US airlines restored their operations within two days of the fix being made available, Delta was unable to operate for five days.

While I am not advocating for the reduction of CrowdStrike's portion of the blame, I argue that the failure to resume operations once the fix was available represents a failure of the IT and security teams in the affected organisations.

The IT team's primary objective is to deliver business value by making sure necessary IT systems are available and performing within agreed parameters, while the security team's primary objective is to reduce the probability of material impact due to a cyber event. CrowdStrike was not a cyber event; it was an IT event that was caused by a security vendor. Similar events happen due to Microsoft blunders every year.

Inevitably, the lack of preparedness to restore normal operations within agreed RTOs and RPOs tarnishes the reputations of both the IT and security teams in the eyes of other business executives.

The lost trust and reputation are difficult to regain. As an industry we need to learn from this and work smarter.

The following are three lessons learned from this era-defining incident:

  • Focus on testing recovery based on agreed RTOs and RPOs. Security teams should insist that IT teams perform recovery testing covering scenarios where a security tool makes the operating system non-bootable.
  • CIOs and CISOs should talk jointly to the rest of the business executives, explaining the need for specialised security tooling but also providing assurances that tested recovery is within the agreed parameters (e.g. RTOs and RPOs).
  • Engage with the company’s legal counsel and procurement to review security vendors’ contracts and identify unfair advantages that vendors have embedded regarding compensation due for faults in service delivery.

Are CISOs ready for zero trust architectures?

📖 Publisher: Help Net Security

📆 Published: 20 Feb 2020

🔗 Link: https://www.helpnetsecurity.com/2020/02/20/zero-trust-architectures/

Synopsis: "The concept of zero trust architectures is not new. During my career, I was a member of the Jericho Forum, a group that essentially invented the concept. At that time technology was not mature enough to support a true “zero trust architecture”. This has changed and I firmly believe that today, technology is at a suitable level for enterprises to move to architectures without perimeters."

Zero trust is a concept that is gaining an increasingly large and dedicated following, but it may mean different things to different audiences, so let’s start with a definition. I refer to an excellent post by my friend Lee Newcombe and I agree with his definition of zero trust:

“Every request to access a resource starts from a position of zero trust. Access decisions are then made and enforced based on a set of trust metrics selected by the organization. These trust metrics could relate to the user, their access device, the resource to be accessed, or a combination thereof.”

The concept of zero trust architectures is not new. During my career, I was a member of the Jericho Forum, a group that essentially invented the concept. At that time technology was not mature enough to support a true “zero trust architecture”. This has changed and I firmly believe that today, technology is at a suitable level for enterprises to move to architectures without perimeters.

That said, a true full-scale transition to a zero trust architecture will require more than just changes to network, application and supporting technologies – it will also need to drive large scale security and general IT policies or be driven by a large scale transformation program. And as usual, training will play a big role.

In my opinion, CISOs should prepare for zero trust architectures by:

  1. Engaging expert advice to review the current IT and security architecture and assess the feasibility of migrating to zero trust, which will deliver a roadmap highlighting:
       • Required technology investments
       • Sunsetting of legacy systems
       • Business application updates
       • Updates to policies to ensure alignment to legacy information and privacy frameworks

  2. Training all stakeholders on the concepts of zero trust

  3. Evangelizing the lower cost of exposure from correctly implemented zero trust architectures to CISO peers, C-suite executives and legal counsel, highlighting that the change may be long and costly during transition (while supporting the legacy architecture), but can be shown to have the following benefits:
       • Business competitiveness, being able to scale business applications and places of business without costly investments in traditional network security
       • Limiting potential breaches, as access between applications is limited only to required communications
       • Improved compliance with the “state of the art” requirements of GDPR, potentially limiting the maximum penalty if a less-likely breach occurred

What other business justification could CISOs spell out? One of the benefits is micro-segmentation, which is both a cause and a prerequisite of zero trust architectures – depending on the organization’s starting point. Micro-segmented systems deliver vast benefits: a reduced attack surface, compartmentalization that supports DevSecOps team structures, and – last but not least – improved monitoring.

On that topic and similarly to current security architectures, monitoring for event anomalies, sometimes leading to security incidents, is paramount in zero trust architectures, especially when feeding the monitoring events into an AI engine where a machine learning model is regularly updated by DevSecOps teams (trained to understand data science).

Finally, and probably most importantly, if we accept that the zero trust access decision can be expressed as:

Access granted if [sum(device scores), sum(user scores), sum(resource scores)] > [required device score, required user score, required resource score]

then zero trust architectures are only possible when organizations know exactly what their users, device assets and applications are, and how these are configured, interrelated and secured.
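
One reading of this formula, sketched in Python with invented threshold values, is that each summed trust metric must independently clear its required score: a very trustworthy device cannot compensate for an untrusted user or resource.

```python
def grant_access(device_scores, user_scores, resource_scores,
                 required=(50, 50, 50)):
    """Grant access only if every summed trust metric exceeds its own
    required threshold -- one strong signal cannot offset a weak one.
    The thresholds are illustrative, chosen by the organization in practice."""
    sums = (sum(device_scores), sum(user_scores), sum(resource_scores))
    return all(s > r for s, r in zip(sums, required))

print(grant_access([30, 30], [60], [55]))  # True: every sum clears 50
print(grant_access([30, 30], [40], [55]))  # False: the user score falls short
```

This component-wise check is also why the inventory point that follows matters: you cannot sum scores for devices, users and resources you have never enumerated.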

It may not be a big stretch to conclude that CIS Controls 1-6 are, in fact, the cornerstones of zero trust architectures. And herein lies a problem that most CISOs will face: a high percentage of organizations have very low maturity in the design and implementation of these six core CIS controls, meaning a move to zero trust architecture without sorting the basics first should be avoided.

In conclusion, given the complexities of a zero trust retrofit into existing networks and systems, CISOs should focus their energy on A) embedding zero trust into wider organizational transformation roadmaps, and B) focusing on automating the basic security controls (e.g., CIS 1-6) before attempting potentially costly and doomed-to-fail zero trust re-architecture programs.


You can upgrade Windows 7 for free! Why wouldn’t you?

📖 Publisher: ComputerWeekly Think Tank

📆 Published: 27 Jan 2020

🔗 Link: https://www.helpnetsecurity.com/2020/01/27/upgrade-windows-7-for-free/

“Doomsday is here! The sky is falling! Windows 7 is out of support and all hell will break loose!” – or, at least, that’s what some cybersecurity experts and press outlets want you to think. In this article, I will offer some advice to businesses of all sizes that may need to continue using Windows 7, while understanding the risk. This is my opinion and should be taken as advice only. Every company is different, and your circumstances are likely to vary.

Background

Windows 7 has been Microsoft’s most successful operating system and, it’s safe to say, one of the most loved. Lessons learned from Windows XP, and especially Vista, allowed Microsoft to build a stable operating system that only required one Service Pack, despite being in use for over 10 years.

However, nothing lasts forever, and with Windows 7 end-of-support originally announced way back in 2015, the end ultimately arrived on January 14, 2020.

Microsoft is facing criticism for ending support for all but enterprise customers paying for extended support, but it’s worth noting that Apple faces no criticism for constantly upgrading iOS and macOS and for (rather quickly) ending support for legacy versions of those OSes. Of course, it remains to be seen whether the recent Crypto API spoofing vulnerability will test Microsoft’s resolve to keep Windows 7 unpatched for non-paying customers.

Security benefits of Windows 10

Even Steve Gibson, the world-renowned and respected security expert and my favorite podcaster, who swore that he would never move off Windows 7, is now relenting and moving to Windows 10.

I believe Microsoft has made tremendous progress in the security of their operating system, a process that famously started after the security mishaps of Windows XP and culminated in a memo sent by Bill Gates (then Microsoft’s chairman and chief software architect) to all staff back in 2002. Eighteen years and four major Windows versions later, we finally see the benefits of the Trustworthy Computing initiative: secure-by-design client and server operating systems and applications for on-premise and cloud use.

Here I want to list just a few security benefits of Windows 10:

  • Streamlined and automated security updates, enabled by default.
  • Windows Defender, now a state-of-the-art endpoint protection system, optimally designed to work on Windows 10 and utilizing the power of the Microsoft Cloud for optimal protection.
  • Core operating system protection with Device Guard, Secure Boot, Application Guard, isolated browsing and many other features.
  • Protected folders guarding against ransomware and document theft.

My issue with Microsoft, though, is that not all of these security features are available in the Home edition, which is frequently purchased by individuals, families and small businesses. I urge Microsoft to reconsider this strategy – security should be part of the core operating system for all and not a paid feature, otherwise the concept of Trustworthy Computing cannot be fully delivered.

There is also another reason to upgrade from Windows 7, and it is specifically relevant for businesses that must comply with the GDPR and equivalent regulations around the world. The GDPR requires security controls to be “secure by default” and “secure by design”, with supplemental guidance quoting “state of the art”. As Windows 7 is no longer a supported operating system, one cannot possibly succeed with an argument that keeping an end-of-life system operational in one’s processes is “state-of-the-art” security. Businesses continuing to run Windows 7 should tread carefully – they keep it at their peril.

How to upgrade Windows 7 for free

The good news is that Microsoft still allows free transition to Windows 10. Compatibility should not be a big issue as Windows 10 can run on most systems that supported Windows 7.

The simplest way to perform upgrades is to run the Windows 10 Upgrade Tool which checks the compatibility of your system and guides you through the upgrade.

However, a big obstacle to upgrading could be legacy applications that simply won’t run on Windows 10.

If you cannot upgrade

Sometimes the upgrade is just not possible, so let me present some options for minimizing the risk of security breaches with Windows 7. Please note, I don’t believe these would constitute sufficient compensating controls for GDPR compliance:

  1. Virtualize Windows 7 on top of Windows 10 (available in Professional and Enterprise) and only use it for legacy applications
  2. Limit or preferably block access to the Internet and email from machines running Windows 7
  3. Enable the Windows 7 firewall and make it as restrictive as possible: whitelist only access to required systems and block all incoming traffic
  4. Increase security monitoring of Windows 7 access, file/registry changes and indicators of compromise – assume the operating system is insecure and has been compromised unless proven otherwise

All of the above controls are going to need human and financial resources, which I believe is a good incentive for organizations to fully migrate off Windows 7.

As always, reach out to experts for more detailed advice if your organization is still on its journey to Windows 10.

Conclusion

Those hoping that I was going to justify staying on Windows 7 are likely sorely disappointed.

My advice is “upgrade, upgrade, and UPGRADE” – hardware where possible and operating system without due delay. The cost of new hardware may be daunting, but the cost of a security breach that would have been prevented on a patched, modern and supported system is likely to be much higher.

Hooded hackers? More like ruthless competitors

📖 Publisher: ComputerWeekly Think Tank

📆 Published: 10 Jan 2020

🔗 Link: https://www.computerweekly.com/opinion/Security-Think-Tank-Hooded-hackers-More-like-ruthless-competitors

Let me tell you a story. James had just finished his working week and retired home to enjoy a well-deserved weekend with his family.

However, it was not to be. James, a director of cyber security for a medium-sized hotel chain, received a call from his boss, the CIO, informing him that the company’s online booking system had been taken down due to a security compromise.

And just like that, the idea of a peaceful weekend evaporated, instead replaced by a nightmare of a cyber security breach. The investigation revealed that the hotel chain’s online shopping systems, and specifically the payment page, had been hacked by the Magecart group, which had modified just one JavaScript file, adding 10 lines of code that had been stealing customers’ payment details.

Unfortunately for James’s company, this modification was not detected until an official breach notification from its card processor. The resulting fines, class action, remediation work and lost customer confidence cost the company half of its annual revenue. It almost went bankrupt!

Considering this fictitious story, I want to concentrate on the user and business leader perception of a cyber criminal.

Most people, when told about a cyber attack, imagine a hooded scruffy teenager sitting in a smelly loft of his or her parental house. One only needs to watch the Amazon series Mr Robot to understand why the image holds such sway over the imagination.

But not so fast, please. That is not how most perpetrators of cyber crimes look in reality. It is far better to think of a cyber criminal simply as a white-collar criminal, one who is most likely part of a wider group and motivated by profits.

For them it’s a business – a criminal business, but a business nonetheless. One can easily see an analogy to a normal legal business setup: a back-office team, outsourcing of tedious tasks to other criminal businesses, budgeting and calculating return on investment, internal cyber security delivering essential operational security. Imagine a well-oiled machine with an efficient management structure that many enterprises would envy.

With all that in mind, forward-thinking businesses will do best by thinking of cyber criminal gangs simply as ruthless competitors – ones trying to disrupt business operations or steal a valuable customer database or intellectual property.

Such a change in thinking will shift the focus of business employees and management to implementing appropriate processes, technology and training.

Simply put, nothing focuses the minds of business users as much as a ruthless competitor threatening to put them out of business for its own competitive advantage.


Is it true you can't manage what you don't measure?

📖 Publisher: ComputerWeekly Think Tank

📆 Published: 11 Mar 2019

🔗 Link: https://www.computerweekly.com/opinion/Security-Think-Tank-Is-it-true-you-cant-manage-what-you-dont-measure

What you don’t measure you cannot manage – or can you? Is this a controversial view? It has undoubtedly been branded untrue in the past.

I’m not sure there is a title “True Leader” as they come in all “shapes and sizes” and probably fit into the commonly defined categories such as “autocratic”, “democratic” and “laissez-faire”. In standard business practices, true leaders lead teams and companies by instinct and are influenced by previous experiences (both good and bad).

I wonder how risk-averse they really are, or whether they rely on gut instinct, feeling comfortable because they have successfully made decisions based on intuition on many occasions. Is there a parallel to betting on red continually until the ball drops onto a black number?

Many cyber attacks happen because business leaders underestimate typical cyber risks: in the end, running a business is all about taking risks. However, “gut instinct” only takes you so far – until the ball drops onto a black number!

The biggest problem with using “gut instinct” to gauge cyber security risks is a lack of understanding of adversaries, and underestimating the impact and likelihood of an event occurring. Some might call it wilful neglect: refusing to acknowledge that any meaningful threat exists.

As a result, business leaders, when confronted with a cyber security state of play, dismiss it as fear, uncertainty and doubt (FUD). Did I hear someone say, “wilful neglect”? But let me be clear – it has been fear that has been driving our decisions for millennia. Those who did not fear were more likely to succumb to catastrophic events from which survivors learned.

I therefore strongly believe that “fear” is useful leverage when asking executives to improve the cyber security budget.

The key, however, is to bring believable data (eliminating uncertainty and doubt) that compares your organisation with its peers and assesses your exposure to other catastrophic events (i.e. cyber security incidents). Just offering up statistics is relatively pointless. It isn’t until you start talking financial impact that you may begin to gain the executive board’s attention.
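One simple, widely used way to translate risk statistics into financial impact is annualised loss expectancy (ALE = single loss expectancy × annual rate of occurrence). A minimal sketch follows – the scenarios and figures are purely illustrative assumptions, not data from this article:

```python
# Minimal sketch: translating cyber risk into financial terms using
# annualised loss expectancy (ALE = SLE x ARO).
# All scenarios and figures below are illustrative assumptions.

def annualised_loss_expectancy(sle: float, aro: float) -> float:
    """Expected yearly loss: single loss expectancy x annual rate of occurrence."""
    return sle * aro

risks = {
    # risk name: (estimated cost per incident, estimated incidents per year)
    "ransomware outage": (500_000, 0.3),
    "customer data breach": (1_200_000, 0.1),
    "phishing-led fraud": (50_000, 2.0),
}

# Present the risks ordered by expected yearly loss, largest first.
for name, (sle, aro) in sorted(risks.items(),
                               key=lambda kv: -kv[1][0] * kv[1][1]):
    print(f"{name:>22}: ALE = {annualised_loss_expectancy(sle, aro):,.0f}")
```

Numbers like these are crude, but they move the conversation from abstract statistics to the financial language the board understands.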

The above is my long-winded introduction to the conclusion: yes, collect metrics and key risk indicators (KRIs) – but be selective. Now, you are saying, “that’s clever of you, but which ones?” The answer to that question is a typical one: “It depends!” If you are starting on your cyber security journey, try asking the following questions:

Are you confident you know all your data stores – including personal computers, test/dev servers, cloud applications, mobile devices and even USB sticks – that could be targeted, causing serious data loss? Here’s my (not so) secret: most organisations could not confidently answer “yes”.
Are you able to enumerate all vulnerabilities present in your systems that could be exploited in current active attacks by adversaries? Answering this question is not as easy as running vulnerability scans. A well-established vulnerability management programme is vital here.
Are you able to detect cyber incidents within hours of compromise? Spoiler alert: most organisations are not, and the mean time to incident discovery is around 180 days!

Only when you can answer the above questions confidently, with complete honesty and good conscience, with a resounding “yes”, should you bother with metrics and KRIs. Otherwise, it is a waste of time and resources that would be better spent changing the “no” answers to “yes” for the questions mentioned above.
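The three questions above can themselves be turned into starter KRIs. A minimal sketch, assuming illustrative numbers (the 180-day figure echoes the mean time to discovery mentioned above; everything else is invented for the example):

```python
# Minimal sketch (assumptions, not the article's prescribed method):
# turning the three starter questions into simple key risk indicators.

def data_store_coverage(known_stores: int, estimated_total: int) -> float:
    """KRI 1: fraction of data stores that are known and inventoried."""
    return known_stores / estimated_total

def exploitable_vuln_ratio(exploitable: int, total_open: int) -> float:
    """KRI 2: share of open vulnerabilities known to be actively exploited."""
    return exploitable / total_open if total_open else 0.0

def mean_time_to_discovery(days: list[float]) -> float:
    """KRI 3: mean time, in days, from compromise to detection."""
    return sum(days) / len(days)

print(f"Data store coverage:   {data_store_coverage(120, 200):.0%}")
print(f"Exploitable vulns:     {exploitable_vuln_ratio(35, 700):.0%}")
print(f"Mean time to discover: {mean_time_to_discovery([30, 200, 310]):.0f} days")
```

Even three crude numbers like these, tracked over time, are more persuasive to an executive board than a long dashboard of technical statistics.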

In summary, please don’t rely on pure gut instinct: good leaders will make full use of the specialists they have, to provide the necessary information to help make the right decision.


No tech will ever counter-balance poorly implemented processes

Computer Weekly

📖 Publisher: ComputerWeekly Think Tank

📆 Published: 12 Feb 2019

🔗 Link: https://www.computerweekly.com/opinion/Security-Think-Tank-No-tech-will-ever-counter-balance-poorly-implemented-processes

Sometimes I get into discussions pertaining to the usage of the latest technologies to thwart data breaches. In many cases, the debate quickly steers into suppliers, capabilities and features. I try my best to get my point across: cyber security starts with processes at the hygiene level, and once these are implemented to a satisfactory level, more advanced processes can be added.

It seems dangerous to me that cyber security processes are so undervalued in the portfolio of security programmes. Instead, companies put various technologies in place, in some cases implementing them without a care for how they will be managed, monitored and integrated into the rest of their processes.

The point of this rather lengthy introduction is that unified threat management (UTM), or any other technology for that matter, is no good without well-executed processes. As I alluded to in my previous Computer Weekly Security Think Tank contribution, start with the critical controls implemented as processes – supported by trained people, good configuration and managed technologies. It is only then that we stand a realistic chance of protecting against data breaches.

I would like to follow with a piece of advice to any security, IT and business executives: start small but focus on implementing Center for Internet Security (CIS) process controls 1 to 6. Many data breaches would be avoided if companies followed this advice.

Yet, I recognise it is not as simple as 1 to 6. Controls 1 and 2 mandate a well-executed asset management process resulting in an accurate configuration management database (CMDB) and change management processes. That is no small feat. In fact, I have yet to see an organisation where asset management works as a mature process.
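At its core, what Controls 1 and 2 demand can be stated very simply: reconcile what is actually on the network against what the CMDB says should be there. A minimal sketch, with invented host names (a real programme would feed this from discovery scans and the CMDB itself):

```python
# Minimal sketch (illustrative, not a full asset management programme):
# comparing hosts discovered on the network against CMDB records to
# surface the drift that undermines CIS Controls 1 and 2.

cmdb_assets = {"web01", "db01", "mail01", "retired-fs01"}
discovered = {"web01", "db01", "mail01", "dev-test-03", "bob-laptop"}

unmanaged = discovered - cmdb_assets   # on the network, not in the CMDB
stale = cmdb_assets - discovered       # in the CMDB, no longer seen

print("Unmanaged (investigate and onboard):", sorted(unmanaged))
print("Stale records (verify and retire):  ", sorted(stale))
```

The set arithmetic is trivial; the hard part, as noted above, is running the discovery and change management processes that keep both sides of the comparison accurate.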

Controls 3 and 5 call for patch, vulnerability and hardening management. These are even harder processes to master. Appearances are misleading in vulnerability management – many successful data breaches have exploited older, less critical vulnerabilities.

Control 4 is almost a non-starter for many organisations that take the simple route of giving every user admin privileges on their laptops and PCs. It is not helped by the default setup in Windows and macOS, and we know what happens to defaults – they stick.

Finally, Control 6 requires actively looking for incidents in logs. In heterogeneous networks, log sources are plentiful and varied, making effective monitoring non-trivial.

Do you need one or six different technologies? My diplomatic answer is: it depends. However, please do not start with this question in the first place. Lead with processes, desired outcomes, people and resources, and add technologies at the end of that decision process.


Walk before you run

Computer Weekly

📖 Publisher: ComputerWeekly Think Tank

📆 Published: 16 Jan 2019

🔗 Link: https://www.computerweekly.com/opinion/Security-Think-Tank-Walk-before-you-run

We have all tested this postulate: “One needs to first walk before running.” This applies in life as well as in cyber security. I have seen many companies buying shiny, blinking boxes without first addressing fundamental controls, and then failing to receive the promised value from these investments.

Having said that, the paradigm of zero-trust networks, software-defined datacentres and containerisation delivers an exceptional level of security through automation, asset management, self-healing policies and application partitioning.

However, as with anything in IT and cyber security, an exceptional technology operated by untrained and undisciplined people, following poorly thought-through and poorly documented processes, is bound to fail. Even worse, a false sense of security could mean a higher likelihood of successful attacks.

For companies to benefit from these advanced technology patterns, they need to rethink their processes, eliminating the human element as much as possible; rethink security policies, moving towards industry standards rather than bespoke ones; and, most importantly, train people to use, manage and monitor the new technologies.

The key controls should still be implemented even when having these advanced technologies:

  • An accurate and detailed CMDB (configuration management database), structured from business processes down to infrastructure.
  • A real-time vulnerability and threat management programme.
  • Secure baseline builds and automated reporting/remediation of compliance failures.
  • Well-designed identity and access control – ideally expressed as code and linked to a single source of truth of identities, roles and organisational structure.
  • Monitoring of events for unusual, out-of-norm activity, with a follow-up process.

There is more, but these represent the absolute minimum needed to reach the level of benefit promised in your business case for investment into zero-trust networks, software-defined datacentres and containerisation.
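To make “access control expressed as code” concrete, here is a minimal sketch: role grants declared as data and checked against a single source of truth of identities. The users, roles and permission names are illustrative assumptions, not a real product's schema:

```python
# Minimal sketch of access control "expressed as code": role grants are
# declared as data and evaluated against a single source of truth of
# identities. All names and roles are illustrative assumptions.

ROLE_GRANTS = {
    "finance-analyst": {"ledger:read", "reports:read"},
    "sysadmin": {"servers:admin", "logs:read"},
}

IDENTITY_SOURCE = {  # single source of truth: user -> roles held
    "alice": {"finance-analyst"},
    "bob": {"sysadmin"},
}

def is_allowed(user: str, permission: str) -> bool:
    """Grant access only if some role held by the user carries the permission."""
    roles = IDENTITY_SOURCE.get(user, set())
    return any(permission in ROLE_GRANTS.get(r, set()) for r in roles)

assert is_allowed("alice", "ledger:read")
assert not is_allowed("alice", "servers:admin")  # least privilege holds
assert not is_allowed("mallory", "logs:read")    # unknown users get nothing
```

Because the policy is plain data, it can be version-controlled, reviewed and automatically reconciled against the organisational structure – exactly the properties bespoke, manually administered access lists lack.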

Think of this when sitting on a supplier’s call showcasing the magic of their technology. There are no shortcuts in life, cyber security included.


Outsource responsibility, not accountability

Computer Weekly

📖 Publisher: ComputerWeekly Think Tank

📆 Published: 06 Aug 2018

🔗 Link: https://www.computerweekly.com/opinion/Security-Think-Tank-Outsource-responsibility-not-accountability

I often use the saying: “If it is not your core business, consider outsourcing.” This generally works when it comes to business support functions, such as accounting, bookkeeping, legal services and design, for example.

However, when it comes to information security, the cut is less clear because “security is the responsibility of everyone”. Although it is a cliché, it sums up the intertwined nature of security in any business. While the various business functions can be seen as verticals, security cuts right through them all.

For the purpose of this article, a small to medium-sized enterprise (SME) should strongly consider the following in-house/outsource advice:

  • Security policies – keep in-house and consult with a professional.
  • Organisation of information security – consider outsourcing the chief information security officer (CISO) function to external professionals.
  • Identity and access management – keep the people management function in-house, while outsourcing technical access control to a managed services provider (MSP).
  • Asset management and data classification – keep in-house and consult with a professional.
  • Operational and network security – outsource to MSP.
  • Physical security – outsource to MSP.
  • Systems acquisition, development and management – outsource to a professional development company.
  • Resilience to incidents – keep in-house and consult with a professional.
  • Supplier relationship – keep in-house and consult with a professional.
  • Compliance – keep in-house and consult with a professional.

The above might differ for various types of SME verticals. In all cases, however, it is critical to retain overall accountability for information security in-house.

Finally, a good set of key performance indicators (KPIs) and metrics should be agreed with one or more MSPs.


Why cloud business continuity is critical for your organization

📖 Publisher: Help Net Security

Help Net Security logo

📆 Published: Jul 24, 2015

🔗 Link: https://www.helpnetsecurity.com/2015/07/24/why-cloud-business-continuity-is-critical-for-your-organization/

Synopsis: "Business continuity, the ability of a company to continue or quickly restart operations following a systems outage, tends to be a topic overlooked by business leaders. Many see it as a responsibility of their IT teams, and think no more of it. However, this is a dangerous abrogation of responsibility, as any CEO who has suffered through a prolonged systems outage can vouch for."

Business continuity, the ability of a company to continue or quickly restart operations following a systems outage, tends to be a topic overlooked by business leaders. Many see it as a responsibility of their IT teams, and think no more of it. However, this is a dangerous abrogation of responsibility, as any CEO who has suffered through a prolonged systems outage can vouch for.

When IT managers design and operate their IT systems, they need to have input from business managers, a.k.a. their customers, on the required protection level when disaster strikes. Additionally, it is the responsibility of business teams to prepare and test recovery plans, to counter potential events that can severely affect the company’s ability to operate. These plans go beyond IT systems, and include, for example, how to communicate if email is not working.
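The “required protection level” business managers must supply usually boils down to two numbers per system: the recovery time objective (RTO) and recovery point objective (RPO). A minimal sketch of such a register, with entirely illustrative systems and figures:

```python
# Minimal sketch (illustrative systems and figures): a business impact
# register where business managers, not IT, set the recovery time
# objective (RTO) and recovery point objective (RPO) for each system.

registry = {
    # system: (RTO in hours, RPO in hours) agreed with the business
    "order processing": (4, 1),
    "email": (8, 24),
    "data warehouse": (72, 24),
}

def tier(rto_hours: float) -> str:
    """Crude criticality tiering derived from the agreed RTO."""
    if rto_hours <= 4:
        return "tier 1 - critical"
    if rto_hours <= 24:
        return "tier 2 - important"
    return "tier 3 - deferrable"

for system, (rto, rpo) in registry.items():
    print(f"{system}: RTO {rto}h, RPO {rpo}h -> {tier(rto)}")
```

Once these numbers are agreed and written down, IT can design (and the business can test) recovery arrangements against them, rather than guessing at what “back up quickly” means.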

It is important to understand the two very different parts the cloud can play in this planning. The first is awareness among business leaders and owners that any IT system can break, including cloud services. Overall, last year there were many outages of cloud providers – some planned, others not (see statistics for previous 12 months as monitored by CloudSquare).

The most severe problem for a business with cloud operations will be an outage at its SaaS provider. These providers deliver bespoke services and are therefore not easy to reproduce or replace during an outage. For example, if Salesforce goes down for the day, what would a business do to replace its function, especially given that all the data it relies on is stored remotely by Salesforce?

All this means it is critical that disaster recovery and business continuity planning takes into account all business-critical systems that exist in the cloud, as well as those hosted locally. These services normally include CRM, email, invoicing and accounts, sales services, logistics and trading systems.

The second part the cloud can play in business continuity is when a disaster occurs that hinders internal IT systems. Certain business functions can be set up in cloud infrastructure relatively quickly. However, it is vital to plan for and test the handoff and switchover of those functions to ensure business continuity.

Unfortunately, enabling bespoke applications in the cloud remains difficult and often cost-prohibitive. Good examples of business applications where the cloud can help in a disaster are managed desktops, email and collaboration, file sharing, and internal and external websites.

Overall, the main point is that while cloud services can play an important role in business continuity, business leaders need to take responsibility for ensuring their operations are fully prepared, with tested recovery scenarios.


Context-aware security is business-aware security

Computer Weekly

📖 Publisher: ComputerWeekly Think Tank

📆 Published: 1 Mar 2013

Synopsis: "The static security policy decisions are over. Is your firewall still only a dumb IP based firewall that allows or blocks access based on IP addresses? What about contextual information such as: identity, location, data transferred and behaviour of the traffic?"


Quick time to market to blame for many SQLi attacks

Computer Weekly

📖 Publisher: ComputerWeekly Think Tank

📆 Published: 1 Sep 2012

Synopsis: "Cyber criminals are typically after your data for monetary reasons. From their point of view, the most valued asset in your network is your customer or payment card database; the bigger the merrier. "


Virtualisation and security: In what ways is virtualisation helping and hindering enterprise security?

Computer Weekly

📖 Publisher: ComputerWeekly Think Tank

📆 Published: 18 Jul 2011

Synopsis: "From security point of view, all traditional security controls that a diligent security professional would apply to dedicated HW systems are still relevant in the virtualisation world. There are, however, some that stand out as more important: hypervisor security, change control, and maintaining security posture for offline images and templates."


How can businesses measure the effectiveness of their IT security teams to ensure they are getting value?

Computer Weekly

📖 Publisher: ComputerWeekly Think Tank

📆 Published: 13 Jul 2011

Synopsis: "The question of measuring the value of security in an organisation has not been fully answered since the creation of information security discipline. And this fact is, in my opinion, one of the reasons security teams find it difficult to convince business to invest in security, except perhaps immediately after an incident."


What should businesses do to ensure their IT defences resist targeted, advanced persistent threats (APTs)?

Computer Weekly

📖 Publisher: ComputerWeekly Think Tank

📆 Published: 11 May 2011

🔗 Link: tbd

Synopsis: "My take on the question: Security threat reports are increasingly identifying targeted, advanced persistent threats (APTs) as top priorities for organisations of all sizes and sectors. The reality of APTs has recently been demonstrated by the successful theft of information from security firm RSA. In the light of these advisories and the RSA data breach, what should businesses be doing to ensure their IT defences can resist targeted APT attacks?"


Review: 1Password 3

📖 Publisher: (IN)Secure Magazine Issue 24

Help Net Security logo

📆 Published: Feb 1, 2010

🔗 Link: https://img2.helpnetsecurity.com/dl/insecure/INSECURE-Mag-24.pdf

Synopsis: "How many times have you, as a security professional, explained to your friends, family or colleagues that using one password for everything is not ideal and not secure - far from it, actually? Yet the report by CPP suggests that many Brits do exactly that! A typical response from those “offenders” is: “It is impossible to remember all those passwords. That is why I use just one strong password.” Obviously, we know it does not really matter how strong that one password really is!"


Federation for the Cloud: Opportunities for a Single Identity

📖 Publisher: ISACA

📆 Published: tbd

🔗 Link: tbd

Synopsis: "Cloud computing has changed the way IT departments deliver the services to the business. Many organizations, small or big, need to share the data with their partners. Furthermore, organizations need to give access to their systems to users. Traditional models relied on creating accounts in local identity databases. More recent approach uses federation between two organizations that trust each other. However, what if you take a federation concept to the cloud? Can there be such a service as federated identity in the cloud? Could we all end-up with one single identity that is used for all our activities? This presentation will give some fresh views on the topic."


What’s holding up the cloud?

Computer Weekly

📖 Publisher: ComputerWeekly Think Tank

🔗 Link: tbd

Synopsis: "My take on the Think Tank question: Are security concerns and a lack of adequate risk assessment tools the reason SMEs are not adopting cloud computing, or is the real reason something else that security professionals are also in a good position to address"


Enterprise grade remote access

Magazine: (IN)SECURE, Jul 12, 2007

Help Net Security logo

📆 Published: 12 Jul 2007

🔗 Link: https://img2.helpnetsecurity.com/dl/insecure/INSECURE-Mag-12.pdf

Synopsis: "The way we access applications inside the networks is fascinating subject. The boundaries between inside and outside gradually diminish and we, as security professionals, face the new security threats. Having properly designed, secured and maintained remote access system is the key for the business to compete in fast moving world. It is no longer possible to fire an excuse “I am traveling, will login to my email and send it to you next week when I am back from my business trip.” There will be no-one to send it to then!"


Enforcing the network security policy with digital certificates

Magazine: (IN)SECURE, Issue 11 - May 2007

Help Net Security logo

📆 Published: 1 May 2007

🔗 Link: https://img2.helpnetsecurity.com/dl/insecure/INSECURE-Mag-11.pdf

Synopsis: "Far too often, security is compromised because administrators or even security professionals do not know how to use certain technologies. This unfortunately increases the risk and devalues the information security profession in people's eyes. I am going to suggest a solution to two of many security problems that organisations face today: a) Secure VPN access to an office network from the Internet, b) Secure access to Extranet applications for employees or 3rd parties."