The security sector’s general lack of sophistication has always been a problem.
Actually, many of the people in the sector don’t even see it. Self-awareness is perhaps one of the key indicators of sophistication…
This is one of the reasons the sector has always been so sluggish at identifying and addressing new problems as they crop up.
It has taken a very long time - for instance - to get some portions of the security space to agree that it is more appropriate to be Risk Driven than Compliance Led, even though we’re probably about to evolve out of both modes and into something else altogether.
But let’s look at these two (sort of) conventional approaches to delivering Security for a moment.
Before there were standards, I think we just made it up as we went along.
When I made my first forays into the sector in the mid-1980s, nobody made any mention of them. The first really impactful document I ever came across was the PSDB document “Who will be the first to test your CCTV System?”, when it was released around 1994. Prior to that, I’d taken no real notice of any of the British Standards in the BS 5013x series because they seemed irrelevant to what I was doing - even though I was busily deploying security solutions in the real world throughout that period.
Other standards have subsequently emerged that are much more useful - the BS EN/IEC 62676 set for video surveillance, for instance, but also things like ISO 2700x and ISO 31000. My world does not really include a whole lot of intruder alarms these days, so I don’t keep up to date on what 50131 evolved into, but I do know that a lot of evolution happened, particularly under NACOSS in the UK (which later morphed into the NSI), where meaningful implementation and installation standards made more sense than a bunch of academic documents. This did not eliminate the cowboys, but it did make it harder for companies with ambition to be cowboys by default.
When I was an integrator, I would see requirements clauses in tender specifications that just said “the system must be compliant with BSxxxxx.x” or whatever. It meant basically nothing. In general, the person who wrote the specification had never actually read those standards, and certainly did not understand how they might relate to the implementation of the underlying technology out in the wild.
That’s very much the case with a lot of standards in lots of sectors. It’s easy to simply say “must be compliant” but real world environments are varied and dynamic, and the standards bodies themselves have come to realise that a highly prescriptive set of mandatory measures will very rarely make it into the mix if their cost cannot be justified.
Setting a fixed bar for everyone to jump over, no matter what their circumstances, might make sense in some scenarios, but it’s always a challenge when it comes to ephemeral matters like safety and security.
We often used to find ourselves saying to clients “I know that it says this in the standard, but how would you like to interpret it in your particular set of circumstances?” There are a lot of factors that might influence our decisions around implementation, but the main factor that many parts of the industry settled upon as being the driver of these decisions was Risk.
In a Risk Based approach to security design we generally perform a quantitative risk assessment at the beginning of the process in order to create a weighted list of priorities, focusing the mind of the client on the threat modes that might be most likely to have significant negative consequences in the specific circumstances of the client’s site.
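To make that concrete, here’s roughly what the arithmetic underneath such an exercise looks like - a minimal sketch in Python, with every threat name and number invented purely for illustration:

```python
# A minimal sketch of the arithmetic behind a quantitative risk
# assessment: score = likelihood x impact, then rank the results.
# All threat names and numbers here are invented, not real data.

threats = {
    # threat: (likelihood 1-5, impact 1-5)
    "forced entry via loading dock": (4, 3),
    "insider theft of stock": (3, 4),
    "vehicle-borne attack": (1, 5),
    "tailgating into office areas": (5, 2),
}

priorities = sorted(
    ((name, likelihood * impact)
     for name, (likelihood, impact) in threats.items()),
    key=lambda item: item[1],
    reverse=True,
)

for name, score in priorities:
    print(f"{score:>2}  {name}")
```

The output is the “weighted list of priorities” - which, as we’ll see, is only ever as good as the numbers you fed in.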
The overall concept of a risk assessment makes sense - in fact it’s pretty obvious - and is the basis of many respected and reasonably mature approaches to security design these days (even among organisations where the approach is not commonly applied, e.g. aviation). However, unlike in the world of skin-tight lycra, one size most definitely does not fit all.
Threats and vulnerabilities can only be determined if you know what the assets are in the first place. So instead of starting with a risk assessment, you need to start by developing an asset register: identify what the organisation you’re dealing with considers its assets, along with any additional assets or resources that you, as a professional risk assessor, consider to be important components of the organisation’s operation.
You also need to have some sense of which of these assets are critical, where interdependencies between assets might exist, and what the value of the various assets and resources might be. This is not trivial.
Once you have a register of assets and can identify which of these the organisation relies upon - either directly or in support of other assets - then you can take each one in turn, decide what kinds of vulnerabilities it might be exposed to, look at the landscape within which it exists, and begin to list the threats that would need to be considered viable within the context of the client.
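As a sketch of what such a register might look like in code - with field names and example entries that are entirely my own assumptions - something like this captures the essentials:

```python
from dataclasses import dataclass, field

# A sketch of an asset register entry: value, criticality,
# interdependencies, and the vulnerabilities and viable threats
# mapped onto each asset. All names and values are illustrative.

@dataclass
class Asset:
    name: str
    value: float                      # replacement / operational value
    critical: bool = False            # does the operation stop without it?
    depends_on: list[str] = field(default_factory=list)
    vulnerabilities: list[str] = field(default_factory=list)
    threats: list[str] = field(default_factory=list)

register = [
    Asset("server room", value=250_000, critical=True,
          depends_on=["mains power", "cooling plant"],
          vulnerabilities=["single access door", "no water detection"],
          threats=["theft of hardware", "flood from the floor above"]),
    Asset("cooling plant", value=40_000,
          vulnerabilities=["externally accessible compound"],
          threats=["vandalism", "copper theft"]),
]

# Anything a critical asset depends upon inherits that criticality -
# the interdependencies mentioned above.
critical = {a.name for a in register if a.critical}
inherited = {dep for a in register if a.critical for dep in a.depends_on}
print("critical, directly or by dependency:", critical | inherited)
```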
But let’s hold on a second before we get completely carried away on this wave of logical deduction.
For a start, understanding the nature of a business operation and all of its moving parts is likely to be well beyond the sophistication of most people doing the design for a security system - especially now, when the chicken-egg downward spiral of fees and capabilities has left the sector so devoid of real talent or insight.
Secondly - and now more importantly than ever before - where are you sourcing the underlying data on which your quantitative risk assessment assumptions are based, anyway?
Where do people come up with the list of threats they choose to include in a risk assessment?
How do people decide on the typical vulnerabilities each asset might be exposed to?
When it comes to the likelihoods and impacts, where do you find this information? And as for the effectiveness factors applied to different types of risk control measure, how are these derived, tested or validated?
In some cases there might be Design Basis Threat definitions for risk assessors to work from, but where did those come from? Usually, just formalised opinions.
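For anyone who hasn’t watched these effectiveness factors being applied, the convention usually amounts to something like the sketch below - every number in it invented, which is rather the point:

```python
# The conventional residual-risk arithmetic that the questions above
# are aimed at: an "effectiveness" factor discounts the raw score.
# Every number here is made up.

likelihood = 4        # sourced from... where, exactly?
impact = 5            # and this?
effectiveness = 0.7   # the claimed effect of a control - validated how?

raw_risk = likelihood * impact
residual_risk = raw_risk * (1 - effectiveness)

print(f"raw risk:      {raw_risk}")        # 20
print(f"residual risk: {residual_risk}")   # 6.0
```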
This post is not intended to pick apart the po-faced guessing game that is the risk assessment process, masquerading as it does as some sort of science for security practitioners to cling to, while they churn out the same old gobbledygook. That’s another day’s work.
Instead, my purpose here is to highlight an already gaping hole in end-user process that’s just got a whole lot bigger.
Even before ChatBots shambled into the spotlight, much of the information used for assessing risk came either off the internet, cut’n’pasted from the previous project, or plucked from thin air by people who knew that their prognostications would never be fact-checked… and now ChatBots have joined the list.
The idea of using the hyper-functional capabilities of computers to scrape and filter data from the internet (media) into a relatively up-to-date assessment of risk likelihood is actually a good one, in principle.
One of the drawbacks of attempting to assess things like localised crime risk or larger-scale geopolitical risk is that the data sets are difficult to access reliably everywhere. In countries that prefer not to talk about crime and terrorism out in the open, it can be difficult not to revert to anecdotal or out-of-date information. The internet - on the other hand - is more likely to have up-to-date reports, or at least to offer the ability to rapidly scan through social media for reports or comments about things that might have happened.
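Mechanically, the scraping part is trivial - something like the sketch below, which counts keyword mentions in a news feed. The feed URL is a placeholder and the keyword list is my own assumption; note too that it measures reporting, not incidence, which is exactly the skew discussed next:

```python
import feedparser  # pip install feedparser

# A sketch of the "scan the media for a likelihood signal" idea:
# count how many recent items in a news feed mention our keywords.
# The URL is a placeholder; the keywords are assumptions. Counting
# mentions measures reporting, not incidence.

FEED_URL = "https://example.com/local-news.rss"  # placeholder
KEYWORDS = ("burglary", "break-in", "robbery")

feed = feedparser.parse(FEED_URL)
mentions = sum(
    any(kw in (entry.get("title", "") + " " + entry.get("summary", "")).lower()
        for kw in KEYWORDS)
    for entry in feed.entries
)

print(f"{mentions} of {len(feed.entries)} recent items mention the keywords")
```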
Problem is that this is an unbelievably skewed data set, and the more you choose to ignore the biases and distortions it’s built on, the more you’re going to create reports that are (a) inaccurate and (b) contributing to the unreal dataset upon which the next study will be based.
And all of that depends on whether the original internet information came from a reliable source in the first place - which is where the next really big issue, the one we need to get ahead of, sits.
Some people are predicting that by the end of 2025, 90% of the content on the internet will be generated by machines. That content is going to include deepfake videos, plenty of generative AI images of things that never happened, utterly convincing audio of things that were never said, and so much text spewing from LLMs to lure in gullible people - but also to lure in cross-reference-hungry search algorithms.
Just like every time some new, risky-looking technology shows up, people seem to believe that we’ll all be fine once regulations catch up - even though I struggle to think of any instance where this has ever happened within a few generations, or before some catastrophe has occurred to put a rocket up the appropriate orifice.
A year and a half from now, we will not have written any laws or put in place any regulations that can do anything to stop this turning into a free-for-all orgy of lies - with no means of validating or invalidating stories, regurgitated and cross-linked via word of mouth (thereby cutting off the traceability) through the ever-growing cesspool of addictive online interaction populated (partially) by us humans.
If there were one compelling reason for Web 3.0 that could have done some real good for all - rather than simply enabling a minority of folks to make more money or maintain their cloak of paranoia (or secrecy, depending on how you look at it) - it might have been enabling the traceability of information back to its original source, unequivocally.
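The building blocks for that kind of traceability do already exist, at least technically. Here’s a minimal sketch using off-the-shelf digital signatures - with the caveat that key distribution and identity, the genuinely hard parts, are waved away entirely:

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# A sketch of source traceability: the originator signs the content,
# and anyone downstream can verify it came from the holder of that
# key and hasn't been altered since. (pip install cryptography.)
# How you bind a key to a real-world source is the unsolved part.

signing_key = Ed25519PrivateKey.generate()
public_key = signing_key.public_key()

report = b"Incident observed at the site gate, 03:14, 12 June."
signature = signing_key.sign(report)

public_key.verify(signature, report)  # passes silently: untampered

try:
    public_key.verify(signature, report + b" (edited)")
except InvalidSignature:
    print("content no longer matches what the source signed")
```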
Unfortunately it doesn’t look as though there’s enough focus on this aspect of the technology in everyday use right now. Sure, there’s the move towards end-to-end encryption, but that’s embroiled in its own messy struggle between the pro- and anti-privacy movements. And it still falls short of the mark anyway.
Encrypting text from end to end is great for maintaining the integrity of the text.
Text is just a symbolic representation of information, and you can represent any information symbolically - as ones and zeroes. So it doesn’t matter what the underlying data might be: end-to-end encryption is important in making sure that information sent from A to B never ends up in the hands of C, who might copy, alter or remove it (assuming we can trust the encryption and its provider/facilitator… which is a big assumption).
But what if the material you’re encrypting wasn’t authentic in the first place? What if you used some sort of air-gap to convert it from a verifiable medium to an unverifiable one, or there was a human in the loop - taking hearsay from one source, shoving it through their internal bias machine and then regurgitating it into another medium?
Regulations aren’t going to stop these issues, and opposing views on civil liberty and data privacy continue to get in the way of hooking the engine up to the carriages, never mind allowing this train to depart the station…
Even the most diligent of risk advisors in the security space depends almost entirely upon a combination of recycled conventions from the past and information from sources that probably aren’t reliable today - and most certainly cannot be relied upon when 90% of the information on the web is machine-generated, by machines that take machine-curated information as their basis of fact.
If there were a clause in every consultancy contract that said “prove it”, we’d all be screwed; but that’s what we need to start thinking seriously about.
Governance is a funny sort of a thing. People mention it a lot (often with a straight face) but there’s not a huge amount of it in everyday business life.
But all of a sudden it’s becoming rather important.
Facts are actually facts, even when all the people in your social media circle claim otherwise. Proof cannot be assured simply because you did your own research or asked somebody else who claimed that they did. We need ways to know who and what to trust.
Of course nobody is going to be 100% certain of everything, and we’ve been happy with unsubstantiated probabilities in the security world for years, but that world relied on the stopped clock being right at least twice a day. Now we’re entering a world in which you can add and remove hands on the clock, or make them point at hours and seconds that never even existed.
So many times, new technologies emerge and people ask “what do we do to counter this strange new threat?” as if there’s a special anti-nastiness gun we might buy to shoot down the risk. There is no gun. You just need to understand that this risk exists and then put yourself out of range.
At the moment I am wondering whether or not somebody is going to invent a special helmet that people can wear to prevent anything that’s not true from getting into their heads. Like AR goggles, but working on all of the senses and all of the information you see, hear, smell or taste - either directly or via the internet - automagically detecting untruths or unsupported opinions and simply making them disappear. It would be great - if it weren’t probably a service hosted in the cloud…
Governance goggles for the enterprise would be useful too. Problem is that once we switch them on they might show up all the other bullshit people have been fed all these years.