This is not a topic I particularly wanted to discuss like this, but events have made it necessary - and now that I have got to put pen to paper on the subject, I find it impossible to get quickly to the nub of the matter without meandering my way through a number of sidebars, background pieces and allegorical illustrations first.
Some of these might not seem relevant.
They are.
Indulge me.
As I've mentioned a number of times, there are three specific topic areas that I'm drawn to discuss in detail within this newsletter: the deconstruction of the dysfunctional Security Industry, the challenges facing urban designers and managers as a result of the increasingly chaotic state of the planet, and finally, a discussion of advanced technological threats, both real and imagined.
There are quite a few nuances to that final category, and I had been planning to cover them slowly and methodically over the months, exploring each in a reasonably sensible order that would build up a picture of the disturbing state we have allowed ourselves to be hypnotized into.
However, the subject has leapt into the middle of the road ahead, and made it very clear that it's not going to respect my orderly plans.
That topic is (deep breath) Artificial Intelligence.
Perhaps I could have asked ChatGPT or Bard to spit out the requisite text for me, but as charming as that might have been, it sort of misses the point (and simultaneously hits the point squarely on the nose).
The topic is broad and my perspective has many facets, so I've decided to break my thoughts down into a number of sections which I will publish periodically - not necessarily one straight after the other - partly for the benefit of my readers' digestive systems, and partly because I'm not sure people really have the attention span to consume something so large in just a few consecutive bites via this medium.
That’s why this piece is labelled Part 1…
As of today, I can’t tell you how many parts there will be or how frequently they will appear, but there is a lot of ground to cover - even in my introduction - and I might not even get to talk about AI at all in this issue.
Bear with me, once again. Please.
I need to talk about technological advancement in itself, what has driven it up until now and where that trajectory leads (and doesn’t).
I also need to put into perspective the fruits of that advancement, and how some of them are both manifesting themselves and being misrepresented. This is a particularly topical subject, because it is shockingly dangerous on a number of different levels. The risks are - in my view - even greater than those that come from the quite separate, tangential topic of Artificial General Intelligence, and I'll get into that as well - not because I believe it is something to get bent out of shape about, but because I think it's important for people to begin understanding what the difference is.
While we’re on the journey I am very likely to offer some of my concerns for social order and the various ways in which that order could be disrupted by what I see coming down the track from advanced technologies, along with the ways in which those of us in the professional risk management world might be forced to react over the coming decade. Predictions beyond that time period are pointless.
But putting the crystal ball aside, we have a pin-sharp record of precisely what has happened in previous decades - in particular since around the year I was born, when the first commercially produced integrated circuits began to spill off the production lines and were being bought in bulk by NASA for the Apollo Programme.
This was right at the start of the most important technological revolution for centuries - without which we simply wouldn’t have electronic computers or the internet, without which we’d be unlikely to have any portable form of communications, information processing or display devices, and without which the capabilities of any form of artificial intelligence would be limited to what we might achieve with clockwork mechanisms.
In 1960 you couldn't even make two diodes on the same piece of silicon without carving a groove down the middle with a knife to separate their substrates.
By 1964, Texas Instruments were selling monolithic semiconductor logic chips consisting of multiple transistors on a common substrate to NASA, while I, along with Moore's Law, was bungee jumping (umbilically) into existence.
I sincerely doubt that people coming out of university today with master's or even PhD qualifications in computer science could adequately explain what a PN junction is, or how semiconductor doping is used to adjust the electrical properties of these materials - yet the electronic devices that have resulted from that process are as fundamental to our modern society as the wheel or fire were to ancient civilizations.
Active electronic devices - a range of different diode types, transistor variants (both bipolar and field effect), thyristors, triacs and the other nonlinear oddballs that mark the extent to which human ingenuity has managed to stretch the behavior of differently doped lumps of semiconducting material - make our modern life possible.
Before them we had vacuum tubes and passive devices. Tubes can be made to perform some of the functions we rely on semiconductors for, but not all of them, and there is no equivalent to Moore's Law in the tube universe.
Perhaps human ingenuity could have found a different route to where we are now, but it hasn't managed to so far, and it looks very much to me as though, if there were no semiconducting elements in the periodic table, and if clever people had not worked out how to manipulate them through doping, then technology as we know it simply could never have emerged.
Granted, we would have TV and radio - both of which predate the transistor. We would have computers, both electrical and mechanical. They would not fit in our pockets.
I have lived my entire life alongside Moore’s Law, and in many ways my career exists like a small flat stone that has skipped across the surface of the phenomenological lake that exists because of semiconductors and that Law.
That's why I feel comfortable - now that the trajectory of that small stone descends towards its final skips - providing a perspective on how we arrived where we are, and why we are not somewhere else.
The transistor is a very useful device. It is able to exhibit non-linear characteristics in relation to voltage and current that can be harnessed to produce all manner of interesting effects.
Use it one way and it's an amplifier - converting imperceptible signals from a sensor into something we can see for ourselves, or pumping music around an arena full of people to make them dance and sing. Use it a different way and it's a switch - making lights shine or dams open or the wings of a fighter jet adapt to supersonic flight.
Make that transistor as small as you can, combine transistors into circuits and gates so that the fundamentals of boolean logic can be realised, then repeat that logic over and over as many times as you can, and you eventually get a microprocessor or a memory device.
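To make that concrete, here is a minimal sketch in Python (my own illustration, not anything from a real chip design flow) showing how a single universal gate - NAND, typically four transistors in CMOS - can be composed into the other boolean functions and then into a half adder, the first rung on the ladder towards a processor's arithmetic:

```python
# A single "universal" gate: every other boolean function can be built from NAND.
# In CMOS silicon, one NAND gate is typically four transistors.
def nand(a: int, b: int) -> int:
    return 0 if (a and b) else 1

# Compose NAND into the familiar gates.
def not_(a):    return nand(a, a)
def and_(a, b): return not_(nand(a, b))
def or_(a, b):  return nand(not_(a), not_(b))
def xor_(a, b): return and_(or_(a, b), nand(a, b))

# Repeat that logic and you get arithmetic: a half adder from gates alone.
def half_adder(a, b):
    return xor_(a, b), and_(a, b)  # (sum bit, carry bit)

for a in (0, 1):
    for b in (0, 1):
        s, c = half_adder(a, b)
        print(f"{a} + {b} -> sum={s}, carry={c}")
```

Chain enough of those adders together, add registers to hold the results, and you are well on the way to a processor - still nothing but transistors switching on and off.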
The phenomenon that Moore's Law describes has made it possible for semiconductor companies like Intel, AMD and NVIDIA to make ever more complex microprocessors, with densities that would have astounded those semiconductor scientists of the 1950s who couldn't even envisage putting more than one transistor on a single piece of silicon.
In the microprocessor realm, the number of transistors we’re fitting onto a single chip today is around 13,300,000,000.
In FLASH memory it's 5,333,333,333,333.
Moore’s Law is widely considered to be still holding true within these realms of the semiconductor world.
If I'm honest, the FLASH figures are just here for shock value. Memory is sort of beside the point when it comes to processing and functionality, but it's an illustration of how we're still well within the bounds of what physics allows - Moore's Law has a few more years on the clock before we start running into atomic limitations.
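To give a feel for the exponential at work, here's a back-of-envelope sketch - my own illustration using the commonly quoted round numbers for the Intel 4004, not datasheet values - extrapolating a two-year doubling period from roughly 2,300 transistors in 1971:

```python
# Naive Moore's Law extrapolation: transistor counts doubling every two years.
# Baseline figures are the commonly quoted ones for the Intel 4004, used here
# purely as an illustration of exponential growth.
start_year, start_count = 1971, 2_300
doubling_period_years = 2

for year in range(start_year, 2024, 10):
    doublings = (year - start_year) / doubling_period_years
    predicted = start_count * 2 ** doublings
    print(f"{year}: ~{predicted:,.0f} transistors")
```

The naive extrapolation lands in the tens of billions by the early 2020s - the same ballpark as the real figures above, which is precisely why the Law is still considered to be holding.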
The point here is that the chips upon which ChatGPT and Bard or any other AI solutions operate are running on transistors. Lots and lots of transistors. They switch on, they switch off. That’s all they do.
The chat interfaces that we see on our phones are software front ends designed to create a relatively simple window through which people can access the underlying functions. Those front ends sit on top of language processing systems, which in turn sit on top of layers and layers of obfuscation, which reside on top of operating systems, which are built with code that was generated by a machine (the compiler) to convert instructions into low-level functions that can be handled by boolean logic, which is implemented in the processors as gates and registers made out of transistors.
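To peek at just one of those layers, here's a small illustration of my own using Python's built-in dis module: a single high-level line of code, disassembled into the lower-level instructions a machine actually executes - each of which ultimately resolves to boolean logic in gates made of transistors.

```python
import dis

def add(a, b):
    return a + b  # one high-level instruction...

# ...decomposed by the compiler into lower-level operations
# (LOAD, ADD, RETURN), each of which the processor implements
# as boolean logic in gates built from transistors.
dis.dis(add)
```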
With the exception of a very few conceptual experiments and quantum computers (which, for now, have nothing to do with the world in which you and I live), this is what sits at the core of all computing on the planet. ALL computing.
In my eyes there is nothing more influential in shaping the way we live our lives today than the invention of the PN junction and all of the devices that spilled from the discovery. It's more important than the Spinning Jenny or the steam engine, maybe more significant than the industrial revolution as a whole when you consider what it has brought us in today's world.
Take a moment to absorb this information and look around you at all of the things that contain some sort of electronic component relying on semiconductors: the dimmer switch on your wall, the LED bulbs above your head, the phone in your hand and all of the infrastructure that sits behind it, the electricity that comes out of the socket and all of the electronic systems that get it to the place where you need it. When you see any of these, or you read about how the infrastructure company is using AI to manage the efficiency of delivering that function to your palm, it's all happening as a result of transistors turning on and turning off. No more. No less.
As you think about it, listen really carefully. Listen to the buzzing in your ears, maybe even listen to your own heartbeat if you can. You think that’s transistors?
When we look at a conventional electronic circuit, with its collection of active, passive and integrated components all soldered onto a piece of fiberglass and interconnected through copper tracks weaving around the surface, it's easy to point to each component and know what it does. Take out your cable snips and you can begin removing the components one by one if you're so inclined, and watch the thing pretty quickly stop working as you slice out the pieces of the puzzle. If you chop out a memory chip, the circuit does not become absent-minded as you proceed; it just stops working as soon as you cut a few legs off the device. If you go at the microprocessor, the same thing will happen. The capability of the processor will not slowly deteriorate; it just breaks.
Inside those devices you will find transistors.
In the even more integrated modern electronic world, the quantity of components you'll see on a circuit board has declined, and wherever possible the electronics that provide processing, the electronics that provide memory, and the electronics that serve as the logical glue between the two have all become areas of the same piece of silicon. It's cheaper in mass production. It uses less real estate. Sometimes it's more efficient, but it's really just the same as it was when these functions were divided between discrete devices. Underneath it all, it's just transistors.
What do you see if you saw off the top of somebody’s skull?
Although functional MRI has been used extensively in recent history to show how different parts of the brain appear to react to stimuli, or behave while the subject is carrying out specific tasks, one of the most significant and fascinating discoveries it has delivered is that while there may be areas inside your skull that become especially active when you are doing particular things, the brain is not made up of geographically fixed building blocks of capability.
Multiple different blobs of brain meat light up when you're talking or walking or singing a song. Of course there are some lumps that are primarily associated with certain general areas of functionality, but it is not like a circuit board, where you can read the device IDs printed on the lid of each chip and know precisely what contribution it makes to the overall function of the whole.
We sometimes think of memory in humans as being divided into long term and short term, plus perhaps things like muscle memory, but in actual fact our memories exhibit themselves as knowledge, wisdom, experience, belief, habit and a bunch of other traits. Whatever we do from moment to moment is automatically and imperceptibly influenced by any number of previous acts that we lived through, as well as a bunch that we did not.
Studies in behavioral genetics have demonstrated beyond doubt that many of your behaviours and innate predispositions come from your ancestry - a different form of memory that's hard-wired into you at conception and that stays with you for life.
There is no equivalent to all of these different types of memory in the electronic realm. Sure, there are volatile and non-volatile forms of memory: static, dynamic, re-writable and write-once/read-many. But they're just transistors, and they store 1s and 0s. Of course, you could possibly create some sort of software simulation that used electronic, transistor-based memory to hold chunks of data, and use it to exhibit a property that appears to the world as something similar to knowledge or wisdom or experience, but it wouldn't be those things. It would just look like them from the outside.
It would be artificial and it would also be incomplete in comparison to the human brain equivalent.
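To illustrate that point rather than just assert it, here's a deliberately crude toy of my own - nothing standard, and certainly not how a real language model works - showing how stored bits can be dressed up to look like "knowledge" from the outside while being nothing of the sort underneath:

```python
# A toy "knowledge" store: data in transistor-backed memory, dressed up
# with a confident interface. There is no understanding anywhere below.
facts = {
    "capital of france": "Paris",
    "boiling point of water at sea level": "100 degrees Celsius",
}

def oracle(question: str) -> str:
    # A lowercased dictionary lookup is the entire "cognition" on offer.
    return facts.get(question.strip().lower(),
                     "I have no stored pattern for that.")

print(oracle("Capital of France"))                   # looks like knowledge...
print(oracle("Why do we find sunsets beautiful?"))   # ...until it doesn't
```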
It's simply not possible for an electronic circuit - whether built from discrete devices or from a large slab of silicon that's been functionally sliced and diced to create an all-in-one super-chip - to do what a human brain does. Even with massively parallel processing devices, transistors behave only in the way they behave. They're interconnected in a fixed manner that will never reorganise itself in the way that the interconnections of the brain constantly build and rebuild themselves.
This is the first major point to be made about something that is fundamentally artificial in this world.
In the next episode I am going to describe some of the other consequences of Moore’s Law and how these have manifested themselves and made some functionality possible that can fool you into believing something that is not true.
This is the first part of a series here at Securiosity that's going to explore what the world has labelled Artificial Intelligence, unpacking the underlying technologies and their origins, and looking at the clear differences between what they do and how they do it and the human equivalents we've become used to.
The aim is not to try to demonstrate that either is superior or inferior. It is simply to get people thinking with more clarity about what we're dealing with and what it's going to result in.
Apocalyptic prophecies are already available from others. Right now we need coping mechanisms.
Meanwhile there's plenty in the physical and information security world to be dealing with that certainly doesn't look very intelligent to me, so this series is going to come and go on an as-required basis. For now, there are some fundamentals to get through, so expect more of these next week.
If you're not already subscribed, subscribe to Securiosity free of charge and get the updates straight into your inbox as they appear.
If you have something to say or a burning issue to deal with drop me a note and we’ll thrash it out together, for better or worse.