Deterrence Evolving: Nuclear, Cyber & AI

The concept of deterrence is simple enough: what I have is so deeply unpleasant, if I use it against you, that you feel you cannot take the risk of attacking me. Mutually Assured Destruction – MAD theory – was the pillar on which that rested for the entire Cold War. And it worked; to a point, it still does.
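The logic above is the classic game-theory framing of MAD. As a minimal illustrative sketch – with entirely hypothetical payoff numbers, chosen only to show the structure – assured retaliation makes attacking catastrophic for the attacker too, so refraining is never worse than attacking:

```python
# Illustrative only: MAD as a 2x2 game with made-up payoffs.
# Keys are (A's move, B's move); values are (A's payoff, B's payoff).
PAYOFFS = {
    ("refrain", "refrain"): (0, 0),        # stable peace
    ("attack",  "refrain"): (-100, -100),  # B retaliates: both destroyed
    ("refrain", "attack"):  (-100, -100),  # A retaliates: both destroyed
    ("attack",  "attack"):  (-100, -100),  # mutual destruction
}

def best_response(my_options, their_move):
    """Return A's move that maximises A's payoff given B's move."""
    return max(my_options, key=lambda m: PAYOFFS[(m, their_move)][0])

# Whatever B does, A never does better by attacking:
print(best_response(["refrain", "attack"], "refrain"))  # refrain
print(best_response(["refrain", "attack"], "attack"))   # refrain
```

The whole structure depends on retaliation being assured and its cost unbearable – which is exactly what the newer cyber and AI capabilities discussed below start to blur.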

However, it has become vastly more complicated by technological developments. We now have cyber warfare capabilities on all sides that could have nationwide consequences at alarming speed. It is entirely possible to render a whole country useless by shutting down its electrical grid and bringing it to a halt. We saw a few weeks ago how disruptive that was in Spain and Portugal. The governments there have been reluctant to admit what now seems likely – they were testing their capability to manage a nationwide power grid surge and it went wrong.

Across Europe, Russian and other hacking groups have attacked major institutions. The UK faced an attack on the major retailer M&S – one that still isn't fully resolved. On the same day, the Co-operative retail chain literally pulled the plug on its own IT systems to save them from the hackers. It judged the cost of totally disrupting its business for a week to ten days less damaging than giving in to an expensive and ongoing ransom campaign. The costs are immense, and the damage has real-world physical consequences for sometimes millions of people.

There are endless reports from national crime and intelligence agencies that Chinese, Russian, and Iranian hacking groups are forever probing defence computers, government agencies, and even personnel records for ways of getting at valuable information – and it works repeatedly. The US saw its entire database of federal employees stolen by China. Yet just as much, if not more, is prevented.

The UK has just decided to be less passive and actually wage counter-cyber warfare on the major protagonists, investing £1 billion in a new cyber warfare system for the army. Other governments are set to follow suit. It seems too late to establish deterrence in this field, but the consequences of attacks on western nations especially are becoming considerably more counterproductive than players like Russia seem to understand.

Russia spends a good deal of money and time pumping out disinformation. Yet its campaigns fall on increasingly deaf ears when it attacks an institution like the UK National Health Service – that's personal to millions of people, and their view of Russia rapidly deteriorates once they know who was responsible. In the field of cyber warfare there is daily conflict. What nobody has yet done is launch their worst attacks – the really bad, nation-damaging kind you can't recover from in less than months, if not years. And they exist. Nobody has done it because they fear the consequences of retaliation. To a small degree, MAD does work at this level.

Smaller nuclear-armed nations like the UK and France have even mooted the possibility that such an attack could be so severe that only a nuclear response would be viable. That's a theory nobody wants to test.

Because nuclear weapons are seen as the ultimate deterrent, cyber warfare is seen as 'not as bad' and so unlikely to trigger a nuclear response. Yet a cyber attack is potentially capable of crippling an entire nation so badly that death and destruction would follow on a scale equal to a nuclear attack. Nobody has gone as far as stating that one is unquestionably linked to the potential use of the other.

Yet there's also a 'dumbing down' approach entering the equation. Of late it has been argued that smaller, lower-yield nuclear weapons – warheads in the likely sub-100 kt range if not smaller, the type you can 'dial-a-yield' to fit the circumstances – have more of a place. Politicians are told they need 'non-strategic options' to 'demonstrate' anger or capability if the scenario so calls for it.

Elements in the UK believe it needs to return to a tactical nuclear weapons inventory to give the Prime Minister more options in just such a scenario, let alone a military-generated battlefield one. The UK is not alone in this assessment.

For nearly ten years the United States Navy has argued itself out of something it does not want but Congress is now forcing it to have: the SLCM-N, the nuclear sea-launched cruise missile.

Designed to fit only in Virginia-class submarines, the missile has been opposed by the Navy from the get-go. Its objections are surface-level simple: for every nuclear cruise missile carried, that is one less missile available for conventional tactical purposes, potentially right when it matters.

Yet there is far more to it. It's not just a matter of putting a cruise missile on a submarine – they can already do that. For one, the nuclear warhead is never installed on the missile until authorization to use it has been granted. That requires specialised communications equipment and people trained to use it. It means a nuclear warhead handling facility on the submarine to arm the missile, with a specially trained crew. In the end, it requires nothing less than the same type of communications equipment a ballistic missile submarine would need – but with just one missile to fire.

The reason for the missile is to give the President the option to attack, say, China on a non-strategic level if a message needed to be sent that things had gone too far. It's a questionable concept. China is never going to sit there and have a nuclear warhead detonate in Shanghai's port without retaliation. Then what happens next? The very concept of lowering the threshold for using these weapons makes them more likely to create an escalatory cycle.

Shockingly, the entire concept of the SLCM-N may not become a reality until 2035, according to the USN. It will take that long to field it.

Yet there are other equally dangerous weapons. At least one missile on most Ohio- and Columbia-class SSBNs is scheduled to be a Trident II equipped with the new low-yield warhead – said to be as low as 1 kt. If the enemy sees an SSBN launch its missiles on long-range sensors, they have no idea what type of warhead it carries. Who knows how they would react? Russian doctrine is launch on warning, as is that of the US.

Are our SSBNs safe in a future where AI-driven autonomous hunter-killer weapons hunt them down? The basis of our nuclear deterrent can easily be undermined.

It gets even more complicated when you have systems like Golden Dome – it may sound like some sort of brothel in a Trump hotel, but the price tag for the missile defense system is likely to exceed $200 billion. Its purpose is to intercept strategic nuclear weapons in flight.

China has already thrown its toys out of the proverbial pram over the concept, quite rightly saying it undermines nuclear deterrence principles – which it does. More to the point, what missile defence systems tend to do is force the other side not to build their own, but to build more attack missiles to overcome the defences, with more complex penetration aids and more warheads.

Of course what Russia and China are most afraid of is that Golden Dome might actually work. They will always have to assume it does, even if it doesn’t. That’s the nature of the nuclear arms race game.

Then we have complex drones and AI to deal with. It seems inevitable that AI will be placed in a position to assist in managing a nuclear conflict. It's already in use on the battlefield in Ukraine, and that use is going to increase exponentially in the coming years – I've even seen reports that the first 'aware' AI is likely to make itself known as early as August this year, though I doubt it. The technology is accelerating well beyond our capability to restrain it – money is involved, and those who want to be very rich don't care about the consequences and have little respect for safeguards. Their assumption that they can control what they've unleashed is the ultimate in hubris. I remain old school when it comes to AI – my maxim is: if it can out-think us, it can sink us.

How do we incorporate these systems, which are advancing at speeds we can barely comprehend, into systems we control before those very systems are outdated? Where does it end? How are we supposed to continue deterring our enemies from taking some action that may seem logical to a machine but not to us as humans? How far are any of us prepared to go, and will we have any say?

It could be that AI is just the buzzword of the moment, that it never truly learns to be useful because the crap it has ingested from its creators renders it incapable of being realistic and viable. Limited AI, trained only on the theory of battle and the capabilities of the weapons it knows we have available, could potentially advise us to take certain actions. But to let it carry out those missions on our behalf? That, to me, is a step too far. We have no idea how it could play out, or whether it is even capable of executing our wishes without taking steps we would never approve of.

An AI has the potential to make a mistake we have not foreseen; it is not emotional, and it cannot be deterred. It is the ultimate in game management – always looking for a winning position. It is not deterred as we would be: if it thinks it can ultimately win, it will press on, even when winning is world-ending.

Humans have to work out how far we're prepared to go to protect and deter. This day has been coming for over 40 years – the 1983 movie 'WarGames' was silly but prophetic. Yet, still, the pace of change is moving faster than our ability to control it.

We already have systems that can end our planet at our own hand. If we must go that far, let it at least be a human decision, not that of some over-educated but inhuman machine.

Deterrence is shifting across the board: cyber warfare assisted by AI in attack and defence, nuclear weapons use managed by AI, and every conventional system we once operated increasingly set to become AI-controlled. Everyone and their sister is looking for ways to use it for something.

The question is: can it be kept under control? Can we genuinely deter our enemies over the next two decades as AI and the myriad weapons systems using it come online? It won't just be us using them – they will be too. Does that make conflict more likely, as machines alone fight it out and the personal loss – the death and destruction – is no longer feared?

Science fiction has never faced becoming science fact on such a scale. Other than the time travel element, the whole 'Skynet' concept is not entirely unfeasible.

The price of the failure of deterrence is catastrophic. If we cannot live together in peace, we must deter each other from imposing our will on those who choose to live differently. The difficulty of doing that is about to become vastly more complex. It's a challenge few have truly grasped yet, and time is running out to do so.

The Analyst

militaryanalyst.bsky.social
