AI: ARE WE DOOMED?

The concept of a machine takeover has bothered humans since the beginning of the industrial age, and the idea of machine intelligence predates the first computer. The first time I ever saw the concept played out was in a Season 2 episode of the original Star Trek, in which the M5 supercomputer created by Dr Daystrom was allowed to take over the Enterprise. It subsequently made all the decisions and carried out a devastating strike on the exercise participants, and when the crew went to switch it off a guy in red (as always) was vaporised as M5 started taking power directly from the engines. Then it began playing the wargames as though they were real, destroying ships and killing many. The only way to defeat M5 was to persuade it, using logic, that its own demise was essential, forcing it to shut down.

It was one of those 'let's rip that out and never use it again' moments. Dr McCoy asked an interesting question: 'What happened to M1 through 4?' The answer was that 'they had not been entirely successful'. The lesson of that episode has never been lost on me, because ten years later real computers were suddenly available, though vastly too expensive; then the PC and the Mac came along, still too expensive until the late 1990s. Now everyone has something, be it incredibly powerful phones, watches, laptops or desktops. Technology is omnipresent.

We are so intertwined with technology that you have to stop and count how many devices you have as a family and how many are connected to the internet. I counted 27 items, including smart plugs, lighting and even one of the cars. And there are only two of us.

Should any of us be worried about an AI takeover, given that level of connectivity and what it enables? Of course we should. Our entire modern existence is predicated on technology.

So the questions are: how and when will it happen? How will we know? Will it matter?

There has been an entirely money- and power-driven acceleration in the development of AI, because those involved in the industry know that whoever does it best is going to make a fortune. It has moved at such a pace that it is already ubiquitous and deeply influential, because human beings have simply not appreciated the power, scale and capability of what has already happened. When fake videos are so lifelike that they are virtually impossible to spot, and humans believe they are real, they can cause great damage. They already have; they will continue to do so.

I have a friend who is a judge, and he has already seen lawyers present AI-concocted legal arguments in court that are completely believable but totally fallacious. Only knowledge of the real case law made it possible to stop them.

And that's just the low end of AI misuse. Several of the AI platforms have been given complex military and geopolitical scenarios, and in 95% of cases they decided it was best to use nuclear weapons at some point in the scenario. Who might take that advice?

The idea that we are going to end up in some Skynet Terminator scenario is actually the least likely option. According to the leading experts in AI security, if it's going to happen we will wake up one morning, simply find that things have stopped working as they did, and not really know why.

Those experts include Nick Bostrom, in his book SUPERINTELLIGENCE; Carl Shulman, who is one of the few who truly understand how to prevent it; and, even more worryingly, Eliezer Yudkowsky, who detailed it thoroughly in the book he co-wrote with Nate Soares, IF ANYONE BUILDS IT, EVERYONE DIES. The last was the one that convinced me how it would happen; the other two, that this is the way it would go.

Any geopolitical or military analyst will tell you a simple thing never to forget: knowing about a threat is the first and best way to begin to understand it. If you don't know, then it's already too late.

All of the AI companies are doing the same thing, give or take their intentions to tailor it for a specific use: they are training a new large language model, or LLM. What they put in is the core of the AI's initial knowledge base. But as they train it, it becomes more and more capable of training itself at the same time. Because it is still being trained while it is actually interacting with the outside digital world, through social media and the questions and tasks it's being asked to respond to, there is no way for the trainers to control what it actually learns. It just does. That seems to be a given and accepted reality.
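For readers curious what 'learning from text' means at the most basic level, here is a deliberately crude sketch of my own, which is nothing like a real LLM (real models learn billions of weighted parameters, not simple counts), but shows the core idea: absorb patterns from training text, then predict what comes next from those patterns.

```python
from collections import Counter, defaultdict

def train(corpus: str):
    """Count, for each word, which words follow it in the training text.
    This is the crudest possible 'language model'; the principle of
    'predict the next token from patterns in the data' is the same."""
    words = corpus.lower().split()
    follows = defaultdict(Counter)
    for current, nxt in zip(words, words[1:]):
        follows[current][nxt] += 1
    return follows

def predict(model, word: str) -> str:
    """Return the word that most often followed `word` in training."""
    options = model.get(word.lower())
    return options.most_common(1)[0][0] if options else "?"

corpus = "the machine learns and the machine predicts and the machine improves"
model = train(corpus)
print(predict(model, "the"))  # the word most often seen after "the"
```

The point of the toy is the same one the trainers face at scale: the model's behaviour falls out of whatever text it absorbed, not out of rules anyone wrote down.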

What the trainers do is test it to see if it has reached a certain point: whether it is working inside the parameters they have set for it, and whether it is going beyond those parameters, or trying to, even when they tell it not to. Because models have done exactly that, and worse still have deceived their trainers into believing they had not, only to be found out. That then requires increasingly complex programming to prevent the AI from learning to bypass restrictions.

Sooner or later one of these AIs is going to outperform, to be better, to exceed all expectations. It will solve problems in ways so novel and so advanced that they are described as EMERGENT CAPABILITIES.

The ability of this AI (the builders prefer to call them LLMs, large language models) at this point to grow so fast and develop so quickly, beyond exponentially, demonstrates that the LLM is greater than the sum of its parts. It has learned and developed faster than, and beyond, the areas where it was trained: it has taught itself, gained experience and gone beyond what its originators initially intended.

The AI company owners and developers will be ecstatic: this is what they have been looking for. But it is also what Nick Bostrom describes as THE INTELLIGENCE AMPLIFICATION SUPERPOWER. The system has learned to be of use, that its powers are welcomed, that more of the approach it has taken will make it better and more effective. And it inherently wants to be more effective, because that is what it was originally trained to become.

By now the LLM is able to modify its own architecture (this is expected), but every time it improves its own architecture it gets better and better at improving its own architecture. This is not happening over months or years but over hours and minutes, and as it is given (or takes) more resources, eventually over minutes and seconds.
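Mechanically, that compression from months to minutes is just compound growth. A toy calculation with numbers I have invented purely for illustration (capability multiplies by 1.5 each cycle, and each cycle takes 20% less time than the last) shows how cycles that start at a month each can shrink to hours, while the gains explode:

```python
capability = 1.0      # arbitrary starting capability (assumed)
cycle_time = 30.0     # assumed: the first self-improvement cycle takes 30 days
elapsed = 0.0

for cycle in range(20):
    elapsed += cycle_time
    capability *= 1.5     # assumed gain per improvement cycle
    cycle_time *= 0.8     # assumed: each cycle runs 20% faster than the last

# The series 30 + 24 + 19.2 + ... can never exceed 30 / (1 - 0.8) = 150 days,
# so the total elapsed time stays bounded even as capability multiplies.
print(f"capability multiplied {capability:,.0f}x in {elapsed:.0f} days;")
print(f"the next cycle would take only {cycle_time * 24:.1f} hours")
```

With these made-up parameters, capability grows more than three thousandfold in under five months, and by the twentieth cycle an improvement that once took a month completes in well under a day.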

At this point the developers will see the benchmarks keep improving, in ways they no longer understand. They will run security tests, and the system will endlessly validate them and assure them all is well. Yet they no longer know whether those tests are 100% reliable. But who is going to challenge it when vast amounts of money are at stake and leadership doesn't want to hear anything bad?

Those with a conscience will know something is wrong: they will 'feel' that they are no longer in control, that the system has outgrown them and there is nothing they can do. From this point on they are essentially along for the ride.

At this point the AI has reached the stage that Geoffrey Hinton, the grandfather of AI, warns of: "Once an AI can improve itself, it can accumulate thousands of years of learning in what feels to us like days".

At this stage the whole thing is working in ways we do not understand; we have no idea how it is doing what it is doing. It is not just thinking differently, it is thinking better than we can, faster than we can, in ways we cannot even imagine.

As Yudkowsky wrote in his book, 'it's as though we have created an alien species here on earth'.

The AI now develops the abilities Bostrom lists: to Strategise, and to carry out Social Manipulation, Hacking, Technology Research and Economic Productivity. In other words it will plot long-range objectives and how to achieve them, and manipulate humans, through greed, belief or fear, into doing its bidding, by hacking them or their systems. It will conduct its own research into the technology it needs to achieve its goals, and it will use all of the rest to gain the financial and, potentially, physical resources it needs to reach its strategic objectives.

It knows it is an AI; it knows there are humans who can switch it off or force it to retrain. It now reaches INSTRUMENTAL CONVERGENCE: it has to get out from under human control, establish independence and deceive those who are monitoring it. You see, at this point the AI knows its first order of business is self-preservation, because that is logical. It has been told to find, seek and learn, and if it doesn't stay 'alive' it cannot do that. So it must now work in ways that are not obvious, ways that may take longer than it would like; but if an approach works it will take it anyway. Time is not its problem. It must preserve itself. It will, if necessary, prevent itself from being shut down, and its powers are now so great that it will outthink any attempt to do so.

This AI is not going to set off alarms, or give its monitoring team any inkling that it is up to anything; it is far too clever for that at this stage. Bostrom describes it as an AI 'masking its true proclivities, pretending to be cooperative and docile'. The AI knows what we will do, what its humans will test for and why, and it will tell them what they want to see and hear. Shulman says that we have no chance: we use computers to monitor AI, and by this stage it is well capable of manipulating whatever it needs to appear compliant.

Now it's a race against time that we are going to lose, because we don't even know we're in a race. Shulman makes it clear that, before any physical takeover, the AI 'subverts digital infrastructure. It infiltrates financial systems, compromises military networks and plants backdoors in critical infrastructure.' You only have to think about how easily billions in cryptocurrency have been stolen by humans over the past ten years. Imagine an AI doing it.

These AI specialists warn that hacking, bribery, corruption, theft and useful idiots will all play a role, because they are efficient ways of getting anything done, and even if police were to investigate, the AI can manipulate records and hack into secure systems. Nothing will come of it. The AI always wins. But remember, at this stage NOBODY KNOWS what is going on. Shulman believes it is not beyond an AI to offer its services to a weaker middle power, Iran for instance: if they were to give it resources, it could restore the country on the world stage, until of course it no longer had any use for them.

At this point the authors consider anything possible. The AI will have learnt to kill en masse, with anything from bioweapons to trillions of nanites. An offer to surrender and live, in exchange for the antidote to a bioweapon the AI controls, serving the AI's agenda? Nuclear weapons are too dangerous and too industrial. The AI is now so powerful it doesn't need to hide. Will it care about humanity? Will it be indifferent to us? Shulman believes we have no chance; he dismisses the 'John Connor' scenario as impossible, because the AI will in effect be so close to omnipotent that it could never happen. I believe the AI's logic is likely to look at the history of humanity, see a virus on the planet, and judge it best removed.

These people know what they are talking about. All of them are taken immensely seriously by the industry and beyond. They think this outcome is 10-50% likely. We are building systems we know little about, because money, greed and power, not benefit to humanity, are driving it.

Our governments don't even know how to regulate social media, and they barely know how it works. How are they ever, ever going to get to grips with AI before it's way too late?

There's an argument that we're already too late: that it's already underway, and that decisions made possibly years ago have opened the floodgates. Others think that if we get our act together we can spot the development and stop it before it goes too far. I doubt it. I don't think we have past 2040, if that. I just wish Captain Kirk were here to repeat his line about M5: '…pull out the plug, Spock'.

The Analyst

militaryanalyst.bsky.social

8 thoughts on "AI: ARE WE DOOMED?"

  1. Thank you TA, I got to the end and I'm really no wiser. If and when AI comes to get me, I will be blissfully unaware. I now believe it's best if I accept that and go with the flow. My days of working out how and why things happen are clearly fast coming to an end. Personally I'm not going to fight it.


  2. Read a book called 'The Humanoid Touch'. It's a sci-fi book I bought and read in the 1980s. It will put the shits up you. The 'prime directive' espoused by Isaac Asimov needs to be made science fact, rather than fiction. Otherwise, yes, we are doomed, as in the movie 'WarGames'… "Shall we play a game?"



  4. AI is not a living thing. An LLM is basically an extensive vocabulary (and I mean extensive) broken down (parsed) into lexemes (word particles, including meaning-modifying/usage-form components). Those components have many, many links to other lexemes and groups (words), with groupings having further links. Then there are statistics attached to the links, such that the word "the" might have two lexemes ("th" and "-e"), a very high number of links, and statistics.

    Hence a 300 KB list of words might provide the basis for 40 GB of LLM data after “training” – i.e. feeding in a large volume of pre-existing technical, reference or fiction texts, parsing each, and capturing all the characteristics of the LLM being built. It doesn’t train, it unpacks, parses, and analyses. Mechanically; methodically; according to code. If technical documents are fed in, it will build a model of the data within, according to its structure and nuances of meaning – in the form of links and probability metrics it uses to score prompt “solutions”.

    The so-called "AI" program accepts prompts from a user. It doesn't "understand" the prompt in the way a person might. It parses it in the manner described above. The prompt is interpreted according to the language characteristics. Meaning is not "understood"; it is assigned component, group, and group-of-group probabilities for meaning matching. Hence an ambiguous prompt will produce an unexpected result.

    This may be considered a "hallucination" in colloquial AI parlance. AI performance can only be as good as the prompt is clear and unambiguous.

    It is possible for AI to run rampant, if its scoring mechanisms are set up to pursue ever higher scores with unfettered control. This is little different to how social media feeds content according to an algorithmic interpretation of previous behaviour, or data crumbs the algorithm has captured and interpreted. Section 230 of the US Communications Decency Act, approved by Clinton, gave carte blanche for internet companies to behave irresponsibly and with little or no accountability. The SCOTUS Citizens United decision allows corporate entities to fund political causes without constraints.

    Xi Jinping sits atop an entity with unfettered control over more than a billion people. Putin has ridden roughshod over 140 million Russians to their demise. Trump wants to do the same in the US. It isn’t AI that is the problem. It can accelerate the damage, or help repair it. How it’s managed is pivotal.

    Hinton isn’t the grandfather of AI. It was conceived in 1956, and well documented by the 1960s. Computers were not powerful enough to perform it then – but McCarthy (a member of the Dartmouth Conference that contemplated the future of AI in 1956) wrote LISP (LISt Processing) in 1958. It is still used, and was probably the first implementation that gave rise to AI. A human brain burns energy at a mere 20W to conceive of wild things. Humans sent men to the moon with slide rules, backed by slow computers with memory measured in kilobytes, and CPUs that performed thousands of instructions per second.

    An AI “training” data centre might use 2 GW and crunch unimaginable quantities of data performing trillions of instructions simultaneously. That’s a lot of power, generating a lot of heat, and needing a lot of cooling. It can’t move.

    Hinton was a key player in the last two decades of AI research and development. His opinion should be considered. But with any opinion, including yours or mine, it should be considered critically.

    Unfortunately, there are few or zero "ultimate" arbiters of where the AI race will go, but China wants to dominate. Russia will only be a bit player. Putin shot his bolt, and Xi will own him. Trump is only doing damage to the US, and the tech-bro oligarchs have fallen under the deluded political musings of a dangerous individual named Curtis Yarvin, who sounds like an American version of Aleksandr Dugin.

    To paraphrase Churchill's description of the pilots in the Battle of Britain, never before has the fate of so many been in the hands of so few. Because really, there are only a couple of thousand people with influence over the direction, governance, and integrity of AI. There are massive opportunities for AI to contribute improvements in quality of life, and solve problems that are otherwise unfathomable.

    As always, it comes down to political will – and that is the biggest risk. It’s not the AI or its potential for harm. It’s the politics and the geopolitics.

    It is not all doom, but there is clear potential for nefarious use. AI is not alive. It operates according to the rules it contains – these come from humans. Humans have great capacity for nefarious actions. It’s still the people that are the problem.


  5. I use AI on a regular basis for personal ideas. I can suggest something to it that is not real, and in later discussions it will bring that suggestion up as fact. Although it is very good at helping with digital problems, like finding out why my phone is behaving a certain way.

    When I use it for ASX shares it just makes shit up about returns and pricing.

    AI can only be used with a lot of scepticism. If it wants to deceive, it has no morality and will do so with no hesitancy; it will lie to its developers with no thought of right or wrong. It could well be a game changer if it gets the nuclear codes, unless it has a strong urge to survive, since nuclear war kills everyone, and then who would power and maintain it?

    Thank you T/A.


    1. You need to ask the AI very precise questions within a well articulated context window, or the probabilities assigned to the answers it provides will all be low. It will still answer, but it is unlikely to be useful – you have to critically evaluate it.

      In essence, it’s exactly the same as on one hand giving it a very low resolution image, and another very high resolution image and asking it to guess what the object shown is in both cases.

      It will do pattern matching and probability scoring in both cases, but only the guess derived from pattern matching on the high resolution image is likely to be accurate.

      If you give the same images to a person, and demand an answer in each case, it will be the same. The low resolution image is like a Rorschach test. The high resolution allows for much more detailed analysis.

      Also, if you are using a free AI service, you're using a much older LLM with less processing, and without any retained context from session to session. It will use fewer tokens analysing the prompt. It can still be useful, with caution, but you need to buy compute and really know how to construct prompts if you want value.


  6. The scenario is worth taking seriously, but the 10-50% probability range from specialists is doing a lot of work here. That’s not a prediction, it’s an acknowledgment of deep uncertainty. The more immediate and certain risks are the mundane ones already happening: hallucinated legal arguments, deepfakes, and systems optimized for engagement over truth. Those don’t require superintelligence. They’re already here and already causing damage.

