TEHRAN, Feb. 22 (MNA) – As Vladimir Putin clearly stated on September 4, 2017: “whichever country leads the way in Artificial Intelligence research will be the ruler of the world”.

According to Thomas Kuhn’s old, but still useful, epistemological model, every change of the scientific paradigm – rather than the emergence of new material discoveries – radically changes the visions of the world and hence strategic equilibria.

Hence, first of all, what is Artificial Intelligence? It consists of a set of mathematical tools, but also of tools from psychology, electronics, information technology and computer science, through which a machine is taught to think as if it were a human being, but with the speed and reliability of a computer.

The automatic machine must represent man’s knowledge, namely make it explicit, thus enabling an external operator to modify the process and understand its results in natural language.

In practice, AI machines imitate perceptual vision, the recognition and reprocessing of language – and even decision-making – but only when all the data necessary to perform these tasks are available. They do so creatively, i.e. they correct themselves in a non-repetitive way.

As can be easily imagined, this happens rarely in a complex system with a high rate of variation over time and space, as is exactly the case in war clashes.

Just think about the intelligence reserved for the Chiefs of Staff, which obviously no one ever feeds into any machine to “run” it.

Hence, first and foremost, AI is about making the machine imitate the human reasoning process, a result that is verified by applying the Turing test.

As you may remember, Alan Turing was the mathematician who devised, for British intelligence, a number of techniques for speeding up the breaking of German ciphers and cracking intercepted coded messages, by finding the settings of the Enigma machine used by the Nazi German intelligence services.

Due to the large amount of data to be checked and translated, his mathematics required an electromechanical machine, a sort of computer, which was in fact built at Bletchley Park, Britain’s codebreaking centre, with the technologies of the time: vacuum valves, copper wires and electric motors.

To be precise, the Nazis had developed a primitive computer, namely the Z1, that was hard to program, while the British Colossus permitted the introduction of cards and tapes that allowed its adaptation to the various needs of the British SIGINT of the time.

Furthermore, in Turing’s mind, one of the three people involved in the Imitation Game (a sort of deception game) could be replaced by a machine – and here the mathematical theory permitting AI comes into play.

The machine takes the place of one of the two human beings (A or B), the one who tries to prevent the correct identification by the third human being (C) – an identification that C must make while A and B remain hidden from view.

Hence Alan Turing claimed that the man A can be replaced by a machine and that such a machine can be correctly defined as “thinking”.
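As a purely illustrative aid (not part of Turing’s original formulation), the structure of the imitation game can be sketched in a few lines of Python; the canned replies and the random guess below are placeholders for real respondents and a real interrogator.

```python
# A minimal, purely illustrative sketch of the imitation game: an interrogator
# (C) receives answers from two hidden respondents, one human and one machine,
# and must guess which is which from the answers alone. The canned replies
# below are placeholders, not a real conversational AI.
import random

def machine_answer(question: str) -> str:
    return "I would rather not say."            # stand-in for the machine

def human_answer(question: str) -> str:
    return "It depends on the day, honestly."   # stand-in for the human

# Randomly assign the two hidden respondents to the labels "X" and "Y",
# so the interrogator C cannot know in advance which is the machine.
answerers = [machine_answer, human_answer]
random.shuffle(answerers)
respondents = dict(zip(["X", "Y"], answerers))

for label, answer in respondents.items():
    print(label, "answers:", answer("Do you enjoy poetry?"))

# C now guesses which label hides the machine; if, over many such exchanges,
# C cannot do better than chance, the machine passes the test.
print("C guesses the machine is:", random.choice(list(respondents)))
```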

Hence, according to Alan Turing, human thought can be creatively imitated and recreated through a Finite State Machine (FSM) that can simulate other Discrete State Machines.

In principle, a Finite State Machine is a machine that allows one to fully describe – in mathematical terms – the simultaneous or non-simultaneous behaviour of many systems.

With a view to better understanding this concept, we can think of an image: the warp of a fabric with respect to its weft, which can have various colours or designs.

Conversely, a Discrete State Machine is a calculator, i.e. a machine evolving by sudden jumps from one state to another.
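To make the idea concrete, here is a minimal sketch in Python – not drawn from the article itself – of such a machine: a finite set of states and a transition table that makes it “jump” from one state to the next at each input symbol. The parity example is purely hypothetical.

```python
# A minimal sketch of a discrete-state machine: a finite set of states, an
# input alphabet, and a transition table that makes the machine "jump" from
# one state to the next on each input symbol.
from typing import Dict, Tuple

class DiscreteStateMachine:
    def __init__(self, transitions: Dict[Tuple[str, str], str], start: str):
        self.transitions = transitions  # (state, symbol) -> next state
        self.state = start

    def step(self, symbol: str) -> str:
        # Evolve by a sudden "jump": the next state is fully determined
        # by the current state and the input symbol.
        self.state = self.transitions[(self.state, symbol)]
        return self.state

# Hypothetical example: a two-state machine that remembers whether it has
# seen an odd or even number of "1" symbols.
parity = DiscreteStateMachine(
    {("even", "0"): "even", ("even", "1"): "odd",
     ("odd", "0"): "odd", ("odd", "1"): "even"},
    start="even",
)

for bit in "1101":
    print(bit, "->", parity.step(bit))
```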

These are the same evolutionary jumps that the epistemologist Thomas Kuhn conceived as the steps of a change of scientific paradigm.

Finally, in Turing’s mind, the Discrete State Machine was the most suitable for simulating the human thought-behaviour.

Currently, in AI, almost exclusively hybrid systems are used, i.e. systems unifying various types of finite or discrete state machines, which also develop and process probabilistic scenarios.

There is no need to go further into this network of technical reasoning, which only partially concerns the topic of this article.

It is worth recalling that the issue has its true conceptual and strategic origin in March 2017, when a computer program developed by Google’s DeepMind, namely AlphaGo, beat the world champion in the ancient Chinese board game Go, an extraordinary strategy game.

According to some US analysts, it was the game that inspired the Head of the North Vietnamese Armed Forces and of the Viet Minh Communists, Vo Nguyen Giap, in his confrontation with the United States and its allies.

A game in which – unlike what happens in chess – there is no immediate evidence of the victory of either contender.

Years before, in 1997, a much less advanced algorithm than AlphaGo had beaten the chess champion Garry Kasparov.

With a view to better understanding what an AI system is, it is worth recalling that AlphaGo is made up of two deep “neural networks” having millions of neural connections very similar to those of the human brain.

A neural network is a mathematical model inspired by the structure of the neural networks typical of the human brain.

It consists of interconnections of information: a mathematical-computational system made up of artificial “neurons”, which process data through computational connections common to all of them.
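As an illustration of this shared computational rule, here is a minimal sketch in Python using NumPy; the layer sizes, random weights and input values are purely hypothetical and have nothing to do with AlphaGo’s actual networks.

```python
# A minimal, illustrative sketch of an artificial neural network: each
# "neuron" combines its weighted inputs and passes the result through a
# non-linear activation, the same computational rule shared by all neurons.
import numpy as np

rng = np.random.default_rng(0)

def layer(x, weights, bias):
    # weighted sum of inputs followed by a sigmoid activation
    return 1.0 / (1.0 + np.exp(-(weights @ x + bias)))

# Hypothetical two-layer network: 4 inputs -> 3 hidden neurons -> 1 output.
w1, b1 = rng.normal(size=(3, 4)), rng.normal(size=3)
w2, b2 = rng.normal(size=(1, 3)), rng.normal(size=1)

x = np.array([0.2, -1.0, 0.5, 0.7])   # example input
hidden = layer(x, w1, b1)             # hidden-layer activations
output = layer(hidden, w2, b2)        # network output
print(hidden, output)
```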

Furthermore, the AlphaGo system corrects itself and learns by itself, because it stores and quickly processes the many matches and games in which it has taken part.

As can be easily imagined, this also makes it largely unpredictable.

In the future, however, the new military robots with high autonomy of movement and selection of targets – and, sometimes, even of the AI procedure to be used – will incorporate a great deal of Artificial Intelligence.

This will make the difference between a losing robot and a winning one on the ground.

Hence, at some point in technological evolution, they may also take autonomous actions.

Therefore the problem arises of how much autonomy can be given to robots, whether they are mobile on the ground or centralized in a command brigade.

Tactical autonomy, while the neural connections between the various military robots are managed simultaneously by a “classic” human system and by a 2.0 AI mechanism?

Probably so.

But here factors such as each country’s doctrine, and its assessment of the probability of a war clash – and with whom – must be considered.

Therefore many human lives can be saved, even in a conflict and in the theatre of war, except in a counter-resource robot action, which hits the civilian population.

It will also be easier to resort to armed confrontation, but a higher cost of automated defence or attack operations is to be expected.

Obviously, considering that AI systems are derived from “natural thought”, when only very few changes need to be made to an already-defined program, the machines always work better than human beings.

They are faster, much more precise and they never rest. Moreover, they have no parallel reasoning patterns deriving from personal tastes, ideologies, feelings, sensations, affections.

They are not distracted by value-related, cultural, symbolic, ethical and political issues and probably not even by the typical themes of the Grand Strategy.

In principle, however, if what is at stake are substantially equivalent technical choices or similar evaluations of the final future scenarios, on which the machine has no pre-set programming, man will always prevail in the match between man and robot.

Hence Metaphysics – or the “science of aims”, to put it in Aristotle’s words – is the unique attribute of our species.

But the process to achieve extra-technical goals can always be formalized, and hence there is always at least one finite state machine in the world that can imitate it – on its own, however, without further support from homo sapiens sapiens.

It must also be considered that the techniques for AI “autonomous weapons” cannot be completely classified because, in these technologies, the commercial sector can often surpass the efficacy of “covert” weapon technologies.

If we open up to commercial technologies, that would be the end of confidentiality.

In fact all AI, ranging from finance to machine tools up to biological and environmental programming, is a market-driven technology controlled by open markets – or rather, still oligopolistic ones.

However, what are the limits and the merits of a war or global strategy technology entirely rebuilt according to AI standards?

The simple answer is that firstly no finite state or hybrid machine can evaluate the reliability of the data and systems it receives.

Hence we can imagine a new kind of intelligence action, that is the possibility of “poisoning” the command systems of the enemy’s AI machines.

The deep web, the area of websites – often of criminal relevance – not indexed by official search engines, could also host viruses or even entire opposing systems that directly reach our AI machines, thus making them fulfil the enemy’s will and not ours.

It is worth recalling that Von Clausewitz defined victory as “the prevailing of the opponent’s will or of our will”.

Nevertheless, Artificial Intelligence systems can be extremely useful in the military and intelligence sectors when it comes to using them in “computer vision”, where millions of data points must be analyzed creatively in the shortest possible time.

In fact, the Turing machine and the derived AI machines can imitate abduction, a logical process that is very different from deduction and induction.

Deduction, which is typical of traditional machines such as the calculator, is the logical process that, starting from a general, unanalysed premise, rationally derives particular propositions describing perceivable reality.

Conversely, induction is a logical process that, with a finite number of steps fully adhering to natural logic, allows one to move from empirical data to the general rule, if any.

Abduction, instead, is an Aristotelian syllogism in which the major premise is certain while the minor one is only probable.

The Aristotelian syllogisms are made up of a general statement (the major premise), a specific statement (the minor premise) and a conclusion that is inferred.

They are adaptable to both induction and deduction.

Furthermore, in the various types of syllogism the Stagirite developed, the major premise is the general definition of an item belonging or not to a whole.

For example, “All men are bipeds”.

The minor premise is that “George is a man (or is a biped)” and hence the conclusion is that “George is a biped (or a man)”.

Finally, abduction works by a reasoning opposite to the other two: it is used when we know the rule and the conclusion and we want to reconstruct the premise.

The definition of abduction given by Charles S. Peirce, who long evaluated it in his pragmatist philosophy, is the following: “the surprising fact, C, is observed; but if A were true, C would be a matter of course.

Hence there is reason to suspect that A is true”.

If I have white beans in my hand and there is a bag of white beans in front of me, there is reason to believe that the beans in my hand were taken out of that bag.
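To make the mechanism concrete, here is a minimal sketch in Python of abductive inference along Peirce’s lines; the rules and observation below simply encode the bean example as plain strings and are purely illustrative.

```python
# A minimal sketch of abductive reasoning: given known rules of the form
# "if hypothesis A were true, observation C would follow", and a surprising
# observation C, return the hypotheses that would explain it.
# The rules and observation below are purely illustrative.

rules = {
    "beans came from this bag": "beans in hand are white",
    "beans came from the red-bean jar": "beans in hand are red",
}

def abduce(observation: str, rules: dict) -> list:
    # Collect every hypothesis whose predicted consequence matches the
    # observed fact; these are plausible, not certain, explanations.
    return [hypothesis for hypothesis, consequence in rules.items()
            if consequence == observation]

print(abduce("beans in hand are white", rules))
# -> ['beans came from this bag']  (reason to suspect, not proof)
```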

In fact, this is exactly the way in which an AI machine corrects or enhances its knowledge starting from the program we put in it.

Another military use of AI is “deep” face recognition, far more analytical and faster than what can be done today.

There is also voice recognition, the immediate indication of the sources of an enemy communication and its almost simultaneous comparison with the countless similar or otherwise adversarial communications.

Artificial Intelligence can also be used for military logistics issues or for the multi-variable resolution of war games, and even for combat automation in mixed environments with men and machines in action.

Therefore recourse to a limited war will be ever more likely if there are no human victims and if the confrontation is directed by advanced automatic systems.

There will also be an impact on political responsibility, which could be shifted to AI systems and not to commanders or decision-makers in the flesh.

What political and strategic effects would an automatic clash have and what immediate psychological mechanisms would it trigger in the population?

However, who is winning the recently-started war for dominance in AI military and intelligence technologies?

For the time being, certainly China.

In fact, in November 2017 the Chinese startup company Yitu Tech won the contest for the best face recognition system.

The challenge was to recognize the greatest number of passengers randomly encountered in a civilian airport.

The Chinese government has already approved a project called “Artificial Intelligence 2.0” having specific applications both in the economy and in military and intelligence structures.

The Chinese Armed Forces are now working on a unified project in AI 2.0, an initiative regarding precisely the relationship between AI civilian and military applications.

As already noted, this is the strategic weak point of AI military programming, because strong competition arises between the market and State organizations, at least in the West.

In fact, for the US Intelligence Services, the line currently to be followed in smart war automation is to use the new technologies to enrich the information already present on the President’s table.

In China the “merger” between market and State in the AI sector is directly regulated by the Commission for Integrated Military and Civilian Development, chaired personally by Xi Jinping – and this says it all.

In the framework of the new AI strategic evolution, the Chinese Armed Forces follow the criterion of “shared construction, shared application and shared use” with private individuals and entities – at least for all innovations in the programming and automatic management of information (and actions) on the battlefield and in the intelligence area.

Therefore the Chinese AI 2.0 puts together robotic research, unmanned military systems and the new military brain science.

This is a new theoretical-practical branch that extends even to the mental and remote control of machines, through devices such as headsets that detect and interpret the brain activity of the wearer, thus allowing him or her to control the machines.

This already happened at the Zhengzhou Military Academy in August 2015, with students guiding and controlling robots through sensors placed on their skullcaps.

Hence the new AI activities in the intelligence sector can easily be imagined: infinitely broader and faster data collection – even of structured and semi-processed data; the creation of automatic counter-intelligence systems; and penetration of the electronic media systems and networks available to “anonymous” decision-makers, which changes the perception of the battlefield and of the whole enemy society.

Finally, the synergic coverage of the civilian and military data of the country that has achieved dominance in AI technologies.

Each new technology in the AI military sector is protected and, hence, implies a civilian, military or hybrid battlefield, in which all the operations of those who possess the advanced tool always hit the target with the minimum use of soldiers and with the utmost confidentiality.

It would be good for the EU to think about these new scenarios, but currently imagining that the European Union is able to think is mere theory.

Furthermore China has created a new Research Institute on AI and related technologies linked to the Central Military Commission and the Armed Forces.

Liu Guozhi, the Director of this Research Institute, likes to repeat that “whoever does not disrupt the adversary will be disrupted”.

The current rationale of the People’s Liberation Army is that the new and more advanced AI 2.0 environment – i.e. that of war, of the strategic clash and of the apparently peaceful political one – is already a new stage in military thinking.

This is a qualitatively different level, far beyond the old conflict information technologies – a stage requiring a “new thinking” and a completely different approach to military confrontation, which immediately turns into a social, economic, technological and cultural one.

Hence a Chinese way – through technology – to the Russian “hybrid warfare”, but with a strategic thinking that remains along the lines of the Unrestricted Warfare theorized by Qiao Liang and Wang Xiangsui in 1999, at the dawn of globalization.

In fact, the origin of globalization should not be found in the fall of the Berlin Wall, but in the beginning of Deng Xiaoping’s Four Modernizations in 1978.

It is also worth noting that, from the beginning, the planning implicit in the “Unrestricted Warfare” theorized by the two Chinese Colonels had been conceived against “a more powerful opponent than us”, namely the United States.

Hence the merging of technical and intelligence services in the area of operations; the union of intelligence and AI networks; the integration of command functions with other activities on the ground, obviously also with intelligence; and, finally, the use of the large mass of information in real time.

This is made possible thanks to the adaptation of the Chinese Intelligence Services to the speed and wide range of data provided by all technological platforms and by any “human” source.

The ultimate goal is unrestricted warfare, in which you do not dominate the “enemy’s will”, but all its resources.

Therefore China currently thinks that “technology determines tactics” and the People’s Liberation Army also intends to develop support systems using Artificial Intelligence to back strategic decision-making.

Still today this should also work on the basis of the old US program known as Deep Green, created in 2005 by the Defense Advanced Research Projects Agency (DARPA).

It is an AI system intended to help military leaders in the strategic evaluation of scenarios, of their own options and of the enemy’s options, as well as of their own potential – at a speed enabling them to counteract any enemy move before it can be fully deployed.

Finally what is the Russian Federation doing in the field of modernization of its Armed Forces by means of Artificial Intelligence?

It is doing many things.

First and foremost, Russia is carefully studying unmanned ground vehicles (UGV), such as Uran-9, Nerekhta and Vir.

They are all armoured unmanned vehicles that can carry anti-tank missiles and mid-calibre guns.

Secondly, since 2010 Russia has favoured the development of its Armed Forces in relation to what its military doctrine defines as “intelligence exchange and supremacy”.

In other words, the Russian military world believes that intelligence superiority is central both in times of peace and in times of war.

Superiority vis-à-vis its own population, which is to be protected from others’ dezinformatsiya, and superiority with respect to the enemies’ propaganda in their own countries – an information action that must be mastered and dominated, so that the enemy’s public can develop an ideological universe favourable to Russian interests.

This psycho-intelligence “exchange” – always based on AI supports – implies diplomatic, economic and obviously military, political, cultural and religious tools.

It is developed mainly through two areas of intervention: the technical-intelligence and media area, and the other, more traditionally related to psychological warfare.

Russia is also developing a program to adapt its supercomputers to deep learning, with an AI system significantly called iPavlov.

The deep learning of computers having hundreds of petaflops (a petaflop is equivalent to 1,000,000,000,000,000 floating point operations per second) is an AI system that allows the full imitation not only of “normal” human thought – which is defined as “logical” – but also of the possible statistical variations, which are in fact involved in abduction, of which we have already spoken.

It is worth repeating that the EU closely follows America with regard to drones, computer science and information technologies, and it is also starting to fund some projects, including military ones, in the AI 2.0 sector.

However, these are technological goals far away in time and, in any case, despite the dream, or the myth, of a European Armed Force, intelligence, advanced battlefield doctrines and intelligence neural networks – if any – are strictly limited to the national level.

With the results we can easily imagine, above all considering the intellectual and technological lack of an EU doctrine on “future wars”.

First published in our partner Modern Diplomacy 

https://moderndiplomacy.eu/2018/02/22/artificial-intelligence-intelligence/