Giancarlo Elia Valori
“Artificial Intelligence” is a term coined in 1956 by John McCarthy; it denotes the ability to make appropriate generalizations quickly, but on the basis of an inevitably limited set of data.
The wider the scope of application, and the faster conclusions are drawn from minimal information, the more intelligent the machine’s behaviour can be said to be.
Intelligence is the creative adaptation to rapid changes in the environment. That is by now the classic definition; with machines, however, the speed of response and the increasingly narrow base of starting data are also part of the evaluation.
What if the starting data does not contain exactly the necessary information, which is entirely possible? And what if the speed of the solution stems from the fact that the data collected is too homogeneous and leaves out the most interesting data?
Konrad Lorenz, the founder of animal ethology, was always careful to maintain that, between instinctive behaviour and learned behaviour, genetic and environmental sources of information can be equally “intelligent”. The fact remains, however, that the greater the flexibility of a behaviour, always within a reasonable time but not necessarily as fast as possible, the greater the intelligence we attribute to the animal.
As Niko Tinbergen, Lorenz’s great fellow ethologist, put it, human beings are “representational magpies”, which means that much of their genetic and informative history has no practical value.
When the collection of information is easy, the “representational magpie” behaves very adaptively; but when data collection is pushed to its maximum, every datum counts and we never know which of them will actually be put into action.
In other words, machine data processing is “competence without understanding”, unless machines are given all the senses, which is currently possible.
Human intelligence shows itself at the extreme of physically possible data acquisition, i.e. when individuals learn adaptive-innovative behaviour from the direct imitation of abstract rules.
Abstract rules, not random environmental signals.
If machines could reach this level, they would need a degree of expressive freedom that no machine can attain today, not least because no one knows how to reach that level, nor how such behaviour could subsequently be coded.
What if it cannot be encoded in any way?
The standardization of “if-then” operations that could mimic instincts, and of goal-directed operations (which could appear as acquired, Lorenz-style imprinting), is only a quantitative expansion of what we call “intelligence”. It does not change its nature, which always follows from the peculiarly human link between instinct, intelligence and learning by doing.
That link always has an accidental, statistical and unpredictable basis: which duckling will be the first to call Konrad Lorenz “dad”, thus creating a conditioning for the others? No one can predict that.
If systematized, bio-imitation could, in the future, be a way to produce sentient machines that create their own unique and unrepeatable intelligent way of reacting to the environment, a single, one-of-a-kind intelligent behaviour. But would it really be unique?
However, let us go back to Artificial Intelligence machines and how they work.
In the 1980s there was the first phase of large-scale investment in AI: the British Alvey Programme; the U.S. DARPA programme, which spent a billion US dollars on its Strategic Computing Initiative alone; and finally the Japanese Fifth Generation Computer Project, which invested a similar amount.
At the time “expert systems” were booming: symbolic mechanisms that solved problems, but only within a previously defined domain.
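To make the idea concrete, here is a toy sketch of such a rule base; the rules and thresholds are invented for illustration and come from no real system of the period.

```python
# Illustrative only: a toy rule base in the spirit of 1980s expert systems,
# which encoded human expertise as explicit if-then rules in a fixed domain.

def trading_rule(price: float, moving_avg: float, volatility: float) -> str:
    if volatility > 0.05:
        return "HOLD"            # rule 1: stand aside in turbulent markets
    if price < moving_avg * 0.97:
        return "BUY"             # rule 2: price well below its trend
    if price > moving_avg * 1.03:
        return "SELL"            # rule 3: price well above its trend
    return "HOLD"                # default: no rule fired

print(trading_rule(price=95.0, moving_avg=100.0, volatility=0.02))
```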
From the beginning, expert systems were used in financial trading.
Expert systems had a hand in the 508-point fall of the Dow Jones Industrial Average in 1987. From 1990 onwards, however, Artificial Intelligence also began to be used in the analysis of financial fraud, with an ad hoc program used by the Financial Crimes Enforcement Network (FinCEN), which could automatically review 200,000 transactions per week and identified over 400 illegal transactions.
Machine learning, the model on which the most widely used AI financial technology relies, goes back to a 1943 paper by McCulloch and Pitts, which modelled the neurons of the human brain as units producing signals that are both digital and binary.
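A minimal sketch of such a unit, the McCulloch-Pitts neuron, with illustrative weights and threshold:

```python
# McCulloch-Pitts neuron (1943): inputs and output are binary (0 or 1);
# the unit "fires" when the weighted sum of its inputs reaches a threshold.

def mp_neuron(inputs, weights, threshold):
    total = sum(x * w for x, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

# Example: a two-input AND gate, one of the logic functions the 1943 paper
# showed such neurons can compute.
for a in (0, 1):
    for b in (0, 1):
        print(a, b, mp_neuron([a, b], weights=[1, 1], threshold=2))
```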
A machine learning system consists, in principle, of: 1) a problem; 2) a data source; 3) a model; 4) an optimization algorithm; 5) a validation and testing system.
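As a hedged sketch of how the five components fit together, here they are on synthetic data; the problem, the features and all the numbers are invented for illustration.

```python
# Sketch of the five components listed above, using scikit-learn.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# 1) problem: predict whether a price goes up (1) or down (0)
# 2) data source: synthetic features standing in for market data
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 4))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=1000) > 0).astype(int)

# 5) validation and testing: hold out part of the data
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# 3) model, plus 4) optimization algorithm (fit() runs the optimizer)
model = LogisticRegression().fit(X_train, y_train)

print("held-out accuracy:", accuracy_score(y_test, model.predict(X_test)))
```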
In 2011, deep learning (DL) was added to the other “expert” systems.
It is a way for machines to use algorithms operating at several separate levels, as happens in the real human brain. Deep learning is thus a statistical method for finding acceptably stable paradigms in a very large data set, by imitating our brain and its structure in layers, areas and sectors.
As explained above, it is a mechanism that “mimics” the functioning of the human brain without reproducing it.
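The “layers” can be shown in a few lines. Below is a toy forward pass through a small stack of layers with random weights; it illustrates the structure, not the training, of such a network.

```python
# Toy forward pass through a small layered network, to make the idea of
# separate "levels" concrete; weights are random, so no learning happens.
import numpy as np

rng = np.random.default_rng(1)
layers = [rng.normal(size=(8, 16)),   # input layer -> first hidden layer
          rng.normal(size=(16, 16)),  # second hidden layer
          rng.normal(size=(16, 1))]   # output layer

def forward(x):
    for W in layers[:-1]:
        x = np.maximum(0.0, x @ W)    # ReLU: each layer re-represents its input
    return x @ layers[-1]             # final layer produces the output

print(forward(rng.normal(size=(1, 8))))
```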
DL made it possible for the first time to analyse non-linear events, such as market volatility, but its real problem was the verification of models: in 2012 Knight Capital lost 440 million US dollars in 45 minutes, because it put into action a DL-based financial trading model that had not been tested beforehand.
In 2013, during a computer outage lasting only 13 minutes, Goldman Sachs flooded the U.S. financial market with purchase requests for 800,000 equities. The same week, again owing to a computer error, the Chinese broker Everbright Securities bought 4 billion worth of various shares on the Shanghai market, for no precise reason.
Between 2012 and 2016, the United States invested 18.2 billion US dollars in Artificial Intelligence, while in the same period China invested only 2.6 billion and the United Kingdom 850 million US dollars.
The Japanese Government Pension Investment Fund, the world’s largest pension fund manager, believes it can soon replace “human” managers with advanced Artificial Intelligence systems.
BlackRock has just organized an AI Lab.
In 2017, however, China overtook the United States in AI startup funding, with 15.2 billion US dollars raised.
China now hosts 68% of the AI startups in Asia, which have raised 1.345 billion US dollars on the markets for their take-off.
China has also overtaken the United States in terms of Artificial Intelligence patents over the last five years.
Nevertheless, considered together, the USA and China still account for over 50% of all AI patents worldwide.
China also dominates the market for patents on AI vision systems, while deep learning data processing systems are now the preserve of the big global companies in the sector, namely Microsoft, Google and IBM. Comparable Chinese networks are rapidly developing their own new “intelligent” data collection systems, favoured by the fact that the Chinese population is more than four times the US population, so the mass of starting data is huge.
The Chinese Artificial Intelligence industrial zone near Tianjin is already active.
In the end, however, how does Artificial Intelligence change the financial sector?
AI operates above all in the trading of securities and currencies, in various fields: algorithmic trading; portfolio composition and optimization; validation of investment models; verification of key operations; robo-advising, namely robotic consultancy; analysis of market impact; the effectiveness of regulations; and finally standard banking evaluations and the analysis of competitors’ trading.
Algorithmic trading is a real automatic transaction system – a Machine Learning program that learns the structure of transaction data and then tries to predict what will happen.
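A hedged sketch of the idea, on synthetic prices; a real system would learn from order-book and transaction data rather than from an invented random walk.

```python
# Sketch: learn the structure of past returns, then predict the next one.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(2)
prices = 100 * np.exp(np.cumsum(rng.normal(0, 0.01, size=500)))
returns = np.diff(np.log(prices))

# each row holds the 5 previous returns; the target is the return that follows
lags = 5
X = np.column_stack([returns[i:len(returns) - lags + i] for i in range(lags)])
y = returns[lags:]

model = LinearRegression().fit(X[:-50], y[:-50])     # train on the past
signal = model.predict(X[-50:])                      # predict recent returns
print("go long" if signal[-1] > 0 else "go short")   # toy trading decision
```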
Nowadays computers already generate 70% of transactions in financial markets, 65% of transactions in futures markets and 52% of transactions in the public debt securities market.
The aim is to make transactions at the best possible price, with a very low probability of error, with the possibility of checking different market conditions simultaneously, and without psychological errors or personal inclinations.
In particular, algorithmic trading concerns hedge fund operations and the operations of the most important clients of a bank or fund.
There are other AI mathematical mechanisms that come into play here.
There is, in fact, signal processing, which filters data to eliminate noise and bring out the development trends of a market.
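In the simplest case this filtering is a moving average, which acts as a low-pass filter; a minimal sketch on invented noisy prices:

```python
# Sketch of "signal processing" in this sense: a moving average suppresses
# high-frequency noise so the underlying trend becomes visible.
import numpy as np

rng = np.random.default_rng(3)
trend = np.linspace(100, 110, 250)                  # slow market trend
noisy = trend + rng.normal(scale=2.0, size=250)     # plus daily noise

window = 20
smooth = np.convolve(noisy, np.ones(window) / window, mode="valid")
print(noisy[-1], smooth[-1])   # the filtered series tracks the trend
```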
There is also market sentiment analysis. The computer is kept completely unaware of the operations in progress until the specific algorithm is put to work; from that moment the machine perceives the behaviour of supply and demand directly.
There is also the news reader, a program that learns to interpret the main social and political events, as well as pattern recognition, an algorithm that teaches the machine to learn, and to react, when the markets show characteristics allowing immediate gains.
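A deliberately naive sketch of the news-reader idea, using a hand-made word list; real systems rely on trained language models, not lexicons like this one.

```python
# Toy "news reader": score headlines against an illustrative word list.
POSITIVE = {"beats", "growth", "upgrade", "record"}
NEGATIVE = {"misses", "fraud", "downgrade", "default"}

def headline_score(text: str) -> int:
    """Positive count minus negative count; >0 suggests good news."""
    words = set(text.lower().split())
    return len(words & POSITIVE) - len(words & NEGATIVE)

print(headline_score("Company beats forecasts, analysts issue upgrade"))  # +2
print(headline_score("Regulator probes fraud after default"))             # -2
```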
There is another algorithm, developed by a private computer company in the USA, which processes millions of “data points” to discover investment models or spontaneous market trends, and operates on trillions of financial scenarios, from which it selects those deemed realistic.
Here, in fact, 1,800 days of physical trading are reduced to seven minutes.
Indeed, the algorithms developed from such evidence predict the future far better than human operators do.
Artificial Intelligence works as a prediction generator even in the oldest financial market, namely real estate.
Today, for example, there is an algorithm, developed by a German company, that automatically “extracts” the most important data from the documents usually used to evaluate real estate transactions.
In Singapore, Artificial Intelligence is used to calculate the value of real estate property through a mix of algorithms and comparative market analysis. No human being is involved at all.
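One step of comparative market analysis can be sketched as a nearest-neighbour comparison; all the figures below are invented for illustration.

```python
# Hedged sketch of a comparative-market-analysis step: value a property
# as the average price of its most similar recent sales.
import numpy as np

# features of recent sales: floor area (m^2), distance to centre (km)
comps = np.array([[70, 3.0], [85, 2.5], [60, 5.0], [90, 1.0], [75, 4.0]])
prices = np.array([700_000, 900_000, 500_000, 1_100_000, 650_000])

target = np.array([80, 2.0])                 # the property to be valued
# normalize each feature before measuring similarity
dist = np.linalg.norm((comps - target) / comps.std(axis=0), axis=1)
nearest = dist.argsort()[:3]                 # three most comparable sales
print("estimated value:", prices[nearest].mean())
```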
As to corporate governance, there are AI programs that select executives based on their performance, which is analysed very carefully.
At work here is certainly the scientistic and naive myth of excluding subjectivity, which is always seen as negative. The programs, however, are extremely analytical and full of variables.
Artificial Intelligence is also used in the market for loans and mortgages, where algorithms can be built from an infinity of client data (age, work, gender, recurrent diseases, lifestyles, etc.) and linked, again through an algorithm, to operations that are ordered, unknowingly, from one’s own mobile phone or computer.
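A hedged sketch of such client-data scoring: a logistic model trained on synthetic applicants, where the features, the label-generating rule and the resulting coefficients are all invented.

```python
# Illustrative credit-scoring sketch: map client attributes to a default
# probability. Real scoring models are trained on historical repayment data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(4)
# columns: age, income (in 10k), existing debt (in 10k)
X = rng.uniform([20, 2, 0], [70, 20, 15], size=(2000, 3))
# synthetic labelling rule: more debt and less income -> more defaults
y = (X[:, 2] - 0.5 * X[:, 1] + rng.normal(scale=2, size=2000) > 0).astype(int)

model = LogisticRegression(max_iter=1000).fit(X, y)
applicant = [[35, 8.0, 6.0]]                 # one hypothetical client
print("default probability:", model.predict_proba(applicant)[0, 1])
```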
So far we have focused on Artificial Intelligence algorithms.
But there is also quantum computing (QC), which is already very active and whose speed today’s “traditional” computers cannot match.
It is a technology better suited than the others to solving problems and making financial forecasts, because QC operates with genuinely random variables, while the old algorithms merely simulate random variables.
Quantum computing can process several procedures simultaneously; the units that hold these superposed “coexistence states” are called qubits.
In a scenario analysis, QC can evaluate a potentially infinite set of solutions and results that have been randomly generated.
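Quantum hardware aside, the classical counterpart of such a scenario analysis is Monte Carlo simulation over pseudo-randomly generated paths; a minimal sketch, with all parameters invented:

```python
# Classical Monte Carlo stand-in for the scenario analysis described above:
# generate many random price paths and summarize the outcomes. A quantum
# device would draw on genuine randomness; NumPy's pseudo-random generator
# merely simulates it, as the text notes.
import numpy as np

rng = np.random.default_rng(5)
n_paths, n_days = 10_000, 252
daily = rng.normal(loc=0.0003, scale=0.01, size=(n_paths, n_days))
final = 100 * np.exp(daily.sum(axis=1))      # price after one trading year

print("mean outcome:", final.mean())
print("5% worst case:", np.percentile(final, 5))
```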
It is an extremely powerful machine which, however, like slower machines, cannot determine exactly whether the scenario it processes corresponds to human interests (rather than merely to the initial interests known to the machine), or whether the procedure changes during operations.
GIANCARLO ELIA VALORI
Honorable de l’Académie des Sciences de l’Institut de France
President of International World Group
February 2020