To appreciate what the term “financial singularity” refers to, we must first understand the origin of the concept of a singularity, which has its roots in the physics community. At the centre of a black hole is a gravitational singularity, created by the ever-increasing gravity of a dying star’s implosion. The singularity itself is a point of infinite density. It is impossible to “see” a singularity as it lies beyond the event horizon of the black hole, from within which not even light can escape.
(Source: Northern Arizona University http://www.cefns.nau.edu/geology/naml/Meteorite/Book-GlossaryS.html)
At the singularity, all known laws of physics break down, and its nature can never be fully understood.
The Singularity
When the term “singularity” is used to describe a future event, it is comparable with its usage in physics in that we are unable to foresee events beyond it; more specifically, our human minds cannot currently comprehend its nature or what might lie past it.
Vernor Vinge is credited with first using the term in a technological context. He argued that technology is advancing at such a rate that we will soon be able to create intelligence that rivals our own. When this happens, intelligence will have reached a singularity, an intellectual transition as impenetrable as the singularity of a black hole. The technological singularity is a popular theme amongst science fiction writers and futurologists, but it has recently gained wider acceptance as an issue that will affect humanity and has therefore been adopted as an important concept by the Artificial Intelligence community.
Ray Kurzweil embraced Vinge’s ideas and is now widely associated with the term “The Singularity”. Kurzweil is a Director of Engineering at Google and a renowned author, computer scientist and futurologist. He envisages a future period in which the pace of technological change is so rapid and so deep that human life will be irreversibly transformed. Kurzweil goes further and imagines a merger of biology and technology that accelerates human evolution. He predicts, based on the exponential growth of technology, that the Singularity will come to pass by 2045.
The term “The Singularity” refers to a hypothetical future where super-intelligent machines are created that surpass the thinking abilities of humans. Should the singularity occur, technology would advance beyond what we could foresee or control and the world would be transformed beyond recognition. In common with the singularity in a black hole, not only can we not see beyond the singularity but all known laws will cease to operate in the way they did before.
The Financial Singularity
In a recent article, Robert Shiller, professor of economics at Yale University, asked the question: will alpha eventually go to zero for every imaginable investment strategy? For readers outside the financial community, “alpha” refers to returns due to trader skill over and above the returns of the market as a whole, which are called beta. Beta returns are those you would get if you invested in, say, a stock market tracker fund. The alpha created by a trader is due to inefficiencies in the market, or information not yet accounted for in the current market price. A trader, for example, could have a different view on the valuation of a company which makes its current price seem cheap. The trader would then buy the stock, hoping that the rest of the market would eventually agree, and thus make a profit. This kind of inefficiency exists because the market does not price assets using some kind of formula. Instead, the current market price is a “best guess” arrived at through sequences of buying and selling actions by market participants. In fact, there is no notion of a “correct” price, because the market consensus defines the price; only later, as views change, does the price move and become the new consensus. All participants potentially have different views over different time horizons, which gives rise to trends in prices. Participants may all come to agree at the same time, the result being a market crash or a bubble. These views are all arrived at using incomplete data, since none of the participants has complete knowledge of the market.
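As a concrete illustration, alpha and beta are typically estimated by regressing a strategy’s returns against the market’s returns. Below is a minimal sketch in Python; the return series are made-up illustrative numbers, not real data.

```python
# A minimal sketch of how alpha and beta are usually estimated: regress a
# strategy's returns against the market's returns. Illustrative numbers only.
import numpy as np

market = np.array([0.010, -0.005, 0.007, 0.002, -0.012, 0.009])    # market returns
strategy = np.array([0.014, -0.002, 0.011, 0.001, -0.010, 0.013])  # fund returns

# Fit strategy = alpha + beta * market by least squares.
beta, alpha = np.polyfit(market, strategy, 1)
print(f"beta  = {beta:.2f}")   # exposure to the market itself
print(f"alpha = {alpha:.4f}")  # per-period return attributable to skill
```

A beta near 1 with a positive intercept would suggest the returns are not simply the market in disguise.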
Imagine that all participants in a market had access to all of the data needed to correctly price an asset. There would be no inefficiencies, and the price would not move until the data changed. This is currently beyond our human capabilities but could be possible with super-intelligent, super-connected machines.
In a world where machine trading far surpassed the capabilities of human trading, it would be inefficient and unprofitable for humans to make judgements on price. Humans would simply give up: the machines would take over all trading, and humans would be reduced to occasional users of the market as human-based speculation disappeared. As machines efficiently removed the inefficiencies in the market, all opportunities for identifying alpha would also disappear, and all future returns would go to zero. This is the financial singularity.
The implications of the financial singularity could be profound. Investors would abandon their attempts to make money. Pensions and savings would not grow, resulting in a mass withdrawal of funds from the banking system. As the financial system started to unravel, people would rely more on governments for support, leading to more centralised control. Without the gains from investment, the capitalist system itself could collapse and the world economy as we know it would change forever. A new world order would emerge and the free market economy would be gone. There may be many different consequences of a financial singularity, some foreseeable and some not. In line with the usage of the term singularity, it is difficult to envisage what the world would be like beyond it.
Are we already there?
Some observers think we are almost at the financial singularity already. They argue that sophisticated institutional investors using advanced trading machines dominate the markets, making it harder for the average investor to make money.
Trading machines have been around for decades and are used routinely by many funds and banks. Funds that use trading machines are often referred to as Quant or Systematic funds. Early incarnations were relatively simple, relying on basic rules and ideas. Over the past decade, systematic funds have become more sophisticated in applying ideas from science and engineering; they have attracted some very smart people with PhDs and given them access to large budgets. Advances in technology have allowed sophisticated analysis to be performed that was not possible until recently, and funds are increasingly employing machine learning techniques and big data analysis to try to gain an edge in understanding the behaviour of increasingly complex and connected markets.
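To give a flavour of what such techniques look like, here is a deliberately toy sketch of a machine-learning pipeline: predicting the next day’s direction from the previous few days of returns. The data is synthetic noise, so the out-of-sample accuracy should hover around 50%, which is the point: without genuine structure in the data there is no edge.

```python
# A toy illustration of the kind of machine-learning pipeline a systematic
# fund might use. Synthetic random data; real funds use far richer features.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
returns = rng.normal(0, 0.01, 1000)   # synthetic daily returns (pure noise)

lags = 5
# Each row holds five consecutive daily returns; the label is next day's sign.
X = np.column_stack([returns[i:len(returns) - lags + i] for i in range(lags)])
y = (returns[lags:] > 0).astype(int)

split = 800
model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X[:split], y[:split])
accuracy = model.score(X[split:], y[split:])
print(f"out-of-sample accuracy: {accuracy:.2%}")  # ~50% on noise: no alpha
```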
It is estimated that over 70% of the trades on the US stock exchanges are made by machines using a strategy known as High-Frequency Trading (HFT). HFT uses powerful computers to execute orders at sub-second speed. Competition between HFT strategies is a technological arms race: machines are often collocated at the exchange to minimise communications latency and so trade faster than their competitors. HFT strategies rely on tiny discrepancies in prices across exchanges and in the bid-offer spread. This strategy is called arbitrage, and it looks to exploit temporary price inconsistencies. Interestingly, the very act of making these trades removes the inconsistency, pushing the price towards an equilibrium. Markets relied on arbitrage even prior to HFT to minimise differences between exchanges: if the same stock is priced differently on two exchanges, arbitrage will eliminate that price difference through trading on each until the difference disappears. In this case, HFT is performing a useful market function. Other HFT strategies include the more controversial practice of creating an artificial spike in prices to profit from the subsequent correction. A recent study from the SEC stated that 96.8% of orders are never completed; although the report doesn’t say so, most experts see this as a sign of HFT activity.
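The core arbitrage check is simple enough to sketch in a few lines. The quotes and cost figure below are hypothetical; in real HFT the difficulty is not the logic but executing it faster than everyone else.

```python
# A minimal sketch of cross-exchange arbitrage: if the same stock is quoted
# differently on two venues, buy at the cheaper one and sell at the dearer.
def arbitrage_opportunity(bid_a: float, ask_a: float,
                          bid_b: float, ask_b: float,
                          cost_per_share: float) -> float:
    """Profit per share of the best cross-venue round trip, or 0.0 if none."""
    # Buy on B at its ask and sell on A at its bid, or the reverse.
    profit = max(bid_a - ask_b, bid_b - ask_a) - cost_per_share
    # Acting on this trade itself pushes the two sets of quotes back together.
    return max(profit, 0.0)

# Exchange A quotes 100.02/100.04, exchange B quotes 99.98/100.00.
print(arbitrage_opportunity(100.02, 100.04, 99.98, 100.00, 0.01))  # 0.01/share
```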
HFT has come in for some heavy criticism from politicians, regulators and the press. Some claim the secretive nature of HFT activities is disguising the fact that these firms make huge profits at the expense of the retail investor. HFT advocates claim they perform a useful market function, providing liquidity and stabilising the markets through their arbitrage activities. Wherever the truth lies, one fact cannot be denied: machines are playing an increasingly dominant role in the markets, a role that cannot be replicated by humans. It is only a small step for the world’s exchanges to become fully automated, with human participants becoming a small fraction of the market.
Are these the foundations of the architecture for the financial singularity?
Why we’re not there yet
Let’s take an example of some recent dramatic market moves. The chart below shows the China Shanghai Composite index which is an index of all stocks traded on the Shanghai Stock Exchange.
(Source: stockcharts.com)
In July 2015 the index plummeted, wiping out over a third of its value in just a few weeks. The Chinese government panicked and intervened to stop the fall by buying stocks. Interestingly, unlike the crash of 2008, this market crash did not coincide with a worldwide fall and subsequent recession. The main reason cited is that the prior growth was funded by borrowing. Trading on margin had been strictly regulated, but the rules were relaxed over the preceding five years. Investing with borrowed money increases leverage, and increased leverage was partly responsible for the severity of the 2008 financial crisis. In an attempt to cool off the market, the Chinese authorities reduced the amount that could be borrowed and introduced new limits on 12th July 2015. Stocks fell dramatically after that date. In August, the People’s Bank of China (PBOC) devalued the Yuan by adjusting the central parity, and the market dropped spectacularly after that. Facing increased pressure to do something, the PBOC cut rates and the reserve requirement ratio in an attempt to avert the downward spiral. It is estimated that 80% of investors in the Shanghai market are ordinary Chinese people, attracted to the markets by the recent increase in savings capital resulting from China’s growth. This is in stark contrast to the machine-dominated markets of the west.
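The leverage arithmetic is worth making explicit. A minimal sketch, with purely illustrative figures and ignoring interest costs:

```python
# Why margin trading amplifies a fall: with borrowed money, a modest market
# drop becomes a much larger loss on the investor's own capital.
def equity_return(market_return: float, leverage: float) -> float:
    """Return on the investor's own capital, ignoring interest costs."""
    return market_return * leverage

for lev in (1, 2, 3):
    print(f"{lev}x leverage: a -10% market move is a "
          f"{equity_return(-0.10, lev):+.0%} hit to equity")
```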
What does this serve to illustrate? It tells us that the frailties of human investment are still alive and well: government interventions, bubbles, crashes, greed and fear. This could not be further from the financial singularity. Not all markets are driven by machines; many worldwide markets are still dominated by human participants and, in some cases, protected by regulators and governments.
What is the Right Environment for the Financial Singularity to Occur?
What technology would need to be in place for the Financial Singularity? More specifically, what technology would a machine require to know everything needed to price an asset, and thereby remove all alpha?
The technological infrastructure would need to be super-connected and super-fast. Moore’s Law is the observation that, over the history of computing hardware, the number of transistors on an integrated circuit has doubled approximately every 18 months to two years, leading to an exponential increase in computing power. Assuming a doubling every 18 months, Moore’s Law predicts the following speedups from now, as the short calculation after the list confirms:
In 10 years, 100x
In 20 years, 10,000x
In 30 years, 1 million x
In 40 years, 100 million x
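The arithmetic behind the list is a one-liner: a doubling every 18 months compounds to roughly 100x per decade.

```python
# Compounding check for the speedup list above: one doubling per 18 months.
for years in (10, 20, 30, 40):
    doublings = years / 1.5
    print(f"{years} years: ~{2 ** doublings:,.0f}x")
```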
Bandwidth doesn’t quite keep up with this but it’s close. Neilson’s Law observes that bandwidth increases at about 10% less than Moore’s Law but nevertheless has the same exponential growth. Storage capacity has lagged but with the advent of solid-state storage, this looks to be catching up. Moore’s Law has been predicted to come to an end due to physical limitations such as communication speeds on CPUs being limited to the speed of light but new technology has so far emerged to overcome these limitations such as multiple-core CPUs. Advances in Quantum Computing could take it even further. With Moore’s Law remaining quite resilient, it’s easy to foresee technology becoming super-fast and super connected.
However, the hardware is only one part of the story. The software required to connect and analyse the world’s data would need to evolve too. The current world wide web is still predominantly page-based, following the old-world model of newspapers and TV. Web 2.0 described the transition from this mainly static world to a more dynamic experience incorporating a social dimension. Web 3.0 describes the next transition of the web, which is already underway: it will be more connected, open and intelligent, with semantic web technologies, distributed databases, natural language processing, machine learning, machine reasoning and autonomous agents. In particular, the Semantic Web will allow machine-to-machine communication, where all data is machine-readable. Tim Berners-Lee (who coined the term Semantic Web) expressed his vision as follows:
“I have a dream for the Web [in which computers] become capable of analyzing all the data on the Web – the content, links, and transactions between people and computers. A “Semantic Web”, which makes this possible, has yet to emerge, but when it does, the day-to-day mechanisms of trade, bureaucracy and our daily lives will be handled by machines talking to machines.”
Tim Berners-Lee
This is the start of a world where everything is connected, all data is available, and all services are interlinked and can be analysed machine to machine.
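As a sketch of what such machine-readable data might look like, here is a tiny example using the Python rdflib library. The company, predicates and figures are all hypothetical; the point is that another machine could query them without any human interpretation.

```python
# A small sketch of Semantic Web style machine-readable data using rdflib.
# The example.org namespace, company and figures are hypothetical.
from rdflib import Graph, Literal, Namespace, RDF

EX = Namespace("http://example.org/")
g = Graph()
g.add((EX.AcmeCorp, RDF.type, EX.Company))
g.add((EX.AcmeCorp, EX.annualRevenue, Literal(1_200_000_000)))
g.add((EX.AcmeCorp, EX.marketShare, Literal(0.23)))

# Any other agent can now ask structured questions of the data directly.
for _, _, revenue in g.triples((EX.AcmeCorp, EX.annualRevenue, None)):
    print(f"AcmeCorp revenue: {revenue}")
```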
Assuming everything is connected and available, is it even possible to make sense of the data? Let’s take the example of analysing the data of a single company. If everything were known about a company, its current financial state and its projections for the future, then its share price could be estimated with a greater degree of accuracy than the markets currently achieve. This share price would not change until the data changed, which would be in response to some external stimulus. The idea that it’s possible to know everything about a company seems pretty far-fetched, but the advent of big data is proving that very large data sets can be analysed, and advances in machine learning are adding meaning to data that was previously too large to analyse. To compute a share price, a machine would have to know about all of the data within the company. This would include details of finances, productivity and inventory, which would be possible if all of the company’s systems were interconnected. However, to have a complete picture, data outside the company would need to be factored in: market share, the popularity of the product, whether consumers are interested in the company’s products, and so on. It may also sound far-fetched that any machine could know all of that, but this too is within the realms of current technological reality. Google and Facebook possess knowledge of consumer preferences and trends which they have not even begun to exploit. To get a glimpse of this, go to Google Trends, which shows a graph of the number of searches for a given keyword and matches it up with relevant news stories.
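One way a machine might turn such data into a price is a classic discounted cash flow (DCF) valuation. The sketch below uses hypothetical inputs; in the scenario described above these would be derived continuously from live, interconnected data, and the output would only move when the inputs moved, which is exactly the flatlining behaviour described next.

```python
# A minimal DCF sketch: present value of forecast cash flows plus a terminal
# value, divided by shares outstanding. All inputs are hypothetical.
def dcf_share_price(cash_flows, discount_rate, terminal_growth, shares_out):
    """Per-share value from forecast free cash flows (Gordon growth terminal)."""
    pv = sum(cf / (1 + discount_rate) ** t
             for t, cf in enumerate(cash_flows, start=1))
    terminal = (cash_flows[-1] * (1 + terminal_growth)
                / (discount_rate - terminal_growth))
    pv += terminal / (1 + discount_rate) ** len(cash_flows)
    return pv / shares_out

# Five years of forecast free cash flow (millions), 8% discount, 2% growth.
print(f"{dcf_share_price([100, 110, 120, 130, 140], 0.08, 0.02, 50.0):.2f}")
```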
Prices calculated in this manner would end up flatlining until an event occurred that was outside the data, such as a major takeover announcement, a change of strategy or the exit of a key person from the company. Prices would look like horizontal lines with steps. For there to be alpha, an investor would need to outsmart the machine’s predictive capabilities.
The problem of perfect markets was identified by Grossman and Stiglitz in their classic paper, “On the Impossibility of Informationally Efficient Markets”. They suggest that the smart money will only expend effort in seeking reward if the market is in disequilibrium. If, through their actions, the market reached equilibrium, they would stop, and the market would fall into disequilibrium again. Grossman and Stiglitz’s model describes a price system that does not reveal the true price of an asset because of noise introduced by the activities of uninformed traders. Informed traders earn a return by having an advantage over uninformed traders, and are therefore happy to pay for this informational advantage. However, if there is no informational advantage over uninformed traders, they will stop paying and cease their activities, which reintroduces the noise into the system.
In Grossman and Stiglitz’s model it is possible that the informed traders could be machines and as the capability of machines increases the ratio of machine to human informed traders would increase to a level where humans were a tiny fraction. The question then becomes; just as the informed traders would cease their activities wouldn’t those responsible for running the machines simply turn them off? To answer this question we need to look at one of their key assumptions in their paper that may not be true in a Web 3.0 world and beyond. Their assumption is that information is costly. The web has already turned the “information is costly” model on its head with information providers like Google providing information for free to gain revenue elsewhere. In essence, data providers like Google use the information they give for free to gain access to information that they don’t have so they can charge for it. For example, they provide search results to you for free and then gather information on your preferences to sell onto advertisers. If the cost of information is, therefore, low the incentive to switch off the machines is much less. The act of one market participant switching off a machine may create an opportunity for the other machines and the more people who turn off their machines the more opportunity there is left for the remaining machines. If there is then an opportunity again the machines would be turned back on particularly since the information is cheap. This would create a disincentive to turn off your machine and may even be considered a cost. The inherent cycle between equilibrium and disequilibrium implied by their paper may not occur in the machine dominated world.
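This cycle, and the effect of information cost on it, can be caricatured in a few lines of code. Everything here (the noise process, the correction rule, the parameters) is invented purely for illustration, but it captures the shape of the argument: when information is expensive the informed traders are active only intermittently, and when it is nearly free they barely ever switch off.

```python
# A toy simulation of the Grossman-Stiglitz cycle. Noise traders add
# mispricing each step; informed traders act only while the profit on offer
# exceeds their information cost, and acting shrinks the mispricing.
import random

def simulate(info_cost: float, steps: int = 200, seed: int = 1) -> float:
    """Fraction of periods in which informed traders find it worth trading."""
    random.seed(seed)
    mispricing, periods_active = 0.0, 0
    for _ in range(steps):
        mispricing += abs(random.gauss(0, 0.1))  # noise traders add mispricing
        if mispricing > info_cost:               # profit exceeds the cost
            mispricing *= 0.5                    # informed trading corrects it
            periods_active += 1
    return periods_active / steps

print(f"costly information:    active {simulate(info_cost=1.0):.0%} of the time")
print(f"near-free information: active {simulate(info_cost=0.05):.0%} of the time")
```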
The Emergence of Intelligent Machines
The financial singularity could be brought on by the technological singularity through the creation of super-intelligent machines. How do intelligent machines emerge? If I wanted to write a computer program to play chess, I could program every possible move a player could make with an appropriate counter-move. Initial analysis reveals that this is a poor approach. White has 20 possible first moves (two for each pawn and two for each knight), and Black has 20 possible replies, giving 400 possible positions after one move each; after the next pair of moves there are already around 200,000, and the count keeps exploding, so that a typical game of 30 moves admits an astronomically large number of possibilities (although the number of sensible moves is much smaller). Computer scientists are inherently lazy, so rather than typing in all of these combinations they prefer to write programs to do repetitive tasks, and will instead write rules that encode strategies. However, the first-generation rules won’t be as good as a chess grandmaster, so they will develop rules to learn from and refine the rules. This rule rewriter could itself be refined by further rules to become a better rule writer. This successive improvement can go through many iterations until the machine is able to improve itself, and in a sense becomes a chess-program-writing program. The machine reaches a level where it is better at writing a chess program than the original authors were. It is this level of machine evolution that is theorised by AI researchers. The singularity is considered inevitable because AI machines would be able to make these evolutions in themselves far faster than their human developers could; the speed at which each generation of machine is created by the previous generation would increase exponentially, with the last machines created in the blink of an eye.
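The “rules that encode strategies” idea is the basis of classic game-tree search. Chess is far too large to sketch here, so the toy example below applies the same recursive rule (negamax with memoisation) to Nim, where players remove one to three stones and whoever takes the last stone wins. The point is that a few lines of rule replace an impossible enumeration of every game.

```python
# A minimal sketch of game-tree search: negamax on the game of Nim.
from functools import lru_cache

@lru_cache(maxsize=None)
def best_score(stones: int) -> int:
    """+1 if the player to move can force a win, -1 otherwise."""
    if stones == 0:
        return -1  # the previous player took the last stone, so we have lost
    # Our score is the best of the opponent's negated scores over legal moves.
    return max(-best_score(stones - take) for take in (1, 2, 3) if take <= stones)

def best_move(stones: int) -> int:
    """Pick the move that leaves the opponent with the worst position."""
    return max((take for take in (1, 2, 3) if take <= stones),
               key=lambda take: -best_score(stones - take))

print(best_move(10))  # 2: leaving a multiple of 4 is a known losing position
```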
There is much debate in the AI community as to the feasibility of this scenario. There are physical limitations, such as the underlying hardware the AI runs on: the hardware would need to increase exponentially in speed, and someone would still have to pay for it and build it. However, the same reasoning applied to a chess AI could also be applied to a trading-system AI. A machine could be developed that was better at developing trading strategies than its original developers. If a market were controlled by enough of these machines, it would become impossible to make any judgement as to why a price was at a particular level: you would have no knowledge of the rules of the game, because they would have been written by an AI.
Is the Financial Singularity Inevitable?
The exponential nature of technological improvements, coupled with the evolution of intelligent machines outlined so far in this article, implies that the Singularity is inevitable and that the foundations have already been laid. Technologically it’s possible, but its inevitability relies on decisions made today as to whether it will be allowed to occur. Why would we, the human market participants, choose for the Financial Singularity to occur? Often, seemingly benign choices lead to unintended consequences that are harder to back out of than the original choice was to make.
Some consider the dominance of HFT and algorithmic trading in the US markets to be evidence enough of the foundations of the Financial Singularity, but there are interesting lessons to be learned from the rise and recent fall of HFT. There is evidence that the volume of HFT trades is in retreat and that less money is being made per trade. Volatility in the markets today is significantly lower than in the pre-2008 golden years of HFT, and it is daily volatility where the money is made. The flash crash of May 2010 was considered to have been exacerbated by HFT. In August 2012, Knight Capital, one of the largest HFT players in the market, deployed a faulty algorithm that erroneously bought millions of shares; it cost them $440m to unwind the positions and caused the collapse of the company. With some high-profile blow-ups and less money being made in an increasingly crowded marketplace, the HFT presence is now in retreat (or at least no longer expanding). Regulators are now starting to scrutinise HFT players, and policymakers and regulators worldwide are considering transaction taxes. Any amount of transaction tax would effectively wipe out the profit margin of HFT programs that rely on making small profits on a large number of trades.
The HFT example above follows Grossman and Stiglitz’s model, with the cost component coming from regulation. When the utility of a market is threatened by the actions of a participant, the role of the regulator is to step in and prevent it.
HFT is not considered benign by many market participants, but what if AI were considered a force for good in the markets? Those who argue against the role of speculators imagine that markets would be more stable without speculators’ interference. It is possible that AI could be used to police the markets more effectively than regulators can now.
Laplace’s Demon
From the discussion above about Moore’s Law and the AI Singularity, it’s tempting to believe that an all-knowing machine could be built. The same argument was put forward by Laplace in 1814, at a time when classical physics, in particular the work of Isaac Newton, was making great advances in explaining the workings of the universe:
We may regard the present state of the universe as the effect of its past and the cause of its future. An intellect which at a certain moment would know all forces that set nature in motion, and all positions of all items of which nature is composed, if this intellect were also vast enough to submit these data to analysis, it would embrace in a single formula the movements of the greatest bodies of the universe and those of the tiniest atom; for such an intellect nothing would be uncertain and the future just like the past would be present before its eyes.
Pierre Simon Laplace, A Philosophical Essay on Probabilities
This argument, referred to as Laplace’s Demon, is concerned with the idea of determinism: namely, that all past events determine the future. If the AI Singularity were to happen, the intellect in Laplace’s argument could be an AI, and the Financial Singularity is essentially the implementation of this thinking, with an AI calculating market prices. Many scientists were drawn to the idea that their theories would one day be able to explain everything; arguably, many financial market practitioners are drawn to this idea too.
By the early twentieth century, the idea of determinism was running into problems, and new branches of scientific thinking emerged which placed limits on what was achievable by science. These included the problems of measurement identified by Heisenberg’s Uncertainty Principle, the sensitivity to initial conditions described by Chaos Theory, the incompleteness theorems developed by Kurt Gödel, and the limitations on computing machinery developed by Alan Turing.
Heisenberg’s Uncertainty Principle tells us that there is a fundamental limit to what we can know about nature at the quantum level. In Quantum Mechanics, the Uncertainty Principle says that we cannot measure the position and the momentum of a particle with absolute precision at the same time. The more accurately we know one of these values, the less accurately we know the other. It turns out that the principle applies to other areas of measurement in science such as frequency and time in Digital Signal Processing. This is why many measurements that suffer from the Uncertainty Principle are given as probabilities.
Chaos Theory describes how small changes in initial conditions can produce apparently random and unpredictable behaviour. It was encountered as a problem in weather prediction by the mathematician and meteorologist Edward Lorenz, who found that even slight changes in the initial conditions could produce wildly different results. This became known as the Butterfly Effect: a butterfly flapping its wings in South America can affect the weather in Central Park.
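This sensitivity is easy to demonstrate. The sketch below uses the logistic map, a standard toy chaotic system (not Lorenz’s actual weather model): two trajectories whose starting points differ by one part in a billion agree at first and then diverge completely.

```python
# Sensitivity to initial conditions in the logistic map x -> r*x*(1-x), r=4.
def logistic(x: float, steps: int, r: float = 4.0) -> float:
    for _ in range(steps):
        x = r * x * (1 - x)
    return x

for steps in (10, 30, 50):
    a = logistic(0.200000000, steps)  # two starting points differing by 1e-9
    b = logistic(0.200000001, steps)
    print(f"after {steps:2d} steps: {a:.6f} vs {b:.6f}")
```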
The work of Gödel and Turing in the 1930s addressed a problem posed by Hilbert in the 1920s, referred to as the decision problem: could a procedure be devised which would demonstrate whether or not a mathematical theorem was provable? Gödel demonstrated, using his Incompleteness Theorem, that it was not possible. This had profound implications for mathematics and science and was a killer blow to the ideal of determinism. Gödel essentially proved that a single Theory of Everything was logically impossible, and that this applied not only to mathematics but to everything subject to the laws of logic. Turing extended the idea to computing machines, placing limitations on what can be computed by a universal machine. In 2008, David Wolpert, building on the work of Turing, extended these results to address the logic of science: using Cantor diagonalisation, he showed that a theory of everything implemented by Laplace’s hypothetical vast “intellect” was logically impossible. Taking everything into account (the Uncertainty Principle, Chaos Theory, and the incompleteness theorems of mathematics, science and computing), Laplace’s Demon was slain for good. Even if the demon were an all-powerful AI in the age of the technological singularity, the logic above determines that an AI can never know everything within its universe and cannot predict future events in that universe.
A Weak and Strong Form of the Financial Singularity
With the exponential rise in computing power, the interconnectedness of all data, the rise of AI and, in particular, the willingness of the big financial players to deploy this kind of technology, the Financial Singularity seems almost inevitable. However, recent theories in science put limits on what these machines could achieve. Even though such limits logically exist, it is still possible for sophisticated machines to affect a market to the point where participants can no longer generate alpha in it. It therefore makes sense to describe the Financial Singularity in two forms: a Strong Form and a Weak Form. The Strong Form is the Financial Singularity described by much of this article, where trading technology evolves to a point at which it is impossible to extract any alpha from any market and all returns from investment drop to zero. There is a state in between where we are now and the Strong Form, where trading technology distorts an individual market to such an extent that it is not possible to extract alpha from that market; it is the Strong Form applied to one market (or market sector) rather than all markets. This is what I refer to as the Weak Form of the Financial Singularity.
I believe that the incompleteness, chaos and uncertainty results of Heisenberg, Lorenz, Gödel, Turing and Wolpert suggest that a theory of everything in the financial markets, implemented by super-intelligent AIs, will never happen. This effectively rules out the Strong Form of the Financial Singularity, which implies a level of forecasting that is unlikely ever to be possible. The randomness of world events and the behaviour of the participants, whether ordinary people, financial institutions or governments, will always be variables that cannot be predicted.
However, rather than one financial singularity, it is more likely there will be multiple singularities in individual markets; the rise of HFT has shown how easily this can happen. A market may still never reach the Weak Form of the Financial Singularity even if most of the participants are machines: AIs programmed with different goals and different time horizons will act like autonomous versions of existing market participants. Alpha could be generated by one AI gaming another AI, and alpha could also be generated by humans gaming AIs in elaborate ways. When a market reaches this state, the human participants will have to decide whether to allow the market to take this form; after all, markets are a tool for pricing human commerce, not a game in themselves. As a market became increasingly dominated by machines, the smart money would move to markets where the singularity was not a threat, in line with Grossman and Stiglitz’s view of the impossibility of efficient markets. As the world evolves, new markets would arise, providing new frontiers for speculation. Limits on the singularity evolving would have to be placed by regulators, restricting certain types of trading and participant so that no one was disadvantaged; the limits recently put on HFT trading are an example of a market backing away from a potential singularity. As is often the case, markets themselves evolve as participants change, and in some cases a market may cease to exist.
Where Do We Go From Here?
The artificial intelligence community is divided about whether the technological singularity will occur. However, the community is in agreement that we should engage in discussion of how AI will affect society in the future. A recent study by Deloitte estimated that 30% of jobs in the UK could be lost to automation over the next 10-20 years. I personally view the Financial Singularity as a thought experiment with real-world implications rather than a certainty, and it is with this perspective that I have discussed it in this article.
Below is a quote which seems very relevant in the context of this discussion:
“… the human race might easily permit itself to drift into a position of such dependence on the machines that it would have no practical choice but to accept all of the machines’ decisions. As society and the problems that face it become more and more complex and machines become more and more intelligent, people will let machines make more of their decisions for them, simply because machine-made decisions will bring better results than man-made ones. Eventually a stage may be reached at which the decisions necessary to keep the system running will be so complex that human beings will be incapable of making them intelligently. At that stage the machines will be in effective control. People won’t be able to just turn the machines off, because they will be so dependent on them that turning them off would amount to suicide.”
Ted Kaczynski
This is an excerpt from Industrial Society and Its Future (aka the “Unabomber Manifesto”), in which Ted Kaczynski tried to explain, justify and popularise his militant resistance to technological progress. Whilst no one would support his terrorist activities or his desire to halt technological progress, his view captures the idea of sleepwalking into a future where control is relinquished. This is the threat behind both the Weak and Strong Forms of the Financial Singularity. I believe markets could oscillate in and out of the Weak Form but never quite reach the Strong Form. It falls to all market users to decide what we want from the markets we use and from the institutions that use those markets, and how they should evolve.
Time for disclosure: if you don’t already know, I am a computer scientist and systematic trader. My colleagues and I build machines to help us do a better job of trading than we could do without them. We consider technology and AI to be assistive intelligence rather than artificial intelligence: we look to build smarter machines as tools to do our jobs better. However, we don’t want to be the whole market or to be in a market of just machines. Instead, we build machines to help us make sense of a complex real world, a world that would be even more complex if the job were pitting one AI against another. We use machines to implement our trading ideas, ideas that are about capturing the market behaviour of human participants and applying it consistently over time. Being aware of the possibility of a Financial Singularity in markets is a prerequisite to not allowing it to happen. The responsible role of a participant deploying sophisticated technology into the markets is to ensure that your actions do not compromise the integrity and function of the markets as a whole.