Preparing to face the next financial crisis may well be about doing a better job of anticipating potential risks. Kimmo Soramäki, founder and CEO of the London-based Financial Network Analytics, or FNA, has been doing just that. FNA designs platforms, software and algorithms that help central bankers, regulators and other institutions better visualize complex issues such as interconnectedness, systemic risks and even warning signs of financial crimes like money laundering. Some users, like Canada’s payments and settlements organization, have used this technology to visualize potential liquidity pressures at banks and help reduce them.

Soramäki founded and serves as editor-in-chief of the Journal of Network Theory in Finance, which provides a forum for central banks and others in the financial services ecosystem to learn new techniques for addressing risks and to share experiences. He started his career as an economist at the Bank of Finland, where in 1997 he developed the first simulation model for interbank payment systems. During the financial crisis of 2007-2008, he advised Group of 10 (G-10) central banks in modeling interconnections and systemic risk.

In a recent conversation with Knowledge at Wharton, Soramäki discussed how modeling, simulations and analytics could help make the financial system safer and more efficient. (He will be discussing how to future-proof financial market infrastructures with artificial intelligence and machine learning at the upcoming Sibos conference in Sydney next month. This interview is part of an editorial collaboration between Knowledge at Wharton and The SWIFT Institute.)

Following is an edited transcript of the conversation.

Knowledge at Wharton: Could you tell us about your work during the financial crisis 10 years ago? What work did you do with the G-10 central banks during that critical time?

Kimmo Soramäki: The big problem in the last financial crisis was that it was very visible with the collapse of Lehman Brothers. Everyone was afraid of what was going to happen next, who was exposed to Lehman, and if there would be lots of systemic cascading failures throughout the system.

What I did with a number of central banks was to go into the granular payments data and transaction-level data and start looking into who is connected with whom. [The objective was to] make these exposures visible and thereby also to identify the critical banks, the critical nodes in that network, and how the system might collapse if there were certain failures in it. We did the data mapping, creating these network maps using very granular financial markets data, which had not been done before.
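The mapping exercise Soramäki describes can be sketched in a few lines of code: aggregate transaction-level records into a weighted, directed network, then score each node to flag the critical banks. This is a minimal illustration with made-up bank names and values, using a PageRank-style centrality as one plausible scoring choice; it is not FNA's actual algorithm.

```python
from collections import defaultdict

# Hypothetical transaction-level data: (sender, receiver, value) triples.
transactions = [
    ("BankA", "BankB", 500), ("BankB", "BankC", 300),
    ("BankC", "BankA", 200), ("BankD", "BankB", 400),
    ("BankA", "BankD", 100), ("BankC", "BankB", 250),
]

def payment_network(txns):
    """Aggregate transactions into a weighted, directed network."""
    net = defaultdict(float)
    for sender, receiver, value in txns:
        net[(sender, receiver)] += value
    return net

def pagerank(net, damping=0.85, iters=50):
    """Value-weighted PageRank: a bank that receives flows from other
    important banks scores high, flagging critical nodes."""
    nodes = {n for edge in net for n in edge}
    rank = {n: 1.0 / len(nodes) for n in nodes}
    out_weight = defaultdict(float)
    for (s, _), v in net.items():
        out_weight[s] += v
    for _ in range(iters):
        new = {n: (1 - damping) / len(nodes) for n in nodes}
        for (s, r), v in net.items():
            new[r] += damping * rank[s] * v / out_weight[s]
        # Banks with no outgoing payments spread their rank uniformly.
        dangling = sum(rank[n] for n in nodes if out_weight[n] == 0)
        for n in nodes:
            new[n] += damping * dangling / len(nodes)
        rank = new
    return rank

net = payment_network(transactions)
ranks = pagerank(net)
critical = max(ranks, key=ranks.get)  # the most central node in this toy data
```

On this toy data the bank that receives the largest value flows ends up most central; on real payment data the same idea surfaces the institutions whose failure would matter most.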

Knowledge at Wharton: How did FNA evolve out of that work? You say the FNA platform makes the financial system safer and more efficient through advanced data analytics, artificial intelligence and machine learning. How does the platform achieve those outcomes?

“We’ve made it easier for any central bank, any regulator or any authority to start doing data analytics and simulations….”

Soramäki: I was doing quite a bit of research [in this area] with the New York Fed and the Bank of England. I started developing software to automate the creation of these networks, to implement algorithms that identify the central spots in the network, and to visualize them. That software became FNA. We’ve made it easier for any central bank, any regulator or any authority to start doing data analytics and simulations, the way we did them, perhaps more manually, in our research 10 years ago.

Knowledge at Wharton: Why is it important for regulators and central bankers to understand the concept of what is called “SupTech?” How does that differ from “FinTech” and “RegTech?” Is supervisory technology becoming the new normal in regulating financial markets?

Soramäki: The [term] FinTech started maybe five or six years ago. Over time, FinTech started to mean more of the new ways of providing financial services to retail consumers.

A couple of years ago, so-called RegTech, or regulatory technology, was spun off from FinTech. Over time, that has started to mean the technology that banks or the regulated entities are using in order to comply with regulations. That in a way led to SupTech as the next step, which is the technology that the supervisors – the central banks, the capital markets authorities and banking supervisors – are using to make sense of the data that they get from these regulated entities and to automate manual processes to become more efficient at overseeing and supervising the financial system.

When I worked 15 years ago at the European Central Bank, we had very little technology. We had very little data. That was before the financial crisis. But after the crisis, regulators have access to much more granular data in order to be able to monitor, stress-test and simulate [scenarios]. That has helped them foresee and manage the risks to the financial system better.

Knowledge at Wharton: How does your platform use AI and machine learning? Also, how do central banks, supervisors and financial market infrastructures use the platform? How does it help regulators to monitor, oversee and supervise financial markets?

Soramäki: When I started developing the FNA platform, I intentionally decided to make it very flexible. [I wanted it to be] a platform where data scientists, business analysts or economists could do modeling, tie in their own datasets, do analysis simulations and visualize those networks. It was focused on making visible the interconnectedness in the financial system.

The platform works with a large number of algorithms. We have put in more than 300 machine learning, graph and simulation models that users can combine in various ways for whatever questions they have. The questions that are easy to [address] with the platform are around financial risks, the community’s risks, operational risks and the ability to tie in transaction-level data or exposure data to create networks.

We watch how those networks change over time, detect anomalies and outliers for early-warning purposes, and then simulate certain kinds of failures. For example: What would happen if there were a bank failure? How would the rest of the system react? Or what would happen if certain financial markets crashed? How would the other markets react?
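A simple way to illustrate the early-warning idea is to flag days whose payment activity deviates sharply from a historical baseline. The data and the z-score rule below are illustrative assumptions, not the platform's actual anomaly-detection models.

```python
import statistics

# Hypothetical daily payment volumes for one bank (in millions);
# the last day shows a sudden drop that could signal stress.
daily_volume = [102, 98, 105, 101, 99, 103, 97, 100, 104, 55]

def zscore_outliers(series, threshold=3.0):
    """Flag days whose volume deviates from the historical mean
    by more than `threshold` standard deviations."""
    history = series[:-1]          # baseline: all but the latest day
    mu = statistics.mean(history)
    sigma = statistics.stdev(history)
    return [(i, x) for i, x in enumerate(series)
            if abs(x - mu) / sigma > threshold]

alerts = zscore_outliers(daily_volume)  # flags day 9, the sharp drop
```

Production systems would use far richer models than a z-score, but the principle is the same: establish what normal looks like, then surface what is not.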

The questions that central banks have been looking at with our system are around systemic risks, looking at how financial institutions are interconnected with one another through different types of trades that they make in different markets, whether they are systemically important players, and then to see how those failures in the system might manifest.

We’ve done a lot of work related to financial market infrastructures, helping central banks to design new, better, more liquidity-efficient systems to make interbank payments. Many of the large-value payment systems are operated by central banks. Recently, we began working on financial crime, such as money laundering, sanctions violations or related-party analysis. These are network questions, so you are following the money. You are looking at who is interacting with whom, and being able to do that and tie in multiple datasets gives you a bigger and better picture of how criminals might try to hide things in plain sight.

“We’ve done a lot of work … helping central banks to design new, better, more liquidity-efficient systems to make interbank payments.”

Knowledge at Wharton: It would be helpful to understand some of these aspects in more specific detail, if we could walk through a few use cases. To begin with, could we talk about interconnectedness and why that is so important to the banking sector? How can interconnectedness and data visualization help in identifying risk concentrations? I understand that the Bank of England has a paper about this. Could you talk about that and also some of its key takeaways?

Soramäki: Before the financial crisis, there was a great deal of room for rumors and for people to assume the worst. As a result, the financial markets pretty much froze during the 2007-2008 crisis, because there was so much uncertainty.

With that uncertainty, you’re like in a dark room, and you’re afraid. And you freeze, instead of having a better understanding of what the realities actually are. [Now, we can get clarity on the response actions we need to take by] having these pictures and being able to show, “Hey, it might not be that bad,” or “It is that bad.”

I often compare it to map-making. Geographic maps help us make correct decisions when we are navigating. In the same way, these financial maps help risk management as well as supervision and oversight, just by making visible what was invisible before.

Once you have that information, you can start doing modeling [around scenarios] like – What would happen if someone were to fail? What would be the consequences? Maybe you cannot prevent the failure, but you can prevent the vicious cycle and the cascading failure where things start to fail because of this initial condition. So you can safeguard against that and understand how everything is actually interconnected and how the process would flow.
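The cascading-failure modeling described above can be sketched as a toy contagion exercise: banks hold exposures to one another, a default wipes out lenders' claims, and any bank whose losses exceed its capital fails in turn. All names and figures are hypothetical.

```python
# Hypothetical interbank exposures: lender -> {borrower: amount},
# plus each bank's capital buffer. If a borrower fails, its lenders
# write off the exposure; a lender whose cumulative losses reach its
# capital fails too, and the default cascades.
exposures = {
    "A": {"B": 40, "C": 10},
    "B": {"C": 30},
    "C": {"D": 20},
    "D": {},
}
capital = {"A": 30, "B": 25, "C": 25, "D": 15}

def simulate_cascade(initial_failure, exposures, capital):
    """Return the set of banks that fail after an initial default."""
    failed = {initial_failure}
    losses = {bank: 0.0 for bank in capital}
    frontier = [initial_failure]
    while frontier:
        bank = frontier.pop()
        for lender, book in exposures.items():
            if lender not in failed and bank in book:
                losses[lender] += book[bank]          # write off the loan
                if losses[lender] >= capital[lender]: # capital wiped out
                    failed.add(lender)
                    frontier.append(lender)
    return failed

# C's default wipes out B, whose default then wipes out A;
# D lent nothing, so it survives.
failed = simulate_cascade("C", exposures, capital)
```

Even this crude sketch shows the point of the exercise: the initial failure may be unavoidable, but seeing the full chain of exposures tells you where to intervene to stop the vicious cycle.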

That is the type of work we are doing, for example, with some central counterparties. Central counterparties were set up after it was mandated that a lot of the over-the-counter derivatives trading should flow through these [entities] that would become the seller to each buyer, and the buyer to each seller, thereby removing the counterparty risk between people who were trading. [Consequently], I would not have to be afraid that when I’m trading with you, you fail and don’t deliver to me the funds or securities. I’m actually trading against the central counterparty, and [therefore] that is safe.

We’ve been working with [central counterparties] to understand how certain types of failures would manifest in the systems. [The objective is to help] them be better prepared for those failures and prevent these cascading failures from taking place.

We’ve also recently done research into how all these financial market infrastructures and central counterparties are connected globally through common members. We are seeing a very interconnected global financial market – especially between Europe and the U.S. and some of Asia’s financial centers.

Once we have more information about what those networks look like, we can start doing modeling on the worst-case scenarios and come up with remediation plans. For example, how would we react if such a failure were to take place? [With such modeling], we are better prepared for a new financial crisis that could begin from these areas.

Knowledge at Wharton: When you look at these networks of how different nodes are integrated through financial transactions, how well is China integrated into the global system? What picture do you see there?

Soramäki: A couple of years ago, we conducted research with the SWIFT Institute looking at SWIFT payments data. SWIFT is the carrier of a lot of information around international payments. There were billions of payment transactions, so we looked at how countries were interacting with each other.

At that time, China was not well integrated with the rest of the world. When we looked at clusters of countries that interact a lot with each other, China was in the same cluster as the U.S., with the U.S. leading that cluster. Others included a European cluster, an African cluster, and a cluster around the countries of the former Soviet Union that [maintained] strong trade links with one another. If we were to do that research again, China would probably start to form its own cluster very soon, if it hasn’t already, around Southeast Asia.

Knowledge at Wharton: How could central banks gain a deeper understanding of risks using advanced analytics, perhaps on your platform?

Soramäki: The Hong Kong Monetary Authority established the HKTR, one of the [many] trade repositories that collect trade information and have been set up all over the world. In Europe, they fall under EMIR (the European Market Infrastructure Regulation). In the U.S., trade repositories fall under the Dodd-Frank Act.

“Maybe you cannot prevent the failure, but you can prevent the vicious cycle and the cascading failure where things start to fail because of this initial condition.”

Much of the work of the central banks or institutions that have been looking at that data is in understanding the data. It takes a long time to understand questions such as: “What do I have? How much of the global market does it cover? How should I interpret all the numbers I see? Are there errors? Are they real? Do they come from different sources? How do I integrate all of this together?”

Much research has been done on it, but I haven’t seen a lot of operationalization of that data into ongoing monitoring yet, mainly because it’s such a new dataset. The research is focused on issues like: How much of the market does it actually cover? It might be that some institutions are taking risks in one part of the market, which we see in the data, and offsetting them in other parts of the market, for which we don’t see the data. We need to be very careful about the conclusions we draw before we fully understand what we have in our hands.

Knowledge at Wharton: You also do simulations on your platform. What types of simulations are now possible of complex financial systems? Could you also give an example or two of what insights are being gained that would not have been possible through previous technology?

Soramäki: The simulations need to be domain-specific, because the devil is always in the details. We have been focusing on simulations of financial market infrastructures. It’s something that I’ve been doing for over 20 years. In the late 1990s, I programmed what is called the Bank of Finland Payment System Simulator, which has been used by many institutions, mostly for research purposes. In our software, we also have a simulator that lets you simulate pretty much any type of financial market infrastructure, [such as] a payment system, a central counterparty or a securities settlement system. When you’re designing a new system, you want to simulate it. You want to try out how the system works and test its different features.

Payments Canada has a multi-year modernization project for the Canadian payment systems. They are moving from a large-value payment system called the LVTS to a new system based on real-time gross settlement, which is the international norm. They were concerned that the banks’ liquidity requirements would increase as a consequence of moving to the new system.

Every dollar that they need to keep at the central bank is taken away from maybe some profitable purposes, and that’s a cost for the bank. So they want to minimize the amount of money that they need in order to make these payments. In a small project with Payments Canada, we used our simulator and showed that by employing some smart liquidity-saving mechanisms, and some algorithms, we could reduce the liquidity requirements by 40% from what they initially thought, which of course was very good news for the banks.
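The liquidity-saving idea can be illustrated with a toy comparison: how much liquidity banks need if every queued payment is funded gross, versus if offsetting flows are netted first. This is a simplified sketch of the general mechanism with invented figures, not the algorithms used in the Payments Canada project.

```python
from collections import defaultdict

# Hypothetical queued payments: (payer, payee, amount).
queue = [
    ("A", "B", 100), ("B", "A", 80),
    ("A", "C", 50),  ("C", "A", 60),
    ("B", "C", 60),  ("C", "B", 40),
]

def gross_liquidity(payments):
    """Liquidity each bank needs to fund every payment up front
    (no incoming funds recycled): the sum of its outflows."""
    need = defaultdict(float)
    for payer, _, amount in payments:
        need[payer] += amount
    return dict(need)

def net_liquidity(payments):
    """Liquidity needed if offsetting flows are netted first: each
    bank only funds its net outflow, a simple liquidity-saving idea."""
    position = defaultdict(float)
    for payer, payee, amount in payments:
        position[payer] -= amount
        position[payee] += amount
    return {bank: max(0.0, -bal) for bank, bal in position.items()}

gross = gross_liquidity(queue)  # bank A must fund 150 gross...
net = net_liquidity(queue)      # ...but only 10 after netting
```

Real liquidity-saving mechanisms are subtler (payments arrive over time, and offsetting must respect settlement finality), but the sketch shows why smart queue management can cut liquidity needs so dramatically.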

For the past year, Payments Canada has been carrying out different simulations and coming up with new ways of providing this service to banks that is more efficient than the previous one. That is a good bottom-line impact of being able to do simulations and coming up with new designs that help make the system less costly and less risky, as well.

“The next financial crisis is a certainty, but it always takes longer to materialize than anyone expects.”

Knowledge at Wharton: Are there still some risks that you’re unable to track, that you’d like to do better in the future?

Soramäki: Yes, and these are mostly related to data availability. Everyone complains that they provide more and more data to regulators, and large amounts of data already exist. But you can only get the true picture if you have pretty much everything, because there might be some risks that are taken in one part of the market and offset in another. So if you don’t have the full data, you can’t get the full picture, either.

[Now], with [increased] data availability, we are in the early stages of [understanding] how we could interact with these large datasets. How do we make them simple enough that we can draw insights from them? The challenge in every project is that we need to do some work to prove that there are valuable insights you can get from looking at the data. Another challenge with artificial neural networks and machine learning techniques is that they might give you a result but not tell you why. So more research needs to be done [in this area].

Knowledge at Wharton: We just passed the 10th anniversary of the 2008 financial crisis. Based on everything that you see today – this wide spectrum of risks that you described – what do you think are the chances of another global financial crisis? Where do you see the greatest risks and vulnerabilities, and what can be done about them?

Soramäki: The next financial crisis is a certainty, but it always takes longer to materialize than anyone expects. I think that was the case with the last financial crisis, as well. We haven’t figured out the remedy to the financial crises that we’ve been having every eight or 10 years for the past 150 years or even longer. Our biggest risks relate to some large changes that have happened or are happening. [Consider] quantitative easing – we don’t know how that will play out. It’s a very big risk. I think a slowdown or something bad happening in China is another big risk. The Chinese financial market has exploded in recent years, and there is very little visibility into it. Those are the drivers that may be behind the next financial crisis.