Panel Discussion: The Future of Financial Services with Data + AI

In today’s economy, financial services firms are forced to contend with heightened regulatory environments and a variety of market, economic and regulatory uncertainties. Coupled with increasing demand from customers for more personalized experiences and a focus on sustainability/ESG, incumbent Banks, Insurers and Asset Managers are reaching the limits of where their current technology can take them with their Digital Transformation initiatives. It’s more critical than ever for institutions to turn towards big data and AI to meet these demands, and make smarter, faster decisions that reduce risk and protect against fraud. Business and analytics leaders and teams from the Financial Services sector are invited to join this industry briefing to learn new ideas and strategies for driving growth and reducing risk with data analytics and AI.


  • Jacques Oelofse, VP Data Engineering and ML, HSBC
  • Mark Avallone, VP, Architecture, S&P Global
  • Douglas Hamilton, Chief Data Scientist, Nasdaq

Speaker: Junta Nakai


– [Moderator] Hi everyone. Thanks for joining our Financial Services breakout. Now I’m going to hand it over to our first speaker, Junta. Take it away.

– Hello, my name is Junta Nakai. I head up the financial services business at Databricks. And thank you so much for joining me today. Today we're talking about taking a data and AI driven approach towards the future of financial services. Financial services is one of the fastest growing verticals that we have at this company. We work with over 400 institutions around the world, from banking and insurance to FinTech. And I think the future of finance is going to be three things. It's going to be much more instant. It's going to be far more inclusive. And it's going to be increasingly invisible. And underpinning all three of these trends is data. If you think about the traditional advantages that an FSI had in its past, which I would argue were capital and scale, and think about what's going to make a very competitive FSI in the future, I think it comes down to two things: data and its people, or more specifically the ability of its people to leverage the data that they have to drive innovation and new types of customer experiences to compete in the new era of financial services. This is why I think financial services can best be understood through the prism of data and AI, and therefore financial services is a data and AI challenge. And there are two things in particular that our customers are trying to address today. One is the historical lack of innovation in the sector. Two is the historical lack of customer centricity in the sector. So let's double click on innovation. Study after study, time and again, the same story gets told, which is that innovation in financial services lags innovation in other sectors. From PwC to Fast Company to Forbes, you see here that five of 1,000 companies, nine of 1,000 companies, four of 50 companies, et cetera: only a very small number of financial services institutions globally make the list of the most innovative companies in the world.
And that is in direct contrast to just how important and how big financial services is to the overall economy. In developed markets, FS is about 16% of total market cap, and in emerging markets it's even higher at 20%. So in spite of the size of the sector, and in spite of its ability to generate tons of data and thus leverage that data, innovation in this sector has not kept up. The second part is a lack of customer centricity. It's not surprising that if you look at NPS scores, for example, financial services tends to lag behind other industries. But in this chart, what I find very interesting is that if you look at the top, you see brokerage and investments. Brokerage and investments actually scores really high in terms of customer satisfaction and customer experience relative to the other sectors. And I think that's directly correlated to the amount of innovation that's happened in the space over the past few decades. Think about passive investing, robo-advising, no-fee stock transactions, ESG investing, et cetera. There has actually been a lot of innovation in brokerage and investment. And as a result, the way in which people interact with these applications and services tends to create happier experiences and happier customers. So when you think about the future of finance and what incumbents and FinTechs have to do in order to win in that environment, they have to first win the battle for relevance with their customers, because winning relevance with customers generates data. Imagine you're an incumbent that has a very compelling and engaging banking app, and your average customer checks that app 25 times a month. That is a significant competitive advantage, both in terms of new product development and in terms of personalization, versus an incumbent that doesn't have that banking app.
So winning the battle for relevance is going to become increasingly important in financial services. And I want to touch upon two themes in particular that our customers are highly focused on today. One is ESG, sustainable investing. The other is open banking. Both ESG and open banking, just like other parts of financial services, I believe can best be addressed with data and AI, and therefore each is a data and AI challenge. Take open banking, for example. For those of you not too familiar with open banking, the crux of it is the ability for a customer's data to be shared across institutions so that there can be new types of aggregations and new types of experiences; ultimately the regulators hope this will drive innovation, increase the level of customer centricity, and lower the cost of financial services provided to end customers. And if you think about the simple example of how someone shops, with cash, credit card, online shopping, and mobile shopping, and how each of those generates data, you get a glimpse of why it's so important to capture this data and innovate on it in the realm of open banking. As customers move from cash purchases towards online shopping, the variety and complexity of the data increases substantially, and so does the opportunity to drive engagement with that data, and the demands on the platform that has to deliver those services. And in the realm of open banking, incumbents and FinTechs alike must be able to leverage all the types of datasets that are going to be coming to them. It could be structured data coming in batch form from other banking institutions. It might be streaming unstructured data coming from new sources, social media, IoT devices.
It might be unstructured and structured data, coming in both batch and streaming, from alternative data providers or third-party data providers. Being able to land all of that information in one single place, and to drive your ad hoc data science, your machine learning, and your reporting from it: this ability is what enables an institution to unlock innovation and drive new products, services, and experiences for end customers. Let's take ESG, environmental, social and governance, as the next example of what customers are highly focused on. The crux of the problem with ESG is the data. A lot of it comes in unstructured formats; it can come in different varieties, different schemas, and different velocities. But more importantly, there is no agreed-upon standard for how companies should disclose their sustainability data. So if you're a bank or an insurance company or an asset manager trying to leverage this data, you need a flexible enough architecture to ingest all that data and make sense of it. And when I look at ESG specifically within financial services, I think our customers can be broken down into three levels of maturity. The first level of maturity is marketing: people who are using ESG and sustainability to tell a story to their end customer. The second, more advanced stage is being able to use that ESG data for analysis of vendors, investments, supply chains, et cetera, to do basic things like risk management and scoring the ESG performance of the people you do business with. And finally, the gold standard that people want to get to is what I call operationalizing ESG. That's the ability to take all of your internal data and all of your external data around sustainability, including operational data around sustainability. It could be third-party data.
That could be data coming from ESG vendors, social media, or sustainability reports, and being able to put it all together so that you can benchmark and understand what's happening to your company and your counterparties and partners today, and ultimately course-correct. When you get this ability to really operationalize ESG, that's when ESG becomes much more real, and it enables ESG to make the impact in the world it's supposed to, in terms of the environment, in terms of social issues, in terms of governance. And if you think about open banking and ESG specifically, in order to get to that next level of digital transformation, I believe that FSIs need what I call a modern financial data cloud. The way I conceptualize it is in two ways: vertical data and horizontal data. Vertical data simply means the depth of data that an institution has, especially incumbents that have been around for decades, sometimes centuries. Being able to leverage the entirety of the data that a bank has accumulated over the years is actually a pretty significant source of competitive advantage going forward. The next part is horizontal data, or more specifically the ability to use the breadth of external data: ESG data, alternative data, market data, social media data, news data, et cetera. And when a financial services institution can access, analyze, and use both the breadth of the external data available to it and the depth of the internal data it has accumulated over the years, that's how you build a winning financial services institution, what I call a leader, which is at the bottom right of the chart. And in order to get there, more specifically, we believe that financial services institutions need what I call a modern cloud data architecture.
That's the ability to land all of your data, batch and streaming, structured, unstructured, and semi-structured, coming from open banking apps or IoT devices, in one place, in one single data lake, where you might curate it from a bronze layer to a silver layer to do your machine learning, and aggregate it even further into what we call a gold layer to do your BI reporting. The point is being able to do all your machine learning, your data science, your reporting, et cetera, from that single source of truth. Historically, that's been really hard because of technical limitations. What we see our customers do is typically follow this path. First they say, okay, let's put all our data into a data lake, because it's cheap, it can deal with lots of types of datasets, from voice to image to text, and it's very flexible. But putting all your data in a data lake had its challenges and cons as well. There was low concurrency, it was slow, and it lacked consistency, and that data lake became what's known as a data swamp as it grew unwieldy and difficult to use. So as the next step, financial institutions said, okay, let's put some of that data in a data warehouse, because it deals with the concurrency issues and the speed issues, and we can leverage our existing skillsets to really take advantage of it. Which was great, until they realized, hey, we can't really do machine learning here; this is really meant for SQL analytics only, it's in a proprietary format, and it's expensive. The future vision, and certainly the vision that Databricks is striving towards, is what we call the lakehouse. The lakehouse paradigm takes the best of both worlds, the best of data lakes and the best of data warehouses, and puts them together so that you have an open source, open format way to land all your data in both batch and streaming.
That's a single source of truth that's performant, reliable, machine learning ready, and BI ready, that has high concurrency, ACID transactions, et cetera. Again, it takes the best of both worlds and puts them together in what we call the lakehouse, which is the convergence of the data lake and the data warehouse. Open source technologies such as Delta Lake play a very critical role in driving towards that reality. And when you have this lakehouse paradigm, the ability to land all of your data in one place and drive all of your analytics, reporting, and machine learning from that one place, it helps unlock the myriad of use cases that our customers are already trying to do on our platform today. Again, we work with over 400 institutions around the world, and when I distill all the use cases down, it comes down to many things, but five in particular: three vertical use cases and two horizontal use cases. The three vertical ones are, first, personalized finance: upsell, cross-sell, churn prediction, et cetera, the personalization that's so important in the realm of open banking. Second, risk management, across credit risk, market risk, liquidity risk, and reputational risk, which is very important in the spectrum of ESG. What is my exposure to E, what is my exposure to S, what is my exposure to G? Thinking about systemic risks instead of idiosyncratic risks, right? Climate change is a systemic risk, and how do I think about that in the day-to-day business I have as an FSI? Third, fraud detection, which is pretty self-explanatory, but this is probably the area where the most advanced uses of machine learning in financial services are today. And the two horizontal ones are alternative and third-party data.
So, using a diverse array of datasets to make better decisions, being able to do streaming analytics on streaming data, and finally model development. Model development in financial services is unlike any other sector. It takes potentially months, if not quarters, to go through model creation, validation, audit, model strategy, et cetera. And our customers are using our platform, specifically best-of-breed open source technologies like Delta Lake and MLflow, to really accelerate the model development life cycle in financial services. I just want to talk about one last thing, something called solution accelerators. Solution accelerators is a new program here at Databricks, where we help our customers bridge the gap between the ideation of a use case and its actual implementation. Specifically, it helps our customers get a running start on a use case. What we have created is a set of content and technical capabilities: code, notebooks, blogs, best practices, webinars, thought leadership articles, et cetera, that we hand over to our customers and that are publicly available on the website you see listed here. There are two comprehensive ones we've built: one around risk management and how to build a modern risk management platform in the cloud for financial services, and the second on how to take a data and AI driven approach to environmental, social and governance. I touched upon operationalizing ESG instead of just using it as a marketing tool or a risk management tool. Again, thank you so much for taking the time to listen to me today. Financial services is critically important to Databricks; as I mentioned, it is one of the fastest growing verticals that we have at this company. And we're here to lean in and make you successful in the use cases that you bring onto our platform. I sincerely hope that we'll be able to meet again in person and see you as a customer of Databricks as well. Thank you so much for your time.
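The bronze-to-silver-to-gold curation described earlier in the talk can be illustrated with a toy example. This is a conceptual sketch in plain Python, not Databricks or Delta Lake API code; the records, field names, and rules are invented purely for illustration (on a real lakehouse each layer would be a Delta table written with Spark):

```python
# Conceptual sketch of the bronze -> silver -> gold curation pattern.
# Plain-Python stand-in; all names and data here are illustrative.

# Bronze: raw events landed as-is, in whatever shape they arrive.
bronze = [
    {"account": "A1", "amount": "120.50", "type": "card"},
    {"account": "A2", "amount": "bad-value", "type": "card"},
    {"account": "A1", "amount": "75.00", "type": "transfer"},
]

def to_silver(rows):
    """Silver: cleaned and typed records; rows failing validation are dropped."""
    silver = []
    for row in rows:
        try:
            silver.append({**row, "amount": float(row["amount"])})
        except ValueError:
            pass  # a real pipeline would quarantine or log bad records
    return silver

def to_gold(rows):
    """Gold: a business-level aggregate ready for BI (total spend per account)."""
    totals = {}
    for row in rows:
        totals[row["account"]] = totals.get(row["account"], 0.0) + row["amount"]
    return totals

silver = to_silver(bronze)
gold = to_gold(silver)
print(gold)  # {'A1': 195.5} -- the malformed A2 record was dropped at silver
```

The point of the layering is that machine learning typically reads the cleaned silver layer, while BI reporting reads the aggregated gold layer, yet both derive from the same landed bronze data.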

– [Moderator] Thanks, Junta. That was great. And now I’d like to turn over to Alessio Basso from HSBC to talk about how they’re reinventing the entire payment experience with data and AI. Alessio.

– Hi, everyone. Thanks for joining in. My name is Alessio Basso and I'm joining you from Hong Kong. I am the chief architect of PayMe from HSBC. And today I'm going to share with you a little bit of how at HSBC we use machine learning and data to improve the payment experience for some of our customers here in Hong Kong. First, who we are. HSBC is one of the world's largest banking and financial services organizations. We serve more than 40 million customers worldwide through a network that spans 64 countries and territories. We have been around for 155 years, so we have a pretty significant footprint: we run data centers in 31 of these countries, and we have amassed about 256 petabytes of data. That was the count as of June this year; it was 170 last year, so it keeps growing pretty fast. This is all the data that we generate and process to serve our customers, to provide them banking services, and to understand what their needs are so we can serve them better. A lot of what HSBC does on a day to day basis is facilitate payments, from a family paying electricity bills to facilitating international commerce between multinational corporations. About a couple of years ago, we identified an unmet need in the Hong Kong market, which is one of our home markets: how to send money to friends and family simply and conveniently. The options at the time were pretty cumbersome. It wasn't unusual, if you had to pay a friend who banks with a different bank, for you to go to your HSBC ATM, withdraw cash, then walk down the street to your friend's bank's ATM and deposit exactly the same cash that you had just withdrawn. It was cumbersome, it was insecure, and we knew that we could do better. So a couple of years ago, we invented this better way. We launched a social P2P application called PayMe. We started only with P2P, and again the focus was facilitating transfers between friends and family, instantly and for free.
And later on, we launched a sister product called PayMe for Business, which lets merchants collect payments, either in store or online, from customers using PayMe. A lot of businesses adopted it because you have access to your money instantly and without a massive fee, which are the typical drawbacks of other methods of payment, like credit cards. We have experienced very steady growth. We now have 2.2 million users in a city of 7 million, and we keep launching new features for both consumers and businesses. A couple of weeks ago, we actually discovered that "PayMe" has even become a widely used verb for people here in Hong Kong, similar to "Google it" for searching for information. People are saying "I'll PayMe you later" when they want to send money. So it's really amazing and humbling that we are so embedded in the fabric of Hong Kong. How did we achieve this? We did not want to design and build all those features based on hunches. We wanted to make data-driven decisions and create a data-driven roadmap, in order to work on features that really met a need and solved a problem for our customers. So we applied a number of creative techniques to understand the data that we have and derive insights from it. Let me give you two examples. The first one: when you make a payment with PayMe, you have to type a payment message. Why do we want a payment message associated with every small P2P payment? Because we can then use natural language processing techniques to process those messages, categorize them, and try to understand what the intent behind the payment was. For instance, you may type the sushi emoji, then a smiley emoji, then a bento emoji, and we categorize this as dining, or takeaway, or Japanese. And we do this in three languages, English and traditional and simplified Chinese, plus emoji.
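The payment-message categorization Alessio describes uses NLP models over multiple languages and emoji; the sketch below is a deliberately simplified stand-in that shows only the idea, using a plain keyword and emoji lookup. The hint table and categories are invented for illustration and are not HSBC's actual model or taxonomy:

```python
# Toy illustration of categorizing P2P payment messages, including emoji.
# Real systems use trained NLP models; this is a simple lookup for clarity.

CATEGORY_HINTS = {
    "🍣": "dining",
    "sushi": "dining",
    "🍱": "dining",
    "rent": "housing",
    "taxi": "transport",
    "🚕": "transport",
}

def categorize(message: str) -> str:
    """Return the first category whose hint appears in the message."""
    text = message.lower()
    for hint, category in CATEGORY_HINTS.items():
        if hint in text:
            return category
    return "uncategorized"

print(categorize("🍣 😊"))         # dining
print(categorize("Taxi home 🚕"))  # transport
print(categorize("thanks!"))       # uncategorized
```

Even this trivial version shows why the payment message matters: it turns an opaque money transfer into a labeled intent that can drive aggregate analysis, such as spotting that many "P2P" payments are really consumer-to-business.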
And by doing that, we were able to determine that most of the payments flowing through the P2P platform were actually consumer-to-business payments. If you remember, I said at the beginning that we wanted a data-driven roadmap, and that PayMe was initially a P2P application. We analyzed the data and realized that there was a big use case we were not addressing directly, and we then created the sister product that I just mentioned, PayMe for Business. This is just one example of what I mean when I say data-driven roadmap. Another thing that we do with data, which is really cool, is that we created a network of all the interactions between our customers, both friend relationships and payment interactions. And we can predict how the network will evolve, so we can predict interactions, and we use them to make recommendations to our users, like other people you may want to pay, or merchants. During these three years of very high growth, we of course experienced challenges, and we applied this data-driven roadmap principle not only to the features that we delivered to our customers, but also to how we optimize our internal operations. So let me talk about one use case that we recently worked on. One of the challenging parts of scaling is not the technology, it's really the human part. We all know how difficult it is to create easy to use apps that can be navigated instinctively, and we know that our customers do not really read on-screen messages, FAQs, and instructions. In Hong Kong specifically, people really like to just pick up the phone and ring the HSBC hotline. This is all good, but on our side, we had hundreds of thousands of contacts raised by our customers every day, and human agents need to apply a consistent tone of voice and consistent answers across all these hundreds of thousands of tickets. This takes time, and on the customer side, of course, longer waiting times create frustration as well.
And the quality of the answers was probably not good enough either. AI to the rescue. We used machine learning, initially to help human agents pick the best answer to a customer's query. We use two different models, one is BERT and the other is the Universal Sentence Encoder, both from Google, to understand the conversation, and then a different model to map the meaning of the conversation to one answer, or a library of related answers. At stage one it was just AI helping a real agent pick the best answer; at stage two, the whole conversation is completely managed by a chatbot. And all along, the customer doesn't see the change; the UI doesn't change. The UI always looks like a chat inside the app. It's an improvement that we made in the backend, in our operations, and it was invisible to the customer. So it's very easy to underestimate the impact of this type of operations-focused model. It's not visible, the customer doesn't see it, the customer cannot share it on social media and say, hey, this is really cool. But it has a real impact on operations, on customer satisfaction, and eventually on the bottom line. So these are just a couple of ideas to give you a hint of how we use machine learning in our day to day product. But we realized pretty early that in order to succeed with machine learning and data science, we needed a stable, solid platform to handle the data that our product was generating. We wanted to achieve two things. First, we wanted to enable our teams to explore the data in a self-service manner; but of course, being a regulated entity, we have to have governance and controls on access, and data security and confidentiality are paramount. So we went through a few iterations of our platform in order to address this challenge. I won't drag you through all those iterations, but I will focus on how we solved this specifically using Databricks. So, in PayMe we have diverse data sources.
Some we call slow-moving sources, like user profiles or settings; they don't change a lot, and we typically ingest them in batches with different frequencies. Others are fast-moving sources, like transactions or user interactions in the app, and we handle these in a streaming fashion. On the other side of our data platform, we have different types of people who use the technology. One is data scientists, who of course apply different models and analyze the data using their own tools of choice, and they typically want access to all the raw data without restriction. Data analysts are more focused on extracting insights and creating dashboards and visualizations, and they typically want a very structured, well-organized data model. And in between we have the data engineers, right? They work on supplying accurate and timely data to both. We realized pretty quickly during our journey that without a unified platform, like the one that Databricks provides, you have a hodgepodge of pipelines, tools, and libraries that are hard to track and maintain, which is a real challenge. And if I had to make this challenge explicit, I would list three parts. One is data. We need fresh, full datasets, without losing track of where the data came from and how it has been transformed. This is particularly important for us as a regulated entity. We have certain approval processes that were created back in the days when moving data was not that easy, and we have to comply with those processes by making sure that they are automated and transparent to the user of the data. The second challenge is machine learning. Our data scientists cannot have an overly constrained environment; they need to be able to use the tools and libraries they need.
But of course, on the corporate side, we need controls and assurance that a model is reproducible, that it can be promoted across environments, that all experiments can be tracked, that the performance of the models can be measured, and so on and so forth. We realized that if you don't solve the first two challenges, the result is the third: stale analytics. You get old data, and the models that you run to analyze this data do not always return reliable results. And it's really, really difficult to build a platform like that from the ground up. We did start with just our own machines, Jupyter, and pipelines to glue it all together, and it didn't really work out. So if we wanted to be successful, we had to change, and really, Databricks was instrumental in enabling this change. Let me tell you a bit more about our setup. Again, this is a very high level description, but it captures the essence of what is important for us. The first thing is that we don't have one Databricks instance, but two environments as the base of our platform. One is called production, which basically ingests and captures all the real customer data, personally identifiable information, everything. The other one is called discovery. Discovery is an exact copy of production, but we stream all the data from production to discovery and we de-identify it: sensitive data is masked, sensitive IDs are regenerated. So discovery is a safe space for data users, data analysts, product managers, engineers, and data scientists to discover the data, analyze it, and create features. We did a lot of work automating the control and auditing part that I mentioned is important for us as a regulated business. A couple of things that are important are schema validation at ingestion, because we want to make sure that the schema is enforced in some cases, so we don't have data arriving in a form that people do not expect, and automating a lot of the rules around masking.
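The two controls just mentioned, schema validation at ingestion and masking of personally identifiable information (PII) before data reaches the de-identified environment, can be sketched in miniature. This is an illustrative stand-in with invented field names and a SHA-256 masking scheme; it is not HSBC's actual pipeline code:

```python
# Minimal sketch of schema validation + PII masking at ingestion.
# Field names, the schema, and the masking scheme are illustrative assumptions.
import hashlib

EXPECTED_SCHEMA = {"user_id": str, "phone": str, "amount": float}
PII_FIELDS = {"user_id", "phone"}

def validate(record: dict) -> None:
    """Reject records whose shape downstream users do not expect."""
    for field, field_type in EXPECTED_SCHEMA.items():
        if field not in record or not isinstance(record[field], field_type):
            raise ValueError(f"schema violation on field {field!r}")

def mask(record: dict) -> dict:
    """Replace sensitive values with a deterministic one-way hash."""
    return {
        key: hashlib.sha256(value.encode()).hexdigest()[:12]
        if key in PII_FIELDS else value
        for key, value in record.items()
    }

record = {"user_id": "u-42", "phone": "+852-5555-0000", "amount": 88.0}
validate(record)                  # passes; raises ValueError on malformed input
safe = mask(record)
print(safe["amount"])             # 88.0 -- non-PII passes through unchanged
print(safe["user_id"] != "u-42")  # True -- PII is de-identified
```

Deterministic hashing (rather than random replacement) keeps de-identified records joinable across tables, which matters when analysts in the safe environment still need to follow one user's activity.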
So we are really making sure that personally identifiable information does not trickle down from production to discovery, and we have a number of safeguards so that, even if you don't know that a piece of data needs to be masked, the team is alerted whenever there is a possibility of a mistake. The key attribute that enables agility on this platform, though, is not the pipelines we created to move data around. It's really the fact that in the discovery environment, all the different types of users that I just mentioned can work together, each doing their own part to derive insights from this data. And it's very easy, because all the plumbing, storage, data pipelines, the management of notebooks, machine learning libraries, deploying libraries to the clusters, is managed for us by Databricks. So the discovery environment is really that single unified platform where all the types of data users that we have can work together, see each other's work, of course with controls, and then build on top of each other. There are three key features of this platform in particular that really help us deliver on our vision. The first one is Delta streaming. For those who don't know, Delta streaming is the capability of any Delta table to be both a streaming source and a sink. So we can create complex pipelines to process the data that comes in from our transactional platform in near real time, without having to rely on external services to do the plumbing. There is no message broker here; it's all managed by Delta. All you do is write to a table, and that automatically triggers processing for the next step of your pipeline. This is really, really important; it simplifies your operations dramatically. The second thing is MLflow. It makes it really easy to run experiments, track model performance, manage multiple model versions, and so on and so forth.
And that's really fundamental as you scale up: you have a lot of different models and a lot of different data scientists working on different use cases, and of course you want to continuously improve. And there's another very important part, which is another Delta feature called time travel. It's very, very useful for managing machine learning model recommendations in a regulated environment. Time travel is the ability of Delta to keep an auditable history of versions of all the data that is inserted, so I can explore the data as of a particular point in time. This is very useful, for instance, when you have a case of suspected model bias and you want to investigate it. How would you reproduce the situation in order to investigate whether your model was indeed biased? You need the recommendation, so the results of the model, as well as all the inputs that were provided at the time to create the recommendation: the hyperparameters and all the input features. If you store them in Delta, and we do, you have a continuously updated audit trail of everything we ever fed to our machine learning models, the machine learning models themselves, and all the results, so we can investigate these particular cases. This is really, really important for us in a regulated industry. And of course, to close the circle, we have automated pipelines: we publish the notebooks and all the models that we created in discovery back into production, where they can run to generate insights from real data and serve them to real consumers. We store the output of those models in Delta, and we connect our BI tools to Delta so we can do all the analysis, dashboarding, presentation, and investigation, if needed, that I just mentioned is so important for us.
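The auditing workflow just described can be modeled conceptually. In Delta, time travel is exposed as a read option such as `versionAsOf`; the class below is a plain-Python stand-in (not the Delta or Spark API) meant only to show why versioned reads make a model's inputs and outputs reproducible:

```python
# Conceptual model of Delta time travel for model auditing: every write
# creates a new version, and any past version can be read back exactly.
# In Delta this would be spark.read.option("versionAsOf", n) on a table;
# the class below is an invented in-memory analogue for illustration.

class VersionedTable:
    def __init__(self):
        self._versions = [[]]  # version 0 is the empty table

    def append(self, rows):
        """Append rows; each write produces a new, immutable version."""
        self._versions.append(self._versions[-1] + rows)
        return len(self._versions) - 1  # the new version number

    def as_of(self, version):
        """Read the table exactly as it was at a past version."""
        return self._versions[version]

audit = VersionedTable()
v1 = audit.append([{"model": "churn-v1", "features": [0.2, 0.9], "score": 0.7}])
v2 = audit.append([{"model": "churn-v1", "features": [0.1, 0.4], "score": 0.3}])

# To investigate a past recommendation, read the table as it was back then:
print(len(audit.as_of(v1)))  # 1 -- exactly the rows that existed at v1
print(len(audit.as_of(v2)))  # 2
```

Because each version is immutable, a bias investigation can replay a recommendation against precisely the features, hyperparameters, and outputs that were recorded at the time, rather than against today's drifted data.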
So again, I want to stress that this setup, the two-Databricks setup, is really important for us. Production handles all the traffic, all the data, all the models, and all the recommendations that we serve to our customers. Discovery is a collaborative data science and engineering environment where all the members of the teams can work together to explore data and generate insights. And this is all safe, because discovery does not have PII in our case. So that's our setup, that's how it really worked for us, and it enabled the culture of collaboration that made the data-driven roadmap principle possible for PayMe and for HSBC. In closing, to summarize: we have a transactional application, a digital wallet that we deploy in the Hong Kong market. We built our data platform on top of Databricks, and we built it as a single unified data platform for data engineering, data analytics, and data science; some tools were provided, others we built on top. This is the cornerstone of our data analytics platform, and it has been really, really effective in delivering the value that PayMe brings to our consumers. A couple of improvements that we see: data timeliness is very important for us, and our teams are happy. They really like the tools, and they really like the collaborative way of working that the platform brings. Some of the tools are really good for us as a regulated business: the platform facilitates tracking of historical changes, it facilitates the improvement and governance of ML models, and it enables the data-driven roadmap, which is the most important thing for us. That's all I have for today. Thanks for listening, thanks for joining in, and I hope I will be able to show you much more the next time we have a chance to speak, because we're just getting started with PayMe at HSBC. Thank you.
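The Delta streaming pattern Alessio highlights, where a table acts as both a sink and a source so that a write automatically feeds the next stage with no separate message broker, can be modeled in miniature. This in-memory analogue (an invented class, not the Delta or Spark Structured Streaming API) illustrates only the chaining behavior:

```python
# In-memory analogue of the "table as both sink and source" streaming
# pattern: appending to one table automatically triggers the next stage.
# Real pipelines use Spark Structured Streaming over Delta tables; this
# invented class just models the chaining, with no broker in between.

class StreamingTable:
    def __init__(self, name):
        self.name = name
        self.rows = []
        self._subscribers = []

    def subscribe(self, fn):
        """Register a downstream stage fed by this table."""
        self._subscribers.append(fn)

    def append(self, row):
        self.rows.append(row)
        for fn in self._subscribers:  # the write triggers the next stage
            fn(row)

transactions = StreamingTable("transactions")      # raw stream lands here
large_payments = StreamingTable("large_payments")  # derived downstream table

# Downstream stage: route large payments into the next table as they arrive.
transactions.subscribe(
    lambda row: large_payments.append(row) if row["amount"] > 1000 else None
)

transactions.append({"id": 1, "amount": 250})
transactions.append({"id": 2, "amount": 5000})
print(len(large_payments.rows))  # 1
```

The operational simplification is exactly what the talk describes: there is no separate queueing system to run, because the storage layer itself propagates new data to the next step of the pipeline.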

– [Moderator] Thanks Alessio. What an awesome story. Next up, we have a great set of panel speakers to participate in a discussion moderated by Junta Nakai. Junta, back over to you.

– We’re super excited to have a panel of industry thought leaders in financial services today. I’m going to do brief introductions. Doug Hamilton, Chief Data Scientist at NASDAQ. Jacques Oelofse, who is the VP of Data Engineering and Machine Learning Platform at HSBC. And finally, Mark Avallone from S&P Global, who is the VP of Architecture. So let me just dive into today’s panel session. We’re going to talk a little bit about agility, transformation, collaboration, and also culture. And with that, maybe I could start off with Doug: what is NASDAQ’s approach to data and AI today? I would love to know how that’s changed since 2019, and how has that really evolved from a company and a capability perspective?

– Yeah, absolutely. Thank you, and thanks for having me. So when we look at NASDAQ’s AI and data strategy over the last few years, I think the most notable thing is the evolution from being kind of an analytics group that tries to find insights exclusively in data, to one that looks to see how we can leverage AI, cognitive process automation and advances within machine learning, like reinforcement learning and deep learning, to provide more robust and scalable processes and products built on top of that technology. If we want some really great examples of this: when we first started the group some years ago, our focus was almost entirely on vetting alternative data, bringing it into our ecosystem, figuring out if there was interesting information in there and then delivering it to our clients. Now, this is an interesting value proposition, very product-focused, and certainly focused on ensuring that we’re investigating data and offering high-value data sets to the buy and sell side. But what we found is that even more valuable than that is when we can leverage the ever-decreasing costs of cloud, and the increased reliance on cloud to propagate the amount of data that we store and ease of access to it, to begin developing much larger technologies: NLP technologies that allow us to extract information not just from structured data, but from the bevy of unstructured data that we have through SEC filings, earnings transcripts, et cetera; and moving from looking exclusively at what indexes or signals might exist, to actually being able to generate and manage portfolios and indexes using AI technologies. And I think that’s our biggest sea change on the technology side: a refocus away from the analysis portion and towards automation and scale.
By the same token, we’ve also noted that four or five years ago this was a relatively new technology, and our business, as well as our technologists, were all struggling with the rapidly evolving landscape. Over this time we’ve also noticed, and this feeds back into the question of where we focus our AI efforts, that our business has begun to understand AI not as a series of arcane algorithms that get published in journals that nobody reads, but rather as a series of capabilities around control, prediction and perception, that is, how machines see the world and read documents and interactions, plus new and interesting ways to interact with machines through things like machine transcription, that we can build novel products on, and that help us scale our current notion of financial markets to markets everywhere.

– That’s great. And Jacques, I would love to ask you the same question, with the addition of the Pay Me application, which obviously has been a great success for HSBC, especially in terms of competing against the formidable tech companies that are really close to the city. Can you talk a little bit about how data and AI have contributed to the success of Pay Me?

– Yes, thanks Junta, and hi, everybody. So yeah, three things have mainly driven the success. Firstly, Pay Me is for the most part a data-driven product, but before I get into the details, maybe I’ll just start with a quick introduction of what Pay Me is. Pay Me is a digital app that was launched in Hong Kong in 2017. The goal is to let people send money to friends and family conveniently and for free, and to address the problem in the Hong Kong market where sending money was very expensive and cumbersome. So essentially, if you’re from the US, it’s a similar product to Venmo. Fast forward three years, and we now have 70% of the market share, against, like you mentioned, competitors like Alipay and WeChat Pay. So after we launched the product, we started running some AI and ML to identify various use cases on the product. And what we saw was that a significant number of users were really using the app to collect payments for products and services. This really solidified the value prop for a merchant app, and from this a kind of sister app, Pay Me for Business, was born. This is really a companion app, like Uber for drivers, and it’s used by shops in Hong Kong to collect payments from customers that have the Pay Me app, but it also has several integrations: POS, vending machines, APIs and e-commerce platforms. So really an ecosystem for payments in Hong Kong. So driving the roadmap was really one of the initial use cases. I think the second one was really embedding ML into the user experience. An example of that: people really use the app to pay people, and in that journey you want to suggest a list of who you might pay next. Historically you’d use something like looking at who you’ve paid most and rank-ordering them in that fashion, but you really want to use ML to go a level deeper and recognize that during lunchtime you’re probably more likely to pay a colleague, and in the evening or on weekends, family or friends.
So while the end user doesn’t really see a big difference in the UI of the app, it really impacts the journey, it makes people more likely to use the app, and there’s real value in that. The final item that I just want to touch on, which isn’t really machine learning driven, is democratizing data. This is something that’s been very beneficial; it’s been around for some time and I think the benefits are well understood: it’s really just giving more people access to data. Within Pay Me we found that the product teams have adapted very well to this. They’re now able to understand much more quickly how the product is performing when we launch new features, and adjust to customers’ needs at pace. With democratizing data, interestingly, we’ve also noticed an uptake in notebook-style data exploration, as opposed to the more traditional drag-and-drop visual data exploration. So I think those are the main areas that have helped drive Pay Me’s success with data.
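The context-aware ranking described in this answer, frequency alone versus frequency conditioned on time of day, can be sketched in a few lines. This is a hypothetical illustration, not the Pay Me implementation; the payment history, the payee names and the lunchtime/evening buckets are all made up.

```python
from collections import Counter

# Each record: (payee, hour of day). Hypothetical payment history.
history = [
    ("mum", 20), ("mum", 21), ("mum", 19),   # evenings: family
    ("colleague", 12), ("colleague", 13),    # lunchtime: colleague
    ("friend", 22),
]

def rank_by_frequency(history):
    """Baseline: rank payees by who you've paid most often overall."""
    counts = Counter(payee for payee, _ in history)
    return [payee for payee, _ in counts.most_common()]

def rank_by_context(history, hour):
    """Go a level deeper: rank by frequency within the same time bucket."""
    bucket = "lunch" if 11 <= hour <= 14 else "other"
    same_bucket = Counter(
        payee for payee, h in history
        if ("lunch" if 11 <= h <= 14 else "other") == bucket
    )
    # Fall back to overall frequency for payees unseen in this bucket.
    overall = Counter(payee for payee, _ in history)
    return sorted(overall, key=lambda p: (-same_bucket[p], -overall[p]))
```

With this toy data, the overall ranking puts "mum" first, while the lunchtime ranking surfaces "colleague", which is the behavioural difference the speaker describes even though the UI looks the same.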

– That was great. Thanks a lot for that color, Jacques. And next to you, Mark, with the same question, but for a company like S&P, where data assets and monetizing them have been core to your business for years. Even given that context, I would love to understand how S&P’s approach to data and AI has evolved and changed.

– Sure, thanks. So we have four key divisions within S&P Global. We have the Ratings division, the Indices division, Platts, which focuses on commodities insights, and then the division that I’m in, which is Market Intelligence. You can think of Market Intelligence as the funnel for the data, where we ingest and rationalize it, sell that data in and of itself as a product, and then further share that data with the other divisions for them to perform their business functions with. So for Market Intelligence, our $2 billion in annual revenue comes from this data collection that continuously grows in breadth and depth; the volume and the velocity of that data continue to grow. I remember not too long ago everyone was talking about big data, and now, with AI, it’s kind of like big data with a purpose. Folks finally figured out what to actually do with the big data: rather than just the feat of storing it and running something on it to drive some insight, it’s become clear that we must process this big data and apply technologies to it. And that’s what we find tremendously valuable with AI: our ability to keep up with the demand for this data. Doug mentioned driving insights on alternative data and looking for potentially new indices, and our Ratings business is looking to create new insights into issuances. Our ability to apply AI is crucial to our ability to really tread water in a world of ever-increasing data.

– That’s great. Big data with a purpose, I love that quote. And I would love to stay with you, Mark. Some of the companies on this panel have been around for centuries, literally: S&P was founded in 1860, HSBC was founded in 1865. How do you get an organization with that much history and legacy to embrace change? And we’d love to know what were some of the enabling factors, from a business and tech perspective, that actually enabled that transformation.

– Yeah. So I’d say that we’ve been lucky across the entire corporation in having a really strong partnership between business and technology, all the way up to the board. Rebecca Jacoby, formerly with Cisco, is on the board and truly understands investments in platforms and in technology. And then within the division of Market Intelligence, where our business is the distribution of data, I think we have a very good understanding across the business of what our clients are looking to do with that data, and an awareness of what’s the art of the possible. The business then looks to us within the technology teams not just to distribute that data to our clients for them to perform those functions or apply those technologies, but to find opportunities to apply those technologies ourselves, either to deliver new top-line growth through innovation or to achieve increasing efficiencies. Because, as I was mentioning, with the volume and the velocity of the data, there’s no way that we can linearly scale our manual workforce to keep up with curating, linking and cleaning that data. We have to bring these AI technologies to bear on this data in order to really just keep up; we can’t make that linear manual investment.

– And Jacques, I would love to go to you next and ask you the same question about how does an organization with 155 years of history embrace change, and what were some of the enabling factors there?

– Yeah, sure, Junta. So I think in the last few years the focus on digital transformation really started setting the pace for change and moving the needle in the right direction, and there’s been a big understanding of the value of data and AI. So it’s really just: how do you get it into the hands of people quickly? An approach that worked for us early on was to just do a showcase of what’s possible and get a success early. That really gets people excited, and it’s something that’s worked well for me in the past. From a business perspective, it’s carefully deciding what that piece of work may be: you need to strike a balance between showcasing something that can give your stakeholders enough value and doing it within a reasonable period of time. I think the second thing is really empowering teams to make decisions, so instead of just having a top-down approach, also having a bottom-up approach; initiatives like self-service and democratizing data really help you drive this agenda. The other thing that worked for us at Pay Me was setting ourselves up so that tech and business have close representation in each part of the product, with closely aligned goals. You also need to set your technology up so that the teams really have the autonomy to achieve those goals without being too reliant on too many shared services, and that’s not always easy. I think the move to cloud has really enabled this change, and choosing the right tooling plays an important factor: you really want something that scales, but that doesn’t require too much upfront setup. Yeah.

– Thanks Jacques. And Doug, NASDAQ is not quite as old as S&P or HSBC, but we’d love to know: how has change occurred at NASDAQ?

– Yeah, absolutely. So I think NASDAQ definitely has some advantage here, in that our heritage really is as a digital marketplace first and foremost, so digital has been the core and fiber of our being from the beginning. With that said, with the rise of things like AI and increased data, we have had to renegotiate our place in the market. Historically we were a very passive provider of a financial market upon which people traded, and the natural thing everybody thinks the first time they hear about machine learning is: machine learning deals with numbers and predicting numbers, and the stock market is made of numbers. So naturally, the first thing we tried to do was use our data to predict prices. Well, it turns out that nobody wants NASDAQ to predict prices. Not the market participants, who are trying to do that themselves and aren’t interested in our lack of skin in the game on the actual trading portion of it; nor do the companies listed on our exchange want us predicting their prices and making any sort of value statements about them specifically. So finding out where we exist in that market has been interesting: it’s difficult to do any sort of machine learning without having an opinion, because at the end of the day, that’s what it is, a statistically informed opinion that we’re producing. So finding the right level of that opinion, how we disseminate it and who the clients are for it has been something we’ve had to renegotiate. I’d also say that questions and concerns around how we deal with all of this under the auspices of a march to cloud technology have been a challenge that we’ve had to overcome.
How do we ensure that our clients, both on the corporate services side and, of course, on the trading participants side, who have incredibly sensitive data and incredibly sensitive strategies, maintain the security and privacy of those strategies, while we’re also able to extract the insights necessary to keep markets safe, fair and transparent, and do that computation on cloud when appropriate? Overcoming that hurdle was equally important, which I think we did by following a number of emerging best practices around cloud security and privacy, ensuring that we could continue our digital transformation in the age of AI and big data.

– And I would love to stay with you again, Doug. It’s often said that AI is a team sport; actually, Jacques talked about the democratization of data a little bit earlier. What are some of the main challenges of driving innovation at scale across multiple lines of business, regions, functions, et cetera? And I’d love to know how the unified analytics platform has helped with collaboration specifically.

– Yeah, absolutely. So let’s first talk about the obvious one: across regions. I’m sure that Jacques can commiserate with this right now. At the end of the day, while these collaboration platforms are great for letting our team in Sydney write some code and do some testing, then the team in Stockholm verify it and push it into production, while the team here in Boston builds out the next layer of the next model that we might want to push, there’s no substitute for getting on a phone, or, as it used to be, getting on a plane and meeting face to face. I think getting on a phone is perfectly fine in many cases, but no one likes the 5:00 AM phone call, especially when the day ends with the 11:00 PM phone call. Everybody’s got to be on the same page, and the ability, on a tactile level, to reduce the number of those calls, because we have these collaborative platforms like Databricks, like GitHub, et cetera, that allow us to code in parallel, test in parallel, verify in parallel and deploy in parallel, has been enormously powerful. Beyond that, there’s the ability to have these connections across lines of business, because ultimately, even in a firm that has its data storage mostly on point, surprisingly on point for a large firm, data is still siloed across different business units. The fact that we can very rapidly, in a persistent way, keep those connections through a platform like MLflow just makes starting up new projects much easier, without having to track down 100 people in 100 different business units who know the one table that exists in one place, which might actually be 900 PowerPoint presentations that you’re going to get to learn how to scrape next week. So I think those are the sorts of challenges, friction in project startup and project persistence, that are being reduced by these platforms.
And I’m certainly thankful to get some of my evenings back from them as well, as well as seeing the finished product.

– That’s great. And next, I’d love to go to Mark with the same question. So, collaboration at S&P.

– Yeah. As this last question was put to Doug, I was immediately thinking about our marketplace initiative within S&P Global, born out of Market Intelligence but not restricted to that one division. Primarily we’re looking to provide a marketplace for third-party data and distribute it to clients through APIs and feeds, and we look to enrich and add value to that third-party data by co-mingling it with our own data. That could come across a number of different content domain verticals: maybe the third-party data pertains to real estate, and we’d want to cross-reference footfall traffic with particular properties or particular rates, or we want to co-mingle company fundamentals, or metals and mining data, or insurance data. And the data analytics platform has provided us just a wonderful workbench to put that data together, look to see where we can draw some insights, and see what kind of data aligns across our vast data storage. Yeah, I think it’s really been a huge boon. We’ve also been using it across the marketplace as a white-labeled product, where we’re increasingly looking at not only doing that enrichment ourselves, but providing a workbench for clients to subscribe to this data and then marry it with data that must reside on premises, or must be secured and kept private within their own ecosystem. So it’s been a really great addition to this initiative.

– That’s great. Thanks Mark. And Jacques, same question. And you’ve already talked a little bit about collaboration, but how has collaboration helped with Pay Me success?

– Yeah, Junta. So two aspects, really. The first one is the platform aspect of the collaborative platform. Without a unified platform, you really end up with multiple systems: ETL, Delta Lake, something for streaming, scheduling and whatnot. Having separate systems traditionally brings a lot of overhead; they can only be loosely integrated, and they end up with separate owners. So in a scenario where you want to get pipelines off the ground fast, or provide new functionality on the platform to your users, this can often be cumbersome, and with competing priorities on these different systems, it can really slow you down. So this is an unexpected way that the collaborative platform has actually helped us move fast: as new features are released, they all work seamlessly together. From a people perspective, having the different teams work together on the same platform has really helped align understanding, and I think that’s some of what Doug alluded to. Often these functions overlap: for example, you have a data scientist who comes more from a statistical background; he might be interested in generating insights that get consumed in a white paper or presentation, but he’s not that comfortable with getting an automated pipeline to production on Spark. This kind of collaborative platform really makes it easy for these different roles, the data scientist and the data engineer, to work together. Very often, when these pipelines get moved to production, what you get back is not what you initially created, so this collaborative environment definitely makes that a lot easier. I think this also translates into faster cycles.
So when you move from experimentation to production, having that collaborative environment, as you do this more and more often, with people sharing their work and seeing what needs to happen, you often get to the end point a lot faster. Also, something that’s maybe not as data science driven: the model output and the features that drive the model also have a place in traditional MI. So having your data analysts collaborating on the same platform ensures that a lot of this data is also reflected in your MI dashboarding. So, yeah.

– That’s awesome. And Jacques, I would love to stay with you and talk about everybody’s favorite subject, which is security and cloud. And you mentioned earlier a little bit about the move towards the cloud, but obviously security is top of mind for all financial services. And especially for G-SIBs like HSBC and the customers that we talked to, there’s sometimes still a little bit of trepidation with moving to the cloud because of that. And we’d love to get your sense of how your organization got comfortable with security in the cloud and more specifically, are you seeing a push for multi-cloud?

– Yeah, so Pay Me had explosive growth right after it launched, and this was supported by the cloud; deploying a fast-growing product on traditional on-premises infrastructure wasn’t really an option at the time. So Pay Me was the first product HSBC deployed on the cloud. Being a big company with a long history of security controls implemented mainly in a traditional on-prem data center paradigm, policies were written and enforced as such. So the key to making our internal security teams comfortable with the cloud was really dialogue: essentially showing that we were doing the equivalent of what’s done on-prem, just on the cloud, and that you can do things similarly, if not better, on the cloud. It was really just engaging with our security team through dialogue and getting them comfortable with it. Another item we identified early on, so that this wasn’t going to be a problem: on the platform that most people work on, we ended up de-identifying and masking all sensitive and personal information. This also went a long way in putting people’s minds at ease and driving those conversations forward. In terms of multi-cloud, Pay Me right now is just on a single cloud, but outside of Pay Me we’re certainly on most of the large commercial cloud providers.
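The de-identification step mentioned here, masking sensitive fields before data lands in a shared workspace, might look something like the following. This is a generic sketch, not HSBC’s actual pipeline; the field names and the choice of salted hashing for pseudonymisation are illustrative assumptions.

```python
import hashlib

SENSITIVE_FIELDS = {"name", "phone", "account_number"}  # hypothetical schema

def pseudonymise(value, salt="example-salt"):
    """Replace a sensitive value with a stable, irreversible token, so
    records can still be joined on it without exposing the raw value."""
    return hashlib.sha256((salt + str(value)).encode()).hexdigest()[:12]

def mask_record(record):
    """De-identify one record before it reaches the shared workspace."""
    return {
        key: pseudonymise(val) if key in SENSITIVE_FIELDS else val
        for key, val in record.items()
    }

raw = {"name": "Alice Wong", "phone": "+852 5555 0100", "amount": 120.0}
safe = mask_record(raw)
```

Hashing with a fixed salt keeps tokens stable across runs, so analysts can still count and join on a customer without ever seeing who the customer is; a production system would manage the salt as a secret.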

– That’s great. And Mark, we’d love to ask you the same question as well. So security cloud adoption.

– Yeah. I think from division to division the security needs are different. Ratings, of course, well, maybe not obviously, but Ratings is in such a business that they’re highly regulated, and the requirements for governance and provenance of their data are quite different from, say, Market Intelligence, where most of the data that we have and deal with is actually publicly available; it’s really our finesse in aligning and standardizing that data that creates true value for our clients. So for Market Intelligence, the adoption of the cloud is really not a difficult arrangement security-wise; outside of the obvious security considerations, there isn’t really the IP, sensitive data or PII concern that there would be for Ratings. In general, we do focus on a multi-cloud strategy. It was necessitated when we moved to having local operations in China: our Ratings division is the first multinational credit rating agency to issue ratings in China for Chinese-owned securities, so we actually use Alibaba Cloud within China and AWS outside. In general, we don’t really look at it business line to business line so much as product to product, platform to platform, service to service, capability to capability. By default, our stance is to build things in a multi-cloud fashion and to provide some insulation and portability through things like containerization, abstraction and provider models. Where this isn’t adhered to, because of tactical need or just conscious decision, it’s always a conscious decision, pretty eyes-wide-open as far as what our technical debt is to remain truly cloud-agnostic and hybrid-capable. That being said, the Databricks platform obviously is a really comfortable choice as a vendor solution that can grow and move with us as we keep our options open and utilize whatever makes sense.

– Great. And Doug, the same question as well.

– Yeah, so I think Jacques and Mark really hit on the big issues here, right? One is the type of data that we’re looking at. If we look at something like our SMARTS business, our market surveillance business, that data is much more sensitive, and it would take a much longer time for the businesses to become comfortable working with it on the cloud; it would need emerging standards and, frankly, comfort with things like de-identification and the encryption standards to deal with that. By the same token, our market data today is available on cloud, it’s not free, but it is available on cloud, through NASDAQ market data services, in a real-time, streaming way, and of course that’s much more public information. But I think even beyond that, one of the things we haven’t talked about is the growing ecosystem around cloud security: advice from professional organizations, the technologies that are available, whether it’s the security features offered by large cloud providers, the open-source security, dockerization and containerization features, et cetera, that Mark touched on, or just the better guidance given by organizations like G-Sec and CSIPP around how we handle the cloud. I think all of those things play an important role in cloud adoption, and particularly in becoming more comfortable with cloud security. It’s not just how your one team handles some relatively sensitive data, because that’s nice but not scalable; it’s also about the broad ecosystem of industry standards, expert advice and technology that’s been built up to support cloud security.

– That’s great. And I want to stay with you, Doug, and talk about maybe the number one subject that comes up in our conversations, which is culture, and more specifically recruiting. Obviously there’s a lot of demand for data scientists, data engineers, people with these types of technical capabilities. And I know you specifically, Doug, do a lot of events and speaking. How has being an industry leader in advanced analytics at a place like NASDAQ driven recruitment success?

– Yeah. I’ll tell you that almost every single person that we’ve recruited has at some point commented that they’ve dug up or seen either myself or one of the folks in the lab speaking at a conference before, and that was a driving factor in their decision to join our internship, join the team, or join the company more broadly. I think that sort of exposure, particularly when you’re looking at very recent graduates, is really a differentiator, right? We have our MI lab set up in Boston, and there are a couple of good schools in the Boston area from which we like to recruit, and many of these people have many, many options. When you’re trying to decide where to start your career, the ability to go out and verify that the person or the team you’re going to be working with are real thought leaders in the industry, or at least have put themselves out there as thought leaders in the industry, is one that I think we find incredibly valuable in bringing these people on. And also, frankly, telling our story and letting people know what we do. Maybe this isn’t as much a problem at S&P or HSBC, but certainly at NASDAQ, the first question I get asked at cocktail parties is: what stock should I buy? Like, that’s not really what we do, man. So being able to tell our story about what we do beyond providing a financial market, and to let people know that we don’t trade on the market, is really valuable.

– Got it. I would love to ask you for some stock tips later on. But Mark, same question: how do you find more Marks, right? Especially in places like London and New York, where tech companies are vying for the same type of talent.

– I don’t know how many more Marks Market Intelligence can stand, but we definitely want more people like the leaders I have across my own team. I think the answer is similar to Doug’s: you really just want to get yourself out there, and you want to have your team get themselves out there, and generally that problem kind of solves itself. Top talent generally has such a passion for what they do; they’re involved in their local environments and local communities, they communicate and signal to that community what they’re doing, and we completely support that sharing of the work and the experience. And I think S&P Global has such an amazing and well-cataloged set of data, so many interesting problems, and such an amazing role and function in the financial markets of the world, that it’s a pretty easy sell once people actually know, similar to Doug’s point, what it is you actually do, once they get past the idea that you’re trying to write some new algorithm to pick the best stocks and predict their prices. There are many other problems we’re out to solve, with our focus on ESG and diversity and inclusion, and the real material role that has across corporations, across economies, across the world. And I think our leadership, our C-suite leadership, our leadership across the board, makes a very strong stand on the social issues of the day, which I think is really brave and which I completely respect and support. When people look at our company and pay attention to what we do and what we stand for, they’re really excited to work not just for a company that is influential and strong, even in current economic times, but for a company that shares their perspectives and is really driving for material change.

– Great, and Jacques, same question. And obviously HSBC is a truly global bank and you’re in Hong Kong. So how do you think about recruiting local talent versus global talent, in terms of just attracting the best people into the organization?

– Yeah, so it’s a global business, and in Hong Kong we’re in close proximity to China. As we all know, China is a front-runner in AI, with plans to be the global leader by 2030, driven by significant investment, high adoption rates and access to large volumes of data. HSBC has technology centers throughout China, so we’re actually able to tap into that large talent pool, and we’ve been able to successfully hire from tech giants like Baidu and Tencent. So given the global demand for talent, especially in the data science and ML field, it’s certainly made sense for us to focus more on local talent.

– That’s great. And finally, I want to peer a little bit into the future, and maybe I’ll start with Doug this time. Talk about how you see data and AI evolving. Today Nasdaq uses AI, for example, for anomaly detection. Looking into the future, what do you think is in store for the next two to three years? And how do you think data and AI will impact the future goals and priorities of the company?

– Yeah, that’s a great question. I like that one. So today, when I think about the landscape of AI and what we’re able to monetize, it’s really anomaly detection, benchmarking, peer grouping, and various applications of those. And those are great, and they’ve been very, very valuable to us. But when we look at the future, I think there are two trends that we’re going to see converge that are core to innovation. The first is a broader innovation trend that we’ve seen over and over again throughout history, which is that when factories were first electrified, people would basically take the same processes they had been running with steam engines or with water power and just do them with electricity instead. And that’s not really the point. While there’s some benefit gained from moving from a less reliable or more cumbersome form of energy to electricity, it’s not the real benefit, which is the distributed nature of energy delivery. So over time, rather than trying to fit electricity into existing processes and existing designs, people began designing things around the electrification of the factory. What we see right now is: we have a process, how can we make it faster by applying AI? We have a process, how can we make it less expensive to deliver by applying AI? What we’re going to see in the near-term future, which is very exciting, is: we have AI, it is a core transformative technology, so how do we build processes and business cases around AI at its core, rather than bolting it on afterwards to make existing processes and businesses more efficient? So I think that’s the first thing we’re going to see.
The second thing, which will really enable this, and which will really enable us to think about AI more broadly, how it enables market efficiency, new and novel market structures and new and novel ways to deliver markets to the world, is the rise of reinforcement learning. In particular, what we’re interested in is how we go from AI merely being a recommendation system, or a system that’s reliable in making marginally better predictions than, say, dice, to a system that really makes decisions and hedges its own risk based on those decisions. That’s what’s at the core of the Bellman equation, right? It’s not just knowing what the prediction will be, but also something about the distribution of error and how to take advantage of that distribution of error in an advantageous way based on a particular action. And I think these two things, a view of AI as a core technology around which we can build businesses and business processes, and the ability to move from AI as merely a tool to extract insights and make recommendations to a tool that can make hundreds of thousands to millions of decisions every day without needing a huge amount of human oversight for many core and critical processes, freeing up people’s time to do more of the creative work and the relationship work that goes into designing and delivering AI solutions, these are the two trends in technology that have me most excited about AI and where I think Nasdaq is going in the future.
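[Editor’s note: for readers unfamiliar with the reference, the Bellman equation Doug mentions can be written in its standard optimal-value form. The notation below is the textbook convention for a Markov decision process, not anything specific to Nasdaq’s systems:]

```latex
V^{*}(s) = \max_{a} \Big[ R(s, a) + \gamma \sum_{s'} P(s' \mid s, a) \, V^{*}(s') \Big]
```

Read: the value of a state $s$ is the best action’s immediate reward plus the discounted expected value of the states that action can lead to. Acting well therefore requires modeling the full distribution over outcomes $P(s' \mid s, a)$, not just a point prediction, which is the distinction Doug draws between a recommendation system and a decision-making system.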

– That’s great. And Mark, same question, what’s the data and AI analytics footprint of S&P going to look like in a few years?

– Yeah. So I think again, that answer would probably vary from division to division. In our more insights-driven divisions, to Doug’s point, there’ll be more and more AI-driven judgments. I think AI is actually kind of a made-up term. There’s not really a hard and fast definition. It’s always whatever is hard to imagine a computer doing that humans do well today, and that keeps shifting, right? Now my eight-year-old talks to his Alexa and Google and whatnot, and it would have seemed like sci-fi to think, when I was eight years old, that that would be possible. So we’re not necessarily in the business of figuring out everything people would want to apply AI to, but we definitely see an increasing trend that our clients are really interested in our feeds and our APIs, in just getting that data so that they can figure out how to bring AI to bear for their own needs and their own insights. So for S&P, division to division, ratings and indices, we’ll be looking for opportunities to further enrich and add value. And then, again to Doug’s point, maybe not so romantic, we’ll be applying AI to really just keep up with the volume and velocity of the data that we’re seeing, so that we can be that partner, that trusted distribution channel, for everyone else to get the data they need to bring AI to bear within their own problem set, in their own domain.

– AI to deal with the big data. That’s a really interesting concept as well. And finally, last but not least, Jacques, same question. PayMe is already a fairly sophisticated user of machine learning and AI compared to other financial services apps that I know of, and we’d love to hear how you see PayMe and data and AI evolving in your business.

– Yeah, so I think we’ll continue to use ML to understand which problems our customers want us to solve next, and how we can create more value for actors in our ecosystem, be that merchants, platforms, partners or customers. Another big area is going to be operational efficiency: how do we make processes faster in ways that result in better service to customers, beyond things like fraud and into optimizing onboarding journeys, customer support and so on. And then lastly, I think we’ll just continue our journey to democratize data and create a culture where more people are empowered to use data responsibly.

– Awesome. Well, that wraps it up for the panel discussion today. I just want to take a moment to really thank Doug, Mark and Jacques for spending time with us today and sharing their insights. And thank you so much to the audience for joining us today.

– [Moderator] That was great. Thank you to all our panelists and Junta for an awesome discussion. All right, that’s all we have today. Thank you again to our participants and thank you for joining our industry leadership forum. Can’t wait to see you again.

About Junta Nakai


Junta Nakai is the Global Industry Leader for Financial Services at Databricks where he is responsible for driving the adoption of the Unified Data Analytics Platform across Capital Markets, Banking/Payments, Insurers and Data Providers. Prior to joining Databricks, Junta spent 14 years at Goldman Sachs in New York, where he most recently served as the Head of Asia Pacific Sales in the Equities Division. He is a contributing Business and Technology writer for various publications and speaks frequently about digital transformation in Financial Services at conferences and media outlets around the world. Junta is bilingual in English and Japanese and holds a B.A. in Economics & International Studies from Northwestern University.