Decoding The Future
Decoding the Future, hosted by Fujitsu Uvance, dives into the transformative world of CX, data, and AI. Through conversations with top experts across Asia and Oceania, listeners explore groundbreaking trends like generative AI, vision AI, and data security. Discover how these innovations are reshaping industries like retail and healthcare and gain practical advice on leveraging technology to solve challenges, drive digital transformation, and stay competitive in today’s dynamic landscape.
Decoding The Future
EP. 16 Securing against AI Risks with CSA and IDC
AI is moving faster than most security teams can keep up with. As enterprises rush into Generative AI and Agentic systems, the risks are piling up, and many leaders still have no clear plan to manage them.
In this episode of Decoding the Future, Stanley Tsang from the Cyber Security Agency of Singapore (CSA, National Regulatory Body responsible for protecting Singapore's cyberspace) and Dr. Chris Marshall from IDC break down the real-world threats that come with scaling AI across the enterprise. From LLM vulnerabilities and guardrail bypasses to data sovereignty and AI-enabled cyberattacks, this conversation reveals what companies must fix before AI becomes unmanageable.
Thank you for listening!
Discover more content like this on Decoding the Future.
Learn more about Fujitsu's AI Solutions here.
Ker Yang: Welcome to Decoding the Future, a podcast where we explore the latest trends, challenges, and innovation in the world of technologies. I'm your host, Ker Yang, and today we have two special guests, Stanley from Cybersecurity Agency of Singapore, and Dr. Chris from IDC. Maybe do a quick introduction of yourself, Chris and Stanley.
Chris Marshall: Delighted to Ker Yang. My name's Chris Marshall. I lead IDC’s Research and AI and industries across Asia Pacific. And my background, I've spent most of my career mainly in financial services, often on the risk and security side. It's only the last 10 years, 15 years I've really been pushing on the AI side, and it's a delight to marry the two stories together in this conversation today.
Stanley Tsang: Stanley from Cybersecurity Agency. I'm the Distinguished Engineer and Senior Director from CSA. I look after the technology adoption, including AI, 5G, as well as architecture for Cybersecurity Agency. So glad to be here to actually share my thoughts. Thank you.
Ker Yang: Thank you. I think for both of you, maybe you are aware that AI has been the hottest topic in the market for very long.
And many of our customers are looking at how they can realize AI today in the enterprise space. With that, maybe Chris, would you like to share with us, what do you think are some of the security gaps as they scale AI towards production use cases? What is forthcoming in that?
Chris Marshall: Good question.
I think one thing that we have seen is that, the last three, four years since ChatGPT really kicked off has been a real headlong rush to innovate at all costs. Everybody's just tried to get as many interesting use cases. The board's been pressuring IT to get these things out there, usually functional use cases, but in some cases more complicated industry use cases too.
But what we see is that a lot of companies have used open source technologies for the most part to drive those things, and that's made a lot of sense. It's relatively cheap in terms of initial investments, and they've been able to deploy those with some success, it has to be said.
But now we're starting to notice as open source gets deployed, particularly in production, some of the limitations of open source is starting to become apparent. And I think that is more than anything else, perhaps driving the need for a more disciplined way of thinking about how, AI can be deployed at scale.
For example, we are seeing that some of the open source platforms do have issues, and it's certainly possible in many cases to use a relatively small number of malicious files to really corrupt the outputs you get from GenAI systems. So I think that's probably the biggest single thing.
But, I think it's all also worth pointing out that there's other issues that have really come to light and we've struggled to tell a story that leverages the innovation, the need to bring to market quickly, and at the same time, building in security from the ground up initially built into the underlying structure of how we deliver these systems at scale.
Stanley Tsang: Yeah. A hundred percent agree with Chris. I think that's exactly what we actually observe on how Singapore adopt AI. Everyone has been chasing the silver bullet and they want to adopt AI as much as possible. Sometimes they think of security as an after afterthought.
They spend so much effort to look at AI to deliver value. A lot of time we don't see they had actually put in any security thought.
Because of that, CSA actually created a security guideline and command guide back in 2024. We see it as an important set frame, to define the entire AI system lifecycle, what they need to watch out. How they can actually beef up their security control. We want to emphasize they should consider when we actually ever adopt an AI system, it’ll be security by design, security by default. And then they actually need to put in proper control. Cannot be when they go live, then think about how to actually adopt security.
So that is actually the key thing we do. And AI, like Chris said, it evolved so fast. So we released a document about one year ago. This year, just last week we announced that we got to do addendum for the agentic framework. We see people using agentic is even worse. Because we are not dealing with one attack surface. When we talk about different agents work together, possibly from different company and then the tech service that we expand. So we quickly actually come up with a guideline.
And then the importance actually not just we close the door by ourselves. Most importantly, we involve the industry. We do a lot of public consultation and then ask all the industries to actually chip in and hopefully we can come up with framework that’s secured and matured enough to actually help people adopt or implement AI security system.
Ker Yang: .
That's very interesting. I think, Stanley, from the regulator point of view, you would definitely agree that, security cannot be an afterthought. So we definitely can see that CSA is coming in strongly to support the whole industry to move, to adopt this AI solution with security built in from day one.
Stanley, if I may ask, what are the typical solutions that you see people in the industry are starting to focus on?
Stanley Tsang: Very good question. I think depends on the industry. So we see alot of the solution in Singapore are particularly focusing on LLM. In contrast to some other country, like China, they actually carry more on the industry use cases. It'll be a combination of the traditional ML AI.
So in Singapore, we do see people using AI, to try to solve of their business problems. I do see even in some of the government sector. For example, healthcare. They actually want to use AI system to help to improve the workflow and then even to see whether they can improve some of the medical use case.
If you ask me, for CSA or cybersecurity, we look at AI through three key lens. For us to run cybersecurity operations, I would like AI to help us to do more. Can we use AI to do red teaming, power analysis? Those certainly are areas we want to explore and see whether we can harvest the power of AI.
And then as a regulator, we also worry about how to help the company, government adopt AI security. Lastly, we also worry about if the defenders like are using AI to defend our ourselves, I'm pretty sure the bad guys will also want to use AI to do attack.
So we also worry about the AI-enabled threats. I think it's actually proven for the last one, two years, we are seeing more and more AI-enabled deepfakes, phishing and so on and so forth. So I think moving forward this will be scaled up. We need to be get ready to actually tackle this kind AI-enabled threat.
Chris Marshall: I'm curious, and I have a question for Ker Yang, and it's on the back of what Stanley said there. All of us are talking about what we should be doing, right? There're many things we should all be doing, but how practically are people doing? Are they actually succeeding in baking in security from day zero in their AI design activities?
Ker Yang: Oh, Chris, that's an interesting question. I think today, most people that we see when we work with our customers in the technology field, everybody's adopting Generative AI. Everybody is working with LLM. And the interesting thing is everybody's putting huge amount of data to let all these chatbots, agents have access to such kind of technology.
We are seeing that security is lagging in the sense that, for example, we look at how Gen AI have access to certain amount of data, but Gen AI is also susceptible to attacks. And these kinds of attacks can come in through a direct or indirect form. Of course, the direct form is looking at how they can hack through your application all the way to assessing your datastore database.
The indirect way is actually attacking your LLM, and making sure that impersonation happens and how you can retrieve or draw data from your LLM indirectly. For example, one of the key use cases that we typically mention in this aspect is, example, your HR software you have a chat bot. And that HR software actually have access to all your company's employees’ salary.
Imagine someone impersonating and asking the chat bot, could you tell me what's your CEO's salary? Of course, this could be blocked off, but in certain way, the person can also impersonate and do a kind of reverse engineering onto expense claim that the CEO has and gradually work out what the CEO’s salary could be. So, these are some of the ways that security is being attacked in the realm of AI and LLM technology deployment.
Stanley Tsang: I agree with what Ker Yang has shared. I think people need to be more aware about the LLM security. We need to actually do risk assessment to understand what's the risk factor?
You have multiple dimensions. I think, to me, AI is really a data problem. Your crown jewels is actually your data. So how can you make sure that you safeguard your training data or your output data, how you actually can make sure your model is safe. So you need to constantly look at what will be the possible gap.
Because to construct an AI solution, there're many forms and shapes. You can actually call an open API with Co-pilot, Gemini, and so on and so forth. And then you can also build an open source model. It all represents different levels of security concerns. So we need to understand what is actually the crown jewel you’re trying to protect? Spending more focus on protecting those, and then you need to do regular monitoring. Means, you put in guardrails, you need to actually validate how good is the guardrail.
I agree with you. Guardrail put in, it doesn't mean it will solve our problem. Because what we do, we actually spend a lot of time try to break the guardrails ourselves and you'll be surprised, even somewhere, big enterprise that I don't want to name. But, it's surprisingly not that difficult to bypass those guardrails and then eventually leak problems like leaking company data. Or people actually do data processing and things like that.
Knowing what’s the risk involved and then putting in the safeguard, doing a continuous vulnerability assessment monitoring, I think will be the key to adopt the AI system.
Ker Yang: I think data is just one important aspect, but as more and more LLM, GenAI actually get hooked onto workflows, where they're actually able to execute certain actions. This is where it gets tricky and, it gets really. dangerous, if security aspect is not taken care of.
Stanley Tsang: Based on your market research, how ready is the industry, or enterprise, are they ready for security for AI adoptions?
Chris Marshall: One of the things we see in the marketplace, quite frankly, is just a wide variation. It goes all the way from what I'd call opportunistic AI development, even shadow AI to some extent, where every team says, let's use one of these gen AI large language models to do something interesting and there's no control at all. And slowly, companies morph. Often larger companies tend to do it quicker than smaller ones. Slowly, they morph towards a more centralized, like a COE kind of model. AI development and AI operations management.
And then what I find interesting is, each of these approaches has got its own problems, right? If you're very close to the data, you're very quick to innovate. All good. If you centralize, everything takes a bit longer, a bit more complicated. On the other hand, you have standards and good practices.
Another problem, which I think is really interesting is that very often the COE does not have a risk focus. Their job is the development and the innovation. So they've got a mandate, usually from the CEO, to drive AI within the business.
And I think one issue we see increasingly is as AI goes into production, as agentic AI becomes more important because as you noted, you're integrating with lots of internal systems with agentic AI in a way that you weren't with Gen AI quite so much, it was more point to point. But when it gets to agentic and all, we start to have real consequences of AI use rather than just, I get a silly answer in my query.
What happens I think is that companies start to think about a larger role for the traditional risk teams or security teams. The CISO gets involved and says, hey, I've gotta worry about these AI engines that are being used in their various forms. And you have separation of duties between the COE, whose job is really to develop, maintain, manage the AI models, use cases across the enterprise. And the larger risk story. In financial services, we've long had a history of doing things like separation of duties. We say the guy who takes the risks is not the person who checks the risk.
And I think that's something we are just moving towards in the larger companies. We're not there for the most part, but with agentic AI, we start to have to worry about these things. And we have to start thinking about how do we build and, maybe it's in the traditional risk management team, maybe it's in the security team. It's not clear. Maybe it's IT risk or operational risk.
But again, that direction seems inexorable and that's what we're seeing just hinted at those areas. Those companies that are really doubling down on AI to actually drive business development within their organization rather than just do it as helping their HR team or doing narrow functional areas or whatever it is.
Stanley Tsang: Chris, you already pointed out, I agree. In the board conversation, people are focusing on wanting AI to deliver value. We actually have matured enough the risk management model. I would like the people to spend more time to figure out how they manage the AI risk. I fully agree when agentic comes, it's not just an internal issue. It could be we’re building an agentic with external company, you buy service maybe. So, all of a sudden, the entire framework becomes so complicated, you’ve got your internal risk, external reasons, and so on so forth.
It's a tricky issue. We want a company to think more about security risk. If you want to extend agentic use case in certain industry, safety also will be coming into the picture as well. With healthcare, if we want to actually use AI to replace some of the functional doctors, how sure can we actually say that the AI system can be trusted, will not hallucinate, and then give the wrong advice and so on and so forth? And then if we move AI to the AI physical, let's say upcoming the autonomous vehicle can we fully trust the driving to the car? What happens if the car got accident? Who should bare the responsibility?
I think particularly for this reason, CSA actually set up AI Security Centre of Excellence in Singapore. We think this is actually very big problem. And then we need to set in focus to bring in more people from the academia, industry to be involved.
How can we make the AI more secured when we extend the use case to the AI physical, we are ready. That's some of the things on top of our minds. Other than just LLM, in the longer run, we need to really have a mature framework policy to guide us to do these kind of adoptions.
Ker Yang: Thank you, Stanley. Maybe just a very controversial question to ask both of you. I find it very interesting. Today, we have many roles and responsibilities. We have the CTO. In some organization, there's a CDO, Chief Digital Officer. We also have Chief AI Officer these days. So, pertaining to accountability, do you think it lies with CTO, CDOs, Chief AI Officer, CAIO, or with CISO, in terms of securing AI solution?
Stanley Tsang: Yeah. Lemme give my take first. I think you're right. Because AI is moving so fast. Based on what you described, I see all forms and shapes in different companies. To me, who actually doing it is not very clear or well defined.
To me, more importantly, we need to recognize AI as a vehicle to help us drive innovation, bring value. On the other hand, we are also concerned about security, safety and other elements. So I think as long as it’s well-defined within organization, who would take the responsibility. So my advice for company, they need to set up some kind of an AI governance framework, well-defined who is actually doing what. Whether it's parked under the CTO, CIO, that's secondary. As long as we have a well-defined framework within the company, I think it's actually better off. And when question gets asked, it could be everyone’s problem or no one's problem, you lose the accountability.
That's the concern I have.
Ker Yang: Thank you.
Chris Marshall: Yeah, I think accountability is the key thing, and one thing I'm a bit wary of, and I see this in a lot of companies, they talk about a Chief AI Officer or something like that. And I think that's a mistake quite honestly, because ultimately, accountability stops with the CEO and the business.
I mean, if you've just hired somebody in order to be the fall guy if things go pear-shaped, you've clearly missed the point entirely, it seems to me. And I think more and more we see, especially with agentic AI, you'll see AI blur into virtually every aspect of every business in every form. And suddenly it just doesn't make any sense to talk about, somebody whose job it is to be responsible for AI.
Because, I'm always reminded, I mentioned this in the recent event we did, about the idea of a Chief Telephone Officer in America in 1900. All the big railway companies would have Chief Telephone Officers because they thought, telephone is a big thing. It's gonna change everything we do.
It did change everything we did. But the problem was, it was ubiquitous and it was just a hygiene fact. It became something part of everybody's job. So therefore, it didn't demand one person to be responsible for telephone. The same is gonna be true of AI. It's gonna be much more the case that if we do stupid business decisions, AI is gonna speed that process up. But still, it's the CEO who is ultimately on the hook for this. And anything else is an abdication of that responsibility. Frameworks have got exactly the way to support that argument.
Stanley Tsang: Yeah. Chris got your point; it resonated with me. Because, a few years back, when we talk about security, it’s a CISO problem. We know it will not work until you elevate to be a board conversation, become the CEO mandate.
I can see the equivalent for AI as well. If you’re using this to solve your business problem, it needs to be holding the CEO accountable because this one can generate a huge impact. You generate a lot of value, but at the same time, it can create a lot of risks as well.
So on governance, you need to be a top conversation within the board. I think CEO should be the right person to own the responsibility.
Ker Yang: Thank you. I think CEO is definitely the ultimate, accountable person, but from what I hear from both of you, security is everybody's problem. So everybody needs to own a piece of it to drive this, to make this successful.
Stanley Tsang: Chris, what's your view on how regulatory play a role in AI adoptions?
Chris Marshall: I'm always wary of answering your questions from a regulator. I think there's almost two types of regulations that we see everybody worrying about right now. One are, more traditional kinda regulations like that are usually quite clear, very strict. Data privacy, data security, data sovereignty in some cases where data is. And these are usually a set of rules that if you get it wrong, you're in trouble, you've got real skin in the game here. Then there's the AI, I won't call 'em regulations. They're more frameworks, policies, good advice.
And I think they’re both useful. But to be honest, I don't see that the market has quite taken on board the framework and the good practices aspect of regulations for AI. Things like explainability, transparency, all these good things and they are good things to do, make no mistake.
But there're many different competing frameworks across the world, and I think we struggle a little bit as corporate executives to figure out. Do I take the Singapore version or do I take the ISO version? Do I take the European AI Act? There’re many different spins on this.
As a result of that uncertainty, and especially with geopolitics that makes it all more complicated still, what people have tended to do is say, I'm just gonna focus on the big things, data privacy, data security, data sovereignty. In fact, I would argue the data sovereignty is the hot topic in the marketplace right now partly because China is spearheading the open source story, which makes sense. I suppose from their perspective. America is usually the source of alot of the big closed AI models. And then you've got a lot of parties in between that are somehow trying to navigate this complicated, I won't say battlefield, that's the wrong word, but certainly a conflicting set of agendas that these different players have.
Whether it's across the entire AI stack from hardware all the way to data, to models and everything else, there're pressures that are really caused by first of all, wanting to innovate fast. And second, to make sure we're not left in the lurch by potentially conflicts from data sovereignty issues.
Those are the big priorities from a regulatory perspective that companies are trying to grapple with. Obviously the security, the data privacy, these are the things that if you get it wrong, you’re in big trouble, frankly.
Stanley Tsang: I agree, so that's why we are not rushing to come out with some kind of regulatory framework for AI. We want to see it as an enabler. We want to provide guidance rather than straightaway implement some kind of policy. Policy is actually a very complicated topic because in order to figure out how to protect, how to educate, create something, enforce certain things, but yet not stop innovation is actually quite tricky.
So, we’re not rushing to do that. We actually want to create, from IMDA, they created the AI governance framework and AI verify. And for us, we lean forward to look at the security element, more from guidance perspective and hopefully it will have enough guidelines for Singapore-based companies to follow so they achieve the innovation, yet they do so in a safe and secure manner. I think that's one of the key views from Singapore government. That's how we see the AI problem in today's lens.
But moving forward, globally when we actually have a more mature framework properly, we can discuss, do we actually need a AI policy for certain sector. For example, for CSA plans, it's actually more on the CII. We want to have a more stringent requirement to make sure that CII system is more prudent compared to general enterprise. So those are some of the things to keep our policy teammates busy for the last one, two years.
Ker Yang: Thanks. I think data sovereignty is definitely one of the key issues and challenge that we need to address from the technology perspective. This is where we are seeing that, there's quite a couple of solutions that can address this.
For example, we have this private LLM model where you can deploy on premise, where your private data actually gets secured and it doesn't flow in as a feedback look back to the open model out there on the web. And with that, we can also look at how we can secure different agentic AI platform workflows, by limiting it off-prem, by limiting certain security access to certain workflows or action-based execution. With that, we will be able to dissect and segregate the type of access that the models will actually have integration to. And with that, we are looking at how we can closely collaborate in the industry, especially with those security practice, security consultancy firms, how are we going to drive all this technology to make sure that it's fully secured with the latest updates available and working with the government regulatory to keep updating the guidelines, the policies, and to look into how we can drive this together as one.
Stanley Tsang: Yeah, a hundred percent agree. Ker Yang. So like my CEO always say, cybersecurity is a team spot. We need everyone onboard to actually play a part. I think AI is a very good example. Why we need to do security right.
Otherwise, you may not get the objective you actually want. On one hand, you actually want innovation, but on the other hand, if you open up and then you cause a lot of the harm than the value you intend to get.
Ker Yang: I think today we have quite a fair bit of discussion. We talked quite a fair bit on what the market really sees AI deployment to the risk that AI brings forth.
And of course, we are looking at how we can work together where security is everybody’s problem. Right now, towards the end, let's end with key takeaways. Maybe what is the one thing that each of you would tell enterprise leaders to do now to strengthen AI security? Maybe start with you, Chris.
Chris Marshall: I suppose you have to start from where you are, that'd be my first point. You've gotta make sure if, depending on your technology profile, what sort of use cases you're doing. What sort of projects are you doing? Functional use cases? Are you doing industry use cases, are you mainly using predictive AI versus generative versus agentic AI?
These determine to some extent the threats and challenges that you will face from a security perspective. So I think some sort of gap analysis probably makes sense, just to give you a first clue as to what are the things that really are most important to worry about.
As you get more mature, as I mentioned a little bit before, I think it does make sense to have someone responsible, someone accountable for AI. What you'd call it is another issue, security, trust, responsible AI. You can call it lots of things. But I think having that person have an independence of the AI development process is really important, and I think that until you actually get to that point, you will always run the risk.
As again, anybody who's worked in a bank will know this. You always have the risk that you are the wolves are looking after the wolves and the sheep are looking after the sheep. You can't have that. You've gotta make sure there's a clear clarity about who can say this model is too dangerous, too risky, or the data issues are too complicated, or too demanding in terms of our current infrastructure or this use case is beyond us. Either beyond the technology, beyond our ability to implement. And I think you've gotta have someone in the organization whose job it is to say, this is not what we're gonna do.
Ker Yang: Thank you. What about you Stanley?
Stanley Tsang: Yeah, the key things I wanna call out is the visibility really. One sensible thing for enterprise to do is actually they need to have some sort of AI inventory, because if you don't do that, it expands so fast and then you got shadow AI that kind of problem.
So have a clear AI system inventory is actually very important. And then also understand each of the AI system, what’s the software build material. Supply chains definitely will be a key issue because no one actually build everything by himself under the hood. They have so many supply chains that they are dependent on. And then Chris already called out, if you are actually using a China model, you wanna make sure all the downstream dependency is actually safe. So you need to really to do more risk assessments. And then, I fully agree, appoint someone to actually be responsible from the technical point of view. That means a few people are very crucial for the AI system. Your data officer, your CISO, if you have an AI officer, these three parties need to work working very closely in order to really deliver the innovation, at the same time, secure your data and your crown jewel. That will be the advice I want to give.
Ker Yang: AI brings huge promise, but it’s success depends on trust. By combining sound technology practices, market insights, and thoughtful regulations, we can build AI systems that are secured and sustainable.
Chris, Stanley, thanks for the conversation and thank you to our listener for tuning into Decoding the future. Until next time!