Navigating the Intricacies of Artificial Intelligence in Singapore’s Securities Sector


Major Applications of AI in Singapore’s Securities Industry

AI has emerged as a transformative force, reshaping the way the securities industry worldwide navigates the complexities of a dynamic financial landscape. Nowhere is this more evident than in Singapore, a city-state emerging as a global leader in the movement towards responsible integration of AI into its thriving securities industry.

As the financial services sector continues to evolve at a rapid pace, the nation’s most prominent industry players have recognised the transformative power of AI. Home-grown financial giants and the Singapore Exchange (SGX) alike have made significant strides in harnessing advanced algorithms and machine learning to drive their trading strategies, strengthen their fraud detection and anti-money laundering capabilities, and enhance the overall investor experience.

Notably, however, the embrace of AI extends far beyond the private sector. The central bank and integrated financial regulator, the Monetary Authority of Singapore (MAS), has also been vocal about leveraging this transformative technology to strengthen its oversight and regulatory functions.

This symbiotic relationship between industry innovators and vigilant regulators has positioned Singapore as a shining example of how AI can be responsibly integrated into the very fabric of a nation’s financial ecosystem.

In the following, we delve into the specific ways AI has become a versatile tool harnessed by both industry players and the nation’s financial regulators.

Adoption by Industry Players

Key industry players have been leveraging AI to enhance operational efficiency, improve customer experience, and tighten security measures. Briefly, the use of AI can be categorised into five (5) main (but non-exhaustive) areas:

  1. Algorithmic Trading and Quantitative Investment Strategies

    AI has transformed traditional trading methodologies in the realm of algorithmic trading and quantitative investment strategies.

    For instance, AI-driven algorithmic trading is one of the primary applications of AI in the securities industry, where AI-powered systems are used to conduct complex market data analysis, identify patterns and then make automated trading decisions at high speeds, enabling more efficient and data-driven investment strategies.

    In particular, hedge funds and investment firms have been increasingly relying on AI-driven quantitative investment strategies to identify market opportunities and execute trades more efficiently and profitably.
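By way of a heavily simplified illustration of automated signal generation, the sketch below emits buy/sell decisions when a fast moving average crosses a slow one. Real AI-driven strategies rely on far richer models and data; the function names and parameters here are hypothetical.

```python
def sma(prices, window):
    """Trailing simple moving average; None until enough observations."""
    return [
        sum(prices[i - window + 1 : i + 1]) / window if i >= window - 1 else None
        for i in range(len(prices))
    ]

def crossover_signals(prices, fast=2, slow=3):
    """Emit 'buy' or 'sell' when the fast average crosses the slow one."""
    f, s = sma(prices, fast), sma(prices, slow)
    signals = []
    for i in range(len(prices)):
        if i == 0 or None in (f[i], s[i], f[i - 1], s[i - 1]):
            signals.append("hold")  # not enough history yet
        elif f[i - 1] <= s[i - 1] and f[i] > s[i]:
            signals.append("buy")   # fast average crossed above the slow one
        elif f[i - 1] >= s[i - 1] and f[i] < s[i]:
            signals.append("sell")  # fast average crossed below the slow one
        else:
            signals.append("hold")
    return signals
```

An actual deployment would replace the crossover rule with a trained model and route its signals through risk controls before execution.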

  2. Fraud Detection and Anti-money Laundering

    Use of AI has strengthened the industry’s defences by facilitating the detection and prevention of financial crimes, such as fraud and money-laundering.

    Through the use of AI-driven pattern recognition solutions and the deployment of anomaly-detection algorithms, financial institutions are better able to identify suspicious activities and transactions. Unlike traditional methods, AI-powered solutions are able to sift through complex, voluminous transactions, analyse the data and transaction patterns to identify anomalies and suspicious activity, and thereafter flag potentially fraudulent activities with greater accuracy and speed than ever before.

    For instance, some Singapore financial institutions employ AI to reduce false positives and prioritise alerts in fraud detection, enabling their analysts to focus on higher-risk activities.
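A minimal sketch of the anomaly-detection idea, using a simple z-score rule in place of the proprietary ML models such institutions actually deploy. All names here are hypothetical, and prioritising alerts by score mirrors the false-positive reduction described above.

```python
import statistics

def flag_transactions(amounts, threshold=3.0):
    """Score each transaction by how many standard deviations it sits
    from the mean, and flag those beyond the threshold for review."""
    mean = statistics.fmean(amounts)
    stdev = statistics.pstdev(amounts)
    if stdev == 0:
        return []  # no variation, nothing anomalous to flag
    alerts = [
        {"index": i, "amount": a, "score": abs(a - mean) / stdev}
        for i, a in enumerate(amounts)
        if abs(a - mean) / stdev > threshold
    ]
    # Highest-scoring alerts first, so analysts see the riskiest cases.
    return sorted(alerts, key=lambda x: x["score"], reverse=True)
```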

  3. Robo-advisory and Automated Portfolio Management

    Singapore has also seen a rise in AI-driven robo-advisory services. Through the use of AI and ML algorithms, financial institutions are able to provide personalised investment recommendations and automated portfolio management, delivering cost-effective and scalable advisory services to a wider range of investors (i.e., the general public).

    Some local banks, for example, integrate AI into their broader investment management offerings, leveraging ML to provide retail investors with personalised robo-advisory services, tailored investment recommendations and automated portfolio rebalancing.
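The automated rebalancing step can be sketched as straightforward arithmetic over target weights; actual robo-advisory engines layer personalisation and ML on top, and the names below are illustrative only.

```python
def rebalance(holdings, prices, target_weights):
    """Return the share adjustments (positive = buy, negative = sell)
    needed to move a portfolio back to its target allocation."""
    total = sum(holdings[asset] * prices[asset] for asset in holdings)
    trades = {}
    for asset, weight in target_weights.items():
        target_value = total * weight
        current_value = holdings.get(asset, 0) * prices[asset]
        trades[asset] = (target_value - current_value) / prices[asset]
    return trades
```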

  4. Customer Experience: Chatbots and Virtual Assistants

    Businesses across all industries, including the securities industry, have begun to enhance their customer service experience through the deployment of AI-powered chatbots and virtual assistants that provide personalised 24/7 support and information to customers.

    The use of AI-powered chatbots and virtual assistants not only streamlines interactions between businesses and their customers, but also allows businesses to offer customers assistance at a level of efficiency and personalisation that was previously unattainable.
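A toy sketch of keyword-based intent matching conveys the basic mechanic of such assistants; production chatbots use trained language models rather than rules like these, and every name and canned answer here is invented.

```python
# Hypothetical intent table; real assistants learn intents from data.
INTENTS = {
    "balance": "Your account balance is available under 'Portfolio'.",
    "fees": "Our fee schedule is listed on the pricing page.",
    "hours": "Support is available 24/7 through this assistant.",
}

def reply(message):
    """Return a canned answer for the first known intent found in the message,
    falling back to a human handoff when nothing matches."""
    text = message.lower()
    for keyword, answer in INTENTS.items():
        if keyword in text:
            return answer
    return "Let me connect you with a human agent."
```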

  5. Risk Analysis and Compliance Monitoring

    Securities industry firms have also increasingly begun to utilise AI-powered solutions to analyse complex and vast data sets more efficiently, allowing them to assess risk exposures and monitor compliance with regulatory requirements with greater finesse.

    In some cases, certain AI solutions can also provide real-time insights and predictive analysis to empower firms to navigate the complex regulatory landscape and mitigate potential risks proactively.
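As a minimal sketch of the real-time compliance monitoring described above, the snippet below compares current exposures against preset limits and reports utilisation and breaches. The category names and limits are invented; a real system would feed this from live position data and far richer risk models.

```python
def check_limits(exposures, limits):
    """Compare current risk exposures against regulatory limits and
    report utilisation plus any breaches."""
    report = {}
    for category, limit in limits.items():
        exposure = exposures.get(category, 0.0)  # unreported category = zero exposure
        report[category] = {
            "utilisation": exposure / limit,
            "breach": exposure > limit,
        }
    return report
```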

Adoption by Regulators

Industry players are not the only parties taking advantage of the revolutionary benefits of machine learning (a type of AI referred to loosely as AIML): regulators, too, have been using it to expand their toolkit and enhance their supervisory and enforcement resources.

In his written reply to Parliamentary Questions on the use of artificial intelligence in the supervision of financial institutions, Lawrence Wong (then Deputy Prime Minister and Minister for Finance, and Chairman of the MAS) noted that the deployment of AIML-powered analytics has enabled the MAS to identify emerging risks, monitor market activities and ensure compliance with industry standards with unprecedented speed and precision.

In particular, he illustrated this with two (2) broad areas that have yielded meaningful results:

  1. Enhanced Ability to Identify Emerging Risks – By developing tools using ML, the MAS has improved its risk targeting for supervisory or enforcement action. The ML model not only helps enforcement officers identify and prioritise potential market collusion or manipulation for investigation, but also assists supervisors in identifying financial advisory representatives who may present higher risks of exhibiting poor conduct. This in turn allows them to prioritise financial institutions that require deeper supervisory engagement.
  2. Automation of Tasks – The MAS also applies natural language processing (NLP) to help supervisors work more efficiently by flagging issues for attention without the need to manually trawl through voluminous textual data. NLP is also used to scan social media and reports for news and developments that may warrant supervisory attention.
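The flagging idea in the second area can be sketched with simple keyword matching. The MAS's actual NLP models are not public; this watch-list and these function names are invented purely for illustration.

```python
import re

# Hypothetical watch-list of terms that might warrant supervisory attention.
RISK_TERMS = {"fraud", "default", "insolvency", "manipulation", "breach"}

def flag_documents(documents, min_hits=1):
    """Surface documents mentioning watch-list terms, so supervisors
    need not trawl through every report manually."""
    flagged = []
    for doc_id, text in documents.items():
        words = set(re.findall(r"[a-z]+", text.lower()))
        hits = words & RISK_TERMS
        if len(hits) >= min_hits:
            flagged.append((doc_id, sorted(hits)))
    return flagged
```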

The MAS also employs advanced data analytics to identify networks of suspicious activity in the financial system, which may indicate risks of money laundering, terrorism financing or other financial crimes. The regulator anticipates that combining such techniques with more powerful AIML tools will soon help the MAS sift out high-risk networks and transaction patterns with greater efficacy.
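Network detection of this kind can be sketched as finding connected components in a transfer graph and surfacing those that touch known-suspicious accounts. The real analytics are far more sophisticated, and the names below are hypothetical.

```python
from collections import defaultdict

def suspicious_networks(transfers, seed_accounts):
    """Group accounts into connected components of the transfer graph and
    return those components containing a known-suspicious seed account."""
    graph = defaultdict(set)
    for src, dst in transfers:
        graph[src].add(dst)
        graph[dst].add(src)  # treat transfers as undirected links
    seen, networks = set(), []
    for start in graph:
        if start in seen:
            continue
        component, stack = set(), [start]
        while stack:  # depth-first traversal of one component
            node = stack.pop()
            if node in component:
                continue
            component.add(node)
            stack.extend(graph[node] - component)
        seen |= component
        if component & set(seed_accounts):
            networks.append(component)
    return networks
```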

Existing Regulatory Framework in the Securities Industry

Under the existing regulatory framework for the securities industry, the key regulator is the MAS. As central bank and integrated financial regulator, the MAS is responsible for developing and enforcing regulations to supervise the nation’s banking, capital markets, insurance and payment sectors (loosely referred to here as “financial entities”). It establishes rules for such financial entities, which are implemented through legislation, regulations, directions and notices.

The main sources of legislation forming the foundation of Singapore’s securities regulatory framework are:

  • Banking Act 1970 – governs deposit-taking institutions, including full banks, wholesale banks, merchant banks, and finance companies
  • Financial Advisers Act 2001 – regulates financial advisers
  • Insurance Act 1966 – addresses insurance companies and insurance brokers
  • Securities and Futures Act 2001 – provides comprehensive regulation for capital markets entities, including fund managers, REIT managers, corporate finance advisers, trustees, dealers, credit rating agencies, and financial advisers
  • Payment Services Act 2019 – regulates payment service providers and payment systems

While none of these statutes expressly or specifically addresses the use of AI in its respective industry, they will apply alongside other AI-specific guidelines (as discussed in greater detail below).

Regulatory Framework on Use of AI in Singapore

When it comes to AI regulation, two (2) main regulatory approaches have been emerging globally. On one side, some jurisdictions (e.g., the European Union through its AI Act) have favoured a comprehensive approach: creating a new regulator, prohibiting certain uses of AI (e.g., social scoring), imposing strict obligations on providers and deployers of high-risk AI systems, setting specific requirements for general-purpose AI models, and imposing more limited obligations on other AI systems. On the other hand, Singapore has been among those favouring a more flexible, guideline-based strategy. In particular, Singapore has so far taken a fairly sectoral approach towards AI governance, which shows many similarities with the regulator-led approach proposed by the UK Government in its AI White Paper.

A. General AI Governance Regulation

The Info-comm Media Development Authority (IMDA) and the Personal Data Protection Commission (PDPC) have been the most active regulators in AI governance regulation so far. 

Briefly, the IMDA is a statutory board that develops and regulates the info-communications and media sectors, while the PDPC serves as the national data privacy authority of Singapore, ensuring protection of personal data and data privacy.

Some key guidelines and initiatives launched on AI governance by the IMDA and PDPC include:

  • Model AI Governance Framework

    This Model AI Governance Framework was developed by the PDPC and IMDA to provide private sector organisations with readily implementable (but voluntary) guidance on the responsible and ethical deployment of AI solutions. Its guiding principles focus on ensuring AI systems are (a) fair, explainable and transparent; and (b) human-centric.

    Additionally, it focuses on four (4) key areas:

    1. Internal Governance Structures and Measures: organisations should ensure there are clear roles and responsibilities in place for the ethical deployment of AI, as well as risk management and internal control strategies.
    2. Determining AI Decision-making Models: organisations should consider the risks of using a particular AI model based on the probability and severity of harm, and determine what degree of human oversight would be appropriate based on the expected probability and severity of harm.
    3. Operations Management: organisations should take steps to understand the lineage and provenance of data, the quality of their data, and transparency of the algorithms chosen.
    4. Stakeholder Interaction and Communication: organisations should take steps to build trust and maintain open relationships with individuals regarding use of AI, including steps such as general disclosure, increased transparency, policy explanations, and careful design of human-AI interfaces.

    Although not developed solely with the securities industry in mind, these guidelines are encouraged across all industries.

  • AI Verify

    The AI Verify Framework, developed by the IMDA, tests AI systems against internationally recognised AI ethics principles, promoting transparency and accountability. The framework consists of 11 such principles.

    The 11 governance principles are: transparency; explainability; repeatability/reproducibility; safety; security; robustness; fairness; data governance; accountability; human agency and oversight; and inclusive growth, societal and environmental well-being.

    The Minister for Communications and Information announced the launch of the AI Verify Foundation (AIVF) in mid-2023, which is intended to support the use of AI Verify. While the IMDA indicated that it did not, at that time, intend to regulate the use of AI solutions, it did suggest the possibility of introducing regulations in future.

  • Proposed/Draft Model of AI Governance Framework for Generative AI

    Recognising increasing concerns with the use of generative AI (as opposed to traditional AI, which the existing Model AI Governance Framework was tailored towards), it was announced on 16 January 2024 that the AIVF and IMDA had developed a draft Model AI Governance Framework for Generative AI, intended to expand on the existing Model AI Governance Framework.

    Briefly, the proposed framework identifies nine (9) dimensions to support fostering a trusted AI ecosystem:

    1. Accountability – putting in place the right incentive structures for the different players in the AI system development life cycle to be responsible to end-users
    2. Data – ensuring data quality and addressing potentially contentious training data in a pragmatic way
    3. Trusted development and deployment – enhancing transparency around baseline safety and hygiene measures based on industry best practices in development, evaluation and disclosure
    4. Incident reporting – implementing an incident management system for timely notification, remediation and continuous improvement
    5. Testing and assurance – providing external validation and added trust through third-party testing, and developing common AI testing standards for consistency
    6. Security – addressing new threat vectors that arise through generative AI models
    7. Content provenance – providing transparency about where content comes from, as a useful signal for end-users
    8. Safety and alignment R&D – accelerating R&D through global cooperation among AI Safety Institutes to improve model alignment with human intention and values
    9. AI for public good – harnessing AI to benefit the public by democratising access, improving public sector adoption, upskilling workers and developing AI systems sustainably

  • Advisory Guidelines on Personal Data in AI Recommendation Systems

    As an independent statutory body enforcing Singapore’s Personal Data Protection Act 2012 (PDPA), the PDPC plays a vital role in overseeing the use of AI solutions within the securities industry. The backbone of Singapore’s data protection law, the PDPA is a consent-based framework that sets out fundamental regulations with regard to the collection, use and disclosure of personal data by all organisations, including financial entities in the securities industry.

    To provide further guidance, the PDPC published the “Advisory Guidelines on the Use of Personal Data in AI Recommendation and Decision Systems” on 1 March 2024 (Advisory Guidelines) to give organisations certainty on when they can use personal data to develop and deploy systems that embed ML models (AI Systems). The guidelines also reassure consumers about the use of their personal data in AI Systems, since such systems are typically used to make autonomous decisions or to assist a human decision-maker through recommendations and predictions.

    The Advisory Guidelines are not legally binding, but they demonstrate that the PDPC is likely to interpret and enforce the PDPA in a manner consistent with the principles and recommendations outlined therein.

    These guidelines typically encompass several key principles and best practices:

    1. Transparency and Explainability: AI Systems should operate transparently, such that organisations must be able to explain the decision-making process of their AI Systems.
    2. Fairness and Non-Discrimination: AI Systems must be designed and deployed in a manner that is fair and does not discriminate against any user group, ensuring that recommendations do not reinforce harmful biases or stereotypes.
    3. Human Oversight: There must be appropriate human oversight and intervention mechanisms for AI Systems.
    4. Accuracy: Efforts should be made to ensure that personal data used by AI Systems is accurate, complete and up-to-date. Incorrect or outdated information can lead to inaccurate recommendations, potentially causing inconvenience or harm to users.
    5. Consent and Notification: Organisations must comply with consent and notification obligations under the PDPA when using personal data for AI Systems. This includes obtaining individuals’ consent before collecting and using their data for recommendations, as well as providing them with options to opt out, or to customise how their data is used.
    6. Security: Adequate security measures should be in place to protect personal data from unauthorised access, disclosure, alteration and destruction. This includes technical and organisational measures to ensure the integrity and confidentiality of data.
    7. Accountability: Organisations deploying AI Systems should be accountable for their compliance with these guidelines. This includes implementing appropriate governance structures, conducting impact assessments, and having mechanisms in place for addressing any concerns or grievances from users regarding the recommendations provided.
    8. Other Data Protection Considerations: Organisations should implement appropriate controls when handling personal data, such as data minimisation, pseudonymisation and de-identification.

B. Sectoral AI Governance Regulation – Financial Services Industry

The MAS was the first sectoral regulator to take action on AI governance regulation, and has since issued its own set of guidelines and toolkits specifically applicable to the securities industry:

  • FEAT Principles

    In 2018, the MAS issued the Principles to Promote Fairness, Ethics, Accountability and Transparency (FEAT) in the Use of Artificial Intelligence and Data Analytics (AIDA) in Singapore’s Financial Sector (the FEAT Principles). It should be noted that the FEAT Principles are not legally binding, but rather serve as a benchmark for best practices in the financial sector. They apply to financial institutions using AIDA to make, or assist in making, decisions affecting their customers or their business operations.

    The MAS strongly encourages financial institutions to adopt these guidelines as part of their internal risk management controls and governance frameworks related to AIDA. The FEAT Principles form part of Singapore’s broader initiative to foster innovation while ensuring that the development and application of AI in the financial sector are aligned with ethical principles.

  • Veritas Initiative

    The MAS announced the release of Veritas Toolkit version 2.0 – an open-source toolkit developed by an MAS-led consortium of 31 industry players – to enable the responsible use of AI in the securities industry. It aims to help financial institutions assess the financial products and services they intend to offer against the FEAT Principles, ensuring the responsible and ethical use of AIDA.
  • Draft GenAI Governance Framework

    Similar to the recent proposed Model AI Governance Framework for Generative AI led by the IMDA and PDPC, the MAS announced on 15 November 2023 the successful conclusion of phase one of Project MindForge, which seeks to develop a risk framework for the use of generative AI (GenAI) for the financial sector.

    Although the MAS recognises the efficiency and numerous benefits that GenAI can bring to the provision of financial products and services, it is similarly cautious of the risks (e.g., sophisticated cybercrime tactics, copyright infringement, data risk and biases).

    Project MindForge aims to develop a clear and concise framework on the responsible use of GenAI in the financial industry, and is supported by a consortium comprising several Singapore financial institutions.

    In phase one of Project MindForge, the consortium developed a comprehensive GenAI risk framework covering seven risk dimensions: (a) Accountability and Governance; (b) Monitoring and Stability; (c) Transparency and Explainability; (d) Fairness and Bias; (e) Legal and Regulatory; (f) Ethics and Impact; and (g) Cyber and Data Security. The whitepaper detailing the risk framework is expected to be published sometime this year.

As the use of AI continues to revolutionise the financial services industry worldwide, Singapore stands as a leading example of how to harmonise technological advancement with ethical considerations and societal trust.

The nuanced regulatory landscape is defined by key frameworks and guidelines – such as the Model AI Governance Framework and the FEAT Principles in the financial sector – which together set a global standard for the responsible and ethical implementation of AI solutions in the securities industry.

Complementing regulatory efforts by the MAS and IMDA, the PDPC has also kept up to speed in its data protection regulations to further solidify Singapore’s commitment to responsible AI innovation and deployment.

By ensuring that the collection, use and disclosure of personal data in AIDA-powered financial services adhere to strict privacy standards, the PDPC has reinforced the city-state’s position as a trusted hub for the development and deployment of cutting-edge solutions.

Challenges Ahead and Future Outlook

As the rapid integration of AI solutions continues to shape the future of the securities industry worldwide, financial institutions and Singapore regulators will face the on-going challenge of finding a delicate balance between fostering technological innovation, mitigating associated risks, safeguarding ethical and responsible practices, and maintaining consumer trust.

The MAS has daringly chosen to face these challenges head-on – by taking a proactive and collaborative approach to enabling responsible adoption of AI within the securities industry.

First, the MAS actively seeks partnerships with a broad spectrum of stakeholders within the financial and tech industries, as well as with regulatory entities, to collectively navigate the complexities of AI technology and its financial applications.

Another key element of its strategy is the implementation of regulatory sandboxes, which serve as test beds for new AI applications and business models under MAS supervision. These controlled environments allow for the cautious exploration of innovative technologies and operational models, limiting potential risks before wider adoption. Such initiatives not only facilitate the safe progression of AI innovations but also provide the MAS with valuable insights that inform on-going regulatory development.

As Singapore looks to the future, its approach positions it at the forefront of regulatory innovation in the financial sector, especially concerning AI. Through a combination of strategic collaboration and practical experimentation within regulatory sandboxes, Singapore aims to maintain its status as a progressive and secure environment for the adoption and growth of AI in finance.

This commitment underscores Singapore’s role as a visionary in crafting a regulatory framework that not only accommodates the rapid pace of technological change but also ensures the financial sector’s resilience and integrity.


