Fairness principles for artificial intelligence and data analytics




Date Published: 26 March 2021 

Authors: Bill Jamieson and Lai Zheng Yong.


Following the conclusion of Phase One of the Veritas initiative, the Veritas Consortium (the “Consortium”) published two whitepapers on 6 January 2021 detailing the Fairness, Ethics, Accountability and Transparency (“FEAT”) fairness assessment methodology (the “Methodology”) and its application in two use cases.[1] This article provides an update on Singapore’s fairness framework for the adoption of artificial intelligence in finance.



Artificial intelligence and data analytics (“AIDA”) technology is increasingly employed for its ability to optimise decision-making processes. AIDA removes human decision-making as a variable and replaces it with a data-driven approach. The adoption of AIDA by financial services institutions (“FSIs”) has been observed in areas involving internal-process automation and risk management, in the form of credit scoring and fraud detection.[2]

In response to the plethora of risks associated with the adoption of AIDA in finance, regulators across the globe have developed their own guidelines to address what they identify as the major risk categories. In a research study of 36 guidelines on ethics and principles for artificial intelligence, the team at the Berkman Klein Center found the theme of “fairness and non-discrimination” to be featured in all of the guidelines studied, the Monetary Authority of Singapore’s (“MAS”) FEAT principles being one of them.[3]


Fairness of AIDA

The effectiveness of artificial intelligence is fundamentally predicated on the data it analyses. It follows that AIDA technology is limited by both latent biases within the data and the algorithmic perpetuation of the same.[4] To counter such risks, it is essential to identify the context of the data being utilised and have an understanding of how such data is relevant to the end-product.

Context is of particular significance, as the abovementioned latent biases can impede the system’s ability to process the data. Such latent bias can be seen in the following example:

“if one obtusely inputs white-collar professional labour data from the 1940s to the 1970s into an artificial intelligence system to predict what demographics of individuals would be the most successful applicants for white-collar professions, the suggestion would likely be white males of a certain age.”[5]


Insofar as we accept that data may always contain some form of bias, extra precaution must be taken when handling the end-product, and appropriate adjustments made to mitigate such biases. Such adjustments are necessary not only to improve the accuracy of the end-product, but also to incorporate a human assessment of the ethics, morality and social acceptability of the end-product into the decision-making process.[6]

Having highlighted concerns over the fairness of AIDA in decision-making, we assess the findings put forth by the Consortium in the following sections.


Fairness Principles

In Singapore, MAS has published a set of principles governing FSIs’ use of AIDA. The Fairness Principles form the tenets of the Consortium’s Methodology, and their application keeps AIDA’s decision-making process aligned with the overarching business and fairness objectives.

The four Fairness Principles are as follows[7]:

F1 – Individuals or groups of individuals are not systematically disadvantaged through AIDA-driven decisions, unless these decisions can be justified 

F2 – Use of personal attributes as input factors for AIDA-driven decisions is justified

F3 – Data and models used for AIDA-driven decisions are regularly reviewed and validated for accuracy and relevance, and to minimise unintentional bias

F4 – AIDA-driven decisions are regularly reviewed so that models behave as designed and intended


The Methodology

The Methodology consists of five steps:

(A) describe system objectives and context;

(B) examine data and models for unintended bias;

(C) measure disadvantage;

(D) justify the use of personal attributes; and

(E) examine system monitoring and review.[8]


Steps A, B and C invite the assessor to establish both the business and fairness objectives of the system, which set the benchmark against which the system’s fairness and potential trade-offs are measured. In HSBC’s simulated case study on the marketing of unsecured loans, consideration was given to the potential harms and benefits of marketing intervention for the AIDA-selected individuals.[9] Historically, foreign nationals have had a lower rate of loan application approval, and the study noted the potential harm of further disadvantaging foreign nationals where such historical data is utilised.[10] By identifying latent bias at an early stage, FSIs are able to introduce mitigating mechanisms, such as lifting the threshold for foreign nationals, to counter the bias present within the data.[11]
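The disadvantage measurement and threshold adjustment described above can be illustrated with a minimal, hypothetical sketch. All scores, group names, cut-offs and tolerances below are invented for illustration; neither the Methodology nor the HSBC study prescribes any particular code:

```python
# Illustrative sketch only: measuring group disadvantage and applying a
# group-specific threshold adjustment. All data and cut-offs are
# hypothetical assumptions, not figures from the Veritas case studies.

def approval_rate(scores, threshold):
    """Share of applicants whose model score clears the threshold."""
    return sum(s >= threshold for s in scores) / len(scores)

def disadvantage_ratio(rate_group, rate_reference):
    """Ratio of a group's approval rate to the reference group's.
    Values well below 1.0 signal a systematic disadvantage (cf. F1)."""
    return rate_group / rate_reference

# Hypothetical model scores for two applicant groups
citizens = [0.72, 0.81, 0.65, 0.90, 0.77]
foreign_nationals = [0.58, 0.74, 0.61, 0.69, 0.55]

base_threshold = 0.70
r_cit = approval_rate(citizens, base_threshold)
r_fn = approval_rate(foreign_nationals, base_threshold)

# If the disparity exceeds an agreed tolerance, relax the cut-off for the
# disadvantaged group ("lifting the threshold" in their favour is one
# possible reading of the mitigation described in the text).
if disadvantage_ratio(r_fn, r_cit) < 0.8:
    adjusted_threshold = base_threshold - 0.05
else:
    adjusted_threshold = base_threshold
```

The choice of tolerance (here 0.8) and the size of the adjustment are policy decisions for the FSI; the code merely shows where such parameters would sit in an assessment.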

The concept of fairness must not be regarded as being blind to personal attributes. A gender- or racially-blind algorithm can widen any pre-existing disparity, and intervention may be necessary to promote fairness. It was observed in HSBC’s study that a higher loan application rejection rate for foreign nationals would materialise if the nationality of the applicant was not taken into account by the system.[12] Such inclusion of personal attributes was justifiable to ensure that the system meets its intended objectives set out in step A and satisfies Fairness Principles F1 and F2.

Lastly, the Methodology calls for ongoing monitoring of the system, in accordance with Fairness Principles F3 and F4. HSBC hypothesised that such monitoring can be implemented by conducting an analysis before a campaign is initiated, to prevent a significant shift in the parameters of the system; monitoring the system’s output during the campaign; and having the senior management team review the end-result of the campaign to ensure that the system meets the established objectives.[13] To keep humans in the loop with the operations of the AIDA technology, it has been suggested that such an accountability framework should be built upon existing infrastructure.[14] In Singapore, this may take the form of extending the scope and responsibilities of senior managers in FSIs under the MAS-issued Proposed Guidelines on Individual Accountability and Conduct (the “IAC Proposed Guidelines”)[15] to incorporate responsibility over the day-to-day operations of the AIDA technology.[16]
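The monitoring step can likewise be sketched in simplified form. The drift check, tolerance and decision data below are hypothetical assumptions used only to show the shape of a pre-campaign baseline versus in-campaign comparison:

```python
# Illustrative sketch of the ongoing-monitoring step (F3/F4).
# The tolerance and decision data are hypothetical assumptions,
# not taken from the HSBC case study.

def rate(decisions):
    """Fraction of positive (e.g. 'market to this individual') decisions."""
    return sum(decisions) / len(decisions)

def drift_alert(baseline_decisions, live_decisions, tolerance=0.10):
    """Flag the campaign for review when the live positive-decision rate
    moves more than `tolerance` away from the pre-campaign baseline,
    indicating a significant shift in system behaviour."""
    return abs(rate(live_decisions) - rate(baseline_decisions)) > tolerance

baseline = [1, 0, 1, 1, 0, 1, 0, 1]   # pre-campaign analysis outputs
live = [1, 0, 0, 0, 0, 1, 0, 0]       # outputs observed mid-campaign

# A triggered alert would be escalated, e.g. to senior management review
needs_review = drift_alert(baseline, live)
```

In practice an FSI would monitor richer statistics than a single rate, but the pattern is the same: fix a baseline before launch, compare live outputs against it, and route breaches to a human reviewer.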



We note that the Methodology is principles-based and does not prescribe mandatory responsibilities or regulatory obligations that FSIs must comply with. It remains to be seen how the Fairness Principles will fare beyond simulated studies. Moving forward, Phase Two of the Veritas Initiative will focus on the development of the ethics, accountability and transparency assessment methodology.


Disclaimer: This update is provided to you for general information and should not be relied upon as legal advice. The editor and the contributing authors do not guarantee the accuracy of the contents and expressly disclaim any and all liability to any person in respect of the consequences of anything done or permitted to be done or omitted to be done wholly or partly in reliance upon the whole or any part of the contents.


[1] Monetary Authority of Singapore, Media Release, 6 January 2021

[2] PYMNTS, “AI Innovation Playbook”, June 2019, [12]-[13]; and Zetzsche, Dirk Andreas and Arner, Douglas W. and Buckley, Ross P. and Tang, Brian, “Artificial Intelligence in Finance: Putting the Human in the Loop” (February 1, 2020), [11]

[3] Fjeld, Jessica, Nele Achten, Hannah Hilligoss, Adam Nagy, and Madhulika Srikumar. “Principled Artificial Intelligence: Mapping Consensus in Ethical and Rights-based Approaches to Principles for AI.” Berkman Klein Center for Internet & Society, 2020, [5]

[4] Tom C.W. Lin “Artificial Intelligence, Finance, and the Law”, [536]-[538]

[5] Tom C.W. Lin “Artificial Intelligence, Finance, and the Law”, [536]-[537]

[6] Tom C.W. Lin “Artificial Intelligence, Finance, and the Law”, [537]

[7] Veritas Consortium, Veritas Document 1 – FEAT Fairness Principles Assessment Methodology, [5]

[8] Veritas Consortium, Veritas Document 1 – FEAT Fairness Principles Assessment Methodology, [8]

[9] Veritas Consortium, Veritas Document 2 – FEAT Fairness Principles Assessment Case Studies, [22]

[10] Veritas Consortium, Veritas Document 2 – FEAT Fairness Principles Assessment Case Studies, [22]

[11] Veritas Consortium, Veritas Document 2 – FEAT Fairness Principles Assessment Case Studies, [32]-[34]

[12] Veritas Consortium, Veritas Document 2 – FEAT Fairness Principles Assessment Case Studies, [38]

[13] Veritas Consortium, Veritas Document 2 – FEAT Fairness Principles Assessment Case Studies, [44]

[14] Zetzsche, Dirk Andreas and Arner, Douglas W. and Buckley, Ross P. and Tang, Brian, “Artificial Intelligence in Finance: Putting the Human in the Loop” (February 1, 2020), [44]

[15] Note that the IAC Proposed Guidelines were issued on 10 September 2020 and will take effect from 10 September 2021

[16] Zetzsche, Dirk Andreas and Arner, Douglas W. and Buckley, Ross P. and Tang, Brian, “Artificial Intelligence in Finance: Putting the Human in the Loop” (February 1, 2020), [44]

Bill Jamieson is a Partner at CNPLaw LLP. Bill is an English lawyer who is also registered to practise Singapore law in the areas of corporate law, banking and finance and securities laws. He enjoys working in the diverse and dynamic Asian market and helping his clients to achieve their goals.

Bill’s practice focuses on corporate financing transactions, investment funds, mergers and acquisitions, private equity, and employment law matters. His experience includes 10 years in the City of London and over 20 years in Asia. Before joining CNP, Bill was a partner in a well-known international law firm. He is a recommended lawyer for Corporate and M&A, Banking and Finance, Investment Funds and Labour and Employment in Legal 500 Asia Pacific 2021. Bill is one of the firm’s contacts for Interlaw, a network of independent full-service corporate law firms ranked by Chambers and Partners in its highest category, “Elite”, amongst all global law firm networks.
