December 3rd, 2020
Today, the issue of corporate governance is critical. Good governance has moved far beyond investor protection. In a world where transparency and accountability are demanded by stakeholders of all types, good corporate governance has become vital for businesses.
Without good governance, key business activities are at higher risk of failing, whether they are focused on change, such as project management or transformation, or on business-as-usual operations, where improvements in efficiency and effectiveness are essential. Such failures create uncertainty for investors and other stakeholders.
The world is waking up to the idea that the implications of Artificial Intelligence for corporate governance and the business world cannot be ignored, with many businesses already incorporating AI technology into complex, difficult and important decision-making processes.
We asked big tech expert Jon Machtynger to share his insights on AI advances in corporate governance. Here are his views:
It’s worth breaking this topic up into two distinct areas. The first is where AI is used to help provide more rigour in the decision-making process. The second is ensuring that AI systems are robust and that the models applied are credible.
For the first area, AI, or more accurately machine learning, is being used to reduce the risk of doing business: for example, by using historical data to evaluate what a fraudulent transaction might look like, or to help businesses stay compliant with internal and external regulatory standards. Businesses must often balance speed and accuracy, so trained models allow them to achieve greater accuracy at faster speeds than before.
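As a minimal sketch of this first area, the idea of using historical data to spot what a fraudulent transaction might look like can be illustrated in Python. The customer history, amounts, and z-score rule below are illustrative assumptions standing in for a trained model, not a production fraud system:

```python
import statistics

def looks_fraudulent(history, amount, z_threshold=3.0):
    """Flag a transaction whose amount deviates sharply from a
    customer's historical spending pattern (a simple z-score rule,
    standing in for a trained model)."""
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    if stdev == 0:
        return amount != mean
    return abs(amount - mean) / stdev > z_threshold

# Hypothetical customer history: typical purchases of 20-60.
history = [25.0, 40.0, 32.0, 55.0, 28.0, 45.0, 38.0, 30.0]
print(looks_fraudulent(history, 42.0))   # → False (in line with past spending)
print(looks_fraudulent(history, 900.0))  # → True (far outside the pattern)
```

A real deployment would learn such thresholds from labelled historical transactions rather than hard-code them, which is precisely where trained models outperform fixed rules on both speed and accuracy.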
For the second area, much more effort is now being invested in ensuring that machine learning models are seen to be transparent and fair, and that the process of developing those models is transparent and reproducible. Without a transparent process, trust in the models is compromised. Suppose a model arrives at a decision, but the decision is not deemed accurate, or it doesn’t support corporate policy. How can the company determine what data, model, or algorithm led to that decision? How can it ensure that if it fed in the same data at a future time, the model would come up with the same decision? Reproducibility helps companies understand data biases and algorithmic flaws, and how to adapt to statistical drift, the tendency of the subject being studied to change over time. This operationalisation of the machine learning process is known as MLOps.
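The statistical drift mentioned above can be monitored with a simple check of this kind. The sketch below (plain Python; the reference window, recent window, and threshold are illustrative assumptions, not a production monitoring test) flags when recent observations have moved away from the data a model was trained on:

```python
import statistics

def drifted(reference, recent, threshold=2.0):
    """Compare recent observations against a reference window.
    Flags drift when the recent mean sits more than `threshold`
    standard errors away from the reference mean."""
    ref_mean = statistics.mean(reference)
    stderr = statistics.stdev(reference) / (len(recent) ** 0.5)
    if stderr == 0:
        return statistics.mean(recent) != ref_mean
    return abs(statistics.mean(recent) - ref_mean) / stderr > threshold

# Hypothetical monitoring data: the subject hovers around 10.0 ...
reference = [10.1, 9.8, 10.3, 9.9, 10.0, 10.2, 9.7, 10.1]
stable = [10.0, 9.9, 10.2, 10.1]
shifted = [12.5, 12.8, 12.1, 12.6]
print(drifted(reference, stable))   # → False (behaviour unchanged)
print(drifted(reference, shifted))  # → True (subject has drifted)
```

In an MLOps pipeline, a check like this would run continuously over live inputs and trigger retraining or review when drift is detected, so that reproducibility and accuracy are maintained over time.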
One area of concern that’s become more important in recent years is the perceived power of large tech companies. This relates mainly to social media platforms but also to hand-held devices, personal computers, operating systems, IoT devices, and search engines. The balance and potential conflict between service delivery and the right to privacy has become an issue, and many consumers are pushing back. The assumption that large companies could be relied on to do the ‘right’ thing has been shown to be unrealistic in several very public cases.
Many companies have responded to this concern by sharing their standards and demonstrating a public commitment to ethics. This hasn’t always worked. For example, Google published a code of ethics, but its AI ethics board lasted a week before it was disbanded. Microsoft’s AI, Ethics, and Effects in Engineering and Research (AETHER) board, on the other hand, is central to how the company works with AI capabilities, and to the types of client engagements and internal developments it will take on.
So, while automated governance is a capability that most organisations will take advantage of, there is a human dimension that affects how effective governance is, and how effective it is seen to be. Organisations must balance the speed of coming to a decision against the defensibility of that decision if they are called to account.
Board decisions should always be supported by data and its correct interpretation, and most are. However, there is an erroneous perception that where decision systems previously provided facts, letting the humans decide, AI is now deciding. This is not the fault of those who work in AI, but a clear reflection of the need for increased decision support in an increasingly complex and data-rich world. AI capabilities shouldn’t replace individual accountability for decisions, particularly if the AI systems used are not transparent about how they arrive at their recommendations. The fact that such systems carry out incredibly difficult tasks is often used as a justification for opaqueness in decision-making.
However, recent developments in the interpretability of how machine learning systems arrive at decisions may help. One of these is the open-source framework InterpretML, which allows data scientists (and by extension, board members and key decision-makers) to assess whether the models driving decisions are fair, unbiased, and transparent.
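The kind of output such interpretability tools produce can be illustrated with a toy example. The sketch below (plain Python; the feature names and weights are hypothetical, and real frameworks fit these weights from data) decomposes a linear model's recommendation into per-feature contributions, so a decision-maker can see which inputs drove it:

```python
def explain_prediction(weights, features):
    """Decompose a linear model's score into per-feature contributions,
    so a decision-maker can see which inputs drove the recommendation."""
    contributions = {name: weights[name] * value
                     for name, value in features.items()}
    return sum(contributions.values()), contributions

# Hypothetical credit-decision features and learned weights.
weights = {"income": 0.5, "debt_ratio": -2.0, "late_payments": -1.5}
features = {"income": 3.0, "debt_ratio": 0.4, "late_payments": 2.0}

score, contributions = explain_prediction(weights, features)
for name, c in sorted(contributions.items(), key=lambda kv: kv[1]):
    print(f"{name:>15}: {c:+.2f}")
print(f"{'total score':>15}: {score:+.2f}")
```

A breakdown like this is what lets a board defend a model-assisted decision: rather than a bare score, it shows, for instance, that a history of late payments contributed most strongly to a negative recommendation.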
The use of AI in corporate governance is becoming a necessity and will become an increasing feature of data-driven boardroom decision-making. As business becomes increasingly complex and fast-moving, AI will become essential in improving prediction of the outcomes of decisions, in strategy, auditing, logistics, human resources, and other areas.
You heard it here, from an expert. To find out more about how you can develop a competitive advantage by integrating artificial intelligence into your business’ decision-making and corporate governance, get in touch with us today.