Like it or not, Generative AI is changing the way we do things, especially the way we work. While it will bring many new opportunities, it will also bring new potential problems. As part of good governance practice, board members should act now to ensure they are meeting their obligations. Set out below are some key steps, though specific measures will vary from industry to industry.

AI is appearing everywhere. Even Microsoft’s basic text editor now includes AI in its latest version. This ubiquity of AI requires a comprehensive review of existing governance practices. While this could result in stand-alone solutions, ideally the aim should be to integrate with existing systems where practicable.

Accountability

One of the remarkable features of AI is its ability to produce results that can be difficult to distinguish from work done by humans. However, AI can break the link between the production of work and the responsibility for that work. Systems of accountability should be checked and, where necessary, rules for allocating responsibility drawn up or revised.

AI can “hallucinate”: it fabricates output to fill gaps, often resulting in the dangerous situation of output that is wrong but still very convincing. Whatever the AI does, someone must be responsible for it. This includes auditability: it should be possible to identify which output is the result of AI. Anyone responsible for AI-assisted work should be trained and knowledgeable in both the area of work and the AI technology used. They don’t need to be experts in AI systems, but they do need to understand the issues and risks associated with the underlying technology. Systems also need to be in place to monitor the quality of AI-assisted work.

Risk assessment

Ensure your existing risk management systems account for relevant developments in AI, and conduct a risk assessment before any AI system is introduced.

Security, Confidentiality and Privacy

Integration of AI introduces new security risks. Currently, the main risks relate to privacy, personal information, and data security. Although best practice is to anonymise data in the dataset, some AI systems will contain personal information, including sensitive information, and new methods of attacking those systems continue to emerge.

Most organisations will have legal obligations regarding that data. You might find, for example, that the introduction of AI means your privacy policy or collection statements no longer reflect your actual practice, potentially putting you in breach of the Australian Privacy Principles and therefore the Privacy Act. Despite efforts to protect data, there are some circumstances in which training data can be retrieved by an attacker.

In addition to reviewing existing privacy policies and procedures, data management and protection plans should be revisited and updated to reflect the use of AI. Contractual restrictions on the use of data, especially where it is “commercial in confidence”, should also be reviewed.

BYOD (Bring Your Own Device)

Even if you have everything locked down within your own systems, there’s still a strong chance that employees (and of course third parties in your supply chain) will use AI. AI is rapidly becoming a standard feature on high-end smartphones, in addition to being available through laptops and tablets. Businesses will have to decide for themselves the extent to which they wish to integrate AI, but they should be prepared to deal with employees turning to AI on their own devices for assistance. Many organisations will have experienced something similar when smartphones began to proliferate in the workplace many years ago.

Disclosure

You should decide when to require disclosure of the use of AI. You may decide that your organisation will require that use is always disclosed (though there may be a further decision regarding to whom that disclosure is made), or there may be reasons for making exceptions. This is a complex issue and there are compelling arguments on both sides. These decisions should be made after considering those arguments and with a solid grasp of the issues, particularly regarding: ethics; privacy; security and safety; trust and reputation; and respect for the right of stakeholders to make decisions about matters they regard as important.

Discrimination

Large Language Models (LLMs), currently the most common form of Generative AI, produce output that reflects their training data. While this has notoriously resulted in racist output on occasion, it can also have more subtle consequences. Depending on how it is used, AI can lead to both direct and indirect discrimination. Just as the quality and accuracy of AI-derived output should be checked the way a junior or inexperienced employee’s work might be checked, the use of AI will have to be regulated in ways previously thought to apply only to humans. And just as organisations will be held responsible for inaccuracies or errors made by AI, they risk being held accountable for any resulting discrimination.

The lines between IT and HR are starting to blur. The Australian Human Rights Commission produced guidance on AI and discrimination for the insurance industry over a year ago. As AI is more widely adopted, more industries will have to consider this issue.

Communication

Measures that an organisation decides to take to deal with AI should be clearly communicated to both internal and external stakeholders. The reputational benefits of ensuring AI is adopted in an organised, structured, and safe manner will be minimal if customers, vendors and suppliers, and the general public don’t know that you’re taking those measures.

Ensuring that these measures succeed requires integrating them into the organisation’s culture. This will require clear internal communications and probably training. There may be suitable policies already in place that can be updated to cover AI, or new policies may need to be developed. For example, while you may opt for a dedicated AI acceptable use policy for some measures, modifying existing policies, such as an ICT devices policy, may also be an option.

Actions for board or senior executives

Ensure you understand enough about generative AI and its actual or potential use in your organisation to be able to make the necessary decisions and delegate appropriately. You might find input from a trusted technical source, whether internal or external, useful for this.

Take steps to ensure decision makers, at all levels, are suitably informed about the implications of AI regarding any matters about which they make decisions or are likely to be held accountable.

You should consider seeking advice on the legislative, regulatory, and contractual obligations applicable to your organisation regarding AI, or the broader systems into which AI may be integrated.