To minimise risk to our clients and ourselves, it is essential that we have guardrails and guidelines in place for Artificial Intelligence (AI). They are designed to support us and help us grade across six important areas: Transparency and Accountability, Compliance and Regulation, Fairness and Equity, Bias and Risk Taking, Alignment with Business Goals, and Trust Building.

What is the name of the product, tool, app, or type of AI you are planning to use or experiment with?

(Check 'MSQ Radar' to see if we already use and approve this AI.)

1 Transparency and Accountability

Knowing the underlying mechanisms behind AI recommendations and outputs promotes transparency. This transparency, in turn, enables the company to be accountable for decisions made with AI. We understand that in most cases these mechanisms will be largely unknown, which can be a significant risk. We suggest searching the website or documentation of the product, tool, or API originator for any relevant details.

Do you understand how the product or code you are using arrives at its outputs and/or recommendations?

2 Compliance and Regulation

Data-usage policies should align with data protection and local privacy regulations such as the GDPR or HIPAA. MSQ businesses must ensure they comply with local laws to avoid legal repercussions. If the product, service, or API you are about to use does not have strong data-usage policies, you may inadvertently be supplying the provider with sensitive information, which we cannot afford to do.

Do you understand the data-usage policy of the company, tool, or process you are using?

3 Fairness and Equity

Human oversight is essential for evaluating the ethical implications of AI outputs and recommendations.

Working with the AI experts within MSQ can help identify and address potential biases, discrimination, or ethical concerns, which in turn promotes responsible and fair AI use. In AI, this is called 'Human-In-The-Loop'. It slows things down, but it can be the difference between a good and a bad final outcome.

Is there a validation step where human experts review and approve any AI-generated recommendations or actions before you, or the tool, acts on them or publishes the output?

4 Bias and Risk Taking

Recognising bias in AI outputs is the first step towards ensuring fairness and equity. Bias almost always comes from the data used to train the model. Unintentional bias can lead to discriminatory outcomes, which can harm individuals or groups, erode trust, and damage a company's reputation.

Do you understand that the output from any AI tool or process might contain unintentional bias, and are you willing to take that risk, knowing that you could be held responsible if there is a legal challenge?

5 Alignment with Business Goals

Outcomes and KPIs ensure that AI tools are aligned with the overarching goals and objectives of your company, the group, or the client. They help direct AI efforts towards achieving specific outcomes that contribute to the success of the product, process, service, or sale.

Do you have a clear set of KPIs or objectives for the tool or process you are about to use?

6 Trust Building

Seeking permission to use AI on an audience builds trust between businesses and their clients or audiences. It also demonstrates transparency and respect for their choices, fostering a positive relationship.

Has the client or audience explicitly granted permission for this AI or automation process to be used?

Result

The result, as a total score out of 18, will be displayed here when you reach the end of the questions.