
Trust is core to our purpose. That is why we focus on Responsible AI.

Our commitment is to advance safe, secure, and trustworthy AI.
AI brings limitless potential to push us forward as a society, but with great potential come great risks.

 

That is why we have trained Alice, a digital avatar, on Australian Government legislation and principles, to make it easier to understand what Responsible AI means in practice. Have a go, and let us know what you think!

Responsible Artificial Intelligence (Responsible AI) is an approach to developing, assessing, and deploying AI systems in a safe, trustworthy, and ethical way. AI systems are the product of many decisions made by those who develop and deploy them. From system purpose to how people interact with AI systems, Responsible AI can help proactively guide these decisions toward more beneficial and equitable outcomes. That means keeping people and their goals at the center of system design decisions and respecting enduring values like fairness, reliability, and transparency.

What are the risks when adopting AI?

 

When you use AI to support business-critical decisions based on sensitive data, you need to be sure that you understand what the AI is doing, and why. Is it making accurate, bias-aware decisions? Is it violating anyone’s privacy? Can you govern and monitor this powerful technology? Globally, organizations recognize the need for Responsible AI but are at different stages of the journey. While there is a huge opportunity to leverage AI within your business, there are known risks that you should be aware of.

Model inaccuracy

The model may make predictions that are incorrect and outside tolerable levels for a specific business problem.

Model drift

The model may originally produce reliable and accurate results; however, over time its performance starts to degrade. A minimal monitoring sketch follows the list of risks below.

Model hallucinations

Generative AI models may produce outputs that sound plausible but are fabricated, incorrect, or otherwise not desired.
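
Drift of this kind is typically caught by comparing live performance against a baseline measured at deployment. The sketch below is a minimal illustration, not part of any specific platform: it assumes you can log each prediction alongside the eventual outcome, and the baseline accuracy, window size, and alert threshold are made-up values you would tune for your own business problem.

    # Minimal drift-monitoring sketch (illustrative values only):
    # flag any point where rolling accuracy over recent predictions
    # falls more than max_drop below the deployment-time baseline.
    from collections import deque

    def rolling_accuracy_alerts(stream, baseline, window_size=50, max_drop=0.05):
        """Return (step, accuracy) for each step where the rolling window
        accuracy drops more than max_drop below the baseline."""
        window = deque(maxlen=window_size)
        alerts = []
        for step, (predicted, actual) in enumerate(stream):
            window.append(predicted == actual)
            if len(window) == window_size:
                current = sum(window) / window_size
                if baseline - current > max_drop:
                    alerts.append((step, current))
        return alerts

    # Simulated prediction/outcome pairs: accurate at first, degrading after step 100.
    stream = [(1, 1) if i < 100 or i % 3 else (1, 0) for i in range(200)]
    for step, acc in rolling_accuracy_alerts(stream, baseline=0.90):
        print(f"step {step}: window accuracy {acc:.0%} is below the drift threshold")

In practice the same pattern applies whatever metric matters for your problem: replace simple accuracy with a business-relevant measure, and route alerts to the people who can retrain or roll back the model.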

How can you effectively mitigate these risks?

 

Our AI solutions experts can work with you to mitigate these risks. As part of the Microsoft for Startups Founders Hub, we proudly align with the Microsoft Responsible AI Standard. It's a framework for building AI systems according to six principles: fairness, reliability and safety, privacy and security, inclusiveness, transparency, and accountability.

Fairness

Reliability & safety

Privacy & security

Transparency

Inclusiveness

Accountability
