
AI Ethics and Law: Unmasking the Culprit in the Age of AI Advancement and the Need for New Laws

Who is to blame?

Who should be held responsible when AI malfunctions or is used for malicious purposes? How do current laws address these emerging issues? Globally, this topic has garnered substantial attention, yet it remains a subject where the answer turns on the peculiar circumstances of each case.

Artificial Intelligence (AI) refers to the field of computer science and technology that focuses on creating systems and algorithms that can perform tasks that typically require human intelligence. These tasks include problem-solving, decision-making, learning from data, understanding natural language, recognising patterns, and more.

AI is rapidly transforming many industries, bringing new opportunities as well as new challenges. Its use raises questions that are both ethical and legal, and the intersection of the two is complicated but important.

In August, Brand Partners wrote to the Supreme Court regarding a concerning discovery related to Zoom’s data collection practices, which included capturing user data and facial images for the purpose of training its AI models.

In Australia, there have been earlier concerns about data collection and AI facial recognition technology, including its use in sports stadiums. The restrictions on what AI can do with this information, or what people could do with this information using AI, are not clear. While current laws cover crimes involving AI, the rapid progress of AI technology is raising new and complicated questions about accountability, especially as AI edges closer to exhibiting something resembling intent of its own, with the potential to engage in criminal conduct.

In the US, Courts have begun to deal with complex and evolving legal issues arising from the use of AI, including catastrophic failures of autonomous vehicles utilising AI. Where an autonomous vehicle causes death or injury, Courts have considered whether liability is attributable to the manufacturer or the individual (if any) behind the wheel, and have also weighed specific contextual factors, including the presence of manufacturing defects and the duty of care. As of now, most AI-related criminal activity has some link to human involvement, which makes legal apprehension possible. What happens legally when human involvement decreases and AI becomes less reliant on human direction? This remains a grey area worldwide.

Question: Apply current law or develop new law?

In Australia, AI is generally regarded in a similar light to robots – viewed as a tool rather than an entity capable of committing crimes. Nonetheless, like a robot or a car, AI can be utilised to engage in unlawful activities. Given the rapidly advancing capabilities of AI, it is becoming increasingly imperative to update existing legal frameworks to encompass ethical considerations pertinent to AI use.

Discussions have revolved around concepts such as vicarious liability and holding the creators of AI systems accountable for their actions. Ultimately, establishing unambiguous regulations, standards, and ethical guidelines is critical, especially in fields such as law where human discernment is vital.

AI is developing faster than ever and is quite evidently changing the technological landscape. For this reason, many issues have arisen, such as determining responsibility for AI decisions and actions. Issues encompassing privacy protection, safety regulations, job displacement, and national security are dominating discussions in this area. As AI continues its relentless advance, it is certain that new challenges and risks will emerge.

Currently, Australia lacks specific legislation tailored to AI. Attempts at regulation have failed to yield concrete reforms. In the absence of dedicated AI laws, businesses must adhere to existing legal frameworks, encompassing general statutes like data protection and consumer rights, as well as sector-specific regulations.

This article can only scratch the surface of the reasons why the rapid development of AI technology demands an equally rapid response from the law. To emphasise how fast and advanced this technology really is, consider that the very article you are currently reading was formatted and reviewed with the assistance of AI, and even features paragraphs generated partially by AI. Additionally, the AI chatbot known as ChatGPT has contributed its own insights and opinions as to why the rapid development of AI calls for the creation of new AI legislation in Australia.

AI holds great promise for a wide range of applications, but without new legislation and ethical safeguards, its unchecked development could lead to serious consequences, including the potential for death and destruction. The future is both fascinating and terrifying.
