
Algorithms auditing algorithms: GPT-4 is a reminder that responsible AI is moving past human scale


Artificial intelligence (AI) is revolutionizing industries, streamlining processes, and hopefully improving the quality of life for people around the world—all very exciting news. However, with the increasing influence of AI systems, it is crucial to ensure that these technologies are developed and implemented responsibly.

Responsible AI is not just about compliance with regulations and ethics; it is the key to creating more accurate and effective AI models.

In this article, we will discuss how responsible AI leads to more powerful AI systems; explore existing and upcoming regulations related to AI compliance; and emphasize the need for software and AI solutions to address these challenges.

Why does responsible AI lead to more accurate and effective AI models?

Responsible AI refers to the obligation to design, develop, and deploy AI models in a safe, fair, and ethical manner. By ensuring that models work as expected and don’t produce unwanted results, responsible AI can help increase trust, protect against harm, and improve model performance.

To be responsible, AI must be understandable. This is no longer a problem humans can solve alone; we need algorithms to help us understand the algorithms.

GPT-4, the latest version of OpenAI’s large language model (LLM), is trained on text and images from the web, and as we all know, the web is full of inaccuracies, ranging from minor misstatements to outright fabrications. While these untruths can be dangerous in and of themselves, they also inevitably produce AI models that are less accurate and less intelligent. Responsible AI can help us solve these problems and move us toward developing better AI. Specifically, responsible AI can:

  1. Reduce bias: Responsible AI focuses on removing biases that can be inadvertently built into AI models during development. By actively working to remove bias in data collection, training, and deployment, AI systems become more accurate and deliver better outcomes for a wider range of users (a minimal illustration of such a bias check follows this list).
  2. Improve generalizability: Responsible AI encourages the development of models that work well in different environments and across different populations. By ensuring that AI systems are tested and validated against a variety of scenarios, the generalizability of these models is enhanced, resulting in more effective and adaptable solutions.
  3. Ensure transparency: Responsible AI emphasizes the importance of transparency in AI systems, making it easier for users and stakeholders to understand how decisions are made and how the AI works. This includes providing understandable explanations of algorithms, data sources, and possible limitations. By promoting transparency, responsible AI fosters trust and accountability, empowers users to make informed decisions, and encourages effective assessment and improvement of AI models.
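
To make the bias point concrete, here is a minimal sketch of the kind of pre-deployment check such a practice might include, comparing selection rates across groups against the commonly cited four-fifths rule of thumb. The data, function names, and threshold below are illustrative assumptions, not something prescribed in this article:

```python
# A minimal sketch of a demographic-parity check, assuming a hypothetical
# model's binary predictions and one sensitive attribute per record.
from collections import defaultdict

def selection_rates(predictions, groups):
    """Return the positive-prediction rate for each sensitive group."""
    totals, positives = defaultdict(int), defaultdict(int)
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        positives[group] += int(pred == 1)
    return {g: positives[g] / totals[g] for g in totals}

def demographic_parity_ratio(predictions, groups):
    """Ratio of the lowest to highest group selection rate (1.0 = parity)."""
    rates = selection_rates(predictions, groups)
    return min(rates.values()) / max(rates.values())

# Illustrative audit of a batch of model outputs before deployment.
preds  = [1, 0, 1, 1, 0, 1, 0, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
ratio = demographic_parity_ratio(preds, groups)
print(f"Demographic parity ratio: {ratio:.2f}")
if ratio < 0.8:  # four-fifths rule of thumb; threshold is an assumption
    print("Warning: selection rates differ substantially across groups.")
```

A check like this would typically run against a held-out evaluation set before each model release, alongside other fairness metrics.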

AI compliance and ethics regulations

In the EU, the General Data Protection Regulation (GDPR) was adopted in 2016 (and became enforceable in 2018) to enforce strict rules on data protection.

Businesses quickly realized they needed software to track where and how they were using consumer data, and then to ensure they were compliant with those regulations.

OneTrust is a company that quickly emerged to provide businesses with a platform to manage their data and processes related to privacy. OneTrust has seen incredible growth since its inception, with much of that growth driven by GDPR.

We believe the current and near-term state of AI regulation mirrors where data protection regulation stood in 2015/2016: the importance of responsible AI is beginning to be recognized around the world, with various regulations emerging to drive the ethical development and use of AI.

  1. The EU AI Act
    In April 2021, the European Commission proposed new rules – the EU AI Act – to create a legal framework for AI in the European Union. The proposal includes provisions on transparency, accountability, and user rights to ensure that AI systems are safe and respect fundamental rights. We believe that the EU will continue to be at the forefront of AI regulation. The EU AI Act is expected to be adopted at the end of 2023, with the legislation coming into force in 2024/2025.
  2. AI regulation and initiatives in the US
    The EU AI Act is likely to set the tone for regulation in the US and other countries. In the US, governing bodies such as the FTC are already enacting their own sets of rules, particularly in relation to AI-driven decisions and bias, and NIST has published a risk management framework that is likely to influence US regulation.

So far, there has been relatively little movement on AI regulation at the federal level, though the Biden administration has published the AI Bill of Rights – a non-binding guide to the design and use of AI systems. Congress is also reviewing the Algorithmic Accountability Act of 2022, which would require impact assessments of AI systems to check for bias and effectiveness. But these measures are not moving quickly toward adoption.

Interestingly (but perhaps not surprisingly), much of the early effort to regulate AI in the US is happening at the state and local levels, with much of this legislation targeting HR tech and insurance. New York City has already passed Local Law 144, also known as the NYC bias audit mandate, effective April 2023, which prohibits companies from using automated hiring tools to hire or promote candidates in NYC unless the tools have been independently audited for bias.

California has proposed similar employment regulations regarding automated decision-making systems, and Illinois already has legislation in place regarding the use of AI in video interviews.

In the insurance sector, the Colorado Division of Insurance has proposed a regulation known as the Algorithm and Predictive Model Governance Regulation, aimed at “protecting consumers from unfair discrimination in insurance practices.”

The role of software in ensuring responsible AI

It’s pretty clear that regulators (starting in the EU, then expanding to other countries) and companies will take AI systems and the data behind them very seriously. Non-compliance and failures stemming from poorly understood AI models carry significant fines – and, we believe, real reputational risk for companies.

Purpose-built software is required to track and manage compliance. Regulation will serve as an important tailwind for technology adoption. Specifically, the critical roles of software solutions in addressing the ethical and regulatory challenges associated with responsible AI include:

  1. AI model tracking and inventory: Software tools can help organizations maintain an inventory of their AI models, including their purpose, data sources, and performance metrics. This allows for better monitoring and management of AI systems, ensuring they comply with ethical guidelines and relevant regulations (a minimal sketch of such an inventory follows this list).
  2. AI risk assessment and monitoring: AI-powered risk assessment tools can evaluate the potential risks associated with AI models, such as bias, privacy concerns, and ethical issues. By continuously monitoring these risks, organizations can proactively address potential problems and maintain responsible AI practices.
  3. Algorithm auditing: In the future, we can expect the emergence of algorithms capable of auditing other algorithms – the holy grail! With the massive amounts of data and computational power going into these models, this is no longer a human-scale problem. Algorithmic auditing enables automated, real-time, unbiased evaluation of AI models, ensuring they meet ethical standards and regulatory requirements.
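
As a rough illustration of the first role, here is a minimal sketch of what a model inventory with audit tracking might look like. The ModelRecord fields and method names are illustrative assumptions, not a real compliance product’s API; production platforms track far richer metadata:

```python
# A minimal in-memory sketch of an AI model inventory; all names and
# fields are illustrative assumptions, not a real compliance platform.
from dataclasses import dataclass
from datetime import date

@dataclass
class ModelRecord:
    name: str
    purpose: str
    data_sources: list[str]
    performance_metrics: dict[str, float]
    last_bias_audit: date | None = None  # None means never audited

class ModelInventory:
    def __init__(self):
        self._records: dict[str, ModelRecord] = {}

    def register(self, record: ModelRecord) -> None:
        """Add or update a model's entry in the inventory."""
        self._records[record.name] = record

    def overdue_audits(self, max_age_days: int = 365) -> list[str]:
        """Names of models never audited or audited too long ago."""
        today = date.today()
        return [
            r.name for r in self._records.values()
            if r.last_bias_audit is None
            or (today - r.last_bias_audit).days > max_age_days
        ]

# Illustrative usage: register a model and flag stale audits.
inventory = ModelInventory()
inventory.register(ModelRecord(
    name="resume-screener-v2",
    purpose="rank job applicants",
    data_sources=["historical hiring data"],
    performance_metrics={"auc": 0.87},
    last_bias_audit=date(2022, 1, 15),
))
print("Audits overdue for:", inventory.overdue_audits())
```

An inventory like this is the starting point for the risk monitoring and auditing roles above: once every model’s purpose, data sources, and audit history live in one registry, compliance checks can be automated against it.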

These software solutions not only streamline compliance processes, but also help develop and deploy more accurate, ethical, and effective AI models. By using technology to address the challenges of responsible AI, companies can increase trust in AI systems and realize their full potential.

The importance of responsible AI

In summary, responsible AI is the foundation for developing accurate, effective, and trustworthy AI systems. By removing bias, improving generalizability, ensuring transparency, and protecting user privacy, responsible AI leads to more powerful AI models. Compliance with regulations and ethical guidelines is critical to fostering public trust in and acceptance of AI technologies, and as AI continues to advance and permeate our lives, the need for software solutions that support responsible AI practices will only grow.

By facing up to this responsibility, we can ensure the successful integration of AI into society and harness its power to create a better future for everyone!

Aaron Fleishman is a partner at Tola Capital.
