
The Oregon Attorney General (AG) recently issued Guidance on Artificial Intelligence (AI) (Guidance), highlighting how the existing state law landscape applies to various AI applications. Although the Guidance is directed at Oregon businesses and references Oregon law, it reflects a broader national trend of applying current legal frameworks to emerging technologies.

While Oregon lacks specific “AI laws” at this time, several existing statutes govern AI use, including Oregon’s Unlawful Trade Practices Act, Consumer Privacy Act, and Equality Act.

Background on AI and Its Implications

The Guidance acknowledges that AI shows promise in streamlining tasks, personalizing services, and making data-driven decisions, which can be instrumental in industries such as healthcare and rental management. However, AI’s probabilistic nature raises concerns about fairness, accountability, and trustworthiness. Moreover, privacy issues arise from the vast amounts of personal data AI systems require, increasing the risk of breaches and unauthorized use. Bias and discrimination are significant concerns, as AI systems trained on biased datasets can perpetuate and amplify social inequalities, particularly in areas like lending and hiring.

Transparency is another challenge. The complexity of AI systems often makes it difficult for consumers to understand or challenge decisions affecting their lives. When even AI creators struggle to explain their systems’ decision-making processes, accountability becomes limited.

In essence, while AI offers unprecedented opportunities for innovation and efficiency, it also introduces complex challenges that require careful consideration and management to ensure responsible and equitable implementation.

Application of Existing Laws to AI

The Guidance emphasizes that several existing Oregon laws may be implicated by various AI applications (the list is non-exhaustive):

  1. Unlawful Trade Practices Act (UTPA)

The UTPA protects consumers from misrepresentations in transactions, including those involving AI. Businesses offering or using AI products or services should be aware of several potential UTPA violations:

    • Failure to Disclose: Not disclosing known material defects or nonconformities of an AI product.
    • Misrepresentation: Falsely claiming AI product capabilities or using AI to misrepresent goods or services.
    • False Affiliations: Using AI to create false impressions of sponsorship, approval, or celebrity endorsements.
    • Pricing Deception: Employing AI to make misleading representations about price reductions.
    • Price Gouging: Using AI for unconscionable pricing during emergencies.
    • Robocall Deception: Utilizing AI-generated voices in robocalls to misrepresent caller identity or purpose. Please note that using AI voice in robocalls is also regulated by federal law and the Federal Communications Commission’s rules.
    • Unconscionable Tactics: Employing AI in ways that take advantage of consumer ignorance or provide no material benefit to consumers.
  2. Oregon Consumer Privacy Act (OCPA)

The Guidance highlights that the Oregon Consumer Privacy Act (OCPA) empowers Oregon consumers with control over their personal data, granting them the right to know about data collection and processing, access their personal information, correct inaccuracies, and request deletion of their data. These rights directly impact how AI systems, which often ingest vast amounts of personal data for training, must operate within the state.

Disclosure, Consent, and Opt-Out Mechanisms

Developers using personal data for AI training must provide clear disclosures in accessible privacy notices. When dealing with sensitive data categories as defined by the OCPA, explicit consumer consent is mandatory before using such data in AI model development or training. Importantly, developers acquiring third-party datasets for AI training may be classified as “controllers” under the OCPA, subjecting them to the same stringent standards as the original data collectors.

The OCPA prohibits the “legitimization” of previously collected personal data for AI training through retroactive or passive alterations to privacy notices or terms of use. Instead, affirmative consent is required for any new or secondary data uses. Consumers must be provided with a mechanism to withdraw their consent, and entities must cease data processing within 15 days of receiving a revocation request.

AI Profiling and Consumer Protection

Under the OCPA, consumers have the right to opt out of AI profiling for decisions that could have significant impacts, such as those related to housing, education, or lending. Entities must also respect consumers’ right to request data deletion, a requirement that extends to data used in AI models.

Data Protection Assessments

The OCPA mandates Data Protection Assessments before processing personal data for profiling or any activity presenting heightened risks to consumers. Given the complexity and potential risks associated with generative AI models, such assessments are likely necessary when incorporating consumer data into these systems. The Guidance specifically states that “feeding consumer data into AI models and processing it in connection with these models” presents “heightened risks to consumers” that require a data protection assessment.

  3. Oregon Consumer Information Protection Act (OCIPA)

The Guidance notes that the Oregon Consumer Information Protection Act (OCIPA) applies to AI developers, data suppliers, users, and any entities that:

  • own
  • license
  • maintain
  • store
  • manage
  • collect
  • process
  • acquire
  • otherwise possess personal information.

OCIPA requires safeguarding personal information and promptly notifying affected consumers and the Oregon Attorney General in the event of a security breach meeting certain thresholds.

Violations of OCIPA are also enforceable under the UTPA, meaning that non-compliance can result in legal consequences under the state’s consumer protection laws.

  4. Oregon Equality Act

The Guidance emphasizes that the Oregon Equality Act (OEA) prohibits discrimination in housing and public accommodations based on a comprehensive list of protected characteristics, including:

  • race
  • color
  • religion
  • sex
  • sexual orientation
  • gender identity
  • national origin
  • marital status
  • age
  • disability

For instance, the Guidance notes that a rental management company using an AI-powered mortgage approval system may violate the Act if:

  • the system consistently denies loans to qualified applicants from specific neighborhoods or ethnic backgrounds, and
  • these denials result from the AI being trained on historically biased data.

Implications for Businesses

  1. Knowledge: Businesses are encouraged to familiarize themselves with the applicable laws and regulations implicated by their AI use cases.
  2. Transparency and Accuracy: Businesses must ensure transparency in their AI products and services, disclosing limitations and potential biases.
  3. Data Protection: Companies using AI must implement robust data protection measures and comply with privacy regulations.
  4. Non-Discrimination: AI systems should be designed and monitored to prevent discriminatory outcomes in areas such as lending, hiring, and housing.
  5. Consumer Rights: Businesses must respect consumer rights regarding data collection, use, and deletion in AI applications.
  6. Risk Assessment: Companies should conduct thorough risk assessments when implementing AI systems, particularly for high-risk applications.

Conclusion

The Oregon AG’s Guidance serves as a reminder that existing laws apply to new technologies like AI and indicates that enforcement actions can be expected in Oregon. Businesses operating in Oregon or other states should review their AI practices to ensure compliance with consumer protection, privacy, anti-discrimination, and other applicable laws.

States are expected to take an increasingly prominent role in AI regulation. As this trend continues nationwide, companies should stay informed about the developing legal landscape in the jurisdictions where they operate (which for a majority of SaaS businesses is nationwide).

NAVIGATE THE BURGEONING DOMAIN OF ARTIFICIAL INTELLIGENCE LAW WITH
The CommLaw Group! 

In our Artificial Intelligence (AI) practice, we combine established subject matter expertise in data privacy, intellectual property law, and regulatory compliance (telemarketing and more) with a proven ability to navigate the ever-developing and uncertain technology law landscape. Our attorneys include published experts on legal matters related to AI whose work has gained international traction. We closely follow regulatory and case law developments to guide businesses, developers, and investors on AI-related legal compliance and risk mitigation.

CONTACT US NOW, WE ARE STANDING BY TO GUIDE YOUR COMPANY’S COMPLIANCE EFFORTS

Susan Duarte – Tel: 703-714-1318 / E-mail: sfd@commlawgroup.com

Brian Alexander – E-mail:  bal@commlawgroup.com

Diana James – Tel: 703-663-6757 / E-mail: daj@CommLawGroup.com

Ask An Attorney

Disclaimer: Please be advised that contacting our law firm through this contact form does not establish an attorney-client relationship. While we appreciate your interest in our services, we cannot guarantee the confidentiality of any information shared until an attorney-client relationship has been formally established. Therefore, we kindly request that you refrain from submitting any confidential or sensitive information through this form. Any information provided through this form will be treated as general inquiries and not as privileged or confidential communications. Thank you for your understanding.