
In recent years, conversational AI has played a significant role in enhancing customer experience. Businesses in various sectors are leveraging AI solutions to boost efficiency and elevate customer satisfaction levels.

According to McKinsey, “personalization at scale can drive between 5 and 15% revenue growth for companies in the retail, travel, entertainment, telecom, and financial-services sectors.” Bloomberg reported that generative AI has been shown to increase customer service representatives’ productivity by 14%. Although we have yet to see data on the effects of widespread AI use in customer service and telemarketing, it is clear that, used correctly, AI offers great benefits to businesses.

Some novel applications of AI technology may implicate existing laws, even if it may be hard to immediately recognize an “old” concept under the new technological guise.

The legal landscape for AI in customer service and telemarketing is multifaceted, involving federal and state laws, industry-specific regulations, and emerging AI-specific legislation.

This article will guide you through seven essential legal aspects of using AI in customer service and telemarketing, helping you mitigate your legal exposure while leveraging AI’s potential.

1.  Ensure Robocalling Compliance

If you engage in outbound AI-generated voice calls to customers, you are likely conducting robocalls. The Federal Communications Commission (FCC) has clarified that calls made with AI-generated voices fall under the definition of robocalls in the Telephone Consumer Protection Act (TCPA), emphasizing the need for compliance to avoid hefty fines. As with all robocalls, TCPA consent requirements apply to AI calls. For consent to marketing communications to be valid, it must be:

  • prior,
  • written,
  • express,
  • sought by the business in a logically and topically related communication, and
  • obtained not as a condition for goods or services (although offering discounts and gifts as an incentive for consent is allowed).

Among other robocalling requirements, AI calls must disclose the caller’s identity and the purpose of the call and provide an opt-out mechanism during the call. 

New “One-to-One Consent” Rules for Leads

Beginning January 27, 2025, companies must maintain a record of each lead’s consent to be contacted by that specific business before initiating any marketing communication (“one-to-one” consent). “Bulk” consent to marketing communications will no longer be valid. Although the new rules do not take effect until January 27, 2025, telemarketers must ensure all their leads’ consents are compliant by that deadline, so the time to obtain valid consent is now.
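
For engineering teams building this into a dialing pipeline, the rule reduces to a pre-dial check against a consent record. The following Python sketch is illustrative only; the record fields and function names are our assumptions, not statutory terms.

    from dataclasses import dataclass
    from datetime import datetime, timezone

    # Hypothetical consent record; field names are illustrative, not statutory.
    @dataclass
    class ConsentRecord:
        lead_phone: str
        consenting_business: str  # the one specific business named in the consent
        obtained_at: datetime
        is_written: bool          # prior express written consent
        tied_to_purchase: bool    # consent conditioned on goods/services is invalid

    def may_place_marketing_call(record: ConsentRecord, calling_business: str) -> bool:
        """Gate an outbound AI marketing call on a valid one-to-one consent record."""
        return (
            record.consenting_business == calling_business  # one-to-one, not "bulk"
            and record.is_written
            and not record.tied_to_purchase
            and record.obtained_at <= datetime.now(timezone.utc)  # consent was prior
        )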

Secondary Liability of Cloud Communications Platform Providers

If you are a cloud communications platform provider, you may also be found liable if your customers violate the robocalling rules and other telemarketing laws under the “secondary liability” legal theory. 

In the recent, widely reported case of fraudulent robocalls impersonating President Biden, the FCC proposed a $2,000,000 fine against the provider through whose network the calls were placed for falsely authenticating spoofed traffic with the highest level of attestation permitted under the STIR/SHAKEN rules.

In attributing secondary liability, the Federal Trade Commission and the FCC use the “knew or should have known” standard. Thus, providers are urged to implement and maintain robust “Know Your Customer” and “Know Your Traffic” measures, educate their customers on compliance, take active part in traceback efforts, and place technical guardrails on how their networks can be used to mitigate illegal robocalls.

As was extensively reported by our firm, Robocall Mitigation Database registration, up-to-date certification, and filing of compliant robocall mitigation plans are an indispensable part of robocall mitigation compliance.

2.  Disclose AI Use

It is not yet mandatory at the federal level to inform customers that they are interacting with an AI bot, but such a disclosure may be required in the future. Representative Frank Pallone Jr. introduced the Do Not Disturb Act in Congress, which aims, among other provisions, to mandate the disclosure that a call is made using AI.

Several states require or, at least, encourage the disclosure of AI use in customer service scenarios:

  • Utah’s recent Artificial Intelligence Policy Act requires two types of disclosures:
    • Providers of specific regulated services must proactively inform consumers that they are interacting with generative AI.
    • Other businesses must disclose the use of AI if the consumer inquires about it.
  • California’s “Bot Disclosure Law” prohibits the use of bots to mislead others about their artificial identity to incentivize a transaction or influence a vote in an election. Disclosure of AI bot use is encouraged as a legal defense.

There is a growing trend towards disclosing AI usage, both for legal reasons and to enhance customer experience. Informing customers about AI involvement in interactions and providing the option to switch to a human agent can boost customer trust and reduce legal risks. 
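
In practice, the disclosure and the human-handoff option can be wired into the bot’s front end. A minimal sketch, with assumed wording and trigger phrases that are not drawn from any statute:

    # Illustrative AI-use disclosure and human-agent handoff; the exact wording
    # and trigger phrases are assumptions, not legal requirements.
    AI_DISCLOSURE = (
        "You are chatting with an automated AI assistant. "
        "Type 'agent' at any time to speak with a human representative."
    )

    HANDOFF_TRIGGERS = {"agent", "human", "representative"}

    def respond(user_message: str, generate_reply) -> str:
        """Route a message either to the AI or to a human-handoff path."""
        if user_message.strip().lower() in HANDOFF_TRIGGERS:
            return "Transferring you to a human agent now."
        return generate_reply(user_message)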

3.  Voice Analytics: Ensure Compliance with Biometric Information Privacy Laws and Avoid Illegal Discrimination

Using AI for voice analytics, such as when creating voiceprints for customer authentication, implicates biometric information privacy laws. States like Illinois, Washington, and Texas require prior notification and express written consent for collecting and using biometric data, including voiceprints. 
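
A simple technical safeguard is to make voiceprint enrollment impossible without a recorded consent. The sketch below assumes a hypothetical consent store and biometric vendor API; neither name refers to a real product.

    # Minimal consent gate for voiceprint enrollment; consent_store and
    # biometric_engine stand in for whatever systems a business actually uses.
    def enroll_voiceprint(customer_id: str, audio: bytes, consent_store, biometric_engine):
        consent = consent_store.get(customer_id)  # assumed lookup API
        if not consent or not consent.get("written_biometric_consent"):
            raise PermissionError("No prior express written consent for biometric data.")
        biometric_engine.create_voiceprint(customer_id, audio)  # assumed vendor call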

Another important consideration when utilizing voice analytics is that companies must ensure their AI systems do not discriminate against customers based on voice traits that could indicate protected attributes like gender, race, age, national origin, or disability. This also involves making sure that AI bots are accessible to individuals with hearing or speech impairments.

The use of voice analytics technology that can identify customers or detect fraud by analyzing the truthfulness or falsity of a person’s speech requires obtaining prior customer consent in certain states. Specifically, the California Invasion of Privacy Act (CIPA) mandates that businesses must obtain express written consent from customers before examining their voice recordings to determine the truthfulness or falsity of the spoken content.

4.  Call Recording: Ensure Compliance with Wiretapping Laws

Using AI tools that involve recording customer calls may implicate federal and state wiretapping laws. Although customer consent for call recording is not mandated at the federal level or in the majority of states, certain jurisdictions (known as “two-party consent” states) require express prior consent from all parties to a call before it is recorded. These states are California, Delaware, Florida, Illinois, Maryland, Massachusetts, Montana, Nevada, New Hampshire, Pennsylvania, and Washington.

Since determining a customer’s current location to identify the relevant law can be challenging, and differentiating policies for residents of various states can be technically complex, it is advisable to obtain customer consent for recording to avoid legal pitfalls. 
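
One conservative implementation treats any caller whose location is unknown as if they were in an all-party consent state. A sketch using the state list above:

    # "Two-party" (all-party) consent states listed in this article. A caller
    # whose state is unknown is treated under the stricter rule.
    ALL_PARTY_CONSENT_STATES = {
        "CA", "DE", "FL", "IL", "MD", "MA", "MT", "NV", "NH", "PA", "WA",
    }

    def recording_notice_required(caller_state: str | None) -> bool:
        return caller_state is None or caller_state in ALL_PARTY_CONSENT_STATES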

Using Third-Party AI Software for Call Monitoring (Wiretap vs. Tape Recorder Issue)

If a service provider (for example, a bank) uses third-party AI software for call analytics in a “two-party consent” state, both the service provider and the AI software provider may need to seek customer consent to the call recording.

If the call recording is used beyond the service provider’s purposes, such as to enhance the AI model or for machine learning independent of the provider, the third-party AI call analytics software must be disclosed when obtaining consent for call recording. In this scenario, the AI system functions like a wiretap rather than a tape recorder, according to legal frameworks established decades ago. Therefore, both the service provider and the AI software provider must obtain consent for the recording in “two-party consent” states. If the service provider conceals the use of third-party AI, it could be charged with aiding and abetting illegal wiretapping by the AI software provider.

The Court of Appeals for the Third Circuit in Popa v. Harriet Carter Gifts, Inc. found that a shopping website and a marketing service violated Pennsylvania wiretapping law by collecting and sharing records of customers’ digital activities without their consent.

Recently, a case was filed against Navy Federal Credit Union which allegedly used third-party AI software for customer service calls. The software allegedly recorded and analyzed the calls without the callers’ knowledge or consent (see Paulino v. Navy Federal Credit Union et al.).

On the other hand, if the AI system’s role is limited to sharing the call recording and any insights with the service provider, it might be considered an extension of the service provider (like a tape recorder). In this case, a separate disclosure may not be necessary, but the service provider must still obtain prior express consent for the call recording in “two-party consent” states.
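
The wiretap-versus-tape-recorder distinction can be reflected in the recording notice itself. A hypothetical disclosure builder, for illustration only:

    # Illustrative disclosure builder: if a third-party AI vendor uses the
    # recordings for its own purposes (the "wiretap" scenario), name it in
    # the notice; otherwise a standard recording notice may suffice.
    def recording_disclosure(company: str, ai_vendor: str | None,
                             vendor_uses_data_independently: bool) -> str:
        notice = f"This call may be recorded and analyzed by {company}."
        if ai_vendor and vendor_uses_data_independently:
            notice += (f" Recordings are also shared with {ai_vendor}, which may"
                       " use them to improve its own AI models.")
        return notice + " Please say 'I agree' to continue."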

Call Recordings’ Security

Various laws mandate that businesses must protect certain customer information which can be communicated to them on a phone call from inadvertent disclosure. This includes ensuring that AI chatbots cannot be manipulated by bad actors to disclose other customers’ protected data. Regularly updating security protocols and conducting thorough risk assessments can help prevent data leaks and maintain compliance with privacy laws.
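
One common guardrail is to hard-limit the chatbot’s data access to the authenticated caller’s own records, so no amount of prompt manipulation can reach anyone else’s data. A sketch with an assumed record-store interface:

    # Retrieval guardrail: the bot can only fetch the authenticated caller's
    # own record, regardless of what the conversation asks for.
    def fetch_account_data(authenticated_customer_id: str,
                           requested_customer_id: str, store) -> dict:
        if requested_customer_id != authenticated_customer_id:
            raise PermissionError("Cross-customer record access is not permitted.")
        return store.get_record(authenticated_customer_id)  # assumed store API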

5.  Call Content Analytics and Machine Learning: Obtain Consent for Sensitive Data Use or Notify Customers of Other Data Uses

Depending on the type of data a customer communicates on a call, call content analytics may require consumer consent or notice. Consent is required for processing certain sensitive data, including government-issued identifiers, health information, financial data, certain communications data (CPNI), and other information. It is important to note that data that might not be considered sensitive in and of itself may become sensitive based on the surrounding context.
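
Where opt-in consent is absent, one practical approach is to screen transcripts for likely sensitive data before running analytics. The patterns below are toy stand-ins for a real detection service, not a complete classifier:

    import re

    # Toy sensitive-data patterns; production systems need far more robust
    # classifiers and should account for context-dependent sensitivity too.
    SENSITIVE_PATTERNS = {
        "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
        "card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    }

    def redact_for_analytics(transcript: str, has_opt_in_consent: bool) -> str:
        """Redact likely sensitive data unless the customer opted in to its use."""
        if has_opt_in_consent:
            return transcript
        for label, pattern in SENSITIVE_PATTERNS.items():
            transcript = pattern.sub(f"[REDACTED {label.upper()}]", transcript)
        return transcript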

For the analysis of most other types of information that can be communicated during a call, it is only necessary to include a disclosure in the privacy notice, along with providing individuals with the right to opt out of such processing. This includes data aggregation. 

Occasionally, the training data incorporated into a large language model can be disaggregated back to its original form. Therefore, to comply with security requirements under various data protection laws, businesses must implement appropriate technical safeguards to reduce the risk of data disaggregation, which could be tantamount to a data breach.

AI Training on Call Contents

Training AI on call content can involve processing legally protected data if such information is shared during the call. Therefore, it is essential to inform customers about such data processing in a privacy notice and provide them with the option to opt out of this processing. It remains uncertain whether opting out would necessitate “machine unlearning,” which might not be technically feasible, but it is reasonable to expect that such data should at least be removed from the dataset for future training.
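
At a minimum, honoring an opt-out means excluding that customer’s transcripts from future training runs, as in this minimal sketch:

    # Drop opted-out customers' transcripts from the next training dataset.
    # Whether past models must also "unlearn" the data is an open question.
    def build_training_set(transcripts: list[dict], opted_out_ids: set[str]) -> list[dict]:
        return [t for t in transcripts if t["customer_id"] not in opted_out_ids]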

Businesses may soon be required to obtain opt-in consent for using consumer data in machine learning. The AI CONSENT Act, which aims to mandate this, was introduced in Congress in late March 2024.

6.  Using Real People’s Voices: Comply with Right of Publicity Laws

Many voice AI solutions utilize the voices of real individuals. Most states recognize the right of publicity, granting individuals exclusive commercial rights to their likeness, including their voice. Some states, such as Tennessee with its ELVIS Act, have specific laws against the non-consensual use of voices in AI-generated deepfakes.

When selecting a real person’s voice for your customer service AI, ensure you have their consent. Avoid using a voice that closely resembles a famous person’s voice, as this could infringe on their right of publicity.

7.  Mitigate Liability for AI Hallucinations (Negligent Misrepresentation)

AI chatbots, while offering significant efficiencies, can sometimes produce outputs that are harmfully wrong, a phenomenon known as “hallucinations.” These hallucinations can lead to serious consequences if relied upon, including monetary losses, damaged reputations, or even more severe outcomes.

Negligent misrepresentation may occur when a business, which has a duty of care to provide accurate information, allows its AI chatbot to produce incorrect outputs that customers reasonably rely upon to their detriment. A notable example is the recent Canadian case involving Air Canada. The British Columbia Civil Resolution Tribunal found Air Canada liable for negligent misrepresentation after its AI chatbot provided incorrect information about bereavement fares, leading a customer to incur additional costs. The tribunal ruled that Air Canada was responsible for the chatbot’s misinformation and ordered the airline to compensate the customer for the financial loss incurred.

To mitigate the risk of negligent misrepresentation when using AI chatbots, businesses can implement several strategies (a combined code sketch follows the list):

  1. Disclaimers and Warnings: Clearly state in the chatbot interface and Terms of Use that the information provided by the AI may not always be accurate and should be verified. This can help manage user expectations and reduce reliance on potentially incorrect outputs.
  2. Link the Source Document: When conversing with the customer about the contents of a document (for example, the company’s Terms of Use), link the source document in the AI chat response and disclaim the accuracy of the AI’s answer. This makes it unreasonable to rely solely on the AI’s output and incentivizes the customer to read the source document themselves.
  3. Less-Creative Mode: Configure the AI to operate in a less-creative mode, focusing on providing factual information from verified sources (such as the company’s policies) rather than generating creative responses.
  4. Switch to a Human Agent: Program the AI to recognize high-risk cases or instances where it cannot derive a satisfactory response from reliable sources and switch the conversation to a human agent.
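
Strategies 2 through 4 can be combined into a single response policy. In the sketch below, the llm, search_policies, and escalate interfaces are assumptions modeled on common LLM tooling, not any particular product:

    # Illustrative response policy: answer only from a retrieved policy
    # document, at low creativity, with a source link and a human fallback.
    def answer_customer(question: str, llm, search_policies, escalate) -> str:
        doc = search_policies(question)   # retrieve the governing document
        if doc is None:
            return escalate(question)     # strategy 4: hand off to a human
        reply = llm.generate(
            prompt=f"Answer only from this policy text:\n{doc.text}\n\nQ: {question}",
            temperature=0.0,              # strategy 3: less-creative mode
        )
        return (f"{reply}\n\nSource: {doc.url} "  # strategy 2: link the source
                "(AI-generated; please verify against the linked document).")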

Conclusion

The integration of AI in customer service and telemarketing offers significant benefits, but it also comes with legal responsibilities. By understanding and adhering to the relevant laws and regulations, businesses can harness the power of AI while safeguarding customer privacy and maintaining compliance. Stay informed, be transparent, and always prioritize the customer’s right to privacy and informed consent.

Navigating the legal landscape of AI may seem daunting, but with the right approach, it can be a smooth and rewarding journey. Embrace the future of customer service with confidence, knowing that you are well-equipped to handle the legal challenges that come with it.

CONTACT US! WE CAN HELP!

If your company is exploring the adoption or deployment of AI customer service or telemarketing, and you have questions or concerns about the legal considerations involved, the Technology Attorneys at Marashlian & Donahue, PLLC are here to assist you.

Our firm boasts extensive experience in advanced, enhanced, and emerging technologies, including privacy law, telemarketing, copyright, trademark, commercial law, and other relevant legal areas impacted by AI in communications. We understand the complexities and rapidly evolving legal landscape surrounding technology and are equipped to provide you with comprehensive guidance and support.

Don’t navigate these challenges alone—contact us to ensure that your AI initiatives are compliant. We look forward to helping you harness the full potential of AI while mitigating your legal exposure.

NAVIGATE THE BURGEONING DOMAIN OF
ARTIFICIAL INTELLIGENCE-ASSISTED TELEMARKETING WITH
The CommLaw Group! 

In our Artificial Intelligence (AI) practice, we combine our established subject matter expertise in data privacy, intellectual property law, and regulatory compliance (telemarketing and more) with our proven ability to navigate the ever-developing and uncertain technology law landscape. Our attorney ranks include published experts in legal matters related to AI whose work has gained international traction. We closely follow regulatory and case law developments to guide businesses, developers, and investors on AI-related legal compliance and risk mitigation.

PLEASE CONTACT US NOW, WE ARE STANDING BY TO GUIDE YOUR COMPANY’S COMPLIANCE EFFORTS

Jonathan S. Marashlian – Tel: 703-714-1313 / E-mail: jsm@CommLawGroup.com
Michael Donahue – Tel: 703-714-1319 / E-mail: mpd@CommLawGroup.com
Robert H. Jackson – Tel: 703-714-1316 / E-mail: rhj@commlawgroup.com
Linda McReynolds – Tel: 703-714-1318 / E-mail: lgm@CommLawGroup.com
Diana James – Tel: 703-663-6757 / E-mail: daj@CommLawGroup.com
