Conversational AI has played a significant role in enhancing customer experience, with businesses in various sectors leveraging AI solutions to boost efficiency and elevate customer satisfaction levels. 

The legal landscape for AI in customer service and telemarketing is multifaceted, involving federal and state laws and regulations. Despite the current federal administration’s trend toward AI deregulation, existing industry-specific laws still apply to AI-powered use cases, and some states are implementing AI-specific consumer protection laws, which can impact businesses operating nationwide (for example, online). State Attorneys General in Massachusetts, California, and Oregon have already stated that current consumer protection laws apply to AI, which likely signals upcoming enforcement activity in those states. 

This article walks you through seven essential legal aspects of using AI in customer service and telemarketing, helping you mitigate your legal exposure while leveraging AI’s potential. 

1.  Ensure Telemarketing Compliance 

If you engage in outbound AI-generated voice calls to customers, you are likely conducting robocalls. The Federal Communications Commission (FCC) has clarified that calls made with AI-generated voices fall under the Telephone Consumer Protection Act’s (TCPA) definition of robocalls, emphasizing the need for compliance to avoid hefty fines. TCPA consent requirements apply to AI calls. For a consumer’s consent1 to marketing communications to be valid, it must be: 

  • prior, 
  • written, 
  • express, 
  • sought by the business in a logically and topically related communication, and 
  • obtained not as a condition for goods or services (although offering discounts and gifts as an incentive for consent is allowed). 

Among other TCPA requirements, AI calls must disclose the caller’s identity and the purpose of the call and provide an opt-out mechanism during the call.  

1 In December of 2023, the FCC adopted an additional “one-to-one” consent rule, which would require that (1) telemarketers maintain a record of each lead’s consent to be contacted by that specific business before initiating any marketing communication and (2) the telemarketing communications be “logically and topically related” to the consent provided by the consumer. “Bulk” consent to marketing communications would no longer be valid. This rule, which was set to take effect on January 27, 2025, was vacated by the 11th Circuit Court of Appeals on January 24, 2025. On the same date, the FCC released an order postponing the rule’s enforcement until January 27, 2026. It is unclear if the rule will become effective.  

2.  Disclose AI Use 

It is not yet mandatory at the federal level to inform customers that they are interacting with an AI bot. However, several states already require or at least encourage the disclosure of AI use in customer service scenarios, and more states have introduced similar bills: 

  • Utah’s recent Artificial Intelligence Policy Act requires two types of disclosures: 
      • Providers of specific regulated services must proactively inform consumers that they are interacting with generative AI. 
      • Other businesses must disclose the use of AI if the consumer inquires about it. 
  • California’s “Bot Disclosure Law” prohibits the use of bots to mislead others about their artificial identity to incentivize a transaction or influence a vote in an election. Disclosure of AI bot use is encouraged as a legal defense. 
  • Several states, including Hawaii, Idaho, Illinois, Massachusetts, and New York, have proposed legislation that would either mandate chatbot providers to clearly inform users that they are not interacting with a human or hold providers accountable for any misleading or deceptive communications generated by chatbots. 

There is a growing trend towards disclosing AI usage, both for legal reasons and to enhance customer experience. Informing customers about AI involvement in interactions and providing the option to switch to a human agent can boost customer trust and reduce legal risks.  
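As a sketch of how such a disclosure and human-handoff option might be implemented in a chat interface, the hypothetical snippet below opens every session with an AI-use notice and watches for an escalation request. All function names, keywords, and message texts are illustrative assumptions, not statutory language or any vendor’s API:

```python
# Hypothetical sketch: disclose AI use up front and offer a human handoff.
# All names and message texts are illustrative assumptions, not legal language.

AI_DISCLOSURE = (
    "You are chatting with an automated AI assistant. "
    "Type 'agent' at any time to reach a human representative."
)

ESCALATION_KEYWORDS = {"agent", "human", "representative"}

def greet() -> str:
    """Open every session with the AI-use disclosure."""
    return AI_DISCLOSURE

def wants_human(user_message: str) -> bool:
    """Detect a request to switch to a human agent."""
    words = user_message.lower().split()
    return any(word.strip(".,!?") in ESCALATION_KEYWORDS for word in words)

if __name__ == "__main__":
    print(greet())
    print(wants_human("Can I talk to a human please?"))  # True
```

The key design point is that the disclosure is unconditional (shown before any substantive exchange) and the handoff trigger is checked on every message, mirroring the trust-building practice described above.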

3.  Voice Analytics: Ensure Compliance with Biometric Information Privacy Laws and Avoid Illegal Discrimination 

Using AI for voice analytics, such as when creating voiceprints for customer authentication, implicates biometric information privacy laws. States like Illinois, Washington, and Texas require prior notification and express written consent for collecting and using biometric data, including voiceprints.  

Another important consideration when utilizing voice analytics is that companies must ensure their AI systems do not discriminate against customers based on voice traits that could indicate protected attributes like gender, race, age, national origin, or disability. This also involves making sure that AI bots are accessible to individuals with hearing or speech impairments. 

State High-Risk AI Decision-Making Systems Legislation 

    • The Colorado AI Act (effective February 1, 2026) aims to prevent discrimination in the use of “high-risk AI systems,” meaning AI used in high-stakes decision-making such as employment, education, healthcare, lending, and other essential services. The act requires that both developers and users of AI decision-making systems use reasonable care in protecting consumers from algorithmic discrimination. The act also grants consumers rights such as opting out of algorithmic decision-making and appealing such decisions.   
    • The Illinois Amendment to the Human Rights Act (effective January 1, 2026) contains similar requirements to the Colorado AI Act but focuses on the use of AI in making employment-related decisions. 
    • The New York City Local Law 144 of 2021 (effective January 1, 2023) prohibits employers and employment agencies from using an automated employment decision tool unless they ensure a bias audit was done and provide required notices. 

The use of voice analytics technology that can identify customers or detect fraud by analyzing the truthfulness or falsity of a person’s speech requires obtaining prior customer consent in certain states. Specifically, the California Invasion of Privacy Act (CIPA) mandates that businesses must obtain express written consent from customers before examining their voice recordings to determine the truthfulness or falsity of the spoken content. 

4.  Call Recording: Ensure Compliance with Wiretapping Laws 

Using AI tools that involve recording customer calls may implicate federal and state wiretapping laws. Although customer consent for call recording is not mandated at the federal level or in the majority of states, certain jurisdictions (known as “two-party consent” states) require express consent from all parties to the call before recording begins. These states are California, Delaware, Florida, Illinois, Maryland, Massachusetts, Montana, Nevada, New Hampshire, Pennsylvania, and Washington. 

Since determining a customer’s current location to identify the applicable law can be challenging, and differentiating policies for residents of various states can be technically complex, it is advisable to obtain customer consent for recording in all cases to avoid legal pitfalls.  

Using Third-Party AI Software for Call Monitoring          

The “two-party consent” state wiretap law most frequently invoked by plaintiffs is the California Invasion of Privacy Act (CIPA). Current CIPA case law treats third-party call recording technology vendors employed by a party to the conversation as an extension of that party. However, with the advent of AI technology, plaintiffs’ attorneys have begun alleging that businesses using third-party AI technology to record and analyze customer communications without disclosing that use are aiding and abetting the AI vendors’ illegal wiretapping or eavesdropping. The plaintiffs claim that the AI vendors are no longer an extension of the businesses using their technology because the recordings serve the vendors’ own purposes, such as machine learning. 

Given that the law is unsettled with respect to third-party AI call analytics technology, companies recording calls in “two-party consent” states are advised to obtain customer consent to call recording that discloses the use of the AI vendor. 

Forms of Consent to Call Recording 

While it is the best practice to obtain affirmative consent from the consumer (such as an audible “yes” or the click of a button), some states’ courts have held that implied consent is established if a caller remains on the line after a disclosure is played that the call will be recorded. The notice of the call being recorded must be clear to establish implied consent, and the caller must have a reasonable opportunity to end the call if they object to recording. 
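The two consent models described above can be sketched as a simple decision function. This is a minimal illustration, assuming the telephony layer reports caller actions as string events and a timer since the disclosure was played; the event names, the function, and the five-second figure are all hypothetical, not drawn from any statute or IVR API:

```python
# Hypothetical sketch of recording-consent logic for a call flow.
# Event names, the helper, and the 5-second window are illustrative assumptions.

DISCLOSURE = "This call may be recorded for quality and training purposes."

def recording_allowed(caller_event: str, seconds_after_disclosure: float,
                      affirmative_mode: bool = True) -> bool:
    """
    Decide whether recording may begin after the disclosure is played.

    affirmative_mode=True  -> require an explicit yes (spoken or keypress).
    affirmative_mode=False -> implied consent: the caller stayed on the line
                              long enough to have had a reasonable chance
                              to hang up after hearing the disclosure.
    """
    if affirmative_mode:
        return caller_event == "said_yes"
    # Implied consent: caller did not hang up after a clear disclosure.
    return caller_event != "hung_up" and seconds_after_disclosure >= 5.0

if __name__ == "__main__":
    print(recording_allowed("said_yes", 0.0))                         # True
    print(recording_allowed("stayed_on_line", 10.0, affirmative_mode=False))  # True
    print(recording_allowed("stayed_on_line", 1.0, affirmative_mode=False))   # False
```

Defaulting to `affirmative_mode=True` reflects the best-practice posture: implied consent is only a fallback where a state’s courts have accepted it.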

Call Recordings’ Security 

Various laws mandate that businesses must protect certain customer information which can be communicated to them on a phone call from inadvertent disclosure. This includes ensuring that AI chatbots cannot be manipulated by bad actors to disclose other customers’ protected data. Regularly updating security protocols and conducting thorough risk assessments can help prevent data leaks and maintain compliance with privacy laws. 

5.  Call Content Analytics and Machine Learning: Obtain Consent for Sensitive Data Use or Notify Customers of Other Data Uses 

Depending on the type of data a customer communicates on a call, data privacy laws may be implicated, requiring consumer consent or notice before the business uses that data. Consent is required for certain sensitive data processing, including government-issued identifiers, health information, financial data, certain communications data (CPNI), and other information. Importantly, data that might not be considered sensitive in and of itself may become sensitive based on the surrounding context. 

For the analysis or other use of most other types of personal data communicated on a call, most states’ data privacy laws only require a disclosure in the privacy notice, along with providing individuals with the right to opt out of such processing. We also suggest including a reference to the business’s privacy notice in the pre-recorded message played at the start of the call. 

Occasionally, training data incorporated into a large language model can be disaggregated back to its original form. Therefore, to comply with security requirements under various data protection laws, businesses must implement appropriate technical safeguards to reduce the risk of data disaggregation, which could amount to a data breach. 

AI Training on Call Contents 

Training AI on call content can involve processing protected personal data if such information is shared during the call. Therefore, it is essential to inform customers about such data processing in a privacy notice and provide them with the option to opt out of this processing. It remains uncertain whether a consumer’s exercise of their opt-out right would require “machine unlearning,” which may not be technically feasible. However, it is reasonable to expect that such data should at least be removed from the dataset for future training purposes. 
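In practice, honoring such an opt-out at minimum means excluding the opted-out customers’ transcripts from future training sets. A minimal sketch, assuming transcripts are records keyed by a customer ID and the business maintains an opt-out registry (both the record shape and the registry are hypothetical):

```python
# Hypothetical sketch: drop opted-out customers' transcripts before training.
# The record fields and the opt-out registry are illustrative assumptions.

def filter_training_data(transcripts: list[dict], opted_out_ids: set[str]) -> list[dict]:
    """Return only transcripts from customers who have not opted out."""
    return [t for t in transcripts if t["customer_id"] not in opted_out_ids]

if __name__ == "__main__":
    data = [
        {"customer_id": "c1", "text": "call one"},
        {"customer_id": "c2", "text": "call two"},
    ]
    # Customer c2 has opted out, so only c1's transcript remains.
    print(filter_training_data(data, {"c2"}))
```

This addresses only the forward-looking expectation noted above (removal from future training data); it does not attempt “machine unlearning” of models already trained.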

Although there has been movement at the federal level to require businesses to obtain opt-in consent before using consumer data in machine learning, such a bill is unlikely to be adopted soon. 

6.  Using Real People’s Voices: Comply with Right of Publicity Laws 

Many voice AI solutions utilize the voices of real individuals. Most states recognize the right of publicity, granting individuals exclusive commercial rights to their likeness, including their voice. Some states, such as Tennessee with its ELVIS Act, have specific laws against the non-consensual AI use of voice likenesses. 

When selecting a real person’s voice for your customer service AI, ensure you have their consent. Avoid using a voice that closely resembles a famous person’s voice, as this could infringe on their right of publicity. 

7.  Mitigate Liability for AI Hallucinations 
(Negligent Misrepresentation and Other Potential Issues) 

AI chatbots, while offering significant efficiencies, can sometimes produce outputs that are harmfully wrong, a phenomenon known as “hallucinations.” These hallucinations can lead to serious consequences if relied upon, including monetary losses, damaged reputations, or even more severe outcomes. 

Negligent misrepresentation under U.S. common law may occur when a business, which has a duty of care to provide accurate information, allows its AI customer service agent to produce incorrect outputs that customers reasonably rely upon to their detriment.  

A notable example is a Canadian case involving Air Canada. The British Columbia Civil Resolution Tribunal found Air Canada liable for negligent misrepresentation after its AI chatbot provided incorrect information about bereavement fares, leading a customer to incur additional costs. The tribunal ruled that Air Canada was responsible for the chatbot’s misinformation and ordered the airline to compensate the customer for the financial loss incurred. Although this case originates from a different jurisdiction, it is grounded in similar common law principles that are applicable in the U.S. 

Other relevant high-profile cases involving conversational AI are the lawsuits filed against Character AI, bringing product liability, negligence, and deceptive trade practices claims related to minors’ use of the platform. Many of these claims center on the conversational AI’s lack of safety guardrails, which resulted in the AI giving harmful advice to its users. 

To mitigate the risks of committing negligent misrepresentation when using AI chatbots, businesses can implement several strategies: 

  1. Disclaimers and Warnings: Clearly state in the chatbot interface and Terms of Use that the information provided by the AI may not always be accurate and should be verified. This can help manage user expectations and reduce reliance on potentially incorrect outputs. Moreover, some state bills seek to require businesses using generative AI to warn customers that AI-generated outputs may be inaccurate, inappropriate, or harmful. 
  2. Link the Source Document: When conversing with the customer about the contents of a document (for example, the company’s Terms of Use), link the source document in the AI chat response and disclaim the accuracy of the AI’s response. This will make it unreasonable to rely solely on the AI’s output and will encourage the customer to read the source document themselves. 
  3. Less-Creative Mode: Configure the AI to operate in a less-creative mode, focusing on providing information from verified sources (such as the company’s policies) rather than generating “creative” responses. 
  4. Switch to a Human Agent: Program the AI to recognize high-risk cases or instances where it cannot derive a satisfactory response from reliable sources and switch the conversation to a human agent. 
  5. Introduce Technical Guardrails: Introduce security measures and conduct regular audits to ensure safety and reliability of your AI customer service agents. If using third-party AI vendors, conduct due diligence when selecting a vendor and request AI safety audit results. 
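The human-handoff and guardrail strategies can be combined in a simple routing layer: before an AI draft answer is sent, check whether the topic is high-risk or the model’s confidence is low, and escalate to a human agent if so. A sketch under the assumption that the chatbot exposes a confidence score; the topic list, threshold, and function name are all illustrative, not any product’s actual API:

```python
# Hypothetical guardrail sketch: escalate high-risk or low-confidence answers
# to a human agent instead of sending the AI's draft reply.
# Topics, threshold, and the confidence score are illustrative assumptions.

HIGH_RISK_TOPICS = {"refund", "legal", "medical", "cancellation fee"}
CONFIDENCE_THRESHOLD = 0.8

def route_reply(user_message: str, draft_reply: str, confidence: float) -> str:
    """Return the AI draft, or an escalation notice for risky/uncertain cases."""
    text = user_message.lower()
    risky = any(topic in text for topic in HIGH_RISK_TOPICS)
    if risky or confidence < CONFIDENCE_THRESHOLD:
        return "Let me connect you with a human agent who can help with this."
    return draft_reply

if __name__ == "__main__":
    print(route_reply("What are your hours?", "We are open 9-5.", 0.95))
    print(route_reply("Do I qualify for a refund?", "Yes, always!", 0.99))
```

A real system would route on classifier output rather than keyword matching, but the structural point stands: the escalation check sits between the model and the customer, so a hallucinated answer on a high-stakes topic never reaches the user.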

Conclusion 

The integration of AI in customer service and telemarketing offers significant benefits, but it also comes with legal responsibilities. By understanding and adhering to the relevant laws and regulations, businesses can harness the power of AI while safeguarding customer privacy and maintaining compliance. Stay informed, be transparent, and always prioritize the customer’s right to privacy and informed consent. 

Navigating the legal landscape of AI may seem daunting, but with the right approach, it can be a smooth and rewarding journey. Embrace the future of customer service with confidence, knowing that you are well-equipped to handle the legal challenges that come with it. 
