Trusted Advisor on Balancing Legal Risks with Business Strategies
The remarkable capacity of generative artificial intelligence (AI) to absorb information raises a critical question: what happens when we need these models to forget? Machine unlearning—the process of selectively removing specific data or knowledge from an AI model—presents a frontier fraught with technical challenges.
A user might demand that their copyrighted text, sensitive personal information, or even harmful biases be excised from a model’s memory. At the same time, concerns over data privacy and compliance with existing regulations have intensified. Laws such as Article 17 of the General Data Protection Regulation (GDPR), known as the “Right to Erasure,” and §1798.105 of the California Consumer Privacy Act (CCPA) empower individuals to demand that businesses retract and remove their data from public access and third-party databases. But unlike a simple database deletion, unlearning in AI is far from straightforward.
Unlike traditional databases, which can simply delete records, generative AI models incorporate learned data into their intricate weight distributions, making targeted removal extremely difficult. The most definitive method of unlearning—retraining the model from scratch—remains largely impractical due to the immense computational costs and time involved, particularly for large language models (LLMs) trained on massive datasets.
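The contrast can be sketched in a few lines of Python. This is a toy illustration, not a real unlearning technique: it fits an ordinary least-squares model and shows that deleting a record from the data store leaves the already-trained weights untouched, so the only exact remedy in this simple setting is retraining on the reduced data.

```python
import numpy as np

# Toy illustration (not an actual unlearning method): a trained model's
# weights blend every training record together, so "deleting" one record
# from the data store does nothing to the model itself.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + rng.normal(scale=0.1, size=100)

# Fit ordinary least squares on the full dataset.
w_full, *_ = np.linalg.lstsq(X, y, rcond=None)

# Deleting record 42 from the data store is a one-line operation...
X_del, y_del = np.delete(X, 42, axis=0), np.delete(y, 42)

# ...but w_full still reflects the deleted record; exact "forgetting"
# here means retraining from scratch on the reduced data, which is
# precisely what is impractical for large models.
w_retrained, *_ = np.linalg.lstsq(X_del, y_del, rcond=None)

print("weights shifted:", not np.array_equal(w_full, w_retrained))
```

For a three-feature linear model, retraining is instantaneous; the point of the sketch is that for an LLM with billions of parameters, the analogous retraining step is the prohibitively expensive one.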
Beyond the technical difficulties, machine unlearning raises ethical dilemmas. Who decides what should be forgotten? Biased or harmful content might need removal, but defining "harmful" is subjective and context-dependent. Moreover, how do we prove an AI model no longer "remembers" something when its outputs are probabilistic?
As AI continues to evolve, striking a balance between remembering and forgetting will be essential to ensuring these technologies remain trustworthy, compliant, and aligned with human values.
In a world where data is both fuel and liability, teaching AI to forget may prove as revolutionary as teaching it to learn.
This posting is offered for general information purposes only, and should not be taken as legal advice for any individual case or situation. It should not be copied or reproduced without the permission of Compliance Counsel Law Group.
Artificial intelligence (AI) has emerged as a transformative force across industries, driving innovation and efficiency. However, with its rapid adoption, a concerning practice has also surfaced: AI washing. This term refers to the misleading marketing strategy where companies exaggerate or falsely claim the integration or capabilities of AI within their products and services to attract attention or investment.
What Is AI Washing?
AI washing is similar to greenwashing, where businesses make unfounded environmental claims. In the context of AI, organizations might label their offerings as AI-powered even when such claims are superficial or entirely unfounded. This practice misleads customers and stakeholders by overstating the technological sophistication or potential of their products.
Why Is AI Washing a Problem?
AI washing erodes trust in genuine technological advancements and hinders informed decision-making by consumers and investors. When companies falsely claim to use AI, it distorts market dynamics and creates an uneven playing field for businesses that legitimately employ the technology.
Legal Risks Associated with AI Washing
While the promise of AI is immense, businesses must approach their use and representation of the technology with honesty and transparency. Avoiding AI washing is not only a matter of ethical practice but also crucial for mitigating significant legal and reputational risks. Companies must ensure that their claims about AI are accurate, substantiated, and align with the actual capabilities of their products and services.
Lenders are increasingly relying on artificial intelligence (AI), machine learning, and other digital technologies to determine credit eligibility. There is a concern that discriminatory lending practices involving AI can occur due to unintentional biases present in the data used to train AI models and in the design of algorithms. Because AI and big data make it possible to integrate large-scale information containing a greater number of data factors than ever before, does it open the door to factoring in too much data, thus creating the potential for bias? AI models learn from the input of historical data, and if certain variables are added to the mix, or are given more weight than others, AI decisioning may reflect unfair and discriminatory practices.
There are strong reasons to believe that with the use of large-scale information, AI will naturally rely on proxies in its decisioning that can discriminate on the basis of age, race or gender. Proxies are characteristics or data points that are not directly related to age, race or gender but can be used to make inferences or guesses about them. For example, whether someone owns a Mac or PC, or their type of phone, tablet, etc., can be an indicator of a person’s credit repayment patterns, but can also be an indicator of a person’s age, race or gender. These protected class attributes may also be derived from email domain preferences, such as whether the loan applicant uses Gmail, Hotmail, or an AOL account, with which data on payment performance can also be associated. Protected class attributes can also be associated with the types of store credit accounts reported on the applicant’s credit history. Although such data can be correlated to loan repayment patterns, it is also a proxy for age, race, and gender. Assigning certain types of retail store credit accounts a weighted factor can have disparate impact effects.
What about time of day? AI may decide that submitting a loan application at 2 A.M. is a sign of financial desperation, when it’s nothing more than an applicant who works the night shift. What if the applicant’s credit report shows numerous medical bills? Although paid current, AI may extrapolate medical bills to mean that the applicant’s health is in question and therefore that they are a poor credit risk, resulting in credit denial or less affordable loan terms. And then there is the scrutiny of first and last names and zip codes, which can produce potentially discriminatory connections. With the goal to create greater access to credit, AI could be programmed to assess many nontraditional supplemental factors for borrowers who do not have a strong credit history.
Could the use of expanded data factors have the opposite effect by resulting in discriminatory credit denials, or less attractive lending terms?
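The proxy effect described above can be made concrete with a small synthetic sketch. Everything here is invented for illustration—the "legacy email domain" feature, the rates, and the scoring rule—but it shows how a model that never sees the protected attribute can still fail the four-fifths (80%) adverse-impact guideline when it penalizes a correlated proxy.

```python
import numpy as np

# Synthetic illustration only: feature names, rates, and the scoring
# rule are invented. Two groups (a stand-in for a protected class);
# a "legacy email domain" proxy is far more common in group 1.
rng = np.random.default_rng(1)
n = 10_000
group = rng.integers(0, 2, size=n)  # synthetic protected attribute

# Proxy feature correlated with group membership (70% vs 20%).
uses_legacy_domain = rng.random(n) < np.where(group == 1, 0.7, 0.2)

# "Neutral" scoring rule: never references `group`, but penalizes
# the correlated proxy feature.
score = rng.normal(loc=600, scale=50, size=n) - 40 * uses_legacy_domain
approved = score >= 600

# Four-fifths rule check: ratio of the lower selection rate to the higher.
rate0 = approved[group == 0].mean()
rate1 = approved[group == 1].mean()
ratio = min(rate0, rate1) / max(rate0, rate1)
print(f"approval rates: {rate0:.2%} vs {rate1:.2%}; ratio {ratio:.2f}")
```

Because the penalized feature is unevenly distributed across the groups, the selection-rate ratio falls below 0.8 even though the protected attribute never enters the model—the statistical signature of disparate impact via proxy.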
Allowing an unlimited number of factors to be input into an AI model, under the belief that more data is better, may produce a snowball effect akin to the problems early credit scoring models experienced with the overuse of data factors, problems that continue to fuel accusations of bias. Just because a possible statistical relationship exists does not mean that it is predictive of anything.
When it comes down to it, is more data truly needed to make a predictive credit decision beyond factors such as credit history, income stability and ability to repay? Is there any doubt that AI credit decisioning, just like credit scoring models, may result in credit applications being denied that may have been justifiably approved using a smaller window into the credit applicant’s soul?
As artificial intelligence (AI) technologies rapidly transform industries, general counsel must address the legal and ethical implications of their use. AI accountability has become a crucial area of focus, as regulators and stakeholders demand transparency, fairness, and responsibility. Some questions GCs should be asking:
1. What are the legal risks associated with AI implementation?
AI can introduce a range of legal risks, including liability for biased outcomes, intellectual property violations, data privacy breaches, and consumer protection issues. GCs should assess these risks in collaboration with cross-functional teams, particularly focusing on AI's decision-making processes and its alignment with applicable regulations.
2. How is AI transparency ensured?
Transparency is vital for building trust and maintaining compliance. GCs should investigate whether the AI models and algorithms used are understandable and explainable to both stakeholders and regulators.
3. Is the organization mitigating AI bias?
AI systems can inadvertently perpetuate biases present in training data. GCs must ensure that appropriate checks are in place to detect and mitigate such biases.
4. What data governance practices are in place?
Since AI heavily relies on data, GCs need to ensure that robust data governance frameworks are in place. This includes evaluating the sources of data, how it is collected, stored, processed, and shared. Compliance with privacy laws, consent requirements, and third-party data sharing should be top priorities.
5. Are intellectual property rights properly managed?
With AI creating new content and products, GCs must navigate the complexities of intellectual property law. Who owns the IP generated by AI? Did the AI infringe on another’s IP?
6. How are AI decisions affecting stakeholders?
GCs should ensure that AI systems do not negatively impact customers, employees, or other stakeholders. Automated decisions in hiring, lending, or healthcare can result in unintended harm or bring regulatory scrutiny.
7. What regulatory developments are on the horizon?
GCs should stay current on emerging regulatory frameworks governing AI fairness and accountability.
General counsel must play a pivotal role in ensuring that AI technologies are deployed responsibly, ethically, and in compliance with the law. By asking the right questions, GCs can help their organizations navigate the complex legal landscape of AI while protecting against potential liabilities.
The Consumer Financial Protection Bureau (CFPB) recently released two issues for the Spring 2024 edition of its Supervisory Highlights.
Issue 32 cites Auto Loan Furnishers for allegedly deficient credit reporting practices.
Issue 33 focuses on Mortgage Servicers engaged in UDAAPs and regulatory violations while processing payments.
Supervisory Highlights, Issue 32, Spring 2024:
https://files.consumerfinance.gov/f/documents/cfpb_supervisory-highlights_issue-32_2024-04.pdf
Supervisory Highlights, Issue 33, Spring 2024:
https://files.consumerfinance.gov/f/documents/cfpb_supervisory-highlights_issue-33_2024-04.pdf
The Consumer Financial Protection Bureau issued its final rule to implement Section 1071 of the Dodd-Frank Act. Section 1071 amended the Equal Credit Opportunity Act to require financial institutions to collect and report certain data in connection with credit applications made by small businesses, including women- or minority-owned small businesses. The final rule was accompanied by a “Grace Period Policy Statement” and a “Statement on Enforcement and Supervisory Practices Relating to Small Business Lending Rule under the Equal Credit Opportunity Act and Regulation B.”
Key provisions of the final rule include:
“Covered Financial Institution” Definition. As defined in Section 1002.105(a), a “financial institution” is any entity that engages in financial activity and includes both depository institutions and non-depository institutions such as online lenders, platform lenders, lenders involved in equipment and vehicle financing (captive financing companies and independent financing companies), and commercial finance companies. Motor vehicle dealers are excluded. A “covered financial institution” is defined in Section 1002.105(b) as a financial institution that originated at least 100 covered credit transactions for small businesses in each of the two preceding calendar years.
“Small Business” Definition. Section 1002.106(b) defines a “small business” as having the same meaning as a “small business concern” in the Small Business Act and that had gross annual revenue in the prior year of $5 million or less. The gross revenues threshold is to be adjusted for inflation every five years.
“Covered Application” Definition. Section 1002.103(a) defines a “covered application” as “an oral or written request for a covered credit transaction that is made in accordance with procedures used by a financial institution for the type of credit requested.” A request from a small business for a refinancing, unless otherwise excluded by the final rule, is a “covered application” even if no additional credit is requested. Excluded from that definition pursuant to Section 1002.103(b) are (1) reevaluation, extension, or renewal requests on an existing business account, unless the request seeks additional credit, and (2) inquiries and prequalification requests. (However, even if a reevaluation, extension or renewal request on an existing business account includes a request for additional credit, the transaction is not counted for purposes of determining if the financial institution is a covered financial institution.)
“Covered Credit Transaction” Definition. Section 1002.104 defines a “covered credit transaction” as an extension of credit primarily for business or commercial (including agricultural) purposes, but excluding (1) trade credit, (2) transactions reportable under the Home Mortgage Disclosure Act (HMDA), (3) insurance premium financing, (4) public utilities credit, (5) securities credit, and (6) incidental credit. The exclusions for HMDA-reportable transactions and insurance premium financing were not included in the CFPB’s proposal. In addition to loans, lines of credit, and credit cards, “covered credit transactions” include merchant cash advances and other sales-based financing. Consistent with the CFPB’s proposal, “covered credit transactions” also do not include (1) factoring, (2) leases, (3) consumer-designated credit that is used for business or agricultural purposes, or (4) the purchase of an originated credit transaction, an interest in a pool of credit transactions, or a partial interest in a credit transaction such as through a loan participation agreement.
Data Points. The final rule, in Sections 1002.107 and 1002.108, requires a covered financial institution to collect and annually report to the CFPB data on covered applications from small businesses. The data that must be reported and collected consists of:
With respect to the demographic information described in the last two bullet points, a financial institution cannot require an applicant to provide such information. If the applicant fails or declines to provide the information necessary to report a data point, the financial institution must report the failure or refusal to provide the information. However, financial institutions are not required or permitted to report these data points based on visual observation, surname, or any other basis, which differs from the approach under the Home Mortgage Disclosure Act (HMDA).
Compliance Dates; Special Transitional Rules. The final rule will be effective 90 days after it is published in the Federal Register. However, compliance with the rule will not be required as of that date. Pursuant to the final rule, a financial institution is a covered financial institution subject to the rule’s data collection and reporting requirements if it originated 100 or more covered transactions in each of the prior two calendar years. However, in Section 1002.114(b), the final rule contains staggered compliance dates that differ depending on the number of covered originations a covered financial institution originated in 2022 and 2023. These dates are as follows:
· Originated at least 500 covered originations in both 2022 and 2023;
· Did not originate 2,500 or more covered originations in both 2022 and 2023; and
· Originated at least 100 covered originations in 2024.
The CFPB issued several materials concurrently with the final rule, including the Grace Period Policy Statement and the enforcement and supervisory practices statement noted above.
On June 4, 2024, the Consumer Financial Protection Bureau (CFPB) issued Consumer Financial Protection Circular 2024-03 (https://www.consumerfinance.gov/compliance/circulars/consumer-financial-protection-circular-2024-03/), warning that the use of unlawful or unenforceable contract terms and conditions for consumer financial products or services may violate the prohibition on deceptive acts or practices in the Consumer Financial Protection Act (CFPA).
Pursuant to the CFPA, a representation, omission, act or practice is deceptive when:
1. The representation, omission, act, or practice misleads or is likely to mislead the consumer;
2. The consumer’s interpretation of the representation, omission, act, or practice is reasonable under the circumstances; and
3. The misleading representation, omission, act, or practice is material. A representation is “material” if it involves information that is important to the consumer and, hence, likely to affect their choice of, or conduct regarding, a product.
The CFPB also referenced Compliance Bulletin 2022-05 (https://www.consumerfinancemonitor.com/2022/03/24/cfpb-issues-compliance-bulletin-on-consumer-reviews/) regarding consumer reviews. The CFPB reminded covered persons that they could be liable under the CFPA if they deceive consumers using form contract restrictions on consumer reviews that are unenforceable. The CFPB explained that “including an unenforceable material term in a consumer contract is deceptive, because it misleads consumers into believing the contract term is enforceable,” and that “disclaimers in a contract such as ‘subject to applicable law’ do not cure the misrepresentation caused by the inclusion of an unenforceable contract term.”
Aggressive enforcement is on the horizon now that businesses have had more than two years to comply with California’s landmark California Privacy Rights Act (CPRA) amendments, which took effect on January 1, 2023, with a look-back period extending their protections retroactively to January 1, 2022.
On August 24, 2022, California AG Rob Bonta announced a $1.2M enforcement action settlement with beauty retailer Sephora USA, Inc. to resolve claims that Sephora violated the California Consumer Privacy Act (CCPA). The California Attorney General’s Office alleged that Sephora did not notify consumers that the company was selling personal information and did not honor consumer requests to opt-out of those sales.
Under the CCPA, a “sale” is broadly defined. In addition to covering direct sales of data, it can also include arrangements where data providers receive some benefit from permitting a third-party access to data covered under the CCPA. The allegation contains descriptions of Sephora’s privacy policy – both text of the policy, and how a user navigates the policy. The Sephora allegation signals that in addition to actual business practices, the California AG is also looking for CCPA noncompliance in a business’s privacy policy.
California continues to lead the way in U.S. privacy law. It is critical that your privacy program practices and policy strictly comply with the substantive requirements of the CCPA to withstand regulatory scrutiny.
On July 29, 2022, the Consumer Financial Protection Bureau (CFPB) and Department of Justice (DOJ) issued a joint letter to auto finance companies, reminding them of the protections afforded to servicemembers and their dependents under the Servicemembers Civil Relief Act (SCRA).
The SCRA covers both auto lending and leasing. The SCRA covers debts incurred before active duty. The CFPB is authorized to address unfair, deceptive, or abusive practices related to auto financing for all members of the public, including servicemembers, under the Consumer Financial Protection Act.
The DOJ and CFPB continue to focus on ensuring the rights of servicemembers under the SCRA. It is advisable for auto finance companies to review their SCRA policy, procedures and compliance program to ensure compliance with the SCRA.
The California governor recently signed SB 362 (the “Act”), which will impose regulations on data brokers by allowing consumers to request the deletion of their personal data that was collected. The Act will allow the California Privacy Protection Agency (CPPA) to create an “accessible deletion mechanism” to make a streamlined method for consumers to delete their collected information available by January 1, 2026.
Among other amendments, businesses that meet the definition of a data broker will be required to register every year with the CPPA, instead of with the attorney general. Additionally, the Act requires data brokers to provide more information during its yearly registration, including: (i) if they collect the personal information of minors; (ii) if the data broker collects consumers’ precise geolocation; (iii) if they collect consumers’ reproductive health care data; (iv) “[b]eginning January 1, 2029, whether the data broker has undergone an audit as described in subdivision (e) of Section 1798.99.86, and, if so, the most recent year that the data broker has submitted a report resulting from the audit and any related materials to the California Privacy Protection Agency”; and (v) a link on its website with details on how consumers may delete their personal information, correct inaccurate personal information, learn what personal information is collected and how it is being used, learn how to opt out of the sale or sharing of personal information, learn how to access their collected personal information, and learn how to limit the use and disclosure of their sensitive personal information. Moreover, administrative fines for violations of the Act, payable to the CPPA, have increased from $100 to $200, and data brokers that fail to delete information for each deletion request face a penalty of $200 per day the information is not deleted.
The Act further requires that data brokers submit a yearly report of the number of requests received for consumer information deletion, and the number of requests denied. The yearly report must also include the median and mean number of days in which the data broker responded to those requests.
The Insights page is not intended as legal advice to any person or company but instead is provided for news information purposes only.