
Artificial intelligence is playing a growing role in the tenant screening process across the U.S., promising faster and more detailed evaluations of prospective renters. However, as these tools gain popularity, concerns are mounting among experts and advocates that AI may reinforce discrimination, especially when transparency and human oversight are lacking. In an increasingly tight housing market, these screening systems could mean the difference between stable housing and repeated rejections.
Millions Remain Invisible to Traditional Credit Systems

According to Snappt, a PropTech company specializing in AI-based tenant screening, 28 million adults in the U.S. are “credit invisible,” and another 21 million are “unscorable.” Many of these individuals are financially responsible but use nontraditional financial tools like debit cards or peer-to-peer apps instead of credit cards. This is especially true for younger renters, including many from Gen Z, who are often missed by legacy credit-based evaluations.
Beyond the Credit Score: A Wider Financial Lens

Snappt’s platform aims to address this issue by evaluating more than just credit scores. It looks at a renter’s overall financial health, including rent-to-income ratios, cash flow, account stability, and expense habits. This allows leasing agents to assess applicants based on a fuller financial picture. For example, a gig worker with a steady income and no overdraft history might qualify where traditional systems would have failed them.
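To make the idea concrete, the sketch below shows how an affordability check built on cash flow rather than credit history might combine these signals. It is a simplified illustration, not Snappt’s actual model; the thresholds, field names, and the screen_applicant function are assumptions chosen for demonstration.

```python
from dataclasses import dataclass

@dataclass
class ApplicantFinancials:
    monthly_rent: float
    monthly_income: float        # verified deposits, not self-reported
    monthly_expenses: float
    overdrafts_last_12mo: int
    months_of_account_history: int

def screen_applicant(a: ApplicantFinancials,
                     max_rent_to_income: float = 0.35,
                     min_history_months: int = 6) -> dict:
    """Return simple affordability signals for a leasing agent to review."""
    rent_to_income = (a.monthly_rent / a.monthly_income
                      if a.monthly_income else float("inf"))
    net_cash_flow = a.monthly_income - a.monthly_expenses
    flags = []
    if rent_to_income > max_rent_to_income:
        flags.append("rent-to-income above threshold")
    if net_cash_flow < a.monthly_rent:
        flags.append("monthly cash flow below rent")
    if a.overdrafts_last_12mo > 0:
        flags.append("recent overdrafts")
    if a.months_of_account_history < min_history_months:
        flags.append("short account history")
    return {"rent_to_income": round(rent_to_income, 2),
            "net_cash_flow": net_cash_flow,
            "flags": flags}

# Example: a gig worker with steady deposits and no overdrafts passes
# even though a credit-score-only check might reject them.
print(screen_applicant(ApplicantFinancials(1500, 5200, 3100, 0, 24)))
```

The point of a sketch like this is that the output is a set of signals for a leasing team to weigh, not an automatic accept-or-reject verdict.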
Fraudulent Documents on the Rise

Fraudulent rental applications have become more common in recent years. A Snappt survey found that over 85 percent of property managers had encountered fake pay stubs or altered financial records. In response, Snappt partners with financial data services like Finicity and Argyle to directly verify income from banks, gig platforms, and payroll systems, with user consent. This verification process helps flag inconsistent data and reduces the risk of fraud.
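As a rough illustration of the consistency check described above, the snippet below compares self-reported income against deposits already retrieved (with consent) from a bank or payroll aggregator. It is a generic sketch, not the actual Finicity or Argyle integration; the income_consistent function and its tolerance value are assumptions.

```python
def income_consistent(stated_monthly_income: float,
                      verified_deposits: list[float],
                      tolerance: float = 0.10) -> bool:
    """Compare self-reported income against bank-verified monthly deposits.

    Flags the application when the stated figure exceeds the verified
    average by more than `tolerance`.
    """
    if not verified_deposits:
        return False  # nothing to verify against; escalate to manual review
    verified_avg = sum(verified_deposits) / len(verified_deposits)
    return stated_monthly_income <= verified_avg * (1 + tolerance)

# A pay stub claiming $6,000/month against three months of ~$4,200 deposits
print(income_consistent(6000, [4150, 4300, 4180]))  # False -> flag for review
```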
Balancing Data With Human Judgment

Snappt emphasizes that its technology does not make final decisions about applicants. Instead, it equips leasing teams with reliable data to make informed choices. The importance of maintaining human oversight is illustrated by one tenant who was initially flagged for low credit caused by a divorce but, after further evaluation, went on to be a reliable resident for eight years.
Widespread Use of Algorithms With Limited Oversight

A recent survey by TechEquity Collaborative revealed that two-thirds of landlords in California use automated screening tools. Of these, about 20 percent pay for services that generate predictive risk scores, and 37 percent rely solely on the system’s recommendation without reviewing supporting data. Alarmingly, only 3 percent of renters surveyed knew which company had provided the screening report that may have led to their denial.
Federal Scrutiny and Legal Action

The growing role of AI in tenant screening has drawn increased regulatory attention. In 2023, the U.S. Department of Housing and Urban Development (HUD) issued guidelines urging property owners to incorporate human review into AI-assisted decisions. That same year, the Consumer Financial Protection Bureau fined TransUnion $23 million over inaccurate data used in tenant screenings. Several lawsuits followed, including one involving CoreLogic and another over an AI chatbot that rejected a housing inquiry.
Regulatory Backlash Under New Administration

With the new administration in place, industry lobbying has intensified. The National Apartment Association and the National Multifamily Housing Council recently sent a letter to President Trump requesting the rollback of over 30 housing regulations. One specific request was the withdrawal of HUD’s guidance on AI use in screening, with the groups arguing that such policies raise development costs and slow housing availability. Without someone to contextualize the data, however, seasonal income fluctuations or one-off expenses can wrongly flag an applicant as risky.