The bill would apply only to companies that either make more than $50 million per year or hold data on at least one million people or devices. Small businesses would, in principle, be exempt.
The senators framed this as a civil rights issue and pointed to recent incidents as examples. Facebook is still facing a housing discrimination charge after it let advertisers exclude people in ways that could be racist or sexist, while Amazon shut down an automated recruiting tool after it was found to discriminate against women. Facial recognition has well-documented bias problems, too. Algorithmic discrimination is a modern form of practices like "real estate steering" (where black couples were discouraged from buying homes in certain neighborhoods), Sen. Booker said, but more insidious because it's "significantly harder to detect." In theory, the bill would prevent companies from ignoring the potential for that kind of bias.
There's no guarantee the bill will become law, and there are questions about how well it would work in practice. Would a company still be held responsible if a biased algorithm slipped through the cracks? What if an assessment fails to catch potentially biased data? As helpful as the legislation might be in ensuring fairness, it could also create uncertainty and added overhead that might not always be necessary.