April 2023 US Policy Update
Not Every Social Problem Has a Tech Solution
Across the world – in the United States, Europe and the United Kingdom, Australia, and elsewhere – laws are being written and implemented that require technology companies, their staff, and software developers to secretly surveil their customers on the government’s behalf. This is most frequently justified as part of the battle against child sexual abuse material (CSAM), something ethical societies universally abhor. As a developer advocacy organization, it’s not our role to take sides on broad social issues that don’t uniquely impact the community we represent. Balancing the loss of consumer privacy and security against improvements in online CSAM detection isn’t a developer issue but an all-of-society issue, and everyone should be free to weigh that balance for themselves. But when the argument is distorted by deliberately mischaracterizing what is technically possible, we need to speak up.
There is not – and never will be – a technology that can inspect and detect CSAM in online systems while also ensuring user privacy and data security. In end-to-end encryption (E2EE), the sender has a key, and the receiver has a key. If anyone else has a key, there can never be confidence that the message is authentic or secure. It’s as simple as that.
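To make the key model concrete, here is a minimal sketch of that property using the PyNaCl library (our illustrative choice; nothing in this update prescribes a particular implementation). Production E2EE protocols such as Signal’s add key agreement, forward secrecy, and identity verification on top of primitives like these:

```python
# Minimal end-to-end encryption sketch using PyNaCl (pip install pynacl).
# Illustration only, not any specific product's protocol.
from nacl.public import PrivateKey, Box

# Each party generates a key pair; private keys never leave their devices.
alice_private = PrivateKey.generate()
bob_private = PrivateKey.generate()

# Alice encrypts with her private key and Bob's public key.
sending_box = Box(alice_private, bob_private.public_key)
ciphertext = sending_box.encrypt(b"a message only Bob can read")

# Bob decrypts (and authenticates) with his private key and Alice's public key.
receiving_box = Box(bob_private, alice_private.public_key)
assert receiving_box.decrypt(ciphertext) == b"a message only Bob can read"

# A server, a scanner, or any other party lacking one of the two private
# keys cannot decrypt or forge the message. Granting a third party a key
# of its own is, by definition, a back door.
```

The sketch illustrates a structural point: any scanning or escrow scheme requires handing a third party the equivalent of a private key, and nothing in the mathematics can limit what that key is then used for.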
More Deliberation Needed Before Senate Considers CSAM Bills
Devs work hard to make the internet a safer place. They have long operated under the premise that consumer trust is paramount, since users won’t pay for services they distrust. This mindset protects users and helps create a more robust internet where devs can design and deploy new products and services that make our lives healthier, more convenient, and more prosperous. Make no mistake: the entire app ecosystem believes those behind child exploitation should be prosecuted to the fullest extent of the law, and that efforts by Congress to curb this exploitation should target the perpetrators. That’s why the Senate Judiciary Committee’s consideration of the EARN IT Act and STOP CSAM Act is so troubling.
If the Senate Judiciary Committee passes these bills, they will head to the Senate floor for consideration. Taken together, they place a significant burden on the app ecosystem that will hamper our industry’s ability to protect children. Users have rightly called for enhanced security features to ensure their identities, online habits, personal information, and more remain safe and secure. Devs have responded by employing end-to-end encryption in their products to protect users and maintain consumer trust. The government can’t have it both ways – it can’t prod devs to make sure their products are secure while also demanding an open back door. These concerns, along with the bills’ removal of Section 230 liability safeguards – a bedrock of the internet – point to serious consequences for the internet and the users who depend on it.
*Update – The EARN IT Act was passed out of the Senate Judiciary Committee with unanimous support. The bill now heads to the Senate floor for consideration. It is not yet known when that vote will take place. Senator Lindsey Graham (R-SC) said if the bill does not pass in the Senate, he will introduce legislation to repeal Section 230 of the Communications Decency Act. The Developers Alliance will continue its work to highlight the consequences of the EARN IT Act and repeal of Section 230. The STOP CSAM Act has not yet been considered by the Senate Judiciary Committee.
Pair of Children’s Privacy Bills Introduced in Senate
Senators Richard Blumenthal (D-CT) and Marsha Blackburn (R-TN) introduced the Kids Online Safety Act (KOSA). The bill, according to a press release, is “comprehensive bipartisan legislation to protect children online and hold Big Tech accountable.” The bill would 1) require social media platforms to enable minors to protect their data and disable algorithmic recommendations; 2) enable parents to report harms to children on social media platforms; 3) make social media platforms liable for harmful content they host; 4) require social media platforms to perform annual independent audits to gauge their risks to minors; and 5) give academic and public interest groups access to datasets from social media platforms for research purposes.
A summary of the legislation can be found here. The bipartisan bill has widespread support, with more than two dozen Senate cosponsors. Given that level of support, it is likely to be taken up in committee and could reach the full Senate.
Similarly, Senators Ed Markey (D-MA) and Bill Cassidy (R-LA) introduced the Children and Teens’ Online Privacy Protection Act (COPPA 2.0). The bill would extend and update the Children’s Online Privacy Protection Act of 1998. The newly introduced bill would 1) broaden who is protected by the old law, raising the covered age from 13 to 16; 2) prohibit apps and websites from sending personalized advertisements to minors; 3) create an “Eraser Button” for parents and minors, enabling them to delete personal information where technologically feasible; 4) establish a “Digital Marketing Bill of Rights for Teens” that limits the collection of teens’ personal information; and 5) create a “Youth and Marketing Privacy Division” at the Federal Trade Commission.
Much like KOSA, this legislation is likely to enjoy broad bipartisan support. We anticipate it will be taken up in committee and could also reach the full Senate.
For our part, we remain very concerned that both bills could create more privacy problems than they solve. Determining who is a minor via an age verification mechanism is technically difficult, would increase the amount of personally identifiable information devs must collect to comply, and would ultimately lead to verification of users of all ages. We believe devs and users are better served by a single comprehensive privacy regulation than by multiple regulations covering various age groups.
Consensus at House Hearing That New Privacy Regs Could Benefit Industry
The House Innovation, Data, and Commerce Subcommittee held a hearing titled “Addressing America’s Data Privacy Shortfalls: How a National Standard Fills Gaps to Protect Americans’ Personal Information.” The hearing covered an array of topics, but there was general consensus among both the members of Congress and the witnesses that a new comprehensive privacy law would benefit not just consumers but industry as well. Witnesses were quick to point out that the current patchwork of state privacy laws firms have to navigate is not sustainable. A nationwide, comprehensive law (one that also covers minors) was the approach witnesses highlighted as most likely to spur innovation and growth while, importantly, protecting consumers.
Other key themes that emerged during the hearing were data minimization and age verification. There was general agreement that data minimization, while possibly a difficult pill for devs to swallow, is needed to protect consumers and grow user trust. Witnesses also noted that 1) verifying ages online is difficult; and 2) employing age verification mechanisms could ultimately create more privacy headaches for consumers.
This was the sixth privacy hearing held in the House of Representatives this year alone. There is a strong push to reintroduce and pass the American Data Privacy and Protection Act this Congress. The bill passed the House Energy and Commerce Committee last summer with broad bipartisan support, but never received a floor vote. Opponents of the bill in both the House and Senate believe the bill is too weak. The Alliance applauded committee passage last summer as a “solid first step.”
White House Huddles with Tech Execs to Chat AI
The White House hosted a summit to discuss AI risks. The summit was attended by executives from Alphabet, Microsoft, OpenAI, and Anthropic. It was held to “emphasize the importance of driving responsible, trustworthy, and ethical innovation with safeguards that mitigate risks and potential harms to individuals and our society.” Ahead of the gathering, the White House announced an initiative that it believes will drive innovation and the responsible growth of AI. The initiative:
- Provides $140 million in new funding to the National Science Foundation to launch seven new National AI Research Institutes;
- Establishes public assessments of existing generative AI systems to allow this technology to be thoroughly evaluated by thousands of AI partners and experts; and
- Empowers the Office of Management and Budget to release draft policy guidance on AI.
FTC, Others Continue Misguided Attempts to Curb Innovation
The Federal Trade Commission, along with the Department of Justice, Consumer Financial Protection Bureau, and the U.S. Equal Employment Opportunity Commission released a joint statement detailing their authorities to enforce existing laws and regulations surrounding AI. The agencies have, at various times, outlined their concerns about the harmful uses of AI.
The FTC’s statement read in part:
“We already see how AI tools can turbocharge fraud and automate discrimination, and we won’t hesitate to use the full scope of our legal authorities to protect Americans from these threats. Technological advances can deliver critical innovation—but claims of innovation must not be cover for lawbreaking. There is no AI exemption to the laws on the books, and the FTC will vigorously enforce the law to combat unfair or deceptive practices or unfair methods of competition.”
The Developers Alliance and its members work hard to deploy products that are safe, practical, and dependable for their users. With those principles in mind, devs create products that grow consumer trust and, in turn, their business. Government agencies, including the FTC, have sought to issue top-down, heavy-handed regulations that fail to take into account consumer wants and needs, and how developers respond to them. Devs work hard to earn and keep consumer trust, and regulators should practice humility before levying new regulations that could hurt consumers, devs, and our global competitiveness.
Developer Workshop on AI Challenges at the World Summit AI (Americas) in Montreal
AI regulation remains in flux, with no firm decisions yet, but virtually every government plans to set its own rules. As an industry, what can we predict from the emerging draft rules, codes, and frameworks? On April 19th, the Developers Alliance presented a workshop at World Summit AI Americas in Montreal on “Three practical steps to handle future regulations.” The summit brought together AI leaders, academics, and researchers for conversations about the impact of AI on the products we consume as a society. The event also showcased keynotes and panel discussions on AI opportunities and challenges in health, security, economics, business, and governance.
Phil Dawson, Head of AI Policy at Armilla AI, provided insights into data models, the company’s quality assurance offering, and the alignment tools it is building for generative AI. Bruce Gustafson, CEO of the Developers Alliance, offered insights into assessing and mitigating potential failure modes. Developers raised questions about the hurdles of evaluating features, the mindset shift from traditional programming models, and the volume and distribution of data. Common practices were also discussed, including the Data Nutrition Label and the cost of storing and managing different versions of data, and participants proposed the need for a comprehensive standard covering the treatment of unwanted bias, the assessment of ML systems, and responsible AI with regard to ISO standards.