“Viewpoint Diversity” & Fake News Take Center Stage While The “Election Of A Lifetime” and Google v. Oracle Loom

The September 2020 US Policy Update



Republicans Stress Support For “Viewpoint Diversity”

Senate Republicans introduced the Online Freedom and Viewpoint Diversity Act on September 9th. The bill would narrow the scope of the liability protection platforms enjoy for “good Samaritan” screening of offensive material. Proponents say the bill is aimed at addressing alleged anti-GOP bias on social media; Democrats, however, are concerned that passing it would force platforms to host misinformation (including COVID-related misinformation) and other unsavory content rather than exercise their discretion to remove it.

Silicon Valley Aims To Play A More Active Role Against Fake News

Facebook recently announced that it will halt all new election ads a week before Americans cast their ballots on November 3rd. Further, it will ban post-election political ads that declare a winner before one has been formally announced. A CNBC report has also confirmed that Facebook’s independent content oversight board, sometimes called the company’s “Supreme Court,” still plans to launch before the 2020 US elections. Announced in 2018, the board aims to resolve moderation disputes on both Facebook and Instagram.

Google has also announced a temporary ban on political ads, specifically those referencing the 2020 election outcome after November 3rd. The ban covers Google’s ad platforms as well as its video site YouTube. These moves by the two major platforms are intended to prevent premature claims of victory by candidates. This is especially important given that the rise in mail-in ballots will likely delay election results.


The Case Is Submitted – Google v. Oracle SCOTUS Case Begins

Developers seldom join a U.S. Supreme Court case. This one could upend everything we have learned about how code is written, and cost developers money. With only eight justices listening, Google and Oracle have each made their final pitch in the years-long battle to determine how copyright law applies to APIs. Google’s position is that APIs either can’t be copyrighted or that copying one is “fair use” and thus allowed. Oracle, in turn, contends that APIs are covered by copyright, that Google copied them, and that Google therefore owes it $9B+. What does this mean for the developer community? Does using an API in your code leave you open to paying licensing fees and royalties down the road? You can read more of our work on the case from our CEO & President Bruce Gustafson here.
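For developers unfamiliar with the distinction at the heart of the case, here is a minimal sketch of “declaring code” versus “implementing code.” The names are invented for illustration (the actual dispute concerns the much larger Java SE API), and Python is used here purely for readability:

```python
# A toy "original library": the API is the function's name, parameters,
# and documented behavior (the "declaring code").
def max_of(a, b):
    """Return the larger of a and b."""
    return a if a >= b else b

# A reimplementation that copies only the declaration (same name shape,
# same parameters, same documented behavior) while the body (the
# "implementing code") is written independently. This is roughly the kind
# of copying at issue in the case.
def max_of_clone(a, b):
    """Return the larger of a and b."""
    return b if b > a else a
```

Callers can’t tell the two apart, which is why compatible reimplementation has long been common practice; whether copying the declarations nonetheless requires a license is exactly what the Court has been asked to decide.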

Facebook Would Like You To Check Your Data Use Or Risk Losing API Access

On September 10th, Facebook announced it would be launching its much-anticipated Data Use Checkup. Facebook described the launch as “a new annual process for developers to ensure API access and data use to comply with the Facebook Platform Policy.” Developers will have 60 days to review their permissions and API access and to confirm that their data use complies with the Facebook Platform Terms and Developer Policies, or risk losing their API access.

They also released the following guidelines to prepare developers for the process:

  • Update contact information in Notification Settings

  • Assign an app admin for each app within your organization

  • Audit apps, removing any apps no longer needed

  • Review approved permissions and features your apps have access to, removing any your app no longer uses
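The permissions-review step above can be sketched in code. The Graph API’s real /me/permissions endpoint returns a payload of the shape used below; the helper function and sample data are invented for illustration and are not Facebook code:

```python
def unused_permissions(permissions_response, in_use):
    """Return granted permissions the app no longer uses and should drop."""
    granted = {
        entry["permission"]
        for entry in permissions_response["data"]
        if entry["status"] == "granted"
    }
    return sorted(granted - set(in_use))

# Sample payload shaped like a Graph API /me/permissions response.
sample_response = {
    "data": [
        {"permission": "email", "status": "granted"},
        {"permission": "user_posts", "status": "granted"},
        {"permission": "user_friends", "status": "declined"},
    ]
}

# If the app only uses "email" today, "user_posts" is flagged for removal.
```

Running this kind of check before the 60-day deadline makes the manual review in App Dashboard much faster.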

Intermediary Liability & CDA Section 230

Sen. Graham Wants To Modernize Your Online Content

On September 21st, Senator Lindsey Graham (R-SC) introduced the Online Content Policy Modernization Act (S. 4632). The bill’s stated purpose is to “…amend title 17, United States Code, to establish an alternative dispute resolution program for copyright small claims, to amend the Communications Act of 1934 to modify the scope of protection from civil liability for “good Samaritan” blocking and screening of offensive material, and for other purposes.” The bill would reform Section 230 to provide an “alternative resolution for copyright disputes.” Critics have noted that it duplicates many of the proposals in the aforementioned Online Freedom and Viewpoint Diversity Act.

Privacy & Encryption

Has California Gone Too Far? Or Not Far Enough?

California continues its privacy debate with the upcoming Prop 24 ballot measure. The initiative asks voters whether to expand the recently enacted California Consumer Privacy Act (CCPA); its backers don’t think the CCPA goes far enough to protect consumers. Alliance Opinion: While the CCPA is far from perfect, shouldn’t California take a few months to enforce it and allow companies to adapt before moving the compliance target yet again?

Portland Banishes Facial Recognition

Portland, Oregon passed the strongest facial recognition bans in the country this past month. The laws ban the technology both in privately owned places and in those owned by the city. Individuals whose rights are violated by a non-complying party have the right to seek recourse and sue that entity. Cities across the country, including San Francisco and Boston, have already implemented city-wide bans on facial recognition software. The new rules go into effect in January 2021.

Accountability Office Releases Airport Facial Recognition Report 

The U.S. GAO (Government Accountability Office) has released a 101-page report examining the use of facial recognition technology in airport and border enforcement. The U.S. Customs and Border Protection (CBP) Biometric Entry-Exit Program and the Transportation Security Administration (TSA) facial recognition field tests were the focuses. The U.S. GAO released the following overview of the report: 

“This report addresses (1) the status of CBP’s deployment of FRT, (2) the extent to which CBP has incorporated privacy protection principles, (3) the extent to which CBP has assessed the accuracy and performance of its FRT, and (4) the status of TSA’s testing and deployment of FRT and how TSA has incorporated privacy protection principles. GAO conducted site visits to observe CBP’s and TSA’s use of FRT, which were selected to include all three travel environments—air, land, and sea; reviewed program documents; and interviewed DHS officials.”

On the CBP’s usage metrics, “Testing found that air exit exceeded its accuracy goals—for example, identifying over 90 percent of travelers correctly—but did not meet a performance goal to capture 97 percent of traveler photos because airlines did not consistently photograph all travelers.”

On the Transportation Security Administration (TSA) pilot tests, “…given the limited nature of these tests, it is too early to fully assess TSA’s compliance with privacy protection principles.”

Finally, the GAO issued five recommendations to CBP:

  1. “Ensure privacy notices are complete,”

  2. “ensure notices are available at locations using FRT,”

  3. “develop and implement a plan to audit its program partners for privacy compliance,”

  4. “develop and implement a plan to capture required traveler photos at air exit,”

  5. “and ensure it is alerted when air exit performance falls below established thresholds.”

DHS concurred with the recommendations.

The full report and more details can be found at the U.S. GAO site here. 

Anti-Encryption Bill, EARN IT Act, Earns Coalition Of Derision 

A coalition letter against the EARN IT Act has gone out, citing First Amendment concerns and questioning, on Fourth Amendment grounds, whether the bill would actually accomplish what it sets out to do. Developers Alliance has spoken out against the EARN IT Act in the past due to the negative impact it would have on necessary encryption.

Foreign Apps

The Chinese-owned apps saga boiled over in September and is still a moving target. Legislators have spent the last few months in a frenzy over grave security concerns around the apps (most notoriously TikTok and WeChat), notably allegations that the platforms share user data with the Chinese government. President Trump issued an executive order banning the apps from operating in the United States.

#SaveTikTok Takes Off

In an effort to #SaveTikTok, the platform has been exploring a variety of avenues to ease the administration’s concerns, including a potential deal in which American companies Oracle and Walmart would own a share of the platform to ensure oversight. In the courts, the fight continues, with both WeChat and TikTok suing the administration over the executive order’s impact on their users and business practices. As for the apps themselves? While they would be effectively out of business in the U.S. if the executive order were to go into effect, downloads have surged in the weeks following the announcement as consumers fear they will no longer be able to get them.

And Your App’s Country Of Origin?

After much examination of applications like TikTok and WeChat, a new piece of legislation, the American Privacy Protection (or APP) Act (S. 4669), would require software developers to list the country of origin for their applications. The bill, introduced September 23rd by Senators Rick Scott and Catherine Cortez Masto, would have the FTC enforce the mandate. Application marketplaces and platforms, such as Apple’s App Store, would be required to add this information to application listings, as well as disclose where the information the app gathers is stored.


Would You Describe Your AI As “Excellent?” The Center For AI Excellence Is Here To Help

On September 14th, the House of Representatives passed the AI in Government Act of 2019 (H.R. 2575). Introduced by Rep. Jerry McNerney (D-CA-9) in 2019, the bill seeks to establish an “AI Center of Excellence” within the General Services Administration. The center would “advise and promote the efforts of the federal government in developing innovative uses of artificial intelligence (AI) to benefit the public, and improve cohesion and competency in the use of AI.” On the 15th, the bill was placed on the Senate Legislative Calendar under General Orders.

Self Drive Act Looks To Provide Regulatory Tools 

On September 23rd, Representative Bob Latta (R-Ohio) introduced the Safely Ensuring Lives Future Deployment and Research In Vehicle Evolution Act, or SELF DRIVE Act (H.R.3388). The bill’s stated goal is to “provide standards for the testing and deployment of self-driving cars.” Additionally, it would provide the National Highway Traffic Safety Administration (NHTSA) with regulatory tools to adapt future roads for self-driving cars.

The National Artificial Intelligence Strategy Has Arrived 

On September 16th, Representatives Will Hurd (R-Texas) and Robin Kelly (D-Ill.) introduced a resolution calling for a National Artificial Intelligence Strategy. The Representatives noted that they spent nearly a year coordinating with “the expert stakeholders” and the Bipartisan Policy Center (BPC). The congressmen, along with the BPC, produced four white papers on the topic, covering Workforce, National Security, Research and Development (R&D), and Ethics. The resolution also included recommendations from these four white papers. Rep. Hurd stated that “the U.S. should take a global leadership role in artificial intelligence.”


The “Antitrust” Hearings Have Begun

On September 15th, the Subcommittee on Antitrust, Competition Policy, and Consumer Rights held the hearing “Stacking the Tech: Has Google harmed competition in online advertising?” The hearing was chaired by Sen. Mike Lee (R-UT). In a blog post after the hearing, Sen. Lee said that while he found it productive, he “was not satisfied with many of Google’s answers which felt scripted and unresponsive.” He later stated that his “…takeaway from the hearing was that Google has a commanding share of every piece of the online advertising market, and it appears to be using its leading market positions in search and online video to engage in tying on the advertiser side of its business, essentially forcing the vast majority of demand onto its platform. In turn, publishers are also forced to use Google’s platform because there really isn’t any other option.” 

The IoT Cybersecurity Improvement Act Passes House Unanimously

On September 14th, the Internet of Things (IoT) Cybersecurity Improvement Act passed the U.S. House of Representatives. The bill was sponsored by Representatives Robin Kelly (D-Ill.), Will Hurd (R-Texas), and more than two dozen bipartisan co-sponsors. It sets minimum security requirements, established by the National Institute of Standards and Technology (NIST), for all smart or “Internet of Things” devices purchased by the U.S. government. Additionally, if the providing private-sector vendor detects any vulnerabilities in a device that could be exploited maliciously, it must notify the federal government.


By Sarah Richard

Developers Alliance Policy Counsel & Head of US Policy

Related Content

STEM Forward to the AI and Sustainability Age

Developers Alliance Joins Global Coalition Backing WTO’s E-commerce Initiative

Developers Alliance Co-sign, Alongside Five Other Tech Industry Associations, a Joint Statement on the Latest Developments of the Negotiations on the Artificial Intelligence Act (AI Act)


©2020 Developers Alliance All Rights Reserved.