The federal government is already investigating Facebook. The question now is how much further it will go to regulate it.
Facebook founder and CEO Mark Zuckerberg is testifying before Congress Tuesday and Wednesday of this week to answer questions about Facebook’s past, current, and future actions, in the wake of the Cambridge Analytica scandal and revelations about the platform’s role in both privacy lapses and the dissemination of Russian disinformation during the 2016 presidential campaign.
But what Facebook will do on its own will likely no longer be enough. Calls have grown for the government to try to rein in the social media giant. Even Zuckerberg has acknowledged it might be time for regulators to step in.
Europe is taking some pretty significant steps to clamp down on Facebook and big tech at large. In the United States, where policymakers have traditionally been reluctant to regulate technology, it’s a bit more complicated.
Regulating Facebook is a complicated balancing act, multiple technology experts and Capitol Hill aides said in interviews. The company isn’t facing one scandal — it’s facing two: one about Russian disinformation and fake news, and one about user privacy and data security.
There are no easy answers about where Facebook’s responsibility for what’s shared on its platform begins and ends. In the United States, there’s also a First Amendment issue: when it comes to clamping down on what’s shared on social media, the government’s hands are actually far more tied than those of, say, Facebook or Twitter, which can act through their terms of service.
“The idea of what they could be doing and what they should be doing is the dividing line,” said Michelle De Mooy, the director for privacy and data at the Center for Democracy & Technology.
Facebook could face a “breathtaking” federal fine
Congress and federal agencies are already considering several avenues to rein in Facebook. Arguably the biggest is an investigation that’s already underway: The Federal Trade Commission is looking into the possible misuse of personal information in the Cambridge Analytica scandal, which involved sharing data from as many as 87 million users. At issue is whether Facebook violated a 2011 consent decree with the FTC over charges it deceived consumers about their privacy.
The settlement required Facebook to give consumers “clear and prominent notice” and obtain their consent before sharing their information. And it barred the company from making any further deceptive privacy claims.
If the FTC finds Facebook did violate the 2011 agreement, it could be in deep trouble.
“If there was a violation, and let’s assume there was, the FTC is in a position to punch very hard if it wants to,” said former FTC Commissioner Bill Kovacic. “The potential monetary penalties under the status quo would be extraordinary.”
He said that each violation of the existing settlement could be punished by a fine of $40,000 — per day, per user. For a single wronged user over the course of a month, that’s a potential $1.2 million fine. And compounded among potentially tens of millions of users across several weeks and months, the amount would be astronomical. “Now, would the FTC say, ‘Here’s a bill for $1 trillion?’ No. But short of that, could they impose a breathtaking civil penalty?” Kovacic said.
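To see how quickly that math compounds, here’s a back-of-the-envelope sketch. The $40,000 per-day, per-user figure comes from Kovacic’s comments above; the user counts and durations are purely illustrative assumptions, not anything the FTC has actually proposed:

    # Back-of-the-envelope penalty math. The per-day, per-user figure
    # comes from Kovacic above; the user counts and durations are
    # illustrative assumptions only.
    FINE_PER_USER_PER_DAY = 40_000  # dollars

    def potential_penalty(users: int, days: int) -> int:
        # Maximum theoretical fine for `users` affected over `days`.
        return FINE_PER_USER_PER_DAY * users * days

    # One wronged user over a month: the $1.2 million in the article.
    print(f"${potential_penalty(users=1, days=30):,}")           # $1,200,000

    # Compounded across tens of millions of users for a month, the
    # theoretical ceiling runs past a hundred trillion dollars.
    print(f"${potential_penalty(users=87_000_000, days=30):,}")  # $104,400,000,000,000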
He said the FTC would need to go through the Department of Justice to pursue a civil penalty, and it is unclear what the result would be. And, of course, we don’t yet know what the FTC will find, if anything. But the investigation could be a big one. “Think of this headline: biggest fine imposed on a business enterprise in the history of government regulation,” Kovacic said. “That would catch the attention of [Facebook] and of others in the industry.”
De Mooy warned that the results of the FTC investigation might not be so satisfying for harmed customers, even if it results in a blockbuster fine. “The problem with anything that the FTC does is that it’s not public,” she said. “If they come to a conclusion that there was a violation, that there were unfair and deceptive practices, we still wouldn’t know why they came to that conclusion.”
Congress wants to force more disclosure on Facebook ads
Meanwhile, there are other proposals on the table to impose new requirements on Facebook. Sens. Amy Klobuchar (D-MN), Mark Warner (D-VA), and John McCain (R-AZ) last October introduced the Honest Ads Act, which seeks to regulate online political advertising much the way it is already regulated on television, on the radio, and in print. The legislation has largely stalled, with Senate Rules Committee Chair Richard Shelby (R-AL) expressing little interest in holding hearings on it.
Sen. Roy Blunt (R-MO) is expected to take over for Shelby as chair of the Rules Committee, and the bill’s proponents hope he will express more of an interest in it, a Democratic aide told me.
After Cambridge Analytica and the continued information drip out of Facebook over what Russia did in 2016, public outcry may also push reluctant legislators to be more open to acting.
The Honest Ads Act would require social media companies to disclose which groups are running political advertisements and make reasonable efforts to ensure foreign governments and agents aren’t purchasing ads on their platforms. On Friday, Zuckerberg came out in support of the Honest Ads Act in a Facebook post, saying it would “raise the bar for all political advertising online.” Twitter announced its support for the legislation on Tuesday.
In the same post, Zuckerberg said Facebook would require political advertisers to verify their identity and location. Anyone who wants to run political or issue-based ads will need to be verified, and Facebook will label the ads and disclose who paid for them. Facebook unveiled a similar authorization requirement for election ads in October.
Meanwhile, the California legislature is seeking to clamp down on bots. Democratic state Sen. Bob Hertzberg and Democratic State Assembly member Marc Levine have both introduced legislation that would require social platforms such as Facebook and Twitter to identify automated accounts — essentially, a sort of sticker that says, “I’m a bot.”
“They built the car and they allowed the Russians to get in it, gave them the keys, and allowed them to go speeding on the highway. And then they wrecked that car into our democracy,” Levine told me recently. “So big tech needs to take responsibility for the software that they are creating.”
In November, California voters will also have the opportunity to vote on the California Consumer Privacy Act, a ballot initiative that would require companies to disclose what information they gather and how they share and sell it, and give people the right to tell businesses what they can and cannot do with their data. Facebook, Google, AT&T, Verizon, and Comcast oppose it.
The Federal Election Commission is also contemplating amending its rules for disclaimers on online political communications, including advocacy and fundraising. In late March, it put out two alternative proposals on the matter.
Also in March, a bipartisan group of 37 state attorneys general sent a letter to Zuckerberg “demanding answers” about the company’s business practices and privacy protections.
There’s a lot that could be on the table, but it’s not clear whether it will be
What else could the federal government do? Plenty of proposals are floating around, including mandating new guidelines on transparency and data portability (the ability of users to essentially own their data, have it deleted, and take it from one platform to another), adjusting a law to hold social media platforms liable for users’ content, and even potentially enacting comprehensive privacy legislation.
One possibility is broader legislation dealing with bots, perhaps modeled on the Better Online Ticket Sales Act (better known as the BOTS Act), a 2016 law meant to clamp down on ticket scalping and computer programs that sweep up large numbers of tickets in online sales.
Another is stricter privacy standards. The Obama administration proposed the Consumer Privacy Bill of Rights, outlining consumers’ rights to control their personal data and requirements for transparency and security. It failed to gain consensus twice, and, if anything, privacy has moved in the opposite direction: President Donald Trump in 2017 signed legislation repealing the FCC’s privacy protections for internet users.
“Efforts to set privacy standards have been ignored or even repealed,” said Rep. Frank Pallone (D-NJ), ranking member of the House committee Zuckerberg will testify before on Wednesday.
Advocates of a new comprehensive privacy law in the United States hope that revelations about Facebook’s practices might spur more sweeping change. “What’s become clear here is that this is not just a consumer protection issue,” said Rebecca MacKinnon, an internet freedom advocate and director of Ranking Digital Rights, a research initiative on global standards for freedom of expression and privacy in the digital space. “Privacy protection is a national security issue.”
“The idea of Congress passing a baseline privacy law is something we’ve championed,” De Mooy said. “It’s a good time to talk about what that actually looks like.”
Advocates of broader regulation for Facebook also suggest expanding the FTC’s authority and lightening some limitations on its jurisdiction. “The FTC has no authority over nonprofits, no authority over common carriers like telecommunications or transportation, airlines, banking,” Kovacic said. “To be a really effective national privacy regulator, you have to have a broad scope of authority over everything that faces the consumer in all contexts.”
Another option would be to revisit Section 230 of the Communications Decency Act, a 1996 law that shields online platforms from liability for content generated by their users. Essentially, the law says that Facebook is like a library, not a newspaper — if you go to a library and check out a book on how to build a bomb, the library isn’t liable for that. If a newspaper publishes an article explaining how to do it and encouraging it, that’s another story.
Congress just passed legislation that rolls back portions of Section 230 for cases of sex trafficking, which could open the door to further meddling with the law. Proponents of Section 230 warn that weakening it could open up a Pandora’s box of threats to internet freedom and actually produce the opposite of the intended effect; in the case of the sex trafficking bill, that could mean pushing illicit activity into even darker corners of the internet.
Sen. Ron Wyden (D-OR), who wrote Section 230, warned that changing it would “punch a hole in the legal framework of the open internet” in a speech on the Senate floor. One congressional aide said he believes that Section 230 will be the “central discussion point” on what the internet looks like over the next several years.
One big sticking point in regulating Facebook in America is the First Amendment
Part of what explains why the United States has been so reluctant to enact regulations on the internet and technology is the matter of free speech, as protected by the US Constitution. Simply put, there is a lot the government just can’t control when it comes to what people do and do not say online.
“From a starting point, we have to recognize particularly here in the United States, with the First Amendment, there is a real limit to what regulation, what government action can do around online content,” said Emma Llanso, director of the Center for Democracy & Technology’s free expression project. “There are certainly things that are illegal content, so that is more of an area where talking about regulations could make sense, but so much of what comes up in general discussion about this is out of reach of government action from the get-go.”
The Honest Ads Act and FEC guidelines may be able to do something about political advertising and transparency online. But when it comes to policing hate speech, propaganda, and even fake news, it’s just a different story.
And that’s where the companies themselves have to come in. The US Congress might not be able to keep people from bullying online, but Twitter’s terms of service can.
“People tend to confuse what the government can do with what these individual companies can do,” said Karen Kornbluh, a senior fellow for digital policy at the Council on Foreign Relations and former ambassador to the Organization for Economic Cooperation and Development under the Obama administration. “It’s not a First Amendment issue for companies to take down misleading ads, hate speech, or hoaxes.”
She added that companies often face political or financial pressure over what content they take down or leave up, and should be clearer in their terms of service about what they remove and why.
Llanso said getting more transparency from platforms about their content moderation practices, including the volume and scope of material being flagged and taken down, could help shape policy prescriptions as well. “Until we get better information into the public discourse about how these platforms are shaping the information environments that they control, we are sort of talking about policy options in the dark.”
She concurred that given limits to government control of free speech, companies do have more freedom to police what’s out there. “Any platform that I can think of has a content policy that is more restrictive than the First Amendment would permit the government to do,” she said.
There are, of course, risks to putting so much of the onus on companies to act and turning them into the arbiters of what is and isn’t allowed online. MacKinnon said she’s worried about a setup that turns companies “basically into private judges, juries, and executioners when it comes to online speech.”
But if Facebook, as it says, wants to do better, that’s certainly one way to start. Zuckerberg told reporters last week that the company currently has 15,000 people working on security and content review and plans to have 20,000 by the end of the year.
Antonio García-Martinez, who worked on Facebook’s targeted ads from 2011 to 2013, pointed out to me recently that Facebook already does plenty of self-policing, and that in political advertising specifically it uses a strategy similar to the ones it applies in other advertising arenas. Alcohol ads, for example, are only shown to users of a certain age in the United States, another age in Spain, and not at all in Saudi Arabia, where alcohol is illegal.
“Facebook actually goes in and programmatically figures out what’s an alcohol ad and then applies business logic to it saying what’s allowable,” García-Martinez said. “And if you break the rules enough, the account gets frozen.”
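For a sense of what “programmatically applying business logic” can look like in miniature, here’s an illustrative sketch in Python. The country rules and age thresholds below are invented assumptions for the example, not Facebook’s actual policy engine:

    # Illustrative rule-based ad gating, in the spirit of what
    # García-Martinez describes. These rules are invented for the
    # example and are not Facebook's actual configuration.

    # Minimum viewer age for alcohol ads by country; None means the
    # category is banned outright in that market.
    ALCOHOL_AD_RULES = {
        "US": 21,
        "ES": 18,    # assumed threshold for Spain
        "SA": None,  # alcohol is illegal in Saudi Arabia
    }

    def may_show_alcohol_ad(country: str, viewer_age: int) -> bool:
        # Apply the per-country rule to a single viewer. Unlisted
        # countries fall through to None, i.e. the conservative default
        # of not showing the ad at all.
        min_age = ALCOHOL_AD_RULES.get(country)
        if min_age is None:
            return False
        return viewer_age >= min_age

    print(may_show_alcohol_ad("US", 19))  # False: under 21
    print(may_show_alcohol_ad("ES", 19))  # True: 18 and over in Spain
    print(may_show_alcohol_ad("SA", 30))  # False: banned entirely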
The hard truth is that the horse is already out of the barn
Zuckerberg’s congressional testimony and his and other Facebook executives’ mea culpa media blitz is perhaps the start of taking a hard look at privacy protection, data, and information manipulation online. But there’s a long road ahead — and a lot of what’s already been done is, well, done.
Facebook has admitted the majority of its users’ information has been accessed by third parties, that it scans messages, and that it keeps pretty much all of your data forever. It just announced it found more evidence of Russian troll accounts. Zuckerberg last week said uncovering nefarious content is going to be a “never-ending battle” and that you “never fully solve security.”
“I think we will dig through this hole, but it will take a few years,” Zuckerberg recently told Vox’s Ezra Klein. “I wish I could solve all these issues in three months or six months, but I just think the reality is that solving some of these questions is just going to take a longer period of time.”
The issue is, of course, a lot of data is already out there, the 2016 election is already over, and consumers’ trust in Facebook has already been breached.
“There is an element that it’s too little, too late. But we still want Facebook to make some changes, and we will ask Zuckerberg some questions about what changes he’s making. But we also have to realize that a lot of this information is already out there, and so that has to be thought about, in terms of regulation and legislation — what do we do going forward, but also what do we do for the stuff that’s out there?” Rep. Pallone said. “I don’t know that there’s an easy answer out there.”