OLD GOATS with Jonathan Alter
Regulate Tech — Big Time
Ruminating with Roger McNamee about the dangers of “surveillance capitalism.”
Roger McNamee, 67, is a venture capitalist and big thinker who made a fortune with early investments in Facebook and other tech companies before famously turning against Silicon Valley in 2017. The following year he wrote Zucked: Waking Up to the Facebook Catastrophe. Earlier this month, I took part in a semi-regular Zoom call of old goats organized by Les Francis, who served as a senior official in Jimmy Carter’s White House. Former House Majority Leader Dick Gephardt, who co-chairs the bipartisan Council for Responsible Social Media, brought Roger on the call and he was fascinating. Excerpts:
My career has been blind and dumb luck. I started in the investment business on the first day of the longest bull market in history [August 20, 1982]. They assigned me to cover technology, which meant that I was like Zelig or Forrest Gump. I found myself repeatedly in the right place at the right time. I was in the room when Marc Andreessen presented Netscape, which became the first mass-market browser. I was in the room when Jeff Bezos presented Amazon, which became Amazon. I was there the day that Larry [Page] and Sergey [Brin] presented Google and then, later on, I got to be Mark Zuckerberg’s mentor [most momentously, when Zuckerberg took his advice and decided not to sell Facebook in 2006]. I've had so many highlights that it never occurred to me I would ever find myself on the other side.
What happened was, the culture of Silicon Valley changed. It started in the 1950s and ‘60s as an amalgam of the space program and the (kind of) hippie values of what eventually became Apple and Atari. Those two [cultures] were both very idealistic; they were about empowering people with technology, and I think that made Silicon Valley a uniquely aspirational part of American culture.
Then the financial crisis hit in 2008 and the Federal Reserve Board went to a policy of zero percent interest rates. This came at a particularly dangerous point [technologically] because, for the first time, you could make a product that touched almost every consumer in the world. Before that, there were never the technology resources (processing power, memory, storage, bandwidth) to do more than a very narrow thing. But suddenly, you could do whatever you wanted. There was access to data on every human being and the ability to process it in real time, and that allowed people to shift culturally from this notion of using technology to empower people to using technology to exploit human weakness. This was only possible because the Fed drove interest rates to zero, which meant that startups could raise unlimited amounts of money for increasingly preposterous ideas.
So Facebook and Google, which were already very successful, were the innovators who crafted a business model based on data. It's called “surveillance capitalism,” though you don't need to remember those words. The basic notion is this: Imagine that you’re Facebook or Google and you have three billion active users. They're all using your products constantly. And from that you learn a lot about them. But you don't learn everything you want to know. For that you need to go to Oracle and other data brokers, where you buy every financial transaction a person makes and even the minute-by-minute location from their cell phone, which you can get in real time. You can buy all of their medical tests and their prescriptions, anything that they do online, anything that they do in applications.
And so what Facebook and Google did was they built these gigantic models, first of the whole population, to find out the [general] patterns in people's behavior, and then, secondly, to analyze what they know [specifically] about this person so they can match them to the patterns of society.
“People are using Google because they're trying to get from one place to another or they're trying to get the answer to a question. They're using Facebook to reach their friends or family. It never occurs to them that the company that is providing that service is actually an adversary.”
If you think about this, it’s a huge change. People are using Google because they're trying to get from one place to another or they're trying to get the answer to a question. They're using Facebook to reach their friends or family. It never occurs to them that the company that is providing that service is actually an adversary. But that's what happened.
You’ve heard of behavioral economics, right? This notion that human beings are, in fact, not rational actors, that they very predictably make decisions in emotional ways, for instance placing too much weight on the most recent thing that happened to them. When you map this with data, you can identify what somebody is likely to do in any given situation. That's what Google and Facebook did. But when the price of app invention went to zero in 2009 because borrowing money was free, all of a sudden that notion got applied not just to social media, but by Uber and Lyft to take advantage of drivers, by DoorDash to take advantage of restaurants, by financial technology to take advantage of holes in banking regulation, all the way through cryptocurrency and artificial intelligence.
The problem with all of these things is culture. Silicon Valley realized that in a world with no rules and regulations, in a world where capital is free, in a world where there was unlimited data about every person available basically for free, they didn't need to develop apps to make people more productive. They could just manipulate them and capture value that way. So Silicon Valley over the last 14 years went from being the place that generated productivity to a zero sum, predatory, parasitic thing that has done unbelievable harm to democracy, to public health and to public safety. That's the problem.
“So Silicon Valley over the last 14 years went from being the place that generated productivity to a zero sum, predatory, parasitic thing that has done unbelievable harm to democracy, to public health and to public safety.”
So when somebody says, “Wait a minute, we cannot regulate tech!” they usually start by saying that this is about innovation. If you regulate it, the innovation goes away. That is wrong.
Engineering, as President Carter [an engineer by training] would tell you, is about optimizing in a world filled with constraints. The problem is that there haven't been any constraints on Silicon Valley. If the government told them, “You can't hurt people” or “You can't undermine democracy,” they would very quickly adapt to that and make very, very good products that do not hurt people and don't undermine democracy. But since there are no rules requiring them to protect people, they don't.
In fact, it's worse than that. Culturally, as engineers, they prize economic efficiency. And when you prize economic efficiency over absolutely everything else, things like democracy and the right to make your own choices are vulnerable. Democracy and self-determination are inefficient because they require deliberation. And so companies like Google and Facebook are actually culturally in opposition [to democracy].
During his 2012 reelection campaign, President Obama worked with Facebook on a project where Obama fans would give permission to the Obama campaign to then reach out to all of their friends.
I wrote about this in my second Obama book. The project was called “Targeted Sharing” and it helped Obama beat Mitt Romney.
It was also a violation of the consent decree with Obama’s own Federal Trade Commission, which required that anyone whose data was given away had to give advance permission.
In 2016, the Trump campaign took that idea and ran with it. And what Trump wound up doing was much more serious, though legal. His people found a company called Cambridge Analytica that had managed to marry 30 million voter files to Facebook user IDs — that’s one in eight voters. If you know anything about advertising, when you're trying to reach people, you start with a custom audience: the people you already know you want to reach. And then the platform helps you find everybody else who looks like that. When you have one in eight of the target audience, the level of precision with which you can target things is simply extraordinary. And Trump had the best custom audience that had ever been created in politics. They didn't use it to register people or get out the vote. They used it to suppress the vote of people unlikely to vote for Trump.
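The “custom audience plus lookalikes” mechanic McNamee describes can be sketched in a few lines. This is a hypothetical illustration of the general technique only, not any platform’s actual system; every user name, feature, and threshold below is invented. The idea: score each user in the population by similarity to the advertiser’s seed list, then target whoever scores high.

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Hypothetical feature vectors (all names and numbers are made up):
# each vector is a user profile assembled from purchased broker data.
seed_audience = [            # the "custom audience" the campaign already has
    [0.6, 1.0, 1.0, 1.0],
    [0.7, 1.0, 0.8, 0.9],
]
population = {               # everyone else on the platform
    "user_a": [0.65, 1.0, 0.9, 1.0],   # closely resembles the seed
    "user_b": [0.30, 0.0, 0.1, 0.2],   # does not
}

def lookalike_scores(pop, seed):
    """Score each user by average similarity to the seed audience."""
    return {u: sum(cosine(v, s) for s in seed) / len(seed)
            for u, v in pop.items()}

scores = lookalike_scores(population, seed_audience)
expanded = [u for u, s in scores.items() if s > 0.95]  # the "lookalikes"
print(expanded)  # ['user_a']
```

Real systems use far richer features and learned models rather than raw cosine similarity, but the leverage is the same: the larger and more representative the seed, the more precisely the expansion finds people who behave like it.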
You’re talking about disinformation like Hillary is dying, the pope endorses Trump, and lies about where and when people should vote.
It worked magnificently, and it [helped] get him elected. By the time 2016 was over, every politician in America realized that Facebook — if they put their thumb on the scale — could [help] determine the outcome of the election. Not necessarily directly, but at least indirectly. Facebook was undermining democracy but the only people in a position to legislate change were so hopelessly conflicted that they weren't going to do anything. This is what I discovered when I started trying to educate Washington in 2017.
Unfortunately, the incentives are misaligned in both Washington and in journalism. The big journalistic organizations continue to treat every new product of the technology world like it is somehow the next big thing. That’s how they're treating artificial intelligence today — as if this version of artificial intelligence is actually a good thing. It's not. It's mostly smoke and mirrors. It's mostly BS.
“The big journalistic organizations continue to treat every new product of the technology world like it is somehow the next big thing. That’s how they're treating artificial intelligence today — as if this version of artificial intelligence is actually a good thing. It's not. It's mostly smoke and mirrors. It's mostly BS.”
Meanwhile, we have minority rule in the United States, right? The Supreme Court changes the rules. Policymaking is moving from Congress to this combination of courts and state legislatures. And the question is, how do we fix it? The most important thing, in my opinion, as a starting place, is to recognize that America's tech industry has played an essential role in undermining democracy, undermining public health during a pandemic, undermining public safety, particularly with respect to children.
To me, the message is really simple: Democracy and the right to make your own choices. And the tech industry right now is in opposition to that. We have several avenues to go after them on national security. There's a big concern that TikTok is owned by China. TikTok is awful and I would ban them in a heartbeat if I had the ability to do so. But let's not kid ourselves. China can go to Oracle and all these other data brokers and get 15,000 to 20,000 data points on every American for something like $100,000. And there's nothing to keep Oracle from selling them that so they can then use that data to target people. Facebook and Google let them do that at will. If we're actually worried from a national security point of view, as we should be, we should be doing something about all these companies.
When Google sits there and says, “You can't regulate us because you need us to compete with China on AI,” I go, “That's nonsense. The way you compete against China is with aircraft, entertainment, carbonated beverages, and things derived from American values.”
The thing I would encourage you to take away from this conversation is the following: Regulating healthcare is complicated. Regulating financial markets and banking is complicated. Regulating tech is not complicated at all. We only need three things:
First, safety. You simply have to hold tech to the same standard we hold drugs and aircraft and cars. You’ve got to prove your thing is safe before you can put it in the market.
Second, privacy. You need to have a regulation that says, “I'm sorry, but you're not allowed to manipulate people. You cannot use their most intimate data in a third party commercial sense. You can only use data to deliver a specific service that they have requested.”
Third, competition. We need a way for alternatives to come to market, which is [currently] impossible.
So safety, privacy, competition. We've done this to industries for the last 140 years. There's no reason why we can't do it here. It will take hard work and voter engagement to get it done.
“So safety, privacy, competition. We've done this to industries for the last 140 years. There's no reason why we can't do it here. It will take hard work and voter engagement to get it done.”
Normal people don't understand how the technology works. And because they don't understand it, they have no clue what to do about it. The technology is really opaque and few of us are computer engineers.
These companies know more about human attention than anyone on Earth, right? Their business is grabbing your attention, and if they make you angry or afraid, they can hold your attention longer. So they have consciously gone out there and manipulated our perception of the industry to think three things: that it's complex, that it’s inevitable, and that there's only one path forward. None of these are true. Everything about this thing is Potemkin — a false impression created for financial gain and power.
You don't need to know anything about the technology to regulate. Let's take artificial intelligence. The key thing to understand is that ChatGPT or any of those large language models is only as good as the data on which it is trained. And this has been the dirty little secret of what is called AI from the beginning. Initially, there were three uses of AI that dominated: predictive policing, mortgage review, and resume review. In all three cases, huge harm has been done by these applications. So predictive policing results in massive over-policing of black and brown neighborhoods. Black-box AI allows them to have a patina of legitimacy in what is a demonstrably illegitimate exercise. The same thing turns out to be true in the banking industry, which is quite comfortable with digital redlining. Being able to blame it on a black box is more politically palatable. And it turns out that employers would rather not hire older people or women or people of color and the software guys have figured out how to [do that without much notice]. What you're looking at here is a class of technology where you actually don't need to understand how it works, just the consequences.
You ask, “Did this company take appropriate steps to ensure the elimination of bias and the elimination of harm?” And quite honestly, none of these people are doing that.
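McNamee’s point that a model is only as good as its training data can be shown with a deliberately tiny sketch. Nothing here resembles any real screening product; the groups, labels, and numbers are all invented for illustration. A trivial frequency-based “model” fit to biased historical hiring decisions reproduces that bias exactly, even though the code contains no explicit rule against any group — the harm comes in with the data.

```python
from collections import defaultdict

# Hypothetical training data: (group, hired?) pairs reflecting a biased history.
history = [
    ("young", True), ("young", True), ("young", True), ("young", False),
    ("older", False), ("older", False), ("older", False), ("older", True),
]

def train(rows):
    """Learn the historical hire rate per group -- nothing more."""
    counts = defaultdict(lambda: [0, 0])  # group -> [hired, total]
    for group, hired in rows:
        counts[group][1] += 1
        if hired:
            counts[group][0] += 1
    return {g: hired / total for g, (hired, total) in counts.items()}

model = train(history)
print(model)  # {'young': 0.75, 'older': 0.25}
# The arithmetic is flawless; the discrimination was already in the labels.
```

Auditing such a system, as the question above suggests, means examining the consequences (the per-group rates it produces), not the internals of the arithmetic.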
When I wrote about Franklin Roosevelt's famous first 100 days I found out that none of the people in the Roosevelt White House knew squat about banking or Wall Street. But they had a broad principle that they applied. As Roosevelt said, “To the ancient dictum of caveat emptor — ‘let the buyer beware’— we are adding a new one for all time, ‘let the seller beware.’” And that's really all he needed to do to regulate. He established the SEC and put Joe Kennedy — who knew all of Wall Street’s scams — in charge. So you don't need to know the details and the engineering to regulate.
The other thing about AI regulation, or what I think of as algorithm reform, is that in one essential respect it is unlike health and safety regulation, unlike the FDA, which I think should otherwise be the model for the new government agency. If you make a mistake in approving a drug, people can die from the side effects. But if you make a mistake in regulating tech, it can be easily fixed in the second iteration — the 2.0 regulation — so that the downside of hasty regulation is minimal.
So, Roger, can you go down one level from your principles of safety, privacy and competition to build support for something more modest but still critical — what Ezra Klein calls “legibility” and others have just called transparency? On that front, shouldn’t we start, as AI pioneer Gary Marcus said recently in The New York Times Magazine, with requiring that all AI-generated content be watermarked? It’s easier with images but eventually couldn’t we learn what percentage of a document was written by a computer?
And instead of just studying the problem more, the usual Washington response, why not use today’s bipartisan support for regulating tech to strike while the iron’s hot with a bill this year?
I agree with the way you framed the problem, and we need an agency a level above the FDA that makes demonstrated safety a condition of market access, and that applies this retroactively as well as proactively. Let's just say we waved a magic wand and had real regulation right now. There is not one technology product in the market today that touches consumers that would be allowed to continue in business without major modification.
It starts with executive action like what we’re hearing from Lina Khan [chair of the Federal Trade Commission]. On artificial intelligence, it's not a six month moratorium. It’s requiring these systems to train on highly valid data, which means you have to spend a lot more money creating it. Silicon Valley is going to scream but we should say: You played a huge role in undermining democracy, in undermining public health and in killing our kids [teen suicides connected to social media have skyrocketed]. So why should we pay any attention to you at all?
In 2021, The Wall Street Journal published “The Facebook Files,” 10,000 pages of internal Facebook studies disclosed by former Facebook product manager Frances Haugen, that showed that every time Facebook was criticized, they did a study to see whether it was true or not, and every time it was worse than everybody imagined. One of the core things here relates to children. There's a clear connection between the design of Instagram and psychological problems among teenagers, particularly teenage girls. When Instagram was created, it was the first photo sharing app for the iPhone. And the iPhone’s original camera wasn't very good. So they put in filters that made things look better than they look in real life. If you use a filter on somebody who already looks better than most people do, they're going to look like a god. Putting in filters made it possible to monetize Instagram because obviously a lot of advertising is about making people jealous. Instagram was designed to create jealousy in teenage girls.
Now, are there any studies that suggest that that might lead to psychological harm? Well, it turns out there's about 500 feet of studies that show exactly that. And Facebook did some of them internally that show, without a doubt, that Instagram [owned by Facebook] was doing this. It's in the original design, right? This is like the AI thing. You don't need to look at the details of how the product works. Frances Haugen just looked at how it was designed. She goes in front of Congress and what did Facebook do? A week later, they rushed to change the name of the company to Meta, and gave a demonstration of the “metaverse.” Policymakers and journalists pivoted 180 degrees and [hardly anyone] has ever heard of Frances Haugen since. She provided documentary evidence of Facebook committing felonies that Mark Zuckerberg could go to prison for but no one in law enforcement has followed up.
At The Council for Responsible Social Media, we've been working on regulation for a year and we interact with members of Congress and policymakers from both parties. There's good news and bad news. The good news is that members in both parties know a lot more about this than in the past. They’re more worried about it by far than they were a year ago. The bad news is that the Republicans generally don't want government anywhere near this solution. So that really worries me. But having said that, we had a great hearing at the Senate Judiciary Committee [this spring] with [Senate Majority Whip] Dick Durbin. All the members sat through the whole thing, which is extremely rare, and we had testimony from two members of our council who are mothers of kids who killed themselves because of being bullied on Facebook and Instagram [and other social media]. And at the end of the hearing, we got some encouraging indications of bipartisan progress underway. Now, we’ve all got to be from Missouri [the “Show Me State”] when we hear these kinds of things. I'm skeptical that Republicans would ever be for anything that involves the government to that extent, but that's my report.
I’m curious about why Jonathan Alter thinks that there is strong bipartisan support for anything directed at Silicon Valley. Where is the incentive for the Republican Party to do anything that involves government regulation or government telling anybody what to do? There’s a problem with Democrats, too. Every political consultant tells every candidate, “Do not regulate Big Tech. You need them to raise all your money.”
I just want to explain why I don't think I'm being Pollyannish about this now. Let's stipulate all of what you said. Yes, it’s still a long shot. But if you look at the history of change in America, and in other countries, it doesn't happen, doesn't happen, doesn't happen — then it happens all at once. Ron DeSantis actually wants to use government to destroy Disney. He and almost all of the Republicans are not bound by any philosophical constraints the way they were even just a few years ago. They're totally about power and if they think they can win elections by beating up on Silicon Valley, they will do it in a heartbeat. They are already starting to do it. You can hear it in their rhetoric on the campaign trail. So sometimes we operate under old assumptions about the opposition.
As for Democrats being dependent on Silicon Valley money, it's true. And the bipartisan antitrust bill co-sponsored by Sens. Amy Klobuchar and Chuck Grassley — aimed at Big Tech — actually hinders efforts at content moderation by opening tech companies to more lawsuits from right-wingers. But because the industry has been saying “regulate us,” there’s an opportunity, as Sam Altman, CEO of OpenAI, indicated in congressional testimony this week. These people realize they’ve got to get on the side of public opinion. So we have a moment here. The public is scared about robots running everything, which gives us a chance to create the sense of urgency needed for legislation. Maybe there’s a present-day Dick Gephardt who can get in there and get the job done. I don't think it's unrealistic to think that's possible. And Democratic defeatism never got anybody anywhere.
Yep. Just so we understand the dimensions of this: On Facebook, there are three billion users globally. There are no guardrails, no safety nets, no policing of a business model based on making people angry or afraid. Not to mention ripping them off. When you have complete data, and you can give it to advertisers, who are the people in that environment who are going to be most attracted to it? The answer is, scammers and criminals.
The case for regulation is overwhelming. Thanks, Roger.