That FCC lawyer was Kevin Werbach, now a Wharton professor of legal studies and business ethics. His paper, “Digital Tornado: The Internet and Telecommunications Policy,” predicted that the internet’s feedback loop — what is produced by online activities would inform the internet continuously and create new activities — was an “endless spiral of connectivity” like the vortex of a tornado that would keep fueling its growth. Wired magazine called it a “seminal FCC working paper designed to help frame future debates on Internet policy in a pro-competitive context.”
Werbach argued that the FCC’s traditional regulations for telecom and broadcasting shouldn’t apply to the internet, which can substitute for all of those media. At the time, Congress had just passed the most sweeping revisions to communications law in more than 60 years. The paper urged the government and industry incumbents to largely keep their hands off the internet so as not to stunt this growth. “The bright future I see could be 20 to 30 years away,” Werbach told Wired in 1997, “but ultimately, it’s unstoppable.”
Today, dial-up internet access and flip phones seem quaint, even antiquated. Social media has emerged as an internet innovation that is shaping popular opinion. Broadband speeds have risen by nearly 35,000 times, from 28.8 kilobits per second to 1 gigabit per second — or faster. Self-driving cars are being tested on the roads; virtual and augmented reality are changing media and gaming. Everyday appliances are connecting to the internet. Cryptocurrencies and the blockchain are upending financial institutions and processes; artificial intelligence and machine learning are reshaping society.
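The "nearly 35,000 times" figure is simple arithmetic, sketched here in Python for the curious (using decimal units, where 1 gigabit = 1,000,000 kilobits):

```python
# Dial-up modem speed vs. gigabit broadband, both in kilobits per second.
dialup_kbps = 28.8
gigabit_kbps = 1_000_000  # 1 Gbps in decimal units

multiplier = gigabit_kbps / dialup_kbps
print(round(multiplier))  # → 34722, i.e. "nearly 35,000 times"
```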
Over the last 20 years, the internet generally has been lightly regulated, whether the president was a Republican or a Democrat. But despite all the advances, was it the right approach given the current spate of headaches that includes fake news, a surge in the sophistication of cybercrimes, digital domination by a handful of tech giants, burgeoning privacy issues, bias in algorithms and projected massive job losses from artificial intelligence? Or did regulators strike the right balance as a whole, given the internet’s benefits in everything from health care access to mobile payments?
Just Right or Not Enough Regulation?
“In many ways, we got it right,” said Werbach at the recently held “After the Digital Tornado” conference, which examined the impact of the last 20 years of internet public policy. “There were these innovative, small, new companies and new techniques that were developing that needed to be protected both against government power as well as private power,” he said. But Werbach also acknowledged that “in some ways we were too naïve in not anticipating the new kind of power that those new companies, platforms and new techniques could create over time.”
“In many ways, we got it right.”–Kevin Werbach
A lighter regulatory touch was thought to be adequate back then, since the internet was going to give ordinary people a voice and the market was decentralized, said Gigi Sohn, former senior staff member at the FCC and president of Public Knowledge, a nonprofit public interest group. “There was a hope that the government can have a more laissez faire attitude towards communications, perhaps for the first time,” she said. But then deregulation did not lead to more competition, with the number of internet service providers falling from a high of 7,000 to a handful today. “That’s really what we got wrong. Competition is not a given,” Sohn said. Moreover, “too much of a hands-off attitude” has led to the rise of tech behemoths and “putting that genie back into the bottle is going to be very, very difficult.”
Globally, the view of the internet also has changed in striking ways, said Sally Wentworth, an internet policy expert who has worked at the White House and the State Department. Twenty years ago, the U.S. tried to convince the rest of the world of the internet’s potential. Today, as other countries recognize the economic benefits of the internet even though they may still want to control it, the West has turned more cautious after seeing how the internet has affected democratic processes. “It’s really remarkable, from an international level, in some ways, how much the discussion has changed,” said Wentworth, now vice president of global policy development at the nonprofit Internet Society.
Such handwringing, however, should not minimize the many benefits that internet innovations have brought to society such as the power of networks. “Platforms are really good things — they unlock a lot of value and innovation,” said Christopher Yoo, professor of law, communication, computer and information science at Penn. Also, collecting data has led to a lot of services people value highly, such as product recommendations or better search results. “How do we get to the middle ground and not throw the baby out with the bathwater?” he asked.
Werbach also pointed to an innovation that could be as revolutionary as the internet: blockchain, which is a distributed ledger underpinning cryptocurrencies, smart contracts and other applications that is not governed by a central authority such as a government or a bank. “Blockchain has a similar kind of potential. We don’t know if it will reach a similar level of influence, but it’s similar in that it is a foundational, open technology not just for connecting systems for messaging but for connecting value.”
Data-driven Innovation
The internet age has brought about a new regulatory challenge, said Viktor Mayer-Schonberger, professor of internet governance and regulation at the Oxford Internet Institute. Regulators know how to deal with a company that has the scale and scope to dominate a market, such as a major automaker. But in the digital age, network effects — the more people use the network, the more valuable it is — also can lead to market concentration.
That’s the case with Google, for example. As Google gathers more data from its users, its search results get better, which attracts more users, and so on. This effectively makes it tough for rivals to challenge Google. “The very large players who have access to a lot of data and have the capacity to use that data to learn are getting better and better,” Mayer-Schonberger said.
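The feedback loop described above can be caricatured in a few lines of Python. The growth formula and the numbers below are invented purely for illustration; the point is the self-reinforcing structure — more users produce more data, which improves quality, which in turn attracts more users:

```python
def simulate_feedback(steps=5, users=1_000, data=0.0):
    """Toy model of a data-driven network effect.

    Each round: users generate data; accumulated data raises result
    quality (with diminishing returns); better quality attracts more
    users. All coefficients are illustrative, not empirical.
    """
    history = []
    for _ in range(steps):
        data += users                       # every user contributes data points
        quality = data ** 0.5               # diminishing returns on raw data
        # user growth saturates as quality grows large
        users = int(users * (1 + quality / (quality + 10_000)))
        history.append((users, int(data)))
    return history

for users, data in simulate_feedback():
    print(f"users={users:,}  data points={data:,}")
```

Both columns grow every round, and each feeds the other — which is why a head start in data is so hard for rivals to overcome.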
“How do we get to the middle ground and not throw the baby out with the bathwater?”–Christopher Yoo
What can break up this market concentration? Historically, new ideas or innovations can disrupt incumbents. “That’s the idea of creative destruction,” Mayer-Schonberger said. “We have creative destruction to counter market concentration.” Typically, human ingenuity — and a patent system that protects such ideas — lead to disruptive innovation. But “what if human ingenuity isn’t the source of innovation anymore?” he said. Today, machines are learning from data to spark innovation — for example, educational systems that use feedback data to improve outcomes for kids. “This is actually happening,” Mayer-Schonberger added.
Data has become the currency leading to market concentration and machines are innovating using this data. So perhaps one way to fight this market power would be to make monopolists share this data with competitors, said Mayer-Schonberger. Regulators would require companies above a certain size to share slices of their data with anyone, and the larger the firm the more it has to share. “We force larger companies with bigger market share to have to share a bigger slice of their data than smaller companies,” he said. “You have to share it with everyone and anyone in the industry that wants it.” The slices of data would be random — this is key because the data should be heterogeneous.
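Mayer-Schonberger's proposal can be sketched as a simple rule. The size threshold and the linear schedule below are assumptions made for illustration — he describes only the principle that larger firms share larger random slices:

```python
import random

def mandated_share_fraction(market_share, threshold=0.10):
    """Fraction of a firm's data it must share: nothing below an assumed
    10% size threshold, then scaling with market share (an assumed
    linear schedule)."""
    return 0.0 if market_share < threshold else market_share

def draw_random_slice(records, market_share, seed=0):
    """Sample a random -- and therefore heterogeneous -- slice of the
    firm's records, available to anyone in the industry."""
    k = int(len(records) * mandated_share_fraction(market_share))
    return random.Random(seed).sample(records, k)

records = list(range(1_000))                 # stand-in for a firm's data records
shared = draw_random_slice(records, market_share=0.40)
print(len(shared))                           # 400 of 1,000 records shared
```

A small firm with a 5% share would owe nothing; a dominant firm with a 40% share would have to open up 40% of its records, drawn at random so competitors get a representative, heterogeneous sample.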
Innovation seems to be a fickle process that comes in cycles, added Tim Wu, a professor at Columbia Law School who is best known for creating the concept of network neutrality. He pointed out that every 20 to 30 years, a “deck-cleaning” innovation comes along, whether it is the personal computer or the radio, that disrupts incumbents. This leads to an “open age” where many startups get into the market. The next stage is an “early domination golden age” where a few companies come out on top.
However, stagnation sets in as these newly dominant companies care more about protecting their position than sparking true innovation, Wu said. One way to fight digital monopolists is to assume that they slow down innovation even without direct evidence that they are quashing rivals. He cited news reports of startups avoiding businesses that Facebook is in because they don’t want to compete with the social network giant. “This led to the idea of breakups without [mis]conduct,” Wu said.
Herbert Hovenkamp, a Wharton professor of legal studies and business ethics and a law professor at the University of Pennsylvania, said one way to keep digital markets competitive is to look less at market share and more at product differentiation. Market share moves faster in the tech world — where eyeballs can quickly switch from one website or mobile app to another — than in an industrial sector like automobile manufacturing. “Market shares are much more elastic in digital markets where sales can triple and quintuple in a matter of months if the demand is there,” he said.
“Market shares are much more elastic in digital markets where sales can triple and quintuple in a matter of months.”–Herbert Hovenkamp
But product differentiation can bring about more competition, Hovenkamp argued. For example, Match.com may be the largest dating app with the most members, but it hasn’t signed up all single people. Some folks prefer to date people of a certain ethnicity, so they use other dating apps. These niche apps do add competition to the dating market because not everyone wants the same thing. “Different dating sites offer different sets of features and appeal to different groups,” he said.
Can Algorithms Be Fair?
Algorithms, or the pieces of code that direct machines to do smart things, are supposed to make processes more efficient. But given that an algorithm takes in data and constantly trains itself, one result could be unfair stereotyping. For example, if an algorithm sees that loan applicants below a certain income level from a particular ethnic group tend to have a higher rate of defaults, it can train itself to weed out people with that profile. However, “you don’t find out whether people you didn’t give a loan to would have repaid you or not if you gave them the loan,” said Michael Kearns, professor of computer and information science at Penn.
One possible solution: Design the algorithm as a zero-sum game between two internal players, Kearns said. One is a learner; it takes the data and learns from it to make decisions, such as whether to approve a loan. The other is a regulator, which audits the decision-making to see if there is a subgroup that is being discriminated against. “This is all inside the code,” he said. “The equilibrium of this game turns out to be whatever model is maximally accurate from a predictive sense while meeting the constraints that it’s fair on all these subgroups.”
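Kearns's actual framework computes a game-theoretic equilibrium over rich classes of subgroups; the Python sketch below is only a toy illustration of the learner-versus-auditor loop he describes, using invented applicant data, a one-feature threshold model, and a simple reweighting step:

```python
def train(data, weights):
    """Learner: pick the approval threshold minimizing weighted error.
    Each record is (credit score, truly repays, group)."""
    thresholds = sorted({score for score, _, _ in data})
    def weighted_error(t):
        return sum(w for (score, repays, _), w in zip(data, weights)
                   if (score >= t) != repays)
    return min(thresholds, key=weighted_error)

def audit(data, threshold):
    """Auditor: among applicants who would truly repay, compare
    rejection (false-negative) rates across groups; return the
    worst-off group and the gap between worst and best rates."""
    groups = sorted({g for _, _, g in data})
    def fnr(g):
        repayers = [s for s, repays, grp in data if repays and grp == g]
        return sum(s < threshold for s in repayers) / len(repayers)
    rates = {g: fnr(g) for g in groups}
    worst = max(rates, key=rates.get)
    return worst, rates[worst] - min(rates.values())

# Toy applicants: (credit score, truly repays, group)
data = [(0.9, True, "A"), (0.8, True, "A"), (0.3, False, "A"),
        (0.7, True, "B"), (0.4, True, "B"), (0.2, False, "B")]

weights = [1.0] * len(data)
for _ in range(5):                        # a few learner/auditor rounds
    t = train(data, weights)
    group, gap = audit(data, t)
    if gap == 0:                          # no group is worse off: stop
        break
    # the internal "regulator" pushes back: count errors on the
    # disadvantaged group's applicants twice as heavily next round
    weights = [w * 2 if g == group else w
               for (_, _, g), w in zip(data, weights)]
```

The loop ends when the most accurate threshold the learner can find also passes the auditor's subgroup check — a crude stand-in for the equilibrium Kearns describes, where the model is maximally accurate subject to being fair on all subgroups.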
Algorithms may be driving a lot of decisions, but human choices also are a determining factor, said Kartik Hosanagar, Wharton professor of technology and digital business as well as marketing. “A lot of attention has been focused on the algorithm. But I’m also going to point out that it’s not just the data and algorithm. It’s the human being as well. … It’s how they come together.” For example, algorithms let people personalize their Facebook experience to only include like-minded friends. “These echo chambers can be particularly problematic when it comes to news and certain kinds of media that forms our opinions and determines how we engage in social discourse in a democracy.”
“Regulation is coming whether Silicon Valley likes it or not.”–Bruce Schneier
Cybersecurity has gotten more difficult to implement, too. The computerization of everyday devices — fridges, microwaves, washers and dryers, among others — means there are more avenues for hackers. And the results can be dangerous. While hacking of personal computers in the past meant there could be a loss of privacy, confidentiality, integrity and availability, when a car is hacked there is a real risk to life and property, said Bruce Schneier, a renowned security expert and Harvard fellow.
“It’s the difference between someone crashing your spreadsheet on your computer and you lose the data and someone crashing your heart defibrillator and you lose your life,” Schneier said. “It’s the same CPU, same software, same attack code … with wildly different consequences. That’s the world we’re moving into, kind of without paying too much attention.” As long as a connected DVR works, for instance, people may not care if it could be used as an attack bot.
Here’s another risk. Computer companies fix software glitches with updates, which means they maintain dedicated teams to patch the software and push updates out to users, Schneier said. Makers of home appliances typically have no such team. Usually, they hire external developers to write the software, and those developers leave after the project is over. The software then gets no ongoing updates to patch it against viruses.
Another danger is that a software glitch makes every affected computer malfunction at once, which is especially dangerous when fleets of cars are involved. “Computerized cars will all work great until none of them do. Failure happens to everybody at once,” Schneier said. He believes the solution to increased cybersecurity risks is more regulation, whether techies are ready or not. “What I see coming is government regulation and I see it coming in a big way,” he said. “Regulation is coming whether Silicon Valley likes it or not. Being Libertarian is fine when it doesn’t matter. When your stuff kills people, it does matter. The stakes become too high.”